Wednesday, August 20, 2014

Orchestrate Provider for Project Orleans

Previously we discussed Orleans, Microsoft's new middle-tier, actor-model framework. Today I want to discuss creating a custom persistence provider using Orchestrate.io as the storage mechanism.

The Orleans Persistence Model

Persistence in Orleans is a simple declarative model: you identify the data to be saved to permanent storage via convention, and the programmer controls when and where the data is stored. Using this model is not required, however; you can roll your own.

How It Works

You declare what data needs to be saved using the IGrainState interface, and you pass this interface into GrainBase when creating your grain class. You will also need to add a reference to the provider in the host project and set the provider type in the server configuration XML. Once this is done, the framework will attempt to load the grain's state from permanent storage on activation. Saving is up to the developer and is done with a simple call to the provider's WriteStateAsync() method.
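As a hedged sketch of the shape this takes (the IEmployee interface, IEmployeeState, and the provider name are all illustrative names I am inventing, based on the early Orleans preview API described above):

```csharp
// Illustrative only: names and the attribute value are assumptions,
// not part of the sample project. Early Orleans preview API.
public interface IEmployee : IGrain
{
    Task SetName(string name);
}

public interface IEmployeeState : IGrainState
{
    string Name { get; set; }
}

[StorageProvider(ProviderName = "OrchestrateStore")]
public class EmployeeGrain : GrainBase<IEmployeeState>, IEmployee
{
    public Task SetName(string name)
    {
        State.Name = name;              // state was loaded on activation
        return State.WriteStateAsync(); // saving is an explicit call
    }
}
```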

The Provider Interface

Orleans provides an IStorageProvider interface that we must implement if we are going to create an Orchestrate provider. Fortunately, it is fairly simple; here it is:

class OrchestrateProvider : IStorageProvider
{
    #region IStorageProvider Members

    public Task ClearStateAsync(string grainType, Orleans.GrainReference grainReference, Orleans.GrainState grainState)
    {
        throw new NotImplementedException();
    }

    public Task Close()
    {
        throw new NotImplementedException();
    }

    public Orleans.OrleansLogger Log
    {
        get { throw new NotImplementedException(); }
    }

    public Task ReadStateAsync(string grainType, Orleans.GrainReference grainReference, Orleans.IGrainState grainState)
    {
        throw new NotImplementedException();
    }

    public Task WriteStateAsync(string grainType, Orleans.GrainReference grainReference, Orleans.IGrainState grainState)
    {
        throw new NotImplementedException();
    }

    #endregion

    #region IOrleansProvider Members

    public Task Init(string name, Orleans.Providers.IProviderRuntime providerRuntime, Orleans.Providers.IProviderConfiguration config)
    {
        throw new NotImplementedException();
    }

    public string Name
    {
        get { throw new NotImplementedException(); }
    }

    #endregion
}

Orchestrate Provider

Let's start with the IOrleansProvider members. These are the bits that set up the storage mechanism on first use. The Name property can be satisfied with a private string backing field and a public getter; I will leave that to you. The Init task is a bit more interesting: here we need to instantiate an Orchestrate.NET instance and configure it with our API key.

In the Init method we set the Name property, get our API key, and instantiate our Orchestrate.NET instance. The config parameter exposes a dictionary of all of the items declared in the server configuration file.

public Task Init(string name, IProviderRuntime providerRuntime, IProviderConfiguration config)
{
    Name = name;

    if (string.IsNullOrWhiteSpace(config.Properties["APIKey"])) 
        throw new ArgumentException("APIKey property not set");

    var apiKey = config.Properties["APIKey"];

    _orchestrate = new Orchestrate.Net.Orchestrate(apiKey);

    return TaskDone.Done;
}

Next let's set up ReadStateAsync. Now we have a decision to make: what will be the name of our Orchestrate collection, and what will be the item's key? We will use the grain state's type name for the name of our collection and the grain's key for our key. When we read items out of Orchestrate we get back a JSON string, so the next step is to deserialize it back into the grain state type and cast that to IGrainState. Then we can call the grain state's SetAll() method and we are set. Let's see the code:

public async Task ReadStateAsync(string grainType, GrainReference grainReference, IGrainState grainState)
{
    var collectionName = grainState.GetType().Name;
    var key = grainReference.ToKeyString();

    var found = true;

    try
    {
        var results = await _orchestrate.GetAsync(collectionName, key);

        var dict = ((IGrainState)JsonConvert.DeserializeObject(results.Value.ToString(), grainState.GetType())).AsDictionary();
        grainState.SetAll(dict);
    }
    catch (Exception)
    {
        found = false;
        Console.WriteLine("==> No record found in {0} collection for id {1}\r\n", collectionName, key);
    }

    // C# 5 does not allow await inside a catch block, so write outside it.
    if (!found)
        await WriteStateAsync(grainType, grainReference, grainState);
}

If an exception is thrown here, it means the item does not exist in our collection. Because grains in Orleans conceptually always exist, we go ahead and write the item to our collection.
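To make the read path concrete, here is a self-contained round trip of the same idea: deserialize the stored JSON back to the concrete state type, then flatten it to a name/value dictionary the way AsDictionary()/SetAll() shuttle values. System.Text.Json stands in for the Newtonsoft JsonConvert call above, and EmployeeState is a made-up state type:

```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;

// A stand-in for a grain state class; the real one would implement IGrainState.
public class EmployeeState
{
    public string Name { get; set; }
    public int Level { get; set; }
}

public static class StateRoundTrip
{
    // Deserialize the stored JSON to the state type, then flatten the
    // public properties into a dictionary, mimicking AsDictionary().
    public static IDictionary<string, object> ReadBack(string storedJson)
    {
        var state = JsonSerializer.Deserialize<EmployeeState>(storedJson);

        var dict = new Dictionary<string, object>();
        foreach (var prop in typeof(EmployeeState).GetProperties())
            dict[prop.Name] = prop.GetValue(state);

        return dict;
    }
}
```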

Speaking of writing, let's go ahead and look at the WriteStateAsync code:

public async Task WriteStateAsync(string grainType, GrainReference grainReference, IGrainState grainState)
{
    var collectionName = grainState.GetType().Name;
    var key = grainReference.ToKeyString();

    try
    {
        await _orchestrate.PutAsync(collectionName, key, grainState);
    }
    catch (Exception)
    {
        Console.WriteLine("==> Write failed in {0} collection for id {1}", collectionName, key);
    }
}

And the ClearStateAsync:

public async Task ClearStateAsync(string grainType, GrainReference grainReference, GrainState grainState)
{
    var collectionName = grainState.GetType().Name;
    var key = grainReference.ToKeyString();

    await _orchestrate.DeleteAsync(collectionName, key, false);
}

The last method is Close; with the Orchestrate.NET provider we can simply set our instance to null and be done. I will leave that code to you.
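For completeness, a sketch of those two leftover members might look like this (assuming the same TaskDone helper used in Init above):

```csharp
// Sketch only: Name is set in Init; Close just drops the client instance.
public string Name { get; private set; }

public Task Close()
{
    _orchestrate = null;
    return TaskDone.Done;
}
```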

Wrap Up

And that is all there is to it. Check out the source code and sample project on GitHub. I believe Orleans will have a place in your toolbox, and Orchestrate makes for a powerful persistence mechanism to pair with it.

If you are going to download and run the code yourself, make sure you create an app in Orchestrate, create ManagerState and EmployeeState collections, then grab your API key and put it at the appropriate place in the DevTestServerConfiguration.xml file.


Thursday, August 14, 2014

Project "Orleans" and Orchestrate.NET

Cloud applications are by default distributed and require parallel processing. Modern application users demand near real-time interaction and responses. Project "Orleans" is Microsoft's new framework for creating cloud (Azure) based distributed applications that meet these requirements. Built on established .NET code and practices, it brings the actor model to your toolbox.

Orleans

Orleans is a new middle-tier framework that allows you to "cache" your business objects and their data. It accomplishes this with "grains". A grain is a single-threaded, encapsulated, lightweight object that can communicate with other grains via asynchronous message passing. Grains are hosted in "silos", typically one silo per server. Each silo in an application knows of the other silos and can pass grains and messages between them. The two main goals of Orleans are developer productivity and scalability by default.

Developer Productivity

Productivity is achieved by providing a familiar environment for development. Grains are .NET objects with declared interfaces, which allows them to appear as simple remote objects that can be interacted with directly. Grains are also guaranteed to be single-threaded, so the programmer never has to deal with locks or other synchronization methods to control access to shared resources. Grains are activated as needed, and if not in use can be garbage collected transparently. This makes grains behave as if they are in a cache, "paged in" and "paged out" as required. The location of grains is transparent to the developer as well: programmers need never be concerned about which silo a grain is in, as all messaging is handled by the framework.

Scalability

Fine-grain grains? Orleans makes it easy to break middle-tier objects into small units. It can handle large numbers of actors with ease, millions or more, which allows Orleans to control which grains are active and where. Heavy load on a specific set of grains is balanced automatically by the Orleans framework. Grains have logical endpoints, with messaging multiplexed across a set of all-to-all physical TCP connections; this allows a large number of addressable grains with low OS overhead. The Orleans runtime can schedule a large number of grains across a custom thread pool, allowing the framework to run at a high CPU utilization rate with high stability. The nature of messaging between actors lets programmers write non-blocking asynchronous code, allowing for a higher degree of parallelism and throughput without using multi-threading in the grains themselves.

Orchestrate.NET

Where does persistence come in? Orleans allows programmers to persist grain state via an integrated store. The grains synchronize updates and guarantee that callers receive results only after the state has been successfully saved. This system is easily customized and extended, and we will do so, using Orchestrate.NET as the storage provider.

But not until the next post... Meantime you can read up on Orleans, Orleans @ build, Orchestrate and Orchestrate.NET.

Tuesday, May 6, 2014

Introducing Orchestrate.NET

I have had the pleasure of working on a new open source project called Orchestrate.NET. This project provides a wrapper for Orchestrate, the powerful new Database as a Service that allows you to combine multiple databases into a single service and query it with a simple API.

Once registered and signed in to the Orchestrate.io site, you can create applications through the console. Each application comes with its own API key. Using this key and the Orchestrate.NET library, it is easy to create collections, add/update/delete items, and search using Lucene query strings, as well as add events and object graphs.

With Orchestrate you pay by the number of data transactions, not by the size of your data set. With the first million transactions per month free, it's a great option both for startups without much traffic yet and for enterprises with very large data sets. Orchestrate takes care of security, backups and monitoring for you, leaving you free to concentrate on coding.

To add a reference to Orchestrate.NET to your project use the package manager:
PM> Install-Package Orchestrate.NET
or go to the repository on GitHub and get the code for yourself.

Once installed to create a collection use the following code (you have to insert an item to create a collection):
var orchestration = new Orchestrate("YOUR API KEY GOES HERE");
var result = orchestration.CreateCollection(<collectionname>, <itemid>, <item>);

To insert a new item (or update an existing one) you would do something like this:
var result = orchestration.Put(<collectionname>, <itemid>, <item>);

To retrieve an item by its Id:
var result = orchestration.Get(<collectionname>, <itemid>);
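Putting those calls together, a hypothetical end-to-end session might look like this (the "employees" collection, the key, and the anonymous-object item are all made up for illustration):

```csharp
// Hypothetical session; collection name, key and item are illustrative.
var orchestration = new Orchestrate("YOUR API KEY GOES HERE");

// The first insert creates the collection as a side effect.
orchestration.CreateCollection("employees", "emp-001", new { Name = "Ada" });

// Update the same item, then read it back by its key.
orchestration.Put("employees", "emp-001", new { Name = "Ada", Level = 3 });
var result = orchestration.Get("employees", "emp-001");
```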

Please check out the documentation on GitHub and at Orchestrate.io for a more in-depth look at the capabilities available to you. Also, please provide feedback, ideas for new features, and pull requests as you can. We want to make this the best library possible.

I will be posting some more in depth looks at Orchestrate and how to use Orchestrate.NET to get the most out of it in your applications.

Tuesday, July 31, 2012

Wait Cursors

Ah, cursor management, the bane of desktop programmers everywhere. Truth be told, .NET makes it quite easy to swap between cursor styles; the most common use is to alert the user that the application is waiting on a long-running task. Usually you see code like this:

private void cmdSave_Click(object sender, EventArgs e)
    {
        Cursor.Current = Cursors.WaitCursor;
            
        // Do Stuff Here...

        Cursor.Current = Cursors.Default;
    }

Very straightforward, but wait! What if we want to have multiple exit points in our logic? Then we end up with something like this:

private void cmdSave_Click(object sender, EventArgs e)
    {
        Cursor.Current = Cursors.WaitCursor;
            
        // Do Stuff Here...

        if (done == true)
        {
            Cursor.Current = Cursors.Default;
            return;
        }
            
        // Do Stuff Here...

        if (done == true)
        {
            Cursor.Current = Cursors.Default;
            return;
        }

        Cursor.Current = Cursors.Default;
    }

This is obviously prone to error. If, for example, we forget to add an assignment to set the cursor back prior to exiting, or we hit an error and forget to restore the default cursor in a try/catch block, we can end up with the never-ending wait cursor.

But Rob, you ask, what can be done? Never fear I am here with your very own WaitCursor class that will automagically fix these issues for you. Here is the code:

public class WaitCursor : IDisposable
    {
        private readonly Cursor _originalCursor = Cursors.Default;

        public WaitCursor()
        {
            _originalCursor = Cursor.Current;
            Cursor.Current = Cursors.WaitCursor;
        }

        public WaitCursor(Cursor cursor)
        {
            _originalCursor = Cursor.Current;
            Cursor.Current = cursor;
        }

        #region IDisposable Members

        private bool _disposed;

        protected virtual void Dispose(bool disposing)
        {
            if (_disposed)
                return;
            
            if (disposing)
                Cursor.Current = _originalCursor;

            _disposed = true;
        }

        public void Dispose()
        {
            Dispose(true);
            GC.SuppressFinalize(this);
        }

        #endregion
    }


As you can see when instantiated, we grab the current cursor state, then set the current cursor to either the wait cursor, or whatever cursor we have passed in. When the class gets disposed, we set the cursor back to the original cursor. This allows us to write code like this:

private void cmdSave_Click(object sender, EventArgs e)
    {
        using (new WaitCursor())
        {
            // Do Stuff Here...

            if (done == true)
                return;
            
            // Do Stuff Here...

            if (done == true)
                return;
        }
    }
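The same dispose-to-restore idea is not specific to cursors. As a small, console-testable sketch (ValueGuard is a name I am inventing here), the pattern generalizes to any value you want temporarily swapped and reliably restored:

```csharp
using System;

// ValueGuard captures a value on entry, applies a temporary one, and
// restores the original on Dispose, no matter how the using block exits.
public class ValueGuard<T> : IDisposable
{
    private readonly T _original;
    private readonly Action<T> _apply;

    public ValueGuard(T original, T temporary, Action<T> apply)
    {
        _original = original;
        _apply = apply;
        _apply(temporary);
    }

    public void Dispose()
    {
        _apply(_original);
    }
}
```

Used just like WaitCursor: `using (new ValueGuard<string>(status, "Busy", v => status = v)) { ... }` — early returns inside the block still restore the original value.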

You are welcome.

Thursday, July 19, 2012

Internet Defense League

Code Refugee is proud to be a member of The Internet Defense League. Ever since the US Government tried to foist the tragically draconian SOPA/PIPA censorship laws on the world, a number of organizations (Reddit, Fark, Imgur, TechDirt) have come together to found an organization to raise awareness of threats to internet freedom. There can be no more critical fight for the future of our world than the one to keep our most important communication mechanism free from governmental interference. Today the league has officially launched and we are showing the "cat signal" in recognition.

Please take a few minutes to familiarize yourself with the league and its goals. I think you will find them worthy of your attention.

Wednesday, June 27, 2012

Submitting Fake Tracks

Now that we have generated track data and saved it to our cloud storage, we can stub out the form that will post the fake GPS data to our (not yet implemented) API.

The Goal

We want the user to be able to select a track, set a delay interval, and indicate whether or not to randomly vary that interval (for realism), then start the process of sending the data to our API. We also want to give the user some visual feedback on the data as it is being processed.

Track Faker

Here is a mock up of the track faker tab:


As you can see, we are capturing the device, track, delay and randomization flag at the top of the screen. There is also a Start/Cancel button at the top right. The bottom portion is a rolling log of events once the track has been started. We will make one small variation to this screen: the log will add events at the top instead of the bottom.

Multi-Threading and Tasks

This feature is a great example of a long-running task that needs to be broken out into its own thread to keep the application from becoming unresponsive. We will use the Task Parallel Library to kick off a background thread that will read each GPS point, build the proper API post, and update the UI with its progress. The task is simple to set up: we instantiate a cancellation token source and use its token when we create the task, like so:

_cancellationTokenSource = new CancellationTokenSource();
CancellationToken cancellationToken = _cancellationTokenSource.Token;

Task.Factory.StartNew(() =>
{
}, cancellationToken);

This allows the task to be cancelled with a simple call from anywhere outside the task:
_cancellationTokenSource.Cancel();
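Stripped of the UI plumbing, the cancellation handshake can be sketched as a small, runnable example (CancelDemo and the timings are made up for illustration):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class CancelDemo
{
    public static int Run()
    {
        var cts = new CancellationTokenSource();
        CancellationToken token = cts.Token;
        int iterations = 0;

        // Background task polls the token, just like the track faker loop.
        var task = Task.Factory.StartNew(() =>
        {
            while (!token.IsCancellationRequested)
            {
                iterations++;
                Thread.Sleep(10);
            }
        }, token);

        Thread.Sleep(100);   // let the loop run for a bit
        cts.Cancel();        // request cancellation from outside the task
        task.Wait();         // completes promptly once the token is observed

        return iterations;
    }
}
```

Because the loop exits by observing the token rather than throwing, the task ends in the RanToCompletion state and Wait() does not throw.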

Retrieving Files From Storage

So far we have gone through saving a file to blob storage; now let's have a look at retrieving one. We can have the file retrieved as a memory stream or as text. Since we are going to deserialize it into a class, we will use the memory stream.

public static MemoryStream RetrieveBlobStream(string containerName, string fileName)
        {
            try
            {
                var stream = new MemoryStream();
                var storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");

                CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
                CloudBlobContainer container = blobClient.GetContainerReference(containerName);

                CloudBlob blob = container.GetBlobReference(fileName);
                blob.DownloadToStream(stream);
                return stream;
            }
            catch (Exception)
            {
                //TODO: Log error here...
                return null;
            }
        }

GPX POCO Generation

Wait? What? Deserialize it into a class? What class, you say? That's easy: the gpx.cs class we created using the xsd.exe tool. Simply give it the proper parameters and the path to your XSD file and it will generate a class for you.
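For reference, the invocation is roughly the following; the schema file name and output namespace are assumptions for this project:

```shell
# Hypothetical invocation of the .NET SDK's xsd.exe schema-to-class tool.
# /classes generates C# classes; /namespace sets the generated namespace.
xsd.exe gpx.xsd /classes /namespace:iGOR.App.GPX
```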
//------------------------------------------------------------------------------
// 
//     This code was generated by a tool.
//     Runtime Version:4.0.30319.17379
//
//     Changes to this file may cause incorrect behavior and will be lost if
//     the code is regenerated.
// 
//------------------------------------------------------------------------------

// 
// This source code was auto-generated by xsd, Version=4.0.30319.17379.
// 
namespace iGOR.App.GPX
{
    /// 
    [System.CodeDom.Compiler.GeneratedCodeAttribute("xsd", "4.0.30319.17379")]
    [System.SerializableAttribute()]
    [System.Diagnostics.DebuggerStepThroughAttribute()]
    [System.ComponentModel.DesignerCategoryAttribute("code")]
    [System.Xml.Serialization.XmlTypeAttribute(Namespace="http://www.topografix.com/GPX/1/1")]
    [System.Xml.Serialization.XmlRootAttribute("gpx", Namespace="http://www.topografix.com/GPX/1/1", IsNullable=false)]
    public partial class gpxType {
        
        private metadataType metadataField;
        
        private wptType[] wptField;
        
        private rteType[] rteField;
        
        private trkType[] trkField;
        
        private extensionsType extensionsField;
        
        private string versionField;
        
        private string creatorField;
        
        public gpxType() {
            this.versionField = "1.1";
        }
        
        /// 
        public metadataType metadata {
            get {
                return this.metadataField;
            }
            set {
                this.metadataField = value;
            }
        }
        
        /// 
        [System.Xml.Serialization.XmlElementAttribute("wpt")]
        public wptType[] wpt {
            get {
                return this.wptField;
            }
            set {
                this.wptField = value;
            }
        }
        
        /// 
        [System.Xml.Serialization.XmlElementAttribute("rte")]
        public rteType[] rte {
            get {
                return this.rteField;
            }
            set {
                this.rteField = value;
            }
        }

        //Snip a ton of code...
    }
}

Putting It All Together

Now that we have all of the pieces, let's look at the end result. Remember, this is a stub and not the complete solution for this tab yet. We still need to get our list of devices from somewhere and make the actual calls to our API.
private void cmdStart_Click(object sender, EventArgs e)
        {
            if (cboTracks.Text.Length == 0)
            {
                MessageBox.Show(
                    "Please select a demo track",
                    "Demo Track",
                    MessageBoxButtons.OK,
                    MessageBoxIcon.Error);

                return;
            }

            _cancellationTokenSource = new CancellationTokenSource();
            CancellationToken cancellationToken = _cancellationTokenSource.Token; 
            
            Task.Factory.StartNew(() =>
            {
                decimal lowerTimer;
                decimal upperTimer;

                cmdStart.Invoke((Action)delegate { cmdStart.Visible = false; });
                cmdCancel.Invoke((Action)delegate { cmdCancel.Visible = true; });

                txtLog.Invoke((Action)delegate { txtLog.Text = "Loading track " + cboTracks.Text + "..." + Environment.NewLine; });

                var timeString = "Interval between location submits will be ";

                if (chkRandomize.Checked)
                {
                    decimal randomizerPercentage = Properties.Settings.Default.IntervalRandomizerPercentage;
                    decimal variance = spnInterval.Value * (randomizerPercentage / 100);
                    lowerTimer = spnInterval.Value - variance;
                    upperTimer = spnInterval.Value + variance;

                    timeString += "between " + lowerTimer + " and " + upperTimer + " seconds." + Environment.NewLine;
                }
                else
                {
                    lowerTimer = upperTimer = spnInterval.Value;
                    timeString += "exactly " + lowerTimer + " seconds." + Environment.NewLine;
                }

                txtLog.Invoke((Action)delegate { txtLog.Text = timeString + txtLog.Text; });

                if (cancellationToken.IsCancellationRequested)
                {
                    // another thread decided to cancel
                    txtLog.Invoke((Action)delegate { txtLog.Text = "Simulation Cancelled..." + Environment.NewLine + txtLog.Text; });
                    return;
                } 

                var containerName = ConfigurationManager.AppSettings["DemoTrackContainer"];
                var stream = Blob.RetrieveBlobStream(containerName, cboTracks.Text);
                stream.Position = 0;

                var mySerializer = new XmlSerializer(typeof(GPX.gpxType));
                var track = (GPX.gpxType)mySerializer.Deserialize(stream);

                var rdm = new Random();
                int min = Convert.ToInt32(lowerTimer * 1000);
                int max = Convert.ToInt32(upperTimer * 1000);

                foreach (var point in track.rte[0].rtept)
                {
                    if (cancellationToken.IsCancellationRequested)
                    {
                        // another thread decided to cancel
                        txtLog.Invoke((Action)delegate { txtLog.Text = "Simulation Cancelled..." + Environment.NewLine + txtLog.Text; });
                        return;
                    }

                    int waitTime = rdm.Next(min, max);
                    txtLog.Invoke((Action)delegate { txtLog.Text = "Waiting " + waitTime + " milliseconds..." + Environment.NewLine + txtLog.Text; });
                    Thread.Sleep(waitTime);
                    txtLog.Invoke((Action)delegate { txtLog.Text = "Sending Lat: " + point.lat + ", Lon: " + point.lon + Environment.NewLine + txtLog.Text; });
                    // TODO: Actually send the data to the API...
                }

                txtLog.Invoke((Action)delegate { txtLog.Text = "Simulation complete..." + Environment.NewLine + txtLog.Text; });

                cmdStart.Invoke((Action)delegate { cmdStart.Visible = true; });
                cmdCancel.Invoke((Action)delegate { cmdCancel.Visible = false; });
            }, cancellationToken);
        }



Monday, June 4, 2012

Saving GPX Files to Azure Blob Storage

Recap

In the last post we gathered the track data from Google Maps, fed it into GPS Babel, and saved our GPX-formatted file to a local directory. Now we want to enable our users to put a name to the file and store it in our blob storage account for later use.

GPX Viewer

We need a way for the user to validate the output from the conversion process and specify a name for the track. We can fulfill both requirements with a single form:


A text box to display the file's GPX data, one for the file name, and a save button are all we need. Create a public property to hold the GPX data as a string and, in the form's Shown event, assign it to the large text box. One more detail: GPS Babel outputs the GPX in the GPX 1.0 format and we want GPX 1.1. A simple fix is to replace the xmlns value, like so:

private void Document_Shown(object sender, EventArgs e)
        {
            Doc = Doc.Replace("http://www.topografix.com/GPX/1/0", "http://www.topografix.com/GPX/1/1");

            editDocument.Text = Doc;
        }

Save That Track!

Once the user enters a file name and presses the Save button we have two tasks; first validate the file name and then save the file to our Azure storage account.

Blob file names follow the same rules as Windows file names, so we can use the built-in .NET function Path.GetInvalidFileNameChars for this:

private bool FileNameIsValid()
        {
            if (string.IsNullOrEmpty(txtFileName.Text))
            {
                MessageBox.Show(
                    "Please enter a valid file name for this track.",
                    "No File Name",
                    MessageBoxButtons.OK,
                    MessageBoxIcon.Error);

                return false;
            }

            if (txtFileName.Text.IndexOfAny(System.IO.Path.GetInvalidFileNameChars()) != -1)
            {
                MessageBox.Show(
                    "Please enter a valid file name for this track.",
                    "Invalid File Name",
                    MessageBoxButtons.OK,
                    MessageBoxIcon.Error);

                return false;
            } 

            return true;
        }

An Aside About Blobs

There are two kinds of blobs: block and page.

Block blobs allow a single blob to be broken up into smaller blocks. These blocks allow parallel upload/download thus allowing for better performance. They are limited to 200GB in size. Each block can be up to 4MB in size (allowing for 50,000 blocks). Each block must be uploaded and then the entire blob is committed into storage. That means uploading block blobs is a two-step process. You can upload the blob in a single operation when the block blob is less than 64MB.

A page blob is a collection of pages. A page blob can be up to 1 TB in size, and each page must be a multiple of 512 bytes. A page is a range of data identified by its offset from the start of the blob. Pages can be uploaded and accessed randomly. Unlike block blobs, writes to a page blob are committed immediately.
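Since every page write must land on a 512-byte boundary, a payload has to be padded up to the next multiple of 512 before it can be written. A small helper makes the arithmetic concrete (PageBlobSizing is my name, not part of the storage SDK):

```csharp
using System;

public static class PageBlobSizing
{
    public const int PageSize = 512;

    // Round a byte count up to the next 512-byte page boundary.
    public static long AlignToPage(long byteCount)
    {
        if (byteCount < 0)
            throw new ArgumentOutOfRangeException("byteCount");

        return ((byteCount + PageSize - 1) / PageSize) * PageSize;
    }
}
```

This is also why the CreatePageBlob helper later in this post takes an explicit size parameter: the blob must be created at a page-aligned length before its content is written.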

Which should you use and when?

Well, that depends on your file size and your usage scenario. If you have no need to access individual pages and your files are under 200 GB, use block blobs; otherwise you will need page blobs. For our purposes, block blobs will do just fine.

Now Back To Our Regularly Scheduled Post

Wire up the Save button like so:

private void cmdSave_Click(object sender, EventArgs e)
        {
            if (FileNameIsValid())
            {
                var containerName = ConfigurationManager.AppSettings["DemoTrackContainer"];
                var fileName = txtFileName.Text + ".gpx";

                if (Blob.CreateBlockBlob(containerName, fileName, editDocument.Text))
                {
                    MessageBox.Show(
                        fileName + " was successfully saved.",
                        "Demo Track Saved",
                        MessageBoxButtons.OK,
                        MessageBoxIcon.Information);

                    Close();
                }
                else
                {
                    MessageBox.Show(
                        "There was an error saving " + fileName + ".",
                        "Error Saving Demo Track",
                        MessageBoxButtons.OK,
                        MessageBoxIcon.Error);
                }
            }
        }

Here you can see that we test our file name, grab our container name from app settings, and add the ".gpx" extension to our file name. We then call a static class called Blob and its CreateBlockBlob method. The method creates a file in the specified container with the passed string as its content. Here is the code:

public static bool CreateBlockBlob(string containerName, string fileName, string text)
        {
            try
            {
                var storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
                CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

                CloudBlobContainer container = blobClient.GetContainerReference(containerName);
                container.CreateIfNotExist();

                CloudBlockBlob blob = container.GetBlockBlobReference(fileName);
                blob.UploadText(text);

                return true;
            }
            catch (Exception)
            {
                return false;
            }
        }


And here is the code for creating a page blob (in case you were curious):

public static bool CreatePageBlob(string containerName, string fileName, string text, long size)
        {
            try
            {
                var storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
                CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

                CloudBlobContainer container = blobClient.GetContainerReference(containerName);
                container.CreateIfNotExist();

                CloudPageBlob blob = container.GetPageBlobReference(fileName);
                blob.Create(size);
                blob.UploadText(text);

                return true;
            }
            catch (Exception)
            {
                return false;
            }
        }

Now we are saving our tracks in the cloud (yea for marketing slang!) and have them available for future use. Speaking of the future, in the next post we will get started on the track faker piece of the application. We will take advantage of Tasks to handle multi-threading so we continue to have a responsive UI while faking a long-running track.