Posted by: lluppes | October 14, 2012

Creating an Azure WebAPI for your Mobile Projects

This walkthrough will guide you through the steps necessary to create an Azure WebAPI project. Several key components are necessary for a successful Azure deployment if you want to make your project highly responsive, scalable, and affordable. The key pieces I will demonstrate here are the effective use of Queues, Table Storage, and Worker Roles. There's a lot to it, but once you have this down, it can make for some really nice back-end code.

The first step is to set up your Azure account. You can register for a free 30-day trial account at http://windows.azure.com. Once you've set up the account, go ahead and create a Storage account. You must come up with a globally unique name for the storage account; if you try to use one that already exists, you'll get an error.

Once the account is set up, you'll want to record the account name and find your storage keys. Click on the Manage Keys link at the bottom of the screen and copy the account name and primary access key somewhere safe; we'll need them later.

Once you have your Azure account all set up and ready, it’s time to open up Visual Studio 2012 and create a new cloud project. The first time you do this, you won’t be able to create any projects – you have to go load the Azure SDK first. The option for Cloud projects is there but just gives you a link to the SDK. Go ahead and load that and then restart Visual Studio.

One thing I don’t like about working with Azure projects is that you have to remember to start Visual Studio using the “Run as Administrator” option because the Azure Emulator requires it.

Now that you have the Azure SDK installed and you've started VS2012 in admin mode, it's time to create your first project. If you have .NET 4.5 selected at the top of the new project page you won't have any available options. Currently you can only create .NET 4.0 projects (but .NET 4.5 support has been announced and is on the way very soon!). Make sure you have .NET Framework 4 selected at the top of the page, then go ahead and select the Windows Azure Cloud Service project type and click OK (the real choices come on the next screen).

This is where you select the Options you want to use for your project. For this example, I’m going to create a Web site with some admin web pages and an API page for posting and returning data, and I also want a worker role to do background processing.

Since I selected an MVC4 Web Role, I’ll have to pick which type of MVC4 project I want – I chose Internet Application because I want a dual purpose website and API site.

We need to hook the storage account we created up to this new Azure project, so we'll add some configuration entries for it. Right-click the WebRole under the Roles folder of your Azure project, click Properties, and select the Settings tab. With the configuration dropdown set to All Configurations, add DataConnectionString and DiagnosticsConnectString settings, defaulting both to UseDevelopmentStorage=true. Then switch the dropdown to the Cloud configuration and change both settings to point at your real Azure storage account (not the development storage), using the storage account name and primary access key you recorded when you set up the account.

Once you complete these for the WebRole, do the same thing with the Worker Role and add both settings to that project.
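For reference, here's roughly what the resulting ServiceConfiguration.Cloud.cscfg entries end up looking like (the role name and the account name/key are placeholders – substitute the values you copied earlier):

<Role name="WebRole1">
  <ConfigurationSettings>
    <Setting name="DataConnectionString" value="DefaultEndpointsProtocol=https;AccountName=youraccountname;AccountKey=YOUR_PRIMARY_ACCESS_KEY" />
    <Setting name="DiagnosticsConnectString" value="DefaultEndpointsProtocol=https;AccountName=youraccountname;AccountKey=YOUR_PRIMARY_ACCESS_KEY" />
  </ConfigurationSettings>
</Role>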

Because I have a web role and a worker role, and both of them will be processing data from the same storage location, I want a DataLayer project that handles all of the data interactions, so I'm going to create a plain old class library named DataLayer1. Once that's created, I'll add the EntityFramework and Newtonsoft.Json NuGet packages, and then add references to Microsoft.WindowsAzure.ServiceRuntime, Microsoft.WindowsAzure.Configuration, Microsoft.WindowsAzure.StorageClient, Microsoft.WindowsAzure.Diagnostics, and System.Data.Services.Client.

After it compiles, add a reference to your data layer project in your WebRole and WorkerRole projects.

Let’s get back to our DataLayer project.  Rename Class1.cs to TimeRepository.cs and add the following placeholder code (which will get fleshed out later):

using Microsoft.WindowsAzure.StorageClient;

namespace LuppesTime
{
  public interface ITimeRepository
  {
    CloudQueueMessage GetMessage();
    bool ProcessQueueEntry(string partitionKey, string rowKey);
    void DeleteMessage(CloudQueueMessage msg);
  }
  public class TimeRepository : ITimeRepository
  {
    public CloudQueueMessage GetMessage()
    {
      //TODO: Implement this method later
      return null;
    }
    public bool ProcessQueueEntry(string partitionKey, string rowKey)
    {
      //TODO: Implement this method later
      return false;
    }
    public void DeleteMessage(CloudQueueMessage msg)
    {
      //TODO: Implement this method later
    }
  }
}

One of the odd things about working with Azure is the occasional errors you get when trying to connect to the database or storage accounts, which are referred to as Transient Faults. You need to code for this in your application, but fortunately there is a pretty easy way to handle the problem. For now, search for and install the NuGet package “Transient Fault Handling Application Block” in your DataLayer, WebRole, and WorkerRole projects, and we'll see where it comes into play soon.

Now it’s time to configure the WorkerRole with Azure goodness. To do that, we’ll replace the generic OnStart function with the following code to set up your environment:

private ITimeRepository db;
public override bool OnStart()
{
  // instantiate our DataLayer
  db = new TimeRepository();

  // read storage account configuration settings
  CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
  {
    configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));
  });
  var storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");

  // Set the maximum number of concurrent connections
  ServicePointManager.DefaultConnectionLimit = 12;

  // For information on handling configuration changes
  // see the MSDN topic at http://go.microsoft.com/fwlink/?LinkId=166357.

  DiagnosticMonitorConfiguration diagConfig = DiagnosticMonitor.GetDefaultInitialConfiguration();

  // Enable Scheduled Transfer
  diagConfig.Logs.ScheduledTransferPeriod = TimeSpan.FromSeconds(30);
  diagConfig.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(5.0);
  diagConfig.Directories.ScheduledTransferPeriod = TimeSpan.FromHours(1);

  //// OPTIONAL - add Performance Counter Monitoring
  //PerformanceCounterConfiguration procTimeConfig = new PerformanceCounterConfiguration();
  //// FYI - run typeperf.exe /q to query to see counter names
  //procTimeConfig.CounterSpecifier = @"\Processor(*)\% Processor Time";
  //procTimeConfig.SampleRate = System.TimeSpan.FromSeconds(10);
  //diagConfig.PerformanceCounters.DataSources.Add(procTimeConfig);

  // add Event Collection from Windows Event Log
  // http://msdn.microsoft.com/en-us/library/dd996910(VS.85).aspx
  diagConfig.WindowsEventLog.DataSources.Add("System!*");
  diagConfig.WindowsEventLog.DataSources.Add("Application!*");

  try
  {
    DiagnosticMonitor.Start("DiagnosticsConnectString", diagConfig);
    Trace.TraceInformation("WorkerRole OnStart completed");
  }
  catch (Exception ex)
  {
    Trace.TraceInformation("Error Starting Diagnostics! " + ex.Message);
  }

  // Capture full crash dumps (FYI - ASP.net will trap some of these crashes and you might not see them)
  CrashDumps.EnableCollection(true);

  return base.OnStart();
}

Since this is a worker role, we want this worker to go out and search the queue for things to process. Let's replace the Run function with the following code to set up the worker process to fetch queue entries. In the previous code we instantiated db as an ITimeRepository (a reference to our DataLayer code), so we'll call functions in that library to get and process queue entries. This code just hangs around looking for new queue entries. You can change the Sleep interval to whatever value you want; I've set it to 15 seconds here so you have a little delay in case you want to look at the records before they are processed.

public override void Run()
{
  Trace.TraceInformation("Worker: Listening for queue messages...");
  while (true)
  {
    try
    {
      // retrieve a new message from the queue
      CloudQueueMessage msg = db.GetMessage();
      if (msg != null)
      {
        // parse message retrieved from queue
        var messageParts = msg.AsString.Split(new char[] { ',' });
        var partitionKey = messageParts[0];
        var rowkey = messageParts[1];
        Trace.TraceInformation("Worker: Found queue entry - '{0}-{1}'.", partitionKey, rowkey);
        db.ProcessQueueEntry(partitionKey, rowkey);
        db.DeleteMessage(msg);
        Trace.TraceInformation("Worker: Finished Queue processing - '{0}-{1}'.", partitionKey, rowkey);
      }
      else
      {
        // sleep time is in milliseconds – this is set to 15 seconds so you have
        // time to look at the queue before it gets processed
        System.Threading.Thread.Sleep(15000);
      }
    }
    catch (StorageClientException ex)
    {
      Trace.TraceError("Worker: Exception when processing queue item. Message: '{0}'", ex.Message);
      System.Threading.Thread.Sleep(5000);
    }
  }
}

Now it’s time to configure the WebRole with the same Azure goodness. Replace the OnStart function with the following code to set up your environment:

public override bool OnStart()
{
  // For information on handling configuration changes
  // see the MSDN topic at http://go.microsoft.com/fwlink/?LinkId=166357.

  DiagnosticMonitorConfiguration diagConfig = DiagnosticMonitor.GetDefaultInitialConfiguration();

  //Enable Scheduled Transfer
  diagConfig.Logs.ScheduledTransferPeriod = TimeSpan.FromSeconds(30);
  diagConfig.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(5.0);
  diagConfig.Directories.ScheduledTransferPeriod = TimeSpan.FromHours(1);

  //// add Performance Counter Monitoring
  //PerformanceCounterConfiguration procTimeConfig = new PerformanceCounterConfiguration();
  //// FYI - run typeperf.exe /q to query to see  counter names
  //procTimeConfig.CounterSpecifier = @"\Processor(*)\% Processor Time";
  //procTimeConfig.SampleRate = System.TimeSpan.FromSeconds(10);
  //diagConfig.PerformanceCounters.DataSources.Add(procTimeConfig);

  // add Event Collection from Windows Event Log
  // Syntax: <channel name>!<XPath query>
  // http://msdn.microsoft.com/en-us/library/dd996910(VS.85).aspx
  diagConfig.WindowsEventLog.DataSources.Add("System!*");
  diagConfig.WindowsEventLog.DataSources.Add("Application!*");

  try
  {
    DiagnosticMonitor.Start("DiagnosticsConnectString", diagConfig);
    System.Diagnostics.Trace.TraceInformation("WebRole OnStart completed");
  }
  catch (Exception ex)
  {
    //WriteToLog("Error Starting Diagnostics! " + ex.Message);
    System.Diagnostics.Trace.TraceInformation("Error Starting Diagnostics! " + ex.Message);
  }

  // Capture full crash dumps  (FYI - ASP.net will trap some of these crashes and you might not see them)
  CrashDumps.EnableCollection(true);
  return base.OnStart();
}

In a previous section, we added the Transient Fault Handling block. In order for it to work right, we need to add the RetryPolicy and TypeRegistrationProviders configuration to the Web.config in the WebRole and the app.config in the WorkerRole.

<configuration>
  <configSections>
    <section name="entityFramework" type="System.Data.Entity.Internal.ConfigFile.EntityFrameworkSection, EntityFramework, Version=4.4.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
    <section name="typeRegistrationProvidersConfiguration" type="Microsoft.Practices.EnterpriseLibrary.Common.Configuration.TypeRegistrationProvidersConfigurationSection, Microsoft.Practices.EnterpriseLibrary.Common, Version=5.0.505.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
    <section name="RetryPolicyConfiguration" type="Microsoft.Practices.EnterpriseLibrary.WindowsAzure.TransientFaultHandling.Configuration.RetryPolicyConfigurationSettings, Microsoft.Practices.EnterpriseLibrary.WindowsAzure.TransientFaultHandling" />
  </configSections>
  <RetryPolicyConfiguration defaultRetryStrategy="Fixed Interval Retry Strategy" defaultAzureStorageRetryStrategy="Fixed Interval Retry Strategy" defaultSqlCommandRetryStrategy="Backoff Retry Strategy">
    <incremental name="Incremental Retry Strategy" retryIncrement="00:00:01" initialInterval="00:00:01" maxRetryCount="10" />
    <fixedInterval name="Fixed Interval Retry Strategy" retryInterval="00:00:05" maxRetryCount="6" firstFastRetry="true" />
    <exponentialBackoff name="Backoff Retry Strategy" minBackoff="00:00:05" maxBackoff="00:00:45" deltaBackoff="00:00:04" maxRetryCount="10" />
  </RetryPolicyConfiguration>
  <typeRegistrationProvidersConfiguration>
    <clear />
    <add sectionName="RetryPolicyConfiguration" name="RetryPolicyConfiguration" />
  </typeRegistrationProvidersConfiguration>
</configuration>

In order to get access to our Azure table storage account in our web application, we need to configure that in the startup. To keep the Global.asax.cs clean, we’ll create a new AzureConfig.cs file in the App_Start folder of our WebRole project, and then add a call in our Application_Start event in the Global.asax.cs file.

// in Global.asax.cs
protected void Application_Start()
{
  AzureConfig.RegisterAzureSettings();
  // ...the MVC registration calls already generated in this method stay here
}

 

//[contents of App_Start\AzureConfig.cs file]
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;

namespace LuppesTime
{
  public static class AzureConfig
  {
    public static void RegisterAzureSettings()
    {
      CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
      {
        configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));
      });
    }
  }
}
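With the setting publisher registered at startup, code anywhere in the WebRole can now resolve the storage account by its setting name – without RegisterAzureSettings() having run first, FromConfigurationSetting throws at runtime:

// anywhere after Application_Start has run:
var account = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");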

We've finished most of the Azure plumbing code, and it's time to start creating some real application code. Go back into the DataLayer project and let's create some code there. First we'll create a class that defines the record we're going to store in the Table Storage table. Create a file named TimeTableStorage.cs and put this code in it.

using System;
using Microsoft.WindowsAzure.StorageClient;

namespace LuppesTime
{
  public class TimeTableStorage : TableServiceEntity
  {
    public TimeTableStorage()
    {
      CreateDateTime = DateTime.UtcNow;
      ProcessedInd = "N";
      PartitionKey = DateTime.UtcNow.ToString("MMddyyyy");
      // Row keys sort lexicographically, so we use zero-padded reverse ticks
      // to make the rows come back in reverse time order (newest first).
      RowKey = string.Format("{0:D19}_{1}", DateTime.MaxValue.Ticks - DateTime.UtcNow.Ticks, Guid.NewGuid());
    }
    public string XMLData { get; set; }
    public string ProcessedInd { get; set; }
    // BatchId gets filled in by the worker when the entry is processed
    public int BatchId { get; set; }
    public DateTime CreateDateTime { get; set; }
    public DateTime? ProcessedDateTime { get; set; }
    public string StatusMessage { get; set; }
  }
}
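Here's a quick scratch snippet (not part of the project) showing why the reverse-tick RowKey sorts newest-first: subtracting from DateTime.MaxValue.Ticks means later timestamps produce smaller numbers, and the D19 zero-padding makes the string comparison agree with the numeric one.

// a later timestamp yields a smaller reverse-tick value
long older = DateTime.MaxValue.Ticks - new DateTime(2012, 10, 14, 12, 0, 0).Ticks;
long newer = DateTime.MaxValue.Ticks - new DateTime(2012, 10, 14, 12, 0, 1).Ticks;
// zero-padded, the newer key compares as "less" and therefore sorts first
string olderKey = string.Format("{0:D19}", older);
string newerKey = string.Format("{0:D19}", newer);
Console.WriteLine(string.CompareOrdinal(newerKey, olderKey) < 0); // True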

Next we’ll have to create a Storage Context class which defines this table to the Azure Table Service.

using System.Linq;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

namespace LuppesTime
{
  public class TimeTableStorageContext : TableServiceContext
  {
    public TimeTableStorageContext(string baseAddress, StorageCredentials credentials)
      : base(baseAddress, credentials)
    {
    }
    public IQueryable<TimeTableStorage> TimeTable
    {
      get
      {
        return this.CreateQuery<TimeTableStorage>("TimeTable");
      }
    }
  }
}
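As a quick sanity check, here's the kind of LINQ query this context enables – a sketch that assumes a CloudStorageAccount variable named storageAccount like the one the repository sets up in the next section:

string todayKey = DateTime.UtcNow.ToString("MMddyyyy"); // matches the PartitionKey format above
var ctx = new TimeTableStorageContext(storageAccount.TableEndpoint.AbsoluteUri, storageAccount.Credentials);
var todaysEntries = ctx.TimeTable.Where(t => t.PartitionKey == todayKey).ToList();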

Now that we have our table structure and data context set up, we can start to use it. Edit your TimeRepository.cs file and put this code in it. This class initialization code sets up all of the table storage and queue plumbing we'll need to make everything work.

using System;
using System.Collections.Generic;  // for IEnumerable<T> in ProcessQueueEntry below
using System.Diagnostics;
using System.Linq;                 // for the LINQ query / FirstOrDefault below
using Microsoft.Practices.EnterpriseLibrary.Common.Configuration;
using Microsoft.Practices.EnterpriseLibrary.WindowsAzure.TransientFaultHandling;
using Microsoft.Practices.EnterpriseLibrary.WindowsAzure.TransientFaultHandling.AzureStorage;
using Microsoft.Practices.TransientFaultHandling;  // for RetryManager
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// note: all of the members below go inside the existing TimeRepository class
#region Initialization
private TimeTableStorageContext context;
private static CloudStorageAccount storageAccount;
private static CloudBlobClient blobStorage = null;
private static CloudQueueClient queueStorage = null;
private static CloudQueue queue;
private static CloudBlobContainer container;

/// <summary>
/// The static constructor initializes the storage account and
/// creates the tables if they don't already exist.
/// Being static makes sure this is only called once.
/// </summary>
static TimeRepository()
{
  storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
  CloudTableClient.CreateTablesFromModel(
      typeof(TimeTableStorageContext),
      storageAccount.TableEndpoint.AbsoluteUri,
      storageAccount.Credentials);

  // initialize blob storage
  blobStorage = storageAccount.CreateCloudBlobClient();
  // warning: this name CANNOT contain capital letters!
  container = blobStorage.GetContainerReference("timeblobs");

  // initialize queue storage
  queueStorage = storageAccount.CreateCloudQueueClient();
  // warning: this name CANNOT contain capital letters!
  queue = queueStorage.GetQueueReference("timequeue");

  Trace.TraceInformation("Creating container and queue...");

  bool storageInitialized = false;
  while (!storageInitialized)
  {
    try
    {
      // create the blob container and allow public access
      container.CreateIfNotExist();
      var permissions = container.GetPermissions();
      permissions.PublicAccess = BlobContainerPublicAccessType.Container;
      container.SetPermissions(permissions);

      // create the message queue(s)
      queue.CreateIfNotExist();

      storageInitialized = true;
    }
    catch (StorageClientException e)
    {
      if (e.ErrorCode == StorageErrorCode.TransportError)
      {
        Trace.TraceError("Storage services initialization failure. "
          + "Check your storage account configuration settings. If running locally, "
          + "ensure that the Development Storage service is running. Message: '{0}'", e.Message);
        System.Threading.Thread.Sleep(5000);
      }
      else
      {
        throw;
      }
    }
  }
}
/// <summary>
/// The instance constructor sets up the retry policy and the table context.
/// </summary>
public TimeRepository()
{
  this.context = new TimeTableStorageContext(storageAccount.TableEndpoint.AbsoluteUri, storageAccount.Credentials);
  this.context.RetryPolicy = RetryPolicies.Retry(3, TimeSpan.FromSeconds(1));
}
#endregion

#region Transient Fault Tolerance Hooks
// --> to enable TFT, simply wrap your calls inside the retryPolicy.ExecuteAction call, like this:
// BEFORE:
//    return
//      (from p in db.Note select p);
// AFTER:
//    return this.retryPolicy.ExecuteAction( () =>
//      (from p in db.Note select p)
//    );
private RetryManager _RetryManager;
private Microsoft.Practices.TransientFaultHandling.RetryPolicy _RetryPolicy;
public Microsoft.Practices.TransientFaultHandling.RetryPolicy retryPolicy
{
  get
  {
    if (_RetryPolicy == null)
    {
      _RetryManager = EnterpriseLibraryContainer.Current.GetInstance<RetryManager>();
      _RetryPolicy = _RetryManager.GetRetryPolicy("Incremental Retry Strategy");
    }
    return _RetryPolicy;
  }
}
#endregion

Now that we have all the plumbing set up, let's add an AddTableAndQueueEntry method to your TimeRepository that will create a new TableStorage record and then add a queue entry telling the WorkerRole where to find the next bit of work it should do.

// add this to the ITimeRepository interface:
public interface ITimeRepository
{
  bool AddTableAndQueueEntry(TimeTableStorage newItem);
}

// ...and implement it in the TimeRepository class:
public bool AddTableAndQueueEntry(TimeTableStorage newItem)
{
  try
  {
    if (newItem != null)
    {
      retryPolicy.ExecuteAction(() =>
      {
        // the entity set name here must match the "TimeTable" property on the context
        this.context.AddObject("TimeTable", newItem);
        this.context.SaveChanges();
      });

      if (queueStorage != null)
      {
        // queue a message telling the worker which entry to process
        var queue = queueStorage.GetQueueReference("timequeue");
        // note: this "partitionKey,rowKey" format must match the Split(',') in the worker's Run loop
        var message = new CloudQueueMessage(String.Format("{0},{1}", newItem.PartitionKey, newItem.RowKey));
        retryPolicy.ExecuteAction(() =>
        {
          queue.AddMessage(message);
        });
        System.Diagnostics.Trace.TraceInformation("New Queue Entry: Process post '{0},{1}'", newItem.PartitionKey, newItem.RowKey);
      }

      return true;
    }
    else
    {
      return false;
    }
  }
  catch (Exception ex)
  {
    Trace.TraceError(string.Format("Error posting entry! {0} {1}", GetExceptionMessage(ex), SerializeObjectToJson(newItem)));
    return false;
  }
}

Now that we have our data repository set up, let's go back to the WebRole and create an API method so that we can call it to create a new item. Create an API folder in the WebRole project, and add a new controller named TestController to that folder.

Put the following code in the TestController. This code is pretty simplistic and doesn't really reflect how you would design a real web service API, but it works for this example. If we just hit the “/api/test” URL in the browser, it gives us a message telling us to supply a parameter, so we can see it's working. If we supply a parameter (e.g. /api/test/666), it goes into the other method and calls the data layer's AddTableAndQueueEntry method with that parameter.

using System;
using System.Diagnostics;
using System.Net;
using System.Net.Http;
using System.Web.Http;

namespace LuppesTime.API
{
  public class TestController : ApiController
  {
    public ITimeRepository db = new TimeRepository();
    // GET api/test
    public string Get()
    {
      return "Call this with a parameter to create a queue entry!";
    }
    // GET api/test/5
    public HttpResponseMessage Get(int id)
    {
      TimeTableStorage post = new TimeTableStorage();
      try
      {
        post.XMLData = string.Format("{0}", id);
        if (db.AddTableAndQueueEntry(post))
        {
          Trace.TraceInformation(string.Format("Posted data from Id {0} Bytes {1}", post.XMLData, post.XMLData.Length));
          return Request.CreateResponse(HttpStatusCode.OK, "Data Accepted!");
        }
        else
        {
          Trace.TraceWarning(string.Format("Error while posting data {0}", post.XMLData));
          return Request.CreateResponse(HttpStatusCode.BadRequest, "Data Rejected!");
        }
      }
      catch (Exception ex)
      {
        Trace.TraceError(string.Format("Error while processing data {0} {1}", post.XMLData, ex.Message));
        return Request.CreateResponse(HttpStatusCode.BadRequest, "Error! " + ex.Message);
      }
    }
  }
}
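To exercise the API from code the way a mobile client would, something like this works. It's just a sketch: it assumes the ASP.NET Web API Client Libraries NuGet package (which provides HttpClient on .NET 4.0), and the base address is a placeholder for whatever the emulator or your cloudapp.net deployment gives you.

using System;
using System.Net.Http;

class ApiSmokeTest
{
  static void Main()
  {
    // placeholder address - substitute your emulator port or cloudapp.net URL
    using (var client = new HttpClient { BaseAddress = new Uri("http://127.0.0.1:81/") })
    {
      var response = client.GetAsync("api/test/666").Result;
      Console.WriteLine("{0}: {1}", response.StatusCode, response.Content.ReadAsStringAsync().Result);
    }
  }
}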

Let’s return to the DataLayer project and update the functions we stubbed out earlier. The first two are pretty simple – to work with the queue we just do a Get or Delete call.

public CloudQueueMessage GetMessage()
{
  return queue.GetMessage();
}
public void DeleteMessage(CloudQueueMessage msg)
{
  queue.DeleteMessage(msg);
}
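One queue behavior worth knowing: GetMessage doesn't remove the message, it just makes it invisible for a visibility timeout (30 seconds by default), and if DeleteMessage never gets called the message reappears for another worker to pick up. If your processing might run longer than that, you could add an overload like this sketch (the repository overload is hypothetical; CloudQueue.GetMessage(TimeSpan) is the underlying StorageClient call):

public CloudQueueMessage GetMessage(TimeSpan visibilityTimeout)
{
  // keep the message hidden long enough for ProcessQueueEntry to finish
  return queue.GetMessage(visibilityTimeout);
}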

ProcessQueueEntry is a little more complicated. We have to go fetch the table storage record, process it, and then update it; back in the worker's Run loop the queue message then gets deleted so we don't process it again.

public bool ProcessQueueEntry(string partitionKey, string rowKey)
{
  TimeTableStorage entry = null;
  try
  {
    IEnumerable<TimeTableStorage> results = null;
    retryPolicy.ExecuteAction(() =>
    {
      results =
        from g in this.context.TimeTable
        where g.PartitionKey == partitionKey && g.RowKey == rowKey
        select g;
    });
    entry = results.FirstOrDefault();
    if (entry == null)
    {
      Trace.TraceError(string.Format("Queue entry {0} {1} was not found in table storage", partitionKey, rowKey));
      return false;
    }

    //TODO: write a process that will take the XML from table storage and process it, returning a batch number for cross reference
    int BatchId = 1;

    if (BatchId > 0)
    {
      Trace.TraceInformation(string.Format("Successfully Processed Queue Entry {0} {1} {2}", entry.XMLData, entry.PartitionKey, entry.RowKey));
      entry.ProcessedInd = "Y";
      entry.BatchId = BatchId;
      entry.StatusMessage = "Processed";
    }
    else
    {
      Trace.TraceError(string.Format("Failure Processing Queue Entry {0} {1} {2}", entry.XMLData, entry.PartitionKey, entry.RowKey));
      entry.ProcessedInd = "F";
      entry.BatchId = 0;
      entry.StatusMessage = "Processing failed!";
    }

    entry.ProcessedDateTime = DateTime.UtcNow;
    this.context.UpdateObject(entry);
    this.context.SaveChanges();
    return true;
  }
  catch (Exception ex)
  {
    Trace.TraceError(string.Format("Error processing queue entry: {0} {1} {2} {3}", GetExceptionMessage(ex), entry.XMLData, entry.PartitionKey, entry.RowKey));
    return false;
  }
}

We're done with coding; now it's time to test. Fire up your project and you should be able to test it using the Azure Emulator, which was installed along with the Azure SDK.

Deploying to Azure

Once you are satisfied that your project is working in your local test environment, it's time to deploy to Azure. To deploy, right-click on the Azure project and select Publish; the dialog will be pretty blank the first time.

When the publish dialog comes up blank, click the Sign In link to go get your publishing credentials. Your browser will open and prompt you to sign in to Azure, and will then automatically download a credentials file. Save it somewhere in your project folder, return to the publish dialog, click the Import button, and import the downloaded credentials. You should then see your Azure account info in the subscription dropdown; click Next.

If you haven’t created any hosted services (and if you’ve been following along, you haven’t done that yet…), then give your new service a name here, otherwise just select one from the list.

Since we're using Table Storage, we'll have to flip over to the Advanced Settings tab to enter the storage account info.

Once you’ve done that, click Next, then click on Publish and sit back and wait – it could take 10 minutes or so for the deployment to finish. You should see your build output window pop up quickly, then an Azure deployment window.

While you are waiting, let’s go hook up our links to the table storage on the Azure server so we can see those tables. Open the Server Explorer and add a new storage account. You’ll need that same storage account name and key that you used way back in the beginning.

If you deploy as a Staging instance, you'll get a URL that looks like http://{guid}.cloudapp.net. If you deploy as a Production instance, you'll be able to use a URL that looks like http://yourappname.cloudapp.net. Once it's deployed, you should see a success message.

Browse to your website to verify it's working by clicking the link now visible on the left-hand side of the Azure Activity Log window. Then browse to your API page at http://{guid}.cloudapp.net/api/Test to call the code you wrote that creates an entry in table storage and the queue. You should get back an XML packet containing “Data Accepted!”

Go browse your Table Storage using the VS Server Explorer – you should have around 10-15 seconds to see the record before the worker process picks it up and processes it. If you get there before the worker, the ProcessedInd flag will still be set to “N”. If you wait 15 seconds and refresh the view, you should see that it's set to “Y”.

You can also check out the server logs by viewing the Tables\WADLogsTable – anything that you entered with the Trace statements should show up in these logs for you to view.

That’s it – you’ve now deployed a Windows Azure service that uses Table Storage, Queues, Worker Processes, Diagnostics Logging and all sorts of Azure goodness. You can scale this solution up to handle massive traffic volumes if your application takes off.

I’ve included a Sample Source Code Project here for your reference.  Enjoy!

Lyle


Responses

  1. Awesome post! Is the following code correct? TypeRegistrationProviders code to the Web.Config (XML looks incomplete)

    • You are correct… thanks for the tip! Not sure what happened there – it was in my original document but got zapped in the post. It should be fixed now. Thanks!
