CQRS applied at the service level in Azure

Command and Query Responsibility Segregation (CQRS) is a software pattern that separates queries (reads) from commands (modifications).  Like many Object Oriented Design (OOD) patterns, it applies well to service oriented architecture (SOA) when building scalable services.
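
As a minimal sketch of the idea (the interface and member names below are illustrative, not taken from the series), the read side and the write side can be expressed as two separate service contracts:

// Read side: queries return data and never change state.
public interface IInventoryQueryService
{
    int GetStockLevel(string productCode);
}

// Write side: commands change state and return nothing.
public interface IInventoryCommandService
{
    void AdjustStock(string productCode, int quantityChange);
}

The two contracts can then be scaled, deployed and secured independently, which is where the pattern pays off when applied at the service level.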

In a series of posts, CQRS is explored in relation to SOA, and a practical implementation is used to illustrate the advantages and disadvantages.

As part of the series, an MSDN project was created to illustrate the pattern applied at the service level.

Posted in Architecture, Azure, SOA

On-premises BizTalk integration with Azure File Storage

Problem

Azure File Storage (AFS) supports on-premises integration on servers that support SMB 3.0.  For BizTalk, this allows AFS to be mapped just like a normal file location using the File receive locations and send ports.  In situations where SMB 3.0 is not supported, this project provides an alternative strategy: using the Azure Storage client library to integrate with AFS.

Context

The requirement to move files into AFS came about when legacy on-premises solutions needed to exchange messages with virtual machines running in Azure.  File storage was chosen over Blob storage because the solutions already supported the exchange of file based messages and because of AFS’s support for SMB 3.0.  As expected, this made integration with the solutions running in Azure VMs simple, without requiring additional development effort.  What was not expected was that the client’s environment did not allow AFS to be mapped (Error 53).  An on-premises BizTalk server was available, so it was the natural choice for a reliable messaging service.

The requirement was to reliably push messages from on-premises folders to an AFS share.  As AFS exposes REST endpoints, this was explored first using the BizTalk WCF adapter with the AzureStorageBehaviorExtension endpoint behavior.  However, the version of the AzureStorageBehavior available at the time of this article does not support the File storage type.  Weighing the cost of extending the behavior to support File storage against the simplicity of using the Azure Storage client library, it was decided that the simplicity of the client library outweighed the loss of tracking from not using the WCF REST endpoints.

Solution

The solution was to use the Azure Storage client library to save the files into AFS.  This greatly simplified coding, as most of the functionality could be accomplished in a single expression.  But, as with anything in BizTalk, the issue was getting the required information into the expression, in particular values such as the connection string and directory that change with each environment (development, test, production).  As there was not an existing pattern for this (for instance, using the BRE or a config database), it was decided to create a custom receive pipeline component where this information could be supplied.

For the benefit of the community the basic project has been provided on MSDN: On-premises BizTalk integration with Azure File Storage.

Sample Project

The solution, MSDNFileMover, consists of two projects: MSDNFileMover and MSDNFileMoverPromoter.  MSDNFileMoverPromoter contains a property promoter used to allow some values to be specified on the receive pipeline, while MSDNFileMover contains the pipeline, schema and orchestration.

MSDNFileMoverPromoter

A single class, PropertyPromoter, allows the AFS storage connection string and a directory to be specified.  There are many posts on the web that illustrate how a BizTalk pipeline component can be created, so this will not be repeated here.  The purpose is to provide a convenient mechanism for adding context on the receive pipeline.
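
The component simply exposes those two values as design-time properties.  A rough sketch is shown below; the property names match those used in the Execute method that follows, and the rest of the pipeline component plumbing is omitted:

// Design-time properties surfaced when the component is added to a receive pipeline.
public string StorageConnectionString { get; set; }
public string ImportDirectory { get; set; }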

The following snippet shows the Execute method that promotes the filename, directory and storage connection string so they can be referenced in the orchestration.

public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
{
string systemPropertiesNamespace = @"http://schemas.microsoft.com/BizTalk/2003/system-properties";
string filePropertiesNamespace = @"http://schemas.microsoft.com/BizTalk/2003/file-properties";
 
var id = pInMsg.Context.Read("InterchangeID", systemPropertiesNamespace);
pInMsg.Context.Promote("InterchangeID", systemPropertiesNamespace, id);
 
var path = pInMsg.Context.Read("ReceivedFileName", filePropertiesNamespace);
var filename = System.IO.Path.GetFileName(path.ToString());
 
pInMsg.Context.Promote("StorageConnectionString", "https://MSDNFileMover.MSDNFileMoverProperties/v1 Jump ", StorageConnectionString);
pInMsg.Context.Promote("Directory", "https://MSDNFileMover.MSDNFileMoverProperties/v1 Jump ", ImportDirectory);
pInMsg.Context.Promote("Filename", "https://MSDNFileMover.MSDNFileMoverProperties/v1 Jump ", filename);
 
return pInMsg;
}

MSDNFileMover

The orchestration is the most interesting part of the MSDNFileMover project, as the receive pipeline and property schema are straightforward.  The orchestration receives an XML document and then uses the Azure Storage client library to save it to Azure File Storage.  In the case of an error, the orchestration will suspend so that it can be resumed and the send attempted again.

The orchestration is shown below:

A couple of things to note about the orchestration: it is long running but contains an atomic scope.  This allows for a retry when a failure happens, even though the variables used are not serializable.  The orchestration also requires the Azure Storage NuGet package to be installed.

The expression, SendToAFS, uses the message properties to create a new file in AFS.

storageConnection = Message_Inbound(MSDNFileMover.StorageConnectionString);
 
directory = Message_Inbound(MSDNFileMover.Directory);
filename = Message_Inbound(MSDNFileMover.Filename);
 
storageAccount = Microsoft.WindowsAzure.Storage.CloudStorageAccount.Parse(storageConnection);
 
client = storageAccount.CreateCloudFileClient();
share = client.GetShareReference("MyShare");
 
rootDirRef = share.GetRootDirectoryReference();
dirRef = rootDirRef.GetDirectoryReference(directory);
 
fileRef = dirRef.GetFileReference(filename);
 
fileRef.UploadText(Message_Inbound.OuterXml, null, null, null, null);

Summary

This project illustrates how Azure File Storage can be integrated with an on-premises BizTalk server.  The Azure Storage client library calls are performed within a single atomic transaction.  If multiple operations are performed in this manner, it is recommended to do each one as a separate transaction to prevent the environment being left in an unknown state in case of a failure.

It is also recommended that a solution around SMB 3.0 should be explored first; SMB 3.0 might not be supported due to infrastructure restrictions or because of the infamous Error 53 (see links).  Extending the AzureStorageBehaviorExtension and using WCF to communicate with the Azure Storage REST endpoints was also explored.  This proved cumbersome and challenging for two reasons: the level of control required over the BizTalk WCF engine, and the two step process the REST endpoints require (one call to create the file and a second to fill its content).  A simpler and more reliable approach was chosen instead, at the sacrifice of message level tracking.

 

More Information

The following are some links that may be helpful:

Posted in Uncategorized

SQL Server Notifications – Polling and ServiceBroker

Introduction

As part of the CQRS in Azure MSDN blog series, two forms of notification were used to notify the application tier of changes to the underlying data model.  For SQL Azure, a polling mechanism was used, while push notifications were used for SQL Server versions that support Service Broker.

The source can be found on MSDN Samples.

SQL Azure vs SQL Server

Service Broker is not supported in SQL Azure, so its support for messaging and queuing is not available.  Because of this, alternative approaches are required; in this instance, polling was chosen.  Simply put, the application periodically polls for changes in the database.  This could be as simple as checking for a change to a last update date or the number of records, or it can become quite sophisticated in detecting change.  In this example, a simple query is used to identify whether a new entry has been added to an insert-only table.

Insert Only Table

Insert-only tables are popular in accounting and ledger solutions as they support both auditing and history by preventing update and delete operations.  The advantage is that access can be optimized for write operations and a clear history of all activity is captured.  The disadvantage is that the final state has to be determined by performing a full read of the table, and often another persistent view or table is required to provide aggregated information.
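
As a quick sketch of that last point, deriving the current state means aggregating over the ledger rather than reading a single row.  The entity shape below (a signed QuantityChange on each InventoryEntry) is an assumption for illustration only:

// Current state is an aggregate over the whole ledger, not a single row.
public int GetCurrentStockLevel(string productCode)
{
    using (var context = new PieShopContext())
    {
        return context.InventoryEntries
            .Where(e => e.ProductCode == productCode)
            .Sum(e => (int?)e.QuantityChange) ?? 0;
    }
}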

Note: Blockchain implementations use insert only tables to guarantee state among distributed repositories.

Building for different versions of SQL

For this sample, the version of SQL was determined by retrieving the Edition server property.  This will have a string value of “SQL Azure” when the database is SQL Azure:

public bool IsAzureDb
{
    get
    {
        if(string.IsNullOrEmpty(_version))
        {
            _version = Database.SqlQuery<string>("select SERVERPROPERTY ('Edition')").Single();
        }
 
        switch(_version)
        {
            case "SQL Azure": return true;
            default: return false;
        }
    }
}

RepositoryListener

To simplify the solution, an interface was created so the listener can be chosen at run-time based on whether the database supports notifications or has to rely on polling:

/// <summary>
/// used for detecting changes related to specific conditions
/// </summary>
interface IRepositoryListener
{
    void StartListener();
    void StopListener();
    void Register(string query, Action callback);
}

In the startup of the application (Global.asax Application_Start), the specific listener is created and started, and the Register method is called.  The Register method takes a query that is used to determine whether the state has been updated:

string query;
 
using (var context = new PieShopContext())
{
    if(context.IsAzureDb)
    {
        _listener = new PollingListener();
        query = "select top 1 [InventoryEntryId] from [ledger].[InventoryEntries] order by 1 desc";
    }
    else
    {
        _listener = new DependencyListener();
        query = "select [InventoryEntryId] from [ledger].[InventoryEntries]";
    }               
}
 
_listener.StartListener();
_listener.Register(query, InventoryRepository.Instance.RefreshInventory);

PollingListener

The PollingListener will run until cancelled, simply comparing the result of the query against the previously seen value until a change is detected.  Once a change is detected, the registered callback is invoked:

public void StartListener()
{
    _poller = Task.Run(() =>
    {
        while(!_token.IsCancellationRequested)
        {
            if(!string.IsNullOrEmpty(_query) && _callback != null)
            {
                using (var context = new PieShopContext())
                {
                    var result = context.Database.SqlQuery<int>(_query).Single(); // generic restored; the id column is assumed to be an int
 
                    if(result != _currentValue)
                    {
                        _currentValue = result;
                        _callback();
                    }
                }
            }
 
            Thread.Sleep(_frequency);
        }                   
    }
    , _token);
}
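
The snippet above relies on fields that are set up elsewhere in the class.  A minimal sketch of that plumbing is shown below; the field names follow the snippet, while the five second polling interval and the int type of the tracked value are assumptions:

public class PollingListener : IRepositoryListener
{
    // Cancellation is owned by the listener and observed by the polling loop.
    private readonly CancellationTokenSource _source = new CancellationTokenSource();
    private readonly CancellationToken _token;
    private readonly TimeSpan _frequency = TimeSpan.FromSeconds(5); // assumed polling interval
    private Task _poller;
    private string _query;
    private Action _callback;
    private int _currentValue;

    public PollingListener()
    {
        _token = _source.Token;
    }

    public void Register(string query, Action callback)
    {
        _query = query;
        _callback = callback;
    }

    public void StartListener()
    {
        // Body as shown in the snippet above (it starts the polling task).
    }

    public void StopListener()
    {
        // Signals the polling loop to exit on its next iteration.
        _source.Cancel();
    }
}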

DependencyListener

The DependencyListener takes advantage of Service Broker by registering a SqlDependency.  The dependency is then triggered when a change happens.

public int GetData()
{
    using (var connection = new SqlConnection(_connectionString))
    {
        connection.Open();
 
        // simple queries work best - take a look at what is not supported
        // https://msdn.microsoft.com/library/ms181122.aspx
        using (SqlCommand command = new SqlCommand(_query, connection))
        {
            // Make sure the command object does not already have
            // a notification object associated with it.
            command.Notification = null;
 
            SqlDependency dependency = new SqlDependency(command);
            dependency.OnChange += new OnChangeEventHandler(dependency_OnChange);
 
            if (connection.State == ConnectionState.Closed)
                connection.Open();
 
            using (var reader = command.ExecuteReader())
                return reader.Cast<IDataRecord>().Last().GetInt32(0); // generic restored; assumes the key column is an int
        }
    }
}
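
The dependency_OnChange handler is not shown above.  A minimal sketch is given below, assuming _callback holds the Action supplied through Register and remembering that a SqlDependency notification fires only once:

private void dependency_OnChange(object sender, SqlNotificationEventArgs e)
{
    // A SqlDependency only notifies once, so detach and let the next
    // GetData call register a fresh dependency.
    var dependency = (SqlDependency)sender;
    dependency.OnChange -= dependency_OnChange;

    if (e.Type == SqlNotificationType.Change && _callback != null)
    {
        _callback();
    }
}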

An important note: the query used to detect changes is limited in what it supports (for example, it must use two-part table names and an explicit column list rather than SELECT *).  Take a look at the documentation for details.  Also, note the starting and stopping of SqlDependency:

public void StartListener()
{
    SqlDependency.Start(_connectionString);           
}
 
public void StopListener()
{
    SqlDependency.Stop(_connectionString);
}

Summary

Both SqlDependency and polling were used to solve the problem of detecting a change in model state.  The CQRS in Azure posts cover the scenario of a dashboard page that periodically refreshes to display a consistent view of the model.

A related part, illustrating the updating of the table, is also in TechNet Wiki: Azure Functions – Entity Framework

Please see the sample project for full implementation details.

Posted in Azure, Azure SQL, C#, Entity Framework

Visual Studio Team Services: Connecting a BitBucket Repository

This is a walk-through of how to connect an existing BitBucket repository to Visual Studio Team Services.

Step 1: Create a build definition

In the project inside VSTS, navigate to the Build definitions and click the plus icon to create a new definition:

In this example, let's perform a Visual Studio build, although other build options are also available.

Next select Remote Git Repository.

Next, we need to set up the connection to BitBucket; visually, VSTS is complaining to us that the connection is missing.

Step 2: Connect the BitBucket Repository

In BitBucket, navigate to your code base and grab the URL of your repo.  You can construct this by taking the project URL (e.g., https://bitbucket.org/team/project) and appending the repo name.  You can get the repo name by taking a look at the generated command in the clone option.

In VSTS on the repository tab, click Manage to add a new connection to BitBucket:

Add a new external Git repository connection. This is where the URL determined above goes!

Hopefully, this saves someone some time!

Posted in Uncategorized

Faking out Azure – Unit Testing with Microsoft Fakes

Overview

Microsoft Fakes provides an excellent framework for code isolation within unit tests and is only available with Visual Studio Enterprise subscriptions.  Fakes does provide advantages over other frameworks (e.g., Moq and Rhino Mocks), as it allows for full code isolation and not just isolation of interfaces or public virtual members.  There are alternatives to Fakes (e.g., Telerik JustMock and Typemock).

The MSDN project, Azure Storage and Microsoft Fakes, was created to show some of the cool things Microsoft Fakes provides when isolating Azure Storage.

Please see the project for more information; the following is shown here as an overview.

Sample

To illustrate, a simple class, StorageManager, was created to insert or update a product in an Azure Storage table called Products.  The entire class is shown below:

namespace MSDNFakeOutStorage
{
  public class StorageManager
  {
    public bool UpdateProducts(ProductEntity entity)
    {
      var table = GetClientForTable("Products"); 
      var result = table.Execute(TableOperation.InsertOrReplace(entity));
 
      return result.HttpStatusCode == 204;
    }
 
    private readonly string ConnectionString = ConfigurationManager.AppSettings["AzureStorageConnectionString"];
 
    private CloudTable GetClientForTable(string tableName)
    {
      var account = CloudStorageAccount.Parse(ConnectionString);
      var tableClient = account.CreateCloudTableClient();
      var table = tableClient.GetTableReference(tableName);
 
      table.CreateIfNotExists();
 
      return table;
    }
 
  }
}

We will create two basic tests: one to test how the StorageManager behaves when the Products table is updated successfully, and one for when it is not.

To start, we need to add fakes for two assemblies to the project: Microsoft.WindowsAzure.Storage and System.Configuration.


The Fakes assemblies are then generated in the project.


Unit Test – System.Configuration

The first step is to isolate ConfigurationManager.AppSettings, which the StorageManager.ConnectionString field reads from.  The following shim diverts all ConfigurationManager.AppSettings calls to a new NameValueCollection:

ShimConfigurationManager.AppSettingsGet = () =>
{
  var settings = new NameValueCollection();
  settings.Add("AzureStorageConnectionString", "valueOfAzureStorageConnectionString");
 
  return settings;
};

When debugging the unit test you can see the magic working: the connection string value is served from the fake collection rather than from a configuration file.


Unit Test – Microsoft.WindowsAzure.Storage

The next step is to create shims for the Azure Storage components used in the project.  A shim was used instead of a stub as we are not replacing the entire object, only diverting calls to our test functionality.

In the code snippet below, note how all calls from the StorageManager class to the underlying storage classes are intercepted:

ShimCloudStorageAccount.ParseString = (connectionString) =>
{
  if (connectionString != "valueOfAzureStorageConnectionString")
    Assert.Fail($"Unexpected connection string value of {connectionString} received!");
 
  return new ShimCloudStorageAccount
  {
    CreateCloudTableClient = () => new ShimCloudTableClient
    {
      GetTableReferenceString = (tableName) =>
      {
        if (tableName != "Products")
          Assert.Fail("The table name should be Products");
 
        return new ShimCloudTable
        {
          CreateIfNotExistsTableRequestOptionsOperationContext = (requestOptions, operationContext) => true,
          ExecuteTableOperationTableRequestOptionsOperationContext = (operation, requestOptions, context) =>
          {
            return new TableResult
            {
              // successful response code is No Content
              // https://msdn.microsoft.com/en-us/library/azure/hh452242.aspx
              HttpStatusCode = 204
            };
         }
        };
      }
    }
  };
};
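
Putting the two shims together, a test could look something like the sketch below.  The ArrangeConfigurationShim and ArrangeStorageShims helpers are hypothetical wrappers around the shim setup shown above, ProductEntity is assumed to have a parameterless constructor, and the key point is that shims only apply inside a ShimsContext:

[TestMethod]
public void UpdateProducts_ReturnsTrue_WhenTableReturnsNoContent()
{
  // Shims are only active within the lifetime of a ShimsContext.
  using (ShimsContext.Create())
  {
    ArrangeConfigurationShim();           // hypothetical helper: the AppSettings shim shown above
    ArrangeStorageShims(statusCode: 204); // hypothetical helper: the storage shims shown above

    var manager = new StorageManager();
    var result = manager.UpdateProducts(new ProductEntity());

    Assert.IsTrue(result);
  }
}

The failure case is the same shape, with the storage shim returning a status code other than 204 and the assertion flipped to Assert.IsFalse.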

Summary

This article provides an example of how Microsoft Fakes can isolate Azure Storage from the code under test.  Microsoft Fakes empowers some developers to write more complete unit tests and, for others, reduces the credibility of the “it’s too hard” excuse for avoiding unit tests.

More Information

Isolating Code Under Test with Microsoft Fakes
Using shims to isolate code
Using stubs to isolate code

Posted in Azure, AzureStorage, C#, Microsoft Fakes, Unit Testing

Continuous Delivery with Visual Studio Team Services

In a series of posts by the Azure Development Community, some of the features available in Azure and Visual Studio Team Services are explored.  VSTS Build is an excellent mechanism for enabling automation for a development team.  It allows teams to construct build pipelines to handle a variety of scenarios and technologies.  The service can run hosted or on-premises, and supports a large number of build steps ranging from Node.js and Gulp to MSBuild and Azure deployment.  Its flexibility extends to the ability to bring in tasks from the Marketplace and write custom build steps.

Visual Studio Team Services – Creating a build pipeline (Part 1)

Posted in Azure, Visual Studio Team Services

Azure Service Bus Queue – Sessions

In scenarios where there are multiple Azure Service Bus clients and you want to route particular messages to particular consumers, sessions are an ideal feature of the Azure Service Bus Queue. This post builds upon the sample project available on MSDN: Getting Started: Messaging with Queues.

Setup

Install either the MSDN sample project or create a new project using the SDK template.
This gives us a great starting point to look at sessions.

Scenario

To illustrate sessions, three messages will be posted to the service bus. Two will have the same session id and the third will have a different session id. We will then use two clients to consume the messages. There are many scenarios where this technique could be useful. Here are three examples:

  • Priority – A low priority queue and a high priority queue could be created where the high priority items are worked before the low priority queue items.
  • Partitioning – The distribution of the queue items across multiple message brokers.
  • Aggregation – Routing messages to specific message brokers.

Code

The first step is to set the queue to require sessions when being created:

private static void CreateQueue()
{
	NamespaceManager namespaceManager = NamespaceManager.Create();
 
	Console.WriteLine("\nCreating Queue '{0}'...", QueueName);
 
	// Delete if exists
	if (namespaceManager.QueueExists(QueueName))
	{
		namespaceManager.DeleteQueue(QueueName);
	}
 
	var description = new QueueDescription(QueueName)
	{
		RequiresSession = true
	};
 
	namespaceManager.CreateQueue(description);
}

When messages are sent to the service bus, a session id must be specified on each message:

private static BrokeredMessage CreateSampleMessage(string messageId, string messageBody, string sessionId)
{
	BrokeredMessage message = new BrokeredMessage(messageBody);
	message.MessageId = messageId;
	message.SessionId = sessionId;
	return message;
}
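
To set up the scenario described earlier, the three messages could be sent along these lines (the message bodies and session names are illustrative, and queueClient is the QueueClient used elsewhere in the sample):

// Two messages share a session; the third belongs to a different session.
queueClient.Send(CreateSampleMessage("1", "First message", "Session-A"));
queueClient.Send(CreateSampleMessage("2", "Second message", "Session-A"));
queueClient.Send(CreateSampleMessage("3", "Third message", "Session-B"));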

This post illustrates two ways to consume the messages.  The first uses the queue client directly, discovering and accepting each session:

private static void ReceiveMessages()
{
	Console.WriteLine("\nReceiving message from Queue...");
	BrokeredMessage message = null;
 
	var sessions = queueClient.GetMessageSessions();
 
	foreach (var browser in sessions)
	{
		Console.WriteLine(string.Format("Session discovered: Id = {0}", browser.SessionId));
		var session = queueClient.AcceptMessageSession(browser.SessionId);                
 
		while (true)
		{
			try
			{
				message = session.Receive(TimeSpan.FromSeconds(5));
 
				if (message != null)
				{
					Console.WriteLine(string.Format("Message received: Id = {0}, Body = {1}", message.MessageId, message.GetBody<string>()));
 
					// Further custom message processing could go here...
					message.Complete();
				}
				else
				{
					//no more messages in the queue
					break;
				}
			}
			catch (MessagingException e)
			{
				if (!e.IsTransient)
				{
					Console.WriteLine(e.Message);
					throw;
				}
				else
				{
					HandleTransientErrors(e);
				}
			}
		}
	}
	queueClient.Close();
}

The second uses an implementation of IMessageSessionHandler:

public class MyMessageSessionHandler : IMessageSessionHandler
{
	private string WhoAmI = Guid.NewGuid().ToString().Substring(0, 4);
 
	public void OnCloseSession(MessageSession session)
	{
		Console.WriteLine(string.Format("MyMessageSessionHandler {1} close session Id = {0}", session.SessionId, WhoAmI));
	}
 
	public void OnMessage(MessageSession session, BrokeredMessage message)
	{
		Console.WriteLine(string.Format("MyMessageSessionHandler {3} received messaged on session Id = {0}, Id = {1}, Body = {2}", session.SessionId, message.MessageId, message.GetBody<string>(), WhoAmI));
 
		message.Complete();
	}
 
	public void OnSessionLost(Exception exception)
	{
		Console.WriteLine(string.Format("MyMessageSessionHandler {1} OnSessionLost {0}", exception.Message, WhoAmI));
	}
}

To register the message handler, register it using the queue client:

queueClient.RegisterSessionHandler(typeof(MyMessageSessionHandler), new SessionHandlerOptions { AutoComplete = false });
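
Note that AutoComplete is set to false because OnMessage completes each message explicitly; if it were left at true, the client would complete each message on the handler’s behalf once the callback returns.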

The completed project has been uploaded onto MSDN here.

Posted in Azure, C#, Service Bus

Publish-AzureServiceProject: Failed to generate package

Recently, when creating a new service project in Azure, I ran into this disturbing error when publishing from PowerShell:

Publish-AzureServiceProject : Failed to generate package. Error: Microsoft(R) Azure(TM) Packaging Tool version 2.5.0.0 for Microsoft(R) .NET Framework 4.0
Copyright © Microsoft Corporation. All rights reserved.
F:\putti\AzurePHPPoc\ServiceDefinition.csdef: Warning  CloudServices040 : The ‘schemaVersion’ attribute is unspecified. Please set the attribute to avoid this warning.
cspack.exe: Warning   : CloudServices68 : No TargetFrameworkVersion specified for role AzurePHPPocWeb. Using .NET framework v4.0 for packaging.
cspack.exe: Error   : Could not load file or assembly ‘Microsoft.WindowsAzure.Packaging, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35’ or one of its dependencies. The system cannot find the file specified.

It turns out this is an issue with the Windows Azure authoring tools.


The solution is to remove the installations (if more than one version is installed) and then reinstall the Azure SDK for .NET.


Note: In my case I had to install both the 2.5.1 and the 2.6 versions of the SDK.

Posted in Uncategorized

Azure Deployment Failure

If you receive a deployment failure specifying the certificate was not found, here are some steps to hopefully resolve this.


I stumbled upon this issue for several reasons, but all related to an unfamiliarity with the tools.  My situation has multiple cloud services in multiple environments.  The issue above was caused when the remote access certificate was not located on the cloud service.  This is specified in the service configuration properties.


The certificate selected must exist on the cloud service.  You can verify this by looking in the management portal:


If it is missing, the certificate can be uploaded to the cloud service.

Also, it is probably best to fix this via the publishing tools, as updating this value directly in the properties causes the publishing wizard to get upset.

You can set the certificate here also, by selecting the more options.

Posted in Azure, Deployment

Managing Multiple Azure Environments with Visual Studio Online Builds

Carrying on from the Multiple Azure Cloud Environments post, I now want to manage the progression of my solution through the development, staging, and production environments.

Staging in this context indicates an environment and not the staging deployment of a cloud service.  In hindsight, I probably should have named it Test to avoid confusion, but the word Test could have been perceived as temporary.  UAT is another popular term for the environment before production.  Please post a comment with the terms that your organisation uses.

My preferred approach is to use Visual Studio Online to manage the build process due to its tight integration with Visual Studio and Azure.  The process works as follows:


The development team develops and tests locally before checking in their changes.  When a check-in occurs, the Visual Studio Online server gets the latest changes, builds, and deploys to the Development cloud service.  Once the changes have been reviewed, another build and deployment is made to the Staging and Production environments.

This post will illustrate this approach by using the SpikeCloud solution created in the earlier post.

Add to Visual Studio Online

The first step is to add the solution to Visual Studio Online:


You will then be asked to authorise the connection:


This step links your cloud service to Visual Studio Online.

Add solution to source control

Now that the link is complete, the solution needs to be added to Visual Studio Online.  From within your solution, this can be done by adding the solution to source control:


You will be prompted for the type of source control system:


Once the solution has been added, Visual Studio will indicate this:


If we were only supporting a single environment, we could check-in and the solution would then be deployed to Azure using continuous integration.  Because of the multiple environments, we have a bit more configuration to do.

Go ahead and check-in the changes now as the build is disabled by default.

Supporting Multiple Environments

In the Team Explorer, select Builds.  If you do not see the Team Explorer, you can open it under the View menu.


Edit the Build definition.


The first page of properties allows us to name the definition and set the queue processing status.  Go ahead and enable the build.


For the development environment, leave the Trigger set to continuous as we want to run a build each time someone checks in:


For the purpose of this post, we will leave the Source Settings, Build Defaults and Retention Policy as the defaults.  In the Process tab we need to set some information about our solution.  First of all, the solution to build needs to be specified by clicking the ellipsis in the Projects field.


The solution file needs to be added by first clicking the add button:


And then navigating to the solution file:


In order to specify the Azure configuration we require, we need to indicate this as an MSBuild parameter.  This is pretty straightforward, as the target profile matches the service configuration we created in the previous post:
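
For example, with a service configuration named ServiceConfiguration.Staging.cscfg the MSBuild argument would be along the lines of /p:TargetProfile=Staging; the profile name here is illustrative and needs to match the configuration created in the previous post.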


Our build settings are now complete, with the solution to build and the MSBuild arguments set.


In order to test the changes, check in a change or manually start a build by selecting Queue New Build.


Conclusion

By combining source control and build automation, linking your Azure cloud services to Visual Studio Online provides a number of benefits.  Besides providing a robust and feature-rich collaboration tool, managed builds provide a team with a simple and efficient mechanism for deploying to Azure.  In my experience, deployment times have been reduced by up to a factor of five by deploying from VSO instead of local workstations.

Posted in Azure, C#, Deployment