Managing Multiple Azure Environments with Visual Studio Online Builds

Carrying on from the Multiple Azure Cloud Environments post, I now want to manage the progression of my solution through the development, staging, and production environments.

Staging in this context indicates an environment and not the staging deployment of a cloud service.  In hindsight, I probably should have named it Test to avoid confusion, but the word Test could have been perceived as temporary.  UAT is another popular term for the environment before production.  Please post a comment with the terms that your organisation uses.

My preferred approach is to use Visual Studio Online to manage the build process due to its tight integration with Visual Studio and Azure.  The image below illustrates this process:

image  

The development team develops and tests locally before checking in their changes.  When a check-in occurs, Visual Studio Online gets the latest changes, builds the solution, and deploys it to the Development cloud service.  Once the changes have been reviewed, further builds are deployed to the Staging and Production environments.

This post will illustrate this approach by using the SpikeCloud solution created in the earlier post.

Add to Visual Studio Online

The first step is to add the solution to Visual Studio Online:

image

You will then be asked to authorise the connection:

image

This step links your cloud service to Visual Studio Online.

Add solution to source control

Now that the link is complete, the solution needs to be added to Visual Studio Online.  From within your solution, this can be done by adding the solution to source control:

image

You will be prompted for the type of source control system:

image

Once the solution has been added, Visual Studio will indicate this:

image

If we were only supporting a single environment, we could check in and the solution would then be deployed to Azure using continuous integration.  Because we have multiple environments, we have a bit more configuration to do.

Go ahead and check in the changes now, as the build is disabled by default.

Supporting Multiple Environments

In the Team Explorer, select Builds.  If you do not see the Team Explorer, you can open it under the View menu.

image

Edit the Build definition.

image

The first page of properties allows us to name the build definition and set the queue processing status.  Go ahead and enable the build:

image

For the development environment, leave the Trigger set to Continuous Integration, as we want to run a build each time someone checks in:

image

For the purpose of this post, we will leave the Source Settings, Build Defaults and Retention Policy as the defaults.  In the Process tab we need to set some information about our solution.  First of all, the solution to build needs to be specified by clicking the ellipsis in the Projects field:

image

The solution file needs to be added by first clicking the add button:

image

And then navigating to the solution file:

image

In order to specify the Azure configuration we require, we need to indicate this as an MSBuild parameter.  This is pretty straightforward, as the target profile matches the service configuration we created in the previous post:
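As a sketch, the service configuration is selected by passing the TargetProfile property in the MSBuild Arguments field of the build definition; for the development build this would look like the following (the profile name must match the service configuration created in the previous post):

```
/p:TargetProfile=Development
```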

image

The following shows the completed build settings, with the fields we were required to set indicated:

image

In order to test the changes, check-in a change or manually start a build by selecting Queue New Build:

image

Conclusion

By combining source control and build automation, linking your Azure cloud services to Visual Studio Online provides a number of benefits.  Besides providing a robust and feature-rich collaboration tool, managed builds provide a team with a simple and efficient mechanism for deploying to Azure.  In my experience, deployment time has been reduced by up to five times by deploying from VSO instead of from local workstations.

Posted in Azure, C#, Deployment | Leave a comment

Redeploy Previous Visual Studio Online Build

One feature to highlight when controlling your Azure deployments from a build server is the ability to track the current deployment and to redeploy a previous build.

This feature only works well when the entire deployment process is controlled from Visual Studio Online.  If another deployment method is used, the Deployments tab in Azure will not recognise the change and will not reflect the deployment accurately.

In the management portal, the deployments can be viewed in the Deployments tab.  The following illustrates a collection of deployments to the PRODUCTION environment.

image

Besides providing convenient access to view the log of a deployment, this is also a powerful mechanism for redeploying an existing package.  For example, if the deployment from October 29th is selected, the option to redeploy can be used.

image

Posted in Azure, Deployment | Leave a comment

Multiple Azure Cloud Environments

Most real world Azure deployments will require more than one environment.  A typical topology is represented below:

image

There are many resources available that describe this concept, so it will not be re-explained here.  Instead, this post will present how to set this up in Azure.

My scenario includes a cloud service with two roles: a web role and a worker role.  The creation of the two roles is illustrated below:

image

As the content of the roles is not significant to the post, I just created a basic MVC web app:

image

My first step in setting up publishing to multiple environments is to create the basic publishing profile.  First, select Publish on the created cloud project and select the option to create a cloud service.  This is shown below:

image

After the basic settings have been selected, the publish settings are shown below:

image

And once the cloud service has been successfully published:

image

I can then browse my deployed service:

image

And in the server explorer, I can see the newly published service running.

image

And just to illustrate the worker role is running, I will update the diagnostic settings.

image

And set the Event log’s log level to information:

image

After a couple of minutes I can see an error was written.  An interesting error, but it should not affect the demo!

image

Multiple Environment Configurations

Now that we have a basic service established it is time to set up our different environments.  In my scenario I want to create a Development and a Production environment.

image

The first step is to create two configurations: one for the development environment and one for the production environment.  On the cloud service project, the Manage Configuration option is selected.

image

Next, the Cloud configuration is renamed to Development using the Rename button:

image

And changing the name to Development.

image

Next, the Development configuration is copied and the copy renamed to Production.

image

Below shows the Development and Production configuration files:

image
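The copy and rename steps result in two service configuration files alongside the cloud project, following the standard Azure naming convention:

```
ServiceConfiguration.Development.cscfg
ServiceConfiguration.Production.cscfg
```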

To illustrate the different configuration settings, let’s update the production configuration to be different from the development configuration.  To customise the settings for a particular cloud service configuration, double-click the role in the Roles folder.

In our example, the WebRole was selected.

There are several sections that allow you to customise the settings of the cloud role.  In this example the instance count will be updated to 2 VMs in production.  First the Production configuration is selected.

image

And then the instance count is set to 2.

image
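Behind the scenes, the designer writes the instance count into the Production service configuration file.  A sketch of the resulting fragment (the serviceName and role name shown are illustrative):

```xml
<ServiceConfiguration serviceName="SpikeCloud"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole1">
    <Instances count="2" />
  </Role>
</ServiceConfiguration>
```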

Multiple Publishing Profiles

Because we now have both development and production environments to publish to, we will need multiple publishing profiles.  To create a development and a production profile, first select the Publish option from the cloud project context menu:

image

In the Target profile drop-down, select the Manage option:

image

We will create a copy and rename the profiles by adding a suffix to identify the different environments:

image

Now there are two profiles available in the Target profile drop down:

image

Using the Previous button, we will go back to the Settings page to update the production profile to use the Production service configuration:

image

Because we want separate development cloud roles, we will need to create a new cloud service for the development profile:

image

When making changes to multiple publishing profiles, do not forget to click save:

image

Below shows the final settings for production:

image

Below shows the final settings for development:

image

After deploying both profiles the effect of setting the instance count to 2 for the production web role is evident:

image

Conclusion

The above scenario is simple and does not include the complexity of other components, in particular storage and database resources.  When dealing with multiple environments, managing connection strings may become tricky, as a combination of service configuration and application or web configuration (app.config/web.config) may be required.
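One common pattern is to keep connection strings in the service configuration so that each environment carries its own value.  A sketch for the development configuration, with an illustrative setting name:

```xml
<!-- ServiceConfiguration.Development.cscfg -->
<ConfigurationSettings>
  <Setting name="StorageConnectionString" value="UseDevelopmentStorage=true" />
</ConfigurationSettings>
```

The value can then be read with CloudConfigurationManager.GetSetting("StorageConnectionString"), which falls back to app.config/web.config when the code is not running in a cloud service.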

Posted in Azure, Deployment | Leave a comment

EF Database Migrations–No Connection String

Sometimes things don’t work the way you would expect them to or how they worked in the past.  Entity Framework is one of those things that frustrates me when it goes wrong…

I had created a simple database migration in a class library and could not figure out why I kept getting a

System.InvalidOperationException: No connection string named 'OperationsDatabase' could be found in the application config file.

I had the connection string defined in the app.config and built the library, but no luck.  I had even selected the correct default project in the Package Manager Console.  The answer was simple enough in the end, after stumbling upon this post:

Update-Database -StartUpProjectName "myclasslibrary"
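Alternatively, the connection details can be supplied to Update-Database directly, bypassing the config file lookup altogether (the connection string below is purely illustrative):

```
Update-Database -ConnectionString "Data Source=(localdb)\v11.0;Initial Catalog=Operations;Integrated Security=True" -ConnectionProviderName "System.Data.SqlClient"
```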

Posted in C#, Entity Framework | Leave a comment

DataTables.net–sorting on date column

The DataTables.net plug-in for jQuery is a powerful JavaScript library for managing HTML tables.

The following is a note on how to solve a common issue: sorting on the value of a column when the displayed value is different.  An example of this is when a date is displayed as a friendly string like Month Year:

image

The default sorting behaviour would end up with less than desirable results:

image

There is more than one way to solve this, but a simple solution is to include an additional hidden column containing the raw date value.  Then, in the column definitions, the column to be sorted (in this case the first column, index 0) can reference the hidden column when sorting:

"columnDefs": [
    {
        "targets": [0],
        "dataSort": 4
    },
    {
        "targets": [4],
        "bVisible": false
    }
],
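As an aside, later releases of DataTables (1.10.3 onwards) support an HTML5 data-order attribute on the cell, which avoids the extra hidden column entirely; the cell content is displayed while the attribute value is used for sorting (the date shown is illustrative):

```html
<td data-order="2014-01-31">January 2014</td>
```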

The end result is correct sorting on the hidden column:

image

Posted in DataTables.net, HTML, JavaScript, JQuery | Leave a comment

npm install fails with CERT_UNTRUSTED

I have not investigated why, but during the install of the Azure Node.js package I received a CERT_UNTRUSTED error, as indicated below:

PS C:\> npm install azure
npm http GET https://registry.npmjs.org/azure
npm http GET https://registry.npmjs.org/azure
npm http GET https://registry.npmjs.org/azure
npm ERR! Error: SSL Error: CERT_UNTRUSTED
npm ERR! at ClientRequest.<anonymous> (C:\Program Files (x86)\nodejs\node_modules\npm\node_modules\request\main.js:4
40:26)
npm ERR! at ClientRequest.g (events.js:156:14)
npm ERR! at ClientRequest.emit (events.js:67:17)
npm ERR! at HTTPParser.parserOnIncomingClient [as onIncoming] (http.js:1256:7)
npm ERR! at HTTPParser.parserOnHeadersComplete [as onHeadersComplete] (http.js:91:29)
npm ERR! at CleartextStream.socketOnData [as ondata] (http.js:1288:20)
npm ERR! at CleartextStream._push (tls.js:375:27)
npm ERR! at SecurePair.cycle (tls.js:734:20)
npm ERR! at EncryptedStream.write (tls.js:130:13)
npm ERR! at Socket.ondata (stream.js:38:26)
npm ERR! [Error: SSL Error: CERT_UNTRUSTED]
npm ERR! You may report this log at:
npm ERR! <http://github.com/isaacs/npm/issues>
npm ERR! or email it to:
npm ERR! <npm-@googlegroups.com>
npm ERR! System Windows_NT 6.2.9200
npm ERR! command “C:\\Program Files (x86)\\nodejs\\\\node.exe” “C:\\Program Files (x86)\\nodejs\\node_modules\\npm\\bin\
\npm-cli.js” “install” “azure”
npm ERR! cwd C:\
npm ERR! node -v v0.6.20
npm ERR! npm -v 1.1.37
npm ERR! message SSL Error: CERT_UNTRUSTED
npm ERR!
npm ERR! Additional logging details can be found in:
npm ERR! C:\npm-debug.log
npm ERR! not ok code undefined
npm ERR! not ok code 1

To continue with the install, you can use the following:

npm config set strict-ssl false
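Bear in mind that this disables certificate validation for all npm operations, so it is worth switching it back on once the package has installed:

```
npm config set strict-ssl true
```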

Posted in Azure, JavaScript, Uncategorized | Leave a comment

Knockout.js defer applyBindings to avoid nodeType of null error

Knockout requires all HTML and the bound view model to be complete before applyBindings is called; otherwise you might receive a nodeType of null error.  This is usually easily avoided by making sure the sequence of scripts is correct and the applyBindings script is referenced at the end.

In more complex projects this may get messy, especially when using a framework like MVC with partial views.  I found a simple technique that worked for me: marking the script reference as deferred.  For example, if my applyBindings call is located in the following JavaScript file, then I can reference the script in the logical location in the project but add the defer attribute:

<script type="text/javascript" src="viewmodel.js" defer="defer"></script>

Posted in ASP.Net, HTML, JavaScript, Knockout.js, MVC, Uncategorized | Leave a comment

Azure Storage Client Version 3 – development storage not supported – 400 Bad Request

This post is just to save those unfortunate souls who have taken the latest Storage Client 3.0 and found that the storage emulator (i.e., local development storage) returns 400 Bad Request responses.  The storage client version 3 is simply not compatible with the emulator (only version 2 is).  The advice provided by Microsoft is to either use an older client or test against Azure storage instead of the emulator.

http://blogs.msdn.com/b/windowsazurestorage/archive/2013/11/23/windows-azure-storage-known-issues-november-2013.aspx
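Until the emulator catches up, one workaround is to pin the NuGet package to the 2.x line (the exact version number shown is illustrative):

```
Install-Package WindowsAzure.Storage -Version 2.1.0.4
```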

This is not the first time but still a time waster:

http://social.msdn.microsoft.com/Forums/windowsazure/en-US/b1b66cc0-5143-41fb-b92e-b03d017ea3c1/400-bad-request-connecting-to-development-storage-using-azure-storage-client-ver-20?forum=windowsazuredata

 

Posted in Azure, AzureStorage, Uncategorized | Leave a comment

MVC 4 ApiController Session access

Nothing is accidental when it comes to the behavior of Microsoft technology. It was a conscious decision not to provide access to session state from the ApiController in the same manner as the Controller class. I assume this was an attempt to appear more RESTful in the eyes of the development community by being stateless. Yes, the session is not easily available, but a quick look at an ApiController request shows the session cookie being transferred:
Capture

There are many solutions to extending the ApiController to have access to session state and this post illustrates one for MVC version 4.

Starting with the basic template, the WebApiConfig class located in the App_Start folder can easily be changed to extend the routing handler to have access to session state.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web.Http;
using System.Web.Routing;
 
namespace MySample.Web
{
    public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            RouteTable.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional }
            ).RouteHandler = new SessionRouteHandler();            
        }
 
        public class SessionRouteHandler : System.Web.Routing.IRouteHandler
        {
            System.Web.IHttpHandler System.Web.Routing.IRouteHandler.GetHttpHandler(System.Web.Routing.RequestContext requestContext)
            {
                return new SessionControllerHandler(requestContext.RouteData);
            }
        }
 
        public class SessionControllerHandler : System.Web.Http.WebHost.HttpControllerHandler, System.Web.SessionState.IRequiresSessionState
        {
            public SessionControllerHandler(System.Web.Routing.RouteData routeData)
                : base(routeData)
            { }
        }
    }
}

The session can then be referenced in the ApiController using HttpContext.Current.Session.
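As a minimal sketch of what this enables (the controller and session key names here are hypothetical), an ApiController can now read and write session state:

```csharp
public class VisitsController : ApiController
{
    // GET api/visits
    public int Get()
    {
        // HttpContext.Current.Session is populated because the request was
        // routed through SessionControllerHandler, which implements
        // IRequiresSessionState.
        var session = System.Web.HttpContext.Current.Session;
        var visits = ((int?)session["visits"] ?? 0) + 1;
        session["visits"] = visits;
        return visits;
    }
}
```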

So why not plumb it in by default?  Yes, session state should be used sparingly in order to maximize scalability, but one of the benefits of using MVC is its rich framework.  Good design is using the available resources and technology as effectively as possible.

Posted in ASP.Net, C#, MVC | 4 Comments

MVC–Displaying Content from Ajax calls

This article illustrates two approaches to retrieving content from a service using an Ajax query in the MVC framework.  The first calls a method on a controller to render a partial view.  The second retrieves HTML markup from a Web API controller.

Real World Scenario

For a project I worked on recently, a majority of the content needed to be branded depending on the customer who was currently logged in.  The solution was to retrieve branded content, for example terms and conditions or contact us pages, from Azure storage.  For most of the requirements I used a Web API controller to retrieve the content, and I used a partial view when the HTML contained content updated by a model.

Sample Project

This sample starts with the ASP.NET MVC 4 Web Application template (Internet Application):

image

The template provides a simple website having two links: About and Contact.

image

The plan is to retrieve the content of the About page via a Web API controller and the Contact page via a partial view.

Ajax Web API controller

The project currently does not contain any ApiController classes, so we will add one to the project.  To make this sample a little more realistic, the purpose of the controller is to pull static HTML markup from a repository.  The first step is to create a Resources Web API controller:

image

By convention, I create these in an Api folder, but the folder name is not important.

After removing the sample methods provided, I created the following method:

[System.Web.Mvc.OutputCache(Duration = int.MaxValue, VaryByParam = "id")]
public HttpResponseMessage Get(string id)
{
    var resource = GetResourceById(id);
 
    HttpResponseMessage result = new HttpResponseMessage(HttpStatusCode.OK);
    result.Content = new StringContent(resource);
    result.Content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("text/html");
    return result;
}

Because we are not sending a JSON result back, we need to return an HttpResponseMessage with the content and the content type.  The GetResourceById() method simulates a call to some form of storage.  In this example it is just coded to return an HTML string:

private string GetResourceById(string id)
{
    return "&lt;article&gt;&lt;p&gt;Use this area to provide additional information.&lt;/p&gt;&lt;p&gt;Use this area to provide additional information.&lt;/p&gt;&lt;p&gt;Use this area to provide additional information.&lt;/p&gt;&lt;/article&gt;" 
}

The next step is to alter the About view so the content can be placed in a div when the result comes back:

<div id="aboutDiv"></div>
<img id="aboutImg" src="~/Images/ajax-loader.gif" />

Using jQuery, the API controller is called and the result is displayed:

<script type="text/javascript">
    (function (spike, $) {
        $.get('/Api/Resources/about',
            function (data) {
                $('#aboutDiv').html(data);
                $('#aboutImg').hide();
            });
    }(this.spike = this.spike || {}, jQuery));
</script>

Ajax MVC Partial View

To make the Contact example a little more interesting, let’s create a new controller, partial view and model for the response.  The interesting thing to note is the HomeController will still return the main view but the contact details will be returned from the new controller.

Our new controller will be ContactController and will simply return a Contact model with the partial view:

public class ContactController : Controller
{
    public ActionResult Index()
    {
        var model = new Contact
        {
            Name = "John Smith",
            Email = "jsmith@spike.com"
        };
 
        return PartialView(model);
    }
}

The view itself is simple and uses Razor to apply the model to the markup:

@model Spike.Blog140131.Models.Contact
 
<section class="contact">
    <header>
        <h3>Email @Model.Name</h3>
    </header>
    <p>
        <span class="label">Support:</span>
        <span><a href="mailto:@Model.Email">@Model.Email</a></span>
    </p>
</section>

The JavaScript uses a similar structure and calls the Contact controller directly to get the partial view result:

<script type="text/javascript">
    (function (spike, $) {
        $.get('/Contact/Index',
            function (data) {
                $('#contactDiv').html(data);
                $('#aboutImg').hide();
            });
    }(this.spike = this.spike || {}, jQuery));
</script>

The complete project is here: Spike.Blog140131

Posted in ASP.Net, C#, HTML, JavaScript, JQuery, MVC, Uncategorized | Tagged , | Leave a comment