Posts in the 'Azure' category

The posts below all relate to the topic you selected.

A few months ago I let the community know that I'd be deprecating some features from the Azure Tools for Visual Studio Code. Some amazing new extensions have appeared in the Azure category of the Visual Studio Marketplace that make it easy and fun to party with Azure. With the great work that's getting done by folks in the Visual Studio Code Azure Tools team, I think it's safe to say that full deprecation is in order.

Why? What's the deal?

To make it as short and sweet as possible, I don't want to confuse anyone who is looking in the marketplace for the real Azure tools. This extension has been in the marketplace for a long time, and thanks to frequent updates early on, it has had a great run and its download count has grown considerably. We wholly appreciate the great feedback, contributions, and issues we've received on the extension.

I started this extension with a few goals:

  1. Learn more about the Node.js SDK.
  2. Use the Node.js SDK with the VS Code extensibility API.
  3. Have a tool for the things I have to do frequently when I'm building stuff in Azure.

The extension definitely provided those learning opportunities, and it also introduced a lot of folks to developing with Azure from within VS Code. Since those early beginnings there's been a huge set of investments in enabling Azure developer experiences within VS Code, and I want to make sure customers don't stumble onto our humble beginnings and miss out on the true awesomeness of the official Azure extensions.

When?

I'll remove the extension from the marketplace on April 27th, 2018.

What's Next Here?

Simply - nothing. The original extension's repository will remain intact so developers can learn from the code or clone and package it on their own using the VSCE tooling for packaging and publishing extensions.

Again - thanks for your use of the extension, your contributions, and your ideas. Please take my advice and try out the new, official extensions for App Service, Cosmos DB, mssql, Azure Functions, IoT - all of it! Happy coding!

 

Visual Studio Code offers some great extensions for use when you're developing your applications on Azure or using Azure resources. You can find the collection of Azure extensions in the Visual Studio Marketplace. I'm investigating ways I can use these extensions together to build apps. This post walks through how you can use Visual Studio Code to build a set of Docker images, publish those images to Azure Container Registry so they can be shared or deployed later, and use the Docker extension for Visual Studio Code to see the images once they're published.

Starting with the SmartHotel360 Microservices

Given the recent work I've been doing with the SmartHotel360 code and demonstrations, I had the back-end microservices built on my machine, so I'll use those images for this demonstration. You can learn more about SmartHotel360 by watching some of the great videos we've recorded on the Visual Studio Toolbox show with Robert Green. Our demo leader Erika introduces SmartHotel360, then Maria and I talk about the Web & Serverless areas of SmartHotel360 with .NET Core 2 and Azure Functions, and Robert talks to James about the SmartHotel360 mobile apps. We have a few more shows coming, so stay tuned to Visual Studio Toolbox.

Build the Images

The screen shot below shows the SmartHotel360 docker-compose.yml file open in Visual Studio Code. Since I have the Docker extension installed, I can see the base images I've got from other builds on my machine. Note that none of the actual SmartHotel360 microservice images have been built yet, so none of them are visible.

No images built

The first step is to run docker-compose build to build all of the images. This will take a few minutes to build the numerous images that comprise all of the SmartHotel360 back-end microservices.

docker-compose build

Once the images are built, they'll show up in the Docker Visual Studio Code extension's tree view. I especially love that this is automatic, and I don't even have to manually refresh the list.

Images visible in VS Code

Composing the Databases

The docker-compose.yml file indicates we'll be creating a PostgreSQL database and a Microsoft SQL Server database. You can learn more about running SQL Server in containers from the Connect(); 2017 session, Use SQL Server 2017 in Docker containers for your CI/CD process.

I prefer to make sure my data tier services are up first, so I'll run docker-compose up sql-data reviews-data tasks-data in the Visual Studio Code terminal.

Data services building

Once these containers are up and running, I see them light up in the Visual Studio Code Docker extension's tree view.

Data services contained and running

Now, I'll run a docker-compose up to fire up the remaining services. Each of these provides a specific unit of functionality to the SmartHotel360 public web app or to the individual mobile apps. If you peruse the code, you'll see Java, Node.js, and C# dispersed throughout the source. This demonstrates how a team of developers can use a variety of technologies and frameworks to isolate their functionality while sharing in the construction of a community of dependent microservices.
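Taken together, the commands I ran in the integrated terminal look roughly like the sequence below; the service names come straight from the SmartHotel360 docker-compose.yml, so yours may differ.

# build all of the SmartHotel360 back-end images defined in docker-compose.yml
docker-compose build

# bring up just the data-tier services first
docker-compose up sql-data reviews-data tasks-data

# then bring up the remaining services
docker-compose up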

Building the APIs

Now that the containers are up I can see all of them in the Docker tools for Visual Studio Code. The handy context tools allow for easy management of the images and containers, and the integrated command line gives me a place to enter those frequently-used Docker commands.

APIs and Data services are ready

Deploying to Azure Container Registry (ACR)

I want to make use of Azure Container Registry to store the SmartHotel360 private images, so I can share or deploy them later. I'll use the Docker extension for Visual Studio Code to see the images once they're published.

Below is a screen shot of the SmartHotel360 private container registry, provided as an ACR instance.

SmartHotel360 private registry

Note in particular that the Admin user property is disabled. This setting, and the implications of using it (or not using it), is documented well on docs.microsoft.com. The short story is that with the Admin user disabled, I can't really access my registry from outside of my cloud resources.

Admin account disabled

As long as the Admin user is disabled, I won't be able to see my ACR instances and the repositories they contain.

No registries found

So when I first get started, I'll want to enable the Admin user (you can always turn this off later, and passwords can be regenerated whenever you need).

Then I'll use the username and password to publish images from the command line. As I do this, I'll see my container images lighting up in the Azure Registry pane in Visual Studio Code's Docker extension.

Enabling the Admin user

Now my Azure Container Registry (ACR) instances light up, and I can begin using the Docker command line interface from within Visual Studio Code to push images into Azure.

Registry visible

Now I'll enter a few commands into the terminal that will log me into my Azure Container Registry instance and allow me to publish a Docker image into ACR. There, it'll be ready for deployment to App Service, Azure Container Service, or Azure Kubernetes Service.

Pushing images to ACR
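For reference, the push sequence amounts to something like the commands below. The registry, image, and tag names here are placeholders for illustration, and the username and password come from the registry's Admin user settings shown earlier.

# log in to the ACR instance using the Admin user credentials
docker login myregistry.azurecr.io -u <username> -p <password>

# tag a locally built image with the registry's login server name
docker tag smarthotel360-api myregistry.azurecr.io/smarthotel360-api:latest

# push the tagged image up to ACR
docker push myregistry.azurecr.io/smarthotel360-api:latest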

Once the image has been published, you can see it as a repository in the ACR tree view node in the Docker panel in Visual Studio Code.

See the repository in VS Code

Other Ideas for using Visual Studio Code & Azure?

This post shows just one simple thing you can do with Visual Studio Code and Azure together. If you have other ideas you'd like to see covered, drop me a comment below or send me a message on Twitter.

 

Visual Studio Code offers some great extensions for use when you're developing your applications on Azure or using Azure resources. You can find the collection of Azure extensions in the Visual Studio Marketplace. I'm investigating ways I can use these extensions together to build apps. This post demonstrates how, when you're developing an Azure Function using Visual Studio Code's Azure Functions extension, you can also use the Azure Storage extension to make configuration a snap.

The Azure Functions extension makes it really easy to create a new Functions project (red arrow) in your workspace's active directory. It also provides a button in the Functions Explorer pane that makes it easy to create new Function files in the project (green arrow).

Function toolbar buttons

In my case, I want to create a Function that wakes up when new blobs are dropped into a blob container. You can learn more about blob-triggered functions on docs.microsoft.com.

Create a blob trigger function
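For context, the binding the template generates lives in the function's function.json file. A JavaScript blob trigger typically looks something like the sketch below; the path and connection values here are illustrative, not taken from this project.

{
  "disabled": false,
  "bindings": [
    {
      "name": "myBlob",
      "type": "blobTrigger",
      "direction": "in",
      "path": "samples-workitems/{name}",
      "connection": "AzureWebJobsStorage"
    }
  ]
}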

During the handy Function-creation process, I'm asked if I'd like to configure my Function project with the storage account's connection string. I don't have this yet, as I'm still experimenting, so I click the close button.

Skip this

The new Function code opens up in Visual Studio Code, and I'm ready to start coding. At this point I'm curious what'll happen, so I go ahead and hit F5 to start debugging.

Blob Triggered Function code

Debugging immediately throws an error to let me know I should've given it a connection string.

Humorous narrator: If you're like me, you don't always let the tools do their jobs for you, accept the defaults, and pray things will be okay. As a result, I tend to do a lot of configuring (which I later replace with devops fanciness). This "I'll do it later myself" attitude results in my hastily copying connection strings and accidentally pasting them into emails later. Keep reading and I'll save you hours of repeatedly copying and improperly pasting your connection string.

The good news is that the Azure Storage tools make it easy to fix the Missing value for AzureWebJobsStorage in local.settings.json error message shown below.

Missing value for AzureWebJobsStorage in local.settings.json error

Whether I forgot to create a Storage account, didn't have one and wanted to create it later, or I'm migrating from the local Storage emulator to a live account in Azure, the tools make it easy.

By simply expanding the subscription so I can see all of my Storage Accounts, I can right-click the account I'm after and select the Copy Connection String context menu item.

Copy the connection string

The results can be pasted into my local.settings.json file in a jiffy.

Paste the connection string
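For reference, the relevant part of local.settings.json ends up looking something like the sketch below; the connection string value is a placeholder for the one copied from the Storage extension.

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "DefaultEndpointsProtocol=https;AccountName=<account-name>;AccountKey=<account-key>"
  }
}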

And with that, I've configured my Azure Function, and it runs just fine in the debugger.

Running fine now

Other Ideas for using Visual Studio Code & Azure?

This post shows just one simple thing you can do with Visual Studio Code and Azure together. If you have other ideas you'd like to see covered, drop me a comment below or send me a message on Twitter.

 

Some time has passed since the last update of the Azure Tools for Visual Studio Code, and in that time there have been some great advancements in the extensions available to Azure developers who use Visual Studio Code. Some amazing new extensions have appeared in the Azure category of the Visual Studio Marketplace that make it easy and fun to party with Azure.

V2 will remove some features

The short version is that the official extensions - Azure App Service and Azure Storage - provide much better experiences for their service areas than our (myself, Christos Matskas, and Ian Philpot) extension does. So we're removing the redundant functionality from the Azure Tools for Visual Studio Code in favor of the official extensions' features. There will be a notice of this change when the extension is updated.

The official extensions are so much easier to use, offer richer experiences and more discoverable features, and the authentication experience is great. Plus, the official extensions share certain components yet can be revised independently. Smaller pieces, easier autonomy for the individual feature owners. Yay extensibility!

Future Plans

The great folks in the Azure SDK and Visual Studio Code teams have worked together to create not only a great set of extensions for Azure developers, but they've also given extension authors a common dependency to get started, the Azure Account extension. There's guidance for developers who want to write Azure extensions for Visual Studio Code, too.

With so many improvements in the community of extensions being developed around this great SDK, and new directions in how authors contribute Azure features, our team has some opportunities ahead for splitting the individual resource management areas into separate, smaller extensions. Obviously, there's a huge benefit to retrofitting our extension with the new common underpinnings, too. We'll keep everyone updated via the extension's README.md, which will link back to this blog as things change.

Question - to depend or not?

There are various schools of thought on adding arbitrary dependencies for feature redirection. My personal feeling is that I'd be receptive to an extension author taking a dependency on a newer extension that improves or replaces the original feature. But some find this intrusive.

What's your opinion? Leave a comment. Given the potential intrusiveness of taking a dependency on the official extensions, I'll hold off for now; but since it would be my own preference, I'd implement this next if the community prefers it.

 

A few days ago I blogged about using Functions, .NET, and Cosmos DB's Graph API together, and as I pointed out in an update to that post, working with Cosmos DB's Graph API was exciting and offered some interesting opportunities and mental exercises. The first [somewhat constructive] criticism I received on the initial post came from my friend and colleague Burke Holland, who asked "where's the Node.js love?," to which I replied "I wanted to use the VS Functions tools and I'm more comfortable with .NET and..." before Burke cut me off mid-message with clear judgment and purpose:

"Nobody puts Node in the corner."

I can't ignore a strong argument that conjures that gem of filmography and choreography, so the second post in the series will give Node.js a break from the battlefield, escorting it right back onto the dancefloor. I'll show you how to build a quick-and-dirty serverless Azure Function that uses the open-source gremlin-secure NPM package to query the same Cosmos DB Graph database from the original post and the docs.microsoft.com article Azure Cosmos DB's Graph API .NET Client. This post will give us an HTTP API we can call when we want to get a list of all the people who know one specific person.

Part 2 of the Serverless & Schemaless Series

Functions and Cosmos DB's nature together create a Serverless & Schemaless Scenario, and the opportunities this scenario provides for agile developers dealing with evolving structures and processes seem vast. This post is one in what I call the Serverless & Schemaless Series:

  1. Querying Azure Cosmos DB's Graph API using an Azure Function - Walks through creating a .NET Function that queries Cosmos DB's Graph API using the Cosmos DB Graph API .NET SDK.
  2. Querying Azure Cosmos DB using serverless Node.js (this post)
  3. TBD
  4. TBD

Creating an Azure Function with Node.js

The Azure Functions JavaScript developer guide has a wealth of information on some of the internal Function SDKs, as well as how to do things like reading the environment variables you configure in the Azure portal into your code, and more. This is definitely recommended reading for more context on some of the lower-level details we won't cover in this post.

Let's dive in and create a new Azure Function, using JavaScript as the language du jour and the HttpTrigger template as a getting-started point.

Create a new trigger

Once the new Function has been created and deployed into your Azure subscription and resource group, the Functions editing experience will open in the portal. Note a few key areas of the window:

  1. The code editor, where we can make updates to our serverless code.
  2. The Test area on the right side of the code editor. This dialog is useful for executing test commands against an Azure Function. The HttpTrigger template expects a name parameter, so the Request body widget provides a quick way of editing the request content.
  3. The View Files tab. It provides quick access to the *.js, *.json, and other source code files that comprise a Function.

The new Function in the editor

The code below is also available in my fork of the original code used in the article that got me started. Copy this code and paste it into the code editor in the Azure portal. Portions of the source code below were borrowed (okay, it was stolen and bastardized) from the Cosmos DB team's great code sample on Getting Started with Node.js and the Graph API.

var Gremlin = require("gremlin-secure");

// read the configuration from the environment settings
const config = {
    "database" : process.env["database"],
    "collection" : process.env["collection"],
    "AuthKey" : process.env["AuthKey"],
    "nodeEndpoint" : process.env["nodeEndpoint"]
};

// handle the http request
module.exports = function (context, req) {
    context.log('JavaScript HTTP trigger function processed a request.');

    if (req.query.name || (req.body && req.body.name)) {

        // create a new client
        const client = Gremlin.createClient(443, 
            config.nodeEndpoint,
            {
                "session":false,
                "ssl":true,
                "user":"/dbs/" + config.database + "/colls/" + config.collection,
                "password": config.AuthKey
            }
        );

        // execute the query
        client.execute("g.V().outE('knows').where(inV().has('id','" + (req.query.name || req.body.name) + "'))", { }, (err, results) => {
            if (err) return console.error(err);
            console.log(results);

            // return the results
            context.res = {
                // status: 200, /* Defaults to 200 */
                body: results
            };

            context.done();
        });
    }
    else {
        context.res = {
            status: 400,
            body: "Please pass a name on the query string or in the request body"
        };

        context.done();
    }
};

Once you paste in the code, the editor should look more like this.

Alt

Note in particular these lines from the source code for the Node.js Function. These lines of code read from the environment variables for the Function, which can be set using the Azure portal.

// read the configuration from the environment settings
const config = {
    "database" : process.env["database"],
    "collection" : process.env["collection"],
    "AuthKey" : process.env["AuthKey"],
    "nodeEndpoint" : process.env["nodeEndpoint"]
};

You can avoid putting the environment variables into your source code and your source control repository. To set up these variables you can go to the Application Settings blade of your Function (or any other App Service). You can get to the Application Settings blade from the Platform features blade in your Function app.

Getting to Application Settings

Each of the Application Settings are available to the Node.js code as environment variables, accessible using the process.env[name] coding pattern. Note the AuthKey setting - it is the same key that's being used by the .NET code from the first post in this series. Since the .NET Function and the Node.js Function would be running in the same App Service instance, both can share these Application settings.

That said, the Gremlin NPM package for Node.js expects a slightly different URL structure than the .NET client, so the nodeEndpoint Application Setting provides the Node.js Function with the URL it'll reach out to when communicating with the Graph API.

Configuring environment variables in the portal
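For reference, the Application Settings I end up creating look something like the list below. The database and collection names match the ones the .NET console app created in the first post; the other values are placeholders for your own Cosmos DB account.

database      graphdb
collection    graphcollz
AuthKey       <your-cosmos-db-auth-key>
nodeEndpoint  <your-cosmos-db-gremlin-endpoint-hostname>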

Installing Dependencies via NPM

Since the Node.js code for the Azure Function will make use of the gremlin-secure NPM package to issue queries over to the Cosmos DB Graph, I'll need to install that and any other dependent NPM packages into my Function before it'll run properly.

If I were using a Git deploy model here, the Kudu deployment engine would be smart enough to install NPM packages for me. But I'm not using Git deployment in this particular App Service; I'm building it using the web-based Azure portal code editor and testing tools.

Luckily, Kudu and App Service make it just as easy to install NPM packages using the portal as it is using Git deploy. As with any proper Node.js app, I'll need to add a package.json file to my app so that I can tell NPM which dependent packages the code will need. So I'll create a new file in my Function's file store named package.json.

Alt

Next, add the following code to the package.json file you just added to the Function. The code is also in the repository containing this blog's sample code.

{
  "name": "node-gremlin",
  "version": "0.0.0",
  "description": "",
  "homepage": "",
  "author": {
    "name": "username",
    "email": "email@address.com",
    "url": "yoursite.com"
  },
  "main": "lib/index.js",
  "keywords": [
    "cosmos db",
    "gremlin",
    "node",
    "azure"
  ],
  "license": "MIT",
  "dependencies": {
    "gremlin-secure": "^1.0.2"
  }
}

The screen shot below shows what the package.json looks like once I added it to my own function. Note that I've customized my author metadata and added keywords to the file's content; you may want to customize these to reflect your own... um... meta?

Alt

On the Platform Features blade there's an icon in the Development Tools area called Console. Clicking this icon will bring up an interactive console window, right in the browser. By executing NPM commands in this console, you can manually install all the dependencies your app needs.

Alt

The screen shot below shows how I first need to cd into the correct folder to find the package.json I just created. Once I find that folder and move into it, executing an npm install will bring down all the NPM dependencies needed by the Node.js code.

Alt
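The console session boils down to a couple of commands like the ones below; the exact folder path depends on your Function app and function name, so treat this as a sketch.

cd site\wwwroot\<your-function-folder>
npm install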

When I flip back to the View files blade in the portal, I can see that the node_modules folder has been created. The dependencies have been installed, so now I can safely execute the Function and test it out using the portal's development tools.

Alt

Now that all the dependencies have been installed, I can test the Function directly in the Azure portal.

Testing the Function

The Test segment of the Azure portal is great for making sure Functions operate as expected. The test tool helps with debugging and validating that things are working properly.

Using the Request body area of the portal, I can customize the JSON content that will be POSTed over HTTP to the Function endpoint.

Alt
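The request body is just a small JSON document. Assuming the graph was seeded with the sample person data from the getting-started project (which includes a vertex with the id 'thomas'), a test body might look like this:

{
  "name": "thomas"
}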

Note the JSON that comes back in the bottom-right corner of the Azure portal. This JSON was returned from the Function based on the request body passed into the server-side function.

Summary

This is the second post in a series about Azure Functions and Cosmos DB, using the Graph API to access the Cosmos DB data. We've covered .NET and Node.js Functions that wrap around the Cosmos DB database, providing a variety of HTTP operations that clients can call to consume the data in the Cosmos DB Graph.

The next post will probably highlight how a single client app could be built to reach out to these back-end Functions, and we'll take a look at how these items could be automated in a later post. If you have any inquiries, requests, or ideas, feel free to use the comment tools below to throw something out for me to include, and I'll try to add some content and additional posts to the series.

This is an exciting time to build serverless microservices that make use of a variety of back-end storage options in Azure, and I'm enjoying learning a few new tricks. Thanks for reading, and happy coding!

 

Azure Cosmos DB is an exciting new way to store data. It offers a few approaches to data storage, one of which has been mysterious to me - Graph APIs. Cosmos DB has support for dynamic graph structures and allows developers to use the rich Gremlin query syntax to access and update data. When I read the great walk-through article on docs.microsoft.com describing how to use Azure Cosmos DB's Graph API .NET Client, I was really excited about how dynamic my data could be in the Cosmos DB Graph API. I'd been looking for some demo subject matter to try out the new Azure Functions tools for Visual Studio, so I decided to put these three new ideas together to build an Azure Function that allows a serverless approach to searching a Cosmos DB via the Graph API.

Update: I've made a few updates below after doing some due diligence on a dependency-related exception I observed during local debugging. If the SDKs evolve, I'll update the post and make additional updates in this area.

Introducing the Serverless & Schemaless Series

Authoring this post and learning about Graph API was really exciting, and like all good technologists I found myself becoming a little obsessed with the Graph and the opportunities the world of Functions and Cosmos DB has to offer. Functions and Cosmos DB's nature together create a Serverless & Schemaless Scenario, and the opportunities this scenario provides for agile developers dealing with evolving structures and processes seem vast. This post is one in what I call the Serverless & Schemaless Series:

  1. Querying Azure Cosmos DB's Graph API using an Azure Function (this post)
  2. Querying Azure Cosmos DB using serverless Node.js - Walks through creating a Node.js Function that queries Cosmos DB's Graph API using an open-source Gremlin package
  3. TBD
  4. TBD

Building a Function using Visual Studio

The article I mentioned earlier, Azure Cosmos DB's Graph API .NET Client, has a companion GitHub repository containing some great getting-started code. The console application project basically sets up a Cosmos DB database with some sample person data. I forked the companion repository into my own fork here, which contains the basic Function code I'll describe below. So if you just want the code and aren't interested in my verbose discussion, you can find it here.

The first step should be obvious - we need to add a Function project to the solution.

Alt

Once the Function project is created, there are a few NuGet-related updates and installs we'll need to make. To make sure we're using the latest and greatest Functions SDK, it'd be a good idea to update the Microsoft.NET.Sdk.Functions package.

Alt

The Cosmos DB Graph API team was nice enough to give us a .NET Client SDK, so we should use it. Use the handy NuGet tools to install the Microsoft.Azure.Graphs package.

Alt

During debugging, I noticed a few runtime errors indicating that the Mono.CSharp assemblies couldn't be found. I presume this has something to do with the emulated environment, but don't quote me on that. I followed up with Donna Malayeri, one of the awesome Program Managers on the Functions team, to get some details, thinking the reference might indicate a func.exe issue or potential emulator variance. She confirmed there's no dependency on Mono.CSharp in the Functions emulator.

So then I checked in with Andrew Liu, one of the awesome Program Managers on the Cosmos DB team. He confirmed that one of the dependencies of the Cosmos DB SDK is Mono.CSharp. Come to think of it, my debugging session did throw an error mentioning this dependency during a Graph API call.

I mention all these great folks not to name-drop, but so you know how to find them if you have questions, too. They're super receptive to feedback and love making their products better, so hit them up if you have ideas.

Either way - to debug this guy locally you'll need to install the Mono.CSharp package.

Alt

Once the dependencies are all in place (see the NuGet node in the Solution Explorer below for what-it-should-look-like), we'll need to write some code. To do this, add a new Function item to the project. I've named mine Search.cs, since the point of this Function will be to provide database searching.

Alt

The Function will respond to HTTP requests, so the Http Trigger template is appropriate here. We want this Function to be "wide open," too, so we'll set the Access rights menu to be Anonymous, which lets everyone through.

Alt

Once Search.cs is added to the Function project, add these using statements to the top of the file.

using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;
using Microsoft.Azure.Documents.Linq;
using Microsoft.Azure.Graphs;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Host;
using System;
using System.Collections.Generic;
using System.Configuration;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

Once those have been added, replace the Function's class code with the code below. The code will simply search the Cosmos DB database using the Graph API for either all the people, or for the specific person identified via the name querystring parameter.

public static class Search
{
    static string endpoint = ConfigurationManager.AppSettings["Endpoint"];
    static string authKey = ConfigurationManager.AppSettings["AuthKey"];

    [FunctionName("Search")]
    public static async Task<HttpResponseMessage> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)]HttpRequestMessage req,
        TraceWriter log)
    {
        log.Info("C# HTTP trigger function processed a request.");

        // the person objects will be free-form in structure
        List<dynamic> results = new List<dynamic>();

        // open the client's connection
        using (DocumentClient client = new DocumentClient(
            new Uri(endpoint),
            authKey,
            new ConnectionPolicy
            {
                ConnectionMode = ConnectionMode.Direct,
                ConnectionProtocol = Protocol.Tcp
            }))
        {
            // get a reference to the database the console app created
            Database database = await client.CreateDatabaseIfNotExistsAsync(
                new Database
                {
                    Id = "graphdb"
                });

            // get an instance of the database's graph
            DocumentCollection graph = await client.CreateDocumentCollectionIfNotExistsAsync(
                UriFactory.CreateDatabaseUri("graphdb"),
                new DocumentCollection { Id = "graphcollz" },
                new RequestOptions { OfferThroughput = 1000 });

            // build a gremlin query based on the existence of a name parameter
            string name = req.GetQueryNameValuePairs()
                .FirstOrDefault(q => string.Compare(q.Key, "name", true) == 0)
                .Value;

            IDocumentQuery<dynamic> query = (!String.IsNullOrEmpty(name))
                ? client.CreateGremlinQuery<dynamic>(graph, string.Format("g.V('{0}')", name))
                : client.CreateGremlinQuery<dynamic>(graph, "g.V()");

            // iterate over all the results and add them to the list
            while (query.HasMoreResults)
                foreach (dynamic result in await query.ExecuteNextAsync())
                    results.Add(result);
        }

        // return the list with an OK response
        return req.CreateResponse<List<dynamic>>(HttpStatusCode.OK, results);
    }
}

The code is basically the same connection logic as in the original console application which seeded the database, with a simple query to retrieve the matching records.

Debugging the Function Locally

Now that the code is complete, the Functions local debugging tools and emulator can be used to run the code locally so we can test it out.

Before the code will run properly, it must be configured for local execution with the Cosmos DB connection information. The local.settings.json file can be used to configure the Function for local execution much in the same way the App.config file is used to configure the original console application for execution.

Alt
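A minimal local.settings.json for this scenario might look roughly like the following sketch; the Endpoint and AuthKey values are placeholders for your own Cosmos DB account.

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "Endpoint": "https://<your-cosmos-db-account>.documents.azure.com:443/",
    "AuthKey": "<your-cosmos-db-auth-key>"
  }
}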

Once the Function app has been configured locally so it knows how to find the Cosmos DB database, hitting F5 will launch the local debugging tool (which probably has the funkiest name of all time), func.exe, with the Function code hosted and ready for use.

At the end of the initial output from func.exe, I'll see that my Function is being hosted at localhost:7071. This will be helpful when testing it from a client.

Alt

To test my Function, I'll use Visual Studio Code with Huachao Mao's excellent extension, REST Client. REST Client offers local or remote HTTP request capability in a single right-click. I'll add the URL of my person search function and execute the HTTP request.

Alt
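In a REST Client .http file, the request is a one-liner. Assuming the default api route prefix and the Search function name from the code above, it looks something like this:

GET http://localhost:7071/api/Search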

I'm immediately presented with the raw HTTP response from the locally-running Function! Headers, JSON body content, everything.

Alt

By adding the name querystring parameter with a value I know to be in the database, I can filter the results the Function returns.

Alt

Once the Function is validated and seems to be working properly, the last step is publishing it to Azure App Service and configuring it to run in the cloud.

Publishing the Function to Azure

Luckily, publishing is nearly a single mouse-click. By right-clicking the project I can see the familiar Publish context menu item.

Alt

I'm ready to publish this to the cloud so it can be tested in a publicly available scenario. So I'll select the first option, Azure Function App, and enable the Create New radio button so I can create a brand-new Function App in my Azure subscription.

Alt

The publish panel opens next, allowing me to name my Function. I'll opt for creating a new Consumption-based App Service Plan since I intend to use the pay-per-use billing method for my serverless Function. In addition, I'll create a new Storage Account to use with my Function in case I ever need support for Blobs, Tables, or Queues to trigger execution of other functionality I've not yet dreamed up.

Alt

Clicking the Create button in the dialog will result in all the resources being created in my Azure subscription. Then, Visual Studio will download a publish profile (a simple XML file) that it'll use to publish the Function code the next time I want to do so.

Once the Function is published, I can flip to the Azure Portal blade for my Function. There, I'll see a link to the Function's Application settings. I'll need to go here, as this is where I'll configure the live Function for connectivity to the Cosmos DB database with my Person data.

Alt

Just as I did earlier in the console application's App.config file and in the Function app's local.settings.json file, I'll need to configure the published Function with the Endpoint and AuthKey values appropriate for my Cosmos DB database. This way, I never have to check in configuration code that contains my keys - I can configure them in the portal and be sure they're not stored in source control.

Alt

Once the Function is configured properly in my Azure subscription, I can again use the Visual Studio Code REST Client extension to query my publicly-available Function URL.

Alt

Summary

This post summarized how to write a basic Azure Function to search a super-small Cosmos DB database using the Graph API, but there's so much more opportunity here. I enjoyed this quick-and-dirty introduction to using Cosmos DB and Functions together - it really drives home the flexibility of using a serverless back end together with a schemaless data storage mechanism. These two tools are powerful when used together, enabling really rapid, fluid evolution of an API atop the underlying data structure.

I'll definitely be investigating Cosmos DB and Azure Functions together for some upcoming side project ideas on my backlog, and encourage you to take a look at it. The sample code in my fork of the repository - though kind of brute-force still - demonstrates how easy it is to get up and running with a serverless front-end atop a Graph database with worldwide distribution capability.

 

Today you can download the new version (v1.2.0) of the Azure Tools for Visual Studio Code (ATVSC). A team of contributors have collaborated, responded to issues reported on our public GitHub repository, and spent some time cleaning up some of the less-than-desirable early code in the extension to make contributions easier and more isolated. This post describes the new features in v1.2.0. In addition to doing some refactoring to ease extension contributions, we've added a ton of new features in this version.

Export Template

In 1.2.0 we've added support for exporting existing resource groups to ARM templates saved in your workspace. First, you invoke the Export command using the palette.

Export command

Then you select an existing resource group.

Select a resource group

A few seconds later, the resource group's contents are downloaded as an ARM template and stored into your current workspace's arm-templates folder.

Export command

Note: As of the 1.2.0 release time frame there are a few kinks in the particular Azure API call we're using; certain details of your resources might not be persisted exactly right. You can use the great features contained in the Azure Resource Manager Tools extension (which is bundled with this extension) to make tweaks. The API owners are working on making great improvements to this functionality, so it'll improve in future releases of the back-end API.

Azure Batch account creation

Christos Matskas, a colleague and fellow Azure speaker rich in the arts of FOSS contributions, submitted some great features like Key Vault support in 1.1.0, and he continued to contribute in 1.2.0, adding support for Azure Batch.

From within Visual Studio Code you can use the Create Azure Batch command from the palette, shown below, to create new Azure Batch accounts. Future releases may add support for scripting against your Batch account, creating Jobs, and so forth. Feel free to send the team requests for additional Batch features via our GitHub Issues page.

Create Key Vault

Telemetry Collection

This release introduces the collection of basic usage telemetry. We're using Application Insights to collect and understand how customers are using the extension. To disable the collection of telemetry data, simply set the azure.enableTelemetry configuration setting to false as shown below.

How to disable usage telemetry
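In your user or workspace settings.json, that amounts to a single entry, shown here as a minimal sketch:

{
  "azure.enableTelemetry": false
}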

Note: This addition causes no performance degradation, and no private customer information is persisted. Our telemetry code tracks the name of the call being made (like CreateAppService) and the GUID-based subscription ID being affected. We capture the subscription ID so we can understand the frequency of customer usage; the ID can't be used to reverse-identify customers. No customer-identifying data, passwords, connection strings, or resource names are persisted to our telemetry system.

Existing Features

You can learn more about the ATVSC in the initial announcement post or on the ATVSC marketplace page. Here's a bullet list of the other features the tools provide to make it easy for developers to party in the cloud.

  • Azure Key Vault Creation
  • Search the Azure QuickStart Templates GitHub repository
  • Download Azure Resource Manager (ARM) template and parameter files to workspace
  • Uses the Azure Resource Manager Tools extension to simplify the template-editing experience
  • ARM template deployment to live Azure subscription
  • Login with Microsoft Account (@hotmail.com, @live.com, etc.)
  • Login with Azure Active Directory (or "Organizational") account
  • Create App Service Web Apps
  • Create App Service Function Apps
  • Create Storage Accounts
  • Get Storage Account connection string
  • Browse to resource in portal
  • Browse to resource group in portal
  • Support for multiple Azure subscriptions
  • Supports all Azure data centers

Thanks!

Thanks for taking the time to learn about all the new features we've put into the ATVSC. If you're a Visual Studio Code user, take a moment to try the tools out. Send us a GitHub issue if you have any ideas. Your votes or comments are also welcome on the marketplace site if you're finding the tools useful.

Happy coding!

 

Today I'm excited to officially announce the 1.0.0 release of the Azure Tools for Visual Studio Code extension. This post will summarize the features available in the extension, provide you with links to demonstration videos so you can learn more, and let you know how you can install the extension and get going with it. In seconds you'll be managing your Azure resources and deploying new topologies on a Mac, PC, or Linux development workstation.

Bring Azure Development to Every Platform

The Azure Tools for Visual Studio Code (ATVSC) began as a simple enough idea: Visual Studio Code supports extensions built using, among other languages, Node.js. We have a great Node.js SDK for Azure, so it felt like a natural combination that would deliver Azure developer features to any desktop that supports Visual Studio Code. Even though developers already have great Azure management features via the Azure CLI and XPlat CLI tools for managing resources, Visual Studio offers an integrated experience tailored for the developer. With so many developers using non-Windows workstations to develop for Azure, why should Visual Studio developers be the only ones who have an integrated Azure development and management experience in their code editors?

So I started working on the product and within a few hours was installing a very early prototype on my wife's Apple laptop as she stood behind me with a concerned look on her face wondering why I was typing in a command line and enjoying it.

Minimum Viable Product and Reality

After a few days I'd made some decisions about the extension in order to come up with a tangible list of feature ideas I'd tackle for my initial release:

  • Be built using the Node.js SDK for Azure
  • Target Web, Mobile, API, and Azure Function developers
  • Provide Azure Resource Manager (ARM) template search and deployment features
  • Not persist authentication tokens between Visual Studio Code user sessions
  • Be exposed primarily via the Visual Studio Code command palette

Realistically, I didn't have a lot of time to dedicate to the development, so I recruited help, too. My good friends and colleagues Ian Philpot from Atlanta and Anze Vodovnik from Slovenia (who's now in the UK) offered some ideas and coding assistance on a few features in the extension. We also ran some of our ideas by Stuart Leeks, who naturally had some great feedback. During the 1.1.0 release, Christos Matskas got involved and contributed a great list of features like Key Vault and Batch account creation functionality.

All of the folks working on the extension did so in their spare time. We worked for a few hours here and there in the evenings and put this together skunkworks-style. The ATVSC is free-form, open-source, and unsupported aside from the growing community of folks interested in it.

It was great to see organic interest from my colleagues and from the Visual Studio Code team, namely Chris Dias and Wade Anderson, who supported us and offered some ideas. What's interesting is that the time I'd thought I'd need to get things done was far loftier than reality - the Visual Studio Code extension framework and Azure Node SDK are both pretty easy to use, if not elegant in places.

Features

We came up with a great list of features for the 1.0.0 release, which are also listed on the Azure Tools for Visual Studio Code marketplace page.

  • Browser-based login - No logging into the editor directly. Instead, we use device authentication and a browser-based authentication flow to keep the tool less complex. There's no authentication code to manually copy, either - the extension copies the authentication code to your clipboard and directs you to the device login page, where it can be easily pasted, streamlining login.

  • Multiple Azure Subscription support - login once, select any of your subscriptions to manage.

  • Multiple region support - Use the toolbar button or command palette to specify your active data center.

  • Web App Creation - the extension offers a "basic" and an "advanced" mode. Basic captures the name of your Web App and then creates a resource group, hosting plan, and app named the same. Advanced mode allows you to specify a new or existing resource group and customize the hosting plan and resource group names. Here is a demonstration video on a Mac of developing a .NET Core web app that's deployed to an Azure Web App created using the Azure Tools Visual Studio Code extension.

  • Function App Creation - the extension offers a "basic" and an "advanced" mode. Basic captures the name of your Function App and then creates a resource group, hosting plan, and app named the same. Advanced mode allows you to specify a new or existing resource group and customize the hosting plan and resource group names. Watch a video here on deploying an Azure Function built using Yeoman templates to a live Azure subscription.

  • Storage Account Features - Create storage accounts directly within Visual Studio Code and retrieve their connection strings for pasting them into configuration files or Azure Key Vault. You can watch a video here on Storage functionality provided in the extension.

  • Azure Resource Manager (ARM) template support - From within Visual Studio Code, search for and download Azure Resource Manager (ARM) templates directly from the QuickStarts repository. The ATVSC ships bundled with the Azure Resource Manager Tools, so once you download the templates they're easily editable directly within Visual Studio Code.

  • Azure Resource Manager (ARM) template Deployment - Once you've edited your template and parameters files, these can be deployed directly to Azure using the ARM template deployment features of the Azure SDK. Here is a video (also shown below) demonstrating an end-to-end experience of searching for, downloading, editing, and finally deploying a topology directly to Azure.

Installation

Thanks to Visual Studio Code's built-in extension tab, finding the extension is as easy as searching for "Azure Tools" and then installing the extension.

Installation

Once the extension installs, you'll need to restart Code. Then you can log in and get started. The login command supports Microsoft Account and Organizational Account login, so you have a pair of authentication options.

Note: If you want to log in with an @outlook.com, @live.com, or @hotmail.com account - a.k.a. a "Microsoft Account" - you'll need to set the azure.tenantId setting using the File -> Preferences -> User or Workspace Settings gesture within Visual Studio Code. The screen shot below demonstrates this.

Configuration
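A minimal User or Workspace settings.json entry for this looks like the sketch below; the GUID is a placeholder for your own AAD tenant ID.

{
  "azure.tenantId": "00000000-0000-0000-0000-000000000000"
}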

What's Next?

At this point we're sure the Azure Tools for Visual Studio Code offers a great tool for developers who are already using the Azure portal for managing their resources and the command line tools for provisioning or configuring things during the development or devops lifecycle. Our goal was to give developers for whom Visual Studio isn't an option a set of Azure development tools with provisioning and template capabilities, and this first release delivers on that goal.

The extension is available as an open-source project and can be found in the GitHub repository https://github.com/bradygaster/azure-tools-vscode. If you experience any issues or have suggestions or feature requests, use the GitHub repository's Issues page. Of course, contributions are welcome too, and all pull requests will be reviewed carefully.

Enjoy it!

Take a moment and give the extension a spin. We're interested in what you think, how to take it to the next level, and what sort of features you'd like to see next. Happy coding!

 

The Deploy to Azure Button


Update: Two awesome developers from the Azure Websites team, David Ebbo and Elliott Hamai, took the idea of Deploy to Azure, looked at the code, laughed and pointed at me, and used all kinds of new APIs in Azure to bring the idea to life in a manner that provides far more value than I'd imagined. Keep reading to learn about the prototype I built, but make sure to learn about the real thing, and try it out yourself.

Over the weekend I received an email from my friend and colleague Devin Rader. In the email he directed me to a neat feature Heroku offers, called the Heroku Button, and asked me if there was anything like it in the works for Azure. I wasn't aware of anything in the works at the time, and given that I'm one of the program managers on the Azure SDK team who builds the Azure Management Libraries, I figured this was a challenge for which I'd be well suited. So I replied to Devin to let him know I was inspired by the idea and wanted to build it myself. This blog post will introduce you to the idea of the Deploy to Azure Button. My good buddy Vittorio Bertocci has a complementary post on how the identity flow works in Deploy to Azure, as he and I consulted on the idea, so make sure to check that out too.

What is the Deploy to Azure Button?

The idea of the Deploy to Azure button is to make it dirt simple for owners of pre-packaged web applications to give their potential customers a dirt-simple way to deploy those applications to Azure. As a web application's author, I could place a button in my GitHub repository's readme file to give users a visual cue that one-click deployment is possible. Here's a screen shot of the Deploy to Azure button being used in a demonstration repository I created for an early demo of Azure Websites, the "Hello All Worlds" demo.

clip_image001

See that gorgeous blue button? If I click that button, the GitHub repository page URL will be passed as the HTTP referrer, and the associated Git repository URI can pretty easily be guessed. Here's the Deploy to Azure site with this GitHub repository as its referring URL. Note the Git repository URL is pre-populated into the form.

clip_image002

Once I provide a name for my site and select a region, the site name is verified as one that's available. If it isn't, I'm informed as much.

clip_image003

Once I find a site name I like, and that is available, clicking the "Deploy to Azure" button will submit the form. The form's data is collected and posted back to a Web API controller, which in turn bubbles up status information about the process of cloning the code, creating the site, and then deploying the site's code via SignalR. As the site's deployed, I'm provided real-time status.

clip_image004

Once the site's been deployed, a button is added to the form that I can use to pop open the new site in a new tab or browser window.

clip_image005

I've also added a Deploy to Azure button to my fork of my good buddy Mads' MiniBlog source code, which I've frequently used as a test case for the idea of enabling SaaS with the Management Libraries.

clip_image006

Video is so much more effective at conveying the idea of the Deploy to Azure button, so I've created a 3-minute video walking through it on YouTube and embedded it below.

Want to use the Deploy to Azure Button?

Feel free! The app is live and working at http://deployto.azurewebsites.net, and will accept requests today. I'd encourage you to grab the button below or use the direct link to the button in your own GitHub repositories. Just put the button below in your repository and link it over to the URL above, and the app will do the rest.

clip_image007

Below I describe how Deploy to Azure works and put forth a few caveats of the app in its current state, so keep reading to understand some of the limitations of Deploy to Azure, as well as some of the plans we have for its future.

How it Works

There are a lot of things we could build out of the Deploy to Azure idea, but the code isn't too elegant just yet. The goal was to prove how easy it'd be to enable a one-click deployment story directly from a public repository. Now that we're there, we're seeing a lot of other ideas popping up.

For the time being I'm doing some really simple Git tricks on the server side and then some even simpler tricks on the deployment side. I'll go into the identity stuff later, but the Deploy to Azure code base started with Vittorio's sample on using the new OWIN middleware with Open IDC and multi-tenancy.

The workflow of Deploy to Azure is pretty simple. I'll walk through it at a very high level in this post, then dive a little deeper into the code of the site to explain how it works. The code for the site is open-source, too, so feel free to check out the GitHub repository where the code for Deploy to Azure is stored if you'd like to see more. Feel free to submit a pull request, too, if you feel you can make it better.

  1. A user is authenticated to their Azure account during the AAD handshake, driven by the OpenId Connect OWIN middleware
  2. The OWIN middleware hands over an authentication code to ADAL, which uses it to obtain a new AAD token for accessing the Azure Management API
  3. Once a token is obtained, MAML clients can be used to communicate with the Azure management APIs
  4. The list of regions available to a user's subscription are retrieved and displayed in the form's menu
  5. When a user submits the form the data is sent to a Web API controller
  6. The Web API controller clones the Git repository down to a new folder on the server side
  7. The Web API controller creates an instance of the Web Site Management Client and a site is created
  8. The site's publish profiles are pulled
  9. The site is deployed via Web Deploy back up to Azure Websites

The diagram below demonstrates this process visually.

clip_image008

This code isn't perfect, though, and Deploy to Azure should be considered a beta release. We have some ideas on how to make it better. New APIs are being released frequently, and during my discussions of this prototype with David Ebbo I learned of some upcoming APIs that will simplify some of this functionality and add some features to the Deploy to Azure application. For now, consider Deploy to Azure a prototype of something awesome that we might push to the next level in the upcoming weeks.

Deploy to Azure is in Beta

As I mention above, Deploy to Azure has a few caveats. I'll cut to the chase real quick and break down some of its limitations. I know you're excited and you want to start using the button, but first I feel I must clarify a few of the constraints it has in this first iteration.

Authentication

Deploy to Azure uses a multi-tenant Active Directory application. This way, users can allow the application access to their own Azure subscriptions and allow it to spin up new sites on their behalf. Since Deploy to Azure uses the multi-tenancy functionality of Azure Active Directory and isn't an official Microsoft application, the login functionality is limited to Azure Active Directory users. This means you can't log in using your live.com, hotmail.com, or outlook.com accounts. Instead, you have to create an Active Directory user who is a Global Administrator of your Active Directory domain. Below I go into a little more detail on the identity aspect of Deploy to Azure and link off to a complementary post Vittorio Bertocci wrote up to describe how that portion of Deploy to Azure works.

No Solution Builds

Since the code for Deploy to Azure just clones the repository for a site and then publishes it, everything you want to deploy must be in your repository. Whereas Kudu will facilitate the solution build prior to publishing, which pulls in your NuGet packages, Deploy to Azure simply clones and publishes. This is one area where Mr. Ebbo and I are joining forces and working together to augment Deploy to Azure with more Kudu-like functionality in a later release.

Customizing Deployments

That said, it is highly probable that a Git repository might contain a site and other stuff unrelated to the site. In the case of MiniBlog, for instance, the actual web site that is MiniBlog is contained in a subfolder called "Website." Given this, if I simply re-publish the entire repository up to Azure, the site obviously won't work. For this reason, I've given users of the Deploy to Azure button a JSON file that the server-side code checks during deployments. In the screen shot below from my MiniBlog fork, you'll see two things highlighted. One is the Website folder, which contains the MiniBlog site source code.

clip_image009

See the arrow in the screen shot above? That arrow points to the file named deploytoazure.json. This file has a specific property in it that the Deploy to Azure code checks at run-time. The screen shot below shows this file in GitHub.

clip_image010

Once the Git repository has been cloned, I check for the presence of the deploytoazure.json file in the root of the cloned source. If the file exists, I open it up and check the value of the subdirectoryWithWebsite property. Then, I use the value of that property to determine which folder I'll publish up to the site. This gives developers a little more control over how the deployment works.

I'd imagine a later iteration of Deploy to Azure including other settings and flags in this file, but for now, the path to the web site code was really all I needed.
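
If you're curious what that check looks like on the server, here's a rough sketch of the logic - the Json.NET parsing is the real technique, while the class and method names are just placeholders I'm using for illustration:

    using System.IO;
    using Newtonsoft.Json.Linq;

    static class DeploymentSettings
    {
        // Returns the folder that should be handed to Web Deploy, given the folder
        // the Git repository was cloned into.
        public static string ResolvePublishRoot(string clonePath)
        {
            var settingsFile = Path.Combine(clonePath, "deploytoazure.json");
            if (!File.Exists(settingsFile))
                return clonePath;

            var settings = JObject.Parse(File.ReadAllText(settingsFile));
            var subdirectory = (string)settings["subdirectoryWithWebsite"];

            return string.IsNullOrWhiteSpace(subdirectory)
                ? clonePath
                : Path.Combine(clonePath, subdirectory);
        }
    }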

The Identity Component

One aspect of my own product I hadn't really mastered was working through some of the more complex customer scenarios in which the Azure Management Libraries would be used. Repeatedly customers asked me for server-side examples using ADAL and MAML together. The Deploy to Azure button was a perfect scenario for me to learn more about the code our customers would need to author to take advantage of these two together. I knew multi-tenancy would be crucial to Deploy to Azure: I'll host it in one subscription, but users of other (or multiple) subscriptions will want to deploy web applications into their own subscriptions, not into mine. So Deploy to Azure would have to allow for multi-tenant authorization, and I'd need to be able to get the user's AAD token in my code, since the Management Libraries' TokenCloudCredentials class needs a bearer token at construction.

I spent the weekend learning some more about Azure Active Directory. By "learning more about AAD," I really mean "emailing my colleague Vittorio Bertocci." Vittorio and I are working on a lot of things together now - the Azure Management Libraries, Java, Visual Studio, and basically everywhere else where the notion of identity is important in the conversation. Vittorio was interested in giving my little project some support. My first question - how to get the token on the server side once a user was authenticated via AAD - was answered via Vittorio's excellent sample using the new OWIN middleware with Open IDC and multi-tenancy. The code in this repository was the starting point, in fact, for Deploy to Azure. I just added the functionality once I knew all the identity bits were wired up properly and I could grab the token.

clip_image011

As Deploy to Azure evolved into a working prototype and this blog post came together, Vittorio offered to write a complementary post explaining the details of the AAD-related functionality in Deploy to Azure. His post explains the entire end-to-end identity flow in the Deploy to Azure button process really well. I encourage you to continue reading over at Vittorio's post on the topic.

Next Steps

As I pointed out above, Deploy to Azure began as an idea and evolved pretty quickly. It has been a lot of fun to build, and in so doing I've successfully created an example of how you could use the ADAL library along with the Azure Management Libraries on the server side. We're discussing more features and ideas to add to Deploy to Azure. I'll post another, more technical post that walks through the code in more detail, but this post's purpose is to introduce you to the idea of the button and to invite you to try it out. Feel free to fork the code, too, and submit a pull request or issues that you run into as you're using it.

 

I’d like to officially introduce you to the 1.0 release of the Microsoft Azure Management Libraries. The official announcement of the libraries came out a few days ago on the Microsoft Azure Service Updates blog. Update: Jeff Wilcox wrote up an excellent piece introducing the Management Libraries, in which he covers a lot of ground.

As I was busy travelling and presenting at the //build conference in San Francisco and enjoying my son Gabriel’s 6th birthday, I was a little tied up and unable to get this post out, but it gave me time to shore up a few little things, publish the code, and prepare a surprise for you that I’ll describe below in this post. Let’s just say I wanted to make it as easy as possible for you to get up and running with the 1.0 bits, since I’m so proud of all the work our teams have put into it. This week at the //build/ 2014 conference I presented a session with my buddy Joe Levy on the new automation stack we’ve added to Microsoft Azure. You can watch our //build/ session on Channel 9, which covers all of the topics from the slide image below. Joe and I talked about the Automation Stack in Microsoft Azure, from the SDK Common NuGet package up through the Microsoft Azure Management Libraries for .NET and into PowerShell, and how the new Microsoft Azure Automation Service sits atop all of it for true PowerShell-as-a-service automation that you can use for just about anything.

automation-stack

Demonstrating the Management Libraries

My part of the session was primarily focused on how developers can make use of the Management Libraries (MAML, for short) for various scenarios. I’ve created two GitHub projects containing the code for these demos, and I also have another surprise I’ll discuss later in the post. First, the demos!

Integration Testing

One scenario in which I think MAML has a lot to offer is to enable integration testing. Imagine having a Web Site that talks to a Storage Account to display data to the user in HTML. Pretty common scenario that can have all sorts of problems. Connection string incorrectness, a dead back-end, misconfiguration – you never know what could happen. Good integration tests offer more confidence that “at least the environment is right and everything is configured properly.” This scenario is represented by the code in the MAML Integration Testing code repository . Using xunit tests and MAML together, I was able to automate the entire process of:

  1. Creating a web site
  2. Publishing the web site’s code
  3. Creating a storage account
  4. Getting that storage account’s connection string
  5. Saving data to the storage account that I intend on displaying on the web site
  6. Configuring the web site’s connection string so that it can find the storage account and pull the data for display
  7. Hitting the web site and verifying it displays the correct information
  8. Deleting the storage account
  9. Deleting the web site

If this sounds like a common practice for your Microsoft Azure web apps, you might get some value from this demo, as it could streamline your entire process of integration testing. Here’s the best part – if you’re not really an Azure storage person, and your typical scenario involves a non-cloud-hosted ASP.NET web site that talks to SQL Server, you could still make use of MAML for your own integration tests. Simply use the SQL Management Client to fire up a Microsoft Azure SQL Database, insert a few records, and do basically the rest of the integration testing “stuff” but set up your page to read from the database instead of the storage account. Then, whether you’re deploying your production site to Microsoft Azure or not, you can make sure it all works using a scorched-earth testing environment.
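
To give you a feel for the shape of those tests, here’s a trimmed-down sketch. The site URL, the account name, and the TestCertificate helper are placeholders, the create/seed/configure steps in the middle are collapsed into comments, and the storage parameter names are as I recall them from the 1.x package – let IntelliSense be the final word:

    using System.Net.Http;
    using System.Threading.Tasks;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.Management.Storage;
    using Microsoft.WindowsAzure.Management.Storage.Models;
    using Xunit;

    public class SmokeTests
    {
        const string SiteUrl = "http://my-test-site.azurewebsites.net"; // placeholder
        const string StorageAccountName = "integrationtestdata";        // placeholder

        [Fact]
        public async Task SiteDisplaysDataFromStorage()
        {
            var credentials = new CertificateCloudCredentials(
                "your-subscription-id", TestCertificate.Load()); // placeholder helper

            using (var storage = new StorageManagementClient(credentials))
            {
                // Spin up a throwaway storage account for this test run.
                await storage.StorageAccounts.CreateAsync(new StorageAccountCreateParameters
                {
                    Name = StorageAccountName,
                    Label = StorageAccountName,
                    Location = "West US"
                });

                try
                {
                    // ... create the site, publish its code, seed the test data, and point
                    //     the site's connection string at the new account ...

                    // Verify the site actually renders the seeded data.
                    using (var http = new HttpClient())
                    {
                        var html = await http.GetStringAsync(SiteUrl);
                        Assert.Contains("expected record", html);
                    }
                }
                finally
                {
                    // Scorched earth: tear the account back down.
                    await storage.StorageAccounts.DeleteAsync(StorageAccountName);
                }
            }
        }
    }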

Enabling SaaS

Cloud is a great place for software-as-a-service vendors. In typical SaaS situations, a customer can hit a web site, provide some information, and voilà, their newly-customized web site is all ready. The final demonstration I did during the //build/ session was geared towards these sorts of scenarios. In my demo at //build/, I demonstrated this sort of scenario by creating an MVC application I called MiniBlogger, which generates live MiniBlog sites running in Microsoft Azure. When the user clicks the button, a Web API controller is invoked using JavaScript. The controller code makes a few calls out to the Microsoft Azure REST API using MAML. It first verifies the site name is available and if not, the user is provided a subtle visual cue that their requested site name isn’t available:

image

When the user finds a name they like that’s also not already in use, they can create the site. As the API controller iterates over each step of the process it sends messages to a SignalR Hub (yes, I can work SignalR in anywhere), and the user is provided real-time status on the process of the site being created and deployed.

image

Once the deployment is complete, the site pops up in a new browser, all ready for use. The code for this demo is also on GitHub, so fork it and party.
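
The heart of that controller boils down to something like the sketch below. ProvisioningHub, the _webSiteClient field, and the hub method name are stand-ins rather than the actual MiniBlogger code, and I’m recalling the hostname-availability operation from memory, so treat the exact member names as approximate:

    using System.Threading.Tasks;
    using System.Web.Http;
    using Microsoft.AspNet.SignalR;
    using Microsoft.WindowsAzure.Management.WebSites;

    public class ProvisioningHub : Hub { }

    public class ProvisioningController : ApiController
    {
        private readonly WebSiteManagementClient _webSiteClient; // wired up elsewhere

        [HttpPost]
        public async Task<IHttpActionResult> CreateSite(string siteName)
        {
            var hub = GlobalHost.ConnectionManager.GetHubContext<ProvisioningHub>();

            hub.Clients.All.progress("Checking availability of " + siteName + "...");
            var availability = await _webSiteClient.WebSites.IsHostnameAvailableAsync(siteName);
            if (!availability.IsAvailable)
                return BadRequest("That name is already taken.");

            hub.Clients.All.progress("Creating and deploying " + siteName + "...");
            // ... create the site, push the MiniBlog code, and set the connection strings ...

            hub.Clients.All.progress("Done!");
            return Ok();
        }
    }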

Get Your Very Own MAML Project Template (the surprise)

In this session I made use of Sayed and Mads’ work on SideWaffle and Template Builder to create a Visual Studio Extension that makes it easy to get up and running with MAML. Sayed and Mads have long thought SideWaffle would be great for coming up with canned presentations, and this was my first attempt at delivering on their goal. I asked them both tons of questions throughout the process, so first and foremost, thanks to them for SideWaffle and their patience as I fumbled through aspects of getting the hang of using it.

You can get the Microsoft Azure Management Libraries extension now in the Visual Studio Extensions Gallery. I’ve also created a little YouTube video demonstrating its usage. In five minutes, you can have a running Console Application that creates Virtual Machines in Microsoft Azure.

This Visual Studio Extension I created contains a few elements. First, it has a project template that references all of the MAML NuGet packages and the Active Directory Authentication Library NuGet package, which are dependencies for the demonstration. When you install the extension you’ll get a new project template like the one highlighted below.

SNAGHTMLae8a6c3

The project is a basic Console Application, but with all the MAML/ADAL NuGets referenced. Also contained within the extension are five item templates and six code snippets that walk you through the process of authoring code that will result in the following workflow (a rough sketch of a couple of these steps follows the list):

  1. Authenticate to Microsoft Azure using Azure Active Directory
  2. Retrieve a list of Microsoft Azure subscriptions that the authenticated user can access
  3. Find a specific subscription and associate the AAD token with that subscription
  4. Create a new Cloud Service in the subscription
  5. Create a new Storage Account in the subscription
  6. Get the list of Virtual Machine images containing a filter (this is provided in the snippets as a parameter)
  7. Create a Virtual Machine running in the newly-created Cloud Service container, using the VHD of the image selected earlier
  8. Deploy the Virtual Machine and start it up
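
For a flavor of what the snippets produce, steps 4 and 6 end up looking roughly like the sketch below. The service name and image filter are placeholders, and the shape of the image list response is from memory, so lean on the item templates and IntelliSense for the authoritative version:

    using System.Linq;
    using System.Threading.Tasks;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.Management.Compute;
    using Microsoft.WindowsAzure.Management.Compute.Models;

    static class ProvisioningSteps
    {
        // credentials is the TokenCloudCredentials produced by steps 1-3.
        public static async Task CreateServiceAndPickImage(SubscriptionCloudCredentials credentials)
        {
            using (var compute = new ComputeManagementClient(credentials))
            {
                // Step 4 - create the Cloud Service that will host the Virtual Machine.
                await compute.HostedServices.CreateAsync(new HostedServiceCreateParameters
                {
                    ServiceName = "my-vm-service",
                    Label = "my-vm-service",
                    Location = "West US"
                });

                // Step 6 - filter the image gallery down to the family you're after.
                var images = await compute.VirtualMachineOSImages.ListAsync();
                var image = images.Images.First(i => i.Label.Contains("Windows Server"));

                // Steps 7 and 8 use compute.VirtualMachines to create and start a deployment
                // from the selected image - that's what the remaining templates walk through.
            }
        }
    }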

The screen shot below is from my own instance of Visual Studio testing out the item templates. I’m on step 2 in this screen shot, about to add the class that facilitates the subscription-selection process described above.

SNAGHTMLaf38d3a

Likewise, here’s my code being edited during step 2. Note how the snippet is really doing the work, and the comments provide guidance.

image

Each step of the process is pretty well documented. I tried really hard to think of the easiest way to help the Microsoft Azure community get up and running with MAML following our 1.0 release, and this extension seemed to be the best answer I could come up with. I hope you find it helpful, and I welcome any feedback you may have on the extension and how it could be improved. Same thing for MAML – we’re all about taking feedback, so let us know what we can do to make the future better for you as you automate everything in Microsoft Azure.

Congrats to the Team

I’d definitely like to congratulate my team, and all the teams in Microsoft Azure who brought their awesome in full force this year in preparation for //build/. We had some great releases, amazing announcements, and heard great things from the community. Happy coding!

 

Since the initial release of the Azure Management Libraries (WAML) as Portable Class Libraries (PCL, or pickles as I like to call them) we’ve had tons of questions from community members like Patriek asking how Windows Phone applications can be written that make use of the cloud management functionality WAML offers. At first glance it would seem that WAML is “unsupported on Windows Phone,” but the reality is that WAML is fully supported precisely because it is a PCL; a few external factors just made it seem nearly impossible to glue our PCL framework to each of the targeted client platforms. In the case of Phone, there are two blockers:

  1. Since the X509Certificate2 class is unsupported on Windows Phone, developers can’t make use of the certificate-based method of authenticating against the Azure REST API
  2. Since there’s no ADAL implementation that targets Phone there’s no way to use ADAL’s slick functionality to get my authentication token. The assumption here is that some creative coding is going to have to be done to go deeper into the OAuth stack. Sounds nasty.

Well, that’s never stopped any motivated coder before, right? Armed with some code from Patriek, a problem statement, and a pretty informative blog post written by none other than Vittorio on how he had some fun tying Phone to REST Services using AAD, I started from ground zero to get some code working on my phone that would let me see what’s happening in my Azure subscription, across all my sites, services, and data stores. The goal was a new type of phone app that would give me complete control over everything I’ve got published to my Azure cloud subscription. This post will walk through the code I worked up to get a baseline Windows Phone mobile cloud management app running.

Creating and Configuring the Phone App

First, a little background is required here. As Vittorio’s post makes very clear, the OAuth dance the WP8 code is going to be doing is pretty interesting and challenging, but it results in a great set of functionality and an even greater starting point if this app gives you some ideas on what you could make it do next. In fact, if you’re into that whole social coding thing, the code in this post has a GitHub repository right here. The repository contains the full version of the project shown in the screen shot below.

01

The first step is to pull down the NuGet packages I’ll need to write this application. In this particular case I’m thinking of writing a Windows Phone app that I’ll use to manage my Azure Web Sites, Cloud Services, SQL Databases, Storage Accounts, and everything in between. So I’ll right-click my Phone project and select the appropriate packages in the next step.

02

I’ll go ahead and pull down the entire WAML NuGet set, so I can manage and view all of my asset types throughout my Azure subscription. By selecting the Microsoft.Azure.Management.Libraries package, all of the SDK Common and WAML platform NuGets will be pulled down for this application to use.

03

The Common SDK, shown below in the project’s references window, provides the baseline HTTP, Parsing, and Tracing functionality that all the higher-order WAML components need to function. Next to it is the newest member of the Common SDK tools, the Common Windows Phone reference, which shipped in the latest release of the Common SDK package. This new addition provides the high-level functionality for taking token credentials from within the Windows Phone client and attaching those credentials to the SDK client code so that calls made to the Azure REST API from a phone are authenticated properly.

04

If you read my last blog post on using WAAD and WAML together to authenticate WAML-based client applications, there are a few important steps in that post about working through the process of adding an application that can authenticate on behalf of your WAAD tenant. Take a look at that post here to get the values for the App Resources dialog in the screen shot below. In a sense, you’ll be setting application settings here that you can get from the Azure portal. You’ll need to create an application, then get that app’s client id, and your subscription and active directory tenant id or domain name, and place those values into the code’s resources file, as shown below.

06

The Login View

The first view shows a built-in web browser control. This web browser will be used to direct the user to the login page. From there, the browser’s behavior and events will be intercepted. The authentication data the form contains will then be used to manually authenticate the application directly to AAD.

Reminder – As Vittorio’s post pointed out, you might not want to use this exact code in your production app. This is a prototype that demonstrates putting it all together in the absence of an ADAL implementation that targets Windows Phone. Something better will come along soon. For now the OAuth dance is sort of up to me.

Note the web browser control has already been event-bound to a method named webBrowser_Navigating, so each time the browser tries to make a request this method handler will run first, and allow my code to intercept the response coming back to the browser.

05

Once the login page loads up, I’ll presume a login is in order, so I’ll go ahead and navigate to the login page, passing in a few of my own configuration parameters to make sure I’m hitting my own AAD tenant.

07

Vittorio deep-dives on the workflow of this code a lot better than I’ll try to do here. The basic idea is that the browser flow is being intercepted. Once the code realizes that AAD has authenticated me it hands back a URL beginning with the string I set for the redirectUrl property of this application in the portal. When my code sees my redirectUrl come through, I just run the GetToken() method to perform the authentication. It posts some data over the wire again, and this time, the response activity is handled by the RetrieveTokenEndpointResponse() event handler.

08

Once this handler wakes up to process the HTTP response I’ve received from the AAD authentication workflow (that we did via the OAuth dance), my access token is pulled out of the response. The token value is then persisted via one of my App’s properties, and the next view in the application is shown.

09
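
Stripped of error handling, the interception looks something like this sketch – App.RedirectUri and the GetToken method are stand-ins for the app’s own members rather than the exact code in the repository:

    using System;
    using System.Linq;
    using Microsoft.Phone.Controls;

    public partial class LoginView : PhoneApplicationPage
    {
        private void webBrowser_Navigating(object sender, NavigatingEventArgs e)
        {
            // AAD bounces the browser back to our redirect URI with ?code=... appended.
            if (!e.Uri.ToString().StartsWith(App.RedirectUri, StringComparison.OrdinalIgnoreCase))
                return;

            // Stop the browser here - the rest of the dance happens in code.
            e.Cancel = true;

            var query = e.Uri.Query.TrimStart('?')
                .Split('&')
                .Select(pair => pair.Split('='))
                .ToDictionary(p => p[0], p => Uri.UnescapeDataString(p[1]));

            if (query.ContainsKey("code"))
            {
                // GetToken posts the authorization code to the AAD token endpoint; the
                // response comes back through RetrieveTokenEndpointResponse.
                GetToken(query["code"]);
            }
        }
    }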

When the user first opens the app, the obvious first step is visible. We need to show the user the AAD login site, and allow the user to manually enter in their email address and password. In this window, I could use a Microsoft (Live/Hotmail/etc.) account or an organizational account. The latter, so long as the account has been marked as an Azure subscription co-admin, would be able to log in and make changes, too.

16

Once the user logs in they’ll be shown a list of their subscriptions, so let’s take a look at how the code in the subscriptions view is constructed.

Selecting a Subscription

The subscription selection view just shows a list of subscriptions to the user in a long scrolling list. This list is data-bound to the results of a REST API call made via the SubscriptionClient class, which goes out and gets the list of subscriptions a user can view.
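
Under the hood that call is only a few lines – roughly the following, with the caveat that the response property names are from memory:

    using System.Diagnostics;
    using System.Threading.Tasks;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.Subscriptions;

    static class SubscriptionLoader
    {
        // token is the access token captured during the OAuth dance on the login view.
        public static async Task ListSubscriptions(string token)
        {
            using (var subscriptionClient = new SubscriptionClient(new TokenCloudCredentials(token)))
            {
                var response = await subscriptionClient.Subscriptions.ListAsync();
                foreach (var subscription in response.Subscriptions)
                {
                    // These two values are what the list view ends up data-binding to.
                    Debug.WriteLine("{0} ({1})", subscription.SubscriptionName, subscription.SubscriptionId);
                }
            }
        }
    }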

Note – even though I can see my full list of subscriptions, I’ve only installed the Application into one AAD tenant in one of my subscriptions. If I click any of the other subscriptions, things won’t work so well. A subscription and a tenant are tied together. Unless your application gets special permission (it does happen from time to time, I hear) your application can only access Azure assets in the same subscription. If you wanted to work across subscriptions you’d have a little more work to do.

The XAML code below shows how the list of subscriptions will be data-bound to the view. I may eventually go into this code and add some view/hide logic that would show only those subscriptions I could impact using the application settings I’ve got running in this app. Point is, I can see all of the subscriptions I can manage and conceptually manage them with some additional coding.

10

The code below is what executes to log a user in with their stored access token and subscription id, when the TokenCloudCredentials class is used to wrap their credentials up as a single construction parameter to most of the management clients. Since I was able to get a token from the OAuth dance with AAD, and since I know which subscription ID I want to affect, I’ve got everything I need to start working against the rest of the Service Management REST API endpoints.

11

When the subscriptions page loads, it gives me the chance to select the subscription I want to manage. Note: I just need to be cautious and only click the subscription that matches the subscription where my AAD tenant is set up. In my case, I’ve got the application installed into my AAD tenant that lives in the subscription named Azure MSDN – Visual Studio Ultimate. If I were to select any of my other subscriptions I’d get an exception.

To have true parity across all subscriptions you’d need to have an application set up in each of your directories (plus some additional configuration), or you’d have to set up multi-tenancy so that other tenants could consent to your AAD application.

17

Once the subscription list is data-bound the user can select a subscription to work with. Selection of a subscription item in the list fires the following code.

12

It persists the subscription id of the subscription the user selected, and then navigates the user to the AssetListView page, which will show more details on what’s in a user’s Azure subscription.

Viewing Cloud Assets via Mobile Phone

To provide a simple method of being able to scroll through various asset types rather than emulate the Azure portal, I made use of the LongListSelector and Pivot controls. In each Pivot I display the type of Azure asset being represented by that face of the pivot control. Web Sites, slide, Storage, slide, SQL, slide, Cloud Services… and so on. The idea for the phone prototype was to show how WAML could be used to load ViewModel classes that would then be shown in a series of views to make it easy to get right to a particular asset in your Azure architecture to see details about it in a native Phone interface.

13

Since the assets list page is primarily just a logically-grouped series of lists for each of the types of assets one can own in their Azure stack, the ViewModel class’s main purpose is to expose everything that will be needed on the view. It exposes Observable Collections of each type of Azure asset the app can display.

14

Though it would be better factored as controller functionality, the code in the codebehind of the view ends up doing most of the data acquisition from the Azure REST APIs via WAML calls. In the code sample below the GetWebSites method is highlighted, but the others – GetSqlDatabases, GetStorageAccounts, and GetHostedServices – all do the same sorts of things. Each method creates instances of the appropriate management client classes, retrieves asset lists, and data-binds the lists so that the assets are all visible in the Windows Phone client.

15

The web sites pivot is shown below. I’m thinking of adding a deeper screen here, one that would show site statistics over the last N days, maybe add an appSetting and connectionString editor page, or something else. Have any ideas?

18

Here’s a list of my cloud services – 2 empty PaaS instances and 1 newly set-up VM instance.

19

Summary

Though the OAuth dance has a few drawbacks and the code is a little less elegant than when I could make use of ADAL in my previous example, it works. Once I have the token information I need from the browser, I can run with it by creating an instance of a TokenCloudCredentials, passing in the token I received from the OAuth dance, to authenticate WAML and, therefore, make calls out to the Azure Service Management API to automate my asset management. Since WAML is a PCL package, and as a result of the introduction of support for the Windows Phone platform into the SDK Common NuGet package, the community can now freely experiment with solutions combining Windows Phone and the Management Libraries.

Happy Coding!

 

Integration with Azure Active Directory was one of the main features we wanted to get into the 1.0 release of the Azure Management Libraries. Since Visual Studio and even the PowerShell Cmdlets (which use WAML, in fact) already have support for AAD, it was really important to all of us on the team that we have a method of providing authentication directly via AAD from within C# code using WAML. This makes for a consistent login experience across all three of these (and more) areas. Most importantly, it makes it easy for Azure users to manage all their subscription assets by logging in with the same username/password credentials they use in the portal, rather than going through the process of creating and uploading management certificates. This blog post will answer some of the questions and requests we’ve had from the community on how to tie AAD and WAML together, to create AAD-authenticated application code that can be used to manage your Azure account assets.

This won’t be a deep-dive into how AAD works, but more a short examination on how it and WAML can be used together.

I worked quite a bit with the AAD team, especially Vittorio Bertocci, on this post. Our teams have a regular meeting so we’re on the same page, but the guys on their team spent some additional cycles with us spelunking, coming up with features, and so on. As we talked about ideas, the AAD team gave me some great resources, like this post which walks through the process of setting up a client app so that it can be authenticated using AAD. Vittorio’s site has a series of great examples on how to go deeper with AAD. I won’t go too deep into the inner workings of AAD in this post, so check out those great resources if you want more information. Thanks to the AAD team for all your help and patience and for supporting our team!

Create a client app for managing Azure assets

The code for this sample application won’t be too complicated. I’m actually going to retrofit a small command line app I wrote the other day when asked by a peer in another team who wanted to export the list of Virtual Machine images in the gallery. The code for this itty-bitty application’s beginnings is below, which makes use of the Compute Management Library.

waad-waml-03
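
In rough form, that starting point looked something like this. The subscription ID and certificate string are placeholders pulled from a publish settings file, and the operation group name is as I remember it from the compute package:

    using System;
    using System.Security.Cryptography.X509Certificates;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.Management.Compute;

    class Program
    {
        static void Main()
        {
            // Placeholder values - in reality these come from a publish settings file.
            var subscriptionId = "your-subscription-id";
            var base64Cert = "MIIKJA...";

            var credentials = new CertificateCloudCredentials(subscriptionId,
                new X509Certificate2(Convert.FromBase64String(base64Cert)));

            using (var compute = new ComputeManagementClient(credentials))
            {
                // Dump every OS image currently in the gallery.
                var images = compute.VirtualMachineOSImages.List();
                foreach (var image in images.Images)
                {
                    Console.WriteLine("{0} ({1})", image.Label, image.Name);
                }
            }
        }
    }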

With this code already working using my Management Certificate, and with it being a relatively simple app, it seemed like a great case for demonstrating how easy it is to switch out the CertificateCloudCredentials for the TokenCloudCredentials class once the AAD infrastructure is set up and ready. Speaking of that, this would be a good time to walk through the process of using the Azure portal to set up an AAD application in my AAD directory – one my code can use, via the Management Libraries, to manage my subscription.

Setting up an AAD application using the portal

To authenticate using AAD I first need to create an application in my Azure subscription’s existing Active Directory tenant. To do this I go to my AAD tenant page in the portal and click the Applications tab at the top.

waad-waml-05

I currently have no applications authenticating against my directory so I’ll click the Add button at the bottom of the portal.

waad-waml-06

Selecting the second option in this list presents me with a huge list of existing applications with which AAD can integrate. Since I’m interested in authenticating an application I’m writing, I’ll click the first option. 

waad-waml-07

The application I’m writing is a small console application, not a web app. Hence I’ll select the second option – Native Client Application - in the Add Application dialog, shown below. If I’d wanted to put an ASP.NET Web API up in the cloud that would use AAD on the back end for authentication, I’d select the top option, Web Application and/or Web API. The API my client app will need to access is the actual Azure Service Management API, which AAD has a special provision for in the portal that will be examined in a moment.

waad-waml-08

Next, I’ll need to provide a redirect URI. Even though the application I’m writing is a native app, I still need to provide this URI to give AAD more details on the specific application that will be authenticating.

waad-waml-09

Once these settings are made, I can see the application’s client id in the portal, along with the information I provided during the application’s creation process. The MSDN code site article I mentioned above, written by the AAD team, walks through how the client application authentication workflow functions in more detail, so definitely check out that sample if you need more information on these settings. For now, what’s important to remember from this window are the client id text box and the redirect URI created below. Those two strings make up 2 of the 3 strings I’ll need to authenticate a WAML application with AAD.

waad-waml-10

The final piece of information is the AAD tenant id, which can be found by clicking the View Endpoints button in the portal when on the page for the directory I wish to use to authenticate users.

waad-waml-11

The URI provided in each of the text boxes in the resultant dialog contain a GUID. This GUID is the tenant id, so it’ll need to be copied out for use in my code.

waad-waml-12

Changing the WAML code to perform AAD authentication

Back in Visual Studio I’m ready to change the code to make use of AAD authentication. The first step is to reference the required NuGet package, the Active Directory Authentication Library (ADAL). This gives my project the ability to prompt the user with a login dialog, into which they can enter their Microsoft account username and password. It will also add all sorts of goodness from the AAD folks that you can make use of in your client applications.

waad-waml-01

In the code I’ll add a method called GetAuthorizationHeader that will take my tenant id as a parameter. I’ll presume the calling code might want to make use of the common tenant, but will give callers the ability to pass in their own tenant GUID identifying their custom Active Directory tenant. Take note that within this method I’m making use of the application’s settings: the redirect URL and the client id property from the portal. As well, I’m passing the base URI of the Azure REST API as the value for the resource parameter to the AuthenticationContext.AcquireToken method. Vittorio has a great blog post introducing ADAL and what you can do with it, so if you’re looking to dive deeper on this topic head on over to CloudIdentity.com and check it out. In a more practical implementation I should probably be setting those values up as appSetting variables, but for this demonstration code simply pasting in the values is sufficient.

waad-waml-04
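
Here’s the shape of that method – the client id and redirect URI below are placeholders for the values captured from the portal earlier:

    using System;
    using Microsoft.IdentityModel.Clients.ActiveDirectory;

    static class AzureAdHelper
    {
        public static string GetAuthorizationHeader(string tenantId = "common")
        {
            // Placeholders for the values the portal handed out when the app was registered.
            var clientId = "11111111-2222-3333-4444-555555555555";
            var redirectUri = new Uri("http://my-waml-client");

            var context = new AuthenticationContext("https://login.windows.net/" + tenantId);

            // The resource we're asking for a token to is the Service Management API itself.
            var result = context.AcquireToken(
                "https://management.core.windows.net/", clientId, redirectUri);

            return result.AccessToken;
        }
    }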

The final step in the code is to comment out the old authentication mechanism, where I was using the X509Certificate2 class to authenticate using a management certificate. In place of this code, which creates an instance of a new CertificateCloudCredentials class, I’ll make a call to the new GetAuthorizationHeader method to get my token, then use that token as a parameter to an instance of the TokenCloudCredentials class.

waad-waml-15
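
The swap itself is only a couple of lines – something along these lines, using the GetAuthorizationHeader helper sketched above and the same subscription id as before:

    // Old: certificate-based credentials
    // var credentials = new CertificateCloudCredentials(subscriptionId, managementCertificate);

    // New: token-based credentials via AAD
    var token = AzureAdHelper.GetAuthorizationHeader(tenantId);
    var credentials = new TokenCloudCredentials(subscriptionId, token);

    using (var compute = new ComputeManagementClient(credentials))
    {
        // ...same image-listing code as before...
    }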

Authorizing access to the Azure Service Management API to my client application

At this point the code almost works. However, when I run it I get an error that pretty clearly indicates what’s wrong in the exception message. Clearly, this application hasn’t been granted access to the service management API.

waad-waml-19

A recently-added (okay, personally recently-discovered) feature within the portal allows me to specify which applications I’m going to allow access to this client application. Since I’ve not yet created any APIs that also use AAD as an authentication mechanism I’ll only see the two default items in this drop-down menu (the Graph Explorer and the Service Management API). By selecting the option Azure Service Management API, I effectively grant my client application access to any of the REST API URLs available under https://management.core.windows.net. Once I save this setting, the code should run and give me back the list of Virtual Machine images from the gallery.

waad-waml-20

When I run the code this time, it works as expected. I’m prompted for my credentials, thanks to some of the functionality provided in the ADAL package.

waad-waml-16

Once I log in using my subscription’s credentials, WAML can authenticate and pull down the list of Virtual Machine images.

waad-waml-17

From here, I could use the rest of the functionality available in WAML to do whatever I need. Now that I’m able to authenticate using AAD, I won’t need to create, upload, configure, and track my management certificates. Rather, my applications can just make use of Active Directory to log in and manage my subscription’s assets.

I hope you find this to be an exciting new option available in the Azure Management Libraries. I’ve been looking forward to showing this feature off, as I think it shows two awesome areas of the client developer experience – AAD via ADAL and the REST API via WAML – being used together to create some amazing client-side magic against the Azure cloud. Thanks again to the AAD folks for lending a hand from time to time with some of the details and fine tuning!

 

I received an email from Hanselman this week, a forward of an email he received after posting his [much-appreciated and far too kind] blog post on WAML. The email was from a community member who was hitting a problem when trying to use WAML to create web sites (or manage their existing sites) from server-side code running in another Azure Web Site. I can imagine lots of user stories in which a Web Site could be used with WAML:

  • I’m a software-as-a-service application business owner. I want to give users a form that, once filled out and submitted, will take user input and create a new web site and copy my application’s code into their web site
  • My web application needs to create a storage account when it first starts up
  • My web application needs to create a SQL Database when it first starts up

Automation isn’t limited to the desktop. With WAML you can pick and choose which areas you need to offer and install the appropriate NuGets and get up and running quickly. There are a few caveats, however, mostly deliberate design decisions based on the basic ideas of cryptography and data integrity. I spent a few hours this week talking to my good friends in the Web Sites team, along with my own awesome team in Azure Developer Experience, to work through some certificate-loading problems I was seeing in Web Sites. The ability to use a management certificate is pretty important when programming against WAML (yes, AAD support is coming soon in WAML). I’ve seen a few different forums mention similar issues. Given WAML makes use of certs, and sometimes using certs on the server side in the Web Site can be a little tricky, I thought a walk-through was in order.

How Meta. A Web Site that Makes Web Sites.

I’ve created a Visual Studio 2013 solution, with an ASP.NET project in the solution, that I’ll be using for this blog post. The code for this site is on GitHub, so go grab that first. The code in the single MVC controller shows you a list of the sites you have in your subscription. It also gives you the ability to create a new site. The results of this look like the code below.

SNAGHTML2c3c0b1

Here’s a snapshot of the code I’m using in an MVC controller to talk to the Azure REST API using WAML.

There are a few areas that you’ll need to configure, but I’ve made all three of them appSettings so it should be relatively easy to do. The picture below shows all of these variables. Once you edit these and work through the certificate-related setup steps below, you’ll have your very own web site-spawning web site. You probably already have the first of these variables but if you don’t, what are you waiting for?

image

Once your Azure subscription ID is pasted in you’ll need to do a little magic with certificates. Before we get to all the crypto-magic, here’s the method the controller calls to prepare WAML for use by setting up an X509Certificate.

image

I’d been using a base 64 encoded string representation of the certificate, but that wouldn’t work on top of Web Sites. Web Sites needs a real, physical certificate file. Which makes sense – you want access to your subscription to be a difficult thing to fake, so a one-time configuration step to secure the communication is worth it. The code below then takes that credential and runs some calls to the WebSiteManagementClient object, which is a client class in the Web Sites Management Package.

image
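
Pulling those pieces together, the setup boils down to something like this sketch – the file name and appSetting keys are placeholders for whatever you configure, not the literal code from the repository:

    using System.Configuration;
    using System.Security.Cryptography.X509Certificates;
    using System.Web.Hosting;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.Management.WebSites;

    static class ManagementClientFactory
    {
        public static WebSiteManagementClient Create()
        {
            var pfxPath = HostingEnvironment.MapPath("~/App_Data/management.pfx");
            var pfxPassword = ConfigurationManager.AppSettings["pfxPassword"];

            // MachineKeySet matters here - there's no user profile loaded when this code
            // runs inside an Azure Web Site, so the default key store isn't available.
            var certificate = new X509Certificate2(pfxPath, pfxPassword,
                X509KeyStorageFlags.MachineKeySet);

            var credentials = new CertificateCloudCredentials(
                ConfigurationManager.AppSettings["subscriptionId"], certificate);

            return new WebSiteManagementClient(credentials);
        }
    }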

This next part is all about cryptography, certificates, and moving things around properly. It’s not too complicated or deep into the topics, just a few steps you should know just in case you need to do this again.

Don’t worry. If it were complicated, you wouldn’t be reading about it here.

Creating a Self-Signed Cert and Using the PFX and CER Files Properly with Web Sites

I’ll run through these steps pretty quickly, with pictures. There are many other great resources online on how to create certificates so I’m not going to go into great detail. This section has three goals:

  1. Create a self-signed certificate
  2. Create a *.CER file that I can use to upload to the Azure portal as a management certificate
  3. Use the *.PFX file I created on the way to creating my *.CER file on my web site

To create the self-signed cert, open up IIS Manager (some would prefer to do this using makecert.exe) and click the Server Certificates feature.

SNAGHTML2649c9d

Then, click the Create Self-Signed Certificate action link.

SNAGHTML267808a

You get to walk through a wizard:

SNAGHTML269d26a

Then the new certificate will appear in the list:

image

Select it and click the Export action link:

SNAGHTML26c6682

Now that you’ve got the PFX file exported, it’d be a good time to drop that into the web site. Drop the PFX file into the App_Data folder…

image

Once the .PFX is in the App_Data folder, copy its location into the Web.Config or in the portal’s configure tab.

image

Double-click the PFX file. Run through the subsequent steps needed to import the PFX into the personal certificate store. Once the wizard completes you’ll have the certificate installed, so the final step will be to export it. Open up your user certificates tile. I always find mine using the new Modern Tiles method.

image

Open up the file in the user certificate manager, and select the new certificate just created. Select the Export context menu.

image

Select the DER option. This is when you’ll output a CER file that can be used as your management certificate in the next step.

SNAGHTML28b0253

Save the output *.CER file on your desktop. With the PFX file set up in the web site and this file created, we’re almost finished.

Uploading the Management Cert to the Portal

With the CER file ready, all one needs to do to upload it is to go to the Management Portal. So long as the web site you’re running WAML in is trying to access resources in the same subscription, everything should work. Go to the management portal, select Settings from the navigation bar, and then select the Management Certificates tab. Click the Upload button to upload the *.CER file only – NOT the PFX!

image

Once the CER is uploaded it’ll appear in the list of management certificates.

image

With those configuration changes in place, I can finish the configuration by adding the password for the PFX to the Web.Config file. This part isn’t perfect, but it’s just to get you started with the difficult connecting-of-the-dots that can occur when embarking on a new feature or prototype.

image

Deploying the Site

The last step, following the configuration and certificates being set up, is to deploy the site. I can do that from right within Visual Studio using the publish web features. Here, I’m just creating a new site.

SNAGHTML2a45cae

Once the site deploys and loads up in a browser, you can see what capabilities it’ll offer – the simple creation of other Azure Web Sites.

SNAGHTML2c01155

Summary

This article is more about preparing a web site with the proper certificate setup than about the functional code itself, which lives in the repository. I’d welcome you to take a look at the repository, submit questions in the comments below, or even fork the repository and come up with a better way or add features – whatever you think of. Have fun, and happy coding!

 

The Azure SDK team has been working hard to finish up more of the Azure Management Libraries so you can use .NET code to manage your Azure resources, and today’s blog will highlight one of the newest additions to the WAML stack. Today I’ll introduce you to the Azure SQL Database Management Library. We released the SQL Management Library this week, and it’s loaded with features for managing your Azure SQL Databases. Like the rest of the management libraries, the SQL library provides most of the functionality you’d previously only been able to get through the Azure portal, so you’ll be able to write .NET code to do pretty much any level of automation against your SQL databases and servers.

Here’s a list of some of the features supported by the new SQL library:

  • Create, update, delete, and list your Azure SQL Servers
  • Create, update, delete, and list your Azure SQL Databases
  • Create, update, delete, and list your Azure SQL Servers’ Firewall Rules
  • Import and Export SQL databases to & from Azure Blob Storage

Let’s dive in to some of these features by exploring their usage in the sample code I’ve written to demonstrate all these cool new WAML features. I’ve put this code up in GitHub so you can fork it, add your own functionality and test cases, and [please, feel free] spruce up the UX if you’re so inclined. The code for this demonstration includes the demonstrations from my previous blog posts on WAML, so if you’ve been wanting to see that code, you can dive in as of this latest installment in the blog series.

The WPF app shown below sums up today’s code demonstration for using the SQL management library. Once I select my publish settings file and the subscription in which I want to work, WAML makes a REST call out to Azure to retrieve the list of database servers in my subscription. I data-bind a list view in the WPF app with the list of servers. Next to each server you’ll find buttons offering convenient access to the databases residing on it, as well as a button giving you access to the server’s firewall rules.

SNAGHTML1a48f73

Creating the SQL Management Client

As with the other demonstrations, the first step is to create an instance of the SqlManagementClient class using a subscription ID and an X509 Certificate, both of which are available in my publish settings file. The code below is similar to the manner in which the other management clients (Compute, Infrastructure, Storage, and so on) are created.

image

SQL Server Operations

The code below shows how, in two lines of code, I can get back the list of servers from my subscription. Once I’ve got the list, I set a property that’s data-bound in the WPF app equal to the list of servers the REST API returned for my subscription.

image
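
Those two lines boil down to roughly this fragment (sqlClient is the SqlManagementClient created above, and SqlServers is a stand-in for the data-bound property on my view model):

    var serverList = await sqlClient.Servers.ListAsync();
    SqlServers = serverList.Servers;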

Database Operations

Once a user clicks the “Databases” button in one of the server lines in the WPF app a subsequent call is made out to the Azure REST API to pull back the list of databases running on the selected server. 

image

You can do more than just list databases using the SQL management library – you have full control over the creation and deletion of databases, too, using the easily-discoverable syntax. Below, you’ll see how easy it is, using nothing more than the IntelliSense features of Visual Studio, to figure out how to create a new database.

image
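
As a rough example, creating a small database on one of those servers looks something like the fragment below – the parameter names are as I remember them from the SQL package, so let IntelliSense have the last word:

    await sqlClient.Databases.CreateAsync("myserver", new DatabaseCreateParameters
    {
        Name = "customers",
        Edition = "Web",                 // the smallest edition available at the time
        MaximumDatabaseSizeInGB = 1,
        CollationName = "SQL_Latin1_General_CP1_CI_AS"
    });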

Firewall Rules

You no longer need to load up a server in the portal to manage your firewall rules – you can do that using the SQL management library, too. The code below demonstrates how easy it is to retrieve the list of firewall rules for a particular SQL Server. When the user hits the “Firewall Rules” button in the WPF app, this code runs and loads up the rules in the UX.

image
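
Listing and adding rules is just as terse – something like the following fragment, with the server name and addresses as placeholders:

    var rules = await sqlClient.FirewallRules.ListAsync("myserver");
    foreach (var rule in rules.FirewallRules)
    {
        Console.WriteLine("{0}: {1} - {2}", rule.Name, rule.StartIPAddress, rule.EndIPAddress);
    }

    // Punch a hole for a single address.
    await sqlClient.FirewallRules.CreateAsync("myserver", new FirewallRuleCreateParameters
    {
        Name = "home-office",
        StartIPAddress = "203.0.113.10",
        EndIPAddress = "203.0.113.10"
    });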

In addition to the SQL library, we’ve also released the preview libraries for Media Services and Service Bus, too. We’re continuing to improve the landscape of WAML every day, and have some convenience features we’re adding into the overall WAML functionality set, too. Keep watching this blog for more announcements and tutorials on the Azure Management Libraries. As always, if you have any suggestions or ideas you’d like to see bubble up in the management libraries, feel free to post a comment below.

Happy Coding!

 

I am not very good at cryptography, even the most fundamental aspects of it. I’m humble enough to be willing to admit this (or smart enough to put it out there so folks don’t ask me questions about it, take your pick). That said, the idea of working with X509 Certificates in my code is something I obviously don’t relish doing. It’s a necessary thing, however, when working with various authentication paradigms, like the traditional method of attaching certificates to individual requests I’d like to make to the Azure API. Since the first implementation of the credential-passing logic in the Management Libraries uses X509 certificates, I needed to come up with an easy way of testing my code with management certificates. Luckily, the *.publishsettings files one can download from Azure via techniques like the PowerShell cmdlets’ Get-AzurePublishSettingsFile command provide a Base-64 encoded representation of an X509 Certificate. This post will walk through how I use the information in a publish settings file to authenticate my requests with the management libraries.

You won’t want to put your publish settings files onto a web server, store the information in these files in your code, or anything like that. This is just for demonstration purposes, and makes the assumption you’re working on your own machine, controlled by you, while you’re logged into it in a nice and secure fashion. If some nefarious villain gets ahold of your publish settings file they could theoretically authenticate as you. Point is, be careful with these files and with the strings embedded into them. Keep them somewhere safe and secure when you’re not using them.

Once you’ve downloaded a publish settings file to your computer, you can write code that will open the files up and parse them. Don’t worry, this is pretty easy, as the files are just normal XML files. So, you can use XLinq or any other XML-parsing techniques to read the values out of the file. In my example code for this and the other posts I’m writing on the topic, you’ll notice that I’ve provided a menu you can use to open up a publish settings file. A screen shot of this application running is below:

Selecting the publish settings file

When the user clicks this menu, I simply throw open a new Open File Dialog object and allow the user to select a *.publishsettings file from their machine. The screen shot below demonstrates this.

Opening the file

When the user selects the file the code does just what you’d expect – it opens the file up, parses out the XML, and builds an array of subscription models the rest of my code will use when it needs to make calls out to the API via the management libraries. The section of the code below that’s highlighted is doing something pretty simple – just building a list of a model type known as PublishSettingsSubscriptionItem to which I’ll data-bind a list of child menu items in my form.

Loading up the subscriptions
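
The parsing itself is just a little XLinq. Here’s the gist of it, with PublishSettingsSubscriptionItem standing in for the simple model type mentioned above (the property names on it here are my own, not necessarily the sample’s):

    using System.Collections.Generic;
    using System.Linq;
    using System.Xml.Linq;

    static class PublishSettingsParser
    {
        public static List<PublishSettingsSubscriptionItem> Parse(string publishSettingsFilePath)
        {
            var doc = XDocument.Load(publishSettingsFilePath);

            return doc.Descendants("Subscription")
                .Select(x => new PublishSettingsSubscriptionItem
                {
                    Name = (string)x.Attribute("Name"),
                    SubscriptionId = (string)x.Attribute("Id"),
                    // Newer publish settings files put the certificate on each Subscription
                    // element; older ones keep it on the parent PublishProfile element.
                    ManagementCertificate = (string)x.Attribute("ManagementCertificate")
                        ?? (string)x.Parent.Attribute("ManagementCertificate")
                })
                .ToList();
        }
    }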

The XAML code below (which will soon be available for your viewing pleasure) does the data-binding magic.

Data-binding the list of subscriptions

Once this code is in place and I select a publish settings file, the menu will be augmented with the list of subscriptions found in that publish settings file.

The data-bound list

Since I’ve got my XAML wired up in a nice command-like manner, selecting any of those items will fire the SelectSubscriptionCommand method in my form’s view model class. That method in turn sets the current selected subscription so that it can be used later on by management library client classes that need the information. The code below, which is the handler method for that command, does this magic.

Setting the selected subscription

Now, whenever I need to run any management library code that reaches out to the Azure REST API, I can use the properties of my selected subscription to set up the two elements of data each client needs – the subscription ID and the Base-64 encoded management certificate. The code below does exactly this to make a call out to the REST API to get the list of supported regions in the Azure fabric.

Using the selected subscription

Thanks to my good buddy and evil coding genius Mike Lorbetske for MVVM-ifying this code so the demos can be easily added to support the upcoming blog posts in this series. I was mixing MVVM techniques and nasty, old-fashioned codebehind brutality together, and Mike came in just in time and helped me make this code a little more robust using MEF and some other tricks.

 

When we built out the Azure Management Libraries, one of the things we knew we needed to do was to make it easy for developers to trace API conversations however they want or need, to whatever medium they’d like to save the trace logs. The deeper the trace information available during conversations with the Azure REST API, the easier it is to figure out what’s going on. To make tracing easier and more flexible, we added an interface to the SDK common package called ICloudTracingInterceptor. You can implement this interface however you want and drop the trace messages from the API conversations wherever you need them. This post will walk through a simple example of using the tracing interception functionality in the management libraries in a WPF application.

Building a Custom Interceptor

First, a quick inspection of the tracing interceptor interface is in order. The object browser window below shows the interface and all of the methods it offers.

Object Browser

An obvious implementation of the interceptor logic would be to dump trace messages to a Console window or to a debug log. Given that I’m building a desktop application, I’d like to see that trace output dropped into a multi-line textbox. Rather than pass the constructor of my implementation a reference to a WPF TextBox control, I’ll take a reference to an Action<string>, as I may need to be proactive in handling threading issues associated with the API calls running in an asynchronous manner. Better safe than sorry, right? Below is the beginning of the code for my custom tracing interceptor.

Custom tracing interceptor implementation
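
Fleshed out, the implementation is mostly boilerplate. Here’s a minimal version, assuming the interface shape from the 1.x common package (double-check the member list against the object browser); every member just forwards a formatted string to the Action:

    using System;
    using System.Collections.Generic;
    using System.Net.Http;
    using Microsoft.WindowsAzure;

    public class TextBoxTracingInterceptor : ICloudTracingInterceptor
    {
        private readonly Action<string> _write;

        public TextBoxTracingInterceptor(Action<string> write)
        {
            _write = write;
        }

        public void Information(string message)
        {
            _write(message);
        }

        public void Configuration(string source, string name, string value)
        {
            _write(string.Format("Configuration: {0} {1}={2}", source, name, value));
        }

        public void Enter(string invocationId, object instance, string method,
            IDictionary<string, object> parameters)
        {
            _write(string.Format("[{0}] Entering {1}", invocationId, method));
        }

        public void SendRequest(string invocationId, HttpRequestMessage request)
        {
            _write(string.Format("[{0}] {1} {2}", invocationId, request.Method, request.RequestUri));
        }

        public void ReceiveResponse(string invocationId, HttpResponseMessage response)
        {
            _write(string.Format("[{0}] {1}", invocationId, (int)response.StatusCode));
        }

        public void Error(string invocationId, Exception exception)
        {
            _write(string.Format("[{0}] ERROR: {1}", invocationId, exception.Message));
        }

        public void Exit(string invocationId, object returnValue)
        {
            _write(string.Format("[{0}] Exiting", invocationId));
        }
    }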

With the custom implementation built, I’m ready to use it in an application.

Hooking the Interceptor to the Cloud Context

Also provided within our common package is another helpful object that abstracts the very idea of the cloud’s context and the conversations occurring with the REST API. This abstraction, appropriately named CloudContext, is visible in the object browser below.

Object Browser

The Cloud Context abstraction makes it simple to dial in a custom tracing interceptor. In my WPF application code, shown below, I simply use the Configuration.Tracing.AddTracingInterceptor method to add a new instance of my interceptor to the context.

Hooking in the tracing interceptor
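
In code, the hookup is a one-liner. TraceTextBox stands in for the multi-line TextBox in the window, and the Dispatcher hop keeps the text-appending on the UI thread:

    CloudContext.Configuration.Tracing.AddTracingInterceptor(
        new TextBoxTracingInterceptor(message =>
            Dispatcher.BeginInvoke(new Action(() =>
                TraceTextBox.AppendText(message + Environment.NewLine)))));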

Now, each call I make to the Azure API via the management libraries will be dumped to the multi-line text box in my WPF form. The screen shot below is the WPF application running. In this example, I’ve made a call to the management API’s list locations method, which returns a listing of all of the regions supported by the Azure fabric.

Example code running

Since the tracing interceptor is bound to the cloud context, any calls made through the management libraries will be traced and dropped into the text box (or whatever you specify in your own implementation). We think this adds a nice, one-stop method of tracing your API conversations. Keep watching this space for more blog posts about the management libraries. My next post, which will be released this week, will cover the code in this WPF application responsible for loading up a list of subscriptions from a publish settings file, and how the information in the publish settings file can be used to hydrate an X509 Certificate at run-time (note the “Select Publish Settings File” UX elements in this example).

 

The first thing I had the opportunity to work on when I joined the Azure team was something that I’m excited to show off today. I demonstrated the early bits of the Azure Management Libraries at the TechEd Australia Developer kick-off session, and now that they’re out I’m really excited to walk you through getting started with their use. This post will sum up what the Azure Management Libraries are and why you should care to take a peek at them, and then I’ll dive into some code to show you how to get started.

What are these libraries you speak of?

With this release, a broad surface area of the Azure cloud infrastructure can be accessed and automated using the same technology that was previously available only from the Azure PowerShell Cmdlets or directly from the REST API. Today’s initial preview includes support for hosted Cloud Services, Virtual Machines, Virtual Networks, Web Sites, Storage Accounts, as well as infrastructure components such as affinity groups.

We’ve spent a lot of time designing natural .NET Framework APIs that map cleanly to their underlying REST endpoints. It was very important to expose these services using a modern .NET approach that developers will find familiar and easy to use:

  • Supports Portable Class Library (PCL), which targets apps that are built for .NET Framework 4.5, Windows Phone 8, Windows Store, and Silverlight
  • Ships as a set of focused NuGet packages with minimal dependencies to simplify versioning
  • Supports async/await-based task asynchrony (with easy synchronous overloads)
  • Has a shared infrastructure for common error handling, tracing, configuration, and HTTP pipeline manipulation
  • Is factored for easy testability and mocking
  • Is built on top of popular libraries such as HttpClient and Json.NET

These packages open up a rich surface area of Azure services, giving you the power to automate, deploy, and test cloud infrastructure with ease. These services support Azure Virtual Machines, Hosted Services, Storage, Virtual Networks, Web Sites and core data center infrastructure management.

Getting Started

As with any good SDK, it helps to know how you could get started using it by taking a look at some code. No code should ever be written to solve a problem that doesn’t exist, so let’s start with a decent, but simple, problem statement:

I have this process I run in Azure as a Worker Role. It runs great, but the process it deals with really only needs to be run a few times a week. It’d be great if I could set up a new service, deploy my code to it, then have it run. Once the process finishes it’d be even better if the service could “phone home” so it could be deleted automatically. I sometimes forget to turn it off when I’m not using it, and that can be expensive. It’d be great if I could automate the creation of what I need, then let it run, then have it self-destruct.

Until this preview release of the Azure Management Libraries (WAML for short hereafter, though this is not an official acronym, I’m just being lazy), this wasn’t very easy. There have been some great open-source contributions addressing the .NET layer for managing Azure services and their automation, but nothing comprehensive that delivers C# wrappers for nearly all of the Azure Management REST APIs. If you needed to use these APIs to generate your own “stuff” in Azure, you pretty much had to write your own HTTP/XML code to communicate with the REST API. Not fun. Repetitive. Boring, maybe, after you do a few dozen out of hundreds of API methods.

Getting the Management Libraries

I decided to do this work in a simple WPF application I’ll run on my desktop for the time being. I’ll want to run it as a long-running app or service later, but for now this will work just fine. Since I’ve got an Azure Cloud Service with a Worker Role I’ll want to run in the cloud, I’ve just added all three projects to a single solution, which you’ll see below.

You probably noticed that I’m preparing to add some NuGet packages to the WPF application. That’s because all of the Azure Management Libraries are available as individual NuGet packages. I’m going to select the Microsoft.WindowsAzure.Management.Libraries package, as that one will pull everything in the Management Libraries into my project. If I wanted to manage one aspect of Azure rather than all of it, I’d reference one of the more specific packages, like Microsoft.WindowsAzure.Management.WebSites, which provides management functionality specific only to the Azure Web Sites component.

Once I’ve referenced the NuGet packages, I’m ready to set up client authentication between my WPF application and the Azure REST APIs.

Authenticating

The first implementation we’ve built out for authenticating users who are using WAML and Azure is a familiar one – using X509 Certificates. Integrated sign-in was added recently in SDK 2.2 to Visual Studio and to PowerShell, and we’re working on a solution for this in WAML, too. With this first preview release we’re shipping certificate authentication, but stay tuned, we’re doing our best to add in additional functionality.

Don’t panic. We’ve made this so easy even I can do it.

I’m not going to go deep into a discussion of using certificate-based authentication in this post. In fact, I’m going to be as brute-force as possible just to move into the functional areas of this tutorial. I’ll need two pieces of information to be able to log into the Azure API:

  • A subscription ID
  • A management certificate

I obtained these values from one of my publish settings files. The XML for this file is below.

With the management certificate and the subscription ID in my code later on, I can call the GetCredentials method below that returns an instance of the abstract class, SubscriptionCloudCredentials, we’re using to represent a credential instance in the Management Library code. That way, if I add single sign-on later it’ll be easy for me to replace the certificate authentication with something else. The code for the CertificateAuthenticationHelper class from my sample code is below:
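
A minimal sketch of what that helper can look like is below. It assumes the subscription ID and the base64-encoded management certificate are the values pulled from the publish settings file, and it returns the CertificateCloudCredentials implementation that ships with the libraries; the exact code in my sample may differ slightly.

```csharp
using System;
using System.Security.Cryptography.X509Certificates;
using Microsoft.WindowsAzure;

public static class CertificateAuthenticationHelper
{
    // subscriptionId and base64EncodedCertificate come straight out of the
    // .publishsettings file (the Subscription Id and ManagementCertificate values).
    public static SubscriptionCloudCredentials GetCredentials(
        string subscriptionId, string base64EncodedCertificate)
    {
        var certificate = new X509Certificate2(Convert.FromBase64String(base64EncodedCertificate));
        return new CertificateCloudCredentials(subscriptionId, certificate);
    }
}
```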

Now I’ll write a controller class that’ll do the work between my WPF application and the Management Libraries – a convenience layer, in a sense.

Management Convenience Layer

To map out all the various parameters I’ll have in my workflow I’ve created the ManagementControllerParameters class shown below. This class will summarize all of the pieces of data I’ll need to create my services and deploy my code.
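
As a rough sketch, the parameters class is just a bag of properties. The property names below are illustrative of the kinds of values the workflow needs rather than the exact list from my sample.

```csharp
public class ManagementControllerParameters
{
    // Authentication details pulled from the publish settings file
    public string SubscriptionId { get; set; }
    public string Base64EncodedManagementCertificate { get; set; }

    // Names and region for the assets that will be created
    public string StorageAccountName { get; set; }
    public string CloudServiceName { get; set; }
    public string Region { get; set; }

    // Paths to the package and configuration files built by Visual Studio
    public string CloudPackageFilePath { get; set; }
    public string CloudConfigurationFilePath { get; set; }
}
```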

Then, I’ll create a class that will provide convenience functionality between the UX code and the Management Library layer. This code will make for cleaner code in the UX layer later on. Note the constructor of the code below. In it, two clients are being created. One, the StorageManagementClient, will provide the ability for me to manage storage accounts. The other, the ComputeManagementClient, provides the ability for me to work with most of the Azure compute landscape – hosted services, locations, virtual machines, and so on.

For the purposes of explaining these steps individually, I've created a partial class named ManagementController that's spread across multiple files. This just breaks up the code into functional units to make it easier to explain in this post, and to provide for you as a public Gist so that you can clone all the files and use them in your own code.
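
A sketch of that constructor wiring could look like the following, assuming the clients accept the SubscriptionCloudCredentials instance built by the helper above (the field and parameter names are mine):

```csharp
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.Management.Compute;
using Microsoft.WindowsAzure.Management.Storage;

public partial class ManagementController
{
    private readonly ManagementControllerParameters _parameters;
    private readonly StorageManagementClient _storageManagementClient;
    private readonly ComputeManagementClient _computeManagementClient;

    public ManagementController(ManagementControllerParameters parameters)
    {
        _parameters = parameters;

        // Build one credential instance and hand it to both management clients
        SubscriptionCloudCredentials credentials = CertificateAuthenticationHelper.GetCredentials(
            parameters.SubscriptionId, parameters.Base64EncodedManagementCertificate);

        _storageManagementClient = new StorageManagementClient(credentials);
        _computeManagementClient = new ComputeManagementClient(credentials);
    }
}
```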

Now, let’s wire up some management clients and do some work.

Create a New Storage Account using the Storage Management Client

The first thing I’ll need in my deployment strategy is a storage account. I’ll be uploading the .cspkg file I packaged up from a Cloud project in Visual Studio into an Azure blob. Before I can do that, I’ll need to create an account into which that package file can be uploaded. The code below will create a new storage account in a specified region.

Once the storage account has finished creating, I'm ready to use it. Given that I'll need a connection string to connect my application (and my soon-to-be-created cloud service) to the storage account, I'll create a method that will reach out to the Azure REST APIs to get the storage account's connection keys. Then, I'll build the connection string and hand it back to the calling code.
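
Roughly, that method can look like the sketch below, as another piece of the partial controller class. Treat the shape of the GetKeys response as an assumption based on the preview bits; the connection string format itself is the standard Azure storage format.

```csharp
using System.Threading.Tasks;

public partial class ManagementController
{
    public async Task<string> GetStorageConnectionStringAsync()
    {
        // Ask the management API for the new storage account's access keys
        var keys = await _storageManagementClient.StorageAccounts.GetKeysAsync(
            _parameters.StorageAccountName);

        // Standard Azure storage connection string format
        return string.Format(
            "DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}",
            _parameters.StorageAccountName,
            keys.PrimaryKey);
    }
}
```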

Now that the storage account has been created I'll create my cloud service and publish my package up to Azure.

Create and Deploy a new Cloud Service using the Compute Management Client

The call to create a cloud service is surprisingly simple. All I need to do is to provide the name of the cloud service I intend on creating and the region in which I'd like it to be created.
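
A sketch of that call, assuming the HostedServiceCreateParameters shape from the preview libraries:

```csharp
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Management.Compute.Models;

public partial class ManagementController
{
    public async Task CreateCloudServiceAsync()
    {
        // A hosted service only needs a name, a label, and a region
        await _computeManagementClient.HostedServices.CreateAsync(
            new HostedServiceCreateParameters
            {
                ServiceName = _parameters.CloudServiceName,
                Label = _parameters.CloudServiceName,
                Location = _parameters.Region
            });
    }
}
```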

Finally, all I need to do to deploy the cloud service is to upload the cloud service package file I created in Visual Studio to a blob, then call the REST API. That call will consist of the blob URI of the package I uploaded to my storage account, and the XML data from the cloud project's configuration file. This code will make use of the Azure Storage SDK, which is also available as a NuGet package.
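
A hedged sketch of that flow is below: upload the package with the storage client library, then hand the blob URI and the .cscfg contents to the Deployments operation. The parameter and property names reflect the preview API as I recall it, so double-check them against the bits you're running.

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Management.Compute.Models;
using Microsoft.WindowsAzure.Storage;

public partial class ManagementController
{
    public async Task DeployCloudServiceAsync(string storageConnectionString)
    {
        // Upload the .cspkg file to a blob container in the new storage account
        var storageAccount = CloudStorageAccount.Parse(storageConnectionString);
        var container = storageAccount.CreateCloudBlobClient().GetContainerReference("deployments");
        container.CreateIfNotExists();

        var packageBlob = container.GetBlockBlobReference(
            Path.GetFileName(_parameters.CloudPackageFilePath));
        using (var packageStream = File.OpenRead(_parameters.CloudPackageFilePath))
        {
            packageBlob.UploadFromStream(packageStream);
        }

        // Point the new deployment at the uploaded package and the .cscfg contents
        await _computeManagementClient.Deployments.CreateAsync(
            _parameters.CloudServiceName,
            DeploymentSlot.Production,
            new DeploymentCreateParameters
            {
                Name = _parameters.CloudServiceName,
                Label = _parameters.CloudServiceName,
                PackageUri = packageBlob.Uri,
                Configuration = File.ReadAllText(_parameters.CloudConfigurationFilePath),
                StartDeployment = true
            });
    }
}
```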

Now that all the code's written to create my Azure application, I'll write some code to destroy it once it wraps up all of the work it was designed to do.

Deleting Assets from Azure

Deleting assets using the Azure Management Libraries is as easy as creating the assets. The code below cleans up the storage account I created. Then, it deletes the cloud service deployment and the cloud service altogether.
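
A rough sketch of that cleanup, again assuming the preview API shape for the delete operations:

```csharp
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Management.Compute.Models;

public partial class ManagementController
{
    public async Task CleanUpAsync()
    {
        // Tear everything down in roughly the reverse order it was created
        await _computeManagementClient.Deployments.DeleteBySlotAsync(
            _parameters.CloudServiceName, DeploymentSlot.Production);
        await _computeManagementClient.HostedServices.DeleteAsync(_parameters.CloudServiceName);
        await _storageManagementClient.StorageAccounts.DeleteAsync(_parameters.StorageAccountName);
    }
}
```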

With all the convenience code written at this point, the user experience code should be relatively painless to write next.

The User Experience

The UX for this application is relatively simplistic. I'm just providing a pair of buttons on a WPF form. One will create the assets I need in Azure and perform the deployment. The other will delete the assets from Azure. XAML code for this UX is below. It isn't much to look at but the idea here is to keep this simple.

The codebehind for the UX is also just as easy. In the Create button-click event, I create a new ManagementController instance, providing it all of the parameters I'll need to create the application's components in the Azure fabric. Then I call all of the methods to create everything.

I also handle the Delete button-click by cleaning up everything I just created.

I could modify this code to use the Windows Storage SDK to watch a storage queue on the client side. When the cloud service is finished doing its job, it could send a message into that queue in the cloud. The message would then be caught by the client, which would in turn call the Cleanup method and delete the entire application.

Endless Automation Possibilities

The Azure Management Libraries provide a great automation layer between your code and Azure. You can use these libraries, which are in their preview release as of this week, to automate your entire Azure creation and destruction processes. In this first preview release, we're providing these management libraries for our compute and storage stacks, as well as for Azure Web Sites. In time, we'll be adding more functionality to the libraries. The goal is to give you automation capabilities for everything in Azure.

We're also excited about your feedback and look forward to suggestions during this preview phase. Please try out the Management Libraries, use them in your own experimentation, and let us know what you're using them to facilitate. If you have ideas or questions about the design, we're open to that too. The code for the libraries, like many other things in the Azure stack, is open source. We encourage you to take a look at the code in our GitHub repository.

This Team is Astounding. I am Not Worthy.

Jeff Wilcox’s team of amazing developers has put in a lot of time on the Management Libraries and today we’re excited to share them with you via NuGet. Jeff’s build script and NuGet wizardry have been a lot of fun to watch. The pride this team takes in what it does and the awesomeness of what they’ve produced is evident in how easy the Management Libraries are to use. We think you’ll agree, and welcome your feedback and stories of how you’re finding ways to use them.

 

If you’ve not yet used WebMatrix, what are you waiting for?!?!?!? You’re missing out on a great IDE that helps you get a web site up and running in very little time. Whether you’re coding your site in PHP, Node.js, or ASP.NET, WebMatrix has you covered with all sorts of great features. One of the awesome features of WebMatrix is the number of templates it has baked in. Starter templates for any of the languages it supports are available, as are a number of practical templates for things like stores, personal sites, and so on. As of the latest release of Azure Web Sites, all the awesome WebMatrix templates are now also available in the Web Sites application gallery. Below, you’ll see a screen shot of the web application gallery, with the Boilerplate template selected.

image

Each of the WebMatrix gallery projects is now duplicated for you as a Web Site application gallery entry. So even if you’re not yet using WebMatrix, you’ll still have the ability to start your site using one of its handy templates.

Impossible, you say?

See below for a screenshot from within the WebMatrix templates dialog, and you’re sure to notice the similarities that exist between the Application Gallery entries and those previously only available from directly within WebMatrix.

SNAGHTML42c4ada

As before, WebMatrix is fully supported as the easiest Azure-based web development IDE. After you follow through with the creation of a site from the Application Gallery template list from within the Azure portal, you can open the site live directly within WebMatrix. From directly within the portal’s dashboard of a new Boilerplate site, I can click the WebMatrix icon and the site will be opened up in the IDE.

image

Now that it’s open, if I make any changes to the site, they’ll be uploaded right back into Azure Web Sites, live for consumption by my site’s visitors. Below, you’ll see the site opened up in WebMatrix.

image

If you’ve not already signed up, go ahead and sign up for a free trial of Azure, and you’ll get 10 free sites (per region) to use for as long as you want. Grab yourself a free copy of WebMatrix, and you’ll have everything you need to build – and publish – your site directly into the cloud.

 

I’ve been asked a lot of great questions about Azure Web Sites since the feature launched in June. Things like on-premise integration, connecting to service bus, and having multiple environments (like staging, production, etc), are all great questions that arise on a pretty regular cadence. With this post, I’m going to kick off a series on solving real-world problems for web site and PaaS owners that will try to address a lot of these questions and concerns. I’ve got a few blog posts in the hopper that will address some of these questions, rather than just cover how certain things are done. Those posts are great and all (and a lot of fun to write), but they don’t answer some real-world, practical questions I’ve been asked this year. Stay tuned to this area of my site, as I’ll be posting these articles over the next few weeks and probably into the new year. As I post each of these solutions I’ll update this post so you have a one-stop shop to go to when you need to solve one of these problems.

Posts in this Series

Multiple Environments with Azure Web Sites
In this post I demonstrate how to have production and staging sites set up for your web site so that you can test your changes in a sandbox site before pushing to your production site and potentially causing damage to it (and your reputation). If you’ve wondered how to gate your deployments using Azure Web Sites, this is a good place to start. You’ll learn how to use Azure Web Sites with a GitHub.com repository and some creative branching strategies to maintain multiple environments for a site.

Managing Multiple Azure Web Site Environments using Visual Studio Publishing Profiles
This post takes the same sort of scenario as presented in the first article. Rather than use GitHub.com as a means to executing a series of gated environment deployments it focuses on the awesome features within Visual Studio for web publishing. Specifically, you’ll see how to use publishing profiles to deploy to multiple Azure Web Sites, so that a team of peers responsible for releasing a site can do so without ever needing to leave Visual Studio.

This post also takes a look at the idea of release management and how this solution answers the question of doing proper release management with a cloud-hosted web site. If you’ve wondered how your SDLC could fit in the idea of continuously maintaining a series of environments for gating your releases using Visual Studio’s super-simple publishing features, this is a great place to start.

Connecting Azure Web Sites to On-Premises Databases Using Azure Service Bus
This post introduces the idea of creating a hybrid cloud setup using an Azure Web Site and the Azure Service Bus, to demonstrate how a web site hosted in Azure can connect to your on-premises enterprise database. If you’ve been wondering how to save data from your Azure Web Site into your local database but didn’t know how to do it, or if you’re thinking of taking baby steps in your move toward cloud computing, this post could provide some good guidance on how to get started.

Continuous Delivery to Azure Web Sites using TeamCity
My good friend Magnus put together a very extensive and informative blog post on using Azure Web Sites with TeamCity. If you're a CI user, or a TeamCity user, you'll want to check this out, as it is a great recipe for implementing your CI builds against Azure Web Sites. Magnus worked really hard on this blog post and started working on it for his Windows AzureConf talk, and I'm proud to see how it came together for him.

Jim O'Neil's Blog Series on Integrating Azure Web Sites and Notifications
Jim's blog series - part 1, part 2, and part 3 - are great if you're looking into implementing instant messaging with your Azure Web Site. This series is great, and I feel it shows off some amazing potential and adds a whole new dimension to what's possible on the platform. Take a look at these posts, they're quite inspiring.

G. Andrew Duthie’s Series on Back-end Data Services with Windows 8 Applications
The @devhammer himself has created an awesome series on putting your data in the cloud, so your Windows 8 applications have a common place from which to pull the data. In this series, he discusses how to use Azure Web Sites to create an API that can be called by a Windows 8 application. If you’re delving into Windows 8 development and have been wondering how you could use Web Sites to create the APIs you’ll be calling, this series is a must-read.

Maarten Balliauw on Configuring IIS Methods for ASP.NET Web API with Azure Web Sites
My good friend and dedicated MVP Maarten just wrote up a great post on configuring IIS within Azure Web Sites to support additional HTTP methods like HEAD and PATCH using configuration files. If you’re trying to do some deeper Web API functionality and have struggled with getting these types of HTTP methods supported, this could solve your problem.

Branches, Team Foundation Services, and Azure Web Sites
Wouter de Kort covers how to achieve the multiple-environment approach using Team Foundation Services online. If you're using TFS and you want to set up multiple branches that deploy to a variety of web sites, this article will help you get going. This is a great introduction to using TFS for multiple branches with Azure Web Sites. 

 

In this third post in the Solving Real-world Problems with Azure Web Sites blog series, I’ll demonstrate one manner in which a web site can be connected to an on-premises enterprise. A common use-case for a web site is to collect data for storage in a database in an enterprise environment. Likewise, the first thing most customers want to move into the cloud is their web site. Ironically, the idea of moving a whole enterprise architecture into the cloud can appear to be a daunting task. So, if one wants to host their site in the cloud but keep their data in their enterprise, what’s the solution? This post will address that question and point out how the Azure Service Bus, sitting between an Azure Web Site and an on-premises database, can be great glue between your web site and your enterprise.

We have this application that’d be great as a cloud-based web application, but we’re not ready to move the database into the cloud. We have a few other applications that talk to the database into which our web application will need to save data, but we’re not ready to move everything yet. Is there any way we could get the site running in Azure Web Sites but have it save data back to our enterprise? Or do we have to move everything or nothing works?

I get this question quite frequently when showing off Azure Web Sites. People know the depth of what’s possible with Azure, but they don’t want to have to know everything there is to know about Azure just to have a web site online. More importantly, most of these folks have learned that Azure Web Sites makes it dirt simple to get their ASP.NET, PHP, Node.js, or Python web site hosted into Azure. Azure Web Sites provides a great starting point for most web applications, but the block on adoption comes when the first few options are laid out, similar to these:

  • It is costing us more money and time to manage our web sites, so we’re considering moving them into Azure Web Sites.
  • It will cost us a lot of money to move our database into the cloud.
  • We don’t have the money to rework all the applications that use that database if moving it to the cloud means those applications have to change.
  • We can’t lose any data, and the enterprise data must remain at least as secure as it is today.
  • Our web sites need to scale, and we can’t get them what we need unless we go to Azure Web Sites.

This is a plight and a question I hear frequently whenever I attend conferences. So, I chose to take one of these conversations as a starting point. I invented a customer situation, but one that emulates the problem statement above and the priorities associated with the project we have on the table.

Solving the Problem using Azure Service Bus

The first thing I needed to think about when I was brought this problem was the sample scenario. I needed to come up with something realistic, a problem I had already seen customers experiencing. Here’s a high-level diagram of the idea in place. There’s not much to it, really, just a few simple steps.

on-prem-data-flow

In this diagram I point out how my web site will send data over to the Service Bus. Below the Service Bus layer is a console EXE application that subscribes to the Service Bus Topic into which the web application will be publishing data for persistence in the enterprise. That console EXE will then use Entity Framework to persist the incoming objects – Customer class instances, in fact – into a SQL Server database.

The subscription’s internal process is also expanded in this diagram. The console EXE waits for any instance of a Customer to arrive on the topic, and it wakes up whatever process is supposed to handle the incoming instance of that object.

The running console EXE subscribes to the Service Bus Topic. When Customer messages land on the topic from other applications, the app wakes up and knows what to do to process those customers. In this first case, the handler component basically persists the data to a SQL Server installation in the enterprise.

Code Summary and Walk-through

This example code consists of three projects, and is all available for your perusal as a GitHub.com repository. The first of these projects is a simple MVC web site, and the second is a console application. The final project is a core project that gives these two projects a common language via a domain object and a few helper classes. Realistically, the solution could be divided into four projects; the core project could be split in two, one being the Service Bus utilities and the other being the on-premises data access code. For the purposes of this demonstration, though, the common core project approach was good enough. The diagram below shows how these projects are organized and how they depend on one another.

on-prem-project-structure

The overall idea is simple: build a web site that collects customer information in a form hosted in an Azure Web Site, then ship that data off to the on-premises database via the Azure Service Bus for storage. The first thing I created was a domain object to represent the customers the site will need to save. This domain object serves as a contract between both sides of the application, and will be used by an Entity Framework context class structure in the on-premises environment to save data to a SQL Server database.
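
A minimal sketch of such a domain object is below; the property list is illustrative rather than the exact class from the sample repository.

```csharp
using System;

// The shared contract both the web site and the on-premises console app understand.
public class Customer
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string EmailAddress { get; set; }
    public DateTime CreatedOn { get; set; }
}
```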

With the domain language represented by the Customer class, I’ll need some Entity Framework classes running in my enterprise to save customer instances to the SQL database. The classes that perform this functionality are below. They’re not too rich in terms of functionality or business logic implementation; they’re just in the application’s architecture to perform CRUD operations via Entity Framework, given a particular domain object (the Customer class, in this case).
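
In rough form, that plumbing can be as small as a context plus a thin repository wrapper, something like this sketch (the class names here are mine):

```csharp
using System.Data.Entity;

// A bare-bones Entity Framework context for the on-premises side
public class CustomerContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
}

// Thin CRUD wrapper the console application uses to persist incoming customers
public class CustomerRepository
{
    public void Save(Customer customer)
    {
        using (var context = new CustomerContext())
        {
            context.Customers.Add(customer);
            context.SaveChanges();
        }
    }
}
```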

This next class is sort of the most complicated spot in the application if you’ve never done much with the Azure Service Bus. The good thing is, if you don’t want to learn how to do a lot with the internals of Service Bus, this class could be reused in your own application code to provide a quick-and-dirty first step towards using Service Bus.

The ServiceBusHelper class below basically provides a utilitarian method of allowing for both the publishing and subscribing features of Service Bus. I’ll use this class on the web site side to publish Customer instances into the Service Bus, and I’ll also use it in my enterprise application code to subscribe to and read messages from the Service Bus whenever they come in. The code in this utility class is far from perfect, but it should give me a good starting point for publishing and subscribing to the Service Bus to connect the dots.
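
A simplified sketch of a helper along those lines is below. It uses the NamespaceManager, TopicClient, and SubscriptionClient types from the Service Bus NuGet package; the topic and subscription handling is deliberately more bare-bones than the class in the repository.

```csharp
using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

public class ServiceBusHelper
{
    private readonly string _connectionString;
    private readonly string _topicName;

    public ServiceBusHelper(string connectionString, string topicName)
    {
        _connectionString = connectionString;
        _topicName = topicName;
    }

    // Publish an object into the topic as a brokered message
    public void Publish<T>(T message)
    {
        EnsureTopicExists();
        var client = TopicClient.CreateFromConnectionString(_connectionString, _topicName);
        client.Send(new BrokeredMessage(message));
        client.Close();
    }

    // Subscribe to the topic and hand each incoming message body to the supplied callback
    public void Subscribe<T>(string subscriptionName, Action<T> onMessageReceived)
    {
        EnsureTopicExists();
        var namespaceManager = NamespaceManager.CreateFromConnectionString(_connectionString);
        if (!namespaceManager.SubscriptionExists(_topicName, subscriptionName))
            namespaceManager.CreateSubscription(_topicName, subscriptionName);

        var client = SubscriptionClient.CreateFromConnectionString(
            _connectionString, _topicName, subscriptionName);

        while (true)
        {
            // Receive blocks for a short while and returns null if nothing arrived
            var message = client.Receive();
            if (message == null) continue;

            onMessageReceived(message.GetBody<T>());
            message.Complete();
        }
    }

    private void EnsureTopicExists()
    {
        var namespaceManager = NamespaceManager.CreateFromConnectionString(_connectionString);
        if (!namespaceManager.TopicExists(_topicName))
            namespaceManager.CreateTopic(_topicName);
    }
}
```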

Now that the Service Bus helper is there to deal with both ends of the conversation I can tie the two ends together pretty easily. The web site’s code won’t be too complicated. I’ll create a view that site users can use to input their customer data. Obviously, this is a short form for demonstration purposes, but it could be any shape or size you want (within reason, of course, but you’ve got ample room if you’re just working with serialized objects).

If I’ve got an MVC view, chances are I’ll need an MVC action method to drive that view. The code for the HomeController class is below. Note, the Index action is repeated – one to display the form to the user, the second to handle the form’s posting. The data is collected via the second Index action method, and then passed into the Service Bus.
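
A sketch of what that controller can look like follows; the appSetting key and topic name are placeholders rather than the exact values from the sample.

```csharp
using System.Configuration;
using System.Web.Mvc;

public class HomeController : Controller
{
    // GET: display the customer form
    public ActionResult Index()
    {
        return View();
    }

    // POST: push the submitted customer into the Service Bus topic
    [HttpPost]
    public ActionResult Index(Customer customer)
    {
        // "ServiceBusConnectionString" and "customers" are placeholder names
        var serviceBusHelper = new ServiceBusHelper(
            ConfigurationManager.AppSettings["ServiceBusConnectionString"], "customers");

        serviceBusHelper.Publish(customer);

        ViewBag.Message = "Thanks! Your information has been submitted.";
        return View();
    }
}
```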

The final piece of code to get this all working is to write the console application that runs in my enterprise environment. The code below is all that’s needed to do this; when the application starts up it subscribes to the Service Bus topic and starts listening for incoming objects. Whenever they come in, the code then makes use of the Entity Framework classes to persist the Customer instance to the SQL Server database.
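
A sketch of that console application, wired to the helper and repository sketched above:

```csharp
using System;
using System.Configuration;

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("Listening for customers on the Service Bus topic...");

        // Same placeholder appSetting key and topic name as on the web site side
        var serviceBusHelper = new ServiceBusHelper(
            ConfigurationManager.AppSettings["ServiceBusConnectionString"], "customers");

        var repository = new CustomerRepository();

        // Block on the subscription; every Customer that arrives gets persisted via EF
        serviceBusHelper.Subscribe<Customer>("enterprise", customer =>
        {
            repository.Save(customer);
            Console.WriteLine("Saved customer {0} {1}", customer.FirstName, customer.LastName);
        });
    }
}
```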

Now that the code’s all written, I’ll walk you through the process of creating your Service Bus topic using the Azure portal. A few configuration changes will need to be made to the web site and the on-premises console application, too, but the hard part is definitely over.

Create the Service Bus Topic

Creating a Service Bus topic using the Azure portal is relatively painless. The first step is to use the New menu in the portal to create the actual Service Bus topic. The screen shot below, from the portal, demonstrates the single step I need to take to create my own namespace in the Azure Service Bus.

creating-service-bus-topic

Once I click the Create a New Topic button, the Azure portal will run off and create my very own area within the Service Bus. The process won’t take long, but while the Service Bus namespace is being created, the portal will make sure I know it hasn’t forgotten about me.

service-bus-getting-created

After a few seconds, the namespace will be visible in the Azure portal. If I select the new namespace and click the button at the bottom of the portal labeled Access Key, a dialog will open that shows me the connection string I’ll need to use to connect to the Service Bus.

image

I’ll copy that connection string out of the dialog. Then, I’ll paste that connection string into the appropriate place in the Web.config file of the web application. The screen shot below shows the Web.config file from the project, with the appropriate appSettings node highlighted.

image

A similar node also needs to be configured in the console application’s App.config file, as shown below.

image

In all, there are only two *.config files that need to be edited in the solution to get this working – the console application’s App.config file and the web application’s Web.config file. Both of these files are highlighted in the solution explorer view of the solution included with this blog post.

image

With the applications configured properly and the service bus namespace created, I can now run the application to see how things work.

Running the Code

Since I’ll be using Entity Framework to scaffold the SQL Server database on the fly, all I’ll need to do to set up my local enterprise environment is to create a new database. The screen shot below shows my new SQL database before running the console application on my machine. Note, there are no tables or objects in the database yet.

image

The first thing I’ll do to get running is to debug the console application in Visual Studio. I could just hit F5, but I’ll be running the web application in debug mode next. The idea here is to go ahead and fire up the console application so that it can create the database objects and prepare my enterprise for incoming messages.

image

The console application will open up, but will display no messages until it begins processing Customer objects that land on the Service Bus. To send it some messages, I’ll now debug the web application, while leaving the console application running locally.

image

When the web site fires up and opens in my web browser, I’ll be presented the simple form used to collect customer data. If I fill out that form and click the Save button, the data will be sent into the Service Bus.

image

By leaving the console application running as I submit the form, I can see the data coming into my enterprise environment.

SNAGHTMLcceb1f2

Going back into SQL Server Management Studio and refreshing the list of tables, I can see that the Entity Framework migrations ran perfectly and created the table into which the data will be saved. If I select the data out of that table using a SQL query, I can verify that indeed, the data was persisted into my on-premises database.

SNAGHTMLcd1045e

At this point, I’ve successfully pushed data from my Azure Web Site up into the Service Bus, and then back down again into my local enterprise database.

Summary

One of the big questions I’ve gotten from the community since the introduction of Azure Web Sites to the Azure platform is how to connect these sites to an enterprise environment. Many customers aren’t ready to move their whole enterprise into Azure, but they want to take some steps towards getting their applications into the cloud. This sort of hybrid cloud setup is one that’s perfect for Service Bus. As you’ve seen in this demonstration, the process of connecting an Azure Web Site to your on-premises enterprise isn’t difficult, and it allows you the option of moving individual pieces as you’re ready. Getting started is easy, cheap, and will allow for infinite scaling opportunities. Find out how easy Azure can be for Web Sites, mobile applications, or hybrid situations such as this by getting a free trial account today. I’m sure you’ll see that pretty much anything’s possible with Azure.

 

This is the second post in the Real World Problems with Azure Web Sites series. The first post summarized how one can manage multiple environments (development, staging, production, etc.) using a Git repository with a branching strategy. Not everyone wants to use Git, and most would prefer to stay in their favorite IDE – Visual Studio 2012 – all day to do pretty much everything. My buddy Sayed Hashimi told me about Visual Studio publishing profiles a few weeks ago and I’d been wanting to write up something on how they could work with Azure Web Sites. This post follows up on the idea of managing multiple Azure Web Sites, but rather than do it with Git, I’ll show you how to manage multiple sites with only Visual Studio’s awesome publishing-with-profiles features.

Set Up the Environments

The first step in the process is to have your multiple sites set up so that you have environmental isolation. In this case, I’m being thorough and requiring there are two gates prior to production release. All three of these sites are in the free zone, for this demonstration.

01-sites-provisioned

If this was fully realistic, the production zone would probably be at least shared or reserved, so that it had a domain name mapped to it. That’s the only site that would cost money, so the development and staging sites would have no impact on the cost I’ll incur for this setup.

Once the sites have been created I’ll go into each site’s dashboard to download the site’s publish settings profile. The publish settings files will be used from within Visual Studio to inform the IDE how to perform a web deploy up to my Azure Web Site environment.

02-download-publish-profile

Once I’ve downloaded each of these files I’ll have them all lined up in my downloads folder. I’ll be using these files in a moment once I’ve got some code written for my web site.

SNAGHTMLbb9ed58

Now that I’ve got all my environments set up and have the publishing settings downloaded I can get down to business and write a little code.

Setting up the Web Application Project

I know I’ll have some environmental variances in the deployment details of this web application. I’ll want to use a different database for each environment, so I’ll need three different connection strings, one for each site to use for data persistence. There’ll be application settings and other details, too, so the first thing I’ll do in this simple ASP.NET MVC project is to prepare the different publishing profiles and the respective configuration for those environments.

To do this, I’ll just right-click my web project and select the Publish menu item. I’m not going to publish anything just yet, but this is the super-easiest way of getting to the appropriate dialog.

image

When the publishing dialog opens, I’ll click the Import button to grab the first environment’s publish settings files.

SNAGHTMLbc3d1f0

I’ll grab the first publish settings file I find in my downloads folder, for the site’s development environment.

SNAGHTMLbc4eff3

Once I click Open, the wizard will presume I’m done and advance to the next screen. I’ll click the Profile link in the navigation bar at this point one more time, to go back to the first step in the wizard.

If, at any point during this process, you’re asked if you want to save the profile, click Yes.

SNAGHTMLbc7a83d

I’ll repeat the import process for the staging and production files. The idea here is to get all of the publish settings files imported as separate profiles for the same Visual Studio web application project. Once I’ve imported all those files I’ll click the Manage Profiles button. The dialog below should open up, which will show me all of the profiles I’ve imported.

SNAGHTMLbcad50b

This part isn’t a requirement or even a recommendation, but I don’t typically need the FTP profiles, so I’ll go through and delete all of the *FTP profiles that were imported. Again, not a requirement, just a preference; once I’m done, only the Web Deploy profiles are left in my dialog.

SNAGHTMLbcc102b

I’ll just click Close now that I’ve got the profiles set up. The profiles will then be visible under the Properties/PublishProfiles project node in Visual Studio. This folder is where the XML files containing publishing details are stored.

image

With the profile setup complete, I’m going to go ahead and set up the configuration specifics for each environment. By right-clicking on each *.pubxml file and selecting the Add Config Transform menu item, a separate *.config will be created in the project.

image

Each file represents the transformations I’ll want to apply as I deploy the web site to the individual environment sites. Once I’ve added a configuration transformation for each profile, there’ll be a few new nodes under the Web.config file where I can configure specific details for each site.

image

Now that I’ve got the publish profiles and their respective configuration transformation files set up for each profile, I’ll write some code to make use of an application setting so I can check to make sure the per-profile deployment does what I think it’ll do.

Now, if you’re thinking to yourself that this isn’t very practical, since you couldn’t allow your developers to have the ability to deploy to production, and you’re compelled to blow off the rest of this post because you feel I’ve completely jumped the shark at this point, keep on reading. I bring it back down to Earth and even talk a little release-management process later on.

Environmental Configuration via Profiles

Now I’ll go into the Web.config file and add an appSetting to the file that will reflect the message I want users to see whenever they browse to the home page. This setting will be specific per environment, so I’ll use the transformation files in a moment to make sure each environment has its very own welcome message.

image

This is the message that would be displayed to a user if they were to hit the home page of the site. I need to add some code to my controller and view to display this message. It isn’t very exciting code, but I’ve posted it below for reference.

First, the controller code that reads from configuration and injects the message into the view.

image
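
In case that screenshot is hard to make out, the controller code amounts to something like this sketch (the appSetting key name, Message, is just an example):

```csharp
using System.Configuration;
using System.Web.Mvc;

public class HomeController : Controller
{
    public ActionResult Index()
    {
        // Read the per-environment welcome message from configuration
        ViewBag.Message = ConfigurationManager.AppSettings["Message"];
        return View();
    }
}
```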

Then I’ll add some code to the view to display the message in the browser.

image

When I browse the site I’ll get the obvious result, a simple hello message rendered from the configuration file on my local machine.

SNAGHTMLd3eb220

I’ll go into the development profile’s configuration file and make a few changes – I strip out the comments and stuff I don’t need, and then I add the message appSetting variable to the file and set the transformation to perform a replace when the publish happens. This basically replaces everything in the Web.config file with everything in the Web.MySite-Dev - Web Deploy.config file that has an xdt:Transform attribute set to Replace.

image

I do the same thing for the staging profile’s configuration file…

image

… and then for the production profile’s configuration file.

image

With the environmentally-specific configuration attributes set up in the profile transformations and the publish profiles set up, everything should work whenever I need to do a deployment to any of the environments. Speaking of which, let’s wrap this up with a few deployments to our new environments!

Deployment

The final step will be to deploy the code for the site into each environment to make sure the profile configuration is correct. This will be easy, since I’ve already imported all of my environments’ configuration files. I’ll deploy development first by right-clicking the project and again selecting the Publish context menu item. When the publish wizard opens up I need to select the development environment’s profile from the menu.

SNAGHTMLd55a0ea

Once the publish process completes the site will open up in my browser and I can see that the appropriate message is being displayed, indicating the configuration transformation occurred properly based on the publish profile I’d selected to deploy.

SNAGHTMLd572671

Next, I right-click the project and select Publish again, this time selecting the staging environment.

SNAGHTMLd585116

When the publish completes, the staging welcome message is displayed.

SNAGHTMLd59b886

If I repeat the same steps for production, the appropriate message is displayed there, too.

SNAGHTMLd5aeeb4

In a few short steps, I’m able to set up a series of environments and publish profiles that work together to give me separate deployment environments, with little extra work or overhead. Since the profiles are explicitly linked to the configuration transformations, it all just works when I deploy the site.

Release Management

As promised earlier in that blockquote up there, and to stay with the “these are real-world scenarios as much as possible, based on my real-world experiences and questions I’ve been asked” mantra, I feel it’s necessary to get into the idea of release management insofar as it applies here. In the previous example I was using Git branches to gate releases. In this example, I’m not using any centralized build solution, but rather assuming there’s a source control environment in between the team members – developers, testers, release management, and so on – and that the whole team just chooses to use the Web Deploy awesomesauce built into Visual Studio.

Think of a company with aggressive timelines that still takes care to gate releases but chooses not (for whatever reason) to set up a centralized build system. This company still feels strongly about managing the release process and about maintaining separate chains of testing and sign-off responsibility as code moves through the environments on the way to a production release, but it loves using Visual Studio and Web Deploy to get things into those environments as quickly as possible.

The diagram below demonstrates one potential release cycle that could make use of the publish profile method of gating deployments through a series of environmental gates.

deployment-strategy

Assume the team has come to a few conclusions and agreements on how their release cycle will execute.

  • All the team members are relatively technical and comfortable using Visual Studio with web application projects.
  • The team uses a source control system to share source code and to distribute it internally between team members.
  • The web application project checked into source control includes the publish profile for deploying the site into the development Azure Web Site.
  • Testers maintain copies of the staging publish settings file, are regarded as the owners of the staging environment, and are the only team members who can deploy code to the staging Azure Web Site.
  • Release managers maintain copies of the production publish settings files, are regarded as the owners of production releases, and are the only team members who can deploy code to the production environment.
  • As developers, testers, and RMs complete their respective testing phases in the environments they own and are ready to sign off, they escalate the deployment process to the next level.
  • Following escalation, the first general step is to verify the previous environment, then to deploy to the next environment and begin testing the deployment there.

Luckily, this sort of situation is quite possible using publish profiles, with free Azure Web Sites serving as environmental weigh stations on the way to a production site that’s deployed to multiple large reserved instances (for instance).

Summary

The convenient partnership between web publishing and Azure Web Sites shouldn’t be regarded as an invitation to cowboy coding. Rather, it’s a tool that, when coupled with a responsible release cycle and effective deployment gating, can streamline and simplify the entire SDLC when your business is web sites.

I hope this post has introduced you to a method of controlling your deployment environments, while also allowing you to do the whole thing from within Visual Studio. Later on, I’ll follow up this post with an example of doing this sort of thing using Team Foundation Services.

Hopefully, you have enough ammunition to get started with your very own Azure Web Site account today, for free, and you feel confident you’ll be able to follow your very own release management process, without the process or architecture slowing you down. If you have any questions about this approach or the idea in general, feel free to use the comments form below.

Happy coding!

 

As announced by Scott Guthrie recently, Azure Web Sites now supports the .NET Framework 4.5. With that support in place, some awesome ASP.NET features are now available to web developers who want to host their ASP.NET applications on Azure Web Sites. One feature I’m especially excited about is Entity Framework Spatial support. Only available in .NET 4.5, EF Spatial is something that gives developers who want to build location-aware applications the ability to easily save and retrieve location data without having to invent crazy solutions using SQL code. I’ve implemented the Haversine formula using a SQL stored procedure in the past, and I can speak from experience when I say that EF Spatial is about 10,000 times easier and more logical. Don’t take my word for it, though. Take a look at the sample code I’ll show you in this blog post, which demonstrates how you can develop a location-aware API using ASP.NET Web API and EF Spatial, and host the whole thing on Azure Web Sites.

Creating the Site in Azure

Before diving into code I’ll go out to the Azure portal and create a new web site. For this API example, I create a site with a database, as I’ll want to store the data in an Azure SQL Database. The screen shot below shows the first step of creating a new site. By simply selecting the new web site option, then selecting “with database,” I’m going to be walked through the process of creating both assets in Azure in a moment.

1

The first thing Azure will need to know is the URL I’ll want associated with my site. The free Azure Web Sites offer defaults to [yoursitename].azurewebsites.net, so this first step allows me to define the URL prefix associated with my site.

Simultaneously, this first step gives me the opportunity to define the name of the connection string I’ll expect to use in my Web.config file later, that will connect the site to the Azure SQL Database I’ll create in a moment.

2

The last steps in the site creation process will collect the username and SQL Server information from you. In this example, I’m going to create a new database and a new SQL Server in the Azure cloud. However, you can select a pre-existing SQL Server if you’d prefer during your own setup process.

I specifically unchecked the “Configure Advanced Database Settings” checkbox, as there’s not much I’ll need to do to the database in the portal. As you’ll see in a moment, I’ll be doing all my database “stuff” using EF’s Migrations features.

3

Once I’ve entered the username I’d like to use, the password, and selected (or created) a SQL server, I click the check button to create the site and the SQL database. In just a few seconds, both are created in Azure, and I can get started with the fun stuff – the c0d3z!

Preparing for Deployment

Just so I have a method of deploying the site once I finish the code, I’ll select the new site from the Azure portal by clicking on its name once the site-creation process completes.

4

The site’s dashboard will open up in the browser. If I scroll down, the Quick Glance links are visible on the right side of the dashboard page. Clicking the link labeled Download Publish Profile will do just that – download a publish settings file, which contains some XML defining how Visual Studio or WebMatrix 2 (or the Web Deploy command line) should upload the files to the server. Also contained within the publish settings file is the metadata specific to the database I created for this site.

5

As you’ll see in a moment when I start the deployment process, everything I need to know about deploying a site and a database backing that site is outlined in the publish settings file. When I perform the deployment from within Visual Studio 2012, I’ll be given the option of using Entity Framework Migrations to populate the database live in Azure. Not only will the site files be published, the database will be created, too. All of this is possible via the publish settings file’s metadata.

Building the API in Visual Studio 2012

The code for the location API will be relatively simple to build (thanks to the Entity Framework, ASP.NET, and Visual Studio teams). The first step is to create a new ASP.NET MVC project using Visual Studio 2012, as is shown below. If you’d rather not walk through the coding process, I’ve created a public GitHub.com repository for the Spatial Demo solution, so clone it from there if you’d prefer to view the completed source code rather than create it from scratch.

Note that I’m selecting the .NET Framework 4.5 in this dialog. Previous to the 4.5 support in Azure Web Sites, this would always need to be set at 4.0 or my deployment would fail. As well, I would have had compilation issues for anything relating to Entity Framework Spatial, as those libraries and namespaces are also only available under .NET 4.5. Now, I can select the 4.5 Framework, satisfy everyone, and keep on trucking.

new-project

In the second step of the new MVC project process I’ll select Web API, since my main focus in this application is to create a location-aware API that can be used by multiple clients.

web-api-project

By default, the project template comes with a sample controller to demonstrate how to create Web API controllers, called ValuesController.cs. Nothing against that file, but I’ll delete it right away, since I’ll be adding my own functionality to this project.

Domain Entities

The first classes I’ll add to this project will represent the entity domains pertinent to the project’s goals. The first of these model classes is the LocationEntity class. This class will be used in my Entity Framework layer to represent individual records in the database that are associated with locations on a map. The LocationEntity class is quite simple, and is shown in the gist below.

Some of the metadata associated with a DbGeography object isn’t easily or predictably serialized, so to minimize variability (okay, I’m a control freak when it comes to serialization) I’ve also created a class to represent a Location object on the wire. This class, the Location class, is visible in the following gist. Take note, though, it’s not that much different from the typical LocationEntity class aside from one thing. I’m adding explicit Latitude and Longitude properties to this class. DbGeography instances offer a good deal more functionality, but I won’t need that in this particular API example. Since all I need is latitude and longitude on the API side, I’ll just work up some code in the API controller I’ll create later to convert the entity class to the API class.
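
A sketch of the two classes is below. The property names are assumptions rather than the exact classes from the sample repository; note that in EF 5 on .NET 4.5, DbGeography lives in the System.Data.Spatial namespace (it moved to System.Data.Entity.Spatial in EF 6).

```csharp
using System.Data.Spatial;

// Entity Framework entity persisted to the database
public class LocationEntity
{
    public int Id { get; set; }
    public string Name { get; set; }
    public DbGeography Location { get; set; }
}

// The shape of a location on the wire; only the pieces API clients actually need
public class Location
{
    public int Id { get; set; }
    public string Name { get; set; }
    public double Latitude { get; set; }
    public double Longitude { get; set; }
    public double? DistanceAway { get; set; }   // meters from the caller's position
}
```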

Essentially, I’ve created a data transfer object and a view model object. Nothing really new here, aside from the Entity Framework Spatial functionality, compared with previous API implementations I’ve done, which also required the database entity to be loosely coupled from the class the API or GUI uses to display (or transmit) the data.

Data Context, Configuring Migrations, and Database Seeding

Now that the models are complete I need to work in the Entity Framework “plumbing” that gives the controller access to the database via EF’s magic. The first step in this process is to work up the Data Context class that provides the abstraction layer between the entity models and the database layer. The data context class, shown below, is quite simple, as I’ve really only got a single entity in this example implementation.
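
A minimal sketch of that context, using the connection string name discussed next, could look like this:

```csharp
using System.Data.Entity;

public class SpatialDemoDataContext : DbContext
{
    // Point EF at the connection string name set up in Web.config
    public SpatialDemoDataContext() : base("SpatialDemoConnectionString") { }

    public DbSet<LocationEntity> Locations { get; set; }
}
```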

Take note of the constructor, which passes a connection string name to the base constructor. This requires me to make a change in the Web.config file created by the project template. By default, the Web.config file is generated with a single connection string, the name of which is DefaultConnection. I need to either create a secondary connection string with the right name, change the default one (which I’ve done in this example), or use Visual Studio’s MVC-generation tools to create an EF-infused controller, which will add a new connection string to the Web.config automatically. Since I’m coding up this data context class manually, I just need to go into the Web.config and change the DefaultConnection connection string’s name attribute to match the one I’ve used in this constructor, SpatialDemoConnectionString. Once that’s done, this EF data context class will use the connection string identified in the configuration file with that name.

During deployment, this becomes a very nifty facet of developing ASP.NET sites that are deployed to Azure Web Sites using the Visual Studio 2012 publishing functionality. We’ll get to that in a moment, though…

EF has this awesome feature called Migrations that gives EF the ability to set up and/or tear down database schema objects, like tables and columns and indexes (oh my!). So the next step for me during this development cycle is to set up the EF Migrations for this project. Rowan Miller does a great job of describing how EF Migrations work in this Web Camps TV episode, and Robert Green’s Visual Studio Toolbox show has a ton of great content on EF, too, so check out those resources for more information on EF Migrations’ awesomeness. The general idea behind Migrations, though, is simple – it’s a way of allowing EF to scaffold database components up and down, so I won’t have to do those things using SQL code.

What’s even better than the fact that EF has Migrations is that I don’t need to memorize how to do it, because the NuGet/PowerShell/Visual Studio gods have made that pretty easy for me. To turn Migrations on for my project, which contains a class that derives from EF’s data context class (the one I just finished creating in the previous step), I simply need to type the command enable-migrations into the NuGet Package Manager Console window.

Once I enable migrations, a new class will be added to my project. This class will be added to a new Migrations folder, and is usually called Configuration.cs. Within that file are a constructor and a method I can implement however I want, called – appropriately – Seed. In this particular use-case, I enable automatic migrations and add some seed data to the database.
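
Here's a rough sketch of what that Configuration class can look like; the seed records and coordinates are made-up samples, not the data from the real demo.

```csharp
using System.Data.Entity.Migrations;
using System.Data.Spatial;

internal sealed class Configuration : DbMigrationsConfiguration<SpatialDemoDataContext>
{
    public Configuration()
    {
        // Let EF evolve the schema automatically as the model changes
        AutomaticMigrationsEnabled = true;
    }

    protected override void Seed(SpatialDemoDataContext context)
    {
        // WKT points are "POINT(longitude latitude)"; 4326 is the standard GPS coordinate system
        context.Locations.AddOrUpdate(
            l => l.Name,
            new LocationEntity
            {
                Name = "Sample Wing Joint #1",
                Location = DbGeography.FromText("POINT(-122.336121 47.609839)", 4326)
            },
            new LocationEntity
            {
                Name = "Sample Wing Joint #2",
                Location = DbGeography.FromText("POINT(-122.121513 47.673988)", 4326)
            });
    }
}
```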

Enabling automatic migrations basically assumes any changes I make will automatically be reflected in the database later on (again, this is super-nifty once we do the deployment, so stay tuned!).

Quick background on what types of locations we’ll be saving… My wife and I moved from the Southeast US to the Pacific Northwest region recently. Much to our chagrin, there are far fewer places to pick up great chicken wings than there were in the Southeast. So, I decided I needed to use our every-Sunday-during-football snack of chicken wings as a good use-case for a location-based app. What better example than a list of good chicken wing restaurants sorted by proximity? Anyway, that’s the inspiration for the demo. Dietary recommendation is not implied, BTW.

The API Controller Class

With all the EF plumbing and domain models complete, the last step in the API layer is to create the API controller itself. I simply add a new Web API controller to the Controllers folder, and change the code to make use of the plumbing work I’ve completed up to now. The dialog below shows the first step, when I create a new LocationController Web API controller.

8

This controller has one method that takes the latitude and longitude from a client. Those values are used in conjunction with EF Spatial’s DbGeography.Distance method to sort the records by proximity, and then the first five records are returned. The result of this call is that the closest five locations are returned when a client provides its latitude and longitude coordinates to the API method. The Distance method is used again to determine how far away each location is from the provided coordinates. The results are then returned using the API-specific class rather than the EF-specific class (thereby separating the two layers and easing some of the potential serialization issues that could arise), and the whole output is formatted to either XML or JSON and sent down the wire via HTTP.
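
A sketch of that controller is below. The query shape is an approximation of what's described above rather than the exact code from the repository, but DbGeography.Distance is the real EF Spatial method doing the proximity work.

```csharp
using System.Collections.Generic;
using System.Data.Spatial;
using System.Linq;
using System.Web.Http;

public class LocationController : ApiController
{
    // GET api/location?latitude=47.6&longitude=-122.3
    public IEnumerable<Location> Get(double latitude, double longitude)
    {
        // Build a DbGeography point from the caller's coordinates (WKT is longitude-first)
        var currentPosition = DbGeography.FromText(
            string.Format("POINT({0} {1})", longitude, latitude), 4326);

        using (var context = new SpatialDemoDataContext())
        {
            return context.Locations
                .OrderBy(l => l.Location.Distance(currentPosition))   // closest first
                .Take(5)
                .ToList()                                              // materialize before mapping
                .Select(l => new Location
                {
                    Id = l.Id,
                    Name = l.Name,
                    Latitude = l.Location.Latitude ?? 0,
                    Longitude = l.Location.Longitude ?? 0,
                    DistanceAway = l.Location.Distance(currentPosition)
                })
                .ToList();
        }
    }
}
```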

At this point, the API is complete and can be deployed to Azure directly from within Visual Studio 2012 using the great publishing features created by the Visual Studio publishing team (my buddy Sayed Hashimi loves to talk about this stuff, so ping him on Twitter if you have any questions or suggestions on this awesome feature-set).

Calling the Location API using an HTML 5 Client

In order to make this a more comprehensive sample, I’ve added some HTML 5 client code and Knockout.js-infused JavaScript code to the Home/Index.cshtml view that gets created by default with the ASP.NET MVC project template. This code makes use of the HTML 5 geospatial capabilities to read the user’s current position. The latitude and longitude are then used to call the location API directly, and the results are rendered in the HTML client using a basic table layout.

The final step is to deploy the whole thing up to Azure Web Sites. This is something I wasn’t able to do until last week, so I’m super-stoked to be able to do it now and to share it with you on a demo site, the URL of which I’ll hand out at the end of this post.

One Last NuGet to Include

Entity Framework Spatial has some new data types that add support for things like… well… latitude and longitude, in this particular case. By default, these types aren’t installed into an Azure instance, as they’re part of the database SDK. Most times, those assemblies aren’t needed on a web server, so by default you won’t have them when you deploy. To work around this problem and to make Entity Framework Spatial work on the first try following your deployment to Azure, install the Microsoft.SqlServer.Types NuGet package into your project by typing install-package Microsoft.SqlServer.Types in the Package Manager Console or by manually finding the package in the “Manage NuGet References” dialog.

Thanks to Scott Hunter for this extremely valuable piece of information, which I lacked the first time I tried to do this. This solution was so obvious I hid in my car with embarrassment after realizing how simple it was and that I even had to ask. NuGet, again, to the rescue!

Once this package is installed, deploying the project to Azure will trigger automatic retrieval of that package, and the support for the location data types in SQL Server will be added to your site.

Publishing from Visual Studio 2012 is a Breeze

You’ve probably seen a ton of demonstrations on how to do deployment from within Visual Studio 2012, but it never ceases to amaze me just how quick and easy the team has made it to deploy sites – with databases – directly up to Azure in so few, simple steps. To deploy to a site from within Visual Studio 2012, I just right-click the site and select – get this – Publish. The first dialog that opens gives me the option to import a publish settings file, which I downloaded earlier just after having created the site in the Azure portal.

publish-1

Once the file is imported, I’m shown the details so I have the chance to verify everything is correct, which I’ve never seen it not be, quite frankly. I just click Next here to move on.

publish-2

This next step is where all the magic happens that I’ve been promising you’d see. This screen, specifically the last checkbox (highlighted for enthusiasm), points to the database I created earlier in the first step when I initially created the “site with database” in the Azure portal. If I check that box, when I deploy the web site, the database schema will be automatically created for me, and the seed data will be inserted and be there when the first request to the site is made. All that, just by publishing the site!

publish-3

Can you imagine anything more convenient? I mean seriously. I publish my site and the database is automatically created, seeded, and everything wired up for me using Entity Framework, with a minimal amount of code. Pretty much magic, right?

Have at it!

Now that the .NET 4.5 Framework is supported by Azure Web Sites, you can make use of these and other new features, many of which are discussed or demonstrated at www.asp.net’s page set aside just for ASP.NET 4.5 awesomeness. If you want to get started building your own location APIs on top of Entity Framework Spatial, grab your very own Azure account here, which offers all kinds of awesomeness for free. You can take the sample code for this blog, or copy the gists and tweak them however you want.

Happy Coding!

 

Running Nancy on Azure Web Sites

posted 8 years ago in Azure

I’m a huge fan of finding cleaner ways to build web applications. Simplicity is a really good thing. Nancy is a simple little web framework inspired by Ruby’s Sinatra. Nancy is open source, hosted on GitHub.com, and distributed like any other awesome .NET open source library – via NuGet packages. When a coding challenge landed on my desk this week that I’ll be responsible for prototyping, Nancy seemed like a good option, so I’ve taken the time to tinker with it a little. I’ll spare you the details of the coding assignment in this post. Instead, I’ll focus on the process of getting Nancy working in your own Azure Web Site.

To start with, I’ve created an empty web site using the Azure portal. This site won’t use a database, so I just logged into the portal and selected New –> Web Site –> Quick Create and gave it a pretty logical name.

1

 

Once the site was finished cooking, I grabbed the publish profile settings from the site’s dashboard.

2

Getting Started with Nancy

First, I create an empty ASP.NET web application. To make sure everything’s as clean as it can be, I remove a good portion of the files and folders from the site. Since I’ll only be building the most basic functionality in this example, I don’t need a lot of extraneous resources in the site. The project below shows what I had left over once I’d ransacked the project’s structure. Take note of the Uninstall-Package command I ran in the Package Manager Console; I ran similar commands until I had as bare-bones a project structure as possible. Then I removed some things from the Web.config until it was quite minimal.

3

 

To run Nancy on Azure Web Sites, I’ll need to install two packages from NuGet. The first is the Nancy core package. The second is the package that enables Nancy hosting on ASP.NET.

4

 

In my web project I create a new folder called NancyStuff and add a new class called HelloWorldModule.

5

 

This class is a very basic Nancy module, and I learned how to create it by perusing the Nancy project’s GitHub.com wiki. I basically want a route that, when called, just says hello to the user. In Nancy, routes are handled by classes that inherit from NancyModule, so HelloWorldModule extends NancyModule, and when the user makes an HTTP GET request to the /hello route, the message Hello World is rendered to the response stream. You don’t get much simpler than this for assigning functionality to a given route. According to the Nancy documentation, modules are the lynchpin of any given Nancy application.
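A minimal sketch of a module like this, assuming the classic Nancy convention of declaring routes in the module’s constructor:

using Nancy;

namespace NancyStuff
{
    // A very basic Nancy module: answers GET /hello with a plain string.
    public class HelloWorldModule : NancyModule
    {
        public HelloWorldModule()
        {
            // Routes are declared in the constructor; the return value
            // becomes the response body.
            Get["/hello"] = parameters => "Hello World";
        }
    }
}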

 

At this point, everything should work, so I’ll go ahead and deploy the web site up to Azure. To do this, I select the Publish context menu item from within Visual Studio 2012.

6

 

Then, I import the publish settings file using the Import feature on the publishing dialog.

7

 

Once I find the .publishsettings file I downloaded from the Azure portal and import it, I’m ready to publish. Clicking the publish button in the dialog at this point will result in the site being deployed up to my Azure Web Site.

8

 

The site will open once the deployment has completed and will probably present me with a 404 error, indicating there’s no route configured to answer requests to the root of the site. Changing the URL to hit /hello will result in the module answering the request and doing what I expected it to do:

9

 

Enabling Static File Browsing

With this first module functioning properly, I want to create a static file from which the Nancy module can be called using jQuery on the client. The idea is that, now that the module is working, I might want to call it via AJAX and display the message it returns to the user. So I add a new static HTML page to my solution.

10

 

The problem here is that, since Nancy is handling requests to my site and using the modules I create to respond to them, the ability to browse to static files is… well… sort of turned off. So attempts to hit a seemingly innocent-enough page result in a heinous – yet adorable in a self-loathing sort of way – response.

11

 

Thanks to the Nancy documentation, it was pretty easy to find a solution to this behavior. Nancy intercepts requests to my site and takes precedence over the default behavior of “serving up static files when they’re requested.” Nancy’s routing engine listens for requests, and if a request arrives for which no module has been created and routed, the site has no idea how to handle it.

To solve this problem, I need to create another class that extends the DefaultNancyBootstrapper class. This class, explained pretty thoroughly in the GitHub.com Nancy wiki’s article on managing static content, is what I’ll use to instruct Nancy on how to route to individual static files. For now I only need a class to handle this one particular static file, though setting up a bootstrapper to allow static browsing of an entire directory is also possible. Other options exist too, such as routes that use regular expressions, but that’s something I’ll look at in a later episode of this series. Here, I just want to tell Nancy to serve up the page Default.html whenever a request is made to /Default.html, and I’m also enabling static file browsing out of the /scripts folder of the site. The main thing to look at is the call to the StaticContentsConventions.Add method, into which I pass the name of the file and the route on which it should be served up, as sketched below.
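A minimal sketch of a bootstrapper like this, assuming the static content conventions described in the Nancy wiki (the class name here is mine):

using Nancy;
using Nancy.Conventions;

// Bootstrapper that teaches Nancy which static content it may serve.
public class StaticContentBootstrapper : DefaultNancyBootstrapper
{
    protected override void ConfigureConventions(NancyConventions conventions)
    {
        base.ConfigureConventions(conventions);

        // Serve the single file Default.html when /Default.html is requested.
        conventions.StaticContentsConventions.Add(
            StaticContentConventionBuilder.AddFile("/Default.html", "Default.html"));

        // Allow static browsing of everything under the /scripts folder.
        conventions.StaticContentsConventions.Add(
            StaticContentConventionBuilder.AddDirectory("/scripts"));
    }
}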

 

Now, I’ll add some jQuery code to the static page that calls the HelloWorldModule and displays whatever it responds with in an HTML element on the page.

 

When the static file loads up in the browser, the jQuery code makes an AJAX request back to the server to the /hello route, and then drops the response right into the page. When I deploy the site again to Azure Web Sites and hit the Default.html file, the behavior is just what I’d expected it would be; the page loads, then the message is obtained via AJAX and displayed.

12

 

Hopefully this introduction has demonstrated the support Azure Web Sites has for Nancy. Since Nancy can be hosted under ASP.NET, and since Azure Web Sites support the ASP.NET pipeline, everything just works. I’ll continue to tinker with Nancy as I work on the coding project for which I chose it as a puzzle piece, and along the way I hope to learn more about Nancy, take the time to demonstrate what I learn, and ease someone’s day when they decide to move their Nancy site over to Azure Web Sites.

Happy coding!

 

Azure Web Sites Log Cleaner

posted 8 years ago in Azure

Azure Web Sites customers who run web sites in shared instance mode have a generous amount of storage space to use for multiple web sites. The 1 GB limit for free customers is more than enough to get a site up and running with lots of room left over for storage. A site’s log files contribute to the amount of space the site uses, and if a site gets a lot of traffic, the logs can grow and eat into the space available for the things that matter – the HTML, executables, images, and other relevant assets. That’s the sole purpose of the Azure Web Sites Log Cleaner, a NuGet utility you can grab and use in your site with very little work.

Potential Impacts of Not Backing Up Logs

Log analysis is important. It can show all sorts of things, from how much traffic each section of your site gets to whether you’re receiving strange requests that indicate some sort of intrusion. If your logs are important to you, there are some important things to remember about Azure Web Sites:

  • Once the logs reach 35 MB in size, logging will be disabled.
  • If a site’s logging functions are toggled off, it can take a few minutes for logging to re-activate (or deactivate). So if logging is turned off because you exceeded 35 MB, and you then realize it and clean out your log files, it might take a few minutes for the service to start logging again, resulting in visitors you don’t capture in your logs.
  • If your site starts to spew unhandled exceptions, the failed request and exception logs will record extensive information about each request, on a file-per-request basis. If the file count exceeds 50 for either of these log types, logging for whichever type exceeded 50 will be disabled.

 

For these reasons, it’s very helpful to set up a utility such as this one to automatically back up and clean out your log files. This utility makes that a snap, and since you can fire it by hitting a simple URL, there are countless scheduling options, which I introduce later in this post. For now, let’s take a look at how to get started.

How to Set it Up

Thanks to the convenient features made available by the NuGet team, installing this package makes a few changes to your Web.config file: an appSettings node is added, and an HTTP handler is registered. The handler, which by default answers requests at the URL /CleanTheLogs.axd, zips your log files into a flattened ZIP archive, sends that archive into Azure Blob Storage under a conveniently named file (where it can stay for as long as you need it), and then deletes the originals from your site’s storage space. All you need to do is hit the URL, and magic happens.

To configure it, create a storage account in the Azure portal and put the account name and key into your Web.config file. The NuGet package adds placeholders to the file to point out what you’ll need to supply on your own. Once it’s configured, you can set up a scheduled task to hit the URL at intervals, backing up your logs and keeping your sites tidy.

Placeholders in Web.config

Sound too easy? If you have any issues or want a more detailed explanation of exactly what happens, check out the video below. It’ll walk you through the whole process, from creating a site to peeking at the downloaded ZIP file.

Scheduling

There is a multitude of methods you could use to schedule the call to the URL and execute the log cleanup process. You could script it with a PowerShell cmdlet, write some C# code, or use a separate cron job service to run the task routinely for you. Nathan Totten wrote a great blog post on how to achieve task scheduling using a cron job service. Whatever your choice, one call to a URL does the entire job of saving and clearing your logs.

I can’t tell you how often you should schedule your log cleanup; each site is different. The guarantee is that if you put a site out there, you’ll get some traffic, even if it’s only robot or search engine traffic, and with traffic comes log data. If you’re getting a lot of traffic, it stands to reason you could reach the 35 MB limit in very little time. With a scheduled job like the one Nathan demonstrates in his post, you could have the log backup URL called as often as every 10 minutes; a minimal C# sketch of that kind of trigger follows.
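If you’d rather trigger it from your own code, here’s a small sketch of a console task that simply hits the handler. The site URL is a placeholder for your own site, and the handler path shown is the package’s default:

using System;
using System.Net.Http;

class TriggerLogCleanup
{
    static void Main()
    {
        using (var client = new HttpClient())
        {
            // Hitting the handler zips the logs, ships the archive to blob
            // storage, and clears the site's log folders.
            var response = client
                .GetAsync("http://your-site.azurewebsites.net/CleanTheLogs.axd")
                .GetAwaiter()
                .GetResult();

            Console.WriteLine("Log cleanup returned: " + response.StatusCode);
        }
    }
}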

Summary

You’ve got a lot more room in Azure Storage than you do in your web site folders, and you can use that room to keep your logs permanently without eating into your site’s space. Hopefully this utility solves that problem for you. If you have any issues setting it up, submit a comment below or find me on Twitter.

Update

The Azure Web Sites Log Cleaner source code has been published to a public GitHub.com repository. If you want to see how it works, or you’d like to change the functionality in some way, feel free to clone the repository.

 

The Kinectonitor has received a lot of commentary, and I’ve received some great ideas and suggestions on how it could be improved. There are a few architectural aspects of it that gave me some heartburn. One of those is that I failed to make use of any of Azure’s storage functionality to store the images. This post sums up how Blob Storage was added to the Kinectonitor’s architecture so that images can be stored in the cloud, not on the individual observer site’s web servers.

So we’re taking all these pictures with our Kinect device. Where are we storing the images? Are they protected by disaster recovery procedures? What if I need to look at historical evidence to see photos of intruders from months or years ago? Where will all of this be stored? Do I have to worry about buying a new hard drive for all those images?

These are legitimate concerns, and they can be solved by storing the photographs the Kinect takes in the Azure cloud. The video below shows a test harness I use in place of the Kinect device. This tiny application lets me select an image from my hard drive; the image content is then sent along the same path as the images that would be captured and sent into the cloud by the Kinect. The video demonstrates the code running at debug time, and then, using ClumsyLeaf’s CloudXplorer client, shows the files as they’ve been stored in the local development storage account’s Blob storage container.

Now we’ll take a slightly deeper dive into the changes made in this update of the Kinectonitor source code. If you’d like to grab that code, it’s available on GitHub.

The Kinectonitor Worker Role

This new project basically serves the purpose of listening for ImageMessage instances. There’s not a whole lot of code in the worker role; we’ll examine its purpose and functionality in more detail in a moment. For now, take a look at the role’s code in the Object Browser to get a quick understanding of what functions the role will provide to the overall Kinectonitor architecture.

image

In the previous release of the code, ImageMessage instances were the only things being used to pass information from the Kinect monitoring client to the Azure Service Bus, and then back down to the ASP.NET MVC client site. This release simplifies things somewhat, especially around the service bus. The previous code actually shipped the binary image data into and out of the Azure Service Bus; obviously that arrangement makes for huge message payloads. That communication was required before because the images were being stored as image files within the ASP.NET MVC site structure. Now the image data is stored in Azure Blob Storage, so all the ASP.NET MVC client sites need is the URL of the image to show to the user in the SignalR-powered HTML observation client.

If you haven’t yet taken in some of the great resources on the Microsoft Azure site, now would be a great time. The introductory how-to on Blob Storage was quite helpful in my understanding of how to do some of the Blob Storage-related functionality. It goes a good deal deeper into the details of how Blob Storage works, so I’ll refer you to that article for a little background.

The worker role does very little handiwork with the Blob Storage. Basically, a container is created in which the images will be saved, and that container’s accessibility is set to public. Obviously the images in the container will be served up in a web browser, so they’ll need to be publicly viewable.

image

The SaveImageToBlobStorage method, shown below, does the work of building a stream that passes the binary data into the Blob Storage account, where it’s saved permanently (or until a user deletes it).

image
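Here’s a rough sketch of the idea, written against the WindowsAzure.Storage client library; the container name, class shape, and method signature are assumptions rather than the project’s exact code:

using System;
using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public class BlobImageStore
{
    private readonly CloudBlobContainer _container;

    public BlobImageStore(string connectionString)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var client = account.CreateCloudBlobClient();

        // Create the container once and make its blobs publicly readable,
        // since the images will be served straight to the browser.
        _container = client.GetContainerReference("kinectimages");
        _container.CreateIfNotExists();
        _container.SetPermissions(new BlobContainerPermissions
        {
            PublicAccess = BlobContainerPublicAccessType.Blob
        });
    }

    // Saves the raw image bytes and returns the public URL of the blob.
    public Uri SaveImageToBlobStorage(byte[] imageBytes)
    {
        var blob = _container.GetBlockBlobReference(Guid.NewGuid() + ".jpg");
        using (var stream = new MemoryStream(imageBytes))
        {
            blob.UploadFromStream(stream);
        }
        return blob.Uri;
    }
}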

Note how the CloudBlob.Uri property exposes the URL where the blob can be accessed. In the case of images, this is quite convenient – all we need to be able to display an image is its URL and we’ve got that as soon as the image is saved to the cloud.

Simplifying the Service Bus Usage

As previously mentioned, the image data had been getting sent not only to the cloud, but out of the cloud and then stored in the folder tree of an ASP.NET MVC web site. Not exactly optimal for archival, definitely not for long-term storage. We’ve solved the storage problem by adding in the Blob Storage service, so the next step is to clean up the service bus communication. The sole message type that had been used between all components of the Kinectonitor architecture in the first release was the ImageMessage class, shown below.

image

Since the ImageMessage class is really only needed when the binary content of the image has to be passed around, a second class has been added to the messaging architecture. The ImageStoredMessage class, shown below, serves the purpose of informing the SignalR-driven web observation client that new images have been taken and saved into the cloud.

image
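A sketch of what such a message might look like; the property names are assumptions:

using System;

// Lightweight notification: "a new image has been stored; here's where it lives."
// Only the URL crosses the service bus now, not the image bytes.
public class ImageStoredMessage
{
    public string ImageUrl { get; set; }
    public DateTime CapturedUtc { get; set; }
}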

With the new concept of an image-stored event, and with the client only needing to know the URL of the latest image to show automatically in the browser, the service bus usage needed some rework. When the worker role spins up, the service bus subscription is established, just as it previously was directly within the MVC site, and the worker role listens for messages coming from the Kinect monitor.

image

When those messages are received, the worker role saves the images to Blob Storage and then publishes the stored image’s URL using the PublishBlobStoredImageUrl method.

image
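Pulling the pieces together, the worker role’s wiring might look roughly like the following sketch, which reuses the BlobImageStore sketch above; the ServiceBus fluent calls and the message property names are assumptions based on the abstractions described in this series:

using System;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        var bus = ServiceBus.Setup(new InitializationRequest { /* credentials */ });
        var store = new BlobImageStore("<storage connection string>");

        bus.Subscribe<ImageMessage>(message =>
        {
            // Save the incoming bytes to blob storage, then tell the web
            // site (via the bus) where the stored image now lives.
            var uri = store.SaveImageToBlobStorage(message.ImageBytes);
            bus.Publish(new ImageStoredMessage
            {
                ImageUrl = uri.ToString(),
                CapturedUtc = DateTime.UtcNow
            });
        });

        // Keep the role alive while the subscription listens for messages.
        Thread.Sleep(Timeout.Infinite);
    }
}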

Finally, one last change that will surely be augmented in a later revision is within the SignalR hub area. Previously, Hubs were wired up through a middle object, a service that wasn’t too thoroughly implemented. That service has been removed, and the Hubs are wired directly to the service bus subscription.

image

Obviously, this results in all clients seeing all updates. Not very secure, not very serviceable in terms of simplicity and customer separation. The next post in this series will add some additional features around customer segmentation and subscription, as well as potential authentication via the bolted-on Kinect identification code.

 

The Azure Service Bus Simplifier

posted 8 years ago in Azure

One of the things that makes enterprise service development of any type difficult is the requirement to learn that ESB’s programming model. The Azure Service Bus already has a fairly simple programming model, but for developers getting started with it for the first time who mainly want a simple publish/subscribe-style bus limited to a few types of custom messages, I’ve created a NuGet package called the ServiceBusSimplifier.

If all you need from the Azure service bus is a simple implementation of the pub/sub model, there’s no longer a need to learn all the plumbing and deeper detail work. Of course, there’s nothing wrong with learning the plumbing (doing so is encouraged), but for those just getting started with Azure’s topic-based messaging, the ServiceBusSimplifier package might be just the answer. It provides extremely basic access to publishing and subscribing to custom messages through the Azure service bus. The video below demonstrates how to use the package, and more detailed usage instructions follow the video.

Usage and Source Code

If you’re simply trying to use the package to ease your entry into Azure’s service bus, just pull it down from NuGet.

image

If you’d like to peruse the source code (which the remaining sections of this post will dive into one element at a time), or you’d like to add functionality to it, the code is available in a GitHub repository.

Setting up the Connection

To set up the service bus connection, call the Setup method, passing it an instance of the InitializationRequest class, which is the class the package uses to supply authentication and connectivity information to the service bus. The ServiceBus abstraction offers a fluent interface, so the methods can be chained together if need be.

image
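A sketch of that setup call; the InitializationRequest property names shown here are placeholders for whatever credentials the package actually expects, so check the package’s source for the exact shape:

// Supply the namespace and credentials the package needs to reach the bus.
// Property names below are illustrative, not the package's exact members.
var bus = ServiceBus.Setup(new InitializationRequest
{
    Namespace = "my-servicebus-namespace",
    IssuerName = "owner",
    IssuerKey = "<your issuer key>"
});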

Subscribing

As mentioned earlier, the ServiceBus abstraction offers very simple usage via self-documenting methods. The code below has been augmented to subscribe to messages of a custom type.

image

Note, there are no special requirements or inheritance chain necessary for a class to be passed around within this implementation. The class below is the one being used in this example, and in the GitHub repository.

image

Finally, here’s the HandleSimpleMessage method from the program class. Note that this could have been passed as an anonymous method rather than as a reference to an instance or static member. The video demonstration above shows such a usage, but it’s worth noting that a static method, an instance method, or an anonymous method can all be passed to the Subscribe method.

image

Publishing

The final piece of this demonstration involves publishing messages into the Azure service bus. The code below shows how to publish a message to the bus using the self-explanatory Publish method.

image
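Put together, a usage sketch might look like this; SimpleMessage is a stand-in for whichever plain class you choose to pass around, and the exact generic signatures of Subscribe and Publish are assumptions based on the usage described above:

using System;

// Any plain class will do as a message; no base class or attributes required.
public class SimpleMessage
{
    public string Text { get; set; }
}

class Program
{
    static void Main()
    {
        // Set up the bus as shown earlier.
        var bus = ServiceBus.Setup(new InitializationRequest { /* credentials */ });

        // Subscribe with a handler (static, instance, or anonymous), then publish.
        bus.Subscribe<SimpleMessage>(HandleSimpleMessage);
        bus.Publish(new SimpleMessage { Text = "Hello from the ServiceBusSimplifier" });
    }

    static void HandleSimpleMessage(SimpleMessage message)
    {
        Console.WriteLine("Received: " + message.Text);
    }
}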

Hopefully, the ServiceBusSimplifier package will ease your development experience with the Azure service bus. Even though the Azure service bus is already pretty simple to use, this handy utility library gives your code one-line access to the publish/subscribe mechanisms built into the cloud. Happy coding!

 

The Kinectonitor

posted 8 years ago in Azure Kinect SignalR

Suppose some scary-looking hoodlum was walking around your house while you were out. You’d want to know about it, wouldn’t you? Take one Kinect, mix in a little Azure Service Bus, sprinkle in some SignalR, stir it all together with some elbow grease, and you could watch in near-real-time as sinewy folks romp through your living room. Here’s how.

You might not be there (or want to be there) when some maniac breaks in, but it’d be great to have a series of photographs with the dude’s face to aid the authorities in their search for your home stereo equipment. The video below is a demonstration of the code this post will dive into in more detail. I figured it’d give some context to the problem this article will be trying to solve.

I’d really like to have a web page where I could go to see what’s going on in my living room when I’m not there. I know that fancy Kinect I picked up for my kids and their Xbox can do that sort of thing, and I know how to code some .NET. Is it possible to make something at home that’d give me this?

Good news! It isn’t that difficult. To start with, take a look at the Kinectonitor Visual Studio solution below.

image

At a high level, it’ll provide the following functions. The monitor application will watch over a room. When a skeleton is detected in the viewing range, a photograph will be taken using the Kinect camera. The image will be published to the Azure Service Bus, using a topic-based publish/subscribe approach. An MVC web site will subscribe to the Azure Service Bus topic, and whenever the subscriber receives a ping from the service bus with a new image taken by the Kinect, it will use a SignalR hub to update an HTML client with the new photo. Here's a high-level architectural diagram of how the whole thing works, end-to-end.

The Kinectonitor Core Project

Within the core project exist a few common areas of functionality. The idea behind the core project is to provide a domain structure, a functional abstraction, and an initial implementation of the image publication concept. For all intents and purposes, the Kinect will be publishing messages containing image data to the Azure Service Bus, allowing subscribers (which we’ll get to in a moment) their own autonomy. The ImageMessage class below illustrates the message that’ll be transmitted through the Azure Service Bus.

image
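A sketch of roughly what that message contains; the property names here are assumptions:

using System;

// The message the Kinect client publishes to the service bus: the captured
// image itself, plus when it was taken. Property names are illustrative.
public class ImageMessage
{
    public byte[] ImageBytes { get; set; }
    public DateTime CapturedUtc { get; set; }
}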

A high-level abstraction is needed to represent consumption of image messages coming from the Azure cloud. The purpose of the IImageMessageProcessor service is to receive messages from the cloud and then notify its own listeners that an image has been received.

image
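A sketch of the shape of that abstraction; the member names are assumptions about the prototype rather than its exact code:

using System;

// The service bus subscription calls Process for each incoming message;
// interested parties (such as the SignalR hub) hook ImageReceived, which
// carries the URL of the newly saved image.
public interface IImageMessageProcessor
{
    event Action<string> ImageReceived;
    void Process(ImageMessage message);
}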

A simple implementation is needed to receive image messages and to notify observers when they’re received. This implementation will allow the SignalR hub, which we’ll look at next, to get updates from the service bus; this abstraction and implementation are the custom glue that binds the service bus subscriber to the web site.

image

Next up is the MVC web site, in which the SignalR hub is hosted and served up to users in an HTML client.

Wiring Up a SignalR Hub

Just before we dive into the MVC code itself, take a quick look at the solution again and note the ServiceBusSimplifier project. This is a super naïve, demo-class wrapper around the Azure Service Bus that was inspired by the far-more-complete implementation Joe Feser shares on GitHub. I used Joe’s library to get started with Azure Service Bus and really liked some of his hooks, but his implementation was overkill for my needs, so I borrowed some of his ideas in a tinier implementation. If you’re deep into Azure Service Bus, though, you should totally look into Joe’s code.

image

Within the ServiceBusSimplifier project is a class that provides a fluent wrapper abstraction around the most basic Azure Service Bus publish/subscribe concepts. The ServiceBus class (which could probably stand to be renamed) is shown below, collapsed. The point here is just to convey how this abstraction is going to simplify things from here on out. I’ll post a link to download the source code for this article in a later section. For now, just understand that the projects will use this abstraction to streamline development and form a convention around the Azure Service Bus usage.

image

A few calls will be made to the ServiceBus.Setup method, specifically to provide Azure Service Bus authentication details. The classes that represent this sort of thing are below.

image

Now that we’ve covered the shared code that the MVC site and WPF/Kinect app will use to communicate via the Azure Service Bus, let’s keep rolling and see how the MVC site is connected to the cloud using this library.

In this prototype code, the Global.asax.cs file is edited. A property is added to the web application to expose an instance of the MockImageMessageProcessor (a more complete implementation would probably use an IoC container to hold the service instance) for use later on in the SignalR hub. Once the service instance is created, the Azure Service Bus wrapper is created and ImageMessage messages are subscribed to, with each incoming message handed to the site’s MessageProcessor instance via its Process method.

image

When the application starts up, the instance is created and shared with the web site’s server-side code so the SignalR hub can make use of it. The hub listens for ImageReceived events coming from the service; whenever it handles one, it turns around and notifies the connected clients that a new photo has arrived.

image
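A sketch of that wiring, using the IImageMessageProcessor abstraction sketched earlier and written against the later SignalR 2.x API for clarity (the prototype used an earlier SignalR release, so the names here are illustrative):

using Microsoft.AspNet.SignalR;

// The hub itself needs no members; clients connect to it and wait for pushes.
public class KinectonitorHub : Hub { }

public static class KinectonitorNotifier
{
    // Called once at application start with the shared processor instance.
    public static void Attach(IImageMessageProcessor processor)
    {
        var hub = GlobalHost.ConnectionManager.GetHubContext<KinectonitorHub>();

        // Whenever the processor reports a new image, invoke the client-side
        // "newPhoto" callback on every connected browser with the image URL.
        processor.ImageReceived += imageUrl => hub.Clients.All.newPhoto(imageUrl);
    }
}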

With the hub created, a simple Index view (and controller action) provides the user-facing side of the Kinectonitor. The HTML/jQuery code below demonstrates how the client responds when messages arrive. There isn’t much to this part, really: the code just changes the src attribute of an img element in the HTML document, then fades the image in using a little jQuery sugar.

image

Now that the web client code has been created, we’ll take a quick look at the Kinect code that captures the images and transmits them to the service bus.

The Kinectonitor Monitor WPF Client

Most of the Kinect-interfacing code comes straight from the samples available with the Kinect SDK download. The main point of examining the WPF client is to see how it publishes the image messages into the Azure cloud.

The XAML code for the main form of the WPF app is about as dirt-simple as you could get. It just needs a way to display the image being taken by the Kinect and the skeletal diagram (the code available from the Kinect SDK samples). The XAML for this sample client app is below.

image

When the WPF client opens, the first step is, of course, to connect to the Kinect device and the Azure Service Bus. The OnLoad event handler below is how this work is done. Note that this code also instantiates a Timer instance. That timer will be used to control the delay between photographs, and will be looked at in a moment.

image

Whenever image data is collected from the camera it’ll be displayed in the WPF Image control shown earlier. The OnKinectVideoReady handler method below is where the image processing/display takes place. Take note of the highlighted area; this code sets an instance of a BitmapSource object, which will be used to persist the image data to disk later.

image

Each time the Kinect video image is processed, a new BitmapSource instance is created. Remember the Timer instance from the OnLoad handler earlier? That timer’s handler method is where the image data is saved to disk and transmitted to the cloud. Note the check being performed on the AreSkeletonsBeingTracked property; that’s the last thing we’ll look at, and it ties the WPF functionality together.

image

If the Kinectonitor WPF client just continually took snapshots and sent them into the Azure cloud, the eventual price would probably be prohibitive. The real idea behind a monitor like this is to capture activity only when people show up at unexpected times. So the WPF client watches for skeletons using the built-in Kinect skeleton tracking functionality (and code from the Kinect SDK samples). If a skeleton is being tracked, we know someone’s in the room and that a photo should be taken. Given that a skeleton might be tracked continuously for a few seconds (or minutes, or longer), the Kinect will keep photographing for as long as the skeleton is tracked, and as soon as tracking stops, a final photo is taken, too. The code that sets the AreSkeletonsBeingTracked property value during the skeleton-ready event handler is below.

image

Some logic occurs in the setter of the AreSkeletonsBeingTracked property, just to sound the alarms as soon as a skeleton is tracked, without having to wait the typical few seconds for the next timer tick.

image
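A sketch of the idea behind that setter; the class name and the CaptureAndPublishPhoto helper are hypothetical stand-ins for the client’s actual capture-and-send routine:

// Fragment-style sketch of the monitor window's tracking state.
public class MonitorWindowSketch
{
    private bool _areSkeletonsBeingTracked;

    public bool AreSkeletonsBeingTracked
    {
        get { return _areSkeletonsBeingTracked; }
        set
        {
            // Detect the transition from "nobody tracked" to "somebody tracked".
            bool someoneJustAppeared = value && !_areSkeletonsBeingTracked;
            _areSkeletonsBeingTracked = value;

            // Photograph immediately rather than waiting for the next timer tick.
            if (someoneJustAppeared)
            {
                CaptureAndPublishPhoto();
            }
        }
    }

    private void CaptureAndPublishPhoto()
    {
        // Placeholder: snap the current frame and publish an ImageMessage.
    }
}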

That’s it for the code! One more note – it helps if the Kinect is up high or in front of a room (or both). During development of this article I just placed mine on top of the fridge, which is next to the kitchen bar where I do a lot of work. It could see the whole room pretty well and picked up skeletons rather quickly. Just a note for your environment testing phase.

IMAG0356

Summary

This article pulled together a few of the techniques and tools recently released to .NET developers. With a little creativity and some time, it’s not difficult to use those components to make something pretty neat at home in the physical computing world. Pairing these disciplines up to create something new (or to do something old in a different way) is great fodder for larger projects using the same technologies later.

If you’d like to view the Kinectonitor GitHub source code, it’s right here.