An approach to building .NET Core apps using Bamboo and Cake

Bamboo is my build server of choice because I find it simple to set up, and it has great integration with the rest of the Atlassian stack, such as our JIRA and Bitbucket Server instances.

Bamboo has had native support for MSBuild-based projects for ages, but with dotnet build being the new sexiness, I wanted to get my CI workflow up and running for my .NET Core applications.

Now there are quite a few challenges to face when setting up a CI build:

  1. Versioning
  2. Building
  3. Testing
  4. Deployment

I decided to tackle these things in stages.

Versioning

In the pre-project.json world, if you wanted to version a set of projects within the same solution together, you could achieve that either by generating something like an AssemblyInfo.cs file at build time, or perhaps by using a SharedAssemblyInfo.cs link approach whereby you manually set your numbers for the entire solution.

In the project.json world this isn't possible, because the [Assembly*Version] attributes are generated by dotnet-build. You might be able to add these manually yourself, but I haven't experimented with that.
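For reference, these are the attributes that a (Shared)AssemblyInfo.cs would typically carry:

using System.Reflection;

[assembly: AssemblyVersion("1.0.0.0")]
[assembly: AssemblyFileVersion("1.0.0.0")]
[assembly: AssemblyInformationalVersion("1.0.0-master")]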

So let's look at an example. Here is one library, MyLibrary.Abstractions:

{
  "version": "1.0.0-*",
  "dependencies": { },
  "frameworks": {
    "netstandard1.6": { }
  }
}

And here's my implementation library, MyLibrary:

{
  "version": "1.0.0-*",
  "dependencies": { 
    "MyLibrary.Abstractions": "1.0.0-*"
  },
  "frameworks": {
    "netstandard1.6": { }
  }
}

Right off the bat, I can see a couple of issues. The version number is fixed into the version key, and the dependency version of the abstractions library is also fixed.

The version string 1.0.0-* is significant in that it pins an exact Major.minor.patch number, but allows matching on any pre-release string. What this means is that when a build occurs, dotnet-build generates a pre-release string (because of the presence of -*) which is a timestamp. This aligns the version numbers of both the MyLibrary and MyLibrary.Abstractions packages. Prior to RC2, you could simply do this:

"dependences": {
  "MyLibrary.Abstractions": ""
}

This is no longer possible, so if I need to version both components together, I need to do something different. Firstly, I need to tackle that version key and have that carry the same value in both project.json files.

Setting the version before project.json

I don't really believe in the term "calculating a version" because that implies some sort of formulaic approach to determining the version components.

Unless you had some awesome code-analysis tool that could compare your codebase before and after a commit to determine the type of changes and how they affect your public API, the choice of Major.minor.patch has to rely solely on the developer, because only they will know the intent of their change. To this end, I decided to take an approach similar to GitVersion's GitVersion.yml file, where I can express part of the version number myself (the part I care about - Major.minor.patch) and generate the pre-release string from the branch/commit information. I also needed to be able to surface this information in Bamboo itself so I can attribute it to future deployments.

For this, I define a simple text file, build.props:

versionMajor=1  
versionMinor=0  
versionRev=0  

This file would be committed to source control so it can be shared with other devs (to align versions) and the CI server.

Next, I use my branch information to determine the pre-release string (if any), so for instance:

  • If the branch is not release, I will generate a pre-release string.
    • If we are building locally, the pre-release string is simply -<branch-name>, e.g. -master, or -feature-A
    • If we are building on the CI server which drops packages into a NuGet feed, we include the total commit count as -<branch-name>-<commit-count>. I can't take advantage of the +build metadata yet because our deployment server (Octopus Deploy) targets an older version of NuGet. I use commit count and not build number because if I do multiple builds of the same commit, they are the same output, so should carry the same version number.
  • If the branch is release, I will not generate a pre-release string.

This means I can generate version numbers such as 1.0.0 (release branch), 1.0.0-master-1 (master branch on CI server), 1.0.0-feature-A (feature/A branch on a local machine).

I wrap up the logic for this version number generation in a PowerShell script named version.ps1. This script generates the version number and writes it out to a local file named version.props, and this version information is then stamped into each project.json file. A generated version.props looks like this:

version=1.0.0  
semanticVersion=1.0.0-master  
prerelease=master  
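For illustration, here is a minimal C# sketch of the branch-based logic (it could equally live inside the Cake script); the real PowerShell implementation is in the Gist linked at the end of this post:

// Sketch only: assumes the branch name, local flag and total commit
// count have already been determined.
string GetPreReleaseString(string branch, bool local, int commitCount)
{
    // Release builds carry no pre-release string.
    if (branch == "release")
    {
        return string.Empty;
    }

    // Normalise the branch name, e.g. feature/A -> feature-A.
    string name = branch.Replace('/', '-');

    // Local builds use the branch name alone; CI builds append the commit count.
    return local ? "-" + name : "-" + name + "-" + commitCount;
}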

Handling dependencies versioning

We still haven't solved how we update the dependency versions in project.json for projects in the same solution. The truth is, we don't. Right at the start, we simply change the dependency version from a string to an object (a great tip from Andrew Lock):

"dependences": {
  "MyLibrary.Abstractions": { "target": "project" }
}

This allows the version resolution to match on any version. It's not a perfect approach - in fact, the compiler explicitly warns about a version mismatch - but as these are projects in the same solution being versioned together, that is a warning I am happy to put up with. You wouldn't apply * as the version to dependencies outside of the current solution; really, these are project-to-project references only.

Building

Now building my solution could be as easy as dotnet build **/project.json, but the build process is a bit more involved because we have to stamp our version information in (detailed above), as well as run the test and pack commands to prepare our outputs. Enter Cake.

I've been following Cake for a while because I've honestly struggled with other build systems, such as FAKE, PSake, etc. I'm a C# developer, and Cake for me is a breeze because it presents a DSL that you write in C#, my language of choice! Cake is also extensible, so that was my point of entry for handling my version stamping. I first define a task named Version:

Task("Version")  
.Does(() =>
{
    if (Bamboo.IsRunningOnBamboo)
    {
        // MA - We are running a CI build - so we need to make sure we execute the script with -local $false
        StartPowershellFile("./version.ps1", args => 
        {
            args.Append("local", "$false");
            args.Append("branch", EnvironmentVariable("bamboo_planRepository_branchName"));
        });
    }
    else
    {
        StartPowershellFile("./version.ps1", args => args.Append("local", "$true"));
    }

    string[] lines = System.IO.File.ReadAllLines("./version.props");
    foreach (string line in lines)
    {
        if (line.StartsWith("version"))
        {
            version = line.Substring("version=".Length).Trim();
        }
        else if (line.StartsWith("semanticVersion"))
        {
            semanticVersion = line.Substring("semanticVersion=".Length).Trim();
        }
        else if (line.StartsWith("prerelease"))
        {
            prerelease = line.Substring("prerelease=".Length).Trim();
        }
    }

    Console.WriteLine("Version: {0}", version);
    Console.WriteLine("SemanticVersion: {0}", semanticVersion);
    Console.WriteLine("PreRelease: {0}", prerelease);

    DotNetCoreVersion(new DotNetCoreVersionSettings
    {
        Files = GetFiles("**/project.json"),
        Version = semanticVersion
    });
});

The last method call is the key part: once I've executed my versioning script, I read the version number and use a custom Cake extension I've built, DotNetCoreVersion, to load each target project.json as a JObject, set the version key and write them back out again.
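In essence, the extension does something like the following (a simplified sketch using Json.NET; the method shape here is illustrative, not the actual addin API):

using Newtonsoft.Json.Linq;

// Load each project.json, overwrite the version key, and write it back out.
void StampVersions(IEnumerable<FilePath> files, string version)
{
    foreach (var file in files)
    {
        var json = JObject.Parse(System.IO.File.ReadAllText(file.FullPath));
        json["version"] = version;
        System.IO.File.WriteAllText(file.FullPath, json.ToString());
    }
}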

Now I can perform my build using another task Build:

Task("Build")  
.Does(() =>
{
    // MA - Build the libraries
    DotNetCoreBuild("./src/**/project.json", new DotNetCoreBuildSettings
    {
        Configuration = configuration
    });

    // MA - Build the test libraries
    DotNetCoreBuild("./tests/**/project.json", new DotNetCoreBuildSettings
    {
        Configuration = configuration
    });
});

Cake has built-in methods for building .NET Core applications, so that made it a lot easier! On the Bamboo side of things, Cake is bootstrapped by another PowerShell script, build.ps1, so thanks to Bamboo's native PowerShell script integration, we simply execute our build script:

Configuring the Cake bootstrapper

Testing

Although there is now support for both NUnit and MSTest, the best test library for .NET Core apps is currently Xunit, and that's purely a side-effect of the Microsoft team favouring Xunit itself during development. We have a problem here - Bamboo doesn't understand Xunit test result XML. Luckily, there exists an XSLT for transforming from Xunit to NUnit, which Bamboo does understand.

We wrap this up in our Cake build script:

Task("Test")  
.WithCriteria(() => HasArgument("test"))
.Does(() =>
{
    var tests = GetFiles("./tests/**/project.json");
    foreach (var test in tests) 
    {
        string projectFolder = System.IO.Path.GetDirectoryName(test.FullPath);
        // Derive the project name from its folder (assumes Windows path separators).
        string projectName = projectFolder.Substring(projectFolder.LastIndexOf('\\') + 1);
        string resultsFile = "./test-results/" + projectName + ".xml";

        DotNetCoreTest(test.FullPath, new DotNetCoreTestSettings
        {
            ArgumentCustomization = args => args.Append("-xml " + resultsFile)
        });

        // MA - Transform the result XML into NUnit-compatible XML for the build server.
        XmlTransform("./tools/NUnitXml.xslt", "./test-results/" + projectName + ".xml", "./test-results/NUnit." + projectName + ".xml");
    }
});

Now that we are outputting NUnit test result XML, we can read that information in during a Bamboo build plan and surface the test results in the interface. This also means that builds can now fail because of test failures, which is what we want.

Test results surfaced in Bamboo

Deployments

Bamboo does have a built-in deployment mechanism, and for our internal libraries we utilise this to push our packages into one of two NuGet feeds:

  • If it is a stable build from our release branch, it goes into the stable NuGet feed. These do not automatically deploy, but they can easily be deployed at the push of a button (continuous delivery).
  • If it is a build from our master branch, it is automatically pushed to our volatile NuGet feed (continuous deployment).

We use ProGet by Inedo, as it is a superbly stable, multi-feed package host which is quick and easy to set up. By deploying our packages to these feeds, everything stays internal to our development environment and we can quickly start using our updated packages in our other projects. If we need to, we can quickly spin up a project-specific feed, or perhaps a branch-specific feed, and deploy different versions of our code for different clients/scenarios.

One of the last steps of the build script is to pack everything together:

Task("Pack")  
.WithCriteria(() => HasArgument("pack"))
.Does(() =>
{
    var projects = GetFiles("./src/**/project.json");
    foreach (var project in projects)
    {
        // MA - Pack the libraries
        DotNetCorePack(project.FullPath, new DotNetCorePackSettings
        {
            Configuration = configuration,
            OutputDirectory = "./artifacts/"
        });
    }
});

The dotnet-pack tool generates our NuGet packages for us - both the binaries and the symbols. ProGet can host both of these, so we just ship them all to the ProGet API and it handles the rest for us. This deployment step is handled as a Bamboo deployment project. For each module in our framework, we have two deployment plans: the first is the volatile plan, which uses continuous deployment to drop new packages into our volatile feed; the second is our stable plan, which (when manually triggered) deploys to our stable feed.

We need to make sure the version information is carried through to the deployment plan, so to tackle that, in the source Bamboo build plan we read in the contents of our generated version.props file:

Reading the generated version number

The "Inject Bamboo variables" task allows us to read files in <key>=<value> format and append them as Bamboo variables. In this instance, we read in the version number and add it to the bamboo.props.semanticVersion variable. The variables need to be available to the result otherwise we can't use them later.

Configuring the release version

And that's pretty much it! Obviously, this is an approach that works well for me; it may not suit your needs, but luckily there are many ways of achieving the same thing. This will likely all need to change anyway, as the Microsoft team are busily migrating back to MSBuild, which means we may be able to use more familiar methods of generating AssemblyInfo.cs files again.

The source files for the different components are available as a Gist: https://gist.github.com/Antaris/8ad52a96e0f2d9f682d1cd6342c44936

Let me know what you think.

ASP.NET Core 1.0 - Routing - Under the hood

Routing was introduced to .NET alongside the release of ASP.NET MVC 1.0 back in 2009. Routing is the process of taking an input URL and mapping it to a route handler. It integrated into the ASP.NET pipeline as an IHttpModule - the UrlRoutingModule.

Current ASP.NET Routing

Let's remind ourselves about how the ASP.NET 4.x pipeline works:

ASP.NET Pipeline

Routing integrated with the pipeline as an IHttpModule, and when a route was resolved, it would bypass the rest of the pipeline and delegate to the final IHttpHandler through a new factory-type interface, the IRouteHandler:

public interface IRouteHandler  
{
    IHttpHandler GetHttpHandler(RequestContext context);
}

It was through this IRouteHandler that MVC integrated with Routing, and this is important, because MVC-style URLs are generally extensionless, so the routing system enabled these types of URLs to be mapped to a specific IHttpHandler - in the case of MVC, the MvcHandler, which was the entry point for controller/action execution. This meant we didn't need to express a whole host of <httpHandlers> rules for each unique route in our web.config file.

MVC integration with Routing in ASP.NET Pipeline

Mapping Routes

The MVC integration provided the MapRoute methods as extensions for a RouteCollection. Each Route instance provides a RouteHandler property, which by default is set to an MvcRouteHandler (through the IRouteHandler abstraction):

routes.MapRoute(  
    "Default", 
    "{controller}/{action}/{id}", 
    new { controller = "Home", action = "Index", Id = UrlParameter.Optional }); 

This method call creates a Route instance with the MvcRouteHandler set. You could always override that if you wanted to do something slightly more bespoke:

var route = routes.MapRoute(  
    "Default", 
    "{controller}/{action}/{id}", 
    new { controller = "Home", action = "Index", Id = UrlParameter.Optional }); 

route.RouteHandler = new MyCustomHandler();
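For example, a bespoke handler factory might look like this (MyHttpHandler being an assumed IHttpHandler implementation):

public class MyCustomHandler : IRouteHandler
{
    public IHttpHandler GetHttpHandler(RequestContext context)
    {
        // Inspect the resolved route data and return the IHttpHandler
        // that should process this request.
        return new MyHttpHandler(context.RouteData);
    }
}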

The Routing table

In current Routing (System.Web.Routing), routes are registered into a RouteCollection, which forms a linear collection of all possible routes. When routes are being processed against an incoming URL, they form a top-down queue, where the first Route that matches wins. In ASP.NET 4.x there can only be one route collection for your application, so all of your routing requirements had to be fulfilled by this instance.

ASP.NET Core 1.0 Routing

How has routing changed for ASP.NET Core 1.0? Quite significantly really, but the team have managed to maintain a very familiar API shape, and it is now integrated into the ASP.NET Core middleware pipeline.

The new Routing framework is based around the concept of an IRouter:

public interface IRouter  
{
    Task RouteAsync(RouteContext context);

    VirtualPathData GetVirtualPath(VirtualPathContext context);
}

An instance of RouterMiddleware can be created using any instance of IRouter. You can think of this as the root of a routing tree. Routers can be implemented any which way, and can be plugged directly into the pipeline using this middleware. To handle the classical routing collection (or route table), we now have an IRouteCollection abstraction, itself extending IRouter. This means that any RouteCollection instance acts as a router and can be used directly with the middleware:

ASP.NET Core Routing Middleware
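As a minimal sketch (MyRouter being an assumed IRouter implementation), plugging a custom router into the pipeline looks like this:

public void Configure(IApplicationBuilder app)
{
    // Wires a RouterMiddleware up with our custom router
    // (assumes routing services are registered via services.AddRouting()).
    app.UseRouter(new MyRouter());
}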

This is how MVC hooks into the pipeline. When you call app.UseMvc(r => { }) and configure your routes, you're actually using a new IRouteBuilder abstraction which is used to build a router instance:

public interface IRouteBuilder  
{
    IRouter DefaultHandler { get; set; }

    IServiceProvider ServiceProvider { get; }

    IList<IRouter> Routes { get; }

    IRouter Build();
}

For MVC, the DefaultHandler property is an instance of MvcRouteHandler, and this does the work of selecting an action to execute.

The MapRoute methods are now provided as extensions of IRouteBuilder and they work by creating new routes and adding them to the builder's Routes collection. When the final Build call is executed, the standard RouteBuilder creates an instance of RouteCollection, which acts as our router for our middleware instance.
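For example, the familiar default route registration now looks like this:

app.UseMvc(routes =>
{
    routes.MapRoute(
        name: "default",
        template: "{controller=Home}/{action=Index}/{id?}");
});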

Remembering this is important if you are migrating from an ASP.NET 4.x application to ASP.NET Core and you've invested heavily in tweaking the Routing framework to suit your needs.

A quick note on Attribute Routing

Attribute Routing is a feature of MVC and not directly tied to the Routing framework. Because MVC creates its own router, it can control at what point it integrates attribute routing. It does this during the call to UseMvc(r => { }) by injecting a single instance of AttributeRoute at the start of the route collection, once all other routes have been configured.

This single instance of AttributeRoute acts as the router for handling Controller and Action-level attribute routes, using the application model as the source of truth.

Making decisions

Using the standard MapRoute method you end up with an instance of a TemplateRoute, which works on a route template string, such as {controller}/{action}/{id?}. When the RouteAsync method executes for this type, it checks whether the incoming request path matches the route template. If it does, it then checks any applied constraints to determine whether the route values lifted from the incoming request path are valid. If either of these steps returns false, the route does not match and control is returned to the route collection to test the next route. This is very similar to how conventional routing operates in ASP.NET 4.x.

Finishing up

Hopefully you can appreciate this run-through of the under-the-hood changes made to the Routing framework. The newer Routing framework offers up greater flexibility in composing our applications because of its integration with the middleware pipeline.

It's worth having a look at the GitHub repo code for yourself.

ASP.NET Core 1.0 - Dependency Injection - What it is, and what it is not

Since its inception, ASP.NET 5 (now known as ASP.NET Core 1.0) has had the concept of Dependency Injection (DI) baked into its foundation. While previous iterations of MVC supported this mechanism, it was always value-added. Also, at that stage it wasn't really DI; it was a Service Locator pattern, into which you could plug a compatible IoC container to get DI.

With the ability to redesign the entire stack from the ground up, the team took the approach of building in support for DI as a first-class feature. This was a natural extension of the desire to break up the stack into a set of composable NuGet packages - they needed a way to bring a whole host of components together - built around a set of known abstractions. These components needed to be testable too.

The built-in container represents a baseline set of functionality required to support ASP.NET Core 1.0. It only supports scenarios that are used by the framework - it is very focused on what it needs to do. With that in mind, you won't find it supporting some advanced concepts such as mutable containers, named child scopes, etc. found in other containers. That being said, ASP.NET Core is designed to allow you to plug in an alternative container that supports that additional functionality.

Foundations - IServiceProvider

The IServiceProvider interface has been kicking around since .NET 1.1 and is used by a variety of components throughout the Desktop CLR (.NET Framework) for providing a mechanism through which to resolve service instances. This includes the classic HttpContext (but don't try to resolve your IoC components through it - you can only return a few types, like HttpRequest, HttpResponse, etc.; it's not hooked up to your IoC container).
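The interface itself is tiny - a single resolution method:

public interface IServiceProvider
{
    object GetService(Type serviceType);
}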

For the DI story in ASP.NET Core, the team have re-used this pre-existing interface, but it is now the service locator abstraction for the entire ASP.NET Core stack. In that sense, it sort of fills the role of the CommonServiceLocator for ASP.NET Core: to plug any other IoC/DI system into ASP.NET Core, you have to implement this standard interface. The stack uses this interface for resolving its types, and this means that although the framework has built-in support through its own container, you can easily plug in any other container - as long as it brings its own implementation of IServiceProvider along for the ride. Sort of...

There is actually another contract that needs to be implemented to make an ASP.NET Core-compatible container - IServiceScopeFactory. This interface is used for provisioning a new lifetime scope (in terms of the built-in DI). For the built-in story, this is provided out of the box, and it is through this mechanism that request-scoped services are resolved.
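Both contracts are small - a factory that provisions disposable scopes, each carrying its own IServiceProvider:

public interface IServiceScopeFactory
{
    IServiceScope CreateScope();
}

public interface IServiceScope : IDisposable
{
    IServiceProvider ServiceProvider { get; }
}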

ServiceDescriptor and IServiceCollection

The ServiceDescriptor type (Microsoft.Extensions.DependencyInjection.Abstractions package) provides a container-agnostic approach to describing your services and their lifetimes. Generally, you take the approach of using a set of extension methods over IServiceCollection, such as:

services.AddTransient<IMyService, MyService>();  
services.AddMvc();  

These are wrappers around calls to ServiceDescriptor.[Instance|Transient|Scoped|Singleton]. You can easily use ServiceDescriptor directly:

var descriptor = ServiceDescriptor.Transient<IMyService, MyService>();  

An IServiceCollection is a mutable collection of ServiceDescriptor instances.
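Because it is just a collection, the descriptor created above can be added to it directly:

// Equivalent to calling services.AddTransient<IMyService, MyService>()
services.Add(descriptor);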

Under the hood of the ServiceProvider

The built-in container is an internal implementation named ServiceProvider found in the Microsoft.Extensions.DependencyInjection package. An extension method of IServiceCollection is provided to initialise a new instance of it:

public static IServiceProvider BuildServiceProvider(this IServiceCollection services)  
{
    return new ServiceProvider(services);
}

When you initialise a service provider with an IServiceCollection instance, it creates a new root container. It contains an instance of ServiceTable, which contains the blueprints for creating instances of services with their required lifetimes. Through the use of IServiceScopeFactory it is also possible to initialise a new instance of ServiceProvider using the existing container as the root. The important thing about this is that they share the same ServiceTable instance, which means it is not designed to allow modifications to service registrations in child scopes. The idea is that you configure your container once, reducing the set of moving parts.

A ServiceTable represents one or more IService instances, whereby an IService is a binding between a type, a ServiceLifetime and an IServiceCallsite, the latter of which actually realizes the instance of the service type. When a call to GetService is received, the container follows these steps to obtain the implementation instance:

  1. Check the cache of realized services for a delegate used to obtain the instance.
  2. If one does not exist, go through the ServiceTable and find the IServiceCallsite instance.
  3. Create a delegate used to obtain the instance through the IServiceCallsite.
  4. Cache the delegate for future calls.

On the first call for a service instance, the container uses a reflection-based approach for obtaining the instance, but subsequent calls may result in the container opting to generate an expression tree, which compiles down to a delegate for future calls. This optimises occurrences where a component may be requested multiple times, depending on your chosen ServiceLifetime.

Optimizations for IEnumerable<T> services

The built-in container supports IEnumerable<T> directly, and it optimizes discovering T instances by chaining IService entries together through the IService.Next property. This means when the container is realizing an instance of IEnumerable<T>, it can move through the ServiceTable in a linked-list fashion to obtain the IService instances quickly.
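As a quick sketch (IFormatter, XmlFormatter and JsonFormatter are assumed types), registering multiple implementations and resolving them all looks like this:

var services = new ServiceCollection();
services.AddTransient<IFormatter, XmlFormatter>();
services.AddTransient<IFormatter, JsonFormatter>();

var provider = services.BuildServiceProvider();

// Both registrations are returned, in registration order.
var formatters = provider.GetService<IEnumerable<IFormatter>>();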

Use in ASP.NET Core

The out-of-the-box experience provides the built-in set of DI components. The default convention is to simply use the Startup.ConfigureServices method to apply your service registrations:

public void ConfigureServices(IServiceCollection services)  
{
    services.AddTransient<IMyService, MyService>();
    services.AddMvc();
}

The framework will take care of calling services.BuildServiceProvider() after this call has completed.

Replacing the built-in container

This same mechanism for registering services can be used for returning a custom IServiceProvider:

public IServiceProvider ConfigureServices(IServiceCollection services)
{
    services.AddTransient<IMyService, MyService>();
    services.AddMvc();

    return services.BuildServiceProvider();
}

That final call, return services.BuildServiceProvider(), could easily be replaced with other containers, such as Autofac or perhaps Ninject. After ASP.NET Core RTWs (but hopefully before!), I would expect to see most, if not all, of the popular IoC containers implement a compatible IServiceProvider, allowing you to use the container of your choice if the built-in container does not fit your requirements.
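As a rough sketch using Autofac's integration package (Autofac.Extensions.DependencyInjection - the exact API surface may differ between versions), that might look like:

public IServiceProvider ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    var builder = new ContainerBuilder();
    builder.RegisterType<MyService>().As<IMyService>();

    // Flow the framework's registrations into the Autofac container.
    builder.Populate(services);

    return new AutofacServiceProvider(builder.Build());
}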

Use outside of ASP.NET Core

It is entirely possible to use the ASP.NET Core built-in DI container outside of ASP.NET Core - this is signified by the fact that the abstractions and implementations aren't actually part of the Microsoft.AspNetCore namespace. Like many of these utility services (such as FileProviders, etc.), they can be used independently.

Creating a container manually

You can easily create your own container using the same mechanism, the IServiceCollection. The Microsoft.Extensions.DependencyInjection.ServiceCollection implementation needs to be spun up and some registrations need to be added:

var services = new ServiceCollection();

services.AddTransient<IMyService, MyService>();  
services.AddScoped<IMyOtherService, MyOtherService>();  

You can then build your container through the BuildServiceProvider extension method:

var container = services.BuildServiceProvider();

var myService = container.GetService<IMyService>();  
var myOtherService = container.GetService<IMyOtherService>();  

To create a child scope, you can resolve the scope factory:

var scopeFactory = container.GetService<IServiceScopeFactory>();

using (var scope = scopeFactory.CreateScope())
{
    var scopedContainer = scope.ServiceProvider;

    var myOtherServiceScoped = scopedContainer.GetService<IMyOtherService>();
}

Finishing up

I hope this post gives you more of an in-depth look at the built-in container - what it is, and how it works. Don't forget to check out the aspnet/DependencyInjection repo on GitHub.

Hosting Visual Studio Extensions on a private NuGet Gallery

NuGet is a great way of distributing the software packages you use during development of your .NET applications. Because NuGet itself is open source, the team have provided tools (such as NuGet.Server and NuGetGallery) to help you jump-start deploying your own package feeds, which can become integral to your company's development workflow.

Another aspect of development with Visual Studio is extensions, which can come in a variety of forms, from IDE updates, to language services, and even project and item templates. Sadly, the mechanism by which updates are made available does not currently align with how software packages are provided.

For Visual Studio Extensions, these are provided through an Atom feed. Because this is feed-driven, it does open up scenarios whereby we could publish our own Atom feed for our own internal extensions, much like running a private NuGet feed.

This got me thinking that perhaps we could utilise the NuGet Gallery backend to provide a feed, but adapted as an Atom feed to be consumed by Visual Studio.

Turns out, it's relatively easy to do. So, first things first, I forked the NuGetGallery repo and got to work.

Adding support for an Atom feed

My first port of call was to understand how the Atom feed is structured. Broadly, it boils down to adding a custom <Vsix></Vsix> element for handling versioning and Id, and updating the <content> element to point to the location of the .vsix package.

I've added a controller, the ExtensionsController, to generate the feed. VS can't consume a standard V1/V2/V3 NuGet feed, but we can shape our own feed based on the same data. Here's the controller:

public class ExtensionsController : Controller
{
  private readonly ISearchService _searchService;
  private readonly IFileStorageService _fileStorageService;

  public ExtensionsController(ISearchService searchService, IFileStorageService fileStorageService)
  {
    _searchService = searchService;
    // Used later by the VsixContent action to read packages from storage.
    _fileStorageService = fileStorageService;
  }

  public async Task<XDocumentResult> Feed()
  {
      var doc = await CreateFeedDocument();
      return new XDocumentResult(doc);
  }
}

We'll also need some routes:

routes.MapRoute(
    "ExtensionV1Feed",
    "api/extensions/v1/feed",
    new { controller = "Extensions", action = "Feed" }
);

routes.MapRoute(
    "ExtensionV1Content",
    "api/extensions/v1/{id}/{version}",
    new { controller = "Extensions", action = "VsixContent" }
);

I've added a custom XDocumentResult type which renders an XDocument to the response stream, but I've omitted it here for brevity. My CreateFeedDocument method constructs an XDocument by naively reading from the ISearchService with no arguments - I wanted to short-cut this part for getting results.

var filter = SearchAdapter.GetSearchFilter(string.Empty, 1, null, SearchFilter.UISearchContext);
var results = await _searchService.Search(filter);

Next up, we start constructing our document:

var doc = new XDocument(new XDeclaration("1.0", "utf-8", "no"));

XNamespace atom = "http://www.w3.org/2005/Atom";
XNamespace vsix = "http://schemas.microsoft.com/developer/vsx-syndication-schema/2010";

var root = new XElement(atom + "feed");
doc.Add(root);
root.Add(new XElement(atom + "title", new XAttribute("type", "text"), "My Extension Gallery"));
root.Add(new XElement(atom + "id", "My Extension Gallery"));

If we have results, we can mark an <updated> entry in the feed using the max Published date of the available packages:

if (results.Data.Any())
{
    var updated = results.Data.Max(p => p.Published);
    root.Add(new XElement(atom + "updated", updated.ToString("yyyy-MM-ddTHH:mm:ssZ")));
}

Now, for each package we are creating an <entry> item to be added to the <feed> element. Most of the attributes we can grab from our package model:

var entry = new XElement(atom + "entry");
entry.Add(new XElement(atom + "id", package.PackageRegistration.Id));
entry.Add(new XElement(atom + "title", new XAttribute("type", "text"), package.Title));
entry.Add(new XElement(atom + "summary", new XAttribute("type", "text"), package.Description));
entry.Add(new XElement(atom + "published", package.Published.ToString("yyyy-MM-ddTHH:mm:ssZ")));
entry.Add(new XElement(atom + "updated", package.Created.ToString("yyyy-MM-ddTHH:mm:ssZ")));
entry.Add(new XElement(atom + "author", new XElement(atom + "name", package.FlattenedAuthors)));

if (!string.IsNullOrWhiteSpace(package.IconUrl))
{
    entry.Add(new XElement(atom + "link", new XAttribute("rel", "icon"), new XAttribute("type", "text"), new XAttribute("href", package.IconUrl)));
}

if (!string.IsNullOrWhiteSpace(package.ProjectUrl))
{
    entry.Add(new XElement(atom + "link", new XAttribute("rel", "alternate"), new XAttribute("type", "text/html"), new XAttribute("href", package.ProjectUrl)));
}

Atom has no direct equivalent of NuGet tags, but we can map them to Atom categories:

var tags = (package.Tags ?? "").Split(new[] { ',', ' ' }, StringSplitOptions.RemoveEmptyEntries).Select(t => t.Trim());
foreach (string tag in tags)
{
    entry.Add(new XElement(atom + "category", new XAttribute("term", tag)));
}

Now, the last two important parts: the content and the Vsix extension element. The <content> element needs to point to a URL from which Visual Studio can download the .vsix package. As NuGet packages are zip archives, we need a way of accessing the .vsix package from within one. We'll add that shortly, but for now, let's generate the URL:

entry.Add(new XElement(atom + "content",
    new XAttribute("type", "application/octet-stream"),
    new XAttribute("src", Url.RouteUrl("ExtensionV1Content", new { id = package.PackageRegistration.Id, version = package.NormalizedVersion }))));

This will generate a URL similar to /api/extensions/v1/<package-name>/<version>, which we will implement soon.

Lastly, we implement the custom <Vsix> element and add it to the entry:

var ext = new XElement(vsix + "Vsix",
    new XAttribute(XNamespace.Xmlns + "xsd", "http://www.w3.org/2001/XMLSchema"),
    new XAttribute(XNamespace.Xmlns + "xsi", "http://www.w3.org/2001/XMLSchema-instance"),
    new XElement(vsix + "Id", package.PackageRegistration.Id),
    new XElement(vsix + "Version", package.NormalizedVersion));

entry.Add(ext);

And that's really it - this will allow us to generate the feed. Visiting /api/extensions/v1/feed will give us something like:

<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title type="text">My Extension Gallery</title>
  <id>My Extension Gallery</id>
  <updated>2015-12-07T12:12:48Z</updated>
  <entry>
    <id>6165843C-6289-4B47-B61B-18C969AD9547</id>
    <title type="text">Sample Extension</title>
    <summary type="text">Sample Extension</summary>
    <published>2015-12-07T12:12:48Z</published>
    <updated>2015-12-07T12:12:48Z</updated>
    <author>
      <name>Matthew Abbott</name>
    </author>
    <content type="application/octet-stream" src="/api/extensions/v1/6165843C-6289-4B47-B61B-18C969AD9547/1.8.0" />
    <Vsix xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/developer/vsx-syndication-schema/2010">
      <Id>6165843C-6289-4B47-B61B-18C969AD9547</Id>
      <Version>1.8.0</Version>
    </Vsix>
  </entry>
</feed>

The <Id> element is quite important, as this unique ID (which is the same ID set in the .vsixmanifest file) is used by VS to determine what is installed and at what versions.

We can now test this by adding the extension feed to Visual Studio:

And to test, let's browse for the extension:

Flowing .vsix packages from NuGet packages

As mentioned previously, NuGet packages are really just zip containers, and the standard NuGetGallery project doesn't do much in terms of reading their content, but with a few small modifications to the INupkg and Nupkg types, we can expose a method to read from the package on the server.

We add a GetFileStream method which allows us to read a file, buffered in a memory stream:

public Stream GetFileStream(string filePath)
{
    if (filePath.StartsWith("\\"))
    {
        filePath = filePath.Substring(1);
    }
    filePath = filePath.Replace("\\", "/");

    using (var za = new ZipArchive(GetStream(), ZipArchiveMode.Read, true))
    {
        var entry = za.GetEntry(filePath);
        if (entry == null)
        {
            return null;
        }

        var memoryStream = new MemoryStream();

        using (var entryStream = entry.Open())
        {
            entryStream.CopyTo(memoryStream);
        }

        memoryStream.Seek(0, SeekOrigin.Begin);

        return memoryStream;
    }
}

Next, let's add the VsixContent action to our ExtensionsController:

public async Task<ActionResult> VsixContent(string id, string version)
{
    string fileName = string.Format(Constants.PackageFileSavePathTemplate, id.ToLower(), version.ToLower(), Constants.NuGetPackageFileExtension);
    var file = await _fileStorageService.GetFileReferenceAsync(Constants.PackagesFolderName, fileName);
    if (file == null)
    {
        return HttpNotFound();
    }

    using (var stream = file.OpenRead())
    {
        using (var pkg = new Nupkg(stream, true))
        {
            string embeddedFileName = pkg.GetFiles().FirstOrDefault(f => Path.GetExtension(f).Equals(".vsix", StringComparison.OrdinalIgnoreCase));
            if (embeddedFileName == null)
            {
                // The NuGet package doesn't contain an embedded .vsix.
                return HttpNotFound();
            }

            var vsix = pkg.GetFileStream(embeddedFileName);
            if (vsix != null)
            {
                return new FileStreamResult(vsix, "application/octet-stream")
                {
                    FileDownloadName = Path.GetFileName(embeddedFileName)
                };
            }
        }
    }

    return HttpNotFound();
}

Here we first find the .vsix package within the NuGet package, and then return it to the response using the standard FileStreamResult.

We should now be in a position to install the package from our custom feed.

Updating extensions from NuGet feeds

Because extensions are provided through the same NuGet core as normal packages, you can use nuget.exe the same way as part of your workflow, either manually or through CI/CD. This allows a great deal of automation when you are authoring updates to your internal extensions. Alternatively, because this is based on NuGet Gallery, you can simply upload a new version of your extension. For example, I currently have this listed in my personal extensions gallery:

I can use the UI to update this. If I deploy 1.8.0 over 1.7.0, this should be reflected in the Extensions dialog:

If Automatically update this extension is enabled, then you may not even have to update the extension manually, as VS will check daily (and every time you open the Extensions dialog) for new versions.

So, imagine now how you can utilise the power of NuGet to deploy your internal extensions, like project and item templates, using the same tool chain as deploying your own NuGet packages.

The code is available as a fork on GitHub.

Install Issues for ASP.NET 5 RC1

ASP.NET 5 has hit Release Candidate 1 status, and if you have been actively using the alpha and beta bits, you may find you have a couple of issues trying to install the RC-1 release.

The install instructions do note an issue you may encounter during setup:

NOTE: There is currently a known issue with the ASP.NET 5 RC installer. If you run the installer from a folder that contains previous versions of the MSI installers for DNVM (DotNetVersionManager-x64.msi or DotNetVersionManager-x86.msi) or the ASP.NET tools for Visual Studio (WebToolsExtensionsVS14.msi or WebToolsExtensionsVWD14.msi), the installer will fail with an error “0x80091007 - The hash value is not correct”. To work around this issue, run the installer from a folder that does not contain previous versions of the installer files.

But actually, I encountered a different issue:

Install Issue

Looking through the log file, we had a few of these entries:

Acquiring package: WebToolsExtensionsVS14, payload: WebToolsExtensionsVS14, download from: https://go.microsoft.com/fwlink/?LinkId=691117 Error 0x80072f08: Failed to send request to URL: https://go.microsoft.com/fwlink/?LinkId=691117, trying to process HTTP status code anyway.

Not sure if it was a proxy issue, but luckily I found another way. Firstly:

Afterwards, ensure your DNVM is upgraded by running dnvm upgrade from a command prompt.

Now, onto the project.json and global.json changes - don't forget to check the aspnet/Announcements repo for the list of breaking changes (filter to milestone:1.0.0-rc1).