ASP.NET Core 2.0 - Thinking about the future

It has been an eventful week in ASP.NET Core land.

It started with the discovery by community member and ASP.NET Core contributor Kévin Chalet that a PR was being merged into the release branch of ASP.NET Core 2.0 Preview 1, rel/2.0.0-preview1. It was a significant PR in that it was changing the target framework monikers (TFMs) which denote compile targets. In this PR, the previous TFMs of netstandard1.x and net4x were changed to netcoreapp2.0. If this was being merged into a non-release branch, the community probably wouldn't have baulked, but in a release branch, this signals intent to change.

This means one thing in broad strokes:

ASP.NET Core 2.0 was dropping support for .NET Framework (netfx), in favour of specifically targeting .NET Core (netcore) 2.0

This seemingly came out of nowhere as far as the community was concerned (I'm pretty sure it had been discussed extensively internally). Why would the message of compatibility that was sold to us with the release of ASP.NET Core 1.0 - a message that ensured you could use the latest technology stack on both the stable, mature .NET Framework and the fast-moving .NET Core - now change?

Naturally, the community as a whole was divided, some in favour, some completely objecting. Let's try and take an objective look at the pros and cons of this change.

For moving to .NET Core 2.0 only

The primary motivation for this change, as I understand it, was around how quickly ASP.NET Core can move while we were promising support for .NET Standard compatibility. .NET Standard was introduced to simplify compatibility between libraries and platforms. Please refer to this blog post as a refresher. The idea is that we could take the .NET Framework, .NET Core, Mono and other implementations of a 'standard', and provide a meta library (the .NET Standard library) with a consistent API surface that you could target across all runtimes. Different versions of each runtime could provide support for a version of .NET Standard.

For instance, if I target netstandard1.4 I know my library should work on .NET Core 1.0 (as it supports up to .NET Standard 1.6), .NET Framework >= 4.6.1 and also UWP 10.0. This was great because .NET Standard ensures API compatibility for me, meaning I worry less about #if specific code to cater for specific build platforms.
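
To illustrate the kind of platform-specific code .NET Standard helps avoid, here is a small, contrived sketch - the class and the exact compilation symbols are hypothetical and depend on your tooling:

public static class PlatformInfo
{
    public static string Describe()
    {
#if NET461
        // Hypothetical .NET Framework-only code path.
        return "Running on the full .NET Framework";
#elif NETCOREAPP1_0
        // Hypothetical .NET Core-only code path.
        return "Running on .NET Core";
#else
        return "Running on some other platform";
#endif
    }
}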

.NET Standard is a promise.

But one thing that potentially wasn't considered when promising .NET Standard compatibility for ASP.NET Core, and ensuring ASP.NET Core could run on both .NET Framework and .NET Core, was that the two move at different speeds. So how can we take advantage of new APIs in .NET Core (which would eventually be ported to .NET Framework, and defined in .NET Standard) when .NET Framework has a completely different release cadence? Remember that .NET Framework changes happen slowly, because they need to be tested to ensure support across the billions of devices that currently run the .NET Framework.

Targeting .NET Core breaks compatibility with .NET Framework

The above chart denotes the API surface of .NET Standard 2.0 and its compatibility with .NET Framework 4.6.1 and .NET Core vNext; it does not express the full API surface available to netfx and netcore.

.NET Core releases will come thick and fast in comparison to .NET Framework, so if we are promising this support to run ASP.NET Core on .NET Framework, we can't take advantage of new APIs implemented in a future version of .NET Standard until .NET Framework has been updated.

By sticking purely to .NET Standard, it means we are bound by the release cadence of the slower moving .NET Framework.

By unburdening ourselves from .NET Framework, we open the door to getting ASP.NET Core moving a lot quicker.

Only for parts of ASP.NET

Another thing to consider is that it is not the complete ASP.NET Core 2.0 stack being re-targeted. Those libraries which are more likely to be consumed outside of the application model of a web application (e.g. Microsoft.Extensions.*, Razor, etc.) will continue to target .NET Standard to ensure maximum compatibility.

Against moving to .NET Core 2.0 only

From what I see there are two big reasons why we should stick with .NET Standard 2.0:

  1. Removing support for .NET Framework means businesses with heavy investment in .NET Framework are cut out of the will. ASP.NET Core 2.0 means .NET Core or nothing, and there is no support for referencing netcore libraries from netfx. Coupling this with a short(ish) support lifetime for ASP.NET Core 1.x means that businesses implementing ASP.NET Core alongside their legacy .NET Framework components cannot upgrade to ASP.NET Core 2.0.

    This gives businesses only a couple of options - stick with .NET Framework and the ASP.NET stack (read: not Core), in which case your codebase becomes stagnant and very slow moving, or stick with ASP.NET Core 1.x, which has a limited lifetime.

  2. By adopting a pattern of latest-framework-only, it may lead future library authors to also switch gears because their primary application model is ASP.NET Core 2.0. Leading by example could be damaging to the ecosystem in this world of promised compatibility (.NET Standard). For example, imagine a library as prolific as JSON.NET in some weird future where its author decides he also only wants to support the newer APIs in .NET Core. (This is only an example, I doubt this would actually happen!)

Where do we go from here?

At Build, Microsoft reaffirmed a commitment to delivering ASP.NET Core 2.0 for both .NET Core and .NET Framework, meaning targeting .NET Standard 2.0. This seems like backtracking, and in light of the amount of noise generated by the GitHub issue and in other channels, Microsoft have decided to please the masses. This is great in that they are listening to customers, but we also have to be realistic about this outcome.

What that now means is:

ASP.NET Core can only move as fast as .NET Standard, and therefore .NET Framework, in terms of using new APIs.

What about me?

I can't agree one way or the other - both sets of arguments are completely valid. We must just accept the outcome, which ensures maximum compatibility for all, but also limits advancement.

The one thing I would like to see out of all of this is more visibility on these (major) decisions. Much like the C# language repos, where new ideas are thrashed out in discussion before implementation, I feel ASP.NET Core needs to work like this for these critical issues - perhaps this is an idea the team can take on board.

Smarter build scripts with MSBuild and .NET Core

With the migration back to MSBuild for .NET Core projects, a few new avenues open up to us as developers when it comes to managing our projects. Package references are now part of the MSBuild XML definition, which means we can start using the existing power of MSBuild to move to a more centrally managed list of packages. But it doesn't stop there - we can start making our MSBuild files a bit smarter and more convention-based. This is not new; Microsoft themselves are doing this today as they evolve the project template .csproj files. This was highlighted recently, and it demonstrates how minimal a .NET Core MSBuild project can be:

<Project Sdk="Microsoft.NET.Sdk.Web">  
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp1.0</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <Folder Include="wwwroot\" />
  </ItemGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.NETCore.App" Version="1.0.1" />
    <PackageReference Include="Microsoft.AspNetCore.Diagnostics" Version="1.0.0" />
    <PackageReference Include="Microsoft.AspNetCore.Server.IISIntegration" Version="1.0.0" />
    <PackageReference Include="Microsoft.AspNetCore.Server.Kestrel" Version="1.0.1" />
    <PackageReference Include="Microsoft.Extensions.Logging.Console" Version="1.0.0" />
  </ItemGroup>
</Project>  

You'll notice a few things - No need for a ToolsVersion attribute, and no standard <Compile> and <EmbeddedResource> elements:

<Compile Include="**\*.cs" />  
<EmbeddedResource Include="**\*.resx" />  

These are inferred by your use of the Sdk attribute.

Making things smarter

Now, imagine we have a project solution similar to the following:

\Common.props
\src\MyPackage.Abstractions\MyPackage.Abstractions.csproj
\src\MyPackage.Host\MyPackage.Host.csproj
\src\MyPackage\MyPackage.csproj
\test\MyPackage.Tests\MyPackage.Tests.csproj

Each one of those MSBuild XML files would contain its own independent set of package references, target framework monikers (TFMs), etc. There are a couple of goals I want to achieve:

  1. I want to ensure that my NuGet package references are version-aligned through all of my projects.
  2. I want to import a standard set of packages depending on the type of project (website, library, unit test, etc.)

So how do we go about this? Let's start simple, we'll take our Abstractions library:

<Project Sdk="Microsoft.NET.Sdk">  
  <Import Project="..\..\Common.props" />

  <PropertyGroup Label="Output">
    <AssemblyName>MyPackage.Abstractions</AssemblyName>
    <AssemblyTitle>MyPackage.Abstractions</AssemblyTitle>
    <TargetFramework>netstandard1.6</TargetFramework>
  </PropertyGroup>
</Project>  

Pretty much nothing except a bare-bones file describing the output and an import (we'll come to the import later).

Now, let's move on to our implementation library:

<Project Sdk="Microsoft.NET.Sdk">  
  <Import Project="..\..\Common.props" />

  <PropertyGroup Label="Common">
    <UsesEntityFramework>true</UsesEntityFramework>
  </PropertyGroup>

  <PropertyGroup Label="Output">
    <AssemblyName>MyPackage</AssemblyName>
    <AssemblyTitle>MyPackage</AssemblyTitle>
    <TargetFramework>netstandard1.6</TargetFramework>
  </PropertyGroup>

  <ItemGroup Label="ProjectReferences">
    <ProjectReference Include="..\..\src\MyPackage.Abstractions\MyPackage.Abstractions.csproj" />
  </ItemGroup>
</Project>  

Ok, a little more here - I've defined a custom <PropertyGroup>, which I have labelled as Common. The Label isn't required, I just prefer to put some commentary around certain elements. Within my custom PropertyGroup, I've declared a property called UsesEntityFramework. We'll use this property later in our Common.props file.

Next up, add a Test project:

<Project Sdk="Microsoft.NET.Sdk">  
  <Import Project="..\..\Common.props" />

  <PropertyGroup Label="Common">
    <HostType>UnitTest</HostType>
    <DatabaseProvider>InMemory</DatabaseProvider>
  </PropertyGroup>

  <PropertyGroup Label="Package">
    <AssemblyTitle>MyPackage.Tests</AssemblyTitle>
    <AssemblyName>MyPackage.Tests</AssemblyName>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp1.1</TargetFramework>
  </PropertyGroup>

  <ItemGroup Label="ProjectReferences">
    <ProjectReference Include="..\..\src\MyPackage\MyPackage.csproj" />
    <ProjectReference Include="..\..\src\MyPackage.Abstractions\MyPackage.Abstractions.csproj" />
  </ItemGroup>
</Project>  

Again, very similar, except this time we're adding another custom property, HostType with a value of UnitTest, and a property DatabaseProvider with a value of InMemory. Additionally, we're now targeting netcoreapp1.1 instead of netstandard1.6. Still, all very standard MSBuild. Now, I'm currently using the Preview5 CLI bits and I don't know if the Microsoft.NET.Test.Sdk value will work, so I'll stick to the standard Microsoft.NET.Sdk value, and we'll manually handle pulling in our SDK. You'll also notice I haven't added any package references to my unit test framework of choice (Xunit in my case).

Lastly, our Host project:

<Project Sdk="Microsoft.NET.Sdk">  
  <Import Project="..\..\Common.props" />

  <PropertyGroup Label="Common">
    <HostType>Website</HostType>
    <DatabaseProvider>SqlServer</DatabaseProvider>
  </PropertyGroup>

  <PropertyGroup Label="Package">
    <AssemblyTitle>MyPackage.Host</AssemblyTitle>
    <AssemblyName>MyPackage.Host</AssemblyName>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp1.1</TargetFramework>
  </PropertyGroup>

  <ItemGroup Label="ProjectReferences">
    <ProjectReference Include="..\..\src\MyPackage\MyPackage.csproj" />
    <ProjectReference Include="..\..\src\MyPackage.Abstractions\MyPackage.Abstractions.csproj" />
  </ItemGroup>
</Project>  

Much like the Test project, our Host project defines a custom property HostType with a value of Website, but this time I've set my DatabaseProvider property to SqlServer.

Conditional logic in Common.props

The glue that binds this all together is a shared MSBuild XML file, Common.props which sits at the root. Through this file, which is imported into everything, we can apply custom package references and a bit more logic.

Version-aligning package references

My first goal is that I want to version-align my package references. This is to ensure my libraries are all built against the same version of a NuGet package. This is one of the design goals behind Paket. In our design, we're using MSBuild directly.

Using the Target Framework Moniker to import the .NET Standard or App library

<ItemGroup Condition="'$(TargetFramework)'=='netstandard1.6'">  
  <PackageReference Include="NETStandard.Library" Version="1.6.1" />
</ItemGroup>

<ItemGroup Condition="'$(TargetFramework)'=='netcoreapp1.1'">  
  <PackageReference Include="Microsoft.NETCore.App" Version="1.1.0" />
</ItemGroup>  

We can take advantage of simple MSBuild conditionals to determine which version of our standard library we want to import. Where we are targeting netstandard1.6, we'll import NETStandard.Library v1.6.1. Where we are targeting netcoreapp1.1, we'll import Microsoft.NETCore.App v1.1.0.

We no longer have to explicitly reference them in the projects, we apply them by convention.

Next, we can use our custom HostType property to determine which packages to import, for instance:

<ItemGroup Condition="'$(HostType)'=='Website'">  
  <PackageReference Include="Microsoft.NET.Sdk.Web" Version="1.0.0-alpha-20161104-2-112" />
  <!-- Other ASP.NET Core Packages -->
</ItemGroup>

<ItemGroup Condition="'$(HostType)'=='UnitTest'">  
  <PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.0.0-preview-20161024-02" />
  <PackageReference Include="xunit" Version="2.2.0-beta3-build3402" />
  <PackageReference Include="xunit.runner.visualstudio" Version="2.2.0-beta4-build1194" />
</ItemGroup>  

This is where it is starting to get a bit smarter. For our HostType = Website condition, we'll pull in the Web SDK, and at this point, you could specify ASP.NET Core package references, such as Kestrel.

For our HostType = UnitTest condition, we'll pull in the Test SDK and Xunit libraries required to build and run our tests.

Hopefully you can already see how we are using existing MSBuild functionality to ensure some consistency in our projects.

Importing a standard set of projects

Much like the use of our HostType property, we also designed another property, DatabaseProvider. Firstly, we define a property group:

<PropertyGroup Condition="'$(DatabaseProvider)'!=''">  
  <UsesEntityFramework>true</UsesEntityFramework>
</PropertyGroup>  

You don't have to do this, as the build and project systems support transitive dependencies - that is, sub-dependencies of your project's direct dependencies - but it's good to demonstrate what you can do.

And lastly, a final set of package references:

<ItemGroup Condition="'$(UsesEntityFramework)'=='true'">  
  <PackageReference Include="Microsoft.EntityFrameworkCore" Version="1.1.0" />
</ItemGroup>

<ItemGroup Condition="'$(DatabaseProvider)'=='SqlServer'">  
  <PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer" Version="1.1.0" />
</ItemGroup>

<ItemGroup Condition="'$(DatabaseProvider)'=='InMemory'">  
  <PackageReference Include="Microsoft.EntityFrameworkCore.InMemory" Version="1.1.0" />
</ItemGroup>  

So quite simply, if we are targeting SQL Server, we'll pull in the Microsoft.EntityFrameworkCore.SqlServer package, and if we target the in-memory database for testing, we'll pull in Microsoft.EntityFrameworkCore.InMemory.

Conclusions

As we can see, going back to MSBuild XML files actually gives us a lot more power than was possible with project.json. Ignoring the JSON vs XML argument, MSBuild itself is stable and mature and already has a great ecosystem of tools, from IDEs to build servers. It makes sense to take advantage of that.

Personally, I've never really spent a great deal of time with MSBuild because, when using Visual Studio, it is a hidden power - Visual Studio takes care of the file for you. But now, in the .NET Core world, MSBuild is becoming an ever more important aspect of the toolchain, so I am appreciating what it can do a lot more.

Example project files are on a gist:

https://gist.github.com/Antaris/b7d86d3485606e9f1b9fc8698092d56d

An approach to building .NET Core apps using Bamboo and Cake

Bamboo is my build server of choice because I find it simple to setup and has great integration with the rest of the Atlassian stack, such as our JIRA and Bitbucket Server instances.

Bamboo has had native support for MSBuild-based builds for ages, but with dotnet build being the new sexiness, I wanted to get up and running with a CI workflow for my .NET Core applications.

Now there are quite a few challenges to face when setting up a CI build:

  1. Versioning
  2. Building
  3. Testing
  4. Deployment

I decided to tackle these things in stages.

Versioning

In the pre-project.json world, if you wanted to version a set of projects within the same solution together, you could achieve that either by generating something like an AssemblyInfo.cs file at build time, or perhaps by using a SharedAssemblyInfo.cs link approach whereby you manually set your version numbers for the entire solution.

Currently, in project.json world this isn't possible because the [Assembly*Version] attributes are generated by dotnet-build. You might be able to manually add these yourself, but I haven't experimented with that.

So let's look at an example, here is one library MyLibrary.Abstractions.

{
  "version": "1.0.0-*",
  "dependencies": { },
  "frameworks": {
    "netstandard1.6": { }
  }
}

And here's my implementation library, MyLibrary:

{
  "version": "1.0.0-*",
  "dependencies": { 
    "MyLibrary.Abstractions": "1.0.0-*"
  },
  "frameworks": {
    "netstandard1.6": { }
  }
}

Right off the bat, I can see a couple of issues. The version number is fixed into the version key, and the dependency version of the abstractions library is also fixed.

The version string 1.0.0-* is significant in that it pins an exact Major.minor.patch number, but allows matching on any pre-release string. What this means is that when a build occurs, dotnet-build generates a pre-release string (because of the presence of -*) which is a timestamp. This aligns the version numbers of both the MyLibrary and MyLibrary.Abstractions packages. Previously, prior to RC2, you could simply do this:

"dependences": {
  "MyLibrary.Abstractions": ""
}

This is no longer possible, so if I need to version both components together, I need to do something different. Firstly, I need to tackle that version key and have that carry the same value in both project.json files.

Setting the version before project.json

I don't really believe in the term "calculating a version", because that implies some sort of formulaic approach to determining the version components.

Unless you had some awesome code-analysis tool that could compare your codebase before and after a commit to determine the type of changes and how they affect your public API, the choice of Major.minor.patch has to rely solely on the developer, because only they know the intent of their change. To this end, I decided to take an approach similar to GitVersion's GitVersion.yaml file, where I can express part of the version number myself (the part I care about - Major.minor.patch) and generate the pre-release string from the branch/commit information. I also needed to be able to surface this information in Bamboo itself so I can attribute it to future deployments.

For this, I define a simple text file, build.props:

versionMajor=1  
versionMinor=0  
versionRev=0  

This file would be committed to source control so it can be shared with other devs (to align versions) and the CI server.

Next, I use my branch information to determine the pre-release string (if any), so for instance:

  • If the branch is not release, I will generate a pre-release string.
    • If we are building locally, the pre-release string is simply -<branch-name>, e.g. -master, or -feature-A
    • If we are building on the CI server which drops packages into a NuGet feed, we include the total commit count as -<branch-name>-<commit-count>. I can't take advantage of the +build metadata yet because our deployment server (Octopus Deploy) targets an older version of NuGet. I use commit count and not build number because if I do multiple builds of the same commit, they are the same output, so should carry the same version number.
  • If the branch is release, I will not generate a pre-release string.

This means I can generate version numbers such as 1.0.0 (release branch), 1.0.0-master-1 (master branch on CI server), 1.0.0-feature-A (feature/A branch on a local machine).
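
The branch-based rules above are simple enough to sketch in C# - this is purely illustrative, with a hypothetical helper and parameter names, not the actual implementation:

// Illustrative only: builds a semantic version from the rules described above.
static string BuildSemanticVersion(int major, int minor, int rev,
                                   string branch, bool isCiBuild, int commitCount)
{
    string version = $"{major}.{minor}.{rev}";

    // release branches carry no pre-release string at all.
    if (branch == "release")
    {
        return version;
    }

    // Branch names such as feature/A become feature-A in the pre-release tag.
    string safeBranch = branch.Replace("/", "-");

    // CI builds also append the commit count, e.g. 1.0.0-master-1.
    return isCiBuild
        ? $"{version}-{safeBranch}-{commitCount}"
        : $"{version}-{safeBranch}";
}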

I wrap up the logic for this version number generation in a PowerShell script named version.ps1. This script generates the version number and writes it out to a local file named version.props; this version information is then stamped into each project.json file. The generated version.props looks like this:

version=1.0.0  
semanticVersion=1.0.0-master  
prerelease=master  

Handling dependencies versioning

We still haven't solved how we update the dependency versions in project.json for projects in the same solution. The truth is, we don't. Right at the start, we just change the dependency from a version string to an object (a great tip from Andrew Lock):

"dependences": {
  "MyLibrary.Abstractions": { "target": "project" }
}

This allows the version resolution to match on any version. It's not a perfect approach - in fact, the compiler explicitly warns about a version mismatch - but as these are projects in the same solution being versioned together, that is a warning I am happy to put up with. You wouldn't apply this to dependencies outside of the current solution; really these are project-to-project references only.

Building

Now building my solution could be as easy as dotnet build **/project.json, but the build process is a bit more involved because we have to stamp our version information in (detailed above), as well as run the test and pack commands to prepare our outputs. Enter Cake.

I've been following Cake for a while because I've honestly struggled with other build systems, such as Fake, PSake, etc. I'm a C# developer and Cake for me is a breeze because it presents a DSL that you write in C#, my language of choice! Cake is also extensible, so that was my point of entry for handling my version stamping. I first define a task named Version:

Task("Version")  
.Does(() =>
{
    if (Bamboo.IsRunningOnBamboo)
    {
        // MA - We are running a CI build - so we need to make sure we execute the script with -local $false
        StartPowershellFile("./version.ps1", args => 
        {
            args.Append("local", "$false");
            args.Append("branch", EnvironmentVariable("bamboo_planRepository_branchName"));
        });
    }
    else
    {
        StartPowershellFile("./version.ps1", args => args.Append("local", "$true"));
    }

    string[] lines = System.IO.File.ReadAllLines("./version.props");
    foreach (string line in lines)
    {
        if (line.StartsWith("version"))
        {
            version = line.Substring("version=".Length).Trim();
        }
        else if (line.StartsWith("semanticVersion"))
        {
            semanticVersion = line.Substring("semanticVersion=".Length).Trim();
        }
        else if (line.StartsWith("prerelease"))
        {
            prerelease = line.Substring("prerelease=".Length).Trim();
        }
    }

    Console.WriteLine("Version: {0}", version);
    Console.WriteLine("SemanticVersion: {0}", semanticVersion);
    Console.WriteLine("PreRelease: {0}", prerelease);

    DotNetCoreVersion(new DotNetCoreVersionSettings
    {
        Files = GetFiles("**/project.json"),
        Version = semanticVersion
    });
});

The last method call is the key part - once I've executed my versioning script, I read the version number and use a custom Cake extension I've built, DotNetCoreVersion, to load each target project.json as a JObject, set the version key and write them back out again.
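
The extension itself isn't listed here, but a rough sketch of the idea - using Newtonsoft.Json, with a hypothetical method name rather than the actual addin code - looks something like this:

using System.Collections.Generic;
using Newtonsoft.Json.Linq;

// Hypothetical sketch: load each project.json, overwrite the version key
// and write the file back out again.
public static void StampVersion(IEnumerable<string> projectJsonPaths, string semanticVersion)
{
    foreach (var path in projectJsonPaths)
    {
        var project = JObject.Parse(System.IO.File.ReadAllText(path));
        project["version"] = semanticVersion;
        System.IO.File.WriteAllText(path, project.ToString());
    }
}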

Now I can perform my build using another task Build:

Task("Build")  
.Does(() =>
{
    // MA - Build the libraries
    DotNetCoreBuild("./src/**/project.json", new DotNetCoreBuildSettings
    {
        Configuration = configuration
    });

    // MA - Build the test libraries
    DotNetCoreBuild("./tests/**/project.json", new DotNetCoreBuildSettings
    {
        Configuration = configuration
    });
});

Cake has built-in methods for building .NET Core applications, so that made it a lot easier! On the Bamboo side of things, Cake is bootstrapped by another PowerShell script, build.ps1, so thanks to Bamboo's native PowerShell script integration, we simply execute our build script:

Configuring the Cake bootstrapper

Testing

Although there is now support for both NUnit and MSTest, the best test library for .NET Core apps is currently Xunit, and that's purely a side-effect of the Microsoft team favouring Xunit during development. We have a problem here - Bamboo doesn't understand Xunit test result XML. Luckily, there exists an XSLT for transforming from Xunit to NUnit, which Bamboo does understand.

We wrap this up in our Cake build script:

Task("Test")  
.WithCriteria(() => HasArgument("test"))
.Does(() =>
{
    var tests = GetFiles("./tests/**/project.json");
    foreach (var test in tests) 
    {
        string projectFolder = System.IO.Path.GetDirectoryName(test.FullPath);
        string projectName = projectFolder.Substring(projectFolder.LastIndexOf('\\') + 1);
        string resultsFile = "./test-results/" + projectName + ".xml";

        DotNetCoreTest(test.FullPath, new DotNetCoreTestSettings
        {
            ArgumentCustomization = args => args.Append("-xml " + resultsFile)
        });

        // MA - Transform the result XML into NUnit-compatible XML for the build server.
        XmlTransform("./tools/NUnitXml.xslt", "./test-results/" + projectName + ".xml", "./test-results/NUnit." + projectName + ".xml");
    }
});

With us now outputting NUnit test results XML, we can read that information in during a Bamboo build plan and surface the test results in the interface. This also means that builds can now fail because of test result failure, which is what we want.

Test results surfaced in Bamboo

Deployments

Bamboo does have a built-in deployment mechanism, and for our internal libraries we utilise this to push our packages into one of two NuGet feeds:

  • If it is a stable build from our release branch, it goes into the stable NuGet feed. These do not automatically deploy, but can easily be deployed at the push of a button (continuous delivery).
  • If it is a build from our master branch, it is automatically pushed to our volatile NuGet feed (continuous deployment).

We use ProGet by Inedo, as it is a superbly stable, multi-feed package host which is quick and easy to set up. By deploying our packages to these feeds, they stay internal to our development environment and we can quickly start using our updated packages in our other projects. If we need to, we can quickly spin up a project-specific feed, or perhaps a branch-specific feed, and deploy different versions of our code for different clients/scenarios.

One of the last steps of the build script is to pack everything together:

Task("Pack")  
.WithCriteria(() => HasArgument("pack"))
.Does(() =>
{
    var projects = GetFiles("./src/**/project.json");
    foreach (var project in projects)
    {
        // MA - Pack the libraries
        DotNetCorePack(project.FullPath, new DotNetCorePackSettings
        {
            Configuration = configuration,
            OutputDirectory = "./artifacts/"
        });
    }
});

The dotnet-pack tool generates our NuGet packages for us, both the binaries and the symbols. ProGet can host both of these, so we just ship them all to the ProGet API and it handles the rest for us. This deployment step is handled as a Bamboo deployment project. For each module in our framework, we have two deployment plans, the first is the Volatile plan that uses continuous deployment to drop new packages into our volatile feed. The second plan is our stable plan which (when manually triggered) deploys to our stable feed.

We need to make sure the version information is carried through to the deployment plan, so to tackle that, in the source Bamboo build plan we read in the contents of our generated version.props file:

Reading the generated version number

The "Inject Bamboo variables" task allows us to read files in <key>=<value> format and append them as Bamboo variables. In this instance, we read in the version number and add it to the bamboo.props.semanticVersion variable. The variables need to be available to the result otherwise we can't use them later.

Configuring the release version

And that's pretty much it! Obviously, this is an approach that works well for me; it may not suit your needs, but luckily there are many ways of achieving the same thing. This will likely all need to change anyway, as the Microsoft team are busily migrating back to MSBuild, which means we may be able to use more familiar methods of generating AssemblyInfo.cs files again.

The source files for the different components are available as a Gist: https://gist.github.com/Antaris/8ad52a96e0f2d9f682d1cd6342c44936

Let me know what you think.

ASP.NET Core 1.0 - Routing - Under the hood

Routing was introduced to .NET with the release of ASP.NET MVC 1.0 back in 2009. Routing is the process of taking an input URL and mapping it to a route handler. It integrated into the ASP.NET pipeline as an IHttpModule - the UrlRoutingModule.

Current ASP.NET Routing

Let's remind ourselves about how the ASP.NET 4.x pipeline works:

ASP.NET Pipeline

Routing integrated with the pipeline as an IHttpModule, and when a route was resolved, it would bypass the rest of the pipeline and delegate to the final IHttpHandler through a new factory-type interface, the IRouteHandler:

public interface IRouteHandler  
{
    IHttpHandler GetHttpHandler(RequestContext context);
}

It was through this IRouteHandler that MVC integrated with Routing, and this is important, because generally MVC-style URLs are extensionless, so the routing system enabled these types of URLs to be mapped to a specific IHttpHandler, and in the case of MVC, this means mapping to the MvcHandler which was the entry point for controller/action execution. This means we didn't need to express a whole host of <httpHandler> rules for each unique route in our web.config file.

MVC integration with Routing in ASP.NET Pipeline

Mapping Routes

The MVC integration provided the MapRoute methods as extensions for a RouteCollection. Each Route instance provides a RouteHandler property - which by default is set to the MvcRouteHandler (through the IRouteHandler abstraction):

routes.MapRoute(  
    "Default", 
    "{controller}/{action}/{id}", 
    new { controller = "Home", action = "Index", Id = UrlParameter.Optional }); 

This method call creates a Route instance with the MvcRouteHandler set. You could always override that if you wanted to do something slightly more bespoke:

var route = routes.MapRoute(  
    "Default", 
    "{controller}/{action}/{id}", 
    new { controller = "Home", action = "Index", Id = UrlParameter.Optional }); 

route.RouteHandler = new MyCustomHandler();  
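
MyCustomHandler isn't defined above, but a minimal sketch of such a handler might look like this (MyHttpHandler is a hypothetical IHttpHandler of your own):

public class MyCustomHandler : IRouteHandler
{
    public IHttpHandler GetHttpHandler(RequestContext requestContext)
    {
        // Return whichever IHttpHandler should process the matched request.
        return new MyHttpHandler();
    }
}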

The Routing table

In current Routing (System.Web.Routing), routes are registered into a RouteCollection, which forms a linear collection of all possible routes. When routes are processed against an incoming URL, they form a top-down queue, where the first Route that matches wins. For ASP.NET 4.x there can only be one route collection for your application, so all of your routing requirements have to be fulfilled by this instance.

ASP.NET Core 1.0 Routing

How has routing changed for ASP.NET Core 1.0? Quite significantly really, but the team have managed to maintain a very familiar API shape, and it is now integrated into the ASP.NET Core middleware pipeline.

The new Routing framework is based around the concept of an IRouter:

public interface IRouter  
{
    Task RouteAsync(RouteContext context);

    VirtualPathData GetVirtualPath(VirtualPathContext context);
}

An instance of RouterMiddleware can be created using any instance of IRouter. You can think of this as the root of a routing tree. Routers can be implemented any which way, and can be plugged directly into the pipeline using this middleware. To handle the classical routing collection (or route table), we now have an IRouteCollection abstraction, itself extending IRouter. This means that any RouteCollection instance acts as a router and can be used directly with the middleware:

ASP.NET Core Routing Middleware
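
As a minimal sketch - the router and the /hello path are hypothetical, and this assumes the Microsoft.AspNetCore.Routing package - a custom IRouter can be plugged straight into the pipeline like this:

public class HelloRouter : IRouter
{
    public Task RouteAsync(RouteContext context)
    {
        // If the request matches, set a handler; otherwise leave it unset so
        // the next router/middleware can have a go.
        if (context.HttpContext.Request.Path == "/hello")
        {
            context.Handler = async httpContext =>
                await httpContext.Response.WriteAsync("Hello from a custom router!");
        }

        return Task.CompletedTask;
    }

    public VirtualPathData GetVirtualPath(VirtualPathContext context) => null;
}

// In Startup.Configure (requires services.AddRouting() in ConfigureServices):
app.UseRouter(new HelloRouter());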

This is how MVC hooks into the pipeline. When you call app.UseMvc(r => { }) and configure your routes, you're actually using a new IRouteBuilder abstraction which is used to build a router instance:

public interface IRouteBuilder  
{
    IRouter DefaultHandler { get; set; }

    IServiceProvider ServiceProvider { get; }

    IList<IRouter> Routes { get; }

    IRouter Build();
}

For MVC, the DefaultHandler property is an instance of MvcRouteHandler, and this does the work of selecting an action to execute.

The MapRoute methods are now provided as extensions of IRouteBuilder and they work by creating new routes and adding them to the builder's Routes collection. When the final Build call is executed, the standard RouteBuilder creates an instance of RouteCollection, which acts as our router for our middleware instance.
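
For the common case, that all happens behind the familiar route registration - a minimal example of the sort of thing you'd write in Startup.Configure:

app.UseMvc(routes =>
{
    // Builds a RouteCollection behind the scenes, with MvcRouteHandler
    // as the default handler for matched routes.
    routes.MapRoute(
        name: "default",
        template: "{controller=Home}/{action=Index}/{id?}");
});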

Remembering this is important if you are migrating from an ASP.NET 4.x application to ASP.NET Core and you've invested heavily in tweaking the Routing framework to suit your needs.

A quick note on Attribute Routing

Attribute Routing is a feature of MVC and not directly tied to the Routing framework. Because MVC creates its own router, it can control at what point it integrates Attribute routing. It does this during the call to UseMvc(r => { }) by injecting a single instance of AttributeRoute at the start of the route collection after all other routes have been configured.

This single instance of AttributeRoute acts as the router for handling Controller and Action-level attribute routes, using the application model as the source of truth.
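
As a quick example of what feeds that application model (the controller and types here are hypothetical):

[Route("api/[controller]")]
public class ProductsController : Controller
{
    // Matched via the single AttributeRoute instance, e.g. GET api/products/5.
    [HttpGet("{id:int}")]
    public IActionResult Get(int id) => Ok(id);
}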

Making decisions

Using the standard MapRoute method you end up with an instance of a TemplateRoute, which works on a route template string, such as {controller}/{action}/{id?}. When RouteAsync is evaluated for this type, it checks whether the incoming request path matches the route template. If it does, it then checks any applied constraints to determine whether the route values lifted from the incoming request path are valid. If either of these steps fails, the route does not match and control is returned back to the route collection to test the next route. This is very similar to how conventional routing currently operates in ASP.NET 4.x.

Finishing up

Hopefully you can appreciate this run-through of the under-the-hood changes made to the Routing framework. The newer Routing framework offers up greater flexibility in composing our applications because of its integration with the middleware pipeline.

It's worth having a look at the GitHub repo code for yourself.

ASP.NET Core 1.0 - Dependency Injection - What it is, and what it is not

Since its inception, ASP.NET 5 (now known as ASP.NET Core 1.0) has had the concept of Dependency Injection (DI) baked into its foundation. Where previous iterations of MVC supported this mechanism, it was always an optional extra. Also, at that stage, it wasn't really DI, it was a Service Locator pattern, into which you could plug a compatible IoC container to make it DI.

With the ability to redesign the entire stack from the ground up, the team took the approach of building in support for DI as a first-class feature. This was a natural extension of the desire to break up the stack into a set of composable NuGet packages - they needed a way to bring a whole host of components together - built around a set of known abstractions. These components needed to be testable too.

The built-in container represents a baseline set of functionality required to support ASP.NET Core 1.0. It only supports scenarios that are used by the framework - it is very focused on what it needs to do. With that in mind, you won't find it supporting some advanced concepts such as mutable containers, named child scopes, etc. found in other containers. That being said, ASP.NET Core is designed to allow you to plug in an alternative container that supports that additional functionality.

Foundations - IServiceProvider

The IServiceProvider interface has been kicking around since .NET 1.1 and is used by a variety of components throughout the Desktop CLR (.NET Framework) for providing a mechanism through which to resolve service instances. This includes the classic HttpContext (but don't try to resolve your IoC components through it - you can only return a few types, like HttpRequest, HttpResponse, etc. - It's not hooked up to your IoC container).

For the DI story in ASP.NET Core, the team have re-used this pre-existing interface, but it is now the service locator abstraction for the entire ASP.NET Core stack. In that sense, it sort of fills the role of the CommonServiceLocator for ASP.NET Core - to plug any other IoC/DI system into ASP.NET Core, you have to implement this standard interface. The stack uses this interface for resolving its types - and this means that although the framework has built-in support through its own container, you can easily plug in any other container - as long as it brings its implementation of IServiceProvider along for the ride. Sort of...

There is actually another contract that will need to be implemented to make an ASP.NET Core compatible container - IServiceScopeFactory. This interface is used for provisioning a new lifetime scope (in terms of built-in DI). For our built-in story, this is provided out of the box - and it is through this mechanism that Request-scoped services are resolved.

ServiceDescriptor and IServiceCollection

The ServiceDescriptor type (Microsoft.Extensions.DependencyInjection.Abstractions package) provides a container-agnostic approach to describing your services and their lifetimes. Generally, you take the approach of using a set of extension methods over IServiceCollection, such as:

services.AddTransient<IMyService, MyService>();  
services.AddMvc();  

These are wrappers around calls to ServiceDescriptor.[Instance|Transient|Scoped|Singleton]. You can easily use ServiceDescriptor directly:

var descriptor = ServiceDescriptor.Transient<IMyService, MyService>();  

An IServiceCollection is a mutable collection of ServiceDescriptor instances.
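
A small sketch to make that concrete (the service types are hypothetical):

var services = new ServiceCollection();

// Equivalent to services.AddTransient<IMyService, MyService>();
services.Add(ServiceDescriptor.Transient<IMyService, MyService>());

// IServiceCollection is essentially an IList<ServiceDescriptor>, so the
// registrations can be inspected or modified before the provider is built.
foreach (ServiceDescriptor descriptor in services)
{
    Console.WriteLine($"{descriptor.ServiceType.Name} -> {descriptor.Lifetime}");
}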

Under the hood of the ServiceProvider

The built-in container is an internal implementation named ServiceProvider found in the Microsoft.Extensions.DependencyInjection package. An extension method of IServiceCollection is provided to initialise a new instance of it:

public static IServiceProvider BuildServiceProvider(this IServiceCollection services)  
{
    return new ServiceProvider(services);
}

When you initialise a service provider with an IServiceCollection instance, it creates a new root container. It contains an instance of ServiceTable, which contains the blueprints for creating instances of services with their required lifetimes. Through the use of IServiceScopeFactory it is also possible to initialise a new instance of ServiceProvider using the existing container as the root. The important thing about this is that they share the same ServiceTable instance - which means it is not designed to allow modifications to service registrations in child scopes. The idea is that you configure your container once - reducing the set of moving parts.

A ServiceTable represents one or more IService instances, whereby an IService is a binding between a type, a ServiceLifetime and an IServiceCallsite, the latter of which actually realizes the instance of the service type. When a call to GetService is received, the container follows these steps to obtain the implementation instance:

  1. Look in the cache of realized services for a delegate used to obtain the instance.
  2. If one does not exist, go through the ServiceTable and find the IServiceCallsite instance.
  3. Create a delegate used to obtain the instance through the IServiceCallsite.
  4. Cache the delegate for future calls.

On the first call for a service instance, the container uses a reflection-based approach to obtain the instance, but subsequent calls may result in the container opting to generate an expression tree, which compiles down to a delegate for future calls. This optimises occurrences where a component may be requested multiple times, depending on your chosen ServiceLifetime.

Optimizations for IEnumerable<T> services

The built-in container supports IEnumerable<T> directly, and it optimizes discovering T instances by chaining IService entries together through the IService.Next property. This means when the container is realizing an instance of IEnumerable<T>, it can move through the ServiceTable in a linked-list fashion to obtain the IService instances quickly.
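
In practice that means registering multiple implementations of the same service and asking for them all at once - a quick sketch with hypothetical handler types:

var services = new ServiceCollection();
services.AddTransient<IMessageHandler, EmailMessageHandler>();
services.AddTransient<IMessageHandler, SmsMessageHandler>();

var provider = services.BuildServiceProvider();

// The built-in container returns every registered IMessageHandler here.
var handlers = provider.GetService<IEnumerable<IMessageHandler>>();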

Use in ASP.NET Core

The out-of-the-box experience provides the built-in set of DI components. The default convention is to simply use the Startup.ConfigureServices method to apply your service registrations:

public void ConfigureServices(IServiceCollection services)  
{
    services.AddTransient<IMyService, MyService>();
    services.AddMvc();
}

The framework will take care of calling services.BuildServiceProvider() after this call has completed.

Replacing the built-in container

This same mechanism for registering services can be used for returning a custom IServiceProvider:

public IServiceProvider ConfigureServices(IServiceCollection services)  
{
    services.AddTransient<IMyService, MyService>();
    services.AddMvc();

    return services.BuildServiceProvider();
}

That final call, return services.BuildServiceProvider(), could easily be replaced with other containers, such as Autofac or perhaps Ninject. After ASP.NET Core RTWs (but hopefully before!), I would expect to see most if not all of the popular IoC containers implement a compatible IServiceProvider, allowing you to use the container of your choice if the built-in container does not fit your requirements.
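
As a rough sketch of what that might look like with Autofac - assuming the Autofac.Extensions.DependencyInjection integration package, so treat the exact calls as indicative rather than gospel:

public IServiceProvider ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    // Push the framework registrations into Autofac, add our own, and hand
    // back Autofac's IServiceProvider implementation.
    var builder = new ContainerBuilder();
    builder.RegisterType<MyService>().As<IMyService>();
    builder.Populate(services);

    return new AutofacServiceProvider(builder.Build());
}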

Use outside of ASP.NET Core

It is entirely possible to use the ASP.NET Core built-in DI container outside of ASP.NET Core - this is actually signified by the fact that the abstractions and implementations aren't part of the Microsoft.AspNetCore namespace. Like many of these utility services (such as FileProviders, etc.), they can be used independently.

Creating a container manually

You can easily create your own container using the same mechanism, the IServiceCollection. The Microsoft.Extensions.DependencyInjection.ServiceCollection implementation needs to be spun up and some registrations need to be added:

var services = new ServiceCollection();

services.AddTransient<IMyService, MyService>();  
services.AddScoped<IMyOtherService, MyOtherService>();  

You can then build your container through the BuildServiceProvider extension method:

var container = services.BuildServiceProvider();

var myService = container.GetService<IMyService>();  
var myOtherService = container.GetService<IMyOtherService>();  

To create a child scope, you can resolve the scope factory:

var scopeFactory = container.GetService<IServiceScopeFactory>();

using (var scope = scopeFactory.CreateScope())
{
    var scopedContainer = scope.ServiceProvider;
    var myOtherServiceScoped = scopedContainer.GetService<IMyOtherService>();
}

Finishing up

I hope this post gives you more of an in-depth look at the built-in container - what it is, and how it works. Don't forget to check out the aspnet/DependencyInjection repo on GitHub.