ASP.NET Core 2.0 - Thinking about the future

It has been an eventful week in ASP.NET Core land.

It started with community member and ASP.NET Core contributor Kévin Chalet discovering that a PR was being merged into the release branch of ASP.NET Core 2.0 Preview 1, rel/2.0.0-preview1. It was a significant PR in that it changed the target framework monikers (TFMs), which denote compile targets: the previous TFMs of netstandard1.x and net4x were changed to netcoreapp2.0. Had this been merged into a non-release branch, the community probably wouldn't have baulked, but in a release branch it signals intent to ship.

This means one thing in broad strokes:

ASP.NET Core 2.0 was dropping support for .NET Framework (netfx), in favour of specifically targeting .NET Core (netcore) 2.0
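Concretely, the kind of change in question looks roughly like this hypothetical csproj fragment (the exact TFMs varied across the repos; this is illustrative only):

<!-- Before: cross-targeting, consumable from both netfx and netcore -->
<PropertyGroup>
  <TargetFrameworks>netstandard1.6;net46</TargetFrameworks>
</PropertyGroup>

<!-- After: .NET Core only -->
<PropertyGroup>
  <TargetFramework>netcoreapp2.0</TargetFramework>
</PropertyGroup>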

From the community's point of view, this seemingly came out of nowhere (I'm pretty sure it had been discussed extensively internally). Why would the message of compatibility that was sold to us with the release of ASP.NET Core 1.0, the message that you could use the latest technology stack on both the stable, mature .NET Framework and the fast-moving .NET Core, now change?

Naturally, the community as a whole was divided, some in favour, some completely objecting. Let's try and take an objective look at the pros and cons of this change.

For moving to .NET Core 2.0 only

The primary motivation for this change, as I understand it, was how quickly ASP.NET Core could move while promising .NET Standard compatibility. .NET Standard was introduced to simplify compatibility between libraries and platforms; please refer to this blog post as a refresher. The idea is that we take the .NET Framework, .NET Core, Mono and other implementations of a 'standard', and provide a meta library (the .NET Standard library) exposing a consistent API surface that you can target across all runtimes. Different versions of each runtime then provide support for a given version of .NET Standard.

For instance, if I target netstandard1.4, I know my library should work on .NET Core 1.0 (which supports up to .NET Standard 1.6), .NET Framework >= 4.6.1 and also UWP 10.0. This is great because .NET Standard ensures API compatibility for me, meaning I worry less about #if-ing code for specific build platforms.
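In project-file terms, that promise boils down to a single TFM. A hypothetical library csproj fragment (assuming the VS2017-era SDK project format):

<PropertyGroup>
  <!-- One TFM covers .NET Core 1.0+, .NET Framework 4.6.1+ and UWP 10.0 -->
  <TargetFramework>netstandard1.4</TargetFramework>
</PropertyGroup>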

.NET Standard is a promise.

But one thing that potentially wasn't considered when promising .NET Standard compatibility for ASP.NET Core, and ensuring ASP.NET Core could run on both .NET Framework and .NET Core, is that the two move at different speeds. How can we take advantage of new APIs in .NET Core (which would eventually be ported to .NET Framework and defined in .NET Standard) when .NET Framework has a completely different release cadence? Remember that .NET Framework changes happen slowly, because they need to be tested to ensure support across the billions of devices that currently run it.

Targeting .NET Core breaks compatibility with .NET Framework

The above chart denotes the API surface of .NET Standard 2.0 and its compatibility with .NET Framework 4.6.1 and .NET Core vNext; it does not express the full API surface available to netfx and netcore.

.NET Core releases will come thick and fast in comparison to .NET Framework, so if we are promising this support to run ASP.NET Core on .NET Framework, we can't take advantage of new APIs implemented in a future version of .NET Standard until .NET Framework has been updated.

By sticking purely to .NET Standard, it means we are bound by the release cadence of the slower moving .NET Framework.

By unburdening ourselves from .NET Framework, the door opens to getting ASP.NET Core moving a lot quicker.

Only for parts of ASP.NET

Another thing to consider is that it is not the complete ASP.NET Core 2.0 stack being re-targeted. Those libraries which are likely to be consumed outside of the web application model (e.g. Microsoft.Extensions.*, Razor, etc.) will continue to target .NET Standard to ensure maximum compatibility.

Against moving to .NET Core 2.0 only

From what I see there are two big reasons why we should stick with .NET Standard 2.0:

  1. Removing support for the .NET Framework means businesses with heavy investment in .NET Framework are cut out of the will. ASP.NET Core 2.0 means .NET Core or nothing, and there is no support for referencing netcore libraries from netfx. Couple this with the short(ish) support lifetime for ASP.NET Core 1.x, and businesses combining ASP.NET Core with their legacy .NET Framework components cannot upgrade to ASP.NET Core 2.0.

    This gives businesses only a couple of options: stick with .NET Framework and the ASP.NET stack (read: not Core), in which case your codebase becomes stagnant and very slow moving, or stick with ASP.NET Core 1.x, which has a limited lifetime.

  2. By adopting a latest-framework-only pattern, future library authors may also switch gears because their primary application model is ASP.NET Core 2.0. Leading by example could be damaging to the ecosystem in this world of promised compatibility (.NET Standard). Imagine a library as prolific as JSON.NET, in some weird future where James decides he too only wants to support future APIs in .NET Core. (This is only an example; I doubt it would actually happen!)

Where do we go from here?

At Build, Microsoft reaffirmed its commitment to delivering ASP.NET Core 2.0 for both .NET Core and .NET Framework, meaning it targets .NET Standard 2.0. This looks like backtracking: in light of the amount of noise generated by the GitHub issue and in other channels, Microsoft have decided to please the masses. It is great that they are listening to customers, but we also have to be realistic about the outcome.

What that now means is:

ASP.NET Core can only move as fast as .NET Standard, and therefore .NET Framework, in terms of using new APIs.

What about me?

I can't agree one way or the other; both sets of arguments are completely valid. We must simply accept the outcome: it ensures maximum compatibility for all, but it also limits advancement.

The one thing I would like to see out of all of this is more visibility on these (major) decisions. Much like the C# language repos, where new ideas are thrashed out in discussion before implementation, I feel ASP.NET Core needs to work this way for critical issues, so perhaps this is an idea the team can take on board.

Pub/Sub Event Model for DNX and ASP.NET 5 with Dependency Injection

I've been continuing my adventures with the new DNX and ASP.NET 5 platform as it progresses through its various beta stages heading towards RC. I am really liking a lot of the new features being built, and how those features open up new scenarios for us developers.

The new stack is incredibly versatile and already offers up a myriad of ways of plugging into the framework. It is this extensibility, and the way things get composed together, that gets the cogs whirring again. It is an exciting time to be a .NET developer.

One thing I feel is missing, and not just from the ASP.NET 5 stack specifically, is the concept of an evented programming model: the ability to respond to things happening in your application through events. Desktop applications (or, more generally, stateful applications) commonly use an evented programming model; WinForms and WPF-based applications are examples, where implementing events is nothing new. But in a stateless application, like a website, events become limited in functionality, because you could consider the lifetime of an event to be the lifetime of the request itself.

I wanted to investigate whether or not we could bring in an event model for ASP.NET 5 applications, that uses a little trickery from the DI system to support a dynamic composition of event subscribers. And I started by looking at Prism.

The Event Provider

If you haven't discovered Prism before, it's a toolkit for building composite WPF applications - WPF-based applications composed of loosely coupled components. One feature of Prism is an event aggregator component, designed to provision events which subscribers can subscribe to dynamically. This seemed like a good basis for a dynamic pub/sub event model, so I took what we have there and defined a new type in my project, the EventProvider. I wasn't sure the *Aggregator naming was right - it's not really aggregating anything - and it's not really a *Factory either, because in my model it isn't explicitly creating events, so I went with *Provider. Here is my contract:

public interface IEventProvider
{
    // Factory method for creating the binding between an event and a subscriber.
    IEventSubscription<TPayload> CreateEventSubscription<TPayload>(
      SubscriptionToken token, 
      Func<TPayload, CancellationToken, Task> notificationFunc, 
      Func<TPayload, CancellationToken, Task<bool>> filterFunc = null
    );

    // Gets the event instance managed by this provider.
    TEvent GetEvent<TEvent>() 
      where TEvent : IEvent;

    // Gets the event instance, using the given factory to create it if required.
    TEvent GetEvent<TEvent>(Func<TEvent> factory) 
      where TEvent : IEvent;

    // Resolves event subscribers provided externally, e.g. through the IoC container.
    IEnumerable<IEventSubscriber<TEvent, TPayload>> GetExternalEventSubscribers<TEvent, TPayload>() 
      where TEvent : IEvent;
}

It follows the original EventAggregator quite closely, but there are a few differences:

  • CreateEventSubscription is a factory method used to create a subscription - it is not the subscriber, but the binding between an event and a subscriber (a sketch of this binding follows the list).
  • async/await subscribers. Because we don't know what subscribers may be doing on notification, we treat all subscribers as asynchronous. We give them a CancellationToken, they give us back a Task. This means we can await each subscriber performing its work.
  • GetExternalEventSubscribers is our hook into the DI system to return event subscribers provided through the IoC container.
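To make that binding concrete, here is a minimal sketch of what the subscription type might look like. This is illustrative only - I'm assuming the IEventSubscription<TPayload> contract exposes the token and a notify operation, which isn't spelled out above:

using System;
using System.Threading;
using System.Threading.Tasks;

public class EventSubscription<TPayload> : IEventSubscription<TPayload>
{
    private readonly Func<TPayload, CancellationToken, Task> _notificationFunc;
    private readonly Func<TPayload, CancellationToken, Task<bool>> _filterFunc;

    public EventSubscription(
        SubscriptionToken token,
        Func<TPayload, CancellationToken, Task> notificationFunc,
        Func<TPayload, CancellationToken, Task<bool>> filterFunc = null)
    {
        Token = token;
        _notificationFunc = notificationFunc;
        _filterFunc = filterFunc;
    }

    // The token identifying this subscription; disposing it unsubscribes.
    public SubscriptionToken Token { get; }

    // Notify the subscriber, honouring the optional filter.
    public async Task NotifyAsync(TPayload payload, CancellationToken cancellationToken)
    {
        if (_filterFunc == null || await _filterFunc(payload, cancellationToken))
        {
            await _notificationFunc(payload, cancellationToken);
        }
    }
}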

The Event

So, what is an event? An event is a .NET class that implements the IEvent<T> contract, where T is our payload type. We split the concerns here, with IEvent being the event and T being the data for the event; e.g., the event could be UserRegistered with User as the contextual data. The non-generic IEvent contract is a marker interface used mainly for generic constraints, but it also provides a reference to the source IEventProvider.

public interface IEvent<TPayload> : IEvent
{
    // Publishes the payload to all current subscribers.
    Task PublishAsync(
      TPayload payload, 
      CancellationToken cancellationToken = default(CancellationToken));

    // Subscribes with a notification delegate and an optional filter.
    SubscriptionToken Subscribe(
      Func<TPayload, CancellationToken, Task> notificationFunc, 
      Func<TPayload, CancellationToken, Task<bool>> filterFunc = null);

    // Removes the subscription identified by the given token.
    void Unsubscribe(SubscriptionToken token);
}

An individual event controls its own set of subscribers, and obtaining a reference to the event from the provider means you can trigger publications as part of your workflow.
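Before looking at usage, here is a minimal sketch of what an Event<TPayload> base class (which the concrete events later derive from) might look like. It assumes SubscriptionToken is an IDisposable that calls back into Unsubscribe when disposed - which is what makes the using pattern below work - and it handles only direct subscribers, leaving the external (DI-provided) ones to the provider:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public abstract class Event<TPayload> : IEvent<TPayload>
{
    private readonly Dictionary<SubscriptionToken, EventSubscription<TPayload>> _subscriptions
        = new Dictionary<SubscriptionToken, EventSubscription<TPayload>>();

    // Reference back to the provider that created this event (from the IEvent marker).
    public IEventProvider EventProvider { get; set; }

    public async Task PublishAsync(TPayload payload, CancellationToken cancellationToken = default(CancellationToken))
    {
        // Snapshot the subscriptions so a subscriber can unsubscribe during publication.
        foreach (var subscription in _subscriptions.Values.ToList())
        {
            await subscription.NotifyAsync(payload, cancellationToken);
        }

        // A fuller implementation would also notify external subscribers resolved
        // through IEventProvider.GetExternalEventSubscribers.
    }

    public SubscriptionToken Subscribe(
        Func<TPayload, CancellationToken, Task> notificationFunc,
        Func<TPayload, CancellationToken, Task<bool>> filterFunc = null)
    {
        // Assumes SubscriptionToken takes an unsubscribe callback it invokes on Dispose().
        var token = new SubscriptionToken(Unsubscribe);
        _subscriptions[token] = new EventSubscription<TPayload>(token, notificationFunc, filterFunc);
        return token;
    }

    public void Unsubscribe(SubscriptionToken token)
    {
        _subscriptions.Remove(token);
    }
}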

What this mechanism enables is a classic pub/sub model that you can use through the IDisposable pattern with using:

var provider = new EventProvider();
var @event = provider.GetEvent<UserRegistered>();

using (@event.Subscribe(async (u, ct) => await SendRegistrationEmailAsync(u, ct)))
{
    var user = new User { Name = "Matt" };
    await @event.PublishAsync(user, CancellationToken.None);
}

While this is great because we can subscribe to the events we care about, it complicates our workflow, which violates the single responsibility principle. We want to keep our code simple: simple code is more predictable, easier to debug and easier to test. So how do we go about doing this?

Providing Subscribers through DI

In stateless applications, like websites, application composition occurs at startup. I can't think of many solutions that offer dynamic composition during the lifetime of a web application. Typically you have a set of components you wire up at the start, and the application doesn't really change from that point onwards. By that I mean the code doesn't change - obviously the data does.

We can take advantage of this by allowing our event provider to use the IoC container to resolve event subscribers. In this model, I've called event subscribers provided through the DI system external subscribers; all others are direct. So how do we do this? Firstly, we define another contract:

public interface IEventSubscriber<TEvent, TPayload> where TEvent : IEvent
{
    Task<bool> FilterAsync(
      TPayload payload, 
      CancellationToken cancellationToken = default(CancellationToken));

    Task NotifyAsync(
      TPayload payload, 
      CancellationToken cancellationToken = default(CancellationToken));
}

This contract provides the subscriber methods for notification (called when an item is published) and filtering (so the subscriber is only notified of the payloads it cares about).
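As an aside, the GetExternalEventSubscribers hook on the provider can then be little more than a query against the container. Here's a minimal sketch of how my EventProvider might resolve them, assuming it captures a Func<IServiceProvider> (you'll see that accessor being supplied in the registration code shortly):

using System;
using System.Collections.Generic;
using System.Linq;

public class EventProvider // implements IEventProvider; other members omitted in this sketch
{
    private readonly Func<IServiceProvider> _serviceProviderAccessor;

    public EventProvider(Func<IServiceProvider> serviceProviderAccessor = null)
    {
        _serviceProviderAccessor = serviceProviderAccessor ?? (() => null);
    }

    public IEnumerable<IEventSubscriber<TEvent, TPayload>> GetExternalEventSubscribers<TEvent, TPayload>()
        where TEvent : IEvent
    {
        var serviceProvider = _serviceProviderAccessor();
        if (serviceProvider == null)
        {
            // No container available - only direct subscribers will be notified.
            return Enumerable.Empty<IEventSubscriber<TEvent, TPayload>>();
        }

        // Asking the container for IEnumerable<T> yields every registration of T.
        return (IEnumerable<IEventSubscriber<TEvent, TPayload>>)serviceProvider
            .GetService(typeof(IEnumerable<IEventSubscriber<TEvent, TPayload>>));
    }
}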

So let's implement an external event subscriber for the ASP.NET 5 starter template project. Firstly, let's define our event:

public class ApplicationUserCreatedEvent : Event<ApplicationUser>
{
}

The event will be triggered when the ApplicationUser class is saved. Next, let's define our event subscriber:

public class SendAuthorisationEmailEventSubscriber : EventSubscriber<ApplicationUserCreatedEvent, ApplicationUser>
{
    private readonly IEmailSender _emailSender;

    public SendAuthorisationEmailEventSubscriber(IEmailSender emailSender)
    {
        _emailSender = emailSender;
    }

    public override Task<bool> FilterAsync(ApplicationUser payload, CancellationToken cancellationToken = default(CancellationToken))
    {
        return Task.FromResult(payload.Email != null);
    }

    public override async Task NotifyAsync(ApplicationUser payload, CancellationToken cancellationToken = default(CancellationToken))
    {
        await _emailSender.SendEmailAsync(payload.Email, "Welcome to the site", $"Hi {payload.UserName}, this is your welcome email.");
    }
}

What this example subscriber does is send confirmation (authorisation) emails to new users. The subscriber itself is provided through the DI system, which means it can have its own dependencies provided the same way. There are a number of benefits to this approach:

  • Event subscribers define their own dependencies.
  • Event publishers are simplified because they don't need to know about the dependencies of subscribers.
  • Both subscriber and publisher code surfaces are a lot smaller - easier to test, very decoupled, and focused on a single responsibility.

We can register our services in Startup:

public void ConfigureServices(IServiceCollection services)
{
    // Other services...

    // Register event provider and subscribers.
    services.AddScoped<IEventProvider>(sp => new EventProvider(() => sp));
    services.AddTransientEventSubscriber<ApplicationUserCreatedEvent, ApplicationUser, SendAuthorisationEmailEventSubscriber>();
}
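Note that AddTransientEventSubscriber is an extension method from my sample project, not something built into the framework. A plausible sketch of it, assuming it does nothing more than map the subscriber contract to the concrete type:

using Microsoft.Framework.DependencyInjection;

public static class EventServiceCollectionExtensions
{
    public static IServiceCollection AddTransientEventSubscriber<TEvent, TPayload, TSubscriber>(
        this IServiceCollection services)
        where TEvent : IEvent
        where TSubscriber : class, IEventSubscriber<TEvent, TPayload>
    {
        // Registering against the contract means the provider can resolve all
        // subscribers for an event via IEnumerable<IEventSubscriber<TEvent, TPayload>>.
        return services.AddTransient<IEventSubscriber<TEvent, TPayload>, TSubscriber>();
    }
}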

We make the IEventProvider scoped to the request, but subscribers can be either scoped or transient, depending on the services you want them to consume. In my demo project, I've taken the basic ASP.NET 5 starter template and modified the AccountController:

[Authorize]
public class AccountController : Controller
{
    // Code removed for brevity...
    private readonly IEventProvider _eventProvider;
    private readonly ApplicationUserCreatedEvent _userCreatedEvent;

    public AccountController(
        // Code removed for brevity...
        ApplicationDbContext applicationDbContext,
        IEventProvider eventProvider)
    {
        // Code removed for brevity...
        _applicationDbContext = applicationDbContext;
        _eventProvider = eventProvider;

        _userCreatedEvent = eventProvider.GetEvent<ApplicationUserCreatedEvent>();
    }

    // Code removed for brevity...

    [HttpPost]
    [AllowAnonymous]
    [ValidateAntiForgeryToken]
    public async Task<IActionResult> Register(RegisterViewModel model)
    {
        if (ModelState.IsValid)
        {
            var user = new ApplicationUser { UserName = model.Email, Email = model.Email };
            var result = await _userManager.CreateAsync(user, model.Password);
            if (result.Succeeded)
            {
                await _userCreatedEvent.PublishAsync(user);

                // Code removed for brevity...
            }
            AddErrors(result);
        }

        // If we got this far, something failed, redisplay form
        return View(model);
    }
}

Now all the AccountController needs to do is publish the event and allow the framework to take care of the rest.

I've pushed this sample project to GitHub. Comments, criticisms welcome.

Meta-programming with Roslyn on ASP.NET vNext

The new ASP.NET vNext platform (ASP.NET 5) takes advantage of the new Roslyn compiler infrastructure. Roslyn is a managed-code implementation of the .NET compilers, but in reality it is so much more than that. Roslyn is a compiler-as-a-service, surfacing a set of services that have long been locked away. With this new technology we get a new set of APIs that allow us to understand a lot more about the code we are writing.

In ASP.NET vNext, the whole approach to the project system has changed, creating a leaner project system built on the new Roslyn compiler services. The team have enabled some new scenarios with this approach, and one quite exciting scenario is meta-programming: developing programs that understand other programs, or in our case, writing code that understands our own code and updates/modifies our projects at compile time. Meta-programming in your projects can be achieved using the new ICompileModule (github) interface, which the Roslyn compiler can discover at compile time and utilise both before and after your compilation:

public interface ICompileModule
{
    void BeforeCompile(BeforeCompileContext context);

    void AfterCompile(AfterCompileContext context);
}

The interesting thing about how ICompileModule is used is that it is compiled as part of your own target assembly, and can act on the code within that assembly itself.

Example project

Let's look at a project structure:

/project.json
/compiler/preprocess/ImplementGreeterCompileModule.cs
/Greeter.cs

This is a much-simplified project, and what we are going to get it to do is implement the body of a method on our Greeter class:

public class Greeter
{
    public string GetMessage()
    {
        // No implementation here - the compile module will generate the body at compile time.
    }
}

So first things first, we need to make sure we have a reference to the required assemblies, so edit the project.json file and add the following:

{
    "version": "1.0.0-*",
    "dependencies": {
        "Microsoft.CodeAnalysis.CSharp": "1.0.0-*",
        "Microsoft.Framework.Runtime.Roslyn.Abstractions": "1.0.0-*"
    },

    "frameworks": {
        "dnx451": {
            "frameworkAssemblies": {
                "System.Runtime": "4.0.10.0",
                "System.Threading.Tasks": "4.0.0.0",
                "System.Text.Encoding": "4.0.0.0"
            }
        },
        "dnxcore50": {
            "dependencies": {
                "System.ComponentModel": "4.0.0-*",
                "System.IO": "4.0.10-*",
                "System.Reflection": "4.0.10-*",
                "System.Runtime": "4.0.10-*",
                "System.Runtime.Extensions": "4.0.10-*",
                "System.Threading.Tasks": "4.0.10-*"
            }
        }
    }
}

I'm not going to talk about the frameworks section and how it works, as there is already a great deal written about this concept.

The Microsoft.CodeAnalysis.CSharp package brings in the Roslyn APIs for working with C# code, and Microsoft.Framework.Runtime.Roslyn.Abstractions is a contract assembly bridging between your code and the Roslyn compiler when using the DNX. The DNX's Roslyn implementation is what gives us the ability to use these techniques; you can check out the implementation here.

How does ICompileModule work?

One of the interesting bits about how this all works is that when DNX builds your projects, it goes through a number of stages (these are very broad descriptions):

  1. Discover all applicable source files
  2. Convert to SyntaxTree instances
  3. Discover all references
  4. Create a compilation

Now at this stage, the RoslynCompiler will discover any ICompileModules, and if they exist, will create a !preprocess assembly with the same references as your project. It performs the same steps as above (1-4), but with just the code in compiler/preprocess/..., compiles it, and loads the assembly. The next step is to create an instance of the BeforeCompileContext class, which gives us information on the current main project compilation, its references and syntax trees. When the preprocess assembly types are found, they are instantiated and the BeforeCompile method is executed. At this stage, it sort of feels like Inception.

Implementing our Compile Module

So, now we have some understanding of how compile modules work, let's use one to implement some code. We start off with our basic implementation:

using System.Diagnostics;
using System.Linq;

using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp.Syntax;
using Microsoft.Framework.Runtime.Roslyn;

using T = Microsoft.CodeAnalysis.CSharp.CSharpSyntaxTree;
using F = Microsoft.CodeAnalysis.CSharp.SyntaxFactory;
using K = Microsoft.CodeAnalysis.CSharp.SyntaxKind;

public class ImplementGreeterCompileModule : ICompileModule
{
    public void AfterCompile(AfterCompileContext context)
    {
        // NoOp
    }

    public void BeforeCompile(BeforeCompileContext context)
    {

    }
}

When working with syntax trees, I prefer to use shorthand namespace import aliases, so F is the SyntaxFactory type, T is the CSharpSyntaxTree type, and K is the SyntaxKind type. This shorthand lets me write more terse code which is still quite readable.

So first things first, what do we need to do? We need to find our Greeter class within our compilation, and we can use the context.Compilation.SyntaxTrees collection for this. We're first going to do a little digging to find it:

// Get our Greeter class.
var syntaxMatch = context.Compilation.SyntaxTrees
    .Select(s => new
    {
        Tree = s,
        Root = s.GetRoot(),
        Class = s.GetRoot().DescendantNodes()
            .OfType<ClassDeclarationSyntax>()
            .Where(cs => cs.Identifier.ValueText == "Greeter")
            .SingleOrDefault()
    })
    .Where(a => a.Class != null)
    .Single();

We keep a reference to the tree itself, its root and the matching class declaration, as we'll need these later on. Next up, let's find our GetMessage method within our class:

var tree = syntaxMatch.Tree;
var root = syntaxMatch.Root;
var classSyntax = syntaxMatch.Class;

// Get the method declaration.
var methodSyntax = classSyntax.Members
    .OfType<MethodDeclarationSyntax>()
    .Where(ms => ms.Identifier.ValueText == "GetMessage")
    .Single();

Of course, we're ignoring things like overloads here - these are things you would need to consider in production code; as an example, this is deliberately naive. Now we have our method, we need to implement its body. What I want to create is a simple return "Hello World!"; statement. You could shortcut this by using SyntaxFactory.ParseStatement("return \"Hello World!\";");, but let's try building it from scratch:

// Let's implement the body.
var returnStatement = F.ReturnStatement(
    F.LiteralExpression(
        K.StringLiteralExpression,
        F.Literal(@"""Hello World!""")));

Here we are creating a return statement using the SyntaxFactory type (via our F alias): the return keyword plus a string literal for "Hello World!".
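Rendered back to source, the statement we've just built corresponds to:

return "Hello World!";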

Next up, we need to start updating the syntax tree. The thing to remember at this point is that compilations, syntax trees and their nodes are immutable: they are read-only structures, so we can't simply add things to an existing tree; we need to create new trees and replace nodes. The next couple of lines do exactly that.

// Get the body block
var bodyBlock = methodSyntax.Body;

// Create a new body block, with our new statement.
var newBodyBlock = F.Block(new StatementSyntax[] { returnStatement });

// Get the revised root
var newRoot = (CompilationUnitSyntax)root.ReplaceNode(bodyBlock, newBodyBlock);

// Create a new syntax tree.
var newTree = T.Create(newRoot);

We're doing a couple of things here. We first obtain the body block of the GetMessage method declaration. Next, we create a new block containing our returnStatement. We then go back to the root node and tell it to replace the bodyBlock node with the newBodyBlock node; it does this, but returns us a new root node. The original root is left unchanged, so to finish off we create a new syntax tree from the revised root.

Lastly, we'll replace the current syntax tree with our new one:

// Replace the compilation.
context.Compilation = context.Compilation.ReplaceSyntaxTree(tree, newTree);

If you build now, even though the Greeter.GetMessage method has no implementation in source, the build succeeds, because we've dynamically implemented the method using our ImplementGreeterCompileModule.
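To prove it to ourselves, a trivial (hypothetical) consumer in the same project can call the method as though the body had been written by hand:

using System;

public class Program
{
    public static void Main()
    {
        // The body of GetMessage was generated by ImplementGreeterCompileModule at compile time.
        Console.WriteLine(new Greeter().GetMessage()); // Prints: Hello World!
    }
}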

So our complete implementation looks like this:

using System.Diagnostics;
using System.Linq;

using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp.Syntax;
using Microsoft.Framework.Runtime.Roslyn;

using T = Microsoft.CodeAnalysis.CSharp.CSharpSyntaxTree;
using F = Microsoft.CodeAnalysis.CSharp.SyntaxFactory;
using K = Microsoft.CodeAnalysis.CSharp.SyntaxKind;

public class ImplementGreeterCompileModule : ICompileModule
{
    public void AfterCompile(AfterCompileContext context)
    {
        // NoOp
    }

    public void BeforeCompile(BeforeCompileContext context)
    {
        // Uncomment this to step through the module at compile time:
        //Debugger.Launch();

        // Get our Greeter class.
        var syntaxMatch = context.Compilation.SyntaxTrees
            .Select(s => new
            {
                Tree = s,
                Root = s.GetRoot(),
                Class = s.GetRoot().DescendantNodes()
                  .OfType<ClassDeclarationSyntax>()
                  .Where(cs => cs.Identifier.ValueText == "Greeter")
                  .SingleOrDefault()
            })
            .Where(a => a.Class != null)
            .Single();

        var tree = syntaxMatch.Tree;
        var root = syntaxMatch.Root;
        var classSyntax = syntaxMatch.Class;

        // Get the method declaration.
        var methodSyntax = classSyntax.Members
            .OfType<MethodDeclarationSyntax>()
            .Where(ms => ms.Identifier.ValueText == "GetMessage")
            .Single();

        // Let's implement the body.
        var returnStatement = F.ReturnStatement(
            F.LiteralExpression(
                K.StringLiteralExpression,
                F.Literal(@"""Hello World!""")));

        // Get the body block
        var bodyBlock = methodSyntax.Body;

        // Create a new body block, with our new statement.
        var newBodyBlock = F.Block(new StatementSyntax[] { returnStatement });

        // Get the revised root
        var newRoot = (CompilationUnitSyntax)root.ReplaceNode(bodyBlock, newBodyBlock);

        // Create a new syntax tree.
        var newTree = T.Create(newRoot);

        // Replace the compilation.
        context.Compilation = context.Compilation.ReplaceSyntaxTree(tree, newTree);
    }
}

I've included a Debugger.Launch() call (commented out), which can be useful if you want to step through the compilation process: uncomment it, hit a CTRL+SHIFT+B build, and attach to the IDE instance.

Where do we go from here?

There is a myriad of possibilities with this new technology, and one area I am very much interested in is component modularity. In future posts, I'll show you how you can use a compile module to discover modules in your code and/or NuGet packages and generate dependency registrations at compile time.

I've added the code as a Gist and pushed it to a public repo on GitHub.