ASP.NET 5 has hit Release Candidate 1 status, and if you have been actively using the alpha and beta bits, you may find you have a couple of issues trying to install the RC-1 release.

The install instructions do note an issue you may encounter during setup:

NOTE: There is currently a known issue with the ASP.NET 5 RC installer. If you run the installer from a folder that contains previous versions of the MSI installers for DNVM (DotNetVersionManager-x64.msi or DotNetVersionManager-x86.msi) or the ASP.NET tools for Visual Studio (WebToolsExtensionsVS14.msi or WebToolsExtensionsVWD14.msi), the installer will fail with an error “0x80091007 - The hash value is not correct”. To work around this issue, run the installer from a folder that does not contain previous versions of the installer files.

But I actually encountered a different issue:

Looking through the logfile, we had a few of these entries:

I'm not sure if it was a proxy issue, but luckily I found another way. Firstly:

And then after, ensure your DNVM is upgraded by running dnvm upgrade from a command prompt.

Now onto project.json and global.json changes, don't forget to check the aspnet/Announcements repo for the list of breaking changes (filter to milestone:1.0.0-rc1).

Entity Framework 7 introduces a new concept to the EF runtime known as shadow properties. Shadow properties are declared as part of the database model, but are not part of the entity class itself. You can declare shadow properties as part of your model, and EF then provides a way to access and reason about those properties as part of your queries, through a special static class called EF.

Firstly, we'll declare an example entity, the User:

public class User
{
    public int Id { get; set; }

    public string Name { get; set; }
}


So, my model building code might be something like:

var entity = builder.Entity<User>();
entity.Property(u => u.Name)
    .IsRequired()
    .HasMaxLength(20);


To declare a shadow property for this type, I could do the following:

entity.Property<string>("GooglePlusProfile")
    .HasMaxLength(10);


I'd recommend watching this video with Rowan Miller who guides you through some simple examples.

In my example above, I've detailed my user entity and added my shadow property, and now, using the EF type, I can provide a predicate for my queries:

from u in context.Users
where EF.Property<string>(u, "GooglePlusProfile") != null
select u;


The mechanism is good, but I'm not really a fan of the magic strings in there, I'd prefer something that we can actually statically declare, like any other property.

#### What is an entity mixin?

In my prototype, an entity mixin is another type that is declared separately from my entity, but its properties become shadow properties of the host entity. So first things first, I need a way of declaring properties on one type and having them become properties of another, in the context of my entity model. To do this, I define a custom MixinTypeBuilder and provide an extension method on EntityTypeBuilder. The calls to my custom builder are forwarded to a parent EntityTypeBuilder, where the properties are actually declared. This provides nice syntactic sugar:

var entity = builder.Entity<User>();
entity.Property(u => u.Name)
    .IsRequired()
    .HasMaxLength(20);

var mixin = entity.Mixin<Author>();
mixin.Property(a => a.GooglePlusProfile)
    .IsRequired()
    .HasMaxLength(50);


Here is my custom builder:

public class MixinTypeBuilder<TMixin> where TMixin : Mixin
{
    private readonly EntityTypeBuilder _entityTypeBuilder;
    private readonly string _propertyPrefix = $"{typeof(TMixin).Name}_";

    internal MixinTypeBuilder(EntityTypeBuilder entityTypeBuilder)
    {
        _entityTypeBuilder = entityTypeBuilder;
        _entityTypeBuilder.Annotation("MixinType", typeof(TMixin));
    }

    public PropertyBuilder<TProperty> Property<TProperty>(Expression<Func<TMixin, TProperty>> propertyExpression)
    {
        if (propertyExpression == null)
        {
            throw new ArgumentNullException(nameof(propertyExpression));
        }

        string propertyName = propertyExpression.GetPropertyAccess().Name;
        string propertyKey = $"{_propertyPrefix}{propertyName}";

        var builder = _entityTypeBuilder.Property<TProperty>(propertyKey);

        return builder;
    }
}


My example mixin, Author, now declares my shadow property GooglePlusProfile, which I rewrite as Author_GooglePlusProfile (this would need to be reflected in any table naming - migrations should handle this automatically). We also add an annotation to the entity being built to give it a reference to a mixin type, this will be used later through query generation.
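The Mixin<TMixin>() extension method on EntityTypeBuilder mentioned earlier isn't shown in full, but it would be a thin sketch along these lines (the class name EntityTypeBuilderExtensions is mine):

```csharp
public static class EntityTypeBuilderExtensions
{
    // Hypothetical sketch: forwards to the custom MixinTypeBuilder shown above.
    public static MixinTypeBuilder<TMixin> Mixin<TMixin>(this EntityTypeBuilder entityTypeBuilder)
        where TMixin : Mixin
    {
        return new MixinTypeBuilder<TMixin>(entityTypeBuilder);
    }
}
```

All the real work happens in MixinTypeBuilder; the extension method only exists for the fluent entity.Mixin<Author>() syntax.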

#### Querying with Mixins

At this point, we haven't done anything to enhance how we query against our custom shadow properties. What I want to provide is a nicer, statically typed way of expressing access to mixin properties. I want to achieve something like this:

from u in context.Users
where u.Mixin<Author>().GooglePlusProfile != null
select u;


We now need to define a method Mixin<T>() which will allow us to access the properties of the mixin as part of our query. EF7 won't support this out of the box, so we need to make some tweaks.

EF7 is extensible and modular by nature and when you're composing your application through DI, you can easily replace parts of the framework with your own implementations.

My first port of call is to get the actual method working, so I define an interface called ISupportMixins and an abstract implementation MixinHost:

public interface ISupportMixins
{
    T Mixin<T>() where T : Mixin;
}

public abstract class MixinHost : ISupportMixins
{
    private readonly List<Mixin> _mixins = new List<Mixin>();

    public T Mixin<T>() where T : Mixin
    {
        return _mixins.OfType<T>().FirstOrDefault() ?? Activator.CreateInstance<T>();
    }
}


We can now adjust our entity implementation for User:

public class User : MixinHost
{
    public int Id { get; set; }

    public string Name { get; set; }
}


The key part of this experiment is being able to transform expressions from what you see as part of your code, to what is actually executed against the database. Given my User and Author classes, I want to do the following:

// From:
u.Mixin<Author>().GooglePlusProfile

// To:
EF.Property<string>(u, "Author_GooglePlusProfile")


EF7 already has the plumbing in place to support the call to EF.Property<string>(u, "Author_GooglePlusProfile"), so we just need to transform our form of u.Mixin<Author>().GooglePlusProfile into this. We do this with an expression visitor, which allows us to walk the expression tree and replace nodes.

public class MixinExpressionVisitor : ExpressionVisitorBase
{
    public Expression TransformMixinMemberExpression(MemberExpression member)
    {
        var method = (MethodCallExpression)member.Expression;
        var target = method.Object;
        string propertyName = $"{method.Type.Name}_{member.Member.Name}";

        return Expression.Call(
            EntityQueryModelVisitor.PropertyMethodInfo.MakeGenericMethod(member.Type),
            target,
            Expression.Constant(propertyName));
    }

    protected override Expression VisitMember(MemberExpression node)
    {
        var method = node.Expression as MethodCallExpression;
        if (method != null && method.Method.IsGenericMethod && method.Method.Name == "Mixin")
        {
            // Here we are transforming calls like "entity.Mixin<Type>().Property"
            // to "EF.Property<PropertyType>(entity, "FullPropertyName")".
            return TransformMixinMemberExpression(node);
        }
        return base.VisitMember(node);
    }
}


EF7 uses Remotion's re-linq for handling the LINQ provider work, and this provides an intermediate query model into which we can thread our code. Currently, I've only been playing with the relational model with EF7, so to do what I needed, I provided my own implementation of Microsoft.Data.Entity.Query.RelationalQueryModelVisitor:

using EFRelationalQueryModelVisitor = Microsoft.Data.Entity.Query.RelationalQueryModelVisitor;

public class RelationalQueryModelVisitor : EFRelationalQueryModelVisitor
{
    // ctor omitted for brevity...

    public override void VisitQueryModel(QueryModel queryModel)
    {
        queryModel.TransformExpressions(e => new MixinExpressionVisitor().Visit(e));

        base.VisitQueryModel(queryModel);
    }
}


This is one of the parts of EF we're replacing, but because we extend the pre-existing type, we don't lose anything it is already doing. To replace this type, you can replace the service description:

services.Replace(ServiceDescriptor.Scoped<IEntityQueryModelVisitorFactory, RelationalQueryModelVisitorFactory>());


When EF now runs, it'll be using our implementation instead of the stock version.
We have another scenario to consider, and that is projection. If I want to do the following:

select u.Mixin<Author>();


This will currently work, but the property values will not be provided, because it'll be transformed to a simple new Author() expression. Instead, let's make some adjustments, so that:

// From:
select u.Mixin<Author>();

// To:
select new Author { GooglePlusProfile = EF.Property<string>(u, "Author_GooglePlusProfile") };


We do this by enhancing our visitor type:

public class MixinExpressionVisitor : ExpressionVisitorBase
{
    // Other visitor code omitted for brevity...

    private readonly IModel _model;

    public MixinExpressionVisitor(IModel model)
    {
        _model = model;
    }

    public Expression TransformMixinMethodExpression(MethodCallExpression method)
    {
        var mixinType = method.Type;
        var entityType = method.Object.Type;
        string prefix = $"{mixinType.Name}_";

        // Get the available properties of the mixin.
        var entity = _model.GetEntityType(entityType);
        var properties = entity
            .GetProperties()
            .Where(p => p.Name.StartsWith(prefix))
            .ToArray();

        // Create an object initializer expression.
        var ctor = Expression.New(mixinType);
        var memberBindings = new MemberBinding[properties.Length];
        for (int i = 0; i < properties.Length; i++)
        {
            var property = properties[i];
            string propertyName = property.Name.Replace(prefix, "");
            var member = mixinType.GetProperty(propertyName);
            var value = Expression.Call(
                EntityQueryModelVisitor.PropertyMethodInfo.MakeGenericMethod(member.PropertyType),
                method.Object,
                Expression.Constant(property.Name));

            memberBindings[i] = Expression.Bind(member, value);
        }

        return Expression.MemberInit(ctor, memberBindings);
    }

    protected override Expression VisitMethodCall(MethodCallExpression node)
    {
        var method = node.Method;
        if (method != null && method.IsGenericMethod && method.Name == "Mixin")
        {
            // Here we are transforming calls like "select entity.Mixin<Type>()" to
            // "new Type { PropertyName = EF.Property<PropertyType>(entity, "FullPropertyName") }".
            return TransformMixinMethodExpression(node);
        }
        return base.VisitMethodCall(node);
    }
}


The visitor now has to consider the IModel of the database context, because we need to understand what properties are available to the mixin. We use the annotations we added to the model earlier to grab those shadow properties that belong to our mixin.

We take each of those properties, and transform them to a call to EF.Property<T>(...), and aggregate these as an object initializer, using the Expression.MemberInit(...) method.

This is all very transparent to your end query, because the return type of .Mixin<T>() is T, which is also the type produced by the object initializer for T.

#### Hydrating the mixin and offering change detection

The last step of our experiment is to actually get the values from the entity to the mixin, and this introduces a restriction on where these mixins can be used. Shadow property values are stored by the change tracker, so we can't use them in untracked scenarios.

I haven't really found a decent point in the framework to hook into entity hydration - this is because EF generates expressions instead of using Activator to create the types - so I went a layer higher and modified how the state manager creates references to the items it is tracking. This is another part of "stock" EF we're going to replace: Microsoft.Data.Entity.ChangeTracking.Internal.InternalEntityEntryFactory.

using EFInternalEntityEntryFactory = Microsoft.Data.Entity.ChangeTracking.Internal.InternalEntityEntryFactory;

public class InternalEntityEntryFactory : EFInternalEntityEntryFactory
{
    public override InternalEntityEntry Create(IStateManager stateManager, IEntityType entityType, object entity)
    {
        var entry = base.Create(stateManager, entityType, entity);

        BindMixins(entry, entityType, entity);

        return entry;
    }

    public override InternalEntityEntry Create(IStateManager stateManager, IEntityType entityType, object entity, ValueBuffer valueBuffer)
    {
        var entry = base.Create(stateManager, entityType, entity, valueBuffer);

        BindMixins(entry, entityType, entity);

        return entry;
    }

    private void BindMixins(InternalEntityEntry entry, IEntityType entityType, object entity)
    {
        var mixinHost = entity as ISupportMixins;
        if (mixinHost != null)
        {
            var mixinTypes = entityType
                .Annotations
                .Where(a => a.Name == "MixinType")
                .Select(a => (Type)a.Value)
                .Distinct()
                .ToArray();

            foreach (var mixinType in mixinTypes)
            {
                // Create the mixin.
                var mixin = (Mixin)Activator.CreateInstance(mixinType);

                // Set the resolver.
                mixin.SetPropertyEntryResolver(p => new PropertyEntry(entry, p));

                // Assign to the host entity.
            }
        }
    }
}


This process firstly determines if the entity is a mixin candidate by looking for our ISupportMixins contract. Next, we determine what mixins are applied to the entity through the annotations, and for each one, we'll create an instance of it (mixins must derive from Mixin), and assign a delegate which allows us to get or set values against the current state manager. Lastly, we then add the mixin to the host.

So, putting it all together, this means I can do the following:

var users = (
    from u in context.Users
    select u // u.Mixin<Author>()
).ToList();

var user = users.FirstOrDefault();
if (user != null)
{
    var author = user.Mixin<Author>();

    // You can make changes here:

    // Save changes
    context.SaveChanges();
}


It is by no means a perfect solution; there are a few scenarios we don't currently support:

• Change detection when selecting only the mixin: from u in context.Users select u.Mixin<Author>(). In this scenario, the properties set against the mixin don't support change detection (they are stored internally by the Mixin base type), because the parent entity is not loaded, so the state manager work doesn't kick in.
• I haven't found a nice way of attaching a Mixin instance to an existing entity and then saving changes.

After all, this is just an experiment.

Code available on GitHub

I've been continuing my adventures with the new DNX and ASP.NET 5 platform as it progresses through its various beta stages heading towards RC. I am really liking a lot of the new features being built, and how those features open up new scenarios for us developers.

The new stack is incredibly versatile and it is already offering up a myriad of ways of plugging into the new framework. And it is this extensibility, and the way things get composed together that gets the cogs whirring again. It is an exciting time to be a .NET developer, if you have the chance.

One thing which I feel is missing - and not from the ASP.NET 5 stack specifically - is the concept of an evented programming model: the ability to respond to events raised in your code. Desktop applications (or should we say stateful applications) commonly use an evented programming model - WinForms is an example, as are WPF-based applications - so the concept of implementing events is not new. But in a stateless application, like a website, events become limited in functionality, because you could consider the lifetime of an event to be the lifetime of the request itself.

I wanted to investigate whether or not we could bring in an event model for ASP.NET 5 applications, that uses a little trickery from the DI system to support a dynamic composition of event subscribers. And I started by looking at Prism.

#### The Event Provider

If you haven't discovered Prism before, it's a toolkit for building composite WPF applications - WPF-based applications composed of loosely coupled components. One feature of Prism is an event aggregator component, designed to provision events which a subscriber can subscribe to dynamically. This seemed like a good basis for a dynamic pub/sub event model, so I took what exists there and defined a new type in my project, the EventProvider. I wasn't overly sure the *Aggregator naming was correct - it's not really aggregating anything, and it's not really a *Factory either, because in my model it's not explicitly creating events - so I went with *Provider. Here is my contract:

public interface IEventProvider
{
    SubscriptionToken CreateEventSubscription<TEvent>(
        SubscriptionToken token,
        ...);

    TEvent GetEvent<TEvent>()
        where TEvent : IEvent;

    TEvent GetEvent<TEvent>(Func<TEvent> factory)
        where TEvent : IEvent;

    IEnumerable<IEventSubscriber> GetExternalEventSubscribers<TEvent>()
        where TEvent : IEvent;
}


It follows the original EventAggregator quite closely, but there are a few differences:

• CreateEventSubscription is a factory method used to create a subscription - it is not the subscriber, but more of the binding between an event and a subscriber.
• async/await subscribers. Because we don't know what subscribers may be doing on notification, we treat all subscribers as asynchronous. We give them a CancellationToken, they give us back a Task. This means we can await on each subscriber to perform their work.
• GetExternalEventSubscribers is our hook into the DI system to return event subscribers provided through the IOC container.

### The Event

So, what is an event? An event is a .NET class that implements the IEvent<T> contract, where T is our payload type. We split the concerns here, with IEvent being the event and T being the data for the event - e.g., the event could be UserRegistered, with the User being the contextual data. The non-generic IEvent contract is a marker interface used mainly for generic constraints, but it also provides a reference to the source IEventProvider.

public interface IEvent<TPayload> : IEvent
{
    Task PublishAsync(
        TPayload payload,
        CancellationToken cancellationToken = default(CancellationToken));

    SubscriptionToken Subscribe(
        Func<TPayload, CancellationToken, Task> subscriber);

    void Unsubscribe(SubscriptionToken token);
}


An individual event controls its own set of subscribers, and obtaining a reference to the event from the provider means you can trigger your publications as part of your workflow.
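As a concrete sketch, the UserRegistered event used in the examples below could be declared like the ApplicationUserCreatedEvent shown later in the post, assuming an Event<TPayload> base class that implements IEvent<TPayload>:

```csharp
// Sketch: an event whose payload is the registered User.
public class UserRegistered : Event<User>
{
}
```

The event class itself stays empty; the payload type carries all the contextual data.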

What this mechanism enables is a classic pub/sub event mechanism you can use through the IDisposable pattern of using:

var provider = new EventProvider();
var @event = provider.GetEvent<UserRegistered>();

using (@event.Subscribe(async (user, ct) => await SendRegistrationEmailAsync(user, ct)))
{
    var user = new User { Name = "Matt" };
    await @event.PublishAsync(user, cancellationToken);
}


While this is great because it means we can subscribe to the events we care about, it does make our workflow more complex - which violates the single responsibility principle. We want to keep our code simple, because simple code is more predictable, easier to debug and easier to test. So how do we go about doing this?

### Providing Subscribers through DI

In stateless applications, like websites, the actual application composition occurs at startup. I can't think of a great many solutions that offer dynamic composition during the lifetime of a web application. Typically you have a set of components you wire up at the start, and the application really doesn't change from that point onwards. By that, I mean code doesn't change - obviously data does.

We can take advantage of this by allowing our event provider to use the IoC container to resolve our event subscribers. In this model, I've called event subscribers provided through the DI system external event subscribers; otherwise they are direct. So how do we do this? Firstly, we define another contract:

public interface IEventSubscriber<TEvent, TPayload> where TEvent : IEvent
{
    Task<bool> FilterAsync(
        TPayload payload,
        CancellationToken cancellationToken = default(CancellationToken));

    Task NotifyAsync(
        TPayload payload,
        CancellationToken cancellationToken = default(CancellationToken));
}


This interface contract provides the subscriber methods for notification (called when an item is published) and filtering (so the subscriber is notified only of the payloads it cares about).

So let's implement an external event subscriber for the ASP.NET 5 starter template project. Firstly, let's define our event:

public class ApplicationUserCreatedEvent : Event<ApplicationUser>
{
}


The event will be triggered when the ApplicationUser class is saved. Next, let's define our event subscriber:

public class SendAuthorisationEmailEventSubscriber : EventSubscriber<ApplicationUserCreatedEvent, ApplicationUser>
{
    private readonly IEmailSender _emailSender;

    public SendAuthorisationEmailEventSubscriber(IEmailSender emailSender)
    {
        _emailSender = emailSender;
    }

    public override async Task NotifyAsync(ApplicationUser payload, CancellationToken cancellationToken = default(CancellationToken))
    {
        await _emailSender.SendEmailAsync(payload.Email, "Welcome to the site", $"Hi {payload.UserName}, this is your welcome email.");
    }
}


What this example event subscriber does is send confirmation (authorisation) emails to new users. The subscriber itself is provided through the DI system, which means it can have its own dependencies provided the same way. There are a number of benefits to this approach:

• Event subscribers define their own dependencies.
• Event publishers are simplified because they don't need to know about the dependencies of subscribers.
• Both subscriber and publisher code surfaces are a lot smaller - easier to test, very decoupled, and focused on that single responsibility.

We can register our services in Startup:

public void ConfigureServices(IServiceCollection services)
{
    // Other services...

    // Register event provider and subscribers.
    services.AddScoped<IEventProvider>(sp => new EventProvider(() => sp));
}
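The subscriber from earlier would be registered alongside the provider; a sketch (the exact service registration shape is my assumption, not shown in the original):

```csharp
// Sketch: register the example subscriber so the provider can resolve it
// as an "external" event subscriber from the container.
services.AddTransient<
    IEventSubscriber<ApplicationUserCreatedEvent, ApplicationUser>,
    SendAuthorisationEmailEventSubscriber>();
```

Registering subscribers by their IEventSubscriber<TEvent, TPayload> interface is what lets GetExternalEventSubscribers discover them dynamically.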


We make the IEventProvider scoped to the request, but events can either be scoped or transient - depending on the services you want them to consume. In my demo project, I've taken the basic ASP.NET 5 starter template, and modified the AccountController:

[Authorize]
public class AccountController : Controller
{
    // Code removed for brevity...

    public AccountController(
        // Code removed for brevity...
        ApplicationDbContext applicationDbContext,
        IEventProvider eventProvider)
    {
        // Code removed for brevity...
        _applicationDbContext = applicationDbContext;
        _eventProvider = eventProvider;

        _userCreatedEvent = eventProvider.GetEvent<ApplicationUserCreatedEvent>();
    }

    // Code removed for brevity...

    [HttpPost]
    [AllowAnonymous]
    [ValidateAntiForgeryToken]
    public async Task<IActionResult> Register(RegisterViewModel model)
    {
        if (ModelState.IsValid)
        {
            var user = new ApplicationUser { UserName = model.Email, Email = model.Email };
            var result = await _userManager.CreateAsync(user, model.Password);
            if (result.Succeeded)
            {
                await _userCreatedEvent.PublishAsync(user);

                // Code removed for brevity...
            }
        }

        // If we got this far, something failed, redisplay form
        return View(model);
    }
}


Now all the AccountController needs to do is publish the event and allow the framework to take care of the rest.

I've pushed this sample project to GitHub. Comments, criticisms welcome.

Having not tackled any major TypeScript projects recently, the language itself has continued to be at the top of my 'to learn' list. I wanted to find a new project that I could use to learn TypeScript, and I decided on tackling a port of the Razor parsing framework to TypeScript.

I've had significant experience with the Razor parsing framework, initially through my open source project RazorEngine and most recently through my new project FuManchu, a C# implementation of HandlebarsJS using the Razor parsing framework and MVC's metadata framework.

Now, converting any library like this to JavaScript for the browser or NodeJS is quite an undertaking, but given how clean the codebase is and its lack of significant dependencies, it seems to me a perfect fit.

### Tools

To tackle this project, I'm going out of my comfort zone - that means dropping Visual Studio in favour of a simpler text editor - and Visual Studio Code is, again, a perfect choice. It still gives me IntelliSense, and provides built-in support for TypeScript. I also considered alternatives such as Sublime, Brackets or Atom, all using their respective OmniSharp plugins.

Razor itself comes with a wealth of predefined unit tests, which helps ensure that my TypeScript port conforms to the same set of tests the main project supports. That said, we're not building a C# language parser - I'm attempting an ECMAScript 6 language parser - so although there are a great many similarities, there are also differences. My unit test framework of choice is Jasmine, combined with Karma. I hope to implement the majority of the unit tests as proof of the implementation.

### Where to start

There is a lot going on in Razor - from the tokenizers to language parsers, chunk generators and tag helpers - so it's important to break the project down into smaller blocks which can be ported and tested. So let's start at the beginning and tackle some simple concepts: reading text. Razor provides a number of text reader implementations that operate on and implement some abstract contract types - ITextBuffer and ITextDocument. Most of the text readers also inherit from .NET's TextReader type, which we'd need to implement. Then there are a few utility types, such as StringBuilder, IEquatable<T>, IComparable<T> and, of course, IDisposable.

IDisposable is an interesting concept, as in C# and Visual Basic it is very much paired with a language feature: the using statement. We can implement something similar in TypeScript, and provide it as an exported function as part of the library.

Firstly, let's define IDisposable:

namespace Razor
{
    export interface IDisposable
    {
        dispose(): void;
    }
}


And now let's implement our using function.

namespace Razor
{
    export function Using(contextOrDisposable: any|IDisposable, disposableOrAction: IDisposable|Function, action?: (disposable: IDisposable) => void)
    {
        if (arguments.length === 2)
        {
            action = <(disposable: IDisposable) => void>disposableOrAction;
            disposableOrAction = <IDisposable>contextOrDisposable;
            contextOrDisposable = null;
        }

        try
        {
            action.apply(contextOrDisposable, [disposableOrAction]);
        }
        finally
        {
            (<IDisposable>disposableOrAction).dispose();
        }
    }
}


We're taking advantage of TypeScript's ability to support multiple types for arguments - this helps us implement a form of method overloading, but it means our method has to test its arguments to determine the actual intention of the call. This doesn't feel entirely right to me, and it's something I may come back to and refactor. Now, one thing that TypeScript handles really well is the control of closure scope - that being the meaning of this - so although I've provided arguments for specifying the context of your disposable closure, if you're taking advantage of TypeScript's support for arrow functions (=>), it handles this for you in the generated code.

import using = Razor.Using;

export class MyClass
{
    public get someProp(): string
    {
        return "value";
    }

    public someMethod(): string
    {
        var disposable: IDisposable = // ... some disposable instance
        using (disposable, () =>
        {
            // "this" still means the instance of "MyClass"
            var value = this.someProp;
        });
    }
}


### Next series post

The next post in the series will deal with implementing our text services - including the SourceLocation type and our text readers.

You can follow my progress at the GitHub repo - https://github.com/Antaris/RazorJS

The new ASP.NET vNext platform (ASP.NET 5) takes advantage of the new Roslyn compiler infrastructure. Roslyn is a managed-code implementation of the .NET compiler, but in reality it is so much more than that. Roslyn is the compiler-as-a-service, realising a set of services that have long been locked away. With this new technology, we have a new set of APIs which allow us to understand a lot more of the code we are writing.

In ASP.NET vNext, the whole approach to the project system has changed, creating a leaner project system built on the new Roslyn compiler services. The team have enabled some new scenarios with this approach, and one quite exciting scenario is meta-programming: developing programs that understand other programs - or, in our instance, writing code to understand our own code, and update/modify our projects at compile time. Meta-programming with your projects can be achieved using the new ICompileModule (github) interface, which the Roslyn compiler can discover at compile time and utilise both before and after your compilation:

public interface ICompileModule
{
    void BeforeCompile(BeforeCompileContext context);

    void AfterCompile(AfterCompileContext context);
}


The interesting thing about how ICompileModule is used is that it is included as part of your own target assembly, and can act on code within that assembly itself.

##### Example project

Let's look at a project structure:

/project.json
/compiler/preprocess/ImplementGreeterCompileModule.cs
/Greeter.cs


This is a very much simplified project, and what we are going to get it to do is implement the body of a method on our Greeter class:

public class Greeter
{
    public string GetMessage()
    {
        // Implement this method.
    }
}
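For reference, once the compile module we build below has run, the compiled Greeter should behave as though it had been written like this (the "Hello World!" return value is the one we generate later in the post):

```csharp
public class Greeter
{
    public string GetMessage()
    {
        return "Hello World!";
    }
}
```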


So first things first, we need to make sure we have a reference to the required assemblies, so edit the project.json file and add the following:

{
    "version": "1.0.0-*",
    "dependencies": {
        "Microsoft.CodeAnalysis.CSharp": "1.0.0-*",
        "Microsoft.Framework.Runtime.Roslyn.Abstractions": "1.0.0-*"
    },

    "frameworks": {
        "dnx451": {
            "frameworkAssemblies": {
                "System.Runtime": "4.0.10.0",
                "System.Text.Encoding": "4.0.0.0"
            }
        },
        "dnxcore50": {
            "dependencies": {
                "System.ComponentModel": "4.0.0-*",
                "System.IO": "4.0.10-*",
                "System.Reflection": "4.0.10-*",
                "System.Runtime": "4.0.10-*",
                "System.Runtime.Extensions": "4.0.10-*"
            }
        }
    }
}


I'm not going to talk about the frameworks section and how it works, as there is already a great deal written about this concept.

The package Microsoft.CodeAnalysis.CSharp brings in the Roslyn APIs for working with C# code, and Microsoft.Framework.Runtime.Roslyn.Abstractions is a contract assembly for bridging between your code and the Roslyn compiler when using the DNX. The DNX integration with Roslyn is what gives us the ability to use these techniques - you can check out the implementation here.

### How does ICompileModule work?

One of the interesting bits about how this all works is that when DNX is building your projects, it actually goes through a couple of stages (these are very broad descriptions):

1. Discover all applicable source files
2. Convert to SyntaxTree instances
3. Discover all references
4. Create a compilation

Now at this stage, the RoslynCompiler will go ahead and discover any ICompileModules, and if they exist, will create a !preprocess assembly with the same references as your project. It performs the same steps as above (1-4), but with just the code in compiler/preprocess/..., compiles it, and loads the assembly. The next step is to create an instance of the BeforeCompileContext class, which gives us information on the current main project compilation, its references and syntax trees. When the preprocess assembly types are found, they are instantiated and the BeforeCompile method is executed. At this stage, it sort of feels like Inception.

### Implementing our Compile Module

So, now we have some understanding of how compile modules work, let's use one to implement some code. We start off with our basic implementation:

using System.Diagnostics;
using System.Linq;

using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp.Syntax;
using Microsoft.Framework.Runtime.Roslyn;

using T = Microsoft.CodeAnalysis.CSharp.CSharpSyntaxTree;
using F = Microsoft.CodeAnalysis.CSharp.SyntaxFactory;
using K = Microsoft.CodeAnalysis.CSharp.SyntaxKind;

public class ImplementGreeterCompileModule : ICompileModule
{
    public void AfterCompile(AfterCompileContext context)
    {
        // NoOp
    }

    public void BeforeCompile(BeforeCompileContext context)
    {

    }
}


When working with syntax trees, I prefer to use shorthand namespace import aliases, so F is SyntaxFactory, T is the CSharpSyntaxTree type, and K is the SyntaxKind type. This shorthand lets me write more terse code which is still quite readable.

So, first things first: what do we need to do? Well, we need to find our Greeter class within our compilation, and we can use the context.Compilation.SyntaxTrees collection for this. We're first going to do a little digging to find it:

// Get our Greeter class.
var syntaxMatch = context.Compilation.SyntaxTrees
    .Select(s => new
    {
        Tree = s,
        Root = s.GetRoot(),
        Class = s.GetRoot().DescendantNodes()
            .OfType<ClassDeclarationSyntax>()
            .Where(cs => cs.Identifier.ValueText == "Greeter")
            .SingleOrDefault()
    })
    .Where(a => a.Class != null)
    .Single();


We keep a reference to the tree itself, its root and the matching class declaration, as we'll need these later on. Next up, let's find our GetMessage method within our class:

var tree = syntaxMatch.Tree;
var root = syntaxMatch.Root;
var classSyntax = syntaxMatch.Class;

// Get the method declaration.
var methodSyntax = classSyntax.Members
    .OfType<MethodDeclarationSyntax>()
    .Where(ms => ms.Identifier.ValueText == "GetMessage")
    .Single();


Of course, we're ignoring things like overloads here; these are things you would need to consider in production code, but as an example this naive approach will do. Now that we have our method, we need to implement its body. What I want to create is a simple return "Hello World!"; statement. You could shortcut this by using SyntaxFactory.ParseStatement("return \"Hello World!\";");, but let's try building it from scratch:

// Let's implement the body.
var returnStatement = F.ReturnStatement(
    F.LiteralExpression(
        K.StringLiteralExpression,
        F.Literal("Hello World!")));


So here we are creating a return statement using the SyntaxFactory type (via our F alias). The statement is built from the return keyword plus a string literal for "Hello World!". Note that F.Literal takes the raw string value; Roslyn adds the surrounding quotes to the literal token for us.
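If you want to sanity-check a hand-built node, you can render it back to source text; NormalizeWhitespace inserts the standard formatting trivia that the factory methods omit. A small standalone check (the class name here is made up for the example):

```csharp
using System;
using Microsoft.CodeAnalysis.CSharp;
using F = Microsoft.CodeAnalysis.CSharp.SyntaxFactory;
using K = Microsoft.CodeAnalysis.CSharp.SyntaxKind;

class RenderCheck
{
    static void Main()
    {
        var returnStatement = F.ReturnStatement(
            F.LiteralExpression(
                K.StringLiteralExpression,
                F.Literal("Hello World!")));

        // NormalizeWhitespace adds the spacing trivia between tokens.
        Console.WriteLine(returnStatement.NormalizeWhitespace().ToFullString());
    }
}
```

This prints return "Hello World!"; — a quick way to confirm the tree you've assembled is the code you meant to write.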

Next up, we need to start updating the syntax tree. The thing to remember at this point is that compilations, syntax trees, and their nodes are all immutable: they are read-only structures, so we can't simply add things to an existing tree; we have to create new trees and replace nodes. That's exactly what the next couple of lines do.

// Get the body block
var bodyBlock = methodSyntax.Body;

// Create a new body block, with our new statement.
var newBodyBlock = F.Block(new StatementSyntax[] { returnStatement });

// Get the revised root
var newRoot = (CompilationUnitSyntax)root.ReplaceNode(bodyBlock, newBodyBlock);

// Create a new syntax tree.
var newTree = T.Create(newRoot);


We're doing a couple of things here. First, we obtain the body block of the GetMessage method declaration. Next, we create a new block containing our returnStatement. We then go back to the root node and tell it to replace the bodyBlock node with the newBodyBlock node. It does this, but returns us a new root node; the original root node is left unchanged. To finish off, we create the new syntax tree from this revised root node.
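The immutability is easy to see in isolation. Here's a tiny standalone demonstration (unrelated to the Greeter example) showing that ReplaceNode hands back a fresh root and leaves the original untouched:

```csharp
using System;
using System.Linq;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

class ImmutabilityDemo
{
    static void Main()
    {
        var tree = CSharpSyntaxTree.ParseText("class C { void M() { } }");
        var root = tree.GetRoot();

        var oldBlock = root.DescendantNodes().OfType<BlockSyntax>().Single();
        var newBlock = SyntaxFactory.Block(SyntaxFactory.ParseStatement("return;"));

        // ReplaceNode never mutates root; it builds and returns a new root.
        var newRoot = root.ReplaceNode(oldBlock, newBlock);

        Console.WriteLine(ReferenceEquals(root, newRoot)); // False
        Console.WriteLine(root.ToFullString());            // original text, unchanged
    }
}
```

This is why every step in the module works with the return values, rather than expecting the existing tree to change underneath us.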

Lastly, we'll replace the current syntax tree with our new one:

// Replace the compilation.
context.Compilation = context.Compilation.ReplaceSyntaxTree(tree, newTree);


If you build now, even though the Greeter.GetMessage method does not currently have an implementation, it will build fine, because we've now dynamically implemented it using our ImplementGreeterCompileModule.

So our complete implementation looks like this:

using System.Diagnostics;
using System.Linq;

using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp.Syntax;
using Microsoft.Framework.Runtime.Roslyn;

using T = Microsoft.CodeAnalysis.CSharp.CSharpSyntaxTree;
using F = Microsoft.CodeAnalysis.CSharp.SyntaxFactory;
using K = Microsoft.CodeAnalysis.CSharp.SyntaxKind;

public class ImplementGreeterCompileModule : ICompileModule
{
    public void AfterCompile(AfterCompileContext context)
    {
        // NoOp
    }

    public void BeforeCompile(BeforeCompileContext context)
    {
        // Uncomment this to step through the module at compile time:
        //Debugger.Launch();

        // Get our Greeter class.
        var syntaxMatch = context.Compilation.SyntaxTrees
            .Select(s => new
            {
                Tree = s,
                Root = s.GetRoot(),
                Class = s.GetRoot().DescendantNodes()
                    .OfType<ClassDeclarationSyntax>()
                    .Where(cs => cs.Identifier.ValueText == "Greeter")
                    .SingleOrDefault()
            })
            .Where(a => a.Class != null)
            .Single();

        var tree = syntaxMatch.Tree;
        var root = syntaxMatch.Root;
        var classSyntax = syntaxMatch.Class;

        // Get the method declaration.
        var methodSyntax = classSyntax.Members
            .OfType<MethodDeclarationSyntax>()
            .Where(ms => ms.Identifier.ValueText == "GetMessage")
            .Single();

        // Let's implement the body.
        var returnStatement = F.ReturnStatement(
            F.LiteralExpression(
                K.StringLiteralExpression,
                F.Literal("Hello World!")));

        // Get the body block.
        var bodyBlock = methodSyntax.Body;

        // Create a new body block, with our new statement.
        var newBodyBlock = F.Block(new StatementSyntax[] { returnStatement });

        // Get the revised root.
        var newRoot = (CompilationUnitSyntax)root.ReplaceNode(bodyBlock, newBodyBlock);

        // Create a new syntax tree.
        var newTree = T.Create(newRoot);

        // Replace the compilation.
        context.Compilation = context.Compilation.ReplaceSyntaxTree(tree, newTree);
    }
}


I've left a commented-out Debugger.Launch() call in, which can be useful if you want to step through the compilation process: simply hit CTRL+SHIFT+B to build, and attach the debugger to the IDE instance when prompted.

### Where do we go from here?

There are a myriad of possibilities with this new technology; the area I am most interested in is component modularity. In future posts, I'll show you how you can use a compile module to discover modules in your code and/or NuGet packages and generate dependency registrations at compile time.

I've added the code as both a Gist and pushed it to a public repo on GitHub.