@Shazwazza

Shannon Deminick's blog all about web development

Umbraco passwords and ASP.NET Machine Keys

July 3, 2017 04:26


This blog post is the result of a thread on Twitter which starts here: https://twitter.com/crumpled_jeavon/status/880522105795870720 and works its way into confusion. Suffice to say I can't answer these questions in 140 chars, so here's a re-cap in the form of Q and A about Machine Keys and Umbraco. Please note that I am not an expert in hashing algorithms; some of these answers are based on my own research. Hope this clears things up!

How is the password hashed?

It is hashed in the same way that the ASP.NET Universal membership provider (DefaultMembershipProvider) and SqlMembershipProvider hash passwords, which by default uses the HMACSHA256 algorithm.

Jeffrey Schoemaker has been discussing updating Umbraco's default password hashing to use an even stronger hash algorithm, and I've recently added a new task on the issue tracker to research this. It really comes down to the fact that Microsoft does not offer the strongest available hashing methods in its standard .NET Framework, so we'll see where we end up.

Is the Machine Key used for password hashing?

Yes. In 7.6.0+ it is by default because useLegacyEncoding is false by default in this release. Prior to 7.6.0 the useLegacyEncoding value was true by default, purely to preserve backwards compatibility for other products, but you've been able to set it to false since 7.0.0 (IIRC). Since those products support the more secure password formats, this is now false by default and should be kept as false for new installs. By default the hashing algorithm is HMACSHA256, which uses the ASP.NET Machine Key to perform part of its hashing function. The algorithm is configurable, but you really shouldn't change it to a weaker hashing algorithm.

The HMAC part of this algorithm means it's a keyed algorithm, and by default the machine key is used to create that key. There doesn't seem to be any documentation or reference to this online that I can find, but from digging through the crypto source code (which isn't nice to look at) it seems that the default key gets set based on some logic in the RSACryptoServiceProvider class which reads some info from the machine key.
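
As a rough illustration of why the key matters (this is not the provider's exact internals, just a demo of a keyed hash), the same password and salt produce a completely different hash when the key changes:

using System;
using System.Security.Cryptography;
using System.Text;

public static class KeyedHashDemo
{
    //illustrative only - not the actual membership provider implementation
    public static string Hash(string password, byte[] salt, byte[] key)
    {
        using (var hmac = new HMACSHA256(key))
        {
            var passwordBytes = Encoding.Unicode.GetBytes(password);
            var all = new byte[salt.Length + passwordBytes.Length];
            Buffer.BlockCopy(salt, 0, all, 0, salt.Length);
            Buffer.BlockCopy(passwordBytes, 0, all, salt.Length, passwordBytes.Length);
            return Convert.ToBase64String(hmac.ComputeHash(all));
        }
    }

    public static void Main()
    {
        var salt = new byte[16]; //fixed salt just for the demo
        var withKeyA = Hash("p@ssw0rd", salt, Encoding.UTF8.GetBytes("machine-key-A"));
        var withKeyB = Hash("p@ssw0rd", salt, Encoding.UTF8.GetBytes("machine-key-B"));
        Console.WriteLine(withKeyA == withKeyB); //False - a new key invalidates every stored hash
    }
}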

Do machine keys change between environments?

If you explicitly generate and set your own machine key in your web.config then the answer is No.
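
If you need to generate one, there are plenty of online generators, or a few lines of code will do it. Here's a minimal sketch; the key lengths are common choices (64 bytes for validationKey, 32 bytes for decryptionKey) but verify them against the algorithms you configure:

using System;
using System.Security.Cryptography;

public static class MachineKeyGenerator
{
    //generates a cryptographically random hex string suitable for the
    //validationKey/decryptionKey attributes of the machineKey element
    public static string GenerateKey(int byteLength)
    {
        var bytes = new byte[byteLength];
        using (var rng = new RNGCryptoServiceProvider())
        {
            rng.GetBytes(bytes);
        }
        return BitConverter.ToString(bytes).Replace("-", string.Empty);
    }

    public static void Main()
    {
        Console.WriteLine("validationKey=\"{0}\"", GenerateKey(64));
        Console.WriteLine("decryptionKey=\"{0}\"", GenerateKey(32));
    }
}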

If you don't explicitly generate and set your own machine key then you will get an auto-generated machine key. The simple answer here is: in most cases, no, an auto-generated machine key will not change between environments.

Understanding when it will change between environments is a little more complicated and comes down to a combination of the IIS user, the IIS website virtual path (i.e. if you are running a site in a virtual directory), and a couple of settings at the machine config level: "IsolateApps" and "IsolateByAppId". Unless a server administrator specifically changes these settings, then chances are you won't be affected by different auto-generated machine keys for your site. If you are really keen, here's a series all about this topic and other cryptographic changes in .NET 4.5:

  • Part 1 – see the "A brief digression: auto-generated machine keys" section for info on auto-generated keys
  • Part 2 – in-depth info about the machine key and hashing changes
  • Part 3 – interesting info, especially with regards to PBKDF2 in the .NET Framework

Can I change my machine key?

No.

However, I realize in some cases you might need to change it, or move from an auto-generated machine key to an explicit one. If that is the case there will be a lot of manual work to do: if you simply change the machine key, or add an explicit one when you previously didn't have one, then your members/users will not be able to log in! Here are a few ideas you could use to try to work around this issue (NOTE: I've never tried any of these, they are just ideas I thought up):

  • Create a new password column in the database which will store the newly hashed password based on the new machine key
  • Create another website on the same server that doesn’t have a machine key and use that as a REST API (which you’ll also need to auth) to validate existing/old passwords and then re-hash the submitted password on the existing site and store that in the new password column
  • Set a new machine key and then send an email out to all users explaining they’ll need to reset their password. The token/link in the email that you’d use to validate the user can be generated based on their existing hashed password. You’d also have to add in some logic to your login screen to manually reset the password which would re-send this email
  • You could try to figure out the currently auto-generated machine key and use that to attempt to validate the old password format
  • If you are just changing to another explicitly defined machine key, you could try to use the old machine key stored somewhere else to validate the old password format

Can I change from useLegacyEncoding ‘true’ to ‘false’?

Not easily.

This will require a bit of extra work, just like the above question, but it's not quite as tricky as changing a Machine Key. When you have useLegacyEncoding='true', the machine key is not used, so you could do something like:

  • Create a new password column in the database which will store the newly hashed password in the new format
  • When a user logs in, check whether they have the new password format or not:
    • If not, validate the password against the old format, re-hash it with the new format, store that, and delete the old stored hash
    • If so, validate the password against the new format
  • Doing this would probably require inheriting from the current Umbraco membership provider and implementing the logic yourself – a rough sketch of the login flow follows
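
Here's a minimal sketch of that login flow. None of this is real Umbraco code – the record type and the hash/verify delegates are placeholders for whatever your provider actually uses – it just shows the order of operations:

using System;

public class MemberPasswordRecord
{
    public string LegacyHash { get; set; } //the old useLegacyEncoding hash
    public string NewHash { get; set; }    //the new password column
}

public static class PasswordMigrator
{
    public static bool ValidateAndMigrate(
        MemberPasswordRecord member,
        string password,
        Func<string, string, bool> verifyLegacy, //validates against the old format
        Func<string, string, bool> verifyNew,    //validates against the new format
        Func<string, string> hashNew)            //hashes with the new format
    {
        //already migrated - just validate with the new format
        if (member.NewHash != null)
            return verifyNew(password, member.NewHash);

        //not migrated - validate with the old format first
        if (!verifyLegacy(password, member.LegacyHash))
            return false;

        //the password is correct, so re-hash with the new format and delete the old hash
        member.NewHash = hashNew(password);
        member.LegacyHash = null;
        return true;
    }
}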

Do I need to generate and store a custom Machine Key if I am planning on Load Balancing?

Yes.

This is also mentioned in the Umbraco docs for load balancing

Do I need to generate and install a Machine Key before I run the Umbraco installer?

Yes.

This is because during the Umbraco installation it will hash and store the password for the admin account, and if you then add a Machine Key after the fact, you will no longer be able to log in.

Can Umbraco generate a custom Machine Key on install for me?

Yes!

But it doesn't do that right now. I created that functionality in a PR but we decided to remove the machine key generation part when it was accepted. We have decided to bring it back though, so it will be part of an upcoming Umbraco release; whether that is the 7.6.x series or the 7.7 series is still being decided.

Do Machine Keys exist in ASP.NET Core?

No.

Well, sort of, but not really. I haven't been able to research too much into this, but when speaking to a couple of MS devs about this a couple of years ago the answer was that the way machine keys work will be different. If keys need to be shared between apps in ASP.NET Core, the data protection APIs need to be used, and these keys would then be stored in the registry or the cloud (i.e. Azure Key Vault); here's an SO article on this.

 

Clear as mud?! ;)

OWIN Cookie Authentication with variable cookie paths

February 17, 2017 04:17

By default OWIN Cookie Authentication lets you specify a single configurable cookie path that does not change for the lifetime of the application, for example:

app.UseCookieAuthentication(new CookieAuthenticationOptions
{                
    CookiePath = "/Client1",
    CookieSecure = CookieSecureOption.SameAsRequest,
    CookieHttpOnly = true
});

This will only allow cookie authentication to occur when the request is for a path under /Client1. But what if you wanted this same cookie and cookie authentication provider to work for other variable paths? What if we wanted it to execute for multiple configured paths like /Client1, /Client2/Secured and /Client3/Private? Or what if we wanted this Cookie Authentication Provider to execute dynamically based on the request object?

ICookieManager to the rescue

The CookieAuthenticationOptions has a property, CookieManager, which you can set to any instance of ICookieManager. ICookieManager contains these methods: GetRequestCookie, AppendResponseCookie and DeleteCookie, but all we really need to worry about is GetRequestCookie. It turns out that the CookieAuthenticationHandler will detect if this method returns null, and if so it will just not continue trying to authenticate the request. To make things easy we'll inherit from the default OWIN ICookieManager implementation, which is ChunkingCookieManager. Although its methods are not marked as virtual we can still override them by explicitly implementing the ICookieManager.GetRequestCookie method (Pro Tip! You can always override a non-virtual method if you explicitly implement an interface's method).

using System;
using System.Linq;
using Microsoft.Owin;
using Microsoft.Owin.Infrastructure;

//inherit the default ChunkingCookieManager, re-declare ICookieManager and
//explicitly implement GetRequestCookie (the class name is just an example)
public class FilteredPathCookieManager : ChunkingCookieManager, ICookieManager
{
    string ICookieManager.GetRequestCookie(IOwinContext context, string key)
    {
        //TODO: Given what is in the context, we can check pretty much anything. If we don't want
        //this request to continue to be authenticated, just return null. Example:
        var toMatch = new[] {"/Client1", "/Client2/Secured", "/Client3/Private"};
        if (!toMatch.Any(m => context.Request.Uri.AbsolutePath.StartsWith(m, StringComparison.OrdinalIgnoreCase)))
        {
            return null;
        }

        //if we don't want to ignore it then continue as normal:
        return base.GetRequestCookie(context, key);
    }
}

The last step is to not worry about the CookiePath, since the custom ICookieManager.GetRequestCookie deals with whether the middleware receives a cookie value or not. The CookiePath for the CookieAuthenticationOptions can therefore remain the default of "/", and the wiring looks like the snippet below.
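
Putting it together (using the example FilteredPathCookieManager class from above):

app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    //the custom cookie manager decides, per request, whether the auth cookie is read at all
    CookieManager = new FilteredPathCookieManager(),
    //CookiePath stays at the default of "/"
    CookieSecure = CookieSecureOption.SameAsRequest,
    CookieHttpOnly = true
});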

We've been doing this in Umbraco CMS core for quite some time; I meant to blog about it then but just didn't find the time. In Umbraco we have a few custom request paths that we need authenticated with our custom back office cookie; for reference, the BackOfficeCookieManager source is found here: https://github.com/umbraco/Umbraco-CMS/blob/dev-v7/src/Umbraco.Web/Security/Identity/BackOfficeCookieManager.cs

Smidge 2.0 alpha is out

December 30, 2016 03:39

What is Smidge? Smidge is a lightweight runtime bundling library (CSS/JavaScript file minification, combination, compression) for ASP.NET Core.

If you've come from ASP.NET 4.5 you would have been familiar with the bundling/minification API and other bundling options like ClientDependency, but that is no longer available in ASP.NET Core; instead it is advised to do all the bundling and pre-processing that you need as part of your build process… which certainly makes sense! So why create this library? A few reasons: some people just want a very simple bundling library and don't want to worry about Gulp, Grunt or WebPack; in a lot of cases the overhead of runtime processing is not going to make any difference; and lastly, if you have created something like a CMS that dynamically loads in assets from 3rd party packages or plugins, you need a runtime bundler since those things don't exist at build time.

Over the past few months I've been working on some enhancements to Smidge and have found a bit of time to get an alpha released. There are loads of great new features in Smidge 2.0! You can install it via NuGet; it targets .NET Standard 1.6 and .NET Framework 4.5.2:

PM> Install-Package Smidge -Pre

New to Smidge?

It's easy to get started with Smidge and there's lots of docs available on GitHub that cover installation, configuration, creating bundles and rendering them.
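
If you just want a feel for the shape of the API before reading the docs, here's a minimal sketch pieced together from the examples later in this post (the bundle name and file path are made up):

//In Startup.ConfigureServices - register Smidge, passing your config:
services.AddSmidge(_config);

//In Startup.Configure - declare a bundle:
app.UseSmidge(bundles =>
{
    bundles.Create("site-js", WebFileType.Js, "~/Js/Site");
});

//In a Razor view, render the bundle's script tag(s) via the injected SmidgeHelper:
//@await SmidgeHelper.JsHereAsync("site-js")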

New Features

Here’s a list of new features complete with lots of code examples

Customizable Debug and Production options

https://github.com/Shazwazza/Smidge/issues/58

Prior to version 2.0, you could only configure aspects of the Production options, and the Debug assets returned were just the raw static files. With 2.0 you have full control over how your assets are processed in both Debug and Production configurations. For example, if you wanted, you could have your assets combined but not minified in Debug mode. This also allows non-native web assets such as TypeScript to have their pre-processors run and work in Debug mode.

Example:

services.AddSmidge(_config)
    .Configure<SmidgeOptions>(options =>
    {
        //set the default e-tag options for Debug mode
        options.DefaultBundleOptions.DebugOptions.CacheControlOptions.EnableETag = false;
    });

Fluent syntax for declaring/configuring bundles

https://github.com/Shazwazza/Smidge/issues/55

If you want to customize Debug or Production options per bundle, you can do so with a fluent syntax, for example:

app.UseSmidge(bundles =>
{                
    //For this bundle, enable composite files for Debug mode, enable the file watcher so any changes
    //to the files are automatically re-processed and cache invalidated, disable cache control headers
    //and use a custom cache buster. You could of course use the .ForProduction options too 
    bundles.Create("test-bundle-2", WebFileType.Js, "~/Js/Bundle2")
        .WithEnvironmentOptions(BundleEnvironmentOptions.Create()
                .ForDebug(builder => builder
                    .EnableCompositeProcessing()
                    .EnableFileWatcher()
                    .SetCacheBusterType<AppDomainLifetimeCacheBuster>()
                    .CacheControlOptions(enableEtag: false, cacheControlMaxAge: 0))
                .Build()
        );                
});

Customizable Cache Buster

https://github.com/Shazwazza/Smidge/issues/51

In version 1.0 the only cache busting mechanism was Smidge's version property, which is set in config. In 2.0, Smidge allows you to control how cache busting works at both a global and a bundle level. 2.0 ships with 2 ICacheBuster types:

  • ConfigCacheBuster – the default and uses Smidge’s version property in config

  • AppDomainLifetimeCacheBuster – if enabled will mean that the server/browser cache will be invalidated on every app domain recycle

If you want a different behavior, you can define your own ICacheBuster, add it to the IoC container, and then use it globally or per bundle. For example:

//Set a custom MyCustomCacheBuster as the default one for Debug assets:
services.AddSmidge(_config)
    .Configure<SmidgeOptions>(options =>
    {
        options.DefaultBundleOptions.DebugOptions.SetCacheBusterType<MyCustomCacheBuster>();       
    });

//Set a custom MyCacheBuster as the cache buster for a particular bundle in debug mode:
bundles.Create("test-bundle-2", WebFileType.Js, "~/Js/Bundle2")
    .WithEnvironmentOptions(BundleEnvironmentOptions.Create()
            .ForDebug(builder => builder
                .SetCacheBusterType<MyCacheBuster>())
            .Build()
    );

Customizable cache headers

https://github.com/Shazwazza/Smidge/issues/48 

You can now control whether the ETag header is output, and you can control the value set for the max-age/s-maxage/Expires headers, at a global or bundle level. For example:

//This would set the max-age header for this bundle to expire in 5 days
bundles.Create("test-bundle-5", WebFileType.Js, "~/Js/Bundle5")
    .WithEnvironmentOptions(BundleEnvironmentOptions.Create()
            .ForProduction(builder => builder                                
                .CacheControlOptions(enableEtag: true, cacheControlMaxAge: (5 * 24)))
            .Build()
    );

Callback to customize the pre-processor pipeline per web file

https://github.com/Shazwazza/Smidge/issues/59

This is handy in case you want to modify the pipeline for a given web file at runtime based on some criteria, for example:

services.AddSmidge(_config)
    .Configure<SmidgeOptions>(options =>
    {
        //set the callback
        options.PipelineFactory.OnGetDefault = GetDefaultPipelineFactory;
    });

//The GetDefaultPipelineFactory method could do something like modify the default pipeline to use Nuglify for JS processing:

private static PreProcessPipeline GetDefaultPipelineFactory(WebFileType fileType, IReadOnlyCollection<IPreProcessor> processors)
{
    switch (fileType)
    {
        case WebFileType.Js:
            return new PreProcessPipeline(new IPreProcessor[]
            {
                processors.OfType<NuglifyJs>().Single()
            });                
    }
    //returning null will fallback to the logic defined in the registered PreProcessPipelineFactory
    return null;
}

File watching with automatic cache invalidation

https://github.com/Shazwazza/Smidge/pull/42 

During the development process it would be nice to be able to test composite files but have them auto re-process and invalidate the cache whenever one of the source files changes… in 2.0 this is possible!  You can enable file watching at the global level or per bundle. Example:

//Enable file watching for all files in this bundle when in Debug mode
bundles.Create("test-bundle-7",
    new CssFile("~/Js/Bundle7/a1.js"),
    new CssFile("~/Js/Bundle7/a2.js"))
    .WithEnvironmentOptions(BundleEnvironmentOptions.Create()
            .ForDebug(builder => builder.EnableFileWatcher())
            .Build()
    );

What’s next?

This is an alpha release since there’s a few things that I need to complete. Most are already done but I just need to make Nuget packages for them:

More pre-processors

I've enabled support for a Nuglify pre-processor for both CSS and JS (Nuglify is a fork of the Microsoft Ajax Minifier for ASP.NET Core plus additional features). I've also enabled support for an Uglify NodeJs pre-processor, which uses Microsoft.AspNetCore.NodeServices to invoke Node.js from ASP.NET and run the JS version of Uglify. I just need to get these on NuGet but haven't got around to that yet.

A quick note on minifier performance

Though Nuglify and Uglify have a better minification engine (better/smarter size reduction) than JsMin because they create an AST (Abstract Syntax Tree) to perform their processing, they are actually much slower and consume more resources than JsMin. Since Smidge is a runtime bundling engine, it's generally important to ensure that the bundling/minification is performed quickly. Smidge has strict caching, so the bundling/minification will only happen once (depending on the ICacheBuster you are using), but it is still recommended to understand the performance implications of replacing JsMin with another minifier. I've put together some benchmarks (NOTE: a smaller Minified % is better):

Method            Median          StdDev     Scaled   Scaled-SD   Minified %   Gen 0   Gen 1   Gen 2   Bytes Allocated/Op
JsMin                10.2008 ms   0.3102 ms    1.00     0.00        51.75%       -       -       -          155,624.67
Nuglify              69.0778 ms   0.0180 ms    6.72     0.16        32.71%     53.00   22.00   15.00     4,837,313.07
JsServicesUglify  1,548.3951 ms   7.6388 ms  150.95     3.73        32.63%      0.97     -       -          576,056.55
The last benchmark may be a bit misleading because the processing is done via NodeJs, which executes in a separate process, so I'm unsure if the actual memory usage of that can be properly captured by BenchmarkDotNet – but you can see its speed is much slower.

Thanks!

Big thanks to @dazinator for all the help, recommendations, testing, feedback, etc… and for the rest of the community for filing bugs, questions, and comments. Much appreciated :)

FCN (File Change Notification) Viewer for ASP.NET

December 19, 2016 05:04

This is a follow-up to another article I previously wrote about FCN in ASP.NET and how it affects your application, performance, etc…

Since that post, I've discovered a few more tidbits about FCN and application restarts and have decided to release a Nuget package that you can install to generate a report of all file/directory change monitors that ASP.NET creates.

The code for the FCN report generator lives on GitHub and you can install it via nuget:

PM> Install-Package FCNViewer

Once that is installed you’ll get a readme on how to enable it, basically just this in your WebApi startup/route config:

config.Routes.MapFcnViewerRoute();

Then you can navigate to /fcn and you’ll get a nice report showing all of the files/folders being watched by ASP.NET.

FCN modes & app domain restarts

I won’t go into full details about all the FCN modes again (you can find those details on my previous post) but I want to provide some info about “Single” vs “NotSet” (the default).

When the default ("NotSet") is used, ASP.NET will create a directory monitor inside every folder that the ASP.NET runtime accesses. This includes accessing a folder to return an asset request like CSS or JS, rendering a Razor file or reading from a data file. Basically, ASP.NET has the potential to create a directory monitor for every directory you have in your site. The good part about the default mode is that each directory monitor has its own buffer to manage the files it is watching, so there's less chance these buffers can overflow. The problem with the default is file system performance, especially if you are hosting your site from a remote file server – the more directory monitors that are created, the more strain is put on your file server/network, which could cause unwanted app domain restarts.

When set to "Single", ASP.NET will create one directory monitor, and this single directory monitor will be used to monitor all folders that ASP.NET accesses. This means there is a single buffer used for monitoring all of the files and folders. This buffer is larger than the buffer used for each individual monitor created when the default is used; however, the problem with "Single" is that if you have tons of folders there's a higher chance this buffer can overflow, which could cause unwanted app domain restarts. The good part about "Single" is that it's much better for file system performance, especially when you are hosting your site from a remote file server.

As you can see there’s no perfect scenario and both can cause unwanted app domain restarts. These restarts will end up in your logs with very peculiar reasons like:

Application shutdown. Details: ConfigurationChange
_shutDownMessage=Overwhelming Change Notification in 
    HostingEnvironment initiated shutdown
    CONFIG change
    Overwhelming Change Notification in D:\inetpub\test
    CONFIG change
    Change Notification for critical directories.
    Overwhelming Change Notification in bin
    Change Notification for critical directories.
    Overwhelming Change Notification in App_LocalResources
    CONFIG change
    CONFIG change
    CONFIG change
    CONFIG change

Or other strange “ConfigurationChange” reasons that might tell you that all sorts of files have been changed – but you definitely know that is not the cause. The cause of this is very likely FCN issues.

Files outside of the web root

An interesting bit of info is that ASP.NET won't create directory/file monitors for files that exist outside of the web root, even if ASP.NET accesses them. I'd recommend, where possible, that you store any sort of cache files, data files, etc… that ASP.NET doesn't need to serve up as static file requests outside of the web root. This is actually the default way of working in ASP.NET Core too. As an example, you may have heard of or use Image Processor, which creates quite a substantial number of cache folders for processed images. These files by default exist in /App_Data/cache, and since that is in the web root, ASP.NET will create a directory watcher for each folder in there… and there can be literally thousands if you use Image Processor extensively! If you are using FCN "Single" mode and have that many folders in the Image Processor cache, there's a good chance that you'll have app domain restart issues – though changing it to "NotSet" could result in serious file system performance issues. The good news for Image Processor is that I created a PR to allow for storing its cache outside of the web root: https://github.com/JimBobSquarePants/ImageProcessor/pull/521 so that functionality should be available in an upcoming release soon.

FCN & ASP.NET CacheDependency

Another thing to be aware of: if you use ASP.NET's cache (i.e. HttpRuntime.Cache or HttpContext.Cache or MemoryCache) and you create cache dependencies on specific files, this effectively goes straight to the underlying FCN engine of ASP.NET, and it will create the same file/directory monitors that ASP.NET creates by default… even if these files are stored outside of the web root! So if you are creating a ton of CacheDependency objects, you will be creating a ton of FCN directory/file monitors, which could be directly causing FCN issues with app domain restarts.
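
For example, an innocuous-looking cache insert like this registers a file change monitor with the FCN engine (a sketch; LoadLookupData is a placeholder for your own loading logic):

using System.Web;
using System.Web.Caching;

var filePath = HttpContext.Current.Server.MapPath("~/App_Data/lookup-data.xml");

//the CacheDependency causes ASP.NET to create a file change monitor for this
//path - even when the file lives outside of the web root
HttpContext.Current.Cache.Insert(
    "lookup-data",
    LoadLookupData(filePath), //placeholder
    new CacheDependency(filePath));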

The FCN Report

At least with this report viewer you can see how many files and folders are being watched. It’s important to know that these watchers are created lazily whenever ASP.NET accesses files so on first load you might not see too many but if you start browsing around your site, you’ll see the number grow.

You can also modify the default route by using an overload:

config.Routes.MapFcnViewerRoute("myfcnreport");

This can hopefully help debug these strange restart issues, give you an idea of how much ASP.NET is actually watching and show you how the FCN Modes affect these monitors.

Umbraco CLI running on ASP.NET Core

October 26, 2016 14:13

TL;DR I’ve got Umbraco (the Core part) running on .NET Core (not just a .NET Core CLI wrapping a non .NET Core Umbraco). See below for a quick video of it working on Ubuntu and simple instructions on how to get it running yourself.

Over the past couple of years I've slowly been working on getting Umbraco to run on ASP.NET Core. Unlike many other ASP.NET frameworks and products that have rewritten their apps in ASP.NET Core, I've gone a different path and have updated Umbraco's existing code to compile against both .Net Framework and .Net Core. This is a necessary transition for the Umbraco codebase; we don't plan on rewriting Umbraco, just updating it to play nicely with both .Net Framework and .Net Core.

During my talk at CodeGarden this year I spoke about the Future of Umbraco Core. An important aspect of that talk was the fact that we need to build & release Umbraco version 8 before we can consider upgrading all of Umbraco to work with ASP.NET Core. A primary reason for this is that we need to remove all of the legacy code from Umbraco (including plenty of old Webforms views) and update plenty of other things, such as old libraries that are simply not going to work with ASP.NET Core.

I have been doing all of the work on updating Umbraco to work with ASP.NET Core on a fork on GitHub. It's been a very tedious process, made even more tedious by the constant changes to ASP.NET Core over the last 2 years. I started this whole process by modifying the VS sln file to exclude all of the projects and only include the Umbraco.Core project, then starting with the most fundamental classes. I'd include one class at a time using the project.json file, compile, make changes if required until it built, include the next class, rinse/repeat. I did this up until the point where I could compile the Umbraco.Core project including Umbraco's ApplicationContext object and CoreBootManager. This basically meant I had everything I needed in order to bootstrap Umbraco and perform the business logic operations for Umbraco on ASP.NET Core :)

I did start looking at updating the Umbraco.Web project, but this is going to be a much more involved process due to the way that MVC and WebApi have changed in ASP.NET Core. It is worth noting that I do have the routing working, even things like hijacked routes, SurfaceControllers, etc!

But before I continued down that road I wanted to see if I could get the core part of Umbraco running cross platform, so I tinkered around with making an Umbraco .NET Core CLI

… And it just works :)

On a side note, the Git branch that this lives in is a fork of Umbraco's current source code, and the branch is based on v8 so it is fully merge-able. This means that as we continue developing on v7 and v8, all of those fixes/changes will merge up into this branch.

Umbraco Interactive CLI

I didn't want to reinvent the wheel with a CLI argument parser, so a quick Google for what was available on .Net Core pointed me to a great framework that Microsoft had already built. What I wanted to make was an interactive CLI so I could either execute a single command line statement to perform an operation, or start the CLI and be prompted to work with Umbraco data until I exited. This Microsoft framework didn't quite support that OOTB, but it wasn't difficult to make it work the way I wanted without modifying its source. From there I wrote the code to boot Umbraco and started implementing some commands. Here are the commands and sub-commands so far (each command has a help option: -h), with a sketch of the wiring after the list:

  • db – Used to configure the database
    • install – used to install a new Umbraco db, schema and default data
    • connect – used to connect to an existing Umbraco db
  • schema – Used for manipulating schema elements
    • doctype – Used for Document type operations
      • create, del, list
      • groups – Used for property group operations (create + list)
      • props – Used for property type operations (create + list)
    • medtype – Used for Media type operations
      • create, del, list
      • groups – Used for property group operations (create + list)
      • props – Used for property type operations (create + list)
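
For reference, here's a sketch of how that kind of command tree gets wired up. I'm assuming the Microsoft framework mentioned above is Microsoft.Extensions.CommandLineUtils; the command bodies are placeholders:

using Microsoft.Extensions.CommandLineUtils;

public static class Program
{
    public static int Main(string[] args)
    {
        var app = new CommandLineApplication { Name = "umbraco" };
        app.HelpOption("-h");

        //the "db" command with an "install" sub-command, mirroring the list above
        app.Command("db", db =>
        {
            db.Description = "Used to configure the database";
            db.HelpOption("-h");

            db.Command("install", install =>
            {
                install.Description = "Installs a new Umbraco db, schema and default data";
                install.OnExecute(() =>
                {
                    //placeholder - run the actual install logic here
                    return 0;
                });
            });
        });

        return app.Execute(args);
    }
}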

See it in action

Cross Platform

I was very interested to see if this would work on Linux so I got Ubuntu up and running and put MySql on there and created a new db to use. Then I updated the solution to build a standalone .NET Core app, published that using

dotnet publish -c release -r ubuntu.14.04-x64 

and unzipped that on my Ubuntu installation. Then I tried it by running

./Umbraco.Test.Console

and… it worked!! There's something quite magical about the whole thing; it really was very easy to get this running in a non-Windows environment. Very impressive :)

Try it out!

You should be able to get this working pretty easily – hopefully on any OS – here’s the steps:

I’ve not tested any of this with OSX, so hopefully it will ‘just work’ there too

You'll need either an MS SQL or MySql empty database set up on your OS environment; note the connection string since you'll need to enter it in the CLI.

Clone the repo (It’s big, it’s the actual current Umbraco source):

git clone -b dev-v9 https://github.com/Shazwazza/Umbraco-CMS.git

Go to the project folder

cd Umbraco-CMS
cd src
cd Umbraco.Test.Console

Restore packages

dotnet restore

Build the project, depending on your OS

dotnet build -r win10-x64
dotnet build -r osx.10.10-x64
dotnet build -r ubuntu.16.04-x64

Publish it, depending on your OS

dotnet publish -c release -r win10-x64
dotnet publish -c release -r osx.10.10-x64
dotnet publish -c release -r ubuntu.16.04-x64

Run it, depending on your OS

bin\release\netcoreapp1.0\win10-x64\Umbraco.Test.Console.exe
./bin/release/netcoreapp1.0/osx.10.10-x64/Umbraco.Test.Console
./bin/release/netcoreapp1.0/ubuntu.16.04-x64/Umbraco.Test.Console

NOTE: On Linux you’ll probably have to mark it to be executable first by doing this:

chmod +x ./bin/release/netcoreapp1.0/ubuntu.16.04-x64/Umbraco.Test.Console

Next steps

I'm very excited about what has been achieved so far, but there's certainly a long way to go. As I mentioned above, getting v8 completed is a requirement for getting a version of Umbraco fully working with ASP.NET Core. During that development time I do plan on continuing to tinker with getting more stuff to work. I'd like to see some progress made with the web project: the first steps will require getting the website boot process working (in progress), and I think a good first milestone will be getting the installer working. From there, there's updating the controllers and authentication/authorization mechanisms for the back office, and then looking into actually getting content rendered on the front-end (this part is actually the easiest and mostly done already).

Installing .NET Core 1.01 on Ubuntu 16.10

October 23, 2016 23:03

TL;DR You'll need to manually install libicu55


Warning: Linux noob content below

I've been testing out some .NET Core cross-platform stuff and originally had been using Ubuntu 14.04 with .NET Core 1.0.0, and that all worked fine along with the installation instructions from https://www.microsoft.com/net/core#ubuntu. However, some of the latest tests I've been doing needed a MySQL version later than 5.5. It would seem that when I installed MySQL on Ubuntu 14.04 by executing apt-get install mysql-server I got 5.5, which was not compatible with what I needed. Attempting to upgrade gave me other issues, for which I would require a later version of Ubuntu. Long story short, I'm a Linux noob and I couldn't get anything to upgrade; I ended up executing all sorts of commands I didn't understand and probably shouldn't have, and ultimately killed my Linux install.

So a clean install of Ubuntu 16.04 it was… there's a catch though: you can choose between LTS (Long Term Support) or not. I chose not to, since it's a VM and I don't mind newer updates, etc… Turns out that was a bad idea with .NET Core installs! It would seem that once the non-LTS is installed you end up with 16.10, which ships newer versions of some required libraries, namely something called libicu, which is now on 57 instead of the required 55.

Trying to run the normal installation procedure from the web instructions mentioned above for 16.04 ended up telling me this:

sudo apt-get install dotnet-dev-1.0.0-preview2-003131

Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
dotnet-dev-1.0.0-preview2-003131 : Depends: dotnet-sharedframework-microsoft.netcore.app-1.0.1 but it is not going to be installed
E: Unable to correct problems, you have held broken packages.

So what the heck does that mean?! After some Googling, I tried to just install the dependency:

sudo apt-get install dotnet-sharedframework-microsoft.netcore.app-1.0.1

Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
dotnet-sharedframework-microsoft.netcore.app-1.0.1 : Depends: libicu55 (>=55.1.1~) but it is not installable
E: Unable to correct problems, you have held broken packages.

Ok, not much further but I gather that I need libicu55 installed, so let’s try:

sudo apt-get install libicu55

Package libicu55 is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source

E: Package libicu55 has no installation candidate

Manual libicu55 installation

I suppose normal linux users would probably just know that you need to download and install libicu55 manually. Well it took a little bit of research for me to figure that out, but here’s what to do:

  • head over to http://packages.ubuntu.com/en/xenial/amd64/libicu55/download
  • click one of the mirror links to download the file
  • in Terminal, head to the folder you downloaded it to (i.e. probably ~/Downloads)
  • install it using this command: sudo dpkg -i libicu55_55.1-7_amd64.deb (or whatever file name you saved it as)

That should install just fine, then you can run sudo apt-get install dotnet-dev-1.0.0-preview2-003131 and everything will be fine again :)

ASMX SOAP Webservices with abstract models without using XmlInclude

July 22, 2016 10:12

I'm hoping this post might be useful to some folks out there who might be stuck using old ASMX/SOAP webservices in ASP.Net. If you've tried to return an abstract or superclass from an ASMX webservice without using XmlInclude or SoapInclude, you'll get an error like:

System.InvalidOperationException: There was an error generating the XML document. ---> System.InvalidOperationException: The type MyAwesomeClass was not expected. Use the XmlInclude or SoapInclude attribute to specify types that are not known statically.
   at Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationWriter1.Write6_Item(String n, String ns, Item o, Boolean isNullable, Boolean needType)
   at Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationWriter1.Write10_Item(Object o)
   at Microsoft.Xml.Serialization.GeneratedAssembly.ItemSerializer.Serialize(Object objectToSerialize, XmlSerializationWriter writer)
   at System.Xml.Serialization.XmlSerializer.Serialize(XmlWriter xmlWriter, Object o, XmlSerializerNamespaces namespaces, String encodingStyle, String id)
   --- End of inner exception stack trace ---
   at System.Xml.Serialization.XmlSerializer.Serialize(XmlWriter xmlWriter, Object o, XmlSerializerNamespaces namespaces, String encodingStyle, String id)
   at System.Xml.Serialization.XmlSerializer.Serialize(TextWriter textWriter, Object o, XmlSerializerNamespaces namespaces)
   at System.Web.Services.Protocols.XmlReturnWriter.Write(HttpResponse response, Stream outputStream, Object returnValue)
   at System.Web.Services.Protocols.HttpServerProtocol.WriteReturns(Object[] returnValues, Stream outputStream)
   at System.Web.Services.Protocols.WebServiceHandler.WriteReturns(Object[] returnValues)
   at System.Web.Services.Protocols.WebServiceHandler.Invoke()

The normal way to work around this is to attribute your ASMX class with [XmlInclude(typeof(MyAwesomeClass))] and repeat this for every subclass that you might be returning. This essentially tells the SOAP handler what types it should expect to serialize so it can 'warm up' a list of XmlSerializers.
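
In other words, something like this (MyOtherAwesomeClass is just an invented second subclass for the example):

[XmlInclude(typeof(MyAwesomeClass))]
[XmlInclude(typeof(MyOtherAwesomeClass))] //...and so on for every subclass you might return
public class MyService : System.Web.Services.WebService
{
    [WebMethod]
    public MyAbstractClass GetStuff()
    {
        return new MyAwesomeClass();
    }
}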

The problem with this is that you need to know about all of these types up-front, but what if you have a plugin system where other developers can define their own types? There would be no way of knowing up-front what types to register so this approach will not work.

IXmlSerializable to the rescue

To work around this problem you can define a wrapper class for your abstract/superclass. Working with IXmlSerializable is pretty annoying, and I highly recommend this great article if you are going to use it, since one mistake can cause all sorts of problems.

The following class should work for any object. Also note the usage of the static dictionary to store references to created XmlSerializer instances, since these are expensive to create per type.

using System;
using System.Collections.Concurrent;
using System.Xml;
using System.Xml.Schema;
using System.Xml.Serialization;

public class SerializedObjectWrapper : IXmlSerializable
{
    /// <summary>
    /// The underlying Object reference that is being returned
    /// </summary>
    public object Object { get; set; }

    /// <summary>
    /// This is used because creating XmlSerializers are expensive
    /// </summary>
    private static readonly ConcurrentDictionary<Type, XmlSerializer> TypeSerializers 
        = new ConcurrentDictionary<Type, XmlSerializer>();

    public XmlSchema GetSchema()
    {
        return null;
    }

    public void ReadXml(XmlReader reader)
    {
        reader.MoveToContent();

        //Get the Item type attribute
        var itemType = reader.GetAttribute("ItemType");
        if (itemType == null) throw new InvalidOperationException("ItemType attribute cannot be null");
            
        //Ensure the type is found in the app domain
        var itemTypeType = Type.GetType(itemType);
        if (itemTypeType == null) throw new InvalidOperationException("Could not find the type " + itemType);

        var isEmptyElement = reader.IsEmptyElement;
                    
        reader.ReadStartElement();

        if (isEmptyElement == false)
        {
            var serializer = TypeSerializers.GetOrAdd(itemTypeType, t => new XmlSerializer(t));
            Object = serializer.Deserialize(reader);
            reader.ReadEndElement();
        }
    }

    public void WriteXml(XmlWriter writer)
    {
        var itemType = Object.GetType();
        var serializer = TypeSerializers.GetOrAdd(itemType, t => new XmlSerializer(t));
            
        //writes the object type so we can use that to deserialize later
        writer.WriteAttributeString("ItemType", 
            itemType.AssemblyQualifiedName ?? Object.GetType().ToString());

        serializer.Serialize(writer, Object);
    }
}

Usage

Here’s an example of the usage of the SerializedObjectWrapper class along with the example that would cause the above mentioned exception so you can see the difference:

public abstract class MyAbstractClass
{
}

public class MyAwesomeClass : MyAbstractClass
{
}

//WONT WORK
[WebMethod]
public MyAbstractClass GetStuff()
{
    return new MyAwesomeClass();
}

//WILL WORK
[WebMethod]
public SerializedObjectWrapper GetStuff()
{
    return new SerializedObjectWrapper
    {
        Object = new MyAwesomeClass()
    };
}

 

I know most people aren't using ASMX web services anymore, but in case you're stuck on an old project or have inherited one, this might be of use :)

ASP.NET Core application shutdown events

June 3, 2016 09:39

While porting an existing library to ASP.NET Core I had to find the equivalent functionality of IRegisteredObject, which I used for graceful shutdowns of running tasks in background threads. The newer & nicer approach to this in ASP.NET Core is Microsoft.AspNetCore.Hosting.IApplicationLifetime:

  /// <summary>
  /// Allows consumers to perform cleanup during a graceful shutdown.
  /// </summary>
  public interface IApplicationLifetime
  {
    /// <summary>
    /// Triggered when the application host has fully started and is about to wait
    /// for a graceful shutdown.
    /// </summary>
    CancellationToken ApplicationStarted { get; }

    /// <summary>
    /// Triggered when the application host is performing a graceful shutdown.
    /// Requests may still be in flight. Shutdown will block until this event completes.
    /// </summary>
    CancellationToken ApplicationStopping { get; }

    /// <summary>
    /// Triggered when the application host is performing a graceful shutdown.
    /// All requests should be complete at this point. Shutdown will block
    /// until this event completes.
    /// </summary>
    CancellationToken ApplicationStopped { get; }

    /// <summary>Requests termination of the current application.</summary>
    void StopApplication();
  }

Unlike the old IRegisteredObject, this interface is pretty clear about its functionality.

Registering a method to be called for any of the three operations is simple:

//register the application shutdown handler
applicationLifetime.ApplicationStopping.Register(DisposeResources);

protected void DisposeResources()
{
    //Cleanup stuff when the app is shutting down
}
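
As a sketch of the "running tasks in background threads" scenario mentioned above, a long-running loop can simply keep checking the token (DoBackgroundWork is a placeholder; assumes using System and System.Threading.Tasks):

var stopping = applicationLifetime.ApplicationStopping;

Task.Run(async () =>
{
    while (!stopping.IsCancellationRequested)
    {
        DoBackgroundWork(); //placeholder for the real work

        try
        {
            //the delay is cancelled immediately when shutdown starts
            await Task.Delay(TimeSpan.FromSeconds(30), stopping);
        }
        catch (TaskCanceledException)
        {
            //shutting down
        }
    }
});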

Obtaining an instance of IApplicationLifetime can be done in Startup.cs in the Configure method:

public void Configure(IApplicationBuilder app, IApplicationLifetime applicationLifetime)
{
    // start your app
}

Happy coding!

Custom Assembly loading with Asp.Net Core

March 15, 2016 14:58

Building a plugin system in Asp.Net Core is a dream compared to previous Asp.Net versions!

In previous versions it was not really feasible to load Assemblies located outside of the /bin folder for a web application. I battled with this concept quite a long time ago and although it’s sort of possible, the notion of having a plugin system that supported loading DLLs from outside of the /bin folder was riddled with hacks/problems and not really supported OOTB.

A large part of the issue has to do with something called an 'Assembly Load Context'. In traditional .Net there are 3 of these context types: "Load", "LoadFrom" and "Neither"; here's a very old but very relevant post about these contexts from Suzanne Cook. In traditional Asp.Net, the "Load" context is used as the default context, and it is managed by something called Fusion (.Net's normal Assembly Loader/Binder). The problem with this context is that it is difficult to load an assembly into it that isn't located in Fusion's probing paths (i.e. the /bin folder). If you load in an Assembly with a different Assembly Load Context and then try to mix its Types with the Types from the default context… you'll quickly see that it's not going to work.

The “Neither” context

Here is the Neither context definition as defined by Suzanne Cook:

If the user generated or found the assembly instead of Fusion, it's in neither context. This applies to assemblies loaded by Assembly.Load(byte[]) and Reflection Emit assemblies (that haven't been loaded from disk). Assembly.LoadFile() assemblies are also generally loaded into this context, even though a path is given (because it doesn't go through Fusion).

In Asp.Net Core (targeting CoreCLR), the default Assembly Load Context is the "Neither" context. This is a flexible context because it doesn't use Fusion and it allows for loading assemblies any way that you want – including loading an assembly from a byte array, from a path or by name. Since all of Asp.Net Core uses this context, all of the types loaded with it can talk to each other without the previous Asp.Net problems.

I would assume that Asp.Net Core targeting Desktop CLR would still operate the same as before and still have the 3 types of Assembly Load Context’s … Maybe someone over at Microsoft can elaborate on that one? (David Fowler… surely you know? :)

Finding referenced plugin assemblies

In many cases if you create a product that supports plugin types, developers will create plugins for your product and ship them via Nuget. This is a pretty standard approach since it allows developers that are using your product to install plugins from the Nuget command line or from within Visual Studio. In this case plugin types will be found in assemblies referenced by your application and will be automatically loaded. Asp.Net Core has an interface called Microsoft.Extensions.PlatformAbstractions.ILibraryManager that can be used to resolve your application's currently referenced 'Libraries' (i.e. Nuget packages), and each 'Library' returned exposes the Assemblies that it includes. Asp.Net MVC 6 has an even more helpful interface called Microsoft.AspNet.Mvc.Infrastructure.IAssemblyProvider, which returns a list of referenced assemblies filtered based on whether they reference a subset of MVC assemblies. The default implementation of IAssemblyProvider (DefaultAssemblyProvider) is extensible, and we can override its ReferenceAssemblies property in order to supply our own product assembly names instead of the MVC ones. This is perfect since it allows us to get a list of candidate assemblies that might contain plugins for your product:

public class ReferencePluginAssemblyProvider : DefaultAssemblyProvider
{
    //NOTE: The DefaultAssemblyProvider uses ILibraryManager to do the library/assembly querying
    public ReferencePluginAssemblyProvider(ILibraryManager libraryManager) : base(libraryManager)
    {
    }

    protected override HashSet<string> ReferenceAssemblies 
        => new HashSet<string>(new[] {"MyProduct.Web", "MyProduct.Core"});
}

Now, if you want to get a list of candidate assemblies that your application is referencing you could do:

//returns all assemblies that reference your product Assemblies
var candidateReferenceAssemblies = referencedPluginAssemblyProvider.CandidateAssemblies;

Finding and loading non-referenced plugin assemblies

This is where things get fun, since this is the type of thing that wasn't really feasible with traditional Asp.Net web apps. Let's say you have a plugin framework where a plugin is installed via your web app, not in Visual Studio, and is therefore not directly referenced in your project. For this example, the plugin is a self-contained collection of files and folders which could consist of: CSS, JavaScript, Razor views and Assemblies. This plugin model is pretty nice since installing the plugin just means dropping the plugin folder into the right directory in your app, and similarly, to uninstall it you can just remove the folder. The first step is to be able to load in these plugin Assemblies from custom locations. As an example, let's assume the web app has the following folder structure:

  • App Root
    • App_Plugins <—This will be the directory that contains plugin folders
      • MyPlugin1
        • bin <—by convention we’ll search for Assemblies in the /bin folder inside of a plugin
        • Views
      • MyPlugin2
        • bin <—by convention we’ll search for Assemblies in the /bin folder inside of a plugin
        • css
    • Views
    • wwwroot

IAssemblyLoader

The first thing we need is a 'Microsoft.Extensions.PlatformAbstractions.IAssemblyLoader'; this is the thing that will load an assembly into the Assembly Load Context based on an AssemblyName and the location of a DLL:

public class DirectoryLoader : IAssemblyLoader
{
    private readonly IAssemblyLoadContext _context;
    private readonly DirectoryInfo _path;

    public DirectoryLoader(DirectoryInfo path, IAssemblyLoadContext context)
    {
        _path = path;
        _context = context;
    }

    public Assembly Load(AssemblyName assemblyName)
    {
        return _context.LoadFile(Path.Combine(_path.FullName, assemblyName.Name + ".dll"));
    }

    public IntPtr LoadUnmanagedLibrary(string name)
    {
        //this isn't going to load any unmanaged libraries, just throw
        throw new NotImplementedException();
    }
}

IAssemblyProvider

Next up we'll need a custom IAssemblyProvider, but instead of using the one MVC ships with, this one will be totally custom in order to load and resolve the assemblies from the plugins' /bin folders. The following code should be pretty straightforward: the CandidateAssemblies property iterates over each /bin folder found inside a plugin's folder inside App_Plugins. For each /bin folder found it creates the DirectoryLoader mentioned above and loads in each DLL found, by its AssemblyName, into the current Assembly Load Context.

/// <summary>
/// This will return assemblies found in App_Plugins plugin's /bin folders
/// </summary>
public class CustomDirectoryAssemblyProvider : IAssemblyProvider
{
    private readonly IFileProvider _fileProvider;
    private readonly IAssemblyLoadContextAccessor _loadContextAccessor;
    private readonly IAssemblyLoaderContainer _assemblyLoaderContainer;

    public CustomDirectoryAssemblyProvider(
            IFileProvider fileProvider, 
            IAssemblyLoadContextAccessor loadContextAccessor, 
            IAssemblyLoaderContainer assemblyLoaderContainer)
    {
        _fileProvider = fileProvider;
        _loadContextAccessor = loadContextAccessor;
        _assemblyLoaderContainer = assemblyLoaderContainer;
    }

    public IEnumerable<Assembly> CandidateAssemblies
    {
        get
        {
            var content = _fileProvider.GetDirectoryContents("/App_Plugins");
            if (!content.Exists) yield break;
            foreach (var pluginDir in content.Where(x => x.IsDirectory))
            {
                var binDir = new DirectoryInfo(Path.Combine(pluginDir.PhysicalPath, "bin"));
                if (!binDir.Exists) continue;
                foreach (var assembly in GetAssembliesInFolder(binDir))
                {
                    yield return assembly;
                }
            }
        }
    }

    /// <summary>
    /// Returns assemblies loaded from /bin folders inside of App_Plugins
    /// </summary>
    /// <param name="binPath"></param>
    /// <returns></returns>
    private IEnumerable<Assembly> GetAssembliesInFolder(DirectoryInfo binPath)
    {
        // Use the default load context
        var loadContext = _loadContextAccessor.Default;

        // Add the loader to the container so that any call to Assembly.Load 
        // will call the load context back (if it's not already loaded)
        using (_assemblyLoaderContainer.AddLoader(
            new DirectoryLoader(binPath, loadContext)))
        {
            foreach (var fileSystemInfo in binPath.GetFileSystemInfos("*.dll"))
            {
                //// In theory you should be able to use Assembly.Load() here instead
                //var assembly1 = Assembly.Load(AssemblyName.GetAssemblyName(fileSystemInfo.FullName));
                var assembly2 = loadContext.Load(AssemblyName.GetAssemblyName(fileSystemInfo.FullName));
                yield return assembly2;
            }
        }
    }
}

That’s pretty much it! If you have an instance of CustomDirectoryAssemblyProvider then you can get Assembly references to all of the assemblies found in App_Plugins:

//returns all plugin assemblies found in App_Plugins
var candidatePluginAssemblies = customDirectoryAssemblyProvider.CandidateAssemblies;

Integrating non-referenced plugins/Assemblies with MVC

What if you had custom plugin types such as MVC Controllers or other MVC types? By default MVC only knows about assemblies that your project has references to, based on the DefaultAssemblyLoader. If we wanted MVC to know about Controllers that exist in a plugin not referenced by your project (i.e. in App_Plugins), then it's a case of registering a custom IAssemblyProvider in IoC which will get resolved by MVC. To make this super flexible we can create a custom IAssemblyProvider that wraps multiple other ones and allows you to pass in a custom referenceAssemblies filter, if you wanted to use this to resolve your own plugin types:

public class CompositeAssemblyProvider : DefaultAssemblyProvider
{
    private readonly IAssemblyProvider[] _additionalProviders;
    private readonly string[] _referenceAssemblies;

    /// <summary>
    /// Constructor
    /// </summary>
    /// <param name="libraryManager"></param>
    /// <param name="additionalProviders">
    /// If passed in will concat the assemblies returned from these 
    /// providers with the default assemblies referenced
    /// </param>
    /// <param name="referenceAssemblies">
    /// If passed in it will filter the candidate libraries to ones
    /// that reference the assembly names passed in. 
    /// (i.e. "MyProduct.Web", "MyProduct.Core" )
    /// </param>
    public CompositeAssemblyProvider(
        ILibraryManager libraryManager,
        IAssemblyProvider[] additionalProviders = null,
        string[] referenceAssemblies = null) : base(libraryManager)
    {
        _additionalProviders = additionalProviders;
        _referenceAssemblies = referenceAssemblies;
    }

    /// <summary>
    /// Uses the default filter if a custom list of reference
    /// assemblies has not been provided
    /// </summary>
    protected override HashSet<string> ReferenceAssemblies
        => _referenceAssemblies == null
            ? base.ReferenceAssemblies
            : new HashSet<string>(_referenceAssemblies);
    
    /// <summary>
    /// Returns the base Libraries referenced along with any DLLs/Libraries
    /// returned from the custom IAssemblyProvider passed in
    /// </summary>
    /// <returns></returns>
    protected override IEnumerable<Library> GetCandidateLibraries()
    {
        var baseCandidates = base.GetCandidateLibraries();
        if (_additionalProviders == null) return baseCandidates;
        return baseCandidates               
            .Concat(
            _additionalProviders.SelectMany(provider => provider.CandidateAssemblies.Select(
                x => new Library(x.FullName, null, Path.GetDirectoryName(x.Location), null, Enumerable.Empty<string>(),
                    new[] { new AssemblyName(x.FullName) }))));
    }
}

To register this in IoC you just need to make sure it’s registered after you register MVC so that it overrides the last registered IAssemblyProvider:

//Add MVC services
services.AddMvc();
//Replace the default IAssemblyProvider with the composite one
services.AddSingleton<IAssemblyProvider, CompositeAssemblyProvider>(provider =>
{
    //create the custom plugin directory provider
    var hosting = provider.GetRequiredService<IApplicationEnvironment>();
    var fileProvider = new PhysicalFileProvider(hosting.ApplicationBasePath);
    var pluginAssemblyProvider = new CustomDirectoryAssemblyProvider(
        fileProvider,         
        PlatformServices.Default.AssemblyLoadContextAccessor,
        PlatformServices.Default.AssemblyLoaderContainer);
    //return the composite one - this wraps the default MVC one
    return new CompositeAssemblyProvider(
        provider.GetRequiredService<ILibraryManager>(),
        new IAssemblyProvider[] {pluginAssemblyProvider});
});

 

You're all set! Now you have the ability to load in Assemblies from any location you want; you could even load them in as byte arrays from an external data source. What's great about all of this is that it just works, and you can integrate these external Assemblies into MVC.

Some things worth noting:

  • Parts of the assembly loading APIs are changing a bit in Asp.Net Core RC2: https://github.com/aspnet/Announcements/issues/149
  • The above code doesn’t take into account what happens if you load in the same Assembly from multiple locations. In this case, the last one in wins/is active AFAIK – I haven’t tested this yet but I’m pretty sure that’s how it works.
  • You may have some issues if you load in the same Assembly more than once from multiple locations if those Assemblies have different strong names or major versions applied to them – I also haven't tested this yet

AppVeyor and ASP.Net Core (Previously ASP.Net 5)

February 12, 2016 16:23

Last year I created a runtime Js/Css pre-processor for ASP.Net Core (previously ASP.Net 5) called "Smidge" and have been meaning to blog about how I integrated this with AppVeyor – to run my tests, build the project and output the Nuget files I need – so here goes…

The build script

I use Powershell for my projects' build scripts since it's reasonably easy to read, and the same script format has worked quite well for ASP.Net Core projects too. You can see the whole build file here. Here are the important things to note:

With AppVeyor (and probably other build servers), you need to ensure that it actually has the dnx version you need:

# ensure the correct version
& $DNVM install 1.0.0-rc1-update1

Next you need to make sure that the current process is using the version you need to build:

# use the correct version
& $DNVM use 1.0.0-rc1-update1

Then we need to use DNU to make sure that your project has everything it needs to build:

& $DNU restore "$ProjectJsonPath"

Lastly it’s just building and packaging the project:

& $DNU build "$ProjectJsonPath"
& $DNU pack "$ProjectJsonPath" --configuration Release --out "$ReleaseFolder"

The rest of the build file is normal Powershell bits.

The test script

I'm using xunit for unit tests in this project, and similarly to the build script I'm using a simple Powershell script to execute the tests on the build server; the test runner file is here. The important parts are just like the above: ensure the correct version is installed and being used by the current process, and make sure that the project has everything it needs to build. The last missing piece is to actually run the tests:

& $DNX -p "$TestsFolder" test

Where ‘test’ is a command defined in my project.json as part of my unit test project.

AppVeyor configuration

The good news is that there's really not a lot to set up; it's super easy. In your AppVeyor settings just go to the 'Build' section and tell it to execute the Powershell script with its build version information.

Then for the unit tests it's basically the same: click on the 'Tests' section and tell it to execute the Powershell test script.

And that’s pretty much it! The only other part I’ve had to setup is the paths to my Artifacts (Nuget files) based on the current build number.

Now whenever I commit, AppVeyor will execute the build script and test script and we can see the output.

And it's smart enough to know that the test runner executed is for unit tests, so all the unit test output shows up in the 'Tests' tab of the build.

Now that that's all set up, AppVeyor even gives you a handy Nuget feed that you can use to test your packages based on each build. This can be configured in the 'NuGet' settings section; for example, here's the Smidge feed: https://ci.appveyor.com/nuget/smidge

Smidge 1.0.0-RC3

It's worth noting that I've also released a new version of Smidge since I finally had some time to work on it. A couple of bugs are fixed in this release and there's a handy new feature too! You can see the release notes here: https://github.com/Shazwazza/Smidge/releases/tag/1.0.0-rc3. I've also updated a lot of the documentation; the main readme file was getting quite long, so I've moved all of the docs over to the project's Wiki on GitHub and condensed the readme to the most important bits. Have a look here: https://github.com/Shazwazza/Smidge