@Shazwazza

Shannon Deminick's blog all about web development

Auto upgrade your Nuget packages with Azure Pipelines or GitHub Actions

February 25, 2021 06:12

Before we start I just want to preface this with some 🔥 warnings 🔥

  • This works for me, it might not work for you
  • To get this working for you, you may need to tweak some of the code referenced
  • This is not under any support or warranty by anyone
  • Running the Nuget.exe update command outside of Visual Studio will overwrite your files, so there is a manual review process (more info below)
  • This is only for ASP.NET Framework using packages.config – Yes, I know that is super old and I should get with the times, but this has been an ongoing behind-the-scenes project of mine for a long time. When I need this for PackageReference projects or ASP.NET Core/5, I'll update it, but there's nothing stopping you from tweaking this to work for you
  • This only works for a specified csproj, not an entire sln – it could work for that, but I've not tested it and a few tweaks would be needed
  • This does not yet work for GitHub Actions, but the concepts are all here and could probably be converted very easily. UPDATE: This works now!

Now that’s out of the way …

How do I do it?

With a lot of PowerShell :) This also uses a few methods from the PowerShellForGitHub project.

The process is:

  • Run a pipeline/action on a schedule (i.e. each day)
  • This checks your source code for the installed version of a particular package
  • Then it checks with Nuget (using your Nuget.config file) to see what the latest stable version is
  • If there's a newer version:
    • Create a new branch
    • Run a Nuget update against your project
    • Build the project
    • Commit the changes
    • Push the changes
    • Create a PR for review

Azure Pipelines/GitHub Actions YAML

The only part of the YAML that needs editing is the variables. Here's what they mean:

  • ProjectFile = The relative path to your csproj that you want to upgrade
  • PackageFile = The relative path to your packages.config file for this project
  • PackageName = The Nuget package name you want upgraded
  • GitBotUser = The name used for the Git commits
  • GitBotEmail = The email used for the Git commits

For Azure Pipelines, these are also required:

Then there are some variables to assist with testing:

  • DisableUpgradeStep = If true will just check if there’s an upgrade available and exit
  • DisableCommit = If true will run the upgrade and will exit after that (no commit, push or PR)
  • DisablePush = If true will run the upgrade + commit and will exit after that (no push or PR)
  • DisablePullRequest = If true will run the upgrade + commit + push and will exit after that (no PR)

Each step in the YAML build more or less either calls Git commands or PowerShell functions. The PowerShell functions are loaded as part of a PowerShell module which is committed to the repository. This module's functions are auto-loaded by PowerShell because the first step configures the PSModulePath environment variable to include the custom module path; once that is in place, all functions exposed by the module are automatically available.

In these examples you’ll see that I’m referencing Umbraco Cloud names and that’s because I’m using this on Umbraco Cloud for my own website and the examples are for the UmbracoCms package. But this should in theory work for all packages!

Show me the code

The code for all of this is here in a new GitHub repo and here’s how you use it:

You can copy the folder structure in the repository as-is. Here's an example of what my site's repository folder structure is to make this work (everything except the src folder is in the GitHub repo above):

  • [root]
    • auto-upgrader.devops.yml (If you are using Azure Pipelines)
    • .github
      • workflows
        • auto-upgrader.gh.yml (If you are using GitHub Actions)
    • build
      • PowershellModules
        • AutoUpgradeFunctions.psd1
        • AutoUpgradeFunctions.psm1
        • AutoUpgradeFunctions
    • src
      • Shazwazza.Web
        • Shazwazza.Web.csproj
        • packages.config

All of the steps have descriptive display names and it should be reasonably self-documenting.

The end result is a PR; here's one that was generated by this process:

Nuget overwrites

Nuget.exe works differently from Nuget within Visual Studio's Package Manager Console. All of those special commands like Install-Package, Update-Package, etc… are PowerShell module commands built into Visual Studio and they are able to work with the context of Visual Studio. This allows those commands to be a little smarter when running Nuget updates and also allows legacy Nuget features, like running PowerShell scripts on install/update, to run. This script just uses Nuget.exe, which is less smart, especially for these legacy .NET Framework projects. As such, it will just overwrite all files in most cases (it does seem to detect file changes, but isn't always accurate).

With that 🔥 warning 🔥 it is very important to make sure you review the changed files in the PR and revert or adjust any changes you need before applying the PR.

You'll see a note in the PowerShell script about Nuget overwrites. There are other options that can be used like "Ignore" and "IgnoreAll", but all my tests have shown that for some reason those settings end up deleting a whole bunch of files, so the default overwrite setting is used.

Next steps

Get out there and try it! I would love some feedback on this if/when you get a chance to test it.

PackageReference support with .NET Framework projects could also be done (but IMO this is low priority) along with being able to upgrade the entire SLN instead of just the csproj.

Then perhaps some attempts at getting a .NET Core/5 version of this running. In theory that will be easier since it will mostly just be dotnet commands.

 

How to execute one Controller Action from another in ASP.NET 5

February 15, 2021 05:06

This will generally be a rare thing to do but if you have your reasons to do it, then this is how…

In Umbraco one valid reason to do this is due to how HTTP POSTs are handled for forms. Traditionally an HTML form will POST to a specific endpoint, that endpoint will handle the validation (etc), and if all is successful it will redirect to another URL, else it will return a validation result on the current URL (i.e. PRG POST/REDIRECT/GET). In the CMS world this may end up a little bit weird because URLs are dynamic. POSTs in theory should just POST to the current URL so that if there is a validation result, this is still shown on the current URL and not a custom controller endpoint URL. This means that there can be multiple controllers handling the same URL, one for GET, another for POST and that’s exactly what Umbraco has been doing since MVC was enabled in it many years ago. For this to work, a controller is selected during the dynamic route to handle the POST (a SurfaceController in Umbraco) and if successful, typically the developer will use: return RedirectToCurrentUmbracoPage (of type RedirectToUmbracoPageResult) or if not successful will use: return CurrentUmbracoPage (of type UmbracoPageResult). The RedirectToUmbracoPageResult is easy to handle since this is just a redirect but the UmbracoPageResult is a little tricky because one controller has just handled the POST request but now it wants to return a page result for the current Umbraco page which is handled by a different controller.

IActionInvoker

The concept is actually pretty simple and the IActionInvoker does all of the work. You can create an IActionInvoker from the IActionInvokerFactory which needs an ActionContext. Here’s what the ExecuteResultAsync method of a custom IActionResult could look like to do this:

public async Task ExecuteResultAsync(ActionContext context)
{
    // Change the route values to match the action to be executed
    context.RouteData.Values["controller"] = "Page";
    context.RouteData.Values["action"] = "Index";

    // Create a new context and execute the controller/action
    // Copy the action context - this also copies the ModelState
    var renderActionContext = new ActionContext(context)
    {
        // Normally this would be looked up via the EndpointDataSource
        // or using the IActionSelector
        ActionDescriptor = new ControllerActionDescriptor
        {
            ActionName = "Index",
            ControllerName = "Page",
            ControllerTypeInfo = typeof(PageController).GetTypeInfo(),
            DisplayName = "PageController.Index"
        }
    };

    // Get the factory
    IActionInvokerFactory actionInvokerFactory = context.HttpContext
                .RequestServices
                .GetRequiredService<IActionInvokerFactory>();

    // Create the invoker
    IActionInvoker actionInvoker = actionInvokerFactory.CreateInvoker(renderActionContext);

    // Execute!
    await actionInvoker.InvokeAsync();
}

That's pretty much the gist of it. The note about the ControllerActionDescriptor is important though: it's probably best not to manually create these since they are already created as part of your routing. They can be queried and resolved in a few different ways, such as interrogating the EndpointDataSource or using the IActionSelector. This execution will run the entire pipeline for the other controller, including all of its filters, etc…
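
As a side note, here is a rough sketch (not from the original post) of how an existing ControllerActionDescriptor could be resolved from the EndpointDataSource rather than constructed by hand; the controller/action names are just the ones from the example above:

// Inside ExecuteResultAsync, instead of manually creating the descriptor
ControllerActionDescriptor descriptor = context.HttpContext.RequestServices
    .GetRequiredService<EndpointDataSource>()
    .Endpoints
    .Select(e => e.Metadata.GetMetadata<ControllerActionDescriptor>())
    .Where(d => d != null)
    .First(d => d.ControllerName == "Page" && d.ActionName == "Index");

var renderActionContext = new ActionContext(context)
{
    ActionDescriptor = descriptor
};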

Allowing dynamic SupportedCultures in RequestLocalizationOptions

January 13, 2021 03:30

The documented usage of RequestLocalizationOptions in ASP.NET 5/Core is to assign a static list of SupportedCultures since ASP.NET is assuming you’ll know up-front what cultures your app is supporting. But what if you are creating a CMS or another web app that allows users to include cultures dynamically?

This isn’t documented anywhere but it’s certainly possible. RequestLocalizationOptions.SupportedCultures is a mutable IList which means that values can be added/removed at runtime if you really want.

Create a custom RequestCultureProvider

First thing you need is a custom RequestCultureProvider. The trick is to pass the RequestLocalizationOptions into its constructor so you can dynamically modify the SupportedCultures when required.

public class MyCultureProvider : RequestCultureProvider
{
    private readonly RequestLocalizationOptions _localizationOptions;
    private readonly object _locker = new object();

    // ctor with reference to the RequestLocalizationOptions
    public MyCultureProvider(RequestLocalizationOptions localizationOptions)
        => _localizationOptions = localizationOptions;

    public override Task<ProviderCultureResult> DetermineProviderCultureResult(HttpContext httpContext)
    {
        // TODO: Implement GetCulture() to get a culture for the current request
        CultureInfo culture = GetCulture(); 

        if (culture is null)
        {
            return NullProviderCultureResult;
        }

        lock (_locker)
        {
            // check if this culture is already supported
            var cultureExists = _localizationOptions.SupportedCultures.Contains(culture);

            if (!cultureExists)
            {
                // If not, add this as a supporting culture
                _localizationOptions.SupportedCultures.Add(culture);
                _localizationOptions.SupportedUICultures.Add(culture);
            } 
        }

        return Task.FromResult(new ProviderCultureResult(culture.Name));
    }
}

Add your custom culture provider

You can configure RequestLocalizationOptions in a few different ways, this example registers a custom implementation of IConfigureOptions<RequestLocalizationOptions> into DI

public class MyRequestLocalizationOptions : IConfigureOptions<RequestLocalizationOptions>
{
    public void Configure(RequestLocalizationOptions options)
    {
        // TODO: Configure other options parameters

        // Add the custom provider,
        // in many cases you'll want this to execute before the defaults
        options.RequestCultureProviders.Insert(0, new MyCultureProvider(options));
    }
}

Then just register these options: services.ConfigureOptions<MyRequestLocalizationOptions>();
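
For completeness, here's a minimal sketch of how this could be wired up in a Startup class, assuming the parameterless UseRequestLocalization overload that reads the options from DI:

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllersWithViews();

        // Registers the IConfigureOptions<RequestLocalizationOptions> above
        services.ConfigureOptions<MyRequestLocalizationOptions>();
    }

    public void Configure(IApplicationBuilder app)
    {
        // Picks up RequestLocalizationOptions (including the custom provider) from DI
        app.UseRequestLocalization();

        app.UseRouting();
        app.UseEndpoints(endpoints => endpoints.MapDefaultControllerRoute());
    }
}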

That’s it, now you can have dynamic SupportedCultures in your app!

How to change HttpRequest.IsSecureConnection

December 15, 2020 13:27

This post is for ASP.NET Framework, not ASP.NET Core

This is an old problem that is solved in dozens of different ways…

Typically in an ASP.NET Framework site when you need to check if the request is running in HTTPS you would do something like HttpContext.Request.IsSecureConnection. But this value won't return the anticipated value if your site is running behind a proxy or load balancer which isn't forwarding HTTPS traffic and instead is forwarding HTTP traffic. This is pretty common and the way people deal with it is to check some special headers, which will vary depending on the load balancer you are using. The primary one is HTTP_X_FORWARDED_PROTO but there are others too, like X-Forwarded-Protocol, X-Forwarded-Ssl and X-Url-Scheme, and the expected values for each are slightly different too.

So if you can’t rely on HttpContext.Request.IsSecureConnection then you’ll probably make an extension method, or better yet an abstraction to deal with checking if the request is HTTPS. That’s totally fine but unfortunately you can’t rely on all of the dependencies your website uses to deal with this scenario. You’d also need to be able to configure them all to possibly handle some weird header scheme that your proxy/load balancer uses.

Aha! HttpRequestBase is abstract

If you are using MVC you'll probably know that the HttpContext.Request within a controller is HttpRequestBase, which is an abstract class where it's possible to re-implement IsSecureConnection. Great, that sounds like a winner! Unfortunately this will only get you as far as being able to replace that value within your own controllers; it still will not help you with the dependencies your website uses.

And controllers aren’t the only thing you might need to worry about. You might have some super old ASP.NET Webforms or other code lying around using HttpContext.Current.Request.IsSecureConnection which is based on the old HttpRequest object which cannot be overridden.

So how does the default HttpContext.Current get this value?

The HttpContextBase your MVC controllers receive is actually an HttpContextWrapper, which simply wraps HttpContext.Current. As for HttpContext.Current, this is the source of all of this data and with a bit (a LOT) of hacking you can actually change this value.

The HttpContext.Current is actually a settable singleton value. The constructor of HttpContext has two overloads: one that takes in a request/response object and another that accepts something called HttpWorkerRequest. So it is possible (sort of) to construct a new HttpContext instance that wraps the old one … so can we change this value? Yes!

HttpWorkerRequest is an abstract class and all methods are overridable so if we can access the underlying default HttpWorkerRequest (which is IIS7WorkerRequest) from the current context, we could wrap it and return anything we want for IsSecureConnection. This is not a pretty endeavor but it can be done.

Wrapping the HttpContext

I created a gist to show this. The usage is easy, just create an IHttpModule and replace the HttpContext by using a custom HttpWorkerRequest instance that accepts a delegate to return whatever you want for IsSecureConnection:
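
The gist isn't reproduced here, but a rough sketch of the idea looks like the following (the module and wrapper names are hypothetical, the forwarded-header check is just an example, and the real gist does more work):

public class SecureConnectionModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += (sender, e) =>
        {
            HttpContext current = ((HttpApplication)sender).Context;

            // Get the real worker request (IIS7WorkerRequest) via the
            // IServiceProvider implementation on HttpContext
            var inner = (HttpWorkerRequest)((IServiceProvider)current)
                .GetService(typeof(HttpWorkerRequest));

            // Wrap it and supply our own logic for IsSecureConnection,
            // e.g. based on a forwarded protocol header
            var wrapped = new DelegateWorkerRequest(inner,
                isSecure: () => string.Equals(
                    current.Request.Headers["X-Forwarded-Proto"], "https",
                    StringComparison.OrdinalIgnoreCase));

            // Replace the ambient context with one based on the wrapper
            HttpContext.Current = new HttpContext(wrapped);
        };
    }

    public void Dispose() { }
}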

 

Now to create the wrapper, there’s a ton of methods and properties to override:
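
As an illustration (not the full gist), the wrapper delegates everything to the inner worker request and only changes IsSecureConnection – note that a real implementation must also forward the many other virtual members:

public class DelegateWorkerRequest : HttpWorkerRequest
{
    private readonly HttpWorkerRequest _inner;
    private readonly Func<bool> _isSecure;

    public DelegateWorkerRequest(HttpWorkerRequest inner, Func<bool> isSecure)
    {
        _inner = inner;
        _isSecure = isSecure;
    }

    // The one member we actually want to change
    public override bool IsSecureConnection() => _isSecure();

    // Everything else just delegates to the wrapped worker request
    public override string GetUriPath() => _inner.GetUriPath();
    public override string GetQueryString() => _inner.GetQueryString();
    public override string GetRawUrl() => _inner.GetRawUrl();
    public override string GetHttpVerbName() => _inner.GetHttpVerbName();
    public override string GetHttpVersion() => _inner.GetHttpVersion();
    public override string GetRemoteAddress() => _inner.GetRemoteAddress();
    public override int GetRemotePort() => _inner.GetRemotePort();
    public override string GetLocalAddress() => _inner.GetLocalAddress();
    public override int GetLocalPort() => _inner.GetLocalPort();
    public override string GetKnownRequestHeader(int index) => _inner.GetKnownRequestHeader(index);
    public override string GetUnknownRequestHeader(string name) => _inner.GetUnknownRequestHeader(name);
    public override string[][] GetUnknownRequestHeaders() => _inner.GetUnknownRequestHeaders();
    public override void SendStatus(int statusCode, string statusDescription) => _inner.SendStatus(statusCode, statusDescription);
    public override void SendKnownResponseHeader(int index, string value) => _inner.SendKnownResponseHeader(index, value);
    public override void SendUnknownResponseHeader(string name, string value) => _inner.SendUnknownResponseHeader(name, value);
    public override void SendResponseFromMemory(byte[] data, int length) => _inner.SendResponseFromMemory(data, length);
    public override void SendResponseFromFile(string filename, long offset, long length) => _inner.SendResponseFromFile(filename, offset, length);
    public override void SendResponseFromFile(IntPtr handle, long offset, long length) => _inner.SendResponseFromFile(handle, offset, length);
    public override void FlushResponse(bool finalFlush) => _inner.FlushResponse(finalFlush);
    public override void EndOfRequest() => _inner.EndOfRequest();

    // ... GetFilePath, GetPathInfo, ReadEntityBody, GetServerVariable and the
    // rest of the virtual members should be forwarded in the same way
}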

 

So what about OWIN?

OWIN is easy to take care of: its OwinContext.Request.IsSecure is purely based on whether or not the scheme is https, and the .Scheme property is settable:
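
For example, a tiny OWIN middleware registered early in the pipeline could look like this (the forwarded-header check here is an assumption – adjust it for whatever your proxy/load balancer sends):

app.Use(async (ctx, next) =>
{
    // If the proxy forwarded HTTPS traffic as HTTP, fix up the scheme so
    // OwinContext.Request.IsSecure reports the correct value
    if (string.Equals(ctx.Request.Headers["X-Forwarded-Proto"], "https",
        StringComparison.OrdinalIgnoreCase))
    {
        ctx.Request.Scheme = "https";
    }

    await next();
});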

 

Does this all actually work?

This was an experiment to see how possible this is and based on my rudimentary tests, yes it works. I have no idea if this will have some strange side effects but in theory it's all just wrapping what normally executes. There will be a small performance penalty because this is going to create a couple of new objects each request and use a reflection call, but I think that will be very trivial.

Let me know if it works for you!

Controller Scoped Model Binding in ASP.NET Core

May 12, 2020 01:51

Want to avoid [FromBody] attributes everywhere? Don’t want to use [ApiController] strict conventions? Don’t want to apply IInputFormatter’s globally?

ASP.NET Core MVC is super flexible but it very much caters towards configuring everything at a global level. Perhaps you are building a framework or library or a CMS in .NET Core? In which case you generally want to be as unobtrusive as possible so mucking around with global MVC configuration isn’t really acceptable. The traditional way of dealing with this is by applying configuration directly to controllers which generally means using controller base classes and attributes. This isn’t super pretty but it works in almost all cases from applying authorization/resource/action/exception/result filters to api conventions. However this doesn’t work for model binding.

Model binding vs formatters

Model binding comes in 2 flavors: formatters for deserializing the request body (like JSON) into models, and value providers for getting data from other places like the form body, query string, headers, etc… Both of these internally in MVC use model binders, though typically the components used for binding the request body are called formatters. The problem with formatters (which are of type IInputFormatter) is that they are only applied at the global level as part of MvcOptions, which are in turn passed along to a special model binder called BodyModelBinder. Working with IInputFormatter at the controller level is almost impossible.

There seems to be a couple options that look like you might be able to apply a custom IInputFormatter to a specific controller:

  • Create a custom IModelBinderProvider – this unfortunately will not work because the ModelBinderProviderContext doesn't provide the executing ControllerActionDescriptor, so you cannot apply this provider to certain controllers/actions (though this should be possible).
  • Assign a custom IModelBinderFactory to the controller explicitly by assigning ControllerBase.ModelBinderFactory in the controller's constructor – this unfortunately doesn't work because ControllerBase.ModelBinderFactory isn't used for body model binding

So how does [ApiController] attribute work?

The [ApiController] attribute does quite a lot of things and configures your controller in a very opinionated way. It almost does what I want and it somehow magically does this

[FromBody] is inferred for complex type parameters

That's great! It's what I want to do, but I don't want to use the [ApiController] attribute since it applies too many conventions and the only way to toggle these … is again at the global level :/ This also still doesn't solve the problem of applying a specific IInputFormatter to be used for the model binding, but it's a step in the right direction.

The way that the [ApiController] attribute works is by using MVC’s “application model” which is done by implementing IApplicationModelProvider.

A custom IApplicationModelProvider

Taking some inspiration from the way the [ApiController] attribute works we can have a look at the source of the application model that makes this happen: ApiBehaviorApplicationModelProvider. This basically assigns a bunch of IActionModelConventions: ApiVisibilityConvention, ClientErrorResultFilterConvention, InvalidModelStateFilterConvention, ConsumesConstraintForFormFileParameterConvention, ApiConventionApplicationModelConvention, and InferParameterBindingInfoConvention. The last one, InferParameterBindingInfoConvention, is the important one that magically makes complex type parameters bind from the request body (like JSON) like good old WebApi used to do.

So we can make our own application model to target our own controllers and use a custom IActionModelConvention to apply a custom body model binder:


public class MyApplicationModelProvider : IApplicationModelProvider
{
    public MyApplicationModelProvider(IModelMetadataProvider modelMetadataProvider)
    {
        ActionModelConventions = new List<IActionModelConvention>()
        {
            // Ensure complex models are bound from request body
            new InferParameterBindingInfoConvention(modelMetadataProvider),
            // Apply custom IInputFormatter to the request body
            new MyModelBinderConvention()
        };
    }

    public List<IActionModelConvention> ActionModelConventions { get; }

    public int Order => 0;

    public void OnProvidersExecuted(ApplicationModelProviderContext context)
    {
    }

    public void OnProvidersExecuting(ApplicationModelProviderContext context)
    {
        foreach (var controller in context.Result.Controllers)
        {
            // apply conventions to all actions if attributed with [MyController]
            if (IsMyController(controller))
                foreach (var action in controller.Actions)
                    foreach (var convention in ActionModelConventions)
                        convention.Apply(action);
        }
    }

    // returns true if the controller is attributed with [MyController]
    private bool IsMyController(ControllerModel controller)
        => controller.Attributes.OfType<MyControllerAttribute>().Any();
}
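
For completeness, the [MyController] marker attribute referenced above doesn't need to do anything special – a minimal version is just an empty marker:

[AttributeUsage(AttributeTargets.Class)]
public class MyControllerAttribute : Attribute
{
}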

And the custom convention:


public class MyModelBinderConvention : IActionModelConvention
{
    public void Apply(ActionModel action)
    {
        foreach (var p in action.Parameters
            // the InferParameterBindingInfoConvention must execute first,
            // which assigns this BindingSource, so if that is assigned
            // we can then assign a custom BinderType to be used.
            .Where(p => p.BindingInfo?.BindingSource == BindingSource.Body))
        {
            p.BindingInfo.BinderType = typeof(MyModelBinder);
        }
    }
}

Based on the above application model conventions, any controller attributed with our custom [MyController] attribute will have these conventions applied to all of its actions. With the above, any complex model that will be bound from the request body will use the IModelBinder type MyModelBinder, so here's how that implementation could look:


// inherit from BodyModelBinder - it does a bunch of magic like caching
// that we don't want to miss out on
public class MyModelBinder : BodyModelBinder
{
    // TODO: You can inject other dependencies to pass to GetInputFormatter
    public MyModelBinder(IHttpRequestStreamReaderFactory readerFactory)
        : base(GetInputFormatter(), readerFactory)
    {
    }

    private static IInputFormatter[] GetInputFormatter()
    {  
        return new IInputFormatter[]
        {
            // TODO: Return any IInputFormatter you want
            new MyInputFormatter()
        };
    }
}

The last thing to do is wire it up in DI:


services.TryAddSingleton<MyModelBinder>();            
services.TryAddEnumerable(
    ServiceDescriptor.Transient<IApplicationModelProvider,
    MyApplicationModelProvider>());
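
With that in place, any controller opted in with [MyController] binds complex parameters from the request body without [FromBody] attributes. A hypothetical example (the controller and model names here are made up):

[MyController]
public class ShippingController : ControllerBase
{
    // "model" is a complex type, so InferParameterBindingInfoConvention marks it
    // as BindingSource.Body and MyModelBinderConvention swaps in MyModelBinder,
    // which in turn runs the custom IInputFormatter
    [HttpPost("shipping")]
    public IActionResult Save(ShippingInfo model)
        => ModelState.IsValid ? Ok(model) : (IActionResult)BadRequest(ModelState);
}

public class ShippingInfo
{
    public string Address { get; set; }
    public string Country { get; set; }
}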

That’s a reasonable amount of plumbing!

It could certainly be simpler to configure a body model binder at the controller level but at least there's actually a way to do it. For a single controller this is quite a lot of work, but for a lot of controllers the MVC "application model" is quite brilliant! … it just took a lot of source code reading to figure that out :)

How to register MVC controllers shipped with a class library in ASP.NET Core

September 20, 2019 03:20

In many cases you'll want to ship MVC controllers, possibly views or tag helpers, etc… as part of your class library. To do this correctly you'll want to add your assembly to ASP.NET's "Application Parts" on startup. It's quite simple to do, but you might want to make sure you are not enabling all sorts of services that the user of your library doesn't need.

The common way to do this on startup is to have your own extension method to “Add” your class library to the services. For example:

public static class MyLibStartup
{
    public static IServiceCollection AddMyLib(this IServiceCollection services)
    {
        //TODO: Add your own custom services to DI

        //Add your assembly to the ASP.NET application parts
        var builder = services.AddMvc();
        builder.AddApplicationPart(typeof(MyLibStartup).Assembly);

        return services;
    }
}

This will work, but the call to AddMvc() is doing a lot more than you might think (also note that in ASP.NET Core 3 it's doing a similar amount of work). This call adds all of the services required for: authorization, controllers, views, tag helpers, Razor, API explorer, CORS, and more… This might be fine if your library requires all of these things, but unless the user of your library also wants all of them, in my opinion it's probably better to only automatically add the services that you know your library needs.

In order to add your assembly application part you need a reference to an IMvcBuilder or IMvcCoreBuilder, which you get by calling any of the extension methods that add the services you need. Which services you add will depend on what your library requires, but it's probably best to start with the lowest common feature set, which is a call to AddMvcCore(). The updated code might look like this:

//Add your assembly to the ASP.NET application parts
var builder = services.AddMvcCore();
builder.AddApplicationPart(typeof(MyLibStartup).Assembly);

From there you can add the other bits you need, for example, maybe you also need CORS:

//Add your assembly to the ASP.NET application parts
var builder = services.AddMvcCore().AddCors();
builder.AddApplicationPart(typeof(MyLibStartup).Assembly);
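
And if your class library also ships Razor views or tag helpers, you could layer on just those features rather than falling back to the full AddMvc() call – a sketch (check which of these extension methods exist in the ASP.NET Core version you target):

//Add only the view related services plus your assembly application part
var builder = services.AddMvcCore().AddViews().AddRazorViewEngine();
builder.AddApplicationPart(typeof(MyLibStartup).Assembly);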

Configuring Azure Active Directory login with Umbraco Members

February 18, 2019 02:09

This post is about configuring Azure Active Directory with Umbraco Members (not Users), meaning this is for your front-end website, not the Umbraco back office. I did write up a post about Azure AD with back office users though, so if that is what you are looking for then this is the link.

Install the Nuget packages

First thing to do is get the UmbracoIdentity package installed.

PM > Install-Package UmbracoIdentity

(This will also install the UmbracoIdentity.Core base package)

This package installs some code snippets and updates your web.config to enable ASP.NET Identity for Umbraco members. Umbraco ships with the old and deprecated ASP.NET Membership Providers for members rather than ASP.NET Identity, so this package extends the Umbraco CMS and the Umbraco members implementation to use ASP.NET Identity APIs to interact with the built-in members data store. Installing this package will remove the (deprecated) FormsAuthentication module from your web.config and it will no longer be used to authenticate members, so the typical members snippets built into Umbraco macros will not work. Instead, use the supplied snippets shipped with this package.

To read more about this package see the GitHub repo here.

Next, the OpenIdConnect package needs to be installed

PM > Install-Package Microsoft.Owin.Security.OpenIdConnect

Configure Azure Active Directory

Head over to the Azure Active Directory section on the Azure portal, choose App Registrations (I’m using the Preview functionality for this) and create a New registration


Next fill out the app details


You may also need to enter other redirect URLs depending on how many different environments you have. All of these URLs can be added in the Authentication section of your app in the Azure portal.

For AAD configuration for front-end members, the redirect URLs are just your website's root URL, and it is advised to keep the trailing slash.

Next, you will need to enable ID tokens.


Configure OpenIdConnect

The UmbracoIdentity package will have installed an OWIN startup class in ~/App_Start/UmbracoIdentityStartup.cs (or it could be in App_Code if you are using a website project). This is where ASP.NET Identity is configured for front-end members and where you can specify the configuration for different OAuth providers. There are a few things you'll need to do:

Allow external sign in cookies

If you scroll down to the ConfigureMiddleware method, there will be a line of code to uncomment: app.UseExternalSignInCookie(DefaultAuthenticationTypes.ExternalCookie); this is required for any OAuth providers to work.

Enable OpenIdConnect OAuth for AAD

You'll need to add this extension method class to your code, which is some boilerplate to configure OpenIdConnect with AAD:

public static class UmbracoADAuthExtensions
{
    public static void ConfigureAzureActiveDirectoryAuth(this IAppBuilder app,
        string tenant, string clientId, string postLoginRedirectUri, Guid issuerId,
        string caption = "Active Directory")
    {
        var authority = string.Format(
            System.Globalization.CultureInfo.InvariantCulture,
            "https://login.windows.net/{0}",
            tenant);

        var adOptions = new OpenIdConnectAuthenticationOptions
        {
            ClientId = clientId,
            Authority = authority,
            RedirectUri = postLoginRedirectUri
        };

        adOptions.Caption = caption;
        //Need to set the auth type as the issuer path
        adOptions.AuthenticationType = string.Format(
            System.Globalization.CultureInfo.InvariantCulture,
            "https://sts.windows.net/{0}/",
            issuerId);
        app.UseOpenIdConnectAuthentication(adOptions);
    }
}

Next you’ll need to call this code, add the following line underneath the app.UseExternalSignInCookie method call:

app.ConfigureAzureActiveDirectoryAuth(
    ConfigurationManager.AppSettings["azureAd:tenantId"],
    ConfigurationManager.AppSettings["azureAd:clientId"],
    //The value of this will need to change depending on your current environment
    postLoginRedirectUri: ConfigurationManager.AppSettings["azureAd:redirectUrl"],
    //This is the same as the TenantId
    issuerId: new Guid(ConfigurationManager.AppSettings["azureAd:tenantId"]));

Then you’ll need to add a few appSettings to your web.config (based on your AAD info):

<add key="azureAd:tenantId" value="YOUR-TENANT-ID-GUID" />
<add key="azureAd:clientId" value="YOUR-CLIENT-ID-GUID" />
<add key="azureAd:redirectUrl" value="http://my-test-website/" />

Configure your Umbraco data

The UmbracoIdentity repository has the installation documentation and you must follow these two instructions, which are very simple:

  1. You need to update your member type with the securityStamp property
  2. Create the Account document type

Once that is done you will have a member account management page which is based on the installed views and snippets of the UmbracoIdentity package.


As you can see, the button text under "Use another service to log in" is the login provider name, which is a bit ugly. The good news is that this is easy to change since this is just a partial view that was installed with the UmbracoIdentity package. You can edit the file ~/Views/UmbracoIdentityAccount/ExternalLoginsList.cshtml: the code rendering that button text uses @p.AuthenticationType, but we can easily change this to @p.Caption, which is the same caption text used in the extension method we created. So the whole button code can look like this instead:


<button type="submit" class="btn btn-default"
        id="@p.AuthenticationType"
        name="provider"
        value="@p.AuthenticationType"
        title="Log in using your @p.Caption account">
    @p.Caption
</button>

This is a bit nicer: now the button shows the caption text instead of the provider type name.


The purpose of all of these snippets and views installed with UmbracoIdentity is for you to customize how the whole flow looks and works so you’ll most likely end up customizing a number of views found in this folder to suit your needs.

That’s it!

Once that's all configured, if you click on the Active Directory button to log in as a member, you'll be brought to the standard AAD permission screen.


Once you accept, you'll be redirected back to your account page.


Any customization is then up to you. You can modify how the flow works, whether or not you accept auto-linking accounts (like in the above example), or whether you require a member to exist locally before being able to link an OAuth account, etc… All of the views and controller code in UmbracoIdentity is there for you to manipulate. The main files are:

  • ~/Views/Account.cshtml
  • ~/Views/UmbracoIdentityAccount/*
  • ~/Controllers/UmbracoIdentityAccountController.cs
  • ~/App_Start/UmbracoIdentityStartup.cs


Happy coding!

Smidge 2.0 alpha is out

December 30, 2016 03:39

What is Smidge? Smidge is a lightweight runtime bundling library (CSS/JavaScript file minification, combination, compression) for ASP.NET Core.

If you've come from ASP.NET 4.5 you would have been familiar with the bundling/minification API and other bundling options like ClientDependency, but that is no longer available in ASP.NET Core; instead it is advised to do all the bundling and pre-processing that you need as part of your build process … which certainly makes sense! So why create this library? A few reasons: some people just want a very simple bundling library and don't want to worry about Gulp or Grunt or WebPack; in a lot of cases the overhead of runtime processing is not going to make any difference; and lastly, if you have created something like a CMS that dynamically loads in assets from 3rd party packages or plugins, you need a runtime bundler since these things don't exist at build time.

Over the past few months I've been working on some enhancements to Smidge and have found a bit of time to get an alpha released. There's loads of great new features in Smidge 2.0! You can install it via Nuget and it targets .NET Standard 1.6 and .NET Framework 4.5.2.

PM> Install-Package Smidge -Pre

New to Smidge?

It's easy to get started with Smidge and there's lots of docs available on GitHub that cover installation, configuration, creating bundles and rendering them.

New Features

Here’s a list of new features complete with lots of code examples

Customizable Debug and Production options

https://github.com/Shazwazza/Smidge/issues/58

Prior to version 2.0, you could only configure aspects of the Production options, and the Debug assets that were returned were just the raw static files. With 2.0, you have full control over how your assets are processed in both Debug and Production configurations. For example, if you wanted you could have your assets combined but not minified in Debug mode. This also allows non-native web assets such as TypeScript to have pre-processors run so they work in Debug mode.

Example:

services.AddSmidge(_config)
    .Configure<SmidgeOptions>(options =>
    {
        //set the default e-tag options for Debug mode
        options.DefaultBundleOptions.DebugOptions.CacheControlOptions.EnableETag = false;
    });

Fluent syntax for declaring/configuring bundles

https://github.com/Shazwazza/Smidge/issues/55

If you want to customize Debug or Production options per bundle, you can do so with a fluent syntax, for example:

app.UseSmidge(bundles =>
{                
    //For this bundle, enable composite files for Debug mode, enable the file watcher so any changes
    //to the files are automatically re-processed and cache invalidated, disable cache control headers
    //and use a custom cache buster. You could of course use the .ForProduction options too 
    bundles.Create("test-bundle-2", WebFileType.Js, "~/Js/Bundle2")
        .WithEnvironmentOptions(BundleEnvironmentOptions.Create()
                .ForDebug(builder => builder
                    .EnableCompositeProcessing()
                    .EnableFileWatcher()
                    .SetCacheBusterType<AppDomainLifetimeCacheBuster>()
                    .CacheControlOptions(enableEtag: false, cacheControlMaxAge: 0))
                .Build()
        );                
});

Customizable Cache Buster

https://github.com/Shazwazza/Smidge/issues/51

In version 1.0 the only cache busting mechanism was Smidge's version property, which is set in config. In 2.0, Smidge allows you to control how cache busting works at both a global and a bundle level. 2.0 ships with 2 ICacheBuster types:

  • ConfigCacheBuster – the default and uses Smidge’s version property in config

  • AppDomainLifetimeCacheBuster – if enabled will mean that the server/browser cache will be invalidated on every app domain recycle

If you want different behavior, you can define your own ICacheBuster, add it to the IoC container and then just use it globally or per bundle. For example:

//Set a custom MyCacheBuster as the default one for Debug assets:
services.AddSmidge(_config)
    .Configure<SmidgeOptions>(options =>
    {
        options.DefaultBundleOptions.DebugOptions.SetCacheBusterType<MyCustomCacheBuster>();       
    });

//Set a custom MyCacheBuster as the cache buster for a particular bundle in debug mode:
bundles.Create("test-bundle-2", WebFileType.Js, "~/Js/Bundle2")
    .WithEnvironmentOptions(BundleEnvironmentOptions.Create()
            .ForDebug(builder => builder
                .SetCacheBusterType<MyCacheBuster>())
            .Build()
    );

Customizable cache headers

https://github.com/Shazwazza/Smidge/issues/48 

You can now control whether the ETag header is output, and you can control the value set for the max-age/s-maxage/Expires headers at a global or bundle level. For example:

//This would set the max-age header for this bundle to expire in 5 days
bundles.Create("test-bundle-5", WebFileType.Js, "~/Js/Bundle5")
    .WithEnvironmentOptions(BundleEnvironmentOptions.Create()
            .ForProduction(builder => builder                                
                .CacheControlOptions(enableEtag: true, cacheControlMaxAge: (5 * 24)))
            .Build()
    );

Callback to customize the pre-processor pipeline per web file

https://github.com/Shazwazza/Smidge/issues/59

This is handy in case you want to modify the pipeline for a given web file at runtime based on some criteria, for example:

services.AddSmidge(_config)
    .Configure<SmidgeOptions>(options =>
    {
        //set the callback
        options.PipelineFactory.OnGetDefault = GetDefaultPipelineFactory;
    });

//The GetDefaultPipelineFactory method could do something like modify the default pipeline to use Nuglify for JS processing:

private static PreProcessPipeline GetDefaultPipelineFactory(WebFileType fileType, IReadOnlyCollection<IPreProcessor> processors)
{
    switch (fileType)
    {
        case WebFileType.Js:
            return new PreProcessPipeline(new IPreProcessor[]
            {
                processors.OfType<NuglifyJs>().Single()
            });                
    }
    //returning null will fallback to the logic defined in the registered PreProcessPipelineFactory
    return null;
}

File watching with automatic cache invalidation

https://github.com/Shazwazza/Smidge/pull/42 

During the development process it would be nice to be able to test composite files but have them auto re-process and invalidate the cache whenever one of the source files changes… in 2.0 this is possible!  You can enable file watching at the global level or per bundle. Example:

//Enable file watching for all files in this bundle when in Debug mode
bundles.Create("test-bundle-7",
    new JavaScriptFile("~/Js/Bundle7/a1.js"),
    new JavaScriptFile("~/Js/Bundle7/a2.js"))
    .WithEnvironmentOptions(BundleEnvironmentOptions.Create()
            .ForDebug(builder => builder.EnableFileWatcher())
            .Build()
    );

What’s next?

This is an alpha release since there’s a few things that I need to complete. Most are already done but I just need to make Nuget packages for them:

More pre-processors

I’ve enabled support for a Nuglify pre-processor for both CSS and JS (Nuglify is a fork of the Microsoft Ajax Minifier for ASP.NET Core + additional features). I also enabled support for an Uglify NodeJs pre-processor which uses Microsoft.AspNetCore.NodeServices to invoke Node.js from ASP.NET and run the JS version of Uglify. I just need to get these on Nuget but haven’t got around to that yet.

A quick note on minifier performance

Though Nuglify and Uglify have a better minification engine (better/smarter size reduction) than JsMin because they create an AST (Abstract Syntax Tree) to perform their processing, they are actually much slower and consume more resources than JsMin. Since Smidge is a runtime bundling engine, it's generally important to ensure that the bundling/minification is performed quickly. Smidge has strict caching so the bundling/minification will only happen once (depending on the ICacheBuster you are using), but it is still recommended to understand the performance implications of replacing JsMin with another minifier. I've put together some benchmarks (NOTE: a smaller Minified % is better):

Method             Median          StdDev      Scaled    Scaled-SD   Minified %   Gen 0   Gen 1   Gen 2   Bytes Allocated/Op
JsMin                 10.2008 ms   0.3102 ms     1.00    0.00        51.75%       -       -       -          155,624.67
Nuglify               69.0778 ms   0.0180 ms     6.72    0.16        32.71%       53.00   22.00   15.00    4,837,313.07
JsServicesUglify   1,548.3951 ms   7.6388 ms   150.95    3.73        32.63%       0.97    -       -          576,056.55
The last benchmark may be a bit misleading because the processing is done via Node.js, which executes in a separate process, so I'm unsure whether the actual memory usage can be properly captured by BenchmarkDotNet, but you can see its speed is much slower.

Thanks!

Big thanks to @dazinator for all the help, recommendations, testing, feedback, etc… and for the rest of the community for filing bugs, questions, and comments. Much appreciated :)

FCN (File Change Notification) Viewer for ASP.NET

December 19, 2016 05:04

This is a follow-up post to another article I previously wrote about FCN in ASP.NET and how it affects your application, performance, etc…

Since that post, I've discovered a few more tidbits about FCN and application restarts and have decided to release a Nuget package that you can install to generate a report of all file/directory change monitors that ASP.NET creates.

The code for the FCN report generator lives on GitHub and you can install it via nuget:

PM> Install-Package FCNViewer

Once that is installed you’ll get a readme on how to enable it, basically just this in your WebApi startup/route config:

config.Routes.MapFcnViewerRoute();

Then you can navigate to /fcn and you’ll get a nice report showing all of the files/folders being watched by ASP.NET.

FCN modes & app domain restarts

I won’t go into full details about all the FCN modes again (you can find those details on my previous post) but I want to provide some info about “Single” vs “NotSet” (the default).

When the default ("NotSet") is used, ASP.NET will create a directory monitor inside of every folder that the ASP.NET runtime accesses. This includes accessing a folder to return an asset request like CSS or JS, rendering a Razor file or reading from a data file. Basically it means ASP.NET has the potential to create a directory monitor for every directory you have in your site. The good part about the default mode is that each directory monitor will have its own buffer to manage the files it is watching, so there's less chance these buffers can overflow. The problem with the default is file system performance, especially if you are hosting your site from a remote file server – the more directory monitors that are created, the more strain will be put on your file server/network, which could cause unwanted app domain restarts.

When set to "Single", ASP.NET will create one directory monitor and this single directory monitor will be used to monitor all folders that ASP.NET accesses. This means there is a single buffer that is used for monitoring all of the files and folders. This buffer is larger than the buffer used for each individual monitor created when the default is used; however, the problem with "Single" is that if you have tons of folders there's a higher chance this buffer can overflow, which could cause unwanted app domain restarts. The good part about "Single" is that it's much better for file system performance, especially when you are hosting your site from a remote file server.

As you can see there’s no perfect scenario and both can cause unwanted app domain restarts. These restarts will end up in your logs with very peculiar reasons like:

Application shutdown. Details: ConfigurationChange
_shutDownMessage=Overwhelming Change Notification in 
    HostingEnvironment initiated shutdown
    CONFIG change
    Overwhelming Change Notification in D:\inetpub\test
    CONFIG change
    Change Notification for critical directories.
    Overwhelming Change Notification in bin
    Change Notification for critical directories.
    Overwhelming Change Notification in App_LocalResources
    CONFIG change
    CONFIG change
    CONFIG change
    CONFIG change

Or other strange “ConfigurationChange” reasons that might tell you that all sorts of files have been changed – but you definitely know that is not the cause. The cause of this is very likely FCN issues.

Files outside of the web root

An interesting bit of info is that ASP.NET won't create directory/file monitors for files that exist outside of the web root, even if ASP.NET accesses them. I'd recommend, where possible, that you store any sort of cache files, data files, etc… that ASP.NET doesn't need to serve up as static file requests outside of the web root. This is actually the default way of working in ASP.NET Core too. As an example, you may have heard of or use ImageProcessor, which creates quite a substantial amount of cache folders for processed images. These files by default exist in /App_Data/cache, and since that is in the web root, ASP.NET will create a directory watcher for each folder in there … and there can be literally thousands if you use ImageProcessor extensively! If you are using FCN "Single" mode and have that many folders in the ImageProcessor cache, there's a good chance that you'll have app domain restart issues – though changing it to "NotSet" could result in serious file system performance issues. The good news for ImageProcessor is that I created a PR to allow for storing its cache outside of the web root: https://github.com/JimBobSquarePants/ImageProcessor/pull/521 so that functionality should be available in an upcoming release soon.

Virtual Directories

Updated 13/08/2018

As it turns out, Virtual Directories in IIS are treated differently as well! Any fcnMode that you configure has no effect on files within a Virtual Directory except if you disable FCN. So if you are thinking of using "Single" and are using Virtual Directories, think again. The only FCN setting that affects Virtual Directories is "Disabled". Here's the actual code that creates directory watchers for Virtual Directories: https://referencesource.microsoft.com/#System.Web/FileChangesMonitor.cs,0820c837402228ef and as you can see, if it's not disabled, it will create a directory watcher for every sub directory in the Virtual Directory. This problem is compounded if the Virtual Directory is pointing to a network share.

FCN & ASP.NET CacheDependency

Another thing to be aware of is that if you use ASP.NET’s cache (i.e. HttpRuntime.Cache or HttpContext.Cache or MemoryCache) and you create cache dependencies on specific files, this effectively goes straight to the underlying FCN engine of ASP.NET and it will create these same file/directory monitors that ASP.NET creates by default … even if these files are stored outside of the web root! So if you are creating a ton of CacheDependency objects, you will be creating a ton of FCN directory/file monitors which could be directly causing FCN issues with app domain restarts.
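
For example, code like the following (the file path and cache key are just for illustration) registers an FCN file monitor via the cache dependency even though the file lives outside the web root:

// The CacheDependency below goes straight to the FCN engine and creates
// a file monitor for this path, even though it's outside the web root
string file = @"D:\data\settings.json";

HttpRuntime.Cache.Insert(
    "my-settings",
    File.ReadAllText(file),
    new CacheDependency(file));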

The FCN Report

At least with this report viewer you can see how many files and folders are being watched. It’s important to know that these watchers are created lazily whenever ASP.NET accesses files so on first load you might not see too many but if you start browsing around your site, you’ll see the number grow.

You can also modify the default route by using an overload:

config.Routes.MapFcnViewerRoute("myfcnreport");

This can hopefully help debug these strange restart issues, give you an idea of how much ASP.NET is actually watching and show you how the FCN Modes affect these monitors.

Umbraco CLI running on ASP.NET Core

October 26, 2016 14:13

TL;DR I’ve got Umbraco (the Core part) running on .NET Core (not just a .NET Core CLI wrapping a non .NET Core Umbraco). See below for a quick video of it working on Ubuntu and simple instructions on how to get it running yourself.

Over the past couple of years I've slowly been working on getting Umbraco to run on ASP.NET Core. Unlike many other ASP.NET frameworks and products that have rewritten their apps in ASP.NET Core, I've gone a different path and have updated Umbraco's existing code to compile against both .NET Framework and .NET Core. This is a necessary transition for the Umbraco codebase; we don't plan on rewriting Umbraco, just updating it to play nicely with both .NET Framework and .NET Core.

During my talk at CodeGarden this year I spoke about the Future of Umbraco Core. An important aspect of that talk was the fact that we need to build & release Umbraco version 8 before we can consider looking into upgrading all of Umbraco to work with ASP.NET Core. A primary reason for this is that we need to remove all of the legacy code from Umbraco (including plenty of old Webforms views) and update plenty of other things, such as old libraries that are simply not going to work with ASP.NET Core.

I have been doing all of the work on updating Umbraco to work with ASP.NET Core on a fork on GitHub. It’s been a very tedious process made even more tedious due to the constant changes of ASP.NET Core over the last 2 years. I started this whole process by modifying the VS sln file to exclude all of the projects and only including the Umbraco.Core project, then starting with the most fundamental classes to include. I’d include one class at a time using the project.json file, compile, make changes if required until it built, include the next class, rinse/repeat.  I did this up until the point where I could compile the Umbraco.Core project to include Umbraco’s ApplicationContext object and CoreBootManager. This basically meant I had everything I needed in order to bootstrap Umbraco and perform the business logic operations for Umbraco on ASP.NET Core :)

I did start looking at updating the Umbraco.Web project but this is going to be a much more involved process due to the way that MVC and WebApi have changed with regards to ASP.NET Core. It is worth noting that I do have the routing working, even things like hijacked routes, SurfaceControllers, etc…!

But before I continued down that road I wanted to see if I could get the core part of Umbraco running cross platform, so I tinkered around with making an Umbraco .NET Core CLI

… And it just works :)

On a side note, the Git branch that this lives in is in a fork of Umbraco's current source code, and it is branched from v8 so it is fully merge-able. This means that as we continue developing v7 and v8, all of those fixes/changes will merge up into this branch.

Umbraco Interactive CLI

I didn't want to reinvent the wheel with a CLI argument parser, so a quick Google of what was available on .NET Core pointed me to a great framework that Microsoft had already built. What I wanted to make was an interactive CLI so I could either execute a single command line statement to perform an operation, or start the CLI and be prompted to work with Umbraco data until I exited. This Microsoft framework didn't quite support that OOTB, but it wasn't difficult to make it work the way I wanted without modifying its source. From there I wrote the code to boot Umbraco and started implementing some commands. Here are the commands and sub-commands so far (each command has a help option: -h):

  • db – Used to configure the database
    • install – used to install a new Umbraco db, schema and default data
    • connect – used to connect to an existing Umbraco db
  • schema – Used for manipulating schema elements
    • doctype – Used for Document type operations
      • create, del, list
      • groups – Used for property group operations (create + list)
      • props – Used for property type operations (create + list)
    • medtype– Used for Media type operations
      • create, del , list
      • groups – Used for property group operations (create + list)
      • props – Used for property type operations (create + list)

See it in action

Cross Platform

I was very interested to see if this would work on Linux so I got Ubuntu up and running and put MySql on there and created a new db to use. Then I updated the solution to build a standalone .NET Core app, published that using

dotnet publish -c release -r ubuntu.14.04-x64 

and unzipped that on my Ubuntu installation. Then I tried it by running

./Umbraco.Test.Console

and … it worked!! There's something quite magical about the whole thing; it really was very easy to get this to run on a non-Windows environment. Very impressive :)

Try it out!

You should be able to get this working pretty easily – hopefully on any OS – here’s the steps:

I’ve not tested any of this with OSX, so hopefully it will ‘just work’ there too

You'll need an empty MS SQL or MySql database set up in your OS environment; note the connection string since you'll need to enter it in the CLI.

Clone the repo (It’s big, it’s the actual current Umbraco source):

git clone -b dev-v9 https://github.com/Shazwazza/Umbraco-CMS.git

Go to the project folder

cd Umbraco-CMS
cd src
cd Umbraco.Test.Console

Restore packages

dotnet restore

Build the project, depending on your OS

dotnet build -r win10-x64
dotnet build -r osx.10.10-x64
dotnet build -r ubuntu.16.04-x64

Publish it, depending on your OS

dotnet publish -c release -r win10-x64
dotnet publish -c release -r osx.10.10-x64
dotnet publish -c release -r ubuntu.16.04-x64

Run it, depending on your OS

bin\release\netcoreapp1.0\win10-x64\Umbraco.Test.Console.exe
./bin/release/netcoreapp1.0/osx.10.10-x64/Umbraco.Test.Console
./bin/release/netcoreapp1.0/ubuntu.16.04-x64/Umbraco.Test.Console

NOTE: On Linux you’ll probably have to mark it to be executable first by doing this:

chmod +x ./bin/release/netcoreapp1.0/ubuntu.16.04-x64/Umbraco.Test.Console

Next steps

I'm very excited about what has been achieved so far but there's certainly a long way to go. As I mentioned above, getting v8 completed is a requirement for getting a version of Umbraco fully working with ASP.NET Core. During that development time I do plan on continuing to tinker around with getting more stuff to work. I'd like to see some progress made with the web project; the first steps will require getting the website boot process working (in progress) and I think a good first milestone will be getting the installer all working. From there, there's updating the controllers and authentication/authorization mechanisms for the back office and then looking into actually getting content rendered on the front-end (this part is actually the easiest and mostly done already).