Shannon Deminick's blog all about web development

Auto upgrade your Nuget packages with Azure Pipelines or GitHub Actions

February 25, 2021 06:12

Before we start I just want to preface this with some 🔥 warnings 🔥

  • This works for me, it might not work for you
  • To get this working for you, you may need to tweak some of the code referenced
  • This is not under any support or warranty by anyone
  • Running the NuGet.exe update command outside of Visual Studio will overwrite your files, so there is a manual review process (more info below)
  • This is only for ASP.NET Framework projects using packages.config. Yes, I know that is super old and I should get with the times, but this has been an ongoing behind-the-scenes project of mine for a long time. When I need this for PackageReference projects or ASP.NET Core/5 I'll update it, but there's nothing stopping you from tweaking this to work for you
  • This only works for a specified csproj, not an entire sln. It could work for that, but I've not tested it and there would be a few tweaks to make
  • This does not yet work for GitHub Actions but the concepts are all here and could probably very easily be converted. UPDATE: This works now!

Now that’s out of the way …

How do I do it?

With a lot of PowerShell :) This also uses a few methods from the PowerShellForGitHub project.

The process is:

  • Run a pipeline/action on a schedule (i.e. each day)
  • Check your source code for the installed version of a particular package
  • Check with NuGet (using your NuGet.config file) to see what the latest stable version is
  • If there's a newer version:
    • Create a new branch
    • Run a NuGet update against your project
    • Build the project
    • Commit the changes
    • Push the changes
    • Create a PR for review
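The scheduled part of this is just a standard Azure Pipelines cron trigger. As a sketch (the branch name and time here are assumptions, check the repo for the real YAML):

```yaml
# Hypothetical schedule - the actual YAML in the repo may differ
schedules:
- cron: "0 3 * * *"        # run daily at 03:00 UTC
  displayName: Daily package upgrade check
  branches:
    include:
    - master
  always: true             # run even when there are no code changes
```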

Azure Pipelines/GitHub Actions YAML

The only part of the YAML that needs editing is the variables, here's what they mean:

  • ProjectFile = The relative path to your csproj that you want to upgrade
  • PackageFile = The relative path to your packages.config file for this project
  • PackageName = The Nuget package name you want upgraded
  • GitBotUser = The name used for the Git commits
  • GitBotEmail = The email used for the Git commits

For Azure Pipelines, these are also required:

Then there are some variables to assist with testing:

  • DisableUpgradeStep = If true will just check if there’s an upgrade available and exit
  • DisableCommit = If true will run the upgrade and will exit after that (no commit, push or PR)
  • DisablePush = If true will run the upgrade + commit and will exit after that (no push or PR)
  • DisablePullRequest = If true will run the upgrade + commit + push and will exit after that (no PR)
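Pulling all of the variables together, a hypothetical variables block might look like this (every value is an example only, substitute your own paths, package name and bot identity):

```yaml
# Example values only - adjust for your own repository layout
variables:
  ProjectFile: 'src/Shazwazza.Web/Shazwazza.Web.csproj'
  PackageFile: 'src/Shazwazza.Web/packages.config'
  PackageName: 'UmbracoCms'
  GitBotUser: 'Upgrade Bot'
  GitBotEmail: 'upgradebot@example.com'
  DisableUpgradeStep: false
  DisableCommit: false
  DisablePush: false
  DisablePullRequest: false
```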

Each step in the YAML build more or less just calls Git commands or PowerShell functions. The PowerShell functions are loaded as part of a PowerShell module which is committed to the repository. The module's functions are auto-loaded by PowerShell because the first step adds the custom module path to the PSModulePath environment variable. Once that is in place, all functions exposed by the module are auto-loaded.
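That first step can be sketched roughly like this for Azure Pipelines; the module folder matches the repository structure described in this post, and the ##vso logging command is the standard way to set a variable for subsequent steps (the exact script in the repo may differ):

```yaml
steps:
- powershell: |
    # Prepend the custom module folder so its functions auto-load in later steps
    $modulePath = "$(Build.SourcesDirectory)\build\PowershellModules"
    Write-Host "##vso[task.setvariable variable=PSModulePath]$modulePath;$env:PSModulePath"
  displayName: Configure PowerShell environment
```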

In these examples you’ll see that I’m referencing Umbraco Cloud names and that’s because I’m using this on Umbraco Cloud for my own website and the examples are for the UmbracoCms package. But this should in theory work for all packages!

Show me the code

The code for all of this is here in a new GitHub repo and here’s how you use it:

You can copy the folder structure in the repository as-is. Here's an example of what my site's repository folder structure is to make this work (everything except the src folder is in the GitHub repo above):

  • [root]
    • auto-upgrader.devops.yml (If you are using Azure Pipelines)
    • .github
      • workflows
        • auto-upgrader.gh.yml (If you are using GitHub Actions)
    • build
      • PowershellModules
        • AutoUpgradeFunctions.psd1
        • AutoUpgradeFunctions.psm1
        • AutoUpgradeFunctions
    • src
      • Shazwazza.Web
        • Shazwazza.Web.csproj
        • packages.config

All of the steps have descriptive display names, so it should be reasonably self-documenting.

The end result is a PR, here’s one that was generated by this process:

Nuget overwrites

NuGet.exe works differently than NuGet within Visual Studio's Package Manager Console. All of those special commands like Install-Package, Update-Package, etc. are PowerShell module commands built into Visual Studio, and they are able to work with the context of Visual Studio. This allows those commands to be a little smarter when running NuGet updates and also allows legacy NuGet features, like running PowerShell scripts on install/update, to work. This script just uses NuGet.exe and it is less smart, especially for these legacy .NET Framework projects. As such, it will just overwrite all files in most cases (it does seem to detect file changes, but isn't always accurate).

With that 🔥 warning 🔥 it is very important to make sure you review the changed files in the PR and revert or adjust any changes you need before applying the PR.

You'll see a note in the PowerShell script about NuGet overwrites. There are other options that can be used, like "Ignore" and "IgnoreAll", but all my tests have shown that for some reason those settings end up deleting a whole bunch of files, so the default overwrite setting is used.
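For reference, the underlying NuGet.exe call is along these lines (a sketch, not the exact script; `-FileConflictAction` is the switch that controls the overwrite behaviour discussed above):

```shell
nuget.exe update .\src\Shazwazza.Web\packages.config -Id UmbracoCms -FileConflictAction Overwrite
```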

Next steps

Get out there and try it! I would love some feedback on this if/when you get a chance to test it.

PackageReference support for .NET Framework projects could also be done (but IMO this is low priority), along with being able to upgrade an entire sln instead of just a csproj.

Then perhaps some attempts at getting a .NET Core/5 version of this running. In theory that will be easier since it will mostly just be dotnet commands.


How to execute one Controller Action from another in ASP.NET 5

February 15, 2021 05:06

This will generally be a rare thing to do but if you have your reasons to do it, then this is how…

In Umbraco one valid reason to do this is due to how HTTP POSTs are handled for forms. Traditionally an HTML form will POST to a specific endpoint, that endpoint will handle the validation, and if all is successful it will redirect to another URL, else it will return a validation result on the current URL (i.e. PRG: POST/REDIRECT/GET).

In the CMS world this can end up a little weird because URLs are dynamic. POSTs should ideally just go to the current URL so that if there is a validation result, it is still shown on the current URL and not a custom controller endpoint URL. This means there can be multiple controllers handling the same URL (one for GET, another for POST), and that's exactly what Umbraco has been doing since MVC was enabled in it many years ago.

For this to work, a controller is selected during the dynamic route to handle the POST (a SurfaceController in Umbraco). If successful, the developer will typically use return RedirectToCurrentUmbracoPage (of type RedirectToUmbracoPageResult), and if not successful will use return CurrentUmbracoPage (of type UmbracoPageResult). The RedirectToUmbracoPageResult is easy to handle since it is just a redirect, but the UmbracoPageResult is a little tricky because one controller has just handled the POST request and now it wants to return a page result for the current Umbraco page, which is handled by a different controller.


The concept is actually pretty simple and the IActionInvoker does all of the work. You can create an IActionInvoker from the IActionInvokerFactory which needs an ActionContext. Here’s what the ExecuteResultAsync method of a custom IActionResult could look like to do this:

public async Task ExecuteResultAsync(ActionContext context)
{
    // Change the route values to match the action to be executed
    context.RouteData.Values["controller"] = "Page";
    context.RouteData.Values["action"] = "Index";

    // Create a new context and execute the controller/action.
    // Copying the action context also copies the ModelState.
    var renderActionContext = new ActionContext(context)
    {
        // Normally this would be looked up via the EndpointDataSource
        // or using the IActionSelector
        ActionDescriptor = new ControllerActionDescriptor
        {
            ActionName = "Index",
            ControllerName = "Page",
            ControllerTypeInfo = typeof(PageController).GetTypeInfo(),
            DisplayName = "PageController.Index"
        }
    };

    // Get the factory from the request services
    IActionInvokerFactory actionInvokerFactory = context.HttpContext
        .RequestServices.GetRequiredService<IActionInvokerFactory>();

    // Create the invoker
    IActionInvoker actionInvoker = actionInvokerFactory.CreateInvoker(renderActionContext);

    // Execute!
    await actionInvoker.InvokeAsync();
}

That's pretty much the gist of it. The note about the ControllerActionDescriptor is important though; it's probably best not to create these manually since they are already created as part of your routing. They can be queried and resolved in a few different ways, such as interrogating the EndpointDataSource or using the IActionSelector. This execution will run the entire pipeline for the other controller, including all of its filters, etc.
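For example, a lookup via the EndpointDataSource could look something like this sketch (the controller/action names match the example above; this is illustrative only, not lifted from a real codebase):

```csharp
// Sketch: resolve the routed ControllerActionDescriptor instead of
// constructing one by hand.
var endpointDataSource = context.HttpContext.RequestServices
    .GetRequiredService<EndpointDataSource>();

ControllerActionDescriptor actionDescriptor = endpointDataSource.Endpoints
    .Select(e => e.Metadata.GetMetadata<ControllerActionDescriptor>())
    .First(d => d != null
        && d.ControllerName == "Page"
        && d.ActionName == "Index");
```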

Searching with IPublishedContentQuery in Umbraco

July 31, 2020 04:13

I recently realized that I don’t think Umbraco’s APIs on IPublishedContentQuery are documented so hopefully this post may inspire some docs to be written or at least guide some folks on some functionality they may not know about.

A long while back, even in Umbraco v7, UmbracoHelper was split into different components and UmbracoHelper just wrapped these. One of these components was called ITypedPublishedContentQuery, now called IPublishedContentQuery in v8, and this component is responsible for executing queries for content and media on the front-end in Razor templates. In v8 a lot of methods were removed or obsoleted from UmbracoHelper so that it isn't one gigantic object, and it tries to steer developers to use these sub-components directly instead. For example, if you try to access UmbracoHelper.ContentQuery you'll see it has been deprecated, saying:

Inject and use an instance of IPublishedContentQuery in the constructor for using it in classes or get it from Current.PublishedContentQuery in views

and the UmbracoHelper.Search methods from v7 have been removed and now only exist on IPublishedContentQuery.

There are API docs for IPublishedContentQuery which are a bit helpful; at least they will tell you what all the available methods and parameters are. The main ones I want to point out are the Search methods.

Strongly typed search responses

When you use Examine directly to search, you get an Examine ISearchResults object back, which is more or less raw data. It's possible to work with that data, but most people want to work with strongly typed data, at the very least in Umbraco with IPublishedContent. That is pretty much what the IPublishedContentQuery.Search methods solve. Each of these methods returns an IEnumerable<PublishedSearchResult>, and each PublishedSearchResult contains an IPublishedContent instance along with a Score value. A quick example in Razor:

@inherits Umbraco.Web.Mvc.UmbracoViewPage
@using Current = Umbraco.Web.Composing.Current;
@{
    var search = Current.PublishedContentQuery.Search(Request.QueryString["query"]);
}

<h3>Search Results</h3>
@foreach (var result in search)
{
    <p>
        Id: @result.Content.Id
        Name: @result.Content.Name
        <br />
        Score: @result.Score
    </p>
}
The ordering of this search is by Score so the highest score is first. This makes searching very easy while the underlying mechanism is still Examine. The IPublishedContentQuery.Search methods make working with the results a bit nicer.

Paging results

You may have noticed that there are a few overloads and optional parameters to these search methods too. Two of the overloads support paging parameters, and these take care of all of the quirks with Lucene paging for you. I wrote a previous post about paging with Examine and you need to make sure you do that correctly, else you'll end up iterating over possibly tons of search results, which can have performance problems. Expanding the above example with paging is super easy:

@inherits Umbraco.Web.Mvc.UmbracoViewPage
@using Current = Umbraco.Web.Composing.Current;
@{
    var pageSize = 10;
    var pageIndex = int.Parse(Request.QueryString["page"]);
    var search = Current.PublishedContentQuery.Search(
        Request.QueryString["query"],
        pageIndex * pageSize,   // skip
        pageSize,               // take
        out var totalRecords);
}

<h3>Search Results</h3>
@foreach (var result in search)
{
    <p>
        Id: @result.Content.Id
        Name: @result.Content.Name
        <br />
        Score: @result.Score
    </p>
}

Simple search with cultures

Another optional parameter you might have noticed is the culture parameter. The docs state this about the culture parameter:

When the culture is not specified or is *, all cultures are searched. To search for only invariant documents and fields use null. When searching on a specific culture, all culture specific fields are searched for the provided culture and all invariant fields for all documents. While enumerating results, the ambient culture is changed to be the searched culture.

What this is saying is that if you aren’t using culture variants in Umbraco then don’t worry about it. But if you are, you will also generally not have to worry about it either! What?! By default the simple Search method will use the “ambient” (aka ‘Current’) culture to search and return data. So if you are currently browsing your “fr-FR” culture site this method will automatically only search for your data in your French culture but will also search on any invariant (non-culture) data. And as a bonus, the IPublishedContent returned also uses this ambient culture so any values you retrieve from the content item without specifying the culture will just be the ambient/default culture.

So why is there a “culture” parameter? It’s just there in case you want to search on a specific culture instead of relying on the ambient/current one.
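For instance, a hypothetical snippet forcing a French-culture search regardless of the current site culture (using the term/culture overload listed at the end of this post):

```csharp
// Search the French culture explicitly instead of the ambient culture
var frenchResults = Current.PublishedContentQuery.Search(
    Request.QueryString["query"],
    culture: "fr-FR");
```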

Search with IQueryExecutor

IQueryExecutor is the resulting object created when building a query with the Examine fluent API. This means you can build up any complex Examine query you want, even with raw Lucene, and then pass this query to one of the IPublishedContentQuery.Search overloads and you get all the goodness of the above queries. There are also paging overloads that accept IQueryExecutor. To further expand on the above example:

@inherits Umbraco.Web.Mvc.UmbracoViewPage
@using Current = Umbraco.Web.Composing.Current;
@{
    // Get the external index with error checking
    if (!ExamineManager.Instance.TryGetIndex(
        Constants.UmbracoIndexes.ExternalIndexName, out var index))
    {
        throw new InvalidOperationException(
            $"No index found with name {Constants.UmbracoIndexes.ExternalIndexName}");
    }

    // build an Examine query
    var query = index.GetSearcher().CreateQuery()
        .GroupedOr(new[] { "pageTitle", "pageContent" },
            Request.QueryString["query"]);

    var pageSize = 10;
    var pageIndex = int.Parse(Request.QueryString["page"]);
    var search = Current.PublishedContentQuery.Search(
        query,                  // pass the examine query in!
        pageIndex * pageSize,   // skip
        pageSize,               // take
        out var totalRecords);
}

<h3>Search Results</h3>
@foreach (var result in search)
{
    <p>
        Id: @result.Content.Id
        Name: @result.Content.Name
        <br />
        Score: @result.Score
    </p>
}

The base interface of the fluent parts of Examine's queries is IQueryExecutor, so you can just pass your query to the method and it will work.


The IPublishedContentQuery.Search overloads are listed in the API docs, they are:

  • Search(String term, String culture, String indexName)
  • Search(String term, Int32 skip, Int32 take, out Int64 totalRecords, String culture, String indexName)
  • Search(IQueryExecutor query)
  • Search(IQueryExecutor query, Int32 skip, Int32 take, out Int64 totalRecords)

Should you always use this instead of using Examine directly? As always, it just depends on what you are doing. If you need a ton of flexibility with your search results then maybe you want to use Examine's search results directly, but if you want simple and quick access to IPublishedContent results, then these methods will work great.

Does this all work with ExamineX? Absolutely!! One of the best parts of ExamineX is that it's completely seamless. ExamineX is just an index implementation of Examine itself, so all Examine APIs, and therefore all Umbraco APIs that use Examine, will 'just work'.

Filtering fields dynamically with Examine

July 6, 2020 04:05

The index fields created by Umbraco in Examine by default can add up to quite a substantial number of fields. This is primarily due to how Umbraco handles variant/culture data, because it will create a different field per culture, but there are other factors as well. Umbraco will create a "__Raw_" field for each rich text field and, if you use the grid, it will create different fields for each grid row type. There are good reasons for all of these fields, and this gives you the most flexibility by default when querying and retrieving your data from the Examine indexes. But in some cases these default fields can be problematic. Examine by default uses Lucene as its indexing engine, and Lucene itself doesn't have any hard limits on field count (as far as I know); however, if you swap the indexing engine in Examine for something else, like Azure Search with ExamineX, then you may find your indexes exceed Azure Search's limits.

Azure Search field count limits

Azure Search has varying limits for field counts based on the service tier you have (strangely, the Free tier allows more fields than the Basic tier). The absolute maximum however is 1000 fields, and although that might seem like quite a lot, when you take into account all of the fields created by Umbraco you might realize it's not that difficult to exceed this limit. As an example, let's say you have an Umbraco site using language variants and you have 20 languages in use. Then let's say you have 15 document types, each with 5 fields (all with unique aliases), each field is variant, and you have content created for each of these document types and languages. This immediately means you are exceeding the field count limits: 20 x 15 x 5 = 1500 fields! And that's not including the "__Raw_" fields or the extra grid fields or the required system fields like "id" and "nodeName". I'm unsure why Azure Search even has this restriction in place.

Why is Umbraco creating a field per culture?

When v8 was being developed, a choice had to be made about how to handle multi-lingual data in Examine/Lucene. There are a couple of factors to consider in making this decision, which mostly boil down to how Lucene's analyzers work. The choice is either language per field or language per index. Some folks might think: can't we 'just' have a language per document? Unfortunately the answer is no, because that would require applying a specific language analyzer to that document, and then scoring would no longer work between documents. Elasticsearch has a good write-up about this. So it's either language per field or a different index per language. Each has pros/cons, but Umbraco went with language per field since it's quite easy to set up, supports different analyzers per language and doesn't require a ton of indexes, which would also incur a lot more overhead and configuration.

Do I need all of these fields?

That really depends on what you are searching on, but the answer is most likely 'no'. You probably aren't going to be searching on over 1000 fields, but who knows, every site's requirements are different. Umbraco Examine has something called an IValueSetValidator which you can configure to include/exclude certain fields or document types. This is synonymous with part of the old XML configuration in Examine. This is one of those things where configuration can make sense for Examine, and @callumwhyte has done exactly that with his package "Umbraco Examine Config". But the IValueSetValidator isn't all that flexible and works based on exact naming, which works great for filtering content types but perhaps not field names. (Side note: I'm unsure if the Umbraco Examine Config package works alongside ExamineX, I need to test that out.)

Since Umbraco creates fields with the same prefixed names for all languages it’s relatively easy to filter the fields based on a matching prefix for the fields you want to keep.

Here’s some code!

The following code is relatively straightforward, with inline comments: a custom class IndexFieldFilter that does the filtering and can be applied differently per index by name, a Component to apply the filtering and a Composer to register services. This code will also ensure that all Umbraco required fields are retained, so anything that Umbraco relies upon will still work.

/// <summary>
/// Register services
/// </summary>
public class MyComposer : ComponentComposer<MyComponent>
{
    public override void Compose(Composition composition)
    {
        base.Compose(composition);
        composition.Register<IndexFieldFilter>(Lifetime.Singleton);
    }
}

public class MyComponent : IComponent
{
    private readonly IndexFieldFilter _indexFieldFilter;

    public MyComponent(IndexFieldFilter indexFieldFilter)
        => _indexFieldFilter = indexFieldFilter;

    public void Initialize()
    {
        // Apply an index field filter to an index
        _indexFieldFilter.ApplyFilter(
            // Filter the external index
            Constants.UmbracoIndexes.ExternalIndexName,
            // Ensure fields with this prefix are retained
            new[] { "description", "title" },
            // optional: only keep data for these content types, else keep all
            new[] { "home" });
    }

    public void Terminate() => _indexFieldFilter.Dispose();
}

/// <summary>
/// Used to filter out fields from an index
/// </summary>
public class IndexFieldFilter : IDisposable
{
    private readonly IExamineManager _examineManager;
    private readonly IUmbracoTreeSearcherFields _umbracoTreeSearcherFields;
    private ConcurrentDictionary<string, (string[] internalFields, string[] fieldPrefixes, string[] contentTypes)> _fieldNames
        = new ConcurrentDictionary<string, (string[], string[], string[])>();
    private bool disposedValue;

    /// <summary>
    /// Constructor
    /// </summary>
    /// <param name="examineManager"></param>
    /// <param name="umbracoTreeSearcherFields"></param>
    public IndexFieldFilter(
        IExamineManager examineManager,
        IUmbracoTreeSearcherFields umbracoTreeSearcherFields)
    {
        _examineManager = examineManager;
        _umbracoTreeSearcherFields = umbracoTreeSearcherFields;
    }

    /// <summary>
    /// Apply a filter to the specified index
    /// </summary>
    /// <param name="indexName"></param>
    /// <param name="includefieldNamePrefixes">
    /// Retain all fields prefixed with these names
    /// </param>
    /// <param name="includeContentTypes">
    /// Only retain documents of these content types, else keep all
    /// </param>
    public void ApplyFilter(
        string indexName,
        string[] includefieldNamePrefixes,
        string[] includeContentTypes = null)
    {
        if (_examineManager.TryGetIndex(indexName, out var e)
            && e is BaseIndexProvider index)
        {
            // gather all internal index fields used by Umbraco
            // to ensure they are retained
            var internalFields = _umbracoTreeSearcherFields.GetBackOfficeFields()
                .Union(_umbracoTreeSearcherFields.GetBackOfficeDocumentFields())
                .Union(_umbracoTreeSearcherFields.GetBackOfficeMediaFields())
                .Union(_umbracoTreeSearcherFields.GetBackOfficeMembersFields())
                .ToArray();

            _fieldNames.TryAdd(indexName, (internalFields, includefieldNamePrefixes, includeContentTypes ?? Array.Empty<string>()));

            // Bind to the event to filter the fields
            index.TransformingIndexValues += TransformingIndexValues;
        }
        else
        {
            throw new InvalidOperationException(
                $"No index with name {indexName} found that is of type {typeof(BaseIndexProvider)}");
        }
    }

    private void TransformingIndexValues(object sender, IndexingItemEventArgs e)
    {
        if (_fieldNames.TryGetValue(e.Index.Name, out var fields))
        {
            // check if we should ignore this doc by content type
            if (fields.contentTypes.Length > 0 && !fields.contentTypes.Contains(e.ValueSet.ItemType))
            {
                e.Cancel = true;
            }
            else
            {
                // filter the fields
                e.ValueSet.Values.RemoveAll(x =>
                {
                    if (fields.internalFields.Contains(x.Key)) return false;
                    if (fields.fieldPrefixes.Any(f => x.Key.StartsWith(f))) return false;
                    return true;
                });
            }
        }
    }

    protected virtual void Dispose(bool disposing)
    {
        if (!disposedValue)
        {
            if (disposing)
            {
                // Unbind from the event for any bound indexes
                foreach (var key in _fieldNames.Keys)
                {
                    if (_examineManager.TryGetIndex(key, out var e)
                        && e is BaseIndexProvider index)
                    {
                        index.TransformingIndexValues -= TransformingIndexValues;
                    }
                }
            }
            disposedValue = true;
        }
    }

    public void Dispose() => Dispose(disposing: true);
}
That should give you the tools you need to dynamically filter your index based on fields and content types if you need to get your field counts down. This would also be handy even if you aren't using ExamineX and Azure Search, since keeping an index small and storing less data means fewer IO operations and less storage.

Examine and Azure Blob Storage

February 11, 2020 04:52

Quite some time ago - probably close to 2 years - I created an alpha version of an extension library for Examine to allow storing Lucene indexes in Blob Storage, called Examine.AzureDirectory. This idea isn't new at all; in fact there has been a library to do this for many years called AzureDirectory, but it previously had issues and it wasn't clear exactly what its limitations were. The Examine.AzureDirectory implementation was built using a lot of the original code of AzureDirectory but has a bunch of fixes (which I contributed back to the project) and different ways of working with the data. Also, since Examine 0.1.90 still works with Lucene 2.x, this library is compatible with the older Lucene version.

… And 2 years later, I’ve actually released a real version 🎉

Why is this needed?

There are a couple of reasons. Firstly, Azure web app storage runs on a network share, and Lucene absolutely does not like its files hosted on a network share; this will bring all sorts of strange performance issues among other things. The way AzureDirectory works is to store the 'master' index in Blob Storage and then sync the required Lucene files to the local 'fast drive'. In Azure web apps there are 2 drives: the 'slow drive' (the network share) and the 'fast drive', which is the local server's temp storage with limited space. By syncing the Lucene files to the local fast drive, Lucene is no longer operating over a network share. When writes occur, it writes back to the local fast drive and then pushes those changes back to the master index in Blob Storage. This isn't the only way to overcome this limitation of Lucene; in fact Examine has shipped a workaround for many years which uses something called SyncDirectory that does more or less the same thing, but instead of storing the master index in Blob Storage, the master index is just stored on the 'slow drive'. Someone has actually taken this code and made a separate standalone project with this logic called SyncDirectory, which is pretty cool!

Load balancing/Scaling

There are a couple of ways to work around the network share storage in Azure web apps (as above), but in my opinion the main reason why this is important is for load balancing and being able to scale out. Since Lucene doesn't work well over a network share, Lucene files must exist local to the process they run in. That means that when you are load balancing or scaling out, each server that is handling requests will have its own local Lucene index. So what happens when you scale out further and another new worker goes online? That really depends on the hosting application... for example in Umbraco, it means the new worker will create its own local indexes by rebuilding them from the source data (i.e. the database). This isn't an ideal scenario, especially in Umbraco v7 where requests won't be served until the index is built and ready. A better scenario is that the new worker comes online and then syncs an existing index from master storage that is shared between all workers... yes! Like Blob Storage.

Read/Write vs Read only

Lucene can't be written to concurrently by multiple processes. There are some workarounds here and there to try to achieve this by synchronizing processes with named mutex/semaphore locks, and even AzureDirectory tries to handle some of this by utilizing Blob Storage leases, but it's not a seamless experience. This is one of the reasons why Umbraco requires a 'master' web app for writing and a separate web app for scaling, which guarantees that only one process writes to the indexes. This is the setup that Examine.AzureDirectory supports too, and on the front-end/replica/slave web app that scales you will configure the provider to be readonly, which guarantees it will never try to write back to the (probably locked) Blob Storage.

With this in place, when a new front-end worker goes online it doesn't need to rebuild its own local indexes; it will just check that indexes exist (by making sure the master index is there) and then continue booting. At this stage there is actually almost no performance overhead. Nothing happens with the local indexes until the index is referenced by this worker, and when that happens Examine will just lazily sync the Lucene files it needs locally.

How do I get it?

First thing to point out is that this first release is only for Examine 0.1.90 which is for Umbraco v7. Support for Examine 1.x and Umbraco 8.x will come out very soon with some slightly different install instructions.

The release notes of this are here, the install docs are here, and the Nuget package for this can be found here.

PM> Install-Package Examine.AzureDirectory -Version 0.1.90

To activate it, you need to add these settings to your web.config:

<add key="examine:AzureStorageConnString" value="YOUR-STORAGE-CONNECTION-STRING" />
<add key="examine:AzureStorageContainer" value="YOUR-CONTAINER-NAME" />

Then for your master server/web app you’ll want to add a directoryFactory attribute to each of your indexers in ExamineSettings.config, for example:

<add name="InternalIndexer" type="UmbracoExamine.UmbracoContentIndexer, UmbracoExamine"
      directoryFactory="Examine.AzureDirectory.AzureDirectoryFactory, Examine.AzureDirectory"
      analyzer="Lucene.Net.Analysis.WhitespaceAnalyzer, Lucene.Net"/>

For your front-end/replica/slave server you'll want a different readonly value for the directoryFactory, like:

<add name="InternalIndexer" type="UmbracoExamine.UmbracoContentIndexer, UmbracoExamine"
      directoryFactory="Examine.AzureDirectory.ReadOnlyAzureDirectoryFactory, Examine.AzureDirectory"
      analyzer="Lucene.Net.Analysis.WhitespaceAnalyzer, Lucene.Net"/>

Does it work?

Great question :) With the testing I've done it works, and I've had this running on this site for all of last year without issue, but I haven't rigorously tested it at scale with high traffic sites, etc. I decided to release a real version of this because having it as an alpha/proof of concept means that nobody will test or use it. So now hopefully a few of you will give this a whirl and let everyone know how it goes. Any bugs can be submitted to the Examine repo.



Web Application projects with Umbraco Cloud

January 8, 2020 05:12
Web Application projects with Umbraco Cloud

This is a common topic for developers working with Umbraco Cloud, because Umbraco Cloud simply hosts an ASP.NET Framework “Website”. The setup is quite simple: a website is stored in a Git repository, and when it’s updated and pushed to Umbraco Cloud, all of the changes are live. You can think of this Git repository as a deployment repository (very similar to how Azure Web Apps can work with Git deployments). When you create a new Umbraco Cloud site, the Git repository will be pre-populated with a runnable website. You can clone the website, run it locally with IIS Express and it all just works. But this is not a compile-able website, and it’s not part of a Visual Studio project or solution. If you want that, there are numerous workarounds that people have tried and use, but in my personal opinion they aren’t the ideal working setup that I would like.

Ideal solution

In my opinion the ideal solution for building web apps in .NET Framework is:

  • A visual studio solution
    • A compile-able Web Application project (.csproj)
    • Additional class library projects (as needed)
    • Unit/Integration test projects (as needed)
    • All dependencies are managed via Nuget
  • Git source control for my code, probably stored in GitHub
  • A build server, CI/CD, I like Azure Pipelines

I think this is a pretty standard setup for building websites, but trying to wrangle this setup to work with Umbraco Cloud isn’t as easy as you’d think. A wonderful Umbraco community member, Paul Sterling, has written about how to do this a couple of times, here and here, and there are certainly a few hoops you’d need to jump through. These posts were also written before the age of Azure YAML Pipelines, which luckily has made this process a whole lot easier.

Solution setup

NOTE: This is for Umbraco v8, there’s probably some other edge cases you’ll need to discover on your own for v7. 

Setting up a Visual Studio solution with a web application compatible with Umbraco Cloud is pretty straightforward and should be very familiar. It will be much easier to do this starting from scratch with a new Umbraco Cloud website, though it is more than possible to do this for an existing website (i.e. I did this for this website!); most of those details are just migrating custom code, assets, etc… to your new solution.

I would suggest starting with a new Umbraco Cloud site that has no modifications to it but does have a content item or two that renders a template.

  • Create a new VS solution/project for a web application running .NET 4.7.2
  • Add this Nuget.config to the root folder (beside your .sln file)
  • <?xml version="1.0" encoding="utf-8"?>
      <configuration>
        <packageSources>
          <add key="NuGet" value="https://api.nuget.org/v3/index.json" />
          <add key="UmbracoCloud" value="https://www.myget.org/F/uaas/api/v3/index.json" />
        </packageSources>
      </configuration>
  • Install the Nuget package for the same Umbraco version that you are currently running on your Umbraco Cloud website. For example, if you are running 8.4.0 then use Install-Package UmbracoCms -Version 8.4.0
  • Install Forms (generally the latest available): Install-Package UmbracoForms
  • Install Deploy (generally the latest available):
    • Install-Package UmbracoDeploy
    • Install-Package UmbracoDeploy.Forms
    • Install-Package UmbracoDeploy.Contrib
  • Then you’ll need to install some additional Nuget packages that are required to run your site on Umbraco Cloud. This is undocumented, but Umbraco Cloud adds a couple of extra required DLLs when it creates a website.
    • Install-Package Serilog.Sinks.MSSqlServer -Version 5.1.3-dev-00232
  • Copy these files from your Umbraco Cloud deployment repository to your web application project:
    • ~/data/*
    • ~/config/UmbracoDeploy.config
    • ~/config/UmbracoDeploy.Settings.config
  • You then need to do something weird. These settings need to be filled in because Umbraco Deploy basically circumvents the normal Umbraco installation procedure and if you don’t have these settings populated you will get YSODs and things won’t work.
    • Make sure that you have your Umbraco version specified in your web.config like: <add key="Umbraco.Core.ConfigurationStatus" value="YOURVERSIONGOESHERE" />
    • Make sure your connectionStrings in your web.config is this:
      • <connectionStrings>
            <remove name="umbracoDbDSN" />
            <add name="umbracoDbDSN"
                 connectionString="Data Source=|DataDirectory|\Umbraco.sdf"
                 providerName="System.Data.SqlServerCe.4.0" />
        </connectionStrings>

But I don’t want to use SqlCE! Why do I need that connection string? In actual fact, Umbraco Deploy will configure your web application to use SQL Express LocalDB if it’s available on your machine (which it most likely is). This is why, when running Umbraco Cloud sites locally, you’ll see .mdf and .ldf files in your App_Data folder instead of SqlCE files. LocalDB operates just like SQL Server except that the files are located locally; it really is SQL Server under the hood. You can even use SQL Server Management Studio to look at these databases by connecting to the (localdb)\umbraco server locally with Windows Authentication. It is possible to have your local site run off a normal SQL Server database with a real connection string, but I think you’d have to install Umbraco first, before you install the UmbracoDeploy Nuget package. Ideally UmbracoDeploy would allow the normal install screen to run if no Umbraco version was detected in the web.config, but that’s a whole other story.

That should be it! In theory your web application is now configured to be able to publish a website output that is the same as what is on Umbraco Cloud.


At this stage you should be able to run your solution, it will show the typical Umbraco Deploy screen to restore from Cloud


In theory you should be able to restore your website and everything should ‘just work’

Working with code

Working with your code is now just the way you’re probably used to working. Now that you’ve got a proper Visual Studio solution with a Web Application Project, you can do all of the development that you are used to. You can add class libraries, unit test projects, etc… Then you commit all of these changes to your own source control like GitHub. This type of repository is not a deployment repository, this is a source code repository.

How do I get this to Umbraco Cloud?

So far there’s nothing too special going on but now we need to figure out how to get our Web Application Project to be deployed to Umbraco Cloud.

There’s a couple ways to do this, the first way is surprisingly simple:

  • Right click your web application project in VS
  • Click Publish
  • Choose Folder as a publish target
  • Select your cloned Umbraco Cloud project location
  • Click advanced and choose “Exclude files from App_Data folder”
  • Click Create Profile
  • Click Publish – you’ve just published a web application project to a website
  • Push these changes to Umbraco Cloud

The publish profile result created should match this one: https://github.com/umbraco/vsts-uaas-deploy-task/blob/master/PublishProfiles/ToFileSys.pubxml

This of course requires some manual work but if you’re ok with that then job done!

You should do this anyway before continuing, since it will give you an idea of how in sync your web application project’s output and the Umbraco Cloud website are. You can then see what Git changes have been made and troubleshoot anything that seems odd.

Azure Pipelines

I’m all for automation so instead I want Azure Pipelines to do my work. This is what I want to happen:

  • Whenever I commit to my source code repo Azure Pipelines will:
    • Build my solution
    • Run any unit tests that I have
    • Publish my web application project to a website
    • Zip the website
    • Publish my zipped website artifact
  • When I add a “release-*” tag to a commit I want Azure Pipelines to do all of the above and also:
    • Clone my Umbraco Cloud repository
    • Unzip my website artifact onto this cloned destination
    • Commit these changes to the Umbraco Cloud deployment repository
    • Push this commit to Umbraco Cloud
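The overall shape of such a multi-stage pipeline can be sketched in YAML. This is only an illustrative sketch, not the actual azure-pipelines.yml file referenced in the steps below; the task choices, stage/job names and the script path here are assumptions:

```yaml
# Illustrative sketch only; the real azure-pipelines.yml linked in this post
# is the source of truth. Names and paths below are assumptions.
trigger:
  branches:
    include: [ master ]
  tags:
    include: [ 'release-*' ]

stages:
- stage: Build
  jobs:
  - job: BuildAndPublish
    pool:
      vmImage: 'windows-latest'
    steps:
    - task: MSBuild@1            # build the solution
      inputs:
        solution: '**/*.sln'
    # ... run tests, publish the web app to a folder, zip it ...
    - publish: '$(Build.ArtifactStagingDirectory)/website.zip'
      artifact: website

- stage: Deploy
  dependsOn: Build
  # Only deploy when the build was triggered by a release-* tag
  condition: startsWith(variables['Build.SourceBranch'], 'refs/tags/release-')
  jobs:
  - job: PushToUmbracoCloud
    steps:
    - download: current
      artifact: website
    # Unzip over a clone of the Umbraco Cloud repo, commit and push
    - powershell: ./build/PushToCloud.ps1   # illustrative script name
```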

Luckily this work is all done for you :) and with YAML pipelines it’s fairly straight forward. Here’s how:

  • Go copy this PowerShell file and commit it to the /build folder of your source code repository (our friends Paul Sterling and Morten Christensen had previously done this work, thanks guys!). This PS script essentially does all of the Git work mentioned above: the cloning, committing and pushing of files. It’s a bit more verbose than just running these git commands directly in your YAML file, but it’s also a lot less error prone, handles character encoding properly and pipes the output of the git command to the log.
  • Go copy this azure-pipelines.yml file and commit it to the root of your Git source code repository. This file contains a bunch of helpful notes so you know what it’s doing. (This pipelines file does not run any tests, etc… that exercise will be left up to you.)
  • In Azure Pipelines, create a new pipeline, choose your Git source control option, choose “Existing Azure Pipelines YAML file”, select azure-pipelines.yml file in the drop down, click continue.
  • Click Variables and add these 3:
    • gitAddress = The full Git https endpoint for your Dev environment on Umbraco Cloud
    • gitUsername = Your Umbraco Cloud email address
    • gitPassword = Your Umbraco Cloud password - ensure this value is set to Secret
  • Click Run!
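The Git work that the PowerShell script automates boils down to a clone/commit/push cycle. Here is a runnable sketch of that cycle in plain git, with a local bare repository standing in for the Umbraco Cloud remote (all paths, names and messages are illustrative, not what the real script uses):

```shell
# Sketch of the clone -> copy -> commit -> push cycle the deploy script performs.
# A local bare repo stands in for the Umbraco Cloud remote; paths are illustrative.
set -e
rm -rf /tmp/cloud-remote.git /tmp/deploy-work
git init --bare -q /tmp/cloud-remote.git          # pretend Umbraco Cloud remote
git clone -q /tmp/cloud-remote.git /tmp/deploy-work
cd /tmp/deploy-work
# In the real pipeline, the zipped website artifact is unzipped over this tree
echo "<h1>site</h1>" > index.html
git add -A
git -c user.name=CI -c user.email=ci@example.com commit -q -m "Deployment from pipeline"
git push -q origin HEAD                           # the push triggers the Cloud deploy
```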

And that’s it! … Really? … In theory yes :)

Your pipeline should run and build your solution. The latest commit you made is probably the azure-pipelines.yml file, and since it didn’t contain a release-* tag the pipeline won’t attempt to push any changes to Umbraco Cloud. So the first thing to do is make sure that your pipeline is building your solution and doing what it’s supposed to. Once that’s all good, it’s time to test an Umbraco Cloud deployment.

Deploying to Umbraco Cloud

A quick and easy test would be to change the output of a template so you can visibly see the change pushed.

  • Go ahead and make a change to your home page template
  • Run your site locally with your web application project and make sure the change is visible there
  • Commit this change to your source control Git repository
  • Create and push a release tag on this commit. For example, the tag name could be: “release-v1.0.0-beta01” … whatever suits your needs, but based on the YAML script it needs to start with “release-“
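Tagging itself is only a couple of git commands. The sketch below builds a throwaway repo so it is self-contained; in your real repo only the tag commands at the end matter:

```shell
# Self-contained sketch: create a throwaway repo, then tag a commit the way
# the pipeline expects. In your own repo, only the last two commands matter.
set -e
rm -rf /tmp/tag-demo && mkdir -p /tmp/tag-demo && cd /tmp/tag-demo
git init -q
echo "home page change" > home.cshtml
git add -A
git -c user.name=Me -c user.email=me@example.com commit -q -m "Update home template"
# Tag the commit; the pipeline's deploy stage only fires for tags starting with "release-"
git tag release-v1.0.0-beta01
git tag -l 'release-*'   # prints: release-v1.0.0-beta01
```

In your own repository you would then run `git push origin release-v1.0.0-beta01` so the tag reaches GitHub and the pipeline sees it.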

Now you can sit back and watch Azure Pipelines build your solution and push it to Umbraco Cloud. Since this is a multi-stage pipeline, the result will look like:


And you should see a log output like this on the Deploy stage


Whoohoo! Automated deployments to Umbraco Cloud using Web Application Projects.

What about auto-upgrades?

All we’ve talked about so far is a one-way push to Umbraco Cloud but one thing we know and love about Umbraco Cloud is the automated upgrade process. So how do we deal with that? I actually have this working on my site but want to make the process even simpler so you’re going to have to be patient and wait for another blog post :)

The way it works is also by using Azure Pipelines. Using a separate pipeline with a custom Git repo pointed at your Umbraco Cloud repository, this pipeline can be configured to poll for changes every day (or more often if you like). It then checks if changes have been made to the packages.config file to see if upgrades have been made to either the CMS, Forms or Deploy (in another solution I’m actually polling Nuget directly for this information). If an upgrade has been made, it clones down your source code repository and runs a Nuget update command to upgrade your solution. Then it creates a new branch, commits these changes, pushes it back to GitHub and creates a Pull Request (currently this only works for GitHub).

This same solution can be used for Deploy files in case administrators are changing schema items directly on Umbraco Cloud so the /deploy/* files can be automatically kept in sync with your source code repository.

This idea is entirely inspired by Morten Christensen, thanks Morten! Hopefully I’ll find some time to finalize this.

Stay tuned!

How I upgraded my site to Umbraco 8 on Umbraco Cloud

November 12, 2019 06:33
How I upgraded my site to Umbraco 8 on Umbraco Cloud

I have a Development site and a Live site on Umbraco Cloud. You might have some additional environments, but in theory these steps should be more or less the same. This is just a guide that hopefully helps you; it’s by no means a fail-safe step-by-step guide, and you’ll probably run into some other issues and edge cases that I didn’t. You will also need access to Kudu, since you will most likely need to delete some leftover files manually. You will probably also need to toggle the debug and custom errors settings in your web.config to debug any YSODs you get along the way, you will need to manually change the Umbraco version number in the web.config during the upgrade process, and you might need access to the Live Git repository endpoint in case you need to roll back.

… Good luck!

Make sure you can upgrade

Make sure you have no Obsolete data types

You cannot upgrade to v8 if you have any data types referencing old obsolete property editors. You will first need to migrate any properties using these to the non-obsolete version of these property editors. You should do this on your Dev (or lowest environment): Go to each data type and check if the property editor listed there is prefixed with the term “Obsolete”. If it is you will need to change this to a non-obsolete property editor. In some cases this might be tricky, for others it might be an easy switch. For example, I’m pretty sure you can switch from the Obsolete Content Picker to the normal Content Picker. Luckily for me I didn’t have any data relying on these old editors so I could just delete these data types.

Make sure you aren’t using incompatible packages

If you are using packages, make sure that any packages you are using also have a working v8 version of that package.

Make sure you aren’t using legacy technology

If you are using XSLT, master pages, user controls or other weird webforms things, you are going to need to migrate all of that to MVC before you can continue since Umbraco 8 doesn’t support any of these things.

Ensure all sites are in sync

It’s very important that all Cloud environments are in sync with all of your latest code and that there’s no outstanding code that needs to be shipped between them. You also need to make sure that all content and media are the same across your environments, since each one will eventually be upgraded independently and you want to use your Dev site for testing along with pulling content/media to your local machine.

Clone locally, sync & backup

Once all of your cloud sites are in sync, you’ll need to clone locally – I would advise starting with a fresh clone. Then restore all of your content and media and ensure your site runs locally on your computer. Once your site is basically running and operating like your live site, you’ll want to take a backup. This is just for peace of mind; when upgrading your actual live site you aren’t going to lose any data. To do this, close VS Code (or whatever tool you use to run your local site), navigate to ~/App_Data/ and you’ll see Umbraco.mdf and Umbraco_log.mdf files. Make copies of those and put them someplace safe. Also make a zip of your ~/media folder.

Now to make things easy in case you need to start again, make a copy of this entire site folder which you can use for the work in progress/upgrade/migration. If you ever need to start again, you can just delete this copied wip folder and re-copy the original.
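Those backup steps can be scripted. Here is a hedged sketch using standard tools; the folder layout is faked so the sketch is self-contained, and in practice SITE would be your cloned site root and BACKUP wherever you keep backups (both names are assumptions):

```shell
set -e
# Stand-in folder layout so this sketch is self-contained; in practice SITE is
# your cloned Umbraco Cloud site and BACKUP is wherever you keep backups.
SITE=/tmp/mysite
BACKUP=/tmp/mysite-backup
mkdir -p "$SITE/App_Data" "$SITE/media" "$BACKUP"
echo db  > "$SITE/App_Data/Umbraco.mdf"
echo log > "$SITE/App_Data/Umbraco_log.mdf"
echo img > "$SITE/media/logo.png"

# Copy the database files someplace safe
cp "$SITE/App_Data/Umbraco.mdf" "$SITE/App_Data/Umbraco_log.mdf" "$BACKUP/"
# Archive the media folder
tar -czf "$BACKUP/media.tar.gz" -C "$SITE" media
# Copy the entire site folder to use as the work-in-progress upgrade copy
rm -rf /tmp/mysite-wip
cp -r "$SITE" /tmp/mysite-wip
```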

Create/clone a new v8 Cloud site

This can be a trial site, it’s just a site purely to be able to clone so we can copy some files over from it. Once you’ve cloned locally feel free to delete that project.

Update local site files

Many people will be using a visual studio web application solution with Nuget, etc… In fact I am too but for this migration it turned out to be simpler in my case to just upgrade/migrate the cloned website.

Next, I deleted all of the old files:

  • The entire /bin directory – we’ll put this back together with only the required parts, we can’t have any old leftover DLLs hanging around
  • /Config
  • /App_Plugins/UmbracoForms, /App_Plugins/Deploy, /App_Plugins/DiploTraceLogViewer
  • /Umbraco & /Umbraco_Client
  • Old tech folders - /Xslt, /Masterpages, /UserControls, /App_Browsers
  • /web.config

If you use App_Code, then for now rename this to something else. You will probably have to refactor some of the code in there to work and for now the goal is to just get the site up and running and the database upgraded. So rename to _App_Code or whatever you like so long as it’s different.

Copy over the files from the cloned v8 sites:

  • /bin
  • /Config
  • /App_Plugins/UmbracoForms, /App_Plugins/Deploy
  • /Umbraco
  • /Views/Partials/Grid, /Views/MacroPartials, /Views/Partials/Forms – overwrite existing files, these are updated Forms and Umbraco files
  • /web.config

Merge any custom config

Create a git commit before continuing.

Now there’s some manual updates involved. You may have had some custom configuration in some of the /Config/* files and in your /web.config file. So it’s time to have a look in your git history. That last commit you just made will show all of the changes overwritten in any /config files and your web.config file so now you can copy any changes you want to maintain back to these files. Things like custom appSettings, etc…
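A quick way to see exactly what the copy-over changed is a plain git diff. The sketch below uses a throwaway repo and made-up file contents purely for illustration; in your real site you would just run the final diff command:

```shell
set -e
rm -rf /tmp/cfg-demo && mkdir -p /tmp/cfg-demo && cd /tmp/cfg-demo
git init -q
git config user.name "Me" && git config user.email "me@example.com"
# v7 web.config with a custom appSetting you want to keep (illustrative content)
printf '<appSettings><add key="myCustomKey" value="1" /></appSettings>\n' > web.config
git add -A && git commit -q -m "v7 site"
# Overwrite with the copied-in v8 web.config (custom setting lost)
printf '<appSettings></appSettings>\n' > web.config
git add -A && git commit -q -m "Copy v8 files over"
# Show the overwritten customisations so you can copy them back
git diff HEAD~1 HEAD -- web.config
```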

One very important setting is the Umbraco.Core.ConfigurationStatus appSetting, you must change this to your previous v7 version so the upgrader knows it needs to upgrade and from where.

Upgrade the database

Create a git commit before continuing.

At this stage, you have all of the Umbraco files, config files and binary files needed to run Umbraco v8, based on the version that was provided to you from your cloned Cloud site. So go ahead and try to run the website. With any luck it will run and you will be prompted to login and upgrade. If not and you get some YSODs or something, then the only advice I can offer at this stage is to debug the error.

Now run the upgrader – this might also require a bit of luck and depends on what data is in your site, whether you have some obscure property editors, or whether your site is super old and has some strange database configurations. My site is super old, from v4, and over the many years I’ve managed to wrangle it through the version upgrades and it also worked on v8 (after a few v8 patch releases were out to deal with old schema issues). If this doesn’t work, you may be prompted with a detailed error specifically telling you why (i.e. you have obsolete property editors installed), or it might just fail due to old schema problems. For the latter problem, perhaps some of these tickets might help you resolve it.

When you get this to work, it’s a good time to make a backup of your local DB. Close down the running website and tool you used to launch it, then make a backup of the Umbraco.mdf and Umbraco_log.mdf files.

Fix your own code

You will probably have noticed that the site now runs, you can probably access the back office (maybe?!) but your site probably has YSODs. This is most likely because:

  • Your views and c# code needs to be updated to work with the v8 APIs (remember to rename your _App_Code folder back to App_Code if you use it!)
  • Your packages need to be re-installed or upgraded or migrated into your new website with compatible v8 versions

This part of the migration process is going to be different for everyone. Basic sites will generally be pretty simple but if you are using lots of packages or custom code or a visual studio web application and/or additional class libraries, then there’s some manual work involved on your part. My recommendation is that each time you fix part of your sites you create a Git commit. You can always revert to previous commits if you want and you also have a backup of your v8 database if you ever need to revert that too. The API changes from v7 –> v8 aren’t too huge, you’ll have your local site up and running in no time!

Rebuild your deploy files

Create a git commit before continuing.

Now that your site is working locally in v8, it’s time to prep everything to be sent to Umbraco Cloud.

Since you are now running a newer version of Umbraco Deploy, you’ll want to re-generate all of the deploy files. You can do this by starting up your local site again, then opening a command prompt and navigating to the /data folder of your website. Then type in:

echo > deploy-export

All of your schema items will be re-exported to new deploy files.
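The `echo > deploy-export` command isn’t magic; as I understand it, it just drops a marker file into the /data folder, which Umbraco Deploy watches for and reacts to. A trivial sketch with a stand-in folder:

```shell
# The command only creates a marker file in /data; Umbraco Deploy notices the
# file and performs the export. /tmp/site/data stands in for your site's folder.
mkdir -p /tmp/site/data
cd /tmp/site/data
echo > deploy-export
ls deploy-export   # prints: deploy-export
```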

Create a git commit before continuing.

Push to Dev

In theory if your site is working locally then there’s no reason why it won’t work on your Dev site once you push it up to Cloud. Don’t worry though, if all else fails, you can always revert back to a working commit for your site.

So… go ahead and push!

Once that is done, the status bar on the Cloud portal will probably be stuck at the end stage saying it’s trying to process Deploy files… but it will just hang there because it’s not able to. This is because your site is now in Upgrade mode since we’ve manually upgraded.

At this stage, you are going to need to login to Kudu. Go to the cmd prompt, navigate to /site/wwwroot and edit the web.config file. The Umbraco.Core.ConfigurationStatus value is going to be the v8 version already, because that is what you committed to Git and pushed to Cloud, but we need Umbraco to detect that an upgrade is required, so change this value to the v7 version you originally had (this is important!). While you are here, ensure that debug = false and customErrors = Off so you can see any errors that might occur.

Now visit the root of the site, you should be redirected to the login screen and then to the upgrade screen. With some more luck, this will ‘just work’!

Because the Deploy files couldn’t be processed when you first pushed (the site was in upgrade mode), you need to force the deploy files to be processed again, so go back to the Kudu cmd prompt, navigate to /site/wwwroot/data and type in:

echo > deploy


Make sure your dev site is working as you would expect it to. There’s a chance you might have missed some code that needs changing in your views or other code. If that is the case, make sure you fix it first on your local site, test there and then push back up to Dev and then test again there. Don’t push to a higher environment until you are ready.

Push to Live

You might have other environments between Dev and Live so you need to follow the same steps as pushing to Dev (i.e. once you push you will need to go to Kudu, change the web.config version, debug and custom error mode). Pushing to Live is the same approach but of course your live site is going to incur some downtime. If you’ve practiced with a Staging site, you’ll know how much downtime to expect, in theory it could be as low as a couple minutes but of course if something goes wrong it could be for longer.

… And Hooray! You are live on v8 :)

Before you go, there’s a few things you’ll want to do:

  • log back into kudu on your live site and in your web.config turn off debug and change custom errors back to RemoteOnly
  • be sure to run “echo > deploy”
  • in kudu delete the temp file folder: App_Data/Temp
  • Rebuild your indexes via the back office dashboard
  • Rebuild your published caches via the back office dashboard

What if something goes wrong?

I mentioned above that you can revert to a working copy, but how? Well, this happened to me since I don’t follow my own instructions: I forgot to get rid of the data types with obsolete property editors on Live, which means my environments were not totally synced before I started since I had fixed that on Dev. When I pushed to Live and then ran the upgrader, it told me that I had data types with old obsolete property editors … well, in that scenario there was nothing I could do about it since I couldn’t login to the back office and change anything. So I had to revert the Live site to the commit before the merge from Dev –> Live. Luckily all database changes made by the upgrader are done in a transaction, so your live data isn’t going to be changed unless the upgrader successfully completes.

To roll back, I logged into Kudu and on the home page there is a link to “Source control info” where you can get the Git endpoint for your Live environment. Then I cloned that down locally and reverted the merge, committed and pushed back up to the live site. Now the live site was just back to its previous v7 state and I could make the necessary changes. Once that was done, I reverted my revert commit locally and pushed back to Live, and went through the upgrade process again.
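That rollback-then-redo sequence maps onto a handful of git commands. Here is a self-contained sketch with a local bare repo standing in for the Live environment’s Git endpoint; file names, messages, and the use of a plain commit instead of the real Dev –> Live merge commit are all illustrative:

```shell
set -e
rm -rf /tmp/live.git /tmp/live-work
git init --bare -q /tmp/live.git                  # stand-in for the Live endpoint
git clone -q /tmp/live.git /tmp/live-work
cd /tmp/live-work
git config user.name "Me" && git config user.email "me@example.com"
echo "v7" > version.txt
git add -A && git commit -q -m "v7 state"
echo "v8" > version.txt
git add -A && git commit -q -m "Dev -> Live (v8 upgrade)"   # really a merge commit
git push -q origin HEAD
# The upgrade failed on Live: revert the upgrade commit and push, restoring v7
git revert --no-edit HEAD
git push -q origin HEAD
# After fixing things on the v7 site, revert the revert and push to upgrade again
git revert --no-edit HEAD
git push -q origin HEAD
```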

Next steps?

Now your site is live on v8, but there’s probably more to do for your solution. If you are like me, you will have a Visual Studio solution with a web application to power your website. I then run this locally and publish to my local file system – which just happens to be the location of my cloned Git repo for my Umbraco Cloud Dev site – then I push those changes to Cloud. So now I needed to get my VS web application to produce the same binary output as Cloud. That took a little bit to figure out, since Umbraco Cloud includes some extra DLLs/packages that are not included in the vanilla UmbracoCms package, namely this one: “Serilog.Sinks.MSSqlServer - Version 5.1.3-dev-00232”, so you’ll probably need to include that as a package reference in your site too.

That’s about as far as I’ve got myself, best of luck!

Articulate 4.0.0 released for Umbraco version 8

May 3, 2019 02:37
Articulate 4.0.0 released for Umbraco version 8

It’s finally out in the wild! Articulate 4.0.0 is a pretty huge release so here’s the rundown…


As a developer, my recommendation is to install packages with Nuget

PM > Install-Package Articulate -Version 4.0.0

If you install from Nuget you will not automatically get the Articulate data structures installed, because Nuget can’t talk to your live website/database. Once you’ve installed the package and run your site, head over to the “Settings” section and you’ll see an “Articulate” dashboard there. Click on the “Articulate Data Installer” tile and you’ll get all the data structures and a demo blog installed.

Alternatively you can install it directly from the Umbraco back office by searching for “Articulate” in the Packages section, or you can download the zip from https://our.umbraco.com/packages/starter-kits/articulate/ and install that in the Umbraco back office. If you install this way all of the data structures will be automatically installed.


I have no official documentation or way of doing this right now 😉. I’ve written up some instructions on the GitHub release here, but essentially it’s going to require you to do some investigation and manual updates yourselves. There are very few schema changes and only a small amount of model changes, so it shouldn’t be too painful. Good luck!

(note: I have yet to give it a try myself)

Support for Umbraco 8

I think it will come as no surprise that Articulate 4.0.0 is not compatible with any Umbraco v7 version. Articulate 4.0.0 requires a minimum of Umbraco 8.0.2. Moving forward I will only release more Articulate 3.x versions to support v7 based on community pull requests; my future efforts will be solely focused on 4.x and above for Umbraco 8+.

Theme, Features + Bug fixes

There are several nice bug fixes in this release, including a few PRs sent in by the community – THANK YOU! 🤗

As for features, this is really all about updating the themes. Previously Articulate shipped with 6 themes and they all had a vastly different range of features, which I never really liked, so I spent some time enhancing the ones I wanted to keep and made them look a bit prettier too. I’ve removed my own “Shazwazza” theme since it was way outdated compared to my own site here, plus I don’t really want other people to have the exact same site as me ;) But since that was the most feature-rich theme, I had to upgrade the other ones. I also removed the old ugly Edictum theme… pretty sure nobody used that one anyway.

Here’s the theme breakdown (it’s documented too)


I’ve also updated the default installation data to contain more than one blog post and an author profile so folks can see a better representation of the blog features on install. And I updated the default images and styling so it has a theme (which is Coffee ☕) and is less quirky (no more bloody rabbit or horse face photos 😛)

Here’s the breakdown of what they look like now…


This is the default theme installed, it is a very clean & simple theme. Originally created by Seth Lilly



This is based on Google's Material Design Lite, specifically their blog template.



Original theme for Ghost can be found here: https://github.com/Bartinger/phantom/. A nice simple responsive theme.



The original author's site can be found here: http://www.thyu.org/www/ but unfortunately their demo site for the Ghost theme is down. The theme's repository is here https://github.com/thyu/minighost.



Hope you enjoy the updates!

Need to remove an auto-routed controller in Umbraco?

April 11, 2019 05:06
Need to remove an auto-routed controller in Umbraco?

Umbraco will auto-route some controllers automatically. These controllers are any MVC SurfaceControllers or WebApi UmbracoApiController types discovered during startup. There might be some cases where you just don’t want these controllers to be routed at all, maybe a package installs a controller that you’d rather not have routable or maybe you want to control if your own plugin controllers are auto-routed based on some configuration.

The good news is that this is quite easy by just removing these routes during startup. There are various ways you could do this, but below I’ve shown one way to interrogate the routes that have been created and remove the ones you don’t want:

Version 8

//This is required to ensure this composer runs after
//Umbraco's WebFinalComposer which is the component
//that creates all of the routes during startup
public class MyComposer : ComponentComposer<MyComponent> { }

//The component that runs after WebFinalComponent
//during startup to modify the routes
public class MyComponent : IComponent
{
    public void Initialize()
    {
        //list the routes you want removed, in this example
        //this will remove the built in Umbraco UmbRegisterController and the
        //TagsController (URLs follow Umbraco's default auto-route patterns)
        var removeRoutes = new[]
        {
            "umbraco/Surface/UmbRegister",
            "umbraco/backoffice/UmbracoApi/Tags"
        };

        foreach (var route in RouteTable.Routes.OfType<Route>().ToList())
        {
            if (removeRoutes.Any(r => route.Url.InvariantContains(r)))
            {
                RouteTable.Routes.Remove(route);
            }
        }
    }

    public void Terminate() { }
}

Version 7

public class MyStartupHandler : ApplicationEventHandler
{
    protected override void ApplicationStarted(
        UmbracoApplicationBase umbracoApplication,
        ApplicationContext applicationContext)
    {
        //list the routes you want removed, in this example
        //this will remove the built in Umbraco UmbRegisterController and the
        //TagsController (URLs follow Umbraco's default auto-route patterns)
        var removeRoutes = new[]
        {
            "umbraco/Surface/UmbRegister",
            "umbraco/backoffice/UmbracoApi/Tags"
        };

        foreach (var route in RouteTable.Routes.OfType<Route>().ToList())
        {
            if (removeRoutes.Any(r => route.Url.InvariantContains(r)))
            {
                RouteTable.Routes.Remove(route);
            }
        }
    }
}

Umbraco Down Under Festival 2019

March 4, 2019 05:00
Umbraco Down Under Festival 2019

I had the pleasure of attending and speaking at this year’s Umbraco Down Under Festival which was fantastic! Thanks to everyone at KØBEN digital for putting on such a nice event as well to all of the sponsors Zero Seven, Tea Commerce and Luminary in helping make it all happen. And what great timing to have an Umbraco festival just after Umbraco v8 is launched! Big thanks to Niels Hartvig for coming all the way from Denmark, it certainly means a lot to us Australians (yes, I am one too even with this Canadian accent!).


We had quite a few people at the Hackathon this year (18!) and we were able to close 3 issues and merge 6 Pull Requests along with finding and submitting 4 other issues, all for Umbraco v8, great work! Looking forward to seeing the Australian community submit even more PRs for v8 and hope to see you all at the Australian Umbraco meetups :)


Slide Deck

My talk this year was on Umbraco Packages in v8, though much of it was really about transitioning to v8 in general.

Here is the rendered PDF version of my slides; of course it doesn’t come with all of the nice transitions, but it’s better than nothing. My slides were done with the brilliant GitPitch service, which I absolutely love. Not only does it make presenting code so much nicer and easier, it just makes sense to me as a developer since I can write my slides in Markdown and style them with CSS. Plus, having your slide deck in Git makes creating new slides out of old slides quite nice since all your history is there!

I tried to break down the talk into 3 sections: Migrating, Building and Packaging.


“Migrating” was a bit of a walkthrough of some fundamental things that have changed between v7 and v8 that not only package developers, but anyone making the transition from v7 to v8, will need to be aware of.


“Building” showcased some new features for packages and v8, though I didn’t talk about one of the major v8 features, Content Apps, because Robert Foster was already doing a talk all about them in the morning. Instead I focused on how Dashboards work in v8 and a couple of currently undisclosed v8 features: Full Screen Sections (sans c#) and Package Options.


“Packaging” may have been a bit rushed, but I thought I was going to go overtime :P I talked about the new packager UI in v8 and showed that it is certainly possible to build packages for CI/CD integration, using PowerShell scripts to build an Umbraco package from the command line. I’d like to make this far more straightforward than it is now, which is something I’ll work on this year. You can find this PowerShell script here and a newer example in Articulate here.

Lastly I mentioned that there is a disconnect between the Umbraco package format and the Nuget package format with regards to installing Umbraco data, and that it would be nice to make both of these work seamlessly as one thing… and this is certainly possible. I created a PR a very long time ago to address this called Package Migrations (even though it says Morten made it… he actually just submitted the PR ;) ). I need to write a blog post about what this is and how it is intended to work so we can hopefully get some traction on it this year. The very brief overview is that package updates would work very similarly to Umbraco updates: if an update is detected that requires a migration, the installer will execute to guide the user through the process and provide UI feedback if anything fails along the way. This way package developers can properly leverage the Migrations system built into Umbraco, and Umbraco data will happily be installed on startup even if you install a package from Nuget.
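To make the command-line packaging idea concrete, here is a minimal sketch of the core of what such a build script does (this is not the actual PowerShell script linked above; the manifest contents, file names and the build_package helper are purely illustrative). An Umbraco package is essentially a zip archive containing a package.xml manifest alongside the files to install:

```python
import zipfile

# Hypothetical minimal package.xml manifest - illustrative only; a real
# manifest carries much more metadata (author, file list, actions, etc.).
MANIFEST = """<umbPackage>
  <info>
    <package>
      <name>MyPackage</name>
      <version>1.0.0</version>
    </package>
  </info>
</umbPackage>
"""

def build_package(out_path, payload):
    """Zip the manifest plus payload files into an Umbraco-style package."""
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as z:
        # The manifest always sits at the archive root
        z.writestr("package.xml", MANIFEST)
        # payload maps archive paths to file contents
        for archive_name, content in payload.items():
            z.writestr(archive_name, content)
    return out_path

build_package("MyPackage_1.0.0.zip", {"bin/MyPackage.dll": b""})
```

A real build step would of course pull the payload from your build output and generate the manifest from project metadata; the point is simply that the package format is scriptable from any CI/CD pipeline.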

The main barrier currently is that Umbraco Cloud would need to natively support this; otherwise people will get the installer screen on every environment when they push a package update upstream, which is not great. Umbraco Cloud should instead run the migration on the upstream environment in the background, just like it does with Umbraco updates.

Lastly I talked about how Articulate currently manages this situation between the Umbraco package format and the Nuget package format.

UDUF 2020

Looks like UDUF is moving to Sydney next year, so we’ll see you all there!