Shannon Deminick's blog all about web development

Auto upgrade your Nuget packages with Azure Pipelines or GitHub Actions

February 25, 2021 06:12

Before we start I just want to preface this with some 🔥 warnings 🔥

  • This works for me, it might not work for you
  • To get this working for you, you may need to tweak some of the code referenced
  • This is not under any support or warranty by anyone
  • Running the Nuget.exe update command outside of Visual Studio will overwrite your files, so there is a manual review process (more info below)
  • This is only for ASP.NET Framework projects using packages.config – yes, I know that is super old and I should get with the times, but this has been an ongoing behind-the-scenes project of mine for a long time. When I need this for PackageReference projects or ASP.NET Core/5, I’ll update it, but there’s nothing stopping you from tweaking this to work for you
  • This only works for a specified csproj, not an entire sln – it could probably work for that, but I haven’t tested it and a few tweaks would be needed
  • This does not yet work for GitHub Actions, but the concepts are all here and could probably be converted very easily. UPDATE: This works now!

Now that’s out of the way …

How do I do it?

With a lot of PowerShell :) This also uses a few methods from the PowerShellForGitHub project.

The process is:

  • Run a pipeline/action on a schedule (i.e. each day)
  • Check your source code for the installed version of a particular package
  • Check with Nuget (using your Nuget.config file) to see what the latest stable version is
  • If there’s a newer version:
    • Create a new branch
    • Run a Nuget update against your project
    • Build the project
    • Commit the changes
    • Push the changes
    • Create a PR for review
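As a sketch, the daily schedule in the first step can be declared in the Azure Pipelines YAML itself; the cron expression and branch name here are placeholders you'd adjust for your repo:

```yaml
schedules:
  - cron: '0 3 * * *'          # every day at 03:00 UTC
    displayName: Daily upgrade check
    branches:
      include:
        - master
    always: true               # run even when there are no new commits
```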

Azure Pipelines/GitHub Actions YAML

The only part of the YAML that needs editing is the variables; here's what they mean:

  • ProjectFile = The relative path to your csproj that you want to upgrade
  • PackageFile = The relative path to your packages.config file for this project
  • PackageName = The Nuget package name you want upgraded
  • GitBotUser = The name used for the Git commits
  • GitBotEmail = The email used for the Git commits

For Azure Pipelines, these are also required:

Then there are some variables to assist with testing:

  • DisableUpgradeStep = If true will just check if there’s an upgrade available and exit
  • DisableCommit = If true will run the upgrade and will exit after that (no commit, push or PR)
  • DisablePush = If true will run the upgrade + commit and will exit after that (no push or PR)
  • DisablePullRequest = If true will run the upgrade + commit + push and will exit after that (no PR)

Each step in the YAML build more or less either calls Git commands or PowerShell functions. The PowerShell functions are loaded as part of a PowerShell module which is committed to the repository. This module’s functions are auto-loaded by PowerShell because the first step configures the PowerShell environment variable PSModulePath to include the custom path. Once that is in place, all functions exposed by the module are auto-loaded.
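That first step can be sketched as an Azure Pipelines step like the following; the module folder path matches the repository layout shown below, but the exact step shape is an assumption, not the repo's verbatim YAML:

```yaml
steps:
  - powershell: |
      # Prepend the repo's module folder so its functions auto-load in later steps
      $modulePath = "$(Build.SourcesDirectory)\build\PowershellModules;$env:PSModulePath"
      Write-Host "##vso[task.setvariable variable=PSModulePath]$modulePath"
    displayName: 'Configure PSModulePath'
```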

In these examples you’ll see that I’m referencing Umbraco Cloud names and that’s because I’m using this on Umbraco Cloud for my own website and the examples are for the UmbracoCms package. But this should in theory work for all packages!

Show me the code

The code for all of this is here in a new GitHub repo and here’s how you use it:

You can copy the folder structure in the repository as-is. Here's an example of my site's repository folder structure to make this work (everything except the src folder is in the GitHub repo above):

  • [root]
    • auto-upgrader.devops.yml (If you are using Azure Pipelines)
    • .github
      • workflows
        • auto-upgrader.gh.yml (If you are using GitHub Actions)
    • build
      • PowershellModules
        • AutoUpgradeFunctions.psd1
        • AutoUpgradeFunctions.psm1
        • AutoUpgradeFunctions
    • src
      • Shazwazza.Web
        • Shazwazza.Web.csproj
        • packages.config

All of the steps have descriptive display names, so it should be reasonably self-documenting.

The end result is a PR; here’s one that was generated by this process:

Nuget overwrites

Nuget.exe works differently than Nuget within Visual Studio’s Package Manager Console. All of those special commands like Install-Package, Update-Package, etc. are PowerShell module commands built into Visual Studio, and they can work with the context of Visual Studio. This allows those commands to be a little smarter when running Nuget updates and also allows legacy Nuget features, like running PowerShell scripts on install/update, to work. This script just uses Nuget.exe, which is less smart, especially for these legacy .NET Framework projects. As such, it will just overwrite all files in most cases (it does seem to detect file changes, but isn't always accurate).

With that 🔥 warning 🔥 in mind, it is very important to make sure you review the changed files in the PR and revert or adjust any changes you need before applying the PR.

You’ll see a note in the PowerShell script about Nuget overwrites. There are other options that can be used, like "Ignore" and "IgnoreAll", but all my tests have shown that for some reason those settings end up deleting a whole bunch of files, so the default overwrite setting is used.
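For context, this behaviour corresponds to Nuget.exe's -FileConflictAction switch on the update command. A hypothetical invocation against the example project below might look like this (the paths are illustrative):

```shell
# Overwrite is the behaviour described above; Ignore exists but deleted files in my tests
nuget.exe update src\Shazwazza.Web\packages.config -Id UmbracoCms -FileConflictAction Overwrite -NonInteractive
```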

Next steps

Get out there and try it! I would love some feedback on this if/when you get a chance to test it.

PackageReference support with .NET Framework projects could also be done (but IMO this is low priority), along with being able to upgrade the entire SLN instead of just the csproj.

Then perhaps some attempts at getting a .NET Core/5 version of this running. In theory that will be easier since it will mostly just be dotnet commands.


ASMX SOAP Webservices with abstract models without using XmlInclude

July 22, 2016 10:12

I’m hoping this post might be useful to some folks out there that might be stuck using old ASMX/SOAP webservices in ASP.Net. If you’ve tried to return an abstract or superclass from an ASMX webservice without using XmlInclude or SoapInclude, you’ll get an error like:

System.InvalidOperationException: There was an error generating the XML document. ---> System.InvalidOperationException: The type MyAwesomeClass was not expected. Use the XmlInclude or SoapInclude attribute to specify types that are not known statically.
   at Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationWriter1.Write6_Item(String n, String ns, Item o, Boolean isNullable, Boolean needType)
   at Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationWriter1.Write10_Item(Object o)
   at Microsoft.Xml.Serialization.GeneratedAssembly.ItemSerializer.Serialize(Object objectToSerialize, XmlSerializationWriter writer)
   at System.Xml.Serialization.XmlSerializer.Serialize(XmlWriter xmlWriter, Object o, XmlSerializerNamespaces namespaces, String encodingStyle, String id)
   --- End of inner exception stack trace ---
   at System.Xml.Serialization.XmlSerializer.Serialize(XmlWriter xmlWriter, Object o, XmlSerializerNamespaces namespaces, String encodingStyle, String id)
   at System.Xml.Serialization.XmlSerializer.Serialize(TextWriter textWriter, Object o, XmlSerializerNamespaces namespaces)
   at System.Web.Services.Protocols.XmlReturnWriter.Write(HttpResponse response, Stream outputStream, Object returnValue)
   at System.Web.Services.Protocols.HttpServerProtocol.WriteReturns(Object[] returnValues, Stream outputStream)
   at System.Web.Services.Protocols.WebServiceHandler.WriteReturns(Object[] returnValues)
   at System.Web.Services.Protocols.WebServiceHandler.Invoke()

The normal way to work around this is to attribute your ASMX class with [XmlInclude(typeof(MyAwesomeClass))] and repeat this for every subclass that you might be returning. This essentially tells the SOAP handler what types it should expect to serialize so it can ‘warm up’ a list of XmlSerializers.
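For reference, that attribute-based workaround looks roughly like this (MyService and MyOtherAwesomeClass are hypothetical names used for illustration):

```csharp
using System.Web.Services;
using System.Xml.Serialization;

public class MyService : WebService
{
    // One XmlInclude per concrete type the serializer might encounter
    [WebMethod]
    [XmlInclude(typeof(MyAwesomeClass))]
    [XmlInclude(typeof(MyOtherAwesomeClass))]
    public MyAbstractClass GetStuff()
    {
        return new MyAwesomeClass();
    }
}
```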

The problem with this is that you need to know about all of these types up-front, but what if you have a plugin system where other developers can define their own types? There would be no way of knowing up-front what types to register so this approach will not work.

IXmlSerializable to the rescue

To work around this problem you can define a wrapper class for your abstract/superclass. Working with IXmlSerializable is pretty annoying and I highly recommend this great article if you are going to use it, since one mistake can cause all sorts of problems.

The following class should work for any object. Also note the usage of the static dictionary to store references to created XmlSerializer instances, since these are expensive to create per type.

using System;
using System.Collections.Concurrent;
using System.Xml;
using System.Xml.Schema;
using System.Xml.Serialization;

public class SerializedObjectWrapper : IXmlSerializable
{
    /// <summary>
    /// The underlying Object reference that is being returned
    /// </summary>
    public object Object { get; set; }

    /// <summary>
    /// This is used because creating XmlSerializers is expensive
    /// </summary>
    private static readonly ConcurrentDictionary<Type, XmlSerializer> TypeSerializers
        = new ConcurrentDictionary<Type, XmlSerializer>();

    public XmlSchema GetSchema()
    {
        return null;
    }

    public void ReadXml(XmlReader reader)
    {
        //Get the Item type attribute
        var itemType = reader.GetAttribute("ItemType");
        if (itemType == null) throw new InvalidOperationException("ItemType attribute cannot be null");
        //Ensure the type is found in the app domain
        var itemTypeType = Type.GetType(itemType);
        if (itemTypeType == null) throw new InvalidOperationException("Could not find the type " + itemType);

        var isEmptyElement = reader.IsEmptyElement;

        //Move past the wrapper element so the reader is positioned on the serialized content
        reader.ReadStartElement();

        if (isEmptyElement == false)
        {
            var serializer = TypeSerializers.GetOrAdd(itemTypeType, t => new XmlSerializer(t));
            Object = serializer.Deserialize(reader);
            reader.ReadEndElement();
        }
    }

    public void WriteXml(XmlWriter writer)
    {
        var itemType = Object.GetType();
        var serializer = TypeSerializers.GetOrAdd(itemType, t => new XmlSerializer(t));

        //Write the object type so we can use that to deserialize later
        writer.WriteAttributeString("ItemType",
            itemType.AssemblyQualifiedName ?? Object.GetType().ToString());

        serializer.Serialize(writer, Object);
    }
}

Here’s an example of the usage of the SerializedObjectWrapper class along with the example that would cause the above mentioned exception so you can see the difference:

public abstract class MyAbstractClass
{
}

public class MyAwesomeClass : MyAbstractClass
{
}

//This version will throw the exception shown above:
public MyAbstractClass GetStuff()
{
    return new MyAwesomeClass();
}

//This version works:
public SerializedObjectWrapper GetStuff()
{
    return new SerializedObjectWrapper
    {
        Object = new MyAwesomeClass()
    };
}

I know most people aren’t using ASMX web services anymore, but in case you’re stuck on an old project or have inherited one, this might be of use :)

Configuring ASP.Net Identity OAuth login providers for multi-tenancy

March 26, 2015 01:48

Say for example you have a CMS :) You want to give full control to the developer to manage how their front-end members will authenticate, which could of course include ASP.Net Identity OAuth login providers. At the same time you want to easily allow your CMS to be configured so that ASP.Net Identity OAuth providers can be used for logging into the back office. In this scenario, the same OAuth provider might be used for both front-end and back-office authentication, but authenticated under 2 different OAuth accounts. Another example might be if you have multi-tenancy set up for your front-end site and perhaps you want to use the same OAuth login provider but have members authenticate with different OAuth accounts for different domain names.

The defaults

As an example, let’s assume that front-end members are configured to authenticate with the ASP.Net Identity Google OAuth2 provider. This is easily done by just following one of the many tutorials out there. Your startup code might look like:

app.UseCookieAuthentication(new CookieAuthenticationOptions { .... });

app.UseGoogleAuthentication(
    clientId: "123456789...",
    clientSecret: "987654321....");

Great, but I need 2 (or more) Google OAuth2 providers, so what now? I can’t just add 2 declarations of:

app.UseGoogleAuthentication(
    clientId: "123456789...",
    clientSecret: "987654321....");

app.UseGoogleAuthentication(
    clientId: "abcdef...",
    clientSecret: "zyxwv....");

You’ll quickly realize that doesn’t work and that only one provider instance will actually be used. This is because of the default underlying settings that get used to instantiate the Google provider. Let’s have a look at what the default options are in this case. The above code is equivalent to this:

app.UseGoogleAuthentication(new GoogleOAuth2AuthenticationOptions
{
    AuthenticationType = "Google",
    ClientId = "123456789...",
    ClientSecret = "987654321....",
    Caption = "Google",
    CallbackPath = new PathString("/signin-google"),
    AuthenticationMode = AuthenticationMode.Passive,
    SignInAsAuthenticationType = app.GetDefaultSignInAsAuthenticationType(),
    BackchannelTimeout = TimeSpan.FromSeconds(60),
    BackchannelHttpHandler = new System.Net.Http.WebRequestHandler(),
    BackchannelCertificateValidator = null,
    Provider = new GoogleOAuth2AuthenticationProvider()
});
The AuthenticationType

One very important aspect of the default settings is the AuthenticationType. This is a unique identifier for the provider instance and this is one of the reasons why if you have 2 x UseGoogleAuthentication declarations with the defaults only one will ever be used.

Knowing this, it’s clear that each declaration of UseGoogleAuthentication needs to specify custom options and have an AuthenticationType that is unique amongst them. So we might end up with something like:

//keep defaults for front-end
app.UseGoogleAuthentication(
    clientId: "123456789...",
    clientSecret: "987654321....");

//custom options for back-office
app.UseGoogleAuthentication(new GoogleOAuth2AuthenticationOptions
{
    AuthenticationType = "GoogleBackOffice",
    ClientId = "abcdef...",
    ClientSecret = "zyxwv...."
});
If you test this now, you’ll find that only the first declaration actually works, even when you explicitly tell IOwinContext.Authentication.Challenge to use the “GoogleBackOffice” provider.

The CallbackPath

The reason that the default (first) declaration is the one that activates is that the response from Google is sent to the path “/signin-google”, which is the default. The GoogleAuthenticationMiddleware delegates to the GoogleAuthenticationHandler for each request and inspects the request to see if it should execute. For this logic it checks:

if (Options.CallbackPath.HasValue && Options.CallbackPath == Request.Path)
{
     //If the path matches, auth the request...
}
Since the CallbackPath will be the same by default on both of the above declarations, the first one registered will match and the other registered authenticators will be ignored. To fix this we’ll need to update the path that Google sends back to, and then update the second declaration to match that path.

To tell Google to send the request back on a different path, in your Google Developers Console change the REDIRECT URIS value for the second provider:


Then we need to update the 2nd declaration with the custom CallbackPath so that it matches and activates properly:

app.UseGoogleAuthentication(new GoogleOAuth2AuthenticationOptions
{
    AuthenticationType = "GoogleBackOffice",
    ClientId = "abcdef...",
    ClientSecret = "zyxwv....",
    CallbackPath = new PathString("/custom-signin-google")
});
Hooray, now it should work!

This concept is the same for most external login providers. For example, for Facebook the default value is “/signin-facebook”; you’d need to configure Facebook’s “Valid OAuth redirect URIs” property with the correct callback path in Facebook’s developer portal:


What is SignInAsAuthenticationType?

The last thing to point out is that by default the SignInAsAuthenticationType for each provider will resolve to app.GetDefaultSignInAsAuthenticationType(), which by default is DefaultAuthenticationTypes.ExternalCookie = “ExternalCookie”. Each OAuth provider is linked to another middleware that is responsible for actually issuing a user’s ClaimsIdentity, so by default this will be “ExternalCookie”. In some cases you won’t want the default external cookie authentication middleware to assign the ClaimsIdentity for your OAuth provider: you might need to issue a different ClaimsIdentity, or just want more granular control over what happens with the callback for each OAuth provider. In that case you’ll need to specify another custom cookie authentication declaration, for example:

app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    AuthenticationType = "CustomExternal",
    AuthenticationMode = AuthenticationMode.Passive,
    CookieName = "MyAwesomeCookie",
    ExpireTimeSpan = TimeSpan.FromMinutes(5),
    //Additional custom cookie options....
});
And then you can link that up to your OAuth declaration like:

//custom options for back-office
app.UseGoogleAuthentication(new GoogleOAuth2AuthenticationOptions
{
    AuthenticationType = "GoogleBackOffice",
    ClientId = "abcdef...",
    ClientSecret = "zyxwv....",
    SignInAsAuthenticationType = "CustomExternal"
});

ASP.Net 5 Re-learning a few things (part 2)

November 21, 2014 01:34

This is part 2 of a series of posts about some fundamental changes in ASP.Net 5 that we’ll need to re-learn (or un-learn!)

Part 1: http://shazwazza.com/post/aspnet-5-re-learning-a-few-things-part-1/


System.Web

This probably isn’t new news to most people since it’s really one of the fundamental shifts for ASP.Net 5 – there won’t be a System.Web DLL. Everything that you’ll install in your website will come as different, smaller, separate libraries. For example, if you want to serve static files, you’d reference the Microsoft.AspNet.StaticFiles package; if you want to use MVC, you’d include the Microsoft.AspNet.Mvc package.

ASP.Net 5: “consists of modular components with minimal overhead, so you retain flexibility while constructing your solutions”

Web Forms

Gone! YAY! :-)

Web Forms will still be a part of .Net as part of the main framework in System.Web, just not part of ASP.Net 5.


HttpModules

An HttpModule simply doesn’t exist in ASP.Net 5, but of course there is a new/better way to achieve this functionality: it’s called Middleware. In an HttpModule, you had to execute code based on the various stages of a request, things such as AuthenticateRequest, AuthorizeRequest, PostResolveRequestCache, and other slightly confusingly named events. This is no longer the case with Middleware; things just make sense now … everything is simply a linear execution of your code. You can have multiple middlewares defined to execute in your application, and each one is registered explicitly in your Startup.cs file. As a developer, you are in full control of what gets executed and in what order, instead of not knowing which HttpModules are executing and not really being in control of their order of execution. Middleware can simply modify a request and continue calling the next one in the chain, or it can just terminate the pipeline and return a result.
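As a rough sketch of what that linear pipeline looks like in Startup.cs (based on the beta-era Microsoft.AspNet.Builder API, so exact names may differ in later releases):

```csharp
using Microsoft.AspNet.Builder;
using Microsoft.AspNet.Http;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        // Inline middleware: runs for every request, in registration order
        app.Use(async (context, next) =>
        {
            // code here runs on the way in
            await next();
            // code here runs on the way out, after later middleware has finished
        });

        // Terminating middleware: writes a response and never calls next
        app.Run(async context =>
        {
            await context.Response.WriteAsync("Hello from the end of the pipeline");
        });
    }
}
```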

There are loads of examples in the source for middleware, ranging from the static file middleware to cookie authentication middleware, etc.

And here’s a good article that explains middleware registration and the flow of control.


HttpHandlers

HttpHandlers are also a thing of the past. All they really were was a request handler bound to a specific request path. MVC (which now also includes WebApi) has got this covered. If you really wanted, you could create middleware for this type of functionality as well, but unless you require something extraordinarily complex that MVC cannot do (and it can do a lot!), I’d recommend just sticking with MVC.

ASP.Net 5 - Re-learning a few things (part 1)

November 14, 2014 03:20

ASP.Net 5 (aka vNext) is now in Beta and the Visual Studio 2015 preview is now out! So, what is this new ASP.Net? The 2 biggest features are that it’s totally open source and it will support a cross-platform runtime = great!! But it’s worth knowing that this is more or less a rebuild of ASP.Net and there are quite a few things that we’ve become accustomed to that will now be totally different.

Configuration files

You know all of that junk in the web.config and perhaps in a bunch of other *.config files? Well, that is gone! I think this is wonderful news because creating those configuration sections was a huge pain. Even better news is how easy creating custom configuration inputs will be in ASP.Net 5 using the new IConfiguration and ConfigurationModel sources. OOTB Microsoft is releasing support for JSON, XML and INI files for configuration, but of course you can easily create your own. The repository for the configuration bits is here and a nice tutorial can be found here.
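A minimal sketch of what this looks like with the beta-era Microsoft.Framework.ConfigurationModel package (the file name and key names here are just examples, not part of any required convention):

```csharp
using Microsoft.Framework.ConfigurationModel;

// Sources are composed in order; later sources override earlier ones
var configuration = new Configuration()
    .AddJsonFile("config.json")
    .AddEnvironmentVariables();

// Reads e.g. { "Data": { "ConnectionString": "..." } } from config.json
var connectionString = configuration.Get("Data:ConnectionString");
```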

So what about configuration transforms? That is also a thing of the past. In ASP.Net 5, “configuration” will mostly be done with code found in your Startup.cs file; anything that you want to enable in your application is done in this class. For any package that is installed in your app, you will need to opt in to use it in your Startup.cs file. In some cases, however, a configuration file might need to be updated/transformed… this could be due to an upgrade of a package. The good news is that the configuration sources in ASP.Net 5 are writable (IConfigurationSource), which means during startup or first access to your config, you could detect what needs to be updated to support your new version, make the updates in code and commit (ICommitableConfigurationSource) the changes.

Wait… isn’t that going to restart the app domain?? 

NOTE: If you are using IIS, there can still be a web.config which can be used to configure IIS settings under the system.webServer section.

AppDomain restarts

This is something that we’ve all become familiar with… you want to restart your app, just bump/touch the web.config and your app domain is restarted. This is something we’ll all have to un-learn. In ASP.Net 5, automatic app domain restarts don’t happen. First, there is no web.config (or global.asax for that matter) so there are no files to bump/touch. Next, a big reason why automatic app domain restarts can’t occur is that ASP.Net 5 can run on various different web servers which don’t really know what it means to restart an app domain. For example, if you’ve been playing around with vNext before you had a chance to use VS 2015, you might be familiar with the command line “k web” (see docs for details). This command starts up a simple web server, Microsoft.AspNet.Server.WebListener, which will serve web requests. In order for app domain restarts to occur, it would need to know how to restart itself after it’s been shut down, which isn’t exactly possible with a simple command line process. Instead, if you made any changes to your code and wanted to restart your app, you’d kill the process (ctrl + c) and just call k web again. Another thing to be aware of is that when you kill a process like this, your app domain does not gracefully shut down/unwind; it’s simply terminated.

But not to worry! If you have a web app that requires restarting (i.e. maybe it installs plugins, etc…) and needs to gracefully unwind, it’s still possible and actually much more pleasant, since you’ll be in full control of how/when it happens. In order for this to work you’ll need to be running a web server that knows how to start itself back up again - like IIS! The way to gracefully shut down your app domain is by using IApplicationShutdown. You could even use that in combination with an IFileWatcher and your own configuration files… if you really wanted to mimic an app domain restart by bumping/touching a file.

Deployments, bin folder and App_Code

Speaking of app domain restarts, how will ASP.Net 5 work when I put a new assembly in the /bin folder or add a new class to App_Code? These are a couple more things that need to be un-learned. There really isn’t a /bin folder anymore (well, there is, but it only contains one very special assembly if you are running IIS) and there isn’t any App_Code folder. So where does all of this go?

When you publish a web project in VS 2015 (which uses kpm pack) you end up with a very different looking deployment structure. There are two folders: approot and wwwroot.

wwwroot – is the content of your website, nothing more. Even things like configuration files don’t exist here; it’s just content that your webserver can serve.

approot – this is the brains of your website. It includes all of the binaries, config files, source code, etc… that is used to run your website.

Here’s a blog post that describes the deployed file structure.

Did you say source code?? Yup! By default kpm pack will deploy your site with all of its packages and the source code for all of your projects. The Roslyn compiler will take care of everything for you when your site needs to start serving requests. You can of course opt out of this and have your site deployed as a compiled package.

Did you say Package?? Yup, as in Nuget package! Instead of a /bin folder full of assemblies, all of your dependencies will actually be Nuget references and stored in approot/packages and if you choose to deploy your website without source, it will be compiled into a Nuget package and deployed in the packages folder as well.

More to come….

So there are a few of the things that are pretty different in ASP.Net 5; there’s still more to come and hopefully I’ll find some time to write them all up!

ClientDependency 1.8 released

July 7, 2014 08:02

It’s taken me forever to get this release out purely due to not having enough time, but here it finally is. This update now multi-targets framework versions:

  • Core project now targets both .Net 4 and 4.5
  • MVC project now targets both .Net 4 and 4.5 for MVC 4, and .Net 4.5 for MVC 5

There are also a couple of minor bug fixes:

The update to the CDF .Less project is simply to use the latest .Less version.

To install the CDF core:

PM> Install-Package ClientDependency

To install CDF for MVC (less than v5):

PM> Install-Package ClientDependency-MVC

If you are using MVC 5 then you’ll need to use the MVC 5 specific version:

PM> Install-Package ClientDependency-MVC5

To install the .Less update:

PM> Install-Package ClientDependency-Less

Remember CDF also supports TypeScript, CoffeeScript and SASS!

PM> Install-Package ClientDependency-TypeScript

PM> Install-Package ClientDependency-CoffeeScript

PM> Install-Package ClientDependency-SASS

Creating code-behind files for Umbraco templates

January 29, 2011 04:04
This post was imported from FARMCode.org which has been discontinued. These posts now exist here as an archive. They may contain broken links and images.
This post was imported from FARMCode.org which has been discontinued. These posts now exist here as an archive. They may contain broken links and images.
I’ve always had this idea in my head that one of the downfalls of using Umbraco when coming from a standard ASP.Net web application was the missing code-behind files. You know, when you create a new web application and add an .aspx page to it, it conveniently comes with a .cs and a designer.cs file. Most of the time I would even let the code-behind file inherit from my own custom Page/MasterPage implementation, e.g. a SecuredPage that comes with various properties and methods to handle authentication. Although Umbraco uses regular masterpages (if you haven’t turned it off in the web.config), all you get in the backoffice is the actual page template. Now, don’t get me wrong: I love the way Umbraco lets you edit all aspects of your site via the backend and gives you the utmost flexibility and 100% control over the output, presented in a refreshingly simple manner. Yet sometimes you need a bit more, and it’s just another clear plus for Umbraco that you are able to do the following without ever having to modify the core.

The 'aha' moment, realizing that it is actually quite easy to add code-behind files to Umbraco masterpages, came to me when I had to port a quite big ASP.Net website to Umbraco. The website had grown organically over the years with lots of custom templates, user controls, etc. The site also had multi-language support, all of which was handled in the code-behind files of the pages. The goal was to get it over to Umbraco as quickly as possible, then rework the functionality bit by bit. So I started by creating a new Umbraco site and ‘wrapped’ it in a web application project in Visual Studio.


1-28-2011 5-00-55 PM

[Please refer to the comments below to find more information on how to set this up in Visual Studio.]

After adding a couple of document types and templates in Umbraco the masterpages folder looks something like this:

1-28-2011 5-28-34 PM

The Root.master file is the main master page; Page1.master and Page2.master are nested master pages in Umbraco. I’ve included all three of them in the solution. Now it’s time to create the code-behind files: right-click on the masterpages folder, add three C# classes and name them Root.master.cs, Page1.master.cs and Page2.master.cs. The result should be something like this:

1-28-2011 5-29-38 PM

Visual Studio automatically groups them together, fantastic. Yet they are not really hooked up yet; VS does the grouping just based on file names. The master directive on Root.master currently looks like this:

<%@ Master Language="C#" MasterPageFile="~/umbraco/masterpages/default.master" AutoEventWireup="true" %>

To hook up the cs file we need to add the CodeBehind and Inherits attributes like so:

<%@ Master Language="C#" MasterPageFile="~/umbraco/masterpages/default.master" AutoEventWireup="true" CodeBehind="Root.master.cs" Inherits="Umbraco_4._6._1.masterpages.Root"%>

You should get an error at this point as the compiler complains that Root is not convertible to System.Web.UI.MasterPage, so we need to fix this in the cs file as well by making the class partial (necessary if you want to later add designer files as well) and inheriting from System.Web.UI.MasterPage. An empty Page_Load method can’t hurt either:

using System;

namespace Umbraco4_6_1.masterpages
{
    public partial class Root : System.Web.UI.MasterPage
    {
        protected void Page_Load(object sender, EventArgs e)
        {
        }
    }
}

You should now be able to switch between both files by pressing F7 in Visual Studio. Let’s try to add a Property and reference that from the template:

public string Message { get; set; }

protected void Page_Load(object sender, EventArgs e)
{
    Message = "All the best from your code-behind file!! :)";
}

and something like this on the template:

<div> <%= Message %> </div>

Now we just need to compile the project and navigate to a content page that uses the Root template to see the message.


Adding designer files

[As Simon Dingley pointed out below there is an even easier way to create the designer files: right-click on the master.aspx page and select "Convert to web application", which will create the .designer file for the selected item.]

We can also add a designer file to the duo to make things even better. After adding Root.master.designer.cs, Page1.master.designer.cs and Page2.master.designer.cs the solution looks like this:

1-28-2011 5-49-22 PM

Visual Studio is now rightfully complaining that it has duplicate definitions for the classes and even suggests adding the partial keyword, which we will quickly do. After that is all working and compiling nicely, we need to give Visual Studio control over the designer files. That is easily accomplished by slightly modifying each .master file (e.g. by adding a single space to an empty line) and saving it; VS will do the rest for you. The most important thing this does for you is reference all controls you add to the template so they are available for use in the code-behind file.

Now let’s try to modify the message value from the code-behind of Page1 by adding

protected void Page_Load(object sender, EventArgs e)
{
    ((Root)Master).Message = "Hello from the nested master page!";
}

to it. Browsing to any Umbraco page that uses the Page1 template will now show the new message.

Snapshot CMS API

June 25, 2010 10:30
This post was imported from FARMCode.org which has been discontinued. These posts now exist here as an archive. They may contain broken links and images.
As we’ve been working on the API for Snapshot we realised that there’s a bunch of cool aspects to the API which we think that everyone can benefit from.

To this end we at TheFARM have decided that we’re going to give away the CMS APIs for Snapshot for free!

What does this include?

What we’re providing is a set of APIs which can be used as replacements for some of the Umbraco APIs, most importantly the Media and NodeFactory APIs.

In Snapshot we’ve got quite a bit of caching built in for working with both media and node, along with some handy features around the creation of strongly typed representations of your objects, similar to LINQ to Umbraco, but not tied to Umbraco 4.1 and more focused on being used in the scope of NodeFactory.

What’s not included?

This is just the Snapshot API for working with the Umbraco XML cache, it does not include the Snapshot export engine, nor does it include the API for working in the published Snapshot environment (sorry, those aren’t going to be free!).

Why would I want to use it?

Well, this is a good question: we’ve already got both Media and NodeFactory, so why would you want to use something more custom?

Media caching is an obvious advantage, but most importantly the Snapshot API is designed with dependency injection in mind. This means that when you’re working on your applications in Umbraco you can have the Media or Node APIs injected using Autofac, which makes testable Umbraco development quite easy.
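The testability point is the key one. As a minimal sketch (the interface and class names here are invented for illustration, not the actual Snapshot types), depending on an abstraction rather than on NodeFactory directly means a unit test can supply a fake implementation:

```csharp
// Hypothetical sketch: code depends on an abstraction, so tests can
// substitute a fake instead of needing a running Umbraco instance.
using System;

public interface INodeService
{
    string GetNodeName(int nodeId);
}

// In production this would be backed by the Umbraco XML cache;
// in a test an in-memory fake is enough.
public class FakeNodeService : INodeService
{
    public string GetNodeName(int nodeId)
    {
        return "Test node " + nodeId;
    }
}

public class BreadcrumbBuilder
{
    private readonly INodeService _nodes;

    // The dependency is injected (e.g. wired up by Autofac) rather than newed up.
    public BreadcrumbBuilder(INodeService nodes)
    {
        _nodes = nodes;
    }

    public string Render(int nodeId)
    {
        return _nodes.GetNodeName(nodeId);
    }
}
```

A test can then construct BreadcrumbBuilder with the fake and assert on its output without ever touching the Umbraco XML cache.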

Lastly, since Snapshot is an abstraction over the Umbraco API, you can even write your own implementations which don’t require the Umbraco XML file to read data; you could pull it in from any source.

Getting it

The pre-release of the Snapshot CMS API is available on the Snapshot page of FarmCode.org. Keep in mind that this is an early build of the Snapshot CMS API, and it is subject to change. For updated releases keep an eye on our blog.

Using an iPhone with the Visual Studio development server & Charles

June 11, 2010 23:26

Dave Ward did a good post recently on how to use the Visual Studio development server from a mobile device such as an iPhone. But there’s a problem for us here: we use Charles, which I have found to be better than Fiddler (it’s also cross-platform, so I can use it on both my Mac and Windows machines).

So after reading Dave’s post I decided to have a look at how to do it if you’re using Charles, and well it’s pretty darn simple.

I’d suggest that you read Dave’s post first as I’m going to assume that you have; I’m just going to point out what you need to do differently for Charles.

Charles Configuration

The first thing you need to do is find out which port Charles is running on. By default Charles runs on port 8888, but you can find the settings under Proxy > Proxy Settings.


Next we need to configure external access to the HTTP proxy that Charles runs. This is something Charles handles differently from Fiddler; it’s actually a lot more configurable, as you can define individual IPs or IP ranges for access.

To do this you need to navigate to Proxy > Access Control Settings


Then you just need to click Add and enter the IP (or range) which you want to allow access for. I’ve just allowed access to the IP of my iPhone.


The rest of Dave’s post is all you need to get this working, you connect to your computer from your external device in just the same way.

Hopefully this helps you out if you’re not a Fiddler user but want to be able to use a mobile device with Visual Studio’s development server.

Backing up Document Types

June 11, 2010 20:20
This post was imported from FARMCode.org which has been discontinued. These posts now exist here as an archive. They may contain broken links and images.
Something I’ve heard a number of people say is that they want a way in which they can store the DocumentType in their source control system.

This is obviously a bit of a problem since they are actually stored in the database, not on the file system. Hmmm…

Then yesterday I was talking to Tatham Oddie about it and how you could go about CI with Umbraco. After bouncing a few ideas off Shannon we had a great idea: when you save a DocumentType it would just dump it to the file system. You can then check this file into your source control system and you have a backup.

Sounds pretty simple, and in fact, Umbraco has all the stuff you’d need for this, it’s just a matter of doing it. So while waiting for a rather large project to check out of source control I decided to just write it.

Please note, the following code is not tested, it’s just a POC, when I get some time I do plan on actually testing it :P

How to go about it

It’s actually quite simple: you just need to tie into the Umbraco event model for a DocumentType and use the built-in XML export feature.

I’ve also done the code so you can either dump to a single file or to multiple files (depending on which is easiest in your solution).

It doesn’t check the files out for you, so if you’re using something like TFS you’ll have a problem, but I have put in handlers for read-only files.

Also, there’s no error checking, like I said, this is POC code :P.

Code baby!

using System.Linq;
using System.IO;
using System.Web;
using System.Xml.Linq;
using umbraco;
using umbraco.BusinessLogic;
using umbraco.cms.businesslogic.web;

namespace AaronPowell.Umbraco
{
    public class DocumentTypeSerializer : ApplicationBase
    {
        public DocumentTypeSerializer()
        {
            DocumentType.AfterSave += new DocumentType.SaveEventHandler(DocumentType_AfterSave);
            DocumentType.AfterDelete += new DocumentType.DeleteEventHandler(DocumentType_AfterDelete);
        }

        void DocumentType_AfterDelete(DocumentType sender, umbraco.cms.businesslogic.DeleteEventArgs e)
        {
            DumpDocumentTypes(true);
        }

        void DocumentType_AfterSave(DocumentType sender, umbraco.cms.businesslogic.SaveEventArgs e)
        {
            DumpDocumentTypes(true);
        }

        private static void DumpDocumentTypes(bool useSingleFile)
        {
            var allDocTypes = DocumentType.GetAllAsList();
            var storageFolder = GlobalSettings.StorageDirectory + "/";
            var xmlDoc = new System.Xml.XmlDocument();

            if (useSingleFile)
            {
                // Dump every DocumentType into a single DocumentTypes.config file
                var xdoc = new XDocument(new XElement("DocumentTypes"));

                foreach (var dt in allDocTypes)
                    xdoc.Root.Add(XElement.Parse(dt.ToXml(xmlDoc).OuterXml));

                var file = storageFolder + "DocumentTypes.config";
                var fileOnFileSystem = new FileInfo(HttpContext.Current.Server.MapPath(file));
                if (fileOnFileSystem.Exists)
                {
                    // Strip the read-only flag so the file can be overwritten
                    if ((fileOnFileSystem.Attributes & FileAttributes.ReadOnly) == FileAttributes.ReadOnly)
                        fileOnFileSystem.Attributes &= ~FileAttributes.ReadOnly;
                }

                xdoc.Save(fileOnFileSystem.FullName);
            }
            else
            {
                // Dump each DocumentType into its own .config file
                storageFolder += "DocumentTypes/";
                var di = new DirectoryInfo(HttpContext.Current.Server.MapPath(storageFolder));
                if (di.Exists)
                {
                    // Strip read-only flags so existing files can be overwritten
                    foreach (var existing in di.GetFiles())
                        if ((existing.Attributes & FileAttributes.ReadOnly) == FileAttributes.ReadOnly)
                            existing.Attributes &= ~FileAttributes.ReadOnly;
                }
                else
                {
                    di.Create();
                }

                foreach (var dt in allDocTypes)
                {
                    var xdoc = XDocument.Parse(dt.ToXml(xmlDoc).OuterXml);
                    var file = storageFolder + dt.Alias + ".config";
                    xdoc.Save(HttpContext.Current.Server.MapPath(file));
                }
            }
        }
    }
}
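For context, what gets written out is Umbraco’s standard DocumentType export XML (the same format the package exporter uses). A single-file dump would look roughly like this sketch, with names and properties abbreviated for illustration:

```xml
<!-- Illustrative sketch only; element content is invented example data -->
<DocumentTypes>
  <DocumentType>
    <Info>
      <Name>Textpage</Name>
      <Alias>Textpage</Alias>
      <AllowedTemplates>
        <Template>Textpage</Template>
      </AllowedTemplates>
      <DefaultTemplate>Textpage</DefaultTemplate>
    </Info>
    <GenericProperties>
      <GenericProperty>
        <Name>Body Text</Name>
        <Alias>bodyText</Alias>
        <Tab>Content</Tab>
      </GenericProperty>
    </GenericProperties>
  </DocumentType>
</DocumentTypes>
```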


I'll look at cleaning this up and testing it soon and releasing it as an actual Umbraco package, but in the meantime feel free to have a play around with it.