
In almost all of the examples online about how to deploy various services to Azure, they list the super easy way to do it: authenticate your current account against your Azure subscription, which then grants your VSTS build permission to do all sorts of things… The problem is that not everyone has the security clearance to use the super easy tools in VSTS.

When you attempt to use these nice tools in VSTS you might get an error like this: “Failed to set Azure permission ‘RoleAssignmentId: some-guid-goes-here’ for the service principal … does not have authorization to perform action ‘Microsoft.Authorization/roleAssignments/write’ over scope”. This is because these nice VSTS tools actually create a custom user behind the scenes in your Azure subscription, but your account doesn’t have access to authorize that.

Luckily there’s a workaround.

MS Deploy … sigh

There may be other workarounds, but this one works, even if it’s not the most elegant. I thought I’d post my findings here because it was a bit of a pain in the ass to get this all correct.

So here are the steps:

1. Download the publish profile

You need to get the publish profile from the app service that you want to deploy to. This can be a website, a staging slot, an Azure function (and probably a bunch of others).


The downloaded file is an XML file containing a bunch of info you’ll need.

2. Create a release definition and environment for your deployment

This assumes that you are pretty familiar with VSTS

You’ll want to create an empty environment in your release definition. Normally this is where you could choose the built-in fancy VSTS deployment templates like “Azure App Service Deployment”… but as above, this doesn’t work if you don’t have security clearance. Instead, choose ‘Empty’.


Then in your environment tasks, add a Batch Script task.


3. Set up your batch script

There are two ways to go about this and both depend on an msdeploy build output. This output is generated by your build in VSTS if you are using a standard VSTS Visual Studio solution build, which creates msdeploy packages for you and puts them in your artifacts folder. Along with the msdeploy packages it also generates a cmd batch file that executes msdeploy, plus a readme file that tells you how to execute it and contains some important info that you should read.

So here are the two options: execute the cmd file, or execute msdeploy.exe directly.

Execute the cmd file

There’s a bit of documentation about this online but most of it is based on using the SetParameters.xml file to adjust settings… but I just don’t want to use that.

Here’s the Path and Arguments that you need to run:

/y "/m:https://${publishUrl}/MSDeploy.axd?site=${msdeploySite}" /u:$(userName) /p:$(userPWD) /a:Basic -enableRule:DoNotDeleteRule "-setParam:name='IIS Web Application Name',value='${msdeploySite}'"

The parameters should be added to your VSTS Variables: msdeploySite, userName and userPWD, and these variables correspond exactly to what is in the publish profile XML file that you downloaded. These arguments need to be pretty much exact; any misplaced quote, or not including https, etc. will cause this to fail.

Important: the use of -enableRule:DoNotDeleteRule is totally optional. If you want to reset your site to exactly what is in the msdeploy package, you do not want this rule. If, however, you have user generated images, content or custom config files on your site that you don’t want deleted when you deploy, then you need to set it.

I’m unsure if this will work for Azure Functions deployment (it might!) … but I used the next option to do that:

Execute msdeploy.exe directly

If you execute the CMD file, you’ll see in the VSTS logs the exact msdeploy signature used which is:

"C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe" -source:package='d:\a\r1\a\YOUR_PROJECT_NAME\drop\YOUR_MSDEPLOY_PACKAGE_FILE.zip' -dest:auto,computerName="https://YOUR_PUBLISH_URL/MSDeploy.axd?site=YOUR_PROFILE_NAME",userName=********,password=********,authtype="Basic",includeAcls="False" -verb:sync -disableLink:AppPoolExtension -disableLink:ContentExtension -disableLink:CertificateExtension -setParamFile:"d:\a\r1\a\YOUR_PROJECT_NAME\drop\YOUR_MSDEPLOY_PACKAGE_FILE.SetParameters.xml" -enableRule:DoNotDeleteRule -setParam:name='IIS Web Application Name',value='YOUR_PROFILE_NAME'

So if you wanted, you could take this and execute that directly instead of the CMD file. I use this method to deploy Azure Functions but the script is a little simpler since that deployment doesn’t require all of these parameters. For that I use this for the Path and Arguments:

C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe
-verb:sync -source:package='$(System.DefaultWorkingDirectory)/YOUR_BUILD_NAME/drop/YOUR_MSDEPLOY_PACKAGE.zip' -dest:auto,computerName="https://$(publishUrl)/msdeploy.axd?site=$(msdeploySite)",UserName='$(userName)',Password='$(userPWD)',AuthType='Basic' -setParam:name='IIS Web Application Name',value='$(msdeploySite)'

Hopefully this comes in handy for someone ;)

For wildcard queries in Lucene that you would like to have the results ordered by Score, there’s a trick you need to do, otherwise all of your scores will come back the same. The reason for this is that the default behavior of wildcard queries uses CONSTANT_SCORE_AUTO_REWRITE_DEFAULT which, as the name describes, is going to give a constant score. The code comments describe why this is the default:

a) Runs faster

b) Does not have the scarcity of terms unduly influence score

c) Avoids any "TooManyBooleanClauses" exceptions

Without fully understanding Lucene that doesn’t really mean a whole lot, but the Lucene docs give a little more info:

NOTE: if setRewriteMethod(org.apache.lucene.search.MultiTermQuery.RewriteMethod) is either CONSTANT_SCORE_BOOLEAN_QUERY_REWRITE or SCORING_BOOLEAN_QUERY_REWRITE, you may encounter a BooleanQuery.TooManyClauses exception during searching, which happens when the number of terms to be searched exceeds BooleanQuery.getMaxClauseCount(). Setting setRewriteMethod(org.apache.lucene.search.MultiTermQuery.RewriteMethod) to CONSTANT_SCORE_FILTER_REWRITE prevents this.

The recommended rewrite method is CONSTANT_SCORE_AUTO_REWRITE_DEFAULT: it doesn't spend CPU computing unhelpful scores, and it tries to pick the most performant rewrite method given the query. If you need scoring (like FuzzyQuery, use MultiTermQuery.TopTermsScoringBooleanQueryRewrite which uses a priority queue to only collect competitive terms and not hit this limitation. Note that org.apache.lucene.queryparser.classic.QueryParser produces MultiTermQueries using CONSTANT_SCORE_AUTO_REWRITE_DEFAULT by default.

So the gist is: unless you are ordering by Score this shouldn’t be changed, because it will consume more CPU and, depending on how many terms you are querying against, you might get an exception (though I think that is rare).

So how do you change the default?

That’s super easy, it’s just this line of code:
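
//'queryParser' here is your QueryParser instance
queryParser.SetMultiTermRewriteMethod(MultiTermQuery.SCORING_BOOLEAN_QUERY_REWRITE);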


But there’s a catch! You must set this flag before you parse any queries with the query parser, otherwise it won’t work. All this really does is instruct the query parser to apply this rewrite method to any MultiTermQuery or FuzzyQuery implementations it creates. So what if you don’t know whether this change should be made before you use the query parser? One scenario might be: at the time of using the query parser, you are unsure if the user constructing the query is going to be sorting by score. In this case you want to change the scoring mechanism just before executing the search but after creating your query.

Setting the value lazily

The good news is that you can set this value lazily just before you execute the search, even after you’ve used the query parser to create parts of your query. There’s only one class type we need to check for that has this API: MultiTermQuery. However, not all implementations of it support rewriting, so we have to check for that. So given an instance of a Query we can recursively update every query contained within it and manually apply the rewrite method like this:

protected void SetScoringBooleanQueryRewriteMethod(Query query)
{
    if (query is MultiTermQuery mtq)
    {
        try { mtq.SetRewriteMethod(MultiTermQuery.SCORING_BOOLEAN_QUERY_REWRITE); }
        catch (NotSupportedException)
        {
            //swallow this, some implementations of MultiTermQuery don't support this like FuzzyQuery
        }
    }
    if (query is BooleanQuery bq)
    {
        foreach (BooleanClause clause in bq.Clauses())
        {
            var q = clause.GetQuery();
            //recurse into the nested query
            SetScoringBooleanQueryRewriteMethod(q);
        }
    }
}
You can call this method just before you execute your search and it will still work, without having to eagerly call QueryParser.SetMultiTermRewriteMethod(MultiTermQuery.SCORING_BOOLEAN_QUERY_REWRITE); before you use the query parser methods.
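
For example, here’s a minimal sketch of tying it together (this assumes you already have an IndexSearcher called searcher and a parsed Query called query, and that the results are being sorted by score):

//apply the scoring rewrite method lazily, then search as normal
SetScoringBooleanQueryRewriteMethod(query);
//the wildcard/multi-term parts of the query now produce real scores instead of a constant
var topDocs = searcher.Search(query, 100);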

Happy searching!

Using AspNet5 OptionsModel

January 12, 2015 04:21

If you’ve used AspNet5 then you’ve probably been using some MVC, in which case you’ve probably seen something like this in your Startup class:

// Add MVC services to the services container
services.AddMvc()
    .Configure<MvcOptions>(options =>
    {
        //Configure some MVC options like customizing the 
        // view engines, etc...
        options.ViewEngines.Insert(0, typeof(TestViewEngine));
    });

It turns out this syntax for specifying ‘options’ for a given service is a generic pattern that you can use in your own code. In fact the OptionsModel framework is its own code repository: https://github.com/aspnet/Options

I’ve implemented custom options in my AspNet5 project called Smidge (a runtime JavaScript/CSS pre-processing engine) and wanted to share the details since as far as I’ve seen there isn’t any documentation about this.

What are options?

Probably the simplest way to describe the options framework is that: Options allow you to configure your application via code during startup.

Options are just a POCO class that can contain configuration options to customize the behavior of your library. These option classes can be injected into any of your services with IoC using an interface called Microsoft.Framework.OptionsModel.IOptions. There’s a caveat to this POCO class however: it must contain a parameter-less/empty constructor, which means you cannot have services injected into your options class via its constructor. This options framework also allows for ‘named’ options. So for example, perhaps you have a single options class that you would like to have configured in two different ways, one for your ‘staging’ and one for your ‘live’ website.

Creating options

Here’s a really simple example of a POCO options class:

public class CustomMessageOptions
{
    public CustomMessageOptions()
    {
        Message = "";
    }

    public string Message { get; set; }
}

In order to use this options class you need to create an options configuration class. For example:

public class CustomMessageOptionsSetup : ConfigureOptions<CustomMessageOptions>
{
    public CustomMessageOptionsSetup() 
        : base(ConfigureMessageOptions)
    {
    }

    /// <summary>
    /// Set the default options
    /// </summary>
    public static void ConfigureMessageOptions(CustomMessageOptions options)
    {
        options.Message = "Hello world";
    }
}

Then you need to add this class to your IoC container of type Microsoft.Framework.OptionsModel.IConfigureOptions:

services.AddTransient<IConfigureOptions<CustomMessageOptions>, CustomMessageOptionsSetup>();

Using options

To configure your options during startup, you do so in the ConfigureServices method like:

services.Configure<CustomMessageOptions>(options =>
{
    options.Message = "Hello there!";
});

Now you can have these options injected into any of your services using the IOptions interface noted previously:

public class MyCoolService 
{
    public MyCoolService(IOptions<CustomMessageOptions> messageOptions)
    {
        //IOptions exposes an 'Options' property which resolves an instance
        //of CustomMessageOptions
        ConfiguredMessage = messageOptions.Options.Message;
    }

    public string ConfiguredMessage { get; private set; }
}

Named options

As an example, let’s say that you want a different message configured for your ‘staging’ and ‘live’ websites. This can be done with named options; here’s an example:

services
    .Configure<CustomMessageOptions>(options =>
    {
        options.Message = "Hi! This is the staging site";
    }, "staging")
    .Configure<CustomMessageOptions>(options =>
    {
        options.Message = "Hi! This is the live site";
    }, "live");

Then in your service you can resolve the option instance by name:

public class MyCoolService 
{
    public MyCoolService(IOptions<CustomMessageOptions> messageOptions)
    {
        //IRL this value would probably be set via some environment variable
        var configEnvironment = "staging";

        //IOptions exposes a 'GetNamedOptions' method which resolves an instance
        //of CustomMessageOptions based on a defined named configuration
        ConfiguredMessage = messageOptions.GetNamedOptions(configEnvironment).Message;
    }

    public string ConfiguredMessage { get; private set; }
}

Configuring options with other services

Since your options class is just a POCO object and must have a parameter-less/empty constructor, you cannot inject services into the options class itself. However, there is a way to use IoC services in your options classes by customizing the ConfigureOptions class created above. In many cases this won’t be necessary, but it really depends on how you are using options. As a (bad) example, let’s say we wanted to expose a custom helper service called SiteHelper on the CustomMessageOptions class that can be used by a developer to create the message. The end result syntax might look like:

services.Configure<CustomMessageOptions>(options =>
{
    var siteId = options.SiteHelper.GetSiteId();
    options.Message = "Hi! This is the staging site with id: " + siteId;
});

In order for that to work, the options.SiteHelper property needs to be initialized. This is done with the CustomMessageOptionsSetup class (created above) which has been added to the IoC container, meaning it can have other services injected into it. The resulting class would look like:

public class CustomMessageOptionsSetup : ConfigureOptions<CustomMessageOptions>
{
    //SiteHelper gets injected via IoC
    public CustomMessageOptionsSetup(SiteHelper siteHelper) 
        : base(ConfigureMessageOptions)
    {
        SiteHelper = siteHelper;
    }

    public SiteHelper SiteHelper { get; private set; }

    /// <summary>
    /// Set the default options
    /// </summary>
    public static void ConfigureMessageOptions(CustomMessageOptions options)
    {
        options.Message = "Hello world";
    }

    /// <summary>
    /// Allows for configuring the options instance before options are set
    /// </summary>
    public override void Configure(CustomMessageOptions options, string name = "")
    {
        //Assign the site helper instance
        options.SiteHelper = SiteHelper;

        base.Configure(options, name);
    }
}

IRL, to give you an example of why this might be useful: in my Smidge project I allow developers to create named JavaScript/CSS bundles during startup using options. In some cases a developer might want to manipulate the file processing pipeline for a given bundle, and in that case they need access to a service called PreProcessPipelineFactory which needs to come from IoC. The usage might look like:

services.AddSmidge()
    .Configure<Bundles>(bundles =>
    {
        //create a bundle (the name and file path here are just examples) with a
        //custom pre-processor pipeline built from the PipelineFactory
        bundles.Create("test-bundle", 
            bundles.PipelineFactory.GetPipeline(
                //add as many processor types as you want
                typeof(DotLess), typeof(JsMin)), 
            WebFileType.Js, 
            "~/Js/Bundle2");
    });

In the above, the bundles.PipelineFactory is a property on the bundles (options) class which gets initialized in my own ConfigureOptions class.
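
As a rough sketch of how that initialization can be done (this is not Smidge’s exact code; the class name and details here are just illustrative), it follows the same ConfigureOptions pattern shown earlier:

public class BundlesSetup : ConfigureOptions<Bundles>
{
    //PreProcessPipelineFactory gets injected via IoC
    public BundlesSetup(PreProcessPipelineFactory pipelineFactory) 
        : base(ConfigureBundles)
    {
        PipelineFactory = pipelineFactory;
    }

    public PreProcessPipelineFactory PipelineFactory { get; private set; }

    public static void ConfigureBundles(Bundles options)
    {
        //set any default bundle options here
    }

    public override void Configure(Bundles options, string name = "")
    {
        //expose the pipeline factory on the options instance so developers
        //can use it when creating their bundles
        options.PipelineFactory = PipelineFactory;

        base.Configure(options, name);
    }
}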


Hopefully this helps anyone looking to use custom options in their AspNet5 libraries!

During the past month I decided to dive deep into learning ASP.NET 5, and what better way to learn than to start a new OSS project :)

I chose to make a new, simple and extensible JavaScript/CSS runtime pre-processor for ASP.NET 5. It does file minification, combination and compression, has a nice file caching layer and it’s all done in async operations. I ported over a few ideas and code snippets from CDF (Client Dependency Framework) but with a more modern approach. I’ve called it ‘Smidge’ = something really small.

The project is on GitHub. It’s still a work in progress but it’s functional and there’s even some documentation! In the next few weeks I’ll get more of the code and docs updated and hopefully have a beta release out. In the meantime, you can clone the source, browse the code, build it and of course use it if you like.

Project details

It’s currently only targeting aspnet50 and not the Core CLR… I didn’t start with Core CLR because there was some legacy code I had to port over and I wanted to get something up and working relatively quickly. It shouldn’t be too much work to convert to Core CLR and Mono, hopefully I’ll find time to do that soon. It’s referencing all of the beta-* libraries from the ASP.NET 5 nightly myget feeds since there’s some code I’m using that isn’t available in the current beta1 release (like Microsoft.AspNet.WebUtilities.UriHelper). The target KRE version is currently KRE-CLR-amd64 1.0.0-beta2-10760.


I’ve put up an Alpha 1 release on Nuget, so you can install it from there:

PM> Install-Package Smidge -Pre

There are some installation instructions here; you’ll need to add the smidge.json file yourself for now, I can’t figure out how to get VS 2015 (kpm pack) to package that up … more learning required!


There’s certainly a lot of detective work involved in learning ASP.NET 5 but with the code being open source and browse-able/searchable on GitHub, it makes finding what you need fairly easy.

If you are using PetaPoco, or NPoco (which seems to be the most up-to-date fork of the project), the title of this post might be a bit scary… but hopefully you won’t have to worry. This really depends on how you create your queries and how many different query structures you are executing.

High memory usage

Here is the code responsible for the memory growth when using PetaPoco:


What is happening here is that every time a POCO needs to be mapped from a data source, this will add more values to a static cache, specifically this one:

https://github.com/toptensoftware/PetaPoco/blob/master/PetaPoco/PetaPoco.cs#L2126  (m_PocoDatas)

This isn’t a bad thing… but it can be if you are:

  • using non-parameterized where clauses
  • you have dynamically generated where clauses
  • you use a lot of SQL ‘IN’ clauses, since the number of items in the array passed to the ‘IN’ clause is dynamic
  • you have tons of structurally different where clauses

Each time a unique SQL query is sent to PetaPoco it will store this SQL string and associate it to a delegate (which is also cached). Over time, as these unique SQL queries are executed, the internal static cache will grow. In some cases this could consume quite a lot of memory.

The other thing to note is how large the ‘key’ that PetaPoco/NPoco uses can be:

var key = string.Format("{0}:{1}:{2}:{3}:{4}", sql, connString, ForceDateTimesToUtc, firstColumn, countColumns);

Considering how many queries might be executing in your application, the storage for these keys alone could take up quite a lot of memory! An SQL statement combined with a connection string could be very long, and each of these combinations gets stored in memory for every unique SQL query executed that returns mapped POCO objects.

Parameterized queries vs. non-parameterized

Here are some examples of why non-parameterized queries will cause lots of memory consumption. Let’s say we have a simple query like:

db.Query<MyTable>("WHERE MyColumn=@myValue", new {myValue = "test"})

Which results in this SQL:

SELECT * FROM MyTable WHERE MyColumn = @myValue

This query can be used over and over again with different values and PetaPoco will simply store a single SQL key in its internal cache. However, if you are executing queries without real parameters such as:

db.Query<MyTable>("WHERE MyColumn='hello'");
db.Query<MyTable>("WHERE MyColumn='world'");
db.Query<MyTable>("WHERE MyColumn='hello world'");

Which results in this SQL:

SELECT * FROM MyTable WHERE MyColumn = 'hello';
SELECT * FROM MyTable WHERE MyColumn = 'world';
SELECT * FROM MyTable WHERE MyColumn = 'hello world';

Then PetaPoco will store each of these statements against a delegate in its internal cache, since none of these strings are equal to each other.

Depending on your application you still might have a very large number of unique parameterized queries, though I’d assume you’d have to have a terrifically huge amount for it to be a worry.

Order by queries

Unfortunately, even if you use parameterized queries, PetaPoco will store the SQL query key with its Order By clause, which isn’t necessary and will again mean more duplicate SQL keys and delegates being tracked. For example, if you have these resulting queries:

SELECT * FROM MyTable WHERE MyColumn = @myValue ORDER BY SomeField;
SELECT * FROM MyTable WHERE MyColumn = @myValue ORDER BY AnotherField;

PetaPoco will store each of these statements in its internal cache separately since the strings don’t match. However, the delegate that PetaPoco stores against these SQL statements isn’t concerned with the ordering output, only the column and table selection, so in theory it should strip off the trailing Order By clause (and other irrelevant clauses) to avoid this duplication.
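
As a rough illustration of that idea (this is not PetaPoco/NPoco code, just a sketch of the kind of normalization that could be applied to the SQL before it’s used as a cache key):

//strip a trailing ORDER BY clause so otherwise identical queries share a single cache key
static string StripTrailingOrderBy(string sql)
{
    return System.Text.RegularExpressions.Regex.Replace(
        sql,
        @"\s+ORDER\s+BY\s+[\w\s,\.\[\]]+$",
        string.Empty,
        System.Text.RegularExpressions.RegexOptions.IgnoreCase);
}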

A slightly better implementation

First, if you are using PetaPoco/NPoco, you shouldn’t use dynamic queries, for the points mentioned above. If you need this functionality then I suppose it might be worth these libraries adding some sort of property on the Database object, or a parameter in either the Fetch or Query methods, to specify that you don’t want to use the internal cache (this will be slower, but you won’t get unwanted memory growth). I’d really just suggest not using dynamically created where clauses ;-)

Next, there’s a few things that could be fixed in the PetaPoco/NPoco core to manage memory a little better:

  • The size of the key that is stored in memory doesn’t need to be that big. A better implementation would be to use a hash combiner class to combine the GetHashCode result of each of the parameters that make up the key (see the sketch after this list). This is a very fast way to create a hash of some strings that results in a much smaller key. An example of a hash combiner class is here (which is actually inspired by the various internal hash code combiner classes in .NET): https://github.com/umbraco/Umbraco-CMS/blob/7.2.0/src/Umbraco.Core/HashCodeCombiner.cs
  • Instead of storing all of this cache in static variables, store it in an ObjectCache/MemoryCache (http://msdn.microsoft.com/en-us/library/system.runtime.caching.objectcache(v=vs.110).aspx) with a sliding expiration so the memory can be collected when it’s unused
  • The Order By clause should be ignored, based on the point mentioned above
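
Here’s a minimal sketch of that first idea (this is not the library’s code; the class name and hashing scheme are just illustrative), combining the hash codes of the key parts instead of concatenating the full strings:

internal static class PocoCacheKey
{
    public static int Combine(string sql, string connString, bool forceDateTimesToUtc, int firstColumn, int countColumns)
    {
        unchecked
        {
            var hash = 17;
            hash = hash * 31 + (sql == null ? 0 : sql.GetHashCode());
            hash = hash * 31 + (connString == null ? 0 : connString.GetHashCode());
            hash = hash * 31 + forceDateTimesToUtc.GetHashCode();
            hash = hash * 31 + firstColumn.GetHashCode();
            hash = hash * 31 + countColumns.GetHashCode();
            return hash;
        }
    }
}

The resulting int takes a few bytes rather than the full SQL text plus connection string for every unique query.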

I’ve created a PR for NPoco here, and also created an issue on the original PetaPoco source here.

Decided to write this quick post for anyone searching on this topic. AngularJS has its own convention for CSRF (Cross Site Request Forgery) protection, but in some cases you’ll be calling the same server side services via JQuery, so you might need JQuery requests to also follow Angular’s convention.

For information about Angular’s CSRF protection see the “Security Considerations” part of Angular’s $http documentation.

Luckily it’s pretty darn easy to get JQuery to follow this convention too, and this will also work with 3rd party plugins that use JQuery for requests, like the Blueimp file uploader. The easiest way to get this done is to set the global JQuery $.ajax rules. Probably the best place to do this is in your Angular app.run statement:

app.run(function ($cookies) {

    //This sets the default jquery ajax headers to include our csrf token, we
    // need to use the beforeSend method because the token might change 
    // (different for each logged in user)
    $.ajaxSetup({
        beforeSend: function (xhr) {
            xhr.setRequestHeader("X-XSRF-TOKEN", $cookies["XSRF-TOKEN"]);
        }
    });
});

That’s it!

It’s important to note that you should set the header using beforeSend; if you just set the $.ajax options ‘headers’ section directly, the header cannot be dynamic, which you’ll probably want if you have users logging in/out.

If your application supports plugins or extensions, in some cases it might be useful to scan a package’s assemblies before importing them into your app. Some reasons for this might be:

  • Checking if the package has missing assembly references
  • Checking if the assembly references obsolete types that might make the package unstable
  • Checking the .Net targeted framework of the assembly
  • Any other assembly inspection to determine it is compatible with your app

To do this you can load assemblies using the Assembly.ReflectionOnlyLoadFrom and Assembly.ReflectionOnlyLoad methods, which load assemblies into a special assembly load context called the “reflection-only context” that safely lets you inspect these assemblies.
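
For example, here’s a minimal sketch of that kind of inspection (the PackageInspector class name is just for illustration): it loads a single assembly file into the reflection-only context and reads its runtime version and its TargetFrameworkAttribute.

using System;
using System.Linq;
using System.Reflection;

class PackageInspector
{
    public static void Inspect(string assemblyFilePath)
    {
        //load into the reflection-only context, nothing in the assembly is executed
        var asm = Assembly.ReflectionOnlyLoadFrom(assemblyFilePath);

        //the CLR version the assembly was built against
        Console.WriteLine(asm.ImageRuntimeVersion);

        //in the reflection-only context attributes can't be instantiated, so read them
        //via CustomAttributeData; resolving attribute types may require the
        //ReflectionOnlyAssemblyResolve handler shown further below
        var targetFramework = CustomAttributeData.GetCustomAttributes(asm)
            .FirstOrDefault(a => a.Constructor.DeclaringType.FullName == "System.Runtime.Versioning.TargetFrameworkAttribute");

        if (targetFramework != null)
        {
            //e.g. ".NETFramework,Version=v4.5"
            Console.WriteLine(targetFramework.ConstructorArguments[0].Value);
        }
    }
}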

Further reading:

A great article on Reflection Only Assembly Loading and if you want to know more about assembly load contexts, here’s an explanation.

Loading in assemblies

For this example, we’ll assume that all of the assemblies for a package are in some folder outside of the normal /bin folder (not loaded into the current app) and each assembly for the package will need to be inspected for type references that are not supported.

A common mistake when loading assemblies with reflection (especially in the LoadFrom context) is to load them one at a time, whereas they generally need to all be loaded before inspecting them, since they probably have references to each other. Another thing that generally must be done is adding an event listener to AppDomain.ReflectionOnlyAssemblyResolve, because even though all known referenced assemblies are loaded into the context, some assemblies might not be explicitly referenced but are needed to load an assembly. This handler provides a way to resolve those missing references.

The first thing to do is set up the event handler:

AppDomain.CurrentDomain.ReflectionOnlyAssemblyResolve += (s, e) =>
{
    var a = Assembly.ReflectionOnlyLoad(e.Name);
    if (a == null) throw new TypeLoadException("Could not load assembly " + e.Name);
    return a;
};

Next we need to load all of the assembly files in the folder

foreach (var f in files) Assembly.ReflectionOnlyLoadFrom(f);

Then load all of their referenced assemblies into the reflection-only context:

//Then load each referenced assembly into the context
var assembliesWithErrors = new List<string>();
var errors = new List<string>(); //the error messages returned in the method response
foreach (var f in files)
{
    var reflectedAssembly = Assembly.ReflectionOnlyLoadFrom(f);
    foreach (var assemblyName in reflectedAssembly.GetReferencedAssemblies())
    {
        try
        {
            Assembly.ReflectionOnlyLoad(assemblyName.FullName);
        }
        catch (FileNotFoundException)
        {
            //if an exception occurs it means that a referenced assembly could not be found
            errors.Add(
                string.Concat("This package references the assembly '",
                    assemblyName.FullName,
                    "' which was not found, this package may have problems running"));
            assembliesWithErrors.Add(f);
        }
    }
}

In the catch, I’m detecting assembly reference errors and adding an error message to the outgoing method response and also adding that assembly to the assembliesWithErrors list which is used later to ensure we’re not inspecting assemblies that couldn’t be loaded.

Now that all the assemblies are loaded we can inspect them (ignoring ones with errors). This example is looking for any assemblies that have types implementing ‘MyType’. If they do implement this type, add the assembly to a list to return from the current method.

//now that we have all referenced types into the context we can look up stuff
var result = new List<string>(); //the assemblies containing the type we're looking for
foreach (var f in files.Except(assembliesWithErrors))
{
    //now we need to see if they contain any type 'MyType'
    var reflectedAssembly = Assembly.ReflectionOnlyLoadFrom(f);
    var found = reflectedAssembly.GetExportedTypes()
        //type identity differs between load contexts so compare by full name
        .Where(t => t.GetInterfaces().Any(i => i.FullName == typeof(MyType).FullName));
    if (found.Any())
    {
        result.Add(f);
    }
}

Separate AppDomain

It’s best to execute all of this logic in a separate AppDomain because once assemblies are loaded into a context, they cannot be unloaded, and since we are loading in from files, those files will remain locked until the AppDomain is shut down. Explaining how to create a separate AppDomain is outside the scope of this article but the code is included in the source below.

Source Code

Here’s a class that encapsulates all of this logic, and of course you can do much more when inspecting assemblies for various types.


I previously wrote a post about Listening for validation changes in AngularJS, which, with my knowledge at the time, required a handy hack to get a reference to the currently scoped form controller (ngForm) for a given input control. I also complained a bit that it seemed angular didn’t really provide a way to reference the current form controller without this little hack… well, it turns out I was wrong! :)

AngularJS seems kind of like ASP.Net MVC in the early days when there wasn’t much documentation… It definitely pays off to read through the source code to figure out how to do more complicated things. I had a bit of a ‘light bulb’ moment when I realized that ngForm was itself a directive/controller, and I had recently noticed that the ‘require’ parameter used when setting up a directive allows you to search for controllers in the current directive’s ancestry (i.e. prefix the required controller with a hat: ^).

What does the require parameter of a directive do?

Let’s face it, the directive documentation for AngularJS is in desperate need of being updated so that human beings can understand it (as noted by the many comments at the bottom). So I’ll try to explain what the ‘require’ parameter actually does and how to use it.

We’ll create a simple custom validation directive which will invalidate a field if the value is “blah”

function blahValidator() {
    return {
        require: 'ngModel',
        link: function(scope, elm, attr, ctrl) {
            var validator = function(value) {
                if (ctrl.$viewValue == "blah") {
                    ctrl.$setValidity('blah', false);
                    return null;
                }
                else {
                    ctrl.$setValidity('blah', true);
                    return value;
                }
            };

            //wire the validator into the ng-model pipeline so it runs on value changes
            ctrl.$parsers.unshift(validator);
            ctrl.$formatters.push(validator);
        }
    };
}

You’ll notice that we have a ‘require’ parameter specified for ‘ngModel’. What is happening here is that when we assign this directive to an input field, angular will ensure that the input field also has a defined ng-model attribute on it as well. Then angular will pass in the instance of the ng-model controller to the ‘ctrl’ parameter of the link function.

So, the ‘require’ parameter dictates what the ‘ctrl’ parameter of the link function equals.

You can also require multiple controllers by passing an array, for example: require: ['ngModel', '^form'], in which case the ctrl parameter of the link function becomes an array containing the controller instances in the same order.


NOTE: the ctrl/ctrls parameter in the above 2 examples can be called whatever you want

Special prefixes

Angular has 2 special prefixes for the ‘require’ parameter:

^ = search the current directive’s ancestry for the controller

? = don’t throw an exception if the required controller is not found, making it ‘optional’ not a requirement

You can also combine them so angular will search the ancestry but make it optional too, such as: '^?ngController'

In the above example, the blahValidator will only work if the directive is declared inside of an ng-controller block.

Referencing the current ng-form

Given the above examples, and knowing that ngForm itself is a controller, we should be able to just add a requirement on ngForm and have it injected into the directive. BUT, it won’t work the way you expect. For some reason angular registers the ngForm controller under the name “form”, which I discovered by browsing the angular source.

So now it’s easy to get a reference to the containing ngForm controller; all you need to do is add a ‘require’ parameter to your directive that looks like:

require: '^form'
and it will be injected into the ctrl parameter of your link function.

NDepend review = pretty cool!

June 21, 2013 23:47

For a while I’ve been wanting to automate some reporting from our build process to show some fairly simple statistical information such as

  • Obsoleted code
  • Internal code planned to be made public
  • Code flagged to be obsoleted
  • Code flagged as experimental

NOTE: some of the above are based on our own internal c# attributes

Originally I was just going to write some code to show me this, which would have been fairly straightforward, but I knew that NDepend had a code query language, natively outputs reports and can be integrated into the build process, so I figured I’d give it a go.

So I went and installed NDepend and ran the analysis against my current project … I wasn’t quite prepared for the amount of information it gave me! :)


I quickly found out that the reports that I wanted to create were insanely simple to write. I then lost a large part of my day exploring the reports it generates OOTB and I realized I hadn’t even touched the surface of the information this tool could give me. I was mostly interested in statistical analysis but this will also happily tell you about how bad or good your code is, how much test coverage you’ve got, what backwards compatibility you broke with your latest release, what code is most used in your project and what code isn’t used at all… and the list goes on and on.

To give you an example of the first report I wanted to write, here’s all I wrote to give me what methods in my codebase were obsoleted:

from m in JustMyCode.Methods where m.HasAttribute("System.ObsoleteAttribute") select m

What’s cool about the report output is that you can output to HTML or XML, so if you want to publish it on your site you can just write some XSLT to make it look the way you want.

Project comparison

Another really great feature that I thought would be fun to automate is the code analysis against previous builds, so you know straight away if you are breaking compatibility with previous releases. I think it has great potential, but what I did discover is that in some cases the OOTB queries for this aren’t perfect. It will tell you what public types, methods and fields have been removed or changed, and the query is pretty straightforward, but… In many cases when we’re trying to maintain backwards compatibility while creating new APIs, we’ll remove a type’s methods/properties and just make the legacy type inherit from the new API class (or some other trickery similar to that). This might break compatibility if people are using reflection or specific type checking, but it won’t break the API usage, which is what I’m mostly interested in. I haven’t figured it out yet but I’m sure there’s some zany NDepend query I can write to check that.


So now that I know I can extract all this info, it’s time to decide what to export. Thankfully NDepend comes fully equipped with its own MSBuild integration, so automating the report output should be pretty straightforward. I also noticed that if you wanted to be ultra careful you can integrate it with your build server and have your builds fail if some of your queries cause errors, like the above: if you break compatibility the build will fail. Or anything you want really, like failing the build if we detect you’ve written a method that has too many lines of code.

Pretty cool indeed!

I was going to use NDepend just to output reports, which it clearly does without issue. What I didn’t realize was how much detail you can get about your code with this tool. There are some tools in there that were a bit over my head, but I suppose if you are working on really huge projects and need anything from just a snapshot to insane details about what dependencies your code has on other code, it’ll do that easily as well.

The UI is a bit overwhelming at first and could probably do with some work but TBH I’m not sure how else you’d display that much information nicely. Once I figured out how the whole query and reporting structures work then it was dead simple.

Happy reporting :)

In some applications it can be really useful to have controllers listen for validation changes, especially in more complicated AngularJS apps where ‘ng-repeat’ is used to render form controls. There are plenty of cases where a parent scope might need to know about validation changes based on child scopes… one such case is a validation summary. There are a couple of ways to implement this (and probably more) but they all seem a bit hacky, such as:

  • Apply a $watch to the current form object’s $valid property in the parent scope, then use jQuery to look for elements that have a class like ‘invalid’
    • You could then use the scope() function on the DOM element that ng-repeat is used on to get any model information about the invalid item
  • In child scopes you could apply a $watch to individual form elements’ $valid property then change the $parent scope’s model values to indicate validation changes

Instead what I wanted to achieve was a re-usable way to ‘bubble’ up validation changes from any scope’s form element to ancestor scopes without having to do any of the following:

  • No jquery DOM selection
  • No hard coding of form names to access the validation objects
  • No requirement to modify other scopes’ values


The way I went about this was to create a very simple custom directive which I’ve called ‘val-bubble’ since it has to do with validation and it ‘bubbles’ up a message to any listening scopes. An input element might then look like this:

<input name="FirstName" type="text" required val-bubble />

Then in an outer scope I can listen for validation changes and do whatever I want with the result:

scope.$on("valBubble", function(evt, args) {
alert("Validation changed for field " + args.ctrl.$name + ". Valid? " + args.isValid);

The args object contains these properties:

  • isValid = is the field valid
  • ctrl = the current form controller object for the field
  • scope = the scope bound to the field being validated
  • element = the DOM element of the field being validated
  • expression = the current $watch expression used to watch this fields validation changes

With all of that information you can easily add some additional functionality to your app based on the currently validating inputs, such as a validation summary or whatever.

Custom directive

The val-bubble custom directive is pretty simple, here’s the code and an explanation below:

app.directive('valBubble', function (formHelper) {
    return {
        require: 'ngModel',
        restrict: "A",
        link: function (scope, element, attr, ctrl) {

            if (!attr.name) {
                throw "valBubble must be set on an input element that has a 'name' attribute";
            }

            var currentForm = formHelper.getCurrentForm(scope);
            if (!currentForm || !currentForm.$name) {
                throw "valBubble requires that a name is assigned to the ng-form containing the validated input";
            }

            //watch the current form's validation for the current field name
            scope.$watch(currentForm.$name + "." + ctrl.$name + ".$valid", function (isValid, lastValue) {
                if (isValid != undefined) {
                    //emit an event upwards
                    scope.$emit("valBubble", {
                        isValid: isValid, // if the field is valid
                        element: element, // the element that the validation applies to
                        expression: this.exp, // the expression that was watched to check validity
                        scope: scope, // the current scope
                        ctrl: ctrl // the current controller
                    });
                }
            });
        }
    };
});

The first thing we’re doing here is limiting this directive to be used only as an attribute and ensuring the element has a model applied to it. Then we make sure that the element has a ‘name’ value applied. After that we are getting a reference to the current form object that this field is contained within using a custom method: formHelper.getCurrentForm … more on this below. Lastly we are applying a $watch to the current element’s $valid property and when this value changes we $emit an event upwards to parent/ancestor scopes to listen for.


Above I mentioned that I wanted a re-usable solution where I didn’t need to hard code things like the current form name. Unfortunately Angular doesn’t really provide a way to do this OOTB (as far as I can tell!) (Update! see here for how to access the current form: http://shazwazza.com/post/Reference-the-current-form-controller-in-AngularJS) so I’ve just created a simple factory object that finds the current form object applied to the current scope. The type check is fairly rudimentary but it works: it simply checks each property that exists on the scope object and tries to detect the object that matches the definition of an Angular form object:

app.factory('formHelper', function() {
    return {
        getCurrentForm: function(scope) {
            var form = null;
            var requiredFormProps = ["$error", "$name", "$dirty", "$pristine", "$valid", "$invalid", "$addControl", "$removeControl", "$setValidity", "$setDirty"];
            for (var p in scope) {
                if (_.isObject(scope[p]) && !_.isFunction(scope[p]) && !_.isArray(scope[p]) && p.substr(0, 1) != "$") {
                    var props = _.keys(scope[p]);
                    if (props.length < requiredFormProps.length) continue;
                    if (_.every(requiredFormProps, function(item) {
                        return _.contains(props, item);
                    })) {
                        form = scope[p];
                        return form;
                    }
                }
            }
            return form;
        }
    };
});
NOTE: the above code has a dependency on UnderscoreJS

So now you can just apply the val-bubble attribute to any input element to ensure its validation changes are published to listening scopes!