@Shazwazza

Shannon Deminick's blog all about web development

Configuring Azure Active Directory login with Umbraco Members

February 18, 2019 02:09

This post is about configuring Azure Active Directory with Umbraco Members (not Users), meaning this is for your front-end website, not the Umbraco back office. I did write up a post about Azure AD with back office users though, so if that is what you are looking for then this is the link.

Install the Nuget packages

First thing to do is get the UmbracoIdentity package installed.

PM > Install-Package UmbracoIdentity

(This will also install the UmbracoIdentity.Core base package)

This package installs some code snippets and updates your web.config to enable ASP.Net Identity for Umbraco members. Umbraco ships with the old, deprecated ASP.Net Membership Providers for members rather than ASP.Net Identity, so this package extends the Umbraco CMS and its members implementation to use the ASP.Net Identity APIs against the built-in members data store. Installing this package removes the (deprecated) FormsAuthentication module from your web.config, so it will no longer be used to authenticate members and the typical members snippets/macros built into Umbraco will not work; use the snippets shipped with this package instead.

To read more about this package see the GitHub repo here.

Next, the OpenIdConnect package needs to be installed

PM > Install-Package Microsoft.Owin.Security.OpenIdConnect

Configure Azure Active Directory

Head over to the Azure Active Directory section on the Azure portal, choose App Registrations (I’m using the Preview functionality for this) and create a New registration

[Screenshot: App registrations / New registration in the Azure portal]

Next fill out the app details

[Screenshot: the app registration details form]

You may also need to enter other redirect URLs depending on how many different environments you have. All of these URLs can be added in the Authentication section of your app in the Azure portal.

For AAD configuration for front-end members, the redirect URLs are just your website's root URL, and it is advised to keep the trailing slash.

Next you will need to enable Id Tokens

[Screenshot: enabling ID tokens for the app registration]

Configure OpenIdConnect

The UmbracoIdentity package will have installed an OWIN startup class in ~/App_Start/UmbracoIdentityStartup.cs (or it could be in App_Code if you are using a website project). This is how ASP.Net Identity is configured for front-end members and where you can specify the configuration for different OAuth providers. There are a few things you'll need to do:

Allow external sign in cookies

If you scroll down to the ConfigureMiddleware method, there will be a line of code to uncomment: app.UseExternalSignInCookie(DefaultAuthenticationTypes.ExternalCookie); this is required for any OAuth providers to work.
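
For context, here is roughly what that part of the generated startup class looks like once the line is uncommented. This is a sketch; the code generated by the UmbracoIdentity package (and the exact method signature) may differ slightly between versions.

//inside ~/App_Start/UmbracoIdentityStartup.cs (generated by the package; may differ by version)
protected override void ConfigureMiddleware(IAppBuilder app)
{
    //... member cookie authentication configured by the package ...

    //Uncomment this so OWIN can store the external (OAuth) login state in a
    // temporary cookie while the member is redirected out to Azure AD
    app.UseExternalSignInCookie(DefaultAuthenticationTypes.ExternalCookie);

    //the Azure AD registration (added in the next step) will go right below:
    //app.ConfigureAzureActiveDirectoryAuth(...);
}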

Enable OpenIdConnect OAuth for AAD

You’ll need to add this extension method class to your code which is some boiler plate code to configure OpenIdConnect with AAD:

public static class UmbracoADAuthExtensions
{
    public static void ConfigureAzureActiveDirectoryAuth(this IAppBuilder app,
        string tenant, string clientId, string postLoginRedirectUri, Guid issuerId,
        string caption = "Active Directory")
    {
        var authority = string.Format(
            System.Globalization.CultureInfo.InvariantCulture,
            "https://login.windows.net/{0}",
            tenant);

        var adOptions = new OpenIdConnectAuthenticationOptions
        {
            ClientId = clientId,
            Authority = authority,
            RedirectUri = postLoginRedirectUri
        };

        adOptions.Caption = caption;
        //Need to set the auth type as the issuer path
        adOptions.AuthenticationType = string.Format(
            System.Globalization.CultureInfo.InvariantCulture,
            "https://sts.windows.net/{0}/",
            issuerId);
        app.UseOpenIdConnectAuthentication(adOptions);
    }
}

Next you’ll need to call this code, add the following line underneath the app.UseExternalSignInCookie method call:

app.ConfigureAzureActiveDirectoryAuth(
    ConfigurationManager.AppSettings["azureAd:tenantId"],
    ConfigurationManager.AppSettings["azureAd:clientId"],
    //The value of this will need to change depending on your current environment
    postLoginRedirectUri: ConfigurationManager.AppSettings["azureAd:redirectUrl"],
    //This is the same as the TenantId
    issuerId: new Guid(ConfigurationManager.AppSettings["azureAd:tenantId"]));

Then you’ll need to add a few appSettings to your web.config (based on your AAD info):

<add key="azureAd:tenantId" value="YOUR-TENANT-ID-GUID" />
<add key="azureAd:clientId" value="YOUR-CLIENT-ID-GUID" />
<add key="azureAd:redirectUrl" value="http://my-test-website/" />

Configure your Umbraco data

The UmbracoIdentity repository has the installation documentation; you must follow these two (very simple) instructions:

  1. You need to update your member type with the securityStamp property
  2. Create the Account document type

Once that is done you will have a member account management page based on the views and snippets installed by the UmbracoIdentity package. The account page will look like this:

[Screenshot: the member account management page]

As you can see, the button text under "Use another service to log in" is the raw login provider name, which is a bit ugly. The good news is that this is easy to change since it's just a partial view installed by the UmbracoIdentity package. Edit the file ~/Views/UmbracoIdentityAccount/ExternalLoginsList.cshtml: the button text is currently rendered with @p.AuthenticationType, but we can easily change this to @p.Caption, which is the same caption text used in the extension method we created. So the whole button code can look like this instead:


<button type="submit" class="btn btn-default"
        id="@p.AuthenticationType"
        name="provider"
        value="@p.AuthenticationType"
        title="Log in using your @p.Caption account">
    @p.Caption
</button>

This is a bit nicer, now the button looks like:

[Screenshot: the external login button showing the friendly caption]

The purpose of all of these snippets and views installed with UmbracoIdentity is for you to customize how the whole flow looks and works so you’ll most likely end up customizing a number of views found in this folder to suit your needs.

That’s it!

Once that’s all configured, if you click on the Active Directory button to log in as a member, you’ll be brought to the standard AAD permission screen:

[Screenshot: the Azure AD permissions/consent screen]

Once you accept you’ll be redirect back to your Account page:

[Screenshot: the Account page after logging in]

Any further customization is then up to you. You can modify how the flow works, whether or not you accept auto-linking accounts (like in the above example), whether you require a member to exist locally before an OAuth account can be linked, etc… All of the views and controller code in UmbracoIdentity is there for you to manipulate. The main files are:

  • ~/Views/Account.cshtml
  • ~/Views/UmbracoIdentityAccount/*
  • ~/Controllers/UmbracoIdentityAccountController.cs
  • ~/App_Start/UmbracoIdentityStartup.cs


Happy coding!

Deploying to Azure from VSTS using publish profiles and msdeploy

October 26, 2017 06:39

In almost all of the examples online about how to deploy various services to Azure, they show the super easy way to do it: authenticate your current account against your Azure subscription, which then grants your VSTS build permission to do all sorts of things… The problem is that not everyone has the security clearance to use the super easy tools in VSTS.

When you attempt to use these nice tools in VSTS you might get an error like this: "Failed to set Azure permission 'RoleAssignmentId: some-guid-goes-here' for the service principal … does not have authorization to perform action 'Microsoft.Authorization/roleAssignments/write' over scope". This is because these nice VSTS tools actually create a custom user behind the scenes in your Azure subscription, but your account doesn't have access to authorize that.

Luckily there’s a work around

MS Deploy … sigh

Maybe there are other workarounds, but this one works, though it's not the most elegant. I thought I'd post my findings here because it was a bit of a pain in the ass to get this all correct.

So here’s the steps:

1. Download the publish profile

You need to get the publish profile from the App Service that you want to deploy to. This can be a website, a staging slot, an Azure Function (and probably a bunch of others).

[Screenshot: downloading the publish profile from the App Service in the Azure portal]

The file downloaded is an XML file containing a bunch of info you’ll need
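
The values you'll use later (publishUrl, msdeploySite, userName, userPWD) all come from attributes in that file. A trimmed-down, anonymised sketch of the MSDeploy profile it contains looks roughly like this; the exact attributes can vary:

<publishData>
  <publishProfile profileName="my-site - Web Deploy"
                  publishMethod="MSDeploy"
                  publishUrl="my-site.scm.azurewebsites.net:443"
                  msdeploySite="my-site"
                  userName="$my-site"
                  userPWD="xxxxxxxxxxxxxxxx"
                  destinationAppUrl="https://my-site.azurewebsites.net" />
  <!-- a second publishProfile element for FTP publishing is usually present too -->
</publishData>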

2. Create a release definition and environment for your deployment

This assumes that you are pretty familiar with VSTS

You’ll want to create an empty environment in your release definition. Normally this is where you could choose the built in fancy VSTS deployment templates like “Azure App Service Deployment” … but as above, this doesn’t work if you don’t have security clearance. Instead, choose ‘Empty’

[Screenshot: selecting the Empty environment template]

Then in your environment tasks, add Batch Script

[Screenshot: adding a Batch Script task to the environment]

3. Setup your batch script

There’s 2 ways to go about this and both depend on a msdeploy build output. This build output is generated by your build in VSTS if you are using a standard VSTS Visual Studio solution build. This will create msdeploy packages for you and will have put them in your artifacts folder. Along with msdeploy packages this will also generate a cmd batch file that executes msdeploy and a readme file to tell you how to execute it which contains some important info that you should read.

So here’s 2 options: Execute the cmd file, or execute msdeploy.exe directly

Execute the cmd file

There’s a bit of documentation about this online but most of it is based on using the SetParameters.xml file to adjust settings… but i just don’t want to use that.

Here’s the Path and Arguments that you need to run:

$(System.DefaultWorkingDirectory)/YOUR_BUILD_NAME/drop/YOUR_MSBUILD_PACKAGE.deploy.cmd
/y "/m:https://${publishUrl}/MSDeploy.axd?site=${msdeploySite}" /u:$(userName) /p:$(userPWD) /a:Basic -enableRule:DoNotDeleteRule "-setParam:name='IIS Web Application Name',value='${msdeploySite}'"

The parameters ${publishUrl}, ${msdeploySite}, $(userName) and $(userPWD) should be added to your VSTS Variables; they correspond exactly to values in the publish profile XML file that you downloaded. These arguments need to be pretty much exact: any misplaced quote, a missing https, etc. will cause this to fail.

Important: the use of -enableRule:DoNotDeleteRule is totally optional; if you want to reset your site to exactly what is in the msdeploy package, you do not want this. If, however, you have user-generated images, content or custom config files on your site that you don't want deleted when you deploy, then you need to set this.

I’m unsure if this will work for Azure Functions deployment (it might!) … but I used the next option to do that:

Execute msdeploy.exe directly

If you execute the CMD file, you’ll see in the VSTS logs the exact msdeploy signature used which is:

"C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe" -source:package='d:\a\r1\a\YOUR_PROJECT_NAME\drop\YOUR_MSDEPLOY_PACKAGE_FILE.zip' -dest:auto,computerName="https://YOUR_PUBLISH_URL/MSDeploy.axd?site=YOUR_PROFILE_NAME",userName=********,password=********,authtype="Basic",includeAcls="False" -verb:sync -disableLink:AppPoolExtension -disableLink:ContentExtension -disableLink:CertificateExtension -setParamFile:"d:\a\r1\a\YOUR_PROJECT_NAME\drop\YOUR_MSDEPLOY_PACKAGE_FILE.SetParameters.xml" -enableRule:DoNotDeleteRule -setParam:name='IIS Web Application Name',value='YOUR_PROFILE_NAME'

So if you wanted, you could take this and execute that directly instead of the CMD file. I use this method to deploy Azure Functions but the script is a little simpler since that deployment doesn’t require all of these parameters. For that I use this for the Path and Arguments:

C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe
-verb:sync -source:package='$(System.DefaultWorkingDirectory)/YOUR_BUILD_NAME/drop/YOUR_MSDEPLOY_PACKAGE.zip' -dest:auto,computerName="https://$(publishUrl)/msdeploy.axd?site=$(msdeploySite)",UserName='$(userName)',Password='$(userPWD)',AuthType='Basic' -setParam:name='IIS Web Application Name',value='$(msdeploySite)'


Hopefully this comes in handy for someone ;)

How to lazily set the multi-term rewrite method on queries in Lucene

September 15, 2017 02:21

For wildcard queries in Lucene where you would like the results ordered by Score, there's a trick you need to do, otherwise all of your scores will come back the same. The reason is that the default behavior of wildcard queries uses CONSTANT_SCORE_AUTO_REWRITE_DEFAULT, which, as the name describes, gives a constant score. The code comments describe why this is the default:

a) Runs faster

b) Does not have the scarcity of terms unduly influence score

c) Avoids any "TooManyBooleanClauses" exceptions

Without fully understanding Lucene that doesn't really mean a whole lot, but the Lucene docs give a little more info:

NOTE: if setRewriteMethod(org.apache.lucene.search.MultiTermQuery.RewriteMethod) is either CONSTANT_SCORE_BOOLEAN_QUERY_REWRITE or SCORING_BOOLEAN_QUERY_REWRITE, you may encounter a BooleanQuery.TooManyClauses exception during searching, which happens when the number of terms to be searched exceeds BooleanQuery.getMaxClauseCount(). Setting setRewriteMethod(org.apache.lucene.search.MultiTermQuery.RewriteMethod) to CONSTANT_SCORE_FILTER_REWRITE prevents this.

The recommended rewrite method is CONSTANT_SCORE_AUTO_REWRITE_DEFAULT: it doesn't spend CPU computing unhelpful scores, and it tries to pick the most performant rewrite method given the query. If you need scoring (like FuzzyQuery), use MultiTermQuery.TopTermsScoringBooleanQueryRewrite which uses a priority queue to only collect competitive terms and not hit this limitation. Note that org.apache.lucene.queryparser.classic.QueryParser produces MultiTermQueries using CONSTANT_SCORE_AUTO_REWRITE_DEFAULT by default.

So the gist is, unless you are ordering by Score this shouldn’t be changed because it will consume more CPU and depending on how many terms you are querying against you might get an exception (though I think that is rare).

So how do you change the default?

That’s super easy, it’s just this line of code:

QueryParser.SetMultiTermRewriteMethod(MultiTermQuery.SCORING_BOOLEAN_QUERY_REWRITE);

But there’s a catch! You must set this flag before you parse any queries with the query parser otherwise it won’t work. All this really does is instruct the query parser to apply this scoring method to any MultiTermQuery or FuzzyQuery implementations it creates. So what if you don’t know if this change should be made before you use the query parser? One scenario might be: At the time of using the query parser, you are unsure if the user constructing the query is going to be sorting by score. In this case you want to change the scoring mechanism just before executing the search but after creating your query.

Setting the value lazily

The good news is that you can set this value lazily just before you execute the search, even after you've used the query parser to create parts of your query. There's only one class type we need to check for that has this API: MultiTermQuery; however, not all implementations of it support rewriting, so we have to handle that too. So given an instance of a Query we can recursively update every query contained within it and manually apply the rewrite method like:

protected void SetScoringBooleanQueryRewriteMethod(Query query)
{
	if (query is MultiTermQuery mtq)
	{
		try
		{
			mtq.SetRewriteMethod(MultiTermQuery.SCORING_BOOLEAN_QUERY_REWRITE);
		}
		catch (NotSupportedException)
		{
			//swallow this, some implementations of MultiTermQuery don't support this like FuzzyQuery
		}
	}
	if (query is BooleanQuery bq)
	{
		foreach (BooleanClause clause in bq.Clauses())
		{
			var q = clause.GetQuery();
			//recurse
			SetScoringBooleanQueryRewriteMethod(q);
		}
	}
}

So you can call this method just before you execute your search and it will still work without having to eagerly use QueryParser.SetMultiTermRewriteMethod(MultiTermQuery.SCORING_BOOLEAN_QUERY_REWRITE); before you use the query parser methods.
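
For example, here is a rough usage sketch written against the older Lucene.Net 2.9-era API that the snippet above targets (member names differ in later versions). It assumes you are inside the class that defines SetScoringBooleanQueryRewriteMethod, and `directory`, `analyzer` and `sortByScore` are placeholders for whatever your search code already has:

//build the query up-front with the query parser as usual
var parser = new QueryParser(Lucene.Net.Util.Version.LUCENE_29, "nodeName", analyzer);
Query query = parser.Parse("hell*");

//...later, once we know the results will be ordered by Score...
if (sortByScore)
{
    //recursively switch any MultiTermQuery found to SCORING_BOOLEAN_QUERY_REWRITE
    SetScoringBooleanQueryRewriteMethod(query);
}

var searcher = new IndexSearcher(directory, true);
TopDocs results = searcher.Search(query, 100);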

Happy searching!

Using AspNet5 OptionsModel

January 12, 2015 04:21

If you’ve used AspNet5 then you’ve probably been using some MVC, in which case you’ve probably seen something like this in your Startup class:

// Add MVC services to the services container
services.AddMvc(configuration)
    .Configure<MvcOptions>(options =>
    {
        //Configure some MVC options like customizing the 
        // view engines, etc...
        options.ViewEngines.Insert(0, typeof(TestViewEngine));
    });

It turns out this syntax for specifying ‘options’ for a given service is a generic pattern that you can use in your own code. In fact the OptionsModel framework is its own code repository: https://github.com/aspnet/Options

I’ve implemented custom options in my AspNet5 project called Smidge (a runtime JavaScript/CSS pre-processing engine) and wanted to share the details since as far as I’ve seen there isn’t any documentation about this.

What are options?

Probably the simplest way to describe the options framework is this: options allow you to configure your application via code during startup.

Options are just a POCO class that contains configuration options to customize the behavior of your library. These option classes can be injected into any of your services with IoC using an interface called Microsoft.Framework.OptionsModel.IOptions. There's a caveat to this POCO class, however: it must contain a parameterless/empty constructor, which means you cannot have services injected into your options class via its constructor. The options framework also allows for ‘named’ options. So, for example, perhaps you have a single options class that you would like to have configured in two different ways: one for ‘staging’ and one for your ‘live’ website.

Creating options

Here’s a really simple example of a POCO options class:

public class CustomMessageOptions
{
    public CustomMessageOptions()
    {
        Message = "";
    }

    public string Message { get; set; }
}

In order to use this options class you need to create an options configuration class. For example:

public class CustomMessageOptionsSetup : ConfigureOptions<CustomMessageOptions>
{
    public CustomMessageOptionsSetup() 
    : base(ConfigureMessageOptions)
    {
    }

    /// <summary>
    /// Set the default options
    /// </summary>
    public static void ConfigureMessageOptions(CustomMessageOptions options)
    {
        options.Message = "Hello world";
    }
}

Then you need to add this class to your IoC container of type Microsoft.Framework.OptionsModel.IConfigureOptions:

services.AddTransient<IConfigureOptions<CustomMessageOptions>, CustomMessageOptionsSetup>();

Using options

To configure your options during startup, you do so in the ConfigureServices method like:

services.Configure<CustomMessageOptions>(options =>
{
    options.Message = "Hello there!";
});

Now you can have these options injected into any of your services using the IOptions interface noted previously:

public class MyCoolService 
{
    public MyCoolService(IOptions<CustomMessageOptions> messageOptions)
    {
        //IOptions exposes an 'Options' property which resolves an instance
        //of CustomMessageOptions
        ConfiguredMessage = messageOptions.Options.Message;
    }

    public string ConfiguredMessage {get; private set;}
}

Named options

As an example, let's say that you want a different message configured for your ‘staging’ and ‘live’ websites. This can be done with named options; here's an example:

services
    .Configure<CustomMessageOptions>(options =>
    {
        options.Message = "Hi! This is the staging site";
    }, "staging")
    .Configure<CustomMessageOptions>(options =>
    {
        options.Message = "Hi! This is the live site";
    }, "live");

Then in your service you can resolve the option instance by name:

public class MyCoolService 
{
    public MyCoolService(IOptions<CustomMessageOptions> messageOptions)
    {
        //IRL This value would probably be set via some environment variable
        var configEnvironment = "staging";

        //IOptions exposes a 'GetNamedOptions' method which resolves an instance
        //of CustomMessageOptions based on a defined named configuration
        ConfiguredMessage = messageOptions.GetNamedOptions(configEnvironment).Message;
    }

    public string ConfiguredMessage {get; private set;}
}

Configuring options with other services

Since your options class is just a POCO object and must have a parameterless/empty constructor, you cannot inject services into the options class. However, there is a way to use IoC services in your options classes by customizing the ConfigureOptions class created above. In many cases this won't be necessary, but it really depends on how you are using options. As a (bad) example, let's say we wanted to expose a custom helper service called SiteHelper on the CustomMessageOptions class that can be used by a developer to create the message. The end result syntax might look like:

services.Configure<CustomMessageOptions>(options =>
    {
        var siteId = options.SiteHelper.GetSiteId();
        options.Message = "Hi! This is the staging site with id: " + siteId;
    });

In order for that to work, the options.SiteHelper property needs to be initialized. This is done with the CustomMessageOptionsSetup class (created above), which has been added to the IoC container and can therefore have other services injected into it. The resulting class would look like:

public class CustomMessageOptionsSetup : ConfigureOptions<CustomMessageOptions>
{
    //SiteHelper gets injected via IoC
    public CustomMessageOptionsSetup(SiteHelper siteHelper) 
    : base(ConfigureMessageOptions)
    {
        SiteHelper = siteHelper;
    }

    public SiteHelper SiteHelper { get; private set; }

    /// <summary>
    /// Set the default options
    /// </summary>
    public static void ConfigureMessageOptions(CustomMessageOptions options)
    {
        options.Message = "Hello world";
    }

    /// <summary>
    /// Allows for configuring the options instance before options are set
    /// </summary>
    public override void Configure(CustomMessageOptions options, string name = "")
    {
        //Assign the site helper instance
        options.SiteHelper = SiteHelper;

        base.Configure(options, name);
    }
}

To give you a real-life example of why this might be useful: in my Smidge project I allow developers to create named JavaScript/CSS bundles during startup using options. In some cases a developer might want to manipulate the file processing pipeline for a given bundle, and in that case they need access to a service called PreProcessPipelineFactory which needs to come from IoC. The usage might look like:

services.AddSmidge()
    .Configure<Bundles>(bundles =>
    {                   
        bundles.Create("test-bundle-3", 
            bundles.PipelineFactory.GetPipeline(
                //add as many processor types as you want
                typeof(DotLess), typeof(JsMin)), 
            WebFileType.Js, 
            "~/Js/Bundle2");
    });

In the above, the bundles.PipelineFactory is a property on the bundles (options) class which gets initialized in my own ConfigureOptions class.

 

Hopefully this helps anyone looking to use custom options in their AspNet5 libraries!

Introducing ‘Smidge’ – an ASP.NET 5 runtime JS/CSS pre-processor

December 11, 2014 23:19

During the past month I decided to dive deep into learning ASP.NET 5, and what better way to learn than to start a new OSS project :)

I chose to make a new, simple and extensible JavaScript/CSS runtime pre-processor for ASP.NET 5. It does file minification, combination and compression, has a nice file caching layer and it's all done in async operations. I ported over a few ideas and code snippets from CDF (client dependency framework) but with a more modern approach. I've called it ‘Smidge’ = something really small.

The project is on GitHub; it's still a work in progress but it's functional and there's even some documentation! In the next few weeks I'll get more of the code and docs updated and hopefully have a beta release out. In the meantime, you can clone the source, browse the code, build it and of course use it if you like.

Project details

It’s currently only targeting aspnet50 and not the Core CLR… I didn’t start with Core CLR because there was some legacy code I had to port over and I wanted to get something up and working relatively quickly. It shouldn’t be too much work to convert to Core CLR and Mono, hopefully I’ll find time to do that soon. It’s referencing all of the beta-* libraries from the ASP.NET 5 nightly myget feeds since there’s some code I’m using that isn’t available in the current beta1 release (like Microsoft.AspNet.WebUtilities.UriHelper). The target KRE version is currently KRE-CLR-amd64 1.0.0-beta2-10760.

Installation

I’ve put up an Alpha 1 release on Nuget, so you can install it from there:

PM> Install-Package Smidge -Pre

There’s some installation instructions here, you’ll need to add the smidge.json file yourself for now, can’t figure out how to get VS 2015 (kpm pack) to package that up … more learning required!

 

There’s certainly a lot of detective work involved in learning ASP.NET 5 but with the code being open source and browse-able/searchable on GitHub, it makes finding what you need fairly easy.

Get JQuery requests to play nicely with AngularJS CSRF convention

December 6, 2013 19:06

Decided to write this quick post for anyone searching on this topic. AngularJS has its own convention for CSRF (Cross Site Request Forgery) protection, but in some cases you'll be calling these same server-side services via JQuery, so you might need to get JQuery requests to also follow Angular's convention.

For information about Angular’s CSRF protection see the “Security Considerations” part of Angular’s $http documentation.

Luckily it’s pretty darn easy to get JQuery to follow this convention too and this will also work with 3rd party plugins that use JQuery for requests like Blueimp file uploader. The easiest way to get this done is to set the global JQuery $.ajax rules. Probably the best place to do this is in your Angular app.run statement:

app.run(function ($cookies) {

    //This sets the default jquery ajax headers to include our csrf token, we
    // need to use the beforeSend method because the token might change
    // (different for each logged in user)
    $.ajaxSetup({
        beforeSend: function (xhr) {
            xhr.setRequestHeader("X-XSRF-TOKEN", $cookies["XSRF-TOKEN"]);
        }
    }); 
});

That’s it!

It’s important to note to set the header using beforeSend, if you just set $.ajax options ‘headers’ section directly that means the header cannot be dynamic – which you’ll probably want if you have users logging in/out.

Reference the current form controller in AngularJS

June 26, 2013 23:08

I previously wrote a post about Listening for validation changes in AngularJS which, with my knowledge at that time, required a handy hack to get a reference to the currently scoped form controller (ngForm) for a given input control. I also complained a bit that it seemed that angular didn't really provide a way to reference the current form controller without this little hack… well, it turns out I was wrong! :)

AngularJS seems kind of like ASP.Net MVC in the early days when there wasn't much documentation… It definitely pays off to read through the source code to figure out how to do more complicated things. I had a bit of a ‘light bulb’ moment when I realized that ngForm was itself a directive/controller, and had recently noticed that the ‘require’ parameter of setting up a directive allows you to search for controllers in the current directive's ancestry (i.e. prefix the required controller with a hat: ^).

What does the require parameter of a directive do?

Let's face it, the directive documentation for AngularJS is in desperate need of being updated so that human beings can understand it (as noted by the many comments at the bottom). So I'll try to explain what the ‘require’ parameter actually does and how to use it.

We’ll create a simple custom validation directive which will invalidate a field if the value is “blah”

function blahValidator() {
    return {
        require: 'ngModel',
        link: function(scope, elm, attr, ctrl) {
            
            var validator = function(value) {
                if (ctrl.$viewValue == "blah") {
                    ctrl.$setValidity('blah', false);
                    return null;
                }
                else {
                    ctrl.$setValidity('blah', true);
                    return value;
                }
            };

            ctrl.$formatters.push(validator);
            ctrl.$parsers.push(validator);
        }
    };
}

You’ll notice that we have a ‘require’ parameter specified for ‘ngModel’. What is happening here is that when we assign this directive to an input field, angular will ensure that the input field also has a defined ng-model attribute on it as well. Then angular will pass in the instance of the ng-model controller to the ‘ctrl’ parameter of the link function.

So, the ‘require’ parameter dictates what the ‘ctrl’ parameter of the link function equals.

You can also require multiple controllers:

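The original post illustrates this with a screenshot of code; below is a minimal sketch of the same idea, where the injected ctrls parameter becomes an array in the same order as the require array:

function blahValidator() {
    return {
        //require both the ngModel controller on this element and a parent form controller
        require: ['ngModel', '^form'],
        link: function (scope, elm, attr, ctrls) {
            var modelCtrl = ctrls[0]; //the ngModel controller
            var formCtrl = ctrls[1];  //the containing form controller
            //... validation logic as above ...
        }
    };
}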

NOTE: the ctrl/ctrls parameter in the above 2 examples can be called whatever you want

Special prefixes

Angular has 2 special prefixes for the ‘require’ parameter:

^ = search the current directive's ancestry for the controller

? = don't throw an exception if the required controller is not found, making it ‘optional’, not a requirement

You can also combine them so angular will search the ancestry but it can be optional too, such as: ^?ngController

In the above example, the blahValidator will only work if the directive is declared inside of an ng-controller block.

Referencing the current ng-form

Given the above examples, and knowing that ngForm is itself a controller, we should be able to just make a requirement on ngForm and have it injected into the directive. BUT, it won't work the way you expect. For some reason angular references the ngForm controller by the name “form”, which I discovered by browsing the source of angular.

So now it's easy to get a reference to the containing ngForm controller; all you need to do is add a ‘require’ parameter to your directive that looks like:

require: '^form'
and it will be injected into the ctrl parameter of your link function.

NDepend review = pretty cool!

June 21, 2013 23:47

For a while I’ve been wanting to automate some reporting from our build process to show some fairly simple statistical information such as

  • Obsoleted code
  • Internal code planned to be made public
  • Code flagged to be obsoleted
  • Code flagged as experimental

NOTE: some of the above are based on our own internal C# attributes

Originally I was just going to write some code to show me this, which would have been fairly straightforward, but I knew that NDepend has a code query language, natively outputs reports and can be integrated into the build process, so I figured I'd give it a go.

So I went and installed NDepend and ran the analysis against my current project … I wasn’t quite prepared for the amount of information it gave me! :)

Reporting

I quickly found out that the reports that I wanted to create were insanely simple to write. I then lost a large part of my day exploring the reports it generates OOTB and I realized I hadn’t even touched the surface of the information this tool could give me. I was mostly interested in statistical analysis but this will also happily tell you about how bad or good your code is, how much test coverage you’ve got, what backwards compatibility you broke with your latest release, what code is most used in your project and what code isn’t used at all… and the list goes on and on.

To give you an example of the first report I wanted to write, here’s all I wrote to give me what methods in my codebase were obsoleted:

from m in JustMyCode.Methods where m.HasAttribute("System.ObsoleteAttribute") select m
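
Along the same lines, the other items in my list are just variations on which attribute is queried. For example (the attribute name below is one of our own internal markers, so substitute whatever your codebase uses):

// CQLinq: methods we've flagged as experimental via our internal attribute
from m in JustMyCode.Methods where m.HasAttribute("MyCompany.Core.ExperimentalAttribute") select m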

What’s cool about the report output is that you can output to HTML or XML, so if you want to publish it on your site you can just write some XSLT to make it look the way you want.

Project comparison

Another really great feature that I thought would be fun to automate is the code analysis against previous builds so you know straight away if you are breaking compatibility with previous releases. I think it has great potential but what I did discover is that in some cases the OOTB queries for this aren’t perfect. It will tell you what public types, methods, and fields have been removed or changed and the query is pretty straightforward but… In many cases when we’re trying to maintain backwards compatibility while creating new APIs we’ll remove a type’s methods/properties and just make the legacy type inherit from the new API class (or some other trickery similar to that). This might break compatibility if people are using reflection or specific type checking but it wont break the API usage which I’m mostly interested in. I haven’t figured it out yet but I’m sure there’s some zany NDepend query I can write to check that.

Automation

So now that I know I can extract all this info, it's time to decide what to export. Thankfully NDepend comes fully equipped with its own MSBuild integration, so automating the report output should be pretty straightforward. I also noticed that if you wanted to be ultra careful you can integrate it with your build server and have your builds fail if some of your queries cause errors, like the above: if you break compatibility the build will fail. Or anything you want really, like failing the build if we detect you've written a method that has too many lines of code.

Pretty cool indeed!

I was going to use NDepend just to output reports, which it clearly does without issue. What I didn't realize was how much detail you can get about your code with this tool. There are some tools in there that were a bit over my head, but I suppose if you are working on really huge projects and need anything from a quick snapshot to insane detail about what dependencies your code has on other code, it'll do that easily as well.

The UI is a bit overwhelming at first and could probably do with some work but TBH I’m not sure how else you’d display that much information nicely. Once I figured out how the whole query and reporting structures work then it was dead simple.

Happy reporting :)

Listening for validation changes in AngularJS

May 28, 2013 06:18

In some applications it can be really useful to have controllers listen for validation changes, especially in more complicated AngularJS apps where ‘ng-repeat’ is used to render form controls. There are plenty of cases where a parent scope might need to know about validation changes based on child scopes… one such case is a validation summary. There are a couple of ways to implement this (and probably more) but they all seem a bit hacky, such as:

  • Apply a $watch to the current form object’s $valid property in the parent scope, then use jQuery to look for elements that have a class like ‘invalid’
    • You could then use the scope() function on the DOM element that ng-repeat is used on to get any model information about the invalid item
  • In child scopes you could apply a $watch to individual form elements’ $valid property then change the $parent scope’s model values to indicate validation changes

Instead what I wanted to achieve was a re-usable way to ‘bubble’ up validation changes from any scope’s form element to ancestor scopes without having to do any of the following:

  • No jquery DOM selection
  • No hard coding of form names to access the validation objects
  • No requirement to modify other scopes' values

Implementation

The way I went about this was to create a very simple custom directive which I’ve called ‘val-bubble’ since it has to do with validation and it ‘bubbles’ up a message to any listening scopes. An input element might then look like this:

<input name="FirstName" type="text" required val-bubble />

Then in an outer scope I can then listen for validation changes and do whatever I want with the result:

scope.$on("valBubble", function(evt, args) {
alert("Validation changed for field " + args.ctrl.$name + ". Valid? " + args.isValid);
});

The args object contains these properties:

  • isValid = is the field valid
  • ctrl = the current form controller object for the field
  • scope = the scope bound to the field being validated
  • element = the DOM element of the field being validated
  • expression = the current $watch expression used to watch this fields validation changes

With all of that information you can easily add some additional functionality to your app based on the currently validating inputs, such as a validation summary or whatever.
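
For example, a containing controller could keep a simple validation summary up to date from these events (a rough sketch; how you model the summary is entirely up to you):

//in a parent/ancestor controller: track invalid field names for a summary
$scope.invalidFields = [];

$scope.$on("valBubble", function (evt, args) {
    var name = args.ctrl.$name;
    var idx = $scope.invalidFields.indexOf(name);
    if (!args.isValid && idx === -1) {
        $scope.invalidFields.push(name);
    }
    else if (args.isValid && idx !== -1) {
        $scope.invalidFields.splice(idx, 1);
    }
});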

Custom directive

The val-bubble custom directive is pretty simple, here’s the code and an explanation below:

app.directive('valBubble', function (formHelper) {
    return {
        require: 'ngModel',
        restrict: "A",
        link: function (scope, element, attr, ctrl) {

            if (!attr.name) {
                throw "valBubble must be set on an input element that has a 'name' attribute";
            }

            var currentForm = formHelper.getCurrentForm(scope);
            if (!currentForm || !currentForm.$name)
                throw "valBubble requires that a name is assigned to the ng-form containing the validated input";

            //watch the current form's validation for the current field name
            scope.$watch(currentForm.$name + "." + ctrl.$name + ".$valid", function (isValid, lastValue) {
                if (isValid != undefined) {
                    //emit an event upwards
                    scope.$emit("valBubble", {
                        isValid: isValid,     // if the field is valid
                        element: element,     // the element that the validation applies to
                        expression: this.exp, // the expression that was watched to check validity
                        scope: scope,         // the current scope
                        ctrl: ctrl            // the current controller
                    });
                }
            });
        }
    };
});

The first thing we’re doing here is limiting this directive to be used only as an attribute and ensuring the element has a model applied to it. Then we make sure that the element has a ‘name’ value applied. After that we are getting a reference to the current form object that this field is contained within using a custom method: formHelper.getCurrentForm … more on this below. Lastly we are applying a $watch to the current element’s $valid property and when this value changes we $emit an event upwards to parent/ancestor scopes to listen for.

formHelper

Above I mentioned that I wanted a re-usable solution where I didn’t need to hard code things like the current form name. Unfortunately Angular doesn’t really provide a way to do this OOTB (as far as I can tell!) (Update! see here on how to access the current form: http://shazwazza.com/post/Reference-the-current-form-controller-in-AngularJS) so I’ve just created a simple factory object that finds the current form object applied to the current scope. The type check is fairly rudimentary but it works, it’s simply checking each property that exists on the scope object and tries to detect the object that matches the definition of an Angular form object:

app.factory('formHelper', function () {
    return {
        getCurrentForm: function (scope) {
            var form = null;
            var requiredFormProps = ["$error", "$name", "$dirty", "$pristine", "$valid", "$invalid", "$addControl", "$removeControl", "$setValidity", "$setDirty"];
            for (var p in scope) {
                if (_.isObject(scope[p]) && !_.isFunction(scope[p]) && !_.isArray(scope[p]) && p.substr(0, 1) != "$") {
                    var props = _.keys(scope[p]);
                    if (props.length < requiredFormProps.length) continue;
                    if (_.every(requiredFormProps, function (item) {
                        return _.contains(props, item);
                    })) {
                        form = scope[p];
                        break;
                    }
                }
            }
            return form;
        }
    };
});

NOTE: the above code has a dependency on UnderscoreJS

So now you can just apply the val-bubble attribute to any input element to ensure it’s validation changes are published to listening scopes!

Uploading files and JSON data in the same request with Angular JS

May 25, 2013 04:01

I decided to write a quick blog post about this because much of the documentation and examples about this seem to be a bit scattered. What this achieves is the ability to upload any number of files along with any other type of data in one request. For this example we'll send up JSON data along with some files.

File upload directive

First we’ll create a simple custom file upload angular directive

app.directive('fileUpload', function () {
    return {
        scope: true, //create a new scope
        link: function (scope, el, attrs) {
            el.bind('change', function (event) {
                var files = event.target.files;
                //iterate files since 'multiple' may be specified on the element
                for (var i = 0; i < files.length; i++) {
                    //emit event upward
                    scope.$emit("fileSelected", { file: files[i] });
                }
            });
        }
    };
});

The usage of this is simple:

<input type="file" file-upload multiple/>

The ‘multiple’ parameter indicates that the user can select multiple files to upload which this example fully supports.

In the directive we ensure a new scope is created and then listen for changes made to the file input element. When changes are detected, we emit an event to all ancestor scopes (upward) with the file object as a parameter.

Mark-up & the controller

Next we’ll create a controller to:

  • Create a model to bind to
  • Create a collection of files
  • Consume this event so we can assign the files to the collection
  • Create a method to post it all to the server

NOTE: I’ve put all this functionality in this controller for brevity, in most cases you’d have a separate factory to handle posting the data

With the controller in place, the mark-up might look like this (and will display the file names of all of the files selected):

<div ng-controller="Ctrl">
    <input type="file" file-upload multiple/>
    <ul>
        <li ng-repeat="file in files">{{file.name}}</li>
    </ul>
</div>

The controller code below contains some important comments relating to how the data gets posted up to the server, namely the ‘Content-Type’ header as the value that needs to be set is a bit quirky.

function Ctrl($scope, $http) {

    //a simple model to bind to and send to the server
    $scope.model = {
        name: "",
        comments: ""
    };

    //an array of files selected
    $scope.files = [];

    //listen for the file selected event
    $scope.$on("fileSelected", function (event, args) {
        $scope.$apply(function () {
            //add the file object to the scope's files collection
            $scope.files.push(args.file);
        });
    });

    //the save method
    $scope.save = function () {
        $http({
            method: 'POST',
            url: "/Api/PostStuff",
            //IMPORTANT!!! You might think this should be set to 'multipart/form-data'
            // but this is not true because when we are sending up files the request
            // needs to include a 'boundary' parameter which identifies the boundary
            // name between parts in this multi-part request and setting the Content-type
            // manually will not set this boundary parameter. For whatever reason,
            // setting the Content-type to 'false' will force the request to automatically
            // populate the headers properly including the boundary parameter.
            headers: { 'Content-Type': false },
            //This method will allow us to change how the data is sent up to the server
            // for which we'll need to encapsulate the model data in 'FormData'
            transformRequest: function (data) {
                var formData = new FormData();
                //need to convert our json object to a string version of json otherwise
                // the browser will do a 'toString()' on the object which will result
                // in the value '[Object object]' on the server.
                formData.append("model", angular.toJson(data.model));
                //now add all of the assigned files
                for (var i = 0; i < data.files.length; i++) {
                    //add each file to the form data and iteratively name them
                    formData.append("file" + i, data.files[i]);
                }
                return formData;
            },
            //Create an object that contains the model and files which will be transformed
            // in the above transformRequest method
            data: { model: $scope.model, files: $scope.files }
        }).
        success(function (data, status, headers, config) {
            alert("success!");
        }).
        error(function (data, status, headers, config) {
            alert("failed!");
        });
    };
}

Handling the data server-side

This example shows how to handle the data on the server side using ASP.Net Web API; I'm sure it's reasonably easy to do on other server-side platforms too.

public async Task<HttpResponseMessage> PostStuff()
{
    if (!Request.Content.IsMimeMultipartContent())
    {
        throw new HttpResponseException(HttpStatusCode.UnsupportedMediaType);
    }

    var root = HttpContext.Current.Server.MapPath("~/App_Data/Temp/FileUploads");
    Directory.CreateDirectory(root);
    var provider = new MultipartFormDataStreamProvider(root);
    var result = await Request.Content.ReadAsMultipartAsync(provider);
    if (result.FormData["model"] == null)
    {
        throw new HttpResponseException(HttpStatusCode.BadRequest);
    }

    var model = result.FormData["model"];
    //TODO: Do something with the json model which is currently a string

    //get the files
    foreach (var file in result.FileData)
    {
        //TODO: Do something with each uploaded file
    }

    return Request.CreateResponse(HttpStatusCode.OK, "success!");
}
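
To fill in those TODOs, one option is to deserialize the model string with Json.NET and read each uploaded file from the temp location the provider wrote it to. A rough sketch, where PostStuffModel is a hypothetical DTO matching the JavaScript model:

//deserialize the JSON model string (PostStuffModel is a hypothetical class
// with Name/Comments properties matching the JavaScript model)
var model = JsonConvert.DeserializeObject<PostStuffModel>(result.FormData["model"]);

foreach (var file in result.FileData)
{
    //the temp file written to the FileUploads folder
    var localPath = file.LocalFileName;
    //the original client file name (Web API wraps it in quotes)
    var originalName = file.Headers.ContentDisposition.FileName.Trim('"');
    //TODO: move/process the file as needed
}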