Shazwazza

Shannon Deminick's blog all about .Net, Umbraco & Web development

WebApi per controller configuration

March 20, 2014 23:56 by Shannon Deminick

This is more of a blog post about what not to do :)

At first glance, it would seem relatively painless to change your WebApi controller’s configuration, I’d assume most people would do what I initially did. Say for example you wanted to have your controller only support JSON, here’s what I initially tried (DO NOT DO THIS):

protected override void Initialize(HttpControllerContext controllerContext)
{
    base.Initialize(controllerContext);
    var toRemove = controllerContext.Configuration.Formatters
        .Where(t => (t is JsonMediaTypeFormatter) == false).ToList();
    foreach (var r in toRemove)
    {
        controllerContext.Configuration.Formatters.Remove(r);
    }
}

Simple, right? Just override Initialize in your controller and change the current controllerContext’s configuration…. WRONG :(

What this is actually doing is modifying the global WebApi configuration, though it’s not at all clear that this is the case. Unfortunately the Configuration property on the controllerContext instance is assigned the global configuration instance. I’m assuming the WebApi team has done this for a reason but I’m not sure what that is; as seen above, it makes it very easy to change the global WebApi configuration at runtime by accident. It seems to me it might have been a better idea to clone the global configuration instance and assign that to each HttpControllerContext object.

The correct way to specify per-controller custom configuration in WebApi is to use the IControllerConfiguration interface. You can read all about it here; it is fairly simple, but it does seem like you have to jump through a few hoops for something that initially seems very straightforward.
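For reference, here’s a minimal sketch of what that looks like (the attribute name is mine; IControllerConfiguration, HttpControllerSettings and HttpControllerDescriptor are the WebApi types):

using System;
using System.Web.Http.Controllers;

public class JsonOnlyConfigurationAttribute : Attribute, IControllerConfiguration
{
    public void Initialize(HttpControllerSettings controllerSettings,
        HttpControllerDescriptor controllerDescriptor)
    {
        //controllerSettings is a per-controller copy, so changing it here does not
        // touch the global configuration like the Initialize override above does
        var jsonFormatter = controllerSettings.Formatters.JsonFormatter;
        controllerSettings.Formatters.Clear();
        controllerSettings.Formatters.Add(jsonFormatter);
    }
}

//usage: decorate the controller
//[JsonOnlyConfiguration]
//public class ProductsController : ApiController { }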

Get JQuery requests to play nicely with AngularJS CSRF convention

December 6, 2013 18:06 by Shannon Deminick

Decided to write this quick post for anyone searching on this topic. AngularJS has its own convention for CSRF (Cross Site Request Forgery) protection, but in some cases you’ll be calling these same server side services via JQuery, so you might need to get JQuery requests to also follow Angular’s convention.

For information about Angular’s CSRF protection see the “Security Considerations” part of Angular’s $http documentation.

Luckily it’s pretty darn easy to get JQuery to follow this convention too, and this will also work with 3rd party plugins that use JQuery for requests, like the Blueimp file uploader. The easiest way to get this done is to set global JQuery $.ajax options with $.ajaxSetup. Probably the best place to do this is in your Angular app.run statement:

app.run(function ($cookies) {

    //This sets the default jquery ajax headers to include our csrf token, we
    // need to use the beforeSend method because the token might change 
    // (different for each logged in user)
    $.ajaxSetup({
        beforeSend: function (xhr) {
            xhr.setRequestHeader("X-XSRF-TOKEN", $cookies["XSRF-TOKEN"]);
        }
    }); 
});

That’s it!

It’s important to set the header using beforeSend: if you set the $.ajax ‘headers’ option directly, the header value cannot be dynamic – which you’ll probably want if you have users logging in/out.
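For comparison, this is roughly the static version to avoid (a sketch only) – the cookie is read once when this code runs, so a token issued after a different user logs in would never be picked up:

//DON'T do this: the token is captured only once at setup time
$.ajaxSetup({
    headers: { "X-XSRF-TOKEN": $cookies["XSRF-TOKEN"] }
});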

How to inspect assemblies with reflection before including them in your application

October 25, 2013 20:13 by Shannon Deminick

If your application supports plugins or extensions, in some cases it might be useful to scan a package’s assemblies before importing them into your app. Some reasons for this might be:

  • Checking if the package has missing assembly references
  • Checking if the assembly references obsolete types that might make the package unstable
  • Checking the .Net targeted framework of the assembly
  • Any other assembly inspection to determine whether it is compatible with your app

To do this you can load assemblies using the Assembly.ReflectionOnlyLoadFrom and Assembly.ReflectionOnlyLoad methods, which load assemblies into a special assembly load context called the “reflection-only context” and let you safely inspect them.
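As a small illustration of the kinds of checks listed above (the file path here is made up), an assembly loaded this way can be queried for its runtime version and references without executing any of its code:

//load the assembly for inspection only – none of its code will run
var asm = Assembly.ReflectionOnlyLoadFrom(@"C:\packages\MyPackage\bin\MyPackage.dll");

//the CLR version the assembly was built against, e.g. "v4.0.30319"
Console.WriteLine(asm.ImageRuntimeVersion);

//every assembly it references, which can be checked against what your app ships with
foreach (var reference in asm.GetReferencedAssemblies())
{
    Console.WriteLine(reference.FullName);
}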

Further reading:

A great article on Reflection Only Assembly Loading and if you want to know more about assembly load contexts, here’s an explanation.

Loading in assemblies

For this example, we’ll assume that all of the assemblies for a package are in some folder outside of the normal /bin folder (i.e. not loaded into the current app) and that each assembly for the package needs to be inspected for type references that are not supported.

A common mistake when loading in assemblies with reflection (especially in the LoadFrom context) is to load them one at a time, whereas they generally all need to be loaded before inspecting them since they probably have references to each other. Another thing that generally must be done is adding an event listener on AppDomain.ReflectionOnlyAssemblyResolve, because even though all known referenced assemblies are loaded into the context, some assemblies might not be explicitly referenced but are still needed to load the assembly. This handler provides a way to resolve those missing references.

The first thing to do is set up the event handler:

AppDomain.CurrentDomain.ReflectionOnlyAssemblyResolve += (s, e) =>
{
    var a = Assembly.ReflectionOnlyLoad(e.Name);
    if (a == null) throw new TypeLoadException("Could not load assembly " + e.Name);
    return a;
};

Next we need to load all of the assembly files in the folder

foreach (var f in files) Assembly.ReflectionOnlyLoadFrom(f);

Then load all of their referenced assemblies into the reflection-only context:

//Then load each referenced assembly into the context
var assembliesWithErrors = new List<string>();
foreach (var f in files)
{
    var reflectedAssembly = Assembly.ReflectionOnlyLoadFrom(f);
    foreach (var assemblyName in reflectedAssembly.GetReferencedAssemblies())
    {
        try
        {
            Assembly.ReflectionOnlyLoad(assemblyName.FullName);
        }
        catch (FileNotFoundException)
        {
            //if an exception occurs it means that a referenced assembly could not be found                        
            errors.Add(
                string.Concat("This package references the assembly '",
                    assemblyName.Name,
                    "' which was not found, this package may have problems running"));
            assembliesWithErrors.Add(f);
        }
    }
}

In the catch, I’m detecting assembly reference errors and adding an error message to the outgoing method response and also adding that assembly to the assembliesWithErrors list which is used later to ensure we’re not inspecting assemblies that couldn’t be loaded.

Now that all the assemblies are loaded we can inspect them (ignoring the ones with errors). This example looks for any assemblies that have types implementing ‘MyType’; if they do, the assembly is added to a list to return from the current method.

//now that we have all referenced types into the context we can look up stuff
foreach (var f in files.Except(assembliesWithErrors))
{
    //now we need to see if they contain any type 'MyType'
    var reflectedAssembly = Assembly.ReflectionOnlyLoadFrom(f);
    var found = reflectedAssembly.GetExportedTypes()
        .Where(TypeHelper.IsTypeAssignableFrom<MyType>);
    if (found.Any())
    {
        dllsWithReference.Add(reflectedAssembly.FullName);
    }
}

Separate AppDomain

It’s best to execute all of this logic in a separate AppDomain because once assemblies are loaded into a context they cannot be unloaded, and since we are loading them in from files, those files will remain locked until the AppDomain is shut down. Explaining how to create a separate AppDomain is outside the scope of this article but the code is included in the source below.
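If you haven’t done this before, a rough sketch of the pattern looks something like the following – AssemblyInspector is a hypothetical MarshalByRefObject that would wrap the reflection-only code above, and the real implementation is in the gist below:

//create a throw-away domain for the inspection work
var sandbox = AppDomain.CreateDomain("AssemblyInspection");
try
{
    //instantiate the (hypothetical) inspector inside the new domain and call across the boundary
    var inspector = (AssemblyInspector)sandbox.CreateInstanceAndUnwrap(
        typeof(AssemblyInspector).Assembly.FullName,
        typeof(AssemblyInspector).FullName);
    var errors = inspector.InspectFolder(packageFolder);
}
finally
{
    //unloading the domain releases the file locks on the inspected assemblies
    AppDomain.Unload(sandbox);
}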

Source Code

Here’s a class that encapsulates all of this logic, and of course you can do much more when inspecting assemblies for various types.

https://gist.github.com/Shandem/7147978

Custom MVC routing in Umbraco

July 4, 2013 21:00 by Shannon Deminick

This post will describe how you can declare your own custom MVC routes in order to execute your own custom controllers in Umbraco but still be able to render Umbraco views with the same model that Umbraco uses natively.

NOTE: This post is not about trying to execute a particular Umbraco page under a custom URL; that functionality can be accomplished by creating a custom IContentFinder (in v6.1), or by applying the umbracoUrlAlias property.

There’s a long (but very useful) thread on Our describing various needs for custom MVC routing inside of Umbraco, definitely worth a read. Here I’ll try to describe a pretty easy way to accomplish this. I’m using Umbraco v6.0.7 (but I’m pretty sure this will work in v4.10+ as well).

Create the route

This example will use an IApplicationEventHandler (in 6.1 you should use the base class ApplicationEventHandler instead). Here I’m defining a custom route for handling products on my website. The example URLs that I want handled will be:

  • /Products/Product/ProductA
  • /Products/Category/CategoryA

 

public class MyStartupHandler : IApplicationEventHandler
{
    public void OnApplicationStarted(
        UmbracoApplicationBase umbracoApplication, 
        ApplicationContext applicationContext)
    {
        //Create a custom route
        RouteTable.Routes.MapRoute(
            "test",
            "Products/{action}/{id}",
            new
                {
                    controller = "MyProduct", 
                    action = "Product", 
                    id = UrlParameter.Optional
                });           
    }
    public void OnApplicationInitialized(
        UmbracoApplicationBase umbracoApplication, 
        ApplicationContext applicationContext)
    {
    }
    public void OnApplicationStarting(
        UmbracoApplicationBase umbracoApplication, 
        ApplicationContext applicationContext)
    {
    }
}

Create the controller

With the above route in place, I need to create a controller called “MyProductController”. The base class it will inherit from is Umbraco.Web.Mvc.PluginController. This abstract class exposes many of the underlying Umbraco objects that I might need to work with such as an UmbracoHelper, UmbracoContext, ApplicationContext, etc… Also note that the PluginController doesn’t get auto-routed like a SurfaceController, which is good because we only want to route our controller once. In 6.1 you can inherit from a different controller called Umbraco.Web.Mvc.UmbracoController, which is what the PluginController will inherit from in the next version.

Constructor

The first thing is to define the constructors, since the PluginController doesn’t have an empty constructor but we want ours to have one (unless you have IoC set up).

public class MyProductController : PluginController
{
    public MyProductController()
        : this(UmbracoContext.Current)
    {            
    }

    public MyProductController(UmbracoContext umbracoContext) 
        : base(umbracoContext)
    {
    }
}

Actions

Next we need to create the controller Actions. These actions will need to look up either a Product or a Category based on the ‘id’ string they get passed. For example, given the URL /Products/Category/CategoryA, the id would be CategoryA and it would execute the Category action.

In my Umbraco installation, I have 2 document types with aliases: “Product” and “ProductCategory”

[image: the Product and ProductCategory document types in the back office]

To perform the lookup in the controller Actions we’ll use the UmbracoHelper.TypedSearch overload which uses Examine.

public ActionResult Category(string id)
{
    var criteria = ExamineManager.Instance.DefaultSearchProvider
        .CreateSearchCriteria("content");
    var filter = criteria.NodeTypeAlias("ProductCategory").And().NodeName(id);
    var result = Umbraco.TypedSearch(filter.Compile()).ToArray();
    if (!result.Any())
    {
        throw new HttpException(404, "No category");
    }
    return View("ProductCategory", CreateRenderModel(result.First()));
}

public ActionResult Product(string id)
{
    var criteria = ExamineManager.Instance.DefaultSearchProvider
        .CreateSearchCriteria("content");
    var filter = criteria.NodeTypeAlias("Product").And().NodeName(id);
    var result = Umbraco.TypedSearch(filter.Compile()).ToArray();
    if (!result.Any())
    {
        throw new HttpException(404, "No product");
    }
    return View("Product", CreateRenderModel(result.First()));
}

The Category action uses Examine to look up any document with:

  • A document type alias of “ProductCategory”
  • A name equal to the id parameter passed in

The Product action uses Examine to look up any document with:

  • A document type alias of “Product”
  • A name equal to the id parameter passed in

The result from TypedSearch is IEnumerable<IPublishedContent>, and since we know we only want one result we pass the first item of the collection with “result.First()”.

If you didn’t want to use Examine to do the lookup, you could use a Linq query based on the result of Umbraco.TypedContentAtRoot(), but I wouldn’t recommend that since it will be much slower.

In v6.1 the UmbracoHelper exposes a couple of other methods that you could use to perform your lookup if you didn’t want to use Examine and wanted to use XPath instead:

  • TypedContentSingleAtXPath(string xpath, params XPathVariable[] vars)
  • TypedContentAtXPath(string xpath, params XPathVariable[] vars)
  • TypedContentAtXPath(XPathExpression xpath, params XPathVariable[] vars)
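For example, the Category action above could be rewritten along these lines. This is just a sketch, and it assumes the default XML cache schema where a node’s name is stored in the nodeName attribute:

public ActionResult Category(string id)
{
    //find the first ProductCategory node whose name matches the id from the route
    var content = Umbraco.TypedContentSingleAtXPath(
        "//ProductCategory[@nodeName='" + id + "']");
    if (content == null)
    {
        throw new HttpException(404, "No category");
    }
    return View("ProductCategory", CreateRenderModel(content));
}

The XPathVariable overloads let you pass the id in as a proper variable instead of concatenating it into the expression.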

CreateRenderModel method

You will have noticed that I’m using a method called CreateRenderModel to create the model that is passed to the View. This method accepts an IPublishedContent object as an argument and creates a RenderModel object, which is what a normal Umbraco view expects. This method isn’t complex but it does have a couple of things worth noting:

private RenderModel CreateRenderModel(IPublishedContent content)
{
    var model = new RenderModel(content, CultureInfo.CurrentUICulture);

    //add an umbraco data token so the umbraco view engine executes
    RouteData.DataTokens["umbraco"] = model;

    return model;
}

The first thing is that you need to construct the RenderModel with an explicit culture, otherwise you’ll get an exception. The next line adds the created RenderModel to the RouteData.DataTokens… this is because we want to render an Umbraco view, which will be stored in either of the following places (based on Umbraco standard practices):

  • ~/Views/Product.cshtml
  • ~/Views/ProductCategory.cshtml

These locations are not MVC standard practice. Normally MVC will look in a controller-specific folder for views. For this custom controller the locations would be:

  • ~/Views/MyProduct/Product.cshtml
  • ~/Views/MyProduct/ProductCategory.cshtml

But we want to use the views that Umbraco has created for us, so we need to ensure that the built-in Umbraco ViewEngine gets executed. For performance reasons the Umbraco RenderViewEngine will not execute for a view unless a RenderModel instance exists in the RouteData.DataTokens with a key of “umbraco”, so we just add it there before we return the view.

Views

The views are your regular Umbraco views but there are a few things that might not work:

  • Macros. Sorry, since we’ve bypassed the Umbraco routing pipeline which macros rely upon, any call to Umbraco.RenderMacro will fail. But you should be able to achieve what you want with Partial Views or Child Actions.
  • Umbraco.Field. Actually this will work but you’ll need to upgrade to 6.0.7 or 6.1.2 based on this fixed issue: http://issues.umbraco.org/issue/U4-2324

One cool thing is that you can use the regular MVC UrlHelper to resolve the URLs of your actions, since this custom controller is actually just a regular old MVC controller after all.

These view examples are nothing extraordinary; they just demonstrate that the views are the same as Umbraco templates with the same model (but using our custom URLs).

ProductCategory

@inherits Umbraco.Web.Mvc.UmbracoTemplatePage
@{
    Layout = null;
}
<html>
    <body>
        <h1>Product category</h1>
        <hr />
        <h2>@Model.Content.Name</h2>
        <ul>
            @foreach (var product in Model.Content.Children
                .Where(x => x.DocumentTypeAlias == "Product"))
            {
                <li><a href="@Url.Action("Product", "MyProduct", new { id = product.Name })">
                        @product.Name
                    </a>
                </li>
            }
        </ul>
    </body>
</html>

Which looks like this:

[image: the rendered product category page]

Product

@inherits Umbraco.Web.Mvc.UmbracoTemplatePage
@{
    Layout = null;
}
<html>
    <body>

        <h1>Product</h1>
        <hr />
        <h2>@Model.Content.Name</h2>
        <div>
            @(Model.Content.GetPropertyValue("Description"))
        </div>
    </body>
</html>

Which looks like this:

[image: the rendered product page]

What’s next?

With the setup above you should be able to achieve most of what you would want with custom routing, controllers, URLs and lookups. However, as I mentioned before things like executing Macros and potentially other internal Umbraco bits that rely on objects like the PublishedContentRequest will not work.

Of course if there is a will, there is a way and I have some cool ideas that could make all of those things work seamlessly too with custom MVC routes. Stay tuned!

Reference the current form controller in AngularJS

June 26, 2013 21:08 by Shannon Deminick

I previously wrote a post about Listening for validation changes in AngularJS which, with my knowledge at the time, required a handy hack to get a reference to the currently scoped form controller (ngForm) for a given input control. I also complained a bit that it seemed that angular didn’t really provide a way to reference the current form controller without this little hack… well, it turns out I was wrong! :)

AngularJS seems kind of like ASP.Net MVC in the early days when there wasn’t much documentation… It definitely pays off to read through the source code to figure out how to do more complicated things. I had a bit of a ‘light bulb’ moment when I realized that ngForm was itself a directive/controller, and I had recently noticed that the ‘require’ parameter used when setting up a directive allows you to search the current directive’s ancestry for controllers (i.e. prefix the required controller with a hat: ^).

What does the require parameter of a directive do?

Let’s face it, the directive documentation for AngularJS is in desperate need of being updated so that human beings can understand it (as noted by the many comments at the bottom). So I’ll try to explain what the ‘require’ parameter actually does and how to use it.

We’ll create a simple custom validation directive which will invalidate a field if the value is “blah”

function blahValidator() {
    return {
        require: 'ngModel',
        link: function(scope, elm, attr, ctrl) {
            
            var validator = function(value) {
                if (ctrl.$viewValue == "blah") {
                    ctrl.$setValidity('blah', false);
                    return null;
                }
                else {
                    ctrl.$setValidity('blah', true);
                    return value;
                }
            };

            ctrl.$formatters.push(validator);
            ctrl.$parsers.push(validator);
        }
    };
}

You’ll notice that we have a ‘require’ parameter specified for ‘ngModel’. What is happening here is that when we assign this directive to an input field, angular will ensure that the input field also has a defined ng-model attribute on it as well. Then angular will pass in the instance of the ng-model controller to the ‘ctrl’ parameter of the link function.

So, the ‘require’ parameter dictates what the ‘ctrl’ parameter of the link function equals.

You can also require multiple controllers:

[image: example of requiring multiple controllers]
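The screenshot that was here isn’t available, but as a rough reconstruction: you pass an array to ‘require’, and the fourth argument of the link function becomes an array of controllers in the same order:

function blahValidator() {
    return {
        //require both the ngModel on this element and an ancestor form controller
        require: ['ngModel', '^form'],
        link: function (scope, elm, attr, ctrls) {
            var modelCtrl = ctrls[0]; //the ngModel controller
            var formCtrl = ctrls[1];  //the containing form (ngForm) controller
            //...validation logic as above
        }
    };
}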

NOTE: the ctrl/ctrls parameter in the above 2 examples can be called whatever you want

Special prefixes

Angular has 2 special prefixes for the ‘require’ parameter:

^ = search the current directive’s ancestry for the controller

? = don’t throw an exception if the required controller is not found, making it ‘optional’ not a requirement

You can also combine them so angular will search the ancestry but treat it as optional too, such as: '^?ngController'

In the above example, the blahValidator will only work if the directive is declared inside of an ng-controller block.

Referencing the current ng-form

Given the above examples, and knowing that ngForm itself is a controller, we should be able to just make a requirement on ngForm and have it injected into the directive. BUT, it won’t work the way you expect. For some reason angular registers the ngForm controller under the name “form”, which I discovered by browsing the angular source.

So now it’s easy to get a reference to the containing ngForm controller; all you need to do is add a ‘require’ parameter to your directive that looks like:

require: '^form'
and it will be injected into the ctrl parameter of your link function.
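For example, a small sketch of a directive that grabs the containing form this way (the directive name is made up):

app.directive('showFormValidity', function () {
    return {
        restrict: 'A',
        //'^form' searches ancestors for the controller that ngForm registers as 'form'
        require: '^form',
        link: function (scope, element, attrs, formCtrl) {
            //formCtrl is the containing ngForm controller
            scope.$watch(function () { return formCtrl.$valid; }, function (isValid) {
                console.log("form '" + formCtrl.$name + "' valid: " + isValid);
            });
        }
    };
});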

NDepend review = pretty cool!

June 21, 2013 21:47 by Shannon Deminick

For a while I’ve been wanting to automate some reporting from our build process to show some fairly simple statistical information such as

  • Obsoleted code
  • Internal code planned to be made public
  • Code flagged to be obsoleted
  • Code flagged as experimental

NOTE: some of the above are based on our own internal C# attributes

Originally I was just going to write some code to show me this, which would have been fairly straightforward, but I knew that NDepend had a code query language, natively outputs reports and can be integrated into the build process, so I figured I’d give it a go.

So I went and installed NDepend and ran the analysis against my current project … I wasn’t quite prepared for the amount of information it gave me! :)

Reporting

I quickly found out that the reports I wanted to create were insanely simple to write. I then lost a large part of my day exploring the reports it generates OOTB and realized I hadn’t even scratched the surface of the information this tool could give me. I was mostly interested in statistical analysis, but it will also happily tell you how bad or good your code is, how much test coverage you’ve got, what backwards compatibility you broke with your latest release, what code is most used in your project and what code isn’t used at all… and the list goes on and on.

To give you an example of the first report I wanted to write, here’s all it took to list the methods in my codebase that are obsoleted:

from m in JustMyCode.Methods where m.HasAttribute("System.ObsoleteAttribute") select m

What’s cool about the report output is that you can output to HTML or XML, so if you want to publish it on your site you can just write some XSLT to make it look the way you want.

Project comparison

Another really great feature that I thought would be fun to automate is the code analysis against previous builds, so you know straight away if you are breaking compatibility with previous releases. I think it has great potential, but what I did discover is that in some cases the OOTB queries for this aren’t perfect. It will tell you what public types, methods and fields have been removed or changed, and the query is pretty straightforward, but… in many cases when we’re trying to maintain backwards compatibility while creating new APIs, we’ll remove a type’s methods/properties and just make the legacy type inherit from the new API class (or some other trickery similar to that). This might break compatibility if people are using reflection or specific type checking, but it won’t break the API usage, which is what I’m mostly interested in. I haven’t figured it out yet but I’m sure there’s some zany NDepend query I can write to check that.

Automation

So now that I know I can extract all this info, it’s time to decide what to export. Thankfully NDepend comes fully equipped with its own MSBuild integration, so automating the report output should be pretty straightforward. I also noticed that if you wanted to be ultra careful you can integrate it with your build server and have your builds fail if some of your queries cause errors, like the above: if you break compatibility, the build will fail. Or anything you want really, like failing the build if we detect you’ve written a method that has too many lines of code.

Pretty cool indeed!

I was going to use NDepend just to output reports, which it clearly does without issue. What I didn’t realize was how much detail you can get about your code with this tool. There are some tools in there that were a bit over my head, but I suppose if you are working on really huge projects and need anything from a quick snapshot to insane detail about what dependencies your code has on other code, it’ll do that easily as well.

The UI is a bit overwhelming at first and could probably do with some work, but TBH I’m not sure how else you’d display that much information nicely. Once I figured out how the whole query and reporting structure works, it was dead simple.

Happy reporting :)

Listening for validation changes in AngularJS

May 28, 2013 04:18 by Shannon Deminick

In some applications it can be really useful to have controllers listen for validation changes, especially in more complicated AngularJS apps where ‘ng-repeat’ is used to render form controls. There are plenty of cases where a parent scope might need to know about validation changes based on child scopes… one such case is a validation summary. There are a couple of ways to implement this (and probably more) but they all seem a bit hacky, such as:

  • Apply a $watch to the current form object’s $valid property in the parent scope, then use jQuery to look for elements that have a class like ‘invalid’
    • You could then use the scope() function on the DOM element that ng-repeat is used on to get any model information about the invalid item
  • In child scopes you could apply a $watch to individual form elements’ $valid property then change the $parent scope’s model values to indicate validation changes

Instead what I wanted to achieve was a re-usable way to ‘bubble’ up validation changes from any scope’s form element to ancestor scopes without having to do any of the following:

  • No jquery DOM selection
  • No hard coding of form names to access the validation objects
  • No requirement to modify other scopes’ values

Implementation

The way I went about this was to create a very simple custom directive which I’ve called ‘val-bubble’ since it has to do with validation and it ‘bubbles’ up a message to any listening scopes. An input element might then look like this:

<input name="FirstName" type="text" required val-bubble />

Then in an outer scope I can then listen for validation changes and do whatever I want with the result:

scope.$on("valBubble", function(evt, args) {
alert("Validation changed for field " + args.ctrl.$name + ". Valid? " + args.isValid);
});

The args object contains these properties:

  • isValid = is the field valid
  • ctrl = the current form controller object for the field
  • scope = the scope bound to the field being validated
  • element = the DOM element of the field being validated
  • expression = the current $watch expression used to watch this fields validation changes

With all of that information you can easily add some additional functionality to your app based on the currently validating inputs, such as a validation summary or whatever.
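For example, a very rough sketch of a validation summary in a listening controller might look like this (the invalidFields property name is mine):

//keep a running list of invalid field names to render as a summary
$scope.invalidFields = [];

$scope.$on("valBubble", function (evt, args) {
    var index = $scope.invalidFields.indexOf(args.ctrl.$name);
    if (!args.isValid && index === -1) {
        //the field just became invalid, add it to the summary
        $scope.invalidFields.push(args.ctrl.$name);
    }
    else if (args.isValid && index !== -1) {
        //the field became valid again, remove it
        $scope.invalidFields.splice(index, 1);
    }
});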

Custom directive

The val-bubble custom directive is pretty simple, here’s the code and an explanation below:

app.directive('valBubble', function (formHelper) {
    return {
        require: 'ngModel',
        restrict: "A",
        link: function (scope, element, attr, ctrl) {

            if (!attr.name) {
                throw "valBubble must be set on an input element that has a 'name' attribute";
            }

            var currentForm = formHelper.getCurrentForm(scope);
            if (!currentForm || !currentForm.$name)
                throw "valBubble requires that a name is assigned to the ng-form containing the validated input";

            //watch the current form's validation for the current field name
            scope.$watch(currentForm.$name + "." + ctrl.$name + ".$valid", function (isValid, lastValue) {
                if (isValid != undefined) {
                    //emit an event upwards
                    scope.$emit("valBubble", {
                        isValid: isValid,     // if the field is valid
                        element: element,     // the element that the validation applies to
                        expression: this.exp, // the expression that was watched to check validity
                        scope: scope,         // the current scope
                        ctrl: ctrl            // the current controller
                    });
                }
            });
        }
    };
});

The first thing we’re doing here is limiting this directive to be used only as an attribute and ensuring the element has a model applied to it. Then we make sure that the element has a ‘name’ value applied. After that we get a reference to the current form object that this field is contained within, using a custom method: formHelper.getCurrentForm… more on this below. Lastly we apply a $watch to the current element’s $valid property, and when this value changes we $emit an event upwards for parent/ancestor scopes to listen for.

formHelper

Above I mentioned that I wanted a re-usable solution where I didn’t need to hard code things like the current form name. Unfortunately Angular doesn’t really provide a way to do this OOTB (as far as I can tell!) (Update! see here on how to access the current form: http://shazwazza.com/post/Reference-the-current-form-controller-in-AngularJS), so I’ve just created a simple factory object that finds the current form object applied to the current scope. The type check is fairly rudimentary but it works: it simply checks each property that exists on the scope object and tries to detect the object that matches the definition of an Angular form object:

app.factory('formHelper', function () {
    return {
        getCurrentForm: function (scope) {
            var form = null;
            var requiredFormProps = ["$error", "$name", "$dirty", "$pristine", "$valid", "$invalid", "$addControl", "$removeControl", "$setValidity", "$setDirty"];
            for (var p in scope) {
                if (_.isObject(scope[p]) && !_.isFunction(scope[p]) && !_.isArray(scope[p]) && p.substr(0, 1) != "$") {
                    var props = _.keys(scope[p]);
                    if (props.length < requiredFormProps.length) continue;
                    if (_.every(requiredFormProps, function (item) {
                        return _.contains(props, item);
                    })) {
                        form = scope[p];
                        break;
                    }
                }
            }
            return form;
        }
    };
});

NOTE: the above code has a dependency on UnderscoreJS

So now you can just apply the val-bubble attribute to any input element to ensure its validation changes are published to listening scopes!

Uploading files and JSON data in the same request with Angular JS

May 25, 2013 02:01 by Shannon Deminick

I decided to write a quick blog post about this because much of the documentation and examples about this seem to be a bit scattered. What this achieves is the ability to upload any number of files with any other type of data in one request. For this example we’ll send up JSON data along with some files.

File upload directive

First we’ll create a simple custom file upload angular directive

app.directive('fileUpload', function () {
    return {
        scope: true, //create a new scope
        link: function (scope, el, attrs) {
            el.bind('change', function (event) {
                var files = event.target.files;
                //iterate files since 'multiple' may be specified on the element
                for (var i = 0; i < files.length; i++) {
                    //emit event upward
                    scope.$emit("fileSelected", { file: files[i] });
                }
            });
        }
    };
});

The usage of this is simple:

<input type="file" file-upload multiple/>

The ‘multiple’ parameter indicates that the user can select multiple files to upload which this example fully supports.

In the directive we ensure a new scope is created and then listen for changes made to the file input element. When changes are detected we emit an event to all ancestor scopes (upward) with the file object as a parameter.

Mark-up & the controller

Next we’ll create a controller to:

  • Create a model to bind to
  • Create a collection of files
  • Consume this event so we can assign the files to  the collection
  • Create a method to post it all to the server

NOTE: I’ve put all this functionality in this controller for brevity; in most cases you’d have a separate factory to handle posting the data

With the controller in place, the mark-up might look like this (and will display the file names of all of the files selected):

<div ng-controller="Ctrl">
<input type="file" file-upload multiple/>
<ul>
<li ng-repeat="file in files">{{file.name}}</li>
</ul>
</div>

The controller code below contains some important comments relating to how the data gets posted up to the server, namely the ‘Content-Type’ header, as the value that needs to be set is a bit quirky.

function Ctrl($scope, $http) {

    //a simple model to bind to and send to the server
    $scope.model = {
        name: "",
        comments: ""
    };

    //an array of files selected
    $scope.files = [];

    //listen for the file selected event
    $scope.$on("fileSelected", function (event, args) {
        $scope.$apply(function () {
            //add the file object to the scope's files collection
            $scope.files.push(args.file);
        });
    });

    //the save method
    $scope.save = function () {
        $http({
            method: 'POST',
            url: "/Api/PostStuff",
            //IMPORTANT!!! You might think this should be set to 'multipart/form-data'
            // but this is not true because when we are sending up files the request
            // needs to include a 'boundary' parameter which identifies the boundary
            // name between parts in this multi-part request and setting the Content-type
            // manually will not set this boundary parameter. For whatever reason,
            // setting the Content-type to 'false' will force the request to automatically
            // populate the headers properly including the boundary parameter.
            headers: { 'Content-Type': false },
            //This method will allow us to change how the data is sent up to the server
            // for which we'll need to encapsulate the model data in 'FormData'
            transformRequest: function (data) {
                var formData = new FormData();
                //need to convert our json object to a string version of json otherwise
                // the browser will do a 'toString()' on the object which will result
                // in the value '[Object object]' on the server.
                formData.append("model", angular.toJson(data.model));
                //now add all of the assigned files
                for (var i = 0; i < data.files.length; i++) {
                    //add each file to the form data and iteratively name them
                    formData.append("file" + i, data.files[i]);
                }
                return formData;
            },
            //Create an object that contains the model and files which will be transformed
            // in the above transformRequest method
            data: { model: $scope.model, files: $scope.files }
        }).
        success(function (data, status, headers, config) {
            alert("success!");
        }).
        error(function (data, status, headers, config) {
            alert("failed!");
        });
    };
}

Handling the data server-side

This example shows how to handle the data on the server side using ASP.Net WebAPI, I’m sure it’s reasonably easy to do on other server-side platforms too.

public async Task<HttpResponseMessage> PostStuff()
{
    if (!Request.Content.IsMimeMultipartContent())
    {
        throw new HttpResponseException(HttpStatusCode.UnsupportedMediaType);
    }

    var root = HttpContext.Current.Server.MapPath("~/App_Data/Temp/FileUploads");
    Directory.CreateDirectory(root);
    var provider = new MultipartFormDataStreamProvider(root);
    var result = await Request.Content.ReadAsMultipartAsync(provider);
    if (result.FormData["model"] == null)
    {
        throw new HttpResponseException(HttpStatusCode.BadRequest);
    }

    var model = result.FormData["model"];
    //TODO: Do something with the json model which is currently a string

    //get the files
    foreach (var file in result.FileData)
    {
        //TODO: Do something with each uploaded file
    }

    return Request.CreateResponse(HttpStatusCode.OK, "success!");
}
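As one way of handling that first TODO, you could deserialize the string with Json.NET (which ships with WebApi); the PostedModel class here is just a made-up DTO matching the client side model:

//requires: using Newtonsoft.Json;

//a made-up DTO matching the client side model posted from the Angular controller
public class PostedModel
{
    public string Name { get; set; }
    public string Comments { get; set; }
}

//then inside PostStuff, after reading the multipart content:
var postedModel = JsonConvert.DeserializeObject<PostedModel>(result.FormData["model"]);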


Sql script for changing media paths or virtual directories in Umbraco

May 14, 2013 23:11 by Shannon Deminick

When you upload media in Umbraco it stores the absolute path to the media item in the database. By default the path will look something like:

/media/12335/MyImage.jpg

However, if you are running Umbraco in a virtual directory the path will also include the virtual directory prefix. For example:

/MyVirtualDirectory/media/12335/MyImage.jpg

So you can imagine that some issues might arise if you already have data in your Umbraco install and then want to change virtual directory paths, or even change the media location path (umbracoMediaPath) in the web.config.

Luckily it’s easily solved with a quick SQL script. These scripts will update the paths stored in Umbraco for any property whose data type uses the ‘upload’ property editor.

When moving to a virtual directory

Here’s a quick script to run against your current install if you are moving from a normal installation to using a virtual directory. Note: You will need to replace all instances of ‘MyVirtualDirectory’ with your vdir path!

update cmsPropertyData 
set dataNvarchar = '/MyVirtualDirectory' + dataNvarchar
where id in
(select cmsPropertyData.id from cmsPropertyData
inner join cmsPropertyType on cmsPropertyData.propertytypeid = cmsPropertyType.id
inner join cmsDataType on cmsPropertyType.dataTypeId = cmsDataType.nodeId
where cmsDataType.controlId = '5032a6e6-69e3-491d-bb28-cd31cd11086c'
and cmsPropertyData.dataNvarchar is not null
and SUBSTRING(cmsPropertyData.dataNvarchar, 0, LEN('/MyVirtualDirectory') + 1) <> '/MyVirtualDirectory')

When moving from a virtual directory

Here’s the script to run if you are currently running in a virtual directory and want to move to a non-virtual directory. Note: You will need to replace all instances of ‘MyVirtualDirectory’ with your vdir path!

update cmsPropertyData 
set dataNvarchar = SUBSTRING(dataNvarchar, LEN('/MyVirtualDirectory') + 1, LEN(dataNvarchar) - LEN('/MyVirtualDirectory'))
where id in
(select cmsPropertyData.id from cmsPropertyData
inner join cmsPropertyType on cmsPropertyData.propertytypeid = cmsPropertyType.id
inner join cmsDataType on cmsPropertyType.dataTypeId = cmsDataType.nodeId
where cmsDataType.controlId = '5032a6e6-69e3-491d-bb28-cd31cd11086c'
and cmsPropertyData.dataNvarchar is not null
and SUBSTRING(cmsPropertyData.dataNvarchar, 0, LEN('/MyVirtualDirectory') + 1) = '/MyVirtualDirectory' )

Moving to a different virtual directory?

If you want to move to a different virtual directory, you can combine the above 2 scripts… first move from a virtual directory back to normal, then move to the new virtual directory.

The above procedures would work as well if you were to change the umbracoMediaPath app setting in the web.config… though most people won’t change that or even know about it ;)

Razor + dynamic + internal + interface & the 'object' does not contain a definition for 'xxxx' exception

April 11, 2013 14:49 by Shannon Deminick

I’ve come across this strange issue and decided to blog about it since I can’t figure out exactly why it’s happening; it is clearly something to do with the DLR and interfaces.

First, if you are getting the exception 'object' does not contain a definition for 'xxxx', it is related to either an object you have being marked internal, or you are using an anonymous object type for your model (which .Net will always mark as internal).

Here’s a really easy way to replicate this:

1. Create an internal model

internal class MyModel
{
    public string Name { get; set; }
}

2. Return this model in your MVC action

public ActionResult Index()
{
    return View(new MyModel { Name = "Shannon" });
}

3. Make your view have a dynamic model and then try to render the model’s property

@model dynamic
<h1>@Model.Name</h1>

You’ll get the exception:

Server Error in '/' Application.

'object' does not contain a definition for 'Name'

So even though the error is not very informative, it makes sense since Razor is trying to access an internal class.

Try using a public interface

Ok so if we want to keep our class internal, we could expose it via a public interface. Our code might then look like this:

public interface IModel 
{
    string Name { get; }
}

internal class MyModel : IModel
{
    public string Name { get; set; }
}

Then we can change our view to be strongly typed like:

@model IModel
<h1>@Model.Name</h1>

And it will work. However, if you change your view back to @model dynamic you will still get the exception. I’m pretty sure it’s because the DLR is just binding to the instance object and doesn’t really care about the interface… this makes sense.

Try using an abstract public class

For some reason, if you make your internal class inherit from a public abstract class that implements the same interface, you will not get this exception even though the object instance you are passing to Razor is internal. For example, with this structure:

public interface IModel 
{
    string Name { get; }
}

public abstract class ModelBase : IModel
{
    public abstract string Name { get; set; }
}

internal class MyModel : ModelBase
{
    public override string Name { get; set; }
}

You will not get this error if your view is @model dynamic.

Strange!

I’m sure there’s something written in the DLR spec about this but it still seems strange to me! If you are getting this error and you aren’t using anonymous objects for your model, hopefully this might help you figure out what is going on.