Shannon Deminick's blog all about web development

Listening for validation changes in AngularJS

May 28, 2013 06:18

In some applications it can be really useful to have controllers listen for validation changes, especially in more complicated AngularJS apps where ‘ng-repeat’ is used to render form controls. There are plenty of cases where a parent scope might need to know about validation changes based on child scopes… one such case is a validation summary. There are a couple of ways to implement this (and probably more) but they all seem a bit hacky, such as:

  • Apply a $watch to the current form object’s $valid property in the parent scope, then use jQuery to look for elements that have a class like ‘invalid’
    • You could then use the scope() function on the DOM element that ng-repeat is used on to get any model information about the invalid item
  • In child scopes you could apply a $watch to individual form elements’ $valid property then change the $parent scope’s model values to indicate validation changes

Instead what I wanted to achieve was a re-usable way to ‘bubble’ up validation changes from any scope’s form element to ancestor scopes without having to do any of the following:

  • No jquery DOM selection
  • No hard coding of form names to access the validation objects
  • No requirement to modify other scopes’ values


The way I went about this was to create a very simple custom directive which I’ve called ‘val-bubble’ since it has to do with validation and it ‘bubbles’ up a message to any listening scopes. An input element might then look like this:

<input name="FirstName" type="text" required val-bubble />

Then in an outer scope I can then listen for validation changes and do whatever I want with the result:

scope.$on("valBubble", function(evt, args) {
    alert("Validation changed for field " + args.ctrl.$name + ". Valid? " + args.isValid);
});

The args object contains these properties:

  • isValid = is the field valid
  • ctrl = the current form controller object for the field
  • scope = the scope bound to the field being validated
  • element = the DOM element of the field being validated
  • expression = the current $watch expression used to watch this field’s validation changes

With all of that information you can easily add some additional functionality to your app based on the current validating inputs, such as a validation summary or whatever.
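For example, a validation summary can be built from these events. Here’s a hypothetical, framework-free sketch (the names are mine, not part of the directive) of the bookkeeping a parent scope’s "valBubble" listener would do:

```javascript
// Hypothetical sketch of the bookkeeping behind a validation summary:
// field names are added to or removed from a list as their validity
// changes. In a real app this would live in the parent controller's
// $on("valBubble", ...) handler.
function createValidationSummary() {
  var invalidFields = [];
  return {
    // mimics what the $on("valBubble", ...) handler receives as 'args'
    onValBubble: function (args) {
      var idx = invalidFields.indexOf(args.ctrl.$name);
      if (!args.isValid && idx === -1) {
        invalidFields.push(args.ctrl.$name);
      } else if (args.isValid && idx !== -1) {
        invalidFields.splice(idx, 1);
      }
    },
    // the list a view could ng-repeat over
    getInvalidFields: function () {
      return invalidFields.slice();
    }
  };
}
```

The scope and element properties in args give you everything else you’d need, e.g. to scroll to the first invalid input.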

Custom directive

The val-bubble custom directive is pretty simple, here’s the code and an explanation below:

app.directive('valBubble', function (formHelper) {
    return {
        require: 'ngModel',
        restrict: "A",
        link: function (scope, element, attr, ctrl) {

            if (!attr.name) {
                throw "valBubble must be set on an input element that has a 'name' attribute";
            }

            var currentForm = formHelper.getCurrentForm(scope);
            if (!currentForm || !currentForm.$name) {
                throw "valBubble requires that a name is assigned to the ng-form containing the validated input";
            }

            //watch the current form's validation for the current field name
            scope.$watch(currentForm.$name + "." + ctrl.$name + ".$valid", function (isValid, lastValue) {
                if (isValid != undefined) {
                    //emit an event upwards
                    scope.$emit("valBubble", {
                        isValid: isValid,       // if the field is valid
                        element: element,       // the element that the validation applies to
                        expression: this.exp,   // the expression that was watched to check validity
                        scope: scope,           // the current scope
                        ctrl: ctrl              // the current controller
                    });
                }
            });
        }
    };
});

The first thing we’re doing here is limiting this directive to be used only as an attribute and ensuring the element has a model applied to it. Then we make sure that the element has a ‘name’ value applied. After that we are getting a reference to the current form object that this field is contained within using a custom method: formHelper.getCurrentForm … more on this below. Lastly we are applying a $watch to the current element’s $valid property and when this value changes we $emit an event upwards to parent/ancestor scopes to listen for.


Above I mentioned that I wanted a re-usable solution where I didn’t need to hard code things like the current form name. Unfortunately Angular doesn’t really provide a way to do this OOTB (as far as I can tell!) (Update! see here on how to access the current form: http://shazwazza.com/post/Reference-the-current-form-controller-in-AngularJS) so I’ve just created a simple factory object that finds the current form object applied to the current scope. The type check is fairly rudimentary but it works: it simply checks each property that exists on the scope object and tries to detect the object that matches the definition of an Angular form object:

app.factory('formHelper', function() {
    return {
        getCurrentForm: function(scope) {
            var form = null;
            var requiredFormProps = ["$error", "$name", "$dirty", "$pristine", "$valid", "$invalid", "$addControl", "$removeControl", "$setValidity", "$setDirty"];
            for (var p in scope) {
                if (_.isObject(scope[p]) && !_.isFunction(scope[p]) && !_.isArray(scope[p]) && p.substr(0, 1) != "$") {
                    var props = _.keys(scope[p]);
                    if (props.length < requiredFormProps.length) continue;
                    if (_.every(requiredFormProps, function(item) {
                        return _.contains(props, item);
                    })) {
                        form = scope[p];
                        return form;
                    }
                }
            }
            return form;
        }
    };
});

NOTE: the above code has a dependency on UnderscoreJS

So now you can just apply the val-bubble attribute to any input element to ensure its validation changes are published to listening scopes!

Uploading files and JSON data in the same request with Angular JS

May 25, 2013 04:01

I decided to write a quick blog post about this because much of the documentation and examples about this seem to be a bit scattered. What this achieves is the ability to upload any number of files with any other type of data in one request. For this example we’ll send up JSON data along with some files.

File upload directive

First we’ll create a simple custom file upload angular directive

app.directive('fileUpload', function () {
    return {
        scope: true, //create a new scope
        link: function (scope, el, attrs) {
            el.bind('change', function (event) {
                var files = event.target.files;
                //iterate files since 'multiple' may be specified on the element
                for (var i = 0; i < files.length; i++) {
                    //emit event upward
                    scope.$emit("fileSelected", { file: files[i] });
                }
            });
        }
    };
});

The usage of this is simple:

<input type="file" file-upload multiple/>

The ‘multiple’ attribute indicates that the user can select multiple files to upload, which this example fully supports.

In the directive we ensure a new scope is created and then listen for changes made to the file input element. When changes are detected we emit an event to all ancestor scopes (upward) with the file object as a parameter.
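The emit/on pattern can be sketched without Angular. This hypothetical Scope type (not Angular’s real implementation, which also supports things like stopping propagation) just shows how an emitted event travels up to ancestor scopes:

```javascript
// Minimal, framework-free sketch of $emit/$on: a child scope emits upward
// and every ancestor that registered a listener for that event is called.
function Scope(parent) {
  this.parent = parent || null;
  this.listeners = {};
}
Scope.prototype.$on = function (name, fn) {
  (this.listeners[name] = this.listeners[name] || []).push(fn);
};
Scope.prototype.$emit = function (name, args) {
  // walk up the scope chain, invoking listeners at each level
  for (var s = this; s !== null; s = s.parent) {
    (s.listeners[name] || []).forEach(function (fn) {
      fn({ name: name }, args);
    });
  }
};

// a controller scope listening for files selected in a child (directive) scope
var controllerScope = new Scope(null);
var directiveScope = new Scope(controllerScope);
var files = [];
controllerScope.$on("fileSelected", function (event, args) {
  files.push(args.file);
});
directiveScope.$emit("fileSelected", { file: { name: "photo.jpg" } });
```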

Mark-up & the controller

Next we’ll create a controller to:

  • Create a model to bind to
  • Create a collection of files
  • Consume this event so we can assign the files to the collection
  • Create a method to post it all to the server

NOTE: I’ve put all this functionality in this controller for brevity; in most cases you’d have a separate factory to handle posting the data

With the controller in place, the mark-up might look like this (and will display the file names of all of the files selected):

<div ng-controller="Ctrl">
    <input type="file" file-upload multiple/>
    <ul>
        <li ng-repeat="file in files">{{file.name}}</li>
    </ul>
</div>

The controller code below contains some important comments relating to how the data gets posted up to the server, namely the ‘Content-Type’ header as the value that needs to be set is a bit quirky.

function Ctrl($scope, $http) {

    //a simple model to bind to and send to the server
    $scope.model = {
        name: "",
        comments: ""
    };

    //an array of files selected
    $scope.files = [];

    //listen for the file selected event
    $scope.$on("fileSelected", function (event, args) {
        $scope.$apply(function () {
            //add the file object to the scope's files collection
            $scope.files.push(args.file);
        });
    });

    //the save method
    $scope.save = function() {
        $http({
            method: 'POST',
            url: "/Api/PostStuff",
            //IMPORTANT!!! You might think this should be set to 'multipart/form-data'
            // but this is not true because when we are sending up files the request
            // needs to include a 'boundary' parameter which identifies the boundary
            // name between parts in this multi-part request and setting the Content-type
            // manually will not set this boundary parameter. For whatever reason,
            // setting the Content-type to 'false' will force the request to automatically
            // populate the headers properly including the boundary parameter.
            headers: { 'Content-Type': false },
            //This method will allow us to change how the data is sent up to the server
            // for which we'll need to encapsulate the model data in 'FormData'
            transformRequest: function (data) {
                var formData = new FormData();
                //need to convert our json object to a string version of json otherwise
                // the browser will do a 'toString()' on the object which will result
                // in the value '[Object object]' on the server.
                formData.append("model", angular.toJson(data.model));
                //now add all of the assigned files
                for (var i = 0; i < data.files.length; i++) {
                    //add each file to the form data and iteratively name them
                    formData.append("file" + i, data.files[i]);
                }
                return formData;
            },
            //Create an object that contains the model and files which will be transformed
            // in the above transformRequest method
            data: { model: $scope.model, files: $scope.files }
        }).
        success(function (data, status, headers, config) {
            alert("success!");
        }).
        error(function (data, status, headers, config) {
            alert("failed!");
        });
    };
}
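The serialization comment in transformRequest is easy to demonstrate outside Angular. In plain JavaScript (JSON.stringify being the equivalent of angular.toJson here):

```javascript
// Why the model must be serialized before being appended to FormData:
// an object's default string coercion is useless on the server side.
var model = { name: "Shannon", comments: "hi" };

// what FormData.append would effectively send without serialization:
var coerced = String(model);            // "[object Object]"

// what angular.toJson produces (JSON.stringify is the plain-JS equivalent):
var serialized = JSON.stringify(model); // '{"name":"Shannon","comments":"hi"}'
```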

Handling the data server-side

This example shows how to handle the data on the server side using ASP.Net WebAPI; I’m sure it’s reasonably easy to do on other server-side platforms too.

public async Task<HttpResponseMessage> PostStuff()
{
    if (!Request.Content.IsMimeMultipartContent())
    {
        throw new HttpResponseException(HttpStatusCode.UnsupportedMediaType);
    }

    var root = HttpContext.Current.Server.MapPath("~/App_Data/Temp/FileUploads");
    var provider = new MultipartFormDataStreamProvider(root);
    var result = await Request.Content.ReadAsMultipartAsync(provider);
    if (result.FormData["model"] == null)
    {
        throw new HttpResponseException(HttpStatusCode.BadRequest);
    }

    var model = result.FormData["model"];
    //TODO: Do something with the json model which is currently a string

    //get the files
    foreach (var file in result.FileData)
    {
        //TODO: Do something with each uploaded file
    }

    return Request.CreateResponse(HttpStatusCode.OK, "success!");
}

Razor + dynamic + internal + interface & the 'object' does not contain a definition for 'xxxx' exception

April 11, 2013 16:49

I’ve come across this strange issue and decided to blog about it since I can’t figure out exactly why it happens; it is clearly something to do with the DLR and interfaces.

First, if you are getting the exception: 'object' does not contain a definition for 'xxxx', it is related to either an object you have that is marked internal, or you are using an anonymous object type for your model (which .Net will always mark as internal).

Here’s a really easy way to replicate this:

1. Create an internal model

internal class MyModel
{
    public string Name { get; set; }
}

2. Return this model in your MVC action

public ActionResult Index()
{
    return View(new MyModel { Name = "Shannon" });
}

3. Make your view have a dynamic model and then try to render the model’s property

@model dynamic
<div>@Model.Name</div>

You’ll get the exception:

Server Error in '/' Application.

'object' does not contain a definition for 'Name'

So even though the error is not very informative, it makes sense since Razor is trying to access an internal class.

Try using a public interface

Ok so if we want to keep our class internal, we could expose it via a public interface. Our code might then look like this:

public interface IModel 
{
    string Name { get; }
}
internal class MyModel : IModel
{
    public string Name { get; set; }
}

Then we can change our view to be strongly typed like:

@model IModel

And it will work, however if you change your view back to be @model dynamic you will still get the exception. Pretty sure it’s because the DLR is just binding to the instance object and doesn’t really care about the interface… this makes sense.

Try using an abstract public class

For some reason, if you make your internal class inherit from a public abstract class that implements the same interface, you will not get this exception even though the object instance you are passing to Razor is internal. For example, with this structure:

public interface IModel 
{
    string Name { get; }
}
public abstract class ModelBase : IModel
{
    public abstract string Name { get; }
}
internal class MyModel : ModelBase
{
    private readonly string _name;
    public MyModel(string name) { _name = name; }
    public override string Name { get { return _name; } }
}

You will not get this error if your view is @model dynamic.


I’m sure there’s something written in the DLR spec about this, but it still seems strange to me! If you are getting this error and you aren’t using anonymous objects for your model, hopefully this might help you figure out what is going on.

Taming the BuildManager, ASP.Net Temp files and AppDomain restarts

March 28, 2012 08:14

I’ve recently had to do a bunch of research into the BuildManager and how it deals with caching assemblies, owing to my involvement with creating an ASP.Net plugin engine. Many times people will attempt to restart their AppDomain by ‘bumping’ their web.config files, meaning adding a space character or carriage return and then saving it. Sometimes you may have noticed that this does not always restart your AppDomain the way you’d expect and some of the new types you’d expect to have loaded in your AppDomain simply aren’t there. This situation has cropped up a few times when using the plugin engine that we have built in to Umbraco v5 and some people have resorted to manually clearing out their ASP.Net temp files and then forcing an IIS Restart, but this is hardly ideal. The good news is that we do have control over how these assemblies get refreshed; in fact the BuildManager reloads/refreshes/clears assemblies in different ways depending on how the AppDomain is restarted.

The hash.web file

An important step during the BuildManager initialization phase is a method call to BuildManager.CheckTopLevelFilesUpToDate which does some interesting things. First it checks if a special hash value is on disk and is not zero. You may have noticed a file in your ASP.Net temp files: /hash/hash.web and it will contain a value such as: 4f34227133de5346. This value represents a unique hash of many different combined objects that the BuildManager monitors. If this file exists and the value can be parsed properly then the BuildManager will call: diskCache.RemoveOldTempFiles(); What this does is the following:

  • removes the CodeGen temp resource folder
  • removes temp files that have been saved during CodeGen such as *.cs
  • removes the special .delete files and their associated original files which have been created when shutting down the app pool and when the .dll files cannot be removed (due to file locking)

Creating the hash value

The next step is to create this hash value based on the current objects in the AppDomain. This is done using an internal utility class in the .Net Framework called: HashCodeCombiner (pretty much everything that the BuildManager references is marked Internal!). The process combines the following object values to create the hash (I’ve added in parenthesis the actual properties the BuildManager references):

  • the app's physical path, in case it changes (HttpRuntime.AppDomainAppPathInternal)
  • System.Web.dll (typeof(HttpRuntime).Module.FullyQualifiedName)
  • machine.config file name (HttpConfigurationSystem.MachineConfigurationFilePath)
  • root web.config file name, please note that this is not your web apps web.config (HttpConfigurationSystem.RootWebConfigurationFilePath)
  • the hash of the <compilation> section (compConfig.RecompilationHash)
  • the hash of the system.web/profile section (profileSection.RecompilationHash)
  • the file encoding set in config (appConfig.Globalization.FileEncoding)
  • the <trust> config section (appConfig.Trust.Level & appConfig.Trust.OriginUrl)
  • whether profile is enabled (ProfileManager.Enabled)
  • whether we are precompiling with debug info (but who precompiles :) (PrecompilingWithDebugInfo)

Then there is a check for something I didn’t know existed called Optimize Compilations; if it is set to true (by default it is false), the following will not actually affect the hash file value:

  • the ‘bin’ folder (HttpRuntime.BinDirectoryInternal)
  • App_WebReferences (HttpRuntime.WebRefDirectoryVirtualPath)
  • App_GlobalResources  (HttpRuntime.ResourcesDirectoryVirtualPath)
  • App_Code (HttpRuntime.CodeDirectoryVirtualPath)
  • Global.asax (GlobalAsaxVirtualPath)

Refreshing the ASP.Net temp files (CodeGen files)

The last step of this process is to check if the persisted hash value in the hash.web file equals the generated hash value from the above process. If they do not match then a call is made to diskCache.RemoveAllCodegenFiles(); which will:

  • clear all codegen files, removes all files in folders but not the folders themselves,
  • removes all root level files except for temp files that are generated such as .cs files, etc...

This essentially clears your ASP.Net temp files completely, including the MVC controller cache file, etc…

Then the BuildManager simply resaves this calculated hash back to the hash.web file.

What is the best way to restart your AppDomain?

There is really no ‘best’ way, it just depends on your needs. If you simply want to restart your AppDomain and not have the overhead of having your app recompile all of your views and other CodeGen classes, then it’s best that you simply ‘bump’ your web.config by just adding a space, carriage return or whatever. As you can see above, the hash value is not dependent on your local web.config file’s definition (timestamp, etc…). However, the hash value is dependent on some other stuff in your app's configuration such as the <compilation> section, the system.web/profile section, the file encoding configured, and the <trust> section. If you update any value in any of these sections in your web.config, it will force the BuildManager to clear all cached CodeGen files, which is the same as clearing your ASP.Net temp files.

So long as you don’t have optimizeCompilations set to true, then the easiest way to clear your CodeGen files is to simply add a space or carriage return to your global.asax file, or modify something in the other places that the BuildManager checks locally: App_Code, App_GlobalResources, App_WebReferences, or modify/add/remove a file in your bin folder.

How does this affect the ASP.Net plugin engine?

Firstly, I need to update that post as the code in the plugin engine has changed quite a bit, but you can find the latest in the Umbraco source code on CodePlex. With the above knowledge it’s easy to clear out any stale plugins by perhaps bumping your global.asax file, however this is still not an ideal process. With the research I’ve done into the BuildManager I’ve borrowed some concepts from it and learned of a special “.delete” file extension that the BuildManager looks for during AppDomain restarts. With this new info, I’ve already committed some new changes to the PluginManager so that you shouldn’t need to worry about stale plugin DLLs.

Consume Json REST service with WCF and dynamic object response

February 11, 2011 22:26
This post was imported from FARMCode.org which has been discontinued. These posts now exist here as an archive. They may contain broken links and images.
This was my first shot at WCF and though I saw how nice it could be, I failed miserably… well, I’m actually not too sure. I tried all sorts of things but no matter what, I couldn’t get WCF to behave and return the actual response you would get if you just consumed the service with the WebClient object. Here’s what I tried, assuming that this ‘should’ work, but the result was always an empty dictionary:

Note: this demo is based off of the @Task project management tool’s REST API.

Define the contract:

[ServiceContract]
public interface IAtTaskService
{
    [OperationContract]
    [WebGet(
        UriTemplate = "/api/login?username={username}&password={password}",
        RequestFormat = WebMessageFormat.Json)]
    IDictionary<string, object> Login(string username, string password);
}

Run the code:

public IDictionary<string, object> DoThis(string username, string pword)
{
    using (var cf = new WebChannelFactory<IAtTaskService>(
        new Uri("https://mycompaniesurl.attask-ondemand.com/attask/")))
    {
        var s = cf.CreateChannel();
        var token = s.Login(username, pword);
        return token;
    }
}

The above seemed really nice to be able to consume services but the result was always empty. I tried using strongly typed objects in the result, made sure it wasn’t an HTTPS issue, used the normal WCF configuration approach, etc… but couldn’t get it to work.

The ‘Fix’

I quite like the way that WCF has UriTemplates and you can just attribute up an interface and then make strongly typed calls to access your services, so I decided I’d just make that work with my own implementation. Again, perhaps I was ‘doing it wrong’ or whatever, but the below implementation is pretty cool and also lets you get JSON results as a dynamic object :)

Define the contract:

This is pretty much the same as above, but we have a dynamic response

[ServiceContract]
public interface IAtTaskService
{
    [OperationContract]
    [WebGet(
        UriTemplate = "/api/login?username={username}&password={password}",
        RequestFormat = WebMessageFormat.Json)]
    dynamic Login(string username, string password);
}

The JsonRestService base class

I created a class that lets you specify your contract service as a generic argument and then provides you with a method called “Send” which is defined as:

protected dynamic Send(Func<T, dynamic> func)

This class will manage all of the http requests and use the WCF attributes applied to your contract to generate the correct URLs and parameters (source code at the bottom of this article)

Define a standard service

So to consume the REST services, we just create a ‘real’ service class such as:

public class AtTaskService : JsonRestService<IAtTaskService>, IAtTaskService
{
    public AtTaskService(string serviceUrl)
        : base(serviceUrl)
    { }

    public dynamic Login(string username, string password)
    {
        return Send(x => x.Login(username, password));
    }
}

As you can see, the “Login” method just calls the underlying “Send” method which uses a strongly typed delegate. The Send method then inspects the delegate and generates the correct URL with parameters based on the WCF attributes.

Using the service

So now, to use the new service we can do something like:

[TestMethod]
public void AtAtask_Login()
{
    //Arrange
    var svc = new AtTaskService("https://mycompanyname.attask-ondemand.com/attask/");

    //Act
    var token = svc.Login("MyUserName", "mypassword");

    //Assert
    Assert.IsTrue(token != null);
    Assert.IsTrue(token.data != null);
    Assert.IsTrue(token.data.userID != null);
}

Too easy!

As I mentioned earlier, I would have assumed that WCF should handle this out of the box, but for the life of me I couldn’t get it to return any results. Whether it was a request issue or a parsing issue, or whatever, I have no idea. In the meantime, here’s the source code for the JsonRestService class which will allow you to do the above. The source requires references to Json.Net and Aaron-powell.dynamics

JsonRestService source

Disclaimer: I made this today in a couple of hours; I’m sure there’s some code in here that could be refactored to be nicer

public class JsonRestService<T>
{
    public string ServiceUrl { get; private set; }

    public JsonRestService(string serviceUrl)
    {
        ServiceUrl = serviceUrl;
    }

    /// <summary>
    /// Queries the WCF attributes of the method being called, builds the REST URL and sends the http request
    /// based on the WCF attribute parameters. When the JSON response is returned, it is changed to a dynamic object.
    /// </summary>
    protected dynamic Send(Func<T, dynamic> func)
    {
        //find the method on the main type that is being called... i'm sure there's a better way to do this,
        //but this does work.
        //This will not work if there are methods with overloads.
        var methodNameMatch = Regex.Match(func.Method.Name, "<(.*?)>");
        if (!methodNameMatch.Success || methodNameMatch.Groups.Count != 2)
        {
            throw new MissingMethodException("Could not find method " + func.Method.Name);
        }
        var realMethodName = methodNameMatch.Groups[1].Value;
        var m = typeof(T).GetMethods().Where(x => x.Name == realMethodName).SingleOrDefault();
        if (m == null)
        {
            throw new MissingMethodException("Could not find method " + realMethodName + " on type " + typeof(T).FullName);
        }

        //now that we have the method, find the wcf attributes
        var a = m.GetCustomAttributes(false);
        var webGet = a.OfType<WebGetAttribute>().SingleOrDefault();
        var webInvoke = a.OfType<WebInvokeAttribute>().SingleOrDefault();
        var httpMethod = webGet != null ? "GET" : webInvoke != null ? webInvoke.Method : string.Empty;
        if (string.IsNullOrEmpty(httpMethod))
        {
            throw new ArgumentNullException("The WebGet or WebInvoke attribute is missing from the method " + realMethodName);
        }

        //now that we have the WCF attributes, build the REST url based on the url template and the
        //method being called with its parameters
        var urlTemplate = webGet != null ? webGet.UriTemplate : webInvoke.UriTemplate;
        var urlWithParams = GetUrlWithParams(urlTemplate, func);
        var url = ServiceUrl + urlWithParams;

        //Do the web requests
        string output;
        if (httpMethod == "GET")
        {
            output = HttpGet(url);
        }
        else
        {
            //need to move the query string params to the http parameters
            var parts = url.Split('?');
            output = HttpInvoke(parts[0], httpMethod, parts.Length > 1 ? parts[1] : string.Empty);
        }

        //change the response to json and then dynamic
        var stringReader = new StringReader(output);
        var jReader = new JsonTextReader(stringReader);
        var jsonSerializer = new JsonSerializer();
        var json = jsonSerializer.Deserialize<Dictionary<string, object>>(jReader);
        return json.AsDynamic();
    }

    /// <summary>
    /// Updates the url template with the correct parameters
    /// </summary>
    private static string GetUrlWithParams(string template, Func<T, dynamic> expression)
    {
        //parse the template, get the matches
        foreach (var m in Regex.Matches(template, @"\{(.*?)\}").Cast<Match>())
        {
            if (m.Groups.Count == 2)
            {
                var m1 = m;
                //find the fields based on the expression (Func<T>), get their values
                //and replace the tokens in the url template
                var field = expression.Target.GetType().GetFields().Where(x => x.Name == m1.Groups[1].Value).Single();
                template = template.Replace(m.Groups[0].Value, field.GetValue(expression.Target).ToString());
            }
        }
        return template;
    }

    /// <summary>
    /// Do an Http GET
    /// </summary>
    private static string HttpGet(string uri)
    {
        var webClient = new WebClient();
        var data = webClient.DownloadString(uri);
        return data;
    }

    /// <summary>
    /// Do an Http POST/PUT/etc...
    /// </summary>
    private static string HttpInvoke(string uri, string method, string parameters)
    {
        var webClient = new WebClient();
        var data = webClient.UploadString(uri, method, parameters);
        return data;
    }
}
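The core of GetUrlWithParams is just token substitution over the UriTemplate. A hypothetical JavaScript sketch of the same idea (the helper name and args object are mine, for illustration):

```javascript
// Find the {tokens} in a WCF-style UriTemplate and replace each with the
// matching argument value (the C# version pulls these values out of the
// delegate's captured fields via reflection).
function fillUriTemplate(template, args) {
  return template.replace(/\{(.*?)\}/g, function (whole, name) {
    return args.hasOwnProperty(name) ? encodeURIComponent(args[name]) : whole;
  });
}

var url = fillUriTemplate(
  "/api/login?username={username}&password={password}",
  { username: "MyUserName", password: "mypassword" });
// → "/api/login?username=MyUserName&password=mypassword"
```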

Vertically centring multi-line text with CSS

November 18, 2010 22:34
This post was imported from FARMCode.org which has been discontinued. These posts now exist here as an archive. They may contain broken links and images.
So you have a fixed height container, and you want your text to be vertically centred, whether it be one line or many lines of text.  In every browser.   This is possible, as I just learned from this article.

For all modern browsers, and IE8 and IE9, the method uses display:table-cell.  For IE6 and IE7, the method described in the above article is used, which takes advantage of a height bug.

Your HTML looks like this:

<div id="container">
	<div id="wrapper">
		<div id="content">
			Stuff and things
		</div>
	</div>
</div>

Your CSS looks like this:


The result, shown in the original post as screenshots from IE9 Beta and Google Chrome, is the content block vertically centred within the fixed-height container in both browsers.


And of course, this is not just constrained to text.  #content can be any variable-height block...

Mixing percentage and pixel dimensions in CSS

November 9, 2010 21:32
This post was imported from FARMCode.org which has been discontinued. These posts now exist here as an archive. They may contain broken links and images.
Okay, so there is no trick to actually compute these dimensions (except in IE).  However, there is a little known technique in CSS which is supported in all browsers and solves most of these problems.

Most usually, the desire to mix the two units comes about when one wants column A to fill the page except for column B, which has a fixed pixel width.  The problem becomes more formidable when doing the same thing with rows, the reason being that while a block element will naturally expand to fit its container’s width, there is no parallel behaviour for height.

The misconception: when positioning absolutely, two positions and two dimensions should be set, i.e. top, left, width, height.

The truth: many combinations of position and dimension may be used.  Want to fill the page except for a 300px bar on the right?  Sure:

html, body {
    height: 100%;
}

#content {
    position: absolute;
    top: 0;
    bottom: 0;
    left: 0;
    right: 300px;
}
Turns out you don't even need to specify dimensions; they can be implied by positions. This is actually really old-school stuff which is supported by IE of old, back when it couldn't deal with flowing layouts very well. If you want to start combining percentages with pixels, just mess around with this technique and some containers.
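The arithmetic the browser does for these implied dimensions can be made concrete with a hypothetical helper (names are mine, just to illustrate):

```javascript
// With position: absolute, specifying left and right (instead of width)
// makes the browser derive the width from the containing block:
// width = containerWidth - left - right
function impliedWidth(containerWidth, left, right) {
  return containerWidth - left - right;
}

// a 1024px-wide containing block with left: 0 and right: 300px
// leaves the content a 724px implied width
var contentWidth = impliedWidth(1024, 0, 300);
```

The same relationship holds vertically with top/bottom and the container's height, which is what makes the fixed-pixel row/column layouts described above possible.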

Injecting JavaScript into other frames

September 9, 2010 20:44

The beautiful part of JavaScript is that it is ridiculously flexible and lets you do things that ‘probably’ shouldn’t be done. Here’s a good example of that.

During uComponents development I stumbled upon a situation where I needed to attach a JavaScript method to the top-level frame from inside of an iframe. Well in fact it turns out this is quite easy, you can do something like this:

window.top.doThis = function() { alert("woot!"); }

However, since we’re attaching the ‘doThis’ method to the main frame from an inner iframe, when the inner iframe navigates to another page this function will no longer exist on the main frame… So this clearly isn’t going to work if we want to be able to call the ‘doThis’ method from the inner frame no matter when and where it navigates to… Conundrum!

So the next possibility is to try to just inject a script block into the main frame from the iframe, which actually does work in Firefox and Chrome but fails in Internet Explorer and Safari. (This snippet of code requires that you have jQuery loaded in the main frame)

var js = "function doThis() { alert('woot!'); }";
var injectScript = window.top.$('<script>')
    .attr('type', 'text/javascript')
    .html(js);
window.top.$("head").append(injectScript);

In the above, we’re creating a string function, creating a <script> block with jQuery, appending the string function to the script block and then appending this script block to the <head> element of the main frame. But as I said before, Firefox and Chrome are ok with this but Internet Explorer and Safari will throw JavaScript exceptions such as: Unexpected call to method or property access

Ok, so unless you don’t care about being cross browser, this isn’t going to work. It took me a while to figure out, but the following does work. Yes it looks pretty dodgy, and it probably is; in reality, attempting to do something like this is pretty dodgy to begin with. So here it is (this works in Internet Explorer 8, Firefox 3.6 and Chrome 5, and probably earlier versions of each; I didn’t get around to testing Safari but I am assuming it works):

var iframe = window.top.$("#dummyIFrame");
if (iframe.length == 0) {
    var html = "<html><head><script type='text/javascript'>" +
        "this.window.doThis = function() { alert('woot'); };" +
        "</script></head><body></body></html>";
    iframe = window.top.$("<iframe id='dummyIFrame'>")
        .append(html)
        .hide()
        .css("width", "0px")
        .css("height", "0px");
    window.top.$("body").append(iframe);
}

So I guess this requires a bit of explanation. All browsers seem to let you create iframes dynamically, which also means that you can put whatever content into the iframe while it’s being created, including script blocks. Here’s what we’re doing:

  • checking if our ‘dummy’ iframe already exists (we only need one, so we don’t want to create multiples); if it doesn't:
  • create an html text block including the script that will attach the ‘doThis’ method to the ‘this.window’ object (which for some reason will be referring to the window.top object)
  • next we create an iframe element and append the html text block, and then make sure the iframe is completely hidden
  • finally, we append the iframe to the main window’s body element
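As a rough illustration, the create-once pattern in the steps above can be simulated in plain Node-runnable JavaScript, modelling window.top and the dummy iframe as plain objects rather than real DOM nodes (the names ensureSharedApi and topWindow are mine, not from the snippet above):

```javascript
// Simulate the "create the dummy iframe only once" pattern without a DOM.
function ensureSharedApi(topWindow) {
  // only build the dummy iframe once -- we only ever need one
  if (!topWindow.dummyIFrame) {
    // stands in for the hidden <iframe id='dummyIFrame'> element
    topWindow.dummyIFrame = { id: 'dummyIFrame', hidden: true };
    // the script block generated inside that iframe attaches the shared
    // function to the top window's context, so a navigating child iframe
    // can no longer destroy it
    topWindow.doThis = function () { return 'woot!'; };
  }
}

var top = {};            // stands in for window.top
ensureSharedApi(top);    // first child iframe registers the API
ensureSharedApi(top);    // a later page load is a no-op
console.log(top.doThis()); // prints "woot!"
```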

Nice! So this pretty much means that you can create code from an inner frame, attach it to a different frame, and then have that code run in the context of the main frame with the main frame’s objects and scripts.

One last thing that I found out you can do, though I wouldn’t recommend it because I think it might start filling up your memory. This is also possible:

var html = "<html><head><script type='text/javascript'>" +
    "this.window.doThis = function() { alert('woot'); };" +
    "this.window.doThis();" +
    "</script></head><body></body></html>";
iframe = window.top.$("<iframe id='dummyIFrame'>")
    .append(html);

All that is happening here is that I’m attaching the ‘doThis’ method to the main frame’s window object, calling it directly after and then creating an iframe in memory with this script block. The funny part is that the method executes straight away and I haven’t attached the iframe to the DOM anywhere!

VirtualPathUtility Fixed in .Net 4

August 30, 2010 22:13

If you are not already aware, there’s an issue with the VirtualPathUtility object in the .Net framework: if you try to use the IsAbsolute or ToAbsolute methods with a path that contains a query string, you’ll get an exception thrown. A quick Google search on this issue will show you a ton of posts and ‘work arounds’ for this problem, and I’ve implemented a custom work around for this in the ClientDependency framework as well. In fact, I only noticed this problem when testing the ClientDependency framework with .Net 3.5 and trying to register embedded resources through it (as seen in this blog post here). Since I’d been developing the latest builds of ClientDependency in .Net 4, I had no problems at all, but as soon as I changed the build to .Net 3.5 the exceptions began to show, since web resource URLs have query strings.

So the good news is that it appears Microsoft has finally fixed this bug in .Net 4! The bad news is that you’ll still have to use your ‘work arounds’ if you are targeting framework versions below 4.
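For reference, the typical work around (a hedged sketch only, not ClientDependency’s actual code) is to split the query string off before calling VirtualPathUtility and re-append it afterwards:

```csharp
using System.Web;

public static class SafeVirtualPath
{
    // Pre-.NET 4 work around: VirtualPathUtility.ToAbsolute throws on
    // paths containing a query string, so strip it first.
    public static string ToAbsolute(string virtualPath)
    {
        var queryIndex = virtualPath.IndexOf('?');
        if (queryIndex < 0)
            return VirtualPathUtility.ToAbsolute(virtualPath);
        // resolve the path portion only, then re-append the query string
        return VirtualPathUtility.ToAbsolute(virtualPath.Substring(0, queryIndex))
            + virtualPath.Substring(queryIndex);
    }
}
```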

How to embed ASMX web services in a DLL

August 13, 2010 00:01

The development of uComponents for Umbraco is still ongoing but is probably close to a v1.0 release. During the development of the Multi-node tree picker data type, I’ve spotted a bug in the Umbraco core in regards to the ASMX web service the existing tree uses to render out its initial node. The bug basically won’t allow the tree to render an individual tree, only a full app. Since the Multi-node tree picker requires rendering of an individual tree, I needed to fix this. The fix was a bit of a work around: I had to override the tree’s ASMX web service with my own (I’ll write a post about this one later). It then occurred to me that I’ve never embedded an ASMX web service in a DLL without a physical file, and I didn’t want to go updating the web.config with custom handlers, etc… So I came up with a way to embed ASMX web services in your DLL without modifying web.configs; here’s how:

First, create your web service as a standard class and add all of the functionality that you require your web service to have:
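Something along these lines (a minimal sketch only; the real uComponents service has its own namespace, methods and logic):

```csharp
using System.Web.Services;

[WebService(Namespace = "http://tempuri.org/")]
public class CustomTreeService : WebService
{
    // illustrative method only -- put whatever functionality you need here
    [WebMethod]
    public string GetInitialNode(string treeType)
    {
        return treeType;
    }
}
```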


In your class library project, create a new text file but give it the extension .asmx:


Set the file to be a non compiled file:


Create a resource file:


Double click on your new RESX file, then drag your ASMX file into the editor to add the file as a property of your RESX file:


Double click your ASMX file to edit it and set it to inherit from your custom web service class:
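The .asmx file itself only needs the WebService directive pointing at your class (the class and assembly names here are illustrative):

```
<%@ WebService Language="C#" Class="MyLib.CustomTreeService, MyLib" %>
```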


Now we need to register this ASMX web service in the web application that it’s being deployed to. This is easily done by checking whether the ASMX file exists where it needs to be before it’s called and, if it doesn’t, creating the file. In the case of uComponents, the Multi-node tree picker requires the ASMX web service for the CustomTreeControl, so we’ll do this check in the CustomTreeControl’s static constructor:

//lock object used for the double check locking below
private static readonly object m_Locker = new object();

static CustomTreeControl()
{
    var filePath = HttpContext.Current
        .Server.MapPath("~/CustomTreeService.asmx");
    //check if it exists
    if (!File.Exists(filePath))
    {
        //lock the thread
        lock (m_Locker)
        {
            //double check locking....
            if (!File.Exists(filePath))
            {
                //now create our new local web service
                using (var sw = new StreamWriter(File.Create(filePath)))
                {
                    //write the contents of our embedded resource to the file
                    sw.Write(CustomTreeServiceResource.CustomTreeService);
                }
            }
        }
    }
}


That’s it!