@Shazwazza

Shannon Deminick's blog all about web development

Smidge 2.0 alpha is out

December 30, 2016 03:39

What is Smidge? Smidge is a lightweight runtime bundling library (CSS/JavaScript file minification, combination, compression) for ASP.NET Core.

If you’ve come from ASP.NET 4.5 you’ll be familiar with its bundling/minification API and other bundling options like ClientDependency, but those are no longer available in ASP.NET Core. Instead, the advice is to do all of the bundling and pre-processing you need as part of your build process… which certainly makes sense! So why create this library? A few reasons: some people just want a very simple bundling library and don’t want to worry about Gulp, Grunt or WebPack; in a lot of cases the overhead of runtime processing won’t make any difference; and lastly, if you have created something like a CMS that dynamically loads in assets from 3rd party packages or plugins, you need a runtime bundler since those assets don’t exist at build time.

Over the past few months I’ve been working on some enhancements to Smidge and have found a bit of time to get an alpha released. There are loads of great new features in Smidge 2.0! You can install it via NuGet; it targets .NET Standard 1.6 and .NET Framework 4.5.2:

PM> Install-Package Smidge -Pre

New to Smidge?

It’s easy to get started with Smidge and there are lots of docs available on GitHub that cover installation, configuration, creating bundles and rendering them.
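If you just want the gist before heading to the docs, here’s a rough sketch of the moving parts, pieced together from the examples later in this post (the bundle name and paths are mine; rendering the bundles in your views is covered in the docs):

public class Startup
{
    private readonly IConfigurationRoot _config;

    public Startup(IHostingEnvironment env)
    {
        //build configuration however you normally would; Smidge accepts a config instance
        _config = new ConfigurationBuilder()
            .SetBasePath(env.ContentRootPath)
            .Build();
    }

    public void ConfigureServices(IServiceCollection services)
    {
        //register Smidge's services
        services.AddSmidge(_config);
    }

    public void Configure(IApplicationBuilder app)
    {
        //declare bundles at startup
        app.UseSmidge(bundles =>
        {
            bundles.Create("my-scripts", WebFileType.Js, "~/Js");
        });
    }
}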

New Features

Here’s a list of new features, complete with lots of code examples.

Customizable Debug and Production options

https://github.com/Shazwazza/Smidge/issues/58

Prior to version 2.0 you could only configure aspects of the Production options, and the Debug assets returned were just the raw static files. With 2.0 you have full control over how your assets are processed in both the Debug and Production configurations. For example, you could have your assets combined but not minified in Debug mode. This also allows non-native web assets such as TypeScript to have their pre-processors run so they still work in Debug mode.

Example:

services.AddSmidge(_config)
    .Configure<SmidgeOptions>(options =>
    {
        //set the default e-tag options for Debug mode
        options.DefaultBundleOptions.DebugOptions.CacheControlOptions.EnableETag = false;
    });
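And as a sketch of the “combined but not minified in Debug” idea mentioned above, here’s the same thing expressed with the per-bundle fluent API introduced below (EnableCompositeProcessing is taken from those examples; whether the Debug pipeline skips minification by default is something to verify against the docs):

app.UseSmidge(bundles =>
{
    //combine this bundle's files into one composite file even in Debug mode,
    //leaving the rest of the Debug pipeline (i.e. no minification) untouched
    bundles.Create("debug-combined", WebFileType.Js, "~/Js/App")
        .WithEnvironmentOptions(BundleEnvironmentOptions.Create()
            .ForDebug(builder => builder.EnableCompositeProcessing())
            .Build()
        );
});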

Fluent syntax for declaring/configuring bundles

https://github.com/Shazwazza/Smidge/issues/55

If you want to customize Debug or Production options per bundle, you can do so with a fluent syntax, for example:

app.UseSmidge(bundles =>
{                
    //For this bundle, enable composite files for Debug mode, enable the file watcher so any changes
    //to the files are automatically re-processed and cache invalidated, disable cache control headers
    //and use a custom cache buster. You could of course use the .ForProduction options too 
    bundles.Create("test-bundle-2", WebFileType.Js, "~/Js/Bundle2")
        .WithEnvironmentOptions(BundleEnvironmentOptions.Create()
                .ForDebug(builder => builder
                    .EnableCompositeProcessing()
                    .EnableFileWatcher()
                    .SetCacheBusterType<AppDomainLifetimeCacheBuster>()
                    .CacheControlOptions(enableEtag: false, cacheControlMaxAge: 0))
                .Build()
        );                
});

Customizable Cache Buster

https://github.com/Shazwazza/Smidge/issues/51

In version 1.0 the only cache-busting mechanism was Smidge’s version property, which is set in config. In 2.0 you can control how cache busting works at both a global and a per-bundle level. 2.0 ships with 2 ICacheBuster types:

  • ConfigCacheBuster – the default and uses Smidge’s version property in config

  • AppDomainLifetimeCacheBuster – if enabled, the server/browser cache will be invalidated on every app domain recycle

If you want different behavior, you can define your own ICacheBuster, add it to the IoC container, and then use it globally or per bundle. For example:

//Set a custom MyCustomCacheBuster as the default cache buster for Debug assets:
services.AddSmidge(_config)
    .Configure<SmidgeOptions>(options =>
    {
        options.DefaultBundleOptions.DebugOptions.SetCacheBusterType<MyCustomCacheBuster>();       
    });

//Set a custom MyCacheBuster as the cache buster for a particular bundle in debug mode:
bundles.Create("test-bundle-2", WebFileType.Js, "~/Js/Bundle2")
    .WithEnvironmentOptions(BundleEnvironmentOptions.Create()
            .ForDebug(builder => builder
                .SetCacheBusterType<MyCacheBuster>())
            .Build()
    );
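As a rough sketch of what a custom one might look like (this is my illustration, and the exact ICacheBuster members may differ between versions, so check the interface definition in the Smidge source first):

public class MyCacheBuster : ICacheBuster
{
    //assumption: the interface exposes a single method returning the string
    //that is appended to asset URLs to bust client caches
    public string GetValue()
    {
        //any stable string works; change it to invalidate caches
        return "my-app-v2";
    }
}

//then register it with the IoC container in ConfigureServices:
services.AddSingleton<MyCacheBuster>();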

Customizable cache headers

https://github.com/Shazwazza/Smidge/issues/48 

You can now control whether the ETag header is output, and you can control the value set for the max-age/s-maxage/Expires headers, at a global or bundle level. For example:

//This would set the max-age header for this bundle to expire in 5 days
bundles.Create("test-bundle-5", WebFileType.Js, "~/Js/Bundle5")
    .WithEnvironmentOptions(BundleEnvironmentOptions.Create()
            .ForProduction(builder => builder                                
                .CacheControlOptions(enableEtag: true, cacheControlMaxAge: (5 * 24)))
            .Build()
    );

Callback to customize the pre-processor pipeline per web file

https://github.com/Shazwazza/Smidge/issues/59

This is handy in case you want to modify the pipeline for a given web file at runtime based on some criteria, for example:

services.AddSmidge(_config)
    .Configure<SmidgeOptions>(options =>
    {
        //set the callback
        options.PipelineFactory.OnGetDefault = GetDefaultPipelineFactory;
    });

//The GetDefaultPipelineFactory method could do something like modify the default pipeline to use Nuglify for JS processing:

private static PreProcessPipeline GetDefaultPipelineFactory(WebFileType fileType, IReadOnlyCollection<IPreProcessor> processors)
{
    switch (fileType)
    {
        case WebFileType.Js:
            return new PreProcessPipeline(new IPreProcessor[]
            {
                processors.OfType<NuglifyJs>().Single()
            });                
    }
    //returning null will fallback to the logic defined in the registered PreProcessPipelineFactory
    return null;
}

File watching with automatic cache invalidation

https://github.com/Shazwazza/Smidge/pull/42 

During development it’s nice to be able to test composite files but have them automatically re-processed, with the cache invalidated, whenever one of the source files changes… in 2.0 this is possible! You can enable file watching at the global level or per bundle. Example:

//Enable file watching for all files in this bundle when in Debug mode
bundles.Create("test-bundle-7",
    new JavaScriptFile("~/Js/Bundle7/a1.js"),
    new JavaScriptFile("~/Js/Bundle7/a2.js"))
    .WithEnvironmentOptions(BundleEnvironmentOptions.Create()
            .ForDebug(builder => builder.EnableFileWatcher())
            .Build()
    );

What’s next?

This is an alpha release since there are a few things I still need to complete. Most are already done, I just need to make NuGet packages for them:

More pre-processors

I’ve enabled support for a Nuglify pre-processor for both CSS and JS (Nuglify is a fork of the Microsoft Ajax Minifier, ported to ASP.NET Core with additional features). I’ve also enabled support for an Uglify NodeJS pre-processor, which uses Microsoft.AspNetCore.NodeServices to invoke Node.js from ASP.NET and run the JS version of Uglify. I just need to get these on NuGet but haven’t got around to that yet.

A quick note on minifier performance

Though Nuglify and Uglify have a better minification engine (better/smarter size reduction) than JsMin because they create an AST (Abstract Syntax Tree) to perform their processing, they are actually much slower and consume more resources than JsMin. Since Smidge is a runtime bundling engine, it’s generally important that the bundling/minification is performed quickly. Smidge has strict caching, so the bundling/minification will only happen once (depending on the ICacheBuster you are using), but it’s still worth understanding the performance implications of replacing JsMin with another minifier. I’ve put together some benchmarks (NOTE: a smaller Minified % is better):

Method           | Median        | StdDev    | Scaled | Scaled-SD | Minified % | Gen 0 | Gen 1 | Gen 2 | Bytes Allocated/Op
JsMin            | 10.2008 ms    | 0.3102 ms | 1.00   | 0.00      | 51.75%     | -     | -     | -     | 155,624.67
Nuglify          | 69.0778 ms    | 0.0180 ms | 6.72   | 0.16      | 32.71%     | 53.00 | 22.00 | 15.00 | 4,837,313.07
JsServicesUglify | 1,548.3951 ms | 7.6388 ms | 150.95 | 3.73      | 32.63%     | 0.97  | -     | -     | 576,056.55
The last benchmark may be a bit misleading because the processing is done via Node.js, which executes in a separate process, so I’m unsure whether BenchmarkDotNet can properly capture its memory usage, but you can see its speed is much slower.
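If you want to reproduce this kind of comparison yourself, here’s a minimal BenchmarkDotNet-style harness sketch. To be clear, this is not the harness used for the table above, and the MinifyWith* methods are placeholders you’d wire up to each minifier’s real entry point:

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class MinifierBenchmarks
{
    //load the JS source once; the file name is illustrative
    private readonly string _source = System.IO.File.ReadAllText("input.js");

    //JsMin is the baseline that the 'Scaled' column is measured against
    [Benchmark(Baseline = true)]
    public string JsMin() => MinifyWithJsMin(_source);

    [Benchmark]
    public string Nuglify() => MinifyWithNuglify(_source);

    //placeholders: replace these with calls to the actual minifiers
    private static string MinifyWithJsMin(string input) => input;
    private static string MinifyWithNuglify(string input) => input;
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<MinifierBenchmarks>();
}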

Thanks!

Big thanks to @dazinator for all the help, recommendations, testing, feedback, etc… and to the rest of the community for filing bugs, asking questions, and commenting. Much appreciated :)

I’ve been working on a side project called Smidge which is a runtime JS & CSS pre-processor for ASP.Net 5. I started this late last year after the 2014 MS MVP Summit as a good starting point to deep dive into ASP.Net 5. I’ve been keeping the codebase up to date with the beta releases of ASP.Net 5, I have it cross compiled to both dnx451 and dnxcore50, and recently updated it to use Beta 7. This week I decided to give running ASP.Net 5 CoreCLR on Linux a try… and the result is IT WORKS!

I have next to no experience with Linux and considering that, it wasn’t actually very difficult to get my test site for Smidge up and running. Here’s the info on how I set this up:

Linux setup

I decided to use Ubuntu 14.04.3 LTS. I installed it on Hyper-V on Windows 10 and that was all very easy. I also set up SSH on the server so that I could open a remote terminal to it, which is much nicer than using the terminal through the UI of Ubuntu via Hyper-V. Then I basically followed the instructions here: https://github.com/aspnet/Home/blob/dev/GettingStartedDeb.md#getting-started-with-aspnet-5-and-linux – except that I didn’t configure any NuGet package sources since that is built into dnvm now. Once that was done I used dnvm to install the default runtime: dnvm upgrade. This installed Mono by default, but for my purposes I needed ASP.Net 5 CoreCLR since that’s what Smidge is built against and I wanted to see this CoreCLR cross-platform stuff in action. Issuing this command does the trick: dnvm install 1.0.0-beta7 -r coreclr. Now when I list the installed runtimes (dnvm list) I get:

[Screenshot: output of dnvm list showing the installed runtimes]

So now dnx is installed! We’re ready to go.

dnu publish & bash

What I really wanted to see was whether I could build my solution on my Windows machine in Visual Studio, export it, and have it work on the Linux machine. From the command line on Windows at the root of my project I used dnu publish (https://github.com/aspnet/Home/wiki/DNX-utility#publish-dnu-publish) which outputs a ‘self-contained directory that can be launched’ = great! So I executed that command, it put the folder in the /bin folder of my current project and I copied that directory over to my Linux machine…  and then realized I didn’t know what to do next ;)

I had a look through the files that dnu publish exports and the one that is listed in ASP.Net’s docs is output/kestrel.cmd (since the command in my project is named ‘kestrel’). Inside this file is:

@"dnx.exe" --appbase "%~dp0approot\src\Smidge.Web" Microsoft.Dnx.ApplicationHost --configuration Debug kestrel %*

which if you want to translate to Linux, you could execute this at the Linux terminal at the root of this folder:

dnx --appbase "approot/src/Smidge.Web" Microsoft.Dnx.ApplicationHost --configuration Debug kestrel

… which will actually work, BUT it turns out there’s a way more Linuxy way to do it. dnu publish also creates a file which isn’t in the docs: output/kestrel. Having a look at this file, the first line is: #!/usr/bin/env bash … so I can only assume this is something for Linux since I’ve heard the term bash before. Turns out on Linux you can just do this in the terminal from the root of this folder!

bash kestrel

Result:

[Screenshot: terminal output showing kestrel starting up]

WHOOHOOO!

Let’s see it in action

Now that it’s running, I’ll jump over to the UI in Ubuntu and fire up the browser… Tada!!

[Screenshot: the test site rendering in the browser on Ubuntu]

Problems along the way

I probably made the above sound a bit easier than it was ;) … I did run into a few setup issues along the way.

Problem #1

The first one was when I first tried to run dnx:

“failed to locate libcoreclr with error libunwind.so.8: cannot open shared object file: No such file or directory” when running any dnx or dnu command

I solved this from reading about it in this nice post: http://blogs.msdn.com/b/rdcdev/archive/2015/08/28/some-issues-when-hosting-asp-net-5-on-ubuntu-on-azure.aspx which has some other nice tricks if you run into Ubuntu issues with ASP.Net 5. The solution was that I needed to run this command:

sudo apt-get install libunwind8

Problem #2

Then I got this exception:

The type initializer for 'libcrypto' threw an exception

Which is referenced in this ASP.Net issue: https://github.com/aspnet/dnx/issues/1806 … and it turns out it’s also referenced in the above link. I can’t remember exactly where I found the solution, but I had to run:

apt-get install libcurl4-openssl-dev

Problem #3

After fixing those 2 things the bash kestrel command succeeded, but when I went to test it in my browser I just got a white screen. After Googling, I found this link: http://stackoverflow.com/questions/28845892/blank-white-screen-on-error-with-kestrel-asp-net-5 and as it turns out I had the same issue: I forgot to add the error handling middleware. Perhaps when running in VS with IIS this is automatically taken care of for you… not sure. In any case it’s super important that you add it, and you should add it as the first middleware so you can actually see if your other middleware fails. Typically the ‘Configure’ method in your Startup class should start with:

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // Add the following to the request pipeline only in development environment.
    if (env.IsDevelopment())
    {
        app.UseErrorPage();
    }
    else
    {
        // Add Error handling middleware which catches all application specific errors and
        // sends the request to the following path or controller action.
        app.UseErrorHandler("/Home/Error");
    }
}

Problem #4

We’re not in Windows-land anymore! The errors I was getting were due to invalid file system paths. It turns out .Net has always had the System.IO.Path.DirectorySeparatorChar property, but there was never much reason to use it when .Net ran only on Windows, where that character is always a backslash. So I had to change my hard-coded backslashes to use it (or Path.Combine) instead. The next file path issue was case sensitivity… DOH. The Smidge configuration file is ~/smidge.json, however in my C# code I was trying to load it with “Smidge.json”, which of course fails on Linux.
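As a small illustration (my example, not code from Smidge), the fix boils down to letting System.IO build the path and matching the file-name casing exactly:

using System.IO;

class PathFix
{
    //'appRoot' is a hypothetical base directory, just for illustration
    static string GetConfigPath(string appRoot)
    {
        //hard-coding "\\" breaks on Linux; Path.Combine inserts
        //Path.DirectorySeparatorChar for whatever platform you're on.
        //Also note the lowercase file name: Linux paths are case sensitive.
        return Path.Combine(appRoot, "smidge.json");
    }
}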

Problem #5

Static files… I’m so used to working with IIS that I forgot outside of IIS I need to make sure the static file middleware is used:

app.UseStaticFiles();

I fixed that up and everything just worked… very freakin cool!!

Release – beta6

I’ve put up a new release on Nuget with these changes:

PM> Install-Package Smidge -Pre

And the source is on GitHub.

During the past month I decided to dive deep into learning ASP.NET 5, and what better way to learn than to start a new OSS project :)

I chose to make a new, simple and extensible JavaScript/CSS runtime pre-processor for ASP.NET 5. It does file minification, combination and compression, has a nice file caching layer, and it’s all done with async operations. I ported over a few ideas and code snippets from CDF (the Client Dependency Framework) but with a more modern approach. I’ve called it ‘Smidge’ = something really small.

The project is on GitHub. It’s still a work in progress but it’s functional and there’s even some documentation! In the next few weeks I’ll get more of the code and docs updated and hopefully have a beta release out. In the meantime, you can clone the source, browse the code, build it and of course use it if you like.

Project details

It’s currently only targeting aspnet50 and not the Core CLR… I didn’t start with the Core CLR because there was some legacy code I had to port over and I wanted to get something up and working relatively quickly. It shouldn’t be too much work to convert to the Core CLR and Mono, and hopefully I’ll find time to do that soon. It references all of the beta-* libraries from the ASP.NET 5 nightly MyGet feeds since there’s some code I’m using that isn’t available in the current beta1 release (like Microsoft.AspNet.WebUtilities.UriHelper). The target KRE version is currently KRE-CLR-amd64 1.0.0-beta2-10760.

Installation

I’ve put up an Alpha 1 release on Nuget, so you can install it from there:

PM> Install-Package Smidge -Pre

There are some installation instructions here; you’ll need to add the smidge.json file yourself for now since I can’t figure out how to get VS 2015 (kpm pack) to package that up… more learning required!

 

There’s certainly a lot of detective work involved in learning ASP.NET 5 but with the code being open source and browse-able/searchable on GitHub, it makes finding what you need fairly easy.

I previously wrote a post about Listening for validation changes in AngularJS which, with my knowledge at the time, required a handy hack to get a reference to the currently scoped form controller (ngForm) for a given input control. I also complained a bit that it seemed angular didn’t really provide a way to reference the current form controller without this little hack… well, it turns out I was wrong! :)

AngularJS seems kind of like ASP.Net MVC in the early days when there wasn’t much documentation… it definitely pays off to read through the source code to figure out how to do more complicated things. I had a bit of a ‘light bulb’ moment when I realized that ngForm is itself a directive/controller, and I had recently noticed that the ‘require’ parameter used when setting up a directive allows you to search for controllers in the current directive’s ancestry (i.e. prefix the required controller with a hat: ^).

What does the require parameter of a directive do?

Let’s face it, the directive documentation for AngularJS is in desperate need of being updated so that human beings can understand it (as noted by the many comments at the bottom). So I’ll try to explain what the ‘require’ parameter actually does and how to use it.

We’ll create a simple custom validation directive which will invalidate a field if its value is “blah”:

function blahValidator() {
    return {
        require: 'ngModel',
        link: function(scope, elm, attr, ctrl) {

            //'value' is the value currently being parsed/formatted,
            //so validate against it rather than ctrl.$viewValue
            var validator = function(value) {
                if (value === "blah") {
                    ctrl.$setValidity('blah', false);
                    return null;
                }
                else {
                    ctrl.$setValidity('blah', true);
                    return value;
                }
            };

            ctrl.$formatters.push(validator);
            ctrl.$parsers.push(validator);
        }
    };
}

You’ll notice that we have a ‘require’ parameter specified for ‘ngModel’. What happens here is that when we assign this directive to an input field, angular ensures the input field also has an ng-model attribute defined on it. Angular then passes the instance of the ng-model controller into the ‘ctrl’ parameter of the link function.

So, the ‘require’ parameter dictates what the ‘ctrl’ parameter of the link function equals.

You can also require multiple controllers:

[Image: example of require: ['ngModel', '^form'], where the link function’s 4th argument becomes an array of the required controllers (ctrls)]

NOTE: the ctrl/ctrls parameter in the above 2 examples can be called whatever you want

Special prefixes

Angular has 2 special prefixes for the ‘require’ parameter:

^ = search the current directive’s ancestry for the controller

? = don’t throw an exception if the required controller is not found, making it ‘optional’, not a requirement

You can also combine them so angular will search the ancestry but the controller remains optional, such as: ^?ngController

In the above example, the blahValidator will only receive the controller if the directive is declared inside of an ng-controller block; the ? prefix stops angular from throwing when it isn’t.

Referencing the current ng-form

Given the above examples, and knowing ngForm is itself a controller, we should be able to just require ngForm and have it injected into the directive. BUT, it won’t work the way you expect. For some reason angular registers the ngForm controller under the name “form”, which I discovered by browsing the angular source.

So now it’s easy to get a reference to the containing ngForm controller; all you need to do is add a ‘require’ parameter to your directive that looks like:

require: '^form'
and it will be injected into the ctrl parameter of your link function.

In some applications it can be really useful to have controllers listen for validation changes, especially in more complicated AngularJS apps where ‘ng-repeat’ is used to render form controls. There are plenty of cases where a parent scope might need to know about validation changes based on child scopes… one such case is a validation summary. There are a couple of ways to implement this (and probably more) but they all seem a bit hacky, such as:

  • Apply a $watch to the current form object’s $valid property in the parent scope, then use jQuery to look for elements that have a class like ‘invalid’
    • You could then use the scope() function on the DOM element that ng-repeat is used on to get any model information about the invalid item
  • In child scopes you could apply a $watch to individual form elements’ $valid property then change the $parent scope’s model values to indicate validation changes

Instead, what I wanted was a re-usable way to ‘bubble’ up validation changes from any scope’s form element to ancestor scopes, with none of the following:

  • No jquery DOM selection
  • No hard coding of form names to access the validation objects
  • No requirement to modifying other scopes’ values

Implementation

The way I went about this was to create a very simple custom directive which I’ve called ‘val-bubble’ since it has to do with validation and it ‘bubbles’ up a message to any listening scopes. An input element might then look like this:

<input name="FirstName" type="text" required val-bubble />

Then in an outer scope I can listen for validation changes and do whatever I want with the result:

scope.$on("valBubble", function(evt, args) {
    alert("Validation changed for field " + args.ctrl.$name + ". Valid? " + args.isValid);
});

The args object contains these properties:

  • isValid = is the field valid
  • ctrl = the current form controller object for the field
  • scope = the scope bound to the field being validated
  • element = the DOM element of the field being validated
  • expression = the current $watch expression used to watch this field’s validation changes

With all of that information you can easily add extra functionality to your app based on the currently validating inputs, such as a validation summary or whatever.

Custom directive

The val-bubble custom directive is pretty simple, here’s the code and an explanation below:

app.directive('valBubble', function (formHelper) {
    return {
        require: 'ngModel',
        restrict: "A",
        link: function (scope, element, attr, ctrl) {

            if (!attr.name) {
                throw "valBubble must be set on an input element that has a 'name' attribute";
            }

            var currentForm = formHelper.getCurrentForm(scope);
            if (!currentForm || !currentForm.$name)
                throw "valBubble requires that a name is assigned to the ng-form containing the validated input";

            //watch the current form's validation for the current field name
            scope.$watch(currentForm.$name + "." + ctrl.$name + ".$valid", function (isValid, lastValue) {
                if (isValid != undefined) {
                    //emit an event upwards
                    scope.$emit("valBubble", {
                        isValid: isValid,     // if the field is valid
                        element: element,     // the element that the validation applies to
                        expression: this.exp, // the expression that was watched to check validity
                        scope: scope,         // the current scope
                        ctrl: ctrl            // the current controller
                    });
                }
            });
        }
    };
});

The first thing we’re doing here is limiting this directive to be used only as an attribute and ensuring the element has a model applied to it. Then we make sure that the element has a ‘name’ value applied. After that we are getting a reference to the current form object that this field is contained within using a custom method: formHelper.getCurrentForm … more on this below. Lastly we are applying a $watch to the current element’s $valid property and when this value changes we $emit an event upwards to parent/ancestor scopes to listen for.

formHelper

Above I mentioned that I wanted a re-usable solution where I didn’t need to hard code things like the current form name. Unfortunately Angular doesn’t really provide a way to do this OOTB (as far as I can tell!) (Update! see here on how to access the current form: http://shazwazza.com/post/Reference-the-current-form-controller-in-AngularJS), so I’ve just created a simple factory object that finds the current form object applied to the current scope. The type check is fairly rudimentary but it works: it simply checks each property that exists on the scope object and tries to detect the object that matches the definition of an Angular form object:

app.factory('formHelper', function() {
    return {
        getCurrentForm: function(scope) {
            var form = null;
            var requiredFormProps = ["$error", "$name", "$dirty", "$pristine", "$valid", "$invalid", "$addControl", "$removeControl", "$setValidity", "$setDirty"];
            for (var p in scope) {
                if (_.isObject(scope[p]) && !_.isFunction(scope[p]) && !_.isArray(scope[p]) && p.substr(0, 1) != "$") {
                    var props = _.keys(scope[p]);
                    if (props.length < requiredFormProps.length) continue;
                    if (_.every(requiredFormProps, function(item) {
                        return _.contains(props, item);
                    })) {
                        form = scope[p];
                        break;
                    }
                }
            }
            return form;
        }
    };
});

NOTE: the above code has a dependency on UnderscoreJS

So now you can just apply the val-bubble attribute to any input element to ensure its validation changes are published to listening scopes!

I decided to write a quick blog post about this because much of the documentation and many of the examples for this are a bit scattered. What this achieves is the ability to upload any number of files, along with any other type of data, in one request. For this example we’ll send up JSON data along with some files.

File upload directive

First we’ll create a simple custom file upload angular directive

app.directive('fileUpload', function () {
    return {
        scope: true, //create a new scope
        link: function (scope, el, attrs) {
            el.bind('change', function (event) {
                var files = event.target.files;
                //iterate files since 'multiple' may be specified on the element
                for (var i = 0; i < files.length; i++) {
                    //emit event upward
                    scope.$emit("fileSelected", { file: files[i] });
                }
            });
        }
    };
});

The usage of this is simple:

<input type="file" file-upload multiple/>

The ‘multiple’ attribute indicates that the user can select multiple files to upload, which this example fully supports.

In the directive we ensure a new scope is created and then listen for changes made to the file input element. When changes are detected, we emit an event to all ancestor scopes (upward) with the file object as a parameter.

Mark-up & the controller

Next we’ll create a controller to:

  • Create a model to bind to
  • Create a collection of files
  • Consume this event so we can assign the files to the collection
  • Create a method to post it all to the server

NOTE: I’ve put all this functionality in this controller for brevity; in most cases you’d have a separate factory to handle posting the data

With the controller in place, the mark-up might look like this (and will display the file names of all of the files selected):

<div ng-controller="Ctrl">
    <input type="file" file-upload multiple/>
    <ul>
        <li ng-repeat="file in files">{{file.name}}</li>
    </ul>
</div>

The controller code below contains some important comments relating to how the data gets posted up to the server, namely the ‘Content-Type’ header, since the value that needs to be set is a bit quirky.

function Ctrl($scope, $http) {

    //a simple model to bind to and send to the server
    $scope.model = {
        name: "",
        comments: ""
    };

    //an array of files selected
    $scope.files = [];

    //listen for the file selected event
    $scope.$on("fileSelected", function (event, args) {
        $scope.$apply(function () {
            //add the file object to the scope's files collection
            $scope.files.push(args.file);
        });
    });

    //the save method
    $scope.save = function() {
        $http({
            method: 'POST',
            url: "/Api/PostStuff",
            //IMPORTANT!!! You might think this should be set to 'multipart/form-data'
            // but this is not true because when we are sending up files the request
            // needs to include a 'boundary' parameter which identifies the boundary
            // name between parts in this multi-part request and setting the Content-type
            // manually will not set this boundary parameter. For whatever reason,
            // setting the Content-type to 'false' will force the request to automatically
            // populate the headers properly including the boundary parameter.
            headers: { 'Content-Type': false },
            //This method will allow us to change how the data is sent up to the server
            // for which we'll need to encapsulate the model data in 'FormData'
            transformRequest: function (data) {
                var formData = new FormData();
                //need to convert our json object to a string version of json otherwise
                // the browser will do a 'toString()' on the object which will result
                // in the value '[Object object]' on the server.
                formData.append("model", angular.toJson(data.model));
                //now add all of the assigned files (note: iterate to files.length,
                // not files, which was a bug in the original snippet)
                for (var i = 0; i < data.files.length; i++) {
                    //add each file to the form data and iteratively name them
                    formData.append("file" + i, data.files[i]);
                }
                return formData;
            },
            //Create an object that contains the model and files which will be transformed
            // in the above transformRequest method
            data: { model: $scope.model, files: $scope.files }
        }).
        success(function (data, status, headers, config) {
            alert("success!");
        }).
        error(function (data, status, headers, config) {
            alert("failed!");
        });
    };
}

Handling the data server-side

This example shows how to handle the data on the server side using ASP.Net WebAPI; I’m sure it’s reasonably easy to do on other server-side platforms too.

public async Task<HttpResponseMessage> PostStuff()
{
    if (!Request.Content.IsMimeMultipartContent())
    {
        throw new HttpResponseException(HttpStatusCode.UnsupportedMediaType);
    }

    var root = HttpContext.Current.Server.MapPath("~/App_Data/Temp/FileUploads");
    Directory.CreateDirectory(root);
    var provider = new MultipartFormDataStreamProvider(root);
    var result = await Request.Content.ReadAsMultipartAsync(provider);
    if (result.FormData["model"] == null)
    {
        throw new HttpResponseException(HttpStatusCode.BadRequest);
    }

    var model = result.FormData["model"];
    //TODO: Do something with the json model which is currently a string

    //get the files
    foreach (var file in result.FileData)
    {
        //TODO: Do something with each uploaded file
    }

    return Request.CreateResponse(HttpStatusCode.OK, "success!");
}
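As a follow-up to the first TODO above, here’s one way (my sketch, using Json.NET) to turn that “model” string into an object; the POCO simply mirrors the client-side $scope.model:

using Newtonsoft.Json;

public class PostedModel
{
    //mirrors the client-side model: { name: "", comments: "" }
    public string Name { get; set; }
    public string Comments { get; set; }
}

//inside PostStuff(), replacing the first TODO:
//var model = JsonConvert.DeserializeObject<PostedModel>(result.FormData["model"]);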


Injecting JavaScript into other frames

September 9, 2010 20:44

The beautiful part of JavaScript is that it is ridiculously flexible and lets you do things that ‘probably’ shouldn’t be done. Here’s a good example of that.

During uComponents development I stumbled upon a situation where I needed to attach a JavaScript method to the top-level frame from inside of an iframe. Well, in fact, it turns out this is quite easy; you can do something like this:

window.top.doThis = function() { alert("woot!"); }

However, since we’re attaching the ‘doThis’ method to the main frame from an inner iframe, when the inner iframe navigates to another page this function will no longer exist on the main frame… So this clearly isn’t going to work if we want to be able to call the ‘doThis’ method from the inner frame no matter when and where it navigates… Conundrum!

So the next possibility is to try to just inject a script block into the main frame from the iframe, which actually does work in Firefox and Chrome but fails in Internet Explorer and Safari. (This snippet of code requires that you have jQuery loaded in the main frame.)

var js = "function doThis() { alert('woot!'); }";
var injectScript = window.top.$('<script>')
    .attr('type', 'text/javascript')
    .html(js);
window.top.$("head").append(injectScript);

In the above, we’re creating a string function, creating a <script> block with jQuery, appending the string function to the script block, and then appending this script block to the <head> element of the main frame. But as I said before, Firefox and Chrome are OK with this while Internet Explorer and Safari will throw JavaScript exceptions such as: Unexpected call to method or property access.

So unless you don’t care about being cross-browser, this isn’t going to work. It took me a while to figure out the approach below, but it does work. Yes, it looks pretty dodgy, and it probably is; in reality, attempting to do something like this is pretty dodgy to begin with. So here it is (this works in Internet Explorer 8, Firefox 3.6 and Chrome 5, probably earlier versions of each too; I didn’t get around to testing Safari but I’m assuming it works):

var iframe = window.top.$("#dummyIFrame");
if (iframe.length == 0) {
    var html = "<html><head><script type='text/javascript'>" +
        "this.window.doThis = function() { alert('woot'); };" +
        "</script></head><body></body></html>";
    iframe = window.top.$("<iframe id='dummyIFrame'>")
        .append(html)
        .hide()
        .css("width", "0px")
        .css("height", "0px");
    window.top.$("body").append(iframe);
}

I guess this requires a bit of explanation. All browsers seem to let you create iframes dynamically, which also means you can put whatever content into the iframe while it’s being created, including script blocks. Here’s what we’re doing:

  • checking if our ‘dummy’ iframe already exists (we only need one), and if it doesn’t:
  • create an html text block including the script that will attach the ‘doThis’ method to the ‘this.window’ object (which for some reason will be referring to the window.top object)
  • next we create an iframe element and append the html text block, and then make sure the iframe is completely hidden
  • finally, we append the iframe to the main window’s body element

Now, from the inner frame, you can call the method on the main frame whenever you need it:

window.top.doThis();

Nice! So this pretty much means that you can create code from an inner frame and attach it to a different frame, then have that code run in the context of the main frame with the main frame’s objects and script set.

One last thing that I found out you can do, though I wouldn’t recommend it because I think it might start filling up your memory. But this is also possible:

var html = "<html><head><script type='text/javascript'>" +
    "this.window.doThis = function() { alert('woot'); };" +
    "this.window.doThis();" +
    "</script></head><body></body></html>";
iframe = window.top.$("<iframe id='dummyIFrame'>")
    .append(html);

All that is happening here is that I’m attaching the ‘doThis’ method to the main frame’s window object, calling it directly after and then creating an iframe in memory with this script block. The funny part is that the method executes straight away and I haven’t attached the iframe to the DOM anywhere!

Umbraco 4.1 Benchmarks Part 1

April 16, 2010 11:42
This post was imported from FARMCode.org which has been discontinued. These posts now exist here as an archive. They may contain broken links and images.
This is the first installment of what will hopefully be many Umbraco benchmark reports created by various members of the core team in the lead up to the launch of Umbraco 4.1. This benchmark report is about the request/response performance of the Umbraco back-office. This compares 4 different configurations: 4.0.3 with browser cache disabled (first run), 4.0.3 with browser cached files, 4.1 with browser cache disabled and 4.1 with browser cached files. These comparisons have been done by using newly installed Umbraco instances with ONLY the CWS package installed. The benchmark results were prepared by using Charles Proxy.
Test                      | Stats              | 4.0.3   | 4.0.3 client cached | 4.1    | 4.1 client cached
Content app               | Completed Requests | 68      | 7                   | 46     | 6
                          | Response (KB)      | 687.05  | 72.48               | 431.41 | 32.54
Edit content home page    | Completed Requests | 50      | 2                   | 34     | 1
                          | Response (KB)      | 385.10  | 47.28               | 343.36 | 12.07
Expand all content nodes  | Completed Requests | 17      | 17                  | 16     | 16
                          | Response (KB)      | 18.47   | 18.47               | 13.96  | 10.85
TOTALS                    | Completed Requests | 135     | 26                  | 96     | 23
                          | Response (KB)      | 1063.62 | 138.23              | 788.73 | 55.46

Note: the above is based on <compilation debug="false"> being set in the web.config. If it is set to true, the compression, combination and minification for both the ClientDependency framework and ScriptManager are not enabled. Also, this is not based on having IIS 7’s dynamic/static compression turned on; these benchmarks are based on Umbraco performing “as is” out of the box, which will be the same for IIS 6.

Though there are only 3 tests listed above, these results will be consistent throughout all applications in the Umbraco back office in version 4.1.

The 4.1 difference:

  • In 4.0.3, all ScriptResource calls generated by ScriptManager were not being compressed or minified. This was due to a browser compatibility flag that was set in the base page (this was probably very old code from pre v3!).
  • Script managers in the back-office have the ScriptMode=”release” explicitly set (for minification of ScriptResource.axd)
  • The ClientDependency framework is shipped with 4.1 and all of the back office registers its JavaScript and CSS files with this framework. This allows for:
    • Combination, compression and minification of dependencies
    • Rogue script/style detection (scripts/styles that weren’t registered with the framework still get compressed/minified)
    • Compression/minification of specified MIME types, in this case all JSON requests in the back office (namely the tree)
    • Compression/minification of all JavaScript web service proxy classes (the ‘asmx/js’ requests that are made by registering web services with the ScriptManager)
  • Much of the back office client scripting in 4.1 has been completely refactored. Most of the JavaScript has been rewritten and a ton of file cleanup has been done.

Compared to 4.0.3, this is a HUGE difference with some serious performance benefits!

This post was imported from FARMCode.org which has been discontinued. These posts now exist here as an archive. They may contain broken links and images.
I’m pleased to announce that the ClientDependency framework now supports MVC! It’s very easy to implement using HtmlHelper extension methods. Here are some quick examples:

Make a view dependent on a CSS file based on a path defined as “Styles”

<% Html.RequiresCss("Content.css", "Styles"); %>

Make a view dependent on jQuery using a full path declaration:

<% Html.RequiresJs("/Js/jquery-1.3.2.min.js"); %>

Rendering the Style blocks and defining a global style path:

<%= Html.RenderCssHere(new BasicPath("Styles", "/Css")) %>

Rendering the Script block (no global script path defined):

<%= Html.RenderJsHere() %>

There’s still a provider model for MVC but it uses a slightly different implementation from Web Forms. The same compositeFiles provider model is used, but instead of the fileRegistration provider model used in Web Forms there is a new MVC ‘renderers’ provider model. A renderer provider is similar to the Web Forms fileRegistration providers, except that instead of registering the markup in the page via the page life cycle, it renders out the html block necessary to embed in the page.

All of the functionality that existed in Web Forms exists in MVC. You can make as many views as you want dependent on as many of the same or different client files as you want, and the system will still sort by position and priority and remove all duplicate registrations. Rogue scripts & styles still get processed by the composite file provider in MVC. Currently, however, if you place user or composite controls on your views that have client dependencies tagged with either the control or attribute method used in Web Forms, these will not be registered with the view and output with the renderer.

MVC pages have been added to the demo project as examples so have a look! You can download the source HERE

For full details and documentation go HERE

This post was imported from FARMCode.org which has been discontinued. These posts now exist here as an archive. They may contain broken links and images.
There are two reasons I use dynamic text replacement:

  1. Plain text in a browser window is never as smooth as it is in the design (except in Safari),
  2. The designs I'm given almost always use fancy (non-web based) fonts for headings, intro text etc.

Currently I subscribe to two solutions, both client-side: sIFR and cufón. If you want a quick answer to "what should I use?" then here it is: if you want replacement for long sentences and headings, use sIFR. If you want replacement for a few words on a button or in a menu, use cufón. If you want to know more, read on...

sIFR

How sIFR works

sIFR uses javascript to dynamically embed a Flash object in the place of specified HTML text elements. The Flash object is essentially an empty SWF (compiled Flash file) which includes the characters of the font you want to use. When javascript embeds the SWF in the HTML, it passes the SWF arguments such as text-content, font-size, color, rollover behaviour and many more. Some of these properties javascript takes from the CSS applied to the text, and some are overridden by sifr-config.js, which is a human-readable config file containing additional formatting tweaks.

How to use sIFR

  1. Download the source (only sifr.js, sifr-config.js and sifr.css are actually needed).
  2. Generate a font SWF from a True Type Font file. You can do this manually with Adobe Flash (like this) but it is easier to use this online generation tool.
  3. Link to sifr.css in the head of your HTML page.
  4. Link to both scripts in the head of your HTML page, first sifr.js, then sifr-config.js
  5. Edit sifr-config.js to read in the font SWF you created and use CSS selector syntax to select the HTML text elements you want to replace.
  6. Tweak away in sifr-config.js until the text looks right when sIFR is both enabled and disabled, in case the user is missing Flash (iPhones don't have Flash as of early 2010!).

cufón

How cufón works

cufón is entirely javascript and works in all major browsers (including IE6). You still need to generate a font file, but this is output to a javascript file similar in size to the equivalent sIFR SWF font file. cufón looks at selected blocks of text and replaces each word with a dynamically generated image (using the HTML <canvas> tag), or in IE’s case a VML object (Vector Markup Language). As a consequence of this, increasing the text size of the page either doesn’t affect cufón-replaced text or expands the image, blurring it. Except in IE, which scales the text perfectly.

How to use cufón

  1. Download the source (all you need is cufon-yui.js)
  2. Generate a font file from a True Type Font. You are unfortunately dependent on this online generator.
  3. Link to cufon-yui.js in the head of your HTML page. Beneath it, link to any font files you've generated.
  4. You may also want to create another file, cufon-config.js, to hold selector information much the same way as sifr-config.js does.
  5. Populate cufon-config.js with what HTML text elements you want to replace.

Legality

Font foundries are seemingly run by people who are very freaked out that their fonts are going to leak for free. Hence, the vast majority of fonts can't legally be embedded directly into the CSS (what, you didn't know about that? It's been around a while in some form or another - the @font-face directive allows you to supply the font file for obscure font-families - but it's only legal with open source fonts).

Because sIFR uses Flash, and Adobe has a cordial agreement with the font foundries in Switzerland (or wherever) which allows anyone to embed pretty much any font in a .swf, sIFR is totally legal. cufón... cufón not so much. Although both supply a compiled/obfuscated font resource (in .swf/.js format respectively), and are basically the same from the font foundries' point of view, the foundries haven't got around to adding "allows embedding using javascript methods" to their fonts' terms of use agreements. And I wouldn't count on it...

So in short, use cufón. Go on, it's young and fresh and rad! Push the font foundries to legalise it! But remember: you read that in a blog post... I'm not going to be your lawyer if they come after you.

Comparison

                                                  | sIFR | cufón
Core file-size (not including fonts)              | 30KB | 18KB
Independent of Flash                              | ×    | ✓
Resizes nicely                                    | ✓    | ×
Cursor-selectable text                            | ✓    | ×
Doesn't flicker on load                           | ×    | ✓
Independent of online font-generator              | ✓    | ×
Online font-generator supports Open fonts (.otf)  | ×    | ✓
Supported in all browsers                         | ✓    | ✓
Degrades gracefully                               | ✓    | ✓
Legal                                             | ✓    | ×