Benchmarking performance of your code between release versions

A while ago, back in 2016, I posted a question on the BenchmarkDotNet repository asking about an official way to run benchmarks between NuGet releases. In 2018 I managed to find some time and, with the help of Adam Sitnik (one of the project's maintainers), was able to make that work!

I won't go into detail about what BenchmarkDotNet is, but it's a brilliant way to accurately benchmark your code and see how fast it runs, how much memory it allocates, and so on.

With this new feature you can now easily see how your code changes affect performance between your releases.

Show me the code

Making this work is quite easy, and there's a quick-start code snippet in the repo already. For the example below I'll use ImageSharp as the library under test, and we'll see how well James and his team are doing at improving its JPEG decoding performance.


using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Configs;
using BenchmarkDotNet.Jobs;
using BenchmarkDotNet.Toolchains.CsProj;
using SixLabors.ImageSharp;

[Config(typeof(Config))]
public class ImageTests
{
    private static readonly string _filePath = @"C:\temp\test.jpg";

    private class Config : ManualConfig
    {
        public Config()
        {
            // One base job, cloned once per NuGet package version under test.
            var baseJob = Job.MediumRun.With(CsProjCoreToolchain.Current.Value);
            Add(baseJob.WithNuGet("SixLabors.ImageSharp", "1.0.0-beta0006").WithId("1.0.0-beta0006"));
            Add(baseJob.WithNuGet("SixLabors.ImageSharp", "1.0.0-beta0005").WithId("1.0.0-beta0005"));
            Add(baseJob.WithNuGet("SixLabors.ImageSharp", "1.0.0-beta0004").WithId("1.0.0-beta0004"));
        }
    }

    [Benchmark]
    public Size LoadJpg()
    {
        // Decode the JPEG and return its dimensions so the result isn't optimized away.
        using (var img = Image.Load(_filePath))
        {
            return img.Size();
        }
    }
}

The code above should be pretty straightforward. It sets up three BenchmarkDotNet jobs, each using a different version of the SixLabors.ImageSharp NuGet package. The actual benchmark then loads a JPEG, extracts its size, and returns it.

Running the benchmark is like running any other BenchmarkDotNet test, for example in a console app:


using BenchmarkDotNet.Running;

class Program
{
    static void Main(string[] args)
    {
        // Runs every [Benchmark] method in ImageTests, once per configured job.
        var summary = BenchmarkRunner.Run<ImageTests>();
    }
}
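One thing worth remembering: BenchmarkDotNet warns loudly about non-optimized builds, so the console app should be started in Release mode. Assuming a standard .NET Core console project, that looks something like:

```shell
dotnet run -c Release
```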

The results

 Method  | Job            | NuGetReferences                     |     Mean |    Error |    StdDev
-------- | -------------- | ----------------------------------- | --------:| --------:| ---------:
 LoadJpg | 1.0.0-beta0004 | SixLabors.ImageSharp 1.0.0-beta0004 | 297.5 ms | 142.8 ms |  7.827 ms
 LoadJpg | 1.0.0-beta0005 | SixLabors.ImageSharp 1.0.0-beta0005 | 202.9 ms | 466.6 ms | 25.577 ms
 LoadJpg | 1.0.0-beta0006 | SixLabors.ImageSharp 1.0.0-beta0006 | 148.8 ms | 107.8 ms |  5.910 ms


Looks good! From the beta0004 release to the beta0006 release, JPEG decoding performance has almost doubled.

API Surface Area

There is one caveat though… In order to run these tests between versions of your library, the same API surface area needs to exist in every version, otherwise you'll get exceptions when running the benchmarks. This is why versions beta0001 through beta0003 are not included in the jobs listed above: in those older versions either the APIs or the namespaces were different.

It is possible to work around this, but you'd need some ugly reflection to do it, and you'd then have to be careful that you aren't measuring the reflection overhead as part of the benchmark.
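As a sketch of what that workaround could look like: resolve the method once in [GlobalSetup] and bind it to a strongly typed delegate, so the measured code path pays only a delegate call rather than MethodInfo.Invoke on every iteration. Note that LegacyImageApi below is a hypothetical stand-in for an older API, not a real ImageSharp type; in a real benchmark you'd resolve the type from the versioned assembly.

```csharp
using System;
using System.Reflection;
using BenchmarkDotNet.Attributes;

// Hypothetical stand-in for an older API whose shape differs between
// versions; replace with a type resolved from the versioned assembly.
public static class LegacyImageApi
{
    public static int LoadAndMeasure(string path) => path.Length; // placeholder work
}

public class ReflectionBenchmark
{
    private Func<string, int> _loadAndMeasure;

    [GlobalSetup]
    public void Setup()
    {
        // Resolve the method once, outside the measured code path...
        MethodInfo method = typeof(LegacyImageApi).GetMethod("LoadAndMeasure");

        // ...and bind it to a typed delegate, so each benchmark invocation
        // pays only a delegate call, not the cost of MethodInfo.Invoke.
        _loadAndMeasure = (Func<string, int>)Delegate.CreateDelegate(
            typeof(Func<string, int>), method);
    }

    [Benchmark]
    public int LoadJpg() => _loadAndMeasure(@"C:\temp\test.jpg");
}
```

The key design point is that all reflection happens in [GlobalSetup], which BenchmarkDotNet runs before measurement starts, so the benchmark itself only times the delegate invocation.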


Now you should have a pretty easy way to know how the performance of your library is changing between versions. Happy coding!

Author

Shannon Thompson

I'm a Senior Software Engineer working full time at Microsoft. Previously, I was working at Umbraco HQ for about 10 years. I maintain several open source projects (many related to Umbraco) such as Articulate, Examine and Smidge, and I also have a commercial software offering called ExamineX. Welcome to my blog :)
