Dirty Docker Hack for WikiJS PlantUML

WikiJS 2 is a great wiki server that includes support for PlantUML in markdown. If you want it to use your own PlantUML server for rendering, that should be a simple matter of changing a setting. Unfortunately, the editor is essentially hard-coded to use https://plantuml.requarks.io, and there isn’t currently a way to override it (there’s a two-year-old pull request that hasn’t been merged yet).

However, if you’re hosting WikiJS in a container and feeling super lazy, you may find this diabolical Dockerfile hack useful until WikiJS 3 is released:

FROM ghcr.io/requarks/wiki:2

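# Patch the hard-coded PlantUML URL in both the server code and the
# pre-built client assets.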
RUN grep -rl plantuml.requarks.io /wiki/server | xargs sed -i 's,https://plantuml.requarks.io,https://MY_PLANTUML_URL,g' && \
    grep -rl plantuml.requarks.io /wiki/assets/js | xargs sed -i 's,https://plantuml.requarks.io,https://MY_PLANTUML_URL,g'

Basically it’s just creating a new image and replacing the https://plantuml.requarks.io URL with your own.
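To use it, replace MY_PLANTUML_URL with your own server’s address, build the image with something like docker build -t wiki-patched . (the tag name is up to you), then run it in place of the stock WikiJS 2 image.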

NuGet Package Hierarchical Versioning

This post discusses an auto-versioning strategy for interdependent NuGet packages.

The new project system

The new project system introduced with Visual Studio 2017 is a huge improvement on what’s gone before.  One of the best features is aggregation of several configuration files into a single project file.  Previously, a typical C# project might include these:

  • <project>.csproj – The project file, which typically lists every file included in the project.
  • Properties/AssemblyInfo.cs – Metadata compiled into the assembly, including version numbers.
  • packages.config – A list of referenced NuGet package versions.
  • <project>.nuspec – NuGet package configuration file (optional; used for creating NuGet packages).

With the new project system, most stuff you’ll need is in the .csproj file.  For many projects this just means less clutter, and a project file you’re happy to edit manually.  If you’re building NuGet packages, it gets even better.
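For example, a minimal packable project file might look something like this (the ids, version and target framework are placeholders, not taken from a real project):

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
    <Version>1.2.3</Version>
    <Authors>Me</Authors>
    <Description>Package B</Description>
  </PropertyGroup>

  <ItemGroup>
    <!-- When packing, project references become NuGet package dependencies,
         using each referenced project's own <Version>. -->
    <ProjectReference Include="..\PackageA\PackageA.csproj" />
  </ItemGroup>

</Project>

Running dotnet pack against something like this produces the .nupkg directly; there’s no separate AssemblyInfo.cs, packages.config or .nuspec to keep in sync.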

Building NuGet packages

Let’s say you’re building Package A and Package B, and Package B requires Package A.  You make an API change in Package A and increment its version number.  Since you’ve not changed Package B, its latest version continues to reference the old version of Package A.

What if Package B is the primary package you want most users to use, and Package A is just one of several dependencies?  If you want users to have the latest code but not have to reference Package A directly, you need to increment Package B’s version number too.  With the old project system you also need to update Package B’s .nuspec file with Package A’s new version number.

Automated versioning

Because the new project system brings everything we need into the .csproj file, it makes automating the versioning process much easier.  Here’s something I’ve been using that works pretty well:

Given a root folder, it’ll recursively identify local changes that haven’t been committed to git yet (using the ngit package).  It then determines which projects reference projects that contain those changes.  At the end it’ll suggest new version numbers for all affected projects.  If the “writeChanges” flag is set, it’ll modify the project version numbers for you.

Because it’s comparing against the last commit, it’s safe to run multiple times or manually update version numbers if you prefer.
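Here’s a rough sketch of the core of that process. To be clear, this isn’t the actual tool: it shells out to the git CLI instead of using ngit, assumes the root folder is the repository root, and leaves the version-number rewriting out:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using System.Linq;
using System.Xml.Linq;

class VersionBumpSketch
{
    static void Main( string[] args )
    {
        var root = Path.GetFullPath( args.Length > 0 ? args[ 0 ] : "." );

        // Uncommitted changes: `git status --porcelain` emits one "XY path" per line.
        var dirtyFiles = RunGit( root, "status --porcelain" )
            .Select( line => Path.GetFullPath( Path.Combine( root, line.Substring( 3 ).Trim() ) ) )
            .ToList();

        var projects = Directory.EnumerateFiles( root, "*.csproj", SearchOption.AllDirectories )
            .Select( Path.GetFullPath )
            .ToList();

        // Projects whose folder contains an uncommitted change are directly affected.
        var affected = new HashSet<string>( projects.Where( project =>
            dirtyFiles.Any( file => file.StartsWith(
                Path.GetDirectoryName( project ) + Path.DirectorySeparatorChar ) ) ) );

        // Propagate through ProjectReference edges until nothing new is added.
        bool added;
        do
        {
            added = false;

            foreach ( var project in projects.Except( affected ).ToList() )
            {
                var references = XDocument.Load( project )
                    .Descendants( "ProjectReference" )
                    .Select( r => r.Attribute( "Include" )?.Value )
                    .Where( include => include != null )
                    .Select( include => Path.GetFullPath(
                        Path.Combine( Path.GetDirectoryName( project )!, include! ) ) );

                if ( references.Any( affected.Contains ) )
                {
                    affected.Add( project );
                    added = true;
                }
            }
        }
        while ( added );

        // The real tool suggests (or writes) new version numbers at this point.
        foreach ( var project in affected )
            Console.WriteLine( $"bump: {project}" );
    }

    static List<string> RunGit( string workingDirectory, string arguments )
    {
        var info = new ProcessStartInfo( "git", arguments )
        {
            WorkingDirectory = workingDirectory,
            RedirectStandardOutput = true,
            UseShellExecute = false
        };

        using var process = Process.Start( info )!;

        var output = process.StandardOutput.ReadToEnd();
        process.WaitForExit();

        return output.Split( '\n', StringSplitOptions.RemoveEmptyEntries )
            .Select( line => line.TrimEnd( '\r' ) )
            .ToList();
    }
}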

Resume Artist (iPad)

It’s time to plug a new app!  This piece of awesome lets you create resumes / CVs pretty easily, then tweak spacing, fonts, colors, etc.  It’s free with a basic template, or you can unlock the other templates for $4.99.

Resume Artist (iPad)

Of course it’s not as much fun as Saguru, and perhaps not as mind-bogglingly every-day-useful as Cashflows, but it’s extremely cool if you want to make resumes.

Visual Studio 2015 Update 1

Finally, Visual Studio 2015 [Update 1] remembers my “Keep tabs” setting!!!

This has caused me considerable pain and anguish; almost as unpleasant as Go’s arrogant (and wrong) opinion on squiggly-bracket placement.  Don’t believe me?  Go paste this code into https://golang.org (or move their opening brace to its artistically correct position on the next line):

// You can edit this code!
// Click here and start typing.
package main

import "fmt"

func main()
{
	fmt.Println("Hello, 世界")
}

This is sick.  Just putting that out there.

ImmutableInterlocked

The Immutable Collections package is awesome.  If you’re writing concurrent code, immutable collections should be your new best friends (along with immutable classes in general).  Explicit locking is bad, bad, bad, with an extra helping of bad (repeat this until it sticks).

A typical pattern you’ll see when modifying a collection looks like this:

myCollection = myCollection.Add( "Banana" );

However, if the “myCollection” above is a field you’re sharing between threads, you still need to protect it.  This is easy with the System.Threading.Interlocked helpers:

Interlocked.Exchange(
    ref myCollection,
    myCollection.Add( "Banana" ) );
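
One caveat: Exchange unconditionally publishes whatever we built, so if two threads race on the same field, one thread’s added item can be lost. A compare-and-swap loop closes that gap; here’s a sketch (it assumes myCollection is a reference-typed field, and newer versions of the package wrap this pattern as ImmutableInterlocked.Update):

while ( true )
{
    var original = Volatile.Read( ref myCollection );
    var updated = original.Add( "Banana" );

    // Publish only if nobody changed the field since we read it;
    // otherwise loop and try again.
    if ( ReferenceEquals(
        Interlocked.CompareExchange( ref myCollection, updated, original ),
        original ) )
    {
        break;
    }
}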

But what if you’re updating an ImmutableDictionary and need an atomic update, so it’ll only add an item if it doesn’t exist?  Here’s where the ImmutableInterlocked helpers come in:

private ImmutableDictionary<string, Fruit> myDictionary
    = ImmutableDictionary<string, Fruit>.Empty;
…
var fruit = ImmutableInterlocked.GetOrAdd(
    ref myDictionary,
    "banana",
    new Banana() );

Now things could get interesting.  You’ll notice on the line above we create a new Banana instance.  If “banana” already exists in the dictionary, the nice fresh Banana we created will just be discarded.  In many cases this isn’t a problem (maybe a slip hazard); it’s just a redundant object creation.

But what if it’s something we only want to create once, and only if it doesn’t exist?  ImmutableInterlocked has a GetOrAdd overload that takes a delegate:

var fruit = ImmutableInterlocked.GetOrAdd(
    ref myDictionary,
    "banana",
    _ => new Banana() );

It sure looks promising.  Presumably it only calls the delegate if the item isn’t in the dictionary?…  Nope!  Apparently it always calls the delegate, checks whether the item exists, and discards the result if it does (while the source code isn’t currently available, we can get a vague idea of how it might be implemented from this Unofficial port of Immutable Collections).
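
If you want to see this for yourself, a throwaway counter (purely illustrative, reusing the dictionary from above) makes the delegate invocations visible:

int factoryCalls = 0;

var fruit = ImmutableInterlocked.GetOrAdd(
    ref myDictionary,
    "banana",
    _ =>
    {
        factoryCalls++;   // counts how many times the factory actually runs
        return new Banana();
    } );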

So it seems we need another solution.  We really don’t want to explicitly lock anything (bad, bad, bad).  Turns out we can get this for “free” if we use Lazy<T>:

private ImmutableDictionary<string, Lazy<Fruit>> myDictionary
    = ImmutableDictionary<string, Lazy<Fruit>>.Empty;
…
var fruit = ImmutableInterlocked.GetOrAdd(
    ref myDictionary,
    "banana",
    new Lazy<Fruit>( () => new Banana(), true ) ).Value;

This ensures there’s a Lazy<Fruit> in the dictionary that knows how to create a Banana on demand.  Lazy<T> already takes care of ensuring only one thread can actually create the instance.  It does some internal locking of its own, but apparently it’s super efficient so we can happily ignore it and go on our way.
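
Incidentally, the true we passed to the Lazy<Fruit> constructor above is shorthand for the strictest thread-safety mode; written out in full:

// Equivalent to passing true: at most one thread ever runs the factory,
// and every thread observes the same published instance.
var lazyBanana = new Lazy<Fruit>(
    () => new Banana(),
    LazyThreadSafetyMode.ExecutionAndPublication );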

Hope this helps!

ImmutableArray in Objective-C

Following my previous post about implementing an AVL tree in Objective-C, here’s an ImmutableArray that makes use of it:

ImmutableArray gist

It’s very loosely based on .NET & Mono’s ImmutableList<T>, but follows conventions similar to NSMutableArray’s (the main difference being that most methods return a new ImmutableArray).

The main overhead compared to a regular NSMutableArray comes from the extra object allocations and deallocations it may need.  If you need to add many objects quickly, retaining a cache of “old” trees and then releasing it afterwards (perhaps asynchronously on a dispatch queue) will squeeze out a little more performance.  Here’s a simple example:

__block NSMutableArray *cache = [NSMutableArray new];
ImmutableArray *myList = [ImmutableArray empty];

for ( int n = 0; n < 10000; ++ n )
{
    myList = [myList addObject:@( n )];
    [cache addObject:myList];
}

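// Release every intermediate list in one go, off the main thread.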
dispatch_async(
    dispatch_get_global_queue( DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0 ),
    ^{ cache = nil; } );

Note this will use a little more memory (as old tree nodes will be retained by the cache).

Finally, here are some rough timings (adding 20000 NSNumbers):

AVL tree (mutable)
2014-01-18 00:24:09.075 AVLTree[99465:303] elapsed subtime = 27.864
2014-01-18 00:24:09.109 AVLTree[99465:303] elapsed subtime = 30.158
2014-01-18 00:24:09.140 AVLTree[99465:303] elapsed subtime = 29.922
2014-01-18 00:24:09.173 AVLTree[99465:303] elapsed subtime = 32.26
2014-01-18 00:24:09.209 AVLTree[99465:303] elapsed subtime = 35.307
2014-01-18 00:24:09.245 AVLTree[99465:303] elapsed subtime = 36.04
2014-01-18 00:24:09.282 AVLTree[99465:303] elapsed subtime = 34.085
2014-01-18 00:24:09.314 AVLTree[99465:303] elapsed subtime = 31.594
2014-01-18 00:24:09.349 AVLTree[99465:303] elapsed subtime = 32.881
2014-01-18 00:24:09.439 AVLTree[99465:303] elapsed time = 391.271

AVL node (immutable)
2014-01-18 00:24:09.540 AVLTree[99465:303] elapsed subtime = 100.848
2014-01-18 00:24:09.617 AVLTree[99465:303] elapsed subtime = 75.399
2014-01-18 00:24:09.694 AVLTree[99465:303] elapsed subtime = 76.56
2014-01-18 00:24:09.771 AVLTree[99465:303] elapsed subtime = 76.792
2014-01-18 00:24:09.857 AVLTree[99465:303] elapsed subtime = 84.679
2014-01-18 00:24:09.940 AVLTree[99465:303] elapsed subtime = 82.133
2014-01-18 00:24:10.021 AVLTree[99465:303] elapsed subtime = 81.026
2014-01-18 00:24:10.105 AVLTree[99465:303] elapsed subtime = 82.746
2014-01-18 00:24:10.197 AVLTree[99465:303] elapsed subtime = 91.938
2014-01-18 00:24:10.287 AVLTree[99465:303] elapsed time = 847.492

ImmutableArray
2014-01-18 00:24:10.363 AVLTree[99465:303] elapsed subtime = 64.775
2014-01-18 00:24:10.447 AVLTree[99465:303] elapsed subtime = 83.325
2014-01-18 00:24:10.526 AVLTree[99465:303] elapsed subtime = 77.767
2014-01-18 00:24:10.606 AVLTree[99465:303] elapsed subtime = 80.301
2014-01-18 00:24:10.718 AVLTree[99465:303] elapsed subtime = 100.795
2014-01-18 00:24:10.808 AVLTree[99465:303] elapsed subtime = 88.625
2014-01-18 00:24:10.897 AVLTree[99465:303] elapsed subtime = 88.9
2014-01-18 00:24:10.988 AVLTree[99465:303] elapsed subtime = 90.065
2014-01-18 00:24:11.076 AVLTree[99465:303] elapsed subtime = 87.954
2014-01-18 00:24:11.169 AVLTree[99465:303] elapsed time = 870.628

ImmutableArray with cache (deferred release)
2014-01-18 01:21:42.137 AVLTree[99599:303] elapsed subtime = 60.761
2014-01-18 01:21:42.210 AVLTree[99599:303] elapsed subtime = 71.828
2014-01-18 01:21:42.293 AVLTree[99599:303] elapsed subtime = 82.437
2014-01-18 01:21:42.379 AVLTree[99599:303] elapsed subtime = 84.563
2014-01-18 01:21:42.470 AVLTree[99599:303] elapsed subtime = 91.247
2014-01-18 01:21:42.557 AVLTree[99599:303] elapsed subtime = 85.93
2014-01-18 01:21:42.630 AVLTree[99599:303] elapsed subtime = 72.06
2014-01-18 01:21:42.706 AVLTree[99599:303] elapsed subtime = 76.108
2014-01-18 01:21:42.788 AVLTree[99599:303] elapsed subtime = 80.562
2014-01-18 01:21:42.873 AVLTree[99599:303] elapsed time = 796.389

NSMutableArray
2014-01-18 00:24:11.180 AVLTree[99465:303] elapsed subtime = 0.176013
2014-01-18 00:24:11.181 AVLTree[99465:303] elapsed subtime = 0.177979
2014-01-18 00:24:11.181 AVLTree[99465:303] elapsed subtime = 0.234008
2014-01-18 00:24:11.182 AVLTree[99465:303] elapsed subtime = 0.231981
2014-01-18 00:24:11.183 AVLTree[99465:303] elapsed subtime = 0.162005
2014-01-18 00:24:11.183 AVLTree[99465:303] elapsed subtime = 0.295997
2014-01-18 00:24:11.184 AVLTree[99465:303] elapsed subtime = 0.153005
2014-01-18 00:24:11.184 AVLTree[99465:303] elapsed subtime = 0.147998
2014-01-18 00:24:11.185 AVLTree[99465:303] elapsed subtime = 0.147998
2014-01-18 00:24:11.186 AVLTree[99465:303] elapsed time = 6.36405

NSMutableArray with locking (@synchronized)
2014-01-18 00:24:11.187 AVLTree[99465:303] elapsed subtime = 0.769973
2014-01-18 00:24:11.188 AVLTree[99465:303] elapsed subtime = 0.718951
2014-01-18 00:24:11.190 AVLTree[99465:303] elapsed subtime = 0.68599
2014-01-18 00:24:11.191 AVLTree[99465:303] elapsed subtime = 0.788987
2014-01-18 00:24:11.192 AVLTree[99465:303] elapsed subtime = 0.699997
2014-01-18 00:24:11.193 AVLTree[99465:303] elapsed subtime = 0.711024
2014-01-18 00:24:11.194 AVLTree[99465:303] elapsed subtime = 0.693977
2014-01-18 00:24:11.195 AVLTree[99465:303] elapsed subtime = 0.721037
2014-01-18 00:24:11.196 AVLTree[99465:303] elapsed subtime = 0.687957
2014-01-18 00:24:11.198 AVLTree[99465:303] elapsed time = 10.95

NSArray (arrayByAddingObject:)
2014-01-18 00:24:11.238 AVLTree[99465:303] elapsed subtime = 39.774
2014-01-18 00:24:11.366 AVLTree[99465:303] elapsed subtime = 127.631
2014-01-18 00:24:11.573 AVLTree[99465:303] elapsed subtime = 205.681
2014-01-18 00:24:11.862 AVLTree[99465:303] elapsed subtime = 288.225
2014-01-18 00:24:12.238 AVLTree[99465:303] elapsed subtime = 375.7
2014-01-18 00:24:12.685 AVLTree[99465:303] elapsed subtime = 446.18
2014-01-18 00:24:13.208 AVLTree[99465:303] elapsed subtime = 522.149
2014-01-18 00:24:13.844 AVLTree[99465:303] elapsed subtime = 635.206
2014-01-18 00:24:14.604 AVLTree[99465:303] elapsed subtime = 759.399
2014-01-18 00:24:15.589 AVLTree[99465:303] elapsed time = 4391.24

Immutable AVL Tree in Objective-C

If you’re looking for an Objective-C implementation of an immutable AVL tree, based on Mono’s implementation of AvlNode (System.Collections.Immutable namespace), you’ve come to the right place 🙂

You can find the code right here: AvlNode gist

Insert performance is considerably slower than an NSMutableArray, and about half the speed of a mutable AVL tree (due to the extra object allocations). However, because this implementation is immutable (any operation on the tree potentially creates a new root node, and doesn’t alter the base tree at all), trees can be shared freely between threads. This has the benefit that no locking / synchronization is needed to “modify” the collection.

Another interesting property is that you’re free to retain “old” trees, effectively keeping snapshots of past states.

Here are some timings for comparison (inserting 50000 NSNumbers). Note the increasing times of the regular immutable NSArray, using arrayByAddingObject:

AVL tree (mutable)
2014-01-17 21:44:40.003 AVLTree[98872:303] elapsed subtime = 33.4581
2014-01-17 21:44:40.045 AVLTree[98872:303] elapsed subtime = 37.948
2014-01-17 21:44:40.082 AVLTree[98872:303] elapsed subtime = 36.437
2014-01-17 21:44:40.123 AVLTree[98872:303] elapsed subtime = 40.668
2014-01-17 21:44:40.165 AVLTree[98872:303] elapsed subtime = 40.554
2014-01-17 21:44:40.209 AVLTree[98872:303] elapsed subtime = 43.584
2014-01-17 21:44:40.250 AVLTree[98872:303] elapsed subtime = 40.233
2014-01-17 21:44:40.284 AVLTree[98872:303] elapsed subtime = 33.158
2014-01-17 21:44:40.322 AVLTree[98872:303] elapsed subtime = 37.446
2014-01-17 21:44:40.361 AVLTree[98872:303] elapsed time = 391.786

AVL node (immutable)
2014-01-17 21:44:40.441 AVLTree[98872:303] elapsed subtime = 78.872
2014-01-17 21:44:40.536 AVLTree[98872:303] elapsed subtime = 93.391
2014-01-17 21:44:40.631 AVLTree[98872:303] elapsed subtime = 93.877
2014-01-17 21:44:40.727 AVLTree[98872:303] elapsed subtime = 95.449
2014-01-17 21:44:40.835 AVLTree[98872:303] elapsed subtime = 107.319
2014-01-17 21:44:40.943 AVLTree[98872:303] elapsed subtime = 106.377
2014-01-17 21:44:41.043 AVLTree[98872:303] elapsed subtime = 99.7339
2014-01-17 21:44:41.145 AVLTree[98872:303] elapsed subtime = 100.765
2014-01-17 21:44:41.285 AVLTree[98872:303] elapsed subtime = 140.152
2014-01-17 21:44:41.397 AVLTree[98872:303] elapsed time = 1034.59

Mutable array
2014-01-17 21:44:41.412 AVLTree[98872:303] elapsed subtime = 0.232995
2014-01-17 21:44:41.413 AVLTree[98872:303] elapsed subtime = 0.267982
2014-01-17 21:44:41.414 AVLTree[98872:303] elapsed subtime = 0.232995
2014-01-17 21:44:41.415 AVLTree[98872:303] elapsed subtime = 0.340998
2014-01-17 21:44:41.415 AVLTree[98872:303] elapsed subtime = 0.244975
2014-01-17 21:44:41.416 AVLTree[98872:303] elapsed subtime = 0.337005
2014-01-17 21:44:41.417 AVLTree[98872:303] elapsed subtime = 0.227988
2014-01-17 21:44:41.418 AVLTree[98872:303] elapsed subtime = 0.178993
2014-01-17 21:44:41.419 AVLTree[98872:303] elapsed subtime = 0.231981
2014-01-17 21:44:41.420 AVLTree[98872:303] elapsed time = 8.18902

Mutable array with locking
2014-01-17 21:44:41.422 AVLTree[98872:303] elapsed subtime = 0.946999
2014-01-17 21:44:41.423 AVLTree[98872:303] elapsed subtime = 0.873029
2014-01-17 21:44:41.425 AVLTree[98872:303] elapsed subtime = 0.846028
2014-01-17 21:44:41.426 AVLTree[98872:303] elapsed subtime = 0.813007
2014-01-17 21:44:41.427 AVLTree[98872:303] elapsed subtime = 0.771999
2014-01-17 21:44:41.428 AVLTree[98872:303] elapsed subtime = 0.835001
2014-01-17 21:44:41.430 AVLTree[98872:303] elapsed subtime = 0.808001
2014-01-17 21:44:41.431 AVLTree[98872:303] elapsed subtime = 0.815034
2014-01-17 21:44:41.432 AVLTree[98872:303] elapsed subtime = 0.89401
2014-01-17 21:44:41.434 AVLTree[98872:303] elapsed time = 13.129

Immutable array
2014-01-17 21:44:41.489 AVLTree[98872:303] elapsed subtime = 52.946
2014-01-17 21:44:41.638 AVLTree[98872:303] elapsed subtime = 147.815
2014-01-17 21:44:41.886 AVLTree[98872:303] elapsed subtime = 247.666
2014-01-17 21:44:42.201 AVLTree[98872:303] elapsed subtime = 313.846
2014-01-17 21:44:42.620 AVLTree[98872:303] elapsed subtime = 419.08
2014-01-17 21:44:43.122 AVLTree[98872:303] elapsed subtime = 500.611
2014-01-17 21:44:43.720 AVLTree[98872:303] elapsed subtime = 597.283
2014-01-17 21:44:44.413 AVLTree[98872:303] elapsed subtime = 692.916
2014-01-17 21:44:45.245 AVLTree[98872:303] elapsed subtime = 830.406
2014-01-17 21:44:46.305 AVLTree[98872:303] elapsed time = 4868.87

Live Video Face Masking on iOS

Face detection has been possible for some time on iOS thanks to libraries like OpenCV.  The CIDetector class introduced in iOS 5 made it a standard feature.  Since iOS 7 it can also detect smiles and eye blinks 🙂

With iOS 6, AV Foundation gained AVCaptureMetadataOutput, allowing face detection to be included in the capture pipeline (in iOS 7 it also supports barcode scanning).

Here’s how you could use that to perform face masking on live video:

Blockhead

First thing to do is get the capture session set up:

	AVCaptureSession *captureSession = [AVCaptureSession new];
 
	[captureSession beginConfiguration];
 
	NSError *error;
 
	// Input device
 
	AVCaptureDevice *captureDevice = [self frontOrDefaultCamera];
	AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:captureDevice error:&error];
 
	if ( [captureSession canAddInput:deviceInput] )
	{
		[captureSession addInput:deviceInput];
	}
 
	if ( [captureSession canSetSessionPreset:AVCaptureSessionPresetHigh] )
	{
		captureSession.sessionPreset = AVCaptureSessionPresetHigh;
	}
 
	// Video data output
 
	AVCaptureVideoDataOutput *videoDataOutput = [self createVideoDataOutput];
 
	if ( [captureSession canAddOutput:videoDataOutput] )
	{
		[captureSession addOutput:videoDataOutput];
 
		AVCaptureConnection *connection = videoDataOutput.connections[ 0 ];
 
		connection.videoOrientation = AVCaptureVideoOrientationPortrait;
	}
 
	// Metadata output
 
	AVCaptureMetadataOutput *metadataOutput = [self createMetadataOutput];
 
	if ( [captureSession canAddOutput:metadataOutput] )
	{
		[captureSession addOutput:metadataOutput];
 
		metadataOutput.metadataObjectTypes = [self metadataOutput:metadataOutput allowedObjectTypes:self.faceMetadataObjectTypes];
	}
 
	// Done
 
	[captureSession commitConfiguration];
 
	dispatch_async( _serialQueue,
				   ^{
					   [captureSession startRunning];
				   });
 
	_captureSession = captureSession;

All we’re doing here is creating an AVCaptureSession, adding an input device, adding an AVCaptureVideoDataOutput (so we can work with the frame buffer) and an AVCaptureMetadataOutput (to tell us about faces in the frame).

A few helper methods called during the setup:

- (AVCaptureMetadataOutput *)createMetadataOutput
{
	AVCaptureMetadataOutput *metadataOutput = [AVCaptureMetadataOutput new];
 
	[metadataOutput setMetadataObjectsDelegate:self queue:_serialQueue];
 
	return metadataOutput;
}
 
- (NSArray *)metadataOutput:(AVCaptureMetadataOutput *)metadataOutput
		 allowedObjectTypes:(NSArray *)objectTypes
{
	NSMutableSet *available = [NSMutableSet setWithArray:metadataOutput.availableMetadataObjectTypes];
 
	[available intersectSet:[NSSet setWithArray:objectTypes]];
 
	return [available allObjects];
}
 
- (NSArray *)faceMetadataObjectTypes
{
	return @[ AVMetadataObjectTypeFace ];
}
 
- (AVCaptureVideoDataOutput *)createVideoDataOutput
{
	AVCaptureVideoDataOutput *videoDataOutput = [AVCaptureVideoDataOutput new];
 
	[videoDataOutput setSampleBufferDelegate:self queue:_serialQueue];
 
	return videoDataOutput;
}

- (AVCaptureDevice *)frontOrDefaultCamera
{
	NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
 
	for ( AVCaptureDevice *device in devices )
	{
		if ( device.position == AVCaptureDevicePositionFront )
		{
			return device;
		}
	}
 
	return [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
}

We’ve told AVCaptureMetadataOutput and AVCaptureVideoDataOutput to call us back (when we set “self” as the delegate) and to do it on a shared serial dispatch queue called _serialQueue.  This is just to avoid any concurrency issues if the delegates would otherwise be called from separate threads.  Here’s how we handle new metadata:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection
{
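	// Delivered on _serialQueue; just keep the most recent face metadata
	// for the next video frame to use.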
	_facesMetadata = metadataObjects;
}

Once we get a video frame, we’ll make a CIImage, mask the face with a CIFilter, then render the frame to an OpenGL ES 2.0 context:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
	CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer( sampleBuffer );
 
	if ( pixelBuffer )
	{
		CFDictionaryRef attachments = CMCopyDictionaryOfAttachments( kCFAllocatorDefault, sampleBuffer, kCMAttachmentMode_ShouldPropagate );
		CIImage *ciImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:(__bridge NSDictionary *)attachments];
 
		if ( attachments ) CFRelease( attachments );
 
		CGRect extent = ciImage.extent;
 
		_filter.inputImage = ciImage;
		_filter.inputFacesMetadata = _facesMetadata;
 
		CIImage *output = _filter.outputImage;
 
		_filter.inputImage = nil;
		_filter.inputFacesMetadata = nil;
 
		dispatch_async( dispatch_get_main_queue(),
					   ^{
						   UIView *view = self.view;
						   CGRect bounds = view.bounds;
						   CGFloat scale = view.contentScaleFactor;
 
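						   // Centre-crop the source image horizontally so its
						   // aspect ratio matches the view (an aspect fill).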
						   CGFloat extentFitWidth = extent.size.height / ( bounds.size.height / bounds.size.width );
						   CGRect extentFit = CGRectMake( ( extent.size.width - extentFitWidth ) / 2, 0, extentFitWidth, extent.size.height );
 
						   CGRect scaledBounds = CGRectMake( bounds.origin.x * scale, bounds.origin.y * scale, bounds.size.width * scale, bounds.size.height * scale );
 
						   [_ciContext drawImage:output inRect:scaledBounds fromRect:extentFit];
 
						   [_eaglContext presentRenderbuffer:GL_RENDERBUFFER];
						   [(GLKView *)self.view display];
					   });
	}
}

The _filter is an instance of CJCAnonymousFacesFilter. It’s a simple CIFilter that creates a mask from the faces metadata and a pixellated version of the image, then blends the result into the original image:

	// Create a pixellated version of the image
	[self.anonymize setValue:inputImage forKey:kCIInputImageKey];
 
	CIImage *maskImage = self.maskImage;
	CIImage *outputImage = nil;
 
	if ( maskImage )
	{
		// Blend the pixellated image, mask and original image
		[self.blend setValue:_anonymize.outputImage forKey:kCIInputImageKey];
		[_blend setValue:inputImage forKey:kCIInputBackgroundImageKey];
		[_blend setValue:self.maskImage forKey:kCIInputMaskImageKey];
 
		outputImage = _blend.outputImage;
 
		[_blend setValue:nil forKey:kCIInputImageKey];
		[_blend setValue:nil forKey:kCIInputBackgroundImageKey];
		[_blend setValue:nil forKey:kCIInputMaskImageKey];
	}
	else
	{
		outputImage = _anonymize.outputImage;
	}
 
	[_anonymize setValue:nil forKey:kCIInputImageKey];

You can find all the relevant code as gists on github:

CJCViewController.h

CJCViewController.m

CJCAnonymousFacesFilter.h

CJCAnonymousFacesFilter.m