I recently started a new project called libanim (home page) which aims to provide support for animated file formats on iOS and, perhaps at a later stage, Android. The problem doesn’t seem to be limited to any one platform: support for animated file formats is usually limited to gif files, and in some places (e.g. iPhone/iOS) we don’t even have that. Although gif is still widely used, it has two major drawbacks: it only supports up to 256 colors and it lacks full alpha support (e.g. making a pixel 50% transparent instead of fully transparent or fully opaque).
There isn’t a lack of options, though. There’s APNG, MNG and a couple of others, but none seems to be taken seriously enough by browser vendors, much less by mobile platforms. Two Mozilla folks created APNG back in 2004, but since the PNG Group didn’t approve of it, APNG practically died there. The major problem is the lack of a default implementation in the group’s reference implementation, libpng, which is widely used by browsers and other applications. On a quick personal note, I find it amazing that people are all excited about SVG and Canvas, yet they seem to forget that not all images are vector in nature!
When you need to animate something on iOS you’re limited to a couple of options: draw the graphics by hand in code, or ship one image per frame of the animation. The UIImageView control even provides a convenient way to define an array of images and a time interval between frames. What’s wrong with that? Well, to start with, there isn’t such a convenient feature for OpenGL, and probably more troublesome is the fact that shipping 10 complete images for 10 frames has a huge payload (read: file size). Animated file formats like APNG provide better support for this type of usage: for example, if only a small part of the image changes during the animation, an APNG file can store just the part of the frame that changes. Additional features, including blending of frames and the possibility of a different delay time for each frame, make these formats better suited for deployment without ugly hacks. And if reduced file size and blending support aren’t enough, there’s always the simplicity of updating a single file in your app instead of 10.
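To make the partial-frame idea concrete, here is a small conceptual sketch (in plain Python for clarity, with no real APNG parsing; `apply_subframe` and the variable names are my own for illustration). Each frame after the first stores only the sub-rectangle that changed plus its (x, y) offset, which is exactly the kind of information APNG keeps per frame:

```python
# Conceptual sketch: frames are 2D lists of pixels. Instead of storing
# every full frame, store a small changed region plus where it goes.

def apply_subframe(canvas, region, x_off, y_off):
    """Paste a smaller 'region' (list of rows) onto 'canvas' in place."""
    for row_idx, row in enumerate(region):
        for col_idx, pixel in enumerate(row):
            canvas[y_off + row_idx][x_off + col_idx] = pixel
    return canvas

# A 4x4 base frame of zeros; the next frame changes only a 2x2 area.
base = [[0] * 4 for _ in range(4)]
patch = [[1, 1],
         [1, 1]]

# Copy the previous frame, then apply only the stored patch.
frame2 = apply_subframe([row[:] for row in base], patch, x_off=1, y_off=2)
# Storing 'patch' (4 pixels + an offset) is far cheaper than a full
# 16-pixel frame, and the saving grows with the image size.
```

The same reconstruction happens at decode time: the decoder keeps the previous frame around and patches it, so the reader pays nothing for the pixels that didn’t change.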
Partly for fun and partly out of need, I started libanim as an effort to bring support for those animated formats to ‘new places’. I picked APNG to start with, and after a bit of research I came across a patch to the official libpng which adds APNG support. With a bit of effort I was able to build libpng as I needed it, which for iOS means a static library compiled for ARM4/6 and another for i386. Using some terminal magic I was able to pretty much automate the process and even combine both versions into a single library that works with the simulator as well as with a real device. From there I was able to construct a working demo, as shown in the video below:
While it might not seem impressive, this is quite a bit of progress (in the right direction). The animation standing still on the left is a UIImageView loading the images (from CGImageRefs provided by libanim). The other animation, jumping up and down, is an OpenGL textured square. The code is based on Xcode’s default OpenGL template, but I extended it by making the default object textured and enabling transparency. When instructed, libanim opens the APNG file, decodes all the frames and feeds them to a ‘renderer’ function, which will provide you with an array of CGImageRef objects or the names of bound OpenGL textures. The code is written to enable easy extensions and custom renderer code, as well as the possibility of being cross-platform, by isolating the platform-dependent parts.
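The shape of that design can be sketched roughly like this (in Python for clarity; the real libanim API is C and all names here — `decode_animation`, the renderer functions, the stand-in return values — are hypothetical, not the library’s actual interface). The point is that the decoder is platform-neutral and only the renderer callback differs per platform:

```python
# Sketch of a decode-then-render pipeline with pluggable renderers.
# The decoder knows nothing about UIKit or OpenGL; it just hands each
# decoded frame to whatever renderer the caller supplied.

def decode_animation(frames, renderer):
    """Run every decoded frame through a platform-specific renderer."""
    return [renderer(frame) for frame in frames]

# Two interchangeable 'renderers' -- only this part is platform-dependent.
def cgimage_renderer(frame):
    return ("CGImageRef", frame)      # stand-in for a Core Graphics image

def gl_texture_renderer(frame):
    return ("GLuint", hash(frame))    # stand-in for a bound texture name

frames = ["frame0", "frame1", "frame2"]
images = decode_animation(frames, cgimage_renderer)     # for UIImageView
textures = decode_animation(frames, gl_texture_renderer)  # for OpenGL
```

Swapping the renderer is all it takes to target a different drawing API, which is what keeps the decoding core portable.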
The process still needs fine-tuning, of course, but so far things are working pretty well, as you can see. If you want to see/compare with the original image (the one at the top of the post) being animated, you’ll need a browser that actually supports APNG (try Firefox). Besides fine-tuning the code and double-checking memory management, there are a few important things missing: frame blending isn’t implemented, per-frame delay times aren’t passed back to the UI layer, and it probably needs better color space support (something that will take a lot of research for an imaging newbie). None of it seems too complicated, though, and I hope to keep improving the code.
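For the curious, the missing frame blending is well defined by the APNG format: each frame carries a blend operation, either “source” (the frame’s pixels simply replace what’s underneath) or “over” (standard alpha compositing onto the previous output). A per-pixel sketch of the “over” case, in Python with colors and alpha as floats in [0, 1] (`blend_over` is my own name for illustration):

```python
# Standard 'over' alpha compositing, as APNG's blend-over operation uses.
# Each pixel is an (r, g, b, a) tuple with components in [0, 1].

def blend_over(src, dst):
    """Composite the src pixel over the dst pixel."""
    sr, sg, sb, sa = src
    dr, dg, db, da = dst
    out_a = sa + da * (1.0 - sa)
    if out_a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)   # both fully transparent
    blend = lambda s, d: (s * sa + d * da * (1.0 - sa)) / out_a
    return (blend(sr, dr), blend(sg, dg), blend(sb, db), out_a)

# A 50%-transparent red pixel over an opaque white one:
print(blend_over((1.0, 0.0, 0.0, 0.5), (1.0, 1.0, 1.0, 1.0)))
# -> (1.0, 0.5, 0.5, 1.0), i.e. an opaque pink
```

This is also exactly the kind of effect gif can’t express, since it only has fully transparent or fully opaque pixels.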
The source code is available on GitHub, MIT licensed, and I’ve created a section on my site to write some documentation and post other information about the project. Feel free to check out the code, test it, improve it and, especially, contribute with new and better code!