Hungry Caterpillar
Film / Animation

Year: 2021
Medium: Motion Design / Animation
Role: Director / Animator
Screened: Local Film Events
Overview

A contemporary animated reimagining of Eric Carle's The Very Hungry Caterpillar — one of the most-read children's books in history — pushed through a completely unexpected visual lens. This isn't the flat, primary-coloured collage of the original. It's an abstract, surreal drift through consumption, transformation, and emergence, using the caterpillar's journey as a vehicle for something stranger and more unsettling.

The project started as a formal exercise: take a narrative so familiar it's practically pre-loaded into the viewer's memory, and see what happens when you remove every visual comfort. The source material's structure — eat, grow, transform, become — maps surprisingly well onto existential territory. The resulting film sits in the space between children's animation, experimental video art, and motion graphics.

The animation approach was entirely frame-by-frame — no puppet rigging, no automated tweening. Each scene was constructed and deconstructed by hand, with organic textures and shifting geometries replacing the crisp flat-colour style of the original. The soundtrack moves through pastoral, dissonant, and ambient registers to underscore the transformation arc.

The film screened at several local film events and short film programmes, and generated conversations about adaptation, nostalgia, and how visual language shapes meaning. It's the kind of project that only makes sense in retrospect — the concept sounds unusual on paper, but on screen the connection to the source material remains legible throughout.

AI Workflow

Style transfer as visual research

Before a single frame was animated, extensive visual language development happened through AI style-transfer experiments. Taking Eric Carle's original illustration style and running it through neural style transfer models — trained on abstract expressionism, glitch art, and organic texture libraries — produced a set of visual reference outputs that were impossible to generate through manual sketching at the same pace. These outputs didn't become frames in the film; they became a map of the visual territory.
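The original style-transfer pipeline isn't documented here, but the core statistic behind neural style transfer is well established: a layer's "style" is summarised by the Gram matrix of its feature maps, and two images are stylistically close when their Gram matrices are close. A minimal NumPy sketch of that idea (the feature maps here are random stand-ins, not actual network activations):

```python
import numpy as np

def gram_matrix(features: np.ndarray) -> np.ndarray:
    """Gram matrix of a feature map: channel-to-channel correlations.

    features: array of shape (channels, height, width), e.g. activations
    from one layer of a convolutional network.
    """
    c, h, w = features.shape
    flat = features.reshape(c, h * w)   # each row: one channel, flattened
    return flat @ flat.T / (h * w)      # (c, c) texture summary

def style_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Squared Frobenius distance between two Gram matrices: how far
    apart two images sit in 'texture space' at this layer."""
    return float(np.sum((gram_matrix(a) - gram_matrix(b)) ** 2))

# A feature map compared with itself has zero style distance.
rng = np.random.default_rng(0)
fmap = rng.standard_normal((8, 16, 16))
print(gram_matrix(fmap).shape)        # (8, 8)
print(style_distance(fmap, fmap))     # 0.0
```

In a full style-transfer setup this distance, summed over several network layers, is the loss that gets minimised; the sketch above only shows the statistic being compared.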

Certain textural qualities that appeared in style-transfer experiments — the way colour bleeds through paper-like surfaces, the particular quality of distressed organic form — were then manually recreated in Illustrator and After Effects, giving the film a handmade quality while remaining informed by AI-discovered aesthetics.

AI-assisted lip sync and motion smoothing

The film uses a single vocal narrator over its runtime. Rather than manually keyframing mouth shapes frame by frame, Runway ML's motion tools were used to generate lip sync data from the audio track, which was then cleaned up and applied to the animated character. This reduced what would have been several days of technical animation work to a few hours of supervised processing and refinement.
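Runway's internals aren't public, so the exact processing can't be reproduced here, but the simplest form of audio-driven lip sync can be sketched: slice the narration into one window per animation frame, take each window's RMS amplitude, and map it to a normalised mouth-openness value. This is a deliberately crude stand-in, not the tool's actual method:

```python
import math

def mouth_openness(samples, sample_rate=48000, fps=24):
    """Map an audio waveform to per-frame mouth-openness values in [0, 1].

    One window of audio per animation frame; RMS amplitude of the window,
    normalised against the loudest frame, drives how open the mouth is.
    """
    window = sample_rate // fps  # audio samples per animation frame
    frames = []
    for start in range(0, len(samples) - window + 1, window):
        chunk = samples[start:start + window]
        rms = math.sqrt(sum(s * s for s in chunk) / window)
        frames.append(rms)
    peak = max(frames) or 1.0
    return [f / peak for f in frames]

# One second of a 220 Hz tone that fades out: openness should decay too.
sr, fps = 48000, 24
tone = [math.sin(2 * math.pi * 220 * t / sr) * (1 - t / sr) for t in range(sr)]
values = mouth_openness(tone, sr, fps)
print(len(values))             # 24 frames for one second at 24 fps
print(values[0] > values[-1])  # True: fading audio closes the mouth
```

Real lip-sync tools go further, classifying phonemes into mouth shapes rather than just amplitude, which is why the generated data still needed manual clean-up before it read as speech.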

Motion smoothing tools also handled the in-betweening for several transition sequences — AI interpolating between keyframes with a fluidity that would have been prohibitively time-consuming to achieve through purely manual animation at the same resolution. The result retains a handmade quality because the AI is working between intentionally placed keys, not generating motion autonomously.
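The AI interpolation itself can't be reproduced here, but the division of labour it describes — animator places the keys, software fills the gaps — is the same one behind classical eased in-betweening. A minimal sketch with a hypothetical value_at helper and a standard smoothstep ease:

```python
def ease_in_out(t):
    # Smoothstep easing: zero velocity at both ends of the segment.
    return t * t * (3 - 2 * t)

def value_at(keys, frame):
    """Sample an eased curve defined by hand-placed (frame, value) keys.

    The animator sets the keys; interpolation only fills the gaps,
    which is why the motion still reads as intentional.
    """
    keys = sorted(keys)
    if frame <= keys[0][0]:
        return keys[0][1]
    if frame >= keys[-1][0]:
        return keys[-1][1]
    for (f0, v0), (f1, v1) in zip(keys, keys[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return v0 + (v1 - v0) * ease_in_out(t)

keys = [(0, 0.0), (12, 100.0), (24, 100.0)]
print(value_at(keys, 6))   # 50.0: halfway through an ease hits the midpoint
print(value_at(keys, 18))  # 100.0: held between two identical keys
```

ML-based in-betweening replaces the fixed easing curve with learned motion, but the keys remain authored — the interpolation never invents a pose that wasn't bracketed by the animator's own.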

Tech Stack

A traditional motion design pipeline with AI augmentation at key stages of the production.

After Effects, Adobe Illustrator, Frame.io, Adobe Audition, Runway ML, Photoshop