
Mixing Hand-Drawn and Baked Sprites

Updated: Apr 2, 2020

Jem in Action

Today I'd like to share some of how I'm animating and rendering Jem, the main character of Kinematic.

Awesome Concept of Jem by Kit Seaton

One of my favorite parts of building a custom engine is setting up rendering exactly how I want it. Rendering has been a specialty of mine since college, but it's been a while since I've gotten to set up a pipeline from scratch. I've optimized for two goals:

  • Making it quick and easy for a passably skilled, but far from professional-grade, artist (myself) to iterate. I'm not trying to make stunning art; I'm trying to make art that looks reasonably good and doesn't get in the way of the gameplay. And long ago I learned that the best way to make art look as good as possible is to make as many iterations as you feasibly can.

  • Making the implementation process interesting and fun. I want to be learning through the entire process, and keep a sense of play. Thus I'm willing to do things in a weird way, a creative way, a silly way, if it works for my game in the end.

As a result, I've wound up with an interesting mix of 2D and 3D techniques, and a mix of hand-drawn sprites and baked renders.

Visual Target

Some Inspirations

Kinematic is (like many other Metroidvanias) styled as an SNES-era pixelated game. I'm not going so far as to attempt to emulate any particular hardware limitations; I'm simply using that as an aesthetic idea. Looking at some of the great games in that aesthetic (Super Metroid, Metroid Fusion, Axiom Verge, Cave Story), I homed in on a few key visual goals:

  • Pixel-perfect sprites rendered precisely on a fullscreen grid.

  • Directional lighting-based shading.

  • Clean, crisp outlines in appropriate colors for the material being outlined.

  • Finite color palette.

From my own interests and experience, I added a few extra goals:

  • Dynamic coloring via lookup table (LUT) to make different areas of the game feel dramatically different.

  • Dynamic lighting direction; not baked into the sprites.


I created a 64-color palette based on advice from Les Forges' tutorial on Open Game Art. As a fun artistic challenge, I purposefully didn't add red to my palette.

As a hobby, I do wearable LED art for Burning Man, and I wanted to capture some of that LED feel in my game. To that end, each color ramp (except brown and gray) has 2 richly saturated colors at peak brightness. When modifying the colors via LUT for different environments, I intend to preserve those columns largely unchanged, to give them the sense of emitting light.

Having been exposed to a lot of cool ideas when Riot reworked Summoner's Rift's visuals in 2014, I'm very conscious of foreground/background separation. Thus each color ramp has 3 highly desaturated colors intended primarily for the background. I was surprised at just how much I needed to ramp down the contrast to make these actually fade into the background effectively.

3D to 2D Baking

Like many of my fellow indie devs, I read Thomas Vasseur's Gamasutra article about Motion Twin's method of creating sprites for Dead Cells. Particularly compelling to me was the focus on iteration speed. I feel exactly aligned when Thomas says "I did not have the skills nor previous experience to hand draw everything AND be quick enough doing it to be able to release the game before the next decade showed up."

My First Model

I fired up Blender, followed some tutorials online about basic modeling and animation, and wound up with a vaguely passable model of Jem. Early on, I realized that getting the head to look good was going to be... difficult. So I didn't invest heavily in the head render to start with. I mostly focused on just getting motion exported into sprite sheets, via a hand-written Python plugin for Blender.
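The post doesn't include the plugin itself, but the sprite-sheet packing step it performs can be sketched in plain Python. The names (`sheet_layout`, `columns`) and the grid-packing scheme are my assumptions, not the actual plugin; the real script would also drive Blender's renderer (e.g. stepping `bpy.context.scene.frame_set` and rendering each frame) before pasting frames at these offsets.

```python
# Hypothetical sketch: compute where each rendered animation frame lands
# in a sprite sheet packed left-to-right, top-to-bottom.

def sheet_layout(frame_count, frame_w, frame_h, columns):
    """Return (x, y) pixel offsets for each frame in the sheet."""
    offsets = []
    for i in range(frame_count):
        col = i % columns
        row = i // columns
        offsets.append((col * frame_w, row * frame_h))
    return offsets

# A 6-frame animation of 32x32 sprites, packed 4 per row:
print(sheet_layout(6, 32, 32, 4))
```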

That Jump, Though...

In this early stage, I was exporting raw sprites that would be directly rendered in game, but I knew I would need more data to support shading. I decided on 4 values per RGBA pixel:

  • R and G encode the X and Y of the surface normal, with 0-255 remapped to -1 to 1. I can then recover the Z of the normal by assuming it points towards the camera and solving Z = sqrt(1 - X^2 - Y^2).

  • B encodes the depth of the model. I have 0-255 remapped to roughly half a meter on either side of the origin of the scene. My goal for this depth isn't to be super accurate. I mainly want to accomplish 2 things: shading limbs behind the mid-line darker, and preserving the option of exploring 2D shadows for extremely dynamic lighting.

  • A encodes the "material", which is the U coordinate into a special palette texture that's created for this particular model.
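Putting those four channels together, decoding one exported pixel looks roughly like this. This is my own illustrative helper, not the engine's actual code; the exact depth range and material indexing are as described in the bullets above.

```python
import math

def decode_pixel(r, g, b, a):
    """Decode one exported RGBA pixel into shading inputs (illustrative)."""
    # R/G: surface normal X/Y, remapped from 0..255 to -1..1
    nx = r / 255.0 * 2.0 - 1.0
    ny = g / 255.0 * 2.0 - 1.0
    # Z is assumed to face the camera, so it's recoverable from X and Y
    nz = math.sqrt(max(0.0, 1.0 - nx * nx - ny * ny))
    # B: depth, 0..255 spanning roughly half a meter either side of the origin
    depth = b / 255.0 - 0.5
    # A: material index, used as the U coordinate into the model's palette texture
    material = a
    return (nx, ny, nz), depth, material
```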

Then, for each material present in the model, I create a column in that palette texture, so the "light intensity" can change the color.
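As a data structure, that palette is something like a small 2D table: one ramp per material, indexed by a quantized light intensity. The materials, colors, and three-step ramp below are all made up for illustration.

```python
# Hypothetical per-model palette: one ramp ("column") per material,
# indexed by quantized light intensity. All values are illustrative.
SKIN, HAIR = 0, 1
palette = [
    #  dark       mid        bright
    ["#5a3a2e", "#8a5a43", "#c08a6a"],  # SKIN ramp
    ["#2e2440", "#4a3a66", "#6f5a99"],  # HAIR ramp
]

def shade(material, intensity):
    """Pick a color from the material's ramp by light intensity in [0, 1]."""
    ramp = palette[material]
    row = min(int(intensity * len(ramp)), len(ramp) - 1)
    return ramp[row]
```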

Exported Image and Material Shading

Basic Shading

Mixing Hand-Drawn and Baked

As I mentioned above, I knew that rendering Jem's head in a satisfying way as she moved would be very challenging. If you think about great hand-animated sprites from other games, they almost always preserve things like the mouth, eyes, and nose as having the same shape, size, and position across animations. (Notably, Children of Morta doesn't do this, which is part of its unique style.)

Drawin' Hair

I decided to mix a hand-drawn head with the baked body animations. More specifically, I created 9 different 4-frame animations of Jem's head, to support animating her hair as she moves. Then, I saved off the X,Y positions of the head bone in Blender. I could then draw the head at the appropriate position, with her hair reacting to motion.
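At draw time, that boils down to offsetting the head sprite by the exported bone position for the current frame. This sketch is my assumption of the shape of that lookup; the bone-position values are invented.

```python
# Hypothetical: head-bone X,Y offsets exported from Blender, one per
# animation frame, in sprite-sheet pixel units (values invented).
head_bone_xy = {
    0: (16, 6), 1: (16, 5), 2: (15, 5), 3: (16, 6),
}

def head_draw_pos(body_pos, frame):
    """Where to draw the hand-drawn head, given the body sprite's position."""
    bx, by = body_pos
    hx, hy = head_bone_xy[frame % len(head_bone_xy)]
    return (bx + hx, by + hy)
```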

In order to create the same RGBA textures as the baked model, I wrote a short Aseprite script to replace each color in the hand-drawn image with a Z=1 normal, an arbitrary depth value at the front of where the head should be, and a material corresponding to the pixel color. Thus, Jem's head is kind of like a cardboard cutout taped to the near side of her body. But I'm not (currently) lighting the color of her skin, hair, eyes, or jewelry, so that works fine.
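The actual script is Lua inside Aseprite, but the per-pixel transformation it performs can be expressed in Python. The color-to-material table and the `HEAD_DEPTH` value here are invented for illustration; note that R=G=128 encodes a normal of approximately X=Y=0, i.e. Z=1 facing the camera.

```python
# Sketch of the color -> (normal, depth, material) replacement described
# above, applied to a flat list of RGBA pixels. Values are illustrative.
HEAD_DEPTH = 200  # arbitrary B value near the front of the depth range

color_to_material = {
    (244, 211, 190): 0,  # skin -> material 0 (hypothetical color)
    (80, 60, 120): 1,    # hair -> material 1 (hypothetical color)
}

def bake_flat_sprite(pixels):
    """Replace each opaque color with a Z=1 normal, fixed depth, and material."""
    out = []
    for (r, g, b, a) in pixels:
        if a == 0:
            out.append((0, 0, 0, 0))  # keep transparency
            continue
        material = color_to_material[(r, g, b)]
        out.append((128, 128, HEAD_DEPTH, material))
    return out
```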


During forward rendering of everything in the scene, I don't write out a final color. To support a LUT, I instead compute lighting for each pixel and use the palette texture to output the U,V coordinate within the 8x8 global palette. "Lighting" consists of extracting the surface normal and comparing it against a global light direction (diffuse lighting). Then I check adjacent pixels to see whether this pixel is on the edge of the model and should therefore be used as a border (using the diffuse value to decide between maximum and minimum lighting). Finally, if the depth is behind the center line, I drop the lighting value to make back limbs darker.
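The real pass lives in a shader, but the per-pixel intensity logic can be sketched like this. The edge threshold and the back-limb darkening factor are my placeholders, and the edge flag is assumed to come from the neighboring-pixel check described above.

```python
# Illustrative per-pixel lighting, mirroring the forward pass described
# above. Thresholds and the darkening factor are made up.

def light_pixel(normal, depth, light_dir, is_edge):
    """Compute a light-intensity value in [0, 1] for one pixel."""
    nx, ny, nz = normal
    lx, ly, lz = light_dir
    diffuse = max(0.0, nx * lx + ny * ly + nz * lz)  # diffuse lighting
    if is_edge:
        # Outline pixels snap to max or min lighting depending on facing
        return 1.0 if diffuse > 0.5 else 0.0
    if depth < 0.0:
        diffuse *= 0.6  # darken limbs behind the mid-line (factor invented)
    return diffuse
```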

I also write a "fog value" into B, allowing each environment to have atmospheric haze. This can be used for fading background images, but it can also be used for nearground fog.

Purple Polluted Lookup Table

Then the postprocess step takes that U,V coordinate and fog value, and converts them into the final color by blending the result from the LUT with the fog color, weighted by the fog value.
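That blend is a straightforward lerp. Again, this is a sketch of the step as described, not the engine's shader; the LUT here is just a nested list standing in for the 8x8 palette texture.

```python
# Illustrative postprocess: look up the LUT color at (u, v) and blend it
# toward the fog color by the per-pixel fog value.

def postprocess(uv, fog, lut, fog_color):
    """Blend lut[v][u] toward fog_color by fog in [0, 1]."""
    u, v = uv
    r, g, b = lut[v][u]
    fr, fg, fb = fog_color
    mix = lambda a, c: a + (c - a) * fog  # linear interpolation per channel
    return (mix(r, fr), mix(g, fg), mix(b, fb))
```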


There's a lot more for me to do in the rendering of Kinematic. And a lot more details I could dig into, if you're interested. As with the memory layout, I've made some weird decisions, and now I need to figure out what to do within that space. But that's the sort of challenge I love. And hopefully it makes the game feel unique.

Questions? Comments? Come join the discussion on Reddit.
