Cel-Shading Character Studio Progress (0.1.0 milestone available for beta testing)

*Note: I'm looking for beta testers for this. Beta testers will get free access and free updates for life. Preferred candidates are NPR artists with experience doing custom cel-shading.*

What is this?

An add-on/artistic system for Blender aiming to revolutionize the cel-shading/hand-drawn animation workflow. It’s partially based on a paper, which you can read about below. I’ve used the paper for conceptual reference; however, my implementation is entirely unique and separate from the authors’ methodology.

Currently, this has the “artistically directed edits” functionality of the paper. The 1.0.0 Public Release version will have a much larger array of functionality, including camera shape edits inspired by Austin Hardwicke, dynamic shadow boundary variation based on my own past work, and more.

Introduction / Abstracts

At Real-Time Live! SIGGRAPH 2021, Lohit Petikam, Ken Anjyo, and Taehyun Rhee introduced a new system for “stylized shading for 3D characters”, specifically cel-shading.

You can find details about their work here:

And their paper is here:


An excerpt from the abstract is as follows:

In our framework, artists build a “shading rig,” a collection of these edits, that allows artists to animate toon shading. Artists pre-animate the shading rig under changing lighting, to dynamically preserve artistic intent in a live application, without manual intervention. We show our method preserves continuous motion and shape interpolation, with fewer keyframes than previous work.

A more comprehensive excerpt can be found on the above website, and is as follows:

Toon shading often gives poor results on 3D characters. Rotoscoped shadows used in film aren’t real-time for games. Current toon shaded games use painted/textured/baked shadows that don’t change with lighting.

Shading Rig is a new workflow to animate artist-defined shading with lighting changes, and preserve art-direction in real-time toon shaded games.

We achieve this with a “rig” of shadow editing primitives designed based on fundamental artistic shading principles. These primitives can be animated to achieve highly stylised shading under dynamic lighting.



This system is leaps and bounds ahead of anything else in the 3D cel-shading world. Current normal-editing techniques can approximate the results of this work, but cannot duplicate it. I’ve been following the cel-shaded 3D character art community with great interest for years. I’ve exhaustively researched every available method, current, defunct, and prospective, that can be found on the Internet, and I believe this approach comes closer to a hand-drawn look than any other.


One huge advantage of this system is that it’s fully vectorized by default, which gives extremely sharp lines at any resolution:

*(screenshot)*

While it is currently possible to get vectorized shadow boundaries, it's quite difficult with most implementations. Options like ILM mapping, while effective, require intensely precise UV mapping for vectorized lines. This method requires no UV mapping whatsoever.

Austin Hardwicke’s facial blending technique


One huge improvement comes from using shape keys to create facial variations relative to camera angle. Austin Hardwicke has recently brought this technique to the forefront of the discussion with his work on the main character in the animated film Belle. You can see an example of his work here: https://twitter.com/chompotron/status/1481553948721180677

Where do I come in?


There is currently no publicly available implementation of Petikam et al.'s work. While their paper details their methodology and mathematical work, there’s no existing implementation of it in any 3D software. As a Blender user, I’m most interested in this as it relates to Blender.

I reached out to Petikam a few months ago asking about his plans for releasing a Blender implementation. He said he would maybe do it someday, but didn’t give much definite indication. A follow-up email/reply received no response. After a few months, I believe this project is officially dead, leaving it to community members to interpret the research and implement it themselves. I’ve decided to take on this challenge myself.

Technical Details

When my implementation is finished, it will most likely be bundled into a single, discrete script. Much of what needs to be done has to be scripted, and the parts that don’t strictly require it (the shader nodes) can also be generated with a script.



Roadmap for 0.2.0 +


:white_large_square: Needs implementation, not started
:orange_square: Needs implementation eventually, not for the current version
:red_square: Broken or bugged, will need to be fixed eventually
:purple_square: May be implemented, requires further development to say
:black_large_square: Could be implemented on community request (a purple item that doesn’t match the direction I’m taking)
:blue_square: Needs implementation, in progress
:white_check_mark: Needs implementation, complete

:orange_square: Can set character object
:orange_square: Character object is associated with e-frames
:orange_square: Can set key light
:orange_square: Multiple character objects with distinct edits
:orange_square: Key light associated with character
:orange_square: Unique key light per character
:orange_square: Evaluate object rotation in relation to e-frames
:orange_square: Save and preview empty rotation as part of e-frames, not just empty position
:purple_square: Save and preview empty scale?
:blue_square: E-frames are stored in custom properties with CRUD2 capability
Everything here is done except the U
:red_square: If the selection field is cleared manually, auto-select no longer works.

:white_check_mark: Non-active edits preview in Placement mode (so you can preview blending)

:white_check_mark: Fixed bug with deleting all e-frames
:white_check_mark: Can delete e-frames from just one edit

:black_large_square: More edits overall per object? This will require some dedicated performance testing, I’m currently seeing no impact with 8 edits and 1 object, but keeping track of 8 is already fairly taxing mentally.
:orange_square: Disable un-used edits on the backend
:orange_square: Disable un-used edits on the frontend
This should help with performance, if performance is affected eventually

:orange_square: Warning message when setting an e-frame to [0,0,0]

:orange_square: Edits are created without user intervention in the e-frame > edit > object pipeline

:orange_square: Depsgraph_update filtering to reduce overhead

Edits have parameters (I’m not sold on Petikam et al’s parameters, these are subject to change and addition):
:orange_square: Bend
:orange_square: Bulge
:orange_square: Normal smooth

Edits have blending types:
:orange_square: Intensity
:orange_square: Mask

:orange_square: Fix sharp corners on subtractive blending

:orange_square: Shader nodes are generated programmatically
:orange_square: Script can handle missing edits
:orange_square: Shader nodes can handle missing edits

Handling the hard locks on the edits
:black_large_square: Edits are locked to prevent deletion?
:black_large_square: Un-used edits are “non-existent” or discreetly invisible, giving the appearance of dynamic generation?
:purple_square: Add button to “populate” object with edits, spawning the 8 invisible edits?

:purple_square: Create collection for all edits? Or one collection per object with edits? Either way, the collection doesn’t need to be selectable, so that can be removed from the UI?
:purple_square: If going that route, select object > auto select collection?
:purple_square: Dynamic edit creation?

UI tasks
:orange_square: Operator descriptions
:orange_square: Prop descriptions

:white_check_mark: Multiple panels
:white_check_mark: Re-named “Realtime Preview” to “Preview All”, as Placement mode now previews all except the active edit

Code improvements
:orange_square: Currently a lot of validation is being done with object type checking, which is bad; anything keyed off the object type being “EMPTY” needs to use a custom property instead
:orange_square: It would be better for data to be stored on the scene, not the edit objects
:white_large_square: Custom properties are locked for internal use only


  1. E-frame refers to a relationship between a light position and a shading edit position. Petikam et al. use “keyframe” to refer to this; I’ve replaced the term for clarity in my own work.

  2. CRUD stands for Create, Read, Update, Delete

  3. Edit refers to the visual shape of a “shading edit”, per Petikam et al.'s specification





This doesn’t quite fit in #artwork:works-in-progress , but it doesn’t quite fit anywhere else either, since there’s not a scripting WIP sub-forum, and this is technically an artwork in progress.


I’m going to keep updates to a minimum to avoid making this thread exhaustive. Most of my updates will be changes to the status of roadmap items, which will be done as edits to the post above. (Nevermind, these posts were too long and exhaustive to read.)


I made this thread to keep myself accountable by having a publicly visible roadmap for this project.


Super Cool Progress, looking forward to see the Next Steps, keep up the Great work :slight_smile:


Progress towards 0.1.0

Progress Report, July 4th, 2022

While not yet functional, the edits are super fun to play with.

More details

We can remove these e-frames and add new ones, toggling between Placement and Realtime Preview:

With adding the directional parameter comes the need for adjustable strength and blending strength:

Progress Report, July 5th, 2022

Shader nodes organization, up to 8 edits
I've heavily refactored the shading side of this to be more organized - the front end for the user will now look like this:

In doing so, I've also added the ability for each edit to have an individual blend strength. As this renders the Blend Strength slider pointless, I've removed it. In the future, that slider will be re-implemented and correspond to the active edit.

Added object/UV coordinate switch.

Multi-edit eframe correspondence progress

After trying a few different things today and yesterday (all of them failing), I’ve come up with a new approach to the multiple-edit challenge. Rather than storing multiple arrays, one per edit, I’ve decided to keep the system I’m already using, which has just one large array for all frames. To distinguish edits correctly, I’m storing the active edit in an array whenever an e-frame is added. This gives me a system where a[0] = b[0] = c[0] across the three fundamental arrays.

It makes sense to me in my head, I’m not good at explaining technical things, maybe this diagram will help:

The immediate question is what happens when you have something like this (I’ve simplified to a 1D vector instead of 3D for ease of reading):

```
empty_pos_array = [1, 3, 4, 2]
light_rot_array = [45, 70, 90, 60]
eframe_to_edit  = [A, B, A, B]
```

On paper, it’s obvious: edit A should lerp between indices [0] and [2], and edit B should lerp between [1] and [3]. That may prove difficult to implement, however, so I plan on avoiding complicated index-ordering issues altogether by evaluating each edit discretely. That is to say, the execution of the above would go as follows:

  • All values that correspond to edit A in both arrays are pulled into a temporary working array.
  • Lerping happens, as already implemented, with this array only.
  • This array is cleared.
  • This is repeated for Edit B.
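The steps above can be sketched in Python. This is only an illustrative sketch, not the actual add-on code: the helper name `filter_eframes` is hypothetical, the array names follow this post, and values are simplified to 1D.

```python
# Hypothetical sketch of the per-edit e-frame filtering described above.
def filter_eframes(empty_pos_array, light_rot_array, eframe_to_edit, edit_id):
    """Pull the e-frames belonging to one edit into temporary working
    arrays, preserving their original order."""
    pos = [p for p, e in zip(empty_pos_array, eframe_to_edit) if e == edit_id]
    rot = [r for r, e in zip(light_rot_array, eframe_to_edit) if e == edit_id]
    return pos, rot

empty_pos_array = [1, 3, 4, 2]
light_rot_array = [45, 70, 90, 60]
eframe_to_edit = ["A", "B", "A", "B"]

# Edit A lerps over positions [1, 4] with rotations [45, 90];
# edit B over positions [3, 2] with rotations [70, 60].
working_pos, working_rot = filter_eframes(
    empty_pos_array, light_rot_array, eframe_to_edit, "A")
```

Because the two working arrays are filtered by the same mask, the a[0] = b[0] invariant survives the filtering, which is what makes the per-edit lerp safe.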

This should work, at least theoretically. The more I type it out, the more sense it makes, which is the point of this thread anyway.

This is actually working well so far: I can filter out “working arrays” that have the correct values. I’m not doing anything with them yet, but they exist. One downside is that since the g.eframe_to_edit array isn’t being saved, I have to clear all the e-frames whenever I restart the file or Blender, to prevent array length errors.

Summary: I’ve found a way to make it work and I’m slowly implementing it.

Edits are associated, preview works on a one-by-one basis

Currently, you can set e-frames for a specific edit, and that specific edit will preview only those e-frames. Whichever edit is currently “active” is the one that previews. This isn’t ideal, but it’s huge multi-edit progress. I have 70% of the multi-edit functionality now, I just need to preview all of them at once.

However, this has re-introduced a lack of smoothness I don’t like. I’m assuming it’s a weight issue, so there are a few things I can try to troubleshoot it.

Progress Report, July 6th, 2022

Fixed the symmetry issue, better blending, and a Mask parameter to allow precise control over the symmetry issue. I’ve also realized the Edits need Damped Track constraints to work properly, which has made a huge difference.

Added individual scale parameter to edits.

Fixed the (extremely annoying) jumping to origin bug. Added re-name functionality for edits. Significantly cleaned up code.

Progress Report, July 7th, 2022

Added a framework for dynamic edit creation. While it currently just consolidates and simplifies things, it opens up a huge realm of possibilities for better edit management.

Edit management will be the focus of release 0.2.0. The first public release will be 1.0.0; everything before that will be limited to beta testers.

Added warning messages where appropriate.

Added Rotate parameter. Added the Stretch parameter, which can either elongate or compress an edit. This is useful both for large swathes of shadow and extremely fine details- you can, in fact, create inner lines now with this method:

*(screenshots)*

Mask parameter is individual now

July 8th

I got a lot done last night and earlier this morning before work. There are three major updates:

Individual Influence

Significantly better UI

I’ve divided the parameters into Basic and Advanced, as well as Universal and Individual. You can filter those with checkboxes to clean up the UI as needed:

*(screenshot)*

Added Pinch parameter

At first glance, this is similar to Stretch, but it actually functions extremely differently:

*(three screenshots)*

On the left: Stretch at 0.2. In the middle: Pinch at 0.2. On the right: both parameters at 0.2. Pinch blends the edges into existing edits, while Stretch just elongates them. A combination of both works nicely for fine line work.

Long story short, everything except “all eframes preview” is ready for 0.1.0

(I’ve split up the progress notes as they were getting really long and unreadable, hopefully this helps)


JULY 9th

Fixed a ton of bugs. (The Pinch parameter was throwing errors on 60% of the edits before now.) Replaced all remaining drivers with programmatically set values. Packaged into an “addon” and fixed all the bugs that come with that.

I’m still trying to wrap my mind around the “all edits preview” problem, I’ve lost track of how many potential solutions I’ve tried at this point.

I’ve done some stress testing to make sure this stays “real-time”. So far, with playback at 60 FPS, I haven’t dropped below 58 FPS while actively playing back and moving edits at the same time:

None of the parameters cause drops either. If this is 99.2% real-time on my not-very-good computer (GTX 1660 Super, Ryzen 5 5600X, 16 GB RAM), I’m not worried at all about performance impact.

Evening: I’m finally making progress on the blasted “all edits preview” problem; I think I’ve actually got it this time. I’d been stuck for the last three hours, but I just changed one line of code that I’d made a mistake on and… everything works now. Happens every time. Would love it if it had happened three hours ago instead :sweat_smile:


July 10th-11th

After dozens of hours (if not more) of effort, I’ve finally had a huge breakthrough on my own personal archenemy, “all edits preview”!
*(screenshot: weighted distance multiplier values)*
If this doesn’t look like a breakthrough, that is completely understandable. All edits don’t actually preview yet, but I’m finally confident that I know how to make them do so.

I’m going to take a moment to write about this, specifically to solidify it in my own head.

Those numbers up there are weighted distance multipliers, which are crucial to the relationship between light rotation and empty position. Specifically, they are the inverted distances from the current light rotation to the stored light rotation points, normalized to a sum of 100. Any light rotation, with any number or combination of stored light rotation values, will always return a set of numbers that adds up to 100. This set can then be multiplied by the empty position points to determine the weight each empty position point has on the final position.

In shorter terms, if there are a, b, c… stored light rotation points and a, b, c… stored empty position points, the final position of the edit in question is a weighted average (dividing by 100 because the multipliers sum to 100):
((WDM a * EPP a) + (WDM b * EPP b) + (WDM c * EPP c)…) / 100
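As a sanity check, here is a minimal 1D sketch of that inverse-distance weighting. The function name is hypothetical, and I normalize the weights to sum to 1 rather than 100, which only changes the final divisor:

```python
# Hypothetical sketch of the weighted-distance-multiplier blend described
# above, in 1D. Weights are normalized to sum to 1 instead of 100.
def weighted_position(light_rot, stored_rots, stored_positions, eps=1e-6):
    """Blend stored empty positions by inverted distance from the current
    light rotation to each stored light rotation point."""
    inv = [1.0 / (abs(light_rot - r) + eps) for r in stored_rots]
    total = sum(inv)
    weights = [w / total for w in inv]
    return sum(w * p for w, p in zip(weights, stored_positions))

# Halfway between two stored rotations, the result is the plain average:
weighted_position(67.5, [45, 90], [1.0, 4.0])  # → 2.5
```

The `eps` term is a small guard so that sitting exactly on a stored rotation doesn’t divide by zero; at that point the matching stored position dominates the blend almost entirely.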

Up until now, this has worked flawlessly with one edit, but it’s taken close to two weeks to get it working with all edits. Finally, I’ve accomplished that today, which means that all that’s left is the simple average function.

Unless something unexpectedly breaks, this means 0.1.0 will release tomorrow- beta access only.

If you’re interested in being a beta tester, let me know :slight_smile:


Minimal Viable Implementation - 0.1.0 - Available to Beta Testers

Successfully reached the 0.1.0 milestone on July 12th.

All core functionality is implemented.

This release includes:
:white_check_mark: Empty position corresponds to light position using custom e-frames¹
:white_check_mark: Edits³ correspond to e-frames visually
:white_check_mark: Multiple edits per object
:white_check_mark: Selectable edits
:white_check_mark: Auto-select
:white_check_mark: Eframes are associated with edits

:white_check_mark: Selected eframe previews
:white_check_mark: All eframes preview

:white_check_mark: Warning message when re-naming an edit to an existing name
:white_check_mark: Edits are renamable

:white_check_mark: Edits can blend with existing shading
:white_check_mark: Edits can blend with each other
:white_check_mark: Edits have individual blending

:white_check_mark: Fix symmetry issue by filtering generated coordinates
:white_check_mark: Allow for UV coordinates to be used instead

Edits can influence shading both ways:
:white_check_mark: Additive (light)
:white_check_mark: Subtractive (shadow)
:white_check_mark: Individual influence

Edit parameters:
:white_check_mark: Stretch
:white_check_mark: Sharpness
:white_check_mark: Scale
:white_check_mark: Edits have surface deformation
:white_check_mark: Pinch
:white_check_mark: Rotation

:white_check_mark: Script can handle name changes
:white_check_mark: Shader nodes can handle name changes

:white_check_mark: Show individual parameters only on active
:white_check_mark: Make universal parameters hideable


0.1.1 - Minimal Viable Implementation with Improvements

Open to beta testers only

  • Can clear e-frames from individual edits OR all edits
  • Multiple UI panels (much cleaner!)
    *(screenshot)*
  • Updated Placement and Preview modes: you can preview all edits except the active one in Placement mode. This lets you see how your current placement blends with existing edits. Preview is now Preview All to reflect this.
  • E-Frames are saved and loaded- closing a file and re-opening it preserves e-frames
  • Fixed 3 bugs
  • Various UI improvements

@joseph - As soon as I saw “Cel-Shading”, I knew that your name would appear.

Not my cup-of-tea, but… best of luck with this project!



Funny thing is, it’s not necessarily my favorite artistic style either. It has to be really well done for me to be on board; I don’t really care for the anime style generally, and my preferred example here is Avatar: The Last Airbender.

Mainly I thought the technical challenge of implementing something no one has ever done in Blender would be a great Python project for myself :slight_smile:

Thanks for your support friend!

Reminds me of this:


Harvest Records, Founded 1969


July 16

I’m still making progress on this, it’s tricky because I don’t really have anything to show for it. As previously stated, 0.2.0 is focused on edit management. This means:

  • Edits on multiple objects
  • Lights bound to objects
  • E-frame filtering expanded an additional level to objects
  • All e-frames preview on all objects

All of these tasks are far more complex than they sound.

I’ve got light binding set up:
*(screenshot)*

As far as everything else goes: I’ve made a ton of progress on it, but it’s not visibly functional yet. It’s probably going to be a couple of weeks, honestly; this many moving parts makes the update loop horrifyingly complex.

I’ve been focused heavily on performance, and I’m proud to report that after a lot of blood, sweat, and tears, I’ve added all this complexity without affecting performance at all.

The sorting function that breaks e-frames down by objects and edits while maintaining the correct order (any change in order would associate a random empty_pos with a random light_rot and a random empty, so order is essential) is not kind to performance. Fortunately, it only needs to be called when you add or delete an e-frame, so at worst you’ll see a very short lag spike during those operations, and then it’s back to light and breezy 60 FPS.
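A hedged sketch of that grouping step, with hypothetical names (the real function also has to deal with Blender data types, which is where the cost comes from):

```python
from collections import defaultdict

# Hypothetical sketch: break flat parallel e-frame arrays down by
# (object, edit) owner while preserving insertion order, so index i in
# each per-owner pair of arrays still refers to the same e-frame.
def group_eframes(empty_positions, light_rots, owners):
    groups = defaultdict(lambda: ([], []))
    for pos, rot, owner in zip(empty_positions, light_rots, owners):
        groups[owner][0].append(pos)
        groups[owner][1].append(rot)
    return dict(groups)

group_eframes([1, 4, 2], [45, 90, 60],
              [("Char", "A"), ("Char", "A"), ("Char", "B")])
# → {('Char', 'A'): ([1, 4], [45, 90]), ('Char', 'B'): ([2], [60])}
```

Appending in a single forward pass is what preserves the ordering invariant; since it only runs on e-frame add/delete, the per-call cost stays off the frame-by-frame update path.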

I can’t get higher than 60 FPS even in an empty file with my current GPU- but my new computer with a 3070 will be finished next week :smiley:


Awesome work so far. Keep it up!


Thank you :slight_smile: I’m really struggling to figure out how to move forward past some obstacles. It seems like I’m going to have to sacrifice some functionality to gain other features, so I’m trying to prioritize. I’ll have more updates soon though!


Well, good luck with everything! I hope it works out well.


Long overdue update:

I’m struggling to know what to do, and input would be greatly appreciated. The author of the paper this is based on has moved forward and made a usable Blender add-on based on that paper. However, my work is significantly different, the same only in theory, and… I hate to even say this, but my version is more fully featured. Management-wise, not so much, but in terms of actual functionality, I’d rather use mine, even as incomplete and buggy as it is.

Would it be ethical to keep working on this under these circumstances? The paper’s author has sent me a copy of the add-on, which hasn’t been publicly released, and I’ve used it briefly to test it out. I wouldn’t use any of its code myself. I haven’t even looked at the code, and I’m not going to; I know for sure that would be wrong.


Unless there’s a patent, you can freely use publicly funded and published research papers to develop a product.

The paper, not the add-on. But as long as you don’t use any code from the add-on, the fact that you had it shouldn’t be a problem. Especially not if you’re just using this for yourself. But even if you wanted to share your add-on, that should be fine. Including if you were to sell it. The add-on you got sent is theirs, the paper can be used by everyone, and any work of yours that’s based just on the paper is yours.

I guess since the author sent you the add-on I would wonder how they feel if you put out a competing product, and I might try and forestall bad feelings by communicating with them. But I am not a lawyer, and have no idea whether that would put you in a bad position in case they and you end up as commercial competitors; I’m solely thinking of it from the ethical point of view of how I like to interact with people. It’d probably also depend on how that interaction came about – might there be a possibility of collaboration instead? That sort of thing.


Appreciate all the great work; I hope you didn’t abandon this project.
It looks like a complex topic; even the original author seems to struggle with it.