Boon & Pimento: Cartoon series made with Blender

Amazing work, man! It’s impressive. Also, one question: is it possible for you to share the Blender theme? It’s beautiful :stuck_out_tongue:

1 Like

Hey ! Thanks a lot @SephirothTBM !
Here is the theme, it gives an extra touch of fun to Blender, even if sometimes I switch back to the default theme, which is also very polished but a bit too gray :slight_smile:
minimalistic_blue.xml (42.6 KB)

It’s not mine, and I’m not sure where I found it; probably in this thread:

Thanks man

Great work. Do you do freelance work? What is your contact info?

Hey thanks !
I do freelance work, but I probably won’t be available in the next months. You can contact me through this forum, or through Vimeo:
Don’t hesitate to contact me even if I’m not available; I may know someone who is :slight_smile:

Hello @sozap, how did you manage to make the interior lighting so good? Was it Eevee irradiance volume with very high resolution, or perhaps some sort of lightmap rendered in Cycles?

Hello !!

We had a light rig per set, so the set was already lit with a template.
I thought about using irradiance volumes, but that would have been too complicated to manage.
There is an HDRI with low intensity to provide bounce light,
and also regular lights: most of the time spotlights, to control each part of the set individually.
We tried to find a good balance with these light rigs, so once loaded we generally had a good basis.

On some sets we had different light rigs to choose from, like a day and a night version.
These could then be loaded into shots with a custom tool.

Here is a set with a template light rig:

and the lights for that set:
three main spots with very soft shadows to provide fill, plus more specific lights.

From there we did a lot of per-shot adjustments to these lights to go further,
and also added / removed some lights until we were happy with the result.
We also lit the characters individually most of the time, to have some rim lighting on them or make them pop a bit more.
We had some tools to easily append the lights from one shot to another.

Another thing we used a lot was colored gradients; you can see some of them in the FX breakdown around the 3:50 mark.

That way, let’s say the lighting is great but we want to add some separation between the set and the character: we can add a dark gradient on the back so the set is a bit darker.
Same if an area is overly lit and draws too much attention: we dim that part using a gradient to give more focus to what’s important.
We could also use them to make soft flares, or reinforce the fog in some episodes. These gradients helped a lot to push the renders a bit more.

So yeah, it was a lot of manual work rather than an automated technique.
Broad spots (fill lights) with very soft shadows can give a bit of a GI look, even if you don’t have colored light bounces.

We really wanted to play with light on the show. That’s something we worked a lot on in each episode.
And thanks to Eevee, that was possible in a manageable time frame and was very fun to do.

It would have been possible to make a safer choice, like something that works in many shots without requiring too much manual work, but that would have led to more uniform, boring lighting.

Hope that helps to bring some light into the process :smiley:
Feel free to ask if you want more details !


Thank you for the detailed explanation. I’ll try to follow some of those techniques. That’s more complicated than I expected, though I guess things like individual lighting per character are unavoidable in any serious production.


Yes, you always have to do some manual work, but there are different options.

On this show (26 × 5 min episodes) we used another approach: the lighting was way simpler, but it worked in a much more automated way.
Because we had outdoor lighting most of the time, we did a mega-cheat:
we set up a light rig using sun lamps that makes sure every side is well lit. You can see that the lighting is quite uniform; nothing is really in shadow. For instance, you can have a warm light on the left, a cold one on the right, a rim light behind, etc.

Then, for each shot, we oriented that light rig according to the camera position, so the characters are always lit the same way: there is always a warm light on the left, a rim behind, etc.

When there was a camera move, we averaged the camera position, aligned the lights according to that, and it worked fine.
On a few shots we had to do manual adjustments, but most of the time it worked well right off the bat.
At first, we thought it could be a bit disturbing to have the same light direction for every camera angle, but it ended up working fine. The light being quite uniform helps a lot.
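The math behind that trick is simple enough to sketch in plain Python. This is only an illustration, not the production tool: the sample positions are made up, and the aim-the-rig-around-world-Z convention is an assumption (in Blender this would sample the camera object over the shot’s frame range and rotate the rig empty accordingly):

```python
import math

def average_position(samples):
    """Mean of camera positions sampled across the shot (e.g. start/middle/end)."""
    n = len(samples)
    return tuple(sum(axis) / n for axis in zip(*samples))

def rig_yaw(camera_pos, target=(0.0, 0.0, 0.0)):
    """Z rotation pointing the sun-lamp rig from the target toward the camera,
    so the 'warm left / cool right / rim behind' layout follows the viewpoint."""
    dx = camera_pos[0] - target[0]
    dy = camera_pos[1] - target[1]
    return math.atan2(dy, dx)

# Hypothetical camera move: the camera pans along Y during the shot.
cam_samples = [(10.0, 0.0, 2.0), (10.0, 4.0, 2.0), (10.0, 8.0, 2.0)]
avg = average_position(cam_samples)
print(avg)                              # (10.0, 4.0, 2.0)
print(math.degrees(rig_yaw(avg)))      # ~21.8 degrees
```

Orienting the whole rig by one angle keeps the relative layout of the lamps intact, which is why the look stays consistent from shot to shot.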

Now back on B&P, what could have helped us a lot is more tools to work in sequence.
That can be done in very different ways. One thing I built but never used in production yet was, for every shot, to take the camera and the posing of the characters (at, say, the start, middle, and end frames) and build a single scene out of it.
Then adjust the lights until you get a lighting that works for every camera angle and character position.
Then copy that back to the actual shots and do some manual tweaks from there.

Because we did a lot of back and forth with the shots. And sometimes a spot is on a character, you copy it to another shot, and because the character has moved a bit, the spot isn’t working in that second shot.
If we had access to both posings/cameras at the same time, we could probably have placed a broader spot that lit both shots correctly. That would have solved many continuity breaks too.
Even if, in that case, we allowed ourselves some subtle but effective continuity breaks from time to time.

In the end, the crazier / more precise you want to go with the light, the more unique it will get for every shot.
But it’s possible to go for safer lighting that works well from shot to shot. That generally means something more uniform, with fewer shadows and less contrast; then you can rely more on compositing and grading to push the final look. Not sure it’s the best approach, but it’s worth thinking about!

Good luck !


Indeed, the Pose Mode system can be infuriating sometimes. One saving grace is that while we cannot keyframe an Armature and an Object at the same time, we can do that for multiple selected Armatures in Pose Mode. So, I guess, making a “camera rig” armature and parenting the camera and lights to its bones could help streamline the process.


Ha yes ! Interesting !
Maybe it’s possible to key everything (objects and rigs) using the Dope Sheet; it’s worth testing!
Probably not super useful for animation, but to clean up the scene and get a keyframe on every asset, that could do!

1 Like

I’ve got another question: what did you do to optimize scenes for maximum performance (during animation playback, object and armature interaction, and ideally with Eevee viewport rendering enabled)? My more ambitious scenes quickly turn sluggish to manage.

So, broad subject !

We did a lot of little things that helped to get a responsive viewport. With Eevee we didn’t manage to reach realtime; it was always around 12 fps, but in Workbench it was possible to reach ~20-25 fps for animation, and sometimes even more. I’ve tried shots with current Blender (3.3 Alpha), and they seem faster now (~25 fps) even without GPU subdivision, which in my case didn’t make a big difference (I should look into that).

1/ We profiled the main characters with a custom script that takes each object and records the time needed to apply its modifiers. That allowed us to find unexpected slowdowns on simple objects and make the rigs faster.
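The production script itself isn’t shown in the thread, but the timing pattern is easy to sketch in plain Python. The object names and workloads below are made-up stand-ins; in Blender the profiled callable would be the per-object modifier/depsgraph evaluation:

```python
import time

def profile(fn, repeats=5):
    """Average the wall-clock time of fn over several runs,
    like the per-object timing the profiling script recorded."""
    total = 0.0
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn()
        total += time.perf_counter() - t0
    return total / repeats

# Hypothetical stand-ins for per-object modifier evaluation:
def eval_small_prop():
    sum(range(1_000))

def eval_heavy_prop():
    sum(range(2_000_000))

# Sort objects by average evaluation time, slowest first,
# to surface unexpected slowdowns on "simple" objects.
report = sorted(
    [("prop_small", profile(eval_small_prop)),
     ("prop_heavy", profile(eval_heavy_prop))],
    key=lambda item: item[1],
    reverse=True,
)
for name, avg in report:
    print(f"{name}: {avg * 1000:.3f} ms")
```

Sorting the report slowest-first is the useful part: the surprises are usually small objects near the top of the list.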

2/ On each shot, we set Simplify to 1, which gave enough detail to be usable, and that way we could generally reach 25 fps.

3/ As you can see in the third making-of video, we had another script that converts assets to a low-poly version.
It wasn’t done for characters, but nearly everything else was available as a low-poly version. That was useful in our case because we didn’t apply modifiers most of the time, we used Auto Smooth a lot (which tends to slow things down when used on animated objects), and the materials were quite slow to compute in Eevee as well. That helped a lot with many things:
scenes opened faster, and because the set was rigged, we could manipulate it quite easily where it would have been impractical with the final version. Props benefited a lot from that when they were animated too.

4/ On many big props we used the profiler too, so even when we wanted to use the final props rather than the low-poly ones, they stayed responsive.

5/ Another trick I found later: when working with autokey (which is good) you may get unneeded keyframes, on armatures or objects. I wrote another script that cleans keyframes and removes animation channels that have only one keyframe. Depending on the case and how much unneeded stuff is keyed, that can give a great boost. Even if something doesn’t move, one keyframe will force Blender to update its geometry every frame rather than only once.
I think getting rid of unused channels on rigs helps too; of course, that’s when you have a lot of unused channels on several characters, as only a few channels won’t make a difference.
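As a rough illustration of the cleanup rule, here it is over a made-up channel structure in plain Python. This is not the production script, which would walk bpy actions and their F-curves; the channel names and the flat-curve check are assumptions for the sketch:

```python
def clean_channels(channels, eps=1e-6):
    """Drop animation channels that carry no real motion:
    a single keyframe, or keys whose values never change."""
    cleaned = {}
    for name, keys in channels.items():
        if len(keys) <= 1:
            continue  # one key just pins a value; a static property is cheaper
        values = [value for _frame, value in keys]
        if max(values) - min(values) < eps:
            continue  # flat curve: every key holds the same value
        cleaned[name] = keys
    return cleaned

# Hypothetical channels as {name: [(frame, value), ...]}:
channels = {
    "location_x": [(1, 0.0), (24, 3.5)],            # real motion, kept
    "location_y": [(1, 2.0)],                        # single key, dropped
    "scale_z":    [(1, 1.0), (12, 1.0), (24, 1.0)],  # flat, dropped
}
print(clean_channels(channels))  # only location_x survives
```

The flat-curve case is the one autokey tends to create: three identical keys still force a per-frame update, so removing them is pure gain.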

Getting fast playback was one of the main technical goals of the project. I did a lot of research into how to optimize assets and shots.
One thing that was quite promising, but that we ended up not using, was animation caching for the characters and animated props. That gave a ~3x speedup. You can then cache a character/prop you aren’t working on.
I tried Alembic, which didn’t give much speedup at that time, so I resorted to .mdd.
It was a bit tricky to set up and optimize, but it gave really cool results. Unfortunately, it was not very handy for the animators; they tend to work on everything at the same time.
One idea I had was that they could use it in place of playblasts, but they didn’t use it in the end :smiley:
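For the curious, the .mdd layout is tiny: a big-endian header, per-frame timestamps, then raw XYZ floats per vertex per frame. Here is a minimal round-trip sketch from memory of the NewTek MDD convention used by Blender’s import/export add-on; treat the exact field order as an assumption, and note that float32 precision means arbitrary coordinates only round-trip approximately:

```python
import struct
import io

def write_mdd(stream, frames, fps=24.0):
    """Minimal .mdd writer: big-endian header (frame count, vertex count),
    one float timestamp per frame, then XYZ floats per vertex per frame.
    `frames` is a list of frames, each a list of (x, y, z) tuples."""
    num_frames, num_verts = len(frames), len(frames[0])
    stream.write(struct.pack(">2i", num_frames, num_verts))
    for i in range(num_frames):
        stream.write(struct.pack(">f", i / fps))  # per-frame timestamp
    for frame in frames:
        for co in frame:
            stream.write(struct.pack(">3f", *co))

def read_mdd(stream):
    """Read the cache back; timestamps are skipped for brevity."""
    num_frames, num_verts = struct.unpack(">2i", stream.read(8))
    stream.read(4 * num_frames)
    return [
        [struct.unpack(">3f", stream.read(12)) for _ in range(num_verts)]
        for _ in range(num_frames)
    ]

# Two frames of a two-vertex "mesh", values chosen to be float32-exact:
frames = [[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
          [(0.0, 0.5, 0.0), (1.0, 0.5, 0.0)]]
buf = io.BytesIO()
write_mdd(buf, frames)
buf.seek(0)
print(read_mdd(buf) == frames)  # True for these exactly-representable values
```

Because the format is fixed-topology (same vertex count every frame), it suits baked character meshes but not anything whose geometry changes per frame, which is also why it breaks down for Geometry Nodes output.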

Hope that helps to give you ideas. My general rule is to optimize the most important stuff, like characters and sets, because they are used all the time, and then be a little more forgiving with the rest, because optimization takes time and you want to focus on the actual work.

We didn’t reach 25 fps all the time, but it was generally good enough; animators are used to working at 12 fps, which is generally OK. And we had tools to quickly check if something was wrong.


Again, thanks for the detailed answer. Almost makes me regret wasting your time.
A profiler is a thing that Blender is sorely lacking; it’s hard to find problematic objects. By the way, did it count GPU time as well? I’ve got some ideas about applying game-like optimizations to Eevee (texture atlasing, reducing draw calls and shader complexity), but I can’t decide if it will be worth the trouble if the main bottleneck is CPU-heavy editor stuff.
Low poly is self-explanatory; I didn’t expect Auto Smooth to be so bad, though. I will think twice about using custom normals.
Good suggestion about keyframes too. I generally set them manually, but I could still do better with them, like using only the Position keying set instead of Position-Rotation-Scale when the situation allows.
And animation caching seems like a solution for Geometry Nodes modifiers, which are much slower than normal modifiers in my experience.

1 Like

Cool !

Don’t worry, I’m happy to share, and I’m sure others may benefit from this information as well.

No, it didn’t; I didn’t find a way to do that while staying inside Blender.
The script is quite simple: it just applies the modifiers several times to get an average.
And indeed, that’s only a part of what happens when a frame is calculated, but it’s worth looking into; I always find tiny objects that take a great deal of time for nearly no reason.
If you plan to use Eevee and reach realtime, which looked a bit too complicated at that time, then indeed you may want to profile the GPU as well.
The first step is to get realtime in Workbench; then you can look into optimizing Eevee specifics.

Yes, take what I said with a grain of salt. That was the case at the time; maybe new optimizations have come in since. At least it’s easy to test if you already have some models.
Anyway, having a version for animation and a version for rendering for all the assets can help a lot in many situations. On a few projects we did that for characters as well.

I encourage you to run your own tests. Of course, getting inspiration from others is quite helpful, but different projects and different workflows may not lead to the same issues.

And about keyframes, it’s worth trying a proof-of-concept script there too.
It made a great difference because a lot of stuff was keyed for no reason; sometimes a rigged prop that doesn’t move gets keyed, triggering evaluation of its geometry. It’s hard to notice these things when you work in a team. If you do manual keying on specific bones, getting rid of, say, scale channels may not give you a great speedup.
Edit: I just checked with a random rig, and keying all the bones (loc/rot/scale) vs keying only the root (loc) didn’t make any difference. But from what I recall it did when I tried before, so I should double-check that at some point.
All that said, running this script on every shot of the series I tried always gave some fps improvement.
Maybe it’s more because of the stationary objects/rigs.

If you have a scene you want me to test, you can PM it to me. I can run the object profiler on it, and the curve cleaner on an animated shot, so you’ll see whether that’s worth pushing further!

And for the GN cache, that would be awesome; in that case, Alembic could be a great basis,
especially given that it supports varying geometry per frame, unlike .mdd.
Great idea !

1 Like

I just found out about this thread, great work, congrats sozap!
Thanks for the valuable info and breakdown you’ve provided, most of us need it!

Funny thing, when I saw the thumbnail, I thought “this looks like the kind of style that sozap likes”, then I saw it was your thread! :slight_smile:

1 Like

Hahaha, thanks a lot for the kind words NodeX !

This is such a cool project, Sozap!
Nice breakdown too.

There is so much time and love in it, you can see and just feel it, also congrats! :smiley:

1 Like

Thanks a lot FennaFenn !