Blender and C4D

Hey there, fellow blendheads.

As Blender developed over the years and the community grew, it gained a lot more attention, and over the last few years videos have grown like weeds on YouTube with titles like "Why you should switch to Blender" and so on. Some people in our community have the audacity to believe that Blender has become absolutely superior to everything else, and that everyone who doesn't agree is dumb. That's a sad development. This community was once humble and nice, and some people have turned into the same trashbags we once believed the communities of 3ds Max, Maya or C4D were full of. But that's not what I want to talk about.

I use both Blender and C4D: Blender for modelling, C4D for basically everything else (for the reasons I want to talk about).

I really, REALLY miss a few features C4D offers which Blender does not. But maybe I just haven't found them yet?

#1.
Separate render outputs.
In C4D it's the Take Manager. I can set up multiple takes with child takes, let child takes inherit settings from their parent, assign a different render setting (and output path with variables) to every take, and even select a different render engine per take.
Of course I can also set object visibility, mattes (holdouts) etc. for every take, and even which camera is used.

C4D then renders everything and saves it to /%filepath/%projectname/%take_%frame, with all AOVs stored in the right folders.

Does Blender offer something similar? I know we have the view layer system, but I never found a way to render multiple layers and save their output directly somewhere, let alone with variables in the output path.

#2.
XPresso in Cinema. I believe it's best explained as a node-based, complex driver setup. You can just create a null, add an XPresso tag to it, and create controls inside. You can take ANY attribute of any object in the scene and let them work together: create rigs, complex structures, even bools (true/false), work with range mappers etc.
You can set up user-defined control GUIs and just create whatever you want.
Blender offers drivers and modifiers, but is there something like this in Blender?

#3.
Various Redshift features:

  1. Tessellation: Redshift offers very high tessellation at render time, subdividing the mesh's edge length down to sub-pixel size if wanted, adapting to camera distance and even outside the camera view. In Blender we have Subdivision Surface (limited to 6 levels) and displacement, yes. But is there a better option?
    (edit: okay, found something similar under the experimental settings for Cycles.)
  2. RSObject: instances from another file. The viewport display can be set to the geometry or just a bounding box, which is replaced at render time.
    I saw someone doing something similar with geometry nodes: in the 3D viewport it was a convex hull, at render time the real object. How does Blender handle this?

Please don't get me wrong. This is not a thread to praise Cinema nor to bash Blender. I really like Blender, but the points I mentioned kind of make me stick to Cinema at work. Some of my projects had around 30 takes organized with variable outputs etc. for post in AE.
I just start the render overnight or over the weekend and go home.
Am I missing something? Does this exist somewhere in a form I just don't know about? Is it on the roadmap, maybe?

And please, no (paid) add-ons from the Blender Market.

With kind regards

Hi there! As a former C4D user, let me try a few.

I'm pretty sure Blender doesn't have something like the take system, but its collections and compositor are more powerful than C4D's. You probably know that collections are groups without geometry, so no parenting, but with nesting and control over rendering. If you set up a scene with lots of collections like "trees" inside "landscape", you can quickly render another pass with different settings. You can even sort of automate this by excluding some collections from some view layers (create multiple view layers in the top right). If you then render the layers consecutively, you're coming close.
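To make this concrete, here is a minimal bpy sketch of that workflow. It assumes a collection named "trees" nested inside a top-level collection "landscape", and the output folder is made up:

```python
import bpy

scene = bpy.context.scene

# One view layer per "take"; the second one excludes the "trees" collection.
for name, hide_trees in [("take_full", False), ("take_no_trees", True)]:
    vl = scene.view_layers.get(name) or scene.view_layers.new(name)
    vl.layer_collection.children["landscape"].children["trees"].exclude = hide_trees

# Render each view layer to its own path, C4D-take style.
for target in scene.view_layers:
    for vl in scene.view_layers:
        vl.use = (vl == target)                              # render one layer at a time
    scene.render.filepath = f"//renders/{target.name}_####"  # #### = frame number
    bpy.ops.render.render(animation=True)
```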

The compositor can combine your rendered view layers (showing different objects and lights) or output them as different files. You can even save all the light passes and cryptomattes as separate files: just add more inputs to the File Output node and connect your passes to them. You can't make setups like this in the Output properties panel.
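If you want to script it, a rough sketch of such a File Output setup could look like this. The pass selection and output folder are assumptions, and cryptomatte passes show up as numbered sockets like CryptoObject00:

```python
import bpy

scene = bpy.context.scene
vl = bpy.context.view_layer
vl.use_pass_mist = True                  # enable the passes you want to save
vl.use_pass_cryptomatte_object = True

scene.use_nodes = True
tree = scene.node_tree
rl = tree.nodes.new("CompositorNodeRLayers")       # Render Layers node
out = tree.nodes.new("CompositorNodeOutputFile")   # File Output node
out.base_path = "//renders/passes/"                # hypothetical folder
out.format.file_format = 'OPEN_EXR'                # keep passes in float EXRs

out.file_slots.clear()
for pass_name in ("Image", "Mist", "CryptoObject00"):
    out.file_slots.new(pass_name)                  # slot name = filename prefix
    tree.links.new(rl.outputs[pass_name], out.inputs[pass_name])
```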

I used to output many, many passes just like in C4D and comp them later in AE or PSD, but I found it quicker to build the same comp setup in Blender. I still save the multipass info AND the Blender composite, but I almost never use the passes anymore.

Cryptomattes should work the same as in Redshift, but you can combine them in the compositor using blend modes before you mask.

I too find drivers a little lacking, but I do use them to control rigs etc., in combination with custom properties and sliders linked to an empty (slider and value user data). You can't add UI elements to the HUD unless you use bones (don't), but there are add-ons for adding your own UI to a custom side panel. I'm looking into Serpens for this; it offers easy access to almost all object properties as well.
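For the "empty with sliders" workflow specifically, this is roughly what it looks like in bpy; the object names, property name and expression are all made up:

```python
import bpy

# A stand-in for a C4D null with user data: an empty with a custom property.
ctrl = bpy.data.objects.new("Controls", None)
bpy.context.collection.objects.link(ctrl)
ctrl["slider"] = 0.0
ctrl.id_properties_ui("slider").update(min=0.0, max=1.0)  # slider range in the UI

# Drive another object's Z location from that property.
cube = bpy.data.objects["Cube"]
fcu = cube.driver_add("location", 2)    # index 2 = Z
var = fcu.driver.variables.new()
var.name = "s"
var.targets[0].id = ctrl
var.targets[0].data_path = '["slider"]'
fcu.driver.expression = "s * 2.0"       # a simple range-mapper-style expression
```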

Everybody uses the Cycles mesh subdivision at render time for displacement. You can raise the dicing rate to speed it up (higher values mean coarser micropolygons). Displacement is coming to Eevee Next too (in beta), so you can see it in the viewport. Might still be 4.1, but likely 4.2. I don't get why it's been "experimental" for years and years.
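For reference, enabling it from Python looks roughly like this; the dicing values are just examples:

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.cycles.feature_set = 'EXPERIMENTAL'    # adaptive subdivision lives here
scene.cycles.dicing_rate = 1.0               # micropolygon size in pixels
scene.cycles.offscreen_dicing_scale = 4.0    # coarser dicing outside the camera view

obj = bpy.context.object
obj.modifiers.new("Subdivision", 'SUBSURF')
obj.cycles.use_adaptive_subdivision = True   # per-object toggle
```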

There are tons of ways to work with high-res render instances and low-res proxies in the viewport. Performance is way better than when I last used C4D (R23). There are object instances for single objects, collection instances for a group of objects, and geometry nodes of course, which can display objects and collections. But you can also link to an object or collection in another file using File > Link. All of these can be hidden in the viewport using the icons in the Outliner. Don't use the eye icon to disable an object (it still gets processed); use the monitor icon. For groups and collections, ensure that children are disabled too.

Now just make a proxy, or have geometry nodes calculate a hull and set that to not render using the camera icon in the Outliner. If you use a geometry nodes setup, it's common to do "Is Viewport" > Switch node > proxy object, else > collection instance (see the sketch below). That's how you automate low-res grass on a hill in the viewport and high-res grass at render time.
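Here is a rough Blender 4.x sketch of that node group built in Python; "ProxyHull" and "Grass" are hypothetical object/collection names:

```python
import bpy

obj = bpy.context.object
mod = obj.modifiers.new("Proxy", 'NODES')
tree = bpy.data.node_groups.new("ViewportProxy", 'GeometryNodeTree')
mod.node_group = tree
tree.interface.new_socket("Geometry", in_out='OUTPUT', socket_type='NodeSocketGeometry')
out = tree.nodes.new("NodeGroupOutput")

is_vp = tree.nodes.new("GeometryNodeIsViewport")
switch = tree.nodes.new("GeometryNodeSwitch")
switch.input_type = 'GEOMETRY'

proxy = tree.nodes.new("GeometryNodeObjectInfo")
proxy.inputs["Object"].default_value = bpy.data.objects["ProxyHull"]
full = tree.nodes.new("GeometryNodeCollectionInfo")
full.inputs["Collection"].default_value = bpy.data.collections["Grass"]

tree.links.new(is_vp.outputs["Is Viewport"], switch.inputs["Switch"])
tree.links.new(full.outputs["Instances"], switch.inputs["False"])  # render: full geo
tree.links.new(proxy.outputs["Geometry"], switch.inputs["True"])   # viewport: hull
tree.links.new(switch.outputs["Output"], out.inputs["Geometry"])
```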

Hope this helps!


Hey, thanks for the quite detailed reply. I didn't know the compositor can be used to generate actual different file outputs. I thought in the end there's just one actual output, and I basically used the compositor more like AE; but since that happened during the render and affected the final output, it took away all my flexibility in post. I'm going to look further into that (and the other stuff you mentioned).
Oh, and thanks for coming up with the term "proxies"; for some reason it didn't come to my mind while creating the topic :smiley:

I love doing post in AE, especially since I had Maxon One, which comes with veeeeery many effects and tools.

Another question I forgot in my post:
How is Blender doing with animated vertex maps? I know there is a modifier to create proximity-based maps, but in my experience they don't work that well.

In C4D you can basically generate vertex/weight maps out of everything: proximity, time, you can even create ones that spread and multiply with noises etc. Does Blender have something similar? I really like it for texturing.

Redshift also has the great curvature shader which, together with noises, is very flexible and useful for creating things like rough edges on metals. I found that Blender has a Pointiness input which kind of works, but not the way I expect it to. Is there something like this in Blender?

Which makes me think of something else: when I started with Blender back in 2.37a, it had a lot of noises: sky, perlin, Stucci and whatever. Are these available somewhere for Cycles? In Cycles I only find the basic Noise texture.

To be honest, I don't think it's Blender users who push those videos. All the videos I saw were from people using other software and comparing how Blender stacks up against their current choice.

Also, this type of content is somewhat clickbaity and attracts attention, so people are just following the views. There is a very similar epidemic of people churning out Blender beginner tutorials who barely have any Blender expertise in the first place.

I also hate seeing people move to Blender for the wrong reasons. The main one is that it's free, without understanding that it still needs monetary support: it's free to use, not free to develop. But it's a personal gripe of mine that this message is not voiced more often by "the influencers".


There is a bundled add-on named AnimAll that allows you to keyframe weight painting.

There are 3 vertex weight modifiers, which are counter-intuitive.
But the Dynamic Paint modifier is super intuitive: it allows you to use one mesh as a brush and another as a canvas to paint weight maps, vertex color maps or animated image sequences. It includes fading, spreading, shrinking and dripping effects.
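A minimal scripted version of that setup, assuming a canvas object named "Plane" and a brush object named "Sphere" (both hypothetical), might look like this:

```python
import bpy

canvas = bpy.data.objects["Plane"]   # hypothetical canvas mesh
brush = bpy.data.objects["Sphere"]   # hypothetical brush mesh

# Canvas: receives an animated weight map.
bpy.context.view_layer.objects.active = canvas
canvas.modifiers.new("Dynamic Paint", 'DYNAMIC_PAINT')
bpy.ops.dpaint.type_toggle(type='CANVAS')
surf = canvas.modifiers["Dynamic Paint"].canvas_settings.canvas_surfaces[0]
surf.surface_type = 'WEIGHT'
surf.use_dissolve = True                   # painted weights fade out again
surf.dissolve_speed = 50                   # frames until fully dissolved
bpy.ops.dpaint.output_toggle(output='A')   # create the output vertex group

# Brush: paints weight onto the canvas wherever it gets close.
bpy.context.view_layer.objects.active = brush
brush.modifiers.new("Dynamic Paint", 'DYNAMIC_PAINT')
bpy.ops.dpaint.type_toggle(type='BRUSH')
brush.modifiers["Dynamic Paint"].brush_settings.paint_source = 'VOLUME_DISTANCE'
```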

It is also possible to animate weights through geometry nodes, in countless ways.

In Vertex Paint mode, you can use the Paint > Dirty Vertex Colors operator.
In Texture Paint mode, there is a Cavity Mask with a custom curve.
In Sculpt mode, which can also be used to paint attributes, you can define a cavity mask in the Auto-Masking popover.
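The first one can also be called from a script; the blur values here are just examples:

```python
import bpy

# Bake occlusion-style "dirt" into the active color attribute.
bpy.ops.object.mode_set(mode='VERTEX_PAINT')
bpy.ops.paint.vertex_color_dirt(blur_strength=1.0, blur_iterations=1, dirt_only=False)
```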

There are 7 procedural texture nodes in EEVEE/Cycles.
One of them (Musgrave) will be removed in 4.1, because it is being merged into the basic Noise texture.

Those procedural textures have more settings than the old Blender Internal textures, and they are able to produce more patterns. And as nodes, they can be combined more easily, in multiple ways, to produce many more noise patterns.

Anyway, the old BI textures are still there, usable by brushes, old modifiers and compositing nodes.
Through compositing nodes, you can output an old pattern as an image that Cycles and EEVEE can then use as an Image Texture.
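A quick sketch of that round trip, with a made-up texture name and output folder:

```python
import bpy

# A legacy Blender Internal texture; 'STUCCI' is one of the old types.
tex = bpy.data.textures.new("OldSchool", type='STUCCI')

scene = bpy.context.scene
scene.use_nodes = True
nt = scene.node_tree
tex_node = nt.nodes.new("CompositorNodeTexture")
tex_node.texture = tex
out = nt.nodes.new("CompositorNodeOutputFile")
out.base_path = "//textures/"   # hypothetical folder
nt.links.new(tex_node.outputs["Value"], out.inputs[0])
# Render once, then load the saved image into an Image Texture node.
```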

The Sky texture offers 3 different models, all more realistic than the removed Blender Internal sky. If you want an unrealistic sky, you can still build one from basic procedural nodes.
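Picking a model is a one-liner in a script; the elevation value is arbitrary:

```python
import bpy

world = bpy.context.scene.world
world.use_nodes = True
nt = world.node_tree
sky = nt.nodes.new("ShaderNodeTexSky")
sky.sky_type = 'NISHITA'      # the most physical of the three current models
sky.sun_elevation = 0.25      # radians
nt.links.new(sky.outputs["Color"], nt.nodes["Background"].inputs["Color"])
```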

I'm not sure Blender relies on animated vertex maps the way you describe. Shading nodes can just read edge sharpness from another node, for instance. You can even use the Light Path node to tell whether the current ray is a shadow ray or a glossy ray. These all output values from 0 to 1, so you can blend them with noises in your shading tree to create masks that blend parts of your material.


Blender is a bit lacking when it comes to drivers and such, but you might want to take a look at the Action constraint if you haven't. It lets you link the playback of an existing action to an object's transform instead of having it play along the timeline. It can replace drivers for many things, though it does have its own, different limitations.
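Set up in Python it looks roughly like this; the objects, the action name and the value ranges are all hypothetical:

```python
import bpy

cube = bpy.data.objects["Cube"]       # object being driven
ctrl = bpy.data.objects["Controls"]   # hypothetical controller empty

con = cube.constraints.new('ACTION')
con.target = ctrl
con.transform_channel = 'LOCATION_X'          # controller channel to read
con.target_space = 'WORLD'
con.min, con.max = 0.0, 2.0                   # this controller range...
con.frame_start, con.frame_end = 1, 50        # ...maps to this action range
con.action = bpy.data.actions["CubeAction"]   # hypothetical existing action
```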

There is a trick to get an edge-detecting shader in Blender. The explanation in the video is a bit complicated, but the node setup at the end is really simple.
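One common version of this trick (not necessarily the exact setup from the video) compares the Bevel node's rounded normal against the true normal; where they diverge, you are on an edge. Note the Bevel node is Cycles-only:

```python
import bpy

mat = bpy.data.materials.new("EdgeWear")
mat.use_nodes = True
nt = mat.node_tree

bevel = nt.nodes.new("ShaderNodeBevel")
bevel.samples = 8
bevel.inputs["Radius"].default_value = 0.02   # edge width, in scene units

geom = nt.nodes.new("ShaderNodeNewGeometry")
dot = nt.nodes.new("ShaderNodeVectorMath")
dot.operation = 'DOT_PRODUCT'                 # 1 on flat areas, <1 on edges
nt.links.new(bevel.outputs["Normal"], dot.inputs[0])
nt.links.new(geom.outputs["Normal"], dot.inputs[1])

ramp = nt.nodes.new("ShaderNodeValToRGB")     # tighten/invert the mask here
nt.links.new(dot.outputs["Value"], ramp.inputs["Fac"])
```

Mix a noise texture into the ramp's output and you get the rough-metal-edges look the Redshift curvature shader is typically used for.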

When I assembled a scene from various models in Blender, I really missed a tool like Place from Cinema 4D.

To add to what has been said already, I want to draw your attention to the concept of scenes and view layers.
In the same project you can have multiple scenes, and each scene can have multiple view layers.
The AOV and pass settings are part of the view layer; the camera, render engine and world settings are part of the scene. You can select EEVEE for one scene and Cycles for another and it will be remembered, even though the UI doesn't make it obvious that the setting is stored per scene.

The collection visibility options are within the view layer.
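A tiny sketch of how those levels split up in the Python API (the scene name and AOV name are assumptions):

```python
import bpy

# The render engine is stored per scene...
eevee_scene = bpy.data.scenes["Scene"]          # assumes the default scene name
eevee_scene.render.engine = 'BLENDER_EEVEE'
cycles_scene = bpy.data.scenes.new("Beauty")
cycles_scene.render.engine = 'CYCLES'

# ...while passes and AOVs are stored per view layer.
vl = cycles_scene.view_layers[0]
vl.use_pass_z = True
aov = vl.aovs.add()
aov.name = "MyAOV"      # hypothetical custom AOV, filled by shader AOV Output nodes
aov.type = 'COLOR'
```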