Substance Designer Fall Update is out (tons of new texture making features)

Unlike previous releases, the fall update has far less focus on iRay and is all about dramatically improving your ability to create wonderful, detailed textures.

-Normal distortion
-Swirl node
-Non-square texture support
-Flood fill
-Improved noise algorithms (such as scratches)
-Noise scaling
-and much more

From my perspective, the de-emphasis on iRay for the time being is only a good thing, considering that many textures that used to be time-consuming to create can now be made far more easily.

So for those using the program already, is this going to be an exciting release or are you still missing something?

Still missing an official Blender-Substance Designer bridge.
I would instantly give $10 to Allegorithmic (even though all their other bridges are free)
for the ability to dynamically change parameters of sbsar files in Eevee/Cycles.

That would be awesome… indeed.

Seems like these tools are really only good for game assets, nothing more. Here, for example, is a guy who tried to argue that Substance Painter was better than Blender’s texture-painting tools. Trouble is, he didn’t really understand how to use Blender’s tools.

Not sure whether they used game assets here:

A few subjective thoughts on the matter:

-@myclay, just use auto-export on change in SD and bind a reload-textures macro to a hotkey (e.g. `for img in bpy.data.images: img.reload()`). It’s more streamlined than archiving the substance, going back to make changes, re-archiving, etc.

-@Painter vs Blender paint. It’s a bad comparison. In Painter you paint with materials, or with masks unhiding complex multichannel materials. I suppose you could do that in Blender too by defining several Principled shaders and using AO/curvature masks. I have a feeling we’ll get there with 2.8.

-@Film or game only. Truthfully, it takes A LOT of effort to create complex materials in SD. Also, once a complex shading network is defined it is VERY, VERY slow, especially when nesting one network inside another. SD is far more efficient when used to define simpler generators, patterns, filters, etc. to be used in SP. The problem with both SP and SD is that they require perfect UVs and extensive bakes. That works for games, as there is no other way around it; for FILM, though, this is a limiting factor, requiring the asset to be simplified and baked down. It’s faster to render afterwards, though, and the PBR workflow yields consistent results. That might be preferable for scene prop assets.
On the other hand, for FILM you can use the nodal networks of Maya/Blender/Houdini/your_software_here to define very similar logic in an unshackled environment. For example, you do not have to be bound to UV coordinates or a single UV set; you can use 3D noises and various mesh data (vertex colors, density, etc). In that respect, comparing Houdini’s nodal shading capability to SD would be laughable. That is why it’s important and nice that Blender is also receiving nodes like Bevel (+curvature*) and hopefully, in the future, AO. All of this is pivotal in modern texturing workflows, and maybe in the not-too-distant future it will give Blender certain benefits for texturing. For example, today I use Blender for Bevel and also often for baking realistic emission maps and the translucency/SSS of cloth - things very hard to fake in Substance. All in all, using the best of all tools is good practice.

Instead you can give your ten bucks to this guy:

Still in beta, but very promising.

And another presentation:

That’s pretty funny considering Substance supports UDIM and Blender doesn’t yet (I saw it in the tracker though). Still, there’s not much to argue: Substance is just better, at least right now. As far as I know, Blender doesn’t have: a PBR viewport (until Eevee), material painting instead of texture painting (painting on several images at once), vectorized strokes (paint at 128x128 resolution, output at 4K/8K), layers/groups (in Blender each layer is a separate image), filters, generators (for making masks from baked textures or just procedural “noise” textures), height-to-normal painting, and much more (already TL;DR). :stuck_out_tongue:

I also disagree that you need perfect UVs. They matter more for setting up texel resolution, but with triplanar projection for fill layers and other ways to get around seams you don’t need perfect UVs at all. Besides, if you want to redo your UVs you can do that and then just import the file again, reprojecting the strokes onto the new mesh (assuming the geometry itself didn’t change).
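For anyone unfamiliar with how triplanar projection dodges seams: it blends three axis-aligned projections weighted by the surface normal. A minimal sketch of the weight computation (plain illustrative Python, not any app's actual implementation):

```python
def triplanar_weights(normal, sharpness=4.0):
    """Blend weights for the X/Y/Z projections from a unit surface normal.

    Raising to `sharpness` tightens the transition zones between the
    three projections; the weights are normalized to sum to 1.
    """
    w = [abs(c) ** sharpness for c in normal]
    total = sum(w)
    return [x / total for x in w]
```

A normal pointing straight along +Z gives weights [0, 0, 1], so only the top projection shows; near 45° edges the three projections crossfade smoothly instead of producing a visible seam.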

It’s not impossible to make something better in Blender though. For example, there’s the PBR Painter addon. Still, making something really good would require real developer work (maybe after 2.8?). Right now I think texture painting is still one of the weaker parts of Blender, along with the VSE and the game engine.

By perfect UVs I meant non-overlapping UVs that fit in 0-1 space. Otherwise bakes fail, and so do all the generators that feed on those bakes. That in turn demands that you collapse your modifier stack and lose history, then do the UVs manually. That’s all fine for relatively simple game characters, but for complex stills and film-level work it’s a different story. In contrast, in Blender you could use curvature (the Bevel node) as an edge-wear mask without a bake or even UVs. You could first create a high-poly, HD textured model and bake it all down onto the final low-poly. You can model the HP and texture at the same time, worrying about the LP only when you’re 100% certain of the design and the client has given a thumbs up. I keep referencing this video:

That’s why it is important to get good Curvature (convex, concave) and AO masking in Blender.

You mean like using Blender’s Vertex Paint layers to modulate mixtures of different shaders?

If you need masks with intricate detail and massive variation, then you need textures as your resolution is otherwise dependent on the vertex count.

Also, Substance Designer was originally targeting game creators, but there’s a lot of professional-grade CG art out there that heavily makes use of maps created in the software. Add to that the fact that Allegorithmic now has the world’s only actively developed software designed specifically for texture creation.

I don’t see why non-overlapping UVs make UVs perfect; it’s pretty easy to do. Even easier in film, when you can split the character per limb/part and use UDIM. Same for the modifiers: I doubt people actually animate with modifiers unapplied (except armature, or subdiv with OpenSubdiv) because it makes things really slow. If you need to change the character later, you can do that on your saved character with the modifiers still there, then copy the UVs from the applied (old) character using data transfer (or other methods).

The video you linked isn’t anything special really, just a bunch of noise textures with triplanar mapping. It’s fine for a still picture, but the texture will stay in world space while the character moves, which isn’t great for animation. That’s fine if you don’t want to UV map, but then you’ll have to rely on procedural stuff or shader magic. The whole point of PBR is to avoid that and make things easy and predictable.

  • In principle yes, but with texture masks, as Ace hinted. The key here is to find a way to paint with multiple channels (Color, Roughness, Height, etc.) behind every brush stroke. That could involve blending between two or more Principled shaders using texture-based masks. Can Eevee show that? I do not know what’s on the roadmap for Eevee or what is already possible. Armor, however, seems to do a lot of it already: Perhaps someone has experience and can chime in on the subject?
  • @Cyaoeu. For a project like the one in the video, as well as most other stills and (non- or little-deforming) animation, forcing the extra steps of the game workflow can be very expensive with respect to the client’s needs, budget, and deadlines. Instead of HP > LP retopo > non-overlapping UVs > bake > texture > output textures > re-input in Marmoset/Blender/Keyshot - the steps that you so firmly promote - one could simply do Model HP <-> Texture/Shade <-> Render. This results in higher quality, less complexity, and shorter turnaround, and it allows infinite refinement of any aspect at any time. For video animation where most assets are static, procedural networks would take less memory (VRAM) than high-quality PBR maps per asset. For games, the result could easily be baked down as a final step (or in between), also involving SP/SD if desired.
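The idea in the first bullet - blending two Principled shaders through a painted mask texture - could be wired up in Python roughly like this. A hypothetical sketch only: it uses Blender's real node type names, but `build_masked_mix` is my own illustrative helper, not an existing function.

```python
# Hypothetical sketch: wire two Principled BSDFs through a Mix Shader
# whose Fac input is driven by a mask texture (Blender bpy node API).
def build_masked_mix(node_tree, mask_image):
    nodes, links = node_tree.nodes, node_tree.links
    mat_a = nodes.new("ShaderNodeBsdfPrincipled")   # base material
    mat_b = nodes.new("ShaderNodeBsdfPrincipled")   # painted-on material
    mask = nodes.new("ShaderNodeTexImage")          # the paintable mask
    mask.image = mask_image
    mix = nodes.new("ShaderNodeMixShader")
    links.new(mask.outputs["Color"], mix.inputs["Fac"])
    links.new(mat_a.outputs["BSDF"], mix.inputs[1])  # shown where mask is black
    links.new(mat_b.outputs["BSDF"], mix.inputs[2])  # shown where mask is white
    return mix

# Inside Blender you would call something like:
#   build_masked_mix(material.node_tree, bpy.data.images["mask"])
```

Painting into the mask image then effectively paints one material over the other, which is the closest Blender analogue to Painter's material painting.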

From that point of view, as well as the aspects highlighted in prior posts, I am only hinting that depending on the project an artist might opt for one workflow or the other (vs. Substance’s one-glove-fits-all). Blender has tremendous potential here. As I listed previously, though, Cycles would have to support masking with AO and curvature (convex, concave), as that is what drives all the SP/SD generators, allowing them to create a sophisticated look.

I apologize if I failed to explain my point of view. I’ll leave it at that.

Hello cgstrive,
Thanks for posting :eyebrowlift:
Could you please be so nice as to give me a hint/example file
on how that would be made?
Just from the sound of the proposed API parts, it seems like it would be yet another one of the rather slow image-I/O solutions?

Right now every available solution more or less seems to work like the Automaps plugin from Todd Mcintosh, released over three years ago.
While it’s possible, it’s IMO highly ineffective when you want dynamic changes, and it
has the bottleneck of constantly having to save and load images from the hard drive.

I really crave a way to dynamically change an sbsar file packed into the .blend file,
one which stores the output directly in RAM or GPU RAM, while being able to set keyframes
on the sbsar sliders and view those keyframed changes in the viewport or at render time - no matter whether it’s Cycles or Eevee.

Hi myclay

My workflow is really simple and straightforward. I do all the slider changes and variations in Substance Designer and enable Automatic Export:

In Blender I use a small script I wrote to reload textures (Spacebar search menu > type “rt” as in reload textures):

You can download it here if it helps you:

The whole SD change > Blender preview cycle takes just a couple of seconds, and you can make it faster by binding a hotkey to it. Making an sbsar and reloading it in other apps (even in SP after SD) is far more time-consuming. That is why sbsars are more relevant for game engines, where you need variation between similar-looking assets (floors, walls, etc.). For 3D apps and individual assets, I think the texture-reloading workflow is rather efficient. If you really need multiple variations, just duplicate your graph a few times in a parent graph and create new outputs.
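For anyone who wants to roll their own rather than download the add-on: the core of such a reload helper might look like the following. This is a minimal hypothetical sketch, not cgstrive's actual script; it assumes Blender's `bpy` image API, where file-backed images have `source == "FILE"` and a `reload()` method.

```python
# Hypothetical sketch of a "reload textures" helper for Blender.
def reload_file_images(images):
    """Reload every image datablock that is backed by a file on disk."""
    reloaded = []
    for img in images:
        if getattr(img, "source", "") == "FILE":
            img.reload()  # re-read pixels from disk, picking up SD's fresh export
            reloaded.append(img.name)
    return reloaded

# Inside Blender you would call: reload_file_images(bpy.data.images)
```

Wrapped in a small operator class, this is what you would bind to the “rt” search entry or a hotkey.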

Hope it helps

Sure, but if you say “Model HP<->Texture/Shade<->Render” gives higher quality, you would need to make those shaders/node networks first; they don’t just magically appear. Textures give more control because you paint where you want things instead of relying on carefully tuned shaders to do the work for you. Your method could work for inanimate objects, but for characters it would probably look really bad. RAM (not VRAM) is not a problem either with beefy render farms.

Besides, in high-budget productions the cost of the additional hours spent making UV maps is a drop in the ocean in the big scheme of things. Things like Mari were made for a reason. Mari has had UDIM support since way back, so it was more popular for film, but now that Substance also supports UDIM I guess it’s getting more popular there too.

Hi cgstrive,
thank you for the explanation and the pleasantly small script. Yeah, for lookdev that solution looks sufficient.

At least Blender offers you the choice: you can use an image texture, which gives you pixel-level detail but greatly increases memory requirements, or you can use a vertex paint layer, which only gives you vertex-level detail, but takes up a lot less room.
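To put rough numbers on that tradeoff (illustrative arithmetic only, assuming uncompressed 8-bit RGBA for both):

```python
def texture_bytes(width, height, channels=4, bytes_per_channel=1):
    """Uncompressed in-memory size of an image texture."""
    return width * height * channels * bytes_per_channel

def vertex_color_bytes(vertex_count, channels=4, bytes_per_channel=1):
    """Size of one vertex-color layer: one value per vertex."""
    return vertex_count * channels * bytes_per_channel

# A single 4K RGBA texture is 64 MiB uncompressed:
print(texture_bytes(4096, 4096))    # 67108864 bytes
# A vertex-color layer on a 100k-vertex mesh is only ~0.4 MB:
print(vertex_color_bytes(100_000))  # 400000 bytes
```

So the vertex paint layer is orders of magnitude smaller, but its detail is capped by the mesh density, which is exactly the tradeoff described above.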

I think we’re really starting to get into Substance Painter’s territory now. Substance Designer is primarily for the creation of textures that aren’t highly dependent on a specific UV map (decals, bricks, concrete, grates, floor tiles, boards, rock, scale patterns, leather, etc.).