What AI helper should Blender get next?

Have you tried being a bit kinder to the algorithm?

E.g., try orienting the input images so that the wood grain runs in the same direction.

It might give the algorithm a better chance at spotting significant features and creating better variations.

I think some of the blocky artifacts you got in your initial trial are the result of it trying to reconcile textures whose wood grain runs perpendicular.

But, lots of progress is being made:

AI upscaling would be great so I don't have to keep going to third-party software. It would be amazing to have an AI trained on lots of AI-denoised Blender renders, as that is the type of image that will most often be fed to a Blender AI upscaler. An AI upscale node for the compositor would be a game changer.

After that, my dream AI helper, if possible, would be AI retopology, as I (and probably many others) find it a tedious, monotonous task that should be automated.

Indeed, an AI retopo tool producing very high quality meshes has to be one of the most anticipated AI helpers at the moment.
The same goes for AI unwrapping.

What software do you use at the moment?

The only software I use is Blender, plus Topaz Gigapixel for upscaling and Topaz Studio for further compositing.

I would like to see AI-based text-to-speech that can also be directed a bit for voice acting :) with lots of parameters for tuning the voice, simulated emotions, …

It is probably still a bit unrealistic, since even companies like Google haven't mastered text-to-speech yet, but I think in about 10 years we won't need voice actors anymore.

I want an AI console that lets you tell Blender what to do in plain English (or whatever language you speak).

There’s an input line to write commands. As you write commands, a string of blue tools will pop up below the input line. Each tool has an options menu to tweak in case the computer didn’t get it right.

After pressing enter to execute a command, it grays out, moves up a line, and isn’t editable anymore.
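As a thought experiment, the console's first stage could be a table of phrases mapped to tools. The sketch below is purely illustrative plain Python: the phrase patterns and default options are invented, though the tool names follow the format of real Blender operator IDs.

```python
# Hypothetical sketch of the plain-English console's parsing stage
# (not part of Blender). Each recognized phrase maps to a "tool" entry
# with default options the user could still tweak, as described above.

import re

# Invented command table for illustration: regex pattern -> (tool, options).
COMMANDS = [
    (r"add (a )?cube", ("mesh.primitive_cube_add", {"size": 2.0})),
    (r"subdivide( the mesh)?", ("mesh.subdivide", {"number_cuts": 1})),
    (r"smooth( shading)?", ("object.shade_smooth", {})),
]

def parse_command(text):
    """Return the (tool, options) pairs matched in a free-text command."""
    tools = []
    for pattern, tool in COMMANDS:
        if re.search(pattern, text.lower()):
            tools.append(tool)
    return tools

# The console would display these matches as editable "blue tools".
tools = parse_command("Add a cube and subdivide it")
```

A real version would of course need fuzzier matching than regexes, but the two-stage shape (parse to tools, then let the user correct the options) is the part this sketch tries to capture.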

Any AI-involving software I have dealt with so far, like Artomatix, Photoshop's AI selection and cataloging, Simplygon, etc., was somewhat useful, but still far from making any essential chunk of my work easy. In many cases it just produces a useless mess or something weird. More toys than tools.

I actually feel a bit disappointed with all these AI things.

What really amazes me is why the usual non-AI things in art-related software are so monstrously, so tremendously inconvenient after decades of software development and a gazillion releases, starting right from the Windows file browser and open/save dialogs.

I believe 3D content is so expensive partly because of that too. You have to waste half your life digging through small hacks and workarounds, unexpected limitations that are never clearly explained, layers of redundant UI inventions that help you with nothing, and a gazillion checkboxes hidden so deep you have to maintain a dedicated Evernote base to keep track of them.

The recent Blender UI change is, IMO, one of those things. It makes Blender feel like 3ds Max.
I wasted two hours yesterday trying to figure out why part of my scene suddenly disappeared, thanks to the new “cool” collection system.

I personally think software folks just don't get the idea that a well-designed hammer in your hand is convenient BECAUSE it's simple to use. Same with a brush and palette, or a paint tube.
Such a nice small thing as the paint tube took centuries, with artists keeping their paints in animal bladders before someone invented it. Maybe art software needs centuries too.

In my opinion, the current challenge is how to integrate AI solutions into existing workflows or maybe to change existing workflows with AI solutions. Many things I have seen feel like 95% finished, but without those last 5%, they aren’t really worth it in many production scenarios.
Many prominently displayed projects are interesting research projects which might be used as toys, but are very far away from any production usage in the current state. I am consciously not talking about those.
In my opinion, one of the goals of this thread is to find cases where AI is actually beneficial for the users. Finding those is very difficult and even if they are found, giving the users the necessary controllability in a predictable way is a huge challenge.

Working with current AI tools is like working with Dustin Hoffman's character from the movie “Rain Man”: extremely capable, but you never get what you wanted and you can't explain it to the AI. Or like calling your ISP's support line: you get mad quickly, until you start talking to a real person.

Every so-called pro software package is generally a Gordian knot of features wrapped in other features, with a checkbox here you must remember to tick to make that button work there, and so on and on, with zero explanation of what you did wrong or why your whole scene fell apart or flew away. Usability is never really a priority. After all, we love ZBrush for its unique features even though its UI is an alien nightmare.

Still, any solution that makes user interaction less of a puzzle-solving exercise would be a huge plus. I am not even sure we need AI for that. In fact, the old Blender UI was somehow less puzzling, with everything you need on the left and right panels, right in front of your eyes, not hidden beneath obscure drop-downs.

I am OK with the new UI by now, after a year :), but there is still so much that could be simpler, driven by scenario choices, for example:

You want to bake something? A suggestion to drop your sampling if you're not ready to wait forever.
You want to use hair to scatter things around? A ready-made solution that just works, with no puzzling over why your objects are weirdly rotated.
You want to array something? A suggestion to make a common parent for the objects and lock transforms.
The whole physics part of Blender badly needs such suggestions/examples.

Again, I am not even sure AI is necessary for such a nudge/suggestion machine. Half of Blender's add-ons are just solutions to work around its crazy puzzles. But surely AI could make such a suggestion system more advanced.
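A rule-based version of this nudge machine is easy to picture. The sketch below is a minimal, hypothetical example in plain Python: the scene-state keys (`baking`, `samples`, `cache_baked`) are invented stand-ins for what would really be read from `bpy.context`.

```python
# Illustrative non-AI "suggestion machine": a list of rules that inspect
# a (made-up) scene-state dict and emit human-readable nudges.

RULES = [
    (lambda s: s.get("baking") and s.get("samples", 0) > 512,
     "Baking with high sample counts can take very long; "
     "consider lowering your samples."),
    (lambda s: s.get("physics") and not s.get("cache_baked"),
     "Physics caches are not baked; results may differ between playbacks."),
]

def suggestions(scene_state):
    """Return the messages of every rule whose check matches the state."""
    return [msg for check, msg in RULES if check(scene_state)]

nudges = suggestions({"baking": True, "samples": 4096})
```

The point of the sketch is that the rules stay transparent and user-extensible; an ML layer could later rank or generate them, but is not required to get the basic value.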


That’s a really cool idea. I’m commonly frustrated with the search menu since I don’t know the correct word. I mean, this is not AI, but on the other hand, a lot of the time I don’t even know what I’m looking for, yet I can explain it in English.

For this to work though I’d need a database of code examples with descriptions. Is there such a thing?

Toy vs. tool

Totally agree that most things are toys. Getting into machine learning, one of the major surprises for me was that you reach the “got to read research papers” stage pretty fast. AI is very young. So if we want to get anywhere, we have to start now. Also, this helps Blender stay at the forefront of 3D software.

On the other hand, the simplest of tasks take weeks to complete. This is why I’m not super eager to jump into most ideas posted here, since I’m too much of a noob and there is no clear benefit. I am experimenting, though.

Thanks! The database of code examples with descriptions could be submitted and moderated by users, kind of like Wikipedia. For example, someone could submit a script that models the shape of a fruit and give it descriptive tags, so it models that fruit if someone types “Model a fruit” into the AI console. Technically not AI, but it is something you could have AI do instead of humans once AI is a bit more mature. The database would probably be quite large and change frequently, so you might want it to be an add-on that uses the Internet.
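The lookup side of such a database could start very simply, for example by ranking snippets by tag overlap with the typed request. Everything in this sketch (entries, tags, scoring) is invented for illustration:

```python
# Toy tag-based snippet lookup for the community database idea.
# Entries would be user-submitted and moderated; these two are placeholders.

SNIPPETS = [
    {"tags": {"model", "fruit", "apple"},
     "code": "# ...script that models an apple..."},
    {"tags": {"model", "chair"},
     "code": "# ...script that models a chair..."},
]

def lookup(query):
    """Return snippets ranked by how many query words match their tags."""
    words = set(query.lower().split())
    scored = [(len(words & s["tags"]), s) for s in SNIPPETS]
    scored = [(n, s) for n, s in scored if n > 0]
    scored.sort(key=lambda pair: -pair[0])
    return [s for _, s in scored]

ranked = lookup("Model a fruit")
```

Real requests would need synonym handling and better ranking, but even this bare version shows why good descriptive tags matter more than the matching algorithm at first.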

Fun fact: I just learned today about Cascadeur, an animation software that is AI-driven as well as physics-driven.

For example, you can position the character in some very basic pose, and the AI will figure out how to correct the head tilt, avoid penetrating surfaces, or make the feet touch the ground properly.

These are really life-saving techniques. Really important, but simple to implement as well. More sophisticated systems, such as autonomous agents, are more complex and experimental until a good paradigm is figured out. But getting the simple stuff right first is the way to go.
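To illustrate how simple the simplest of these corrections can be, here is a toy version of the ground-contact fix (not Cascadeur's actual method): the character is just a list of (x, y, z) joint positions rather than real rig data, and the whole pose is lifted so nothing sits below the floor.

```python
# Toy ground-penetration correction: translate the entire pose upward
# just enough that no joint ends up below the ground plane.

def fix_ground_contact(joints, ground_z=0.0):
    """Return the joints lifted so the lowest one rests on the ground."""
    lowest = min(z for _, _, z in joints)
    lift = max(0.0, ground_z - lowest)
    return [(x, y, z + lift) for x, y, z in joints]

# Head, hip, and one foot that pokes 5 cm below the floor.
pose = [(0, 0, 1.6), (0, 0, 0.9), (0.1, 0, -0.05)]
fixed = fix_ground_contact(pose)
```

A production tool would solve this per limb through IK rather than rigidly translating the body, but the rigid version already conveys why these corrections feel "free" to the animator.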

It must be noted that AI is still pretty far from being a magic bullet as of 2020; any AI will need to be guided either by the user or by scene data generated by Blender.

The reason is that machine learning algorithms break down pretty quickly (as far as results go) once you start deviating from what they were trained on. They are very naive, and it is still quite difficult for computers to turn what they see into usable context.

In many cases, an AI not being fully automatic can be an advantage. When an animator is trying a pose for a character, having something interactive that assists the animator (meaning the AI needs to be guided) is an advantage, because the animator remains able to fine-tune the results.
This might accelerate animators' workflow: a small change in the position of the arm may require adjusting many bones, and an AI that completes the pose may help the animator try poses more quickly and focus more on the artistic side rather than the technicalities of animating.
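One minimal way to picture guided pose completion is nearest-neighbour lookup: keep the bones the animator has already posed, and fill the rest from the most similar stored pose. The tiny pose library and bone angles below are invented; a real tool would learn from animation data rather than use a hand-written table.

```python
# Toy pose completion: the animator fixes some bone angles, and the
# closest pose in a (made-up) library supplies the remaining ones.

POSE_LIBRARY = {
    "wave":  {"arm": 1.2, "forearm": 0.8, "head": 0.1},
    "reach": {"arm": 0.4, "forearm": 0.1, "head": 0.0},
}

def complete_pose(partial):
    """Fill missing bone angles from the library pose closest to `partial`."""
    def distance(pose):
        # Compare only the bones the animator has already set.
        return sum((pose[b] - v) ** 2 for b, v in partial.items())
    best = min(POSE_LIBRARY.values(), key=distance)
    return {**best, **partial}  # the animator's own edits always win

done = complete_pose({"arm": 1.1})
```

The important property, matching the point above, is that the user's inputs are never overwritten: the tool only proposes values for what was left unspecified, so fine-tuning stays in the animator's hands.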

You are certainly right. Though, in my experience, when you are trying to solve a relatively narrow problem, it often works surprisingly well. Especially if the goal isn’t a fully automatic magic thing which is able to read your mind ;)

ML already works quite well for Cycles denoising (in most cases), but mainly because OIDN uses the scene's albedo and normal data as references, and because you can opt to denoise only the indirect lighting. I mention OIDN because OptiX is not yet great in numerous situations.
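A toy example shows why such guide buffers help. The 1-D filter below (not OIDN's actual learned method) averages noisy radiance only across pixels whose noise-free albedo guide looks similar, so the material edge survives the smoothing:

```python
# Toy cross-bilateral filter: smooth a noisy signal, weighting each
# neighbour by similarity in a clean auxiliary "albedo" buffer, so
# the filter does not blur across material edges.

import math

def guided_smooth(noisy, guide, radius=2, sigma=0.1):
    out = []
    for i in range(len(noisy)):
        wsum = vsum = 0.0
        for j in range(max(0, i - radius), min(len(noisy), i + radius + 1)):
            # Neighbours with a very different guide value get ~zero weight.
            w = math.exp(-((guide[i] - guide[j]) ** 2) / (2 * sigma ** 2))
            wsum += w
            vsum += w * noisy[j]
        out.append(vsum / wsum)
    return out

albedo = [0.2] * 5 + [0.9] * 5  # a sharp material edge at the midpoint
noisy = [0.2 + 0.05 * ((-1) ** i) for i in range(5)] + \
        [0.9 + 0.05 * ((-1) ** i) for i in range(5)]
smooth = guided_smooth(noisy, albedo)
```

Within each material the alternating noise is averaged away, while the jump at the edge stays sharp because the guide forbids mixing across it; that, in miniature, is the advantage the scene's feature buffers give a denoiser over filtering the beauty pass alone.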

Though, as you said, that is a far narrower scope for such an algorithm compared to “model a fruit”. The better, higher-quality route might be for Blender to contain many different ML assist tools (drawing from a central API) rather than “one algorithm to rule them all”.
