What AI helper should Blender get next?

Hey.

I was a 3D guy until about 6 years ago, and Blender was my favorite. Today I'm a programmer interested in AI (or rather, ML) solutions. I saw Andrew's YouTube video about where AI is going in 3D software, and I must say it's really interesting. I've also seen some Autodesk work and various denoisers. But everything I see makes me realize that I really don't know the industry very well anymore.

So I ask you: what AI plugin would Blender benefit from today? Something simple to start with would be great, but I'll take any tips at all. I know Blender and I've done a few ML projects, so perhaps I (and my team) can build something cool that would actually benefit Blender.

Thanks :slight_smile:

2 Likes

A couple of things come to mind:

  • Automating bone placements and setting up animatable rigs.

  • Fixing deformation issues in characters

  • Setting up automated camera animations

  • Setting up believable lighting scenarios

  • Micro detail generation for surfaces

  • Creating smart surface shaders

  • Reflowing polygons based on surface properties, like detail control

4 Likes

UV unwrapping cries out for attention… Some possible approaches: organising the layouts based on physical space, relating triangles to those of neighbouring objects, and the ability to unwrap a selection of faces around others that have already been unwrapped and will not change.

3 Likes

AI could be useful for motion capture stuff.

If you do it by hand, you set your trackers and then have to add a driver/constraint to a bunch of (tracker-controlled) empties. That's a lot of hand tweaking; AI could do a better job here, I guess.
Here is an idea for an AI approach:

  • The rigged model shows basic expressions (the targets)
  • A person (wearing trackers) mimics those expressions with their own body; the tracker-empty positions are used as training input

… not sure it will work like that. But I would be glad to exchange some thoughts.
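To make that more concrete, here is a minimal sketch of the training setup, assuming the tracker-empty positions and the matching shape-key weights have already been recorded per frame. The file names and network size are placeholders, not a working pipeline:

```python
# Hypothetical sketch: learn a mapping from tracker-empty positions
# to shape-key ("target") weights, given recorded training pairs.
import numpy as np
from sklearn.neural_network import MLPRegressor

# X: one row per frame, flattened (x, y, z) positions of all tracker empties.
# y: one row per frame, the shape-key weights the rig showed on that frame.
X = np.load("tracker_positions.npy")   # shape (n_frames, n_empties * 3)
y = np.load("shapekey_weights.npy")    # shape (n_frames, n_shapekeys)

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
model.fit(X, y)

# At capture time, feed live empty positions to get shape-key weights.
live_frame = X[:1]                     # stand-in for a new frame
print(model.predict(live_frame))
```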

1 Like
  • An open-source targeting/classifier that accepts a sculpt as input and creates markers for various body locations (tip of nose, center of eye). This could be used for auto-rigging, proper loop-based retopo, or default shape-key generation (smile, blink).

  • Generative sculpts, similar to GAN-based images, but for creatures. One latent vector might produce a dragon, another a panda. It would require a few hundred thousand hand-made sculpts.

  • I tried making this one (and mostly failed), but it's sort of a post filter, a conv net, that takes a render as input and adds some noise, grunge and fine details, fixes shading anomalies, etc., to make uncanny interiors, or even human faces, look more like photographs.

  • A predictive extrude. A lot of the time when I am modeling pipes or tables, after a few extrusions and scalings the remaining extrude/scale steps are pretty obvious and can be inferred from the existing mesh. So you'd tap P and the selected loop would magically make the next step; you could just keep tapping until it makes a wrong move, then Ctrl+Z.

  • This one is really easy to make, but it's a way to score renders on how close they are to photographs. I know from experience, and from seeing others, that artists can get tricked by their own work, thinking it is much more realistic than it really is. 0.0 = CG, 1.0 = photo. It's just a way to give yourself honest feedback on photorealism (see the sketch after this list).

  • dot dot dot…
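The render-vs-photo scorer from the list above is essentially a binary image classifier, which is why it's one of the easier ones to prototype. A minimal Keras sketch, assuming you've collected training images into two folders; the paths, image size and tiny architecture are placeholders:

```python
# Rough sketch of the 0.0 = CG / 1.0 = photo scorer as a binary classifier.
# Assumes two folders of training images: data/cg/ and data/photo/.
import tensorflow as tf
from tensorflow.keras import layers

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data", label_mode="binary", image_size=(224, 224), batch_size=32)

model = tf.keras.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=(224, 224, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # score between 0 and 1
])

model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
# model.predict(batch_of_renders) then gives each render a 0..1 score.
```

A pretrained backbone (e.g. a frozen ResNet with a new head) would very likely beat this tiny net, but the overall shape of the solution stays the same.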

2 Likes

AWESOME!! :slight_smile: Read my thread:

Hmm, this one actually sounds doable; I can imagine the data I'd need to work with. Do you mean a button that generates lights based on the scene? Like a boilerplate lighting setup to tweak, so to speak?

Ah nice. Dunno why search didn’t give me this thread.

Hehe, interesting idea. Doable, but I doubt its usefulness. Rather, it should convert the image into a more photorealistic version, but that's not so much Blender anymore.


Actually, many of these ideas are doable. But I've come to realize that any meaningful tool would have to be developed in-house with an actual team (how else are you actually solving a problem?), and it would require something like a year of runway (real problems are hard). I have neither; my company is a bit too young.

So thank you for your input, but for now I am not going to pursue this further.

AI could either generate realistic light setups based on a plate (going beyond HDRIs, since an HDRI can only do spherical lighting), or create believable light setups based on prior data; matching the content would be an interesting feature too. One way to think about it is generating light setups without any HDRI at all: the engine could be given a single photo of a product shot, for instance, and a similar setup could be generated from that single source.

Let's say the artist made a sci-fi hallway. The engine pulls from past movie shots and creates similar moods for the scene.

Another thing AI could do is match a 3D facial mesh in Blender to an existing photo of a face. This has already been done elsewhere, but it would be nice to have it as an option in Blender.

1 Like

There is a video on YouTube about AI being able to learn how smoke and fluids interact, which could make high-resolution fluid and smoke simulations much faster, or even real-time. If Blender could get something like this, it would be incredible.

1 Like

Sorry for bumping a 7-month-old thread. Thought I'd update my progress.

Well my ML adventures came up much shorter than anticipated.

I thought I would try @kkar's direction of taking a photo and generating an HDRI that creates the same atmosphere seen in the photo.

To start off, I thought I'd just generate wood textures by giving it examples first (to learn as I go). Well, my 760 sample images are managing to create quite a useless result.

Input sample


Output: “amazing”, never-before-seen wood textures

It’s going to take a long time before I get anywhere with ML. Maybe I need simpler challenges :smiley:

How does it work?

If anyone's interested, though, here's how a GAN works. I train a neural network on the sample images so the machine has a way of grading how well the generator is doing; this is called the Discriminator. Then we generate random noise and try to change pixels so that the Discriminator gives the result a higher score. This part is also a neural network, because it has to learn what the Discriminator is happy with. We call this second part the Generator.
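In code, one round of that loop looks roughly like the following. This is a schematic PyTorch sketch with tiny stand-in networks and random data in place of my wood images, not the actual script I ran:

```python
# Schematic GAN training loop: Discriminator grades, Generator tries to please.
import torch
import torch.nn as nn

# Tiny stand-in networks for 64x64 RGB textures (illustrative sizes only).
D = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 3, 256),
                  nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid())
G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(),
                  nn.Linear(256, 64 * 64 * 3), nn.Tanh())

loss_fn = nn.BCELoss()
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)

real = torch.rand(32, 3, 64, 64)    # stand-in for a batch of wood samples

for _ in range(500):                # 500 "rounds"
    noise = torch.randn(real.size(0), 100)
    fake = G(noise).view(-1, 3, 64, 64)

    # 1) Train the Discriminator to grade: real -> 1, generated -> 0.
    opt_d.zero_grad()
    loss_d = (loss_fn(D(real), torch.ones(real.size(0), 1)) +
              loss_fn(D(fake.detach()), torch.zeros(real.size(0), 1)))
    loss_d.backward()
    opt_d.step()

    # 2) Train the Generator to fool the Discriminator: fake -> 1.
    opt_g.zero_grad()
    loss_g = loss_fn(D(fake), torch.ones(real.size(0), 1))
    loss_g.backward()
    opt_g.step()
```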

I ran this batch for 500 rounds. Here are a few images to show how it progressed:

Starting with random noise.


100 rounds of learning

200 rounds

300 rounds

And the result you can see at the top.

I think I'll go ahead and run this for 5000 rounds now, because it does seem like it's getting somewhere… Maybe I'm just not giving it enough time.

3 Likes

It is great to see other people interested in machine learning for Blender!

If I understand correctly, this is pretty much your first ML project. Even though it is not impossible, starting right away with GANs wouldn't be my first recommendation, to be honest. I am sure you have figured out that they can be quite nasty and unstable, and take very long to train.
I would suggest starting with a supervised project at first. It is a lot simpler, and yet there are tons of things to learn and fiddle around with.

Let me know if you need some project ideas. I have a few I would love to try out, but I am also happy to share them.

1 Like

Oh my god, 5000 epochs later:

It's as if it's teasing me to just keep training forever :smiley: but it's really cool that it actually looks promising.

Mm, well, I've taken multiple courses by now and have done supervised learning, but only a little. The last course I took was the freeCodeCamp one, and the final lesson showed the basics of GANs. Which I'm grateful for, but I've proven to myself with this project that it's very hard, and so slow that nothing practical will come of it. Was fun; moving on.

There are a bunch of ideas up in this thread and more in my head, but if you know of something more suitable for a beginner, I'd love to hear it. I'm actually a dev with 6 years under my belt, along with a small dev company. So I'm only a noob in ML :slight_smile:

Most of all, I'm trying to pivot our agency-type dev company into a product company, so anything the Blender community would actually use scores double points in my eyes. Plus, I love Blender and would love to make useful stuff even if it's free (before these 6 years as a dev I was a 3D animator).

Blender is lacking in the crowd-simulation department, and AI plays a big role in that area (setting up agents' behaviours and logic).

“Massive”, for example, was used in The Lord of the Rings to create those large-scale epic battles.
http://www.massivesoftware.com/massiveprime.html

It has a plugin for Maya, similar to other crowd sims like Golaem, Miarmy, Atoms, etc.

Let’s not forget Houdini.

Then we have Blen…ah nvm…

Here are way too many ideas :slight_smile: . Several are directly based on existing papers/projects; the others are my own ideas, where I have a very clear idea of how I would try to tackle them. Feel free to ask questions if you need more details.

There are indeed quite a few ideas in this thread already. For most of them, I wouldn't know how to tackle them, for technical reasons. Coming up with practically useful ideas that can be translated into machine learning is quite complicated, in my opinion. Practically useful ideas won't be trivial, of course. However, if you pick a supervised problem, it will in most cases be orders of magnitude easier than a GAN.

Photorealistic Style Transfer
Pretty much this paper:



This could be very useful for prototyping, to quickly try different styles out. Or maybe you need to convert an image to look neutral, like a texture you can actually use.
(They use NAS to find an architecture, which isn't feasible for 99% of the population, but if I remember correctly, you can find the resulting architecture in their GitHub repo.)

Image to PBR Material
Take a photo of a material and convert it into a PBR Material with all the necessary textures.
The great thing about this one is that you can create your own dataset in Blender using the many freely available PBR materials from various sites. Add a PBR material to a plane, maybe use an HDR for lighting, point the camera at the plane and render it. This gives you the input image for the training data, while the PBR textures are the output.
There are several publications about this sort of work, but I would need to look them up if you are interested. Due to the many different lighting conditions, it might turn out to be necessary to use neutral lighting for this to work (or maybe use some photorealistic style transfer to get a standardized input).
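To sketch what that dataset generation could look like in practice (all paths, the folder layout and the texture file names here are assumptions; a real script would also randomize the lighting and hook up the remaining PBR maps):

```python
# Rough bpy sketch: render one input image per PBR material folder.
# Run inside Blender, in a scene that already has a camera aimed at the origin.
import os
import bpy

materials_dir = "/path/to/pbr_materials"   # one subfolder per material
output_dir = "/path/to/dataset"

bpy.ops.mesh.primitive_plane_add(size=2)
plane = bpy.context.active_object

for name in os.listdir(materials_dir):
    mat = bpy.data.materials.new(name)
    mat.use_nodes = True
    bsdf = mat.node_tree.nodes["Principled BSDF"]

    # Hook up the albedo map; roughness/normal maps would be wired similarly.
    tex = mat.node_tree.nodes.new("ShaderNodeTexImage")
    tex.image = bpy.data.images.load(
        os.path.join(materials_dir, name, "albedo.png"))
    mat.node_tree.links.new(tex.outputs["Color"], bsdf.inputs["Base Color"])

    plane.data.materials.clear()
    plane.data.materials.append(mat)

    bpy.context.scene.render.filepath = os.path.join(output_dir, name + ".png")
    bpy.ops.render.render(write_still=True)
```

The PBR textures in each folder are then the training targets for the rendered input image.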

Tracking
These might be useful for automatic tracking: finding tracking points and guiding them over several frames (including removing them). It might also be a building block for a facial tracking solution like https://google.github.io/mediapipe/solutions/face_mesh.html, or for motion tracking in general (which doesn't work that well yet).




Face Animation
Since you are into animation, there are also plenty of possibilities in that area. I already posted the Face Mesh link. There are also publications about transferring facial poses between characters or even from a camera onto a character. I would need to search for them if you are interested.

Hand Tracking
Just like for the face, tracking hands is possible, even if this one doesn’t appear that spectacular.

Human Motion Tracking
Just here for completeness. In my opinion, this isn't advanced enough yet to use in practice for arbitrary situations.

Pose Assistant
You have a predefined rig; you place e.g. the hands and feet, and let a trained neural network produce a plausible pose from that. If you don't like it, you can tweak it by adjusting the positions/rotations again, or by additionally guiding the knees/elbows/hips, and then let it complete the pose again.
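As a supervised problem, this could be framed roughly like the sketch below: the guided bone positions are the input, the full pose is the output. Everything here (bone counts, pose encoding, the random stand-in data) is illustrative; real training data could be sampled from existing animation files:

```python
# Illustrative framing of the Pose Assistant as supervised learning.
import torch
import torch.nn as nn

N_GUIDED = 4     # e.g. two hands and two feet, (x, y, z) each
N_BONES = 50     # full rig, one quaternion + location per bone

model = nn.Sequential(
    nn.Linear(N_GUIDED * 3, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, N_BONES * 7),   # predicted full pose
)

# Stand-in data; in practice, sample full poses from existing animations
# and take each pose's guided-bone positions as the input.
guided = torch.randn(1024, N_GUIDED * 3)
full_pose = torch.randn(1024, N_BONES * 7)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(100):
    opt.zero_grad()
    loss = loss_fn(model(guided), full_pose)
    loss.backward()
    opt.step()
```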

Pose Interpolation
Use the Pose Assistant to create the key poses, and let the interpolator find the poses in between. If you don't like certain keyframes, adjust them and let it recompute the whole thing.

There are other possibilities, like transferring poses between characters with different sizes/proportions.

1 Like

This is amazing. Ideas with papers? Nice. I’ll reply soon. Additional question:

I'm even more into games. But people don't make games in Blender, do they? I mean, with Unity and Unreal being free and all. Is there much point in trying something there?

I haven’t thought much about uses in game engines. There are some projects I am aware of for animation, like: https://www.youtube.com/watch?v=wNqpSk4FhSw
Personally, I am not looking at games too much, because of the very tight performance requirements. If I create something for Blender, it is great to have good performance. But in game engines, it doesn’t matter how well it works if it isn’t fast enough. There are also plenty of technical difficulties you would need to deal with to get a machine learning project running in a game engine on the various platforms. Unity sort of has a system, but it is quite limiting as far as I have seen.

Sure they do.
BI abandoned the Blender Game Engine (BGE) in Blender 2.8, so the last fully integrated BGE is in Blender 2.79 and below.

Here is a guy with an actual game (Blender 2.79):

There is also a tight Blender integration:

These guys are trying to keep BGE alive post Blender 2.8x:
https://upbge.org/

There are also people working on their own engines:

1 Like

I was thinking more of actual games that are sold, made by studios that would potentially buy plugins. Although Krum does indeed look badass.

This one is promising. Actually useful (as far as I can tell) and I could probably manage it.

There was a guy who generated bump maps with NNs, and I wasn't impressed by the results (I probably couldn't get a better result myself). Also, there are a lot of texture packs out there. Am I wrong to assume that the lack of textures isn't really a big problem?

Of all the ideas, I think this one is the most “professional studios need it” type. Technically it seems way over my head, but you provided good materials, so I'll look into this.

Hmm, yes, camera-to-3D facial expressions is definitely something 3D software needs. It also connects with the tracking idea. Am I correct to assume such a thing does not yet exist?

I don't quite understand. So you have a rigged character and the AI just strikes a pose? I can understand “grab that object”, but what do you mean here?


So out of all of these, tracking and style transfer seem doable and useful. I think I'll start with style transfer and get acquainted with Blender's dev API. I still have to find time for this besides my other obligations (assuming a new project we have on the table doesn't suddenly start).

1 Like

You probably know about Two Minute Papers, but just in case, since it wasn't mentioned yet: a great YouTube channel on AI/ML development news, with clear, short explanations of the topic in question (plus quite diverse examples with papers, and code in some cases).

And here's my favourite: https://www.prometheanai.com/ // an AI assistant :slight_smile:

There are a few papers with very promising results, though they usually require more than one photo. You are certainly right that there are a lot of texture packs out there. If I were to tackle this, it would be a milestone towards more texturing tools, e.g. making a PBR material seamless or mixing several materials.

This already exists to a certain degree; one example is called FaceRig. There are some papers that focus on this topic as well. I can't find them right now, but let me know if you are interested and I'll search more thoroughly.

You have a rigged character, you position some bones where you want them, and the model suggests a reasonable pose given those bone positions. Besides the bone positions, you would likely need other information, like the movement speed.