It’s as if it’s teasing me to just keep training forever, but it’s really cool that it actually looks promising.
mm, well I’ve taken multiple courses by now and have done supervised learning, but only a little. The last course I took was the freeCodeCamp one, and the last lesson showed the basics of GANs. Which I’m grateful for, but I’ve proven to myself with this project that it is very hard and so slow that nothing practical will come of it. Was fun, moving on.
There’s a bunch of ideas up in this thread and more in my head, but if you know of something more suitable for a beginner I’d love to hear it. I am actually a dev with 6 years under my belt, along with a small dev company. So I’m only a noob in ML.
Most of all I’m trying to pivot our agency-type dev company into a product company. So anything that the Blender community would actually use has double points in my eyes. Plus, I love Blender and would love to make useful stuff even if it’s free (prior to this 6 years I was a 3D animator).
Here are way too many ideas. Several are directly based on existing papers/projects, others are my own ideas where I have a very clear idea of how I would try to tackle them. Feel free to ask questions if you need more details.
There are indeed quite a few ideas. For most of them, I wouldn’t know how to tackle them for technical reasons. Coming up with practically useful ideas that can be translated into machine learning is quite complicated in my opinion. Practically useful ideas won’t be trivial, of course. However, if you pick a supervised problem, it will in most cases be orders of magnitude easier than a GAN.
Photorealistic Style Transfer
Pretty much this paper:
This could be very useful for prototyping, to quickly try different styles out. Or maybe you could use it the other way around, to convert an image to look neutral, like a texture you can reuse.
(They use NAS to find an architecture, which isn’t feasible for 99% of the population, but if I remember correctly, you can find the resulting architecture in their GitHub repo.)
Image to PBR Material
Take a photo of a material and convert it into a PBR Material with all the necessary textures.
The great thing about this is that you can create your own dataset in Blender using the many freely available PBR materials from various sites. Add a PBR material to a plane, maybe use an HDRI for lighting, point the camera at the plane and render it. The render gives you the input image for the training data, while the PBR textures are the output.
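That render loop is easy to automate with Blender’s Python API. A minimal sketch, assuming a prepared .blend file with a plane and the materials already imported (the directory paths and object name are placeholders, and this runs inside Blender, not as a standalone script):

```python
import os
import bpy

# Placeholder paths -- adjust to your own layout. Each material folder is
# assumed to contain the PBR maps (basecolor/normal/roughness/...) that
# serve as the training targets.
MATERIALS_DIR = "/path/to/pbr_materials"
OUTPUT_DIR = "/path/to/dataset"

plane = bpy.data.objects["Plane"]  # a plane prepared in the .blend file

for name in sorted(os.listdir(MATERIALS_DIR)):
    mat = bpy.data.materials.get(name)
    if mat is None:
        continue  # materials are assumed to be pre-built in the .blend file
    # Assign the PBR material to the plane and render the input image.
    plane.data.materials.clear()
    plane.data.materials.append(mat)
    bpy.context.scene.render.filepath = os.path.join(OUTPUT_DIR, name, "input.png")
    bpy.ops.render.render(write_still=True)
    # The PBR maps in MATERIALS_DIR/name/ are the training targets;
    # copy or link them next to input.png when assembling the dataset.
```

Varying the HDRI and camera angle per render would give you lighting/viewpoint augmentation for free.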
There are several publications about this sort of work, but I would need to look for them if you are interested. Due to the many different lighting conditions, it might turn out to be necessary to use neutral lighting for this to work (or maybe use some photorealistic style transfer to get some sort of a standard input).
Those might be useful for automatic tracking to find tracking points and guide them over several frames (including removing them). It might also be a building block to get a facial tracking solution like https://google.github.io/mediapipe/solutions/face_mesh.html or motion tracking in general (which doesn’t work that well yet).
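For intuition about what sits underneath such trackers, here is a toy single-point Lucas–Kanade step in NumPy: given two frames and a point, it solves the least-squares optical-flow equations in a small window. A real solution would use a learned detector/tracker or at least a pyramidal, iterative version of this; this is just the classical building block.

```python
import numpy as np

def lucas_kanade(frame0, frame1, pt, win=7):
    """Estimate the (dx, dy) motion of a point between two grayscale
    frames by solving Ix*dx + Iy*dy = -It in a window around the point."""
    x, y = pt
    h = win // 2
    # Spatial gradients of the first frame (np.gradient returns
    # the axis-0 (y) gradient first, then axis-1 (x)).
    Iy, Ix = np.gradient(frame0)
    It = frame1 - frame0  # temporal derivative
    sl = (slice(y - h, y + h + 1), slice(x - h, x + h + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow  # (dx, dy)
```

Shifting a smooth test pattern one pixel to the right and running this on a point near its edge recovers a flow close to (1, 0).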
Since you are into animation, there are also plenty of possibilities in that area. I already posted the Face Mesh link. There are also publications about transferring facial poses between characters or even from a camera onto a character. I would need to search for them if you are interested.
Just like for the face, tracking hands is possible, even if this one doesn’t appear that spectacular.
Human Motion Tracking
Just here for completeness. In my opinion, this isn’t advanced enough yet to use in practice for arbitrary situations.
You have a predefined rig, place e.g. the hands and feet and let the trained neural network give a plausible pose for this. If you don’t like it, you may tweak it by adjusting the positions/rotations again or by additionally guiding the knees/elbows/hip and let it complete the pose again.
Use the Pose Assistant to create the key poses and let the interpolator find the poses in between. If you don’t like certain keyframes, adjust them and let it recompute the whole thing.
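For context, the non-learned baseline such an in-betweener competes with is plain linear interpolation between key poses. A trained model would take the same inputs and return more natural arcs and timing; the pose layout here is a made-up example:

```python
import numpy as np

def lerp_pose(key_a, key_b, t):
    """Naive in-between: linearly interpolate two key poses.
    key_a/key_b: flat arrays of per-bone values (e.g. rotations),
    t in [0, 1]. A learned in-betweener would replace this function,
    keeping the same signature but producing natural motion."""
    return (1.0 - t) * key_a + t * key_b
```

The appeal of the learned version is exactly what linear interpolation lacks: overshoot, follow-through, and plausible paths for limbs.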
There are other possibilities, like transferring poses between characters with different sizes/proportions.
I haven’t thought much about uses in game engines. There are some projects I am aware of for animation, like: https://www.youtube.com/watch?v=wNqpSk4FhSw
Personally, I am not looking at games too much, because of the very tight performance requirements. If I create something for Blender, it is great to have good performance. But in game engines, it doesn’t matter how well it works if it isn’t fast enough. There are also plenty of technical difficulties you would need to deal with to get a machine learning project running in a game engine on the various platforms. Unity sort of has a system, but it is quite limiting as far as I have seen.
I was thinking more of actual games that are sold, with studios behind them that would potentially buy plugins. Although Krum does indeed look badass.
This one is promising. Actually useful (as far as I can tell) and I could probably manage it.
There was a guy who generated bump maps with NNs and I wasn’t impressed by the results (I probably can’t get a better result). Also, there are a lot of texture packs out there. Am I wrong to assume that a lack of textures isn’t really a big problem?
Of all the ideas, I think this one is the most “professional studio needs it” type. Technically it seems way over my head, yet you provided good materials, so I’ll look into this.
Hmm, yes, camera-to-3D face expressions is definitely something 3D software needs. It also connects with the tracking idea. Am I correct to assume such a thing does not yet exist?
I don’t quite understand. So you have a rigged character and AI just strikes a pose? I can understand “grab that object” but what do you mean here?
So out of all of these, tracking and StyleGAN seem doable and useful. I think I’ll start with StyleGAN and get acquainted with Blender’s dev API. Still have to find time for this besides my other obligations (assuming a new project we have on the table doesn’t just start suddenly).
You probably know about Two Minute Papers, but just in case, since it wasn’t mentioned yet: a great AI/ML development news YT channel with clear, short explanations of the topic in question (+ quite diverse examples with papers, and also code in some cases).
There are a few papers with very promising results, though they usually require more than one photo. You are certainly right that there are a lot of texture packs out there. If I were to tackle this, it would be a milestone towards more texturing tools, e.g. making a PBR material seamless or mixing several materials.
This already exists to a certain degree. One of them is called FaceRig. And there are some papers which focus on this topic as well. I can’t find them right now. Let me know if you are interested, and I could search more thoroughly.
You have a rigged character, you position some bones where you want them, and you let the model suggest a reasonable pose for those bone positions. Besides the bone positions, you would likely need other information, like the movement speed.
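At the shape level, such a model is just a function from effector targets (plus extras like speed) to a full pose. A toy, untrained sketch to show the data flow; all sizes, the architecture, and the random weights are made up, and a real version would be trained on mocap data:

```python
import numpy as np

N_EFFECTORS = 4              # e.g. two hands, two feet (assumed)
N_BONES = 20                 # bones in the rig (assumed)
IN_DIM = N_EFFECTORS * 3 + 1 # xyz targets + movement speed
OUT_DIM = N_BONES * 3        # e.g. per-bone Euler rotations
HIDDEN = 128

# Untrained random weights, just to make the sketch runnable.
rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.1, (IN_DIM, HIDDEN))
W2 = rng.normal(0, 0.1, (HIDDEN, OUT_DIM))

def suggest_pose(effector_targets, speed):
    """Map user-placed effector positions (+ speed) to a full pose.
    effector_targets: (N_EFFECTORS, 3) array of world positions."""
    x = np.concatenate([effector_targets.ravel(), [speed]])
    h = np.tanh(x @ W1)                 # tiny one-hidden-layer MLP
    return (h @ W2).reshape(N_BONES, 3)
```

The “adjust and recompute” workflow from above then just means editing some of the inputs and calling the model again.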
AI upscaling would be great so I don’t have to keep going to third-party software. It would be amazing to have an AI trained on lots of AI-denoised Blender renders, as that is the type of image that will most often be fed to a Blender AI upscaler. An AI upscale node for the compositor would be a game changer.
I want an AI console that lets you tell Blender what to do in plain English (or whatever language you speak).
There’s an input line to write commands. As you write commands, a string of blue tools will pop up below the input line. Each tool has an options menu to tweak in case the computer didn’t get it right.
After pressing enter to execute a command, it grays out, moves up a line, and isn’t editable anymore.
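A keyword-matching toy version of that dispatch could look like the following. The command table and the matching logic are invented for illustration; a real console would use a trained language model and return editable tool instances (for the options menus) rather than bare operator names:

```python
# Hypothetical plain-English -> Blender operator lookup.
# The keys are keywords that must all appear in the command.
COMMANDS = {
    ("add", "cube"): "bpy.ops.mesh.primitive_cube_add",
    ("delete", "selected"): "bpy.ops.object.delete",
    ("subdivide",): "bpy.ops.mesh.subdivide",
}

def interpret(text):
    """Return the operator name matching the most keywords, or None."""
    words = set(text.lower().split())
    best = None
    for keys, op in COMMANDS.items():
        if all(k in words for k in keys):
            if best is None or len(keys) > len(best[0]):
                best = (keys, op)
    return best[1] if best else None
```

The blue tools popping up under the input line would then simply be the matched operators, each still carrying its normal options panel for manual correction.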
Any AI-involving software I’ve dealt with so far, like Artomatix, Photoshop’s AI selection or cataloging, Simplygon, etc., was somewhat useful but still far from making any essential chunk of my work easy. In many cases it just produces a useless mess or something weird. More toys than tools.
Actually, I feel a bit disappointed with all these AI things.
What really amazes me is why the usual non-AI things in art-related software are so monstrously, so tremendously inconvenient after decades of software development and a gazillion releases, starting right from the Windows file browser and open/save dialogs.
I believe 3D content is so expensive because of that too. You have to waste half of your life digging through small hacks and workarounds, unexpected limitations that are never clearly explained, layers of redundant UI inventions that help you with nothing, and a gazillion checkboxes hidden somewhere so deep you have to maintain a dedicated Evernote notebook to keep track of them.
The recent Blender UI change is IMO one of those things. It makes it feel like 3ds Max.
I wasted two hours yesterday trying to figure out why part of my scene suddenly disappeared, thanks to the new “cool” collection system.
I personally think software folks just don’t get the idea that a well-designed hammer in your hand is convenient BECAUSE it’s simple to use. Same with a brush and palette. Or a paint tube.
Even such a nice small thing as the paint tube took centuries of artists keeping their paints in animal bladders before someone invented it. Maybe art software needs centuries too.
In my opinion, the current challenge is how to integrate AI solutions into existing workflows or maybe to change existing workflows with AI solutions. Many things I have seen feel like 95% finished, but without those last 5%, they aren’t really worth it in many production scenarios.
Many prominently displayed projects are interesting research projects which might be used as toys, but are very far away from any production usage in the current state. I am consciously not talking about those.
In my opinion, one of the goals of this thread is to find cases where AI is actually beneficial for the users. Finding those is very difficult and even if they are found, giving the users the necessary controllability in a predictable way is a huge challenge.