AI for 3D is here

The 2D digital arts world is freaking out over AI (and rightly so, in my opinion). Time for 3D artists to take notice of what’s coming.

A text-prompt-to-3D example:

https://www.reddit.com/r/StableDiffusion/comments/xxaxgm/a_implementation_of_textto3d_dreamfusion_powered/

This one takes a 2D image and generates novel views from it. In the link you can see comparisons with several other methods under development that do the same thing.
https://3d-diffusion.github.io/

Right now these tools only make images… but I imagine it isn’t much of a leap to make a point cloud or something that can get you a polygonal 3D model.
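To make that concrete, here's a minimal sketch of how a generated point cloud could be turned into a polygonal mesh, assuming the Open3D library and a point cloud file exported by one of these tools (the file names and reconstruction parameters are just placeholders, not anything these projects actually ship):

```python
import open3d as o3d

# Load a point cloud exported by a hypothetical text-to-3D tool
pcd = o3d.io.read_point_cloud("generated_points.ply")

# Surface reconstruction needs oriented normals; estimate them from local neighborhoods
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30)
)

# Poisson reconstruction turns the oriented point cloud into a triangle mesh
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=8
)

# Export to a format any DCC app (Blender, Maya, ...) can import
o3d.io.write_triangle_mesh("generated_mesh.obj", mesh)
```

The hard part isn't this conversion step, it's getting a clean, consistent point cloud out of the image model in the first place.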

Lots of implications for developers and users of 3D tools. What does everyone think of this?


That looks super cool!
I think what's going on with these tools is really interesting! It will surely change the way we work in CG, but I think there's still room for everyone.

We're already in a situation where, if someone wants to avoid paying an artist to model a pineapple, they can get one in minutes: https://www.turbosquid.com/fr/3d-model/pineapple

This is great, yet quite limited: if you need to make a character out of it, you'll still need concept artists, modelers, riggers, and animators.

I think there will always be a point where you start to need an expert, whether technical or artistic.
On the other hand, it looks super cool to be able to take a few pictures and get a model from them.

3D is super fun but very technical; having tools that remove some of the technicalities is really awesome.
For someone who started playing with 3D in the late '90s, things like Quixel Megascans look like a giant "CHEATING" neon sign.
But in fact it's awesome: if you have an idea, does modeling every asset Quixel can offer by yourself make that idea any better?

But for sure, having Quixel around lets someone pull off an amazing scene in minutes, after only a few days or weeks of practice.

Whereas, say, 10 years ago, the same scene would first have meant an insane amount of work to put it all together, done by someone with a really good eye and years of experience in CG.

These same people will start to use Quixel now, because for the most part there's no point modeling these assets by hand. Meanwhile they can apply their skills in other areas; they'll always do better work than someone with only a few days of training.

Look at cinema: for a long time, filming something meant a big investment; then came video, then digital. Nowadays everyone with a smartphone carries an amazing movie-making device in their pocket. This has definitely changed the way we produce and consume video content. Everyone can be a TV star with their YouTube channel, or direct their own movie.

But there are still professional directors, and video content that is more or less interesting.
This has just opened up interesting new possibilities. TV is kind of dying for some very good reasons, but it's not like everyone died from the new tech.

AI is the same to me: 20 years ago I would have spent two days modeling that pineapple; now it's just a matter of typing the word in the right place. It opens new possibilities and leaves room for more interesting creative work.

Sorry for the long answer, and thanks for reading!

“This is great, yet quite limited: if you need to make a character out of it, you’ll still need concept artists, modelers, riggers, and animators.”

Not sure how long even that will last. There are already folks working on AI animation tools.

But in terms of on-screen graphics, I think the whole paradigm of polygon-based rendering could go away. (I’m not going to state this 100% correctly, but you get the idea.) Right now a GPU calculates how polygons, normals, textures, and lights all interact in order to render an image on screen. But if AI image synthesis gets fast enough, that whole render pipeline could be skipped and the computer could simply “draw” the desired image on screen. No models. No 3D behind the image you see. Doing that at 60 fps may be a stretch right now, but this stuff is evolving at light speed.


I’m closing this thread, quoting @Fweeb here (who closed an essentially identical thread recently):

As for the actual topic at hand. There are quite a few other threads already in existence on this topic. If you want to continue to discuss your opinions of a world with machine-generated art, I suggest you jump into one of those threads. I’m going to close this one.