This AI rendered Apocalyptic Blender Classroom Scene in 5 seconds

Stable Diffusion renders scenes like this effortlessly (well, not really "rendering" — it generates them through a diffusion process, but you get the idea).

The image is fully controllable by text prompt.

I'm blown away.

I know it looks like concept art, but this tech is still in its early beta days… the bigger the models get, the better the results, and newer algorithms keep cutting the compute they need.


This is impressive, but it looks nothing like the Blender classroom scene:

Where’s the blackboard? Where are all the lamps?

Also, this AI clearly has no idea what a desk is, which is why it gives wildly inconsistent results:
(screenshot of the generated images)

Of the desks in the pictures, over half have the wrong number of legs, and a good chunk don’t have anywhere to sit or the sitting area isn’t actually accessible.

Like all AI art, this has done a great job with a general approximation, but the details are, realistically, worthless. If you were hired to make art of "apocalyptic Blender classroom scene" and you turned this in, your client would tell you to do it again and get it right this time.

Sure, it would work great as concept art for doing your own apocalyptic Blender classroom scene, but this is in no way a finished piece of art.


In case anyone is interested in how it works: it starts with a noise image (actual noise) and a text prompt. The model is trained to make the noisy image slightly less noisy (somewhat like a denoiser, but guided by the text). It then takes many, many steps to get an actual image out of it.
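That iterative denoising loop can be sketched in a few lines of toy Python. This is purely illustrative — in the real system `denoise_step` is a large neural network conditioned on a text embedding; here a made-up stand-in just removes a fraction of the remaining "noise" (the difference from a fixed clean target) each step:

```python
import random

def denoise_step(noisy, target, strength=0.1):
    # Stand-in for the trained network: each call removes a small
    # fraction of the remaining noise (difference from the clean target).
    return [n + strength * (t - n) for n, t in zip(noisy, target)]

def generate(target, steps=50, seed=0):
    rng = random.Random(seed)
    # Start from pure noise, just as the diffusion sampler does.
    image = [rng.gauss(0, 1) for _ in target]
    # Repeatedly denoise; many small steps converge on an image.
    for _ in range(steps):
        image = denoise_step(image, target)
    return image

clean = [0.2, 0.8, 0.5]   # pretend this is "the image the prompt describes"
result = generate(clean)
error = max(abs(r - c) for r, c in zip(result, clean))
```

After 50 steps the starting noise has almost entirely washed out, which is why a single step produces garbage but the full loop produces a coherent image.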

So it's basically magic.

Exactly, it also only works with magic hardware :joy:

The thing is, though, like I said it's early days, and with the right algorithm it can produce finished-looking renders. It's better at faces, look:

It spat these out in seconds as well.

Of course they are not 3D, they are 2D works… that's a major limitation right now. But I don't think we should be picking holes in how good it is at making finished-looking works; it's obvious it's learning fast. The question is what implications this has in the coming years.


Which diffusion model is this? There have been plenty of releases recently.

It’s better at faces that all look exactly the same, sure, I’ll give you that. Although… have you seen the eyes in these pictures?


I think they said it's a diffusion model with 4 billion parameters… there was an older version that used CLIP guidance or something, but apparently that's less efficient. The main takeaway is that Stable Diffusion is way less compute-heavy, and they are letting people generate these images for free, unlimited, right now.
It could spit out 10 of those face images in about 10 seconds.
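For anyone curious about the "CLIP-guided vs. newer" distinction mentioned above: the older approach steered sampling with a separate CLIP model at generation time, while Stable Diffusion uses classifier-free guidance, where the network predicts the noise twice per step — once with the prompt, once without — and the two predictions are blended. A toy sketch of just that blending step (the vectors and the `guidance_scale` value are made up for illustration):

```python
def classifier_free_guidance(uncond_pred, cond_pred, guidance_scale=7.5):
    # Blend the unconditional and text-conditioned noise predictions.
    # A scale > 1 pushes the sample harder toward the text prompt.
    return [u + guidance_scale * (c - u)
            for u, c in zip(uncond_pred, cond_pred)]

# Hypothetical per-pixel noise predictions from the two forward passes:
uncond = [0.0, 0.1]
cond = [0.2, 0.3]
guided = classifier_free_guidance(uncond, cond)
```

The appeal is that no extra model is needed at sampling time — just one more forward pass of the same network — which is part of why this approach is cheaper to run.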

And all ten of them would be worthless in any kind of production environment, since the eyes are completely broken and the faces lack interesting variation.

Interesting. All the AIs that have gotten so much press recently (DALL-E and co.) totally suck at faces; they're only good for creating horror scenes. These are a step up from that, though the eyes are still problematic.

And yeah, well worth watching to see how they develop.

It resembles some of the dreams I have had in the past, places that are sort of strange and unsettling. The dream says it is something, but my eyes will tell me it is not.

Though few ever took me to places like this, even after going too hard on the caffeine intake. Imagine if Salvador Dalí had lived to become experienced in a DCC app (which did not exist in his time).


Yeah, but I could ask it to do an oriental woman and it would, and yes, the eyes are not perfect, but it's getting better with each iteration. Let me try an oriental woman with red hair wearing a hat, just as a test.

I don't understand the logic of picking holes in the quality of an AI system that is iterating all the time and getting better at a rapid rate… in a year or so it might be able to replace a lot of manual renders.


From my point of view, the value of these techniques is in previsualization. If you have certain visuals in mind, you can quickly produce a result to see for yourself whether it works with the intended style, or it can serve as an initial communication tool with stakeholders to make sure everyone agrees on the overall look.


These are really cool. I can see their use for spitting out ideas quickly. Some of the images I’ve seen from Midjourney are stunning.

I can definitely see these tools finding their way into the pipelines of concept artists and such — using them as a foundation to build on.

Really impressive, and only getting better.

It does robots as well. I told it to do an evil Ultron and gave it influences from people like H.R. Giger, plus other text tags like battle-damaged, Terminator T-800, etc. It spits out hundreds of iterations like this in seconds, all as detailed and unique. I have about 50 of these, all different.


I’ve been hearing people say that AI would replace a lot of manual art in a year or so for the last 6 years, and they’ve been saying it longer than that, I just wasn’t paying attention. I also remember when MagicPoser was going to replace riggers/animators in a year or so, when AI interpolation was going to replace traditional animators in a year or so, when flying cars were going to replace normal cars in a year or so, when crypto was going to replace traditional currency in a year or so, when contactless payment was going to replace debit cards in a year or so…

My point is, I've seen this cycle of hype, where "it's almost perfect, give it time and all the artists are going to be replaced!", many, many times. This AI trend might seem cool and new viewed in isolation, but looked at in context… it's just the latest of many things that can be useful, but never quite lives up to its breathlessly prognosticated powers.


It spat this out as well… clearly better eyes than the others. It looks like concept art rather than a render, but it's still impressive. And it was instructed to draw the subject wearing a mask.

Everything related to machine learning has been overhyped for quite some time. If just a fraction of those claims were true, we would have full self-driving flying cars by now.

Generally I agree with you. However, with this technique, there are some unique opportunities that never existed before.
Think about a meeting between artists who have to come up with some kind of concept. With a tool like this, they can try ideas out really quickly, and they can be very confident that they are talking about the same thing.
Or a meeting where stakeholders are telling an artist what they are looking for — the ones where they decide, but don't have the ability to imagine anything :slight_smile: . With this kind of tool, the artist can quickly present some initial ideas, because the overall look is good enough for that purpose. The stakeholders can directly throw out ideas they don't like.
To me, that’s quite an achievement. It doesn’t replace the artists at all, but can be tremendously useful as a quick communication/visualization tool.


Is this the newest AI beats humans thread? I was still reading the one from last week. :grin:

Still, some very beautiful renders. I'd just much rather have an AI that can perfectly guess when to brew my next cup of coffee. But that's just my personal preference.