Why / how AI with Blender?

Hi folks:

I am seeing more and more mentions of using AI with Blender. The thing is, I'm not really sure WHY you would do so, or how. Is the AI doing the drawing for you?

If it's something that will help my drawing, I'm interested. Can someone explain and point me in the right direction?

Thanks
TIM

Given the topic, I’m going to add a pre-emptive note here.

The inevitable side discussion about the ethics, potentials, problems, future, worries, hopes, dreams, fears, and concerns related to AI does not belong here. Please keep those replies in the proper thread:

Please keep the replies on this post solely and exclusively focused on answering OP’s questions - the technical details of using AI with Blender, the use cases, benefits, things of that nature.

Thanks! :slight_smile:


You might use AI to help with your art

  • to produce textures for materials for your models
  • to upscale or denoise images
  • to generate landscapes, skies, background mattes
  • to generate mood boards
  • to create 3D background models for your scenes
  • to create basic animations (also directly from video of something/someone in motion)
  • to iterate on concepts for art you’re developing – you can start from text or from an image you have drawn and guide the AI through prompting to come up with various changes

The AI isn’t doing the drawing for you in the sense that you still need to come up with the ideas for the art – at least that’s what any serious artist I know does (discussion of the hype around this, and the negative aspects should happen in the thread Joseph linked). But it can very quickly produce alternatives, allowing you to play with many more variations of your concept than you could normally produce on your own.

If you have questions about any of these specifically, just ask and I'll link you to more information. Here's a video from the recent Blender Conference about how to integrate some of the existing tools with Blender (the first 17 minutes are a summary of how we got here and the up- and downsides).


WOW – the big ones I see right off are producing textures for materials; producing landscapes, skies, and backgrounds; and producing background 3D models.

Those are the major things – the biggest one is producing textures. I model log homes, and have yet to find a really good wood grain texture for the logs.

TIM


Simply said: it’s a tool…

…and like some people who think they are long-trained specialists because they use a tool they haven't paid for – cracked versions, or even open source (while believing the developers should do everything for nothing)… this could be used to make something nice… or not…

So if someone just types a prompt like: art… and then tries to sell the result… and someone buys it (because they think it's worth it)… but the original artists whose work this is based on get nothing…
… well… think for yourself…

Some of those generated pieces are like: Look mum – I can do art too.

For me it's also as ambiguous as: hand guns don't kill people… people kill people
…maybe… but why do we have to make killing easy, instant, possible from afar, unrecognizable, unintercepted…
:interrobang:

Be warned that none of this is particularly easy to learn how to use as yet; the learning curve is steep.

Textures can be done directly in Blender already.
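For the wood grain case in particular, Blender's own Wave texture node (in Rings mode, distorted by a Noise texture) already generates procedural grain. As a rough illustration of the underlying idea – this is a hypothetical pure-Python sketch, not Blender API code – the grain is just concentric rings warped by noise:

```python
import math

def wood_grain(x, y, ring_freq=12.0, distortion=0.3):
    """Toy wood-grain value in [0, 1]: concentric rings around the
    origin, warped by a cheap pseudo-noise term (roughly what a Wave
    texture in Rings mode, distorted by Noise, computes)."""
    dist = math.hypot(x, y)
    # Cheap deterministic stand-in for a real noise texture.
    noise = math.sin(x * 7.1) * math.sin(y * 5.3)
    value = math.sin(dist * ring_freq + noise * distortion)
    return 0.5 * value + 0.5  # remap from [-1, 1] to [0, 1]

# Sample a row of points across the "log" surface.
samples = [wood_grain(x * 0.1, 0.25) for x in range(5)]
```

In Blender itself you would wire the equivalent nodes (Wave texture in Rings mode, distorted by a Noise texture) into a Principled BSDF rather than writing Python.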

Landscapes & backgrounds:
Nvidia’s GauGAN is free for now, but complex: http://imaginaire.cc/gaugan360/
Quick intro and links of several (freemium) tools like it: https://samdavisphd.com/2022/05/ai-landscape-generator-how-to-create-artistic-masterpieces-with-ai/

You can also create landscapes and environments in Stable Diffusion, but it doesn't specialize in them, and the controls aren't nearly as good as GauGAN's.


With ChatGPT you can also have a coding assistant suggest a Python/Blender solution.
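As a hypothetical example of the kind of script an assistant might draft when asked to "place N copies of an object in a circle": the coordinate math is plain Python, and the Blender-specific call is shown only as a comment, since the `bpy` module exists only inside Blender.

```python
import math

def circle_positions(count, radius):
    """Evenly spaced (x, y, z) positions on a circle in the XY plane --
    the kind of helper a chat assistant might suggest for scattering
    objects around a center point in Blender."""
    positions = []
    for i in range(count):
        angle = 2.0 * math.pi * i / count
        positions.append((radius * math.cos(angle),
                          radius * math.sin(angle),
                          0.0))
    return positions

# Inside Blender you would then consume these positions, e.g.:
#   import bpy
#   for loc in circle_positions(8, 4.0):
#       bpy.ops.mesh.primitive_cube_add(location=loc)
positions = circle_positions(8, 4.0)
```

The usual caveat applies: assistant-generated `bpy` code often targets an older Blender API, so expect to test and fix it rather than paste it blindly.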

I can't wait until someone hooks Geometry Nodes up to it.
