General AI Discussion

I’ve been reflecting a bit lately on when/if/how Blender will use AI to generate 3D models sometime in the future. Do you think Blender should start focusing on AI? If so, how realistic is that? If not, why?

I doubt that Blender will integrate AI tools (like text-to-3D) very soon; but maybe in the far future? Who knows? Just don’t get your hopes up too high about the integration of AI tools by the Blender developers. They already have so much on their plate atm (link 1, link 2).

But as ambi said, it could be done via third party add-ons.

The recent advancements in generative AI could open up a unique opportunity for Blender to become the first complete 3D app to fully integrate this technology. By doing so, it could set itself apart from competitors such as Autodesk’s 3ds Max and Maya, which have yet to embrace this innovation. This move could position Blender as the industry standard for 3D generative AI, much like how Adobe has pioneered 2D generative AI in Photoshop, powered by Firefly.

Your comment shows your enthusiasm about this topic and the hope that Blender would become a pioneer in this area, but it’s unlikely to happen. From what I see, the Blender Foundation doesn’t care that much about becoming the “leader software”, whether in 3D or in AI. The Blender developers have their own vision for the future of Blender (and it doesn’t necessarily match what the users foresee or want). Also, I highly doubt that they are competing with 3ds Max or Maya; IMO, they just want to improve Blender the best they can at their own pace.

The development and implementation of powerful AI would require significant resources and expertise, which may be beyond the reach of the Blender Foundation. They could consider partnering with organizations like Stability AI (developer of Stable Diffusion) which focuses on open source AI technology to develop and implement the AI features.

It would surprise me a lot if the Blender Foundation partnered with Stability AI, OpenAI, or any other “AI company” known to have collected billions of images off the internet without consent from the creators of those images. The Blender Foundation (and especially Ton) is known to care a lot about licenses.

How would these AI systems be trained? From Blender Market? It might not provide sufficient content for AI training purposes due to copyright restrictions and user concerns about their work being used as training material without permission (which is fully understandable).

LOL. If you want a civil war between content authors and the Blender Foundation, it’d probably be the thing to do yes. :joy:
More seriously, again the Blender Foundation cares about not infringing licenses.

It would be interesting to hear out your opinions on Blender’s future and AI.

What I would like to see is a “text-to-nodes AI tool” that could allow the user to generate shader nodes or geometry nodes by typing into an input field.


I’ve seen the videos people posted and I understand the noise process a bit.
But the part I am pretty sure I am right about is that the data used consists of copies of pictures from the internet, literal copies.
Without these pictures there would be no generative image models.
When I say the process uses literal copies of artwork, I am not talking about the generated images but about the images used in the dataset.

If Picasso hadn’t uploaded any of his artwork to the internet, Stable Diffusion couldn’t find the pattern that gives a Picasso style.
Some artists could copy Picasso’s style and upload their work, and that could be used by Stable Diffusion, I guess. But from what I understand, making a handmade, 99%-similar Picasso and pretending it is an original would be considered plagiarism and illegal, so I don’t see how it could be acceptable to ask Stable Diffusion to create a certain artist’s art style if that same artist refused to let his art be part of the data used by the model.

If Stable Diffusion wants to use my art to train its model, I want big money.
Either royalties every time the model uses part of my data, or a big check for limitless use.

Are these image generators’ datasets public?
Is there a way to search for specific images and check whether they are part of an image generator’s dataset or not?

https://haveibeentrained.com/ ?


That is kinda my point.
AI image generators are nothing without source material, which is original photographs/artwork.
Those who created that source material should be paid every time their work is used, and authorization for their work to be used should be required; if not, those generators should face lawsuits and fines.
This just feels like the Napster free-for-all when P2P emerged.
I guess in the near future Stable Diffusion will either die or become the equivalent of Spotify for image generation.
What I would like: if thousands of artists’ images are used by a dataset to generate an image, all those artists should be paid every time their images are used.
Not sustainable/too expensive for Stable Diffusion? Sorry, bye.
Sustainable? Nice! Artists get paid.

We are again at the point that makes no sense to me. Artists are allowed to learn from other artists, but I should not be allowed to train a neural network with the work of artists.

Of course you are not allowed to claim an image is a Picasso, if it isn’t one. But artists are still influenced by Picasso. They can study his work as much as they want. They can mimic his style as much as they want.
People study from literal copies of artwork.


Not soon, but in a 4-5 year perspective, according to Ton:


I concur with your perspective on this matter. When I previously mentioned competition, it was my personal opinion and not an official stance of the Blender Foundation. Based on my understanding (without knowing him in person), Ton Roosendaal appears to be driven by a passion for software development rather than financial gain or market share.

Yes, interesting. While it may be overly optimistic to expect a fully featured 3D AI generator in Blender, a simpler AI feature designed to handle specific tasks or tools within the workflow could be more feasible and realistic in the near future.


That is one of many possibilities. AI is not going to disappear and it will have more and more influence on our lives in the (near) future.

I do not believe that AI will replace all artists, but I do believe that artists who use AI will become a large part of the competition (in many other areas as well, not just artistic ones).

Turning your back and refusing to acknowledge AI tools is not a good idea, we should at least be aware of its possibilities and if necessary use them for our benefit.

AI tools will become actively used in artistic creations. Not all artistic creations will use AI but there will be an increasing amount that do.

There are still artists painting with oil paints, musicians playing classical instruments, sculptors using clay and stone, etc.

But they also face a lot of competition from computer-generated art, photographs, synthesizers, drum machines, digital sculpting, scanned digital models, etc.
AI will add to this and introduce a new level of competition.


As for Blender incorporating AI we could turn this around.

There are already Python scripts for Blender written by AI (albeit of dubious quality).

Maybe it will be AI that incorporates Blender.

To offer my brief perspective on the ‘Artist vs AI’ topic:

With current technology, I do not foresee this as a significant issue in the near future. While AI has made remarkable progress in recent years, it still lacks the sophistication and nuance required to replicate the full range of human cognitive abilities, particularly in more complex and abstract tasks. But if there are substantial improvements in CPU power or the development of novel computational processors with entirely new methods of computing, potentially leading to exponential increases in speed and capabilities, it could have profound implications for society as we know it today.

As we consider the potential impact of AI replacing some creative tasks that humans are doing, it is important to recognize that this issue is likely but one aspect of a much larger landscape if information technology keeps improving at a fast pace for many more years to come. However, this issue falls outside the scope of this thread.


When I was a kid computers did not even exist (at least personal computers) nor did mobile phones for that matter.

Fast forward: they have had a huge impact on our society in almost every sector.

At the rate computing and research evolve, I doubt AI will take long to have a huge impact on many sectors.

The evolution of technology is exponential rather than linear. AI will only accelerate the process.


In any case, the AI needs the original artwork to generate its thing.
If you find an artist who can reproduce Picasso’s style without literally copying his art, and you make a deal with that artist so that he grants the rights to use his original, Picasso-style-looking art, then I guess any user could use that model as much as they want without trouble.

I wish more people would remove their pictures from being available to those image generators to protest against them. I like the link xan2622 posted about that.

As I said, movie productions generally buy the reference art for their artists when reference is required.
Productions also buy the rights to copyrighted material and pay royalties, if applicable, on any content used.
Does a trained model use original copyrighted art? Pay.

Even before AI, I hesitated to post my art on the internet out of fear of copycats stealing my “style” (I am not a great artist, but still). I always tried to avoid using shortcuts developed by artists who went through intensive study to elaborate them.
It may be considered a waste of time and stupid, but I always insisted on that. I also hate copying; for me it’s boring and looks boring.
That is just a personal preference, though.
I am still impressed by people who copy well, as copying is also a skill that is hard to achieve; it’s just not my thing.

“A human copying an image is way harder and requires more time and skill than a computer” is one debate; I am not that interested in it, but it still merits being addressed.

“An AI image-generator model uses literal digital copies, 100% of someone’s copyrighted art, to generate something, so it should pay” is the debate I am interested in.
You use a literal copy of a photograph/artwork in your movie production? Pay.
Your AI tool uses my picture in its dataset? Pay.

Again, I also worked in games. We bought every game we used as inspiration.
In movies, we bought the movies we got our inspiration from.
Now these datasets don’t use inspirational art; they use LITERAL copies to generate their images.
The model cannot do anything without them.
A human needs a reference image, and even then it isn’t a perfect copy; even if he takes days, even if he has some sort of superpower, he is never using a literal 100% copy, as opposed to the training data, which uses an exact copy.
If the training data could be materialized, it would consist of millions/billions of 100%-identical photographs of original artwork. Imagine how much that would cost on Shutterstock.
If the artist’s mind looking at a reference could be materialized, it would consist at best of about 90% of what the picture looks like; it would be blurry and weird.

The dataset isn’t learning; it consists of literal copies of original artwork/photos.
The model needs them to “learn”.

Please read what I wrote again. I clearly implied that these were my opinions (see the bold parts). If I had proof, I would be talking to the Senate’s committee on AI.

Even Adobe does not own enough images to train a complicated model from scratch. I think they are skewing the facts and obscuring the underlying external weights they might have used. They probably also scraped the internet for “freely available” images from non-art sites.

This is like the proof for “there are aliens living somewhere in the universe”. We are deducing and coming to a possible logical conclusion; there is no evidence that they exist. We posit that they might exist given the vastness of space.

Adobe does not own billions of images; at most they have a couple million “paid” images. Even so, some of the owners of those Adobe Stock images said that they were not told about AI training when they signed up. Obviously, they did not think about such a future possibility back then.


Thanks, I was already aware of that. That is why they chose the wording “ethically sourced” images for training AI: it is such an open-ended description that what constitutes ethical can change from person to person and from corporation to corporation. The word “ethical” does not necessarily imply “moral” in the sense we would hope to assume.

ok, so there is no proof, and those are just your accusations and speculations.

And a citation from the article xan2622 linked:

Indeed, this assertion is accurate in my opinion. My previous statement alluded to the possibility of creating innovative computational processing systems with unique and unexpected approaches to information processing and AI.

While it is difficult to predict with certainty what the ultimate outcome will be, in the end, only time will provide the answers.


Generative AI is not literally copying art. If artists want to reproduce a style, they look at originals made in that style. If you want to generate images in a certain style, you show the model actual images made in that style during training. Again, if it were copying, it would be able to reproduce them perfectly.
It uses a copy to learn from, just like humans do.

A point of discussion could be whether the training dataset serves as reference or whether it should be viewed more as inspiration.

In general, it is very difficult or even impossible to recreate the original training images (with a few exceptions). If your claim were true, it would be relatively easy to recreate all the images from the training dataset. In fact, it would be difficult to create any other image, because the neural network would have memorized all the images from the training dataset.
When neural networks start to memorize the training dataset, they are no longer generalizing, meaning they become useless for pretty much everything else (besides the memorization).

Edit: In case someone is more interested in this, it even has its own name: it is called “overfitting”.
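To make overfitting concrete, here is a minimal sketch (a toy polynomial fit in NumPy, not a real neural network; all names are illustrative): a model with enough parameters to memorize its ten training points reproduces them almost exactly, yet fails on points it has never seen.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten noisy training samples of a simple underlying function
x_train = np.linspace(0.0, 1.0, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, 0.2, 10)

# A degree-9 polynomial has enough parameters to pass through all
# ten points: it "memorizes" the training data instead of generalizing
coeffs = np.polyfit(x_train, y_train, deg=9)
train_error = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)

# On unseen inputs from the same function, the memorizing model is off
x_test = np.linspace(0.05, 0.95, 10)
y_test = np.sin(2 * np.pi * x_test)
test_error = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)

print(f"train error: {train_error:.2e}, test error: {test_error:.2e}")
```

The training error is essentially zero while the test error is not, which is exactly why a model that memorizes its dataset becomes useless for anything else.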


Generative AI uses copied art as its source; that is the problem I have.

You don’t “show” it. You upload it in 100% of its integrity. It is 100% the same.
When you show something to a human, the human filters the image and reproduces it as best they can.
The most skilled human in the world can’t reproduce an observed image with 100% accuracy. So far, a human can’t copy an image 100%, like a computer does.
Humans interpret an image and store it in their memory. The more time spent not observing the image, the less precise the memory.
A computer stores a perfect copy of the image in a nanosecond.

Or whether it should be viewed as storing perfect copies in its dataset.
I refuse to let any of my images be used that way in any dataset unless a major paycheck is discussed.

I know the model can’t reproduce the exact image. The training data does store copies of images, though, and those images are used to train the model. I don’t want my images in there unless I am highly paid.

But when it generates data, it does not have access to the original sources anymore. It can only use what it has learned from the images. And that is not a copy of the image.

The way you are describing what you think is happening makes me believe there are still misunderstandings about what is actually going on.
Nothing is uploaded to the neural network. At training time, the neural network is trained to reproduce random images from the dataset (with a random amount of noise added to them). It is trained to replicate those images pixel by pixel. But it is incapable of achieving that goal! Instead, it learns from those images whatever is most promising for achieving that goal.
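That training loop can be sketched in a few lines, assuming a toy setup in NumPy (random 8x8 “images” and a single weight per pixel standing in for the network; nothing like a real diffusion model): the model is asked to recover the clean image from a noisy copy pixel by pixel, but with far too little capacity to store the images, it can only pick up broad statistics.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "dataset": 100 flattened 8x8 grayscale images
dataset = rng.random((100, 64))

# The "model" is just one weight per pixel -- far too little capacity
# to memorize 100 images, so it can only learn broad statistics
weights = np.ones(64)

def training_step(weights, lr=0.01):
    # Pick a random image and add a random amount of noise to it
    image = dataset[rng.integers(len(dataset))]
    noise_level = rng.random()
    noisy = image + noise_level * rng.normal(0.0, 1.0, 64)

    # The model tries to reproduce the clean image pixel by pixel
    prediction = weights * noisy
    error = prediction - image

    # Gradient step on the per-pixel squared error w.r.t. the weights
    weights = weights - lr * 2.0 * error * noisy
    return weights, np.mean(error ** 2)

for _ in range(2000):
    weights, loss = training_step(weights)
```

After training, `weights` holds 64 numbers, not any of the 100 images, and the loss never reaches zero: the model cannot replicate its training data, only approximate what helps it denoise.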

Just like you can’t reproduce (in most cases) the exact image with those neural networks…

If you remove an image from the training dataset, the neural network will over time also forget about that image. So what?

A computer, yes. A neural network, no. And to be clear, we are talking about neural networks here!

That’s fine, I have no problem with that.

Then why are you constantly making claims in that direction?!

Yes!

That’s fine. No reason though to misrepresent what those neural networks are actually doing!