Another AI vs artists thread

Yes, there are warping-style solutions that use a predefined topology, like this one:

There are others, and if I remember correctly, also ones for other tasks.

Unfortunately, generating a new topology seems really difficult to do.

One idea I had was to UV unwrap a sculpt and bake the position into each pixel. This would be a trick to pass mesh data to a neural network in a format that is known to work. I can imagine that having multiple of those, each unwrapped differently with varying edge seams, might be a feasible starting point. Even if that worked, it would still be very far from something with practical value.
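A minimal sketch of that baking idea, in plain NumPy rather than Blender’s actual bake pipeline (the function name and resolution are made up for illustration): each vertex’s XYZ position is written into an RGB “position map” at the pixel addressed by its UV coordinate.

```python
import numpy as np

def bake_positions_to_image(vertices, uvs, size=64):
    """Write each vertex's XYZ position into an RGB "position map"
    at the pixel addressed by its UV coordinate."""
    image = np.zeros((size, size, 3), dtype=np.float32)
    for pos, uv in zip(vertices, uvs):
        x = min(int(uv[0] * size), size - 1)
        y = min(int(uv[1] * size), size - 1)
        image[y, x] = pos  # the 3D position stored as an RGB triple
    return image

# Toy example: one triangle unwrapped to three UV corners
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.5], [0.0, 1.0, 1.0]])
uvs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
img = bake_positions_to_image(verts, uvs)
```

A real bake would rasterize whole triangles, not just vertices, but the principle is the same: the mesh becomes a fixed-size image a network can consume.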

Any higher-profile case of this to point to? I’ve seen the overall attitude for sure, but I haven’t come across devs publicly coming out and stating this, even though that’s what they have been doing.


Quick summary for people who don’t want to read through it:
This is a repository to download image datasets, and this specific code change adds an option to respect a no-ai tag that can be set by media owners. That way, all images with this tag would be skipped and not used for training the AI.
They decided to turn this option off by default for “reproducibility” reasons.
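Conceptually, the opt-out filter amounts to something like this (hypothetical names and metadata format — this is not the repository’s actual code):

```python
def filter_dataset(entries, respect_noai=True):
    """Drop entries whose metadata carries a "noai" tag.
    Field names here are made up for illustration."""
    if not respect_noai:
        return list(entries)
    return [e for e in entries if "noai" not in e.get("tags", [])]

entries = [
    {"url": "a.jpg", "tags": []},
    {"url": "b.jpg", "tags": ["noai"]},
]
kept = filter_dataset(entries)                          # respects the tag
kept_all = filter_dataset(entries, respect_noai=False)  # the disputed default
```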

That’s a very lame excuse! First, there is pretty much no one with the knowledge and computational power to reproduce those kinds of experiments. Furthermore, they could simply document this very prominently, or ship a separate version for reproduction purposes.


Has anybody tried generating Blender addons with ChatGPT?
I just tried it and managed to get ChatGPT to create an addon that recursively walks through the active object’s hierarchy and prints each object’s name to the console.

I also tried to create a simple circular array addon but failed.

Here is the addon code if anybody is interested.

import bpy

class ObjectInfoPanel(bpy.types.Panel):
    """Panel containing the object info button"""
    bl_label = "Object Info"
    bl_idname = "OBJECT_PT_object_info"
    bl_space_type = 'VIEW_3D'
    bl_region_type = 'UI'
    bl_category = "Tools"

    def draw(self, context):
        layout = self.layout
        row = layout.row()
        row.operator("object.info_operator")

class ObjectInfoOperator(bpy.types.Operator):
    """Print object information"""
    bl_idname = "object.info_operator"
    bl_label = "Print Object Information"

    def execute(self, context):
        # Recursively walk through the object hierarchy and print each name
        def print_object_names(obj):
            print(obj.name)
            for child in obj.children:
                print_object_names(child)

        # Get the active object
        active_object = context.active_object

        # Print the object names in the active object's hierarchy
        if active_object is not None:
            print_object_names(active_object)
        return {'FINISHED'}

def register():
    bpy.utils.register_class(ObjectInfoPanel)
    bpy.utils.register_class(ObjectInfoOperator)

def unregister():
    bpy.utils.unregister_class(ObjectInfoPanel)
    bpy.utils.unregister_class(ObjectInfoOperator)

if __name__ == "__main__":
    register()

There are a couple of threads here about that:


This might be a stupid question, but does Stability AI have access to the end user’s prompts? I have never installed it, but I would guess that it’s not possible. Sorry if I sound like an old guy trying to understand the “new tech” :sweat_smile:

No, they don’t. You can download everything and run it locally. If you run it online, you can’t really know. Maybe read the terms of service to find out more.

Thanks! I was asking because I watched a video which suggested that the data collected from prompts will be used to train a new AI to generate the prompts themselves (based on the current online trends and what’s hot), thus removing human intervention entirely. I still believe this result will be inevitable if nothing changes drastically. It is just the natural progression: humans will try to automate everything without looking back at what they lost in the process.


There are large prompt databases for Stable Diffusion already (and someone has already trained language models to expand simple prompts into more complex ones based on those). Also, a local instance of Stable Diffusion does not send your data to a server. But it does embed metadata about the prompt used, so if you upload those images anywhere without stripping that metadata, it’s there for the taking.
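To illustrate that metadata point: several SD front ends (e.g. the AUTOMATIC1111 web UI) write the prompt into a PNG text chunk. A minimal stdlib sketch that strips such chunks — the toy “PNG” below is hand-built for the demo, not a real image, and exact chunk contents vary by tool:

```python
import struct
import zlib

def make_chunk(ctype, data):
    """Build one PNG chunk: length, type, data, CRC."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def strip_text_chunks(png_bytes):
    """Return the PNG with all tEXt/iTXt/zTXt chunks removed."""
    out, pos = [png_bytes[:8]], 8  # keep the 8-byte PNG signature
    while pos < len(png_bytes):
        (length,) = struct.unpack(">I", png_bytes[pos:pos + 4])
        ctype = png_bytes[pos + 4:pos + 8]
        if ctype not in (b"tEXt", b"iTXt", b"zTXt"):
            out.append(png_bytes[pos:pos + 12 + length])
        pos += 12 + length
    return b"".join(out)

# Build a tiny fake PNG containing a prompt in a tEXt chunk
sig = b"\x89PNG\r\n\x1a\n"
png = (sig
       + make_chunk(b"IHDR", b"\x00" * 13)
       + make_chunk(b"tEXt", b"parameters\x00a cool prompt")
       + make_chunk(b"IEND", b""))
clean = strip_text_chunks(png)
```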

The hosted SD version(s) are probably gathering all kinds of data (others like Midjourney and DALL·E do that too). The Midjourney data is even better for this purpose, as users evaluate the results, so they also have data on what users like (and they actually use this data already to bias the model).

But to be fair, I don’t really see a point in removing the human from the loop of those systems. If that becomes someone’s goal at some point, then it’s pointless to use prompts at all. Just randomly walk through the latent space and use some kind of score to estimate whether an image is cool or not. The coolness could even be learned from social media likes (nothing new is needed).
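That latent-space idea can be sketched in a few lines (toy example: the “coolness” scorer is a made-up quadratic standing in for a model trained on likes, and the latent space is just a random vector):

```python
import numpy as np

rng = np.random.default_rng(0)

def coolness(latent):
    """Stand-in scorer: a made-up quadratic. In reality this would be
    a model trained on e.g. social-media likes."""
    return -float(np.sum((latent - 0.5) ** 2))

def random_walk_search(dim=16, steps=200, step_size=0.1):
    """Random walk through a latent space, remembering the
    best-scoring point visited."""
    z = rng.standard_normal(dim)
    best, best_score = z.copy(), coolness(z)
    for _ in range(steps):
        z = z + step_size * rng.standard_normal(dim)
        score = coolness(z)
        if score > best_score:
            best, best_score = z.copy(), score
    return best, best_score

best, score = random_walk_search()
```

The best latent would then be fed to the image decoder; no text prompt is involved anywhere.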

Using prompts to remove the human from the loop is just a plain stupid idea, and could only be “invented” by someone who has no idea how those systems work, or who is totally dishonest and manipulative (I bet the latter, as I think I know which video you are referencing).

It’s not about randomly picking images, it’s about catering to the mega-corporations, celebrities, and influencers who would love to automatically generate quality promotional content based on what’s trending currently, or maybe even individual preferences in the near future. Heck, people are already getting excited about ChatGPT generating prompts for them.

I know this does not make sense whatsoever. Like, what is even the point? The future is dumb.



Are you sure about that?
I guess, “removing the human” might be phrasing it wrong, or at least suggesting the wrong thing.

Seen from the outside, those text-to-image AIs, whatever black magic they perform under the hood, look a lot like an interpreter’s read-eval-print loop, only they process natural languages instead of programming languages.
What’s more obvious than using them for batch creation of images, feeding them artificially/script-created prompts? And what’s more obvious than letting another AI create those prompts?

I believe the guy who made the video you’re thinking of has developed a pretty good understanding of how a tech nerd’s mind works … and how bad things can become if that mindset is fused with a capitalist big-money guy’s brain. The github thread linked to by @joseph is a good example of that … nothing “manipulative” about it, just self-documenting reality.

All that won’t “eliminate” the human from the loop, just as it didn’t “eliminate” human artists before, but it will render the human contribution to the equation economically insignificant one more time. Human beings will still be able to prompt AIs manually, but who will care for their “creations” if AIs batch-create millions of “artworks” using optimized prompting, tailored to what Big Tech thinks fits the masses according to other AIs’ findings, drawn from the tracked human behavior they’re force-fed with, day by day by day by day …



Read my post again. If the goal were to remove the human from the loop, then the whole textual part of the model would be unnecessary and wasteful.

Nah, he has for sure developed a pretty good way to manipulate and lie to people. Look up Royal Skies’ video for a rebuttal. And Royal Skies was very, very polite and gentle there.

You read from this thread what you want to read. Reproducibility is important for software development. More important than the wishes of “art nerds” (you started calling the other side “tech nerds”) for defaults, in a context where defaults don’t matter. Hint: every “tech nerd” who uses that tool will change not only the defaults, but also the code, to tailor the software to their specific needs.

Random walk is not about randomly picking images. It’s a technique that’s frequently used in ranking and recommendation systems. Think PageRank (the original Google search algorithm).
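For reference, here’s a tiny PageRank power iteration (the 3-page link graph is made up for illustration):

```python
import numpy as np

def pagerank(adj, damping=0.85, iters=100):
    """Power iteration on a column-stochastic link matrix.
    adj[i, j] = 1 means page j links to page i."""
    n = adj.shape[0]
    col_sums = adj.sum(axis=0)
    col_sums[col_sums == 0] = 1.0  # avoid division by zero for dangling pages
    M = adj / col_sums             # each page spreads its rank over its outlinks
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = (1 - damping) / n + damping * M @ rank
    return rank

# Tiny 3-page web: 0 -> 1, 1 -> 2, 2 -> 0 and 2 -> 1
adj = np.array([[0., 0., 1.],
                [1., 0., 1.],
                [0., 1., 0.]])
ranks = pagerank(adj)  # page 1, with two inbound links, ranks highest
```

The mathematical connection to random walks: the rank vector is the stationary distribution of a random surfer who follows links with probability 0.85 and teleports to a random page otherwise.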

I am sure this sort of thing is being done, and it can make sense. When you have a better understanding of what kind of prompt works, you might build something like a prompt translator. Or if you have prompts which produce awesome images, you may add them to the dataset. The model is open, and anyone could do this sort of thing if they have the data. There are a lot of things that can be done, including getting rid of human intervention in many ways.

Oh, this sounds like fine-tuning. You can take any model which does text-to-image, like Stable Diffusion, and continue the training, but with data that better suits what you are looking for. All large neural networks that have been trained on huge amounts of diverse data have weaknesses. When you narrow the scope down, you can massively improve the quality of the results.
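A toy illustration of the fine-tuning idea (a one-parameter linear model in NumPy, nothing like an actual diffusion model — just to show “pretrain broad, then continue training narrow”):

```python
import numpy as np

rng = np.random.default_rng(0)

def train(w, xs, ys, lr=0.1, epochs=200):
    """Plain gradient descent on mean squared error for y = w * x."""
    for _ in range(epochs):
        grad = np.mean(2 * (w * xs - ys) * xs)
        w = w - lr * grad
    return w

# "Pretraining": broad, mixed data drawn from several different slopes
xs_broad = rng.uniform(-1, 1, 500)
ys_broad = xs_broad * rng.choice([1.5, 2.0, 2.5], size=500)
w_pre = train(0.0, xs_broad, ys_broad)

# "Fine-tuning": continue from the pretrained weight on narrow data (slope 2.5 only)
xs_narrow = rng.uniform(-1, 1, 100)
ys_narrow = xs_narrow * 2.5
w_ft = train(w_pre, xs_narrow, ys_narrow)
```

The pretrained weight ends up near the average of the mixed slopes; continuing the training on the narrow dataset pulls it much closer to that dataset’s slope, which is the whole point of fine-tuning.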

Well, if there were only one driving force here, then yes.
However software development as a whole doesn’t work that monolithic.
Tools, libraries, etc. are written by one party and put to use by another, often in ways not foreseen (or intended) by the original developers, often “abusing” interfaces, and often creating tremendous overhead, because it still works efficiently enough on fast-enough hardware.

Hell, all that server-side scripting in web programming is one huuuuge waste of resources; it could all be handled 1000x faster if there were no clunky interpreted languages involved, often even nested, all over the place. I hear people claim there’s even a 3D package that pipes geometry data through Python …

On a closer look, your efficiency argument therefore … well … doesn’t hold up too well.

Reproducibility? Of what exactly?
In the case we’re referring to, the result of a post-processed, unauthorized bulk download from art websites with ever-changing content databases???

If you want reproducibility, then create a stable and well-defined test environment to test your software against, instead of using the f*ckin’ live internet. That can be done in other software projects too, right?

As far as I can see, we’re talking about a pretty straightforward command-line tool here, much like wget or similar. Using a combination of command-line options and maybe a bit of editing in a config file does the job. Browsing and “tailoring” the source for sure isn’t the standard use case for this.

To quote directly from the discussion:


I will link the video in question so that all of us and others reading this can be on the same page

Just to add to your point: most hardcore AI enthusiasts don’t have any real art training (correct me if I am wrong), so they don’t care about the artistic process, nor understand it in the first place. What matters most to them is a good-looking end result, in contrast to a (conventional) artist, who would always want to realize their ideas exactly as they are in their mind.

Yes, I know what a random walk means. From your wording, I interpreted it as the creation of artwork without any end goal or intention, therefore I used the word random. I apologise if that is not what you meant.

No, it would not be useless. In the video, he suggests that prompting is a convenient and self-contained method of teaching the algorithm what people like and want.

Yes, fine-tuned to the point that the already diminished artistic choices are completely removed.


I quickly came up with this possible scenario. I am not saying that this is THE future, just a possibility. Do any of the steps seem far-fetched to you? Important: this is completely made up.

Fine-tuning is a technical term in machine learning, where a neural network with a broader scope gets narrowed down to a more specific task.

Do you have a source for that? I couldn’t find anything related to that. I would like to read what exactly it is first, before commenting.