General AI Discussion

Artists should be asked for consent and paid for their labor. I never argued against fair use or fanfic. Don’t you see a distinction between a collage/fanfic and thousands and thousands of Greg Rutkowski images used in for-profit ventures? Greg never consented to the AI taking his art and is not compensated for its use.

Did you read the Fair Use article you linked to? AI art generators fail several of the four factors used to determine fair use. You even stated previously it would put many artists out of work. From the fair use article on Wikipedia:

  4. Effect upon work’s value

The fourth factor measures the effect that the allegedly infringing use has had on the copyright owner’s ability to exploit his original work. The court not only investigates whether the defendant’s specific use of the work has significantly harmed the copyright owner’s market, but also whether such uses in general, if widespread, would harm the potential market of the original.

What a strange conclusion from a person with a Ukrainian flag on their avatar.

In case it’s not clear, I don’t want fanfics to be stopped. I want artists to be paid for their work.

5 Likes

This is a quote from the transcript’s conclusion, which in my opinion is mostly rambling and has no substance:

Wherever they show up, and whatever work they undertake, before you just roll over and relinquish all of life’s efforts to the AI, ask yourself–am I forfeiting work I like doing? Is life really so packed with surplus joy that we should be letting machines automate something we take pleasure in? Do we really have a good reason to let them commandeer a job or hobby that is aspirational and fun rather than rote and miserable?

We must not permit AI developers, with all their underhanded techniques, to undermine us until we are ultimately supplanted. We must fight back–otherwise, we set a dangerous precedent for all AI systems to come.

This is just some sort of automation, which has happened plenty of times before and is going to happen plenty of times in the future. It is great if someone enjoys what they are doing, but there is no guarantee that it can’t be automated in one way or another.
As written, it overall comes across as a call for some sort of rebellion against those systems.

Another aspect:

They want you to accept, without question, that anything and everything you make and share will automatically get fed into their lucrative product. At the time of writing this, there is no way to secede from these systems, even though some are promising to add opt-out features in the future.

This is in my opinion the wrong approach. What would make more sense is to make sure the neural network doesn’t memorize whole works, but only learns patterns from them. Overall this is what happens anyway, but it can be demonstrated that sometimes it memorizes certain works. When this can be prevented, there is no need to have the discussion about what can and cannot be included, because it would be more reasonable to argue that the neural network is only “inspired” by the data without replicating it exactly.
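To make “demonstrating memorization” a bit more concrete, a naive check could look something like the sketch below. This is only an illustration under my own assumptions (the `imagehash` library, local folders of training and generated images, an arbitrary threshold); serious memorization studies use embedding-based nearest-neighbour search rather than simple perceptual hashes.

```python
# Naive sketch: flag generated images that are near-duplicates of training
# images, using perceptual hashes as a crude stand-in for a memorization check.
from pathlib import Path

import imagehash               # pip install imagehash
from PIL import Image

HAMMING_THRESHOLD = 6          # small distance => images are nearly identical

def phash_dir(folder):
    """Compute a perceptual hash for every PNG in a folder."""
    return {p: imagehash.phash(Image.open(p)) for p in Path(folder).glob("*.png")}

train_hashes = phash_dir("training_images")       # hypothetical local folders
generated_hashes = phash_dir("generated_images")

for gen_path, gen_hash in generated_hashes.items():
    for train_path, train_hash in train_hashes.items():
        if gen_hash - train_hash <= HAMMING_THRESHOLD:   # Hamming distance
            print(f"Possible memorization: {gen_path} ~ {train_path}")
```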

I don’t want to go into a whole lot of detail here, and I generally respect that people are afraid. He makes some good points, and I even agree that the outlook is bleak, and that most happy “a new tool democratizing art!” outcries don’t project far enough ahead, because this thing is moving fast. But towards the end this stood out for me:

We must not permit AI developers, with all their underhanded techniques, to undermine us until we are ultimately supplanted. We must fight back–otherwise, we set a dangerous precedent for all AI systems to come.

We’re long past the precedent-setting stage.

All of a sudden he’s concerned, and just about how AI affects artists? He hasn’t noticed that all along, for generations, technology has been used to exploit and supplant people? He missed that automation has already been making massive inroads in manufacturing, for decades? That AI systems have already replaced people other than artists? Oh, that’s ok because those jobs are “rote and miserable”? Only fun and joyful professions need to be protected (as if all art-related jobs were fun and joyful)? All the tools he has previously happily used (he protests loudly that he is not a Luddite), they also all cost some people their livelihood. Where was his impassioned outcry? It’s AI researchers (underhanded!) who’re at fault, not the system that always puts profit over people, or the people championing it and getting filthy rich off the labours of us peons? As if AI companies run by hedge fund managers were the first and only. I mean, it’s good he’s now paying attention, but he’s only seeing part of the picture.

In a democracy, your voice matters. In a world flooded by AI media, your voice has no chance of being heard.

As if our voices mattered now in that sense. (And we don’t even have actual democracies.) In a sea of billions of people our individual voices get drowned out already, how could it be otherwise? Only our friends listen to us, maybe some relatives, and small circles of acquaintances. Unless we become famous. The vast majority of people never experience even those misattributed Warholian 15 minutes. We eke out what importance we feel from the few people we can reach personally. Contrary to Mr Zapata I don’t see that changing. I will watch my friend’s animation over any AI-produced work just like I now read my friends’ books, put my friends’ art on my walls, and enjoy my friends’ music. No, that won’t make them a living, but most of them don’t make one with their art now. But it’s what I have control over. We all do.

The main thing that he seems upset about is that artists aren’t getting paid for their art being used to train AI systems. I have bad news for him: even if they were, that would in no way prevent the inexorable march towards replacement, and the few pennies the average artist might get if by some chance AI training will end up paying for the data, won’t keep food on anyone’s table.

It just seems like a whole lot of hostility directed entirely at the wrong people. Yelling at artists who’re embracing the new tool because they feel that gives them a better chance at survival than not doing so is … weird when he isn’t also yelling at artists who might sell their art to AI companies for training. And conflating AI researchers and the people running companies who’re pushing the hype seems also misplaced. For somebody excoriating everyone disagreeing with him as shortsighted, he’s weirdly shortsighted about where the responsibility lies, and whom we’d have to tackle, and how.

4 Likes

{video transcript} In a democracy, your voice matters.

Hahahaha :sweat_smile: that’s a good one, I needed that chuckle this morning :wink:

That aside, as powerless as we collectively are in the face of the corporations that dominate our world, in democratic states, voting is really the only option we have left. Don’t misinterpret my cynicism as apathy: voting is not optional.

One theme keeps recurring in recent news and recent world history- “too little, too late”. The timidity and unwillingness of people who can actually make a difference- politicians and big tech CEOs, who, let’s be honest, actually run our society- to change the status quo or care about anything other than profit is a far bigger problem than just AI.

For example, I just read the latest batch of papers released under duress from Facebook, and to no one’s surprise, they were aware that Facebook was a hub for vaccine misinformation during the earliest days of the pandemic, and they actively chose not to do anything about it, because they didn’t want to lose political influence :man_shrugging:t3:

The video in question is right to be sounding the alarm, but an alarm focused on AI-using artists, like this one, is completely misplaced. You’re entirely right, we’re all in trouble until we can get people in charge with even a hint of empathy or any ambitions beyond fame and power. AI is a tool that has been corrupted to exploit “the little guy” by giant corporations and attention-starved tech CEOs. Just like everything else in our society.

Our best way to fight back, other than voting, is to refuse to play their twisted profit-grabbing game and choose instead to support individual people. Or as you said:

6 Likes

oh wow… but what’s worse is that ‘no action’ in a corporate environment is often seen as less risky than taking any kind of action…

Yeah… it’s almost like screaming at people using newer versions of Photoshop, because Adobe added content aware fill and Neural Filters to it…

If only that would work…

1 Like

Yesterday I generated over 100 AI paintings, and I am finding that the best ones are the ones seeded with my own actual paintings, especially ones that I made specifically for the purpose of using as a seed. So I am feeling a little less threatened by it. And I must say, it came out with some VERY cool stuff.

2 Likes

It does not make sense to say that a conclusion has no substance, when the sentences of the conclusion are a generalization of the substance of the speech.

He gives arguments and supports them with examples and quotes.

Just read the whole thing. Or watch the video sped up 2x.

We are not going to spend hours reverse-engineering his speech, misquoting him in an incoherent order, because you are not able to find 20 minutes to follow his articulation of thoughts.

Asking for interesting points is a dead end. When the value is in the path, in the building of the thinking, you should ask for a summary.

I am mortified by the fact that we are now helping AI acquire a characteristic of strong AI.
We trained it to recognize images via captchas.
Now, we are training it to imagine. Soon, it will dream of electric sheep.

Is it better to start building a strong AI by giving it an imagination fed with pictures from social networks, which are very far from reality, or to train it with philosophical essays and spiritual texts that could give it a moral compass, one that could potentially lead it to hate us?
How do you shape an immature mind into a decent being when its playground is the world wide web, and its tutors are private companies motivated by profit?

The fact that the situation is already bad should not stop us from trying to keep it from getting worse.

2 Likes

A lot of times I’ll see a post (or essay, or vid, or etc) cite a bunch of facts that, although true, don’t support the conclusion. In this case I agree with your conclusion, but with nothing else in that post.

(again IANAL/TINLA throughout)

Throughout these discussions, here and elsewhere, my frustration has grown with scattershot “solutions” offered in (at best) ignorance of the struggles and progress of other artistic fields, or in outright hostility (with the occasional sop of concealment) toward the artists in those fields, artists whom 3D artists should be in solidarity with and learning from, given their earlier experiences with this kind of thing. With the acceptance (if not outright embrace) of righteous anger as a supposedly moral suasion, despite the massive death toll of the last few years alone. With the rhetorical stunts and other tactics that, even in Off-topic Chat, mere accusations of (no matter how accurate) are verboten on this forum. Despite all that, I still agree that the current AI art issues are bad and, unless effectively countered, likely to get a lot worse.

But this must be done with awareness that much of what’s already been suggested is not only going to be ineffective, but would hand ammunition to our opponents.

AIs shouldn’t be trained on any data without the artists’ legal consent and payment for their work? Legal consent and payment is what they currently have in the music industries; ask those artists how well it’s worked out for the vast majority of them. The corporate owners blame-game the trickle of ripping (most of which is legit format-shifting) and piracy for artists being paid less than the protection fees they’re paying, all while raking in their lawyer-defended massive profits. The very idea of that as a solution plays into for-profit AI-corp hands.

That’s just one example, plenty of others have already been referred to. Can we PLEASE avoid at least the obvious failures that’ve allowed similar exploitation against other artistic fields?

I would do that if my intention were to have a discussion with the author. But if, according to the post I replied to, there are only some interesting points, I am not going to guess what those might be and spend even more time on it.

There is a reason why I asked which points Freddy_W found convincing. And I clearly expressed that I did not read the whole thing.
I simply replied with what I didn’t find convincing, because I was asked.

I don’t understand what you mean by “strong AI”. It is simply a few neural networks that have been trained on text and images, and to generate one from the other.

Are you serious? It generates images from texts. That’s it. There is no reason to believe it is sentient!
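Just to underline how mundane this is from the user’s side, here is roughly what “generating images from text” looks like in code. This is only my own sketch using the Hugging Face diffusers library; the model checkpoint is an arbitrary example, not something anyone in this thread mentioned.

```python
# Sketch: text goes in, an image comes out. Nothing in this pipeline implies
# sentience; it is a trained network sampling pixels conditioned on a prompt.
import torch
from diffusers import StableDiffusionPipeline

# Example checkpoint; any compatible text-to-image model would do.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a watercolor painting of electric sheep").images[0]
image.save("output.png")
```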

When it comes to dangerous applications of AI, it’s not the private companies I fear first, but the militaries!

Should this always be done, even if the neural network is just “inspired” by the work and cannot replicate it?

Edit: “Similar” to how humans can be inspired by the work of others.

Today’s top private companies have “militaries” and they are actively deploying them. Global elites have realized this for quite some time now. Their troops use what is termed “soft power”, though in fact it’s in some ways the most brutal, dehumanizing and effective kind. AI will rapidly accelerate and escalate this.

I’m not going to discuss this kind of direction any further, because what we have today is extremely far from a good enough understanding to be able to make decisions, especially in critical situations. It is going to be the responsibility of governments to control this with appropriate laws.

Hence my argument that AI is capable of incredibly good results if there is data it can use as a guide.

A quick example can be done in any Blender build: use the Denoise node on a Cycles render without the denoising passes, then compare the result with one that makes use of the passes; on some images the difference is almost night and day.
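If anyone wants to set up that comparison by script instead of clicking through the compositor, something like the snippet below should do it. It is a rough sketch against the Blender Python API as I remember it from recent builds (node type names, pass names), so double-check the identifiers in your version.

```python
# Rough sketch: enable Cycles denoising data passes and feed them into a
# Denoise node. Delete the Normal/Albedo links to compare against denoising
# the plain image only, which is the "night and day" test described above.
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
bpy.context.view_layer.cycles.denoising_store_passes = True  # extra passes

scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

rl = tree.nodes.new('CompositorNodeRLayers')
denoise = tree.nodes.new('CompositorNodeDenoise')
comp = tree.nodes.new('CompositorNodeComposite')

tree.links.new(rl.outputs['Image'], denoise.inputs['Image'])
tree.links.new(rl.outputs['Denoising Normal'], denoise.inputs['Normal'])
tree.links.new(rl.outputs['Denoising Albedo'], denoise.inputs['Albedo'])
tree.links.new(denoise.outputs['Image'], comp.inputs['Image'])
```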

It would make sense to me if an AI like Midjourney had an easier time making a permutation of an existing image than being told to generate a new one.

We can certainly trust governments to protect the common man. :roll_eyes:

Above you said you’d read the whole thing if your intention was to have a discussion with the author – you appear to be trying to have a discussion with me, but I don’t see how you could’ve posted this question after reading even just the paragraph that you quoted my line from, never mind the whole post.

I am asking a clarifying question instead of guessing what your opinion is.

Edit: If your answer was that the artists should be compensated anyways (the expected answer), my follow up question would be, whether that is only the case for machines, or if artists should compensate other artists too, if they used their work as inspiration?

Maybe we should return to the year 1540…

If my opinion as stated in that paragraph is so poorly expressed that you’re sufficiently unclear on it to ask that question? I can’t think of anything I can post that could possibly clarify it for you, sorry.

I know plenty of artists who first get a bunch of reference visuals when they start on something new. Those references are from the internet and the people who created them never get to see any compensation. As far as I can see, this is a common practice.
I simply tried to understand if it makes a difference whether humans have such references or if computers have them.

The incoherence here is that although you did not want to watch the whole video, you commented on its conclusion.

The video starts with an invitation to consider that AI will evolve.

Currently, the result corresponds to a bad photobash on the first iteration, if an artist does not correct the AI’s wrong anatomy, incoherent shadows, etc.

An important point of the video is that data is collected by this software.
That data will be used to create future generations of AI tools, generations that will no longer lack artistic technique.
The collected data can also be used to determine the cultural connections made by the humans who write the prompts.
The fact that the experimentation is worldwide will help them determine the most widely shared representation among humans evoked by a specific word or text.

Picturing an image from a description is a basic element of communication between humans.

There are already AI voice bots specialized in audio communication.
That will be consolidated by research on images.
It would be naive to think that all the fields where AI is used will stay separate,
and that nobody will experiment with a convergence.

Given the scale of their sample of humans, AI should end up understanding the desires of the majority of humans better than any individual does.

Who said they have no partnerships?
Who said that no researcher is making a career moving from one to the other?

If nobody cares about changing current policies, that could be a return imposed on us.

There will be an end to all energy sources that are not renewable.
Climate change threatens to decimate two thirds of humanity over the next 30 years if we keep burning oil and coal the way we do.
The rise in temperature increases the evaporation of water. Without water, arable land becomes desert, and humans don’t live sedentary lives in deserts.
So drought, starvation, and wars driven by migration are about to kill more humans than Thanos in Infinity War, only over 30 years instead of the span of a snap.

I doubt that the energy consumed by AI will help us reduce our carbon footprint.
But we could use our machines to improve the efficiency of renewables, design small cities that don’t require a lot of transport, and promote green building, permaculture, low tech, etc…
A lot of solutions that were not documented or studied in 1540.