General AI Discussion

You can’t program good design. I’ve been in graphic design long enough to have worked with b/w logo sheets that were provided by clients for paste-up. Logos (should be) designed with the intent to be viewed as 1-color and at varying sizes to test legibility and clarity. Sure you can program ‘make a thing’, but what algorithm tells you ‘this feels off’ or it just ‘doesn’t work’?

Users/customers accepting good enough for the cheapest price, well that’s a different conversation than good design. You can’t program good taste or good business decisions either.

4 Likes

There is evidence that we are quantum computers and that we ‘collapse a wave’ to think.

I am pretty sure one day AI will be able to ‘think’ and ‘extrapolate’ but it is a hardware issue.

Even if coherence only lasted a few milliseconds at a time, with light you could do tens of thousands of calculations in 2 ms.

1 Like

I personally am thrilled that believable AI voices have arrived to assist us animated filmmakers who work alone, but here we go…

https://aibusiness.com/verticals/video-game-voice-actors-strike-over-ai-concerns

1 Like

Another sign of the bubble bursting soon? :crossed_fingers:

This reminds me of this gif:

2 Likes

I feel like saying I told you so. :slight_smile:

2 Likes

There are a handful of companies burning money on AI like crazy. Is this a bubble in your view?
There are plenty of companies that rely heavily on AI on many levels. Sure, some companies go way over the top with it. Sure, there are plenty of idiots claiming there is something that doesn’t exist and won’t exist in the foreseeable future. But there are many things that work.
You might call it a bubble, but I am not convinced it is a significant one. It impacts a bunch of large companies that rode the fear-of-missing-out train too fast and too far.

2 Likes

Ha, in January I would have said it would take a lot longer to burst. But now it seems like every other day there is some news pointing in that direction. Some people even claim that it will crash this very month… Guess we will have to wait and see.

There are many uses for neural networks, but as I’ve always stated in this thread, the possibilities have been overinflated. At the end of the day this is still 70-year-old technology, and the mathematics behind it won’t be changing any time soon. I think using words like “learning” and “intelligence” gives people the wrong idea of how it works.

4 Likes

Yeah, obviously. AI as a field of computer science is wide and has many applications. It’s just the current hype around generative AI that was/is way overblown.

I think abusing those terms was done on purpose by a few companies trying to build hype around the topic to grab as much investor money as possible… And maybe to muddy the waters around the copyright infringement they were/are committing on a mass scale.


Edit: this popped up in my feed:

At a cursory look, it summarizes a lot of the recent news nicely.

5 Likes

lol

1 Like

That looks interesting until I zoom in and realize it’s garbage output

5 Likes

You’d think they’d at least do a quick touchup pass to make the Blender logo vaguely recognizable

6 Likes

And it’s super lazy (not even getting into the ethics of the training data). But also… grabbing a screenshot of the actual Blender UI would look way better and would be faster, too.

5 Likes

CG Cookie’s “New in Blender 4.2” video, which completely lacked any awareness of the problems with Eevee Next, starts to make more sense now.

3 Likes

Sometimes I wonder if I would have handled our current AI hype/fear cycle better if I had been a little older and more aware of the old dot-com bubble. By that I mean completely ignoring these unproductive discussions and just focusing on my art and what I can control, instead of letting the techbros, as well as those fearing a full-blown AI dystopia, concern me in any way. It feels like I’ve already gone through this unnecessary emotional rollercoaster with NFTs and the “Metaverse” recently, too. Turns out that people saying “it’s inevitable”, “it’s the future, like it or not” or “get used to it” in their marketing and evangelizing isn’t exactly a sign of market strength in general.

I dabbled in some AI art generation app in 2022 out of curiosity and found it sort of a cool way to help come up with ideas when I suffer from art block. Then I got in trouble for using it in the concepting stage, when other artists reminded me the technology skirts copyright, and I haven’t had any reason to touch it since. I won’t unless companies develop the humility needed to fix the ethical and technical problems and make the technology serve artists like me.

3 Likes

Maybe it would give you a better perspective, but probably not, simply because you deeply care about art (I lived through it and am still bothered).

Nobody can predict the future. Stuff might be ‘inevitable’ only in hindsight.

I did that too (as many people did before realizing how unethical the training is, among other problems with it). But I came to totally different conclusions: it actually is not good for this either. It’s smoke and mirrors; you feel like it helps you, but old methods, e.g. gathering references, work better (not as straightforward anymore since AI garbage flooded the internet and you have to work around it, e.g. by adding before:2022 to your Google searches). I also felt like I was making progress on my projects while prompting, but what I was actually doing was wasting time (procrastinating). Of course YMMV; that’s just my experience.

3 Likes

They said that the AI was going to learn from our interactions, but in my experience, once I coax a working solution from the AI by suggesting methods different from what it originally presents, the next day it has forgotten the entire interaction and attempts to use the same failed methods until I ‘remind’ it by pasting the previous day’s solution back to it to read. That doesn’t seem like ‘learning’ to me.

3 Likes

Oh yeah, that was probably the reason I so easily gave up on using AI for reference once I dealt with even the most minor controversy over using it (before it became REALLY controversial to the general public). When I really thought about it, at best, generating an AI image to use for reference was an unnecessary extra step. Because of the limitations of the technology at the time (and possibly still, due to datasets being corrupted by the existing AI images on the internet), the results were often too mushy and wonky to be used in the concepting stage by themselves. I often had to spend a day or two anyway gathering anatomy reference and other reference images with the amount of detail I was looking for, just to fill in the blanks left by the mushy, undetailed AI-generated image. I might as well have gone straight to finding anatomy and other detail reference and skipped generating the AI image in the first place, like I’ve always done.

Also, regarding how hard it is to find a reference image online that is FOR SURE a photograph and not a generated image with, say, inaccurate lights and shadows if you look REALLY closely: my workaround for now is to literally buy physical image reference books at my local bookstore, similar to how famous artists like Charles Schulz gathered references before computers and the internet became accessible and popular late in the 20th century. Many of them are in the “bargain bin” section, luckily for my wallet; they cover a wide variety of real-life subjects from organic to hard-surface/man-made, and all seem to have been published before 2015, which pretty much guarantees the illustrations and photos were actually drawn or taken by a human, given how old they are compared to the current AI art generation trend. And people wonder why the book industry is so resilient in the face of all these technological advancements that should have made print obsolete long ago: books aren’t at the mercy of the whims of greedy tech companies like Google or Microsoft, and you actually own these physical goods, to the point where they will likely outlive the people who originally bought them.

2 Likes

Who made that sort of claim? “They” can say essentially anything about AI.

All the available generative AIs I am aware of are not continuous learners. Meaning, even if they stored your interactions for training, you most likely wouldn’t notice anything for weeks or months, until the data was actually used in a training run.

It is very similar with humans. As in: if you copied someone’s brain to generate something, took another copy for every major step, then collected what had been generated and, once in a while, showed it to the original brain to learn from…
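That batch-retraining point can be made concrete with a toy sketch (purely hypothetical code, not any real vendor’s API): between training runs the weights are frozen, so the only “memory” a chat session has is whatever transcript you send along with the request.

```python
# Toy illustration (hypothetical, not a real API): a frozen model reacts
# only to the prompt it receives; nothing persists between calls.

def stateless_model(transcript):
    """Stand-in for a frozen chat model. Its 'default method' is baked
    into the weights; corrections exist only inside the transcript."""
    if any("use method B" in turn for turn in transcript):
        return "OK, using method B."
    return "Let's try method A."  # default behaviour from training

# Day 1: within the session, you steer it to the working method.
day1 = ["user: solve X", "user: that failed, use method B"]
print(stateless_model(day1))  # -> OK, using method B.

# Day 2: a fresh session starts with an empty transcript, so the
# correction is gone and the default behaviour returns.
day2 = ["user: solve X"]
print(stateless_model(day2))  # -> Let's try method A.

# The workaround described earlier in the thread: paste yesterday's
# exchange back in, so the correction is in the prompt again.
print(stateless_model(day1 + day2))  # -> OK, using method B.
```

Only a later retraining run (the once-in-a-while step above) could move the correction from transcripts into the weights themselves.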

Who made that claim? The entire mass media that touted AI as the new solution to everything, which was obviously a mischaracterization. If you want specific articles, you can look for them too; I am not here to argue about what specific person or agency said what. The basis of my point is that I was led to believe one thing, and then, after experiencing it for myself, I know better.

Learning during the interaction must be a farce too, then. I can coax a result through continuous input until it is operational, with the AI trying several solutions and reassessing the errors from the debugging output, but then it will circle back to a method already disproven in the session, and I have to call that out since we already tried it.

I’m not complaining that AI is useless; the opposite is true for me, since it still helps me with interaction, discussion, and problem solving in a way that helps me learn. I was just misinformed about its actual abilities and need to adjust my expectations.