General AI Discussion

The staff says they have a rule against conspiracy theories; however, a funny thing has been happening in recent years: a number of them are coming true. Are many of them really conspiracies, or are they simply information that the media and the government do not want us to know about?

In addition, inverse conspiracies are an interesting concept, but the forum allowing discussion of those might look like a double standard (since they, too, are technically unproven).


Actually, that’s not how I meant it.
Learn about how AI learns, and do the same: create from imagination, embrace your weaknesses, get along without all that explicit copying.

You’re human after all. Weakness is strength. :slight_smile:

Well, it depends on how you define “free will”. I would define it as not being influenced by anyone else in your decision making. And I promise, no one else ordered me to quote that earlier for no reason. :crossed_fingers: Therefore, I stand by my assertion that it was purely free will.

Great observation! My text can be improved. Using the word “humanity” broadly in this context is misleading. A more fitting wording could be “our society” or “a developing society (with a focus on technology)”. Aah… the possibilities of designing a good, comprehensive text are endless and intriguing!

The reasons I used “humanity” in this context:

  • In my laziness, my subconscious made loose claims: “bruh! so like, in the super far-off future, it’s totes possible that everything will be run by awesome robots, that is so dope dude!!!”.

  • I did not proofread my text well enough.

  • Somewhere, in my subconscious again… it said: “Hyssch… no one is cleverrr enough to question our little secret of lazy wording…” Well, f*** you, subconscious. “Thanks” a lot.


Not even bacteria in your gut influenced your decision making? :grin:

(and btw. in the pure philosophical sense… I don’t think it can be proven one way or the other, at least not with our current limited understanding of how our ‘minds’ function)

To be fair, I don’t think anybody knows what’s at the other end of the ride we are on now. It’s like trying to predict, back in the 1970s*, how the always-connected smartphone would transform our society. And assuming even a moderate rate of progress, AI will probably** be more transformative than the smartphone.

(*Sure, some people guessed correctly, like Arthur C. Clarke, but he also made a lot of predictions that didn’t come true.)
(**Mind you, that’s my prediction, which might be totally incorrect :grin: but it’s fun to imagine what could be.)

That was an interesting one, because the bacteria in one’s gut contribute more to your personality than previously thought. That’s probably where the old saying “You are what you eat” comes from. However, this is an entirely different topic. :slight_smile:

Once upon a time, I said something in another thread. I think it would be suitable to quote it as a response here:

End of quote and point.

Agree. In the end, what we are discussing is not something that we can determine with certainty. Yes, it’s fun to imagine :slight_smile:


I am skeptical that we can currently learn from AI how to learn (based on how it learns). It is known to be highly inefficient compared to humans, requiring millions or billions of tiny updates to get better. Maybe the learning process could be split up into curriculum-style learning (in the sense of structuring the dataset). Overall, I would be surprised if we could improve our understanding like that, especially because psychology (motivation, …) plays a very important role for us, while that’s not an issue for computers.
We might be able to learn from diffusion models how they refine images over time. Possibly models could be trained to produce more human-like outputs for that.
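Curriculum-style learning in the sense above (structuring the dataset from easy to hard) can be shown with a toy sketch. The difficulty score used here (sample length) is purely an assumption for illustration; real curricula usually derive difficulty from a model or annotations.

```python
# Toy sketch of curriculum-style learning: order training samples
# from "easy" to "hard" before feeding them to a learner.
# Using len() as the difficulty score is an assumption made up for
# illustration; real curricula use model- or annotation-based scores.

def curriculum_order(samples, difficulty):
    """Return samples sorted from easiest to hardest."""
    return sorted(samples, key=difficulty)

def batches(samples, size):
    """Yield fixed-size batches in curriculum order."""
    for i in range(0, len(samples), size):
        yield samples[i:i + size]

dataset = ["a cat", "the quick brown fox jumps", "hi", "dogs run fast"]
ordered = curriculum_order(dataset, difficulty=len)
# The shortest ("easiest") sample now comes first; a training loop
# would then consume batches(ordered, size) in order.
```

This only reorders the data; the point of a curriculum is that the learner sees simpler examples before harder ones, which can make the billions of tiny updates mentioned above somewhat more sample-efficient.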

Sometimes I try to be optimistic about our future when it comes to A.I. But not today, because truthfully I can’t help feeling as I put it above some weeks back.

Are we creating our own extinction?

I wonder what you think about this 10 min Ted talk on the subject.


That is awesome!

While it’s hard to predict how significant AI will be in the future, I’m sure that generating textures and materials will soon be a common workflow when working with 3D.

I hope Blender doesn’t fall behind on this technology.


I definitely see generating textures as something that will be standard soon, maybe even in video games for generating levels that are different each time you play. Much of the “the sky is falling” nonsense around AI won’t happen for decades and decades, but textures are definitely possible and should be coming soon.


The GPL might impose strong limitations on what the BF can do with the technology, and whether they can even include such technology.

Even if the BF were fast on the ball with this, there are plenty of other issues within Blender that need to take higher priority, and I can almost guarantee that the BF will miss the opportunity and Blender will end up being the last to get such functionality (as happened with a proper OpenSubDiv implementation compared to other standard tools).

In short, FOSS will happen, and you will be reminded that you are using free software. Just be grateful that Blender and Maya can now be used in the same sentence.

It is likely that the Blender Foundation lacks the expertise to even develop its own AI (correct me if I am wrong, I am not well-versed in how difficult or easy it is to program and train good AI). What is more realistic, however, is that the Blender Foundation will be sponsored by another organization that allocates extra personnel to develop such AI technology for Blender, while the Blender Foundation’s own staff continue working on their priority projects as usual.

(Isn’t it a bit like what they already do with other projects? Nvidia, for example, allocated personnel to work on OptiX for Cycles, if I remember correctly.)

The second more realistic alternative is that this is solved through addons, e.g. Stable Diffusion releasing a version of SDXL designed to easily generate textures, which can be used in Blender via API.
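As a sketch of how such an add-on might talk to an external texture-generation service: the endpoint shape, parameter names, and helper below are all hypothetical, invented for illustration; a real add-on would follow the actual service's API documentation.

```python
# Hypothetical sketch of how a Blender add-on might prepare a request
# for an external texture-generation service over HTTP. The parameter
# names ("prompt", "tileable", etc.) and the payload shape are made up
# for illustration, not any real API.
import json

def build_texture_request(prompt, resolution=1024, tileable=True):
    """Assemble a JSON-serializable payload for a (hypothetical) endpoint."""
    return {
        "prompt": prompt,
        "width": resolution,
        "height": resolution,
        "tileable": tileable,  # seamless textures matter for 3D materials
    }

payload = json.dumps(build_texture_request("rusty metal plate"))
# An add-on would POST this payload, receive an image back, and load it
# as an image texture on the active material via the Blender Python API.
```

Keeping the generation on a third-party server like this also sidesteps the GPL question raised earlier, since the model never ships with Blender itself.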

In short, everything probably depends on 3rd parties and how fast they react.

(If not for Ton Roosendaal revealing that he has studied a lot about AI and is fully occupied with programming it himself for Blender in the present moment X-D).

It seems that Steam doesn’t even want NPCs controlled by ChatGPT:

https://www.reddit.com/r/gamedev/comments/167iied/the_game_ive_spent_35_years_and_my_savings_on_has/

I think that’s fantastic.


Well, reading the discussion is interesting. Steam does allow some projects with AI-generated content, including textures made by Midjourney, apparently. It’s not clear where the line is drawn, or whether the AI itself was even the primary cause of this rejection. Some people think the game was rejected because it asks users to type in their OpenAI API key, which could be seen as a sketchy attempt at API key harvesting.


I’ve read a few discussions and articles (since the whole thing became a thing). And it seems to me that Steam did allow such content in the past, but introduced this policy only in recent months. They didn’t apply it retroactively. And obviously a lot of things can get published if they don’t get flagged for manual review.

But also, from what I read, there is no separate, clear policy from Valve about that, just some comments from their spokesperson quoted in a few press articles.


The way AI is going now is very dangerous, I think.
AI should go the “Star Wars” way. Robots (like C-3PO or R2-D2) should work fully offline (for security), should be able to mimic human behaviours, and should be aware of their own existence.
For example, in the future we should have fully autonomous housekeepers.

And what about art? AI should have real hands and fingers and “generate” art by painting like a human, or by using a computer like a human. Then it will be a thing.

And robots should look familiar. Like this:

Or this

Current robot developers have no taste. Their projects are awful.

In the case of the ChatGPT topic, it is pretty much irrelevant what Valve’s policy looks like. If you allow an external program (whether it is AI-driven or not) to control the dialogs within the game, the developer essentially gives up control of what happens within those dialogs and has no way to ensure any laws or policies are being followed at all.


As an experiment, I made an entire animated movie in 76 days over the summer, where the story outline was generated by ChatGPT and all of the voice acting is done by AI as well.


Have any of you seen this already?
