Advanced Human Baby AI Simulation in UPBGE!!

An ASI sphere could create something tiny yet extremely useful; amazing things don’t have to consume lots of resources. It could grow another copy of itself from a small cell until it took up a whole galaxy of space, or produce a small but powerful algorithm, or a breakthrough in encryption or lossless compression.

I don’t want to release too much information, but my AI’s vision will be very efficient, and the crawl algorithm will be efficient and simple. It will run at real-time speed at least, and learn very fast too. No cuts or switching creatures, just the baby.

That’s exactly what I mean about not saying what this is all about.
It is easy to have a vision of something that is better, more efficient, and learns much faster than everything else that exists. When you actually want to create it, a clear plan with concrete goals, adjusted as needed, is invaluable. At least you have your vision.

Good luck with your project

You will need your AI baby to be creative and have curiosity. You will need a database of some sort to store what it remembers.
are you going to have it grow up?

Having it grow up through different bodies is not something I could handle myself, so for now, definitely not. It could also be unneeded or even very disruptive. It’s quite possible that at some far stage it would be better off transferring to some other strange bot body or algorithm.

Of course: memories, databases, etc. Curiosity, yes; either I already know how to do that, or I will look for a way to make it emerge.
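The "database to store what it remembers" part can start very small. A minimal sketch using Python's built-in sqlite3 module (the table name, schema, and sample memories are my own invention, not from this thread):

```python
import sqlite3

class MemoryStore:
    """Tiny episodic-memory store: the agent records what it observed
    about a subject and can later recall the most recent memories."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS memories ("
            "  id INTEGER PRIMARY KEY,"
            "  subject TEXT,"
            "  observation TEXT)"
        )

    def remember(self, subject, observation):
        self.db.execute(
            "INSERT INTO memories (subject, observation) VALUES (?, ?)",
            (subject, observation),
        )
        self.db.commit()

    def recall(self, subject, limit=5):
        # most recent memories first
        rows = self.db.execute(
            "SELECT observation FROM memories "
            "WHERE subject = ? ORDER BY id DESC LIMIT ?",
            (subject, limit),
        ).fetchall()
        return [r[0] for r in rows]

store = MemoryStore()
store.remember("red_ball", "rolls when pushed")
store.remember("red_ball", "bounces on floor")
print(store.recall("red_ball"))  # → ['bounces on floor', 'rolls when pushed']
```

Using `:memory:` keeps the store in RAM for testing; passing a file path instead would make the memories persist between runs of the simulation.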

You will need your AI system to have creative problem solving in order for it to be truly AI.

I know how to create the Imagination Generator: the thing that produces your imagination and dreams! It doesn’t account for all of our creativity, but definitely a good amount.

Dreams and imaginations are both based on real world experiences. If there is nothing to experience, there is nothing to build upon. And my understanding is that the experience part is not yet there, though I might be wrong about that.

Make a procedural imagination generator. You could try to build it from a procedural generator.

You could teach it through reward and punishment.

You would need to know when to reward and when to punish based on some criterion, e.g. a data set you have. You would end up with a generator that matches your data set, which is not imagination.
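To see why reward matched against a fixed data set only reproduces that data set, here is a toy sketch (the data set, the 3-letter search space, and the brute-force "training" are all made up for illustration):

```python
import itertools

dataset = {"cat", "dog"}  # the criterion that decides reward vs. punishment

def reward(sample):
    # reward exactly when the generated string appears in the data set
    return 1 if sample in dataset else 0

# A "generator" trained purely to maximize this reward: search the whole
# space of 3-letter strings and keep whatever scores highest.
alphabet = "abcdefghijklmnopqrstuvwxyz"
best = max(
    ("".join(chars) for chars in itertools.product(alphabet, repeat=3)),
    key=reward,
)

print(best, reward(best))  # → cat 1
```

The optimum it finds is just a memorized data-set item; nothing outside the data set can ever score, so nothing novel is ever produced.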

Maybe it could create its own procedural imagination generator through outside stimuli like humidity, sunlight, or the dust content of the air. You need sensors for that.

Before any imagination can take place, it still needs to have extensive experience with the given environment.

But I am not entirely sure whether we are talking about the same thing regarding “procedural imagination”. What do you mean by that?

Use an algorithm that responds to the outside environment to improve itself,
to make a procedural imagination generator. There have been programs that have made other programs. The outside environment could be the internet or a forum.

Having an agent take actions in an environment and receive rewards for them is reinforcement learning. That alone is remarkably complicated to implement. Imagination built on top of that has also been implemented, and it somehow works.
You are right that there have been programs that have at least partially written other programs. Did you investigate how long that would take to compute on a conventional computer? Everything that is even slightly beyond trivial takes a significant amount of computational power.
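To make the "agent, actions, rewards" loop concrete, here is a minimal tabular Q-learning sketch. The corridor environment and every constant are invented for illustration; real reinforcement-learning problems have state spaces astronomically larger than this, which is where the computational cost comes from:

```python
import random

random.seed(1)

# Tabular Q-learning on a 1-D corridor: states 0..4, reaching
# state 4 gives reward +1 and ends the episode.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                       # step left or step right
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

for episode in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # Q-learning update toward reward plus discounted future value
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# Greedy policy per non-goal state: +1 means "move right toward the goal"
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

Even this trivial corridor needs hundreds of episodes to settle; scaling the same loop to vision input and a physical body is what makes the training budgets so large.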

No, I did not investigate that. I am just giving ideas.

Just to give a little bit of context:
There is a lot of research going on in the area of reinforcement learning. Researchers are presenting impressive results on a regular basis, like recently AlphaZero, which taught itself games like Chess and Go almost entirely from the rules of the game, by playing against itself, and thereby exceeded the previous state of the art. It has been reported that e.g. the Chess engine they built took only 4 hours to train. But if you have a closer look, that happened on a massive cluster, or several clusters, of TPUs. It has been calculated that it would take a decent GPU more than a year to do the same.
This is just to illustrate the complexity of reinforcement learning. There is a lot of research on simpler problems too, but for reinforcement learning it is the norm that training takes weeks or months of computation time. The researchers have access to the infrastructure required to distribute that work, and it is usually not handled by the researchers alone: they work closely with engineers.
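A back-of-envelope calculation makes the gap concrete. The device count and the assumption that one decent GPU roughly matches one of those accelerators are my own rough illustrative numbers, not figures from the thread:

```python
# Why "4 hours to train" is misleading: the wall-clock time hides
# how many devices were running in parallel.
devices = 5000            # assumed accelerator count (illustrative)
wall_clock_hours = 4      # the reported training time

device_hours = devices * wall_clock_hours        # total compute consumed
single_gpu_years = device_hours / (24 * 365)     # same work on one device

print(device_hours, round(single_gpu_years, 1))  # → 20000 2.3
```

So a "4-hour" run can easily correspond to years of computation on a single machine, which matches the "more than a year on a decent GPU" estimate mentioned above.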

There are different areas in AI, like supervised, unsupervised, and reinforcement learning. Reinforcement learning is by far the most computationally expensive one.

What about using cloud computing for reinforcement learning?

That’s what some of the researchers are doing. When you consider the amount of computations that are required, the cost may however become an important factor. All of a sudden, you also have to make sure that the computational resources are not wasted because the machines are waiting for each other. Right now, reinforcement learning at this scale requires a team of experts, unless someone comes up with extremely good ideas to make it work a lot better. That is not impossible, but highly unlikely.
I doubt that Blender is a good choice for this kind of work as it is right now.

step 1 - see the patch allowing Blender and addons to use C extensions
https://developer.blender.org/D2835?id=9254

step 2 - go to the TensorFlow Git repository, pull the source code, set up a compiler following the instructions in the patch, compile TensorFlow as a .pyd, and drop it into the addons folder.

step 3 - import TensorFlow and use the lib for reinforcement learning via Bullet / BGE

step 3.b - replace creepy baby with robot

Or

Step 1: Do your research.

Step 2: Find out that there are other environments that are better suited for this kind of work.

Step 3: Find out that you need a shitload of computational power.

Step 4: Start with a project that is easier, like an MMO.