Advanced Human Baby AI Simulation in UPBGE !!

With a Featherstone solver and TensorFlow you could have it do less creepy things, like learning to walk, and maybe with a layer of clothing it would be even less creepy.

Then move on to making a game where you are a toddler sabotaging adults and setting booby traps while toddling around.

If an AI baby grows up and becomes smarter than I am, I’ll be REALLY mad at you. >:(
Already have to deal with smartasses, and now digital smartasses.

Good luck with your project. Glad to see you’re going mad scientist on this.

I would really like for the AI baby to learn to walk, speak, and be creative. I really like advanced AI stuff.

I am working next on having it learn to crawl or walk, understand and speak the English language, and super duper mad scientist thingies that will make it at least as smart as us. It’s my mission. My only mission.

Step 1 is getting TensorFlow to import and work.
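Since UPBGE embeds its own Python interpreter, a sensible first check is a small smoke test confirming that TensorFlow can be imported by that interpreter at all. This is a minimal sketch, assuming TensorFlow has been pip-installed into UPBGE's bundled Python; nothing here is specific to the project itself.

```python
# Smoke test: can this interpreter (e.g. the one UPBGE embeds) import TensorFlow?
# Assumes TensorFlow was pip-installed into the same Python that runs the game.
import importlib.util

def tensorflow_available():
    """Return True if TensorFlow is importable by this interpreter."""
    return importlib.util.find_spec("tensorflow") is not None

if tensorflow_available():
    import tensorflow as tf
    print("TensorFlow", tf.__version__, "imported OK")
else:
    print("TensorFlow not found - install it into UPBGE's bundled Python")
```

Running this from a controller script inside the game engine (rather than a system shell) is what actually proves the import works where it matters.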

Why are you creating a thread if you have no intention to share what you are actually doing? What is the purpose of this thread if not discussing what you are actually doing or trying to achieve?

Facepalm lol… Dantus, I have shown you what I created so far, and have explained why I am creating human intelligence: to better this world for humans and animals that are dying and suffering in pain. It will advance technology greatly.

What more do you want?..

To understand means to realize everything has a much deeper meaning. You will have to program your human baby AI simulation to do this.

You could question what understanding means… If someone says they understand something, do you agree they really do just because they say so? Or is it the actions performed on the world that prove it? I.e., they must draw up a house blueprint or build a tower to show they understand anything, else it’s all just sensory impressions in your mind! Or is it the senses, their connections, and their +/- rewardings that determine whether they “get” the knowledge? So it could be actions, or senses, or both, that define what understanding means. Also, if you can’t see or remember something well enough to make out what it is, then you don’t “recognize” the input, whereas if you did, you would select pre-learned reactions (actions to do now, or senses to remember, i.e. knowledge about what you’re staring at).

This has nothing to do with a technology. It would at best be a research project.

You are showing a baby. How are you going to use that baby to make any advances in AI?
Is it supposed to learn like human beings? If yes, do you plan to integrate social interactions and mimicking of others in order to learn?
Human beings need a lot of guidance when they grow up, and they need a lot of help just to stay alive. Babies are exposed to a huge amount of sensations during that time. Are you going to simulate all the social interactions and sensations a baby is exposed to?

Well some of the things I may add to the baby could be so powerful it definitely seems like technologies to me…I like to call them technologies lol. And, once we have a digital human intelligence, it will change all of the world greatly, so this definitely affects technology massively in all ways…

“How am I going to use that baby to make any advances in AI?”
Well…by just that…the AI baby will be the advancement…the smarter it becomes and the more I improve it, the more human it becomes and the more likely we will change Earth.

I may well incorporate some sort of interaction between the baby and us, and/or feed things into its brain.

How do you make the baby smarter?

It gets smarter/more intelligent through the AI algorithm I code and improve, and through things like teaching it after running the project.

Does the baby learn by experiencing the world around it? Does the baby learn directly or do you train something independent of the baby and then feed that knowledge into it?

The baby will learn to, e.g., crawl by being in the world by itself, yes, and it may also learn by me gathering, say, skills and then at some point injecting them into its brain, maybe even while the project is running. Both ways will likely happen.

How is it supposed to perceive the world around it? How are you defining the objective that it should learn to crawl?

By different types of senses, like somatosensory, visual, accelerometer, and other variants. To learn to crawl, it essentially must generate actions, reward them, and then know when to use those actions.
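The "generate actions, reward them, know when to use them" loop can be sketched in miniature as an epsilon-greedy bandit over a handful of candidate motions. This is only an illustrative toy, not the project's actual algorithm: the action names and the reward model (pretending `kick_legs` moves the baby forward most reliably) are made-up assumptions.

```python
# Toy sketch of reward-driven action selection (epsilon-greedy bandit).
# Actions and rewards are hypothetical; a real crawler would get reward
# from physics (e.g. forward displacement per timestep), not a lookup table.
import random

ACTIONS = ["push_left_arm", "push_right_arm", "kick_legs"]
TRUE_MEAN_REWARD = {"push_left_arm": 0.2, "push_right_arm": 0.3, "kick_legs": 1.0}

def learn(steps=500, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    value = {a: 0.0 for a in ACTIONS}   # running estimate of each action's reward
    count = {a: 0 for a in ACTIONS}
    for _ in range(steps):
        # explore occasionally; otherwise exploit the best-known action
        if rng.random() < epsilon:
            a = rng.choice(ACTIONS)
        else:
            a = max(value, key=value.get)
        reward = rng.gauss(TRUE_MEAN_REWARD[a], 0.1)   # noisy simulated reward
        count[a] += 1
        value[a] += (reward - value[a]) / count[a]     # incremental mean update
    return value

print(learn())
```

After a few hundred steps the estimate for `kick_legs` dominates, so the greedy choice "knows when" to use it. A physically simulated baby faces the much harder version of this: continuous actions, delayed rewards, and a huge state space.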

And that is all being executed in Blender?

Yes hehe! It will be efficient enough!

You assume that it is fast enough. As far as I understand, you haven’t trained it at all. If I look at comparable projects, where they taught e.g. a spider- or ant-like creature to move from one position to another, it took them a significant amount of computation time, because deep reinforcement learning is extremely data hungry and requires a lot of time to get better.
There are companies that create their own training environments or applications for this sort of problem in order to speed up training times. Considering those are companies like Facebook or Google, which have basically as much computational power available as you can get for this sort of project, I am skeptical that it is going to be efficient enough!
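A quick back-of-envelope calculation makes the data-hunger point concrete. The numbers below are illustrative assumptions, not measurements: published deep-RL locomotion results often need on the order of tens of millions of environment steps, while a game engine stepping physics in real time at 60 Hz collects only 60 steps per second.

```python
# Rough wall-clock cost of collecting deep-RL experience inside a
# real-time game engine. All figures are illustrative assumptions.
def training_days(env_steps, steps_per_second):
    """Days of wall-clock time to collect env_steps of experience."""
    return env_steps / steps_per_second / 86400  # 86400 seconds per day

# ~5e7 steps is a plausible budget for a locomotion task;
# 60 Hz is real-time physics, 60 kHz stands in for a fast headless simulator.
print(round(training_days(50_000_000, 60), 1), "days at real-time 60 Hz")
print(round(training_days(50_000_000, 60_000), 3), "days in a 1000x-faster headless simulator")
```

This gap, roughly a week and a half of nonstop real-time simulation versus minutes in a purpose-built trainer, is exactly why those companies build dedicated training environments instead of training inside the rendering engine.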