An ASI could create something tiny yet extremely useful; amazing things don’t have to consume lots of resources. It could grow another copy of itself from a small seed and eventually fill a whole galaxy, or it could produce a small but powerful algorithm, say for encryption or lossless compression.
I don’t want to release too much information, but my AI’s vision system will be very efficient, and its crawl algorithm will be very efficient and simple. It will run at least in real time and learn very fast too. No cuts or switching between creatures, just the one baby.
That’s exactly what I mean about not saying what this is all about.
It is easy to have a vision of something that is better, more efficient, and learns much faster than everything else that exists. But when you actually want to build it, a clear plan with concrete goals, adjusted as needed, is invaluable. At least you have your vision.
You will need your AI baby to be creative and have curiosity. You will also need a database of some sort to store what it remembers.
Are you going to have it grow up?
Having it grow up through different bodies is not something I would be able to handle myself, so for now, definitely not. It could also turn out to be unneeded or even very disruptive. It’s quite possible that at some far-off stage it would be better off transferring to some other strange robot body or algorithm.
Of course: memories, databases, etc. Curiosity, yes; either I already know how to do that, or I will look for a way to make it emerge.
I know how to create the Imagination Generator, the thing that creates your imagination and dreams! It doesn’t account for all of our creativity, but definitely a good amount.
Dreams and imagination are both based on real-world experiences. If there is nothing to experience, there is nothing to build upon. And my understanding is that the experience part is not there yet, though I might be wrong about that.
You would need to know when to reward and when to punish based on some criterion, e.g. a data set you have. You would end up with a generator that matches your data set, which is not imagination.
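The point above can be sketched with a toy example (entirely hypothetical: the target string stands in for the data set): if the only reward signal is similarity to fixed data, the "generator" converges on reproducing that data. It imitates; it does not imagine.

```python
import random

random.seed(1)
TARGET = "imagination"                      # the fixed "data set" to match
CHARS = "abcdefghijklmnopqrstuvwxyz"

def reward(candidate):
    """Reward = number of characters that match the data set."""
    return sum(a == b for a, b in zip(candidate, TARGET))

# Hill climbing: mutate one character, keep the mutant if reward does not drop.
current = "".join(random.choice(CHARS) for _ in TARGET)
while reward(current) < len(TARGET):
    i = random.randrange(len(TARGET))
    mutant = current[:i] + random.choice(CHARS) + current[i + 1:]
    if reward(mutant) >= reward(current):
        current = mutant

print(current)  # the generator has simply reproduced its training data
```

Run it and the output is exactly the target string: rewarding against a fixed criterion produces a copy of the criterion, nothing new.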
Maybe it could create its own procedural imagination generator from outside stimuli like humidity, sunlight, or the dust content of the air. You need sensors for that.
Use an algorithm that responds to the outside environment to improve itself, to make a procedural imagination generator. There have been programs that have made other programs. The outside environment could be the internet or a forum.
Having an agent taking actions in an environment and getting rewards for it is reinforcement learning. That alone is remarkably complicated to implement. Imagination built on top of that has also been implemented and somehow works.
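To make the agent/environment/reward loop concrete, here is a minimal tabular Q-learning sketch on a made-up toy environment (a short corridor where the agent is rewarded only for reaching the last cell). It is an illustration of the reinforcement-learning loop described above, not a production implementation; real problems need far larger state spaces, which is where the computational cost explodes.

```python
import random

N_STATES = 5          # corridor cells 0..4; the goal is cell 4
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

random.seed(0)
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment: move, clip to the corridor, reward 1 only at the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection: mostly exploit, sometimes explore
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2, r = step(s, a)
        # Q-learning update toward reward plus discounted best next value
        best_next = max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
        s = s2

# After training, the greedy policy steps right in every non-goal cell.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)
```

Even this trivial environment needs a few hundred episodes to converge; scaling the same loop to Chess-sized state spaces is what demands the infrastructure discussed below.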
You are right that there have been programs that have at least partially made other programs. Did you look into how long that would take to compute on a conventional computer? Anything even slightly beyond trivial takes a significant amount of computational power.
Just to give a little bit of context:
There is a lot of research going on in the area of reinforcement learning, with impressive results presented on a regular basis. A recent example is AlphaZero, which taught itself games like Chess and Go based on little more than the rules, playing against itself, and thereby exceeded the previous state of the art. It has been reported that the Chess engine they built only took about 4 hours to train. But if you take a closer look, that happened on a massive cluster (or several clusters) of TPUs. It has been calculated that a decent GPU would need more than a year to do the same.
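A back-of-envelope check of that scale claim, using the figures reported for AlphaZero (roughly 5,000 first-generation TPUs generating self-play games for about 4 hours) and the simplifying assumption that a single decent GPU delivers throughput on the order of one of those TPUs:

```python
# Reported figures (AlphaZero self-play); the single-GPU equivalence is an
# assumption for the sake of the estimate, not a measured benchmark.
tpus = 5000
hours = 4

tpu_hours = tpus * hours              # ~20,000 device-hours of self-play
gpu_years = tpu_hours / (24 * 365)    # same work serialized on one device

print(round(gpu_years, 1))            # ~2.3 years -- "more than a year"
```

The exact ratio depends heavily on the hardware assumed, but the order of magnitude is the point: hours on a cluster translate to years on a single machine.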
This is just to illustrate the complexity of reinforcement learning. There is a lot of research on simpler problems too, but for reinforcement learning it is the norm that training takes weeks or months of computation time. The researchers have access to the infrastructure required to distribute that work, and it is usually not handled by the researchers alone: they work closely with engineers.
There are different areas in AI, like supervised, unsupervised, and reinforcement learning. Reinforcement learning is by far the most computationally expensive one.
That’s what some of the researchers are doing. When you consider the number of computations required, though, the cost may become an important factor. All of a sudden, you also have to make sure that computational resources are not wasted because machines are waiting for each other. Right now, reinforcement learning at this scale requires a team of experts, unless someone comes up with extremely good ideas to make it work a lot better. That is not impossible, but highly unlikely.
I doubt that Blender is a good choice for this kind of work as it is right now.
step 2 - go to the TensorFlow git repository, pull the source code, set up the compiler following the instructions in the patch, compile TensorFlow as a .pyd, and drop it into the addons folder.
step 3 - import TensorFlow and use the library for reinforcement learning via Bullet / the BGE.