I need help with a strange proposition: using Blender to code a real brain…
I need a system that randomly sorts through node configurations, assigning things "observed" by an AI into a list that is sorted semantically, then uses those objects to "think", doing random things until it meets a condition. I have a system built from:

- a "sensor cone eye"
- a "reactor" (eye sees candy, apply force to body)
- a "Body" (stores variables and is used as the parent)
- a "Head" (vision cones are attached to it, and collision designates feeding)

so it can apply real forces based on its motion, gather resources, and learn patterns based on meeting needs. Eventually I hope to write a weighted, self-adjusting hierarchy of needs…
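For the "sensor cone eye" part, here is a minimal sketch of the geometry test, kept as plain Python rather than bpy so it runs anywhere; the function name and parameters are my own placeholders. In Blender you would feed it the head object's location and its forward axis, and a hit would then trigger the reactor (apply force toward the target):

```python
import math

def in_vision_cone(eye_pos, forward, target_pos,
                   half_angle_deg=30.0, max_dist=10.0):
    """Return True if target_pos falls inside the sensor cone.

    eye_pos / forward / target_pos are (x, y, z) tuples; forward does
    not need to be normalised. Hypothetical helper -- in Blender you
    would pass the head object's world location and view axis.
    """
    # Vector from the eye to the target, and its length (range check).
    to_target = tuple(t - e for t, e in zip(target_pos, eye_pos))
    dist = math.sqrt(sum(c * c for c in to_target))
    if dist == 0 or dist > max_dist:
        return False
    # Cosine of the angle between the view axis and the target direction.
    f_len = math.sqrt(sum(c * c for c in forward))
    cos_angle = sum(a * b for a, b in zip(forward, to_target)) / (f_len * dist)
    # Inside the cone when that angle is within the cone's half-angle.
    return cos_angle >= math.cos(math.radians(half_angle_deg))
```

A target dead ahead at (5, 0, 0) passes; one behind the eye, off to the side, or past `max_dist` fails.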
Some of the nodes will be populated by actions, like "attempt a cyclic motion" (like my lizard's run cycle),
but the more cones you have, with different behaviors triggered and run through, the better the AI could solve a goal.
So: randomly spawn creatures, or objects that use rules to meet needs.
Then, when a rule is "solved", a different list saves that node configuration as a semantic action, and these could also be grouped by the type of need.
Then an algorithm compares the saved "node seeds" and looks for similarities while the AI is "sleeping"; after that the data is purged, and the cycle starts again with the new day's goals/objects.
So this system could eventually be used to gather object data, think about what each part could do together or apart,
and see if it can find a solution, like: touch object with property "food", catch object with property "lizard".
It re-iterates using these "semantic random seeds", populating and connecting logic nodes until a condition is met. Think of a thinking boid in 3D…
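The "do random things until a condition is met" core might look like the sketch below; the action table and function names are illustrative only, with each "plan" standing in for a randomly wired node configuration:

```python
import random

# Hypothetical action nodes the creature can wire together; the value is
# each action's effect on a toy 1-D position, standing in for real forces.
ACTIONS = {
    "step_fwd": 1,
    "step_back": -1,
    "big_leap": 3,
}

def think_until(condition, plan_len=5, max_tries=10000, seed=0):
    """Randomly assemble action plans and simulate them until one meets
    the condition. Returns the winning plan (the 'semantic seed') or None."""
    rng = random.Random(seed)
    names = list(ACTIONS)
    for _ in range(max_tries):
        plan = [rng.choice(names) for _ in range(plan_len)]
        pos = 0
        for name in plan:
            pos += ACTIONS[name]   # "apply" each action in order
        if condition(pos):
            return plan            # condition met: save this seed
    return None                    # gave up -- try again tomorrow
```

Usage: `think_until(lambda pos: pos == 5)` returns some five-action plan whose effects sum to 5. The returned plan is exactly what you'd hand to `save_seed` above, which closes the loop between random "thinking" and the sleeping comparison pass.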
Someone poke some holes in this? Fill in the gaps? Thought is just as important as code on this one…
Starting in the right place is better than blindly stabbing in the dark.