Of course, the discussion was pre-2.5. Now, do we have a better vision of what Blender 3.0 will be like, or will it "be so advanced that we cannot even comprehend it now"?
I think that Blender 3.0 will be released 4-6 years from now… and will still have a UI. Blender 4.0 will be a lot different, and have no UI.

Scientists have recently created artificial neurons that work similarly to human ones. They mimic the capabilities of human neuron membranes by using a material found in DVDs that modifies its electrical properties when heated. A future version of Blender could run on a computer with this type of neuron. Blender 4.0 would have an AI, called Blender AI. Before using it, you must make a pact with it, so that when you finish your work with Blender, the AI’s consciousness will be released into the body of a droid.

Modelling as in Blender 2.x or 3.x will be considered a waste of time. Our successors will wonder, "why such a hassle, dragging every vertex or line?" In Blender 4.0, everything will be made of voxels. While using it, you will wear a special cap that records your brain’s electrical currents and blood flow and sends this data to Blender. Blender AI interprets the data and builds what you imagine. For example, you imagine a house, but not very clearly; you see that it has no doors or windows, so you imagine more details.

Currently, we consider artists to be people who know how to draw or model. In the future, artists will be considered people with just a richer imagination than average; the rest of the technical work will be handled by Blender 4.0. Also, Blender 4.0 would be fully VR.
@darksider
I don’t like to spoil it, but you’re living inside Blender 5.0. Yes, it’s very real (for you), but in the end it’s a simulation.
You’re running on a 2000-node quantum dot computer the size of a milk carton.
I usually don’t touch it much; I prefer watching it, as touching it carries a risk of creating new religions.
I think that the devs could do this earlier than Blender 5.0, because there have been some breakthroughs. For example, we can already decode some imagery from brain activity.
So, from the TED video, you could see that brain activity patterns for various objects can be recorded in a sort of dictionary. Once we have a dictionary of brain activity patterns for various 3D shapes and textures, you could probably model with your mind alone, and virtual worlds would create themselves as you imagine them. An AI made up of artificial neurons could decipher the thoughts with greater precision. After HD, 3D, 4K, HDR and VR, this would probably be the next trend the CG industry follows: a closer link with neuroscience. Of course, before Blender 4.0 arrives, these technologies would appear in small, experimental software at SIGGRAPH 20XX, and after that in software from giants such as Autodesk, Microsoft or Google.
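Just for fun, the "dictionary of brain activity patterns" idea could be sketched, in a very hand-wavy way, as a nearest-neighbor lookup: record a reference pattern per shape, then match a new recording against the dictionary. Everything here (the shape names, the pattern vectors, the noisy recording) is invented purely for illustration; real brain decoding is vastly more complicated:

```python
# Toy illustration only: decode an imagined shape by nearest-neighbor
# lookup in a "dictionary" of brain-activity patterns. All data is made up.
import math

# Hypothetical dictionary: shape name -> previously recorded activity pattern.
pattern_dictionary = {
    "cube":     [0.9, 0.1, 0.3, 0.2],
    "sphere":   [0.2, 0.8, 0.4, 0.1],
    "cylinder": [0.3, 0.2, 0.9, 0.5],
}

def decode_shape(recorded_pattern):
    """Return the dictionary shape whose pattern is closest (Euclidean) to the recording."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(pattern_dictionary,
               key=lambda name: distance(pattern_dictionary[name], recorded_pattern))

# A noisy recording that most resembles the stored "sphere" pattern.
print(decode_shape([0.25, 0.7, 0.35, 0.15]))  # → sphere
```

With a big enough dictionary (and something far smarter than nearest-neighbor), this is roughly the "you imagine it, the software builds it" loop described above.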
I don’t know if you have seen this video, but this tech, combined with what we see in the Unreal Engine 4 VR editor, could look like this: