I think the devs could do this earlier than Blender 5.0, because there have already been some breakthroughs. For example, we can already decode some imagery from brain activity.
So, from the TED video, you can see that brain activity patterns for various objects can be recorded into a sort of dictionary. Once we have a dictionary of brain activity patterns for various 3D shapes and textures, you could probably model with your mind alone, and virtual worlds would create themselves as you imagine them. An AI built from artificial neurons could decipher those thoughts with greater precision. After HD, 3D, 4K, HDR and VR, this would probably be the next trend the CG industry follows: a closer link with neuroscience. Of course, before Blender 4.0 arrives, these techniques would first appear in small experimental software at SIGGRAPH 20XX, and only after that in software from giants like Autodesk, Microsoft or Google.
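Just to make the "dictionary" idea concrete, here is a minimal Python sketch of how such a lookup could work. Everything in it is a hypothetical placeholder: real brain-activity features would come from an EEG/fMRI pipeline, not random vectors, and the decoding would be far more sophisticated than nearest-neighbour matching.

```python
import numpy as np

# Hypothetical "dictionary" of recorded brain-activity patterns.
# Each entry maps a feature vector (stand-ins for averaged EEG/fMRI
# features; random placeholders here) to a 3D primitive name.
rng = np.random.default_rng(seed=0)
PATTERN_DICT = {
    "cube":     rng.normal(size=64),
    "sphere":   rng.normal(size=64),
    "cylinder": rng.normal(size=64),
}

def decode_shape(activity: np.ndarray) -> str:
    """Return the dictionary entry most similar to the observed
    activity (nearest neighbour by cosine similarity)."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(PATTERN_DICT, key=lambda name: cosine(activity, PATTERN_DICT[name]))

# Simulate an "imagined cube": the stored cube pattern plus noise.
observed = PATTERN_DICT["cube"] + 0.3 * rng.normal(size=64)
print(decode_shape(observed))  # -> "cube"
```

In a hypothetical Blender integration, the decoded name would then drive something like `bpy.ops.mesh.primitive_cube_add()` to spawn the shape in the scene.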
I don’t know if you have seen this video, but this tech, combined with what we see in the Unreal Engine 4 VR editor, could look like this: