The feasibility of coding Blender to work with Brain machine/computer interfaces

Like this one

I think it would be worth the effort to develop the code needed for Blender to interact with such an interface in the near future. It would likely speed up development in artwork and production, and adopting this early could give Blender an industry advantage. This type of interface, I believe, could provide the ultimate blending experience.

What’s the priority of developing Blender to work with this when it becomes available for the masses?

It seems that at the moment all it's good for is simple on/off commands and, still under development, something as simple as a remote control. Blender-level usability is still a ways away, and I'm not sure how you'd have to train your brain to reliably control something like that.

On the other hand, having Blender on something like this would be amazing.

Coming from an architectural background, something like this multi-touch interface would speed up digital drafting a thousandfold, and the same could be said of 3D modeling, especially organic methods such as sculpting. I can't wait for stuff like this to hit the shelves.

Here’s another example

The product shown here is likely further along and should be ready for applications by 2008. It also uses brainwaves, and I think it has shown the most promise of all such applications so far. Granted, it's mainly being developed for games, but someone may find a use for it with Blender.

At Emotiv, we’ve created a robust system and methodology for detecting and classifying both human conscious thoughts and non-conscious emotions. This revolutionary patent-pending neural processing technology makes it possible for computers to interact directly with the human brain. By the detection of thoughts and feelings, our technology now makes it possible for applications to be controlled and influenced by the user’s mind.

Honestly, I would guess something like this would be easy to integrate with. It's just a human input device and would sit on top of the OS event system, so you may not even need to do anything to make it work.

The manufacturer does the bulk of the work for something like this.
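To illustrate that point: if the headset's driver translates detected thoughts into ordinary OS input events, the application's event handling is unchanged. The sketch below uses entirely hypothetical names (`Event`, `App`, the bindings) and is not real Blender or Emotiv API; it just shows an app that cannot tell a keyboard event from a BCI event, because both drivers emit the same event type.

```python
from dataclasses import dataclass

@dataclass
class Event:
    kind: str  # e.g. "key" or "button" (hypothetical event categories)
    code: str  # which key or command fired

class App:
    """A toy application that binds actions to generic input events,
    regardless of which physical device produced them."""
    def __init__(self):
        self.log = []
        self.bindings = {("key", "g"): "grab", ("button", "push"): "confirm"}

    def handle(self, event):
        # The app only inspects the event itself, never its source device.
        action = self.bindings.get((event.kind, event.code))
        if action:
            self.log.append(action)

app = App()
# A keyboard driver and a hypothetical BCI driver both translate hardware
# signals into the same OS-level events; the app cannot distinguish them.
keyboard_event = Event("key", "g")    # user pressed the 'g' key
bci_event = Event("button", "push")   # user *thought* "push"
app.handle(keyboard_event)
app.handle(bci_event)
print(app.log)  # → ['grab', 'confirm']
```

This is the design choice the poster is describing: the driver does the classification work once, and every application downstream gets it for free.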

This is quite possibly the most unrealistic feature request I’ve ever seen.

CD: Sit back for five minutes, and think about why this isn’t even remotely plausible for at least the next ten years.

It is plausible, but maybe not entirely beneficial in its current incarnation. The commands they can generate sound fairly simple and won't be replacing the mouse any time soon, but there is definite potential for something like this.

It's plausible and entirely possible, but maybe not advanced enough yet.


I think chances are strong that Jean-Luc Peurière, in his refactor of the event system, will take multi-touch screens into account.
As for taking brainwaves as input: I agree that there isn't anything very special to do to integrate it into any app, since it'll talk to the OS directly. Besides being silent, I also see little use for it, for non-handicapped people, over common speech recognition.

OSes and drivers exist for a reason!

Well, it will get better, like all new technologies do; the systems by Emotiv and Hitachi are only in their first incarnation. The system Emotiv plans to sell in 2008 will actually be one of the first brain-computer interfaces affordable to the masses. I'm sure future versions will keep improving and so be able to handle a wider variety of computer applications (and more Blender functions).

CD: I think you should try coding something in Blender yourself before asking other coders to develop these complex features. I bet you'll find that coding even very simple features can be very difficult :slight_smile:

Last month I saw, on a TV channel (History Channel), a guy playing the old skategirl demo with his mind.
You can see some info at BlenderNation (from last year):