Oculus Rift Technology - Blender UI and the future

First, I’m sorry, English is not my native language and my writing may be imperfect, but I will try to do my best :wink:

Well, I do not see any other thread talking about this, but I want to share my opinion here. I’m starting a new thread because I do not want this opinion to get lost among the many comments about Blender’s UI.

I understand that Blender needs to evolve its UI as the software evolves.

And I understand that, for now, no substantial changes to the UI should occur.

At the same time, I believe that the current focus on a new UI for Blender is the wrong focus.

Well, when I was a kid, I had a dream… Bruce Branit managed to represent my dream perfectly in one of his creations. Since I saw this video, I have known that I was not the only one who had that dream. Please watch this video and you will better understand the reason for this post (especially if you are a code developer).

Incredible, right?

This may be possible. With Oculus Rift technology we can be inside a virtual world, almost physically.

Why waste time arguing about and working on a new UI based on old criteria?

New technologies offer many more alternatives for building a UI. Current mobile phones are a good example of how UIs adapt to new technologies (gesture controls).

Some companies are investing in the development of games, movies and various applications using the Oculus system. And as expected, in a few years it will not only be Oculus supplying a device to “enter” the application. It’s the (not too distant) future, I believe.

For people who do not know what I am talking about (the Oculus Rift): http://www.oculusvr.com/

Maybe not all Blender functions could be implemented with Oculus Rift technology… I don’t know…

But I think it could be very interesting for many other things: modeling, animating, texturing, sculpting, the BGE… even working with video. And more importantly, it could be revolutionary for the industry, which can affect the quality of the tools.

Of course, all this can bring compatibility or workflow problems (painting in Photoshop, testing the texture, making modifications in PS, reloading it, etc.), but sooner or later we will have to think about this one way or another.
(It is not my intention to start a technical discussion of “how to”.)

Well, I am a coder: PHP, some JavaScript, and lately I am learning Python for Blender scripting.
So I know this may be possible; time and money can do it.
Given my financial situation, I cannot afford the time to learn C and develop something for Blender/Oculus Rift. So until that is possible, the only option I see to experience my dream is to launch the idea and wait for someone else with this vision, time and money to make it happen (the Blender Foundation? Hehe, let’s face it, wouldn’t you make a $20 donation toward something like this video, even knowing that it could take three years of development or more?).

And meanwhile, I want to emphasize that (in my opinion) the UI of an application does not have to be conventional to be effective, and the importance of looking to the future.


You should try the Oculus Rift first… I tried it and felt like throwing up after about 5 minutes. It was a bit like motion sickness.
The technology will be great for games; the feeling of immersion is really there. But I really can’t imagine wearing that thing for hours on end, especially with nausea.

It has to do with a mixture of latency, resolution and depth feedback (moving your head forward doesn’t move you forward in the game world). These are all being fixed for the consumer version, even to the point of having 4K screens in the near future.
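The latency fix mentioned above largely comes down to predictive tracking: render for where the head will be when the frame reaches the screen, not where it was when it was sampled. A toy sketch of the constant-velocity version of that idea (the function name and all numbers are illustrative, not the actual Oculus SDK API):

```python
def predict_yaw(yaw_deg, yaw_velocity_deg_s, latency_s):
    """Extrapolate head yaw forward by the expected motion-to-photon latency.

    Constant-velocity model: assume the head keeps turning at the current
    angular velocity for the duration of the latency.
    """
    return yaw_deg + yaw_velocity_deg_s * latency_s

# Head turning at 120 deg/s with 50 ms of latency: without prediction the
# rendered image would lag about 6 degrees behind where the head points.
print(predict_yaw(30.0, 120.0, 0.050))  # roughly 36 degrees
```

Real trackers fuse gyroscope and accelerometer data and use more careful models, but the basic idea is this extrapolation.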

That said, something like the Rift for productivity is certainly doable, and actually potentially groundbreaking, BUT it would have to be within the context of AR, not VR (AR being augmented reality). They said AR is on their radar, but it’s not even part of the scope of the Rift for now. Until that changes, it should only be seen as an entertainment and visualization device.

Incredible! With OC + a new Blender, everybody can make Pixar-quality movies after one week of learning!

Wait, why doesn’t Pixar use OC-like things?

Oh, because they are amateurs.


@Muffy yes, true, I have not tried it and did not know about downsides like this. It is something new to develop; in any case, new things are not perfect the first time, neither a Blender as I imagine it nor OC technology, which is very recent.

BTW: http://www.youtube.com/watch?v=O3NbURH14ro

And I don’t know… but I think this problem happens because you see yourself walking without actually moving; you don’t feel your legs walking. But what about being in a flying chair?

I do not know, but I think that sooner or later most people (perhaps thanks to video games) will perceive fictitious 3D elements differently, and I think that could be interesting for creating a new user interface.

And yes, AR is another thing to consider, because it does seem to be part of our future. I agree.

@abc123 Well, it is not “incredible! with OC + a new Blender everybody can make Pixar-quality movies after one week of learning!”

Do you think Disney are amateurs?


Disney seems very interested in virtual reality. And why not use VR to create environments to work in, even if you must create a new device, like when someone invented the mouse?

And when I saw the Oculus, I thought it could be a start.

I do not think Autodesk/Pixar/etc. (for example) have the same opportunities as Blender to undertake a project of this kind; they have to think about the users they already have (who pay plenty of money) so they can continue making money, because it works. For the same reason, in many studios the software is not changed: because it works and they have their own tools. (I could be wrong; they handle a lot of money, I think.)

But the Blender community can do what it wants if it tries. The best example is Blender itself.

To avoid comments… I am not saying to stop developing or improving what is there. I believe that creating a department for this and raising dedicated funds for it would not be a problem for an organization like the Blender Foundation. Or at least, less of a problem than for Autodesk.

And really, I do not know if Pixar/Autodesk/etc. have done a study like Disney’s, or if they are working on it, or if anyone is working on it (virtual reality workspaces). What is clear (to me) is that we also have the right to think of alternatives, and therefore I do not think these companies are amateurs…

Keep in mind that with a VR headset on, you will not be able to see the keyboard and its keys. While in some cases it might work like a tablet with side buttons, you will be limited in interactivity. AR will allow the user to see their peripherals in a virtual environment. Don’t hold your breath for anything anytime soon that is more function than gimmick as far as production goes.

With haptic technology, you would be able to theoretically ‘touch’ objects in the environment and then use your hands to perform functions like sculpting.

What would really bring VR to the next level, however, would be a helmet that uploads sensory information to the brain as well as reading brainwaves. You would be able to feel the softness of objects covered in hair particles, or smell the stench of a Suzanne whose material indicates she hasn’t bathed in a while. And who says you would even have to be in a virtual human body to work in the virtual scene? (Okay, all of that may be more for something like the BGE and not necessary for general 3D scene building, but hey.)

I have a Novint Falcon; it actually allows users to touch and feel 3D objects in compatible applications. One can feel weight, touch surface texture and deal with force. A fun gimmick, but hardly usable for detailed work.

Haptic feedback is one side of future feedback/input. It is not good for precision, not yet, and it is extremely limited in how it can be applied, much less affordable. I can’t see it really helping in 3D sculpting or modeling; we don’t need to feel a surface to make good 3D art. Like VR, it is better suited for entertainment and general input (phone vibe-click) than for content creation. Maybe that will change, but I don’t see it, not yet anyhow.

First, Blender would need to be ported to something like VRUI (from this guy: https://www.youtube.com/user/okreylos).

There are lots of interface paradigms Blender uses that wouldn’t work very well in an immersive environment, especially the concept of dividing a flat surface into many views, many of which are inherently non-3D. I’m not sure how practical it would be to have to click on your own HUD in 3D, and having your main view detached from your head position would probably induce VR sickness. Instead of dividing the “screen”, you would probably create many floating screens for the things you want (some parented to your head, and some just placed in the world).
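Those “floating screens” boil down to a parenting transform: a head-parented panel keeps a fixed offset in the head’s local frame, so its world position is just the head pose applied to that offset. A minimal yaw-only sketch (real head tracking would use a full rotation such as a quaternion; all names and numbers here are illustrative):

```python
import math

def panel_world_position(head_pos, head_yaw_deg, local_offset):
    """Place a floating UI panel at a fixed offset in the head's frame.

    head_pos: (x, y, z) of the head in world space
    head_yaw_deg: head rotation about the vertical (z) axis
    local_offset: panel offset in the head's local frame
                  (x = right, y = forward, z = up)
    """
    yaw = math.radians(head_yaw_deg)
    lx, ly, lz = local_offset
    # Rotate the local offset by the head yaw, then translate by head position.
    wx = head_pos[0] + lx * math.cos(yaw) - ly * math.sin(yaw)
    wy = head_pos[1] + lx * math.sin(yaw) + ly * math.cos(yaw)
    wz = head_pos[2] + lz
    return (wx, wy, wz)

# A panel 2 units in front of the head follows it as the head turns 90 degrees.
print(panel_world_position((0, 0, 1.7), 0, (0, 2, 0)))   # approx (0.0, 2.0, 1.7)
print(panel_world_position((0, 0, 1.7), 90, (0, 2, 0)))  # approx (-2.0, 0.0, 1.7)
```

A world-placed panel would skip the rotation entirely and just keep its own fixed transform.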

Actually, haptic feedback is quite useful (and not just vibration; think skin shear and the whole spectrum provided by 6DOF force feedback). It’s quite hard to make arbitrary precise motions in the air while relying only on proprioception to figure out where you are and where you should go; being able to feel the “texture of the paper” makes a lot of difference. Many makers of 2D input devices like graphics tablets already know this: they texturize their tablets so you don’t get that floaty feeling of drawing on glass. Many gamers prefer joysticks that have clear center positions and some easy way to tell how much you’re pushing (for example, having the stick rotate as it is pushed, or spring strength as a function of deflection).

For 3D art, it would help a lot to feel how far you’re moving something from the position where you picked it up, to feel the “clicks” of the notches when moving things precisely, to feel how hard you’re pushing or pulling when sculpting, to feel the differences between faces, edges and vertices, and even to feel whether you’re touching an object or just air.
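The “clicks” of the notches described above are usually implemented as detents: a spring force that pulls the hand toward the nearest snap position, so crossing each notch is felt as a click. A toy sketch of such a force law (the spacing and stiffness constants are made up for illustration, not any particular device’s API):

```python
def detent_force(position, spacing=1.0, stiffness=5.0):
    """Restoring force toward the nearest notch on a virtual detent axis.

    As the hand moves along the axis, each notch at a multiple of `spacing`
    pulls it back like a spring, producing the felt "clicks".
    """
    nearest_notch = round(position / spacing) * spacing
    return -stiffness * (position - nearest_notch)

print(detent_force(1.25))  # -1.25 (pulled back toward the notch at 1.0)
print(detent_force(1.75))  # 1.25  (pulled forward toward the notch at 2.0)
print(detent_force(2.0))   # -0.0  (resting exactly on a notch)
```

The same idea, applied per axis, gives tactile grid snapping for precise moves.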

Things that provide parallax hints, like head tracking and stereo displays, help, but they aren’t the full solution.

I found this on augmented reality:

Augmented reality would be very useful for a program like Blender. It would be very nice to have a desktop (or a wall) with the tools you need and the scene as a hologram in front of you.

What I’m not convinced about regarding augmented reality applied to software for creating 3D, visual effects, etc. is that real space is limited. I mean, if you want to create a big scene and have the feeling of the scene at its real scale, you need the space to project it (if I understand correctly).

If you wanted to create a huge boat in a harbor, for example, I do not think a room 3 meters high has enough space to convey the feeling of the huge size of ships.
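To put rough numbers on that concern (the 30 m ship height is an illustrative assumption, not from any real project):

```python
def required_scale(object_height_m, room_height_m):
    """Scale factor needed to fit an object inside the available real space."""
    return min(1.0, room_height_m / object_height_m)

# A 30 m ship projected in a 3 m room must shrink to one tenth of real size,
# which defeats the whole point of experiencing it at real scale.
print(required_scale(30.0, 3.0))  # 0.1
```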

However, the “3D gestures” technology used in the first video would not be bad for controlling things in virtual reality.

Or something like this:

I think that for an artist, or a group of artists, having the feeling of experiencing things at real scale (or roughly so) can help in many ways.

As for the problem of the feeling of movement, there are people experimenting with it:

Personally (and I have not been able to test the Oculus), I think the solution lies in trying to trick the mind, as in the Oculus example:

You can always “start” and “end” from a point, which can always be the same; with a flying chair, it seems to be less aggressive in terms of orientation.

But really, to be sure, I think the best way is to test it, or to see someone doing a similar test (a “flying chair” as a reference, in an application that is not a war game).

Obviously, regarding the size of the glasses, etc., I guess it will go as it did with the first mobile phones or the first processors. I do not know how many years it will take, but I guess the objective of the people developing such devices is to make them as small and unobtrusive as possible.

(Excuse my long post; I could not help myself :p)

AR/VR is a hot topic these days but in terms of productivity and efficiency, there are fundamental issues that have not been addressed. Simply put:

AR/VR must provide an advantage over current interfaces.
In other words, a user must be able to complete a task or goal faster in an AR/VR system than with a traditional 2D screen/mouse/keyboard. Until it has more functionality beyond looking cool, it will be relegated to purely entertainment/visualization/naive interaction purposes.

One major issue is “gorilla arm”.
It’s a funny-sounding term, but it is a serious limitation. For example, to move an object or navigate the desktop with a mouse, you need only twitch your wrist/forearm. With AR/VR, you need to move your entire arm to move an object. If using the newer technology takes more time and energy to accomplish a task than the current technology, then that newer technology is moving backwards.

Another issue is tactile feedback (touch).
Tactile feedback is a crucial element of how we interact with the world; consider that our fingers have the highest concentration of touch receptors and thermoreceptors in the body, along with dedicated brain structures, our manual dexterity, opposable thumbs, wrist joints, etc. These AR/VR systems are dominantly visual-based. Think of how toddlers love to touch everything; it helps them learn and understand the world around them. Also, think of how people have taken a liking to touchscreens. I’ve read some studies of how scientists have been trying to interface electrical signals with the nervous system, but we’ve barely scratched the surface.

Another example is clicking. When you click a mouse, you can feel the button spring back, which lets you know you clicked. If you press virtual buttons in mid-air, there is no spring to indicate you clicked. Essentially, you’d need to learn the muscle movement/coordination of how far to move your finger through the air, which is much more variable since there is no physical hand/arm support like with a mouse. You could watch your finger, but if you have to think about clicking, that creates a distraction. Again, this creates more work. To compensate, maybe you could click/tap against a surface like your leg or a table.

I’ve tried grabbing virtual things in the air, and it is a good test of hand-eye coordination. This imposes a limitation on people who don’t have great hand-eye coordination (think general population versus elite athletes). Currently, gesture control is great in theory but does not play out well in the real world, which is why it is still not mainstream.

The mouse works well because we have fine motor control of our fingers and hands, and the physical support of a surface gives us great precision. We need only barely move a muscle, and the mouse amplifies those minute movements into something usable. In addition, because clicking is such a small and simple movement, it doesn’t require great motor control. Think about what makes the mouse extremely effective: our fingers move through only a small range/area, which means less energy/work/time is required to move through space, and we have fine motor control.
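That amplification is the pointer transfer function, often called pointer acceleration: slow movements map nearly 1:1 for precision, while fast movements are amplified so a small wrist motion crosses the whole screen. A deliberately simplified two-level sketch (real drivers use smooth curves; all thresholds and gains here are made up for illustration):

```python
def counts_to_pixels(counts, slow_gain=1.0, fast_gain=3.0, threshold=10):
    """Map raw mouse counts from one report to on-screen pixels.

    Small movements pass through almost 1:1 for precision; fast movements
    are amplified so the pointer can cross the screen with a small motion.
    """
    speed = abs(counts)
    gain = slow_gain if speed <= threshold else fast_gain
    return counts * gain

print(counts_to_pixels(5))   # 5.0   (slow, precise movement)
print(counts_to_pixels(40))  # 120.0 (fast flick, amplified)
```

A hard threshold like this would feel jumpy in practice, which is exactly why real pointer ballistics blend between gains smoothly.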

I agree, it’s more about AR than VR.
IMHO, I don’t think we have the computing capability to create a real-time VR that compares to the real world. The real breakthrough will be some sort of neural interface where we don’t have to physically move at all, or a computer that knows our thoughts before we do :p.

Note LEAP Motion, Mycestro, Myo, Duo3D, etc. They still haven’t taken off, but this is mostly due to software lagging behind hardware. And what’s limiting the software is the UI conceptualization and design itself.

I would really love to do sculpting with the Oculus Rift. There’s a huge difference between seeing actual depth and seeing just a flat screen.

I think a 3D display would be nice, but I’m not ready to adopt head-mounted displays.

A 3D display is not great because your position relative to the screen changes depending on what you sit on, how far you sit from the screen and at what angle you look at it. Also, today’s 3D glasses suck most of the time.

The Oculus Rift, though, is the perfect solution for an immersive and optimal 3D implementation because the screen is always in a single position. Also, you can actually look around with head tracking. OC skeptics just don’t get what it is about. It feels like you are in a different world, and it would be damn awesome to use that tech for sculpting: actually feeling the depth of the thing you sculpt and looking around it.

I definitely think the Oculus Rift could substantially improve the 3D sculpting workflow and the end result. I’m not as sure about other 3D creation aspects, but there could possibly be more uses.