[feature] Modeling & sculpting in VR

So now that the Oculus Rift and HTC Vive are both released and will soon be available to everyone, wouldn't this be the perfect time to discuss VR support for Blender?

We already have a stereoscopic viewport, which is a good start, but we need to go further. Think about adding support for the Vive's motion controllers and movement tracking; think about walking around your sculpture and sculpting as you go. On a scale of 1 to 10, how awesome would that be?

And it wouldn't just be cool; it would be easier to learn and much faster than the methods we have at the moment. The fact that the movements of your hands can be precisely replicated and used to sculpt with tools is a game changer. The amount of control is far beyond what you get with a traditional Wacom setup. Seeing the scale and perspective of things more naturally would also make the process much less painful.

And these are just from the sculpting perspective. I'm pretty sure there is far more to explore and benefit from in VR than just traditional sculpting. But we can safely say this is THE FUTURE: five years from now everybody will be sculpting in VR, and the methods we use today will be a thing of the past.

And since nobody has done this yet, let's wonder for a moment how it would affect Blender if it were the first software to integrate such support. I personally think it would be HUGE.

When will this happen? Is it already on some todo list? Should it be there? Discuss

And for people who aren't aware of these things, here are a few short clips showing some of the stuff I'm talking about:

[Attachments: video clips]


Try running around in your room, waving your arms around frantically for a couple of hours and then tell me that you want to do that 8+ hours every workday for the rest of eternity.

While I do think that VR can be good for judging scale and other things in 3D, I'm not convinced that the whole "working in VR space" idea is anything more than riding the hype train at this point. Then again, VR is still in its infancy, so better methods will probably be invented; until then I'm happy staring at a screen with my keyboard and mouse, thank you.


Working in VR is going to be just a gimmick for now, provided the technology stays at its current level.

For this to really work, the technology will need to improve head tracking (perhaps adding eye tracking), improve the field of view, and implement high-resolution haptic technology (so you can 'feel' the scene you're working on, which would improve precision and reduce the strain on your arms). Though I can't see why these requirements can't be met, given the intense research going into it right now.

Why though?

We have more important things that need to be developed than a marketing gimmick that's going to die out in two years.

Why would you need to improve head tracking, or need eye tracking? Why would you need a better FOV or haptics?
Head tracking is very good in current headsets. Eye tracking is not really needed for modelling or sculpting; it would be nice for selection purposes and similar things, of course, but you don't have that with your normal screen, so you won't need it just because your screen is a pair of goggles. The FOV is sufficient, and while haptics beyond the currently available rumble packs would be nice, they aren't necessary either, IMO.

The only thing I think really needs improvement is resolution. And you would need some way to counter frame drops below a certain threshold, or else people are going to get really sick really fast.
Also, you’d need a really good tool that can somehow compete with “normal” sculpting tools which have years of development behind them.

Of course, it is questionable whether an artist trained in a conventional sculpting tool would see any improvement in a VR sculpting tool.

It’s only the first generation though (and with all of the R&D going into these things, the technology by 2020 may make the current crop of products look like rough prototypes in comparison).

The early versions of some popular software like Blender (along with electronic devices we now use daily) weren't all that great either (and look where they are now).


Why would you need to improve head tracking, or need eye tracking? Why would you need a better FOV or haptics?
Head tracking is very good in current headsets. Eye tracking is not really needed for modelling or sculpting; it would be nice for selection purposes and similar things, of course, but you don't have that with your normal screen, so you won't need it just because your screen is a pair of goggles. The FOV is sufficient, and while haptics beyond the currently available rumble packs would be nice, they aren't necessary either, IMO.

The reason for eye tracking and the wider field of view is that they are the kind of small things that would stave off motion sickness, because they would make the virtual seem a little more real (people on gaming sites have commented that the current products feel almost like looking through binoculars all the time).

We’re talking about a different genre of display technology here, a genre that gives less tolerance to an imperfect experience because you’re actually replacing the view of the real world.

Well, either way: sure, they might improve, but is there any real benefit to using them over monitors for 3D modelling?

That's the first time I've heard of that. Usually motion sickness is the result of too low a frame rate, other kinds of lag, or camera control being taken away from the user: for example, head bobbing in first-person shooters, cutscene cameras, or the user entering loading zones that freeze the screen, as in Half-Life 2.

The current FOV is plenty, and if it has any effect on motion sickness, it is next to irrelevant compared to the issues mentioned above.

It feels nothing like looking through binoculars. If you are a skier or snowboarder, you have probably worn ski goggles at some point. That's pretty much the FOV you get in a Rift.

Nobody knows this. It has not been tested to the degree required to make a qualified statement, and no tools exist with which it could be tested at this time.

The recent review of the Vive should also give some pause as to whether the Blender devs want to expend resources on VR modeling, because the market is going to be very limited at first.

To have the full experience, you will need to spend at least 1500 dollars on the required hardware, but that won't matter if your living space is too small (so those living in apartments or places without big rooms may literally have to move to a larger living unit). Our house, for instance, does have spaces that could be used, but not without moving a bunch of stuff first.

Even if you get that part right, you may also need to buy shelving if you don't want to wall-mount the Lighthouse base stations (they have to be at a decent height and can only be pointed downward, it seems).

So yes, a very limited application for 3D work right now, though I can see a future where the arcade makes a major comeback by providing VR experiences that most people either can't afford or that their homes can't support.

You don't need the full experience. The Vive can use up to 3x3 metres of space, but it also works with less: it scans the space you have and afterwards "knows" how much is available. So if you only have 1.80x2.20, you can use that.
Shelves are cheap, and there are other cheap solutions like duct tape or monkey hooks, too.
As for the price: 1500 dollars isn't really that much for a decent 3D rig. A lot of people using Blender or other 3D apps routinely have rigs more expensive than that.

A problem is that you cannot control the content like you can in a game. In a game you can limit the number of polygons, textures, and other things influencing performance by designing it a certain way.
In a 3D app you obviously cannot do that, so users will experience severe frame drops and vomit all over the office when loading or creating a heavy file.
Perhaps simply fading to black when running into performance issues would be a solution. You could only create simpler objects, not complex scenes, but perhaps that would be enough to be useful.
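That fade-to-black idea could be sketched as a small frame-time watchdog. This is a minimal illustration in plain Python; the class name, thresholds, and window size are all hypothetical, not part of any existing Blender or VR runtime API:

```python
from collections import deque

class FrameWatchdog:
    """Tracks recent frame times and decides when to fade the VR view to black."""

    def __init__(self, min_fps=75.0, window=30):
        self.min_fps = min_fps
        # Rolling buffer of the most recent frame durations, in seconds.
        self.samples = deque(maxlen=window)

    def record(self, frame_time):
        """Call once per rendered frame with that frame's duration in seconds."""
        self.samples.append(frame_time)

    def should_fade(self):
        """True once the rolling average frame rate drops below the comfort threshold."""
        if len(self.samples) < self.samples.maxlen:
            return False  # not enough data yet
        avg = sum(self.samples) / len(self.samples)
        return (1.0 / avg) < self.min_fps
```

Averaging over a window rather than reacting to a single slow frame avoids flickering the view to black on a one-off hitch, which would itself be nauseating.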

Hilarious! It reminds me of an office where I worked once: you had to move a bit on the toilet or else the lights went off
(they only turned on when people entered and stayed on for two minutes based on motion detectors; there were no light switches).

I can imagine this being nice in games like Minecraft, but not very useful for creating game assets themselves.
VR glasses might be nice for observing your work (basically, having two viewports with two offset cameras would already work), but I think Blender already has this.

Another example of similar tech, called Pinch Draw.

I second the feature request.

I used Blender with VirtualDesktop and Oculus DK2 for about 150 hours on a daily basis.
Mostly sculpting with a tablet, but some vert pushing as well.

Try it before you diss it. It’s an unforgettable experience.

You don’t need Vive’s controllers, you can stick to a mouse and/or a drawing tablet, though having one of these to rotate the model would be great for sculpting.

The ability to see your workspace in 3d is a game changer. Try it ONCE. There is no going back.

I actually hold back my Vive purchase, waiting for a software that implements this.

Yes, it's obvious 3D is way better than 2D, and I don't see why people are calling this a gimmick. Right now, when modeling, you have to rotate your view a lot to get a feel for how the surfaces look. There's a UE4 VR editor on their GitHub: https://www.youtube.com/watch?v=1VVr2vMVdjc

I would like this too but Blender probably needs a viewport upgrade before the performance is there.

Probably because they haven’t tried it yet.

I had no issues with 2D content display; it went really smoothly. I don't remember the mesh size now, but it was around 500K and above. I didn't get seasick when working, unlike in games.

I crave full stereo.

AFAIR I saw working DIY rigups of Oculus and the BGE.

But yeah, the experience is amazing. You kinda get used to it after some time, and then decide to press Shift-Z to see how it renders… and the brain lands on the opposite wall from awesome. It just… looks so damn nice!

Also, that feeling, when you get the headset and headphones off after 5 hours of work. Turns out it’s night already, the sun is long gone. The glowing inside of the headset is the only source of light in the room, aside from the orange power led on the suspended monitor.
I’ve never felt more 21st century than I did at that very moment. :smiley:

It won't work in edit mode, though. I don't know if edit-mode performance can actually be improved, but that's probably required if you want to actually do anything in edit mode with high-poly meshes in VR. Then again, maybe sculpting is enough.

Ah, yes, I confirm this one. AFAIR, it wasn’t an issue with VirtualDesktop, though!

Since the Blender screen was displayed as a 2D image, VD kept head-tracking the virtual screen surface while Blender lagged, which avoided the nausea. You didn't get depth perception, though.

So no performance boost is necessary, just a reduced feature set instead: when the FPS falls below a certain threshold, fade to black, then fade back in with the 2D surface as the screen and an info prompt explaining what happened.

(Actually, you could render the static image into a stereo buffer displayed on a 2D surface hanging in space! :smiley: This way it would look like a pseudo-stereoscopic image, like that of a 3D TV. You would get simple depth perception of the lagging scene while head tracking kept working, without causing nausea.)
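That pseudo-stereo trick boils down to offsetting the two eye views horizontally and letting on-screen disparity encode depth. A rough sketch of the geometry in plain Python; the function names and the parallel-rig, similar-triangles approximation are my own illustration, not anything from VirtualDesktop:

```python
def stereo_eye_offsets(interocular=0.064):
    """Left/right horizontal eye offsets in metres for a simple parallel
    stereo rig; 0.064 m is a typical adult interpupillary distance."""
    half = interocular / 2.0
    return (-half, half)

def parallax_shift(interocular, screen_distance, point_depth):
    """Horizontal disparity (metres) of a point at point_depth, projected
    onto a virtual screen hanging screen_distance in front of the viewer.
    Similar triangles: disparity grows as the point moves behind the
    screen plane and is zero for points exactly on it."""
    return interocular * (point_depth - screen_distance) / point_depth
```

A point on the virtual screen plane gets zero disparity, so the lagging 2D surface stays stable under head tracking while objects rendered "behind" it still pop out with 3D-TV-style depth.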

An augmented/mixed-reality Tilt Brush alternative is shown at 22 seconds.

Looks promising.

I'm not a gamer and really have zero interest in gaming technology, so I kind of feel like this technology is being wasted on games, and the modelling implementations have a sort of game-like quality about them. That said, gaming is what is making this technology possible and affordable.

I think one issue people see with this technology is the temptation to completely change how we work. I don't think it necessarily needs to be that way. In fact, I don't think the GUI needs to change all that much, especially since Blender already has the concept of the 3D cursor. The existing GUI could be like a HUD, with the existing viewport being kind of like looking out of a window. You wouldn't need to wave your arms all over the place like they do in these demos; simple gestures would work just fine with something like LEAP or next-generation input devices.

We don’t need to reinvent the wheel to be innovative.