We need VR tech inside Blender

There are going to be more augmented capabilities beyond just plain VR headsets.
I am sure everyone knows about Microsoft's HoloLens, and there are more out there;
some have already broken past the FOV limitations of the HoloLens and are open source.

That's absolutely a gimmick. After waving their arms around for a day, standing, getting tangled up in cords, bumping into stuff, and finding that they have access to less accuracy and fewer control options, 99% of 3D artists are going to go right back to doing things the way they are comfortable with. As someone who tested these systems while they were in development, I feel confident saying that nobody is going to adopt VR in their pipeline any time soon beyond final-level viewing/previewing. As someone who lifts regularly and runs half-marathons, I found working the way shown in that video tiring after 45 minutes to an hour. The average person is going to find themselves getting sick of it much more quickly. It's simply not a comfortable way to work. It will be just like when everyone figured out that you don't need to be standing and active to play Wii Sports.

I used to use tools in the real world for 4-16 hours a day. I can tell you that it takes time to build the upper-body conditioning to do that. And quite frankly, many users would benefit physically from that workflow.

The difference between holding a physical object and holding a virtual object is appreciable. I've done a fair bit of manual labor, and there you always have tactile feedback. You can lean into your workpiece, or rest your wrist against the surface to get more precision. Those aren't options when you are flailing about in empty space. I can write 1/8" letters on a wall with a pencil. Try getting that level of precision while holding your hands at chest height in midair. Now do that for 8 hours a day.

VR for content production is definitely a gimmick at this point.

hence the need for 3D plastic force-feedback exoskeletons*

and ‘touch gloves’ with expandable cells to simulate textures -

Also a gimmick, and it doesn't solve the problem of fatigue. Also, an exoskeleton capable of withstanding the necessary forces, with the necessary level of articulation, would cost tens of thousands of dollars. I don't care if you can 3D print all the components (still expensive, by the way); every single sensor and actuator you would need for every single joint would still cost thousands. Not to mention the years of engineering it would take to make something like that feasible.

If the current tech of VR is dependent on these 3D-printed exoskeletons you are always talking about, then we are at least 5 years and $100,000,000 away from that being possible.

Anyone who could look at that and think "Yeah, that's better than the current way!" is either trying to sell said equipment or lives in a fantasy world. The SpaceMouse can't even gain market penetration, and it's a piece of plastic you set on your desk, with no additional learning, tools, or sea change in the production pipeline necessary. Unless some group or company shows off something absolutely amazing that is impossible to do with existing tools (and I mean amazing on the level of a 'Make Awesome' button), this is a non-starter. VR will thrive in the consumer gaming space, the archviz space, and the 3D video space, but it will never make a dent in the content authoring space. Even if it were dirt cheap, it will never be as dirt cheap as a mouse, nor as easy to sit down and start working with at the beginning of a work day.

I think these will be able to happen sooner IF they are developed for multiple uses at one time.

The system could be:

  1. Medical - help people walk: start off using 95% assist -> taper as the patient gets better

  2. VR - help people be immersed

  3. Telepresence - work from home via a robot

  4. Mocap - MUCH better data, along with the potential to interact with virtual entities and environments while 'shooting'

  5. Industrial - upgrade materials and components -> lift heavy loads and reduce fatigue

To reduce the cost… we need 3D-printed electronics from recycled materials… (something like this: http://www.voxel8.co/)
but without the expensive conductive ink system.

Okay, now you're proposing that the Blender devs support highly elaborate (and highly experimental) setups still in their infancy: setups that not only would take days of setup and fine-tuning, but that the vast majority of artists and studios will not be able to afford.

In a way, your promotion of the VR workflow is getting even less practical now, because there's no way the BF is going to burn through so much time and money for something that's not guaranteed to get any use. I mean, how many Blender users do you know with the means to obtain something like this?

In a sense, this is becoming a "Blender should support this because it looks cool" thread (with no regard for its practical use in a 3D environment), and now you're veering away from the subject of Blender altogether.

As someone who has been developing VR since the last century: there is no point in BPR's wish.

No, that is for many industries; it's not something Blender alone could or should do. It's for all VR: any industry that needs telepresence or interaction with non-physical environments, like game engines or mocap, recording in real time the interactions with actors and objects that don't exist.

But it is totally a necessity for real-world interaction with non-physical environments, so that you feel like you are manipulating a real object.

Imagine adding a cube and aiming at a face: a little handle shows up.

You could grab it and lock a local axis, and then be unable to move the handle in that direction; the force feedback could fight you, always returning the handle to a point along the movement-constraint axis…

You could do ultra-precise movements by constraining your VR interaction with force feedback.
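Here's a minimal sketch of how that constraint could work, in plain Python (all names here are illustrative assumptions, not an existing Blender or haptics API): project the hand onto the locked axis each frame and send a spring force that pushes the handle back onto the constraint.

```python
# Minimal sketch: axis-constrained handle with a restoring "fight back" force.
# A real system would run this per frame and feed the force to the haptic device.

def project_onto_axis(hand, origin, axis):
    """Closest point to `hand` on the line through `origin` along unit `axis`."""
    d = [h - o for h, o in zip(hand, origin)]
    t = sum(di * ai for di, ai in zip(d, axis))
    return [o + t * ai for o, ai in zip(origin, axis)]

def restoring_force(hand, origin, axis, stiffness=40.0):
    """Spring force pulling the handle back onto the constraint axis."""
    target = project_onto_axis(hand, origin, axis)
    return [stiffness * (t - h) for t, h in zip(target, hand)]

# Handle anchored at the face origin, local X locked as the movement axis:
force = restoring_force(hand=[0.2, 0.05, 0.0],
                        origin=[0.0, 0.0, 0.0],
                        axis=[1.0, 0.0, 0.0])
# -> [0.0, -2.0, 0.0]: zero along X (movement stays free), pushback off-axis.
```

The stiffness constant would be the knob that trades how hard the feedback "fights" against how much precision it buys.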

You would also have amazing IK data / instant mocap / scene-physics animations, using bounds based on joint angles.

Like opening a drawer that does not exist and using an item from it just like an actor would (say, a tablet that isn't there), requiring no animation tracking changes or post-processing. It would make non-real but realistic props a breeze.
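To make the "bounds based on joint angles" idea above concrete, here is a tiny illustrative sketch (the joint names and limits are hypothetical, not a real tracking API): clamp each streamed joint angle to anatomical bounds before handing the pose to the IK solver or physics.

```python
# Illustrative only: anatomical joint limits, in degrees.
JOINT_BOUNDS = {
    "elbow": (0.0, 150.0),
    "knee":  (0.0, 135.0),
    "wrist": (-70.0, 80.0),
}

def clamp_pose(raw_angles):
    """Return a copy of the pose with every joint forced inside its bounds."""
    clean = {}
    for joint, angle in raw_angles.items():
        lo, hi = JOINT_BOUNDS[joint]
        clean[joint] = max(lo, min(hi, angle))
    return clean

print(clamp_pose({"elbow": 162.0, "knee": 40.0, "wrist": -90.0}))
# -> {'elbow': 150.0, 'knee': 40.0, 'wrist': -70.0}
```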

What about this for resistance? http://io9.gizmodo.com/scientists-just-created-some-of-the-most-powerful-muscl-1526957560

And this for detecting pressure? http://www.sciencedirect.com/science/article/pii/S0921889014001821

And this for detecting joint angles? https://en.m.wikipedia.org/wiki/Flexible_electronics

Make it so all of it can be 3D printed, including the fishing-line muscles?

As someone who works with these systems on a daily basis: you're wrong. There's a reason that Tilt Brush is the most popular application for the Vive, and there's a reason that Oculus's Touch-driven sculpting app got the most positive impressions out of the last Oculus Connect. People intrinsically understand how to use their hands, so the thin layer of abstraction that the Vive controllers and the Oculus Touch controllers represent requires very little effort to break through, and then you're just making stuff with your hands.

And content creation in VR is extremely important. Especially if you’re making VR content. We spent literally months iterating on a design for a VR space, building it in Maya, importing it into our engine, loading it up in VR, deciding that while it looked cool in Maya, in VR it just didn’t work, and then doing it again. If the design team had been able to work in VR for the entire time, it would’ve taken weeks.

That’s also why both Epic and Unity have spent a lot of time and money developing VR interfaces for their editors. What you see on a screen and what you see in the headset are not comparable. At all.

‘And content creation in VR is extremely important.’

No, it is not. Interactivity is important, and that is the answer to why those interactive apps are popular.

‘We spent literally months iterating on a design for a VR space’

The time needed depends on the size of the project and on the knowledge of the designers. An architect designs an archviz environment far faster than a level builder does (as an example).

‘That’s also why both Epic and Unity have spent a lot of time and money developing VR interfaces for their editors.’

They are simply following trends.

VR now (and from the start) is missing only two things: proper input and output devices. It is currently a playground for geeks, nothing more (except for serious usage in engineering and medical care, but for those fields it is not new at all).

The Vive is different*

Right now you strap on the Vive and walk around a large space in first-person experiences that feel so flippin' real, with motion controllers tracking your hand movements. Put on the Rift, and you sit down in a chair with a gamepad in your hands, playing what amounts to classic video games that just happen to a) be in 3D and b) surround you in all directions.

Imagine working at a desk that has drawers full of objects, which you can configure however you wish. And that is just it: your UI can be a drawer pulled out for animation, or a drawer built for texture painting, but the end user can remove and add any icon they choose, linking to code contained in the icon.

Python-based tool tips and effectors could be imported…
Drawers could be imported/exported, etc.

You could bind a drawer's transforms to a game object, and then bind 3D logic nodes in the drawer to affect the object, basically making a code-access port on an object…

Your workspace would be whatever project you're working in.

A drawer could be round and open into a pie menu instead of a desk plane.

You could put drawers inside of drawers.

Drawer = collapsible folder/UI element with user-defined behaviours

Tip = tool/command that takes more than one click;
polls keypresses while active, as defined in the tip

Effector = tool/command that initiates and runs but does not require user interaction once activated (e.g. save, or play animation); can poll key commands while running.

Besides the tips, effectors, and drawers, all that is left is keybindings, for virtual or real keyboards, to parent new tips to a keypress or initiate effectors on a keypress,

making keybindings totally remappable.
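As a rough illustration of how those three definitions could hang together, here is a Python sketch (every class and name here is hypothetical; none of it is actual Blender API):

```python
class Tip:
    """Tool/command that takes more than one click; polls keys while active."""
    def __init__(self, name, on_key):
        self.name, self.on_key = name, on_key

class Effector:
    """Fire-and-forget command (e.g. save, play animation); may still poll
    key commands while running."""
    def __init__(self, name, run):
        self.name, self.run = name, run

class Drawer:
    """Collapsible folder/UI element; drawers can nest inside drawers."""
    def __init__(self, name, shape="plane"):  # "plane" desk or "pie" menu
        self.name, self.shape, self.items = name, shape, []

    def add(self, item):  # item is a Tip, an Effector, or another Drawer
        self.items.append(item)
        return item

keymap = {}  # totally remappable: key -> the tip/effector it triggers

paint = Drawer("texture painting")
fill = paint.add(Effector("bucket fill", run=lambda: print("fill!")))
paint.add(Drawer("brushes", shape="pie"))  # a round pie-menu sub-drawer
keymap["F"] = fill                         # rebind freely at runtime
keymap["F"].run()                          # -> fill!
```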

[video]http://www.roadtovr.com/axonvr-making-haptic-exoskeleton-suit-bring-body-mind-vr/[/video]

This will happen sooner than you think**

I have been thinking about it for years, but the tech is there now.

Funny, that was the vid that convinced me of the irrelevance of VR in 3D modelling.

BPR: one of the key issues I'm seeing with your proposals is that we don't even have the first steps down yet (that is, just viewing the scene with the VR helmet), and you're already wanting the scope to expand to everything and the kitchen sink (even technology that is still only found in the lab).

You also have yet to answer the question as to how many Blender users you think have the time, space, money, and means to purchase a full exoskeleton setup like the one in your post above (because it’s just not worth pursuing functionality that might see very little use at best).

One step at a time, please; the HMD viewport branch, for instance, can be seen as an ideal start (just viewing something with a VR helmet). You cannot run with something if you don't yet have the basics.

Funny, but I think painting inside of a scene like this is the future.

*working inside a world

and the tech will evolve over time, in Blender and in the real world,

but it’s the kind of tech you may want to consider while developing.

Call me a luddite, but allow me a moment to disagree. Painting inside of a scene like that (“working inside a world”) is the past. It was called (wait for it…) painting.

Yeah, but with infinite paint, and tools like Substance Painter,

airbrush? roller? bucket fill?

undo/redo,

and every other brush/method you can cram into it.