We need VR tech inside Blender.

For years I have been saying this is the future:

map out a level in the viewport,

press Play on a BGE game titled ‘Texture Painter’,

and in the end the textures you painted get saved as new images, per object or per instance, etc.

This way we can hang out in multiplayer and paint a level…

This + an asset placer that adds dupligroup instances to a scene (with the option to save the changes or not), plus a texture picker and a splat mapper (see the sketch below),

would mean you could make very complex game levels in hours rather than days.

Or 3D rendering backgrounds / foregrounds, etc.
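For the asset placer half, something like this is what I have in mind — a rough sketch against the 2.7x Python API (the group name and the function are just placeholders):

```python
# Rough sketch of the 'asset placer': drop a dupligroup instance
# (an empty duplicating a group) at a point in the scene.
# Blender 2.7x API; names are placeholders.
import bpy

def place_asset(group_name, location):
    group = bpy.data.groups[group_name]            # the asset's group
    empty = bpy.data.objects.new(group_name + "_inst", None)
    empty.dupli_type = 'GROUP'                     # instance, not a full copy
    empty.dupli_group = group
    empty.location = location
    bpy.context.scene.objects.link(empty)
    return empty

# e.g. place_asset("Crate", (4.0, 2.0, 0.0))
```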

I did some legwork on it, and have texture painting working in the game engine, but the project needs someone smarter
than I am.
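Roughly what I had working: a Mouse Over Any sensor already reports the object and the UV under the crosshair, so the painting reduces to stamping a brush into that object’s image buffer. A minimal sketch (BGE 2.7x; stamp_brush() is a hypothetical stand-in for the actual pixel-stamping / bge.texture refresh code):

```python
import bge

def paint(cont):
    over = cont.sensors["MouseOver"]   # Mouse > Mouse over any sensor
    click = cont.sensors["Click"]      # Mouse > Left button sensor
    if over.positive and click.positive:
        obj = over.hitObject
        u, v = over.hitUV[0], over.hitUV[1]   # UV at the hit point
        stamp_brush(obj, u, v)  # hypothetical: write into obj's buffer
        # At the end of the session, each buffer would be saved out
        # as a new image per object (or per instance).
```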

Can we please have an in-game dynamic texture painting scheme?

A ‘3D painter mode’ in the viewport would be good as well.

Thanks!

You do realize that it’s possible to make a collaborative environment without involving the game engine, right?

Doing things in VR may seem cool, but it won’t be practical until the systems become capable of ultra-precise tracking for both your head and your hands (think Wii U levels of precision).

I did watch some of the demoing in the Unreal Engine; the lack of precision right now kills any chance of it being anything other than a gimmick.

Bleeding-edge VR tech is getting really precise,

and as I have stated, it’s Blender’s future, not its present.

Also, there is a very large amount of VR seed money out there, from HTC and Oculus, to pay for VR pipelines.

The game engine is due for an upgrade, just like the viewport in 2.8, and it has all the sensors and logic to facilitate such an endeavor. Multiple users editing the ‘server copy’ of the game is good;

everyone having ‘ops’ to overwrite the scene it spawned from is not.
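In other words, something shaped like this — a toy sketch of the authority model, where nothing here is an existing Blender/BGE API:

```python
# Any client may edit the server's runtime copy of the level, but
# only the host can commit those edits back to the scene that
# spawned it. All names are hypothetical.

class LevelServer:
    def __init__(self, source_scene):
        self.source = source_scene          # the .blend-side scene data
        self.runtime = dict(source_scene)   # the shared 'server copy'

    def apply_edit(self, user, key, value):
        # Every connected user gets to paint/place in the runtime copy.
        self.runtime[key] = value

    def commit(self, user):
        # Overwriting the source scene is gated behind 'ops'.
        if user != "host":
            raise PermissionError("only the host may overwrite the source")
        self.source.update(self.runtime)

# e.g. server = LevelServer({"crate.diffuse": "crate_base.png"})
#      server.apply_edit("guest42", "crate.diffuse", "crate_painted.png")
#      server.commit("host")
```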

I think that VR, game dev, and 3D animation pipelines could become a social endeavor,
where people share ideas and knowledge while chilling and making art together.

Verse / multiple editing of a .blend file is better for teaching, but not as much for ‘chilling’.

I still think some version of Blender + Second Life is the future for education, game dev, and 3D animation studio pipelines… a sort of VR lobby for worldwide collaboration.

Maybe even mechanical and electrical design one day… CAD / 3D printing, etc.

You have clearly never actually used a Vive. Its tracking is more than good enough for use with Blender.

I don’t think it will ever be practical, because your hands would just get too tired when working in VR.

So, about the thousands of years of art…

they never got tired, ever? :smiley:

The VR rig would be one input scheme; the digital texture painting could be achieved using a mouse as well.

Just not as cool :smiley: (first-person control while moving, mouse cursor when painting [keypress to swap? see the sketch below])
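The swap itself is trivial in the BGE — a sketch, assuming 2.7x and an Always sensor with true pulse (the "painting" property is my own convention):

```python
import bge

def toggle_mode(cont):
    own = cont.owner
    keyboard = bge.logic.keyboard
    # Flip between 'fly' and 'paint' on Tab press.
    if keyboard.events[bge.events.TABKEY] == bge.logic.KX_INPUT_JUST_ACTIVATED:
        own["painting"] = not own.get("painting", False)
        # Show the OS cursor only while painting.
        bge.render.showMouse(own["painting"])
```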

They did not have the alternative: arms resting still on the table, on mouse and keyboard. It is called digital art for a reason. Not really sure what your point is?

Creating in a resting pose with standard peripherals is better, faster, less cumbersome, and less tiresome.

Sure, VR can be good for inspecting the final product, but the creation in VR looks way too clunky. And tracking precision has nothing to do with it.

You side against Ton?

Heretic!

Though seriously, VR is a fad that’s going to die very soon.

It doesn’t make 3D modelling any more enjoyable.

Looks to me like the artist who was recording that video was doing just fine as far as precision goes. In my opinion, I would have to try it before saying that it would be too cumbersome. Looks cool at any rate.

People said that about rock and roll.

> It doesn’t make 3D modelling any more enjoyable.

Have you tried it?

For one, your UI could look like tools you grab and set out on any surface; aiming the ‘gun’ at one and clicking would pick up that tool tip.

The tool tip executes commands stored inside it, and operates using rays or a selection set.

All interaction with the UI could be picking and using tools; for some tools, the picking itself could invoke an action.

One UI, but any UI.

These objects could also be parented, hovering in a 2D plane in front of the actor where a desk would be, so you could look down and pick a tool, then look up and use it. When the person moves around in VR, the objects placed on the ‘desk’ follow the torso orientation, so even as you fly around inside a scene like Superman, you are also ready for business :smiley: (a sketch of the pickup ray follows below)
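The pickup itself could be one short script — a sketch, assuming BGE 2.7x; the "tool" property and "tool_command" string are hypothetical conventions, not an existing API:

```python
import bge

def pick_tool(cont):
    cam = bge.logic.getCurrentScene().active_camera
    # Cast a ray through the crosshair (screen centre); only objects
    # carrying a "tool" game property can be picked.
    hit = cam.getScreenRay(0.5, 0.5, 50.0, "tool")
    if hit is not None:
        command = hit.get("tool_command", "")
        exec(command)  # each tool carries its own behaviour
```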

Modeling in actual 3D space is tiring and annoying. It might look cool, but the existing tools for control are garbage compared to a mouse or pen + tablet. They aren’t accurate enough, and without any kind of active physical feedback like touching a surface, it’s not nearly as intuitive as you might think. Fine control is pretty much non-existent. Also, holding your arms up in the air gets old really quickly. The only place I’ve seen it work even close to well enough was on a tour at Media Molecule. The new “game” they’re working on involves painting in 3D space, but everything is done in a very abstract style (if you ask me, to cover up how inaccurate waving your hands around in 3D space is). On top of that, the system in place took nearly six years of R&D to nail down, and is so specialized that no 3D package developer in their right mind would waste resources trying to replicate it.

People complain about spending $99 on something. Like you guys have $699 for a VR kit.

all of my wat :smiley:

The game engines we know have evolved around the idea of reusable game objects (like the prefabs in Unity), and this concept runs from modern 3D games back to the most classic ones. For example, in Super Mario a whole world could be loaded into memory out of a simple tileset. At very best, with this new design, a system like Ptex would do the trick, but the resources accompanying the game would turn out to be humongous.
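A quick back-of-the-envelope shows how fast it blows up — assuming uncompressed RGBA and a modest instance count:

```python
# Unique painted texture data per instance adds up fast.
instances = 512
width = height = 1024
bytes_per_pixel = 4  # RGBA, no mipmaps, no compression

total = instances * width * height * bytes_per_pixel
print(total / 2**30, "GiB")  # -> 2.0 GiB of texture data
```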



Sure, having actual 3D tool objects you pick up and work with may indeed be a fun way to work.

However, you could easily end up with a situation where your VR workspace becomes just as messy and unorganized as a physical one (which like in the real world can make the actual creation of something difficult).

That is cool :slight_smile:

Sure it’s cool, just like working in VR.

However, the question remains: why should the devs prioritize something like this over the new Object Nodes, the new depsgraph, modernized particle code, overhauled physics, and other todo items addressing things that are either lacking in Blender or missing altogether? (Because if the devs are working on this, chances are other items will get postponed until further notice.)

Actually, it could be pretty easy to set up as a test:

just open only a viewport, use a 3D mouse, and replace mouse / cursor-over movements with raycast-based input. Also, a ‘workplane’ may be handy for limiting selection by occluding everything behind the plane (see the sketch below).
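The workplane filter is just a half-space test — a sketch, assuming the plane is given as a point and a normal (mathutils ships with Blender; the function name is mine):

```python
from mathutils import Vector

def in_front_of_workplane(hit_point, plane_point, plane_normal):
    # Positive dot product -> the hit lies on the normal's side
    # of the plane; hits behind it get rejected.
    offset = Vector(hit_point) - Vector(plane_point)
    return offset.dot(Vector(plane_normal)) >= 0.0
```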

You see, Ace gets it!

This gimmick shouldn’t take priority over other more important things.