Multiple users in a single 3D scene.

Hello guys,

I just saw this article on 80.lv:

About this collaborative method of working in one scene, I think it's great, and I was wondering if something like that would be possible in Blender right now, or maybe in the future with the new 2.8 architecture, perhaps via an add-on or an integrated Blender tool.

I think it's great and could be very useful for some projects.

It should be possible to do it for Blender, and yet it won't really work.
Not even that fancy Unity thing really works as well as it seems, and the reason is network bandwidth.
What the video shows is users playing around with existing primitives, meaning stuff that already exists in all the local installations of the program and that different users update: add building blocks, move stuff around, change colors. All nice and easy.
But as soon as you start loading user models made of a zillion vertices or applying local high-resolution textures, the network bandwidth (or the lack of it) will bite you in the neck.
User A loads a 300-megabyte model; users B and C will see it on screen an hour later.
Unity might get away with it because it pushes users to buy stuff from the Unity store, so all users can download the same data set before starting and then send each other only the commands to alter the data which is locally available.
For Blender, I see it as a pretty big limitation, especially considering that this mysterious creature called the “3D artist” seems to like using a variety of external tools, which requires exporting and importing stuff quite frequently.
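
To make the point concrete, a back-of-the-envelope sketch (the object name and the numbers are made up): the command-only model ships a few dozen bytes per edit, while the data model ships the asset itself.

```python
import json

# A "command" in the send-only-commands model: every client already has
# the asset locally, so only a tiny instruction travels over the wire.
command = {
    "op": "transform",
    "object": "Suzanne.003",              # hypothetical object name
    "location": [1.0, 2.0, 0.5],
    "rotation_euler": [0.0, 0.0, 1.57],
}
payload = json.dumps(command).encode()
print(f"command payload: {len(payload)} bytes")

# Versus shipping the asset itself (the 300 MB model from above):
model_size = 300 * 1024 * 1024
print(f"raw model is roughly {model_size // len(payload):,}x larger")
```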

sync delta*

stream mesh()

Network bandwidth shouldn't be the issue; Second Life allows editing meshes in sandboxes as well.
Sure, some restrictions could be set up for texture sizes, as they do, but it could work without that too: once textures are synced between all workstations they can be shown. At first one would see an empty cube, “…loading more…”, then the texture would come in.
If someone works on the mesh again, you see the last version until it is synced again.
Sure, it would work faster on a local gigabit network than over the internet.
It's just that currently Blender doesn't have any syncing mechanism, à la Dropbox or BitTorrent Sync.
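
A crude sketch of that “placeholder until the texture has arrived” idea (nothing here is an actual Blender API; the expected hash would come from whatever sync layer is used, and the placeholder file name is invented):

```python
import hashlib
from pathlib import Path

def local_copy_ready(path: Path, expected_sha256: str) -> bool:
    """True once the synced file on disk matches what the sender announced."""
    if not path.exists():
        return False
    return hashlib.sha256(path.read_bytes()).hexdigest() == expected_sha256

def texture_to_show(path: Path, expected_sha256: str) -> str:
    # Show a stand-in (the "empty cube" state) until the real texture
    # has fully arrived, then swap it in on the next redraw.
    if local_copy_ready(path, expected_sha256):
        return str(path)
    return "placeholder_checker.png"      # invented placeholder asset
```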

If we remove bandwidth from the equation, then I’m not aware of any reason why it shouldn’t be possible.
By possible I mean possible with an external add-on; if we dabble with Blender's source code then anything is possible (well, provided it is computable).
The reason I say that is that what you need is a way to programmatically reproduce the interaction of one user with the local data set.
Blender's scripting API allows that.
Synchronization is not an issue either, because mutual exclusion only requires a way for two or more control flows to share a value - which can be done through the remote connection - and once you have mutual exclusion you can control all kinds of concurrent behaviors, including sharing a common set of partially defined building blocks.
Designing and implementing the whole system looks extremely boring though.
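
To give a flavor of what I mean by reproducing an interaction programmatically, a minimal sketch (run inside Blender; the message format is entirely invented for the example):

```python
import json
import bpy  # only available inside Blender

def apply_remote_op(message: str) -> None:
    """Replay one remote user's edit on the local copy of the scene."""
    op = json.loads(message)
    obj = bpy.data.objects.get(op["object"])
    if obj is None:
        return  # that object hasn't been synced to this machine yet

    if op["type"] == "transform":
        obj.location = op["location"]
        obj.rotation_euler = op["rotation_euler"]
        obj.scale = op["scale"]

# What another client might send over the wire:
msg = json.dumps({
    "type": "transform",
    "object": "Cube",
    "location": [0.0, 0.0, 2.0],
    "rotation_euler": [0.0, 0.0, 0.785],
    "scale": [1.0, 1.0, 1.0],
})
apply_remote_op(msg)
```

The mutual exclusion part would sit around calls like this one: whoever holds the shared token gets to emit ops, everyone else only applies them.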

Realtime collaboration is one of those things that sound good but that aren’t actually that valuable. You really want updates from other users in meaningful lumps, not an unordered stream of little tweaks.

Something you shouldn’t underestimate is the value of having real-time collaborative 3D without the collaborative editing. Basically just recreating a bunch of guys sitting around a workstation, without having to have all the people in the same place.

All you really need is collaborative object mode, grease pencil and chat.

I keep wondering if I should recreate Blender in my DIVE (distributed interactive virtual environment) or build my DIVE into Blender.

Permission to edit can only be held by one user at a time, who can push changes to the shared repo, and once completed, changes can be reverted via a log.
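
As a toy sketch of that (nothing Blender-specific, all names made up): one edit token, pushes go into the shared state, and an append-only log makes each change reversible.

```python
class SharedScene:
    """Toy model: one editor at a time, a shared repo, and a revert log."""

    def __init__(self):
        self.editor = None    # user currently holding the edit permission
        self.objects = {}     # name -> value, the "shared repo"
        self.log = []         # (user, name, old_value, new_value)

    def request_edit(self, user):
        # Only one user may hold the edit permission at any moment.
        if self.editor is None:
            self.editor = user
            return True
        return False

    def push_change(self, user, name, value):
        if user != self.editor:
            raise PermissionError(f"{user} does not hold the edit permission")
        self.log.append((user, name, self.objects.get(name), value))
        self.objects[name] = value

    def release_edit(self, user):
        if user == self.editor:
            self.editor = None

    def revert_last(self):
        # Use the log to roll the most recent change back.
        _user, name, old, _new = self.log.pop()
        if old is None:
            self.objects.pop(name, None)
        else:
            self.objects[name] = old

scene = SharedScene()
assert scene.request_edit("alice")
scene.push_change("alice", "Cube.location", (0, 0, 2))
scene.release_edit("alice")
scene.revert_last()   # the change is rolled back from the log
```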

Honestly, I see people basically hanging out together working on projects in a virtual chat environment.

This allows remote artists to not feel so distant, and it really would help with long-term morale.

I see groups like “daily creative” where people just hang out and work and chat and also teach each other.

Honestly BPR, your VR with avatars approach would be overkill.

We already have technology that can put a face to who you’re working with without the need for VR helmets. You can have an interface that brings you realtime video feeds of the person sitting at a distant workstation (like Microsoft Skype, but open source as would be required for such a thing in Blender).

For most cases, this should be enough, as the demos of working entirely in VR (such as the ones by Unity Tech. and Epic) still have a bit of a ‘toy’ feel.

I have used Second Life, and though it sucked, I could see the future of it.

One thing is using gestures to invoke commands; another is force feedback for tactile response in sculpting.

If one could invoke and direct shortcuts using a sort of Blender sign language, I am convinced they could model far faster than now.

People working and learning together makes you feel not so all alone in the world.
This has long term positive effects for team projects.

Also, one can drive a camera around without VR tech*

It’s the same problem you get with networked physics: floating-point math is not reproducible, so you have to send the data.
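
A small illustration (the cross-machine differences between CPUs, compilers and math libraries are the real killer, but even evaluation order alone breaks reproducibility):

```python
# Float addition is not associative, so two clients that apply the same
# little tweaks in a different order already end up with different bits.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c)                  # 0.6000000000000001
print(a + (b + c))                  # 0.6
print((a + b) + c == a + (b + c))   # False
```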

Yes, but you want a repo that uses diffs, so you don’t have to re-download the whole data block.
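
Roughly what rsync/BitTorrent-Sync-style tools do, sketched very crudely: hash fixed-size chunks of the data block and only ship the chunks that changed (chunk size and function names are just for the sketch).

```python
import hashlib

CHUNK = 64 * 1024   # 64 KB chunks, an arbitrary size for the sketch

def chunk_hashes(data: bytes):
    return [hashlib.sha1(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)]

def chunks_to_fetch(local: bytes, remote_hashes):
    """Indices of the chunks the other side still needs to send us."""
    ours = chunk_hashes(local)
    return [i for i, h in enumerate(remote_hashes)
            if i >= len(ours) or ours[i] != h]
```

If only a small part of the data block changed, only the chunks covering that part travel again.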

Blender has hundreds of operators; it would be no easy feat for anyone to learn and remember that many gestures (how would you even make all of them different enough that there isn’t a high chance of accidentally invoking the wrong tool?).

In addition, having your hands in the air all the time while working would bring up the issue of fatigue. For some, that would start only minutes into their Blender session (so it would not be viable at all for even medium-complexity projects).

And when it comes to working together, why do you think the positive effects cannot come unless the Blender developers drop much-needed projects to spend months of their time coding game mechanics into Blender (which would more or less be needed for your VR thing)?

Nope, HMD sprint is 15 days long

Hopefully we can get leap support later.

That’s just looking at the scene and navigating through a VR helmet; last I’ve seen, it doesn’t include floating interface panels, ‘magic wands’ as a sort of cursor, walking about the scene using an avatar with other people, picking up virtual tools and objects and carrying them as if they were real, etc.

All of the workflow topics you would need to address to do everything in VR would take far longer (Unreal 4 for instance has been seeing work on that since last year and it’s perhaps the fastest-developed application in the history of software).

Yeah, I can do a lot of UI development in the BGE, but I don’t know bpy as well, and I suspect it won’t help as much, as the UI would need to be textured 3D objects in the viewport, kinda.