VR camera in Blender realtime [working / WIP]

Update: I got the basics working! See the last code block.
I’m inspired by this guy to make a camera mover for VR for Blender:

I would like to move a camera via Python in realtime, without playing the timeline. I would like to update it every time the screen refreshes, or even more often than that. Is there:
A) a way to make an app.handler.on_window_redraw(), or
B) a way to kick off a Python script asynchronously so the Blender UI doesn’t have to wait on the script to complete?

Here is my test code that will lock up the GUI - you have to Ctrl+C on Mac/Linux or close the GUI on Windows (or just wait 5 minutes):

import bpy
import time
import math

# grab the camera object from the scene
camera = bpy.data.objects['Camera']

i = 0
while i < 100000:
    x = time.time()
    print('going on and on...')
    # oscillate the camera on the X axis; the UI never gets a chance to redraw
    camera.location = (math.sin(10 * x), camera.location[1], camera.location[2])
    i += 1

I want a stream, similar to websockets, to change the location of the object.
I could do a client-server setup, but that also freezes Blender.

I need a stream of information - location and rotation - to be handled by Blender.
Do I need to go into C++ and modify the source / submit a patch for this?
I was thinking about using Python’s Popen, but that only passes command-line parameters, and I’d be stuck with two open Blender apps trying to communicate with each other - which is another problem I don’t know if Python can solve.

Edit 1:
I found modal timer operators: https://blender.stackexchange.com/questions/15670/send-instructions-to-blender-from-external-application

and got the basics working here - the camera updates in realtime without freezing:

import bpy
import time
import math


class ModalTimerOperator(bpy.types.Operator):
    """Operator which runs itself from a timer"""
    bl_idname = "wm.modal_timer_operator"
    bl_label = "Modal Timer Operator"

    _timer = None

    def modal(self, context, event):
        if event.type == 'TIMER':
            # move the camera every time the timer fires, without blocking the UI
            camera = bpy.data.objects['Camera']
            x = time.time()
            camera.location = (math.sin(10 * x), camera.location[1], camera.location[2])

        return {'PASS_THROUGH'}

    def execute(self, context):
        wm = context.window_manager
        # fire a TIMER event roughly every 10 ms
        self._timer = wm.event_timer_add(0.01, context.window)
        wm.modal_handler_add(self)
        return {'RUNNING_MODAL'}

    def cancel(self, context):
        wm = context.window_manager
        wm.event_timer_remove(self._timer)
        print('timer removed')


def register():
    bpy.utils.register_class(ModalTimerOperator)


def unregister():
    bpy.utils.unregister_class(ModalTimerOperator)


if __name__ == "__main__":
    register()

    # test call
    bpy.ops.wm.modal_timer_operator()

Now I would like to communicate through some sort of message passing - I heard ZeroMQ is what Leap Motion uses.
Perhaps I don’t have to?
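As a rough sketch of what I have in mind (the port number, JSON message format, and helper names here are placeholders, not a final design): an external script publishes the pose over a ZeroMQ PUB socket, and the Blender side drains a SUB socket without blocking.

# external sender, runs outside Blender - a sketch
import json
import math
import time
import zmq

ctx = zmq.Context()
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://127.0.0.1:5556")

while True:
    x = time.time()
    # stand-in pose; a real sender would read the headset here
    pose = {"location": [math.sin(10 * x), 0.0, 0.0], "rotation": [0.0, 0.0, 0.0]}
    pub.send_string(json.dumps(pose))
    time.sleep(1 / 60)

On the Blender side, the SUB socket would be drained inside the modal operator’s TIMER branch with a non-blocking receive, so the UI never waits on the network:

# Blender side: non-blocking receive, meant to be called from the TIMER branch - a sketch
import json
import zmq

ctx = zmq.Context()
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:5556")
sub.setsockopt_string(zmq.SUBSCRIBE, "")

def read_latest_pose():
    pose = None
    while True:
        try:
            pose = json.loads(sub.recv_string(flags=zmq.NOBLOCK))
        except zmq.Again:
            break  # nothing more waiting; return the newest pose we saw (or None)
    return pose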

This prints out data to Python; I just have to read it in my code.

It looks like I will need pip :

So once I get a SteamVR headset and install pip, this thing should read the location/rotation of the headset.
I might use a Wii remote until I can get ahold of a headset - it should give rotation data, I think.

I was thinking about serial too - can I get serial data read/write from Python?
I want to get the basics working first - so the internal pyopenvr will do for now.
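For the serial question: reading serial data from Python is straightforward with pyserial (pip install pyserial). A minimal sketch, assuming a device sending one line per reading - the port name and baud rate are just placeholders:

# minimal pyserial read loop - port and baud rate are assumptions for illustration
import serial

ser = serial.Serial('/dev/ttyUSB0', 115200, timeout=1)  # on Windows this would be e.g. 'COM3'
try:
    while True:
        line = ser.readline()  # returns b'' when the timeout expires with no data
        if line:
            print('serial reading:', line.decode(errors='replace').strip())
finally:
    ser.close()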
Next iteration:

  • Location / Rotation coming from somewhere

ZeroMQ should be just fine. Using it will save you a lot of time and struggle. 🙂
Whether you need it or not depends on what your goal is.
Basically, ZMQ is good for TCP/IP communication, so it makes sense to use it for client-server applications.

I tried using ZeroMQ, but it slowed the framerate down to 10 fps. Then I remembered a trick I used with processing.js: only update when the frame is different. This remarkably speeds the framerate up to 40 fps, a few frames below realtime (60 fps). I am also on a MacBook Pro, so my machine probably cannot keep up anyway without the trick.
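The trick itself is tiny - inside the TIMER branch, skip the scene update whenever the incoming pose hasn’t changed (a sketch, using the hypothetical read_latest_pose() helper from the ZeroMQ snippet above):

# only touch the scene when the pose actually changed - a sketch
last_pose = None

def apply_pose_if_changed(camera):
    global last_pose
    pose = read_latest_pose()  # hypothetical helper from the ZeroMQ sketch above
    if pose is None or pose == last_pose:
        return  # nothing new: skip the scene update and the redraw it triggers
    last_pose = pose
    camera.location = pose["location"]
    camera.rotation_euler = pose["rotation"]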

I will put this in a GitHub repo.

The current problem is that when the mouse moves over the window, the modal operator stops responding - perhaps it gives the mouse events preference?
Any suggestions on how I can prevent that from happening?
Is there another option besides using a modal operator?

Edit: I turned on all the playback sync options on the timeline and restarted the modal operator. It is smooth and does not have that problem anymore.

Make sure your zmq libs are release builds, not debug. Debug libs can be really slow.

I am using release builds - I installed with pip install pyzmq.
I made a GitHub repo for this:

I tried to use pip install openvr, but it only works on Windows. Looks like I will be switching to Windows shortly.

I’m targeting the Lenovo Explorer since that is what I currently have available.
I anticipate SteamVR will work fine.

I had some ideas to improve the socket functionality once my base case works:

  • send byte data across instead of unicode text (see the sketch below)
  • take input in from MIDI knobs and send the output to Blender as JSON
  • hook up motion controllers and send their location/rotation over ZeroMQ
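For the byte-data idea, the standard-library struct module can pack a pose into a fixed-size binary message instead of JSON text. A sketch; the 6-float location-plus-rotation layout is just an assumed format, not the one I’ll necessarily settle on:

# pack/unpack a pose as 6 little-endian floats instead of a JSON string - a sketch
import struct

POSE_FORMAT = '<6f'  # x, y, z location followed by x, y, z rotation

def pack_pose(location, rotation):
    return struct.pack(POSE_FORMAT, *location, *rotation)

def unpack_pose(data):
    values = struct.unpack(POSE_FORMAT, data)
    return values[:3], values[3:]

# 24 bytes on the wire, versus ~80+ characters of JSON for the same pose
msg = pack_pose((1.0, 2.0, 3.0), (0.0, 0.5, 0.0))
print(len(msg), unpack_pose(msg))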

The headset display is probably going to be the hardest part. OpenVR says it can make a display, but again, Windows is needed.

I would like to get motion controllers in and test out sculpting with this; I may not need to tell it where the mouse is and can just set the location.

That is my happy-path scenario: get sculpting working in the 3D viewport.
Questions:
Is there another library besides OpenVR that I can use?
Are Lenovo motion controllers readable by OpenVR?

Update:
I have updated the controller to work with OpenVR. It reads the headset and moves the camera.
Here is a test I made showing the Blender camera moving with OpenVR.
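For reference, reading the headset pose with pyopenvr looks roughly like this (a sketch based on the pyopenvr sample code, not my exact code; the signatures vary a little between pyopenvr versions):

# read the HMD pose with pyopenvr - a sketch
import openvr

openvr.init(openvr.VRApplication_Scene)
poses = []  # waitGetPoses fills/returns the tracked-device pose array
try:
    for _ in range(100):
        poses, _game_poses = openvr.VRCompositor().waitGetPoses(poses, None)
        hmd = poses[openvr.k_unTrackedDeviceIndex_Hmd]
        if hmd.bPoseIsValid:
            # 3x4 row-major matrix: rotation in the 3x3 part, translation in the last column
            print(hmd.mDeviceToAbsoluteTracking)
finally:
    openvr.shutdown()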

Current issue: How do I get the display from the camera mirrored onto the headset?

Update: I have found there is a way to capture a window image:

I found a pixel buffer object (PBO) example for loading images quickly into a texture:

This sends left and right eye channels to a head-mounted display:

My next step will be to try and tie the PBO texture streamer into hellovr_opengl and see if I can get an output.
After that, I will load an image from the desktop into the PBO texture buffer.
Then, I will load an image stream from the Blender window into the texture buffer, and hopefully that will be it.

Any comments or suggestions appreciated.

I have the screenshot maker working. I was unable to get glReadPixels working with Blender - I probably have to brush up on my OpenGL. If anyone knows the process of grabbing a texture from Blender into a separate application’s GL context, I’m open to hearing it. I think it needs to be streamed, as I don’t think direct framebuffer referencing across processes is possible (I could be wrong though).
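For anyone trying the same thing from inside Blender, the usual approach is reading the viewport with bgl.glReadPixels from a draw handler. A sketch only - the width/height are placeholders, and the raw buffer still has to be shipped to the other process somehow (e.g. over a socket):

# read the 3D viewport pixels from a POST_PIXEL draw callback - a sketch
import bpy
import bgl

WIDTH, HEIGHT = 512, 512  # placeholder size; a real version would query the region size

def grab_viewport():
    buf = bgl.Buffer(bgl.GL_BYTE, WIDTH * HEIGHT * 4)
    bgl.glReadPixels(0, 0, WIDTH, HEIGHT, bgl.GL_RGBA, bgl.GL_UNSIGNED_BYTE, buf)
    # buf now holds raw RGBA bytes; send them to the external viewer from here
    return buf

handle = bpy.types.SpaceView3D.draw_handler_add(grab_viewport, (), 'WINDOW', 'POST_PIXEL')
# later: bpy.types.SpaceView3D.draw_handler_remove(handle, 'WINDOW')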

My next goal is to get the screenshot into a stream and into the left/right eyes of hellovr_opengl.
If the framerate is super laggy I’ll look for other options; this is my working approach at the moment, though.

Progress - I have the screenshot (captured via HBITMAP/GetActiveWindow) streaming to the left/right eyes in hellovr_opengl. I fixed a memory leak that had it stuttering, and now it can run for days without performance issues.
The right side is what gets sent to the OpenVR camera.

I ran into a slight hiccup with the Python code - only one OpenVR application can run at a time. My OpenVR server with ZeroMQ works in Python, but I will need to port it to C++ so that only one application is running. Once I get that, I will be finished with the camera data and can move on to controllers.
I will need to clean up my code before I upload it to GitHub. Once I get a working setup, my plan is to put binaries on SourceForge or Google Drive and link a CMake build in GitHub.

My framerate is somewhere near 19 fps - I’m guessing if I put a pixel buffer in, it will get smoother.
I’m going to leave this tune-up until after I get the controllers in.

Progress report:

  • The head-mounted display successfully tracks the camera!
  • I can view the Blender camera from the headset!

Can you use asyncio instead of zmq or C++ rewrites? That would make things more portable.

Cheers,
Albert

Hey, this is a super interesting piece of work. I’m trying to do something similar with Vive trackers. Did you ever post this to GitHub or similar? I’d love to take a look.

I haven’t worked on this in six months, as the VR headset was on loan and I had to return it.
It uses ZeroMQ on the Python side - pip and pyzmq installed into Blender’s internal Python, a screenshot taker in C++, an OpenVR OpenGL sender to the headset in C++, and a modal operator that catches what is streamed from ZeroMQ, executes bpy functions (like location, rotation, and scale changes) from the stream, and then moves the camera accordingly.

The modal operator really should be a separate thread - it would be nice to register a thread and have the thread stop when a stop button is pressed.
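Something like this is what I have in mind - a background thread pulls messages into a queue and stops on an event, while the modal operator only drains the queue on the main thread (a sketch; recv_pose() is a stand-in stub for the actual ZeroMQ receive):

# a stoppable receiver thread feeding a queue the modal operator can drain - a sketch
import queue
import threading

pose_queue = queue.Queue()
stop_event = threading.Event()

def recv_pose(timeout=0.1):
    """Stand-in for the real ZeroMQ receive; returns a pose dict or None."""
    stop_event.wait(timeout)
    return None

def receiver():
    while not stop_event.is_set():
        pose = recv_pose(timeout=0.1)
        if pose is not None:
            pose_queue.put(pose)

thread = threading.Thread(target=receiver, daemon=True)
thread.start()

# the modal operator's TIMER branch drains the queue (only the main thread ever
# touches bpy data), and the "stop" button would simply call stop_event.set()
def latest_pose():
    pose = None
    while not pose_queue.empty():
        pose = pose_queue.get_nowait()
    return pose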

I was going for completion rather than cleanliness, so the code is slightly fragmented and messy. I’d prefer that team leads / future bosses not see the mess, so I don’t want to make it public, but if you send me your GitHub username, I can give you access to a private repo. How good are your C++ / Visual Studio / Python skills? If you like, we can step through each piece, get them working, and orchestrate the entire thing once the smaller pieces are working.

You will need to install pip using get-pip.py with Blender’s internal Python.

Please explain. From my understanding, asyncio is a library for writing code using the async/await syntax. It is internal async, still single-threaded. zmq is IO over TCP sockets. I’m not sure what you mean by C++ rewrites.

Asyncio contains a non-blocking TCP/IP server.
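For example, a minimal non-blocking TCP server with asyncio looks like this (a sketch; the port and the one-pose-per-line protocol are just placeholder choices, and inside Blender you would still need to hand the data to the main thread before touching bpy):

# minimal asyncio TCP server that receives newline-delimited pose messages - a sketch
import asyncio

async def handle_client(reader, writer):
    while True:
        line = await reader.readline()  # awaiting here does not block other clients
        if not line:
            break
        print('pose message:', line.decode().strip())
    writer.close()

async def main():
    server = await asyncio.start_server(handle_client, '127.0.0.1', 5557)
    async with server:
        await server.serve_forever()

asyncio.run(main())  # requires Python 3.7+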