VR camera in Blender realtime [working/WIP]


(AMDBCG) #1

Update: I got the basics working! See the last code block.
I’m inspired by this guy to make a camera mover for VR for Blender:


I would like to move a camera via Python in realtime, without playing the timeline. I would like to update it every time the screen refreshes, or even more often than that. Is there:
A) a way to make an app.handler.on_window_redraw() (see the sketch below), or
B) a way to kick off a Python script asynchronously so the Blender UI doesn’t have to wait on the script to complete?
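For (A), the closest existing hook seems to be registering a function in bpy.app.handlers - a rough sketch, assuming the 2.7x-era scene_update_post handler (the available handlers vary between Blender versions, and none of them fire strictly on every screen refresh):

import bpy
import math
import time

def move_camera(scene):
    # Runs after every scene update; note that modifying the scene from here
    # can itself trigger further updates, so this is only a sketch
    camera = bpy.data.objects['Camera']
    camera.location.x = math.sin(10 * time.time())

bpy.app.handlers.scene_update_post.append(move_camera)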

Here is my test code that will break the GUI - you have to Ctrl+C on Mac/Linux or close the GUI on Windows (or just wait 5 minutes):

import bpy
import time
import math

# select the camera
camera = bpy.data.objects['Camera']

i = 0
while i < 100000:
    x = time.time()
    print('going on and on...')
    # oscillate the camera along X; this blocks the UI because control
    # never returns to Blender's event loop
    camera.location = (math.sin(10*x), camera.location[1], camera.location[2])
    i += 1

I want a stream similar to WebSockets to change the location of the object.
I could do a client-server setup, but this also freezes Blender.

I need a stream of information - location and rotation - to be handled by Blender.
Do I need to go into C++ and modify the source / submit a patch for this?
I was thinking about using Python's Popen, but it only passes command-line parameters, and I'd be stuck with two open Blender apps trying to communicate with each other - which is another problem I don't know if Python can solve.

Edit 1:
I found modal timer operators: https://blender.stackexchange.com/questions/15670/send-instructions-to-blender-from-external-application

and got the basics working here - the camera updates in realtime without freezing:

import bpy
import time
import math

class ModalTimerOperator(bpy.types.Operator):
    """Operator which runs itself from a timer"""
    bl_idname = "wm.modal_timer_operator"
    bl_label = "Modal Timer Operator"

    _timer = None

    def modal(self, context, event):
        if event.type == 'TIMER':
            # move the camera a little on every timer tick without blocking the UI
            camera = bpy.data.objects['Camera']
            x = time.time()
            camera.location = (math.sin(10*x), camera.location[1], camera.location[2])

        return {'PASS_THROUGH'}

    def execute(self, context):
        wm = context.window_manager
        # fire a TIMER event roughly every 10 ms
        self._timer = wm.event_timer_add(0.01, context.window)
        wm.modal_handler_add(self)
        return {'RUNNING_MODAL'}

    def cancel(self, context):
        wm = context.window_manager
        wm.event_timer_remove(self._timer)
        print('timer removed')


def register():
    bpy.utils.register_class(ModalTimerOperator)


def unregister():
    bpy.utils.unregister_class(ModalTimerOperator)


if __name__ == "__main__":
    register()

    # test call
    bpy.ops.wm.modal_timer_operator()

Now I would like to communicate through some sort of message passing - I heard ZeroMQ is what Leap Motion uses.
Perhaps I don’t have to?
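A rough sketch of how the modal timer above could poll a ZeroMQ socket without blocking the UI (the PULL/PUSH pattern and port 5555 are my own assumptions, not settled choices):

import bpy
import zmq

zmq_context = zmq.Context()
socket = zmq_context.socket(zmq.PULL)
socket.bind("tcp://*:5555")  # hypothetical port

# inside modal(), in the 'TIMER' branch:
try:
    msg = socket.recv_json(flags=zmq.NOBLOCK)  # never block the UI thread
    camera = bpy.data.objects['Camera']
    camera.location = msg['location']
    camera.rotation_euler = msg['rotation']
except zmq.Again:
    pass  # no new data this tick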


This prints out data to Python; I just have to read it in my code.

It looks like I will need pip:

So once I get a SteamVR headset and install pip, this thing should read the location/rotation of the headset.
I might use a Wii remote until I can get ahold of a headset - it will give rotation data, I think.


I was thinking about serial - can I get serial data read/write?
I want to get the basics working first - so the internal pyopenvr will do for now.
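For the base case, a minimal pose-reading loop with pyopenvr would look roughly like this (adapted from memory of its hello-world sample; the exact waitGetPoses signature has changed between pyopenvr versions, so treat the calls as assumptions):

import time
import openvr

openvr.init(openvr.VRApplication_Scene)

poses = []  # populated by waitGetPoses on the first call
while True:
    # older pyopenvr versions use waitGetPoses(poses, len(poses), None, 0)
    poses, _ = openvr.VRCompositor().waitGetPoses(poses, None)
    hmd_pose = poses[openvr.k_unTrackedDeviceIndex_Hmd]
    print(hmd_pose.mDeviceToAbsoluteTracking)  # 3x4 location/rotation matrix
    time.sleep(1 / 60)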
Next iteration:

  • Location / Rotation coming from somewhere

(karab44) #2

ZeroMQ should be just fine. Using it will save you a lot of time and struggle :slight_smile:
Whether you need it or not depends on what your goal is.
Basically, ZMQ is good for TCP/IP communication, so it makes sense to use it for client-server applications.
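For example, the sender side of a simple client-server pair can be this small (just a sketch; the PUSH socket and port 5555 only need to match whatever the Blender side is polling):

import time
import math
import zmq

zmq_context = zmq.Context()
socket = zmq_context.socket(zmq.PUSH)
socket.connect("tcp://127.0.0.1:5555")

while True:
    t = time.time()
    socket.send_json({
        'location': [math.sin(10 * t), 0.0, 0.0],
        'rotation': [0.0, 0.0, 0.0],
    })
    time.sleep(1 / 60)  # roughly one message per frame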


(AMDBCG) #3

I tried using ZeroMQ, but it slowed the framerate down to 10 FPS. Then I remembered a trick I used with Processing.js: only update when the frame is different. This remarkably speeds up the framerate to 40 FPS, a few frames below realtime (60 FPS). I am also on a MacBook Pro, so my machine probably cannot keep up anyway without the trick.
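The trick itself is tiny - roughly this (a sketch with placeholder names; the point is just to skip redundant scene updates):

last_pose = None

def apply_pose_if_changed(camera, location, rotation):
    global last_pose
    pose = (tuple(location), tuple(rotation))
    if pose == last_pose:
        return  # same data as last tick, don't touch the scene
    last_pose = pose
    camera.location = location
    camera.rotation_euler = rotation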

I will put this on GitHub.

The current problem is that when the mouse moves over the window, the modal operator stops responding - perhaps it gives the mouse preference?
Any suggestions on how I can prevent that from happening?
Is there another option besides using a modal operator?

Edit: I turned on all the playback sync options on the timeline and restarted the modal operator. It is smooth and does not have that problem anymore.


(karab44) #4

Make sure your ZMQ libs are release builds, not debug builds. Debug libs can be really slow.


(AMDBCG) #5

I am using release builds - I'm using pip install pyzmq.
I made a GitHub repo for this:

I tried to use pip install openvr, but it only works on Windows. Looks like I will be switching to Windows shortly.

I’m targeting the Lenovo Explorer since that is what I currently have available.
I anticipate SteamVR will work fine.

I had some ideas to improve the socket functionality once my base case works:

  • send across byte data instead of unicode text (see the sketch below)
  • take input from MIDI knobs and send the output to Blender through JSON
  • hook up motion controllers and send their location/rotation over ZeroMQ
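For the first bullet, the byte-data idea would look something like this (a sketch; the six-double layout is an arbitrary choice of mine, not a decided format):

import struct

POSE_FORMAT = '<6d'  # x, y, z, rx, ry, rz as little-endian doubles

def pack_pose(location, rotation):
    return struct.pack(POSE_FORMAT, *location, *rotation)

def unpack_pose(data):
    values = struct.unpack(POSE_FORMAT, data)
    return values[:3], values[3:]

# sender:   socket.send(pack_pose(loc, rot))
# receiver: loc, rot = unpack_pose(socket.recv(flags=zmq.NOBLOCK))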

The headset display is probably going to be the hardest part. OpenVR says it can create a display, but again, Windows is needed.

I would like to get motion controllers in and test out sculpting; I may not need to tell it where the mouse is and can just set the location.

That is my happy-path scenario: get sculpting working in the 3D viewport.
Questions:
Is there another library besides OpenVR that I can use?
Are Lenovo motion controllers readable by OpenVR?


(AMDBCG) #6

Update:
I have updated the controller to work with OpenVR. It reads the headset and moves the camera.
Here is a test I made showing the Blender camera moving with OpenVR.
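For reference, the mapping from the OpenVR pose onto the Blender camera is roughly this (a sketch; the matrix indexing and the axis conversion are assumptions that may need tweaking, and depend on the pyopenvr version):

import bpy
import math
from mathutils import Matrix

def hmd_matrix_to_blender(m34):
    # m34 is the 3x4 row-major pose matrix, e.g. pose.mDeviceToAbsoluteTracking
    mat = Matrix((
        (m34[0][0], m34[0][1], m34[0][2], m34[0][3]),
        (m34[1][0], m34[1][1], m34[1][2], m34[1][3]),
        (m34[2][0], m34[2][1], m34[2][2], m34[2][3]),
        (0.0, 0.0, 0.0, 1.0),
    ))
    # OpenVR is Y-up with -Z forward, Blender is Z-up:
    # rotate 90 degrees about X (use @ instead of * in Blender 2.8+)
    return Matrix.Rotation(math.radians(90), 4, 'X') * mat

# hmd_pose comes from the waitGetPoses loop earlier
camera = bpy.data.objects['Camera']
camera.matrix_world = hmd_matrix_to_blender(hmd_pose.mDeviceToAbsoluteTracking)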

Current issue: How do I get the display from the camera mirrored onto the headset?


(AMDBCG) #7

Update: I have found there is a way to capture a window image:
https://docs.microsoft.com/en-us/windows/desktop/gdi/capturing-an-image

I found a pixel buffer object (PBO) technique for streaming images quickly into a texture:
http://www.songho.ca/opengl/gl_pbo.html
https://www.opengl.org/discussion_boards/showthread.php/170783-Streaming-texture

This sends left and right eye channels to a head-mounted display:

My next step will be to try to tie the PBO texture streamer into the hellovr_opengl sample and see if I can get an output.
After that, I will load an image from the desktop into the PBO texture buffer.
Then, I will load an image stream from the Blender window into the texture buffer, and hopefully that will be it.
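To make the PBO step concrete, the upload side in PyOpenGL would look roughly like this (a sketch, not taken from the hellovr_opengl sample; it assumes an existing GL context and a bound RGBA texture of the right size):

import ctypes
from OpenGL.GL import (
    GL_PIXEL_UNPACK_BUFFER, GL_STREAM_DRAW, GL_TEXTURE_2D,
    GL_RGBA, GL_UNSIGNED_BYTE,
    glBindBuffer, glBufferData, glGenBuffers, glTexSubImage2D,
)

pbo = glGenBuffers(1)

def upload_frame(width, height, pixel_bytes):
    # stream the new frame into the PBO, then copy PBO -> bound texture
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo)
    glBufferData(GL_PIXEL_UNPACK_BUFFER, len(pixel_bytes), pixel_bytes, GL_STREAM_DRAW)
    # with a PBO bound, the last argument is an offset into the buffer, not a pointer
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, ctypes.c_void_p(0))
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0)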

Any comments or suggestions appreciated.


(AMDBCG) #8

I have the screenshot maker working. I was unable to get glGetPixels working with Blender - I probably have to brush up on my OpenGL. If anyone knows the process of grabbing a texture from Blender into a separate application's GL context, I'm open to hearing it. I think it needs to be streamed, as I don't think direct framebuffer referencing is possible (I could be wrong though).
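For completeness, here is the kind of thing I was attempting inside Blender, using the bgl module (a sketch; it only works while a GL context is active, e.g. from a draw handler added with bpy.types.SpaceView3D.draw_handler_add):

import bgl

def grab_viewport(width, height):
    # read the currently bound framebuffer into a bgl.Buffer (raw RGBA bytes)
    buf = bgl.Buffer(bgl.GL_BYTE, width * height * 4)
    bgl.glReadPixels(0, 0, width, height, bgl.GL_RGBA, bgl.GL_UNSIGNED_BYTE, buf)
    return buf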

My next goal is to get the screenshot into a stream and into the left/right eyes of the hellovr_opengl sample.
If the framerate is super laggy I'll look for other options, but this is my working approach at the moment.