SDI signal to Eevee

I want to use Eevee for an AR project.

So the first question: is it possible to feed an SDI signal into Eevee for real-time rendering? Can I use a video capture board as an input to the real-time render?

Second question: can I connect a live camera signal from the real world to the real-time render?

Hi,
I answered you on other sites, but I'll reply here too so the question doesn't go unanswered.
There is a Blender 2.8 fork that includes the game engine. The game engine can connect to a Decklink card. Maybe @nestanqueiro can give you more information, as he is using a Decklink card too.

Link to build: https://mega.nz/#F!t9EEFSaS!JPiOPSInCZyU-SW_-rhEOQ

There is also the BlenderXR branch.

Maybe some day UPBGE development can leverage it, once it's merged into Blender master.

Hello all! Short answer to your question: yes!
As long as you use a Decklink card, although not every single model is known to work. USB3 and Thunderbolt cards are not your best choice; PCIe cards are more reliable, although you may have to fiddle with the settings, as the internal framebuffers differ from card to card (there is a setting for that in the API).
You also have to know your way around GLSL, as these cards usually work in modes other than RGB, so you must convert YUV422 (10- or 8-bit) to RGB and flip the image upside down before using it…
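
To give an idea of what that conversion involves, here is a minimal sketch (illustrative only, not the Decklink module's exact shader). It assumes the 8-bit video-range YCbCr values are already unpacked into a texture's r/g/b channels; real UYVY or 10-bit packing needs an extra unpack step that depends on how the frame is uploaded. The sampler name videoTex is an assumption.

yuv_to_rgb_fragment = """
uniform sampler2D videoTex;  // assumed sampler name for the captured frame

void main()
{
    // Capture cards usually deliver the image upside down: flip vertically.
    vec2 uv = vec2(gl_TexCoord[0].x, 1.0 - gl_TexCoord[0].y);
    vec3 yuv = texture2D(videoTex, uv).rgb;

    // Expand video range (Y: 16-235, C: 16-240) and center the chroma.
    float y = (yuv.r - 16.0 / 255.0) * (255.0 / 219.0);
    float u = (yuv.g - 128.0 / 255.0) * (255.0 / 224.0);
    float v = (yuv.b - 128.0 / 255.0) * (255.0 / 224.0);

    // BT.709 YCbCr -> RGB.
    gl_FragColor = vec4(y + 1.5748 * v,
                        y - 0.1873 * u - 0.4681 * v,
                        y + 1.8556 * u,
                        1.0);
}
"""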

You must also consider how to take the composed image back out to video (also supported by UPBGE); you will then need at least a Decklink Duo to have both input and output.

If you use Windows, you will benefit from using a Quadro card for compositing, as the Decklink module takes advantage of GPU Direct (from the Nvidia Quadro drivers only) to get direct access to the acquired image on the GPU. Be sure to have dvp.dll alongside UpBGE…

Hope that helps!
Happy new year!

I got it working using a combination of a BMD UltraStudio Mini Recorder, BlackSpout (https://magicmusicvisuals.com/forums/viewtopic.php?f=6&t=201), a Spout sender turned into a receiver (see this: https://github.com/maybites/blender.script.spout) and a bit of Python hacking in Blender.

BlackSpout is a Windows application that can capture the feed from a BMD device and publish it as a Spout stream. Spout is a mechanism for sending images from one piece of software to another. From the Spout sender example linked above it is pretty easy to create a receiver instead. I then used the 3D viewport drawing-handler Python example to create a 3D surface in the viewport and draw the received image onto it. It works OK; the main problem is that in order to refresh the image you must also redraw the viewport, which, depending on your scene, can be quite heavy and prevent real-time playback.
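
One way to drive that redraw is to tag the 3D view on a timer. A minimal sketch (assuming a draw handler like the one later in this thread is already installed; bpy.app.timers exists in 2.80+), keeping in mind this incurs exactly the redraw cost described above:

import bpy

def redraw_view3d():
    # Tag every 3D viewport for redraw so draw handlers run again.
    for window in bpy.context.window_manager.windows:
        for area in window.screen.areas:
            if area.type == 'VIEW_3D':
                area.tag_redraw()
    return 1 / 30  # run again in roughly one frame at 30 fps

bpy.app.timers.register(redraw_view3d)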

As I'm currently in the middle of a project where I use the code, I can't share it at the moment, but I can discuss what I did if anyone is interested.

With the Decklink module real time is doable, and the quality is the best possible… everything happens under the hood and is done natively by the GPU and the Decklink SDK… but I don't really know if the Mini Recorder is supported…

Other real-time video input solutions may include OBS VirtualCam outputting the Decklink video stream to a WDM or VFW video input (or a virtual /dev/video device on Linux).
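
As a rough sketch of that route (OpenCV on the Blender side is my assumption; device index 0 is whatever the virtual camera registers as), the frames could then be uploaded to a texture much like in the Spout script later in this thread:

import cv2

# Open the virtual webcam (OBS VirtualCam / v4l2loopback); the index is an assumption.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()  # frame is a BGR numpy array of shape (height, width, 3)
if ok:
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    print(rgb.shape)
cap.release()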

Also maybe interesting for you to investigate is the NDI protocol from NewTek. As soon as I have some time I also plan to try to add Blender support for it, but for the time being it is impossible for me to do this…
NDI is already supported by ffmpeg, VLC and OBS, so it should be easy enough to create a complete pipeline to feed video into Blender…

Thanks,
Nuno Estanqueiro

Reading the image in real time (and displaying it separately) is not a problem; the problems arise when you try to draw it in real time inside the actual 3D viewport.

I'm not sure I understand you… If you can read the image, you can display it as the background of your scene using 2D filters; your objects will then display in front of the live video. Just make sure to fake a shadow catcher using a white plane to catch the shadows, but use multiply blending mode to get rid of the white and keep just the black…
https://media.blenderartists.org/uploads/default/original/4X/9/4/a/94ad4629cdd57a7451aa2ebce9bcc5ada3f1c1d8.mp4
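
A rough sketch of the filter side in UPBGE/BGE (how the live video gets bound to the videoTex sampler depends on your capture setup, so treat that name and its binding as assumptions):

import bge

fragment = """
uniform sampler2D bgl_RenderedTexture;  // the rendered scene
uniform sampler2D bgl_DepthTexture;     // the scene depth
uniform sampler2D videoTex;             // live video, bound externally (assumed)

void main()
{
    vec2 uv = gl_TexCoord[0].st;
    float depth = texture2D(bgl_DepthTexture, uv).r;
    // Where nothing was rendered (depth == 1.0) show the video;
    // rendered geometry stays in front of it.
    gl_FragColor = depth < 1.0 ? texture2D(bgl_RenderedTexture, uv)
                               : texture2D(videoTex, uv);
}
"""

scene = bge.logic.getCurrentScene()
scene.filterManager.addFilter(0, bge.logic.RAS_2DFILTER_CUSTOMFILTER, fragment)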

Interesting, I haven't tried using it as a 2D background. Do you happen to have a code snippet that shows the relevant parts of this? Or a link to some example code?

My use case is that I draw an actual polygon inside the 3D scene so that it sits in between the scene geometry. I use it for placing a greenscreen element into the scene, with the live key done inside a GLSL shader, etc.
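
For illustration only (not the actual shader used here), a crude example of the kind of green-screen key that can be done in GLSL; videoTex is again an assumed sampler name:

greenscreen_fragment = """
uniform sampler2D videoTex;

void main()
{
    vec4 col = texture2D(videoTex, gl_TexCoord[0].st);
    // How much greener the pixel is than its red/blue channels drives the key.
    float key = clamp((col.g - max(col.r, col.b)) * 4.0, 0.0, 1.0);
    gl_FragColor = vec4(col.rgb, 1.0 - key);
}
"""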

Hi kesonmis, thanks for the tip about BlackSpout. I'm trying to write a receiver in Python for Blender. Unfortunately I am a bad programmer with a limited understanding of Python. May I be so cheeky as to ask you to share the code with me? I need it for an art project I am working on. I would really appreciate the help. Thanks in advance.

I have managed to recreate the script to some extent, but the image is drawn only after I close the sender or call ReleaseReceiver(). I'm not sure how to fix this; maybe someone will have more luck. The sender has to have the same name and resolution to connect.

import bpy
import gpu
import bgl
from gpu_extras.batch import batch_for_shader

import SpoutSDK

receiverName = 'aaa'
spoutReceiverWidth = 640
spoutReceiverHeight = 360

# Create the Spout receiver. Its C++ signature looks like this:
# bool pyCreateReceiver(const char* theName, unsigned int theWidth,
#                       unsigned int theHeight, bool bUseActive);
spoutReceiver = SpoutSDK.SpoutReceiver()
spoutReceiver.pyCreateReceiver(receiverName, spoutReceiverWidth, spoutReceiverHeight, False)

# One GL_INT buffer for the texture name, one byte buffer for the (blank)
# initial pixel data.
textureReceiveID = bgl.Buffer(bgl.GL_INT, 1)
texturePixels = bgl.Buffer(bgl.GL_BYTE, spoutReceiverWidth * spoutReceiverHeight * 4)

bgl.glGenTextures(1, textureReceiveID)
print("textureReceiveID " + str(textureReceiveID[0]))

# Initialise the receiving texture.
bgl.glBindTexture(bgl.GL_TEXTURE_2D, textureReceiveID[0])
bgl.glTexParameterf(bgl.GL_TEXTURE_2D, bgl.GL_TEXTURE_WRAP_S, bgl.GL_CLAMP_TO_EDGE)
bgl.glTexParameterf(bgl.GL_TEXTURE_2D, bgl.GL_TEXTURE_WRAP_T, bgl.GL_CLAMP_TO_EDGE)
bgl.glTexParameteri(bgl.GL_TEXTURE_2D, bgl.GL_TEXTURE_MAG_FILTER, bgl.GL_NEAREST)
bgl.glTexParameteri(bgl.GL_TEXTURE_2D, bgl.GL_TEXTURE_MIN_FILTER, bgl.GL_NEAREST)
# Allocate texture storage; the receiver overwrites the contents every frame.
bgl.glTexImage2D(bgl.GL_TEXTURE_2D, 0, bgl.GL_RGBA, spoutReceiverWidth, spoutReceiverHeight,
                 0, bgl.GL_RGBA, bgl.GL_UNSIGNED_BYTE, texturePixels)
bgl.glBindTexture(bgl.GL_TEXTURE_2D, 0)

shader = gpu.shader.from_builtin('2D_IMAGE')

# Build the screen-space quad once instead of every frame.
batch = batch_for_shader(
    shader, 'TRI_FAN',
    {
        "pos": ((0, 0), (spoutReceiverWidth, 0),
                (spoutReceiverWidth, spoutReceiverHeight), (0, spoutReceiverHeight)),
        "texCoord": ((0, 0), (1, 0), (1, 1), (0, 1)),
    },
)

frameCounter = 0

def draw():
    global frameCounter

    bgl.glActiveTexture(bgl.GL_TEXTURE0)

    # C++ signature:
    # bool pyReceiveTexture(const char* theName, unsigned int theWidth, unsigned int theHeight,
    #                       GLuint TextureID, GLuint TextureTarget, bool bInvert, GLuint HostFBO)
    spoutReceiver.pyReceiveTexture(receiverName, spoutReceiverWidth, spoutReceiverHeight,
                                   textureReceiveID[0], bgl.GL_TEXTURE_2D, False, 0)
    # Workaround for the problem described above: the texture only updates once
    # the receiver is released, so release it every frame and reconnect.
    spoutReceiver.ReleaseReceiver()

    bgl.glBindTexture(bgl.GL_TEXTURE_2D, textureReceiveID[0])

    shader.bind()
    shader.uniform_int("image", 0)  # sample from texture unit 0
    batch.draw(shader)

    print(frameCounter)
    frameCounter += 1


draw_handle = bpy.types.SpaceView3D.draw_handler_add(draw, (), 'WINDOW', 'POST_PIXEL')