Compute Shader using OpenGL Wrapper (bgl) and Python OpenGL Wrapper

I managed to add a compute shader to the custom material shader.
It uses the OpenGL Wrapper (bgl) and the Python OpenGL Wrapper for the functions that are missing in bgl.

What can be done with a compute shader?
Basically you can calculate everything in a compute shader, but a compute shader can’t directly render the image to the screen.

Where can a compute shader be faster than the CPU, and what could it be used for?
Basically for all N-body calculations, like:

  • Physics
  • Particles
  • AI Simulation
  • Ray Tracing (still too slow for real time)
  • Wave Simulation
  • Global Illumination


You’ll need to download the OpenGL.zip and unzip it in the same folder as the blend file.
You need at least OpenGL 4.3 and GLSL 4.3.

At the moment there is only one example, which uses a texture buffer object.
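For anyone who wants to see the general shape before opening the blend file, here is a minimal sketch (not the exact code from the attachment) of how a compute shader can be compiled and dispatched from a BGE Python controller with the Python OpenGL Wrapper, writing into a texture that a material shader can later sample. Texture size, binding point and work group size are just example values.

# Minimal sketch, assuming the "OpenGL" package from OpenGL.zip (PyOpenGL) is importable.
# Not the exact script from the attachment; sizes and binding points are example values.
from OpenGL.GL import *

COMPUTE_SRC = """
#version 430
layout(local_size_x = 16, local_size_y = 16) in;
layout(rgba32f, binding = 0) uniform image2D img_output;
void main()
{
    ivec2 pos = ivec2(gl_GlobalInvocationID.xy);
    vec2 uv = vec2(pos) / 512.0;                      // normalized coordinates
    imageStore(img_output, pos, vec4(uv, 0.0, 1.0));  // write one pixel
}
"""

def setup_compute():
    # Compile and link the compute shader into its own program.
    shader = glCreateShader(GL_COMPUTE_SHADER)
    glShaderSource(shader, COMPUTE_SRC)
    glCompileShader(shader)
    program = glCreateProgram()
    glAttachShader(program, shader)
    glLinkProgram(program)

    # Create a 512x512 float texture the compute shader can write into.
    tex = glGenTextures(1)
    glBindTexture(GL_TEXTURE_2D, tex)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 512, 512, 0, GL_RGBA, GL_FLOAT, None)
    return program, tex

def dispatch(program, tex):
    # Bind the texture to image unit 0 and run 32x32 work groups of 16x16 threads.
    glBindImageTexture(0, tex, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA32F)
    glUseProgram(program)
    glDispatchCompute(512 // 16, 512 // 16, 1)
    # Make the writes visible before the texture is sampled by the material shader.
    glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT)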

I hope you enjoy it.

HG1

Attachments

OpenGL.zip (2.57 MB)
CoumputeShaderExample V1.0.blend (87.9 KB)


Oh that's nice!
At the moment I don’t know exactly how to use a compute shader, but I will look into what it can do.
A big thanks for your work and the features that you make available for the BGE community.

Great! This comes at the right time; I might try porting my scene geometry ray tracing tests to the BGE now!

Hello! I don’t understand well but it sounds great! :slight_smile:

I have some questions about it, if you have the time to answer them:

  • Are texture buffer objects (TBOs) like virtual textures that are not rendered on the screen?
  • When you write in vec2 texcoords (line 30 in your fragment shader), are those the TBO coordinates?

EDIT: Oh, no… It seems that’s not the case… So can we get TBO texcoords as input in the fragment shader?

  • Can we fill a TBO with the scene captured by a camera? Sorry, I don’t know enough about OpenGL to understand your code completely.

  • What is storePos (and the other elements) on line 53 in imageStore? And can we access it in the fragment shader?

EDIT: No need to read further if we can’t get TBO texcoords in the fragment shader…

Finally, do you think this technique is appropriate to make a per-object motion blur effect:

  • (At frame 1) render the scene into a TBO in the first texture slot of a plane (texture1).
  • (At frame 2) render the scene into the second texture slot (this texture will appear in front of the camera) (texture2).
  • (At frame 2) make a fragment shader with the TBO texcoords (texcoords1) as input and the texture2 texcoords (texcoords2) as input (texcoords2 computed in the vertex shader: gl_TexCoord[0] = gl_MultiTexCoord0;).
  • (At frame 2) in the fragment shader, compare texcoords1 and texcoords2 to find a velocity vector (velocity per pixel): vec2 velocityVector = texcoords2.st - texcoords1.st.
  • (At frame 2) apply the fragment shader (with the motion blur effect) to texture2 and position the plane in front of the camera.

Do you believe that this pipeline is correct (EDIT: not if we can’t get the TBO texcoords) and could be efficient?

Thanks for this resource, and thank you very much if you have the time to answer my questions.

EDIT2: I made this: http://www.mediafire.com/download/ltbgbkbm5bjh7lg/warping.rar
with a TBO filled with compute-shader noise and this fragment shader: https://www.shadertoy.com/view/4s23zz

But I don’t understand why the shader accelerates… (EDIT3: I got something nicer by replacing line 118 with: vec2 q = (2.0*p-1.0); far from perfect, but http://www.pasteall.org/blend/36455). Hehe. It’s not as nice as the Shadertoy result (needs more work), but it’s cool to get such a result from nothing (without a base texture) :slight_smile:

Great! Thanks, this worked, and thanks for the Python script download!


Python script error - object 'Plane', controller 'Python':
Traceback (most recent call last):
  File "compute", line 8, in <module>
ImportError: No module named 'OpenGL'

Excuse me?
I didn’t read the whole post, sorry!

Yes.

  • When you write in vec2 texcoords (line 30 in your fragment shader), are those the TBO coordinates?

Not sure what you mean by TBO coordinates. Those are the UV texture coordinates which you define in the UV editor for the object.

  • Can we fill a TBO with the scene captured by a camera? Sorry, I don’t know enough about OpenGL to understand your code completely.

Yes. You can use the video texture module to render the scene from a camera into a texture.
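If you want to try that, a rough sketch with the bge.texture (video texture) module could look like the following; the object and camera names are made up, and the object needs a texture in its first material slot to replace.

# Rough sketch (names are made up): render the scene from a camera into the
# first texture channel of an object using the video texture module.
import bge

def create_render_texture(cont):
    obj = cont.owner
    scene = bge.logic.getCurrentScene()
    cam = scene.objects["RenderCam"]          # hypothetical camera name

    # Keep a reference on the object, otherwise the texture is garbage collected.
    obj["RenderTex"] = bge.texture.Texture(obj, 0)
    obj["RenderTex"].source = bge.texture.ImageRender(scene, cam)

def update_render_texture(cont):
    # Call every frame to re-render the scene into the texture.
    cont.owner["RenderTex"].refresh(True)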

  • What is storePos (and the other elements) on line 53 in imageStore?

The first parameter is the output image where the compute shader should store the pixel.
The second parameter (storePos) is the xy image coordinates (pixel coordinates).
The third parameter is the vec4 data value (RGBA), basically the pixel color.
Example:
imageStore(image, ivec2(gl_GlobalInvocationID.xy), vec4(0.0f, 1.0f, 1.0f, 1.0f));
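To show where that call sits, here is a hedged reconstruction (not copied from the attached shader) of a minimal GLSL 4.30 compute shader body, kept as a Python string the way BGE scripts usually embed their GLSL; storePos is simply the invocation’s pixel coordinate.

# Hypothetical minimal compute shader body (not the attached shader),
# showing where storePos comes from and how imageStore uses it.
STORE_EXAMPLE = """
#version 430
layout(local_size_x = 16, local_size_y = 16) in;
layout(rgba32f, binding = 0) uniform image2D image;            // 1st parameter: output image

void main()
{
    ivec2 storePos = ivec2(gl_GlobalInvocationID.xy);           // 2nd parameter: pixel coordinates
    imageStore(image, storePos, vec4(0.0f, 1.0f, 1.0f, 1.0f));  // 3rd parameter: RGBA color
}
"""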

And can we access it in the fragment shader?

Yes. That is what my example does. The buffer is generated and bound with OpenGL to both the compute shader and the fragment shader. The compute shader writes the texture into it, and the fragment shader reads the texture and displays it.
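For the fragment shader side, a minimal sketch (again not the exact attached shader; the sampler and varying names are placeholders) of sampling that texture could look like this:

# Minimal sketch of the fragment shader side (names are placeholders): the texture
# written by the compute shader is bound to a sampler and displayed with the UVs.
FRAGMENT_SRC = """
#version 430
uniform sampler2D computeResult;   // the texture the compute shader wrote into
in vec2 texcoords;                 // UV coordinates from the vertex shader
out vec4 fragColor;

void main()
{
    fragColor = texture(computeResult, texcoords);
}
"""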

Finally, do you think this technique is appropriate to make a per-object motion blur effect:

In general you can calculate everything in a compute shader, but for your approach a compute shader is not necessary.
You can make a texture buffer without a compute shader.

But I don’t understand why the shader accelerates…

The shader is written to stretch the texture to the full screen resolution (iResolution), but you are using it on an object whose texture coordinates are normalized from [0,0] to [1,1]. So you have to set the resolution to 1, 1 and not 512, 512.
The shader from Inigo uses a 256x256 noise texture to generate the fragment output image, and with the compute shader you are generating a 512x512 image with a different noise, so the output image will look different.
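In BGE Python, setting those Shadertoy-style uniforms could look roughly like this sketch (the standard BL_Shader API; whether the ported shader uses iGlobalTime is an assumption):

# Sketch: feed Shadertoy-style uniforms to a BGE material shader, with the
# resolution set to 1, 1 because the object's UVs are already normalized.
import bge

def set_shader_uniforms(cont):
    obj = cont.owner
    for mesh in obj.meshes:
        for mat in mesh.materials:
            shader = mat.getShader()
            if shader is not None:
                shader.setUniform2f("iResolution", 1.0, 1.0)   # not 512, 512
                # Only needed if the ported shader also uses a time uniform.
                shader.setUniform1f("iGlobalTime", bge.logic.getRealTime())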

By the way, you don’t need a compute shader to generate a noise image; you can calculate per-pixel noise in the fragment shader if you don’t want to use a texture.
Or you can use Blender's mathutils.noise module to generate the image and store it into a buffer.
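A small sketch of that last option (only the buffer filling; uploading it with glTexImage2D or the video texture module is left out, and the size just matches the 512x512 example):

# Sketch: fill an RGBA float buffer with values from mathutils.noise, which could
# then be uploaded as a texture instead of generating noise in a compute shader.
# Pure Python, so it is slow; fine for a one-time setup.
import bgl
from mathutils import Vector, noise

def make_noise_buffer(size=512, scale=8.0):
    pixels = []
    for y in range(size):
        for x in range(size):
            # noise() returns roughly [-1, 1]; remap to [0, 1].
            n = noise.noise(Vector((x / size * scale, y / size * scale, 0.0)))
            n = n * 0.5 + 0.5
            pixels.extend((n, n, n, 1.0))          # grey RGBA pixel
    return bgl.Buffer(bgl.GL_FLOAT, [size * size * 4], pixels)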

Thanks very much for your answer.

  • When you write in vec2 texcoords (line 30 in your fragment shader), are those the TBO coordinates?

Not sure what you mean by TBO coordinates. Those are the UV texture coordinates which you define in the UV editor for the object.

Sorry, I was tired when I wrote the question (I mixed up gl_Position and gl_TexCoord[0] :o )… What I would like to achieve is what is described here: http://john-chapman-graphics.blogspot.fr/2013/01/per-object-motion-blur.html, in the Velocity Vector chapter. To compute it, we have to know the previous gl_Position (at the frame before the current frame). I was wondering if we could store it (the previous gl_Position) in a TBO (or anything else… I don’t understand buffer and array concepts well enough), then pass it to a shader to compute the velocity vector and apply per-object motion blur.
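For reference, the velocity-vector idea in that article usually works without a TBO: the previous frame's model-view-projection matrix is stored on the Python side and passed as a uniform, and both positions are computed in the shaders. A rough sketch (not from the attachment; the uniform and varying names are placeholders):

# Rough sketch of the per-object velocity idea from the linked article
# (not HG1's code; names are placeholders).
VELOCITY_VERTEX_SRC = """
#version 430
uniform mat4 currModelViewProjection;   // this frame's MVP, set from Python
uniform mat4 prevModelViewProjection;   // last frame's MVP, stored by Python
in vec3 position;
out vec4 currPos;
out vec4 prevPos;

void main()
{
    currPos = currModelViewProjection * vec4(position, 1.0);
    prevPos = prevModelViewProjection * vec4(position, 1.0);
    gl_Position = currPos;
}
"""

VELOCITY_FRAGMENT_SRC = """
#version 430
in vec4 currPos;
in vec4 prevPos;
out vec4 fragColor;

void main()
{
    // Perspective divide, then the screen-space difference is the per-pixel velocity.
    vec2 velocity = (currPos.xy / currPos.w - prevPos.xy / prevPos.w) * 0.5;
    fragColor = vec4(velocity, 0.0, 1.0);
}
"""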

The shader is written to stretch the texture to the full screen resolution (iResolution), but you are using it on an object whose texture coordinates are normalized from [0,0] to [1,1]. So you have to set the resolution to 1, 1 and not 512, 512.
The shader from Inigo uses a 256x256 noise texture to generate the fragment output image, and with the compute shader you are generating a 512x512 image with a different noise, so the output image will look different.

Thanks! I understand better what was going wrong with the iResolution variable in Shadertoy shaders.

By the way, you don’t need a compute shader to generate a noise image; you can calculate per-pixel noise in the fragment shader if you don’t want to use a texture.

Yes. I didn’t think about that when I tried your “compute shader” file (and I just discovered how to make a noise texture…). I wanted to test it! :slight_smile: But you’re right, it seems much simpler that way (I wouldn’t have thought of this alone, without your tip).

Or you can use Blender's mathutils.noise module to generate the image and store it into a buffer.

I did not know about that option. I’ve never worked with buffers, so I have to test it… But it may be difficult for me (without an example).

Thanks for answering my noobish questions!

Another question: what can we do with TBOs that we couldn’t do before with Blender? Thanks! (EDIT: Sorry, I hadn’t read the first post correctly.)

realtime shadow baking?

(like event -> bake -> apply?)

manipulate normals with bullets?

Awesome work HG1! I’m sure I will spend too much time playing with this…

How could this be used with physics?

Nice! Thanks for sharing! :yes:
A bit off topic: with this kind of wrapper technique, can you implement a geometry shader in the BGE too?

With a geometry shader wrapper, we could do hardware mesh instancing, right?

With hardware mesh instancing, GPU-based LOD and occlusion…
we would be able to do stuff almost as good as UE4, right?

We would only be missing tessellation?

Wow! I didn’t know you had already implemented that! Kudos to you!
I hope your patch gets accepted.

That is not possible this way. For tessellation the geometry needs to be rendered in patches, so we have to change the C++ source code to get the tessellation shader working. I already described the problems with the tessellation shader in the geometry shader thread on page 3, posts 45 and 50.

Any new word on this?

Which will take a long time, if it ever happens.

HG1, are you helping UPBGE yet?

Thank you again,