GLSL shader variables automated by game object properties

Is there an easy way to link/map/remap game object properties/variables to GLSL variables?
So far, all the examples I’ve seen are either broken or too complicated.
Let’s say I want to make an object automatically change its colors based on its distance from the scene center, or based on its world position.

In GLSL, a vec4 color probably goes from 0.0 to 1.0, while the game object’s world position is a vec3 (I’m guessing) whose values go way past 1.0, so it needs remapping/scaling.
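Something like this remap is what I mean; plain Python just to show the math, with a made-up world-space range:

```python
# Hypothetical remap of a world position (guessing a -50..50 BU range)
# into 0.0..1.0 color channels, clamped at both ends.
def remap01(value, src_min=-50.0, src_max=50.0):
    t = (value - src_min) / (src_max - src_min)
    return max(0.0, min(1.0, t))

pos = (12.5, -30.0, 4.0)                         # object world position in BU
color = [remap01(axis) for axis in pos] + [1.0]  # vec4-style RGBA, all in 0..1
```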

Also, the 2D Filter “motion blur” is not working in v2.74. Is that a bug, or is it just my version doing that?

I also need to run one script at 240 fps while the rest of the scripts update at 60 fps. Is there a way to do that, or is it too complicated? I know you could change the physics engine’s rate to 240 fps and limit the other scripts so they don’t run every logic tick, but is there some better way to set different refresh rates for different things?
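The workaround I mean looks something like this tick-skipping counter (just a sketch; the property name is made up):

```python
# Sketch of the skip-ticks workaround: do the real work only every 4th
# logic tick. The "tick" property name is just illustrative.
def slow_update(cont):
    obj = cont.owner
    obj["tick"] = obj.get("tick", 0) + 1
    if obj["tick"] % 4:          # skip three out of every four ticks
        return
    # ...the actual per-update work goes here...
```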

Here is an example file. I’m trying to automate or link some of the variables in the bloom shader to other variables in the scene, but it’s not working.

Attachments

StarWars .blend (211 KB)
RandomColorsGLSL.blend (86.4 KB)

I’ve done a tutorial on scrolling textures with GLSL that covers passing Python variables to GLSL shaders.
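The core of it looks roughly like this: grab the material’s shader and push a game property in as a uniform every tick. A minimal sketch, assuming a float game property named "tint" on the object:

```python
# Trivial GLSL pair; the fragment shader mixes red and blue by "tint".
VERTEX = """
void main() {
    gl_Position = ftransform();
}
"""
FRAGMENT = """
uniform float tint;
void main() {
    gl_FragColor = vec4(tint, 0.0, 1.0 - tint, 1.0);
}
"""

def update(cont):
    obj = cont.owner
    for mesh in obj.meshes:
        for mat in mesh.materials:
            shader = mat.getShader()
            if shader is not None:
                if not shader.isValid():
                    shader.setSource(VERTEX, FRAGMENT, True)
                # copy the game property into the shader every logic tick
                shader.setUniform1f("tint", obj["tint"])
```

For custom 2D filters it’s even simpler, if I remember right: a float game property on the object that owns the Filter 2D actuator can be read as a uniform of the same name inside the filter, which is the usual way to drive something like a bloom strength.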

As for your second question: why would you want a script to run at that speed?

I noticed that when people code a game from scratch in Java or C++, they use two independent loops or functions: one update() for input and game logic that runs locked at a constant rate, let’s say 60 fps, and a draw() that runs completely unlocked at a variable frame rate. Blender seems to do it differently. There is no way to disconnect the physics/input/logic/render steps so that each one runs independently in its own thread, at a configurable rate based on a timer, with some things running at a variable rate while others run at a constant rate.
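Outside of Blender, that two-loop pattern is usually written as a fixed-timestep accumulator, something like this sketch:

```python
import time

DT = 1.0 / 60.0                      # fixed logic step (60 Hz)

def run(update, draw):
    """Fixed-rate update(), unlocked draw(): the classic two-loop pattern."""
    last = time.monotonic()
    lag = 0.0
    while True:
        now = time.monotonic()
        lag += now - last
        last = now
        while lag >= DT:             # logic locked to a constant rate
            update(DT)
            lag -= DT
        draw()                       # rendering runs completely unlocked
```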

The reason I want one script to run 4 or 8 times faster than the other scripts is that I’ve seen a LightWave 3D tutorial where they did the spinning-light trick to create soft or filtered shadows. For that to work, you have to spin a shadow-casting lamp 4 or 8 times in between logic ticks and apply motion blur on each spin/pass.

It can also be used for antialiasing, but you have to be able to shake the camera multiple times in between logic ticks and composite the results, most likely with some motion blur applied on each shaking pass.

In other words, I need it for soft shadows and antialiasing, but I guess it should be possible to simulate all that in a GLSL 2D filter with multiple passes. Maybe. I don’t know. Blender is not making it very easy to try these kinds of ideas.

Is the “motion blur” 2D filter working in Blender v2.74? Mine is not doing it.

Well, you can have a “draw() that runs completely unlocked”:
http://puu.sh/i4D0B/e3977e1789.png

But what you would want to do is render the camera’s view to a texture 4 or 8 times per logic tick.
The thing is, you can change the scene between each render call in Python.
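With bge.texture that looks roughly like this. This is a sketch only; the camera name "RenderCam" and the nudge amount are assumptions:

```python
from bge import logic, texture

def setup(cont):
    """Attach a render-to-texture source to the owner's first texture."""
    scene = logic.getCurrentScene()
    tex = texture.Texture(cont.owner, 0)
    tex.source = texture.ImageRender(scene, scene.objects["RenderCam"])
    logic.rtt = tex                   # keep a reference so it stays alive

def multi_pass(cont):
    """Re-render several times in one logic tick, nudging the scene between calls."""
    cam = logic.getCurrentScene().objects["RenderCam"]
    for _ in range(4):                # 4 sub-passes per logic tick
        cam.worldPosition.x += 0.01   # change the scene between render calls
        logic.rtt.refresh(True)       # render the camera view into the texture
```

Each refresh() overwrites the texture, so compositing the sub-passes (the motion-blur accumulation part) is still up to you, for example in a 2D filter.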

Blender is a little backwards in this regard. It doesn’t separate render calls from logic ticks via the API; you get called for logic ticks whenever the frames are rendered (with some exceptions), which isn’t sensible.

You might be better off using something like Panda for this.

I love it when you guys are confused, but that’s OK, because that’s why I’m here: so we can sort the bugs out of the BGE. Blender is doing it wrong. It’s improving, but still doing it wrong. Here is an example file. On layer 3 is the timer test. Run the layer 3 example and try to figure out what’s wrong with it, because I already know what’s wrong there. On layer 1 there’s something else: a small city generator, though it’s not finished yet. An entire city in 134.4 KB. How about that? LOL

After you guys run the layer 3 test at 60 fps, run it again with v-sync off and all frame rates maxed out, and check the timer result again. Can you guys say “Super Bugs Extreme”? Say it together; maybe then the timer will hear it properly. Gravity is supposed to be 9.8 Blender Units per second squared. Not per logic tick, not per frame time, not per frame rate. It’s per unit of time and time delta. The calculations are all confused so far.

About that Panda 3D… you know it’s not going to match Unreal Engine 4 anytime soon, and that one is free too. You only have to pay royalties and license fees for Unreal 4 if you end up making trillions of dollars with it by selling tons and tons of kick-ass games.

Attachments

Timer Test.blend (134 KB)

Are you trying to pass a variable from one shader to another (or back to Python)?

Neither of those is currently possible. The output of a shader goes to the framebuffer. A vertex shader runs its code for every vertex in the model, and a fragment shader runs for every pixel. A shader variable might hold thousands of different values in a single frame; there is no way for it to determine which one is the value you want.

I also need to run one script at 240 fps while the rest of the scripts update at 60 fps. Is there a way to do that, or is it too complicated? I know you could change the physics engine’s rate to 240 fps and limit the other scripts so they don’t run every logic tick, but is there some better way to set different refresh rates for different things?

Depending on what you are trying to do, you could base it off a timer. Use time.monotonic(); it is independent of the frame rate.
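For example, integrating gravity with real elapsed time instead of per-tick. A minimal sketch:

```python
import time

_last = time.monotonic()
_vel = 0.0

def fall(cont):
    """Apply gravity per real second, regardless of the logic tick rate."""
    global _last, _vel
    now = time.monotonic()
    dt = now - _last                  # wall-clock seconds since the last call
    _last = now
    _vel += 9.8 * dt                  # 9.8 BU per second squared
    cont.owner.worldPosition.z -= _vel * dt
```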

Until this point, no one compared it to Unreal; it was simply stated that it’s better in this regard than the BGE. However, Panda is actually a very competent engine, and I’d rate it higher than Unity and many others. The ‘better than’ argument falls pretty short if we consider the burden on the developer, which is to actually make the game. Tools only go so far.

There are other things called Source SDK and Cry Engine SDK, plus another one called Cube 2: Sauerbraten. Why not just grab the Bullet Physics SDK and the OpenGL SDK and code everything in C++?