Any known differences between the standalone player and the embedded player?

Hi all,

Does anybody know of any differences between the embedded and standalone players with regard to the VideoTexture module?

We are seeing roughly twice the logic time in the standalone player compared to running inside Blender, which seems odd. We’ve been trying to trace where this comes from, and it appears to originate in the VideoTexture.refresh method.
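For anyone who wants to reproduce this kind of measurement, here is a minimal, self-contained sketch of the approach. `CallTimer` is a hypothetical helper of our own, not BGE API; only the `refresh` call it wraps is the real VideoTexture method:

```python
import time

class CallTimer:
    """Wrap any callable and accumulate the wall-clock time spent in it."""
    def __init__(self, fn):
        self.fn = fn
        self.calls = 0      # number of times the wrapped callable ran
        self.total = 0.0    # total seconds spent inside it

    def __call__(self, *args, **kwargs):
        start = time.perf_counter()
        try:
            return self.fn(*args, **kwargs)
        finally:
            self.calls += 1
            self.total += time.perf_counter() - start

    def average_ms(self):
        """Mean time per call in milliseconds (0.0 if never called)."""
        return 1000.0 * self.total / self.calls if self.calls else 0.0

# Hypothetical usage in a per-frame BGE script:
#   timed_refresh = CallTimer(tex.refresh)
#   ... each frame: timed_refresh(True)
#   ... on game exit: print(timed_refresh.average_ms())
```

Comparing the reported average between the embedded and standalone runs narrows the problem down to this one call site without needing a full profiler.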

Anybody have any clues?



Hope I don’t have to dive into the source, but it seems there is duplicate code between the players, which could of course be the cause:

Duplicate Code in VideoTexture

The VideoTexture module sits almost completely apart from the rest of the engine. At first glance this is a good thing (loosely coupled, few dependencies, etc.). However, a deeper look shows that it achieves this lack of dependencies by copying large chunks of the render setup code from KX_KetsjiEngine (the core engine class), creating a large amount of duplicate code.

Now, if a developer wants to fix or change something like how camera projections are handled (e.g., fix something related to orthographic cameras), they might find the code in KX_KetsjiEngine, make the change, and call it a day. However, they have just introduced a bug into the render-to-texture functionality offered by VideoTexture!

Overall, I would like to see the VideoTexture features better integrated into the BGE and the VideoTexture module itself phased out. For example, we should just add “native” support for Blender’s movie texture type.
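To show the hazard in miniature (all names here are illustrative, not actual BGE code): if both the main render path and the render-to-texture path call one shared projection helper, a fix to orthographic handling lands in both automatically. The duplication described above is the opposite situation, two copies that can silently diverge.

```python
def ortho_projection(left, right, bottom, top, near, far):
    """Single shared implementation of an orthographic projection
    matrix (4x4, nested rows, OpenGL glOrtho convention)."""
    return [
        [2.0 / (right - left), 0.0, 0.0, -(right + left) / (right - left)],
        [0.0, 2.0 / (top - bottom), 0.0, -(top + bottom) / (top - bottom)],
        [0.0, 0.0, -2.0 / (far - near), -(far + near) / (far - near)],
        [0.0, 0.0, 0.0, 1.0],
    ]

class MainRenderer:
    """Stand-in for the main engine render path (KX_KetsjiEngine's role)."""
    def setup_camera(self, frustum):
        return ortho_projection(*frustum)   # one definition ...

class RenderToTexture:
    """Stand-in for the VideoTexture render-to-texture path."""
    def setup_camera(self, frustum):
        return ortho_projection(*frustum)   # ... reused, so any fix lands in both
```

With the shared helper, an orthographic fix made in one place is automatically picked up by both render paths; with copied code, each copy must be patched independently.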
A further issue with the VideoTexture module is that it is only exposed through the Python API; the rest of the engine does not know it exists. This makes it difficult to use VideoTexture functionality in other parts of the engine (e.g., for the movie texture support mentioned above). Ideally, the features in the VideoTexture module would be part of the engine proper and exposed to the Python API, like all of the other engine code.