VideoTexture Enhancement Patchset

Adapted from the GSOC proposal format

Synopsis
The VideoTexture module leaves much to be desired, so a refactor is in order to address the following issues:

  • The Python API doesn’t expose many options and features of the FFmpeg API.
  • Texture generation takes up precious logic time.
  • Cache options for the VideoFFmpeg class would allow finer control over video loading.
  • Capture support is lacking.
  • A new VideoTexture logic brick would be a convenience.
  • The entire module should be integrated as part of KX_GameObject, as the Texture class operates on a KX_GameObject. Ideally, the KX_GameObject class should be restructured before integration, lest it become more convoluted.
  • Possibly rename VideoTexture to a more descriptive name - DynamicTexture.

Benefits to Blender
These patches will give BGE users a more robust API and more VideoTexture features. They should also improve performance when using multiple VideoTextures.

Deliverables

  • VideoTexture
    • Code refactor (mostly in the FFmpeg classes)
    • Code cleanup
  • KX_GameObject
    • Code refactor
  • Documentation of any new features

Project Details
Most of the existing class structure will be kept. The Texture struct will be converted into a class, and a PyObject struct will hold an instance of the Texture class, similar to VideoFFmpeg etc. Currently, the Texture class calculates a new image when its refresh method is called. Generally, this method is to be called by the user every logic tick to ensure the texture appears in the BGE. However, refreshing can take up significant logic time depending on the parameters. Since the texture calculation can be independent of the BGE, this function can be moved to a separate thread. This may also provide some insights into multithreading the entire BGE.
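The idea above can be sketched roughly as follows. This is a minimal illustration, not the actual BGE implementation: the ThreadedTexture class, the calc_image callable, and the single-slot queue are all assumptions made for the example.

```python
import queue
import threading

class ThreadedTexture:
    """Runs the expensive image calculation on a worker thread; the logic
    tick only collects a finished image via refresh()."""

    def __init__(self, calc_image):
        self._calc = calc_image
        self._out = queue.Queue(maxsize=1)   # at most one finished image pending
        self._stop = threading.Event()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def _run(self):
        while not self._stop.is_set():
            image = self._calc()             # expensive conversion, off the logic thread
            self._out.put(image)             # blocks until the last image is consumed

    def refresh(self):
        """Called once per logic tick; returns a new image, or None if the
        worker hasn't finished one yet."""
        try:
            return self._out.get_nowait()
        except queue.Empty:
            return None

    def stop(self):
        self._stop.set()
```

The bounded queue is what enforces "no new texture is calculated until the current one has been loaded": the worker blocks in put() until the logic thread consumes the previous image.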

Currently, the constructor arguments for the VideoFFmpeg class are somewhat confusing, so they will be condensed into a simpler but more robust form. Both the VideoFFmpeg class and the ImageFFmpeg class will take the following arguments: filename, format, options. Filename is the name of the file. Format forces the FFmpeg decoder to use that format; if it is invalid, the decoder will try to detect the format automatically. Options are the options passed to the FFmpeg decoder, which include video size, framerate, etc.
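A stand-in sketch of that proposed constructor shape; this VideoFFmpeg is a dummy class, not the real bge.texture.VideoFFmpeg, and the list of known formats is invented for illustration.

```python
# Stand-in for the proposed unified constructor (filename, format, options).
KNOWN_FORMATS = ("avi", "mov", "mp4", "v4l2")   # invented list for the example

class VideoFFmpeg:
    def __init__(self, filename, format=None, options=None):
        self.filename = filename
        # An invalid/unknown format means the decoder falls back to auto-detection.
        self.format = format if format in KNOWN_FORMATS else None
        self.options = dict(options or {})       # e.g. video size, framerate

# An unrecognized format is ignored and auto-detection takes over:
video = VideoFFmpeg("movie.mp4", format="bogus",
                    options={"video_size": "640x480", "framerate": "30"})
```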

The VideoFFmpeg class will also accept an optional cache mode and optional cache sizes for packets and frames. These can be passed to the constructor and modified afterwards. The cache mode will allow the user to force threading off, force it on, or let it be detected automatically. For the Texture class, no cache is used, since the logic tick rate generally doesn’t exceed the render rate, nor do users need to call refresh multiple times in a single logic tick. A new texture will not be calculated (assuming a new frame from the decoder) until the current one has been loaded by a call to refresh. The Texture threads will run at the frequency of the logic tick rate, which should ensure a texture is always available. However, I may consider adding a cache in the future if the need arises. Finally, the Texture cache is disabled for ImageViewport, etc.
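The cache options might look something like this. The constant names, attribute names, and default sizes are all assumptions, not the final API; bounded deques stand in for the packet and frame caches.

```python
from collections import deque

# Assumed cache-mode constants (not the final API names).
CACHE_OFF, CACHE_ON, CACHE_AUTO = range(3)

class DecoderCache:
    """Sketch of per-decoder cache settings: a mode plus bounded
    packet and frame caches whose sizes the user can tune."""

    def __init__(self, mode=CACHE_AUTO, packet_cache=16, frame_cache=4):
        self.mode = mode
        # Bounded caches: the oldest entry is dropped when a cache is full.
        self.packets = deque(maxlen=packet_cache)
        self.frames = deque(maxlen=frame_cache)

cache = DecoderCache(mode=CACHE_ON, frame_cache=2)
for i in range(5):
    cache.frames.append(i)   # only the 2 newest frames are kept
```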

Flow Diagram:
VideoFFmpeg -> Texture -> BGE
[ 1 -> {2 ] -> 3 }

Threads = 1, 2, 3
Mutex A = []
Mutex B = {}

Mutex A controls the frame cache resource and is shared between threads 1 and 2.
Mutex B controls the texture resource and is shared between threads 2 and 3.

  1. VideoFFmpeg thread - Process the data stream: reads packets, decodes them into frames, and puts the frames in the cache.
  2. Texture thread - Convert frames into textures: calls VideoFFmpeg.getImage(), which grabs a frame from the cache and converts it into a texture.
  3. Main BGE thread - Load the texture: calls Texture.refresh(True), which loads the texture from the texture thread, if available.
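The two-mutex handoff can be sketched as a small producer/consumer pipeline. The thread bodies here are stand-ins for the real decoder and converter, and the frame count, names, and semaphore are assumptions made so the example is self-contained and runnable.

```python
import threading
from collections import deque

frame_cache = deque()              # guarded by mutex_a (threads 1 and 2)
mutex_a = threading.Lock()
texture = [None]                   # guarded by mutex_b (threads 2 and 3)
mutex_b = threading.Lock()
frames_ready = threading.Semaphore(0)
N = 5                              # arbitrary number of frames for the demo

def video_ffmpeg_thread():         # thread 1: decode packets into frames
    for i in range(N):
        with mutex_a:
            frame_cache.append(i)
        frames_ready.release()

def texture_thread():              # thread 2: convert frames into textures
    for _ in range(N):
        frames_ready.acquire()
        with mutex_a:
            frame = frame_cache.popleft()
        with mutex_b:
            texture[0] = ("texture", frame)

t1 = threading.Thread(target=video_ffmpeg_thread)
t2 = threading.Thread(target=texture_thread)
t1.start(); t2.start()
t1.join(); t2.join()

with mutex_b:                      # thread 3: the main BGE thread loads it
    loaded = texture[0]
```

Mutex A only ever bridges threads 1 and 2, and Mutex B only threads 2 and 3, mirroring the bracket scopes in the diagram above.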

Project Timeline
The project will be broken down into three parts:

VideoTexture Enhancements I [DONE]

  • Refactor and enhancements
  • Some code/style cleanup

KX_GameObject Refactor

VideoTexture Enhancements II

  • Integration with KX_GameObject
  • VideoTexture logic brick
  • Rename to DynamicTexture?
  • Final code/style cleanup

Progress
VideoTexture Enhancements I is mostly done. I still need to do some cleanup and a few minor tasks. So far I’ve been able to reduce logic time by almost 50%. Scaling is also improved - I can create up to 4-5 VideoFFmpeg textures (with nothing else major going on) at 60 fps. I’ve only tested this on Linux, however.

I’ve attached the patch and a sample blend for a preliminary release (based on 2.70), but I’ll try to get some Windows builds up. Eventually I will set up a task on the Blender development site for the official submission and review. I mainly created this thread since it’s more user-friendly to the masses than the developer site.

Future

  • Fix the deinterlace warning -> this requires filters (yadif) -> add support for filters

partI.zip (103 KB)

So far I’ve been able to reduce logic time by almost 50%. Scaling is also improved - I can create up to 4-5 VideoFFmpeg textures (with nothing else major going on) at 60 fps.

Hey, that’s very nice!

I don’t think we want to integrate with the game object. They are not related; it is the object’s mesh that shares this relationship. If we want to allow VideoTexture to be expanded to non-mesh applications later on, the relationship should be sparse.


What I’m thinking is to make the bge.texture.Texture class an attribute of KX_GameObject, and leave the other classes in the VideoTexture module. My reasoning is that Texture works on a KX_GameObject. The general convention is to store the Texture object as a game property of the object it acts on. Thus in essence, the Texture object should be an attribute of KX_GameObject, although I’d rename it to dynamic texture or something. Internally, I think it’d simplify the code a bit.

So the user can do something like:


# dyntex is already an attribute of KX_GameObject
gameObj.dyntex.source = bge.texture.VideoFFmpeg(...)
gameObj.dyntex.play()

Instead of:


tex = bge.texture.Texture(gameObj)
tex.source = bge.texture.VideoFFmpeg(...)
tex.play()
gameObj['tex'] = tex

IMHO, right now, the overall implementation feels kind of hackish. When refreshing the Texture, it checks KX_KetsjiEngine’s clock time to see if it missed the window for loading a texture. When the Texture object is created, it saves a copy of the original texture code, whereas I think whoever is responsible for loading textures (BL_Texture?) should be aware of whether a Texture object is being used. The Texture object effectively takes over the responsibility of loading the texture from whoever should own it, duplicating the texture-loading function (still exploring).

What do you think?

Some notes from me:

  1. It would make sense for the texture update to be handled by the engine (for now). I am an advocate of allowing the scripting layer to influence how things are handled, but we can always open that up later.
  2. It might be sensible to consider how normal textures are accessed.
  3. Textures belong to the mesh, not the object.

I agree with this isolation of relevant data, but if we were to do this, then I think some other redesign may be necessary.
The mesh is where the texture is associated with the object; therefore the texture should belong to the mesh. However, I don’t think this should be exclusive to in-game textures. As far as the user is concerned, the Blender texture is the same as any other image texture, and thus should be accessible from the outset. It doesn’t make sense to implement boilerplate to facilitate this, so it might require a rethink of how textures are refreshed. I think having the system refresh the texture is fine; the user should still have the option to call play() and stop() or set the frame manually.

If I were you, I would submit the VideoTexture improvements as a patch, and create a separate patch/branch that deals with these bigger system changes.