Acceptable bge.texture.ImageBuff input

Does any documentation exist that details how ImageBuff wants its data arranged when using ImageBuff.load()?

It seems that it wants a one-dimensional array representing every pixel in the image, like so:

[r[SUB]1[/SUB], g[SUB]1[/SUB], b[SUB]1[/SUB], a[SUB]1[/SUB], ... r[SUB]n[/SUB], g[SUB]n[/SUB], b[SUB]n[/SUB], a[SUB]n[/SUB]]

In fact, bge.texture.imageToArray() returns a bgl.Buffer object with that exact same layout.
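To make the layout concrete, here is a plain-Python sketch of what that flat channel stream looks like for a hypothetical 2×2 image. The bge module only exists inside Blender, so the ImageBuff.load() call itself is shown as a comment; the pixel values and image size here are made up for illustration:

```python
width, height = 2, 2

# Four RGBA pixels as byte values (0-255): red, green, blue, white.
pixels = [
    (255, 0, 0, 255),
    (0, 255, 0, 255),
    (0, 0, 255, 255),
    (255, 255, 255, 255),
]

# Flatten into the one-dimensional [r1, g1, b1, a1, ... rn, gn, bn, an]
# layout that imageToArray() also produces.
flat = [channel for pixel in pixels for channel in pixel]

# Inside Blender, this flat sequence could then be passed along:
#   buf = bge.texture.ImageBuff()
#   buf.load(bytes(flat), width, height)
```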

I’ve been trying to use some PyOpenGL functions in Blender to allow offscreen rendering of textures, but the OpenGL functions return pixels arranged in multidimensional arrays. Unfortunately, this makes it impossible to render a texture with OpenGL and then use it with the bge.texture module without some expensive remapping of the array data into a new layout.
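As a workaround sketch, assuming the OpenGL call hands back a nested structure of rows of (r, g, b, a) tuples (the actual shape depends on which PyOpenGL call and format flags are used), the result can be flattened into the 1-D channel stream in a single pass with itertools. The pixel values below are made up for illustration:

```python
from itertools import chain

# Hypothetical nested read-back result: rows -> pixels -> channels.
nested = [
    [(10, 20, 30, 255), (40, 50, 60, 255)],  # row 1
    [(70, 80, 90, 255), (11, 12, 13, 255)],  # row 2
]

# First chain merges the rows into a stream of pixel tuples,
# second chain merges those tuples into a stream of channel values.
flat = list(chain.from_iterable(chain.from_iterable(nested)))
```

Whether this counts as "expensive" depends on the image size and frame budget; for per-frame readback it may still be worth profiling against an approach that requests flat data from OpenGL in the first place.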

Is there any reason it’s done this way? Does anyone know if ImageBuff is supposed to accept other arrangements of data?

It’s done that way because native OpenGL requires raw byte arrays, and there was no reason to hide that fact in the API.

OpenGL is an inherently low-level API. Trying to “pythonize” it is misguided, but I guess the PyOpenGL folks would disagree. :smiley:

So if I understand this correctly, ImageBuff.load() expects the data in a low-level format more akin to regular OpenGL in/out operations?