Does any documentation exist that details how ImageBuff wants its data arranged when using ImageBuff.load()?
It seems that it wants a one-dimensional array representing every pixel in the image, like so:
[r1, g1, b1, a1, ..., rn, gn, bn, an]
In fact, bge.texture.imageToArray() returns a bgl.Buffer object with that exact same layout.
I’ve been trying to use some PyOpenGL functions in Blender to allow offscreen rendering of textures, but the OpenGL functions seem to return pixels arranged in multidimensional arrays. Unfortunately, that makes it impossible to render a texture with OpenGL and then use it with the bge.texture module without some expensive remapping of the array data into the flat layout.
Is there any reason it’s done this way? Does anyone know if ImageBuff is supposed to accept other arrangements of data?