Multiple cameras' data to arrays

I plan on making characters, each with a camera for its view; they basically walk around with a camera attached to their heads. I am using the 0.3 alpha because it has the Eevee game render.

Okay so …
I tried some things and got some results, but not the multiple-camera thing I want.

I used bpy's render operator to render non-active cameras and get their data without switching into them. The problem with this is that you have to save the render to a file first, then load the image back; once it is loaded, its pixels sit in a buffer and you can copy them into an np array. bpy does not provide render results live (the render result pixels always read as zero), so you are forced through a save-to-disk round trip every frame, and that costs computer resources. It's just not practical.
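For reference, this is roughly what that round trip looks like. The bpy calls only run inside Blender, so they are shown as comments (and the file path is just an example); the flat pixel list is stubbed here so the NumPy conversion itself is visible:

```python
import numpy as np

# Inside Blender (assumption, runs only in a Blender session):
#   import bpy
#   bpy.context.scene.render.filepath = "/tmp/cam_view.png"   # example path
#   bpy.ops.render.render(write_still=True)
#   img = bpy.data.images.load("/tmp/cam_view.png")
#   pixels = list(img.pixels)   # flat RGBA floats in [0, 1]
# Stubbed flat pixel list standing in for img.pixels:
w, h = 4, 3
pixels = [0.0] * (w * h * 4)

# reshape the flat RGBA data into an (height, width, 4) image array
arr = np.array(pixels, dtype=np.float32).reshape(h, w, 4)
print(arr.shape)  # (3, 4, 4)
```

Even with the conversion being this simple, the disk write/read per frame is the part that kills performance.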

I tried using the bge.texture module to render a non-active camera and get the results with an ImageRender source; that alone does not work. I tried grabbing the results from the non-active camera straight into the array first, but got nothing. I have to apply the texture to another object (render-to-texture) and then record the result with a second camera that is active. Once the texture is set I can get the data, but only while that texture is being viewed live by the active camera.

This is my code:

import bge
import numpy as np

controller = bge.logic.getCurrentController()
own = controller.owner

# get the current scene
scene = bge.logic.getCurrentScene()

# check whether the RenderToTexture property has already been created
if "RenderToTexture" in own:
    # already set up: update the texture every frame
    own["RenderToTexture"].refresh(True)
else:
    # first run: import the VideoTexture module and set everything up
    import VideoTexture

    # look up the camera named in the object's 'cam' property
    cameraName = own['cam']
    cam = scene.cameras[cameraName]

    # get the texture material ID
    matID = VideoTexture.materialID(own, "MA" + own['material'])

    # set the texture
    renderToTexture = VideoTexture.Texture(own, matID)
    # source that replaces the texture on the plane with the dynamic one
    renderToTexture.source = VideoTexture.ImageRender(scene, cam)

    renderToTexture.source.capsize = [30, 30]
    own["RenderToTexture"] = renderToTexture

    # copy the source pixels into an array (.image exposes the pixel buffer)
    arr = np.array(renderToTexture.source.image)

So I have two cameras: a non-active one that grabs the view and renders it onto an object (a wall I made), and an active one mounted in front of the wall that records the texture. It works as long as I am in the active camera's view; the array gets live data. The moment I leave that viewport, it no longer feeds the array.
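One thing that might remove the need for the second camera (this is an assumption on my part, not something I've verified in 0.3): UPBGE's ImageRender.refresh() is documented to accept a buffer, rendering on demand and copying the pixels out without the texture ever being visible on screen. The bge calls are commented out and the buffer is stubbed so the array handling can be seen on its own:

```python
import numpy as np

# Inside the engine (assumption, UPBGE API):
#   src = bge.texture.ImageRender(scene, cam)
#   src.capsize = [30, 30]
#   buf = bytearray(30 * 30 * 4)
#   src.refresh(buf, "RGBA")   # renders offscreen straight into buf
# Stubbed flat RGBA buffer standing in for the render result:
w, h = 30, 30
buf = bytearray(w * h * 4)

# view the byte buffer as a (height, width, 4) uint8 image
frame = np.frombuffer(bytes(buf), dtype=np.uint8).reshape(h, w, 4)
print(frame.shape)  # (30, 30, 4)
```

If refresh-into-buffer works in your build, no wall object and no active viewer camera should be needed at all.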

Maybe there are other methods I don't know of to grab a non-active camera's data (just a camera sitting somewhere in the scene capturing data, without me touching it, setting it active, or entering its viewport).

I want to do this with multiple cameras: collect data from all of them and send it to 2D arrays.
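The multi-camera bookkeeping could look something like the sketch below. The camera names are hypothetical and the bge parts are commented out; dummy buffers stand in for the per-camera render results so the one-2D-array-per-camera step is checkable:

```python
import numpy as np

# Sketch (my assumption of how this could look inside the engine):
#   sources = {}
#   for cam in scene.cameras:
#       src = bge.texture.ImageRender(scene, cam)
#       src.capsize = [30, 30]
#       sources[cam.name] = src
#   then each frame, read np.array(src.image) per camera.
# Stubbed flat RGBA buffers, one per hypothetical head camera:
w, h = 30, 30
camera_names = ["HeadCam.001", "HeadCam.002"]  # hypothetical names
raw = {name: np.zeros(h * w * 4, dtype=np.uint8) for name in camera_names}

# one 2D (grayscale) array per camera: reshape to RGBA, average the colour channels
frames2d = {name: px.reshape(h, w, 4)[:, :, :3].mean(axis=2)
            for name, px in raw.items()}
print({name: a.shape for name, a in frames2d.items()})
```

Keeping one persistent ImageRender per camera (rather than recreating them each frame) should matter for performance, since setup is the expensive part.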

Remember, 0.3 is still highly experimental and stuff might not work.

Do you have an example for 2.5? How could I do it in 2.5?