You have to have the image in some form that Blender's Python can read. This means it should be one of the classes from the bge.texture module (e.g. ImageFFmpeg if reading the image from disk, or ImageRender if reading from an in-game camera). Once you have that, you can convert it to pixels by dumping the data into a buffer with refresh(bytearray_buffer). Or, if you have it bound to a texture, you can use the to_list() function (it still has to be an ImageFFmpeg from disk though). Either way you'll end up with a 1D array of RGBARGBARGBA… Then it's up to you to encapsulate how you want to look it up (by UV, raycast, etc.).
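As a sketch of what that UV lookup could look like: the helper below is hypothetical (only the flat RGBA layout and the refresh(buffer) idea come from bge.texture; the function name and the synthetic 2x2 image are my own for illustration).

```python
def pixel_at_uv(buf, width, height, u, v):
    """Return the (R, G, B, A) tuple at normalized UV coordinates."""
    # Clamp so u = 1.0 or v = 1.0 still lands inside the buffer.
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    i = (y * width + x) * 4          # 4 bytes per pixel: R, G, B, A
    return tuple(buf[i:i + 4])

# In-game you would fill `buf` with something along these lines:
#   from bge import texture
#   img = texture.ImageFFmpeg(path)   # or ImageRender(scene, camera)
#   buf = bytearray(img.size[0] * img.size[1] * 4)
#   img.refresh(buf, "RGBA")
# Here a tiny synthetic 2x2 image stands in for that:
buf = bytearray([255, 0, 0, 255,    0, 255, 0, 255,     # red,  green
                 0, 0, 255, 255,    255, 255, 255, 255])  # blue, white
print(pixel_at_uv(buf, 2, 2, 0.0, 0.0))  # → (255, 0, 0, 255)
print(pixel_at_uv(buf, 2, 2, 0.9, 0.9))  # → (255, 255, 255, 255)
```

The same function works for a raycast-based lookup too: convert the hit polygon's UV to (u, v) first, then index the buffer the same way.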
Be aware that the conversion is not fast at all. It's fine for images from disk because you only convert them to pixels once, but for in-game cameras you can only manage about 256x256 pixels at 60 FPS if you want to have other things running in your game (it's a CPU/GPU transfer bottleneck).
For an example using ImageRender to get pixels from an in-game camera, see my most recent BGMC game (Solar Sailing Simulator). The comment there lies: refresh() doesn't return an array, it simply fills the buffer you pass in. It works exactly the same way for ImageFFmpeg.