Get Pixel coordinates from the rendered image

Hi and welcome to BA!

If you are speaking about Python scripting to go from a 3D vertex to 2D, I am currently having the exact same problem and couldn't find a solution yet. With no refraction it's very easy (using the world matrix of the object and the vertex coordinates). When there is refraction… that is a very good question!
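For the simple case without refraction, here is a minimal sketch of the projection using Blender's Python API (`bpy_extras.object_utils.world_to_camera_view`); the object name is an assumption, adapt it to your scene:

```python
# Minimal sketch (no refraction): project a mesh vertex to 2D pixel
# coordinates of the render. "MyMesh" is a hypothetical object name.
import bpy
from bpy_extras.object_utils import world_to_camera_view

scene = bpy.context.scene
cam = scene.camera
obj = bpy.data.objects["MyMesh"]  # hypothetical object name

# Move the first vertex into world space with the object's world matrix
vert_world = obj.matrix_world @ obj.data.vertices[0].co

# world_to_camera_view returns normalized coordinates:
# x, y in [0, 1] across the camera frame, z = distance from the camera
co_ndc = world_to_camera_view(scene, cam, vert_world)

# Convert normalized coordinates to pixel coordinates of the render
render = scene.render
scale = render.resolution_percentage / 100
px = co_ndc.x * render.resolution_x * scale
py = (1.0 - co_ndc.y) * render.resolution_y * scale  # image y goes top-down
print(f"vertex 0 -> pixel ({px:.1f}, {py:.1f}), depth {co_ndc.z:.3f}")
```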

Well, I use renders done with Blender as much more than "only" 2D images! All the passes (material, object index, depth, etc.) can be very useful in a LOT of cases :slight_smile:

A solution could be to do a first render with a refractive index of 1.0 (no bending of the rays), where the 2D position can be computed directly. Then do the "normal" render with refraction, and use an optical-flow algorithm to track the pixel movement between the two renders to recover the final 2D position.
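A rough sketch of that idea using OpenCV's dense Farneback flow; the file names, the pixel position, and the choice of flow algorithm are all assumptions, not a fixed recipe:

```python
# Track how a pixel moves between the IOR=1.0 render and the refractive render.
import cv2

img_flat = cv2.imread("render_ior_1.png", cv2.IMREAD_GRAYSCALE)    # hypothetical path
img_refr = cv2.imread("render_refract.png", cv2.IMREAD_GRAYSCALE)  # hypothetical path

# Dense optical flow from the non-refractive render to the refractive one
flow = cv2.calcOpticalFlowFarneback(
    img_flat, img_refr, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

# Suppose (px, py) is the 2D position computed in the IOR=1.0 render
px, py = 420, 310                  # example pixel from the first render
dx, dy = flow[py, px]              # displacement at that pixel
print(f"refracted position ~ ({px + dx:.1f}, {py + dy:.1f})")
```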
