3D coordinates texture

Hi,

I would like to get a rather peculiar image - an image where each pixel gets the 3D (x,y,z) coordinates of its corresponding 3D point.
In essence, a mapping from pixels to 3D points.
For example, if a certain pixel is a pixel of some point on a wall, I would like to know the 3D coordinate of the point on the wall.

Is there a way to do this in Blender?

Thanks!

the unwrapping process will do precisely that
each point of the mesh will be assigned a UV point!

what do you want to do with that?

thanks

Maybe I should clarify. What I want is, you could say a ‘3D point cloud’, where each pixel is associated with a 3D point in the cloud. For each pixel, the 3D point is the point in space that the pixel is ‘seeing’.
I’m interested in this because I’m using Blender as a simulator for developing computer vision algorithms (Blender is great for this), and getting this 3D point cloud would be very useful in my work.

Thanks!

Maybe UV map is the solution. I just want, for every pixel, the 3D point of the surface it’s ‘looking at’.
In essence, I want to generate a point cloud from the scene.

This is actually exactly the opposite of the link you gave. There, someone is trying to get a point cloud into Blender. I’m trying to generate a point cloud using Blender.

Thanks! :slight_smile:

Use Cycles
make everything the same material
make the material an emission material
hook the Geometry node’s Position output into the emission colour, strength one
save the result as an EXR
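
For reference, here is a rough bpy sketch of that node setup (this is just my reading of the recipe, assuming Cycles and a fairly recent Blender Python API; the material name and output settings are my own):

```python
import bpy

# Build a material whose emission colour is the world-space position of the shading point.
mat = bpy.data.materials.new("position_as_colour")
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links
nodes.clear()

geom = nodes.new("ShaderNodeNewGeometry")      # Cycles Geometry node, has a 'Position' output
emit = nodes.new("ShaderNodeEmission")
out = nodes.new("ShaderNodeOutputMaterial")

links.new(geom.outputs["Position"], emit.inputs["Color"])
emit.inputs["Strength"].default_value = 1.0
links.new(emit.outputs["Emission"], out.inputs["Surface"])

# Assign it to every mesh so the whole scene renders position-as-colour.
for obj in bpy.data.objects:
    if obj.type == "MESH":
        obj.data.materials.clear()
        obj.data.materials.append(mat)

# Render to a float EXR so negative or large coordinates are not clipped.
scene = bpy.context.scene
scene.render.image_settings.file_format = "OPEN_EXR"
scene.render.image_settings.color_depth = "32"
```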

Hi,
I want to achieve the exact same thing. In principle your idea works great, but it is not exactly what I need. This way you only get a representation of the object’s coordinates painted onto its faces. What we need is the correspondence between camera pixels and the object faces. Maybe we can use your idea for the visual representation of the ground truth data we actually need. Each pixel position needs a 3D coordinate assigned. There are a bunch of solutions out there using vertex coordinates, but that is also not enough.

Examples on ground truth data here: http://origin-ars.els-cdn.com/content/image/1-s2.0-S0924271611000748-gr3.jpg
http://projects.asl.ethz.ch/datasets/lib/exe/fetch.php?cache=&w=900&h=539&media=laserregistration:gazebo_winter:gazebo.png

I guess this is not too easy. I already searched for hours. This probably needs to be implemented first. Does anybody have an idea on how to tackle this?
Thanks

please feel free to erase - double post, sorry

If I understand correctly, you want to shoot a bunch of rays from the camera through the viewport (one per pixel) into the scene and get the coordinates where each ray hits something? This data should exist somewhere inside Blender, but I suspect it is not what usually gets rendered… maybe the Cycles Camera Data node outputs it?
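
Something along these lines could be scripted directly, bypassing the renderer. A minimal sketch assuming a recent Blender Python API (the ray_cast signature and the view_frame corner order have changed between versions, so check your release; resolution and output path are placeholders):

```python
import bpy
import numpy as np

scene = bpy.context.scene
cam = scene.camera
depsgraph = bpy.context.evaluated_depsgraph_get()

width, height = 64, 48                      # keep small: this does one ray cast per pixel
cam_matrix = cam.matrix_world
origin = cam_matrix.translation

# view_frame() gives the four corners of the camera frustum in camera space;
# the corner order below is the commonly used one, but verify it in your build.
top_right, bottom_right, bottom_left, top_left = cam.data.view_frame(scene=scene)

points = np.full((height, width, 3), np.nan)   # one world-space XYZ hit point per pixel
for y in range(height):
    for x in range(width):
        u = (x + 0.5) / width
        v = (y + 0.5) / height
        # Bilinear interpolation across the frame corners, then transform into world space.
        target = top_left.lerp(top_right, u).lerp(bottom_left.lerp(bottom_right, u), v)
        direction = (cam_matrix @ target - origin).normalized()
        hit, location, normal, index, obj, matrix = scene.ray_cast(depsgraph, origin, direction)
        if hit:
            points[y, x] = location             # the coordinate this pixel "sees"

np.save("/tmp/hit_points.npy", points)          # placeholder output path
```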

@mishung

I think what doublebishop suggests actually gives the result you want: the shading hit point, not the object location. But world space is huge, so unless you use a mapping node or math nodes to normalize to the size you want, you will probably see only one colour per quadrant in the viewport on small objects, because the difference in colours is too small to be seen in the low dynamic range. Hence his suggestion of an .exr, which you can tone map after rendering.

Edit:

Docs here: http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Nodes/More

suggest that Geometry -> Position returns the position of the shading point in world space, which I think is what you want, but in my experiments it only gives meaningful results in the quadrant where the world coordinates are all positive.

Perhaps there is a way to do this with an OSL shader?

The idea with rays is interesting. There must be some way to access that data. I guess I will have to dig deeper into the rendering process. I also tried the Cycles Camera Data node in the node editor (also subtracting it from the global coordinates). I’m just not sure how to interpret my result, but it does not look like the image I want. If you have a cube, the faces should fade from white (closest to the camera) to black the further they are from the camera. I can do this sort of visual representation with Matlab later on. I guess extracting the matrix will be the hard part.
Maybe it boils down to using lighting or rays as you proposed in some way.

Perhaps I am not fully understanding what you are after. Below is a screengrab of the node tree I have been experimenting with. It shades all objects with a material based on the XYZ coordinate of the shading hit point. I added a mapping node to scale things a bit so the object does not go full white at (1,1,1) in the low dynamic range viewport. You would have to adjust it to the size of your scene or tone map the EXR, and possibly transform the coordinates as well so your objects are all in the positive quadrant.

If you want white to black based on camera depth, there is already the Z pass.


If it is any use, blend file is here:

http://www.pasteall.org/blend/24919

Or perhaps this is what you mean: get the same result in camera space?


Now I’m confused - do you want an image or a point cloud as the output? They are not quite the same. You could treat each hit point’s XYZ as RGB, if it is somehow normalized, and view it as a 2D image, still…
Image.

If you just want depth for each pixel then use the mist pass; that will output a 0-1 black-to-white image… it won’t give you x,y,z coordinates, but it will give you depth.
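
A rough sketch of writing that out via the compositor (assumes a Blender 2.8x+ Python API; in older builds the passes live on scene.render.layers and the depth socket is called "Z" instead of "Depth"; the output path is a placeholder):

```python
import bpy

scene = bpy.context.scene
view_layer = scene.view_layers[0]
view_layer.use_pass_z = True        # raw depth
view_layer.use_pass_mist = True     # normalized 0-1 depth, as described above

# Route the render layer's depth into a file output node saving float EXRs.
scene.use_nodes = True
tree = scene.node_tree
rl = tree.nodes.new("CompositorNodeRLayers")
out = tree.nodes.new("CompositorNodeOutputFile")
out.base_path = "/tmp/depth"        # placeholder output directory
out.format.file_format = "OPEN_EXR"
out.format.color_depth = "32"
tree.links.new(rl.outputs["Depth"], out.inputs[0])
```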

I also want to use this to calibrate a camera. For that I need the 3D data for my 2D image. The best way to do this would be to extract a matrix (rows = image height in pixels, columns = image width in pixels) that stores, for each pixel, the hit point on the 3D objects in camera coordinates (with the camera as the origin). 3D world coordinates are also possible, but then I need to recalculate them relative to the camera position and orientation. So I don’t really need a point cloud.
Sorry for the confusion - the image I described before is just a visual representation.
It is really hard to describe, I’m still not sure if I know exactly what I want ;-).
Thanks guys for your ideas and your support so far.
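
If that per-pixel matrix of world-space hit points can be obtained (e.g. with the ray-casting sketch earlier in the thread), converting it to camera coordinates is just a transform by the inverse camera matrix. A minimal sketch, with my own function name and array layout (a (height, width, 3) array of world-space points); note Blender cameras look down -Z, with +X right and +Y up:

```python
import bpy
import numpy as np
from mathutils import Vector

def to_camera_space(points_world, camera):
    """Re-express per-pixel world-space hit points in the camera's own frame."""
    world_to_cam = camera.matrix_world.inverted()
    points_cam = np.full_like(points_world, np.nan)
    height, width, _ = points_world.shape
    for y in range(height):
        for x in range(width):
            if not np.isnan(points_world[y, x, 0]):   # pixel actually hit something
                points_cam[y, x] = world_to_cam @ Vector(points_world[y, x])
    return points_cam

# e.g. points_cam = to_camera_space(points, bpy.context.scene.camera)
```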