How to generate coordinates for an HTML image map

I am using a Python script to create an effect that looks like a pile of photos lying on a floor. The photos are just cubes with image textures applied. I would like to use the resulting render on a web page with an HTML image map, so that each photo in the render links to its original image. I believe this would entail mapping the 3D coordinates of the objects in the render into the 2D space of the camera view, and also taking into account objects that are partially occluded by other objects. This is beyond my ability to figure out on my own. I know this may be a bit of an undertaking, so I don’t expect a complete solution, but if someone could at least point me in the right direction, I would be grateful.

For a little more context, here is a link to view a mockup of the site design with the render at the bottom: http://db.tt/WsQ5Cir (the idea is to make each photo on the floor into a link using an image map.)

Why don’t you use an online image map generator for this image?

Oops, I realize I could have been clearer. This render will be regenerated every day with new images, and the positions of the photos will be randomized, so generating the image map has to be automated.
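
To make it concrete, the output I need to produce each day is just a <map> block with one <area> per photo (and the render itself gets usemap="#photo-pile"). Writing that block out is the easy half; here is a rough sketch with made-up filenames and coordinates. The part I’m asking about is how to get those pixel coordinates out of Blender automatically:

# Sketch of the output step only: given each photo's outline in render
# pixels (the part I don't know how to get yet), write the image map.
# Filenames and coordinates below are made up.
photos = [
    ("photos/img_001.jpg", [(102, 340), (260, 355), (248, 470), (95, 452)]),
    ("photos/img_002.jpg", [(400, 120), (530, 110), (545, 230), (410, 245)]),
]

lines = ['<map name="photo-pile">']
for href, corners in photos:
    coords = ",".join("%d,%d" % (x, y) for x, y in corners)
    lines.append('  <area shape="poly" coords="%s" href="%s">' % (coords, href))
lines.append('</map>')
print("\n".join(lines))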

a very simple idea: from a script you could join the meshes, create a UV layout projected from the camera view, and export that layout as SVG… that could be a start?

I’ve been doing some more research, and it seems I need to learn how to project world coordinates onto the camera plane. I found this 2.49 .blend which I hope will be a good start. The code looks like this:



# ----------------------------
# http://blenderunderground.com
# - projects vertices from a
#   cube onto a plane
# ----------------------------

from Blender import *
from Blender.Mathutils import *
from math import *

FOV = 90

# ----------------------------
def clamp( val, min, max ):
	if val < min: return min
	if val > max: return max
	return val
# ----------------------------

source = Object.Get('Cube')		# Source cube
dest = Object.Get('Cube-Proj')	# Projected cube
screen = Object.Get('Plane')	# Screen surface

# distance calculation
radfov = pi * FOV / 180
dist = screen.LocY/tan(radfov/2) 

# get the mesh data
smesh = source.data				  # read only
dmesh = dest.getData(False, True) # write

# we need the object's rotation
#  so we can rotate the vertices
euler = source.getEuler()

# step through vertices
for v in range(8):
	
	vertex = Vector(smesh.verts[v]) # source cube
	
	# use euler angles to build rotation matrix
	rotx = Matrix([1,0,0], [0,cos(euler.x),sin(euler.x)], [0,-sin(euler.x),cos(euler.x)])
	roty = Matrix([cos(euler.y),0,-sin(euler.y)], [0,1,0], [sin(euler.y),0,cos(euler.y)])
	rotz = Matrix([cos(euler.z),sin(euler.z),0], [-sin(euler.z),cos(euler.z),0], [0,0,1])
	rotationMatrix = rotx * roty * rotz

	# build scale matrix
	scaleMatrix = Matrix([source.SizeX,0,0], [0,source.SizeY,0], [0,0,source.SizeZ])
	
	# concatenate matrices
	worldmatrix = scaleMatrix * rotationMatrix
	
	# transform vertex
	#  this provides a set of
	#  verts in world space
	vnew = vertex * worldmatrix
	
	# manual translation
	dx = vnew.x + source.LocX
	dy = vnew.y + source.LocY
	dz = vnew.z + source.LocZ
	
	# keep origin collisions from 
	#  getting too ugly
	dy = clamp(dy, 0.01, dy)
	
	# manual projection
	nx = -dist * dx / dy
	ny = screen.LocY
	nz = -dist * dz / dy
	
	# output transformed vertices
	dmesh.verts[v].co.x = nx
	dmesh.verts[v].co.y = ny
	dmesh.verts[v].co.z = nz

Can anyone tell me if this is the most straightforward way to approach it?

Now that I knew the right terms, I did some more Googling and found a solution for 2.5 by balintfodor. This code takes a point in world coordinates and projects it onto the camera plane. So, more or less, it tells me “this vertex ended up at this pixel location in the final render”, which is exactly what I need in order to generate an image map.


import bpy 
from mathutils import * 
from math import * 

# Getting width, height and the camera 
scn = bpy.data.scenes['Scene'] 
w = scn.render.resolution_x*scn.render.resolution_percentage/100. 
h = scn.render.resolution_y*scn.render.resolution_percentage/100. 
cam = bpy.data.cameras['Camera'] 
camobj = bpy.data.objects['Camera'] 

# Some point in 3D you want to project 
v = Vector((-2.433,-1.336,0.0009908)) 

# Getting camera parameters 
# Extrinsic 
RT = camobj.matrix_world.inverted() 
# Intrinsic 
C = Matrix().to_3x3()
C[0][0] = -w/2 / tan(cam.angle/2) 
ratio = w/h 
C[1][1] = -h/2. / tan(cam.angle/2) * ratio 
C[0][2] = w / 2. 
C[1][2] = h / 2. 
C[2][2] = 1. 
C.transpose() 

# Projecting v with the camera 
p = (v * RT) * C 
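# perspective divide: p becomes (x_pixel, y_pixel, 1), with y measured
#  from the bottom of the image; the print below flips it to a top-left origin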
p /= p[2]

print(p[0], h-p[1])

Also, in my initial post I was concerned about accounting for occluded objects. I did some reading about image maps and found out that it’s fine for clickable areas to overlap; the area defined first takes precedence. So, when gathering the coords for the image map, I’ll just loop through the objects from the top of the pile down, and the overlaps won’t matter.
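
So the rough plan, building on the projection code above, is something like this (untested sketch; the ‘Photo’ naming scheme and using the object’s Z location to decide what’s on top are just assumptions for illustration):

import bpy
from mathutils import Matrix, Vector
from math import tan

scn = bpy.data.scenes['Scene']
cam = bpy.data.cameras['Camera']
camobj = bpy.data.objects['Camera']

w = scn.render.resolution_x * scn.render.resolution_percentage / 100.
h = scn.render.resolution_y * scn.render.resolution_percentage / 100.

def project(v):
    # same projection as balintfodor's code above: world point -> render pixel
    RT = camobj.matrix_world.inverted()
    C = Matrix().to_3x3()
    C[0][0] = -w/2 / tan(cam.angle/2)
    C[1][1] = -h/2. / tan(cam.angle/2) * (w/h)
    C[0][2] = w / 2.
    C[1][2] = h / 2.
    C[2][2] = 1.
    C.transpose()
    p = (v * RT) * C
    p /= p[2]
    return p[0], h - p[1]

# assumption: the photo cubes are named 'Photo.001', 'Photo.002', ...
photos = [ob for ob in scn.objects if ob.name.startswith('Photo')]
# top of the pile first, so earlier (higher) photos win where areas overlap
photos.sort(key=lambda ob: ob.location.z, reverse=True)

areas = []
for ob in photos:
    # project the 8 bounding-box corners and keep the 2D bounding rectangle
    pts = [project(ob.matrix_world * Vector(c)) for c in ob.bound_box]
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    # the href scheme is made up; really I'd look up the source image per cube
    areas.append('<area shape="rect" coords="%d,%d,%d,%d" href="photos/%s.jpg">'
                 % (min(xs), min(ys), max(xs), max(ys), ob.name))

print('<map name="photo-pile">')
print('\n'.join(areas))
print('</map>')

Rectangles are only an approximation of the rotated photos, but since overlapping areas are allowed, that seems good enough for clickable regions.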

Hey liero,

Thanks for the reply. I actually just noticed it.

I’m not sure I understand what you’re suggesting (I’m fairly new to Blender and haven’t gotten into UVing yet). Could you elaborate a bit more?

just imagined getting the 2D data from a projected UV - I think you can link a face id with its coordinates that way, but I never really tried it, and it doesn’t matter now that you’ve solved it… you could even export the layout as SVG to process outside Blender

update: here is a minimal test… I would first create a mesh with one face from each cube, then generate the UV projection and get the UV coords for the faces with the script - it is just another approach :wink:
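
the reading-back part would be something along these lines (untested, 2.5 API, just to show the idea - assumes the joined mesh is the active object and already has a UV layer projected from the camera view):

import bpy

ob = bpy.context.active_object
me = ob.data
uvdata = me.uv_textures.active.data  # the UV layer projected from the camera view

scn = bpy.context.scene
w = scn.render.resolution_x * scn.render.resolution_percentage / 100.
h = scn.render.resolution_y * scn.render.resolution_percentage / 100.

for face in me.faces:
    # UV coords are 0..1; scale to render pixels and flip y for image coords
    uvs = [uvdata[face.index].uv[i] for i in range(len(face.vertices))]
    coords = ['%d,%d' % (u * w, (1 - v) * h) for u, v in uvs]
    print(face.index, ' '.join(coords))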