New align camera to face script

OK, I needed this function.

So I wrote BCFA 0.0.1 (Blender Camera to Face Aligner)

Find it and instructions here:
http://www.alienhelpdesk.com/index.php?id=20

It basically aligns the camera perfectly to any selected faces on an object.

I’m going to use it to do stuff like baking AO into UV textures, but I’m in the middle of a project, so I’ll show results and write a tut later on, when I have results.

Hope it’s useful for other people as well :wink:

How is this different from the following method?

  1. Select faces.
  2. Shift+V -> Top
  3. Ctrl+Alt+Numpad 0

Ehm… well, first off, I didn’t know that was possible. As with so many features in Blender that aren’t directly available in buttons or menus, I had no idea.

The big difference appears to be that the script keeps the camera “upright” and allows you to set a distance which can help if you need to do it over and over.

The upright part makes a real difference, since Shift+V often gives me an upside-down view.

But you’re right that it’s not all that far from this… thanks for showing me those functions, Egg.

The main difference is evident: it’s a script, so you can use it to do many things automatically (if you know Python) :wink:

I personally think it would be nice to use it to create normal mappings for UV maps.

Simple test: created a cube, went into face select mode, selected a face away from the camera, ran the script with the default distance, and pressed “R” (reset). Got the following error in the console (using the latest CVS build):

Traceback (most recent call last):
  File "<string>", line 249, in bevent
  File "<string>", line 145, in script
AttributeError: 'NoneType' object has no attribute 'faces'

This, using the 2.36 release:

Traceback (most recent call last):
  File "\Program Files\Blender Foundation\Blender.blender\scripts\BCFA-0.0.1.py", line 249, in bevent
    script()
  File "\Program Files\Blender Foundation\Blender.blender\scripts\BCFA-0.0.1.py", line 145, in script
    for f in me.faces:
AttributeError: 'NoneType' object has no attribute 'faces'

Just thought you would like to know. And yes, I have a full Python install. Windows ME, AMD Athlon 1 GHz, 128 MB RAM, NVidia GeForce 2 graphics card.

That error I’ve only encountered when there is no object selected.

But it could also mean that your object and its relevant mesh name don’t match. Look under your edit buttons and check whether the name in the OB field is the same as in the ME field. I’ll see if I can fix it so they don’t have to match, but in the meantime they do.
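
In the meantime, a hedged guess at a workaround for anyone hitting this: fetch the mesh through the object instead of by name. A minimal sketch, assuming the 2.3x Python API (Object.GetSelected and getData do exist there; the popup and loop body are just illustrative):

    import Blender

    obs = Blender.Object.GetSelected()
    if not obs:
        Blender.Draw.PupMenu("Error%t|Select a mesh object first")
    else:
        # getData() returns the object's own mesh data, so the OB and
        # ME names no longer need to match. NMesh.GetRaw(ob.name)
        # returns None whenever they differ, which would give exactly
        # this 'NoneType' object has no attribute 'faces' traceback.
        me = obs[0].getData()
        for f in me.faces:
            pass  # per-face work goes here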

Oh, and I forgot to mention… the script takes into account the location of the object, but not the size and rotation; that’d be too complicated to code. So if it’s a heavily edited object, apply size and rotation first.
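
For the curious, here’s a plain-Python sketch of what taking the full matrix into account would mean. It assumes old Blender’s convention of row-major 4x4 matrices with the translation in the fourth row; everything here is illustrative, not the script’s actual code:

    def transform_point(mat, v):
        # mat: 4x4 object matrix as nested lists (row vectors,
        # translation in row 3); v: local-space vertex (x, y, z).
        x, y, z = v
        return tuple(x * mat[0][c] + y * mat[1][c] + z * mat[2][c] + mat[3][c]
                     for c in range(3))

    # A matrix that only translates by x = 5 (no size/rotation):
    loc_only = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [5, 0, 0, 1]]
    print(transform_point(loc_only, (1, 2, 3)))  # (6, 2, 3)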

Actually I’m thinking about coding in a function where the distance is calculated so that the faces perfectly fill the rendered area…

Also… I’ve come up with an idea in which I can use this script to “bake” shadows and such… but more on that later :wink:

It would be useful if somebody could eventually implement full (and predictably working) lightmapping / texture baking in Blender, from within the program, not as a script.

LOL well yeah tedi… it would… and it probably will but only when a coder actually sits down and does it.

And I don’t think such a script actually exists yet… Also, if such a script is written in Python, it would perhaps be easier for a coder to integrate.

well … yeah, indeed :stuck_out_tongue:

Until that happy moment: if memory serves me well, LightWave Discovery Edition allows saving rendered/baked illumination of UV-mapped objects without watermarks etc. …

OK, update.

I’m trying to make the script automatically distance the camera so that the furthest selected vertex is just inside the frame… and I’m having trouble.

So… anyone want to help?

I played a bit with the old Blender lamp-to-vertex-colors functionality, and basically it would be cool if something existed to:

  • translate this to second-UV lightmaps that engines can read
  • or bake this into a new texture (or set of textures), sort of like LW does - a basic texture with additional shading according to the light data

or (God, can you hear me, please) something like Gile[s] …

Well, basically, to make this little thingy I wrote, I need to figure out three things:

1. How to make the camera distance correct automatically.
2. How to set the proper UV coords (should be easy once I know 1).
3. How to combine a lot of rendered images into one image (like the envmap does now).

<cough> well could you add to your script just these steps:

(provided the user has already pressed make uvcol in F9)

  • collect and move the lights to an invisible layer

  • make the mesh’s material use uvcol-light

  • take the pretty picture

  • restore the lights etc. … clean up after itself … kiss the mom …

</cough>

OK, now that I’ve read this, I’ve figured out what you meant to do in your other post on the developer forums. Here’s what I’d do:
1: set up an images folder to store images for all the individual faces
2: render all faces, keeping track of the vertex positions on the rendered image (for later UV mapping)
3: build a set of meshes, one per image/face, with the coordinates of the vertices set according to the UV coordinates of the original mesh
4: UV map all of them with the respective texture and coordinates from point 2
5: re-use the portion of the script that aligns the camera to the face and render the ‘composite’ mesh that you obtain this way. This should be the finished normal/light map.

Ideally you would want to use an orthogonal camera for all operations rather than a perspective camera, and for the last render you would want a sun lamp pointed directly at the mesh collection. To align the last camera you could temporarily build a face on the same plane as the mesh collection, use the aligning script, and delete the temporary face.
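
For step 3, here’s a plain-Python sketch of what I mean by using the UVs as flat-plane coordinates (made-up names, just the idea):

    def faces_to_uv_layout(face_uvs):
        # face_uvs: one list of (u, v) pairs per face.
        # Returns flat-plane (x, y, z) vertex coords per face, so a
        # straight-down ortho render reassembles the per-face tiles
        # into one finished map.
        return [[(u, v, 0.0) for (u, v) in uvs] for uvs in face_uvs]

    # e.g. a single quad mapped to the full 0..1 UV square:
    print(faces_to_uv_layout([[(0, 0), (1, 0), (1, 1), (0, 1)]]))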

tedi:

I’m going to add a function that will allow you to pick which layer(s) you want to render. I personally always put all my lights on a layer separate from my meshes anyway (always a good idea). You can do UVcolLight yourself easily enough. What you suggest can be done, but I don’t think it’s the most important thing right now.
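
Since the layer picker came up: Blender keeps the 20 layers as a bitmask internally, so the arithmetic for one is tiny. A plain-Python sketch (the function name is made up):

    def layers_to_mask(layer_numbers):
        # Layer n (1-based, 1..20) corresponds to bit 1 << (n - 1).
        mask = 0
        for n in layer_numbers:
            mask |= 1 << (n - 1)
        return mask

    print(layers_to_mask([1, 11]))  # 1025: layers 1 and 11 set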

Brandano:

Yes you get it now :wink: And yes that could be a method.

Specifically:

  1. Yes, I’ll put in a box where you can select the file name/location/type for storing the images; that’s easy enough.

  2. The thing is… I’m working on this one, and what I need for it is each vertex’s position relative to the camera (I have them relative to the “focus point” of the camera, and I have the camera angles). Now I need to figure out the camera’s distance to its focus point, and I just haven’t found the correct math for it, especially since every variable can be positive or negative.

The problem is that I keep coming up with a long list of mathematical equations that I can’t test for a result halfway down the list… so I never know where exactly I went wrong.
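
For anyone who wants to chew on it, here’s a rough plain-Python statement of the problem as I understand it. It assumes a square render and the old 32mm-film camera model, where the tangent of the half-angle of view is 16/lens, and it assumes the vertex coordinates are already camera-aligned and relative to the focus point; all the names are made up:

    def camera_distance(verts, lens=35.0):
        # verts: (x, y, z) per vertex, x right, y up, z along the view
        # axis toward the camera, relative to the focus point.
        t = 16.0 / lens  # tan(fov/2) for the old 32mm camera model
        # Each vertex needs the camera at least max(|x|,|y|)/t in
        # front of it; take the worst case over all vertices.
        return max(max(abs(x), abs(y)) / t + z for (x, y, z) in verts)

    # e.g. a unit square in its own plane, centred on the focus point:
    sq = [(-.5, -.5, 0), (.5, -.5, 0), (.5, .5, 0), (-.5, .5, 0)]
    print(camera_distance(sq))  # ~1.09 with the default 35mm lens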

The remainder comes later… I really need to figure out step 2

I agree that it should use an ORTHO camera; I came to that conclusion too.

If I were to go for that “final render” in the way you suggest, I think it’d be best to use no lamps and just a full emit value.

Nice script!

BTW, have you got time to have a look at BGC updates :wink: ?

I have a suggestion!

Make a ‘place the camera so that the whole scene is seen’ script.

I mean, the camera is oriented by the user, then the script moves (translates) it so that all objects in a selected set of layers are in the field of view.

Stefano

Thanks, Stefano.

well… I’m kind of working obsessively on this one right now hehe. I can’t seem to let it go. And I’m overlooking something really simple… I just know it.

So I’ll have a look at the updates after this one… and nice idea… we’ll look at that as well.

I was looking for you in the chat channels cause I think you can probably solve the issue I’m trying to solve quite easily. If you have the time to help out it’d be great… if you don’t I understand.

Hey, I’m really interested in the development of this script, and was wondering whether or not you’ve been able to figure out “step 2.” Something you said over at blender.org made me think that perhaps you had.

Anyway, I’m no mathematician, but I think I have a solution to the problem (although it’s far from elegant, and I would assume there’s a better way).

Basically, I would translate and rotate the face-points so that the normal was parallel with the z-axis and the “focus point” was located at the origin. This is just so I can work with the familiar “x/y” coordinate system; there’s probably a way to work with an x/y coordinate system without transforming the points.
Then, I would test the x and y values for their maxima/minima. Subtracting the x-min from the x-max would give me the “length” of the face(s), and subtracting the y-min from the y-max would give me the “height.” If we are rendering a square, orthogonal image, then the distance of the camera could be found by:

scalar = max(length, height)
lens = camera lens value
distance = scalar * lens * PI / 100

Each vertex’s position on the rendered image could then (theoretically) be found by:

scalar = max(length, height)
resolution = image’s resolution
vert_x, vert_y = vertex’s x coordinate, vertex’s y coordinate
imagex = (vert_x + length/2) / scalar * resolution
imagey = (vert_y + height/2) / scalar * resolution
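
Or, in runnable form (the same thing, just as a sketch):

    def vert_to_pixel(vert_x, vert_y, length, height, resolution):
        # Map a vertex in the face's plane to pixel coordinates on a
        # square rendered image, per the formulas above.
        scalar = max(length, height)
        imagex = (vert_x + length / 2.0) / scalar * resolution
        imagey = (vert_y + height / 2.0) / scalar * resolution
        return imagex, imagey

    # e.g. a 2x1 face rendered to a 256px square image:
    print(vert_to_pixel(1.0, 0.5, 2.0, 1.0, 256))  # (256.0, 128.0)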

Anyway, you may have already figured this out, and I’m surprised none of the math gurus have been attracted to this problem (or maybe it’s already been solved and I missed it), but I thought I’d try to do a little to help you out.

Keep up the good work!

Levi

Levi: thanks for thinking this through with me… I really appreciate it. And I’m amazed no one’s actually coded something like this before either.

I’ve already solved the camera distance/angle issues.

The way you’ve solved it looks correct for the old ortho camera. The ortho camera has been recoded for 2.37 and has become a true ortho camera, with a new Scale setting as well; you can read about it at blender3d.org. It basically makes the distancing different: it kind of turns the camera into a flat plane that looks along its axis. It is already in the builds you can get at the blender.org forums. In Python, setScale apparently sets the new variable.

So yeah, it’s solved, and in a better way than could have been done in 2.36.
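
To give an idea, with the new ortho camera the framing reduces to something like this. A sketch only: setScale is the call mentioned above, the extents are made-up numbers, and the rest of the 2.37 API calls are my best guess:

    import Blender

    width, height = 2.0, 1.0  # extents of the selected faces in the camera plane (made up)
    cam = Blender.Camera.Get("Camera")  # the camera datablock, by name
    cam.setType("ortho")                # make sure it's the new true ortho camera
    cam.setScale(max(width, height))    # Scale = widest extent, so the faces just fill the frame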

Right now (at this moment) I’m cleaning up the code a bit, making it work as a “per face” loop, and finding solutions to minor issues (the OB name not matching the ME name and such).

The big trick will be taking all the rendered square “tiles” and putting them into one big image. I’ll probably end up using the method that ideasman wrote into his blenderfarm script (making a new scene with the images as textures on some planes).
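
The tiling arithmetic itself is simple enough; here’s a plain-Python sketch of face number to pixel offset in one big square image (names made up). Keeping the tiles keyed to the face number like this is also what would make the partial re-render idea below work:

    import math

    def tile_offset(face_nr, n_faces, atlas_size):
        # Pack n_faces square tiles into one atlas_size x atlas_size image.
        per_row = int(math.ceil(math.sqrt(n_faces)))  # tiles per row/column
        tile = atlas_size // per_row                  # pixel size of one tile
        row, col = divmod(face_nr, per_row)
        return col * tile, row * tile, tile           # x offset, y offset, tile size

    # e.g. 10 faces packed into a 1024px atlas (4 per row, 256px tiles):
    for nr in range(10):
        print(nr, tile_offset(nr, 10, 1024))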

I’m thinking about further refinements as well, like a “selected faces only” render, an “all objects in scene” render, an “all selected objects” render, a “per vertex group” render, and an “all objects on layer” render. I think it should also be possible to render the UV for an entire object, then in a later stage (if all you did was move a few verts) re-render only the altered faces by selecting them. If the tiles are kept with numbers relative to the face number, that should be possible.

Sorry about the amount of text.

First the cleanup, and tonight I hope to finish the UV coordinate code (relative to the number of faces) and the tile render code.