[Addon] Camera matching add-on for modeling based on photographs

Use the shortcut key "N" to toggle the Properties shelf on the right side; it's at the top of that shelf.

Thank you, I was not able to find it :slight_smile:

Hello, here you can find two tests I've made using the BLAM add-on.
The interface is (now…) easy to understand and it seems to work flawlessly, but the calibration I get seems wrong.

I've performed two tests. The first one uses a real photo; it has only one vanishing point, but I get an incorrect camera rotation, at least as far as I can guess.

The second test uses a reverse process.
It is based on a render of 4 simple coplanar planes.
Also in this case (without lens distortion or anything else) the calibration I get seems really imprecise.
Can you give me some suggestions?
Am I drawing the lines with the grease pencil in a bad way?
Thank you for your help.


Hi,

I'm having a hard time trying to reconstruct a house (a solid similar to the one in the BLAM tutorial).
Focal length = 22, origin set to the camera, and I still get errors:

Traceback (most recent call last):
  File "C:\Program Files\Blender Foundation\Blender\2.62\scripts\addons_contrib\blam.py", line 729, in execute
    self.performSimpleProjection(camera, mesh, img)
  File "C:\Program Files\Blender Foundation\Blender\2.62\scripts\addons_contrib\blam.py", line 588, in performSimpleProjection
    self.addUVsProjectedFromView(mesh)
  File "C:\Program Files\Blender Foundation\Blender\2.62\scripts\addons_contrib\blam.py", line 567, in addUVsProjectedFromView
    assert(len(mesh.data.faces) == len(mesh.data.uv_textures[0].data))
AttributeError: 'Mesh' object has no attribute 'faces'

location:<unknown location>:-1

location:<unknown location>:-1
Traceback (most recent call last):
  File "C:\Program Files\Blender Foundation\Blender\2.62\scripts\addons_contrib\blam.py", line 1307, in execute
    if len(self.mesh.data.faces) == 0:
AttributeError: 'Mesh' object has no attribute 'faces'

location:<unknown location>:-1

location:<unknown location>:-1

Moreover:

  1. Despite my defining a lot of X and Y guide lines, the recovered vanishing points give a perspective that is too narrow (too dynamic) compared with what the image actually shows.
  2. The BLAM menu appears in the Movie Clip Editor, not in the UV/Image Editor.

What am I doing wrong?

For 1: a lot? I just use two lines for each of the chosen directions (X, Y, or Z as an alternative). Maybe the script at some point takes the median of all the lines, so the inaccuracies add up, or it simply gets misled by lens distortion.
For 2: this is the intended way; it was changed to be there.
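
For what it's worth, here is a minimal sketch (my own illustration, not BLAM's actual code) of how a vanishing point can be estimated as the intersection of two guide lines in homogeneous coordinates; the pixel coordinates are made up. With only two strokes per direction, any inaccuracy or lens distortion in those strokes shifts the intersection directly, which could explain an imprecise calibration:

def line_through(p, q):
    # Homogeneous line through two 2D image points (a cross product with w = 1).
    (x1, y1), (x2, y2) = p, q
    return (y1 - y2, x2 - x1, x1 * y2 - x2 * y1)

def intersect(l1, l2):
    # Intersection of two homogeneous lines (again a cross product).
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    w = a1 * b2 - a2 * b1          # zero if the image lines are parallel
    return ((b1 * c2 - b2 * c1) / w, (a2 * c1 - a1 * c2) / w)

# Two strokes drawn along scene edges that are parallel in 3D:
vp = intersect(line_through((120, 80), (400, 150)),
               line_through((130, 500), (420, 420)))
print("estimated vanishing point:", vp)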

Bump.
Please, can somebody give me some suggestions on how to get a correct calibration with BLAM?
I am not able to understand why I get such a bad calibration even when I use a very simple model as a test.
Thank you for your help.
Bye.

First of all, make sure that the image you use is not cropped.

What's the difference? It's still a 2D bitmap, and when you draw the grease pencil guide lines they extend beyond the picture in most cases anyway.
So the vanishing points usually stay far away :wink:

I did it as suggested (the simpler way): no change. What is still unclear to me is why (and, most importantly, HOW?!) a solid can be reconstructed precisely using hidden edges, as shown in the BLAM tutorial.

I have the same problem (the first one, with the traceback error). Can someone please help me? What am I doing wrong?

Hi, since the add-on lives in the Movie Clip Editor, why not use tracking points instead of grease pencil lines, which are hard to control, adjust, and fiddle with? For me this is the biggest (maybe the only) drawback of this script at the moment.
I have a code snippet for that if it helps.

@bluecd You may want to try this test image and see if you get accurate results. If you do, the problem is most likely in the image you're using (it could be asymmetrically cropped, warped, etc.).
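
To make the cropping point more concrete: the usual two-vanishing-point focal length estimate assumes the principal point sits at the image centre, and an asymmetric crop moves the true principal point away from the new centre. Here is a small sketch of that standard formula (just the textbook single-view geometry, not the exact code in blam.py), with made-up numbers:

import math

def focal_from_vanishing_points(v1, v2, principal_point):
    # For the vanishing points of two orthogonal scene directions,
    # (v1 - p) . (v2 - p) = -f**2 when p is the principal point.
    (x1, y1), (x2, y2), (px, py) = v1, v2, principal_point
    d = (x1 - px) * (x2 - px) + (y1 - py) * (y2 - py)
    if d >= 0:
        raise ValueError("vanishing points inconsistent with this principal point")
    return math.sqrt(-d)

# Uncropped image: the principal point is taken at the image centre.
width, height = 1920, 1080
print(focal_from_vanishing_points((3200.0, 540.0), (-900.0, 620.0), (width / 2, height / 2)))
# After an asymmetric crop the image centre is no longer the principal point,
# so feeding the cropped centre into the same formula yields a wrong focal length.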

@aakk1122006 (and others getting the same error) If you're using a recent Blender version with bmesh, I'd say that's the cause of the crash. I'll try to fix that before the release of 2.63.
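
In the meantime, a hypothetical local workaround (untested, just a sketch of the idea) would be to fall back to the renamed attribute wherever faces are accessed, since the bmesh merge splits Mesh.faces into Mesh.polygons and Mesh.tessfaces:

def mesh_faces(obj):
    # Hypothetical helper: return the face collection under both API generations.
    data = obj.data
    if hasattr(data, "polygons"):   # Blender 2.63+ with bmesh
        return data.polygons
    return data.faces               # 2.62 and earlier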

@dfelinto I haven’t had much time for BLAM development lately, but I have been thinking about ways to allow for interactive tweaking. (Ab)using tracking points for this purpose sounds like an option to investigate. It would be interesting to have a look at your code snippets.

Interactive tracking would solve all the problems, because with small "human eye" corrections you can do miracles.

  1. The picture was taken directly from the camera (Fuji E900).
  2. The 'test' picture above cannot be read (?!).

Can you upload this picture?

Here is the test image for those having problems downloading it


Please, if you have time, try to calibrate this "coplanar" image as well.
I'm not able to get a decent result from it.
Thank you for your help.

http://misc.cgcookie.netdna-cdn.com//pencil.png

Thanks for uploading. I still have my doubt, as mentioned above: this is a 'perfect' image example, i.e. in real life one rarely gets a chance to see all the edges behind :wink: How about reconstructing a house from visible edges only (2 vanishing points)? That is my problem with calibrating the perspective.
This evening I'll try to upload my example to clarify the question.

Here is an example of an operator that prints the positions of two selected tracks' markers. If you don't need a proper poll you can gather them in execute() instead of poll(), and then there is no need for self._selected_tracks.


import bpy


class CLIP_OT_print_markers(bpy.types.Operator):
    """Print the marker positions of the two selected tracks"""
    bl_idname = "clip.print_markers"
    bl_label = "Print Markers"
    bl_description = "Print the positions of two selected tracking markers"
    bl_options = {'REGISTER', 'UNDO'}

    _selected_tracks = []

    @classmethod
    def poll(cls, context):
        movieclip = context.edit_movieclip
        if movieclip is None:
            return False
        tracking = movieclip.tracking.objects[movieclip.tracking.active_object_index]

        # Collect the selected tracks of the active tracking object.
        cls._selected_tracks = []
        for track in tracking.tracks:
            if track.select:
                cls._selected_tracks.append(track)

        # Only enable the operator when exactly two tracks are selected.
        return len(cls._selected_tracks) == 2

    def execute(self, context):
        # A marker's .co is in normalized clip coordinates,
        # ranging from (0, 0) at the bottom-left to (1, 1) at the top-right.
        verts = []
        for track in self._selected_tracks:
            verts.append(track.markers[0].co)

        print(verts[0])
        print(verts[1])

        return {'FINISHED'}
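
A quick note on running it (assuming you paste everything into Blender's Text Editor and have a movie clip loaded):

import bpy

bpy.utils.register_class(CLIP_OT_print_markers)

# Select exactly two tracks in the Movie Clip Editor, then run the operator
# from that editor's context, e.g. via the Spacebar operator search ("Print Markers")
# or bpy.ops.clip.print_markers().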


To draw markers back you do bpy.ops.clip.add_marker(location=(u, v)); the coordinates range from (0, 0) to (1, 1).
I hope this helps :wink: I should be (beta-)publishing my add-on soon. I'll let you know when it's out. (By then I will hopefully have some bgl drawing on top of it.)
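
For completeness, drawing a couple of positions back as markers could look like this (just a sketch, assuming it runs with the Movie Clip Editor context mentioned above):

import bpy

# Normalized clip coordinates: (0, 0) is the bottom-left corner, (1, 1) the top-right.
for u, v in [(0.25, 0.25), (0.75, 0.75)]:
    bpy.ops.clip.add_marker(location=(u, v))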

http://www.pasteall.org/blend/13474

I cannot move forward with it …( …and I am very anxious about BLAM's results!