Easiest way to check if a ray from the camera to Vert1 also passes through Vert2

Hi, I have a few selected vertices, and some are aligned on top of each other from the user's 3D view.
I want to find pairs of vertices that are aligned on one line.
http://i.imgur.com/mT5uqjS.jpg
I know I can shoot a ray from the camera. But it returns face, normal, hit(bool), and this data is of no use to me.
In the best scenario, the script should work in both ortho and perspective view. So if I'm in perspective view and 2 verts are on top of each other, the script should return those 2 verts.

Edit:
OK, it seems I can use angle() to determine whether a vertex lies on the line.
If I have 3 points: Cam, V1, V2, then:
angle(v1-cam, v2-cam) will be 0 if those two vectors are parallel.

But I do not think it will work in perspective view…
How do I transform this to perspective view then?
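For reference, the angle test above can be sketched without Blender, using plain tuples in place of mathutils.Vector (the sample points cam, v1, v2 are made up for illustration; in Blender you'd call mathutils.Vector.angle instead):

```python
import math

def sub(a, b):
    # component-wise difference of two 3D points
    return tuple(x - y for x, y in zip(a, b))

def angle(a, b):
    # angle between two 3D vectors via the dot product
    dot = sum(x * y for x, y in zip(a, b))
    la = math.sqrt(sum(x * x for x in a))
    lb = math.sqrt(sum(x * x for x in b))
    return math.acos(max(-1.0, min(1.0, dot / (la * lb))))

cam = (0.0, -5.0, 0.0)
v1 = (0.0, 0.0, 0.0)
v2 = (0.0, 5.0, 0.0)   # directly behind v1 as seen from cam

print(angle(sub(v1, cam), sub(v2, cam)))  # 0.0 for aligned verts
```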

You can also use the cross product instead of the angle. It is faster, which is advantageous if you need to test many vertices.
http://www.blender.org/api/blender_python_api_2_64_9/mathutils.html?highlight=cross#mathutils.Vector.cross
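As a plain-Python sketch of the idea (mathutils.Vector.cross does the equivalent in Blender): the cross product of two parallel vectors is the zero vector, so its squared length can be tested against a small epsilon.

```python
def cross(a, b):
    # cross product of two 3D vectors
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def is_parallel(a, b, eps=1e-9):
    # parallel vectors have a zero-length cross product
    c = cross(a, b)
    return sum(x * x for x in c) < eps

print(is_parallel((0.0, 1.0, 2.0), (0.0, 2.0, 4.0)))  # True: same direction
print(is_parallel((0.0, 1.0, 2.0), (1.0, 0.0, 0.0)))  # False
```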

I do not think it will work in perspective view

Why not? A vector is a vector, I wouldn't expect it to bend in perspective view and break math :wink:

http://i.imgur.com/MI1pDuc.jpg

What you could also do is flatten to 2D and perform a search for nearby vertices. The Z coordinate would contain the order / distance between vertices. The trickiest part is probably constructing the correct transformation matrix (a projection matrix to bring the vertices into homogeneous space).

http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/#The_Model__View_and_Projection_matrices
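To illustrate the homogeneous-space idea with a toy example (this is not the actual Blender matrix, just a made-up minimal perspective matrix): multiplying a point by a 4x4 projection matrix yields (x, y, z, w), and dividing x and y by w flattens the point to 2D screen space, so points on the same view ray land on the same 2D coordinates.

```python
def project_2d(m, co):
    # m: 4x4 row-major matrix, co: 3D point; returns (x/w, y/w)
    x, y, z = co
    r = [m[i][0] * x + m[i][1] * y + m[i][2] * z + m[i][3] for i in range(4)]
    return (r[0] / r[3], r[1] / r[3])

# toy perspective matrix for a camera at the origin looking down -Z (so w = -z)
persp = ((1, 0, 0, 0),
         (0, 1, 0, 0),
         (0, 0, 1, 0),
         (0, 0, -1, 0))

print(project_2d(persp, (1.0, 1.0, -2.0)))  # (0.5, 0.5)
print(project_2d(persp, (2.0, 2.0, -4.0)))  # (0.5, 0.5) -- same view ray, deeper
```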

There’s something wrong with my formula at the end… maybe someone can fix it.

import bpy

ob = bpy.context.object
assert ob.type == 'MESH'

v1, v2 = ob.data.vertices

scene = bpy.context.scene
cam = scene.camera

x = scene.render.resolution_x
y = scene.render.resolution_y
scale_x = scene.render.pixel_aspect_x
scale_y = scene.render.pixel_aspect_y

mat = cam.calc_matrix_camera(x, y, scale_x, scale_y)

print(mat * cam.matrix_world.transposed() * v1.co * ob.matrix_world)
print(mat * cam.matrix_world.transposed() * v2.co * ob.matrix_world)

CoDEmanX
Difference between perspective and ortho I was speaking about:
http://i.imgur.com/Bx6wVio.jpg
In perspective, I can use the camera location to see if vertices are aligned. But in ortho the camera can be placed in many positions and the green verts would still be aligned. So for ortho I would have to use the camera's forward-looking vector and cross it with the vector between the vertices.

Anyway, my code won't work. It gives small angles for most pairs of vertices, but they are usually non-zero, even for aligned vertices. I will try the cross product, but it should work with vector.angle() too.


rv3d = context.region_data
vectViewLoc = rv3d.perspective_matrix.inverted().translation   # won't work for view_matrix either.. :(
vectViewLookAt = rv3d.view_rotation * mathutils.Vector((0.0, 0.0, -1.0))
for aVert, bVert in combinations(vertLoopList, 2):
    if rv3d.is_perspective:
        print((aVert.co - vectViewLoc).angle(bVert.co - vectViewLoc))
        if (aVert.co - vectViewLoc).angle(bVert.co - vectViewLoc) < 0.0001:
            print('verts aligned! ', aVert.index, ' and ', bVert.index)
            bmesh.ops.connect_vert_pair(bm, verts=[aVert, bVert])

I used https://docs.python.org/3.2/library/itertools.html#itertools.combinations to find pairs of vertices.
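For reference, combinations(iterable, 2) yields each unordered pair exactly once, so no (i, j)/(j, i) bookkeeping is needed:

```python
from itertools import combinations

verts = ['v0', 'v1', 'v2', 'v3']
pairs = list(combinations(verts, 2))
print(pairs)
# [('v0', 'v1'), ('v0', 'v2'), ('v0', 'v3'), ('v1', 'v2'), ('v1', 'v3'), ('v2', 'v3')]
```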

CoDEmanX - cool link. Thanks!

Edit: after a bit more testing, the script above works for verts in an object at the center of the scene. But if the object is offset, the script fails. I will have to do a matrix multiplication somewhere, I guess, to fix this.

With the perspective matrix, you can tell whether the vertices are aligned without having to calculate the angle. And it still works in Ortho mode too:


from mathutils import Vector

def is_behind_the_other(perspective_matrix, coord1, coord2, precision=0.02):
    prj1 = perspective_matrix * Vector((coord1[0], coord1[1], coord1[2], 1.0))
    prj2 = perspective_matrix * Vector((coord2[0], coord2[1], coord2[2], 1.0))

    return abs(prj1.x/prj1.w - prj2.x/prj2.w) <= precision and \
           abs(prj1.y/prj1.w - prj2.y/prj2.w) <= precision


import bpy


for area in bpy.context.screen.areas:
    if area.type == 'VIEW_3D':
        for space in area.spaces:
            if space.type == 'VIEW_3D':
                perspective_matrix = space.region_3d.perspective_matrix
                break


coord1 = bpy.data.objects['Cube'].location
coord2 = bpy.data.objects['Lamp'].location


print(is_behind_the_other(perspective_matrix, coord1, coord2))

Works great! Thanks.
To make it work with objects that have transformations applied, I just multiplied the perspective matrix by obj.matrix_local. And the best thing is it works for both ortho and perspective.

But in ortho camera can be placed in many positions, and green verts would still be aligned.

No, if you rotate an ortho cam then the same vertices may no longer be aligned. It’s only true if you translate the camera or the object. The camera type is of no relevance IMHO.
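To sketch the ortho test discussed above in plain Python: two verts are aligned exactly when the vector between them is parallel to the view direction (view_dir is an assumed unit vector here; in Blender it could come from rv3d.view_rotation as in the earlier snippet):

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def aligned_in_ortho(v1, v2, view_dir, eps=1e-9):
    # aligned iff the vert-to-vert vector is parallel to the view direction
    d = tuple(b - a for a, b in zip(v1, v2))
    c = cross(d, view_dir)
    return sum(x * x for x in c) < eps

view_dir = (0.0, 0.0, -1.0)
print(aligned_in_ortho((1.0, 2.0, 0.0), (1.0, 2.0, -5.0), view_dir))  # True
print(aligned_in_ortho((1.0, 2.0, 0.0), (1.0, 3.0, -5.0), view_dir))  # False
```

Note that translating the camera does not change view_dir, which matches the observation that only rotation breaks the alignment in ortho.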

I used https://docs.python.org/3.2/library/…s.combinations to find pairs of vertices.

Should be fine for coarse meshes, but it will be horribly inefficient for dense meshes.
Unfortunately, KDTree supports 3D coordinates only. It might still be more efficient for finding vertices close to each other, but it could be even faster with a specialized 2D algorithm.
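One possible specialized 2D approach (a sketch independent of the Blender API, not something mathutils provides): hash the flattened x/y coordinates into an epsilon-sized grid and only compare verts that fall into the same or a neighboring cell, which is roughly linear for evenly spread points.

```python
from collections import defaultdict
from itertools import product

def close_pairs_2d(points, eps):
    # bucket points into an eps-sized grid; any pair closer than eps
    # must share a cell or sit in one of the 8 neighboring cells
    grid = defaultdict(list)
    for i, (x, y) in enumerate(points):
        grid[(int(x // eps), int(y // eps))].append(i)
    pairs = set()
    for (cx, cy), idxs in grid.items():
        for dx, dy in product((-1, 0, 1), repeat=2):
            for j in grid.get((cx + dx, cy + dy), ()):
                for i in idxs:
                    if i < j:
                        xi, yi = points[i]
                        xj, yj = points[j]
                        if (xi - xj) ** 2 + (yi - yj) ** 2 < eps * eps:
                            pairs.add((i, j))
    return pairs

points = [(0.0, 0.0), (0.001, 0.0), (5.0, 5.0)]
print(close_pairs_2d(points, 0.01))  # {(0, 1)}
```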

With the perspective matrix, you can know if the vertices are aligned without having to calculate the angle. And still works in Ortho mode also:

Can you post this for camera too? I’m uncertain if I could simply replace persp matrix by what calc_matrix_camera returns.

KDTree doesn’t work for some reason, or I’m using it incorrectly :’-O

import bpy
from mathutils.kdtree import KDTree
from bpy_extras.object_utils import world_to_camera_view


scene = bpy.context.scene
ob = bpy.context.object
assert ob.type == 'MESH'


me = ob.to_mesh(scene, True, 'PREVIEW')
#me.transform(ob.matrix_world)


verts = [world_to_camera_view(scene, scene.camera, v.co) for v in me.vertices]


""" doesnt work...
tree = KDTree(len(verts))
for i, v in enumerate(verts):
    tree.insert((v.x, v.y, 0), i)
tree.balance()


for v in verts:
    ret = tree.find_range(v, 10)
    print(ret)
"""


epsilon = 0.005
found = {}
for i, v in enumerate(verts):
    for j, w in enumerate(verts):
        if i == j or found.get((i,j)) is not None:
            continue
        if abs((w.to_2d() - v.to_2d()).length) < epsilon:
            print("Vertex", i, "and", j)
            print(me.vertices[i].co)
            print(me.vertices[j].co)
            found[(i,j)] = 1
            found[(j,i)] = 1
            
bpy.data.meshes.remove(me)

My pleasure :slight_smile: :


from mathutils import Vector
def is_behind_the_other(perspective_matrix, coord1, coord2, epsilon=.02):
    prj1 = perspective_matrix * Vector((coord1[0], coord1[1], coord1[2], 1.0))
    prj2 = perspective_matrix * Vector((coord2[0], coord2[1], coord2[2], 1.0))
    
    return  abs(prj1.x/prj1.w - prj2.x/prj2.w) &lt;= epsilon and\
                abs(prj1.y/prj1.w - prj2.y/prj2.w) &lt;= epsilon




import bpy


scene = bpy.context.scene
cam = scene.camera
x = scene.render.resolution_x
y = scene.render.resolution_y
scale_x = scene.render.pixel_aspect_x
scale_y = scene.render.pixel_aspect_y


mat = cam.calc_matrix_camera(x, y, scale_x, scale_y)

# perspective_matrix = projection_matrix * view_matrix
pmat = mat * cam.matrix_world.inverted()

coord1 = bpy.data.objects['Cube'].location
coord2 = bpy.data.objects['Lamp'].location

print(is_behind_the_other(pmat, coord1, coord2))

I confused matrix.inverted() with matrix.transposed() a bit, but it is solved now.

You forgot to set the z coordinate of the verts to zero. Right here:

...
for v in verts:
    ret = tree.find_range(v, 10)
    print(ret)
...

The kdtree was a great idea. The script is quite a bit faster for bigger meshes.
CoDEmanX,
when you do

for i, v in enumerate(verts):
    for j, w in enumerate(verts):

it negates the speed gain from the kd-tree. Possibly

for i, j in combinations(verts, 2):

would be faster.

I combined the kdtree with mano-wii's script and it works great. Btw, I do not think it is necessary to do:
(prj1.x/prj1.w - prj2.x/prj2.w)
and
prj1.x - prj2.x < delta is good enough for finding small distances.
But I wonder what this division by prj.w is for? Does it contain some value that the perspective cam creates for the parallax/depth effect?

Anyway here is how this looks now:

rv3d = context.region_data
viewPerspectiveMatrix = rv3d.perspective_matrix
localObjMatrix = context.object.matrix_local
kd = mathutils.kdtree.KDTree(mainLoopVertCount)
perspectiveMatrixLocalSpace = viewPerspectiveMatrix * localObjMatrix
vertsPerspective = []
for i, v in enumerate(vertLoopList):
    tempVert = perspectiveMatrixLocalSpace * mathutils.Vector((v.co[0], v.co[1], v.co[2], 1.0))
    perVert = [tempVert.x, tempVert.y, 0.0]
    vertsPerspective.append(perVert)
    kd.insert(perVert, i)
kd.balance()
for i, vertPersp in enumerate(vertsPerspective):
    for (co, index, dist) in kd.find_n(vertPersp, 2):
        print('Distance i=', index, ' is ', dist)
        if dist < 0.0001 and index != i:
            bmesh.ops.connect_vert_pair(bm, verts=[vertLoopList[i], vertLoopList[index]])

Another thing that is not clear to me: in this tutorial CoDEmanX provided, there is a lot of talk about using matrix.inverted() to negate the cam transformation.
But in mano-wii's code there was no such thing. And I'm not sure why it works above without inverting. But it works.

Here is a demo of what I used this script for (sometimes it doesn't work - see the second cut - but in 90% of cases it works OK)
https://dl.dropboxusercontent.com/u/26878362/AnimNodes/TrimKnife.gif

Btw, I wonder: is it possible to disable some events for Blender's built-in modal knife operator?
I would like to disable some options for the knife in this script: block the 'z' key to prevent the user from disabling 'cut through' mode, and disable 'mmb' for camera rotation.
I was thinking about making a modal operator that calls the modal knife, but from what I remember that is problematic…

prj.w is directly related to the distance between the coordinate and the camera plane. If you do not divide by this value, you will obtain the coordinates as if they were in orthographic view mode.
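A quick numeric check of that, using a toy setup (camera at the origin looking down -Z, so w is just the depth, not the real Blender matrices): points on the same view ray collapse to the same x/w, y/w, while the raw x/y without the divide differ - which is exactly the ortho-like behaviour.

```python
def ndc_xy(co):
    # minimal perspective transform: camera at origin looking down -Z,
    # so w = -z (the depth); divide to get normalized screen coords
    x, y, z = co
    w = -z
    return (x / w, y / w)

near = (1.0, 1.0, -2.0)
far  = (2.0, 2.0, -4.0)   # same view ray, twice as deep
print(ndc_xy(near), ndc_xy(far))  # both (0.5, 0.5)
# without the divide, the raw x/y of near and far differ
```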

Doh! Fixed:

import bpy
from mathutils.kdtree import KDTree
from bpy_extras.object_utils import world_to_camera_view

scene = bpy.context.scene
ob = bpy.context.object
assert ob.type == 'MESH'

me = ob.to_mesh(scene, True, 'PREVIEW')
me.transform(ob.matrix_world) # to_mesh() does not apply the transformation matrix for us!

verts = [world_to_camera_view(scene, scene.camera, v.co).to_2d().to_3d() for v in me.vertices]


tree = KDTree(len(verts))
for i, v in enumerate(verts):
    tree.insert(v, i)
tree.balance()


print()
for v in verts:
    ret = tree.find_range(v, 0.1)
    for co, idx, dist in ret:
        if dist > 0.0:
            print(dist, idx)
bpy.data.meshes.remove(me)

it negates the speed gain from the kd-tree. Possibly
Code:
for i, j in combinations(verts, 2):
would be faster.

Nested for loops like in that example are of course the worst-case scenario, O(n*m). Everything else will be faster, especially a specialized algorithm for the problem at hand. combinations() would be faster, because it avoids unnecessary iterations. You could do that without this utility function though:

epsilon = 0.005
for i, v in enumerate(verts):
    for j, w in enumerate(itertools.islice(verts, i+1, None), i+1):  # a generator is nicer than slicing a copy
        if abs((w.to_2d() - v.to_2d()).length) < epsilon:
            print("Vertex", i, "and", j)
            print(me.vertices[i].co)
            print(me.vertices[j].co)

prj.w is directly related to the distance between the coordinate and the camera plane. If you do not divide by this value, you will obtain the coordinates as if they were in orthographic view mode.

So it’s not equal to 1, unlike most of the time?

Looks like it fixed my code I posted earlier:

import bpy

ob = bpy.context.object
assert ob.type == 'MESH'


v1, v2 = ob.data.vertices[:2]


scene = bpy.context.scene
cam = scene.camera


x = scene.render.resolution_x
y = scene.render.resolution_y
scale_x = scene.render.pixel_aspect_x
scale_y = scene.render.pixel_aspect_y


mat = cam.calc_matrix_camera(x, y, scale_x, scale_y)
pmat = mat * cam.matrix_world.inverted()


r1 = pmat * (ob.matrix_world * v1.co).to_4d()
r2 = pmat * (ob.matrix_world * v2.co).to_4d()
print(r1.xyz / r1.w)
print(r2.xyz / r2.w)

And here with KDTree:

import bpy
from mathutils.kdtree import KDTree


ob = bpy.context.object
assert ob.type == 'MESH'
assert not ob.data.is_editmode


scene = bpy.context.scene
cam = scene.camera


x = scene.render.resolution_x
y = scene.render.resolution_y
scale_x = scene.render.pixel_aspect_x
scale_y = scene.render.pixel_aspect_y


mat = cam.calc_matrix_camera(x, y, scale_x, scale_y)
pmat = mat * cam.matrix_world.inverted()


me = ob.to_mesh(scene, True, 'PREVIEW')
me.transform(ob.matrix_world)


verts = []
for v in me.vertices:
    co = pmat * v.co.to_4d()
    verts.append(co.xyz / co.w)


tree = KDTree(len(verts))
for i, v in enumerate(verts):
    tree.insert(v.to_2d().to_3d(), i)
tree.balance()


epsilon = 0.1


for i, v in enumerate(verts):
    ret = tree.find_range(v.to_2d().to_3d(), epsilon)
    if len(ret) > 1:
        print("\nGroup:\n{:8d}  -".format(i))
    for co, idx, dist in ret:
        if idx != i:
            print("{:8d}  {:.6f}".format(idx, dist))


bpy.data.meshes.remove(me)

A connected-component algorithm might be needed to combine groups, although that may not be what you want, because it can produce a single cluster of verts near one another but no longer along the view direction (e.g. a line of connected verts, evenly spaced but diagonal in camera view, so that the distance between any two connected verts is < epsilon while the accumulated distance is many times epsilon).
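Such a connected-component pass over the near-pairs could be a small union-find (a sketch; `pairs` is assumed to be the (i, j) output of the proximity search above):

```python
def group_pairs(pairs):
    # union-find: merge every (i, j) pair into one component
    parent = {}

    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for i, j in pairs:
        union(i, j)
    groups = {}
    for a in parent:
        groups.setdefault(find(a), []).append(a)
    return [sorted(g) for g in groups.values()]

print(group_pairs([(0, 1), (1, 2), (5, 6)]))  # [[0, 1, 2], [5, 6]]
```

Note this merges transitively, which is exactly the chaining caveat above: 0-1 and 1-2 collapse into one group even if 0 and 2 are farther apart than epsilon.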

Here is a demo what I used this script for

That looks extremely useful! That’s certainly something that should be added to the actual knife tool IMO (cut and erase everything on one side of the cut - left or right for simple cuts, inner or outer for more complex cuts - based on the view direction during cutting; it would ideally require user input to decide which side to remove). Camera rotation between cut steps would be a bit of an issue of course, but I guess one could store the initial view matrix on invocation (and maybe add a hotkey to toggle between the initial and current view matrix?)

A stock operator very similar to this is Bisect, with an option to remove either side + fill, I think. It’s always cut-through, but doesn’t allow multiple cuts, snapping, angle constraints and so on.

is it possible to disable some events for blender build in modal knife operator?

I doubt it, because it’s a built-in operator. I’m not sure whether you could disable some of its modal keymap items before invoking it, however…