I have been experimenting with the `ray_cast` function that Blender's Python API offers. Specifically, I am using the definition below for casting rays towards an object from a given point.
```python
from mathutils import Vector


def intersect_a_ray(p0, p1, ob):
    """
    Definition to cast a ray between two points and test it for
    intersection with an object.

    Parameters
    ----------
    p0         : list
                 Starting point of a ray.
    p1         : list
                 Ending point of a ray.
    ob         : blender object
                 Object to be tested for intersection with a ray.

    Returns
    ----------
    result     : bool
                 State of the ray intersection test (True on a hit).
    loc        : bool/Vector
                 Location of the intersection in world coordinates.
                 If the ray isn't intersecting, returns False.
    dist       : float
                 Distance from the starting point of the ray to the
                 intersection point.
    """
    mwi = ob.matrix_world.inverted()
    # Transform both endpoints into the object's local space.
    ray_begin = mwi @ Vector(p0)
    ray_end = mwi @ Vector(p1)
    ray_direction = (ray_end - ray_begin).normalized()
    # In Blender 2.8x, Object.ray_cast returns a tuple:
    # (result, location, normal, index).
    result, location, normal, index = ob.ray_cast(
        origin=ray_begin, direction=ray_direction)
    loc = False
    dist = 0
    if result:
        mw = ob.matrix_world
        # Transform the hit location back into world coordinates.
        loc = mw @ location
        dist = (loc - Vector(p0)).length
    return result, loc, dist
```
In my observation, using Blender 2.82, the accuracy of the ray intersection point with a given object is questionable for the application that I am targeting.
In my test bed, I have a mesh that represents a flat surface standing at (0, 0, 0) in the reference coordinate system. By definition, I know that the intersection is at some point (X, Y, 0). If I shoot a ray using the definition given above, the location value returned by intersect_a_ray is sometimes off by orders of magnitude.
I speculate that this is common behavior. If not, may I kindly ask if you see any bugs in the definition above?
If it is common behavior, is there a way to get higher-precision intersection points? An error on the order of 10^-7 would be good for what I am aiming for.
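For context on the precision I think I can expect: as far as I understand, Blender stores mesh vertex coordinates as single-precision (32-bit) floats, so a relative error around 10^-7 of the coordinate magnitude seems to be the best case even before ray casting. A small pure-Python sketch of that rounding, emulating float32 with the standard struct module:

```python
import struct


def to_float32(x):
    # Round a Python double to the nearest 32-bit float and back.
    return struct.unpack('f', struct.pack('f', x))[0]


value = 1.2345678901234
rounded = to_float32(value)
error = abs(rounded - value)
# float32 carries roughly 7 significant decimal digits, so the error
# here is on the order of 1e-7 relative to the value.
```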