Getting a vert's screenX and screenY, or the mouse UVx and UVy


I have an object and want to select verts based on either the screenX and screenY of each vert or, preferably, the UVx and UVy of the mouse.

Does anyone have a code snippet, or can recommend a script to look at?


Tom M.

I don't understand your question. Please explain, and for what purpose too.

I’m doing a new selection method that selects and deselects vertices as you move the mouse.

See this link for one of the many uses

(It will also be useful for static selection, since it uses an alphamask to designate selection, but that is a side effect.)

My script can also displace the selected verts as the mouse moves, either according to an alphamask (a greyscale image), or a radius and a formula for falloff. It is also being designed as a dynamic smooth tool.

I have most of the code written,


what remains to be done is

  1. determining if the mouse is within the bounds of the mesh (because it doesn’t make sense to calculate the vert selection if we are outside of the bounds)
  2. determining the UV coordinate location of the mouse (or) determining the screen x and screen y of the vertices
  3. getting pixel information to create image alphamaps from.

Point 2 is the only one critical for the basic functionality.
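For point 1, a cheap first pass would be a bounding-box test on the screen-projected verts. This is a plain-Python sketch of my own (no Blender API assumed), just to show the idea:

```python
def screen_bbox(verts_2d):
    """Axis-aligned bounding box of projected (x, y) vertex coordinates."""
    xs = [v[0] for v in verts_2d]
    ys = [v[1] for v in verts_2d]
    return min(xs), min(ys), max(xs), max(ys)

def mouse_in_bounds(mouse_x, mouse_y, verts_2d):
    """True if the mouse lies inside the mesh's screen-space bounding box."""
    xmin, ymin, xmax, ymax = screen_bbox(verts_2d)
    return xmin <= mouse_x <= xmax and ymin <= mouse_y <= ymax
```

It can report hits where the mesh has holes, of course, but as an early-out test that doesn't matter.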

I’ve found code on ‘screen projection transforms’

But I'm hoping there might be some Python code for Blender that I can see/use. Since this same problem has likely been solved by other Blender coders, I don't wish to waste my time reinventing the wheel.

Thanks for your assistance,

Tom M.

Here is a better explanation,

given MouseX and MouseY in screen coordinates

and given a list of verts in object coordinates

transform the MouseX, MouseY to object coordinates,

so that the distance from the mouse to the verts can be calculated

Even better would be to calculate the distance by treating the MouseX, MouseY as if they were on the surface of the object. Then we can calculate the distance using X, Y and Z, instead of just X and Y.
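The first half of that can be sketched in plain Python: run each vert through a 4x4 view matrix, do the perspective divide, and measure the 2D distance to the mouse. The matrix convention here (row vectors, row-major) is my assumption, not something Blender guarantees:

```python
import math

def project_vert(vert, matrix):
    """Apply a 4x4 matrix (row-vector, row-major convention assumed) to a
    3D point and do the perspective divide, yielding screen x, y."""
    x, y, z = vert
    out = [sum(v * matrix[i][j] for i, v in enumerate((x, y, z, 1.0)))
           for j in range(4)]
    w = out[3] if out[3] != 0 else 1.0
    return out[0] / w, out[1] / w

def screen_distance(mouse, vert, matrix):
    """2D distance from the mouse to a vertex's screen projection."""
    sx, sy = project_vert(vert, matrix)
    return math.hypot(mouse[0] - sx, mouse[1] - sy)
```

With the real view-projection matrix plugged in, sorting verts by `screen_distance` gives the nearest-to-mouse candidates.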

Hope this is clearer,

Tom M.

Okay, I don't know how to get mouse coords to vertex coords.

I would do something very different:

Use the mouse to control the X/Y of the 3D cursor; then you could use that info to distort the mesh.

I’ve decided to find a refresher on vector and matrix algebra.

I was hoping that someone would have the answer at the tip of their tongue, but I guess not.


Unfortunately, I don’t think it can be done. I have been wanting to find a way to interactively incorporate Python scripts into the editing interface, but have yet to see a feasible solution. The closest I’ve seen is Emilio’s horn extrude:

The way he does it (and a way I didn’t know was possible) is he creates a script link on Redraw and uses the BGL module to add the “extra” information (i.e. the horns) in realtime. This is fantastic, but not what you want. It uses the position of the empty as a reference, and not the mouse coordinates.

So, to the mouse coordinates problem. It simply can’t be done (will someone please prove me wrong!). There is not enough information accessible to Python regarding the 3D View transformations to accurately translate the mouse coordinates into “3D workspace coordinates.” You can access the View “rotation” and the View “offset,” but not the “zoom” or “perspective.”
In other words, although you can calculate the directional vector (you might say the screen’s Z-axis) of the mouse in the 3D workspace, there is no way of telling where or what it intersects with.
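That directional vector alone isn't enough, agreed; but for what it's worth, if you did know some plane to hit (say one containing the 3D cursor), the intersection step itself is simple. A plain-Python sketch, under that assumption:

```python
def ray_plane_intersect(origin, direction, plane_point, plane_normal):
    """Intersect a ray with a plane. Returns the hit point as a tuple,
    or None if the ray is parallel to the plane."""
    denom = sum(d * n for d, n in zip(direction, plane_normal))
    if abs(denom) < 1e-9:
        return None  # ray runs parallel to the plane
    diff = [p - o for p, o in zip(plane_point, origin)]
    t = sum(d * n for d, n in zip(diff, plane_normal)) / denom
    return tuple(o + t * d for o, d in zip(origin, direction))
```

The missing piece in the thread is exactly the `origin` and `direction` for the mouse ray, which is what the projection matrix would provide.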

I may be wrong, and I hope I am, but that’s my take on it. Brush up on your matrix and linear algebra all you want, but it isn’t going to help you just yet. You need the tools/methods to apply them first.


Levi, I’m positive it can be done. It just has been so long since I’ve needed vector math I’ve forgotten nearly everything.

First, find the x,y screen coordinates of the verts. This is done via a standard projection (which I can’t recall how to do).

Then find the verts nearest the MouseX, MouseY. Backface cull so that only the faces towards you are exposed (i.e. by checking their normals). Now go through the verts and find which face you are intersecting; if there are multiple overlapping faces, select the face that is closest to the camera.

Now you subdivide the face till you get a point with an XY coordinate that is close enough, and call that your XYZ location of the mouse (XYZ local coordinates of the object).

Now, you treat the face as a plane, and project from that point onto the face. This will be your UV coordinates.

There probably is a more efficient method, but that is how it could be done.
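One more efficient variant (my substitution for the subdivide-until-close idea, not what Tom described) is barycentric coordinates: test whether the projected mouse point lies inside a projected triangle, then reuse the same weights to interpolate the 3D position directly. A plain-Python sketch:

```python
def barycentric(p, a, b, c):
    """Barycentric weights of 2D point p inside triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    den = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    u = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / den
    v = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / den
    return u, v, 1.0 - u - v

def mouse_hit(p, tri2d, tri3d):
    """If the 2D mouse point lies inside the projected triangle tri2d,
    interpolate the 3D position from the corners of tri3d; else None."""
    u, v, w = barycentric(p, *tri2d)
    if min(u, v, w) < 0.0:
        return None  # outside the triangle
    return tuple(u * a + v * b + w * c for a, b, c in zip(*tri3d))
```

That gives the "XYZ location of the mouse" in one step per face; for quads you would just split into two triangles first.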

Now, I need to find someone who remembers the math (I’ve heard that theeth is the master). Or dig up some math books and try and figure it out myself.


Yes, it could be done, IF (and only if) you knew the projection matrix. As far as I know, this is not accessible to Python via the Window module. Perhaps it is through the BGL module, but that’s out of my league.

After thinking about it some more, I thought of three possible alternatives.

  1. Vertex painting (as in the script I directed you to in your other post)
  2. Using the script link method and an empty for the “cursor” (Emilio’s horn extrude)
  3. Create your own BGL application in the script window to modify the selected mesh

I’ve listed these in order of feasibility. I see a vertex painting method as being your best bet.
Using an empty for a cursor would probably work because the view direction (vector) IS accessible through the Window module. You just have to project the empty’s location along this vector until it “hits” the mesh (or, the other way around, i.e. project each vertex along the view vector onto the plane containing the empty).
Number 3 would be the ultimate. This would mean creating your own mini-3D application using BGL. In this manner, you could control EVERYTHING you need to (how the mesh is displayed, the projection matrix used, the navigation, etc.). Unless you are up to some serious coding, though, this won’t be very likely.

I have some old blender python scripts I could send you, if you would like. They would give you some ideas.


I just talked to Ton, and he thought it wasn’t doable,

therefore, I’ve decided to solve it <g>

Yes, it could be done, IF (and only if) you knew the projection matrix. As far as I know, this is not accessible to Python via the Window module.

it is called GetViewMatrix()

I’ll set the camera to the view, then get the camera position. This is the center of our screen. Set the cursor to this location.

Now we send the mouse to each of the four corners of the View3D and set the cursor to each of the four corners. This gives us our local coordinate system. Now we can convert screen coordinates in pixels to our coordinate system.

Then, we can proceed as above.
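Assuming an orthographic view (the mapping is only linear then; perspective would break this), converting a pixel position using the four sampled corner locations is just bilinear interpolation. A sketch, with corner names my own:

```python
def pixel_to_view(mx, my, width, height, bl, br, tl, tr):
    """Map a pixel position (mx, my) in a view of the given size into the
    coordinate system spanned by the four sampled 3D corner positions:
    bottom-left, bottom-right, top-left, top-right."""
    u = mx / float(width)
    v = my / float(height)
    bottom = [l + u * (r - l) for l, r in zip(bl, br)]
    top = [l + u * (r - l) for l, r in zip(tl, tr)]
    return tuple(b + v * (t - b) for b, t in zip(bottom, top))
```

So four cursor samples calibrate the view once, and every mouse move after that is two lerps.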

So, does my method make sense, do you see any problems with it?

Tom M.

It may be worthwhile doing this in C using existing functions.

There are probably some functions you could add Python hooks to, so it's not as complex.

OK. I forgot about that. I had thought of it as I was browsing through the documentation, but it never solidified. If you’re going to use the camera, why not just use the Window.CameraView(1) function, instead of Window.GetViewMatrix().
Also, I’m confused when you say “mouse” and “cursor.” I’m assuming you mean the mouse when you say “mouse” and the 3D Cursor when you say “cursor.” In that case, how do you propose setting the 3D cursor to the four corners of the camera?
That’s the main problem I see, not to mention that you still don’t know the projection matrix even though you now know the “location” of the view (unless you can somehow derive one from the camera lens value).
If Ton said it wasn’t doable, I’m beginning to resign myself to that opinion as well.
There still may be a possibility with a script-linked BGL call. I’ll have to research that, though.

In reply to cambo:
Yes, in order to do this, LetterRip will have to use some ugly “hacks.” While adding python hooks is an appealing solution, I (and I’m assuming LetterRip as well) don’t know enough about that kind of stuff to do it.


I’m looking into the C code option, but that probably requires quite a bit more learning about the Blender internals than I had hoped to tackle.

I’m assuming you mean the mouse when you say “mouse” and the 3D Cursor when you say “cursor.” In that case, how do you propose setting the 3D cursor to the four corners of the camera?

Set the mouse to each corner, simulate a mouse press event, that sets the cursor to that mouse position (other coordinates are maintained that aren’t needed to translate the location).

If Ton said it wasn’t doable, I’m beginning to resign myself to that opinion as well.

He said he didn’t see how it was doable with python, and he wasn’t sure that the functionality needed could be properly exposed to python.

He thought the idea was really cool and said it would be ‘easy’ in C, but easy for someone who is a zen master with the code is a bit different from someone completely unfamiliar with it (whose C is quite a bit rusty) having a go.

I’ll be having a look and see if I can get it done.


Set the mouse to each corner, simulate a mouse press event, that sets the cursor to that mouse position (other coordinates are maintained that aren’t needed to translate the location).

I see. Something like a macro. In that case, you wouldn’t even need to set the camera to the current view (well, except to find the view’s “location”).

I still think that you could use the vertex paint. It might take some work to make it user friendly and interactive, and it would use a slightly different masking method than you had planned on. Still, you could take each vertex in the face and, rather than just do a simple displacement based on its color value, also take into account the neighboring vertices and implement your fancy fall-off and smoothing algorithms. Just clear the vertex colors after each modification to the mesh (something jms’ script doesn’t do).
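For the fall-off part, a weight function over the brush radius is probably all that's needed. A minimal sketch, with the falloff kinds being my own assumed names:

```python
import math

def falloff_weight(dist, radius, kind="smooth"):
    """Displacement weight for a vertex at `dist` from the brush center.
    Returns 1.0 at the center, 0.0 at or beyond the radius."""
    if dist >= radius:
        return 0.0
    t = dist / radius
    if kind == "linear":
        return 1.0 - t
    if kind == "sharp":
        return (1.0 - t) ** 2
    # "smooth": cosine ease-out, C1-continuous at both ends
    return 0.5 * (1.0 + math.cos(math.pi * t))
```

Each selected vert's displacement would then be `strength * falloff_weight(dist, radius)`, and an image alphamask could replace the formula later by sampling a pixel instead.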

Anyway, happy hunting with your C research.


LetterRip, check out the following file and tell me what you think: [email protected]/

It has two scenes, each one implements a type of displacement painting using vertex colors via a python script-link on Redraw.
Tell me once you’ve downloaded it so I can remove it from my web space.


I could add Python hooks to a C/C++ fileset; you guys just need to get me the code, since I'm not one to program it myself unless I need it, and I'm still working on bloded.


OK I downloaded it, will look at it later,

Tom M.

Happy happy joy joy

I have a fully operational battle station - er, that is, I can now do live displacing of mesh (it is a bit too slow for practical use right now, and slows down proportionally to the size of the mesh…).

Also I haven’t implemented niceties like symmetrical selection (which can be done by using the symmetry code from MakeHuman), no nice GUI (have to edit the falloff type by hand right now), and you can’t load an image and use it as an alphamap. (And other planned features as well, such as having the alphamap rotate with the mouse direction.)

I’ll release the preliminary version probably by this weekend.

Tom M.

The new EMesh python module will probably prove to be good for this, since you'll be able to move verts without Putting and Getting the mesh.

E-Mesh = Edit Mesh python module that's being coded now.


I’ve been following the discussion on the python list. I agree that it will probably help quite a bit.

I also have some ideas for a speed up that should make the mesh size irrelevant and only depend on the local mesh density.
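One common way to get that local-density behavior (my assumption of the approach, not necessarily what Tom has in mind) is a uniform spatial hash over the projected verts, so each brush lookup only touches the cells under and around the mouse:

```python
from collections import defaultdict

def build_grid(verts_2d, cell):
    """Hash each vertex index into a uniform 2D grid, keyed by the cell
    containing its projected (x, y) position."""
    grid = defaultdict(list)
    for i, (x, y) in enumerate(verts_2d):
        grid[(int(x // cell), int(y // cell))].append(i)
    return grid

def verts_near(grid, cell, x, y):
    """Indices of verts in the cell under (x, y) and its 8 neighbours, so
    each query costs only the local density, not the whole mesh."""
    cx, cy = int(x // cell), int(y // cell)
    found = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            found.extend(grid.get((cx + dx, cy + dy), []))
    return found
```

Rebuild the grid only when the view changes; per mouse move, the brush then tests a handful of verts instead of all of them.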

Tom M.