Set vertex/polygon visibility of an object with Python?

Hi, is there a way to set the visibility of specific vertices or polygons of a mesh on an object? Say, for example, hiding all the vertices below a certain Z coordinate.

Or is there a similar way to “Cull” just part of a mesh in-game?

EDIT: I have to use this for rendering reflections on my water surface. The rendering camera moves below the water surface for a reflection pass, but if an object is partially submerged, the camera ends up rendering the submerged part too and messes up the reflection on the surface. So I want to make parts of a mesh invisible, but only to the rendering camera.
I tried setting camera clipping dynamically but it doesn’t work as I was hoping.

You would need a special material for that. This can be easily done with nodes: just use the global coordinate Z as an input to alpha. (You might want to round the coordinate to get a sharp edge.)
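As a plain-Python sketch of what that node setup computes (the threshold name `water_level` is my own, for illustration; in the node tree it would be the global-coordinate Z input driving alpha):

```python
def z_alpha(global_z, water_level=0.0):
    """Alpha mask driven by global Z: opaque at or above the water level,
    fully transparent below it (the 'rounded' hard cutoff)."""
    return 1.0 if global_z >= water_level else 0.0
```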

He is probably worried about his framerate. I never heard of this before. It sounds interesting.

If you have different objects to hide, there's the Occlude physics type, which you can assign to a plane; anything behind it will be hidden - http://blender.stackexchange.com/questions/21773/can-blender-game-engine-hide-vertices-that-are-not-visible-to-the-camera
(this will not hide separate vertices, but entire objects)

Vertices are never visible :smiley:

All you can see are faces (or edges in wireframe mode).

What you want can be achieved via camera clipping, which is already there. Everything outside the camera frustum is not rendered. The frustum is 3D, which means it also hides things closer than the clip start and farther than the clip end.

Object culling is present too, which means that objects whose bounding box is completely outside the camera frustum are not considered for rendering.
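For illustration, here is a minimal sketch of the near/far part of that test in plain Python (only the depth check, not the four side planes of the frustum; function and argument names are my own):

```python
def clipped_by_depth(cam_space_z, near, far):
    """True if a point's camera-space depth falls outside [near, far].

    In camera space the view direction is -Z, so depth is -cam_space_z.
    Points in front of the clip start or beyond the clip end are culled.
    """
    depth = -cam_space_z
    return depth < near or depth > far
```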

With Python? Yes, it can be done. The explanation is long; read the API and you'll figure it out. Or if you're too lazy, just forget about it; it has hardly any practical use.

I have made water with realtime reflections. For the reflections, the render camera moves below the water plane and takes a render, which is then applied to the water surface using view coordinates. This is quite similar to martinsh's water, but I managed to make my own version of it. The problem arises when an object is partially submerged: the camera ends up rendering the bottom half too, so it messes up the reflection on the surface. Besides that, it also does a refraction pass where it moves the camera to the viewer position and ends up rendering everything above water. It becomes a problem when the texture is being distorted and you can see it around the edges.

Hope you can understand. I tried camera clipping, but it doesn't work for me. I want the part of an object above (or below) the water surface to become invisible to the camera.
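For reference, the camera placement for a planar reflection like the one described boils down to mirroring the camera across the water plane. A minimal sketch, assuming a horizontal plane at height `water_z` and leaving out the matching orientation flip (the camera's pitch would also need to be negated):

```python
def reflect_camera(cam_pos, water_z):
    """Mirror a camera position (x, y, z) across the plane z = water_z.

    Returns the position the reflection camera should render from.
    """
    x, y, z = cam_pos
    return (x, y, 2.0 * water_z - z)
```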

Yeah, I was not sure, LOL; that's why I put "/polygon". All the methods you stated I have either tried or wouldn't work. I made a better explanation in my question; should've done that before, since a lot of people seem to think this is for performance purposes.

Yes, I am well aware of the method, but it wouldn't work, as it would make the object invisible to all the cameras in the scene…

You could set the invisible point by using lamp data as a driver. Then I think you can choose when to render the texture by using a callback. So set the water level high by controlling the lamp, render to texture, then reset the lamp to render the scene.

Thanks, I got it to work. I used two point lamps: if the lamps are at the exact same position, the object is fully visible, but if they move relative to each other, everything below the predetermined water level becomes invisible. I also wrapped it up into a node group to use for multiple materials.

So, when rendering the texture, I move one of the lamps somewhere else (say, to the camera's location), and after rendering I move it back to the other lamp's position so everything is visible again. Thanks, I completely forgot about the lamp data :P…

EDIT: Okay, so I tried it. It works in that when you move the point light, the object below the level gets invisible. But when I do this in-game with code (i.e. set the point light position somewhere else, render the texture, set the point light position back), it doesn't seem to work, and I can't understand why. Can somebody help me out please?


I tried it out, but the lamp attributes that I can change in Python (color, position etc.) don't seem to work properly. Maybe the nodes don't register the change quickly enough, so when I change the colour from black to white, render the texture, and change back to black, it doesn't really work.

I tried myself to get the lamp to update, but it doesn't work. The pre- and post-draw callbacks don't work either; they have the strange effect of replacing your view with the rendered texture too, so I'd avoid that route.

However, I did get object color to work.

https://www.mediafire.com/convkey/707c/q7x3sj9judiqd5m6g.jpg

Here the player character has its object color switched to red just before rendering, then switched back.

To use object color as a driver in nodes you need two 'Material' nodes.

One set to Shadeless with object color enabled, and the other set to shaded with object color disabled.
Use the first one as a driver. It will give you three channels (RGB) which you can use in different ways to control things in your node setup. I usually split the RGB channels and then recombine each with 0.0 inputs to use them as three different vectors.

To handle a lot of objects in your scene you can do this:



def set_color(objects, color):
    for ob in objects:
        ob.color = color

# Objects you want to be cut off under water: give them a "render" property.
render_objects = [ob for ob in own.scene.objects if ob.get("render")]

set_color(render_objects, [1.0, 0.0, 0.0, 1.0])  # tint before the reflection pass
render_texture.refresh(True)                     # render to texture
set_color(render_objects, [0.0, 0.0, 0.0, 1.0])  # restore afterwards


It’s best if you create the list at the beginning of the game and maintain it, though, rather than getting the objects every tic.
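A sketch of that pattern, with a tiny stand-in class instead of a real KX_GameObject so it runs outside Blender (all names here are my own; in the game you would build the list once in a setup script and store it, e.g. on the scene or the controller owner):

```python
class FakeObject:
    """Minimal stand-in for a KX_GameObject: a color plus game properties."""
    def __init__(self, render=False):
        self.color = [1.0, 1.0, 1.0, 1.0]
        self._props = {"render": True} if render else {}

    def get(self, key, default=None):
        return self._props.get(key, default)


def build_render_list(scene_objects):
    # Build once at game start and keep it, instead of scanning every tic.
    return [ob for ob in scene_objects if ob.get("render")]


def render_with_tint(render_objects, tint, restore, refresh):
    # Tint the cutoff objects, refresh the render-to-texture, then restore.
    for ob in render_objects:
        ob.color = tint
    refresh()
    for ob in render_objects:
        ob.color = restore
```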

Another alternative is to use 'local' or 'view' geometry coordinate data in your node setup for the objects to be culled, since rendering from different cameras will give different data for the rendered scene and your rendered texture:

[image: node setup screenshot]

I’m not sure how you’d use that to your advantage though. Maybe you can think of something.

EDIT:
I got it working!

It seems to work ok with different camera orientations, hopefully you can use it.


Thank you very much, the object color did the trick. I also played around with the view vector and it can also work, but it seems a lot more tedious and involves trial and error. It can definitely be done, so maybe I'll use it later, but for now object color works perfectly well!

Thanks