new script: deleting the non-visible vertices

Hiya! First of all, thanks for your help. I am wondering if there is a script that can delete the non-visible vertices. I need it because I am facing a big problem: I have a mesh with 256,000,000 vertices.
If not:
The Blender community told me that maybe one of you could solve this little problem, because it is “easy” to do in Python. Anyway, do you know if it will optimize the render time?
thanks in advance!
Lisa

such a script is easy to write, but it won’t solve the problems we’ve discussed in the other threads. i suppose you want to import the data because you want to show it in the final render, so it needs to be in front of the camera, right? even if you cut off half of the mesh, it’s still too much for blender.

i don’t like to repeat myself but you need a script which imports the surface vertices only.
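
to give an idea of what i mean, here’s a rough numpy sketch that keeps only the voxels sitting on the surface of a filled region (occupied voxels with at least one empty face neighbour). it assumes the raw data fits in memory as a boolean occupancy grid and has nothing blender-specific in it, so treat it as an illustration only:

```python
# sketch: keep only "surface" voxels, i.e. occupied voxels that have at
# least one empty face neighbour. assumes the dataset fits in memory as a
# boolean occupancy grid (640^3 booleans is roughly 260 MB).
import numpy as np

def surface_voxels(grid):
    """grid: 3D boolean array, True = occupied.
    returns an (N, 3) array of coordinates of surface voxels only."""
    # pad with empty space so voxels on the dataset boundary count as surface
    padded = np.pad(grid, 1, mode='constant', constant_values=False)
    interior = np.ones_like(grid)
    # a voxel is interior only if all six face neighbours are occupied
    for axis in range(3):
        for shift in (-1, 1):
            neighbour = np.roll(padded, shift, axis=axis)[1:-1, 1:-1, 1:-1]
            interior &= neighbour
    return np.argwhere(grid & ~interior)
```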

Two options spring to mind:

  1. As Kai Kostack has said, you could write a program (I’d advise C rather than Python) to get the surface verts rather than all of them. Some kind of marching-cubes mesh-skinning idea might work. Assuming your dataset is a cube, I’m guessing you have something like 640x640x640 resolution? There’s an adaptive marching cubes algorithm presented here:
    http://www.springerlink.com/content/rk523x2q69556q20/
    which has O(n^2) space complexity for an nxnxn dataset. That might be OK; it depends on the constant factor, obviously.

Or, if you need all the verts, or the above turns out to be too complex or takes too long:
2. Split and render in smaller batches. Set where you want the camera, FOV, etc., and section off the dataset so each piece covers only a small angle of the view.
Include only, say, 1/10th of the height and 1/10th of the width of the camera angle, render with a transparent background, then combine the 100 (or more) images at the end.
You can automate rendering by exporting these sections to an external format, then having a script in the blend file which loads each section in turn using the “FrameChanged” script link. This would also let you render on many machines and save a lot of manual work.
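
To make the “FrameChanged” idea concrete, here’s a rough sketch of what such a script link could look like. It assumes the Blender 2.4x Python API, a made-up file naming scheme and a placeholder object name, so treat it as an illustration rather than finished code:

```python
# "FrameChanged" script link sketch (Blender 2.4x Python API assumed).
# Idea: one pre-split section file per frame number, so rendering the
# animation renders every section exactly once.
import Blender

frame = Blender.Get('curframe')
# hypothetical naming scheme for the pre-split section files
path = '/data/sections/section_%04d.xyz' % frame

# read this frame's vertices (plain "x y z" text lines assumed)
coords = []
for line in open(path):
    x, y, z = map(float, line.split())
    coords.append((x, y, z))

# build a fresh mesh and swap it into a placeholder object
# that already exists in the scene
me = Blender.Mesh.New('section_%04d' % frame)
me.verts.extend(coords)
ob = Blender.Object.Get('SectionHolder')
ob.link(me)
Blender.Redraw()
```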

For the segmenting program, you’d have to do something like: load a million verts, check which segment each one lies in and write it to the right file, release the memory, load the next million, and so on.
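
Here’s a rough outline of that segmenting pass in plain Python (run outside Blender). It streams the input line by line rather than in million-vert chunks, but the effect is the same; the 10x10 grid, the dataset bounds and the file names are all placeholders for illustration:

```python
# Segmenting pass sketch: stream the vertex file once and bin every vertex
# into one output file per screen section. Bounds, grid size and file names
# are placeholders - adapt them to the real dataset / camera setup.
GRID = 10                   # 10 x 10 sections, ~100 output files
XMIN, XMAX = 0.0, 640.0     # hypothetical dataset bounds
YMIN, YMAX = 0.0, 640.0

def segment_of(x, y):
    """Map a vertex to a (column, row) cell of the GRID x GRID layout."""
    col = min(int((x - XMIN) / (XMAX - XMIN) * GRID), GRID - 1)
    row = min(int((y - YMIN) / (YMAX - YMIN) * GRID), GRID - 1)
    return col, row

outputs = {}                # one open file handle per section
src = open('vertices.xyz')  # "x y z" text lines assumed
for line in src:
    x, y, z = map(float, line.split())
    key = segment_of(x, y)
    if key not in outputs:
        outputs[key] = open('section_%02d_%02d.xyz' % key, 'w')
    outputs[key].write(line)
src.close()
for f in outputs.values():
    f.close()
```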

Hope this helps.

Yes, it helps. I thought it was not possible in Blender, but it looks promising. Do you know if script (2) (or something similar) has already been written? Can you help me write or find it? I think it would be very interesting for the Blender community. I will use a supercomputer grid (with 4500 nodes), so maybe it will be the best choice, and maybe I could generate the biggest animation ever done in Blender… (a fluid… or, as kaito [tom] said, billions of hairs flying…)
Back from the moon… many thanks for your helpful aid. I will work on (search for) your second solution. (Please help me…)

cheers

Lisa

I don’t know if the second exists, but it shouldn’t be technically difficult. The main hurdle I can think of is how dense your dataset is. Is it fairly uniform, or do you have, say, 200 million verts in a small area of the picture and the rest on the other side? If it’s reasonably uniform, that makes things a lot easier.

Really, what you need to be able to say is that if we look at any small section of the render, it won’t contain more verts than Blender can handle.
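
If you want to check that up front, a quick counting pass over the raw data would tell you. Here’s a sketch along the same lines as the splitting idea; the grid, the bounds and the per-section budget are all made up for illustration:

```python
# Density-check sketch: count vertices per section before committing to a
# split, to see whether any single section would exceed what Blender can
# cope with. Grid size, bounds and the budget below are placeholders.
GRID = 10
XMIN, XMAX = 0.0, 640.0
YMIN, YMAX = 0.0, 640.0
BUDGET = 5000000            # hypothetical per-section vertex budget

counts = {}
src = open('vertices.xyz')  # "x y z" text lines assumed
for line in src:
    x, y, z = map(float, line.split())
    col = min(int((x - XMIN) / (XMAX - XMIN) * GRID), GRID - 1)
    row = min(int((y - YMIN) / (YMAX - YMIN) * GRID), GRID - 1)
    counts[(col, row)] = counts.get((col, row), 0) + 1
src.close()

for key in sorted(counts):
    n = counts[key]
    print('section %s: %d verts%s' % (key, n, ' <-- too dense' if n > BUDGET else ''))
```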

One note though: this will not allow shadowing. Shading, yes, but shadows would not work, as you’d be treating the different sections completely independently.

If this sounds alright, I’d be more than happy to help you with it. I can’t guarantee time towards it, as I’m moving into exam season (uni finals), but as far as I can see, nothing presents a really large problem.

many thanks for your helpful aid.

No worries, I’m addicted to solving problems. Genuinely physically addicted :)