I have some data that I would like to visualise in 3D, and I’d like to ask a few questions relating to Blender.
My data is a set of XYZ coordinates on a grid, like a terrain or height map, but there will be at least 5 million vertices, which means about 10 million triangular faces. I’ve currently exported the data in OBJ and OFF formats and can load and view the files in Blender/Geomview, but only subsets of the data. When I tried importing the full 10 million faces into Blender from an OBJ file, it took a very long time and never finished: memory usage climbed to about 9 GB and then seemed to stop increasing at all.
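In case it matters, I could switch formats when exporting. Here’s roughly the kind of exporter I have in mind — a minimal sketch (the `grid_to_mesh` helper and the column/row-to-XY mapping are my own assumptions about the data layout) that writes binary little-endian PLY, which is much smaller than ASCII OBJ and should be faster to parse:

```python
import struct

def grid_to_mesh(heights):
    """Triangulate a height grid: heights[r][c] is the z value at (x=c, y=r).
    Produces two triangles per grid cell."""
    rows, cols = len(heights), len(heights[0])
    verts = [(float(c), float(r), float(heights[r][c]))
             for r in range(rows) for c in range(cols)]
    faces = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c  # top-left vertex of this cell
            faces.append((i, i + 1, i + cols))
            faces.append((i + 1, i + cols + 1, i + cols))
    return verts, faces

def write_binary_ply(path, verts, faces):
    """Write a triangle mesh as binary little-endian PLY
    (12 bytes per vertex, 13 bytes per face)."""
    header = (
        "ply\n"
        "format binary_little_endian 1.0\n"
        f"element vertex {len(verts)}\n"
        "property float x\n"
        "property float y\n"
        "property float z\n"
        f"element face {len(faces)}\n"
        "property list uchar int vertex_indices\n"
        "end_header\n"
    )
    with open(path, "wb") as f:
        f.write(header.encode("ascii"))
        for x, y, z in verts:
            f.write(struct.pack("<3f", x, y, z))
        for tri in faces:
            f.write(struct.pack("<B3i", 3, *tri))
```

At 25 bytes per vertex-plus-two-faces this keeps a 5-million-vertex mesh to a few hundred MB on disk, versus the much larger ASCII files I get now.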
Viewing the OBJ file in MeshLab works fine, but it seems to ignore my textures.
Is there a more efficient way of storing and importing the files? Would it be better to split the object into multiple smaller objects?
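If splitting is the way to go, this is the kind of tiling I had in mind — a sketch assuming the height map fits in memory as a nested list. Adjacent tiles share one row/column of vertices so the per-tile meshes join without gaps (the `tile_grid` name and `tile_size` parameter are just placeholders):

```python
def tile_grid(heights, tile_size):
    """Split a height grid into tiles of at most tile_size x tile_size
    vertices. Each tile overlaps its neighbours by one row/column of
    vertices, so separately meshed tiles line up along their edges."""
    rows, cols = len(heights), len(heights[0])
    step = tile_size - 1  # advance by tile_size - 1 to share an edge
    tiles = []
    for r0 in range(0, rows - 1, step):
        for c0 in range(0, cols - 1, step):
            r1 = min(r0 + tile_size, rows)
            c1 = min(c0 + tile_size, cols)
            tiles.append([row[c0:c1] for row in heights[r0:r1]])
    return tiles
```

Each tile could then be exported and imported as its own object, which I assume would also let me hide or unload tiles the camera can’t see.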
If I import the model once and save it as a .blend file, will that file still take a long time to open?
My eventual goal is to animate a camera moving over the data for display. I have access to a lot of computing power and a large amount of memory, so network rendering is possible (I discovered via searching this forum that Blender 2.5 has a network rendering option, which is excellent). If I go that route, will each machine on the network also have difficulty loading the input file?