Greetings partners!
As you may have seen in the first post, we managed to create a spherical map from various planar maps! Although the planet generator is still far from done (I've been very busy at work, and sometimes too lazy to work after work), I'm tackling a different aspect of the same system.
In this part, I'm designing a way to copy vertex positions from one object to another.
Here we have an object A, using the planet generator to get a height map. Object B is a piece of a sphere with a completely different vertex array. The vertices on this object will "scan" object A's vertices and copy their values multiplied by 2 (since A has a smaller scale). The idea is to send rays/vectors from each vertex and get a value from the ray's (or vector's) length, to set a Z value.
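To make the idea concrete, here's a minimal sketch of that per-vertex raycast, assuming the source object has faces for the rays to hit (a pure vertex cloud won't register hits) and ignoring the target's rotation/scale for simplicity; all names are placeholders:

import bge

def scan_heights(own, source_center, scale=2.0):
    # cast a ray from each vertex of the target object toward the
    # source's center and use the hit distance to set the vertex Z
    mesh = own.meshes[0]
    for i in range(mesh.getVertexArrayLength(0)):
        vert = mesh.getVertex(0, i)
        start = own.worldPosition + vert.getXYZ()
        hit_obj, hit_pos, hit_normal = own.rayCast(source_center, start, 0.0)
        if hit_obj is not None:
            vert.z = (hit_pos - start).length * scale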
Right now I can see that this will be very logic-intensive, so I'll need help getting it to run as fast as possible.
Also, I'm not able to make vectors out of a vertex, and I don't even see how to get the data from a vector. As in the previous thread, I come with a goal and a few clues, and hope things will get clearer as we go!
I’ll provide some working material later.
I appreciate your help, suggestions, advice and criticism.
You might want to draw a sketch of what exactly you want to achieve. It might contain one or more examples.
With that it might be much easier to find solutions that fit your needs ;).
The whole thing sounds like a geometric problem.
Rays through each single vertex of a mesh sound really expensive.
Monster
Thank you Monster Sensei! I was planning to add the working material once I'm at home!
The aim of this setup is a LOD system I wanted to achieve using GLSL vertex displacement last year!
Now with the vertex manipulation it’s possible to have physics on the objects…
So this is how it works:
The source object is a highly detailed version of a planet; it has no faces, just vertices. The heightmap is procedurally generated via the under-development planet generator. It won't be rendered.
The target object is a lower-density hemisphere with more detail at the pole. This one will be rendered, but it will also track the player's position, adding more detail to the place where the player is.
The copy script will provide the vertex colors, and the textures will be set according to them, so the effect will look almost fluid (except for the textures, which will keep their global or local position as the hemisphere spins). The way I see to accomplish this is by using vectors or rays to get the closest vertex on the source object. The distance between the source and target vertices must be the same!
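As a side note on the vertex colors: a KX_VertexProxy in the BGE exposes r/g/b/a channels, so the copy script could write them roughly like this (a sketch, with height assumed to already be a 0..1 value):

def set_height_color(vert, height):
    # write a grayscale vertex color from a normalized height value
    h = min(max(height, 0.0), 1.0)
    vert.r = vert.g = vert.b = h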
I saw that vertex-only objects don't take any scenegraph or rendering/physics resources, even if they are above 2 million vertices, so the rendering can be good!
This works like a displace modifier, using an object as reference for the mapping. The denser part of the hemisphere will track the player's position, but the global map will remain. So the hemisphere will behave like a cloth sliding over an invisible terrain.
In short, the hemisphere will copy the topology of the side of the sphere it's (reverse) facing. In the 3rd illustration, you can see the dotted mesh (source) and the shaded one on top (target) with the same topology.
In my understanding, I need to project a vector from every vertex towards the center of the source object and get the distance to its nearest vertex. Then the target object's vertices will be set at an equal distance from the source's surface!
I’m trying to avoid complex calculations since the script will loop through all vertices…
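For reference, the naive lookup would look something like the sketch below, where source_verts is an assumed list of (world position, vertex) pairs; it's this scan over all source vertices, repeated for every target vertex, that makes the cost a concern:

def nearest_source_distance(target_pos, source_verts):
    # brute force: O(n) per target vertex, O(n*m) overall
    best = None
    for pos, vert in source_verts:
        d = (pos - target_pos).length
        if best is None or d < best:
            best = d
    return best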
So what you’re doing is this?:
Having a section of a sphere which copies the detailed topology of the vertices below, in effect wrapping itself around so that it looks like the terrain? It tracks the player and looks like the planet, but only represents the segment that the player can see?
This will be very costly. No matter how you do it, though, your method ought to work.
As long as:
the ship's speed is low
[OR] the ship is a large distance from the center point of the planet
this will be fine.
Thank you very much!
@ Agoose77:
Yes, I think it might be very costly, and it's not for fly-throughs but mostly for walking purposes.
@ Solarlune:
If that happens, that would be troublesome. I might have to use a lower density mesh!
@ Monster:
Ideasman42's x-tree setup is a rigid one; I'm working on a dynamic system where the planet topology is not pre-made, it's semi-procedurally generated. The mesh is not split because the patches are inter-dependent (although other ways could be used to make it work as split patches). I'll check the sorting algorithms soon!
-----You can skip this part------
Another way to make it a bit faster might be to use other objects as reference.
In Part 1, the planet has 24 materials applied, meaning that the planet has 24 terrain patches. So by changing the vertex index order dynamically, and at the same time reading from the map as the hemisphere rotates (which might be as costly as wrapping around another mesh), and using 24 objects to represent the center of each terrain patch… hmm, maybe not!
----- End of rant------------------
But before we know if it's costly or not, we should get it to work first. I recall that the first system was labeled costly as well (although it actually has to run only once). This one will run a few more times, but with considerably fewer vertices to loop through. In my opinion, the cost might be about as much as the wave system by SolarLune!
In the long run I think that the smaller patches idea will be cheaper/better!
I can say from experience that setting vertices and re-instancing physics on a 32*32 chunk is fine for realtime, but I'm not doing a lot of math during that process.
In general, anything you can precompute and store in memory will be much better than doing it during runtime. I suggest your planet maps should store both low detail and high detail vertex information, which you can mix at runtime (I think your method is to use vertex color to mix data sets?)
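A minimal sketch of that mixing, assuming the two detail levels are stored as per-vertex height lists and the blend factor comes from a vertex color channel (all names hypothetical):

def mixed_height(low, high, blend, i):
    # linear blend between the precomputed data sets,
    # with blend[i] in 0.0..1.0 (e.g. read from a vertex color)
    return low[i] * (1.0 - blend[i]) + high[i] * blend[i]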
As Monster says, it will be an O(n^2) solution to find the height for each vertex on the close-up chunks, and I don't know where the extra detail would come from, because you are checking against the lower-detail height map.
I know this means that your planet map files will be bigger, but you will be able to load/remove these from memory (say when the player enters a new system) and I think a small wait time for the player at those times is better than poor performance when exploring a planet.
The maps are precomputed! I just don’t want to have more than 1 mesh rendered.
The detailed mesh (vertex cloud) is used to get the detail, so the rendered mesh should be fine as far as detail goes. Although Monster Sensei mentioned that the cloud is deleted at runtime, which implies this whole process is impossible!
My concern with using patches is the skirts! I could add patches as the player approaches a side of the planet; they would snap, and the change would be noticeable, but it can also be masked! Whereas my method should give a smooth LOD transition, almost perfect!
If there’s a remote chance to get it to work, I’m willing to try!
You could manipulate multiple meshes at the same time, so you can modify connecting borders to fit. You only need to worry about smooth shading.
There is a solution for that as well, see .
Just continue the mesh with invisible faces. The faces should match the border faces of the other mesh, to get the right shading.
Thank you, Monster Sensei. That's what we call the skirt. It works fine for continuous meshes, but in my case the meshes are completely different from each other! Actually, I can try to make them all modular so that this method can successfully be used!
After thinking about how the detailed mesh object will be accessed, I see that another loop would be required. When I run the planet generator every 100 frames, I get 0.125 fps, which is very high-speed death! The generator only has to read from a source file, so I imagine that if the LOD patch had to read from a mesh and then calculate the vertex positions… it could be even more expensive! So for now I give up on this idea, or at least this approach.
I'll then move a few planes around and change the map data depending on their position. I might use empties or replaceMesh to get it to work! Hmm, I'll run some tests then!
OK, I come back to this problem again.
Here’s the idea:
Create a map with an object as reference. A high-density planet is modelled, then the vertices' z positions and normals are saved in local ref_object coordinates. The coordinates are used as keys for the dictionary.
Split the object into activity spaces: cubes of n*n*n units, so that only the objects in that space are processed.
Use a smaller mesh to copy the map's vertex positions inside the activity space, not by index but per vertex position.
This might solve the performance problems…
OK, I’ll take pen and paper to plan this better!
Edit:
Preliminary study done:
The ref object space will be vertical, ignoring the z positions. This will create columns that will play the same role as the vectors/rays I mentioned earlier. The object will set the vertex height in one of the map files for any terrain patch that is inside the ref_object space.
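A sketch of those columns, assuming source_verts is a list of (world position, vertex) pairs and a one-unit cell size (both assumptions):

def build_columns(source_verts, cell_size=1.0):
    # bucket source vertices into vertical columns keyed by integer
    # (x, y) cells, ignoring z, so a lookup only scans one column
    columns = {}
    for pos, vert in source_verts:
        key = (int(pos[0] // cell_size), int(pos[1] // cell_size))
        columns.setdefault(key, []).append((pos, vert))
    return columns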
Well, finally hit the wall,
I did not go too far!
This code seems to work, but then I get all of the object's vertices. I need just the ones in the ref_object's space.
import bge

cont = bge.logic.getCurrentController()
own = cont.owner
sce = bge.logic.getCurrentScene()
obj = sce.objects
ref_ob = obj["ref_ob"]
planet = obj["planet"]
mesh = planet.meshes[0]
map1 = []
vlen = mesh.getVertexArrayLength(0)

def map_space():
    for cell in range(-10, 10):
        # integer cell key offset from the reference object's position
        space = (int(ref_ob.position[0] + cell), int(ref_ob.position[1] + cell))
        for i in range(vlen):
            vertPos = mesh.getVertex(0, i).getXYZ()
            vertPos1 = vertPos + ref_ob.position  # computed but not yet compared against 'space'
            map1.append([space, float(vertPos[2])])
I wanted to add a condition like if space == vertPos: map1.append, etc… etc., to prevent it from appending all of the mesh's vertices. But space is made of integer keys, and vertPos returns floats…
Am I once again biting off more than I can chew?
This line forces me to loop through all vertices in the object; after that, I append them with the keys I get from space…
What I want is to determine the vertices in the space first, then append them with the keys.
Thanks SolarLune, but with i being the vertex index, that didn't go very well. Besides, you tell me to convert into float, but the code refers to int; I really don't see what this line tries to do…
To refer to any vertex, I think I need its index first. That forces me to use the vlen. I'll try a different approach. Fewer vertices!
Thank you SolarLune and Agoose77 for your replies. Can you tell me if what I'm trying might actually work?
No, you had the right idea. You want vertPos to return integers so you can check it against the space variable - that's what I was aiming for in my post above. You set vertPos1 to be vertPos + the reference object's position, and then you round that value off into an integer list to check against the space variable. It didn't work, though, huh?
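A minimal sketch of that rounding check, reusing the names from the earlier snippet:

def vertex_in_space(vertPos, ref_ob, space):
    # round the offset position to integers so it can be
    # compared against the integer 'space' key
    vertPos1 = vertPos + ref_ob.position
    return (int(round(vertPos1[0])), int(round(vertPos1[1]))) == space

Then inside the loop, the append becomes conditional: if vertex_in_space(vertPos, ref_ob, space): map1.append([space, float(vertPos[2])]).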
Here's what I would do:
You have no alternative for finding vertices within a space without first looping through them to get their coordinates.
So, my first step would be as follows:
import bge

basemesh = bge.logic.getCurrentScene().objects['']
mesh = basemesh.meshes[0]
ori = basemesh.worldOrientation
pos = basemesh.worldPosition
# world coords = orientation * local coords + world position
vertices = [[ori * mesh.getVertex(0, v).getXYZ() + pos, mesh.getVertex(0, v)]
            for v in range(mesh.getVertexArrayLength(0))]
That will give you a list of [vertex global coords,vertex] for each vertex in the mesh.
I would run this once, as an init, and save it to a globaldict key or the object itself.
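For example (the key and property names are just placeholders):

bge.logic.globalDict['planet_verts'] = vertices  # global storage
basemesh['planet_verts'] = vertices              # or on the object itself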
Now, you’ll need to find the vertices in the space. Here you could use ‘point in poly’.
This sounds like quite a heavy process, but I can't think of an alternative.
I would first calculate the sorted vertices of the basemesh (so you get the furthest x and y points, presuming you're using a correctly aligned cube), then pass that on. I'll try an example later.
near_verts = [v[1] for v in vertices if pointInPoly(v[0], basemesh)]
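For completeness, one possible pointInPoly implementation (an even-odd-rule sketch; here poly is assumed to be a list of (x, y) corner tuples taken from the basemesh footprint, rather than the object itself as in the line above):

def pointInPoly(point, poly):
    # even-odd rule: count crossings of a horizontal ray from the point
    x, y = point[0], point[1]
    inside = False
    j = len(poly) - 1
    for i in range(len(poly)):
        xi, yi = poly[i]
        xj, yj = poly[j]
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside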