Ideas on randomly generated terrain for RTS?

I’m currently brainstorming some ideas for a terrain generation setup for a tactical Real Time Strategy game.
Here’s an illustration of what I’m working with:

I’m using mathutils.noise to get a basic area pattern (specifically noise.noise with the VORONOI_CRACKLE noise type).

This gives me a data array, which I then flood fill to mark out separate areas. I can then use real-time texture painting to paint the game map, using texture masking and nodes. (Thanks to agoose77, 2d23d and cuervo1003 for the code they wrote for real-time texture painting.)
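Roughly, the flood fill stage works like this (a simplified sketch over a pre-sampled grid of quantized noise values, not the exact code from the blend):

from collections import deque

def flood_fill_regions(grid):
    # grid is a 2D list of ints (quantized noise values); touching cells
    # with the same value get the same region label.
    h, w = len(grid), len(grid[0])
    labels = [[-1] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx] != -1:
                continue
            value = grid[sy][sx]
            labels[sy][sx] = next_label
            queue = deque([(sx, sy)])
            while queue:
                x, y = queue.popleft()
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if 0 <= nx < w and 0 <= ny < h \
                            and labels[ny][nx] == -1 \
                            and grid[ny][nx] == value:
                        labels[ny][nx] = next_label
                        queue.append((nx, ny))
            next_label += 1
    return labels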
In future, each area will use a different filling routine (forest, cropland, lake, urban, etc.). This should create a map with a feel similar to real rural areas:

Though I will use an algorithm to break them up slightly, as you can see with the forest areas above, where they overlap some regions and leave others empty.

Right now I can extract some data for in-game use by analyzing the basic array:
> The different colored polygonal areas show the different regions.
> The GREY lines are roads. These are removed during the flood fill and stored separately. I can get information about them, for example which direction a road is running, by analyzing the surrounding pixels.
> The GREEN CIRCULAR areas are crossroads. These are logical places to put enemy encampments or other strategic objectives. They are generated by analyzing where three or more different regions meet (a sketch of that check follows below).
> The RED CIRCULAR areas are map entry points. This is where the player-owned units start out, and where any reinforcements for either side spawn. These won’t generate close to the strategic objectives, to limit the chance of battle breaking out as soon as the game loads.

> If a map is generated with no entry areas or no objective areas, it is discarded and generated again.
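For reference, the crossroads check is basically this kind of thing (a rough sketch, not the exact code; the radius value is just an example):

def is_junction(labels, x, y, radius=2):
    # A pixel is a junction candidate if three or more different region
    # labels appear within a small neighbourhood around it.
    h, w = len(labels), len(labels[0])
    seen = set()
    for ny in range(max(0, y - radius), min(h, y + radius + 1)):
        for nx in range(max(0, x - radius), min(w, x + radius + 1)):
            seen.add(labels[ny][nx])
    return len(seen) >= 3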

My problems are:

  1. EDIT: Solved, see second post.

  2. Does anyone know what the other noise methods do? What does noise.cell give me? It appears to just be pixelated random noise. How about the methods that return a vector? Why would I need a vector (especially as it seems to be 2D: one axis always returned 0.0 when I tried it)?

OK, I worked out how to get the noise.voronoi data directly after reading the Wikipedia article again.
It returns two lists: a sorted list of distances to the four nearest points, and a list of those four points.
Using the first point as a home region makes it easy to color the areas, and using the first distance as the distance to the home region makes it easy to compare against the others to get roads and other features.
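Something along these lines (a minimal sketch; the exact form of the distance_metric argument depends on the Blender version, so check the mathutils.noise docs for yours):

from mathutils import noise, Vector

def sample_cell(x, y, scale=8.0):
    # Sample the Voronoi data at a map coordinate.
    pos = Vector((x / scale, y / scale, 0.0))
    # A distance_metric (MANHATTAN, MINKOVSKY, ...) can also be passed here;
    # the keyword/constant form varies between Blender versions.
    distances, points = noise.voronoi(pos)
    home = points[0]                    # nearest feature point = home region
    edge = distances[1] - distances[0]  # near zero means close to a border/road
    return home, edge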

The different distance_metrics give some pretty interesting layouts:

https://www.mediafire.com/convkey/5148/kf010boyn5ii3r56g.jpg
This one is Manhattan distance. Good for generating Urban maps.

The next one is MINKOVSKY; it gives nice curves for a rural map.
But I’ve still got issues; here’s an example:

https://www.mediafire.com/convkey/6b78/30cvallfqlf960m6g.jpg

A: I’m getting the roads by comparing the first and second distances returned (the distances are already sorted), which tells me when at least one other node is nearly equidistant to the home node. But this causes distortion near the corners of the cells. Any ideas on how to minimize this and get roads of uniform thickness?
B: To map out the intersections I’m checking whether three nodes are equidistant. This creates a problem when two of those nodes are too close together. Any ideas?
C: I want to find the center of each cell, but a check like distance < 0.2 works in some cases and not in others, like D: where is the center node that should be here? Are the distances absolute or relative? If they’re relative, how can I find a good way to block out the center of a cell for feature placement?
E: Can I avoid these problems by rejecting maps where two points are too close together? That might rule out a lot of potentially interesting maps…

BTW:
Here’s the blend I’m working with, hit space to reload:
terrain_paint_class.blend (109 KB)

I would generate the map the way GIMP does its mosaic.



Handling them as polygons seems simpler, and you would probably have more familiar tools to use in pre-production or in game.

Thanks, but I’ve tried using geometry in the past; it works OK but has some real limitations. However, I’m having some good results with the texture-paint method, which is only really a drain on resources when it is drawn on the first frame. After that it’s no more of a drain than a regular texture.

Texture paint opens the door to a lot of old industry-standard techniques which I’ve read about but been unable to reproduce in Blender, for example a multi-layered tileset.
Normally in Blender you can use a lot of different objects with varying degrees of alpha to place tiles that overlap, but when you have a map of several hundred tiles and several layers you end up with thousands of objects and big problems with render time (the alpha sorting is a big deal on its own), physics and scene management.

But imagine using a single mesh with a single texture made up of multiple layers of brush strokes. No alpha at all at runtime; it’s all handled during the initial paint. I’ve done some tests already and it compares very favorably.
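The idea, in sketch form (assuming the map texture and brushes are plain flat RGBA byte arrays such as bytearrays; this isn’t the actual texture-paint code):

def stamp_brush(texture, tex_w, brush, b_w, b_h, px, py):
    # Composite the brush onto the texture once, at paint time, so the
    # final texture needs no alpha at all. Assumes the brush fits inside
    # the texture at (px, py).
    for y in range(b_h):
        for x in range(b_w):
            b = (y * b_w + x) * 4
            t = ((py + y) * tex_w + (px + x)) * 4
            alpha = brush[b + 3] / 255.0
            for c in range(3):
                texture[t + c] = int(brush[b + c] * alpha +
                                     texture[t + c] * (1.0 - alpha))
            texture[t + 3] = 255  # the painted texture stays fully opaque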

Watch this space for some experiments coming soon… if I can improve this generation method and work out the bugs.

Read this: http://www-cs-students.stanford.edu/~amitp/game-programming/polygon-map-generation/

What about doing the high-cost method (alpha tiles), then somehow snapshotting the map and placing it on your map/topography?

(use what you have to make what you need?)

That Manhattan distance looks like it could generate some really nice greebles…

Actually, this project was partly inspired by that article. :slight_smile: I read it a long time ago and filed it away in my brain. I wish I were better at math and able to write everything from scratch like that. I don’t want to go into quite the level of detail he does, though, since one important requirement is that the script shouldn’t take too long to execute.

@BPR That’s an interesting idea. I’ll have to try it…

Model View Controller

It sounds like you are currently generating the map INSIDE the texture, and then trying to extract data from the texture to use for AI and display purposes.

Consider

  1. Generate the map using an internal representation of some kind (e.g. polygonal: not a game mesh, but your own Python-based polygon representation).
  2. Generate a texture map from the internal representation.
  3. Generate an AI pathfinding map from the internal representation.

That’s how I did caveX16. The ‘map’ is actually a list of lists of nodes that contain information about their terrain. This is put into a texture to control the viewing (so you can’t see the floor behind walls, can see where metal is, etc.), and it is also used for pathfinding.
In your case a list of lists is not a good way to store it, but if you use a polygonal representation you could group them by face, or some other system.
But the key thing is:

  • Separate the generation of your map from drawing the texture.
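Very roughly, something like this (hypothetical names, not my caveX16 code):

class Region:
    def __init__(self, center, terrain):
        self.center = center      # (x, y) in map space
        self.terrain = terrain    # e.g. 'forest', 'cropland', 'lake'
        self.neighbours = []      # adjacent Region objects

def draw_texture(regions, paint_region):
    # View: paint each region with whatever routine you like.
    for region in regions:
        paint_region(region)

def build_nav_graph(regions):
    # AI: an adjacency graph for pathfinding, built from the same model.
    return {region: list(region.neighbours) for region in regions}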

Yeah, that’s what I did in the first post; then I switched to getting the noise data directly. The mathutils function places points randomly and then lets you get, at any position, the nearest nodes and the distances to them. I still need to put them in a texture to see what’s happening, though.

If I placed my own points and generated distance data from them, I guess I could skip the mathutils method, which would be good because I’d be able to do things like relax the points to avoid some of the problems above. I’d also be able to stick to 2D noise, which would help. Sigh, I’m not really a maths guy though. :frowning: The Minkowski distance calculations in particular make my head hurt. Plus, calculating all that in Python would be slower than the mathutils function.
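If I go that way it would probably look something like this (just a sketch with Euclidean distance and a crude push-apart pass, not proper Lloyd relaxation):

import math
import random

def make_points(count):
    return [(random.random(), random.random()) for _ in range(count)]

def relax_points(points, min_gap=0.1, passes=10):
    # Nudge apart any pair of points closer than min_gap.
    for _ in range(passes):
        for i in range(len(points)):
            for j in range(len(points)):
                if i == j:
                    continue
                ax, ay = points[i]
                bx, by = points[j]
                d = math.hypot(ax - bx, ay - by)
                if 0 < d < min_gap:
                    push = (min_gap - d) * 0.5
                    points[i] = (ax + (ax - bx) / d * push,
                                 ay + (ay - by) / d * push)
    return points

def two_nearest(points, x, y):
    # Compare these two distances for the cell/road test, like the
    # mathutils.noise.voronoi version earlier in the thread.
    dists = sorted(math.hypot(x - px, y - py) for px, py in points)
    return dists[0], dists[1]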

For pathfinding I’m going to do something else; the map is mostly empty, so a sparse array of points will serve.

Oh man! Love the idea. I always wanted to do that in Blender but never found the time to do it and learn everything I’d need to.

Here are some resources that I found some time ago:

http://www-cs-students.stanford.edu/~amitp/game-programming/polygon-map-generation/

http://www.cartania.com/alexander/generation.html

Best of luck with the project. I always love your games, and I will try to help as much as possible :wink:

@lluc84 thanks!

Here’s a study of using brushes to paint in the correct areas. These are just greyscale gradients (yes, it looks like mipmapped pixels, but it’s not!): 64x64 brushes placed in the correct places. The texture is 4096x4096 with 4 channels, and it doesn’t place any drain on the rasterizer at all.
In the standalone player it really is 0.0 ms on the rasterizer!

https://www.mediafire.com/convkey/a62c/iv5izqtwupn4hel6g.jpg

I’m going to have to look at using other brushes, maybe more types of terrain. They need to be 64x64 and tileable, and I haven’t found a way to crop a section from a larger brush yet… any ideas?

I’m loading brushes like this:

import bge

def load_brush(self, image_name, image_size):
    # Remember the brush size so the painter knows how many pixels to stamp.
    self.brush_size = image_size
    # Brushes are stored as PNGs in a 'brushes' folder next to the blend file.
    file = bge.logic.expandPath('//brushes/{}.png'.format(image_name))
    image = bge.texture.ImageFFmpeg(file)
    image.refresh()

    return image

From there I guess I can mix two or more brushes together, maybe using a gradient for the alpha channel and a different brush for the color detail. I suppose I could load a larger image (like a 4096 tiled base) and then crop/sample out some pixels to create a custom brush…
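For the cropping part, something like this might do (a sketch that assumes the source image has already been converted to a flat RGBA byte sequence, e.g. via bge.texture.imageToArray):

def crop_brush(source, src_w, left, top, size=64):
    # Copy a size x size block of RGBA pixels out of the flat source array.
    brush = bytearray()
    for y in range(top, top + size):
        row_start = (y * src_w + left) * 4
        brush.extend(source[row_start:row_start + size * 4])
    return brush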

I’m starting to think I should move this over to the WIP forum though.

I got most of my answers already.
So I’m going to mark it as solved and move over to the WIP forum.
Thanks!

Just in case someone is looking for more references on this: http://mewo2.com/notes/terrain/

That’s a really cool resource. Added to my procedural generation bookmarks, thanks!