Evolutionary Design, anybody?

Here you are one level above me. I’ve never heard of codons, atm or the use of redundancy. Would you care to clarify? I’ll bring up my notebook.

The only advice I can think of is to have only one crossover point. Because the genome is linear, one cut would probably be best.

I do believe that two-point crossover is pretty much the standard for LGPs, but I will certainly test one-point crossover too.


One thing I am a bit worried about is the lack of introns (instructions that don’t affect the solution) in my encoding scheme. When evolving equations they are very important, since they allow parts that are currently useless or harmful to survive in the genotype until they might become useful.

Maybe I should have some kind of buffer for commands. Then there could be one command for emptying the buffer and one for executing it. All instructions between an execute instruction and a clear instruction would then become introns. If the clear instruction is later removed by mutation or crossover, those instructions might get executed.

I just happened upon a shader creator for RenderMan called GenShade. Its homepage is hard to find, but there is some info on http://www.deathfall.com/article.php?sid=357.

I might consider this one more time…

I’ve just read this thread out of interest (while I wait for a render to finish at work…). I think I could really use an evolutionary design approach for the astronomy stuff I’m doing in Blender at the moment. I came up with a very quick and dirty (and not very good) approach for creating nebulae based on real images, but they don’t go anywhere near creating the actual structure of a nebula. They’re really “2.5D”, just a 2D image created out of particles with just enough depth to fly past them with a camera.

The way astrophysicists approximate nebula structure is to study a photo and work out a very basic arrangement of sheets or shells of gas that could possibly give that image from a planar projection of the 3D structure. (I hope that makes sense). For instance, some nebulae are obviously cylindrical, others have lobes, others are kind of like bubbles of gas:

http://www.edwinhubble.com/space_pics/planetary_nebula_IC_418_350.jpg

What I’d love to be able to do is give Blender a photo, a mock-up of what I think the structure is, and have it generate particle objects (or point clouds, more likely, because they have to be static) with colours sampled from the photo. I could sit and tell it which solution is closest to what I want.

Well I don’t know, perhaps that’s too much to do all at once? But it would be simpler than full volumetrics!

Always interested in astronomy stuff (I’ve studied physics for 4 years).
What you pretty much want first is a way to fill a 3D object with vertices, right? That would not be too hard to do with a bounding box, some random particles, and some ray intersection tests to see whether the particles are inside the object or not. It sounds like you have already solved this part in some way, though…
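Something like this rough Python sketch, where `ray_hits(origin, direction)` is a hypothetical helper that returns how many faces a ray crosses (an odd count means the point is inside a closed mesh):

```python
import random

# Rejection sampling: throw random points into the bounding box and keep
# the ones that land inside the mesh.
def fill_with_points(bbox_min, bbox_max, ray_hits, n=1000):
    points = []
    while len(points) < n:
        p = tuple(random.uniform(lo, hi) for lo, hi in zip(bbox_min, bbox_max))
        if ray_hits(p, (1.0, 0.0, 0.0)) % 2 == 1:  # odd crossings = inside
            points.append(p)
    return points
```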

What you want after that is a texture placed on these vertices that will make it look like a nebula. You pretty much want to take a 2D texture (the image), map it into 3D space, and have it still look the same from the camera’s perspective. Right now I have a hard time seeing how to do this. Maybe, like you suggested, different textures are tried and you select the ones you like. I will think more on this, but it seems like it could be done with a simpler solution. Is it really that hard to find a fitting texture yourself?

Here you are one level above me. I’ve never heard of codons, atm or the use of redundancy. Would you care to clarify? I’ll bring up my notebook.

I just used atm as “at the moment” :). Codons are in your DNA. Basically your DNA is made from four bases: A, T, G and C. A codon is three of these: AAT, CGT or whatever. That gives you 64 different codes, but they only code for 20 or so amino acids. So what happens is that certain codons are treated the same, which means you can change the DNA without changing the result (helps with stability).
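A toy Python sketch of that redundancy (the codon table here is invented for illustration; it is not the real genetic code):

```python
# 64 three-letter codes collapse onto 16 made-up 'amino acid' symbols,
# so some single-letter mutations change nothing.
BASES = 'ATGC'
AMINO = 'KNRSTIMQHPLEDAVG'

def decode(codon):
    i = (BASES.index(codon[0]) * 16 +
         BASES.index(codon[1]) * 4 +
         BASES.index(codon[2]))
    return AMINO[i % len(AMINO)]   # several codons share each symbol

print(decode('AAT'), decode('TAT'))  # a first-base mutation is neutral here
```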

One thing I am a bit worried about is the lack of introns (instructions that don’t affect the solution) in my encoding scheme. When evolving equations they are very important, since they allow parts that are currently useless or harmful to survive in the genotype until they might become useful.

This would be possible if you leave LGPs and use an acyclic graph system. It is probably possible with LGPs too, but I’ve never done it myself. If you use a graph system like Blender’s nodes, you can have nodes that aren’t connected to anything. The code itself is fairly easy. In fact, cyclic graphs are easier, but they wouldn’t be good for your system.

What I’d love to be able to do is give Blender a photo, a mock-up of what I think the structure is, and have it generate particle objects (or point clouds, more likely, because they have to be static) with colours sampled from the photo. I could sit and tell it which solution is closest to what I want.

Hmm, sounds interesting. I’ve got a couple of ideas, but I’ll write them down later after I’ve been to the pub.

Introns are an intrinsic part of an LGP for equations. There you have two sets of registers: one set for variables and one for constants. The set of constants is set up before the LGP starts evolving and never changes.
The variable set can be initialized with the input data.
For example, if you want to evolve y = f(x), you initialize one of the variable registers with x. Then at the end of the program you output y from one of the registers. So the instructions can look like this (xn being a register):
x1=x1*x2
x2=x4+x5
x2=x6
x1=x1/x2
If we load the input into x1 and also take the output from x1, the second instruction is an intron, since it in no way affects the solution. You really don’t have to do anything to set that up.
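A minimal Python sketch of that register machine (the constant register values are made up; removing the intron leaves the output unchanged):

```python
import operator

OPS = {'+': operator.add, '*': operator.mul,
       '/': lambda a, b: a / b if b else 1.0,  # protected division
       '=': lambda a, b: b}                    # plain copy

def run(program, x):
    r = [0.0, x, 2.0, 3.0, 4.0, 5.0, 6.0]      # r[n] is register xn
    for dst, op, a, b in program:
        r[dst] = OPS[op](r[a], r[b])
    return r[1]                                 # output is read from x1

prog = [(1, '*', 1, 2),   # x1 = x1 * x2
        (2, '+', 4, 5),   # x2 = x4 + x5  <- the intron
        (2, '=', 2, 6),   # x2 = x6
        (1, '/', 1, 2)]   # x1 = x1 / x2
print(run(prog, 6.0))                  # full program: 2.0
print(run(prog[:1] + prog[2:], 6.0))   # intron removed: still 2.0
```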

My plan would then be to have code looking like this:
Instruction 1
Instruction 2
Execute Buffer
Instruction 3
Instruction 4
Clear Buffer
Instruction 5
Instruction 6
Execute Buffer

The instructions would be loaded into a FIFO buffer, and because of this, instructions 3 and 4 would become introns. Quite simple, actually. Whether it works is another question; I just came up with this myself.
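In Python the idea could look like this toy sketch (instruction names are placeholders):

```python
# Instructions queue up in a FIFO buffer and only take effect when an
# EXECUTE arrives; a CLEAR discards the queue, so everything since the
# last EXECUTE becomes an intron.
def run(program, apply_instruction):
    buffer = []
    for instr in program:
        if instr == 'EXECUTE':
            for queued in buffer:
                apply_instruction(queued)
            buffer = []
        elif instr == 'CLEAR':
            buffer = []          # queued instructions never run
        else:
            buffer.append(instr)

program = ['i1', 'i2', 'EXECUTE', 'i3', 'i4', 'CLEAR', 'i5', 'i6', 'EXECUTE']
run(program, print)  # prints i1, i2, i5, i6; i3 and i4 are introns
```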

I didn’t really understand your suggestion. Well, kinda… You are talking about cyclic and acyclic graphs (so far, so good). But mustn’t any GP be an acyclic graph to ever finish? Well, I guess if you are evolving a robot controller it doesn’t have to terminate, but is that what you mean?
The problem with a general graph, though, is that the order is not defined. And the order of instructions is of huge importance in modelling. In equations this is solved by setting it up as a directed tree where each node needs to be calculated before its parent. But there it doesn’t matter in what order two nodes with the same parent are calculated. In modelling that order does matter, so each node would be limited to one child.
That would leave us with pretty much an LGP. If we let certain nodes be unconnected, these would indeed be introns. I guess an operation could insert a chain of unconnected nodes at a certain point in another chain, but I fail to see the advantage over the LGP solution.

Have I understood you correctly?

But mustn’t any GP be an acyclic graph to ever finish?

Unless it is stable (rare), that’s right.

The problem with a general graph, though, is that the order is not defined. And the order of instructions is of huge importance in modelling. In equations this is solved by setting it up as a directed tree where each node needs to be calculated before its parent. But there it doesn’t matter in what order two nodes with the same parent are calculated. In modelling that order does matter, so each node would be limited to one child.

You can have more than one child and parent. You can also have any order; that’s the point of acyclic graphs.

And the order of instructions is of huge importance in modelling. In equations this is solved by setting it up as a directed tree where each node needs to be calculated before its parent.

Yeah, that’s right. The order is very important in your case, but in an acyclic graph it largely doesn’t matter. It allows a small chance of a large change (which is good).

That would leave us with pretty much an LGP. If we let certain nodes be unconnected, these would indeed be introns. I guess an operation could insert a chain of unconnected nodes at a certain point in another chain, but I fail to see the advantage over the LGP solution.

I think my problem is that I don’t understand the LGP idea. An acyclic graph could easily allow for an LGP but would also allow more kinds of changes.

I think the main problem here is semantics.

Have I understood you correctly?

As far as I understand your posts, yeah, pretty much. Maybe I’m using the wrong terms; I haven’t been taught them. I’ve coded most of these things but never described them.

(sorry, am rather drunk at the moment, will post sensible answers tomorrow)
Ian

Eh, what now? You lost me there. Might this be why?:

(sorry, am rather drunk at the moment, will post sensible answers tomorrow)
Ian

I’ll follow your lead with the drunkenness a bit later, but right now I’ve got an ANN project to finish.

If you want to read up on Linear Genetic Programming I can highly recommend the following thesis: [eldorado.uni-dortmund.de:8080/bitstream/2003/20098/2/Brameierunt.pdf](eldorado.uni-dortmund.de:8080/bitstream/2003/20098/2/Brameierunt.pdf).

That thesis looks really good. I will surely read it sometime.

BTW that link didn’t work for me but https://eldorado.uni-dortmund.de/bitstream/2003/20098/2/Brameierunt.pdf did.

hehe, possibly. I think I left some of that post inside my mind.

I think my point was that the graph idea allows large scale mutations very easily, which is important because you can jump around the solution space quickly.

Thanks for the link, I’ll have a read of it. I’ve got a good paper on mutating real numbers as though they were represented as an infinitely long binary Gray code. The distribution apparently works very well (though AFAIK only on low-dimensional problems; I’m going to try it on a 600+ dimensional problem when I get around to it).
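A finite-precision Python sketch of that Gray-code mutation idea (32 bits standing in for “infinitely long”, and assuming known bounds lo/hi):

```python
import random

def mutate_gray(x, lo, hi, bits=32):
    """Flip one random bit of x's fixed-point Gray-code representation."""
    span = hi - lo
    n = int((x - lo) / span * ((1 << bits) - 1))  # fixed-point encode
    g = n ^ (n >> 1)                              # binary -> Gray
    g ^= 1 << random.randrange(bits)              # flip one random bit
    m = 0
    while g:                                      # Gray -> binary
        m ^= g
        g >>= 1
    return lo + m / ((1 << bits) - 1) * span

# Low bits give frequent tiny steps, high bits give rare large jumps.
print(mutate_gray(0.5, 0.0, 1.0))
```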

Ian

I’ll definitely look into the graph idea more closely. Right now, though, I’ve started coding a Java program to create Blender texture plugins by linear genetic programming. That was an interesting idea that came up earlier.

My plan here is to have the normal operators such as +, -, * and /, but also more advanced ones such as sin, sgn, >, < and if clauses. I will also have even more advanced ones such as Voronoi, clouds, random and other premade things that we know can produce good results by themselves. It will be interesting to see what comes up when they are combined in all possible ways.
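A minimal Python sketch of what such an instruction set might look like (the actual program will be Java; `value_noise` here is just a stand-in for premade generators like Voronoi or clouds):

```python
import math, random

def value_noise(a, b):
    rng = random.Random(int(a * 73856093) ^ int(b * 19349663))  # hash-style seed
    return rng.random()

OPS = {
    '+': lambda a, b: a + b,
    '-': lambda a, b: a - b,
    '*': lambda a, b: a * b,
    '/': lambda a, b: a / b if abs(b) > 1e-9 else 1.0,  # protected
    'sin': lambda a, b: math.sin(a),
    '>': lambda a, b: 1.0 if a > b else 0.0,
    'noise': value_noise,
}

def intensity(program, x, y):
    r = [x, y, 0.5, 2.0, 0.0, 0.0, 0.0, 0.0]  # r[0], r[1] hold the coordinate
    for dst, op, a, b in program:
        r[dst] = OPS[op](r[a], r[b])
    return max(0.0, min(1.0, r[0]))            # clamp to a pixel value

# Evaluate a random 10-instruction program at one pixel.
prog = [(random.randrange(8), random.choice(list(OPS)),
         random.randrange(8), random.randrange(8)) for _ in range(10)]
print(intensity(prog, 0.3, 0.7))
```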

I suspect that I will have something to show in 2-3 weeks (got a lot of other stuff to do in school right now). I’ll keep you posted.

I have not let go of the idea of evolving geometry. I’ve just put it on hold to see if this has the potential to produce something good first. We will see pretty soon, I think.

When looking around to find a name that wasn’t taken already, I found http://www.i-tex.de/help/modules/gentex.htm. They have done something similar. The big difference, though (other than that this is a much bigger program), is that it looks like it exports images instead of code. And come on, seamless textures don’t look that hot when you can see them repeat over and over.

Since my program will export a procedural texture, that will not be a problem.

Sounds like a good plan. I look forward to seeing what you come up with :slight_smile:

I’ll have more of a ponder on this one while I’m working today.

I am working on using the genetic algorithm to design structures that filter/reflect light (this is for a graduate architecture program). My latest effort used metaballs, mainly because it was easy to script the locations and variation of the objects in the “genotype”. Here’s what I discovered (I’m currently working on version two):

  1. testing genotypes (the filter, if you will) is by far the biggest computing barrier. I was using rendering to test the performance of each specimen in sunlight throughout the year. Since I was testing reflected light (and using Yafray), the final batch was only 1000 specimens over 10 generations (8 hours of daylight × 6 months × 1000 specimens × 10 generations = four days on four processors).

  2. this is more of an architecture school problem, but I think one issue with using the metaballs is that every specimen, no matter how optimal, still looks random. I’m hoping that this may change with larger genetic pools…

  3. the genetic algorithm is a bit of a blunt instrument - you need a lot of data! The actual algorithm is very easy to code, but there’s a lot of nuance in how much you control the search space: the less control you exert, the more possibilities you can end up with, but the more computational power you need. I didn’t implement any sort of mutation; I also wasn’t using the genetic algorithm as a dynamic process, but rather as a search algorithm.

This project came out of reading Manuel DeLanda - an architectural theorist who, interestingly enough, was a professional animator for a while.

I’m curious about your project… I’ll go through and read the thread now ;)

RS

Since the rendering time is a barrier, my thought was to pre-generate a huge number of parents and render them out overnight, then store an image file (with a programmatically generated name based on shader parameters, or just a numeric identity) and a text/XML description of the shader parameters associated with the image name.
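Something like this Python sketch, where `render_func` is a placeholder for whatever actually does the render:

```python
import hashlib, json, os

# Cache each specimen under a name derived from its shader parameters,
# with a JSON sidecar describing them; skip renders already on disk.
def cache_specimen(params, render_func, out_dir='renders'):
    os.makedirs(out_dir, exist_ok=True)
    key = hashlib.md5(json.dumps(params, sort_keys=True).encode()).hexdigest()
    image_path = os.path.join(out_dir, key + '.png')
    if not os.path.exists(image_path):
        render_func(params, image_path)   # assumed to write the image file
        with open(os.path.join(out_dir, key + '.json'), 'w') as f:
            json.dump(params, f, indent=2)
    return image_path
```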

Also, it would be nice to be able to lock any parameter or set of parameters; narrow the range of any set of parameters; or allow a parameter to vary linearly over a chosen range instead of randomly.

LetterRip

Also, it would be nice to be able to lock any parameter or set of parameters; narrow the range of any set of parameters; or allow a parameter to vary linearly over a chosen range instead of randomly.

If you make the random numbers fit a normal (Gaussian) distribution, that helps. Then you just vary its SD: the less you want a parameter changed, the smaller the SD. Easy to do in Python, helpfully.
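A toy Python sketch (an SD of 0.0 effectively locks a parameter):

```python
import random

# Gaussian mutation with a per-parameter standard deviation.
def mutate(params, sds):
    return [p + random.gauss(0.0, sd) for p, sd in zip(params, sds)]

genome = [0.5, 1.2, 3.0]
print(mutate(genome, [0.1, 0.0, 0.5]))  # the second parameter never moves
```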

What you are doing sounds very interesting. What are you trying to optimize? I mean, what properties does a structure with high fitness have?

I doubt that it would change significantly. If you want some kind of order, I suggest changing either the genome or the fitness function to incorporate order. If you do it by editing the genome, try to encode the placing by an algorithm. If you do it by the fitness function, you have to find some function that gives higher fitness when the randomness is low. You might try some Fourier analysis here, to see what spectral components are present and whether the spectrum is similar to that of noise or has spikes at certain frequencies (order).
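A rough Python sketch of that Fourier idea, assuming the layout is available as a 2D grayscale array:

```python
import numpy as np

# Pronounced peaks in the power spectrum suggest spatial order; a flat
# spectrum suggests noise-like randomness.
def spectral_peakiness(image):
    power = np.abs(np.fft.fft2(image)) ** 2
    power = np.fft.fftshift(power)
    power[power.shape[0] // 2, power.shape[1] // 2] = 0  # drop the DC term
    return power.max() / power.mean()   # >> 1 means strong peaks

rng = np.random.default_rng(0)
print(spectral_peakiness(rng.random((64, 64))))    # noise: low ratio
xs = np.sin(np.linspace(0, 16 * np.pi, 64))
print(spectral_peakiness(np.outer(xs, xs)))        # regular stripes: high ratio
```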

  3. the genetic algorithm is a bit of a blunt instrument - you need a lot of data! The actual algorithm is very easy to code, but there’s a lot of nuance in how much you control the search space: the less control you exert, the more possibilities you can end up with, but the more computational power you need. I didn’t implement any sort of mutation; I also wasn’t using the genetic algorithm as a dynamic process, but rather as a search algorithm.
    RS

Very true. This is one of the biggest obstacles I see ahead of me. Since I am doing interactive evolution, I am not limited so much by computing power as by the onset of boredom in the user. I have to have extremely fast evolution; I can’t possibly have thousands of generations. Maybe around one hundred is reasonable, but hardly much more. And as you point out, the larger the search space, the longer until convergence.

I am thinking of incorporating some sort of automatic fitness function that helps the user find interesting individuals. How that will look I don’t really know yet, but it might give higher scores to complex textures than to simple ones (uniform colors). But writing a function that finds the complex and at the same time throws away both the simple and the random is no easy task.

I had a lesson today in a new course on Simulation of Complex Systems where this was discussed. Basically there are two different approaches to writing such a function: the first is algorithmic and the second is based on entropy. Neither is perfect. It is apparently a big subject.
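A minimal sketch of the entropy-based approach (note it rewards pure noise just as much as structure, which is exactly the imperfection mentioned above):

```python
import numpy as np

# Shannon entropy of the pixel histogram: uniform colors score near 0,
# noise scores near the maximum, 'interesting' textures land in between.
def histogram_entropy(image, bins=64):
    hist, _ = np.histogram(image, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```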

If I find that such a fitness function would be helpful in the end I will look closer into it.

I will hear a presentation about subjective evolution tomorrow; I think it will be very interesting. I’ll keep you guys posted if it leads me down any new avenues.

Fly,

Order is less of a mathematical problem and more in the eye of the beholder :o I see order in it, but I’ve also been looking at it longer than the jurors, who see it for a minute or less. Maybe it’s a matter of presentation…

I’m optimizing a light-diffusing screen: direct sunlight passing through the screen is bad, but indirect light is good. Simple enough, right? I placed a plane underneath the screen and rendered just that plane for each hour of sunlight I was looking at. I could then throw out the dark pixels and the lightest pixels and add up the values of the rest; this (over six months of changing sunlight) was the fitness score for the specimen.
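In Python the scoring could look roughly like this sketch (the thresholds are made up; `image` is assumed to be an array of pixel values in [0, 1]):

```python
import numpy as np

# Discard the darkest and brightest pixels, sum the rest as the
# diffuse-light score for one rendered hour.
def diffuse_score(image, dark=0.1, bright=0.9):
    vals = image.ravel()
    return float(vals[(vals > dark) & (vals < bright)].sum())

# Specimen fitness: sum over every rendered hour of daylight.
# fitness = sum(diffuse_score(img) for img in hourly_renders)
```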

It seems like you could calculate the chance of finding a successful specimen, given the number of possible combinations, the number of specimens, and the number of generations…

RS

That’s some seriously interesting stuff you’ve got there. Your problem seems to be the fitness function: you have an objective fitness function that optimizes the diffusing properties, but the jurors look more at the aesthetics of the screen, since they can’t calculate the objective fitness in their heads. Maybe incorporating some sort of interactive evolution would help you arrive at a solution that is better in the eyes of the jurors.

During a seminar I attended today, approaches to dealing with this sort of problem were brought up. Basically, you can combine interactively assigned fitness with a deterministic fitness in an arithmetic way.
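At its simplest, that can be a weighted sum; a sketch (the weight is arbitrary):

```python
# Blend an objective score with a user rating; w controls how much the
# juror's eye counts versus the measured light performance.
def combined_fitness(objective_score, user_rating, w=0.5):
    return (1.0 - w) * objective_score + w * user_rating
```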

That’s not a bad idea - the script is already generating massive numbers of iterations, so it wouldn’t be much of a stretch to output them into a simple animation.

I think I’ve simplified the filter algorithm a bit - I’m going to try reversing the relationship and having cameras render pictures of the back of the screen, with the idea that more lit areas = more diffuse light below. Seconds count!

RS