Evolutionary Design, anybody?

lol, it’s the attack of the creationists!! Blender is heresy!!

I don’t want this thread to be drowned by propaganda. So everybody please keep your 2 cents to yourselves, no matter whether you believe in science or creationism.

Please just let it be, for the sake of avoiding flame wars.

Agreed, and I know what you mean since you are the author of the thread. But just let this be and let it die on its own, I mean ignore it, and if this happens again anywhere, please report it using the report button.

admin.

I would suspect that materials design would be much more amenable to an EA approach, and for your first-generation parents you could use the various material libraries that have been made.

LetterRip

I just did an evolutionary NN (only weights, not topology) a week and a half ago in a homework assignment (solving the 3-input binary XOR problem). Very interested indeed!
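For readers curious what a weights-only approach can look like, here is a minimal sketch (not the poster’s actual homework code): a fixed 3-3-1 tanh network whose weights are evolved by a simple (1+1) strategy on the 3-input XOR (parity) problem. The topology, mutation size, and iteration count are all illustrative choices.

```python
import math, random

# 3-input parity truth table: inputs in {0,1}, target is a XOR b XOR c
CASES = [([a, b, c], a ^ b ^ c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

def forward(w, x):
    # w holds 3x3 hidden weights, 3 hidden biases, 3 output weights, 1 output bias
    hidden = [math.tanh(sum(w[i * 3 + j] * x[j] for j in range(3)) + w[9 + i])
              for i in range(3)]
    return math.tanh(sum(w[12 + i] * hidden[i] for i in range(3)) + w[15])

def error(w):
    return sum((forward(w, x) - y) ** 2 for x, y in CASES)

random.seed(1)
parent = [random.uniform(-1, 1) for _ in range(16)]
start = best_err = error(parent)
for _ in range(20000):
    # mutate every weight with small Gaussian noise; keep the child if no worse
    child = [g + random.gauss(0, 0.1) for g in parent]
    e = error(child)
    if e <= best_err:
        parent, best_err = child, e

print(best_err)  # should have dropped well below its starting value
```

Evolving only the weights like this sidesteps the encoding problems that topology evolution brings, which is why it fits a homework assignment so well.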

:smiley: cool. If you ever do the topology as well, look into evolving a growth strategy rather than an absolute topology (no idea how to do this myself, but apparently it’s by far the best way).

Oh, and don’t get me wrong, hill climbing/simulated annealing is a great search algorithm. Basically it’s like having only one individual instead of a population.
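That one-individual view can be made concrete with a small sketch: simulated annealing keeps a single candidate, “mutates” it, and sometimes accepts worse moves while a temperature cools. The objective function and cooling schedule below are made up purely for illustration.

```python
import math, random

def f(x):
    # toy objective with many local minima, invented for this example
    return x * x + 3 * math.sin(5 * x)

random.seed(0)
x0 = x = random.uniform(-10, 10)   # the single "individual"
best = x                           # best point seen so far
temp = 5.0
while temp > 1e-3:
    candidate = x + random.gauss(0, 0.5)      # "mutation" of the lone individual
    delta = f(candidate) - f(x)
    # always accept improvements; accept worsenings with Boltzmann probability
    if delta < 0 or random.random() < math.exp(-delta / temp):
        x = candidate
        if f(x) < f(best):
            best = x
    temp *= 0.999                             # geometric cooling schedule

print(round(f(best), 2))
```

With the temperature term removed (accept only improvements) this degenerates to plain hill climbing, which is exactly the “population of one” analogy.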

I wasn’t really thinking; you can make yours work like a population by just cycling through which individual you are looking at. Though I personally prefer the all-computational method, that’s more because of the problems I like working on (they tend to involve 100,000 - 1,000,000 generations/steps).

If you want to partner up on anything connected to blender and EA/ANN/SA just let me know.

Sure :slight_smile: , same to you.

The core of any EA is quite simple, and there are a few little tricks I’ve come up with that work nicely in Python (the main language I do it in). Python really is a great language for them, as is MATLAB (I’m just learning that now).

They do exist and they do work. Now, what the evolutionary part in them is, I have not really been able to figure out yet. Mutation and crossover do happen, but between which individuals I don’t yet really know.

One way is to have a small population, where at each stage three individuals are chosen; the best two are picked and they cross over. Or you do it asexually, where one is chosen randomly and the user picks from a group of its mutated copies.

The funky thing about EAs is that it largely doesn’t matter for an application like this; you can do pretty much anything. As long as you have some sort of grading/fitness measure it’ll work.
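As an illustration of how little machinery that takes, here is a hedged sketch of the three-way tournament described above, with toy bit-string genomes and a placeholder count-the-ones fitness (nothing here is Blender-specific):

```python
import random

def fitness(genome):
    return sum(genome)   # placeholder fitness: count of 1-bits

def tournament_offspring(population, rng=random):
    trio = rng.sample(population, 3)                      # pick three at random
    p1, p2 = sorted(trio, key=fitness, reverse=True)[:2]  # best two cross over
    cut = rng.randrange(1, len(p1))                       # one-point crossover
    return p1[:cut] + p2[cut:]

random.seed(2)
pop = [[random.randint(0, 1) for _ in range(10)] for _ in range(8)]
start_best = max(fitness(g) for g in pop)
for _ in range(200):
    child = tournament_offspring(pop)
    if random.random() < 0.2:                  # occasional point mutation
        i = random.randrange(len(child))
        child[i] ^= 1
    # steady-state replacement: the child replaces the current worst
    worst = min(range(len(pop)), key=lambda i: fitness(pop[i]))
    pop[worst] = child

print(max(fitness(g) for g in pop))
```

Swapping the automatic `fitness` for a human judgment is all it takes to turn this into the interactive scheme discussed in this thread.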

Another idea related to textures etc.: perhaps train a neural net on textures that are interesting and boring. Then it can become a fitness function; of course it might be easy to overfit a neural net. Instead of using parameters (which would be easier to train on), using AI-based vision approaches (lines, contrast, etc.) might prove more robust and less susceptible to overfitting (and of course one heck of a lot more computationally expensive…)

LetterRip

using AI-based vision approaches (lines, contrast, etc.) might prove more robust and less susceptible to overfitting (and of course one heck of a lot more computationally expensive…)

Sounds interesting. I’ve just written a program for identifying lines in images.

The problem is that you need to make a texture look like something, not just appealing.
Perhaps evolve node networks for procedurals…

Hmm.

Compare the result to a photo or drawing, perhaps as several parameters (contrast, edges, rate of change, colour, etc.).

Ian

Have you seen the Materializer? It creates random textures… It would probably be a good platform to build on. At http://uselessdreamer.byethost32.com/materializer.html

You guys really get me thinking! Very interesting stuff!

Have not thought about doing it for textures before. One could possibly create an EA that evolves node networks (with a messy encoding scheme like the ones used when neural networks are evolved). The fitness could be the likeness of the result compared to a scanned image of a texture. This likeness might be possible to evaluate with an NN (neural network) fed by well-known image recognition algorithms. I think it was something like this that IanC and LetterRip suggested.

Would be difficult to make it realistic though, since it would be very hard to evolve fitting normal and specular maps.

I’ve heard of the Materializer but have not tried it. From reading its webpage, it sounds like a very good candidate for extending with an EA though.

Have you seen the Materializer? It creates random textures… It would probably be a good platform to build on. At http://uselessdreamer.byethost32.com/materializer.html

Cool, cheers for the link!

I’ll be back later to chat, gotta go to a lecture on natural computation now, heheh.

Ian

How about an evolutionary UV seam creator? You could have it mark seams 9 ways (displaying the unwrapped results) and the user could pick which one, then 9 more ways based on that, etc. UV seams are a rather simple problem. That seems like a fairly manageable task and easier than some of the others (except the texture one). I think a lot of people would like their mesh to be automatically marked with seams and unwrapped, even if not in the most efficient way, so they can go to Gimp and start painting rather than marking seams all over their model.

san_diego_james,

UV seams don’t really have converging solutions until you are very close to a correct unwrapping, and for any non-trivial unwrapping there is a huge number of solutions. Thus I’d suggest that it is a very poor fit for a GA approach.

LetterRip

I agree with LetterRip there, but not for the same reasons. A huge number of solutions is not a problem for an EA. The problem is that you might need a subjective evaluation of a large number of solutions. The subjective evaluation is necessary because we want to hide seams where they are invisible. Without that criterion, creating a UV unwrapping based on an EA wouldn’t be too hard. Of course you would need a fitness value from the UV unwrapper saying how much distortion there is in the mapping.

This would of course need a recode of the Blender source, but that shouldn’t be too bad, I would guess.

One way to skip the subjective part is to weight paint the vertices by how visible they are. It might even be possible to do this automatically, using radiance or AO to guess which edges would be most hidden.

OK, a lot of ideas popping up here. I like it. But I can’t do everything. Right now I am most interested in evolving geometry. This might not be the most useful, but it is what I am most interested in. Please continue discussing other options, and if anyone feels like picking up the glove, please go ahead. I don’t have a monopoly on developing these things.

Let me then write a little bit more on how I suggest evolving geometry, to get your input. Right now I plan on creating a linear genetic programming (LGP) system for specifying transformations. An LGP is a way to have a program evolve. Every line of the program would then look like this:
Take vertex group X and use operator Y on it with I, J as parameters

One specific instruction could then look like:
Take vertex group 3 and use operator SCALE on it with X-axis, 10% as parameters.

The operators could be everything from scaling to extruding and so on.
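A sketch of what that encoding might look like in Python; the operator set, vertex-group count and parameter ranges are hypothetical stand-ins, not Blender API calls:

```python
import random

OPERATORS = ("SCALE", "TRANSLATE", "EXTRUDE")   # hypothetical operator set
AXES = ("X", "Y", "Z")

def random_gene(num_groups, rng=random):
    # one "line of program": (vertex group, operator, (axis, amount))
    return (rng.randrange(num_groups), rng.choice(OPERATORS),
            (rng.choice(AXES), round(rng.uniform(-1.0, 1.0), 2)))

def random_program(length, num_groups=4):
    return [random_gene(num_groups) for _ in range(length)]

def describe(program):
    # render each gene in the instruction format proposed above
    return [f"Take vertex group {g} and use operator {op} on it "
            f"with {axis}-axis, {amount:+.0%} as parameters"
            for g, op, (axis, amount) in program]

random.seed(0)
prog = random_program(3)
for line in describe(prog):
    print(line)
```

Because the genome is just a flat list of such genes, mutation (replace a gene) and crossover (splice two lists) fall out almost for free.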

Does this sound like it could produce interesting solutions to you guys? Do you see any problems?

One way to skip the subjective part is to weight paint the vertices by how visible they are. It might even be possible to do this automatically, using radiance or AO to guess which edges would be most hidden.

You would need the object ‘in motion’; also you would need to know the camera angles that would be used.

LetterRip

You might also be able to weight all vertices by how far they are from the camera on average over the animation. Edges on the back side would always be further away than edges on the front side.

Overall, I don’t think this is a good thing to do automatically. If you weight paint it manually you would get around that. For example, if you have a person with a shirt you would have really high weights under the shirt, also on the back of the head, and really low ones in the face.

Still, I don’t know how useful it would be. Is choosing the seams really that big a problem? I have not done many UV unwrappings, but to me it seems that seams giving a subjectively logical and symmetrical layout would be highest on my list.

Does this sound like it could produce interesting solutions to you guys? Do you see any problems?

Redundancy is always good; I’d personally suggest using some sort of coding that allows it. Still, it’s easier to make a system that doesn’t have it and add it later.

If you are looking at GP, a simple tree approach is quite good and easy to work with. A binary tree is easy to traverse and evaluate, and it’s very easy to apply a crossover function to. Trees also give a nice range of small to large effects from certain mutations.

Oh, have you done any EAs before? If not, watch out for cheating. They are immensely good at finding tiny flaws in the fitness criteria and exploiting them.

Edit - thinking about it, what ideas do you have for fitness? Are you thinking of user input all the way?

Ian

Sure… redundancy is good… I don’t really get it in this context though. You mean like real evolution, with dominant and recessive traits? I’m going to think on this, but off the top of my head I have a hard time imagining the implementation and use. Wouldn’t that make the evolution really slow? I mean, redundancy is a way to make something more stable.

If you are looking at GP, a simple tree approach is quite good and easy to work with. A binary tree is easy to traverse and evaluate, and it’s very easy to apply a crossover function to. Trees also give a nice range of small to large effects from certain mutations.

Well, binary trees are good for expressions such as (+, (*, 3, 2), 5). But here we don’t have assignments, and therefore I don’t see the use of a TGP. An LGP instead is a list of commands, which is exactly how you do modelling manually. LGPs are not that hard either, in my opinion. The crossover is pretty easy too: you use a two-point crossover, but the crossover points don’t have to be at the same places in the two individuals. That way the genome will change in length, which is perfectly fine since nothing says we started with the optimal length:
aa|aaaaa|aa
b|bb|bbbbbb
->
aabbaa
baaaaabbbbbb
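In code, that crossover might look like the sketch below; the explicit cut points (2, 7) and (1, 3) reproduce the aa/bb example above, and the random wrapper shows how cuts would normally be drawn:

```python
import random

def two_point_crossover(a, b, cuts_a, cuts_b):
    # Each parent gets its own pair of cut points, so offspring lengths
    # can differ from the parents'; total genetic material is conserved.
    (i1, i2), (j1, j2) = cuts_a, cuts_b
    return a[:i1] + b[j1:j2] + a[i2:], b[:j1] + a[i1:i2] + b[j2:]

def random_crossover(a, b, rng=random):
    return two_point_crossover(
        a, b,
        tuple(sorted(rng.sample(range(1, len(a)), 2))),
        tuple(sorted(rng.sample(range(1, len(b)), 2))))

c1, c2 = two_point_crossover("aaaaaaaaa", "bbbbbbbbb", (2, 7), (1, 3))
print(c1, c2)  # aabbaa baaaaabbbbbb
```

Note that the two children's lengths always sum to the parents' lengths, so genome size drifts but material is never lost.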

Oh, have you done any EAs before? If not, watch out for cheating. They are immensely good at finding tiny flaws in the fitness criteria and exploiting them.

I just finished a master’s-level course in EAs. I know what you are talking about when you warn me to be careful with the fitness criteria, but those concerns are really not valid for interactive evolutionary algorithms. The fitness is not decided by an algorithm; it is decided by a human.

Edit:
I am at this stage not thinking of using any type of algorithm for fitness. In a later stage it might be possible to decrease the fitness of objects that have intersecting surfaces, for example, just so everything doesn’t get too messed up. I don’t expect that to be too big a problem though, since a human can judge that too.

I am going to read up a bit more on interactive evolution, but I might do some kind of fuzzy deal on the fitness values. So out of 9 you mark one as BEST, a few as GOOD and a few as BAD. Not really fuzzy if I only assign them a single fitness value after that, but you get the point.

I just finished a master’s-level course in EAs. I know what you are talking about when you warn me to be careful with the fitness criteria, but those concerns are really not valid for interactive evolutionary algorithms.

Very true. I work mainly on non-interactive ones, so the fitness criterion is very important!

Sure…redundancy is good… Don’t really get it in this context though. You mean like with real evolution with dominant and recessive traits?

Though very interesting in the field of EAs, and something I’m toying with atm, it wasn’t quite what I meant. I was thinking of having several codings for the same operation. Not necessary, but I like it. It might slow things down, I don’t know. I personally work with non-interactive progs, so it doesn’t really concern me unless it significantly affects the running time.

I’m making one atm with chromosomes, redundancy and a codon-style coding. :slight_smile:

The only advice I can think of is to have only one crossover point. Because the genome is linear, one cut would probably be best.

That way the genome will change in length.

Good stuff; this works great with EAs, and Python is fantastic for working with variable-length arrays.