Tuesday, December 09, 2008

Evolving Mona Lisa Through Simulated Evolution




Amazing story this one! See it here.

It is a genetic programming algorithm designed to evolve an image of Leonardo da Vinci's Mona Lisa. It seems that each individual is represented by 50 polygons, and the fitness function measures how closely the rendered polygons match the real image of the Mona Lisa. So, the polygons should be arranged to resemble the original as closely as possible.
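As a rough illustration of what such a representation might look like (a sketch only; the names and structure here are hypothetical, not Alsing's actual code -- the original reportedly uses 50 semi-transparent polygons):

```python
import random

# One candidate image: a fixed-size list of semi-transparent polygons.
# Structure is illustrative, not taken from the original program.
def random_polygon(width, height, max_points=6):
    n = random.randint(3, max_points)  # at least a triangle
    points = [(random.uniform(0, width), random.uniform(0, height))
              for _ in range(n)]
    rgba = tuple(random.random() for _ in range(4))  # r, g, b, alpha
    return {"points": points, "rgba": rgba}

def random_individual(width=200, height=200, n_polygons=50):
    """A whole candidate image is just a list of polygons."""
    return [random_polygon(width, height) for _ in range(n_polygons)]
```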

The final result is amazing.


11 Comments:

Blogger Togelius said...

Yes, it's impressive. But it's very unclear what his fitness function is. Could be lots of domain knowledge in there. Until we know, I refuse to be too impressed.

09 December, 2008 16:04  
Blogger Marcelo said...

Hi, Julian!

Good point! I hadn't thought about what you mentioned. Yeah, it could have many biases to fit the original Mona Lisa perfectly.

Surely, the author will never tell us whether he indeed used such a kind of "method". :)

See Ya!

09 December, 2008 18:06  
Blogger Nicolas said...

I doubt it -- the fitness function is simply the difference between the current generated image and the target...

I wrote my own version (http://camaelon.blogspot.com), and the results are OK, though not as good as his. I only ran it for at most 50-60K iterations though, and at this stage what I have looks rather similar to his results... (Interestingly, the video I put online shows a version where the algorithm allows random modifications to the polygon transparency -- I'm running another test right now where the transparency is instead fixed to a low value, and this looks much better.)

My implementation is also rather crude, and from his description his should be a little more flexible too, which may explain the really good final results.

Anyway, that's a pretty neat idea, and quite fun to implement and play with :)
(next step: implement a proper GA)

11 December, 2008 14:28  
Anonymous haltux said...

This does not use genetic algorithms at all. This is just a local search, no more, no less.

So it seems that local optima are not a problem in this case, which is not too surprising when you think about it.

12 December, 2008 06:49  
Blogger Togelius said...

Nicolas: I understood so much. But what is "the difference"? There are many ways to measure difference.

haltux: Local search algorithms definitely have problems with local optima. Did you miss out on a negation somewhere?

13 December, 2008 10:20  
Blogger Nicolas said...

Togelius: it is simply a pixel difference -- for each iteration, it adds up, for every pixel of the image, the color difference between the desired image and the current image.
A typical formula is abs(R2-R1) + abs(G2-G1) + abs(B2-B1), or sqrt( (R2-R1)^2 + (G2-G1)^2 + (B2-B1)^2 ).
To speed things up, the difference could be calculated using a smaller target image, etc.
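The sum-of-absolute-differences version of that formula could be sketched like this (assuming, for illustration, that both images are same-size nested lists of (r, g, b) tuples):

```python
def pixel_difference(img_a, img_b):
    """Sum of abs(R2-R1) + abs(G2-G1) + abs(B2-B1) over all pixels.

    img_a and img_b are nested lists of (r, g, b) tuples of the same
    dimensions; a lower score means the images are more alike.
    """
    total = 0
    for row_a, row_b in zip(img_a, img_b):
        for (r1, g1, b1), (r2, g2, b2) in zip(row_a, row_b):
            total += abs(r2 - r1) + abs(g2 - g1) + abs(b2 - b1)
    return total
```

Identical images score 0; the score grows with every per-channel deviation.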

I played a bit more with my program this weekend; unsurprisingly, the best results are achieved with small changes between iterations, a low mutation rate, and transparent polygons or triangles using plain colors. Gradient triangles give pretty results but less accurate images (in fact, plain-color translucent triangles achieve good results because of how they combine; combining gradients is harder). Similarly, plain (non-translucent) polygons give worse results.

15 December, 2008 00:14  
Blogger Nicolas said...

Oh, and something else important for good results is the possibility to add nodes to the polygons at a random position (not a surprise either).

15 December, 2008 00:16  
Blogger Marcelo said...

Nicolas, what you said is strange, because the author of the original Mona Lisa evolution program said on his blog that the algorithm he used was genetic programming. You say it's simulated annealing (SA), which is pretty much the same as a (1+1)-ES -- or, at least, very close to it. SA is well known for its (in)ability to escape local optima, given that there is no population surplus (i.e., pop_size = 1, frequently).

When he claimed his algorithm was GP, I took it with a grain of salt, since GP needs big populations (>1000) and a (very) large number of iterations to deliver a reasonable result.

Nicolas, how do you represent the individual? Is it a floating-point vector?

Is the color difference the mechanism you use to "mutate" your individual? How do you implement it?

Cheers...

Marcelo

15 December, 2008 03:30  
Blogger Nicolas said...

Well, Roger Alsing describes his algorithm in the following way:

0) Setup a random DNA string (application start)
1) Copy the current DNA sequence and mutate it slightly
2) Use the new DNA to render polygons onto a canvas
3) Compare the canvas to the source image
4) If the new painting looks more like the source image than the previous painting did, then overwrite the current DNA with the new DNA
5) repeat from 1

To me, this is a hill-climbing algorithm, not a GA. If some parameters (such as the mutation rate) change depending on the evolution's success, then it's an SA algorithm. And yes, this should have problems with local minima. The whole thing "evolves", if you want, hence I believe the confusion, but to me a GA approach would mean using a large population and applying evolution...

Note though that I'm not a GA specialist at all -- my background is in systems and a bit of graphics, so I may be completely wrong ;)

In my implementation, I simply have a collection of Polygons objects (having a color and transparency) defined as a set of points. For each iteration, I randomly select some polygons, "mutate" their points (move them, add/remove one), draw the resulting image, compare it and if there is a smaller difference I keep the mutated polygons. I then do the same thing but mutating the polygons' colors.
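The point mutations described here (move, add, or remove a vertex) could look something like this sketch (parameters and names are made up for illustration; "add" inserts the new vertex at a random index, in the spirit of the earlier comment about adding nodes at random positions):

```python
import random

def mutate_points(points, width, height, max_shift=10.0):
    """Return a new vertex list with one vertex moved, added, or removed."""
    points = list(points)  # work on a copy; the original stays untouched
    op = random.choice(["move", "add", "remove"])
    if op == "move":
        i = random.randrange(len(points))
        x, y = points[i]
        points[i] = (x + random.uniform(-max_shift, max_shift),
                     y + random.uniform(-max_shift, max_shift))
    elif op == "add":
        i = random.randrange(len(points) + 1)
        points.insert(i, (random.uniform(0, width),
                          random.uniform(0, height)))
    elif len(points) > 3:  # "remove", but always keep at least a triangle
        points.pop(random.randrange(len(points)))
    return points
```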

I think there are a lot of optimisations that could be made to this simple approach (for instance, instead of putting a newly added polygon at a random position, do a few iterations to try several positions and choose the best), and of course implementing a proper GA solution would be interesting (in my few tests, finding the right balance between all the different parameters (mutation rates) can be tricky, and I believe a GA approach would converge faster as well as avoid local minima)...

16 December, 2008 09:31  
Blogger ender said...

Hi everybody:

Roger Alsing put his code on the web. http://code.google.com/p/alsing/downloads/list

I'm not a .NET programmer and don't have much time to check it. But maybe somebody with those skills can inspect it.

The discussion has been really quiet the last few days; perhaps everybody is away on vacation.

I agree with Togelius in the sense that the algorithm must have a lot of domain knowledge inside... Some comments from the author make me think that this DK is not in the fitness function; it may be in the mutation process.

Even so, the problem and the results shown sound amazing. I would like to check the code and develop a GA C++ version of it someday.

21 December, 2008 04:07  
Anonymous Anonymous said...

I've seen too much noise about those Mona Lisa pseudo-GAs on the net; everyone is doing his own version, but they are full of domain knowledge, as Togelius first noted...

15 February, 2009 02:54  
