Thursday, March 20, 2008

Swarms




Very good point from the masterminds behind the Grey Thumb blog: it's a link to Dr. Deborah Gordon's lecture on how ants know to do what they do; see here.

Understanding the inner workings of ants' behaviour - and that of other swarms too - may be important for advancing the current state of the bio-inspired methods that rely on it.

I read her book Ants at Work: How an Insect Society is Organized. She even applied neural networks to model the way that ants interact.

Maybe the main lesson we may learn from those little insects is directly linked to the fact that ants (and other kinds of swarms) do what they do without any central control, and the nest works fine. Of course, when it comes to the evolutionary computation realm, it is always sensible to remember that an ants' nest has a very big population, and in optimization (and machine learning in general) the population size may be a drawback if set too big for some problems. Not to mention that keeping a small population is very useful when dealing with expensive fitness/objective function evaluations.
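To make that cost argument concrete, here is a minimal sketch (my own toy example in Python; the problem, parameters and function names are assumptions, not taken from any particular paper) of a generational GA in which each fitness evaluation stands in for something expensive: the evaluation budget per generation is simply the population size, so keeping the population small keeps the bill small.

```python
# Minimal toy example (mine, not from the post): a generational GA with a small
# population. Every generation costs exactly POP_SIZE fitness evaluations, so a
# small population keeps the bill low when each evaluation is expensive.
import random

DIM = 10          # problem dimension (assumed for illustration)
POP_SIZE = 20     # small population: 20 evaluations per generation
GENERATIONS = 50

def expensive_fitness(x):
    # Stand-in for a costly objective (a long simulation, say); here just the sphere.
    return sum(v * v for v in x)

def tournament(pop, fits, k=2):
    best = min(random.sample(range(len(pop)), k), key=lambda i: fits[i])
    return pop[best]

def mutate(x, sigma=0.1):
    return [v + random.gauss(0.0, sigma) for v in x]

pop = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(POP_SIZE)]
evaluations = 0
for gen in range(GENERATIONS):
    fits = [expensive_fitness(ind) for ind in pop]
    evaluations += POP_SIZE
    pop = [mutate(tournament(pop, fits)) for _ in range(POP_SIZE)]

print("total fitness evaluations:", evaluations)  # POP_SIZE * GENERATIONS = 1000
```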


6 Comments:

Anonymous said...

Big populations are increasingly less problematic; but usually most practitioners strive to find the correct size for a particular problem. You can't just throw lots of chromosomes at a problem until it's solved.

21 March, 2008 14:18  
Marcelo said...

Hi, JJ!

Even though population size has become less problematic as computation has got cheaper, that is no reason to forget that a good algorithm must also scale well with respect to population size. As I said in the post, that kind of scalability helps a lot to save precious computational resources and, of course, money; not to mention that one could run the simulations on a simple computer instead of clusters and/or expensive supercomputers.

Sometimes, when I read a paper about those new trends in evolutionary computation and notice that the authors used a big population (from 4000 to 10000 individuals), I really wonder whether the field is actually advancing. Although those new genetic algorithms are able to handle problems that were "hard" to solve via old-school GAs - such as the Canonical Genetic Algorithm (CGA) - this does not mean that the field is indeed going ahead if it is necessary to set up a population with thousands of individuals for a problem that is relatively small in dimension (30-60 variables). I consider that the way the variables interact is more important than their number in an optimization problem.

BUT, I consider that those "problems" - such as finding ways to decrease the population size needed to handle (and scale well on) high-dimensional optimization - are among the future challenges of evolutionary computation (and artificial intelligence in general).

Best Regards!

Marcelo

21 March, 2008 20:36  
Anonymous said...

But the question is how population size or the number of generations scales with problem complexity. I don't think there's been a lot done in that area analytically, but for the kind of problems I've encountered (ever met my genetic MasterMind?), scaling is usually pretty good.

22 March, 2008 06:41  
Marcelo said...

Hi, JJ!

You wrote:

"But the question is how population or the number of generations scales with problem complexity."

I assume you meant the relationship between population size and problem dimension, that is, the way the population size varies when handling problems of different dimensions.

Yes! You are right. There are few works published about that issue - and the analytical ones are even fewer. From what I have read on the subject, the majority - mainly in evolutionary computation - dealt with toy problems (OneMax) and/or problems designed around the algorithm's inner workings (which means that the problem itself carries a bias that eases its optimization by the algorithm). Not to mention that, when it comes to complexity analysis, what is frequently done is just a "comparison" of how the population size (or the number of function evaluations) varies with the problem dimension. Then, from that "comparison", a Big O asymptotic behaviour is extracted, even though it is valid only for a limited class of problems (some papers do not mention that!).
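To illustrate the kind of empirical "comparison" I mean, here is a rough sketch (my own illustration; the measurements are made up, not from any paper): record the evaluations needed at a few problem dimensions and fit a power law in log-log space. The resulting exponent describes only the problems actually measured.

```python
# A rough sketch of the empirical "comparison" described above (my own
# illustration; the measurements are made up): record the evaluations needed at
# a few problem dimensions and fit a power law  evaluations ~ c * dimension**alpha
# by linear regression in log-log space.
import math

dimensions  = [10, 20, 40, 80, 160]
evaluations = [1200, 2600, 5500, 11800, 24500]   # hypothetical measurements

logd = [math.log(d) for d in dimensions]
loge = [math.log(e) for e in evaluations]
n = len(logd)
mean_d = sum(logd) / n
mean_e = sum(loge) / n
alpha = (sum((x - mean_d) * (y - mean_e) for x, y in zip(logd, loge))
         / sum((x - mean_d) ** 2 for x in logd))

print("estimated scaling exponent alpha = %.2f" % alpha)
# An alpha close to 1 would suggest roughly O(dimension) growth *for these
# problems only*; the fit says nothing about other problem classes.
```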

I respect works based on that kind of analysis, but they do not represent a robust investigation of time complexity and so on.

Sometimes theory helps to set up an algorithm to solve a particular problem, and that may translate into saving resources - not only computational ones, but material ones too, such as human resources and, most importantly, money! :)

Theoretical investigations must be encouraged! They may help a lot in designing simple and elegant algorithms that rely on small amounts of computational resources. But what I have noticed is that the advance and falling cost of computation have, perhaps in some sense, encouraged researchers to rely much more on expensive computing facilities than on ways to decrease that dependence - and theory is exactly what helps us there.

Here is another future challenge for the EC field!

Best Regards!

Marcelo

22 March, 2008 15:10  
Anonymous said...

It is worth mentioning that EDAs - Estimation of Distribution Algorithms - don't use a population at all. This may ease the solution of complex problems.

25 March, 2008 23:28  
Marcelo said...

Hi, Anselmo!

EDAs do keep a population in some sense, mainly when it comes to estimating the probability distribution that generates all (or just part of) the individuals.

As in my previous comment, I guess that the kind of research I cited (mainly complexity analysis and theoretical investigations) could help to reduce the "population size" an EDA uses to estimate the distribution. Using, for example, 10000 individuals to estimate the distribution for a 50-dimensional problem is neither very useful nor frugal from the computational cost viewpoint. An important observation: there is a trade-off between the precision of the distribution and the computational effort needed to compute it, that is, the more accurate the distribution, the more costly it becomes - not to mention that the nature of the problem itself may further increase the effort required to estimate a *nice* distribution. This is another future problem for the EC field.
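Just to make the point concrete, here is a minimal sketch (my own toy in Python, a UMDA-style continuous EDA with independent Gaussian marginals; all names and numbers are assumptions, not any specific published EDA) showing where the "population" hides in an EDA: a sample is drawn from the model, the best individuals are selected, and the distribution is re-estimated from them, so the sample size plays exactly the role of a population size and drives the accuracy/cost trade-off.

```python
# Minimal toy (mine, not any published EDA): a UMDA-style continuous EDA with
# independent Gaussian marginals. A sample of individuals is still needed to
# re-estimate the per-variable mean and standard deviation, so SAMPLE_SIZE plays
# exactly the role of a population size and drives the accuracy/cost trade-off.
import random
import statistics

DIM = 50
SAMPLE_SIZE = 200    # the de facto "population" used to estimate the model
SELECTED = 50        # truncation selection: keep the best 25%
GENERATIONS = 30

def objective(x):
    # Stand-in objective: the sphere function.
    return sum(v * v for v in x)

means = [0.0] * DIM
stdevs = [5.0] * DIM

for gen in range(GENERATIONS):
    # Sample SAMPLE_SIZE candidate solutions from the current model...
    sample = [[random.gauss(means[i], stdevs[i]) for i in range(DIM)]
              for _ in range(SAMPLE_SIZE)]
    # ...keep the best ones...
    sample.sort(key=objective)
    best = sample[:SELECTED]
    # ...and re-estimate the distribution from that selected "population".
    for i in range(DIM):
        column = [ind[i] for ind in best]
        means[i] = statistics.mean(column)
        stdevs[i] = max(statistics.stdev(column), 1e-6)  # avoid collapsing to zero

print("best objective found:", objective(sample[0]))
```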

Best Regards!

Marcelo

26 March, 2008 00:41  
