
Evolutionary Robotics



Bibliography


 

    The articles, papers, and books listed in this bibliography cover research in the field of Evolutionary Robotics and reflect many of the important advances that occurred over the first two decades of the field. This bibliography is being updated to include more recent authors and research results (as well as a few historical figures in the field), and the section "Additional Authors" contains a partial list of researchers to be included. 
    Short summaries of some of the papers are provided. Where possible, a link is included to a publisher or author web page where a PDF version of the paper can be obtained online. The PDF files do not reside on this website and only external links are provided. Use of these files must conform to the rules specified by the individual copyright holders of the publications.

 
 


A very well-written, forward-looking analysis of evolutionary robotics: 
Doncieux and Mouret, "Beyond black-box optimization: a review of selective pressures for evolutionary robotics." Evolutionary Intelligence 7.2 (2014): 71-93.  link to paper


Jump to an author:

Adams(Schultz) * Augustsson * Barlow * Bredeche * Beer * Bongard * Capi * Clune * Chellapilla * Cliff * Cliff(Harvey) * Doncieux * Doya(Capi) * Dorigo * Ficici(Watson) * Floreano * Floreano(Marocco) * Floreano(Mondada) * Floreano(Nolfi) * Fogel * Fogel(Chellapilla) * Fujita(Hornby) * Galeotti(Nelson) * Gallagher(Beer) * Grant(Nelson) * Grefenstette * Grefenstette(Schultz) * Hanagata(Hornby) * Harvey * Harvey(Cliff) * Henderson(Nelson) * Hoffmann * Holland * Hornby * Husbands(Cliff) * Husbands(Harvey) * Husbands(Quinn) * Jakobi * Kortmann(Sprinkhuizen-Kuyper) * Koza * Lehman * Lipson * Lipson(Hornby) * Lipson(Zykov) * Macinnes * Marocco * Mayley(Quinn) * Miglino(Nolfi) * Miller(Cliff) * Mondada * Mondada(Floreano) * Mondada(Nolfi) * Mouret * Nelson * Nolfi * Nolfi(Floreano) * Nordin(Augustsson) * Owens(Fogel) * Paolo(Macinnes) * Pfister(Hoffmann) * Pollack(Hornby) * Pollack(Lipson) * Pollack(Watson) * Postma(Sprinkhuizen-Kuyper) * Quinn * Rhody(Nelson) * Schultz * Schultz(Grefenstette) * Smith(Quinn) * Sprinkhuizen-Kuyper * Stanley * Takamura(Hornby) * Thompson * Urzelai(Floreano) * Walsh(Fogel) * Watson * White(Nelson) * Wolff(Augustsson) * Yokono(Hornby) * Zykov

 


Additional authors:


Ampatzis, Bredeche, Bryson, Gomi, Koos, Schmickl, Christensen, Doncieux, Duarte, Eiben, Hamann, Fussell, Gomes, Hickinbotham, Joachimczak, Lehman, Lessin, Montanier, Mouret, Nitschke, Østman, Shao, Silva, Walker, Stanley, Soros, Trianni, Tuci, Urbano, Friedman, Von Neumann, Adami, Ray, Miikkulainen, Ofria, Meyer, Steels, Postma, Takaya, Groß, Banzhaf, Meeden, Zagal, Haasdijk, Weel, Cheney



 

augustsson-gecco-2002  P. Augustsson, K. Wolff, P. Nordin, “Creation of a learning, flying robot by means of evolution,” in Proceedings of the Genetic and Evolutionary Computation Conference, (GECCO 2002), New York, July 9-13 2002, Morgan Kaufmann, pp. 1279-1285.

Summary:  This paper describes the embodied evolution of flying behavior in a lab robot.  It is one of the few embodied evolution works to use a flying robot, albeit a rather constrained one. A near-aggregate fitness function is used during evolution that measures only height reached or distance traveled, and no other details of the robot’s behavior. As with gait evolution, this work doesn’t fall under the heading of intelligent autonomous robot control, but it is still within the bounds of evolutionary robotics, and novel control strategies are evolved.

Link to URL (PDF format): 

http://fy.chalmers.se/~wolff/AWNGecco2002.pdf

 

beer-ab-1992  R.D. Beer, J.C. Gallagher, “Evolving dynamical neural networks for adaptive behavior,” Adaptive Behavior, vol. 1, no. 1, pp. 91-122, 1992.

Summary:  This is one of the earliest evolutionary robotics papers.  Controllers were evolved for homing and for locomotion in legged robots.  Only simulated robots were used, but portions of the work were later demonstrated on a real hexapod robot (1994).  It is interesting to note that almost every current trend in ER was touched upon in this paper and in Koza’s 1992 paper (koza-ecal-1992).  Beer’s paper clearly demonstrated that neural networks could be evolved to perform simple behavioral tasks.  This result has been repeated many times by subsequent researchers.  Some improvements have been achieved since this paper was published, but no quantum leap has occurred.  With a few notable exceptions, the same basic problems block the further advancement of ER and remain unsolved today.

Link to URL (PDF format):

http://vorlon.cwru.edu/~beer/Papers/EvolvingDNNforAB.pdf
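
The dynamical neural networks evolved in this line of work are continuous-time recurrent neural networks (CTRNNs). As a rough illustration only (not code from the paper), a minimal Euler-integrated CTRNN update might look like the following Python sketch; the network size and parameter values are arbitrary placeholders, and in an ER setting the weights, time constants, and biases would be the quantities tuned by the evolutionary algorithm.

import numpy as np

def ctrnn_step(y, w, tau, theta, inputs, dt=0.01):
    """One Euler step of a CTRNN of the general form
       tau_i * dy_i/dt = -y_i + sum_j w_ji * sigma(y_j + theta_j) + I_i."""
    sigma = 1.0 / (1.0 + np.exp(-(y + theta)))     # neuron firing rates
    dydt = (-y + w.T @ sigma + inputs) / tau       # membrane dynamics
    return y + dt * dydt

# Tiny example: a 5-neuron network driven by two constant sensor inputs.
rng = np.random.default_rng(0)
n = 5
y = np.zeros(n)                      # neuron states
w = rng.normal(0, 1, (n, n))         # connection weights (evolved in ER)
tau = rng.uniform(0.5, 2.0, n)       # time constants (evolved in ER)
theta = rng.normal(0, 1, n)          # biases (evolved in ER)
inputs = np.array([1.0, 0.5, 0.0, 0.0, 0.0])

for _ in range(1000):
    y = ctrnn_step(y, w, tau, theta, inputs)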

 

Bredeche-2010  Bredeche, N., Haasdijk, E., & Eiben, A. E. (2010). On-line, on-board evolution of robot controllers. In Artificial Evolution (pp. 110-121). Springer Berlin Heidelberg.


capi-ab-2005  G. Capi, K. Doya, “Evolution of neural architecture fitting environmental dynamics,” Adaptive Behavior, vol. 13 pp. 53-66, 2005.

Summary:  This paper investigates a complicated sequential task in which a robot (the Cyber Rodent) must visit three goal locations in a specific order.  This work is at the high end of sophistication in ER, at least with regard to the complexity of the task achieved and the avoidance of injecting a priori information into the evolved controllers.

Link to URL (PDF format):

http://www.cns.atr.jp/~doya/papers/Capi2005ab.pdf 

 

chellapilla-ec-2001  K. Chellapilla, D.B. Fogel, “Evolving an expert checkers playing program without using human expertise,”  IEEE Transactions on Evolutionary Computation, vol. 5, no. 4, pp. 422-428, 2001.

Summary: This paper discusses a system for training neural networks to play Checkers using evolutionary algorithms. The work employed win-lose aggregate selection in conjunction with intra-population competition and resulted in extremely competent game-playing networks that performed near the expert human level.  This is seminal work, and it shows conclusively that relatively small neural networks (50 or so neurons) are capable of performing complex tasks at near human levels of competence. In terms of training, the high level of performance achieved was mainly due to an unusual feature of the game of Checkers itself, namely that a game will often end with a win or a loss for one of the players regardless of skill level. Games almost always reach a decisive result even when the players make nearly random moves, and over the whole course of training, the winner of a game is more often than not also the better network.  Because of this, pure win-lose aggregate selection can be used over the whole course of evolution. Checkers is not a trivial game, but these factors are not usually found in non-trivial problems such as autonomous robot control in dynamic environments.  In most complex tasks, including competitive ones, randomly initialized agents generally express no detectable ability to perform the task, and pure aggregate selection fails (the Bootstrap Problem).

Link to URL (PDF format):

http://www.natural-selection.com/Library/2001/IEEE-TEVC.pdf
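
The selection scheme described in the entry above can be sketched abstractly: each network plays games against a sample of its peers, is scored only on game outcomes (no hand-crafted evaluation of play quality), and the accumulated score alone drives selection. The Python sketch below is an illustration of that general idea, not the authors' actual algorithm; the point values, sampling scheme, population handling, and the play_game and mutate callables are placeholders.

import random

def evolve_by_game_outcomes(population, play_game, mutate,
                            generations=100, opponents_per_net=5):
    """Intra-population competitive selection driven purely by game
    outcomes.  play_game(a, b) is assumed to return +1 if a wins,
    0 for a draw, and -1 if a loses (placeholder scoring)."""
    for _ in range(generations):
        scores = [0.0] * len(population)
        for i, net in enumerate(population):
            for _ in range(opponents_per_net):
                j = random.choice([k for k in range(len(population)) if k != i])
                scores[i] += play_game(net, population[j])   # win-lose aggregate
        # keep the better-scoring half, refill by mutating survivors
        ranked = sorted(range(len(population)), key=lambda k: scores[k],
                        reverse=True)
        survivors = [population[k] for k in ranked[:len(population) // 2]]
        population = survivors + [mutate(s) for s in survivors]
    return population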

 

cliff-spie-1993  D. Cliff, P. Husbands, I. Harvey, “General visual robot controller networks via artificial evolution,” in D. Casasent, ed., Proceedings of the Society of Photo-optical Instrumentation Engineers Conference 1993 (SPIE93), Session on Intelligent Robots and Computer Vision XII: Algorithms and Techniques, SPIE Conference vol. 2055, 1993, pp. 271-282.

Summary: This paper presents very early work performed mainly in simulation.  The simulated robots used photoreceptors and tactile sensors and were evolved to perform a position-homing and object-avoidance task.

Link to URL: not available

 

cliff-ecal-1995  D. Cliff, G.F. Miller, “Tracking the Red Queen: measurements of adaptive progress in co-evolutionary simulations,” in Proceedings of the Third European Conference on Artificial Life: Advances in Artificial Life (ECAL95). F. Moran, A. Moreno, J.J. Merelo, P. Cachon eds., Lecture Notes in Artificial Intelligence 929, Springer-Verlag, 1995, pp. 200-218.

Summary: This paper discusses the co-evolution of predator and prey behaviors in simulated robots. The work is contemporary with nolfi-al-1995, and the task investigated is similar. The authors discuss the Red Queen Effect and note that training fitness values can’t be used to monitor absolute fitness or the progress of evolution.  They point out that the complexities involved in ER violate many of the simplifying assumptions used to generate theories about the dynamics of population-based evolution.  As is also implied in nolfi-al-1995, the authors claim that there is no theory from AL, EC, or even evolutionary biology that is directly relevant to fitness evaluation in evolving competing populations.

 Link to URL (PDF format):

http://citeseer.ist.psu.edu/cache/papers/cs/40/http:zSzzSzwww.cogs.susx.ac.ukzSzuserszSzdaveczSzecal_crc19.pdf/cliff95tracking.pdf


Clune-2013  Clune, J., Mouret, J. B., & Lipson, H. (2013). The evolutionary origins of modularity. Proceedings of the Royal Society of London B: Biological Sciences, 280(1755), 20122863.

Doncieux-2014  Doncieux, S., & Mouret, J. B., "Beyond black-box optimization: a review of selective pressures for evolutionary robotics." Evolutionary Intelligence 7.2 (2014): 71-93.  link to paper


Dorigo-2004 Dorigo, M., Trianni, V., Şahin, E., Groß, R., Labella, T. H., Baldassarre, G., ... & Gambardella, L. M. (2004). Evolving self-organizing behaviors for a swarm-bot. Autonomous Robots, 17(2-3), 223-245.
 


floreano-icsab-1998  D. Floreano, S. Nolfi, F. Mondada, “Competitive co-evolutionary robotics: from theory to practice,” in From Animals to Animats 5, Proceedings of the Fifth International Conference on Simulation of Adaptive Behavior (SAB'1998), R. Pfeifer, ed., Cambridge, MA, MIT Press, 1998.

Summary: This paper essentially reproduces the simulation results of the authors’ earlier work from 1995 (nolfi-al-1995), but using real robots.  Competing populations of predator and prey robots were evolved.  As in the earlier work, cycling patterns of pursuit and evasion strategies were seen over the course of evolution after competence was achieved.

Link to URL (PDF format):

http://lis.epfl.ch/publications/sab98.pdf

 

floreano-nn-2000  D. Floreano, and J. Urzelai, “Evolutionary robots with on-line self-organization and behavioral fitness,” Neural Networks, vol. 13, no. 4-5, pp. 431-443, June 2000.

Summary: This paper presents a substantially extended version of earlier reported work (floreano-smcb-1996 and floreano-ecal-1999) involving embodied evolution and a relatively complicated task with a sequential element. The task involves the robot moving to one position in an arena before moving to and remaining at a second position.  Positions are marked by black bands, and a light source is switched on as soon as the robot reaches the first homing position. Here, Hebbian synaptic learning rules are evolved rather than fixed network weights. This work provides an example of a controller that is more than purely reactive. Cross-platform evolution between Khepera and Koala robots is also considered, in addition to the use of a low-resolution vision system.

Link to URL (PDF format): 

 http://microtechnique.epfl.ch/isr/east/
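
The adaptive-synapse approach mentioned in the entry above contrasts with evolving fixed weights: the genotype encodes, for each synapse, which Hebbian-style update rule it uses and how fast it learns, and the weights then self-organize while the robot behaves. The Python fragment below is only a schematic illustration of that idea under assumed rule forms, parameter names, and ranges; it is not the authors' implementation.

import numpy as np

# Four Hebbian-style rules; the exact forms here are illustrative placeholders.
def hebb_update(rule, w, pre, post, eta):
    if rule == 0:                              # plain Hebb
        dw = pre * post
    elif rule == 1:                            # postsynaptic-gated
        dw = post * (pre - w)
    elif rule == 2:                            # presynaptic-gated
        dw = pre * (post - w)
    else:                                      # covariance-like
        dw = (pre - 0.5) * (post - 0.5)
    return float(np.clip(w + eta * dw, -1.0, 1.0))

# Genotype: for every synapse, an evolved rule index and learning rate.
n_in, n_out = 4, 2
rng = np.random.default_rng(1)
rules = rng.integers(0, 4, (n_in, n_out))      # evolved
etas = rng.uniform(0.0, 0.5, (n_in, n_out))    # evolved
w = rng.uniform(-0.1, 0.1, (n_in, n_out))      # re-randomized each trial

for step in range(100):                        # robot "lifetime"
    pre = rng.uniform(0, 1, n_in)              # stand-in sensor activations
    post = 1 / (1 + np.exp(-(pre @ w)))        # motor-neuron activations
    for i in range(n_in):
        for j in range(n_out):
            w[i, j] = hebb_update(rules[i, j], w[i, j], pre[i], post[j],
                                  etas[i, j])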

 

fogel-1966  L. Fogel, A. Owens, M. Walsh, Artificial Intelligence Through Simulated Evolution, John Wiley & Sons, Inc., New York, 1966.

Summary: Many ideas laid out in the introduction of this book are nearly identical to those presented in current work 40 years later under the rubric of ER.  A series of experiments using artificial evolution to derive finite state machines (FSMs) for repetitive string recognition and number pattern prediction tasks is performed and presented (in much more depth than might be expected in modern work). Some of these concepts reappear in Holland’s seminal monograph of 1975 (holland-1975).

Link to URL: not available

 

grefenstette-mlwrl-1994  J. Grefenstette, A. Schultz, “An evolutionary approach to learning in robots,” Machine Learning Workshop on Robot Learning, New Brunswick, 1994.

Summary:  The task investigated in this work is goal homing with object avoidance using the SAMUEL robot platform.  This is early work in the field of ER, contemporary with cliff-spie-1993, mondada-jras-1995, and nolfi-iwal-1994. The authors expected their systems and methods to scale up to systems with complex capabilities. Over a decade later this has happened only to a limited degree. There are now many such architectures and methods, most of which are still applied to simple tasks similar to the ones investigated in these early works.

Link to URL (postscript only available): http://cs.gmu.edu/research/gag/pubs.html

ftp://ftp.aic.nrl.navy.mil/pub/papers/1994/AIC-94-014.ps.gz

 

harvey-faa-1994  I. Harvey, P. Husbands, D. Cliff, “Seeing the light: artificial evolution, real vision,” in D. Cliff, P. Husbands, J.-A. Meyer, S. Wilson eds., From Animals to Animats 3, Proc. of 3rd Intl. Conf. on Simulation of Adaptive Behavior, SAB94, MIT Press/Bradford Books, Boston, MA, 1994, pp. 392-40.

Summary: The paper describes evolution using a gantry robot that could move a camera about a real environment to generate real sensor values.  Robot controllers were evolved to perform a location-homing task. This is early work using partially embodied incremental evolution.

Link to URL: unavailable

 

hoffmann-ipmu-1996  F. Hoffmann, G. Pfister,  “Evolutionary learning of a fuzzy control rule base for an autonomous vehicle,” in Proceedings of the Fifth International Conference IPMU: Information Processing and Management of Uncertainty in Knowledge-Based Systems, Granada, Spain, July 1996, pp. 659-664.

Summary: In this work fuzzy logic rule sets were evolved to control a robot performing a goal homing with object avoidance task.  The system evolves both the rules and the rule parameters.

Link to URL (PDF format): 

http://esr.e-technik.uni-dortmund.de/Hoffmann/ipmu96.pdf

 

holland-1975  J.H. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, Michigan, 1975.

Summary: This classic monograph introduces evolutionary algorithms and many mark this as the starting point of the field of evolutionary computation. It presents Holland’s schema theory and bit-string representation.  It also presents some theorems, but many of these do not generalize to more complex continuous genome spaces.

Link to Publisher URL:

http://mitpress.mit.edu

 

hornby-ices-2000  G.S. Hornby,  S. Takamura,  J. Yokono,  O. Hanagata,  M. Fujita, J. Pollack,  “Evolution of controllers from a high-level simulator to a high DOF robot,” in Evolvable Systems: from Biology to Hardware; Proceedings of the Third International Conference (ICES 2000), J. Miller, Ed., Lecture Notes in Computer Science, vol. 1801, Springer, 2000, pp. 80-89.

Summary: This evolutionary robotics paper uses a form of meet-in-the-middle simulation to facilitate coupling to reality: real sensor inputs are processed, and the simulator models sensors at the level of those processed values. A “ball chasing” task was studied.  A fairly complex simulator allows tasks learned in simulation to be transferred to a real 18 DOF AIBO. This is also one of only a handful of ER works that used mixed-neuron controllers.

Publisher URL:

http://www.springerlink.com/link.asp?id=vdw4978vgtmvpepg

 

hornby-icra-2001  G.S Hornby, H. Lipson, J.B. Pollack, “Evolution of generative design systems for modular physical robots,” in Proceedings of the IEEE International Conference on Robotics and Automation  (ICRA’01), vol. 4, 2001, pp. 4146-4151.

Summary: This paper evolves morphology and controllers for modular robots in simulation and then transfers some of them to reality by building them with Tinkerbots (servos attached to Tinker-Toys). The task studied here was locomotion.  This work presents an example of feasible complex body evolution using modular design.  See lipson-n-2000 and macinnes-al-2004 for similar work exploiting modularity to cross the reality gap.

Link to URL (PDF format): 

http://demo.cs.brandeis.edu/papers/hornby_icra01.pdf 

 

jakobi-1998 N. Jakobi, “Running across the reality gap: octopod locomotion evolved in a minimal simulation,” in Evolutionary Robotics: First European Workshop, EvoRobot98, P. Husbands, J.A. Meyer, eds., Springer-Verlag, 1998, pp. 39-58.

Summary: This paper presents minimal simulation as a method to increase the speed of evolution in simulation.  Walking gaits for an octopod robot were evolved. The first step in defining a minimal simulation requires a precise definition of the behavior to be evolved.  The author reports that controllers transferred satisfactorily to real robots, but notes that this does not imply a general result.  It is difficult to apply minimal simulation to evolve novel behaviors in uncharacterized domains because sufficient information to design the simulation may not be available.

Link to URL: (unavailable)

 

koza-ecal-1992  J.R. Koza, “Evolution of subsumption using genetic programming,” in F.J. Varela, P. Bourgine, eds., Toward a Practice of Autonomous Systems: Proceedings of the First European Conference on Artificial Life, pp. 110-119, The MIT Press, Cambridge, MA, 1992.

Summary: This paper presents very early evolutionary robotics research using genetic programming (GP). A wall-following task was studied.  The work was performed in simulation only, but should still be considered pioneering work since it predates almost all of the other work in this field, most of which has used neural networks.

Link to URL (PDF format):

http://www.genetic-programming.com/jkpdf/ecal1991.pdf

 

Lehman-2011  Lehman, J., & Stanley, K. O. (2011). Abandoning objectives: Evolution through the search for novelty alone. Evolutionary computation, 19(2), 189-223.

lipson-n-2000  H. Lipson, J.B. Pollack, “Automatic design and manufacture of robotic lifeforms,” Nature, vol. 406, no. 6799, pp. 974-978, Aug. 31, 2000.

Summary: The paper describes a system in which whole robots are evolved in simulation and then constructed in the real world using modular actuators and physical units.  An aggregate fitness function was used to select robots for the ability to move in a planar environment. This work used modularity of design to cross the reality gap while still allowing for a high degree of diversity of evolved forms.

Link to URL (PDF format):

 http://www.mae.cornell.edu/ccsl/papers/Nature00_Lipson.pdf

 

macinnes-al-2004  I. Macinnes, E. Di Paolo, “Crawling out of the simulation: evolving real robot morphologies using cheap, reusable modules,” in Proceedings of the International Conference on Artificial Life (ALIFE9), Boston, MA, Sept. 12-15, MIT Press, 2004, pp. 94-99.

Summary:  This paper presents the co-evolution of morphologies and controllers (bodies and brains).  Along with lipson-n-2000 and hornby-icra-2001, this is one of the best body-evolution platforms because it allows for the relatively unconstrained evolution and construction of an almost unlimited range of body architectures.  Lego parts and servos with Lego plates attached were used.  Neural network controllers were co-evolved for a simple locomotion task in a planar environment.

Link to URL (PDF format):

http://www.informatics.susx.ac.uk/users/ianma/alife9_ianma.pdf

 

Mouret-2009  Mouret, J. B., & Doncieux, S. (2009, May). "Overcoming the bootstrap problem in evolutionary robotics using behavioral diversity." In Evolutionary Computation, 2009. CEC'09. IEEE Congress on (pp. 1161-1168). IEEE.


marocco-faa-2002  D. Marocco, D. Floreano, “Active vision and feature selection in evolutionary behavioral systems,” in From Animals to Animats 7, J. Hallam, D. Floreano, G. Hayes, J, Meyer, eds., Cambridge, MA, MIT Press, 2002.

Summary: This paper discusses an evolutionary robotics vision system that consists of a 5x5 array of black and white pixels. Both the acquisition of visual features (active vision) and the neural controllers were evolved using embodied evolution. This work represents a significant improvement in the sophistication of sensory-motor systems used in ER, but generalization to difficult tasks is not addressed, and the authors use a simplistic hand-formulated fitness function applied to a relatively simple locomotion task.

Link to URL (PDF format):

http://lis.epfl.ch/publications/sab02-dm.pdf

 

mondada-jras-1995 F. Mondada, D. Floreano, “Evolution of neural control structures: Some experiments on mobile robots,” Robotics and Autonomous Systems, vol. 16, pp. 183-195, 1995.

Summary: This paper presents three experiments in embodied evolution using a Khepera robot and a laser communications fixture. The tasks were 1) locomotion with object avoidance; 2) locomotion with periodic goal homing; and 3) object acquisition, in which the robot locates and picks up small objects.  These three tasks, especially the object acquisition task, represent some of the most complex tasks attempted in the early years of ER.

Link to URL:

http://asl.epfl.ch/aslInternalWeb/ASL/publications/uploadedFiles/mondada.RobAuto96.pdf

 

nelson-jras-2006  A.L. Nelson, E. Grant, “Using direct competition to select for competent controllers in evolutionary robotics,” Robotics and Autonomous Systems, vol. 54, no. 10, pp. 840-857, Oct. 2006.

Summary: The research reported in this paper used an intra-population competitive fitness function to evolve robot controllers for a competitive team search task.  The robots operated in complex maze environments, used video for all sensing of their environment, and were controlled by very large, arbitrarily connected evolved neural networks.  This work demonstrated that relatively high levels of performance can be evolved if robots are forced to compete directly against one another, rather than simply attempting to maximize a static fitness function.

Link to URL (PDF format):

http://www.nelsonrobotics.org/paper_archive_nelson/nelson-jras-2006.pdf

 

nelson-iros-2003  A.L. Nelson, E. Grant, G.J. Barlow, T.C. Henderson, “A colony of robots using vision sensing and evolved neural controllers,” in Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS03), Las Vegas NV, Oct. 27-31, 2003, pp 2273-2278.

Summary:  This paper discusses the vision component of the EvBot I evolutionary robotics platform.  This system evolves controllers using the largest amount of vision-based sensor information of any evolutionary robotics research platform reported at the time (on the order of 150 processed vision inputs).  Video images are processed in two stages: 1) colors are detected at the pixel level, and 2) vertical columns of the image are aggregated using a nonlinear summation for each detected color.  The resulting vectors (one for each color) are fed to the evolving neural controllers.  Human bias injected into the vision system is reflected in the choice of colors to detect and the method of nonlinear summation. Color segmentation per se was not used, but the columnar summing process could be considered a form of 2D segmentation.  However, the controllers themselves were given no knowledge of what the numbers in the vectors represented.

Link to URL (PDF format):

http://www.nelsonrobotics.org/paper_archive_nelson/nelson-iros-2003.pdf
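
The columnar aggregation described in the entry above can be illustrated as follows: detect the target colors per pixel, then collapse each vertical column of the resulting binary mask with a saturating (nonlinear) sum, producing one vector per color with length equal to the image width. The Python sketch below is an assumed reconstruction of that general processing idea, not the actual EvBot code; the color thresholds, image size, and saturating function are placeholders.

import numpy as np

def color_column_features(rgb, color_thresholds, saturation=10.0):
    """Reduce an RGB image to one feature vector per detected color:
    1) classify each pixel as belonging to a color of interest,
    2) sum each vertical column of the mask and pass it through a
       saturating nonlinearity.  Returns shape (n_colors, width)."""
    features = []
    for lo, hi in color_thresholds:                       # per-color RGB ranges
        mask = np.all((rgb >= lo) & (rgb <= hi), axis=2)  # pixel-level detection
        col_counts = mask.sum(axis=0).astype(float)       # pixels per column
        features.append(np.tanh(col_counts / saturation)) # nonlinear summation
    return np.array(features)

# Example with a random 120x160 image and two placeholder color ranges.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (120, 160, 3), dtype=np.uint8)
thresholds = [((150, 0, 0), (255, 100, 100)),   # "red-ish"
              ((0, 150, 0), (100, 255, 100))]   # "green-ish"
vecs = color_column_features(img, thresholds)
print(vecs.shape)   # (2, 160); flattened, these would feed the evolved network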

 

nelson-jras-2004  A.L. Nelson, E. Grant, J.M. Galeotti, S. Rhody, “Maze exploration behaviors using an integrated evolutionary robotics environment,” Journal of Robotics and Autonomous Systems, vol. 46, no. 3, pp. 159-173, Mar. 2004.

Summary: This paper discusses the evolution of locomotion and object avoidance behaviors using a mobile robot equipped only with 5 tactile sensors.  Relatively complex neural network controller structures were evolved that allow for non-reactive behaviors to accommodate the extreme perceptual aliasing produced by using only 5 binary sensors for detection of the environment. Controllers were demonstrated to have gained temporal processing abilities.

Link to URL (PDF format):

http://www.nelsonrobotics.org/paper_archive_nelson/nelson-jras-2004a.pdf
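
The role of temporal processing under perceptual aliasing, discussed in the entry above, can be illustrated with a generic recurrent controller: a hidden state that integrates sensor history lets the controller respond differently in world states that produce identical instantaneous sensor readings. The Python sketch below is an illustrative stand-in (an Elman-style network with arbitrary sizes and random weights), not the controller structure evolved in the paper.

import numpy as np

def recurrent_controller_step(sensors, hidden, params):
    """One step of a small recurrent controller.  The hidden state carries
    information about past readings, which is what allows different motor
    responses to identical (aliased) sensor vectors."""
    W_in, W_rec, W_out = params
    hidden = np.tanh(W_in @ sensors + W_rec @ hidden)
    motors = np.tanh(W_out @ hidden)
    return motors, hidden

# 5 binary tactile sensors, a few hidden neurons, 2 motor outputs.
rng = np.random.default_rng(0)
n_sensors, n_hidden, n_motors = 5, 8, 2
params = (rng.normal(0, 1, (n_hidden, n_sensors)),   # in ER these weights
          rng.normal(0, 1, (n_hidden, n_hidden)),    # would be evolved
          rng.normal(0, 1, (n_motors, n_hidden)))
hidden = np.zeros(n_hidden)

# The same sensor vector can occur in different places in the maze; the
# motor response can still differ because the hidden state differs.
aliased_reading = np.array([1, 0, 0, 0, 1], dtype=float)
for _ in range(3):
    motors, hidden = recurrent_controller_step(aliased_reading, hidden, params)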

 

nolfi-iwal-1994  S. Nolfi, D. Floreano, O. Miglino, F. Mondada, “How to Evolve Autonomous Robots: Different Approaches in Evolutionary Robotics,” in R.A. Brooks, P. Maes eds., Proceedings of the IV International Workshop on Artificial Life, Cambridge, MA, MIT Press, 1994.

Summary: This paper discusses some of the first embodied evolution experiments.  Nolfi and Floreano present their original experiments in this paper.  Small robots were evolved for the task of straight motion with object avoidance.  A physical robot was used during evolution.  Controllers were downloaded to the robot, tested and evaluated during each generation of the genetic algorithm.  Small neural networks (about 10 neurons) were used as the evolved controllers.  The authors also performed similar experiments in simulation with resulting evolved controllers being transferred to real robots.

Link to URL (PDF format):

http://lis.epfl.ch/publications/nolfi.alife4.pdf

 

nolfi-al-1995  S. Nolfi, D. Floreano, “Co-evolving predator and prey robots: Do 'arms races' arise in artificial evolution?” Artificial Life, vol. 4, no. 4, pp. 311-335, 1998.

Summary: This paper introduces Nolfi and Floreano’s co-evolution work and discusses simulation results only.  Similar experiments using real robots appear later in floreano-icsab-1998. The authors use a binary aggregate co-competitive fitness function and posit that co-evolution may produce increasingly complex strategies in response to a changing fitness landscape (the Red Queen Effect).  This is seen to a degree, but many of the evolution runs resulted in fitness oscillations in the species that correlated with cycling strategies. The authors address this issue of behavioral cycling at sub-optimal fitness levels by employing a “Hall of Fame” selection strategy, and demonstrate limited improvement.  The authors also demonstrate an example of the Bootstrap Problem by showing that nascent predators have difficulty evolving against fully evolved (but fixed) prey.

This is a relatively important work in simulated evolutionary robotics and is similar to that of cliff-ecal-1995.

Link to URL (PDF format):

http://asl.epfl.ch/aslInternalWeb/ASL/publications/uploadedFiles/nolfi.co-evol.pdf
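
The "Hall of Fame" strategy mentioned in the entry above replaces evaluation purely against the current opposing population with evaluation that also includes champions archived from earlier generations, which discourages strategy cycling. The Python sketch below illustrates that selection step only; the population handling, sampling, scoring, and the evaluate and mutate callables are placeholders, not the authors' implementation.

import random

def coevolve_with_hall_of_fame(predators, prey, evaluate, mutate,
                               generations=50, opponents=6):
    """Two-population co-evolution in which individuals are also scored
    against archived champions.  evaluate(pred, prey) is assumed to
    return (pred_fitness, prey_fitness)."""
    hall_pred, hall_prey = [], []
    for _ in range(generations):
        pred_scores = [0.0] * len(predators)
        prey_scores = [0.0] * len(prey)
        for i, p in enumerate(predators):
            pool = prey + hall_prey                      # current + archived prey
            for q in random.sample(pool, min(opponents, len(pool))):
                pred_scores[i] += evaluate(p, q)[0]
        for j, q in enumerate(prey):
            pool = predators + hall_pred                 # current + archived predators
            for p in random.sample(pool, min(opponents, len(pool))):
                prey_scores[j] += evaluate(p, q)[1]
        # archive this generation's champions
        hall_pred.append(predators[pred_scores.index(max(pred_scores))])
        hall_prey.append(prey[prey_scores.index(max(prey_scores))])
        # truncation selection plus mutation for both populations
        def next_gen(pop, scores):
            ranked = sorted(range(len(pop)), key=lambda k: scores[k], reverse=True)
            keep = [pop[k] for k in ranked[:len(pop) // 2]]
            return keep + [mutate(x) for x in keep]
        predators, prey = next_gen(predators, pred_scores), next_gen(prey, prey_scores)
    return predators, prey, hall_pred, hall_prey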

 

nolfi-jras-1997  S. Nolfi, “Evolving non-trivial behaviors on real robots,” Robotics and Autonomous Systems, vol. 22, no. 3-4, pp. 187-198, 1997.

Summary: This paper presents research into the evolution of a complex behavior using a complex, hand-formulated, task-specific fitness function.  The task was object foraging and deposition, and it included an inherent sequential element. The task was decomposed by hand, sub-behavior controllers were generated (by various means, mostly evolved), and an overall controller was then evolved to coordinate the sub-behaviors.  The author notes that natural evolution optimizes nothing except the ability to propagate, and suggests that evolutionary computation is an uncharacterized distortion of natural evolution.

Link to URL (PDF format):

http://gral.ip.rm.cnr.it/nolfi/papers/nolfi.gripper2.pdf

 

quinn-iwbir-2002  M. Quinn, L. Smith, G. Mayley, P. Husbands, “Evolving team behavior for real robots,” in EPSRC/BBSRC International Workshop on Biologically-Inspired Robotics: The Legacy of W. Grey Walter (WGW '02), Aug. 14-16, 2002, HP Bristol Labs, U.K.

Summary: This paper presents (relatively) recent work evolving robot controllers for a complex behavior.  A flocking task was studied in which three robots are to move while remaining within sensor range (two body lengths) of each other. The robots were controlled remotely.  The fitness function used was fairly complex; it didn’t define every detail of the evolved behavior, but it had the effect of curtailing the search space to a very large degree.  Small networks with on the order of 6 internal neurons were used.  The authors also note that the quality of transference is difficult to gauge quantitatively or qualitatively.  Similar results were published in quinn-ical-2002.

Link to URL (PDF format):

http://www.cogs.susx.ac.uk/users/matthewq/

 

schultz-flairs-1996  A.C. Schultz, J.J. Grefenstette, W. Adams, “RoboShepherd: learning a complex behavior,” in Robotics and Manufacturing: Recent Trends in Research and Applications, vol. 6, pp. 763-768, 1996. (Schultz references the work as: Presented at RoboLearn, The Robotics and Learning workshop at FLAIRS ‘96, 1996.)

Summary:  In this work, populations of stimulus-response rule sets (controllers or strategies) were evolved for an agent herding task.  The “sheep” agent had a fixed control strategy while the “herd dog” strategy was evolved.  The behaviors achieved, evolved using a tailored, near-aggregate fitness function, rank among the most complex achieved in the early years of ER research.

Link to URL (PDF format): 

http://www.cse.unr.edu/~monica/Courses/CS493-790/PapersPDF/Schultz_RoboShepherd.pdf

 

sprinkhuizen-kuyper-bnaic-2000  I.G. Sprinkhuizen-Kuyper, R. Kortmann, E.O. Postma. “Fitness functions for evolving box-pushing behaviour,” in Proceedings of the Twelfth Belgium-Netherlands Artificial Intelligence Conference, eds. A. van den Bosch, H. Weigand, 2000,  pp. 275-282.

Summary:  This paper compares four separate fitness functions for the same box-pushing task, and is one of the few papers that is focused on fitness function formulation.  All of the fitness functions tested converged upon solutions within 250 generations.   Some of the controllers evolved in simulation were tested in real robots. 

Link to URL (PDF format):

http://www.cs.unimaas.nl/~kortmann/publications/sprinkhuizen-kuyper-etal.pdf 
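
To make the idea of comparing fitness function formulations concrete, the Python sketch below contrasts two styles that could be applied to the same box-pushing trial: a purely aggregate function that scores only the net box displacement, and a tailored function that also rewards an intermediate behavior (staying near the box). These two functions, their parameters, and the trial-log format are illustrative placeholders, not the four functions evaluated in the paper.

import math

def aggregate_fitness(trial):
    """Score only the end result: how far the box moved during the trial.
    `trial` is assumed to hold lists of (x, y) positions over time."""
    (x0, y0), (x1, y1) = trial["box"][0], trial["box"][-1]
    return math.hypot(x1 - x0, y1 - y0)

def tailored_fitness(trial, near=0.5, w_push=1.0, w_near=0.1):
    """Also reward staying close to the box; this shapes the search but
    injects designer knowledge about how the task should be solved."""
    push = aggregate_fitness(trial)
    near_bonus = sum(
        1.0 for (rx, ry), (bx, by) in zip(trial["robot"], trial["box"])
        if math.hypot(rx - bx, ry - by) < near
    ) / len(trial["box"])
    return w_push * push + w_near * near_bonus

# Minimal fake trial log: the robot approaches the box and pushes it.
trial = {"robot": [(0.0, 0.0), (0.4, 0.0), (0.8, 0.0), (1.2, 0.0)],
         "box":   [(1.0, 0.0), (1.0, 0.0), (1.1, 0.0), (1.5, 0.0)]}
print(aggregate_fitness(trial), tailored_fitness(trial))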

 

Stanley-2002  Stanley, K. O., & Miikkulainen, R. (2002). Evolving neural networks through augmenting topologies. Evolutionary computation, 10(2), 99-127.

thompson-iceh-1996  A. Thompson, “Evolving electronic robot controllers that exploit hardware resources,” in Advances in Artificial Life: Proceedings of the 3rd European Conference on Artificial Life (ECAL95), Lausanne, Springer-Verlag, vol. 929, 1995, pp. 640-656.

Summary: The paper presents an early example of ER with evolvable hardware.  The hardware encoded controllers as evolvable dynamic state machines.  Robot hardware controllers were evolved for locomotion and object avoidance in custom lab robots.

Link to URL (PDF format):

http://citeseer.csail.mit.edu/cache/papers/cs/801/http:zSzzSzwww.cogs.susx.ac.ukzSzuserszSzadrianthzSzecal95zSzpaper.pdf/thompson95evolving.pdf

 

watson-cec-1999  R.A. Watson, S.G. Ficici, J.B. Pollack, “Embodied evolution: embodying an evolutionary algorithm in a population of robots,” in P. Angeline, Z. Michalewicz, M. Schoenauer, X. Yao, A. Zalzala, eds., The 1999 Congress on Evolutionary Computation, IEEE Press, 1999, pp. 335-342.

Summary: This paper presents research investigating fully embodied evolutionary algorithms.  Robots were evolved to move toward a light source (phototaxis). Not only were the controllers evolved on real robots, but no evolutionary supervisor was used: robots asynchronously broadcast their controller parameters to nearby, less-fit robots, so that evolution proceeds through direct robot-to-robot interaction.

Link to URL (PDF format):

http://www.cs.plu.edu/courses/csce330/arts/robots1.pdf
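
The distributed, supervisor-free scheme described in the entry above can be sketched roughly as follows: each robot occasionally broadcasts pieces of its genome at a rate tied to its current fitness, and a receiving robot overwrites its own genes with a probability that decreases with its own fitness. The Python sketch below is only an assumed abstraction of the general embodied-evolution idea; the probability scalings, genome encoding, mutation, and update loop are placeholders and not the published algorithm.

import random

class EmbodiedEvolutionRobot:
    """Minimal abstraction of a robot running a fully distributed EA:
    no central supervisor; evolution happens through local broadcasts."""
    def __init__(self, genome_length=10):
        self.genome = [random.uniform(-1, 1) for _ in range(genome_length)]
        # Placeholder fitness; in reality this would be measured from
        # behavior, e.g. how often the robot recently reached the light.
        self.fitness = random.random()

    def maybe_broadcast(self):
        """Broadcast one gene with probability increasing with fitness."""
        if random.random() < min(1.0, self.fitness):
            i = random.randrange(len(self.genome))
            return (i, self.genome[i])
        return None

    def maybe_accept(self, message):
        """Accept a received gene with probability decreasing with the
        receiver's own fitness, adding a small mutation."""
        if message and random.random() > min(1.0, self.fitness):
            i, value = message
            self.genome[i] = value + random.gauss(0, 0.05)

# One abstract "time step" for a small group of robots in radio range.
robots = [EmbodiedEvolutionRobot() for _ in range(4)]
for sender in robots:
    msg = sender.maybe_broadcast()
    for receiver in robots:
        if receiver is not sender:
            receiver.maybe_accept(msg)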

 

zykov-gecco-2004  V. Zykov, J. Bongard, H. Lipson, “Evolving dynamic gaits on a physical robot,” 2004 Genetic and Evolutionary Computation Conference (GECCO), Seattle, WA., 2004.

Summary: This paper presents a nice example of the embodied evolution of walking gaits in a physical robot. The robot was a pneumatic hexapod of minimalist design.  For fitness determination, the distance walked by the robot was measured using images taken from an overhead camera. This is relatively recent work (at the time of writing) using new methods and an aggregate fitness function.

Link to URL (PDF format):

http://www.mae.cornell.edu/ccsl/papers/GECCO04_Zykov.pdf

 

 

A current review of open-ended evolution in evolutionary robotics:  “Embodied Artificial Life at an Impasse: Can Evolutionary Robotics Methods Be Scaled?” Proceedings of the 2014 IEEE Symposium Series on Computational Intelligence (IEEE SSCI’14), Orlando, FL, Dec. 9-12, 2014.  link to paper

A comprehensive review of the first two decades of evolutionary robotics research: “Fitness functions in evolutionary robotics: A survey and analysis,” Robotics and Autonomous Systems, vol. 57, no. 4, pp. 345-370, Apr. 2009. link to paper



This page is maintained by A. Nelson
  All artwork ©1990-2017 A.L. Nelson. All rights reserved.
  Site administrator contact: alnelson @ ieee dot org
 
 
©2015 A.L. Nelson