The key aim of cooperating air traffic control agents (CATCA) was to comparatively analyze and evaluate the success of a multiagent system (MAS) and a single agent system (SAS) at addressing a given set of goals in a problem domain amenable to either a MAS or a SAS. The problem domain selected was air traffic control (ATC). The success of the MAS and SAS was evaluated in accordance with a set of objective criteria. The results attained support the hypothesis that the MAS is successful at addressing the given set of goals in simulation scenarios with a large number of aircraft, and the SAS is successful at addressing the same set of goals in simulation scenarios with a small number of aircraft. Success is defined as the number of cases within a given simulation scenario in which an agent detects and resolves potential air traffic conflicts and directs aircraft to adhere to expected flight times. The results attained are applicable in problem domains other than ATC, such as intelligent hospital scheduling (Decker, 1996) and road traffic congestion and traffic jam simulation (Cremer et al., 1994).
This research concerns the comparison of three artificial evolution approaches to the design of cooperative behavior in a group of simulated mobile robots. The first and second approaches, termed single pool and plasticity, are characterized by robot controllers that share a single genotype, though the plasticity approach includes a learning mechanism. The third approach, termed multiple pools, is characterized by robot controllers that use different genotypes. The application domain implements a pursuit-evasion game in which a team of robots, termed pursuers, collectively works to capture one or more robots from a second team, termed evaders. Results indicate that the multiple pools approach is superior to the other two approaches in terms of measures defined for evader-capture strategy performance. Specifically, this approach facilitates behavioral specialization in the pursuer team, allowing it to be effective for several different pursuer team sizes.
This paper is a preliminary study of the types of collective behavior tasks that are best solved by neuro-evolution (NE). This research tests the hypothesis that for a multi-rover task, the best approach for deriving effective collective behaviors is to evolve complete artificial neural network (ANN) controllers and then combine controller behaviors in a collective behavior context. Such methods are …
This paper presents a simulation of predator (pursuer) and prey (evader) agents operating within a competitive co-evolution process. The aim of the study was to investigate the effects of different resource (food for the prey) distributions and amounts on the adaptation of predator (pursuit) and prey (evasion) behaviors. Predator and prey use Artificial Neural Network (ANN) controllers to simulate behavior, …
The research goal was to engineer an agent collective that most effectively accomplished a cooperative gathering task. The proper setting of agent controller parameters was vital for finding agent behavior that most effectively gathered a high quantity and quality of resources in an unknown virtual environment. We tested the efficacy of evolutionary design of agent controller parameters by testing evolved parameters in a non-evolutionary control experiment. The effectiveness of evolutionary design for the given task was further supported by comparing evolved parameters with a quality space sampling methodology that explored the parameter space for regions producing a high cooperatively gathered value for the agent collective. Results indicated that the evolutionary approach was able to find agent controller settings that accomplished the task with a high level of performance.
This paper reports upon two adaptive approaches for deriving words in an artificial language simulation. The efficacy of a Particle Swarm Optimization (PSO) method versus an Artificial Evolution (AE) method was examined for the purpose of adapting communication between agents. The objective of the study was for agents to derive a common (shared) lexicon for talking about food resources in the simulation environment. In the simulation, communication was essential for agent survival and as such facilitated lexicon adaptation. Results indicated that PSO was effective at adapting agents to quickly converge to a common lexicon, where, on average, one word for each food type was derived. AE required more method iterations to converge to a common lexicon that contained, on average, multiple words for each food type. However, there was greater word diversity in the lexicon converged upon by AE evolved agents, compared to that converged upon by PSO adapted agents.
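The abstract does not state how a lexicon is encoded as particle state, so the following is only a sketch of the canonical PSO velocity and position update (Kennedy and Eberhart) on real-valued vectors; the inertia and acceleration coefficients are assumed illustrative values, not the paper's settings.

```python
import random

def pso_step(positions, velocities, pbest, gbest,
             w=0.7, c1=1.4, c2=1.4, rng=random):
    """One canonical PSO update: each particle's velocity is pulled
    toward its personal best (pbest) and the swarm's global best
    (gbest), then its position is moved by the new velocity.
    Lists are updated in place."""
    for i in range(len(positions)):
        for d in range(len(positions[i])):
            r1, r2 = rng.random(), rng.random()
            velocities[i][d] = (w * velocities[i][d]
                                + c1 * r1 * (pbest[i][d] - positions[i][d])
                                + c2 * r2 * (gbest[d] - positions[i][d]))
            positions[i][d] += velocities[i][d]
    return positions, velocities
```

Under this rule, repeated steps draw the swarm toward the best-known solution, which matches the abstract's observation of fast convergence to a shared lexicon.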
This paper describes user-supervised Evolutionary Algorithm (EA) experiments that investigate the evolution of a sensible fictional dialogue. A user-supervised EA was used given the difficulty of defining a fitness function for evolved-art tasks. Two EAs were tested for the task of evolving dialogue given an English word population. The EAs required user-assigned fitness values to be given as input with varying degrees of frequency during the evolutionary process. The success of the EAs was comparatively evaluated with respect to two-point recombination and a novel complement gene scan operator. Task performance was evaluated according to average fitness, word and genotype diversity, and the number of words used in the fittest evolved dialogue. Results indicated that for both EAs, complement gene scan was more effective for evolving complex, sensible, and grammatically correct dialogue, compared to sentences evolved by the EAs using two-point recombination.
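Two-point recombination, the baseline operator above, is a standard EA operator: two cut points are chosen and the segment between them is swapped. A minimal sketch follows; the word-list genotype is an assumption, as the abstract does not specify the paper's encoding.

```python
import random

def two_point_crossover(parent_a, parent_b, rng=random):
    """Standard two-point recombination on equal-length genotypes:
    pick two cut points i < j and exchange the middle segment,
    producing two children."""
    assert len(parent_a) == len(parent_b)
    i, j = sorted(rng.sample(range(len(parent_a) + 1), 2))
    child_a = parent_a[:i] + parent_b[i:j] + parent_a[j:]
    child_b = parent_b[:i] + parent_a[i:j] + parent_b[j:]
    return child_a, child_b
```

Note that the two children are complementary: between them they contain exactly the genes of the two parents, just recombined.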
This research investigates an evolutionary approach to engineering agent collectives that accomplish tasks cooperatively. In general, reproduction and selection form the two cornerstones of evolution, and in this paper we study various reproduction schemes in an evolving population of agents. We classify reproduction schemes in temporal and spatial terms, that is, by distinguishing when and where agents reproduce. In terms of the temporal dimension, we tested schemes where agents reproduce only at the end of their lifetime or multiple times during their lifetime. In terms of the spatial dimension, we distinguished locally restricted reproduction (agents reproduce only with agents in adjacent positions) and panmictic reproduction (an agent can reproduce with any other in the environment). This classification leads to four different reproduction schemes, which we compare via their overall impact upon collective performance. Results using two completely different types of evolvable controllers (hand-coded or neural-net based) indicate that single reproduction at the end of an agent's lifetime combined with locally restricted reproduction afforded the agent collective a significantly higher level of performance in its cooperative task.
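The spatial dimension of the classification above can be sketched as a mate-selection rule. The grid layout and the one-cell neighborhood radius are illustrative assumptions; the abstract does not specify the environment's geometry.

```python
import random

def select_mate(agent_pos, agents, scheme, rng=random):
    """Pick a mate for the agent at `agent_pos` under one of the two
    spatial reproduction schemes. `agents` maps (x, y) grid positions
    to agent ids. Returns the chosen mate's position, or None if no
    candidate exists."""
    if scheme == "panmictic":
        # Any other agent in the environment is a candidate.
        candidates = [p for p in agents if p != agent_pos]
    elif scheme == "local":
        # Only agents in the 8 adjacent grid cells are candidates.
        x, y = agent_pos
        candidates = [p for p in agents
                      if p != agent_pos
                      and abs(p[0] - x) <= 1 and abs(p[1] - y) <= 1]
    else:
        raise ValueError(f"unknown scheme: {scheme}")
    return rng.choice(candidates) if candidates else None
```

Crossing this choice with the temporal choice (reproduce once at end of lifetime vs. repeatedly during it) yields the four schemes the paper compares.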
In this research, a neuro-evolution method called collective neuro-evolution (CONE) is introduced for the design of neural controllers (agents) operating in collective behavior task domains. The efficacy of the CONE method for facilitating emergent behavioral specialization for the benefit of increased task performance is tested in a pursuit-evasion and a collective gathering task. For a comparative study, a conventional neuro-evolution method was applied to the same tasks. In both tasks, the CONE method derived behavioral specialization in groups of agents, resulting in higher task performance, whereas the conventional neuro-evolution method was unable to derive specialization, resulting in comparatively lower task performance.
The research goal was to engineer agent collectives that most effectively accomplish a cooperative gathering task. In view of this, we compared reproduction schemes for the artificial evolution of agent controller parameters for a cooperative minesweeping task. Agents utilized cooperative behavior to improve task performance in a simulated environment where different types of mines with different fitness rewards were randomly distributed. We compared the evolution of agent controller parameters with respect to temporal and spatial dimensions of agent reproduction schemes. The first dimension concerned agents reproducing only once at the end of their lifetime or multiple times during their lifetime. The second dimension concerned agents reproducing only with agents in adjacent positions (locally restricted) or with agents located anywhere else in the environment (panmictic). Results indicated that the single reproduction at the end of an agent's lifetime and the locally restricted reproduction schemes afforded the agent collective a higher level of performance in its cooperative gathering task.
This paper introduces the collective neuro-evolution (CONE) method and compares its efficacy for designing specialization with that of a conventional neuro-evolution (NE) method. Specialization was defined at both the individual agent level and the agent group level. The CONE method was tested comparatively with the conventional NE method in an extension of the multi-rover task domain, where specialization exhibited at both the individual and group level is known to benefit task performance. In the multi-rover domain, the task was for many agents (rovers) to maximize the detection and evaluation of points of interest in a simulated environment, and to communicate gathered information to a base station. The goal of the rover group was to maximize a global evaluation function that measured the performance (fitness) of the group. Results indicate that the CONE method was appropriate for facilitating specialization at both the individual and agent group levels, whereas the conventional NE method succeeded only in facilitating individual specialization. As a consequence of emergent specialization derived at both the individual and group levels, rover groups evolved by the CONE method were able to achieve a significantly higher task performance, compared to groups evolved by the conventional NE method.
This research applies the Collective Specialization Neuro-Evolution (CONE) method to the problem of evolving neural controllers in a simulated multi-robot system. The multi-robot system consists of multiple pursuer (predator) robots and a single evader (prey) robot. The CONE method is designed to facilitate behavioral specialization in order to increase task performance in collective behavior solutions. Pursuit-evasion is a task that benefits from behavioral specialization. The performance of prey-capture strategies derived by the CONE method is compared to that of strategies derived by the Enforced Sub-Populations (ESP) method. Results indicate that the CONE method effectively facilitates behavioral specialization in the team of pursuer robots. This specialization aids in the derivation of robust prey-capture strategies. Comparatively, ESP was found to be less appropriate for facilitating behavioral specialization and effective prey-capture behaviors.
This article presents results from an evaluation of the collective neuro-evolution (CONE) controller design method. CONE solves collective behavior tasks and increases task performance by facilitating emergent behavioral specialization. Emergent specialization is guided by genotype and behavioral specialization difference metrics that regulate genotype recombination. CONE is comparatively tested and evaluated with similar neuro-evolution methods in an extension of the multi-rover task, where behavioral specialization is known to benefit task performance. The task is for multiple simulated autonomous vehicles (rovers) to maximize the detection of points of interest (red rocks) in a virtual environment. Results indicate that CONE is appropriate for deriving sets of specialized rover behaviors that complement each other such that a higher task performance, compared to related controller design methods, is attained in the multi-rover task.