Understanding the brain is an interdisciplinary effort spanning the fields of computational neuroscience, machine learning and robotics. Collaboration between researchers across these fields should be encouraged by comprehensive simulation platforms. The Neurorobotics Platform (NRP) developed by the Human Brain Project enables such collaboration by allowing researchers to define virtual experiments in which brain models are connected to simulated robots. In this paper, we present how we use the NRP as an educational tool to introduce master students to neurorobotics. The students are given the task of defining, implementing and solving three virtual neurorobotics challenges related to perception, arm motion and locomotion. Without any prior knowledge of neurorobotics, the students completed this task within one semester. We present the challenges, which are now open-source benchmarks available online, as well as example student solutions to these challenges. This paper gives a glimpse of what new users are capable of when using the NRP to simulate their neurorobotics experiments. Aside from educating the students, this initiative also allowed us to collect their direct feedback on the NRP. This feedback is valuable for the Human Brain Project as a whole, since it helps identify how new users interact with the platform.
Depth perception through stereo vision is an important feature of biological and artificial vision systems. While biological systems can compute disparities effortlessly, doing so requires intensive processing in artificial vision systems. The computational complexity resides in solving the correspondence problem: finding matching pairs of points in the two eyes. Inspired by the retina, event-based vision sensors provide a new constraint for solving the correspondence problem: time. By relying on precise spike times, spiking neural networks can take advantage of this constraint. However, disparities can only be computed in dynamic environments, since event-based vision sensors only report local changes in light intensity. In this paper, we show how microsaccadic eye movements can be used to compute disparities in static environments. To this end, we built a robotic head supporting two Dynamic Vision Sensors (DVS) capable of independent panning and simultaneous tilting. We evaluate the method on both static and dynamic scenes perceived through microsaccades. This paper demonstrates the complementarity of event-based vision sensors and active perception, leading to more biologically inspired robots.
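The temporal constraint described above can be illustrated without spiking neurons: two events on the same image row that occur close together in time are likely projections of the same world point, and their column offset is the disparity. The following is a minimal non-spiking sketch of that temporal-coincidence idea; the event format `(timestamp, row, column)` and the `max_dt` threshold are illustrative assumptions, not the paper's actual pipeline.

```python
def match_events(left, right, max_dt=1e-3):
    """Match left/right events on the same row whose timestamps are
    within max_dt seconds; disparity is the column offset (in pixels)."""
    disparities = []
    for t_l, y_l, x_l in left:
        # candidate right events: same row, close enough in time
        cands = [(abs(t_r - t_l), x_r) for t_r, y_r, x_r in right
                 if y_r == y_l and abs(t_r - t_l) <= max_dt]
        if cands:
            _, x_r = min(cands)          # best temporal coincidence wins
            disparities.append(x_l - x_r)
    return disparities

# toy example: one edge seen 2 px apart by the two sensors
left  = [(0.0100, 5, 12)]
right = [(0.0102, 5, 10)]
print(match_events(left, right))  # [2]
```

In a spiking implementation, the `min` over temporal distances is replaced by coincidence-detecting neurons whose membrane dynamics implement the same preference for near-simultaneous events.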
Target reaching is one of the most important problems in robotics: object interaction, manipulation and grasping tasks all require reaching specific targets. We avoid the complexity of computing inverse kinematics and motion planning, and instead use a combination of motor primitives. We propose a bio-inspired architecture that performs target reaching with a robot arm without planning. A spiking neural network represents motions in a hierarchy of motor primitives, and different correction primitives are combined using an error signal. We present experiments with a simulated robot arm that extensively cover the workspace by moving to different points and returning to the start point, as well as experiments that test extreme targets and random points in sequence. Robotics applications such as target reaching can provide benchmarking tasks and realistic scenarios for validating neuroscience models, while robotics can exploit the capabilities of spiking neural networks and the advantages of the neuromorphic hardware that runs them.
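The core idea of combining correction primitives with an error signal can be sketched in a few lines. This is a hypothetical, non-spiking illustration: each primitive is assumed to carry a task-space `direction` it corrects along and a `joints` command, and the end-effector error weights the primitives by alignment; none of these names come from the paper.

```python
import numpy as np

def combine_primitives(error, primitives):
    """Weight each correction primitive by how well it aligns with the
    end-effector error, then blend the joint-space commands."""
    weights = np.array([max(0.0, np.dot(error, p["direction"]))
                        for p in primitives])        # rectified alignment
    if weights.sum() > 0:
        weights /= weights.sum()                     # normalize contributions
    return sum(w * p["joints"] for w, p in zip(weights, primitives))

# two hypothetical primitives moving the hand along x and y respectively
prims = [
    {"direction": np.array([1.0, 0.0]), "joints": np.array([0.2, -0.1])},
    {"direction": np.array([0.0, 1.0]), "joints": np.array([0.0,  0.3])},
]
err = np.array([1.0, 1.0])          # target is up and to the right
print(combine_primitives(err, prims))
```

Because the primitives already encode feasible joint motions, blending them sidesteps explicit inverse kinematics, which is the trade-off the abstract describes.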
Human-Robot Interaction (HRI) plays an important and growing role, both in industrial applications and in game development. In recent years, robots have become controllable by gestures via special devices, but these methods are not intuitive and usually require a learning phase. This paper proposes an intuitive method for controlling a robot end-effector using human gestures. Vision-based techniques are used to track the position of the user's hand, which is directly translated into control signals. The use of a 3D camera sensor allows the robot tool position to be controlled easily in all dimensions. Our approach includes a Graphical User Interface (GUI) to ease control through interactive visual feedback. This interface, which includes 3D markers, text messages and a visualization of the user's point cloud and the robot model, enables a control mechanism that does not require a teaching phase. Our approach was tested and evaluated in realistic experiments, showing that it works reliably and is highly intuitive.
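The "directly translated into control signals" step can be sketched as a scaled, safety-clipped offset from a reference hand pose. The function name, scale factor and clipping limit below are illustrative assumptions; the paper's actual mapping may differ.

```python
import numpy as np

def hand_to_tool_command(hand_pos, ref_pos, scale=0.5, limit=0.1):
    """Translate the tracked 3D hand position into an end-effector
    displacement: offset from a reference pose, scaled down and
    clipped so the tool cannot jump dangerously far in one step."""
    delta = scale * (np.asarray(hand_pos) - np.asarray(ref_pos))
    return np.clip(delta, -limit, limit)

# hand moved 10 cm right and 4 cm up from the reference pose
cmd = hand_to_tool_command([0.10, 0.00, 0.04], [0.0, 0.0, 0.0])
print(cmd)  # [0.05 0.   0.02]
```

A direct positional mapping like this is what removes the teaching phase: the user sees the 3D markers move with their hand and corrects visually instead of learning gesture vocabulary.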
To guarantee safety in a work space shared between humans and robots, robust yet flexible robotic motion control is required. Motion-planning algorithms for complex robotic systems are too computationally expensive to run in real time on conventional hardware. We apply neuromorphic sensors and Spiking Neural Networks to create an obstacle memory of a robot's work space. We create a neuron population representing all objects in the robot's work cell except the robot itself, using two sensor networks for proprioception and exteroception. Furthermore, we adapt the network to preserve older states while still reacting to new events, obtaining a correct obstacle memory at any given point in time. This is done by extending the network with additional neurons and introducing a neighborhood-based structure. The system is evaluated in experiments of increasing complexity on simulated data. The results show that, although issues with spatially non-separated objects and fast motions remain, this method of obtaining a neural memory works. Our network of spiking neurons represents a neural memory of obstacles and of a robotic arm. The long-term goal of running a reactive path-planning algorithm on it makes it interesting in the context of human-robot interaction.
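The tension the abstract describes, preserving older states while still reacting to new events, is the classic leaky-memory trade-off. Below is a minimal rate-based stand-in (not the paper's spiking network): activity decays slowly each step, and cells hit by new sensor events are recharged, so obstacles persist after the event stream for them stops. The `decay` and `gain` values are assumptions for illustration.

```python
import numpy as np

def update_memory(memory, events, decay=0.98, gain=1.0):
    """Leaky obstacle memory: population activity fades slightly each
    step, and cells hit by new events are recharged to full strength,
    so stale obstacles fade gradually instead of vanishing at once."""
    memory = memory * decay
    memory[events] = gain        # boolean event mask recharges cells
    return memory

mem = np.zeros((4, 4))
hit = np.zeros((4, 4), dtype=bool)
hit[1, 2] = True
mem = update_memory(mem, hit)                 # obstacle appears
mem = update_memory(mem, np.zeros_like(hit))  # no new events: it persists
print(round(mem[1, 2], 2))  # 0.98
```

In the spiking version, the extra neurons and neighborhood structure mentioned in the abstract play the role of this decay-and-recharge dynamic.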
Robot control is an active field of research, in particular for humanoid robots, which present challenging problems. Yet few of the proposed methods exhibit properties of biological systems. Neurorobotics offers an interesting approach by using neural models and mechanisms of brain function to control robots. In this paper, we present a novel way to activate robot motions with different modalities, inspired by biology and implemented with spiking neurons. We focus on two specific characteristics of biological motion control: the hierarchical representation, which is distributed across the body and the nervous system, and the different activation modalities. We modeled three activation modalities: voluntary, rhythmic and reflexive. In our architecture, motions are represented by motor primitives that can be combined and parameterized. A mechanism to learn new motions based on previous knowledge is incorporated using an error function. Our approach is evaluated by controlling a robotic arm in simulation. We show that each activation modality works and that they can be combined in various ways in different scenarios.
For robotics, especially in industrial applications, it is crucial to reactively plan safe motions with efficient algorithms. Planning is more powerful in the configuration space than in the task space; however, for robots with many degrees of freedom this is challenging and computationally expensive. Sophisticated motion-planning techniques such as the Wavefront algorithm are limited by the high dimensionality of the configuration space. In a neural implementation of the Wavefront algorithm in the configuration space, neurons represent discrete configurations and synapses are used for path planning. To decrease the complexity, we reduce the search space by pruning superfluous neurons and synapses. We present different models of self-organizing neural networks for this reduction. The approach takes real-life human motion data as input and creates a representation of reduced dimension. We compare six different neural network models and adapt the Wavefront algorithm to the different structures of the reduced output spaces. The method is backed by an extensive evaluation of the reduced spaces, including their suitability for path planning with the Wavefront algorithm.
2019 19th International Conference on Advanced Robotics (ICAR)
Depth perception is crucial for many applications, including robotics, UAVs and autonomous driving. The visual sense, like cameras, maps the 3D world onto a 2D representation, losing the depth dimension. One way to recover 3D information from 2D images is to record and join data from multiple viewpoints; in the case of a stereo setup, 4D data is obtained. Existing methods to recover 3D information are computationally expensive. We propose a new, more intuitive method to recover 3D objects from event-based stereo data, using a Self-Organizing Map to solve the correspondence problem and establish a structure similar to a voxel grid. Our approach, which is also computationally expensive, copes with performance issues through massive parallelization. Furthermore, the relatively small voxel grid makes this a memory-friendly solution. The technique is powerful in that it does not need any prior knowledge of the extrinsic and intrinsic camera parameters; instead, those parameters, as well as the lens distortion, are learned implicitly. Not only do we not require a parallel camera setup, as many existing methods do, we do not need any information about the alignment at all. We evaluated our method in a qualitative analysis and by finding image correspondences.
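The implicit calibration the abstract claims comes from the Self-Organizing Map itself: units pulled toward the data cloud end up tiling it, whatever distortions the optics introduce. Below is a generic SOM training loop on a 1D unit grid over synthetic event coordinates, a sketch of the mechanism only; the paper's SOM operates on stereo event pairs and a voxel-like grid, and all parameter values here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(samples, n_units=8, epochs=50, lr=0.5, sigma=2.0):
    """Classic SOM: pull the best-matching unit (and its grid
    neighbors, weighted by a Gaussian) toward each input sample."""
    dim = samples.shape[1]
    weights = rng.uniform(samples.min(), samples.max(), (n_units, dim))
    grid = np.arange(n_units)
    for _ in range(epochs):
        for x in samples:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            # neighborhood function on the 1D unit grid
            h = np.exp(-((grid - bmu) ** 2) / (2 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)
        lr *= 0.95       # decay the learning rate ...
        sigma *= 0.95    # ... and shrink the neighborhood
    return weights

# units settle onto the event cloud, forming a voxel-grid-like code
events = rng.uniform(0, 1, (200, 3))     # synthetic (x, y, t) events
units = train_som(events)
print(units.shape)  # (8, 3)
```

Nothing in this loop references camera geometry, which is why no extrinsic, intrinsic or alignment information is needed: the map simply conforms to whatever the sensors deliver.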
2020 8th IEEE RAS/EMBS International Conference for Biomedical Robotics and Biomechatronics (BioRob), 2020
Conventional methods for motion control and path planning in robots are nowhere near as reactive and flexible as those in nature. Brains solve navigation using place cells: neurons that provide a cognitive representation of a specific environment. Neural techniques for path planning in 2D have been developed for years; however, to apply them to robotic tasks beyond locomotion, an extension to 3D is required. We present an implementation of path planning via a propagating wavefront in 3D environments. The algorithm operates on a Spiking Neural Network of excitatory place cells structured as a grid. A wavefront travelling through the network is initiated by activating the goal place cell. The wave strengthens synapses in the direction of propagation using spike-timing-dependent plasticity (STDP) as the synaptic learning rule. By interpreting the synaptic weights as a vector field, a path can be derived from any place cell reached by the wave to the destination. We demonstrate, using a neural simulator, that our algorithm works well on maps with multiple obstacles. Our method allows fast simulation and query times, and we expect to considerably improve the network creation time by using dedicated hardware that allows massive parallelism. Our algorithm applies bio-inspired techniques and is especially interesting for human-robot interaction, which requires reactive, flexible motion planning.
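Stripped of the spiking machinery, the wavefront-plus-vector-field scheme reduces to a breadth-first flood from the goal followed by steepest descent on the resulting distances. The 2D sketch below uses a plain distance map as a stand-in for the STDP-strengthened synapses (the paper operates on a 3D grid in a neural simulator); grid layout and function names are illustrative.

```python
from collections import deque

def wavefront(grid, goal):
    """Propagate a wavefront (BFS) from the goal over free cells; the
    distance map plays the role of the STDP-strengthened synapses."""
    dist = {goal: 0}
    q = deque([goal])
    while q:
        x, y = q.popleft()
        for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
            if (0 <= nx < len(grid) and 0 <= ny < len(grid[0])
                    and grid[nx][ny] == 0 and (nx, ny) not in dist):
                dist[(nx, ny)] = dist[(x, y)] + 1
                q.append((nx, ny))
    return dist

def derive_path(dist, start):
    """Follow the 'vector field': always step to the reachable
    neighbor with the smallest wavefront distance."""
    path = [start]
    while dist[path[-1]] > 0:
        x, y = path[-1]
        nbrs = [(nx, ny) for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1))
                if (nx, ny) in dist]
        path.append(min(nbrs, key=dist.get))
    return path

grid = [[0, 0, 0],
        [1, 1, 0],   # 1 = obstacle
        [0, 0, 0]]
d = wavefront(grid, goal=(2, 0))
print(derive_path(d, start=(0, 0)))
```

Because the flood is computed once per goal, any start cell reached by the wave can query its path immediately, which matches the fast query times the abstract reports.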
2018 7th IEEE International Conference on Biomedical Robotics and Biomechatronics (Biorob), 2018
2018 IEEE International Conference on Robotics and Biomimetics (ROBIO), 2018
Artificial Neural Networks and Machine Learning – ICANN 2018, 2018
Papers by Lea Steffen