- Pablo Gil is currently a full-time Professor at the University of Alicante (University Institute of Computer Research, IUII). His research interests include computer vision for robots and autonomous systems, robotic manipulation, robotic grasping and tactile perception. Dr. Gil is a member of CEA-IFAC and a Senior Member of the IEEE Education Society, the IEEE Robotics and Automation Society (he was secretary of the Spanish Chapter from December 2018 until March 2023) and the IEEE Sensors Council.
Optimal robotic grasping cannot be limited to the estimation of the object grasping pose using vision-based methods. Tactile sensors are also necessary in order to learn the physical properties of the objects to be grasped. In this work, we integrated two Contactile force-based tactile sensors with a 2F-140 ROBOTIQ gripper and a UR5 robot to estimate the volume of a water-filled container using Multilayer Perceptron (MLP) neural networks. During experimentation, we trained and evaluated different MLPs, varying the input forces (Fx, Fy, Fz), in a discrete-volume regression task over the range of 0 ml to 300 ml. The proposed preliminary approach is compared with an algebraic method based on the force-equilibrium diagram, proving that our results are more precise, with an R² value 8% higher in the worst-case scenario and 30% higher in the best.
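As a rough illustration of the regression task described above, a small MLP can map the three force components to a volume estimate. Everything below is invented for the example (the linear weight model, noise levels and network size are assumptions, not taken from the paper):

```python
# Minimal sketch, not the authors' code: regress container volume from
# gripper contact forces (Fx, Fy, Fz) with a small MLP. The data are
# synthetic, assuming the tangential force grows with the grasped weight.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
volume = rng.uniform(0.0, 300.0, size=(500, 1))    # ml, matching the paper's range
forces = np.hstack([
    rng.normal(0.0, 0.05, size=(500, 1)),                          # Fx: lateral noise
    9.81e-3 * (volume + 50.0) + rng.normal(0.0, 0.05, (500, 1)),   # Fy: weight-driven
    2.0 + rng.normal(0.0, 0.1, size=(500, 1)),                     # Fz: grip pressure
])
target = volume.ravel() / 300.0                    # normalised volume for training

mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
mlp.fit(forces[:400], target[:400])
r2 = mlp.score(forces[400:], target[400:])         # R^2 on held-out grasps
print(r2 > 0.8)
```

Normalising the target keeps training stable; since R² is invariant to linear rescaling, the score is directly comparable to one computed in millilitres.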
The paper industry manufactures corrugated cardboard packaging, which is unassembled and stacked on pallets to be supplied to its customers. Human operators usually classify these pallets according to the physical features of the cardboard packaging. This process can be slow, causing congestion on the production line. To optimise the logistics of this process, we propose a visual recognition and tracking pipeline that monitors the palletised packaging while it is moving inside the factory on roller conveyors. Our pipeline has a two-stage architecture composed of Convolutional Neural Networks, one for oriented pallet detection and recognition, and another with which to track identified pallets. We carried out an extensive study using different methods for the pallet detection and tracking tasks and discovered that the oriented object detection approach was the most suitable. Our proposal recognises and tracks different configurations and visual appearances of palletised packaging, pro...
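The tracking stage in the pipeline above is learned; as a self-contained toy stand-in for the association step, detections can be matched to existing tracks across frames by greedy intersection-over-union (IoU) matching. The boxes and threshold below are illustrative only:

```python
# Toy sketch of detection-to-track association (not the paper's network):
# greedily assign each track to its best-overlapping detection by IoU.
import numpy as np

def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def match(tracks, detections, threshold=0.3):
    """Greedy one-to-one matching: {track index: detection index}."""
    pairs = {}
    for ti, t in enumerate(tracks):
        scores = [iou(t, d) for d in detections]
        best = int(np.argmax(scores))
        if scores[best] >= threshold and best not in pairs.values():
            pairs[ti] = best
    return pairs

tracks = [(0, 0, 10, 10), (50, 50, 60, 60)]
dets = [(52, 51, 61, 60), (1, 0, 11, 10)]   # same pallets, slightly moved
print(match(tracks, dets))                   # → {0: 1, 1: 0}
```

Real multi-object trackers replace the greedy loop with Hungarian assignment and add motion prediction, but the IoU-association core is the same idea.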
When carrying out robotic manipulation tasks, objects occasionally fall as a result of the rotation caused by slippage. This can be prevented by obtaining tactile information that provides better knowledge of the physical properties of the grasp. In this letter, we estimate the rotation angle of a grasped object when slippage occurs. We implement a system made up of a neural network with which to segment the contact region and an algorithm with which to estimate the rotated angle of that region. This method is applied to DIGIT tactile sensors. Our system has additionally been trained and tested with our publicly available dataset which is, to the best of our knowledge, the first dataset related to tactile segmentation from non-synthetic images to appear in the literature, and with which we have attained results of 95% and 90% for the Dice and IoU metrics in the worst scenario. Moreover, we have obtained a maximum error of ≈3 degrees when testing with objects not previously seen by our system in 45 different lifts. This proves that our approach is able to detect the slippage movement, thus providing a possible reaction that will prevent the object from falling.
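A simple geometric stand-in for the angle-estimation step is to apply PCA to the pixel coordinates of the segmented contact region and read the orientation of its principal axis; tracking that angle between frames gives the rotation caused by slippage. This is an illustration of the idea, not necessarily the paper's algorithm:

```python
# Illustrative sketch: orientation of a binary contact mask via PCA on
# its pixel coordinates (hypothetical stand-in for the paper's method).
import numpy as np

def region_angle(mask: np.ndarray) -> float:
    """Angle in degrees of the principal axis of a binary mask."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)
    cov = pts.T @ pts / len(pts)
    evals, evecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    major = evecs[:, np.argmax(evals)]       # axis of largest spread
    if major[0] < 0:
        major = -major                       # resolve 180-degree sign ambiguity
    return float(np.degrees(np.arctan2(major[1], major[0])))

# A thin horizontal bar should have an angle close to 0 degrees.
mask = np.zeros((50, 50), dtype=bool)
mask[24:26, 5:45] = True
print(round(region_angle(mask), 1))          # → 0.0
```

The difference between `region_angle` on consecutive tactile frames would then approximate the rotation of the grasped object.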
The latest trends in robotic grasping combine vision and touch to improve the performance of systems at tasks like stability prediction. However, tactile data are only available during the grasp, limiting the set of scenarios in which multimodal solutions can be applied. Could we obtain them prior to grasping? We explore the use of visual perception as a stimulus for generating tactile data, so that the robotic system can "feel" the tactile response just by looking at the object.
This paper presents a system that combines computer vision and surface electromyography techniques to perform grasping tasks with a robotic hand. In order to achieve a reliable grasping action, the vision-driven system is used to compute pre-grasping poses of the robotic system based on the analysis of tridimensional object features. The human operator can then correct the pre-grasping pose of the robot using surface electromyographic signals from the forearm during wrist flexion and extension. Weak wrist flexions and extensions allow a fine adjustment of the robotic system to grasp the object; finally, when the operator considers that the grasping position is optimal, a strong flexion is performed to initiate the grasping of the object. The system has been tested with several subjects to check its performance, showing a grasping accuracy of around 95% of the attempted grasps, which improves by more than 13% the grasping accuracy of previous experiments in which electromyograph...
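The weak/strong flexion logic above can be sketched as a threshold on the RMS amplitude of an EMG window. The thresholds and signal scales below are invented for the example, not the paper's calibration:

```python
# Hedged sketch: classify a surface-EMG window as rest, weak flexion
# (fine pose adjustment) or strong flexion (trigger the grasp) by its RMS.
import numpy as np

def classify_emg(window: np.ndarray, weak: float = 0.1, strong: float = 0.5) -> str:
    rms = float(np.sqrt(np.mean(window ** 2)))
    if rms >= strong:
        return "grasp"        # strong flexion: initiate the grasp
    if rms >= weak:
        return "adjust"       # weak flexion: fine pre-grasp correction
    return "rest"

print(classify_emg(np.zeros(200)))          # → rest
print(classify_emg(np.full(200, 0.3)))      # → adjust
print(classify_emg(np.full(200, 0.9)))      # → grasp
```

In practice the thresholds would be calibrated per subject, since EMG amplitude varies widely between users and electrode placements.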
Research Interests: Engineering, Robotics, Computer Science, Artificial Intelligence, Analytical Chemistry, Computer Vision, Medicine, Grasp, Sensors, Grasping, Humans, Female, Male, Young Adult, Electromyography, Robotic Hand, Surface Electromyography, Electrical and Electronic Engineering, and Assistive Robotics
Learning and teaching processes are continually changing. The design of learning technologies has therefore gained interest among educators and educational institutions, from secondary school to higher education. This paper describes the successful use in education of social learning technologies and virtual laboratories designed by the authors, as well as videos developed by the students. These tools, combined with other open educational resources (OERs) based on a blended-learning methodology, have been employed to teach the subject of Computer Networks. We have verified not only that the application of OERs in the learning process leads to a significant improvement in assessment results, but also that the combination of several OERs enhances their effectiveness. These results are supported by, firstly, a study of both students' opinions and students' behaviour over five academic years, and, secondly, a correlation analysis between the use of OERs and the grades obtained by students.
Research Interests: Computer Science, Educational Technology, Computer Science Education, Computer Engineering, Learning and Teaching, Computer Networks, Assessment in Higher Education, Online Learning, Learning and Teaching in Higher Education, Educational Technologies, Open Educational Resources (OER), Virtual Laboratories, Computer-Based Learning, and E-Learning
Research Interests: Computer Science and IEEE
In this paper we analyze, in some detail, the vision system architecture for disassembly applications. This work is carried out in the context of motion and stereo analysis. The methodology presented is useful for working in manufacturing conditions, facing difficult situations like the occlusion of components. The recognition and location of three-dimensional objects is important for automatic disassembly. A data-fusion scheme over multiple cameras has been proposed for extracting information from the scene. Data provided by the sensors are used for object recognition, location and orientation. Copyright 2001 IFAC
This paper presents an AI system applied to object location and robotic grasping. The experimental setup is based on a parameter study to train a deep-learning network based on Mask-RCNN to perform waste location in indoor and outdoor environments, using five different classes and generating a new waste dataset. Initially, the AI system obtains the RGBD data of the environment, followed by the detection of objects using the neural network. The 3D object shape is then computed using the network output and the depth channel. Finally, the shape is used to compute a grasp for a robot arm with a two-finger gripper. The objective is to classify waste into groups to improve a recycling strategy.
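The step from network output plus depth channel to 3D shape can be sketched as a pinhole back-projection of the mask pixels. The camera intrinsics (fx, fy, cx, cy) below are illustrative values, not those of the paper's sensor:

```python
# Sketch under assumed pinhole intrinsics: back-project the pixels of a
# detected waste mask through the depth channel to a 3D point cloud, whose
# centroid can seed the two-finger grasp computation.
import numpy as np

def mask_to_points(mask, depth, fx=525.0, fy=525.0, cx=32.0, cy=32.0):
    """Binary mask + aligned depth (metres) -> (N, 3) camera-frame points."""
    ys, xs = np.nonzero(mask)
    z = depth[ys, xs]
    x = (xs - cx) * z / fx
    y = (ys - cy) * z / fy
    return np.stack([x, y, z], axis=1)

mask = np.zeros((64, 64), dtype=bool)
mask[30:35, 30:35] = True               # a small detected object
depth = np.full((64, 64), 0.8)          # flat scene 0.8 m from the camera
points = mask_to_points(mask, depth)
print(points.shape, round(float(points[:, 2].mean()), 2))   # → (25, 3) 0.8
```

With the cloud in the camera frame, a hand-eye calibration transform would map the grasp point into the robot arm's base frame.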
This article presents a recognition application that uses deep learning networks to classify waste in the domestic environment. Once recognition has been performed, the object's location is determined in order to obtain the grasping points, so that a robot arm equipped with a parallel-finger gripper can pick it up automatically. The algorithm used is presented, together with experimental results that confirm the soundness of the proposal.
Robotic manipulation remains an unsolved problem. It involves many complex aspects, such as tactile perception of a wide variety of objects and materials, grasp control to plan the robotic hand pose, etc. Most previous work on this topic has used expensive sensors, a fact that hinders application in industry. In this work, a grasp-detection system is proposed that uses a low-cost, image-based tactile sensor known as DIGIT. The method developed, based on deep convolutional networks, is able to detect contact or no contact with accuracies above 95%. The system has been trained and tested with our own database of more than 16000 images from grasps of different objects, using several DIGIT units. The detection method is part of a grasp controller for a ROBOTIQ 2F-140 gripper.
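The paper trains a CNN for contact detection; as a self-contained illustration of the same underlying signal, here is a much simpler baseline that flags contact when the DIGIT image deviates enough from a no-contact reference frame (the threshold and image sizes are assumptions for the example):

```python
# Simple baseline, not the paper's CNN: detect contact on an image-based
# tactile sensor as a large mean intensity change w.r.t. a reference frame.
import numpy as np

def contact(frame, reference, threshold=1.0):
    """True if the mean absolute intensity change suggests contact."""
    diff = np.abs(frame.astype(float) - reference.astype(float))
    return float(diff.mean()) > threshold

reference = np.full((240, 320), 120, dtype=np.uint8)   # gel at rest
pressed = reference.copy()
pressed[100:140, 150:200] = 60                          # dark imprint of a contact
print(contact(reference, reference), contact(pressed, reference))   # → False True
```

Such a baseline breaks under lighting drift and gel wear, which is precisely why a learned detector is worth the training effort.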
In this paper, we present a robotic workcell for task automation in footwear manufacturing, covering sole digitization, glue dispensing, and sole manipulation from different places within the factory plant. We aim to make progress towards shoe industry 4.0. To achieve this, we have implemented a novel sole grasping method, compatible with soles of different shapes, sizes, and materials, by exploiting the particular characteristics of these objects. Our proposal is able to work well with low-density point clouds from a single RGBD camera and also with dense point clouds obtained from a laser scanner digitizer. The method computes antipodal grasping points from visual data in both cases, and it does not require a previous recognition of the sole. It relies on sole contour extraction using concave hulls and measuring the curvature on contour areas. Our method was tested both in a simulated environment and in real conditions of manufacturing at INESCOP facilities, processing 20 soles with differ...
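The antipodal-point idea can be sketched in 2D: search a contour for two points whose outward normals point in nearly opposite directions along the line joining them. This toy version (not INESCOP's pipeline) approximates outward normals by the direction from the centroid, which is only reasonable for roughly convex contours:

```python
# Illustrative antipodal-pair search on a sampled 2D contour.
# Normals are approximated radially from the centroid (assumes near-convexity).
import numpy as np

def antipodal_pair(contour):
    """Return indices (i, j) of the most opposed contour-normal pair."""
    centroid = contour.mean(axis=0)
    normals = contour - centroid
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    n = len(contour)
    best, best_score = (0, 0), 1.0
    for i in range(n):
        for j in range(i + 1, n):
            score = float(normals[i] @ normals[j])   # -1 = perfectly opposed
            if score < best_score:
                best, best_score = (i, j), score
    return best

theta = np.linspace(0.0, 2.0 * np.pi, 36, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
i, j = antipodal_pair(circle)
print((j - i) % 36)    # → 18, i.e. diametrically opposite samples
```

On real sole contours the normals would come from local contour geometry (e.g. the perpendicular of a fitted tangent), and the curvature measure mentioned in the abstract would filter out unstable high-curvature regions.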
This paper presents a method for classifying objects grasped with a multi-fingered robotic hand by combining proprioceptive and tactile data in a hybrid descriptor. The proprioceptive data are obtained from the joint positions of the hand, and the tactile data are extracted from the contact registered by pressure cells installed on the phalanges. The proposed approach identifies the object by implicitly learning its geometry and stiffness from the data provided by the sensors. In this work we demonstrate that the use of bimodal data with supervised learning techniques improves the recognition rate. In the experiments, more than 3000 grasps of up to 7 different household objects were carried out, obtaining correct classifications of 95% with the F1 metric from a single palpation of the object. Moreover, the generalisation of the method has been verified by training our system with some objects and subsequently classifying new, simi...
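The hybrid-descriptor idea reduces to concatenating the two feature blocks before feeding a supervised classifier. The sketch below uses synthetic data with assumed dimensions (16 joint values, 12 pressure cells) purely to show the mechanics:

```python
# Minimal sketch of the bimodal descriptor: concatenate proprioceptive
# (joint positions) and tactile (pressure cells) features, then classify.
# All data are synthetic; dimensions and noise levels are assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n, n_joints, n_cells, n_objects = 600, 16, 12, 7
labels = rng.integers(0, n_objects, n)
# Each object gets its own joint-pose and pressure signature plus noise.
joint_proto = rng.normal(0.0, 1.0, (n_objects, n_joints))
tactile_proto = rng.normal(0.0, 1.0, (n_objects, n_cells))
proprio = joint_proto[labels] + rng.normal(0.0, 0.3, (n, n_joints))
tactile = tactile_proto[labels] + rng.normal(0.0, 0.3, (n, n_cells))

hybrid = np.hstack([proprio, tactile])       # the bimodal descriptor
clf = SVC().fit(hybrid[:500], labels[:500])
pred = clf.predict(hybrid[500:])
macro_f1 = f1_score(labels[500:], pred, average="macro")
print(macro_f1 > 0.9)
```

Dropping either block from `hybrid` and retraining is the natural ablation for checking how much each modality contributes.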
One of the challenges in robotic grasping tasks is the problem of detecting whether a grip is stable or not. The lack of stability during a manipulation operation usually causes the slippage of the grasped object due to poor contact forces. Frequently, an unstable grip can be caused by an inadequate pose of the robotic hand, by insufficient contact pressure, or both. The use of tactile data is essential to check such conditions and, therefore, to predict the stability of a grasp. In this work, we present and compare different deep-learning-based methodologies for representing and processing tactile data for both stability and slip prediction.
Robotic manipulators have to constantly deal with the complex task of detecting whether a grasp is stable or, in contrast, whether the grasped object is slipping. Recognising the type of slippage (translational or rotational) and its direction is more challenging than detecting only stability, but it is simultaneously of greater use for correcting the aforementioned grasping issues. In this work, we propose a learning methodology for detecting the direction of a slip (seven categories) using spatio-temporal tactile features learnt from one tactile sensor. Tactile readings are, therefore, pre-processed and fed to a ConvLSTM that learns to detect these directions with just 50 ms of data. We have extensively evaluated the performance of the system and have achieved relatively high results at the detection of the direction of slip on unseen objects with familiar properties (82.56% accuracy).
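The data-preparation side of the approach above can be sketched as slicing the tactile stream into fixed-length spatio-temporal windows, the (time, height, width) tensors a ConvLSTM consumes. The sampling rate is an assumption (a 500 Hz sensor would make 50 ms equal 25 frames):

```python
# Sketch of windowing a tactile stream for a spatio-temporal model.
# Assumes 500 Hz sampling, so a 50 ms window is 25 frames.
import numpy as np

def make_windows(frames: np.ndarray, win: int = 25, stride: int = 5):
    """frames: (T, H, W) tactile stream -> (N, win, H, W) windows."""
    starts = range(0, len(frames) - win + 1, stride)
    return np.stack([frames[s:s + win] for s in starts])

stream = np.zeros((100, 4, 4))        # 200 ms of a 4x4 tactile array
windows = make_windows(stream)
print(windows.shape)                   # → (16, 25, 4, 4)
```

Each window would then receive one of the seven slip-direction labels, and the overlap introduced by the stride acts as cheap data augmentation.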
We present a method to detect maritime oil spills from Side-Looking Airborne Radar (SLAR) sensors mounted on aircraft, in order to enable a quick response from emergency services when an oil spill occurs. The proposed approach introduces a new type of neural architecture named Convolutional Long Short Term Memory Selectional AutoEncoders (CMSAE), which allows the simultaneous segmentation of multiple classes such as coast, oil spill and ships. Unlike previous works using full SLAR images, in this work only a few scanlines from the beam-scanning of the radar are needed to perform the detection. The main objective is to develop a method that performs accurate segmentation using only the current and previous sensor information, in order to return a real-time response during the flight. The proposed architecture uses a series of CMSAE networks to process each of the objectives, defined as different classes, in parallel. The outputs of these networks are given to a machine learning classifier to pe...
In this work, we use deep neural autoencoders to segment oil spills from Side-Looking Airborne Radar (SLAR) imagery. Synthetic Aperture Radar (SAR) has been much exploited for ocean surface monitoring, especially for oil pollution detection, but few approaches in the literature use SLAR. Our sensor consists of two SAR antennas mounted on an aircraft, enabling a quicker response than satellite sensors for emergency services when an oil spill occurs. Experiments on TERMA radar were carried out to detect oil spills on Spanish coasts using deep selectional autoencoders and RED-nets (very deep Residual Encoder-Decoder Networks). Different configurations of these networks were evaluated and the best topology significantly outperformed previous approaches, correctly detecting 100% of the spills and obtaining an F1 score of 93.01% at the pixel level. The proposed autoencoders perform accurately on SLAR imagery that has artifacts and noise caused by the aircraft maneuvers, in different weat...
This work presents a method for oil-spill detection on Spanish coasts using aerial Side-Looking Airborne Radar (SLAR) images, which are captured using a Terma sensor. The proposed method uses grayscale image processing techniques to identify the dark spots that represent oil slicks on the sea. The approach is based on two steps. First, the noise regions caused by aircraft movements are detected and labeled in order to avoid the detection of false positives. Second, a segmentation process guided by a saliency-map technique is used to detect image regions that represent oil slicks. The results show that the proposed method improves on previous approaches to this task when employing SLAR images.
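The two-step idea above can be reduced to a minimal sketch: mask out the regions already labeled as aircraft-maneuver noise, then keep the remaining dark pixels as oil-slick candidates. The intensity threshold and image content are illustrative, and the real method uses saliency-guided segmentation rather than a fixed threshold:

```python
# Simplified two-step sketch: suppress labeled noise regions, then
# segment dark pixels (oil appears dark on SLAR) as slick candidates.
import numpy as np

def dark_spots(gray, noise_mask, threshold=60):
    candidates = gray < threshold       # step 2: dark-pixel segmentation
    return candidates & ~noise_mask     # step 1 applied: drop known noise

gray = np.full((100, 100), 140, dtype=np.uint8)   # bright sea clutter
gray[20:40, 20:50] = 30                            # a dark slick
gray[70:90, 70:90] = 30                            # dark maneuver artifact
noise = np.zeros_like(gray, dtype=bool)
noise[60:100, 60:100] = True                       # labeled noise region
spots = dark_spots(gray, noise)
print(int(spots.sum()))                            # → 600 (only the true slick)
```

Without the noise mask, the maneuver artifact would contribute 400 false-positive pixels, which is exactly the failure mode the first step exists to prevent.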
University courses concerning Computer Vision and Image Processing are generally taught using a traditional methodology that is focused on the teacher rather than on the students. This approach is consequently not effective when teachers seek to attain cognitive objectives involving their students' critical thinking. This manuscript covers the development, implementation and assessment of a short-project-based engineering course with MATLAB applications, taken by Bachelor's degree students in Multimedia Engineering. The principal goal of all course lectures and hands-on laboratory activities was for the students to acquire not only image-specific technical skills but also a general knowledge of data analysis, so as to locate phenomena in pixel regions of images and video frames. This would hopefully enable the students to develop skills regarding the implementation of the filters, operators, methods and techniques used in image processing and computer vision software libraries. Our teaching-learning process thus permits the accomplishment of knowledge assimilation, student motivation and skill development through the use of a continuous evaluation strategy to solve practical and real problems by means of short projects designed using MATLAB applications. Project-based learning is not new; this approach has been used in STEM learning in recent decades, but there are many types of projects. The aim of the current study is to analyse the efficacy of short projects as a learning tool when compared to long projects, during which the students work with more independence. This work additionally presents the impact of different types of activities, and not only short projects, on students' overall results in this subject. Moreover, a statistical study has allowed the author to suggest a link between the students' success ratio and the type of content covered and activities completed on the course.
The results described in this paper show that those students who took part in short projects made a significant improvement when compared to those who participated in long projects.
Traditional visual servoing systems have been widely studied in recent years. These systems control the position of the camera attached to the robot end-effector, guiding it from any position to the desired one. These controllers can be improved by using the event-based control paradigm. The system proposed in this paper is based on the idea of activating the visual controller only when something significant has occurred in the system (e.g. when a visual feature may be lost because it is leaving the frame). Different event triggers have been defined in the image space in order to activate or deactivate the visual controller. The tests implemented to validate the proposal have proved that this new scheme prevents visual features from leaving the image while considerably reducing system complexity. Events can be used in the future to change different parameters of the visual servoing systems.
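One such event trigger can be sketched as a check on whether any tracked feature has drifted into a border band of the image, i.e. risks leaving the frame. The margin value is an assumption for the example:

```python
# Sketch of an image-space event trigger: fire when any tracked feature
# enters a border band of the image (margin in pixels is illustrative).
import numpy as np

def border_event(features, width, height, margin=20):
    """features: (N, 2) pixel coordinates; True if any is near the border."""
    x, y = features[:, 0], features[:, 1]
    near = (x < margin) | (x > width - margin) | (y < margin) | (y > height - margin)
    return bool(near.any())

safe = np.array([[320.0, 240.0], [300.0, 200.0]])
risky = np.array([[320.0, 240.0], [635.0, 200.0]])
print(border_event(safe, 640, 480), border_event(risky, 640, 480))   # → False True
```

In an event-based scheme, the controller stays inactive while this returns False and is switched on as soon as it returns True, which is what reduces the per-iteration cost.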
Event-based visual servoing is a recently presented approach that performs the positioning of a robot using visual information only when it is required. Starting from the classical image-based visual servoing control law, the scheme proposed in this paper can reduce the processing time at each loop iteration under some specific conditions. The proposed control method comes into action when an event deactivates the classical image-based controller (i.e. when there is no image available to perform the tracking of the visual features). A virtual camera is then moved along a straight-line path towards the desired position. The virtual path used to guide the robot improves on the behavior of the previous event-based visual servoing proposal.
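The virtual-camera motion above can be sketched as linear interpolation between the current and desired positions. Only the translational part is shown here; a full scheme would also interpolate orientation (e.g. via quaternion slerp):

```python
# Sketch of the virtual-camera idea (translation only): when the image is
# lost, move a virtual camera along the straight line to the desired pose.
import numpy as np

def virtual_path(start, goal, steps):
    """Waypoints of a straight-line path from start to goal, inclusive."""
    t = np.linspace(0.0, 1.0, steps)[:, None]
    return (1.0 - t) * start + t * goal

start = np.array([0.0, 0.0, 0.5])       # current camera position (metres)
goal = np.array([0.2, 0.1, 0.3])        # desired camera position
path = virtual_path(start, goal, 5)
print(path[0], path[-1])                 # endpoints coincide with start and goal
```

At each waypoint the virtual camera re-projects the feature positions, so the controller keeps a consistent error signal until the real image becomes available again.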