
    Pablo Gil

This paper presents a method that can be used for the efficient detection of small maritime objects. The proposed method employs aerial images in the visible spectrum as inputs to train a categorical convolutional neural network for the classification of ships. A subset of those filters that make the greatest contribution to the classification of the target class is selected from the inner layers of the CNN. The gradients with respect to the input image are then calculated on these filters, which are subsequently normalized and combined. Thresholding and a morphological operation are then applied in order to eventually obtain the localization. One of the advantages of the proposed approach with regard to previous object detection methods is that it requires only a few images to be labelled with bounding boxes of the targets in order to train the localization. The method was evaluated with an extended version of the MASATI (MAritime SATellite Imagery) dataset. This new dataset has more than 7,000 images, 4,157 of which contain ships. Using only 14 training images, the proposed approach achieves better results for small targets than other well-known object detection methods, which also require many more training images. Index Terms: artificial neural networks, learning systems, object detection, remote sensing.
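A minimal sketch of the localization step described above, assuming a pretrained PyTorch classifier: gradients of a few selected inner-layer filters are taken with respect to the input, normalized, combined, thresholded, and cleaned with a morphological opening. The layer choice, threshold and kernel size are illustrative placeholders, not the paper's settings.

# Sketch of filter-gradient localization (assumptions: PyTorch, a pretrained
# classifier `model`, an arbitrary inner conv layer, illustrative parameters).
import torch
import torch.nn.functional as F

def localize(model, image, layer, filter_ids, thresh=0.5):
    """image: (1, 3, H, W) tensor; layer: a conv module inside `model`."""
    acts = {}
    handle = layer.register_forward_hook(lambda m, i, o: acts.update(out=o))
    image = image.clone().requires_grad_(True)
    model(image)                                          # forward pass fills acts["out"]
    handle.remove()

    saliency = torch.zeros_like(image[:, 0])
    for f in filter_ids:                                  # filters contributing most to the class
        model.zero_grad()
        if image.grad is not None:
            image.grad.zero_()
        acts["out"][:, f].sum().backward(retain_graph=True)
        g = image.grad.abs().max(dim=1).values            # gradient w.r.t. the input image
        g = (g - g.min()) / (g.max() - g.min() + 1e-8)    # normalize each map
        saliency += g                                      # combine the maps

    mask = (saliency / len(filter_ids)) > thresh           # thresholding
    m = mask.float().unsqueeze(1)
    m = -F.max_pool2d(-m, 3, stride=1, padding=1)          # erosion
    m = F.max_pool2d(m, 3, stride=1, padding=1)            # dilation (opening overall)
    return m.squeeze(1) > 0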
    The accumulation of litter is increasing in many places and is consequently becoming a problem that must be dealt with. In this paper, we present a manipulator robotic system to collect litter in outdoor environments. This system has three functionalities. Firstly, it uses colour images to detect and recognise litter comprising different materials. Secondly, depth data are combined with pixels of waste objects to compute a 3D location and segment three-dimensional point clouds of the litter items in the scene. The grasp in 3 Degrees of Freedom (DoFs) is then estimated for a robot arm with a gripper for the segmented cloud of each instance of waste. Finally, two tactile-based algorithms are implemented and then employed in order to provide the gripper with a sense of touch. This work uses two low-cost visual-based tactile sensors at the fingertips. One of them addresses the detection of contact (which is obtained from tactile images) between the gripper and solid waste, while another has been designed to detect slippage in order to prevent the objects grasped from falling. Our proposal was successfully tested by carrying out extensive experimentation with different objects varying in size, texture, geometry and materials in different outdoor environments (a tiled pavement, a surface of stone/soil, and grass). Our system achieved an average score of 94% for the detection and Collection Success Rate (CSR) as regards its overall performance, and of 80% for the collection of items of litter at the first attempt.
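The three functionalities above can be read as one pick-up loop. The sketch below is only an orchestration outline under stated assumptions: every argument (detector, cloud segmentation, grasp planner, arm, gripper, tactile module) is a hypothetical callable or object standing in for the subsystems described in the abstract, not the authors' actual interfaces.

# Hypothetical orchestration of the litter-collection pipeline.
def collect_litter(rgb, depth, detector, segment_cloud, plan_grasp, arm, gripper, tactile):
    """All arguments are stand-ins for the subsystems described in the abstract."""
    for instance in detector(rgb):                       # 1) detect and recognise litter in colour image
        points = segment_cloud(depth, instance["mask"])  # 2) per-instance 3D point cloud from depth
        if len(points) == 0:
            continue
        x, y, z, yaw = plan_grasp(points)                # 3) 3-DoF grasp for the two-finger gripper
        arm.move_to(x, y, z, yaw)
        gripper.close_until(tactile.contact_detected)    # close until contact is sensed from tactile images
        while arm.lifting():
            if tactile.slip_detected():                  # slip check to avoid dropping the item
                gripper.increase_force()
        arm.drop_in_bin()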
    The paper industry manufactures corrugated cardboard packaging, which is unassembled and stacked on pallets to be supplied to its customers. Human operators usually classify these pallets according to the physical features of the cardboard packaging. This process can be slow, causing congestion on the production line. To optimise the logistics of this process, we propose a visual recognition and tracking pipeline that monitors the palletised packaging while it is moving inside the factory on roller conveyors. Our pipeline has a two-stage architecture composed of Convolutional Neural Networks, one for oriented pallet detection and recognition, and another with which to track identified pallets. We carried out an extensive study using different methods for the pallet detection and tracking tasks and discovered that the oriented object detection approach was the most suitable. Our proposal recognises and tracks different configurations and visual appearance of palletised packaging, providing statistical data in real time with which to assist human operators in decision-making. We tested the precision-performance of the system at the Smurfit Kappa facilities. Our proposal attained an Average Precision (AP) of 0.93 at 14 Frames Per Second (FPS), losing only 1% of detections. Our system is, therefore, able to optimise and speed up the process of logistic distribution.
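The two-stage architecture can be pictured as a detect-then-track loop over conveyor frames. The sketch below is a simplified stand-in: the oriented detector is assumed to be given as a callable, and the association rule is a plain overlap score that ignores the box angle, which is not the tracker actually compared in the paper.

# Simplified detect-then-track loop over conveyor frames.
def overlap(a, b):
    """Rough overlap of two boxes (cx, cy, w, h, angle); angle ignored here."""
    ax0, ay0, ax1, ay1 = a[0]-a[2]/2, a[1]-a[3]/2, a[0]+a[2]/2, a[1]+a[3]/2
    bx0, by0, bx1, by1 = b[0]-b[2]/2, b[1]-b[3]/2, b[0]+b[2]/2, b[1]+b[3]/2
    iw = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    ih = max(0.0, min(ay1, by1) - max(ay0, by0))
    inter = iw * ih
    union = a[2]*a[3] + b[2]*b[3] - inter
    return inter / union if union > 0 else 0.0

def track(frames, detect_oriented, iou_thr=0.3):
    tracks, next_id = {}, 0
    for frame in frames:
        for det in detect_oriented(frame):       # stage 1: oriented pallet boxes + class
            best_id, best_iou = None, iou_thr
            for tid, last in tracks.items():     # stage 2: associate with existing tracks
                s = overlap(det["box"], last["box"])
                if s > best_iou:
                    best_id, best_iou = tid, s
            if best_id is None:
                best_id, next_id = next_id, next_id + 1
            tracks[best_id] = det
        yield dict(tracks)                        # per-frame statistics for the operators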
This article analyses different teaching experiences aimed at learning robotics at university level. These experiences take the form of several robotics courses and subjects taught at the Universidad de Alicante. To develop these courses, the authors have used several educational platforms, some implemented in-house and others freely distributed and open source. The goal of these courses is to teach the design and implementation of robotic solutions to a variety of problems, ranging from the control, programming and manipulation of industrial robot arms to the construction and/or programming of educational mini-robots. On the one hand, state-of-the-art teaching tools such as simulators and virtual laboratories are used to make working with robot arms more flexible and, on the other hand, competitions and contests are used to motivate students by having them put into practice...
This paper presents an AI system applied to location and robotic grasping. The experimental setup is based on a parameter study carried out to train a deep-learning network based on Mask-RCNN to perform waste location in indoor and outdoor environments, using five different classes and generating a new waste dataset. Initially, the AI system obtains the RGBD data of the environment, followed by the detection of objects using the neural network. Later, the 3D object shape is computed using the network result and the depth channel. Finally, the shape is used to compute a grasp for a robot arm with a two-finger gripper. The objective is to classify the waste into groups in order to improve a recycling strategy.
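The step of combining the network result with the depth channel can be sketched as back-projecting the masked depth pixels into a point cloud and deriving a simple 3-DoF grasp from it. The intrinsics and the centroid/principal-axis grasp rule below are illustrative, not the paper's exact procedure.

# Sketch: instance mask + depth channel -> point cloud -> simple 3-DoF grasp.
import numpy as np

def mask_to_cloud(depth, mask, fx, fy, cx, cy):
    """Back-project masked depth pixels with pinhole intrinsics (illustrative)."""
    v, u = np.nonzero(mask)
    z = depth[v, u].astype(np.float32)
    ok = z > 0
    u, v, z = u[ok], v[ok], z[ok]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)              # (N, 3) points in the camera frame

def grasp_3dof(cloud):
    """Grasp position at the centroid; yaw set across the widest in-plane axis."""
    centroid = cloud.mean(axis=0)
    xy = cloud[:, :2] - centroid[:2]
    cov = np.cov(xy, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]          # principal axis in the image plane
    yaw = np.arctan2(major[1], major[0]) + np.pi / 2   # close the fingers across it
    return centroid, yaw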
This work was funded by the Spanish MCYT project "Diseño, implementación y experimentación de escenarios de manipulación inteligentes para aplicaciones de ensamblado y desensamblado automático (DPI2005-06222)".
Sometimes, the presence of some objects hinders the observation of other neighbouring objects. This is because part of the surface of one object partially occludes the surface of another, increasing the complexity of the recognition process. Therefore, the information acquired from the scene to describe the objects is often incomplete and depends a great deal on the viewpoint of the observation. Thus, when any real scene is observed, the regions and the boundaries which delimit objects and separate them from one another are not easily perceived. In this paper, a method with which to discern objects from one another, delimiting where the surface of each object begins and ends, is presented. In particular, we seek to detect the overlapping and occlusion zones of two or more objects that interact with each other in the same scene. This is very useful, on the one hand, to distinguish some objects from others when features such as texture, colour and geometric form are not sufficient to separate them...
Latest trends in robotic grasping combine vision and touch for improving the performance of systems at tasks like stability prediction. However, tactile data are only available during the grasp, limiting the set of scenarios in which multimodal solutions can be applied. Could we obtain it prior to grasping? We explore the use of visual perception as a stimulus for generating tactile data so the robotic system can "feel" the response of the tactile perception just by looking at the object.
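One common way to realise such vision-to-touch generation is an image-to-image encoder-decoder that maps a visual crop of the contact region to a predicted tactile image. The small network below is purely illustrative; the layer sizes and input resolution are arbitrary and not the authors' actual generator.

# Illustrative visual-to-tactile encoder-decoder (assumed architecture).
import torch
import torch.nn as nn

class Vision2Tactile(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, rgb_crop):                       # (B, 3, 64, 64) visual crop
        return self.decoder(self.encoder(rgb_crop))    # (B, 1, 64, 64) predicted tactile map

# usage: tactile_pred = Vision2Tactile()(torch.rand(1, 3, 64, 64))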
In this paper, we present a robotic workcell for task automation in footwear manufacturing, covering tasks such as sole digitization, glue dispensing, and sole manipulation from different places within the factory plant. We aim to make progress towards shoe industry 4.0. To achieve this, we have implemented a novel sole grasping method, compatible with soles of different shapes, sizes, and materials, by exploiting the particular characteristics of these objects. Our proposal is able to work well with low-density point clouds from a single RGBD camera and also with dense point clouds obtained from a laser scanner digitizer. The method computes antipodal grasping points from visual data in both cases and does not require prior recognition of the sole. It relies on sole contour extraction using concave hulls and on measuring the curvature of contour areas. Our method was tested both in a simulated environment and in real manufacturing conditions at the INESCOP facilities, processing 20 soles with differ...
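The contour-curvature idea can be sketched as follows: score a discrete curvature along an ordered sole contour (assumed already extracted, e.g. via a concave hull) and pick a pair of roughly antipodal, low-curvature points as the grasp. The curvature estimate, thresholds and antipodality score are illustrative simplifications, not the published method.

# Sketch: curvature along the sole contour and an antipodal low-curvature pair.
import numpy as np

def discrete_curvature(contour, k=5):
    """Turning angle at each contour point using neighbours k steps away."""
    prev = np.roll(contour, k, axis=0)
    nxt = np.roll(contour, -k, axis=0)
    v1, v2 = contour - prev, nxt - contour
    ang1 = np.arctan2(v1[:, 1], v1[:, 0])
    ang2 = np.arctan2(v2[:, 1], v2[:, 0])
    return np.abs((ang2 - ang1 + np.pi) % (2 * np.pi) - np.pi)

def antipodal_grasp(contour, flat_thr=0.2):
    curv = discrete_curvature(contour)
    flat = np.where(curv < flat_thr)[0]             # candidate low-curvature points
    centroid = contour.mean(axis=0)
    best, best_score = None, -1.0
    for i in flat:                                   # pick the most opposite flat pair
        d1 = contour[i] - centroid
        for j in flat:
            d2 = contour[j] - centroid
            score = -np.dot(d1, d2) / (np.linalg.norm(d1) * np.linalg.norm(d2) + 1e-8)
            if score > best_score:
                best, best_score = (i, j), score
    return best                                      # indices of the two grasp points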
This work presents a method for classifying objects grasped with a multi-fingered robotic hand by combining proprioceptive and tactile data in a hybrid descriptor. The proprioceptive data are obtained from the joint positions of the hand, and the tactile data are extracted from the contact registered by pressure cells installed on the phalanges. The proposed approach identifies the object by implicitly learning its geometry and stiffness from the data provided by the sensors. In this work we show that the use of bimodal data with supervised learning techniques improves the recognition rate. In the experiments, more than 3000 grasps of up to 7 different household objects were carried out, obtaining correct classifications of 95% (F1 metric) with a single palpation of the object. Furthermore, the generalisation of the method was verified by training our system with some objects and subsequently classifying other new, similar...
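The hybrid-descriptor idea amounts to concatenating the hand's joint positions with the per-phalanx pressure readings and feeding the result to a standard supervised classifier. The sketch below uses a scikit-learn SVM as a stand-in learner and random toy data; feature sizes and the classifier choice are assumptions, not the paper's setup.

# Hybrid proprioceptive+tactile descriptor fed to a supervised classifier (sketch).
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def hybrid_descriptor(joint_positions, pressure_cells):
    """Concatenate hand joint angles with per-phalanx pressure readings."""
    return np.concatenate([np.asarray(joint_positions, dtype=float),
                           np.asarray(pressure_cells, dtype=float)])

# X: one descriptor per grasp, y: object label per grasp (toy random data here).
rng = np.random.default_rng(0)
X = np.stack([hybrid_descriptor(rng.normal(size=16), rng.normal(size=12))
              for _ in range(200)])
y = rng.integers(0, 7, size=200)                    # up to 7 household objects

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print("train accuracy:", clf.score(X, y))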
    One of the challenges in robotic grasping tasks is the problem of detecting whether a grip is stable or not. The lack of stability during a manipulation operation usually causes the slippage of the grasped object due to poor contact forces. Frequently, an unstable grip can be caused by an inadequate pose of the robotic hand or by insufficient contact pressure, or both. The use of tactile data is essential to check such conditions and, therefore, predict the stability of a grasp. In this work, we present and compare different methodologies based on deep learning in order to represent and process tactile data for both stability and slip prediction.
    Robotic manipulators have to constantly deal with the complex task of detecting whether a grasp is stable or, in contrast, whether the grasped object is slipping. Recognising the type of slippage—translational, rotational—and its direction is more challenging than detecting only stability, but is simultaneously of greater use as regards correcting the aforementioned grasping issues. In this work, we propose a learning methodology for detecting the direction of a slip (seven categories) using spatio-temporal tactile features learnt from one tactile sensor. Tactile readings are, therefore, pre-processed and fed to a ConvLSTM that learns to detect these directions with just 50 ms of data. We have extensively evaluated the performance of the system and have achieved relatively high results at the detection of the direction of slip on unseen objects with familiar properties (82.56% accuracy).
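Since PyTorch has no built-in ConvLSTM, the spatio-temporal model described above can be sketched with a hand-rolled ConvLSTM cell followed by a 7-way classification head over a short tactile sequence (a few frames spanning roughly 50 ms). The tactile grid size, hidden width and sequence length below are illustrative assumptions.

# Minimal ConvLSTM cell plus a 7-way slip-direction head (illustrative shapes).
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, h, c):
        gates = self.conv(torch.cat([x, h], dim=1))
        i, f, o, g = gates.chunk(4, dim=1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        c = f * c + i * g
        h = o * torch.tanh(c)
        return h, c

class SlipDirectionNet(nn.Module):
    def __init__(self, in_ch=1, hid_ch=32, n_classes=7, grid=(4, 7)):
        super().__init__()
        self.cell = ConvLSTMCell(in_ch, hid_ch)
        self.head = nn.Linear(hid_ch * grid[0] * grid[1], n_classes)
        self.hid_ch, self.grid = hid_ch, grid

    def forward(self, seq):                           # seq: (B, T, C, H, W) tactile frames
        b = seq.size(0)
        h = seq.new_zeros(b, self.hid_ch, *self.grid)
        c = seq.new_zeros(b, self.hid_ch, *self.grid)
        for t in range(seq.size(1)):
            h, c = self.cell(seq[:, t], h, c)
        return self.head(h.flatten(1))                # logits over the 7 slip directions

# usage: logits = SlipDirectionNet()(torch.rand(2, 5, 1, 4, 7))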
We present a method to detect maritime oil spills from Side-Looking Airborne Radar (SLAR) sensors mounted on aircraft in order to enable a quick response by emergency services when an oil spill occurs. The proposed approach introduces a new type of neural architecture named Convolutional Long Short Term Memory Selectional AutoEncoders (CMSAE), which allows the simultaneous segmentation of multiple classes such as coast, oil spill and ships. Unlike previous works using full SLAR images, in this work only a few scanlines from the beam-scanning of the radar are needed to perform the detection. The main objective is to develop a method that performs accurate segmentation using only the current and previous sensor information, in order to return a real-time response during the flight. The proposed architecture uses a series of CMSAE networks to process each of the objectives, defined as different classes, in parallel. The outputs of these networks are given to a machine learning classifier to pe...
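The "current and previous scanlines only" constraint can be pictured as a rolling-window inference loop: each new scanline is appended to a short history buffer, one per-class network scores the window, and the per-class scores for the newest line are combined. The per-class models are hypothetical callables here, and the argmax combiner is a stand-in for the paper's final classifier.

# Rolling-window, scanline-based inference loop (hypothetical per-class nets).
from collections import deque
import numpy as np

def stream_segmentation(scanlines, nets, window=32):
    """scanlines: iterable of 1-D arrays; yields one label row per new scanline."""
    buf = deque(maxlen=window)
    for line in scanlines:
        buf.append(line)
        if len(buf) < window:
            continue                                  # wait until the window is full
        patch = np.stack(buf)                         # (window, width) recent history
        scores = {name: net(patch) for name, net in nets.items()}  # per-class score maps
        names = list(scores)
        newest = np.stack([scores[n][-1] for n in names])  # scores for the newest line
        labels = np.argmax(newest, axis=0)            # winning class per pixel
        yield [names[k] for k in labels]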
In this work, we use deep neural autoencoders to segment oil spills from Side-Looking Airborne Radar (SLAR) imagery. Synthetic Aperture Radar (SAR) has been much exploited for ocean surface monitoring, especially for oil pollution detection, but few approaches in the literature use SLAR. Our sensor consists of two SAR antennas mounted on an aircraft, enabling a quicker response than satellite sensors for emergency services when an oil spill occurs. Experiments on TERMA radar were carried out to detect oil spills on Spanish coasts using deep selectional autoencoders and RED-nets (very deep Residual Encoder-Decoder Networks). Different configurations of these networks were evaluated and the best topology significantly outperformed previous approaches, correctly detecting 100% of the spills and obtaining an F1 score of 93.01% at the pixel level. The proposed autoencoders perform accurately in SLAR imagery that has artifacts and noise caused by the aircraft maneuvers, in different weat...
    This work presents a method for oil-spill detection on Spanish coasts using aerial Side-Looking Airborne Radar (SLAR) images, which are captured using a Terma sensor. The proposed method uses grayscale image processing techniques to identify the dark spots that represent oil slicks on the sea. The approach is based on two steps. First, the noise regions caused by aircraft movements are detected and labeled in order to avoid the detection of false-positives. Second, a segmentation process guided by a map saliency technique is used to detect image regions that represent oil slicks. The results show that the proposed method is an improvement on the previous approaches for this task when employing SLAR images.
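The two-step recipe can be sketched with classical OpenCV operations: flag noisy rows caused by aircraft manoeuvres, then threshold a smoothed grayscale image to keep dark, sizeable blobs. The per-row variance test and the Otsu threshold below are stand-ins for the paper's noise labelling and saliency-guided segmentation, and all parameter values are illustrative.

# Classical dark-spot (oil slick) candidate extraction in a grayscale SLAR image.
import cv2
import numpy as np

def detect_dark_spots(gray, noise_var_thr=800.0, min_area=50):
    gray = gray.astype(np.uint8)
    # 1) flag noisy rows (e.g. turn artefacts) by their per-row intensity variance
    row_var = gray.astype(np.float32).var(axis=1)
    valid_rows = row_var < noise_var_thr
    # 2) smooth and threshold: dark pixels are slick candidates
    blur = cv2.GaussianBlur(gray, (9, 9), 0)
    _, dark = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    dark[~valid_rows, :] = 0                          # ignore detections in noisy rows
    # 3) clean up with a morphological opening and keep sizeable blobs only
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    dark = cv2.morphologyEx(dark, cv2.MORPH_OPEN, kernel)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(dark)
    keep = np.zeros_like(dark)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] > min_area:
            keep[labels == i] = 255
    return keep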
This article presents a study of the time spent, the effort and the difficulty involved in studying and learning the semester-long subject "Ingeniería de Control" in the Computer Engineering degrees at the Universidad de Alicante. The aim of this study was to measure how the introduction of the European Higher Education Area and ECTS credits will influence the learning of technical subjects of some conceptual complexity. The study presented here focuses only on the theoretical learning of the subject, through the development and presentation of coursework. This is because the acquisition of practical skills has already been addressed extensively and successfully in other teaching studies and research dealing with the incorporation of virtual laboratories to simulate concepts and to remotely access real models and systems. This work was funded by the Instituto de Ciencias de la Educación (ICE) of the Universidad de Alicante.
This paper presents an object recognition technique based on projective geometry for industrial pieces that satisfy certain geometric properties. First of all, we consider some corner detection methods, which are useful for the extraction of interest points in digital images. For object recognition by means of projective invariants, an excessive number of points to be processed implies a greater complexity of the algorithm. We present a method that reduces the points extracted by different corner detection techniques, based on the elimination of non-significant points using the estimation of the straight lines that contain those points. Secondly, these groups of points are then used to build projective invariants which allow us to distinguish one object from another. Experiments with different pieces and real grey-scale images show the validity of this approach.
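Two small ingredients mentioned above can be made concrete: discarding corner points that lie on the line through their neighbours, and the cross-ratio, the basic projective invariant of four collinear points. The tolerance and the exact pruning rule are illustrative; the paper's invariant construction is not reproduced here.

# Sketch: collinear-point pruning and the cross-ratio of four collinear points.
import numpy as np

def point_line_distance(p, a, b):
    """Distance from 2-D point p to the line through a and b."""
    d = b - a
    q = p - a
    return abs(d[0] * q[1] - d[1] * q[0]) / (np.linalg.norm(d) + 1e-12)

def prune_collinear(points, tol=1.0):
    """Drop points lying (nearly) on the segment joining their contour neighbours."""
    keep, n = [], len(points)
    for i in range(n):
        p, a, b = points[i], points[i - 1], points[(i + 1) % n]
        if point_line_distance(p, a, b) > tol:
            keep.append(points[i])
    return np.array(keep)

def cross_ratio(a, b, c, d):
    """Cross-ratio (AC * BD) / (BC * AD) of four collinear points."""
    ac, bc = np.linalg.norm(c - a), np.linalg.norm(c - b)
    ad, bd = np.linalg.norm(d - a), np.linalg.norm(d - b)
    return (ac * bd) / (bc * ad)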
† These authors contributed equally to this work. This paper describes the development of a low-cost mini-robot that is controlled by visual gestures. The prototype allows a person with disabilities to perform visual inspections indoors and in domestic spaces. Such a device could be used as the operator's eyes, obviating the need for them to move about. The robot is equipped with a motorised webcam that is also controlled by visual gestures. This camera is used to monitor tasks in the home using the mini-robot while the operator remains still. The prototype was evaluated through several experiments testing the ability of the mini-robot's kinematics and communication systems to make it follow certain paths. The mini-robot can be programmed with specific orders and can be tele-operated by means of 3D hand gestures, enabling the operator to perform movements and monitor tasks from a distance.
In this paper we analyze, in some detail, the vision system architecture for disassembly applications. This work is carried out in the context of motion and stereo analysis. The methodology presented is useful for working in manufacturing conditions and for facing difficult situations such as the occlusion of components. The recognition and location of three-dimensional objects is important for automatic disassembly. A data fusion scheme over multiple cameras has been proposed for extracting information from the scene. Data provided by the sensors are used to determine object recognition, location and orientation.
Learning and teaching processes are continually changing. The design of learning technologies has therefore gained interest among educators and educational institutions, from secondary school to higher education. This paper describes the successful use in education of social learning technologies and virtual laboratories designed by the authors, as well as of videos developed by the students. These tools, combined with other open educational resources (OERs) based on a blended-learning methodology, have been employed to teach the subject of Computer Networks. We have verified not only that the application of OERs to the learning process leads to a significant improvement in assessments, but also that the combination of several OERs enhances their effectiveness. These results are supported by, firstly, a study of both students' opinions and students' behaviour over five academic years, and, secondly, a correlation analysis between the use of OERs and the grades obtained by students.
Robotic manipulation remains an unsolved problem. It involves many complex aspects, such as the tactile perception of a wide variety of objects and materials, grasp control to plan the robotic hand pose, etc. Most previous work on this topic has used expensive sensors, which hinders its application in industry. In this work, a grasp-detection system is proposed that uses a low-cost, image-based tactile sensor known as DIGIT. The developed method, based on deep convolutional networks, is able to detect contact or no contact with accuracies above 95%. The system has been trained and tested with an in-house dataset of more than 16,000 images taken from grasps of different objects, using several DIGIT units. The detection method is part of a grasp controller for a ROBOTIQ 2F-140 gripper.
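A binary contact/no-contact classifier over images from a vision-based tactile sensor can be as small as the sketch below. The architecture, input size and usage snippet are illustrative assumptions, not the network used in the paper.

# Minimal contact/no-contact classifier over tactile images (illustrative).
import torch
import torch.nn as nn

class ContactNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)            # contact / no contact

    def forward(self, x):                             # x: (B, 3, H, W) tactile image
        return self.classifier(self.features(x).flatten(1))

# illustrative use inside a grasp-controller loop:
# logits = ContactNet()(frame); in_contact = logits.argmax(1).item() == 1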
This paper presents a system that combines computer vision and surface electromyography techniques to perform grasping tasks with a robotic hand. In order to achieve a reliable grasping action, the vision-driven system is used to compute pre-grasping poses of the robotic system based on the analysis of tridimensional object features. The human operator can then correct the pre-grasping pose of the robot using surface electromyographic signals from the forearm during wrist flexion and extension. Weak wrist flexions and extensions allow a fine adjustment of the robotic system to grasp the object and finally, when the operator considers that the grasping position is optimal, a strong flexion is performed to initiate the grasping of the object. The system has been tested with several subjects to check its performance, showing a grasping accuracy of around 95% of the attempted grasps, which improves by more than 13% the grasping accuracy of previous experiments in which electromyograph...
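The decision logic alone can be sketched as comparing a rectified-and-smoothed sEMG envelope against two thresholds: weak wrist activity nudges the pre-grasp pose, while a strong flexion triggers the grasp. The envelope estimate, thresholds and step size are illustrative placeholders, not the calibrated values of the study.

# Sketch of the sEMG decision logic (illustrative thresholds and step size).
import numpy as np

def emg_envelope(window):
    """Mean absolute value of a short raw sEMG window."""
    return float(np.mean(np.abs(window)))

def control_step(flexor_win, extensor_win, pose, weak_thr=0.1, strong_thr=0.4, step=0.005):
    flex, ext = emg_envelope(flexor_win), emg_envelope(extensor_win)
    if flex > strong_thr:
        return pose, "grasp"                          # strong flexion: close the hand
    if flex > weak_thr:
        pose = pose + step                            # weak flexion: nudge the pose one way
    elif ext > weak_thr:
        pose = pose - step                            # weak extension: nudge it the other way
    return pose, "adjust"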
Paper presented at IBCE'04, Second IFAC Workshop on Internet Based Control Education, 5-7 September 2004, Grenoble, France. In this article, we describe the virtual and remote laboratory for computer vision and robotics education at the University of Alicante (Spain). Its aim is to provide all students with access to the available robotics and computer vision equipment, which is generally limited due to its high cost.
