Domain Adaptation (DA) is a technique that aims at extracting information from a labeled remote sensing image to allow classifying a different image obtained by the same sensor but at a different geographical location. This is a very complex problem from the computational point of view, especially for very high-resolution multispectral images. TCANet is a deep learning neural network for DA classification problems that has proven very accurate at solving them. TCANet consists of several stages based on the application of convolutional filters obtained through Transfer Component Analysis (TCA) computed over the input images. In contrast to the usual CNN-based networks, it does not require training. In this paper, a hybrid parallel TCA-based domain adaptation technique for the classification of very high-resolution multispectral images is presented. It is designed for efficient execution on a multi-node computer using the Message Passing Interface (MPI), exploiting...
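The core of the approach described above is Transfer Component Analysis. A minimal NumPy sketch of TCA with a linear kernel is shown below; it is an illustrative reimplementation of the textbook formulation, not the TCANet code, and the data, `mu` regularizer, and component count are assumptions for the example.

```python
import numpy as np

def tca_components(Xs, Xt, n_components=2, mu=1.0):
    """Minimal Transfer Component Analysis (TCA) sketch with a linear kernel.

    Learns components in which the Maximum Mean Discrepancy (MMD) between
    source (Xs) and target (Xt) distributions is reduced. Illustrative
    reimplementation of the standard TCA formulation, not the TCANet code.
    """
    X = np.vstack([Xs, Xt])                      # combined samples, (ns+nt, d)
    ns, nt = len(Xs), len(Xt)
    n = ns + nt
    K = X @ X.T                                  # linear kernel matrix

    # MMD coefficient matrix L: pulls source and target means together
    e = np.vstack([np.full((ns, 1), 1.0 / ns), np.full((nt, 1), -1.0 / nt)])
    L = e @ e.T

    H = np.eye(n) - np.full((n, n), 1.0 / n)     # centering matrix

    # TCA eigenproblem: leading eigenvectors of (K L K + mu I)^-1 (K H K)
    A = K @ L @ K + mu * np.eye(n)
    B = K @ H @ K
    vals, vecs = np.linalg.eig(np.linalg.solve(A, B))
    order = np.argsort(-vals.real)               # largest eigenvalues first
    W = vecs[:, order[:n_components]].real
    return K @ W                                 # transformed data, (n, n_components)

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(20, 5))          # toy "source" image pixels
Xt = rng.normal(0.5, 1.0, size=(15, 5))          # toy "target" image pixels
Z = tca_components(Xs, Xt, n_components=2)
print(Z.shape)                                   # (35, 2)
```

In TCANet, filters derived from components like these are applied convolutionally, which is what removes the need for backpropagation-based training.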
ABSTRACT The techniques for segmentation and classification of hyperspectral images are very costly, which makes them good candidates for parallel and, in particular, GPU processing. In this paper we present a GPU implementation of a segmentation strategy for hyperspectral images consisting of the calculation of a morphological gradient operator, which reduces the dimensionality of the hyperspectral image, followed by a watershed transform over the resulting 2D image. We have studied the main issues for the efficient implementation of the algorithms on the GPU: the exploitation of the thousands of threads available in this architecture and the adequate use of the device bandwidth. The tests show the efficiency of the GPU implementation, indicating that the processing of hyperspectral images can be performed in real time even on commodity GPUs like the one used in the experiments.
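The gradient-then-watershed pipeline above can be sketched on the CPU with SciPy; this is a sequential reference of the same steps (per-band morphological gradient, reduction to one 2D band, watershed from regional-minima markers), not the GPU code, and the synthetic cube and window sizes are assumptions.

```python
import numpy as np
from scipy import ndimage as ndi

# Synthetic "hyperspectral" cube: bands x rows x cols (toy data)
rng = np.random.default_rng(1)
cube = rng.random((8, 32, 32))

# 1) Morphological gradient per band (dilation - erosion), then reduce
#    the cube to a single 2D band via the maximum gradient over bands.
grad = np.max(
    [ndi.grey_dilation(b, size=(3, 3)) - ndi.grey_erosion(b, size=(3, 3))
     for b in cube],
    axis=0,
)

# 2) Markers at regional minima of the gradient image.
minima = grad == ndi.minimum_filter(grad, size=5)
markers, n_regions = ndi.label(minima)

# 3) Watershed transform over the 2D gradient (IFT-based variant in SciPy).
grad_u8 = np.round(255 * (grad - grad.min()) / np.ptp(grad)).astype(np.uint8)
labels = ndi.watershed_ift(grad_u8, markers)
print(labels.shape, n_regions)
```

On the GPU, each of these stages maps to a kernel over thousands of threads; the sequential version only documents the data flow.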
2017 9th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS), 2017
The need for information about the Earth's surface is growing, as it is the basis for applications such as monitoring land use or performing environmental studies. In this context, effective change detection (CD) among multitemporal datasets is a key process that must produce accurate results through computationally efficient algorithms. Most CD methods focus on binary detection (presence or absence of changes) or on clustering the different detected types of changes. In this paper, a CUDA scheme to perform pixel-based multiclass CD for hyperspectral datasets is introduced. The scheme combines multiclass CD with binary CD to obtain an accurate multiclass change map; the combination with the binary map also contributes to reducing the execution time of the CUDA code. The binary CD computes the difference between images using Euclidean and Spectral Angle Mapper (SAM) distances, followed by thresholding with Otsu's algorithm to detect the changed pixels. The multiclass CD begins with the fusion of the multitemporal data, followed by feature extraction with Principal Component Analysis (PCA) and the incorporation of spatial features by means of an Extended Morphological Profile (EMP). The resulting dataset is filtered using the binary CD map and classified pixel by pixel by the supervised algorithms Extreme Learning Machine (ELM) and Support Vector Machine (SVM). The scheme was validated on a non-synthetic multitemporal hyperspectral dataset.
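The binary CD step described above can be sketched with NumPy: a per-pixel Euclidean plus SAM difference image, thresholded with a histogram-based Otsu implementation. The toy images, the way the two distances are combined (a plain sum), and the bin count are assumptions for illustration.

```python
import numpy as np

def sam_distance(a, b, eps=1e-12):
    """Spectral Angle Mapper distance between per-pixel spectra (last axis)."""
    num = np.sum(a * b, axis=-1)
    den = np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1) + eps
    return np.arccos(np.clip(num / den, -1.0, 1.0))

def otsu_threshold(values, bins=256):
    """Otsu's threshold: pick the bin center maximizing between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                       # class-0 pixel counts
    w1 = w0[-1] - w0                           # class-1 pixel counts
    cum = np.cumsum(hist * centers)
    m0 = cum / np.maximum(w0, 1)               # class-0 means
    m1 = (cum[-1] - cum) / np.maximum(w1, 1)   # class-1 means
    return centers[np.argmax(w0 * w1 * (m0 - m1) ** 2)]

# Two synthetic multitemporal "hyperspectral" images (rows x cols x bands)
rng = np.random.default_rng(2)
t1 = rng.random((16, 16, 10))
t2 = t1.copy()
t2[4:8, 4:8] += 1.0                            # injected 4x4 change region

diff = np.linalg.norm(t2 - t1, axis=-1) + sam_distance(t1, t2)
change_map = diff > otsu_threshold(diff)
print(change_map.sum())                        # 16 changed pixels detected
```

The resulting binary map is what filters the multiclass stage, so PCA, EMP, and the classifiers only run on pixels flagged as changed.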
Image and Signal Processing for Remote Sensing XXVII, 2021
Deep Learning (DL)-based classification schemes for hyperspectral remotely sensed data have been introduced in the last few years with remarkable success, due to their capability to learn the non-linear nature of the information that makes up hyperspectral images. In particular, Convolutional Neural Networks (CNNs) have been successfully used for multi-class classification problems in the remote sensing field involving feature extraction. Instead of relying only on the spectral information corresponding to each pixel, CNNs operate over small cubes of the dataset, called patches, centered on the pixels of the image. These networks require a high number of observations to produce a properly generalized model. In these circumstances, data augmentation techniques can help alleviate the problem by generating new, synthetic samples from existing data. Imputation is a statistical technique consisting of filling or replacing missing observations, or the values of a subset of observations, with others obtained via inference from the original dataset. In this paper, a preliminary idea for a data augmentation technique based on the use of data imputation for CNN classification is presented. Different hyperspectral images of the Earth's surface widely used in the remote sensing field have been considered as test datasets. The results show the viability of the preliminary idea.
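The imputation-based augmentation idea can be illustrated with a minimal sketch: mask random entries of existing patches and refill them by mean imputation, yielding synthetic samples. Mean imputation, the masking fraction, and the patch shapes are assumptions here; the paper's actual estimator may be more elaborate.

```python
import numpy as np

def impute_augment(patches, mask_frac=0.3, seed=0):
    """Synthesize patches by masking random entries and refilling them via
    mean imputation over the original set (a stand-in for the paper's
    imputation-based augmentation; the real estimator may differ)."""
    rng = np.random.default_rng(seed)
    mean_patch = patches.mean(axis=0)            # per-position mean over samples
    synthetic = patches.copy()
    mask = rng.random(patches.shape) < mask_frac # entries treated as "missing"
    synthetic[mask] = np.broadcast_to(mean_patch, patches.shape)[mask]
    return synthetic

rng = np.random.default_rng(3)
patches = rng.random((50, 5, 5, 10))             # 50 patches, 5x5 window, 10 bands
augmented = impute_augment(patches)
dataset = np.concatenate([patches, augmented])   # doubled training set for the CNN
print(dataset.shape)                             # (100, 5, 5, 10)
```

Because each synthetic patch stays close to a real one, the augmented set enlarges the training pool without drifting far from the original data distribution.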
2013 IEEE 7th International Conference on Intelligent Data Acquisition and Advanced Computing Systems (IDAACS), 2013
ABSTRACT Level-set methods are commonly used to segment regions of interest within images or volumes. These tasks usually involve a high number of operations. Modern GPUs feature high computation and data throughput capabilities. In this work we present two GPU implementations of the level-set-based segmentation method called Fast Two Cycle. Our solutions partition the computational domain into tiles that can be processed in parallel. The original algorithm is adapted to the special features of the GPU, and performance is optimized by keeping a record of the tiles that require processing at any given time. We have tested our implementations with a set of 3D CT images of brain vessels, and we show that competitive results can be obtained using commodity hardware.
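The active-tile bookkeeping mentioned above can be sketched sequentially: only flagged tiles are updated each sweep, and a tile that changes re-activates its neighbours so the front can propagate. This is a generic sketch of the idea, not the Fast Two Cycle GPU code; the toy per-tile update is an assumption.

```python
import numpy as np

def process_active_tiles(domain, active, step):
    """One sweep over the active-tile set: update only flagged tiles, and
    re-activate the 3x3 neighbourhood of any tile that changed, so the
    evolving front can spread into adjacent tiles on the next sweep."""
    next_active = set()
    for ty, tx in active:
        if step(domain, ty, tx):                 # tile changed this sweep
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = ty + dy, tx + dx
                    if 0 <= ny < domain.shape[0] and 0 <= nx < domain.shape[1]:
                        next_active.add((ny, nx))
    return next_active

# Toy update in place of a level-set step: decrement a per-tile counter,
# reporting "changed" while the counter is positive.
domain = np.zeros((4, 4), dtype=int)
domain[1, 1] = 2
def step(d, ty, tx):
    if d[ty, tx] > 0:
        d[ty, tx] -= 1
        return True
    return False

active, sweeps = {(1, 1)}, 0
while active:
    active = process_active_tiles(domain, active, step)
    sweeps += 1
print(sweeps, domain.sum())                      # 3 0
```

On the GPU, each tile in the active set maps to a thread block, which is what makes tracking the set worthwhile: idle tiles cost nothing.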
2011 IEEE International Conference on Virtual Environments, Human-Computer Interfaces and Measurement Systems Proceedings, 2011
Advances in Intelligent Systems and Computing, 2013
ABSTRACT The high computational cost of the techniques for segmentation and classification of hyperspectral images makes them good candidates for parallel processing, in particular on Graphics Processing Units (GPUs). In this paper, an efficient projection onto GPUs of the spectral–spatial classification of hyperspectral images, using the Compute Unified Device Architecture (CUDA) for NVIDIA devices, is presented. A watershed transform is applied after reducing the hyperspectral image to one band through the calculation of a morphological gradient, while the spectral classification is carried out by Support Vector Machines (SVMs). The results are combined through an adaptive majority vote. The different computational stages are concatenated in a pipeline that minimizes the data transfers between the main memory of the host computer and the global memory of the graphics device in order to maximize the computational throughput. The memory hierarchy and the thousands of threads available in this architecture are efficiently exploited. Different data partitioning strategies and thread block arrangements are studied in order to promote concurrent execution of a large number of threads. The objective is to efficiently exploit commodity hardware with the aim of achieving real-time execution for on-board processing.
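The combination step of this pipeline can be sketched as a plain majority vote: each watershed segment takes the most frequent per-pixel SVM class inside it. The 3x3 toy arrays are assumptions, and the paper's "adaptive" variant may weight the votes rather than count them.

```python
import numpy as np

def majority_vote(segments, svm_labels):
    """Combine a watershed segmentation with per-pixel SVM labels: every
    pixel of a segment is assigned the segment's most frequent SVM class
    (a sketch of the combination step, not the adaptive GPU version)."""
    out = np.empty_like(svm_labels)
    for seg_id in np.unique(segments):
        pix = segments == seg_id
        classes, counts = np.unique(svm_labels[pix], return_counts=True)
        out[pix] = classes[np.argmax(counts)]    # majority class of the segment
    return out

segments = np.array([[0, 0, 1],
                     [0, 1, 1],
                     [2, 2, 2]])                 # toy watershed regions
svm = np.array([[5, 5, 7],
                [4, 7, 4],
                [9, 9, 3]])                      # toy per-pixel SVM classes
print(majority_vote(segments, svm))              # [[5 5 7] [5 7 7] [9 9 9]]
```

The vote regularizes the pixel-wise SVM output with the spatial structure found by the watershed, which is the point of the spectral–spatial combination.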
The problem of visualizing large volumetric datasets is appealing for computation on the GPU. Nevertheless, the design of GPU volume rendering solutions must deal with the limited memory available on a graphics card. In this work, we present a system for multiresolution volume rendering that preprocesses the dataset by dividing it into bricks and generating a compressed version, applying different levels of wavelet-based compression. The compressed volume is then stored in GPU memory. For the later visualization by texture mapping, each brick of the volume is decompressed and rendered at a resolution level that depends on its distance to the camera. This approach computes most of the tasks on the GPU, thus minimizing data transfers between CPU and GPU. We obtain competitive results for volumes of sizes in the range between 64 and 256.
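The distance-based resolution selection can be sketched in a few lines: nearby bricks render at full resolution and each doubling of the distance drops one wavelet level. The distance thresholds and level cap below are illustrative, not values from the paper.

```python
import math

def lod_for_brick(brick_center, camera_pos, base=8.0, max_level=3):
    """Choose a wavelet resolution level for a brick from its distance to
    the camera: level 0 is full resolution, and each doubling of the
    distance beyond `base` coarsens by one level (thresholds are
    illustrative assumptions, not the paper's values)."""
    d = math.dist(brick_center, camera_pos)
    level = int(math.log2(max(d, base) / base))  # one level per doubling
    return min(level, max_level)                 # clamp to coarsest stored level

camera = (0.0, 0.0, 0.0)
for center in [(4.0, 0, 0), (12.0, 0, 0), (40.0, 0, 0), (500.0, 0, 0)]:
    print(center[0], lod_for_brick(center, camera))
```

Clamping to `max_level` matters because only a fixed number of wavelet levels is precomputed and stored per brick during preprocessing.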