Article

Remote Sensing Image Classification Based on Canny Operator Enhanced Edge Features

College of Information Science and Engineering, Shenyang Ligong University, Shenyang 110159, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(12), 3912; https://doi.org/10.3390/s24123912
Submission received: 1 May 2024 / Revised: 7 June 2024 / Accepted: 14 June 2024 / Published: 17 June 2024
(This article belongs to the Special Issue Advances in Remote Sensing Image Enhancement and Classification)

Abstract

Remote sensing image classification plays a crucial role in remote sensing interpretation. With the exponential growth of multi-source remote sensing data, accurately extracting target features and comprehending target attributes from complex images significantly affects classification accuracy. To address these challenges, we propose a Canny edge-enhanced multi-level attention feature fusion network (CAF) for remote sensing image classification. The original image is fed into a convolutional network to extract global features, and increasing the depth of the convolutional layers enables feature extraction at multiple levels. To emphasize detailed target features, we additionally apply the Canny operator to extract edge information and use convolutional layers to capture deep edge features. Finally, by leveraging the Attentional Feature Fusion (AFF) network, we fuse the global and detailed features to obtain more discriminative representations for scene classification. The performance of the proposed CAF is evaluated on three publicly available remote sensing scene classification datasets: NWPU-RESISC45, UCM, and MSTAR. The experimental results show that our approach, which incorporates edge detail information, outperforms methods that rely solely on global features.

1. Introduction

Given the rapid advancements in aerospace technology, remote sensing has become an indispensable tool for Earth observation missions. Remote sensing images have driven a growing body of research on civil and military applications such as urban planning, environmental monitoring, and disaster assessment, and analyzing these images plays an important social and economic role.
Researchers in the remote sensing community have shown considerable interest in scene classification, which plays a crucial role in understanding remotely sensed images. It is of paramount importance in numerous Earth observation applications and is widely used in domains such as national defense and security, land use, urban planning, and geographic image retrieval [1,2].
In recent years, deep learning has emerged as the prevailing trend in the domain of big data analysis and has achieved remarkable advancements across various computer vision tasks. The classification of remote sensing images showcases remarkable results with the utilization of the convolutional neural network (CNN), which autonomously captures and extracts advanced abstract features from raw data [3,4,5].
The classification tasks in remote sensing imagery face new problems and challenges due to the ever-increasing scale of remote sensing image data. Example images from commonly used remote sensing datasets are shown in Figure 1. First, the majority of CNN approaches classify remote sensing images using only the features extracted from the highest layer. These deep, high-level features, while rich in meaning and content, typically focus on either local details or overall characteristics, disregarding the interrelationships between different parts and the overall complex structure. Consequently, when confronted with images containing a substantial amount of information and intricate structures, the model’s recognition capability and classification accuracy may become constrained [6,7,8]. Second, as remote sensing data vary across times, seasons, and regions, and as the viewpoint and scale of aerial and satellite images change, the appearance and scale of objects (shape, texture, and color) also change, which complicates scene classification.
In recent years, to address the issue of limited features for image classification, researchers have proposed multi-scale feature fusion and multi-modal information fusion techniques as means to expand and enrich the input features for image classification. Hong et al. [9] introduced a multimodal dataset for the classification of remote sensing data. Wu et al. [10] employed a cross-modal reconstruction strategy along with a sophisticated module called CCR-Net to achieve a more concise fusion representation of diverse remote sensing data sources. Bai et al. [11] presented a network for efficient multi-scale feature fusion, capturing features across various frequencies and scales for scene classification purposes, which proved advantageous by incorporating both high- and low-frequency characteristics to enhance classification accuracy. Yang et al. [12] proposed an enhanced multi-scale feature fusion network that effectively captures features through a parallel multi-path network. Ai et al. [6] proposed a novel approach that utilizes convolution kernels of varying dimensions to extract depth characteristics from SAR targets; the experimental results provide compelling evidence that the incorporation of multi-scale features enhances the robustness of SAR target recognition in the presence of speckle noise interference. Li et al. [13] devised a two-channel CNN architecture integrating attention map-derived features with RGB stream-based ones to extract fusion features, thereby demonstrating improved discriminative power. The aforementioned studies demonstrate that the integration of multiple information sources can effectively address the issue of limited input features in remote sensing image classification, enabling a more comprehensive and accurate representation of remote sensing image characteristics. Consequently, this approach enhances the performance and precision of remote sensing image classification, offering a viable resolution for examination and execution in the relevant field. However, the problem of underutilizing shallow features remains unresolved.
The classification performance of remote sensing image processing can be improved by extracting relevant prior information from large convolutional neural network models and imposing constraints on the network, thereby mitigating the adverse effects of appearance characteristics and scale changes of remote sensing objects on image processing results [13,14]. The efficacy of incorporating edge information for capturing image details has been validated in the context of remote sensing image segmentation tasks [15,16]. Wang et al. [17] proposed an edge enhancement channel attention mechanism, which selectively identifies effective channels after enhancing spatial edge features. This mechanism assists in the identification of blurred or irregular mining areas (MLC). Additionally, Hao et al. [18] and Zhang et al. [19] validated the effectiveness of this approach in remote sensing classification tasks.
Prior studies employ multi-level or multi-scale networks to extract more discriminative image features, utilizing weighted addition or concatenation for fusing features from different levels or scales to enhance image detail information and obtain more discriminative features, which are then fed into the classifier for image scene classification. However, this approach lacks contextual understanding and fails to fully consider the correlation between features, resulting in suboptimal performance for scene classification [20]. Moreover, the disregard for image content in edge information leads to subpar accuracy, and the inclusion of manual feature extraction in some studies limits their applicability as they lack end-to-end structures.
In this study, we propose a Canny edge-enhanced multi-level attention feature fusion network (CAF) to augment the extraction of more comprehensive features. The Canny operator is employed in our approach to extract edge information from images, effectively capturing diverse characteristics by integrating global information with edge details. We employ the AFF module, which effectively incorporates contextual information without introducing excessive parameters [21]. The original image data and edge information undergo multiple convolutions to derive shallow, medium, and deep features, respectively, followed by feature fusion using CAF structure. After feature fusion through CAF structure, they are inputted into the Swin-Transformer for feature extraction and classification. The comparative diagram illustrating the structural differences is presented in Figure 2.
The primary contribution of this article can be concisely summarized as follows:
(1)
The proposed methodology utilizes a multi-level feature fusion approach to extract diverse features from remote sensing images. The integration of features at various levels facilitates the capture of intricate details as well as comprehensive contextual information within the image. Simultaneously, it facilitates the gradual extraction of abstract features, spanning from low-level to high-level representations, thereby yielding more comprehensive and semantically meaningful feature representations that facilitate enhanced interpretation and comprehension of the image.
(2)
At each level, the edge information is simultaneously fused to enhance the representation of detailed features and thereby improve classification results.
(3)
The CAF method, which fuses image features and edge features at each level, achieves better classification performance than the same network without this fusion.

2. Related Work and Motivation

2.1. Related Work

Hand-crafted features: In the initial stages of remote sensing image scene classification, conventional hand-crafted features such as form, texture, hue, and spectral range are commonly utilized for feature extraction. The techniques employed for extracting hand-crafted features include color histogram, texture feature, Global Feature Information (GIST) [22], Gray Level Co-Occurrence Matrix (GLCM) [23], and Scale-Invariant Feature Transform (SIFT) [24,25], among others [26]. Despite their commendable stability and capacity to convey overall shallow information, these traditional hand-crafted features heavily rely on manual design and fail to effectively extract high-resolution remote sensing image features. As a result, their widespread application in classification tasks is limited.
Ways to enhance feature information: To enhance feature information for classification, a multi-branch [27,28,29], multi-level [30,31], and multi-scale [17,32] structure can be employed to fuse diverse features. Shi et al. [29] utilized a bilinear feature extraction structure to merge the extracted feature information from two branches, resulting in improved classification accuracy. Cheng et al. [31] incorporated a feature pyramid network and a squeeze-and-excitation block to obtain multi-level feature maps, while Shi et al. [33] proposed two convolutional combination modules for deep image feature extraction. The corresponding weights of all extracted features are calculated to facilitate fusion through multiple branches. Wang et al. [17] leveraged multi-scale convolution kernels to extract multi-scale information and introduced an additional branch of shallow features for fusion with deep features. However, when fusing features, employing ‘addition’ or ‘concatenation’ would directly result in a mere superposition or concatenation of information from distinct features, disregarding the inter-feature correlation. This oversight may blur the significance among different features and lead to redundancy or loss of crucial information.
Loss function: The remote sensing image classification tasks typically encounter the following challenges: (a) Acquiring images is often difficult, resulting in a limited number of samples for training sets [34]. (b) Some categories have an insufficient number of samples, leading to class imbalance issues. (c) Image interpretation poses difficulties and is susceptible to label noise interference and errors [35]. (d) Images exhibit significant intra-class differences and subtle inter-class distinctions, making fine-grained classification challenging. The aforementioned issues have prompted the development of diverse loss functions aimed at tackling these challenges. For classification problems that require precise distinctions or involve subtle differences, such as in SAR applications, incorporating L2 normalization into the cosine loss function acts as a robust regularizer and can effectively reduce the impact of misclassified samples, including challenging instances and label inaccuracies. Consequently, employing cosine loss [36] yields superior classification accuracy compared to standard cross-entropy loss. Moreover, it proves more suitable for small sample datasets like remote sensing images, which are typically arduous to acquire. The purpose of multi-class cross-entropy loss is to ensure equitable learning across all classes. However, in the presence of class imbalance during training, the model may exhibit excessive confidence towards the majority class and incorrectly classify most samples as belonging to this dominant category. This phenomenon can result in overfitting and have a negative impact on generalization performance. To address this problem, focal loss has been proposed as a solution that focuses on challenging categories [37]. The contribution of straightforward examples is diminished and more intricate classification instances are emphasized by focal loss, enabling a focused approach towards difficult-to-classify examples. By manipulating the regularization factor, it is possible to reduce the impact of simple examples at different levels. The Orthogonal Projection Loss (OPL) [38] is designed to optimize feature discrimination by maximizing inter-class orthogonality and minimizing intra-class distance, enhancing the model’s robustness against label noise and other practical interferences. This approach is particularly suitable for small sample datasets of remote sensing images.

2.2. Motivation

Currently, the majority of CNN-based models employ the ultimate classification features derived from the final stage for tasks that involve classifying images at a coarse-grained level. However, in tasks involving classification of fine-grained images, discarding the front-end shallow features can lead to a deterioration in classification accuracy, particularly when dealing with low-resolution (SAR) images. Furthermore, although shallow features can be propagated from the network’s initial layers to deeper ones through multiple convolutional and pooling operations, they may become diluted and consequently weaken the spatial expression capability of the final features. Additionally, issues such as small sample size and imbalance in remote sensing images also affect the accuracy of classification tasks.
The Swin-Transformer has gained significant traction among researchers in the field of remote sensing tasks due to its exceptional performance and ability to seamlessly integrate global and local data using window attention [30,39,40,41]. This is why we have chosen it as the backbone network to address the aforementioned challenges. To extract edge information, we utilize the Canny operator, which offers advantages such as multi-scale detection, noise reduction, precise localization, and an effective edge connectivity strategy. Additionally, we employ a multi-tier feature integration technique to extract diverse levels of features from remote sensing images. This approach is advantageous for capturing contextual information, extracting deep and shallow features, and improving model interpretability. By fusing features at different levels, both detailed information and global context can be captured while progressively extracting abstract features from low-level to high-level representations that are more comprehensive and semantically meaningful. Regarding fusion methods, we choose the attention-based image feature fusion technique due to its effectiveness in extracting and integrating crucial feature information from various regions within the image. This enables the model to focus better on significant regions and features within the image, thereby enhancing performance and robustness. Furthermore, by assigning appropriate weights to multiple loss functions, leveraging their unique advantages significantly enhances classification accuracy.

3. Method

3.1. Overall Framework

The overall structure of CAF is depicted in Figure 3. The fundamental elements of our network consist of a feature fusion module with multiple levels and a classifier for extracting features. Firstly, the edge image is obtained by applying the Canny edge detection operator on the original image, which is then combined with the original image using a dual branch structure. Convolution operations are performed iteratively on the dual branches to extract shallow and deep features separately (including global and detailed features). Subsequently, an AFF method proposed by Dai et al. [21] is employed to fuse corresponding level features from both branches. Afterwards, the fused information from multiple levels is superimposed to obtain enhanced edge information as well as discriminative shallow and deep characteristics. Finally, this output is fed into Swin-Transformer for further feature extraction and classification.

3.2. Multi-Level Feature Extraction

The process of multi-level feature extraction is illustrated in gray in Figure 3. The upper branch represents the multi-level extraction of features from the original image, while the lower branch represents the multi-level extraction of edge features. At each level, we apply a single 3 × 3 convolution to extract features from both images at different scales, which helps avoid redundant network parameters and overfitting. Each convolution is followed by a max-pooling layer with a stride of 2, reducing the size of the feature map as the network deepens. To enhance abstract semantic features, we progressively increase the channel width to 8, thereby minimizing the risk of feature loss.
The abovementioned approach integrates all extracted scales (features at different stages or depths) to facilitate feature fusion, thereby determining the ultimate classification outcome and ensuring enhanced representation capability.
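To make the dual-branch, multi-level extraction concrete, the following PyTorch sketch shows one way to implement the stacked 3 × 3 convolution and max-pooling blocks described above. The number of levels, the BatchNorm/ReLU layers, and the exact channel configuration are illustrative assumptions rather than the paper's released code.

```python
import torch
import torch.nn as nn

class MultiLevelExtractor(nn.Module):
    """Stacked 3x3 convolution + max-pooling blocks that expose the feature
    map at every depth (a sketch; layer settings are assumptions)."""

    def __init__(self, in_channels=3, width=8, num_levels=3):
        super().__init__()
        self.blocks = nn.ModuleList()
        c_in = in_channels
        for _ in range(num_levels):
            self.blocks.append(nn.Sequential(
                nn.Conv2d(c_in, width, kernel_size=3, padding=1),
                nn.BatchNorm2d(width),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(kernel_size=2, stride=2),  # halves the spatial size
            ))
            c_in = width

    def forward(self, x):
        features = []
        for block in self.blocks:
            x = block(x)
            features.append(x)  # keep every level for later fusion
        return features

# The same extractor is applied to both the original image and the Canny edge
# image, producing two aligned feature pyramids for level-wise fusion.
```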

3.3. Edge Information Enhancement

By utilizing the Canny operator to extract edge features from remote sensing objects, the global features of the image can be supplemented, further enhancing classification performance. The extraction and complement process is illustrated in Figure 3.
Step 1: Input a remote sensing image, sample each image, and resize it to 224 × 224 while keeping the feature dimension unchanged.
Step 2: Conduct multi-layer feature extraction on both the original image and the edge features obtained with the Canny operator at different scales (depths).
Step 3: Fuse the original image feature maps with the edge feature maps of the corresponding scale (depth) (i.e., scales of 224 and 112). The specific fusion method is described in detail in the following section.
Step 4: Upsample the scales smaller than 224 × 224 to a size of 224 × 224 using transposed convolution. This step allows the Swin-Transformer to use official weights pre-trained on ImageNet, which improves model performance by accelerating convergence and enhancing generalization. It also reduces training time, making the approach suitable for small-scale datasets such as remote sensing images, which are not easily obtainable.
Step 5: Overlay the four fused feature maps of different depths and feed them into the Swin-Transformer for further feature extraction and classification, improving overall classification performance.
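The preprocessing steps above can be sketched as follows. The OpenCV-based snippet covers Steps 1 and 2 (resizing and Canny edge extraction); the Canny thresholds and the three-channel replication of the edge map are illustrative assumptions, not values reported in the paper.

```python
import cv2
import numpy as np
import torch

def load_image_and_edges(image_path, size=224, low=100, high=200):
    """Resize an input image to size x size and extract its Canny edge map
    (Steps 1-2); `low` and `high` are assumed threshold values."""
    bgr = cv2.imread(image_path)                      # H x W x 3, uint8
    bgr = cv2.resize(bgr, (size, size))
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, low, high)                # H x W, values in {0, 255}

    # Replicate the single edge channel so both branches share the same depth.
    edges3 = np.repeat(edges[..., None], 3, axis=-1)

    to_tensor = lambda a: torch.from_numpy(a).permute(2, 0, 1).float() / 255.0
    return to_tensor(cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)), to_tensor(edges3)

# Step 4 (restoring smaller fused maps to 224 x 224) can be realized with a
# transposed convolution, e.g. torch.nn.ConvTranspose2d(c, c, kernel_size=2, stride=2).
```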

3.4. Feature Information Fusion Network

We employ the Canny edge and AFF fusion method, referred to as CAF for short and illustrated in Figure 4 and in the experiments of Section 4, which combines high-level and low-level characteristics of a single image into a deep fusion feature map by assigning suitable weights.
The feature X represents the Nth level extracted from the original image, while Y represents the Nth level extracted from the edge image; mathematically, $X, Y \in \mathbb{R}^{C \times H \times W}$. The original image feature X and the edge feature Y are added together and passed to two branches that focus on local and global information, respectively, which can be expressed as
$$L(X + Y) = \mathrm{BN}\Big(\mathrm{Conv}_2\big(\mathrm{ReLU}\big(\mathrm{BN}\big(\mathrm{Conv}_1(X + Y)\big)\big)\big)\Big)$$
$$G(X + Y) = \mathrm{BN}\Big(\mathrm{Conv}_2\big(\mathrm{ReLU}\big(\mathrm{BN}\big(\mathrm{Conv}_1\big(g(X + Y)\big)\big)\big)\big)\Big)$$
where $g(\cdot)$ denotes global pooling. After the sigmoid, the feature weights of X and Y are generated, respectively, and the fused feature map Z carrying the weight information is obtained; that is, the attention-based image fusion feature is produced, where
$$\lambda_X = \mathrm{sig}\big(L(X + Y) \oplus G(X + Y)\big)$$
$$\lambda_X + \lambda_Y = 1$$
The fused feature map $Z \in \mathbb{R}^{C \times H \times W}$ can be represented as follows:
$$Z = \lambda_X(X + Y) \otimes X + \lambda_Y(X + Y) \otimes Y$$
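A minimal PyTorch sketch of this fusion module is given below, following Equations (1)–(5) and the AFF design of Dai et al. [21]. The 1 × 1 bottleneck convolutions and the channel reduction ratio r are assumptions; the paper's exact layer configuration may differ.

```python
import torch
import torch.nn as nn

class CAFFusion(nn.Module):
    """Attention-based fusion of an image feature X and an edge feature Y
    (Equations (1)-(5)); the bottleneck design and ratio r are assumptions."""

    def __init__(self, channels, r=4):
        super().__init__()
        mid = max(channels // r, 1)

        def bottleneck():
            return nn.Sequential(
                nn.Conv2d(channels, mid, kernel_size=1),
                nn.BatchNorm2d(mid),
                nn.ReLU(inplace=True),
                nn.Conv2d(mid, channels, kernel_size=1),
                nn.BatchNorm2d(channels),
            )

        self.local_att = bottleneck()             # L(X + Y): point-wise context
        self.global_att = nn.Sequential(          # G(X + Y): channel-wise context
            nn.AdaptiveAvgPool2d(1),              # g(.) = global pooling
            bottleneck(),
        )

    def forward(self, x, y):
        s = x + y
        lam_x = torch.sigmoid(self.local_att(s) + self.global_att(s))  # lambda_X
        lam_y = 1.0 - lam_x                                            # lambda_Y
        return lam_x * x + lam_y * y                                   # fused map Z
```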

3.5. Loss Function

To adapt the loss function for remote sensing image applications, we combine multiple loss functions, including cosine loss, focal loss, and OPL loss, which are suitable for small sample sizes, class imbalances, and noisy labels commonly found in remote sensing images.
We use cosine loss to address fine-grained classification problems and small-sample classification problems; this loss function is also suitable for SAR images. It maximizes the cosine similarity between the output of the neural network and the one-hot vector representing the true class. Consequently, employing cosine loss [36] yields superior classification accuracy compared to the standard cross-entropy loss.
The cosine similarity of two vectors $a, b \in \mathbb{R}^{d}$ in d-dimensional space is determined by the angle between them and is defined as
$$\sigma_{\cos}(a, b) = \cos\big(\angle(a, b)\big) = \frac{\langle a, b \rangle}{\|a\|_{2} \cdot \|b\|_{2}}$$
where $\langle \cdot , \cdot \rangle$ denotes the dot product and $\|\cdot\|_{p}$ the $L_{p}$ norm.
$\varphi_{\mathrm{onehot}}(y)$ establishes a mapping between classes and the prediction space. The class embeddings $\varphi$ are considered fixed, and our goal is to learn the parameters $\theta$ of a neural network $f_{\theta}$ by maximizing the cosine similarity between the features of images and their respective class embeddings.
$$\varphi_{\mathrm{onehot}}(y) = \big(\,\underbrace{0, \dots, 0}_{y-1 \text{ times}},\; 1,\; \underbrace{0, \dots, 0}_{n-y \text{ times}}\,\big)^{T}$$
The cosine loss function minimized by the neural network is defined as
$$\mathcal{L}_{\cos}(x, y) = 1 - \sigma_{\cos}\big(f_{\theta}(x), \varphi(y)\big)$$
where $x$ is an instance from a given domain and $y$ is its true class drawn from the set of categories; the parameters $\theta$ are learned by maximizing the cosine similarity between the image features $f_{\theta}(x)$ and the class embedding $\varphi(y)$.
To mitigate the detrimental impact of class imbalance on generalization performance during training, a potential solution known as focal loss has been proposed to specifically address overfitting issues associated with challenging categories [37].
The weight of the majority class is reduced in accordance with the cross-entropy loss, thereby directing the model’s focus towards learning the minority class. The cross-entropy loss can be expressed as follows:
$$\mathcal{L}_{\mathrm{entropy}}(\hat{y}) = -\alpha \log(\hat{y})$$
The cross-entropy loss is augmented with a modulating factor $(1 - \hat{y})^{\mu}$, where $\mu$ serves as an adjustable focusing parameter, yielding the focal loss of [37], $\mathcal{L}_{\mathrm{focal}}(\hat{y}) = -\alpha (1 - \hat{y})^{\mu} \log(\hat{y})$.
The focal loss weights the losses according to the magnitude of the prediction error, with different values of $\mu$ giving different weighting rates. When the error is large, i.e., when the predicted probability $\hat{y}$ of the true class tends towards 0, focal loss increases the weight assigned to that error, so the model allocates greater attention to these challenging samples. By diminishing the contribution of straightforward examples and emphasizing more intricate instances, focal loss enables a focused approach towards difficult-to-classify examples.
The presence of label noise interference in remote sensing image interpretation poses a significant challenge. The Orthogonal Projection Loss (OPL) [38] is designed to enhance the model’s robustness against label noise and other practical interferences, making it particularly suitable for small sample datasets of remote sensing images.
$$\mathcal{L}_{\mathrm{OPL}} = (1 - s) + \beta \cdot d$$
where $s$ denotes the intra-class (same-class) feature similarity and $d$ the inter-class feature similarity within a batch [38].
The objective of OPL loss is to enforce the functionality of constrained clustering, ensuring that the features pertaining to different classes are orthogonal in the feature space while exhibiting similarity within the same class. This serves to enhance the discriminative capability of the model.
The total loss, referred to as COFE loss, is a weighted combination of these image classification losses and can be precisely defined as follows:
$$\mathcal{L}_{\mathrm{COFE}} = \mathcal{L}_{\mathrm{entropy}} + \delta \cdot \mathcal{L}_{\cos} + \eta \cdot \mathcal{L}_{\mathrm{focal}} + \lambda \cdot \mathcal{L}_{\mathrm{OPL}}$$
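The sketch below assembles the COFE loss from its four components in PyTorch. It is a simplified illustration: the cosine loss assumes one-hot class embeddings in the output space, and the focal and OPL hyperparameters (μ, β) are placeholders in the spirit of the cited works [36,37,38], not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def cosine_loss(outputs, targets, num_classes):
    # L_cos = 1 - cos(f_theta(x), onehot(y)); assumes the output dimension
    # equals num_classes so one-hot vectors serve as class embeddings.
    onehot = F.one_hot(targets, num_classes).float()
    return (1.0 - F.cosine_similarity(outputs, onehot, dim=1)).mean()

def focal_loss(logits, targets, mu=2.0, alpha=1.0):
    # Cross-entropy scaled by (1 - p_t)^mu so hard examples contribute more.
    log_p = F.log_softmax(logits, dim=1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)
    pt = log_pt.exp()
    return (-alpha * (1.0 - pt) ** mu * log_pt).mean()

def opl_loss(features, targets, beta=0.5):
    # L_OPL = (1 - s) + beta * d, with s the mean same-class cosine similarity
    # and d the mean absolute cross-class similarity within the batch.
    f = F.normalize(features, dim=1)
    sim = f @ f.t()
    same = targets.unsqueeze(0) == targets.unsqueeze(1)
    off_diag = ~torch.eye(len(targets), dtype=torch.bool, device=targets.device)
    pos, neg = same & off_diag, ~same
    s = sim[pos].mean() if pos.any() else sim.new_tensor(1.0)
    d = sim[neg].abs().mean() if neg.any() else sim.new_tensor(0.0)
    return (1.0 - s) + beta * d

def cofe_loss(logits, features, targets, num_classes,
              delta=1.0, eta=1.0, lam=1.0):
    # Weighted combination of Equation (11); Section 4.1 sets all weights to 1.
    return (F.cross_entropy(logits, targets)
            + delta * cosine_loss(logits, targets, num_classes)
            + eta * focal_loss(logits, targets)
            + lam * opl_loss(features, targets))
```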

4. Experiments and Result

The performance of our CAF method in classifying remote sensing data is assessed by conducting experiments on three publicly accessible datasets in this section: the University of California Merced Land Use (UCM) Data Set [42], the Northwestern Polytechnical University Remote Sensing Image Scene Classification 45 (NWPU RESISC45) Data Set [2], and the Moving and Stationary Target Acquisition and Recognition (MSTAR) Data Set [43]. The main information of the three datasets is shown in Table 1.

4.1. Experimental Details

The input images were standardized to a resolution of 224 × 224 pixels as a preliminary step in the training process. To bolster the model’s efficacy and counteract the potential for overfitting, we applied the RandomResizedCrop data augmentation strategy [44,45], which has a proven track record for refining model performance and alleviating overfitting. We selected a batch size of 64, a learning rate of 0.0001, and 100 training epochs; empirical evidence supports the performance delivered by these settings. For the optimization of weight parameters across all layers, we employed the AdamW (Adam with weight decay) optimizer, continuing the optimization until the model converged [46,47,48].
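For reference, the reported preprocessing and optimizer settings translate into roughly the following PyTorch/torchvision configuration; the placeholder model stands in for the actual CAF plus Swin-Transformer network.

```python
import torch.nn as nn
from torch import optim
from torchvision import transforms

# Settings reported in Section 4.1; the backbone below is only a placeholder.
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),   # data augmentation against overfitting
    transforms.ToTensor(),
])

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 45))  # placeholder
optimizer = optim.AdamW(model.parameters(), lr=1e-4)               # lr = 0.0001
batch_size, num_epochs = 64, 100
```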
The PyTorch-based experiments detailed herein were conducted on a personal computing platform equipped with an Intel i7-12700 CPU, an NVIDIA GeForce RTX 3090 graphics card, and 24 GB of RAM.
The efficacy of our methodology was rigorously assessed by segregating the dataset into training and testing subsets, adhering to the conventional partitioning ratios. The dataset was allocated an 80% ratio for training purposes. Ablation studies were meticulously executed to substantiate the performance improvements conferred by the integration of the CAF fusion network and the novel loss function, specifically in relation to classification accuracy.
Regarding the selection of hyperparameters in the COFE loss function, all four types of losses are assigned equal weights of 1, and the hyperparameters for each type of loss function are determined based on the values provided in the original literature [36,37,38].

4.2. Performance Evaluation Metrics

The performance of the proposed CAF method for scene classification is quantitatively evaluated by employing several commonly used metrics as evaluation indices to assess experimental results. The metrics encompassed in this study comprise overall accuracy (OA), Kappa coefficient (Kappa), and the confusion matrix (CM), enabling both quantitative and qualitative comparisons of classification performance.
The Confusion Matrix serves as a reflection of the classification outcomes, providing a foundation for comprehending other image classification evaluation metrics. In cases where there are n sample classes, the confusion matrix C is represented by an n × n square matrix, with its expression depicted in Formula (12).
$$CM = \begin{pmatrix} c_{11} & c_{12} & \cdots & c_{1n} \\ c_{21} & c_{22} & \cdots & c_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ c_{n1} & c_{n2} & \cdots & c_{nn} \end{pmatrix}$$
The value of $c_{ij}$ denotes the number of instances belonging to class i that were assigned to class j. The total number of ground-truth samples of class i is therefore $\sum_{j=1}^{n} c_{ij}$, while $\sum_{i=1}^{n} c_{ij}$ represents the number of samples classified as class j.
Formula (13) gives the overall accuracy (OA), which is the sum of correctly classified samples, $\sum_{i=1}^{n} c_{ii}$, divided by the total number of samples, $\sum_{j=1}^{n}\sum_{i=1}^{n} c_{ij}$. The overall accuracy indicates the classifier’s performance as a whole, yet it is strongly influenced by imbalanced sample distributions, particularly when one class dominates with a large number of samples.
$$OA = \frac{\sum_{i=1}^{n} c_{ii}}{\sum_{j=1}^{n}\sum_{i=1}^{n} c_{ij}}$$
The Kappa coefficient plays a vital role in assessing the effectiveness of image classification by measuring the level of concordance between classification outcomes and actual values. It is calculated using Formula (14). The Kappa coefficient represents the extent to which the current classification method reduces errors compared to a completely random classifier, with values typically ranging from 0 to 1. The higher the Kappa coefficients, the stronger the consistency and enhanced model performance in classification.
$$Kappa = \frac{N\sum_{i=1}^{n} c_{ii} - \sum_{i=1}^{n}\left(\sum_{j=1}^{n} c_{ij} \times \sum_{j=1}^{n} c_{ji}\right)}{N^{2} - \sum_{i=1}^{n}\left(\sum_{j=1}^{n} c_{ij} \times \sum_{j=1}^{n} c_{ji}\right)}$$
In addition, we compared further evaluation metrics for assessing the classification performance, namely, Precision, Recall, and F1 Score (F1). These three metrics are defined by Equations (15)–(17), respectively. In this context, TP (True Positive) denotes correctly classified positive samples and TN (True Negative) denotes correctly classified negative samples, while FP (False Positive) denotes negative samples misclassified as positive and FN (False Negative) denotes positive samples misclassified as negative.
$$Precision = \frac{TP}{TP + FP}$$
$$Recall = \frac{TP}{TP + FN}$$
$$F1 = \frac{Precision \times Recall}{0.5 \times (Precision + Recall)}$$
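These metrics can be computed directly from the confusion matrix, as in the following NumPy sketch; the small epsilon guards against division by zero are an implementation convenience, not part of the original formulas.

```python
import numpy as np

def classification_metrics(cm, eps=1e-12):
    """Compute OA, Kappa, and per-class Precision/Recall/F1 from an n x n
    confusion matrix, following Equations (12)-(17)."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    row = cm.sum(axis=1)          # ground-truth samples per class
    col = cm.sum(axis=0)          # predicted samples per class
    tp = np.diag(cm)

    oa = tp.sum() / total                                  # Equation (13)
    pe = (row * col).sum() / total ** 2                    # chance agreement
    kappa = (oa - pe) / (1.0 - pe)                         # Equation (14)

    precision = tp / (col + eps)                           # TP / (TP + FP)
    recall = tp / (row + eps)                              # TP / (TP + FN)
    f1 = (precision * recall) / (0.5 * (precision + recall) + eps)

    return oa, kappa, precision, recall, f1

# Example with a toy 2-class confusion matrix:
# oa, kappa, p, r, f1 = classification_metrics([[48, 2], [5, 45]])
```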

4.3. Results

We conducted a series of experiments using the Swin-Transformer as the sole baseline for feature extraction and classification. These experiments included the following: (1) sole use of the Swin-Transformer with cross-entropy loss function; (2) utilization of the novel loss function in conjunction with the Swin-Transformer; (3) integration of CAF fusion with the Swin-Transformer along with cross-entropy loss function; (4) fusion of CAF with the Swin-Transformer while employing the novel loss function; (5) application of CAF twice alongside cross-entropy loss function in combination with the Swin-Transformer; and, finally, (6) dual utilization of CAF fusion along with application of the novel loss function on top of the Swin-Transformer. The findings from the experiments are displayed in Table 2, Table 3 and Table 4.
By analyzing Table 2, it can be observed that the influence of the loss function on the classification outcome is minimal when solely employing the Swin-Transformer as a classifier for original image classification. This phenomenon arises due to our proposed novel loss function primarily addressing challenges associated with small sample sizes and imbalanced datasets. Consequently, improvements in accuracy are evident on the UCM dataset characterized by a lower number of samples per class, while the accuracy rate remains almost unchanged on the other two datasets.
Furthermore, through the analysis of Table 3 and Table 4, it is apparent that incorporating edge information in the CAF method indeed enhances classification accuracy compared to direct utilization of original images for classification purposes. Moreover, mining depth information with each additional convolutional layer further improves accuracy levels, as evidenced by experiments conducted on visible light datasets. However, intriguingly, SAR dataset analysis reveals that mining only one layer of deep information yields higher accuracy than mining two layers; we attribute this discrepancy to unique image characteristics inherent in SAR data.
In addition, as shown in Figure 5 and Figure 6, we analyzed the improvements for each category in the UCM and MSTAR datasets. Notably, Figure 5 and Figure 6 demonstrate a significant reduction in misclassified samples and scenes when employing our method, thereby substantiating the efficacy of the proposed approach. The evaluation metrics for the three datasets using different methods are presented in Figure 7, Figure 8 and Figure 9 and Table 5.
To acquire a holistic comprehension of the influence exerted by the CAF technique on tasks related to classification, we present feature heat maps for target samples generated by networks with and without the CAF module. This analysis aims to provide valuable insights into how this method influences classification performance and can contribute to future research in this field.
The heat maps of the UCM, NWPU-RESISC45, and MSTAR datasets are displayed in Figure 10, Figure 11 and Figure 12. Two samples from each dataset are provided as examples, with each sample consisting of three images. From left to right, these are the original image, the heat map generated by the model without the CAF module (superimposed on the original image), and the heat map produced by the model with the CAF module (also superimposed on the original image). Darker colors (blue) indicate lower activation values and smaller contributions, while brighter colors (red) represent higher activation values and greater contributions to classification. The significant regions that drive the model’s classification decisions can thus be observed. As depicted in Figure 10, Figure 11 and Figure 12, two main issues arise for models without a CAF module. Firstly, activation regions concentrate mainly on background areas rather than on the foreground objects, such as the airplane in Figure 10a, the house in Figure 10d, the church in Figure 11a, or the circular farmland in Figure 11d. In contrast, integrating a CAF module encourages the network to explore potentially domain-invariant features through edge extraction and feature fusion, resulting in more discriminative features. Secondly, for MSTAR images that exhibit prominent shadows, such as Figure 12a,d, models lacking a CAF module tend to prioritize shadows over the target; employing a CAF module helps alleviate this issue.
The classification accuracy metric is employed as an objective indicator to assess the performance of our proposed model. To thoroughly analyze the strengths and weaknesses of our model, we perform a comparative evaluation of the proposed method against several state-of-the-art algorithms on the three datasets. The results are given in Table 6. The superiority of our method is evident, as it surpasses the majority of existing methods.
To objectively and comprehensively demonstrate the superiority of the proposed CAF, an ablation experiment was conducted, encompassing four conditions: ➀ employing solely the Swin-transformer, ➁ utilizing a Swin-transformer-based approach with enhanced edge features through addition, ➂ integrating CAF with the Swin-transformer, and ➃ combining two instances of CAF with the Swin-transformer. To ensure the fairness of the experiment, we used the cross-entropy loss function. The experimental findings unequivocally validated that incorporating edge information via the CAF method surpassed the mere addition of edge information. The results are given in Table 7.

5. Discussion

Experimental results demonstrate significant improvements in classification accuracy and discriminability with our approach. Specifically, compared to the baseline on different datasets, our method achieves respective improvements of 5.66%, 3.49%, and 1.78%, validating its effectiveness.
By incorporating image edge information, attention is visibly focused on objects with clear edges in the generated heat maps. The proposed enhancement significantly enhances recognition performance, particularly in scenarios characterized by a simple background or distinct objects to be recognized. This approach effectively improves overall recognition performance, as demonstrated on the MSTAR dataset. Moreover, when applied to the NWPU-RESISC45 dataset with a large number of samples, the proposed model also acquires more generalized features and enhances classification accuracy. However, challenges arise in complex scenes (e.g., buildings in UCM) or objects with simple shapes and uniform backgrounds (e.g., golf course in UCM), as these situations hinder effective learning of edge features and limit performance improvement.
Simultaneously, based on the outcomes of comparative experiments, we observed that the efficacy of mining two layers of feature information may not necessarily surpass that of mining a single layer, contingent upon the dataset’s characteristics. Consequently, in practical applications, it becomes imperative to make choices based on contextual factors.
Furthermore, the loss function proposed in this study exhibits distinct effects on enhancing classification accuracy owing to the dataset’s inherent characteristics. Ablation experiments reveal a notable enhancement in classification accuracy, particularly for datasets with limited samples and imbalanced distributions.

6. Conclusions

The article proposes a novel approach for classifying remote sensing scenes by extracting comprehensive features from remote sensing images through the integration of multi-layer global features and multi-layer edge features. To address the limitations of existing methods, such as inadequate utilization of depth information, neglect of edge information and image content, and insufficient consideration of contextual information and feature correlation during fusion, we introduce an edge information enhancement module, a multi-level feature extraction module, and a feature information fusion module. Experimental results demonstrate significant improvements in classification accuracy and discriminability with our approach. Without the CAF modules, the network contains 27.53 M parameters; adding one or two CAF modules increases the parameter count by 21.318 M and 21.328 M, respectively. Further efforts are needed to optimize the network architecture for lightweight performance improvement.

Author Contributions

Conceptualization, M.Z.; data curation, D.Y.; formal analysis, M.Z.; methodology, M.Z.; resources, K.S.; software, M.Z.; supervision, K.S.; validation, M.Z.; visualization, Y.Z.; writing—original draft, M.Z.; writing—review and editing, K.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Basic research Fund of the Department of Education JYTQN2023063.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Cheng, G.; Xie, X.; Han, J.; Guo, L.; Xia, G.S. Remote sensing image scene classification meets deep learning: Challenges, methods, benchmarks, and opportunities. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 3735–3756. [Google Scholar] [CrossRef]
  2. Cheng, G.; Han, J.; Lu, X. Remote sensing image scene classification: Benchmark and state of the art. Proc. IEEE 2017, 105, 1865–1883. [Google Scholar] [CrossRef]
  3. Thapa, A.; Horanont, T.; Neupane, B.; Aryal, J. Deep learning for remote sensing image scene classification: A review and meta-analysis. Remote Sens. 2023, 15, 4804. [Google Scholar] [CrossRef]
  4. Adegun, A.A.; Viriri, S.; Tapamo, J.R. Review of deep learning methods for remote sensing satellite images classification: Experimental survey and comparative analysis. J. Big Data 2023, 10, 93. [Google Scholar] [CrossRef]
  5. Li, S.; Song, W.; Fang, L.; Chen, Y.; Ghamisi, P.; Benediktsson, J.A. Deep learning for hyperspectral image classification: An overview. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6690–6709. [Google Scholar] [CrossRef]
  6. Ai, J.; Mao, Y.; Luo, Q.; Jia, L.; Xing, M. SAR target classification using the multikernel-size feature fusion-based convolutional neural network. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5214313. [Google Scholar] [CrossRef]
  7. Tang, X.; Li, M.; Ma, J.; Zhang, X.; Liu, F.; Jiao, L. EMTCAL: Efficient multiscale transformer and cross-level attention learning for remote sensing scene classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5626915. [Google Scholar] [CrossRef]
  8. Wang, Q.; Huang, W.; Xiong, Z.; Li, X. Looking closer at the scene: Multiscale representation learning for remote sensing image scene classification. IEEE Trans. Neural Netw. Learn. Syst. 2020, 33, 1414–1428. [Google Scholar] [CrossRef]
  9. Hong, D.; Hu, J.; Yao, J.; Chanussot, J.; Zhu, X.X. Multimodal remote sensing benchmark datasets for land cover classification with a shared and specific feature learning model. ISPRS J. Photogramm. Remote Sens. 2021, 178, 68–80. [Google Scholar] [CrossRef] [PubMed]
  10. Wu, X.; Hong, D.; Chanussot, J. Convolutional neural networks for multimodal remote sensing data classification. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5517010. [Google Scholar] [CrossRef]
  11. Bai, L.; Liu, Q.; Li, C.; Ye, Z.; Hui, M.; Jia, X. Remote sensing image scene classification using multiscale feature fusion covariance network with octave convolution. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5620214. [Google Scholar] [CrossRef]
  12. Yang, J.; Wu, C.; Du, B.; Zhang, L. Enhanced multiscale feature fusion network for HSI classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 10328–10347. [Google Scholar] [CrossRef]
  13. Li, J.; Lin, D.; Wang, Y.; Xu, G.; Zhang, Y.; Ding, C.; Zhou, Y. Deep discriminative representation learning with attention map for scene classification. Remote Sens. 2020, 12, 1366. [Google Scholar] [CrossRef]
  14. He, C.; He, B.; Yin, X.; Wang, W.; Liao, M. Relationship prior and adaptive knowledge mimic based compressed deep network for aerial scene classification. IEEE Access 2019, 7, 137080–137089. [Google Scholar] [CrossRef]
  15. He, C.; Li, S.; Xiong, D.; Fang, P.; Liao, M. Remote sensing image semantic segmentation based on edge information guidance. Remote Sens. 2020, 12, 1501. [Google Scholar] [CrossRef]
  16. Xu, Z.; Zhang, W.; Zhang, T.; Yang, Z.; Li, J. Efficient transformer for remote sensing image segmentation. Remote Sens. 2021, 13, 3585. [Google Scholar] [CrossRef]
  17. Wang, H.; Li, X.; Zhou, G.; Chen, W.; Wang, L. Edge Enhanced Channel Attention-based Graph Convolution Network for Scene Classification of Complex Landscapes. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 3831–3849. [Google Scholar] [CrossRef]
  18. Hao, S.; Wu, B.; Zhao, K.; Ye, Y.; Wang, W. Two-stream swin transformer with differentiable sobel operator for remote sensing image classification. Remote Sens. 2022, 14, 1507. [Google Scholar] [CrossRef]
  19. Zhang, T.; Zhang, X.; Ke, X.; Liu, C.; Xu, X.; Zhan, X.; Wang, C.; Ahmad, I.; Zhou, Y.; Pan, D.; et al. HOG-ShipCLSNet: A novel deep learning network with hog feature fusion for SAR ship classification. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5210322. [Google Scholar] [CrossRef]
  20. Xu, C.; Zhu, G.; Shu, J. A combination of lie group machine learning and deep learning for remote sensing scene classification using multi-layer heterogeneous feature extraction and fusion. Remote Sens. 2022, 14, 1445. [Google Scholar] [CrossRef]
  21. Dai, Y.; Gieseke, F.; Oehmcke, S.; Wu, Y.; Barnard, K. Attentional feature fusion. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2021; pp. 3560–3569. [Google Scholar]
  22. Oliva, A. Gist of the scene. In Neurobiology of Attention; Elsevier: Amsterdam, The Netherlands, 2005; pp. 251–256. [Google Scholar]
  23. Zhang, X.; Cui, J.; Wang, W.; Lin, C. A study for texture feature extraction of high-resolution satellite images based on a direction measure and gray level co-occurrence matrix fusion algorithm. Sensors 2017, 17, 1474. [Google Scholar] [CrossRef] [PubMed]
  24. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  25. Zhu, Q.; Zhong, Y.; Zhao, B.; Xia, G.S.; Zhang, L. Bag-of-visual-words scene classifier with local and global features for high spatial resolution remote sensing imagery. IEEE Geosci. Remote Sens. Lett. 2016, 13, 747–751. [Google Scholar] [CrossRef]
  26. Zhao, F.; Sun, H.; Liu, S.; Zhou, S. Combining low level features and visual attributes for VHR remote sensing image classification. In Proceedings of the MIPPR 2015: Remote Sensing Image Processing, Geographic Information Systems, and Other Applications, Enshi, China, 31 October–1 November 2015; SPIE: California, CA, USA, 2015; Volume 9815, pp. 74–81. [Google Scholar]
  27. Khan, S.D.; Basalamah, S. Multi-branch deep learning framework for land scene classification in satellite imagery. Remote Sens. 2023, 15, 3408. [Google Scholar] [CrossRef]
  28. Wu, H.; Zhou, H.; Wang, A.; Iwahori, Y. Precise Crop Classification of Hyperspectral Images Using Multi-Branch Feature Fusion and Dilation-Based MLP. Remote Sens. 2022, 14, 2713. [Google Scholar] [CrossRef]
  29. Shi, C.; Wang, T.; Wang, L. Branch feature fusion convolution network for remote sensing scene classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 5194–5210. [Google Scholar] [CrossRef]
  30. Shi, C.; Zhang, X.; Sun, J.; Wang, L. Remote sensing scene image classification based on dense fusion of multi-level features. Remote Sens. 2021, 13, 4379. [Google Scholar] [CrossRef]
  31. Cheng, G.; Si, Y.; Hong, H.; Yao, X.; Guo, L. Cross-scale feature fusion for object detection in optical remote sensing images. IEEE Geosci. Remote Sens. Lett. 2020, 18, 431–435. [Google Scholar] [CrossRef]
  32. Jiang, N.; Shi, H.; Geng, J. Multi-Scale Graph-Based Feature Fusion for Few-Shot Remote Sensing Image Scene Classification. Remote Sens. 2022, 14, 5550. [Google Scholar] [CrossRef]
  33. Shi, C.; Zhao, X.; Wang, L. A multi-branch feature fusion strategy based on an attention mechanism for remote sensing image scene classification. Remote Sens. 2021, 13, 1950. [Google Scholar] [CrossRef]
  34. Sun, X.; Wang, B.; Wang, Z.; Li, H.; Li, H.; Fu, K. Research progress on few-shot learning for remote sensing image interpretation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 2387–2402. [Google Scholar] [CrossRef]
  35. Jiang, J.; Ma, J.; Wang, Z.; Chen, C.; Liu, X. Hyperspectral image classification in the presence of noisy labels. IEEE Trans. Geosci. Remote Sens. 2018, 57, 851–865. [Google Scholar] [CrossRef]
  36. Barz, B.; Denzler, J. Deep learning on small datasets without pre-training using cosine loss. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Snowmass, CO, USA, 1–5 March 2020; pp. 1371–1380. [Google Scholar]
  37. Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar]
  38. Ranasinghe, K.; Naseer, M.; Hayat, M.; Khan, S.; Khan, F.S. Orthogonal projection loss. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 12333–12343. [Google Scholar]
  39. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
  40. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 10012–10022. [Google Scholar]
  41. Wang, G.; Zhang, N.; Liu, W.; Chen, H.; Xie, Y. MFST: A Multi-Level Fusion Network for Remote Sensing Scene Classification. IEEE Geosci. Remote Sens. Lett. 2022, 19, 6516005. [Google Scholar] [CrossRef]
  42. Yang, Y.; Newsam, S. Bag-of-visual-words and spatial extensions for land-use classification. In Proceedings of the 18th SIGSPATIAL International Conference on Advances in Geographic Information Systems, San Jose, CA, USA, 2–5 November 2010; pp. 270–279. [Google Scholar]
  43. Ross, T.D.; Worrell, S.W.; Velten, V.J.; Mossing, J.C.; Bryant, M.L. Standard SAR ATR evaluation experiments using the MSTAR public release data set. In Proceedings of the Algorithms for Synthetic Aperture Radar Imagery V. SPIE, Orlando, FL, USA, 13–17 April 1998; Volume 3370, pp. 566–573. [Google Scholar]
  44. Brigato, L.; Barz, B.; Iocchi, L.; Denzler, J. Image classification with small datasets: Overview and benchmark. IEEE Access 2022, 10, 49233–49250. [Google Scholar] [CrossRef]
  45. Huang, L.; Wang, F.; Zhang, Y.; Xu, Q. Fine-grained ship classification by combining CNN and swin transformer. Remote Sens. 2022, 14, 3087. [Google Scholar] [CrossRef]
  46. Zhang, J.; Zhao, H.; Li, J. TRS: Transformers for remote sensing scene classification. Remote Sens. 2021, 13, 4143. [Google Scholar] [CrossRef]
  47. Chen, P.; Zhou, H.; Li, Y.; Liu, B.; Liu, P. Shape similarity intersection-over-union loss hybrid model for detection of synthetic aperture radar small ship objects in complex scenes. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 9518–9529. [Google Scholar] [CrossRef]
  48. Li, B.; Guo, Y.; Yang, J.; Wang, L.; Wang, Y.; An, W. Gated recurrent multiattention network for VHR remote sensing image classification. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5606113. [Google Scholar] [CrossRef]
  49. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  50. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
  51. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
Figure 1. Example of images from different datasets. NWPU-RESISC45 (ad); UCM (eh); MSTAR (il).
Figure 1. Example of images from different datasets. NWPU-RESISC45 (ad); UCM (eh); MSTAR (il).
Sensors 24 03912 g001
Figure 2. Illustration of the distinction between (a) previous works and (b) our work.
Figure 2. Illustration of the distinction between (a) previous works and (b) our work.
Sensors 24 03912 g002
Figure 3. The overarching framework of CAF. The original image and edge image (with the same depth) are fused using the CAF module, ensuring that multi-layer features maintain their dimensions through upsampling. Subsequently, the resulting fusion features are fed as input into the Swin-transformer. The details of CAF and the multi-scale channel attention module (MSCAM) are also presented.
Figure 3. The overarching framework of CAF. The original image and edge image (with the same depth) are fused using the CAF module, ensuring that multi-layer features maintain their dimensions through upsampling. Subsequently, the resulting fusion features are fed as input into the Swin-transformer. The details of CAF and the multi-scale channel attention module (MSCAM) are also presented.
Sensors 24 03912 g003
Figure 4. The proposed CAF. By employing the attention-based feature fusion approach, the weights λ_X and λ_Y are computed for integrating the original image and edge image. In comparison to the addition and concatenation methods, this technique enables enhanced focus on crucial regions and features within the image, thereby augmenting its performance and robustness.
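As a minimal sketch of the attention-weighted fusion described in this caption, the snippet below blends the two inputs with complementary weights λ_X and λ_Y produced by a channel-attention gate. The gate structure shown here follows a common MS-CAM-style design and is an assumption for illustration, not the exact CAF implementation.

```python
import torch
import torch.nn as nn

class ChannelGate(nn.Module):
    """Produce a per-channel weight map in [0, 1] (MS-CAM-style assumption)."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        mid = max(channels // reduction, 1)
        # Local context: pointwise convolutions over the full feature map.
        self.local = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
        )
        # Global context: the same bottleneck applied after global average pooling.
        self.global_ = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, mid, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, kernel_size=1),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.local(z) + self.global_(z))

class AttentionFusion(nn.Module):
    """Fuse original-image features X and edge-image features Y with complementary weights."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = ChannelGate(channels)

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        lam_x = self.gate(x + y)   # lambda_X, computed from the combined inputs
        lam_y = 1.0 - lam_x        # lambda_Y, the complementary weight
        return lam_x * x + lam_y * y
```

In contrast to plain addition (x + y) or concatenation, the learned weights let the fusion emphasize the more informative of the two inputs at each location, which is the behavior the caption attributes to CAF.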
Figure 5. The confusion matrices of UCM were computed using two different methods, with a training ratio of 80%.
Figure 6. The confusion matrices of MSTAR were computed using two different methods, with a training ratio of 80%.
Figure 7. The assessment metrics of NWPU-RESISC45 employing diverse methodologies.
Figure 8. The assessment metrics of MSTAR employing diverse methodologies.
Figure 9. The assessment metrics of UCM employing diverse methodologies.
Figure 10. The sequence (a–f) shows, for the UCM dataset, the original image, the heat map produced by the model without the CAF module (overlaid on the original image), and the heat map produced by the model with the CAF module integrated (also overlaid on the original image).
Figure 11. The sequence (a–f) shows, for the NWPU-RESISC45 dataset, the original image, the heat map produced by the model without the CAF module (overlaid on the original image), and the heat map produced by the model with the CAF module integrated (also overlaid on the original image).
Figure 12. The sequence (a–f) shows, for the MSTAR dataset, the original image, the heat map produced by the model without the CAF module (overlaid on the original image), and the heat map produced by the model with the CAF module integrated (also overlaid on the original image).
Table 1. Dataset description.
Dataset | Remote Sensing Imaging Type | Number of Classes | Number per Class | Number of Instances | Image Size | Pixel Resolution | Year
NWPU-RESISC45 | Very High-Resolution | 45 | 700 | 31,500 | 256 × 256 | 0.2–30 m | 2017
UCM | Very High-Resolution | 21 | 100 | 2100 | 256 × 256 | 0.3 m | 2010
MSTAR | Synthetic Aperture Radar | 8 | 428–573 | 5172 | 368 × 368 | 0.3 m | 1996
Table 2. Comparison of classification accuracy using the cross-entropy loss function and a novel loss function across three datasets.
Method | NWPU-RESISC45 | UCM | MSTAR
cross-entropy | 90.63% | 95.71% | 93.97%
COFE-Loss | 91.20% | 96.67% | 94.13%
Table 3. Comparison of classification accuracy using the cross-entropy loss function across three datasets.
Method | NWPU-RESISC45 | UCM | MSTAR
cross-entropy | 90.63% | 95.71% | 93.97%
cross-entropy + CAF | 92.39% | 96.24% | 99.26%
cross-entropy + 2 × CAF | 92.98% | 97.24% | 99.15%
Table 4. Comparison of classification accuracy using a novel loss function across three datasets.
Method | NWPU-RESISC45 | UCM | MSTAR
COFE-Loss | 91.20% | 96.67% | 94.13%
COFE-Loss + CAF | 92.65% | 96.24% | 99.63%
COFE-Loss + 2 × CAF | 94.12% | 97.49% | 97.28%
Table 5. Numeric results of all metrics for all datasets.
Dataset | Method | Accuracy (%) | Precision (%) | Recall (%) | F1 (%) | Kappa (%)
NWPU | cross-entropy | 90.63 | 90.96 | 90.63 | 90.67 | 90.42
NWPU | COFE-Loss | 91.20 | 91.59 | 91.20 | 91.25 | 91.00
NWPU | cross-entropy + CAF | 92.39 | 92.69 | 92.39 | 92.41 | 92.22
NWPU | COFE-Loss + CAF | 92.65 | 92.80 | 92.65 | 92.63 | 92.48
NWPU | cross-entropy + 2 × CAF | 92.98 | 93.23 | 92.98 | 92.96 | 92.82
NWPU | COFE-Loss + 2 × CAF | 94.12 | 94.30 | 94.12 | 94.13 | 93.98
MSTAR | cross-entropy | 93.97 | 94.09 | 94.07 | 94.07 | 93.06
MSTAR | COFE-Loss | 94.13 | 94.67 | 93.50 | 93.98 | 93.24
MSTAR | cross-entropy + CAF | 99.26 | 98.97 | 99.33 | 99.14 | 99.15
MSTAR | COFE-Loss + CAF | 99.63 | 99.44 | 99.41 | 99.42 | 99.27
MSTAR | cross-entropy + 2 × CAF | 99.15 | 99.22 | 99.26 | 99.24 | 99.03
MSTAR | COFE-Loss + 2 × CAF | 97.28 | 97.56 | 97.71 | 97.57 | 97.44
UCM | cross-entropy | 95.71 | 95.86 | 95.71 | 95.70 | 95.50
UCM | COFE-Loss | 96.67 | 97.08 | 96.67 | 96.70 | 96.50
UCM | cross-entropy + CAF | 96.24 | 96.37 | 96.24 | 96.17 | 96.05
UCM | COFE-Loss + CAF | 96.24 | 96.52 | 96.24 | 96.21 | 96.05
UCM | cross-entropy + 2 × CAF | 97.24 | 97.32 | 97.24 | 97.22 | 97.11
UCM | COFE-Loss + 2 × CAF | 97.49 | 97.60 | 97.49 | 97.48 | 97.37
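The metrics in Table 5 are standard multi-class classification scores. As a hedged sketch, they could be reproduced from predicted and true labels with scikit-learn as below; macro averaging is an assumption about how the per-class precision, recall, and F1 scores were aggregated.

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, cohen_kappa_score)

def summarize(y_true, y_pred):
    """Return the five metrics reported in Table 5 as percentages."""
    return {
        "Accuracy":  100 * accuracy_score(y_true, y_pred),
        "Precision": 100 * precision_score(y_true, y_pred, average="macro"),
        "Recall":    100 * recall_score(y_true, y_pred, average="macro"),
        "F1":        100 * f1_score(y_true, y_pred, average="macro"),
        "Kappa":     100 * cohen_kappa_score(y_true, y_pred),
    }
```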
Table 6. Classification accuracy of different methods on the three datasets.
Method | NWPU-RESISC45 | UCM | MSTAR
ResNet-50 [49] | 94.96% | 97.35% | 53.80%
DenseNet-121 [50] | 94.90% | 96.82% | 64.56%
VGG-11 [51] | 93.56% | 79.36% | 58.92%
EMTCAL [7] | 92.31% | 96.25% | 99.41%
Our method | 95.04% | 97.49% | 99.63%
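For context, the CNN baselines in Table 6 correspond to standard torchvision backbones. A minimal sketch of how they might be instantiated and re-headed for an N-class remote sensing dataset is shown below; the use of pretrained ImageNet weights and the recent torchvision weights API are assumptions, not the authors' exact training protocol.

```python
import torch.nn as nn
from torchvision import models

def build_baseline(name: str, num_classes: int) -> nn.Module:
    """Instantiate a pretrained backbone and replace its classifier head."""
    if name == "resnet50":
        model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        model.fc = nn.Linear(model.fc.in_features, num_classes)
    elif name == "densenet121":
        model = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
        model.classifier = nn.Linear(model.classifier.in_features, num_classes)
    elif name == "vgg11":
        model = models.vgg11(weights=models.VGG11_Weights.DEFAULT)
        model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)
    else:
        raise ValueError(f"unknown baseline: {name}")
    return model
```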
Table 7. Results of the ablation experiment.
Method | NWPU-RESISC45 | UCM | MSTAR
Swin-transformer | 90.63% | 95.71% | 93.97%
add + Swin-transformer | 90.28% | 96.24% | 98.15%
CAF + Swin-transformer | 92.39% | 96.24% | 99.26%
2 × CAF + Swin-transformer | 92.98% | 97.24% | 99.15%
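Each row of the ablation pairs a fusion strategy with a Swin-transformer classifier. As an illustrative assumption (the paper does not specify the implementation used here), such a backbone could be created with the timm library and fed the fused features:

```python
import timm
import torch

# Illustrative only: a Swin-Tiny classifier for a 45-class dataset such as NWPU-RESISC45.
swin = timm.create_model("swin_tiny_patch4_window7_224",
                         pretrained=True, num_classes=45)

fused = torch.randn(1, 3, 224, 224)   # e.g., the CAF output, resized to the backbone's input size
logits = swin(fused)                  # shape: (1, 45)
```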