CLIP-Driven Few-Shot Species-Recognition Method for Integrating Geographic Information
"> Figure 1
<p>The overall framework of SG-CLIP for few-shot species recognition. It contains three paths for text, image, and geographic information, respectively. The geographic feature is obtained by GFEM. The parameters of GFEM and IGFFM are learnable.</p> "> Figure 2
<p>The structure of GFEM for geographic feature extraction. The dashed box is the structure of the FCResLayer.</p> "> Figure 3
<p>The structure of IGFFM for image and geographic feature fusion, where Fc denotes the fully connected layer, ReLU denotes the ReLU activation function, and LayerNorm denotes layer normalization. DFB denotes the dynamic fusion block. DFB is used recursively, where <span class="html-italic">N</span> is the number of DFB modules.</p> "> Figure 4
<p>Heatmaps of the geolocation distribution. (<b>a</b>) Mammals. (<b>b</b>) Reptiles. (<b>c</b>) Amphibians. Different colors indicate the number of species at different locations. Green indicates relatively little data and red indicates a large number.</p> "> Figure 5
<p>Performance comparison of different methods with different training samples on different datasets. (<b>a</b>) Mammals. (<b>b</b>) Reptiles. (<b>c</b>) Amphibians.</p> "> Figure 6
<p>Comparison of few-shot species recognition accuracy on different datasets under different versions of CLIP. (<b>a</b>) Mammals. (<b>b</b>) Reptiles. (<b>c</b>) Amphibians.</p> "> Figure 7
<p>Visualization of t-SNE representations under different methods. (<b>a</b>) Zero-shot CLIP under ViT-B/32. (<b>b</b>) Zero-shot CLIP under ViT-L/14. (<b>c</b>) SG-CLIP under ViT-B/32. (<b>d</b>) SG-CLIP under ViT-L/14.</p> ">
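Figures 1–3 outline the SG-CLIP pipeline: frozen CLIP encoders provide text and image features, GFEM maps longitude/latitude to a geographic feature through stacked FCResLayers, and IGFFM fuses the image and geographic features through *N* dynamic fusion blocks before CLIP-style similarity scoring. The following is a minimal PyTorch sketch of that pipeline under stated assumptions: the layer widths, the sin/cos coordinate wrapping, the internals of FCResLayer and the DFB, and the residual blend with ratio α are guesses based only on the figure captions and the ablation tables, not the authors' released implementation; CLIP features are assumed to be pre-computed and passed in as tensors.

```python
# Minimal sketch of the SG-CLIP pipeline in Figures 1-3. Layer widths, the
# sin/cos coordinate wrapping, the FCResLayer and DFB internals, and the
# residual blend with ratio alpha are assumptions, not the released code.
import math

import torch
import torch.nn as nn
import torch.nn.functional as F


class FCResLayer(nn.Module):
    """Fully connected residual layer (the dashed box in Figure 2); assumed form."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.fc1 = nn.Linear(dim, dim)
        self.fc2 = nn.Linear(dim, dim)

    def forward(self, x):
        return x + F.relu(self.fc2(F.relu(self.fc1(x))))


class GFEM(nn.Module):
    """Geographic Feature Extraction Module: (longitude, latitude) -> geographic feature."""

    def __init__(self, embed_dim: int = 512, hidden: int = 256, n_res: int = 4):
        super().__init__()
        self.stem = nn.Linear(4, hidden)      # 4-D input from the sin/cos wrapping below
        self.res_layers = nn.Sequential(*[FCResLayer(hidden) for _ in range(n_res)])
        self.proj = nn.Linear(hidden, embed_dim)

    def forward(self, lonlat):                # lonlat: (B, 2) in degrees
        lon = lonlat[:, 0] / 180.0            # normalize to [-1, 1]
        lat = lonlat[:, 1] / 90.0
        enc = torch.stack([torch.sin(math.pi * lon), torch.cos(math.pi * lon),
                           torch.sin(math.pi * lat), torch.cos(math.pi * lat)], dim=1)
        return self.proj(self.res_layers(F.relu(self.stem(enc))))


class DynamicFusionBlock(nn.Module):
    """One DFB from Figure 3: Fc -> ReLU -> LayerNorm over concatenated features."""

    def __init__(self, embed_dim: int = 512):
        super().__init__()
        self.fc = nn.Linear(2 * embed_dim, embed_dim)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, img_feat, geo_feat):
        return self.norm(F.relu(self.fc(torch.cat([img_feat, geo_feat], dim=-1))))


class IGFFM(nn.Module):
    """Image and Geographic Feature Fusion Module: N DFBs applied recursively,
    then a residual blend with ratio alpha (assumed, CLIP-Adapter style)."""

    def __init__(self, embed_dim: int = 512, n_blocks: int = 2, alpha: float = 0.8):
        super().__init__()
        self.blocks = nn.ModuleList([DynamicFusionBlock(embed_dim) for _ in range(n_blocks)])
        self.alpha = alpha

    def forward(self, img_feat, geo_feat):
        fused = img_feat
        for block in self.blocks:             # recursive use of the DFB
            fused = block(fused, geo_feat)
        return self.alpha * fused + (1.0 - self.alpha) * img_feat


def predict(img_feat, lonlat, text_feats, gfem, igffm, logit_scale=100.0):
    """Class probabilities from cosine similarity between the geography-enhanced
    image feature and the CLIP text features of the class prompts."""
    fused = F.normalize(igffm(img_feat, gfem(lonlat)), dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)
    return (logit_scale * fused @ text_feats.t()).softmax(dim=-1)


# Toy usage with random tensors standing in for pre-computed CLIP features.
if __name__ == "__main__":
    B, C, D = 8, 246, 512                     # batch, classes (mammals), ViT-B/32 dim
    gfem, igffm = GFEM(D), IGFFM(D, n_blocks=2, alpha=0.8)
    probs = predict(torch.randn(B, D), torch.rand(B, 2) * 90.0, torch.randn(C, D),
                    gfem, igffm)
    print(probs.shape)                        # torch.Size([8, 246])
```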
Abstract
1. Introduction
- We proposed SG-CLIP, which integrates geographic information about species to improve the performance of few-shot species recognition. To the best of our knowledge, this is the first work to exploit geographic information for few-shot species recognition with large vision-language models.
- We introduced the geographic feature extraction module to better process geographic location information. We also designed the image and geographic feature fusion module to enhance the image representation ability.
- We performed extensive experiments with SG-CLIP on the iNaturalist 2021 dataset to demonstrate its effectiveness and generalization. Under ViT-B/32 and the 16-shot training setup, SG-CLIP improves recognition accuracy over Linear probe CLIP by 15.12% on mammals, 17.51% on reptiles, and 17.65% on amphibians.
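The 16-shot setup above follows the usual few-shot protocol: k labeled images are sampled per species for training and top-1 accuracy is computed on the held-out test split. The sketch below illustrates that protocol under stated assumptions; the function names and the (image, location, label) record layout are illustrative, not taken from the paper's code.

```python
# Illustrative k-shot sampling and top-1 accuracy computation (assumed
# protocol; record layout and names are not from the paper's code).
import random
from collections import defaultdict


def sample_k_shot(records, k, seed=0):
    """records: list of (image_path, (lon, lat), label); returns k records per class."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for rec in records:
        by_class[rec[2]].append(rec)
    subset = []
    for label, items in by_class.items():
        subset.extend(rng.sample(items, min(k, len(items))))
    return subset


def top1_accuracy(predicted_labels, true_labels):
    """Percentage of test images whose top-ranked class matches the ground truth."""
    correct = sum(int(p == y) for p, y in zip(predicted_labels, true_labels))
    return 100.0 * correct / max(len(true_labels), 1)
```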
2. Methods
2.1. CLIP-Driven Image and Text Feature Extraction
2.2. Geographic Feature Extraction Module (GFEM)
2.3. Image and Geographic Feature Fusion Module (IGFFM)
2.4. Species Prediction Probability Calculation
3. Experiments
3.1. Datasets
3.2. Experimental Setup, Implementation Details, and Evaluation Metrics
4. Results
4.1. Performance of Different Few-Shot Learning Methods on ViT-B/32
4.2. Performance of Few-Shot Species Recognition with Different Versions of CLIP
4.3. Time Efficiency Analysis
4.4. Ablation Studies
4.5. Visualization Analysis
4.6. Case Studies
5. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
| Dataset | Number of Species | Images in Training Set | Images in Testing Set |
|---|---|---|---|
| Mammals | 246 | 3936 | 2460 |
| Reptiles | 313 | 5008 | 3130 |
| Amphibians | 170 | 2720 | 1700 |
| Method | Data Usage (shots per class) | Mammals | Reptiles | Amphibians |
|---|---|---|---|---|
| Zero-shot CLIP | 0 | 10.65% | 3.93% | 3.65% |
| Linear probe CLIP | 1 | 13.94% | 4.82% | 4.77% |
| | 2 | 16.55% | 6.61% | 7.29% |
| | 4 | 22.85% | 10.13% | 9.35% |
| | 8 | 31.26% | 14.86% | 14.65% |
| | 16 | 38.70% | 19.87% | 18.41% |
| Tip-Adapter | 1 | 16.14% | 5.97% | 5.53% |
| | 2 | 18.66% | 7.8% | 6.82% |
| | 4 | 21.75% | 9.52% | 8.24% |
| | 8 | 26.75% | 10.8% | 10.82% |
| | 16 | 30.24% | 13.51% | 13.71% |
| Tip-Adapter-F | 1 | 18.29% | 7.09% | 6.24% |
| | 2 | 22.89% | 9.94% | 9.35% |
| | 4 | 26.59% | 13.29% | 12.57% |
| | 8 | 34.02% | 17.67% | 15.88% |
| | 16 | 41.26% | 20.77% | 20.53% |
| SG-CLIP | 1 | 15.53% | 8.85% | 10.06% |
| | 2 | 22.97% | 12.36% | 12.94% |
| | 4 | 31.71% | 19.20% | 18.65% |
| | 8 | 41.54% | 27.53% | 27.18% |
| | 16 | 53.82% | 37.38% | 36.06% |
| Image Encoder Version of CLIP | Dataset | Method | 0-shot | 1-shot | 2-shot | 4-shot | 8-shot | 16-shot |
|---|---|---|---|---|---|---|---|---|
| ViT-B/32 | Mammals | Zero-shot CLIP | 10.65% | - | - | - | - | - |
| | | Linear probe CLIP | - | 13.94% | 16.55% | 22.85% | 31.26% | 38.70% |
| | | Tip-Adapter | - | 16.14% | 18.66% | 21.75% | 26.75% | 30.24% |
| | | Tip-Adapter-F | - | 18.29% | 22.89% | 26.59% | 34.02% | 41.26% |
| | | SG-CLIP | - | 15.53% | 22.97% | 31.71% | 41.54% | 53.82% |
| | Reptiles | Zero-shot CLIP | 3.93% | - | - | - | - | - |
| | | Linear probe CLIP | - | 4.82% | 6.61% | 10.13% | 14.86% | 19.87% |
| | | Tip-Adapter | - | 5.97% | 7.8% | 9.52% | 10.8% | 13.51% |
| | | Tip-Adapter-F | - | 7.09% | 9.94% | 13.29% | 17.67% | 20.77% |
| | | SG-CLIP | - | 8.85% | 12.36% | 19.20% | 27.54% | 37.38% |
| | Amphibians | Zero-shot CLIP | 3.65% | - | - | - | - | - |
| | | Linear probe CLIP | - | 4.77% | 7.29% | 9.35% | 14.65% | 18.41% |
| | | Tip-Adapter | - | 5.53% | 6.82% | 8.24% | 10.82% | 13.71% |
| | | Tip-Adapter-F | - | 6.24% | 9.35% | 12.57% | 15.88% | 20.53% |
| | | SG-CLIP | - | 10.06% | 12.94% | 18.65% | 27.18% | 36.06% |
| ViT-L/14 | Mammals | Zero-shot CLIP | 18.05% | - | - | - | - | - |
| | | Linear probe CLIP | - | 24.84% | 31.67% | 41.56% | 50% | 57.85% |
| | | Tip-Adapter | - | 28.9% | 31.18% | 36.5% | 43.58% | 48.5% |
| | | Tip-Adapter-F | - | 31.3% | 38.17% | 45.41% | 52.48% | 58.5% |
| | | SG-CLIP | - | 25.16% | 36.18% | 44.96% | 55.61% | 64.07% |
| | Reptiles | Zero-shot CLIP | 5.37% | - | - | - | - | - |
| | | Linear probe CLIP | - | 9.2% | 14.25% | 20.38% | 29.94% | 35.46% |
| | | Tip-Adapter | - | 11.18% | 13.48% | 18.56% | 21.95% | 26.29% |
| | | Tip-Adapter-F | - | 13.23% | 18.53% | 25.97% | 30.45% | 37.22% |
| | | SG-CLIP | - | 14.72% | 21.02% | 30.03% | 39.81% | 49.23% |
| | Amphibians | Zero-shot CLIP | 4.71% | - | - | - | - | - |
| | | Linear probe CLIP | - | 6.82% | 9.53% | 16.06% | 21.35% | 27% |
| | | Tip-Adapter | - | 8.94% | 9.88% | 11.53% | 15.24% | 18.53% |
| | | Tip-Adapter-F | - | 8.29% | 13.53% | 16.59% | 23.41% | 28.18% |
| | | SG-CLIP | - | 13.24% | 18.24% | 24.65% | 33.18% | 43.82% |
| Method | Training Time (Mammals) | Training Time (Reptiles) | Training Time (Amphibians) |
|---|---|---|---|
| Zero-shot CLIP | 0 | 0 | 0 |
| Linear probe CLIP | 51.64 s | 43.79 s | 16.57 s |
| Tip-Adapter | 0 | 0 | 0 |
| Tip-Adapter-F | 6.57 min | 8.20 min | 4.7 min |
| SG-CLIP | 3.31 h | 4.49 h | 2.12 h |
| Residual Ratio α | Mammals | Reptiles | Amphibians |
|---|---|---|---|
| 0.2 | 49.07% | 34.09% | 30.47% |
| 0.4 | 51.67% | 36.23% | 32.47% |
| 0.6 | 52.07% | 33.76% | 33.46% |
| 0.8 | 53.82% | 37.12% | 33.71% |
| 1.0 | 52.64% | 37.03% | 33.76% |
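The table above sweeps the residual ratio α without restating its role. A plausible form of the residual connection, assuming the IGFFM output is blended with the original CLIP image feature in the CLIP-Adapter style (the symbols f_v and f_fuse are notational assumptions, not taken from the paper), is:

```latex
% Assumed residual blend: f_v is the CLIP image feature, f_{fuse} is the IGFFM
% output, and \alpha is the residual ratio swept in the table above.
f = \alpha \, f_{\mathrm{fuse}} + (1 - \alpha) \, f_v
```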
| Number of DFB Modules (*N*) | Mammals | Reptiles | Amphibians |
|---|---|---|---|
| 1 | 50.94% | 34.22% | 33.59% |
| 2 | 53.82% | 37.12% | 33.76% |
| 3 | 52.36% | 37.38% | 33.65% |
| 4 | 52.73% | 36.68% | 36.06% |
| 5 | 51.46% | 36.52% | 35% |
| Geographic Input | Mammals | Reptiles | Amphibians |
|---|---|---|---|
| w/ longitude only | 57.76% | 41.21% | 33.82% |
| w/ latitude only | 55.89% | 36.58% | 27.76% |
| w/ (latitude, longitude) | 64.07% | 49.23% | 43.82% |
| Method | Data Usage (shots per class) | Top-1 Accuracy | Top-5 Accuracy |
|---|---|---|---|
| Zero-shot CLIP | 0 | 37% | 63% |
| Tip-Adapter | 1 | 46% | 71% |
| | 2 | 54.5% | 78.5% |
| | 4 | 62.5% | 87% |
| | 8 | 67% | 93% |
| | 16 | 74% | 95.5% |
| Tip-Adapter-F | 1 | 44.5% | 76% |
| | 2 | 52.5% | 77% |
| | 4 | 67.5% | 92% |
| | 8 | 72.5% | 96.5% |
| | 16 | 75.5% | 97.5% |
| SG-CLIP | 1 | 52.5% | 85% |
| | 2 | 60% | 88.5% |
| | 4 | 72% | 92% |
| | 8 | 80.5% | 96% |
| | 16 | 89% | 97.5% |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Liu, L.; Yang, L.; Yang, F.; Chen, F.; Xu, F. CLIP-Driven Few-Shot Species-Recognition Method for Integrating Geographic Information. Remote Sens. 2024, 16, 2238. https://doi.org/10.3390/rs16122238