
 
 

Deep Learning Meets Remote Sensing for Earth Observation and Monitoring (Second Edition)

A special issue of Remote Sensing (ISSN 2072-4292).

Deadline for manuscript submissions: 28 June 2025

Special Issue Editors


Guest Editor
Department of Computer Science, National University of Technology, Islamabad, Pakistan
Interests: computer vision; remote sensing; deep learning; embedded system design

Guest Editor
Faculty of Science and Technology (REALTEK), Norwegian University of Life Sciences, Ås, Norway
Interests: computer vision; remote sensing; signal processing

Guest Editor
Department of Computer Science, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
Interests: image and video analysis; remote sensing; deep learning; pattern recognition; medical imaging

Special Issue Information

Dear Colleagues,

Following the success of the first edition, we are pleased to announce the second edition of our Special Issue.

Remote sensing technologies enable researchers to understand, analyze, and monitor activity on Earth from afar. With current platforms such as satellites and drones, large amounts of data, in the form of high-resolution images, can be acquired easily. This opens up new paradigms and research directions for the remote sensing community, with applications in diverse fields such as smart agriculture, traffic monitoring, disaster management, and urban planning. For Earth monitoring, visual pattern recognition is a key pre-processing step: automated recognition of patterns using computer vision and deep learning techniques provides crucial information for tracking changes across the Earth's surface. Deep learning has achieved tremendous success in object classification, detection, and segmentation in natural images; however, these models struggle to identify patterns in remote sensing images due to complex backgrounds, arbitrary viewpoints, and large variations in object size.

This Special Issue invites authors to submit their original articles regarding the design and development of novel deep learning models to identify different visual patterns to support Earth monitoring. In addition, we would like to invite the submission of research related to remote sensing-based disaster assessment and management support systems. Lastly, we welcome comprehensive review articles that focus on analyzing the performances of state-of-the-art deep learning models in remote sensing imagery.

Submissions may cover a wide range of topics, including the following:

  • Deep learning models for the monitoring of Earth;
  • Deep learning models for the monitoring of crops using remote sensing data;
  • Flood segmentation and natural hazards prediction;
  • Road and building footprint extraction for urban growth and planning;
  • Remote sensing for smart farming for sustainable agriculture.

Dr. Sultan Daud Khan
Dr. Habib Ullah
Dr. Mohib Ullah
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • satellite image processing
  • multi-scale feature extraction
  • deep learning
  • context understanding
  • data fusion

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Related Special Issue

Published Papers (1 paper)


Research

20 pages, 22673 KiB  
Article
Enhanced YOLOv8-Based Model with Context Enrichment Module for Crowd Counting in Complex Drone Imagery
by Abdullah N. Alhawsawi, Sultan Daud Khan and Faizan Ur Rehman
Remote Sens. 2024, 16(22), 4175; https://doi.org/10.3390/rs16224175 - 8 Nov 2024
Cited by 1 | Viewed by 1358
Abstract
Crowd counting in aerial images presents unique challenges due to varying altitudes, angles, and cluttered backgrounds. Additionally, the small size of targets, often occupying only a few pixels in high-resolution images, further complicates the problem. Current crowd counting models struggle in these complex scenarios, leading to inaccurate counts, which are crucial for crowd management. Moreover, these regression-based models only provide the total count without indicating the location or distribution of people within the environment, limiting their practical utility. While YOLOv8 has achieved significant success in detecting small targets within aerial imagery, it faces challenges when directly applied to crowd counting tasks in such contexts. To overcome these challenges, we propose an improved framework based on YOLOv8, incorporating a context enrichment module (CEM) to capture multiscale contextual information. This enhancement improves the model’s ability to detect and localize tiny targets in complex aerial images. We assess the effectiveness of the proposed framework on the challenging VisDrone-CC2021 dataset, and our experimental results demonstrate the effectiveness of this approach.
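The abstract (and Figure 6 of the paper) refers to a pipeline that converts dot annotations into bounding boxes, since crowd datasets typically label each person with a single head point while detectors such as YOLOv8 require boxes. The paper does not reproduce the exact procedure here, but a minimal sketch of the general idea, assuming a fixed box size centred on each dot (in practice the size would be tuned per dataset and flight altitude), could look like this:

```python
def dots_to_boxes(dots, img_w, img_h, box_size=24):
    """Convert point (dot) annotations into square bounding boxes.

    Each dot (x, y) marks a person's head; a fixed-size box is centred
    on it and clipped to the image bounds. The fixed `box_size` is an
    illustrative assumption, not the paper's actual procedure.
    """
    half = box_size // 2
    boxes = []
    for x, y in dots:
        x1 = max(0, x - half)          # clip left edge
        y1 = max(0, y - half)          # clip top edge
        x2 = min(img_w, x + half)      # clip right edge
        y2 = min(img_h, y + half)      # clip bottom edge
        boxes.append((x1, y1, x2, y2))
    return boxes

# Example: two dots in a 640x480 frame; the first box is clipped
# at the image border, the second fits fully inside.
boxes = dots_to_boxes([(5, 5), (100, 100)], 640, 480, box_size=24)
print(boxes)  # [(0, 0, 17, 17), (88, 88, 112, 112)]
```

With such a conversion in place, the point-annotated crowd data can be fed to any standard box-based detector, and counting reduces to counting detections.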
Figure 1: Detailed pipeline of the proposed framework for small object detection (zoomed in for best view).
Figure 2: Detailed architecture of the C2F module.
Figure 3: Detailed architecture of the CEM.
Figure 4: Detailed architectures of SPP and SPPF.
Figure 5: Sample frames from diverse scenes, representing comprehensive coverage of different environments and conditions.
Figure 6: Pipeline of converting dot annotations into bounding boxes.
Figure 7: Visualization of the detection results by the proposed framework and other baseline methods in different scenarios. The first column shows the results of YOLOv8l, the second column YOLOv8x, and the third column the proposed model. Blue bounding boxes represent correct detections; yellow bounding boxes represent false detections. (Best viewed zoomed in.)