Search Results (9)

Search Parameters:
Keywords = virtual data cube

24 pages, 9028 KiB  
Article
Multi-Shot Simultaneous Deghosting for Virtual-Shot Gathers via Integrated Sparse and Nuclear Norm Constraint Inversion
by Junming Zhang, Deli Wang, Bin Hu, Xiangbo Gong, Yifei Chen and Yang Zhang
Remote Sens. 2024, 16(12), 2075; https://doi.org/10.3390/rs16122075 - 7 Jun 2024
Viewed by 851
Abstract
Seismic interferometry is a key technology in geophysical exploration: it has achieved significant advances in constructing virtual seismic responses, overcoming the limitations of traditional exploration. However, non-physical reflections in virtual-shot gathers pose challenges for data processing and interpretation. This study focuses on deghosting in virtual-shot gather data processing. We propose a novel method that integrates sparse and nuclear norm constraint inversion for multi-shot simultaneous deghosting. First, a pseudo-3D data cube is created to enhance computational efficiency and lay the foundation for subsequent continuity regularization. An inversion framework is then constructed to improve deghosting precision and stability by combining sparse and nuclear norm constraints. Both synthetic and field examples demonstrate the superiority of our method, offering a new paradigm for virtual-shot gather data processing and representing a major advance in overcoming the inherent limitations of seismic interferometry.
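Joint sparse and nuclear norm constraints of the kind named in this abstract are typically enforced through their proximal operators: soft thresholding for the sparsity (l1) term and singular value thresholding for the nuclear norm (low-rank, lateral-continuity) term. A minimal sketch of these two building blocks — a generic illustration of the technique, not the authors' exact formulation:

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of the l1 norm: shrinks entries toward zero,
    promoting sparsity."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def svd_threshold(M, tau):
    """Proximal operator of the nuclear norm: singular value thresholding,
    which promotes low rank (e.g., continuity across shots in a
    frequency slice of the pseudo-3D cube)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(soft_threshold(s, tau)) @ Vt
```

An iterative solver would alternate these shrinkage steps with data-misfit gradient updates until convergence.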
Figure 1. A demonstration of seismic data before and after seismic interferometry: (a) Original data. (b) The corresponding virtual-shot gather.
Figure 2. A demonstration of different source data: (a) Active seismic data. (b) Passive seismic data.
Figure 3. Data rearrangement schematic: (a) The acquired original data (shown in common shot gathers). (b) The pseudo-3D data cube obtained by arranging the original data by shot number. (c) Frequency-domain pseudo-3D data cube. (d) Extraction of frequency slices from the frequency-domain data cube. (e) The extracted frequency slices.
Figure 4. Schematic diagram of virtual-shot data rearrangement: (a) Top view of the 3D data cube, illustrating the relative positions of the sources and receivers. (b) Top view of the rotated pseudo-3D data cube, where the red lines indicate data with the same offset. (c) Top view of the preprocessed results, with shaded areas indicating filled zero matrices.
Figure 5. Salt dome velocity model.
Figure 6. The 3D data cube of the original synthetic data: (a) Virtual-shot gathers of passive seismic data. (b) Virtual-shot gathers of active seismic data. The blue lines indicate the positions of the displayed slices within the data cube; the three slices shown are a time slice at 0.8 s, a common shot gather at shot number 135, and a common offset gather at an offset of 0 km.
Figure 7. The 3D data cube of the original field data. The blue lines indicate the positions of the displayed slices within the data cube.
Figure 8. The data domain deghosting result of passive seismic data: (a) The input data, which contain ghosts. (b) Radon-based inversion deghosting result. (c) Multi-shot simultaneous deghosting result. Yellow arrows mark locations where noise is introduced; blue lines indicate the positions of the displayed slices within the data cube.
Figure 9. The frequency domain deghosting result of passive seismic data: (a) The input data, which contain ghosts. (b) Radon-based inversion deghosting result. (c) Multi-shot simultaneous deghosting result. Red arrows mark locations of frequency-notch compensation.
Figure 10. The data domain deghosting result of active seismic data: (a) The input data, which contain ghosts. (b) Radon-based inversion deghosting result. (c) Multi-shot simultaneous deghosting result. Red arrows mark locations of the denoising effect; a yellow arrow marks a location where noise is introduced; blue lines indicate the positions of the displayed slices within the data cube.
Figure 11. The frequency domain deghosting result of active seismic data: (a) The input data, which contain ghosts. (b) Radon-based inversion deghosting result. (c) Multi-shot simultaneous deghosting result.
Figure 12. The data domain deghosting result of field data: (a) The input data, which contain ghosts. (b) Radon-based inversion deghosting result. (c) Multi-shot simultaneous deghosting result. Red arrows mark locations of the denoising effect; yellow arrows mark locations of waveform distortion; the red dotted box marks the region shown in the zoom view; blue lines indicate the positions of the displayed slices within the data cube.
Figure 13. Zoom views of the region marked by the red dotted box in the data domain deghosting result of field data: (a) The input data, which contain ghosts. (b) Radon-based inversion deghosting result. (c) Multi-shot simultaneous deghosting result. Yellow arrows mark the location of the damaged signal.
Figure 14. The frequency domain deghosting result of the field data: (a) The input data, which contain ghosts. (b) Radon-based inversion deghosting result. (c) Multi-shot simultaneous deghosting result.
4 pages, 15553 KiB  
Proceeding Paper
Three-Dimensional Visualization of Astronomy Data Using Virtual Reality
by Gilles Ferrand
Phys. Sci. Forum 2023, 8(1), 71; https://doi.org/10.3390/psf2023008071 - 5 Dec 2023
Viewed by 856
Abstract
Visualization is an essential part of research, both to explore one’s data and to communicate one’s findings with others. Many data products in astronomy come in the form of multi-dimensional cubes, and since our brains are tuned for recognition in a 3D world, we ought to display and manipulate these in 3D space. This is possible with virtual reality (VR) devices. Drawing from our experiments developing immersive and interactive 3D experiences from actual science data at the Astrophysical Big Bang Laboratory (ABBL), this paper gives an overview of the opportunities and challenges that are awaiting astrophysicists in the burgeoning VR space. It covers both software and hardware matters, as well as practical aspects for successful delivery to the public.
(This article belongs to the Proceedings of The 23rd International Workshop on Neutrinos from Accelerators)
Figure 1. Collage of photos taken at the ABBL booth during RIKEN Open Day 2019. The top left and top right show a snapshot of the evolution of the supernova remnant (volume-rendered); the top center panel shows the user interface to select iso-contours of elemental abundances in the supernova (meshes). Even though a flat display cannot convey the VR experience, having a monitor in the room is important so that everyone can have an idea of what is going on inside the headset. The bottom right photo illustrates the flow of the demo: on-boarding using an info sheet, the VR navigation per se, and the questions/feedback corner.
18 pages, 7976 KiB  
Article
Prototyping of Utilization Model for KOMPSAT-3/3A Analysis Ready Data Based on the Open Data Cube Platform in Multi-Cloud Computing Environment: A Case Study
by Kwangseob Kim and Kiwon Lee
Appl. Sci. 2023, 13(18), 10478; https://doi.org/10.3390/app131810478 - 20 Sep 2023
Cited by 1 | Viewed by 1227
Abstract
This study introduces a multi-cloud model that combines private and public cloud services for processing and managing satellite images. The multi-cloud service is established by incorporating private clouds within organizations and integrating them with external public cloud services to utilize the data. Private clouds can maintain data security within an organization or between organizations, while public clouds offer easy processing options for general users with access accounts. The private cloud service uses open-source OpenStack software to create virtual machines, allowing users to manage analysis ready data (ARD) of the Korea Multi-Purpose Satellite (KOMPSAT)-3/3A images simultaneously. The public cloud service, through Amazon Web Services (AWS), offers four services and uses the Open Data Cube (ODC) to manage data and provide web-based time-series visualization and processing. A system that handles large amounts of satellite imagery in a multi-cloud environment has benefits such as improved availability, cost savings through open source, and enhanced scalability. We present a prototyped utilization model, usable through the ODC user interface (UI), that applies the proposed multi-cloud model. The multi-cloud model of this study can be applied to constructing a country-scale data cube system that deals with large-scale satellite image data. It can also be applied to systems that need to be built with data tailored to a specific user’s needs at any institution.
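The private/public split described in the abstract can be pictured as a routing decision: restricted imagery stays on the in-house OpenStack cloud, while shareable products are served via the AWS-hosted ODC service. A hypothetical sketch of that decision — the `Scene` and `route_request` names are illustrative and not part of the ODC or AWS APIs:

```python
from dataclasses import dataclass

@dataclass
class Scene:
    scene_id: str
    restricted: bool  # internal-only data stays on the private cloud

def route_request(scene: Scene, user_is_internal: bool) -> str:
    """Return which cloud tier serves the scene.

    Restricted scenes are served only from the OpenStack private cloud,
    and only to internal users; everything else is handled by the
    AWS-hosted public ODC service."""
    if scene.restricted:
        return "private-openstack" if user_is_internal else "denied"
    return "public-aws-odc"
```

In the paper's architecture this logic would sit in front of the ODC indexing and visualization services, keeping the data-security guarantee of the private tier intact.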
Figure 1. Conceptual diagram of a multi-cloud service based on the ODC platform.
Figure 2. Microservice design of the multi-cloud environment for the integrated management and utilization of KOMPSAT based on ODC.
Figure 3. Sequence diagram for uploading KOMPSAT data to the private cloud and ODC indexing.
Figure 4. Sequence diagram for ODC indexing of KOMPSAT data in conjunction with public and private clouds.
Figure 5. Examples of the step processes on ODC and data sharing settings for private cloud KOMPSAT bundled data. The base map is OpenStreetMap [46] in Korean.
Figure 6. Results of synchronization of KOMPSAT-3/3A reflectance data on AWS S3.
Figure 7. Web search results for ODC KOMPSAT TOCR data indexing on the AWS cloud.
Figure 8. User interfaces for ODC data processing: (a) Areas; (b) Application type; (c) Satellite sensor type.
Figure 9. Processing results of registered data visualization, spectral indices using KOMPSAT-3A TOCR, and the downloading tab: (a) Web mapping of ingested data; (b) Interface for spectral indices function selection; (c) Interface for checking processing status; (d) Processing results of NDVI; (e) Processing results of EVI; (f) Processing results of NDWI; (g) Information on processing results in the result tab; (h) Downloading processing results in the output tab.
23 pages, 17391 KiB  
Article
Real-Time 3D Reconstruction Pipeline for Room-Scale, Immersive, Medical Teleconsultation
by Ulrich Eck, Michael Wechner, Frieder Pankratz, Kevin Yu, Marc Lazarovici and Nassir Navab
Appl. Sci. 2023, 13(18), 10199; https://doi.org/10.3390/app131810199 - 11 Sep 2023
Cited by 2 | Viewed by 1866
Abstract
Medical teleconsultation was among the initial use cases for early telepresence research projects since medical treatment often requires timely intervention by highly specialized experts. When remote medical experts support interventions, a holistic view of the surgical site can increase situation awareness and improve team communication. A possible solution is the concept of immersive telepresence, where remote users virtually join the operating theater, which is transmitted based on a real-time reconstruction of the local site. Enabled by the availability of RGB-D sensors and sufficient computing capability, it becomes possible to capture such a site in real time using multiple stationary sensors. The 3D reconstruction and simplification of textured surface meshes from the point clouds of a dynamic scene in real time is challenging and becomes infeasible for increasing capture volumes. This work presents a tightly integrated, stateless 3D reconstruction pipeline for dynamic, room-scale environments that generates simplified surface meshes from multiple RGB-D sensors in real time. Our algorithm operates directly on the fused, voxelized point cloud instead of populating signed-distance volumes per frame and using a marching cubes variant for surface reconstruction. We extend the formulation of the dual contouring algorithm to work for point cloud data stored in an octree and interleave a vertex-clustering-based simplification before extracting the surface geometry. Our 3D reconstruction pipeline can perform a live reconstruction of six incoming depth videos at their native frame rate of 30 frames per second, enabling the reconstruction of smooth movement. Arbitrarily complex scene changes are possible since we do not store persistent information between frames. In terms of mesh quality and hole filling, our method falls between direct mesh reconstruction and expensive global fitting of implicit functions.
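The voxelization stage described in the abstract — fusing the sensors' point clouds into one surface sample per occupied grid cell — can be sketched as a simple hash-grid average. This is an illustration of the general technique, not the authors' octree implementation:

```python
import numpy as np

def voxelize(points, voxel_size):
    """Cluster a point cloud into voxels: quantize coordinates to a grid
    and average the points falling into each occupied voxel, yielding one
    surface sample per voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    voxels = {}
    for key, p in zip(map(tuple, keys), points):
        acc = voxels.setdefault(key, [np.zeros(3), 0])
        acc[0] += p   # accumulate coordinate sum
        acc[1] += 1   # count points in this voxel
    return {k: s / n for k, (s, n) in voxels.items()}
```

The real pipeline stores these samples (plus reconstructed normals) in an octree so that flat regions can later be collapsed into larger nodes during simplification.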
Figure 1. Overview. Processing steps of the proposed 3D reconstruction pipeline from left to right: (a) fused point cloud of all depth cameras; (b) voxelization with reconstructed surface normals; (c) simplification based on vertex clustering of flat surfaces; (d) simplified mesh extracted with dual contouring for point clouds; (e) textured mesh with the smallest viewing-angle difference to the color camera.
Figure 2. Literature review of the relevant areas of real-time 3D reconstruction using point clouds as input.
Figure 3. Pipeline overview: data flow of the reconstruction system. After temporal filtering, an octree of all nonempty voxels is generated, then surface samples and normals are found for each voxel. Simplification decimates octree nodes on flat surfaces. Finally, mesh extraction from the octree is performed using an adapted version of dual contouring for point clouds, and a GPU-based finalization step performs quad triangulation, filters duplicate triangles, and removes unused vertices.
Figure 4. Neighborhood sizes: comparison of the neighborhood sizes and their consequences for 3D surface mesh reconstruction. The blue board is not aligned with the axes of the world coordinate system, causing many diagonal connections to be missed in (a) that are considered by our method in (b).
Figure 5. Mesh texturing: comparison between per-face and per-pixel selection of the optimal RGB-D camera. Per-face selection is inferior due to the simplified mesh topology.
Figure 6. Diagonal edgeProc subprocedures. Three examples of possible diagonalEdgeProc subprocedures: (a) shows 1 of 12 subprocedures in cellProc, (b) shows 1 of 8 subprocedures in faceProc, and (c) shows 1 of 2 recursions of diagonalEdgeProc on itself.
Figure 7. CornerProc cases. Three examples of possible cornerProc cases in cellProc: (a) 6 cases in total where 2 pairs of nodes share an edge, (b) 24 cases where just 1 of the nodes is not aligned with the 3 others, and (c) 2 situations where 2 nodes are criss-cross on different sides. In sum, this amounts to 32 cases.
Figure 8. CornerProc subprocedures. Three examples of cornerProc subprocedures: (a) shows 1 of 32 subprocedures in faceProc, (b) shows 1 of 18 subprocedures in faceProc, and (c) is the recursion for cornerProc.
Figure 9. Connectivity properties during mesh generation: (a) faulty internal mesh connections, which do not follow the underlying surface across a resolution border, and (b) in a uniform voxel grid.
Figure 10. Interior mesh generation causes. Two examples of layers of voxels being generated: (a) two close-by opposite surfaces, and (b) an unfortunate voxel-grid placement for the surface.
Figure 11. Mesh simplification example: close-up mesh of an unsimplified floor in one of our reconstruction frames with and without merging vertices, using a voxel size of 2 cm and PCA normal reconstruction with a neighborhood radius of 1 cm. The center of the octree is placed on the floor; thus, two layers of voxels are generated between the sign-changing split plane.
Figure 12. Conditions for simplification: visualization of the simplification conditions that prevent the generation of internal faces. When red child nodes are empty, the neighboring blue nodes must also be empty. In (a), the neighbor node shares only a single corner; (b) is one of three external neighbors with a shared edge that requires an additional empty internal node in red; (c) is one of three neighbors with a shared face. In total, queries for up to seven external neighbors must be performed in the worst case.
Figure 13. Effects of vertex clustering: close-up wireframe of a simplified mesh, with (blue) and without (red) filtering the nonmanifold vertices from simplification. The vertices of the red mesh are slightly pulled towards the mesh interior.
Figure 14. Exterior mesh classification. Two exceptions to the rule of four incident faces per vertex: (a) an example of four incident faces whose vertex is still located on a nonmanifold edge, and (b) a case of only three incident faces that is in the mesh interior.
Figure 15. Comparison with existing approaches: (a) ball-pivoting algorithm [71] with 177,825 triangles, (b) greedy projection triangulation of Marton et al. [69] with 187,028 triangles in total, (c) trimmed SSD reconstruction [12] with 398,736 triangles in total, and (d) our approach with multiple LODs with 49,340 triangles in total.
Figure 16. Evaluation of simplification errors: absolute Euclidean distance in meters between the unsimplified reference and a simplified mesh with varying QEF error thresholds. Computation and visualization were performed with CloudCompare [74].
Figure 17. Mesh extraction benchmarks: CPU scaling measurements of our mesh extraction: (a) effective utilization, calculated as the fraction of time spent on tasks relative to the total active CPU time; (b) box plot of the total run time with respect to multiple CPU threads using the unsimplified-low-detail preset and (c) the simplified-low-detail preset, using all samples of our recording (Section 6). Box plot whiskers denote the 99th percentile of 1728 measurements.
30 pages, 32076 KiB  
Article
An Ontology-Based Framework for Geospatial Integration and Querying of Raster Data Cube Using Virtual Knowledge Graphs
by Younes Hamdani, Guohui Xiao, Linfang Ding and Diego Calvanese
ISPRS Int. J. Geo-Inf. 2023, 12(9), 375; https://doi.org/10.3390/ijgi12090375 - 8 Sep 2023
Cited by 6 | Viewed by 2959
Abstract
The integration of the raster data cube alongside other forms of geospatial data (e.g., vector data) raises considerable challenges when it comes to managing and representing it using knowledge graphs. Such integration can play an invaluable role in handling the heterogeneity of geospatial data and linking the raster data cube to semantic technology standards. Many recent approaches have attempted to address this issue, but they often lack robust formal elaboration or concentrate solely on integrating raster data cubes without considering the inclusion of semantic spatial entities along with their spatial relationships. This may constitute a major shortcoming when it comes to performing advanced geospatial queries and semantically enriching geospatial models. In this paper, we propose a framework that enables such semantic integration and advanced querying of raster data cubes based on the virtual knowledge graph (VKG) paradigm. This framework defines a semantic representation model for raster data cubes that extends the GeoSPARQL ontology. With such a model, we can combine the semantics of raster data cubes with feature-based models that involve geometries as well as spatial and topological relationships. This allows us to formulate spatiotemporal queries naturally in SPARQL, using ontological concepts at an appropriate level of abstraction. We propose an implementation of the proposed framework based on a VKG system architecture. In addition, we perform an experimental evaluation to compare our framework with other existing systems in terms of performance and scalability. Finally, we show the potential and the limitations of our implementation and discuss several possible future works.
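A query in the spirit of this framework — selecting raster cube cells whose footprint falls within a vector geometry via a GeoSPARQL topological function — might look as follows. The `rdc:` and `ex:` prefixes and class/property names are hypothetical placeholders for the paper's vocabulary; `geo:` and `geof:` are the standard GeoSPARQL namespaces:

```python
# Sketch of a GeoSPARQL query over a semantic raster data cube,
# held as a string so it can be handed to any SPARQL endpoint client.
query = """
PREFIX geo:  <http://www.opengis.net/ont/geosparql#>
PREFIX geof: <http://www.opengis.net/def/function/geosparql/>
PREFIX rdc:  <http://example.org/raster-data-cube#>
PREFIX ex:   <http://example.org/ontology#>

SELECT ?cell ?value WHERE {
  ?cell a rdc:CubeCell ;
        rdc:hasValue ?value ;
        geo:hasGeometry/geo:asWKT ?wkt .
  ?district a ex:District ;
            geo:hasGeometry/geo:asWKT ?dwkt .
  FILTER (geof:sfWithin(?wkt, ?dwkt))
}
"""
```

In the VKG setting, a system such as Ontop would rewrite this SPARQL query into SQL over the underlying raster storage rather than materializing the triples.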
(This article belongs to the Topic Geospatial Knowledge Graph)
Figure 1. An abstract overview of the classes and properties defined in the GeoSPARQL standard (left sub-figure) and the types of spatial relationships, of which only the topological ones are implemented by the GeoSPARQL standard (right sub-figure).
Figure 2. GeoSPARQL geometry taxonomy in compliance with OGC standards [34].
Figure 3. Different abstractions of the data cube spatial atom.
Figure 4. Diagram of the developed raster data cube vocabulary. Solid lines indicate object or data properties, and arrows indicate the direction of property relations. Dotted lines indicate subclass relations. Dashed lines without arrowheads connect disjoint classes. Green rectangles indicate object properties; pink rectangles indicate data properties. For object properties, their functional or inverse-functional nature is specified where applicable. Classes in dark blue belong to the ontological vocabulary we developed, while those in lighter blue are reused from another ontology (in this case, the GeoSPARQL ontology).
Figure 5. Taxonomy used for the classification of geospatial semantic queries.
Figure 6. System architecture, presented as three layers and six processes. Orange dashed arrows indicate the inputs to each system component. Green arrows indicate the processes involved in query answering. The numbers indicate the order in which each process must be performed to obtain an answer to a query.
Figure 7. Database design for representing the semantic data cube.
Figure 8. Study area and dataset.
Figure 9. Map visualization of the query results of Q5 (left) and Q6 (right).
Figure 10. Map visualization of the query result for Q7.
Figure 11. Visualization of the query result for Q8. The numbers 1 to 5 delineate the chronological order of the paths followed, from the initial point to the final destination, while tracking the maximum temperature. The directional arrows along these pathways illustrate the direction of movement.
Figure 12. List of query results of Q9.
Figure 13. Graphs showing the evolution of the computation time of four queries on the Ontop system depending on the size of the temporal window, in comparison with Geold and Strabon.
Figure 14. Graphs showing the evolution of the computation time of five advanced queries on the Ontop system depending on the size of the temporal window.
12 pages, 12631 KiB  
Article
Head-Mounted Display for Clinical Evaluation of Neck Movement Validation with Meta Quest 2
by Manuel Trinidad-Fernández, Benoît Bossavit, Javier Salgado-Fernández, Susana Abbate-Chica, Antonio J. Fernández-Leiva and Antonio I. Cuesta-Vargas
Sensors 2023, 23(6), 3077; https://doi.org/10.3390/s23063077 - 13 Mar 2023
Cited by 4 | Viewed by 3322
Abstract
Neck disorders have a significant impact on people because of their high incidence. Head-mounted display (HMD) systems, such as the Meta Quest 2, grant access to immersive virtual reality (iVR) experiences. This study aims to validate the Meta Quest 2 HMD system as an alternative for screening neck movement in healthy people. The device provides data about the position and orientation of the head and, thus, the neck mobility around the three anatomical axes. The authors developed a VR application that asks participants to perform six neck movements (rotation, flexion, and lateralization on both sides), which allows the collection of the corresponding angles. An InertiaCube3 inertial measurement unit (IMU) is also attached to the HMD so that the measurements can be compared against a criterion standard. The mean absolute error (MAE), the percentage of error (%MAE), and the criterion validity and agreement are calculated. The study shows that the average absolute errors do not exceed 1° (average = 0.48 ± 0.09°). The rotational movement’s average %MAE is 1.61 ± 0.82%. The head orientations obtain a correlation between 0.70 and 0.96. The Bland–Altman study reveals good agreement between the HMD and IMU systems. Overall, the study shows that the angles provided by the Meta Quest 2 HMD system are valid for calculating the rotational angles of the neck on each of the three axes. The obtained results demonstrate an acceptable error percentage and a very minimal absolute error when measuring the degrees of neck rotation; therefore, the sensor can be used for screening neck disorders in healthy people.
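The reported error metrics can be computed directly from paired angle series. A minimal sketch, assuming %MAE is normalized by the mean absolute reference (IMU) angle — the exact normalization used in the paper is not stated here:

```python
import numpy as np

def mae(ref, est):
    """Mean absolute error between reference (IMU) and estimated (HMD) angles."""
    ref, est = np.asarray(ref, float), np.asarray(est, float)
    return np.mean(np.abs(ref - est))

def pct_mae(ref, est):
    """MAE as a percentage of the mean absolute reference angle (assumed
    normalization for the %MAE figure)."""
    ref = np.asarray(ref, float)
    return 100.0 * mae(ref, est) / np.mean(np.abs(ref))
```

For example, `mae([10, 20, 30], [11, 19, 30])` evaluates to 2/3 of a degree, and the corresponding `pct_mae` to about 3.33%.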
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors III)
Figure 1. (a) Inertial measurement unit (IMU) placed over the head-mounted display (HMD); (b) coordinate reference system of the device on a test avatar.
Figure 2. Examples of looking right (a) and looking up (b). Elements highlighted with number 1 are the instructions, consisting of a puppet animation and text output. Elements highlighted with number 2 are the visual feedback about the user’s gaze. Elements highlighted with number 3 are the targets to reach.
Figure 3. Subject performing the six maximum movements and the corresponding axes. Max ROM: maximum range of motion. The black lines represent 0° of the selected movement.
Figure 4. Bland–Altman plots assessing the agreement between the HMD system and the IMU in all positions and axes. The plots include the mean difference (green line) and limits of agreement (red lines).
37 pages, 34345 KiB  
Article
Petrographic Microscopy with Ray Tracing and Segmentation from Multi-Angle Polarisation Whole-Slide Images
by Marco Andres Acevedo Zamora and Balz Samuel Kamber
Minerals 2023, 13(2), 156; https://doi.org/10.3390/min13020156 - 20 Jan 2023
Cited by 5 | Viewed by 5505
Abstract
‘Slide scanners’ are rapid optical microscopes equipped with automated and accurate x-y travel stages with virtual z-motion that cannot be rotated. In biomedical microscopic imaging, they are widely deployed to generate whole-slide images (WSI) of tissue samples in various modes of illumination. The availability of WSI has motivated the development of instrument-agnostic advanced image analysis software, helping drug development, pathology, and many other areas of research. Slide scanners are now being modified to enable polarised petrographic microscopy by simulating stage rotation with the acquisition of multiple rotation angles of the polariser–analyser pair for observing randomly oriented anisotropic materials. Here we report on the calibration strategy of one repurposed slide scanner and describe a pilot image analysis pipeline designed to introduce a wider audience to the complexity of performing computer-assisted feature recognition on mineral groups. The repurposed biological scanner produces transmitted light plane- and cross-polarised (TL-PPL and XPL) and unpolarised reflected light (RL) WSI from polished thin sections or slim epoxy mounts at various magnifications, yielding pixel dimensions from ca. 2.7 × 2.7 to 0.14 × 0.14 µm. A data tree of 14 WSI is regularly obtained, containing two RL WSI and six each of the PPL and XPL WSI (at 18° rotation increments). This pyramidal image stack is stitched and built into a local server database simultaneously with acquisition. The pyramids (multi-resolution ‘cubes’) can be viewed with freeware locally deployed for teaching petrography and collaborative research. The main progress reported here concerns image analysis with a pilot open-source software pipeline enabling semantic segmentation on petrographic imagery.
For this purpose, all WSI are post-processed and aligned to a ‘fixed’ reflective surface (RL), and the PPL and XPL stacks are then summarised in one image, each with ray tracing that describes visible light reflection, absorption, and O- and E-wave interference phenomena. The maximum red-green-blue values were found to best overcome the limitation of refractive index anisotropy for segmentation based on pixel-neighbouring feature maps. This strongly reduces the variation in dichroism in PPL and interference colour in XPL. The synthetic ray trace WSI is then combined with one RL to estimate modal mineralogy with multi-scale algorithms originally designed for object-based cell segmentation in pathological tissues. This requires generating a small number of polygonal expert annotations that inform a training dataset, enabling on-the-fly machine learning classification into mineral classes. The accuracy of the approach was tested by comparison with modal mineralogy obtained by energy-dispersive spectroscopy scanning electron microscopy (SEM-EDX) for a suite of rocks of simple mineralogy (granulites and peridotite). The strengths and limitations of the pixel-based classification approach are described, and phenomena from sample preparation imperfections to semantic segmentation artefacts around fine-grained minerals and/or of indiscriminate optical properties are discussed. Finally, we provide an outlook on image analysis strategies that will improve the status quo by using the first-pass mineralogy identification from optical WSI to generate a location grid to obtain targeted chemical data (e.g., by SEM-EDX) and by considering the rock texture. Full article
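A minimal sketch of the ray-tracing summary described above: per-pixel maximum, minimum, and standard deviation across the multi-angle polarisation stack, mirroring products such as XPL-max and XPL-min. The array shape and toy data are assumptions for illustration, not the pipeline's actual implementation:

```python
import numpy as np

def ray_trace_summary(stack):
    """Summarise a multi-angle polarisation stack into rotation-invariant images.

    stack: array of shape (n_angles, H, W, 3), e.g. six XPL (or PPL) RGB
    whole-slide images acquired at 18-degree polariser-analyser increments.
    """
    stack = np.asarray(stack, dtype=float)
    return {
        "max": stack.max(axis=0),  # brightest state per pixel: suppresses extinction
        "min": stack.min(axis=0),  # closest-to-extinction state (cf. XPL-min)
        "std": stack.std(axis=0),  # how strongly colour varies with rotation
    }

# Toy 2x2-pixel example with three rotation angles
toy = np.random.default_rng(0).uniform(0, 255, size=(3, 2, 2, 3))
out = ray_trace_summary(toy)
```

Taking the per-pixel maximum is what reduces the dichroism variation in PPL and the interference-colour variation in XPL before segmentation.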
(This article belongs to the Section Mineral Exploration Methods and Applications)
Figure 1
<p>VS200 ST research slide scanner components. (<b>A</b>) The fundamental parts (<b>top-left</b>) are referenced (red lines) to the sketch of the base unit without housing panels (<b>top-right</b>). (<b>B</b>) The part diagrams (<b>bottom-right</b>) point out two possible optical paths (<b>bottom-left</b>) showing the source LED lamps. Copyright: Modified from Installation Manual VS200, Olympus.</p>
Figure 2
<p>VS200 setup photos. (<b>A</b>) Sample chamber without sample tray showing components. The upper half of the optical axis is shown in dashed yellow lines. (<b>B</b>) Slide trays with the calibrating samples inserted in the long (pocket #3: Si semi-conductor wafer; 4: VS calibrating sample; 5: biotite-garnet gneiss) and short (pocket #3: white label; 4 and 5: quartz crystals) trays.</p>
Figure 3
<p>Example of VS200 optical performance in transmitted (PPL-90 and XPL-90) and reflected (RL unpolarised) light compared to a corresponding false-colour SEM-BSE image. From top to bottom (10×: 0.547; 20×: 0.273; 40×: 0.137 µm/px), the image shows a small area of a pyroxene-plagioclase granulite xenolith within a basalt (overview at bottom-right). In PPL, the decreasing depth-of-field at greater spatial resolutions is evident. In RL, the similarity of grain shapes is expected to be best in the recoloured SEM-BSE (<a href="#app1-minerals-13-00156" class="html-app">Supplementary Materials</a> for details). Sample 18-RBE-006h [<a href="#B23-minerals-13-00156" class="html-bibr">23</a>].</p>
Figure 4
<p>Whole-slide images in plane polarized (PPL) (<b>left</b>) and cross-polarized light (XPL) (<b>centre</b>) in multi-angle polarisation showing 18° steps of slight pleochroism and vivid interference colours of a coarse-grained harzburgite with olivine, pyroxene (ol &gt; opx &gt; cpx), spinel, and alteration products. Two unpolarised reflected light (RL) images (<b>top-right</b>) are obtained, one with standard illumination and the other with the gamma value boosted (<b>second on the top-right</b>). This allows us to better distinguish different silicates in RL, as seen in the histograms (<b>bottom-right</b>). Sample ID: 17-BSK-043.</p>
Figure 5
<p>Flowchart of the image analysis pipeline (option A) showing the main steps (blue squares), outputs (light orange squares), and deployed software (logos) for semantic segmentation. In Step 1, the images are uploaded to the server in proprietary format (*.vsi). Windows and Linux operating systems (OS) are utilised using packages detailed in the <a href="#app1-minerals-13-00156" class="html-app">Supplementary Materials S1</a>.</p>
Figure 6
<p>QuPath image annotation on a powerful laptop (<b>top-left</b>) using the user-friendly annotation toolbar to generate internal GeoJSON files (<b>top-centre</b>) that describe the drawn polygons for the selected RL γ-boosted image (main picture). The tools use the visible layers (<b>right panel</b>) that can be toggled for convenience, helping the ‘wand’ tool for precise contouring of grains. The yellow square shows the field of view of <a href="#minerals-13-00156-f007" class="html-fig">Figure 7</a>. Sample: harzburgite 17-BSK-043.</p>
Figure 7
<p>QuPath platform using a multi-view canvas (2 × 2) with the ray tracing image stack overlaid by ‘live’ segmentation (<b>top-left</b>). The views show PPL-max (<b>top-right</b>), XPL-max (<b>bottom-left</b>), and RL γ-boosted (bottom-right) to give a petrographer views across all relevant modalities. The ‘Pixel classifier’ is operated to improve the labelled annotation set and to define classes for parametrised supervised clustering (centre sub-menu) following the mineral library (<b>top-left</b> tab: ‘Annotations’). The graphical user interface (GUI) navigates and updates the segmentation on each pyramid block (see <a href="#minerals-13-00156-f008" class="html-fig">Figure 8</a>) of the combined image while reading annotations, calculating feature maps, and enabling the saving of the WSI semantic segmentation maps.</p>
Figure 8
<p>QuPath ‘Pixel classifier’ workflow on the registered image stack (<b>top</b>) that is split into red-green-blue image layers (RGB) and then processed block-wise. The multi-scale feature map calculation (blue box inset) uses filtering for all selected channels. This helps find characteristic features with a random trees model during prediction. A loop reconstructs the phase map for each pixel and block (bottom-right). Sample ID: granulite-facies metabasite 7KB-42.</p>
Figure 9
<p>Imaging of a hornblende granulite. (<b>A</b>) Whole-slide images in RL auto-exposure and γ-boosted marking the position of a ROI (red square) (<b>top-left</b>) and RGB histograms of PPL-0, XPL-0, and both RL (<b>bottom-left</b>). RL γ-boosting allows for better distinguishing of dielectric minerals of low reflectivity. (<b>B</b>) Schematic drawing shows ray tracing in an image patch stack at five different sample locations (vertical yellow lines) of distinct mineralogy. The two typical image dimensionality options (<b>bottom-right</b>) are specified. Sample ID: KB-67.</p>
Figure 10
<p>Orthogonal views of <a href="#minerals-13-00156-f009" class="html-fig">Figure 9</a> stack (pyramidal level = 0) of the hornblende granulite KB-67. The PPL-36 (<b>top-left</b>) and XPL-18 (<b>middle-right</b>) positions illustrate the mineralogy. Five pixels of distinct mineralogy (biotite 1 and 2, plagioclase, garnet, and chlorite) are marked (blue stars) for each modality (RL, PPL, XPL) and orientation. Colour gradients within the orthogonal traverses are arranged left–right (vertical) and top–bottom (horizontal). The change in PPL and XPL colour is seen as a gradient as the stage is virtually rotated. ROI RGB spectra at 180 (PPL) steps and six steps (XPL) are shown at the bottom with ideal spectral fits to 180° (PPL) and 90° (XPL) periods.</p>
Figure 11
<p>Ray tracing for PPL and XPL whole-slide images in ultra-mafic, mafic, and felsic rocks showing the mutual relationships between mineralogy and statistical measures (maximum, minimum, and standard deviation). The sample names are given at the top.</p>
Figure 12
<p>XPL-min image of granulite-facies metabasite KB-63. It allows seeing areas of clay (prominently in the top-left) altering plagioclase, amphibole, and garnet (possibly kelyphite). Further, note the clay-filled micro-fractures that are oriented NE across the thin section. The sub-grain structural complexity of plagioclase can also be interpreted in XPL-min.</p>
Figure 13
<p>Optical phase maps of the image stacks (RL γ-boosted, PPL-max, and XPL-max). Visually, the fidelity is better in ultra-mafic and mafic rocks. It generally decreases towards felsic compositions due to pyroxenes and quartz-feldspar colour overlaps. The prediction scales of the maps (2 × 2 px: (<b>a</b>–<b>f</b>) maps, and 4 × 4 px: (<b>g</b>,<b>h</b>) maps) were set according to the visual quality assessment (iteration #1), see the colour legends in <a href="#minerals-13-00156-f014" class="html-fig">Figure 14</a>. The published EDX phase maps are available for (<b>b</b>–<b>d</b>) in [<a href="#B24-minerals-13-00156" class="html-bibr">24</a>] and e in [<a href="#B23-minerals-13-00156" class="html-bibr">23</a>].</p>
Figure 14
<p>Legends and volume (%) modal abundance plots for the phase maps shown in <a href="#minerals-13-00156-f013" class="html-fig">Figure 13</a>. The X-axis shows the pixel population (logarithmic) for easy comparison between the order of magnitude, whereas the Y-axis shows the conversion to volume % values for each phase. Phases for which identification was equivocal were tagged as ‘unknown’ or were named with broad mineral family names (oxide, sulphide, altered, etc.). The identification could be refined by adding chemical information.</p>
Figure 15
<p>Modal mineralogy comparison of the segmentation from ray tracing stack and the SEM-EDX phase maps of metabasites [<a href="#B24-minerals-13-00156" class="html-bibr">24</a>]. The volume % abundance was weighted by nominal mineral densities. Note the logarithmic scale of the axes. Rock-forming minerals fit better to the 1:1 line (not the log-log scale) than accessory phases. Several phases identified in <a href="#minerals-13-00156-f014" class="html-fig">Figure 14</a> were omitted from the plot for clarity.</p>
Figure 16
<p>QuPath showing challenges in the image analysis methodology with optical microscopy (iteration #1) on thin sections. The granitoid sample panels (<b>left</b>) are shown in logical ‘multi-view’ canvas order (<b>top-left</b>: phase map, <b>bottom-left</b>: XPL-max, <b>top-right</b>: PPL-max, and <b>bottom-right</b>: RL γ-boosted). The granulite (<b>top-right</b>) illustrates challenging secondary sheet silicate segmentation near the section margin. The pyroxene-plagioclase granulite xenolith (<b>bottom-right</b>) displays the difficulty of separating orthopyroxene from clinopyroxene depending on crystallographic orientation.</p>
Figure 17
<p>Visual cross-referencing between a regular photo scan (<b>A</b>), ray tracing products (<b>B</b>), and electron microscopy (<b>C</b>) and a phase map (iteration #2) with a colour legend (<b>D</b>). Iteration #2 incorporated recoloured SEM-BSE and the -EDX ‘TruMap’ stack (27 characteristic X-ray lines) principal component analysis (PCA) to the ray tracing stack for improving segmentation accuracy. This has helped resolve the distinction between pyroxenes within the xenolith. The output map intersects have defined a smaller sample area (15 selectable channels, evidencing areas filled with alteration minerals, fracturing, epoxy, and prior laser ablation grooves). Pyroxene-plagioclase granulite xenolith 18-RBE-006h.</p>
Figure 18
<p>QuPath zoom-in to 10× whole-slide image (WSI) of <a href="#minerals-13-00156-f017" class="html-fig">Figure 17</a>. (<b>A</b>): iteration #1 phase map comparison alongside detailed RL image (<b>B</b>). This shows the imperfect distinction between the two pyroxenes. (<b>C</b>): Image overlay on medium-grained xenolith showing perfect image registration between SEM-BSE (40% transparency) and RL, which contain complementary information. (<b>D</b>): Contact between xenolith and basalt matrix illustrating random pixel offsets (top-left and bottom-right) due to intrinsic deformation within montages. Top-right inlets show fields-of-view (FoV) locations. Sample ID: 18-RBE-006h.</p>
21 pages, 17937 KiB  
Article
The openEO API–Harmonising the Use of Earth Observation Cloud Services Using Virtual Data Cube Functionalities
by Matthias Schramm, Edzer Pebesma, Milutin Milenković, Luca Foresta, Jeroen Dries, Alexander Jacob, Wolfgang Wagner, Matthias Mohr, Markus Neteler, Miha Kadunc, Tomasz Miksa, Pieter Kempeneers, Jan Verbesselt, Bernhard Gößwein, Claudio Navacchi, Stefaan Lippens and Johannes Reiche
Remote Sens. 2021, 13(6), 1125; https://doi.org/10.3390/rs13061125 - 16 Mar 2021
Cited by 47 | Viewed by 10329
Abstract
At present, accessing and processing Earth Observation (EO) data on different cloud platforms requires users to exercise distinct communication strategies as each backend platform is designed differently. The openEO API (Application Programming Interface) standardises EO-related contracts between local clients (R, Python, and JavaScript) and cloud service providers regarding data access and processing, simplifying their direct comparability. Independent of the providers’ data storage system, the API mimics the functionalities of a virtual EO raster data cube. This article introduces the communication strategy and aspects of the data cube model applied by the openEO API. Two test cases show the potential and current limitations of processing similar workflows on different cloud platforms and a comparison of the result of a locally running workflow and its openEO-dependent cloud equivalent. The outcomes demonstrate the flexibility of the openEO API in enabling complex scientific analysis of EO data collections on cloud platforms in a homogenised way. Full article
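As a sketch of the data-cube model the API standardises, a workflow can be expressed as a JSON process graph of named nodes linked by `from_node` references, with one node flagged as the `result`. The collection id, band names, and extents below are placeholders, not tied to any particular backend:

```python
import json

# Hand-built openEO process graph: load a collection, compute NDVI,
# then reduce the time dimension with max(). Identifiers are illustrative.
process_graph = {
    "load": {
        "process_id": "load_collection",
        "arguments": {
            "id": "SENTINEL2_L2A",
            "spatial_extent": {"west": 16.1, "south": 48.0,
                               "east": 16.6, "north": 48.3},
            "temporal_extent": ["2020-06-01", "2020-09-01"],
            "bands": ["B04", "B08"],
        },
    },
    "ndvi": {
        "process_id": "ndvi",
        "arguments": {"data": {"from_node": "load"}},
    },
    "max_t": {
        "process_id": "reduce_dimension",
        "arguments": {
            "data": {"from_node": "ndvi"},
            "dimension": "t",
            "reducer": {"process_graph": {
                "max": {"process_id": "max",
                        "arguments": {"data": {"from_parameter": "data"}},
                        "result": True}}},
        },
        "result": True,  # the node whose output the backend returns
    },
}

# Payload shape as posted to a backend for synchronous processing
payload = json.dumps({"process": {"process_graph": process_graph}})
```

Because every backend accepts the same graph, the identical workflow can be submitted to different cloud providers and the outputs compared directly, which is what the test cases in the article exploit.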
Graphical abstract
Figure 1
<p>Communication between clients and backends. The openEO API specifies the use of the JavaScript Object Notation (JSON) format and the way in which backends should respond to requests.</p>
Figure 2
<p>Processing of user-defined functions (UDFs).</p>
Figure 3
<p>The region of interest with the test area to compare outcomes of equal workflow at different backends. The base image is a <span class="html-italic">Google Earth</span> satellite image (Imagery ©2021, TerraMetrics, Map data ©2021).</p>
Figure 4
<p>The region of interest with the (20 km × 20 km) area (the red rectangle) selected for testing the openEO UDF workflow on the EURAC backend and locally. The base image is a <span class="html-italic">Google Earth</span> satellite image (Imagery ©2021, TerraMetrics, Map data ©2021).</p>
Figure 5
<p>Results of use case 1, executed at GEE, VITO, and JRC (from <b>top</b> to <b>bottom</b>).</p>
Figure 6
<p>(<b>a</b>) The output of the openEO-BFAST workflow calculated at the EURAC backend. The values show the time of detected breaks in Sentinel-1 time series in decimal years. (<b>b</b>) The difference between the outputs calculated at the EURAC backend and in a local R environment. The values show the time difference between the detected breaks.</p>
17 pages, 12664 KiB  
Article
A Portal Offering Standard Visualization and Analysis on top of an Open Data Cube for Sub-National Regions: The Catalan Data Cube Example
by Joan Maso, Alaitz Zabala, Ivette Serral and Xavier Pons
Data 2019, 4(3), 96; https://doi.org/10.3390/data4030096 - 10 Jul 2019
Cited by 24 | Viewed by 6387
Abstract
The amount of data that the Sentinel fleet is generating over a territory such as Catalonia makes it virtually impossible to manually download and organize as files. The Open Data Cube (ODC) offers a solution for storing big data products in an efficient way with modest hardware while avoiding cloud expenses. The approach is expected to remain useful into the next decade. Yet, the ODC requires a level of expertise that most people who could benefit from the information do not have. This paper presents a web map browser that gives access to the data and goes beyond simple visualization by combining the OGC WMS standard with modern web browser capabilities to incorporate time series analytics. This paper shows how we have applied this tool to analyze the spatial distribution of the availability of Sentinel 2 data over Catalonia, revealing differences in the number of useful scenes depending on the geographical area, ranging from one or two images per month to more than one image per week. The paper also demonstrates the usefulness of the same approach in giving access to remote sensing information to a set of protected areas around Europe participating in the H2020 ECOPotential project. Full article
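The browser's per-scene requests follow the standard OGC WMS 1.3.0 GetMap pattern with a TIME parameter, one request per acquisition date. The sketch below builds such a URL; the endpoint, layer name, and extent are hypothetical, not the Catalan Data Cube's actual service:

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url, layer, bbox, time, size=(512, 512)):
    """Build a WMS 1.3.0 GetMap request with a TIME parameter.

    bbox is (minLat, minLon, maxLat, maxLon): in WMS 1.3.0, EPSG:4326
    uses latitude-first axis order.
    """
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "STYLES": "",
        "CRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": str(size[0]),
        "HEIGHT": str(size[1]),
        "FORMAT": "image/png",
        "TIME": time,  # one acquisition date, e.g. a Sentinel-2 overpass
    }
    return base_url + "?" + urlencode(params)

# Hypothetical endpoint and layer; extent roughly covering Catalonia
url = wms_getmap_url("https://example.org/wms", "ndvi",
                     (40.5, 0.1, 42.9, 3.3), "2018-07-15")
```

Issuing this request repeatedly while stepping TIME through the available dates is all a client needs for the thumbnail loading and animation controls described below.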
(This article belongs to the Special Issue Earth Observation Data Cubes)
Figure 1
<p>The Catalan Data Cube WMS browser.</p>
Figure 2
<p>Two scenes offered in the Catalan Data Cube map browser and how they are shown in the table of contents (legend): (<b>a</b>) scene offers less coverage than (<b>b</b>) scene (as can be seen in “cov” percentages), but both scenes have high visibility (“vis” percentage, i.e., the ground is not obscured by clouds) in the covered areas.</p>
Figure 3
<p>Time series visualization of the Catalan Data Cube WMS browser: (<b>a</b>) layer selection and thumbnails download start when pressing the <span class="html-italic">Load</span> button; (<b>b</b>) thumbnails download process; (<b>c</b>) selection of the full resolution images to download using the slider of percentage of void space; (<b>d</b>) resulting full resolution animation; and (<b>e</b>) detail of the temporal controls.</p>
Figure 4
<p>Dynamic NDVI layer animation in the Catalan Data Cube including a temporal profile for a crop (showing phenological dynamics, in black) and sand (almost a constant signal, in red) points (centered in Roses Gulf area).</p>
Figure 5
<p>Options of temporal statistics of the Catalan Data Cube WMS browser: (<b>a</b>) for quantitative values (such as NDVI); (<b>b</b>) for categorical values (such as Scene Classification map, SCL).</p>
Figure 6
<p>Temporal statistics of the Catalan Data Cube WMS browser for the NDVI variable along the first year of Sentinel 2 acquisitions (27/03/2018 to 24/03/2019): (<b>a</b>) mean; (<b>b</b>) standard deviation.</p>
Figure 7
<p>x/t graphic in the Catalan Data Cube. Black and red rectangles show the temporal profile (in vertical starting from above with the first date and ending below with the last date) for the same crop (black) and sand (red) points used in <a href="#data-04-00096-f004" class="html-fig">Figure 4</a>.</p>
Figure 8
<p>Number of scenes with visible ground for each pixel over Catalonia along first year of Sentinel-2 acquisitions (27/03/2018 to 24/03/2019) in the Catalan Data Cube.</p>
Figure 9
<p>Image availability over Catalonia: (<b>a</b>) number of scenes with “Ground” class; (<b>b</b>) orbits distribution over Catalonia; and (<b>c</b>) mean annual rainfall<a href="#fn004-data-04-00096" class="html-fn">4</a>.</p>
Figure 10
<p>Protected areas in the ECOPotential project. The protected areas selected for the ECOPotential data cube are represented with the ODC logo next to them.</p>
Figure 11
<p>Use of Sentinel 2 images from the ECOPotential Data Cube in the Protected Areas from Space map browser: (<b>a</b>) false color RGB combination over the mountain ecosystem of Gran Paradiso National Park in Italy (in red, highest values of vegetated areas); (<b>b</b>) Soil Adjusted Vegetation Index (SAVI) dynamic calculation over the arid ecosystem of Har Ha Negev National Park in Israel (in green, highest values of SAVI; in brown-orange, lowest values); (<b>c</b>) Leaf Area Index (LAI) dynamic calculation over the coastal ecosystem of the Camargue National Park in France (in green highest values of LAI; in brown-orange, lowest values); and (<b>d</b>) Scene Classification map (SCL) provided by ESA over the coastal ecosystem of Doñana National Park.</p>