<p>Satellite imagery provides a unique reference for estimating flood inundation extent that can help characterize flood magnitudes and impacts in support of scientific studies and for operational disaster response. All imagery modalities (multispectral/hyperspectral, panchromatic, synthetic aperture radar (SAR)) suffer from factors that confound accurate spatial representation of flood extent, whether using traditional image classification methods or machine learning-based approaches. Clouds, cloud shadows, tree canopy, tall vegetation, and other factors either obscure the water surface or confuse the classifiers. These can yield results that vary widely when compared to actual flood extents, whether referencing observed data like high-water marks or high-quality hydrodynamic models. In addition, opportunities for imagery collection often do not coincide with maximum flood extent due to satellite access windows, cloud cover impacting optical sensors, or a combination of both. That said, the proliferation of existing and planned commercial and civil sensors across all modalities presents increasing opportunities for timely collection.</p><p>In recent years, the quality of terrain data at regional, country, continental, and global scales has continued to rapidly improve. The data include WorldDEM, NASADEM, MERIT DEM, EarthDEM, among others, and many regional to country-scale lidar-derived datasets. The availability of this high-quality data allows for new methods that integrate terrain data with remotely sensed imagery data to yield accurate and timely representations of flood extent in new ways to support both scientific investigations and disaster response.</p><p>However, few methods have been developed that integrate satellite and/or aerial imagery data with terrain data to improve imagery-derived flood products. 
This paper presents new methods, based on the novel Flood Inundation Surface Topology (FIST) Model, for integrating terrain data with the limited data derived from imagery to provide a more accurate representation of maximum flood extents, overcoming many of the aforementioned limitations of using imagery alone. In addition, the FIST model produces flood depth grids at the resolution of the native terrain data, which represents a major advance in imagery-derived flood products. We present the fundamental directed graph algorithm that is unique to the FIST model; the data architectures that support a range of applications; and case studies for the use of active flood and post-peak flood imagery to generate inundation extents and depth grids for peak-flood conditions.</p>
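The FIST algorithm itself is not reproduced in this abstract. To illustrate the general idea of deriving a depth grid from an imagery-derived extent mask plus terrain, the minimal sketch below uses a flat-pool simplification: it estimates a water-surface elevation from the terrain along the flood boundary and subtracts the DEM. The function name and the flat-pool assumption are illustrative, not the FIST model's directed-graph method.

```python
import numpy as np

def depth_grid_from_extent(dem, flood_mask):
    """Estimate a flood depth grid from a DEM and an extent mask.

    Illustrative flat-pool simplification (NOT the FIST directed-graph
    algorithm): the water surface is assumed level, at the median terrain
    elevation along the flood boundary.

    dem        : 2-D array of ground elevations (m)
    flood_mask : 2-D boolean array, True where imagery shows water
    """
    # Boundary cells: flooded cells with at least one dry 4-neighbour.
    padded = np.pad(flood_mask, 1, constant_values=False)
    dry_neighbour = (
        ~padded[:-2, 1:-1] | ~padded[2:, 1:-1] |   # up, down
        ~padded[1:-1, :-2] | ~padded[1:-1, 2:]     # left, right
    )
    boundary = flood_mask & dry_neighbour

    # Water-surface elevation from terrain along the wet/dry edge.
    wse = np.median(dem[boundary])

    # Depth = water surface minus ground, clipped at zero, inside the mask.
    return np.where(flood_mask, np.clip(wse - dem, 0.0, None), 0.0)
```

A real workflow would estimate a spatially varying water surface (as FIST does via its graph traversal) rather than a single pool elevation, but the input/output shapes are the same: an extent mask in, a depth grid at the native terrain resolution out.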
<p>We show that machine learning models learn and perform better when they know where to expect shadows, through hillshades modeled to the time of imagery acquisition.</p><p>Shadows are detrimental to machine learning applications on satellite imagery. Prediction tasks such as semantic/instance segmentation, object detection, and counting of rivers, roads, buildings, and trees all rely on crisp edges and colour gradients, which are confounded by shadows in passive optical imagery because such imagery relies on the sun’s illumination for reflectance values.</p><p>Hillshading is a standard technique for enriching a mapped terrain with relief effects by emulating the shadow cast by steep terrain and/or tall vegetation. A hillshade modeled to the time of day and year can be easily derived through a basic form of ray tracing on a Digital Terrain Model (DTM, also known as a bare-earth DEM) or a Digital Surface Model (DSM), given the sun's altitude and azimuth angles. In this work, we use lidar-derived DSMs. A DSM-based hillshade conveys far more information on shadows than a bare-earth DEM alone, namely any non-terrain vertical features (e.g. vegetation, buildings) resolvable at a 1-m resolution. Using a DSM of this fidelity for hillshading, and feeding the result to a machine learning model, is novel and is the main contribution of our work. Any uncertainty over the sun angles can be captured through a composite multi-angle hillshade, which shows the range where shadows can appear throughout the day.</p><p>We show the utility of time-dependent hillshades in the daily mapping of rivers from Very High Resolution (VHR) passive optical and lidar-derived terrain data [1]. Specifically, we leverage the acquisition timestamps within a daily 3m PlanetScope product over a 2-year period. 
Given a datetime and geolocation, we model the sun’s azimuth and elevation relative to that geolocation at that time of day and year. We can then generate a time-dependent hillshade and thereby locate shadows at any given time within that 2-year period. In our ablation study we show that, of all the lidar-derived products, the time-dependent hillshades contribute an 8-9% accuracy improvement in the semantic segmentation of rivers. This indicates that a semantic segmentation model is less prone to errors of commission (false positives) because it can better disambiguate shadows from dark water.</p><p>Time-dependent hillshades are not currently used in ML for EO use cases, yet they can be useful. All that is needed to produce them is access to high-resolution bare-earth DEMs, such as those of the US National 3D Elevation Program covering the entire continental U.S. at 1-meter resolution, or creation of DSMs from the lidar point-cloud data itself. As the coverage of DSM and/or DEM products expands to more parts of the world, time-dependent hillshades could become as commonplace as cloud masks in EO use cases.</p><p><br>[1] Dolores Garcia, Gonzalo Mateo-Garcia, Hannes Bernhardt, Ron Hagensieker, Ignacio G. Lopez-Francos, Jonathan Stock, Guy Schumann, Kevin Dobbs, and Freddie Kalaitzis. Pix2Streams: Dynamic Hydrology Maps from Satellite-LiDAR Fusion. <em>AI for Earth Sciences Workshop, NeurIPS 2020</em>.</p>
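As a sketch of the hillshading step, the snippet below computes the standard analytical (cosine-of-incidence) hillshade of a DSM for a given sun azimuth and altitude. This is the simpler analytic variant rather than the shadow-casting ray trace described above, and the sun angles for a given datetime and geolocation would come from a solar-position library (e.g. pysolar or astropy, not shown); the function name and azimuth convention are illustrative assumptions.

```python
import numpy as np

def hillshade(dsm, azimuth_deg, altitude_deg, cellsize=1.0):
    """Analytical hillshade of a DSM for a given sun position.

    Illustrative sketch: cosine of the incidence angle between the sun
    and the local surface normal (the classic GIS hillshade formula),
    not the shadow-casting ray trace used for hard cast shadows.

    dsm          : 2-D array of surface elevations (m)
    azimuth_deg  : sun azimuth, degrees clockwise from north (assumed convention)
    altitude_deg : sun altitude above the horizon, degrees
    cellsize     : grid spacing in the same units as elevation
    """
    az = np.radians(360.0 - azimuth_deg + 90.0)  # compass -> math convention
    alt = np.radians(altitude_deg)

    # Terrain gradients via central differences; slope and aspect follow.
    dy, dx = np.gradient(dsm, cellsize)
    slope = np.arctan(np.hypot(dx, dy))
    aspect = np.arctan2(-dx, dy)

    # Illumination: bright where the surface faces the sun, dark where it
    # faces away; clipped to [0, 1] for self-shaded slopes.
    shaded = (np.sin(alt) * np.cos(slope) +
              np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)
```

A composite multi-angle hillshade, as mentioned above, can then be formed by taking an element-wise minimum of hillshades over a sweep of azimuth/altitude pairs for the day.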
Remotely sensed imagery is increasingly used by emergency managers to monitor and map the impact of flood events to support preparedness, response, and critical decision-making throughout the flood event lifecycle. To reduce latency in delivery of imagery-derived information, ensure consistent and reliably derived map products, and facilitate processing of an increasing volume of remote sensing data streams, automated flood mapping workflows are needed. The U.S. Geological Survey is facilitating the development and integration of machine-learning algorithms in collaboration with NASA, the National Geospatial-Intelligence Agency (NGA), the University of Alabama, and the University of Illinois to create a workflow for rapidly generating improved flood-map products. A major bottleneck to the training of robust, generalizable machine learning algorithms for pattern recognition is a lack of training data that is representative across the landscape. To overcome this limitation for the training of alg...
Funding Sources: The land cover mapping project was co-funded by the Kansas GIS Policy Board, with funds from the Kansas Water Plan administered by the Kansas Water Office under the grant titled "Kansas Next-Generation Land Use/Land Cover Mapping Initiative," and by the National Science Foundation (NSF) Experimental Program to Stimulate Competitive Research (EPSCoR) under the research project titled "Understanding and Forecasting Ecological Change: Causes, Trajectories and Consequences of Environmental Change in the Central Plains."
Where are the Earth's streams flowing right now? Inland surface waters expand with floods and contract with droughts, so there is no one map of our streams. Current satellite approaches are limited to monthly observations that map only the widest streams. These are fed by smaller tributaries that make up much of the dendritic surface network but whose flow is unobserved. A complete map of our daily waters can give us an early warning for where droughts are born: the receding tips of the flowing network. Mapping them over years can give us a map of the impermanence of our waters, showing where to expect water, and where not to. To that end, we feed the latest high-res sensor data to multiple deep learning models in order to map these flowing networks every day, stacking the time series of maps over many years. Specifically, i) we enhance water segmentation to $50$ cm/pixel resolution, a 60$\times$ improvement over previous state-of-the-art results. Our U-Net trained on 30-40cm WorldVie...
Page 1. EVA ATASET AND THE KANSAS BIOLOGICAL SURVEY'S FLDPLN (FLOODPLAIN) MODEL FOR INUNDATION EXTENT ESTIMATION. By ... Submitted to the Department of Geography and the Faculty of the Graduate ...
Papers by Kevin Dobbs