Large-Scale Detection and Categorization of Oil Spills from SAR Images with Deep Learning
<p>Figure 1. Distribution of the backscatter values in the VV images, across pixels belonging to the classes “oil” and “non-oil”. The plots show only the range <math display="inline"><semantics> <mrow> <mo>[</mo> <mn>0</mn> <mo>,</mo> <mn>500</mn> <mo>]</mo> </mrow> </semantics></math>, rather than the full range <math display="inline"><semantics> <mrow> <mo>[</mo> <mn>0</mn> <mo>,</mo> <msup> <mn>2</mn> <mn>16</mn> </msup> <mo>]</mo> </mrow> </semantics></math>.</p>
<p>Figure 2. Illustration of patch extraction from the SAR products. The patches centered on the oil spill events (depicted in green) form the dataset <math display="inline"><semantics> <msub> <mi mathvariant="script">D</mi> <mn>1</mn> </msub> </semantics></math> and are associated with a label that indicates the values assumed by the 12 categories. The second dataset, <math display="inline"><semantics> <msub> <mi mathvariant="script">D</mi> <mn>2</mn> </msub> </semantics></math>, contains (i) all the patches of <math display="inline"><semantics> <msub> <mi mathvariant="script">D</mi> <mn>1</mn> </msub> </semantics></math>, (ii) all the patches with at least 1 oil spill pixel, and (iii) an equal amount of patches without oil, randomly sampled from other locations in the SAR product. Along with the VV channel, the segmentation masks are included in both datasets.</p>
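The patch-extraction scheme in the caption above can be sketched in a few lines of numpy. This is a minimal illustration only: the 64-pixel patch size, the border-clipping strategy, and the array layout are assumptions made for the example, not the settings used in the paper.

```python
import numpy as np

def extract_patch(image, center, size=64):
    """Extract a size x size patch centered on an event.

    The window is shifted, rather than zero-padded, when the event lies
    closer than size // 2 pixels to the image border.
    """
    r, c = center
    half = size // 2
    r0 = min(max(r - half, 0), image.shape[0] - size)
    c0 = min(max(c - half, 0), image.shape[1] - size)
    return image[r0:r0 + size, c0:c0 + size]

# Toy SAR product with an oil event near the right border.
product = np.zeros((100, 100), dtype=np.float32)
patch = extract_patch(product, center=(3, 97))
```

For the balanced non-oil patches of the second dataset, the same function would simply be called on randomly drawn centers from the rest of the product.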
<p>Figure 3. Schematic depiction of the OFCN architecture used for segmentation. Conv(<span class="html-italic">n</span>) stands for a convolutional layer with <span class="html-italic">n</span> neurons. For example, <math display="inline"><semantics> <mrow> <mi>n</mi> <mo>=</mo> <mn>32</mn> </mrow> </semantics></math> in the first Encoder Block, 64 in the second, and so on.</p>
<p>Figure 4. First, the trained OFCN generates the segmentation masks from the SAR images. Then, both the SAR images and the predicted masks are fed into the classification network. A different architecture is trained to classify each one of the categories (e.g., the depicted one classifies the “texture” category).</p>
<p>Figure 5. Evolution of the training loss and of the F1 score on the validation set across the 400 training epochs on dataset <math display="inline"><semantics> <msub> <mi mathvariant="script">D</mi> <mn>2</mn> </msub> </semantics></math>. Bold lines indicate a running average with a window of size 30.</p>
<p>Figure 6. Training history of the model with configuration C1 and two-stage training (C1-2ST-Long) on dataset <math display="inline"><semantics> <msub> <mi mathvariant="script">D</mi> <mn>2</mn> </msub> </semantics></math>. The plot depicts the evolution of the training loss and of the F1 score on the validation set over the 3000 epochs of the second stage. Bold lines indicate a running average with a window of size 30.</p>
<p>Figure 7. Examples of segmentation masks predicted by the OFCN on the validation set of <math display="inline"><semantics> <msub> <mi mathvariant="script">D</mi> <mn>2</mn> </msub> </semantics></math>. From left to right: the VV input channel, the mask made by the human operator, and the OFCN output thresholded at 0.5 (values <math display="inline"><semantics> <mrow> <mo>&lt;</mo> <mn>0.5</mn> <mo>→</mo> <mn>0</mn> </mrow> </semantics></math>, values <math display="inline"><semantics> <mrow> <mo>≥</mo> <mn>0.5</mn> <mo>→</mo> <mn>1</mn> </mrow> </semantics></math>). Green bounding boxes are TP (the oil spill appears both in the human-made and the predicted mask), blue boxes are FP (the OFCN detects an oil spill that is not present in the human-made mask), and red boxes are FN (the oil spill is in the human-made mask but is not detected by the OFCN).</p>
<p>Figure 8. Variation of the F1 score on the validation set as a function of the incidence angle of the satellite.</p>
<p>Figure 9. Filter visualization. We synthetically generated the input images that maximally activate 64 of the 512 filters in the first convolutional layer of the “Enc Block 512” at the bottom of the OFCN.</p>
<p>Figure 10. The x-axis always denotes the value of the rounding threshold <math display="inline"><semantics> <mi>τ</mi> </semantics></math>. (<b>a</b>–<b>c</b>) Number of True Positive (TP), False Positive (FP), and False Negative (FN) detections obtained by using different thresholds <math display="inline"><semantics> <mi>τ</mi> </semantics></math> on the soft output of the OFCN. (<b>d</b>–<b>f</b>) Values of the F1 score and IoU for different <math display="inline"><semantics> <mi>τ</mi> </semantics></math>.</p>
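The pixel-level effect of the rounding threshold can be reproduced with a short sketch. The arrays and the threshold values below are toy data for illustration, not the paper's results:

```python
import numpy as np

def f1_and_iou(soft_pred, truth, tau):
    """Binarize the soft OFCN output at threshold tau, then score pixel-wise."""
    pred = (soft_pred >= tau).astype(int)
    tp = int(np.sum((pred == 1) & (truth == 1)))
    fp = int(np.sum((pred == 1) & (truth == 0)))
    fn = int(np.sum((pred == 0) & (truth == 1)))
    denom = tp + fp + fn
    f1 = 2 * tp / (2 * tp + fp + fn) if denom else 1.0
    iou = tp / denom if denom else 1.0
    return f1, iou

truth = np.array([1, 1, 0, 0])
soft = np.array([0.9, 0.6, 0.4, 0.1])
# Sweep the rounding threshold, as in the figure's x-axis.
scores = {tau: f1_and_iou(soft, truth, tau) for tau in (0.3, 0.5, 0.7)}
```

A low threshold inflates FP, a high one inflates FN; the sweep exposes the trade-off that F1 and IoU summarize.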
<p>Figure 11. Results on the 3 Sentinel-1 products used for testing. The first column contains the original SAR images (VV-intensity); the second column contains the segmentation masks of oil spills that are manually drawn by human operators (yellow masks); the third column contains the segmentation masks of oil spills that are predicted by the OFCN (blue masks). Only small sections of the whole SAR products are shown in the figures. The numbers on top of the images represent the incidence angle.</p>
<p>Figure 12. Visualization in NLive of oil spills detected in a large area (approximately <math display="inline"><semantics> <mrow> <mn>500</mn> <mo>×</mo> <mn>200</mn> <mspace width="3.33333pt"/> <msup> <mi>km</mi> <mn>2</mn> </msup> </mrow> </semantics></math>) in the Southern Hemisphere between 2014 and 2020.</p>
<p>Figure A1. Overview of the Squeeze-and-Excitation (SE) block used in the OFCN encoder. The SE blocks are inserted after each ReLU activation.</p>
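The computation depicted in the SE block can be sketched in numpy. The channel count, the reduction ratio, and the random weights below are placeholders, and bias terms are omitted for brevity:

```python
import numpy as np

def squeeze_excite(fmap, w1, w2):
    """SE block: global average pool, bottleneck MLP, channel-wise rescaling."""
    z = fmap.mean(axis=(0, 1))               # squeeze: one value per channel
    s = np.maximum(z @ w1, 0.0)              # excitation: ReLU bottleneck
    s = 1.0 / (1.0 + np.exp(-(s @ w2)))      # sigmoid gate in (0, 1)
    return fmap * s                          # broadcast over spatial dims

channels, ratio = 8, 2
rng = np.random.default_rng(0)
fmap = rng.random((4, 4, channels))
out = squeeze_excite(fmap,
                     rng.standard_normal((channels, channels // ratio)),
                     rng.standard_normal((channels // ratio, channels)))
```

Because the gate lies in (0, 1), the block can only attenuate channels; it learns which feature maps to emphasize relative to the others.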
<p>Figure A2. Growth of the receptive field in the layers of the encoder.</p>
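The growth of the receptive field can be computed layer by layer from kernel sizes and strides. The encoder configuration below is a made-up example for illustration, not the exact OFCN layout:

```python
def receptive_field(layers):
    """layers is a list of (kernel_size, stride) pairs, input to output.

    rf tracks the receptive field of one output neuron; jump tracks the
    distance, in input pixels, between adjacent neurons of the current layer.
    """
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf

# Hypothetical encoder: two 3x3 convs then 2x2 pooling, repeated twice.
encoder = [(3, 1), (3, 1), (2, 2), (3, 1), (3, 1), (2, 2)]
rf = receptive_field(encoder)
```

Each downsampling layer multiplies the jump, so later convolutions enlarge the receptive field much faster than earlier ones.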
<p>Figure A3. Evolution of the training loss and of the F1 score on the validation set across the 400 training epochs on dataset <math display="inline"><semantics> <msub> <mi mathvariant="script">D</mi> <mn>1</mn> </msub> </semantics></math>. Bold lines indicate a running average with a window of size 30.</p>
Abstract
1. Introduction
- (Detection): we develop a deep-learning architecture that detects oil spills in SAR scenes with high accuracy. When trained on a large-scale dataset, our model achieves extremely high performance.
- (Categorization): each oil spill detected by our deep-learning model is further processed by a second neural network, which infers information about shape, contrast, and texture.
- (Visualization): we present our production pipeline to perform inference at a large scale and visualize the obtained results. Our visualization tool allows analyzing the presence of oil spills worldwide during given historical periods.
2. Dataset Description
2.1. Division in Patches
2.2. Division into Training, Validation, and Test Sets
3. Oil Spill Detection
3.1. The Deep-Learning Architecture for Semantic Segmentation
3.2. Training and Inference
3.2.1. Loss Functions for Unbalanced Dataset
3.2.2. Data Augmentation
3.2.3. Two-Stage Training
3.2.4. Test Time Augmentation
4. Oil Spill Categorization
5. Results and Discussion
5.1. Oil Spill Detection: Experimental Setting, Analysis, and Results
5.1.1. Evaluation Metrics
5.1.2. Hyperparameters Search
5.1.3. Comparison with Baselines
5.1.4. Training on the Large-Scale Dataset
5.1.5. Segmentation Performance and Incidence Angle
5.1.6. Visualization of the Learned Filters
5.1.7. Results on the Test Set
5.2. Oil Spill Categorization: Experimental Setting and Results
6. Large-Scale Visualization
- as input, we only specify the coordinates of an area and the time interval of interest;
- all the SAR products within the time frame that overlap at least 20% with the specified area are fetched from the Alaskan SAR Facility (ASF) repository;
- since the SAR images come from Sentinel-1 (GRDH, 10 m resolution), they are first smoothed and then downsampled by a factor of 4 to match the mode of our training data;
- all the SAR products are processed with the OFCN described in Section 3; the procedure consists of two steps, filtering and coloring, discussed below;
- for each oil spill, we generate a vector of features that includes the size of the slick and the distance from the closest oil spill detected;
- very small, isolated slicks are discarded, i.e., slicks whose surface is smaller than a given threshold and that are farther than 1.5 km from any other detected oil spill.
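The last two steps above can be sketched as a simple filtering rule. The area threshold `MIN_AREA_PX` is a placeholder, since the actual value is not stated in this excerpt:

```python
import numpy as np

MIN_AREA_PX = 50        # placeholder: the paper's area threshold is not given here
MAX_ISOLATION_KM = 1.5  # isolation distance stated in the pipeline description

def keep_slick(idx, areas, centroids_km):
    """Discard a slick only if it is both very small and isolated."""
    if areas[idx] >= MIN_AREA_PX:
        return True
    dists = [float(np.hypot(centroids_km[idx][0] - x, centroids_km[idx][1] - y))
             for j, (x, y) in enumerate(centroids_km) if j != idx]
    return bool(dists) and min(dists) <= MAX_ISOLATION_KM

areas = [10, 200, 12]
centroids = [(0.0, 0.0), (0.5, 0.0), (9.0, 9.0)]
kept = [i for i in range(len(areas)) if keep_slick(i, areas, centroids)]
```

A small slick thus survives the filter when another detection lies within 1.5 km, which suppresses isolated speckle while keeping fragmented slick clusters.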
7. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
Appendix A. Further Details of the Segmentation Model
Appendix A.1. Bilinear Upsampling Layer
Appendix A.2. Batch Normalization Layer
Appendix A.3. Squeeze-and-Excitation Layer
Appendix A.4. Receptive Field of the OFCN Architecture
Appendix B. Additional Experimental Details
Appendix B.5. Training Stats of Deep-Learning Architectures
Appendix C. Unsuccessful Approaches
Appendix C.6. Loss Functions for Class Imbalance
- Focal loss: addresses class imbalance by reshaping the standard cross-entropy loss so that it down-weights the loss assigned to well-classified examples [46]. The Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. It is defined as FL(p_t) = −α_t (1 − p_t)^γ log(p_t), where p_t is the predicted probability of the true class. We used the default parameters α = 0.25 and γ = 2 proposed in the original paper [46].
- Jaccard loss: handles class imbalance by computing the similarity between the predicted region and the ground-truth region of an object present in the image. In particular, the loss penalizes a naive algorithm that predicts every pixel of an image as background, since the intersection between the predicted and ground-truth regions would then be zero [47]. The Jaccard loss is defined as L_J = 1 − |P ∩ G| / |P ∪ G|, where P is the predicted region and G is the ground-truth region.
- Lovász-softmax loss: an extension of the Jaccard loss, which generates convex surrogates to submodular loss functions, including the Lovász hinge. We refer to the original paper for the formal definition [48]. The official TensorFlow implementation (https://github.com/bermanmaxim/LovaszSoftmax) of the Lovász-softmax loss was used in the experiments.
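For concreteness, the two closed-form losses above can be written in a few lines of numpy. This is a per-pixel sketch on soft predictions; the TensorFlow versions used for training differ only in bookkeeping, and the α, γ defaults follow [46]:

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-7):
    """Focal loss [46]: cross-entropy down-weighted for well-classified pixels."""
    p = np.clip(p, eps, 1.0 - eps)
    pt = np.where(y == 1, p, 1.0 - p)          # probability of the true class
    at = np.where(y == 1, alpha, 1.0 - alpha)  # class-balancing weight
    return float(np.mean(-at * (1.0 - pt) ** gamma * np.log(pt)))

def jaccard_loss(p, y, eps=1e-7):
    """Soft Jaccard loss [47]: 1 - intersection / union on soft predictions."""
    inter = float(np.sum(p * y))
    union = float(np.sum(p) + np.sum(y) - inter)
    return 1.0 - inter / (union + eps)

y = np.array([0.0, 0.0, 1.0, 1.0])
p_good = np.array([0.1, 0.2, 0.8, 0.9])
p_bad = np.array([0.9, 0.8, 0.2, 0.1])
```

Note how the `(1 - pt)^gamma` factor nearly silences confident correct pixels, while the Jaccard loss ignores the true-negative background entirely.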
Loss | BN | SE | L2 Reg. | Dropout | LR | F1
---|---|---|---|---|---|---
JAC | True | True | | 0.1 | | 0.667
FOC | True | True | | 0.0 | | 0.664
LOV | True | True | | 0.0 | | 0.597
Appendix C.7. Conditional Random Field
Appendix C.8. Multi-Head Classification Network
Appendix C.9. Gradient Descent Optimizers
References
- Solberg, A.H.S.; Storvik, G.; Solberg, R.; Volden, E. Automatic detection of oil spills in ERS SAR images. IEEE Trans. Geosci. Remote Sens. 1999, 37, 1916–1924. [Google Scholar] [CrossRef] [Green Version]
- Brekke, C.; Solberg, A.H. Oil spill detection by satellite remote sensing. Remote Sens. Environ. 2005, 95, 1–13. [Google Scholar] [CrossRef]
- Bakke, T.; Green, A.M.V.; Iversen, P.E. Offshore Environmental Effects Monitoring in Norway—Regulations, Results and Developments. In Produced Water: Environmental Risks and Advances in Mitigation Technologies; Lee, K., Neff, J., Eds.; Springer: New York, NY, USA, 2011; pp. 481–491. [Google Scholar]
- NIVA. Environmental Effects of Offshore Produced Water Discharges Evaluated for the Barents Sea; Norwegian Institute for Water Research (Norsk institutt for vannforskning): Oslo, Norway, 2019. [Google Scholar]
- Skrunes, S.; Johansson, A.M.; Brekke, C. Synthetic Aperture Radar Remote Sensing of Operational Platform Produced Water Releases. Remote Sens. 2019, 11, 2882. [Google Scholar] [CrossRef] [Green Version]
- Bourbigot, M.; Johnsen, H.; Piantanida, R. Sentinel-1 Product Definition. Technical Report S1-RS-MDA-52-7440, Issue 2/7, MPC-S1. 2016. Available online: https://sentinel.esa.int/documents/247904/1877131/Sentinel-1-Product-Definition (accessed on 14 June 2012).
- Skrunes, S.; Brekke, C.; Eltoft, T. Characterization of Marine Surface Slicks by Radarsat-2 Multipolarization Features. IEEE Trans. Geosci. Remote Sens. 2014, 52, 5302–5319. [Google Scholar] [CrossRef] [Green Version]
- Salberg, A.; Larsen, S.O. Classification of Ocean Surface Slicks in Simulated Hybrid-Polarimetric SAR Data. IEEE Trans. Geosci. Remote Sens. 2018, 56, 7062–7073. [Google Scholar] [CrossRef]
- Singha, S.; Bellerby, T.J.; Trieschmann, O. Satellite Oil Spill Detection Using Artificial Neural Networks. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2013, 6, 2355–2363. [Google Scholar] [CrossRef]
- Krestenitis, M.; Orfanidis, G.; Ioannidis, K.; Avgerinakis, K.; Vrochidis, S.; Kompatsiaris, I. Oil Spill Identification from Satellite Images Using Deep Neural Networks. Remote Sens. 2019, 11, 1762. [Google Scholar] [CrossRef] [Green Version]
- Topouzelis, K.; Karathanassi, V.; Pavlakis, P.; Rokos, D. Detection and discrimination between oil spills and look-alike phenomena through neural networks. ISPRS J. Photogramm. Remote Sens. 2007, 62, 264–270. [Google Scholar] [CrossRef]
- Alpers, W.; Holt, B.; Zeng, K. Oil spill detection by imaging radars: Challenges and pitfalls. Remote Sens. Environ. 2017, 201, 133–147. [Google Scholar] [CrossRef]
- Gade, M.; Alpers, W.; Hühnerfuss, H.; Masuko, H.; Kobayashi, T. Imaging of biogenic and anthropogenic ocean surface films by the multifrequency/multipolarization SIR-C/X-SAR. J. Geophys. Res. 1998, 103, 18851–18866. [Google Scholar] [CrossRef]
- Del Frate, F.; Petrocchi, A.; Lichtenegger, J.; Calabresi, G. Neural networks for oil spill detection using ERS-SAR data. IEEE Trans. Geosci. Remote Sens. 2000, 38, 2282–2287. [Google Scholar] [CrossRef] [Green Version]
- Garcia-Pineda, O.; MacDonald, I.R.; Li, X.; Jackson, C.R.; Pichel, W.G. Oil Spill Mapping and Measurement in the Gulf of Mexico With Textural Classifier Neural Network Algorithm (TCNNA). IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2013, 6, 2517–2525. [Google Scholar] [CrossRef]
- Zhu, X.X.; Tuia, D.; Mou, L.; Xia, G.S.; Zhang, L.; Xu, F.; Fraundorfer, F. Deep learning in remote sensing: A comprehensive review and list of resources. IEEE Geosci. Remote Sens. Mag. 2017, 5, 8–36. [Google Scholar] [CrossRef] [Green Version]
- Krestenitis, M.; Orfanidis, G.; Ioannidis, K.; Avgerinakis, K.; Vrochidis, S.; Kompatsiaris, I. Early Identification of Oil Spills in Satellite Images Using Deep CNNs. In MultiMedia Modeling; Kompatsiaris, I., Huet, B., Mezaris, V., Gurrin, C., Cheng, W.H., Vrochidis, S., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 424–435. [Google Scholar]
- Cantorna, D.; Dafonte, C.; Iglesias, A.; Arcay, B. Oil spill segmentation in SAR images using convolutional neural networks. A comparative analysis with clustering and logistic regression algorithms. Appl. Soft Comput. 2019, 84, 105716. [Google Scholar] [CrossRef]
- Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
- Zheng, A.; Casari, A. Feature Engineering for Machine Learning: Principles and Techniques for Data Scientists; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2018. [Google Scholar]
- Bengio, Y.; Courville, A.C.; Vincent, P. Unsupervised feature learning and deep learning: A review and new perspectives. CoRR 2012, 1, 2012. [Google Scholar]
- Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 801–818. [Google Scholar]
- Zhao, B.; Feng, J.; Wu, X.; Yan, S. A survey on deep learning-based fine-grained object classification and semantic segmentation. Int. J. Autom. Comput. 2017, 14, 119–135. [Google Scholar] [CrossRef]
- Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2015), Munich, Germany, 5–9 October 2015. [Google Scholar]
- Ghosh, A.; Ehrlich, M.; Shah, S.; Davis, L.S.; Chellappa, R. Stacked U-Nets for Ground Material Segmentation in Remote Sensing Imagery. In Proceedings of the CVPR Workshops, Salt Lake City, UT, USA, 18–23 June 2018; pp. 257–261. [Google Scholar]
- Li, R.; Liu, W.; Yang, L.; Sun, S.; Hu, W.; Zhang, F.; Li, W. Deepunet: A deep fully convolutional network for pixel-level sea-land segmentation. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2018, 11, 3954–3962. [Google Scholar] [CrossRef] [Green Version]
- Bianchi, F.M.; Grahn, J.; Eckerstorfer, M.; Malnes, E.; Vickers, H. Snow avalanche segmentation in SAR images with Fully Convolutional Neural Networks. arXiv 2019, arXiv:1910.05411. [Google Scholar]
- Dumoulin, V.; Visin, F. A guide to convolution arithmetic for deep learning. arXiv 2016, arXiv:1603.07285. [Google Scholar]
- Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv 2015, arXiv:1502.03167. [Google Scholar]
- Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141. [Google Scholar]
- Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958. [Google Scholar]
- Ding, J.; Chen, B.; Liu, H.; Huang, M. Convolutional neural network with data augmentation for SAR target recognition. IEEE Geosci. Remote Sens. Lett. 2016, 13, 364–368. [Google Scholar] [CrossRef]
- Karras, T.; Aila, T.; Laine, S.; Lehtinen, J. Progressive Growing of GANs for Improved Quality, Stability, and Variation. In Proceedings of the International Conference on Learning Representations, Vancouver, BC, Canada, 30 April–3 May 2018. [Google Scholar]
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
- Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1251–1258. [Google Scholar]
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
- Landau, S. A Handbook of Statistical Analyses Using SPSS; CRC: Boca Raton, FL, USA, 2004. [Google Scholar]
- Holt, B. Chapter 2. SAR imaging of the ocean surface. In Synthetic Aperture Radar Marine User’s Manual (NOAA/NESDIS); Jackson, C.R., Apel, J.R., Eds.; U.S. Department of Commerce: Washington, DC, USA, 2004; pp. 25–80. [Google Scholar]
- Minchew, B.; Jones, C.E.; Holt, B. Polarimetric Analysis of Backscatter From the Deepwater Horizon Oil Spill Using L-Band Synthetic Aperture Radar. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3812–3830. [Google Scholar] [CrossRef]
- Nguyen, A.; Dosovitskiy, A.; Yosinski, J.; Brox, T.; Clune, J. Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. In Proceedings of the Advances in Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; pp. 3387–3395. [Google Scholar]
- Alpers, W.; Hühnerfuss, H. The damping of ocean waves by surface films: A new look at an old problem. J. Geophys. Res. 1989, 94, 6251–6265. [Google Scholar] [CrossRef]
- Singh, K.P.; Gray, A.L.; Hawkins, R.K.; O’Neil, R.A. The Influence of Surface Oil on C-and Ku-Band Ocean Backscatter. IEEE Trans. Geosci. Remote Sens. 1986, GE-24, 738–744. [Google Scholar] [CrossRef]
- Girard-Ardhuin, F.; Mercier, G.; Collard, F.; Garello, R. Operational Oil-Slick Characterization by SAR Imagery and Synergistic Data. IEEE J. Ocean. Eng. 2005, 30, 487–495. [Google Scholar] [CrossRef] [Green Version]
- Lin, G.; Shen, C.; Van Den Hengel, A.; Reid, I. Efficient piecewise training of deep structured models for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 3194–3203. [Google Scholar]
- Santurkar, S.; Tsipras, D.; Ilyas, A.; Madry, A. How does batch normalization help optimization? In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 3–8 December 2018; pp. 2483–2493. [Google Scholar]
- Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar]
- Rahman, M.A.; Wang, Y. Optimizing intersection-over-union in deep neural networks for image segmentation. In International Symposium on Visual Computing; Springer: Berlin/Heidelberg, Germany, 2016; pp. 234–244. [Google Scholar]
- Berman, M.; Rannen Triki, A.; Blaschko, M.B. The lovász-softmax loss: A tractable surrogate for the optimization of the intersection-over-union measure in neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4413–4421. [Google Scholar]
- Krähenbühl, P.; Koltun, V. Efficient inference in fully connected crfs with gaussian edge potentials. In Proceedings of the Advances in Neural Information Processing Systems, Granada, Spain, 12–17 December 2011; pp. 109–117. [Google Scholar]
- Krestenitis, M.; Orfanidis, G.; Ioannidis, K.; Avgerinakis, K.; Vrochidis, S.; Kompatsiaris, I. Early Identification of Oil Spills in Satellite Images Using Deep CNNs. In Proceedings of the International Conference on Multimedia Modeling, Thessaloniki, Greece, 8–11 January 2019; pp. 424–435. [Google Scholar]
- Ruder, S. An overview of gradient descent optimization algorithms. arXiv 2016, arXiv:1609.04747. [Google Scholar]
Category | Values | Values Distribution
---|---|---
Patch shape | {False, True} | {48.2%, 51.8%}
Linear shape | {False, True} | {61.8%, 38.2%}
Angular shape | {False, True} | {90.4%, 9.6%}
Weathered texture | {False, True} | {71.3%, 28.7%}
Tailed shape | {False, True} | {83.2%, 16.8%}
Droplets texture | {False, True} | {98.3%, 1.7%}
Winding texture | {False, True} | {92.5%, 7.5%}
Feathered texture | {False, True} | {97.7%, 2.3%}
Shape outline | {Fragmented, Continuous} | {78.4%, 21.6%}
Texture | {Rough, Smooth, Strong, Variable} | {29.2%, 14.3%, 5.1%, 51.2%}
Contrast | {Strong, Weak, Variable} | {23%, 52.8%, 24.1%}
Edge | {Sharp, Diffuse, Variable} | {33.7%, 13.3%, 52.9%}
Original Data | Dataset D1 | Dataset D2 | Test Data
---|---|---|---
713 SAR prod. | 1843 tr. patches | 149,856 tr. patches | 3 SAR products
4 years period | 150 val. patches | 37,465 val. patches | Details in Table 3
2,093 oil events | 100 test patches | |
227,964 oil spills | | |
ID | Image Size | # Oil Spills | # Oil Pixels | # Non-Oil Pixels |
---|---|---|---|---|
T1 | 9836 × 14,894 | 2 | 552 (0.00067%) | 81,877,550 |
T2 | 21,738 | 11 | 19,336 (0.017%) | 112,071,385 |
T3 | 10,602 × 21,471 | 36 | 22,793 (0.018%) | 127,436,689 |
ID | BN | SE | L2 Reg. | Dropout | LR | CW | Val F1
---|---|---|---|---|---|---|---
C1 | True | True | 0.0 | 0.1 | | 2 | 0.731
C2 | False | True | | 0.1 | | 3 | 0.723
C3 | True | False | 0.0 | 0.0 | | 2 | 0.708
Model | # Params. | Tr Time (Hours) | Tr Acc. | Tr Loss | Val F1
---|---|---|---|---|---
U-net | 7,760,069 | 10.1 | 0.984 | 0.058 | 0.741 |
DeepLabV3+ | 41,049,697 | 15.2 | 0.987 | 0.039 | 0.765 |
OFCN | 7,873,729 | 10.9 | 0.988 | 0.038 | 0.775 |
ID | Epochs | Time (Days) | Tr Acc. | Tr Loss | Val F1
---|---|---|---|---|---
C1 | 400 | 6.8 | 0.995 | 0.016 | 0.857 |
C2 | 400 | 6.9 | 0.990 | 0.047 | 0.750 |
C3 | 400 | 6.2 | 0.993 | 0.018 | 0.802 |
C1-2ST | 400 + 400 | 9.4 | 0.996 | 0.014 | 0.861 |
C1-2ST-Long | 500 + 3000 | 54.3 | 0.997 | 0.009 | 0.892 |
ID | F1 | IoU | TP | FP | FN |
---|---|---|---|---|---|
T1 | 0.73 | 0.81 | 2 | 7 | 0 |
T2 | 0.44 | 0.36 | 11 | 59 | 0 |
T3 | 0.83 | 0.52 | 31 | 14 | 5 |
Category | Accuracy | F1 |
---|---|---|
Patch shape | 80.0% | 0.80 |
Linear shape | 76.8% | 0.77 |
Angular shape | 93.2% | 0.91 |
Weathered texture | 70.4% | 0.64 |
Tailed shape | 78.4% | 0.73
Droplets texture | 98.8% | 0.98 |
Winding texture | 94.4% | 0.92 |
Feathered texture | 97.2% | 0.96 |
Shape outline | 93.8% | 0.91 |
Texture | 55.6% | 0.49 |
Contrast | 61.6% | 0.59 |
Edge | 61.6% | 0.58 |
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Bianchi, F.M.; Espeseth, M.M.; Borch, N. Large-Scale Detection and Categorization of Oil Spills from SAR Images with Deep Learning. Remote Sens. 2020, 12, 2260. https://doi.org/10.3390/rs12142260