Generation of high-resolution spectral and broadband surface albedo products based on Sentinel-2 MSI measurements, and Super-Resolution Restoration from single and repeat EO images

Jan-Peter Muller, j.muller@ucl.ac.uk
Head, Imaging Group, UCL-MSSL

VH-RODA Workshop 2021 | 20-23 April 2021

Generation of high-resolution spectral and broadband surface albedo products based on Sentinel-2 MSI measurements
Jan-Peter Muller, Rui Song and Alistair Francis, UCL-MSSL*

*Work supported by ESA under science in society 'Generation of high-resolution spectral and broadband surface albedo products based on Sentinel-2 MSI measurements (HR-AlbedoMap)'
HR-AlbedoMap retrieval: Overall objectives, previous work and context

• Overall objectives for the development of high-resolution land surface albedo retrieval using Sentinel-2 MSI:
   ➢ Generate 10 m/20 m spectral & broadband albedo of the Earth's land surface using Sentinel-2 + MODIS/VIIRS albedo
   ➢ Improve upon existing S2 cloud masks using novel deep learning/AI techniques
   ➢ Improve upon the existing S2 atmospheric correction (Sen2Cor) using the new UCL SIAC method (Feng et al., 2020)
   ➢ Apply the EU Copernicus Land Service GBOV method to provide spectral and broadband land surface albedo, using optimal estimation to fill in gaps due to persistent cloud cover
   ➢ Validate results using broadband (shortwave) albedos from GBOV and spectral BRDFs from RADCALNET (courtesy CNES)
• Previous work on Ground-Based Observations for Validation (GBOV) of Copernicus Global Land Products:
   ➢ Albedo values upscaled from 20+ tower sites since 2012 were produced to compare against MODIS, CGLS and MISR albedo products (https://land.copernicus.eu/global/gbov/)
1. Song, R.; Muller, J.-P.; Kharbouche, S.; Woodgate, W. Intercomparison of Surface Albedo Retrievals from MISR, MODIS, CGLS Using Tower and Upscaled Tower Measurements. Remote Sens. 2019, 11, 644.
2. Song, R.; Muller, J.-P.; Kharbouche, S.; Yin, F.; Woodgate, W.; Kitchen, M.; Roland, M.; Arriga, N.; Meyer, W.; Koerber, G.; Bonal, D.; Burban, B.; Knohl, A.; Siebicke, L.; Buysse, P.; Loubet, B.; Leonardo, M.; Lerebourg, C.; Gobron, N. Validation of Space-Based Albedo Products from Upscaled Tower-Based Measurements Over Heterogeneous and Homogeneous Landscapes. Remote Sens. 2020, 12, 833.
HR-AlbedoMap retrieval: processing chain from Level-1C to final spectral albedo

[Flowchart] The Sentinel-2 L1C ToA-BRF is cloud-masked by DeepLab v3+ (trained on a dedicated S2 training set) and atmospherically corrected with SIAC, driven by CAMS and the MODIS/VIIRS BRDF, to give surface BRFs (BoA-BRF + sigma). These are then inverted against the MODIS/VIIRS prior to produce the HR albedo outputs: spectral, broadband and RGB products, each with a per-pixel sigma.
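The inversion step fuses the high-resolution surface BRFs with the coarse MODIS/VIIRS prior. As a minimal illustration of the optimal-estimation idea only (not the project's actual implementation), the sketch below solves a linear Gaussian inverse problem in which a prior state x_b and observations y constrain the retrieved state; the observation operator H and the covariances R and B are placeholders.

```python
import numpy as np

def optimal_estimation(y, H, R, x_b, B):
    """Linear Gaussian optimal estimation (maximum a posteriori solution).

    y   : observation vector (e.g. cloud-free S2 surface BRFs)
    H   : observation operator mapping state to observations
    R   : observation-error covariance
    x_b : prior state (e.g. from the MODIS/VIIRS BRDF/albedo product)
    B   : prior-error covariance
    Returns the posterior state and its covariance.
    """
    R_inv = np.linalg.inv(R)
    B_inv = np.linalg.inv(B)
    # Posterior covariance: (H^T R^-1 H + B^-1)^-1
    S = np.linalg.inv(H.T @ R_inv @ H + B_inv)
    # Posterior mean: x_b + S H^T R^-1 (y - H x_b)
    x_hat = x_b + S @ H.T @ R_inv @ (y - H @ x_b)
    return x_hat, S

# Toy example: three observations constraining a two-element state.
H = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
y = np.array([0.21, 0.18, 0.20])
R = np.diag([0.01**2] * 3)          # observation uncertainty (sigma = 0.01)
x_b = np.array([0.25, 0.15])        # prior from the coarse-resolution product
B = np.diag([0.05**2] * 2)          # prior uncertainty (sigma = 0.05)
x_hat, S = optimal_estimation(y, H, R, x_b, B)
print(x_hat, np.sqrt(np.diag(S)))   # retrieved state and its sigma
```

The posterior sigma returned here corresponds conceptually to the "+sigma" layers attached to the spectral, broadband and RGB outputs in the chain above.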
AI cloud detection: Schema

[Figure: schema of the AI cloud-detection approach]
A. Francis et al., in preparation
AI cloud detection: Sentinel-2 dataset
Alistair Francis, in collaboration with John Mrziglod (recent ESA YGT, now at WFP)

➢ Motivated by the lack of labelled Sentinel-2 cloud data
➢ Emphasis placed on the number of scenes, not the number of pixels
➢ 513 subscenes of 1022 × 1022 pixels, taken from a completely random selection of the 2018 Sentinel-2 L1C catalogue
➢ Fast and accurate annotations made using IRIS (see next slides)
➢ Shadows also included where possible (424/513 subscenes)
➢ 95% agreement between annotators on a 50-scene validation set
➢ >1400 unique views on Zenodo so far, and many more downloads (some, we think, not legitimate)

    https://zenodo.org/record/4172871
AI cloud detection: Sentinel-2 dataset

[Screenshots of the IRIS annotation tool]
https://github.com/ESA-PhiLab/iris
AI cloud detection: Training and Validation

➢ Dataset split into 40% training, 10% validation, 50% testing
➢ Comparisons made between the labelled ground truth and the three models
➢ DeepLab v3+ is substantially better overall and for specific surface types (e.g. Snow/Ice, as shown)
➢ CloudFCN* performs roughly as well as it did on Landsat 8; however, the lack of pre-trained weights and fewer layers perhaps lead to poorer performance than DeepLab v3+

Total test set (257 subscenes):
  Model              Accuracy   F1-score
  L1C product mask   83.1%      83.6%
  CloudFCN           91.4%      91.6%
  DeepLab v3+        94.0%      94.3%

Snow/Ice subset (41 subscenes):
  Model              Accuracy   F1-score
  L1C product mask   73.8%      75.7%
  CloudFCN           81.4%      82.0%
  DeepLab v3+        83.1%      83.6%

* Francis, A.; Sidiropoulos, P.; Muller, J.-P. CloudFCN: Accurate and Robust Cloud Detection for Satellite Imagery with Deep Learning. Remote Sens. 2019, 11, 2312.
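For reference, per-pixel accuracy and F1 against the labelled masks can be computed as in the sketch below. This is a generic illustration using scikit-learn, not the exact evaluation code from the study; the mask arrays and the 0/1 class encoding are assumptions.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def evaluate_cloud_mask(predicted, labelled):
    """Compare a predicted binary cloud mask against a labelled ground truth.

    predicted, labelled : 2-D 0/1 arrays of the same shape
                          (1 = cloud, 0 = clear), e.g. 1022 x 1022 subscenes.
    """
    y_pred = np.asarray(predicted).ravel()
    y_true = np.asarray(labelled).ravel()
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
    }

# Toy example: a "prediction" that disagrees with the "truth" on 5% of pixels.
rng = np.random.default_rng(0)
truth = rng.integers(0, 2, size=(1022, 1022))
pred = truth.copy()
flip = rng.random(truth.shape) < 0.05      # corrupt 5% of pixels
pred[flip] = 1 - pred[flip]
print(evaluate_cloud_mask(pred, truth))    # ~95% accuracy expected
```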
HR albedo retrieval processing chain

[Flowchart] Inputs: Sentinel-2 TOA BRF, MODIS BRDF, CAMS, the 500-m S2 surface BRF and the MODIS prior. The TOA BRF is cloud-masked by the AI cloud detection (S2 cloud mask) and atmospherically corrected with SIAC to give the S2 surface BRF. If the masked surface BRF retains enough cloud-free pixels, it is spatially resampled (with gap-filling) to the 20-m S2 surface BRF. The 500-m S2 surface BRF and the MODIS prior feed the albedo/BRF calculation; reprojection and aggregation build an n-D albedo/BRF matrix, which drives the albedo inversion to yield the 20-m albedo. EEA processing of the 20-m surface BRF provides S2 endmember abundances, which are used to downscale the 20-m albedo to the final 10-m albedo.

EEA: Endmembers Extraction Algorithms, based on Winter, M. E., "N-FINDR: an algorithm for fast autonomous spectral end-member determination in hyperspectral data", Imaging Spectrometry V, Denver, CO, USA, 1999, vol. 3753, pp. 266-275.

R. Song et al., in preparation.
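The "reprojection & aggregation" step brings the fine-resolution surface BRF onto the coarse prior grid. A minimal sketch of masked block-averaging, assuming reprojection has already been done and the pixel sizes nest exactly (e.g. 20 m into 500 m via 25 × 25 blocks), is:

```python
import numpy as np

def block_average(brf, mask, block):
    """Aggregate a fine-resolution BRF band to a coarser grid.

    brf   : 2-D array of surface reflectance at fine resolution
    mask  : 2-D boolean array, True where the pixel is cloud-free/valid
    block : number of fine pixels per coarse pixel along each axis
            (e.g. 25 to go from 20 m to 500 m)
    Returns the coarse BRF and the fraction of valid pixels per coarse cell.
    """
    h, w = brf.shape
    H, W = h // block, w // block
    brf = brf[:H * block, :W * block].reshape(H, block, W, block)
    mask = mask[:H * block, :W * block].reshape(H, block, W, block)
    valid = mask.sum(axis=(1, 3))
    total = np.where(mask, brf, 0.0).sum(axis=(1, 3))
    coarse = np.where(valid > 0, total / np.maximum(valid, 1), np.nan)
    frac = valid / float(block * block)
    return coarse, frac

# Only coarse cells with a sufficient cloud-free fraction (frac) would be
# retained, echoing the "enough cloud-free pixels?" decision in the chain.
```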
Endmember extraction analysis

[Figure: S2 10 km × 10 km area at Hainich, Germany (RGB BRF)]
Number of pixels with abundance > 0.7:
  Type A: 34524
  Type B: 1353
  Type C: 17
  Type D: 16
500-m BRF, DHR and BHR are calculated at the S2 solar and viewing geometries, using kernels from MCD43A1 or VNP43A1.

[Figure: 665 nm BRF, 665 nm DHR and 665 nm BHR panels]
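MCD43A1/VNP43A1 provide per-band kernel weights (f_iso, f_vol, f_geo) for the kernel-driven RossThick-LiSparseR BRDF model, so the BRF at any S2 sun/view geometry follows by evaluating the kernels at that geometry; DHR and BHR then follow from the corresponding kernel integrals. The sketch below uses the standard published kernel formulae (Wanner et al., 1995; Lucht et al., 2000) with the usual MODIS constants h/b = 2, b/r = 1; it is an illustration, not the project's own code.

```python
import numpy as np

def ross_thick(sza, vza, raa):
    """RossThick volumetric kernel (all angles in radians)."""
    cos_xi = np.cos(sza) * np.cos(vza) + np.sin(sza) * np.sin(vza) * np.cos(raa)
    xi = np.arccos(np.clip(cos_xi, -1.0, 1.0))
    return ((np.pi / 2 - xi) * np.cos(xi) + np.sin(xi)) / (np.cos(sza) + np.cos(vza)) - np.pi / 4

def li_sparse_r(sza, vza, raa, hb=2.0, br=1.0):
    """LiSparse-Reciprocal geometric kernel (all angles in radians)."""
    # Equivalent angles for the spheroid crowns (b/r ratio)
    szap = np.arctan(br * np.tan(sza))
    vzap = np.arctan(br * np.tan(vza))
    cos_xi = np.cos(szap) * np.cos(vzap) + np.sin(szap) * np.sin(vzap) * np.cos(raa)
    sec_s, sec_v = 1.0 / np.cos(szap), 1.0 / np.cos(vzap)
    d2 = np.tan(szap) ** 2 + np.tan(vzap) ** 2 - 2 * np.tan(szap) * np.tan(vzap) * np.cos(raa)
    cos_t = hb * np.sqrt(d2 + (np.tan(szap) * np.tan(vzap) * np.sin(raa)) ** 2) / (sec_s + sec_v)
    t = np.arccos(np.clip(cos_t, -1.0, 1.0))
    overlap = (t - np.sin(t) * np.cos(t)) * (sec_s + sec_v) / np.pi
    return overlap - sec_s - sec_v + 0.5 * (1 + cos_xi) * sec_s * sec_v

def brf_from_kernels(f_iso, f_vol, f_geo, sza, vza, raa):
    """BRF = f_iso + f_vol * K_RossThick + f_geo * K_LiSparseR."""
    return f_iso + f_vol * ross_thick(sza, vza, raa) + f_geo * li_sparse_r(sza, vza, raa)

# Example: hypothetical red-band kernel weights evaluated at an S2 geometry.
sza, vza, raa = np.radians(35.0), np.radians(5.0), np.radians(120.0)
print(brf_from_kernels(0.05, 0.02, 0.01, sza, vza, raa))
```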
HR albedo is retrieved from an endmember-based albedo-to-BRF regression model.

[Figure: S2 10 km × 10 km area at Hainich, Germany, at 10 m (RGB BRF); 10 km × 10 km area, 665 nm spectral albedo at 10 m]
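As a toy illustration of such a regression-based downscaling (the actual model is described in R. Song et al., in preparation, so the functional form and names here are assumptions): fit, at coarse resolution, a linear mapping from BRF and endmember abundances to albedo, then apply it to the fine-resolution BRF and abundances.

```python
import numpy as np

def fit_albedo_to_brf(brf_coarse, abundance_coarse, albedo_coarse):
    """Fit a linear model albedo ~ [BRF, abundances, 1] at coarse resolution.

    brf_coarse       : (N,) coarse-pixel BRF for one band
    abundance_coarse : (N, K) endmember abundances per coarse pixel
    albedo_coarse    : (N,) coarse-pixel albedo (e.g. from the kernel model)
    """
    X = np.column_stack([brf_coarse, abundance_coarse, np.ones(len(brf_coarse))])
    coeffs, *_ = np.linalg.lstsq(X, albedo_coarse, rcond=None)
    return coeffs

def predict_albedo(coeffs, brf_fine, abundance_fine):
    """Apply the fitted model to fine-resolution (10/20 m) pixels."""
    X = np.column_stack([brf_fine, abundance_fine, np.ones(len(brf_fine))])
    return X @ coeffs

# Toy usage with random numbers standing in for real BRF/abundance/albedo data.
rng = np.random.default_rng(1)
brf_c, ab_c = rng.uniform(0.02, 0.4, 400), rng.dirichlet(np.ones(4), 400)
alb_c = 0.9 * brf_c + ab_c @ np.array([0.01, 0.02, 0.0, -0.01]) + 0.005
coeffs = fit_albedo_to_brf(brf_c, ab_c, alb_c)
alb_fine = predict_albedo(coeffs, rng.uniform(0.02, 0.4, 10), rng.dirichlet(np.ones(4), 10))
```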
FLUXNET SW-BHR vs S2-BHR at the footprint scale

[Figure: 10 km × 10 km panels of the 10-m resolution albedo (R,G,B composite) and the 20-m resolution shortwave broadband albedo, with a histogram of S2 albedo within the tower FoV on 24 July 2018]
Tower-measured BHR is 0.168 ± 0.0016, using tower albedometer measurements over a 30-day window.
N.B. The 500-m projected FoV is shown; the actual calculated footprint is 377 m.
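The tower BHR quoted above is derived from broadband albedometer (upwelling/downwelling shortwave) measurements. A minimal sketch of one common way to estimate a 30-day BHR and a simple uncertainty (a generic illustration; the GBOV processing details and variable names here are assumptions):

```python
import numpy as np

def tower_bhr(sw_up, sw_down, min_down=50.0):
    """Estimate the bi-hemispherical reflectance (BHR) from tower fluxes.

    sw_up, sw_down : 1-D arrays of upwelling/downwelling shortwave flux (W m-2)
                     sampled over e.g. a 30-day window
    min_down       : discard samples with very low incoming flux (night, dusk)
    Returns the flux-weighted BHR and a simple standard-error estimate.
    """
    sw_up, sw_down = np.asarray(sw_up, float), np.asarray(sw_down, float)
    ok = sw_down > min_down
    # Flux-weighted ratio: total reflected over total incoming shortwave
    bhr = sw_up[ok].sum() / sw_down[ok].sum()
    # Spread of the instantaneous albedo samples as a crude uncertainty
    samples = sw_up[ok] / sw_down[ok]
    sigma = samples.std(ddof=1) / np.sqrt(ok.sum())
    return bhr, sigma
```

The resulting value would then be compared against the mean of the S2 albedo pixels falling inside the calculated tower footprint (377 m here), as shown in the histogram.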
SURFRAD SW-BHR vs S2-BHR at the footprint scale

[Figure: 10 km × 10 km panels of the 10-m resolution spectral albedo (R,G,B composite) and the 20-m resolution shortwave broadband albedo]
N.B. The tower footprint in S2 is ≈230 m.
S2 spectral albedo assessment

RADCALNET BHR courtesy of CNES
[Figure: 10-m resolution spectral albedo (R,G,B composite) and 20-m resolution shortwave broadband albedo]
Summary and Future work
▪ An automated processing system has been developed for the fusion of Sentinel-2/MSI BRFs with the common MODIS/VIIRS BRDF/albedo spectral bands, to generate full Sentinel-2 tiles of pixels consisting of:
   ▪ 4 × 10-m (BHR & DHR) products
   ▪ 3 × 20-m spectral albedo (BHR & DHR) products
   ▪ 3 × 20-m broadband products (VIS, NIR & SW)
▪ The processing chain has been developed for fully automated processing from S2 + MODIS/VIIRS.
▪ Work is proceeding with F-TEP and FS-TEP (via the ESA NoR) towards a future service to generate "on demand" albedo products.
▪ We are working with 7 end users in the agriculture and forestry area for alpha testing and assessment.
Super-Resolution Restoration from single and repeat EO images
Yu Tao and Jan-Peter Muller*, UCL-MSSL

* Work supported by UKSA-CEOI SRR-EO, UKSA-Aurora and UCL Enterprise (SpaceJump)
Super-resolution Restoration - motivation
• Higher spatial resolution imaging data are desirable in many scientific and commercial applications of Earth Observation satellite data.
• Given the physical constraints of the imaging instruments, we always need to trade off spatial resolution against launch mass, usable swath width, and the telecommunications bandwidth for transmitting data back to ground stations.
• One solution to this conundrum is super-resolution restoration (SRR).
• SRR refers to the process of restoring higher-resolution image detail from a single lower-resolution image or a sequence of lower-resolution images.
• SRR can be achieved by combining the non-redundant information contained within multiple LR inputs, via a deep learning process, or via a combination of the two; a minimal multi-image example is sketched below. We explore the combined approach here over the CEOS-WGCV geometric calibration site in Inner Mongolia, China.
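As a sketch of the classical multi-image idea only (sub-pixel registration followed by accumulation onto a finer grid; this is not any of the UCL algorithms, and the shifts are assumed known here, whereas in practice they come from image matching or optical flow):

```python
import numpy as np

def shift_and_add_srr(frames, shifts, scale):
    """Naive multi-image SRR by sub-pixel shift-and-add.

    frames : list of 2-D low-resolution images of identical shape
    shifts : list of (dy, dx) sub-pixel offsets of each frame, in LR pixels,
             relative to the first frame (assumed known from registration)
    scale  : integer upsampling factor of the output grid
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(acc)
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, shifts):
        # Place each LR sample at its sub-pixel position on the HR grid
        hy = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
        hx = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(acc, (hy, hx), frame)
        np.add.at(weight, (hy, hx), 1.0)
    hr = np.divide(acc, weight, out=np.zeros_like(acc), where=weight > 0)
    return hr  # empty HR cells (weight == 0) would normally be interpolated

# Toy usage: four LR frames of the same scene with known quarter-pixel shifts.
rng = np.random.default_rng(2)
scene = rng.random((64, 64))
frames = [scene, scene, scene, scene]           # stand-ins for real LR frames
shifts = [(0.0, 0.0), (0.25, 0.0), (0.0, 0.25), (0.25, 0.25)]
hr = shift_and_add_srr(frames, shifts, scale=2)
```

The gain comes from the non-redundant information carried by the sub-pixel offsets; deep-learning SRR instead hallucinates plausible detail from learned priors, and the UCL hybrid methods combine both sources.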
Super-resolution Restoration – UCL algorithms
▪ The Imaging Group (UCL-MSSL) has an 8-year track record of developing state-of-the-art SRR techniques applied to Earth and Mars observations.
▪ The developed techniques include traditional photogrammetric and stochastic approaches, deep-learning based approaches, and novel approaches combining the two. These include:
     ▪ The Gotcha Partial-Differential-Equation based Total Variation SRR (GPT-SRR) system, which exploits multi-angle information from repeat-pass observations (Tao & Muller, PSS, 2015).
     ▪ Multi-Angle Gotcha image SRR with GAN (MAGiGAN; Tao & Muller, SPIE, 2018 & RS, 2019) for point-and-stare EO images or multi-angle observations (e.g., MISR).
     ▪ The state-of-the-art Multi-scale Adaptive-weighted Residual Super-resolution Generative Adversarial Network (MARSGAN; Tao & Muller, RS, 2021a), a deep residual network for single-image SRR.
     ▪ Optical-flow and Total-variation based image SRR with MARSGAN (OpTiGAN; Tao & Muller, RS, 2021c) for continuous EO image sequences or "point-and-stare" video frames.
Super-resolution Restoration – UCL algorithms
Our SRR techniques have been demonstrated on multiple EO datasets spanning a wide range of spatial resolutions (single-band, colour and multi-spectral), including 300-m Sentinel-3 OLCI, 275-m MISR, 10-m Sentinel-2, 4-m and 0.75-m UrtheCast Deimos-2, 1.1-m SSTL Carbonite-2 video, 70-cm SkySat, and 31-cm WorldView-3.

Much photographic SRR software only increases the passive image resolution, inventing visually pleasing but fake textures and creating artefacts. Our SRR techniques improve the effective image resolution and do not invent artefacts.

[Figure] WorldView-3 (©Digital Globe, WorldView-3 image 2020 - data provided by the European Space Agency), OpTiGAN SRR (figure taken from Tao & Muller, 2021), labelled reference ground truth (figure taken from Zhou et al., 2016).
SRR – examples for WorldView-3

[Figure: WorldView-3 input (©Digital Globe, WorldView-3 image 2020 - data provided by the European Space Agency; figure taken from Tao & Muller, 2021b), with the average enhancement factor and effective resolution of SRR results from SRGAN (Ledig et al., 2017), ESRGAN (Wang et al., 2018), MARSGAN (Tao et al., 2021), OFTV (Tao & Muller, 2021a), and OpTiGAN (Tao & Muller, 2021c)]
An example of Sentinel-2 single image SRR
• Input image ID: S2A_MSIL1C_20201018T033751_N0209_R061_T49TCF_20201018T063447
• Used the same single-image MARSGAN model (without retraining) as used for the WV-3, SkySat and Deimos-2 SRR (Tao & Muller, RS, 2021c)
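A generic sketch of how a pre-trained single-image SRR generator can be applied to a large Sentinel-2 band by tiling (an illustration only: the `load_pretrained_generator` call and the tile size are hypothetical, and MARSGAN itself is not reproduced here):

```python
import numpy as np
import rasterio
import torch

def srr_band(geotiff_path, generator, tile=256, scale=4, device="cpu"):
    """Apply a pre-trained single-image SRR generator to one band, tile by tile.

    generator : a torch.nn.Module mapping (1, 1, tile, tile) tensors to
                (1, 1, tile*scale, tile*scale) super-resolved tensors.
    """
    with rasterio.open(geotiff_path) as src:
        band = src.read(1).astype(np.float32)
    band /= max(band.max(), 1.0)                     # crude normalisation to [0, 1]
    h, w = band.shape
    out = np.zeros((h * scale, w * scale), dtype=np.float32)
    generator.eval()
    with torch.no_grad():
        for y in range(0, h - h % tile, tile):       # full tiles only, for brevity
            for x in range(0, w - w % tile, tile):
                patch = torch.from_numpy(band[y:y + tile, x:x + tile])[None, None].to(device)
                sr = generator(patch).squeeze().cpu().numpy()
                out[y * scale:(y + tile) * scale, x * scale:(x + tile) * scale] = sr
    return out

# Hypothetical usage; the loader function and filenames are placeholders.
# generator = load_pretrained_generator("marsgan_weights.pth")
# hr = srr_band("T49TCF_20201018_B04.tif", generator, tile=256, scale=4)
```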
Summary and Possible Future Service
▪ Traditional multi-image photogrammetric approaches (e.g., OFTV) generally produce few or no artefacts and give better restoration of small objects (e.g., the small bar targets in this example), but the edge sharpness is generally low (i.e., blurry outlines).
▪ Deep-learning based approaches (e.g., SRGAN, ESRGAN, MARSGAN) generally produce sharper edges, but are more likely to produce artefacts.
▪ SRR techniques that combine the two can restore both high-frequency components (e.g., edges, textures) and low-frequency components (e.g., individual objects) whilst keeping good control of artefacts; empirically they are more likely to produce optimal results and are more robust across different datasets.
▪ Even so, SRR results (from any algorithm) may still differ from sensor to sensor and from scene to scene, so a future streamlined SRR processing system (if funded) should be capable of automatically selecting the best algorithm/method for a given scene.

[Figure: Proposed SRR service (SpaceJump 2021)]