Article

Single Image Super-Resolution Restoration of TGO CaSSIS Colour Images: Demonstration with Perseverance Rover Landing Site and Mars Science Targets

1 Imaging Group, Mullard Space Science Laboratory, Department of Space and Climate Physics, University College London, Holmbury St Mary, Surrey RH5 6NT, UK
2 Laboratoire de Planétologie et Géodynamique, CNRS, UMR 6112, Universités de Nantes, 44300 Nantes, France
3 Department of Atmospheric and Planetary Sciences, Sumatera Institute of Technology (ITERA), Lampung 35365, Indonesia
4 Physikalisches Institut, Universität Bern, Sidlerstrasse 5, 3012 Bern, Switzerland
5 INAF, Osservatorio Astronomico di Padova, 35122 Padova, Italy
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(9), 1777; https://doi.org/10.3390/rs13091777
Submission received: 8 April 2021 / Revised: 29 April 2021 / Accepted: 30 April 2021 / Published: 2 May 2021
(This article belongs to the Special Issue Planetary 3D Mapping, Remote Sensing and Machine Learning)

Abstract

The ExoMars Trace Gas Orbiter (TGO)’s Colour and Stereo Surface Imaging System (CaSSIS) provides multi-spectral optical imagery at 4–5 m/pixel spatial resolution. Improving the spatial resolution of CaSSIS images would allow greater amounts of scientific information to be extracted. In this work, we propose a novel Multi-scale Adaptive weighted Residual Super-resolution Generative Adversarial Network (MARSGAN) for single-image super-resolution restoration of TGO CaSSIS images, and demonstrate that it provides an effective resolution enhancement factor of about 3 times. We provide qualitative and quantitative assessments of CaSSIS SRR results over the Mars2020 Perseverance rover’s landing site. We also show examples of similar SRR performance over 8 science test sites, selected mainly because they are covered by higher-resolution HiRISE images for comparison, and which include many features unique to the Martian surface. Application of MARSGAN will allow high-resolution colour imagery from CaSSIS to be obtained over extensive areas of Mars, beyond what has been possible to obtain to date from HiRISE.

Graphical Abstract

1. Introduction

Orbital imaging has been a highly effective way of exploring the Martian surface. The ExoMars Trace Gas Orbiter (TGO)’s Colour and Stereo Surface Imaging System (CaSSIS) provides multi-spectral optical imagery at 4–5 m/pixel spatial resolution [1]. CaSSIS offers higher spatial resolution and image quality, as well as colour bands, compared to the Mars Reconnaissance Orbiter (MRO) Context Camera (CTX) images at 6 m/pixel [2]. However, the spatial resolution of CaSSIS is limited compared to the details revealed by the MRO High Resolution Imaging Science Experiment (HiRISE) images, typically at 25–50 cm/pixel resolution [3]. On the other hand, CaSSIS has much better global coverage than HiRISE (which has covered <4% of Mars since 2006) and will provide more repeat and stereo observations in the future.
Improving the spatial resolution of CaSSIS images would allow greater amounts of information to be extracted about the nature of the surface and how it formed or changes over time. One option to achieve a greater spatial resolution is through the use of Super-Resolution Restoration (SRR/SR) techniques. This was first demonstrated with HiRISE 25 cm/pixel orbital imagery in [4], which showed a 5 times (5×) improvement using 8 multi-view images with view-angle differences ≥8°, acquired over 6 years. Subsequently, this technique was employed to confirm the discovery of the Beagle2 lander in the Isidis plains [5].
With the recent success of deep learning based SRR techniques, especially Generative Adversarial Networks (GANs), in the field of photo-realistic image enhancement, we propose a Multi-scale Adaptive weighted Residual Super-resolution restoration GAN (MARSGAN) network for single-image SRR applied to CaSSIS. We train the proposed MARSGAN model with HiRISE images and demonstrate ~3× resolution enhancement for CaSSIS images. An example is shown in Figure 1.
MARSGAN SRR can be used not only to support image analysis for improved scientific understanding of the Martian surface, but also to support existing and future rover missions. SRR images can be employed for a wide range of applications, such as the detection of objects that may present hazards for landers and for rover navigation and path planning, the improvement of colour and hyperspectral images for better understanding of surface mineralogy, the detection of spacecraft hardware, and better definition of dynamic features. Such techniques can also be applied to time series for change detection, for example in tracking dynamic features.
In this work, we describe in detail and show qualitative and quantitative assessments of the proposed single-image MARSGAN CaSSIS SRR result for the Mars2020 Perseverance Rover’s landing site [6,7], Jezero Crater. In addition, we demonstrate other potential applications with 8 further test sites with scientifically interesting features.
The layout of this paper is as follows. In Section 1.1, we introduce the 8 study sites. In Section 1.2, we review previous work in image SRR. In Section 2.1, we describe the MARSGAN architecture. In Section 2.2, we present MARSGAN’s loss functions. In Section 2.3, we introduce the different assessment methods. In Section 2.4, we provide training and experimental details. In Section 3.1, we demonstrate CaSSIS SRR results for Jezero crater and provide assessment details. In Section 3.2, we demonstrate CaSSIS SRR results for the 8 selected science targets. In Section 4.1, we discuss perceptual-driven and PSNR-driven SRR solutions. In Section 4.2, we broadly compare the proposed single-image, deep-learning based approach with a traditional multi-image, computer-vision based approach. In Section 4.3, we briefly demonstrate the potential of the MARSGAN model for HiRISE, CTX, and CRISM data. In Section 5, we summarise our conclusions and discuss future work.

1.1. Study Sites

Our selected science targets include bedrock layers (Site-1), bright and dark slope streaks (Site-2), defrosting dunes and dune gullies (Site-3), gullies at Gasa crater (Site-4), recurring slope lineae at Hale Crater (Site-5), scalloped depressions and dust devils at Peneus Patera (Site-6), gullies at Selevac crater (Site-7), and defrosting so-called “spiders” (Site-8). Figure 2 shows cropped samples of the CaSSIS colour images for the above-mentioned science targets. The CaSSIS and HiRISE image IDs for these study sites are listed in Table 1 in Section 2.4.
Site 1 is an image of the floor of a 41 km diameter crater located to the north of the Argyre Basin; the crater floor exposes ancient layered rock deposits. The light tone of the layers in these deposits suggests they could be ancient clays, e.g., [8,9], and therefore may represent an ancient aqueous environment. The crater floor also hosts a number of dark sand dunes, e.g., [10,11,12], and transverse aeolian ridges, e.g., [13,14]. Many dark dunes on Mars have been shown to be currently in motion [15,16,17,18], whereas transverse aeolian ridges are thought to be inactive [19,20].
Site 2 captures the northern rim slope and floor of an ancient ~45 km diameter crater in Arabia Terra. The steeply sloping hillslopes have many slope streaks, believed to represent avalanches of dust [21,22,23]. Many new slope streaks have been observed and they have also been observed to fade [24,25]. Their exact trigger is still an open question [26,27,28]. The flatter areas host numerous transverse aeolian ridges.
Site 3 comprises a ~55 km diameter crater in Noachis Terra which hosts a dunefield (USGS dune database 0175-546) with active dune gullies [29,30,31]. The CaSSIS image is taken at a time of year when the seasonal frosts are retreating, creating distinctive albedo patterns on the surface [32,33,34]. These defrosting spots represent areas where dark dust has been deposited on top of the bright seasonal ices (mainly carbon dioxide ice) by CO2 gas escaping from underneath the ice [35]. Flows of dark sand along gullies are also thought to occur at this time of year [29,36], driven by CO2 sublimation.
Site 4 is at Gasa crater, a 6.5 km diameter crater located inside Cilaos crater, a 21.4 km diameter crater. Gasa Crater has annual active gullies along its south-facing wall [36,37,38,39]. Gullies are also located on the south-facing wall of the larger host crater. Simulated and actual CaSSIS images were able to pick out new deposits in this crater based on their colour contrast [40,41]. The gullies in this crater have exceptionally well-developed source alcoves into the bedrock. This crater has a pitted floor, which is thought to indicate an impact into icy materials and subsequent volatile release from the impact melt deposits [40,41].
Site 5 is located on the central peak of Hale Crater, a 120–150 km diameter crater located on the northern rim of the Argyre basin. The slopes here host Recurring Slope Lineae (RSL) and gullies. RSL are dark linear markings that grow downslope during the warmest periods of the year and were initially thought to be liquid water seeps, e.g., [42,43,44], although that interpretation has been overturned and re-established many times over in the last decade, e.g., [45,46,47,48,49,50]. The southern edge of the central peak area is bounded by dark aeolian dunes. Further south the crater floor is intensely pitted, and this texture indicates that the Hale impact liberated volatiles [41,51,52,53].
Site 6 shows terrain on the flank of Peneus Patera which hosts “scalloped depressions”. These depressions are believed to be formed by loss of interstitial ice via sublimation and subsequent collapse of the overlying terrain, and are often compared to terrestrial thermokarst developed in permafrost terrains [54,55,56,57,58]. This particular image also shows dust devil tracks: dark tracks left on the ground by the passage of small wind vortices which remove a thin layer of surface dust [59,60,61,62]. Dust devil tracks are constantly being formed and fading; their pattern rarely remains similar between two orbital images.
Site 7 is the 7.3 km diameter Selevac Crater, whose south-facing walls host numerous gullies, some of which have been active over the last decade [36]. The north-facing walls host talus features typical of fresh impact craters. This crater has a pitted floor similar to that of Site 4, Gasa Crater. The terrain to the south of this crater hosts the subdued rim of a Noachian-aged crater that seems to be almost totally infilled and perhaps breached by fluvial erosion [63,64].
Site 8 is located near the south pole of Mars in a terrain intensely patterned by “spiders”. These enigmatic surface features are characterised by hierarchical branching networks of depressions, leading to one or more deeper foci. They are believed to be formed by repeated erosion of the surface caused by gas escaping from under the metre-thick seasonal ice deposits [35,65,66,67,68,69]. In this image, dark spots associated with defrosting can be seen, as described already for site 3. However, no perennial changes have been observed in spider systems so whether they are active today is a subject of debate.

1.2. Previous Work

SRR (or SR) refers to the task of enhancing the spatial resolution of an image from Lower-Resolution (LR) to Higher-Resolution (HR). In the past, SRR was based on the idea that a combination of the non-redundant information contained in multiple LR images can be used to generate an HR image. This is also referred to as multi-image SRR in the field of computer vision. It was built on the fundamental basis of image co-registration, followed by multi-image sparse coding [70] or multi-image non-uniform interpolation [71]. The achievable resolution enhancement, as well as robustness to noise, is generally limited for these simple forward techniques. Later, around the 2010s, SRR techniques followed the Maximum a Posteriori (MAP) approach [72,73,74] to resolve the inverse process stochastically, by assuming a model in which each LR image is a downsampled, distorted, blurred, and noise-added version of the true scene, i.e., the HR image. Building on the MAP techniques, we previously proposed two SRR systems in [4] and [75] for Mars orbital imagery and Earth observation satellite imagery, exploiting multi-angle imaging properties and, for the latter, combining deep learning techniques.
Deep learning based SRR techniques have been fairly successful during the past decade in solving the problem of resolution enhancement and texture synthesis for real-life images and videos. The pioneering work in deep learning based SRR is the three-layer Convolutional Neural Network (CNN) based SRR algorithm (SRCNN) [76], which performs non-linear mapping between LR patches and HR patches, represented using convolutional filters. A simple Mean Squared Error (MSE) loss function is used to train the SRCNN network. Compared to SRCNN, Very Deep Super-Resolution (VDSR) [77] uses a deeper network with smaller convolutional filters to learn only the residual (high frequency information) between LR and HR images. VDSR is based on the popular VGG (named after the Visual Geometry Group at the University of Oxford) architecture that was originally proposed in [78] for large-scale image classification tasks. Instead of trying to learn high-frequency details at the up-sampled scale, as done in SRCNN and VDSR, Fast SRCNN (FSRCNN) [79] and Efficient Sub-Pixel CNN (ESPCN) [80] learn the high-frequency details through a deconvolutional layer and a sub-pixel convolutional layer, respectively, at the end of their architectures, significantly reducing unnecessary computational overheads.
Recently, residual-network based architectures have been fairly successful in SRR tasks. The most representative ones are the Enhanced Deep residual SR Network (EDSR) [81], Wide activation Deep residual SR (WDSR) [82], and the CAscading Residual Network (CARN) [83]. EDSR is based on the original ResNet [84] and SRResNet [85] architectures using residual learning, with a Rectified Linear Unit (ReLU) layer and Batch Normalisation (BN) layers removed. Based on EDSR, WDSR further demonstrated that expanding features before ReLU activation leads to significant improvements, without adding additional parameters and computation, and used Weight Normalisation (WN) to replace BN for faster convergence and better accuracy. Both EDSR and WDSR adopted the idea of not using an up-sampled input for the CNN and used sub-pixel shuffling at the end of their architectures, as proposed in the aforementioned ESPCN. CARN, on the other hand, improved on the traditional residual network by proposing a cascading mechanism at both the local and global level, in order to receive more information and allow more efficient flow of information, while keeping the network lightweight.
Other successful SRR architectures employed recursive networks that share network parameters across convolutional layers in order to reduce memory usage. The most representative ones are the Deep Recursive Convolutional Network (DRCN) [86] and the Deep Recursive Residual Network (DRRN) [87]. DRCN reuses weight parameters and stacks recursive blocks to improve SRR performance without introducing new parameters for convolutions. DRRN improves on DRCN by stacking residual blocks with shared parameters, achieving superior results over DRCN.
Unlike the aforementioned SRR networks that treat all spatial locations, features, scales, and channels of an image equally, some novel SRR networks assign adaptively weighted importance to different locations, features, scales, and channels of an image. In the Adaptive Weighted Super-Resolution Network (AWSRN) [88], the authors proposed a lightweight SRR network that uses a sequence of Adaptive Weighted Residual Units (AWRUs), replacing the original residual units used in WDSR, to form a Local Fusion Block (LFB), and then a sequence of LFBs to perform the non-linear mapping of extracted features. AWSRN also proposed an Adaptive Weighted Multi-Scale (AWMS) reconstruction module to selectively “stack and fuse” multi-scale convolutions, in order to use the feature information derived from the non-linear mapping module more effectively. Another successful architecture that uses the idea of “selective attention” is the deep Residual Channel Attention Network (RCAN) [89]. RCAN’s emphasis is on discriminative learning across different feature channels via selective downscaling and upscaling of feature maps, using Residual Channel Attention Blocks (RCABs) in Residual Groups (RGs), i.e., Residual in Residual (RIR), to focus on the more informative components of the LR features. Long and short skip connections are used in RIR to help bypass low-frequency information and stabilise the training process of their very deep network.
More recently, Generative Adversarial Networks (GANs), which exploit perceptual differences rather than pixel differences between LR and HR images, have become more popular in the field of SRR. GANs operate by training a generative model with the goal of restoring high frequency textures, while in parallel training a discriminator to distinguish SRR images from HR truth. SRGAN (Super-Resolution GAN), proposed in [85], first used a GAN based architecture to generate visually pleasing SRR images. SRGAN uses ResNet/SRResNet [84,85] as a backend and employs a weighted combination of a content loss defined on feature maps of high level features (the Euclidean distance between the feature representations of the generated image and the reference image) from the VGG network [78], and the adversarial loss originally defined in [90], to achieve visually optimal results. The generator network in SRGAN has 16 identical residual blocks consisting of 2 convolutional layers, BN, and Parametric ReLU, followed by 2 sub-pixel convolutional layers, proposed in [80], for upscaling. The discriminator network in SRGAN contains 8 convolutional layers with BN and Leaky ReLU (LReLU), with an increasing number of feature maps and down-sampling each time the number of features is doubled, followed by 2 dense layers and a sigmoid activation. In parallel with SRGAN, an independent group of researchers proposed a similar network called EnhanceNet [91]. The generator network of EnhanceNet has 10 residual blocks followed by 2 nearest-neighbour up-sampling (of feature activation) layers, and then a convolutional layer to cancel checkerboard artefacts. In comparison to SRGAN, the major difference is that EnhanceNet uses an additional texture matching loss, computed from the Euclidean distance of local (patch-wise) texture statistics, on top of the perceptual loss and adversarial loss, to enforce locally similar textures between SRR and HR truth. The SRR result from EnhanceNet is perceptually significantly sharper but suffers from more synthetic artefacts. Improving on SRGAN, the Enhanced SR GAN (ESRGAN) [92] uses the basic architecture of SRGAN, replacing the original residual blocks with a deeper basic block, namely the RIR Dense Block (RRDB), and also uses an improved loss function that incorporates a new perceptual loss and a relativistic adversarial loss, together with a traditional content loss.
In this work, we propose a novel MARSGAN network for single image SRR. MARSGAN improves upon ESRGAN, which is used as our backbone architecture, in three respects: (1) it uses an adaptive weighted basic block, called AW-RRDB (AWRRDB), with noise inputs, for more effective residual learning while allowing local stochastic variations; (2) it uses a multi-scale reconstruction scheme to make full use of both low-frequency and high-frequency residuals; (3) it uses a fine-tuned loss function to balance perceptual quality against synthetic artefacts. MARSGAN is fully trained with HiRISE images and is used in this work for CaSSIS single image SRR.

2. Materials and Methods

2.1. MARSGAN Architecture

GANs provide a state-of-the-art framework for producing high-quality and “photo-realistic” SRR images. Recent GAN variants [92,93,94] have focused on optimisations of the original residual architecture of the generator network [85,91] and/or on better modelling of the perceptual loss, in order to improve the visual quality of the SRR results. In this work, we base our model on the ESRGAN architecture [92] due to its solid performance on real-world images. Inspired by the adaptive weighted learning process proposed in AWSRN [88] and the optimisations introduced in ESRGANplus [93], we propose an Adaptive Weighted RRDB with noise inputs (AWRRDB) to replace the original RRDB basic block in ESRGAN for more effective and efficient residual learning. Moreover, we use a multi-scale reconstruction scheme [88] based on sub-pixel shuffling [80,85] to replace the up-sampling layers used in ESRGAN, to make full use of both low-frequency and high-frequency residuals while avoiding the checkerboard-patterned artefacts caused by up-sampling layers [91]. We follow the standard discriminator network architecture proposed in SRGAN [85] and adopt the relativistic average discriminator concept proposed in [95] and employed in [92].
Our proposed Multi-scale Adaptive-weighted Residual SRR GAN (MARSGAN) network architecture is shown in Figure 3. With MARSGAN, our goal is to estimate a super-resolved image $I^{SR}$ from a lower-resolution input image $I^{LR}$. Here $I^{LR}$ is the lower-resolution version of its higher-resolution counterpart $I^{HR}$. Note that $I^{HR}$ is only available during training.
The MARSGAN generator starts from a single convolutional layer (3 × 3 kernels, 64 feature maps, stride 1) for initial feature extraction, which can be formulated as
$$x_0 = f_{ext}(I^{LR}) \tag{1}$$
where $f_{ext}$ denotes the initial feature extraction function for $I^{LR}$, and $x_0$ is the output feature map of the first convolutional layer.
In MARSGAN, our basic residual unit for non-linear feature mapping is the AWRRDB. AWRRDB is based on the original Dense Block (DB) structure and applies two modifications on top of the RRDB basic residual units used in ESRGAN [92]. RRDB has a much deeper and more complex structure (see Figure 3), compared to the Residual Blocks (RBs) used in SRGAN [85], giving it a much higher network capacity thanks to dense connections. The first modification to the RRDB basic block is the use of the AWRU concept, inspired by AWSRN [88]. Instead of applying a fixed residual scaling value [81] in each DB, i.e., the 0.2 used in ESRGAN, we use 11 independent weights per AWRRDB (see Figure 3), which are adaptively learned from given initial values, to help information and gradients flow more effectively. The second modification to the RRDB structure is the addition of a Gaussian noise input after each DB. Additive Gaussian noise inputs were demonstrated to be useful in [93] for adding stochastic variation to the generator network, while keeping their effects very localised, i.e., without changing the global perception of the images. Note that 3 of the 11 weights are scaling factors for the additive Gaussian noise. Another potential improvement to the DB structure, also proposed in [93], is the Residual DB (RDB), which adds a residual connection every two layers to augment the generator network capacity. However, we found the improvement from using RDB to be marginal compared to DB within AWRRDB, and we therefore keep ESRGAN’s DB design in AWRRDB.
Defining the DB used in the original ESRGAN architecture as $f_{DB}$, the output of the n-th proposed AWRRDB unit, denoted $x_{n+1}$, for input $x_n$ ($n = 0, 1, 2, \dots, 22$), can be expressed as
$$x_{n+1} = f_{AWRRDB}(x_n) = \lambda_b^n\, x_n^3 + \lambda_a^n\, x_n \tag{2}$$
where $\lambda_a^n$ and $\lambda_b^n$ are two independent weights for the n-th AWRRDB unit, and $x_n^3$ is obtained via
$$\begin{cases} x_n^3 = \lambda_r^{n2}\, f_{DB}(x_n^2) + \lambda_x^{n2}\, x_n^2 + \lambda_n^{n2}\, G_n \\ x_n^2 = \lambda_r^{n1}\, f_{DB}(x_n^1) + \lambda_x^{n1}\, x_n^1 + \lambda_n^{n1}\, G_n \\ x_n^1 = \lambda_r^{n0}\, f_{DB}(x_n) + \lambda_x^{n0}\, x_n + \lambda_n^{n0}\, G_n \end{cases} \tag{3}$$
where $\lambda_r^{nk}$, $\lambda_x^{nk}$, and $\lambda_n^{nk}$ ($k = 0, 1, 2$) are three independent sets of weights for each DB unit, and $G_n$ is the additive Gaussian noise input. The non-linear feature mapping is represented by a sequence (23 in this work) of the proposed AWRRDBs. As shown in Figure 3, each AWRRDB contains 3 DBs, and each DB contains 5 convolutional layers (3 × 3 kernels, 32 feature maps, stride 1) and 4 LReLU activations with a negative slope of 0.2. Merging Equations (2) and (3), the output of the non-linear mapping, $x_{n+1}$, for the n-th AWRRDB unit, given the initial input $x_0$ from Equation (1), can be expressed as
$$x_{n+1} = f_{AWRRDB}^{n}\left(f_{AWRRDB}^{n-1}\left(\cdots f_{AWRRDB}^{0}(x_0)\right)\right) \tag{4}$$
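To make Equations (1)–(3) concrete, below is a minimal PyTorch sketch of one AWRRDB unit under the stated configuration (3 DBs per AWRRDB; 5 convolutions and 4 LReLU activations with slope 0.2 per DB; 64 feature maps with a growth of 32; 11 learnable weights per AWRRDB, 3 of which scale the additive Gaussian noise). The initial weight values are our assumptions, apart from the 0.2 residual scaling inherited from ESRGAN; this is an illustrative sketch, not the authors’ released implementation.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """ESRGAN-style dense block: 5 convs (3x3), growth of 32 channels, 4 LReLUs."""
    def __init__(self, nf=64, gc=32):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(nf + i * gc, gc if i < 4 else nf, 3, 1, 1) for i in range(5)
        )
        self.lrelu = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x):
        feats = [x]
        for i, conv in enumerate(self.convs):
            out = conv(torch.cat(feats, dim=1))
            if i < 4:
                out = self.lrelu(out)
                feats.append(out)
        return out  # nf-channel output of the final, non-activated conv

class AWRRDB(nn.Module):
    """Adaptive-weighted RRDB with noise inputs (Eqs. 2-3): 3 dense blocks, each
    with learnable residual/identity/noise weights, plus 2 outer weights."""
    def __init__(self, nf=64, gc=32):
        super().__init__()
        self.dbs = nn.ModuleList(DenseBlock(nf, gc) for _ in range(3))
        self.w_r = nn.Parameter(torch.full((3,), 0.2))  # lambda_r (residual path)
        self.w_x = nn.Parameter(torch.ones(3))          # lambda_x (identity path)
        self.w_n = nn.Parameter(torch.zeros(3))         # lambda_n (noise scale)
        self.w_a = nn.Parameter(torch.tensor(1.0))      # lambda_a in Eq. (2)
        self.w_b = nn.Parameter(torch.tensor(0.2))      # lambda_b in Eq. (2)

    def forward(self, x):
        h = x
        for k, db in enumerate(self.dbs):
            g = torch.randn_like(h)                     # Gaussian noise input G_n
            h = self.w_r[k] * db(h) + self.w_x[k] * h + self.w_n[k] * g
        return self.w_b * h + self.w_a * x              # Eq. (2)
```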
After the non-linear feature mapping, we use an Adaptive Weighted Multi-Scale Reconstruction (AWMSR) scheme [88], based on sub-pixel shuffling [80,85], to replace the up-sampling layers used in ESRGAN for SRR image reconstruction. The AWMSR unit (see Figure 3), originally introduced in [88] and demonstrated to be helpful on top of the WDSR results [82], stacks 4 different levels of scaling convolutions (3 × 3, 5 × 5, 7 × 7, 9 × 9 kernels) with adaptive weights (initialised with an equal weight of 0.25) to make full use of the learned low-frequency and high-frequency information during SRR reconstruction. Here the output $x_{n+1}$ is fed to the AWMSR unit (see Figure 3), denoted $f_{AWMSR}$, followed by a final convolutional layer, denoted $f_{rec}$, to generate $I^{SR}$, which can be expressed as
$$I^{SR} = f_{rec}\left(\sum_{i=0}^{3} \alpha_i\, f_{AWMSR}^{i}(x_{n+1})\right) \tag{5}$$
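The AWMSR reconstruction head of Equation (5) can be sketched in the same style; the single-channel output (matching the greyscale HiRISE training data) and the branch paddings are our assumptions:

```python
import torch
import torch.nn as nn

class AWMSR(nn.Module):
    """Adaptive-weighted multi-scale reconstruction (Eq. 5): four parallel convs
    (3x3/5x5/7x7/9x9) feeding a 4x sub-pixel shuffle, fused with adaptive weights
    alpha_i initialised to 0.25, followed by a final conv f_rec."""
    def __init__(self, nf=64, scale=4, out_ch=1):
        super().__init__()
        kernels = (3, 5, 7, 9)
        self.branches = nn.ModuleList(
            nn.Conv2d(nf, out_ch * scale ** 2, k, 1, k // 2) for k in kernels
        )
        self.alpha = nn.Parameter(torch.full((len(kernels),), 0.25))
        self.shuffle = nn.PixelShuffle(scale)
        self.f_rec = nn.Conv2d(out_ch, out_ch, 3, 1, 1)

    def forward(self, x):
        y = sum(a * self.shuffle(b(x)) for a, b in zip(self.alpha, self.branches))
        return self.f_rec(y)
```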
For the discriminator, we use the same network architecture as described in SRGAN [85] and ESRGAN [92], which contains 8 convolutional layers with an increasing number of feature maps and a stride of 2 each time the number of features is doubled (3 × 3 kernels; 64 feature maps, stride 1; 64 feature maps, stride 2; 128 feature maps, stride 1; 128 feature maps, stride 2; …; 512 feature maps, stride 1; 512 feature maps, stride 2). The resulting 512 feature maps are followed by two fully connected dense layers and a final sigmoid activation function for output. We adopt the relativistic concept originally proposed in RaGAN [95] and applied in ESRGAN [92], using a “relativistic discriminator” that estimates the probability of given real data being relatively more realistic than fake data on average, instead of simply predicting real or fake. The relativistic discriminator network is optimised in an alternating manner [90] along with the generator network to solve the adversarial min-max problem. Given the standard discriminator, denoted $D_s$, for a real input image $I_r$ and a fake input image $I_f$, then
$$\begin{cases} D_s(I_r) = \sigma(C(I_r)) \rightarrow 1 & (\text{real}) \\ D_s(I_f) = \sigma(C(I_f)) \rightarrow 0 & (\text{fake}) \end{cases} \tag{6}$$
where $\sigma$ is the sigmoid function and $C$ is the non-transformed discriminator output. The relativistic average discriminator, denoted $D_{Ra}$, for a real input image $I_r$ and a fake input image $I_f$, can then be formulated as
$$\begin{cases} D_{Ra}(I_r, I_f) = \sigma\left(C(I_r) - \mathbb{E}_{I_f}[C(I_f)]\right) \rightarrow 1 & (\text{more real than fake}) \\ D_{Ra}(I_f, I_r) = \sigma\left(C(I_f) - \mathbb{E}_{I_r}[C(I_r)]\right) \rightarrow 0 & (\text{less real than real}) \end{cases} \tag{7}$$
where $\mathbb{E}_{I_f}$ represents the operation of averaging over all fake data in a mini-batch, and $\mathbb{E}_{I_r}$ the corresponding average over all real data.
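In code, Equation (7) is a one-line transform of the raw discriminator outputs; a minimal sketch with our own naming:

```python
import torch

def d_ra(c_x, c_other):
    """Relativistic average discriminator (Eq. 7): sigmoid of the raw output
    C(x) minus the mini-batch mean of C over the other class."""
    return torch.sigmoid(c_x - c_other.mean())

# d_ra(c_real, c_fake): probability real is more realistic than the average fake
# d_ra(c_fake, c_real): probability fake is more realistic than the average real
```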

2.2. Loss Functions

The loss function plays an important role in deep learning based SRR techniques. Up until the work of SRGAN [85] and EnhanceNet [91], classic SRR networks mostly minimised the Peak Signal-to-Noise Ratio (PSNR) between the recovered SRR image and the ground truth (HR). Due to the ill-posed nature of the SRR problem, texture details are typically synthetic (if not absent) in the reconstructed SRR image and therefore cannot be matched “pixel-to-pixel” with the ground truth, leading to a smoother solution that averages all potential synthetic solutions within a PSNR-oriented model. In SRGAN [85], the authors proposed to replace the MSE based content loss (in pixel space) with a loss (in feature space) defined on feature maps, denoted $\varphi_{i,j}$, where $j$ and $i$ indicate the j-th convolution (after activation) before the i-th maxpooling layer within the pre-trained VGG19 network. SRGAN used $\varphi_{2,2}$ and $\varphi_{5,4}$ in their experiments. Instead of completely removing the content loss term, in EnhanceNet [91] the authors explicitly experimented with different combinations (weighted averages) of the content loss ($\tau_E$), perceptual loss ($\tau_P$), adversarial loss ($\tau_A$), and an additional texture loss ($\tau_T$; defined by matching patch-wise statistics of textures). Their study shows that the network optimised by $\tau_E$ gives the smoothest and most artefact-free results, that $\tau_P$ or $\tau_P + \tau_A$ are much sharper but full of artefacts, and that $\tau_E + \tau_P + \tau_A$ and $\tau_E + \tau_P + \tau_A + \tau_T$ produce “balanced” results that are comparably sharper (more detailed) but with fewer artefacts.
ESRGAN demonstrated that, improving on SRGAN, using the VGG features before the activation layers gives denser feature representations, while keeping the brightness of the reconstructed SRR consistent [92]. In addition, ESRGAN kept the $\ell_1$ norm-based content loss term (weighted by a factor of 0.01) to balance the perceptual-driven solutions. Due to the very small weight of the content loss, ESRGAN proposed a “network interpolation” method, a weighted average of two networks trained with the perceptual loss and the $\ell_1$ loss respectively, to balance the perceptual-driven and PSNR-driven solutions.
The $\ell_1$ and MSE based content losses, denoted $l_{SR}^{\ell_1}$ and $l_{SR}^{MSE}$ respectively, can be formulated as
$$\begin{cases} l_{SR}^{\ell_1} = \dfrac{1}{s^2 W H} \sum_{x=1}^{sW} \sum_{y=1}^{sH} \left| \left( I^{HR} - G(I^{LR}) \right)_{x,y} \right| \\ l_{SR}^{MSE} = \dfrac{1}{s^2 W H} \sum_{x=1}^{sW} \sum_{y=1}^{sH} \left( \left( I^{HR} - G(I^{LR}) \right)_{x,y} \right)^2 \end{cases} \tag{8}$$
where $G$ represents the generator function, $W$ and $H$ denote the width and height of $I^{LR}$, and $s$ denotes the scaling factor of $I^{SR}$ (and $I^{HR}$) with respect to $I^{LR}$.
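With the mean reduction absorbing the $1/(s^2WH)$ normalisation, Equation (8) maps directly onto the built-in PyTorch losses; a trivial sketch:

```python
import torch.nn.functional as F

def content_losses(sr, hr):
    """l1 and MSE content losses of Eq. (8); sr = G(I_LR) and hr = I_HR must be
    tensors of identical shape."""
    return F.l1_loss(sr, hr), F.mse_loss(sr, hr)
```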
The VGG based perceptual loss, denoted $l_{SR}^{VGG/\varphi_{i,j}}$, can be expressed as
$$l_{SR}^{VGG/\varphi_{i,j}} = \frac{1}{W_{i,j} H_{i,j}} \sum_{x=1}^{W_{i,j}} \sum_{y=1}^{H_{i,j}} \left( \left( \varphi_{i,j}(I^{HR}) - \varphi_{i,j}(G(I^{LR})) \right)_{x,y} \right)^2 \tag{9}$$
where $W_{i,j}$ and $H_{i,j}$ represent the dimensions of the respective feature maps $\varphi_{i,j}$ within the VGG network. The authors of ESRGAN also experimented with a VGG loss based on a VGG network fine-tuned for material recognition and concluded the gain is marginal. We also tested both VGG networks on HiRISE images and visually checked the results: there was no discernible difference. Future experiments with a perceptual loss focused on texture may still improve the SRR results, but in this work we keep the original pre-trained VGG19 network [78] for feature representation.
Based on Equation (7), the discriminator loss of RaGAN, denoted $l_D^{Ra}$, can be expressed as
$$l_D^{Ra} = -\mathbb{E}_{I_r}\left[\log D_{Ra}(I_r, I_f)\right] - \mathbb{E}_{I_f}\left[\log\left(1 - D_{Ra}(I_f, I_r)\right)\right] \tag{10}$$
The adversarial loss for the generator, denoted $l_{SR}^{Ra}$, takes a symmetrical form of Equation (10):
$$l_{SR}^{Ra} = -\mathbb{E}_{I_r}\left[\log\left(1 - D_{Ra}(I_r, I_f)\right)\right] - \mathbb{E}_{I_f}\left[\log D_{Ra}(I_f, I_r)\right] \tag{11}$$
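Equations (10) and (11) can be written compactly using binary cross-entropy on the relativistic logits; a sketch assuming c_real and c_fake are the raw (pre-sigmoid) discriminator outputs for a mini-batch:

```python
import torch
import torch.nn.functional as F

def ragan_losses(c_real, c_fake):
    """Discriminator loss (Eq. 10) and generator adversarial loss (Eq. 11)."""
    bce = F.binary_cross_entropy_with_logits
    rel_real = c_real - c_fake.mean()   # logit of D_Ra(I_r, I_f)
    rel_fake = c_fake - c_real.mean()   # logit of D_Ra(I_f, I_r)
    l_d = bce(rel_real, torch.ones_like(rel_real)) + \
          bce(rel_fake, torch.zeros_like(rel_fake))
    l_g = bce(rel_real, torch.zeros_like(rel_real)) + \
          bce(rel_fake, torch.ones_like(rel_fake))
    return l_d, l_g
```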
Since the aim of this work is to derive scientifically meaningful results, we adjust the total loss function to encourage solutions towards an ideal scenario: better than using the MSE loss alone, but with minimal tolerance of artefacts. We also empirically found that using the same loss function as ESRGAN tends to produce fine-scale synthetic textures containing noise patterns similar to those introduced by the original HiRISE images (this is further discussed in Section 4.1). Although optimisation of perceptual based loss functions is well suited to photo-SRR applications, it does not appear suitable for remote sensing applications, given the current state of the art of deep learning based SRR. We therefore rebalance the lower-level and higher-level perceptual losses derived from the VGG network to act together as the perceptual loss, and give a higher weight to the traditional MSE based content loss, in order to minimise the creation of hallucinated finer details.
The total generator loss used in this work, $L_G^{MARSGAN}$, can be expressed as a weighted sum of the content loss formulated in Equation (8), the lower-level and higher-level perceptual losses formulated in Equation (9), and the adversarial loss formulated in Equation (11), as follows
$$L_G^{MARSGAN} = \gamma\, l_{SR}^{VGG/\varphi_{5,4}} + (1 - \gamma)\, l_{SR}^{VGG/\varphi_{2,2}} + \lambda\, l_{SR}^{Ra} + \eta\, l_{SR}^{MSE} \tag{12}$$
where $\gamma$, $\lambda$, and $\eta$ are hyperparameters balancing the different loss terms. For comparison, the total loss used in ESRGAN, $L_G^{ESRGAN}$, can be expressed as
$$L_G^{ESRGAN} = l_{SR}^{VGG/\varphi_{5,4}} + \lambda\, l_{SR}^{Ra} + \eta\, l_{SR}^{\ell_1} \tag{13}$$
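Putting the terms together, a sketch of Equation (12) using the Experiment-3 hyperparameters from Section 2.4 (reusing ragan_losses from the sketch above; the perceptual callable returning the $\varphi_{2,2}$ and $\varphi_{5,4}$ VGG losses is assumed):

```python
import torch.nn.functional as F

def marsgan_generator_loss(sr, hr, c_real, c_fake, perceptual,
                           gamma=0.5, lam=5e-3, eta=0.5):
    """Total generator loss of Eq. (12); gamma/lam/eta follow Experiment-3."""
    p_low, p_high = perceptual(sr, hr)        # l_VGG/phi(2,2), l_VGG/phi(5,4)
    _, l_g_ra = ragan_losses(c_real, c_fake)  # adversarial term, Eq. (11)
    return (gamma * p_high + (1.0 - gamma) * p_low
            + lam * l_g_ra + eta * F.mse_loss(sr, hr))
```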
To show the effectiveness of the MARSGAN architecture in isolation, we first optimise the MARSGAN model with ESRGAN’s loss function, shown in Equation (13), for Jezero crater, as demonstrated in Section 3.1. In Section 3.2, we use our fine-tuned loss function, shown in Equation (12), for the MARSGAN model applied to the 8 science sites.

2.3. Assessment Methods

For validation and quality assessment, we use standard image quality metrics, which include PSNR, the Mean Structural Similarity Index Metric (MSSIM), the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE), and the Perception-based Image Quality Evaluator (PIQE), using HiRISE and downsampled HiRISE (at 1 m/pixel) as the reference/validation dataset. CaSSIS images and SRR results are co-registered with HiRISE using our in-house multi-resolution image co-registration pipeline [96] in order to calculate PSNR and MSSIM, and also to compare the BRISQUE and PIQE scores. These metrics are available via MATLAB’s “Image Quality Metrics” bundle (https://uk.mathworks.com/help/images/image-quality.html (accessed on 1 May 2021)) and can be summarised as follows (a small Python sketch of the two reference-based metrics is given after the list):
(1) PSNR: PSNR is derived from the MSE and indicates the ratio of the maximum pixel intensity to the power of the distortion. A mathematical expression of PSNR can be formulated as
$$PSNR(T, R) = 10 \log_{10}\left(\frac{PeakVal^2}{MSE(T, R)}\right) \tag{14}$$
where $T$ denotes the target image (the CaSSIS SRR image in our case), $R$ denotes the reference image (the down-sampled HiRISE image in our case), and $PeakVal$ is the maximum value of the reference image (255 for the normalised 8-bit images in our case).
(2) MSSIM [97]: MSSIM is the mean of locally computed structural similarity. The structural similarity index is derived using patterns of pixel intensities among neighbouring pixels with normalised brightness and contrast. MSSIM can be formulated as
$$MSSIM(T, R) = \mathbb{E}\left[\frac{(2\mu_T \mu_R + C_1)(2\sigma_{T,R} + C_2)}{(\mu_T^2 + \mu_R^2 + C_1)(\sigma_T^2 + \sigma_R^2 + C_2)}\right] \tag{15}$$
where $\mathbb{E}$ represents the mean operation; $\mu_T$, $\mu_R$, $\sigma_T$, $\sigma_R$, and $\sigma_{T,R}$ are the local means, standard deviations, and cross-covariance of the target and reference images respectively; and $C_1$ and $C_2$ are constants based on the dynamic range of pixel values.
(3) BRISQUE [98]: The BRISQUE model provides subjective quality scores based on a model pre-trained on images with known distortions. The score range is [0, 100] and lower values reflect better perceptual quality.
(4) PIQE [99]: PIQE measures image quality using block-wise calculation against arbitrary distortions. The score range is [0, 100] and lower values reflect better perceptual quality.
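As noted above, here is a minimal Python sketch of the two reference-based metrics (Equations (14) and (15)), assuming co-registered 8-bit greyscale arrays; BRISQUE and PIQE are left to the MATLAB bundle cited above:

```python
import numpy as np
from skimage.metrics import structural_similarity

def psnr(target, ref, peak=255.0):
    """PSNR of Eq. (14) for two co-registered images of equal size."""
    mse = np.mean((target.astype(np.float64) - ref.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def mssim(target, ref):
    """Mean SSIM of Eq. (15); scikit-image averages the local SSIM map."""
    return structural_similarity(target, ref, data_range=255)
```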
In practice, due to the large differences in imaging time (year and local Mars time) between HiRISE and CaSSIS images (as explained in Section 2.4), these measurements may sometimes be inappropriate. We try to assess each CaSSIS SRR result against the “closest” HiRISE, in terms of imaging date and local Mars time, in order to maintain the most “compatible” brightness, contrast and shading characteristics between HiRISE and CaSSIS, even though the choices are extremely limited.
Complementary to these quality measurements, we also demonstrate, with the SRR results for the Jezero crater site, a sharpness measurement on high-contrast slanted edges (see Section 3.1). This is a direct way of measuring image spatial resolution that does not depend on a reference HiRISE image, which is subject to changes in appearance due to different imaging times (the closest CaSSIS and HiRISE “pair” used in this work is still ~1 month apart). Such measurements are critical for remote sensing applications in order to quantify the effective resultant SRR resolution.

2.4. Training and Testing

Our training dataset comprises ~1.8 million pairs of HiRISE (i.e., HR; at 0.25 m/pixel) and down-sampled HiRISE (i.e., LR; at 1 m/pixel) cropped samples. The training HR samples were extracted from 466 unique HiRISE images containing non-overlapping unique features of Mars (see Figure 4), including dunes (Figure 4a), craters (Figure 4b), hills (Figure 4c), layering (Figure 4d), slopes (Figure 4e), cones (Figure 4f), scallops (Figure 4g), gullies (Figure 4h), falls (Figure 4i), deposits (Figure 4j), rocks (Figure 4k), chaos (Figure 4l), and other unique features (a list of these features with associated HiRISE image IDs is provided in the Supplementary Materials). Note that all experiments (training and testing) in this work are performed with a scaling factor of 4× between LR and HR (or SR) images. The training LR samples are produced by applying a Gaussian filter followed by bicubic down-sampling of the corresponding HR samples.
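A minimal sketch of this degradation step using OpenCV; the Gaussian blur width (sigma) is an assumed value, as it is not specified in the text:

```python
import cv2

def make_lr(hr, scale=4, sigma=1.0):
    """Gaussian filter followed by bicubic down-sampling of an HR HiRISE patch
    (Section 2.4); with ksize=(0, 0), OpenCV derives the kernel from sigma."""
    blurred = cv2.GaussianBlur(hr, (0, 0), sigmaX=sigma)
    h, w = hr.shape[:2]
    return cv2.resize(blurred, (w // scale, h // scale),
                      interpolation=cv2.INTER_CUBIC)
```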
In Experiment-1, we train the original ESRGAN network with the original loss function, as shown in Equation (13). In Experiment-2, we train the proposed MARSGAN network optimised with the same loss function as used in ESRGAN. Finally, in Experiment-3, we train the proposed MARSGAN network optimised with our rebalanced loss function, as shown in Equation (12). Comparisons of the results from the three trained models are given in Section 3.1. For the further processing of the proposed CaSSIS science scenes, demonstrated in Section 3.2, we use the MARSGAN model trained in Experiment-3.
For Experiment-1, the batch size is set to 64 and the spatial sizes of the HR and LR patches are set to 256 × 256 pixels and 64 × 64 pixels, respectively. This is 4 times larger for each HR/LR patch compared to the spatial sizes used in [92] and ~7 times larger compared to [85]. It was observed in [92] that training a deep SRR network benefits from a larger patch size due to an enlarged receptive field, with trade-offs of more computing resources and a longer training time. We follow the two-stage training process proposed in [92] for ESRGAN: a PSNR-oriented model is first trained with the $\ell_1$ loss in Equation (8), followed by perceptual-oriented training with the perceptual loss in Equation (13), with $\lambda = 5 \times 10^{-3}$ and $\eta = 1 \times 10^{-2}$. For Experiment-2, the batch size is 64 and the spatial sizes of the HR and LR patches are set to 128 × 128 pixels and 32 × 32 pixels for a shorter training time. The same two-stage training process (with the same hyperparameters) is followed as in Experiment-1.
In Experiment-3, the batch size and spatial patch sizes are the same as in Experiment-2. We re-use the pre-trained MARSGAN model from Experiment-2 to initialise the generator and continue training with the MARSGAN loss in Equation (12), with $\lambda = 5 \times 10^{-3}$, $\gamma = 0.5$, and a higher $\eta = 0.5$, in order to encourage solutions with minimal synthetic artefacts. The initial learning rate is $10^{-4}$, halved at 50k, 100k, 300k, and 500k iterations. Standard Adam optimisation [100] is used with $\beta_1 = 0.9$ and $\beta_2 = 0.999$. Training and testing are performed on an NVIDIA RTX 3090 GPU.
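For illustration, the stated optimiser and learning-rate schedule in PyTorch (the generator module is assumed to be defined; the scheduler is stepped once per training iteration):

```python
import torch

# Adam with beta1=0.9, beta2=0.999 and an initial learning rate of 1e-4.
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4, betas=(0.9, 0.999))
# Halve the learning rate at the 50k/100k/300k/500k iteration milestones.
sched = torch.optim.lr_scheduler.MultiStepLR(
    opt_g, milestones=[50_000, 100_000, 300_000, 500_000], gamma=0.5)
```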
Our testing dataset is a collection of CaSSIS colour images of the Perseverance rover’s landing site, as well as the selected science-oriented scenes introduced in Section 1.1. Note that our HiRISE training dataset is greyscale. To handle the colour channels of a CaSSIS image, we can either feed the CaSSIS colour image directly into the MARSGAN prediction module, operating on the brightness channel (V) in the Hue-Saturation-Value (H-S-V/HSV) colour space, or produce SRR on each individual colour channel in the Red-Green-Blue (R-G-B/RGB) colour space and merge them afterwards for colour output. In our experiments, we found the two approaches result in similar SRR quality; however, the latter approach leads to a slightly different colour appearance compared to the input image (see Figure 5 for a demonstration of the differences). Theoretically, texture/sharpness manipulation in the separate R-G-B channels should not affect the brightness/reflectance of each channel alone, but it does have an effect when the three channels are merged back in R-G-B colour space. On the other hand, texture/sharpness changes in the V channel affect neither the brightness/reflectance nor the colour appearance, which is controlled by the H and S channels. Therefore, when merging the three channels in H-S-V colour space, only texture and sharpness change, while brightness/reflectance and colour are preserved. We follow this approach for all CaSSIS SRR results presented in this paper.
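A sketch of this HSV-based colour handling with OpenCV; srr_v stands for the trained single-channel MARSGAN predictor and is an assumed interface (returning a 4×-larger uint8 array):

```python
import cv2

def srr_colour_hsv(bgr, srr_v):
    """Run SRR on the V (brightness) channel only, so the colour appearance
    carried by H and S is preserved; H and S are bicubically upsampled to match."""
    h, s, v = cv2.split(cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV))
    v_sr = srr_v(v)
    size = (v_sr.shape[1], v_sr.shape[0])  # (width, height) for cv2.resize
    h_up = cv2.resize(h, size, interpolation=cv2.INTER_CUBIC)
    s_up = cv2.resize(s, size, interpolation=cv2.INTER_CUBIC)
    return cv2.cvtColor(cv2.merge([h_up, s_up, v_sr]), cv2.COLOR_HSV2BGR)
```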
In addition, CaSSIS uses multiple combinations of long-to-short wavelengths to synthesise colour in R-G-B colour space [101], e.g., NIR-PAN-BLU. Empirically, CaSSIS images generally have a better SNR in their longer-wavelength channels, i.e., the NIR band (centred at 936.7 nm), RED band (centred at 836.2 nm) and PAN band (centred at 675 nm), and a lower SNR in their shorter-wavelength channel, i.e., the BLU band (centred at 499.9 nm). Figure 6 shows that the BLU band is obviously noisier than the NIR and PAN bands. Therefore, running SRR on the R-G-B colour channels separately may provide an opportunity to produce a better SRR result via different treatment of the three channels, e.g., applying denoising to the BLU channel. However, the issue of the resulting different colour appearance needs to be tackled in a future study.
Finally, for validation purposes, we only select CaSSIS testing images that have one or more corresponding HiRISE observations. If multiple HiRISE images are available for comparison, the one captured at the closest date and/or Solar Longitude (Ls) to the CaSSIS scene is used. Note that none of these validation HiRISE images are used for training (see the Supplementary Materials for a list of the training HiRISE image IDs). The testing and validation datasets for the selected science targets are presented in Table 1. The proposed quantitative assessment is only applied to the results for Jezero Crater (see Section 3.1). For the science-oriented scenes, only visual qualitative comparisons are given (see Section 3.2). A collection of examples of the CaSSIS testing images for the proposed science targets can be found in Figure 2.

3. Results

3.1. Results and Assessment for Jezero Crater

In this section, we first demonstrate our CaSSIS SRR results over the Mars2020 Perseverance rover’s landing site, Jezero Crater. The input LR image is the 4 m/pixel CaSSIS NPB colour image (native resolution about 4.5 m/pixel), captured on 23 February 2021 in the morning (local Mars time). The HR reference image (for validation) is the 25 cm/pixel HiRISE RED band image (native resolution 29.3 cm/pixel; see https://www.uahirise.org/ESP_068294_1985 (accessed on 1 May 2021)), captured on 19 February 2021 at 14:55 in the afternoon (local Mars time). As the two images were captured very close in date, no obvious change of the Martian surface at the 1 m/pixel scale is expected. For example, both the CaSSIS and HiRISE images captured components of the rover system, which landed on 18 February 2021. However, due to different solar illumination directions (morning and afternoon lighting), some surface features may look different owing to surface bi-directional reflectance effects.
In this work, we down-sample the 25 cm HiRISE image to 1 m using GDAL’s “cubicspline” down-sampling method (https://gdal.org/programs/gdal_translate.html (accessed on 1 May 2021)) to simulate a 1 m view of the surface, in order to compare with the SRR results (with an effective resolution enhancement factor of ~3) at the 1 m scale. For the 8 cropped regions shown in Figure 7, a visual comparison of the SRR results from the three experiments (refer to Section 2.4) against the downsampled HiRISE image (1 m/pixel), as well as the original-resolution HiRISE image (at 0.25 m/pixel), can be found in Figure 8. The first experiment (second column of Figure 8) refers to CaSSIS SRR processing with the ESRGAN network trained with HiRISE images. The second experiment (third column of Figure 8) refers to CaSSIS SRR processing with the proposed MARSGAN network trained with the same HiRISE training dataset (but optimised with ESRGAN’s loss function and with a smaller patch size for faster convergence; hereafter referred to as MARSGAN-m1). The third experiment (fourth column of Figure 8) refers to CaSSIS SRR processing with our proposed MARSGAN network with our rebalanced loss function (hereafter referred to as MARSGAN-m2). For more details of the three experiments, please refer to Section 2.4.
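Both resampling steps used in this section can be reproduced with GDAL’s Python bindings; a sketch with hypothetical file names:

```python
from osgeo import gdal

# Down-sample 25 cm HiRISE to a 1 m grid with "cubicspline" resampling.
gdal.Translate("hirise_1m.tif", "hirise_25cm.tif",
               xRes=1.0, yRes=1.0, resampleAlg="cubicspline")
# Bicubically up-sample the 4 m CaSSIS image by 4x to the same 1 m scale.
gdal.Translate("cassis_1m.tif", "cassis_4m.tif",
               widthPct=400, heightPct=400, resampleAlg="cubic")
```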
Figure 8 shows zoomed-in views of the 8 selected areas for detailed comparison. The selected areas (crops A to H) contain different types of features around the landing area, and include a view of the rover’s jettisoned parachute and back-shell, shown in crop-A. The rover itself is not visible in the CaSSIS image (and thus in the SRR results) but is visible in the HiRISE images between two “blast patterned” bright features. From Figure 8, we can observe that, generally speaking, both the ESRGAN and MARSGAN results show a 2–4 times resolution enhancement in comparison to the input CaSSIS image and the reference HiRISE image. Our proposed MARSGAN models (MARSGAN-m1 and MARSGAN-m2) outperform the original ESRGAN model in terms of edge sharpness and realistic texture details. Although the larger-scale structural features (e.g., crater ridges, big rocks, dune patterns) appear fairly seamless between the CaSSIS SRR image and the 1 m HiRISE image (apart from their different illumination directions), some of the very fine scale features (e.g., rocks, ground textures) still show considerable differences between the SRR results and the 1 m HiRISE image. This is due to the ill-posed nature of SRR: if information is completely missing from the LR image, it cannot be recovered. Although texture synthesis is involved, we limit this process at the training stage to discourage perceptually pleasing solutions that contain artefacts or false textures (refer to Section 2.2 and Section 2.4).
Table 2 shows the statistics of the standard image quality metrics (refer to Section 2.3) for the input CaSSIS image, the ESRGAN SRR, MARSGAN-m1 SRR, and MARSGAN-m2 SRR results, and the validation HiRISE image (at 1 m/pixel resolution), for the 8 cropped areas shown in Figure 7 and Figure 8. In order to calculate PSNR and MSSIM using the 1 m/pixel downsampled HiRISE images as references, the input 4 m/pixel CaSSIS images are upscaled by a factor of 4 to 1 m/pixel using GDAL’s bicubic resizing function (see https://gdal.org/programs/gdal_translate.html (accessed on 1 May 2021)). As mentioned in Section 2.4, all SRR results in this work already have an upscaling factor of 4 and are therefore at the same scale as the reference 1 m/pixel down-sampled HiRISE images. All the HiRISE images are co-registered to the CaSSIS images.
In general, as shown in Table 2, the SRR results from ESRGAN, MARSGAN-m1, and MARSGAN-m2 achieve higher PSNR and MSSIM than the upscaled input CaSSIS image for all 8 areas. The MARSGAN results achieve overall higher PSNR than ESRGAN, with one outlier for crop-F, and higher MSSIM for all 8 areas. The higher MSSIM values of the MARSGAN-m1 and MARSGAN-m2 SRR images, compared to the ESRGAN SRR images, reflect that they contain more abundant and sharper structural features that are observable in the HiRISE image. MARSGAN-m2 achieves slightly better PSNR than MARSGAN-m1, and better MSSIM than MARSGAN-m1 for 7 areas, with one outlier for crop-B.
On the other hand, the BRISQUE and PIQE measurements directly reflect image quality in terms of sharpness, contrast, perceptual quality, and SNR. BRISQUE and PIQE score between 0 and 100, and lower values mean better image quality (see Section 2.3). From Table 2, we can observe the much better image quality scores of the MARSGAN-m2 SRR results compared to the input CaSSIS images as well as the ESRGAN SRR results. For some of the areas, MARSGAN-m2 achieves even better BRISQUE (crops C, D, E, G) and PIQE scores (crops C and E) than the 1 m HiRISE images. The BRISQUE and PIQE scores do not necessarily correlate with the amount of information in the image; for example, downsampled HiRISE images always contain more finer-scale information that is not recorded in the original CaSSIS images and hence not resolvable in the CaSSIS SRR images. Rather, BRISQUE and PIQE reflect image quality based on the information that is present.
Nonetheless, these image quality metrics do not directly reflect the achieved image resolution of the SRR results. In order to estimate the resolution of the SRR results, we perform edge sharpness measurements on high-contrast slanted edges using the Imatest® software (https://www.imatest.com/ (accessed on 1 May 2021)). The edge sharpness measurement counts the total number of pixels spanned by the 10% to 90% rise of a high-contrast edge profile (see Figure 9). If this pixel count is compared to the count for the same edge profile in the LR counterpart, their ratio can be used to estimate an enhancement factor between the two images. We perform this test on a rippled dune area on the northwest side of the largest crater in the same CaSSIS scene (MY36_014520_019_0), where many high-contrast edges are present. The MARSGAN-m2 result is compared against the original CaSSIS image at the 1 m/pixel scale (only the PAN band is used for this measurement).
In this assessment, we perform the slanted-edge measurement (https://www.imatest.com/docs/#sharpness (accessed on 1 May 2021)) using the Imatest® software for 20 high-contrast edges within 20 Regions of Interest (ROIs). Figure 9 shows zoomed-in views of the 20 ROIs from the CaSSIS image and the corresponding SRR image, 40 plots of the corresponding edge profiles (the orthogonal lines crossing the automatically detected edges), and the total number of pixels for a 10% to 90% rise along each profile line. The statistics in Figure 9 are summarised in Table 3. An enhancement factor between the MARSGAN SRR image and the input CaSSIS image is calculated, for each slanted edge, by dividing the pixel count of the 10% to 90% profile rise in the original CaSSIS image by the corresponding count in the MARSGAN SRR image. The average of the 20 slanted-edge measurements indicates a factor of 2.9625 ± 0.7× (~3×) resolution enhancement for the MARSGAN SRR result compared to the CaSSIS image. This agrees with our visual observation illustrated in Figure 8.
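The core of this measurement, the 10% to 90% rise distance of an edge profile, is straightforward to sketch in Python (assuming an approximately monotonic, pre-extracted profile; Imatest’s actual slanted-edge implementation is considerably more elaborate):

```python
import numpy as np

def rise_pixels(profile, lo=0.10, hi=0.90):
    """Pixels spanned by the 10%-90% rise of a normalised, monotonically
    increasing edge profile (the quantity behind Table 3)."""
    p = (profile - profile.min()) / (profile.max() - profile.min())
    x = np.arange(len(p), dtype=float)
    return abs(np.interp(hi, p, x) - np.interp(lo, p, x))

# Per-edge enhancement factor: rise in the input over rise in the SRR image.
# factor = rise_pixels(edge_profile_cassis) / rise_pixels(edge_profile_srr)
```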

3.2. Results and Visual Demonstration of Science Targets/Sites

Further to the initial assessment and validation work at the Perseverance rover’s landing site, we demonstrate CaSSIS SRR results using the proposed MARSGAN model (i.e., MARSGAN-m2) for 8 more CaSSIS scenes containing the different science targets introduced in Section 1.1 (examples are shown in Figure 2). These science targets include (a) bedrock layers (Site-1); (b) bright and dark slope streaks (Site-2); (c) defrosting dunes and dune gullies (Site-3); (d) gullies at Gasa Crater (Site-4); (e) recurring slope lineae at Hale Crater (Site-5); (f) scalloped depressions and dust devils at Peneus Patera (Site-6); (g) gullies at Selevac Crater (Site-7); (h) defrosting spiders (Site-8). Input CaSSIS image IDs and reference HiRISE image IDs are provided in Table 1. In this section, examples are shown as 4 small crops for each CaSSIS scene (full-strip results are provided in the Supplementary Materials).
Figure 10 shows examples of 4 cropped regions of the MARSGAN SRR result for Site-1, in comparison with the input 4 m CaSSIS image MY35_012491_213_0 and the down-sampled 1 m HiRISE image ESP_022619_1495. The CaSSIS and HiRISE images were taken at different local Mars times, i.e., CaSSIS in the morning and HiRISE in the afternoon, so obvious differences in illumination direction are apparent between the CaSSIS/SRR and HiRISE images. Note that HiRISE images are always acquired in the afternoon (~2–5 pm local Mars time). Crops A–D in Figure 10 show exposed bedrock and transverse aeolian ridges on the crater floor. We can observe clearer shapes and outlines of features in the CaSSIS SRR result and the 1 m HiRISE image. Despite some finer scale textures shown in HiRISE, the larger scale structural features shown in the CaSSIS SRR are similar to those in the HiRISE reference image.
Figure 11 shows examples of 4 cropped regions of the MARSGAN SRR result for Site-2, in comparison with the input 4 m CaSSIS image MY35_007017_173_0 and the down-sampled 1 m HiRISE image ESP_012383_1905. For this site, the CaSSIS image was also taken in the morning and is illuminated from the opposite side compared to the HiRISE image. Crops A–C in Figure 11 show bright and dark slope streak features. The CaSSIS SRR result reveals clearer boundaries of the slope streak features and has higher SNR than the original input. Crop D in Figure 11 shows transverse aeolian ridges inside a small crater. The CaSSIS SRR result has enhanced sharpness and structural clarity for the aeolian features and agrees broadly with the HiRISE image.
Figure 12 shows examples of 4 cropped regions of the MARSGAN SRR result for Site-3, in comparison with the input 4 m CaSSIS image MY35_010749_247_0 and the down-sampled 1 m HiRISE image ESP_059289_1210. This site highlights dunes and associated defrosting features on the Martian surface. The CaSSIS image was taken in the afternoon, but due to the large Ls (seasonal) difference, frost is no longer present in the HiRISE image, so the albedo patterns are no longer apparent; the dunes were covered by frost in the CaSSIS/SRR image. Crop D shows gully channels on dune slip faces with new deposits visible in the CaSSIS SRR image. The CaSSIS SRR image visually shows improved resolution of the dark defrosting spots and higher SNR compared to the input.
Figure 13 shows examples of 4 cropped regions of the MARSGAN SRR result for Site-4, in comparison with the input 4 m CaSSIS image MY35_012112_221_0 and the down-sampled 1 m HiRISE image ESP_065469_1440. The CaSSIS and HiRISE images were taken at very similar local Mars times and just under a month apart, resulting in very similar illumination/contrast. Crops A–C in Figure 13 show gully channels between bedrock outcrops at the rim of Gasa Crater, and crop D shows small bedrock outcrops on the floor of Gasa Crater. The CaSSIS SRR result brings out the details of the gullies and bedrock outcrops and is in good agreement with the reference HiRISE image. Note there are local mis-registrations/distortions between the CaSSIS/SRR and HiRISE images, due to the very limited overlap between the original CaSSIS and HiRISE images.
Figure 14 shows 4 cropped regions of the MARSGAN SRR result for Site-5, in comparison with the input 4 m CaSSIS image MY34_005640_218_1 and the down-sampled 1 m HiRISE image ESP_058618_1445. Site-5 highlights RSL features on the central peak of Hale Crater. In the CaSSIS SRR result, the RSL features have better visibility and clearer outlines than in the original CaSSIS image. The HiRISE image was taken one month apart from the CaSSIS image, with a local-time difference of about 3 h, giving a small illumination difference. Better measurement of changes in the RSL features should be possible using the CaSSIS SRR result after co-registration with HiRISE.
Figure 15 shows 4 cropped regions of the MARSGAN SRR result for Site-6, in comparison with the input 4 m CaSSIS image MY35_012488_241_0 and the down-sampled 1 m HiRISE image ESP_013952_1225. The CaSSIS image was taken in the morning, resulting in an opposite illumination direction compared to the HiRISE image. Site-6 highlights scalloped depression terrain and dust devil tracks. The dark linear features in crop D of Figure 15, which are dust devil tracks, differ between the CaSSIS/SRR and HiRISE images because the two were acquired 10 years apart and these features change on sub-annual timescales. The CaSSIS SRR result shows better structural information for the scalloped features in crop B of Figure 15 and more fine-scale detail in crops A and C. The overall noise level at this site is higher than at the other sites, and in crop D in particular the SNR improvement in the SRR result is limited, probably because the original input lacks any patterned texture or structure there.
Figure 16 shows 4 cropped regions of the MARSGAN SRR result for Site-7, in comparison with the input 4 m CaSSIS image MY35_012121_222_0 and the down-sampled 1 m HiRISE image ESP_065307_1425. Site-7 contains gullies in Selevac Crater. The CaSSIS and HiRISE images were taken on similar dates and at similar local Mars times and therefore have similar illumination and contrast. Crops A–D of Figure 16 show larger- and finer-scale gullies and rock outcrops at the crater's rim. Sharper edges, clearer structural detail, and better SNR can be observed in the CaSSIS SRR result in comparison to the input CaSSIS image. Finer-scale textures are missing from the SRR result in comparison to HiRISE but, as previously stated, we do not seek to introduce details that were not present in the LR input.
Finally, Figure 17 shows 4 cropped regions of the MARSGAN SRR result for Site-8 in comparison with the input 4 m CaSSIS image MY35_011777_268_0. Co-registration of the HiRISE (PSP_002081_1055) and CaSSIS/SRR images was not possible for this site owing to the large time and seasonal differences affecting these high-latitude features, so no reference samples are shown. Site-8 highlights frost-covered spider terrain. Better SNR and clearer structure of these features are observable in the CaSSIS SRR result in comparison to the input CaSSIS image.
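As an aside, all of the comparisons above rely on co-registering the CaSSIS/SRR products with HiRISE before inspection. For readers wishing to reproduce such comparisons, a minimal generic sketch using OpenCV is given below; the file names are hypothetical placeholders, and the feature-plus-homography approach is a simplification, not necessarily the co-registration method used in this work.

```python
import cv2
import numpy as np

# Generic feature-based co-registration sketch (hypothetical file names).
# It only illustrates aligning an SRR product to a reference HiRISE image
# so that change and quality comparisons can be made on a common grid.
srr = cv2.imread("cassis_srr.png", cv2.IMREAD_GRAYSCALE)
ref = cv2.imread("hirise_1m.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(5000)                       # detect up to 5000 keypoints
kp1, des1 = orb.detectAndCompute(srr, None)
kp2, des2 = orb.detectAndCompute(ref, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# Robustly fit a homography and warp the SRR image onto the HiRISE frame.
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
aligned = cv2.warpPerspective(srr, H, (ref.shape[1], ref.shape[0]))
```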

4. Discussion

4.1. Perceptual-Driven Solution or PSNR-Driven Solution

Perceptual-driven models generally produce SRR results with sharper edges and richer textures, leading to visually more pleasing results than PSNR-driven models. However, owing to the ill-posed nature of SRR, lost information and missing textures cannot be fully and correctly recreated from the LR image alone: the sharper and richer the recovered details, the more stochastic the solution. PSNR-driven SRR solutions, on the other hand, are generally smoother and carry less texture detail, but they are far less likely to create artefacts and synthetic textures. It was demonstrated in [85,91] that PSNR-oriented losses encourage a model to find the pixel-wise average of all potential solutions with sharp, high-frequency texture detail; the averaged solution is therefore smoother but less "synthetic".
Although SRR networks optimised for the best perceptual quality are currently popular in general computer vision research, they are not fit for purpose for remote sensing or scientific applications. The issue is illustrated in Figure 18 using two small samples of HiRISE images (ESP_029674_1650 and PSP_007455_1785). The first column shows the input LR images; the second column shows SRR images produced with an l1-loss-optimised ESRGAN model (representing "PSNR-driven" solutions); the third column shows SRR images produced with an ESRGAN model optimised using balanced perceptual and l1 losses, i.e., η = 1 in Equation (13) (representing "balanced" solutions); and the fourth column shows SRR images from an ESRGAN model trained with the perceptual loss only (representing "perceptual-driven" solutions). We can observe from Figure 18 that the PSNR-driven solution produces no artefacts but also produces no sharp detail, whereas the perceptual-driven solution produces the sharpest result and the richest texture. However, in the case of ESP_029674_1650, synthetic textures have been introduced into the image, and in the case of PSP_007455_1785, the shapes of small rocks have been synthetically altered compared to the original LR image. As shown in the third column of Figure 18, the ESRGAN model with balanced perceptual- and PSNR-oriented optimisation produces a good-quality result with no visible artefacts.
Our MARSGAN SRR solution feeds the model with stochastic variation by keeping the perceptual loss as a weighted term during training, but it also retains a strongly weighted MSE loss term to minimise texture synthesis and artefact production (see Section 2.4). A balanced SRR solution, with the best possible resolution enhancement and minimal artefact creation, is the overall objective of this work. As demonstrated in Section 3.2 with different science targets, no obvious synthetic artefacts were found in our MARSGAN SRR results.
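For illustration, such a rebalanced generator loss can be sketched in PyTorch as follows. The VGG feature cut (up to conv5_4, pre-activation, as in ESRGAN-style perceptual losses) and the default weighting are assumptions for this sketch rather than the exact MARSGAN configuration of Equation (13).

```python
import torch.nn as nn
from torchvision.models import vgg19

# Illustrative rebalanced SRR generator loss: a dominant pixel-wise (MSE)
# term plus a weighted perceptual term, echoing the discussion above.
class BalancedLoss(nn.Module):
    def __init__(self, eta=1.0):
        super().__init__()
        # Frozen VGG19 feature extractor, cut before the last activation.
        self.vgg = vgg19(pretrained=True).features[:35].eval()
        for p in self.vgg.parameters():
            p.requires_grad = False
        self.mse = nn.MSELoss()
        self.eta = eta

    def forward(self, sr, hr):
        pixel_loss = self.mse(sr, hr)                            # PSNR-oriented term
        perceptual_loss = self.mse(self.vgg(sr), self.vgg(hr))   # texture term
        return pixel_loss + self.eta * perceptual_loss
```

Setting eta well below 1 pushes the sketch towards a PSNR-driven solution, while a large eta pushes it towards a perceptual-driven one; in practice, input normalisation for the VGG network would also be required.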

4.2. Single Image SRR or Multi-Image SRR

SRR techniques can be divided into single-image and multi-image approaches (the latter including video SRR). In theory, multi-image SRR techniques have more information to exploit, for example, classic multi-frame subpixel information [70], multi-angle-view information [4], and spatio-temporal correlations [102,103], and could therefore produce more detail.
This is demonstrated in Figure 19, which compares a MARSGAN single-image SRR result using a single HiRISE image (PSP_010097_1655_RED) as input against the GPT [4] multi-image SRR result using a sequence of 8 overlapping HiRISE images (with different viewing angles) as input, alongside the original 25 cm/pixel HiRISE image (PSP_010097_1655_RED), over the Homeplate area visited by MER-A, Spirit. The multi-image SRR result brings out more detail, e.g., surface deposits and small rocks, while the single-image SRR result gives a sharper reconstruction of the rover tracks.
On the other hand, the GPT multi-image SRR processing (for 8 repeat input images of 1000 × 1000 pixels) took a whole day on a high-specification CPU machine (Intel Core i7 @ 2.8 GHz), while the MARSGAN SRR prediction for a same-sized single-image input took only a few minutes on the same CPU and less than a second on an NVIDIA® RTX 3090 GPU. The trade-off becomes obvious when we need to process a large image, such as a full-strip CaSSIS or HiRISE scene. Note that GPT SRR [4] is based on multi-angle view information rather than deep learning, and its key component is not suitable for GPU implementation.
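To make the runtime comparison concrete, a minimal sketch of tiled single-image SRR inference over a full strip is shown below. The generator object, tile size, and upscaling factor are illustrative placeholders rather than our production settings; in practice, overlapping tiles with blending would be used to avoid seams.

```python
import time
import torch

# Sketch of tiled single-image SRR inference over a full image strip.
# `generator` stands in for a trained MARSGAN-style model.
def super_resolve_strip(generator, strip, tile=256, scale=4, device="cuda"):
    # strip: (1, C, H, W) float tensor
    generator = generator.to(device).eval()
    _, c, h, w = strip.shape
    out = torch.zeros(1, c, h * scale, w * scale)
    with torch.no_grad():
        for y in range(0, h, tile):
            for x in range(0, w, tile):
                patch = strip[:, :, y:y + tile, x:x + tile].to(device)
                sr = generator(patch).cpu()
                out[:, :, y * scale:y * scale + sr.shape[2],
                          x * scale:x * scale + sr.shape[3]] = sr
    return out

# Rough timing, in the spirit of the CPU-vs-GPU comparison quoted above:
# t0 = time.time()
# sr = super_resolve_strip(model, strip, device="cpu")   # or device="cuda"
# print(f"elapsed: {time.time() - t0:.2f} s")
```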

4.3. Extendability with Other Datasets

This paper focuses on SRR processing of TGO CaSSIS images. However, it should be pointed out that the proposed MARSGAN model can also be applied to other extra-high-resolution (e.g., 0.25 m HiRISE) or medium-to-high-resolution (e.g., 6 m CTX and 18 m Compact Reconnaissance Imaging Spectrometer for Mars (CRISM)) Mars imaging datasets. Figure 20 shows an example of the MARSGAN SRR result in comparison to the original HiRISE colour image (ESP_068294_1985; https://www.uahirise.org/ESP_068360_1985 (accessed on 1 May 2021)) of the Perseverance rover's parachute at the landing site. Figure 21 shows examples of the MARSGAN SRR result in comparison to the original CTX image (rectJ21_052811_1983_XN_18N282W_v7pt1_6m_Eqc_latTs0_lon0; https://planetarymaps.usgs.gov/mosaic/mars2020_trn/CTX/ (accessed on 1 May 2021)) over Jezero Crater. Figure 22 shows an example of the MARSGAN SRR result in comparison to the original CRISM image (using bands 233, 78, and 13 of frt0000d3a4_07_if164l_trr3_raw, downloaded from PlanetServer at http://planetserver.eu/ (accessed on 1 May 2021)) over Capri Chaos, Valles Marineris. In the future, further improvement of the MARSGAN model can be expected from cross-instrument training, i.e., using different datasets at different resolutions to form the LR/HR training pairs.
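As a pointer for such cross-instrument training, LR/HR pairs can in principle be formed by degrading HR patches towards the target LR resolution. The sketch below (bicubic down-sampling plus mild synthetic noise, both simplifying assumptions, with an arbitrary noise level) indicates one possible way to construct them, e.g., 4× down-sampling of 25 cm HiRISE crops towards CaSSIS-like resolution.

```python
import torch
import torch.nn.functional as F

# Sketch of forming LR/HR training pairs by degrading HR patches.
# The bicubic + additive-noise degradation is a simplification of a
# realistic sensor model; 0.01 is an illustrative noise level.
def make_training_pair(hr_patch, scale=4):
    # hr_patch: (1, C, H, W) float tensor with values in [0, 1]
    lr = F.interpolate(hr_patch, scale_factor=1.0 / scale,
                       mode="bicubic", align_corners=False)
    lr = (lr + 0.01 * torch.randn_like(lr)).clamp(0.0, 1.0)  # mild noise
    return lr, hr_patch
```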

5. Conclusions

In this paper, we introduced the network architecture and training details of the proposed MARSGAN model for single-image SRR of TGO CaSSIS images. MARSGAN improves over the ESRGAN model by using adaptive weighted basic residual blocks, a multi-scale reconstruction scheme, and a rebalanced loss function. We showed the improvements of MARSGAN over ESRGAN for CaSSIS SRR of the Perseverance rover's landing area. Image-quality-based assessment (against down-sampled HiRISE images) and edge-sharpness-based effective resolution measurement were demonstrated for the landing site image, and a resolution enhancement factor of ~3× was estimated from the Imatest® slanted-edge measurements. Further demonstrations of CaSSIS SRR were given for 8 selected science-oriented scenes, which include many features unique to the Martian surface (e.g., bedrock layers, slope streaks, defrosting dunes, gullies, RSL, scalloped depressions, dust devils, and defrosting spiders). For these science study sites, we demonstrated general improvement of image SNR, improved edge sharpness for different feature outlines, and enhancement of high-frequency detail. Finally, the potential extendability of the proposed MARSGAN model was demonstrated with examples from HiRISE, CTX, and CRISM images. Future work will include scientific studies to establish what new information can be derived from the SRR results; in addition, SRR of multi-spectral data (i.e., CRISM) will be explored in the wavelength domain.
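For readers who wish to reproduce the edge-sharpness estimate without Imatest®, a simplified 1-D analogue of the 10–90% rise measurement used in Table 3 could look as follows. The toy profiles are invented for illustration only (they are not measurements from our data), and both profiles are assumed to be sampled on the same pixel grid.

```python
import numpy as np

# Simplified 1-D analogue of the slanted-edge sharpness measurement: count
# the samples spanned by the 10%-to-90% intensity rise across an edge
# profile, then form the enhancement factor as the ratio of the LR rise
# distance to the SRR rise distance.
def rise_distance(profile, lo=0.1, hi=0.9):
    p = (profile - profile.min()) / (profile.max() - profile.min())
    first_lo = int(np.argmax(p >= lo))   # first sample at >= 10% of the rise
    first_hi = int(np.argmax(p >= hi))   # first sample at >= 90% of the rise
    return first_hi - first_lo

lr_profile = np.array([0.10, 0.12, 0.20, 0.40, 0.60, 0.80, 0.90, 0.92])
sr_profile = np.array([0.10, 0.12, 0.30, 0.70, 0.90, 0.91, 0.92, 0.92])
factor = rise_distance(lr_profile) / rise_distance(sr_profile)  # 2.0 here
```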

Supplementary Materials

The following are available online at https://www.mdpi.com/article/10.3390/rs13091777/s1: a list of the HiRISE image IDs used for MARSGAN training; SRR results for the CaSSIS full-strip images; and all figures in full resolution.

Author Contributions

Conceptualization, Y.T. and J.-P.M.; methodology, Y.T. and J.-P.M.; software, Y.T.; validation, Y.T., S.J.C., and J.-P.M.; formal analysis, Y.T.; investigation, Y.T., J.-P.M., and S.J.C.; data curation, S.J.C., Y.T., A.R.D.P., N.T., G.C.; writing—original draft preparation, Y.T., S.J.C.; writing—review and editing, J.-P.M., Y.T., and S.J.C.; visualization, Y.T.; project administration, J.-P.M.; funding acquisition, J.-P.M., Y.T., and S.J.C. All authors have read and agreed to the published version of the manuscript.

Funding

The research leading to these results has received funding from the UKSA Aurora programme (2018–2021) under grant no. ST/S001891/1, as well as partial funding from the STFC MSSL Consolidated Grant ST/K000977/1. S.J.C. is grateful to the French Space Agency CNES for supporting her CaSSIS and HiRISE related work.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The research leading to these results has received funding from the UKSA Aurora programme (2018–2021) under grant ST/S001891/1, as well as partial funding from the STFC MSSL Consolidated Grant ST/K000977/1. S.J.C. is grateful to the French Space Agency CNES for supporting her CaSSIS and HiRISE related work. CaSSIS is a project of the University of Bern and funded through the Swiss Space Office via ESA's PRODEX programme. The instrument hardware development was also supported by the Italian Space Agency (ASI) (ASI-INAF agreement no. 2020-17-HH.0), INAF/Astronomical Observatory of Padova, and the Space Research Center (CBK) in Warsaw. Support from SGF (Budapest), the University of Arizona (Lunar and Planetary Lab.) and NASA are also gratefully acknowledged. Operations support from the UK Space Agency under grant ST/R003025/1 is also acknowledged.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Thomas, N.; Cremonese, G.; Ziethe, R.; Gerber, M.; Brändli, M.; Bruno, G.; Erismann, M.; Gambicorti, L.; Gerber, T.; Ghose, K.; et al. The colour and stereo surface imaging system (CaSSIS) for the ExoMars trace gas orbiter. Space Sci. Rev. 2017, 212, 1897–1944.
2. Malin, M.C.; Bell, J.F.; Cantor, B.A.; Caplinger, M.A.; Calvin, W.M.; Clancy, R.T.; Edgett, K.S.; Edwards, L.; Haberle, R.M.; James, P.B.; et al. Context camera investigation on board the Mars Reconnaissance Orbiter. J. Geophys. Res. Space Phys. 2007, 112.
3. McEwen, A.S.; Eliason, E.M.; Bergstrom, J.W.; Bridges, N.T.; Hansen, C.J.; Delamere, W.A.; Grant, J.A.; Gulick, V.C.; Herkenhoff, K.E.; Keszthelyi, L.; et al. Mars reconnaissance orbiter's high resolution imaging science experiment (HiRISE). J. Geophys. Res. Space Phys. 2007, 112.
4. Tao, Y.; Muller, J.-P. A novel method for surface exploration: Super-resolution restoration of Mars repeat-pass orbital imagery. Planet. Space Sci. 2016, 121, 103–114.
5. Bridges, J.C.; Clemmet, J.; Croon, M.; Sims, M.R.; Pullan, D.; Muller, J.P.; Tao, Y.; Xiong, S.; Putri, A.R.; Parker, T.; et al. Identification of the Beagle 2 lander on Mars. R. Soc. Open Sci. 2017, 4, 170785.
6. Grant, J.A.; Golombek, M.P.; Wilson, S.A.; Farley, K.A.; Williford, K.H.; Chen, A. The science process for selecting the landing site for the 2020 Mars rover. Planet. Space Sci. 2018, 164, 106–126.
7. Stack, K.M.; Williams, N.R.; Calef, F.; Sun, V.Z.; Williford, K.H.; Farley, K.A.; Eide, S.; Flannery, D.; Hughes, C.; Jacob, S.R.; et al. Photogeologic map of the perseverance rover field site in Jezero Crater constructed by the Mars 2020 Science Team. Space Sci. Rev. 2020, 216, 1–47.
8. Ehlmann, B.L.; Mustard, J.F.; Fassett, C.I.; Schon, S.C.; Head, J.W., III; Des Marais, D.J.; Grant, J.A.; Murchie, S.L. Clay minerals in delta deposits and organic preservation potential on Mars. Nat. Geosci. 2008, 1, 355–358.
9. Wray, J.J.; Murchie, S.L.; Squyres, S.W.; Seelos, F.P.; Tornabene, L.L. Diverse aqueous environments on ancient Mars revealed in the southern highlands. Geology 2009, 37, 1043–1046.
10. Breed, C.S.; Grolier, M.J.; McCauley, J.F. Morphology and distribution of common 'sand' dunes on Mars: Comparison with the Earth. J. Geophys. Res. Solid Earth 1979, 84, 8183–8204.
11. Bishop, M.A. Dark dunes of Mars: An orbit-to-ground multidisciplinary perspective of aeolian science. In Dynamic Mars; Elsevier: Amsterdam, The Netherlands, 2018; pp. 317–360.
12. Hayward, R.K.; Mullins, K.F.; Fenton, L.K.; Hare, T.M.; Titus, T.N.; Bourke, M.C.; Colaprete, A.; Christensen, P.R. Mars global digital dune database and initial science results. J. Geophys. Res. Space Phys. 2007, 112.
13. Balme, M.; Berman, D.C.; Bourke, M.C.; Zimbelman, J.R. Transverse aeolian ridges (TARs) on Mars. Geomorphology 2008, 101, 703–720.
14. Zimbelman, J.R. Transverse aeolian ridges on Mars: First results from HiRISE images. Geomorphology 2010, 121, 22–29.
15. Baker, M.M.; Lapotre, M.G.; Minitti, M.E.; Newman, C.E.; Sullivan, R.; Weitz, C.M.; Rubin, D.M.; Vasavada, A.R.; Bridges, N.T.; Lewis, K.W. The Bagnold Dunes in southern summer: Active sediment transport on Mars observed by the Curiosity rover. Geophys. Res. Lett. 2018, 45, 8853–8863.
16. Silvestro, S.; Fenton, L.K.; Vaz, D.A.; Bridges, N.T.; Ori, G.G. Ripple migration and dune activity on Mars: Evidence for dynamic wind processes. Geophys. Res. Lett. 2010, 37.
17. Hansen, C.J.; Bourke, M.; Bridges, N.T.; Byrne, S.; Colon, C.; Diniega, S.; Dundas, C.; Herkenhoff, K.; McEwen, A.; Mellon, M.; et al. Seasonal erosion and restoration of Mars' northern polar dunes. Science 2011, 331, 575–578.
18. Chojnacki, M.; Burr, D.M.; Moersch, J.E.; Michaels, T.I. Orbital observations of contemporary dune activity in Endeavor crater, Meridiani Planum, Mars. J. Geophys. Res. Space Phys. 2011, 116.
19. Berman, D.C.; Balme, M.R.; Michalski, J.R.; Clark, S.C.; Joseph, E.C. High-resolution investigations of transverse aeolian ridges on Mars. Icarus 2018, 312, 247–266.
20. Geissler, P.E. The birth and death of transverse aeolian ridges on Mars. J. Geophys. Res. Planets 2014, 119, 2583–2599.
21. Schorghofer, N.; Aharonson, O.; Gerstell, M.F.; Tatsumi, L. Three decades of slope streak activity on Mars. Icarus 2007, 191, 132–140.
22. Sullivan, R.; Thomas, P.; Veverka, J.; Malin, M.; Edgett, K.S. Mass movement slope streaks imaged by the Mars Orbiter Camera. J. Geophys. Res. Space Phys. 2001, 106, 23607–23633.
23. Aharonson, O.; Schorghofer, N.; Gerstell, M.F. Slope streak formation and dust deposition rates on Mars. J. Geophys. Res. Space Phys. 2003, 108.
24. Schorghofer, N.; King, C.M. Sporadic formation of slope streaks on Mars. Icarus 2011, 216, 159–168.
25. Heyer, T.; Kreslavsky, M.; Hiesinger, H.; Reiss, D.; Bernhardt, H.; Jaumann, R. Seasonal formation rates of martian slope streaks. Icarus 2019, 323, 76–86.
26. Bhardwaj, A.; Sam, L.; Martín-Torres, F.J.; Zorzano, M.P. Are Slope Streaks Indicative of Global-Scale Aqueous Processes on Contemporary Mars? Rev. Geophys. 2019, 57, 48–77.
27. Heyer, T.; Raack, J.; Hiesinger, H.; Jaumann, R. Dust devil triggering of slope streaks on Mars. Icarus 2020, 351, 113951.
28. Ferris, J.C.; Dohm, J.M.; Baker, V.R.; Maddock, T., III. Dark slope streaks on Mars: Are aqueous processes involved? Geophys. Res. Lett. 2002, 29, 128-1–128-4.
29. Diniega, S.; Byrne, S.; Bridges, N.T.; Dundas, C.M.; McEwen, A.S. Seasonality of present-day Martian dune-gully activity. Geology 2010, 38, 1047–1050.
30. Pasquon, K.; Gargani, J.; Nachon, M.; Conway, S.J.; Massé, M.; Jouannic, G.; Balme, M.R.; Costard, F.; Vincendon, M. Are different Martian gully morphologies due to different processes on the Kaiser dune field? Geol. Soc. Lond. Spec. Publ. 2019, 467, 145–164.
31. Pasquon, K.; Gargani, J.; Massé, M.; Vincendon, M.; Conway, S.J.; Séjourné, A.; Jomelli, V.; Balme, M.R.; Lopez, S.; Guimpier, A. Present-day development of gully-channel sinuosity by carbon dioxide gas supported flows on Mars. Icarus 2019, 329, 296–313.
32. Gardin, E.; Allemand, P.; Quantin, C.; Thollot, P. Defrosting, dark flow features, and dune activity on Mars: Example in Russell crater. J. Geophys. Res. Space Phys. 2010, 115.
33. Kossacki, K.J.; Leliwa-Kopystyński, J. Non-uniform seasonal defrosting of subpolar dune field on Mars. Icarus 2004, 168, 201–204.
34. Hansen, C.J.; Byrne, S.; Portyankina, G.; Bourke, M.; Dundas, C.; McEwen, A.; Mellon, M.; Pommerol, A.; Thomas, N. Observations of the northern seasonal polar cap on Mars: I. Spring sublimation activity and processes. Icarus 2013, 225, 881–897.
35. Kieffer, H.H.; Christensen, P.R.; Titus, T.N. CO2 jets formed by sublimation beneath translucent slab ice in Mars' seasonal south polar ice cap. Nature 2006, 442, 793–796.
36. Dundas, C.M.; McEwen, A.S.; Diniega, S.; Hansen, C.J.; Byrne, S.; McElwaine, J.N. The formation of gullies on Mars today. Geol. Soc. Lond. Spec. Publ. 2019, 467, 67–94.
37. Dundas, C.M.; Diniega, S.; Hansen, C.J.; Byrne, S.; McEwen, A.S. Seasonal activity and morphological changes in Martian gullies. Icarus 2012, 220, 124–143.
38. Dundas, C.M.; McEwen, A.S.; Diniega, S.; Byrne, S.; Martinez-Alonso, S. New and recent gully activity on Mars as seen by HiRISE. Geophys. Res. Lett. 2010, 37.
39. Dundas, C.M.; Diniega, S.; McEwen, A.S. Long-term monitoring of Martian gully formation and evolution with MRO/HiRISE. Icarus 2015, 251, 244–263.
40. Tornabene, L.L.; Osinski, G.R.; McEwen, A.S.; Boyce, J.M.; Bray, V.J.; Caudill, C.M.; Grant, J.A.; Hamilton, C.W.; Mattson, S.S.; Mouginis-Mark, P.J. Widespread crater-related pitted materials on Mars: Further evidence for the role of target volatiles during the impact process. Icarus 2012, 220, 348–368.
41. Boyce, J.M.; Wilson, L.; Mouginis-Mark, P.J.; Hamilton, C.W.; Tornabene, L.L. Origin of small pits in martian impact craters. Icarus 2012, 221, 262–275.
42. McEwen, A.S.; Ojha, L.; Dundas, C.M.; Mattson, S.S.; Byrne, S.; Wray, J.J.; Cull, S.C.; Murchie, S.L.; Thomas, N.; Gulick, V.C. Seasonal flows on warm Martian slopes. Science 2011, 333, 740–743.
43. Ojha, L.; McEwen, A.; Dundas, C.; Byrne, S.; Mattson, S.; Wray, J.; Masse, M.; Schaefer, E. HiRISE observations of recurring slope lineae (RSL) during southern summer on Mars. Icarus 2014, 231, 365–376.
44. Munaretto, G.; Pajola, M.; Cremonese, G.; Re, C.; Lucchetti, A.; Simioni, E.; McEwen, A.; Pommerol, A.; Becerra, P.; Conway, S.; et al. Implications for the origin and evolution of Martian Recurring Slope Lineae at Hale crater from CaSSIS observations. Planet. Space Sci. 2020, 187, 104947.
45. Stillman, D.E.; Michaels, T.I.; Grimm, R.E.; Harrison, K.P. New observations of martian southern mid-latitude recurring slope lineae (RSL) imply formation by freshwater subsurface flows. Icarus 2014, 233, 328–341.
46. Stillman, D.E.; Grimm, R.E. Two pulses of seasonal activity in martian southern mid-latitude recurring slope lineae (RSL). Icarus 2018, 302, 126–133.
47. McEwen, A.S.; Schaefer, E.I.; Dundas, C.M.; Sutton, S.S.; Tamppari, L.K.; Chojnacki, M. Mars: Abundant Recurring Slope Lineae (RSL) Following the Planet-Encircling Dust Event (PEDE) of 2018. J. Geophys. Res. Planets 2020.
48. Gough, R.V.; Nuding, D.L.; Archer, P.D., Jr.; Fernanders, M.S.; Guzewich, S.D.; Tolbert, M.A.; Toigo, A.D. Changes in Soil Cohesion Due to Water Vapor Exchange: A Proposed Dry-Flow Trigger Mechanism for Recurring Slope Lineae on Mars. Geophys. Res. Lett. 2020, 47.
49. Vincendon, M.; Pilorget, C.; Carter, J.; Stcherbinine, A. Observational evidence for a dry dust-wind origin of Mars seasonal dark flows. Icarus 2019, 325, 115–127.
50. Ojha, L.; Wilhelm, M.B.; Murchie, S.L.; McEwen, A.S.; Wray, J.J.; Hanley, J.; Massé, M.; Chojnacki, M. Spectral evidence for hydrated salts in recurring slope lineae on Mars. Nat. Geosci. 2015, 8, 829–832.
51. Jones, A.P.; McEwen, A.S.; Tornabene, L.L.; Baker, V.R.; Melosh, H.J.; Berman, D.C. A geomorphic analysis of Hale crater, Mars: The effects of impact into ice-rich crust. Icarus 2011, 211, 259–272.
52. El-Maarry, M.R.; Dohm, J.M.; Michael, G.; Thomas, N.; Maruyama, S. Morphology and evolution of the ejecta of Hale crater in Argyre basin, Mars: Results from high resolution mapping. Icarus 2013, 226, 905–922.
53. Collins-May, J.L.; Carr, J.R.; Balme, M.R.; Ross, N.; Russell, A.J.; Brough, S.; Gallagher, C. Postimpact Evolution of the Southern Hale Crater Ejecta, Mars. J. Geophys. Res. Planets 2020, 125, 6302.
54. Séjourné, A.; Costard, F.; Gargani, J.; Soare, R.J.; Fedorov, A.; Marmo, C. Scalloped depressions and small-sized polygons in western Utopia Planitia, Mars: A new formation hypothesis. Planet. Space Sci. 2011, 59, 412–422.
55. Lefort, A.; Russell, P.S.; Thomas, N. Scalloped terrains in the Peneus and Amphitrites Paterae region of Mars as observed by HiRISE. Icarus 2010, 205, 259–268.
56. Zanetti, M.; Hiesinger, H.; Reiss, D.; Hauber, E.; Neukum, G. Distribution and evolution of scalloped terrain in the southern hemisphere, Mars. Icarus 2010, 206, 691–706.
57. Dundas, C.M. Effects of varying obliquity on Martian sublimation thermokarst landforms. Icarus 2017, 281, 115–120.
58. Soare, R.J.; Conway, S.J.; Gallagher, C.; Dohm, J.M. Ice-rich (periglacial) vs icy (glacial) depressions in the Argyre region, Mars: A proposed cold-climate dichotomy of landforms. Icarus 2017, 282, 70–83.
59. Thomas, P.; Gierasch, P.J. Dust devils on Mars. Science 1985, 230, 175–177.
60. Balme, M.; Greeley, R. Dust devils on Earth and Mars. Rev. Geophys. 2006, 44.
61. Whelley, P.L.; Greeley, R. The distribution of dust devil activity on Mars. J. Geophys. Res. Space Phys. 2008, 113.
62. Reiss, D.; Fenton, L.; Neakrase, L.; Zimmerman, M.; Statella, T.; Whelley, P.; Rossi, A.P.; Balme, M. Dust devil tracks. Space Sci. Rev. 2016, 203, 143–181.
63. Forsberg-Taylor, N.K.; Howard, A.D.; Craddock, R.A. Crater degradation in the Martian highlands: Morphometric analysis of the Sinus Sabaeus region and simulation modeling suggest fluvial processes. J. Geophys. Res. Space Phys. 2004, 109.
64. Craddock, R.A.; Maxwell, T.A. Geomorphic evolution of the Martian highlands through ancient fluvial processes. J. Geophys. Res. Space Phys. 1993, 98, 3453–3468.
65. Piqueux, S.; Byrne, S.; Richardson, M.I. Sublimation of Mars's southern seasonal CO2 ice cap and the formation of spiders. J. Geophys. Res. Planets 2003, 108.
66. Hao, J.; Michael, G.G.; Adeli, S.; Jaumann, R. Araneiform terrain formation in Angustus Labyrinthus, Mars. Icarus 2019, 317, 479–490.
67. Thomas, N.; Portyankina, G.; Hansen, C.J.; Pommerol, A. HiRISE observations of gas sublimation-driven activity in Mars' southern polar regions: IV. Fluid dynamics models of CO2 jets. Icarus 2011, 212, 66–85.
68. Hansen, C.J.; Thomas, N.; Portyankina, G.; McEwen, A.; Becker, T.; Byrne, S.; Herkenhoff, K.; Kieffer, H.; Mellon, M. HiRISE observations of gas sublimation-driven activity in Mars' southern polar regions: I. Erosion of the surface. Icarus 2010, 205, 283–295.
69. Portyankina, G.; Markiewicz, W.J.; Thomas, N.; Hansen, C.J.; Milazzo, M. HiRISE observations of gas sublimation-driven activity in Mars' southern polar regions: III. Models of processes involving translucent ice. Icarus 2010, 205, 311–320.
70. Tsai, R.Y.; Huang, T.S. Multiframe Image Restoration and Registration. In Advances in Computer Vision and Image Processing; JAI Press Inc.: New York, NY, USA, 1984; pp. 317–339.
71. Keren, D.; Peleg, S.; Brada, R. Image sequence enhancement using subpixel displacements. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Ann Arbor, MI, USA, 5–9 June 1988; pp. 742–746.
72. Hardie, R.C.; Barnard, K.J.; Armstrong, E.E. Joint MAP registration and high resolution image estimation using a sequence of undersampled images. IEEE Trans. Image Process. 1997, 6, 1621–1633.
73. Farsiu, S.; Robinson, D.; Elad, M.; Milanfar, P. Fast and robust multi-frame super-resolution. IEEE Trans. Image Process. 2004, 13, 1327–1344.
74. Yuan, Q.; Zhang, L.; Shen, H. Multiframe super-resolution employing a spatially weighted total variation model. IEEE Trans. Circuits Syst. Video Technol. 2012, 22, 379–392.
75. Tao, Y.; Muller, J.-P. Super-resolution restoration of MISR images using the UCL MAGiGAN system. Remote Sens. 2019, 11, 52.
76. Dong, C.; Loy, C.C.; He, K.; Tang, X. Learning a deep convolutional network for image super-resolution. In Proceedings of the European Conference on Computer Vision (ECCV), Zurich, Switzerland, 6–12 September 2014; pp. 184–199.
77. Kim, J.; Kwon Lee, J.; Mu Lee, K. Accurate image super-resolution using very deep convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1646–1654.
78. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations (ICLR). arXiv 2014, arXiv:1409.1556.
79. Dong, C.; Loy, C.C.; Tang, X. Accelerating the super-resolution convolutional neural network. In Proceedings of the European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 11–14 October 2016; pp. 391–407.
80. Shi, W.; Caballero, J.; Huszár, F.; Totz, J.; Aitken, A.P.; Bishop, R.; Rueckert, D.; Wang, Z. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1874–1883.
81. Lim, B.; Son, S.; Kim, H.; Nah, S.; Lee, K.M. Enhanced deep residual networks for single image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 136–144.
82. Yu, J.; Fan, Y.; Yang, J.; Xu, N.; Wang, Z.; Wang, X.; Huang, T. Wide activation for efficient and accurate image super-resolution. arXiv 2018, arXiv:1808.08718.
83. Ahn, N.; Kang, B.; Sohn, K.A. Fast, accurate, and lightweight super-resolution with cascading residual network. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 252–268.
84. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
85. Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4681–4690.
86. Kim, J.; Lee, J.K.; Lee, K.M. Deeply-recursive convolutional network for image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1637–1645.
87. Tai, Y.; Yang, J.; Liu, X. Image super-resolution via deep recursive residual network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 3147–3155.
88. Wang, C.; Li, Z.; Shi, J. Lightweight image super-resolution with adaptive weighted learning network. arXiv 2019, arXiv:1904.02358.
89. Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; Fu, Y. Image super-resolution using very deep residual channel attention networks. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 286–301.
90. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial networks. arXiv 2014, arXiv:1406.2661.
91. Sajjadi, M.S.; Scholkopf, B.; Hirsch, M. EnhanceNet: Single image super-resolution through automated texture synthesis. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 4491–4500.
92. Wang, X.; Yu, K.; Wu, S.; Gu, J.; Liu, Y.; Dong, C.; Qiao, Y.; Loy, C.C. ESRGAN: Enhanced super-resolution generative adversarial networks. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops, Munich, Germany, 8–14 September 2018.
93. Rakotonirina, N.C.; Rasoanaivo, A. ESRGAN+: Further improving enhanced super-resolution generative adversarial network. In Proceedings of the ICASSP 2020–2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2020; pp. 3637–3641.
94. Zhang, W.; Liu, Y.; Dong, C.; Qiao, Y. RankSRGAN: Generative adversarial networks with ranker for image super-resolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27–28 October 2019; pp. 3096–3105.
95. Jolicoeur-Martineau, A. The relativistic discriminator: A key element missing from standard GAN. arXiv 2018, arXiv:1807.00734.
96. Tao, Y.; Muller, J.-P.; Poole, W. Automated localisation of Mars rovers using co-registered HiRISE-CTX-HRSC orthorectified images and wide baseline Navcam orthorectified mosaics. Icarus 2016, 280, 139–157.
97. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
98. Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-Reference Image Quality Assessment in the Spatial Domain. IEEE Trans. Image Process. 2012, 21, 4695–4708.
99. Venkatanath, N.; Praneeth, D.; Chandrasekhar, B.M.; Channappayya, S.S.; Medasani, S.S. Blind Image Quality Evaluation Using Perception Based Features. In Proceedings of the 21st National Conference on Communications (NCC), Mumbai, India, 27 February–1 March 2015.
100. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
101. Tornabene, L.L.; Seelos, F.P.; Pommerol, A.; Thomas, N.; Caudill, C.M.; Becerra, P.; Bridges, J.C.; Byrne, S.; Cardinale, M.; Chojnacki, M.; et al. Image Simulation and Assessment of the Colour and Spatial Capabilities of the Colour and Stereo Surface Imaging System (CaSSIS) on the ExoMars Trace Gas Orbiter. Space Sci. Rev. 2018, 214, 18.
102. Salvetti, F.; Mazzia, V.; Khaliq, A.; Chiaberge, M. Multi-image Super Resolution of Remotely Sensed Images Using Residual Feature Attention Deep Neural Networks. arXiv 2020, arXiv:2007.03107.
103. Chu, M.; Xie, Y.; Leal-Taixé, L.; Thuerey, N. Temporally coherent GANs for video super-resolution (TecoGAN). arXiv 2018, arXiv:1811.09393.
Figure 1. An example of a small crop of a CaSSIS image (MY35_007017_173_0_NPB) super-resolved with MARSGAN single-image SRR, clearly revealing a small crater feature.
Figure 2. Examples of cropped CaSSIS images (in NPB colour) for the 8 study sites. (a) Site 1: Bedrock layers (MY35_012491_213_0); (b) Site 2: Bright and dark slope streaks (MY35_007017_173_0); (c) Site 3: Defrosting dunes and dune gullies (MY35_010749_247_0); (d) Site 4: Possible new gully activity at Gasa Crater (MY35_012112_221_0); (e) Site 5: Recurring slope lineae at Hale Crater (MY34_005640_218_1); (f) Site 6: Scalloped depressions and dust devils at Peneus Patera (MY35_012488_241_0); (g) Site 7: Gullies at Selevac Crater (MY35_012121_222_0); (h) Site 8: Defrosting spiders (MY35_011777_268_0). N.B. the scale bar shown in (e) applies to all sub-figures.
Figure 3. Network architecture of the proposed MARSGAN.
Figure 4. Examples of the HiRISE image crops used for training our networks, containing various unique features of the Martian surface. (a) Dunes at Herschel Crater (ESP_037948_1645); (b) Small craters (possibly filled by deltaic deposits) (PSP_006954_1885); (c) Columbia Hills (PSP_001513_1655); (d) Light-toned layering at a Noctis region pit (ESP_017399_1680); (e) Slopes in Coprates Chasma (ESP_030426_1685); (f) Pitted cones in Melas Chasma (ESP_043850_1685); (g) Eroded scallops with layers (PSP_001938_2265); (h) Gullies within the central pit of Bamberg Crater (PSP_010301_2200); (i) Lava falls in northern Kasei Valles (ESP_040659_2025); (j) Putative salt deposits in Terra Sirenum (PSP_005811_1470); (k) Mound of sedimentary rocks in Gale Crater (PSP_006855_1750); (l) Gorgonum Chaos (ESP_016004_1425). For more detail, please refer to the HiRISE site at https://www.uahirise.org/sim/ (accessed on 1 May 2021).
Figure 5. Example of the effect of SRR applied to each individual colour channel in R-G-B colour space and of SRR applied to the brightness intensity channel in H-S-V colour space, in comparison to the input.
Figure 6. An example of the CaSSIS N-P-B colour image (MY36_014520_019_0) and its corresponding NIR, PAN, and BLU bands, visually showing the different SNR levels of the NIR, PAN, and BLU bands.
Figure 7. An overview map, from the CaSSIS (MY36_014520_019_0) NPB colour image and the HiRISE (ESP_068294_1985) RED band greyscale image, of the Perseverance rover's landing site, Jezero Crater, showing the locations (red bounding boxes, crops A to H) of the 8 cropped areas used for zoom-in comparison (see Figure 8) and quantitative assessment (see Table 2).
Figure 8. CaSSIS SRR results from ESRGAN, MARSGAN-m1, and MARSGAN-m2 for the 8 cropped areas at Jezero Crater, in comparison with the original 4 m/pixel CaSSIS input image, the 1 m/pixel down-sampled HiRISE image, and the 0.25 m/pixel original HiRISE image. See Figure 7 for the locations of the crops.
Figure 9. Slanted-edge profile measurements from 20 automatically detected high-contrast edges in a rippled dune field at Jezero Crater, using the Imatest® software, for the original CaSSIS image (MY36_014520_019_0; first and third columns) and the MARSGAN SRR image (second and fourth columns). For each slanted edge, a sub-figure shows a profile line crossing the detected edge; the Imatest® measurement of the total number of pixels for a 10% to 90% rise along the profile line is shown inside each plot. For the text inside the plots, please refer to the original full-resolution figure in the Supplementary Materials.
Figure 10. (Site-1) Cropped examples (A–D) of the 4 m CaSSIS image (MY35_012491_213_0), the MARSGAN CaSSIS SRR result, and the 1 m down-sampled HiRISE image (ESP_022619_1495).
Figure 11. (Site-2) Cropped examples (A–D) of the 4 m CaSSIS image (MY35_007017_173_0), the MARSGAN CaSSIS SRR result, and the 1 m down-sampled HiRISE image (ESP_012383_1905).
Figure 12. (Site-3) Cropped examples (A–D) of the 4 m CaSSIS image (MY35_010749_247_0), the MARSGAN CaSSIS SRR result, and the 1 m down-sampled HiRISE image (ESP_059289_1210). Note the very different appearance of the HiRISE image due to the large time gap.
Figure 13. (Site-4) Cropped examples (A–D) of the 4 m CaSSIS image (MY35_012112_221_0), the MARSGAN CaSSIS SRR result, and the 1 m down-sampled HiRISE image (ESP_065469_1440).
Figure 14. (Site-5) Cropped examples (A–D) of the 4 m CaSSIS image (MY34_005640_218_1), the MARSGAN CaSSIS SRR result, and the 1 m down-sampled HiRISE image (ESP_058618_1445).
Figure 15. (Site-6) Cropped examples (A–D) of the 4 m CaSSIS image (MY35_012488_241_0), the MARSGAN CaSSIS SRR result, and the 1 m down-sampled HiRISE image (ESP_013952_1225).
Figure 16. (Site-7) Cropped examples (A–D) of the 4 m CaSSIS image (MY35_012121_222_0), the MARSGAN CaSSIS SRR result, and the 1 m down-sampled HiRISE image (ESP_065307_1425).
Figure 17. (Site-8) Cropped examples (A–D) of the 4 m CaSSIS image (MY35_011777_268_0), the MARSGAN CaSSIS SRR result, and the 1 m down-sampled HiRISE image (PSP_002081_1055).
Figure 18. Illustration with HiRISE SRR images using ESRGAN models optimised with the l1 loss function ("PSNR-oriented"), the VGG loss function ("perceptual-oriented"), and our balanced loss function ("balanced"), showing the impact of perceptual-driven versus PSNR-driven training/prediction.
Figure 19. Comparison of the 25 cm/pixel HiRISE image (PSP_010097_1655_RED), the MARSGAN single-image SRR result, and the GPT [4] multi-image SRR result, cropped over the Homeplate area.
Figure 20. Comparison of the 25 cm/pixel HiRISE image (ESP_068294_1985) and the MARSGAN SRR result, cropped over a small ROI at the landing site, showing the Perseverance rover's parachute.
Figure 21. Comparison of four cropped examples of the 6 m/pixel CTX image (J21_052811_1983_XN_18N282W) and the MARSGAN SRR result at the Perseverance rover's landing site, Jezero Crater.
Figure 22. Comparison of the 18 m/pixel CRISM image (frt0000d3a4_07_if164l_trr3_raw; using bands 233, 78, and 13 for RGB colour) and the MARSGAN SRR result over Capri Chaos, Valles Marineris.
Table 1. Testing CaSSIS scenes and overlapping HiRISE images for the selected science targets.

| Site ID/Name | Science Target (Location) | CaSSIS ID | CaSSIS Date | CaSSIS Local Time | CaSSIS Ls | HiRISE ID | HiRISE Date | HiRISE Local Time | HiRISE Ls |
|---|---|---|---|---|---|---|---|---|---|
| 1. Argyre Basin | Bedrock layers (−30.455, 313.292) | MY35_012491_213_0 | 2020-09-10 | 8:27 | 275.2° | ESP_022619_1495 | 2011-05-24 | 14:31 | 298.7° |
| 2. Arabia Terra | Bright & dark slope streaks (10.409, 41.696) | MY35_007017_173_0 | 2019-06-20 | 9:00 | 41.8° | ESP_012383_1905 | 2009-03-18 | 15:32 | 229.5° |
| 3. Noachis Terra | Defrosting dunes & dune gullies (−58.618, 8.79) | MY35_010749_247_0 | 2020-04-20 | 17:38 | 187.0° | ESP_059289_1210 | 2019-03-21 | 14:30 | 358.9° |
| 4. Gasa Crater | Gullies (−35.731, 129.436) | MY35_012112_221_0 | 2020-08-10 | 15:43 | 255.6° | ESP_065469_1440 | 2020-07-14 | 15:50 | 238.7° |
| 5. Hale Crater | Recurring slope lineae (−35.504, 323.454) | MY34_005640_218_1 | 2019-02-27 | 11:09 | 347.9° | ESP_058618_1445 | 2019-01-27 | 14:06 | 331.5° |
| 6. Peneus Patera | Scalloped depressions & dust devils (−57.062, 54.544) | MY35_012488_241_0 | 2020-09-10 | 9:36 | 275.1° | ESP_013952_1225 | 2009-07-18 | 14:36 | 305.6° |
| 7. Selevac Crater | Crater & gullies (−37.386, 228.946) | MY35_012121_222_0 | 2020-08-11 | 15:34 | 256.1° | ESP_065307_1425 | 2020-07-02 | 15:46 | 230.7° |
| 8. South pole | Defrosting spiders (−74.020, 168.675) | MY35_011777_268_0 | 2020-07-14 | 2:02 | 238.2° | PSP_002081_1055 | 2007-01-05 | 16:15 | 161.8° |
Table 2. Statistics of the image quality metrics for the input CaSSIS image (LR; at 4 m/pixel native resolution) and upscaled to 1 m/pixel for ESRGAN SRR result, MARSGAN-m1 SRR result, MARSGAN-m2 SRR result, and the down-sampled HiRISE image (HR reference image; at 1 m/pixel) for the 8 areas (as shown in Figure 7 and Figure 8).

| Area ID | Image | PSNR | MSSIM | BRISQUE % | PIQE % |
|---|---|---|---|---|---|
| A | CaSSIS 4 m (upscaled to 1 m) | 26.0443 | 0.4259 | 52.4714 | 89.5445 |
| A | ESRGAN SRR | 27.4360 | 0.6447 | 45.2599 | 58.2798 |
| A | MARSGAN-m1 SRR | 28.3800 | 0.6628 | 44.3843 | 48.1842 |
| A | MARSGAN-m2 SRR | 28.8617 | 0.7348 | 40.8888 | 37.9551 |
| A | HiRISE 1 m | - | 1.0 | 37.3207 | 17.8052 |
| B | CaSSIS 4 m (upscaled to 1 m) | 25.2536 | 0.5010 | 55.2349 | 89.4813 |
| B | ESRGAN SRR | 27.1629 | 0.6266 | 44.8523 | 62.1144 |
| B | MARSGAN-m1 SRR | 27.5165 | 0.7527 | 43.4409 | 58.2852 |
| B | MARSGAN-m2 SRR | 27.5788 | 0.7121 | 43.3642 | 57.8908 |
| B | HiRISE 1 m | - | 1.0 | 40.0622 | 39.2406 |
| C | CaSSIS 4 m (upscaled to 1 m) | 26.6270 | 0.5890 | 62.2095 | 89.4329 |
| C | ESRGAN SRR | 27.5628 | 0.6099 | 51.2506 | 53.4608 |
| C | MARSGAN-m1 SRR | 28.2237 | 0.7378 | 49.2374 | 52.4337 |
| C | MARSGAN-m2 SRR | 28.7730 | 0.7970 | 40.2497 | 37.9989 |
| C | HiRISE 1 m | - | 1.0 | 42.8763 | 39.1884 |
| D | CaSSIS 4 m (upscaled to 1 m) | 24.8450 | 0.4129 | 55.6545 | 89.3675 |
| D | ESRGAN SRR | 26.9355 | 0.5282 | 44.2364 | 69.3333 |
| D | MARSGAN-m1 SRR | 27.6077 | 0.5479 | 34.3705 | 54.2366 |
| D | MARSGAN-m2 SRR | 28.6258 | 0.6231 | 29.2820 | 45.4305 |
| D | HiRISE 1 m | - | 1.0 | 29.5525 | 39.2207 |
| E | CaSSIS 4 m (upscaled to 1 m) | 23.4176 | 0.5025 | 46.6789 | 91.6742 |
| E | ESRGAN SRR | 24.4753 | 0.7128 | 40.5757 | 89.0071 |
| E | MARSGAN-m1 SRR | 24.9328 | 0.7348 | 40.7020 | 75.0569 |
| E | MARSGAN-m2 SRR | 25.9999 | 0.7434 | 40.3389 | 54.8428 |
| E | HiRISE 1 m | - | 1.0 | 41.9687 | 69.8425 |
| F | CaSSIS 4 m (upscaled to 1 m) | 23.0258 | 0.7153 | 66.6770 | 89.5689 |
| F | ESRGAN SRR | 25.1195 | 0.8354 | 54.0616 | 55.4445 |
| F | MARSGAN-m1 SRR | 24.5218 | 0.8545 | 41.8365 | 47.2499 |
| F | MARSGAN-m2 SRR | 25.2674 | 0.8667 | 44.0096 | 48.2412 |
| F | HiRISE 1 m | - | 1.0 | 43.4908 | 47.9397 |
| G | CaSSIS 4 m (upscaled to 1 m) | 25.0528 | 0.4539 | 54.5540 | 89.6983 |
| G | ESRGAN SRR | 26.1769 | 0.6643 | 45.1263 | 69.5151 |
| G | MARSGAN-m1 SRR | 26.8709 | 0.7590 | 43.9563 | 57.2191 |
| G | MARSGAN-m2 SRR | 27.0346 | 0.7659 | 41.5752 | 58.0473 |
| G | HiRISE 1 m | - | 1.0 | 42.4498 | 48.9388 |
| H | CaSSIS 4 m (upscaled to 1 m) | 26.6873 | 0.5973 | 53.3890 | 89.5466 |
| H | ESRGAN SRR | 27.0394 | 0.7170 | 44.7894 | 69.0773 |
| H | MARSGAN-m1 SRR | 27.9313 | 0.7945 | 43.4202 | 63.0145 |
| H | MARSGAN-m2 SRR | 28.1564 | 0.8121 | 41.7960 | 58.9527 |
| H | HiRISE 1 m | - | 1.0 | 36.9841 | 51.9500 |
Table 3. Summary of the statistics from Figure 9, and estimation of enhancement factor, from the total pixel counts of 10% to 90% profile rise crossing the 20 automatically detected slanted-edges, for the input CaSSIS image (MY36_014520_019_0) and MARSGAN SRR image.

| Slanted-Edge ID | CaSSIS Image (Total Number of Pixels for 10–90% Profile Rise) | MARSGAN SRR (Total Number of Pixels for 10–90% Profile Rise) | ROI Size (Pixels) | Enhancement Factor |
|---|---|---|---|---|
| 1 | 4 | 1.87 | 14 × 12 | 2.14 |
| 2 | 6.36 | 1.86 | 16 × 14 | 3.42 |
| 3 | 6.26 | 2.32 | 19 × 20 | 2.70 |
| 4 | 4.98 | 1.87 | 18 × 21 | 2.66 |
| 5 | 5.80 | 2.31 | 21 × 27 | 2.51 |
| 6 | 6.99 | 2.23 | 26 × 20 | 3.13 |
| 7 | 5.22 | 1.05 | 21 × 21 | 4.97 |
| 8 | 5.81 | 1.98 | 25 × 19 | 2.93 |
| 9 | 5.04 | 1.36 | 20 × 17 | 3.71 |
| 10 | 4.85 | 1.29 | 23 × 22 | 3.76 |
| 11 | 4.19 | 1.59 | 17 × 22 | 2.64 |
| 12 | 5.87 | 2.44 | 22 × 25 | 2.41 |
| 13 | 6.01 | 1.59 | 24 × 19 | 3.78 |
| 14 | 4.19 | 1.84 | 26 × 25 | 2.28 |
| 15 | 6.07 | 2.07 | 16 × 18 | 2.93 |
| 16 | 6.64 | 2.58 | 17 × 16 | 2.57 |
| 17 | 4.45 | 1.56 | 20 × 15 | 2.85 |
| 18 | 6.09 | 1.91 | 23 × 18 | 3.19 |
| 19 | 3.86 | 1.85 | 23 × 17 | 2.09 |
| 20 | 6.21 | 2.41 | 17 × 11 | 2.58 |
| Average | - | - | - | 2.9625 ± 0.7 |
