
Article

A Relative Radiometric Calibration Method Based on the Histogram of Side-Slither Data for High-Resolution Optical Satellite Imagery

State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
* Author to whom correspondence should be addressed.
Remote Sens. 2018, 10(3), 381; https://doi.org/10.3390/rs10030381
Submission received: 4 January 2018 / Revised: 25 February 2018 / Accepted: 27 February 2018 / Published: 1 March 2018
(This article belongs to the Special Issue Data Restoration and Denoising of Remote Sensing Data)
Figure 1. (a) Classic push-broom viewing mode; (b) normalization steered viewing mode, also called side-slither scan.
Figure 2. The relationship between input radiance and output DN (digital number).
Figure 3. The relative radiometric calibration method based on side-slither data.
Figure 4. Results of data pre-processing. (a) Raw side-slither data; (b) primary image corrected using Equation (4); (c) results of line detection applied to the primary image; (d) standard image corrected using Equation (8) with the primary image.
Figure 5. The process of basic adjustment using Equation (4).
Figure 6. The results of relative radiometric correction. (a) Standard data resulting from data pre-processing of the raw side-slither data; (b) result of using the on-orbit coefficients; (c) result of using the side-slither coefficients.
Figure 7. Details of the results. (a) Detail of the red box shown in Figure 6a; (b) detail of the red box shown in Figure 6b; (c) detail of the red box shown in Figure 6c.
Figure 8. Column mean value analysis of the standard image and corrected images. (a) Column means of the standard image; (b) column means of the images corrected using the on-orbit coefficients and the side-slither coefficients.
Figure 9. Streaking metrics analysis of the standard image and corrected images. (a) Streaking metrics of the standard image; (b) streaking metrics of the images corrected using the on-orbit coefficients and side-slither coefficients.
Figure 10. The raw and corrected images of the water. (a) Raw image; (b) image corrected using the on-orbit coefficients; (c) image corrected using the side-slither coefficients.
Figure 11. The raw and corrected images of the city. (a) Raw image; (b) image corrected using the on-orbit coefficients; (c) image corrected using the side-slither coefficients.
Figure 12. The raw and corrected images of the desert. (a) Raw image; (b) image corrected using the on-orbit coefficients; (c) image corrected using the side-slither coefficients.
Figure 13. Streaking metrics of water images. (a) Streaking metrics of the raw image; (b) streaking metrics of the images corrected using on-orbit coefficients and side-slither coefficients, respectively.
Figure 14. Streaking metrics of city images. (a) Streaking metrics of the raw image; (b) streaking metrics of the images corrected using on-orbit coefficients and side-slither coefficients, respectively.
Figure 15. Streaking metrics of desert images. (a) Streaking metrics of the raw image; (b) streaking metrics of the images corrected using on-orbit coefficients and side-slither coefficients, respectively.
Figure 16. The results of data pre-processing. (a) The primary image obtained using Equation (4) with raw side-slither data; (b) the standard image, the result of standard correction of the primary image.
Figure 17. Changes in column means of the primary image and standard image. (a) Column means of the primary and standard images; (b) the absolute value of the difference between the column means of the primary and standard images.

Abstract

Relative radiometric calibration, or flat fielding, is indispensable for obtaining high-quality optical satellite imagery from sensors that have more than one detector per band. High-resolution optical push-broom sensors with thousands of detectors per band are now common. Multiple techniques have been employed for relative radiometric calibration. One technique, often called side-slither, in which the sensor is rotated 90° in yaw relative to normal acquisitions, has been gaining popularity and has been applied to Landsat 8, QuickBird, RapidEye, and other satellites. Side-slither can be more time-efficient than some of the traditional methods, as only one acquisition may be required. In addition, side-slither does not require any onboard calibration hardware, only a satellite capable of yawing and maintaining a stable yawed attitude. A relative radiometric calibration method based on histograms of side-slither data is developed. The method has three steps: pre-processing, extraction of key points, and calculation of coefficients. Histogram matching and Otsu's method are used to extract the key points. Three datasets from the Chinese GaoFen-9 satellite were used: one to obtain the relative radiometric coefficients, and the others to verify them. Root-mean-square deviations of the corrected imagery were better than 0.1%, and the maximum streaking metric was less than 1. This method produced significantly better relative radiometric calibration than the traditional method used for GaoFen-9.

1. Introduction

With the development of remote sensing technology, a new generation of high-resolution optical remote sensing satellites has been broadly applied to scientific experiments, land surveying, crop yield assessment, and environmental monitoring. The push-broom imaging mode is used by high-resolution satellites because of its high signal-to-noise ratio (SNR) [1]. Ideally, each detector in push-broom imaging mode should produce exactly the same output when the radiance entering the entrance pupil of the camera is uniform. In practice, this ideal state is not realized, due to several factors, and the response of each detector is not identical. Thus, streaking noise appears in the image, distorting targets and producing visual artifacts that impair target interpretation. To improve the visual quality of the image and the identification of targets, the image must be corrected. This process, called relative radiometric correction, is a key step in producing high-quality images, and relative radiometric calibration is an indispensable processing procedure for improving the quality of high-resolution optical satellite imagery [2,3].
Throughout the development of remote sensing technology, many relative radiometric calibration methods have been proposed [4]. According to how the correction coefficients are obtained, relative radiometric calibration can be divided into three categories: calibration-based methods, scene-based methods, and comprehensive methods, as described by Duan [5]. In terms of efficiency and applicability, however, laboratory calibration, ground-based calibration, and statistical calibration are the three methods most commonly used for relative radiometric calibration of high-resolution satellite data. The characteristics of the satellite must be measured prior to launch using laboratory calibration procedures [6], but the responses of the sensors change on-orbit, due to environmental changes, particulate contamination, and other factors. Ground-based relative radiometric calibration requires a large uniform area ("flat field"), such as sea, desert, or glacier, covering the full field of view, which may be difficult to locate [7]. The statistical method can yield high-accuracy radiometric coefficients given a large number of images and stable detector responses, but it requires a long time to collect data, and thus does not meet the requirement of time efficiency [8].
In light of the drawbacks of the traditional methods discussed above, new calibration data, called side-slither data, were obtained using the normalization steered viewing mode [9]. Figure 1 illustrates the classic push-broom viewing mode and the new mode. In this imaging mode, the satellite rotates its array 90 degrees about the yaw axis, so that, ideally, every detector images the same target as the satellite moves along the velocity vector. Numerous satellites incorporate this imaging mode, including Landsat 8 [10], RapidEye [11], Pleiades-HR [12], and QuickBird [13]. Because the CCDs (charge-coupled devices) or detectors are staggered, the satellite must image a uniform field in this mode to ensure that the input to each detector is identical, as is done for Landsat 8 and QuickBird. While the arrangement of the Pleiades-HR CCD ensures that each detector passes over the same ground, its pixel response function is piecewise linear, and thus the input area must be wide enough to contain a useful range of radiances.
For this analysis, the sensor response was treated as linear throughout the instrument's range of response. Being linear, histograms of side-slither data could be used for the analyses. Figure 1b shows the process of the side-slither scan and the pixel arrangement of the raw data. Due to different along-track and across-track samplings [14,15], the same features do not fall along a single line in side-slither data; therefore, pre-processing is required. Instead of the enhanced horizontal correction proposed in [16], this paper introduces line detection to ensure the efficiency and accuracy of the calibration data. Otsu's method [17] is then used to extract the key points from which coefficients are obtained. Otsu's method is mainly used for one-dimensional image segmentation, and can effectively separate background from object. This paper adapts Otsu's method to precisely determine the key points of each detector. Within a specified range of the histogram, Otsu's method identifies the point of maximum between-class change in energy; this key point accurately reflects the reaction of each detector to the same input. Histogram matching is used to determine the specified range of the histogram; hence, high-precision key points are obtained by combining histogram matching with Otsu's method. To validate the calibration coefficients, side-slither data were used to verify the global relative radiometric accuracy, and images of various objects acquired in the classic push-broom viewing mode were used to verify the accuracy of the coefficients. The on-orbit coefficients, which were obtained using histogram matching on data collected from September to December 2015 [18], were used for comparison. This paper is organized as follows. Section 2 introduces the relative radiometric correction model, which is the basis of the proposed method.
In Section 3, details of the relative radiometric calibration method based on the histogram of side-slither data are provided, covering how high-precision key points are obtained through histogram matching and Otsu's method. Section 4 presents the results of image correction using the coefficients, which are described and analyzed in detail. In Section 5, the data pre-processing and the results are analyzed in detail. Finally, the conclusions of this study are provided in Section 6.

2. The Model of Relative Radiometric Correction

In a push-broom imaging system, streaking noise appears in the imagery due to the different responses of the detectors. The method proposed herein is based on a linear response model. In this section, the formulas for relative radiometric correction are described, along with the derived relative radiometric coefficients. Before the launch of GaoFen-9 (GF-9), the relationship between input radiance and output DN (digital number) was tested and calibrated using a laboratory calibration system. The relationship for one detector is shown in Figure 2, and appears linear.
Therefore, the response of the detectors can be described by Equation (1), which represents the conversion from incident radiance to digital counts:
$DN_i = G_i \times L + B_i, \quad i = 0, 1, \ldots, n-1$
where $DN_i$ is the raw detector response in digital counts for the $i$-th detector, $L$ is the incident radiance, $G_i$ is the gain of the $i$-th detector, $B_i$ is the bias of the $i$-th detector measured in the absence of light, and $n$ is the number of detectors in the array. Ideally, the output DN values are the same if each detector has the same response to the incident radiance $L$. Under this assumption, the ideal response can be described as follows:
$\overline{DN} = \overline{G} \times L + \overline{B}$
where $\overline{DN}$ is the ideal response in digital counts, $\overline{G}$ is the gain, and $\overline{B}$ is the bias measured in the absence of light. Equation (3) can be obtained by combining Equations (1) and (2):
$\overline{DN} = g_i \times DN_i + b_i, \quad i = 0, 1, \ldots, n-1$
where $g_i = \overline{G}/G_i$ and $b_i = \overline{B} - (\overline{G}/G_i) \times B_i$; $(g_i, b_i)$ are the relative radiometric coefficients of the $i$-th detector. In the laboratory calibration system, the mean response of the detectors to the same incident radiance is taken as the standard output $\overline{DN}$, and the least squares method is then used to obtain $(g_i, b_i)$.
In the relative radiometric calibration method based on side-slither data, the accuracy of the relative radiometric coefficients $(g_i, b_i)$ depends on the accuracy of $(DN_i, \overline{DN})$. We explain how to obtain highly accurate values of $(DN_i, \overline{DN})$ in Section 3.
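To make the model concrete, the following sketch (plain NumPy; the two-detector gains and biases are hypothetical values, not GF-9 measurements) simulates Equations (1) through (3) and confirms that the coefficients map differing raw responses to a common ideal output:

```python
import numpy as np

def apply_relative_correction(image, g, b):
    """Apply the linear correction DN_bar = g_i * DN_i + b_i column-wise.

    image: 2-D array (rows x detectors); g, b: per-detector coefficients.
    """
    return image * g[np.newaxis, :] + b[np.newaxis, :]

# Two detectors with different gains/biases viewing the same radiance L = 100
L_in = 100.0
G = np.array([0.9, 1.1])                 # per-detector gains G_i
B = np.array([5.0, -3.0])                # per-detector biases B_i
raw = G * L_in + B                       # Equation (1): what each detector reports

g = G.mean() / G                         # g_i = G_bar / G_i
b = B.mean() - (G.mean() / G) * B        # b_i = B_bar - (G_bar / G_i) * B_i
corrected = apply_relative_correction(raw[np.newaxis, :], g, b)
# Both detectors now report the ideal response G_bar * L + B_bar = 101
```

In practice the true gains and biases are unknown; the remainder of the paper is about estimating $(g_i, b_i)$ directly from side-slither data.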

3. Methods

Figure 3 shows the relative radiometric calibration method based on side-slither data. The method can be divided into three parts: pre-processing, extraction of key points, and determination of coefficients. Pre-processing ensures that each input line sees the same targets. Key point extraction is the primary technique in the process; histogram matching and Otsu's method were used to extract the key points. Finally, least squares was used to obtain the relative radiometric coefficients.

3.1. Pre-Processing

Figure 1 shows the normalization steered viewing mode, in which the same features were not on the same line in the raw side-slither data. To obtain accurate calibration, data pre-processing is essential [16], as it guarantees high-precision relative radiometric coefficients, and ensures that each row has the same features.
Figure 4 describes the results of data pre-processing. Figure 4a shows the raw side-slither data, in which the red box indicates valid image data. The upper left corner and the lower right corner of the raw side-slither data are invalid areas, which will be discarded before basic adjustment. Figure 4b shows the primary corrected image obtained after basic adjustment. Figure 4c is the resultant image after line detection was applied. Figure 4d shows the image that was precisely adjusted using the results of line detection. The adjustment was based on Equation (4), which is described as follows:
$DN'_{ij} = DN_{kj}, \quad k = i + n - j - 1, \quad i = 0, 1, \ldots, m-n-1, \quad j = 0, 1, \ldots, n-1$
where $DN'_{ij}$ is the pixel value of the primary corrected image, $DN_{kj}$ is the pixel value of the raw side-slither data, $m$ is the total number of rows in the raw side-slither data, and $n$ is the total number of columns in the raw side-slither data. Figure 5 shows the process of basic adjustment using Equation (4).
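The basic adjustment of Equation (4) amounts to an integer shift per column. A minimal sketch (assuming, as the index bounds imply, that column $n-1$ needs no shift and only the fully overlapping rows are kept; all array sizes are illustrative):

```python
import numpy as np

def basic_adjustment(raw):
    """Equation (4): realign raw side-slither data so that, ideally,
    each output row contains the same ground features. Column j is
    taken from rows offset by n - j - 1; only the m - n fully
    overlapping rows are kept."""
    m, n = raw.shape
    rows = m - n
    out = np.empty((rows, n), dtype=raw.dtype)
    for j in range(n):
        k0 = n - j - 1                   # k = i + n - j - 1
        out[:, j] = raw[k0:k0 + rows, j]
    return out

# Synthetic check: raw[k, j] holds scene row k - (n - 1 - j),
# so after adjustment every output row should be constant.
m, n = 10, 4
raw = np.fromfunction(lambda k, j: k - (n - 1 - j), (m, n))
primary = basic_adjustment(raw)
```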
In the ideal case, each line of the primary image must contain the same features. Figure 4b shows that this ideal state was not realized, due to differences between the horizontal and vertical resolutions of the TDI-CCD (time delay integration charge-coupled device) detector. Therefore, further correction of the primary image was needed, for which a line detection algorithm was used. First, a Gaussian filter was applied to the primary image to improve the SNR. Second, the Sobel operator was used to detect lines in the image. Finally, lines shorter than half the image width were removed, and the slopes of the remaining lines were calculated; these slopes were used for the precise adjustment of the primary image. The Gaussian filter is essentially a smoothing filter used to obtain an image with a higher SNR: the image data are filtered to suppress the low-energy components that contain noise. An ideal filter would introduce ringing artifacts into the image, whereas the Gaussian system function is smooth and avoids them. We designed a Gaussian filter, calculated an appropriate mask, and then convolved the raw image with the mask. The standard deviation σ of the Gaussian function controls the degree of smoothing, where the Gaussian function is described as follows:
$G(x, y) = \frac{1}{2\pi\sigma^2} \exp\left[-\frac{x^2 + y^2}{2\sigma^2}\right]$
The Sobel operator [19], sometimes called the Sobel–Feldman operator or Sobel filter, is primarily used for edge detection. Technically, it is a discrete differentiation operator that approximates the gradient of the image intensity function. The Sobel operator is a typical edge detection operator based on the first derivative. Because it utilizes a local average, the operator has a smoothing effect, eliminating the impact of noise. The Sobel operator weights the position of the pixel more effectively than the Prewitt operator or the Roberts operator. The Sobel operator consists of two 3 × 3 matrices, the horizontal and vertical templates, which are then convoluted with the image to obtain an approximate difference in luminance between the horizontal and vertical directions. In practical application, the following two templates are commonly used to detect lines in images:
$G_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, \quad G_y = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix}$
At each point in the image, the resulting gradient approximations can be combined to determine the gradient magnitude using:
$G = \sqrt{G_x^2 + G_y^2}$
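The gradient-magnitude step can be sketched as a plain-NumPy cross-correlation with the two Sobel templates (border pixels are simply left at zero in this illustration):

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude sqrt(Gx^2 + Gy^2) from the two Sobel
    templates (borders are left at zero in this sketch)."""
    kx = np.array([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    ky = np.array([[ 1., 2., 1.], [ 0., 0., 0.], [-1., -2., -1.]])
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            win = img[r - 1:r + 2, c - 1:c + 2]
            gx[r, c] = np.sum(kx * win)   # horizontal difference
            gy[r, c] = np.sum(ky * win)   # vertical difference
    return np.sqrt(gx ** 2 + gy ** 2)

# A vertical step edge produces a strong horizontal gradient response.
step = np.zeros((5, 5))
step[:, 3:] = 1.0
mag = sobel_magnitude(step)
```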
Figure 4c shows the resulting image after the Gaussian filtering, Sobel filtering, and binarization procedures. To facilitate the subsequent calculation of line slopes, the binarization procedure assigns a value of 1 to the pixels of detected lines and 0 to all other pixels. After the removal of short lines, the algorithm can detect more straight lines and calculate their actual slopes. Figure 4d shows the image after data pre-processing, obtained using the following precise adjustment:
$DN''_{ij} = DN'_{kj}, \quad k = i + INT(s \times j)$
where $i = 0, 1, \ldots, m - INT(s \times n) - 1$; $j = 0, 1, \ldots, n-1$; and $s$ represents the slope of the lines in Figure 4c. Note that during pre-processing, only integer shifts are applied, to avoid resampling the data.
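The precise adjustment of Equation (8) can likewise be sketched with integer column shifts (the synthetic data below are illustrative only):

```python
import numpy as np

def precise_adjustment(primary, s):
    """Equation (8): shift column j by INT(s * j) rows to remove the
    residual tilt measured by the detected line slope s. Only integer
    shifts are used, so no resampling occurs."""
    m, n = primary.shape
    rows = m - int(s * n)                # i = 0, ..., m - INT(s*n) - 1
    out = np.empty((rows, n), dtype=primary.dtype)
    for j in range(n):
        k0 = int(s * j)                  # k = i + INT(s * j)
        out[:, j] = primary[k0:k0 + rows, j]
    return out

# Synthetic check: a tilt of slope s encoded into the data is removed,
# leaving every output row constant.
s, m, n = 0.5, 8, 4
shifts = np.array([int(s * j) for j in range(n)])     # [0, 0, 1, 1]
primary = np.fromfunction(lambda k, j: k, (m, n)) - shifts
standard = precise_adjustment(primary, s)
```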

3.2. Extraction of Key Points

Obtaining accurate key points is essential for calculating high-accuracy relative radiometric correction coefficients from Equation (3). We define $DN_i$ $(i = 1, \ldots, n)$ as the key points; they are the optimum thresholds obtained using Otsu's method, and they are used to fit the model. Because of its small computational cost and its stability under translation, rotation, and scaling, the histogram technique is widely used in image processing. In this paper, histograms were used to obtain the key points, because this approach is not sensitive to the location of any single pixel. First, a histogram-matching method was used for rough matching. Then, Otsu's method was used to obtain the precise key point from the histogram within a specific grayscale region. Assume that $r$ is in the range $[0, L-1]$, where $L = 1024$; the normalized histogram of the $i$-th detector, $p_i(r)$, is
$p_i(r) = \frac{n_{ir}}{MN_i}$
where $MN_i$ is the total number of pixels from the $i$-th detector, and $n_{ir}$ is the number of those pixels with intensity $r$. Similarly, the normalized histogram of the whole image is $p(r) = n_r / MN$, $r = 0, 1, \ldots, L-1$, where $MN$ is the total number of pixels in the whole image, and $n_r$ is the number of pixels with intensity $r$ in the whole image.
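Computing a per-detector normalized histogram $p_i(r)$ is a one-liner in NumPy (shown with $L = 4$ toy levels rather than 1024):

```python
import numpy as np

def normalized_histogram(column, levels):
    """p_i(r) = n_ir / MN_i for one detector's column of integer DNs."""
    counts = np.bincount(column.ravel(), minlength=levels)
    return counts / column.size

column = np.array([0, 0, 1, 3])          # toy 2-bit DNs from one detector
p = normalized_histogram(column, levels=4)
```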

3.2.1. Histogram Matching

Histogram matching [20] maps the cumulative distribution function (CDF) of each detector to a reference CDF. The CDF of the $i$-th detector, $P_i(r)$, is:
$P_i(r) = \sum_{t=0}^{r} p_i(t)$
where $P(r) = \sum_{t=0}^{r} p(t)$, $r = 0, 1, \ldots, L-1$, is the CDF of the normalized histogram of the entire image. For the $i$-th detector, every value of $r$ (i.e., $r = 0, 1, \ldots, L-1$) was matched to the corresponding value in the specified histogram so that $P_i(r)$ was as close as possible to $P(r)$. A normalization look-up table was created for each detector to map every DN $q$ to a reference DN $q'$.
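A sketch of building such a normalization look-up table: for each detector DN, pick the reference DN whose cumulative probability first reaches the detector's CDF value (one simple way to realize "as close as possible"; real implementations may break ties differently):

```python
import numpy as np

def matching_lut(p_detector, p_reference):
    """Map every detector DN q to a reference DN q' such that
    P_i(q) and P(q') are close (first index where the reference
    CDF reaches the detector CDF, in this sketch)."""
    cdf_d = np.cumsum(p_detector)        # P_i(r)
    cdf_r = np.cumsum(p_reference)       # P(r)
    lut = np.searchsorted(cdf_r, cdf_d)
    return np.clip(lut, 0, len(cdf_r) - 1)

# A detector with a flat histogram, matched to a reference whose
# probability mass sits in the two lowest levels.
p_det = np.array([0.25, 0.25, 0.25, 0.25])
p_ref = np.array([0.50, 0.50, 0.00, 0.00])
lut = matching_lut(p_det, p_ref)
```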

3.2.2. Otsu’s Method

The Otsu criterion [17] is optimum in the sense that it maximizes the between-class variance, a well-known measure used in statistical discriminant analysis. Assuming that $r$ is in the range $[S_0, S_1]$, we recalculate $p_i(r)$ and $P_i(r)$ accordingly. The cumulative mean of the $i$-th detector up to level $h$ is
$m_i(h) = \sum_{t=S_0}^{h} (t - S_0)\, p_i(t), \quad h \in [S_0, S_1]$
and the global mean is given by
$m_{iG} = \sum_{t=S_0}^{S_1} (t - S_0)\, p_i(t)$
The between-class variance is given by
$\sigma_{iB}^2(h) = \frac{[m_{iG} P_i(h) - m_i(h)]^2}{P_i(h)[1 - P_i(h)]}$
Then, the optimum threshold $h^*$ is the value that maximizes $\sigma_{iB}^2(h)$ in the range $[S_0, S_1]$:
$\sigma_{iB}^2(h^*) = \max_{S_0 \le h \le S_1} \sigma_{iB}^2(h)$
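Otsu's criterion restricted to a range $[S_0, S_1]$ can be sketched directly from these cumulative sums (the histogram values below are illustrative):

```python
import numpy as np

def otsu_in_range(p, s0, s1):
    """Maximize the between-class variance over [s0, s1], using the
    histogram restricted and renormalized to that range."""
    pr = p[s0:s1 + 1].astype(float)
    pr = pr / pr.sum()
    P = np.cumsum(pr)                          # P_i(h)
    m = np.cumsum(np.arange(len(pr)) * pr)     # m_i(h), levels offset by s0
    mG = m[-1]                                 # global mean within the range
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mG * P - m) ** 2 / (P * (1.0 - P))
    # undefined splits (empty foreground or background) are excluded
    sigma_b = np.nan_to_num(sigma_b, nan=-1.0, posinf=-1.0)
    return s0 + int(np.argmax(sigma_b))

# Two equal populations at levels 2 and 7: the first maximizer of the
# between-class variance is level 2.
p = np.zeros(10)
p[2] = 0.5
p[7] = 0.5
h = otsu_in_range(p, 0, 9)
```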
Next, the change in the optimum threshold after a linear transformation is determined as follows. Suppose the linear transformation is $y = a \times x + b$, the image prior to the transformation is called A, and the image after the transformation is called A′. For image A, if $h$ is the dividing point, then $\omega_0$ is the proportion of foreground points, $\omega_1$ is the proportion of background points, $\mu_0$ is the foreground grayscale mean, and $\mu_1$ is the background grayscale mean. The optimum threshold $h$ is the dividing point that maximizes the objective function:
$g = \omega_0 \times \omega_1 \times (\mu_0 - \mu_1)^2$
Since the transformation is linear, the point $h'$ corresponding to $h$ in image A can be found in image A′, satisfying $h' = a \times h + b$. In image A′, for $h'$, the proportion of foreground points is $\omega_0' = \omega_0$, the proportion of background points is $\omega_1' = \omega_1$, the foreground grayscale mean is $\mu_0' = a \times \mu_0 + b$, and the background grayscale mean is $\mu_1' = a \times \mu_1 + b$; thus, the objective function $g'$ in image A′ can be described as follows:
$g' = \omega_0' \times \omega_1' \times (\mu_0' - \mu_1')^2 = \omega_0 \times \omega_1 \times (a \times \mu_0 - a \times \mu_1)^2 = a^2 \times g$
Therefore, for each $(h, g)$ in image A there is a one-to-one corresponding $(h', g')$ in image A′. If $h$ is the optimum threshold for image A, then $h'$ is the optimum threshold for image A′, and they satisfy $h' = a \times h + b$.
From the above, it can be concluded that the optimum threshold obeys the same linear transformation rule as each detector's response: the optimum threshold of a detector's output is simply the linearly transformed optimum threshold of the common input. Therefore, Otsu's method can be utilized to extract high-precision key points, provided that the input radiance range considered for each detector is the same; histogram matching is used to ensure this.
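This linear-invariance argument is easy to verify numerically: applying an integer linear transform $y = a x + b$ to synthetic bimodal data transforms the Otsu threshold by exactly the same rule (the data below are random toy values, not satellite DNs):

```python
import numpy as np

def otsu_threshold(values, levels):
    """Plain Otsu threshold over integer-valued data (sketch)."""
    p = np.bincount(values, minlength=levels) / values.size
    P = np.cumsum(p)
    m = np.cumsum(np.arange(levels) * p)
    mG = m[-1]
    with np.errstate(divide='ignore', invalid='ignore'):
        sb = (mG * P - m) ** 2 / (P * (1.0 - P))
    sb = np.nan_to_num(sb, nan=-1.0, posinf=-1.0)
    return int(np.argmax(sb))

rng = np.random.default_rng(0)
x = np.concatenate([rng.integers(10, 30, 500), rng.integers(60, 90, 500)])
a, b = 2, 5                       # integer transform keeps levels integral
h = otsu_threshold(x, 128)
h_transformed = otsu_threshold(a * x + b, 256)
# h_transformed equals a * h + b, as derived in the text
```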

3.3. Obtaining Coefficients

In Section 3.2, the key points were obtained: for the $i$-th detector, $h_{ik}$ was obtained using Otsu's method. The average response to the same input can then be calculated as follows:
$Y_k = \frac{1}{N} \sum_{i=0}^{N-1} h_{ik}, \quad k = 1, 2, \ldots, m$
where $m$ is the total number of key points obtained for the $i$-th detector, and $N$ is the total number of detectors. The relative radiometric coefficients were computed using the least squares method:
$LSC(i) = \sum_{k=1}^{m} (Y_k - g_i \times h_{ik} - b_i)^2$
For the $i$-th detector, the relative radiometric coefficients $(g_i, b_i)$ were computed to minimize $LSC(i)$.
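Fitting $(g_i, b_i)$ by least squares is a two-parameter linear regression of the average responses $Y_k$ on the detector's key points $h_{ik}$ (the detector response used below is a hypothetical example):

```python
import numpy as np

def fit_coefficients(h_ik, Y):
    """Minimize LSC(i) = sum_k (Y_k - g_i * h_ik - b_i)^2 by least squares."""
    A = np.column_stack([h_ik, np.ones_like(h_ik)])
    (g_i, b_i), *_ = np.linalg.lstsq(A, Y, rcond=None)
    return g_i, b_i

# Key points of a detector whose response is DN = 0.8 * ideal - 2:
# the recovered coefficients invert that response exactly.
Y = np.array([100.0, 300.0, 500.0, 700.0])   # average responses Y_k
h_ik = 0.8 * Y - 2.0                         # this detector's key points
g, b = fit_coefficients(h_ik, Y)
```

With noise-free data the fit recovers $g = 1/0.8 = 1.25$ and $b = 2/0.8 = 2.5$; with real key points the least-squares solution averages out extraction noise across the $m$ ranges.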

3.4. Processing of the Proposed Method

Here, the procedure for obtaining the coefficients with the proposed relative radiometric calibration method is described in detail:
  • Define $n$ different DN values $q_k$, $k = 0, 1, \ldots, n-1$, in the range $[0, L-1]$ of the whole image.
  • For the $i$-th detector:
    • Create a normalization look-up table mapping every DN $q$ to a reference DN $q'$ through histogram matching.
    • Obtain $n$ different DN values $q_{ik}$, $k = 0, 1, \ldots, n-1$, by locating the corresponding value of $q_k$, $k = 0, 1, \ldots, n-1$, in the look-up table.
    • Based on the $n$ different DN values $q_{ik}$, construct $m = n - 1$ ranges $[q_{i(k-1)}, q_{ik}]$, $k = 1, 2, \ldots, m$.
    • For each range $[q_{i(k-1)}, q_{ik}]$, $k = 1, 2, \ldots, m$, obtain the maximum between-class variance point $h_{ik}$ using Otsu's method.
  • In step 2, $m$ points were obtained for the $i$-th detector. Supposing the number of detectors is $N$, the average response for index $k$ is $Y_k = \frac{1}{N} \sum_{i=0}^{N-1} h_{ik}$, $k = 1, 2, \ldots, m$.
  • For the $i$-th detector, obtain the relative radiometric coefficients $(g_i, b_i)$ by minimizing $LSC(i) = \sum_{k=1}^{m} (Y_k - g_i \times h_{ik} - b_i)^2$.
The relative radiometric coefficients can thus be obtained with the detailed procedure above.

4. Results

4.1. Experimental Data

GF-9 is an optical remote sensing satellite with submeter ground resolution, launched in September 2015 as part of the China High Resolution Earth Observation System major science and technology project. The focal plane of GF-9 uses an optical butting system similar to that of the multispectral sensors of ZY-3 [21]; this system ensures that all the detectors are aligned in a line on the focal plane [22]. Therefore, each detector can image the same features during a side-slither scan. Due to the satellite's strong agility and attitude control capabilities, side-slither data could be obtained using the normalization steered viewing mode with this high-resolution remote sensing satellite. In this paper, three groups of remote sensing image data were used to calculate and verify the relative radiometric coefficients. To distinguish the coefficients in this text, the term on-orbit coefficients refers to the relative radiometric coefficients obtained using histogram matching on data collected from September to December 2015 [18], while the side-slither coefficients are the relative radiometric coefficients calculated using the proposed method. To verify the accuracy of the side-slither coefficients, their results must be compared with those of the on-orbit coefficients.
Table 1 shows the details of the experimental data. Here, three groups of data were introduced: Group A was the calibration data, and Groups B and C were the verification data:
Group A: To obtain the relative radiometric coefficients with the proposed method, side-slither data collected on 16 October 2015 were used. This side-slither image had a total of 625,920 lines.
Group B: To verify the accuracy of relative radiometric coefficients using side-slither data, which guarantee that the same features are recorded by each detector, the standard data after pre-processing was considered as a uniform field. In this paper, side-slither data collected on 29 October 2015 were used as verification data to verify the relative radiometric coefficients. This side-slither image had a total of 443,136 lines.
Group C: To verify the effects of the coefficients on different types of objects, images taken in the classic push-broom viewing mode were used. In this paper, three types of objects were used to verify the effects of the coefficients: water, city, and desert. The water and city images were taken on 2 October 2015, and the desert image was taken on 10 December 2015.

4.2. Accuracy Assessment

To more objectively evaluate the effects of the coefficients, different indices were proposed based on different validation datasets. The side-slither data for Group B described in Section 4.1 made use of the relative radiometric accuracy indices “root-mean-square deviation of the mean line” (RA) and “generalized noise” (RE), as well as streaking metrics [10], which could be treated as a uniform field because side-slither data have the same input radiance for each detector. The image data for Group C could be quantitatively evaluated using streaking metrics, the improvement factor (IF), and the structural similarity index (SSIM). The IF index was used to verify image quality before and after correction, whereas SSIM was used to evaluate the similarity of the two images.

4.2.1. Relative Radiometric Accuracy Indices

The indices RA and RE [23] are used to evaluate uniform-field images, and are also referred to as the relative radiometric accuracy. RA and RE are calculated as follows:
$RA = \frac{\sqrt{\sum_{i=1}^{n} (Mean_i - \overline{Mean})^2 / n}}{\overline{Mean}} \times 100\%$
$RE = \frac{\sum_{i=1}^{n} \left| Mean_i - \overline{Mean} \right| / n}{\overline{Mean}} \times 100\%$
where $Mean_i$ is the mean value of the $i$-th detector, $\overline{Mean}$ is the mean value of the image, and $n$ is the total number of columns in the image. The smaller the values of RA and RE, the higher the accuracy.
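RA and RE can be computed directly from the column (detector) means; a sketch with toy images:

```python
import numpy as np

def ra_re(image):
    """Relative radiometric accuracy indices RA and RE (in percent),
    computed over the column (detector) means."""
    col_means = image.mean(axis=0)
    overall = col_means.mean()
    ra = np.sqrt(np.mean((col_means - overall) ** 2)) / overall * 100.0
    re = np.mean(np.abs(col_means - overall)) / overall * 100.0
    return ra, re

flat = np.full((4, 5), 120.0)                       # perfectly uniform image
striped = np.tile(np.array([99.0, 101.0]), (3, 1))  # +/-1% column striping
ra0, re0 = ra_re(flat)
ra1, re1 = ra_re(striped)
```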

4.2.2. Streaking Metrics

The streaking metric is sensitive to detector-to-detector non-uniformity (streaking) [10,24,25], and is therefore used for detector comparison. It can be described as follows:
$streaking_i = \frac{\left| Mean_i - \frac{1}{2}(Mean_{i-1} + Mean_{i+1}) \right|}{\frac{1}{2}(Mean_{i-1} + Mean_{i+1})} \times 100$
where $Mean_i$ is the mean value of the $i$-th detector. The lower the streaking metric, the more uniform the image.
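The streaking metric compares each interior detector's mean with its two neighbors; vectorized, it is:

```python
import numpy as np

def streaking(image):
    """Streaking metric for each interior detector i (columns 1..n-2)."""
    m = image.mean(axis=0)                     # Mean_i per detector
    neighbor = 0.5 * (m[:-2] + m[2:])          # mean of the two neighbors
    return np.abs(m[1:-1] - neighbor) / neighbor * 100.0

# One bright column between two identical neighbors: 2% streaking.
img = np.tile(np.array([100.0, 102.0, 100.0]), (2, 1))
s = streaking(img)
```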

4.2.3. Improvement Factor (IF)

To quantitatively evaluate the correction effects discussed above, the IF [26,27] was applied to the corrected image. It is defined as follows:
$d_R[i] = \mu_{I_R}[i] - \mu_I[i]$
$d_E[i] = \mu_{I_E}[i] - \mu_I[i]$
$IF = 10 \log_{10} \left[ \frac{\sum_i d_R^2[i]}{\sum_i d_E^2[i]} \right]$
where $\mu_{I_R}[i]$ and $\mu_{I_E}[i]$ are the mean values of the $i$-th detector in the raw and corrected images, respectively, and $\mu_I[i]$ is the curve obtained by low-pass filtering of $\mu_{I_E}[i]$. The greater the IF, the higher the quality of the image.
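A sketch of the IF computation follows; the text does not specify which low-pass filter produces the reference curve, so an edge-padded moving average (an assumption of this sketch) stands in for it:

```python
import numpy as np

def improvement_factor(raw, corrected, window=9):
    """IF in dB. The reference curve mu_I is taken as a moving-average
    low-pass filtering of the corrected column means (the choice of
    low-pass filter is an assumption of this sketch)."""
    mu_r = raw.mean(axis=0)
    mu_e = corrected.mean(axis=0)
    kernel = np.ones(window) / window
    padded = np.pad(mu_e, window // 2, mode='edge')   # avoid border dips
    mu_i = np.convolve(padded, kernel, mode='valid')
    d_r = mu_r - mu_i
    d_e = mu_e - mu_i
    return 10.0 * np.log10(np.sum(d_r ** 2) / np.sum(d_e ** 2))

# Strong column striping in the raw image, almost none after correction:
# the IF is strongly positive.
raw = np.tile(np.tile([90.0, 110.0], 25), (2, 1))
corrected = np.tile(np.tile([99.9, 100.1], 25), (2, 1))
if_db = improvement_factor(raw, corrected)
```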

4.2.4. Structural Similarity Index (SSIM)

Structural similarity theory holds that images are highly structured (i.e., there is a strong correlation between pixels), particularly the pixels closest to each other in the image domain. Because it is based on structural similarity theory, the SSIM [28] defines structural information from the perspective of image composition as luminance, contrast, and structural similarity. The average value is used to estimate luminance, the standard deviation approximates contrast, and covariance is used to determine the degree of structural similarity. The mathematical model is calculated as follows:
SSIM(I_R, I_E) = \frac{\left( 2 \mu_{I_R} \mu_{I_E} + c_1 \right) \left( 2 \sigma_{I_R I_E} + c_2 \right)}{\left( \mu_{I_R}^2 + \mu_{I_E}^2 + c_1 \right) \left( \sigma_{I_R}^2 + \sigma_{I_E}^2 + c_2 \right)}
where \mu_{I_R} and \mu_{I_E} are the mean values of the raw and corrected images, respectively; \sigma_{I_R} and \sigma_{I_E} are the standard deviations of the raw and corrected images; \sigma_{I_R I_E} is the covariance of the raw and corrected images; c_1 = (k_1 L)^2 and c_2 = (k_2 L)^2 are two stabilizing variables; L is the dynamic range of the image; and k_1 = 0.01 and k_2 = 0.03 are default values. The SSIM is a decimal number between −1 and 1; a value of 1 indicates that the two images being compared are identical.
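For illustration, a single-window SSIM over two whole images can be computed as below (the function name is ours, and the global single-window form is a simplification; practical implementations evaluate the same formula in sliding windows and average the results):

```python
import numpy as np

def ssim_global(img_a, img_b, L=255.0, k1=0.01, k2=0.03):
    """SSIM computed once over the whole image pair.

    L is the dynamic range (255 for 8-bit data); c1 and c2 are the
    stabilizing variables from the formula above.
    """
    a = np.asarray(img_a, dtype=float).ravel()
    b = np.asarray(img_b, dtype=float).ravel()
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()   # covariance of the two images
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))
```

Comparing an image with itself gives exactly 1; any luminance, contrast, or structure difference pulls the value below 1.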

4.3. Verification Results for Raw Image in Group B

It is difficult to verify the quality of remote sensing images using uniform field data, because even the most uniform deserts contain features that diverge from an absolutely uniform field. However, based on the characteristics of the side-slither data and the standard image obtained from data pre-processing, we used the most uniform feature in the linear direction, so high-quality test data were available to verify the global relative radiometric accuracy. Here, a portion of the Group B data was used to verify the relative radiometric coefficients.
Figure 6 shows the results of the relative radiometric correction. Figure 6a shows the standard data that resulted from pre-processing of the raw side-slither data. Figure 6b shows the result of relative radiometric correction of the standard data using the on-orbit coefficients, and Figure 6c shows the result using the side-slither coefficients. Analysis of the whole image shows that the standard image has obvious streaking noise, while the non-uniformity disappears in Figure 6b,c. Figure 7a–c shows detailed views of the red boxes in Figure 6a–c. Figure 7a shows the detail of the standard image obtained from pre-processing of the raw side-slither data; non-uniformity is apparent in this detailed image. Figure 7b shows the detail of the red box shown in Figure 6b. Compared to the standard data in Figure 7a, Figure 7b shows greater uniformity; however, non-uniformity is still easily observed. Figure 7c shows the detail of the red box shown in Figure 6c; the non-uniformity in this detailed image has disappeared. Nevertheless, detailed quantitative analysis is necessary; therefore, column mean value analysis and streaking metrics analysis of the corrected images are required.
Due to the characteristics of side-slither data, the column mean clearly reflects the effects of the coefficients. Figure 8 shows the column mean value analysis of the standard image and corrected images. The abscissa represents the detector number, and the ordinate represents the DN value of the column mean of each detector. Figure 8a shows the column means of the standard image. The column mean of the standard image exhibits large variation, with a range of about 40 DN, which is consistent with the image shown in Figure 6a. Figure 8a also shows that there are seven detector groups. The responses of different detector groups were very different, so the changes in the mean were very noticeable, resulting in the significant non-uniformity between detector groups shown in Figure 6a. In contrast, the responses of the detectors inside each detector group were similar and the variations of the mean were small, so the non-uniformity inside each detector group is not obvious in Figure 6a. Figure 8b shows the column means of the images corrected using the on-orbit coefficients and the side-slither coefficients; the blue curve represents the correction results using the on-orbit coefficients and the red curve represents those using the side-slither coefficients. The range of variation is clearly smaller, about 6 DN, but the blue curve shows that the on-orbit corrected image still exhibits non-uniformity. The side-slither coefficients, on the other hand, performed well; the range of variation was about 2 DN for the red curve. For further analysis, streaking metrics analyses were performed.
Figure 9 shows the streaking metrics analysis of the standard image and corrected images. The abscissa represents the detector number, and the ordinate represents the value of the streaking metric for each detector. Figure 9a shows the streaking metrics of the standard image; they follow the same pattern as Figure 8a, and the maximum streaking metric reaches 2.5. Figure 9b shows the streaking metrics of the images corrected with the on-orbit coefficients and the side-slither coefficients. The blue dots represent the correction results from the on-orbit coefficients and the red dots represent those from the side-slither coefficients. The red dots are lower than the blue dots, which indicates that the side-slither coefficients performed better than the on-orbit coefficients. This method performed well, as supported by the results of the column mean analysis and the streaking metric analysis. Next, the quantitative indices RA and RE and the maximum streaking metric for these three images were determined.
Table 2 shows the quantitative values of the three images shown in Figure 6a–c. The mean, standard deviation, RA (root-mean-square deviation of the mean line), RE (generalized noise), and the maximum streaking metric were used to compare the effects of the on-orbit coefficients and the side-slither coefficients. For uniform fields, RA and RE reflect the uniformity of the entire image; the smaller the values of RA and RE, the higher the uniformity. In each line of the results, the target of each detector can be considered the same. The RA of the standard data is 22.4864%, which shows the non-uniformity of the standard data. The RA and RE values of the image corrected using the side-slither coefficients were better than 0.1%, so the image corrected with the side-slither coefficients was clearly better than that corrected with the on-orbit coefficients. In addition, the lower the streaking metric, the more uniform the image. The maximum streaking metrics of the three images were 2.5835, 0.3456, and 0.0145, respectively; the value for the image corrected using the side-slither coefficients was the lowest. The non-uniformity of the standard image is obvious from the previous analyses as well as from this table. Ideally, the mean value of the corrected images should remain the same as that of the standard data, and the mean values of these three images did not vary significantly. The quantitative values of the image corrected using the side-slither coefficients were better than those of the standard data and the image corrected using the on-orbit coefficients. It can be concluded that the side-slither coefficients perform better than the on-orbit coefficients.

4.4. Verification Results for Raw Images in Group C

To verify that the side-slither coefficients also perform well on images taken in the classic push-broom viewing mode, three types of image data were chosen as verification images: water, city, and desert scenes. In this section, the visual effects and quantitative indices of the images corrected using the on-orbit and side-slither coefficients are analyzed.
Figure 10 shows the raw and corrected images of the water. Figure 10a shows the raw water image, while Figure 10b,c show images corrected using the on-orbit coefficients and side-slither coefficients, respectively. Figure 11 shows the raw and corrected images of the city. Figure 11a shows the raw city image, while Figure 11b,c show images corrected using the on-orbit coefficients and side-slither coefficients, respectively. Figure 12 shows the raw and corrected images of the desert. Figure 12a shows the raw desert image, while Figure 12b,c show images corrected using the on-orbit coefficients and side-slither coefficients, respectively. Streaking can be clearly observed in Figure 10a and Figure 12a, while it is hard to observe in Figure 11a. This occurs because the objects in Figure 10a and Figure 12a are uniform fields (i.e., the dynamic range of the images is narrow): when the images are stretched for display, small differences between pixel DNs are magnified. Figure 11a, in contrast, is the raw image of the city, which has a large dynamic range due to the various reflective objects in the city, so the streaking is not easily observed and an analysis of the streaking metrics is required. From the visual results, Figure 10b clearly shows streaking, while the on-orbit coefficients seem to perform well in Figure 11b and Figure 12b. From Figure 10c, Figure 11c and Figure 12c, the side-slither coefficients also appear to perform well. The streaking metrics are therefore important for verifying the performance of the coefficients in detail.
Figure 13, Figure 14 and Figure 15 show the streaking metrics of the water, city, and desert images, respectively. The abscissa represents the detector number, and the ordinate represents the value of the streaking metric for each detector. The streaking metrics of the raw data are shown in Figure 13a, Figure 14a and Figure 15a. In Figure 13b, Figure 14b and Figure 15b, the red dots represent the streaking metrics of the images corrected using the side-slither coefficients, while the blue dots represent those of the images corrected using the on-orbit coefficients. Figure 13a, Figure 14a and Figure 15a show that the maximum streaking metrics occur at the edges of the detector groups. In Figure 13a, the streaking metrics of the water image reach a maximum of about 4, while most values are below 0.3. Figure 13b shows that the streaking metrics of the image corrected using the on-orbit coefficients were less than 1.5, while those of the image corrected using the side-slither coefficients were generally less than 0.5. Figure 14a and Figure 15a show distributions of streaking metrics approximately the same as in Figure 13a, with maxima reaching 2 and 2.5, respectively. Figure 14b shows that the streaking metrics of the image corrected using the on-orbit coefficients were generally less than 0.3, while those of the image corrected using the side-slither coefficients were generally less than 0.2. Figure 15b shows that the streaking metrics of the image corrected using the side-slither coefficients were below 0.2, clearly lower than those of the image corrected using the on-orbit coefficients. From the above analysis, there are no obvious rules for the streaking metrics of different objects.
However, for the same object, the streaking metrics of the corrected image using the side-slither coefficients were obviously less than that of the corrected image using the on-orbit coefficients. The images corrected using side-slither coefficients had streaking metrics that were consistently relatively small. Thus, the side-slither coefficients perform better than the on-orbit coefficients. For further analysis of the results, the quantitative values of the results are listed.
Table 3, Table 4 and Table 5 show the quantitative values of the water, city, and desert images, respectively. The mean, standard deviation, IF, SSIM, and the maximum streaking metric were used to compare the effects of the on-orbit coefficients and the side-slither coefficients. The greater the IF, the higher the quality of the image; in these tables, the IFs of the images corrected using the side-slither coefficients are greater than those of the images corrected using the on-orbit coefficients. Therefore, the side-slither coefficients improved the images much more than the on-orbit coefficients did. Likewise, the lower the streaking metric, the more uniform the image; the maximum streaking metric of each image corrected using the side-slither coefficients was lower than those of the other images, which shows that the side-slither coefficients were better at eliminating non-uniformity. The SSIM is a decimal number between −1 and 1, and a value of 1 indicates that the two images being compared are identical. The SSIMs of the images corrected using the on-orbit coefficients were relatively high, but all of the SSIMs were above 0.99, showing that the structure of the raw images had hardly changed. Ideally, the mean value of the corrected images should remain the same as that of the raw data; however, for the water and city images, the means increased by about 10 percent after applying the proposed correction. Analysis of the experimental data shows that the TDI level of the desert image is 48, the same as groups A and B in Table 1, whereas the TDI level of the water image is 32, the same as the city image but different from group A. Therefore, the change in the mean may be due to the different TDI levels of the verification data and the calibration data.
Except for the mean value, the correction proposed performed well on the water image and city image, and the non-uniformity of images had been eliminated. According to Table 3, Table 4 and Table 5, it can be concluded that the verification results of raw images in Group C show that the side-slither coefficients performed better than the on-orbit coefficients.

5. Discussion

5.1. The Analysis of Pre-Processing

The purpose of pre-processing is to ensure that each detector has the same radiance input in the image, and it involves two processes: primary correction and standard correction. Here, the side-slither data used was from Group A described in Section 4.1. A portion of this side-slither data was selected to illustrate the results of pre-processing.
Figure 16 shows the results of data pre-processing. Figure 16a shows the primary image, which was the result of applying primary correction to the raw data. Visually, it was apparent that for a given input in the primary image, the output was not on the same image line. To make subsequent calibration more accurate, the standard image was obtained by applying standard correction to the primary image. Figure 16b shows the result of standard correction. On the standard image, each line responded to the same input, which was necessary for the quantitative analysis of changes in the image before and after standard correction.
Figure 17 shows changes in the column means of the primary and standard images. Figure 17a shows the column means of the primary image and the standard image: the blue curve represents the column means of the primary image and the red curve represents the column means of the standard image. Figure 17b shows the absolute value of the difference between the column means of the primary image and the standard image. Note that the closer a detector is to the right side of the image, the larger the difference in the column mean. This phenomenon can be explained by analyzing Figure 16. Side-slither data were not collected from a uniform field, and thus there was large variation among columns. The primary image shown in Figure 16 has brighter features in the upper right corner; as standard correction progressed, these brighter features were removed, leading to the phenomenon shown in Figure 17.

5.2. The Analysis of Corrected Images

Two different sets of experimental data were used to verify the effect of the coefficients, and four sets of quantitative indicators were used to evaluate the experimental results. From these four sets of quantitative indices, it can be seen that the side-slither coefficients performed well. The on-orbit coefficients, which were obtained using histogram matching on data collected from September to December 2015 [18], were used for comparison. The RA of the standard image is 22.4864%; the RA of the image corrected using the on-orbit coefficients was 0.2657%, while the RA of the image corrected using the side-slither coefficients was 0.00082%, far superior to that obtained using the on-orbit coefficients. The same held for the RE: the RE of the image corrected using the side-slither coefficients was 0.0355%, far better than that of the image corrected using the on-orbit coefficients. In the analysis of the maximum streaking metrics, the maximum streaking metric of the images corrected using the side-slither coefficients was 0.8967, below 1, whereas the maximum streaking metrics of the images corrected using the on-orbit coefficients were mostly larger than 1. Analysis of the improvement factor (IF) showed that the IFs of the images corrected using the side-slither coefficients were far superior to those of the images corrected using the on-orbit coefficients. From these four groups of quantitative indices, it is obvious that the side-slither coefficients were better than the on-orbit coefficients. Meanwhile, the on-orbit coefficients were obtained from large amounts of raw data covering different times (the on-orbit coefficients used in this paper required three months of raw data). Therefore, the side-slither coefficients offer both better performance and better time efficiency.

6. Conclusions

Due to the development of remote sensing satellite hardware technology, satellites have become more agile, allowing implementation of the normalization steered viewing mode. In this mode, the TDI-CCD arrangement is parallel to the direction of satellite flight, so that, in theory, each detector of the TDI-CCD captures the same object; this type of data can also be called side-slither data. Given a linear response from each detector, this paper proposed a relative radiometric calibration method based on the histogram of side-slither data, which does not require imaging of uniform fields. Due to the difference between the horizontal and vertical resolutions of the satellite, data pre-processing is an important step to ensure that each line has the same input; standard data are obtained through data pre-processing. Otsu's method can be used to calculate the change points of the histogram, and because the relative position of a change point in the histogram does not change under a linear transformation, the key points can be extracted via Otsu's method. First, the histogram-matching method was used to determine the ranges of Otsu's method for each detector. Then, Otsu's method was used to precisely obtain the key points of each detector in each range. Finally, the normalized relative radiometric coefficients were obtained by the least squares method. We compared the efficacy of the on-orbit coefficients and the side-slither coefficients for relative radiometric correction. According to the characteristics of side-slither data, the side-slither data were used to verify the effects of the coefficients on uniform fields. The relative radiometric accuracy indices RA and RE, as well as the streaking metrics, were employed to compare the two sets of coefficients. As can be observed in the experimental results, the "root-mean-square deviation of the mean line" (RA) and the "generalized noise" (RE) were better than 0.1%.
The maximum streaking metric, which is sensitive to detector-to-detector non-uniformity, was less than 1. The results clearly showed that the coefficients calculated by the proposed method were superior to the on-orbit coefficients on uniform fields. At the same time, to ensure the validity of the proposed method, three types of objects obtained through the classic push-broom viewing mode were used for validation: water, city, and desert scenes. The quantitative indices IF, SSIM, and the streaking metrics were used to evaluate the corrected images. The experimental results showed that the images corrected using the side-slither coefficients had higher IF values, while the behavior of the SSIMs for the two corrected images was similar. The side-slither coefficients also performed better than the on-orbit coefficients in terms of the streaking metrics. Compared to the on-orbit coefficients, the coefficients obtained by the proposed method had higher accuracy and were more effective for relative radiometric correction. Therefore, it can be concluded that the coefficients calculated by the proposed method are superior to the on-orbit coefficients.
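The final calibration step summarized above, fitting normalized coefficients to each detector's key points by least squares, can be sketched as follows. This is a hypothetical illustration rather than the authors' exact procedure: the function name `fit_relative_gains` and the choice of the detector-average response as the common reference are our assumptions.

```python
import numpy as np

def fit_relative_gains(key_points):
    """Least-squares fit of per-detector (gain, offset) pairs.

    key_points: array of shape (n_detectors, n_points); key_points[i, k]
    is the DN of detector i at the k-th common radiance level (the key
    points extracted via histogram matching and Otsu's method are assumed
    already computed).  Each detector is fitted against the detector-average
    response, so applying (DN - offset) / gain equalizes the detectors.
    """
    reference = key_points.mean(axis=0)              # common response curve
    A = np.column_stack([reference, np.ones_like(reference)])
    coeffs = []
    for dn in key_points:                            # fit dn ≈ gain * ref + offset
        (gain, offset), *_ = np.linalg.lstsq(A, dn, rcond=None)
        coeffs.append((gain, offset))
    return np.array(coeffs)                          # per-detector (gain, offset)
```

Applying the inverse transform (DN − offset) / gain maps every detector onto the common reference curve, which is the normalization sought by the relative calibration.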

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant 91438203, the Open Fund of Twenty First Century Aerospace Technology under Grant 21AT-2016-09 and the Open Fund of Twenty First Century Aerospace Technology under Grant 21AT-2016-02.

Author Contributions

Mi Wang and Chaochao Chen contributed equally to this work in designing and performing the research. Jun Pan, Ying Zhu and Xueli Chang contributed to the validation and comparisons.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Schott, J.R. Remote Sensing: The Image Chain Approach; Oxford University Press on Demand: Oxford, UK, 2007. [Google Scholar]
  2. Xiong, X.; Barnes, W. An overview of MODIS radiometric calibration and characterization. Adv. Atmos. Sci. 2006, 23, 69–79. [Google Scholar] [CrossRef]
  3. Li, Z.; Yu, F. A real time de-striping algorithm for Geostationary Operational Environmental Satellite (GOES) 15 sounder images. In Proceedings of the 2015 CALCON Technical Meeting, Logan, UT, USA, 26 August 2015. [Google Scholar]
  4. Zenin, V.A.; Eremeev, V.V.; Kuznetcov, A.E. Algorithms for relative radiometric correction in earth observing systems resource-p and canopus-v. In Proceedings of the ISPRS—International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Prague, Czech Republic, 12–19 July 2016. [Google Scholar]
  5. Duan, Y.; Zhang, L.; Yan, L.; Wu, T.; Liu, Y.; Tong, Q. Relative radiometric correction methods for remote sensing images and their applicability analysis. Int. J. Remote Sens. 2014. [Google Scholar] [CrossRef]
  6. Krause, K.S. WorldView-1 pre and post-launch radiometric calibration and early on-orbit characterization. In Proceedings of the Optical Engineering + Applications, San Diego, CA, USA, 20 August 2008; International Society for Optics and Photonics: Bellingham, WA, USA, 2008; pp. 708111–708116. [Google Scholar]
  7. Lyon, P.E. An automated de-striping algorithm for ocean colour monitor imagery. Int. J. Remote Sens. 2009, 30, 1493–1502. [Google Scholar] [CrossRef]
  8. Gadallah, F.L.; Csillag, F.; Smith, E.J.M. Destriping multisensor imagery with moment matching. Int. J. Remote Sens. 2000, 21, 2505–2511. [Google Scholar] [CrossRef]
  9. Li, H.; Man, Y.-Y. Relative radiometric calibration method based on linear CCD imaging the same region of non-uniform scene. In Proceedings of the International Symposium on Optoelectronic Technology and Application 2014, Beijing, China, 18 November 2014; pp. 929906–929909. [Google Scholar]
  10. Pesta, F.; Bhatta, S.; Helder, D.; Mishra, N. Radiometric non-uniformity characterization and correction of Landsat 8 OLI using earth imagery-based techniques. Remote Sens. 2015, 7, 430–446. [Google Scholar] [CrossRef]
  11. Anderson, C.; Naughton, D.; Brunn, A.; Thiele, M. Radiometric correction of RapidEye imagery using the on-orbit side-slither method. In Proceedings of the SPIE Remote Sensing, Prague, Czech Republic, 28 October 2011; International Society for Optics and Photonics: Bellingham, WA, USA, 2011; pp. 818008–818015. [Google Scholar]
  12. Kubik, P.; Pascal, W. Amethist: A method for equalization thanks to histograms. In Sensors, Systems, and Next-Generation Satellites VIII; Meynart, R., Neeck, S.P., Shimoda, H., Eds.; Spie-Int Soc Optical Engineering: Bellingham, WA, USA, 2004; Volume 5570, pp. 256–267. [Google Scholar]
  13. Henderson, B.G.; Krause, K.S. Relative radiometric correction of QuickBird imagery using the side-slither technique on orbit. In Proceedings of the SPIE 49th Annual Meeting Optical Science and Technology, Denver, CO, USA, 26 October 2004; International Society for Optics and Photonics: Bellingham, WA, USA, 2004; pp. 426–436. [Google Scholar]
  14. Zhang, G.; Litao, L.I. A study on relative radiometric calibration without calibration field for YG-25. Acta Geod. Cartogr. Sin. 2017, 46, 1009–1016. [Google Scholar]
  15. Peng, Y.; Huang, H.-L.; Zhu, L.-M. The in-orbit resolution detection of TH-01 CCD cameras based on the radial target. Geomat. Spat. Inf. Technol. 2013, 7, 47. [Google Scholar]
  16. Gerace, A.; Schott, J.; Gartley, M.; Montanaro, M. An analysis of the side slither on-orbit calibration technique using the dirsig model. Remote Sens. 2014, 6, 10523–10545. [Google Scholar] [CrossRef]
  17. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man. Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef]
  18. Pan, Z.; Gu, X.; Liu, G.; Min, X. Relative radiometric correction of CBERS-01 CCD data based on detector histogram matching. Geomat. Inf. Sci. Wuhan Univ. 2005, 30, 925–927. [Google Scholar]
  19. Spontón, H.; Cardelino, J. A review of classic edge detectors. Image Process. Line 2015, 5, 90–123. [Google Scholar] [CrossRef]
  20. Horn, B.K.; Woodham, R.J. Destriping Landsat MSS images by histogram modification. Comput. Graph. Image Process. 1979, 10, 69–83. [Google Scholar] [CrossRef]
  21. Tong, X.; Xu, Y.; Ye, Z.; Liu, S.; Tang, X.; Li, L.; Xie, H.; Xie, J. Attitude oscillation detection of the ZY-3 satellite by using multispectral parallax images. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3522–3534. [Google Scholar] [CrossRef]
  22. Chen, S.; Yang, B.; Wang, H. Design and Experiment of the Space Camera; Aerospace Publishing Company: London, UK, 2003. [Google Scholar]
  23. Yufeng, H.Y.Z. Analysis of relative radiometric calibration accuracy of space camera. Spacecr. Recovery Remote Sens. 2007, 4, 12. [Google Scholar]
  24. Krause, K.S. Relative radiometric characterization and performance of the QuickBird high-resolution commercial imaging satellite. In Proceedings of the SPIE 49th Annual Meeting Optical Science and Technology, Denver, CO, USA, 26 October 2004; International Society for Optics and Photonics: Bellingham, WA, USA, 2004; pp. 35–44. [Google Scholar]
  25. Krause, K.S. QuickBird relative radiometric performance and on-orbit long term trending. In Proceedings of the SPIE Optics + Photonics, San Diego, CA, USA, 7 September 2006; International Society for Optics and Photonics: Bellingham, WA, USA, 2006; pp. 62912P–62960P. [Google Scholar] [CrossRef]
  26. Chen, J.; Shao, Y.; Guo, H.; Wang, W.; Zhu, B. Destriping CMODIS data by power filtering. IEEE Trans. Geosci. Remote Sens. 2003, 41, 2119–2124. [Google Scholar] [CrossRef]
  27. Corsini, G.; Diani, M.; Walzel, T. Striping removal in MOS-B data. IEEE Trans. Geosci. Remote Sens. 2000, 38, 1439–1446. [Google Scholar] [CrossRef]
  28. Ndajah, P.; Kikuchi, H.; Yukawa, M.; Watanabe, H.; Muramatsu, S. SSIM image quality metric for denoised images. In Proceedings of the 3rd WSEAS International Conference on Visualization, Imaging and Simulation, Faro, Portugal, 3–5 November 2010; pp. 53–58. [Google Scholar]
Figure 1. (a) Classic push-broom viewing mode; (b) Normalization steered viewing mode, also called side-slither scan.
Figure 2. The relationship between input radiance and output DN (Digital Number).
Figure 3. The relative radiometric calibration method based on side-slither data.
Figure 4. Results of data pre-processing. (a) Raw side-slither data; (b) Primary image corrected using Equation (4); (c) Results of line detection applied to the primary image; (d) Standard image corrected using Equation (8) with the primary image.
Figure 5. The process of basic adjustment using Equation (4).
Figure 6. The results of relative radiometric correction. (a) Standard data that resulted from data pre-processing of the raw side-slither data; (b) Result of using on-orbit coefficients; (c) Result of using side-slither coefficients.
Figure 7. The detail of results. (a) Detailed image of the red box shown in Figure 6a; (b) Detailed image of the red box shown in Figure 6b; (c) Detailed image of the red box shown in Figure 6c.
Figure 8. Column mean value analysis of the standard image and corrected images. (a) The column means of the standard image; (b) The column means of images corrected using the on-orbit coefficients and the side-slither coefficients.
Figure 9. Streaking metrics analysis of the standard image and corrected images. (a) The streaking metrics of the standard image; (b) The streaking metrics of the images corrected using the on-orbit coefficients and side-slither coefficients.
Figure 10. The raw and corrected images of the water. (a) Raw image; (b) Image corrected using the on-orbit coefficients; (c) Image corrected using the side-slither coefficients.
Figure 11. The raw and corrected images of the city. (a) Raw image; (b) Image corrected using the on-orbit coefficients; (c) Image corrected using the side-slither coefficients.
Figure 12. The raw and corrected images of the desert. (a) Raw image; (b) Image corrected using the on-orbit coefficients; (c) Image corrected using the side-slither coefficients.
Figure 13. Streaking metrics of water images: (a) Streaking metrics of the raw image; (b) Streaking metrics of the images corrected using on-orbit coefficients and side-slither coefficients, respectively.
Figure 14. Streaking metrics of city images. (a) Streaking metrics of the raw image; (b) Streaking metrics of images corrected using on-orbit coefficients and side-slither coefficients, respectively.
Figure 15. Streaking metrics of desert images. (a) Streaking metrics of the raw image; (b) Streaking metrics of the images corrected using on-orbit coefficients and side-slither coefficients, respectively.
Figure 16. Results of data pre-processing. (a) Primary image, obtained by applying Equation (4) to the raw side-slither data; (b) Standard image, obtained by applying the standard correction to the primary image.
Figure 17. Changes in the column means of the primary and standard images. (a) Column means of the primary and standard images; (b) Absolute difference between the column means of the primary and standard images.
Table 1. Details of the experimental data.

| Group Name | Data Type | Imaging Time | TDI Level |
| --- | --- | --- | --- |
| Group A | Side-slither data (calibration data) | 16 October 2015 | 48 |
| Group B | Side-slither data (verification data) | 29 October 2015 | 48 |
| Group C | Water (verification data) | 2 October 2015 | 32 |
| Group C | City (verification data) | 2 October 2015 | 32 |
| Group C | Desert (verification data) | 10 October 2015 | 48 |
Table 2. Comparison of quantitative values of the images (coef: coefficients).

| Correction Method | Mean Value | Standard Deviation | RA (%) | RE (%) | Maximum Streaking Metric |
| --- | --- | --- | --- | --- | --- |
| Standard data | 565.7841 | 146.0522 | 22.4864 | 1.6745 | 2.5835 |
| On-orbit coef | 564.7518 | 145.5909 | 0.2657 | 0.1751 | 0.3456 |
| Side-slither coef | 564.8308 | 145.6084 | 0.0082 | 0.0335 | 0.0145 |
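The streaking metric reported in the figures and tables measures column-to-column (detector-to-detector) non-uniformity: each column's mean is compared against the average of its two neighbours' means. A minimal sketch of this standard definition, where `streaking_metric` is a hypothetical helper and `column_means` is assumed to hold the per-column mean DN values of an image:

```python
# Streaking metric: for each interior column c, compare its mean DN to the
# average of its two neighbours' means. Values near zero indicate good
# column-to-column uniformity; the corrected images should score lower
# than the raw image.

def streaking_metric(column_means):
    metrics = []
    for c in range(1, len(column_means) - 1):
        neighbour_avg = 0.5 * (column_means[c - 1] + column_means[c + 1])
        metrics.append(100.0 * abs(column_means[c] - neighbour_avg) / neighbour_avg)
    return metrics

# A perfectly uniform image streaks nowhere; a bright column stands out:
print(max(streaking_metric([100.0] * 8)))        # -> 0.0
print(streaking_metric([100.0, 110.0, 100.0]))   # -> [10.0]
```

The maximum of this metric over all columns is the "Maximum Streaking Metric" column in the tables above.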
Table 3. Comparison of quantitative values of the water images.

| Correction Method | Mean Value | Standard Deviation | Improved Factor (IF) | SSIM | Maximum Streaking Metric |
| --- | --- | --- | --- | --- | --- |
| Raw data | 75.9497 | 9.6081 | / | / | 4.1526 |
| On-orbit coef | 75.9763 | 8.9793 | 17.2223 | 0.9983 | 1.4877 |
| Side-slither coef | 83.5691 | 10.6944 | 22.6357 | 0.9923 | 0.8967 |
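The SSIM column scores how well each corrected image preserves the structure of the raw image. A minimal pure-Python sketch of the standard global (single-window) SSIM formula, assuming 10-bit data (dynamic range L = 1023) and the usual stabilising constants K1 = 0.01, K2 = 0.03; the helper name and defaults are illustrative, not the paper's code:

```python
# Global SSIM between two equal-length pixel sequences x and y.
# mx/my are means, vx/vy variances, cov the covariance; c1/c2 are the
# standard stabilising constants derived from the dynamic range L.

def ssim_global(x, y, L=1023.0):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

# Identical signals give SSIM = 1; dissimilar signals score lower:
print(ssim_global([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # -> 1.0
```

In practice SSIM is usually computed over local sliding windows and averaged; the global form above conveys the formula itself.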
Table 4. Comparison of quantitative values of the city images.

| Correction Method | Mean Value | Standard Deviation | Improved Factor (IF) | SSIM | Maximum Streaking Metric |
| --- | --- | --- | --- | --- | --- |
| Raw data | 289.1120 | 73.6728 | / | / | 2.0004 |
| On-orbit coef | 288.0641 | 73.1624 | 5.4147 | 0.9996 | 0.7410 |
| Side-slither coef | 313.7601 | 75.2528 | 15.0459 | 0.9938 | 0.5275 |
Table 5. Comparison of quantitative values of the desert images.

| Correction Method | Mean Value | Standard Deviation | Improved Factor (IF) | SSIM | Maximum Streaking Metric |
| --- | --- | --- | --- | --- | --- |
| Raw data | 667.4973 | 62.28675 | / | / | 2.5343 |
| On-orbit coef | 668.104 | 60.1704 | 1.7096 | 0.9975 | 1.4192 |
| Side-slither coef | 666.4261 | 59.4314 | 10.4055 | 0.9995 | 0.3067 |
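The improved factor (IF) in Tables 3–5 scores how much the residual stripe distortion shrinks after correction. One common definition in the destriping literature is IF = 10·log10(Σd0² / Σd1²), where d0 and d1 are the column-mean distortions before and after correction; whether the paper uses exactly this form is an assumption. A hedged sketch:

```python
import math

# Improvement factor in decibels: the ratio of the summed squared stripe
# distortion before correction (d0) to that after correction (d1).
# The toy distortion vectors below are illustrative, not measured data.

def improvement_factor(d0, d1):
    return 10.0 * math.log10(
        sum(x * x for x in d0) / sum(x * x for x in d1)
    )

# Shrinking the stripe distortion tenfold yields a 20 dB improvement:
print(improvement_factor([10.0, -10.0], [1.0, -1.0]))  # -> 20.0
```

Under this reading, the larger IF values for the side-slither coefficients in Tables 3–5 indicate a stronger suppression of residual striping than the on-orbit coefficients achieve.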

Wang, M.; Chen, C.; Pan, J.; Zhu, Y.; Chang, X. A Relative Radiometric Calibration Method Based on the Histogram of Side-Slither Data for High-Resolution Optical Satellite Imagery. Remote Sens. 2018, 10, 381. https://doi.org/10.3390/rs10030381