
Improving Radiometric Block Adjustment for UAV Multispectral Imagery under Variable Illumination Conditions

1 College of Engineering, China Agricultural University, Beijing 100083, China
2 Agricultural Biosystems Engineering Group, Department of Plant Sciences, Wageningen University & Research, Droevendaalsesteeg 1, 6708 PB Wageningen, The Netherlands
3 Data Science, Crop Protection Development, Syngenta, Westeinde 62, 1601 BK Enkhuizen, The Netherlands
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(16), 3019; https://doi.org/10.3390/rs16163019
Submission received: 2 July 2024 / Revised: 7 August 2024 / Accepted: 14 August 2024 / Published: 17 August 2024
Figure 1. The study area is located in Wageningen, Gelderland province, the Netherlands. The right figure shows the experimental setup, where the red dots represent ground control points (GCPs), and each single yellow five-pointed star denotes a set of reference panels. The white rectangle border indicates the range of the potato monoculture field, and the light blue one represents the potato and grass stripcropping field.
Figure 2. The mean reflectance values of the four sets of self-made reference panels used in this experiment.
Figure 3. Variability of solar irradiance at 560 nm (green channel) during UAV data collection under dynamic cloud conditions, as observed on 14 June between 11:20 and 12:00.
Figure 4. Workflow for the proposed radiometric block adjustment method.
Figure 5. Flowchart for selecting tie points located in the vegetation area.
Figure 6. Flowchart for identifying tie points located in vegetation areas. (a) Tie points, denoted by the red dots, located on the example image in the green channel. (b) Histogram of the calculated NDVI map for the corresponding image; the red line indicates the segmentation threshold between vegetation and non-vegetation. (c) NDVI filtered to highlight vegetation areas, overlaid on the RGB base layer. (d) Tie points exclusively located in vegetation regions of the example image.
Figure 7. Conceptual framework for reducing the number of tie point equations. (a) The result of tie point extraction on the example image pair using the Metashape Python package. (b) A 2D scatter plot of radiance values at matching tie points between pairs of overlapping images, indicated by blue circles. Green dots highlight the points selected after outlier removal, which are utilized for regression analysis. The fitted regression line is shown in blue, with the maximum and minimum points on this line, marked in red, chosen to construct the tie point equations. The black line denotes the 1:1 line.
Figure 8. The changing trend between the slopes obtained for each image in the dataset and the corresponding DLS-recorded irradiance. The green line denotes the variation in the slopes derived for each image, while the blue dashed line represents the change in irradiance. The yellow dots highlight images that capture the reference panels, with their slopes calculated based on observations from these reference panels.
Figure 9. Overview of all the reflectance orthomosaics for each method (ELM, DLS-CRP, DLS-LCRP, the optimized RBA, and RBA-Plant) across the green, red, red edge, and NIR bands. The bottom layer displays the false-color composite image (color infrared, CIR).
Figure 10. The result of reflectance conversion for tie points in side-overlapping regions between two example image pairs in the green channel under different illumination conditions. The points indicate the respective reflectance values of tie points in the two images. The ellipses show the 95% confidence ellipses.
Figure 11. The trend of the coefficient of variation (CV%) of the orthomosaic-extracted reflectance for sampled plants within the potato monoculture field.
Figure 12. The trend of the coefficient of variation (CV%) of the orthomosaic-extracted reflectance for sampled plants within the potato stripcropping field.
Figure 13. The trend of RMSE with the increase in the parameter $\omega$ for each channel.

Abstract

Unmanned aerial vehicles (UAVs) equipped with multispectral cameras offer great potential for applications in precision agriculture. A critical challenge that limits the deployment of this technology is the varying ambient illumination caused by cloud movement. Rapidly changing solar irradiance primarily affects the radiometric calibration process, resulting in reflectance distortion and heterogeneity in the final generated orthomosaic. In this study, we optimized the radiometric block adjustment (RBA) method, which corrects for changing illumination by comparing adjacent images and from incidental observations of reference panels to produce accurate and uniform reflectance orthomosaics regardless of variable illumination. The radiometric accuracy and uniformity of the generated orthomosaic could be enhanced by improving the weights of the information from the reference panels and by reducing the number of tie points between adjacent images. Furthermore, especially for crop monitoring, we proposed the RBA-Plant method, which extracts tie points solely from vegetation areas, to further improve the accuracy and homogeneity of the orthomosaic for the vegetation areas. To validate the effectiveness of the optimization techniques and the proposed RBA-Plant method, visual and quantitative assessments were conducted on a UAV-image dataset collected under fluctuating solar irradiance conditions. The results demonstrated that the optimized RBA and RBA-Plant methods outperformed the current empirical line method (ELM) and sensor-corrected approaches, showing significant improvements in both radiometric accuracy and homogeneity. Specifically, the average root mean square error (RMSE) decreased from 0.084 acquired by the ELM to 0.047, and the average coefficient of variation (CV) decreased from 24% (ELM) to 10.6%. 
Furthermore, the orthomosaic generated by the RBA-Plant method achieved the lowest RMSE and CV values, 0.039 and 6.8%, respectively, indicating the highest accuracy and best uniformity. In summary, although UAVs typically incorporate lighting sensors for illumination correction, this research offers different methods for improving uniformity and obtaining more accurate reflectance values from orthomosaics.

1. Introduction

Advances in unmanned aerial vehicles (UAVs) and the miniaturization of multispectral instruments have led to their widespread application in precision agriculture (PA) [1,2,3]. Notable applications of multispectral UAV imagery in PA involve foliar pigment content estimation [4], plant disease detection [5], and plant growth dynamics monitoring [6]. A crucial step in conducting a reliable analysis of plant traits is to recover the surface reflectance from raw image data through radiometric calibration [7]. However, external factors, particularly variable illumination during the flight [8,9], significantly affect the accuracy of reflectance conversion, posing an inevitable challenge for UAV fieldwork in natural environments.
The empirical line method (ELM), a predominant radiometric calibration technique for multispectral UAV imagery, assumes a linear relationship between image-derived radiance and the reflectance of invariant ground objects [10]. Typically, the ELM calibration is performed once, either before or after a flight campaign, using reference panels, and the derived linear model is applied uniformly to all collected images. This operation assumes that all images are acquired under the same illumination as the capture of the reference panels, neglecting in-flight lighting variations. Such assumptions explain why UAV-based field monitoring with optical cameras is recommended only under a clear sky or uniformly cloudy conditions. However, it is not always possible to wait for ideal flight conditions. This is particularly limiting in high-latitude regions such as the Netherlands, Ireland, and Denmark, where overcast skies and variable solar irradiance are frequent [11]. Additionally, plant growth is a dynamic process, with notable physiological and phenotypic variations often observed within just a week [12]. This necessitates collecting UAV data during specific growth stages, which may inevitably coincide with cloudy conditions. Moreover, advancements in battery technology have enhanced UAV capabilities for long-duration flights, enabling the coverage of larger areas [13]. This also increases the likelihood of encountering varying lighting conditions, complicating the maintenance of consistent illumination levels during flights. Consequently, expanding the UAV data collection window requires methods that alleviate the impact of dynamic illumination on multispectral images and the derived plant reflectance.
To address the challenges of variable illumination in spectral UAV imagery, recently developed methods can be divided into two main categories: sensor-based and image-based approaches [5,11,14,15,16]. Sensor-based methods measure the solar irradiance simultaneously with image capture, either onboard the UAV or on the ground, to compensate for variable illumination effects [16,17]. For instance, manufacturers such as AgEagle (AgEagle Aerial Systems, Wichita, KS, USA) and Parrot (Parrot Drone SAS, Paris, France) have developed specialized onboard irradiance sensors, such as the downwelling light sensor (DLS-2) and the Sequoia sunshine sensor, to help mitigate the influence of poor lighting conditions in real time. The strength of onboard lighting sensors is their ability to provide real-time illumination measurements, enabling dynamic adjustment of the radiometric properties of UAV images for accurate data capture under varying light conditions [18]. Additionally, onboard sensors automate correction processes, reducing post-processing and manual intervention and thereby increasing the efficiency of data collection when handling large datasets. Despite these advantages, sensor-based methods have certain drawbacks. For instance, Cao et al. (2019) [19] evaluated the performance of the DLS-corrected method under varying illumination conditions and concluded that its radiometric accuracy needs further improvement. Additionally, research [18] indicates that the measured irradiance is sensitive to the earth–sensor and sensor–solar angles, which change with the vibration and tilting of the drone, potentially introducing systematic errors into the compensation. Another solution is to place lighting sensors at ground level. Xue et al. (2023) [16] corrected the influence of illumination by placing an additional camera on the ground to capture panel images simultaneously with the drone flight. Honkavaara et al. (2013) conducted similar measurements, continuously recording in situ irradiance using an ASD FieldSpec Pro with irradiance optics to compensate for lighting variation [20]. However, ground-based corrections have an intrinsic shortcoming: they are unreliable if the illumination at the UAV location differs from that at the ground sensor location. Additionally, the associated hardware increases the overall cost of the UAV system, which can be a limiting factor for budget-constrained projects or organizations.
Compared to sensor-based approaches, image-based methods rely solely on the captured imagery, eliminating the need for extra sensors or equipment. This reduces the overall weight and power consumption of the UAV, allowing for longer flight times and greater payload capacity [21]. Image-based adjustment methods have therefore gained increasing attention as an efficient approach to correcting images acquired under different illumination conditions [14,15,22]. For example, Qin et al. (2022) developed a novel illumination estimation model based on illumination consistency within single images and reflectance consistency for the same tie points across images [15]. They subsequently proposed an illumination compensation model based on physical imaging principles to mitigate the effects of varying lighting conditions. Honkavaara et al. (2012) introduced the radiometric block adjustment (RBA) method, which estimates linear regression coefficients for image pairs based on the observed values of the same tie points within the overlapping area of consecutive images, where a tie point is the same point in the field detected in two images [23]. In addition, linear regression coefficients are included for images that contain an observation of a reference panel. The regression coefficients of the tie points and the reference panels are then jointly optimized to deal with variable illumination. Honkavaara and Khoramshahi (2018) further optimized the RBA method by integrating bidirectional reflectance distribution functions (BRDFs) and assigning different weights to the observations of tie points and reference panels [14]. However, this method is unsuitable for processing large image sets due to error accumulation [24]. Subsequently, Kizel et al. (2018) proposed a generalized empirical line method (GELM) based on a similar concept, using statistical information from tie points to homogenize images [22]. By performing a linear regression on the tie points, this method reduces the number of tie point equations and thereby the computational complexity. The GELM algorithm was designed for imaging spectroscopy data containing 385 spectral channels; however, it has only been validated on four hyperspectral images and requires further optimization and testing on broader datasets. Another limitation of these image-based approaches is that they do not consider the content of the tie points or the specific monitoring task, resulting in general corrections that may not be optimal for specific tasks such as field monitoring. In summary, current RBA-based algorithms are unsuitable for processing large numbers of multispectral images and require accuracy enhancement and validation with specific crop monitoring applications in mind.
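The equation-reduction idea underlying the GELM can be sketched as follows. This is a minimal illustration, not the exact procedure from [22]: the function name `reduce_tie_point_equations`, the 2-sigma outlier threshold, and the choice to keep only the two extreme points on the refitted line are illustrative assumptions.

```python
import numpy as np

def reduce_tie_point_equations(rad_j, rad_k, n_sigma=2.0):
    """Fit a line to the matched tie-point radiances of one image pair,
    drop outliers beyond n_sigma standard deviations of the residuals,
    and keep only the two extreme points on the refitted line."""
    rad_j = np.asarray(rad_j, dtype=float)
    rad_k = np.asarray(rad_k, dtype=float)
    # Initial least-squares fit: rad_k ~ slope * rad_j + intercept.
    slope, intercept = np.polyfit(rad_j, rad_k, 1)
    residuals = rad_k - (slope * rad_j + intercept)
    std = residuals.std()
    keep = np.abs(residuals) <= n_sigma * std if std > 0 else np.full(rad_j.shape, True)
    # Refit on the inliers only.
    slope, intercept = np.polyfit(rad_j[keep], rad_k[keep], 1)
    # Two representative points replace all tie-point equations of the pair.
    lo, hi = rad_j[keep].min(), rad_j[keep].max()
    return [(lo, slope * lo + intercept), (hi, slope * hi + intercept)]
```

Replacing the full tie-point set of each image pair by two representative points shrinks the equation system from hundreds of rows per pair to two, which is what makes the block adjustment tractable for large image sets.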
In this study, to achieve accurate radiometric calibration of UAV-collected multispectral images under varying lighting conditions for crop monitoring, we optimized the RBA-based method and validated its performance in terms of accuracy and homogeneity. The specific objectives of this article were the following: (1) to optimize the RBA-based method to make it suitable for processing numerous multispectral images and (2) to improve the performance of the RBA-based method for crop monitoring. To address the first objective, we reduced the number of tie points following the strategy proposed by Kizel et al. (2018) [22] and investigated the balance between the weights put on the tie points and those put on the observations of the reference panels. The second objective was targeted by considering only the tie points that are relevant to the crop monitoring task. To evaluate the effectiveness of the proposed methods, the resulting orthomosaic was evaluated comprehensively on a UAV image dataset collected under fluctuating solar irradiance conditions. Additionally, the contributions of this paper are the following:
  • We optimized the RBA method by reducing the number of tie point observations and assigning higher weights to the reference point observations, thereby successfully achieving robust reflectance image conversion from radiance images under changing lighting conditions.
  • We proposed the RBA-Plant method to enhance the radiometric accuracy and uniformity of the generated reflectance orthomosaic. We demonstrated that excluding non-vegetation tie points from the RBA equation system improves the quality of the generated orthomosaics.
The remaining part of the paper is organized as follows. Section 2 details the experimental settings, principles, and framework of the optimized radiometric block adjustment method. Section 3 presents the performance evaluation of that method. Lastly, Section 4 gives the discussion and conclusions.

2. Materials and Methods

2.1. Study Area

The field experiment was conducted at Unifarm, the agricultural experimental farm in Wageningen, Gelderland province, the Netherlands (51°59′28.8″N, 5°39′50.3″E). The objective of this experiment was to compare two planting systems: monoculture and stripcropping, shown as white and blue rectangles, respectively, in Figure 1. The main cultivated crop was potato, whereas in the stripcropping field, potatoes and grass were grown in alternating strips. By the time of UAV data collection in June, the potato had reached the stolon initiation stage and begun to form a canopy. The ground objects in the study area included potato plants, bare soil, grass, and roads.
Eight ground control points (GCPs), depicted as red dots in Figure 1, were strategically placed within the field. The locations of these GCPs were accurately measured by a real-time kinematic (RTK)-enabled rover with accuracies of 2 cm in the horizontal direction and 5 cm in the vertical direction. Additionally, four sets of custom reference panels, shown as five-pointed stars in Figure 1, were placed along the UAV flight path. Their properties are detailed in Section 2.2.

2.2. Instrumentation and Data Acquisition

In this study, a Matrice M210 RTK UAV (DJI, Shenzhen, Guangdong, China) equipped with an Altum five-band multispectral camera (MicaSense, AgEagle Aerial Systems, Wichita, KS, USA) and a DLS-2 downwelling light sensor was employed for aerial image dataset collection. The Altum camera is one of the most popular cameras for agricultural applications, ranging from plant counting to advanced vegetation research. The camera has five separate detectors, each characterized by a specific focal wavelength and full-width at half-maximum (FWHM) bandwidth: 475 nm and 20 nm (blue), 560 nm and 20 nm (green), 668 nm and 10 nm (red), 717 nm and 10 nm (red edge), and 840 nm and 40 nm (near-infrared, NIR). The onboard DLS sensor recorded the in-flight ambient irradiance at each image capture moment. Moreover, the manufacturer provided a small Calibrated Reference Panel (CRP), a homogeneous scattering panel with an approximate reflectance of 0.5 and a size of 10 cm × 10 cm.
Additionally, four sets of self-made reference panels, each consisting of four 60 cm × 60 cm wooden panels individually painted with matte black, light gray, dark gray, and white paint, were employed in this experiment. The custom reference panels were designed with a larger size, allowing them to be captured at flying altitude and then used for calibration. Reflectance measurements for these panels were taken using the Altum camera and the CRP. This process entailed placing the CRP next to a panel under the same lighting, capturing an overhead image, and converting the raw image to reflectance data following MicaSense's recommended protocol [25]. The reflectance values of each custom panel were calculated from a central 50 × 50-pixel region. In this study, it was observed that the white panels, with a reflectance of approximately 0.9, were prone to overexposure, which negatively affected the performance of the subsequently established conversion models. Thus, only the light gray, dark gray, and black panels were employed as ground targets for calibration. Figure 2 displays the average reflectance across all bands for the four sets of reference panels.
The UAV flight was conducted on 14 June 2022, from 11:20 to 12:00 under variable illumination conditions. Figure 3 illustrates the trend of solar irradiance changes at the 560 nm wavelength recorded by the DLS sensor during the entire UAV data collection period. The irradiance fluctuated, showing several sharp peaks and valleys, indicating that the sun was intermittently obscured due to the movement of clouds. The UAV operated at a flight altitude of 50 m, maintaining an 85% overlap in images for both forward and sideway directions. Additionally, other imaging parameters, including exposure time, ISO, and shutter speed, were set automatically.
After the UAV data collection, the reflectance of several potato plants was measured using the Altum camera at close range to establish ground-truth reflectance measurements of the plants for evaluating the method. The operator positioned the camera directly above the crop at a distance of around 1 m at each predetermined sampling point, where the CRP was placed adjacent to the crop for calibration. Images of 133 potato plants in the stripcropping field and 130 plants in the monoculture field were captured. The coordinates of these sampling points were recorded prior to the experiment, and the points were evenly distributed across the field. Each captured image was converted into a reflectance image utilizing the CRP. The reflectance values were derived by averaging the readings from a central 50 × 50-pixel region of the potato canopy in each image.
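The CRP-based conversion described above can be sketched in a few lines of NumPy. The helper names `central_patch` and `crp_reflectance` are hypothetical, and the zero-offset scaling is a simplification of MicaSense's full protocol, which also handles vignetting and exposure compensation.

```python
import numpy as np

def central_patch(img, size=50):
    """Return the central size x size window of a 2-D image array,
    as used for the canopy and panel readings in this study."""
    h, w = img.shape[:2]
    r0, c0 = (h - size) // 2, (w - size) // 2
    return img[r0:r0 + size, c0:c0 + size]

def crp_reflectance(radiance_img, panel_patch, panel_reflectance=0.5):
    """Scale a radiance image to reflectance using the mean radiance
    observed over the CRP patch (single-panel, zero-offset model)."""
    panel_radiance = np.asarray(panel_patch, dtype=float).mean()
    return np.asarray(radiance_img, dtype=float) * (panel_reflectance / panel_radiance)
```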

2.3. Radiometric Block Adjustment Method

2.3.1. Methodology Overview

Typically, UAVs capture consecutive images with a certain spatial overlap rate, effectively imaging the same area from multiple viewpoints. These overlapping regions in UAV imagery were utilized to homogenize the radiometric differences between sequential images, thereby correcting for differences in the illumination of 2D maps or 3D models. This is done by detecting tie points in neighboring images and comparing the radiance values for these tie points in both images to measure the differences. In addition, placing multiple sets of reference panels with known reflectance within the experimental field could provide baselines for accurate radiometric calibration. However, these reference panels are not detectable in every image and are visible in only a few images across the dataset. To achieve accurate radiometric correction of the entire dataset under varying lighting conditions, a strategy that combines the use of occasionally observed reference panels and tie points from the overlapping regions between adjacent images was developed.
An overview of the workflow for the optimized radiometric block adjustment method developed in this study is illustrated in Figure 4.
Generally, the input for this algorithm consists of a collection of multispectral radiance images, comprising a small number of images capturing reference panels and numerous consecutive images with overlapping regions. The preliminary processing step extracts and selects appropriate points and their values. For pairs of spatially adjacent images, tie points are extracted using the Metashape Python package (Agisoft LLC, St. Petersburg, Russia). Subsequently, tie points situated within vegetation regions are selected for further analysis. For images capturing reference panels, a 50 × 50-pixel rectangle centered on each panel is selected to extract average pixel values. These derived values are then imported into the matrix formulation stage to construct the equation systems. This phase also eliminates outlier values and minimizes the number of linear equations derived from tie points. The resulting equation system is then treated as a quadratic programming optimization problem, which is solved via the interior-point method developed by the Alibaba Cloud team. Ultimately, the algorithm produces calibration coefficients for each image in the dataset, converting the input radiance images into well-calibrated reflectance images.
In addition to this overview, detailed descriptions of each stage are provided in the following sections.

2.3.2. Mathematical Principle

In constructing the reference point equations and the tie point equations, we follow the notation of Kizel et al. (2018) [22]. Let $L \in \mathbb{R}^{W \times H \times P}$ represent a multispectral aerial radiance image, where $W$, $H$, and $P$ denote the image's width, height, and number of spectral bands, respectively. The multispectral dataset containing $S$ images is denoted by $\Theta = \{L_1, L_2, \ldots, L_S\}$. To mitigate the impact of variable illumination, two relative parameters $a_{\mathrm{rel}}^{\lambda}$ and $b_{\mathrm{rel}}^{\lambda}$ were introduced into the conventional ELM-based reflectance conversion equation for each individual image. The adjusted equation is given as follows:
$$\rho(\lambda, x, y) = a_{\mathrm{rel}}^{\lambda} \left( a_{\mathrm{abs}}^{\lambda} \cdot L(\lambda, x, y) + b_{\mathrm{abs}}^{\lambda} \right) + b_{\mathrm{rel}}^{\lambda} \tag{1}$$
The first type of equation is constructed from points extracted on the reference panels, whose reflectance is known in advance. Since both the reflectance and the extracted radiance values of these panels are available, the corresponding equations follow directly. In this study, a total of 12 panels were used, and each panel provides one equation according to Equation (1), as follows:
$$\rho_i(\lambda, x, y) = a_{\mathrm{rel},i}^{\lambda} \left( a_{\mathrm{abs}}^{\lambda} \cdot L_i(\lambda, x, y) + b_{\mathrm{abs}}^{\lambda} \right) + b_{\mathrm{rel},i}^{\lambda} \tag{2}$$
where $\rho_i(\lambda, x, y)$ is the reflectance value of the points on the $i$th panel at wavelength $\lambda$. Let $a_{\lambda}^{i}$ represent $a_{\mathrm{rel},i}^{\lambda} \cdot a_{\mathrm{abs}}^{\lambda}$, and let $b_{\lambda}^{i}$ denote $a_{\mathrm{rel},i}^{\lambda} \cdot b_{\mathrm{abs}}^{\lambda} + b_{\mathrm{rel},i}^{\lambda}$. Equation (2) can then be simplified to the following:
$$\rho_i(\lambda, x, y) = a_{\lambda}^{i} \cdot L_i(\lambda, x, y) + b_{\lambda}^{i} \tag{3}$$
The parameters $a_{\lambda}^{i}$ and $b_{\lambda}^{i}$ are estimated based on the panels, and $L_i(\lambda, x, y)$ denotes aerial radiance image $i$, which captures reference panels. To give a general form, let $n$ represent the total number of points extracted from the images that capture reference panels in the dataset $\Theta$. The equation system $M_1$ is thus given by the following:
$$M_1 = \begin{cases} a_{\lambda}^{1} \cdot L_1(\lambda, x, y) + b_{\lambda}^{1} = \rho_1(\lambda, x, y) \\ a_{\lambda}^{2} \cdot L_2(\lambda, x, y) + b_{\lambda}^{2} = \rho_2(\lambda, x, y) \\ \quad \vdots \\ a_{\lambda}^{n} \cdot L_n(\lambda, x, y) + b_{\lambda}^{n} = \rho_n(\lambda, x, y) \end{cases} \tag{4}$$
The second type of equation is constructed based on the assumption that the reflectance of the same ground object remains constant across all images. Under this assumption, the reflectance difference of the same tie point in different images is expected to be close to zero. Let $L_j(\lambda, x, y)$ and $L_k(\lambda, x, y)$ denote the radiance values in images $j$ and $k$, respectively, and let $L_j(\lambda, x_j^l, y_j^l)$ and $L_k(\lambda, x_k^l, y_k^l)$ represent the radiance values of the $l$th tie point, located in the overlapping area between radiance images $j$ and $k$, at wavelength $\lambda$. The reflectance of each pair of tie points is supposed to be the same, thus providing the following equation:
$$\rho_j(\lambda, x_j^l, y_j^l) = \rho_k(\lambda, x_k^l, y_k^l) \tag{5}$$
where $\rho_j(\lambda, x_j^l, y_j^l)$ represents the reflectance value of tie point $l$ in image $j$, and $\rho_k(\lambda, x_k^l, y_k^l)$ denotes the reflectance value of the same tie point in image $k$.
Then, by substituting Equation (1) into Equation (5), the second type of equation is formulated as follows:
$$a_{\lambda}^{j} \cdot L_j(\lambda, x_j^l, y_j^l) + b_{\lambda}^{j} - a_{\lambda}^{k} \cdot L_k(\lambda, x_k^l, y_k^l) - b_{\lambda}^{k} = 0 \tag{6}$$
Let $m$ denote the number of tie points extracted from overlapping images for the entire dataset $\Theta$. The linear equation system $M_2$ is subsequently established as follows:
$$M_2 = \begin{cases} a_{\lambda}^{j_1} \cdot L_{j_1}(\lambda, x_{j_1}^{1}, y_{j_1}^{1}) + b_{\lambda}^{j_1} - a_{\lambda}^{k_1} \cdot L_{k_1}(\lambda, x_{k_1}^{1}, y_{k_1}^{1}) - b_{\lambda}^{k_1} = 0 \\ a_{\lambda}^{j_2} \cdot L_{j_2}(\lambda, x_{j_2}^{2}, y_{j_2}^{2}) + b_{\lambda}^{j_2} - a_{\lambda}^{k_2} \cdot L_{k_2}(\lambda, x_{k_2}^{2}, y_{k_2}^{2}) - b_{\lambda}^{k_2} = 0 \\ \quad \vdots \\ a_{\lambda}^{j_m} \cdot L_{j_m}(\lambda, x_{j_m}^{m}, y_{j_m}^{m}) + b_{\lambda}^{j_m} - a_{\lambda}^{k_m} \cdot L_{k_m}(\lambda, x_{k_m}^{m}, y_{k_m}^{m}) - b_{\lambda}^{k_m} = 0 \end{cases} \tag{7}$$
where the subscripts $j_t$ and $k_t$ represent the indices of the overlapping images for the $t$th pair of tie points, and $(x_{j_t}^{t}, y_{j_t}^{t})$ denotes the pixel location of the $t$th pair of tie points.
Let $X \in \mathbb{R}^{(2S \times P) \times 1}$ represent the vector of calibration coefficients of all the images in dataset $\Theta$, which is given by the following:
$$X = \begin{bmatrix} a_{\lambda}^{1} & b_{\lambda}^{1} & a_{\lambda}^{2} & b_{\lambda}^{2} & \cdots & a_{\lambda}^{S} & b_{\lambda}^{S} \end{bmatrix}^{T} \tag{8}$$
The matrix forms of $M_1$ and $M_2$ can be formulated as follows:
$$P_1 X = p_1 \tag{9}$$
$$P_2 X = p_2 \tag{10}$$
where $P_1 \in \mathbb{R}^{(n \times P) \times (2S \times P)}$ and $P_2 \in \mathbb{R}^{(m \times P) \times (2S \times P)}$ denote the coefficient matrices of $X$ in $M_1$ and $M_2$, respectively, while $p_1 \in \mathbb{R}^{(n \times P) \times 1}$ and $p_2 \in \mathbb{R}^{(m \times P) \times 1}$ correspond to the right-hand side vectors of $M_1$ and $M_2$. To better illustrate the specific form of the above matrices, an example is provided as follows.
Assume that a reference panel is identified in the second image; the corresponding row of $P_1$ is
$$\begin{bmatrix} 0 & 0 & L_2(\lambda, x_2, y_2) & 1 & 0 & \cdots & 0 \end{bmatrix}_{(1 \times 2S \times P)} \tag{11}$$
Assume that a pair of tie points is recognized in the first and third images; the corresponding row of $P_2$ is
$$\begin{bmatrix} L_1(\lambda, x_1, y_1) & 1 & 0 & 0 & -L_3(\lambda, x_3, y_3) & -1 & 0 & \cdots & 0 \end{bmatrix}_{(1 \times 2S \times P)} \tag{12}$$
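For a single band, the two example rows above can be constructed programmatically. This is an illustrative sketch: the helper names `panel_row` and `tie_row` and the toy dataset size $S = 3$ are assumptions, and image indices are 0-based here.

```python
import numpy as np

S = 3  # toy dataset size: three images, one spectral band

def panel_row(i, radiance, reflectance):
    """Row of P1 and entry of p1 for a panel point seen in image i:
    encodes a_i * L + b_i = rho."""
    row = np.zeros(2 * S)
    row[2 * i], row[2 * i + 1] = radiance, 1.0
    return row, reflectance

def tie_row(j, k, rad_j, rad_k):
    """Row of P2 and entry of p2 for a tie point matched between images
    j and k: encodes (a_j * L_j + b_j) - (a_k * L_k + b_k) = 0."""
    row = np.zeros(2 * S)
    row[2 * j], row[2 * j + 1] = rad_j, 1.0
    row[2 * k], row[2 * k + 1] = -rad_k, -1.0
    return row, 0.0
```

Stacking the outputs of `panel_row` and `tie_row` over all observations yields the sparse matrices $P_1$ and $P_2$; with $P$ bands, the same pattern is repeated per band.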
The calibration parameters are estimated by solving an optimization problem, formulated as follows:
$$X^{*} = \arg\min_{X} \; \frac{\omega}{2} \left\| P_1 X - p_1 \right\|_2^2 + \frac{1}{2} \left\| P_2 X - p_2 \right\|_2^2 \tag{13}$$
The variable $\omega$ represents the weight assigned to the reference panel points. In practice, the number of tie point equations greatly exceeds the number of reference point equations, which biases the result toward the tie points. To keep the balance between the two types of equations, we assigned higher weights to the reference point equations, partially mitigating this issue. In addition, the actual reflectance of plants is expected to lie between zero and one ($0 \le \rho \le 1$). To ensure that the final reflectance falls within this range, we added a constraint to the optimization problem, referring to [22]:
$$a_{\lambda}^{k} \cdot L_k(\lambda, x, y) + b_{\lambda}^{k} \le 1, \qquad a_{\lambda}^{k} \cdot L_k(\lambda, x, y) + b_{\lambda}^{k} \ge 0 \tag{14}$$
where $L_k(\lambda, x, y)$ denotes the radiance values for all the pixels in each image within the dataset. In summary, by applying the constraints to Equation (13), the complete mathematical form of the radiometric block adjustment method can be described as follows:
$$X^{*} = \arg\min_{X} \; \frac{\omega}{2} \left\| P_1 X - p_1 \right\|_2^2 + \frac{1}{2} \left\| P_2 X - p_2 \right\|_2^2 \quad \text{s.t.} \; AX \le b \tag{15}$$
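A minimal numerical sketch of solving Equation (15) follows, using SciPy's `trust-constr` solver as a stand-in for the interior-point solver used in this study. The toy radiance values, the weight $\omega = 10$, and the choice to enforce the reflectance bounds only at each image's maximum radiance (rather than at every pixel) are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize, LinearConstraint

# Toy single-band block: two images, X = [a1, b1, a2, b2].
# One panel point (reflectance 0.5, radiance 100) seen in image 1,
# and one tie point with radiances 80 (image 1) and 60 (image 2).
P1 = np.array([[100.0, 1.0, 0.0, 0.0]]); p1 = np.array([0.5])
P2 = np.array([[80.0, 1.0, -60.0, -1.0]]); p2 = np.array([0.0])
omega = 10.0  # higher weight on the scarce panel equations

def cost(X):
    # Weighted least-squares objective of Equation (13).
    return (omega / 2.0) * np.sum((P1 @ X - p1) ** 2) \
         + 0.5 * np.sum((P2 @ X - p2) ** 2)

# Enforce 0 <= a * L + b <= 1 at each image's maximum radiance
# (the paper constrains all pixels; two rows suffice for this toy).
A = np.array([[120.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 90.0, 1.0]])
res = minimize(cost, x0=np.array([0.005, 0.0, 0.005, 0.0]),
               method="trust-constr",
               constraints=LinearConstraint(A, 0.0, 1.0))
a1, b1, a2, b2 = res.x
```

Because the objective is a convex quadratic and the constraints are linear, the problem has a unique feasible optimum region and any QP-capable solver converges reliably; the coefficients `a1, b1` and `a2, b2` then convert each radiance image to reflectance.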

2.3.3. Preliminary Processing

The preliminary processing step involves selecting reference points, extracting tie points, and filtering out non-vegetation points:
  • Reference points selection: The identification of reference points is performed on the images capturing reference panels. In this study's flight campaign, reference panels were captured in 12 images. For each reference image, 50 × 50-pixel rectangles centered on the light gray, dark gray, and black panels were used to obtain average values, which served as the observed reference point values. The white panels were excluded, as they are prone to overexposure, which results in the loss of valid information. This operation was performed manually.
  • Tie points extraction and filtering out non-vegetation points: This step is performed on each pair of images with overlapping regions. Tie points are extracted automatically via the Metashape Python package, and their corresponding coordinates are recorded; the operation is executed on a band-by-band basis. Subsequently, tie points located in vegetation areas are selected, since emphasizing crop-related information improves the quality of the subsequent plant analysis. Figure 5 shows the workflow for selecting tie points located in the vegetation area. The normalized difference vegetation index (NDVI) is utilized to distinguish between vegetation and non-vegetation areas owing to its sensitivity to chlorophyll content [26]; its formula is as follows:
    N D V I = ( N I R R e d ) ( N I R + R e d )
    where $NIR$ and $Red$ represent the near-infrared and red bands, respectively.
The first step in calculating the NDVI image is to align one image from the pair, following the tutorial provided by the MicaSense manufacturer [25]. The alignment process involves three key steps: unwarping the images via built-in lens calibration, calculating affine transformation matrices to align each band with a reference band, and aligning and cropping the images to exclude non-overlapping pixels across all bands. Notably, once the alignment transformation matrices are determined, they can be applied to additional images from the same flight, significantly simplifying the calculation. Once the NDVI image is acquired, vegetation segmentation can be performed. In this study, the observed potato plants were at the stolon initiation stage, and the sensor used was an Altum multispectral camera, resulting in NDVI values for the potatoes generally above 0.45. Distinguishing between potato and non-vegetation areas on the NDVI map was therefore relatively straightforward. We set the threshold to 0.6 to exclude non-vegetation points more strictly, with the segmentation results illustrated in Figure 6c. All extracted tie points can then be projected into the NDVI image using the calculated affine transformation matrices. Afterward, tie points with NDVI values above 0.6 were selected, thereby filtering out non-vegetation areas. Finally, the selected tie points were projected back into the original single-channel image. Figure 6 illustrates the result of excluding soil and other non-vegetation areas using an example image.
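The tie point filtering step described above can be sketched in a few lines. The snippet below is a minimal illustration rather than the authors' implementation: it assumes the NIR and red bands have already been aligned into common pixel coordinates via the affine matrices described above, and that tie points are given as (row, column) indices; the function name and the toy arrays are hypothetical.

```python
import numpy as np

def filter_vegetation_tie_points(nir, red, points, threshold=0.6):
    """Keep only tie points whose NDVI exceeds the vegetation threshold.

    nir, red : 2-D arrays of the aligned NIR and red bands.
    points   : (N, 2) integer array of tie-point pixel coordinates (row, col).
    """
    ndvi = (nir - red) / (nir + red + 1e-9)  # epsilon avoids division by zero
    rows, cols = points[:, 0], points[:, 1]
    return points[ndvi[rows, cols] > threshold]

# Toy 2x2 scene: only the top-left pixel is vegetation (NDVI ~ 0.78).
nir = np.array([[0.8, 0.3], [0.2, 0.3]])
red = np.array([[0.1, 0.2], [0.2, 0.25]])
pts = np.array([[0, 0], [0, 1], [1, 0]])
veg_pts = filter_vegetation_tie_points(nir, red, pts)  # -> [[0, 0]]
```

In practice the NDVI would be computed once per aligned image and the same threshold (0.6 in this study) applied to all extracted tie points.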

2.3.4. Matrix Formulation

In certain instances, outlier values may appear within an image due to saturated or damaged pixels. This issue becomes more common under dramatic changes in illumination during the flight. Consequently, removing outliers is crucial to ensure the reliability of the results. Let $\mu_\lambda$ and $\sigma_\lambda$ denote the mean and standard deviation, respectively, of the radiance values of all tie points in band $\lambda$ of image $L_i(\lambda, x, y)$. Global outlier values can be removed using the following criterion:
$$\left| L_i(\lambda, x, y) - \mu_\lambda \right| > \tau \cdot \sigma_\lambda$$
The parameter τ is used to quantify the deviation of outliers from the mean value, and we set the value of τ empirically to 3.
After global outlier detection, some outliers may remain inconspicuous or be located in locally anomalous regions. The Local Outlier Factor (LOF) method can detect such outliers more precisely by analyzing the behavior of data points in their local context. This approach improves the accuracy and comprehensiveness of outlier detection and thus complements the shortcomings of global outlier detection, ensuring cleaner and more reliable data. In this study, the scikit-learn package was used for LOF detection, with the parameter n_neighbors set to 20 and contamination set to 0.05.
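The two-stage outlier removal can be sketched as follows. This is a hedged sketch rather than the exact pipeline: remove_outliers is a hypothetical helper that applies the global $\tau$-sigma test and then scikit-learn's LocalOutlierFactor with the parameters stated above.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def remove_outliers(values, tau=3.0, n_neighbors=20, contamination=0.05):
    """Two-stage outlier removal on tie-point radiances in one band.

    Stage 1: global tau-sigma clipping around the mean.
    Stage 2: Local Outlier Factor on the surviving values.
    """
    values = np.asarray(values, dtype=float)
    mu, sigma = values.mean(), values.std()
    kept = values[np.abs(values - mu) <= tau * sigma]  # global test

    lof = LocalOutlierFactor(n_neighbors=min(n_neighbors, len(kept) - 1),
                             contamination=contamination)
    labels = lof.fit_predict(kept.reshape(-1, 1))  # -1 marks local outliers
    return kept[labels == 1]

rng = np.random.default_rng(0)
radiances = np.append(rng.normal(0.3, 0.02, 100), 5.0)  # one gross outlier
cleaned = remove_outliers(radiances)  # 5.0 is removed by the global test
```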
It is worth noting that the number of extracted tie points for the dataset $\Theta$ substantially exceeds the number of reference points. This imbalance between tie point and reference point equations can bias the derived solution. In a large system of equations, an excessive number of tie points can introduce significant constraints, potentially rendering the system unsolvable. Furthermore, the imbalance may result in the reference points having minimal impact on the overall solution. However, since the reference points provide ground truth data, their influence in the system should be increased to obtain more accurate and reliable solutions. To mitigate this issue, we adopted a twofold strategy: (1) selecting only two feature points fitted from the tie points, as suggested by Kizel et al. (2018) [22], and (2) assigning higher weights to the reference point equations. Figure 7 illustrates the methodology for selecting two feature points to represent all tie points across paired images. Initially, a regression line is fitted using the radiance values of all tie points between the image pair. Subsequently, as illustrated in Figure 7b, the maximum and minimum points on this regression line are identified and selected as feature points. These two feature points are then used to construct the tie point equations. This strategy effectively reduces the number of tie point equations, balancing the two equation types, and significantly enhances the solution efficiency. The settings and analysis of $\omega$ in Equation (15) are detailed in Section 3.4.
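The feature point selection strategy can be illustrated with a short sketch. Assuming the tie-point radiances of an image pair are given as two paired arrays, the hypothetical helper below fits the regression line and returns the two extreme points on it, as described above.

```python
import numpy as np

def feature_points_from_pair(rad_j, rad_k):
    """Collapse all tie points of an image pair into two feature points.

    A line rad_k = m * rad_j + c is fitted to the paired tie-point
    radiances; the minimum and maximum rad_j values are projected onto
    that line, yielding the two representative points used to build the
    tie point equations.
    """
    m, c = np.polyfit(rad_j, rad_k, deg=1)
    x_lo, x_hi = np.min(rad_j), np.max(rad_j)
    return np.array([[x_lo, m * x_lo + c], [x_hi, m * x_hi + c]])

rad_j = np.linspace(0.1, 0.9, 50)            # radiances in image j
rad_k = 0.5 * rad_j + 0.05                   # same points seen in image k
fp = feature_points_from_pair(rad_j, rad_k)  # -> [[0.1, 0.1], [0.9, 0.5]]
```

Projecting the extremes onto the fitted line, rather than using raw tie points, makes the two representatives robust to residual noise in individual observations.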

2.3.5. Solution and Optimization

In this section, we introduce the optimization method, the solution of the equation system, and the corresponding package used. Equation (15) can be converted into a constrained quadratic programming formulation:
$$\min_{x} \; \frac{1}{2} x^{T} G x + h^{T} x \quad \text{s.t.} \; a_i^{T} x \le b_i$$
The MindOpt optimization solver developed by the Alibaba Cloud Team (2021) [27] was used to find the optimal solution. The method was run with Python 3.10 on Windows 10. The solution yielded a pair of calibration coefficients for each radiance image in the multispectral dataset.
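For readers without access to MindOpt, the quadratic program above can be solved with any general-purpose solver. The sketch below uses SciPy's SLSQP method as a generic stand-in, which is an assumption for illustration and not the solver used in this study; solve_qp and the toy problem are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def solve_qp(G, h, A, b):
    """Solve min 0.5 x^T G x + h^T x  s.t.  A x <= b  via SLSQP."""
    n = G.shape[0]
    res = minimize(
        fun=lambda x: 0.5 * x @ G @ x + h @ x,
        jac=lambda x: G @ x + h,
        x0=np.zeros(n),
        # SLSQP expects inequality constraints in the form fun(x) >= 0.
        constraints=[{"type": "ineq",
                      "fun": lambda x: b - A @ x,
                      "jac": lambda x: -A}],
        method="SLSQP",
    )
    return res.x

# Toy problem: minimize (x-1)^2 + (y-2)^2 subject to x + y <= 2.
G = 2.0 * np.eye(2)
h = np.array([-2.0, -4.0])
A = np.array([[1.0, 1.0]])
b = np.array([2.0])
x_opt = solve_qp(G, h, A, b)  # -> approximately [0.5, 1.5]
```

For the large sparse systems produced by a full flight, a dedicated QP solver such as MindOpt is considerably faster than a general nonlinear method, which motivates the choice in the text.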

2.4. Reference Methods

2.4.1. Empirical Line Method

The ELM [10] is the most common way to convert raw data to surface reflectance. By establishing a linear relationship between the radiance of the reference panels as measured by the camera and their corresponding reflectance, the ELM model is described for each wavelength by Equation (19):
$$\rho(\lambda, x, y) = a_{\mathrm{abs}}^{\lambda} \cdot L(\lambda, x, y) + b_{\mathrm{abs}}^{\lambda}$$
where $\rho(\lambda, x, y)$ is the reflectance value at pixel $(x, y)$ at wavelength $\lambda$, $L(\lambda, x, y)$ denotes the radiance image, and $a_{\mathrm{abs}}^{\lambda}$ and $b_{\mathrm{abs}}^{\lambda}$ are the absolute calibration coefficients.
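A minimal sketch of the ELM in one band, assuming the mean panel radiances and their lab-calibrated reflectances are available as arrays (the panel values below are hypothetical):

```python
import numpy as np

def fit_elm(panel_radiance, panel_reflectance):
    """Fit ELM coefficients (slope, intercept) for one band from the
    measured panel radiances and their known reflectances."""
    a_abs, b_abs = np.polyfit(panel_radiance, panel_reflectance, deg=1)
    return a_abs, b_abs

def apply_elm(radiance_image, a_abs, b_abs):
    """Convert a radiance image to reflectance with the fitted coefficients."""
    return a_abs * radiance_image + b_abs

# Three panels in one band (hypothetical radiance units / reflectances).
L_panels = np.array([10.0, 40.0, 70.0])
rho_panels = np.array([0.10, 0.40, 0.70])
a, b = fit_elm(L_panels, rho_panels)       # -> a = 0.01, b = 0.0
refl = apply_elm(np.array([50.0]), a, b)   # -> [0.5]
```

Because the coefficients are fitted once (typically at the start of the flight), the ELM cannot track irradiance changes during the flight, which is the weakness the block adjustment addresses.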

2.4.2. DLS-CRP Method

The manufacturer MicaSense offered a sensor-corrected solution (DLS-CRP) to reduce the impact of variable illumination. A brief introduction of this method is given as follows; for more details, refer to [28]. An image of the CRP is captured before and after the flight to provide a baseline reflectance value. The correction coefficient C ( λ ) is then calculated as follows:
$$C(\lambda) = \frac{L_{\mathrm{crp}}(\lambda) \cdot \pi}{\rho_{\mathrm{crp}}(\lambda) \cdot E_{\mathrm{dls}}}$$
where $L_{\mathrm{crp}}(\lambda)$ represents the mean radiance of the CRP at waveband $\lambda$, $\rho_{\mathrm{crp}}(\lambda)$ denotes the reflectance of the CRP, and $E_{\mathrm{dls}}$ refers to the irradiance recorded by the DLS at the moment the CRP was captured.
For subsequently collected images, the DLS records the irradiance value at the moment of capture and embeds it into the image. The DLS-corrected reflectance image is then derived as follows:
$$\rho(\lambda, x, y) = \frac{L(\lambda, x, y) \cdot \pi}{C(\lambda) \cdot E_{\mathrm{dls}}}$$
where $L(\lambda, x, y)$ represents the UAV-collected radiance image, $E_{\mathrm{dls}}$ denotes the corresponding DLS-recorded irradiance, and $\rho(\lambda, x, y)$ represents the DLS-corrected reflectance image.
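The two equations above translate directly into code. The sketch below is illustrative only; the function names and numeric values are hypothetical.

```python
import numpy as np

def dls_crp_coefficient(L_crp, rho_crp, E_dls_at_crp):
    """Correction coefficient C(lambda) from the CRP capture."""
    return (L_crp * np.pi) / (rho_crp * E_dls_at_crp)

def dls_correct(radiance, C, E_dls):
    """DLS-corrected reflectance for a radiance image (or pixel)."""
    return (radiance * np.pi) / (C * E_dls)

# Hypothetical values: CRP radiance 20, reflectance 0.5, irradiance 800.
C = dls_crp_coefficient(20.0, 0.5, 800.0)
# Sanity check: correcting the CRP capture itself recovers rho_crp.
rho = dls_correct(20.0, C, 800.0)  # -> 0.5
```

The sanity check follows from substituting Equation for $C(\lambda)$ into the correction equation: the $\pi$ and irradiance terms cancel, leaving the panel reflectance.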

2.4.3. DLS-LCRP Method

For the DLS-CRP method, the image of the CRP is captured on the ground, with the camera positioned close to the panel. This means that the radiance reaching the camera is primarily the directly reflected radiance from the panel, without the effects of atmospheric scattering. In practice, UAVs usually fly at altitudes of 30 to 120 m, and atmospheric scattering does affect the accuracy of the radiometrically calibrated outputs. In this study, at the beginning of the flight, an image of the custom light gray panel was captured at a flying altitude of 50 m to calculate the initial correction coefficient $C(\lambda)$. The subsequent reflectance conversion workflow was the same as in Section 2.4.2.

2.5. Evaluation

Evaluation 1: The optimized radiometric block adjustment method generates a pair of conversion coefficients for each image, representing the slope and intercept, with the slope being the dominant factor in the conversion equation. We first illustrate the trend between the slope of the transformation parameters for each image and the corresponding DLS-measured irradiance.
Evaluation 2: The performance of the optimized radiometric block adjustment method is visually evaluated by showing the generated orthomosaics. Visual evaluation offers an intuitive understanding of the differences between various methods, focusing primarily on the homogeneity of the reflectance orthomosaics across different bands. Additionally, the visual assessment can also provide a preliminary indication of the variations in reflectance values. To better analyze the performance of the optimized method, a total of five methods is compared: (1) ELM-converted orthomosaic; (2) DLS-CRP corrected orthomosaic; (3) DLS-LCRP corrected orthomosaic; (4) the optimized radiometric block adjustment (RBA) corrected orthomosaic, produced using the above-described method and without filtering out non-vegetation points; (5) RBA-Plant generated orthomosaic. For brevity, the optimized method involving selecting tie points in vegetation regions is referred to as RBA-Plant in the subsequent content.
Evaluation 3: Quantitative assessment.
Evaluation 3.1: The first perspective of the quantitative assessment is to evaluate the correction performance on image pairs. After correction, the values of tie points located in the overlapping regions between image pairs should be similar. Image pairs with side overlaps were used because their irradiance changes are more noticeable, allowing better observation of the correction performance. The normalized root mean squared deviation (NRMSD) was used to evaluate the difference between image pairs; it is defined as follows:
$$\mathrm{NRMSD} = \frac{\sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( \rho_i^{j} - \rho_i^{k} \right)^2}}{\rho_{\max} - \rho_{\min}}$$
where $\rho_i^{j}$ and $\rho_i^{k}$ denote the reflectance of tie point $i$ on images $j$ and $k$, respectively; $n$ represents the number of tie points between the image pair; and $\rho_{\max}$ and $\rho_{\min}$ represent the maximum and minimum reflectance values across all images, respectively.
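The NRMSD computation can be sketched as follows (a hypothetical helper, assuming paired tie-point reflectance arrays for one image pair):

```python
import numpy as np

def nrmsd(rho_j, rho_k, rho_max, rho_min):
    """NRMSD between the converted reflectances of shared tie points."""
    diff = np.asarray(rho_j) - np.asarray(rho_k)
    return np.sqrt(np.mean(diff ** 2)) / (rho_max - rho_min)

# Perfectly consistent image pair -> NRMSD of 0.
same = nrmsd([0.2, 0.4, 0.6], [0.2, 0.4, 0.6], 1.0, 0.0)  # -> 0.0
```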
Evaluation 3.2: The second aspect of the quantitative analysis focuses on radiometric accuracy. As mentioned in Section 2.2, the reflectance of a total of 263 plants was measured on the ground and used as the ground truth reflectance. For ease of comparison, the GPS locations of these plants were recorded and can be found on the orthomosaic. Let $\rho_{\mathrm{plant},i}^{\lambda}$ denote the reflectance value at band $\lambda$ for plant $i$, as extracted from the corrected orthomosaics, and let $\rho_{i,\mathrm{GT}}^{\lambda}$ represent the ground truth reflectance of plant $i$ at waveband $\lambda$. The root mean squared error (RMSE) was used to evaluate the difference between $\rho_{\mathrm{plant},i}^{\lambda}$ and $\rho_{i,\mathrm{GT}}^{\lambda}$, representing the radiometric accuracy. Its formula is given as follows:
$$\mathrm{RMSE}_{\lambda} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( \rho_{\mathrm{plant},i}^{\lambda} - \rho_{i,\mathrm{GT}}^{\lambda} \right)^2}$$
Evaluation 3.3: The last aspect of quantitative evaluation is the uniformity of the generated orthomosaic. To this end, the metric Coefficient of Variation (CV) was utilized to assess the reflectance values’ consistency among the sampled plants. The formula is given as follows:
$$\mathrm{CV}_{\lambda} = \frac{\sigma_{\lambda}}{\mu_{\lambda}} \times 100\ [\%]$$
where $\sigma_{\lambda}$ signifies the standard deviation of the reflectance values extracted from the orthomosaic across all $N$ sampled plants in the field, and $\mu_{\lambda}$ denotes the mean reflectance of all sampled plants. Consequently, lower values of $\mathrm{CV}_{\lambda}$ indicate a higher level of uniformity in the corrected orthomosaic.
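The accuracy and uniformity metrics can be sketched together (hypothetical helpers; the population standard deviation is used for the CV, which is an assumption since the text does not specify):

```python
import numpy as np

def rmse(rho_extracted, rho_ground_truth):
    """Per-band RMSE between orthomosaic-extracted and ground-truth
    reflectances of the sampled plants."""
    diff = np.asarray(rho_extracted) - np.asarray(rho_ground_truth)
    return np.sqrt(np.mean(diff ** 2))

def cv_percent(rho_extracted):
    """Coefficient of Variation (%) over the sampled plants in one band."""
    rho = np.asarray(rho_extracted)
    return rho.std() / rho.mean() * 100.0

cv = cv_percent([0.4, 0.6])         # -> 20.0 (std 0.1, mean 0.5)
err = rmse([0.5, 0.5], [0.4, 0.6])  # -> 0.1
```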
It is important to note that our analysis is based on the green, red, red edge, and NIR bands. The exclusion of the blue channel is justified by two main factors: first, plants predominantly absorb blue light for photosynthesis, resulting in a significant decrease in the signals captured from vegetation areas in the blue channel. Second, under cloudy conditions, the proportion of scattered light increases, with blue light constituting a significant portion of this scattered light, which introduces additional noise into the blue channel [8]. Given the consequent low signal-to-noise ratio in the blue waveband, excluding the blue band from our analysis was deemed necessary.

3. Results

3.1. Solution Result

Figure 8 shows the trend between the solved slopes ($a$) for each image and the DLS-recorded irradiance. The yellow dots represent the slopes derived from the reference points. The blue line reflects the irradiance values that vary with changing ambient lighting conditions during the flight. In general, the irradiance values exhibited significant variability throughout the observation period. Initially, the flight started with high irradiance, which then abruptly decreased, indicating that cloud cover obstructed the sunlight. Subsequently, the irradiance fluctuated, exhibiting several peaks. Towards the later stages, the irradiance rose back to high intensity and then declined again, remaining at a lower level by the end of the observation period. The solved slope parameter ($a_i$) generally follows the opposite trend of the irradiance, fluctuating at a low level during periods of high irradiance and increasing to a high value as the irradiance decreased. This inverse relationship demonstrates the method's ability to compensate for changes in lighting conditions, maintaining consistent data conversion across varying illumination scenarios. Moreover, it is observed that the values derived from the reference points constrained the solving process, with the solved values converging towards those at the reference points. This convergence indicates that the reference points effectively anchor the method's output, providing a reliable baseline amidst dynamic lighting conditions and thereby enhancing the radiometric accuracy of the solutions. Toward the end of the green line, there was a significant decrease in values. A potential reason for this reduction could be the diminished number of overlapping images on the last flight path.
Overall, the obtained slope values compensate, to some extent, for changes in irradiance, thereby ensuring both the accuracy and uniformity of the subsequently generated radiometric products.

3.2. Visual Evaluation of Orthomosaic

To intuitively demonstrate the performance of the various radiometric correction methods under variable illumination conditions, Figure 9 illustrates the reflectance orthomosaics produced by the aforementioned conversion methods for each band. The bottom layer of each subfigure presents the false-color image (color infrared, or CIR). As can be seen from Figure 9a, the ELM-converted reflectance orthomosaic was significantly affected by the varying illumination, as evidenced by the presence of alternating bright and dark strips. These strips reflect substantial variations in the reflectance values of the same crop within the field. It is clear that variable illumination significantly influences the ELM method, which therefore performs poorly in terms of orthomosaic homogeneity. The DLS-CRP method, developed by MicaSense and utilizing an onboard irradiance sensor for correction, notably improved the uniformity of the orthomosaic, as shown in Figure 9b. However, two small areas in the middle of the orthomosaic still exhibit differences from the surrounding crops. In summary, the strategy of using sensors can effectively mitigate the impact of illumination and improve the uniformity of the generated orthomosaic. Figure 9c displays the performance of the DLS-LCRP method. The homogeneity of the orthomosaic produced by this method was maintained at almost the same level as that of the DLS-CRP method. However, the reflectance values, particularly in the red edge and NIR bands, show noticeable differences compared with the DLS-CRP method. According to the radiometric accuracy assessment in Section 3.3.2, the DLS-LCRP method achieved the highest accuracy when comparing orthomosaic-extracted reflectance to ground-collected crop reflectance data. These results show that the DLS-LCRP method can effectively improve radiometric accuracy while maintaining a level of uniformity comparable to that of the DLS-CRP method.
Figure 9d illustrates the performance of the optimized radiometric block adjustment method. In terms of homogeneity, the RBA method worked fairly well in mitigating the influence of variable illumination on the orthomosaic, as evidenced by successfully mitigating the obvious striping effects observed in the ELM orthomosaic. From an intuitive perspective, the optimized RBA method significantly enhanced the uniformity of the orthomosaic, performing even better than methods that utilize onboard irradiance sensors. Regarding accuracy, the colors in the red edge and NIR bands of the orthomosaic from the optimized RBA method align more closely with those from the DLS-LCRP method, reflecting similar magnitudes of reflectance values. This similarity indicates that the optimized RBA method also contributes to improving accuracy. As shown in Figure 9e, the further optimized RBA-Plant method reached almost the same level of uniformity as the RBA method. The main difference between these two methods is reflected in the reflectance values. For instance, the right edge of the color infrared (CIR) orthomosaic generated by the RBA-Plant method appears greener compared to that produced by the RBA method. Details of the quantitative analysis of radiometric accuracy will be provided in the subsequent section. In conclusion, both the optimized RBA and RBA-Plant methods proved highly effective in reducing the impact of variable illumination on orthomosaics, as seen from an intuitive perspective, enhancing the overall uniformity of these orthomosaics.

3.3. Quantitative Assessment

3.3.1. The Adjustment on Image Pairs

To quantitatively assess the correction performance on image pairs, Table 1 presents the normalized root mean square deviation (NRMSD, Equation (22)) between the converted reflectance values of tie points for all image pairs with side-overlapping regions across the four wavebands. A smaller NRMSD value indicates less difference between the converted reflectances of tie points in an image pair, which in turn signifies a more effective radiometric correction. It is apparent from this table that the ELM method failed to mitigate the radiometric differences in the overlapping regions of image pairs, as evidenced by its highest NRMSD values across the four bands. This indicates that the ELM method is not robust against variable illumination. Meanwhile, the use of irradiance sensors could not effectively improve the uniformity of the converted reflectance values between image pairs, as demonstrated by the similar NRMSD values obtained by the ELM and DLS-CRP methods; the potential reason is analyzed in the next paragraph. What stands out in the table is the substantial decrease in the NRMSD values of the optimized RBA and RBA-Plant approaches, with average NRMSD values of 6.72% and 5.58%, respectively. This notable reduction indicates that the image-based methods (RBA and RBA-Plant) are highly effective in mitigating the radiometric differences between image pairs caused by variations in illumination. Furthermore, the NRMSD values of the RBA-Plant method across all four bands were consistently lower than those of the RBA method, indicating that extracting useful information solely from regions of interest can further enhance performance.
To demonstrate the correction effects of these radiometric correction methods in more detail, Figure 10 illustrates the reflectance conversion results for tie points located in the overlapping areas of two example image pairs in the green channel. The points indicate the converted reflectance of the tie points in both images, and the ellipses show the 95% confidence ellipses. Figure 10a shows the adjustment result for an image pair collected under similar solar irradiance levels, while Figure 10b depicts the same for an image pair under relatively greater variation in irradiance. Ideally, under optimal conversion conditions, the resulting tie point values should be distributed along the 1:1 dashed line. It is apparent from Figure 10 that the ELM exhibited the poorest transformation performance under varying irradiance conditions, as its error ellipses are not parallel to the 1:1 line. Moreover, the sensor-corrected methods, denoted by the black crosses (DLS-CRP) and cyan stars (DLS-LCRP), did not perform well when the irradiance levels of the two images were similar, as shown by the adjusted values deviating far from the 1:1 line. However, the sensor-based approaches became effective in correcting illumination when there were substantial differences in irradiance between the image pairs, as demonstrated by the converted values approaching the 1:1 line. This is the likely reason why the NRMSD values of the sensor-based methods in Table 1 ended up being high. Notably, the red and green circles, which represent the optimized RBA and RBA-Plant methods, are closely aligned with the 1:1 line in both subfigures. The optimized RBA and RBA-Plant methods achieved superior performance in mitigating the impact of illumination, regardless of the extent of irradiance variation.

3.3.2. Radiometric Accuracy

To quantitatively evaluate the radiometric accuracy of these correction methods, Table 2 shows the RMSE between the orthomosaic-extracted reflectance and the ground-collected reflectance of the sampled plants. In the monoculture field dataset, the ELM method demonstrated the poorest accuracy among the five investigated methods, as evidenced by its highest average RMSE of 0.084. This result further indicates that the orthomosaic generated using the ELM method under varying illumination conditions not only has poor uniformity but also exhibits the lowest radiometric accuracy. Additionally, the mean RMSE for the DLS-CRP method decreased slightly compared to that of the ELM. However, its overall accuracy remains suboptimal, especially in the green, red, and red edge bands, where its RMSE is even higher than that of the ELM approach. This result demonstrates that the radiometric accuracy of the DLS-CRP method is unreliable and requires further enhancement. The optimized DLS-LCRP method significantly enhanced the accuracy, as evidenced by a reduction in the RMSE to 0.041, representing a 47.4% improvement in accuracy over the DLS-CRP approach. Compared to the DLS-CRP method, capturing reference panels at flight altitude (DLS-LCRP) effectively improves accuracy by taking atmospheric effects into account. What stands out in the table is that the optimized RBA method achieved an accuracy nearly equivalent to the DLS-LCRP method, with a mean RMSE of 0.047, only slightly higher than that of the DLS-LCRP. Compared to the ELM-generated orthomosaic, the RBA method achieved a 42.5% improvement in accuracy. This result indicates that the optimized RBA method effectively reduces the impact of varying illumination, as shown by the improved homogeneity and radiometric accuracy of the generated orthomosaic.
Notably, the RBA-Plant approach further enhanced the accuracy, with the mean RMSE reduced to 0.039, even lower than that of the DLS-LCRP method. Extracting tie points only from vegetation areas, thereby filtering out less relevant information, can further improve radiometric accuracy while maintaining the same uniformity as the RBA method.
In the stripcropping field dataset, the overall trend of accuracy remained consistent with that observed in the monoculture dataset. The DLS-LCRP method maintained stable performance, achieving the highest radiometric accuracy with the lowest RMSE value of 0.04. Conversely, the accuracy of the RBA method showed a slight decline in the stripcropping dataset, with the RMSE increasing to 0.066. Nonetheless, the radiometric accuracy achieved by the RBA method still surpasses that of the DLS-CRP method. This decline in the accuracy of the RBA method could be potentially attributed to the increased complexity of image scenes in the stripcropping field compared to that of the monoculture field. Moreover, the RBA-Plant approach demonstrated improved accuracy compared to the optimized RBA method, with an RMSE value of 0.052, which is lower than that of the RBA method.
In conclusion, those results demonstrate that the optimized RBA method effectively mitigates the effects of variable illumination on reflectance orthomosaics, enhancing its radiometric accuracy. Moreover, the further improvement in the accuracy of the RBA-Plant method indicates that the performance of those image-based correction approaches is content-relevant. Filtering out non-vegetation tie points could be an effective strategy to improve their performance.

3.3.3. Radiometric Uniformity

To comprehensively evaluate the uniformity performance of these radiometric correction methods quantitatively, Figure 11 and Figure 12 illustrate the CV (%) trend of the reflectance of all sampled plants within the monoculture and stripcropping fields, respectively, representing the uniformity of the orthomosaic. In the monoculture field dataset, it is apparent from Figure 11 that the ELM yielded the highest CV values across the four bands, demonstrating the poorest homogeneity, which is consistent with the visual assessment. Moreover, as shown in Figure 11, the CV values for the DLS-CRP and DLS-LCRP methods decreased from an average of over 25% with the ELM method to approximately 15%. This result indicates that using onboard irradiance sensors for illumination correction can effectively mitigate the impact of illumination variations, producing reflectance orthomosaics with improved uniformity. What is striking in this figure is the notable reduction in the CV values of the optimized RBA method in the green, red edge, and NIR bands. This result demonstrates, from a quantitative perspective, that the optimized RBA method effectively alleviates the influence of varying illumination on orthomosaics. Furthermore, the RBA-Plant method further improved the uniformity, achieving CV values of 6.69%, 10.53%, 6.06%, and 5.89% in the green, red, NIR, and red edge bands, respectively, the lowest values across all methods. This result further demonstrates the superior performance of the RBA-Plant approach under fluctuating lighting conditions for accurate and precise calibration.
In the stripcropping dataset, it can be observed that the performance of the sensor-based methods remained stable, reducing the CV values to a range between 11% and 17% and almost maintaining their performance, as seen in the monoculture dataset. The image-based methods (RBA and RBA-Plant) improved the uniformity as well. The RBA method achieved low CV values of around 6% in the green, NIR, and red edge bands. In contrast, the CV value in the red band was only reduced to 21.63%, which is merely 5% lower than that achieved by the ELM method. The potential reason could be that crops absorb red light, resulting in low values in the red band [11]. Consequently, values in the red band are more susceptible to noise, leading to high CV values. The RBA-Plant method, which excludes tie points in non-vegetation regions, could alleviate this issue, achieving a CV of 15.9% in the red band. Overall, the RBA-Plant method reduced the CV values to a range of 4% to 16%, showing a better performance than the RBA method.
In summary, consistent with the visual assessment results, quantitative analysis also indicates that the ELM method performed the worst. Using sensors for illumination correction can significantly improve radiometric uniformity. The optimized RBA method effectively and significantly enhances radiometric uniformity, with the RBA-Plant method achieving the best results.

3.4. Analysis of the Parameter ω

In the constructed equation system, the number of reference point observation equations is far smaller than the number of tie point observation equations. Although we significantly reduced the number of tie point equations by adopting the tie point fitting strategy, tie points still far outnumber reference points. This imbalance could cause the obtained solution to focus more on reducing differences between image pairs than on ensuring the accuracy of the solution. Additionally, the tie point equations, being treated as relatively strong constraints in the optimization process, may render the equation system unsolvable, as it becomes difficult to find solutions that satisfy all the constraints simultaneously. Assigning greater weight to the reference point observations can alleviate this issue to some extent by reinforcing the impact of the reference points, which offer a baseline for the surrounding images. Figure 13 shows the trend of the RMSE as the parameter $\omega$ in Equation (15) increases. It is observed that the RMSE decreased as $\omega$ gradually rose; however, beyond a specific threshold, the RMSE tended to stabilize. This result indicates that increasing the weight of the reference points in the equation system indeed helps obtain more accurate results, although the improvement is not unlimited. Moreover, the correction effects varied across spectral bands. The weight $\omega$ of the reference points was set uniformly to 10, which had a large impact on the accuracy and improvement in the red and NIR channels, a valuable result, especially since the NIR channel is one of the most important channels for calculating the various VIs used in plant-based observations [9,16]. Future studies could further investigate the optimal weight settings within the equation system. Such studies have the potential to improve calibration accuracy and tailor adjustments to the specific characteristics of each spectral band.

4. Discussion

4.1. Comparison of Radiometric Correction Methods

In this study, as depicted in Figure 9, the ELM method was substantially affected by variable illumination, achieving the poorest radiometric accuracy and homogeneity of the orthomosaics. Conversely, both the optimized RBA and RBA-Plant methods achieved good performance in converting radiance images to reflectance under varying illumination conditions. The RBA-based approaches effectively mitigated the radiometric differences between neighboring image pairs, ultimately producing orthomosaics with high visual quality. Moreover, the quantitative evaluation results also demonstrated that the RBA-based methods are effective not only at enhancing the radiometric accuracy but also the homogeneity. Our result also demonstrated that placing multiple in-field reference panels provides baselines for optimization and prevents error accumulation. Most strikingly, the RBA-Plant method achieved the best performance in terms of both accuracy and uniformity. This result indicates that the strategy of filtering out irrelevant observations and maximizing the utilization of effective information is effective in improving the performance of RBA-based approaches. Employing tie points from vegetation areas (excluding other noise) to fit the regression line could obtain more relevant feature points representing the vegetation, thereby improving the accuracy of derived reflectance.
Using lighting sensors, such as the DLS, can be another efficient solution to mitigate the impact of variable illumination on multispectral imagery [29]. As shown in Figure 8, sensor-based approaches effectively improved the uniformity of the generated orthomosaics. Moreover, capturing reference images at flight altitude, which takes atmospheric effects into account, greatly helps improve radiometric accuracy [19]. Compared with sensor-based approaches, RBA-based methods offer a cost advantage, as the price of panels is far lower than that of lighting sensors. In situations where variable illumination is unavoidable and an irradiance sensor is unavailable, strategically placing several sets of reference panels within the field offers an alternative way to ensure accurate and reliable multispectral data collection [30]. Furthermore, a drawback of sensor-based methods is their susceptibility to inaccurate irradiance measurements, often caused by rapidly changing solar-UAV angles due to UAV vibration [17]. Conversely, RBA-based methods require no additional sensors, which avoids this issue [14]. Nonetheless, sensor-based methods do benefit from flexible deployment, enabling calibration without extensive setup. The disadvantage of RBA-based methods is that they rely on the frequent observation of reference panels as a baseline. There are circumstances in which it may be unfeasible for personnel to access certain locations to place panels. Additionally, placing the reference panels entails a considerable workload, especially in a very large field.
In comparison with other image-based methods for UAV multispectral imagery [11,15,22,23,31,32], the optimized RBA method not only achieves accurate reflectance transformation for large numbers of multispectral images but also improves solution efficiency by significantly reducing the number of tie point equations. The approach for minimizing the number of tie point equations was adapted from the study by [22]; our research optimizes and extends it to process large sets of drone multispectral images, with a particular focus on agricultural crop monitoring applications. Qin et al. (2022) [15] proposed a novel image-based illumination estimation and compensation method for aerial multispectral imagery based on the physical imaging principle; the strength of our proposed method over such approaches is its significant reduction in computational complexity and memory consumption. Peng et al. (2023) [31] integrated vignetting correction into RBA methods and achieved commendable results, but their approach required the manual extraction of tie points, a process that is labor-intensive and lacks automation. In contrast, our method extracts tie points automatically, enhancing efficiency and significantly reducing manual intervention. In conclusion, our optimized RBA-based methods effectively perform robust transformations of UAV radiance images collected under varying illumination conditions while requiring fewer computational resources and less manual intervention.
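The tie-point reduction idea, fitting a line to the matched radiances of an image pair and keeping only its two extreme points (as in Figure 7), might be sketched as follows. The helper name, the residual-based outlier rejection, and all parameter values are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def reduce_tie_point_pairs(rad_a, rad_b, n_sigma=2.0):
    """Collapse many matched tie-point radiances between two overlapping images
    into two representative pairs: fit a line rad_b vs. rad_a, reject outliers,
    refit, and keep only the line's endpoints (hypothetical helper)."""
    rad_a = np.asarray(rad_a, float)
    rad_b = np.asarray(rad_b, float)
    # Initial fit and crude outlier rejection on residuals.
    slope, icpt = np.polyfit(rad_a, rad_b, 1)
    resid = rad_b - (slope * rad_a + icpt)
    keep = np.abs(resid) < n_sigma * resid.std() + 1e-12
    # Refit on inliers; represent the pair by the two extreme points on the line.
    slope, icpt = np.polyfit(rad_a[keep], rad_b[keep], 1)
    lo, hi = rad_a[keep].min(), rad_a[keep].max()
    return [(lo, slope * lo + icpt), (hi, slope * hi + icpt)]

# Demo: image B records ~80% of image A's radiance, plus one gross mismatch.
a = np.array([0.10, 0.20, 0.30, 0.40, 0.50, 0.60, 0.25])
b = 0.8 * a + 0.05
b[-1] = 0.90  # outlier, e.g. a moving shadow edge
print(reduce_tie_point_pairs(a, b))  # ~ [(0.1, 0.13), (0.6, 0.53)]
```

Each overlapping image pair then contributes only two equations to the adjustment instead of hundreds, which is the source of the computational savings discussed above.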

4.2. Parameter Settings

Three main factors affect the performance of the RBA-based algorithm: the overlapping rate, the number of reference panels, and the weight ω for the reference point equations. The overlapping rate plays a pivotal role in determining the number of tie points between a given image and its consecutive images. An insufficient overlap rate might lead to inadequate observation of the region of interest, decreasing the algorithm's performance [33]. Conversely, an excessive overlap rate produces redundant observations, making data processing and analysis cumbersome and time-consuming. Peng et al. (2023) found that the radiometric accuracy achieved by the RBA method can be improved to some extent by increasing the number of tie points [31], but this improvement is quite limited. Moreover, a high overlapping rate requires the UAV to fly longer paths to cover the same area, which consumes more battery power and can reduce operational efficiency. In our study, the strategy of using a robust line of tie points [22] also achieved good radiometric conversion accuracy, indicating that the number of tie points has a limited impact on the performance of RBA-based methods; nevertheless, selecting enough tie points to fit a correct line remains essential. Therefore, once adequate observation is ensured, further increasing the overlapping rate brings little gain in the radiometric conversion accuracy of RBA-based methods, whereas keeping it moderate improves the UAV's operational efficiency.
In-flight reference panel observations provide calibration baselines and help improve the radiometric accuracy of RBA-based methods. Peng et al. (2023) demonstrated that increasing the number of reference panel observations improves the accuracy of the output [31]. In our study, reference points indeed mitigated error propagation and accumulation in the adjustment: as can be seen from Figure 8, the farther away from the reference points, the more pronounced the cumulative errors. In practice, however, placing numerous sets of panels within the field is usually labor-intensive and inconvenient, and in certain scenarios, such as dense forest applications, it can be unfeasible for UAVs to capture images of reference panels at all. Therefore, further work is needed to investigate how the placement of panels along the UAV route affects the performance of RBA algorithms. Moreover, it is noteworthy from Table 2 that, for the Altum camera, the accuracy improvement in the NIR and red edge bands is more obvious than that in the visible bands. A potential reason is that reflectance in the NIR and red edge is more sensitive to the atmospheric water vapor content affected by clouds, so the reference panels provide a better correction baseline in these bands.
Finally, the setting of the weight parameter ω for reference point observations is also a crucial factor. Increasing ω amplifies the influence of the reference point equations within the system of equations, thereby affecting the performance of the algorithm. Figure 13 illustrates that, in our work, increasing ω enhanced output accuracy, although the gain from assigning higher weights was relatively limited.
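As a rough illustration of how ω enters the adjustment, consider a simplified one-parameter model (reflectance = a_i × radiance per image i) in which the reference-panel rows of a least-squares system are scaled by ω. This is an assumption-laden toy, not the paper's full formulation; the function name and equation encodings are illustrative only.

```python
import numpy as np

def solve_rba_slopes(tie_eqs, ref_eqs, n_images, omega=10.0):
    """Solve per-image slopes a_i in a simplified one-parameter RBA model.
    Reference-panel rows are scaled by omega, so a larger omega pulls the
    solution toward the panel observations.

    tie_eqs: (i, j, L_i, L_j) tuples encoding a_i*L_i - a_j*L_j = 0
    ref_eqs: (i, L_panel, rho) tuples encoding a_i*L_panel = rho
    """
    rows, rhs = [], []
    for i, j, li, lj in tie_eqs:            # tie-point equations, weight 1
        row = np.zeros(n_images)
        row[i], row[j] = li, -lj
        rows.append(row); rhs.append(0.0)
    for i, lp, rho in ref_eqs:              # reference equations, weight omega
        row = np.zeros(n_images)
        row[i] = omega * lp
        rows.append(row); rhs.append(omega * rho)
    slopes, *_ = np.linalg.lstsq(np.vstack(rows), np.array(rhs), rcond=None)
    return slopes

# Two images of the same surfaces under halved irradiance; true slopes 2 and 4.
ties = [(0, 1, 0.10, 0.05), (0, 1, 0.20, 0.10)]
refs = [(0, 0.25, 0.50)]   # panel of reflectance 0.5 seen in image 0
print(solve_rba_slopes(ties, refs, n_images=2))  # ~ [2. 4.]
```

With consistent equations the weight is immaterial, but when tie-point and panel observations conflict, raising ω shifts the compromise toward the panels, which is the behavior explored in Figure 13.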
It is important to note that the algorithm's performance is influenced by these three factors in combination, and this interaction has not been thoroughly explored. Future research is needed to fully evaluate how these factors affect the performance of the RBA algorithm and to identify optimal parameters for UAV investigations.

4.3. Limitations and Prospects

Despite the advancements achieved by the RBA and RBA-Plant methods, they have certain inherent limitations. First, the study was conducted in only one sample area and solely in a potato plantation field, which is its weakest aspect. This limited geographical and crop-specific scope may not fully represent the diverse conditions under which UAV remote sensing is applied, and variability in crop types and soil conditions may affect the generalizability of the results. Further studies are therefore needed to validate the effectiveness of these methods across different agricultural settings, such as orchards and other types of crops. Another issue is the omission of the bidirectional reflectance distribution function (BRDF) effect in this study. The BRDF describes how reflectance factors vary with viewing geometry [34]; for the same tie point observed in two images, differences in viewing geometry can produce discrepancies in the measured reflectance values, affecting the accuracy of radiometric calibration. In this study, we assumed that at small view angles the BRDF effects were limited, so the BRDF model could be ignored. This assumption simplified the optimization process but can lead to inaccuracies in reflectance measurement [14]. Future studies may benefit from integrating BRDF models [34] that address these angular dependencies to enhance the accuracy of the derived reflectance [18]. A further limitation is the tendency of the solved slopes to drift toward zero between observations of the reference panels, as shown in Figure 8. A potential reason for this phenomenon is that the tie point formulations impose relatively strong constraints in the optimization process, making it difficult to find an optimal solution.
Moreover, the number and distribution of reference points also affect this issue, as a limited number of reference points may not provide enough baselines to accurately constrain the solution. Future work is required to explore this further.
In this study, potato plants were the main ground object; they are characterized by low reflectance in the blue and red channels and high reflectance in the green, NIR, and red edge bands [6]. Low reflectance in those wavebands potentially leads to a low signal-to-noise ratio, making it difficult for cameras to collect useful information and consequently degrading the performance of the algorithm. Rapidly changing illumination, with its fluctuations between strong and weak irradiance, makes this problem worse. As a result, the improvement in the algorithm's performance within those low-reflectance wavebands was not as obvious as that observed within the NIR waveband, as shown in Table 2. To improve camera sensitivity and obtain higher signal-to-noise ratios under low and variable lighting conditions, Wang et al. (2019) conducted a radiometric pixel-wise calibration of a mini-MCA6 multispectral camera, employing individual integration-time settings for each channel rather than uniform settings [11]. In contrast, in our study, the Altum camera was configured with uniform settings across its four channels. Identifying optimal sensor settings for individual detectors could therefore be explored in the future to improve the performance of the algorithm. Moreover, Peng et al. (2023) investigated the impact of vignetting correction on the RBA method and achieved improvements in accuracy and homogeneity by integrating vignetting correction into the RBA model [31]. In this study, other photo parameter corrections, such as geometric correction, were not integrated into the RBA models. This omission may limit the overall performance and accuracy of the RBA and RBA-Plant methods, as geometric distortions and other uncorrected photo parameters can introduce errors into the reflectance measurements.
Future research should consider incorporating these additional corrections to further enhance the robustness and reliability of the radiometric calibration process.

5. Conclusions

In this paper, building on the idea of reducing the number of tie points proposed by [22], we optimized the lightweight radiometric block adjustment method to handle large numbers of multispectral images by assigning greater weights to the reference point equations. Reducing the number of tie points effectively simplifies the computational process, while increasing the weights of the reference point equations enhances the radiometric accuracy of reflectance derivation. Moreover, using multiple ground-based reference panels as a baseline helps mitigate error propagation and accumulation in the adjustment. Additionally, we demonstrated the effectiveness of extracting tie points solely from the area of interest in the RBA method; this RBA-Plant method further enhances performance in both radiometric accuracy and homogeneity. Finally, our findings indicate that increasing the key parameter ω, i.e., putting more weight on the observations of the reference panels, led to improved accuracy.

Author Contributions

Y.W.: Conceptualization, Methodology, Software, Investigation, Data Analysis, Writing—Original draft; G.K.: Conceptualization, Writing—Review and Editing, Supervision, Funding acquisition; Z.Y.: Conceptualization, Writing—Review and Editing, Supervision, Funding acquisition; H.A.K.: Writing—Review and Editing, Supervision. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key R&D Program of China (No. 2022YFD2002300). This research was supported by the Agricultural Green Development (AGD) project, and it was also funded by the China Scholarship Council (No. 201913043), Wageningen University & Research, China Agricultural University, and Hainan University.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author at [email protected]. The data are not publicly available due to privacy restrictions.

Acknowledgments

We thank the Agricultural Biosystems Engineering Group for their assistance.

Conflicts of Interest

We declare that there are no financial or personal relationships that have inappropriately influenced the work reported in this paper.

References

  1. Aasen, H.; Honkavaara, E.; Lucieer, A.; Zarco-Tejada, P. Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows. Remote Sens. 2018, 10, 1091. [Google Scholar] [CrossRef]
  2. Weiss, M.; Jacob, F.; Duveiller, G. Remote sensing for agricultural applications: A meta-review. Remote Sens. Environ. 2020, 236, 111402. [Google Scholar] [CrossRef]
  3. Nex, F.; Armenakis, C.; Cramer, M.; Cucci, D.; Gerke, M.; Honkavaara, E.; Kukko, A.; Persello, C.; Skaloud, J. UAV in the advent of the twenties: Where we stand and what is next. ISPRS J. Photogramm. Remote Sens. 2022, 184, 215–242. [Google Scholar] [CrossRef]
  4. Zhu, W.; Sun, Z.; Yang, T.; Li, J.; Peng, J.; Zhu, K.; Li, S.; Gong, H.; Lyu, Y.; Li, B.; et al. Estimating leaf chlorophyll content of crops via optimal unmanned aerial vehicle hyperspectral data at multi-scales. Comput. Electron. Agric. 2020, 178, 105786. [Google Scholar] [CrossRef]
  5. Nebiker, S.; Lack, N.; Abächerli, M.; Läderach, S. Light-Weight Multispectral UAV Sensors and Their Capabilities for Predicting Grain Yield and Detecting Plant Diseases. ISPRS-Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLI-B1, 963–970. [Google Scholar] [CrossRef]
  6. Van Iersel, W.; Straatsma, M.; Addink, E.; Middelkoop, H. Monitoring height and greenness of non-woody floodplain vegetation with UAV time series. ISPRS J. Photogramm. Remote Sens. 2018, 141, 112–123. [Google Scholar] [CrossRef]
  7. Wang, Y.; Yang, Z.; Kootstra, G.; Khan, H.A. The impact of variable illumination on vegetation indices and evaluation of illumination correction methods on chlorophyll content estimation using UAV imagery. Plant Methods 2023, 19, 51. [Google Scholar] [CrossRef]
  8. Arroyo-Mora, J.P.; Kalacska, M.; Løke, T.; Schläpfer, D.; Coops, N.C.; Lucanus, O.; Leblanc, G. Assessing the impact of illumination on UAV pushbroom hyperspectral imagery collected under various cloud cover conditions. Remote Sens. Environ. 2021, 258, 112396. [Google Scholar] [CrossRef]
  9. Li, X.; Tupayachi, J.; Sharmin, A.; Martinez Ferguson, M. Drone-Aided Delivery Methods, Challenge, and the Future: A Methodological Review. Drones 2023, 7, 191. [Google Scholar] [CrossRef]
  10. Smith, G.M.; Milton, E.J. The use of the empirical line method to calibrate remotely sensed data to reflectance. Int. J. Remote Sens. 1999, 20, 2653–2662. [Google Scholar] [CrossRef]
  11. Wang, S.; Baum, A.; Zarco-Tejada, P.J.; Dam-Hansen, C.; Thorseth, A.; Bauer-Gottwein, P.; Bandini, F.; Garcia, M. Unmanned Aerial System multispectral mapping for low and variable solar irradiance conditions: Potential of tensor decomposition. ISPRS J. Photogramm. Remote Sens. 2019, 155, 58–71. [Google Scholar] [CrossRef]
  12. Zhu, H.; Huang, Y.; An, Z.; Zhang, H.; Han, Y.; Zhao, Z.; Li, F.; Zhang, C.; Hou, C. Assessing radiometric calibration methods for multispectral UAV imagery and the influence of illumination, flight altitude and flight time on reflectance, vegetation index and inversion of winter wheat AGB and LAI. Comput. Electron. Agric. 2024, 219, 108821. [Google Scholar] [CrossRef]
  13. Wendel, A.; Underwood, J. Illumination compensation in ground based hyperspectral imaging. ISPRS J. Photogramm. Remote Sens. 2017, 129, 162–178. [Google Scholar] [CrossRef]
  14. Honkavaara, E.; Khoramshahi, E. Radiometric Correction of Close-Range Spectral Image Blocks Captured Using an Unmanned Aerial Vehicle with a Radiometric Block Adjustment. Remote Sens. 2018, 10, 256. [Google Scholar] [CrossRef]
  15. Qin, Z.; Li, X.; Gu, Y. An Illumination Estimation and Compensation Method for Radiometric Correction of UAV Multispectral Images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5545012. [Google Scholar] [CrossRef]
  16. Xue, B.; Ming, B.; Xin, J.; Yang, H.; Gao, S.; Guo, H.; Feng, D.; Nie, C.; Wang, K.; Li, S. Radiometric Correction of Multispectral Field Images Captured under Changing Ambient Light Conditions and Applications in Crop Monitoring. Drones 2023, 7, 223. [Google Scholar] [CrossRef]
  17. Olsson, P.O.; Vivekar, A.; Adler, K.; Garcia Millan, V.E.; Koc, A.; Alamrani, M.; Eklundh, L. Radiometric Correction of Multispectral UAS Images: Evaluating the Accuracy of the Parrot Sequoia Camera and Sunshine Sensor. Remote Sens. 2021, 13, 577. [Google Scholar] [CrossRef]
  18. Suomalainen, J.; Oliveira, R.A.; Hakala, T.; Koivumäki, N.; Markelin, L.; Näsi, R.; Honkavaara, E. Direct reflectance transformation methodology for drone-based hyperspectral imaging. Remote Sens. Environ. 2021, 266, 112691. [Google Scholar] [CrossRef]
  19. Cao, S.; Danielson, B.; Clare, S.; Koenig, S.; Campos-Vargas, C.; Sanchez-Azofeifa, A. Radiometric calibration assessments for UAS-borne multispectral cameras: Laboratory and field protocols. ISPRS J. Photogramm. Remote Sens. 2019, 149, 132–145. [Google Scholar] [CrossRef]
  20. Honkavaara, E.; Saari, H.; Kaivosoja, J.; Pölönen, I.; Hakala, T.; Litkey, P.; Mäkynen, J.; Pesonen, L. Processing and Assessment of Spectrometric, Stereoscopic Imagery Collected Using a Lightweight UAV Spectral Camera for Precision Agriculture. Remote Sens. 2013, 5, 5006–5039. [Google Scholar] [CrossRef]
  21. Fawcett, D.; Panigada, C.; Tagliabue, G.; Boschetti, M.; Celesti, M.; Evdokimov, A.; Biriukova, K.; Colombo, R.; Miglietta, F.; Rascher, U.; et al. Multi-Scale Evaluation of Drone-Based Multispectral Surface Reflectance and Vegetation Indices in Operational Conditions. Remote Sens. 2020, 12, 514. [Google Scholar] [CrossRef]
  22. Kizel, F.; Benediktsson, J.A.; Bruzzone, L.; Pedersen, G.B.M.; Vilmundardottir, O.K.; Falco, N. Simultaneous and Constrained Calibration of Multiple Hyperspectral Images Through a New Generalized Empirical Line Model. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 2047–2058. [Google Scholar] [CrossRef]
  23. Honkavaara, E.; Hakala, T.; Markelin, L.; Rosnell, T.; Saari, H.; Mäkynen, J. A Process for Radiometric Correction of UAV Image Blocks. Photogramm.-Fernerkund.-Geoinf. 2012, 2012, 115–127. [Google Scholar] [CrossRef] [PubMed]
  24. Shin, J.I.; Cho, Y.M.; Lim, P.C.; Lee, H.M.; Ahn, H.Y.; Park, C.W.; Kim, T. Relative Radiometric Calibration Using Tie Points and Optimal Path Selection for UAV Images. Remote Sens. 2020, 12, 1726. [Google Scholar] [CrossRef]
  25. MicaSense. Alignment.ipynb. 2023. Available online: https://github.com/micasense/imageprocessing/blob/master/Alignment.ipynb (accessed on 22 April 2023).
  26. Jiang, Z.; Huete, A.R.; Chen, J.; Chen, Y.; Li, J.; Yan, G.; Zhang, X. Analysis of NDVI and scaled difference vegetation index retrievals of vegetation fraction. Remote Sens. Environ. 2006, 101, 366–378. [Google Scholar] [CrossRef]
  27. MindOpt Studio. Available online: https://opt.aliyun.com (accessed on 18 June 2024).
  28. Micasense. Available online: http://www.micasense.com (accessed on 17 June 2024).
  29. Mamaghani, B.; Salvaggio, C. Multispectral Sensor Calibration and Characterization for sUAS Remote Sensing. Sensors 2019, 19, 4453. [Google Scholar] [CrossRef] [PubMed]
  30. Guo, Y.; Senthilnath, J.; Wu, W.; Zhang, X.; Zeng, Z.; Huang, H. Radiometric Calibration for Multispectral Camera of Different Imaging Conditions Mounted on a UAV Platform. Sustainability 2019, 11, 978. [Google Scholar] [CrossRef]
  31. Peng, W.; Gong, Y.; Fang, S.; Zhang, Y.; Dash, J.; Ren, J.; Mo, J. A Radiometric Block Adjustment Method for Unmanned Aerial Vehicle Images Considering the Image Vignetting. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5402514. [Google Scholar] [CrossRef]
  32. Liu, Y.; Yan, Z.; Tan, J.; Li, Y. Multi-Purpose Oriented Single Nighttime Image Haze Removal Based on Unified Variational Retinex Model. IEEE Trans. Circuits Syst. Video Technol. 2023, 33, 1643–1657. [Google Scholar] [CrossRef]
  33. Xing, C.; Wang, J.; Xu, Y. Overlap Analysis of the Images from Unmanned Aerial Vehicles. In Proceedings of the 2010 International Conference on Electrical and Control Engineering, Wuhan, China, 25–27 June 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 1459–1462. [Google Scholar] [CrossRef]
  34. Qin, Z.; Li, X.; Gu, Y. Hemisphere Harmonics Basis: A Universal Approach to Remote Sensing BRDF Approximation. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–12. [Google Scholar] [CrossRef]
Figure 1. The study area is located in Wageningen, Gelderland province, the Netherlands. The right figure shows the experimental setup, where the red dots represent ground control points (GCPs), and each single yellow five-pointed star denotes a set of reference panels. The white rectangle border indicates the range of the potato monoculture field, and the light blue one represents the potato and grass stripcropping field.
Figure 2. The mean reflectance values of four sets of self-made reference panels used in this experiment.
Figure 3. Variability of solar irradiance at 560 nm (green channel) during UAV data collection under dynamic cloud conditions, as observed on 14 June, between 11:20 and 12:00.
Figure 4. Workflow for the proposed radiometric block adjustment method.
Figure 5. Flowchart for selecting tie points located in the vegetation area.
Figure 6. Flowchart for identifying tie points located in vegetation areas. (a) Tie points, denoted by the red dots, are located on the example image in the green channel. (b) Histogram of the calculated NDVI map for the corresponding image, and the red line indicates the segmentation threshold between vegetation and non-vegetation. (c) NDVI filtered to highlight vegetation areas, overlaid on the RGB base layer. (d) Tie points that are exclusively located in vegetation regions of the example image.
Figure 7. Conceptual framework for reducing the number of tie point equations. (a) displays the result of tie point extraction on the example image pair using the Metashape Python package. (b) is a 2D scatter plot of radiance values on matching tie points between pairs of overlapping images, indicated by blue circles. Green dots highlight the points selected after outlier removal, which are utilized for regression analysis. The fitted regression line is shown in blue, with the maximum and minimum points marked in red on this line chosen to construct the tie points equations. The black line denotes the 1:1 line.
Figure 8. The changing trend between the slopes obtained for each image in the dataset and the DLS-recorded corresponding irradiance. The green line denotes the variation in slopes derived for each image, while the blue dashed line represents the change in irradiance. The yellow dots highlight images that capture the reference panels, with their slopes calculated based on observations from these reference panels.
Figure 9. Overview of all the reflectance orthomosaics for each method—ELM, DLS-CRP, DLS-LCRP, the optimized RBA, and RBA-Plant—across the green, red, red edge, and NIR bands. The bottom layer displays the false-color composite image (color infrared or CIR).
Figure 10. The result of reflectance conversion for tie points in side overlapping regions between two example image pairs in the green channel under different illumination conditions. The points indicate the respective reflectance values of tie points in two images. The ellipses show the 95% confidence ellipses.
Figure 11. The trend of coefficient of variation (CV%) of the orthomosaic-extracted reflectance for sampling plants within the potato monoculture field.
Figure 12. The trend of coefficient of variation (CV%) of the orthomosaic-extracted reflectance for sampling plants within the potato stripcropping field.
Figure 13. The trend of RMSE with the increase in the parameter ω for each channel.
Table 1. The result of NRMSD (%) between reflectance values of tie points for pairs of images with side overlapping areas.

| NRMSD    | ELM   | DLS-CRP | DLS-LCRP | RBA  | RBA-Plant |
|----------|-------|---------|----------|------|-----------|
| Green    | 10.21 | 10.70   | 10.93    | 6.60 | 5.59      |
| Red      | 8.23  | 7.57    | 8.10     | 5.77 | 4.47      |
| NIR      | 14.95 | 12.69   | 12.97    | 6.90 | 5.72      |
| Red Edge | 13.85 | 13.11   | 13.33    | 7.59 | 6.55      |
| Average  | 11.81 | 11.02   | 11.33    | 6.72 | 5.58      |
Table 2. Result of the RMSE between ground-measured reflectance and corrected orthomosaic-extracted reflectance in the monoculture and stripcropping fields, respectively.

| Field               | Band     | ELM   | DLS-CRP | DLS-LCRP | RBA   | RBA-Plant |
|---------------------|----------|-------|---------|----------|-------|-----------|
| Monoculture Field   | Green    | 0.021 | 0.048   | 0.020    | 0.016 | 0.014     |
| Monoculture Field   | Red      | 0.021 | 0.023   | 0.015    | 0.028 | 0.028     |
| Monoculture Field   | NIR      | 0.235 | 0.158   | 0.095    | 0.116 | 0.088     |
| Monoculture Field   | Red Edge | 0.059 | 0.084   | 0.032    | 0.027 | 0.025     |
| Monoculture Field   | Mean     | 0.084 | 0.078   | 0.041    | 0.047 | 0.039     |
| Stripcropping Field | Green    | 0.030 | 0.043   | 0.015    | 0.013 | 0.011     |
| Stripcropping Field | Red      | 0.017 | 0.024   | 0.015    | 0.024 | 0.019     |
| Stripcropping Field | NIR      | 0.311 | 0.147   | 0.099    | 0.174 | 0.126     |
| Stripcropping Field | Red Edge | 0.091 | 0.075   | 0.031    | 0.055 | 0.054     |
| Stripcropping Field | Mean     | 0.112 | 0.072   | 0.040    | 0.066 | 0.052     |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Wang, Y.; Yang, Z.; Khan, H.A.; Kootstra, G. Improving Radiometric Block Adjustment for UAV Multispectral Imagery under Variable Illumination Conditions. Remote Sens. 2024, 16, 3019. https://doi.org/10.3390/rs16163019

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
