Commission I
KEY WORDS: UV Camera, Corona Camera, Discharge Monitoring, Power Line, Direct Referencing, Calibration, Co-registration
ABSTRACT:
Power lines are an important infrastructure whose functionality is essential for our daily life, so they need special attention. Monitoring them is important to guarantee their sustainable operation and is mostly done by aerial survey using different technologies. In addition to standard setups such as LiDAR and high-resolution RGB cameras, UV cameras, so-called Corona cameras, are often used to detect unwanted discharges. In many cases these UV cameras are combined with RGB cameras of a similar field of view in order to superimpose the detected UV signals onto a visual image of the inspected objects. Discharges at power lines indicate findings that should then be examined with additional sensors such as thermal or high-resolution mapping cameras.
The use of Corona cameras for such inspection work requires several steps of calibration, co-registration and direct referencing in order to bring them into good georeference with the other sensors.
1. INTRODUCTION

Power lines belong to the most important infrastructure, especially in the industrialized urban world. Our daily life highly depends on a sustainable electric power supply. Monitoring power lines to guarantee their reliable operation is a legal obligation in many countries and/or part of homeland security. Big efforts are put into overhead line inspections to prevent issues on the infrastructure itself or from vegetation that grows too close.
Every year, thousands of kilometres are monitored, mainly with helicopters, either by manual inspection with observers and/or combined with sensors that capture data for documentation and for analytics on the captured datasets.
A typical setup is the use of LiDAR, which maps the power line well and detects vegetation issues through first- and last-pulse analytics. In combination, aerial cameras are used for detailed imaging with resolutions better than 1 cm.
Increasingly complex systems are used for automated issue detection, for example in the "SIEAERO" project of Siemens in Germany. Many sensors deliver huge datasets which enter automated feature and issue detection algorithms using AI techniques.
Beside LiDAR and high-resolution imaging, specific sensors are used for detecting anomalies. Beside thermal cameras that capture images of hotspots, the use of corona detectors has become important. Discharges in the UV band indicate issues on the infrastructure. Their detection guides the automated algorithms in analyzing, on the basis of the other data, what the problem finally is.

2.1 The cameras and their features

Detecting discharges requires a UV-sensitive camera with lenses consisting of UV-permeable optical components, combined with a daylight blocking filter with a narrow bandwidth of 265 nm ± 15 nm. Daylight blocking is important so that natural UV radiation does not interfere with the detection of unwanted discharges. The transmitted radiation must be amplified by a multi-stage UV-sensitive amplifier and projected onto a phosphorescent material, which is connected to a CMOS sensor via fiber-optic components.
The advantage of such a construction is that any kind of industrial camera can be placed behind the phosphorescent plane to capture the emitted photons. Because of the small spectrum, a monochrome sensor is sufficient; it is not necessary to put a sensor with a Bayer pattern into the UV sensing system. So the optimal way is to use an achromatic sensor to capture the intensity of the multi-stage amplification. Corona discharges appear as blobs of white pixels on otherwise black images. To identify the location of detected discharges, a second camera working in the visual spectrum is used in parallel with the UV-sensitive camera. To get the most information out of the VIS camera, an RGB Bayer-pattern camera with a relatively high resolution and high frame rate is preferred.
To achieve the best result, two almost identical industrial cameras were chosen for the task. This not only provides the best PTP synchronization compatibility, but also ensures that both sensors have the same size. Using similar camera lenses, the fields of view of both cameras are theoretically almost identical, i.e. the visual camera covers the same region of interest as the UV-sensitive camera. One major difficulty is compensating the intensity losses caused by the daylight filter of the UV camera: because of its structure, the outer areas of the frame hit the filter at a wider angle, which causes higher absorption. Also, diffraction artifacts caused by the grid structure of the filter can cause problems at the border of the image.
2.2 Camera synchronization

Discharges are detected as single dots in an 8-bit monochrome video stream with up to 50 frames per second, in order to capture discharges that can be initiated by the AC frequency of the power lines. A basic issue was to synchronize both cameras, UV and visual, to an exact GPS time tag. This was managed by synchronizing both cameras with the Precision Time Protocol (PTP) and a GNSS-based PTP time server (Eidson, 2006). The time of all devices is synchronized to the GPS time provided by the time server. Logging the data with the exact timestamp is the basis for direct referencing to a GNSS-INS trajectory that is post-processed after the mission.

Figure 1. PTP synchronization scheme

Each frame capture is triggered by a pre-calculated timestamp of the internal clock, which is periodically synchronized to the GPS time in order to correct the drift of the internal clocks. Because the cameras are triggered at a specific GPS timestamp and the flight data is also referenced to this time, both data streams can be interpolated afterwards in order to enable photogrammetric processing.
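Since every frame is triggered at a known GPS timestamp, pairing frames with the post-processed trajectory reduces to interpolation along the common time axis. Below is a minimal sketch of that step, assuming a 200 Hz trajectory and 50 fps triggers on the same GPS time base; all values are placeholders, not flight data:

import numpy as np

# Placeholder GNSS-INS trajectory: epochs (GPS seconds) and positions.
traj_t = np.arange(0.0, 10.0, 0.005)        # 200 Hz trajectory epochs [s]
traj_xyz = np.cumsum(np.full((traj_t.size, 3), 0.01), axis=0)

# PTP-triggered camera timestamps at 50 fps on the same GPS time base.
frame_t = np.arange(0.0, 10.0, 0.02)

# Linear interpolation of each coordinate at the frame trigger times.
frame_xyz = np.column_stack(
    [np.interp(frame_t, traj_t, traj_xyz[:, i]) for i in range(3)]
)
print(frame_xyz.shape)  # (500, 3): one projection centre per frame

Attitude angles would require an interpolation that respects rotations (e.g. quaternion slerp) rather than the per-axis linear scheme shown here for positions.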
2.3 Capturing Images

The basic idea of the UV and RGB camera combination is to overlay detected UV radiation on the RGB image data. While the RGB camera shows frames of the real scenery, the discharges create only single-photon point clouds visible as a black-and-white signal. The overlay of these points on the RGB image enables real-time monitoring and makes it possible to detect and visually inspect the place of discharge, e.g. insulators, cable clamps etc.
To get the best result, a trade-off between the highest possible camera resolution and a suitable frame rate must be made. Limiting factors are the operating system where the image processing takes place on the one hand and the maximum available network bandwidth on the other. The result is intended to be a lightweight system that can be operated on UAVs, but both the computing unit and the network solution will grow with increasing processing performance. To reach the target of 50 frames per second, a reduction of the data flow on the network is needed. Because the multi-stage amplifier limits the optical resolution of the UV camera, the sensor's pixel resolution can be reduced by a procedure called binning or decimation. "Decimation is used primarily for the reduction in the number of pixels and the amount of data while retaining the original image area angle and image brightness" (Allied Vision, 2021). This process can be applied to both the horizontal and the vertical resolution. In this case, a vertical and horizontal decimation of 2 (skipping every second pixel) gives an improvement of 37.5% compared to both cameras running at full resolution:

data_nobinning = width · height · 8 bit        (1)

data_binning = (width / 2) · (height / 2) · 8 bit        (2)

data_saved = (data_nobinning − data_binning) / (2 · data_nobinning) = 37.5%        (3)

As shown in Figure 3, at a sensor resolution of 1936 × 1216, frame rates up to 40 fps are possible with 1 GigE bandwidth. To further improve this performance up to 50 fps, either a 2.5 or 10 GigE capable system can be installed, or height and width can be reduced via the ROI (Region of Interest) feature. Another solution might be applying decimation to the RGB camera too, but because this leads to information losses in the real visual picture, it is not recommended.
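Equations (1)-(3) can be checked numerically. The following sketch assumes the 1936 × 1216 sensors mentioned above and raw 8-bit transport for both streams (the Bayer-pattern RGB stream counted at 8 bit per pixel before debayering); it is an illustration, not part of the capture software:

width, height, fps = 1936, 1216, 50

data_full = width * height * 8                 # bit per frame, Eq. (1)
data_dec = (width // 2) * (height // 2) * 8    # bit per frame, Eq. (2)

# Share of data saved relative to both cameras at full resolution, Eq. (3).
saved = (data_full - data_dec) / (2 * data_full)
print(f"saved: {saved:.1%}")                   # -> 37.5%

# Resulting network payload with the UV stream decimated, in Gbit/s.
load = (data_full + data_dec) * fps / 1e9
print(f"load at {fps} fps: {load:.2f} Gbit/s")

The resulting payload of roughly 1.18 Gbit/s at 50 fps illustrates why a plain 1 GigE link tops out below the target frame rate and why ROI reduction or faster links are suggested.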
2.4 Image overlay

After capturing images with both cameras, the data needs to be combined in case of a detected discharge. For this purpose, an RGB- to UV-image calibration or co-registration has been performed. For this step, 10 special electronic targets have been developed that emit UV light in the needed narrow UV bandwidth. Placing them on a facade, determining their relative coordinates with a total station, and capturing images from different perspectives with both cameras enables a relative orientation of the UV data using a transformation into the RGB image.
For this task, several options are available. Not only linear geometric transformations like the affine or perspective transformation, but also polynomial transformations must be taken into consideration. This is due to several sources of error such as misalignment, parallax errors, or lens distortion of the camera optics. Depending on the impact of these errors, a linear transformation may be insufficient. However, the increased processing power needed for image transformations of higher order must also be kept in mind. The challenge consists of finding the most efficient way. Linear transformations can be described by small transformation matrices (Jähne, 2005):
Affine transformation (6 degrees of freedom):

[x']   [a00  a01  tx]   [x]
[y'] = [a10  a11  ty] · [y]        (4)
[1 ]   [  0    0   1]   [1]

where a00 … a11 include scale, dilation and shear, and tx and ty describe the translation.

Perspective transformation (8 degrees of freedom):

[w·x']   [a00  a01  a02]   [x]
[w·y'] = [a10  a11  a12] · [y]        (5)
[  w ]   [a20  a21    1]   [1]

The UV image does not contain the most precise and sharp information about the corona discharge. Therefore, the choice of the resampling interpolation method can be reduced to two candidates, since polynomial and higher-order interpolation methods would increase the arithmetic operations without a noticeable effect. Both interpolation principles, nearest-neighbor and bilinear, are represented in Figure 5.

Figure 5. Comparing nearest-neighbor and bilinear interpolation
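As an illustration of the perspective case, the following OpenCV sketch fits the matrix of Eq. (5) to hypothetical UV/RGB target correspondences and resamples the UV frame with either of the two interpolation methods; it is a simplified stand-in, not the calibration software used here:

import cv2
import numpy as np

# Hypothetical matched target positions (pixels) in the UV and RGB frames.
pts_uv = np.float32([[102, 88], [850, 95], [830, 610], [120, 590],
                     [480, 60], [470, 630], [90, 340], [860, 350]])
pts_rgb = np.float32([[110, 92], [858, 101], [841, 618], [127, 596],
                      [488, 66], [479, 637], [97, 345], [869, 357]])

# Least-squares estimate of the 3x3 matrix of Eq. (5); with exactly four
# points, cv2.getPerspectiveTransform could be used instead.
H, _ = cv2.findHomography(pts_uv, pts_rgb, method=0)

# Resample the monochrome UV frame into RGB pixel coordinates.
uv_img = np.zeros((720, 960), np.uint8)        # placeholder UV frame
warped = cv2.warpPerspective(uv_img, H, (960, 720),
                             flags=cv2.INTER_NEAREST)  # or cv2.INTER_LINEAR

# Overlay: copy the detected discharge pixels into the red channel.
rgb_img = np.zeros((720, 960, 3), np.uint8)    # placeholder RGB frame
rgb_img[..., 2] = np.maximum(rgb_img[..., 2], warped)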
2.5 Calibration Process and Resulting Image

There are several steps to be performed to successfully get the UV data overlaid on the RGB image. The first is to analyze the distortion of both lenses (Kraus, 1994). To calibrate the RGB camera, a line pattern with equal intervals can be captured. To get the most precise information, the captured image was evaluated with an automatic detection programmed with OpenCV. Thus, not only is time saved by the automated processing, the calibration points are also detected with subpixel precision by computing their center of mass (shown in Figure 6).
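A minimal OpenCV sketch of such a center-of-mass detection is given below; the threshold value and the input image are hypothetical, and the subpixel position comes from intensity-weighted image moments:

import cv2
import numpy as np

def blob_centers(gray, thresh=200):
    """Return subpixel centers of mass of bright calibration points."""
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        patch = gray[y:y + h, x:x + w].astype(np.float32)
        m = cv2.moments(patch)             # intensity-weighted moments
        if m["m00"] > 0:                   # center of mass of the patch
            centers.append((x + m["m10"] / m["m00"],
                            y + m["m01"] / m["m00"]))
    return centers

centers = blob_centers(np.zeros((480, 640), np.uint8))  # placeholder image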
For this task, a setup with a stepper motor can deliver high-precision linear movement. Based on the achieved data, an evaluation process was carried out to analyze which transformation results in a good error-to-performance ratio. For this evaluation, a UV LED and a bright white LED were placed on a circuit board as close together as possible. This construction can be moved through large sections of the entire image area. While doing so, the PTP-synchronized cameras capture images at a decent frame rate. For this, the exposure time needs to be short enough that motion blur does not disturb the measurement. In our software, two related incoming images captured at the same time can be processed. Not only does the UV image represent the position of the LEDs as a white filled circle, the RGB image also shows the same white dot due to the highly intense light. By automatically detecting the midpoints of both white pixel groups, the positions of the LEDs in both pictures can be stored. Note: bright reflected sunlight can be a major issue during this process. After doing this, for each captured image pair the positions were used to evaluate the error at the corresponding x and y position:
overlay_error = √((x_UV − x_RGB)² + (y_UV − y_RGB)²)        (6)
The error in relation to the position can be plotted for further inspection. Figure 11 shows the uncalibrated overlay results and those of the perspective transformation. The calibration images were taken at a distance of 4 m, which reduces but does not eliminate the parallax effect.
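For completeness, Eq. (6) evaluated over a batch of hypothetical LED position pairs:

import numpy as np

# Placeholder LED positions per image pair (pixels): UV vs. RGB detections.
xy_uv = np.array([[120.4, 88.2], [510.7, 90.1], [498.3, 410.9]])
xy_rgb = np.array([[121.1, 89.0], [512.0, 91.3], [499.9, 412.2]])

# Equation (6) for every pair at once, plus simple summary statistics.
overlay_error = np.linalg.norm(xy_uv - xy_rgb, axis=1)
print(f"mean: {overlay_error.mean():.2f} px, max: {overlay_error.max():.2f} px")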
Capturing and processing the frames as well as saving the results to an appropriate drive need to happen almost simultaneously to reach the requested frame rates. To fulfil this requirement, a multithreading environment is essential. Splitting the tasks into different sections and connecting them with generously sized data buffers leads to a dynamic system. On a powerful multi-core system, or even one with a graphics processing unit, the latency of the live output will be negligible. Less powerful systems such as modern, lightweight single-board computers can also handle this operation, although with a noticeable output delay. The concept is represented by the flow diagram in Figure 10. For further improvement of the performance on slower hardware, the concept can be slightly modified.
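A minimal sketch of this buffered pipeline with standard Python threads and queues is shown below; the three stage functions are placeholders, not the actual onboard software:

import queue
import threading
import time

def grab_frame():                 # placeholder for the camera capture stage
    time.sleep(0.02)              # ~50 fps
    return b"frame"

def overlay(frame):               # placeholder for the UV/RGB overlay stage
    return frame

def write_to_disk(frame):         # placeholder for the storage stage
    pass

raw_q = queue.Queue(maxsize=64)   # capture -> processing buffer
out_q = queue.Queue(maxsize=64)   # processing -> writer buffer

def capture():
    while True:
        raw_q.put(grab_frame())   # blocks when the buffer is full

def process():
    while True:
        out_q.put(overlay(raw_q.get()))

def write():
    while True:
        write_to_disk(out_q.get())

for stage in (capture, process, write):
    threading.Thread(target=stage, daemon=True).start()

The queue sizes act as the "generously sized data buffers": on fast hardware they stay nearly empty, while on single-board computers they fill up and introduce the mentioned output delay.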
Remaining error sources of the system can lead to a nonlinear relationship between the images, although the cameras are almost identical.
Further research needs to be done to reach a lower average error. The difficulty is maintaining efficiency when nonlinear transformations such as polynomial approximations are applied. One option could be a remapping process (OpenCV, accessed 2021), where all approximation results are saved to a lookup table (LUT) to improve performance.
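Such a remapping could look as follows with OpenCV's remap function: the (possibly polynomial) model is evaluated once into per-pixel coordinate maps, the LUT, and every subsequent frame is aligned by a cheap table lookup. The shift model below is only a placeholder:

import cv2
import numpy as np

h, w = 720, 960  # target (RGB) image size, placeholder values

# Build the LUT once: for every RGB pixel, the source coordinate in the
# UV image according to the fitted transformation model.
xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                     np.arange(h, dtype=np.float32))
map_x = xs + 2.0     # placeholder model: a constant 2.0 px shift in x
map_y = ys + 1.5     # placeholder model: a constant 1.5 px shift in y

# Per frame, the expensive model evaluation is replaced by the lookup.
uv_img = np.zeros((h, w), np.uint8)            # placeholder UV frame
aligned = cv2.remap(uv_img, map_x, map_y, interpolation=cv2.INTER_LINEAR)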
4. ENTERING THE MULTI-SENSOR SYSTEM AND THE DATA WORKFLOW

The new synthetic image is still not calibrated like an image from a metric camera when it enters a photogrammetric workflow. Nevertheless, due to the dominant RGB data, classic tie-point matching can be applied, in contrast to the pure UV image. That way, the synthetic images can enter the entire workflow for calibrating the multi-sensor head. Usually the images pass the typical workflow of tie-point matching, a first alignment, and entering ground control points, followed by the calculation of a final adjustment and calibration that results in parameters such as radial distortion, calibrated focal length and principal point (PPS). Using GNSS-INS data will additionally provide the boresight angles and offset parameters needed to directly reference the camera data.
Usually all sensor data are related to the GNSS-INS device; that way, PTP with a GPS-based PTP server gives the best synchronization available for that technology.
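As a sketch of the direct referencing step, the interpolated GNSS-INS pose of a frame can be combined with the calibrated boresight angles and the lever-arm offset roughly as follows; all values are placeholders and angle conventions vary between systems:

import numpy as np
from scipy.spatial.transform import Rotation as R

# Placeholder GNSS-INS pose of one frame (interpolated at its timestamp).
ins_pos = np.array([451230.10, 5654321.70, 812.40])     # position [m]
ins_rot = R.from_euler("zyx", [85.0, 1.2, -0.8], degrees=True)

# Calibrated mounting parameters from the adjustment described above.
boresight = R.from_euler("zyx", [0.31, -0.12, 0.05], degrees=True)
lever_arm = np.array([0.12, -0.03, 0.25])               # offset in body [m]

# Camera pose: apply the boresight rotation to the INS attitude and shift
# the projection centre by the lever arm expressed in the mapping frame.
cam_rot = ins_rot * boresight
cam_pos = ins_pos + ins_rot.apply(lever_arm)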
5. OUTLOOK