The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLIII-B1-2021

XXIV ISPRS Congress (2021 edition)

SETUP OF A CORONA CAMERA AND IMAGE CO-REGISTRATION / CALIBRATION

N. Kunz a*, P. Bochmann a, G. Kemper a

a GGS GmbH, Speyer / Germany – noah.kunz@ggs-speyer.de

Commission I

KEY WORDS: UV Camera, Corona Camera, Discharge Monitoring, Power Line, Direct Referencing, Calibration, Co-registration

ABSTRACT:

Power lines are an important infrastructure that needs special attention, as their functionality is essential for our daily life. They are monitored, mostly by aerial survey with different technologies, to guarantee their sustainable operation. In addition to standard setups such as LiDAR and high-resolution RGB cameras, UV cameras, so-called corona cameras, are often used to detect unwanted discharges. In many cases these UV cameras are combined with RGB cameras of a similar field of view in order to superimpose the detected UV signals onto a visual image of the inspected objects. Discharges at power lines indicate findings that should then be monitored with additional sensors such as thermal or high-resolution mapping cameras.
The use of corona cameras for such inspection work requires several steps of calibration, co-registration and direct referencing in order to bring them into good georeference with the other sensors.

1. BACKGROUND

Power lines belong to the most important infrastructure, especially in the industrialized urban world. Our daily life highly depends on a sustainable electric power supply. Monitoring power lines to guarantee their reliable operation is a legal obligation in many countries and/or part of homeland security. Big efforts are put into overhead line inspections to prevent issues on the infrastructure itself or from vegetation that grows too close.

Every year thousands of km are monitored, mainly with helicopters, either by manual inspections with observers and/or combined with sensors that capture data for documentation or for analytics on the captured datasets.

A typical setup is the use of LiDAR, which nicely maps the power line and detects vegetation issues through first- and last-pulse analytics. In combination, aerial cameras are used for detailed imaging with resolutions of better than 1 cm.

More and more complex systems are used for automated issue detection, for example in the "SIEAERO" project of Siemens in Germany. Many sensors deliver huge datasets which enter automated feature and issue detection algorithms using AI techniques.

Beside LiDAR and high-resolution imaging, specific sensors are used for detecting anomalies. Beside thermal cameras that capture images of hotspots, the use of corona detectors became important. Discharges in the UV band indicate issues on the infrastructure. Their detection guides the automated algorithms in analyzing, on the other data, what the problem finally is.

2. SETUP OF A CORONA CAMERA

2.1 The cameras and their features

Detecting discharges needs a UV-sensitive camera with a lens that consists of UV-permeable optical components, combined with a daylight blocking filter with a narrow bandwidth of 265 nm ± 15 nm. Daylight blocking is important so that natural UV radiation does not interfere with the detection of unwanted discharges. The transmitted radiation must be amplified by a multi-stage UV-sensitive amplifier and projected onto a phosphorescent material, which is connected to a CMOS sensor via fiber optic components.

The advantage of such a construction is that behind the phosphorescent plane any kind of industrial camera can be used to capture the emitted photons. Because of the small spectrum, a monochrome sensor is sufficient; it is not necessary to put a sensor with a Bayer pattern into the UV sensing system. So the optimal way is to use an achromatic sensor that captures the intensity of the multi-stage amplification. Corona discharges appear as blobs of white pixels on the otherwise black images. To identify the location of detected discharges, a second camera working in the visual spectrum is used in parallel with the UV-sensitive camera. To get the most information out of the VIS camera, an RGB Bayer-pattern camera with a relatively high resolution and high frame rate is preferred.

To achieve the best result, two almost identical industrial cameras were chosen for the task. This not only provides the best PTP synchronization compatibility, but also ensures that both sensors have the same size. Using similar camera lenses, the fields of view of both cameras are theoretically almost identical, i.e. the visual camera covers the same region of interest as the UV-sensitive camera. One major difficulty is compensating the intensity losses caused by the daylight filter of the UV camera: because of its structure, rays towards the edges of the frame hit the filter at a wider angle, which causes higher absorption.

* Corresponding author
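As an illustration of this detection principle, the following sketch thresholds a monochrome frame and takes the center of mass of the white pixels to locate a discharge blob. It uses NumPy only and a synthetic frame; the threshold value and resolution are assumptions for the example, not parameters of the actual system.

```python
import numpy as np

def detect_discharge(frame, threshold=200):
    """Return the centroid of bright (discharge) pixels in an 8-bit
    monochrome frame, or None if nothing exceeds the threshold."""
    ys, xs = np.nonzero(frame > threshold)
    if xs.size == 0:
        return None
    # center of mass of the white blob(s)
    return float(xs.mean()), float(ys.mean())

# synthetic black frame with one bright 4x4 blob
frame = np.zeros((1216, 1936), dtype=np.uint8)
frame[600:604, 900:904] = 255
print(detect_discharge(frame))  # → (901.5, 601.5)
```

In the real system the blob is the output of the multi-stage amplifier, so a simple global threshold on the otherwise black image is usually sufficient.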

This contribution has been peer-reviewed.


https://doi.org/10.5194/isprs-archives-XLIII-B1-2021-71-2021 | © Author(s) 2021. CC BY 4.0 License. 71
Also, diffraction artifacts caused by the grid structure of the filter can, as mentioned, cause problems at the border of the image.

2.2 Camera synchronization

Discharges are detected as single dots in an 8-bit monochrome video stream with up to 50 frames per second, in order to capture discharges that can be initiated by the AC frequency of the power lines. A basic issue was to synchronize both cameras, UV and visual, to an exact GPS time tag. This was managed by synchronizing both cameras with the precision time protocol (PTP) and a GNSS-based PTP time server (Eidson, 2006). The time of all devices is synchronized to the GPS time provided by the time server. Logging the data with the exact timestamp is the basis for direct referencing to a GNSS-INS trajectory, post-processed after the mission.

Figure 1. PTP synchronization scheme

Each frame capture gets triggered by a pre-calculated timestamp of the internal clock, which is periodically synchronized to the GPS time in order to correct the drifts of the internal clocks. Because the cameras get triggered at a specific GPS timestamp and the flight data is also referenced to this time, both data streams can be interpolated afterwards in order to enable photogrammetric processing.

Figure 2. Precision of the cameras' PTP synchronization

The precision of the synchronous image capture can be measured via the sync outputs of the cameras. When frame acquisition begins, the IO pin is set to high and can be measured with an oscilloscope. The result is shown in Figure 2: the delay between the exposures of the cameras is 500-560 ns.
The handling of the image data, capture and storage, is a further big challenge.

2.3 Capturing Images

The basic idea of the UV and RGB camera combination is to overlay detected UV radiation with the RGB image data. While the RGB camera shows frames of the real scenery, the discharges create only single-photon point clouds visible as a BW signal. The overlay of these points on the RGB image enables real-time monitoring and makes it possible to detect and visually inspect the place of discharge, e.g. insulators, cable clamps etc.
To get the best result, a tradeoff between the highest possible camera resolution and a suitable frame rate must be made. Limiting factors are the operating system where the image processing takes place on the one hand and the maximum available network bandwidth on the other. The system is intended to be lightweight enough to be operated on UAVs, but both the computing unit and the network solution grow with increasing processing performance. To reach the target of 50 frames per second, a reduction of the dataflow on the network is needed. Because the multi-stage amplifier limits the optical resolution of the UV camera, the sensor's pixel resolution can be reduced by a procedure called binning or decimation. "Decimation is used primarily for the reduction in the number of pixels and the amount of data while retaining the original image area angle and image brightness" (Allied Vision, 2021). This process can be applied to both the horizontal and the vertical resolution. In this case a vertical and horizontal decimation of 2 (skipping every second pixel) gives an improvement of 37.5 % compared to both cameras running at full resolution:

    data_nobinning = width · height · 8 Bit                                        (1)

    data_binning = (width / 2) · (height / 2) · 8 Bit                              (2)

    data_saved = (data_nobinning − data_binning) / (2 · data_nobinning) = 37.5 %   (3)

As shown in Figure 3, at a sensor resolution of 1936 x 1216, frame rates of up to 40 fps are possible with 1 GigE bandwidth. To further improve this performance up to 50 fps, either a 2.5 or 10 GigE capable system can be installed or the height and width can be reduced with the ROI (region of interest) feature. Another solution might be applying decimation to the RGB camera too, but because this leads to information losses in the real visual picture, it is not recommended.

Figure 3. Network load depending on the frame rate of the sensor system, if the RGB camera records FHD and the UV camera half the resolution.
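The saving of equations (1)-(3) can be verified with a few lines of arithmetic. This sketch assumes 8 bit = 1 byte per pixel for both streams, the RGB camera at the full 1936 x 1216 sensor resolution and the UV camera decimated by 2 in both directions, and it ignores protocol overhead:

```python
# Data per frame at the assumed sensor resolution (8 bit = 1 byte/pixel)
width, height = 1936, 1216

data_full = width * height                 # bytes/frame, one camera, no decimation
data_dec = (width // 2) * (height // 2)    # UV camera with 2x2 decimation

# Fraction of the two-camera data stream saved (equation 3)
saved = (data_full - data_dec) / (2 * data_full)
print(f"{saved:.1%}")                      # → 37.5%

# Combined network load at 40 fps (RGB full + UV decimated), in MB/s
load = (data_full + data_dec) * 40 / 1e6
print(f"{load:.1f} MB/s")                  # → 117.7 MB/s
```

At 40 fps the combined stream stays just under the usable capacity of a 1 GigE link (roughly 125 MB/s), which is consistent with the 40 fps limit reported above.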

2.4 Image overlay

After capturing images from both cameras, the data needs to be combined in case of a detected discharge. For this purpose, an RGB- to UV-image calibration or co-registration has been performed. For this step, 10 special electronic targets have been developed that emit UV light in the needed narrow UV bandwidth. Placing them on a facade, determining their relative coordinates with a total station and capturing images from different perspectives with both cameras enables a relative orientation of the UV data using a transformation into the RGB image.

For this task, several options are available. Not only linear geometric transformations like the affine or perspective transformation but also polynomial transformations must be taken into consideration. This is due to several sources of error like misalignment, parallax error or lens distortion of the camera optics. Depending on the impact of these errors, a linear transformation may be insufficient. But the increasing processing power needed for image transformations of higher order must also be kept in mind. The challenge consists of finding the most efficient way. Linear transformations can be described by small transformation matrices (Jähne, 2005).

Affine transformation (6 degrees of freedom):

    | x' |   | a00  a01  tx |   | x |
    | y' | = | a10  a11  ty | · | y |          (4)
    | 1  |   |  0    0    1 |   | 1 |

a00 … a11 include scale, dilation and shear; tx and ty describe the translation.

Perspective transformation:

    | x' |   | a00  a01  a02 |   | x |
    | y' | = | a10  a11  a12 | · | y |         (5)
    | 1  |   | a20  a21   1  |   | 1 |

a20 and a21 yield two more degrees of freedom.

If these are insufficient, another method needs to be applied to project the UV image onto the color image with a non-linear transformation. Considering that the transformation takes place up to 50 times a second, the calculation needs to be done quite fast. To do so, instead of calculating the new position of each coordinate with polynomials for every frame, a map can be created within the calibration process. This results in a large lookup table, storing shifting data for every pixel in the image, which saves a lot of computation load.
After calculating the UV-pixel positions in the RGB image, a second step has to be done. In order to calculate the 8-bit value in the new picture, an interpolation method has to be applied. This is shown by the schematic in Figure 4.

Figure 4. Schematic of the interpolation process (own representation)

The UV image does not contain the most precise and sharp information about the corona discharge. Therefore the choice of the interpolation method can be reduced to two methods, while polynomial and higher-order interpolation methods would increase the arithmetic operations without a noticeable effect. Both interpolation principles, nearest-neighbor and bilinear, are represented in Figure 5.

Figure 5. Comparing nearest-neighbor and bilinear interpolation

2.5 Calibration Process and Resulting Image

There are several steps to be performed to successfully get the UV data overlaid on the RGB image. The first is to analyze the distortion of both lenses (Kraus, 1994). To calibrate the RGB camera, a line pattern with equal intervals can be captured. To get the most precise information, the captured image was evaluated with an automatic detection programmed with OpenCV. Thus, not only is time saved by the automated processing, the calibration points are also detected with subpixel precision by computing their centers of mass (shown in Figure 6).

Figure 6. Getting the centers of the calibration points automatically with OpenCV

Receiving the same data from the UV camera was more complex. Reproducing a pattern with equal distances is achieved by moving a UV LED target with a linear motion through the diagonal of the field of view.

Figure 7. UV-image LED target detection used as calibration pattern
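The perspective mapping of equation (5) and the lookup-table idea from section 2.4 can be sketched in a few lines. This uses NumPy only; the matrix entries below are arbitrary placeholders, not the calibrated values, and the decimated UV resolution of 968 x 608 is an assumption based on the 2x2 decimation described above.

```python
import numpy as np

# Hypothetical 3x3 perspective matrix (placeholder values):
# a20 and a21 in the bottom row add the two projective degrees of freedom.
H = np.array([[1.02, 0.01, 5.0],
              [0.00, 0.98, -3.0],
              [1e-5, 2e-5, 1.0]])

def warp_point(H, x, y):
    """Map one UV-pixel coordinate into the RGB frame (equation 5)."""
    xh, yh, w = H @ np.array([x, y, 1.0])
    return xh / w, yh / w

# Precompute a per-pixel lookup table once during calibration, so the
# live loop only performs two array lookups per pixel instead of a
# matrix multiplication per coordinate.
h, w = 608, 968                              # assumed decimated UV resolution
ys, xs = np.mgrid[0:h, 0:w]
coords = np.stack([xs, ys, np.ones_like(xs)])  # 3 x h x w homogeneous coords
mapped = np.tensordot(H, coords, axes=1)       # apply H to every pixel at once
lut_x = mapped[0] / mapped[2]
lut_y = mapped[1] / mapped[2]

# The LUT agrees with the per-point formula:
print(np.allclose((lut_x[100, 200], lut_y[100, 200]), warp_point(H, 200, 100)))
```

For the affine case of equation (4) the bottom row of H is simply (0, 0, 1), so the division step becomes a no-op.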
For this task, a setup with a stepper motor can deliver high-precision linear movement.

Based on the achieved data, an evaluation process was done to analyze which transformation results in a good error-to-performance ratio.

Figure 8. UV targets developed for the calibration procedure

Figure 9. Comparing different overlay transformations, from top to bottom: without calibration, with affine transformation and with perspective transformation

3. ONLINE DATA EVALUATION

Finally, to process all these steps and ensure that live video output as well as saving the relevant data is possible, the operating software has to deal with several tasks. PTP-synchronized capturing, image processing, displaying as well as saving the result to an appropriate drive need to be done almost simultaneously to reach the requested frame rates. To fulfill this requirement a multithreading environment is essential. Splitting up the tasks into different sections and connecting them with generously sized data buffers leads to a dynamic system. In the case of a powerful multi-core system, or even one with a graphics processing unit, the latency of the live output will be negligible. Also, less powerful systems like modern and lightweight single-board computers can handle this operation, even though with a noticeable output delay. The concept is represented by the flow diagram in Figure 10. For further improvements of the performance on slower hardware the concept can be slightly modified.

Figure 10. Flow diagram of a software structure to achieve live output

Instead of displaying and saving all image data, only the relevant data is processed. In this case processing is only applied if the UV camera detects photons above a defined threshold. This can simply be done by searching for indicators in the histogram of the image or, furthermore, by an efficient outline detection to analyze the dimension of a discharge cloud. This makes the system even more dynamic, since the processing load is higher when detecting discharges, which again fills up the buffer. On the other hand, if nothing is detected, the system has time to process queued frames and clean up the buffer. This not only reduces the overall system load, it also prevents unnecessary data from being stored on the HDD. In addition, and depending on the requirements of the user, less data output may result in faster post-processing and therefore a better workflow.

3.1 Evaluation of the accuracy

Another task to be performed is the evaluation of the accuracy. As already mentioned, in the designed UV system it is rather complex to evaluate the difference between the overlaid UV and RGB images. To measure the difference at certain points, the calibration LED targets can be placed at different positions and their pixel positions measured in the images before and after the calibration process. But to evaluate the entire overlaid image, another method can be applied.
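The buffered multithreading concept of section 3, processing frames only when the UV channel exceeds a threshold, can be reduced to the following sketch. Queue size and threshold are arbitrary choices; the actual software additionally runs PTP-triggered capture, display and storage threads, which are omitted here.

```python
import queue
import threading
import numpy as np

frame_buffer = queue.Queue(maxsize=256)   # generously sized buffer
results = []

def worker():
    """Process frame pairs only when the UV image shows pixels above threshold."""
    while True:
        uv, rgb = frame_buffer.get()
        if uv is None:                    # sentinel: shut down
            break
        if uv.max() > 200:                # cheap histogram-style indicator
            results.append(("discharge", rgb))  # overlay/storage would go here
        frame_buffer.task_done()

t = threading.Thread(target=worker)
t.start()

# Feed three synthetic frame pairs: only the second contains a "discharge".
for bright in (0, 255, 0):
    uv = np.zeros((608, 968), dtype=np.uint8)
    uv[10, 10] = bright
    frame_buffer.put((uv, "rgb-frame"))
frame_buffer.put((None, None))
t.join()
print(len(results))   # → 1
```

Frames without a detection are discarded cheaply, so the buffer drains while nothing is happening and absorbs the extra load during discharge events.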

To calculate the error as a function of the position, all sections of the image have to be covered by the evaluation process. Therefore a UV and a bright white LED were placed on a circuit board as close together as possible. This construction can be passed through large sections of the entire image area. While doing that, the PTP-synchronized cameras capture images at a decent frame rate. For this, the exposure time needs to be short enough that motion blur does not disturb the measurement. In our software, two related incoming images captured at the same time can be processed. Not only does the UV image represent the position of the LEDs as a white filled circle, the RGB image also shows the same white dot due to the highly intense light. By automatically detecting the midpoints of both white pixel groups, the positions of the LEDs in both pictures can be stored. Note: bright reflected sunlight can be a major issue during this process. After doing this, for each taken image pair the positions were used to evaluate the error at the corresponding x- and y-position:

    overlay_error = √((x_UV − x_RGB)² + (y_UV − y_RGB)²)      (6)

The error in relation to the position can be plotted for further inspection. Figure 11 shows the uncalibrated overlay results and those of the perspective transformation. The calibration images were taken at a distance of 4 m, which reduces but still does not erase the parallax effect.

Figure 11. Visualizing the overlay error in relation to the position. The top is uncalibrated and the bottom one represents a perspective transformation. The heat map is scaled the same in both diagrams.

Because of multiple factors, like lens distortion in combination with the parallax error, small misalignments of the system can also lead to a nonlinear relationship between the images, although the cameras are almost identical.

Further research needs to be done to get a lower average error. The difficulty is maintaining efficiency when nonlinear transformations like polynomial approximations are applied. One option could be a remapping process (OpenCV, accessed 2021), where all approximation results are saved to a lookup table (LUT) to improve performance.

4. ENTERING THE MULTI SENSOR SYSTEM AND THE DATA WORKFLOW

The new synthetic image is still not calibrated like images received by a metric camera that could enter a photogrammetric workflow. Nevertheless, due to the dominant RGB data, classic tie-point matching can be applied, in contrast to the pure UV image. That way, the synthetic images can enter the entire workflow for calibrating the multi-sensor head. Using ground control points, the images pass the typical workflow of tie-point matching, first alignment, entering ground control points, and calculating a final adjustment and calibration that results in parameters like radial distortion, calibrated focal length and PPS. Using GNSS-INS data will in addition provide the boresight angles and offset parameters in order to directly reference the camera data.
Usually all sensor data are related to the GNSS-INS device; that way, PTP with a PTP GPS server gives the best synchronization that is available for that technology.

We do not expect the most perfect calibration, since we still have to deal with rolling shutter technologies, but at least we are able to adjust the synthetic images to within a few pixels offset of the sensor data derived from very high resolution data, e.g. LiDAR and RGB images. The synthetic image will not be used for reconstructing a digital twin of a power line; rather, this data will be superimposed on the other data in order to guide automated AI algorithms to the exact points of the findings.

5. OUTLOOK

Monitoring power lines has become a growing task within the last 10 years. Multiple sensor installations support data analytics using AI algorithms. To assist the software in managing the huge amount of data, sensors that show anomalies help in searching the right data position for the objects. In both directions, using high resolution data and sensors that directly guide to findings, AI technology will improve.
The new PTP technology enables non-photogrammetric sensors to be used, now precisely georeferenced, in multi-sensor systems.
The idea of using smaller and different sensor technologies in a combined way as synthetic images will assist in smart sensor integrations. Other technologies, e.g. radar and HF receivers, will surely enter the monitoring of power lines in the future.

6. REFERENCES

Allied Vision, 2021. Decimation. https://cdn.alliedvision.com/fileadmin/content/documents/products/cameras/various/appnote/various/Decimation.pdf.

Eidson, J. C., 2006. Measurement, Control, and Communication Using IEEE 1588. Springer Science & Business Media.

Hamamatsu, 2006. Photomultiplier Tubes – Basics and Applications. 3rd edition, section 2.3 Electron Multiplier (Dynode Section). https://www.hamamatsu.com/resources/pdf/etd/PMT_handbook_v3aE.pdf.

Hering, E., Martin, R., Stohrer, M., 2012. Physik für Ingenieure. Springer, Heidelberg.

Jähne, B., 2005. Digital Image Processing. Springer, Berlin.

Kähler, O., et al., 2020. Automating Powerline Inspection: a Novel Multisensor System for Data Analysis Using Deep Learning. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 43, 747-754.

Kraus, K., 1994. Photogrammetrie. Ferd. Dümmlers Verlag, Bonn.

OpenCV, 2021. Geometric Image Transformations. https://docs.opencv.org/3.4/da/d54/group__imgproc__transform.html#gab75ef31ce5cdfb5c44b6da5f3b908ea4.
