
CN114581383A - Binocular three-dimensional vision-based fire detection method for outdoor oil-immersed transformer - Google Patents


Info

Publication number
CN114581383A
CN114581383A
Authority
CN
China
Prior art keywords
point; camera; coordinate system; coordinates; points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210166184.3A
Other languages
Chinese (zh)
Inventor
汤锦慧
姜德亮
伍发元
刘专
代小敏
毛梦婷
刘晓磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanchang University
Electric Power Research Institute of State Grid Jiangxi Electric Power Co Ltd
State Grid Corp of China SGCC
Original Assignee
Nanchang University
Electric Power Research Institute of State Grid Jiangxi Electric Power Co Ltd
State Grid Corp of China SGCC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanchang University, Electric Power Research Institute of State Grid Jiangxi Electric Power Co Ltd, State Grid Corp of China SGCC filed Critical Nanchang University
Priority to CN202210166184.3A priority Critical patent/CN114581383A/en
Publication of CN114581383A publication Critical patent/CN114581383A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
      • G06 COMPUTING OR CALCULATING; COUNTING
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 7/00 Image analysis
            • G06T 7/0002 Inspection of images, e.g. flaw detection
            • G06T 7/10 Segmentation; Edge detection
              • G06T 7/11 Region-based segmentation
              • G06T 7/136 Segmentation; Edge detection involving thresholding
            • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
              • G06T 7/85 Stereo camera calibration
          • G06T 2207/00 Indexing scheme for image analysis or image enhancement
            • G06T 2207/10 Image acquisition modality
              • G06T 2207/10004 Still image; Photographic image
              • G06T 2207/10012 Stereo images
            • G06T 2207/20 Special algorithmic details
              • G06T 2207/20021 Dividing image into blocks, subimages or windows
            • G06T 2207/30 Subject of image; Context of image processing
              • G06T 2207/30244 Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract



The invention discloses an outdoor oil-immersed transformer fire detection method based on binocular three-dimensional vision. Based on the imaging behaviour of the infrared thermal imager, its geometric imaging model is treated as a pinhole camera model, and the pinhole camera model is established. A special checkerboard calibration board is made and attached to the surface of a heat source by externally applied transmissive infrared radiation; multiple infrared images of the calibration board are collected at different angles, and calibration is performed with the camera calibration tools provided by OpenCV. The intersection centres of the crossbars and vertical bars of the transformer ladder are used as feature reference points to calibrate the extrinsic parameters of the thermal imager. The camera pose is computed with the EPnP algorithm and ICP-based 3D-3D pose estimation. After registration of the infrared and visible-light images, abnormal high-temperature regions and target points are extracted. The invention solves the problem that single-band infrared fire detection suffers large errors from interference by outdoor sunlight, humidity and wind speed, and improves calibration accuracy.


Description

Binocular three-dimensional vision-based fire detection method for outdoor oil-immersed transformer
Technical Field
The invention belongs to the technical field of transformers, and particularly relates to an outdoor oil-immersed transformer fire detection method based on binocular three-dimensional vision.
Background
The main causes of fire and explosion in operating oil-immersed power transformers are insulation damage, poor contact, lightning overvoltage, load short circuit, transformer overheating, ignition by an external fire source, and the like; among these, internal electrical faults are the dominant cause. When an internal electrical fault occurs, the heat-dissipation system carries the heat outward, so the temperature of the transformer's external surface rises rapidly. Fire detection for transformers therefore mainly monitors the external surface temperature, chiefly with temperature-sensing fire detectors; those in common use include point-type temperature-sensing detectors, flame detectors and cable-type linear temperature-sensing detectors, which are installed in contact with the transformer surface. Such detectors have drawbacks: slow response, a single response threshold, weak resistance to electromagnetic interference, susceptibility to the external environment, inability to locate the fire source accurately, and interference with transformer operation and maintenance.
Disclosure of Invention
The invention discloses a binocular three-dimensional vision fusion fire detection method for outdoor transformers, addressing the problem of how to detect temperature anomalies and locate the spatial position of an abnormal region by visual means, given the irregular surface of the transformer in a complex outdoor environment. The invention fuses the thermal radiation image of the transformer with a high-sensitivity outdoor visible-light video image; through three-dimensional recovery and reconstruction it determines the position of a fire source while making the monitoring information more comprehensive and clear, thereby enabling early identification and early warning of fire hazards in outdoor transformers, so that a fire source is found in time to prevent accidents from occurring or spreading.
In order to realize early identification and early warning of fire hazards in outdoor transformers, the key task of the fire detection system is to detect temperature anomalies and locate the spatial position of abnormal regions, taking the body, oil conservator, bushing lifting seats and radiator of the oil-immersed transformer as the monitored objects, under a complex outdoor environment and in view of the transformer's irregular surface. The technical scheme adopted by the invention is as follows: a binocular three-dimensional vision-based fire detection method for an outdoor oil-immersed transformer comprises the following steps:
step S1, establishing a model: according to the imaging effect of the thermal infrared imager, taking the thermal infrared imager geometric imaging model as a pinhole camera model, and establishing the pinhole camera model;
step S2, internal reference calibration: square blocks matching the white squares are cut from a black-and-white checkerboard calibration plate and glued over the black squares with heat-insulating foam adhesive so that the black squares are completely covered, yielding a special checkerboard calibration plate; the plate is attached to the surface of a heat source by externally applied transmissive infrared radiation; because the processed and unprocessed squares of the checkerboard insulate heat differently, their surface temperatures differ, and the checkerboard pattern shows clearly in an infrared image; multiple infrared images of the calibration plate are collected at different angles, and calibration is then performed with the camera calibration tools provided by OpenCV;
step S3, collection of characteristic reference point coordinates: the intersection center of the ladder stand transverse rod and the vertical rod is used as a characteristic reference point to calibrate the external parameters of the thermal infrared imager;
s4, calculating the pose of the camera by adopting an EPnP algorithm and the pose estimation of 3D-3D based on ICP: firstly, estimating the pose of a camera by using an EPnP algorithm, and then constructing a problem of minimized reprojection error to adjust an estimated value; after the pose of the characteristic reference point under the camera coordinate system is obtained, ICP solution is carried out by adopting a linear algebra mode,
step S5, registration of infrared image and visible light image: acquiring depth information by using a depth camera, and then indirectly obtaining depth information corresponding to the infrared image by using a matching relation between the infrared image and the depth camera image;
step S6, extracting the abnormal high temperature region and the target point: and after the three-dimensional space point corresponding to the infrared image pixel point is recovered, if an abnormal high-temperature point is monitored, extracting the space position of the abnormal high-temperature point.
Further preferably, the calibration procedure of step S2 is as follows: initialize, allocating storage for the spatial and pixel coordinates of the corner points; read a calibration-plate image and extract the corner points; judge whether corner extraction succeeded, and if not, proceed directly to checking whether all calibration images have been read; if it succeeded, compute the sub-pixel coordinates of the corners, draw the corners and store their coordinates, then check whether all calibration images have been read; if not all images have been read, return to reading the next calibration-plate image and extracting corners; once all images have been read, calibrate the infrared thermal imager and output the result.
More preferably, in step S4, a plurality of control points are selected by a principal component analysis method using a characteristic reference point whose coordinates are known in a world coordinate system, and the characteristic reference point is expressed in a weighted form of the plurality of control points; the same representation is carried out under a camera coordinate system, and the weight distribution corresponding to the feature reference points is the same as that under a world coordinate system; and then calculating the position of each virtual point under a camera coordinate system according to the obtained weight distribution, the internal parameters of the thermal infrared imager and the coordinates of the two-dimensional points in the image, and further obtaining the coordinates of the reference point under the camera coordinate system.
Further preferably, in step S6, the spatial region in which the transformer is located is divided into a plurality of voxel arrays by using three-dimensional voxels having a predetermined size, and abnormal high temperature points exceeding a threshold value in each voxel are counted to determine whether or not the region is an abnormal high temperature region.
Preferably, in step S6, first, a rectangular parallelepiped box is used to envelop the transformer, and then the rectangular parallelepiped is divided by the small grid voxels with side length of l to obtain a voxel array; after obtaining the voxel array, the specific steps of abnormal region division and target point determination are as follows:
(1) For any point p_i, compute the voxel grid it falls in, denoted S_j; the members of the voxel-grid object are all spatial points it contains, all high-temperature abnormal spatial points, the highest temperature, and the regional target point.
(2) For any point p_i, if its temperature is greater than a set threshold T_s, add it to the abnormal-spatial-point members of its corresponding voxel grid.
(3) For any voxel grid S_j, if the number of abnormal points is greater than a set threshold N, the region is judged to be an abnormal region; the target point of the region is the centroid of all its abnormal points, and the region's highest temperature is the temperature of its hottest spatial point.
The invention has the following beneficial effects. The method replaces the traditional approach of detecting transformer fires by direct contact with the transformer surface, overcoming the drawbacks of contact-type fire detectors: slow response, a single response threshold, weak resistance to electromagnetic interference, susceptibility to the external environment, inability to locate the fire source accurately, and interference with transformer operation, maintenance and overhaul. It also solves the problem that single-band infrared fire detection suffers large errors from interference by outdoor sunlight, humidity and wind speed. A heated, radiating checkerboard device is designed for calibrating infrared images, which improves calibration accuracy; the position of a fire source can be judged accurately, which is of great significance for detecting and warning of transformer fire hazards in outdoor environments.
Drawings
FIG. 1 is a flow chart of the present invention.
Fig. 2 is a schematic view of a coordinate projection.
FIG. 3 is a flow chart of internal reference calibration.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
Referring to fig. 1, a binocular three-dimensional vision-based fire detection method for an outdoor oil-immersed transformer comprises the following steps,
step S1, establishing a model;
the thermal infrared imager geometric imaging model can project external three-dimensional points to an internal imaging plane of the thermal infrared imager, so that internal parameters of the thermal infrared imager are formed, and the form of the internal parameters of the thermal infrared imager depends on the selection of the thermal infrared imager geometric imaging model. And according to the imaging effect of the thermal infrared imager, taking the thermal infrared imager geometric imaging model as a pinhole camera model, establishing the pinhole camera model, and then calibrating the internal parameters according to the pinhole camera model. In the pinhole camera model, the camera imaging is simplified to pinhole imaging, but for convenience of processing, the imaging plane is often moved to the front of the camera in mathematical processing.
Let the coordinates of a point P in the world coordinate system and the camera coordinate system be $P_w=[X_w,Y_w,Z_w]^T$ and $P_c=[X_c,Y_c,Z_c]^T$ respectively. The conversion from the world coordinates $P_w$ to the camera coordinates $P_c$ is as follows:
$$P_c = R P_w + t \tag{1}$$
wherein R is a third-order orthogonal rotation matrix and t is a translation vector.
Consider the central projection of the point P onto a plane located at z = f (f is the focal length, in mm), as shown in fig. 2. Let $p=[x,y]^T$ be the coordinates on the projection plane and $P_c=[X_c,Y_c,Z_c]^T$ the coordinates in the camera coordinate system. From the similarity relationship we obtain:

$$x=f\frac{X_c}{Z_c},\qquad y=f\frac{Y_c}{Z_c} \tag{2}$$
the above formula is rewritten in matrix form with homogeneous coordinates:
Figure RE-GDA0003579063440000041
let the pixel coordinate of point P be [ mu, v]T,[μ00]TThe pixel coordinate of the camera center (optical center), a and b are the scale expansion factors from the image plane to the pixel plane in the directions of the x axis and the y axis, respectively, and γ is the non-perpendicular factor of the μ axis and the v axis under the pixel coordinate system, then the relationship between the pixel coordinate and the image coordinate is:
Figure RE-GDA0003579063440000042
by substituting the formulae (3) and (4) into the formula (1), it is possible to obtain:
Figure RE-GDA0003579063440000043
let α ═ af and β ═ bf simplify the above equation:
Figure RE-GDA0003579063440000044
in formula (6): k is an internal reference matrix of the camera; d is the external parameter matrix of the camera.
Step S2, calibrating internal parameters;
and cutting 28 mm square blocks of white squares on the basis of the black and white chessboard grid calibration plate, and adhering the cut 28 mm square blocks at the black squares through heat insulation foam glue to completely cover the black squares to obtain the special chessboard grid calibration plate of the embodiment. The special chessboard is attached to the surface of a heat source by adopting a method of adding transmission infrared radiation, and the temperature of the surface is different due to the different heat insulating capability of the treated grids and the untreated grids in the special chessboard, so that the chessboard grids can be obviously displayed in an infrared image. And acquiring infrared images of the calibration plate at a plurality of different angles, and then calibrating by adopting a camera calibration tool provided by OpenCV. As shown in fig. 3, the calibration steps are as follows: initializing, and distributing a storage space of space coordinates and pixel coordinates for the corner points; reading a calibration plate image and extracting angular points; judging whether the angular points are successfully extracted, if not, directly judging whether all the calibration images are read; if so, calculating sub-pixel coordinates of the corner points, drawing the corner points, then storing the coordinates of the corner points, and then judging whether all the calibration images are read; and if all the calibration images are not read, returning to the step of reading one calibration plate image and extracting the corner points, and if all the calibration images are read, calibrating the infrared thermal imager and outputting the result.
Denote the homogeneous world coordinates of the m-th point on the calibration plate as $P_m=[X,Y,Z,1]^T$ and the homogeneous pixel coordinates of the corresponding point on the two-dimensional camera plane as $p_m=[\mu,\nu,1]^T$. According to the pinhole camera model:

$$s\,p_m=K\begin{bmatrix}R&t\end{bmatrix}P_m \tag{7}$$
wherein s is a non-zero scale factor, K is the intrinsic matrix of the camera, R is a third-order orthogonal rotation matrix, and t is a translation vector.
Taking the checkerboard plane as the plane z = 0 of the world coordinate system, we obtain:

$$s\begin{bmatrix}\mu\\ \nu\\ 1\end{bmatrix}=K\begin{bmatrix}r_1&r_2&t\end{bmatrix}\begin{bmatrix}X\\ Y\\ 1\end{bmatrix} \tag{8}$$

where $[r_1\ r_2\ r_3\ t]$ is the column-vector expansion of the matrix $[R\ t]$.
in formula (8): h is a homography matrix, which is expanded according to column vectors and comprises the following components:
H=[h1 h2 h3]=λK[r1 r2 t] (9)
[h1 h2 h3]is a vector [ H]The column vector expansion form of (1);
in formula (9): λ is an arbitrary scale factor. Since the rotation matrix R is third-order orthogonal, its columns $r_1$ and $r_2$ are orthonormal, which yields the intrinsic-parameter constraint equations:

$$h_1^T K^{-T}K^{-1}h_2=0,\qquad h_1^T K^{-T}K^{-1}h_1=h_2^T K^{-T}K^{-1}h_2 \tag{10}$$
for ease of calculation, the following matrix definition is made:
Figure RE-GDA0003579063440000053
wherein
Figure RE-GDA0003579063440000054
Figure RE-GDA0003579063440000055
It can be seen that B is a symmetric matrix with 6 effective elements; define the vector $b_m$ of effective elements as:

$$b_m=\begin{bmatrix}B_{11}&B_{12}&B_{22}&B_{13}&B_{23}&B_{33}\end{bmatrix}^T \tag{12}$$
it can be deduced that:
Figure RE-GDA0003579063440000056
in formula (13):
Figure RE-GDA0003579063440000057
[hi1,hi2,hi3]、[hj1,hj2,hj3]is a vector [ H]In the form of an expansion of the row vector of,
the constraint equation can be re-expressed as:
Figure RE-GDA0003579063440000058
assuming that the acquired images with n different angles are acquired, all the internal reference constraint equations are written into a large linear equation set:
Vbm=0 (16)
in formula (16): v is a 2n × 6 matrix. When n is not less than 3, b can be obtainedmThe unique solution of (a) is solved by using a Singular Value Decomposition (SVD) method, and then the internal reference matrix K can be obtained.
Step S3, collecting the coordinates of the characteristic reference points;
the external reference calibration of the thermal infrared imager is to solve the pose of the thermal infrared imager in a world coordinate system, and is related to the selection of the world coordinate system and the pose of the thermal infrared imager. The thermal infrared imager cannot obtain depth information, 3-dimensional space coordinates and corresponding 2-dimensional image coordinates of more than three reference points need to be known when the pose of the camera needs to be solved, and then a Perspectral-n-Point (PnP) problem is constructed and solved. The ladder stand grids on the transformer are in regular shapes, the space positions are easy to measure, and the ladder stand grids are obvious in infrared images, so that the intersection centers of the transverse rods and the vertical rods of the ladder stand are used as characteristic reference points, and 10 ladder stand grids are used for calibrating external parameters of the thermal infrared imager.
Step S4, calculating the pose of the camera using the EPnP algorithm and ICP-based 3D-3D pose estimation;
firstly, estimating the pose of a camera by using an Efficient perceptual-n-point (EPnP) method, and then constructing a problem of minimizing a reprojection error to adjust an estimated value.
The core idea of the EPnP algorithm is to represent the three-dimensional coordinates of the characteristic reference points by linear combination of a plurality of virtual control points. And selecting 4 control points by a principal component analysis method by using a characteristic reference point with known coordinates in a world coordinate system, and expressing the characteristic reference point by using a weighted form of the 4 control points. The same is also expressed in the camera coordinate system, and the weight assignment corresponding to the feature reference point is the same as in the world coordinate system. And then calculating the position of each virtual point under a camera coordinate system according to the obtained weight distribution, the internal parameters of the thermal infrared imager and the coordinates of the two-dimensional points in the image, and further obtaining the coordinates of the reference point under the camera coordinate system.
Let the coordinates of the feature reference points in the world and camera coordinate systems be $P_i^w$ and $P_i^c$ (i = 1, …, n), and the coordinates of the 4 control points in the world and camera coordinate systems be $c_j^w$ and $c_j^c$ (j = 1, 2, 3, 4). The feature reference points can then be expressed as follows:

$$P_i^w=\sum_{j=1}^{4}\alpha_{ij}c_j^w,\qquad \sum_{j=1}^{4}\alpha_{ij}=1 \tag{17}$$

In the formula, the $\alpha_{ij}$ (j = 1, 2, 3, 4) are the 4 weighting coefficients of each reference point, and their sum is 1.
Assuming that the extrinsic parameters of the thermal infrared imager are [R t], there is:

$$P_i^c=\begin{bmatrix}R&t\end{bmatrix}\begin{bmatrix}P_i^w\\ 1\end{bmatrix}=R P_i^w+t \tag{18}$$
since the characteristic reference points can be expressed as a weighted sum form of the control points, it is further possible to obtain:
Figure RE-GDA0003579063440000069
by substituting formula (18) for formula (19), it is possible to obtain:
Figure RE-GDA0003579063440000071
according to the formula, for a certain space point, the weight corresponding to each control point is the same under the two coordinate systems, and the external parameters of the thermal imager can be further obtained after the coordinates of the control points under the camera coordinate system are solved.
Let $u_i=[\mu_i,\nu_i]^T$ (i = 1, …, n) be the projection of the reference point $P_i$ on the image plane (the plane z = 1). From the projection model of the camera we obtain:

$$s_i\begin{bmatrix}\mu_i\\ \nu_i\\ 1\end{bmatrix}=K\sum_{j=1}^{4}\alpha_{ij}c_j^c=K\sum_{j=1}^{4}\alpha_{ij}\begin{bmatrix}x_j^c\\ y_j^c\\ z_j^c\end{bmatrix} \tag{21}$$

Two constraint equations are available for each point:

$$\sum_{j=1}^{4}\left(\alpha_{ij}\,\alpha\,x_j^c+\alpha_{ij}(\mu_0-\mu_i)z_j^c\right)=0,\qquad \sum_{j=1}^{4}\left(\alpha_{ij}\,\beta\,y_j^c+\alpha_{ij}(\nu_0-\nu_i)z_j^c\right)=0 \tag{22}$$

In the above formulas everything is known except the control-point coordinates $c_j^c=[x_j^c,y_j^c,z_j^c]^T$; combining the constraint equations of all points yields a linear equation:

$$Mx=0 \tag{23}$$

in formula (23): M represents a 2n × 12 matrix.
The unknown vector $x=[c_1^{cT},c_2^{cT},c_3^{cT},c_4^{cT}]^T$ lies in the right null space of M, so:

$$x=\sum_{i=1}^{N}\beta_i v_i \tag{24}$$

in formula (24): N is the dimension of the null space of $M^T M$, $v_i$ is a right singular vector of M whose corresponding singular value is 0, and $\beta_i$ is an undetermined coefficient. Thus, for the i-th control point:

$$c_i^c=\sum_{k=1}^{N}\beta_k v_k^{[i]} \tag{25}$$

in formula (25): $v_k^{[i]}$ is the i-th 3 × 1 sub-vector of the singular vector $v_k$.
Once the $c_j^c$ are obtained, the camera-frame coordinates $P_i^c$ of the reference points can be calculated according to equation (20), and the pose of the camera is then solved with the ICP (Iterative Closest Point) method.
After the positions of the feature reference points in the camera coordinate system are obtained, the pose-solving problem is transformed into one of estimating a pose from a set of matched 3-D points, which is generally solved with ICP. ICP can be solved in either a linear or a nonlinear fashion; when the 3D-3D point correspondences are known, an analytic solution can be obtained and no iterative optimization is needed, so the ICP problem is solved here by linear algebra, with the following steps:
(1) Compute the centroid of the reference points in the world coordinate system, $\bar{P}^w$, and the de-centroided coordinate matrix A:

$$\bar{P}^w=\frac{1}{n}\sum_{i=1}^{n}P_i^w,\qquad A=\begin{bmatrix}P_1^w-\bar{P}^w&\cdots&P_n^w-\bar{P}^w\end{bmatrix}^T \tag{26}$$
(2) Compute the centroid of the reference points in the camera coordinate system, $\bar{P}^c$, and the de-centroided coordinate matrix B:

$$\bar{P}^c=\frac{1}{n}\sum_{i=1}^{n}P_i^c,\qquad B=\begin{bmatrix}P_1^c-\bar{P}^c&\cdots&P_n^c-\bar{P}^c\end{bmatrix}^T \tag{27}$$
(3) Define the matrix $H=B^T A$ and compute its SVD decomposition: $H=U\Sigma V^T$.
(4) Compute the rotation matrix R in the pose of the thermal infrared imager: $R=UV^T$; if det(R) < 0, set R[2, :] = -R[2, :].
(5) Compute the translation vector t in the pose:

$$t=\bar{P}^c-R\bar{P}^w \tag{28}$$
step S5, registration of infrared image and visible light image:
after the internal reference and the external reference of the camera are known, the depth information is obtained by adopting the depth camera, and then the depth information corresponding to the infrared image is indirectly obtained by utilizing the matching relation between the infrared image and the depth camera image.
Suppose the spatial coordinates of a point P in the infrared camera coordinate system are $P=[X,Y,Z]^T$, its pixel in the infrared image is $p_1$, and its pixel in the visible-light image is $p_2$. The pixel positions of the two points are then:

$$s_1 p_1=K_1 P,\qquad s_2 p_2=K_2\,(R P+t) \tag{29}$$

in formula (29): $K_1$ is the intrinsic matrix of the thermal infrared imager; $K_2$ is the intrinsic matrix of the visible-light camera; R and t describe the relative pose between the thermal-imager coordinate system and the visible-light coordinate system. Using homogeneous coordinates, the above can be rewritten as:

$$s_2 K_2^{-1}p_2=R\,(s_1 K_1^{-1}p_1)+t \tag{30}$$
let x1=K1 -1p1,x2=K2 -1The formula (30) can be substituted with:
x2=Rx1+t (31)
two sides of the above formula are simultaneously multiplied by t and then multiplied by t
Figure RE-GDA0003579063440000087
It is possible to obtain:
Figure RE-GDA0003579063440000088
re-substituting p1And p2The method comprises the following steps:
Figure RE-GDA0003579063440000089
let E be t R, which is an intrinsic matrix, and equation (33) above is referred to as antipodal constraint. It can be seen that K1And K2It is known that if E is solved, the pixel coordinate of a point in one image corresponding to the E in the other image can be solved according to the pixel coordinate of the point in the other image, and thus registration between images is achieved.
Step S6, extracting the abnormal high temperature region and the target point: after the three-dimensional space point corresponding to the infrared image pixel point is recovered, if an abnormal high-temperature point is monitored, the space position of the abnormal high-temperature point can be extracted.
In practice, if the transformer has a temperature anomaly, the high-temperature region extracted from the infrared image is most likely irregular in shape, so it is difficult to segment the three-dimensional space of the abnormal region effectively and uniformly by image-based methods alone. The invention therefore divides the spatial region containing the transformer into a voxel array using three-dimensional voxels of a fixed size, counts the abnormal high-temperature points exceeding a threshold within each voxel, and thereby judges whether a region is an abnormal high-temperature region. Early identification and early warning of outdoor transformer fire hazards are thus realized, and fire sources are found in time to prevent accidents from occurring or spreading.
First, a cuboid box is used to envelop the transformer; the cuboid is then divided by cubic voxels with side length l to obtain a voxel array. After the voxel array is obtained, the specific steps of abnormal-region division and target-point determination are as follows:
(1) For any point pi, calculate the voxel grid in which the point lies, denoted Sj; the members of this voxel-grid object are all the spatial points it contains, all its high-temperature abnormal spatial points, its highest temperature and its regional target point.
(2) For any point pi, if its temperature is greater than a set threshold Ts, it is added to the abnormal-spatial-point members of its corresponding voxel grid.
(3) For any voxel grid Sj, if the number of abnormal points in it is greater than a set threshold N, the region is judged to be an abnormal region; the target point of the region is the centroid of all the abnormal points, and the highest temperature of the region is the temperature of the hottest spatial point in it.
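The three steps above can be sketched as follows. The function name, threshold values and sample points are hypothetical; real inputs would be the registered infrared point cloud with per-point temperatures.

```python
import numpy as np
from collections import defaultdict

def find_hot_regions(points, temps, box_min, side, t_thresh, n_thresh):
    """Bin 3-D points into cubic voxels of side length `side` and flag voxels
    holding more than `n_thresh` points hotter than `t_thresh`.
    Returns {voxel_index: (target_point, max_temp)} for abnormal voxels."""
    hot = defaultdict(list)
    for p, T in zip(points, temps):
        if T > t_thresh:  # step (2): collect abnormal points per voxel
            idx = tuple(((np.asarray(p) - box_min) // side).astype(int))
            hot[idx].append((np.asarray(p, dtype=float), T))
    regions = {}
    for idx, members in hot.items():
        if len(members) > n_thresh:  # step (3): enough abnormal points
            pts = np.array([m[0] for m in members])
            centroid = pts.mean(axis=0)          # region target point
            max_temp = max(m[1] for m in members)
            regions[idx] = (centroid, max_temp)
    return regions

# Illustrative data: a cluster of hot points inside one voxel, one cool point.
pts = [(0.1, 0.1, 0.1), (0.2, 0.2, 0.2), (0.15, 0.1, 0.2), (1.5, 1.5, 1.5)]
tmp = [130.0, 140.0, 150.0, 20.0]
regions = find_hot_regions(pts, tmp, box_min=np.zeros(3), side=0.5,
                           t_thresh=100.0, n_thresh=2)
print(regions)
```

With these sample values, the three hot points fall into voxel (0, 0, 0), which is flagged as an abnormal region with its centroid as the target point.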
Example 1
Step S1, establishing a model: the geometric imaging model of the thermal infrared imager is treated as a pinhole camera model; the pinhole camera model is established, and the internal parameters are then calibrated according to it;
Step S2, calibration of internal parameters
When a visible light camera is calibrated, the calibration can be completed with a printed black and white checkerboard. For an infrared thermal imaging camera, however, there is no thermal radiation difference between the squares of an ordinary black and white checkerboard calibration plate, so the infrared image contains no grid texture information, and such a plate cannot serve as a calibration plate for a thermal infrared imager. This embodiment therefore improves on Zhang's calibration method and produces a calibration plate whose grid texture is clear and distinct in the infrared image.
The length and width of each checkerboard square are 28 mm. The special checkerboard calibration plate of this embodiment is obtained by cutting 28 mm square blocks matching the white squares of an ordinary black and white checkerboard calibration plate, and adhering the cut blocks over the black squares with heat-insulating foam adhesive so that the black squares are completely covered. Using externally applied transmitted infrared radiation, the special checkerboard is attached to the surface of a heat source; because the treated and untreated squares differ in heat-insulating capability, the surface temperatures differ, so the checkerboard squares are clearly visible in the infrared image.
Infrared images of the calibration plate are acquired at a plurality of different angles, and calibration is then performed with the camera calibration tool provided by OpenCV (Open Source Computer Vision Library). With reference to FIG. 3, the calibration steps are as follows: initialize, allocating storage space for the spatial coordinates and pixel coordinates of the corner points; read one calibration plate image and extract the corner points; judge whether the corner points were extracted successfully; if not, proceed directly to judging whether all calibration images have been read; if so, calculate the sub-pixel coordinates of the corner points, draw the corner points, store the corner coordinates, and then judge whether all calibration images have been read; if not all calibration images have been read, return to the step of reading one calibration plate image and extracting the corner points; once all calibration images have been read, calibrate the thermal infrared imager and output the result.
The internal parameter matrix obtained by calibration is shown in formula (34), with a corresponding reprojection error of 0.456.
Figure RE-GDA0003579063440000101
Calibration of external parameters of the thermal infrared imager
Step S3, collection of feature reference point coordinates: to carry out external parameter calibration, the pixel coordinates in the image and the three-dimensional spatial coordinates of the intersection centers of the horizontal and vertical bars of the climbing ladder need to be extracted. The pixel coordinates are extracted first: 2D points near a reference point are selected with the mouse, nearby corner points are then searched and refined to sub-pixel accuracy, and the centroid of all the corner points is taken as the pixel coordinate of the actual feature reference point. The pixel coordinates of the 10 reference points obtained in this way are: (705, 510), (706, 573), (707, 632), (707, 695), (707, 757), (763, 511), (762, 570), (763, 631), (763, 693), (763, 755).
The feature reference point space coordinates are obtained by actual measurement and are (2824, 2484, 2638), (2799, 2484, 2240), (2774, 2484, 1842), (2749, 2484, 1444), (2724, 2484, 1046), (2824, 2133, 2638), (2799, 2133, 2240), (2774, 2133, 1842), (2749, 2133, 1444), (2724, 2133, 1046), respectively, in millimeters (mm).
Step S4, the pose of the camera is estimated using the EPnP algorithm and ICP-based 3D-3D pose estimation;
Using the solvePnP tool in OpenCV with the EPnP method for external parameter calibration, the external parameters of the camera are obtained as:
Figure RE-GDA0003579063440000102
tcw=[3900.93 3497.12 6020.03]T
the coordinates of the origin of the camera in the world coordinate system are:
twc=-Rcw -1·tcw=[-5589.39 2853.46 4230.19]T
step S5, registration of infrared image and visible light image
Because the difference between the infrared image and the visible light image is large and their gray levels and texture features are inconsistent, automatic extraction and registration of feature reference points performs poorly. Therefore, the feature reference points are extracted and matched manually: distinct corner points are extracted from the infrared image and from the visible light image obtained by the depth camera, used as feature reference points, and matched.
In formula (33), let
F = K2^(-T)·t∧·R·K1^(-1)
where F is the fundamental matrix; the mapping between image pixels can then be completed by solving F. Next, F is solved by the eight-point method.
Considering a pair of matching points with homogeneous pixel coordinates p1 = [u1, v1, 1]^T and p2 = [u2, v2, 1]^T, the epipolar constraint gives:
p2^T·F·p1 = 0
According to the epipolar constraints of all the matching points, the following linear equation system can be obtained:
[u2^(i)·u1^(i), u2^(i)·v1^(i), u2^(i), v2^(i)·u1^(i), v2^(i)·v1^(i), v2^(i), u1^(i), v1^(i), 1]·f = 0, i = 1, …, 8
where f = [F11, F12, F13, F21, F22, F23, F31, F32, F33]^T stacks the entries of F row by row.
The linear equation system comprises the epipolar constraint relations of 8 pairs of matched feature points, and the solution f lies in the null space of the coefficient matrix. Substituting the pixel coordinates of each feature reference point yields the F matrix:
Figure RE-GDA0003579063440000112
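A minimal eight-point solver in the spirit of the text can be sketched as follows, using an SVD null-space solution on synthetic, noise-free correspondences. The intrinsics and relative pose are illustrative; a production pipeline would typically normalize the pixel coordinates first (Hartley's conditioning) or call cv2.findFundamentalMat instead.

```python
import numpy as np

def eight_point(p1, p2):
    """Estimate the fundamental matrix F from N >= 8 pixel correspondences
    (rows [u, v]) by solving the stacked epipolar constraints p2^T F p1 = 0
    via the SVD null space of the coefficient matrix."""
    A = np.array([[u2*u1, u2*v1, u2, v2*u1, v2*v1, v2, u1, v1, 1.0]
                  for (u1, v1), (u2, v2) in zip(p1, p2)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)   # right singular vector of the smallest value
    return F / F[2, 2]         # fix the scale ambiguity

# Synthetic two-view geometry (illustrative intrinsics and pose).
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
ang = np.deg2rad(8.0)
R = np.array([[np.cos(ang), 0.0, np.sin(ang)],
              [0.0, 1.0, 0.0],
              [-np.sin(ang), 0.0, np.cos(ang)]])
t = np.array([0.5, 0.1, 0.0])

rng = np.random.default_rng(0)
Pw = rng.uniform([-1, -1, 4], [1, 1, 8], size=(10, 3))  # 3-D scene points
proj = lambda P: (K @ P / P[2])[:2]
p1 = np.array([proj(P) for P in Pw])          # pixels in camera 1
p2 = np.array([proj(R @ P + t) for P in Pw])  # pixels in camera 2

F = eight_point(p1, p2)
res = [np.append(q2, 1.0) @ F @ np.append(q1, 1.0) for q1, q2 in zip(p1, p2)]
print(max(abs(r) for r in res))
```

On noise-free data the epipolar residuals of the recovered F are near zero for every correspondence.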
After F is solved, the relative pose R and t between the thermal infrared imager and the depth camera can be recovered from F by SVD decomposition. Since
Figure RE-GDA0003579063440000113
therefore, for any point p1 in the infrared image, the corresponding point p2 in the visible light image can be found; from the coordinates of p2 and the depth d, the coordinates of the corresponding spatial point in the depth camera frame are obtained, and the coordinates of the point in the world coordinate system are then recovered from the relative pose between the thermal infrared imager and the depth camera and the external parameters of the thermal infrared imager.
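The recovery described here (pixel plus depth lifted to a 3-D point, then transformed out of the camera frame) can be sketched as follows; K, R_cw and t_cw are illustrative stand-ins for the calibrated values, which the patent gives only as images.

```python
import numpy as np

def backproject(p, depth, K):
    """Lift pixel p = (u, v) with depth `depth` (along the optical axis) to a
    3-D point in that camera's frame: P = depth * K^-1 [u, v, 1]^T."""
    ph = np.array([p[0], p[1], 1.0])
    return depth * (np.linalg.inv(K) @ ph)

def cam_to_world(Pc, R_cw, t_cw):
    """Invert P_c = R_cw P_w + t_cw to recover world coordinates."""
    return R_cw.T @ (Pc - t_cw)

# Illustrative intrinsics/extrinsics (stand-ins for the calibrated values).
K = np.array([[900.0, 0.0, 640.0], [0.0, 900.0, 360.0], [0.0, 0.0, 1.0]])
R_cw = np.eye(3)
t_cw = np.array([100.0, -50.0, 2000.0])

# Round trip: project a known world point, then recover it from pixel + depth.
Pw_true = np.array([500.0, 300.0, 1500.0])
Pc = R_cw @ Pw_true + t_cw          # world -> camera frame
u, v, _ = (K @ Pc) / Pc[2]          # pinhole projection to pixels
Pw_rec = cam_to_world(backproject((u, v), Pc[2], K), R_cw, t_cw)
print(Pw_rec)
```

The round trip recovers the original world coordinates exactly, which is the property the registration step relies on once the depth d is known for each matched pixel.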
Step S6, extracting abnormal high temperature area and target point
Firstly, a cuboid box is used to envelop the transformer, and the cuboid is then divided with small cubic voxels of side length l to obtain a voxel array. After the voxel array is obtained, the specific steps of dividing abnormal regions and determining target points are as follows:
(1) For any point pi, calculate the voxel grid in which the point lies, denoted Sj; the members of this voxel-grid object are all the spatial points it contains, all its high-temperature abnormal spatial points, its highest temperature and its regional target point.
(2) For any point pi, if its temperature is greater than a set threshold Ts, it is added to the abnormal-spatial-point members of its corresponding voxel grid.
(3) For any voxel grid Sj, if the number of abnormal points in it is greater than a set threshold N, the region is judged to be an abnormal region; the target point of the region is the centroid of all the abnormal points, and the highest temperature of the region is the temperature of the hottest spatial point in it.
By observing the abnormal regions, early identification and early warning of fire hazards of the outdoor transformer are realized, and the fire source is found in time to prevent accidents from occurring and spreading.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (10)

1. An outdoor oil-immersed transformer fire detection method based on binocular three-dimensional vision, characterized by comprising the following steps:
Step S1, establishing a model: according to the imaging effect of the infrared thermal imager, the geometric imaging model of the infrared thermal imager is regarded as a pinhole camera model, and the pinhole camera model is established;
Step S2, internal parameter calibration: on the basis of a black and white checkerboard calibration plate, blocks matching the white squares are cut out, and the cut blocks are glued over the black squares with heat-insulating foam adhesive so as to completely cover them, obtaining the special checkerboard calibration plate; using externally applied transmitted infrared radiation, the special checkerboard calibration plate is attached to the surface of a heat source; because the treated and untreated squares of the calibration plate differ in heat-insulating capability, the surface temperatures differ, and the checkerboard squares are clearly displayed in the infrared image; a plurality of infrared images of the calibration plate at different angles are collected, and calibration is then performed with the camera calibration tool provided by OpenCV;
Step S3, collection of feature reference point coordinates: the intersection centers of the horizontal bars and vertical bars of the climbing ladder are used as feature reference points to calibrate the external parameters of the infrared thermal imager;
Step S4, calculating the pose of the camera using the EPnP algorithm and ICP-based 3D-3D pose estimation: the pose of the camera is first estimated with the EPnP algorithm, and a reprojection-error-minimization problem is then constructed to adjust the estimate; after the positions of the feature reference points in the camera coordinate system are obtained, the ICP problem is solved by linear algebra;
Step S5, registration of the infrared image and the visible light image: a depth camera is used to obtain depth information, and the depth information corresponding to the infrared image is then obtained indirectly through the matching relationship between the infrared image and the depth camera image;
Step S6, extraction of the abnormal high-temperature region and target point: after the three-dimensional spatial points corresponding to the infrared image pixels are recovered, if an abnormal high-temperature point is detected, its spatial position is extracted.
2. The binocular three-dimensional vision-based outdoor oil-immersed transformer fire detection method according to claim 1, characterized in that the calibration procedure of step S2 is as follows: initialize, allocating storage space for the spatial coordinates and pixel coordinates of the corner points; read one calibration plate image and extract the corner points; judge whether the corner points are extracted successfully; if not, proceed directly to judging whether all calibration images have been read; if so, calculate the sub-pixel coordinates of the corner points, draw the corner points, and store the corner coordinates, then judge whether all calibration images have been read; if not all calibration images have been read, return to "read one calibration plate image and extract the corner points"; if all calibration images have been read, calibrate the infrared thermal imager and output the result.
3. The binocular three-dimensional vision-based outdoor oil-immersed transformer fire detection method according to claim 1, characterized in that, in step S4, feature reference points whose coordinates in the world coordinate system are known are used; a plurality of control points are selected by principal component analysis, and each feature reference point is expressed as a weighted combination of the control points; the same representation is made in the camera coordinate system, the weight assignment of each feature reference point being identical to that in the world coordinate system; then, according to the obtained weight assignments, the internal parameters of the infrared thermal imager and the coordinates of the two-dimensional points in the image, the position of each virtual point in the camera coordinate system is calculated, from which the coordinates of the reference points in the camera coordinate system are obtained.
4. The binocular three-dimensional vision-based outdoor oil-immersed transformer fire detection method according to claim 1, characterized in that, in step S6, the spatial region in which the transformer is located is divided with three-dimensional voxels of a certain size to form a voxel array; the abnormal high-temperature points exceeding a threshold in each voxel are then counted, and whether the region is an abnormal high-temperature region is judged accordingly.
5. The binocular three-dimensional vision-based outdoor oil-immersed transformer fire detection method according to claim 1, characterized in that, in step S6, a cuboid box is first used to envelop the transformer, and the cuboid is then divided with small cubic voxels of side length l to obtain a voxel array; after the voxel array is obtained, the specific steps of dividing abnormal regions and determining target points are as follows:
(1) for any point pi, calculate the voxel grid in which the point lies, denoted Sj; the members of this voxel-grid object are all the spatial points it contains, all its high-temperature abnormal spatial points, its highest temperature and its regional target point;
(2) for any point pi, if its temperature is greater than a set threshold Ts, it is added to the abnormal-spatial-point members of its corresponding voxel grid;
(3) for any voxel grid Sj, if the number of abnormal points in it is greater than a set threshold N, the region is judged to be an abnormal region, the target point of the region is the centroid of all the abnormal points, and the highest temperature of the region is the temperature of the hottest spatial point in it.
6. The binocular three-dimensional vision-based outdoor oil-immersed transformer fire detection method according to claim 1, characterized in that, in step S1, the coordinates of a point P in the world coordinate system and in the camera coordinate system are denoted Pw = [Xw, Yw, Zw]^T and Pc = [Xc, Yc, Zc]^T respectively; the conversion from the coordinates Pw in the world coordinate system to the coordinates Pc in the camera coordinate system is:
Pc = R·Pw + t (1)
where R is a third-order orthogonal rotation matrix and t is a translation vector;
consider the central projection of the point P onto a plane located at z = f, where f is the focal length, p = [x, y]^T are the projection-plane coordinates and Pc = [Xc, Yc, Zc]^T are the coordinates in the camera coordinate system; from the similarity relationship:
x = f·Xc/Zc, y = f·Yc/Zc (2)
rewriting the above in matrix form with homogeneous coordinates:
Zc·[x, y, 1]^T = [[f, 0, 0], [0, f, 0], [0, 0, 1]]·[Xc, Yc, Zc]^T (3)
let the pixel coordinates of the point P be [μ, ν]^T, let [μ0, ν0]^T be the pixel coordinates of the camera center, let a and b be the scaling factors from the image plane to the pixel plane in the x-axis and y-axis directions respectively, and let γ be the non-perpendicularity factor of the μ axis and ν axis in the pixel coordinate system; the relationship between pixel coordinates and image coordinates is then:
[μ, ν, 1]^T = [[a, γ, μ0], [0, b, ν0], [0, 0, 1]]·[x, y, 1]^T (4)
substituting equations (3) and (4) into equation (1) gives:
Zc·[μ, ν, 1]^T = [[a, γ, μ0], [0, b, ν0], [0, 0, 1]]·[[f, 0, 0], [0, f, 0], [0, 0, 1]]·(R·Pw + t) (5)
letting α = a·f and β = b·f, the above simplifies to:
Zc·[μ, ν, 1]^T = [[α, γ, μ0], [0, β, ν0], [0, 0, 1]]·[R t]·[Xw, Yw, Zw, 1]^T = K·D·Pw (6)
in equation (6), K is the internal parameter matrix of the camera and D = [R t] is the external parameter matrix of the camera.
7. The binocular three-dimensional vision-based outdoor oil-immersed transformer fire detection method according to claim 6, characterized in that, in step S2, the world homogeneous coordinates of the m-th point on the calibration plate are denoted Pm = [X, Y, Z, 1]^T and the corresponding pixel homogeneous coordinates in the two-dimensional camera plane are denoted pm = [μ, ν, 1]^T; then, according to the pinhole camera model:
s·pm = K·[R t]·Pm (7)
where s is a non-zero scale factor, K is the internal parameter matrix of the camera, R is a third-order orthogonal rotation matrix and t is a translation vector;
taking the checkerboard plane as the plane z = 0 of the world coordinate system gives:
s·[μ, ν, 1]^T = K·[r1 r2 r3 t]·[X, Y, 0, 1]^T = K·[r1 r2 t]·[X, Y, 1]^T = H·[X, Y, 1]^T (8)
where [r1 r2 r3 t] is the column-wise expansion of the matrix [R t];
in equation (8), H is the homography matrix; expanding it by columns:
H = [h1 h2 h3] = λ·K·[r1 r2 t] (9)
where [h1 h2 h3] is the column-wise expansion of H and λ is an arbitrary scale coefficient; since r1 and r2 are orthogonal by the orthogonality of the rotation matrix R, the constraint equations on the internal parameters are obtained:
h1^T·K^(-T)·K^(-1)·h2 = 0, h1^T·K^(-T)·K^(-1)·h1 = h2^T·K^(-T)·K^(-1)·h2 (10)
for convenience of calculation, the following matrix is defined:
B = K^(-T)·K^(-1) = [[B11, B12, B13], [B12, B22, B23], [B13, B23, B33]] (11)
where
B = [[1/α², −γ/(α²β), (γν0−μ0β)/(α²β)],
[−γ/(α²β), γ²/(α²β²)+1/β², −γ(γν0−μ0β)/(α²β²)−ν0/β²],
[(γν0−μ0β)/(α²β), −γ(γν0−μ0β)/(α²β²)−ν0/β², (γν0−μ0β)²/(α²β²)+ν0²/β²+1]]
it can be seen that B is a symmetric matrix with 6 effective elements; the vector bm composed of the effective elements is defined as:
bm = [B11 B12 B22 B13 B23 B33]^T (12)
it can be derived that:
hi^T·B·hj = vij^T·bm (13)
in equation (13):
vij = [hi1·hj1, hi1·hj2 + hi2·hj1, hi2·hj2, hi3·hj1 + hi1·hj3, hi3·hj2 + hi2·hj3, hi3·hj3]^T (14)
where hi = [hi1, hi2, hi3]^T denotes the i-th column vector of H;
the constraint equations can then be re-expressed as:
[v12^T; (v11 − v22)^T]·bm = 0 (15)
assuming that n images at different angles are collected, all the internal parameter constraint equations are written as one large linear equation system:
V·bm = 0 (16)
in equation (16), V is a 2n×6 matrix; when n ≥ 3, a unique solution for bm can be obtained, commonly by singular value decomposition, after which the internal parameter matrix K can be found.
8. The binocular three-dimensional vision-based outdoor oil-immersed transformer fire detection method according to claim 7, characterized in that, in step S4, the coordinates of the feature reference points in the world coordinate system and in the camera coordinate system are denoted Pi^w (i = 1, …, n) and Pi^c (i = 1, …, n) respectively, and the coordinates of the 4 control points in the world coordinate system and in the camera coordinate system are denoted Cj^w (j = 1, 2, 3, 4) and Cj^c (j = 1, 2, 3, 4) respectively; each feature reference point can then be expressed as:
Pi^w = Σ(j=1..4) αij·Cj^w, with Σ(j=1..4) αij = 1 (17)
where the αij (j = 1, 2, 3, 4) are the 4 weighting coefficients of each marker point, whose sum is 1;
assuming the external parameters of the infrared thermal imager are [R t], there is:
Pi^c = R·Pi^w + t (18)
since the feature reference points can be expressed as weighted sums of the control points, it further follows that:
Pi^c = Σ(j=1..4) αij·Cj^c (19)
substituting equation (18) into equation (19) gives:
Pi^c = R·(Σ(j=1..4) αij·Cj^w) + t = Σ(j=1..4) αij·(R·Cj^w + t) = Σ(j=1..4) αij·Cj^c (20)
according to the above, for a given spatial point the weights of the control points are the same in the two coordinate systems; after the coordinates of the control points in the camera coordinate system are solved, the external parameters of the thermal imager can be further obtained;
let ui (i = 1, …, n) = [μi, νi]^T be the projection of the reference point Pi (i = 1, …, n) on the image plane; from the projection model of the camera:
si·[μi, νi, 1]^T = K·Σ(j=1..4) αij·Cj^c = K·Σ(j=1..4) αij·[xj^c, yj^c, zj^c]^T (21)
from which two constraint equations are obtained:
Σ(j=1..4) (αij·α·xj^c + αij·(μ0 − μi)·zj^c) = 0
Σ(j=1..4) (αij·β·yj^c + αij·(ν0 − νi)·zj^c) = 0 (22)
in the above equations everything is known except the control point coordinates Cj^c = [xj^c, yj^c, zj^c]^T; combining the constraint equations corresponding to all the points yields the linear equation:
M·x = 0 (23)
in equation (23), M is a 2n×12 matrix, x = [C1^cT, C2^cT, C3^cT, C4^cT]^T, and x lies in the right null space of M, so that:
x = Σ(i=1..N) βi·vi (24)
in equation (24), N is the dimension of the null space of M^T·M, vi is a right singular vector of M whose corresponding singular value is 0, and βi is an undetermined coefficient; therefore, for the i-th control point:
Ci^c = Σ(k=1..N) βk·vk^[i] (25)
in equation (25), vk^[i] is the i-th 3×1 sub-vector of the vector vk;
after the βi are obtained, the coordinates Pi^c of the reference points can be calculated according to equation (20), and the camera pose is then solved by the ICP method.
9. The binocular three-dimensional vision-based outdoor oil-immersed transformer fire detection method according to claim 8, characterized in that, in step S4, the ICP solution is carried out by linear algebra in the following steps:
(1) calculate the centroid P̄w of the reference points in the world coordinate system and the de-centroided coordinate matrix A:
P̄w = (1/n)·Σ(i=1..n) Pi^w, A = [P1^w − P̄w, P2^w − P̄w, …, Pn^w − P̄w] (26)
(2) calculate the centroid P̄c of the reference points in the camera coordinate system and the de-centroided coordinate matrix B:
P̄c = (1/n)·Σ(i=1..n) Pi^c, B = [P1^c − P̄c, P2^c − P̄c, …, Pn^c − P̄c] (27)
(3) define the matrix H = B·A^T and compute its SVD decomposition: H = U·Σ·V^T;
(4) calculate the rotation matrix R in the pose of the infrared thermal imager: R = U·V^T; if |R| < 0, then R[2,:] = −R[2,:];
(5) calculate the translation vector t in the pose:
t = P̄c − R·P̄w (28)
10. The binocular three-dimensional vision-based outdoor oil-immersed transformer fire detection method according to claim 9, characterized in that, in step S5, assuming that in the infrared camera coordinate system the spatial coordinates of a certain point P are P = [X, Y, Z]^T, its pixel point in the infrared image is p1 and its pixel point in the visible light image is p2, the pixel positions of the two pixel points are:
s1·p1 = K1·P, s2·p2 = K2·(R·P + t) (29)
in equation (29), K1 is the internal parameter matrix of the infrared thermal imager, K2 is the internal parameter matrix of the visible light camera, and R, t describe the relative pose between the thermal imager coordinate system and the visible light coordinate system; using homogeneous coordinates, equal up to a scale factor, the above can be rewritten as:
p1 ≃ K1·P, p2 ≃ K2·(R·P + t) (30)
letting x1 = K1^(-1)·p1 and x2 = K2^(-1)·p2 and substituting into equation (30) gives:
x2 = R·x1 + t (31)
taking the outer product of both sides with t and then left-multiplying by x2^T gives:
x2^T·t∧·R·x1 = 0 (32)
re-substituting p1 and p2 gives:
p2^T·K2^(-T)·t∧·R·K1^(-1)·p1 = 0 (33)
let E = t∧·R; E is called the essential matrix, and equation (33) is called the epipolar constraint; it can be seen that K1 and K2 are known, so once E is solved, the pixel coordinates of the point in one image corresponding to a given point in the other image can be solved from that point's pixel coordinates, achieving registration between the images.
CN202210166184.3A 2022-02-23 2022-02-23 Binocular three-dimensional vision-based fire detection method for outdoor oil-immersed transformer Pending CN114581383A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210166184.3A CN114581383A (en) 2022-02-23 2022-02-23 Binocular three-dimensional vision-based fire detection method for outdoor oil-immersed transformer

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210166184.3A CN114581383A (en) 2022-02-23 2022-02-23 Binocular three-dimensional vision-based fire detection method for outdoor oil-immersed transformer

Publications (1)

Publication Number Publication Date
CN114581383A true CN114581383A (en) 2022-06-03

Family

ID=81770561

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210166184.3A Pending CN114581383A (en) 2022-02-23 2022-02-23 Binocular three-dimensional vision-based fire detection method for outdoor oil-immersed transformer

Country Status (1)

Country Link
CN (1) CN114581383A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130304710A1 (en) * 2010-07-26 2013-11-14 Ucl Business Plc Method and system for anomaly detection in data sets
CN108388341A (en) * 2018-02-11 2018-08-10 苏州笛卡测试技术有限公司 Human-machine interaction system and device based on a thermal infrared camera and visible-light projector
KR20200059520A (en) * 2018-11-21 2020-05-29 대한민국(산림청 국립산림과학원장) Fire Detection Parameter Generation Apparatus and Fire Detection Device having the same
US20220044949A1 (en) * 2020-08-06 2022-02-10 Carl Zeiss Smt Gmbh Interactive and iterative training of a classification algorithm for classifying anomalies in imaging datasets
CN112115874A (en) * 2020-09-21 2020-12-22 武汉大学 Cloud-fused visual SLAM system and method
CN112215905A (en) * 2020-10-22 2021-01-12 北京易达恩能科技有限公司 Automatic calibration method of mobile infrared temperature measurement system
CN112634374A (en) * 2020-12-18 2021-04-09 杭州海康威视数字技术股份有限公司 Binocular camera three-dimensional calibration method, device and system and binocular camera
CN112785702A (en) * 2020-12-31 2021-05-11 华南理工大学 SLAM method based on tight coupling of 2D laser radar and binocular camera
CN113299035A (en) * 2021-05-21 2021-08-24 上海电机学院 Fire identification method and system based on artificial intelligence and binocular vision

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Ren Guiwen: "Research on Calibration Method of Infrared and Visible-Light Dual Cameras Based on OpenCV", Science Technology and Engineering, no. 03, 28 January 2016 (2016-01-28), pages 211 - 214 *
Liu Jinyue; Tang Xu; Jia Xiaohui; Yang Dong; Li Tiejun: "Efficient Extrinsic Calibration Method between a 3D LiDAR and a Camera", Chinese Journal of Scientific Instrument, no. 11, 15 November 2019 (2019-11-15) *
Zhang Zhiyan; Fu Dongmei: "3D Modeling of Infrared Targets Based on Voxel Segmentation and Recognition", Laser & Infrared, no. 07, 20 July 2007 (2007-07-20) *
Li Bin; Tan Guanghua; Gao Chunming: "Parallel Multi-Camera Calibration Algorithm with Improved Fundamental Matrix Computation and Optimization", Journal of Computer Applications, no. 08, 1 August 2013 (2013-08-01) *
Jiao Hongwei; Qin Shiqiao; Hu Chunsheng; Wang Xingshu: "Research on a Calibration Method for Pulsed Lidar and Camera", Chinese Journal of Lasers, no. 01, 10 January 2011 (2011-01-10) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116758079A (en) * 2023-08-18 2023-09-15 杭州浩联智能科技有限公司 A hazard early warning method based on spark pixels
CN116758079B (en) * 2023-08-18 2023-12-05 杭州浩联智能科技有限公司 Hazard early warning method based on spark pixels

Similar Documents

Publication Publication Date Title
Kukelova et al. Real-time solution to the absolute pose problem with unknown radial distortion and focal length
CN101908231B (en) Reconstruction method and system for processing three-dimensional point cloud containing main plane scene
US8290305B2 (en) Registration of 3D point cloud data to 2D electro-optical image data
CN108805936B (en) Camera external parameter calibration method and device and electronic equipment
CN111385558B (en) TOF camera module accuracy measurement method and system
Wang et al. Single view based pose estimation from circle or parallel lines
CN109443209A (en) Line-structured light system calibration method based on homography matrix
CN113298883B (en) Method, electronic device and storage medium for calibrating multiple cameras
CN101943563A (en) Rapid calibration method of line-structured light vision sensor based on space plane restriction
CN112184793B (en) Depth data processing method and device and readable storage medium
CN104729481B (en) Cooperative target pose precision measurement method based on PNP perspective model
CN115511878A (en) Side slope earth surface displacement monitoring method, device, medium and equipment
CN114862973A (en) Spatial positioning method, device, equipment and storage medium based on fixed point locations
CN113129255B (en) Method, computing device, system and storage medium for detecting package
CN109506569B (en) Method for monitoring three-dimensional sizes of cubic and columnar crystals in crystallization process based on binocular vision
CN119515938A (en) A multi-target, long-distance field temperature measurement method based on dual-spectrum imaging
CN114581383A (en) Binocular three-dimensional vision-based fire detection method for outdoor oil-immersed transformer
Tu et al. Detecting facade damage on moderate damaged type from high-resolution oblique aerial images
CN120125650B (en) A position monitoring method and system based on heterogeneous image registration and binocular positioning
Recker et al. Visualization of scene structure uncertainty in multi-view reconstruction
CN112907650A (en) Cloud height measuring method and equipment based on binocular vision
Bier et al. Error analysis of stereo calibration and reconstruction
CN114910021B (en) A grating binocular stereo vision three-dimensional measurement system and method
CN114120372B (en) Space passenger flow heat distribution method and system based on human body detection and identification
CN117726803A (en) Target laser stripe centerline extraction method, system, equipment and medium based on visual simulation prior information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 330096 No. 88, Minqiang Road, Private Science and Technology Park, Qingshanhu District, Nanchang City, Jiangxi Province

Applicant after: STATE GRID JIANGXI ELECTRIC POWER COMPANY LIMITED Research Institute

Applicant after: STATE GRID CORPORATION OF CHINA

Applicant after: Nanchang University

Address before: 330096 No. 88, Minqiang Road, Private Science and Technology Park, High-tech Zone, Nanchang City, Jiangxi Province

Applicant before: STATE GRID JIANGXI ELECTRIC POWER COMPANY LIMITED Research Institute

Applicant before: STATE GRID CORPORATION OF CHINA

Applicant before: Nanchang University