CN115542362A - High-precision space positioning method, system, equipment and medium for electric power operation site - Google Patents
High-precision space positioning method, system, equipment and medium for electric power operation site
- Publication number
- CN115542362A (application CN202211524412.6A)
- Authority
- CN
- China
- Prior art keywords
- positioning
- electric power
- uwb
- moving target
- coordinates
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/005—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S19/00—Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
- G01S19/38—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
- G01S19/39—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
- G01S19/42—Determining position
- G01S19/45—Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
- G01S19/46—Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement the supplementary measurement being of a radio-wave signal type
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/277—Analysis of motion involving stochastic approaches, e.g. using Kalman filters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/62—Text, e.g. of license plates, overlay texts or captions on TV images
- G06V20/625—License plates
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/30—Services specially adapted for particular environments, situations or purposes
- H04W4/33—Services specially adapted for particular environments, situations or purposes for indoor environments, e.g. buildings
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W64/00—Locating users or terminals or network equipment for network management purposes, e.g. mobility management
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Automation & Control Theory (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a high-precision spatial positioning method, system, device, and medium for electric power work sites, in the field of spatial positioning technology. The method comprises: obtaining the position of the moving target of the electric power work based on visual positioning technology; performing image analysis on the moving target according to that position to obtain its feature information and construct a three-dimensional model of the moving target; and refining the three-dimensional model according to the feature information to obtain a high-precision three-dimensional model whose position in the three-dimensional scene is updated in real time. By attaching a UWB or BeiDou positioning tag to the dynamic target and performing image processing on top of the moving-target positioning, fine reconstruction of the moving target in the three-dimensional scene is realized, so that real-time position information of the dynamic target is obtained and the position state of the dynamic-target model is updated in the three-dimensional scene in real time.
Description
Technical Field
The present invention relates to the technical field of spatial positioning, and in particular to a high-precision spatial positioning method, system, device, and medium for electric power work sites.
Background Art
Research on vision-based target detection, recognition, and tracking algorithms is a hot topic in artificial intelligence, and great progress has been made in recent years with the wide application of deep learning. Deep learning technology has gradually been applied in power-production environments to identify targets such as insulators, overhead arc lines, foreign objects in substations, and equipment temperatures; substation scenes are complex, however, and recognition of the key targets in every stage of electric power work still requires further research.
Fusing real-scene video information into a three-dimensional virtual model is a branch of virtual-reality technology, or a stage in its development. Three-dimensional video fusion matches and fuses one or more camera video sequences with the related three-dimensional virtual scene to generate a new dynamic virtual scene or model of that scene, realizing the fusion of the virtual scene with real-time video. The technology can rely on a stand-alone 3D engine to fuse small-scale or local 3D scenes with video and other resources, or rely on a 3D geographic information system to realize visual fusion of 3D geographic information over a wide, even global, area.
At an electric power work site, accurately and effectively locating workers is an important basis for preventing accidents and ensuring their safety. At present, worker positioning at power work sites is mostly implemented with GPS, the BeiDou positioning system, and UWB positioning systems, but relying on these systems alone cannot extract precise spatial position information; occlusion, changes in target pose, lighting, and similar problems at the work site also affect the positioning of work targets.
In summary, research on vision-based target detection, recognition, and tracking algorithms has made progress, but how to handle the effects of occlusion, target pose changes, and lighting remains a research focus. The video image data processed by existing visual positioning algorithms is a projection of the target from three-dimensional space to two-dimensional space, from which precise spatial position information cannot be extracted.
The problems of the prior art are as follows:
The video image data processed by existing visual positioning algorithms for electric power work sites is a projection of the target from three-dimensional space to two-dimensional space, so precise three-dimensional spatial position information cannot be extracted.
Summary of the Invention
The technical problem to be solved by this application is that the video image data processed by existing visual positioning algorithms for electric power work sites is a projection of the target from three-dimensional space to two-dimensional space, from which precise three-dimensional spatial position information cannot be extracted. The purpose is to provide a high-precision spatial positioning method, system, device, and medium for electric power work sites: by attaching a UWB or BeiDou positioning tag to the dynamic target and performing image processing on top of the moving-target positioning, fine reconstruction of the moving target in the three-dimensional scene is realized, so that real-time position information of the dynamic target is obtained and the position state of the dynamic-target model is updated in the three-dimensional scene in real time, solving the problem that existing visual positioning algorithms cannot extract precise three-dimensional spatial position information.
This application is realized through the following technical solutions:
A first aspect of this application provides a high-precision spatial positioning method for an electric power work site, comprising:
S1. Based on visual positioning technology, obtaining the position of the moving target of the electric power work, wherein UWB-assisted visual positioning technology is used for indoor moving targets and BeiDou-assisted visual positioning technology for outdoor ones, the UWB-assisted visual positioning technology including UWB-assisted passive video positioning;
S2. According to the position of the moving target, performing image analysis on the moving target to obtain its feature information and construct a three-dimensional model of the moving target;
S3. Refining the three-dimensional model of the moving target according to the feature information, obtaining a high-precision three-dimensional model, and updating the position of the high-precision three-dimensional model in the three-dimensional scene in real time.
In the above technical solution, UWB-assisted visual positioning is used for indoor moving targets and BeiDou-assisted visual positioning for outdoor ones. Combining plain UWB or BeiDou positioning with visual positioning yields the position of the moving target in three-dimensional space and reduces interference, occlusion, and similar problems in power-work scenes, thereby achieving centimetre-level positioning. By wearing a UWB tag or BeiDou positioning tag on the moving target and performing image analysis on top of the moving-target positioning, fine reconstruction of the moving target in the three-dimensional scene is realized, so that real-time position information of the moving target is obtained and the position state of the moving-target model is updated in the three-dimensional scene in real time, yielding precise positioning of the moving target in three-dimensional space.
In an optional embodiment, the UWB-assisted passive video positioning method includes:
locating the moving target of the electric power work with a binocular vision system formed by two cameras, obtaining the position information of the moving target in space (a triangulation sketch follows this list);
locating the moving target based on a UWB model, obtaining the UWB positioning information of the moving target;
combining the position information of the moving target in space with the UWB positioning information of the moving target, obtaining the position of the moving target.
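As a concrete illustration of the binocular step, the following is a minimal sketch of disparity-based triangulation for a rectified stereo pair; the focal length, baseline, and pixel coordinates are assumed values for illustration, not taken from the patent.

```python
import numpy as np

def triangulate_rectified(u_left, u_right, v, f, baseline, cx, cy):
    """Recover a 3D point (in the left-camera frame) from a rectified
    stereo correspondence using the standard disparity relations."""
    disparity = u_left - u_right            # pixels; must be > 0
    z = f * baseline / disparity            # depth along the optical axis
    x = (u_left - cx) * z / f               # lateral offset
    y = (v - cy) * z / f                    # vertical offset
    return np.array([x, y, z])

# Hypothetical calibration and matched feature (not from the patent):
point = triangulate_rectified(u_left=652.0, u_right=618.0, v=374.0,
                              f=1200.0, baseline=0.35, cx=640.0, cy=360.0)
print(point)  # approx. [0.12  0.14  12.35] metres in the left-camera frame
```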
In an optional embodiment, the method of combining the position information of the moving target in space with its UWB positioning information is as follows:
Taking the position and heading measured by the binocular vision system and the position measured by the UWB model as observations, a UWB/vision fusion observation equation is established:

$$\begin{bmatrix} p_{vision} \\ p_{uwb} \end{bmatrix} = H X + \begin{bmatrix} \delta_{vision} \\ \delta_{uwb} \end{bmatrix}$$

where $p_{vision}$ denotes the plane coordinates measured by the binocular vision system, $p_{uwb}$ the plane coordinates measured by UWB, $\delta_{vision}$ the position-measurement error of the binocular vision system, $\delta_{uwb}$ the UWB position-measurement error, $\varphi$ the deflection angle between the plane coordinates measured by the binocular vision system and those measured by UWB (a component of the state vector $X$), and $H$ the intermediate matrix.
In an optional embodiment, the UWB model is constructed as follows (a sketch of the switching logic follows this list):
establishing an improved robust EKF model and adopting it as the standard model of the UWB model;
using a statistical method to judge whether a gross error exists in the improved robust EKF model;
if a gross error exists, invoking the robust EKF model as the improved robust EKF model;
if no gross error exists, invoking the EKF model as the improved robust EKF model.
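A minimal sketch of this switching logic, assuming the statistical gross-error test is a chi-square test on the normalized innovation (the patent does not name the statistic); `ekf_update` and `robust_ekf_update` are hypothetical stand-ins for the two update rules detailed in Embodiment 1.

```python
import numpy as np

def innovation_test(v, S, threshold):
    """Chi-square statistic on the innovation v with covariance S."""
    return float(v.T @ np.linalg.solve(S, v)) > threshold

def adaptive_update(x, P, z, H, R, threshold, ekf_update, robust_ekf_update):
    # Innovation (predicted residual) and its covariance
    v = z - H @ x
    S = H @ P @ H.T + R
    if innovation_test(v, S, threshold):
        # Gross error suspected: fall back to the robust EKF update
        return robust_ekf_update(x, P, z, H, R)
    # Otherwise use the standard EKF update
    return ekf_update(x, P, z, H, R)
```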
In an optional embodiment, the positioning method of the BeiDou-assisted visual positioning technology includes:
obtaining a region of interest based on a Kalman filter model and the predicted position of the work boundary line;
performing image edge detection on the region of interest to obtain the positioning region;
constructing a multi-camera vision measurement model from multi-camera vision sensors, and constructing multi-camera vision coordinates in the positioning region through that model;
optimizing, based on a weighted LM (Levenberg-Marquardt) algorithm, the multi-camera vision coordinates constructed in the positioning region through the multi-camera vision measurement model, obtaining optimized multi-camera vision coordinates;
converting, based on BeiDou positioning technology, the optimized multi-camera vision coordinates into coordinates of the global positioning system.
In an optional embodiment, the method of dynamically acquiring the region of interest through multiple vision sensors based on the Kalman filter model is as follows (a sketch follows this list):
predicting the current line position based on Kalman filtering, obtaining a first dynamic region of interest;
predicting, based on the projection method, the column number of the work boundary line in image coordinates and, taking that column number as the reference and combining it with the current camera invocation, obtaining a second dynamic region of interest.
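A minimal sketch of producing the first dynamic region of interest, assuming a constant-velocity Kalman prediction of the boundary line's column position; the state layout, noise levels, and ROI half-width are illustrative assumptions rather than values from the patent.

```python
import numpy as np

def predict_roi(x, P, F, Q, half_width, img_width):
    """One Kalman prediction step for the boundary-line column, then an
    ROI centred on the predicted column."""
    x_pred = F @ x                      # state: [column, column_velocity]
    P_pred = F @ P @ F.T + Q
    col = int(round(x_pred[0]))
    left = max(0, col - half_width)
    right = min(img_width, col + half_width)
    return x_pred, P_pred, (left, right)

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity model, dt = 1 frame
Q = np.diag([4.0, 1.0])                  # assumed process noise
x = np.array([320.0, -2.0])              # assumed current estimate
P = np.eye(2) * 10.0
_, _, roi = predict_roi(x, P, F, Q, half_width=40, img_width=1280)
print(roi)  # (278, 358): columns to search for the boundary line
```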
In an optional embodiment, the method of optimizing, based on the weighted LM algorithm, the multi-camera vision coordinates constructed in the positioning region through the multi-camera vision measurement model is as follows:
performing nonlinear optimization, by minimizing the reprojection error, on the image coordinates obtained after the object point is transformed to the $i$-th multi-camera vision sensor;
normalizing the distances from the multi-camera vision sensors to the object and converting the reciprocals of the normalized distances into weighting factors;
constructing an objective function from the weighting factors and substituting the multi-camera vision coordinates into the objective function for calculation, obtaining the optimized multi-camera vision coordinates.
A second aspect of this application provides a high-precision spatial positioning system for an electric power work site, comprising:
a UWB-assisted video positioning module for obtaining the position of indoor electric-power-work moving targets, the module including an initialization unit for initializing environment images and IMU data, a visual/inertial combination unit for obtaining visual coordinates, and an ultra-wideband unit for obtaining UWB coordinates;
a BeiDou-assisted visual positioning module for obtaining the position of outdoor electric-power-work moving targets, the module including a dynamic region-of-interest module for reducing the influence of environmental factors on work boundary-line extraction, a multi-camera vision module for constructing multi-camera vision area coordinates, and a BeiDou positioning module for converting the multi-camera vision area coordinates into global positioning system coordinates.
A third aspect of this application provides an electronic device including a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the program, implements the high-precision spatial positioning method for an electric power work site.
A fourth aspect of this application provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the high-precision spatial positioning method for an electric power work site.
Compared with the prior art, this application has the following advantages and beneficial effects:
By attaching a UWB or BeiDou positioning tag to the dynamic target and performing image processing on top of the moving-target positioning, fine reconstruction of the moving target in the three-dimensional scene is realized, so that real-time position information of the dynamic target is obtained and the position state of the dynamic-target model is updated in the three-dimensional scene in real time; verification experiments show that the error between the reconstructed dynamic model and the actual size is less than 1%.
Brief Description of the Drawings
To describe the technical solutions of the exemplary embodiments of the present invention more clearly, the drawings used in the embodiments are briefly introduced below. It should be understood that the following drawings show only some embodiments of the present invention and should not be regarded as limiting their scope; those of ordinary skill in the art can derive other related drawings from them without creative effort. In the drawings:
Fig. 1 is a schematic flowchart of the high-precision spatial positioning method for an electric power work site provided by an embodiment of this application;
Fig. 2 is a schematic diagram of the coordinate transformation for multi-camera vision fused with BeiDou positioning provided by an embodiment of this application;
Fig. 3 shows the assisted-navigation tracking results on the test site at a speed of 0.4 m/s provided by an embodiment of this application;
Fig. 4 shows the assisted-navigation tracking results on the test site at a speed of 0.8 m/s provided by an embodiment of this application;
Fig. 5 shows the assisted-navigation tracking results on the test site at a speed of 1.2 m/s provided by an embodiment of this application.
Detailed Description of the Embodiments
To make the purpose, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below with reference to the embodiments and drawings. The exemplary embodiments of the present invention and their descriptions are only intended to explain, not to limit, the present invention.
Embodiment 1
This embodiment provides a high-precision spatial positioning method for an electric power work site, the flow of which is shown in Fig. 1, including:
Step S1. Based on visual positioning technology, obtain the positioning information of the moving target of the electric power work.
UWB-assisted visual positioning technology is used for indoor moving targets and BeiDou-assisted visual positioning technology for outdoor ones; UWB-assisted visual positioning includes UWB-assisted active video positioning and UWB-assisted passive video positioning.
For wearable devices not equipped with a vision system, moving-target positioning can be realized by the cameras installed in the substation combined with UWB technology.
The improved robust EKF model is adopted as the standard model of the UWB model.
In the improved robust EKF positioning model, the robust EKF gain matrix is constructed as

$$\tilde{K}_i = \begin{cases} K_i, & |\tilde{v}_i| \le k_0 \\ K_i\,\dfrac{k_0}{|\tilde{v}_i|}\left(\dfrac{k_1-|\tilde{v}_i|}{k_1-k_0}\right)^{2}, & k_0 < |\tilde{v}_i| \le k_1 \\ 0, & |\tilde{v}_i| > k_1 \end{cases}$$

where $k_0$ and $k_1$ are robust parameters, $k_0$ taking 2.5-3.5 and $k_1$ taking 3.5-4.5, $K$ is the gain matrix of the EKF model, $\tilde{K}$ is the gain matrix after robust optimization, and $\tilde{v}_i$ is the standardized residual compared against these residual-related thresholds.
$m$ denotes the dimension of the observation vector. $\tilde{v}_i$, $r_i$, and $\sigma_i$ denote the prediction residual, redundant observation component, and measurement standard deviation of observation $i$, respectively. The redundant observation component can be expressed as

$$r_i = \left( Q_{\tilde{v}}\, P \right)_{ii}$$

where $Q_{\tilde{v}}$ and $P$ are the covariance matrix of the residual vector and the weight matrix of the observations, respectively. For each update iteration, with the iteration count given, the state prediction value at iteration $t$ is

$$\hat{x}_{t|t-1} = \Phi\, \hat{x}_{t-1}$$

(with state transition matrix $\Phi$), and the prediction residual is

$$\tilde{v}_t = z - H \hat{x}_{t|t-1}.$$

The state prediction value iterated at time $t$ is determined by the state filter value at time $t-1$ and its prediction residual; $\hat{x}_t$ denotes the value computed when an iteration ends, $\hat{x}_{t|t-1}$ the value predicted from time $t-1$, $H$ the matrix of distribution characteristics, and $z$ the measurement observation. The equivalent gain matrix is computed from the gain construction above, and the robust filter value is

$$\hat{x}_t = \hat{x}_{t|t-1} + \tilde{K} \left( z - H \hat{x}_{t|t-1} \right)$$

where $\tilde{K}$ is the robust EKF gain matrix. If the difference between $\hat{x}_t$ and $\hat{x}_{t|t-1}$ is smaller than a given tolerance, the iteration ends. When $t = 1$, $\hat{x}_{1|0}$ is the assignment of the standard EKF at time $k$. The posterior covariance matrix is

$$P_k = \left( I_{m_x} - \bar{K} H \right) P_{k|k-1}$$

where $m_x$ is the dimension of the state vector, $\bar{K}$ is the final equivalent Kalman filter gain matrix at the end of the iteration, and $P$ is the covariance matrix.
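The following is a minimal sketch of one such robust measurement update, assuming the IGG-III-style weight above acts per observation on the standardized innovation; matrix shapes, default thresholds, and the convergence tolerance are illustrative assumptions.

```python
import numpy as np

def igg3_weight(v_std, k0=3.0, k1=4.0):
    """IGG-III-style scaling factor for a standardized residual."""
    a = abs(v_std)
    if a <= k0:
        return 1.0
    if a <= k1:
        return (k0 / a) * ((k1 - a) / (k1 - k0)) ** 2
    return 0.0

def robust_ekf_update(x_pred, P_pred, z, H, R, tol=1e-4, max_iter=10):
    """Iterated robust measurement update for one epoch."""
    x = x_pred.copy()
    for _ in range(max_iter):
        v = z - H @ x                            # prediction residual
        S = H @ P_pred @ H.T + R                 # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)      # standard EKF gain
        w = np.array([igg3_weight(v[i] / np.sqrt(S[i, i]))
                      for i in range(len(v))])
        K_rob = K * w                            # down-weight suspect observations
        x_new = x_pred + K_rob @ v
        if np.linalg.norm(x_new - x) < tol:      # iteration stop criterion
            x = x_new
            break
        x = x_new
    P = (np.eye(len(x)) - K_rob @ H) @ P_pred    # posterior covariance
    return x, P
```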
To improve the positioning accuracy, a vision/UWB fusion EKF positioning model accounting for the visual scale factor and the initial direction is constructed. The state and measurement equations are expressed as

$$X_k = \Phi\, X_{k-1} + w_{k-1}, \qquad z_k = H\, X_k + e_k$$

which are the state equation and the measurement equation respectively; $w$ and $e$ are independent, zero-mean Gaussian noise processes with covariance matrices $Q$ and $R$, $\Phi$ is the state transition matrix, and $H$ is the matrix of distribution characteristics.

Here the state vector is $X = [x, y, v, \theta, s, \varphi]^{T}$, where $(x, y)$ are the plane coordinates, $v$ is the pedestrian speed, $\theta$ is the movement direction angle, $s$ is the scale ambiguity, and $\varphi$ is the deflection angle between the plane coordinates computed by vision and those computed by UWB.

According to the error equations of vision and UWB, the corresponding state model follows from the state equation above with this state vector.

Taking the position and heading measured by vision and the position measured by UWB as observations, the UWB/vision fusion observation equation can be expressed as

$$z_k = \begin{bmatrix} p_{vision} \\ p_{uwb} \end{bmatrix} = H\, X_k + \begin{bmatrix} \delta_{vision} \\ \delta_{uwb} \end{bmatrix}$$

where $p_{vision}$ denotes the plane coordinates measured by the binocular vision system, $p_{uwb}$ the plane coordinates measured by UWB, $\delta_{vision}$ the position-measurement error of the binocular vision system, $\delta_{uwb}$ the UWB position-measurement error, and $\varphi$ the deflection angle between the plane coordinates measured by the binocular vision system and those measured by UWB.
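A minimal sketch of one prediction/update cycle of such a fusion filter over the six-dimensional state $[x, y, v, \theta, s, \varphi]$; the constant speed-and-heading transition model, the form of the vision measurement function, and the noise levels are illustrative assumptions rather than the patent's exact matrices.

```python
import numpy as np

def predict(x, P, Q, dt):
    """Propagate state [x, y, v, theta, s, phi] with constant speed/heading."""
    px, py, v, th, s, phi = x
    x_pred = np.array([px + v * dt * np.cos(th),
                       py + v * dt * np.sin(th),
                       v, th, s, phi])
    F = np.eye(6)
    F[0, 2] = dt * np.cos(th); F[0, 3] = -v * dt * np.sin(th)
    F[1, 2] = dt * np.sin(th); F[1, 3] =  v * dt * np.cos(th)
    return x_pred, F @ P @ F.T + Q

def fuse(x, P, z_vision, z_uwb, R):
    """Stack vision and UWB plane coordinates into one measurement update."""
    def h(x):
        px, py, _, _, s, phi = x
        c, si = np.cos(phi), np.sin(phi)
        # Assumed: vision reports position in its own frame, scaled by s
        # and rotated by the deflection angle phi
        vis = s * np.array([c * px - si * py, si * px + c * py])
        return np.concatenate([vis, [px, py]])   # UWB reports (x, y) directly
    # Numerical Jacobian of h (central differences)
    H = np.zeros((4, 6))
    eps = 1e-6
    for j in range(6):
        d = np.zeros(6); d[j] = eps
        H[:, j] = (h(x + d) - h(x - d)) / (2 * eps)
    z = np.concatenate([z_vision, z_uwb])
    v = z - h(x)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ v, (np.eye(6) - K @ H) @ P
```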
In an optional embodiment, BeiDou-assisted visual positioning technology can be used for outdoor moving targets. Constrained by BeiDou positioning accuracy and by the movement of construction equipment and the geographic environment on site, relying on BeiDou positioning alone currently achieves only metre-level accuracy, which cannot meet the centimetre-level high-precision positioning demand of dynamic power-work scenes. BeiDou positioning technology is therefore combined with multi-camera vision-assisted positioning technology to improve positioning accuracy.
As shown in Fig. 2, the coordinate system in the figure is the one established after iteration of the optimized weighted LM algorithm. Take a point in this coordinate system: a point $P$ on the known travel route has image coordinates $(x_p, y_p)$, and the ground point $G$ has image coordinates $(x_g, y_g)$. Establishing a coordinate system with $G$ as the origin, the coordinates of $P$ in it are $(x_p - x_g,\; y_p - y_g)$.

The units of these coordinates are pixels; multiplying by the image scale factors yields coordinates $(x_l, y_l)$ in units of length (m).

A coordinate system is established with the fixed point $C$ on the camera as its origin; the measured horizontal distance $l_1$ from $C$ to the ground point $G$ is 2.5 m, and the coordinates of $P$ in the camera coordinate system $O_c$ follow from $(x_l, y_l)$ by the translation determined by $l_1$.

The construction machine carries a BeiDou positioning and navigation device, from whose solution the coordinates (in m) of the vehicle control point $M$ in the ground-plane coordinate system $O$ and the heading angle of the vehicle are obtained. A coordinate system $O_m$ is established with $M$ as its origin; the measured distance from $M$ to the fixed camera point $O_c$ is $l_2$, and the coordinates $(x_m, y_m)$ of $P$ in $O_m$ follow by the further translation determined by $l_2$.

From the positioning system, the coordinates of $M$ in the ground coordinate system $O$ are $(x_w, y_w)$; with the heading angle $\theta$, the coordinates $(x_o, y_o)$ of $P$ in the ground coordinate system are obtained by rotation and translation:

$$\begin{bmatrix} x_o \\ y_o \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x_m \\ y_m \end{bmatrix} + \begin{bmatrix} x_w \\ y_w \end{bmatrix}.$$

In summary, given the image coordinates $(x_p, y_p)$ of a point $P$ on the route, its coordinates $(x_o, y_o)$ in the global positioning system are obtained by composing the intermediate transformation matrices above.
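A minimal sketch of this transformation chain using homogeneous 2D transforms; the scale factors and offsets are illustrative, and the assumption that $l_1$ and $l_2$ offset the origin along the camera's forward axis belongs to the sketch, not the patent.

```python
import numpy as np

def translation(tx, ty):
    T = np.eye(3); T[0, 2], T[1, 2] = tx, ty
    return T

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def pixel_to_world(p_img, g_img, kx, ky, l1, l2, m_world, heading):
    """Compose: image -> G origin -> metres -> camera -> vehicle -> world."""
    # Pixel coordinates relative to the ground point G, scaled to metres
    x_l = kx * (p_img[0] - g_img[0])
    y_l = ky * (p_img[1] - g_img[1])
    p = np.array([x_l, y_l, 1.0])
    # Assumed: l1 and l2 offset the origin along the camera's forward axis
    T = translation(*m_world) @ rotation(heading) @ translation(0.0, l1 + l2)
    return (T @ p)[:2]

# Hypothetical values (not from the patent):
print(pixel_to_world(p_img=(700, 420), g_img=(640, 360), kx=0.01, ky=0.01,
                     l1=2.5, l2=1.2, m_world=(5.0e5, 3.0e6), heading=0.1))
```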
In a multi-camera vision measurement system, nonlinear optimization is usually performed by minimizing the reprojection error, i.e.

$$\min_{P} \sum_{i=1}^{n} \left\| m_i - \hat{m}_i \right\|^2$$

where $m_i$ denotes the image coordinates obtained after the object point, before adjustment, is transformed to the $i$-th camera, and $\hat{m}_i$ the image coordinates obtained after the adjusted object point is transformed to the $i$-th camera. When camera parameters, measurement environment, and other factors are the same, the distance from a camera to the object point has a marked influence on imaging noise: the farther the distance, the larger the noise, and vice versa. When designing the objective function, the reciprocal of the camera-to-object distance is converted into a weighting factor, and, to increase comparability between cameras, the distance information is normalized:

$$w_i = \frac{1/d_i}{\sum_{j=1}^{n} 1/d_j}, \qquad d_i = \left\| p_o - p_{c_i} \right\|$$

where $w_i$ is the weighting factor, $d_i$ is the distance from the object point $o$ to camera $i$, $p_o$ are the coordinates of the object point, and $p_{c_i}$ those of camera $i$. The final objective function is therefore

$$F(P) = \sum_{i=1}^{n} w_i \left\| m_i - \hat{m}_i \right\|^2.$$

Substituting the world point $P = (x_w, y_w, z_w)$ into the objective function and writing it as a system of equations gives sub-objective functions $F_{u_i}$ and $F_{v_i}$ for the two image axes, where $u_i, v_i$ are the intermediate variables of the iterative solution and $x_w, y_w, z_w$ denote the coordinates along the unit vectors of three-dimensional space. Differentiating with respect to $(x_w, y_w, z_w)$ yields the Jacobian matrix $J$, whose entries are the first-order partial derivatives of $F_{u_i}$ and $F_{v_i}$ with respect to the $x_w$, $y_w$, $z_w$ directions. Substituting into the Levenberg-Marquardt normal equation gives the increment

$$\Delta P = \left( J^{T} J + \mu I \right)^{-1} J^{T} f$$
where $\mu$ is the updated damping coefficient and $f$ is the residual vector. The weighted LM algorithm is therefore used to optimize the coordinates constructed in the positioning region through the multi-camera vision measurement model; the specific calculation steps are as follows (a code sketch follows the list):

(1) Compute the initial object-point coordinates by the orthographic projection method; set the iteration termination constant $e$, the actual-descent-effect threshold $\varepsilon$, and the iteration count $k$, and initialize $\mu$;

(2) From the currently estimated world coordinates of the object point and the camera parameters, solve the Jacobian matrix $J$ using the derivatives with respect to $(x_w, y_w, z_w)$, and compute $\mu$;

(3) Compute the increment $\Delta P$;

(4) Compute $\rho$ and evaluate the current descent effect;

(5) If $\rho < \varepsilon$, set $\mu = 0.5\mu$ and return to step (3); otherwise continue to step (6);

(6) If $\|\Delta P\| < e$ or the number of iterations reaches $k$, stop and output the optimized result; otherwise return to step (2) for the next iteration.
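A minimal sketch of this weighted LM loop for a single object point, assuming a pinhole projection per camera and the normalized inverse-distance weights defined above; camera poses, intrinsics, and the thresholds $e$, $\varepsilon$, $k$ are illustrative assumptions.

```python
import numpy as np

def project(P, R, t, K):
    """Pinhole projection of world point P into a camera with pose (R, t)."""
    p = K @ (R @ P + t)
    return p[:2] / p[2]

def weighted_lm(P0, cams, obs, e=1e-6, eps=0.25, k=50, mu=1e-3):
    """cams: list of (R, t, K, cam_center); obs: list of observed pixel coords."""
    P = P0.copy()
    for _ in range(k):
        d = np.array([np.linalg.norm(P - c[3]) for c in cams])
        w = (1.0 / d) / np.sum(1.0 / d)           # normalized inverse-distance weights

        def residuals(Pt):
            return np.concatenate([np.sqrt(w[i]) * (obs[i] - project(Pt, *cams[i][:3]))
                                   for i in range(len(cams))])

        f = residuals(P)
        J = np.zeros((len(f), 3))
        h = 1e-6
        for j in range(3):                        # numerical Jacobian
            dP = np.zeros(3); dP[j] = h
            J[:, j] = (residuals(P + dP) - residuals(P - dP)) / (2 * h)
        for _ in range(20):                       # damping adjustment, steps (3)-(5)
            dlt = -np.linalg.solve(J.T @ J + mu * np.eye(3), J.T @ f)
            f_new = residuals(P + dlt)
            pred = f @ f - (f + J @ dlt) @ (f + J @ dlt)   # predicted reduction
            rho = (f @ f - f_new @ f_new) / max(pred, 1e-12)
            if rho < eps:
                mu *= 0.5                         # the patent's step (5) convention
            else:
                break
        P = P + dlt
        if np.linalg.norm(dlt) < e:               # step (6) termination
            break
    return P
```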
To analyse the real-time performance of the work boundary-line extraction algorithm, this embodiment measured, over 100 construction-process images, the time of each image-processing step before and after the algorithm improvement and the average time of the whole image-processing pipeline (results table not reproduced here).

The data show that the average processing time per image was shortened from 0.176087 s before the algorithm improvement to 0.064547 s after it, a reduction of 63.3%; the improved image-processing time satisfies the navigation system's image-processing time requirement.
When the speed is 0.4 m/s, the assisted-navigation tracking results on the test site are shown in Fig. 3; the corresponding table of results is not reproduced here.

As Fig. 3 and that table show, when the construction machine performed straight-line work on the test site at 0.4 m/s with an initial deviation of 30 cm, after the lateral deviation first returned to 0 the maximum lateral deviation was 6.5 cm, with a mean of 0.49 cm and a standard deviation of 3.05 cm. The results show that, even with a large initial deviation, continuously updating the BeiDou positioning data and the visual image information makes the simulated system trajectory converge to the actual position of the construction machine. As the tracking time lengthens, the simulated positioning accuracy becomes increasingly stable, and the assisted-navigation performance fully meets the requirements.
When the speed is 0.8 m/s, the assisted-navigation tracking results on the test site are shown in Fig. 4; the corresponding table of results is not reproduced here.

As Fig. 4 and that table show, when the construction machine performed straight-line work on the test site at 0.8 m/s with an initial deviation of 30 cm, after the lateral deviation first returned to 0 the maximum lateral deviation was 13.7 cm, with a mean of 0.82 cm and a standard deviation of 5.98 cm. Under the large initial deviation the pose of the construction machine is adjusted so that it travels along the expected trajectory. The results show that, even with a large initial deviation, continuously updating the BeiDou positioning data and the visual image information makes the simulated system trajectory converge to the actual position of the construction machine. As the tracking time lengthens, the simulated positioning accuracy becomes increasingly stable, and the assisted-navigation performance fully meets the design requirements.
When the speed is 1.2 m/s, the assisted-navigation tracking results on the test site are shown in Fig. 5; the corresponding table of results is not reproduced here.

As Fig. 5 and that table show, when the construction machine performed straight-line work on the test site at 1.2 m/s with an initial deviation of 30 cm, after the lateral deviation first returned to 0 the maximum lateral deviation was 19.2 cm, with a mean of 1.40 cm and a standard deviation of 7.06 cm. The results basically satisfy the assisted-navigation design requirements, though small but noticeable fluctuations appeared during assisted navigation; subsequent wheel-mark analysis attributed them to braking and steering of the construction machine while travelling, and this deviation does not affect the application of the BeiDou-assisted visual positioning technology in the simulation system.
The field test results show that when the construction machine travelled at 0.4 m/s, 0.8 m/s, and 1.2 m/s, the maximum lateral deviations were 6.5 cm, 13.7 cm, and 19.2 cm, the mean deviations 0.49 cm, 0.87 cm, and 1.40 cm, and the standard deviations 3.05 cm, 5.98 cm, and 7.06 cm, respectively. The positioning accuracy is higher than that of relying on BeiDou navigation alone and satisfies the positioning-accuracy requirement of simulation-system assisted navigation in power-work environments.
Step S2. By performing image analysis on the moving target at that position, obtain the feature information of the moving target and construct a three-dimensional model of the moving target.

Real-scene 3D model scanning, point-cloud fusion, and refined modelling technologies are used to model various maintenance vehicles and workers, restoring their real structures and textures. Maintenance-vehicle models are named by model number and, together with the worker models, form a refined dynamic-target model library.

For a maintenance vehicle, its model code is recognized through image recognition technology and used as the feature information.

Specifically, the steps are as follows (a sketch of the projection-analysis step follows the list):

A. Collect vehicle information dynamically from the real-time video stream of the surveillance video.

B. Preprocess the collected maintenance-vehicle model-code images: noise filtering, contrast enhancement, image scaling, and so on.

C. Detect the plate from its texture, colour, and shape features using algorithms such as projection analysis, connected-domain analysis, and machine learning. The projection-analysis method exploits the fact that the plate characters alternate with the background more often than other image regions, locating the maintenance-vehicle model code through projection analysis of the image in the horizontal and vertical directions.

Connected-domain analysis exploits the fact that each character of the model code is a connected domain and that these domains share consistent structure and colour, locating the model code by detecting and merging them.

D. After the model-code region is extracted, it is further segmented character by character.

E. Normalize the grayscale images of the segmented characters, extract their features, match them through machine learning or against character-database templates, and finally take the result with the highest matching score as the recognition result.
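A minimal sketch of the projection-analysis step on a binarized image, using only NumPy; the transition and width thresholds are illustrative assumptions.

```python
import numpy as np

def locate_code_rows(binary, min_transitions=8):
    """Projection analysis: rows inside the code region alternate between
    foreground and background more often than other rows."""
    transitions = np.abs(np.diff(binary.astype(np.int8), axis=1)).sum(axis=1)
    rows = np.where(transitions >= min_transitions)[0]
    return (int(rows.min()), int(rows.max()) + 1) if rows.size else None

def segment_characters(code_strip, min_width=2):
    """Split the code strip at columns whose vertical projection is empty."""
    col_sum = code_strip.sum(axis=0)
    chars, start = [], None
    for j, s in enumerate(col_sum):
        if s > 0 and start is None:
            start = j                             # a character begins
        elif s == 0 and start is not None:
            if j - start >= min_width:            # discard narrow slivers
                chars.append(code_strip[:, start:j])
            start = None
    if start is not None:
        chars.append(code_strip[:, start:])
    return chars
```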
For workers, appearance and geometric features representative of the individual are recognized through image recognition technology as the feature information.

Specifically, the steps are as follows (a matching sketch follows the list):

A. Establish a face-image database.

B. According to the face feature information, extract the face-image features from the face-image database.
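For illustration, a minimal sketch of matching an extracted face feature vector against the database by cosine similarity; the feature extractor is left abstract and the 128-dimensional feature size is an assumption.

```python
import numpy as np

def match_face(query, database, names, min_score=0.6):
    """Return the best-matching identity for a query feature vector.
    database: (N, 128) array of stored features; names: length-N list."""
    q = query / np.linalg.norm(query)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    scores = db @ q                       # cosine similarities
    best = int(np.argmax(scores))
    if scores[best] >= min_score:
        return names[best], float(scores[best])
    return None, 0.0
```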
Step S3. Refine the three-dimensional model of the moving target according to the feature information, obtain the high-precision three-dimensional model, and update the position of the high-precision three-dimensional model in the three-dimensional scene.

The refined dynamic-target model library contains various maintenance-vehicle models and worker models. A maintenance-vehicle model is retrieved from the database by the recognized model code; a worker model is retrieved by the recognized person's name and adjusted according to the height recorded in the worker's information.

By attaching a UWB or BeiDou positioning tag to the dynamic target and, on top of the moving-target positioning, applying a pipeline of algorithms including image acquisition, preprocessing, model-code localization, character segmentation, character recognition, and result output, fine reconstruction of the moving target in the three-dimensional scene is realized; real-time position information of the dynamic target is thus obtained and the position state of the dynamic-target model is updated in the three-dimensional scene in real time. Verification experiments show that the error between the reconstructed dynamic model and the actual size is less than 1%.
Embodiment 2

On the basis of Embodiment 1, this embodiment provides a high-precision spatial positioning system for an electric power work site, comprising:

a UWB-assisted video positioning module for obtaining the position of indoor electric-power-work moving targets, the module including an initialization unit for initializing environment images and IMU data, a visual/inertial combination unit for obtaining visual coordinates, and an ultra-wideband unit for obtaining UWB coordinates;

a BeiDou-assisted visual positioning module for obtaining the position of outdoor electric-power-work moving targets, the module including a dynamic region-of-interest module for reducing the influence of environmental factors on work boundary-line extraction, a multi-camera vision module for constructing multi-camera vision area coordinates, and a BeiDou positioning module for converting the multi-camera vision area coordinates into global positioning system coordinates.

The specific embodiments described above further explain the purpose, technical solutions, and beneficial effects of the present invention in detail. It should be understood that the above are only specific embodiments of the present invention and are not intended to limit its scope of protection; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included in its scope of protection.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211524412.6A CN115542362A (en) | 2022-12-01 | 2022-12-01 | High-precision space positioning method, system, equipment and medium for electric power operation site |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211524412.6A CN115542362A (en) | 2022-12-01 | 2022-12-01 | High-precision space positioning method, system, equipment and medium for electric power operation site |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115542362A true CN115542362A (en) | 2022-12-30 |
Family
ID=84722565
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211524412.6A Pending CN115542362A (en) | 2022-12-01 | 2022-12-01 | High-precision space positioning method, system, equipment and medium for electric power operation site |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115542362A (en) |
Patent Citations (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103077539A (en) * | 2013-01-23 | 2013-05-01 | 上海交通大学 | Moving-target tracking method for complex backgrounds and occlusion conditions |
CN104715252A (en) * | 2015-03-12 | 2015-06-17 | 电子科技大学 | License plate character segmentation method combining dynamic templates and pixel points |
CN107967473A (en) * | 2016-10-20 | 2018-04-27 | 南京万云信息技术有限公司 | Autonomous robot localization and navigation based on image-text recognition and semantics |
CN106647784A (en) * | 2016-11-15 | 2017-05-10 | 天津大学 | Miniaturized unmanned aerial vehicle positioning and navigation method based on the Beidou navigation system |
CN107133563A (en) * | 2017-03-17 | 2017-09-05 | 深圳市能信安科技股份有限公司 | Video analysis system and method for police applications |
CN109002744A (en) * | 2017-06-06 | 2018-12-14 | 中兴通讯股份有限公司 | Image recognition method, device and video monitoring equipment |
CN108012325A (en) * | 2017-10-30 | 2018-05-08 | 上海神添实业有限公司 | Navigation and positioning method based on UWB and binocular vision |
CN108549771A (en) * | 2018-04-13 | 2018-09-18 | 山东天星北斗信息科技有限公司 | Auxiliary construction system and method for excavators |
CN109489629A (en) * | 2018-12-07 | 2019-03-19 | 国网四川省电力公司电力科学研究院 | Safety monitoring method for power transmission line towers |
CN111476233A (en) * | 2020-03-12 | 2020-07-31 | 广州杰赛科技股份有限公司 | License plate number localization method and device |
CN111401364A (en) * | 2020-03-18 | 2020-07-10 | 深圳市市政设计研究院有限公司 | License plate localization algorithm combining color features and template matching |
CN111508006A (en) * | 2020-04-23 | 2020-08-07 | 南开大学 | Deep-learning-based method for simultaneous detection, recognition and tracking of moving targets |
CN112101343A (en) * | 2020-08-17 | 2020-12-18 | 广东工业大学 | License plate character segmentation and recognition method |
CN112465401A (en) * | 2020-12-17 | 2021-03-09 | 国网四川省电力公司电力科学研究院 | Electric power operation safety control system based on multi-dimensional information fusion and control method thereof |
CN112560745A (en) * | 2020-12-23 | 2021-03-26 | 南方电网电力科技股份有限公司 | Method and related device for identifying personnel at electric power operation sites |
CN113392839A (en) * | 2021-05-18 | 2021-09-14 | 浙江大华技术股份有限公司 | Method and device for recognizing non-motor-vehicle license plates, computer equipment and storage medium |
CN114092875A (en) * | 2021-11-01 | 2022-02-25 | 南方电网深圳数字电网研究院有限公司 | Method and device for work-site safety supervision based on machine learning |
CN114723824A (en) * | 2022-04-01 | 2022-07-08 | 浙江工业大学 | Indoor positioning method based on binocular camera and ultra-wideband fusion |
CN115326053A (en) * | 2022-08-18 | 2022-11-11 | 华南理工大学 | Multi-sensor fusion positioning method for mobile robots based on double-layer vision |
CN115412846A (en) * | 2022-08-31 | 2022-11-29 | 常熟理工学院 | Underground multi-scene identity detection and positioning method, system and storage medium |
Non-Patent Citations (9)
Title |
---|
刘德辉: "Research on UWB/GNSS Indoor and Outdoor Positioning Algorithms Aided by Binocular Visual Odometry", China Master's Theses Full-text Database (Information Science and Technology) * |
刘飞: "Research on High-Precision Seamless Positioning Models and Methods with Multi-Sensor Fusion" * |
周爱国 et al.: "Weighted Levenberg-Marquardt Algorithm for Locating Corresponding Object Points in Multi-Camera Vision", Laser & Optoelectronics Progress * |
张明军 et al.: "Design of a License Plate Recognition System Based on Machine Learning" * |
李鹏 et al.: "Improved Adaptive Robust Cubature Kalman Filtering for Multi-Source Indoor Positioning", Navigation Positioning and Timing * |
杨博: "Research on Key Technologies of Visual/Inertial/Ultra-Wideband Integrated Positioning Systems" * |
王璇: "Vision-Based Detection and Tracking of Transmission Lines in Power Systems", China Master's Theses Full-text Database (Engineering Science and Technology II) * |
申炳琦 et al.: "Indoor Positioning Algorithm Combining UWB and VIO for Mobile Robots", Journal of Computer Applications * |
郑顾平 et al.: "Applied Research on Machine-Learning-Based Multi-License-Plate Recognition Algorithms" * |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118102233A (en) * | 2024-04-22 | 2024-05-28 | 南方电网调峰调频发电有限公司 | Object positioning method and device for multiple scenarios of pumped-storage infrastructure projects |
Similar Documents
Publication | Title |
---|---|
CN111798475B (en) | Indoor environment 3D semantic map construction method based on point cloud deep learning |
CN108898676B (en) | Method and system for detecting collision and occlusion between virtual and real objects |
Saeedi et al. | Vision-based 3-D trajectory tracking for unknown environments |
CN109597087A (en) | 3D object detection method based on point cloud data |
Qian et al. | Robust visual-lidar simultaneous localization and mapping system for UAV |
CN115479598A (en) | Positioning and mapping method based on multi-sensor fusion and a tightly coupled system |
CN112833892B (en) | Semantic mapping method based on trajectory alignment |
CN104281148A (en) | Autonomous navigation method for mobile robots based on binocular stereo vision |
CN114708293A (en) | Robot motion estimation method based on tightly coupled deep-learning point-line features and an IMU |
CN116518984B (en) | Vehicle-road cooperative localization system and method for underground coal mine auxiliary transportation robots |
CN110260866A (en) | Robot localization and obstacle avoidance method based on vision sensors |
Yu et al. | Accurate and robust visual localization system in large-scale appearance-changing environments |
CN112541938A (en) | Pedestrian speed measurement method, system, medium and computing device |
Liu et al. | A multisensor fusion with automatic vision-LiDAR calibration based on factor graph joint optimization for SLAM |
CN118172399A (en) | Target ranging system based on self-supervised monocular depth estimation |
CN114049362A (en) | Transformer-based point cloud instance segmentation method |
Wang et al. | Target detection for construction machinery based on deep learning and multisource data fusion |
CN115542362A (en) | High-precision space positioning method, system, equipment and medium for electric power operation site |
CN111784798B (en) | Map generation method and device, electronic equipment and storage medium |
CN113256736A (en) | Multi-camera visual SLAM method based on observability optimization |
Song et al. | MF-LIO: integrating multi-feature LiDAR inertial odometry with FPFH loop closure in SLAM |
Yang et al. | UAV image target localization method based on outlier filter and frame buffer |
CN114323038A (en) | Outdoor positioning method fusing binocular vision and 2D laser radar |
CN114565669A (en) | Fusion positioning method for multiple field-end cameras |
CN117152199B (en) | Method, system, device and storage medium for estimating the motion vector of a dynamic target |
Legal Events
Code | Title | Description |
---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20221230 |