CN120411430A - Distribution line equipment control method and system based on VR and AR technology - Google Patents
Distribution line equipment control method and system based on VR and AR technology
- Publication number
- CN120411430A (application number CN202510469100.7A)
- Authority
- CN
- China
- Prior art keywords
- equipment
- model
- dimensional
- point cloud
- point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/757—Matching configurations of points or features
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- H02J13/10—
Abstract
The invention discloses a distribution line equipment control method and system based on VR and AR technology, in the technical field of AR and VR. Edge devices with visual perception capability are deployed on the distribution line site to collect data; information about the distribution equipment is extracted with a target detection algorithm, and the spatial position and pose of each target device are estimated. A three-dimensional virtual model is constructed from a sparse point cloud structure, the currently generated model is compared with a prestored historical model to identify difference regions, and local incremental updates are executed on those regions, realizing dynamic synchronization and synchronous presentation of the three-dimensional virtual model. Based on this scheme of automatic perception and adaptive modeling, only local reconstruction is needed when equipment changes, without affecting the overall model. A topological graph is constructed from spatial adjacency relations, providing a basis for subsequent simulation, and each device is bound to a pose matrix, ensuring that virtual labels, models and real equipment remain in one-to-one correspondence without drift.
Description
Technical Field
The invention relates to the technical field of AR and VR, in particular to a distribution line equipment control method and system based on VR and AR technology.
Background
With the intelligent development of the power system, distribution lines have become an important component of power grid operation, and the monitoring, control and maintenance of equipment operating states increasingly depend on digital and visual means. When virtual reality (VR) and augmented reality (AR) technologies are applied to equipment management and remote operation and maintenance of distribution lines, the real line is modeled in a virtual environment and information labels are superimposed on it, so that inspection personnel can achieve visual perception, remote collaboration and auxiliary decision making, improving working efficiency and safety;
existing VR- and AR-based distribution line equipment control systems generally rely on high-precision three-dimensional modeling of the field environment, and bind virtual content to actual equipment through image recognition and spatial positioning technology; in practical application, however, they face certain technical bottlenecks;
Distribution line equipment is widely distributed and structurally complex, and its state often changes under the influence of factors such as climate and terrain. Once the routing of the line or the form of the equipment is adjusted, the original virtual model becomes invalid and a professional modeling team must model and deploy it again; the maintenance cost is high, the update efficiency is low, and the continuity and effectiveness of the digital system are seriously affected;
the prior art cannot fundamentally solve the technical challenges of high model update cost and lack of virtual-real synchronization capability, so a control method that combines three-dimensional modeling update efficiency with field device identification accuracy is needed to improve the intelligent level and practical application value of VR- and AR-based distribution line equipment control.
To address the above-mentioned defects, the following technical scheme is proposed.
Disclosure of Invention
The invention aims to provide a distribution line equipment control method and system based on VR and AR technologies, so as to solve the problems in the background technology.
The invention provides a distribution line equipment control method based on VR and AR technology, which comprises the specific steps of deploying edge equipment with visual perception function on the distribution line site, wherein the edge equipment comprises a visible light camera, an infrared camera and a depth sensor and is used for collecting image and video data of the distribution line and equipment;
The edge equipment preprocesses the acquired image through a target detection algorithm, and extracts boundary information, geometric outline, structural feature points and spatial positions of the power distribution equipment;
The spatial position and pose of each target device are estimated by back-projecting from two-dimensional image coordinates to three-dimensional coordinates using the depth map and camera calibration parameters;
generating a sparse point cloud structure based on the depth map and the image sequence, constructing a local topological structure model by combining the connection relation between the devices, and carrying out semantic classification on the device types;
The edge equipment builds a three-dimensional virtual model of the distribution line and the auxiliary equipment thereof based on the collected data through image recognition, structure modeling and topology analysis;
performing rigid alignment on a currently generated three-dimensional virtual model and a prestored historical model through a point cloud registration algorithm based on SVD, comparing the currently generated three-dimensional virtual model with the prestored historical model, identifying a difference region, and performing local increment update on the difference region to realize dynamic synchronization of the three-dimensional virtual model;
And synchronously presenting the updated three-dimensional model, the equipment identification result and the space coordinates in a VR or AR interface to realize the control, inspection and maintenance operation of the distribution line equipment.
Preferably, the method for preprocessing the acquired image by adopting a target detection algorithm and extracting boundary information, geometric outline, structural feature points and spatial positions of the power distribution equipment comprises the following steps:
Target detection is performed on the acquired image at the edge device, and the power distribution equipment is identified with a YOLOv5-Nano network: the input is the acquired image and the target equipment types, and the output is the bounding rectangle, confidence and class label of each framed target device. In functional form the YOLOv5-Nano network is expressed as f_θ(I) → {(c_i, x_i, y_i, w_i, h_i, s_i)}, wherein f_θ(I) is the output of the YOLOv5-Nano network, I is the input image, c_i is the class of the i-th identified target device, (x_i, y_i) are the position coordinates of its bounding rectangle, (w_i, h_i) are the width and height of the rectangle, and s_i is the detection confidence;
Cutting the identified rectangular frame area, converting the rectangular frame area into a gray level image, obtaining the boundary outline of target equipment in the rectangular frame area by using a Canny edge detection method, and extracting structural feature points by combining a corner extraction algorithm.
Preferably, the method for obtaining the boundary profile of the target device in the rectangular frame area by using the Canny edge detection method comprises the following steps:
The gradients of the gray-scale image in the x and y directions are calculated with the Sobel operator, with the expressions G_x(x,y) = Σ_{i,j ∈ {-1,0,1}} K_x(i,j)·I(x+i, y+j) and G_y(x,y) = Σ_{i,j ∈ {-1,0,1}} K_y(i,j)·I(x+i, y+j), wherein G_x(x,y) is the gradient value of the gray-scale image at point (x,y) in the x direction, G_y(x,y) is the gradient value at point (x,y) in the y direction, I(x,y) is the pixel value of the cropped rectangular frame region at (x,y), K_x(i,j) is the Sobel convolution kernel coefficient in the x direction, K_y(i,j) is the Sobel convolution kernel coefficient in the y direction, and i,j ∈ {-1,0,1} are the neighborhood pixel index offsets, centered on the current pixel;
The gradient magnitude and direction are calculated from G_x(x,y) and G_y(x,y): the magnitude is G(x,y) = sqrt(G_x(x,y)² + G_y(x,y)²), wherein G(x,y) is the gradient magnitude at point (x,y), representing the edge intensity; the direction is θ(x,y) = arctan(G_y(x,y) / G_x(x,y)), wherein θ(x,y) is the gradient direction at point (x,y), representing the edge orientation;
The pixels that are local maxima along the gradient direction are preserved to suppress edge blurring (non-maximum suppression). Two edge thresholds T_h and T_l are set, with T_h greater than T_l: if G(x,y) is greater than T_h, the point (x,y) is labeled a strong edge pixel; if G(x,y) is greater than T_l and less than or equal to T_h, the point (x,y) is labeled a weak edge pixel;
for a weak edge pixel, when the weak edge pixel is communicated with a strong edge pixel, the weak edge pixel is reserved, otherwise, the weak edge pixel is removed;
A binary edge map is generated from the retained weak and strong edge pixels, edge contours are extracted, and each contour is packaged into a point set C_g = {(x_1,y_1), (x_2,y_2), ..., (x_n,y_n)}, wherein C_g is the point set of the g-th contour and n is the number of points on that contour.
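As an illustrative (non-limiting) sketch of the gradient and double-threshold steps described above, the following NumPy fragment computes the Sobel gradients G_x and G_y, the gradient magnitude, and the strong/weak edge labels. The thresholds `t_low`/`t_high` correspond to T_l/T_h; non-maximum suppression and hysteresis linking are omitted for brevity, and all function names are illustrative rather than part of the claimed method.

```python
import numpy as np

# Sobel convolution kernels K_x, K_y from the gradient expressions above
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
KY = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)

def sobel_gradients(gray):
    """Return G_x, G_y for an HxW grayscale image (borders left at zero)."""
    h, w = gray.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = gray[y - 1:y + 2, x - 1:x + 2]
            gx[y, x] = np.sum(KX * patch)
            gy[y, x] = np.sum(KY * patch)
    return gx, gy

def classify_edges(gray, t_low, t_high):
    """Label pixels by gradient magnitude: 2 = strong edge, 1 = weak edge, 0 = non-edge."""
    gx, gy = sobel_gradients(gray)
    mag = np.hypot(gx, gy)        # gradient magnitude G(x, y)
    labels = np.zeros_like(mag, dtype=int)
    labels[mag > t_low] = 1       # weak edge candidates (G > T_l)
    labels[mag > t_high] = 2      # strong edges (G > T_h)
    return labels
```

A vertical brightness step produces a column of strong-edge labels along the step, while flat regions stay unlabeled.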
Preferably, the method for extracting the structural feature points by combining the corner extraction algorithm comprises the following steps:
For each pixel point in the gray-scale image, the image gradients within a neighborhood window are calculated and a gradient covariance matrix M is constructed, with the expression M = Σ [ [I_x², I_x·I_y], [I_x·I_y, I_y²] ], wherein M is the gradient covariance matrix, I_x is the gradient of the gray-scale image in the horizontal direction, I_y is the gradient in the vertical direction, and Σ denotes summation over the pixels in the window. A corner response function R is used to judge corners, with the expression R = det(M) − k·(trace(M))², wherein R is the corner response value of the pixel, det(M) is the product of the eigenvalues λ_1 and λ_2 and represents the joint strength of gradient change, trace(M) is the sum of λ_1 and λ_2 and represents the overall change strength, and k is an empirical constant. A corner threshold Cp_th is set; when the corner response R is larger than Cp_th, the pixel is marked as a corner, and the corners are the structural feature points of the target device.
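The corner response R = det(M) − k·(trace(M))² can be sketched as follows. This is an illustrative NumPy implementation, with a simple central-difference gradient standing in for the Sobel operator; the function and parameter names (`harris_response`, `win`) are assumptions, not part of the disclosure.

```python
import numpy as np

def harris_response(gray, k=0.04, win=1):
    """Harris corner response R = det(M) - k * trace(M)**2 per pixel."""
    # central-difference gradients (simple stand-in for the Sobel operator)
    Ix = np.zeros_like(gray)
    Iy = np.zeros_like(gray)
    Ix[:, 1:-1] = (gray[:, 2:] - gray[:, :-2]) / 2.0
    Iy[1:-1, :] = (gray[2:, :] - gray[:-2, :]) / 2.0
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    h, w = gray.shape
    R = np.zeros((h, w))
    for y in range(win, h - win):
        for x in range(win, w - win):
            # sum the covariance-matrix entries over the neighborhood window
            sxx = Ixx[y - win:y + win + 1, x - win:x + win + 1].sum()
            syy = Iyy[y - win:y + win + 1, x - win:x + win + 1].sum()
            sxy = Ixy[y - win:y + win + 1, x - win:x + win + 1].sum()
            det_m = sxx * syy - sxy * sxy      # λ1 · λ2
            trace_m = sxx + syy                # λ1 + λ2
            R[y, x] = det_m - k * trace_m ** 2
    return R
```

On a bright square, the response is positive at the square's corner and negative along straight edges, which is exactly the discrimination the threshold Cp_th exploits.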
Preferably, the method for estimating the spatial position and pose of each target device by using the depth map and the camera calibration parameters and performing back projection from the two-dimensional image to three-dimensional coordinates comprises the following steps:
According to the camera imaging model, the two-dimensional pixel coordinates (u, v) and the corresponding depth d are converted into three-dimensional space coordinates using the camera intrinsic matrix K; the back-projection formula is [X, Y, Z]^T = d · K⁻¹ · [u, v, 1]^T, wherein X, Y and Z are the three-dimensional coordinates of the structural feature point;
the geometric center of the three-dimensional coordinates of the structural feature points is taken as the center position of the target device; a rotation matrix R and a translation vector t are estimated by a point cloud method, and the pose matrix is calculated from them as T = [ [R, t], [0, 1] ] ∈ SE(3), wherein SE(3) is the special Euclidean group of rigid-body transformations in three-dimensional space, i.e. the set of all rigid transformations the target device may undergo;
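A minimal sketch of the back-projection and pose-matrix assembly described above, assuming NumPy and a standard pinhole intrinsic matrix K; the helper names are illustrative.

```python
import numpy as np

def back_project(u, v, d, K):
    """Back-project pixel (u, v) with depth d to camera-frame 3D coordinates [X, Y, Z]."""
    uv1 = np.array([u, v, 1.0])
    return d * np.linalg.inv(K) @ uv1

def pose_matrix(R, t):
    """Assemble the 4x4 SE(3) pose matrix [[R, t], [0, 1]]."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T
```

With a focal length of 500 px and principal point (320, 240), the principal-point pixel at depth 2 m back-projects to (0, 0, 2), and a pixel 500 px to its right back-projects to (2, 0, 2).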
The depth map acquisition method comprises the following steps:
Two horizontally mounted cameras acquire images from left and right viewpoints, and the depth is inferred from the parallax of pixel points between the two views, with the expression d = (fle · ble) / dis, where d is the depth value, fle is the focal length of the camera, ble is the baseline length between the two cameras, and dis is the disparity of the corresponding point in the left and right view images; the gray value of each pixel in the depth map represents the depth from the camera to that point.
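The disparity-to-depth relation d = fle·ble/dis can be expressed directly; the sketch below guards against zero disparity by assigning infinite depth, a design choice not specified in the disclosure.

```python
import numpy as np

def disparity_to_depth(dis, fle, ble):
    """Depth d = fle * ble / dis per pixel; zero/negative disparity maps to infinity."""
    dis = np.asarray(dis, dtype=float)
    depth = np.full(dis.shape, np.inf)
    valid = dis > 0
    depth[valid] = fle * ble / dis[valid]
    return depth
```

For example, with a 700 px focal length and a 0.12 m baseline, a 42 px disparity corresponds to a 2 m depth.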
Preferably, a sparse point cloud structure is generated based on the depth map and the image sequence, a local topological structure model is constructed by combining the connection relation between devices, and the logic for carrying out semantic classification on the device types is as follows:
According to a continuously acquired image sequence, the structural feature points in each frame are matched, and the matched points are triangulated to generate an initial sparse point cloud P, expressed as P = Trian(p1, p2, K, T1, T2), wherein p1 and p2 are matched structural feature points in two frames, K is the camera intrinsic matrix, and T1 and T2 are the camera pose matrices corresponding to the two frames; multi-frame images and depth back-projection are fused into a sparse point cloud structure;
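As an illustrative sketch of the triangulation step Trian(p1, p2, K, T1, T2), the following linear (DLT) triangulation recovers one 3D point from a matched pixel pair given K and the two world-to-camera pose matrices. The DLT formulation is a common choice and is assumed here, since the disclosure does not fix a specific triangulation method.

```python
import numpy as np

def triangulate(p1, p2, K, T1, T2):
    """Linear (DLT) triangulation of one matched pixel pair.

    p1, p2 : (u, v) pixel coordinates in frames 1 and 2
    K      : 3x3 camera intrinsic matrix
    T1, T2 : 4x4 world-to-camera pose matrices of the two frames
    returns: 3D point in world coordinates
    """
    P1 = K @ T1[:3, :]    # 3x4 projection matrix of frame 1
    P2 = K @ T2[:3, :]
    # each pixel observation contributes two linear constraints on the point
    A = np.vstack([
        p1[0] * P1[2] - P1[0],
        p1[1] * P1[2] - P1[1],
        p2[0] * P2[2] - P2[0],
        p2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # dehomogenize
```

With a 1 m horizontal baseline, a point 4 m in front of the first camera projects to pixel columns 320 and 195 in the two views, and the triangulation recovers (0, 0, 4).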
Classifying densely distributed points in a sparse point cloud space into the same class by using a density-based spatial clustering algorithm, removing outliers, dividing the sparse point cloud into a plurality of subsets of equipment examples, wherein each equipment example corresponds to a sparse point cloud cluster, and the sparse point cloud clusters comprise geometric outline and structural feature point coordinates of the equipment examples;
Constructing Euclidean distance graphs G (V, E) among centers of the sparse point cloud clusters, wherein V represents a centroid of the sparse point cloud cluster corresponding to each equipment instance, E represents a spatial connection relation between two equipment instances, a directed graph topology is constructed according to a current flow direction of a power distribution network, and equipment connection types are marked, wherein the equipment connection types comprise wire connection and bracket connection;
semantic classification is carried out based on geometric features and structural features, the volume, the aspect ratio, the number and the distribution of structural feature points of each sparse point cloud cluster and the connection degree features of other equipment examples are extracted, and the equipment types of the sparse point cloud clusters are judged by using an SVM classification model.
Preferably, the method for classifying the densely distributed points in the space into the same class by using a spatial clustering algorithm based on density comprises the following steps:
The fused sparse point cloud is represented as a set Poc = {p1, p2, ..., pm}, wherein each point pr contains three-dimensional coordinate information (xf, yf, zf); after the point cloud is constructed, denoising and smoothing are performed to exclude isolated noise points;
The parameters of the density clustering algorithm are set according to the distribution density of the point cloud and the actual distance between devices, including a distance threshold dith (the neighborhood radius) and the minimum neighbor number minpts. For every point p_r ∈ Poc, the neighborhood point set N(p_r) = {p_y ∈ Poc | ‖p_r − p_y‖ ≤ dith} is calculated, wherein ‖p_r − p_y‖ is the Euclidean distance between p_r and p_y. If |N(p_r)| ≥ minpts, p_r is a core point, and a cluster is grown from the core point to all density-reachable neighbor points; if p_r does not belong to any cluster and does not have enough neighbors in its neighborhood, it is marked as a noise point;
The above procedure yields B point clusters {C1, C2, ..., CB}, each corresponding to a group of sparse point cloud clusters with geometric continuity and spatial proximity and representing one power distribution device or component in the scene;
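The density-based clustering described above can be sketched as a minimal DBSCAN over the sparse point cloud; this pure-NumPy version uses brute-force pairwise distances (adequate for sparse clouds of modest size) and is illustrative only.

```python
import numpy as np

def dbscan(points, dith, minpts):
    """Minimal DBSCAN: returns one label per point (-1 = noise, 0..B-1 = cluster id)."""
    n = len(points)
    # brute-force pairwise Euclidean distances
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    neighbours = [np.flatnonzero(dists[i] <= dith) for i in range(n)]
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbours[i]) < minpts:
            continue                      # already assigned, or not a core point
        # grow a new cluster from this core point by density reachability
        stack = [i]
        labels[i] = cluster
        while stack:
            j = stack.pop()
            if len(neighbours[j]) < minpts:
                continue                  # border point: keep its label, do not expand
            for q in neighbours[j]:
                if labels[q] == -1:
                    labels[q] = cluster
                    stack.append(q)
        cluster += 1
    return labels
```

Two tight groups of points separated by a large gap become two clusters, and an isolated point is labeled noise (-1), mirroring the outlier removal described above.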
the method for judging the equipment type of the clustering point cloud cluster by using the SVM classification model comprises the following steps:
Extracting the characteristics of the volume, the height-width ratio, the number and the distribution of structural characteristic points and the connection degree with other equipment from each sparse point cloud cluster to form a characteristic vector, training and predicting the equipment type by using a multi-type support vector machine model, collecting the sparse point cloud clusters of the power distribution equipment with known equipment labels, constructing a training sample set according to the characteristic vector, constructing a nonlinear classifier by adopting a radial basis function, determining a classification hyperplane by solving an optimization problem, calculating a classification score for the input unknown sparse point cloud cluster characteristics by using the trained model, and outputting the type.
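As a hedged sketch of the feature-vector construction feeding the SVM classifier, the fragment below derives a bounding-box volume, an aspect-ratio proxy, the structural-feature-point count and the topology connection degree for one cluster. The exact feature definitions in the disclosure may differ, and the SVM training itself (e.g. an RBF-kernel classifier) is not repeated here.

```python
import numpy as np

def cluster_features(points, n_corner_points, degree):
    """Feature vector for one sparse point-cloud cluster.

    points          : (m, 3) array of cluster point coordinates
    n_corner_points : number of structural feature points detected in the cluster
    degree          : connection degree of this device instance in the topology graph
    """
    extent = points.max(axis=0) - points.min(axis=0)   # axis-aligned bounding-box sizes
    extent = np.maximum(extent, 1e-9)                  # avoid division by zero for flat clusters
    volume = float(np.prod(extent))                    # bounding-box volume
    aspect = float(extent.max() / extent.min())        # height-width ratio proxy
    return np.array([volume, aspect, float(n_corner_points), float(degree)])
```

Such vectors, computed for clusters with known device labels, would form the training sample set for the multi-class support vector machine.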
Preferably, the point cloud registration algorithm based on SVD is used for carrying out rigid alignment on the currently generated three-dimensional virtual model and a prestored historical model, comparing the currently generated three-dimensional virtual model with the prestored historical model, identifying a difference region, and executing local increment update on the difference region, wherein the logic for realizing dynamic synchronization of the three-dimensional virtual model is as follows:
The built three-dimensional lightweight virtual model of the distribution line and its auxiliary equipment is marked as the current model, and the prestored three-dimensional lightweight virtual model is marked as the historical model. The current and historical models are rigidly aligned in three-dimensional space through a point cloud registration algorithm based on SVD (singular value decomposition), so that the two models are in the same reference coordinate system. The current model point set is calibrated as H = {h1, h2, ..., hd} and the historical model point set as Q = {q1, q2, ..., qd}, wherein h1, ..., hd are the three-dimensional coordinates of the points of the current model and q1, ..., qd are the three-dimensional coordinates of the points of the historical model. The optimal rotation matrix R and translation vector t are solved such that (R, t) = argmin_{R,t} Σ_e ‖R·he + t − qe‖², wherein he ∈ H is a point of the current model, qe ∈ Q is the corresponding point of the historical model, e ∈ {1, 2, ..., d} is the point index and d is a positive integer; applying R and t yields the registered current model Hre;
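The SVD-based rigid registration step corresponds to the classical Kabsch algorithm; a minimal sketch, assuming known point correspondences between H and Q:

```python
import numpy as np

def svd_register(H, Q):
    """Optimal rigid alignment (Kabsch): R, t minimizing sum ||R @ h + t - q||^2.

    H, Q : (d, 3) arrays of corresponding points (current and historical model)
    """
    h_bar = H.mean(axis=0)
    q_bar = Q.mean(axis=0)
    Hc, Qc = H - h_bar, Q - q_bar
    U, _, Vt = np.linalg.svd(Hc.T @ Qc)   # SVD of the 3x3 cross-covariance matrix
    # reflection guard: force det(R) = +1 so R is a proper rotation
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = q_bar - R @ h_bar
    return R, t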
Difference-degree calculation is performed between corresponding regions of the registered current model Hre and the historical model Q by point-to-point distance threshold comparison: a difference detection threshold Efth is set, and hree ∈ Hre denotes a point of the registered current model; for each point pair (hree, qe), if ‖hree − qe‖ > Efth is satisfied, the region is judged to contain a structural difference, and the set of all points meeting this condition is marked as the difference region Dare;
If a structure in the historical model cannot find a similar structure in the current model, it is marked as invalid and removed from the historical model; if a newly detected sparse point cloud cluster in the current model has no match in the historical model or no record in the category identification results, its structural information is written into the historical model as a newly added entity; if a structural part has a position deviation or size change, a parameter-coverage update of that part of the model is performed, reconstructing only the local mesh rather than the whole model;
And storing the locally updated model as a new historical model version, and pushing the update content to the AR client through the edge computing node so as to keep the three-dimensional virtual environment and the actual equipment state synchronous.
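The point-to-point difference detection can be sketched as a nearest-neighbour distance test; the brute-force NumPy version below is illustrative (a KD-tree would typically replace it for large models):

```python
import numpy as np

def difference_region(Hre, Q, efth):
    """Indices of registered-model points farther than efth from the historical model.

    Hre : (n, 3) registered current-model points
    Q   : (m, 3) historical-model points
    """
    # nearest-neighbour distance from each current point to the historical model
    d = np.linalg.norm(Hre[:, None, :] - Q[None, :, :], axis=2).min(axis=1)
    return np.flatnonzero(d > efth)
```

Points flagged by this test form the difference region Dare on which the local incremental update operates.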
Preferably, the updated three-dimensional model, the equipment identification result and the space coordinate are synchronously presented in a VR or AR interface, and the method for realizing the control, inspection and maintenance operation of the distribution line equipment comprises the following steps:
the three-dimensional model coordinates are mapped into the world coordinate system of the AR device or VR scene based on each device's pose matrix: using the pose matrix of each device instance, Ama = [ [R, t], [0, 1] ], wherein R is the rotation matrix of the device instance and t is its translation vector, a three-dimensional device model vertex Xlco is mapped to the global coordinate Xwor by Xwor = Ama × Xlco; all device models undergo unified spatial alignment to complete the virtual-real fusion foundation;
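Mapping model vertices through the pose matrix Ama can be sketched with homogeneous coordinates; the names are illustrative:

```python
import numpy as np

def to_world(Ama, Xlco):
    """Map local model vertices (n, 3) to world coordinates via the 4x4 pose matrix Ama."""
    n = Xlco.shape[0]
    homo = np.hstack([Xlco, np.ones((n, 1))])   # homogeneous coordinates [x, y, z, 1]
    return (Ama @ homo.T).T[:, :3]              # drop the homogeneous component
```

A pure translation pose simply shifts every vertex, which is the degenerate case of the general rigid transform applied during virtual-real alignment.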
Based on the device identification result output by the SVM classifier, combining the spatial position and the topological relation, adding semantic information for each device model instance, wherein the semantic information comprises a device type label, function marking information and a risk state, and generating a multi-mode marking element to be bound to a corresponding model node;
In an AR scene, an SLAM mechanism is adopted to continuously track the view angles of equipment and users, so that accurate superposition of a virtual model and a real scene is realized, and in a VR scene, a complete topological structure view is loaded through a virtual space, so that roaming inspection, simulation exercise and remote control in a virtual power distribution environment are realized.
The distribution line equipment control system based on VR and AR technologies comprises an edge vision acquisition and processing module, a self-adaptive model generation and topology analysis module, a three-dimensional model dynamic synchronization module and an interactive presentation module;
The edge vision acquisition and processing module is used for deploying edge equipment with a vision perception function on the distribution line site, wherein the edge equipment comprises a visible light camera, an infrared camera and a depth sensor and is used for acquiring image and video data of the distribution line and the equipment;
The edge equipment preprocesses the acquired image through a target detection algorithm, and extracts boundary information, geometric outline, structural feature points and spatial positions of the power distribution equipment;
The spatial position and pose of each target device are estimated by back-projecting from two-dimensional image coordinates to three-dimensional coordinates using the depth map and camera calibration parameters;
the self-adaptive model generation and topology analysis module is used for generating a sparse point cloud structure based on the depth map and the image sequence, constructing a local topology structure model by combining the connection relation between devices, and carrying out semantic classification on the device types;
The edge equipment builds a three-dimensional virtual model of the distribution line and the auxiliary equipment thereof based on the collected data through image recognition, structure modeling and topology analysis;
The three-dimensional model dynamic synchronization module is used for carrying out rigid alignment on the currently generated three-dimensional virtual model and a prestored historical model through a point cloud registration algorithm based on SVD, comparing the currently generated three-dimensional virtual model with the prestored historical model, identifying a difference region, and carrying out local increment update on the difference region to realize dynamic synchronization of the three-dimensional virtual model;
And the interactive presentation module is used for synchronously presenting the updated three-dimensional model, the equipment identification result and the space coordinates in the VR or AR interface, so as to realize the control, inspection and maintenance operation of the distribution line equipment.
In the technical scheme, the invention has the technical effects and advantages that:
According to the application, through the scheme of automatic perception and adaptive modeling, edge vision acquisition, structural reconstruction and semantic modeling of distribution line equipment are realized; only local reconstruction is needed when equipment is updated, the overall model is not affected, and the maintenance cost is remarkably reduced. Equipment type identification is realized based on point cloud clustering and SVM classification, and a topological graph is constructed from spatial adjacency relations, providing a basis for subsequent simulation and inspection path planning. Each device is bound to a pose matrix, which facilitates accurate superposition in the AR interface and ensures one-to-one correspondence of virtual labels, models and real equipment without drift. Existing technical schemes lack spatial structural understanding and cannot simulate topological faults or linkage; here the topological graph is constructed by connecting edges between cluster centers, realizing spatially structured expression, supporting multi-type equipment identification and synchronized multi-terminal display, and improving the expansibility of the virtual model.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required for the embodiments are briefly described below; the drawings in the following description show only some embodiments of the present application, and a person of ordinary skill in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of the method of the present invention.
FIG. 2 is a block diagram of a system according to the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, the invention discloses a distribution line equipment control method based on VR and AR technologies, which specifically includes the steps of deploying an edge device with a visual perception function on a distribution line site, wherein the edge device comprises a visible light camera, an infrared camera and a depth sensor and is used for acquiring image and video data of the distribution line and the equipment;
The edge equipment preprocesses the acquired image through a target detection algorithm, and extracts boundary information, geometric outline, structural feature points and spatial positions of the power distribution equipment;
The spatial position and pose of each target device are estimated by using the depth map and camera calibration parameters and back-projecting from the two-dimensional image to three-dimensional coordinates;
generating a sparse point cloud structure based on the depth map and the image sequence, constructing a local topological structure model by combining the connection relation between the devices, and carrying out semantic classification on the device types;
The edge equipment builds a three-dimensional virtual model of the distribution line and the auxiliary equipment thereof based on the collected data through image recognition, structure modeling and topology analysis;
performing rigid alignment of the currently generated three-dimensional virtual model with a prestored historical model through an SVD-based point cloud registration algorithm, comparing the two models, identifying difference regions, and performing local incremental updates on the difference regions to realize dynamic synchronization of the three-dimensional virtual model;
And synchronously presenting the updated three-dimensional model, the equipment identification result and the space coordinates in a VR or AR interface to realize the control, inspection and maintenance operation of the distribution line equipment.
Disposing edge equipment integrated with a visible light camera, an infrared camera, a depth sensor and an embedded AI processing unit at key nodes of the distribution line, and acquiring multi-mode visual information of a scene of the distribution line;
The edge equipment is used for starting a synchronous acquisition task and acquiring multi-source data including high-definition images, depth maps, infrared maps and pose information;
Preprocessing the acquired image by adopting a target detection algorithm, and extracting boundary information, geometric outline, structural feature points and spatial positions of the power distribution equipment;
generating a sparse point cloud structure based on the depth map and the image sequence, constructing a local topological structure model by combining the connection relation between the devices, and carrying out semantic classification on the device types;
Packaging the processed image features, depth structures and semantic information into a unified intermediate modeling format, and performing redundant frame rejection and compression transmission to provide data support for subsequent virtual modeling and difference analysis;
the edge vision acquisition and processing method effectively reduces the network transmission pressure and the processing load of the central server, improves the equipment identification response speed and the model construction efficiency, and has good expandability and engineering application value;
It should be noted that pose information is a key parameter describing the position and the pose of the device or the camera in the three-dimensional space, so as to ensure that the acquired image and depth information have space references, thereby realizing the alignment and reconstruction of virtual and real space;
the method for preprocessing the acquired image by adopting the target detection algorithm and extracting boundary information, geometric outline, structural feature points and spatial positions of the power distribution equipment comprises the following steps:
Performing target detection on the acquired image at the edge equipment and identifying the power distribution equipment by a YOLOv5-Nano network, whose input is the acquired image and the target equipment types and whose output is a rectangular frame enclosing the target equipment, a confidence coefficient and a type label; the YOLOv5-Nano network is expressed in functional form as fθ(I) → {(ci, xi, yi, wi, hi, si)}, wherein fθ(I) is the output of the YOLOv5-Nano network, I is the input image, ci is the type of the identified target equipment, (xi, yi) are the position coordinates of the rectangular frame of the target equipment, (wi, hi) are the width and height of the rectangular frame, and si is the confidence of the identification;
Cutting the identified rectangular frame area, converting the rectangular frame area into a gray level image, obtaining the boundary outline of target equipment in the rectangular frame area by using a Canny edge detection method, and extracting structural feature points by combining a corner extraction algorithm;
The method for obtaining the boundary outline of the target device in the rectangular frame area by using the Canny edge detection method comprises the following steps:
calculating the gradients of the gray level image in the x and y directions by the Sobel operator, with calculation expressions Gx(x, y) = Σi,j Kx(i, j) · I(x + i, y + j) and Gy(x, y) = Σi,j Ky(i, j) · I(x + i, y + j), wherein Gx(x, y) is the gradient value at the point (x, y) of the gray scale image in the x direction, Gy(x, y) is the gradient value at (x, y) in the y direction, I(x, y) is the pixel value of the rectangular frame region at (x, y), Kx(i, j) is the Sobel convolution kernel coefficient in the x direction, Ky(i, j) is the Sobel convolution kernel coefficient in the y direction, and i, j ∈ {−1, 0, 1} are neighborhood pixel index offsets centered on the current pixel point;
Calculating the gradient amplitude and gradient direction from Gx(x, y) and Gy(x, y): the gradient amplitude is G(x, y) = sqrt(Gx(x, y)² + Gy(x, y)²), wherein G(x, y) is the gradient amplitude at the point (x, y) and represents edge intensity; the gradient direction is θ(x, y) = arctan(Gy(x, y) / Gx(x, y)), wherein θ(x, y) is the gradient direction at the point (x, y) and represents the edge direction;
Preserving the pixels that are local maxima along the gradient direction to suppress edge blurring; setting two edge thresholds Th and Tl with Th greater than Tl: if G(x, y) is greater than Th, the point (x, y) is labeled a strong edge pixel; if G(x, y) is greater than Tl and less than or equal to Th, the point (x, y) is labeled a weak edge pixel;
for a weak edge pixel, when the weak edge pixel is communicated with a strong edge pixel, the weak edge pixel is reserved, otherwise, the weak edge pixel is removed;
Generating a binary edge map from the retained weak and strong edge pixels, extracting edge contours, and packaging each contour into a contour point set Cg = {(x1, y1), (x2, y2), ..., (xn, yn)}, wherein Cg is the point set of the g-th edge contour and n is the number of points on the contour;
Obtaining the boundary contour of the target device in the rectangular frame area by the Canny edge detection method clearly delineates the equipment contour for subsequent feature point extraction, facilitates locating the geometric center and contour shape of the equipment, provides an edge projection basis for three-dimensional modeling, and connects seamlessly with subsequent techniques such as corner detection and spatial positioning.
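The Sobel gradient and double-threshold steps described above can be sketched in plain NumPy. The kernel coefficients below are the standard 3×3 Sobel values, and the example image and thresholds are illustrative assumptions rather than values from this application:

```python
import numpy as np

def sobel_gradients(gray):
    """Convolve with the 3x3 Sobel kernels to get Gx, Gy, then magnitude and direction."""
    Kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    Ky = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)
    H, W = gray.shape
    Gx, Gy = np.zeros((H, W)), np.zeros((H, W))
    padded = np.pad(gray.astype(float), 1, mode="edge")
    for i in range(H):
        for j in range(W):
            win = padded[i:i + 3, j:j + 3]   # 3x3 neighborhood of pixel (i, j)
            Gx[i, j] = np.sum(Kx * win)
            Gy[i, j] = np.sum(Ky * win)
    mag = np.hypot(Gx, Gy)                   # gradient amplitude G(x, y)
    theta = np.arctan2(Gy, Gx)               # gradient direction θ(x, y)
    return mag, theta

def double_threshold(mag, t_low, t_high):
    """Label pixels: 2 = strong edge (> t_high), 1 = weak edge (> t_low), 0 = suppressed."""
    labels = np.zeros_like(mag, dtype=int)
    labels[mag > t_low] = 1
    labels[mag > t_high] = 2
    return labels
```

A full Canny pipeline would additionally apply Gaussian smoothing, non-maximum suppression along θ(x, y), and the connectivity test that keeps only weak edges connected to strong ones.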
The method for extracting the structural feature points by combining the corner extraction algorithm comprises the following steps:
For each pixel point in the gray level image, calculating the image gradients within a neighborhood window and constructing the gradient covariance matrix M, with calculation expression M = Σ [Ix², Ix·Iy; Ix·Iy, Iy²], wherein M is the gradient covariance matrix of the gray scale image, Ix is the gradient of the gray scale image in the horizontal direction, Iy is the gradient in the vertical direction, and Σ denotes summation over the pixels of the neighborhood window; corner points are judged by the corner response function R, with calculation expression R = det(M) − k·(trace(M))², wherein R is the corner response value calculated at a pixel point, det(M) is the product of the eigenvalues λ1 and λ2 and represents the joint strength of gradient change, trace(M) is the sum of the eigenvalues λ1 and λ2 and represents the overall change strength, and k is an empirical constant; setting a corner threshold Cpth, and when the corner response R is greater than the corner threshold Cpth, marking the pixel point as a corner point, the corner points being the structural feature points of the target equipment;
It should be noted that the matrix M is a real symmetric matrix whose two eigenvalues λ1 and λ2 represent the intensity of gray-scale change of the image in its two principal directions; the larger an eigenvalue, the more severe the change in that direction;
Harris corner points are crossing points with severe gray level change in an image, such as corners, connecting points and boundary crossing points of equipment. In power distribution equipment, such feature points have structural uniqueness and stability and are suitable for spatial geometric modeling and matching. The corner extraction algorithm has a certain anti-interference capability against noise and is particularly suitable for image blurring, compression and similar problems existing in outdoor power distribution environments.
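As a sketch of the corner-response computation R = det(M) − k·(trace(M))², the following plain-NumPy routine accumulates the gradient covariance matrix over a 3×3 window; the window size, k = 0.04 and the test image are illustrative assumptions, not values fixed by this application:

```python
import numpy as np

def harris_response(gray, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2 per pixel,
    with M summed over a 3x3 neighborhood window."""
    gray = gray.astype(float)
    Iy, Ix = np.gradient(gray)               # vertical (axis 0) and horizontal (axis 1) gradients
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    H, W = gray.shape
    R = np.zeros((H, W))
    for a in range(1, H - 1):
        for b in range(1, W - 1):
            # sum the gradient products over the window to form M = [[Sxx, Sxy], [Sxy, Syy]]
            Sxx = Ixx[a - 1:a + 2, b - 1:b + 2].sum()
            Syy = Iyy[a - 1:a + 2, b - 1:b + 2].sum()
            Sxy = Ixy[a - 1:a + 2, b - 1:b + 2].sum()
            det_M = Sxx * Syy - Sxy * Sxy    # product of eigenvalues λ1·λ2
            trace_M = Sxx + Syy              # sum of eigenvalues λ1 + λ2
            R[a, b] = det_M - k * trace_M ** 2
    return R
```

Pixels where R exceeds the chosen corner threshold would then be kept as structural feature points.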
The method for estimating the spatial position and pose of each target device by using the depth map and camera calibration parameters through back projection from the two-dimensional image to three-dimensional coordinates comprises the following steps:
According to the camera imaging model, given the two-dimensional pixel coordinates (u, v), the corresponding depth d and the camera intrinsic matrix K, converting to three-dimensional space coordinates by the back projection formula [X, Y, Z]ᵀ = d · K⁻¹ · [u, v, 1]ᵀ, wherein X, Y and Z are the three-dimensional coordinates of the structural feature point;
taking the geometric center of the three-dimensional coordinates of the structural feature points as the center position of the target equipment, estimating the rotation matrix R and the translation vector t by a point cloud method, and computing the pose matrix from R and t by the formula T = [R, t; 0, 1] ∈ SE(3), wherein SE(3) is the special Euclidean group of rigid body transformations in three-dimensional space, representing the set of all rigid transformations of the target equipment;
Converting pixel positions in the two-dimensional image into spatial positions supports virtual-real alignment, augmented reality rendering and equipment state tracking tasks. Pose estimation represents the rotation state of an object in three-dimensional space through the rotation matrix R, distinguishing direction information such as pitch, yaw and roll of the target equipment and identifying states such as inclination, deformation and installation error; if the equipment position or pose changes, the back projection technique can quickly reconstruct the real position.
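The back projection from pixel coordinates and depth to camera-frame coordinates can be written directly from the formula [X, Y, Z]ᵀ = d · K⁻¹ · [u, v, 1]ᵀ; the intrinsic-matrix values used in the example are hypothetical:

```python
import numpy as np

def back_project(u, v, d, K):
    """Back-project pixel (u, v) with depth d to camera-frame 3D coordinates:
    [X, Y, Z]^T = d * K^{-1} @ [u, v, 1]^T."""
    return d * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
```

With a hypothetical intrinsic matrix (fx = fy = 500, principal point (320, 240)), a pixel 500 columns right of the principal point at depth 2 m back-projects to X = 2 m, Y = 0, Z = 2 m.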
The depth map acquisition method comprises the following steps:
Two horizontally arranged cameras acquire images from left and right viewing angles, and depth is inferred by calculating the parallax of pixel points between the left and right images, with calculation expression d = fle · ble / dis, wherein d is the depth value, fle is the focal length of the cameras, ble is the baseline length between the two cameras, and dis is the parallax of corresponding points in the left and right view images; the gray value of each pixel point in the depth map represents the depth from the camera to that point;
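The parallax formula d = fle · ble / dis translates to a one-line helper; the focal length, baseline and disparity in the example are hypothetical values chosen for illustration:

```python
def stereo_depth(fle, ble, dis):
    """Depth from the parallax formula in the text: d = fle * ble / dis.
    fle: focal length in pixels, ble: baseline between cameras, dis: disparity in pixels."""
    if dis <= 0:
        raise ValueError("disparity must be positive")
    return fle * ble / dis
```

For example, a hypothetical rig with an 800-pixel focal length and 0.25 m baseline maps a 10-pixel disparity to a depth of 20 m.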
generating a sparse point cloud structure based on the depth map and the image sequence, constructing a local topological structure model by combining the connection relation between devices, and carrying out semantic classification on the device types, wherein the logic comprises the following steps:
According to the continuously acquired image sequence, matching the structural feature points between frames and triangulating the matched points to generate an initial sparse point cloud P, expressed as P = Trian(p1, p2, K, T1, T2), wherein p1 and p2 are matched structural feature points in two frames of images, K is the camera intrinsic matrix, and T1 and T2 are the camera pose matrices of the two frames; multi-frame images and depth back projection are fused into the sparse point cloud structure;
Classifying densely distributed points in a sparse point cloud space into the same class by using a density-based spatial clustering algorithm, removing outliers, dividing the sparse point cloud into a plurality of subsets of equipment examples, wherein each equipment example corresponds to a sparse point cloud cluster, and the sparse point cloud clusters comprise geometric outline and structural feature point coordinates of the equipment examples;
Constructing Euclidean distance graphs G (V, E) among centers of the sparse point cloud clusters, wherein V represents a centroid of the sparse point cloud cluster corresponding to each equipment instance, E represents a spatial connection relation between two equipment instances, a directed graph topology is constructed according to a current flow direction of a power distribution network, and equipment connection types are marked, wherein the equipment connection types comprise wire connection and bracket connection;
Semantic classification is carried out based on geometric features and structural features, the volume, the aspect ratio, the number and the distribution of structural feature points and the connection degree features of other equipment examples of each sparse point cloud cluster are extracted, and the equipment types of the sparse point cloud clusters are judged by using an SVM classification model;
the method for classifying densely distributed points in space into the same class by using a density-based spatial clustering algorithm comprises the following steps:
Representing the fused sparse point cloud as a set Poc = {p1, p2, ..., pm}, wherein each point pr contains three-dimensional coordinate information (xf, yf, zf); after constructing the point cloud, performing denoising and smoothing to exclude isolated noise points;
Setting the parameters of the density clustering algorithm according to the distribution density of the point cloud and the actual distance between devices, the parameters comprising a distance threshold dith and a minimum neighbor number minpts; for every point pr ∈ Poc, calculating the neighborhood point set N(pr) = {py ∈ Poc | ‖pr − py‖ ≤ dith}, wherein N(pr) is the set of points of the sparse point cloud Poc within the dith-neighborhood of pr, py is any other point of Poc whose distance to pr is tested, ‖pr − py‖ is the Euclidean distance between pr and py, and the distance threshold dith is the neighborhood radius; if |N(pr)| ≥ minpts, marking pr as a core point and, starting from the core points, growing density-reachable clusters; if pr does not belong to any cluster and no core point exists in its neighborhood, marking it as a noise point;
B point cloud clusters {C1, C2, ..., CB} are formed by the above procedure, each cluster being a sparse point cloud cluster with geometric continuity and spatial proximity that represents one power distribution device or component in the scene.
The density-based spatial clustering algorithm is a point cloud clustering algorithm and is used for processing spatial data with different densities and noise in a power distribution scene.
The method has the advantages that the effective segmentation of the point clouds of the plurality of distribution equipment in the scene is realized through a spatial clustering algorithm based on density, the accuracy of local modeling and semantic recognition is improved, the interference caused by background noise points and false recognition points is effectively restrained, and an independent node basis is provided for the subsequent construction of equipment connection relations and topological structure models.
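The density clustering described above (distance threshold dith, minimum neighbor count minpts, core points, density-reachable clusters, noise points) can be sketched as a minimal DBSCAN-style routine in NumPy; the point coordinates and parameter values in the example are illustrative:

```python
import numpy as np

def density_cluster(points, dith, minpts):
    """Minimal DBSCAN-style clustering following the text: a point whose
    dith-neighborhood holds at least minpts points is a core point; clusters
    grow from core points by density reachability; unreached points stay -1 (noise)."""
    m = len(points)
    labels = np.full(m, -1)
    # pairwise Euclidean distances and neighborhood sets N(pr)
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    neighbours = [np.flatnonzero(dists[r] <= dith) for r in range(m)]
    cluster = 0
    for r in range(m):
        if labels[r] != -1 or len(neighbours[r]) < minpts:
            continue                          # already assigned, or not a core point
        labels[r] = cluster                   # start a new cluster from core point r
        queue = list(neighbours[r])
        while queue:                          # breadth-first density-reachable expansion
            y = queue.pop()
            if labels[y] == -1:
                labels[y] = cluster
                if len(neighbours[y]) >= minpts:
                    queue.extend(neighbours[y])
        cluster += 1
    return labels
```

With two tight groups of points and one far outlier, the routine returns two cluster labels and marks the outlier as noise (-1).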
The method for judging the equipment type of the clustering point cloud cluster by using the SVM classification model comprises the following steps:
Extracting the characteristics of the volume, the height-width ratio, the number and the distribution of structural characteristic points and the connection degree with other equipment from each sparse point cloud cluster to form a characteristic vector, training and predicting the equipment type by using a multi-type support vector machine model, collecting the sparse point cloud clusters of the power distribution equipment with known equipment labels, constructing a training sample set according to the characteristic vector, constructing a nonlinear classifier by adopting a radial basis function, determining a classification hyperplane by solving an optimization problem, calculating a classification score for the input unknown sparse point cloud cluster characteristics by using the trained model, and outputting the type.
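A minimal sketch of the multi-class RBF-kernel SVM step using scikit-learn (an assumed implementation choice); the feature vectors follow the text's description of volume, aspect ratio, corner-point count and connection degree, but all numeric values and class labels below are synthetic illustration data, not measurements from this application:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical 4-dim feature vector per point cloud cluster:
# [volume, aspect ratio, number of structural feature points, connection degree]
X_train = np.array([
    [0.2, 4.0, 12, 2],   # class 0: small, elongated clusters (e.g. insulator-like)
    [0.3, 3.5, 10, 2],
    [2.5, 1.2, 40, 4],   # class 1: large, compact clusters (e.g. transformer-like)
    [2.8, 1.1, 45, 3],
])
y_train = np.array([0, 0, 1, 1])          # integer device-type labels

clf = SVC(kernel="rbf", gamma="scale")    # radial basis function kernel, as in the text
clf.fit(X_train, y_train)
pred = clf.predict(np.array([[0.25, 3.8, 11, 2]]))
```

In practice the features would be scaled, and the trained model would output a type label for each unknown sparse point cloud cluster.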
Marking the built three-dimensional lightweight virtual model of the distribution line and its auxiliary equipment as the current model and the prestored three-dimensional lightweight virtual model as the historical model, and rigidly aligning the current model and the historical model in three-dimensional space through an SVD (singular value decomposition) based point cloud registration algorithm so that the two models lie in the same reference coordinate system; denoting the current model point set H = {h1, h2, ..., hd} and the historical model point set Q = {q1, q2, ..., qd}, wherein h1, h2, ..., hd are the three-dimensional coordinates of the points of the current model and q1, q2, ..., qd are the three-dimensional coordinates of the points of the historical model; solving the optimal rotation matrix R and translation vector t that minimize Σe ‖R·he + t − qe‖², wherein he ∈ H is a point of the current model, qe ∈ Q is the corresponding point of the historical model, e ∈ {1, 2, ..., d} is the index of the point and d is a positive integer, R is the rotation matrix and t is the translation vector, to obtain the registered current model Hre;
Performing difference calculation between corresponding regions of the registered current model Hre and the historical model Q by point-to-point distance threshold comparison: setting a difference detection threshold Efth and, for each point pair (hree, qe) with hree ∈ Hre a point of the registered current model, judging that a structural difference exists in the region if ‖hree − qe‖ > Efth is satisfied; the set of all points satisfying the condition is marked as the difference region Dare;
If a structure in the historical model cannot find a similar structure in the current model, marking it as invalid and removing it from the historical model; if a newly detected sparse point cloud cluster in the current model matches nothing in the historical model or has no record in the category identification result, writing its structural information into the historical model as a newly added entity; if a structural part has position deviation or size change, performing a parameter-coverage update of that part of the model, reconstructing only the local mesh rather than the whole model;
And storing the locally updated model as a new historical model version, and pushing the update content to the AR client through the edge computing node so as to keep the three-dimensional virtual environment and the actual equipment state synchronous.
Comparing the current model with the historical model, determining the difference regions, and locally updating only those regions enables quick sensing of power distribution equipment state changes such as addition, movement and removal, avoids rebuilding the whole model and improves processing efficiency; supporting distributed edge sensing and model partition synchronization reduces the communication load and provides more stable and accurate spatial information support for subsequent AR labeling and interaction.
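The SVD-based rigid registration step — solving for the R and t that best align corresponding point pairs — is the classical Kabsch procedure, which can be sketched as follows (point correspondences between the two models are assumed to be given):

```python
import numpy as np

def svd_rigid_align(H, Q):
    """Kabsch/SVD registration: find R, t minimising sum ||R @ h + t - q||^2
    over corresponding point pairs (rows of H and Q)."""
    cH, cQ = H.mean(axis=0), Q.mean(axis=0)
    S = (H - cH).T @ (Q - cQ)            # cross-covariance of the centred point sets
    U, _, Vt = np.linalg.svd(S)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cQ - R @ cH
    return R, t
```

Applying R and t to H then yields the registered current model, after which the point-to-point distance test against the threshold Efth identifies the difference regions.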
The updated three-dimensional model, the equipment identification result and the space coordinates are synchronously presented in a VR or AR interface, and the method for realizing the control, inspection and maintenance operation of the distribution line equipment comprises the following steps:
mapping three-dimensional model coordinates into the world coordinate system of the AR device or VR scene based on each device's pose matrix: using the pose matrix Ama = [R, t; 0, 1] of each device instance, wherein R is the rotation matrix of the device instance and t is its translation vector, a three-dimensional device model vertex Xlco is mapped to the global coordinate Xwor by the expression Xwor = Ama × Xlco, and all device models undergo unified spatial alignment to complete the virtual-real fusion foundation;
Based on the device identification result output by the SVM classifier, combining the spatial position and the topological relation, adding semantic information for each device model instance, wherein the semantic information comprises a device type label, function marking information and a risk state, and generating a multi-mode marking element to be bound to a corresponding model node;
In an AR scene, an SLAM mechanism is adopted to continuously track the view angles of equipment and users, so that accurate superposition of a virtual model and a real scene is realized, and in a VR scene, a complete topological structure view is loaded through a virtual space, so that roaming inspection, simulation exercise and remote control in a virtual power distribution environment are realized.
Note that Xlco is the three-dimensional coordinates of the device model in its own local coordinate system, that is, the coordinates of each vertex in the original modeling, and Xwor represents the three-dimensional position of the vertex in the global coordinate system.
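The mapping Xwor = Ama × Xlco with Ama = [R, t; 0, 1] amounts to one homogeneous matrix-vector product; a minimal sketch:

```python
import numpy as np

def to_world(R, t, x_local):
    """Apply the 4x4 pose matrix A = [[R, t], [0, 1]] to a local model vertex:
    Xwor = A @ [Xlco, 1], returning the 3D world coordinates."""
    A = np.eye(4)
    A[:3, :3] = R                              # rotation block
    A[:3, 3] = t                               # translation block
    xh = np.append(np.asarray(x_local, dtype=float), 1.0)   # homogeneous vertex
    return (A @ xh)[:3]
```

Each vertex of a device model is transformed this way so that virtual labels and models land at the device's true world position.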
Referring to fig. 2, the invention discloses a distribution line equipment control system based on VR and AR technology, which comprises an edge vision acquisition and processing module, an adaptive model generation and topology analysis module, a three-dimensional model dynamic synchronization module and an interactive presentation module;
The edge vision acquisition and processing module is used for deploying edge equipment with a vision perception function on the distribution line site, wherein the edge equipment comprises a visible light camera, an infrared camera and a depth sensor and is used for acquiring image and video data of the distribution line and the equipment;
The edge equipment preprocesses the acquired image through a target detection algorithm, and extracts boundary information, geometric outline, structural feature points and spatial positions of the power distribution equipment;
The spatial position and pose of each target device are estimated by using the depth map and camera calibration parameters and back-projecting from the two-dimensional image to three-dimensional coordinates;
the self-adaptive model generation and topology analysis module is used for generating a sparse point cloud structure based on the depth map and the image sequence, constructing a local topology structure model by combining the connection relation between devices, and carrying out semantic classification on the device types;
The edge equipment builds a three-dimensional virtual model of the distribution line and the auxiliary equipment thereof based on the collected data through image recognition, structure modeling and topology analysis;
The three-dimensional model dynamic synchronization module is used for carrying out rigid alignment on the currently generated three-dimensional virtual model and a prestored historical model through a point cloud registration algorithm based on SVD, comparing the currently generated three-dimensional virtual model with the prestored historical model, identifying a difference region, and carrying out local increment update on the difference region to realize dynamic synchronization of the three-dimensional virtual model;
And the interactive presentation module is used for synchronously presenting the updated three-dimensional model, the equipment identification result and the space coordinates in the VR or AR interface, so as to realize the control, inspection and maintenance operation of the distribution line equipment.
According to the application, the scheme of automatic perception and self-adaptive modeling realizes edge vision acquisition, structural reconstruction and semantic modeling of distribution line equipment. When equipment is updated, only partial reconstruction is needed and the whole model is not affected, which remarkably reduces maintenance cost. Equipment type identification is realized based on point cloud clustering and SVM classification, and a topological graph is constructed by combining spatial adjacency relations, providing a foundation for subsequent simulation and inspection path planning. Each piece of equipment is bound to a pose matrix, which facilitates accurate superposition in the AR interface and ensures one-to-one correspondence of virtual labels, models and real equipment without drift. Existing technical schemes lack spatial structural understanding and cannot simulate topology faults or linkage; here the topological graph is constructed by connecting edges through cluster centers, realizing spatially structured expression, supporting multi-type equipment identification and synchronized multi-terminal display, and improving the expansibility of the virtual model.
The above formulas are dimensionless and operate on numerical values; they are fitted by software simulation over a large amount of collected data to reflect the latest real situation, and the preset parameters in the formulas are set by those skilled in the art according to the actual situation.
It will be clear to those skilled in the art that, for convenience and brevity of description, reference may be made to the corresponding process in the foregoing method embodiment for the specific working process of the above-described system, which is not described herein again.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (10)
1. The distribution line equipment control method based on VR and AR technology is characterized by comprising the specific steps of deploying edge equipment with a visual perception function on a distribution line site, wherein the edge equipment comprises a visible light camera, an infrared camera and a depth sensor and is used for collecting images and video data of the distribution line and equipment;
The edge equipment preprocesses the acquired image through a target detection algorithm, and extracts boundary information, geometric outline, structural feature points and spatial positions of the power distribution equipment;
The spatial position and pose of each target device are estimated by using the depth map and camera calibration parameters and back-projecting from the two-dimensional image to three-dimensional coordinates;
generating a sparse point cloud structure based on the depth map and the image sequence, constructing a local topological structure model by combining the connection relation between the devices, and carrying out semantic classification on the device types;
The edge equipment builds a three-dimensional virtual model of the distribution line and the auxiliary equipment thereof based on the collected data through image recognition, structure modeling and topology analysis;
performing rigid alignment of the currently generated three-dimensional virtual model with a prestored historical model through an SVD-based point cloud registration algorithm, comparing the two models, identifying difference regions, and performing local incremental updates on the difference regions to realize dynamic synchronization of the three-dimensional virtual model;
And synchronously presenting the updated three-dimensional model, the equipment identification result and the space coordinates in a VR or AR interface to realize the control, inspection and maintenance operation of the distribution line equipment.
2. The method for controlling distribution line equipment based on VR and AR technology according to claim 1, wherein the method for preprocessing the collected image by using a target detection algorithm and extracting boundary information, geometric outline, structural feature points and spatial positions of the distribution equipment is as follows:
Performing target detection on the acquired image at the edge equipment and identifying the power distribution equipment by a YOLOv5-Nano network, whose input is the acquired image and the target equipment types and whose output is a rectangular frame enclosing the target equipment, a confidence coefficient and a type label; the YOLOv5-Nano network is expressed in functional form as fθ(I) → {(ci, xi, yi, wi, hi, si)}, wherein fθ(I) is the output of the YOLOv5-Nano network, I is the input image, ci is the type of the identified target equipment, (xi, yi) are the position coordinates of the rectangular frame of the target equipment, (wi, hi) are the width and height of the rectangular frame, and si is the confidence of the identification;
Cutting the identified rectangular frame area, converting the rectangular frame area into a gray level image, obtaining the boundary outline of target equipment in the rectangular frame area by using a Canny edge detection method, and extracting structural feature points by combining a corner extraction algorithm.
3. The VR and AR technology based distribution line equipment control method of claim 2, wherein the method for obtaining the boundary profile of the target equipment in the rectangular frame area using the Canny edge detection method comprises the steps of:
calculating the gradients of the gray level image in the x and y directions by the Sobel operator, with calculation expressions Gx(x, y) = Σi,j Kx(i, j) · I(x + i, y + j) and Gy(x, y) = Σi,j Ky(i, j) · I(x + i, y + j), wherein Gx(x, y) is the gradient value at the point (x, y) of the gray scale image in the x direction, Gy(x, y) is the gradient value at (x, y) in the y direction, I(x, y) is the pixel value of the rectangular frame region at (x, y), Kx(i, j) is the Sobel convolution kernel coefficient in the x direction, Ky(i, j) is the Sobel convolution kernel coefficient in the y direction, and i, j ∈ {−1, 0, 1} are neighborhood pixel index offsets centered on the current pixel point;
Calculating the gradient magnitude and direction from G_x(x,y) and G_y(x,y): the magnitude is G(x,y) = √(G_x(x,y)² + G_y(x,y)²), where G(x,y) is the gradient magnitude at point (x,y) and represents the edge intensity; the direction is θ(x,y) = arctan(G_y(x,y) / G_x(x,y)), where θ(x,y) is the gradient direction at point (x,y) and represents the edge orientation;
Keeping only the pixels that are local maxima along the gradient direction (non-maximum suppression) to avoid blurred edges; setting two edge thresholds T_h and T_l with T_h > T_l: if G(x,y) > T_h, the point (x,y) is labeled a strong edge pixel; if T_l < G(x,y) ≤ T_h, the point (x,y) is labeled a weak edge pixel;
for a weak edge pixel, retaining it when it is connected to a strong edge pixel, and removing it otherwise;
And generating a binary edge map from the retained weak and strong edge pixels, extracting edge contours, and packaging each contour into a point set C_g = {(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)}, where C_g is the g-th edge contour and n is the total number of points in the contour.
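The Sobel, double-threshold and hysteresis steps of claim 3 can be sketched as follows (a simplified numpy illustration, not the patented implementation: non-maximum suppression along the gradient direction is omitted for brevity, the 3x3 Sobel kernels are the standard ones, and the hysteresis linking is a single 8-neighborhood pass):

```python
import numpy as np

def filter3(img, k):
    """'Same'-size 3x3 correlation with zero padding."""
    p = np.pad(img.astype(float), 1)
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + h, j:j + w]
    return out

def canny_like(gray, t_l, t_h):
    """Sobel gradients, gradient magnitude G(x, y), double threshold T_l < T_h,
    and one-pass hysteresis linking of weak edges to strong edges."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # K_x
    ky = kx.T                                                    # K_y
    gx, gy = filter3(gray, kx), filter3(gray, ky)
    mag = np.hypot(gx, gy)                                       # G(x, y)
    strong = mag > t_h
    weak = (mag > t_l) & ~strong
    sp = np.pad(strong, 1)
    h, w = gray.shape
    near_strong = np.zeros_like(strong)
    for di in range(3):                     # 8-connected neighbourhood test
        for dj in range(3):
            near_strong |= sp[di:di + h, dj:dj + w]
    return strong | (weak & near_strong)

# A vertical step edge: the transition columns are detected as edges
gray = np.zeros((10, 10))
gray[:, 5:] = 100.0
edges = canny_like(gray, t_l=100.0, t_h=300.0)
```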
4. The control method for power distribution line equipment based on VR and AR technology according to claim 2, wherein the method for extracting structural feature points by combining with a corner extraction algorithm is as follows:
For each pixel in the grayscale image, computing the image gradients within a neighborhood window and constructing the gradient covariance matrix M = Σ [[I_x², I_x·I_y], [I_x·I_y, I_y²]], where M is the gradient covariance matrix of the grayscale image, I_x is the gradient of the grayscale image in the horizontal direction, I_y is the gradient in the vertical direction, and Σ denotes summation over the pixels in the window; judging corners with the corner response function Crf = det(M) − k·(trace(M))², where Crf is the corner response value at the pixel, det(M) is the product of the eigenvalues λ_1 and λ_2 of M, trace(M) is the sum of the eigenvalues λ_1 and λ_2, and k is an empirical constant; setting a corner threshold Cp_th, and when the corner response Crf is greater than Cp_th, marking the pixel as a corner point, the corner points being the structural feature points of the target device.
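A minimal sketch of this Harris-style corner response (assumptions: central-difference gradients, a 3x3 summation window, and k = 0.04, a typical empirical constant the claim does not fix):

```python
import numpy as np

def harris_response(gray, k=0.04, win=1):
    """Corner response Crf = det(M) - k*(trace(M))^2 per pixel; M is summed
    over a (2*win+1)^2 neighbourhood window."""
    gy, gx = np.gradient(gray.astype(float))   # axis 0 = rows, axis 1 = columns
    ixx, iyy, ixy = gx * gx, gy * gy, gx * gy
    h, w = gray.shape
    crf = np.zeros((h, w))
    for r in range(win, h - win):
        for c in range(win, w - win):
            sxx = ixx[r - win:r + win + 1, c - win:c + win + 1].sum()
            syy = iyy[r - win:r + win + 1, c - win:c + win + 1].sum()
            sxy = ixy[r - win:r + win + 1, c - win:c + win + 1].sum()
            det = sxx * syy - sxy * sxy        # det(M) = lambda1 * lambda2
            tr = sxx + syy                     # trace(M) = lambda1 + lambda2
            crf[r, c] = det - k * tr * tr
    return crf

# A bright square: its corners respond positively, straight edges negatively
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
crf = harris_response(img)
corners = crf > 0.5 * crf.max()        # Cp_th chosen as a fraction of the peak
```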
5. The control method for distribution line equipment based on VR and AR technology according to claim 1, wherein the method for estimating the spatial position and posture of each target equipment by using the depth map and the camera calibration parameters through two-dimensional image to three-dimensional coordinate back projection is as follows:
According to the camera imaging model, converting the two-dimensional pixel coordinates (u, v) with corresponding depth d into three-dimensional space coordinates using the camera intrinsic matrix K; the back-projection formula is [X, Y, Z]ᵀ = d·K⁻¹·[u, v, 1]ᵀ, where X, Y and Z are the three-dimensional coordinates of the structural feature point;
taking the geometric center of the three-dimensional coordinates of the structural feature points as the center position of the target device; estimating the rotation matrix R and translation vector t with a point cloud method, and computing the pose matrix T = [R t; 0 1] ∈ SE(3), where SE(3) is the special Euclidean group of rigid-body transformations in three-dimensional space, i.e. the set of all rigid transformations applicable to the target device;
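The back-projection formula above can be sketched directly (the intrinsic values fx = fy = 500 px and principal point (320, 240) are hypothetical, chosen only for illustration):

```python
import numpy as np

def back_project(u, v, d, K):
    """Back-project pixel (u, v) with depth d to camera-frame 3-D coordinates:
    [X, Y, Z]^T = d * K^{-1} * [u, v, 1]^T."""
    return d * (np.linalg.inv(K) @ np.array([u, v, 1.0]))

# Hypothetical intrinsics
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
X, Y, Z = back_project(420.0, 340.0, 2.0, K)   # X ~ 0.4, Y ~ 0.4, Z = 2.0
```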
The depth map acquisition method comprises the following steps:
Two horizontally mounted cameras acquire images from left and right viewpoints, and depth is inferred from the disparity of corresponding pixels in the two images: d = fle·ble / dis, where d is the depth value, fle is the camera focal length, ble is the baseline length between the two cameras, and dis is the disparity of the corresponding point in the left and right images; the gray value of each pixel in the depth map represents the depth from the camera to that point.
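The disparity-to-depth relation is a one-liner; the numbers below (700 px focal length, 0.12 m baseline, 21 px disparity) are illustrative assumptions:

```python
def stereo_depth(fle, ble, dis):
    """Stereo depth from disparity: d = fle * ble / dis.
    fle: focal length in pixels, ble: baseline in metres, dis: disparity in pixels."""
    return fle * ble / dis

d = stereo_depth(700.0, 0.12, 21.0)   # ~ 4.0 m
```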
6. The method for controlling distribution line equipment based on VR and AR technology according to claim 1, wherein generating a sparse point cloud structure based on a depth map and an image sequence, constructing a local topological structure model by combining a connection relationship between devices, and performing semantic classification on device types comprises:
matching structural feature points across the frames of a continuously acquired image sequence and triangulating the matched points to generate an initial sparse point cloud P, expressed as P = Trian(p1, p2, K, T1, T2), where p1 and p2 are matched structural feature points in two frames, K is the camera intrinsic matrix, and T1 and T2 are the camera pose matrices of the two frames; multi-frame images and depth back-projection are fused into the sparse point cloud structure;
Classifying densely distributed points in a sparse point cloud space into the same class by using a density-based spatial clustering algorithm, removing outliers, dividing the sparse point cloud into a plurality of subsets of equipment examples, wherein each equipment example corresponds to a sparse point cloud cluster, and the sparse point cloud clusters comprise geometric outline and structural feature point coordinates of the equipment examples;
Constructing Euclidean distance graphs G (V, E) among centers of the sparse point cloud clusters, wherein V represents a centroid of the sparse point cloud cluster corresponding to each equipment instance, E represents a spatial connection relation between two equipment instances, a directed graph topology is constructed according to a current flow direction of a power distribution network, and equipment connection types are marked, wherein the equipment connection types comprise wire connection and bracket connection;
semantic classification is carried out based on geometric features and structural features, the volume, the aspect ratio, the number and the distribution of structural feature points of each sparse point cloud cluster and the connection degree features of other equipment examples are extracted, and the equipment types of the sparse point cloud clusters are judged by using an SVM classification model.
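The two-view triangulation P = Trian(p1, p2, K, T1, T2) in the first step above can be sketched with the standard linear (DLT) method; this is an assumed realisation, since the claim names the operation but not the algorithm, and the camera poses and 3-D point below are hypothetical:

```python
import numpy as np

def triangulate(p1, p2, K, T1, T2):
    """Linear (DLT) triangulation of one matched point pair.
    T1, T2 are 3x4 world-to-camera pose matrices [R | t]."""
    P1, P2 = K @ T1, K @ T2                    # 3x4 projection matrices
    A = np.vstack([p1[0] * P1[2] - P1[0],
                   p1[1] * P1[2] - P1[1],
                   p2[0] * P2[2] - P2[0],
                   p2[1] * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                        # de-homogenise

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1.0]])
T1 = np.hstack([np.eye(3), np.zeros((3, 1))])                # camera 1 at origin
T2 = np.hstack([np.eye(3), np.array([[-0.2], [0], [0]])])    # camera 2 shifted 0.2 m
Xw = np.array([0.5, 0.3, 4.0])                               # hypothetical 3-D point
proj = lambda T, X: K @ T @ np.append(X, 1.0)
p1 = proj(T1, Xw); p1 = p1[:2] / p1[2]
p2 = proj(T2, Xw); p2 = p2[:2] / p2[2]
Xh = triangulate(p1, p2, K, T1, T2)            # recovers ~ [0.5, 0.3, 4.0]
```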
7. The method for controlling distribution line equipment based on VR and AR technology according to claim 6, wherein the method for classifying densely distributed points in space into the same class by using a spatial clustering algorithm based on density is as follows:
Representing the fused sparse point cloud as the set Poc = {p1, p2, ..., pm}, where each point pr contains three-dimensional coordinate information (xf, yf, zf); after the point cloud is constructed, denoising and smoothing are performed to exclude isolated noise points;
Setting the parameters of the density clustering algorithm, a distance threshold dith and a minimum neighbor count minpts, according to the distribution density of the point cloud and the actual distances between devices; for every point pr ∈ Poc, computing the neighborhood point set N(pr) = {py ∈ Poc | ||pr − py|| ≤ dith}, where N(pr) is the set of points of the sparse point cloud Poc within the threshold distance of pr, py is a neighborhood point of pr, ||pr − py|| is the Euclidean distance between pr and py, and the distance threshold dith is the neighborhood radius; if |N(pr)| ≥ minpts, marking pr as a core point and growing a cluster Cb from it; if pr belongs to no cluster and has too few neighbors in its neighborhood, marking it as a noise point;
The above procedure forms B clusters {C1, C2, ..., CB}, each corresponding to a sparse point cloud cluster with geometric continuity and spatial proximity and representing one power distribution device or component in the scene;
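The density-clustering procedure above can be sketched as a minimal DBSCAN-style routine (an illustrative implementation, not the patented one; counting the point itself among its neighbours is an implementation choice):

```python
import numpy as np

def dbscan(points, dith, minpts):
    """Minimal density-based clustering in the claim's notation: dith is the
    neighbourhood radius, minpts the minimum neighbour count. Returns one label
    per point: 0..B-1 for clusters C1..CB, -1 for noise."""
    m = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    nbrs = [np.flatnonzero(dist[r] <= dith) for r in range(m)]   # N(p_r)
    core = [len(n) >= minpts for n in nbrs]
    labels = np.full(m, -1)
    b = 0
    for r in range(m):
        if labels[r] != -1 or not core[r]:
            continue
        labels[r] = b
        stack = [r]                      # grow cluster Cb outward from the core point
        while stack:
            q = stack.pop()
            for y in nbrs[q]:
                if labels[y] == -1:
                    labels[y] = b        # border points get the label
                    if core[y]:
                        stack.append(y)  # only core points are expanded further
        b += 1
    return labels

# Two compact device clusters plus one isolated noise point
pts = np.array([[0, 0, 0], [0.1, 0, 0], [0, 0.1, 0],
                [5, 5, 5], [5.1, 5, 5], [5, 5.1, 5],
                [20, 20, 20]])
labels = dbscan(pts, dith=0.5, minpts=3)   # -> [0, 0, 0, 1, 1, 1, -1]
```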
the method for judging the equipment type of the clustering point cloud cluster by using the SVM classification model comprises the following steps:
Extracting from each sparse point cloud cluster the volume, aspect ratio, number and distribution of structural feature points, and degree of connection with other devices to form a feature vector; training a multi-class support vector machine model to predict the device type: sparse point cloud clusters of power distribution equipment with known device labels are collected, a training sample set is built from the feature vectors, a nonlinear classifier is constructed with a radial basis function kernel, and the classification hyperplane is determined by solving an optimization problem; for the features of an unknown input sparse point cloud cluster, the trained model computes classification scores and outputs the type.
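The feature-vector construction can be sketched as follows. The claim names the features but not their formulas, so the concrete definitions below (axis-aligned bounding-box volume, max/min extent ratio, std-based spread) are assumptions:

```python
import numpy as np

def cluster_features(cloud, degree):
    """Feature vector for one sparse point cloud cluster: bounding-box volume,
    aspect ratio, structural-feature-point count, point spread, and the
    connection degree of the device in the topology graph G(V, E)."""
    extent = cloud.max(axis=0) - cloud.min(axis=0)        # bounding-box sizes
    volume = float(np.prod(extent))
    aspect = float(extent.max() / max(float(extent.min()), 1e-9))
    spread = float(np.linalg.norm(cloud.std(axis=0)))     # distribution of points
    return np.array([volume, aspect, float(len(cloud)), spread, float(degree)])

# Unit-cube corner points as a toy cluster with two graph connections
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
fv = cluster_features(cube, degree=2)
```

A multi-class SVM with an RBF kernel (for example, scikit-learn's `SVC`) would then be trained on such vectors; that training step is omitted here.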
8. The VR and AR technology based distribution line equipment control method of claim 1, wherein the logic for implementing dynamic synchronization of the three-dimensional virtual model by rigidly aligning the three-dimensional virtual model currently generated with the pre-stored historical model by the SVD based point cloud registration algorithm, comparing the three-dimensional virtual model currently generated with the pre-stored historical model, identifying a difference region, and performing local incremental update on the difference region is as follows:
Denoting the constructed three-dimensional lightweight virtual model of the distribution line and its auxiliary equipment as the current model and the pre-stored three-dimensional lightweight virtual model as the historical model; rigidly aligning the current model and the historical model in three-dimensional space with an SVD (singular value decomposition) based point cloud registration algorithm, so that the two models share the same reference coordinate system; denoting the current model point set H = {h1, h2, ..., hd} and the historical model point set Q = {q1, q2, ..., qd}, where h1, h2, ..., hd are the three-dimensional coordinates of the points of the current model and q1, q2, ..., qd are the three-dimensional coordinates of the points of the historical model; solving for the optimal rotation matrix R and translation vector t that minimize Σ_e ||R·he + t − qe||², where he ∈ H is a point of the current model, qe ∈ Q is the corresponding point of the historical model, e ∈ {1, 2, ..., d} is the point index with d a positive integer, R is the rotation matrix and t is the translation vector, yielding the registered current model Hre;
Computing the degree of difference between corresponding regions of the registered current model Hre and the historical model Q by point-to-point distance threshold comparison with a difference detection threshold Efth; for each point pair (hree, qe) with hree ∈ Hre a point of the registered current model, if ||hree − qe|| > Efth, the region is judged to contain a structural difference; the set of all points satisfying this condition is marked as the difference region Dare; and
if a structure in the historical model has no similar structure in the current model, marking it invalid and removing it from the historical model; if a newly detected sparse point cloud cluster in the current model has no match in the historical model or no record in the category identification result, writing its structural information into the historical model as a new entity; and if a structural part shows a position offset or size change, performing a partial parameter-coverage update of the model, reconstructing only the local mesh rather than rebuilding the whole model;
And storing the locally updated model as a new historical model version, and pushing the update content to the AR client through the edge computing node so as to keep the three-dimensional virtual environment and the actual equipment state synchronous.
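The SVD-based alignment and difference-region steps of claim 8 can be sketched with the classic Kabsch solution (an illustrative realisation; the example pose, points and threshold are hypothetical):

```python
import numpy as np

def svd_align(H, Q):
    """SVD (Kabsch) rigid registration: the R, t minimising sum_e ||R h_e + t - q_e||^2."""
    hc, qc = H.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((H - hc).T @ (Q - qc))              # cross-covariance SVD
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    return R, qc - R @ hc

def difference_region(Hre, Q, efth):
    """Indices e with ||hre_e - q_e|| > Efth: the difference region Dare."""
    return np.flatnonzero(np.linalg.norm(Hre - Q, axis=1) > efth)

# Historical model Q; current model H is a rigidly moved copy of Q
Q = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)   # 90 deg about z
H = Q @ Rz.T + np.array([2.0, 0.0, 0.0])
R, t = svd_align(H, Q)
Hre = H @ R.T + t                      # registered current model, Hre ~= Q
# Simulate a local structural change and detect the difference region
Hch = Hre.copy()
Hch[4] += np.array([0.0, 0.0, 0.5])
dare = difference_region(Hch, Q, efth=0.3)   # -> [4]
```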
9. The method for controlling distribution line equipment based on VR and AR technology according to claim 8, wherein the method for synchronously presenting the updated three-dimensional model, the equipment identification result and the space coordinates in the VR or AR interface to realize the control, inspection and maintenance operations of the distribution line equipment is as follows:
mapping the three-dimensional model coordinates into the world coordinate system of the AR device or VR scene based on each device's pose matrix Ama = [R t; 0 1], where R is the rotation matrix of the device instance and t is its translation vector; a three-dimensional device model vertex Xlco is mapped to the global coordinate Xwor by Xwor = Ama × Xlco (in homogeneous coordinates), and all device models are spatially aligned in a unified way to complete the virtual-real fusion basis;
Based on the device identification result output by the SVM classifier, combining the spatial position and the topological relation, adding semantic information for each device model instance, wherein the semantic information comprises a device type label, function marking information and a risk state, and generating a multi-mode marking element to be bound to a corresponding model node;
In an AR scene, an SLAM mechanism is adopted to continuously track the view angles of equipment and users, so that accurate superposition of a virtual model and a real scene is realized, and in a VR scene, a complete topological structure view is loaded through a virtual space, so that roaming inspection, simulation exercise and remote control in a virtual power distribution environment are realized.
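The local-to-world mapping Xwor = Ama × Xlco of claim 9 amounts to one homogeneous transform per vertex; a small sketch (the device pose below is hypothetical):

```python
import numpy as np

def to_world(R, t, x_lco):
    """Map a model vertex Xlco to global coordinates: Xwor = Ama @ [Xlco; 1],
    with Ama = [[R, t], [0, 1]] the 4x4 pose matrix of the device instance."""
    Ama = np.eye(4)
    Ama[:3, :3] = R
    Ama[:3, 3] = t
    return (Ama @ np.append(x_lco, 1.0))[:3]

# Hypothetical device pose: rotated 90 deg about z, translated to (10, 5, 0)
R = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
t = np.array([10.0, 5.0, 0.0])
xw = to_world(R, t, np.array([1.0, 0.0, 0.0]))   # ~ (10, 6, 0)
```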
10. A distribution line equipment control system based on VR and AR technology, for implementing the distribution line equipment control method based on VR and AR technology according to any one of claims 1-9, comprising an edge vision acquisition and processing module, an adaptive model generation and topology analysis module, a three-dimensional model dynamic synchronization module and an interactive presentation module;
The edge vision acquisition and processing module is used for deploying edge equipment with a vision perception function on the distribution line site, wherein the edge equipment comprises a visible light camera, an infrared camera and a depth sensor and is used for acquiring image and video data of the distribution line and the equipment;
The edge equipment preprocesses the acquired image through a target detection algorithm, and extracts boundary information, geometric outline, structural feature points and spatial positions of the power distribution equipment;
The spatial position and posture of each target device are estimated using the depth map and camera calibration parameters through back projection from two-dimensional image to three-dimensional coordinates;
the self-adaptive model generation and topology analysis module is used for generating a sparse point cloud structure based on the depth map and the image sequence, constructing a local topology structure model by combining the connection relation between devices, and carrying out semantic classification on the device types;
The edge equipment builds a three-dimensional virtual model of the distribution line and the auxiliary equipment thereof based on the collected data through image recognition, structure modeling and topology analysis;
The three-dimensional model dynamic synchronization module is used for carrying out rigid alignment on the currently generated three-dimensional virtual model and a prestored historical model through a point cloud registration algorithm based on SVD, comparing the currently generated three-dimensional virtual model with the prestored historical model, identifying a difference region, and carrying out local increment update on the difference region to realize dynamic synchronization of the three-dimensional virtual model;
And the interactive presentation module is used for synchronously presenting the updated three-dimensional model, the equipment identification result and the space coordinates in the VR or AR interface, so as to realize the control, inspection and maintenance operation of the distribution line equipment.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202510469100.7A CN120411430A (en) | 2025-04-15 | 2025-04-15 | Distribution line equipment control method and system based on VR and AR technology |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202510469100.7A CN120411430A (en) | 2025-04-15 | 2025-04-15 | Distribution line equipment control method and system based on VR and AR technology |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN120411430A true CN120411430A (en) | 2025-08-01 |
Family
ID=96509578
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202510469100.7A Withdrawn CN120411430A (en) | 2025-04-15 | 2025-04-15 | Distribution line equipment control method and system based on VR and AR technology |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN120411430A (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN120747651A (en) * | 2025-09-02 | 2025-10-03 | 山东登远信息科技有限公司 | Intelligent inspection method and system based on AR equipment |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | WW01 | Invention patent application withdrawn after publication | Application publication date: 20250801 |