CN118037956A - Method and system for generating three-dimensional virtual reality in fixed space
- Publication number
- CN118037956A (application CN202410175008.5A)
- Authority
- CN
- China
- Prior art keywords
- dimensional
- moving object
- coordinate system
- image
- imaging device
- Prior art date
- Legal status
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
Abstract
The invention provides a method and a system for generating a three-dimensional virtual reality in a fixed space. After a plurality of two-dimensional imaging devices acquire two-dimensional background images within their respective fields of view and three-dimensional modeling data within each field of view are obtained, a corresponding virtual two-dimensional image can be generated for each two-dimensional background image by using a pre-established mapping relation, the optical parameters of the existing two-dimensional imaging devices and the obtained three-dimensional modeling data, and an overall three-dimensional model of the fixed space is established. When the two-dimensional imaging devices recognize a moving object, the three-dimensional virtual reality of the fixed space can be generated and displayed based on the overall three-dimensional model, the virtual two-dimensional image corresponding to the moving object and the recognition result. This alleviates the problem that a video monitoring system in a fixed scene cannot well realize trajectory tracking of moving objects and dynamic presentation of the panoramic picture of the overall scene.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a method and a system for generating a three-dimensional virtual reality in a fixed space.
Background
Intelligent video monitoring is applied on a large scale in urban management and intelligent transportation. A camera is generally adopted to collect continuous images of a monitored area, and the collected images are either observed by the naked eye or intelligently identified by artificial intelligence algorithms, so as to realize detection, tracking and abnormal-condition alarming for targets such as personnel and vehicles.
For some fixed-space scenes (such as stations, communities, malls, supermarkets, buildings and roads), a monitoring room is generally provided, and a monitoring screen is installed in it. The screen can be switched to any single camera so that monitoring personnel can observe the scene by eye, and the video files of all cameras can be stored in a monitoring database for later retrieval and playback. The main defect of manual monitoring based on two-dimensional video is as follows: because a camera images the scene in perspective through a focal point and obeys the pinhole imaging principle, the true three-dimensional dimensions and spatial positions of objects are lost in the resulting two-dimensional image. Near objects appear large and far objects appear small, so the size and distance of an object cannot be judged or measured from a single two-dimensional image, and objects cannot be located in space.
For some fixed-space scenes, the prior art also applies various intelligent image recognition algorithms to the two-dimensional images acquired by cameras (for example, face recognition and gait recognition) in order to discover abnormal conditions and track suspicious persons. These algorithms can capture a specific image or a short video clip by marking the video stream, and store it and raise alarms independently, which prevents abnormal conditions from going unhandled because of lapses in manual monitoring, reduces the manual resources required for monitoring, and improves monitoring efficiency. The main problems of intelligent recognition based on two-dimensional images at present are as follows: because the environments and objects faced by different cameras differ greatly, engineering applications commonly suffer from low recognition accuracy, the large number of field images that must be collected for recognition, heavy manual labeling workload, the high equipment cost required by intelligent image recognition algorithms, and the poor interpretability of those algorithms.
At present, another problem of video monitoring systems that use two-dimensional cameras as the main visual perception hardware in a fixed scene is as follows: different cameras cannot be linked and no integral video presentation is possible. The video file and picture of each camera are independent, and each occupies its own monitoring picture for viewing. When a moving target (such as a vehicle, a person or another object) passes from one camera into another, the pictures acquired by the two cameras cannot be well associated, so trajectory tracking of the moving target and dynamic presentation of the panoramic picture of the whole scene cannot be well realized.
To solve the problem of presenting the panoramic picture of the overall scene in video monitoring over a fixed space, digital twin or virtual real-scene technology has also found some applications. The basic mode is to model the fixed space three-dimensionally in advance, then associate the three-dimensional model of the fixed space with the cameras at fixed spatial positions, and realize panoramic presentation of the overall scene by combining the three-dimensional model with the two-dimensional images. The main problems with this approach are: the three-dimensional modeling is a one-off overall modeling, so the established three-dimensional model cannot be dynamically updated as the actual scene changes, and changes of moving targets cannot be presented in it. The panoramic picture can only provide the spatial structure of the overall scene and cannot be associated with what actually happens; the specific real-time state of a moving target can only be obtained through video monitoring and intelligent recognition by individual cameras.
Disclosure of Invention
In view of the above, the present invention aims to provide a method and a system for generating a three-dimensional virtual reality in a fixed space, so as to alleviate the problem that a video monitoring system in a fixed scene cannot well realize track tracking of a moving target and dynamic presentation of a panoramic picture of the whole scene.
In a first aspect, an embodiment of the present invention provides a method for generating a three-dimensional virtual reality in a fixed space, including the following. When no moving object exists in the fixed space, a two-dimensional background image within the field of view corresponding to each two-dimensional imaging device is acquired through a plurality of two-dimensional imaging devices arranged at different positions in the fixed space, and three-dimensional modeling data within the field of view corresponding to each two-dimensional imaging device are obtained at the same time; the obtained three-dimensional modeling data comprise the labeling categories and three-dimensional points corresponding to all fixed objects in the fixed space, each three-dimensional point has a first coordinate in the three-dimensional model space coordinate system corresponding to the fixed space, and the fixed objects comprise fixed objects and/or fixed backgrounds. A corresponding virtual two-dimensional image is generated for each two-dimensional background image based on the obtained three-dimensional modeling data, the optical parameters of the two-dimensional imaging devices arranged in the fixed space, and a pre-established first mapping relation between the three-dimensional coordinate system of the field of view corresponding to each two-dimensional imaging device and the coordinate system of the corresponding three-dimensional modeling data; each virtual two-dimensional image has second coordinates in the corresponding three-dimensional coordinate system. Each two-dimensional imaging device performs moving object recognition on the two-dimensional real-time images it acquires, so as to obtain the classification and edge contour of each moving object when a moving object is recognized; the moving object comprises a person and/or a moving object. Three-dimensional modeling is carried out based on the obtained three-dimensional modeling data to obtain an overall three-dimensional model of the fixed space, a three-dimensional virtual reality of the fixed space is generated based on the overall three-dimensional model, the virtual two-dimensional images corresponding to the recognized moving objects, and the obtained classifications and edge contours, and the three-dimensional virtual reality is then displayed.
In a second aspect, an embodiment of the present invention further provides a system for generating a three-dimensional virtual reality in a fixed space, including the following modules. An acquisition module is used for acquiring, when no moving object exists in the fixed space, a two-dimensional background image within the field of view corresponding to each two-dimensional imaging device through a plurality of two-dimensional imaging devices arranged at different positions in the fixed space, and simultaneously obtaining three-dimensional modeling data within the field of view corresponding to each two-dimensional imaging device; the obtained three-dimensional modeling data comprise the labeling categories and three-dimensional points corresponding to all fixed objects in the fixed space, each three-dimensional point has a first coordinate in the three-dimensional model space coordinate system corresponding to the fixed space, and the fixed objects comprise fixed objects and/or fixed backgrounds. A generation module is used for generating a corresponding virtual two-dimensional image for each two-dimensional background image based on the obtained three-dimensional modeling data, the optical parameters of the two-dimensional imaging devices arranged in the fixed space, and a pre-established first mapping relation between the three-dimensional coordinate system of the field of view corresponding to each two-dimensional imaging device and the coordinate system of the corresponding three-dimensional modeling data; each virtual two-dimensional image has second coordinates in the corresponding three-dimensional coordinate system. An identification module is used for each two-dimensional imaging device to perform moving object recognition on the two-dimensional real-time images it acquires, so as to obtain the classification and edge contour of each moving object when a moving object is recognized; the moving object comprises a person and/or a moving object. A real-scene module is used for carrying out three-dimensional modeling based on the obtained three-dimensional modeling data to obtain an overall three-dimensional model of the fixed space, generating a three-dimensional virtual reality of the fixed space based on the overall three-dimensional model, the virtual two-dimensional images corresponding to the recognized moving objects, and the obtained classifications and edge contours, and then displaying the three-dimensional virtual reality.
According to the method and system for generating a three-dimensional virtual reality in a fixed space provided by the embodiments of the present invention, after the two-dimensional background image within the field of view of each two-dimensional imaging device is acquired through the two-dimensional imaging devices and the three-dimensional modeling data within each field of view are obtained, a corresponding virtual two-dimensional image can be generated for each two-dimensional background image through the pre-established mapping relations, the optical parameters of the existing two-dimensional imaging devices and the obtained three-dimensional modeling data, and an overall three-dimensional model of the fixed space is established. When the two-dimensional imaging devices recognize a moving object, the three-dimensional virtual reality of the fixed space can be generated and displayed based on the overall three-dimensional model, the virtual two-dimensional image corresponding to the moving object and the recognition result (including classification and edge contour), so that the specific real-time state of the moving object in the fixed space can be displayed through the three-dimensional virtual reality, and trajectory tracking of moving objects in the fixed space and dynamic presentation of the panoramic picture of the overall scene can be better realized.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below show some embodiments of the present invention, and that a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic flow chart of a method for generating three-dimensional virtual reality in a fixed space according to an embodiment of the invention;
FIG. 2 is an exemplary diagram of a fixed space, respective coordinate systems of a camera and a three-dimensional imaging device, and respective field of view ranges of the camera and the three-dimensional imaging device in an embodiment of the present invention;
FIG. 3 is a flowchart illustrating a camera calibration and three-dimensional point cloud mapping process according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a three-dimensional virtual reality generating system in a fixed space according to an embodiment of the invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described in conjunction with the embodiments, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
At present, a video monitoring system that uses two-dimensional cameras as the main visual perception hardware in a fixed scene has the following problems: different cameras cannot be linked and no integral video presentation is possible. The video file and picture of each camera are independent, and each occupies its own monitoring picture for viewing. When a moving target (such as a vehicle, a person or another object) passes from one camera into another, the pictures acquired by the two cameras cannot be well associated, so trajectory tracking of the moving target and dynamic presentation of the panoramic picture of the whole scene cannot be well realized.
Based on this, the method and system for generating a three-dimensional virtual reality in a fixed space provided by the embodiments of the present invention can alleviate the above problems in the related art.
To facilitate understanding of the present embodiment, the method for generating a three-dimensional virtual reality in a fixed space disclosed in the embodiment of the present invention is first described in detail. Referring to the flow chart of the method shown in fig. 1, the method may include the following steps:
step S102, when no moving object exists in the fixed space, acquiring two-dimensional background images in the field of view range corresponding to each two-dimensional imaging device through a plurality of two-dimensional imaging devices arranged at different positions in the fixed space, and simultaneously acquiring three-dimensional modeling data in the field of view range corresponding to each two-dimensional imaging device.
The obtained three-dimensional modeling data comprise labeling categories and three-dimensional points corresponding to all the fixed objects in the fixed space, each three-dimensional point is provided with a first coordinate under a three-dimensional model space coordinate system corresponding to the fixed space, and the fixed objects can comprise fixed objects and/or fixed backgrounds.
For each two-dimensional imaging device, since its field of view is known, existing three-dimensional modeling data located within that field of view can be acquired. The existing three-dimensional modeling data are pre-labeled with the classification of each fixed object (such as a fixed object or a fixed background) and are characterized by a plurality of three-dimensional points, and the position coordinate of each three-dimensional point in the fixed space (i.e., its first coordinate in the three-dimensional model space coordinate system corresponding to the fixed space) is known.
Step S104, based on the obtained three-dimensional modeling data, the optical parameters of the two-dimensional imaging devices which are arranged in the fixed space, and a pre-established first mapping relation between the three-dimensional coordinate system of the field range corresponding to each two-dimensional imaging device and the coordinate system where the corresponding three-dimensional modeling data is located, a corresponding virtual two-dimensional image is generated for each two-dimensional background image.
Wherein each virtual two-dimensional image has a respective second coordinate in a respective three-dimensional coordinate system.
The first mapping relationship may be specifically represented by a coordinate transformation matrix or in another form, which is not limited here.
For each two-dimensional imaging device, a coordinate system (namely, the three-dimensional coordinate system of the field of view corresponding to the device) can be established in advance for its acquisition range (i.e., its field of view), so that each pixel position in a two-dimensional image acquired by the device can be represented by coordinate values in that coordinate system.
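By way of illustration only (the patent does not prescribe an implementation), a coordinate transformation matrix of this kind can be represented and applied as in the following minimal NumPy sketch; all names are hypothetical:

```python
import numpy as np

def make_transform(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Assemble a 4x4 homogeneous matrix from a 3x3 rotation R and a translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def apply_transform(T: np.ndarray, points: np.ndarray) -> np.ndarray:
    """Map an (N, 3) array of points from one three-dimensional coordinate
    system into another (e.g. from the modeling-data coordinate system into
    a camera's three-dimensional coordinate system)."""
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    return (T @ homogeneous.T).T[:, :3]
```

The second mapping relation described below can be represented in the same way.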
And step S106, each two-dimensional imaging device performs moving object recognition on the two-dimensional real-time images it acquires, so as to obtain the classification and edge contour of each moving object when a moving object is recognized.
Wherein the moving object may comprise a person and/or a moving object.
Each two-dimensional imaging device acquires two-dimensional images within its own field of view and identifies moving objects (such as moving vehicles and moving personnel) in the acquired two-dimensional images through a pre-deployed automatic recognition algorithm. When a two-dimensional imaging device recognizes that an acquired two-dimensional image contains moving objects, it can calculate the classification and edge contour of each identified moving object through an object detection algorithm deployed on the device. The automatic recognition algorithm may be a commonly used target detection algorithm, an edge detection algorithm, or the like, which is not limited here.
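As a hedged illustration of such an algorithm (the patent leaves the choice open), the following OpenCV sketch extracts edge contours of moving regions by background subtraction; the classification step would be supplied by a separate trained detector and is only noted in a comment:

```python
import cv2
import numpy as np

# Background model learned from frames of the fixed scene.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

def moving_object_contours(frame):
    """Return the edge contours of moving regions in one two-dimensional
    real-time image; small blobs are discarded as noise."""
    mask = subtractor.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # A deployment would additionally classify each contour
    # (person / vehicle / ...) with a trained detector.
    return [c for c in contours if cv2.contourArea(c) > 500]
```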
Step S108, three-dimensional modeling is carried out based on the obtained three-dimensional modeling data to obtain an overall three-dimensional model of the fixed space, and a three-dimensional virtual reality of the fixed space is generated based on the overall three-dimensional model, the virtual two-dimensional image corresponding to the identified moving object, the obtained classification and the obtained edge profile, and then the three-dimensional virtual reality is displayed.
The obtained three-dimensional modeling data can be stitched into overall stitched data, and an overall three-dimensional model of the fixed space is then established from the stitched data by three-dimensional modeling, to represent the overall background information of the fixed space; the overall three-dimensional model still contains the labeling categories and three-dimensional points corresponding to all fixed objects, and the first coordinate of each three-dimensional point in the three-dimensional model space coordinate system corresponding to the fixed space. After the overall three-dimensional model is obtained, a three-dimensional virtual reality of the fixed space is generated using the overall three-dimensional model, the virtual two-dimensional images generated from all two-dimensional images containing moving objects, and the classifications and edge contours of all recognized moving objects, so as to represent and present the real-time state (including classification and position) of each moving object in the whole fixed space. Trajectory tracking of moving objects in the fixed space is thereby realized, and the panoramic picture of the overall scene can be presented dynamically by synchronizing the real-time positions of the moving objects.
According to the method for generating a three-dimensional virtual reality in a fixed space provided by the embodiment of the present invention, after the two-dimensional background image within the field of view of each two-dimensional imaging device is acquired through the two-dimensional imaging devices and the three-dimensional modeling data within each field of view are obtained, a corresponding virtual two-dimensional image can be generated for each two-dimensional background image through the pre-established mapping relations, the optical parameters of the existing two-dimensional imaging devices and the obtained three-dimensional modeling data, and an overall three-dimensional model of the fixed space is established. When the two-dimensional imaging devices recognize a moving object, the three-dimensional virtual reality of the fixed space can be generated and displayed based on the overall three-dimensional model, the virtual two-dimensional image corresponding to the moving object and the recognition result (including classification and edge contour), so that the specific real-time state of the moving object in the fixed space can be displayed through the three-dimensional virtual reality, and trajectory tracking of moving objects in the fixed space and dynamic presentation of the panoramic picture of the overall scene can be better realized.
As a possible implementation manner, the generating the three-dimensional virtual reality of the fixed space based on the overall three-dimensional model and the virtual two-dimensional image corresponding to the identified moving object and the obtained classification and edge contour in the step S108 may include:
And step 1, acquiring a second coordinate of each moving object in the corresponding virtual two-dimensional image based on the obtained edge profile.
For each moving object, once its edge contour is obtained, the pixel positions it occupies in the corresponding two-dimensional real-time image are known. Because the two-dimensional real-time image containing the moving object and the corresponding two-dimensional background image are acquired by the same two-dimensional imaging device at the same position, and the virtual two-dimensional image corresponding to that background image is generated by mapping three-dimensional points based on the optical parameters of the same device (that is, each virtual two-dimensional image is exactly the same size as the corresponding two-dimensional image, and their pixel positions correspond one to one), a one-to-one correspondence exists between the pixel positions of the two-dimensional real-time image and those of the virtual two-dimensional image. With the pixel positions occupied by the moving object in the two-dimensional real-time image known, this correspondence directly yields the pixel positions of the moving object in the virtual two-dimensional image, and hence its second coordinates.
And 2, determining the three-dimensional size of each moving object in the fixed space based on the second coordinates corresponding to the identified moving objects.
For each moving object, after its second coordinates in the corresponding three-dimensional coordinate system are obtained, the three-dimensional size of the moving object in the fixed space can be calculated through coordinate operations (such as the coordinate distance formula).
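A minimal sketch of steps 1 and 2, under the assumption (not stated in the patent) that the virtual two-dimensional image is stored as an H×W×3 buffer holding one second coordinate per pixel, and that the moving object is given as a boolean mask filled from its edge contour:

```python
import numpy as np

def object_size_3d(coord_buffer: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """coord_buffer: (H, W, 3) per-pixel second coordinates of the virtual
    two-dimensional image; mask: (H, W) boolean occupancy of the moving object.
    Returns the (dx, dy, dz) extent of the object in the fixed space."""
    pts = coord_buffer[mask]                  # second coordinates under the object
    return pts.max(axis=0) - pts.min(axis=0)  # axis-aligned three-dimensional size
```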
And step 3, converting the second coordinate corresponding to each moving object into the corresponding first coordinate based on a second mapping relation between the space coordinate system of the pre-established three-dimensional model and each three-dimensional coordinate system.
The second mapping relationship may be specifically represented by a coordinate transformation matrix or other forms, which is not limited thereto, similar to the first mapping relationship.
The mapping relation between the three-dimensional model space coordinate system corresponding to the fixed space and the three-dimensional coordinate system of the field of view of each two-dimensional imaging device can be established in advance. Thus, when the second coordinates of each moving object in the corresponding three-dimensional coordinate system are obtained, each second coordinate can be converted into the corresponding first coordinate, so that the generation and calculation of the three-dimensional virtual reality are conveniently carried out in the same coordinate system.
And 4, generating a three-dimensional virtual reality based on the whole three-dimensional model, all obtained classifications and the first coordinates and the three-dimensional size corresponding to the identified moving object.
Illustratively, the operation manner of the step 4 may include:
Step 41, constructing a corresponding three-dimensional object model for each moving object according to the corresponding three-dimensional size.
For each moving object, a corresponding three-dimensional object model can be constructed in a three-dimensional modeling mode based on the three-dimensional size of the moving object to characterize the size of the space range occupied by the moving object in a fixed space.
Step 42, generating a three-dimensional virtual reality based on the whole three-dimensional model, the first coordinates corresponding to the identified moving object and the three-dimensional object model.
Each three-dimensional object model can be added to the overall three-dimensional model at the overall three-dimensional position given by the first coordinate of the corresponding moving object, and the classification of that moving object is bound to each such position, so that the three-dimensional virtual reality is obtained.
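One possible, purely illustrative representation of the result (the patent does not specify a data structure): the overall model plus, per moving object, its classification, first coordinate and the box model built from its three-dimensional size:

```python
from dataclasses import dataclass, field

@dataclass
class ObjectModel:
    classification: str       # e.g. "person" or "vehicle"
    first_coordinate: tuple   # position in the three-dimensional model space coordinate system
    size: tuple               # (dx, dy, dz) extent used to build the box model

@dataclass
class VirtualReality:
    overall_model: object                               # stitched background model of the fixed space
    moving_objects: list = field(default_factory=list)  # ObjectModel instances, updated per frame
```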
As a possible implementation manner, acquiring the three-dimensional modeling data within the field of view of each two-dimensional imaging device in step S102 may include the following. Three-dimensional data are acquired by a three-dimensional data acquisition device at the target position corresponding to the position of each two-dimensional imaging device, so as to obtain the three-dimensional data within the field of view of that device; the acquisition range of the three-dimensional data acquisition device at each target position overlaps the field of view of the corresponding two-dimensional imaging device. The labeling category and three-dimensional points corresponding to each fixed object are obtained through the three-dimensional data acquisition device, and the three-dimensional data acquired at each target position that lie within the field of view of the corresponding two-dimensional imaging device, together with the obtained labeling categories and three-dimensional points within that field of view, are taken as the three-dimensional modeling data within the field of view of that device.
In practical application, there may be one or more three-dimensional data acquisition devices, which is not limited here. A three-dimensional data acquisition device can acquire three-dimensional data of the visible area within a certain field of view, obtaining the three-dimensional space coordinate values of three-dimensional points.

The three-dimensional data acquisition device can be a three-dimensional matrix camera comprising a plurality of image sensors arranged in a matrix on the same plane, or a device composed of a laser radar and an image sensor, selected according to actual requirements without limitation. In view of equipment cost, the three-dimensional matrix camera may preferably comprise four image sensors arranged in a matrix on the same plane.
As a possible implementation manner, the step S104 (generating, for each two-dimensional background image, a corresponding virtual two-dimensional image based on the obtained three-dimensional modeling data and the optical parameters of the two-dimensional imaging devices already set in the fixed space, and the pre-established first mapping relationship between the three-dimensional coordinate system of the field of view range corresponding to each two-dimensional imaging device and the coordinate system where the corresponding three-dimensional modeling data is located) may include:
and step A, based on a first mapping relation established in advance, converting the three-dimensional data corresponding to each target position into a local three-dimensional point cloud under a corresponding three-dimensional coordinate system.
Wherein each local three-dimensional point cloud has a respective second coordinate in a respective three-dimensional coordinate system.
For each two-dimensional imaging device, after the corresponding three-dimensional data are acquired by the three-dimensional data acquisition device at the target position near the device, the acquired three-dimensional data can be mapped, using the corresponding pre-established mapping relation, into a local three-dimensional point cloud in the three-dimensional coordinate system corresponding to the device, so that the three-dimensional point information is represented by the local three-dimensional point cloud.
And step B, for each local three-dimensional point cloud, converting the local three-dimensional point cloud into a corresponding virtual two-dimensional image based on the optical parameters of the two-dimensional imaging device to which it belongs.
Continuing the previous example: for each two-dimensional imaging device, after the local three-dimensional point cloud in its three-dimensional coordinate system is obtained, the optical parameters of the device can be used to map the point cloud onto the device's image sensor by simulated optical imaging, forming a virtual two-dimensional image. Because this virtual image is obtained by converting the three-dimensional data acquired by the three-dimensional data acquisition device through the coordinate systems and then simulating optical imaging with the corresponding two-dimensional imaging device, its basic position points correspond, in theory, exactly to those of the images acquired by that device.
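A hedged sketch of such simulated optical imaging under a simple pinhole model (K denotes the intrinsic matrix obtained from calibration; lens distortion is ignored here for brevity): each projected point writes its three-dimensional coordinates into the pixel it hits, and a depth test keeps the nearest surface:

```python
import numpy as np

def render_virtual_image(points, K, height, width):
    """Project (N, 3) points given in the camera's three-dimensional coordinate
    system through intrinsics K (3x3) onto an H x W buffer that stores one
    three-dimensional coordinate per pixel."""
    buffer = np.full((height, width, 3), np.nan)   # per-pixel 3-D coordinates
    depth = np.full((height, width), np.inf)
    for X, Y, Z in points:
        if Z <= 0:                                  # behind the image plane
            continue
        u = int(round(K[0, 0] * X / Z + K[0, 2]))
        v = int(round(K[1, 1] * Y / Z + K[1, 2]))
        if 0 <= u < width and 0 <= v < height and Z < depth[v, u]:
            depth[v, u] = Z                         # nearest point wins
            buffer[v, u] = (X, Y, Z)
    return buffer
```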
In practical application, in order to ensure the accuracy of the first mapping relation, the distance between the position of each two-dimensional imaging device and its corresponding target position can be set smaller than a preset distance, and the overlap rate between the acquisition range of the three-dimensional imaging device at each target position and the acquisition range of the corresponding two-dimensional imaging device can be set larger than a preset overlap rate. Based on this, the method for generating a three-dimensional virtual reality in a fixed space may further include:
and a1, when no moving object exists in the overlapping range corresponding to the position of each two-dimensional imaging device and a calibration device with a plurality of calibration points is arranged there, three-dimensional data are acquired by the three-dimensional imaging device at the target position corresponding to that position to obtain three-dimensional calibration data within the overlapping range, and a two-dimensional calibration image within the overlapping range is acquired by the two-dimensional imaging device arranged at that position.
The three-dimensional calibration data may include a third coordinate of each calibration point in a coordinate system where the corresponding three-dimensional calibration data is located, and the two-dimensional calibration image includes a second coordinate of each calibration point in the corresponding three-dimensional coordinate system.
And b1, establishing a mapping relation between each three-dimensional coordinate system and a coordinate system where the corresponding three-dimensional calibration data are located as a corresponding first mapping relation based on the obtained three-dimensional calibration data and the obtained two-dimensional calibration image.
In the practical application process, in order to ensure the accuracy of the first mapping relation establishment, the method for generating the three-dimensional virtual reality in the fixed space may further include:
And a2, acquiring a two-dimensional marker image in the overlapping range corresponding to each position by the two-dimensional imaging equipment arranged at the position when a plurality of fixed markers are arranged in the overlapping range corresponding to each position.
Wherein each fixed marker has a first coordinate in a three-dimensional model space coordinate system, and the two-dimensional marker image includes a second coordinate in a corresponding three-dimensional coordinate system.
And b2, establishing a mapping relation between the three-dimensional model space coordinate system and each three-dimensional coordinate system as a corresponding second mapping relation based on the obtained two-dimensional marker image and the obtained first coordinates of the fixed markers.
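The patent does not fix the estimation method for these mappings. One hedged possibility, when the first coordinates of the fixed markers and their pixel locations in the two-dimensional marker image are known and the camera intrinsics have been calibrated, is a perspective-n-point solution, which yields the pose linking the three-dimensional model space coordinate system to the camera's three-dimensional coordinate system:

```python
import cv2
import numpy as np

def estimate_second_mapping(marker_first_coords, marker_pixels, K, dist):
    """marker_first_coords: (N, 3) first coordinates of fixed markers in the
    model space coordinate system; marker_pixels: (N, 2) pixel locations in the
    two-dimensional marker image; K, dist: calibrated intrinsics.
    Returns R (3x3) and t (3,) with camera_point = R @ model_point + t."""
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(marker_first_coords, dtype=np.float64),
        np.asarray(marker_pixels, dtype=np.float64),
        K, dist)
    if not ok:
        raise RuntimeError("PnP failed; check marker correspondences")
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec.reshape(3)
```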
For ease of understanding, the operation of the three-dimensional virtual reality generating method in a fixed space described above is described herein by way of example in specific application.
The method for generating a three-dimensional virtual reality in a fixed space can be implemented with an existing or newly built video monitoring system within the fixed space, one or more three-dimensional dynamic imaging devices, one or more calibration devices, a set of background running equipment, corresponding data processing software, and one or more large monitoring screens.
The existing or newly built video monitoring system within the fixed space generally comprises monitoring cameras distributed over the key monitoring areas, a management system for video image processing and storage, and one or more centralized or distributed video playing screens.
The one or more three-dimensional dynamic imaging devices can perform three-dimensional high-speed imaging of the visible area within a certain field of view, synchronously obtaining the three-dimensional space coordinate values of the corresponding pixels while obtaining the two-dimensional image. At present, devices for three-dimensional dynamic imaging include the three-dimensional matrix camera and devices composed of a laser radar and an image sensor.
The three-dimensional matrix camera forms a matrix structure with four image sensors on the same plane. This structure establishes a unique mapping between a spatial position in the observed space and a group of points on the four image sensors, and, combined with image pixel matching, realizes rapid three-dimensional imaging by purely optical images. The three-dimensional dynamic imaging device can also be composed of a laser radar and an image sensor: after the three-dimensional point cloud of the laser radar is matched to the image sensor, the three-dimensional space coordinates of the corresponding pixels are obtained synchronously with the two-dimensional image; the degree of fusion matching between the laser radar point cloud and the image sensor pixels affects the three-dimensional space coordinates obtained by such a device.
The one or more calibration devices are devices that themselves have a plurality of determined and precise geometric position points, each of which is easily and precisely imaged by an image sensor. For example, a checkerboard calibration plate formed by intersecting black and white squares makes it easy to obtain accurate pixel positions of the black-white intersection points when imaged. If the two-dimensional image pixel position and the three-dimensional space position of each geometric position point are known at the same time, the mapping relation between the two-dimensional image and the spatial positions can be established. For ease of use and carrying, the points of a checkerboard calibration plate generally lie in one plane; to establish a three-dimensional mapping relation during use, the plate must be rotated or translated in three-dimensional space and calibrated in various postures, so that a good correspondence between the two-dimensional image pixels collected by the camera and the three-dimensional space is established. The calibration device can also be arranged as a circular pattern, as a checkerboard with concave and convex relief, or as cylinders of different heights; the choice of calibration device is not limited.
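For illustration (assuming OpenCV and a standard plate with 9×6 inner corners; both are choices not fixed by the patent), locating the intersection points of such a checkerboard can look like this:

```python
import cv2

def find_calibration_points(image, pattern=(9, 6)):
    """Locate the inner corners of a black-and-white checkerboard calibration
    plate in one image; returns None if the full pattern is not visible."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        return None
    # Refine to sub-pixel accuracy, as an accurate mapping requires.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    return cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
```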
The hardware on which the software is installed can be one or more PCs or servers in the monitoring center, or intelligent chips installed in the cameras; intelligent chips allow recognition or image processing algorithms to be deployed directly on the camera terminals.
The data processing software mainly comprises: spatial three-dimensional modeling software; three-dimensional point cloud acquisition and generation software for the three-dimensional dynamic imaging device; camera three-dimensional point cloud mapping software; intelligent camera image recognition software; software for grabbing the contours of moving objects (such as moving vehicles and moving personnel) or segmenting the background image; software for calculating the three-dimensional spatial position and area of moving objects; software for dynamically updating and displaying the spatial real-scene three-dimensional model; intelligent management software for three-dimensional information and event information; image and information transmission software; and the like, without limitation.
The one or more centralized or distributed video playing screens can visually display the three-dimensional real scene of the fixed space in a centralized or distributed manner, concentrating the information of multiple cameras on one or more screens and facilitating the screen operations required by the monitoring center.
The implementation completion process of the three-dimensional virtual reality generation method in the fixed space mainly comprises the following steps:
step one, space three-dimensional modeling is carried out, and a unified three-dimensional space coordinate system is established.
Three-dimensional modeling is performed over the fixed real-scene space. After modeling is completed, the modeling space and the real geographic space are brought into a unified coordinate transformation according to the GPS information and geographic position of the fixed space and the geographic information mark points of buildings within it, forming a unique correspondence between geographic-space coordinates and modeling-space coordinates, and a unified three-dimensional model coordinate system is established.
This step can adopt various three-dimensional modeling modes, for example: direct manual design modeling from the building design drawings of the fixed scene; three-dimensional modeling from on-site images and data acquisition, through stereo image matching or direct acquisition of a three-dimensional point cloud; or a combination of the two, in which manual design based on the drawings is combined with stereo image matching or directly acquired point clouds.
And secondly, carrying out geographic position positioning on each camera and a fixed object in the corresponding field of view.
Basic geographic position information of every existing camera in the whole space is marked and measured, including the GPS positioning information of each camera's installation position and the attitude information of each camera (which can be obtained in step three below). Basic geographic position information of at least three marking points distributed at different positions within each camera's field of view is also marked and measured, including the GPS positioning information and spatial three-dimensional geographic coordinates of each marking point.
Marking and measuring follow the general technical specifications of existing building surveying. The existing geographic mark points obtained from geographic position and building surveys are generally used as reference points; the camera position is located by placing standard geographic positioning equipment at the camera position, and fixed objects are marked and measured using fixed points on them (such as corner points), so that the accurate category and geographic position of each fixed object within the fixed real-scene space are obtained.
And thirdly, shooting three-dimensional point cloud mapping of the background image by using a camera.
The purpose of this step is to form, under the unified three-dimensional space coordinate system established in step one, a unique correspondence between the two-dimensional background image shot by a camera (containing no moving object) and the actual three-dimensional point cloud data of the physical background and/or the surfaces of fixed objects. By means of this correspondence, a virtual image is generated through mapping of the three-dimensional point cloud data; it is completely consistent in size with the two-dimensional image shot by the camera and corresponds to it pixel by pixel, and each virtual image contains the actual three-dimensional point cloud data of the corresponding background.
Each camera can work independently once step three is completed. Its main concern is moving objects under the camera, or significant changes in the monitored scene; in general the camera background consists of fixed objects that essentially do not change, such as the ground, roads and ground fixtures, and in fixed scenes such as shops and stations significant changes are rare. After step three, the virtual image of each camera carries corresponding three-dimensional point cloud data through its image pixels, which endows the virtual image with the three-dimensional information of the real-scene space: every pixel in every virtual image has corresponding three-dimensional coordinate information.
Each camera can be calibrated one by one, and its three-dimensional point cloud mapping established, using the three-dimensional dynamic imaging device and the calibration device. The calibration and mapping method is as follows: the three-dimensional dynamic imaging device and the camera are placed at approximately the same position, and the calibration device is placed within the field of view covered by both devices, as shown in fig. 2 (the calibration device in fig. 2 is a black-and-white checkerboard calibration plate; other calibration plates or devices may be used). The three-dimensional dynamic imaging device and the camera synchronously acquire image data of the calibration device as calibration image data; they also acquire image data without the calibration device as background image data.
In fig. 2, the O1 coordinate system is the unified three-dimensional model coordinate system established in step one, corresponding to the three-dimensional real physical space; the O2 coordinate system is the three-dimensional coordinate system of the imaging field of view physically determined by the calibrated camera system; and the O3 coordinate system is the coordinate system of the three-dimensional image and point cloud data physically determined by the three-dimensional dynamic imaging device. The fields of view corresponding to O2 and O3 both lie within the space of the O1 coordinate system; they should overlap as much as possible, and the field of view of O3 should cover that of O2 as far as possible.
Referring to fig. 3, the steps of camera calibration and three-dimensional point cloud mapping may be performed in the following operation manner:
And 100, synchronously acquiring image data of a calibration device by the three-dimensional dynamic imaging equipment and the camera.
As shown in fig. 2, the calibration plate is placed in the field of view shared by the three-dimensional dynamic imaging device and the camera, whose positions remain unchanged. The calibration plate must be moved to different positions within the shared field of view, and its posture adjusted to different angles, so that the images acquired by the two devices capture as much geometric information of the three-dimensional space as possible. Each time the plate is moved, or its posture is changed at the same position, the three-dimensional dynamic imaging device and the camera must synchronously acquire image data of it. The acquired images must at least include the upper, lower, left, right and center regions of the camera's imaging plane, and in general the calibration plate must be imaged at least 5 times.
And step 101, calibrating the optical parameters of the three-dimensional dynamic imaging device and the single camera.
For a single camera, calibration of optical imaging parameters such as the focal length, distortion coefficients, optical-axis center coordinates and pixel size of its lens can be achieved by shooting the checkerboard calibration plate. The calibration plate images shot in step 100 therefore conveniently yield the optical imaging parameters of the camera and of each image sensor in the three-dimensional dynamic imaging device; this essentially determines the physical relations of optical imaging and perspective projection between each image sensor and the three-dimensional real space, and also determines the positions of the camera and the three-dimensional dynamic imaging device and the spatial postures of their three-dimensional coordinate axes.
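For illustration only (OpenCV, a 9×6 plate and a 5 cm square size are assumed, not prescribed by the patent), single-camera calibration from the images of step 100 can be sketched as follows:

```python
import cv2
import numpy as np

def calibrate(images, pattern=(9, 6), square=0.05):
    """Checkerboard calibration of a single camera: returns the intrinsic
    matrix K (focal length, optical-axis center) and distortion coefficients."""
    # Coordinates of the inner corners on the plate's own plane (square in meters).
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square
    obj_pts, img_pts = [], []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
    h, w = images[0].shape[:2]
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, (w, h), None, None)
    return K, dist
```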
Step 102, obtaining parameters of a transformation matrix for transforming the O3 coordinate system into the O2 coordinate system.
As can be seen from fig. 2, the field-of-view spaces corresponding to the O3 and O2 coordinate systems have a large overlapping area, and three-dimensional data in O3 space can be converted into three-dimensional data in O2 space through a coordinate transformation matrix. The transformation generally has six parameters: three coordinate-axis rotation angles and three coordinate-origin translations. In principle, therefore, by synchronously imaging the black-and-white corner points of the calibration plate in the two imaging systems (i.e. those of the camera and of the three-dimensional dynamic imaging device), it suffices to determine at least six points whose positions are known in both the O3 and the O2 coordinate system as corresponding points, although involving more points helps make the transformation matrix more accurate and stable. By shooting the calibration plate synchronously several times, the corner points with mutually determined positions on the plate provide a bridge between the two coordinate systems, and a large set of corresponding point pairs is obtained, which makes solving the transformation matrix possible. The basic relation obtained in this way contains all the parameters of the rotation and translation matrices between the O3 and O2 coordinate systems; that is, this procedure yields the three-dimensional correspondence matrix between the image pixels of the camera and those of a given image-sensor optical system in the three-dimensional dynamic imaging device used to shoot the calibration plate synchronously, in other words the parameters of the transformation from the O3 coordinate system to the O2 coordinate system.
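One standard way to solve such a rigid transform from corresponding points is the SVD-based Kabsch/Umeyama method; the sketch below assumes the matched O3/O2 corner positions have already been collected into two (N, 3) arrays, which is an assumption about the data layout rather than something the patent specifies.

```python
import numpy as np

def rigid_transform(P_o3, P_o2):
    """Least-squares R, t such that P_o2 ~ R @ P_o3 + t (Kabsch/Umeyama).
    P_o3, P_o2: (N, 3) arrays of corresponding points; N >= 3 non-collinear
    points suffice mathematically, the text uses >= 6 for stability."""
    c3, c2 = P_o3.mean(axis=0), P_o2.mean(axis=0)
    H = (P_o3 - c3).T @ (P_o2 - c2)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = c2 - R @ c3
    return R, t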
Step 200, synchronously acquiring image data of the fixed-space live scene by the three-dimensional dynamic imaging device and the camera.
With no calibration plate present, the three-dimensional dynamic imaging device and the camera synchronously acquire background images of the three-dimensional space in which objects will move. While acquiring the two-dimensional image, the three-dimensional dynamic imaging device directly obtains, for each pixel, the three-dimensional position of the corresponding background (i.e. fixed object) in the real three-dimensional space.
Step 103, converting the three-dimensional point cloud data in the O3 coordinate system obtained by the three-dimensional dynamic imaging device into three-dimensional point cloud data in the O2 coordinate system through the O3-to-O2 transformation matrix.
After the three-dimensional point cloud data are obtained in step 200, all of them can be converted from the O3 coordinate system into the O2 coordinate system by the transformation matrix of step 102, for example as sketched below.
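Applying the solved transform to the whole cloud is a one-line matrix operation; in the sketch below, the point cloud and the step-102 result are stand-in values for illustration only.

```python
import numpy as np

# Hypothetical inputs: R_32, t_32 from step 102; pts_o3 from step 200.
rng = np.random.default_rng(0)
pts_o3 = rng.uniform(-1.0, 1.0, (1000, 3))    # stand-in O3 point cloud
R_32, t_32 = np.eye(3), np.zeros(3)           # stand-in O3 -> O2 transform

pts_o2 = pts_o3 @ R_32.T + t_32               # every O3 point in O2 coordinates
```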
Step 104, mapping the obtained three-dimensional point cloud data in the O2 coordinate system through virtual optical imaging to generate a virtual two-dimensional image containing the three-dimensional data of the corresponding three-dimensional space background.
This step maps the three-dimensional point cloud data in the O2 coordinate system obtained in step 103 onto the camera's image sensor by simulated optical imaging, using the camera's optical parameters obtained in step 101, to form a virtual two-dimensional image. Because this image is a virtual one, produced by coordinate-transforming the spatial point cloud acquired by the three-dimensional dynamic imaging device and then mapping it onto the camera's image sensor, its basic position points correspond, in theory, exactly to those of the image acquired by the camera itself.
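A minimal sketch of this virtual imaging follows, assuming the usual pinhole convention (Z along the optical axis) and ignoring the lens distortion calibrated in step 101 for brevity; per pixel it keeps the nearest point's X, Y, Z values, which is what lets the virtual image carry the three-dimensional background data.

```python
import numpy as np

def render_virtual_image(pts_o2, K, img_shape):
    """Project O2-frame points through the calibrated pinhole model K and
    keep, per pixel, the nearest point's (X, Y, Z): a minimal z-buffer."""
    h, w = img_shape
    virtual = np.full((h, w, 3), np.nan)       # per-pixel X, Y, Z
    zbuf = np.full((h, w), np.inf)
    for X, Y, Z in pts_o2:
        if Z <= 0:                             # behind the camera
            continue
        u = int(round(K[0, 0] * X / Z + K[0, 2]))
        v = int(round(K[1, 1] * Y / Z + K[1, 2]))
        if 0 <= u < w and 0 <= v < h and Z < zbuf[v, u]:
            zbuf[v, u] = Z
            virtual[v, u] = (X, Y, Z)
    return virtual
```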
Step 201, actually measuring the three-dimensional coordinates of the marker points in the three-dimensional live-action space.
This step is combined with step two. Where the pixel positions of the marker points cannot be obtained automatically on the image sensor, markers can if necessary be placed at the measured position points so that the image sensor can identify them automatically. In principle, at least 6 marker points are measured within the common field of view of the camera and the three-dimensional dynamic imaging device.
Step 202, obtaining parameters of a transformation matrix for transforming the O3 coordinate system into the O1 coordinate system.
The conversion between the O3 and O1 coordinate systems is likewise realized through a coordinate transformation matrix composed, in principle, of three coordinate-axis rotation angles and three translation distances; six linear equations suffice to solve for these six parameters. Finding at least six of the spatial marker points of step 201 within the field of view of the three-dimensional dynamic imaging device therefore completes the parameter calculation (the same SVD-based solve sketched after step 102 applies). The more marker points are used, the more accurate and stable the transformation matrix, and the better the effect of converting the O3 coordinate system into the O1 coordinate system.
Step 203, converting the three-dimensional point cloud data in the O3 coordinate system into three-dimensional data in the O1 coordinate system.
Using the actual three-dimensional coordinates of the marker points within the camera's field of view obtained in step 201, the three-dimensional point cloud data in the O3 coordinate system acquired by the three-dimensional dynamic imaging device in step 200 are converted into three-dimensional point cloud data in the O1 coordinate system through the transformation matrix of step 202.
Step 300, converting the three-dimensional point cloud data in the O2 coordinate system into three-dimensional point cloud data in the O1 coordinate system.
In this step, the three-dimensional data contained in the virtual image obtained in step 104 are converted into live-scene space data in the O1 coordinate system, using the actual three-dimensional coordinates of the marker points within the camera's field of view from step 201 and the O1-frame point cloud data from step 203.
The coordinate conversion of this step can be performed with the transformation matrix obtained in step 102 and the one obtained in step 202, together with their inverses, for example by chaining them as sketched below.
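Sketched here with 4x4 homogeneous matrices: T_32 and T_31 stand for the step-102 (O3 to O2) and step-202 (O3 to O1) results, assumed already solved; chaining the inverse of one with the other gives the O2-to-O1 conversion this step needs.

```python
import numpy as np

def homogeneous(R, t):
    """Pack a 3x3 rotation R and 3-vector translation t into a 4x4 matrix."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def o2_to_o1(T_32, T_31):
    """Chain the step-102 and step-202 results: O2 -> O3 -> O1.
    T_32 maps O3 into O2, so its inverse maps O2 back into O3."""
    return T_31 @ np.linalg.inv(T_32)
```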
Step four, automatic identification of moving objects by the camera.
In this step, an automatic recognition algorithm identifies each moving object and segments its outline from the background automatically.
In daily monitoring, the camera applies the automatic recognition algorithm to the acquired images. The recognized objects can include people, roads, vehicles and any other objects of interest; on top of recognizing people, the algorithm can further recognize faces, gaits, various behaviors and the like.
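The patent does not name a specific recognition algorithm; as one simple classical stand-in, the sketch below segments moving objects from the fixed background with OpenCV background subtraction and returns their contours (a learned detector or segmenter would additionally supply the classification the text mentions).

```python
import cv2

# Background subtraction plus contour extraction: a simple stand-in for the
# unspecified recognition algorithm, yielding the edge contour of each mover.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

def moving_object_contours(frame, min_area=500):
    mask = subtractor.apply(frame)
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]  # drop shadows
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if cv2.contourArea(c) >= min_area]
```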
Step five, positioning the moving object in three-dimensional space.
The mapped three-dimensional point cloud at the segmentation edge of each moving object identified in step four (i.e. at the object's outline) is automatically extracted and used to calculate the object's depth value and to position it in space.
After step three is completed, the camera automatically acquires images within its field of view, and step three yields the corresponding virtual image, which contains the three-dimensional live scene within the camera's shooting range overlaid with the X, Y, Z coordinate values of the three-dimensional points.
After step four is completed, the camera's automatic recognition algorithm provides, for each moving object, a recognition result comprising its classification and the edge contour separating it from the background.
Once the classification and edge contour of a moving object are available, overlaying the contour on the virtual image makes it easy to find the three-dimensional point cloud data where the object touches the ground. From these data, the X, Y, Z coordinates of the identified moving object in the three-dimensional live scene are obtained very conveniently; the distance from the camera to the object (its depth value) can be computed from the Z coordinate, and because real-scene coordinates are used, the X, Y, Z values position the object directly in the three-dimensional live scene. For example, for people and vehicles, knowing the three-dimensional coordinates of the points where feet or tires touch the ground is enough for three-dimensional spatial positioning.
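A sketch of that lookup, assuming a contour in OpenCV format and the per-pixel X, Y, Z virtual image produced in step 104: the lowest contour pixels are taken as the ground-contact region and the real-scene coordinates found there are averaged.

```python
import numpy as np

def locate_on_ground(contour, virtual):
    """Read the real-scene (X, Y, Z) at the contour's lowest edge, where the
    object meets the ground (feet, tires), from the step-104 virtual image."""
    pts = contour.reshape(-1, 2)                        # (u, v) pixel pairs
    v_max = pts[:, 1].max()                             # image v grows downward
    bottom = pts[pts[:, 1] >= v_max - 2]                # lowest ~3 pixel rows
    coords = [virtual[v, u] for u, v in bottom
              if not np.isnan(virtual[v, u]).any()]
    return np.mean(coords, axis=0) if coords else None  # object ground position
```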
Step six, calculating the size of the moving object.
Because the actual size of a moving object does not correspond directly to its pixel count in the image (near objects appear large and far objects small), the actual size must be estimated from the depth value computed in step five so that the object can be rendered at a realistic scale in the three-dimensional live-action display. The specific procedure is as follows: once the camera's optical parameters are obtained in step 101, the metric distance between two adjacent pixels of the two-dimensional image at a given Z coordinate is fixed by those parameters, and the closer the moving object is to the camera (in this coordinate convention, the larger its Z coordinate value), the smaller that distance. The angle between the object's vertical direction and the camera's image plane is also taken into account; it can be obtained in advance, so after the object's size is computed it can be corrected by the trigonometric function of this angle to account for projection, making the size estimate more accurate.
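A hedged sketch of this estimate using pinhole similar triangles; tilt_deg stands for the pre-measured angle between the object's vertical and the image plane, and whether one multiplies or divides by its cosine depends on which extent is taken as foreshortened (here the measured image extent is assumed foreshortened).

```python
import math

def estimate_size(pixel_extent, depth, f_pixels, tilt_deg=0.0):
    """Pinhole similar triangles: metric extent = pixel extent * depth / f,
    with pixel_extent and f_pixels in pixels and depth in meters.
    tilt_deg: known angle between the object's vertical and the image plane
    (step six); dividing by cos() undoes the projection foreshortening."""
    cos_t = max(math.cos(math.radians(tilt_deg)), 1e-6)  # guard near 90 deg
    return pixel_extent * depth / f_pixels / cos_t
```

For example, a silhouette spanning 180 pixels at a depth of 10 m with f = 1200 pixels and no tilt gives 180 x 10 / 1200 = 1.5 m.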
Step seven, embedding, updating and displaying the camera's dynamic data in the three-dimensional model.
Steps four, five and six yield the positioning data of each identified moving object in three-dimensional space together with its estimated size; these data are computed from the real-time monitoring image of each camera. Once the monitoring center receives them, it can embed the information on each identified moving object (such as its classification and position) directly into the three-dimensional model obtained in step one, according to the positioning and size-estimation data, and display it on screen through three-dimensional display software. Real-time computation of steps four, five and six, combined with step three, thus makes real-time dynamic display of the three-dimensional virtual reality possible.
Step eight, comprehensive processing and storage of the data from each camera at the monitoring center.
After the monitoring center receives the recognition data extracted from the images shot by each camera, it can manage the information sent by all cameras in a unified way, realizing automatic information extraction and storage of the camera data and greatly compressing the required storage space. At the same time, the synchronously transmitted recognition data can be processed and labeled at a higher level, enabling various higher-level data-management functions; for example, because the shooting areas of adjacent cameras overlap, the movement tracks of people, vehicles and the like can be tracked dynamically, and automatic passenger-flow statistics, vehicle-density statistics and so on can be produced.
With the above method for generating three-dimensional virtual reality in a fixed space, three-dimensional imaging technology is used to image all objects in the space and obtain three-dimensional point cloud data; all the point clouds are then mapped, through coordinate conversion, into the physical coordinate system of the same space and spliced together into an overall three-dimensional point cloud model, which is presented as a whole in three-dimensional form at the video monitoring center, so that the screen display corresponds completely to the overall space. A camera already installed in the fixed space and a three-dimensional dynamic imaging device perform two-dimensional and three-dimensional imaging, respectively, over the same spatial range; through the position points of a calibration device, the two-dimensional image in the camera is placed in full correspondence with the three-dimensional space, the points of the camera's portion of the three-dimensional point cloud model are matched in physical space to the pixels of its two-dimensional image, a mapping between the two-dimensional image and the real three-dimensional space is established, and a virtual two-dimensional image containing the three-dimensional data of the three-dimensional space background is formed. Then, whenever the camera's real image changes relative to the mapped virtual two-dimensional image during daily monitoring, the camera automatically identifies the moving object through the recognition algorithm and provides its classification and edge contour in the image; the specific position of the identified object in three-dimensional space is computed from the three-dimensional point cloud data at the positions of its edge contour on the virtual two-dimensional image, its approximate size is computed from that position and the camera parameters, and the image content within the edge contour, together with the computed position, is transmitted to the video monitoring control center. After receiving this moving-object information from each camera, the control center superimposes the image of each moving object on the overall three-dimensional model at the computed position and size, using the optical parameters of the corresponding camera and the positional relation between its field of view and the overall model, dynamically embedding the moving-object information, so that moving objects within every camera's field of view are dynamically presented at the control center. The control center can also map the events monitored by all cameras onto the live scene of the same monitoring screen with dynamic seamless connection, so that it can continuously track the trajectory of an individual moving object across the whole space, unify and format the three-dimensional live scene, the three-dimensional data of the entire spatial range and the data of the monitored objects, and realize three-dimensional digitization of overall scene monitoring.
In summary, the method for generating three-dimensional virtual reality in a fixed space provided by the embodiment of the invention can realize an overall three-dimensional presentation of the fixed space using all the existing two-dimensional video monitoring cameras in that space and scene. It can capture monitored objects and events in real time and map their positions onto the three-dimensional live scene, format the data and render them in three dimensions, and fully automate the extraction and analysis of information from the monitoring video together with an integrated three-dimensional presentation of the results. This greatly facilitates querying and playback of the video records of all kinds of events, saves the space needed to store large volumes of video, substantially reduces the workload of video-monitoring personnel, saves labor, and makes video monitoring automatic and intelligent.
Based on the above method for generating three-dimensional virtual reality in a fixed space, an embodiment of the invention further provides a device for generating three-dimensional virtual reality in a fixed space. As shown in fig. 4, the device may comprise the following modules:
The acquiring module 402 is configured to acquire, when no moving object exists in the fixed space, a two-dimensional background image in a field of view range corresponding to each two-dimensional imaging device through a plurality of two-dimensional imaging devices arranged at different positions in the fixed space, and acquire three-dimensional modeling data in the field of view range corresponding to each two-dimensional imaging device; the obtained three-dimensional modeling data comprise labeling categories and three-dimensional points corresponding to all fixed objects in a fixed space, each three-dimensional point is provided with a first coordinate under a three-dimensional model space coordinate system corresponding to the fixed space, and the fixed objects comprise fixed objects and/or fixed backgrounds.
A generating module 404, configured to generate a corresponding virtual two-dimensional image for each two-dimensional background image based on the obtained three-dimensional modeling data, the optical parameters of the two-dimensional imaging devices already set in the fixed space, and a pre-established first mapping relationship between the three-dimensional coordinate system of the field of view range corresponding to each two-dimensional imaging device and the coordinate system where the corresponding three-dimensional modeling data is located; wherein each virtual two-dimensional image has a respective second coordinate in a respective three-dimensional coordinate system.
The identifying module 406 is configured to identify, by using each two-dimensional imaging device, a moving object from a two-dimensional real-time image acquired by the two-dimensional imaging device, so as to obtain a classification and an edge contour of each moving object when the moving object is identified; wherein the moving object comprises a person and/or a moving object.
The real scene module 408 is configured to perform three-dimensional modeling based on the obtained three-dimensional modeling data to obtain an overall three-dimensional model of the fixed space, generate a three-dimensional virtual real scene of the fixed space based on the overall three-dimensional model, the virtual two-dimensional image corresponding to the identified moving object, and the obtained classification and edge contour, and then display the three-dimensional virtual real scene.
With the above device for generating three-dimensional virtual reality in a fixed space, when a two-dimensional imaging device identifies a moving object, the three-dimensional virtual reality of the fixed space can be generated and displayed based on the overall three-dimensional model, the virtual two-dimensional image corresponding to the moving object, and the recognition result (classification and edge contour). The specific real-time state of the moving object in the fixed space is thus shown through the three-dimensional virtual reality, which better supports track tracking of moving objects in the fixed space and dynamic presentation of a panoramic picture of the whole scene.
The live-action module 408 described above may also be used to: acquiring a second coordinate of each moving object in the corresponding virtual two-dimensional image based on the obtained edge profile; determining the three-dimensional size of each moving object in the fixed space based on the second coordinates corresponding to the identified moving objects; converting a second coordinate corresponding to each moving object into a corresponding first coordinate based on a second mapping relation between a pre-established three-dimensional model space coordinate system and each three-dimensional coordinate system; and generating the three-dimensional virtual reality based on the whole three-dimensional model, all the obtained classifications and the first coordinates and the three-dimensional size corresponding to the identified moving object.
The live-action module 408 described above may also be used to: constructing a corresponding three-dimensional object model for each moving object according to the corresponding three-dimensional size; and generating the three-dimensional virtual reality based on the integral three-dimensional model, the first coordinates corresponding to the identified moving object and the three-dimensional object model.
The acquisition module 402 may also be configured to: perform three-dimensional data acquisition, through a three-dimensional data acquisition device, at the target position corresponding to the position of each two-dimensional imaging device, so as to obtain three-dimensional data within the field of view of that two-dimensional imaging device, the acquisition range of the three-dimensional data acquisition device at each target position overlapping the field of view of the corresponding two-dimensional imaging device; obtain, through the three-dimensional data acquisition device, the labeling category and the three-dimensional points corresponding to each fixed object; and take, as the three-dimensional modeling data within the field of view of each two-dimensional imaging device, the three-dimensional data acquired at the corresponding target position that lie within that field of view, together with the obtained labeling categories and three-dimensional points lying within that field of view.
The generation module 404 described above may also be used to: based on a first mapping relation established in advance, converting three-dimensional data corresponding to each target position into a local three-dimensional point cloud under a corresponding three-dimensional coordinate system; each local three-dimensional point cloud is provided with a second coordinate under a corresponding three-dimensional coordinate system; for each local three-dimensional point cloud, converting the local three-dimensional point cloud into a corresponding virtual two-dimensional image based on optical parameters of the two-dimensional imaging device to which the local three-dimensional point cloud is directed.
The distance between the position of each two-dimensional imaging device and the corresponding target position is smaller than a preset distance, and the overlapping rate of the acquisition range of the three-dimensional imaging device at each target position and the acquisition range of the corresponding two-dimensional imaging device is larger than a preset overlapping rate; based on this, referring to fig. 4, the apparatus may further include:
A first setup module 410 for: when a mobile object does not exist in the overlapping range corresponding to the position of each two-dimensional imaging device and a calibration device with a plurality of calibration points is arranged in the overlapping range corresponding to the position, three-dimensional data acquisition is carried out on the target position corresponding to the position through the three-dimensional imaging devices so as to obtain three-dimensional calibration data in the overlapping range corresponding to the position, and meanwhile, two-dimensional calibration images in the overlapping range corresponding to the position are acquired through the two-dimensional imaging devices arranged at the position; the three-dimensional calibration data comprises a third coordinate of each calibration point under the coordinate system where the corresponding three-dimensional calibration data is located, and the two-dimensional calibration image comprises a second coordinate of each calibration point under the corresponding three-dimensional coordinate system; and establishing a mapping relation between each three-dimensional coordinate system and the coordinate system where the corresponding three-dimensional calibration data are located as a corresponding first mapping relation based on the obtained three-dimensional calibration data and the obtained two-dimensional calibration image.
A second setup module 412 for: when a plurality of fixed markers are arranged in the overlapping range corresponding to the position of each two-dimensional imaging device, acquiring a two-dimensional marker image in the overlapping range corresponding to the position by the two-dimensional imaging device arranged at the position; each fixed marker is provided with a first coordinate under a space coordinate system of the three-dimensional model, and the two-dimensional marker image comprises a second coordinate of each marker under a corresponding three-dimensional coordinate system; and establishing a mapping relation between the three-dimensional model space coordinate system and each three-dimensional coordinate system as a corresponding second mapping relation based on the obtained two-dimensional marker image and the obtained first coordinates of the fixed markers.
The live-action module 408 described above may also be used to: and adding each three-dimensional object model to the integral three-dimensional position corresponding to the first coordinate corresponding to the corresponding moving object in the integral three-dimensional model, and binding the classification of the corresponding moving object for each integral three-dimensional position to obtain the three-dimensional virtual reality.
The implementation principle and technical effects of the device for generating three-dimensional virtual reality in a fixed space provided by the embodiment of the invention are the same as those of the foregoing method embodiment; for brevity, where the device embodiment is not detailed, reference may be made to the corresponding contents of the method embodiment.
The relative arrangement of the components and steps, and the numerical expressions and numerical values set forth in these embodiments, do not limit the scope of the present invention unless specifically stated otherwise.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
In the description of the present invention, it should be noted that the directions or positional relationships indicated by the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. are based on the directions or positional relationships shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above examples are only specific embodiments of the present invention, used to illustrate its technical solutions rather than to limit its protection scope, and although the present invention has been described in detail with reference to the foregoing examples, it should be understood by those skilled in the art that the invention is not limited thereto: any person skilled in the art may, within the technical scope of the present disclosure, modify or readily conceive of changes to the technical solutions described in the foregoing embodiments, or substitute equivalents for some of their technical features; such modifications, changes or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention and are intended to be included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. The method for generating the three-dimensional virtual reality in the fixed space is characterized by comprising the following steps of:
When no moving object exists in the fixed space, acquiring a two-dimensional background image in a view field range corresponding to each two-dimensional imaging device through a plurality of two-dimensional imaging devices arranged at different positions in the fixed space, and simultaneously acquiring three-dimensional modeling data in the view field range corresponding to each two-dimensional imaging device; the obtained three-dimensional modeling data comprise labeling categories and three-dimensional points corresponding to all fixed objects in a fixed space, each three-dimensional point is provided with a first coordinate under a three-dimensional model space coordinate system corresponding to the fixed space, and the fixed objects comprise fixed objects and/or fixed backgrounds;
Generating a corresponding virtual two-dimensional image for each two-dimensional background image based on the obtained three-dimensional modeling data, optical parameters of two-dimensional imaging devices arranged in a fixed space and a pre-established first mapping relation between a three-dimensional coordinate system of a field range corresponding to each two-dimensional imaging device and a coordinate system where the corresponding three-dimensional modeling data is located; wherein each virtual two-dimensional image has a second coordinate in a corresponding three-dimensional coordinate system;
Moving object identification is carried out on the two-dimensional real-time images acquired by the two-dimensional imaging equipment, so that classification and edge contour of each moving object are obtained when the moving object is identified; wherein the moving object comprises a person and/or a moving object;
and carrying out three-dimensional modeling based on the obtained three-dimensional modeling data to obtain an overall three-dimensional model of the fixed space, generating a three-dimensional virtual reality of the fixed space based on the overall three-dimensional model, the virtual two-dimensional image corresponding to the identified moving object, the obtained classification and the obtained edge profile, and then displaying the three-dimensional virtual reality.
2. The method of claim 1, wherein generating a three-dimensional virtual reality of the fixed space based on the overall three-dimensional model and the virtual two-dimensional image corresponding to the identified moving object and the resulting classification and edge profile, comprises:
acquiring a second coordinate of each moving object in the corresponding virtual two-dimensional image based on the obtained edge profile;
Determining the three-dimensional size of each moving object in the fixed space based on the second coordinates corresponding to the identified moving objects;
Converting a second coordinate corresponding to each moving object into a corresponding first coordinate based on a second mapping relation between a pre-established three-dimensional model space coordinate system and each three-dimensional coordinate system;
And generating the three-dimensional virtual reality based on the whole three-dimensional model, all the obtained classifications and the first coordinates and the three-dimensional size corresponding to the identified moving object.
3. The method of claim 2, wherein generating the three-dimensional virtual reality based on the overall three-dimensional model and the resulting overall classification and the first coordinates and three-dimensional dimensions corresponding to the identified moving object comprises:
Constructing a corresponding three-dimensional object model for each moving object according to the corresponding three-dimensional size;
And generating the three-dimensional virtual reality based on the integral three-dimensional model, the first coordinates corresponding to the identified moving object and the three-dimensional object model.
4. A method according to claim 3, wherein obtaining three-dimensional modeling data over a field of view corresponding to each two-dimensional imaging device comprises:
three-dimensional data acquisition is carried out on the target positions corresponding to the positions of the two-dimensional imaging devices through the three-dimensional data acquisition device, so that three-dimensional data in the field of view corresponding to the two-dimensional imaging devices are obtained; wherein, the overlapping range exists between the acquisition range of the three-dimensional data acquisition equipment at each target position and the field of view range of the corresponding two-dimensional imaging equipment;
the method comprises the steps of obtaining the labeling category and the three-dimensional point corresponding to each fixed object through a three-dimensional data acquisition device, and taking the three-dimensional data acquired by the three-dimensional data acquisition device at each target position and positioned in the view field range corresponding to the data of the corresponding two-dimensional imaging device and the obtained labeling category and three-dimensional point positioned in the view field range corresponding to the corresponding two-dimensional imaging device as three-dimensional modeling data in the view field range corresponding to the corresponding two-dimensional imaging device.
5. The method of claim 4, wherein generating a respective virtual two-dimensional image for each two-dimensional background image based on the obtained three-dimensional modeling data and the optical parameters of the two-dimensional imaging devices already disposed in the fixed space and a pre-established first mapping relationship between the three-dimensional coordinate system of the field of view range corresponding to each two-dimensional imaging device and the coordinate system in which the respective three-dimensional modeling data is located, comprises:
Based on a first mapping relation established in advance, converting three-dimensional data corresponding to each target position into a local three-dimensional point cloud under a corresponding three-dimensional coordinate system; each local three-dimensional point cloud is provided with a second coordinate under a corresponding three-dimensional coordinate system;
For each local three-dimensional point cloud, converting the local three-dimensional point cloud into a corresponding virtual two-dimensional image based on optical parameters of the two-dimensional imaging device to which the local three-dimensional point cloud is directed.
6. The method of claim 5, wherein the distance between the position of each two-dimensional imaging device and the corresponding target position is smaller than a preset distance, and the overlapping rate between the acquisition range of the three-dimensional imaging device at each target position and the acquisition range of the corresponding two-dimensional imaging device is larger than a preset overlapping rate; the method further comprises the steps of:
When a mobile object does not exist in the overlapping range corresponding to the position of each two-dimensional imaging device and a calibration device with a plurality of calibration points is arranged in the overlapping range corresponding to the position, three-dimensional data acquisition is carried out on the target position corresponding to the position through the three-dimensional imaging devices so as to obtain three-dimensional calibration data in the overlapping range corresponding to the position, and meanwhile, two-dimensional calibration images in the overlapping range corresponding to the position are acquired through the two-dimensional imaging devices arranged at the position; the three-dimensional calibration data comprises a third coordinate of each calibration point under the coordinate system where the corresponding three-dimensional calibration data is located, and the two-dimensional calibration image comprises a second coordinate of each calibration point under the corresponding three-dimensional coordinate system;
And establishing a mapping relation between each three-dimensional coordinate system and the coordinate system where the corresponding three-dimensional calibration data are located as a corresponding first mapping relation based on the obtained three-dimensional calibration data and the obtained two-dimensional calibration image.
7. The method of claim 6, wherein the method further comprises:
When a plurality of fixed markers are arranged in the overlapping range corresponding to the position of each two-dimensional imaging device, acquiring a two-dimensional marker image in the overlapping range corresponding to the position by the two-dimensional imaging device arranged at the position; each fixed marker is provided with a first coordinate under a space coordinate system of the three-dimensional model, and the two-dimensional marker image comprises a second coordinate of each marker under a corresponding three-dimensional coordinate system;
and establishing a mapping relation between the three-dimensional model space coordinate system and each three-dimensional coordinate system as a corresponding second mapping relation based on the obtained two-dimensional marker image and the obtained first coordinates of the fixed markers.
8. The method of claim 3, wherein generating the three-dimensional virtual reality based on the global three-dimensional model and the identified first coordinates corresponding to the moving object and the three-dimensional object model comprises:
And adding each three-dimensional object model to the integral three-dimensional position corresponding to the first coordinate corresponding to the corresponding moving object in the integral three-dimensional model, and binding the classification of the corresponding moving object for each integral three-dimensional position to obtain the three-dimensional virtual reality.
9. The method of any of claims 4-7, wherein the three-dimensional data acquisition device is a three-dimensional matrix camera comprising four image sensors arranged in a matrix on the same plane.
10. A three-dimensional virtual reality generating device in a fixed space, comprising:
The acquisition module is used for acquiring a two-dimensional background image in a view field range corresponding to each two-dimensional imaging device through a plurality of two-dimensional imaging devices arranged at different positions in the fixed space when no moving object exists in the fixed space, and simultaneously acquiring three-dimensional modeling data in the view field range corresponding to each two-dimensional imaging device; the obtained three-dimensional modeling data comprise labeling categories and three-dimensional points corresponding to all fixed objects in a fixed space, each three-dimensional point is provided with a first coordinate under a three-dimensional model space coordinate system corresponding to the fixed space, and the fixed objects comprise fixed objects and/or fixed backgrounds;
the generation module is used for generating a corresponding virtual two-dimensional image for each two-dimensional background image based on the obtained three-dimensional modeling data, the optical parameters of the two-dimensional imaging devices which are arranged in the fixed space and a pre-established first mapping relation between a three-dimensional coordinate system of a field range corresponding to each two-dimensional imaging device and a coordinate system where the corresponding three-dimensional modeling data is located; wherein each virtual two-dimensional image has a second coordinate in a corresponding three-dimensional coordinate system;
The identification module is used for carrying out mobile object identification on the two-dimensional real-time images acquired by the two-dimensional imaging equipment through each two-dimensional imaging equipment so as to obtain the classification and the edge profile of each mobile object when the mobile object is identified; wherein the moving object comprises a person and/or a moving object;
The real scene module is used for carrying out three-dimensional modeling based on the obtained three-dimensional modeling data to obtain an overall three-dimensional model of the fixed space, generating a three-dimensional virtual real scene of the fixed space based on the overall three-dimensional model, the virtual two-dimensional image corresponding to the identified moving object, the obtained classification and the obtained edge contour, and then displaying the three-dimensional virtual real scene.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410175008.5A CN118037956A (en) | 2024-02-07 | 2024-02-07 | Method and system for generating three-dimensional virtual reality in fixed space |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410175008.5A CN118037956A (en) | 2024-02-07 | 2024-02-07 | Method and system for generating three-dimensional virtual reality in fixed space |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118037956A true CN118037956A (en) | 2024-05-14 |
Family
ID=90983571
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410175008.5A Pending CN118037956A (en) | 2024-02-07 | 2024-02-07 | Method and system for generating three-dimensional virtual reality in fixed space |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118037956A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114282035A (en) * | 2021-08-17 | 2022-04-05 | 腾讯科技(深圳)有限公司 | Image retrieval model training and retrieval method, device, equipment and medium |
CN118644408A (en) * | 2024-08-15 | 2024-09-13 | 湖南快乐阳光互动娱乐传媒有限公司 | A method and device for generating virtual-real fusion images, electronic equipment, and storage medium |
2024-02-07: application CN202410175008.5A filed; CN118037956A pending.
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111473739B (en) | Video monitoring-based surrounding rock deformation real-time monitoring method for tunnel collapse area | |
US9965870B2 (en) | Camera calibration method using a calibration target | |
EP3550513B1 (en) | Method of generating panorama views on a mobile mapping system | |
Golparvar-Fard et al. | Evaluation of image-based modeling and laser scanning accuracy for emerging automated performance monitoring techniques | |
CN118037956A (en) | Method and system for generating three-dimensional virtual reality in fixed space | |
US20060007308A1 (en) | Environmentally aware, intelligent surveillance device | |
CN102917171B (en) | Based on the small target auto-orientation method of pixel | |
CN111192321B (en) | Target three-dimensional positioning method and device | |
CN103226838A (en) | Real-time spatial positioning method for mobile monitoring target in geographical scene | |
CN113345028B (en) | Method and equipment for determining target coordinate transformation information | |
JP2010504711A (en) | Video surveillance system and method for tracking moving objects in a geospatial model | |
CN105741379A (en) | Method for panoramic inspection on substation | |
CN101701814A (en) | Method for judging spatial position of target by linkage of multi-cameras and system thereof | |
KR102473804B1 (en) | method of mapping monitoring point in CCTV video for video surveillance system | |
CN112950717A (en) | Space calibration method and system | |
CN114742905B (en) | Multi-camera parameter calibration method, device, equipment and storage medium | |
WO2022025283A1 (en) | Measurement processing device, method, and program | |
KR102458559B1 (en) | Construction management system and method using mobile electric device | |
JP2024039070A (en) | Feature management system | |
CN112860946B (en) | Method and system for converting video image information into geographic information | |
CN111275823B (en) | Target associated data display method, device and system | |
CN113450462B (en) | Three-dimensional scene dynamic element restoration method, device and storage medium | |
JP2006215939A (en) | Free viewpoint image composition method and free viewpoint image composition apparatus | |
CN118642121B (en) | Monocular vision ranging and laser point cloud fusion space positioning method and system | |
CN118778064B (en) | Method and system for monitoring dangerous trees along railways based on optical SAR fusion and binocular depth estimation technology |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |