CN112383746A - Video monitoring method and device in three-dimensional scene, electronic equipment and storage medium - Google Patents
- Publication number
- CN112383746A CN112383746A CN202011182441.XA CN202011182441A CN112383746A CN 112383746 A CN112383746 A CN 112383746A CN 202011182441 A CN202011182441 A CN 202011182441A CN 112383746 A CN112383746 A CN 112383746A
- Authority
- CN
- China
- Prior art keywords
- video
- information
- dimensional map
- video information
- scene
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H04N7/181 — Closed-circuit television [CCTV] systems in which the video signal is not broadcast, for receiving images from a plurality of remote sources
- G06T17/05 — Geographic models (three-dimensional [3D] modelling)
- G06T19/20 — Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
- G06T3/4038 — Image mosaicing, e.g. composing plane images from plane sub-images
- G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
- H04N23/00 — Cameras or camera modules comprising electronic image sensors; control thereof
- G06T2200/08 — Indexing scheme involving all processing steps from image acquisition to 3D model generation
- G06T2200/32 — Indexing scheme involving image mosaicing
- G06T2207/10016 — Video; image sequence (image acquisition modality)
- G06T2207/20221 — Image fusion; image merging
- G06T2207/30232 — Surveillance
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Architecture (AREA)
- General Engineering & Computer Science (AREA)
- Computer Hardware Design (AREA)
- Remote Sensing (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Closed-Circuit Television Systems (AREA)
Abstract
The invention discloses a video monitoring method and device in a three-dimensional scene, an electronic device, and a storage medium. The method comprises: performing three-dimensional modeling on a designated area to construct a three-dimensional map scene; collecting data information of each camera in the designated area; positioning each camera in the three-dimensional map scene according to its position information, and projecting its video information at the positioned location in the scene; and splicing and fusing the projected video information, then performing video monitoring according to the splicing and fusion result. Because the video information of different cameras is projected to the corresponding positions of the three-dimensional map scene and adjacent video information is spliced and fused, an integrated video monitoring scene is displayed on the three-dimensional map, combining the virtual scene with real video. A worker therefore only needs to observe a single screen to grasp the overall security situation of the entire monitored area, which correspondingly improves working efficiency and emergency response capability.
Description
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to a video monitoring method and device in a three-dimensional scene, electronic equipment and a storage medium.
Background
Video monitoring is an important component of security systems, and video monitoring technology is now mature. Traditionally, two-dimensional video monitoring is used: hundreds of monitoring points are accessed by a single monitoring system at the same time, and each camera is watched in a separate, mutually independent split-screen view.
In the prior art, when a monitored area is watched in this split-screen mode, a worker must observe many screens simultaneously. It is therefore difficult to visually grasp the overall security situation of the monitored area, and the position shown in an image cannot be located quickly and accurately, so existing video monitoring methods cannot meet users' monitoring requirements.
Disclosure of Invention
The embodiments of the invention provide a video monitoring method, device, electronic device, and storage medium in a three-dimensional scene, so as to realize integrated monitoring of a designated area in a three-dimensional map scene.
In a first aspect, an embodiment of the present invention provides a method for monitoring a video in a three-dimensional scene, including: carrying out three-dimensional modeling on the specified area to construct a three-dimensional map scene;
collecting data information of each camera in a designated area, wherein the data information comprises video information and position information;
positioning the camera in the three-dimensional map scene according to the position information, and projecting the video information at the position of the three-dimensional map scene;
and splicing and fusing the video information of each projection, and monitoring the video according to the splicing and fusing result.
In a second aspect, an embodiment of the present invention provides a video monitoring apparatus in a three-dimensional scene, including:
the three-dimensional map scene construction module is used for carrying out three-dimensional modeling on the specified area to construct a three-dimensional map scene;
the data information acquisition module is used for acquiring data information of each camera in a designated area, wherein the data information comprises video information and position information;
the video information projection module is used for positioning the camera in the three-dimensional map scene according to the position information and projecting the video information at the positioning position of the three-dimensional map scene;
and the splicing and fusing module is used for splicing and fusing each piece of projected video information and monitoring the video according to the splicing and fusing result.
In a third aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
one or more processors;
storage means for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the methods of any of the embodiments of the present invention.
In a fourth aspect, the embodiments of the present invention also provide a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the method of any of the embodiments of the present invention.
In the embodiment of the invention, the video information of different cameras is projected to the corresponding positions of the three-dimensional map scene, and the adjacent video information is spliced and fused, so that an integral video monitoring scene is displayed on the three-dimensional map scene, the combination of a virtual scene and a real video is realized, a worker can acquire the integral safety condition of the whole monitoring area only by observing one screen, and the working efficiency and the emergency response capability of the worker are correspondingly improved.
Drawings
Fig. 1 is a flowchart of a video monitoring method in a three-dimensional scene according to an embodiment of the present invention;
fig. 2 is a flowchart of a video monitoring method in a three-dimensional scene according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of a video monitoring apparatus in a three-dimensional scene according to a third embodiment of the present invention;
fig. 4 is a block diagram of an electronic device according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of a video monitoring method in a three-dimensional scene according to an embodiment of the present invention. This embodiment is applicable to situations where a designated area is monitored as a whole in a three-dimensional map scene. The method may be executed by the video monitoring device of the embodiments of the present invention, which may be implemented in software and/or hardware. The method specifically includes the following steps:
Step 101, performing three-dimensional modeling on the designated area to construct a three-dimensional map scene.
The designated area is an area that needs to be monitored, such as a school, a company, or a residential community; this embodiment does not limit its specific range.
Specifically, in the embodiment, a satellite image base map and an urban building model can be loaded based on a CIMMap three-dimensional Geographic Information System (GIS) engine, and an electronic map and urban components (such as landmark buildings) are gathered in a three-dimensional map scene to construct a virtual scene consistent with a real environment.
It should be noted that, according to the user's monitoring requirements, the three-dimensional map scene can also be rotated by different angles or zoomed at different scales in response to user instructions, so that workers can monitor the designated area from multiple angles and in all directions.
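The rotate/zoom interaction described above can be sketched as a simple planar view transform. This is an illustrative stand-in only; a real 3D GIS engine such as the one mentioned would apply a full 4x4 view matrix on the GPU, and the function and parameter names here are assumptions:

```python
import math

def transform_view(points, angle_deg=0.0, scale=1.0):
    """Rotate scene points about the origin and zoom by a scale factor.

    A minimal 2D sketch of the user-driven rotate/zoom operation on the
    three-dimensional map scene (names and model are illustrative).
    """
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    return [(scale * (x * cos_a - y * sin_a),
             scale * (x * sin_a + y * cos_a)) for x, y in points]

# Rotating (1, 0) by 90 degrees and zooming 2x lands near (0, 2).
rotated = transform_view([(1.0, 0.0)], angle_deg=90.0, scale=2.0)
assert abs(rotated[0][0]) < 1e-9 and abs(rotated[0][1] - 2.0) < 1e-9
```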
Step 102, collecting data information of each camera in the designated area, the data information including video information and position information.
Specifically, real cameras are arranged at different positions of the designated area in the real scene, each camera shooting a different sub-region of the area. The electronic device of this embodiment collects the data information of each camera, which includes not only the video information shot by each camera but also each camera's position information, for example its Global Positioning System (GPS) position.
The data information also includes camera attitude information, for example, information such as a pitch angle of the camera.
Optionally, after the data information of each camera in the designated area is collected, the method further includes: preprocessing the images contained in the video information. The preprocessing specifically includes denoising the video information to eliminate interference noise and thereby improve its quality.
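The denoising step is not specified further in the text. As a hedged, minimal stand-in, a 3x3 median filter removes isolated salt-and-pepper noise from a grayscale frame; a real system would use an optimized library routine, and the list-of-rows image representation here is purely illustrative:

```python
def median_filter(img):
    """3x3 median filter over the interior of a grayscale image given as a
    list of rows of pixel values. Border pixels are left unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[j][i]
                            for j in (y - 1, y, y + 1)
                            for i in (x - 1, x, x + 1))
            out[y][x] = window[4]  # median of the 9 neighbours
    return out

noisy = [[10, 10, 10],
         [10, 255, 10],  # single salt-noise pixel
         [10, 10, 10]]
assert median_filter(noisy)[1][1] == 10  # the outlier is suppressed
```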
Step 103, positioning the camera in the three-dimensional map scene according to the position information, and projecting the video information at the positioned location in the three-dimensional map scene.
Optionally, positioning the camera in the three-dimensional map scene according to the position information may include: searching a coordinate point matched with the position information in the three-dimensional map scene; and loading the virtual camera model at the coordinate point to realize the positioning of the camera.
Specifically, the position information of each camera can be extracted from the collected data information. Because the three-dimensional map scene is a virtual representation of the designated area, each coordinate point in the scene corresponds one-to-one with a position in the designated area. Therefore, for each camera, a coordinate point matching its position information is searched for in the three-dimensional map scene, and a virtual camera model is loaded at that coordinate point, completing the positioning of the camera.
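The coordinate-matching step above can be sketched as a nearest-point search over the scene's coordinate points. The tuple representation, planar coordinates, and function name are assumptions for illustration, not taken from the patent:

```python
import math

def locate_camera(gps, scene_points):
    """Return the scene coordinate point closest to the camera's GPS fix.

    `gps` and each entry of `scene_points` are (x, y) tuples in a common
    planar coordinate system (a simplification; a real pipeline would
    first project geodetic coordinates into the scene's frame).
    """
    return min(scene_points, key=lambda p: math.dist(gps, p))

points = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0)]
# The virtual camera model would then be loaded at the matched point.
assert locate_camera((9.0, 1.0), points) == (10.0, 0.0)
```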
Optionally, projecting the video information at the position of the three-dimensional map scene may include: video information is projected at a location of the three-dimensional map scene based on the pose information.
Specifically, once each real camera in the designated area has been positioned by loading a virtual camera model in the virtual three-dimensional map scene, its video information can be projected at the positioned location. The data information includes camera attitude information, such as the pitch angle, and different attitudes affect how accurately the video is projected at that location: if the pitch angle increases, for example, the projected video may shift left or right by some distance at the positioned location of the three-dimensional map scene. Therefore, to ensure projection accuracy, this embodiment projects the video information at the positioned location according to the attitude information.
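As a hypothetical illustration of why attitude matters for projection accuracy: under a simplified pinhole-plus-ground-plane model (an assumption, not the patent's actual method), a camera mounted at height h and pitched down by angle t sees its optical axis meet the ground at horizontal distance h / tan(t), so any change in pitch shifts the projection footprint:

```python
import math

def footprint_distance(height_m, pitch_deg):
    """Horizontal distance from the camera base to where the optical axis
    meets flat ground, for a camera `height_m` above the ground pitched
    `pitch_deg` below the horizontal (simplified illustrative model)."""
    if not 0 < pitch_deg < 90:
        raise ValueError("pitch must be strictly between 0 and 90 degrees")
    return height_m / math.tan(math.radians(pitch_deg))

# A steeper pitch pulls the projected footprint closer to the camera.
assert abs(footprint_distance(10.0, 45.0) - 10.0) < 1e-9  # tan(45 deg) = 1
assert footprint_distance(10.0, 60.0) < footprint_distance(10.0, 30.0)
```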
And 104, splicing and fusing the video information projected by each step, and monitoring the video according to the splicing and fusing result.
Optionally, splicing and fusing the video information of each projection may include: performing video splicing on each piece of projected video information; and fusing with the three-dimensional map scene according to the splicing result.
After the video information of each camera has been projected onto the three-dimensional map scene, each piece of projected video information is spliced and fused. Specifically, adjacent projected video information is spliced and fused in its overlapping area, and the spliced video information is then fused with the three-dimensional map scene.
Optionally, performing video splicing on each piece of projected video information may include: determining a bounding box for each piece of projected video information, the bounding box containing the edge pixel feature points of that video; when adjacent bounding boxes are determined to contain the same pixel feature points, keeping those points in one bounding box and deleting them from the other to obtain a new video bounding box; and obtaining a splicing result of the projected video information based on the new video bounding box, the splicing result containing a rendering area for each piece of projected video information.
For example, consider two adjacent pieces of projected video information in the three-dimensional map scene, A and B. The bounding boxes of A and B are determined; since each bounding box contains the edge pixel feature points of its video, the image range of each projection can be determined from its bounding box. When the bounding boxes of A and B are found to contain the same pixel feature points, the pictures of A and B overlap. Either bounding box can then be chosen to keep the shared points. In this embodiment, the shared pixel feature points in A's bounding box are kept and those in B's bounding box are deleted, yielding a new video bounding box for A and B, and the splicing result is obtained from this new bounding box. The splicing result contains the rendering areas of A and B: after splicing, the rendering area of B matches the full range of B's bounding box, while the rendering area of A is smaller than the range of A's bounding box.
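The overlap-resolution rule in the example above can be modelled with sets of edge pixel feature points. This is an illustrative representation; the translated text is ambiguous, so this sketch follows the reading in which A's copy of the shared points is kept for the combined outline, B renders its full bounding box, and A's rendering area gives up the overlap:

```python
def resolve_overlap(box_a, box_b):
    """Resolve the overlap between two adjacent projected videos.

    Each box is modelled as a set of edge pixel feature points (an
    assumed representation). Shared points are kept once, on A's side of
    the seam; B then renders its full bounding box while A's rendering
    area excludes the overlapping points.
    """
    shared = box_a & box_b
    render_b = set(box_b)       # B's rendering area equals its bounding box
    render_a = box_a - shared   # A's rendering area shrinks by the overlap
    return render_a, render_b

a = {(0, 0), (1, 0), (2, 0)}
b = {(2, 0), (3, 0), (4, 0)}  # shares the point (2, 0) with A
ra, rb = resolve_overlap(a, b)
assert rb == b
assert ra == {(0, 0), (1, 0)}
```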
Optionally, the fusion with the three-dimensional map scene according to the splicing result may include: adjusting the corresponding video information in the rendering area of each projected video information; and integrally displaying the adjustment result in the three-dimensional map scene to realize the fusion with the three-dimensional map scene.
In the embodiment of the invention, the video information of different cameras is projected to the corresponding positions of the three-dimensional map scene, and the adjacent video information is spliced and fused, so that an integral video monitoring scene is displayed on the three-dimensional map scene, the combination of a virtual scene and a real video is realized, a worker can acquire the integral safety condition of the whole monitoring area only by observing one screen, and the working efficiency and the emergency response capability of the worker are correspondingly improved.
Example two
Fig. 2 is a flowchart of a video monitoring method in a three-dimensional scene according to a second embodiment of the present invention. This embodiment builds on the previous one and refines the step of performing video monitoring according to the splicing and fusion result, which includes: displaying the splicing and fusion result, and raising an alarm when a preset scene appears in it.
As shown in fig. 2, the method of the embodiment of the present disclosure specifically includes:
Step 201, performing three-dimensional modeling on the designated area to construct a three-dimensional map scene.
Step 202, collecting data information of each camera in the designated area, the data information including video information and position information.
Step 203, positioning the camera in the three-dimensional map scene according to the position information, and projecting the video information at the positioned location in the three-dimensional map scene.
Step 204, splicing and fusing each piece of projected video information.
Step 205, displaying the splicing and fusion result, and raising an alarm when a preset scene appears in the splicing and fusion result.
Specifically, in this embodiment, after the projected video information is spliced and fused, the splicing and fusing result can be displayed on an independent display screen, so that the worker can acquire the overall safety condition of the whole monitoring area by observing a display picture.
It should be noted that preset scenes may also be configured, for example a hijacking scene or a scene containing dangerous articles such as knives. When an image in the monitored video information is compared with a preset scene via pixel feature points and the similarity is determined to exceed a preset threshold, a potential safety hazard exists in the monitored area and an alarm can be raised automatically, further improving the monitoring accuracy for the area.
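The comparison metric is left unspecified in the text. As a hedged sketch, one could compare pixel feature point sets with a Jaccard similarity and alarm when it exceeds the preset threshold; the function names and the 0.8 threshold are assumptions for illustration:

```python
def similarity(frame_points, scene_points):
    """Jaccard similarity between the feature points of a monitored frame
    and a preset alarm scene (a stand-in for the unspecified metric)."""
    if not frame_points and not scene_points:
        return 0.0
    return len(frame_points & scene_points) / len(frame_points | scene_points)

def should_alarm(frame_points, preset_scenes, threshold=0.8):
    """Raise an alarm when any preset scene is matched above the threshold."""
    return any(similarity(frame_points, s) > threshold for s in preset_scenes)

preset = [{1, 2, 3, 4, 5}]
assert should_alarm({1, 2, 3, 4, 5}, preset)        # identical: similarity 1.0
assert not should_alarm({1, 2, 9, 10, 11}, preset)  # similarity 2/8 = 0.25
```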
In the embodiment of the invention, the video information of different cameras is projected to the corresponding positions of the three-dimensional map scene, and the adjacent video information is spliced and fused, so that an integral video monitoring scene is displayed on the three-dimensional map scene, the combination of a virtual scene and a real video is realized, a worker can acquire the integral safety condition of the whole monitoring area only by observing one screen, and the working efficiency and the emergency response capability of the worker are correspondingly improved. And when the preset scene appears in the splicing and fusing result, alarming is carried out so as to prompt the safety of the monitored area, thereby further improving the monitoring accuracy of the monitored area.
EXAMPLE III
Fig. 3 is a schematic structural diagram of a video monitoring apparatus in a three-dimensional scene according to a third embodiment of the present invention. The apparatus specifically includes: a three-dimensional map scene construction module 310, a data information acquisition module 320, a video information projection module 330, and a splicing and fusion module 340.
The three-dimensional map scene construction module 310 is configured to perform three-dimensional modeling on a specified area to construct a three-dimensional map scene; the data information acquisition module 320 is configured to acquire data information of each camera in a designated area, where the data information includes video information and position information; the video information projection module 330 is configured to position the camera in the three-dimensional map scene according to the position information, and project video information at the position where the three-dimensional map scene is positioned; and the splicing and fusing module 340 is configured to splice and fuse each piece of projected video information, and perform video monitoring according to a splicing and fusing result.
Optionally, the video information projection module includes a positioning sub-module, configured to: searching a coordinate point matched with the position information in the three-dimensional map scene;
and loading the virtual camera model at the coordinate point to realize the positioning of the camera.
Optionally, the data information further includes camera attitude information; the video information projection module includes a projection sub-module for projecting video information at a location of the three-dimensional map scene based on the pose information.
Optionally, the apparatus further includes a preprocessing module, configured to preprocess an image included in the video information.
Optionally, the splicing fusion module includes a splicing submodule configured to: performing video splicing on each piece of projected video information; a fusion submodule to: and fusing with the three-dimensional map scene according to the splicing result.
Optionally, the splicing submodule is configured to: determine a bounding box for each piece of projected video information, the bounding box containing the edge pixel feature points of the video;
when adjacent bounding boxes are determined to contain the same pixel feature points, keep those points in one bounding box and delete them from the other to obtain a new video bounding box;
and obtain a splicing result of the projected video information based on the new video bounding box, the splicing result containing a rendering area for each piece of projected video information.
Optionally, a fusion submodule, configured to: adjusting the corresponding video information in the rendering area of each projected video information;
and integrally displaying the adjustment result in the three-dimensional map scene to realize the fusion with the three-dimensional map scene.
Optionally, the splicing and fusing module further includes a monitoring submodule, configured to: and displaying the splicing and fusing result, and giving an alarm when a preset scene appears in the splicing and fusing result.
The device can execute the video monitoring method in the three-dimensional scene provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method. For technical details not described in detail in this embodiment, reference may be made to the method provided in any embodiment of the present invention.
Example four
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. FIG. 4 illustrates a block diagram of an exemplary electronic device 412 suitable for use in implementing embodiments of the present invention. The electronic device 412 shown in fig. 4 is only an example and should not bring any limitations to the functionality and scope of use of the embodiments of the present invention.
As shown in fig. 4, the electronic device 412 takes the form of a general-purpose computing device. The components of the electronic device 412 may include, but are not limited to: one or more processors 416, a memory 428, and a bus 418 that couples the various system components (including the memory 428 and the processor 416).
The memory 428 is used to store instructions. Memory 428 can include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM)430 and/or cache memory 432. The electronic device 412 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 434 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 4, commonly referred to as a "hard drive"). Although not shown in FIG. 4, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 418 by one or more data media interfaces. Memory 428 can include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 440 having a set (at least one) of program modules 442 may be stored, for instance, in memory 428, such program modules 442 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. The program modules 442 generally perform the functions and/or methodologies of the described embodiments of the invention.
The electronic device 412 may also communicate with one or more external devices 414 (e.g., keyboard, pointing device, display 424, etc.), with one or more devices that enable a user to interact with the electronic device 412, and/or with any devices (e.g., network card, modem, etc.) that enable the electronic device 412 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 422. Also, the electronic device 412 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) through the network adapter 420. As shown, network adapter 420 communicates with the other modules of electronic device 412 over bus 418. It should be appreciated that although not shown in FIG. 4, other hardware and/or software modules may be used in conjunction with the electronic device 412, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processor 416 performs various functional applications and data processing by executing instructions stored in the memory 428, such as performing the following:
carrying out three-dimensional modeling on the specified area to construct a three-dimensional map scene; collecting data information of each camera in a designated area, wherein the data information comprises video information and position information; positioning the camera in the three-dimensional map scene according to the position information, and projecting the video information at the position of the three-dimensional map scene; and splicing and fusing the video information of each projection, and monitoring the video according to the splicing and fusing result.
EXAMPLE five
An embodiment of the present invention further provides a storage medium containing computer-executable instructions, where the computer-executable instructions are executed by a computer processor to perform a video monitoring method in a three-dimensional scene, and the method includes:
carrying out three-dimensional modeling on the specified area to construct a three-dimensional map scene; collecting data information of each camera in a designated area, wherein the data information comprises video information and position information; positioning the camera in the three-dimensional map scene according to the position information, and projecting the video information at the position of the three-dimensional map scene; and splicing and fusing the video information of each projection, and monitoring the video according to the splicing and fusing result.
Of course, the storage medium provided by the embodiment of the present invention contains computer-executable instructions, and the computer-executable instructions are not limited to the method operations described above, and may also perform related operations in the video monitoring method in a three-dimensional scene provided by any embodiment of the present invention.
From the above description of the embodiments, it will be apparent to those skilled in the art that the present invention may be implemented by software together with necessary general-purpose hardware, and certainly may also be implemented entirely in hardware, although the former is preferable in many cases. Based on this understanding, the technical solutions of the present invention may be embodied in the form of a software product stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a flash memory (FLASH), a hard disk, or an optical disk, and including several instructions that enable an electronic device (which may be a personal computer, a server, or a network device) to execute the video monitoring method in a three-dimensional scene according to the embodiments of the present invention.
It should be noted that, in the above embodiment of the video monitoring apparatus in a three-dimensional scene, the included units and modules are divided only according to functional logic, and the division is not limited thereto as long as the corresponding functions can be implemented; in addition, the specific names of the functional units are only for convenience of distinguishing them from one another and are not used to limit the protection scope of the present invention.
It is to be noted that the foregoing is merely illustrative of the preferred embodiments of the present invention and the technical principles employed. Those skilled in the art will understand that the present invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements, and substitutions may be made without departing from the scope of the invention. Therefore, although the present invention has been described in some detail through the above embodiments, it is not limited to them and may include other equivalent embodiments without departing from its spirit; the scope of the present invention is determined by the appended claims.
Claims (11)
1. A video monitoring method in a three-dimensional scene is characterized by comprising the following steps:
carrying out three-dimensional modeling on a designated area to construct a three-dimensional map scene;
collecting data information of each camera in the designated area, wherein the data information comprises video information and position information;
positioning a camera in the three-dimensional map scene according to the position information, and projecting video information at the positioned location in the three-dimensional map scene;
and stitching and fusing each piece of projected video information, and performing video monitoring according to the stitching and fusion result.
2. The method of claim 1, wherein said positioning a camera in the three-dimensional map scene according to the position information comprises:
searching a coordinate point matched with the position information in the three-dimensional map scene;
and loading a virtual camera model at the coordinate point to realize the positioning of the camera.
3. The method of claim 1, wherein the data information further comprises camera pose information;
said projecting video information at the position of the three-dimensional map scene comprises:
projecting the video information at the position of the three-dimensional map scene based on the pose information.
4. The method of claim 1, wherein after collecting the data information of each camera in the designated area, the method further comprises:
pre-processing images contained in the video information.
5. The method of claim 4, wherein said stitching and fusing each piece of projected video information comprises:
performing video stitching on each piece of projected video information;
and fusing with the three-dimensional map scene according to the stitching result.
6. The method of claim 5, wherein said performing video stitching on each piece of projected video information comprises:
determining a bounding box of each piece of projected video information, wherein the bounding box contains edge pixel feature points of the video information;
when adjacent bounding boxes are determined to contain identical pixel feature points, retaining the identical pixel feature points in one bounding box and deleting them from the other bounding box, to obtain a new video bounding box;
and obtaining a stitching result of the projected video information based on the new video bounding box, wherein the stitching result comprises a rendering area for each piece of projected video information.
7. The method of claim 6, wherein said fusing with the three-dimensional map scene according to the stitching result comprises:
adjusting the corresponding video information in the rendering area of each piece of projected video information;
and displaying the adjustment result as a whole in the three-dimensional map scene to realize the fusion with the three-dimensional map scene.
8. The method according to claim 1, wherein said performing video monitoring according to the stitching and fusion result comprises:
displaying the stitching and fusion result, and giving an alarm when a preset scene appears in the stitching and fusion result.
9. An apparatus for video surveillance in a three-dimensional scene, the apparatus comprising:
the three-dimensional map scene construction module is used for carrying out three-dimensional modeling on a designated area to construct a three-dimensional map scene;
the data information acquisition module is used for acquiring data information of each camera in the designated area, wherein the data information comprises video information and position information;
the video information projection module is used for positioning the camera in the three-dimensional map scene according to the position information and projecting the video information at the positioned location in the three-dimensional map scene;
and the stitching and fusion module is used for stitching and fusing each piece of projected video information and performing video monitoring according to the stitching and fusion result.
10. An electronic device, characterized in that the electronic device comprises:
one or more processors;
storage means for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more programs cause the one or more processors to implement the method of any one of claims 1-8.
11. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 8.
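The bounding-box ("outsourcing frame") stitching of claim 6 can be illustrated with a simplified sketch. Here a box is reduced to a set of hashable feature-point coordinates, and the function name is an illustrative assumption, not part of the claimed method:

```python
from typing import List, Set, Tuple

Pixel = Tuple[int, int]

def stitch_bounding_boxes(boxes: List[Set[Pixel]]) -> List[Set[Pixel]]:
    """Simplified model: when adjacent boxes contain identical pixel
    feature points, the shared points are retained in the earlier box
    and deleted from the later one, so the resulting rendering areas
    are disjoint."""
    merged: List[Set[Pixel]] = []
    kept: Set[Pixel] = set()
    for box in boxes:
        deduped = box - kept     # delete points already kept earlier
        merged.append(deduped)   # this box's rendering area
        kept |= deduped
    return merged
```

In a real stitcher the feature points would come from image matching (e.g. corner descriptors along frame edges), and each disjoint area would then be rendered and fused into the three-dimensional map scene as described in claim 7.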
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011182441.XA CN112383746A (en) | 2020-10-29 | 2020-10-29 | Video monitoring method and device in three-dimensional scene, electronic equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011182441.XA CN112383746A (en) | 2020-10-29 | 2020-10-29 | Video monitoring method and device in three-dimensional scene, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112383746A true CN112383746A (en) | 2021-02-19 |
Family
ID=74576960
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011182441.XA Pending CN112383746A (en) | 2020-10-29 | 2020-10-29 | Video monitoring method and device in three-dimensional scene, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112383746A (en) |
- 2020-10-29: CN application CN202011182441.XA, patent CN112383746A/en — active, Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110007150A1 (en) * | 2009-07-13 | 2011-01-13 | Raytheon Company | Extraction of Real World Positional Information from Video |
CN104320616A (en) * | 2014-10-21 | 2015-01-28 | 广东惠利普路桥信息工程有限公司 | Video monitoring system based on three-dimensional scene modeling |
CN107993276A (en) * | 2016-10-25 | 2018-05-04 | 杭州海康威视数字技术股份有限公司 | The generation method and device of a kind of panoramic picture |
CN107197209A (en) * | 2017-06-29 | 2017-09-22 | 中国电建集团成都勘测设计研究院有限公司 | The dynamic method for managing and monitoring of video based on panorama camera |
CN110798677A (en) * | 2018-08-01 | 2020-02-14 | Oppo广东移动通信有限公司 | Three-dimensional scene modeling method and device, electronic device, readable storage medium and computer equipment |
CN110072087A (en) * | 2019-05-07 | 2019-07-30 | 高新兴科技集团股份有限公司 | Video camera interlock method, device, equipment and storage medium based on 3D map |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113115015A (en) * | 2021-02-25 | 2021-07-13 | 北京邮电大学 | Multi-source information fusion visualization method and system |
CN113162229A (en) * | 2021-03-24 | 2021-07-23 | 北京潞电电气设备有限公司 | Monitoring device and method thereof |
CN113259624A (en) * | 2021-03-24 | 2021-08-13 | 北京潞电电气设备有限公司 | Monitoring equipment and method thereof |
CN113271434A (en) * | 2021-03-24 | 2021-08-17 | 北京潞电电气设备有限公司 | Monitoring system and method thereof |
CN114582188A (en) * | 2022-01-26 | 2022-06-03 | 广州市乐拓电子科技有限公司 | AR-based immersive simulation physical training room |
CN118555352A (en) * | 2023-02-27 | 2024-08-27 | 腾讯科技(深圳)有限公司 | Video generation method, device, computer equipment and storage medium |
CN116760963A (en) * | 2023-06-13 | 2023-09-15 | 中影电影数字制作基地有限公司 | Video panorama stitching and three-dimensional fusion method and device |
CN116582653B (en) * | 2023-07-14 | 2023-10-27 | 广东天亿马信息产业股份有限公司 | Intelligent video monitoring method and system based on multi-camera data fusion |
CN116582653A (en) * | 2023-07-14 | 2023-08-11 | 广东天亿马信息产业股份有限公司 | Intelligent video monitoring method and system based on multi-camera data fusion |
CN117395369A (en) * | 2023-10-11 | 2024-01-12 | 浪潮通用软件有限公司 | Method, device, and medium for regional overcrowding control based on multi-source video fusion |
CN117395369B (en) * | 2023-10-11 | 2024-11-19 | 浪潮通用软件有限公司 | Method, device, and medium for regional overcrowding control based on multi-source video fusion |
CN117495694A (en) * | 2023-11-09 | 2024-02-02 | 大庆安瑞达科技开发有限公司 | Method for fusing video and map three-dimensional scene, electronic equipment and storage medium |
CN117495694B (en) * | 2023-11-09 | 2024-05-31 | 大庆安瑞达科技开发有限公司 | Method for fusing video and map three-dimensional scene, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112383746A (en) | Video monitoring method and device in three-dimensional scene, electronic equipment and storage medium | |
CN111586360B (en) | Unmanned aerial vehicle projection method, device, equipment and storage medium | |
US11526992B2 (en) | Imagery-based construction progress tracking | |
CN108805917B (en) | Method, medium, apparatus and computing device for spatial localization | |
CN109931945B (en) | AR navigation method, device, equipment and storage medium | |
CN113378605B (en) | Multi-source information fusion method and device, electronic equipment and storage medium | |
CN111429518B (en) | Labeling method, labeling device, computing equipment and storage medium | |
WO2023093217A1 (en) | Data labeling method and apparatus, and computer device, storage medium and program | |
CN109961522A (en) | Image projection method, apparatus, device and storage medium | |
CN108509621B (en) | Scenic spot identification method, device, server and storage medium for scenic spot panoramic image | |
CN112714266B (en) | Method and device for displaying labeling information, electronic equipment and storage medium | |
JP7207073B2 (en) | Inspection work support device, inspection work support method and inspection work support program | |
KR20180017108A (en) | Display of objects based on multiple models | |
CN113034347A (en) | Oblique photographic image processing method, device, processing equipment and storage medium | |
CN113496503A (en) | Point cloud data generation and real-time display method, device, equipment and medium | |
CN113836337A (en) | BIM display method, device, equipment and storage medium | |
CN112465971B (en) | Method and device for guiding point positions in model, storage medium and electronic equipment | |
CN111107307A (en) | Video fusion method, system, terminal and medium based on homography transformation | |
CN109883414B (en) | Vehicle navigation method and device, electronic equipment and storage medium | |
CN109887078B (en) | Sky drawing method, device, equipment and medium | |
Kamat et al. | GPS and 3DOF tracking for georeferenced registration of construction graphics in outdoor augmented reality | |
RU2679200C1 (en) | Data from the video camera displaying method and system | |
CN110853098A (en) | Robot positioning method, device, equipment and storage medium | |
CN111127661A (en) | Data processing method and device and electronic equipment | |
CN114089836B (en) | Labeling method, terminal, server and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210219 |