CN112489121A - Video fusion method, device, equipment and storage medium - Google Patents
- Publication number
- CN112489121A (application number CN201910857230.2A)
- Authority
- CN
- China
- Prior art keywords
- video
- video data
- time
- dimensional space
- acquiring
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis; G06T7/70—Determining position or orientation of objects or cameras; G06T7/73—using feature-based methods
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects; G06T17/05—Geographic models
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T2207/00—Indexing scheme for image analysis or image enhancement; G06T2207/10—Image acquisition modality; G06T2207/10016—Video; Image sequence
- G06T2207/20—Special algorithmic details; G06T2207/20212—Image combination; G06T2207/20221—Image fusion; Image merging
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Remote Sensing (AREA)
- Computer Graphics (AREA)
- Processing Or Creating Images (AREA)
Abstract
The embodiment of the application discloses a video fusion method, a video fusion device, video fusion equipment and a computer-readable storage medium. Video data are acquired, and shooting parameters corresponding to the video data are acquired; a motion trail and a motion attitude corresponding to the video data are acquired according to the shooting parameters; a three-dimensional space scene is acquired, the video data is loaded in the three-dimensional space scene according to the motion trail and the motion attitude, and initial coordinates corresponding to the video data are acquired; projection coordinates of the video data in the three-dimensional space scene are calculated according to the shooting parameters and the video data; the initial coordinates are adjusted according to the projection coordinates to obtain adjusted coordinates; and the video data and the three-dimensional space scene are fused according to the adjusted coordinates. Fusion of a motion video with a three-dimensional scene is thereby realized, and the accuracy and efficiency of acquiring a specific position in the video data are improved.
Description
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a video fusion method, apparatus, device, and storage medium.
Background
At present, common positioning algorithms are studied on two-dimensional or three-dimensional spatial information platforms without video fusion. In practical applications, however, positioning and tracking require not only determining the coordinates of a moving target but also knowing the specific space in which the moving target is located. A two-dimensional or three-dimensional spatial information platform without video fusion cannot determine the coordinates of the moving target, so the fusion of video with three-dimensional spatial information has large deviation and very low accuracy.
Disclosure of Invention
The embodiment of the application provides a video fusion method, a video fusion device, video fusion equipment and a storage medium, which can determine the coordinates of a moving target, thereby improving the accuracy and precision of the fusion.
In a first aspect, an embodiment of the present application provides a video fusion method, including:
acquiring video data and acquiring shooting parameters corresponding to the video data;
acquiring a motion trail and a motion attitude corresponding to the video data according to the shooting parameters;
acquiring a three-dimensional space scene, loading the video data in the three-dimensional space scene according to the motion trail and the motion attitude, and acquiring initial coordinates corresponding to the video data;
calculating projection coordinates of the video data in the three-dimensional space scene according to the shooting parameters and the video data;
adjusting the initial coordinate according to the projection coordinate to obtain an adjusted coordinate;
and fusing the video data and the three-dimensional space scene according to the adjusted coordinates.
In some embodiments, the shooting parameters include an angular pose, and the acquiring of the shooting parameters corresponding to the video data includes:
acquiring an initial geographic coordinate and an initial angle posture;
acquiring a sequence image of the video data;
extracting feature points of adjacent sequence images;
acquiring a basic matrix of adjacent sequence images according to the characteristic points;
fusing the basic matrix and the motion matrix corresponding to the motion trail to obtain a fused matrix;
adjusting the initial angle posture through the fused matrix to obtain an angle posture, and adjusting the initial geographic coordinate through the fused matrix to obtain an adjusted geographic coordinate.
In some embodiments, the calculating the projection coordinates of the video data in the three-dimensional space scene according to the shooting parameters and the video data further includes:
acquiring a video frame of the video data, and acquiring an initial pixel coordinate of the video frame;
carrying out distortion correction on the video frame based on the initial pixel coordinates of the video frame to obtain the pixel coordinates of the video frame;
acquiring camera coordinates corresponding to the video data through coordinate conversion based on the pixel coordinates and camera internal parameters;
and acquiring the projection coordinates of the video data in the three-dimensional space scene according to the camera external parameters and the camera coordinates.
In some embodiments, the shooting parameters include a positioning time, and after the fusing of the video data with the three-dimensional space scene according to the adjusted coordinates, the method further includes:
acquiring a three-dimensional rendering frame rate and a video frame rate in video data;
matching rendering time of a three-dimensional space corresponding to the three-dimensional rendering frame rate, video time corresponding to the video frame rate and the positioning time to obtain matched time;
moving the projection coordinate according to the motion trail data, the angle posture and the matched time;
and fusing the video data and the three-dimensional space scene according to the moved projection coordinates.
In some embodiments, the matching the rendering time of the three-dimensional space corresponding to the three-dimensional rendering frame rate and the video time corresponding to the video frame rate with the positioning time to obtain the matched time includes:
acquiring a three-dimensional rendering frame rate, and setting the three-dimensional rendering frame rate as a video frame rate;
acquiring rendering time of a corresponding three-dimensional space according to the three-dimensional rendering frame rate;
acquiring the video time according to the video frame rate;
and matching the video time, the rendering time of the three-dimensional space and the positioning time with one another to obtain the matched video time, rendering time of the three-dimensional space and positioning time.
In some embodiments, the matching the rendering time of the three-dimensional space corresponding to the three-dimensional rendering frame rate and the video time corresponding to the video frame rate with the positioning time to obtain the matched time includes:
setting the three-dimensional rendering frame rate to a preset value;
acquiring rendering time of a corresponding three-dimensional space according to the three-dimensional rendering frame rate;
calculating the playing time of the video data according to the rendering time of the three-dimensional space;
and matching the video time, the rendering time of the three-dimensional space and the positioning time according to the rendering time of the three-dimensional space, the playing time and the positioning time to obtain the matched video time, rendering time of the three-dimensional space and positioning time.
In some embodiments, after the fusing the video data with the three-dimensional space scene according to the adjusted coordinates, the method further includes:
receiving a split screen display instruction;
and respectively displaying the video data and the fused three-dimensional space scene according to the split-screen display instruction.
In a second aspect, an embodiment of the present application further provides a video fusion apparatus, including:
the device comprises a first acquisition unit, a second acquisition unit and a third acquisition unit, wherein the first acquisition unit is used for acquiring video data and acquiring shooting parameters corresponding to the video data; acquiring a motion trail and a motion attitude corresponding to the video data according to the shooting parameters; acquiring a three-dimensional space scene, loading the video data in the three-dimensional space scene according to the motion track and the motion posture, and acquiring initial coordinates corresponding to the video data;
the calculating unit is used for calculating the projection coordinates of the video data in the three-dimensional space scene according to the shooting parameters and the video data;
the adjusting unit is used for adjusting the initial coordinate according to the projection coordinate to obtain an adjusted coordinate;
and the first fusion unit is used for fusing the video data and the three-dimensional space scene according to the adjusted coordinates.
In some embodiments, the first obtaining unit includes:
the first acquisition subunit is used for acquiring an initial geographic coordinate and an initial angle posture; acquiring a sequence image of the video data;
the extraction subunit is used for extracting the characteristic points of the adjacent sequence images;
the second acquisition subunit is used for acquiring a basic matrix of the adjacent sequence images according to the characteristic points;
the fusion subunit is used for fusing the basic matrix and the motion matrix corresponding to the motion trail to obtain a fused matrix;
and the adjusting subunit is used for adjusting the initial angular posture through the fused matrix to obtain an angular posture, and adjusting the initial geographic coordinate through the fused matrix to obtain an adjusted geographic coordinate.
In some embodiments, the computing unit includes:
the third acquisition subunit is used for acquiring a video frame of the video data and acquiring initial pixel coordinates of the video frame;
the correction subunit is used for carrying out distortion correction on the video frame based on the initial pixel coordinates of the video frame to obtain the pixel coordinates of the video frame;
the fourth acquisition subunit is used for acquiring camera coordinates corresponding to the video data through coordinate conversion based on the pixel coordinates and the camera internal parameters;
and the fifth acquiring subunit is configured to acquire the projection coordinates of the video data in the three-dimensional space scene according to the camera external parameters and the camera coordinates.
In some embodiments, the video fusion apparatus further includes:
the second acquisition unit is used for acquiring the three-dimensional rendering frame rate and the video frame rate in the video data;
the matching unit is used for matching rendering time of a three-dimensional space corresponding to the three-dimensional rendering frame rate, video time corresponding to the video frame rate and the positioning time to obtain matched time;
the moving unit is used for moving the projection coordinate according to the motion trail data, the angle posture and the matched time;
and the second fusion unit is used for fusing the video data and the three-dimensional space scene according to the moved projection coordinates.
In some embodiments, the matching unit includes:
a sixth obtaining subunit, configured to obtain a three-dimensional rendering frame rate, and set the three-dimensional rendering frame rate as a video frame rate; acquiring rendering time of a corresponding three-dimensional space according to the three-dimensional rendering frame rate; acquiring the video time according to the video frame rate;
and the first matching subunit is used for matching the video time, the rendering time of the three-dimensional space and the positioning time according to the video time, the rendering time of the three-dimensional space and the positioning time to obtain the matched video time, rendering time of the three-dimensional space and positioning time.
In some embodiments, the matching unit includes:
the setting subunit is used for setting the three-dimensional rendering frame rate to a preset value;
a seventh obtaining subunit, configured to obtain, according to the three-dimensional rendering frame rate, rendering time of a corresponding three-dimensional space;
the calculating subunit is used for calculating the playing time of the video data according to the rendering time of the three-dimensional space;
and the second matching subunit is used for matching the video time, the rendering time of the three-dimensional space and the positioning time according to the rendering time, the playing time and the positioning time of the three-dimensional space to obtain the matched video time, the rendering time of the three-dimensional space and the positioning time.
In some embodiments, the video fusion apparatus further includes:
the receiving unit is used for receiving a split-screen display instruction;
and the display unit is used for respectively displaying the video data and the fused three-dimensional space scene according to the split-screen display instruction.
In a third aspect, an embodiment of the present application further provides an apparatus, where the apparatus includes a processor and a memory, where the memory stores program codes, and the processor executes the video fusion method as described above when calling the program codes in the memory.
In a fourth aspect, the present application further provides a storage medium, where the storage medium stores a computer program, and the program is loaded by a processor to execute the video fusion method described above.
The method acquires video data, where the video data is motion video data, and acquires shooting parameters corresponding to the video data; a motion trail and a motion attitude corresponding to the video data are acquired according to the shooting parameters; a three-dimensional space scene is acquired, the video data is loaded in the three-dimensional space scene according to the motion trail and the motion attitude, and initial coordinates corresponding to the video data are acquired, so that the three-dimensional space scene does not need to be constructed anew and the efficiency of loading the video data is improved. Projection coordinates of the video data in the three-dimensional space scene are calculated according to the shooting parameters and the video data so as to determine the coordinates of a moving target in the video data, and the initial coordinates are adjusted according to the projection coordinates to obtain adjusted coordinates. The video data and the three-dimensional space scene are fused according to the adjusted coordinates, so that the motion video data is accurately mapped into a real three-dimensional scene and rendered integrally with other geospatial data; the geographic elements of the three-dimensional space scene are in turn mapped onto the video data, enhancing the information of the video image. Because the efficiency of loading the video data is improved, the efficiency of fusing the video with the three-dimensional space scene is also improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic flowchart of a video fusion method provided in an embodiment of the present application;
fig. 2 is a schematic structural diagram of a video fusion apparatus provided in an embodiment of the present application;
fig. 3 is a schematic structural diagram of an apparatus provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating a video fusion method according to an embodiment of the present application. The execution body of the video fusion method may be the video fusion device provided in the embodiment of the present application, or an apparatus such as a terminal or a server integrated with the video fusion device. The apparatus may be a smart phone, a tablet computer, a palmtop computer, a notebook computer, a fixed computer, a server, or the like that is equipped with a camera and an IMU (Inertial Measurement Unit). The video fusion method may include the following steps:
s101, acquiring video data and acquiring shooting parameters corresponding to the video data.
Specifically, in this embodiment, the video data may be obtained by shooting with a shooting device installed on a vehicle, such as a tricycle, a bus, or a two-wheeled electric vehicle, or with a handheld shooting device, such as a mobile phone, which is not limited herein. Shooting parameters corresponding to the video data are then obtained. The shooting parameters are the parameters acquired by the shooting device while capturing the video data, and include the positioning time, positioning coordinates, angular pose and the like acquired by an Inertial Measurement Unit (IMU) and a Global Positioning System (GPS) in the shooting device.
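For illustration only, the shooting parameters described above could be gathered into a simple per-sample record. The field names and units in the sketch below are assumptions made for this example, not terms defined by the application.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class ShootingParams:
    """One IMU/GPS sample accompanying the video data (illustrative layout)."""
    positioning_time: float                        # timestamp of the sample, in seconds
    positioning_coord: Tuple[float, float, float]  # GPS position (e.g. longitude, latitude, altitude)
    angular_pose: Tuple[float, float, float]       # IMU attitude (e.g. roll, pitch, yaw in radians)
    acceleration: Tuple[float, float, float]       # IMU acceleration along X, Y, Z
```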
Specifically, when the shooting parameters include an angular posture, the process of acquiring the angular posture may include:
acquiring an initial geographic coordinate and an initial angle posture;
acquiring a sequence image of the video data;
extracting feature points of adjacent sequence images;
acquiring a basic matrix of adjacent sequence images according to the characteristic points;
fusing the basic matrix and the motion matrix corresponding to the motion trail to obtain a fused matrix;
adjusting the initial angle posture through the fused matrix to obtain an angle posture, and adjusting the initial geographic coordinate through the fused matrix to obtain an adjusted geographic coordinate.
An initial geographic coordinate and an initial angular pose are acquired, where the initial angular pose is the angular pose acquired by the IMU and the initial geographic coordinate is the actual positioning coordinate when the shooting module captures the video. A sequence of images of the video data is then acquired, and feature points of adjacent sequence images are extracted, specifically by the SURF feature extraction method. A basic matrix (fundamental matrix) of the adjacent sequence images is acquired from the feature points according to the epipolar geometry principle. The basic matrix is fused with the motion matrix corresponding to the motion trail, i.e., the motion matrix acquired by the IMU, to obtain a fused matrix. The initial angular pose is adjusted by the fused matrix to obtain the angular pose, and the initial geographic coordinate is adjusted by the fused matrix to obtain the adjusted geographic coordinate. Accuracy correction of the geographic coordinates and angular poses is thereby realized, a more accurate angular pose is obtained, and subsequent video fusion becomes more accurate.
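The feature-matching step above can be sketched with OpenCV as follows. ORB features are used here as a stand-in for the SURF features named in the text (SURF is not available in all stock OpenCV builds), and the final fusion of the fundamental matrix with the IMU motion matrix is shown only as a placeholder, since the application does not spell out that operation.

```python
import cv2
import numpy as np

def fundamental_matrix(frame_a, frame_b):
    """Estimate the fundamental (basic) matrix between two adjacent sequence images."""
    orb = cv2.ORB_create(2000)                        # stand-in for SURF feature extraction
    kp_a, des_a = orb.detectAndCompute(frame_a, None)
    kp_b, des_b = orb.detectAndCompute(frame_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
    # Epipolar geometry: F is estimated from the matched feature points with RANSAC.
    F, _ = cv2.findFundamentalMat(pts_a, pts_b, cv2.FM_RANSAC)
    return F

def fuse_with_imu(F, imu_motion_matrix, alpha=0.5):
    """Placeholder fusion of the vision-derived matrix with the IMU motion matrix.
    A weighted blend is only one conceivable scheme; the actual rule is not specified."""
    return alpha * F + (1.0 - alpha) * np.asarray(imu_motion_matrix)
```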
And S102, acquiring a motion track and a motion posture corresponding to the video data according to the shooting parameters.
Specifically, the motion trail corresponding to the video data is the motion trail of a moving target in the video data. The displacement, speed and acceleration of the moving target in three directions, namely along the X, Y and Z axes, are obtained from the positioning coordinate of the moving target at each moment, and the motion trail of the moving target is constructed from this per-axis displacement, speed and acceleration. The motion attitude consists of the positioning time, positioning coordinates, acceleration and the like acquired by the IMU.
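A minimal sketch of deriving the per-axis displacement, speed and acceleration from the timestamped positioning coordinates is given below; the use of finite differences is an assumption about how the derivatives are computed.

```python
import numpy as np

def build_motion_trail(coords, times):
    """coords: (N, 3) array of X/Y/Z positioning coordinates; times: (N,) timestamps."""
    coords = np.asarray(coords, dtype=float)
    times = np.asarray(times, dtype=float)
    dt = np.diff(times)[:, None]                       # time step per interval
    displacement = np.diff(coords, axis=0)             # per-axis displacement per interval
    speed = displacement / dt                          # per-axis speed
    acceleration = np.diff(speed, axis=0) / dt[1:]     # per-axis acceleration
    return displacement, speed, acceleration
```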
S103, acquiring a three-dimensional space scene, loading the video data in the three-dimensional space scene according to the motion track and the motion posture, and acquiring initial coordinates corresponding to the video data.
A three-dimensional space scene is obtained, and digital terrain, aerial or satellite images, three-dimensional building models, road models, vector points, lines, surfaces and the like are loaded in the three-dimensional space scene. The three-dimensional space scene may be a pre-established three-dimensional virtual space scene, which may specifically be built from the collected images or models, namely the aerial or satellite images, three-dimensional building models, road models, vector points, lines, surfaces and the like, fused together into the three-dimensional virtual space scene. The video data is then loaded in the three-dimensional space scene according to the motion trail and the motion attitude to form a target three-dimensional space scene with initial space coordinates. The loading may be performed online in real time through the GB28281 protocol or offline, which is not limited herein.
And S104, calculating the projection coordinates of the video data in the three-dimensional space scene according to the shooting parameters and the video data.
The shooting parameters may include camera external parameters and camera internal parameters, that is, projection coordinates of the video data in the three-dimensional space scene may be calculated according to the camera internal parameters and the camera external parameters, and images in the video data.
Specifically, step S104 may include:
acquiring a video frame of the video data, and acquiring an initial pixel coordinate of the video frame;
carrying out distortion correction on the video frame based on the initial pixel coordinates of the video frame to obtain the pixel coordinates of the video frame;
acquiring camera coordinates corresponding to the video data through coordinate conversion based on the pixel coordinates and camera internal parameters;
and acquiring the projection coordinates of the video data in the three-dimensional space scene according to the camera external parameters and the camera coordinates.
Each video frame in the video data is first obtained; specifically, the video data is converted from an electrical signal into an image signal in a compressed state, and the compressed image signal is then decompressed to obtain the video frames. A coordinate system is then established with the upper-left corner of the video frame as the origin and the pixel as the unit, so as to obtain the initial pixel coordinates of each pixel point in the video frame. Due to camera manufacturing process variations, refractive errors of the incident light rays as they pass through the lenses, CCD (Charge-Coupled Device) lattice position errors and the like, the actual optical system has nonlinear geometric distortion, so various geometric deviations exist between the actual image point and the theoretical image point. Distortion correction therefore needs to be performed on the video frame: the lens distortion coefficients and the distortion model are obtained from the camera internal parameters, and the initial pixel coordinates are adjusted through the distortion model and the lens distortion coefficients to obtain the corrected pixel coordinates of the original image. Then, based on the pixel coordinates and the camera internal parameters, the camera coordinates corresponding to the video data are obtained through coordinate conversion. The conversion relationship is: u - u0 = f·s_x·x/z = f_x·x/z and v - v0 = f·s_y·y/z = f_y·y/z, where f·s_x = f_x and f·s_y = f_y are defined as the effective focal lengths in the X and Y directions respectively, f_x and f_y are camera intrinsic parameters, and (u0, v0) is the pixel-coordinate origin; the lens distortion coefficients (k, s, p) are further combined, thereby obtaining the camera coordinates. Then, the projection coordinates of the video data in the three-dimensional space scene are acquired according to the camera external parameters and the camera coordinates. The conversion relationship is [x_c, y_c, z_c]^T = R·[x_p, y_p, z_p]^T + T, where (x_p, y_p, z_p) are the projection coordinates, T = (t_x, t_y, t_z)^T is the coordinate of the origin of the projection coordinate system in the camera coordinate system, and R is an orthogonal rotation matrix satisfying the constraint conditions r11² + r12² + r13² = 1, r21² + r22² + r23² = 1 and r31² + r32² + r33² = 1; R and t_x, t_y, t_z are the camera extrinsic parameters.
The geometric and optical characteristics inside the camera (internal parameters) and the coordinate relation of the camera in the three-dimensional world (external parameters) are determined using reference point coordinates (x, y, z) and the corresponding image coordinates (u, v) in the video frame. The internal parameters include the lens distortion coefficients (k, s, p), the image coordinate origin (u0, v0), and the like. The external parameters include the orthogonal rotation matrix R and the translation vector t of the camera coordinate system relative to the world coordinate system.
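A sketch of the pixel-to-camera-to-projection chain described above, using OpenCV for the distortion correction. The availability of a depth value z for the point, the function interface, and the direction of the rigid transform are assumptions of this sketch; K, dist_coeffs, R and T correspond to the internal and external parameters above.

```python
import cv2
import numpy as np

def pixel_to_projection(u, v, z, K, dist_coeffs, R, T):
    """Map a pixel (u, v) at depth z (in the camera frame) to projection coordinates."""
    # 1. Distortion correction: recover the ideal normalized coordinates (x/z, y/z).
    pts = np.array([[[u, v]]], dtype=np.float64)
    x_over_z, y_over_z = cv2.undistortPoints(pts, K, dist_coeffs)[0, 0]
    # 2. Camera coordinates via the pinhole relations u - u0 = f_x*x/z, v - v0 = f_y*y/z.
    cam = np.array([x_over_z * z, y_over_z * z, z])
    # 3. Camera -> projection coordinates by inverting X_cam = R @ X_proj + T.
    return R.T @ (cam - np.asarray(T).reshape(3))
```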
And S105, adjusting the initial coordinate according to the projection coordinate to obtain an adjusted coordinate.
And adjusting the initial coordinate according to the calculated projection coordinate, for example, adjusting the coordinate value of the initial coordinate to the coordinate value of the projection coordinate, and correspondingly moving the coordinate value of the initial coordinate to obtain the adjusted coordinate.
And S106, fusing the video data and the three-dimensional space scene according to the adjusted coordinates.
The video data and the three-dimensional space scene are scene-matched according to the adjusted coordinates and then fused. Further, after the video data and the three-dimensional space scene are fused, the fused three-dimensional space scene may be displayed. It may be displayed alone, or in a split-screen manner, that is, both the video data and the fused three-dimensional space scene are displayed on the screen. When displaying in a split-screen manner, the display may be synchronized in time, meaning that the content shown in the video data corresponds to the content shown in the fused three-dimensional space scene; for example, if the displayed picture of the video data is a house, the displayed picture of the fused three-dimensional space scene is also that house. In a specific implementation, the video image may be dynamically projected onto the surface of the scene model, or the video image may move in the three-dimensional scene along a track while the three-dimensional scene camera moves along the camera track and the video is played; alternatively, the video scene may be locked and the three-dimensional scene reversely fused onto the video for display, and so on.
Further, since the video data may be video acquired in real time, real-time fusion may be performed. Specifically, step S106 further includes:
acquiring a three-dimensional rendering frame rate and a video frame rate in video data;
matching rendering time of a three-dimensional space corresponding to the three-dimensional rendering frame rate, video time corresponding to the video frame rate and the positioning time to obtain matched time;
moving the projection coordinate according to the motion trail data, the angle posture and the matched time;
and fusing the video data and the three-dimensional space scene according to the moved projection coordinates.
The three-dimensional rendering frame rate and the video frame rate in the video data are acquired, where the three-dimensional rendering frame rate refers to the number of picture frames refreshed per second when the fused three-dimensional space scene is displayed, and the video frame rate refers to the number of picture frames refreshed per second when the video is played. The rendering time of the three-dimensional space corresponding to the three-dimensional rendering frame rate, the video time corresponding to the video frame rate and the positioning time are matched to obtain the matched time; that is, the rendering time of the three-dimensional space, the video time and the positioning time are synchronized. Time matching may be performed with either the three-dimensional rendering frame rate or the video frame rate as the reference. The projection coordinates are then moved according to the motion trail data, the angular pose and the matched time, i.e., as the video is updated (including updates to the time, the motion trail and the angular pose), the projection coordinates are moved accordingly, and the video data and the three-dimensional space scene are fused according to the moved projection coordinates. The video data and the three-dimensional space scene are thus fused in real time, and real-time positioning and tracking of a moving target in the video data can be realized.
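The "moving the projection coordinates" step can be illustrated as below. Linearly interpolating the motion trail and angular pose at the matched time, and shifting the projection coordinates by the resulting displacement, is an interpretation made only for this sketch; the application does not prescribe a specific rule.

```python
import numpy as np

def move_projection(proj_coords, trail_times, trail_positions, trail_poses, matched_time):
    """proj_coords: (3,) current projection coordinates; trail_positions/trail_poses: (N, 3)."""
    trail_positions = np.asarray(trail_positions, dtype=float)
    trail_poses = np.asarray(trail_poses, dtype=float)
    pos = np.array([np.interp(matched_time, trail_times, trail_positions[:, i]) for i in range(3)])
    pose = np.array([np.interp(matched_time, trail_times, trail_poses[:, i]) for i in range(3)])
    offset = pos - trail_positions[0]          # displacement of the target since the first sample
    return np.asarray(proj_coords) + offset, pose
```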
Specifically, when the video frame rate is used as a reference to perform time matching, matching the rendering time of the three-dimensional space corresponding to the three-dimensional rendering frame rate, the video time corresponding to the video frame rate, and the positioning time, and obtaining the matched time includes:
acquiring a three-dimensional rendering frame rate, and setting the three-dimensional rendering frame rate as a video frame rate;
acquiring rendering time of a corresponding three-dimensional space according to the three-dimensional rendering frame rate;
acquiring the video time according to the video frame rate;
and matching the video time, the rendering time of the three-dimensional space and the positioning time according to the video time, the rendering time of the three-dimensional space and the positioning time to obtain the matched video time, rendering time of the three-dimensional space and positioning time.
Specifically, a three-dimensional rendering frame rate is obtained, that is, an initial three-dimensional rendering frame rate, which may be a common playback frame rate or 0. The obtained initial three-dimensional rendering frame rate is then set equal to the video frame rate, i.e., set to the same value as the video frame rate. The rendering time of the corresponding three-dimensional space, that is, the time at which each picture is rendered per second, is obtained from the three-dimensional rendering frame rate; similarly, the video time, that is, the time at which each video image is played per second, is obtained from the video frame rate. The video time, the rendering time of the three-dimensional space and the positioning time are then matched with one another to obtain the matched video time, rendering time of the three-dimensional space and positioning time. Because the video time and the rendering time of the three-dimensional space are already consistent, only the video time needs to be matched with the positioning time; specifically, the positioning time at which shooting starts is matched with the video time. For example, if video shooting starts from the 2nd second, the 2nd second of the positioning time is matched with the 1st second of the video time, so that the matching of the video time, the rendering time of the three-dimensional space and the positioning time is realized.
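A small sketch of the matching scheme that takes the video frame rate as the reference. The return layout (one tuple per frame) and the way the positioning time is offset by the capture start time follow the 2-second example above and are otherwise assumptions of the sketch.

```python
def match_times_video_reference(video_fps, n_frames, capture_start_time):
    """Return (video_time, render_time, positioning_time) triples for each frame."""
    render_fps = video_fps                            # render frame rate := video frame rate
    matched = []
    for i in range(n_frames):
        video_t = i / video_fps                       # video time of frame i
        render_t = i / render_fps                     # rendering time of the 3D space (equal by construction)
        positioning_t = capture_start_time + video_t  # positioning time aligned to the shooting start
        matched.append((video_t, render_t, positioning_t))
    return matched

# e.g. if shooting started at the 2nd second of positioning time:
# match_times_video_reference(25.0, 3, 2.0) -> [(0.0, 0.0, 2.0), (0.04, 0.04, 2.04), ...]
```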
Specifically, when the three-dimensional rendering frame rate is used as a reference to perform time matching, matching the rendering time of the three-dimensional space corresponding to the three-dimensional rendering frame rate, the video time corresponding to the video frame rate, and the positioning time, and obtaining the matched time includes:
setting the three-dimensional rendering frame rate to a preset value;
acquiring rendering time of a corresponding three-dimensional space according to the three-dimensional rendering frame rate;
calculating the playing time of the video data according to the rendering time of the three-dimensional space;
and matching the video time, the rendering time of the three-dimensional space and the positioning time according to the rendering time, the playing time and the positioning time of the three-dimensional space to obtain the matched video time, rendering time of the three-dimensional space and positioning time.
Specifically, the three-dimensional rendering frame rate is set to a preset value, which can be chosen according to the user's requirements. For example, when the user is accustomed to playback at a higher speed, the video playing time is shorter, and the three-dimensional rendering frame rate can be set to a higher value: the rendering time of the corresponding three-dimensional space is obtained from the set three-dimensional rendering frame rate, the playing time of the video data is calculated from the rendering time of the three-dimensional space, and the calculated playing time is therefore shorter, i.e., it corresponds to the three-dimensional rendering frame rate set by the user. For instance, if the video playing time is one hour at the normal 1.0x speed, then at 1.5x speed the playback can be completed in 40 minutes; accordingly, when the user is accustomed to a higher playback speed, the three-dimensional rendering frame rate may be set to a higher value so that the calculated playing time is shorter and matches the user's habit. Specifically, the three-dimensional rendering frame rate is set according to the received preset value, the rendering time of the corresponding three-dimensional space is obtained from the set three-dimensional rendering frame rate, and the playing time of the video data is calculated from the rendering time of the three-dimensional space, specifically from the size of the video data and the rendering time of the three-dimensional space. The video time, the rendering time of the three-dimensional space and the positioning time are then matched according to the rendering time of the three-dimensional space, the playing time and the positioning time to obtain the matched video time, rendering time of the three-dimensional space and positioning time; for the matching procedure, reference may be made to the matching process that takes the video frame rate as the reference, which is not repeated here.
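A companion sketch for the scheme that takes a preset three-dimensional rendering frame rate as the reference. Treating the ratio of the preset render rate to the video frame rate as a playback speed, so that a higher rate shortens the playing time as in the 1.5x example above, is an assumption of this sketch.

```python
def match_times_render_reference(render_fps, video_fps, n_frames, capture_start_time):
    """Return (video_time, render_time, positioning_time) triples under a preset render rate."""
    speed = render_fps / video_fps                    # effective playback speed
    matched = []
    for i in range(n_frames):
        video_t = i / video_fps                       # nominal video time of frame i
        render_t = video_t / speed                    # playing time is compressed by the speed factor
        positioning_t = capture_start_time + video_t
        matched.append((video_t, render_t, positioning_t))
    return matched

# e.g. a 60-minute video with render_fps = 1.5 * video_fps plays back in 40 minutes.
```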
In this embodiment, video data is obtained, where the video data is motion video data, and shooting parameters corresponding to the video data are obtained; a motion trail and a motion attitude corresponding to the video data are acquired according to the shooting parameters; a three-dimensional space scene is acquired, the video data is loaded in the three-dimensional space scene according to the motion trail and the motion attitude, and initial coordinates corresponding to the video data are acquired, so that the three-dimensional space scene does not need to be constructed anew and the efficiency of loading the video data is improved. Projection coordinates of the video data in the three-dimensional space scene are calculated according to the shooting parameters and the video data so as to determine the coordinates of a moving target in the video data, and the initial coordinates are adjusted according to the projection coordinates to obtain adjusted coordinates. The video data and the three-dimensional space scene are fused according to the adjusted coordinates, so that the motion video data is accurately mapped into the real three-dimensional scene and rendered integrally with other geospatial data; the geographic elements of the three-dimensional space scene are in turn mapped onto the video data, enhancing the information of the video image. Because the efficiency of loading the video data is improved, the efficiency of fusing the video with the three-dimensional space scene is also improved.
In order to better implement the video fusion method provided by the embodiment of the present application, an embodiment of the present application further provides a video fusion device based on the foregoing video fusion method. The terms are the same as those in the video fusion method, and specific implementation details can refer to the description in the method embodiment.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a video fusion apparatus according to an embodiment of the present disclosure, where the video fusion apparatus may include a first obtaining unit 201, a calculating unit 202, an adjusting unit 203, a first fusion unit 204, and the like.
Specifically, the video fusion apparatus includes:
a first obtaining unit 201, configured to obtain video data and obtain shooting parameters corresponding to the video data; acquiring a motion trail and a motion attitude corresponding to the video data according to the shooting parameters; acquiring a three-dimensional space scene, loading the video data in the three-dimensional space scene according to the motion track and the motion posture, and acquiring initial coordinates corresponding to the video data;
a calculating unit 202, configured to calculate projection coordinates of the video data in the three-dimensional space scene according to the shooting parameters and the video data;
an adjusting unit 203, configured to adjust the initial coordinate according to the projection coordinate, to obtain an adjusted coordinate;
a first fusion unit 204, configured to fuse the video data with the three-dimensional space scene according to the adjusted coordinates.
In some embodiments, the first obtaining unit 201 includes:
the first acquisition subunit is used for acquiring an initial geographic coordinate and an initial angle posture;
the extraction subunit is used for extracting the characteristic points of the adjacent sequence images;
the second acquisition subunit is used for acquiring a basic matrix of the adjacent sequence images according to the characteristic points;
the fusion subunit is used for fusing the basic matrix and the motion matrix corresponding to the motion trail to obtain a fused matrix;
and the adjusting subunit is used for adjusting the initial angular posture through the fused matrix to obtain an angular posture, and adjusting the initial geographic coordinate through the fused matrix to obtain an adjusted geographic coordinate.
In some embodiments, the computing unit 202 includes:
the third acquisition subunit is used for acquiring a video frame of the video data and acquiring initial pixel coordinates of the video frame;
the correction subunit is used for carrying out distortion correction on the video frame based on the initial pixel coordinates of the video frame to obtain the pixel coordinates of the video frame;
the fourth acquisition subunit is used for acquiring camera coordinates corresponding to the video data through coordinate conversion based on the pixel coordinates and the camera internal parameters;
and the fifth acquiring subunit is configured to acquire the projection coordinates of the video data in the three-dimensional space scene according to the camera external parameters and the camera coordinates.
In some embodiments, the video fusion apparatus further includes:
the second acquisition unit is used for acquiring the three-dimensional rendering frame rate and the video frame rate in the video data;
the matching unit is used for matching rendering time of a three-dimensional space corresponding to the three-dimensional rendering frame rate, video time corresponding to the video frame rate and the positioning time to obtain matched time;
the moving unit is used for moving the projection coordinate according to the motion trail data, the angle posture and the matched time;
and the second fusion unit is used for fusing the video data and the three-dimensional space scene according to the moved projection coordinates.
In some embodiments, the matching unit includes:
a sixth obtaining subunit, configured to obtain a three-dimensional rendering frame rate, and set the three-dimensional rendering frame rate as a video frame rate; acquiring rendering time of a corresponding three-dimensional space according to the three-dimensional rendering frame rate; acquiring the video time according to the video frame rate;
and the first matching subunit is used for matching the video time, the rendering time of the three-dimensional space and the positioning time according to the video time, the rendering time of the three-dimensional space and the positioning time to obtain the matched video time, rendering time of the three-dimensional space and positioning time.
In some embodiments, the matching unit includes:
the setting subunit is used for setting the three-dimensional rendering frame rate to a preset value;
a seventh obtaining subunit, configured to obtain, according to the three-dimensional rendering frame rate, rendering time of a corresponding three-dimensional space;
the calculating subunit is used for calculating the playing time of the video data according to the rendering time of the three-dimensional space;
and the second matching subunit is used for matching the video time, the rendering time of the three-dimensional space and the positioning time according to the rendering time, the playing time and the positioning time of the three-dimensional space to obtain the matched video time, the rendering time of the three-dimensional space and the positioning time.
In some embodiments, the video fusion apparatus further includes:
the receiving unit is used for receiving a split-screen display instruction;
and the display unit is used for respectively displaying the video data and the fused three-dimensional space scene according to the split-screen display instruction.
The specific implementation of the above operations can refer to the first embodiment, and is not described herein again.
Fig. 3 shows a block diagram of a specific structure of a device provided in an embodiment of the present invention, where the device is a video fusion device, and is specifically configured to implement the video fusion method provided in the foregoing embodiment. The device 400 may be a terminal such as a smart phone or a tablet computer, or a server.
As shown in fig. 3, the apparatus 400 may include RF (Radio Frequency) circuitry 110, a memory 120 including one or more computer-readable storage media (only one shown), an input unit 130, a display unit 140, a transmission module 170, a processor 180 including one or more processing cores (only one shown), and a power supply 190. Those skilled in the art will appreciate that the configuration of the apparatus 400 shown in fig. 3 is not intended to be limiting of the apparatus 400 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the RF circuit 110 is used for receiving and transmitting electromagnetic waves, and performs interconversion between the electromagnetic waves and electrical signals, so as to communicate with a communication network or other devices. The RF circuitry 110 may include various existing circuit elements for performing these functions, such as an antenna, a radio frequency transceiver, a digital signal processor, an encryption/decryption chip, a Subscriber Identity Module (SIM) card, memory, and so forth. The RF circuitry 110 may communicate with various networks such as the internet, an intranet, a wireless network, or with other devices over a wireless network. The wireless network may comprise a cellular telephone network, a wireless local area network, or a metropolitan area network. The Wireless network may use various Communication standards, protocols, and technologies, including, but not limited to, Global System for Mobile Communication (GSM), Enhanced Data GSM Environment (EDGE), Wideband Code Division Multiple Access (WCDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Wireless Fidelity (Wi-Fi) (e.g., Institute of Electrical and Electronics Engineers (IEEE) standard IEEE802.11 a, IEEE802.11 b, IEEE802.11g, and/or IEEE802.11 n), Voice over Internet Protocol (VoIP), world wide mail Access (Microwave Access for micro), wimax-1, other suitable short message protocols, and any other suitable Protocol for instant messaging, and may even include those protocols that have not yet been developed.
The memory 120 may be used to store software programs and modules, such as program instructions/modules of the video fusion method in the above-described embodiment, and the processor 180 executes various functional applications and data processing, i.e., functions of calculating the volume of the object, by running the software programs and modules stored in the memory 120. Memory 120 may include high speed random access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, memory 120 may further include memory located remotely from processor 180, which may be connected to device 400 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input unit 130 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. In particular, the input unit 130 may include a touch-sensitive surface 131 as well as other input devices 132. The touch-sensitive surface 131, also referred to as a touch display screen or a touch pad, may collect touch operations by a user on or near the touch-sensitive surface 131 (e.g., operations by a user on or near the touch-sensitive surface 131 using a finger, a stylus, or any other suitable object or attachment), and drive the corresponding connection device according to a predetermined program. Alternatively, the touch sensitive surface 131 may comprise two parts, a touch detection means and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 180, and can receive and execute commands sent by the processor 180. Additionally, the touch-sensitive surface 131 may be implemented using various types of resistive, capacitive, infrared, and surface acoustic waves. In addition to the touch-sensitive surface 131, the input unit 130 may also include other input devices 132. In particular, other input devices 132 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 140 may be used to display information input by or provided to a user and various graphical user interfaces of the device 400, which may be made up of graphics, text, icons, video, and any combination thereof. The Display unit 140 may include a Display panel 141, and optionally, the Display panel 141 may be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like. Further, the touch-sensitive surface 131 may cover the display panel 141, and when a touch operation is detected on or near the touch-sensitive surface 131, the touch operation is transmitted to the processor 180 to determine the type of the touch event, and then the processor 180 provides a corresponding visual output on the display panel 141 according to the type of the touch event. Although in FIG. 3, touch-sensitive surface 131 and display panel 141 are shown as two separate components to implement input and output functions, in some embodiments, touch-sensitive surface 131 may be integrated with display panel 141 to implement input and output functions.
The device 400, via the transport module 170 (e.g., Wi-Fi module), may assist the user in emailing, browsing web pages, accessing streaming media, etc., which provides wireless broadband internet access to the user. Although fig. 3 shows the transmission module 170, it is understood that it does not belong to the essential constitution of the device 400 and may be omitted entirely as needed within the scope not changing the essence of the invention.
The processor 180 is the control center of the device 400, connects various parts of the entire handset using various interfaces and lines, and performs various functions of the device 400 and processes data by running or executing software programs and/or modules stored in the memory 120 and calling data stored in the memory 120, thereby performing overall monitoring of the handset. Optionally, processor 180 may include one or more processing cores; in some embodiments, the processor 180 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 180.
The device 400 also includes a power supply 190 (e.g., a battery) for powering the various components, which may be logically coupled to the processor 180 via a power management system in some embodiments to manage charging, discharging, and power consumption management functions via the power management system. The power supply 190 may also include any component including one or more of a dc or ac power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
Specifically, in this embodiment, the display unit 140 of the apparatus 400 is a touch screen display, the apparatus 400 further includes a memory 120, and one or more programs, wherein the one or more programs are stored in the memory 120, and the one or more programs configured to be executed by the one or more processors 180 include instructions for:
acquiring video data and acquiring shooting parameters corresponding to the video data;
acquiring a motion trail and a motion attitude corresponding to the video data according to the shooting parameters;
acquiring a three-dimensional space scene, loading the video data in the three-dimensional space scene according to the motion track and the motion posture, and acquiring initial coordinates corresponding to the video data;
calculating projection coordinates of the video data in the three-dimensional space scene according to the shooting parameters and the video data;
adjusting the initial coordinate according to the projection coordinate to obtain an adjusted coordinate;
and fusing the video data and the three-dimensional space scene according to the adjusted coordinates.
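The six instructions above form the core pipeline of the method. The following is a minimal, self-contained Python sketch of that pipeline; every function name, array, and weight in it is an illustrative assumption (placeholder frames, a three-point track, a simple weighted blend), not the patent's actual implementation. In a real system, step 4 would use the camera intrinsic and extrinsic parameters, as detailed under claim 3 below.

```python
import numpy as np

def acquire_video_and_params():
    # Step 1: placeholder video frames plus per-frame shooting parameters.
    frames = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(3)]
    params = {"positions": np.array([[0.0, 0.0, 1.5],
                                     [0.5, 0.0, 1.5],
                                     [1.0, 0.1, 1.5]]),
              "yaws_deg": np.array([0.0, 2.0, 4.0])}
    return frames, params

def motion_track_and_posture(params):
    # Step 2: the track is the position sequence, the posture the yaw sequence.
    return params["positions"], params["yaws_deg"]

def load_into_scene(track):
    # Step 3: take the start of the track as the video's initial coordinate.
    return track[0].copy()

def projection_coordinate(track):
    # Step 4: stand-in for the real pixel -> camera -> scene projection
    # (sketched under claim 3 below); here simply the mean camera position.
    return track.mean(axis=0)

def adjust(initial, projected, weight=0.8):
    # Step 5: pull the initial placement toward the projection-derived one.
    return (1.0 - weight) * initial + weight * projected

frames, params = acquire_video_and_params()
track, posture = motion_track_and_posture(params)
initial = load_into_scene(track)
projected = projection_coordinate(track)
adjusted = adjust(initial, projected)
print("Step 6: fuse the video at scene coordinate", adjusted)
```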
The above embodiments each place emphasis on different aspects; for parts that are not described in detail in a given embodiment, reference may be made to the detailed description of the video fusion method above, which is not repeated here.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by a program instructing the associated hardware; the program may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present application provide a storage medium in which a computer program is stored, the computer program being loadable by a processor to execute the steps of any of the video fusion methods provided in the embodiments of the present application. For example, the computer program may perform the following steps:
acquiring video data and acquiring shooting parameters corresponding to the video data;
acquiring a motion track and a motion posture corresponding to the video data according to the shooting parameters;
acquiring a three-dimensional space scene, loading the video data in the three-dimensional space scene according to the motion track and the motion posture, and acquiring initial coordinates corresponding to the video data;
calculating projection coordinates of the video data in the three-dimensional space scene according to the shooting parameters and the video data;
adjusting the initial coordinate according to the projection coordinate to obtain an adjusted coordinate;
and fusing the video data and the three-dimensional space scene according to the adjusted coordinates.
Details of the above operations can be found in the foregoing embodiments and are not repeated here.
The storage medium may include a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and the like.
Since the computer program stored in the storage medium can execute the steps of any video fusion method provided in the embodiments of the present application, it can achieve the beneficial effects of any such method; these are detailed in the foregoing embodiments and are not repeated here.
The video fusion method, apparatus, device, and storage medium provided by the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementation of the present application, and the description of the above embodiments is intended only to help in understanding the method and its core idea. Meanwhile, those skilled in the art may, following the idea of the present application, make changes to the specific embodiments and the scope of application. In summary, the contents of this specification should not be construed as limiting the present application.
Claims (10)
1. A method of video fusion, comprising:
acquiring video data and acquiring shooting parameters corresponding to the video data;
acquiring a motion track and a motion posture corresponding to the video data according to the shooting parameters;
acquiring a three-dimensional space scene, loading the video data in the three-dimensional space scene according to the motion track and the motion posture, and acquiring initial coordinates corresponding to the video data;
calculating projection coordinates of the video data in the three-dimensional space scene according to the shooting parameters and the video data;
adjusting the initial coordinate according to the projection coordinate to obtain an adjusted coordinate;
and fusing the video data and the three-dimensional space scene according to the adjusted coordinates.
2. The video fusion method according to claim 1, wherein the shooting parameters include an angle posture, and wherein acquiring the shooting parameters corresponding to the video data comprises:
acquiring an initial geographic coordinate and an initial angle posture;
acquiring sequence images of the video data;
extracting feature points of adjacent sequence images;
acquiring a fundamental matrix of the adjacent sequence images according to the feature points;
fusing the fundamental matrix with the motion matrix corresponding to the motion track to obtain a fused matrix;
adjusting the initial angle posture through the fused matrix to obtain an angle posture, and adjusting the initial geographic coordinate through the fused matrix to obtain an adjusted geographic coordinate.
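To make claim 2 concrete, here is a hedged OpenCV sketch: feature points are extracted from adjacent grayscale frames, a fundamental matrix is estimated, a visual relative rotation is recovered through the essential matrix, and that rotation is blended with a rotation derived from the motion track (e.g., from the IMU). The blending weight and the SVD re-orthonormalisation are illustrative choices, not the fusion prescribed by the patent.

```python
import cv2
import numpy as np

def visual_relative_rotation(frame_a_gray, frame_b_gray, K):
    # Feature points of adjacent sequence images.
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(frame_a_gray, None)
    kp2, des2 = orb.detectAndCompute(frame_b_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # Fundamental matrix of the adjacent images, then the relative rotation.
    F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
    E = K.T @ F @ K
    _, R, _, _ = cv2.recoverPose(E, pts1, pts2, K)
    return R

def fuse_rotations(R_visual, R_track, w=0.5):
    # "Fused matrix": weighted average of the two rotations, projected back
    # onto a valid rotation matrix with an SVD.
    U, _, Vt = np.linalg.svd(w * R_visual + (1.0 - w) * R_track)
    R = U @ Vt
    if np.linalg.det(R) < 0:          # keep a proper rotation (det = +1)
        U[:, -1] *= -1
        R = U @ Vt
    return R

# Usage (with real frames, intrinsics K, and a track-derived rotation R_track):
# R_vis = visual_relative_rotation(prev_gray, curr_gray, K)
# R_fused = fuse_rotations(R_vis, R_track)
# adjusted_posture = R_fused @ initial_angle_posture
```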
3. The video fusion method according to claim 1, wherein the shooting parameters further comprise camera intrinsic parameters and camera extrinsic parameters, and wherein calculating the projection coordinates of the video data in the three-dimensional space scene according to the shooting parameters and the video data comprises:
acquiring a video frame of the video data, and acquiring initial pixel coordinates of the video frame;
carrying out distortion correction on the video frame based on the initial pixel coordinates to obtain the pixel coordinates of the video frame;
acquiring camera coordinates corresponding to the video data through coordinate conversion based on the pixel coordinates and the camera intrinsic parameters;
and acquiring the projection coordinates of the video data in the three-dimensional space scene according to the camera extrinsic parameters and the camera coordinates.
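A hedged sketch of claim 3 follows: the initial pixel coordinates are distortion-corrected, converted to camera coordinates using the camera intrinsic parameters, and then projected into the three-dimensional scene with the camera extrinsic parameters. The depth value (distance along the viewing ray) and all numbers below are illustrative assumptions; in practice, the depth would come from the scene geometry onto which the video is projected.

```python
import cv2
import numpy as np

def pixel_to_scene(pixel_uv, K, dist_coeffs, R_cam2world, t_cam2world, depth):
    # Distortion correction: with no new projection matrix given,
    # undistortPoints returns normalised image coordinates on the plane z = 1.
    src = np.array([[pixel_uv]], dtype=np.float64)          # shape (1, 1, 2)
    xn, yn = cv2.undistortPoints(src, K, dist_coeffs)[0, 0]
    p_camera = np.array([xn, yn, 1.0]) * depth              # camera coordinates
    return R_cam2world @ p_camera + t_cam2world             # projection coordinates

# Illustrative values only:
K = np.array([[1000.0, 0.0, 320.0],
              [0.0, 1000.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                      # assume no lens distortion here
R = np.eye(3)                           # camera axes aligned with the scene
t = np.array([10.0, 20.0, 1.5])         # camera position in the scene
print(pixel_to_scene((320.0, 240.0), K, dist, R, t, depth=50.0))
```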
4. The video fusion method of claim 1, wherein the shooting parameters include a positioning time, and wherein, after fusing the video data with the three-dimensional space scene according to the adjusted coordinates, the method further comprises:
acquiring a three-dimensional rendering frame rate and a video frame rate of the video data;
matching the rendering time of the three-dimensional space corresponding to the three-dimensional rendering frame rate, the video time corresponding to the video frame rate, and the positioning time to obtain a matched time;
moving the projection coordinates according to the motion track data, the angle posture, and the matched time;
and fusing the video data and the three-dimensional space scene according to the moved projection coordinates.
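The sketch below illustrates the gist of claim 4 under stated assumptions: positioning records are timestamped, the rendering clock is sampled at its own rate, the video frame index is derived from the same clock, and the projected position is obtained by interpolating the motion track at the matched time. Linear interpolation and the concrete rates are illustrative choices only.

```python
import numpy as np

def matched_position(render_time, positioning_times, track_points):
    # Interpolate each track coordinate over the positioning (e.g. GNSS) timeline.
    return np.array([np.interp(render_time, positioning_times, track_points[:, i])
                     for i in range(track_points.shape[1])])

positioning_times = np.array([0.0, 1.0, 2.0])            # seconds
track = np.array([[0.0, 0.0, 1.5],
                  [0.5, 0.0, 1.5],
                  [1.0, 0.1, 1.5]])
video_fps = 25.0                                          # assumed video frame rate
for render_time in np.arange(0.0, 2.01, 0.5):             # rendering ticks
    video_frame = int(render_time * video_fps)            # matched video time
    print(f"t={render_time:.1f}s frame={video_frame} projection moved to "
          f"{matched_position(render_time, positioning_times, track)}")
```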
5. The video fusion method according to claim 4, wherein the matching of the rendering time of the three-dimensional space corresponding to the three-dimensional rendering frame rate, the video time corresponding to the video frame rate, and the positioning time to obtain the matched time comprises:
acquiring a three-dimensional rendering frame rate, and setting the three-dimensional rendering frame rate as a video frame rate;
acquiring rendering time of a corresponding three-dimensional space according to the three-dimensional rendering frame rate;
acquiring the video time according to the video frame rate;
and matching the video time, the rendering time of the three-dimensional space, and the positioning time against one another to obtain the matched video time, rendering time of the three-dimensional space, and positioning time.
6. The video fusion method according to claim 4, wherein the matching of the rendering time of the three-dimensional space corresponding to the three-dimensional rendering frame rate, the video time corresponding to the video frame rate, and the positioning time to obtain the matched time comprises:
setting the three-dimensional rendering frame rate to a preset value;
acquiring rendering time of a corresponding three-dimensional space according to the three-dimensional rendering frame rate;
calculating the playing time of the video data according to the rendering time of the three-dimensional space;
and matching the video time, the rendering time of the three-dimensional space, and the positioning time according to the rendering time of the three-dimensional space, the playing time, and the positioning time to obtain the matched video time, rendering time of the three-dimensional space, and positioning time.
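Claims 5 and 6 describe two ways of lining up the three clocks; the short sketch below contrasts them. The frame-rate values are assumptions, and the positioning time would then be matched against the resulting times exactly as in the previous sketch.

```python
def match_by_video_rate(frame_index, video_fps=25.0):
    # Claim 5: lock the 3D rendering frame rate to the video frame rate, so a
    # rendering tick and a video frame share the same timestamp by construction.
    render_fps = video_fps
    render_time = frame_index / render_fps
    video_time = frame_index / video_fps
    return render_time, video_time

def match_by_preset_rate(render_tick, render_fps=60.0, video_fps=25.0):
    # Claim 6: render at a preset rate and derive the video playing time from
    # the rendering clock, then pick the video frame to display at that tick.
    render_time = render_tick / render_fps
    playing_time = render_time
    video_frame = int(playing_time * video_fps)
    return render_time, playing_time, video_frame

print(match_by_video_rate(50))       # -> (2.0, 2.0)
print(match_by_preset_rate(120))     # -> (2.0, 2.0, 50)
```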
7. The video fusion method according to any one of claims 1 to 6, further comprising, after fusing the video data with the three-dimensional space scene according to the adjusted coordinates:
receiving a split screen display instruction;
and respectively displaying the video data and the fused three-dimensional space scene according to the split-screen display instruction.
8. A video fusion apparatus, comprising:
a first acquiring unit, used for acquiring video data and acquiring shooting parameters corresponding to the video data; acquiring a motion track and a motion posture corresponding to the video data according to the shooting parameters; and acquiring a three-dimensional space scene, loading the video data in the three-dimensional space scene according to the motion track and the motion posture, and acquiring initial coordinates corresponding to the video data;
a calculating unit, used for calculating projection coordinates of the video data in the three-dimensional space scene according to the shooting parameters and the video data;
an adjusting unit, used for adjusting the initial coordinates according to the projection coordinates to obtain adjusted coordinates;
and a first fusion unit, used for fusing the video data and the three-dimensional space scene according to the adjusted coordinates.
9. An apparatus, comprising a processor and a memory, wherein the memory stores program code, and the processor, when calling the program code in the memory, performs the video fusion method of any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the storage medium stores a computer program which is loaded by a processor to execute the video fusion method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910857230.2A CN112489121B (en) | 2019-09-11 | 2019-09-11 | Video fusion method, device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112489121A true CN112489121A (en) | 2021-03-12 |
CN112489121B CN112489121B (en) | 2024-10-22 |
Family
ID=74920461
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910857230.2A Active CN112489121B (en) | 2019-09-11 | 2019-09-11 | Video fusion method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112489121B (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101646067A (en) * | 2009-05-26 | 2010-02-10 | 华中师范大学 | Digital full-space intelligent monitoring system and method |
CN103226838A (en) * | 2013-04-10 | 2013-07-31 | 福州林景行信息技术有限公司 | Real-time spatial positioning method for mobile monitoring target in geographical scene |
CN106204656A (en) * | 2016-07-21 | 2016-12-07 | 中国科学院遥感与数字地球研究所 | Target based on video and three-dimensional spatial information location and tracking system and method |
CN107801083A (en) * | 2016-09-06 | 2018-03-13 | 星播网(深圳)信息有限公司 | A kind of network real-time interactive live broadcasting method and device based on three dimensional virtual technique |
CN108022302A (en) * | 2017-12-01 | 2018-05-11 | 深圳市天界幻境科技有限公司 | A kind of sterically defined AR 3 d display devices of Inside-Out |
Non-Patent Citations (3)
Title |
---|
宋宏权; 刘学军; 闾国年; 王美珍: "Research on video-based enhanced representation of geographic scenes" [基于视频的地理场景增强表达研究], Geography and Geo-Information Science (地理与地理信息科学), vol. 28, no. 05, 15 September 2012 (2012-09-15), pages 6-9 *
陈光; 郑宏伟: "Integration method of UAV geographic video data in three-dimensional scenes" [三维场景中无人机地理视频数据的集成方法], Geography and Geo-Information Science (地理与地理信息科学), vol. 33, no. 01, 15 January 2017 (2017-01-15), pages 40-43 *
陈泽婵; 陈靖; 严雷; 张运超: "A mobile augmented reality optical experiment platform based on Unity3D" [基于Unity3D的移动增强现实光学实验平台], Journal of Computer Applications (计算机应用), vol. 35, no. 2, 15 December 2015 (2015-12-15) *
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113691796A (en) * | 2021-08-16 | 2021-11-23 | 福建凯米网络科技有限公司 | Three-dimensional scene interaction method through two-dimensional simulation and computer-readable storage medium |
CN113691796B (en) * | 2021-08-16 | 2023-06-02 | 福建凯米网络科技有限公司 | Three-dimensional scene interaction method through two-dimensional simulation and computer readable storage medium |
CN113870163B (en) * | 2021-09-24 | 2022-11-29 | 埃洛克航空科技(北京)有限公司 | Video fusion method and device based on three-dimensional scene, storage medium and electronic device |
CN113870163A (en) * | 2021-09-24 | 2021-12-31 | 埃洛克航空科技(北京)有限公司 | Video fusion method and device based on three-dimensional scene, storage medium and electronic device |
CN114067071A (en) * | 2021-11-26 | 2022-02-18 | 湖南汽车工程职业学院 | High-precision map making system based on big data |
CN114067071B (en) * | 2021-11-26 | 2022-08-30 | 湖南汽车工程职业学院 | High-precision map making system based on big data |
CN114612360A (en) * | 2022-03-11 | 2022-06-10 | 北京拙河科技有限公司 | Video fusion method and system based on motion model |
CN114612360B (en) * | 2022-03-11 | 2022-10-18 | 北京拙河科技有限公司 | Video fusion method and system based on motion model |
CN114449247A (en) * | 2022-04-11 | 2022-05-06 | 深圳市其域创新科技有限公司 | Multi-channel video 3D superposition method and system |
CN115396720A (en) * | 2022-07-21 | 2022-11-25 | 贝壳找房(北京)科技有限公司 | Video fusion method based on video control, electronic equipment and storage medium |
CN115396720B (en) * | 2022-07-21 | 2023-11-14 | 贝壳找房(北京)科技有限公司 | Video fusion method based on video control, electronic equipment and storage medium |
CN116055700A (en) * | 2023-03-23 | 2023-05-02 | 北京清扬通信有限公司 | Multi-path video processing method, equipment and medium for reducing network traffic |
CN116055700B (en) * | 2023-03-23 | 2023-06-20 | 北京清扬通信有限公司 | Multi-path video processing method, equipment and medium for reducing network traffic |
Also Published As
Publication number | Publication date |
---|---|
CN112489121B (en) | 2024-10-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112489121B (en) | Video fusion method, device, equipment and storage medium | |
US11605214B2 (en) | Method, device and storage medium for determining camera posture information | |
CN112530024B (en) | Data processing method and device for virtual scene | |
CN108876739B (en) | Image compensation method, electronic equipment and computer readable storage medium | |
CN112785715B (en) | Virtual object display method and electronic device | |
WO2019233229A1 (en) | Image fusion method, apparatus, and storage medium | |
CN111145339B (en) | Image processing method and device, equipment and storage medium | |
CN108038825B (en) | Image processing method and mobile terminal | |
CN104867095B (en) | Image processing method and device | |
CN109660723B (en) | Panoramic shooting method and device | |
EP3748533B1 (en) | Method, apparatus, and storage medium for obtaining object information | |
CN108156374B (en) | Image processing method, terminal and readable storage medium | |
CN112330756B (en) | Camera calibration method and device, intelligent vehicle and storage medium | |
CN107968917B (en) | Image processing method and apparatus, computer device, computer-readable storage medium | |
CN103577023A (en) | Video processing method and terminal | |
CN113888452A (en) | Image fusion method, electronic device, storage medium, and computer program product | |
US10270963B2 (en) | Angle switching method and apparatus for image captured in electronic terminal | |
CN117474988A (en) | Image acquisition method and related device based on camera | |
CN117115244A (en) | Cloud repositioning method, device and storage medium | |
CN110717467A (en) | Head pose estimation method, device, equipment and storage medium | |
CN106210514A (en) | Method, device and smart device for taking pictures and focusing | |
CN111182206B (en) | Image processing method and device | |
CN113489903A (en) | Shooting method, shooting device, terminal equipment and storage medium | |
CN115134527B (en) | Processing method, intelligent terminal and storage medium | |
CN108769529B (en) | An image correction method, electronic device and computer-readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |