
CN111599005B - Three-dimensional model implantation method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN111599005B
CN111599005B
Authority
CN
China
Prior art keywords
image
dimensional model
projection
rendering
key points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010429172.6A
Other languages
Chinese (zh)
Other versions
CN111599005A (en)
Inventor
胡飞
胡波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Feige Digital Technology Co ltd
Original Assignee
Hunan Feige Digital Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Feige Digital Technology Co ltd filed Critical Hunan Feige Digital Technology Co ltd
Priority to CN202010429172.6A priority Critical patent/CN111599005B/en
Publication of CN111599005A publication Critical patent/CN111599005A/en
Application granted Critical
Publication of CN111599005B publication Critical patent/CN111599005B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 — 3D [Three Dimensional] image rendering
    • G06T 15/10 — Geometric effects
    • G06T 15/20 — Perspective computation
    • G06T 15/205 — Image-based rendering
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/30 — Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 — Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 — Image acquisition modality
    • G06T 2207/10004 — Still image; Photographic image
    • G06T 2207/10012 — Stereo images

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a three-dimensional model implantation method and apparatus, an electronic device, and a storage medium. The method comprises: performing projection matching on a first image in a target video and a second image in the target video to obtain a first projection relationship, where the first image is a local-area image of the second image; performing projection matching on the first image and a surface of the three-dimensional model to obtain a second projection relationship; rendering the three-dimensional model according to the first projection relationship and the second projection relationship to obtain a rendered image; and fusing the region of the three-dimensional model in the second image according to the rendered image to obtain an implantation image. In this implementation, the three-dimensional model is rendered according to the two projection relationships, and the region of the three-dimensional model in the second image is fused according to the resulting rendered image to obtain the implantation image; the difficulty of implanting a three-dimensional model into a video image frame is thereby reduced, and the efficiency of such implantation is effectively improved.

Description

Three-dimensional model implantation method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing and three-dimensional model implantation, and in particular, to a three-dimensional model implantation method, apparatus, electronic device, and storage medium.
Background
A three-dimensional model is a three-dimensional polygonal representation of an object, typically displayed using a computer or other video equipment. The displayed object can be a real-world entity or a fictitious thing; in principle, anything that exists in physical nature, whether as small as an atom or extremely large, can be represented by a three-dimensional model.
In current Internet video playback scenarios, multimedia information often needs to be added to an Internet video so that the multimedia information is presented while the video plays; for example, a planar advertisement or a three-dimensional advertisement based on a three-dimensional model is implanted into the Internet video, so that a commodity is promoted through the implanted video. At present, a worker manually judges whether the three-dimensional model matches a preset implantation entity in a video image frame, and if so, the implanted Internet video is played. In practice, it has been found that implanting three-dimensional models into video image frames by such manual means is difficult.
Disclosure of Invention
An object of the embodiments of the present application is to provide a three-dimensional model implantation method and apparatus, an electronic device, and a storage medium, so as to alleviate the difficulty of implanting a three-dimensional model into a video image frame.
An embodiment of the present application provides a three-dimensional model implantation method, comprising: performing projection matching on a first image in a target video and a second image in the target video to obtain a first projection relationship, where the first image is a local-area image of the second image; performing projection matching on the first image and a surface of the three-dimensional model to obtain a second projection relationship; rendering the three-dimensional model according to the first projection relationship and the second projection relationship to obtain a rendered image; and fusing the region of the three-dimensional model in the second image according to the rendered image to obtain an implantation image after the three-dimensional model is implanted. In this implementation, the rendered image is obtained by rendering the three-dimensional model according to the first projection relationship, which characterizes the correspondence between the first image and the second image in the target video, and the second projection relationship, which characterizes the correspondence between the first image and the surface of the three-dimensional model; the region of the three-dimensional model in the second image is then fused according to the rendered image to obtain the implantation image. The difficulty of implanting a three-dimensional model into a video image frame is thereby reduced, and the efficiency of the implantation is effectively improved.
Optionally, in an embodiment of the present application, performing projection matching on the first image in the target video and the second image in the target video comprises: obtaining four first key points of the first image, no three of which are collinear; obtaining four second key points of the second image, no three of which are collinear; and performing projection matching on the four first key points and the four second key points. In this implementation, projection matching is performed on the four first key points of the first image and the four second key points of the second image, which effectively improves the speed of projection matching between the first image and the second image.
Optionally, in an embodiment of the present application, performing projection matching on the first image and the surface of the three-dimensional model comprises: obtaining four third key points on the surface of the three-dimensional model, no three of which are collinear; and performing projection matching on the four third key points and the four first key points. In this implementation, projection matching is performed on the four third key points on the surface of the three-dimensional model and the four first key points, which effectively improves the speed of projection matching between the surface of the three-dimensional model and the first image.
Optionally, in an embodiment of the present application, rendering the three-dimensional model according to the first projection relationship and the second projection relationship comprises: determining a projection transformation relationship between the three-dimensional model and the second image according to the first projection relationship and the second projection relationship; and rendering the three-dimensional model according to the projection transformation relationship. In this implementation, the projection transformation relationship between the three-dimensional model and the second image is determined from the two projection relationships, and the model is rendered accordingly, which effectively improves the rendering speed of the three-dimensional model.
Optionally, in an embodiment of the present application, fusing the region of the three-dimensional model in the second image according to the rendered image to obtain the implantation image after the three-dimensional model is implanted comprises: performing image registration on the rendered image and the second image to obtain a registered rendered image; and performing image fusion on the region of the three-dimensional model in the second image according to the registered rendered image to obtain the implantation image. In this implementation, the rendered image is registered against the second image, and the region of the three-dimensional model in the second image is fused according to the registered rendered image, which effectively improves the speed of image registration and image fusion.
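The registration-then-fusion idea above can be illustrated with a minimal mask-based composite. This is a hedged sketch with hypothetical names and toy data, not the patent's actual fusion method: where a binary mask marks the region covered by the rendered model, pixels are taken from the registered rendered image, and elsewhere the second image is kept.

```python
import numpy as np

def fuse(frame, rendered, mask):
    # Where mask == 1 (model region), take the rendered pixel;
    # elsewhere keep the original frame pixel.
    return np.where(mask[..., None] == 1, rendered, frame)

frame = np.zeros((4, 4, 3), dtype=np.uint8)         # stand-in second image
rendered = np.full((4, 4, 3), 255, dtype=np.uint8)  # stand-in rendered image
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1                                  # model occupies the centre
implanted = fuse(frame, rendered, mask)
```

In practice the fusion would typically be feathered or Poisson-blended at the mask boundary rather than hard-switched as here.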
Optionally, in an embodiment of the present application, after obtaining the implantation image with the implanted three-dimensional model, the method further comprises: receiving a data request sent by a terminal device; and sending the implantation image corresponding to the data request to the terminal device for display by the terminal device. In this implementation, the data request sent by the terminal device is received and the corresponding implantation image is sent back for display, which effectively improves the speed at which the terminal device obtains and displays the implantation image.
Optionally, in an embodiment of the present application, the method further comprises: implanting the three-dimensional model into target frames of the target video other than the second image to obtain an implantation video, where the target frames comprise the second image and at least one image other than the second image; and sending the implantation video to the terminal device for playback by the terminal device. In this implementation, the three-dimensional model is implanted into the target frames to obtain the implantation video, and the implantation video is sent to the terminal device for playback, which effectively improves the speed at which the terminal device obtains and plays the implantation video.
An embodiment of the present application also provides a three-dimensional model implantation device, comprising: a first relationship obtaining module, configured to perform projection matching on a first image in a target video and a second image in the target video to obtain a first projection relationship, where the first image is a local-area image of the second image; a second relationship obtaining module, configured to perform projection matching on the first image and a surface of the three-dimensional model to obtain a second projection relationship; a rendered image obtaining module, configured to render the three-dimensional model according to the first projection relationship and the second projection relationship to obtain a rendered image; and an implantation image obtaining module, configured to fuse the region of the three-dimensional model in the second image according to the rendered image to obtain an implantation image after the three-dimensional model is implanted. In this implementation, the rendered image is obtained by rendering the three-dimensional model according to the two projection relationships, and the region of the three-dimensional model in the second image is fused according to the rendered image to obtain the implantation image; the difficulty of implanting a three-dimensional model into a video image frame is thereby reduced, and the efficiency of the implantation is effectively improved.
Optionally, in an embodiment of the present application, the first relationship obtaining module comprises: a first key point obtaining module, configured to obtain four first key points of the first image, no three of which are collinear; a second key point obtaining module, configured to obtain four second key points of the second image, no three of which are collinear; and a first projection matching module, configured to perform projection matching on the four first key points and the four second key points.
Optionally, in an embodiment of the present application, the second relationship obtaining module comprises: a third key point obtaining module, configured to obtain four third key points on the surface of the three-dimensional model, no three of which are collinear; and a second projection matching module, configured to perform projection matching on the four third key points and the four first key points.
Optionally, in an embodiment of the present application, the rendered image obtaining module comprises: a transformation relationship determining module, configured to determine the projection transformation relationship between the three-dimensional model and the second image according to the first projection relationship and the second projection relationship; and a three-dimensional model rendering module, configured to render the three-dimensional model according to the projection transformation relationship.
Optionally, in an embodiment of the present application, the implantation image obtaining module comprises: a rendered image registration module, configured to perform image registration on the rendered image and the second image to obtain a registered rendered image; and a rendered image fusion module, configured to perform image fusion on the region of the three-dimensional model in the second image according to the registered rendered image to obtain the implantation image.
Optionally, in an embodiment of the present application, the three-dimensional model implantation device further comprises: a data request receiving module, configured to receive a data request sent by a terminal device; and an implantation image sending module, configured to send the implantation image corresponding to the data request to the terminal device for display by the terminal device.
Optionally, in an embodiment of the present application, the three-dimensional model implantation device further comprises: an implantation video obtaining module, configured to implant the three-dimensional model into target frames of the target video other than the second image to obtain an implantation video, where the target frames comprise the second image and at least one image other than the second image; and an implantation video sending module, configured to send the implantation video to the terminal device for playback by the terminal device.
An embodiment of the present application also provides an electronic device, comprising a processor and a memory storing machine-readable instructions executable by the processor; when executed by the processor, the instructions perform the method described above.
An embodiment of the present application also provides a storage medium having stored thereon a computer program which, when executed by a processor, performs the method described above.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should not be regarded as limiting its scope; a person skilled in the art may obtain other related drawings from these drawings without inventive effort.
FIG. 1 is a schematic diagram of a three-dimensional model implantation method according to an embodiment of the present application;
FIG. 2 is an exemplary diagram of a first image in the three-dimensional model implantation method according to an embodiment of the present application;
FIG. 3 is an exemplary diagram of a second image in the three-dimensional model implantation method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a table model after mapping according to an embodiment of the present application;
FIG. 5 is a schematic diagram of projection matching between a first image and a second image according to an embodiment of the present application;
FIG. 6 is a schematic diagram of projection matching between a first image and a surface of a three-dimensional model according to an embodiment of the present application;
FIG. 7 is a rendered image produced using a 3D rendering engine according to an embodiment of the present application;
FIG. 8 is a schematic diagram of image registration and image fusion according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of a three-dimensional model implantation device according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Before introducing the three-dimensional model implantation method provided by the embodiments of the present application, some concepts involved in the embodiments are described first:
AutoCAD (Autodesk Computer Aided Design), sometimes abbreviated as CAD, is computer-aided design software developed by Autodesk, Inc. for two-dimensional drawing, detailed drawing, design documentation, and basic three-dimensional design. AutoCAD has a good user interface and supports various operations through interactive menus or the command line.
SolidWorks is mechanical design software developed and sold by Dassault Systèmes. An example of obtaining a model using such mechanical design software: the model is designed using SolidWorks and exported from SolidWorks.
Homography is a concept in geometry: an invertible transformation from the real projective plane to the projective plane that maps straight lines to straight lines. Terms with the same meaning include projectivity, projective transformation, and projective collineation.
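The homography concept above can be made concrete with a small sketch: a homography is a 3×3 matrix H acting on points written in homogeneous coordinates, [x', y', w']ᵀ = H·[x, y, 1]ᵀ, after which one divides by w'. This is a generic illustration, not code from the patent; the function name and example matrices are made up here.

```python
import numpy as np

def apply_homography(H, pt):
    """Map a 2D point through homography H in homogeneous coordinates."""
    x, y = pt
    v = H @ np.array([x, y, 1.0])
    return (v[0] / v[2], v[1] / v[2])

# The identity homography leaves every point unchanged.
H_id = np.eye(3)

# A pure translation by (5, 7), written as a homography.
H_t = np.array([[1.0, 0.0, 5.0],
                [0.0, 1.0, 7.0],
                [0.0, 0.0, 1.0]])
```

For example, `apply_homography(H_t, (2.0, 3.0))` yields `(7.0, 10.0)`, while `H_id` maps every point to itself.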
A server refers to a device that provides computing services over a network, for example an x86 server or a non-x86 server; non-x86 servers include mainframes, minicomputers, and UNIX servers. In a specific implementation, the server may be a mainframe or a minicomputer, where a minicomputer is a closed, dedicated device using special processors such as RISC (Reduced Instruction Set Computing) or MIPS processors and mainly providing computing services for a UNIX operating system, and a mainframe refers to a device that provides computing services using a dedicated processor instruction set, operating system, and application software.
It should be noted that the three-dimensional model implantation method provided in the embodiments of the present application may be executed by an electronic device, where the electronic device refers to a device terminal capable of executing a computer program, or the server described above; the device terminal may be, for example: a smartphone, a personal computer (PC), a tablet computer, a personal digital assistant (PDA), a mobile Internet device (MID), a network switch, a network router, or the like.
Before introducing the three-dimensional model implantation method provided in the embodiments of the present application, application scenarios suitable for the method are introduced. These scenarios include, but are not limited to: using the method to implant a three-dimensional model into an image or video, where the three-dimensional model may be a model of a person, an animal, or an object from the advertising or animation industry; or embedding a stereoscopic model of three-dimensional subtitles into an image or video, and so on.
Please refer to fig. 1 for a schematic diagram of a three-dimensional model implantation method according to an embodiment of the present application; the three-dimensional model implantation method may include:
step S110: and performing projection matching on a first image in the target video and a second image in the target video to obtain a first projection relationship, wherein the first image is a local area image of the second image.
The target video refers to the video into which a three-dimensional model is to be implanted; the target video contains a preset implantation entity, which is a reference object for the three-dimensional model to be implanted. For example, if a three-dimensional model of a teacup is to be implanted into the target video, the preset implantation entity may be a reference object such as a tea table or a table. Video broadly refers to various information carriers that electronically capture, record, process, store, transmit, and reproduce a series of still images. The target video may be obtained in several ways: first, a pre-stored target video is obtained, for example from a file system or a database; second, the target video is received from another terminal device; third, the target video is obtained from the Internet using software such as a browser or another application.
The first image refers to a local image in one of the image frames of the target video and may be represented by the letter S in the formulas. Please refer to FIG. 2, an exemplary diagram of the first image in the three-dimensional model implantation method provided in the embodiment of the present application: assuming the target video is shot of a table, the first image may be a tabletop image of the table; the tabletop image may, for example, be an image composed of two concentric rectangular frames of different sizes. The first image may be captured when shooting the target video, or obtained by taking a screenshot of or cropping the target video.
The second image refers to one of the image frames in the target video and may be represented by the letter F in the formulas; it will be appreciated that the first image is a local-area image of the second image. Please refer to FIG. 3, an exemplary diagram of the second image in the three-dimensional model implantation method according to the embodiment of the present application: if the first image is a tabletop image of a table, the second image may be an image of the whole table. The second image may be obtained by extracting one image frame from the target video, by capturing a picture while the target video is played, or when the target video is shot.
Optionally, before projection matching the first image with the second image, the following steps may further be performed: measuring the specific size of the preset implantation entity in the first image, building a model of the preset implantation entity according to that size, and mapping the first image onto one surface of the model to obtain a mapped entity model.
Please refer to FIG. 4, a schematic diagram of the table model after mapping according to an embodiment of the present application. A specific example of the above steps: assuming the preset implantation entity is a table with a tabletop of 2 meters × 1.2 meters, a table model with a 2 m × 1.2 m tabletop is built using AutoCAD or SolidWorks, and the first image (the tabletop image) is applied as a texture map to the tabletop of the table model, yielding the mapped table model.
The implementation of projection matching the first image in the target video with the second image in the target video in step S110 may include:
step S111: four first keypoints of the first image are obtained, any three of the four first keypoints being non-collinear.
The first key points are points characterizing the positions of spatial key features of the first image. It can be understood that at least four first key points are needed to determine the homography matrix of the projective transformation; that is, in a specific implementation, five, six, or ten first key points may also be selected to determine the matrix. The four first key points may be selected, for example, as four points S1, S2, S3, and S4 in the first image S.
Step S112: four second keypoints of the second image are obtained, any three of which cannot be collinear.
The second key points are points characterizing the positions of spatial key features of the second image. At least four second key points are needed to determine the homography matrix of the projective transformation; in a specific implementation, five, six, or nine second key points may also be selected. The four second key points may be selected, for example, as four points F1, F2, F3, and F4 in the second image F.
The embodiments of the step S111 and the step S112 are relatively similar, and thus, two steps will be described together, and the description of the two steps can be understood with reference to each other; the embodiments of step S111 and step S112 are, for example: four first key points, any three of which are not collinear, are randomly selected in the first image.
Step S113: and performing projection matching on the four first key points and the four second key points.
Please refer to a schematic diagram of projection matching between the first image and the second image provided in the embodiment of the present application shown in fig. 5; in the above embodiment of performing projection matching on the four first keypoints and the four second keypoints in step S113, for example: and performing projection matching according to the four points S1, S2, S3 and S4 and the four points F1, F2, F3 and F4, determining a first homography matrix M1 between the first image and the second image, and determining the homography matrix M1 as a first projection relation.
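The determination of the first homography matrix M1 from four point pairs can be sketched as follows. This is a minimal direct-linear-transform-style illustration with made-up coordinates, not the patent's implementation; in practice a library routine such as OpenCV's `getPerspectiveTransform` would typically be used. With h33 fixed to 1, each correspondence (x, y) → (u, v) contributes two linear equations in the eight remaining entries of the homography.

```python
import numpy as np

def homography_from_points(src, dst):
    """Solve for the 3x3 homography (h33 = 1) mapping four source points
    to four destination points, no three of either set collinear."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

# Hypothetical coordinates: corners of a unit square in the first image S
# matched to corners of a square region in the second image F.
S_pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
F_pts = [(10, 20), (30, 20), (30, 40), (10, 40)]
M1 = homography_from_points(S_pts, F_pts)
```

The second homography M2 between the first image and the model surface would be computed the same way from the pairs (S1…S4, D1…D4).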
In the implementation process, projection matching is carried out on four first key points of the first image and four second key points of the second image; thereby effectively improving the speed of projection matching of the first image and the second image.
Step S120: and performing projection matching on the first image and the surface of the three-dimensional model to obtain a second projection relation.
The implementation principle and implementation of this step are similar or analogous to those of step S110; the embodiment of projection matching the first image with the surface of the three-dimensional model in step S120 may include the steps of:
step S121: four third keypoints on the surface of the three-dimensional model are obtained, any three of which cannot be collinear.
The surface of the three-dimensional model refers to one of the surfaces of the model to be implanted; the three-dimensional model may be represented by the letter D in the formulas. The surface is typically a plane, although in specific implementations it may also be a curved surface.
The third key points are points characterizing the positions of spatial key features on the surface of the three-dimensional model. At least four third key points are needed to determine the homography matrix of the projective transformation; in a specific implementation, five, six, or nine third key points may also be selected. The four third key points may be selected, for example, as four points D1, D2, D3, and D4 on the three-dimensional model D.
Step S122: and performing projection matching on the four third key points and the four first key points.
Please refer to fig. 6, which illustrates a schematic diagram of projection matching between a first image and a surface of a three-dimensional model according to an embodiment of the present application; in the above embodiment of performing projection matching between the four third keypoints and the four first keypoints in step S122, for example: and performing projection matching according to the four points S1, S2, S3 and S4 on the first image and the four points D1, D2, D3 and D4 on the surface of the three-dimensional model, determining a second homography matrix M2 between the first image and the surface of the three-dimensional model, and determining the homography matrix M2 as a second projection relation.
In the implementation process, projection matching is carried out on four third key points and four first key points on the surface of the obtained three-dimensional model; thereby effectively improving the speed of projection matching of the surface of the three-dimensional model and the first image.
Step S130: and rendering the three-dimensional model according to the first projection relation and the second projection relation to obtain a rendering diagram.
The embodiment of rendering the three-dimensional model according to the first projection relationship and the second projection relationship in the step S130 may include the following steps:
step S131: and determining the projection transformation relation between the three-dimensional model and the second image according to the first projection relation and the second projection relation.
Step S132: and rendering the three-dimensional model according to the projective transformation relation.
Please refer to fig. 7, which illustrates a rendering graph obtained after rendering with a 3D rendering engine according to an embodiment of the present application; the embodiments of step S131 and step S132 described above are, for example: multiplying the first homography matrix M1 by the second homography matrix M2 to obtain the projective transformation relation between the three-dimensional model and the second image, which may be expressed by the formula M = M1 × M2. In a specific implementation process, the second homography matrix M2 may be changed: the inverse matrix M1⁻¹ of the first homography matrix M1 is calculated, and the camera parameters of the 3D rendering engine are readjusted such that M2 = M1⁻¹, so that M becomes an identity matrix; the 3D rendering engine is then used to render the three-dimensional model and the second image to obtain the rendering graph, in which the rendered desktop overlaps with the original desktop.
In the implementation process, determining a projection transformation relation between the three-dimensional model and the second image according to the first projection relation and the second projection relation; rendering the three-dimensional model according to the projection transformation relation; thereby effectively improving the speed of rendering the three-dimensional model.
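The relation M = M1 × M2 and the trick of readjusting the virtual camera so that M2 = M1⁻¹ can be checked numerically. Below is a small NumPy sketch with made-up homography values (in practice M1 and M2 would come from the key-point matching, e.g. via OpenCV's cv2.findHomography):

```python
import numpy as np

# Hypothetical homographies: M1 (first image -> second image) and
# M2 (model surface -> first image), combined as M = M1 x M2.
M1 = np.array([[1.10, 0.02, 20.0],
               [0.01, 0.95, 10.0],
               [1e-4, 2e-4,  1.0]])

M2_initial = np.array([[0.90, 0.00,  5.0],
                       [0.00, 1.05, -3.0],
                       [0.00, 1e-4,  1.0]])

M = M1 @ M2_initial  # combined projective transformation relation

# Readjusting the camera so that M2 equals the inverse of M1 turns
# the combined transform into the identity, so the rendered surface
# lands exactly on top of the original one in the frame.
M2 = np.linalg.inv(M1)
M_identity = M1 @ M2
assert np.allclose(M_identity, np.eye(3))
```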
Step S140: and fusing the areas of the three-dimensional model in the second image according to the rendering graph to obtain an implanted image after the three-dimensional model is implanted.
Please refer to a schematic diagram of image registration and image fusion provided by the embodiment of the present application shown in fig. 8; the above embodiment of fusing the region of the three-dimensional model in the second image according to the rendering map in step S140 may include the steps of:
step S141: and carrying out image registration on the rendering graph and the second image to obtain a registered rendering graph.
Image registration maps one image of a pair onto the other by finding a spatial transformation, so that points corresponding to the same spatial position in the two images coincide, thereby enabling information fusion. The purpose of image registration is to compare or fuse images of the same object acquired under different conditions; for example, the images may come from different acquisition devices, different times, different viewing angles, and so on.
The embodiment in step S141 is, for example: the position occupied by the implanted three-dimensional model is a basic rectangular frame (the dotted-line rectangular frame shown in the figure), and the basic rectangular frame is expanded outwards by a preset number of pixels to obtain an extended rectangular frame (the solid-line rectangular frame shown in the figure); in other words, both the basic frame and the extended frame enclose all pixels occupied by the three-dimensional model, the area of the extended frame is larger than that of the basic frame, and the extended frame completely covers the basic frame.
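The basic and extended frames described above can be computed from the set of pixels the rendered model occupies. The sketch below assumes that region is given as a binary mask; the function name and margin handling are illustrative, not from the patent.

```python
import numpy as np

def model_bounding_boxes(mask, margin):
    """Return (basic, extended) axis-aligned boxes around a binary mask.

    mask:   2D boolean array marking pixels occupied by the rendered model.
    margin: number of pixels the extended box grows on each side,
            clamped to the image border.
    Boxes are (x0, y0, x1, y1), half-open: x0/y0 inclusive, x1/y1 exclusive.
    """
    ys, xs = np.nonzero(mask)
    x0, x1 = xs.min(), xs.max() + 1
    y0, y1 = ys.min(), ys.max() + 1
    basic = (x0, y0, x1, y1)
    h, w = mask.shape
    extended = (max(0, x0 - margin), max(0, y0 - margin),
                min(w, x1 + margin), min(h, y1 + margin))
    return basic, extended
```

By construction the extended box always contains the basic box, matching the covering property stated in the text.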
Step S142: and carrying out image fusion on the region of the three-dimensional model in the second image according to the registered rendering graph to obtain an implantation image.
The embodiment in step S142 described above is, for example: the region outside the extended frame is covered with the pixels of the original second image, and the region inside the extended frame is fused by means of image fusion, so as to obtain the implanted image after image fusion; specific image fusion methods include Poisson fusion (Poisson blending), Laplacian pyramid fusion (Laplacian blending) and the like. In the implementation process, the rendering graph and the second image are subjected to image registration to obtain a registered rendering graph; image fusion is performed on the region of the three-dimensional model in the second image according to the registered rendering graph to obtain the implanted image; thereby the speed of image registration and image fusion is effectively improved.
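As a simple stand-in for the Poisson or Laplacian fusion named above, the following NumPy sketch performs a feathered alpha blend inside the extended frame: pixels outside the frame keep the original image, and inside it the blend weight ramps up over a few pixels so the seam is not visible. In practice a library routine such as OpenCV's seamlessClone (Poisson fusion) would replace this blend; all names and parameters here are illustrative.

```python
import numpy as np

def fuse_region(original, rendered, box, feather=5):
    """Blend `rendered` into `original` inside an extended box.

    original, rendered: (H, W, 3) float arrays; box: (x0, y0, x1, y1).
    The blend weight is 0 at the box border and reaches 1 `feather`
    pixels inside it, so the transition to the original image is smooth.
    """
    x0, y0, x1, y1 = box
    out = original.copy()
    yy, xx = np.mgrid[y0:y1, x0:x1]
    # distance (in pixels) to the nearest box edge, capped at `feather`
    d = np.minimum.reduce([xx - x0, x1 - 1 - xx, yy - y0, y1 - 1 - yy])
    alpha = np.clip(d / float(feather), 0.0, 1.0)[..., None]
    out[y0:y1, x0:x1] = (alpha * rendered[y0:y1, x0:x1]
                         + (1 - alpha) * original[y0:y1, x0:x1])
    return out
```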
In the implementation process, the three-dimensional model is rendered according to the first projection relation, which characterizes the relation between the first image and the second image in the target video, and the second projection relation, which characterizes the relation between the first image and the surface of the three-dimensional model, so as to obtain a rendering graph; the regions of the three-dimensional model in the second image are fused according to the rendering graph to obtain an implanted image after the three-dimensional model is implanted; therefore, the difficulty of implanting the three-dimensional model into the video image frames is reduced, and the efficiency of implanting the three-dimensional model into the video image frames is effectively improved.
Optionally, in the embodiment of the present application, after obtaining the image after the implantation of the three-dimensional model, the implantation image may also be sent to other devices; the three-dimensional model implantation method described above may include the steps of:
step S210: and the electronic equipment performs projection matching on a first image in the target video and a second image in the target video to obtain a first projection relationship, wherein the first image is a local area image of the second image.
Step S220: and the electronic equipment performs projection matching on the first image and the surface of the three-dimensional model to obtain a second projection relation.
Step S230: and the electronic equipment renders the three-dimensional model according to the first projection relation and the second projection relation to obtain a rendering chart.
Step S240: and the electronic equipment fuses the areas of the three-dimensional model in the second image according to the rendering graph to obtain an implanted image after the three-dimensional model is implanted.
The implementation principles and embodiments of the steps S210 to S240 are similar or analogous to those of the steps S110 to S140, and thus, the implementation principles and embodiments of the steps are not described herein, and reference may be made to the descriptions of the steps S110 to S140 if not clear.
Step S250: the electronic equipment receives a data request sent by the terminal equipment.
The embodiment in step S250 described above is, for example: the electronic device receives the data request sent by the terminal device via the Hypertext Transfer Protocol (Hyper Text Transfer Protocol, HTTP) or the Hypertext Transfer Protocol Secure (HyperText Transfer Protocol Secure, HTTPS). The HTTP protocol is a simple request-response protocol that typically runs on top of the Transmission Control Protocol (Transmission Control Protocol, TCP); it specifies what messages a client may send to a server and what responses it receives. The HTTPS protocol, also referred to as HTTP Secure, is a transport protocol for secure communication over a computer network; the main purpose of HTTPS is to provide identity authentication for web servers and to protect the privacy and integrity of exchanged data.
Step S260: the electronic device sends an implantation image corresponding to the data request to the terminal device, and the implantation image is used for being displayed by the terminal device.
The embodiment in step S260 described above is, for example: the electronic device sends the implanted image corresponding to the data request to the terminal device through the HTTP protocol or the HTTPS protocol, and the implanted image is used for being displayed by the terminal device. In the implementation process, a data request sent by the terminal equipment is received; transmitting an implantation image corresponding to the data request to the terminal equipment, wherein the implantation image is used for being displayed by the terminal equipment; thereby effectively improving the speed of the terminal device to acquire and display the implantation image.
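A minimal illustration of steps S250 and S260, not the patent's actual implementation, using Python's standard http.server: the terminal device's data request arrives as an HTTP GET, and the electronic device replies with the implanted image bytes. The payload and URL path are placeholders.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

IMPLANTED_IMAGE = b"\x89PNG...fake-image-bytes"  # placeholder payload

class ImplantImageHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The data request would normally name a specific implanted
        # image; for this sketch, any path returns the same bytes.
        self.send_response(200)
        self.send_header("Content-Type", "image/png")
        self.send_header("Content-Length", str(len(IMPLANTED_IMAGE)))
        self.end_headers()
        self.wfile.write(IMPLANTED_IMAGE)

    def log_message(self, *args):  # keep the demo quiet
        pass

# "Electronic device": serve on an ephemeral localhost port.
server = ThreadingHTTPServer(("127.0.0.1", 0), ImplantImageHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# "Terminal device": send the data request and read the implanted image.
url = f"http://127.0.0.1:{server.server_port}/implant.png"
body = urllib.request.urlopen(url).read()
server.shutdown()
server.server_close()
```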
Optionally, in an embodiment of the present application, the electronic device performing the above three-dimensional model implantation method may further obtain a video of implanting the three-dimensional model, and send the video to other devices, and then the above three-dimensional model implantation method may further include:
step S270: the electronic device implants the three-dimensional model into a target frame except the second image in the target video to obtain an implanted video, wherein the target frame comprises the second image and at least one image except the second image.
The embodiment of obtaining the implanted video in the above step S270 is, for example: the electronic device implants the three-dimensional model into the target frames, i.e. the second image and at least one image other than the second image; in other words, the electronic device implants the three-dimensional model into all frames of the target video that contain the preset implantation entity, so that the implanted video can be obtained.
Step S280: the electronic device sends the implanted video to the terminal device, and the implanted video is used for being played by the terminal device.
The embodiment of sending the implanted video to the terminal device in the above step S280 is, for example: the electronic device sends the implanted video to the terminal device through the Real Time Streaming Protocol (Real Time Streaming Protocol, RTSP); the RTSP protocol is a network application protocol designed for entertainment and communication systems to control streaming media servers, and is used to create and control media sessions between terminals. A client of the media server issues VCR-style commands such as play, record and pause to facilitate real-time control of the media stream from the server to the client (video on demand) or from the client to the server (voice recording).
In the implementation process, the three-dimensional model is implanted into a target frame except the second image in the target video to obtain an implanted video, wherein the target frame comprises the second image and at least one image except the second image; transmitting an implantation video to the terminal equipment, wherein the implantation video is used for being played by the terminal equipment; thereby effectively improving the speed of the terminal equipment for obtaining and playing the embedded video.
Please refer to fig. 9, which illustrates a schematic structural diagram of a three-dimensional model implantation apparatus according to an embodiment of the present application; the three-dimensional model implant apparatus 300 may include:
the first relation obtaining module 310 is configured to perform projection matching on a first image in the target video and a second image in the target video, so as to obtain a first projection relation, where the first image is a local area image of the second image.
The second relation obtaining module 320 is configured to perform projection matching on the first image and the surface of the three-dimensional model, so as to obtain a second projection relation.
The rendering map obtaining module 330 is configured to render the three-dimensional model according to the first projection relationship and the second projection relationship, and obtain a rendering map.
And the implantation image obtaining module 340 is configured to fuse the regions of the three-dimensional model in the second image according to the rendering map, and obtain an implantation image after implantation of the three-dimensional model.
Optionally, in an embodiment of the present application, the first relationship obtaining module includes:
the first key point obtaining module is used for obtaining four first key points of the first image, and any three of the four first key points cannot be collinear.
And the second key point obtaining module is used for obtaining four second key points of the second image, and any three of the four second key points cannot be collinear.
And the first projection matching module is used for carrying out projection matching on the four first key points and the four second key points.
Optionally, in an embodiment of the present application, the second relationship obtaining module includes:
and the third key point obtaining module is used for obtaining four third key points on the surface of the three-dimensional model, and any three of the four third key points cannot be collinear.
And the second projection matching module is used for carrying out projection matching on the four third key points and the four first key points.
Optionally, in an embodiment of the present application, the rendering graph obtaining module includes:
and the transformation relation determining module is used for determining the projection transformation relation between the three-dimensional model and the second image according to the first projection relation and the second projection relation.
And the three-dimensional model rendering module is used for rendering the three-dimensional model according to the projection transformation relation.
Optionally, in an embodiment of the present application, the implanting image obtaining module includes:
and the rendering image registration module is used for carrying out image registration on the rendering image and the second image to obtain a registered rendering image.
And the rendering image fusion module is used for carrying out image fusion on the region of the three-dimensional model in the second image according to the registered rendering image to obtain an implantation image.
Optionally, in an embodiment of the present application, the three-dimensional model implantation device further includes:
and the data request receiving module is used for receiving the data request sent by the terminal equipment.
And the implantation image sending module is used for sending the implantation image corresponding to the data request to the terminal equipment, and the implantation image is used for being displayed by the terminal equipment.
Optionally, in an embodiment of the present application, the three-dimensional model implantation device may further include:
and the implantation video obtaining module is used for implanting the three-dimensional model into a target frame except the second image in the target video to obtain the implantation video, wherein the target frame comprises the second image and at least one image except the second image.
The embedded video transmitting module is used for transmitting the embedded video to the terminal equipment, and the embedded video is used for being played by the terminal equipment.
It should be understood that the apparatus corresponds to the above three-dimensional model implantation method embodiment and is capable of performing the steps involved in the above method embodiment; for the specific functions of the apparatus, reference may be made to the above description, and detailed descriptions are omitted here as appropriate to avoid redundancy. The apparatus includes at least one software functional module that can be stored in memory in the form of software or firmware, or built into the Operating System (OS) of the device.
Please refer to fig. 10, which illustrates a schematic structural diagram of an electronic device provided in an embodiment of the present application. An electronic device 400 provided in an embodiment of the present application includes: a processor 410 and a memory 420, the memory 420 storing machine-readable instructions executable by the processor 410, which when executed by the processor 410 perform the method as described above.
The present embodiment also provides a storage medium 430, on which storage medium 430 a computer program is stored which, when executed by the processor 410, performs a method as above.
The storage medium 430 may be implemented by any type or combination of volatile or nonvolatile Memory devices, such as a static random access Memory (Static Random Access Memory, SRAM), an electrically erasable Programmable Read-Only Memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), an erasable Programmable Read-Only Memory (Erasable Programmable Read Only Memory, EPROM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), a magnetic Memory, a flash Memory, a magnetic disk, or an optical disk.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative, for example, of the flowcharts and block diagrams in the figures that illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present application may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The foregoing description is merely an optional implementation of the embodiments of the present application, but the scope of the embodiments of the present application is not limited thereto, and any person skilled in the art may easily think about changes or substitutions within the technical scope of the embodiments of the present application, and the changes or substitutions should be covered in the scope of the embodiments of the present application.

Claims (8)

1. A method of three-dimensional model implantation comprising:
performing projection matching on a first image in a target video and a second image in the target video to obtain a first projection relationship, wherein the first image is a local area image of the second image;
performing projection matching on the first image and the surface of the three-dimensional model to obtain a second projection relation;
rendering the three-dimensional model according to the first projection relation and the second projection relation to obtain a rendering diagram;
fusing the areas of the three-dimensional model in the second image according to the rendering graph to obtain an implanted image after the three-dimensional model is implanted;
the performing projection matching on the first image in the target video and the second image in the target video includes: obtaining four first key points of the first image, wherein any three of the four first key points cannot be collinear; obtaining four second key points of the second image, wherein any three of the four second key points cannot be collinear; performing projection matching on the four first key points and the four second key points;
the rendering the three-dimensional model according to the first projection relationship and the second projection relationship comprises: determining a projection transformation relationship between the three-dimensional model and the second image according to the first projection relationship and the second projection relationship; and rendering the three-dimensional model according to the projective transformation relation.
2. The method of claim 1, wherein said projectively matching the first image with the surface of the three-dimensional model comprises:
obtaining four third keypoints on the surface of the three-dimensional model, any three of the four third keypoints being non-collinear;
and carrying out projection matching on the four third key points and the four first key points.
3. The method according to claim 1, wherein fusing the region of the three-dimensional model in the second image according to the rendering map to obtain an implanted image after implantation of the three-dimensional model comprises:
performing image registration on the rendering graph and the second image to obtain a registered rendering graph;
and carrying out image fusion on the area of the three-dimensional model in the second image according to the registered rendering graph to obtain the implantation image.
4. The method of claim 1, further comprising, after the obtaining the implant image after implanting the three-dimensional model:
receiving a data request sent by terminal equipment;
and sending the implantation image corresponding to the data request to the terminal equipment, wherein the implantation image is used for being displayed by the terminal equipment.
5. The method as recited in claim 4, further comprising:
implanting the three-dimensional model into a target frame except the second image in the target video to obtain an implanted video, wherein the target frame comprises the second image and at least one image except the second image;
and sending the implantation video to the terminal equipment, wherein the implantation video is used for being played by the terminal equipment.
6. A three-dimensional model implant device, comprising:
the first relation obtaining module is used for carrying out projection matching on a first image in a target video and a second image in the target video to obtain a first projection relation, wherein the first image is a local area image of the second image;
the second relation obtaining module is used for carrying out projection matching on the first image and the surface of the three-dimensional model to obtain a second projection relation;
the rendering diagram obtaining module is used for rendering the three-dimensional model according to the first projection relation and the second projection relation to obtain a rendering diagram;
the implantation image obtaining module is used for fusing the areas of the three-dimensional model in the second image according to the rendering graph to obtain an implantation image after the three-dimensional model is implanted;
the performing projection matching on the first image in the target video and the second image in the target video includes: obtaining four first key points of the first image, wherein any three of the four first key points cannot be collinear; obtaining four second key points of the second image, wherein any three of the four second key points cannot be collinear; performing projection matching on the four first key points and the four second key points;
the rendering the three-dimensional model according to the first projection relationship and the second projection relationship comprises: determining a projection transformation relationship between the three-dimensional model and the second image according to the first projection relationship and the second projection relationship; and rendering the three-dimensional model according to the projective transformation relation.
7. An electronic device, comprising: a processor and a memory storing machine-readable instructions executable by the processor to perform the method of any one of claims 1 to 5 when executed by the processor.
8. A storage medium having stored thereon a computer program which, when executed by a processor, performs the method of any of claims 1 to 5.
CN202010429172.6A 2020-05-19 2020-05-19 Three-dimensional model implantation method and device, electronic equipment and storage medium Active CN111599005B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010429172.6A CN111599005B (en) 2020-05-19 2020-05-19 Three-dimensional model implantation method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111599005A CN111599005A (en) 2020-08-28
CN111599005B true CN111599005B (en) 2024-01-05

Family

ID=72187476


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118474327B (en) * 2024-05-10 2024-12-10 深圳市塔普智能科技有限公司 Video editing method, device, equipment and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6611266B1 (en) * 1999-06-07 2003-08-26 Yoram Pollack Method for achieving roaming capabilities and performing interactive CGI implanting, and computer games using same
CN101521828A (en) * 2009-02-20 2009-09-02 南京师范大学 Implanted type true three-dimensional rendering method oriented to ESRI three-dimensional GIS module
CN103024480A (en) * 2012-12-28 2013-04-03 杭州泰一指尚科技有限公司 Method for implanting advertisement in video
CN103093491A (en) * 2013-01-18 2013-05-08 浙江大学 Three-dimensional model high sense of reality virtuality and reality combination rendering method based on multi-view video
CN103400409A (en) * 2013-08-27 2013-11-20 华中师范大学 3D (three-dimensional) visualization method for coverage range based on quick estimation of attitude of camera
WO2014019498A1 (en) * 2012-08-01 2014-02-06 成都理想境界科技有限公司 Video playing method and system based on augmented reality technology and mobile terminal
WO2019034142A1 (en) * 2017-08-17 2019-02-21 腾讯科技(深圳)有限公司 Three-dimensional image display method and device, terminal, and storage medium
CN109842811A (en) * 2019-04-03 2019-06-04 腾讯科技(深圳)有限公司 A kind of method, apparatus and electronic equipment being implanted into pushed information in video
CN110599605A (en) * 2019-09-10 2019-12-20 腾讯科技(深圳)有限公司 Image processing method and device, electronic equipment and computer readable storage medium


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Haritha H et al. "Vicode: 3D Barcode with Embedded Video Using Histogram Shifting Based Reversible Data Hiding." International Journal of Science and Research, vol. 6, no. 6, 2017. *
Li Zili et al. "A technical scheme for fusing the program host or actor with the virtual scene in a virtual studio." Journal on Communications, vol. 24, no. 10, 2003, pp. 102-107. *
Lin Liyu et al. "Application and research of cloud studio technology in game live streaming." Guangdong Communication Technology, vol. 38, 2018, pp. 5-7. *
Zhao Gang et al. "Research on registration of PTZ camera video and three-dimensional model." Computer Engineering and Design, vol. 34, no. 10, 2013, pp. 3545-3550. *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20231122

Address after: 410000, Room 502, Building 12, Wangxing Community, Wangchengpo Street, Yuelu District, Changsha City, Hunan Province

Applicant after: Hunan Feige Digital Technology Co.,Ltd.

Address before: 2 / F, 979 Yunhan Road, Pudong New Area, Shanghai, 200120

Applicant before: Shanghai Wanmian Intelligent Technology Co.,Ltd.

GR01 Patent grant