
CN113421292A - Three-dimensional modeling detail enhancement method and device - Google Patents


Info

Publication number
CN113421292A
CN113421292A (application number CN202110713305.7A)
Authority
CN
China
Prior art keywords: dimensional, model, geometric model, parameters, initial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110713305.7A
Other languages
Chinese (zh)
Inventor
郭建亚 (Guo Jianya)
李骊 (Li Li)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing HJIMI Technology Co Ltd
Original Assignee
Beijing HJIMI Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing HJIMI Technology Co Ltd filed Critical Beijing HJIMI Technology Co Ltd
Priority to CN202110713305.7A
Publication of CN113421292A
Legal status: Pending (current)


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping
    • G06T7/00 Image analysis
    • G06T7/40 Analysis of texture
    • G06T7/50 Depth or shape recovery

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)

Abstract

The application provides a three-dimensional modeling detail enhancement method and device. The method comprises the following steps: acquiring an initial three-dimensional geometric model of a photographic subject and a texture mapping model corresponding to the initial three-dimensional geometric model, wherein the initial three-dimensional geometric model contains depth information of the photographic subject; performing two-dimensional parameterization on the initial three-dimensional geometric model and the texture mapping model to obtain a geometric model and a texture mapping model represented by two-dimensional parameters; and calculating the detailed three-dimensional shape parameters of the photographic subject based on the texture mapping model and the geometric model of the two-dimensional parameters. The method can build a three-dimensional model of the photographic subject from captured images alone, freeing three-dimensional modeling from its dependence on expensive three-dimensional scanning equipment and making detailed three-dimensional reconstruction easier to achieve.

Description

Three-dimensional modeling detail enhancement method and device
Technical Field
The application relates to the technical field of image processing, in particular to a three-dimensional modeling detail enhancement method and device.
Background
Three-dimensional modeling is a common process for acquiring three-dimensional information of a photographic subject. In general, it requires scanning the subject with a three-dimensional scanner, from which a precise and detailed three-dimensional model, and hence the subject's three-dimensional information, is obtained.
However, three-dimensional scanners are expensive, which raises modeling costs, and ordinary users, lacking such equipment, cannot achieve accurate and detailed three-dimensional modeling.
Disclosure of Invention
In view of this state of the art, the application provides a three-dimensional modeling detail enhancement method and device that enable a user to achieve accurate and detailed three-dimensional modeling without expensive equipment.
The technical scheme provided by the application is as follows:
a three-dimensional modeling detail enhancement method, comprising:
acquiring an initial three-dimensional geometric model of a shot object and a texture mapping model corresponding to the initial three-dimensional geometric model; wherein the initial three-dimensional geometric model contains depth information of a photographic object;
carrying out two-dimensional parameterization on the initial three-dimensional geometric model and the texture mapping model to obtain a geometric model and a texture mapping model represented by two-dimensional parameters;
and calculating to obtain the three-dimensional shape parameters containing details of the shot object based on the texture mapping model of the two-dimensional parameters and the geometric model of the two-dimensional parameters.
Optionally, the calculating of the three-dimensional shape parameter of the shot object, which includes details, based on the texture mapping model of the two-dimensional parameter and the geometric model of the two-dimensional parameter includes:
calculating to obtain three-dimensional shape parameters of the shot object based on a texture mapping model of the two-dimensional parameters;
and correcting the three-dimensional shape parameters obtained by calculation based on the geometric model of the two-dimensional parameters.
Optionally, the three-dimensional shape parameters obtained through calculation at least include a shooting object surface brightness parameter and a shooting object depth parameter, wherein the shooting object surface brightness is determined by shooting object surface reflectivity and an illumination direction;
the geometric model based on the two-dimensional parameters corrects the three-dimensional shape parameters obtained by calculation, and comprises the following steps:
comparing the three-dimensional shape parameters obtained by calculation with the geometric model of the two-dimensional parameters to determine calculation errors; the calculation error is used for representing the error between the calculated three-dimensional shape parameter and the real parameter of the shooting object;
recalculating the three-dimensional shape parameters of the photographic object based on the calculation error;
and repeating the processing process until the calculation error is smaller than the set error threshold.
Optionally, the recalculating the three-dimensional shape parameter of the photographic object based on the calculation error includes:
and iteratively calculating new three-dimensional shape parameters based on the calculation errors and the calculated three-dimensional shape parameters.
Optionally, performing two-dimensional parameterization on the initial three-dimensional geometric model and the texture mapping model to obtain a geometric model and a texture mapping model represented by two-dimensional parameters, includes:
establishing a three-dimensional rectangular coordinate system based on the initial three-dimensional geometric model and the texture mapping model;
acquiring cylindrical projection coordinates corresponding to coordinate points on an initial three-dimensional geometric model and a texture mapping model in a three-dimensional rectangular coordinate system by adopting a cylindrical projection method;
and converting the cylindrical projection coordinates corresponding to the coordinate points on the initial three-dimensional geometric model and the texture mapping model into plane coordinates to obtain a geometric model and a texture mapping model represented by two-dimensional parameters.
A three-dimensional modeling detail enhancement apparatus, comprising:
the data acquisition unit is used for acquiring an initial three-dimensional geometric model of a shooting object and a texture mapping model corresponding to the initial three-dimensional geometric model; wherein the initial three-dimensional geometric model contains depth information of a photographic object;
the parameter processing unit is used for carrying out two-dimensional parameterization on the initial three-dimensional geometric model and the texture mapping model to obtain a geometric model and a texture mapping model represented by two-dimensional parameters;
and the calculation processing unit is used for calculating and obtaining the three-dimensional shape parameters containing the details of the shot object based on the texture mapping model of the two-dimensional parameters and the geometric model of the two-dimensional parameters.
Optionally, the calculating and processing unit calculates the three-dimensional shape parameter of the shot object, which includes details, based on a texture mapping model of the two-dimensional parameter and a geometric model of the two-dimensional parameter, and specifically includes:
calculating to obtain three-dimensional shape parameters of the shot object based on a texture mapping model of the two-dimensional parameters;
and correcting the three-dimensional shape parameters obtained by calculation based on the geometric model of the two-dimensional parameters.
Optionally, the three-dimensional shape parameters obtained through calculation at least include a shooting object surface brightness parameter and a shooting object depth parameter, wherein the shooting object surface brightness is determined by shooting object surface reflectivity and an illumination direction;
the calculation processing unit corrects the calculated three-dimensional shape parameters based on the geometric model of the two-dimensional parameters, and specifically includes:
comparing the three-dimensional shape parameters obtained by calculation with the geometric model of the two-dimensional parameters to determine calculation errors; the calculation error is used for representing the error between the calculated three-dimensional shape parameter and the real parameter of the shooting object;
recalculating the three-dimensional shape parameters of the photographic object based on the calculation error;
and repeating the processing process until the calculation error is smaller than the set error threshold.
Optionally, the calculating and processing unit recalculates the three-dimensional shape parameter of the photographic object based on the calculation error, and specifically includes:
and iteratively calculating new three-dimensional shape parameters based on the calculation errors and the calculated three-dimensional shape parameters.
Optionally, the parameter processing unit performs two-dimensional parameterization on the initial three-dimensional geometric model and the texture mapping model to obtain a geometric model and a texture mapping model represented by two-dimensional parameters, and the method specifically includes:
establishing a three-dimensional rectangular coordinate system based on the initial three-dimensional geometric model and the texture mapping model;
acquiring cylindrical projection coordinates corresponding to coordinate points on an initial three-dimensional geometric model and a texture mapping model in a three-dimensional rectangular coordinate system by adopting a cylindrical projection method;
and converting the cylindrical projection coordinates corresponding to the coordinate points on the initial three-dimensional geometric model and the texture mapping model into plane coordinates to obtain a geometric model and a texture mapping model represented by two-dimensional parameters.
The three-dimensional modeling detail enhancement method can acquire the three-dimensional shape parameters containing the details of the shot object based on the initial rough three-dimensional geometric model of the shot object and the texture mapping model corresponding to the three-dimensional geometric model, namely, the method can establish the fine three-dimensional model of the shot object by carrying out image shooting on the shot object, so that the dependence of accurate and detailed three-dimensional modeling on expensive three-dimensional scanning equipment is eliminated, and the detailed reconstruction of the three-dimensional modeling is easier to realize.
Meanwhile, the three-dimensional modeling detail enhancement method provided by the embodiment of the application can take the original geometric model of the shot object as prior information to be used for correcting the three-dimensional shape parameters obtained by modeling, so that the modeling precision can be higher, and a high-precision three-dimensional modeling result can be obtained.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only embodiments of the present application, and that those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic flow chart diagram of a three-dimensional modeling detail enhancement method provided in an embodiment of the present application;
FIG. 2 is a schematic flow chart diagram of another three-dimensional modeling detail enhancement method provided by the embodiment of the application;
FIG. 3 is a schematic flow chart diagram of another three-dimensional modeling detail enhancement method provided by an embodiment of the present application;
fig. 4 is a schematic structural diagram of a three-dimensional modeling detail enhancement device according to an embodiment of the present application.
Detailed Description
The technical scheme of the embodiment of the application is suitable for a three-dimensional modeling detail enhancement application scene, and by adopting the technical scheme of the embodiment of the application, the image of the shot object can be shot, and then the three-dimensional model of the shot object is reconstructed based on the shot image, so that the purpose of obtaining the accurate and detailed three-dimensional information of the shot object is achieved.
Three-dimensional modeling can generally be realized in two ways. One is to perform a three-dimensional scan of the target to be modeled with three-dimensional scanning equipment, such as a three-dimensional laser scanner, and then build a three-dimensional model of the target.
The other is to acquire image information of the target to be modeled with consumer-grade acquisition equipment and derive the target's three-dimensional information from the images.
Each of these two approaches has its own drawbacks, as follows:
First, the main problem with three-dimensional modeling based on three-dimensional scanning equipment is cost. Such equipment is expensive and ordinary users are typically not equipped with it, so this scheme cannot be widely applied.
In the scheme that performs three-dimensional modeling with consumer-grade acquisition equipment, the three-dimensional model of the target is usually built from the target image and the target depth image in combination with the TSDF (Truncated Signed Distance Function) algorithm.
During model creation, the pose of the current frame must be located accurately, so large overlapping regions are needed between frames; because the inter-frame pose drifts, the model grows smoother and details are lost as the weight coefficients in the TSDF are updated.
In addition, the TSDF algorithm in effect samples three-dimensional space at equal intervals, so model details finer than its spatial resolution are smoothed away, and higher-accuracy model data cannot be obtained.
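To make the smoothing effect concrete, the sketch below shows the standard weighted running-average voxel update at the heart of TSDF fusion. This is a generic illustration, not code from the application; the array layout and truncation value are assumptions.

```python
import numpy as np

def tsdf_update(tsdf, weight, new_sdf, new_weight, trunc=0.01):
    """Fuse one frame's signed distances into the voxel grid."""
    # Truncate incoming signed distances to the band [-trunc, trunc].
    d = np.clip(new_sdf, -trunc, trunc)
    # Weighted running average: surface detail that disagrees between
    # frames (e.g. due to inter-frame pose drift) is averaged away,
    # which is exactly the smoothing described above.
    fused = (tsdf * weight + d * new_weight) / (weight + new_weight)
    return fused, weight + new_weight
```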
In summary, existing three-dimensional modeling schemes either require expensive equipment or deliver low modeling accuracy; they are therefore either out of reach for ordinary users or unable to meet their requirements.
In view of the above problems, an embodiment of the present application provides a three-dimensional modeling detail enhancement method, which is capable of establishing an accurate and detailed three-dimensional model of a photographic subject by using an initial three-dimensional image taken of the photographic subject, that is, enhancing the accuracy and detail of the three-dimensional model of the photographic subject by means of the image.
In addition, the embodiments of the application build the three-dimensional model from images of the subject captured by consumer-grade equipment, so the cost is low, the method can be deployed widely, and it lends itself to popularization.
For ease of presentation, the following takes a human face as the photographic subject and walks through the three-dimensional face modeling process, so as to illustrate the three-dimensional modeling detail enhancement method provided by the embodiments of the present application.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, a three-dimensional modeling detail enhancing method provided by an embodiment of the present application includes:
s101, obtaining an initial three-dimensional geometric model of a shooting object and a texture mapping model corresponding to the initial three-dimensional geometric model.
Wherein the initial three-dimensional geometric model contains depth information of a photographic subject.
Specifically, the photographic subject may be any object for which a three-dimensional model is to be established, such as a table, a cup, an animal, a human body, or a body part such as a face or a hand.
The initial three-dimensional geometric model of the photographic subject and the texture mapping model corresponding to it are, specifically, a geometric model and texture map of the subject established with consumer-grade equipment. For example, a consumer-grade RGBD module can be used for three-dimensional face reconstruction to obtain a three-dimensional geometric model with a texture map. It can therefore be understood that the initial three-dimensional geometric model and its corresponding texture mapping model are of low precision.
The consumption-level RGBD module can acquire two-dimensional RGB information of a shooting object and acquire depth information of the shooting object. Therefore, the initial three-dimensional geometric model established by the consumption-level RGBD module includes depth information of the shot object.
The initial three-dimensional geometric model of the shot object and the texture mapping model corresponding to the initial three-dimensional geometric model may be acquired and established by consumer-level equipment, or may be read from a database.
As an optional implementation manner, the initial three-dimensional geometric model may be established by methods such as Kinect Fusion and Bundle Fusion based on an image of a shooting object acquired by a consumer-grade device; the texture mapping model may be obtained by an MVS (Multi-view Stereo) algorithm.
S102, carrying out two-dimensional parameterization on the initial three-dimensional geometric model and the texture mapping model to obtain a geometric model and a texture mapping model represented by two-dimensional parameters.
Specifically, the initial three-dimensional geometric model and the texture mapping model corresponding to the initial three-dimensional geometric model are subjected to two-dimensional parameterization simultaneously, that is, the initial three-dimensional geometric model and the texture mapping model corresponding to the initial three-dimensional geometric model are converted into two-dimensional parameters for representation, and the geometric model and the texture mapping model represented by the two-dimensional parameters are obtained.
Illustratively, two-dimensional parameterization processing on the initial three-dimensional geometric model and the texture mapping model can be realized by adopting methods such as cylindrical projection expansion, spherical projection expansion, UV parameterization and the like. In the embodiment of the present application, a cylindrical projection expansion method is selected to implement two-dimensional parameterization processing on an initial three-dimensional geometric model and a texture mapping model, and a specific processing procedure thereof will be described in detail in the following embodiment.
S103, calculating to obtain three-dimensional shape parameters containing details of the shot object based on a texture mapping model of the two-dimensional parameters and a geometric model of the two-dimensional parameters.
Specifically, shape recovery is performed on the photographic subject by an SFS (Shape From Shading) method based on the texture mapping model of the two-dimensional parameters, obtaining three-dimensional shape information. The calculated three-dimensional shape parameters are then corrected against the geometric model of the two-dimensional parameters, finally yielding accurate, detailed three-dimensional shape parameters of the photographic subject, i.e. an accurate and detailed three-dimensional model of the subject.
As can be seen from the above description, the three-dimensional modeling detail enhancement method provided in the embodiment of the present application can obtain the three-dimensional shape parameters including the details of the photographic object based on the initial three-dimensional geometric model of the photographic object and the texture map model corresponding to the three-dimensional geometric model, that is, the method can establish the fine three-dimensional model of the photographic object by performing image shooting on the photographic object, so that the dependence of accurate and detailed three-dimensional modeling on expensive three-dimensional scanning equipment is eliminated, and the detailed reconstruction of the three-dimensional modeling is easier to implement.
Meanwhile, the three-dimensional modeling detail enhancement method provided by the embodiment of the application can take the original geometric model of the shot object as prior information to be used for correcting the three-dimensional shape parameters obtained by modeling, so that the modeling precision can be higher, and a high-precision three-dimensional modeling result can be obtained.
The three-dimensional modeling detail enhancement method proposed by the present application is further described in detail below with reference to fig. 2:
s201, obtaining an initial three-dimensional geometric model of a shooting object and a texture mapping model corresponding to the initial three-dimensional geometric model.
Wherein the initial three-dimensional geometric model contains depth information of a photographic subject.
Specifically, the photographic subject may be any object for which a three-dimensional model is to be established, such as a table, a cup, an animal, a human body, or a body part such as a face or a hand.
The initial three-dimensional geometric model of the photographic subject and the texture mapping model corresponding to it are, specifically, a geometric model and texture map of the subject established with consumer-grade equipment. For example, a consumer-grade RGBD module can be used for three-dimensional face reconstruction to obtain a three-dimensional geometric model with a texture map.
The consumption-level RGBD module can acquire two-dimensional RGB information of a shooting object and acquire depth information of the shooting object. Therefore, the initial three-dimensional geometric model established by the consumption-level RGBD module includes depth information of the shot object.
The initial three-dimensional geometric model of the shot object and the texture mapping model corresponding to the initial three-dimensional geometric model may be acquired and established by consumer-level equipment, or may be read from a database.
As an optional implementation manner, the initial three-dimensional geometric model may be established by methods such as Kinect Fusion and Bundle Fusion based on an image of a shooting object acquired by a consumer-grade device; the texture mapping model may be obtained by an MVS (Multi-view Stereo) algorithm.
S202, carrying out two-dimensional parameterization on the initial three-dimensional geometric model and the texture mapping model to obtain a geometric model and a texture mapping model represented by two-dimensional parameters.
Specifically, the initial three-dimensional geometric model and the texture mapping model corresponding to the initial three-dimensional geometric model are subjected to two-dimensional parameterization simultaneously, that is, the initial three-dimensional geometric model and the texture mapping model corresponding to the initial three-dimensional geometric model are converted into two-dimensional parameters for representation, and the geometric model and the texture mapping model represented by the two-dimensional parameters are obtained.
Illustratively, two-dimensional parameterization processing on the initial three-dimensional geometric model and the texture mapping model can be realized by adopting methods such as cylindrical projection expansion, spherical projection expansion, UV parameterization and the like. In the embodiment of the present application, a cylindrical projection expansion method is selected to implement two-dimensional parameterization processing on an initial three-dimensional geometric model and a texture mapping model, and a specific processing procedure thereof will be described in detail in the following embodiment.
And S203, calculating to obtain a three-dimensional shape parameter containing details of the shot object based on the texture mapping model of the two-dimensional parameter.
Specifically, the shape of the photographic subject is recovered by the SFS (Shape From Shading) method based on the texture mapping model of the two-dimensional parameters, obtaining the three-dimensional shape parameters. The calculated three-dimensional shape parameters include, among others, the subject surface brightness and the subject depth. The surface brightness is determined from the subject's surface reflectivity and the illumination direction, which are therefore estimated first.
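For intuition, SFS methods commonly assume a Lambertian reflectance model, under which a point's brightness is its reflectivity times the clamped cosine between its surface normal and the light direction. The sketch below illustrates this relationship; the Lambertian choice and the array shapes are assumptions for illustration, since the embodiment does not spell out its reflectance model at this point.

```python
import numpy as np

def lambertian_brightness(albedo, normals, light):
    """Brightness I = rho * max(0, <n, l>) per pixel.

    albedo:  (H, W) surface reflectivity rho
    normals: (H, W, 3) unit surface normals derived from the depth map
    light:   (3,) unit illumination direction l
    """
    # Shading: cosine between normal and light, clamped for back-facing points.
    shading = np.clip(normals @ light, 0.0, None)
    return albedo * shading
```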
And S204, comparing the three-dimensional shape parameters obtained by calculation with the geometric model of the two-dimensional parameters to determine calculation errors.
And the calculation error is used for representing the error between the calculated three-dimensional shape parameter and the real parameter of the shooting object.
As a preferred implementation, the embodiment of the present application pre-establishes the following objective function to measure the error between the calculated three-dimensional shape parameters and the geometric model of the two-dimensional parameters, while jointly optimizing the object surface reflectivity ρ, the illumination direction l, and the depth value z of the two-dimensional parametric image. The formula appears only as an embedded image in the original publication; the form below is reconstructed from the term-by-term description that follows:

min over ρ, l, z of: ||(l · m(z, ∇z))ρ − I||² + μ||Kz − z₀||² + ν||∇z|| + λ||ρ||

wherein μ, ν and λ are preset weight coefficients, and I is the luminance map of the input color image, i.e. the color image of the photographic subject captured by the RGBD module.

The term ||(l · m(z, ∇z))ρ − I||² represents the difference between the object brightness estimated by the reflectance-map model of the SFS algorithm and the brightness of the two-dimensional parametric color image represented by the geometric model of the two-dimensional parameters; its purpose is to estimate the object surface reflectivity accurately. Here m(z, ∇z) is the surface normal field induced by the depth map under perspective projection (given only as an image in the original), f represents the focal length, and p represents the pixel position in the two-dimensional parametric image.

The term ||Kz − z₀||² measures the difference between the subject depth z estimated by the SFS algorithm and the original subject depth z₀ represented by the geometric model of the two-dimensional parameters; its purpose is to keep the estimated depth values as close as possible to the original depth values. K is a parameter.

The term ||∇z|| measures the gradient information at point p of the two-dimensional parametric space, and ||ρ|| measures the reflectivity of objects in the two-dimensional parameterized color map.

This objective function represents the error between the calculated three-dimensional shape parameters and the real parameters of the photographic subject; minimizing it therefore optimizes the calculated three-dimensional shape parameters.

Meanwhile, the original depth, i.e. the depth in the geometric model represented by the two-dimensional parameters, is added to the optimization as prior information, and the surface reflectivity and illumination direction of the object are optimized as well, so that the real detail of the object surface can be recovered more faithfully.
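For illustration, the reconstructed objective can be evaluated as in the following sketch. The gradient operator, the treatment of K as a matrix acting on the flattened depth map, and the externally supplied normal-field function are assumptions made for the example.

```python
import numpy as np

def grad(z):
    """Forward-difference gradient of a 2D map, shape (H, W, 2)."""
    gx = np.diff(z, axis=1, append=z[:, -1:])
    gy = np.diff(z, axis=0, append=z[-1:, :])
    return np.stack([gx, gy], axis=-1)

def objective(rho, light, z, I, z0, K, mu, nu, lam, normals_from):
    """E = ||(l . m(z, grad z)) rho - I||^2 + mu ||Kz - z0||^2
           + nu ||grad z|| + lam ||rho||  (reconstructed form above)."""
    n = normals_from(z, grad(z))                 # m(z, grad z): per-pixel normals
    data = np.sum(((n @ light) * rho - I) ** 2)              # shading fidelity
    prior = mu * np.sum((K @ z.ravel() - z0.ravel()) ** 2)   # depth prior
    smooth = nu * np.sum(np.linalg.norm(grad(z), axis=-1))   # gradient term
    sparse = lam * np.sum(np.abs(rho))                       # reflectivity term
    return data + prior + smooth + sparse
```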
And S205, recalculating the three-dimensional shape parameters of the shooting object based on the calculation error.
Specifically, if the calculation error obtained in the above step is not smaller than the set error threshold, iterative calculation is performed based on the three-dimensional shape parameters already obtained, yielding new three-dimensional shape parameters.
And repeating the processing procedures of the steps S203 to S205 until the calculation error is smaller than the set error threshold value, and stopping the calculation procedure.
As a preferred implementation manner, the embodiment of the present application performs iterative computation based on ADMM (Alternating Direction Method of Multipliers), that is, recalculates the three-dimensional shape parameter of the photographic object.
Specifically, given the estimates (ρ(k), l(k), θ(k), z(k)) of the current k-th iteration, the result of the (k+1)-th iteration is solved by alternating updates. The four update formulas appear only as embedded images in the original publication; the forms below follow the standard ADMM splitting of the objective above, with auxiliary variable θ standing in for (z, ∇z), scaled dual variable u, and penalty parameter β (notation assumed):

ρ(k+1) = argmin over ρ of ||(l(k) · m(θ(k)))ρ − I||² + λ||ρ||

l(k+1) = argmin over l of ||(l · m(θ(k)))ρ(k+1) − I||²

θ(k+1) = argmin over θ of ||(l(k+1) · m(θ))ρ(k+1) − I||² + (β/2)||θ − (z(k), ∇z(k)) + u(k)||²

z(k+1) = argmin over z of μ||Kz − z₀||² + (β/2)||θ(k+1) − (z, ∇z) + u(k)||²

u(k+1) = u(k) + θ(k+1) − (z(k+1), ∇z(k+1))

The iteration stops when the changes in ρ(k), l(k), θ(k) and z(k) between successive iterations are each smaller than a given error threshold, or when the number of iterations reaches a given limit.

Here ρ(k+1), l(k+1), θ(k+1), z(k+1) and u(k+1) denote the estimates of the corresponding quantities at the (k+1)-th iteration.
Fig. 3 shows another embodiment of the three-dimensional modeling detail enhancement method proposed in the present application, which illustrates the specific process of performing two-dimensional parameterization on the initial three-dimensional geometric model and the texture map model to obtain a geometric model and a texture map model represented by two-dimensional parameters.
Referring to fig. 3, a three-dimensional modeling detail enhancing method provided by an embodiment of the present application includes:
s301, obtaining an initial three-dimensional geometric model of the shooting object and a texture mapping model corresponding to the initial three-dimensional geometric model.
Wherein the initial three-dimensional geometric model contains depth information of a photographic subject.
Specifically, the photographic subject may be any object for which a three-dimensional model is to be established, such as a table, a cup, an animal, a human body, or a body part such as a face or a hand.
The initial three-dimensional geometric model of the photographic subject and the texture mapping model corresponding to it are, specifically, a geometric model and texture map of the subject established with consumer-grade equipment. For example, a consumer-grade RGBD module can be used for three-dimensional face reconstruction to obtain a three-dimensional geometric model with a texture map. It can therefore be understood that the three-dimensional geometric model and its corresponding texture mapping model are of low precision.
The consumption-level RGBD module can acquire two-dimensional RGB information of a shooting object and acquire depth information of the shooting object. Therefore, the initial three-dimensional geometric model established by the consumption-level RGBD module includes depth information of the shot object.
The initial three-dimensional geometric model of the shot object and the texture mapping model corresponding to the initial three-dimensional geometric model may be acquired and established by consumer-level equipment, or may be read from a database.
As an optional implementation manner, the initial three-dimensional geometric model may be established by methods such as Kinect Fusion and Bundle Fusion based on an image of a shooting object acquired by a consumer-grade device; the texture mapping model may be obtained by an MVS (Multi-view Stereo) algorithm.
S302, establishing a three-dimensional rectangular coordinate system based on the initial three-dimensional geometric model and the texture mapping model.
Specifically, taking the human face three-dimensional geometric model and its corresponding texture mapping model as an example, a three-dimensional rectangular coordinate system is established with the left-right direction of the face surface model as the x-axis, the upward direction as the z-axis, the backward direction as the y-axis, and the center point of the model's head as the origin.
S303, acquiring cylindrical projection coordinates corresponding to the coordinate points on the initial three-dimensional geometric model and the texture mapping model in the three-dimensional rectangular coordinate system by adopting a cylindrical projection method.
Specifically, the cylindrical projection coordinates (a, b) corresponding to the coordinate points V (x, y, z) on the three-dimensional geometric model and the texture mapping model are calculated by the following formula:
a = arctan2(y, x) + π
b = z − z_low
where 0 ≤ a ≤ 2π, 0 ≤ b ≤ H, z_low is the z value of the lowest point of the face mesh, and H is the height of the mesh.
S304, converting the cylindrical projection coordinates corresponding to the coordinate points on the initial three-dimensional geometric model and the texture mapping model into plane coordinates to obtain a geometric model and a texture mapping model represented by two-dimensional parameters.
Specifically, the cylindrical surface is expanded into a rectangular surface, and point coordinates p (u, v) in the two-dimensional parametric space corresponding to the grid points can be correspondingly obtained as follows:
u=a/2π
v=b/H
where 0 ≤ u ≤ 1 and 0 ≤ v ≤ 1.
Through the above two steps, the two-dimensional parametric coordinate space of the vertices of the three-dimensional face mesh model is obtained. Each point (u, v) in the two-dimensional parametric space stores six quantities (x, y, z, r, g, b), representing respectively the spatial position (x, y, z) of the face-model mesh point corresponding to that location in the two-dimensional parametric image and the texture color (r, g, b) of that mesh point.
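A compact sketch of this cylindrical unwrapping for a vertex array follows; it implements the formulas of steps S303 and S304, with the array layout an assumption. Each resulting (u, v) sample can then store the six quantities (x, y, z, r, g, b) by rasterizing the mesh into a two-dimensional parameter image.

```python
import numpy as np

def cylindrical_uv(vertices):
    """Map mesh vertices (N, 3) with columns (x, y, z) to (u, v) in [0, 1]^2.

    Coordinate system of S302: x left-right, y backward, z up.
    """
    x, y, z = vertices[:, 0], vertices[:, 1], vertices[:, 2]
    z_low = z.min()                    # z value of the lowest mesh point
    H = z.max() - z_low                # mesh height
    a = np.arctan2(y, x) + np.pi       # angle around the axis, in [0, 2*pi]
    b = z - z_low                      # height above the lowest point, in [0, H]
    return np.stack([a / (2 * np.pi), b / H], axis=-1)
```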
S305, calculating to obtain a three-dimensional shape parameter containing details of the shot object based on a texture mapping model of the two-dimensional parameters and a geometric model of the two-dimensional parameters.
Specifically, shape recovery is performed on the photographic subject by an SFS (Shape From Shading) method based on the texture mapping model of the two-dimensional parameters, obtaining three-dimensional shape information. The calculated three-dimensional shape parameters are corrected against the geometric model of the two-dimensional parameters, finally yielding accurate, detailed three-dimensional shape parameters of the photographic subject, i.e. an accurate and detailed three-dimensional model of the subject.
The specific processing content of step S305 can also refer to the method embodiment shown in fig. 2, which is not repeated here.
Another embodiment of the present application further provides a three-dimensional modeling detail enhancement apparatus, as shown in fig. 4, the apparatus including:
a data obtaining unit 100 for obtaining an initial three-dimensional geometric model of a photographic subject and a texture map model corresponding to the initial three-dimensional geometric model; wherein the initial three-dimensional geometric model contains depth information of a photographic object;
a parameter processing unit 110, configured to perform two-dimensional parameterization on the initial three-dimensional geometric model and the texture map model to obtain a geometric model and a texture map model represented by two-dimensional parameters;
and the calculation processing unit 120 is configured to calculate a three-dimensional shape parameter including details of the photographic object based on the texture mapping model of the two-dimensional parameter and the geometric model of the two-dimensional parameter.
The three-dimensional modeling detail enhancing device provided by the embodiment of the application can acquire the detailed three-dimensional shape parameters of the photographic subject based on the initial rough three-dimensional geometric model of the subject and the texture mapping model corresponding to that geometric model. In other words, the device can establish a fine three-dimensional model of the subject from captured images alone, so the dependence of accurate and detailed three-dimensional modeling on expensive three-dimensional scanning equipment is eliminated, and detailed three-dimensional reconstruction is easier to realize.
Meanwhile, the three-dimensional modeling detail enhancement device provided by the embodiment of the application can take the original geometric model of the shot object as prior information to be used for correcting the three-dimensional shape parameters obtained by modeling, so that the modeling precision can be higher, and a high-precision three-dimensional modeling result can be obtained.
Optionally, the calculating and processing unit 120 calculates the three-dimensional shape parameter of the shot object, which includes details, based on the texture mapping model of the two-dimensional parameter and the geometric model of the two-dimensional parameter, and specifically includes:
calculating to obtain three-dimensional shape parameters of the shot object based on a texture mapping model of the two-dimensional parameters;
and correcting the three-dimensional shape parameters obtained by calculation based on the geometric model of the two-dimensional parameters.
Optionally, the three-dimensional shape parameters obtained through calculation at least include a shooting object surface brightness parameter and a shooting object depth parameter, wherein the shooting object surface brightness is determined by shooting object surface reflectivity and an illumination direction;
the calculation processing unit 120 corrects the calculated three-dimensional shape parameter based on the geometric model of the two-dimensional parameter, and specifically includes:
comparing the three-dimensional shape parameters obtained by calculation with the geometric model of the two-dimensional parameters to determine calculation errors; the calculation error is used for representing the error between the calculated three-dimensional shape parameter and the real parameter of the shooting object;
recalculating the three-dimensional shape parameters of the photographic object based on the calculation error;
and repeating the processing process until the calculation error is smaller than the set error threshold.
Optionally, the calculating and processing unit 120 recalculates the three-dimensional shape parameter of the photographic object based on the calculation error, and specifically includes:
and iteratively calculating new three-dimensional shape parameters based on the calculation errors and the calculated three-dimensional shape parameters.
Optionally, the parameter processing unit 110 performs two-dimensional parameterization on the initial three-dimensional geometric model and the texture map model to obtain a geometric model and a texture map model represented by two-dimensional parameters, and specifically includes:
establishing a three-dimensional rectangular coordinate system based on the initial three-dimensional geometric model and the texture mapping model;
acquiring cylindrical projection coordinates corresponding to coordinate points on an initial three-dimensional geometric model and a texture mapping model in a three-dimensional rectangular coordinate system by adopting a cylindrical projection method;
and converting the cylindrical projection coordinates corresponding to the coordinate points on the initial three-dimensional geometric model and the texture mapping model into plane coordinates to obtain a geometric model and a texture mapping model represented by two-dimensional parameters.
Specifically, please refer to the corresponding contents of the above method embodiments for the specific working contents of each unit of the three-dimensional modeling detail enhancing apparatus, which is not repeated here.
While, for purposes of simplicity of explanation, the foregoing method embodiments have been described as a series of acts or combination of acts, it will be appreciated by those skilled in the art that the present application is not limited by the order of acts or acts described, as some steps may occur in other orders or concurrently with other steps in accordance with the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The steps in the method of each embodiment of the present application may be sequentially adjusted, combined, and deleted according to actual needs, and technical features described in each embodiment may be replaced or combined.
The modules and sub-modules in the device and the terminal in the embodiments of the application can be combined, divided and deleted according to actual needs.
In the several embodiments provided in the present application, it should be understood that the disclosed terminal, apparatus and method may be implemented in other manners. For example, the above-described terminal embodiments are merely illustrative, and for example, the division of a module or a sub-module is only one logical division, and there may be other divisions when the terminal is actually implemented, for example, a plurality of sub-modules or modules may be combined or integrated into another module, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
The modules or sub-modules described as separate parts may or may not be physically separate, and parts that are modules or sub-modules may or may not be physical modules or sub-modules, may be located in one place, or may be distributed over a plurality of network modules or sub-modules. Some or all of the modules or sub-modules can be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, each functional module or sub-module in the embodiments of the present application may be integrated into one processing module, or each module or sub-module may exist alone physically, or two or more modules or sub-modules may be integrated into one module. The integrated modules or sub-modules may be implemented in the form of hardware, or may be implemented in the form of software functional modules or sub-modules.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software unit executed by a processor, or in a combination of the two. A software unit may reside in Random Access Memory (RAM), flash memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A three-dimensional modeling detail enhancement method is characterized by comprising the following steps:
acquiring an initial three-dimensional geometric model of a shot object and a texture mapping model corresponding to the initial three-dimensional geometric model; wherein the initial three-dimensional geometric model contains depth information of a photographic object;
carrying out two-dimensional parameterization on the initial three-dimensional geometric model and the texture mapping model to obtain a geometric model and a texture mapping model represented by two-dimensional parameters;
and calculating to obtain the three-dimensional shape parameters containing details of the shot object based on the texture mapping model of the two-dimensional parameters and the geometric model of the two-dimensional parameters.
2. The method according to claim 1, wherein the calculating of the three-dimensional shape parameters of the photographic subject including details based on the texture mapping model of the two-dimensional parameters and the geometric model of the two-dimensional parameters comprises:
calculating to obtain three-dimensional shape parameters of the shot object based on a texture mapping model of the two-dimensional parameters;
and correcting the three-dimensional shape parameters obtained by calculation based on the geometric model of the two-dimensional parameters.
3. The method according to claim 2, wherein the three-dimensional shape parameters obtained by calculation at least comprise a photographic subject surface brightness parameter and a photographic subject depth parameter, wherein the photographic subject surface brightness is determined by the photographic subject surface reflectivity and the illumination direction;
the geometric model based on the two-dimensional parameters corrects the three-dimensional shape parameters obtained by calculation, and comprises the following steps:
comparing the three-dimensional shape parameters obtained by calculation with the geometric model of the two-dimensional parameters to determine calculation errors; the calculation error is used for representing the error between the calculated three-dimensional shape parameter and the real parameter of the shooting object;
recalculating the three-dimensional shape parameters of the photographic object based on the calculation error;
and repeating the processing process until the calculation error is smaller than the set error threshold.
4. The method according to claim 3, wherein the recalculating the three-dimensional shape parameter of the photographic object based on the calculation error comprises:
and iteratively calculating new three-dimensional shape parameters based on the calculation errors and the calculated three-dimensional shape parameters.
5. The method of claim 1, wherein performing a two-dimensional parameterization on the initial three-dimensional geometric model and the texture map model to obtain a geometric model and a texture map model represented by two-dimensional parameters comprises:
establishing a three-dimensional rectangular coordinate system based on the initial three-dimensional geometric model and the texture mapping model;
acquiring cylindrical projection coordinates corresponding to coordinate points on an initial three-dimensional geometric model and a texture mapping model in a three-dimensional rectangular coordinate system by adopting a cylindrical projection method;
and converting the cylindrical projection coordinates corresponding to the coordinate points on the initial three-dimensional geometric model and the texture mapping model into plane coordinates to obtain a geometric model and a texture mapping model represented by two-dimensional parameters.
6. A three-dimensional modeling detail enhancement apparatus, comprising:
the data acquisition unit is used for acquiring an initial three-dimensional geometric model of a shooting object and a texture mapping model corresponding to the initial three-dimensional geometric model; wherein the initial three-dimensional geometric model contains depth information of a photographic object;
the parameter processing unit is used for carrying out two-dimensional parameterization on the initial three-dimensional geometric model and the texture mapping model to obtain a geometric model and a texture mapping model represented by two-dimensional parameters;
and the calculation processing unit is used for calculating and obtaining the three-dimensional shape parameters containing the details of the shot object based on the texture mapping model of the two-dimensional parameters and the geometric model of the two-dimensional parameters.
7. The apparatus according to claim 6, wherein the calculating and processing unit calculates a three-dimensional shape parameter including details of the photographic subject based on a texture map model of the two-dimensional parameters and a geometric model of the two-dimensional parameters, and specifically includes:
calculating to obtain three-dimensional shape parameters of the shot object based on a texture mapping model of the two-dimensional parameters;
and correcting the three-dimensional shape parameters obtained by calculation based on the geometric model of the two-dimensional parameters.
8. The apparatus according to claim 7, wherein the calculated three-dimensional shape parameters include at least an object surface brightness parameter and an object depth parameter, the object surface brightness being determined by the object surface reflectivity and the illumination direction;
wherein the calculation processing unit correcting the calculated three-dimensional shape parameters based on the geometric model represented by two-dimensional parameters specifically comprises:
comparing the calculated three-dimensional shape parameters with the geometric model represented by two-dimensional parameters to determine a calculation error, the calculation error representing the deviation between the calculated three-dimensional shape parameters and the real parameters of the photographic object;
recalculating the three-dimensional shape parameters of the photographic object based on the calculation error; and
repeating the above process until the calculation error is smaller than a set error threshold.
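Claim 8 (echoing claim 3) ties the object surface brightness to the surface reflectivity and the illumination direction, which is the classic shape-from-shading relation. A minimal sketch, assuming a Lambertian reflectance model with per-point unit normals and a single distant light source; the patent itself names no specific reflectance model.

```python
import numpy as np

def surface_brightness(normals, albedo, light_dir):
    """Predict brightness from reflectivity and illumination direction
    using the Lambertian form I = albedo * max(0, n . l).
    normals: (N, 3) unit normals; albedo: (N,) reflectivity;
    light_dir: (3,) unit vector pointing toward the light source."""
    shading = normals @ light_dir        # cosine of the incidence angle
    return albedo * np.clip(shading, 0.0, None)
```

In the correction loop of claim 8, the gap between this predicted brightness and the intensity observed in the texture mapping model would be one natural source of the calculation error that drives recalculation of the depth parameter.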
9. The apparatus according to claim 8, wherein the calculation processing unit recalculating the three-dimensional shape parameters of the photographic object based on the calculation error specifically comprises:
iteratively calculating new three-dimensional shape parameters based on the calculation error and the previously calculated three-dimensional shape parameters.
10. The apparatus according to claim 6, wherein the parameter processing unit performing two-dimensional parameterization on the initial three-dimensional geometric model and the texture mapping model to obtain a geometric model and a texture mapping model represented by two-dimensional parameters specifically comprises:
establishing a three-dimensional rectangular coordinate system based on the initial three-dimensional geometric model and the texture mapping model;
acquiring, by a cylindrical projection method, cylindrical projection coordinates corresponding to coordinate points on the initial three-dimensional geometric model and the texture mapping model in the three-dimensional rectangular coordinate system; and
converting the cylindrical projection coordinates into plane coordinates to obtain the geometric model and the texture mapping model represented by two-dimensional parameters.
CN202110713305.7A 2021-06-25 2021-06-25 Three-dimensional modeling detail enhancement method and device Pending CN113421292A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110713305.7A CN113421292A (en) 2021-06-25 2021-06-25 Three-dimensional modeling detail enhancement method and device

Publications (1)

Publication Number Publication Date
CN113421292A 2021-09-21

Family

ID=77716836

Country Status (1)

CN: CN113421292A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2005218060A1 (en) * 2000-03-08 2005-10-27 Cyberextruder.Com, Inc. Apparatus and method for generating a three-dimensional representation from a two-dimensional image
CN108154550A (en) * 2017-11-29 2018-06-12 深圳奥比中光科技有限公司 Face real-time three-dimensional method for reconstructing based on RGBD cameras
US20210319621A1 (en) * 2018-09-26 2021-10-14 Beijing Kuangshi Technology Co., Ltd. Face modeling method and apparatus, electronic device and computer-readable medium
CN110097624A (en) * 2019-05-07 2019-08-06 洛阳众智软件科技股份有限公司 Generate the method and device of three-dimensional data LOD simplified model
CN111008422A (en) * 2019-11-29 2020-04-14 北京建筑大学 A method and system for making a real scene map of a building
CN110942506A (en) * 2019-12-05 2020-03-31 河北科技大学 A kind of object surface texture reconstruction method, terminal equipment and system
CN111127633A (en) * 2019-12-20 2020-05-08 支付宝(杭州)信息技术有限公司 Three-dimensional reconstruction method, apparatus, and computer-readable medium
CN111640180A (en) * 2020-08-03 2020-09-08 深圳市优必选科技股份有限公司 Three-dimensional reconstruction method and device and terminal equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIA, Guofang; HU, Chunmei; FAN, Liang: "A Fine Reconstruction Method of True Three-Dimensional Models for Statue-Type Cultural Relics", Dunhuang Research, No. 03 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114549768A (en) * 2022-04-26 2022-05-27 苏州浪潮智能科技有限公司 Three-dimensional reconstruction effect detection method, device, equipment and storage medium
CN114549768B (en) * 2022-04-26 2022-07-22 苏州浪潮智能科技有限公司 A three-dimensional reconstruction effect detection method, device, equipment and storage medium
WO2023206780A1 (en) * 2022-04-26 2023-11-02 苏州元脑智能科技有限公司 Three-dimensional reconstruction effect detection method and apparatus, and device and storage medium

Similar Documents

Publication Title
CN110363858B (en) Three-dimensional face reconstruction method and system
CN113436238B (en) Point cloud registration accuracy evaluation method and device and electronic equipment
AU2011312140B2 (en) Rapid 3D modeling
CN103218812B (en) Method for rapidly acquiring tree morphological model parameters based on photogrammetry
CN110866531A (en) Building feature extraction method and system based on three-dimensional modeling and storage medium
CN113916130B (en) Building position measuring method based on least square method
CN113566793A (en) True orthoimage generation method and device based on unmanned aerial vehicle oblique image
Gadasin et al. Reconstruction of a Three-Dimensional Scene from its Projections in Computer Vision Systems
CN110738730A (en) Point cloud matching method and device, computer equipment and storage medium
CN110766731A (en) Method and device for automatically registering panoramic image and point cloud and storage medium
CN114494589A (en) Three-dimensional reconstruction method, three-dimensional reconstruction device, electronic equipment and computer-readable storage medium
CN113393577B (en) Oblique photography terrain reconstruction method
JP2019091122A (en) Depth map filter processing device, depth map filter processing method and program
WO2024255182A1 (en) Three-dimensional image generation method, apparatus and device based on panoramic image, and storage medium
Chatterjee et al. A nonlinear Gauss–Seidel algorithm for noncoplanar and coplanar camera calibration with convergence analysis
CN113421292A (en) Three-dimensional modeling detail enhancement method and device
Zhu et al. Triangulation of well-defined points as a constraint for reliable image matching
CN117132737B (en) Three-dimensional building model construction method, system and equipment
Awange et al. Fundamentals of photogrammetry
CN109166176B (en) Three-dimensional face image generation method and device
CN117911512A (en) Camera pose relation determining method, point cloud fusion method and system thereof
JP4035018B2 (en) Shape acquisition method, apparatus, program, and recording medium recording this program
CN112991525B (en) Digital surface model generation method for image space and object space mixed matching primitive
CN108921908B (en) Surface light field acquisition method and device and electronic equipment
CN112819900A (en) Method for calibrating internal azimuth, relative orientation and distortion coefficient of intelligent stereography

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (Application publication date: 20210921)