CN110458932A - Image processing method, device, system, storage medium and image scanning apparatus
- Publication number: CN110458932A
- Application number: CN201810427030.9A
- Authority: CN (China)
- Legal status: Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
Abstract
The invention discloses an image processing method, apparatus, system, storage medium and image scanning device. The method comprises: selecting a first perspective for a three-dimensional model of an object, and determining a texture image subset associated with the first perspective in the original texture images of the object; selecting, for the first perspective, the images in the texture image subset that satisfy an image sharpness condition; determining the overlapping region with the minimum color difference between the images satisfying the image sharpness condition; and performing texture stitching on the overlapping region with the minimum color difference to obtain a stitched texture image corresponding to the first perspective. The image processing method provided according to the embodiments of the present invention can improve the noise immunity and precision of 3D texture mapping and obtain an ideal texture mapping effect.
Description
Technical Field
The present invention relates to the field of computer technologies, and in particular, to an image processing method, an image processing apparatus, an image processing system, a storage medium, and an image scanning device.
Background
Three-Dimensional (3D) texture maps are widely used in various fields of computer graphics to increase the realism of 3D objects. The three-dimensional texture mapping is a process of scanning an object by a three-dimensional scanning device, obtaining three-dimensional geometric information of the object, constructing a three-dimensional model of the object by using the obtained three-dimensional geometric information, and rendering colors of a three-dimensional geometric form on the three-dimensional model of the object.
Traditional three-dimensional scanning equipment is too costly to meet consumer-grade requirements; recently, consumer-grade scanners emerging on the market have provided the basic conditions for bringing three-dimensional scanning into everyday consumer use.
3D texture mapping is a key step in displaying 3D objects realistically. Consumer-grade equipment on the market generally uses fully automatic texture mapping technology, yet most consumer-grade three-dimensional scanning equipment can only restore the object structure to a certain extent and cannot obtain complete and accurate 3D texture information. As a result, when the three-dimensional model of the object is rendered with the colors of its three-dimensional geometric form, the noise immunity is weak and the precision of the 3D texture mapping is difficult to guarantee.
Disclosure of Invention
Embodiments of the present invention provide an image processing method, apparatus, system, storage medium, and image scanning device, which may improve noise immunity of a 3D texture map and improve accuracy of the 3D texture map.
According to an aspect of an embodiment of the present invention, there is provided an image processing method including:
selecting a first perspective for a three-dimensional model of an object, determining a texture image subset associated with the first perspective in an original texture image of the object;
selecting, for the first perspective, the images in the texture image subset that satisfy an image sharpness condition;
determining the overlapping region with the minimum color difference between the images satisfying the image sharpness condition;
and performing texture stitching on the overlapping region with the minimum color difference to obtain a stitched texture image corresponding to the first perspective.
According to another aspect of embodiments of the present invention, there is provided an image processing apparatus including:
a perspective selection module for selecting a first perspective for a three-dimensional model of an object, determining a texture image subset associated with the first perspective in an original texture image of the object;
a sharp-image selection module, configured to select, for the first perspective, the images in the texture image subset that satisfy the image sharpness condition;
a uniform-image selection module, configured to determine the overlapping region with the minimum color difference between the images satisfying the image sharpness condition;
and a texture image stitching module, configured to perform texture stitching on the overlapping region with the minimum color difference to obtain a stitched texture image corresponding to the first perspective.
According to still another aspect of embodiments of the present invention, there is provided an image processing system including: a memory and a processor; the memory is used for storing programs; the processor is used for reading the executable program codes stored in the memory to execute the image processing method.
According to still another aspect of embodiments of the present invention, there is provided a computer-readable storage medium having stored therein instructions that, when executed on a computer, cause the computer to execute the image processing method of the above-described aspects.
According to still another aspect of the embodiments of the present invention, there is provided an image scanning apparatus, including a projection apparatus, an image acquisition apparatus, and a processor; a projection device for projecting the structured light encoded pattern onto the object; the image acquisition equipment is used for acquiring an original texture image of the object under the projection of the structured light code pattern; and a processor for executing the image processing method described in the above embodiments.
According to the image processing method, apparatus, system, storage medium and image scanning device in the embodiments of the present invention, the noise immunity and precision of 3D texture mapping can be improved, and a relatively ideal texture mapping effect can be obtained.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required by the embodiments are briefly described below; those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic diagram showing a basic principle of image processing at a plurality of photographing viewpoints according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating the effect of texture mapping using a fully automatic texture mapping technique in a prior-art consumer-grade scanner;
FIG. 3a is a process diagram illustrating an image processing method according to an exemplary embodiment of the present invention;
FIG. 3b is a diagram illustrating comparison of effects of an image processing method according to an exemplary embodiment of the present invention;
FIG. 4 is a flowchart illustrating an image processing method according to another embodiment of the present invention;
fig. 5 is a schematic structural diagram illustrating an image processing apparatus provided according to an embodiment of the present invention;
fig. 6 is a block diagram illustrating an exemplary hardware architecture of a computing device that may implement the image processing method and apparatus according to embodiments of the present invention.
Detailed Description
Features and exemplary embodiments of various aspects of the present invention will be described in detail below, and in order to make objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not to be construed as limiting the invention. It will be apparent to one skilled in the art that the present invention may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present invention by illustrating examples of the present invention.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
In the embodiments of the present invention, Augmented Reality (AR) technology captures images and video of the real world through the camera terminal carried by the system, estimates the three-dimensional pose of a virtual object in the image or video using techniques such as image registration, places the virtual object or scene at a preset position in the real environment, and finally renders the scene from the perspective of the camera lens. The three-dimensional pose is used to describe the three-dimensional coordinates and deflection angle of the object.
In the embodiment of the present invention, a Virtual Reality (VR) technology is a computer simulation system in which a Virtual world can be created and experienced. In essence, the system utilizes a computer to create a simulated environment that includes an interactive three-dimensional dynamic view of multi-source information fusion and a systematic simulation of entity behavior, which can achieve an immersive experience.
In the above embodiments, when the augmented reality technology or the virtual reality technology requires a three-dimensional scene, the scene may be constructed through the following steps:
first, a three-dimensional model of the object is constructed.
In embodiments of the present invention, a three-dimensional model of an object may be constructed in a variety of ways.
In one embodiment, an instrument device that can perform three-dimensional scanning, such as a three-dimensional scanner, is used to three-dimensionally model the actual object. As an example, a three-dimensional scanner may be used to project structured light to an object in a real scene, obtain multiple point data of the object in the real scene, where the point data constitutes point cloud data, and a point cloud model may be constructed based on the point cloud data, so as to realize modeling of a three-dimensional shape of the object, and obtain a three-dimensional model of the object. The point cloud data may include color information, depth information, and geometric position information represented by three-dimensional coordinates of the object, and the three-dimensional size of the object may be obtained from the geometric position information of the object.
In one embodiment, a three-dimensional reconstruction may be performed using a video or picture of an object to obtain a three-dimensional model of the object.
In one embodiment, a three-dimensional model of the object may also be created by modeling software, such as three-dimensional modeling rendering and animation software 3D Max, computer animation and modeling software Maya, or three-dimensional model design software meshimixer.
And secondly, performing three-dimensional texture mapping by using the constructed three-dimensional model of the object.
As one example, when a real object is three-dimensionally modeled using a three-dimensional scanner, three-dimensional texture mapping may be performed on the basis of the acquired point cloud model. Specifically, the point cloud model can be converted into a polygonal mesh model by a surface reconstruction method, each polygon in the polygonal mesh model can uniquely determine a polygon plane, and texture mapping from the texture image to the polygonal mesh model can be completed by attaching texture in the texture image corresponding to the position to each polygon plane.
Those skilled in the art will appreciate that various implementations may be used to accomplish the creation of the polygonal mesh model. For example, it may be constructed using a triangular, rectangular, pentagonal, hexagonal, etc. form. For simplicity of description, the embodiments described below illustrate the specific way of establishing the polygonal mesh model by taking the triangular mesh model as an example. However, this description is not to be interpreted as limiting the scope or implementation possibilities of the present solution, and the processing methods of other polygonal mesh models than the triangular mesh model are consistent with the processing method of the triangular mesh model.
Specifically, point cloud data of the point cloud model can be triangulated to obtain a triangular mesh model, each triangle in the triangular mesh model can uniquely determine a plane, namely a triangular patch, and texture in a texture image corresponding to the position of each triangular patch is attached to each triangular patch, so that texture mapping from the texture image to the triangular mesh model can be completed.
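A minimal Python sketch of this per-patch attachment is given below; the helper `project_to_texture` is a hypothetical placeholder for the camera projection described in the following paragraphs and is not part of the disclosure.

```python
import numpy as np

def assign_patch_textures(vertices, triangles, project_to_texture):
    """For each triangular patch, look up the texture coordinates of its three
    vertices so that the corresponding texture region can be attached to the patch.

    vertices:  (N, 3) array of 3D mesh vertex positions
    triangles: (M, 3) array of vertex indices, one row per triangular patch
    project_to_texture: callable mapping a 3D point to (u, v) texture coordinates
    """
    uv_per_triangle = []
    for tri in triangles:
        # Project each vertex of the patch into the texture image to get its (u, v).
        uv = np.array([project_to_texture(vertices[idx]) for idx in tri])
        uv_per_triangle.append(uv)
    return np.asarray(uv_per_triangle)   # (M, 3, 2): one (u, v) per patch vertex
```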
It will also be appreciated by those skilled in the art that the texture image for the object may be obtained in a variety of ways, such as using a pre-stored general texture image (e.g., a metal texture image template, a wood texture image template, etc.), a texture image prepared for the object in advance (e.g., pre-captured), a surface image of the object captured in real time (e.g., captured by a real-time capture device). In the embodiment of the present invention, the texture in the texture image may include information such as scene color and scene brightness.
In this embodiment, the point cloud model and the triangular mesh model are different presentation forms at different stages in a process of performing three-dimensional texture mapping on a three-dimensional model constructed according to a three-dimensional scene, and in the following description of the embodiment of the present invention, the triangular mesh model may be referred to as a three-dimensional mesh model.
As can be seen from the foregoing embodiments, in the image processing method according to the embodiments of the present invention, when performing three-dimensional texture mapping, a corresponding texture may be applied to each triangular patch in the three-dimensional model through a mapping relationship between the texture image and the three-dimensional model.
In the embodiments of the present invention, an image of the target modeling object is captured by an image acquisition device, and the original texture is then obtained from the captured image of the target modeling object. In the following description of the embodiments, the texture region in the original texture that corresponds to a triangular patch of the three-dimensional model may be referred to as a texture triangle patch.
In one embodiment, the image capture device may include a camera, a webcam, or the like.
The image acquisition device acquires images of the target modeling object based on the same imaging principle in order to obtain the texture in those images. For ease of understanding, the embodiment of the present invention takes one image acquisition device as an example to describe how the mapping relationship between the three-dimensional model to be texture-mapped and the texture image is established through the parameters of the image acquisition device.
Fig. 1 is a schematic diagram showing a basic principle of image processing in a multi-photographing view according to an embodiment of the present invention. First, a method for establishing a correspondence between a three-dimensional model of an object and a texture image by using parameters of an image capturing apparatus will be described in detail with reference to fig. 1.
As an example, as shown in fig. 1, a shot may be taken around an object at different camera positions by using the same camera and by changing the camera parameters; or a certain number of cameras can be arranged at positions surrounding the object, and the object is shot by the cameras at different positions respectively to acquire an object image.
When an object is photographed by an image acquisition device such as a camera to capture an image of the object, then for any patch on the object surface there is, among the rays reflected by that patch, one ray that enters the camera lens perpendicularly; this ray is referred to as the incident ray of the camera lens.
With continued reference to fig. 1, in one embodiment, an extended line of an incident line of a camera lens forms an intersection point X with a patch of the object surface, and the intersection point X can be used as a shooting viewpoint for shooting the patch by the camera.
When the camera is located at the camera position 1, an included angle between a connecting line of the camera and the viewpoint and a normal vector of a surface patch where the viewpoint is located, that is, an included angle between an incident line of the camera and the normal vector of the surface patch, can be used as a shooting view angle when the camera collects an object image at the camera position 1.
According to the pinhole camera imaging model, three-dimensional points in a real scene are mapped to two-dimensional points on an image through projection transformation, and the projection transformation process can be expressed as a process of performing linear transformation on the three-dimensional points in the acquired real scene by using external parameters of an image acquisition device.
As an example, projective transformation on three-dimensional points in a real scene can be described by the following expression (1):
T_i(x, y, 1) = M_{4×4} × P(x, y, z, 1)    (1)

In the above formula (1), P(x, y, z, 1) represents the homogeneous three-dimensional coordinates of a three-dimensional point in the real scene, T_i(x, y, 1) represents the image physical coordinates of the point corresponding to the three-dimensional modeling point P in the texture image captured by the image acquisition device at the i-th shooting view angle, and the matrix M_{4×4} is an external parameter of the image acquisition device, such as the camera extrinsic matrix.

Through the above formula (1), the external parameter matrix M_{4×4} of the image acquisition device characterizes the positional relationship between a feature point on the surface of the spatial object and the corresponding point in the camera coordinate system.

As an example, the extrinsic parameter matrix in embodiments of the present invention may be written as

M_{4×4} = [ R  t ]
          [ 0  1 ]

where R and t together form the external parameters of the image acquisition device, R being the rotation matrix and t the translation matrix. R may be represented as a 3 × 3 matrix and t as a 3 × 1 matrix.
In one embodiment, the parameters of the image acquisition device further comprise camera intrinsic parameters. The intrinsic parameters are related to the characteristics of the camera itself, such as the focal length, the pixel size, and the optical center position. Through the camera intrinsic parameters, the image physical coordinates of the points corresponding to the surface feature points of the spatial object in the camera coordinate system can be converted into texture coordinates in the image pixel coordinate system.
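A minimal sketch of this projection pipeline is given below, assuming a standard pinhole model with a 4 × 4 extrinsic matrix and a 3 × 3 intrinsic matrix; the names `M_ext` and `K_int` are illustrative assumptions, not taken from the disclosure.

```python
import numpy as np

def project_point(P_world, M_ext, K_int):
    """Project a 3D point into pixel (texture) coordinates.

    P_world: (3,) 3D point in the world/scene coordinate system
    M_ext:   (4, 4) extrinsic matrix [[R, t], [0, 1]] of the image acquisition device
    K_int:   (3, 3) intrinsic matrix (focal length, pixel size, optical centre)
    """
    P_h = np.append(P_world, 1.0)          # homogeneous coordinates (x, y, z, 1)
    P_cam = M_ext @ P_h                    # linear transform by the extrinsics, as in (1)
    x, y, z = P_cam[:3]
    p_img = K_int @ np.array([x / z, y / z, 1.0])   # pinhole projection + intrinsics
    return p_img[:2]                       # pixel (texture) coordinates (u, v)
```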
By the image processing method in the above embodiment, a three-dimensional model of an object may be established by using an image acquisition device, and a mapping relationship between the three-dimensional model and a texture image of the object may be established based on parameters of the image acquisition device.
In the embodiment of the present invention, when a patch of the surface of an object is photographed by using a camera, since texture images photographed at respective photographing view angles may overlap, in order to obtain a texture image with low noise and high quality, texture maps photographed at respective photographing view angles need to be selected.
In one embodiment, if the selected shooting angle of view satisfies a certain angle constraint condition, a clearer image of the surface patch of the object surface can be obtained, so that a clearer texture of the surface patch of the object surface can be obtained.
In one embodiment, the shooting angle of view satisfying the angle limitation condition may be used as the first angle of view for the camera to shoot the patch of the object surface.
As an example, the angle limiting condition may be that an angle between a connecting line of the camera and the shooting viewpoint and a normal vector of a patch where the viewpoint is located is smaller than a preset angle threshold, for example, 45 ° or 30 °. The smaller the preset included angle threshold value is, the higher the quality of the shot image and the shot texture is.
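A minimal sketch of such an angle check, assuming direction vectors of arbitrary length and a configurable threshold:

```python
import numpy as np

def satisfies_angle_condition(patch_normal, incident_ray, max_angle_deg=45.0):
    """Return True when the angle between the camera's incident ray and the patch
    normal is below the preset threshold, i.e. the view qualifies for selection.

    patch_normal: (3,) normal vector of the patch
    incident_ray: (3,) direction of the camera's incident ray at the shooting viewpoint
    """
    n = patch_normal / np.linalg.norm(patch_normal)
    r = incident_ray / np.linalg.norm(incident_ray)
    angle = np.degrees(np.arccos(np.clip(np.dot(n, r), -1.0, 1.0)))
    return angle <= max_angle_deg
```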
Because the object images captured by the camera involve different shooting positions, different shooting view angles, and color differences between images taken at different view angles, obtaining a good mosaic requires analysing the shooting view angles of the unprocessed original texture images, balancing the color differences of the texture images corresponding to different view angles, and selecting the seam regions. These operations are difficult to solve with traditional two-dimensional image stitching techniques, completing them manually is time-consuming, and the accuracy is difficult to guarantee.
FIG. 2 is a diagram illustrating the effect of texture mapping using fully automatic texture mapping technique in a consumer-grade scanner in the prior art. As shown in fig. 2, an object is taken as an example of a doll, and a three-dimensional texture image obtained by scanning the doll with a consumer-grade three-dimensional scanner commonly available in the market is shown.
In fig. 2, the texture image region P1, the texture image region P2, the texture image region P3, and the texture image region P4 respectively represent: and the texture image area of the object is shot by utilizing the image scanning equipment of the consumer-grade three-dimensional scanner under the 1 st shooting visual angle, the 2 nd shooting visual angle, the 3 rd shooting visual angle and the 4 th shooting visual angle respectively.
The texture image region P1-1, the texture image region P2-1, the texture image region P3-1, and the texture image region P4-1 respectively show a schematic enlarged region view of the texture image region P1, a schematic enlarged region view of the texture image region P2, a schematic enlarged region view of the texture image region P3, and a schematic enlarged region view of the texture image region P4.
As can be seen from fig. 2, most consumer-grade three-dimensional scanners on the market can only restore the object structure to a certain extent due to the limitation of the consumer-grade three-dimensional scanner device, and cannot obtain complete and accurate 3D texture information. In the consumer-grade device, various errors or mistakes, such as calibration errors, three-dimensional depth estimation errors, registration errors and the like, may exist in the modeling process, so that the three-dimensional texture map obtained by using the consumer-grade three-dimensional scanner device has the problems of misalignment, blurring, artifacts and the like as shown in the texture image region P1, the texture image region P2, the texture image region P3 and the texture image region P4, and the problems seriously affect the final representation effect of the three-dimensional texture.
Therefore, embodiments of the present invention provide an image processing method, apparatus, system and storage medium, which can improve the anti-noise capability and precision of three-dimensional texture mapping and obtain a more ideal texture mapping effect.
With reference to fig. 3a and 3b, and taking a doll as the specific object, the following describes in detail how, in the process of texture mapping the three-dimensional model of the object with the image processing method according to an embodiment of the present invention, sharpness selection and uniformity selection are performed on the texture image regions obtained by the image acquisition device.
Fig. 3a is a process diagram illustrating an image processing method according to an embodiment of the present invention. The processing procedure of the image processing method according to the embodiment of the present invention is illustrated below with reference to fig. 3a.
In this exemplary embodiment, the image processing method may include the steps of:
as shown in step S11 in fig. 3a, a three-dimensional model of a sculpture is constructed.
In one embodiment, a particular doll object may optionally be three-dimensionally modeled using modeling software according to the method of constructing a three-dimensional model of an object in the above-described embodiments of the invention.
As an example, the doll may be modeled three-dimensionally using modeling software such as the three-dimensional model design software meshimixer, resulting in a three-dimensional model of the doll. In this example, the three-dimensional model of the doll may be a polygonal mesh model.
As shown in step S12 of fig. 3a, a photographing perspective for the three-dimensional model of the doll is determined, and, according to the selected first perspective, a texture image subset corresponding to the first perspective is determined from the texture images.
In one embodiment, the first perspective for the three-dimensional model may be selected by the following steps.
And step S12-01, calculating normal vectors of triangular patches in the three-dimensional model.
In the embodiments of the present invention, the normal is a line perpendicular to a triangular patch of the three-dimensional model; since the normal is a vector, it is referred to as the normal vector for short. The normal vector of a triangular patch can be obtained by taking the outer product of two edge vectors of the triangular patch.
In one embodiment, the three vertices of each triangular patch are denoted pt1, pt2 and pt3, respectively, and the two vectors of the triangular patch are the connecting vector between point pt1 and point pt2 and the connecting vector between point pt2 and point pt3.
In this embodiment, the normal vector of a triangular patch in the three-dimensional model may be represented as the outer product of the connecting vector between point pt1 and point pt2 and the connecting vector between point pt2 and point pt3. As an example, the normal vector of the triangular patch in the three-dimensional model can be calculated by the following expression (2):
Normal_face = Vect_{pt1→pt2} × Vect_{pt2→pt3}    (2)

In the above formula (2), Vect_{pt1→pt2} represents the connecting vector between point pt1 and point pt2, Vect_{pt2→pt3} represents the connecting vector between point pt2 and point pt3, and Vect_{pt1→pt2} × Vect_{pt2→pt3} denotes the outer product of Vect_{pt1→pt2} and Vect_{pt2→pt3}, which yields a vector perpendicular to both vectors, i.e., the normal vector Normal_face of the triangular patch.
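A minimal sketch of expression (2), assuming NumPy arrays for the vertex coordinates:

```python
import numpy as np

def triangle_normal(pt1, pt2, pt3):
    """Normal vector of a triangular patch, computed as in expression (2):
    the outer product of the pt1->pt2 and pt2->pt3 edge vectors."""
    v12 = np.asarray(pt2) - np.asarray(pt1)   # Vect_{pt1->pt2}
    v23 = np.asarray(pt3) - np.asarray(pt2)   # Vect_{pt2->pt3}
    n = np.cross(v12, v23)                    # perpendicular to both edge vectors
    return n / np.linalg.norm(n)              # normalise to a unit vector
```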
Step S12-02, a first angle of view when a triangular patch of the three-dimensional model is photographed is determined.
In one embodiment, when the image acquisition device faces the front of a triangular patch of the three-dimensional model, the texture image of that patch captured from this camera position is considered to be sharper.
In another embodiment, for a triangular patch of the three-dimensional model, the normal vector of the triangular patch may be taken as the first normal vector; and acquiring the current position of the camera, and when an included angle between an incident ray of the camera at the current position and the first normal vector meets an included angle limiting condition, taking the included angle meeting the included angle limiting condition as a first visual angle of the three-dimensional model.
In this step, the photographing view angle determined for the three-dimensional model of the doll may be used as the first view angle for the doll model; a first view angle satisfying the angle limitation condition is obtained by the method of selecting a first view angle for the three-dimensional model described in the above embodiments.
As shown in fig. 3a, when the angle between an incident ray of the image capturing device (not shown in the figure) and the normal vector of a polygonal patch of the three-dimensional model satisfies the angle limitation condition in an embodiment of the present invention, that angle is selected as the shooting view angle for the three-dimensional model of the doll.
As an example, the angle limitation condition is that the angle between the incident ray of the camera at its current position and the first normal vector is less than or equal to 45°. In this example, when the camera shooting position directly faces the triangular patch of the three-dimensional model, the first view angle for the three-dimensional model is 0°.
As one example, the first view angle may be any shooting view angle of 45 degrees or less. In this example, when the first view angle is a shooting view angle of 45 degrees or less, the image acquisition device is considered to be photographing the object head-on; when the shooting view angle is larger than 45 degrees, the image acquisition device is not considered to be photographing the object head-on, the image textures included in the acquired texture image subset of the object are then unclear, and the image quality is low.
And step S12-03, according to the selected first view angle, determining a texture image subset corresponding to the first view angle in the texture image.
In the embodiment of the present invention, in order to obtain a sharp texture image of the triangle patch, the texture image corresponding to the first view may be used as the texture image subset.
In this step, according to the images included in the texture image subsets, the boundary of the images included in each texture image subset may be determined, so as to determine an overlapping region between the images included in the texture image subsets, and the overlapping region may be used as a stitching region between the images included in the texture image subsets.
As an example, the stitching region between the images comprised by the texture image subset may be determined by a markov random field algorithm.
As an example, the method of image segmentation in the embodiment of the present invention may be one or more of a region growing Algorithm, a Flood Fill Algorithm (Flood Fill Algorithm), and a Graph cut Algorithm (Graph Cuts Algorithm).
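As a rough sketch of one of the listed options, a basic 4-connected flood fill could look as follows; the similarity test `is_similar` is an illustrative placeholder for whatever per-pixel criterion the segmentation uses.

```python
from collections import deque

def flood_fill(labels, start, new_label, is_similar):
    """Minimal 4-connected flood fill over a 2D grid of region labels.

    labels:     mutable 2D list of region labels (e.g. per-pixel view indices)
    start:      (row, col) seed position
    new_label:  label to assign to the grown region
    is_similar: callable (row, col) -> bool deciding region membership
    """
    rows, cols = len(labels), len(labels[0])
    seed_label = labels[start[0]][start[1]]
    if new_label == seed_label:
        return labels                     # nothing to do; also prevents endless revisits
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if not (0 <= r < rows and 0 <= c < cols):
            continue
        if labels[r][c] != seed_label or not is_similar(r, c):
            continue
        labels[r][c] = new_label
        queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return labels
```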
As shown in step S13 in fig. 3a, a texture error correction function for the texture image subset is constructed using the Markov random field method, and texture error correction is performed on the texture images obtained by the image acquisition device.
In an embodiment, the step of performing texture error correction on the texture image may specifically include the following steps:
Step S13-01, constructing an image sharpness function of the texture image subset, where the image sharpness function is used to select the first view angle according to the sharpness of the images included in the texture image subset.
In the embodiment of the present invention, when the shooting view angle for the three-dimensional model satisfies the angle limitation condition, the texture image acquired from that camera position is considered to have better sharpness.
Therefore, a sharpness function of the images included in the texture image subset can be constructed; the sharpness function describes the relationship between the sharpness of the images included in the texture image subset and the shooting view angle, so that the first view angle is selected according to the sharpness of those images.
In step S13-02, an image uniformity function for the texture image subset is constructed, where the image uniformity function is used to describe how uniform the images in the overlapping region are based on the color difference of the overlapping region between the images included in the texture image subset.
In the embodiment of the present invention, in general, in an image included in the texture image subset, a high-frequency detail region of the image may be understood as a region in the image where brightness or gray scale change is severe, such as an edge or a contour in the image; the low-frequency detail region of the image can be understood as a region with gentle brightness or gray scale change in the image, such as a large patch region in the image.
Since the human eye is more sensitive to high frequency detail regions in the image, if a seam is generated in the high frequency detail region, such as an edge or contour region in the image, discontinuous textures are likely to appear on adjacent texture polygon patches in the image, which causes a seam dislocation problem.
Therefore, when the image is segmented according to the selected first view angle, the selection of seams between the different view angles after segmentation must be considered: reasonable segmentation paths are generated that avoid high-frequency detail regions of the image as much as possible, so that seams are placed where the image content is uniform and the misalignment problems caused by direct camera projection are avoided.
In the embodiment of the present invention, the adjacent texture polygon patches may be understood as texture polygons containing common edges in the designated image area. In the following description of embodiments, adjacent texture polygon patches may also be referred to as co-edge texture polygon patches.
When the original texture image is segmented according to the shooting visual angle, the overlapped area between the images included in the texture image subset can be used as a splicing area, and the smaller the color difference between the polygon facets of the common-edge texture in the splicing area is, the better the uniformity of the texture image obtained by the shooting visual angle is considered.
In the embodiment of the invention, the color difference between the polygon patches of the common edge texture can be measured by the color distance. Color distance refers to the difference between two colors, typically the greater the color distance, the greater the color difference.
In one embodiment, a uniformity function for a stitching region between images comprised by the texture image subset may be constructed, which may be represented by a color difference between co-edge texture polygon patches in the stitching region, i.e. a color difference between sample points of one of the polygon patches and sample points of another of the polygon patches of the co-edge texture polygon patches in the stitching region.
In one embodiment, the difference in color magnitude between two sample points may be described by a distance metric algorithm.
For example, in one embodiment, the color difference between sample point l_i and sample point l_j can be represented by the Euclidean distance between the colors of the two sample points in a color space such as RGB space; in another embodiment, the color difference between sample point l_i and sample point l_j can also be represented by a Hash distance between the color feature values of the two sample points.
As an example, in a stitching region between images included in the texture image subset, the image uniformity function between co-edge texture polygon patches may be expressed as the following expression (3):

D(l_i, l_j) = d(C_{l_i} − C_{l_j}),  l_i = l_j    (3)

where l_i represents a sample point of one polygon patch of a pair of co-edge texture polygon patches, l_j represents a sample point of the other polygon patch, D(l_i, l_j) is the image uniformity function, C_{l_i} represents the color value of sample point l_i, C_{l_j} represents the color value of sample point l_j, and d(C_{l_i} − C_{l_j}) represents the color difference between sample point l_i and sample point l_j, expressed as the Euclidean distance between their colors in the color space.
In the above expression (3), l_i = l_j indicates that sample point l_i and sample point l_j are the same sample point, namely a common point of the co-edge texture polygon patches in the stitching region.
In one embodiment, taking the Euclidean color distance between sample point l_i and sample point l_j as an example, the Euclidean distance between their colors in RGB space can be expressed as the following expression (4):

d(C_{l_i} − C_{l_j}) = sqrt( (R_{l_i} − R_{l_j})² + (G_{l_i} − G_{l_j})² + (B_{l_i} − B_{l_j})² )    (4)

where R_{l_i} − R_{l_j} represents the difference between the value of sample point l_i in the R channel of RGB space and the value of sample point l_j in the R channel; G_{l_i} − G_{l_j} represents the difference between the values of sample points l_i and l_j in the G channel; and B_{l_i} − B_{l_j} represents the difference between the values of sample points l_i and l_j in the B channel.
The difference in texture color values between two co-edge polygon patches in the stitching region is obtained by calculating expression (4). As an example, in RGB color space, if the Euclidean distance between two sample points is less than 25, the two sample points can be considered to belong to the same color; the color difference is small, and the selected seam position can be considered to have good uniformity.
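A minimal sketch of expression (4) and the example threshold of 25, assuming colors given as (R, G, B) triples:

```python
import math

def rgb_distance(c1, c2):
    """Euclidean color distance in RGB space, as in expression (4)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

def same_color(c1, c2, threshold=25.0):
    """Heuristic from the example above: distances below 25 are treated as the
    same color, i.e. a seam placed here has good uniformity."""
    return rgb_distance(c1, c2) < threshold
```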
Step S13-03, performing texture stitching on the stitching regions between the images included in the texture image subset based on the image sharpness function and the image uniformity function, to obtain a stitched texture image corresponding to the first view angle.
In one embodiment, an object function for texture stitching between images included in the texture image subset may be constructed by using an image sharpness function and an image uniformity function, and the object function may select a stitching region between the images included in the texture image subset by using a minimum value of a difference value of image sharpness variation and a difference value of color variation between two sampling points between the images included in the texture image subset.
In one embodiment, the constructed objective function for selecting the splicing region between the images included in the texture image subset can be represented by the following expression (5):
E = min(E_data + E_smooth)    (5)

In the above expression (5), E_data represents the sharpness function of the images included in the texture image subset, E_smooth represents the uniformity function of the images included in the texture image subset, and min(E_data + E_smooth) represents the minimum of the sum of the sharpness-variation term and the color-variation term between two sample points of the images included in the texture image subset.
In this embodiment, E_data and E_smooth are used to construct an objective function for texture stitching over the texture image subset; solving this objective function yields a stitched image, composed from the texture image subset, with higher sharpness and uniformity.
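A rough sketch of evaluating expression (5) over a small set of candidate seam labelings follows; the embodiment formulates this as a Markov random field, which in practice is typically solved with graph cuts rather than the brute-force minimum shown here, so the cost callables are illustrative placeholders.

```python
def seam_energy(labeling, data_cost, smooth_cost, neighbors):
    """Total energy E = E_data + E_smooth for one candidate labeling.

    labeling:    dict mapping each site (patch/pixel) to the view it takes texture from
    data_cost:   data_cost(site, label)    -> sharpness penalty of that view for that site
    smooth_cost: smooth_cost(la, lb, a, b) -> color-difference penalty across a seam
    neighbors:   iterable of (a, b) pairs of adjacent sites
    """
    e_data = sum(data_cost(site, label) for site, label in labeling.items())
    e_smooth = sum(smooth_cost(labeling[a], labeling[b], a, b) for a, b in neighbors)
    return e_data + e_smooth

def best_labeling(candidates, data_cost, smooth_cost, neighbors):
    """Pick the candidate labeling with minimum energy, i.e. min(E_data + E_smooth)."""
    return min(candidates,
               key=lambda lab: seam_energy(lab, data_cost, smooth_cost, neighbors))
```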
With continued reference to fig. 3a, a neural network that fuses images included in the texture image subset is constructed, and the neural network is trained according to a shooting angle of view and an area of a splicing region, so that a texture image with higher definition and integrity is obtained.
Fig. 3b is a schematic diagram illustrating the effect comparison of texture images obtained by the image processing method in the embodiment of the present invention. As shown in fig. 3b, the image processing method according to the embodiment of the present invention can effectively bypass the high-frequency detail region at the position of the seam, and select a texture region with low frequency to splice the texture images, thereby avoiding the occurrence of seam dislocation, and obtaining a more accurate texture map.
In the embodiment of the invention, the shooting visual angle for the three-dimensional model can be selected, and the texture image is segmented and texture fused based on the selected shooting visual angle, so that the spliced texture image obtained by the texture fusion method of the embodiment can effectively bypass the high-frequency detail region at the position of the seam, and the texture image is spliced in the low-frequency texture region, thereby avoiding the generation of seam dislocation, obtaining a high-quality texture map, improving the anti-noise capability of the three-dimensional texture map and the precision of the three-dimensional texture map, and greatly saving the labor cost.
In the embodiment of the present invention, after the stitched texture image corresponding to the first view is obtained, the stitched texture image corresponding to the first view may be mapped to the three-dimensional model by using a mapping relationship between the three-dimensional model and the texture image.
The stitched texture image is mapped to the corresponding position of the three-dimensional model through the mapping relationship between the three-dimensional model and the texture image, thereby realizing three-dimensional texture mapping of the three-dimensional model.
In one embodiment, in order to obtain a complete texture model of the three-dimensional model, the first view angle is iteratively calculated for multiple times during the process of mapping the stitched texture image to the three-dimensional model, so as to obtain multiple stitched texture images until the complete texture model of the three-dimensional model is obtained.
In practical application scenarios, due to the limitation of the device itself, texture errors exist in the texture image of the object acquired by using the three-dimensional scanning device. The main sources of texture errors may include: the calibration process of the image acquisition equipment causes inaccurate camera external parameters, point cloud registration errors caused by the camera external parameters and inaccurate point cloud depth calculation.
As one example, the point cloud registration process refers to calculating a rotational-translation matrix (camera extrinsic parameters) between two point clouds to transform a source point cloud (source cloud) into the same coordinate system as a target cloud (target cloud) through a rigid transformation or an euclidean transformation of the rotational-translation matrix.
As an example, the image depth calculation process refers to a process of calculating depth values of pixel points in a texture image of an object, and the depth values of the pixel points in the texture image may be used to measure color resolution of the image, that is, determine a possible color number of each pixel in a color image or a possible gray scale number of each pixel in a gray scale image, so that the image depth determines the maximum color number that may appear in the color image and the maximum gray scale number in the gray scale image, and represents color definition of the image.
Therefore, the process of stitching the texture image needs to consider how to eliminate the texture problem caused by the texture error source, that is, error correction needs to be performed on the camera external parameters, the point cloud registration process and the point cloud depth calculation process.
The following describes, by way of specific embodiments, a method for correcting a texture error of a stitched texture image corresponding to a first view angle, which is caused by the texture error source according to an embodiment of the present invention.
In the embodiment of the invention, the texture error of the spliced texture image can be corrected by utilizing the constructed texture error correction model of the spliced texture image.
In an embodiment, the modifying the texture error of the stitched texture image by using the texture error modification model may specifically include the following steps:
step S21, performing coordinate transformation on the three-dimensional coordinates of the object in the real environment by using the external parameters of the image capturing device and the internal parameters of the image capturing device, to obtain texture coordinates of corresponding texture pixels in the stitched texture image of the object.
In this step, when coordinate conversion is performed on the three-dimensional point coordinates of the object in the real environment, the three-dimensional coordinates of the three-dimensional modeling points may be converted into the image coordinates of the corresponding texture image points by using the external parameter matrix of the image acquisition device, and the image coordinates of the texture image points may be converted into the texture coordinates of the corresponding texture pixel points by using the internal parameters of the image acquisition device.
It can be seen from the above embodiments that the external parameters of the image acquisition device, as a 4 × 4 matrix, are nonlinear. In order to correct texture image errors caused by inaccurate external parameters resulting from the calibration process, the translation component and the rotation component of the external parameters may be represented as a six-element description vector, and this six-element vector may be solved linearly to obtain corrected, more accurate external parameters of the image acquisition device.
In one embodiment, the translation and rotation components in the external parameters of the image acquisition device are represented as a six-element vector (u_1, u_2, u_3, u_4, u_5, u_6).
As an example, the external parameter matrix of the image acquisition device represented by the six-element vector can be expressed as the following expression (6):

ε = (u_1, u_2, u_3, u_4, u_5, u_6)    (6)

where ε represents the six-element description vector of the camera external parameters, u_1, u_2, u_3 are the position of the camera in three-dimensional space, i.e. the translation component of the camera external parameters, and u_4, u_5, u_6 are the rotation components of the camera about the x, y and z directions in the world coordinate system, respectively.
In one embodiment, a closed curve may be introduced to linearly solve the six-membered vector group of the external parameter matrix of the image acquisition device. For example, a projection curve is constructed by using three-dimensional feature points in a three-dimensional model of the object and two-dimensional image points in a texture image corresponding to a first view angle of the three-dimensional model of the object, linear fitting is performed on the projection curve, the projection curve is solved to obtain a hexahydric group vector of the external parameter matrix, and then the external parameter matrix of the camera is determined according to the hexahydric group vector of the external parameter matrix.
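A minimal sketch of assembling a 4 × 4 extrinsic matrix from such a six-element vector; the x-y-z rotation composition order and radian units are assumptions for illustration, not taken from the disclosure.

```python
import numpy as np

def extrinsics_from_six_vector(eps):
    """Assemble a 4x4 extrinsic matrix from eps = (u1, u2, u3, u4, u5, u6):
    translation (u1..u3) and rotations about x, y, z (u4..u6, radians)."""
    u1, u2, u3, u4, u5, u6 = eps
    cx, sx = np.cos(u4), np.sin(u4)
    cy, sy = np.cos(u5), np.sin(u5)
    cz, sz = np.cos(u6), np.sin(u6)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx                       # rotation matrix built from u4, u5, u6
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = [u1, u2, u3]                # translation component
    return M
```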
In one embodiment, the process of performing coordinate conversion on the three-dimensional point coordinates in step S21 may be represented by using a coordinate conversion sub-function, where the coordinate conversion sub-function is configured to convert the three-dimensional coordinates of the three-dimensional modeling point into the image coordinates of the corresponding texture image point by using the external parameters of the image capturing device, and convert the image coordinates of the texture image point into the texture coordinates of the corresponding texture pixel point by using the internal parameters of the image capturing device.
And step S22, performing texture coordinate correction on the corresponding texture pixel points in the spliced texture image obtained by conversion.
In one embodiment, the texture coordinates of the texture pixel points may be corrected based on the texture coordinates in the standard color palette. Texels can be represented by a two-dimensional array of color values, the elements in the two-dimensional array each having a unique address in the texture. Because the acquired image color is affected by the acquisition environment, the image color of the same object obtained in different acquisition environments is different, and therefore the acquired image needs to be subjected to color correction.
In one embodiment, the coordinate correction process for the texture coordinates of the texture pixel points may be represented by a texture coordinate correction function, and the correction function for the texture coordinates may be represented by a texture coordinate correction curve having a curvature continuity characteristic.
In one embodiment, the texture coordinate correction Curve with curvature continuity characteristic is a Bezier Curve (Bezier Curve), abbreviated as B-spline Curve.
As an example, the B-spline curve can be understood as a parametric curve, the coordinate correction function fitted by the B-spline curve has a better smoothness, and even if an error between a calculated value of texture coordinates of an individual three-dimensional modeling point and an actual value of texture coordinates of the three-dimensional modeling point is large in the process of fitting the correction function of texture coordinates, the influence on the fitted curve is only local and does not affect the whole.
That is to say, the fitting process of the coordinate correction function by using the B-spline curve has good statistical characteristics and filtering effect, so that the pixel value of the texture coordinate position of the three-dimensional modeling point after correction and the pixel value of the texture coordinate position of the corresponding point in the texture image corresponding to the adjacent view angle are more consistent with the requirement of consistency, the adjacent view angle is the shooting view angle of the visible three-dimensional modeling point P, and the adjacent view angle includes the selected first view angle.
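A rough sketch of fitting such a smooth correction curve, assuming a one-dimensional parameterisation of the sample points and SciPy's cubic B-spline routines; the real correction function operates on texture coordinates, so this is illustrative only and the toy residuals below are fabricated for the demonstration.

```python
import numpy as np
from scipy.interpolate import splrep, splev

def fit_correction_curve(s_params, observed_offsets, smoothing=1.0):
    """Fit a smooth (cubic B-spline) correction curve to observed texture-coordinate
    offsets sampled along a 1D parameterisation s_params. The smoothing factor keeps
    individual outliers from distorting the whole curve, reflecting the local-support
    property discussed above."""
    tck = splrep(s_params, observed_offsets, k=3, s=smoothing)   # cubic B-spline fit
    return lambda s: splev(s, tck)                               # callable correction F(s)

# Illustrative use: evaluate the fitted correction at a new parameter value.
s = np.linspace(0.0, 1.0, 20)
offsets = 0.01 * np.sin(2 * np.pi * s) + 0.002 * np.random.randn(20)  # toy residuals
F = fit_correction_curve(s, offsets)
corrected_offset = F(0.37)
```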
And step S23, calculating the average value of the corresponding texture pixel values of the three-dimensional modeling point in the texture image subset associated with each visible view angle used for the three-dimensional modeling point, and taking the calculated average value as the average value of the texture pixel values corresponding to the visible view angle of the three-dimensional modeling point.
In one embodiment, the average of the texel values corresponding to the visible view of the three-dimensional modeled point may be calculated by a color reference sub-function.
In this embodiment, calculating the average of the texel values corresponding to the visible views of the three-dimensional modeled point is linearly solvable.
Step S24, performing consistency analysis on the modified pixel values of the three-dimensional modeling point in the texture image subset associated with the first view angle by using the average value of the texture pixel values corresponding to the visible view angles of the three-dimensional modeling point, so as to perform error correction on the pixel values of the stitched texture image.
In one embodiment, the calculated average value may be used as a color reference criterion of the three-dimensional modeling point, and the consistency of the modified pixel values of the three-dimensional modeling point in the texture image subset associated with the first view angle may be represented by a residual value of the color reference criterion of the three-dimensional modeling point and the modified pixel values of the three-dimensional modeling point in the texture image subset associated with the first view angle.
In this embodiment, the residual value is used to measure the calculated error of the texture pixel value of the corresponding texture image point P' in the texture image subset associated with the view angle of the ith visible three-dimensional modeling point P.
In one embodiment, the consistency of the modified pixel values of the three-dimensional modeling point in the texture image subset associated with the first view angle is analyzed, and the minimum value of the residual values is calculated by using the constructed texture error correction model of the stitched texture image, so as to modify the pixel values of the texture pixel points.
In one embodiment, the texture error correction model of the stitched texture image may be expressed as the following expression (7):

E = min( Σ_{i∈K} Σ_{p∈M_i} || e_{i,p} ||² + E_reg )    (7)

where e_{i,p} can be expressed as the following expression (8):

e_{i,p} = C_p − G(F(u(T × p)))    (8)

In the above expression (8), p is a three-dimensional modeling point in the three-dimensional model of the object; T × p indicates that the three-dimensional coordinates of the three-dimensional modeling point are converted into the image coordinates of the corresponding texture image point p' using the external parameters T of the image acquisition device; u(T × p) indicates that the image coordinates of the corresponding texture image point p' are converted into texture coordinates using the internal parameters u of the image acquisition device; F(u(T × p)) denotes correction of the texture coordinates of the texture image point using the coordinate correction function F; G(F(u(T × p))) denotes the corrected texel value of the corresponding texture image point p' in the i-th view-dependent texture image subset in which the three-dimensional modeling point p is visible; C_p represents the mean of the corrected texel values of the corresponding texture image point p' over the texture image subsets associated with all views in which the three-dimensional modeling point p is visible; and e_{i,p} is the residual of the texel value of the corresponding texture image point p' in the i-th view-dependent texture image subset in which the three-dimensional modeling point p is visible.
In the objective function described in the above expression (7), M_i denotes the three-dimensional modeling points of the object captured at the selected i-th view angle, K denotes the set of shooting view angles in which the three-dimensional modeling points are visible, and E_reg is a regularization term of the objective function, which prevents over-fitting during the solution process and keeps the values of the variables and parameters within a reasonable range.
In one embodiment, because of the non-linearity of the extrinsic parameter matrix T, solving the objective function of expression (7) is a non-linear process. In order to make the objective function linearly solvable, the variables C, T, and F in the objective function are solved one by one, with only one of the variables being solved in each iteration.
In one embodiment, due to the nonlinearity of the extrinsic parameter matrix T, the solution may also be obtained by a nonlinear optimization algorithm according to the error correction function of the texture image shown in expression (6) in the embodiment of the present invention. A nonlinear optimization algorithm can be understood as a function approximation method in which the solving direction of the objective function is determined by derivative operations.
As an example, the nonlinear optimization algorithm may be the Gauss-Newton algorithm or the Levenberg-Marquardt algorithm, abbreviated as the LM algorithm.
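As an illustration of how such a nonlinear least-squares solve might look in practice, the following sketch uses SciPy's least_squares with the Levenberg-Marquardt method on a toy problem. The packing of the unknowns into a single vector and the toy residual (recovering a 2-D coordinate offset standing in for F) are assumptions made purely for illustration; they are not the patent's solver.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy stand-in: recover a 2-D coordinate-correction offset that best aligns
# predicted texture coordinates with observed ones.
observed_uv = np.array([[10.2, 20.1], [30.3, 40.0], [50.1, 60.2]])
predicted_uv = np.array([[10.0, 20.0], [30.0, 40.0], [50.0, 60.0]])

def stacked_residuals(x):
    # x packs the unknowns into one vector; here it is just the 2-D offset.
    return (predicted_uv + x - observed_uv).ravel()

result = least_squares(stacked_residuals, x0=np.zeros(2), method="lm")
print(result.x)  # approximately [0.2, 0.1], the least-squares offset
```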
In the embodiment of the present invention, the three-dimensional structure of an object obtained by scanning with a consumer-grade three-dimensional scanning device has certain errors, such as camera calibration errors, which may cause texture errors, blurring, or artifacts in the texture mapping. The embodiment of the present invention provides a method for correcting such texture errors, so that even for an erroneous three-dimensional structure produced by consumer-grade equipment, the consistency of the pixel values at the texture map coordinate positions corresponding to adjacent view angles of the same three-dimensional modeling point can still be ensured, thereby obtaining an accurate texture image and an accurate texture mapping effect.
Fig. 4 shows a flow chart of an image processing method according to another embodiment of the invention. As shown in fig. 4, the image processing method 400 in the embodiment of the present invention includes the following steps:
step S410, selecting a first perspective for the three-dimensional model of the object, and determining a texture image subset associated with the first perspective in the original texture image of the object.
In one embodiment, step S410 may specifically include:
step S411, a polygonal patch of the three-dimensional model is obtained as a patch to be processed, and a normal vector of the patch to be processed is determined.
Step S412, when an included angle between an incident ray of the image acquisition device and a normal vector of a patch to be processed meets an included angle threshold condition, taking the included angle meeting the included angle threshold condition as a first visual angle of the three-dimensional model.
In step S413, an image corresponding to the first view angle of the image capturing device in the original texture image is used as the texture image subset associated with the first view angle.
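The view-angle test of steps S412 and S413 can be illustrated with a short sketch. This is a hypothetical helper, assuming camera centers and per-patch normals are available as arrays and using an illustrative 75-degree threshold; the patent does not fix these names or the threshold value.

```python
import numpy as np

def view_angle_deg(face_center, face_normal, camera_center):
    """Angle between the incident ray of the capture device and the patch normal."""
    ray = np.asarray(face_center, dtype=float) - np.asarray(camera_center, dtype=float)
    ray = ray / np.linalg.norm(ray)                   # direction of the incident ray
    n = np.asarray(face_normal, dtype=float)
    n = n / np.linalg.norm(n)
    cos_angle = np.clip(np.dot(-ray, n), -1.0, 1.0)   # -ray points from patch toward camera
    return np.degrees(np.arccos(cos_angle))

def select_texture_subset(face_center, face_normal, camera_centers, images,
                          angle_threshold_deg=75.0):
    """Keep the images whose viewing angle meets the included-angle threshold condition."""
    return [img for cam, img in zip(camera_centers, images)
            if view_angle_deg(face_center, face_normal, cam) < angle_threshold_deg]
```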
Step S420, an image definition function of the texture image subset is constructed, and the image definition function is used for selecting the first visual angle according to the image definition of the texture image subset.
In step S430, an image uniformity function of the texture image subset is constructed, where the image uniformity function is used to describe the color differences in the overlapping regions between the images included in the texture image subset, and thereby the image uniformity of those overlapping regions.
In this embodiment, the image sharpness limitation condition is: the included angle between an incident ray of the image acquisition device and the normal vector of a polygonal patch of the three-dimensional model is smaller than an included angle threshold.
In one embodiment, in step S430, the color difference between the images is expressed by the Euclidean distance, in color space, between a sampling point of one polygon patch and the corresponding sampling point of the other polygon patch of a pair of common-edge texture polygon patches of the images.
In one embodiment, the color difference between the images may be calculated by an image uniformity function, and in particular, the calculating the color difference between the images by the image uniformity function may include:
obtaining the polygon patches of the texture image subset as texture polygon patches, and constructing an image uniformity function of the texture image subset by using the texture difference between common-edge texture polygon patches among the texture polygon patches, wherein a common edge exists between the common-edge texture polygon patches.
As an example, the image uniformity function is:

D(l_i, l_j) = d(C_{l_i} − C_{l_j})

wherein l_i represents a sampling point of one polygon patch of a pair of common-edge texture polygon patches, and l_j represents the corresponding sampling point of the other polygon patch of the common-edge texture polygon patches; D(l_i, l_j) is the image uniformity function; C_{l_i} represents the color value of the sampling point l_i, and C_{l_j} represents the color value of the sampling point l_j; d(C_{l_i} − C_{l_j}) is the distance between the sampling points l_i and l_j in color space and characterizes the color difference between the sampling points l_i and l_j.
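A minimal sketch of evaluating this color difference for sampled points along a common edge is given below; summing over the sample pairs as the aggregate uniformity cost is an assumption, as is the use of plain RGB rather than another color space.

```python
import numpy as np

def edge_color_difference(samples_i, samples_j):
    """Color difference between paired sample points l_i and l_j taken along the
    common edge of two co-edge texture polygon patches.

    samples_i, samples_j : (N, 3) color values C_{l_i}, C_{l_j} of corresponding samples
    Returns the summed Euclidean color-space distance (the aggregation is an assumption).
    """
    diff = np.asarray(samples_i, dtype=float) - np.asarray(samples_j, dtype=float)
    return float(np.sum(np.linalg.norm(diff, axis=1)))

# Usage sketch: prefer the candidate patch pair whose shared edge is most uniform.
# best_pair = min(candidate_pairs, key=lambda pair: edge_color_difference(*pair))
```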
Step S440, performing texture splicing on the overlapped area with the minimum color difference to obtain a spliced texture image corresponding to the first visual angle.
According to the image processing method provided by the embodiment of the invention, a reasonable segmentation path can be generated that avoids the high-frequency detail regions of the image as far as possible, so that the seam is placed where the image content is uniform. In this way, the misalignment problem caused by direct camera projection is avoided, the anti-noise capability of the texture image is improved, and a texture image with higher precision is obtained.
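To make the seam-placement idea concrete, the following sketch picks, for each row of an overlap region, the column where the two candidate images differ least in color, so that the resulting seam runs through uniform content. This greedy per-row choice is an illustrative simplification; the patent does not spell out the path-search strategy.

```python
import numpy as np

def greedy_seam_columns(overlap_a, overlap_b):
    """For each row of an overlap region, pick the column where the two candidate
    images differ least in color; stitching along these columns keeps the seam in
    uniform image content and away from high-frequency detail.

    overlap_a, overlap_b : (H, W, 3) overlapping regions of the two images
    Returns H column indices (a greedy seam; not the patent's actual path search).
    """
    a = np.asarray(overlap_a, dtype=float)
    b = np.asarray(overlap_b, dtype=float)
    diff = np.linalg.norm(a - b, axis=2)   # per-pixel color distance
    return np.argmin(diff, axis=1)         # one seam column per row
```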
In one embodiment, the image processing method 400 may further include:
step S450, according to the average value of the texture pixel values associated with the visible visual angles of the three-dimensional modeling points in the three-dimensional model and the error value between the texture pixel values of the texture pixel points corresponding to the three-dimensional modeling points in the stitched texture image associated with the first visual angle, error correction is carried out on the stitched texture image corresponding to the first visual angle, wherein,
the adjacent visual angles are visual angles of the three-dimensional modeling points, and the adjacent visual angles comprise the first visual angle.
In one embodiment, the texture pixel value of the stitched texture image corresponding to the first view may be corrected by the constructed texture error correction function.
In one embodiment, the step of performing error correction on the stitched texture image corresponding to the first view may include:
and S451, correcting the external parameters of the image acquisition equipment, and converting the three-dimensional coordinates of the three-dimensional modeling points in the three-dimensional model into texture coordinates of corresponding texture pixel points by using the corrected external parameters of the image acquisition equipment and the corrected internal parameters of the image acquisition equipment.
And step S452, carrying out coordinate correction on the texture coordinates of the texture pixel points corresponding to the three-dimensional modeling points obtained through conversion to obtain coordinate-corrected texture pixel values corresponding to the three-dimensional modeling points.
Step S453 is to calculate an average value of the texel values of the three-dimensional modeling point corresponding to the texture image subset associated with each visible view angle for the three-dimensional modeling point, and use the calculated average value as the average value of the texel values corresponding to the visible view angles of the three-dimensional modeling point.
Step S454, using the average value of the texture pixel values corresponding to the visible viewing angles of the three-dimensional modeling point, corrects the coordinate-corrected texture pixel values of the three-dimensional modeling point in the texture image subset associated with the first viewing angle, to obtain a corrected stitched texture image associated with the first viewing angle.
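A minimal sketch of steps S453 and S454 is given below: it averages the coordinate-corrected texel values of one modeling point over its visible views and then pulls the first-view value toward that average. The blend parameter is an assumption; the patent text does not fix how strongly the correction is applied.

```python
import numpy as np

def correct_first_view_texel(texels_per_view, first_view_index, blend=1.0):
    """Sketch of steps S453-S454 for one three-dimensional modeling point.

    texels_per_view  : (V, 3) coordinate-corrected texel values, one row per visible view
    first_view_index : index of the first view angle among the visible views
    blend            : 1.0 replaces the first-view texel by the mean; smaller values
                       only pull it toward the mean (the blending rule is an assumption)
    """
    vals = np.asarray(texels_per_view, dtype=float)
    c_mean = vals.mean(axis=0)                                           # step S453
    corrected = (1.0 - blend) * vals[first_view_index] + blend * c_mean  # step S454
    return c_mean, corrected
```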
According to the image processing method provided by the embodiment of the invention, even for an erroneous three-dimensional structure produced by consumer-grade equipment, the consistency of the pixel values at the texture map coordinate positions corresponding to adjacent view angles of the same three-dimensional modeling point can still be ensured, so that an accurate texture image is obtained and an accurate texture mapping effect is achieved.
An image processing apparatus according to an embodiment of the present invention will be described in detail below with reference to the accompanying drawings.
Fig. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention. As shown in fig. 5, the image processing apparatus 500 includes:
a view selection module 510, configured to select a first view angle for the three-dimensional model of the object and determine a texture image subset associated with the first view angle in the original texture image of the object.
In one embodiment, the view selecting module 510 may specifically include:
the normal vector calculation unit is used for acquiring a polygonal surface patch of the three-dimensional model as a surface patch to be processed and determining a normal vector of the surface patch to be processed;
the first visual angle determining unit is used for taking an included angle meeting a threshold condition as a first visual angle of the three-dimensional model when the included angle between an incident ray of the image acquisition equipment and a normal vector of a surface patch to be processed meets the threshold condition;
and the texture image acquisition unit is used for taking an image corresponding to the first view angle of the image acquisition equipment in the original texture image as a texture image subset associated with the first view angle.
And a sharp image selection module 520, configured to select, for the first view, an image in the texture image subset that meets an image sharp limitation condition.
In one embodiment, the image sharpness limitation condition includes: and an included angle between an incident ray of the image acquisition equipment and a normal vector of a polygonal patch of the three-dimensional model is smaller than an included angle threshold value.
A uniformity image selection module 530 for determining an overlap region where the color difference between images satisfying the image sharpness constraint is minimal.
In one embodiment, the color difference between the images is represented by the Euclidean distance of the sampling point of one polygon patch of the common-edge texture polygon patches of the images and the sampling point of the other polygon patch of the common-edge texture polygon patches in the color space.
In one embodiment, the uniform image selection module 530 is specifically configured to:
obtain the polygon patches of the texture image subset as texture polygon patches, and construct an image uniformity function of the texture image subset by using the texture difference between common-edge texture polygon patches among the texture polygon patches, wherein a common edge exists between the common-edge texture polygon patches.
As an example, the image uniformity function is:

D(l_i, l_j) = d(C_{l_i} − C_{l_j})

wherein l_i represents a sampling point of one polygon patch of a pair of common-edge texture polygon patches, and l_j represents the corresponding sampling point of the other polygon patch of the common-edge texture polygon patches; D(l_i, l_j) is the image uniformity function; C_{l_i} represents the color value of the sampling point l_i, and C_{l_j} represents the color value of the sampling point l_j; d(C_{l_i} − C_{l_j}) is the distance between the sampling points l_i and l_j in color space and characterizes the color difference between the sampling points l_i and l_j.
And the texture stitching module 540 is configured to perform texture stitching on the overlapping area with the smallest color difference to obtain a stitched texture image corresponding to the first view angle.
According to the image processing apparatus provided by the embodiment of the invention, a reasonable segmentation path can be generated that avoids the high-frequency detail regions of the image as far as possible, so that the seam is placed where the image content is uniform. In this way, the misalignment problem caused by direct camera projection is avoided, the anti-noise capability of the texture image is improved, and a texture image with higher precision is obtained.
In one embodiment, the image processing apparatus may further include:
a texture error correction module, configured to perform error correction on the stitched texture image corresponding to the first view angle according to an error value between the average value of the texture pixel values associated with the visible view angles of a three-dimensional modeling point in the three-dimensional model and the texture pixel values of the texture pixel points corresponding to the three-dimensional modeling point in the stitched texture image associated with the first view angle, wherein
the adjacent view angles are visible view angles of the three-dimensional modeling point, and the adjacent view angles include the first view angle.
In one embodiment, the texture error correction module comprises:
the coordinate conversion and correction unit is used for correcting the external parameters of the image acquisition equipment and converting the three-dimensional coordinates of the three-dimensional modeling points in the three-dimensional model into texture coordinates of corresponding texture pixel points by using the corrected external parameters of the image acquisition equipment and the corrected internal parameters of the image acquisition equipment;
the texture coordinate correction unit is used for carrying out coordinate correction on texture coordinates of texture pixel points corresponding to the three-dimensional modeling point obtained through conversion to obtain texture pixel values corresponding to the three-dimensional modeling point after coordinate correction;
the pixel value average value calculating unit is used for calculating the average value of the texture pixel values of the three-dimensional modeling point in the texture image subset associated with each visible view angle for the three-dimensional modeling point, and taking the calculated average value as the average value of the texture pixel values corresponding to the visible view angle of the three-dimensional modeling point;
and the texture correction unit is used for correcting the coordinate-corrected texture pixel values of the three-dimensional modeling points in the texture image subset associated with the first view angle by using the average value of the texture pixel values corresponding to the visible view angles of the three-dimensional modeling points to obtain a corrected spliced texture image associated with the first view angle.
In this embodiment, the texture error correction module may be configured to analyze the consistency of the texture pixel values of the three-dimensional modeling point across its visible view angles by using the difference between the average texture pixel value corresponding to the three-dimensional modeling point and the coordinate-corrected texture pixel values of the three-dimensional modeling point in the texture image subset associated with each visible view angle, so as to perform error correction on the stitched texture image.
According to the image processing apparatus provided by the embodiment of the invention, errors, blurring, or artifacts of textures introduced in texture mapping can be corrected, and even for an erroneous three-dimensional structure produced by consumer-grade equipment, the consistency of the pixel values at the texture map coordinate positions corresponding to adjacent view angles of the same three-dimensional modeling point can still be ensured, so that an accurate texture image is obtained and an accurate texture mapping effect is achieved.
In one embodiment, during the error correction of the stitched texture image, each iteration may perform only one of the following: the correction of the external parameters of the image acquisition device in the coordinate conversion and correction unit, the correction of the texture coordinates in the texture coordinate correction unit, and the correction of the texel values in the texture correction unit.
In one embodiment, the image processing apparatus is further configured to:
and in the process of mapping the spliced texture images to the three-dimensional model, iteratively calculating the first visual angle for multiple times to obtain a plurality of spliced texture images until a complete texture model of the three-dimensional model is obtained.
In this embodiment, the texture map subjected to the texture error correction is mapped to the three-dimensional model to obtain a complete texture model of the three-dimensional model.
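The overall loop, stitching one error-corrected texture per selected view angle and mapping it onto the model until the texture is complete, might be organized as in the following sketch; the callable names and the representation of view angles are placeholders, not part of the patent.

```python
def build_complete_texture_model(view_angles, model, stitch_for_view, map_to_model):
    """Iterate over candidate first view angles, stitch an error-corrected texture
    image for each, and map it onto the three-dimensional model until the texture
    model is complete.

    view_angles     : iterable of view angles (placeholder representation)
    stitch_for_view : callable(view, model) -> stitched, error-corrected texture image
    map_to_model    : callable(model, texture) -> model with that texture applied
    """
    for view in view_angles:
        texture = stitch_for_view(view, model)   # selection, stitching and correction per view
        model = map_to_model(model, texture)     # map the corrected texture onto the model
    return model
```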
Other details of the image processing apparatus according to the embodiment of the present invention are similar to the image processing method according to the embodiment of the present invention described above with reference to fig. 1 to 4, and are not repeated herein.
The embodiment of the invention also provides an image scanning device, which comprises a projection device, an image acquisition device, and a processor. The projection device is operable to project a structured-light coded pattern onto an object; the image acquisition device is operable to acquire an original texture image of the object under the projection of the structured-light coded pattern; and the processor may be configured to perform the image processing method described in the above embodiments with reference to fig. 1 to 5.
The image scanning device provided by the embodiment of the invention can improve the anti-noise capability of 3D texture mapping and generate three-dimensional textures with smooth seams.
Fig. 6 is a block diagram illustrating an exemplary hardware architecture of a computing device capable of implementing an image processing method and apparatus according to an embodiment of the present invention.
As shown in fig. 6, computing device 600 includes an input device 601, an input interface 602, a central processor 603, a memory 604, an output interface 605, and an output device 606. The input interface 602, the central processing unit 603, the memory 604, and the output interface 605 are connected to each other via a bus 610, and the input device 601 and the output device 606 are connected to the bus 610 via the input interface 602 and the output interface 605, respectively, and further connected to other components of the computing device 600.
Specifically, the input device 601 receives input information from the outside (for example, an image pickup device), and transmits the input information to the central processor 603 through the input interface 602; the central processor 603 processes input information based on computer-executable instructions stored in the memory 604 to generate output information, stores the output information temporarily or permanently in the memory 604, and then transmits the output information to the output device 606 through the output interface 605; output device 606 outputs output information to the exterior of computing device 600 for use by a user.
That is, the computing device shown in fig. 6 may also be implemented as an image processing system including: a memory storing computer-executable instructions; and a processor which, when executing computer executable instructions, may implement the image processing methods and apparatus described in connection with fig. 1-5. Here, the processor may communicate with the image acquisition device to execute computer-executable instructions based on relevant information from image processing to implement the image processing methods and apparatus described in connection with fig. 1-5.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product or a computer-readable storage medium. The computer program product or computer-readable storage medium includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the invention are produced, in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via a wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) connection. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that incorporates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.
It is to be understood that the invention is not limited to the specific arrangements and instrumentality described above and shown in the drawings. A detailed description of known methods is omitted herein for the sake of brevity. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present invention are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications and additions or change the order between the steps after comprehending the spirit of the present invention.
As described above, only specific embodiments of the present invention are provided, and it can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system, modules, and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. It should be understood that the protection scope of the present invention is not limited thereto; any person skilled in the art can easily conceive of various equivalent modifications or substitutions within the technical scope of the present invention, and these modifications or substitutions shall be covered within the protection scope of the present invention.
Claims (15)
1. An image processing method comprising:
selecting a first perspective for a three-dimensional model of an object, determining a texture image subset associated with the first perspective in an original texture image of the object;
selecting images in the texture image subset which meet an image sharpness limitation condition for the first view;
determining an overlapping area with the minimum color difference between the images meeting the image sharpness limitation condition;
and performing texture splicing on the overlapped area with the minimum color difference to obtain a spliced texture image corresponding to the first visual angle.
2. The image processing method according to claim 1, wherein the image sharpness limitation condition includes:
and an included angle between an incident ray of the image acquisition equipment and a normal vector of a polygonal surface patch of the three-dimensional model is smaller than an included angle threshold value.
3. The image processing method according to claim 1,
the color difference between the images is expressed by the Euclidean distance of the sampling point of one polygon patch of the co-edge texture polygon patches of the images and the sampling point of the other polygon patch of the co-edge texture polygon patches in the color space.
4. The image processing method according to claim 1, further comprising:
performing error correction on a stitched texture image corresponding to a first view angle according to an error value between an average value of texture pixel values associated with visible view angles of a three-dimensional modeling point in the three-dimensional model and texture pixel values of texture pixel points corresponding to the three-dimensional modeling point in the stitched texture image associated with the first view angle, wherein
the adjacent view angles are visible view angles of the three-dimensional modeling point, and the adjacent view angles include the first view angle.
5. The image processing method according to claim 4, wherein the performing error correction on the stitched texture image corresponding to the first view comprises:
correcting external parameters of the image acquisition equipment, and converting three-dimensional coordinates of three-dimensional modeling points in the three-dimensional model into texture coordinates of corresponding texture pixel points by using the corrected external parameters of the image acquisition equipment and the corrected internal parameters of the image acquisition equipment;
carrying out coordinate correction on texture coordinates of texture pixel points corresponding to the three-dimensional modeling point obtained through conversion to obtain texture pixel values corresponding to the three-dimensional modeling point after coordinate correction;
calculating the average value of the corresponding texture pixel values of the three-dimensional modeling point in the texture image subset associated with each visible view angle used for the three-dimensional modeling point, and taking the calculated average value as the average value of the corresponding texture pixel values of the visible view angle of the three-dimensional modeling point;
and correcting the coordinate-corrected texture pixel values of the three-dimensional modeling point in the texture image subset associated with the first view angle by using the average value of the texture pixel values corresponding to the visible view angles of the three-dimensional modeling point to obtain a corrected spliced texture image associated with the first view angle.
6. The image processing method according to claim 1, further comprising:
and in the process of mapping the spliced texture images to the three-dimensional model, iteratively calculating a first visual angle for multiple times to obtain a plurality of spliced texture images until a complete texture model of the three-dimensional model is obtained.
7. An image processing apparatus comprising:
a perspective selection module for selecting a first perspective for a three-dimensional model of an object, determining a subset of texture images in an original texture image of the object associated with the first perspective;
a sharp image selection module, configured to select, for the first view angle, images in the texture image subset that meet an image sharpness limitation condition;
a uniform image selection module, configured to determine an overlapping area with the minimum color difference between the images meeting the image sharpness limitation condition;
and the texture image splicing module is used for performing texture splicing on the overlapped area with the minimum color difference to obtain a spliced texture image corresponding to the first visual angle.
8. The image processing apparatus according to claim 7, wherein the image sharpness limitation condition includes:
and an included angle between an incident ray of the image acquisition equipment and a normal vector of a polygonal surface patch of the three-dimensional model is smaller than an included angle threshold value.
9. The image processing apparatus according to claim 7,
the color difference between the images is expressed by the Euclidean distance of the sampling point of one polygon patch of the co-edge texture polygon patches of the images and the sampling point of the other polygon patch of the co-edge texture polygon patches in the color space.
10. The image processing apparatus according to claim 7, further comprising:
a texture error correction module, configured to perform error correction on a stitched texture image corresponding to a first view angle according to an error value between an average value of texture pixel values associated with visible view angles of a three-dimensional modeling point in the three-dimensional model and texture pixel values of texture pixel points corresponding to the three-dimensional modeling point in the stitched texture image associated with the first view angle, wherein
the adjacent view angles are visible view angles of the three-dimensional modeling point, and the adjacent view angles include the first view angle.
11. The image processing apparatus according to claim 10, wherein the texture error correction module includes:
the coordinate conversion and correction unit is used for correcting the external parameters of the image acquisition equipment and converting the three-dimensional coordinates of the three-dimensional modeling points in the three-dimensional model into texture coordinates of corresponding texture pixel points by using the corrected external parameters of the image acquisition equipment and the corrected internal parameters of the image acquisition equipment;
the texture coordinate correction unit is used for carrying out coordinate correction on texture coordinates of texture pixel points corresponding to the three-dimensional modeling point obtained through conversion to obtain texture pixel values corresponding to the three-dimensional modeling point after coordinate correction;
the pixel value mean value calculating unit is used for calculating the mean value of the corresponding texture pixel values of the three-dimensional modeling point in the texture image subset associated with each visible view angle used for the three-dimensional modeling point, and taking the calculated mean value as the mean value of the texture pixel values corresponding to the visible view angle of the three-dimensional modeling point;
and the texture correction unit is used for correcting the coordinate-corrected texture pixel value of the three-dimensional modeling point in the texture image subset associated with the first view angle by using the average value of the texture pixel values corresponding to the visible view angles of the three-dimensional modeling point, so as to obtain a corrected spliced texture image associated with the first view angle.
12. The image processing apparatus according to claim 7, wherein the image processing apparatus is further configured to:
and in the process of mapping the spliced texture images to the three-dimensional model, iteratively calculating a first visual angle for multiple times to obtain a plurality of spliced texture images until a complete texture model of the three-dimensional model is obtained.
13. An image scanning device, comprising a projection device, an image acquisition device, and a processor;
the projection device is configured to project a structured-light coded pattern onto an object;
the image acquisition device is configured to acquire an original texture image of the object under the projection of the structured-light coded pattern; and
the processor is configured to perform the image processing method of any one of claims 1 to 6.
14. An image processing system comprising a memory and a processor;
the memory is used for storing executable program codes;
the processor is configured to read executable program code stored in the memory to perform the image processing method of any one of claims 1 to 6.
15. A computer-readable storage medium, comprising instructions which, when executed on a computer, cause the computer to perform the image processing method according to any one of claims 1 to 6.