
CN113362440B - Material map acquisition method and device, electronic equipment and storage medium - Google Patents

Material map acquisition method and device, electronic equipment and storage medium

Info

Publication number
CN113362440B
CN113362440B (application number CN202110729001.XA)
Authority
CN
China
Prior art keywords
images
mapping
group
groups
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110729001.XA
Other languages
Chinese (zh)
Other versions
CN113362440A
Inventor
陈君玺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Digital Sky Technology Co ltd
Original Assignee
Chengdu Digital Sky Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Digital Sky Technology Co ltd filed Critical Chengdu Digital Sky Technology Co ltd
Priority to CN202110729001.XA priority Critical patent/CN113362440B/en
Publication of CN113362440A publication Critical patent/CN113362440A/en
Application granted granted Critical
Publication of CN113362440B publication Critical patent/CN113362440B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application provides a material map acquisition method and apparatus, an electronic device, and a storage medium, wherein the method comprises the following steps: acquiring a plurality of groups of region images of different regions of a target object, wherein each group of region images in the plurality of groups is obtained by shooting the same region of the target object under different light source directions; performing mapping calculation on each group of region images in the plurality of groups to obtain the color maps and normal maps corresponding to the plurality of groups of region images; and performing projection transformation and seam fusion on the color maps and normal maps corresponding to the plurality of groups of region images, respectively, to obtain the material map of the target object. In this implementation, mapping calculation is performed on each group of region images, and projection transformation and seam fusion are performed on the corresponding color and normal maps, so that a material map capable of conveying the realism of the planar material is obtained, effectively increasing that realism.

Description

Material map acquisition method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the technical field of computer data processing and image processing, and in particular, to a method and apparatus for obtaining a texture map, an electronic device, and a storage medium.
Background
Texture mapping is used in computer graphics to wrap a bitmap stored in memory onto the surface of a 3D rendered object, giving the object rich detail and simulating a complex appearance in a simple way. An image (the texture) is pasted (mapped) onto a simple shape in the scene, just as a print is pasted onto a plane, which greatly reduces the computation needed to model shape and texture in the scene.
At present, in game, animation, and video rendering, works of art with planar materials such as painting and calligraphy are usually scanned and rendered so that their color details convey realism. However, in related digital-museum applications, for artworks or cultural relics with planar materials such as line engraving, relief, interior engraving, and intaglio, it is difficult to express the concave-convex realism of the planar material through color details alone.
Disclosure of Invention
An object of the embodiments of the present application is to provide a method, an apparatus, an electronic device, and a storage medium for obtaining a texture map, which are used for improving the problem that it is difficult to represent the concave-convex realism of a planar texture through color details.
The embodiment of the application provides a material map acquisition method, which comprises the following steps: acquiring a plurality of groups of area images of different areas of a target object, wherein each group of area images in the plurality of groups of area images are obtained by shooting the same area of the target object in different light source directions; performing mapping calculation on each group of region images in the plurality of groups of region images to obtain color mapping and normal mapping corresponding to the plurality of groups of region images; and respectively carrying out projection transformation and seam fusion on the color maps and the normal maps corresponding to the multiple groups of regional images to obtain the material maps of the target objects. In the implementation process, the mapping calculation is performed on each group of region images in the plurality of groups of region images, and the projection transformation and the seam fusion are respectively performed on the color mapping and the normal mapping corresponding to each group of region images, so that the material mapping capable of showing the sense of reality of the plane material is obtained, and the sense of reality of the plane material is effectively increased.
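Taken together, the three claimed steps form a simple pipeline. The sketch below stands in synthetic data and placeholder map computations for the real capture, photometric, and stitching stages; every function name here is an illustrative assumption, not from the patent:

```python
import numpy as np

def acquire_region_sets():
    """Step 1 stand-in: two regions, each with four single-light-source
    shots and one ring-light shot (synthetic 8x8 grayscale images)."""
    rng = np.random.default_rng(0)
    return [([rng.random((8, 8)) for _ in range(4)], rng.random((8, 8)))
            for _ in range(2)]

def compute_region_maps(region_sets):
    """Step 2: the ring-light image serves directly as the color map; the
    normal map would come from the single-light shots (flat stand-in here)."""
    maps = []
    for singles, ring in region_sets:
        normal = np.zeros(ring.shape + (3,))
        normal[..., 2] = 1.0          # flat +z normal as a placeholder
        maps.append((ring, normal))
    return maps

def stitch_maps(maps):
    """Step 3 stand-in: concatenate regions side by side instead of the
    patent's homography-based projection transformation and seam fusion."""
    color = np.concatenate([c for c, _ in maps], axis=1)
    normal = np.concatenate([n for _, n in maps], axis=1)
    return color, normal

color, normal = stitch_maps(compute_region_maps(acquire_region_sets()))
print(color.shape, normal.shape)  # (8, 16) (8, 16, 3)
```

The real stages are detailed in steps S110 to S121 below.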
Optionally, in an embodiment of the present application, acquiring multiple sets of area images of different areas of the target object includes: shooting the same area of the target object in the directions of a plurality of single light sources to obtain a plurality of single light source images; shooting the same area of the target object in the direction of an annular light source to obtain an annular light source image; a plurality of single light source images and a ring light source image are determined as a set of area images. In the implementation process, the plurality of single light source images for calculating the normal map and the annular light source image for calculating the color map are acquired, and the plurality of single light source images and the annular light source image are grouped into a group of area images, so that the problem of disorder of grouping of the area images is avoided, and the efficiency of map calculation is effectively improved.
Optionally, in an embodiment of the present application, performing mapping calculation on each set of area images in the sets of area images to obtain a color mapping and a normal mapping corresponding to the sets of area images, including: performing mapping calculation on a plurality of single light source images corresponding to each group of region images in a plurality of groups of region images to obtain a normal mapping corresponding to each group of region images; and determining the annular light source image corresponding to each group of region images in the plurality of groups of region images as a color map corresponding to each group of region images. In the implementation process, mapping calculation is performed on the plurality of single light source images corresponding to each group of region images in the plurality of groups of region images, so that the normal mapping corresponding to each group of region images is obtained, a calculation basis is provided for material mapping calculation, and the sense of realism of the plane material is effectively increased.
Optionally, in an embodiment of the present application, performing mapping calculation on a plurality of single light source images corresponding to each group of area images in a plurality of groups of area images to obtain a normal map corresponding to each group of area images, including: normalizing each single light source image in each group of area images to obtain a plurality of normalized images; carrying out gradient calculation on the plurality of normalized images to obtain gradient images corresponding to each group of regional images; and performing scale transformation on the gradient images corresponding to each group of region images to obtain the normal map corresponding to each group of region images. In the implementation process, mapping calculation is performed on the plurality of single light source images corresponding to each group of region images in the plurality of groups of region images, so that the normal mapping corresponding to each group of region images is obtained, a calculation basis is provided for material mapping calculation, and the sense of realism of the plane material is effectively increased.
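The final scale transformation, from per-pixel gradients to an 8-bit tangent-space normal map, can be sketched as follows. The (-dx, -dy, 1) normal convention and the [-1, 1] to [0, 255] remapping are standard normal-map practice and an assumption here, not taken verbatim from the patent:

```python
import numpy as np

def gradients_to_normal_map(dx, dy):
    """Scale per-pixel surface gradients into an 8-bit normal map.

    dx, dy: float arrays of x/y gradients per pixel (assumed to come from
    the gradient-calculation step). The unit normal (-dx, -dy, 1)/|.| is
    remapped from [-1, 1] to [0, 255], as in standard normal maps.
    """
    dz = np.ones_like(dx)
    n = np.stack([-dx, -dy, dz], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    return np.round((n * 0.5 + 0.5) * 255).astype(np.uint8)

flat = gradients_to_normal_map(np.zeros((2, 2)), np.zeros((2, 2)))
print(flat[0, 0])  # a flat surface gives the familiar (128, 128, 255) blue
```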
Optionally, in the embodiment of the present application, performing projection transformation and seam fusion on the color maps and normal maps corresponding to the plurality of groups of region images to obtain the material map of the target object includes: filtering the color map corresponding to each group of region images and detecting feature points in it, obtaining the feature points in the color map corresponding to each group of region images; matching the feature points in the color maps corresponding to the groups of region images and performing iterative loop calculation to obtain a homography matrix; and performing projection transformation and seam fusion on the color maps and normal maps corresponding to the plurality of groups of region images according to the homography matrix, respectively, to obtain the material map of the target object. In this implementation, the filtering algorithm displays and highlights the high-frequency feature information in weak-texture images, which alleviates the difficulty of detecting feature points in such images and effectively improves the realism of the material map obtained by projection transformation and seam fusion of the weak-texture images.
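The "matching plus iterative loop calculation" step is typically a RANSAC-style loop around a direct linear transform (DLT) fit of the homography. Below is a minimal sketch of the DLT fit alone, with feature detection (e.g., SIFT/ORB) and the RANSAC loop omitted and the matched point pairs assumed given:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (>= 4 point pairs)
    with the direct linear transform. In a full pipeline the point pairs
    would come from matched feature points, filtered by a RANSAC loop."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # the homography is the null vector of A, i.e. the last right singular vector
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=np.float64))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

# a pure translation by (+10, +5) should be recovered exactly
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
dst = [(x + 10, y + 5) for x, y in src]
H = estimate_homography(src, dst)
print(np.round(H, 6))
```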
Optionally, in the embodiment of the present application, performing projection transformation and seam fusion on the color maps and normal maps corresponding to the plurality of groups of region images according to the homography matrix includes: performing projection transformation on the color map corresponding to each group of region images according to the homography matrix to obtain a panoramic color map, and performing projection transformation on the normal map corresponding to each group of region images according to the homography matrix to obtain a panoramic normal map; and performing seam fusion on the panoramic color map to obtain the color map of the target object, and performing seam fusion on the panoramic normal map to obtain the normal map of the target object. In this implementation, mapping calculation on each group of region images, followed by projection transformation and seam fusion of the corresponding color and normal maps, yields a material map that conveys the realism of the planar material, effectively increasing that realism.
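Seam fusion can be illustrated with a minimal feather blend across the overlap of two adjacent tiles; the homography warp that would precede it in the patent's pipeline is omitted, and all shapes here are illustrative:

```python
import numpy as np

def feather_blend(left, right, overlap):
    """Blend two horizontally adjacent tiles whose last/first `overlap`
    columns cover the same strip of the panorama (a minimal stand-in for
    seam fusion; real stitching would first warp each map into the
    panorama frame with its homography)."""
    w = np.linspace(0.0, 1.0, overlap)            # linear ramp across the seam
    seam = left[:, -overlap:] * (1 - w) + right[:, :overlap] * w
    return np.concatenate([left[:, :-overlap], seam, right[:, overlap:]], axis=1)

a = np.full((2, 6), 10.0)
b = np.full((2, 6), 20.0)
pano = feather_blend(a, b, overlap=3)
print(pano.shape, pano[0])  # width 6 + 6 - 3 = 9; the seam ramps 10 -> 15 -> 20
```

The same weights would be applied to the color map and the normal map so the two stay aligned at the seams.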
Optionally, in an embodiment of the present application, the texture map includes: a front texture map and a back texture map; the material map obtaining method further comprises the following steps: acquiring a front material map and a back material map; and carrying out contour alignment on the front material mapping and the back material mapping to obtain the double-sided mapping of the target object. In the implementation process, the outline alignment is carried out on the front material mapping and the back material mapping to obtain the double-sided mapping of the target object, so that the problem that the outlines of the front material mapping and the back material mapping of the target object are difficult to align is solved, and the scene realism in double-sided rendering of the target object is effectively improved.
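A much-simplified sketch of contour alignment for double-sided mapping, using mirrored binary outline masks and centroid matching; the patent aligns the full contours, so centroid matching is a stand-in assumption here:

```python
import numpy as np

def align_back_to_front(front_mask, back_mask):
    """Mirror the back-side outline mask and shift it so the two outlines'
    centroids coincide (simplified contour alignment)."""
    mirrored = back_mask[:, ::-1]          # the back side is photographed mirrored
    fy, fx = np.argwhere(front_mask).mean(axis=0)
    by, bx = np.argwhere(mirrored).mean(axis=0)
    dy, dx = int(round(fy - by)), int(round(fx - bx))
    aligned = np.roll(np.roll(mirrored, dy, axis=0), dx, axis=1)
    return aligned, (dy, dx)

front = np.zeros((8, 8)); front[2:5, 3:6] = 1   # object outline on the front scan
back = np.zeros((8, 8)); back[4:7, 4:7] = 1     # same outline, mirrored and offset
aligned, shift = align_back_to_front(front, back)
print(shift, bool(np.array_equal(aligned, front)))  # (-2, 2) True
```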
The embodiment of the application also provides a texture map acquisition device, which comprises: the regional image acquisition module is used for acquiring a plurality of groups of regional images of different regions of the target object, and each group of regional images in the plurality of groups of regional images is obtained by shooting the same region of the target object in different light source directions; the regional mapping calculation module is used for mapping calculation of each group of regional images in the plurality of groups of regional images to obtain color mapping and normal mapping corresponding to the plurality of groups of regional images; and the material mapping obtaining module is used for respectively carrying out projection transformation and seam fusion on the color mapping and the normal mapping corresponding to the plurality of groups of regional images to obtain the material mapping of the target object.
Optionally, in an embodiment of the present application, the area image acquisition module includes: the single light source image acquisition module is used for shooting the same area of the target object in a plurality of single light source directions to acquire a plurality of single light source images; the annular light source image acquisition module is used for shooting the same area of the target object in the direction of an annular light source to acquire an annular light source image; and the regional image determining module is used for determining a plurality of single light source images and annular light source images as a group of regional images.
Optionally, in an embodiment of the present application, the area map calculation module includes: the normal map obtaining module is used for carrying out map calculation on a plurality of single light source images corresponding to each group of region images in the plurality of groups of region images to obtain a normal map corresponding to each group of region images; and the color mapping determining module is used for determining the annular light source image corresponding to each group of area images in the plurality of groups of area images as the color mapping corresponding to each group of area images.
Optionally, in an embodiment of the present application, the normal map obtaining module includes: the normalization image obtaining module is used for normalizing each single light source image in each group of area images to obtain a plurality of normalization images; carrying out gradient calculation on the plurality of normalized images to obtain gradient images corresponding to each group of regional images; and performing scale transformation on the gradient images corresponding to each group of region images to obtain the normal map corresponding to each group of region images.
Optionally, in an embodiment of the present application, the texture map obtaining module includes: the filtering feature detection module is used for filtering and feature point detection on the color map corresponding to each group of region images in the plurality of groups of region images to obtain feature points in the color map corresponding to each group of region images; the homography matrix obtaining module is used for matching the characteristic points in the color map corresponding to each group of regional images and carrying out iterative loop calculation to obtain a homography matrix; and the mapping transformation fusion module is used for respectively carrying out projection transformation and joint fusion on the color mapping and the normal mapping corresponding to the plurality of groups of regional images according to the homography matrix to obtain the material mapping of the target object.
Optionally, in an embodiment of the present application, the mapping transformation fusion module includes: the mapping projection conversion module is used for carrying out projection conversion on the color mapping corresponding to each group of regional images according to the homography matrix to obtain a panoramic color mapping, and carrying out projection conversion on the normal mapping corresponding to each group of regional images according to the homography matrix to obtain a panoramic normal mapping; and the mapping joint fusion module is used for performing joint fusion on the panoramic color mapping to obtain the color mapping of the target object, and respectively performing joint fusion on the panoramic normal mapping to obtain the normal mapping of the target object.
Optionally, in an embodiment of the present application, the texture map includes: a front texture map and a back texture map; the texture map acquisition device further comprises: the front and back texture mapping acquisition module is used for acquiring a front texture mapping and a back texture mapping; and the double-sided mapping obtaining module is used for carrying out contour alignment on the front material mapping and the back material mapping to obtain the double-sided mapping of the target object.
The embodiment of the application also provides electronic equipment, which comprises: a processor and a memory storing machine-readable instructions executable by the processor to perform the method as described above when executed by the processor.
Embodiments of the present application also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs a method as described above.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a material map obtaining method according to an embodiment of the present application;
fig. 2 is a schematic view illustrating shooting of an object scanning device control camera according to an embodiment of the present application;
FIG. 3 is a schematic diagram of coordinates of a gradient illumination direction provided by an embodiment of the present application;
fig. 4 is a schematic structural diagram of a texture map obtaining apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Before introducing the material map obtaining method provided in the embodiments of the present application, some concepts related in the embodiments of the present application are described first:
Normal Mapping means storing, in the RGB channels of a texture, the surface normal at every point of the original object's concave-convex surface; the map can be understood as a detailed surface parallel to the original bumpy one, while the actual geometry remains a smooth plane. In terms of visual effect, a normal map is far more efficient than true concave-convex geometry: when a light source is placed at a given position, a low-detail surface can produce the accurate illumination direction and reflection effects of a high-detail one.
Homography is a concept in geometry: a homography is an invertible transformation from the real projective plane to the projective plane under which straight lines still map to straight lines; related terms include projectivity and projective transformation.
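The line-preserving property can be checked numerically: three collinear points remain collinear after an arbitrary (non-affine) homography:

```python
import numpy as np

def apply_homography(H, pt):
    """Map a 2-D point through homography H using homogeneous coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# a projective map that is not affine (bottom row is not [0, 0, 1])
H = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 2.0],
              [0.1, 0.0, 1.0]])
p, q, r = (0.0, 0.0), (1.0, 1.0), (2.0, 2.0)     # three collinear points
P, Q, R = (apply_homography(H, t) for t in (p, q, r))
# twice the signed triangle area: zero iff P, Q, R are collinear
area2 = (Q[0] - P[0]) * (R[1] - P[1]) - (Q[1] - P[1]) * (R[0] - P[0])
print(abs(area2) < 1e-9)  # True
```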
It should be noted that the method for obtaining a texture map provided in the embodiment of the present application may be executed by an electronic device, where the electronic device refers to a device terminal having the function of executing a computer program, or the server described below; the device terminal is, for example: a smart phone, a personal computer (PC), a tablet computer, a personal digital assistant (PDA) or a mobile Internet device (MID), etc.; the server is, for example: an x86 server or a non-x86 server, where non-x86 servers include: mainframes, minicomputers, and UNIX servers.
Application scenarios to which the texture map acquisition method is applicable are described below, where the application scenarios include, but are not limited to: the texture mapping acquisition method can improve the sense of realism of texture mapping generated in the fields of games, films and videos or cultural relics, digital museums and the like. Of course, the texture map acquisition method may also be used to enhance the functions of application software such as image processing software, video processing software or model processing software, and the application software specifically includes: photoshop, 3D Studio Max, auto CAD, sketchUp, solidworks, blender, maya, and the like.
Please refer to fig. 1, which is a flowchart illustrating a method for obtaining a texture map according to an embodiment of the present disclosure; the main idea of the material map acquisition method is that the material map capable of showing the sense of reality of the plane material is obtained by carrying out map calculation on each group of region images in a plurality of groups of region images and respectively carrying out projection transformation and seam fusion on the color map and the normal map corresponding to each group of region images, so that the sense of reality of the plane material is effectively increased; the material map obtaining method may include:
step S110: and acquiring a plurality of groups of area images of different areas of the target object, wherein each group of area images in the plurality of groups of area images are obtained by shooting the same area of the target object in different light source directions.
The target object refers to an object whose texture map needs to be scanned and acquired, and the object may be a planar object, that is, an object that is roughly planar as a whole, for example: artworks or cultural relics such as painting and calligraphy, line engraving, relief, interior engraving, and intaglio. The different regions of the target object are the regions scanned separately when the work (for example, a long scroll of painting or calligraphy) is too large to capture at once; to preserve the realism of the restored planar material, each region corresponds to a group of region images taken under different light source directions.
The above-mentioned implementation of step S110 is very various, including but not limited to the following:
In a first implementation, please refer to the schematic view of the object scanning device provided in the embodiment of the present application shown in fig. 2. Three sets of region images are obtained by photographing three regions of the target object in fig. 2 under different light source directions; the three sets of region images must partially overlap one another, and the overlap makes it easier to detect common feature points, also called "interior points" (inliers), which can be used to calculate the homography matrix described below. Each set of region images may be captured either by the object scanning device controlling the camera or by a professional photographer operating it; the procedure may include:
Step S111: the same region of the target object is photographed in a plurality of single light source directions, and a plurality of single light source images are obtained.
The embodiment of step S111 includes: first, the object scanning device controls the camera (or a professional photographer operates it) to photograph a first area of the target object in a plurality of single-light-source directions; for example, eight lamps in fig. 2 provide the light sources, and the lamps are lit one at a time in clockwise or counterclockwise order while the first area of the target object is photographed, obtaining a plurality of single light source images; the second area and the third area of the target object are then shot in the same manner in turn; finally, a plurality of single light source images are obtained for each of the first, second, and third areas. To improve the quality of the texture map, the number of single light sources (i.e., lamps) can be limited to 2^(n+1) (n = 1, 2, 3, …), so the number of single light sources may be 4, 8, 16, and so on.
Step S112: and shooting the same area of the target object in the direction of one annular light source to obtain an annular light source image.
The embodiment of step S112 described above is, for example: the object scanning device controls the camera or a professional photographer to operate the camera to photograph the first area of the target object in the direction of an annular light source, for example, eight lamps are provided in fig. 2 to provide light sources, and after all the eight lamps are lighted, the camera is controlled or operated to photograph the first area of the target object; then, shooting the second area and the third area of the target object in the above manner in sequence under the condition that all the eight lamps are lighted; and finally, respectively obtaining annular light source images corresponding to the first area, the second area and the third area.
Step S113: a plurality of single light source images and a ring light source image are determined as a set of area images.
The embodiment of step S113 described above is, for example: since the plurality of single light source images and the annular light source image are taken for one region of the target object, and the maps corresponding to each region are later subjected to seam fusion, the images need to be grouped by region; that is, the plurality of single light source images and the annular light source image of one region are determined as one group of region images, yielding three groups of region images corresponding to the first, second, and third regions.
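The grouping in step S113 can be sketched as a simple bucketing of captured shots by region; the record layout and field names below are illustrative assumptions, not from the patent:

```python
def group_by_region(shots):
    """Group captured shots into per-region image sets.

    `shots` is a list of (region_id, light_mode, image) records, where
    light_mode is a lamp index for a single-light-source shot or the
    string "ring" for the all-lamps-on shot.
    """
    sets = {}
    for region, mode, image in shots:
        entry = sets.setdefault(region, {"single": [], "ring": None})
        if mode == "ring":
            entry["ring"] = image
        else:
            entry["single"].append((mode, image))
    for entry in sets.values():               # order single-light shots by lamp index
        entry["single"] = [img for _, img in sorted(entry["single"])]
    return sets

shots = [(0, 1, "a1"), (0, 0, "a0"), (0, "ring", "aR"), (1, 0, "b0")]
sets = group_by_region(shots)
print(sets[0]["single"], sets[0]["ring"])  # ['a0', 'a1'] aR
```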
In a second embodiment, a plurality of sets of area images of different areas of a target object stored in advance are acquired, specifically for example: acquiring multiple groups of area images of different areas of a target object from a file system, or acquiring multiple groups of area images of different areas of the target object from a database, or acquiring multiple groups of area images of different areas of the target object from a mobile storage device; or, a browser or other software is used to obtain multiple sets of area images of different areas of the target object on the internet, or other application programs are used to access the internet to obtain multiple sets of area images of different areas of the target object.
After step S110, step S120 is performed: and performing mapping calculation on each group of region images in the plurality of groups of region images to obtain color mapping and normal mapping corresponding to the plurality of groups of region images.
The embodiment of step S120 may include:
step S121: and performing mapping calculation on a plurality of single light source images corresponding to each group of region images in the plurality of groups of region images to obtain a normal mapping corresponding to each group of region images.
Please refer to fig. 3, which illustrates a schematic diagram of the coordinates of the gradient illumination directions provided in an embodiment of the present application. The embodiment of step S121 described above is, for example: assume that 2^(n+1) (n = 1, 2, 3, …) single light source images have been obtained for each region of the target object; each single light source image in each set of region images may be normalized to the gradient illumination directions in fig. 3, obtaining a plurality of normalized images. Specifically: first, the image lit from the positive y-axis direction in the coordinate system of fig. 3 is normalized, and the resulting normalized image is denoted I_0; the remaining single light source images are then normalized in clockwise order, the normalized image corresponding to the k-th lamp being denoted I_k. Similarly, in fig. 3, I_(2^(n-1)) denotes the normalized image taken with the 2^(n-1)-th lamp lit (the positive x direction), I_(2^n) the normalized image taken with the 2^n-th lamp lit (the negative y direction), and I_(3·2^(n-1)) the normalized image taken with the 3·2^(n-1)-th lamp lit (the negative x direction).
After each single light source image in each group of region images is normalized to obtain a plurality of normalized images, gradient calculation can be performed on the plurality of normalized images to obtain the gradient image corresponding to each group of region images, wherein the gradient image comprises the gradients along the x, y and z axes, which can be denoted Δx, Δy and Δz respectively; since (Δx, Δy, Δz) behaves as a unit normal, the relationship of the gradients of the x, y and z axes can be formulated as Δz = √(1 − Δx² − Δy²).
It will be appreciated that the gradient of the x-axis in turn comprises a positive gradient and a negative gradient, and the formula Δx = Δx⁺ − Δx⁻ may be used to represent the relationship of the three, wherein Δx represents the gradient of the x-axis, Δx⁺ represents the positive gradient of the x-axis, and Δx⁻ represents the negative gradient of the x-axis. The positive gradient Δx⁺ is calculated as a weighted combination of the normalized images I_k taken with the lamps on the positive x-axis side of the ring lit (the lamps around the 2^(n-1)-th lamp), and the negative gradient Δx⁻ as a weighted combination of the normalized images taken with the lamps on the negative x-axis side lit (the lamps around the 3·2^(n-1)-th lamp); wherein I_k represents the normalized image corresponding to the k-th lamp, I_{2^(n-1)} represents the normalized image taken with the 2^(n-1)-th lamp lit, and I_{3·2^(n-1)} represents the normalized image taken with the 3·2^(n-1)-th lamp lit. The gradient of the y-axis likewise comprises a positive gradient and a negative gradient, and the formula Δy = Δy⁺ − Δy⁻ may be used to represent the relationship of the three, wherein Δy represents the gradient of the y-axis, Δy⁺ represents the positive gradient of the y-axis, and Δy⁻ represents the negative gradient of the y-axis; Δy⁺ and Δy⁻ are calculated in the same manner from the normalized images taken with the lamps on the positive and negative y-axis sides lit. For the specific meaning of the letters in the formulas, reference is made to the description above.
Finally, the gradient image corresponding to each group of region images is subjected to scale transformation, assigning Δx, Δy and Δz to the normal map, so that the normal map corresponding to each group of region images is obtained; the normal map corresponding to each group of region images can be denoted I_N. The normal map I_N comprises three component matrices on the R, G and B channels, which can be denoted I_N^R, I_N^G and I_N^B respectively, and which can be calculated using the formulas I_N^R = (Δx + 1) × 128, I_N^G = (Δy + 1) × 128 and I_N^B = (Δz + 1) × 128, where I_N^R, I_N^G and I_N^B represent the component matrices of the normal map on the R, G and B channels respectively.
The calculation process of the normal map can be understood as taking the plane (namely the x coordinate axis and the y coordinate axis) where the annular lamplight is located as the coordinate plane, so that gradient images in the x coordinate axis direction and the y coordinate axis direction are calculated, then gradient images in the z coordinate axis direction are simulated according to the gradient images, and finally the normal map is obtained through the gradient images in the x coordinate axis direction, the y coordinate axis direction and the z coordinate axis direction; the gradient images in the x coordinate axis direction, the y coordinate axis direction and the z coordinate axis direction are normalized gray level images.
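As a rough illustration of this step, the sketch below (Python with NumPy; the function name and the clipping to the 0–255 range are assumptions of this sketch, and the per-lamp gradient weighting is not reproduced) simulates the z-axis gradient from the x- and y-axis gradient images and packs the three gradients into the R, G and B channels with the (v + 1) × 128 rule described above:

```python
import numpy as np

def normal_map_from_gradients(dx, dy):
    """Build an 8-bit normal map from x/y gradient images.

    dx, dy are float arrays in [-1, 1]; the z component is simulated so
    that (dx, dy, dz) behaves like a unit normal, then each component is
    remapped with the (v + 1) * 128 rule from the text (clipped to the
    valid 8-bit range, which is an assumption of this sketch).
    """
    dz = np.sqrt(np.clip(1.0 - dx ** 2 - dy ** 2, 0.0, 1.0))
    to_u8 = lambda v: np.clip((v + 1.0) * 128.0, 0, 255).astype(np.uint8)
    return np.dstack([to_u8(dx), to_u8(dy), to_u8(dz)])  # R, G, B channels

# flat surface: dx = dy = 0 everywhere, so every normal points along +z
flat = normal_map_from_gradients(np.zeros((4, 4)), np.zeros((4, 4)))
```

For a flat region (Δx = Δy = 0) the result is R = G = 128 with the B channel saturated, i.e. the familiar uniform blue of a flat normal map.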
Step S122: and determining the annular light source image corresponding to each group of region images in the plurality of groups of region images as a color map corresponding to each group of region images.
The embodiment of step S122 described above is, for example: since the color map is an image of the target object photographed with all the lamps on, the ring-shaped light source image corresponding to each of the plurality of sets of area images can be directly determined as the color map corresponding to each of the plurality of sets of area images.
After step S120, step S130 is performed: and respectively carrying out projection transformation and seam fusion on the color maps and the normal maps corresponding to the multiple groups of regional images to obtain the material maps of the target objects.
The embodiment of step S130 may include:
step S131: and filtering and detecting characteristic points of the color maps corresponding to each group of region images in the plurality of groups of region images to obtain the characteristic points in the color maps corresponding to each group of region images.
The embodiment of step S131 described above is, for example: and carrying out high-pass filtering (which can be understood as high contrast retention) on the color maps corresponding to each group of region images in the plurality of groups of region images by using a Gaussian filtering algorithm, a mean filtering algorithm, a median filtering algorithm and/or a bilateral filtering algorithm to obtain a filtered region image, thereby obtaining high-frequency characteristic information of the region image, and amplifying the high-frequency characteristic information to ensure that the high-frequency characteristic information in the region image is more obvious and the detection of the characteristic points is facilitated. And then, performing feature point detection on the filtered region images by using feature detection algorithms such as Scale-invariant feature transform (Scale-Invariant Feature Transform, SIFT), acceleration robust features (Speed Up Robust Features, SURF), FAST (Features from Accelerated Segment Test), ORB (Oriented FAST and Rotated BRIEF) and/or Harris, and the like, so as to obtain feature points in the color map corresponding to each group of region images.
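The high-contrast-retention step described above can be sketched as follows; this is a minimal NumPy stand-in in which a box blur replaces the Gaussian/mean/median/bilateral filters named in the text, and the re-centring at 128 is an assumption of the sketch:

```python
import numpy as np

def high_pass(img, radius=1):
    """High-contrast retention: subtract a local mean (a stand-in for the
    low-pass filters mentioned in the text) and re-centre at 128 so that
    high-frequency detail stands out for feature detection."""
    img = img.astype(np.float64)
    k = 2 * radius + 1
    # box blur via a padded sliding-window mean
    pad = np.pad(img, radius, mode="edge")
    blur = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            blur += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blur /= k * k
    return np.clip(img - blur + 128.0, 0, 255).astype(np.uint8)

# a perfectly flat image has no high-frequency content: everything maps to 128
out = high_pass(np.full((5, 5), 77.0))
```

A detector such as SIFT or ORB would then be run on the filtered result rather than on the raw region image.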
Step S132: and matching the characteristic points in the color map corresponding to each group of regional images, and performing iterative loop calculation to obtain a homography matrix.
The embodiment of step S132 described above is, for example: in order to improve the quality of the feature points obtained in the color map corresponding to each group of region images, feature point matching may be performed on each feature point detected in the filtered region images: the N nearest feature points in the other region images (which may have overlapping portions) are searched for using a k-Nearest Neighbor (kNN) classification algorithm based on a k-dimensional tree (k-d tree), and the N candidates are then compared to judge whether a matching point has been found. In order to perform geometric projection transformation on the color map and the normal map corresponding to each group of region images, a homography matrix needs to be calculated; the homography matrix is a model with eight degrees of freedom. The feature points in the color map corresponding to each group of region images can be grouped to obtain multiple groups of feature points, and each group of matched feature points yields two equations, so only four groups of matching points are needed; if there are more than four groups of matching points, the least squares method can be used to improve the calculation accuracy. In order to calculate the homography matrix more accurately, the matching points need to be separated into inliers and outliers: the more inliers there are, the more accurate the homography matrix that can be obtained.
In a specific practical process, an iterative loop calculation can be performed on the feature points in the color map corresponding to each group of regional images by adopting a random sampling consensus (Random sample consensus, RANSAC) algorithm, so as to obtain a homography matrix calculated by the maximum interior point number.
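The RANSAC loop described above can be sketched as follows; the DLT solver, the iteration count and the 2-pixel inlier threshold are illustrative choices of this sketch, not values from the patent:

```python
import numpy as np

def dlt_homography(src, dst):
    """Direct linear transform: homography from >= 4 point pairs (Nx2)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=np.float64))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pts):
    """Apply a homography to Nx2 points via homogeneous coordinates."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def ransac_homography(src, dst, iters=200, thresh=2.0, seed=0):
    """Keep the model with the largest inlier count, as in the text."""
    rng = np.random.default_rng(seed)
    best_H, best_inliers = None, 0
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)
        try:
            H = dlt_homography(src[idx], dst[idx])
        except np.linalg.LinAlgError:
            continue
        err = np.linalg.norm(apply_h(H, src) - dst, axis=1)
        inliers = int((err < thresh).sum())
        if inliers > best_inliers:
            best_H, best_inliers = H, inliers
    return best_H, best_inliers

# synthetic matches: 35 exact inliers plus five gross outliers
H_true = np.array([[1.0, 0.1, 5.0], [0.05, 0.9, -3.0], [1e-4, 0.0, 1.0]])
src = np.random.default_rng(1).uniform(0, 100, (40, 2))
dst = apply_h(H_true, src)
dst[:5] += 50.0
H, n_in = ransac_homography(src, dst)
```

On these synthetic matches the model computed from the maximum inlier count recovers the true homography despite the outliers.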
Step S133: and respectively carrying out projection transformation and joint fusion on the color maps and the normal maps corresponding to the multiple groups of regional images according to the homography matrix to obtain the color maps and the normal maps of the target object.
The embodiment of step S133 described above is, for example: because the normal map corresponding to each group of region images has the same size as the color map corresponding to that group, projection transformation and seam fusion can be performed on the color maps according to the homography matrix to obtain a panoramic color map with seams, and projection transformation can be performed on the normal maps according to the same homography matrix to obtain a panoramic normal map with seams. This achieves the purpose of making the positions of all pixels in the color map and the normal map correspond exactly, thereby meeting the requirements of later rendering. The specific homography transformation of the color map and the normal map is as follows: the formula (n, m, 1)^T ∝ H_t · (j, i, 1)^T, with I′_mn = I_ij, may be used to carry out the projection transformation of the homography matrix on the color map and the normal map corresponding to each group of region images, obtaining the homography-transformed color map and the homography-transformed normal map; wherein I′_mn is the pixel value at the m-th row and n-th column of the t-th transformed color map or normal map, I_ij is the pixel value at the i-th row and j-th column of the color map or normal map, and H_t is the homography matrix calculated for the t-th image of the color map or normal map.
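A minimal sketch of the projection transformation itself (inverse nearest-neighbour warping; the function name and border handling are choices of this sketch). Applying the same homography to the colour map and the normal map keeps their pixels in one-to-one correspondence:

```python
import numpy as np

def warp_homography(img, H, out_shape):
    """Inverse-map each output pixel through H^-1 and sample the source
    image (nearest neighbour). Coordinates are homogeneous (column, row, 1)
    vectors; out-of-bounds samples are left as zero."""
    Hinv = np.linalg.inv(H)
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    sx, sy, sw = Hinv @ pts
    sx = np.round(sx / sw).astype(int)
    sy = np.round(sy / sw).astype(int)
    out = np.zeros((h, w) + img.shape[2:], dtype=img.dtype)
    ok = (sx >= 0) & (sx < img.shape[1]) & (sy >= 0) & (sy < img.shape[0])
    out[ys.ravel()[ok], xs.ravel()[ok]] = img[sy[ok], sx[ok]]
    return out

# a pure 2-pixel translation to the right as the homography
H = np.array([[1.0, 0, 2], [0, 1.0, 0], [0, 0, 1.0]])
img = np.arange(16, dtype=np.uint8).reshape(4, 4)
shifted = warp_homography(img, H, (4, 4))
```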
In the above-mentioned process of seam fusion, in order to make the images fuse as seamlessly as possible, global exposure compensation may further be adopted to equalize the exposure of the images at each position. Specifically, if different exposure degrees exist between the different images (including the color maps and the normal maps), obvious seams will appear in the overlapped parts of the stitched images; a global exposure compensation method is therefore adopted to balance the exposure of the images at all positions. The global exposure compensation may adopt a gain compensation scheme, that is, each image is given a gain coefficient so that the intensities of the overlapping partial images become equal or similar. Seam estimation uses the diagonal of the overlapping rectangular area as the stitching edge of the two images.
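The gain-compensation scheme can be illustrated with a deliberately simplified sketch: each image receives one gain that pulls the mean of its overlap region toward the joint mean. (Real pipelines, e.g. OpenCV's stitching module, instead solve a least-squares system over all pairwise overlaps; the function name and masks here are assumptions of the sketch.)

```python
import numpy as np

def gain_compensate(images, overlaps):
    """Per-image gain so overlapping regions end up with similar
    intensity. `overlaps[i]` is a boolean mask marking image i's overlap
    region; each gain maps that region's mean onto the joint mean."""
    means = np.array([img[m].mean() for img, m in zip(images, overlaps)])
    target = means.mean()
    gains = target / means
    return [img * g for img, g in zip(images, gains)], gains

# two flat images with a 40-unit exposure difference in their shared strip
a = np.full((4, 4), 80.0)
b = np.full((4, 4), 120.0)
mask = np.zeros((4, 4), bool)
mask[:, 2:] = True
(out_a, out_b), (ga, gb) = gain_compensate([a, b], [mask, mask])
```

After compensation both overlap regions sit at the joint mean, so the seam between them carries no exposure step.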
In a specific implementation process, the target object can be subjected to seam elimination according to the color map and the normal map of the target object, so as to obtain the texture map of the target object. This embodiment is, for example: in order to completely eliminate the joints, the spliced images are more natural, a poisson image editing (Poisson Image Editing) algorithm is adopted to eliminate the joints of the target object according to the color mapping and the normal mapping of the target object, and the material mapping of the target object is obtained, so that the poisson equation is constructed to solve the optimal value of the pixels, the color mapping and the normal mapping are fused while gradient information is maintained, and the continuity on the gradient is realized, so that the technical effect of seamless fusion at the joints is achieved. The normal map and the color map belong to the material map, and the rendering engine can be used for rendering the normal map, the color map and the model, so that the real 3D material effect is displayed.
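The idea of solving a Poisson equation so the result is continuous in gradient can be shown in one dimension (a sketch only; real Poisson image editing solves the analogous 2-D system over the seam region):

```python
import numpy as np

def poisson_blend_1d(left, right):
    """1-D illustration of Poisson seam removal: keep each side's
    gradients, pin the two outer endpoints, and solve the resulting
    linear system so the joined signal is continuous in gradient."""
    grads = np.concatenate([np.diff(left), np.diff(right)])
    n = len(grads) + 1                       # number of unknown samples
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = 1.0; b[0] = left[0]            # pin the left endpoint
    A[-1, -1] = 1.0; b[-1] = right[-1]       # pin the right endpoint
    for i in range(1, n - 1):                # interior: second difference
        A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
        b[i] = grads[i] - grads[i - 1]
    return np.linalg.solve(A, b)

# two unit-slope ramps with a 10-unit jump at the seam
left = np.array([0.0, 1.0, 2.0, 3.0])
right = np.array([13.0, 14.0, 15.0, 16.0])
blended = poisson_blend_1d(left, right)
```

Both inputs have unit slope but a 10-unit jump at the seam; the solved signal spreads the jump evenly, yielding a constant gradient between the pinned endpoints, which is exactly the "continuity on the gradient" the text describes.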
In the implementation process, firstly, multiple groups of area images of different areas of a target object are acquired; then, mapping calculation is carried out on each group of region images in the plurality of groups of region images, and color mapping and normal mapping corresponding to each group of region images are obtained; and finally, respectively performing projection transformation and seam fusion on the color map and the normal map corresponding to each group of regional images to obtain the material map of the target object. That is, by performing mapping calculation on each group of region images in the plurality of groups of region images and performing projection transformation and seam fusion on the color mapping and the normal mapping corresponding to each group of region images, a material mapping capable of showing the sense of reality of a planar material is obtained, and the sense of reality of the planar material is effectively increased.
It should be understood that, in the above steps S110 to S130, only one surface texture map (e.g. a front surface texture map or a back surface texture map) of the planar texture is described, and the planar texture has two surfaces, i.e. a front surface and a back surface, so the texture map may include: a front texture map and a back texture map; the above-mentioned texture map obtaining method may further include:
step S210: and obtaining a front material map and a back material map.
The implementation principle and implementation of the step S210 are similar to those of the steps S110 to S130, except that the steps S110 to S130 only describe the process of obtaining either the front texture map or the back texture map, and the step S210 needs to obtain both the front texture map and the back texture map. Therefore, the implementation principle and embodiment thereof will not be described here, and reference may be made to the descriptions of step S110 to step S130, if not clear.
After step S210, step S220 is performed: and carrying out contour alignment on the front material mapping and the back material mapping to obtain the double-sided mapping of the target object.
The embodiment of step S220 described above is, for example: assume the front texture map obtained above is denoted I_Pos and the reverse texture map obtained above is denoted I_Neg.

Step 1, binarize the front texture map I_Pos and the reverse texture map I_Neg to obtain the binarized front texture map B_Pos and the binarized reverse texture map B_Neg.

Step 2, extract the contour point sets of the binarized front texture map B_Pos and the binarized reverse texture map B_Neg, denoted C_Pos and C_Neg respectively.

Step 3, initialize the homography matrix as an identity matrix H and set the number of iterations N.

Step 4, perform homography transformation calculation on the contour point set C_Neg of the reverse texture map to obtain the projected contour point set C′_Neg.

Step 5, search for the nearest neighbor point set between the contour point set C_Pos and the projected contour point set C′_Neg, denoted C′_Pos.

Step 6, calculate the minimized distance between the nearest neighbor point set C′_Pos and the contour point set C_Neg, which can be expressed by the formula min‖C_Neg − C′_Pos‖, and update the homography matrix H after the iteration according to this minimized distance.

Step 7, repeat steps 4 to 6 until the number of iterations equals the set number of iterations N, so as to ensure that the algorithm reaches a convergence state; the optimal homography matrix H_Final is obtained at this point.

Step 8, perform homography transformation calculation on the reverse texture map I_Neg and the binarized reverse texture map B_Neg according to the optimal homography matrix H_Final, obtaining the homography-transformed reverse texture map I′_Neg and the homography-transformed binarized reverse texture map B′_Neg.

Step 9, calculate the intersection image between the homography-transformed binarized reverse texture map B′_Neg and the binarized front texture map B_Pos, which may be denoted B_With.

Step 10, with the intersection image B_With as a mask, transform the homography-transformed reverse texture map I′_Neg and the front texture map I_Pos using the formulas I″_Neg = B_With ⊙ I′_Neg and I″_Pos = B_With ⊙ I_Pos to obtain the final contour-aligned reverse image I″_Neg and the final contour-aligned front image I″_Pos; wherein I″_Neg represents the final contour-aligned reverse image, I″_Pos represents the final contour-aligned front image, I′_Neg represents the homography-transformed reverse texture map, B_With represents the intersection image between the homography-transformed binarized reverse texture map B′_Neg and the binarized front texture map B_Pos, ⊙ denotes element-wise masking, and I_Pos represents the front texture map.
Through the calculation processes of step 1 to step 10, the special requirement of double-sided rendering for planar objects such as leaves can be met, achieving the technical effect of highlighting the realism of the scene through double-sided rendering.
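Steps 1 to 10 above are essentially an iterative-closest-point refinement of a transform between the two contour point sets. A minimal sketch follows (a rigid rotation-plus-translation model instead of the full homography, with illustrative point counts and iteration numbers, all assumptions of this sketch):

```python
import numpy as np

def icp_align(src, dst, iters=20):
    """Iterative closest point over 2-D contour points: match each source
    point to its nearest target point, fit a rigid transform (rotation R,
    translation t) in closed form via SVD, apply it, and repeat."""
    R, t = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        near = dst[d2.argmin(axis=1)]            # nearest neighbours
        mu_s, mu_d = cur.mean(0), near.mean(0)
        U, _, Vt = np.linalg.svd((cur - mu_s).T @ (near - mu_d))
        R_step = (U @ Vt).T
        if np.linalg.det(R_step) < 0:            # keep a proper rotation
            Vt[-1] *= -1
            R_step = (U @ Vt).T
        t_step = mu_d - R_step @ mu_s
        cur = cur @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step   # accumulate the composite
    return R, t, cur

# target contour points, and a rotated + shifted copy to align back
theta = np.deg2rad(10)
Rt = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta), np.cos(theta)]])
dst = np.random.default_rng(2).uniform(0, 10, (60, 2))
src = (dst - dst.mean(0)) @ Rt.T + dst.mean(0) + np.array([0.5, -0.3])
R, t, aligned = icp_align(src, dst)
```

Each pass of steps 4 to 6 in the patent plays the same role as one loop iteration here: project, rematch nearest neighbours, minimize the distance, update the transform.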
Please refer to fig. 4, which illustrates a schematic structure diagram of a texture map obtaining apparatus according to an embodiment of the present disclosure; the embodiment of the application provides a texture map obtaining device 200, which comprises:
the region image obtaining module 210 is configured to obtain a plurality of sets of region images of different regions of the target object, where each set of region images is obtained by photographing the same region of the target object in different directions of the light source.
The region map calculation module 220 is configured to perform map calculation on each group of region images in the plurality of groups of region images, and obtain color maps and normal maps corresponding to the plurality of groups of region images.
The material map obtaining module 230 is configured to perform projection transformation and seam fusion on the color maps and the normal maps corresponding to the multiple groups of area images, respectively, to obtain a material map of the target object.
Optionally, in an embodiment of the present application, the area image acquisition module includes:
and the single light source image acquisition module is used for shooting the same area of the target object in a plurality of single light source directions to acquire a plurality of single light source images.
The annular light source image acquisition module is used for shooting the same area of the target object in the direction of one annular light source to acquire an annular light source image.
And the regional image determining module is used for determining a plurality of single light source images and annular light source images as a group of regional images.
Optionally, in an embodiment of the present application, the area map calculation module includes:
the normal map obtaining module is used for performing map calculation on a plurality of single light source images corresponding to each group of region images in the plurality of groups of region images to obtain a normal map corresponding to each group of region images.
And the color mapping determining module is used for determining the annular light source image corresponding to each group of area images in the plurality of groups of area images as the color mapping corresponding to each group of area images.
Optionally, in an embodiment of the present application, the normal map obtaining module includes:
the normalized image obtaining module is used for normalizing each single light source image in each group of area images to obtain a plurality of normalized images.
And carrying out gradient calculation on the plurality of normalized images to obtain gradient images corresponding to each group of regional images.
And performing scale transformation on the gradient images corresponding to each group of region images to obtain the normal map corresponding to each group of region images.
Optionally, in an embodiment of the present application, the texture map obtaining module includes:
and the filtering feature detection module is used for filtering and feature point detection on the color map corresponding to each group of region images in the plurality of groups of region images to obtain feature points in the color map corresponding to each group of region images.
And the homography matrix obtaining module is used for matching the characteristic points in the color map corresponding to each group of regional images and carrying out iterative loop calculation to obtain a homography matrix.
And the mapping transformation fusion module is used for respectively carrying out projection transformation and joint fusion on the color mapping and the normal mapping corresponding to the plurality of groups of regional images according to the homography matrix to obtain the material mapping of the target object.
Optionally, in an embodiment of the present application, the mapping transformation fusion module includes:
and the mapping projection conversion module is used for carrying out projection conversion on the color mapping corresponding to each group of regional images according to the homography matrix to obtain a panoramic color mapping, and carrying out projection conversion on the normal mapping corresponding to each group of regional images according to the homography matrix to obtain the panoramic normal mapping.
And the mapping joint fusion module is used for performing joint fusion on the panoramic color mapping to obtain the color mapping of the target object, and respectively performing joint fusion on the panoramic normal mapping to obtain the normal mapping of the target object.
Optionally, in an embodiment of the present application, the texture map includes: a front texture map and a back texture map; the texture map acquisition device further comprises:
and the front and back mapping acquisition module is used for acquiring the front material mapping and the back material mapping.
And the double-sided mapping obtaining module is used for carrying out contour alignment on the front material mapping and the back material mapping to obtain the double-sided mapping of the target object.
It should be understood that, the apparatus corresponds to the above embodiment of the method for obtaining a texture map, and is capable of executing the steps involved in the above embodiment of the method, and specific functions of the apparatus may be referred to the above description, and detailed descriptions thereof are omitted herein for avoiding repetition. The device includes at least one software functional module that can be stored in memory in the form of software or firmware (firmware) or cured in an Operating System (OS) of the device.
Please refer to fig. 5, which illustrates a schematic structural diagram of an electronic device provided in an embodiment of the present application. An electronic device 300 provided in an embodiment of the present application includes: a processor 310 and a memory 320, the memory 320 storing machine-readable instructions executable by the processor 310, which when executed by the processor 310 perform the method as described above.
The present embodiment also provides a computer readable storage medium 330, the computer readable storage medium 330 having stored thereon a computer program which, when executed by the processor 310, performs the method as above.
The computer readable storage medium 330 may be implemented by any type or combination of volatile or nonvolatile Memory devices, such as static random access Memory (Static Random Access Memory, SRAM for short), electrically erasable programmable Read-Only Memory (Electrically Erasable Programmable Read-Only Memory, EEPROM for short), erasable programmable Read-Only Memory (Erasable Programmable Read Only Memory, EPROM for short), programmable Read-Only Memory (Programmable Read-Only Memory, PROM for short), read-Only Memory (ROM for short), magnetic Memory, flash Memory, magnetic disk, or optical disk.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative, for example, of the flowcharts and block diagrams in the figures that illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
In addition, the functional modules of the embodiments in the embodiments of the present application may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The foregoing description is merely an optional implementation of the embodiments of the present application, but the scope of the embodiments of the present application is not limited thereto, and any person skilled in the art may easily think about changes or substitutions within the technical scope of the embodiments of the present application, and the changes or substitutions should be covered in the scope of the embodiments of the present application.

Claims (7)

1. The material map obtaining method is characterized by comprising the following steps:
acquiring a plurality of groups of area images of different areas of a target object, wherein each group of area images in the plurality of groups of area images are obtained by shooting the same area of the target object in different light source directions;
performing mapping calculation on each group of region images in the plurality of groups of region images to obtain color mapping and normal mapping corresponding to the plurality of groups of region images;
respectively carrying out projection transformation and seam fusion on the color maps and the normal maps corresponding to the multiple groups of regional images to obtain a material map of the target object;
Wherein the acquiring a plurality of groups of area images of different areas of the target object includes: shooting the same area of the target object in a plurality of single light source directions to obtain a plurality of single light source images; shooting the same area of the target object in the direction of an annular light source to obtain an annular light source image; determining the plurality of single light source images and the annular light source image as a set of the area images;
performing mapping calculation on each group of region images in the plurality of groups of region images to obtain color mapping and normal mapping corresponding to the plurality of groups of region images, including: normalizing each single light source image in each group of region images to obtain a plurality of normalized images; performing gradient calculation on the plurality of normalized images to obtain gradient images corresponding to each group of region images, wherein the gradient images comprise the gradients Δx, Δy and Δz, the relationship between which is expressed as Δz = √(1 − Δx² − Δy²); performing scale transformation on the gradient image corresponding to each group of region images to obtain a normal map corresponding to each group of region images, comprising: calculating the component matrix on the R channel using the formula I_N^R = (Δx + 1) × 128, calculating the component matrix on the G channel using the formula I_N^G = (Δy + 1) × 128, and calculating the component matrix on the B channel using the formula I_N^B = (Δz + 1) × 128; and determining the annular light source image corresponding to each group of region images in the plurality of groups of region images as a color map corresponding to each group of region images.
2. The method according to claim 1, wherein the performing projective transformation and joint fusion on the color maps and the normal maps corresponding to the plurality of groups of area images to obtain the texture map of the target object includes:
filtering and detecting characteristic points of the color maps corresponding to each group of region images in the plurality of groups of region images to obtain the characteristic points in the color maps corresponding to each group of region images;
matching the characteristic points in the color map corresponding to each group of region images and performing iterative loop calculation to obtain a homography matrix;
and respectively carrying out projection transformation and joint fusion on the color maps and the normal maps corresponding to the multiple groups of regional images according to the homography matrix to obtain the material maps of the target object.
3. The method according to claim 2, wherein the performing projective transformation and joint fusion on the color maps and the normal maps corresponding to the plurality of groups of area images according to the homography matrix includes:
Performing projection transformation on the color maps corresponding to each group of regional images according to the homography matrix to obtain panoramic color maps, and performing projection transformation on the normal maps corresponding to each group of regional images according to the homography matrix to obtain panoramic normal maps;
and performing joint fusion on the panoramic color map to obtain the color map of the target object, and performing joint fusion on the panoramic normal map respectively to obtain the normal map of the target object.
4. A method according to any one of claims 1-3, wherein the texture map comprises: a front texture map and a back texture map; the material map obtaining method further comprises the following steps:
acquiring the front material map and the back material map;
and carrying out contour alignment on the front material mapping and the back material mapping to obtain the double-sided mapping of the target object.
5. A texture map acquisition apparatus, comprising:
the system comprises an area image acquisition module, a target object acquisition module and a display module, wherein the area image acquisition module is used for acquiring a plurality of groups of area images of different areas of the target object, and each group of area images in the plurality of groups of area images are obtained by shooting the same area of the target object in different light source directions;
The region mapping calculation module is used for mapping calculation of each group of region images in the plurality of groups of region images to obtain color mapping and normal mapping corresponding to the plurality of groups of region images;
the material mapping obtaining module is used for respectively carrying out projection transformation and joint fusion on the color mapping and the normal mapping corresponding to the multiple groups of regional images to obtain the material mapping of the target object;
wherein the acquiring a plurality of groups of area images of different areas of the target object comprises: shooting the same area of the target object in a plurality of single light source directions to obtain a plurality of single light source images; shooting the same area of the target object under an annular light source to obtain an annular light source image; and determining the plurality of single light source images and the annular light source image as one group of the area images;
and the performing map calculation on each group of region images in the plurality of groups of region images to obtain the color maps and the normal maps corresponding to the plurality of groups of region images comprises:
normalizing each single light source image in each group of area images to obtain a plurality of normalized images;
performing gradient calculation on the plurality of normalized images to obtain a gradient image corresponding to each group of region images, wherein the gradient image comprises the relationship among the gradients Δx, Δy and Δz, expressed by the formula published as image FDA0004119042500000041;
performing scale transformation on the gradient image corresponding to each group of region images to obtain the normal map corresponding to each group of region images, comprising: calculating the component matrix on the R channel using the formula published as image FDA0004119042500000042, calculating the component matrix on the G channel using the formula published as image FDA0004119042500000043, and calculating the component matrix on the B channel using the formula published as image FDA0004119042500000044; and
determining the annular light source image corresponding to each group of region images in the plurality of groups of region images as the color map corresponding to each group of region images.
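The per-channel formulas of claim 5 are published only as images, so their exact form is not recoverable here. The sketch below instead assumes the common photometric-stereo convention: per-pixel gradient vectors (Δx, Δy, Δz) are normalized to unit length and each component is scale-transformed from [-1, 1] to [0, 255], with the x component stored in the R channel, y in G, and z in B. That mapping is an assumption, not the patent's disclosed formula.

```python
import numpy as np

def gradients_to_normal_map(dx, dy, dz):
    """Encode per-pixel gradient vectors as an 8-bit RGB normal map.

    Assumed convention (not taken from the patent): normalize each
    (dx, dy, dz) to unit length, then map [-1, 1] -> [0, 255] per
    channel via n * 127.5 + 127.5 (R <- x, G <- y, B <- z).
    """
    n = np.stack([dx, dy, dz], axis=-1).astype(np.float64)
    length = np.linalg.norm(n, axis=-1, keepdims=True)
    length[length == 0] = 1.0   # avoid division by zero on flat pixels
    n /= length                 # unit normals
    return np.clip(n * 127.5 + 127.5, 0, 255).astype(np.uint8)
```

Under this convention, a perfectly flat region (Δx = Δy = 0, Δz = 1) encodes to the familiar light-blue normal-map color, with a full-intensity blue channel.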
6. An electronic device, comprising: a processor and a memory, the memory storing machine-readable instructions executable by the processor, wherein the machine-readable instructions, when executed by the processor, cause the processor to perform the method according to any one of claims 1 to 4.
7. A computer-readable storage medium, characterized in that a computer program is stored thereon, wherein the computer program, when executed by a processor, performs the method according to any one of claims 1 to 4.
CN202110729001.XA 2021-06-29 2021-06-29 Material map acquisition method and device, electronic equipment and storage medium Active CN113362440B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110729001.XA CN113362440B (en) 2021-06-29 2021-06-29 Material map acquisition method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113362440A CN113362440A (en) 2021-09-07
CN113362440B true CN113362440B (en) 2023-05-26

Family

ID=77537142

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110729001.XA Active CN113362440B (en) 2021-06-29 2021-06-29 Material map acquisition method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113362440B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114119779A (en) * 2021-10-29 2022-03-01 浙江凌迪数字科技有限公司 Method for generating a material map through multi-angle lighting and shooting, and electronic device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106886979A (en) * 2017-03-30 2017-06-23 深圳市未来媒体技术研究院 Image stitching device and image stitching method
CN109523619A (en) * 2018-11-12 2019-03-26 厦门启尚科技有限公司 Method for generating 3D texture from multi-angle lighting pictures
CN110033509A (en) * 2019-03-22 2019-07-19 嘉兴超维信息技术有限公司 Method for constructing three-dimensional face normals based on diffuse-reflection gradient polarized light
CN112116692A (en) * 2020-08-28 2020-12-22 北京完美赤金科技有限公司 Model rendering method, device and equipment
CN112215936A (en) * 2020-10-16 2021-01-12 广州虎牙科技有限公司 Image rendering method and device, electronic equipment and storage medium
CN112734761A (en) * 2021-04-06 2021-04-30 中科慧远视觉技术(北京)有限公司 Industrial product image boundary contour extraction method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
3D Face Template Registration Using Normal Maps;Zhongjie Wang等;《2013 International Conference on 3D Vision - 3DV 2013》;20130916;295-302 *
High-precision three-dimensional face reconstruction algorithm based on gradient light images; Huang Shuo et al.; Acta Optica Sinica; 2020-02-28; Vol. 40, No. 4; 0410001-1 to 0410001-9 *

Also Published As

Publication number Publication date
CN113362440A (en) 2021-09-07

Similar Documents

Publication Publication Date Title
CN110135455B (en) Image matching method, device and computer readable storage medium
CN111401266B (en) Method, equipment, computer equipment and readable storage medium for positioning picture corner points
Zhang et al. An image stitching algorithm based on histogram matching and SIFT algorithm
Ghosh et al. A survey on image mosaicing techniques
US10726580B2 (en) Method and device for calibration
US9519968B2 (en) Calibrating visual sensors using homography operators
US20210004942A1 (en) Method and device for three-dimensional reconstruction
US20210295467A1 (en) Method for merging multiple images and post-processing of panorama
WO2016188010A1 (en) Motion image compensation method and device, display device
CN107154014A Real-time color and depth panorama mosaic method
CN106780297B (en) High-precision image registration method under the condition of scene and illumination changes
JP2010287174A (en) Furniture simulation method, device, program, recording medium
CN103902953B Screen detection system and method
CN107248174A Target tracking method based on the TLD algorithm
US10169891B2 (en) Producing three-dimensional representation based on images of a person
Přibyl et al. Feature point detection under extreme lighting conditions
CN116157867A (en) Neural network analysis of LFA test strips
CN113689397A (en) Workpiece circular hole feature detection method and workpiece circular hole feature detection device
CN113362440B (en) Material map acquisition method and device, electronic equipment and storage medium
CN113012298A (en) Curved MARK three-dimensional registration augmented reality method based on region detection
CN117252931A (en) Camera combined external parameter calibration method and system using laser radar and storage medium
Chand et al. Implementation of Panoramic Image Stitching using Python
CN113723465A (en) Improved feature extraction method and image splicing method based on same
Hwang et al. Real-time 2d orthomosaic mapping from drone-captured images using feature-based sequential image registration
JP5563390B2 (en) Image processing apparatus, control method therefor, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant