
CN114926593A - SVBRDF material modeling method and system based on single highlight image - Google Patents


Info

Publication number
CN114926593A
CN114926593A
Authority
CN
China
Prior art keywords
highlight
image
svbrdf
map
modeling method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210662982.5A
Other languages
Chinese (zh)
Other versions
CN114926593B (en)
Inventor
王璐
刘克梅
徐延宁
王贝贝
孟祥旭
杨承磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University
Priority to CN202210662982.5A
Publication of CN114926593A
Application granted
Publication of CN114926593B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides an SVBRDF material modeling method and system based on a single highlight image, belonging to the technical field of three-dimensional rendering materials. The method includes: acquiring a highlight image of the material surface of an object; removing highlights from the image by means of dense feature fusion connections and multi-level highlight recognition to obtain a highlight-free image; and inputting the highlight image and the highlight-free image into a pre-trained generator network to obtain the spatially-varying bidirectional reflectance distribution function of the object surface and, from it, the corresponding material maps. The generator network includes a shared encoder and several decoders each connected to the shared encoder, the decoders respectively handling the diffuse map, normal map, roughness map, and specular map.

Description

SVBRDF material modeling method and system based on a single highlight image

Technical Field

The present disclosure belongs to the technical field of three-dimensional rendering materials, and in particular relates to an SVBRDF material modeling method and system based on a single highlight image.

Background

The statements in this section merely provide background information related to the present disclosure and do not necessarily constitute prior art.

Estimating the reflectance properties of materials from images has long been a broad research topic in computer graphics and vision. In the real world, a material's appearance depends on the viewing and lighting directions, which makes appearance estimation a challenging task. Surface reflectance is usually represented by the bidirectional reflectance distribution function (BRDF). Since most surfaces are not perfectly homogeneous, the BRDF is generally given as a function of surface position, called the spatially-varying bidirectional reflectance distribution function (SVBRDF). It comprises the diffuse albedo, specular albedo, surface normal, and roughness, corresponding to four material maps: the diffuse map, specular map, normal map, and roughness map.

Different reflectances can produce the same observed image, so reconstructing a high-quality spatially-varying bidirectional reflectance distribution function from input images captured under different lighting and viewing directions is very difficult. Traditionally, modeling the apparent material of an object requires professional instruments to measure how the surface of a specific material reflects light in a scene, or heavy manual intervention with professional tools to author the material maps. The inventors found that deep-learning-based methods have recently made it possible to reconstruct the various material maps from captured photographs, greatly simplifying the material modeling process. However, because many consumer cameras have a limited dynamic range, some image regions are polluted by specular highlights, which produce obvious speckle artifacts in the generated feature maps, so the resulting material maps fail to meet practical needs.

Summary of the Invention

To solve the above problems, the present disclosure provides an SVBRDF material modeling method and system based on a single highlight image. The solution automatically generates material maps that closely match the appearance of the real material from a single real-world photograph containing highlights.

According to a first aspect of the embodiments of the present disclosure, an SVBRDF material modeling method based on a single highlight image is provided, including:

acquiring a highlight image of the material surface of an object;

removing highlights from the highlight image by means of dense feature fusion connections and multi-level highlight recognition, to obtain a highlight-free image;

inputting the highlight image and the highlight-free image into a pre-trained generator network to obtain the spatially-varying bidirectional reflectance distribution function of the object surface and, from it, the corresponding material maps; the generator network includes a shared encoder and several decoders each connected to the shared encoder, the decoders respectively handling the diffuse map, normal map, roughness map, and specular map.

Further, the highlight removal based on dense feature fusion connections and multi-level highlight recognition specifically comprises: using an encoder-decoder pair with dense feature fusion connections and a multi-level highlight recognition strategy to remove highlights from the captured image, obtaining a highlight-free version of the captured image.

Further, the encoder consists of several sequentially connected blocks of a convolutional layer, an instance normalization layer, a rectified linear unit, and a max pooling layer; the decoder consists of several sequentially connected blocks of a deconvolutional layer, an instance normalization layer, and a rectified linear unit.

Further, via its dense feature fusion connections, the encoder concatenates the features extracted by each convolutional block with those extracted by the block above it, then performs an upsampling operation, and finally a convolution.

Further, the encoder of the generator network comprises eight convolutional layers for downsampling; each of the first seven is followed by an instance normalization layer and a rectified linear unit, while the last is followed by an instance normalization layer only.

Further, the generator network is trained jointly with a multi-discriminator network, specifically:

acquiring a public data set, and constructing a training data set by deduplicating, randomly cropping, and randomly selecting from it;

jointly training the generator network and the multi-discriminator network on the constructed training data set by minimizing a pre-built loss function, where the multi-discriminator network and the pre-built loss function judge the quality of the material maps produced by the generator network.

According to a second aspect of the embodiments of the present disclosure, an SVBRDF material modeling system based on a single highlight image is provided, including:

a data acquisition unit, for acquiring a highlight image of the material surface of an object;

a highlight removal unit, for removing highlights from the highlight image by means of dense feature fusion connections and multi-level highlight recognition to obtain a highlight-free image;

an object surface material estimation unit, for inputting the highlight image and the highlight-free image into a pre-trained generator network to obtain the spatially-varying bidirectional reflectance distribution function of the object surface and, from it, the corresponding material maps; the generator network includes a shared encoder and several decoders each connected to the shared encoder, the decoders respectively handling the diffuse map, normal map, roughness map, and specular map.

According to a third aspect of the embodiments of the present disclosure, an electronic device is provided, comprising a memory, a processor, and a computer program stored on the memory and runnable on it; when the processor executes the program, it implements the described SVBRDF material modeling method based on a single highlight image.

According to a fourth aspect of the embodiments of the present disclosure, a non-transitory computer-readable storage medium is provided, on which a computer program is stored; when executed by a processor, the program implements the described SVBRDF material modeling method based on a single highlight image.

Compared with the prior art, the beneficial effects of the present disclosure are:

(1) The solution provides an SVBRDF material modeling method based on a single highlight image. The proposed generator is an encoder-decoder network with a shared encoder and four decoders, which handle the diffuse map, normal map, roughness map, and specular map, respectively. The generator network produces the material maps, a rendering module renders them into a rendered image, and a multi-discriminator network judges generated maps against reference maps and rendered images against real images, yielding high-quality material appearance maps.

(2) The solution uses a highlight removal module based on dense feature fusion connections and multi-level highlight recognition, reducing the influence of overexposed regions on material estimation during SVBRDF estimation and removing the highlight artifacts present in existing SVBRDF estimation techniques.

(3) The solution proposes a loss function combining a map loss, a rendering loss, and an adversarial loss, and introduces a feature matching loss to stabilize training.

Advantages of additional aspects of the disclosure will be set forth in part in the description that follows; in part they will become apparent from the description, or will be learned by practice of the disclosure.

Brief Description of the Drawings

The accompanying drawings, which form a part of the present disclosure, provide further understanding of it; the exemplary embodiments and their descriptions explain the present disclosure and do not improperly limit it.

Fig. 1 is a diagram of the network structure used by the SVBRDF material modeling method based on a single highlight image described in an embodiment of the present disclosure;

Fig. 2 is a diagram of the network structure of the highlight removal module described in an embodiment of the present disclosure;

Fig. 3(a) is a diagram of the encoder structure of the highlight removal module described in an embodiment of the present disclosure;

Fig. 3(b) is a diagram of the decoder structure of the highlight removal module described in an embodiment of the present disclosure;

Fig. 4 is a schematic diagram of results of the highlight removal module described in an embodiment of the present disclosure;

Fig. 5(a) is a diagram of the encoder structure of the generator network described in an embodiment of the present disclosure;

Fig. 5(b) is a diagram of the decoder structure of the generator network described in an embodiment of the present disclosure;

Fig. 6 is a diagram of the discriminator network structure described in an embodiment of the present disclosure;

Figs. 7(a) and 7(b) are comparison diagrams of results of the SVBRDF material modeling method based on a single highlight image described in an embodiment of the present disclosure.

Detailed Description

The present disclosure is further described below with reference to the accompanying drawings and embodiments.

It should be noted that the following detailed description is exemplary and intended to provide further explanation of the present disclosure. Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.

Note that the terminology used here is only for describing specific embodiments and is not intended to limit the exemplary embodiments of the present disclosure. Unless the context clearly indicates otherwise, the singular is intended to include the plural; furthermore, the terms "comprising" and/or "including" indicate the presence of features, steps, operations, devices, components, and/or combinations thereof.

The embodiments of this disclosure, and the features within them, may be combined with each other when no conflict arises.

Embodiment 1:

The purpose of this embodiment is to provide an SVBRDF material modeling method based on a single highlight image.

An SVBRDF material modeling method based on a single highlight image, including:

acquiring a highlight image of the material surface of an object;

removing highlights from the highlight image by means of dense feature fusion connections and multi-level highlight recognition, to obtain a highlight-free image;

inputting the highlight image and the highlight-free image into a pre-trained generator network to obtain the spatially-varying bidirectional reflectance distribution function of the object surface and, from it, the corresponding material maps; the generator network includes a shared encoder and several decoders each connected to the shared encoder, the decoders respectively handling the diffuse map, normal map, roughness map, and specular map.

Further, the highlight removal based on dense feature fusion connections and multi-level highlight recognition specifically comprises: using an encoder-decoder pair with dense feature fusion connections and a multi-level highlight recognition strategy to remove highlights from the captured image, obtaining a highlight-free version of the captured image.

Further, the encoder consists of several sequentially connected blocks of a convolutional layer, an instance normalization layer, a rectified linear unit, and a max pooling layer; the decoder consists of several sequentially connected blocks of a deconvolutional layer, an instance normalization layer, and a rectified linear unit.

Further, via its dense feature fusion connections, the encoder concatenates the features extracted by each convolutional block with those extracted by the block above it, then performs an upsampling operation, and finally a convolution.

Further, the encoder of the generator network comprises eight convolutional layers for downsampling; each of the first seven is followed by an instance normalization layer and a rectified linear unit, while the last is followed by an instance normalization layer only.
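As a hedged PyTorch sketch of the layer sequence just described (kernel size, stride, and the channel schedule are assumptions not stated in the source; the embodiment below notes that PyTorch is the framework used):

```python
import torch
import torch.nn as nn

def generator_encoder(in_ch: int = 6, base: int = 64) -> nn.Sequential:
    """Eight downsampling conv layers; the first seven are each followed by
    instance normalization and a ReLU, the last by instance normalization
    only. Kernel size 4, stride 2, and the channel widths are assumptions."""
    layers = []
    ch = in_ch
    for i in range(8):
        out = min(base * 2 ** i, 512)
        layers.append(nn.Conv2d(ch, out, kernel_size=4, stride=2, padding=1))
        layers.append(nn.InstanceNorm2d(out))
        if i < 7:  # no ReLU after the eighth conv layer, per the text
            layers.append(nn.ReLU(inplace=True))
        ch = out
    return nn.Sequential(*layers)

# A 6-channel input (highlight image + highlight-free image stacked, an
# assumption) is spatially halved by each of the eight layers.
enc = generator_encoder()
feat = enc(torch.randn(1, 6, 512, 512))  # -> (1, 512, 2, 2)
```

Each stride-2 layer halves the resolution, so eight layers reduce a 512x512 input to a 2x2 bottleneck feature.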

Further, the generator network is trained jointly with a multi-discriminator network, specifically:

acquiring a public data set, and constructing a training data set by deduplicating, randomly cropping, and randomly selecting from it;

jointly training the generator network and the multi-discriminator network on the constructed training data set by minimizing a pre-built loss function, where the multi-discriminator network and the pre-built loss function judge the quality of the material maps produced by the generator network.

Further, the multi-discriminator network specifically includes a discriminator for the diffuse map, one for the normal map, one for the roughness map, one for the specular map, and one for the rendering result.

Specifically, for ease of understanding, the solution of this embodiment is described in detail below with reference to the accompanying drawings:

This embodiment provides an SVBRDF material modeling method based on a single highlight image. The solution has four main stages: training data set construction, joint training of the generator network and the multi-discriminator network, highlight removal, and object surface material estimation. The flow is shown in Fig. 1 and comprises the following steps:

Step 1: Deduplicate, randomly crop, and randomly select from a large-scale data set to construct the training data set. Jointly train the generator network and the multi-discriminator network on it by minimizing the loss function, using the multi-discriminator network to judge the material maps produced by the generator and computing the loss according to the formulas below. The loss function comprises a map loss, a rendering loss, and an adversarial loss, where the adversarial loss comprises a feature matching loss and a standard adversarial loss.

S101: Collect public data, using the large-scale data set proposed by Deschaintre et al. Deduplicate it: for training data generated from the same SVBRDF but with slightly different viewing or lighting directions, keep only one group, finally collecting 195,284 instances.

S102: Crop the images, randomly cropping every image in the training data set to 256x256.
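A minimal sketch of the random 256x256 crop-window selection in S102 (pure Python; the image library actually used in the pipeline is not specified in the source):

```python
import random

def random_crop_box(width: int, height: int, size: int = 256):
    """Pick a random size x size crop window inside a width x height image."""
    x = random.randint(0, width - size)
    y = random.randint(0, height - size)
    return (x, y, x + size, y + size)  # left, top, right, bottom

box = random_crop_box(288, 288)
```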

S103: Randomly select 18K instances as the training data set; the remaining instances form the test data set.

S104: Compute the adversarial loss.

The feature matching loss is computed using formula (1); it stabilizes training by minimizing the distance between high-dimensional features of the estimated values and the reference values.

[Formula (1): feature matching loss, reproduced in the source only as an image]

where G denotes the generator, I the input image, and M the material maps produced by the generator. D_i refers to any one of the discriminators D_maps = (D_d, D_n, D_r, D_s), which share the same structure; D_d, D_n, D_r, and D_s handle the diffuse, normal, roughness, and specular maps, respectively.
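Since formula (1) survives only as an image, here is a hedged PyTorch sketch of a typical feature matching loss over a discriminator's intermediate activations (the L1 distance and equal per-layer weighting are assumptions, not the patent's verified equation):

```python
import torch
import torch.nn.functional as F

def feature_matching_loss(feats_ref, feats_gen):
    """L1 distance between discriminator features of the reference maps
    (feats_ref) and of the generated maps (feats_gen), averaged over layers."""
    loss = torch.zeros(())
    for fr, fg in zip(feats_ref, feats_gen):
        loss = loss + F.l1_loss(fg, fr.detach())
    return loss / len(feats_ref)

# e.g. activations from two discriminator layers
ref = [torch.ones(1, 8, 4, 4), torch.ones(1, 16, 2, 2)]
gen = [torch.ones(1, 8, 4, 4), torch.ones(1, 16, 2, 2)]
zero = feature_matching_loss(ref, gen)  # identical features -> 0 loss
```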

The standard adversarial loss is computed using formula (2). By minimizing it, the generator tries to produce maps indistinguishable from the reference maps, while the discriminators try to effectively distinguish generated maps from reference maps.

[Formula (2): standard adversarial loss, reproduced in the source only as an image]

where D_render denotes the rendering discriminator.
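Formula (2) is reproduced in the source only as an image. For orientation, an adversarial objective of the kind this paragraph describes usually takes the following standard form; this is a hedged reconstruction with our own notation, and the patent's exact equation may differ in its conditioning and expectation terms:

```latex
\mathcal{L}_{\mathrm{GAN}}(G, D) =
  \mathbb{E}\big[\log D(I, \tilde{M})\big]
  + \mathbb{E}\big[\log\big(1 - D(I, G(I))\big)\big]
```

with \(\tilde{M}\) standing for the reference maps and \(G(I)\) for the maps produced by the generator from the input image \(I\).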

Using formula (3), the adversarial loss is obtained by combining the feature matching loss and the standard adversarial loss.

[Formula (3): adversarial loss, reproduced in the source only as an image]

S105: Compute the map loss using formula (4); it reflects the pixel-by-pixel difference between the generated maps and the reference maps.

[Formula (4): map loss, reproduced in the source only as an image]

where the reference term denotes the reference material maps corresponding to the input image I.
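Formula (4) is available only as an image; a hedged sketch of a pixel-wise map loss over the four generated maps follows (the L1 norm is an assumption, an L2 norm is equally plausible):

```python
import torch
import torch.nn.functional as F

def map_loss(gen_maps, ref_maps):
    """Average pixel-wise L1 difference between generated and reference maps
    (diffuse, normal, roughness, specular)."""
    losses = [F.l1_loss(g, r) for g, r in zip(gen_maps, ref_maps)]
    return sum(losses) / len(losses)

gen = [torch.zeros(1, 3, 8, 8) for _ in range(4)]
ref = [torch.ones(1, 3, 8, 8) for _ in range(4)]
loss = map_loss(gen, ref)  # every pixel differs by 1 -> loss 1.0
```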

S106: Render the diffuse, normal, roughness, and specular maps produced by the generator into a rendered image, and compute the rendering loss using formula (5); it reflects the pixel-by-pixel difference between the real image and the rendered image.

[Formula (5): rendering loss, reproduced in the source only as an image]

where R(·) denotes the rendering module, which uses the Cook-Torrance BRDF model with the GGX microfacet normal distribution function.
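The GGX microfacet normal distribution used by the rendering module has a standard closed form, sketched below (the mapping from the roughness map to the parameter alpha is a convention not specified in the source):

```python
import math

def ggx_ndf(n_dot_h: float, alpha: float) -> float:
    """GGX / Trowbridge-Reitz normal distribution:
    D(h) = alpha^2 / (pi * ((n.h)^2 (alpha^2 - 1) + 1)^2)."""
    a2 = alpha * alpha
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)

d = ggx_ndf(1.0, 1.0)  # at alpha = 1 and n.h = 1 this equals 1/pi
```

Lower alpha (smoother surfaces) concentrates the distribution around the normal direction, producing sharper highlights.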

S107: Compute the total loss as a weighted sum of the adversarial loss, the map loss, and the rendering loss:

[Formula (6): total loss, reproduced in the source only as an image]

where λ_map, λ_render, and λ_adv are weights balancing the loss terms; this embodiment uses λ_map = 10, λ_render = 5, and λ_adv = 1.
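The weighted combination in formula (6), with the embodiment's stated weights as defaults, can be sketched as:

```python
def total_loss(l_map: float, l_render: float, l_adv: float,
               lam_map: float = 10.0, lam_render: float = 5.0,
               lam_adv: float = 1.0) -> float:
    """Formula (6): weighted sum of map, rendering, and adversarial losses,
    using the weights given in this embodiment as defaults."""
    return lam_map * l_map + lam_render * l_render + lam_adv * l_adv

t = total_loss(1.0, 1.0, 1.0)  # 10 + 5 + 1 = 16.0
```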

S108: Using the training data set, jointly train the generator network and the multi-discriminator network according to formula (7).

[Formula (7): training objective, reproduced in the source only as an image]

where N is the number of samples in the training set and θ is the generator parameter set; the network structure is shown in Fig. 1.

In this embodiment, the PyTorch framework and the Adam optimizer are used, with an initial learning rate of 0.00002, for 1,000,000 training iterations. The GPU is an NVIDIA TITAN RTX with 12 GB of video memory.

Step 2: Take a captured image of the object's material surface and feed it to the highlight removal module for multi-level highlight recognition and removal, obtaining a highlight-free image. Then input the captured image and the highlight-free image into the generator network trained in Step 1 to estimate the surface material, producing its diffuse map, normal map, roughness map, and specular map.

S201: Acquire a captured image of the material surface of the object.

S202: Using the highlight removal module, perform multi-level highlight recognition and removal on the captured image to obtain a highlight-free image.

The network structure of the highlight removal module is shown in Fig. 2. An encoder-decoder pair with dense feature fusion connections and a multi-level highlight recognition strategy removes the highlights from the captured image to obtain its highlight-free counterpart; highlight recognition and removal results are shown in Fig. 4.

With the captured image I as input, the highlight feature F is extracted by the encoder and decoder of the highlight removal module.

From the highlight feature F, the highlight mask M is predicted; M indicates the positions of visible highlights.

From the highlight feature F and the highlight mask M, the highlight density S is predicted.

According to formula (8), the highlight-free image D is computed from M, S and I:

$$D = I - M \odot S \tag{8}$$

where $\odot$ denotes element-wise multiplication.
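The element-wise computation of formula (8) can be sketched on a toy single-channel image. The subtraction form D = I − M ⊙ S is an assumption consistent with the mask/density description above; the pixel values are purely illustrative.

```python
import numpy as np

# Toy 2x2 single-channel "image": I is the captured image,
# M the predicted highlight mask, S the predicted highlight density.
I = np.array([[0.8, 0.3],
              [0.5, 0.9]])
M = np.array([[1.0, 0.0],
              [0.0, 1.0]])  # highlights detected at two of the four pixels
S = np.array([[0.4, 0.7],
              [0.2, 0.5]])

D = I - M * S  # "*" is element-wise multiplication, matching the circled-dot in (8)
```

Pixels outside the mask (M = 0) keep their original intensity; masked pixels have the predicted highlight density subtracted away.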

Figure 3(a) shows the encoder structure of the highlight removal module. The encoder consists of five convolution blocks, each built from a convolutional layer, an instance normalization layer, a linear rectification unit and a max-pooling layer. The convolutional layer extracts image features; the instance normalization layer normalizes the convolutional output; the linear rectification unit maps the normalized output; and the max-pooling layer, with a window size of 2, reduces the shift in the estimated mean caused by convolutional-layer parameter errors while preserving more texture information.
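One such encoder block might look like this in PyTorch. The 3×3 kernel with padding 1 and the channel counts are illustrative assumptions (the text only specifies the layer types and the pooling window of 2):

```python
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """Conv -> InstanceNorm -> ReLU -> MaxPool(2), as in Figure 3(a)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),  # feature extraction
        nn.InstanceNorm2d(out_ch),                           # per-instance normalization
        nn.ReLU(inplace=True),                               # linear rectification unit
        nn.MaxPool2d(kernel_size=2),                         # window size 2, halves resolution
    )

x = torch.randn(1, 3, 256, 256)
y = conv_block(3, 32)(x)
# Each block halves the spatial resolution: (1, 32, 128, 128)
```

Stacking five of these blocks reduces a 256×256 input to an 8×8 feature map.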

Figure 3(b) shows the decoder structure of the highlight removal module. The decoder consists of five deconvolution blocks, each built from a deconvolution layer, an instance normalization layer and a linear rectification unit. The deconvolution layer expands the feature dimensions; the instance normalization layer normalizes the deconvolutional output; and the linear rectification unit maps the normalized output.
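The deconvolution (transposed-convolution) layers expand spatial resolution; their output size follows the standard transposed-convolution formula below. Kernel 4, stride 2 and padding 1 are assumed illustrative values for a resolution-doubling layer, not parameters stated in the text:

```python
def deconv_out_size(size: int, kernel: int = 4, stride: int = 2, padding: int = 1) -> int:
    """Output spatial size of a transposed convolution (no output padding)."""
    return (size - 1) * stride - 2 * padding + kernel

# Five such resolution-doubling blocks take an 8x8 feature map back up to 256x256,
# mirroring the five pooling steps of the encoder.
size = 8
for _ in range(5):
    size = deconv_out_size(size)
```

With these values each block exactly doubles the feature map, so the decoder undoes the encoder's five halvings.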

Dense feature fusion connections are used in the encoder of the highlight removal module. Following the dense feature fusion strategy, expressed by formulas (9) and (10), the features extracted by a convolution block are concatenated with the features extracted by the preceding convolution block, then upsampled, and finally convolved. Normally the number of channels grows with depth; to prevent high-level information from dominating the fused features, a convolution operation reduces the channel count to a fixed value so that every feature contributes the same number of channels to the fusion.

$$\tilde{F}_i = \mathrm{up}\big(\mathrm{cat}(F_i, \tilde{F}_{i-1})\big) \tag{9}$$

$$\hat{F}_i = \mathrm{conv}(\tilde{F}_i) \tag{10}$$

where $F_i$ denotes the features extracted by convolution block $i$, up denotes the upsampling operation, cat the concatenation operation, and conv the convolution operation.
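The fusion step described by formulas (9) and (10) can be sketched in PyTorch. The fixed channel count (16 here), the 1×1 reducing convolution and the equal spatial sizes of the two inputs are simplifying assumptions for illustration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

FIXED_CH = 16  # every fused feature contributes this many channels

def fuse(f_i: torch.Tensor, f_prev: torch.Tensor) -> torch.Tensor:
    """cat -> upsample -> conv, with the conv reducing channels to a fixed value."""
    reduce = nn.Conv2d(f_i.shape[1] + f_prev.shape[1], FIXED_CH, kernel_size=1)
    fused = torch.cat([f_i, f_prev], dim=1)       # formula (9): concatenation
    fused = F.interpolate(fused, scale_factor=2)  # formula (9): upsampling
    return reduce(fused)                          # formula (10): channel reduction

f_i = torch.randn(1, 64, 16, 16)     # features from convolution block i
f_prev = torch.randn(1, 32, 16, 16)  # features from the previous block
out = fuse(f_i, f_prev)
# out has the fixed channel count and doubled resolution: (1, 16, 32, 32)
```

The 1×1 convolution is what keeps the deeper, wider feature maps from dominating: regardless of input width, each fused result ends up with the same number of channels.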

S203: Using the trained generator network, process the captured image and the highlight-free image to generate the SVBRDF maps of the material in the input image.

Figure 5(a) shows the encoder structure of the generator network. The encoder has eight convolutional layers for downsampling; each of the first seven is followed by an instance normalization layer and a linear rectification unit, and the last is followed by an instance normalization layer only. Figure 5(b) shows the decoder structure. The four decoders share the same structure: each has eight deconvolution layers for upsampling, the first seven each followed by an instance normalization layer and a linear rectification unit, and the last followed by an instance normalization layer only. The highlight-free image produced by the highlight removal module is fed into the encoder, and the extracted diffuse features are passed to decoder De_d to produce the diffuse map. The captured image is fed into the encoder, and the extracted features are passed to decoders De_n, De_r and De_s to produce the normal map, roughness map and reflection map, respectively.
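The shared-encoder, four-decoder arrangement can be sketched as a minimal PyTorch module. Layer counts, kernel sizes and channel widths are reduced for brevity and are assumptions; only the topology (one encoder reused for both inputs, one decoder per map) follows the text:

```python
import torch
import torch.nn as nn

class GeneratorSketch(nn.Module):
    """Minimal sketch of the shared-encoder / four-decoder generator."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=4, stride=2, padding=1),
            nn.InstanceNorm2d(16),
            nn.ReLU(inplace=True),
        )
        self.decoders = nn.ModuleDict({
            name: nn.ConvTranspose2d(16, 3, kernel_size=4, stride=2, padding=1)
            for name in ("diffuse", "normal", "roughness", "reflection")
        })

    def forward(self, highlight_free, captured):
        # Diffuse map comes from the highlight-free image; the other three
        # maps come from encoder features of the captured image.
        maps = {"diffuse": self.decoders["diffuse"](self.encoder(highlight_free))}
        feat = self.encoder(captured)
        for name in ("normal", "roughness", "reflection"):
            maps[name] = self.decoders[name](feat)
        return maps

g = GeneratorSketch()
out = g(torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64))
```

Note that the encoder weights are shared: the captured image is encoded once and its features reused by three decoders, while the diffuse decoder sees features of the highlight-free image.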

The discriminators D_d, D_n, D_r, D_s and D_render share the same structure, shown in Figure 6. The discriminator network consists of a concatenation operation followed by three convolution blocks, each built from a convolutional layer, an instance normalization layer, a linear rectification unit and a max-pooling layer. The concatenation operation stacks the generated map with the real map; the convolutional layer extracts image features; the instance normalization layer normalizes the convolutional output; and the linear rectification unit maps the normalized output.
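A single discriminator of this shape can be sketched as follows. The channel widths (16, 32, 64) and 3×3 kernels are illustrative assumptions; the concatenation along the channel axis and the three conv blocks follow the description above:

```python
import torch
import torch.nn as nn

class DiscriminatorSketch(nn.Module):
    """Sketch of one discriminator: concatenate generated and real maps, then three conv blocks."""
    def __init__(self, ch: int = 3):
        super().__init__()
        blocks = []
        in_ch = 2 * ch  # generated map stacked with the real map along channels
        for out_ch in (16, 32, 64):
            blocks += [
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                nn.InstanceNorm2d(out_ch),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
            ]
            in_ch = out_ch
        self.body = nn.Sequential(*blocks)

    def forward(self, generated, real):
        return self.body(torch.cat([generated, real], dim=1))

d = DiscriminatorSketch()
score_map = d(torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64))
# Three stride-2 pools: 64 -> 8 spatially, with 64 output channels
```

Feeding the generated map and the real map as one stacked tensor lets the discriminator compare the pair directly rather than judging each image in isolation.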

Compared with the prior art, this embodiment markedly improves both the quality of the generated material maps and the re-rendering results on synthetic and real data. It reduces the re-rendering root-mean-square error (RMSE) to 0.079, the diffuse map error to 0.042, the normal map error to 0.062, the roughness map error to 0.102 and the reflection map error to 0.062, and it clearly removes the highlight artifacts present in the prior art.

The present disclosure proposes a method for single-image SVBRDF estimation. The method features a highlight removal module with multi-level highlight recognition and dense feature fusion connections. An encoder-decoder network with one shared encoder and four decoders serves as the generator network, which is trained against a set of discriminators that distinguish generated maps from real maps and rendered images from real images. Extensive experiments on synthetic and real image datasets show that, compared with the state of the art, the method generates high-quality SVBRDFs and produces more plausible rendering results for input images containing large highlight regions.

Embodiment 2:

The purpose of this embodiment is to provide an SVBRDF material modeling system based on a single highlight image.

An SVBRDF material modeling system based on a single highlight image, comprising:

a data acquisition unit for acquiring a highlight image of the object's material surface;

a highlight removal unit for removing highlights from the highlight image by means of dense feature fusion connections and multi-level highlight recognition to obtain a highlight-free image;

an object surface material estimation unit for inputting the highlight image and the highlight-free image into a pre-trained generator network to obtain the spatially-varying bidirectional reflectance distribution function of the object's surface and, from it, the corresponding material maps; the generator network comprises a shared encoder and several decoders connected to it, the decoders respectively handling the diffuse map, normal map, roughness map and reflection map.

It should be noted that each module in this embodiment corresponds one-to-one with the steps of Embodiment 1; the technical details of the system have already been described in detail in Embodiment 1 and are not repeated here.

Further embodiments also provide:

An electronic device comprising a memory, a processor, and computer instructions stored in the memory and run on the processor; when the computer instructions are executed by the processor, the method described in Embodiment 1 is performed. For brevity, details are not repeated here.

It should be understood that in this embodiment the processor may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. A general-purpose processor may be a microprocessor, or any conventional processor.

The memory may include read-only memory and random access memory, providing instructions and data to the processor; part of the memory may also include non-volatile random access memory. For example, the memory may also store device type information.

A computer-readable storage medium for storing computer instructions which, when executed by a processor, perform the method described in Embodiment 1.

The method of Embodiment 1 may be embodied directly as execution by a hardware processor, or executed by a combination of hardware and software modules within the processor. Software modules may reside in random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, registers or other storage media mature in the art. The storage medium is located in the memory; the processor reads the information in the memory and completes the steps of the above method in combination with its hardware. To avoid repetition, a detailed description is omitted here.

Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in this embodiment can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled practitioners may implement the described functionality in different ways for each particular application, but such implementations should not be considered beyond the scope of this disclosure.

The SVBRDF material modeling method and system based on a single highlight image provided by the above embodiments are practicable and have broad application prospects.

The above are only preferred embodiments of the present disclosure and are not intended to limit it; for those skilled in the art, the present disclosure may undergo various modifications and changes. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present disclosure shall fall within its scope of protection.

Claims (10)

1. A single highlight image-based SVBRDF material modeling method is characterized by comprising the following steps:
acquiring a highlight image of the surface of the object material;
highlight elimination is carried out on the highlight image in a mode based on dense feature fusion connection and highlight multilevel identification, and a highlight-free image is obtained;
inputting the highlight image and the highlight-free image into a generator network trained in advance to obtain a spatially-varying bidirectional reflectance distribution function of the object surface, and thereby the corresponding material maps; the generator network comprises a shared encoder and a plurality of decoders respectively connected with the shared encoder, wherein the decoders respectively correspond to the processing of the diffuse reflection map, the normal map, the roughness map and the reflection map.
2. The single highlight image-based SVBRDF material modeling method of claim 1, wherein said highlight elimination based on dense feature fusion connection and highlight multilevel identification specifically comprises: performing a highlight removal operation on the captured image using an encoder-decoder group with dense feature fusion connections and a multi-level highlight identification strategy, to obtain a highlight-free image of the captured image.
3. The single highlight image-based SVBRDF material modeling method of claim 2, wherein said encoder comprises a plurality of convolutional layers, instance normalization layers, linear rectification units and max-pooling layers connected in sequence; the decoder comprises a plurality of deconvolution layers, an example normalization layer and a linear rectification unit which are sequentially connected.
4. The single highlight image-based SVBRDF material modeling method of claim 2, wherein said encoder performs a cascade operation of the extracted features of the convolution block and the extracted features of the previous layer of convolution block based on dense feature fusion connection, then performs an upsampling operation, and finally performs a convolution operation.
5. The single highlight image-based SVBRDF material modeling method of claim 1, wherein the encoder of said generator network comprises eight convolutional layers for downsampling, each of the first seven convolutional layers being followed by an instance normalization layer and a linear rectification unit, and the last convolutional layer being followed by an instance normalization layer.
6. The single highlight image-based SVBRDF material modeling method of claim 1, wherein said generator network and multi-discriminator network are trained in coordination, specifically:
acquiring a public data set, and implementing construction of a training data set by carrying out duplicate removal, random cutting and random screening on the public data set;
and performing collaborative training on the generator network and the multi-discriminator network through a constructed training data set based on a pre-constructed minimum loss function, wherein quality judgment is performed on the texture map generated by the generator network through the multi-discriminator network and the pre-constructed loss function.
7. The single highlight image-based SVBRDF material modeling method of claim 1, wherein said multi-discriminator network specifically comprises a discriminator for processing diffuse reflection maps, a discriminator for processing normal maps, a discriminator for processing roughness maps, a discriminator for processing reflection maps and a discriminator for processing rendering results.
8. An SVBRDF material modeling system based on a single highlight image, characterized by comprising:
the data acquisition unit is used for acquiring a highlight image of the surface of the object material;
the highlight elimination unit is used for eliminating highlights of the highlight image in a mode based on dense feature fusion connection and highlight multilevel identification to obtain a highlight-free image;
the object surface material estimation unit is used for inputting the highlight image and the highlight-free image into a generator network trained in advance to obtain a spatially-varying bidirectional reflectance distribution function of the object surface so as to obtain the corresponding material maps; the generator network comprises a shared encoder and a plurality of decoders respectively connected with the shared encoder, wherein the decoders respectively correspond to the processing of the diffuse reflection map, the normal map, the roughness map and the reflection map.
9. An electronic device comprising a memory, a processor, and a computer program stored in the memory for execution, wherein the processor when executing the program implements the single highlight image-based SVBRDF material modeling method of any of claims 1-7.
10. A non-transitory computer readable storage medium having stored thereon a computer program, wherein the program when executed by a processor implements the single highlight image based SVBRDF material modeling method of any one of claims 1-7.
CN202210662982.5A 2022-06-13 2022-06-13 SVBRDF material modeling method and system based on single highlight image Active CN114926593B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210662982.5A CN114926593B (en) 2022-06-13 2022-06-13 SVBRDF material modeling method and system based on single highlight image


Publications (2)

Publication Number Publication Date
CN114926593A true CN114926593A (en) 2022-08-19
CN114926593B CN114926593B (en) 2024-07-19

Family

ID=82814300



Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118096982A (en) * 2024-04-24 2024-05-28 国网江西省电力有限公司超高压分公司 A method and system for constructing a fault inversion training platform

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190347526A1 (en) * 2018-05-09 2019-11-14 Adobe Inc. Extracting material properties from a single image
CN112419334A (en) * 2020-11-18 2021-02-26 山东大学 Micro surface material reconstruction method and system based on deep learning
CN112634156A (en) * 2020-12-22 2021-04-09 浙江大学 Method for estimating material reflection parameter based on portable equipment collected image
CN114549726A (en) * 2022-01-19 2022-05-27 广东时谛智能科技有限公司 High-quality material chartlet obtaining method based on deep learning
CN114549431A (en) * 2022-01-29 2022-05-27 北京师范大学 Method for estimating reflection attribute of object surface material from single image




Similar Documents

Publication Publication Date Title
CN113345082B (en) Characteristic pyramid multi-view three-dimensional reconstruction method and system
CN114041161A (en) Method and device for training neural network model for enhancing image details
CN111833430A (en) Illumination data prediction method, system, terminal and medium based on neural network
Yoo et al. Deep 3D-to-2D watermarking: Embedding messages in 3D meshes and extracting them from 2D renderings
CN106952222A (en) A kind of interactive image weakening method and device
JP2019160303A (en) Deep learning architectures for classification of objects captured with light-field camera
CN114022858B (en) A semantic segmentation method, system, electronic device and medium for autonomous driving
TW201022708A (en) Method of change detection for building models
CN107392234A (en) A kind of body surface material kind identification method based on individual 4D light field image
CN106165387A (en) Light field processing method
CN111192226A (en) Image fusion denoising method, device and system
CN114627223A (en) A free-view video synthesis method, device, electronic device and storage medium
Choi et al. Balanced spherical grid for egocentric view synthesis
CN117581232A (en) Accelerated training of NeRF-based machine learning models
CN112419334A (en) Micro surface material reconstruction method and system based on deep learning
CN110276831A (en) Method and device for constructing three-dimensional model, equipment and computer-readable storage medium
CN116863320A (en) Underwater image enhancement method and system based on physical model
CN109919832A (en) A traffic image stitching method for unmanned driving
CN109949354A (en) A light field depth information estimation method based on fully convolutional neural network
CN114926593A (en) SVBRDF material modeling method and system based on single highlight image
CN109087344A (en) Image-selecting method and device in three-dimensional reconstruction
CN110070608B (en) Method for automatically deleting three-dimensional reconstruction redundant points based on images
CN114820901A (en) Large-scene free viewpoint interpolation method based on neural network
CN114581577A (en) Object material micro-surface model reconstruction method and system
Han Texture image compression algorithm based on self‐organizing neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant