
CN114926593B - SVBRDF material modeling method and system based on a single highlight image - Google Patents

SVBRDF material modeling method and system based on a single highlight image

Info

Publication number: CN114926593B
Authority: CN (China)
Prior art keywords: highlight, map, image, discriminator, SVBRDF
Legal status: Active
Application number: CN202210662982.5A
Other languages: Chinese (zh)
Other versions: CN114926593A
Inventors: 王璐, 刘克梅, 徐延宁, 王贝贝, 孟祥旭, 杨承磊
Current Assignee: Shandong University
Original Assignee: Shandong University
Application filed by Shandong University
Priority application: CN202210662982.5A
Publications: CN114926593A (application), CN114926593B (grant)
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G06N3/08: Learning methods
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/005: General purpose rendering architectures
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides an SVBRDF material modeling method and system based on a single highlight image; the scheme belongs to the field of three-dimensional rendering materials. The scheme comprises: acquiring a highlight image of the surface of an object's material; removing highlights from the highlight image using dense feature fusion connections and multi-level highlight recognition to obtain a highlight-free image; and feeding the highlight image and the highlight-free image into a pre-trained generator network to obtain the spatially varying bidirectional reflectance distribution function of the object surface and, from it, the corresponding material maps. The generator network comprises a shared encoder and several decoders connected to it, the decoders handling the diffuse map, normal map, roughness map, and specular map respectively.

Description

SVBRDF material modeling method and system based on a single highlight image

Technical Field

The present disclosure belongs to the technical field of three-dimensional rendering materials, and in particular relates to an SVBRDF material modeling method and system based on a single highlight image.

Background

The statements in this section merely provide background information related to the present disclosure and do not necessarily constitute prior art.

In computer graphics and vision, estimating the reflectance properties of materials from images has long been a widely studied topic. In the real world, the appearance of a material depends on the viewing and lighting directions, which makes appearance estimation a challenging task. Surface reflectance is usually represented by a bidirectional reflectance distribution function (BRDF). Since most surfaces are not perfectly homogeneous, the BRDF is usually given as a function of surface position, called a spatially varying bidirectional reflectance distribution function (SVBRDF). It comprises diffuse albedo, specular albedo, surface normal, and roughness, corresponding to four material maps: the diffuse map, specular (reflection) map, normal map, and roughness map.
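The four-map parameterization just described can be pictured as a small data structure. The following sketch is illustrative only (the names `SVBRDF` and `query` are not from the patent); it simply shows that all four reflectance parameters are stored per surface position, which is what makes the BRDF "spatially varying":

```python
from dataclasses import dataclass

@dataclass
class SVBRDF:
    diffuse:   list  # H x W x 3 diffuse albedo
    normal:    list  # H x W x 3 surface normals
    roughness: list  # H x W x 1 roughness
    specular:  list  # H x W x 3 specular albedo

def query(svbrdf: SVBRDF, y: int, x: int):
    """Look up all four reflectance parameters at one surface position."""
    return (svbrdf.diffuse[y][x], svbrdf.normal[y][x],
            svbrdf.roughness[y][x], svbrdf.specular[y][x])
```

In practice each map is an image tensor; nested lists stand in for them here.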

Different reflectances can produce the same observed image, so reconstructing a high-quality spatially varying bidirectional reflectance distribution function from input images taken under different lighting and viewing directions is very difficult. Conventionally, modeling the apparent material of an object requires professional instruments to measure how the surface of a specific material reflects light in a scene, or extensive manual work with professional tools to author the material maps. The inventors found that recent deep-learning-based methods can reconstruct the various material maps from photographs of a material, greatly simplifying the modeling process. However, because many consumer-grade cameras have limited dynamic range, some image regions are saturated by specular highlights, which produces obvious blotchy artifacts in the generated feature maps, so the resulting material maps fail to meet practical needs.

Summary of the Invention

To solve the above problems, the present disclosure provides an SVBRDF material modeling method and system based on a single highlight image. Starting from a single real-world photograph containing highlights, the scheme automatically generates material maps close to the appearance of the real material.

According to a first aspect of the embodiments of the present disclosure, an SVBRDF material modeling method based on a single highlight image is provided, comprising:

acquiring a highlight image of the surface of an object's material;

removing highlights from the highlight image using dense feature fusion connections and multi-level highlight recognition, to obtain a highlight-free image; and

feeding the highlight image and the highlight-free image into a pre-trained generator network to obtain the spatially varying bidirectional reflectance distribution function of the object surface and, from it, the corresponding material maps, wherein the generator network comprises a shared encoder and several decoders connected to it, the decoders handling the diffuse map, normal map, roughness map, and specular map respectively.

Further, the highlight removal based on dense feature fusion connections and multi-level highlight recognition specifically comprises: using an encoder-decoder group with dense feature fusion connections and a multi-level highlight recognition strategy to remove highlights from the captured image, obtaining a highlight-free version of it.

Further, the encoder consists of several sequentially connected convolutional layers, instance normalization layers, rectified linear units (ReLU), and max-pooling layers; the decoder consists of several sequentially connected deconvolutional layers, instance normalization layers, and rectified linear units.

Further, through its dense feature fusion connections the encoder concatenates the features extracted by each convolutional block with those extracted by the previous block, then upsamples, and finally convolves.

Further, the encoder of the generator network comprises eight convolutional layers for downsampling; each of the first seven is followed by an instance normalization layer and a rectified linear unit, and the last is followed by an instance normalization layer.

Further, the generator network is trained jointly with a multi-discriminator network, specifically:

obtaining a public dataset and constructing the training dataset by deduplicating, randomly cropping, and randomly sampling it; and

jointly training the generator network and the multi-discriminator network on the constructed training dataset by minimizing a pre-constructed loss function, wherein the multi-discriminator network and the pre-constructed loss function judge the quality of the material maps produced by the generator network.

According to a second aspect of the embodiments of the present disclosure, an SVBRDF material modeling system based on a single highlight image is provided, comprising:

a data acquisition unit for acquiring a highlight image of the surface of an object's material;

a highlight removal unit for removing highlights from the highlight image using dense feature fusion connections and multi-level highlight recognition to obtain a highlight-free image; and

an object surface material estimation unit for feeding the highlight image and the highlight-free image into a pre-trained generator network to obtain the spatially varying bidirectional reflectance distribution function of the object surface and, from it, the corresponding material maps, wherein the generator network comprises a shared encoder and several decoders connected to it, the decoders handling the diffuse map, normal map, roughness map, and specular map respectively.

According to a third aspect of the embodiments of the present disclosure, an electronic device is provided, comprising a memory, a processor, and a computer program stored on the memory and runnable on it; when the processor executes the program, it implements the SVBRDF material modeling method based on a single highlight image.

According to a fourth aspect of the embodiments of the present disclosure, a non-transitory computer-readable storage medium is provided, on which a computer program is stored; when executed by a processor, the program implements the SVBRDF material modeling method based on a single highlight image.

Compared with the prior art, the beneficial effects of the present disclosure are:

(1) The scheme provides an SVBRDF material modeling method based on a single highlight image. The proposed generator is an encoder-decoder network with a shared encoder and four decoders that handle the diffuse map, normal map, roughness map, and specular map respectively. The generator network produces the material maps, a rendering module renders the generated maps into a rendered image, and a multi-discriminator network discriminates generated maps against reference maps and rendered images against real images, yielding high-quality material appearance maps.

(2) The scheme uses a highlight removal module based on dense feature fusion connections and multi-level highlight recognition to reduce the influence of overexposed regions on material estimation during SVBRDF estimation, removing the highlight artifacts present in existing SVBRDF estimation techniques.

(3) The scheme proposes a loss function based on map loss, rendering loss, and adversarial loss, and introduces a feature matching loss to stabilize training.

Advantages of additional aspects of the present disclosure will be set forth in part in the description below, and in part will become apparent from the description or be learned through practice of the present disclosure.

Brief Description of the Drawings

The accompanying drawings, which form a part of the present disclosure, provide further understanding of it; the illustrative embodiments and their descriptions explain the disclosure and do not unduly limit it.

FIG. 1 is a diagram of the network structure used by the SVBRDF material modeling method based on a single highlight image described in an embodiment of the present disclosure;

FIG. 2 is a diagram of the network structure of the highlight removal module described in an embodiment of the present disclosure;

FIG. 3(a) is a diagram of the encoder of the highlight removal module described in an embodiment of the present disclosure;

FIG. 3(b) is a diagram of the decoder of the highlight removal module described in an embodiment of the present disclosure;

FIG. 4 is a schematic diagram of results of the highlight removal module described in an embodiment of the present disclosure;

FIG. 5(a) is a diagram of the encoder of the generator network in an embodiment of the present disclosure;

FIG. 5(b) is a diagram of the decoder of the generator network in an embodiment of the present disclosure;

FIG. 6 is a diagram of the discriminator network structure described in an embodiment of the present disclosure;

FIG. 7(a) and FIG. 7(b) compare results of the SVBRDF material modeling method based on a single highlight image described in an embodiment of the present disclosure.

Detailed Description

The present disclosure is further described below in conjunction with the accompanying drawings and embodiments.

It should be noted that the following detailed descriptions are illustrative and intended to provide further explanation of the present disclosure. Unless otherwise specified, all technical and scientific terms used herein have the same meaning as commonly understood by those of ordinary skill in the art to which the present disclosure belongs.

It should also be noted that the terms used herein describe specific embodiments only and are not intended to limit the exemplary embodiments of the present disclosure. As used herein, unless the context clearly indicates otherwise, the singular forms are intended to include the plural forms as well; furthermore, the terms "comprising" and/or "including", when used in this specification, indicate the presence of features, steps, operations, devices, components, and/or combinations thereof.

In the absence of conflict, the embodiments of the present disclosure and the features of those embodiments may be combined with one another.

Embodiment 1:

The purpose of this embodiment is to provide an SVBRDF material modeling method based on a single highlight image.

An SVBRDF material modeling method based on a single highlight image comprises:

acquiring a highlight image of the surface of an object's material;

removing highlights from the highlight image using dense feature fusion connections and multi-level highlight recognition, to obtain a highlight-free image; and

feeding the highlight image and the highlight-free image into a pre-trained generator network to obtain the spatially varying bidirectional reflectance distribution function of the object surface and, from it, the corresponding material maps, wherein the generator network comprises a shared encoder and several decoders connected to it, the decoders handling the diffuse map, normal map, roughness map, and specular map respectively.

Further, the highlight removal based on dense feature fusion connections and multi-level highlight recognition specifically comprises: using an encoder-decoder group with dense feature fusion connections and a multi-level highlight recognition strategy to remove highlights from the captured image, obtaining a highlight-free version of it.

Further, the encoder consists of several sequentially connected convolutional layers, instance normalization layers, rectified linear units, and max-pooling layers; the decoder consists of several sequentially connected deconvolutional layers, instance normalization layers, and rectified linear units.

Further, through its dense feature fusion connections the encoder concatenates the features extracted by each convolutional block with those extracted by the previous block, then upsamples, and finally convolves.

Further, the encoder of the generator network comprises eight convolutional layers for downsampling; each of the first seven is followed by an instance normalization layer and a rectified linear unit, and the last is followed by an instance normalization layer.

Further, the generator network is trained jointly with a multi-discriminator network, specifically:

obtaining a public dataset and constructing the training dataset by deduplicating, randomly cropping, and randomly sampling it; and

jointly training the generator network and the multi-discriminator network on the constructed training dataset by minimizing a pre-constructed loss function, wherein the multi-discriminator network and the pre-constructed loss function judge the quality of the material maps produced by the generator network.

Further, the multi-discriminator network specifically includes a discriminator for the diffuse map, a discriminator for the normal map, a discriminator for the roughness map, a discriminator for the specular map, and a discriminator for the rendered result.

Specifically, for ease of understanding, the scheme of this embodiment is described in detail below with reference to the accompanying drawings.

This embodiment provides an SVBRDF material modeling method based on a single highlight image. The scheme comprises four main stages: training dataset construction; joint training of the generator network and the multi-discriminator network; highlight removal; and object surface material estimation. The flow is shown in FIG. 1 and comprises the following steps.

Step 1. Deduplicate, randomly crop, and randomly sample a large-scale dataset to construct the training dataset. Jointly train the generator network and the multi-discriminator network on the training dataset by minimizing the loss function; the multi-discriminator judges the material maps produced by the generator, and the loss is computed according to the formulas below. The loss function comprises a map loss, a rendering loss, and an adversarial loss, where the adversarial loss comprises a feature matching loss and a standard adversarial loss.

S101: Collect a public dataset, using the large-scale dataset proposed by Deschaintre et al. Deduplicate it: for training images generated from the same SVBRDF but with slightly different viewing or lighting directions, keep only one set of data, finally collecting 195,284 instances.

S102: Crop the images in the dataset, randomly cropping every image in the training dataset to 256x256.
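The random cropping in S102 can be sketched as follows. This is a minimal stdlib illustration (the patent gives no code); nested lists stand in for image tensors, and the image is assumed to be at least 256x256:

```python
import random

def random_crop(img, size=256):
    """Randomly crop an H x W image (nested lists) to size x size,
    as in step S102. Assumes H >= size and W >= size."""
    h, w = len(img), len(img[0])
    top = random.randint(0, h - size)    # random crop origin, inclusive
    left = random.randint(0, w - size)
    return [row[left:left + size] for row in img[top:top + size]]
```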

S103: Randomly select 18K instances from the dataset as the training set; the remaining instances form the test set.

S104: Compute the adversarial loss.

Compute the feature matching loss using formula (1). The feature matching loss stabilizes training by minimizing the distance between high-dimensional features of the estimated value and of the true value.

Here G denotes the generator, I the input image, and M the material maps produced by the generator. Di denotes any one of the discriminators in Dmaps = (Dd, Dn, Dr, Ds), which share the same structure; Dd, Dn, Dr, and Ds handle the diffuse map, normal map, roughness map, and specular map respectively.
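Formula (1) itself is not reproduced in this text. As an illustrative sketch (not the patent's exact definition), a common form of feature matching loss, as used in pix2pixHD-style GANs, averages the L1 distance between a discriminator's intermediate activations on the generated maps and on the reference maps; the function name and the flat-list feature format here are assumptions:

```python
def feature_matching_loss(feats_fake, feats_real):
    """Mean L1 distance between discriminator activations for generated
    and reference inputs, averaged over layers. Each argument is a list
    of per-layer activation lists (flattened for illustration)."""
    total = 0.0
    for ff, fr in zip(feats_fake, feats_real):
        diffs = [abs(a - b) for a, b in zip(ff, fr)]
        total += sum(diffs) / len(diffs)   # per-layer mean L1
    return total / len(feats_fake)         # average over layers
```

In the multi-discriminator setting described above, this term would be summed over each Di in Dmaps.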

Compute the standard adversarial loss using formula (2). By minimizing the standard adversarial loss, the generator tries to produce maps that are indistinguishable from the reference maps, while the discriminators try to tell generated maps apart from reference maps.

Here Drender denotes the rendering discriminator.

Combine the feature matching loss and the standard adversarial loss to compute the adversarial loss using formula (3).

S105: Compute the map loss using formula (4). The map loss reflects the per-pixel difference between the generated maps and the reference maps.

Here, the target term denotes the reference material maps corresponding to the input image I.

S106: Render the diffuse, normal, roughness, and specular maps produced by the generator into a rendered image, and compute the rendering loss using formula (5). The rendering loss reflects the per-pixel difference between the real image and the rendered image.

Here R(·) denotes the rendering module, which uses the Cook-Torrance BRDF model with the GGX microfacet normal distribution function.
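The GGX (Trowbridge-Reitz) normal distribution at the heart of the rendering module is standard and can be sketched directly. Note the patent does not spell out its roughness parameterization; in many conventions `alpha` is the perceptual roughness squared, which is an assumption here:

```python
import math

def ggx_ndf(n_dot_h, alpha):
    """GGX microfacet normal distribution used by Cook-Torrance:
    D(h) = alpha^2 / (pi * ((n.h)^2 * (alpha^2 - 1) + 1)^2)."""
    a2 = alpha * alpha
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)
```

For `alpha = 1` the distribution is uniform (1/pi for every half-vector), and as `alpha` shrinks the lobe concentrates around the surface normal, which is the behavior a roughness map controls.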

S107: Compute the total loss function as a weighted sum of the adversarial loss, map loss, and rendering loss:

Here λmap, λrender, and λadv are weights balancing the loss terms; in this embodiment λmap = 10, λrender = 5, and λadv = 1.

S108: Jointly train the generator network and the multi-discriminator network on the training dataset according to formula (7).

Here N is the number of samples in the training set and θ is the generator parameter set; the network structure is shown in FIG. 1.

In this embodiment, the PyTorch framework and the Adam optimizer are used with an initial learning rate of 0.00002, training for 1,000,000 iterations. The graphics processor is an NVIDIA TITAN RTX with 12 GB of video memory.

Step 2. Feed a captured image of the object's material surface into the highlight removal module for multi-level highlight recognition and removal to obtain a highlight-free image; then feed the captured image and the highlight-free image into the generator network trained in Step 1 to estimate the surface material, producing the material's diffuse, normal, roughness, and specular maps.

S201: Acquire a captured image of the surface of the object's material.

S202: Using the highlight removal module, perform multi-level highlight recognition and removal on the captured image to obtain a highlight-free image.

The network structure of the highlight removal module is shown in FIG. 2. An encoder-decoder group with dense feature fusion connections applies a multi-level highlight recognition strategy to remove highlights from the captured image and obtain its highlight-free version; highlight recognition and removal results are shown in FIG. 4.

With the captured image I as input, the highlight removal module's encoder and decoder extract the highlight feature F.

From the highlight feature F, predict the highlight mask M, which marks the visible highlight positions.

From the highlight feature F and the highlight mask M, predict the highlight density S.

According to formula (8), compute the highlight-free image D from M, S, and I:

Here, the operator denotes element-wise multiplication.
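Formula (8) is not reproduced in this text. One plausible form consistent with the surrounding description (a predicted highlight density S removed from the input inside the predicted mask M, combined by element-wise multiplication) is `D = I - M * S`; this is an assumption, not the patent's verbatim formula:

```python
def remove_highlights(I, M, S):
    """Assumed per-pixel form of formula (8): subtract the predicted
    highlight density S inside the predicted mask M from the input I,
    where '*' is the element-wise product the text describes."""
    return [[i - m * s for i, m, s in zip(ri, rm, rs)]
            for ri, rm, rs in zip(I, M, S)]
```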

FIG. 3(a) shows the encoder of the highlight removal module. It consists of five convolutional blocks, each made of a convolutional layer, an instance normalization layer, a rectified linear unit, and a max-pooling layer. The convolutional layer extracts image features; the instance normalization layer normalizes the convolutional layer's output; the rectified linear unit maps the normalized output; and the max-pooling layer, with window size 2, reduces the shift in the estimated mean caused by convolutional parameter error while retaining more texture information.
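Assuming the convolutions preserve spatial size (an assumption; the patent does not state padding), each of the five blocks halves the feature map through its size-2 max-pool, so a 256x256 input shrinks to 8x8 at the bottleneck. A small helper makes this concrete:

```python
def encoder_feature_sizes(input_size=256, blocks=5, pool=2):
    """Spatial size after each of the five conv blocks of the highlight
    removal encoder, each ending in a max-pool with window size 2.
    Assumes padded convolutions that keep spatial size unchanged."""
    sizes, s = [], input_size
    for _ in range(blocks):
        s //= pool          # each block's max-pool halves the map
        sizes.append(s)
    return sizes
```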

图3(b)展示了高光消除模块的解码器结构。高光消除模块的解码器由五个由反卷积层、实例归一化层、线性整流单元构成的反卷积块组成。反卷积层用于扩张特征向量维度，实例归一化层用于接收反卷积层的输出并进行标准化处理，线性整流单元用于映射实例归一化层的输出。Figure 3(b) shows the decoder structure of the highlight removal module. The decoder consists of five deconvolution blocks, each composed of a deconvolution layer, an instance normalization layer, and a linear rectification unit. The deconvolution layer expands the feature-map dimensions, the instance normalization layer normalizes the deconvolution layer's output, and the linear rectification unit maps the normalized output.

在高光消除模块的编码器中使用密集特征融合连接，利用密集特征融合策略，用公式(9)、公式(10)表示，将卷积块提取到的特征与其上一层卷积块提取的特征进行级联操作，再进行上采样操作，最后进行卷积操作。通常情况下，通道的数量会随着层数的增加而增加，为了避免高层信息在融合的特征中占主导地位，引入一个卷积操作将通道数减少至固定值，使每个特征将在融合后的特征中贡献相同数量的通道。In the encoder of the highlight removal module, dense feature fusion connections are used. Following the dense feature fusion strategy, expressed by formulas (9) and (10), the features extracted by each convolution block are concatenated with the features extracted by the previous convolution block, then upsampled, and finally convolved. Since the number of channels normally grows with the number of layers, a convolution operation reduces the channel count to a fixed value; this prevents high-level information from dominating the fused features and makes every feature contribute the same number of channels.

其中，该符号表示卷积块i抽取的特征，up表示上采样操作，cat表示级联操作，conv表示卷积操作。where the symbol denotes the features extracted by convolution block i, up denotes the upsampling operation, cat the concatenation operation, and conv the convolution operation.
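Since formulas (9) and (10) are not reproduced here, the stated order of operations — cat, then up, then a channel-reducing conv — can be illustrated at the level of shapes. The ×2 upsampling factor and the fixed channel count of 64 are assumptions, not values from the source:

```python
def fuse_features(f_cur, f_prev, fixed_channels=64):
    """One dense-fusion step, tracked as (channels, spatial_size) pairs.

    Order follows the text: concatenate the current block's features with the
    previous block's, upsample (x2 assumed), then convolve down to a fixed
    channel count so each block contributes equally to the fused result.
    """
    channels_after_cat = f_cur[0] + f_prev[0]  # cat: channel dims add up
    size_up = f_cur[1] * 2                     # up: spatial size doubles (assumed x2)
    # conv: channels_after_cat channels are reduced to the fixed value
    return (fixed_channels, size_up)

# e.g. block 3 (256 ch, 32x32) fused with block 2 (128 ch, 64x64):
print(fuse_features((256, 32), (128, 64)))  # (64, 64)
```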

S203：使用训练后的生成器网络，对所述光照图像和无高光图像进行处理，生成输入图片对应材质的SVBRDF贴图。S203: Use the trained generator network to process the illuminated image and the highlight-free image, generating the SVBRDF maps of the material in the input image.

图5(a)展示了生成器网络的编码器结构。生成器网络的编码器有8个卷积层用于下采样，前7个卷积层后跟随一个实例归一化层和线性整流单元，最后一个卷积层跟随一个实例归一化层。图5(b)展示了生成器网络的解码器结构。生成器网络的四个解码器结构相同，每个解码器网络有8个反卷积层用于上采样，前7个反卷积层后跟随一个实例归一化层和线性整流单元，最后一个反卷积层跟随一个实例归一化层。将高光消除模块生成的无高光图像送入编码器提取特征，将提取的漫反射特征送入解码器De_d用来产生漫反射图。将拍摄图像送入编码器提取特征，将提取的特征输入到解码器De_n、De_r和De_s中，分别产生法线贴图、粗糙度贴图和反射贴图。Figure 5(a) shows the encoder structure of the generator network. The encoder has 8 convolutional layers for downsampling; each of the first 7 convolutional layers is followed by an instance normalization layer and a linear rectification unit, and the last convolutional layer is followed by an instance normalization layer. Figure 5(b) shows the decoder structure of the generator network. The four decoders share the same structure; each has 8 deconvolutional layers for upsampling, with each of the first 7 deconvolutional layers followed by an instance normalization layer and a linear rectification unit, and the last deconvolutional layer followed by an instance normalization layer. The highlight-free image produced by the highlight removal module is fed into the encoder, and the extracted diffuse features are fed into decoder De_d to produce the diffuse map. The captured image is fed into the encoder, and the extracted features are fed into decoders De_n, De_r, and De_s to produce the normal map, roughness map, and reflection map, respectively.
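The wiring just described — one shared encoder feeding four decoders, with the diffuse branch reading the highlight-free image and the other branches reading the captured image — can be sketched as follows. The `encoder`/`decoders` callables and the map names are illustrative placeholders, not identifiers from the source:

```python
def generator_forward(captured_image, highlight_free_image, encoder, decoders):
    """Generator forward pass: one shared encoder, four decoders.

    `encoder` and the `decoders` dict (keys 'diffuse', 'normal', 'roughness',
    'specular') stand in for trained networks; only the wiring is from the text.
    """
    diffuse_features = encoder(highlight_free_image)  # fed to De_d
    features = encoder(captured_image)                # fed to De_n / De_r / De_s
    return {
        "diffuse": decoders["diffuse"](diffuse_features),
        "normal": decoders["normal"](features),
        "roughness": decoders["roughness"](features),
        "specular": decoders["specular"](features),
    }

# Identity stand-ins make the wiring visible:
maps = generator_forward(
    "I", "D",
    encoder=lambda x: "feat(" + x + ")",
    decoders={k: (lambda f: f) for k in ("diffuse", "normal", "roughness", "specular")},
)
print(maps["diffuse"])  # feat(D)
```

Only the diffuse branch sees the highlight-free image; the normal, roughness, and specular branches all share the features of the captured image.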

辨别器D_d、D_n、D_r、D_s、D_render结构相同。图6展示了辨别器网络的结构。辨别器网络由级联操作和三个由卷积层、实例归一化层、线性整流单元、最大池化层构成的卷积块组成。级联操作将生成贴图与真实贴图拼接，卷积层用于提取图像特征，实例归一化层用于接收卷积层的输出并进行标准化处理，线性整流单元用于映射实例归一化层的输出。The discriminators D_d, D_n, D_r, D_s, and D_render share the same structure. Figure 6 shows the structure of the discriminator network. The discriminator network consists of a cascade operation and three convolutional blocks, each composed of a convolutional layer, an instance normalization layer, a linear rectification unit, and a max pooling layer. The cascade operation concatenates the generated map with the real map; the convolutional layer extracts image features, the instance normalization layer normalizes the convolutional layer's output, and the linear rectification unit maps the normalized output.
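Assuming 3-channel maps, a 256×256 input, and window-2 pooling in each block (none of these values are given in the text), the shape flow through the discriminator can be traced as:

```python
def discriminator_shapes(map_channels=3, size=256, blocks=3, pool=2):
    """Shape flow through the discriminator sketch.

    The cascade concatenates the generated map with the real map, so the
    first conv block sees twice the map's channel count; each of the three
    conv blocks then halves the spatial size via its max pooling layer.
    map_channels=3 and size=256 are assumptions, not values from the source.
    """
    channels = 2 * map_channels          # cascade: generated map ++ real map
    sizes = [size]
    for _ in range(blocks):
        sizes.append(sizes[-1] // pool)  # each block's max pool halves the size
    return channels, sizes

print(discriminator_shapes())  # (6, [256, 128, 64, 32])
```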

本实施例与现有技术相比，在合成数据和真实数据上，生成的材质贴图质量和重渲染结果均有明显提升。本实施例将重渲染均方根误差RMSE降低到0.079，漫反射贴图误差降至0.042，法线贴图误差降至0.062，粗糙度贴图误差降至0.102，反射贴图误差降至0.062，并且明显去除现有技术存在的高光伪影问题。Compared with the prior art, this embodiment clearly improves both the quality of the generated material maps and the re-rendering results on synthetic and real data. It reduces the re-rendering root mean square error (RMSE) to 0.079, the diffuse map error to 0.042, the normal map error to 0.062, the roughness map error to 0.102, and the reflection map error to 0.062, and visibly removes the highlight artifacts present in the prior art.

本公开提出了一个用于单张图像SVBRDF估计的方法。方法提出了一个具有多级高光识别和密集特征融合连接的高光去除模块。使用一个具有共享编码器和四个解码器的编码器-解码器网络作为生成器网络，并且使用一组辨别器来训练生成器网络，以区分生成贴图与真实贴图，渲染图像与真实图像。在合成和真实图像数据集上的广泛实验表明，与现有技术相比，能够生成高质量的SVBRDF，对含有大量高光区域的输入图像能够产生更真实可信的渲染结果。The present disclosure proposes a method for single-image SVBRDF estimation. It introduces a highlight removal module with multi-level highlight recognition and dense feature fusion connections. An encoder-decoder network with one shared encoder and four decoders serves as the generator, trained with a set of discriminators that distinguish generated maps from real maps and rendered images from real images. Extensive experiments on synthetic and real image datasets show that, compared with the prior art, the method generates high-quality SVBRDFs and produces more plausible rendering results for input images containing large highlight regions.

实施例二:Embodiment 2:

本实施例的目的是提供一种基于单张高光图像的SVBRDF材质建模系统。The purpose of this embodiment is to provide a SVBRDF material modeling system based on a single highlight image.

一种基于单张高光图像的SVBRDF材质建模系统,包括:A SVBRDF material modeling system based on a single highlight image, comprising:

数据获取单元,其用于获取物体材质表面的高光图像;A data acquisition unit, which is used to acquire a highlight image of the surface of an object material;

高光消除单元,其用于将所述高光图像,通过基于密集特征融合连接和高光多级识别的方式进行高光消除,获得无高光图像;A highlight removal unit, which is used to remove the highlights of the highlight image by means of dense feature fusion connection and highlight multi-level recognition to obtain a highlight-free image;

物体表面材质估计单元,其用于将所述高光图像和无高光图像输入预先训练的生成器网络中,获得物体表面的空间变化双向反射率分布函数,进而获得对应的材质贴图;其中,所述生成器网络包括共享编码器和分别与所述共享编码器连接的若干个解码器,所述解码器分别对应于漫反射贴图、法线贴图、粗糙度贴图及反射贴图的处理。An object surface material estimation unit is used to input the highlight image and the non-highlight image into a pre-trained generator network to obtain a spatially varying bidirectional reflectance distribution function of the object surface, and then obtain a corresponding material map; wherein the generator network includes a shared encoder and a plurality of decoders respectively connected to the shared encoder, and the decoders respectively correspond to the processing of diffuse maps, normal maps, roughness maps and reflection maps.

此处需要说明的是，本实施例中的各个模块与实施例一中的各个步骤一一对应，本实施例所述系统的技术细节在实施例一中已经进行了详细描述，故此处不再赘述。It should be noted that the modules in this embodiment correspond one-to-one to the steps in Embodiment 1; the technical details of the system described in this embodiment have been given in detail in Embodiment 1 and are not repeated here.

在更多实施例中,还提供:In further embodiments, there is also provided:

一种电子设备,包括存储器和处理器以及存储在存储器上并在处理器上运行的计算机指令,所述计算机指令被处理器运行时,完成实施例一中所述的方法。为了简洁,在此不再赘述。An electronic device includes a memory and a processor, and computer instructions stored in the memory and executed on the processor, wherein when the computer instructions are executed by the processor, the method described in Embodiment 1 is performed. For the sake of brevity, it will not be described in detail here.

应理解,本实施例中,处理器可以是中央处理单元CPU,处理器还可以是其他通用处理器、数字信号处理器DSP、专用集成电路ASIC,现成可编程门阵列FPGA或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。It should be understood that in this embodiment, the processor may be a central processing unit CPU, and the processor may also be other general-purpose processors, digital signal processors DSP, application-specific integrated circuits ASIC, off-the-shelf programmable gate arrays FPGA or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc. The general-purpose processor may be a microprocessor or the processor may also be any conventional processor, etc.

存储器可以包括只读存储器和随机存取存储器,并向处理器提供指令和数据、存储器的一部分还可以包括非易失性随机存储器。例如,存储器还可以存储设备类型的信息。The memory may include a read-only memory and a random access memory, and provide instructions and data to the processor. A portion of the memory may also include a non-volatile random access memory. For example, the memory may also store information about the device type.

一种计算机可读存储介质,用于存储计算机指令,所述计算机指令被处理器执行时,完成实施例一中所述的方法。A computer-readable storage medium is used to store computer instructions. When the computer instructions are executed by a processor, the method described in embodiment 1 is completed.

实施例一中的方法可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器、闪存、只读存储器、可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器,处理器读取存储器中的信息,结合其硬件完成上述方法的步骤。为避免重复,这里不再详细描述。The method in the first embodiment can be directly embodied as a hardware processor, or a combination of hardware and software modules in the processor. The software module can be located in a mature storage medium in the field such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, or an electrically erasable programmable memory, a register, etc. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware. To avoid repetition, it will not be described in detail here.

本领域普通技术人员可以意识到,结合本实施例描述的各示例的单元即算法步骤,能够以电子硬件或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本公开的范围。Those skilled in the art will appreciate that the units, i.e., algorithm steps, of the various examples described in the present embodiment can be implemented in electronic hardware or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Professional and technical personnel can use different methods to implement the described functions for each specific application, but such implementation should not be considered to be beyond the scope of this disclosure.

上述实施例提供的基于单张高光图像的SVBRDF材质建模方法及系统可以实现，具有广阔的应用前景。The SVBRDF material modeling method and system based on a single highlight image provided in the above embodiments are practical to implement and have broad application prospects.

以上所述仅为本公开的优选实施例而已,并不用于限制本公开,对于本领域的技术人员来说,本公开可以有各种更改和变化。凡在本公开的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本公开的保护范围之内。The above description is only a preferred embodiment of the present disclosure and is not intended to limit the present disclosure. For those skilled in the art, the present disclosure may have various modifications and variations. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present disclosure shall be included in the protection scope of the present disclosure.

Claims (8)

1.一种基于单张高光图像的SVBRDF材质建模方法，其特征在于，包括：1. An SVBRDF material modeling method based on a single highlight image, characterized by comprising:

获取物体材质表面的高光图像；acquiring a highlight image of the surface of an object material;

将所述高光图像，通过基于密集特征融合连接和高光多级识别的方式进行高光消除，获得无高光图像；eliminating the highlights of the highlight image by means of dense feature fusion connection and multi-level highlight recognition to obtain a highlight-free image;

将所述高光图像和无高光图像输入预先训练的生成器网络中，获得物体表面的空间变化双向反射率分布函数，进而获得对应的材质贴图；其中，所述生成器网络包括共享编码器和分别与所述共享编码器连接的若干个解码器，所述解码器分别对应于漫反射贴图、法线贴图、粗糙度贴图及反射贴图的处理；inputting the highlight image and the highlight-free image into a pre-trained generator network to obtain the spatially varying bidirectional reflectance distribution function of the object surface, and then obtaining the corresponding material maps; wherein the generator network comprises a shared encoder and a plurality of decoders respectively connected to the shared encoder, the decoders respectively corresponding to the processing of a diffuse map, a normal map, a roughness map, and a reflection map;

所述生成器网络与多辨别器网络进行协同训练，具体为：the generator network is trained collaboratively with a multi-discriminator network, specifically:

获取公共数据集，通过对所述公共数据集进行去重、随机裁剪及随机筛选，实现训练数据集的构建；obtaining a public data set, and constructing a training data set by de-duplicating, randomly cropping, and randomly screening the public data set;

基于预先构建的最小化损失函数，通过构建的训练数据集对所述生成器网络与多辨别器网络进行协同训练，其中，通过所述多辨别器网络及预先构建的损失函数，对所述生成器网络生成的材质贴图进行质量判别；based on a pre-constructed minimization loss function, collaboratively training the generator network and the multi-discriminator network on the constructed training data set, wherein the quality of the material maps generated by the generator network is judged through the multi-discriminator network and the pre-constructed loss function;

所述多辨别器网络具体包括处理漫反射贴图的辨别器、处理法线贴图的辨别器、处理粗糙度贴图的辨别器、处理反射贴图的辨别器和处理渲染结果的辨别器。the multi-discriminator network specifically comprises a discriminator for processing the diffuse map, a discriminator for processing the normal map, a discriminator for processing the roughness map, a discriminator for processing the reflection map, and a discriminator for processing the rendering result.

2.如权利要求1所述的一种基于单张高光图像的SVBRDF材质建模方法，其特征在于，所述通过基于密集特征融合连接和高光多级识别的方式进行高光消除，具体为：利用带有密集特征融合连接的编码器-解码器组，采用多级高光识别策略，对拍摄图像进行高光去除操作，获取拍摄图像的无高光图像。2. The SVBRDF material modeling method based on a single highlight image according to claim 1, wherein the highlight elimination based on dense feature fusion connection and multi-level highlight recognition specifically comprises: using an encoder-decoder group with dense feature fusion connections and a multi-level highlight recognition strategy to perform highlight removal on the captured image and obtain a highlight-free image of the captured image.

3.如权利要求2所述的一种基于单张高光图像的SVBRDF材质建模方法，其特征在于，所述编码器包括顺序连接的若干个卷积层、实例归一化层、线性整流单元及最大池化层构成；所述解码器包括顺序连接的若干个反卷积层、实例归一化层及线性整流单元构成。3. The SVBRDF material modeling method based on a single highlight image according to claim 2, wherein the encoder comprises a plurality of sequentially connected convolutional layers, instance normalization layers, linear rectification units, and max pooling layers; the decoder comprises a plurality of sequentially connected deconvolutional layers, instance normalization layers, and linear rectification units.

4.如权利要求2所述的一种基于单张高光图像的SVBRDF材质建模方法，其特征在于，所述编码器基于密集特征融合连接，将卷积块提取到的特征与其上一层卷积块提取的特征进行级联操作，再进行上采样操作，最后进行卷积操作。4. The SVBRDF material modeling method based on a single highlight image according to claim 2, wherein the encoder, based on the dense feature fusion connection, concatenates the features extracted by a convolution block with the features extracted by the previous convolution block, then performs an upsampling operation, and finally a convolution operation.

5.如权利要求1所述的一种基于单张高光图像的SVBRDF材质建模方法，其特征在于，所述生成器网络的编码器包括八个卷积层用于下采样，前七个卷积层后连接一个实例化归一层和线性整流单元，最后一个卷积层连接一个实例化归一层。5. The SVBRDF material modeling method based on a single highlight image according to claim 1, wherein the encoder of the generator network comprises eight convolutional layers for downsampling; each of the first seven convolutional layers is followed by an instance normalization layer and a linear rectification unit, and the last convolutional layer is followed by an instance normalization layer.

6.一种基于单张高光图像的SVBRDF材质建模系统，其特征在于，包括：6. An SVBRDF material modeling system based on a single highlight image, characterized by comprising:

数据获取单元，其用于获取物体材质表面的高光图像；a data acquisition unit, configured to acquire a highlight image of the surface of an object material;

高光消除单元，其用于将所述高光图像，通过基于密集特征融合连接和高光多级识别的方式进行高光消除，获得无高光图像；a highlight removal unit, configured to eliminate the highlights of the highlight image by means of dense feature fusion connection and multi-level highlight recognition to obtain a highlight-free image;

物体表面材质估计单元，其用于将所述高光图像和无高光图像输入预先训练的生成器网络中，获得物体表面的空间变化双向反射率分布函数，进而获得对应的材质贴图；其中，所述生成器网络包括共享编码器和分别与所述共享编码器连接的若干个解码器，所述解码器分别对应于漫反射贴图、法线贴图、粗糙度贴图及反射贴图的处理；an object surface material estimation unit, configured to input the highlight image and the highlight-free image into a pre-trained generator network to obtain the spatially varying bidirectional reflectance distribution function of the object surface, and then obtain the corresponding material maps; wherein the generator network comprises a shared encoder and a plurality of decoders respectively connected to the shared encoder, the decoders respectively corresponding to the processing of a diffuse map, a normal map, a roughness map, and a reflection map;

所述生成器网络与多辨别器网络进行协同训练，具体为：the generator network is trained collaboratively with a multi-discriminator network, specifically:

获取公共数据集，通过对所述公共数据集进行去重、随机裁剪及随机筛选，实现训练数据集的构建；obtaining a public data set, and constructing a training data set by de-duplicating, randomly cropping, and randomly screening the public data set;

基于预先构建的最小化损失函数，通过构建的训练数据集对所述生成器网络与多辨别器网络进行协同训练，其中，通过所述多辨别器网络及预先构建的损失函数，对所述生成器网络生成的材质贴图进行质量判别；based on a pre-constructed minimization loss function, collaboratively training the generator network and the multi-discriminator network on the constructed training data set, wherein the quality of the material maps generated by the generator network is judged through the multi-discriminator network and the pre-constructed loss function;

所述多辨别器网络具体包括处理漫反射贴图的辨别器、处理法线贴图的辨别器、处理粗糙度贴图的辨别器、处理反射贴图的辨别器和处理渲染结果的辨别器。the multi-discriminator network specifically comprises a discriminator for processing the diffuse map, a discriminator for processing the normal map, a discriminator for processing the roughness map, a discriminator for processing the reflection map, and a discriminator for processing the rendering result.

7.一种电子设备，包括存储器、处理器及存储在存储器上运行的计算机程序，其特征在于，所述处理器执行所述程序时实现如权利要求1-5任一项所述的基于单张高光图像的SVBRDF材质建模方法。7. An electronic device, comprising a memory, a processor, and a computer program stored in and running on the memory, wherein when the processor executes the program, the SVBRDF material modeling method based on a single highlight image according to any one of claims 1-5 is implemented.

8.一种非暂态计算机可读存储介质，其上存储有计算机程序，其特征在于，该程序被处理器执行时实现如权利要求1-5任一项所述的基于单张高光图像的SVBRDF材质建模方法。8. A non-transitory computer-readable storage medium having a computer program stored thereon, wherein when the program is executed by a processor, the SVBRDF material modeling method based on a single highlight image according to any one of claims 1-5 is implemented.
CN202210662982.5A 2022-06-13 2022-06-13 SVBRDF material modeling method and system based on Shan Zhanggao light images Active CN114926593B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210662982.5A CN114926593B (en) 2022-06-13 2022-06-13 SVBRDF material modeling method and system based on Shan Zhanggao light images


Publications (2)

Publication Number Publication Date
CN114926593A CN114926593A (en) 2022-08-19
CN114926593B true CN114926593B (en) 2024-07-19

Family

ID=82814300

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210662982.5A Active CN114926593B (en) 2022-06-13 2022-06-13 SVBRDF material modeling method and system based on Shan Zhanggao light images

Country Status (1)

Country Link
CN (1) CN114926593B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118096982B (en) * 2024-04-24 2024-08-09 国网江西省电力有限公司超高压分公司 Construction method and system of fault inversion training platform

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112419334A (en) * 2020-11-18 2021-02-26 山东大学 Micro surface material reconstruction method and system based on deep learning
CN112634156A (en) * 2020-12-22 2021-04-09 浙江大学 Method for estimating material reflection parameter based on portable equipment collected image

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10810469B2 (en) * 2018-05-09 2020-10-20 Adobe Inc. Extracting material properties from a single image
CN114549726A (en) * 2022-01-19 2022-05-27 广东时谛智能科技有限公司 High-quality material chartlet obtaining method based on deep learning
CN114549431B (en) * 2022-01-29 2024-07-26 北京师范大学 Method for estimating reflection attribute of material of object surface from single image


Also Published As

Publication number Publication date
CN114926593A (en) 2022-08-19


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant