CN106331663A - System and method for acquiring interactive materials for portable devices - Google Patents
- Publication number
- CN106331663A CN106331663A CN201610738493.8A CN201610738493A CN106331663A CN 106331663 A CN106331663 A CN 106331663A CN 201610738493 A CN201610738493 A CN 201610738493A CN 106331663 A CN106331663 A CN 106331663A
- Authority
- CN
- China
- Prior art keywords
- pixel
- module
- image
- parameter
- diffuse
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
- H04N9/3179—Video signal processing therefor
- H04N9/3182—Colour adjustment, e.g. white balance, shading or gamut
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/10—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/43—Conversion of monochrome picture signals to colour picture signals for colour picture display
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
- H04N9/73—Colour balance circuits, e.g. white balance circuits or colour temperature control
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Processing (AREA)
- Facsimile Image Signal Circuits (AREA)
Abstract
Description
Technical Field
The present invention relates to an interactive material acquisition system and method for portable devices, and belongs to the field of graphics and image processing.
Background Art
In recent years, progress has been made in techniques that recover material appearance attributes from a single image and generate material maps. In industry, for example, artists use CrazyBump to compute normal maps; in research, Microsoft Research Asia developed the interactive material acquisition system AppGen. However, these systems all assume that the single input image contains no obvious specular highlights, which limits their applicability. Moreover, material maps recovered by such methods rarely reproduce rendering results that match real lighting conditions.
YUV is a color encoding method adopted by European television systems and is the color space used by the PAL and SECAM analog color television standards. In a modern color television system, a three-tube color camera or a color CCD camera usually captures the image; the captured color signal is separated into components, amplified, and corrected to obtain RGB, which a matrix conversion circuit then transforms into a luminance signal Y and two color-difference signals, B−Y (i.e., U) and R−Y (i.e., V). Finally, the transmitter encodes the luminance signal and the two color-difference signals separately and sends them over the same channel. This representation is the so-called YUV color space. In YUV, the luminance signal Y is separated from the chrominance signals U and V; an image carrying only the Y component with no U or V components is a black-and-white grayscale image. Color television adopted the YUV space precisely so that the luminance signal Y would make color broadcasts compatible with black-and-white receivers, allowing black-and-white sets to display color transmissions.
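As a concrete illustration of the matrix conversion described above (not part of the patent text), a BT.601-style RGB↔YUV conversion can be sketched in NumPy. The 0.492 and 0.877 color-difference scalings are one common convention; exact coefficients vary between standards:

```python
import numpy as np

def rgb_to_yuv(rgb: np.ndarray) -> np.ndarray:
    """rgb: float array in [0, 1], shape (..., 3) -> stacked (Y, U, V)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luminance (shading map)
    u = 0.492 * (b - y)                    # B-Y color difference
    v = 0.877 * (r - y)                    # R-Y color difference
    return np.stack([y, u, v], axis=-1)

def yuv_to_rgb(yuv: np.ndarray) -> np.ndarray:
    """Inverse of rgb_to_yuv, solving the three linear equations above."""
    y, u, v = yuv[..., 0], yuv[..., 1], yuv[..., 2]
    r = y + v / 0.877
    b = y + u / 0.492
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return np.stack([r, g, b], axis=-1)
```

A pixel with U = V = 0 reconstructs to r = g = b = Y, which is exactly the grayscale-compatibility property the paragraph describes.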
Summary of the Invention
To address the deficiencies of the prior art, the technical solution of the present invention discloses an interactive material acquisition system and method for portable devices, in which the image data are obtained by taking one photograph with flash and one without flash using a mobile phone, SLR camera, or similar device.
The technical solution of the present invention includes an interactive material acquisition system for portable devices, characterized in that the system comprises: an acquisition module for capturing a no-flash material image and a flash material image with a portable device; a highlight separation module for converting the no-flash original RGB image captured by the acquisition module into a YUV image, computing a new shading map from a user-supplied separation parameter and the original shading map of the YUV image, and further separating the YUV image into a specular reflection map and a diffuse reflection map; a normal map module for taking the pixel differences of the new shading map produced by the highlight separation module, constructing a first tangent vector and a second tangent vector in tangent space, and computing the initial normal map from these two tangent vectors; a diffuse module for taking the diffuse reflection map produced by the highlight separation and computing the initial diffuse albedo map; a specular exponent module for taking the new shading map produced by the highlight separation module and computing the initial specular exponent map from a user-supplied specular coefficient; a specular reflection module for taking the specular reflection map produced by the highlight separation and computing the initial specular coefficient map; and a parameter optimization module for collecting the initial parameters produced by the normal map, diffuse, specular exponent, and specular reflection modules together with the flash material image from the acquisition module, computing the material map parameters by non-linear optimization, and generating the material maps.
In the interactive material acquisition system described above, the highlight separation module further comprises: a conversion sub-module for converting the original RGB color space of the captured no-flash material image into a YUV material image consisting of the original shading map Y1, the original color map U, and the saturation V; and a separation sub-module that, given a user-defined separation parameter p, computes the new shading map Y2 = Y1 * p, combines Y2 with the original U and V into a new RGB image that serves as the diffuse reflection map, such that the specular reflection image and the diffuse reflection map sum to the original RGB image.
In the interactive material acquisition system described above, the normal map module further serves to take the pixel differences of the new shading map Y2: for the value D(i,j) at any pixel (i,j), the tangent vectors in the S and T directions are S(i,j) = (1, 0, D(i+1,j) − D(i−1,j)) and T(i,j) = (0, 1, D(i,j+1) − D(i,j−1)); the cross product of the first tangent vector S and the second tangent vector T gives the initial normal map.
In the interactive material acquisition system described above, the diffuse module further serves to compute the initial diffuse value, where albedo map = diffuse map / NdotL, and NdotL is the inner product of the pixel normal vector and the light vector.
In the interactive material acquisition system described above, the specular exponent module further serves to compute the initial specular exponent map from a user-supplied specular parameter as: specular exponent map = new shading map Y2 * specular parameter.
In the interactive material acquisition system described above, the specular reflection module further serves to compute the initial specular coefficient map as: specular coefficient map = specular map / pow(NdotH, g(x)) / NdotL, where H is the halfway vector between the light and view directions, NdotH is the inner product of the pixel normal vector and H, and g(x) is the specular exponent at pixel x.
In the interactive material acquisition system described above, the parameter optimization module further comprises: a clustering sub-module for partitioning all pixels of the parameter-initialized no-flash image into N clusters, where N is the number of clusters; an acquisition-parameter sub-module for obtaining the FOV parameter of the portable device and initializing the light vector Lj and view vector Vj for every pixel of the flash image; and a pixel-processing sub-module for processing each pixel class Ci of the flash image.
In a preferred embodiment of the interactive material acquisition system described above, the pixel-processing sub-module processes each pixel class Ci and comprises: a pixel-selection sub-module for obtaining the value of an arbitrary pixel P in the current class; an initial-parameter sub-module for reading the initialization parameters of the no-flash material image pixel corresponding to P; an acquisition-parameter sub-module for reading the RGB value, light vector Lj, and view vector Vj of every pixel Pj in the current class Ci; a non-linear optimization sub-module for gathering the data obtained by the three sub-modules above, computing the parameters of pixel P by non-linear optimization, and storing them as the map parameter basis of the current class Ci; and a map-parameter sub-module for computing, one by one, the ratio of each remaining pixel of class Ci to pixel P as that pixel's weight coefficient, and then computing the map parameters of the remaining pixels from the map parameter basis of class Ci.
The technical solution of the present invention also includes an interactive material acquisition method, characterized in that the method comprises: capturing a no-flash material image and a flash material image with a portable device; converting the captured no-flash original RGB image into a YUV image, computing a new shading map from a user-supplied separation parameter and the original shading map of the YUV image, and further separating the YUV image into a specular reflection map and a diffuse reflection map; taking the pixel differences of the new shading map, constructing a first tangent vector and a second tangent vector in tangent space, and computing the initial normal map from these two tangent vectors; taking the diffuse reflection map produced by the highlight separation and computing the initial diffuse albedo map; taking the new shading map and computing the initial specular exponent map from a user-supplied specular coefficient; taking the specular reflection map produced by the highlight separation and computing the initial specular coefficient map; and, given the initial parameters obtained above together with the captured flash material image, computing the material map parameters by non-linear optimization and generating the material maps.
Further, the method also includes: converting the original RGB color space of the captured no-flash material image into a YUV material image consisting of the original shading map Y1, the original color map U, and the saturation V; given a user-defined separation parameter p, computing the new shading map Y2 = Y1 * p; and combining Y2 with the original U and V into a new RGB image that serves as the diffuse reflection map, such that the specular reflection image and the diffuse reflection map sum to the original RGB image.
Further, the method also includes: taking the pixel differences of the new shading map Y2; for the value D(i,j) at any pixel (i,j), the tangent vectors in the S and T directions are S(i,j) = (1, 0, D(i+1,j) − D(i−1,j)) and T(i,j) = (0, 1, D(i,j+1) − D(i,j−1)); the cross product of the first tangent vector S and the second tangent vector T gives the initial normal map.
Further, the method also includes: computing the initial diffuse value, where albedo map = diffuse map / NdotL, and NdotL is the inner product of the pixel normal vector and the light vector.
Further, the method also includes: computing the initial specular exponent map from a user-supplied specular parameter as: specular exponent map = new shading map Y2 * specular parameter.
Further, the method also includes: computing the initial specular coefficient map as: specular coefficient map = specular map / pow(NdotH, g(x)) / NdotL, where H is the halfway vector between the light and view directions, NdotH is the inner product of the pixel normal vector and H, and g(x) is the specular exponent at pixel x.
Further, the method also includes: partitioning all pixels of the parameter-initialized no-flash image into N clusters, where N is the number of clusters; obtaining the FOV parameter of the portable device and initializing the light vector Lj and view vector Vj for every pixel of the flash image; and processing each pixel class Ci of the flash image.
Further, the method processes each class Ci as follows: obtain the value of an arbitrary pixel P in the current class; obtain the initialization parameters of the no-flash material image pixel corresponding to P; read the RGB value, light vector Lj, and view vector Vj of every pixel Pj in the current class Ci; gather these data, compute the parameters of pixel P by non-linear optimization, and store them as the map parameter basis of the current class Ci; compute, one by one, the ratio of each remaining pixel of class Ci to pixel P as that pixel's weight coefficient, and then compute the map parameters of the remaining pixels from the map parameter basis of class Ci.
The beneficial effects of the present invention are: the material appearance acquisition system of the present invention uses a mobile phone, SLR camera, or other portable device to capture one image with flash and one without, and interactively generates more realistic material maps, thereby addressing the long production cycles, high cost, and low accuracy of material authoring in game production, film and television animation, and related fields.
Brief Description of the Drawings
Fig. 1 is a system block diagram according to an embodiment of the present invention;
Fig. 2 is a detailed flowchart according to an embodiment of the present invention.
Detailed Description
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is described in detail below with reference to the accompanying drawings and specific embodiments. The interactive material acquisition system and method for portable devices of the present invention are applicable to graphics and image processing.
Fig. 1 shows the system block diagram and flowchart according to an embodiment of the present invention. The acquisition module performs source image capture; the highlight separation, normal map, diffuse, and specular reflection modules initialize the material map parameters from the no-flash image; and the parameter module processes the flash material image. The acquisition module captures the flash material image of the first portable device and the no-flash material image of the second portable device. The highlight separation module converts the captured no-flash original RGB image into a YUV image based on a user-supplied separation parameter, converts the YUV image back into a new RGB image, and separates it into a specular reflection map and a diffuse reflection map, where the YUV color space comprises the original shading map, the original color map, and the saturation. The normal map module obtains the new shading map of the YUV image, constructs a first tangent vector and a second tangent vector in tangent space from the pixel differences of the new shading map, and computes the initial normal map from these two tangent vectors. The diffuse module takes the diffuse reflection map produced by the highlight separation and computes the initial diffuse albedo map. The specular exponent module takes the new shading map produced by the highlight separation module and computes the initial specular exponent map from a user-supplied specular coefficient. The specular reflection module takes the specular reflection map produced by the highlight separation and computes the initial specular coefficient map; the highlight separation, normal map, diffuse, and specular exponent modules together yield the final no-flash material image. The parameter module takes the initialization parameters of the final no-flash material image from those modules together with the flash material image from the acquisition module, computes the material map parameters by non-linear optimization, obtains the appearance attributes of the material image, and generates the material maps.
Fig. 2 shows a detailed flowchart according to an embodiment of the present invention. The specific implementation comprises steps S201 to S207:
S201, data acquisition: capture two material images, one with flash and one without, using a mobile phone, SLR camera, or similar device.
S202, highlight separation: the technical solution of the present invention uses the shading value Y of the YUV color space to separate the original RGB image into a diffuse reflection map and a specular reflection map.
The specific implementation, steps (1)–(5), is as follows:
(1) compute the original shading map Y and the original color maps U and V from the original RGB image;
(2) the user interactively enters the separation parameter (typically 0.6–0.8);
(3) new shading map Y2 = original shading map Y * separation parameter;
(4) generate a new RGB image from the new shading map Y2 and the original color maps U and V; this serves as the diffuse reflection map;
(5) specular reflection map = original RGB image − diffuse reflection map.
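A minimal NumPy sketch of steps (1)–(5) follows (an illustrative reading, not the patent's reference implementation). Because only Y is scaled while U and V are kept fixed, recombining (Y2, U, V) under the BT.601 definitions of U and V is equivalent to subtracting (1 − p)·Y from every RGB channel, which the sketch exploits:

```python
import numpy as np

def separate_highlights(rgb: np.ndarray, p: float = 0.7):
    """Steps (1)-(5): scale the shading map Y by the separation parameter p
    while keeping the chroma U, V fixed. Under BT.601 this is the same as
    subtracting (1 - p) * Y from every RGB channel."""
    # (1) original shading map Y from the original RGB image
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    # (3)+(4) Y2 = Y * p recombined with U, V gives the diffuse map
    diffuse = rgb - (1.0 - p) * y[..., None]
    # (5) the specular map is the remainder
    specular = rgb - diffuse
    return diffuse, specular
```

By construction, diffuse + specular reproduces the original image exactly, matching step (5).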
S203, normal map initialization: the basic idea is to construct the S and T vectors in tangent space from the pixel-value differences of the new shading map Y2, and to obtain N_D = S × T. Specifically:
Given the value D(i,j) at pixel (i,j), the tangent vectors in the S and T directions are
S(i,j) = (1, 0, D(i+1,j) − D(i−1,j)),
T(i,j) = (0, 1, D(i,j+1) − D(i,j−1)).
The normal map therefore follows from N_D(i,j) = S(i,j) × T(i,j).
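The S203 construction can be sketched as follows; the function name, the zero-gradient treatment of border pixels, and the column/row axis convention are this sketch's own choices. Expanding the cross product gives S × T = (1, 0, dx) × (0, 1, dy) = (−dx, −dy, 1):

```python
import numpy as np

def init_normal_map(y2: np.ndarray) -> np.ndarray:
    """S203: normals from the shading map Y2 (H, W) via central differences.
    S = (1, 0, dx) and T = (0, 1, dy), so S x T = (-dx, -dy, 1)."""
    dx = np.zeros_like(y2)
    dy = np.zeros_like(y2)
    dx[:, 1:-1] = y2[:, 2:] - y2[:, :-2]  # D(i+1, j) - D(i-1, j), i = column
    dy[1:-1, :] = y2[2:, :] - y2[:-2, :]  # D(i, j+1) - D(i, j-1), j = row
    n = np.stack([-dx, -dy, np.ones_like(y2)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)  # unit-length normals
```

A constant shading map yields normals (0, 0, 1) everywhere, and a brightness ramp tilts the normals against the gradient, as expected for a height-field interpretation of Y2.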
S204, albedo map initialization: albedo map = diffuse map / NdotL, where NdotL is the inner product of the pixel normal vector and the light vector. At this stage the light is assumed perpendicular to the material surface, i.e., L = (0, 0, 1).
S205, specular exponent map initialization: the user interactively enters the specular parameter (typically 0.1–0.5), and specular exponent map = new shading map Y2 * specular parameter.
S206, specular coefficient map initialization: specular coefficient map = specular map / pow(NdotH, g(x)) / NdotL, where NdotL is the inner product of the pixel normal vector and the light vector; NdotH is the inner product of the pixel normal vector and H (the halfway vector between the light and view directions); g(x) is the specular exponent at pixel x; and pow(NdotH, g(x)) is NdotH raised to the power g(x).
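Steps S204–S206 can be combined into one initialization routine. This is a sketch under the stated assumption L = (0, 0, 1), plus two assumptions of its own that the patent does not state: the view direction coincides with the light (so the halfway vector H equals L), and NdotL is clamped away from zero to avoid division by zero:

```python
import numpy as np

def init_parameter_maps(diffuse, specular, y2, normals, k, light=None):
    """S204-S206: initialize albedo, specular-exponent, and specular-
    coefficient maps. `k` is the user's specular parameter (0.1-0.5)."""
    L = np.array([0.0, 0.0, 1.0]) if light is None else light  # light along +z
    eps = 1e-6
    NdotL = np.clip(normals @ L, eps, None)          # per-pixel N . L, clamped
    albedo = diffuse / NdotL[..., None]              # S204: diffuse map / NdotL
    gloss = y2 * k                                   # S205: Y2 * specular parameter
    H = L / np.linalg.norm(L)                        # V = L, so H = (L + V)/|L + V| = L
    NdotH = np.clip(normals @ H, eps, None)
    ks = specular / NdotH[..., None] ** gloss[..., None] / NdotL[..., None]  # S206
    return albedo, gloss, ks
```

For a flat patch with normals (0, 0, 1), NdotL = NdotH = 1, so the albedo map equals the diffuse map and the specular coefficient map equals the specular map, which is a useful sanity check.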
S207: compute the material map parameters from the flash image by non-linear optimization, in steps (1)–(3):
(1) Cluster all pixels of the no-flash image into N clusters (N is the number of clusters).
(2) Initialize the light vector Lj and view vector Vj for every pixel of the flash image from the camera's FOV parameter.
(3) Process each pixel class Ci in turn, as follows.
Step (3) further comprises five sub-steps, (A)–(E):
(A) randomly select a pixel P from the current class Ci;
(B) read the material map parameters of pixel P (initialized from the no-flash image in steps S202–S206);
(C) read the RGB value, light vector Lj, and view vector Vj of every pixel Pj of the current class Ci;
(D) using the values from (B) as the initial material map parameters of P and the values from (C) as the known input data, recompute the map parameters of P by non-linear optimization and store them as the map parameter basis of the current class;
(E) compute, in turn, the ratio of every other pixel of class Ci to pixel P as that pixel's weight coefficient, and compute its map parameters from the map parameter basis of the current class.
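The clustering of step (1) and the per-class refinement of step (D) can be sketched as below. The patent names neither a clustering algorithm nor an optimizer, so plain k-means and finite-difference gradient descent are this sketch's substitutions, and `render` is a placeholder for the (unspecified) per-pixel shading model evaluated at a parameter vector:

```python
import numpy as np

def kmeans(pixels: np.ndarray, n_clusters: int, iters: int = 20, seed: int = 0):
    """Step (1): k-means over flattened pixels (n, 3)."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), n_clusters, replace=False)].astype(float)
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        # assign each pixel to its nearest center, then recompute centers
        dists = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = np.argmin(dists, axis=1)
        for c in range(n_clusters):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels, centers

def fit_class(base_params, observed, render, steps: int = 50, lr: float = 0.1):
    """Step (D): refine the representative pixel's parameters against the
    class observations by minimizing a squared-error cost."""
    params = np.array(base_params, dtype=float)
    def cost(p):
        return np.sum((render(p) - observed) ** 2)
    for _ in range(steps):
        grad = np.zeros_like(params)
        for i in range(len(params)):          # central finite differences
            d = np.zeros_like(params)
            d[i] = 1e-4
            grad[i] = (cost(params + d) - cost(params - d)) / 2e-4
        params -= lr * grad
    return params
```

In practice a dedicated solver (e.g., a Levenberg-Marquardt routine) would replace the hand-rolled descent; the point here is only the data flow of steps (1) and (A)–(D).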
The above are merely preferred embodiments of the present invention. The present invention is not limited to the embodiments described above; any variant that achieves the technical effects of the present invention by the same means falls within its protection scope, and the technical solutions and/or embodiments may be modified and varied in various ways within that scope.
Claims (16)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610738493.8A CN106331663B (en) | 2016-08-26 | 2016-08-26 | A kind of interaction material acquisition system and method for portable equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106331663A true CN106331663A (en) | 2017-01-11 |
CN106331663B CN106331663B (en) | 2018-12-11 |
Family
ID=57791024
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610738493.8A Active CN106331663B (en) | 2016-08-26 | 2016-08-26 | A kind of interaction material acquisition system and method for portable equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106331663B (en) |
- 2016-08-26: CN application CN201610738493.8A filed; granted as patent CN106331663B (status: Active)
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6259455B1 * | 1998-06-30 | 2001-07-10 | Cirrus Logic, Inc. | Method and apparatus for applying specular highlighting with specular components included with texture maps |
KR20050066501A * | 2003-12-26 | 2005-06-30 | Electronics and Telecommunications Research Institute | Inverse rendering apparatus and method using filtered environment map |
US20140176550A1 * | 2010-08-27 | 2014-06-26 | Industry-Academia Cooperation Group Of Sejong University | Ray tracing apparatus and method |
CN102819864A * | 2012-08-14 | 2012-12-12 | Beihang University | Photometric-image-based geometry and surface material reconstruction method |
CN104463198A * | 2014-11-19 | 2015-03-25 | Shanghai Dianji University | Method for illumination estimation in a real lighting environment |
CN105574920A * | 2016-01-28 | 2016-05-11 | NetEase (Hangzhou) Network Co., Ltd. | Texture map generating method, texture map generating device, texture synthesizing method and texture synthesizing device |
Non-Patent Citations (2)
Title |
---|
FU SHUGUANG et al.: "Research on Relighting Based on 3D Texture Synthesis Images", Science & Technology Information (Science & Teaching) * |
SUN QIMIN: "Recovery of Surface Reflection Parameters of Textured Objects", Journal of Computer Research and Development * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106971418A (en) * | 2017-04-28 | 2017-07-21 | 碰海科技(北京)有限公司 | Hand-held household building materials convex-concave surface texture reconstructing device |
CN112150592A (en) * | 2020-09-28 | 2020-12-29 | 腾讯科技(深圳)有限公司 | Animation file generation method and device, storage medium and electronic equipment |
CN112150592B (en) * | 2020-09-28 | 2023-07-14 | 腾讯科技(深圳)有限公司 | Animation file generation method and device, storage medium and electronic equipment |
CN112634156A (en) * | 2020-12-22 | 2021-04-09 | 浙江大学 | Method for estimating material reflection parameter based on portable equipment collected image |
CN112634156B (en) * | 2020-12-22 | 2022-06-24 | 浙江大学 | A method for estimating material reflection parameters based on images captured by portable devices |
CN114119779A (en) * | 2021-10-29 | 2022-03-01 | 浙江凌迪数字科技有限公司 | Method for generating material map through multi-angle polishing shooting and electronic device |
CN115035306A (en) * | 2022-06-24 | 2022-09-09 | Oppo广东移动通信有限公司 | Image material determination method and related device |
Also Published As
Publication number | Publication date |
---|---|
CN106331663B (en) | 2018-12-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107045715B (en) | Method for generating a high dynamic range image from a single low dynamic range image | |
US11882369B2 (en) | Method and system of lens shading color correction using block matching | |
WO2021179820A1 (en) | Image processing method and apparatus, storage medium and electronic device | |
CN104156916B (en) | Light field projection method for scene illumination recovery | |
US9398280B2 (en) | AWB using face detection | |
CN104883504B (en) | Method and device for enabling high dynamic range (HDR) functions on an intelligent terminal | |
CN108419028B (en) | Image processing method, image processing device, computer-readable storage medium and electronic equipment | |
TWI737979B (en) | Image demosaicer and method | |
CN103366364B (en) | Image matting method based on color difference | |
CN103440674B (en) | Method for quickly generating a wax-crayon special effect on digital images | |
CN109191403A (en) | Image processing method and apparatus, electronic device, computer-readable storage medium | |
CN113129391B (en) | Multi-exposure fusion method based on multi-exposure image feature distribution weight | |
CN109493283A (en) | Method for eliminating ghost artifacts in high dynamic range images | |
CN113556526B (en) | RGBW filter array-based color enhancement method for color night vision equipment | |
CN101425179A (en) | Face image relighting method and device | |
CN110930341A (en) | Low-illumination image enhancement method based on image fusion | |
CN112508812A (en) | Image color cast correction method, model training method, device and equipment | |
CN108024105A (en) | Image color adjusting method, device, electronic equipment and storage medium | |
Zhao et al. | Adaptive light estimation using dynamic filtering for diverse lighting conditions | |
CN106331663B (en) | A kind of interaction material acquisition system and method for portable equipment | |
CN107705263A (en) | Adaptive fog-penetration (dehazing) method and terminal based on an RGB-IR sensor | |
US8994848B2 (en) | Method and system for handling mixed illumination in video and photography | |
CN109191398B (en) | Image processing method, apparatus, computer-readable storage medium and electronic device | |
CN109040598B (en) | Image processing method, image processing device, computer-readable storage medium and electronic equipment | |
CN106408617B (en) | Interactive single image material obtaining system and method based on YUV color space |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CP03 | Change of name, title or address |
Address after: 519000 Room 102, 202, 302 and 402, No. 325, Qiandao Ring Road, Tangjiawan Town, high tech Zone, Zhuhai City, Guangdong Province, Room 102 and 202, No. 327 and Room 302, No. 329
Patentee after: Zhuhai Jinshan Digital Network Technology Co.,Ltd.
Patentee after: Guangzhou Xishanju Network Technology Co.,Ltd.
Address before: 519000 Jinshan software building, 8 Lanshan lane, Jida Jingshan Hill Road, Zhuhai, Guangdong
Patentee before: ZHUHAI KINGSOFT ONLINE GAME TECHNOLOGY Co.,Ltd.
Patentee before: GUANGZHOU SEASUN ENTERTAINMENT NETWORK TECHNOLOGY Co.,Ltd.
TR01 | Transfer of patent right |
Effective date of registration: 2024-12-10
Address after: Room 201, No. 325 Qiandao Ring Road, Tangjiawan Town, High tech Zone, Zhuhai City, Guangdong Province 519000
Patentee after: Zhuhai Xishan Jushiyou Technology Co.,Ltd.
Country or region after: China
Address before: 519000 Room 102, 202, 302 and 402, No. 325, Qiandao Ring Road, Tangjiawan Town, high tech Zone, Zhuhai City, Guangdong Province, Room 102 and 202, No. 327 and Room 302, No. 329
Patentee before: Zhuhai Jinshan Digital Network Technology Co.,Ltd.
Country or region before: China
Patentee before: Guangzhou Xishanju Network Technology Co.,Ltd.