
CN116017167B - Image processing method and device, electronic equipment and storage medium - Google Patents

Image processing method and device, electronic equipment and storage medium

Info

Publication number
CN116017167B
CN116017167B (granted from application CN202211704923.6A / CN202211704923A)
Authority
CN
China
Prior art keywords
target
ambient light
relighting
virtual scene
coefficient
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211704923.6A
Other languages
Chinese (zh)
Other versions
CN116017167A (en)
Inventor
丛宽
林弘扬
赵清澄
许岚
张亦弛
吴红
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan MgtvCom Interactive Entertainment Media Co Ltd
ShanghaiTech University
Original Assignee
Hunan MgtvCom Interactive Entertainment Media Co Ltd
ShanghaiTech University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan MgtvCom Interactive Entertainment Media Co Ltd, ShanghaiTech University filed Critical Hunan MgtvCom Interactive Entertainment Media Co Ltd
Priority to CN202211704923.6A priority Critical patent/CN116017167B/en
Publication of CN116017167A publication Critical patent/CN116017167A/en
Application granted granted Critical
Publication of CN116017167B publication Critical patent/CN116017167B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B 20/00: Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B 20/40: Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Image Processing (AREA)

Abstract

The invention provides an image processing method and apparatus, an electronic device, and a storage medium. The method comprises: determining a corresponding target relighting coefficient from an ambient light restoration model based on the target virtual scene selected in a shooting instruction; taking the target relighting coefficient as a pixel brightness value; relighting the target object in the shooting instruction based on the pixel brightness value; and adding the target object image obtained by relighting and photographing to the target virtual scene to obtain a target image. According to the method, the ambient light is restored in a multispectral lighting device according to the target ambient light sampling maps corresponding to different virtual scenes, so that an ambient light restoration model is constructed, and the target relighting coefficient corresponding to the target virtual scene is determined through the ambient light restoration model; the target object in the shooting instruction is relit based on the target relighting coefficient, which makes the relighting of a real object by the virtual environment more realistic and thus improves the authenticity of the image.

Description

图像的处理方法、装置、电子设备及存储介质Image processing method, device, electronic device and storage medium

技术领域Technical Field

本发明涉及多光谱光源技术领域,尤其涉及一种图像的处理方法、装置、电子设备及存储介质。The present invention relates to the technical field of multi-spectral light sources, and in particular to an image processing method, device, electronic equipment and storage medium.

背景技术Background Art

随着影视传媒,文化互娱行业的繁荣发展,相关行业对于环境光还原技术的需求不断提高。该技术可以高效地还原指定场景下的光照条件,对相关行业的发展升级起到了至关重要的作用。With the prosperous development of film and television media and cultural entertainment industries, the demand for ambient light restoration technology in related industries has continued to increase. This technology can efficiently restore the lighting conditions in a specified scene, playing a vital role in the development and upgrading of related industries.

一般来说,将现实中的人体或者物品图像直接添加到虚拟场景中时,会因为环境光照的差异,无法在视觉上逼真的还原出人体或物品的真实效果。各个物品的表面在不同的光照环境下,会呈现出不同的反射效果;对于同一物品的不同表面,会因为角度,粗糙程度,透明程度等因素,呈现出不同的反射效果;即使对于同一物品的同一表面,也会因为不同光谱的反射率不同,使得模拟出的物品表面与真实物体表面呈现出不同的反射效果。Generally speaking, when adding real-life human or object images directly to a virtual scene, the real effects of the human or object cannot be realistically restored visually due to differences in ambient lighting. The surfaces of various objects will show different reflection effects under different lighting environments; different surfaces of the same object will show different reflection effects due to factors such as angle, roughness, and transparency; even for the same surface of the same object, the reflectivity of different spectra will be different, making the simulated object surface and the real object surface show different reflection effects.

目前的技术多聚焦于对于光照方向的还原,力求能够纠正不同角度带来的光照与阴影信息。单纯的使用图像的三通道数据还原的环境光,会因为相机响应曲线,灯光响应曲线及通用色彩标准sRGB色域表达能力弱等因素,导致图像色差较大,即图像的真实性较差。Current technologies focus on restoring the direction of light, striving to correct the light and shadow information brought by different angles. Simply using the three-channel data of the image to restore the ambient light will result in large image color difference due to factors such as the camera response curve, the light response curve and the weak expression ability of the universal color standard sRGB color gamut, that is, the image authenticity is poor.

发明内容Summary of the invention

有鉴于此,本发明实施例提供一种图像的处理方法、装置、电子设备及存储介质,以解决现有技术中出现的图像真实性较差的问题。In view of this, embodiments of the present invention provide an image processing method, an apparatus, an electronic device, and a storage medium to solve the problem of poor image authenticity in the prior art.

为实现上述目的,本发明实施例提供如下技术方案:To achieve the above objectives, the embodiments of the present invention provide the following technical solutions:

本发明实施例第一方面示出了一种图像的处理方法,所述方法包括:A first aspect of an embodiment of the present invention shows a method for processing an image, the method comprising:

接收任意一用户输入的拍摄指令;Receive a shooting instruction input by any user;

基于所述拍摄指令中选择的目标虚拟场景,从环境光还原模型确定对应的目标重打光系数,所述环境光还原模型用于存储每一虚拟场景与第一重打光系数之间的对应关系;Based on the target virtual scene selected in the shooting instruction, determining a corresponding target relighting coefficient from an ambient light restoration model, wherein the ambient light restoration model is used to store a corresponding relationship between each virtual scene and a first relighting coefficient;

将所述目标重打光系数作为素像明度值;Using the target relighting coefficient as the pixel brightness value;

基于所述像素明度值对所述拍摄指令中的目标物进行重打光;relighting the target object in the shooting instruction based on the pixel brightness value;

将重打光拍摄得到的目标物图像添加至所述目标虚拟场景中,得到目标虚拟图像。The target object image obtained by relighting and photographing is added to the target virtual scene to obtain a target virtual image.

可选的,所述基于所述拍摄指令中选择的目标虚拟场景,从环境光还原模型确定对应的目标重打光系数,包括:Optionally, determining a corresponding target relighting coefficient from an ambient light restoration model based on the target virtual scene selected in the shooting instruction includes:

遍历所述环境光还原模型中每一虚拟场景与第一重打光系数之间的对应关系,确定与所述目标虚拟场景对应的重打光系数;Traversing the correspondence between each virtual scene in the ambient light restoration model and the first relighting coefficient, and determining the relighting coefficient corresponding to the target virtual scene;

将与所述目标虚拟场景对应的第一重打光系数作为目标重打光系数。The first relighting coefficient corresponding to the target virtual scene is used as the target relighting coefficient.

可选的,所述环境光还原模型的构建过程包括:Optionally, the process of constructing the ambient light restoration model includes:

利用多种色彩灯珠对拍摄得到的色卡图像进行校准,得到校准后的色卡图像;Using a variety of color lamp beads to calibrate the captured color card image to obtain a calibrated color card image;

针对每一虚拟场景对应的目标环境光采样图,基于预处理后的目标环境光采样图对校准后的色卡图像进行虚拟打光,得到对应的色卡像素;For each virtual scene corresponding to the target ambient light sampling map, the calibrated color card image is virtually illuminated based on the preprocessed target ambient light sampling map to obtain the corresponding color card pixels;

基于所述色卡像素确定每一虚拟场景的第一重打光系数;Determining a first relighting coefficient for each virtual scene based on the color card pixels;

基于所述每一虚拟场景与第一重打光系数之间的对应关系构建环境光还原模型。An ambient light restoration model is constructed based on the corresponding relationship between each virtual scene and the first relighting coefficient.

可选的,所述针对每一虚拟场景对应的目标环境光采样图,基于预处理后的目标环境光采样图对校准后的色卡图像进行虚拟打光,得到对应的色卡像素,包括:Optionally, for each virtual scene corresponding to the target ambient light sampling map, the calibrated color card image is virtually illuminated based on the preprocessed target ambient light sampling map to obtain corresponding color card pixels, including:

针对每一虚拟场景对应的目标环境光采样图,对所述目标环境光采样图进行预处理,得到压缩的初始像素明度值;For each target ambient light sampling map corresponding to each virtual scene, preprocessing the target ambient light sampling map to obtain a compressed initial pixel brightness value;

根据球体多光谱光源坐标对所述初始像素明度值进行映射,得到图像区域;Mapping the initial pixel brightness value according to the spherical multi-spectral light source coordinates to obtain an image area;

对所述图像区域进行处理,确定预期像素值;Processing the image region to determine expected pixel values;

利用所述预期像素值所测度的灯光对色卡进行虚拟打光得到,得到对应的色卡像素。The color card is virtually illuminated using the light measured by the expected pixel value to obtain corresponding color card pixels.

本发明实施例第二方面示出了一种图像的处理装置,所述装置包括:A second aspect of an embodiment of the present invention shows an image processing device, the device comprising:

确定单元,用于在接收到任意一用户输入的拍摄指令时,基于所述拍摄指令中选择的目标虚拟场景,从环境光还原模型确定对应的目标重打光系数,所述环境光还原模型由于构建单元构建得到;a determining unit, configured to, upon receiving a shooting instruction input by any user, determine a corresponding target relighting coefficient from an ambient light restoration model based on a target virtual scene selected in the shooting instruction, wherein the ambient light restoration model is constructed by the constructing unit;

a lighting unit, configured to take the target relighting coefficient as the pixel brightness value, and to relight the target object in the shooting instruction based on the pixel brightness value;

处理单元,用于将重打光拍摄得到的目标物图像添加至所述目标虚拟场景中,得到目标图像。The processing unit is used to add the target object image obtained by relighting and photographing to the target virtual scene to obtain the target image.

可选的,所述确定单元,具体用于:遍历所述环境光还原模型中每一虚拟场景与第一重打光系数之间的对应关系,确定与所述目标虚拟场景对应的重打光系数;Optionally, the determining unit is specifically used to: traverse the correspondence between each virtual scene in the ambient light restoration model and the first relighting coefficient to determine the relighting coefficient corresponding to the target virtual scene;

将与所述目标虚拟场景对应的第一重打光系数作为目标重打光系数。The first relighting coefficient corresponding to the target virtual scene is used as the target relighting coefficient.

可选的,所述构建单元,用于:利用多种色彩灯珠对拍摄得到的色卡图像进行校准,得到校准后的色卡图像;Optionally, the construction unit is used to: calibrate the captured color card image using a variety of color lamp beads to obtain a calibrated color card image;

针对每一虚拟场景对应的目标环境光采样图,基于预处理后的目标环境光采样图对校准后的色卡图像进行虚拟打光,得到对应的色卡像素;For each virtual scene corresponding to the target ambient light sampling map, the calibrated color card image is virtually illuminated based on the preprocessed target ambient light sampling map to obtain the corresponding color card pixels;

基于所述色卡像素确定每一虚拟场景的第一重打光系数;Determining a first relighting coefficient for each virtual scene based on the color card pixels;

基于所述每一虚拟场景与第一重打光系数之间的对应关系构建环境光还原模型。An ambient light restoration model is constructed based on the corresponding relationship between each virtual scene and the first relighting coefficient.

可选的,所述针对每一虚拟场景对应的目标环境光采样图,基于预处理后的目标环境光采样图对校准后的色卡图像进行虚拟打光,得到对应的色卡像素的构建单元,具体用于:Optionally, for each virtual scene corresponding to the target ambient light sampling map, the calibrated color card image is virtually illuminated based on the preprocessed target ambient light sampling map to obtain a construction unit of corresponding color card pixels, which is specifically used for:

针对每一虚拟场景对应的目标环境光采样图,对所述目标环境光采样图进行预处理,得到压缩的初始像素明度值;For each target ambient light sampling map corresponding to each virtual scene, preprocessing the target ambient light sampling map to obtain a compressed initial pixel brightness value;

根据球体多光谱光源坐标对所述初始像素明度值进行映射,得到图像区域;Mapping the initial pixel brightness value according to the spherical multi-spectral light source coordinates to obtain an image area;

对所述图像区域进行处理,确定预期像素值;Processing the image region to determine expected pixel values;

利用所述预期像素值所测度的灯光对色卡进行虚拟打光得到,得到对应的色卡像素。The color card is virtually illuminated using the light measured by the expected pixel value to obtain corresponding color card pixels.

本发明实施例第三方面示出了一种电子设备,所述电子设备用于运行程序,其中,所述程序运行时执行如本发明实施例第一方面示出的图像的处理方法。A third aspect of an embodiment of the present invention shows an electronic device, which is used to run a program, wherein the program, when running, executes the image processing method shown in the first aspect of the embodiment of the present invention.

本发明实施例第四方面示出了一种计算机存储介质,所述存储介质包括存储程序,其中,在所述程序运行时控制所述存储介质所在设备执行如本发明实施例第一方面示出的图像的处理方法。A fourth aspect of an embodiment of the present invention shows a computer storage medium, which includes a storage program, wherein when the program is running, the device where the storage medium is located is controlled to execute the image processing method shown in the first aspect of the embodiment of the present invention.

Based on the above embodiments of the present invention, an image processing method and apparatus, an electronic device, and a storage medium are provided. The method includes: upon receiving a shooting instruction input by any user, determining the corresponding target relighting coefficient from an ambient light restoration model based on the target virtual scene selected in the shooting instruction, the ambient light restoration model being used to store the correspondence between each virtual scene and a first relighting coefficient; taking the target relighting coefficient as the pixel brightness value; relighting the target object in the shooting instruction based on the pixel brightness value; and adding the target object image obtained by relighting and photographing to the target virtual scene to obtain the target image. In the embodiments of the present invention, the ambient light corresponding to different virtual scenes is restored in a multispectral lighting device according to their target ambient light sampling maps, so that an ambient light restoration model is constructed; the target relighting coefficient corresponding to the target virtual scene is determined through this model, and the target object in the shooting instruction is relit based on it, which makes the relighting of a real object by the virtual environment more realistic; the relit target object image is then added to the target virtual scene to obtain the target virtual image. This improves the authenticity of the image obtained in the above manner.

附图说明BRIEF DESCRIPTION OF THE DRAWINGS

为了更清楚地说明本发明实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据提供的附图获得其他的附图。In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required for use in the embodiments or the description of the prior art will be briefly introduced below. Obviously, the drawings described below are only embodiments of the present invention. For ordinary technicians in this field, other drawings can be obtained based on the provided drawings without paying creative work.

图1为本发明实施例示出的一种图像的处理方法的流程示意图;FIG1 is a schematic flow chart of an image processing method according to an embodiment of the present invention;

图2为本发明实施例示出的环境光还原模型的构建过程的流程示意图;FIG2 is a schematic diagram of a process of constructing an ambient light restoration model according to an embodiment of the present invention;

图3为本发明实施例示出的一种图像的处理装置的结构示意图。FIG. 3 is a schematic diagram of the structure of an image processing device according to an embodiment of the present invention.

具体实施方式DETAILED DESCRIPTION

下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。The following will be combined with the drawings in the embodiments of the present invention to clearly and completely describe the technical solutions in the embodiments of the present invention. Obviously, the described embodiments are only part of the embodiments of the present invention, not all of the embodiments. Based on the embodiments of the present invention, all other embodiments obtained by ordinary technicians in this field without creative work are within the scope of protection of the present invention.

本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”、“第三”、“第四”等(如果存在)是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的实施例能够以除了在这里图示或描述的内容以外的顺序实施。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。The terms "first", "second", "third", "fourth", etc. (if any) in the specification and claims of the present application and the above-mentioned drawings are used to distinguish similar objects, and are not necessarily used to describe a specific order or sequence. It should be understood that the data used in this way can be interchangeable where appropriate, so that the embodiments described herein can be implemented in an order other than that illustrated or described herein. In addition, the terms "including" and "having" and any variations thereof are intended to cover non-exclusive inclusions, for example, a process, method, system, product or device that includes a series of steps or units is not necessarily limited to those steps or units that are clearly listed, but may include other steps or units that are not clearly listed or inherent to these processes, methods, products or devices.

需要说明的是,在本发明中涉及“第一”、“第二”等的描述仅用于描述目的,而不能理解为指示或暗示其相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括至少一个该特征。另外,各个实施例之间的技术方案可以相互结合,但是必须是以本领域普通技术人员能够实现为基础,当技术方案的结合出现相互矛盾或无法实现时应当认为这种技术方案的结合不存在,也不在本发明要求的保护范围之内。It should be noted that the descriptions of "first", "second", etc. in the present invention are only used for descriptive purposes and cannot be understood as indicating or implying their relative importance or implicitly indicating the number of the indicated technical features. Therefore, the features defined as "first" and "second" may explicitly or implicitly include at least one of the features. In addition, the technical solutions between the various embodiments can be combined with each other, but they must be based on the ability of ordinary technicians in the field to implement them. When the combination of technical solutions is contradictory or cannot be implemented, it should be deemed that such a combination of technical solutions does not exist and is not within the scope of protection required by the present invention.

在本申请中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。In this application, the terms "comprises", "comprising" or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device comprising a series of elements includes not only those elements, but also other elements not explicitly listed, or also includes elements inherent to such process, method, article or device. In the absence of more restrictions, an element defined by the sentence "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or device comprising the element.

参见图1,为本发明实施例示出的一种图像的处理方法的流程示意图,所述方法包括:Referring to FIG. 1 , it is a schematic flow chart of an image processing method according to an embodiment of the present invention. The method includes:

步骤S101:接收任意一用户输入的拍摄指令。Step S101: receiving a shooting instruction input by any user.

可选的,用户可基于用户终端选择对应的目标虚拟场景,并触发对应的拍摄按钮,以生成对应的拍摄指令,并发送给图像处理系统。Optionally, the user can select a corresponding target virtual scene based on the user terminal and trigger a corresponding shooting button to generate a corresponding shooting instruction and send it to the image processing system.

在具体实现步骤S101的过程中,接收携带有目标虚拟场景的拍摄指令。In the specific process of implementing step S101, a shooting instruction carrying a target virtual scene is received.

步骤S102:基于所述拍摄指令中选择的目标虚拟场景,从环境光还原模型确定对应的目标重打光系数。Step S102: Based on the target virtual scene selected in the shooting instruction, determine the corresponding target relighting coefficient from the ambient light restoration model.

在步骤S102中,所述环境光还原模型用于存储每一虚拟场景与第一重打光系数之间的对应关系。In step S102, the ambient light restoration model is used to store the corresponding relationship between each virtual scene and the first relighting coefficient.

需要说明的是,环境光还原模型的构建过程,如图2所示,包括以下步骤:It should be noted that the construction process of the ambient light restoration model, as shown in FIG2 , includes the following steps:

步骤S201:利用多种色彩灯珠对拍摄得到的色卡图像进行校准,得到校准后的色卡图像。Step S201: calibrate the captured color card image using multiple color lamp beads to obtain a calibrated color card image.

It should be noted that in the CIELAB (LAB) color space, the L channel carries lightness while the A and B channels carry color information, and mixing these channels reproduces the colors of the image. Since the RGB color space does not correspond intuitively to human visual perception, whereas the LAB color space does, calculations of lightness, color difference, color correction and the like are more accurate in LAB; the RGB color space therefore needs to be converted to the LAB color space.

在具体实现步骤S201的过程中,首先,由于不同光谱灯珠在最大明度下激发出来的亮度不同,为了规避低光条件下的影响,在拍摄色卡过程中,对于各个光源光谱的灯珠,采用图像相机拍摄的曝光时间不同。对于不同的曝光时间,将色卡图像的红绿蓝RGB颜色空间转换到颜色-对立空间(Labcolor space,LAB)颜色空间,进而将像转换后色卡图像的素明度L和曝光时间代入公式(1)以确定校正后的色卡图像的像素明度L'。In the specific implementation of step S201, first, since the brightness of lamp beads with different spectra is different at the maximum brightness, in order to avoid the influence of low light conditions, in the process of shooting the color card, the exposure time of the image camera is different for the lamp beads of each light source spectrum. For different exposure times, the RGB color space of the color card image is converted to the color-opponent space (Lab color space, LAB) color space, and then the pixel brightness L and exposure time of the converted color card image are substituted into formula (1) to determine the pixel brightness L' of the corrected color card image.

公式(1):Formula (1):

L'=L/exp_time*factor (1)L'=L/exp_time*factor (1)

其中,L为像素明度;factor为平均曝光时间,在此曝光时间下,蓝色、绿色、红色打光下的色卡颜色曝光基本均匀;exp_time为相机的曝光时间;Where L is the pixel brightness; factor is the average exposure time, under which the color exposure of the color card under blue, green, and red lighting is basically uniform; exp_time is the exposure time of the camera;
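As a minimal illustration of formula (1), the following Python sketch converts an RGB color card image to LAB and rescales its lightness by the exposure time; the use of scikit-image and the assumption of float RGB input in [0, 1] are illustrative choices, not prescribed by the text.

```python
import numpy as np
from skimage import color  # assumed dependency for the RGB <-> LAB conversion

def normalize_lightness(rgb_image: np.ndarray, exp_time: float, factor: float) -> np.ndarray:
    """Apply L' = L / exp_time * factor to the L channel of a float RGB image in [0, 1]."""
    lab = color.rgb2lab(rgb_image)                 # L in [0, 100]; A and B are the color channels
    lab[..., 0] = np.clip(lab[..., 0] / exp_time * factor, 0.0, 100.0)
    return color.lab2rgb(lab)                      # back to RGB for further processing
```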

其次,由于光源在输入亮度下存在实际亮度不均匀的问题,拍摄过程中需要对不同亮度输入的灯珠,在同一相机曝光参数下拍摄一组色卡图像,并标定得到灯珠的亮度曲线。具体的,对于色卡图像的一组离散的亮度L1,L2,...,Lk,并基于亮度L1,L2,...,Lk绘制第一曲线;拍摄色卡图像上的灰阶色块得到一组明度M1,M2,...,Mk,基于明度M1,M2,...,Mk绘制第二曲线;将所述第一曲线和第二曲线进行拟合计算最小误差,即回归函数f,也就是说,可以根据第一曲线和第二曲线的曲线状况确定最小误差,即回归函数f。Secondly, since the actual brightness of the light source is uneven under the input brightness, it is necessary to shoot a set of color card images for lamp beads with different brightness inputs under the same camera exposure parameters during the shooting process, and calibrate to obtain the brightness curve of the lamp beads. Specifically, for a set of discrete brightnesses L 1 , L 2 , ..., L k of the color card image, a first curve is drawn based on the brightnesses L 1 , L 2 , ..., L k ; a set of brightnesses M 1 , M 2 , ..., M k is obtained by shooting the grayscale color blocks on the color card image, and a second curve is drawn based on the brightnesses M 1 , M 2 , ..., M k ; the first curve and the second curve are fitted to calculate the minimum error, that is, the regression function f, that is, the minimum error, that is, the regression function f, can be determined according to the curve conditions of the first curve and the second curve.
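A hedged sketch of the response-curve calibration described above: the mapping from lamp input intensity to measured gray-patch lightness is fitted by least squares, here with a low-order polynomial, which is an assumption since the text does not fix the model family of the regression function f.

```python
import numpy as np

def fit_response_curve(input_levels: np.ndarray, measured_lightness: np.ndarray, degree: int = 3):
    """Fit f (input intensity -> lightness) and its inverse f_inv (lightness -> input intensity)."""
    f = np.poly1d(np.polyfit(input_levels, measured_lightness, degree))       # least-squares fit
    f_inv = np.poly1d(np.polyfit(measured_lightness, input_levels, degree))   # inverse, used later for relighting
    return f, f_inv
```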

其中,明度是指色彩的明亮程度,又称为色阶和照度。亮度也称为灰阶值是对颜色的明度的一种度量。Among them, brightness refers to the brightness of the color, also known as color scale and illumination. Brightness, also known as grayscale value, is a measure of the brightness of the color.

Finally, for lamp beads of different spectra from the same light source, the corresponding data are saved; after exposure-time correction, the calibrated color card image is obtained and used as the basis for the multispectral relighting coefficients. Concretely, the color space is converted to one with linear lightness information, such as LAB, and the lightness value is divided by the corresponding exposure time and then multiplied by a common exposure time, which yields the multispectral relighting coefficient, i.e. the calibrated color card image.

比如,选用了白光正好完全曝光的曝光时间Q作为同一的曝光时间;在此曝光时间Q下,蓝色、绿色、红色打光下的色卡颜色曝光基本均匀。For example, the exposure time Q at which white light is fully exposed is selected as the uniform exposure time; at this exposure time Q, the color exposure of the color cards under blue, green, and red lighting is basically uniform.

可选的,可将曝光时间作为多光谱光源重打光系数。Optionally, the exposure time can be used as a relighting factor for the multi-spectral light source.

需要说明的是,在各个光源光谱的灯珠分别点亮暗室时通过专业相机拍摄的高清晰度色卡图像,在采集的过程中还需要控制相机的感光度和曝光时间。It should be noted that the high-definition color card images taken by a professional camera when the lamp beads of each light source spectrum light up the dark room separately require the control of the camera's sensitivity and exposure time during the acquisition process.

对于同一光源根据色卡上的灰阶色块的亮度,和对应光源强度,可以得到各个光源的响应曲线。For the same light source, the response curve of each light source can be obtained according to the brightness of the grayscale color block on the color card and the corresponding light source intensity.

步骤S202:针对每一虚拟场景对应的目标环境光采样图,基于预处理后的目标环境光采样图对校准后的色卡图像进行虚拟打光,得到对应的色卡像素。Step S202: for each target ambient light sampling diagram corresponding to the virtual scene, the calibrated color card image is virtually illuminated based on the preprocessed target ambient light sampling diagram to obtain corresponding color card pixels.

需要说明的是,具体实现步骤S202针对每一虚拟场景对应的目标环境光采样图,基于预处理后的目标环境光采样图对校准后的色卡图像进行虚拟打光,得到对应的色卡像素的过程,包括以下步骤:It should be noted that the specific implementation of step S202 is to virtually illuminate the calibrated color card image based on the preprocessed target ambient light sampling map corresponding to each virtual scene to obtain the corresponding color card pixel, including the following steps:

步骤S11:针对每一虚拟场景对应的目标环境光采样图,对所述目标环境光采样图进行预处理,得到压缩的初始像素明度值。Step S11: for each target ambient light sampling map corresponding to the virtual scene, preprocess the target ambient light sampling map to obtain a compressed initial pixel brightness value.

In the specific implementation of step S11, the Hybrid Log-Gamma (HLG) curve can be used. Specifically, the brightness value E of each pixel in the image is compressed as shown in formula (2), yielding the compressed initial pixel brightness value E'.

Formula (2):

E' = sqrt(3E),              for 0 <= E <= 1/12
E' = a*ln(12E - b) + c,     for 1/12 < E <= 1        (2)

where E is the linear image brightness signal normalized to the interval [0, 1], i.e. the lightness value of a pixel, and E' is the nonlinear brightness output signal, i.e. the compressed lightness value. The constants a, b and c are fixed at 0.17883277, 0.28466892 and 0.55991073, respectively, and the constant d is determined by the brightness range of the multispectral lighting system and is fixed accordingly in one embodiment of this patent.

需要说明的是,目标环境光采样图是指具有高动态范围的全景图,其上具有某一位置周围环境的采样信息。一般的其图像的长宽比例为1:2。It should be noted that the target ambient light sampling image refers to a panoramic image with a high dynamic range, which has sampling information of the surrounding environment of a certain position. Generally, the aspect ratio of the image is 1:2.

In the embodiments of the present invention, the image is enhanced through this compression; that is, if the target ambient light image does not have sufficient high-dynamic-range expression capability, the Hybrid Log-Gamma method or a similar curve can be used to enhance the image and restore the dynamic range of the original scene.
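The following is a minimal sketch of the Hybrid Log-Gamma compression of formula (2), written as the standard HLG opto-electrical transfer function with the constants a, b, c quoted above; the system-dependent constant d is not modeled here.

```python
import numpy as np

A, B, C = 0.17883277, 0.28466892, 0.55991073  # constants a, b, c from formula (2)

def hlg_compress(E: np.ndarray) -> np.ndarray:
    """Compress a linear brightness signal E, normalized to [0, 1], into the nonlinear E'."""
    E = np.clip(E, 0.0, 1.0)
    log_branch = A * np.log(np.maximum(12.0 * E - B, 1e-12)) + C    # logarithmic branch for highlights
    return np.where(E <= 1.0 / 12.0, np.sqrt(3.0 * E), log_branch)  # sqrt branch for low light
```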

步骤S12:根据球体多光谱光源坐标对所述初始像素明度值进行映射,得到图像区域。Step S12: mapping the initial pixel brightness value according to the spherical multi-spectral light source coordinates to obtain an image area.

在具体实现步骤S12的过程中,利用正轴等角圆柱投影的方式对初始像素明度值所对应的像素进行投影,以通过构造球形经纬坐标来映射像素的位置坐标,如公式(3)所示,得到对应的坐标,进而基于映射得到的坐标确定所述目标环境光采样图映射至打光设备的图像区域。In the specific implementation of step S12, the pixel corresponding to the initial pixel brightness value is projected by using the equirectangular cylindrical projection method to map the pixel position coordinates by constructing spherical longitude and latitude coordinates, as shown in formula (3), and the corresponding coordinates are obtained, and then the target ambient light sampling map is determined to be mapped to the image area of the lighting device based on the mapped coordinates.

具体的,球形上任意一经纬度坐标λ,φ在正轴等角圆柱投影坐标轴上的坐标为公式(3)所示。Specifically, the coordinates of any longitude and latitude coordinates λ, φ on the sphere on the coordinate axes of the orthogonal conformal cylindrical projection are shown in formula (3).

公式(3):Formula (3):

其中,x,y分别为目标环境光采样图中压缩后的明度值的二维平面坐标,λ,λ0分别为维度坐标与赤道纬度。φ为经度坐标。Among them, x, y are the two-dimensional plane coordinates of the compressed brightness value in the target ambient light sampling map, λ, λ0 are the latitude coordinates and the equatorial latitude, respectively. φ is the longitude coordinate.
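A sketch of the pixel-to-sphere mapping used in step S12, assuming the target ambient light sampling map is a 2:1 equirectangular panorama; the image size, radian conventions and function names are illustrative assumptions.

```python
import numpy as np

def pixel_to_lonlat(x: np.ndarray, y: np.ndarray, width: int, height: int):
    """Map pixel coordinates of an equirectangular panorama to (longitude, latitude) in radians."""
    lon = (x / width) * 2.0 * np.pi - np.pi         # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (y / height) * np.pi        # latitude in [-pi/2, pi/2]
    return lon, lat

def lonlat_to_pixel(lon: np.ndarray, lat: np.ndarray, width: int, height: int):
    """Inverse mapping: project a light source direction on the sphere back onto the panorama."""
    x = (lon + np.pi) / (2.0 * np.pi) * width
    y = (np.pi / 2.0 - lat) / np.pi * height
    return x, y
```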

步骤S13:对所述图像区域进行处理,确定预期像素值。Step S13: Process the image area to determine expected pixel values.

In the specific implementation of step S13, because the light sources of the multispectral lighting system are sparse and unevenly distributed, and the area mapping of the equirectangular projection is itself non-uniform, the area of every sampling point should be taken into account during the coordinate mapping. That is, the sampling region corresponding to each light source in the image area of the target ambient light sampling map can be determined with a spherical Voronoi diagram, cone mapping, or a similar method, and the projection areas and pixel colors are substituted into formula (4) to obtain the expected pixel values of the target ambient light sampling map. In other words, a Delaunay triangulation of the control points is generated according to the nearest-coordinate principle for the pixels of the target ambient light sampling map, so that the distance from any interior point to its own control point is smaller than its distance to any other control point; the expected pixel value of each light source is then taken as the average of all pixel values belonging to its cell of the triangulation.

This ensures that the construction satisfies the constraints of the Delaunay triangulation. A perpendicular bisector is drawn for each edge of the triangulated network; the intersections of the perpendicular bisectors are the vertices of the Thiessen (Voronoi) polygons, and connecting the vertices on the perpendicular bisectors forms the Thiessen polygons. Partitioning the interior points among these polygons yields the Voronoi diagram.

其中,内点是指目标环境光采样图上的各像素点,控制点是指光源系统上各光源的三维坐标投影到二维图像坐标后的点。Among them, the internal points refer to the pixel points on the target ambient light sampling map, and the control points refer to the points after the three-dimensional coordinates of each light source on the light source system are projected to the two-dimensional image coordinates.

Formula (4):

C = (m1*c1 + m2*c2 + ... + mk*ck) / (m1 + m2 + ... + mk)        (4)

where C is the expected pixel value of the target ambient light sampling map. Let S be the sampling region corresponding to the current sampling point, containing pixels s1, ..., sk with projection areas m1, ..., mk and corresponding pixel colors c1, ..., ck; that is, mk is the projection area of pixel k and ck is the pixel color of pixel k.
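A minimal sketch of formula (4): the expected pixel value of a light source is the projection-area-weighted mean of the pixel colors inside its sampling region (its Voronoi cell); the array shapes are illustrative.

```python
import numpy as np

def expected_pixel_value(colors: np.ndarray, areas: np.ndarray) -> np.ndarray:
    """colors: (k, 3) pixel colors c_1..c_k in the cell; areas: (k,) projection areas m_1..m_k."""
    weights = areas / np.sum(areas)
    return weights @ colors   # C = sum_i(m_i * c_i) / sum_i(m_i)
```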

步骤S14:利用所述预期像素值所测度的灯光对色卡进行虚拟打光得到,得到对应的色卡像素。Step S14: Virtually illuminate the color card using the light measured by the expected pixel value to obtain corresponding color card pixels.

在具体实现步骤S14的过程中,由不同的目标环境光采样图预期像素值所测度的灯光对色卡进行虚拟打光得到对应的色卡像素。In the specific implementation of step S14, the color card is virtually illuminated by the light measured by the expected pixel values of different target ambient light sampling images to obtain corresponding color card pixels.

It should be noted that virtual lighting means that, for a color card color C in any color card image, the lightness value LC = f(I, O) in the color lookup table is used as the mapping between that card color and the color output O obtained when the card reflects a specific light source input I.

步骤S203:基于所述色卡像素确定每一虚拟场景的第一重打光系数。Step S203: Determine a first relighting coefficient of each virtual scene based on the color card pixels.

需要说明的是,具体实现步骤S203基于所述色卡像素确定每一虚拟场景的第一重打光系数的过程,包括以下步骤:It should be noted that the specific implementation of step S203 of determining the first relighting coefficient of each virtual scene based on the color card pixels includes the following steps:

步骤S21:针对每一虚拟场景,利用所述色卡像素与给定光源色彩像素进行计算,得到第一重打光系数。Step S21: For each virtual scene, the color card pixels and the given light source color pixels are used to perform calculations to obtain a first relighting coefficient.

在具体实现步骤S21的过程中,通过给定光源色彩像素与预期色卡像素间做非负最小二乘法,即通过Pij和LijkPij和Lijk代入公式(5)进行计算,以得到第一重打光系数αkIn the specific implementation of step S21, non -negative least square method is performed between the given light source color pixel and the expected color card pixel, that is, Pij and Lijk are substituted into formula (5) for calculation to obtain the first relighting coefficient αk .

Formula (5):

α = argmin over α ≥ 0 of || Lα − P ||²        (5)

where Pij is the expected value of color card color i in color channel j; Lijk is the value, in color channel j, of color card color i illuminated by light color k; P is the vectorized form of Pij; and Lα is the correspondingly vectorized form of Lijk weighted by the coefficients αk.

需要说明的是,考虑到第一重打光系数αk不能为负,因此采用非负最小二乘法得到最优系数。It should be noted that, considering that the first lightening coefficient α k cannot be negative, the non-negative least squares method is used to obtain the optimal coefficient.
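A hedged sketch of solving formula (5) with a non-negative least-squares solver; here L is assumed to be the matrix whose column k stacks the per-channel color card responses Lijk under light k, and P the matching vectorization of the expected card colors Pij.

```python
import numpy as np
from scipy.optimize import nnls  # non-negative least squares

def solve_relighting_coefficients(L: np.ndarray, P: np.ndarray) -> np.ndarray:
    """Return the coefficients alpha >= 0 minimizing ||L @ alpha - P||_2."""
    alpha, _residual = nnls(L, P)
    return alpha
```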

步骤S204:基于所述每一虚拟场景与第一重打光系数之间的对应关系构建环境光还原模型。Step S204: constructing an ambient light restoration model based on the corresponding relationship between each virtual scene and the first relighting coefficient.

在具体实现步骤S204的过程中,构建目标环境光采样图对应的虚拟场景,与该目标环境光采样图对应的第一重打光系数之间的对应关系,进而构建环境光还原模型。In the specific implementation of step S204, a correspondence between a virtual scene corresponding to the target ambient light sampling graph and a first relighting coefficient corresponding to the target ambient light sampling graph is constructed, thereby constructing an ambient light restoration model.

需要说明的是,具体实现步骤S102基于所述拍摄指令中选择的目标虚拟场景,从环境光还原模型确定对应的目标重打光系数的过程包括以下步骤:It should be noted that the specific implementation of step S102 includes the following steps:

步骤S31:遍历所述环境光还原模型中每一虚拟场景与第一重打光系数之间的对应关系,确定与所述目标虚拟场景对应的重打光系数。Step S31: traverse the correspondence between each virtual scene in the ambient light restoration model and the first relighting coefficient to determine the relighting coefficient corresponding to the target virtual scene.

在具体实现步骤S31的过程中,对所述环境光还原模型中每一虚拟场景与第一重打光系数之间的对应关系进行遍历,查找与所述目标虚拟场景对应的第一重打光系数。In the specific implementation of step S31, the correspondence between each virtual scene in the ambient light restoration model and the first relighting coefficient is traversed to find the first relighting coefficient corresponding to the target virtual scene.

步骤S32:将与所述目标虚拟场景对应的第一重打光系数作为目标重打光系数。Step S32: using the first relighting coefficient corresponding to the target virtual scene as the target relighting coefficient.

在具体实现步骤S32的过程中,将查找得到的第一重打光系数作为目标重打光系数。In the specific implementation of step S32, the first relighting coefficient obtained by searching is used as the target relighting coefficient.
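A minimal sketch of steps S31 and S32, assuming the ambient light restoration model is stored as a mapping from a virtual-scene identifier to its first relighting coefficients; the data types are assumptions.

```python
from typing import Dict
import numpy as np

def lookup_target_coefficients(model: Dict[str, np.ndarray], target_scene: str) -> np.ndarray:
    """Traverse the scene-to-coefficient correspondence and return the target relighting coefficients."""
    for scene_id, coefficients in model.items():
        if scene_id == target_scene:
            return coefficients
    raise KeyError(f"no relighting coefficients stored for scene '{target_scene}'")
```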

步骤S103:将所述目标重打光系数作为像素明度值。Step S103: taking the target relighting coefficient as the pixel brightness value.

步骤S104:基于所述像素明度值对所述拍摄指令中的目标物进行重打光。Step S104: re-lighting the target object in the shooting instruction based on the pixel brightness value.

在具体实现步骤S104的过程中,利用所述所述像素明度值对光源响应曲线进行矫正,即可得到所有光源的强度信息;将所有光源的强度信息作为光源输入亮度,即可进行光照还原;利用还原后的光照可对所述拍摄指令中的目标物进行重打光。In the specific implementation process of step S104, the light source response curve is corrected using the pixel brightness value to obtain the intensity information of all light sources; the intensity information of all light sources is used as the light source input brightness to restore the lighting; the restored lighting can be used to re-light the target object in the shooting instruction.
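A hedged sketch of step S104: the target relighting coefficients, used as pixel brightness values, are passed through the inverse lamp response curve (such as the f_inv fitted earlier) to obtain the drive intensity of every light source; the clipping range is an assumption.

```python
import numpy as np

def coefficients_to_drive_levels(target_coefficients: np.ndarray, f_inv) -> np.ndarray:
    """Correct the desired brightness against the lamp response curve to get input intensities."""
    drive = f_inv(target_coefficients)   # invert the measured response curve
    return np.clip(drive, 0.0, 1.0)      # keep the intensities within the lamps' valid input range
```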

步骤S105:将重打光拍摄得到的目标物图像添加至所述目标虚拟场景中,得到目标虚拟图像。Step S105: adding the target object image obtained by re-lighting and photographing to the target virtual scene to obtain a target virtual image.

在具体实现步骤S105的过程中,利用拍摄设备拍摄得到的目标物图像,将其放至目标虚拟场景中,使得得到目标虚拟图像从视觉上还原出目标物图像的真实效果。In the specific implementation of step S105, the target object image is captured by a shooting device and placed in the target virtual scene, so that the target virtual image visually restores the real effect of the target object image.
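A hedged sketch of step S105: compositing the relit target image into the rendered virtual scene with an alpha mask; the mask (e.g. from keying or matting) is an assumption, since the text does not specify how the target is separated from its background.

```python
import numpy as np

def composite(relit_target: np.ndarray, scene: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """Blend float images in [0, 1]; alpha has shape (H, W, 1) with 1 where the target is."""
    return alpha * relit_target + (1.0 - alpha) * scene
```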

需要说明的是,目标物图像可为人体或者物品图像。It should be noted that the target image may be a human body or an object image.

在本发明实施例中,根据不同的虚拟场景所对应的目标环境光采样图在多光谱光源的打光设备中还原环境光,以构建环境光还原模型,通过环境光还原模型确定目标虚拟场景对应的目标重打光系数;基于目标重打光系数对所述拍摄指令中的目标物进行重打光,使得虚拟环境对真实物体进行重打光效果更加真实,将重打光拍摄得到的目标物图像添加至所述目标虚拟场景中,得到目标虚拟图像。能够提高上述方式得到的图像的真实性。In an embodiment of the present invention, the ambient light is restored in the lighting device of the multi-spectral light source according to the target ambient light sampling diagram corresponding to different virtual scenes to construct an ambient light restoration model, and the target relighting coefficient corresponding to the target virtual scene is determined by the ambient light restoration model; the target object in the shooting instruction is relighted based on the target relighting coefficient, so that the relighting effect of the virtual environment on the real object is more realistic, and the target object image obtained by the relighting shooting is added to the target virtual scene to obtain a target virtual image. The authenticity of the image obtained by the above method can be improved.

基于上述本发明实施例示出的图像的处理方法,相应的,本发明实施例还对应公开了另一种图像处理装置,如图3所示,所述装置包括:Based on the image processing method shown in the above embodiment of the present invention, the embodiment of the present invention also discloses another image processing device, as shown in FIG3 , the device includes:

确定单元301,用于在接收到任意一用户输入的拍摄指令时,基于所述拍摄指令中选择的目标虚拟场景,从环境光还原模型确定对应的目标重打光系数,所述环境光还原模型由于构建单元304构建得到;The determining unit 301 is used to determine the corresponding target relighting coefficient from the ambient light restoration model based on the target virtual scene selected in the shooting instruction when receiving a shooting instruction input by any user, and the ambient light restoration model is constructed by the constructing unit 304;

The lighting unit 302 is configured to take the target relighting coefficient as the pixel brightness value, and to relight the target object in the shooting instruction based on the pixel brightness value;

处理单元303,用于将重打光拍摄得到的目标物图像添加至所述目标虚拟场景中,得到目标图像。The processing unit 303 is used to add the target object image obtained by re-lighting and photographing to the target virtual scene to obtain the target image.

需要说明的是,上述本发明实施例公开的图像处理装置中的各个单元具体的原理和执行过程,与上述本发明实施示出的图像的处理方法相同,可参见上述本发明实施例公开的图像的处理方法中相应的部分,这里不再进行赘述。It should be noted that the specific principles and execution processes of each unit in the image processing device disclosed in the above embodiment of the present invention are the same as the image processing method shown in the above embodiment of the present invention. Please refer to the corresponding parts of the image processing method disclosed in the above embodiment of the present invention, and will not be repeated here.

在本发明实施例中,根据不同的虚拟场景所对应的目标环境光采样图在多光谱光源的打光设备中还原环境光,以构建环境光还原模型,通过环境光还原模型确定目标虚拟场景对应的目标重打光系数;基于目标重打光系数对所述拍摄指令中的目标物进行重打光,使得虚拟环境对真实物体进行重打光效果更加真实,将重打光拍摄得到的目标物图像添加至所述目标虚拟场景中,得到目标虚拟图像。能够提高上述方式得到的图像的真实性。In an embodiment of the present invention, the ambient light is restored in the lighting device of the multi-spectral light source according to the target ambient light sampling diagram corresponding to different virtual scenes to construct an ambient light restoration model, and the target relighting coefficient corresponding to the target virtual scene is determined by the ambient light restoration model; the target object in the shooting instruction is relighted based on the target relighting coefficient, so that the relighting effect of the virtual environment on the real object is more realistic, and the target object image obtained by the relighting shooting is added to the target virtual scene to obtain a target virtual image. The authenticity of the image obtained by the above method can be improved.

可选的,基于上述本发明实施例示出的图像处理装置,所述确定单元301,具体用于:遍历所述环境光还原模型中每一虚拟场景与第一重打光系数之间的对应关系,确定与所述目标虚拟场景对应的重打光系数;Optionally, based on the image processing device shown in the above embodiment of the present invention, the determination unit 301 is specifically used to: traverse the correspondence between each virtual scene in the ambient light restoration model and the first relighting coefficient, and determine the relighting coefficient corresponding to the target virtual scene;

将与所述目标虚拟场景对应的第一重打光系数作为目标重打光系数。The first relighting coefficient corresponding to the target virtual scene is used as the target relighting coefficient.

可选的,基于上述本发明实施例示出的图像处理装置,所述构建单元304,用于:利用多种色彩灯珠对拍摄得到的色卡图像进行校准,得到校准后的色卡图像;Optionally, based on the image processing device shown in the above embodiment of the present invention, the construction unit 304 is used to: calibrate the captured color card image using multiple color lamp beads to obtain a calibrated color card image;

针对每一虚拟场景对应的目标环境光采样图,基于预处理后的目标环境光采样图对校准后的色卡图像进行虚拟打光,得到对应的色卡像素;For each virtual scene corresponding to the target ambient light sampling map, the calibrated color card image is virtually illuminated based on the preprocessed target ambient light sampling map to obtain the corresponding color card pixels;

基于所述色卡像素确定每一虚拟场景的第一重打光系数;Determining a first relighting coefficient for each virtual scene based on the color card pixels;

基于所述每一虚拟场景与第一重打光系数之间的对应关系构建环境光还原模型。An ambient light restoration model is constructed based on the corresponding relationship between each virtual scene and the first relighting coefficient.

可选的,基于上述本发明实施例示出的图像处理装置,针对每一虚拟场景对应的目标环境光采样图,基于预处理后的目标环境光采样图对校准后的色卡图像进行虚拟打光,得到对应的色卡像素的构建单元304,具体用于:针对每一虚拟场景对应的目标环境光采样图,对所述目标环境光采样图进行预处理,得到压缩的初始像素明度值;Optionally, based on the image processing device shown in the above embodiment of the present invention, for each target ambient light sampling map corresponding to a virtual scene, the calibrated color card image is virtually illuminated based on the preprocessed target ambient light sampling map to obtain a corresponding color card pixel construction unit 304, which is specifically used to: for each target ambient light sampling map corresponding to a virtual scene, preprocess the target ambient light sampling map to obtain a compressed initial pixel brightness value;

根据球体多光谱光源坐标对所述初始像素明度值进行映射,得到图像区域;Mapping the initial pixel brightness value according to the spherical multi-spectral light source coordinates to obtain an image area;

对所述图像区域进行处理,确定预期像素值;Processing the image region to determine expected pixel values;

利用所述预期像素值所测度的灯光对色卡进行虚拟打光得到,得到对应的色卡像素。The color card is virtually illuminated using the light measured by the expected pixel value to obtain corresponding color card pixels.

An embodiment of the present invention further discloses an electronic device, which is configured to run a program, wherein the program, when run, executes the image processing method disclosed above in FIG. 1 and FIG. 2.

An embodiment of the present invention further discloses a computer storage medium, which comprises a stored program, wherein, when the program runs, the device on which the storage medium is located is controlled to execute the image processing method disclosed above in FIG. 1 and FIG. 2.

在本公开的上下文中,计算机存储介质可以是有形的介质,其可以包含或存储以供指令执行系统、装置或设备使用或与指令执行系统、装置或设备结合地使用的程序。机器可读介质可以是机器可读信号介质或机器可读储存介质。机器可读介质可以包括但不限于电子的、磁性的、光学的、电磁的、红外的、或半导体系统、装置或设备,或者上述内容的任何合适组合。机器可读存储介质的更具体示例会包括基于一个或多个线的电气连接、便携式计算机盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦除可编程只读存储器(EPROM或快闪存储器)、光纤、便捷式紧凑盘只读存储器(CD-ROM)、光学储存设备、磁储存设备、或上述内容的任何合适组合。In the context of the present disclosure, a computer storage medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, device, or equipment. A machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or equipment, or any suitable combination of the foregoing. A more specific example of a machine-readable storage medium may include an electrical connection based on one or more lines, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

本说明书中的各个实施例均采用递进的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。尤其,对于系统或系统实施例而言,由于其基本相似于方法实施例,所以描述得比较简单,相关之处参见方法实施例的部分说明即可。以上所描述的系统及系统实施例仅仅是示意性的,其中所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。本领域普通技术人员在不付出创造性劳动的情况下,即可以理解并实施。Each embodiment in this specification is described in a progressive manner, and the same or similar parts between the embodiments can refer to each other, and each embodiment focuses on the differences from other embodiments. In particular, for the system or system embodiment, since it is basically similar to the method embodiment, the description is relatively simple, and the relevant parts can refer to the partial description of the method embodiment. The system and system embodiments described above are merely schematic, wherein the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the scheme of this embodiment. Ordinary technicians in this field can understand and implement it without paying creative labor.

专业人员还可以进一步意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、计算机软件或者二者的结合来实现,为了清楚地说明硬件和软件的可互换性,在上述说明中已经按照功能一般性地描述了各示例的组成及步骤。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本发明的范围。Professionals may further appreciate that the units and algorithm steps of each example described in conjunction with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. In order to clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been generally described in the above description according to function. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Professionals and technicians may use different methods to implement the described functions for each specific application, but such implementation should not be considered to be beyond the scope of the present invention.

对所公开的实施例的上述说明,使本领域专业技术人员能够实现或使用本发明。对这些实施例的多种修改对本领域的专业技术人员来说将是显而易见的,本文中所定义的一般原理可以在不脱离本发明的精神或范围的情况下,在其它实施例中实现。因此,本发明将不会被限制于本文所示的这些实施例,而是要符合与本文所公开的原理和新颖特点相一致的最宽的范围。The above description of the disclosed embodiments enables one skilled in the art to implement or use the present invention. Various modifications to these embodiments will be apparent to one skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention will not be limited to the embodiments shown herein, but rather to the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

Claims (8)

1. A method for processing an image, characterized in that the method comprises:
receiving a shooting instruction input by any user;
based on a target virtual scene selected in the shooting instruction, determining a corresponding target relighting coefficient from an ambient light restoration model, the ambient light restoration model being used to store a correspondence between each virtual scene and a first relighting coefficient;
using the target relighting coefficient as a pixel brightness value;
relighting a target object in the shooting instruction based on the pixel brightness value; and
adding a target object image obtained by relighting and photographing to the target virtual scene to obtain a target virtual image;
wherein a construction process of the ambient light restoration model comprises:
calibrating a captured color card image by using lamp beads of multiple colors to obtain a calibrated color card image;
for a target ambient light sampling map corresponding to each virtual scene, virtually lighting the calibrated color card image based on the preprocessed target ambient light sampling map to obtain corresponding color card pixels;
determining the first relighting coefficient of each virtual scene based on the color card pixels; and
constructing the ambient light restoration model based on the correspondence between each virtual scene and the first relighting coefficient.

2. The method according to claim 1, characterized in that determining the corresponding target relighting coefficient from the ambient light restoration model based on the target virtual scene selected in the shooting instruction comprises:
traversing the correspondence between each virtual scene and the first relighting coefficient in the ambient light restoration model to determine the relighting coefficient corresponding to the target virtual scene; and
using the first relighting coefficient corresponding to the target virtual scene as the target relighting coefficient.

3. The method according to claim 1, characterized in that, for the target ambient light sampling map corresponding to each virtual scene, virtually lighting the calibrated color card image based on the preprocessed target ambient light sampling map to obtain the corresponding color card pixels comprises:
for the target ambient light sampling map corresponding to each virtual scene, preprocessing the target ambient light sampling map to obtain compressed initial pixel brightness values;
mapping the initial pixel brightness values according to spherical multispectral light source coordinates to obtain an image region;
processing the image region to determine expected pixel values; and
virtually lighting the color card with the light measured by the expected pixel values to obtain the corresponding color card pixels.

4. An image processing apparatus, characterized in that the apparatus comprises:
a determining unit, configured to, upon receiving a shooting instruction input by any user, determine a corresponding target relighting coefficient from an ambient light restoration model based on a target virtual scene selected in the shooting instruction, the ambient light restoration model being constructed by a constructing unit and being used to store a correspondence between each virtual scene and a first relighting coefficient;
a lighting unit, configured to use the target relighting coefficient as a pixel brightness value and to relight a target object in the shooting instruction based on the pixel brightness value;
a processing unit, configured to add a target object image obtained by relighting and photographing to the target virtual scene to obtain a target image;
wherein the constructing unit is configured to: calibrate a captured color card image by using lamp beads of multiple colors to obtain a calibrated color card image; for a target ambient light sampling map corresponding to each virtual scene, virtually light the calibrated color card image based on the preprocessed target ambient light sampling map to obtain corresponding color card pixels; determine the first relighting coefficient of each virtual scene based on the color card pixels; and construct the ambient light restoration model based on the correspondence between each virtual scene and the first relighting coefficient.

5. The apparatus according to claim 4, characterized in that the determining unit is specifically configured to: traverse the correspondence between each virtual scene and the first relighting coefficient in the ambient light restoration model to determine the relighting coefficient corresponding to the target virtual scene; and use the first relighting coefficient corresponding to the target virtual scene as the target relighting coefficient.

6. The apparatus according to claim 4, characterized in that the constructing unit that, for the target ambient light sampling map corresponding to each virtual scene, virtually lights the calibrated color card image based on the preprocessed target ambient light sampling map to obtain the corresponding color card pixels is specifically configured to: for the target ambient light sampling map corresponding to each virtual scene, preprocess the target ambient light sampling map to obtain compressed initial pixel brightness values; map the initial pixel brightness values according to spherical multispectral light source coordinates to obtain an image region; process the image region to determine expected pixel values; and virtually light the color card with the light measured by the expected pixel values to obtain the corresponding color card pixels.

7. An electronic device, characterized in that the electronic device is configured to run a program, wherein the image processing method according to any one of claims 1 to 3 is executed when the program runs.

8. A computer storage medium, characterized in that the storage medium comprises a stored program, wherein, when the program runs, a device on which the storage medium is located is controlled to execute the image processing method according to any one of claims 1 to 3.
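The relighting flow recited in claims 1 and 2 amounts to looking up a per-scene coefficient and driving the lighting rig with that coefficient as pixel brightness values. The Python sketch below only illustrates that flow; the `ambient_light_model` dictionary, the `drive_lights`, `capture`, and `composite` callables, and the example coefficient values are hypothetical and are not taken from the patent.

```python
import numpy as np

# Hypothetical ambient light restoration model: each virtual scene maps to its
# first relighting coefficient, one brightness value per multispectral lamp.
ambient_light_model = {
    "sunset_beach": np.array([0.82, 0.45, 0.31]),  # illustrative values only
    "neon_street":  np.array([0.20, 0.65, 0.90]),
}

def target_relighting_coefficient(model, target_scene):
    """Traverse the scene-to-coefficient correspondence and return the
    coefficient of the target virtual scene (claims 1 and 2)."""
    for scene, coefficient in model.items():
        if scene == target_scene:
            return coefficient
    raise KeyError(f"no relighting coefficient stored for {target_scene!r}")

def relight_and_composite(model, target_scene, drive_lights, capture, composite):
    """Use the target relighting coefficient as pixel brightness values,
    relight the target object, and add the captured image to the scene."""
    coefficient = target_relighting_coefficient(model, target_scene)
    drive_lights(coefficient)      # set lamp brightness from the coefficient
    subject_image = capture()      # photograph the relit target object
    return composite(subject_image, target_scene)
```

In practice the capture and compositing steps would be handled by the camera and rendering pipeline of the studio setup; they are passed in as callables here only to keep the sketch self-contained.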
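Claim 3 (and the matching limitation of claim 6) walks through the preprocessing of the target ambient light sampling map, after which claim 1 derives the first relighting coefficient from the virtually lit color card pixels. The sketch below is a minimal reading of those steps, assuming an equirectangular sampling map, gamma compression for the "compressed initial pixel brightness values", a nearest-pixel lookup at each lamp's spherical coordinate, and a least-squares fit for the coefficient; none of these choices is specified by the patent.

```python
import numpy as np

def preprocess_sampling_map(hdr_map, gamma=2.2):
    """Compress the HDR ambient light sampling map of a virtual scene into
    initial pixel brightness values (gamma compression is an assumption)."""
    return np.clip(hdr_map, 0.0, None) ** (1.0 / gamma)

def expected_pixel_values(brightness_map, lamp_directions):
    """Map the compressed brightness values onto the spherical coordinates of
    the multispectral lamps and read out one expected pixel value per lamp
    (nearest-pixel lookup in an equirectangular map; a simplification)."""
    height, width = brightness_map.shape[:2]
    theta = np.arccos(np.clip(lamp_directions[:, 2], -1.0, 1.0))    # polar angle
    phi = np.arctan2(lamp_directions[:, 1], lamp_directions[:, 0])  # azimuth
    v = np.clip(theta / np.pi * (height - 1), 0, height - 1).astype(int)
    u = np.clip((phi + np.pi) / (2 * np.pi) * (width - 1), 0, width - 1).astype(int)
    return brightness_map[v, u]

def first_relighting_coefficient(color_card_pixels, calibrated_card):
    """Fit the first relighting coefficient so that the virtually lit color
    card pixels reproduce the calibrated color card (least squares is an
    assumed solver; the patent only states that the coefficient is
    determined from the color card pixels)."""
    coefficient, *_ = np.linalg.lstsq(color_card_pixels, calibrated_card, rcond=None)
    return coefficient
```

Here `color_card_pixels` would be a (patches × lamps) matrix of virtually lit responses and `calibrated_card` the (patches × channels) reference values from the calibrated color card image; both names are placeholders introduced for this illustration.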
CN202211704923.6A 2022-12-29 2022-12-29 Image processing method and device, electronic equipment and storage medium Active CN116017167B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211704923.6A CN116017167B (en) 2022-12-29 2022-12-29 Image processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211704923.6A CN116017167B (en) 2022-12-29 2022-12-29 Image processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN116017167A (en) 2023-04-25
CN116017167B (en) 2024-09-27

Family

ID=86024441

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211704923.6A Active CN116017167B (en) 2022-12-29 2022-12-29 Image processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116017167B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116229375B (en) * 2023-05-06 2023-08-25 山东卫肤药业有限公司 Internal environment imaging method based on non-light source incubator

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111413837B (en) * 2020-05-27 2020-10-30 福清市鸿扬电子科技有限公司 Portable photographic light filling device
CN114119779A (en) * 2021-10-29 2022-03-01 浙江凌迪数字科技有限公司 Method for generating material map through multi-angle polishing shooting and electronic device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112546633A (en) * 2020-12-10 2021-03-26 网易(杭州)网络有限公司 Virtual scene processing method, device, equipment and storage medium
CN113409186A (en) * 2021-06-30 2021-09-17 上海科技大学 Single picture re-polishing method, system, terminal and storage medium based on priori knowledge

Also Published As

Publication number Publication date
CN116017167A (en) 2023-04-25

Similar Documents

Publication Publication Date Title
US6628298B1 (en) Apparatus and method for rendering synthetic objects into real scenes using measurements of scene illumination
CN110033510B (en) Method and device for establishing color mapping relation for correcting rendered image color
JP3962588B2 (en) 3D image processing method, 3D image processing apparatus, 3D image processing system, and 3D image processing program
CN100550053C (en) Determine the scene distance in the digital camera images
CN112819941B (en) Method, apparatus, device and computer readable storage medium for rendering water surface
US20040095385A1 (en) System and method for embodying virtual reality
US20190156563A1 (en) Image processing apparatus
CN111105365B (en) Color correction method, medium, terminal and device for texture image
CN110084873B (en) Method and apparatus for rendering three-dimensional model
CN105577982A (en) Image processing method and terminal
Gruyer et al. Modeling and validation of a new generic virtual optical sensor for ADAS prototyping
US11022861B2 (en) Lighting assembly for producing realistic photo images
CN110033509B (en) Method for constructing three-dimensional face normal based on diffuse reflection gradient polarized light
CN102023465A (en) Balancing luminance disparity in a display by multiple projectors
CN116017167B (en) Image processing method and device, electronic equipment and storage medium
CN112446943A (en) Image rendering method and device and computer readable storage medium
Apollonio et al. Photogrammetry driven tools to support the restoration of open-air bronze surfaces of sculptures: an integrated solution starting from the experience of the Neptune Fountain in Bologna
CN106412416A (en) Image processing method, device and system
JP7633790B2 (en) Texture acquisition system, texture acquisition device, and texture acquisition program
US20210217227A1 (en) Image processing method, apparatus and device
CN106231193A (en) A kind of image processing method and terminal
JP2007272847A (en) Lighting simulation method and image composition method
CN112700527A (en) Method for calculating object surface roughness map
CN105991987A (en) Image processing method, equipment and system
CN108377383B (en) Multi-projection 3D system light field contrast adjusting method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant