CN103973991B - Automatic exposure method for judging lighting scene based on B-P neural network - Google Patents
Automatic exposure method for judging lighting scene based on B-P neural network
- Publication number
- CN103973991B (application CN201410198357.5A)
- Authority
- CN
- China
- Prior art keywords
- brightness
- image
- neural network
- vector
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Studio Devices (AREA)
Abstract
The invention discloses an automatic exposure method that judges the lighting scene with a B-P neural network, comprising the following steps. S1: obtain an original image through a video acquisition system. S2: divide the original image into multiple regions. S3: compute the average image brightness of each region to obtain a brightness vector. S4: design a B-P neural network and feed it the brightness vector to judge the lighting scene. S5: compute the ideal brightness of the image from the network's judgment. S6: take the deviation between the actual brightness of the original image and the ideal brightness as the initial input of a PID algorithm, and use the PID algorithm to obtain the ideal controllable quantity corresponding to the ideal brightness. S7: from the ideal controllable quantity, derive the exposure time t and analog gain coefficient g, and transmit t and g to the sensor of the video acquisition system to realize automatic exposure. The method judges the lighting scene accurately, uses a simple algorithm, and can be widely applied.
Description
Technical Field
The invention belongs to the technical field of video image processing, and in particular relates to an automatic exposure method that judges the lighting scene with a B-P neural network.
Background Art
Automatic exposure refers to the automatic adjustment of aperture, shutter, signal gain and so on, in order to obtain a clear image whose colors are close to the real object. Engineers in related fields have studied automatic exposure methods since the birth of the point-and-shoot camera. Early methods used analog exposure, which was fast and required little computation, but could not perform scene analysis; even with exposure compensation it remained difficult to reach the ideal image brightness, and the mechanical and electronic structure was complex and unstable.
After the birth of digital image technology, electronic exposure gradually replaced analog exposure: the light intensity can be obtained simply by analyzing the digital image, after which the aperture and shutter are adjusted to expose automatically. As embedded devices such as microcontrollers, DSPs, and FPGAs came into use in digital image acquisition, acquisition devices gained basic image-analysis capability. To obtain clearer images with colors closer to the real object, engineers introduced simple scene analysis into automatic exposure, adopting a dynamic ideal brightness value to adapt to external lighting conditions such as backlight and front light, so that exposure is driven by the lighting scene. This produced exposure methods that judge the lighting scene.
Setting the ideal exposure value through image analysis based on image entropy is one such scene-based exposure method; it gives the image subject good exposure under any lighting condition, but the image-entropy algorithm is relatively complex, which limits its applicability and prevents wide use. There are also automatic exposure methods based on face recognition and fuzzy logic; these are likewise complex to implement and cannot be widely applied.
Summary of the Invention
In view of the above problems, the object of the present invention is to provide an automatic exposure method that judges the lighting scene with a B-P neural network. It judges the lighting scene accurately, uses a simple algorithm, properly exposes the image subject under backlight and front-light conditions, and meets the automatic exposure requirements of the video-processing front end in network camera applications.
To achieve the above object, the present invention provides an automatic exposure method for judging the lighting scene based on a B-P neural network, characterized by comprising the following steps:
S1: obtain an original image through a video acquisition system;
S2: divide the original image into multiple regions according to the degree of attention paid to different parts of the picture, and number the regions in order of decreasing attention;
S3: compute the average image brightness of each divided region and, combining each region's position number with its average brightness, obtain a brightness vector;
S4: design a B-P neural network according to the number of regions the original image is divided into, use the brightness vector as the network's input, judge the lighting scene, and output the judgment result;
S5: compute the ideal brightness of the image according to the judgment result of the neural network;
S6: take the deviation between the actual brightness of the original image and the ideal brightness of S5 as the initial input of a PID algorithm, and use the PID algorithm to obtain the ideal controllable quantity required for the ideal brightness;
S7: from the ideal controllable quantity obtained in S6, derive the ideal exposure time t and the ideal analog gain coefficient g, and transmit them to the sensor of the video acquisition system to realize automatic exposure.
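The seven steps above form one control iteration, which can be sketched as follows. All function and parameter names are illustrative stand-ins, not identifiers from the patent; the step-specific computations are injected as callables:

```python
def auto_exposure_step(x, y_actual, judge_scene, ideal_brightness, pid, gain_table):
    """One pass through steps S4-S7: `x` is the brightness vector from
    S2-S3, `y_actual` the measured image brightness, and the remaining
    arguments are callables for the scene judgment (S4), ideal-brightness
    computation (S5), PID controller (S6), and exposure-gain lookup (S7)."""
    scene = judge_scene(x)                 # S4: 0 = normal, 1 = special lighting
    y_ideal = ideal_brightness(x, scene)   # S5
    ln_p = pid(y_ideal - y_actual)         # S6: deviation -> controllable quantity
    return gain_table(ln_p)                # S7: ln p -> (exposure time t, gain g)
```

With trivial stand-ins, `auto_exposure_step([100]*13, 40, lambda x: 0, lambda x, s: max(x)/2, lambda e: 0.1*e, lambda lp: (lp, 1.0))` exercises the whole chain.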
Further, in step S4 the number of input-layer variables of the B-P neural network equals the number of regions the original image is divided into, and the designed network's output formula is:
y = f(X), y ∈ {0, 1}
where X is the vector representation of the network's input data and y is the network's output, y = 0 or 1; that is, the B-P neural network outputs one of two judgments, 0 for normal lighting and 1 for a special lighting condition. This design quickly fixes the structure of the neural network and keeps it simple, which benefits its training process.
Further, in step S5, when the network outputs 0, the ideal brightness is taken as one half of the maximum component of the brightness vector;
when the network outputs 1, the ideal brightness is computed by the following formula:
where yz is the rated brightness of the image subject, X is the brightness vector of the image, W is the rated-brightness weight vector, yp is the evaluated brightness of the picture, η is the brightness error coefficient of the image subject, l is the correction coefficient with value range 0 to 1, ymid is one half of the maximum component of the brightness vector (the mid-level brightness of the image), yl is the computed ideal brightness, Wi is the i-th component of the rated-brightness weight vector W, and T denotes transpose.
Here the rated-brightness weight vector W characterizes how much attention each image region receives; it has the same number of components as the brightness vector X, in one-to-one correspondence. The more attention a region receives, the larger the corresponding component of the weight vector.
Further, the evaluated brightness yp of the picture is computed by the following formula:
yp = X * Wz^T
where yp is the evaluated brightness of the picture, X is the brightness vector of the image, and Wz is the evaluation-brightness weight vector, designed from experience.
Here the evaluation-brightness weight vector Wz likewise characterizes how much attention each image region receives; it has the same number of components as the brightness vector X, in one-to-one correspondence, and the more attention a region receives, the larger the corresponding component. Wz differs from the rated-brightness weight vector W in the values of its components.
Further, in step S6 the controllable quantity is expressed by the following formula:
ln p = ln(tg)
where ln p is the controllable quantity, p = tg, t is the exposure time, and g is the analog gain coefficient.
Further, in step S7 the exposure time t and analog gain coefficient g are obtained by querying an exposure-gain table. There are several ways to obtain t and g from the controllable quantity ln p, such as table lookup, iteration, or other control algorithms; querying an exposure-gain table is among the most convenient and fast.
In the method of the present invention, a B-P neural network is designed to judge the lighting scene and is trained so that it has a degree of flexibility and intelligence; with the actual image brightness vector as its input, the lighting scene is judged accurately. Only after the lighting scene has been judged accurately can a suitable exposure be chosen for it. The method of the present invention judges the lighting scene accurately, uses a simple algorithm, is highly general, and can be widely used.
Brief Description of the Drawings
Fig. 1 is a flowchart of the automatic exposure method of an embodiment of the present invention;
Fig. 2 shows the division and numbering of the original image regions in the embodiment;
Fig. 3 is the structure diagram of the B-P neural network designed in the embodiment;
Fig. 4 is the neuron model of the B-P neural network in the embodiment;
Fig. 5 shows the effect of brightness correction based on the ideal brightness in the embodiment;
Fig. 6 is the block diagram of the system that obtains the ideal controllable quantity with a PID controller in the embodiment;
Fig. 7 is the final exposure-gain table of the ov9712 camera in the embodiment;
Fig. 8 compares the results of applying the automatic exposure method of the embodiment in different lighting scenes.
Detailed Description
To make the object, technical solution, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here only explain the present invention and do not limit it.
Fig. 1 is a flowchart of the automatic exposure method of an embodiment of the present invention; the method comprises:
S1: obtain an original image through the video acquisition system. Light carrying the scene signal is focused by the lens onto a CMOS image sensor, which generally comprises three parts: the CMOS photosensitive element, the analog gain stage, and the A/D converter. After a certain exposure time, the light striking the CMOS sensor is converted by the photosensitive element into an analog electrical signal, the analog gain stage amplifies the signal, and finally the A/D converter converts it into a digital signal, which is output to the digital video acquisition controller.
S2: divide the original image into multiple regions according to the degree of attention paid to different parts of the picture, and number the regions in order of decreasing attention. Considering the characteristics of pictures under front light or backlight, as well as viewers' habits and experience, the picture is divided into 13 regions: the middle of the picture is considered the region receiving the most attention and gets the smaller numbers, while the periphery receives less attention and gets the larger numbers. The regions are numbered 1 to 13 in order of decreasing attention; the division and numbering are shown in Fig. 2.
S3: compute the average image brightness of each divided region and, combining each region's position number with its average brightness, obtain the brightness vector. For the 13 regions divided in S2, compute the mean image brightness of each region and denote it xi, where i is the region's position number, i = 1, 2, ..., 13; the brightness vector is then X = (x1, x2, ..., x13).
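Step S3 can be sketched as follows. Since Fig. 2 is not reproduced here, the 13-region geometry below is a hypothetical stand-in (a 4x4 grid whose four centre cells are merged into region 1, leaving 12 border cells as regions 2 to 13), not the patent's exact layout:

```python
def region_layout(h, w):
    """Hypothetical 13-region partition of an h x w image standing in for
    Fig. 2: the merged centre block is region 1 (most attention), the 12
    border cells are regions 2..13.  Each region is (y0, y1, x0, x1)."""
    ys = [0, h // 4, h // 2, 3 * h // 4, h]
    xs = [0, w // 4, w // 2, 3 * w // 4, w]
    centre = (ys[1], ys[3], xs[1], xs[3])
    border = [(ys[i], ys[i + 1], xs[j], xs[j + 1])
              for i in range(4) for j in range(4)
              if not (i in (1, 2) and j in (1, 2))]
    return [centre] + border

def brightness_vector(img, regions):
    """Step S3: mean brightness of each region, in region-number order,
    giving X = (x1, ..., x13).  `img` is a 2-D list of pixel brightness."""
    return [sum(img[y][x] for y in range(y0, y1) for x in range(x0, x1))
            / float((y1 - y0) * (x1 - x0))
            for (y0, y1, x0, x1) in regions]
```

For a uniform image every component of X equals the pixel brightness, and the vector has 13 components.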
S4: design the B-P neural network according to the number of regions, use the brightness vector as its input, judge the lighting scene, and output the judgment result. The number of input-layer variables of the network equals the number of regions, 13 in this embodiment, and the designed output formula is:
y=f(X),y=0,1y=f(X), y=0,1
X为神经网络输入数据的向量表示,y为神经网络的输出,y=0或者1,即所述B-P神经网络输出的判定结果为两种,0代表正常光照,1代表特殊光照情况。X is the vector representation of the input data of the neural network, y is the output of the neural network, y=0 or 1, that is, there are two types of judgment results output by the B-P neural network, 0 represents normal lighting, and 1 represents special lighting conditions.
With 13 input-layer variables and 1 output neuron fixed, only the number of hidden layers and the number of neurons in each remain to be determined for the structure of the B-P neural network to be settled.
A B-P neural network must be trained before it can show flexibility and intelligence and judge lighting scenes accurately. There are several training algorithms; in this embodiment the error back-propagation algorithm is preferred, but the present invention does not restrict the training algorithm.
The hidden part may be one or more layers. With multiple hidden layers the error back-propagation algorithm does not train well, while with a single hidden layer the network structure is simple and the algorithm is mature, so a single hidden layer is used here; the present invention does not restrict the number of hidden layers.
The number of hidden-layer neurons is usually set from experience; common reference formulas are:
l < n - 1
l = log2 n
l = √(n + m) + a
where l is the number of hidden-layer neurons, n the number of input-layer neurons, m the number of output neurons, and a a constant in the range 0-10. Taking a = 3 from experience gives 6 hidden-layer neurons; the resulting structure of the neural network designed in this embodiment is shown in Fig. 3.
Next, the parameters of each neuron must be determined: the input weights, the neuron offset value, and the transfer function. The neuron model is shown in Fig. 4; taking it as neuron j, its inputs are pj1 to pjn, its offset is bj, its transfer function is fj, and its weights are wj1 to wjn.
The transfer function is generally a sigmoid function or a logistic function. With a sigmoid transfer function, the B-P network can complete training efficiently using the steepest-descent method, so all neurons here use the sigmoid as their transfer function.
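A forward pass through the 13-6-1 network of Figs. 3 and 4, with sigmoid units throughout, can be sketched as below. The trained weights and offsets come from Tables 2 and 3 and are not reproduced here, so the parameters are passed in; the function names are illustrative:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, hidden, output):
    """Forward pass: `hidden` is a list of 6 (weights, offset) pairs for
    the hidden layer, `output` is one (weights, offset) pair; every
    neuron computes sigmoid(sum(w_i * p_i) + b) as in Fig. 4."""
    h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b) for ws, b in hidden]
    ws, b = output
    return sigmoid(sum(w * hi for w, hi in zip(ws, h)) + b)

def judge_scene(x, hidden, output):
    """Step S4: threshold the continuous output to the binary label y,
    0 = normal lighting, 1 = special lighting."""
    return 1 if forward(x, hidden, output) >= 0.5 else 0
```

With all-zero parameters every unit outputs sigmoid(0) = 0.5, a convenient mechanical check of the wiring.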
Determining each neuron's input weights and offset value is the network's training process. The B-P network is trained with the error back-propagation algorithm, which is mature and has been integrated into the Matlab toolbox; it can be programmed in Matlab, or the toolbox can be used directly. In this embodiment the Matlab toolbox is called directly to complete training.
The sample data set used by the error back-propagation training greatly affects both the training speed and the accuracy of the final weights and offsets. To obtain the parameters of each neuron accurately and quickly, it is essential to choose the training samples correctly; considering that the samples must cover all possible parameter situations as well as the network's intended application, the sample set of this embodiment is designed as in Table 1.
Table 1. Neural network training sample set
The image brightness of each sample designed in Table 1 is converted into a brightness vector as the network input: the brightness vector X = (x1, x2, ..., x13) of each sample is computed, and each of its components is fed to the B-P network.
Substituting the sample data of Table 1 into the error back-propagation module of the Matlab toolbox, the B-P network of this embodiment is trained; training yields the input weights and offset value of each neuron.
Table 2 shows the trained input weights from the input layer to the hidden layer and the offset value of each hidden-layer neuron.
Table 3 shows the input weights from the hidden layer to the output layer and the offset value of the output-layer neuron.
Table 2. Neural network input-layer-to-hidden-layer information
Table 3. Neural network hidden-layer-to-output-layer information
Applying the neuron parameters of Tables 2 and 3 to the B-P network of this embodiment, backlit photos, front-lit photos, photos under normal lighting, and photos whose scene cannot be judged by image entropy were randomly selected as test samples to verify the accuracy of the designed network's scene judgment. The results show that the B-P network of this embodiment judges the lighting scene of the pictures accurately, including the lighting scenes of pictures for which image-entropy-based judgment fails.
S5: compute the ideal brightness of the image from the network's judgment. When the network outputs 0, the ideal brightness is one half of the maximum component of the brightness vector; when it outputs 1, the ideal brightness is computed by the following formula:
where yz is the rated brightness of the image subject, X is the brightness vector of the image, W is the rated-brightness weight vector, yp is the evaluated brightness of the picture, η is the brightness error coefficient of the image subject, l is the correction coefficient with value range 0 to 1, ymid is one half of the maximum component of the brightness vector (the mid-level brightness of the image), yl is the computed ideal brightness, Wi is the i-th component of the rated-brightness weight vector W, and T denotes transpose.
The evaluated brightness yp of the picture is computed by the following formula:
yp = X * Wz^T
where yp is the evaluated brightness of the picture, X is the brightness vector of the image, X = (x1, x2, ..., x13), and Wz is the evaluation-brightness weight vector, designed from experience.
Both the rated-brightness weight vector W and the evaluation-brightness weight vector Wz characterize how much attention each image region receives; their components match those of the brightness vector X in number and in one-to-one correspondence, and the more attention a region receives, the larger the corresponding component of the weight vector. Wz and W differ in the values of their components; from experience, Wz = [4,1,1,1,1,0,0,0,0,0,0,0,0] and W = [4,2,2,2,2,1,1,1,1,1,1,1,1], and the correction coefficient is taken as l = 0.5.
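With the embodiment's weight vectors, the explicitly stated pieces of step S5, the evaluated brightness yp = X * Wz^T and the mid-level brightness ymid, can be computed directly; the remaining quantities of the special-lighting formula (yz, η, yl) are omitted from this sketch:

```python
W_Z = [4, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]  # evaluation-brightness weights
W   = [4, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1]  # rated-brightness weights

def evaluated_brightness(x):
    """yp = X * Wz^T: inner product of the brightness vector with the
    evaluation-brightness weight vector."""
    return sum(xi * wi for xi, wi in zip(x, W_Z))

def mid_brightness(x):
    """ymid: one half of the maximum component of the brightness vector;
    this is also the ideal brightness when the network outputs 0."""
    return max(x) / 2.0
```

Note that only the five most-attended regions (the centre and its four neighbours) contribute to yp, since the remaining Wz components are zero.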
Fig. 5 compares photos taken in backlit and front-lit scenes before and after correction to the ideal brightness: Figs. 5(a) and 5(c) are the originals taken under backlight and front light respectively, and Figs. 5(b) and 5(d) are the pictures corrected according to the ideal brightness. The corrected brightness suits the human eye better than the original, showing that the ideal-brightness formula and the designs of the weight vectors W and Wz in this embodiment are reasonable.
S6: take the deviation between the actual brightness of the original image and the ideal brightness of S5 as the initial input of the PID algorithm, and use the PID algorithm to obtain the ideal controllable quantity required for the ideal brightness.
First, the controllable quantity is expressed by the following formula:
ln p = ln(tg)
where ln p is the controllable quantity, p = tg, t is the exposure time, and g is the analog gain coefficient.
The formula ln p = ln(tg) for the controllable quantity is obtained as follows:
Taking factors such as DC offset into account, the traditional expression for the image brightness y after actual exposure is:
y = πkltgr² + c1tg + c2
where k is a proportionality coefficient; l is the illuminance from the external environment to the lens, which depends on several factors such as the lighting conditions, the surface reflectance of the photographed object, and the shooting angle; r is the aperture radius of the image acquisition device, generally fixed; t is the exposure time and g the analog gain coefficient, both adjustable; and c1, c2 are fixed constants determined by the structure of the image acquisition system.
Black-level correction removes the DC offset produced in the photosensitive and analog gain stages, so c1tg + c2 = 0 and the actual image brightness y is expressed as:
y = πkltgr²
The formula for the actual image brightness is then simplified as follows: let k' = πkr², p = tg, and q = l, so that y = k'pq. The aperture radius r is constant and the external scene illuminance l is uncontrollable, while the exposure time t and analog gain coefficient g are controllable; thus p is a controllable quantity, q an uncontrollable quantity, and k' a constant.
At this point the relationship between the actual image brightness y and the quantities p and q is nonlinear; taking logarithms of both sides gives:
ln y = ln k' + ln p + ln q
in which ln y is linear in ln p and ln q, with ln p a controllable quantity and ln q an uncontrollable one.
The effect of this simplification is to separate out the factors that influence the actual image brightness y: among the controllable terms, y depends only on p = tg, so adjusting the exposure time t and the analog gain coefficient g adjusts the actual image brightness.
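The additive separation that motivates working in the log domain can be checked numerically; the values below are arbitrary illustrations, not measurements:

```python
import math

# y = k' * p * q with k' = pi*k*r^2 constant, p = t*g controllable,
# q = l uncontrollable; taking logs makes the controllable term additive.
k_prime, t, g, q = 2.5, 0.02, 4.0, 120.0
p = t * g
y = k_prime * p * q
lhs = math.log(y)
rhs = math.log(k_prime) + math.log(p) + math.log(q)
assert abs(lhs - rhs) < 1e-12
```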
The ideal brightness is the picture brightness output under the ideal controllable quantity. From the ideal brightness of the picture computed in step S5, the ideal controllable quantity is obtained, and from it the exposure time t and analog gain coefficient g needed to reach the ideal brightness. In this embodiment the PID algorithm yields the ideal controllable quantity; its initial input is the deviation between the actual brightness of the original image and the image's ideal brightness.
Fig. 6 is the block diagram of the system that obtains the ideal controllable quantity with a PID controller, where ye in ln ye is the ideal brightness; yout in ln yout is a brightness value close to the ideal brightness output after the PID algorithm; yin in ln yin is the image brightness input to the digital video acquisition device after the PID computation; ln p is the controllable quantity; ln q + ln k' acts as a system disturbance; G(s) is the transfer function from the digital video acquisition device up to the automatic exposure control; and H(s) is the feedback transfer function used to compute the error. The deviation between the ideal brightness ye and yout is repeatedly fed to the PID algorithm; in each iteration the proportional coefficient kp, integral coefficient ki, and differential coefficient kd adjust the controllable quantity closer to the ideal value, i.e., ln yout closer to ln ye. When |ln yout - ln ye| < he, the controllable quantity ln p is taken as the ideal controllable quantity and the PID computation ends. Experiments show that a brightness threshold of he = ln 1.1 works well.
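The PID loop of Fig. 6 can be sketched as below. The gains kp, ki, kd are illustrative values, not the embodiment's tuned ones, and measure(ln_p) stands for exposing a new frame with the trial controllable quantity and reading back ln yout:

```python
import math

def ideal_controllable(ln_y_e, measure, h_e=math.log(1.1),
                       kp=0.5, ki=0.1, kd=0.05, max_iter=100):
    """Iterate the PID computation until |ln y_out - ln y_e| < h_e,
    then return ln p as the ideal controllable quantity (step S6)."""
    ln_p, integral, prev_err = 0.0, 0.0, 0.0
    for _ in range(max_iter):
        err = ln_y_e - measure(ln_p)        # deviation in the log domain
        if abs(err) < h_e:
            break
        integral += err
        ln_p += kp * err + ki * integral + kd * (err - prev_err)
        prev_err = err
    return ln_p
```

With a static plant measure = lambda ln_p: ln_p + 0.3 (the 0.3 standing in for the disturbance ln k' + ln q) and target ln ye = 1.0, the loop stops within a few iterations with the residual below ln 1.1.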
S7: According to the ideal controllable quantity obtained in S6, the ideal exposure time t and ideal analog gain coefficient g are obtained by a look-up table method; transmitting the obtained t and g to the sensor of the video acquisition system realizes automatic exposure.
There are several ways to obtain the exposure time t and analog gain coefficient g from the controllable quantity ln p, such as a look-up table, an iterative method, or other control algorithms; querying an exposure gain table is among the simplest. The exposure gain table differs between lens models, i.e. the functional relation between t and g differs from model to model. In this embodiment, the OV9712 camera produced by OmniVision is used to illustrate how the exposure gain table is obtained.
The OV9712 is a CMOS image sensor with a gain coefficient adjustment range of 1–31. Given the characteristics of a CMOS image sensor, to ensure that no banding or flicker appears under artificial lighting, the exposure time and gain coefficient must satisfy:
f_v = 2f / n
t = m / (2f)
t ≤ 1 / f_v
1 ≤ g ≤ 31
U = ln(t·g)
where U is the function P in the controllable quantity ln p; m and n are proportional coefficients: adjusting m changes the exposure time, and adjusting n changes the video frame rate f_v; f is the AC mains frequency; f_v is the video frame rate; t is the exposure time; and g is the analog gain coefficient. Here the AC frequency is f = 50 Hz and 1 ≤ g ≤ 31.
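The anti-flicker constraints above pin the exposure time to integer multiples of the mains half-period, capped by the frame period. A small sketch under those constraints (the function name `flicker_free_exposures` is illustrative, not from the patent):

```python
F_AC = 50.0  # mains frequency f (Hz)

def flicker_free_exposures(n):
    """For frame-rate divisor n, return (f_v, valid exposure times t).

    f_v = 2f/n is the video frame rate; t = m/(2f) for integer m, i.e. an
    integer multiple of the mains half-period, capped at t <= 1/f_v."""
    f_v = 2 * F_AC / n
    t_max = 1.0 / f_v
    times = []
    m = 1
    while m / (2 * F_AC) <= t_max:
        times.append(m / (2 * F_AC))
        m += 1
    return f_v, times
```

For n = 8 this gives f_v = 12.5 Hz and exposure times of 10 ms up to 80 ms, matching the 80 ms low-light setting described below.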
Since under different illuminance levels an image of suitable brightness can be obtained by adjusting the exposure time — that is, exposure time adjustment has the greater influence on image brightness — the exposure time table is determined first when building the exposure gain table, and the gain table is then computed under the constraints above to give the final exposure gain table.
Specifically, under low illuminance (below 10 lux), taking n = 8 lowers the frame rate to 12.5 Hz, and setting the exposure time to 80 ms yields an image of suitable brightness. Under strong light (above 2000 lux; ordinary indoor lighting is roughly 500 lux), the exposure time need not be considered: overexposure occurs no matter how short it is, so the exposure time may be set either above or below 10 ms. Following this approach, the exposure time table of the camera is determined first, and its gain table is then computed under the constraints, giving the final exposure gain table shown in Figure 7.
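The time-table-first construction can be sketched as follows. This is a hypothetical illustration, not the actual Figure 7 table (which is determined experimentally for the OV9712); the names `build_exposure_gain_table` and `lookup` are placeholders. It encodes the stated preference: spend U = ln(t·g) on exposure time first (g = 1) up to the flicker-free cap, then on analog gain.

```python
import math

def build_exposure_gain_table(n=8, f=50.0, g_max=31):
    """Build (U, t, g) rows: exposure time first at g = 1 up to the
    flicker-free cap t_max = 1/f_v, then analog gain at t = t_max."""
    t_max = n / (2 * f)                    # = 1 / f_v for f_v = 2f/n
    table = []
    for m in range(1, n + 1):              # time-first entries, g = 1
        t = m / (2 * f)
        table.append((math.log(t), t, 1))
    for g in range(2, g_max + 1):          # gain entries at the time cap
        table.append((math.log(t_max * g), t_max, g))
    return table

def lookup(table, u):
    """Return the (t, g) pair whose U = ln(t*g) is closest to u."""
    return min(table, key=lambda row: abs(row[0] - u))[1:]
```

For example, a requested U = ln(0.04) resolves to (t = 40 ms, g = 1), while the largest achievable U resolves to (t = 80 ms, g = 31).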
Figure 8 shows pictures obtained by applying the automatic exposure method of this embodiment in different lighting scenes. Figure 8(a) is an outdoor picture taken with the method; it suits the human eye, with no exposure overflow. Figure 8(b) is an indoor picture taken with the method; it suits the human eye, with neither underexposure nor exposure overflow. Figures 8(c) and (d) are photographs of the same object under the same conditions: Figure 8(c), a backlit scene shot without applying the method, is too dark, its details cannot be distinguished, and it is unsuitable for the human eye; Figure 8(d), shot with the method, is brighter than Figure 8(c) and its details can be distinguished. Figures 8(e) and (f) are pictures of the same object under the same scene conditions: Figure 8(e), a scene facing the light shot without applying the method, is too bright and its details cannot be distinguished, making it unsuitable for the human eye; Figure 8(f), shot with the method, is darker than Figure 8(e), its details can be distinguished, and it suits the human eye. The automatic exposure method of this embodiment applies to a variety of lighting scenes and obtains good image quality.
The automatic exposure method of the present invention meets the automatic exposure requirements of the video-processing front end in network camera applications, and its results can be applied to salient-object segmentation, object recognition, adaptive video compression, content-aware video scaling, image retrieval, and fields such as security surveillance and military guarding.
The above embodiments merely illustrate the present invention and do not limit it. Those of ordinary skill in the relevant art may make various changes and improvements without departing from the spirit and scope of the invention; all equivalent technical solutions therefore also fall within its scope, and the scope of patent protection shall be defined by the claims.
Claims (6)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410198357.5A CN103973991B (en) | 2014-05-12 | 2014-05-12 | A kind of automatic explosion method judging light scene based on B P neutral net |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103973991A CN103973991A (en) | 2014-08-06 |
CN103973991B true CN103973991B (en) | 2017-03-01 |
Family
ID=51242981
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410198357.5A Expired - Fee Related CN103973991B (en) | 2014-05-12 | 2014-05-12 | A kind of automatic explosion method judging light scene based on B P neutral net |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103973991B (en) |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104363390A (en) * | 2014-11-11 | 2015-02-18 | 广东中星电子有限公司 | Lens vignetting compensation method and system |
CN104320593B (en) * | 2014-11-19 | 2016-02-24 | 湖南国科微电子股份有限公司 | A kind of digital camera automatic exposure control method |
CN104754240B (en) * | 2015-04-15 | 2017-10-27 | 中国电子科技集团公司第四十四研究所 | Cmos image sensor automatic explosion method and device |
CN105611189A (en) * | 2015-12-23 | 2016-05-25 | 北京奇虎科技有限公司 | Automatic exposure parameter adjustment method and device and user equipment |
CN108777768B (en) * | 2018-05-31 | 2020-01-31 | 中国科学院西安光学精密机械研究所 | A Calibration-Based Fast Automatic Exposure Adjustment Method |
CN110708469B (en) * | 2018-07-10 | 2021-03-19 | 北京地平线机器人技术研发有限公司 | Method and device for adapting exposure parameters and corresponding camera exposure system |
CN110070009A (en) * | 2019-04-08 | 2019-07-30 | 北京百度网讯科技有限公司 | Road surface object identification method and device |
CN110602411A (en) * | 2019-08-07 | 2019-12-20 | 深圳市华付信息技术有限公司 | Method for improving quality of face image in backlight environment |
CN112861040B (en) * | 2019-11-27 | 2025-01-24 | 中科聚信信息技术(北京)有限公司 | Image processing method, image processing device and electronic device for network graph |
JP7026727B2 (en) * | 2020-05-20 | 2022-02-28 | Ckd株式会社 | Lighting equipment for visual inspection, visual inspection equipment and blister packaging machine |
CN111770285B (en) * | 2020-07-13 | 2022-02-18 | 浙江大华技术股份有限公司 | Exposure brightness control method and device, electronic equipment and storage medium |
CN112788250B (en) * | 2021-02-01 | 2022-06-17 | 青岛海泰新光科技股份有限公司 | Automatic exposure control method based on FPGA |
CN117793539B (en) * | 2024-02-26 | 2024-05-10 | 浙江双元科技股份有限公司 | Image acquisition method based on variable period and optical sensing device |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101452575B (en) * | 2008-12-12 | 2010-07-28 | 北京航空航天大学 | A Neural Network-Based Image Adaptive Enhancement Method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7184080B2 (en) * | 2001-06-25 | 2007-02-27 | Texas Instruments Incorporated | Automatic white balancing via illuminant scoring |
Non-Patent Citations (1)
Title |
---|
Research on Auto-focus and Auto-exposure Algorithms Based on Image Processing; Xu Peifeng; China Masters' Theses Full-text Database, Engineering Science and Technology II; 2005-12-15; Chapter 5 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103973991B (en) | A kind of automatic explosion method judging light scene based on B P neutral net | |
Peng et al. | Generalization of the dark channel prior for single image restoration | |
CN115223004B (en) | Method for generating image enhancement of countermeasure network based on improved multi-scale fusion | |
CN110769164B (en) | Method for automatically adjusting illumination level of target scene | |
CN101242476B (en) | Automatic correction method of image color and digital camera system | |
CN108401154A (en) | A kind of image exposure degree reference-free quality evaluation method | |
CN104113743B (en) | Colour TV camera AWB processing method and processing device under low-light (level) | |
CN107124561A (en) | A kind of bar code image exposure adjustment system and method based on CMOS | |
CN106488201A (en) | A kind of processing method of picture signal and system | |
CN102340673A (en) | White balance method for video camera aiming at traffic scene | |
CN107404647A (en) | Camera lens condition detection method and device | |
CN109660736A (en) | Method for correcting flat field and device, image authentication method and device | |
CN102831586A (en) | Method for enhancing image/video in real time under poor lighting condition | |
CN108364269A (en) | A kind of whitepack photo post-processing method based on intensified learning frame | |
CN107071308A (en) | A kind of CMOS is quickly adjusted to as system and method | |
CN113643214A (en) | Image exposure correction method and system based on artificial intelligence | |
CN105592258A (en) | Automatic focusing method and apparatus | |
JP2006031440A (en) | Image processing method, image processing apparatus, image processing program and image processing system | |
CN103295205A (en) | Low-light-level image quick enhancement method and device based on Retinex | |
CN112217988B (en) | Photovoltaic camera motion blur self-adaptive adjusting method and system based on artificial intelligence | |
CN118338134A (en) | Unmanned aerial vehicle auxiliary photographing method and system based on self-adaptive dimming technology | |
CN110602411A (en) | Method for improving quality of face image in backlight environment | |
Fan et al. | Multi-scale dynamic fusion for correcting uneven illumination images | |
Xiang et al. | Artificial intelligence controller for automatic multispectral camera parameter adjustment | |
CN106705942B (en) | A method of examine remote sensing image to handle quality |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
| CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20170301 |