
CN119277035A - Projector backlight adjustment method, device, equipment and medium - Google Patents


Info

Publication number
CN119277035A
Authority
CN
China
Prior art keywords
pixel
azimuth
projection
preset
pixels
Prior art date
Legal status
Granted
Application number
CN202411795777.1A
Other languages
Chinese (zh)
Other versions
CN119277035B (en)
Inventor
梁标
冀振
Current Assignee
Shenzhen Xin Zhi Lian Software Co ltd
Original Assignee
Shenzhen Xin Zhi Lian Software Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Xin Zhi Lian Software Co ltd
Priority to CN202411795777.1A
Publication of CN119277035A
Application granted
Publication of CN119277035B
Legal status: Active
Anticipated expiration


Landscapes

  • Transforming Electric Information Into Light Information (AREA)

Abstract


The present invention relates to a projector backlight adjustment method, device, equipment and medium, comprising: capturing, with a camera configured on the projector, the original projection picture image projected by the projection lens of the projector, to obtain a captured projection picture image in a target coding format for each frame, wherein the projection ratio of the camera is smaller than the projection ratio of the projection lens; determining, from preset bright-dark field information, the target field information satisfied by the backlight values of the pixels in each azimuth of the captured projection picture image, according to the calculated variance of the gray values of pixels in the same azimuth across multiple frames of captured projection picture images within a preset duration and a preset calibration variance; determining shading adjustment information for the original projection picture image according to the target field information; and determining the frequency and duty cycle of a pulse width modulation signal used for backlight adjustment according to the shading adjustment information, and adjusting, via a backlight driving module, the backlight brightness of the original projection picture image corresponding to the projection lens according to the frequency and the duty cycle.

Description

Projector backlight adjusting method, device, equipment and medium
Technical Field
The disclosure relates to the technical field of projectors, and in particular to a projector backlight adjustment method, device, equipment and medium.
Background
In the technical field of projection display, if a projector lacks the capability to adapt to ambient light, the display effect of the projection picture varies greatly under different illumination conditions, which affects the user's viewing experience. Conventional bright-dark field detection methods generally rely on simple threshold segmentation or region-based processing, and may perform poorly under illumination changes, atmospheric conditions or object occlusion, degrading the display effect of the projection picture.
Disclosure of Invention
The invention aims to provide a projector backlight adjustment method, device, equipment and medium that intelligently sense the projection environment light and adaptively adjust the projector backlight, so as to improve the display effect of the projection picture and the user's viewing experience. For a high-lumen projector, adaptively adjusting the backlight in a dark field also protects eyesight, saves energy and reduces heat.
To achieve the above object, a first aspect of embodiments of the present disclosure provides a projector backlight adjustment method, the method including:
Acquiring an original projection picture image projected by a projection lens of the projector through a camera arranged on the projector, and obtaining a shooting projection picture image of each frame of target coding format, wherein the projection ratio of the camera is smaller than that of the projection lens;
According to the calculated variance of the gray values of the pixels in the same direction in the multi-frame shooting projection picture image within a preset time length and a preset calibration variance, determining target field information which is met by the backlight values of the pixels in each direction in the shooting projection picture image from preset bright-dark field information;
determining shading adjustment information of the original projection picture image according to the target field information;
And determining the frequency and the duty ratio of a pulse width modulation signal for backlight adjustment according to the shading adjustment information, and adjusting the backlight brightness of the original projection picture image corresponding to the projection lens through a backlight driving module according to the frequency and the duty ratio.
In one possible implementation manner, the determining, according to the calculated variance of the gray values of the pixels in the same direction in the captured projection picture image and the preset calibration variance in the multiple frames in the preset time period, the target field information that the backlight values of the pixels in each direction in the captured projection picture image meet from the preset bright-dark field information includes:
According to the gradient of gray values of pixels in the multi-frame shooting projection picture images within the preset time length, shooting projection intersection points in all directions in each frame of shooting projection picture images are determined, and a plurality of shooting projection intersection points corresponding to each frame of shooting projection picture images are obtained;
Determining a selection range of selecting the pixels in the photographed projection picture image by taking each photographed projection intersection point as a reference point according to a projection ratio difference value of the projection ratio of the camera and the projection ratio of the projection lens, wherein the selection range enables the selection range corresponding to any photographed projection intersection point to cover an original projection intersection point, and the original projection intersection point is an intersection point of image boundary points on each side in each frame of the original projection picture image projected by the projection lens, wherein the intersection point is obtained by fitting;
Selecting pixels in the selected range by taking each shooting projection intersection point as the reference point from each frame of shooting projection picture image in the preset duration, and calculating the calculated variance of gray values of pixels selected in the same direction by a plurality of frames;
and determining target field information which is met by the backlight value of the pixel in each azimuth in the shot projection picture image from preset bright-dark field information according to the calculated variance corresponding to each azimuth and the preset calibration variance.
In one possible implementation manner, the determining, from preset bright-dark field information, target field information that is met by backlight values of the pixels in each azimuth in the captured projection picture image according to the calculated variance and the preset calibration variance corresponding to each azimuth includes:
calculating a variance difference between the calculated variance corresponding to each azimuth and the preset calibration variance;
If the variance difference is smaller than or equal to a preset difference threshold, determining that the target field information which is met by the backlight value of the pixel in the direction in the shot projection picture image is bright field information if the preset bright-dark field information is bright field information, or determining that the target field information which is met by the backlight value of the pixel in the direction in the shot projection picture image is dark field information if the preset bright-dark field information is dark field information;
and if the variance difference is larger than the preset difference threshold, determining that the target field information which is met by the backlight value of the pixel in the direction in the shot projection picture image is dark field information, or determining that the target field information which is met by the backlight value of the pixel in the direction in the shot projection picture image is bright field information.
In one possible implementation manner, the selecting pixels in the selection range from the captured projection picture images of each frame in the preset duration with each captured projection intersection point as the reference point, and calculating a calculation variance of gray values of pixels selected by multiple frames in the same direction includes:
Selecting pixels in the selected range by taking each shooting projection intersection point as the reference point from each frame of shooting projection picture image in the preset duration to obtain a first pixel corresponding to each azimuth;
Calculating the average value of the gray values of the first pixels corresponding to each direction of the shooting projection picture image of each frame;
Determining whether an abnormal pixel with abnormal gray values exists in a first pixel corresponding to each azimuth according to a preset average value threshold value and the average value corresponding to each azimuth;
Screening the first pixels corresponding to any azimuth under the condition that the abnormal pixels with abnormal gray values exist in the first pixels corresponding to any azimuth, so as to eliminate the abnormal pixels with abnormal gray values from the first pixels corresponding to the azimuth;
And under the condition that no abnormal pixel with abnormal gray value exists in the first pixel corresponding to the same azimuth in each frame of the shot projection picture image within the preset duration, calculating the calculated variance of the gray value of the first pixel selected by a plurality of frames in the same azimuth.
In one possible implementation manner, in the case that the abnormal pixel with the abnormal gray value exists in the first pixel corresponding to any azimuth, the filtering is performed on the first pixel corresponding to the azimuth to remove the abnormal pixel with the abnormal gray value from the first pixel corresponding to the azimuth, including:
The following steps are circularly executed:
Calculating the standard deviation of the gray value of a first pixel corresponding to any azimuth under the condition that the abnormal pixel with the abnormal gray value exists in the first pixel;
taking the average value corresponding to the first pixel corresponding to the azimuth as a center, taking the standard deviation of a preset multiple as an extension range, and determining the target pixel covered by the extension range;
Removing the target pixel to screen the first pixel corresponding to the azimuth, and removing the abnormal pixel with abnormal gray value from the first pixel corresponding to the azimuth;
After the target pixel is eliminated, calculating an average value of the first pixels which are not eliminated, and determining whether the first pixels corresponding to the azimuth have abnormal pixels with abnormal gray values or not according to a preset average value threshold and the average value corresponding to the azimuth until the first pixels corresponding to the azimuth do not have the abnormal pixels with abnormal gray values.
In one possible implementation manner, the azimuth includes an upper right corner, a lower right corner, an upper left corner and a lower left corner, and under the condition that no abnormal pixel exists in a first pixel corresponding to the same azimuth in the captured projection picture image in each frame within the preset duration, calculating a calculation variance of gray values of the first pixels selected by multiple frames in the same azimuth includes:
Under the condition that no abnormal pixel with abnormal gray value exists in the first pixel corresponding to the same azimuth in each frame of the shooting projection picture image in the preset duration, respectively selecting the shooting projection intersection points of the upper right corner, the lower right corner, the upper left corner and the lower left corner in each frame of the shooting projection picture image in the preset duration as the reference points, and obtaining the first pixel in the selection range to obtain the second pixel corresponding to each azimuth;
calculating the calculation variance of the gray value of the second pixel selected by the upper right corner of the multi-frame according to the gray value of the second pixel corresponding to the upper right corner of the shot projection picture image of each frame in the preset time period;
calculating the calculation variance of the gray value of the second pixel selected by the plurality of frames at the lower right corner according to the gray value of the second pixel corresponding to the lower right corner corresponding to the shot projection picture image of each frame within the preset duration;
Calculating the calculation variance of the gray value of the second pixel selected by the multi-frame at the upper left corner according to the gray value of the second pixel corresponding to the upper left corner corresponding to the shot projection picture image of each frame in the preset time period;
And calculating the calculation variance of the gray value of the second pixel selected by the plurality of frames at the lower left corner according to the gray value of the second pixel corresponding to the lower left corner corresponding to the shot projection picture image of each frame within the preset time period.
In one possible implementation manner, the determining, according to a preset average value threshold and the average value corresponding to each azimuth, whether the first pixel corresponding to the azimuth has an abnormal pixel with abnormal gray value includes:
if the average value difference value between the average value corresponding to any azimuth and the preset average value threshold exceeds a preset value, determining that an abnormal pixel with abnormal gray value exists in the first pixel corresponding to the azimuth;
if the average value difference value between the average value corresponding to any azimuth and the preset average value threshold value does not exceed the preset value, determining that the first pixel corresponding to the azimuth does not have an abnormal pixel with abnormal gray value.
In a second aspect of embodiments of the present disclosure, there is provided a projector backlight adjustment apparatus including:
The acquisition module is configured to acquire an original projection picture image projected by a projection lens of the projector through a camera arranged on the projector, so as to acquire a shooting projection picture image of each frame of target coding format, wherein the projection ratio of the camera is smaller than that of the projection lens;
The first determining module is configured to determine target field information which is met by the backlight value of the pixel in each azimuth in the shooting projection picture image from preset bright-dark field information according to the calculated variance of the gray value of the pixel in the same azimuth in the shooting projection picture image of a plurality of frames in preset time length and preset calibration variance;
a second determining module configured to determine shading adjustment information of the original projection screen image according to the target field information;
And the third determining module is configured to determine the frequency and the duty ratio of the pulse width modulation signal for backlight adjustment according to the shading adjustment information, and adjust the backlight brightness of the original projection picture image corresponding to the projection lens through the backlight driving module according to the frequency and the duty ratio.
A third aspect of the disclosed embodiments provides a computer readable storage medium having stored thereon a computer program which when executed by a processor implements the steps of the method of any of the first aspects.
In a fourth aspect of embodiments of the present disclosure, there is provided an electronic device, including:
A memory having a computer program stored thereon;
A processor for executing the computer program in the memory to implement the steps of the method of any of the first aspects.
The invention provides a projector backlight adjusting method, a projector backlight adjusting device, projector backlight adjusting equipment and a projector backlight adjusting medium. Compared with the prior art, the method has the following beneficial effects:
The technology can capture the original picture projected by the projection lens in real time through the camera arranged on the projector. Because the projection ratio of the camera is smaller than that of the projection lens, the camera can capture wider and finer picture details, and variance calculation is carried out on the pixel gray values in the same azimuth over the multi-frame captured pictures within the preset duration. By comparing with the preset calibration variance, the method can accurately identify light and shade change areas in the picture, and can determine, from the preset bright-dark field information and according to the gray level analysis result, the target field information that the backlight values of the pixels in each azimuth of the captured picture image should meet. Accurate identification and classification of picture brightness are thus realized. Based on the target field information, shading adjustment information for the original projection screen image can be formulated; this considers not only the overall brightness of the picture but also balances local brightness, thereby realizing dynamic adjustment of the picture brightness. From the shading adjustment information, the frequency and duty cycle of the pulse width modulation signal for backlight adjustment can be determined, and the backlight driving module accurately executes the backlight brightness adjustment instruction, thereby realizing precise control of the backlight brightness of the projection picture.
Additional features and advantages of the present disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings are included to provide a further understanding of the disclosure, and are incorporated in and constitute a part of this specification, illustrate the disclosure and together with the description serve to explain, but do not limit the disclosure. In the drawings:
Fig. 1 is a flowchart of a projector backlight adjusting method according to an embodiment of the specification.
Fig. 2 is a schematic diagram of a hardware configuration of a projector according to an embodiment of the specification.
Fig. 3 is a block diagram of a projector backlight adjusting apparatus according to an embodiment of the present specification.
Fig. 4 is a block diagram of another projector backlight adjusting apparatus according to an embodiment of the specification.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Specific embodiments of the present disclosure are described in detail below with reference to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating and illustrating the disclosure, are not intended to limit the disclosure.
The present disclosure provides a projector backlight adjustment method, and fig. 1 is a flowchart illustrating a projector backlight adjustment method according to an embodiment. The method comprises the following steps:
In step S11, an original projection picture image projected by a projection lens of the projector is photographed by a camera configured on the projector, so as to obtain a photographed projection picture image of each frame of target coding format, where the projection ratio of the camera is smaller than that of the projection lens;
Wherein, the projection ratio is the ratio between the projection distance and the projected picture width; it determines the size of the picture that can be projected at a particular distance. The projection distance is the shortest distance from the projector lens to the projection screen, and the picture width is the actual width of the projected picture. The smaller the projection ratio, the larger the width of the projected picture at the same projection distance.
The target encoding format is the specific format adopted by the image data during storage or transmission. For example, with YUV420P as the target encoding format, the image information can be split into luminance (Y) and chrominance (U, V) components for encoding, based on the visual system's different sensitivity to luminance and chrominance. The YUV format allows luminance and chrominance to be sampled and processed differently, thereby improving coding efficiency and image quality. YUV420P is a planar storage format of the YUV 4:2:0 sampling scheme: the luminance information (Y) and the chrominance information (U, V) are stored on separate planes, the Y component is stored for every pixel, while every 4 Y components share one group of U and V components. This sampling mode greatly reduces the storage requirement for chrominance information while maintaining good image quality.
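As an illustrative aid only (not part of the claimed method), the following Python sketch shows the plane sizes implied by the YUV420P layout described above; the 1920x1080 resolution is an arbitrary assumption.

```python
# Illustrative sketch only: plane sizes for a YUV420P (planar, 4:2:0) frame.
# The 1920x1080 resolution is an arbitrary example, not taken from the patent.

def yuv420p_plane_sizes(width, height):
    """Return the byte sizes of the Y, U and V planes for a YUV420P frame."""
    y_size = width * height                  # one luminance sample per pixel
    uv_size = (width // 2) * (height // 2)   # U and V are each subsampled 2x2
    return {"Y": y_size, "U": uv_size, "V": uv_size, "total": y_size + 2 * uv_size}

if __name__ == "__main__":
    print(yuv420p_plane_sizes(1920, 1080))
    # {'Y': 2073600, 'U': 518400, 'V': 518400, 'total': 3110400}
```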
In the embodiment of the disclosure, a camera configured on a projector is used to capture an original picture projected by a projection lens. Since the projection ratio of the camera is smaller than that of the projection lens, the camera can capture wider pictures within the same distance, so that other background detail information such as a projection wall surface and the like can be captured in all directions besides the original pictures. The pictures captured by the camera are then converted into digital signals and stored in a coded format according to the target coding format.
For example, assume that the projector has a projection ratio of 1.2 and the camera has a projection ratio of 0.8. When the projector projects a 100 inch picture at a distance of 3 meters, the camera at the same 3 meter distance covers a wider area than the projected picture. Thus, the camera can capture not only the whole projection picture, but also more details, such as background information of the projection wall surrounding the original picture.
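The example numbers can be checked with a short sketch, assuming the usual relation that picture width equals projection distance divided by projection ratio; the 3 meter distance and the 1.2 / 0.8 ratios are taken from the example above.

```python
# Illustrative check of the projection-ratio example above.
# projection ratio = projection distance / picture width, so width = distance / ratio.

def picture_width(distance_m, projection_ratio):
    return distance_m / projection_ratio

projector_width = picture_width(3.0, 1.2)  # 2.5 m wide projected picture
camera_width = picture_width(3.0, 0.8)     # 3.75 m wide field at the same distance

print("projector picture width: %.2f m" % projector_width)
print("camera field width:      %.2f m (wider, so it also captures the wall)" % camera_width)
```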
In step S12, determining, from preset bright-dark field information, target field information satisfying backlight values of pixels in each azimuth in the captured projection screen image according to a calculated variance of gray values of the pixels in the same azimuth in the captured projection screen image and a preset calibration variance of the pixels in a plurality of frames in a preset duration;
the gray value is the brightness value of each pixel point in the image, and the range is usually 0 (black) to 255 (white). The variance is a statistic that measures the degree of dispersion of the data distribution, and for pixel gray values in an image, a larger variance indicates a larger variance in pixel gray values, i.e., the more non-uniform the image. The calibration variance is a preset threshold value, and is used for judging whether the degree of dispersion of the gray value of the image reaches a certain standard. The bright-dark field information is preset information about the bright-dark distribution of an image, and generally includes information such as backlight values corresponding to different brightness regions.
In the embodiment of the disclosure, firstly, the variance of gray values of pixels in the same direction in a multi-frame shooting projection picture image within a preset time period is calculated. The variance reflects the degree to which the brightness of the pixel varies in that orientation. This variance is then compared to a preset calibrated variance. If the variance is greater than the nominal variance, this indicates that the pixel brightness in that orientation is changing significantly, possibly due to backlight non-uniformity or projection lens problems.
And then, according to preset bright-dark field information, determining target field information which is required to be met by backlight values of pixels in all directions in the shot projection picture image. The target field information is set based on the desired shading profile to characterize what brightness level (or darkness level) the pixels in each orientation should reach to achieve overall brightness uniformity.
Thus, the uneven brightness area in the picture can be accurately identified, and targeted adjustment can be performed according to preset bright-dark field information.
In step S13, according to the target field information, determining shading information of the original projection screen image;
The shading information is specific information about how to adjust the brightness of each area of the picture based on the comparison of the target field information and the original projection picture image. This generally includes an increase or decrease in luminance, a luminance adjustment ratio, and the like for each pixel or pixel region.
In the disclosed embodiment, the previously determined target field information (i.e., the ideal shading profile) is utilized to compare with the original projected picture image. By comparing the luminance value of each pixel or pixel region with the luminance value of the corresponding position in the target field information, the system can calculate the luminance increase/decrease amount or adjustment ratio of each pixel or region, i.e., shading information.
For example, assume that the luminance value of a certain region in the target field information is 200 (gray value), and the luminance value of a corresponding region in the original projection screen image is 150. The system calculates the brightness of this area to be increased by 50 (gray value) and generates the corresponding shading information. This information will be used to guide the subsequent backlight adjustment process.
In step S14, according to the shading adjustment information, the frequency and the duty ratio of the pulse width modulation signal for backlight adjustment are determined, and according to the frequency and the duty ratio, the backlight brightness of the original projection picture image corresponding to the projection lens is adjusted by the backlight driving module.
Among them, a pulse width modulation (PWM) signal adjusts average power or brightness by changing the width (duty ratio) of its pulses. In backlight adjustment, the PWM signal is used to control the brightness of the backlight.
The frequency is the repetition frequency of the PWM signal, i.e. the number of times the signal changes from high to low (or from low to high) per second. The duty cycle is the ratio of the duration of the high level (or low level) to the total period time in the PWM signal, which determines the magnitude of the average power or brightness. The backlight driving module is a circuit or component for receiving the PWM signal and adjusting the brightness of the backlight according to the signal.
In the embodiment of the disclosure, the frequency and the duty ratio of the PWM signal for backlight adjustment are calculated according to the shading adjustment information. For example, the shading adjustment information is converted into parameters of the PWM signal to ensure that the backlight can be adjusted to the desired brightness.
For example, if it is determined that the brightness of a certain area needs to be increased, the system generates a PWM signal with a larger duty cycle to drive the backlight. A larger duty ratio means that the backlight is lit for a longer time in each period, thereby raising the average luminance. Meanwhile, the frequency of the PWM signal is adjusted according to actual needs to ensure the stability and response speed of the backlight adjustment.
Once the frequency and duty cycle of the PWM signals are determined, these signals are sent to the backlight by the backlight driving module. The backlight driving module receives the signals and adjusts the brightness of the backlight lamp correspondingly, so that the accurate adjustment of the backlight brightness of the original projection picture image is realized.
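As a minimal sketch of this step, the mapping below converts a target backlight level into PWM parameters; the linear mapping, the 0-255 level scale and the fixed 20 kHz frequency are assumptions for illustration, not the concrete formula of the embodiment.

```python
# Minimal sketch: map a desired backlight level (0-255 gray scale) to PWM
# parameters. The linear mapping and the fixed 20 kHz frequency are assumptions;
# the embodiment does not fix a concrete formula.

def backlight_pwm(target_level, frequency_hz=20_000):
    """Return (frequency in Hz, duty cycle in [0, 1]) for a target backlight level."""
    target_level = max(0, min(255, target_level))
    duty_cycle = target_level / 255.0   # brighter target -> longer on-time per period
    return frequency_hz, duty_cycle

# Example: raise a region's brightness towards the gray value 200 from the text above.
freq, duty = backlight_pwm(200)
print("PWM frequency: %d Hz, duty cycle: %.1f%%" % (freq, duty * 100))
```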
According to the technical scheme, the original picture projected by the projection lens can be captured in real time by the camera arranged on the projector. Because the projection ratio of the camera is smaller than that of the projection lens, the camera can capture wider and finer picture details, and variance calculation is performed on the pixel gray values in the same azimuth over the multi-frame captured pictures within the preset duration. By comparing with the preset calibration variance, the method can accurately identify light and shade change areas in the picture, and can determine, from the preset bright-dark field information and according to the gray level analysis result, the target field information that the backlight values of the pixels in each azimuth of the captured picture image should meet. Accurate identification and classification of picture brightness are thus achieved. Based on the target field information, shading adjustment information for the original projection picture image can be formulated; this considers not only the overall brightness of the picture but also balances local brightness, thereby realizing dynamic adjustment of picture brightness. From the shading adjustment information, the frequency and duty cycle of the pulse width modulation signal for backlight adjustment can be determined, and the backlight driving module accurately executes the backlight brightness adjustment instruction, thereby realizing precise control of the backlight brightness of the projection picture. Dynamic adjustment of the backlight brightness avoids the visual discomfort caused by an excessively bright or excessively dark screen, so the user can use the equipment comfortably for a long time. The energy-saving effect is also significant: by dynamically adjusting the backlight brightness according to the actual ambient illumination, especially in darker environments where the backlight brightness can be automatically reduced, energy consumption is effectively lowered. In addition, because the backlight no longer runs at maximum brightness all the time, the service life of the backlight assembly is prolonged, and the maintenance cost and replacement frequency of the equipment are reduced.
In one possible implementation manner, in step S12, the determining, from preset bright-dark field information, target field information that is met by a backlight value of a pixel in each azimuth in the captured projection picture image according to a calculated variance of a gray value of the pixel in the same azimuth in the captured projection picture image and a preset calibration variance of the pixel in a plurality of frames in a preset duration includes:
In step S121, determining a shooting projection intersection point in each azimuth in each frame of the shooting projection picture image according to the gradient of the gray value of each pixel in the multi-frame shooting projection picture image in the preset time period, so as to obtain a plurality of shooting projection intersection points corresponding to each frame of the shooting projection picture image;
The gray value gradient is the change rate of gray values of adjacent pixels in the image and is used for detecting edges or change areas in the image. The shot projection intersection point is a point formed by the intersection of the projection light ray and the image plane in the shot projection screen image, and is usually located at the edge or the characteristic area of the image.
In the embodiment of the disclosure, firstly, gray value gradients of pixels in a multi-frame shooting projection picture image within a preset time period are calculated. By comparing the gray value differences of adjacent pixels, areas of the image where the gray value changes significantly, which areas often correspond to the intersection of the projected light with the image plane, can be identified. For example, in an image containing projected text, the pixel gray value gradient of the text edge will be large, so these edge points can be identified as the shot projection intersection points. Through this process, the system determines a plurality of shot projection intersections for each frame of image.
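A minimal sketch of this gradient-based detection is shown below, assuming a NumPy implementation; the gradient operator and the fixed threshold are illustrative assumptions rather than the exact procedure of the embodiment.

```python
# Sketch only: flag high-gradient pixels as candidate shot projection intersection
# points. The use of numpy.gradient and the fixed threshold are illustrative
# assumptions, not the exact operator of the embodiment.
import numpy as np

def candidate_intersections(gray_frame, threshold=50.0):
    """Return (row, col) coordinates whose gray-value gradient magnitude exceeds threshold."""
    gy, gx = np.gradient(gray_frame.astype(np.float64))
    magnitude = np.hypot(gx, gy)
    return np.argwhere(magnitude > threshold)

# Example: a synthetic 100x100 frame with a bright square yields edge candidates.
frame = np.zeros((100, 100))
frame[30:70, 30:70] = 200.0
print(len(candidate_intersections(frame)), "candidate edge pixels")
```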
In step S122, determining, according to a projection ratio difference between the projection ratio of the camera and the projection ratio of the projection lens, a selection range of selecting the pixel in the captured projection picture image by using each captured projection intersection point as a reference point, where the selection range enables the selection range corresponding to any captured projection intersection point to cover an original projection intersection point, where the original projection intersection point is an intersection point of an image boundary line obtained by fitting image boundary points on each side in each frame of the original projection picture image projected by the projection lens;
the original projection intersection point is an intersection point of image boundary lines obtained by fitting image boundary points on each side in an original projection picture image projected by a projection lens.
In the embodiment of the disclosure, a pixel selection range taking each shooting projection intersection point as a reference point is determined according to a difference value between the projection ratio of the camera and the projection ratio of the projection lens. The purpose of this step is to ensure that the selected pixel range accurately reflects the information in the original projected image, while taking into account the difference in viewing angle between the camera and the projection lens.
For example, if the projection ratio of the camera is smaller than that of the projection lens, the captured projection image will be smaller than the original projection image, and thus the system needs to expand the pixel selection range accordingly to ensure coverage to the original projection intersection point. Specifically, the system may calculate the relative position of each shot projection intersection with the image boundary, and then adjust the size and shape of the selection ranges according to the projection ratio difference, so that each selection range can cover one original projection intersection. This helps to ensure the accuracy and reliability of the backlight value calculation.
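As an illustrative assumption of how the selection range might scale with the projection ratio difference, the sketch below widens a base radius in proportion to that difference; the linear rule and the 10 pixel base radius are not prescribed by the embodiment.

```python
# Sketch only: widen the pixel selection radius around each shot projection
# intersection in proportion to the projection ratio difference, so that the
# selection range still covers the original projection intersection. The linear
# scaling rule and the 10 pixel base radius are illustrative assumptions.

def selection_radius(camera_ratio, lens_ratio, base_radius_px=10):
    ratio_diff = max(0.0, lens_ratio - camera_ratio)   # camera ratio is the smaller one
    return int(round(base_radius_px * (1.0 + ratio_diff)))

print(selection_radius(0.8, 1.2))  # 14 pixels: wider than the 10 pixel base range
```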
In step S123, selecting pixels in the selected range from the captured projection picture images of each frame within the preset duration, and calculating a calculated variance of gray values of pixels selected by a plurality of frames in the same direction, wherein each captured projection intersection point is taken as the reference point;
the calculated variance of the gray values of the pixels selected by the multiple frames in the same direction is used for measuring the degree of dispersion of the gray values of the pixels in the direction.
In the embodiment of the disclosure, for each frame of the projection image within a preset duration, all pixels within a previously determined pixel range are selected by taking each shooting projection intersection point as a reference point. Then, for each azimuth (i.e., the position where each shot projection intersection is located), the system calculates the variance of the gray value of the selected pixel in that azimuth in the multi-frame image.
For example, assume that there is a sequence of 10 shot projection screen images, each of which identifies 4 shot projection intersections. For each intersection point a fixed range is chosen that contains its surrounding pixels. Next, the system calculates the variance of the gray value of the selected pixel in the azimuth for each intersection in the 10 frames of images. This helps to understand the stability or degree of variation in the pixel gray values at each orientation.
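The per-azimuth variance over multiple frames can be sketched as follows; the frame count, pixel counts and random gray values are made-up illustrative data.

```python
# Sketch only: for each of the four azimuths, compute the variance of the gray
# values selected across multiple frames. The frame count, pixel count and random
# gray values are made-up illustrative data.
import numpy as np

rng = np.random.default_rng(0)
# 10 frames x 4 azimuths x 25 selected pixels per azimuth.
selected = rng.integers(40, 60, size=(10, 4, 25)).astype(np.float64)

# Pool the pixels of all frames per azimuth, then take the variance per azimuth.
pooled = selected.transpose(1, 0, 2).reshape(4, -1)   # shape: (4, 10 * 25)
per_azimuth_variance = pooled.var(axis=1)
print(per_azimuth_variance)   # four calculated variances, one per azimuth
```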
In step S124, according to the calculated variance and the preset calibration variance corresponding to each azimuth, the target field information that the backlight value of the pixel in each azimuth in the captured projection picture image satisfies is determined from the preset bright-dark field information.
In the embodiment of the disclosure, the calculated variance of the gray value of each azimuth pixel is compared with a preset calibration variance. The calibration variance is preset according to factors such as actual application scenes, projection equipment performance, image quality requirements and the like.
The purpose of the comparison is to determine whether the degree of dispersion of the pixel gray values in each azimuth is within an acceptable range. If the gray value variance in a certain azimuth is far smaller than the calibration variance, it may mean that the brightness of the pixels in that azimuth is too stable and lacks the necessary contrast; if the variance is far greater than the calibration variance, it may mean that the brightness of the pixels in that azimuth changes too much, which affects the definition and readability of the image.
Based on the comparison result, selecting information which is most matched with the target condition from preset bright-dark field information as target field information which is required to be met by the pixel backlight value in the azimuth. For example, if the variance of gray values in a certain direction is moderate and is similar to the calibrated variance, the system may select a target field information with medium brightness, and if the variance is large, a target field information with higher contrast and more obvious contrast may be selected.
Therefore, the backlight value of each azimuth pixel in the shot projection picture image can be automatically adjusted, so that the whole image can keep clear visual effect and proper contrast under different illumination conditions.
In a possible implementation manner, in step S124, the determining, from preset bright-dark field information, target field information that is satisfied by a backlight value of the pixel in each azimuth in the captured projection screen image according to the calculated variance and the preset calibration variance corresponding to each azimuth includes:
In step S1241, calculating a variance difference between the calculated variance corresponding to each azimuth and the preset calibration variance;
The variance difference is the difference between the calculated variance (i.e. the variance of the gray value of each azimuth pixel obtained in step S123) and the preset calibration variance, and is used for measuring the degree of difference between the calculated variance and the preset calibration variance.
In the embodiment of the disclosure, the variance of the gray value calculated for each azimuth is compared with a preset calibration variance one by one, and the variance difference between the gray value variance and the preset calibration variance is calculated. This difference reflects the degree of deviation between the actual gray value fluctuation and the desired fluctuation.
For example, assume that the calculated gray value variance in a certain azimuth is 10, and the preset calibration variance is 8. Then the variance difference in this orientation is 10-8=2. The system will perform this calculation for all orientations, resulting in a series of variance differences.
In step S1242, if the variance difference is smaller than or equal to a preset difference threshold, determining that the target field information satisfied by the backlight value of the pixel in the direction in the captured projection picture image is bright field information if the preset bright-dark field information is bright field information, or determining that the target field information satisfied by the backlight value of the pixel in the direction in the captured projection picture image is dark field information if the preset bright-dark field information is dark field information;
The preset difference threshold is used for judging whether the variance difference is within the threshold in an acceptable range or not, and is set according to the actual application scene and the image quality requirement. Bright field information is the area information of the image with higher pixel brightness and more obvious contrast, and is generally used for ensuring that important information in the image is clearly visible. Dark field information is region information with lower brightness and weaker contrast of pixels in an image, and is commonly used for highlighting specific regions or creating specific atmospheres.
In the embodiment of the disclosure, the variance difference obtained through calculation is compared with a preset difference threshold. If the variance difference is smaller than or equal to the preset difference threshold, the gray value fluctuation of the pixel in the azimuth is within an acceptable range and is close to the preset calibration variance.
Next, the target field information is determined from the preset bright-dark field information. If the preset bright and dark field information is bright field information and the variance difference meets the condition, the backlight value of the pixel in the azimuth should meet the condition of bright field information, namely, the brightness is higher and the contrast is obvious. In contrast, if the preset bright-dark field information is dark field information and the variance difference satisfies the condition, the backlight value of the pixel in the azimuth should satisfy the condition of dark field information, namely, the brightness is lower and the contrast is weaker.
For example, assume that the preset difference threshold is 2 and the variance difference in a certain azimuth is 1 (less than 2), while the preset bright-dark field information is bright field information. Then, it is determined that the backlight value of the pixel in the azimuth should satisfy the condition of bright field information, that is, the brightness of the pixel in the azimuth is increased in the subsequent process to enhance the contrast ratio.
Through the process, the backlight value of the pixels in each azimuth in the shot projection picture image can be automatically adjusted according to the fluctuation condition of the gray values of the pixels in each azimuth, so that the whole image can keep clear visual effect and proper contrast under different illumination conditions.
In step S1243, if the variance difference is greater than the preset difference threshold, the target field information that the backlight value of the pixel in the direction satisfies in the captured projection image is determined to be dark field information if the preset bright-dark field information is bright field information, or the target field information that the backlight value of the pixel in the direction satisfies in the captured projection image is determined to be bright field information if the preset bright-dark field information is dark field information.
Similarly, if the variance difference is greater than the preset difference threshold, it means that the gray value of the pixel in a certain direction fluctuates beyond an acceptable range, and there is a significant difference from the preset calibration variance. In this case, the target field information is adjusted to accommodate this difference, thereby ensuring the overall quality and visual effect of the image.
Specifically, if the preset bright-dark field information is bright field information, but the variance difference is larger than the preset difference threshold, this indicates that the gray value of the pixel in the azimuth fluctuates too much, which may result in too high brightness or contrast. In order to balance this difference, dark field information opposite thereto is selected as target field information. This reduces the brightness of the pixels in that orientation, reduces the contrast, and makes the image smoother and more uniform.
In contrast, if the preset bright-dark field information is dark field information, the variance difference is larger than the preset difference threshold, which means that the gray value fluctuation of the pixel in the azimuth may also cause too low brightness or contrast, making the image information difficult to recognize. To improve this, bright field information is selected as target field information. By doing so, the brightness of the pixels in the azimuth can be improved, the contrast is enhanced, and important information in the image is more prominent.
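A compact sketch of the classification logic of steps S1241 to S1243 is given below; the difference threshold value and the string labels for bright and dark field information are illustrative assumptions.

```python
# Sketch only: classify one azimuth from the variance difference, mirroring steps
# S1241 to S1243. The difference threshold and the 'bright'/'dark' labels are
# illustrative assumptions.

def classify_azimuth(calc_variance, calib_variance, preset_field, diff_threshold=2.0):
    """Return the target field information ('bright' or 'dark') for one azimuth."""
    variance_diff = abs(calc_variance - calib_variance)
    if variance_diff <= diff_threshold:
        return preset_field                                       # close to calibration: keep the preset field
    return "dark" if preset_field == "bright" else "bright"       # otherwise use the opposite field

print(classify_azimuth(10.0, 8.0, "bright"))  # diff 2 <= 2 -> 'bright'
print(classify_azimuth(13.0, 8.0, "bright"))  # diff 5 >  2 -> 'dark'
```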
In one possible implementation manner, in step S123, the selecting, from each frame of the captured projection picture image within the preset duration, a pixel within the selected range with each captured projection intersection point as the reference point, and calculating a calculated variance of gray values of pixels selected by multiple frames in the same direction includes:
In step S1231, selecting, from each frame of the captured projection image within the preset duration, a pixel within the selected range with each captured projection intersection point as the reference point, to obtain a first pixel corresponding to each azimuth;
In the embodiment of the disclosure, each frame within a preset time period is traversed to shoot a projection picture image. For each frame of image, the system projects intersection points (which are typically obtained by image processing algorithms such as edge detection, feature matching, etc.) from known shots as reference points. Then, the system selects a pixel area within a predetermined range with the intersections as the center. The size of this area is typically determined by the image resolution, the accuracy of the projection device, and the requirements of the actual application scene. The system extracts all pixels from this area and marks them as the first pixel of the corresponding orientation.
For example, assume that a set of sequences of 10 shot projection screen images is provided, each frame of image having 4 shot projection intersections identified. For each intersection, a circular area having a radius of 10 pixels centered on the intersection is set as the selection range. Each frame of image is traversed and for each intersection point, all pixels are extracted from this circular area and taken as the first pixel for the corresponding orientation.
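The selection of first pixels within a circular range around one shot projection intersection can be sketched as follows, assuming a NumPy gray image; the 10 pixel radius follows the example above.

```python
# Sketch only: gather the first pixels inside a circular selection range around one
# shot projection intersection, using the 10 pixel radius from the example above.
import numpy as np

def first_pixels(gray_frame, center_row, center_col, radius=10):
    """Return the gray values of all pixels within `radius` of the intersection point."""
    rows, cols = np.indices(gray_frame.shape)
    mask = (rows - center_row) ** 2 + (cols - center_col) ** 2 <= radius ** 2
    return gray_frame[mask]

frame = np.full((480, 640), 128, dtype=np.uint8)
print(first_pixels(frame, 50, 60).size)  # 317 pixels fall inside a radius-10 circle
```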
In step S1232, an average value of gray values of the first pixels corresponding to each position of the captured projection screen image for each frame is calculated;
In the embodiment of the disclosure, for each azimuth (i.e., the position corresponding to each shooting projection intersection point) in each frame image, the average value of the gray values of all the first pixels in the azimuth is calculated. This average value reflects the overall brightness level of the pixel in that orientation. The system performs this calculation for each azimuth in each frame of image, resulting in a matrix or array containing the average of all azimuth gray values.
Continuing with the example above, assume that the first pixel of all orientations has been extracted from each frame of image. The system then calculates the average of the gray values of all pixels in each orientation by counting the gray values of the pixels. For example, for the azimuth corresponding to the first intersection, in 10 frames of images, an average value of 10 gray values is obtained, and these values reflect the overall trend of the pixel brightness in this azimuth.
In step S1233, according to a preset average value threshold and the average value corresponding to each azimuth, determining whether an abnormal pixel with abnormal gray value exists in the first pixel corresponding to the azimuth;
In the embodiment of the disclosure, whether pixels with abnormal gray values exist in each azimuth is judged by using a preset average value threshold value. This average threshold is typically set according to factors such as the actual application scenario, the image quality requirements, and the performance of the projection device. Comparing the average value of the gray values calculated in each azimuth with a preset average value threshold value, and if the average value of a certain azimuth exceeds a threshold value range (whether too high or too low), the system considers that pixels with abnormal gray values exist in the azimuth.
Continuing with the example above, the average of all pixel gray values for each orientation is calculated and a range of average thresholds (e.g., between 50 and 200) is set. Next, the average value for each azimuth is compared with this threshold range one by one. If the average value of a certain azimuth is lower than 50 or higher than 200, pixels with abnormal gray values exist in the azimuth. These abnormal pixels may be caused by a malfunction of the projection device, light interference, or image processing error.
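A minimal sketch of this mean-threshold check is shown below; the 50 to 200 range follows the example above and is an illustrative assumption rather than a prescribed value.

```python
# Sketch only: flag an azimuth as containing abnormal pixels when its mean gray
# value falls outside a preset range, as in the 50 to 200 example above.

def has_abnormal_pixels(mean_gray, low=50.0, high=200.0):
    return mean_gray < low or mean_gray > high

print(has_abnormal_pixels(55.0))   # False: the mean lies within the preset range
print(has_abnormal_pixels(230.0))  # True: the mean is too high, so screening is needed
```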
In step S1234, when the abnormal pixel whose gray value is abnormal exists in the first pixels corresponding to any orientation, the first pixels corresponding to the orientation are filtered to remove the abnormal pixel whose gray value is abnormal from the first pixels corresponding to the orientation;
In the embodiment of the disclosure, the pixels with abnormal gray values may exist in each azimuth are screened. If it is determined that there are pixels in a certain orientation for which the gray value is abnormal (i.e., abnormal pixels), then all first pixels in that orientation are further examined. The purpose of this step is to eliminate from the first pixels corresponding to the orientation those outlier pixels whose gray values deviate significantly from the normal range, in order to ensure the accuracy and reliability of the subsequent calculation.
The screening process may involve various methods such as statistical-based filtering (e.g., median filtering, mean filtering, etc.), threshold-based decisions (e.g., setting a more stringent gray scale range, retaining only pixels within that range), or machine-learning-based classification algorithms (e.g., using trained models to identify and reject outlier pixels). Which method is specifically adopted depends on the practical application scenario, the image quality requirement and the limitation of the processing capacity of the system.
Suppose that a pixel having an abnormal gray value is found in a certain azimuth. To reject these outliers, a median filtering approach may be used. Specifically, for each first pixel in the orientation, pixels within a certain range around it (e.g., a 3x3 or 5x5 neighborhood) are selected, and then the median of these pixel gray values is calculated. If the median value does not differ much from the gray value of the original pixel (e.g., the difference is less than some preset threshold), the pixel is preserved, otherwise it is considered an outlier and culled.
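The median-based screening option mentioned above can be sketched as follows; the 3x3 neighbourhood and the deviation threshold of 30 gray levels are illustrative assumptions.

```python
# Sketch only: reject a pixel whose gray value deviates too far from the median of
# its 3x3 neighbourhood, one of the screening options mentioned above. The deviation
# threshold of 30 gray levels is an assumption.
import numpy as np

def median_screen(gray_frame, row, col, threshold=30.0):
    """Return True if the pixel at (row, col) should be kept, False if it is an outlier."""
    r0, r1 = max(0, row - 1), min(gray_frame.shape[0], row + 2)
    c0, c1 = max(0, col - 1), min(gray_frame.shape[1], col + 2)
    neighbourhood = gray_frame[r0:r1, c0:c1].astype(np.float64)
    return abs(float(gray_frame[row, col]) - float(np.median(neighbourhood))) <= threshold

frame = np.full((5, 5), 120, dtype=np.uint8)
frame[2, 2] = 255                      # an isolated bright outlier
print(median_screen(frame, 2, 2))      # False: deviates from the neighbourhood median
print(median_screen(frame, 1, 1))      # True: consistent with its neighbours
```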
In step S1235, under the condition that no abnormal pixel with abnormal gray value exists in the first pixel corresponding to the same direction in each frame of the captured projection picture image within the preset time period, calculating the calculated variance of gray values of the first pixels selected by multiple frames in the same direction.
In the embodiment of the disclosure, the gray value variance of the first pixel selected in the same direction in the multi-frame shooting projection picture image is calculated. The premise of the step is that in each frame of image within the preset time period, no abnormal pixel with abnormal gray value exists in the first pixels corresponding to the same azimuth. In other words, variance calculation is performed on the gray value of a certain orientation only when all the first pixels in that orientation pass the filtering.
Variance calculation typically involves the steps of first summing the gray values of all screened first pixels for each azimuth, then calculating the average (i.e., mean) of these gray values, and finally calculating the square of the difference between each gray value and the mean, and dividing the sum of these squares by the number of pixels to obtain the variance. The variance reflects the degree of dispersion or fluctuation of the pixel gray values in that orientation.
Assume that the abnormal pixels with abnormal gray values in each azimuth have been screened out. Next, the gray value variance can be calculated for the remaining pixels in each azimuth. Taking a certain azimuth as an example, assume that in the 10 frames of images there are 100 screened pixels in that azimuth (i.e. 10 pixels in each frame of image). First calculate the average of these pixel gray values, then compute the square of the difference between each pixel gray value and the average, add the squared values and divide by 100 (i.e. the number of pixels) to obtain the gray value variance for that azimuth.
In one possible implementation manner, in step S1234, when the abnormal pixel with abnormal gray value exists in the first pixel corresponding to any azimuth, filtering the first pixel corresponding to the azimuth to remove the abnormal pixel with abnormal gray value from the first pixel corresponding to the azimuth, including:
The following steps are circularly executed:
In step S21, when there is an abnormal pixel having an abnormal gray value in a first pixel corresponding to an arbitrary azimuth, a standard deviation of the gray value of the first pixel in the azimuth is calculated;
In the embodiment of the disclosure, when an abnormal pixel with an abnormal gray value is detected among the first pixels corresponding to any azimuth, the standard deviation of the gray values of all first pixels in that azimuth is calculated. The standard deviation is an important indicator of how scattered the data is, reflecting how far the data points deviate from the average; calculating it helps the system identify more accurately which pixels have gray values outside the normal range.
Assume that a certain azimuth has 10 first pixels in the first frame image, with gray values {50, 52, 51, 100, 53, 54, 55, 56, 57, 49}. First the average (μ) of these gray values is calculated, then the square of the difference between each gray value and the average; the sum of these squares divided by the number of pixels (N) gives the variance (σ²), and the square root of the variance gives the standard deviation (σ). If the calculated standard deviation is large, the pixel gray values are highly dispersed and abnormal pixels may exist.
In step S22, the average value corresponding to the first pixel corresponding to the azimuth is taken as the center, the standard deviation of the preset multiple is taken as the extension range, and the target pixel covered by the extension range is determined;
In the embodiment of the disclosure, an interval is determined with the average value of the first pixels corresponding to the azimuth as its center and a preset multiple of the standard deviation as its extension range. This interval covers those pixels that may belong to anomalies but have not yet been explicitly identified, i.e., the target pixels. The choice of the preset multiple depends on how sensitive the system needs to be to abnormal pixels: a larger multiple gives a wider coverage range, which may capture more abnormal pixels but also take in more normal ones, while a smaller multiple gives a narrower range, which may miss some abnormal pixels but also reduces the probability of misjudging normal pixels.
Continuing with the above example, assume the preset multiple is set to 2 and the calculated standard deviation (σ) is 10. The extension range is then the mean (μ) ± 2σ, i.e., [μ-20, μ+20]. If the gray value of a pixel falls within this range, it is considered a target pixel and further screening may be required.
In step S23, the target pixel is removed, so as to screen the first pixel corresponding to the azimuth, and the abnormal pixel with abnormal gray value is removed from the first pixel corresponding to the azimuth;
In the embodiment of the disclosure, the determined target pixel is removed, so as to further screen the first pixel corresponding to the azimuth. To remove those pixels that may belong to anomalies but have not been explicitly identified to reduce errors and uncertainty in subsequent computations. And after the target pixel is removed, the gray value distribution condition of the rest pixels is reevaluated so as to ensure the accuracy of subsequent analysis.
In the above example, if 3 of the 10 first pixels have gray values falling within the extension range [μ-20, μ+20], these 3 pixels are considered target pixels and are discarded. The remaining 7 pixels are considered more likely to represent the normal gray values of that azimuth.
In step S24, after the target pixel is removed, an average value of the first pixels that are not removed is calculated, and according to a preset average value threshold and the average value corresponding to the azimuth, whether the first pixels corresponding to the azimuth have abnormal pixels with abnormal gray values or not is determined, until the first pixels corresponding to the azimuth do not have the abnormal pixels with abnormal gray values.
In the embodiment of the disclosure, an average value of the first pixels remaining after the target pixel is removed is calculated, and whether an abnormal pixel with an abnormal gray value exists in the first pixels corresponding to the azimuth is determined again according to a preset average value threshold and the average value corresponding to the azimuth. This step is an iterative process aimed at approximating the true gray value distribution by continually culling out possible outlier pixels. If, after multiple iterations, no abnormal pixels with abnormal gray values exist in the first pixels corresponding to the azimuth (i.e. the gray values of all the remaining pixels are within the preset average threshold range), the iteration process is ended.
In the above example, if the average value and the standard deviation are recalculated after the target pixel is removed, and the gray values of the remaining pixels are found to be within the preset average value threshold range, then no abnormal pixel with abnormal gray values exists in the first pixel corresponding to the azimuth. If there are still some pixels whose gray values exceed the threshold range, the iterative process of steps S21 to S24 is continued until the condition is satisfied.
Through the series of steps, the pixels with abnormal gray values can be more accurately identified and removed.
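A compact sketch of the screening loop in steps S21 to S24 follows. It is only an illustration: following the convention of the getEffectAverage listing later in this document, gray values lying within the preset multiple of the standard deviation around the mean are kept and the rest are discarded, and the loop stops once the recomputed mean falls within the preset tolerance of the preset average threshold; the parameter names and the termination guard are assumptions.
#include <math.h>
/* Iteratively screen the gray values of one azimuth (modifies vals in place).
 * k is the preset multiple of the standard deviation; preset_mean and
 * tolerance correspond to the preset average threshold and preset value. */
int screen_azimuth(double *vals, int n, double preset_mean,
                   double tolerance, double k)
{
    while (n > 1)
    {
        double mean = 0.0, ss = 0.0;
        for (int i = 0; i < n; i++) mean += vals[i];
        mean /= (double)n;
        if (fabs(mean - preset_mean) <= tolerance)
            break;                                /* no abnormal pixels remain */
        for (int i = 0; i < n; i++) ss += (vals[i] - mean) * (vals[i] - mean);
        double sigma = sqrt(ss / (double)n);      /* step S21: standard deviation of this azimuth */
        int kept = 0;
        for (int i = 0; i < n; i++)               /* steps S22/S23: keep values inside mean +/- k*sigma */
        {
            if (fabs(vals[i] - mean) <= k * sigma)
                vals[kept++] = vals[i];
        }
        if (kept == n)
            break;                                /* nothing removed; avoid an infinite loop */
        n = kept;                                 /* step S24: recompute the mean on the next pass */
    }
    return n;  /* number of gray values that survived the screening */
}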
In one possible implementation manner, the azimuth includes an upper right corner, a lower right corner, an upper left corner and a lower left corner, and in step S1235, when no abnormal pixel having an abnormal gray value exists in a first pixel corresponding to the same azimuth in the captured projection image of each frame in the preset duration, calculating a calculated variance of gray values of the first pixels selected by multiple frames in the same azimuth includes:
In step S31, under the condition that no abnormal pixel with abnormal gray value exists in the first pixel corresponding to the same azimuth in each frame of the shot projection picture image within the preset duration, respectively selecting the shot projection intersection points of the upper right corner, the lower right corner, the upper left corner and the lower left corner in each frame of the shot projection picture image within the preset duration as the reference points, and obtaining the first pixel within the selected range to obtain the second pixel corresponding to each azimuth;
In the embodiment of the disclosure, it is first confirmed that, in each frame of the captured projection picture image within the preset duration, no abnormal pixel with an abnormal gray value exists among the first pixels corresponding to the same azimuth (the upper right corner, lower right corner, upper left corner and lower left corner), which ensures the accuracy and reliability of the calculation. Then, for each frame of the captured projection picture image within the preset duration, the four shooting projection intersection points of the upper right corner, lower right corner, upper left corner and lower left corner are selected as reference points, and a selection range is defined around each reference point. Within this selection range, the system takes the first pixels as the second pixels of that azimuth. This process extracts and organizes the effective pixels of each azimuth in each frame for the subsequent calculation of the gray value variance.
Assume the preset duration is 5 seconds and 10 frames of images are captured per second, giving 50 frames in total. In each frame, the system takes the upper right corner as a reference point and defines a 5x5 pixel selection range around it. Within this range, the system selects a first pixel with a normal gray value (i.e., no abnormality) as a second pixel for the upper right corner of that frame. The same procedure applies to the lower right corner, upper left corner and lower left corner. Thus, for each azimuth, the system obtains a set of second pixels across the multi-frame images.
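A hypothetical sketch of gathering the candidate pixels in such a 5x5 window around one corner reference point of a single frame's Y plane is shown below; the buffer layout, window size and coordinate convention are assumptions made for illustration.
/* Collect up to out_capacity gray values from the 5x5 neighborhood centered
 * on (corner_x, corner_y) of one frame's Y plane. */
int collect_corner_window(const unsigned char *y_plane, int width, int height,
                          int corner_x, int corner_y,
                          double *out, int out_capacity)
{
    int count = 0;
    for (int dy = -2; dy <= 2; dy++)
    {
        for (int dx = -2; dx <= 2; dx++)
        {
            int x = corner_x + dx, y = corner_y + dy;
            if (x < 0 || x >= width || y < 0 || y >= height)
                continue;                       /* clip at the image border */
            if (count >= out_capacity)
                return count;
            out[count++] = (double)y_plane[y * width + x];
        }
    }
    return count;  /* number of candidate gray values gathered for this corner */
}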
In step S32, according to the gray value of the second pixel corresponding to the upper right corner corresponding to each frame of the captured projection picture image within the preset duration, calculating the calculated variance of the gray value of the second pixel selected by the upper right corner for multiple frames;
In the embodiment of the disclosure, the variance of the gray values of the second pixels selected in the upper right corner over multiple frames is calculated from the gray value of the upper-right-corner second pixel in each frame captured within the preset duration. This is a direct application of the statistical definition of variance, an important indicator of how discrete the data is that reflects how far the data points deviate from the average. In this step, the system first calculates the average of the gray values of all the second pixels in the upper right corner, then the square of the difference between each gray value and the average, and divides the sum of these squares by the number of values (here, the number of frames) to obtain the variance.
Continuing with the above example, assume that over a 1-second window captured at 29 frames per second, the gray value of the upper-right-corner second pixel has been extracted from each frame, giving a sequence such as {60, 62, 61, 59, ...}. The average of this sequence is calculated first, then the square of the difference between each gray value and the average; the squares are summed and divided by 29 (the number of frames) to obtain the gray value variance of the second pixel in that azimuth. This variance value reflects the degree of dispersion or fluctuation of the upper-right-corner second pixel's gray value within the preset duration.
The variance characterizes the degree of dispersion of the pixel gray values in each azimuth; because the calculation is based on multiple frames, the influence of random noise can be suppressed to some extent, improving the accuracy and reliability of the analysis.
In step S33, calculating a calculation variance of the gray value of the second pixel selected by the plurality of frames at the lower right corner according to the gray value of the second pixel corresponding to the lower right corner corresponding to the captured projection picture image of each frame within the preset time period;
Similarly, the gray values of the second pixels corresponding to the lower right corner of the frames are collected first. Then, using these gray values, the system calculates the gray value variance of the second pixel selected for the multi-frame in the lower right corner. This variance calculation process is also based on statistical variance definitions, aimed at quantifying the degree of dispersion of the second pixel gray value in the lower right corner with respect to its average.
Taking the 1-second preset duration and the capture rate of 29 frames per second as an example again, there are 29 frames of images in total. For each frame, a 5x5 pixel region is selected at the lower right corner, and a first pixel with a normal gray value is chosen as that frame's second pixel. After all frames have been processed, a sequence of 29 gray values is obtained, corresponding to the lower-right-corner second pixels of the 29 frames. The system then calculates the average of these gray values and, one by one, the square of the difference between each gray value and the average. Finally, these squares are summed and divided by 29 (the number of frames) to yield the gray value variance of the lower right corner.
In step S34, according to the gray value of the second pixel corresponding to the upper left corner corresponding to the captured projection picture image of each frame in the preset duration, calculating the calculated variance of the gray value of the second pixel selected by the upper left corner of the plurality of frames;
Similarly, the gray values of the second pixels corresponding to the upper left corner of the frames are collected first. Then, using these gray values, the system calculates the gray value variance of the second pixels selected in the upper left corner across the multiple frames. This variance calculation is likewise based on the statistical definition of variance and quantifies the degree of dispersion of the upper-left-corner second pixel gray values with respect to their average.
In step S35, according to the gray value of the second pixel corresponding to the lower left corner corresponding to the captured projection picture image of each frame within the preset duration, the calculated variance of the gray value of the second pixel selected by the lower left corner of the plurality of frames is calculated.
Similarly, the gray values of the second pixels corresponding to the lower left corner of the frames are collected first. Then, using these gray values, the system calculates the gray value variance of the second pixels selected in the lower left corner across the multiple frames. This variance calculation is likewise based on the statistical definition of variance and quantifies the degree of dispersion of the lower-left-corner second pixel gray values with respect to their average.
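Taken together, steps S31 to S35 amount to collecting one representative second-pixel gray value per corner per frame and applying the variance formula to each corner's sequence. The sketch below is only an illustration of that flow; the array sizes, frame bound and data layout are assumptions.
#define NUM_CORNERS 4   /* 0: upper right, 1: lower right, 2: upper left, 3: lower left */
#define MAX_FRAMES  64  /* illustrative upper bound on the number of frames */
static double variance_of(const double *v, int n)
{
    double mean = 0.0, ss = 0.0;
    if (n <= 0) return 0.0;
    for (int i = 0; i < n; i++) mean += v[i];
    mean /= (double)n;
    for (int i = 0; i < n; i++) ss += (v[i] - mean) * (v[i] - mean);
    return ss / (double)n;
}
/* second[c][f] holds the second-pixel gray value of corner c in frame f. */
void corner_variances(double second[NUM_CORNERS][MAX_FRAMES], int frames,
                      double out_variance[NUM_CORNERS])
{
    if (frames > MAX_FRAMES) frames = MAX_FRAMES;
    for (int c = 0; c < NUM_CORNERS; c++)
    {
        out_variance[c] = variance_of(second[c], frames);  /* steps S32 to S35, one corner at a time */
    }
}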
In one possible implementation manner, in step S1233, the determining, according to a preset average value threshold and the average value corresponding to each azimuth, whether the first pixel corresponding to the azimuth has an abnormal pixel with an abnormal gray value includes:
if the average value difference value between the average value corresponding to any azimuth and the preset average value threshold exceeds a preset value, determining that an abnormal pixel with abnormal gray value exists in the first pixel corresponding to the azimuth;
if the average value difference value between the average value corresponding to any azimuth and the preset average value threshold value does not exceed the preset value, determining that the first pixel corresponding to the azimuth does not have an abnormal pixel with abnormal gray value.
In the embodiment of the disclosure, whether an abnormal pixel with an abnormal gray value exists among the first pixels of each azimuth is judged according to a preset average value threshold and the average value corresponding to that azimuth, which helps identify and address potential image quality problems.
Specifically, the average of the first pixel gray values of each azimuth (upper right corner, lower right corner, upper left corner, lower left corner) is first calculated. These averages are then compared with the preset average threshold, which is set in advance from normal image data and is used to judge whether the gray values are within the normal range.
In the comparison process, the difference (i.e., the mean difference) between the mean value of each azimuth and the preset mean threshold is calculated. This difference is then compared to a predetermined value (i.e., tolerance or threshold difference). If the mean value difference exceeds the preset value, the gray value of the azimuth deviates from the normal range, so that the system judges that the first pixel of the azimuth has abnormal pixels with abnormal gray values. And if the mean value difference value does not exceed the preset value, the gray value of the azimuth is in the normal range, and the first pixel of the azimuth is judged to have no abnormal pixel with abnormal gray value.
Assume a preset mean threshold of 128 (roughly the midpoint of the 8-bit gray range), a preset value (tolerance) of 10 (i.e., the allowed range of mean fluctuation), and that for a certain azimuth (e.g., the upper right corner) the average of the first pixel gray values calculated by the system is 135. The mean difference is 135-128=7; since 7<10, the difference is smaller than the preset value, so the system judges that the first pixels in the upper right corner contain no abnormal pixel with an abnormal gray value.
For another azimuth (e.g., the lower left corner), the average of the first pixel gray values calculated by the system is 145. The mean difference is 145-128=17; since 17>10, the difference is larger than the preset value, so the system judges that an abnormal pixel with an abnormal gray value exists among the first pixels in the lower left corner.
These steps identify which azimuths have abnormal gray values, providing key information for subsequent processing (such as adjusting projection parameters or marking abnormal areas) and helping to improve the accuracy and efficiency of image processing.
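A minimal sketch of this threshold test is given below; the numeric values in the comment are the illustrative ones from the example above.
#include <math.h>
/* Returns 1 if the azimuth is judged to contain abnormal pixels, 0 otherwise. */
int azimuth_has_abnormal(double azimuth_mean, double preset_mean_threshold,
                         double preset_tolerance)
{
    return fabs(azimuth_mean - preset_mean_threshold) > preset_tolerance;
}
/* Example: azimuth_has_abnormal(135.0, 128.0, 10.0) -> 0 (normal),
 *          azimuth_has_abnormal(145.0, 128.0, 10.0) -> 1 (abnormal). */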
Referring to fig. 2, the projector of the present disclosure includes a Camera 3, a liquid crystal display (Liquid Crystal Display, LCD) 5, a backlight driving module 4, a System on Chip (SoC) 2, a Power module (Power) 1 and a projection Lens (Lens) 6. The Camera 3 photographs the current projection picture and transmits it to the System on Chip 2, and the projection ratio of the Camera 3 is smaller than the projection ratio of the projection Lens 6. The LCD 5 serves as the display device for the picture to be projected, and the backlight driving module 4 uses pulse width modulation (Pulse Width Modulation, PWM) to control the LCD backlight brightness. The System on Chip 2 processes the data from the Camera 3 to obtain one frame of YUV420P data, the first width × height bytes of which are the Y (luma) data, and one image is sampled per second. Using the calibration data of the automatic keystone (trapezoid) correction algorithm, the four corners of the projected picture and the fixed-point regions around these four corners are calculated; the variance and mean difference of the four corner regions are then computed, the variance result is used to judge whether the bright-dark information is valid, the mean differences at the four points are compared with the calibration values to determine whether the field is bright or dark, and backlight adjustment is performed according to the resulting bright-dark value by controlling the PWM signal to adjust the screen brightness. The Power module (Power) 1 supplies the power required by the whole system. The projection Lens (Lens) 6 projects the picture displayed on the LCD 5 onto a wall surface or a curtain to form the projection picture.
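The document does not spell out the exact mapping from the bright/dark decision to the PWM frequency and duty cycle, so the following is only a hypothetical sketch: the function names (backlight_pwm_apply, adjust_backlight), the 20 kHz frequency, the duty floor and the 0-255 level scale are all assumptions rather than part of this disclosure, and the frequency is held constant here even though the method derives both frequency and duty cycle from the shading information.
#include <stdint.h>
#include <stdio.h>
/* Hypothetical driver hook; a real implementation would program the PWM
 * peripheral of the backlight driving module. Stubbed here for illustration. */
static void backlight_pwm_apply(uint32_t freq_hz, uint8_t duty_percent)
{
    printf("PWM: %u Hz, duty %u%%\n", (unsigned)freq_hz, (unsigned)duty_percent);
}
/* Map a target backlight level (0..255) to a PWM duty cycle and apply it. */
void adjust_backlight(uint8_t target_level)
{
    const uint32_t pwm_freq_hz = 20000;                                /* assumed flicker-free frequency */
    uint8_t duty = (uint8_t)(((unsigned)target_level * 100u) / 255u);  /* 0..100 percent */
    if (duty < 5) duty = 5;                                            /* assumed floor so the panel stays visible */
    backlight_pwm_apply(pwm_freq_hz, duty);
}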
In one embodiment, the relevant calculations can be implemented with the following code:
#include <math.h>    /* sqrt, fabs */
#include <stddef.h>  /* NULL */
#include <stdio.h>   /* printf */
#ifndef NANOX_MARK
#define NANOX_MARK(...) printf(__VA_ARGS__)  /* logging fallback when the project's macro is unavailable */
#endif
// Calculate the mean of n gray values
double getAverage(double *pixels, int n)
{
    double ss = 0.0;
    if (pixels == NULL || n <= 0)
    {
        return 0.0;
    }
    for (int i = 0; i < n; i++)
    {
        ss += pixels[i];
    }
    return ss / (double)n;
}
// Calculate the variance of n gray values
double getVariance(double *pixels, int n)
{
    double ss = 0.0, ave = 0.0;
    if (pixels == NULL || n <= 0)
    {
        return 0.0;
    }
    ave = getAverage(pixels, n);
    for (int i = 0; i < n; i++)
    {
        ss += (pixels[i] - ave) * (pixels[i] - ave);
    }
    return ss / (double)n;
}
// Calculate the standard deviation of n gray values
double getStVariance(double *pixels, int n)
{
    if (pixels != NULL && n > 0)
    {
        return sqrt(getVariance(pixels, n));
    }
    return 0x1fffffff;  /* sentinel value returned on invalid input */
}
// Calculate the effective mean:
// values farther than 2 standard deviations from the mean are filtered out,
// and the mean is calculated again over the remaining values
double getEffectAverage(double *pixels, int n)
{
    int cnt = 0;
    double avg1 = 0, avg2 = 0, aag = 0;
    if (pixels == NULL || n <= 0)
    {
        return 0.0;
    }
    avg1 = getStVariance(pixels, n);  // standard deviation
    avg2 = getAverage(pixels, n);     // mean
    for (int i = 0; i < n; i++)
    {
        if (fabs(pixels[i] - avg2) < (avg1 * 2.0))  // keep values within 2 sigma of the mean
        {
            aag += pixels[i];
            cnt++;
        }
    }
    if (cnt > 0)
    {
        aag /= (double)cnt;
    }
    NANOX_MARK("avg1 = %0.2f,avg2 = %0.2f,aag = %0.2f,cnt = %d\n", avg1, avg2, aag, cnt);
    return aag;  // the effective (filtered) mean
}
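For reference, a minimal driver that exercises the helpers above on the sample gray values from the earlier example when compiled together with the listing; the values and printed labels are illustrative.
int main(void)
{
    double pixels[] = {50, 52, 51, 100, 53, 54, 55, 56, 57, 49};
    int n = (int)(sizeof(pixels) / sizeof(pixels[0]));
    printf("mean      = %0.2f\n", getAverage(pixels, n));
    printf("variance  = %0.2f\n", getVariance(pixels, n));
    printf("std dev   = %0.2f\n", getStVariance(pixels, n));
    printf("effective = %0.2f\n", getEffectAverage(pixels, n));  /* mean after dropping values beyond 2 sigma */
    return 0;
}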
The embodiment of the disclosure further provides a backlight adjusting device for a projector, referring to fig. 3, including:
An obtaining module 210, configured to obtain an original projection picture image projected by a projection lens of the projector through a camera configured on the projector, so as to obtain a captured projection picture image in a target coding format of each frame, where a projection ratio of the camera is smaller than a projection ratio of the projection lens;
A first determining module 220, configured to determine, from preset bright-dark field information, target field information that is satisfied by a backlight value of a pixel in each azimuth in the captured projected image according to a calculated variance of a gray value of the pixel in the same azimuth in a plurality of frames of the captured projected image within a preset duration and a preset calibration variance;
a second determining module 230 configured to determine shading information of the original projection screen image according to the target field information;
the third determining module 240 is configured to determine the frequency and the duty ratio of the pwm signal for backlight adjustment according to the dimming information, and adjust the backlight brightness of the original projection screen image corresponding to the projection lens through the backlight driving module according to the frequency and the duty ratio.
In one possible implementation manner, the first determining module 220 is configured to: determine, according to the gradient of gray values of pixels in the multiple frames of the photographed projection picture images within the preset duration, the photographing projection intersection points in each azimuth of each frame of the photographed projection picture images, to obtain a plurality of photographing projection intersection points corresponding to each frame; determine, according to the difference between the projection ratio of the camera and the projection ratio of the projection lens, a selection range for selecting pixels in the photographed projection picture image with each photographing projection intersection point as a reference point, wherein the selection range corresponding to any photographing projection intersection point covers an original projection intersection point, the original projection intersection point being an intersection point obtained by fitting the image boundary points on each edge of each frame of the original projection picture image projected by the projection lens; select, from each frame of the photographed projection picture image within the preset duration, the pixels within the selection range with each photographing projection intersection point as the reference point, and calculate the calculated variance of the gray values of the pixels selected in the same azimuth across the multiple frames; and determine, from the preset bright-dark field information, the target field information satisfied by the backlight value of the pixels in each azimuth in the photographed projection picture image according to the calculated variance corresponding to each azimuth and the preset calibration variance.
In one possible implementation manner, the first determining module 220 is configured to calculate a variance difference between the calculated variance corresponding to each direction and the preset calibration variance, determine that, if the variance difference is smaller than or equal to a preset difference threshold, target field information satisfied by a backlight value of the pixel in the direction in the captured projection picture image is bright field information if the preset bright-dark field information is bright field information, or determine that, if the preset bright-dark field information is dark field information, target field information satisfied by a backlight value of the pixel in the direction in the captured projection picture image is dark field information if the variance difference is greater than the preset difference threshold, determine that, if the preset bright-dark field information is bright field information, target field information satisfied by a backlight value of the pixel in the direction in the captured projection picture image is dark field information, or determine that, if the preset bright-dark field information is dark field information, the target field information satisfied by a backlight value of the pixel in the direction in the captured projection picture image is bright field information.
In a possible implementation manner, the first determining module 220 is configured to select, from each frame of the captured projection image within the preset duration, a pixel within the selection range with each captured projection intersection point as the reference point to obtain a first pixel corresponding to each azimuth, calculate an average value of gray values of the first pixels corresponding to each azimuth of each frame of the captured projection image, determine, according to a preset average value threshold and the average value corresponding to each azimuth, whether an abnormal pixel with an abnormal gray value exists in the first pixel corresponding to the azimuth, screen the first pixel corresponding to the azimuth to remove the abnormal pixel with an abnormal gray value from the first pixel corresponding to the azimuth in the case that the abnormal pixel with an abnormal gray value exists in the first pixel corresponding to any azimuth, and calculate a multi-frame variance of the gray value of the first pixel selected on the same azimuth in the preset duration when the abnormal pixel with an abnormal gray value does not exist in the first pixel corresponding to the same azimuth in the captured projection image.
In one possible implementation manner, the first determining module 220 is configured to circularly perform the steps of calculating a standard deviation of a gray value of a first pixel corresponding to any azimuth in the case that the abnormal pixel with the gray value abnormal exists in the first pixel corresponding to any azimuth, taking an average value corresponding to the first pixel corresponding to the azimuth as a center, determining the target pixel covered by the extension range by taking the standard deviation of a preset multiple as the extension range, eliminating the target pixel to screen the first pixel corresponding to the azimuth to eliminate the abnormal pixel with the gray value abnormal from the first pixel corresponding to the azimuth, calculating an average value of the first pixel which is not eliminated after eliminating the target pixel, and determining whether the abnormal pixel with the gray value abnormal exists in the first pixel corresponding to the azimuth according to a preset average value threshold and the average value corresponding to the azimuth until the abnormal pixel with the gray value abnormal does not exist in the first pixel corresponding to the azimuth.
In one possible implementation manner, the azimuth includes an upper right corner, a lower right corner, an upper left corner and a lower left corner, and the first determining module 220 is configured to: in the case that no abnormal pixel with an abnormal gray value exists among the first pixels corresponding to the same azimuth in each frame of the photographed projection picture image within the preset duration, select, from each frame within the preset duration, the first pixels within the selection range with the photographing projection intersection points of the upper right corner, the lower right corner, the upper left corner and the lower left corner respectively as the reference points, to obtain the second pixels corresponding to each azimuth; calculate the calculated variance of the gray values of the second pixels selected in the upper right corner across the multiple frames according to the gray value of the second pixel corresponding to the upper right corner in each frame within the preset duration; calculate the calculated variance of the gray values of the second pixels selected in the lower right corner across the multiple frames according to the gray value of the second pixel corresponding to the lower right corner in each frame within the preset duration; calculate the calculated variance of the gray values of the second pixels selected in the upper left corner across the multiple frames according to the gray value of the second pixel corresponding to the upper left corner in each frame within the preset duration; and calculate the calculated variance of the gray values of the second pixels selected in the lower left corner across the multiple frames according to the gray value of the second pixel corresponding to the lower left corner in each frame within the preset duration.
In one possible implementation manner, the first determining module 220 is configured to determine that the first pixel corresponding to any azimuth has an abnormal pixel with abnormal gray value if the average value difference between the average value corresponding to any azimuth and the preset average value threshold exceeds a preset value, and determine that the first pixel corresponding to any azimuth does not have an abnormal pixel with abnormal gray value if the average value difference between the average value corresponding to any azimuth and the preset average value threshold does not exceed the preset value.
The disclosed embodiments also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method of any of the preceding embodiments.
The embodiment of the disclosure also provides an electronic device, including:
a memory having stored thereon a computer program, a processor for executing the computer program in the memory to implement the steps of the method of any of the previous embodiments.
The projector backlight adjusting apparatus 100 shown in fig. 4 includes a processor 1001 and a memory 1003. The processor 1001 is coupled to the memory 1003, for example via a bus 1002. Optionally, the projector backlight adjusting apparatus 100 may further include a communication component 1004, which may be used for data interaction between the apparatus 100 and other devices, such as sending and/or receiving data. It should be noted that, in practical applications, the number of communication components 1004 is not limited to one, and the structure of the projector backlight adjusting apparatus 100 does not constitute a limitation on the embodiments of the application.
The Processor 1001 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or perform the various exemplary logic blocks, modules and circuits described in connection with this disclosure. The processor 1001 may also be a combination that implements computing functionality, such as a combination comprising one or more microprocessors, or a combination of a DSP and a microprocessor.
Bus 1002 may include a path to transfer information between the components. Bus 1002 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus 1002 may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 4, but this does not mean that there is only one bus or one type of bus.
The Memory 1003 may be, but is not limited to, a ROM (Read Only Memory) or other type of static storage device that can store static information and instructions, a RAM (Random Access Memory) or other type of dynamic storage device that can store information and instructions, an EEPROM (Electrically Erasable Programmable Read Only Memory), a CD-ROM (Compact Disc Read Only Memory) or other optical disc storage, optical disk storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media, other magnetic storage devices, or any other medium that can be used to carry or store program code and that can be read by a computer.
The memory 1003 is used for storing program codes for executing the embodiments of the present disclosure, and is controlled to be executed by the processor 1001. The processor 1001 is configured to execute the program code stored in the memory 1003 to implement the steps shown in the aforementioned projector backlight adjustment method embodiment.
The embodiments of the present disclosure also provide a computer readable storage medium, where a program code is stored on the computer readable storage medium, and when the program code is executed by a processor, the steps and corresponding contents of the foregoing projector backlight adjustment method embodiments can be implemented.
The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings, but the present disclosure is not limited to the specific details of the above embodiments, and various changes, modifications, substitutions and alterations can be made to these embodiments within the scope of the technical idea of the present disclosure, which all fall within the scope of protection of the present disclosure.
It should be further noted that, where specific features described in the foregoing embodiments are combined in any suitable manner, they should also be regarded as disclosure of the present disclosure, and various possible combinations are not separately described in order to avoid unnecessary repetition. The technical scope of the present application is not limited to the contents of the specification, and must be determined according to the scope of claims.

Claims (10)

1. A projector backlight adjustment method, the method comprising:
Acquiring an original projection picture image projected by a projection lens of the projector through a camera arranged on the projector, and obtaining a shooting projection picture image of each frame of target coding format, wherein the projection ratio of the camera is smaller than that of the projection lens;
According to the calculated variance of the gray values of the pixels in the same direction in the multi-frame shooting projection picture image within a preset time length and a preset calibration variance, determining target field information which is met by the backlight values of the pixels in each direction in the shooting projection picture image from preset bright-dark field information;
determining shading adjustment information of the original projection picture image according to the target field information;
And determining the frequency and the duty ratio of a pulse width modulation signal for backlight adjustment according to the dimming information, and adjusting the backlight brightness of the original projection picture image corresponding to the projection lens through a backlight driving module according to the frequency and the duty ratio.
2. The method for adjusting backlight of a projector according to claim 1, wherein determining, from preset bright-dark field information, target field information satisfying backlight values of pixels in each azimuth in the captured projected picture image according to a calculated variance of gray values of pixels in the same azimuth in a plurality of frames of the captured projected picture image within a preset time period and a preset calibration variance, includes:
According to the gradient of gray values of pixels in the multi-frame shooting projection picture images within the preset time length, shooting projection intersection points in all directions in each frame of shooting projection picture images are determined, and a plurality of shooting projection intersection points corresponding to each frame of shooting projection picture images are obtained;
Determining a selection range of selecting the pixels in the photographed projection picture image by taking each photographed projection intersection point as a reference point according to a projection ratio difference value of the projection ratio of the camera and the projection ratio of the projection lens, wherein the selection range enables the selection range corresponding to any photographed projection intersection point to cover an original projection intersection point, and the original projection intersection point is an intersection point of image boundary points on each side in each frame of the original projection picture image projected by the projection lens, wherein the intersection point is obtained by fitting;
Selecting pixels in the selected range by taking each shooting projection intersection point as the reference point from each frame of shooting projection picture image in the preset duration, and calculating the calculated variance of gray values of pixels selected in the same direction by a plurality of frames;
and determining target field information which is met by the backlight value of the pixel in each azimuth in the shot projection picture image from preset bright-dark field information according to the calculated variance corresponding to each azimuth and the preset calibration variance.
3. The projector backlight adjustment method according to claim 2, wherein the determining, from preset bright-dark field information, target field information satisfying backlight values of the pixels in each azimuth in the captured projection screen image based on the calculated variances corresponding in each azimuth and preset calibration variances, includes:
calculating a variance difference between the calculated variance corresponding to each azimuth and the preset calibration variance;
If the variance difference is smaller than or equal to a preset difference threshold, determining that the target field information which is met by the backlight value of the pixel in the direction in the shot projection picture image is bright field information if the preset bright-dark field information is bright field information, or determining that the target field information which is met by the backlight value of the pixel in the direction in the shot projection picture image is dark field information if the preset bright-dark field information is dark field information;
and if the variance difference is larger than the preset difference threshold, determining that the target field information which is met by the backlight value of the pixel in the direction in the shot projection picture image is dark field information, or determining that the target field information which is met by the backlight value of the pixel in the direction in the shot projection picture image is bright field information.
4. The method for adjusting backlight of a projector according to claim 2, wherein selecting pixels in the selection range from the captured projection picture images of each frame in the preset time period with each captured projection intersection point as the reference point, and calculating a calculated variance of gray values of pixels selected in the same direction for a plurality of frames, comprises:
Selecting pixels in the selected range by taking each shooting projection intersection point as the reference point from each frame of shooting projection picture image in the preset duration to obtain a first pixel corresponding to each azimuth;
Calculating the average value of the gray values of the first pixels corresponding to each direction of the shooting projection picture image of each frame;
Determining whether an abnormal pixel with abnormal gray values exists in a first pixel corresponding to each azimuth according to a preset average value threshold value and the average value corresponding to each azimuth;
Screening the first pixels corresponding to any azimuth under the condition that the abnormal pixels with abnormal gray values exist in the first pixels corresponding to any azimuth, so as to eliminate the abnormal pixels with abnormal gray values from the first pixels corresponding to the azimuth;
And under the condition that no abnormal pixel with abnormal gray value exists in the first pixel corresponding to the same azimuth in each frame of the shot projection picture image within the preset duration, calculating the calculated variance of the gray value of the first pixel selected by a plurality of frames in the same azimuth.
5. The projector backlight adjustment method according to claim 4, wherein, in the case where the abnormal pixel having the abnormal gray value exists in the first pixels corresponding to any orientation, filtering the first pixels corresponding to the orientation to remove the abnormal pixel having the abnormal gray value from the first pixels corresponding to the orientation, comprises:
The following steps are circularly executed:
Calculating the standard deviation of the gray value of a first pixel corresponding to any azimuth under the condition that the abnormal pixel with the abnormal gray value exists in the first pixel;
taking the average value corresponding to the first pixel corresponding to the azimuth as a center, taking the standard deviation of a preset multiple as an extension range, and determining the target pixel covered by the extension range;
Removing the target pixel to screen the first pixel corresponding to the azimuth, and removing the abnormal pixel with abnormal gray value from the first pixel corresponding to the azimuth;
After the target pixel is eliminated, calculating an average value of the first pixels which are not eliminated, and determining whether the first pixels corresponding to the azimuth have abnormal pixels with abnormal gray values or not according to a preset average value threshold and the average value corresponding to the azimuth until the first pixels corresponding to the azimuth do not have the abnormal pixels with abnormal gray values.
6. The method for adjusting backlight of a projector according to claim 4, wherein the azimuth includes an upper right corner, a lower right corner, an upper left corner and a lower left corner, and the calculating the variance of the gray value of the first pixel selected by a plurality of frames in the same azimuth in the case that no abnormal pixel having the gray value abnormality exists in the first pixel corresponding to the same azimuth in each frame of the captured projected picture image within the preset time period includes:
Under the condition that no abnormal pixel with abnormal gray value exists in the first pixel corresponding to the same azimuth in each frame of the shooting projection picture image in the preset duration, respectively selecting the shooting projection intersection points of the upper right corner, the lower right corner, the upper left corner and the lower left corner in each frame of the shooting projection picture image in the preset duration as the reference points, and obtaining the first pixel in the selection range to obtain the second pixel corresponding to each azimuth;
calculating the calculation variance of the gray value of the second pixel selected by the upper right corner of the multi-frame according to the gray value of the second pixel corresponding to the upper right corner of the shot projection picture image of each frame in the preset time period;
calculating the calculation variance of the gray value of the second pixel selected by the plurality of frames at the lower right corner according to the gray value of the second pixel corresponding to the lower right corner corresponding to the shot projection picture image of each frame within the preset duration;
Calculating the calculation variance of the gray value of the second pixel selected by the multi-frame at the upper left corner according to the gray value of the second pixel corresponding to the upper left corner corresponding to the shot projection picture image of each frame in the preset time period;
And calculating the calculation variance of the gray value of the second pixel selected by the plurality of frames at the lower left corner according to the gray value of the second pixel corresponding to the lower left corner corresponding to the shot projection picture image of each frame within the preset time period.
7. The method for adjusting backlight of a projector according to claim 4, wherein determining whether the first pixel corresponding to each azimuth has an abnormal pixel with abnormal gray value according to the preset average value threshold and the average value corresponding to the azimuth comprises:
if the average value difference value between the average value corresponding to any azimuth and the preset average value threshold exceeds a preset value, determining that an abnormal pixel with abnormal gray value exists in the first pixel corresponding to the azimuth;
if the average value difference value between the average value corresponding to any azimuth and the preset average value threshold value does not exceed the preset value, determining that the first pixel corresponding to the azimuth does not have an abnormal pixel with abnormal gray value.
8. A backlight adjusting apparatus for a projector, comprising:
The acquisition module is configured to acquire an original projection picture image projected by a projection lens of the projector through a camera arranged on the projector, so as to acquire a shooting projection picture image of each frame of target coding format, wherein the projection ratio of the camera is smaller than that of the projection lens;
The first determining module is configured to determine target field information which is met by the backlight value of the pixel in each azimuth in the shooting projection picture image from preset bright-dark field information according to the calculated variance of the gray value of the pixel in the same azimuth in the shooting projection picture image of a plurality of frames in preset time length and preset calibration variance;
a second determining module configured to determine shading information of the original projection screen image according to the target field information;
And the third determining module is configured to determine the frequency and the duty ratio of the pulse width modulation signal for backlight adjustment according to the shading adjustment information, and adjust the backlight brightness of the original projection picture image corresponding to the projection lens through the backlight driving module according to the frequency and the duty ratio.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the steps of the method according to any one of claims 1-7.
10. An electronic device, comprising:
A memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the method of any one of claims 1-7.
CN202411795777.1A 2024-12-09 2024-12-09 Projector backlight adjusting method, device, equipment and medium Active CN119277035B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411795777.1A CN119277035B (en) 2024-12-09 2024-12-09 Projector backlight adjusting method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN119277035A true CN119277035A (en) 2025-01-07
CN119277035B CN119277035B (en) 2025-02-25

Family

ID=94109876

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411795777.1A Active CN119277035B (en) 2024-12-09 2024-12-09 Projector backlight adjusting method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN119277035B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2099019A1 (en) * 2008-02-29 2009-09-09 Research In Motion Limited System and method for adjusting an intensity value and a backlight level for a display of an electronic device
JP2011039451A (en) * 2009-08-18 2011-02-24 Sharp Corp Display device, luminance unevenness correction method, and device and method for generating correction data
CN113573032A (en) * 2020-04-28 2021-10-29 深圳光峰科技股份有限公司 Image processing method and projection system
CN115484447A (en) * 2022-11-14 2022-12-16 深圳市芯图科技有限公司 Projection method, projection system and projector based on high color gamut adjustment
CN118075437A (en) * 2024-04-18 2024-05-24 深圳市艾科维达科技有限公司 Intelligent control method and system for light source of projector

Also Published As

Publication number Publication date
CN119277035B (en) 2025-02-25

Similar Documents

Publication Publication Date Title
US10630906B2 (en) Imaging control method, electronic device and computer readable storage medium
CN112752023B (en) Image adjusting method and device, electronic equipment and storage medium
CN107507558B (en) Correction method of LED display screen
US9692958B2 (en) Focus assist system and method
US8049795B2 (en) Lens shading compensation apparatus and method, and image processor using the same
US10665142B2 (en) Screen calibration method and screen calibration system capable of correcting full screen color tones automatically
US8944610B2 (en) Image projection apparatus, control method, control program, and storage medium
US20130021484A1 (en) Dynamic computation of lens shading
US8368803B2 (en) Setting exposure attributes for capturing calibration images
CN105049734A (en) License camera capable of giving shooting environment shooting prompt and shooting environment detection method
CN1200625A (en) Imager registration error and chromatic aberration measurement system for television cameras
KR20170030933A (en) Image processing device and auto white balancing metohd thereof
CN114757853B (en) Method and system for acquiring flat field correction function and flat field correction method and system
US9583071B2 (en) Calibration apparatus and calibration method
CN119277035B (en) Projector backlight adjusting method, device, equipment and medium
CN110782400A (en) A kind of self-adaptive illumination uniform realization method and device
CN115914850A (en) Method for enhancing permeability of wide dynamic image, electronic device and storage medium
KR20150040559A (en) Apparatus for Improving Image Quality and Computer-Readable Recording Medium with Program Therefor
CN108337448B (en) High dynamic range image acquisition method and device, terminal equipment and storage medium
CN114666558B (en) Method and device for detecting definition of projection picture, storage medium and projection equipment
CN114862804A (en) Detection method and device, electronic equipment and storage medium
CN116413008A (en) Display screen full gray-scale optical information acquisition method and device and display control equipment
CN112565719B (en) Method, device, equipment and storage medium for enhancing image illumination based on multistage filtering
CN114866755A (en) Automatic white balance method and device, computer storage medium and electronic equipment
CN112183158A (en) Grain type identification method of grain cooking equipment and grain cooking equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant