
CN119277045A - A method for detecting bad pixel and electronic device - Google Patents


Info

Publication number
CN119277045A
Authority
CN
China
Prior art keywords
pixel
value
electronic device
detected
pixels
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410214648.2A
Other languages
Chinese (zh)
Inventor
马文锦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd
Publication of CN119277045A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00: Diagnosis, testing or measuring for television systems or their details
    • H04N17/002: Diagnosis, testing or measuring for television cameras
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00: Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60: Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/68: Noise processing applied to defects

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Studio Devices (AREA)

Abstract


The present application provides a method for detecting bad pixels and an electronic device, relating to the field of image processing technology. The method comprises: the electronic device, taking the pixel to be detected as the center, obtains the pixel value of each of the m pixels in a first pixel matrix that are on the same color channel as the pixel to be detected, so as to obtain m pixel values, where m is a positive integer. The electronic device determines a first pixel threshold according to the m pixel values; if a first pixel value among the m pixel values is greater than the first pixel threshold, the electronic device determines that the first pixel corresponding to the first pixel value is a bad pixel. After the electronic device compensates the first pixel, the electronic device determines whether the pixel to be detected is a bad pixel. This makes it possible to detect three adjacent bad pixels on the RAW domain image caused by defects of the image sensor, thereby improving detection accuracy.

Description

Method for detecting dead pixel and electronic equipment
This application claims priority to Chinese patent application No. 202410124205.4, filed with the China National Intellectual Property Administration on January 29, 2024, and entitled "A method for detecting bad pixels and electronic equipment", the entire contents of which are incorporated herein by reference.
Technical Field
The embodiment of the application relates to the technical field of image processing, in particular to a method for detecting dead pixels and electronic equipment.
Background
With the development of science and technology and of manufacturing processes, the image sensor used by the camera of an electronic device, as a photoelectric sensor, has largely met people's requirements for image acquisition and processing. However, during manufacturing and actual use, pixels on the image output by the image sensor may become dead pixels due to defects in the material structure of the image sensor, defects in the manufacturing process, and aging, which affects the shooting effect.
Disclosure of Invention
The application provides a method for detecting defective pixels and electronic equipment, which can detect three adjacent defective pixels existing on a RAW domain image caused by defects of an image sensor, thereby improving the detection accuracy.
The application adopts the following technical scheme:
In a first aspect, a method for detecting a dead pixel is provided and applied to an electronic device. The electronic device comprises a camera, the camera is used for acquiring a RAW domain image, and the RAW domain image comprises a plurality of pixels. The electronic device sequentially takes every two adjacent pixels of the plurality of pixels as the pixels to be detected, and detects whether a dead pixel exists among the plurality of pixels. Specifically, the method may include:
The electronic device takes a pixel to be detected as the center, and obtains a pixel value of each of m pixels in a first pixel matrix that are on the same color channel as the pixel to be detected, so as to obtain m pixel values, where m is a positive integer. The electronic device determines a first pixel threshold according to the m pixel values. If a first pixel value among the m pixel values is greater than the first pixel threshold, the electronic device determines that the first pixel corresponding to the first pixel value is a dead pixel. After the electronic device compensates the first pixel, the electronic device determines whether the pixel to be detected is a dead pixel.
Based on the first aspect, the electronic device firstly detects dead pixels existing around the pixel to be detected by taking the pixel to be detected as the center, and then detects whether the pixel to be detected is the dead pixel after compensating the surrounding dead pixels, so that three adjacent dead pixels existing on the RAW domain image can be detected, and the detection accuracy is improved.
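As a rough illustration of the first-stage check above, the sketch below flags the surrounding same-channel pixels whose values exceed a first pixel threshold of the form K1 = Fmax * SMP + TBP_offset given later in the description. The function names and the sample parameter values (Fmax = 1.5, TBP_offset = 0) are illustrative assumptions, not values taken from the application:

```python
def first_pixel_threshold(values, f_max=1.5, tbp_offset=0):
    """First pixel threshold K1 = Fmax * SMP + TBP_offset, where SMP is the
    next-largest (second-largest) of the m same-channel pixel values.
    f_max and tbp_offset defaults are illustrative assumptions."""
    smp = sorted(values)[-2]  # second-largest value (SMP)
    return f_max * smp + tbp_offset

def detect_surrounding_bad_pixels(values, f_max=1.5, tbp_offset=0):
    """Return the indices of the m pixel values exceeding the threshold K1."""
    k1 = first_pixel_threshold(values, f_max, tbp_offset)
    return [i for i, v in enumerate(values) if v > k1]
```

A value of Fmax closer to 1 makes the detector more aggressive; the application only constrains it to the interval [1, 2].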
In a design manner, if the pixel values corresponding to the pixels to be detected are all greater than the first pixel threshold, the electronic device determines that the pixels to be detected are dead pixels. In this way, the electronic device can directly detect the pixels to be detected as dead pixels by reusing the first pixel threshold, which reduces detection complexity and, at the same time, power consumption.
In a design manner, after detecting that the pixel to be detected is a dead pixel, the electronic device replaces the pixel value corresponding to the pixel to be detected with the first pixel threshold to compensate the pixel to be detected, or replaces the pixel value corresponding to the pixel to be detected with the next-largest value of the m pixel values to compensate the pixel to be detected. By compensating the pixel to be detected in this way, the image quality can be improved, and the user experience is further improved.
The pixel value corresponding to the pixel to be detected refers to the pixel values of two adjacent pixels corresponding to the pixel to be detected.
In one design, the electronic device may determine whether the pixel to be detected is a dead pixel in the following manner. Specifically, the electronic device takes the pixel to be detected as the center and obtains a pixel value of each of n pixels in a second pixel matrix that are on the same color channel as the pixel to be detected, so as to obtain n pixel values, where n is a positive integer and the n pixel values include the replaced first pixel value. The electronic device determines a second pixel threshold according to the n pixel values; if the pixel value corresponding to the pixel to be detected is greater than the second pixel threshold, the electronic device determines that the pixel to be detected is a dead pixel.
Therefore, by determining whether the pixel to be detected is a dead pixel or not in the mode, the detection accuracy can be improved.
In a design manner, after the electronic device determines that the pixel to be detected is a dead pixel, the electronic device replaces a pixel value corresponding to the pixel to be detected with a second pixel threshold value to compensate the pixel to be detected, or replaces a pixel value corresponding to the pixel to be detected with a maximum value in n pixel values to compensate the pixel to be detected.
Therefore, the picture quality can be improved by compensating the pixel values of the two adjacent pixels corresponding to the pixel to be detected, and further the user experience is improved.
In one design, the electronic device may replace the first pixel value with a first pixel threshold to compensate for the first pixel, or the electronic device may replace the first pixel value with a next-largest value of the m pixel values to compensate for the first pixel. Therefore, the picture quality can be improved by compensating the first pixel, and further the user experience is improved.
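The two compensation options described above (replacing the first pixel value with the first pixel threshold, or with the next-largest of the m values) can be sketched as follows; the function name and parameter defaults are illustrative assumptions:

```python
def compensate_first_pixel(values, bad_index, f_max=1.5, tbp_offset=0,
                           use_threshold=True):
    """Replace the bad pixel's value with either the first pixel threshold K1
    or the next-largest of the m pixel values, per the two options in the
    text. Defaults (f_max=1.5, tbp_offset=0) are illustrative assumptions."""
    smp = sorted(values)[-2]          # next-largest value (SMP)
    k1 = f_max * smp + tbp_offset     # first pixel threshold
    out = list(values)
    out[bad_index] = k1 if use_threshold else smp
    return out
```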
In one design, the electronic device may determine the first pixel threshold as follows. Specifically, the electronic device determines the first pixel threshold according to a first variable value, a second variable value, and the next-largest value of the m pixel values, where the first variable value is any value in the interval [1, 2], and the second variable value is related to the bit depth of the pixel or is zero.
In this way, the first pixel threshold determined in this way is more accurate, so that the accuracy of detection can be improved.
In one design, the first pixel threshold satisfies the following expression:
K1=Fmax*SMP+TBP_offset;
Where K1 denotes the first pixel threshold, Fmax denotes the first variable value, SMP denotes the next-largest value of the m pixel values, and TBP_offset denotes the second variable value.
In one design, the electronic device may determine the second pixel threshold as follows. Specifically, the electronic device determines the second pixel threshold according to a third variable value, a fourth variable value, and the maximum value of the n pixel values, where the third variable value is any value in the interval [1, 2], and the fourth variable value is related to the bit depth of the pixel or is zero.
In this way, the second pixel threshold determined in this way is more accurate, so that the accuracy of detection can be improved.
In one design, the second pixel threshold satisfies the following expression:
K2=Fmax*MP+offset;
where K2 denotes the second pixel threshold, Fmax denotes the third variable value, MP denotes the maximum value of the n pixel values, and offset denotes the fourth variable value.
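A minimal sketch of the second-stage check, using the expression K2 = Fmax * MP + offset above. The default values (Fmax = 1.2, offset = 0) and function names are illustrative assumptions:

```python
def second_pixel_threshold(values, f_max=1.2, offset=0):
    """Second pixel threshold K2 = Fmax * MP + offset, where MP is the
    maximum of the n same-channel pixel values (which include the replaced
    first pixel value). Defaults are illustrative assumptions."""
    return f_max * max(values) + offset

def is_pixel_to_detect_bad(pixel_value, neighbor_values, f_max=1.2, offset=0):
    """The pixel to be detected is flagged as a dead pixel if its value
    exceeds the second pixel threshold K2."""
    return pixel_value > second_pixel_threshold(neighbor_values, f_max, offset)
```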
In one design, the plurality of pixels included in a RAW domain image are divided into P pixel units, each of the P pixel units including one red pixel, two green pixels, and one blue pixel. The pixel unit may also be referred to as an RGGB pixel unit. Here, P ≥ 1.
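For an RGGB (Bayer) mosaic as described above, pixels on the same color channel sit two rows and two columns apart, so the same-channel neighbors of a pixel can be collected by sampling the surrounding window with a stride of 2. The window radius and indexing scheme below are illustrative assumptions:

```python
def same_channel_neighbors(raw, row, col, radius=2):
    """Collect the values of pixels in the (2*radius+1) x (2*radius+1) window
    centered on (row, col) that lie on the same RGGB color channel as the
    center, i.e. at even row/column offsets; the center itself is excluded.
    raw is a 2D list of pixel values; window size is an assumption."""
    h, w = len(raw), len(raw[0])
    values = []
    for dr in range(-radius, radius + 1, 2):   # step 2 stays on one channel
        for dc in range(-radius, radius + 1, 2):
            r, c = row + dr, col + dc
            if (dr or dc) and 0 <= r < h and 0 <= c < w:
                values.append(raw[r][c])
    return values
```

For an interior pixel with radius 2 this yields the m = 8 same-channel values of a 5x5 first pixel matrix; near the image border fewer neighbors are returned.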
In a second aspect, an electronic device is provided, where the electronic device has a function implementing any of the above first aspects, and the function may be implemented by hardware, or may be implemented by executing corresponding software by hardware. The hardware or software includes one or more modules corresponding to the functions described above.
In a third aspect, there is provided an electronic device comprising a memory, one or more processors, and a camera, the memory having stored therein computer program code comprising computer instructions which, when executed by the processors, cause the electronic device to perform the method of the first aspect or any one of its design manners.
In a fourth aspect, there is provided a chip system comprising at least one processor and an interface, the interface being configured to receive instructions and transmit them to the at least one processor; the at least one processor executes the instructions to cause an electronic device to perform the method of the first aspect or any one of its design manners.
In a fifth aspect, there is provided a computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the method of any of the first aspects above.
In a sixth aspect, there is provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of any of the first aspects above.
The technical effects caused by any implementation manner of the second aspect to the sixth aspect may refer to the technical effects caused by different implementation manners of the first aspect, and are not described herein.
Drawings
Fig. 1 is a schematic diagram of an interface displaying dark spots on an image according to an embodiment of the present application;
Fig. 2 is a schematic diagram of an interface displaying dark spots on an image according to another embodiment of the present application;
Fig. 3 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application;
Fig. 4 is a schematic structural diagram of a camera according to an embodiment of the present application;
Fig. 5 is a schematic structural diagram of an image sensor according to an embodiment of the present application;
Fig. 6 is a schematic structural diagram of a pixel unit according to an embodiment of the present application;
Fig. 7 is a schematic diagram of a software framework of an electronic device according to an embodiment of the present application;
Fig. 8 is a schematic diagram of an ISP process according to an embodiment of the present application;
Fig. 9 is a first schematic diagram of a method for detecting a defective pixel according to an embodiment of the present application;
Fig. 10 is a second schematic diagram of a method for detecting a defective pixel according to an embodiment of the present application;
Fig. 11 is a third schematic diagram of a method for detecting a defective pixel according to an embodiment of the present application;
Fig. 12 is a fourth schematic diagram of a method for detecting a defective pixel according to an embodiment of the present application;
Fig. 13 is a fifth schematic diagram of a method for detecting a defective pixel according to an embodiment of the present application;
Fig. 14 is a sixth schematic diagram of a method for detecting a defective pixel according to an embodiment of the present application;
Fig. 15 is a schematic diagram of a defective pixel before and after elimination according to an embodiment of the present application;
Fig. 16 is a schematic structural diagram of a chip system according to an embodiment of the present application.
Detailed Description
Before describing the embodiments of the present application, technical terms related to the present application will be explained.
Bad pixels: due to manufacturing-process defects (or aging over the time of use) of an image sensor, defective pixels may exist on the RAW domain image output by the image sensor. These pixels may be darker or brighter than neighboring pixels and are therefore called bad pixels (or dead pixels).
In some embodiments, dead pixels can be divided into single dead pixels, double dead pixels, and dead pixel clusters according to the number of adjacent dead pixels on the same color channel. A single dead pixel means that only one dead pixel exists within a certain range on the same color channel. Double dead pixels are two adjacent and continuous dead pixels within a certain range on the same color channel. A dead pixel cluster refers to three or more adjacent and continuous dead pixels within a certain range on the same color channel.
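The three categories above can be illustrated by grouping consecutive bad-pixel flags along one color channel into runs; this helper is purely illustrative and not part of the claimed method:

```python
def classify_runs(bad_flags):
    """Group consecutive True flags (bad pixels on one color channel) into
    runs and label them single (1), double (2), or cluster (3 or more),
    mirroring the definitions in the text."""
    runs, count = [], 0
    for flag in bad_flags + [False]:   # sentinel to flush the last run
        if flag:
            count += 1
        elif count:
            runs.append("single" if count == 1
                        else "double" if count == 2 else "cluster")
            count = 0
    return runs
```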
In practical implementation, dead pixels may affect the shooting effect, for example by degrading picture quality, and thus the user's shooting experience. For example, when shooting in a dark environment, if the image sensor has a dead pixel, a bright spot will appear on the captured picture. Conversely, when shooting in a bright environment, a dead pixel will appear as a dark spot on the captured picture. For example, as shown in fig. 1, three dark spots are included on the photographed picture.
If a bad pixel cluster exists on the image sensor, a large area of dark dots (or bright dots) will be displayed on the photographed image. For example, as shown in fig. 2, a large area of dark spots appears on a photographed picture.
Based on the above, the embodiment of the application provides a method for detecting a dead pixel, which is applied to an electronic device including a camera, according to the method, the dead pixel existing on the RAW domain image acquired by the image sensor can be detected, and the dead pixel is compensated, so that the quality of a shot picture can be improved, and the user experience is further improved.
The following describes in detail the technical solution provided by the embodiments of the present application with reference to the drawings attached to the specification.
The method for detecting the dead pixel provided by the embodiment of the application can be applied to electronic equipment with a shooting function. The electronic device may be, for example, a mobile phone, an action camera (e.g., a GoPro), a digital camera, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, a vehicle-mounted device, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), or an augmented reality (AR)/virtual reality (VR) device; the embodiment of the present application does not limit the specific form of the electronic device.
By way of example, fig. 3 shows a schematic structural diagram of the electronic device 100.
The electronic device 100 may include, among other things, a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a camera 193, and a display 194.
It should be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the application, the electronic device may include more or fewer components than illustrated, or certain components may be combined, or certain components may be split, or the components may be arranged differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include a central processor (central processing unit, CPU), an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. The different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and a command center of the electronic device 100, among others.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it may call them directly from the memory. This avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving system efficiency.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver/transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (USB) interface 130, etc.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only illustrative, and is not meant to limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also employ different interfacing manners in the above embodiments, or a combination of multiple interfacing manners.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store one or more computer programs, including instructions. The processor 110 may cause the electronic device 100 to perform the methods provided in some embodiments of the present application, as well as various functional applications, data processing, and the like, by executing the above-described instructions stored in the internal memory 121. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an operating system, one or more application programs (such as a gallery, contacts, etc.), and the like. The storage data area may store data created during use of the electronic device 100 (e.g., photos, contacts, etc.), and so on. In addition, the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, universal flash memory (universal flash storage, UFS), and the like. In other embodiments, processor 110 may cause electronic device 100 to perform the methods provided in embodiments of the present application, as well as various functional applications and data processing, by executing instructions stored in internal memory 121 and/or instructions stored in a memory disposed in the processor.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 may receive input from the battery 142 and/or the charge management module 140 to power the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like.
The power management module 141 may be configured to monitor performance parameters such as battery capacity, battery cycle times, battery charge voltage, battery discharge voltage, battery state of health (e.g., leakage, impedance), etc.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the electronic device 100. The mobile communication module 150 may include one or more filters, switches, power amplifiers, low noise amplifiers (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering and amplifying on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functionality of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area networks (wireless local area networks, WLAN) (e.g., a wireless fidelity (wireless fidelity, Wi-Fi) network), Bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field communication (near field communication, NFC), infrared (IR), etc., as applied to the electronic device 100. The wireless communication module 160 may be one or more devices that integrate one or more communication processing modules. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates and filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 150 of electronic device 100 are coupled, and antenna 2 and wireless communication module 160 are coupled, such that electronic device 100 may communicate with a network and other devices through wireless communication techniques.
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
Wherein the ISP is used to process the data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing, so that the electrical signal is converted into an image visible to naked eyes. ISP can also perform algorithm optimization on noise, brightness, color, etc. of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. Thus, the electronic device 100 may play or record video in a variety of encoding formats, such as moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
Of course, the electronic device 100 provided in the embodiment of the present application may further include other components, such as a positioning module 181, a key 190, a motor 191, an indicator 192, and SIM card interfaces 195 (1 to n), which are not limited herein.
In some embodiments of the present application, the camera 193 may be a rear camera, disposed on a side of the housing of the electronic device 100. Alternatively, in other embodiments of the present application, the camera 193 may be provided as a front-facing camera on one side of the display 194 of the electronic device 100.
For example, the camera 193 may include the image sensor 20 shown in fig. 4 and the optical lens 30 disposed at the incident light side of the image sensor 20. The optical lens 30 is capable of inputting external light into the image sensor 20.
By way of example, the image sensor 20 may include a filter 200 as shown in fig. 5. The optical filter 200 may include filter units 210 arranged in an array. The filter unit 210 may include at least three color filter blocks for transmitting light of the three primary colors red (R), blue (B), and green (G): a red filter block R, a blue filter block B, and a green filter block G, respectively. Since the human eye is sensitive to green, the number of green filter blocks G may be greater than the number of other color filter blocks in the same filter unit 210. For example, in the case where one filter unit 210 is arranged in a 2×2 matrix, any one of the filter units 210 described above may include one red filter block R, one blue filter block B, and two green filter blocks G.
Based on this, the red filter R is used to transmit red light among the light rays from the optical lens 30 in fig. 4, and the rest of the light rays are filtered out. The blue filter block B is used for transmitting blue light rays from the optical lens 30 and filtering out the rest of the light rays. The green filter block G is used for transmitting green light rays from the optical lens 30 and filtering out other light rays. In this way, the light from the optical lens 30 can be separated into three primary colors (RGB) light after passing through the color filter block, so that the electronic device 100 can acquire an RGB domain image.
Illustratively, as shown in fig. 5, the image sensor 20 further includes a plurality of photoelectric conversion elements 220 (or referred to as photosensitive elements). In the one filter unit 210, the position of each color filter block may correspond to p (p ≥ 1, such as p=1 or p=4) photoelectric conversion elements 220. Taking p=1 as an example, each color filter block may cover one photoelectric conversion element 220. The photoelectric conversion element 220 may be a photodiode for converting light passing through the color filter block into an electrical signal.
The photodiodes may be fabricated using a charge coupled device (charge coupled device, CCD) process, for example. The photodiodes may convert the collected optical signals (light rays indicated by arrows in fig. 4) into electrical signals, which are then converted into digital image signals by analog-to-digital conversion circuits. Alternatively, the photodiodes may be fabricated using a complementary metal oxide semiconductor (complementary metal oxide semiconductor, CMOS) process. A photodiode prepared by the CMOS process has an N-type semiconductor and a P-type semiconductor; the currents generated by the complementary effect of the two semiconductors can be recorded and interpreted by a processing chip and converted into digital image signals by an analog-to-digital conversion circuit.
For example, when p=1, as shown in fig. 5, the image sensor 20 may include the filter 200 and the plurality of photoelectric conversion elements 220, and one color filter block in the filter 200 corresponds to the position of one photoelectric conversion element 220. When the color filter block is the red filter block R, the blue filter block B, or the green filter block G, only the light incident on the color filter block transmits the light corresponding to the color of the color filter block, and the light enters the photoelectric conversion element 220 corresponding to the position of the color filter block, and is converted into an electrical signal by the photoelectric conversion element 220.
In this case, each photoelectric conversion element 220 converts the acquired electrical signal into a digital image signal through analog-to-digital conversion, and the image sensor 20 outputs a RAW domain image including a plurality of pixels. Each pixel corresponds to the electrical signal acquired by the photoelectric conversion element 220 located at the position of one color filter block. In the embodiment of the present application, one color filter block and the photoelectric conversion element 220 corresponding to its position may together constitute one pixel.
Since a color filter block, for example the blue filter block B, allows only light of one color (for example, blue light) to pass through it, each pixel in the RAW domain image carries only one piece of color information (or one color channel). For example, the digital image signal obtained by the pixel formed by a blue filter block of the image sensor 20 and the photoelectric conversion element 220 at its position carries only a blue channel. The color channel is consistent with the color of the blue filter block.
For example, as shown in fig. 6, an arrangement of a plurality of pixels included in the RAW domain image may be referred to as a Bayer (Bayer) array. The bayer array is a pixel unit including one red pixel (R), two green pixels (G), and one blue pixel (B), which may also be referred to as an RGGB pixel unit. For example, a plurality of pixels included in the RAW domain image may be divided into P pixel units, each of which includes one red pixel, two green pixels, and one blue pixel.
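In the RGGB arrangement described above, same-channel neighbors are found by striding through the mosaic in steps of two. A minimal sketch of splitting a RAW Bayer image into its four color planes (assuming a 2×2 RGGB unit starting at the top-left corner; the function and plane names are illustrative, and a real sensor's pattern phase would come from its metadata):

```python
import numpy as np

def split_bayer_rggb(raw: np.ndarray) -> dict:
    """Split a RAW Bayer (RGGB) mosaic into its four color planes.

    Assumes the repeating 2x2 unit is ordered R, G / G, B starting
    at the top-left corner of the image.
    """
    return {
        "R":  raw[0::2, 0::2],   # red pixels: even rows, even columns
        "G1": raw[0::2, 1::2],   # first green pixel of each unit
        "G2": raw[1::2, 0::2],   # second green pixel of each unit
        "B":  raw[1::2, 1::2],   # blue pixels: odd rows, odd columns
    }
```

Dead pixel detection on "the same color channel" then amounts to comparing values within one of these planes.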
The methods in the following embodiments may be implemented in the electronic device 100 having the above-described hardware structure. In order to make the technical solution of the present application clearer and to facilitate understanding, the technical solution provided by the embodiment of the present application is described in detail in the following embodiments in conjunction with the software structure of the electronic device 100.
Fig. 7 is a software block diagram of the electronic device 100 according to the embodiment of the present application.
The software structure of the electronic device may be a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture or a cloud architecture.
The layered architecture divides the software into several layers, each with a distinct role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the software architecture of the electronic device is divided from top to bottom into an application layer (APP), an application framework layer (FWK), a hardware abstraction layer (hardware abstraction layer, HAL), and a kernel layer (kernel). As shown in FIG. 7, the layered architecture may also include the android runtime (Android runtime) and system libraries. For ease of understanding, fig. 7 also illustrates some of the components of the electronic device 100 of fig. 3, such as the camera 193, the display 194, etc., in an embodiment of the present application.
The application layer may include a series of application packages. As shown in fig. 7, the application package may include APP1, APP2, APP3, and the like. In some embodiments, the application package may include some applications (e.g., camera applications, etc.) with shooting functionality. When the electronic device runs the camera application, the electronic device can start the camera and collect the RAW domain image through the camera.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for the application of the application layer. The application framework layer includes predefined functions. The application framework layer provides programming services to application layer calls through the API interface. As shown in fig. 7, the application framework layer includes a camera service framework.
Android runtime include core libraries and virtual machines. Android runtime is responsible for scheduling and management of the android system.
The core library comprises two parts: one part is the functions that the Java language needs to call, and the other part is the android core library.
The application layer and the application framework layer run in virtual machines. The virtual machine executes java files of the application program layer and the application program framework layer as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. Such as surface manager (surface manager), media library (Media Libraries), three-dimensional graphics processing library (e.g., openGL ES), 2D graphics engine (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
Media libraries support a variety of commonly used audio, video format playback and recording, still image files, and the like. The media library may support a variety of audio encoding formats, such as MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, etc.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The hardware abstraction layer is an interface layer between the kernel layer and the hardware, and can be used for abstracting the hardware. Illustratively, as shown in FIG. 7, the hardware abstraction layer includes a camera interface.
The kernel layer provides the underlying drivers for the various hardware of the electronic device. Illustratively, as shown in FIG. 7, the kernel layer includes a camera driver module.
The following is an exemplary illustration of an aspect of an embodiment of the present application in conjunction with the software architecture shown in fig. 7. In some embodiments, when the camera application is started, the camera application triggers an instruction to start the camera and invokes an API interface provided by the application framework layer to send the instruction to start the camera to the camera service framework. The camera service framework calls a camera interface in the hardware abstraction layer and sends an instruction for starting the camera to the camera driving module. The camera driving module is used for driving the camera 193 to acquire the RAW domain image.
The camera 193 transmits the acquired RAW domain image to a camera interface in the hardware abstraction layer. The camera interface is used for performing image processing on the RAW domain image, and transmitting the processed RAW domain image to the display screen 194.
By way of example, an ISP may be provided in the camera interface. The camera interface may perform image processing on the RAW domain image through the ISP. Illustratively, as shown in fig. 8, the ISP may process the RAW domain image as follows.
It will be appreciated that the various processing steps of the ISP illustrated in the embodiments of the present application do not constitute a particular limitation of the present application. In other embodiments of the application, the ISP may include more or fewer steps than shown, combine certain steps, split certain steps, or arrange certain steps differently.
Illustratively, as shown in fig. 8, a black level correction (black level correction, BLC) 201 is used to perform correction processing for the black level. The black level refers to the video signal level corresponding to no light output on a calibrated display device. Black level correction is performed, on the one hand, because the image sensor has dark current, so that the image sensor produces a voltage output even in the absence of illumination. On the other hand, the accuracy of the analog-to-digital conversion of the image sensor is limited; for example, at 10 bits (bit), the effective range of the pixel value of each pixel is 0 to 1023, and the image sensor may not be able to convert information close to 0.
The dead pixel correction (bad pixel correction, BPC) 202 is used to detect dead pixels on the RAW domain image and correct the dead pixels. Specifically, the pixel value of the pixels around the pixel to be detected may be obtained with the pixel to be detected as the center, and then a certain process is performed to detect the dead pixel. On the basis, the purpose of correcting the dead pixel can be achieved by compensating the pixel value of the dead pixel.
Lens shading correction (lens shading correction, LSC) 203 is used to eliminate the inconsistency in color and brightness between the periphery of the image and the center of the image caused by the lens optical system.
Noise Reduction (NR) 204 refers to RAW domain noise reduction. The RAW domain noise reduction is used to reduce noise in the image. Noise present in the image can affect the visual experience of the user, and the image quality of the image can be improved to some extent by noise reduction.
An automatic white balance (auto white balance, AWB) 205 is used to enable white objects to be rendered as white at any color temperature. Due to the influence of color temperature, white appears yellowish at low color temperatures and bluish at high color temperatures. The purpose of white balance is to make a white object satisfy R=G=B at any color temperature, thereby appearing white.
Color interpolation (Demosaic) 206 is used to interpolate the mosaic image so that each pixel simultaneously contains all three RGB components.
The color correction matrix (color correction matrix, CCM) 207 is used to calibrate the accuracy of colors other than white.
Global tone mapping (global tone mapping, GTM) 208 is used to map the luminance and dynamic range of a high dynamic range (high dynamic range, HDR) image to the standard of a low dynamic range (low dynamic range, LDR) image, so that the HDR image can be displayed on an LDR display. The same transformation function is used for each pixel of the image in the mapping process, and the mapping relationship is one-to-one.
The color space conversion (color space transform, CST) 209 is used to convert the RGB domain image into a YUV domain image.
Color noise reduction (noise reduction in Chroma, NR Chroma) 210 is used to reduce noise in the hue and saturation (UV) components of the YUV color space image. Noise present in the image can affect the visual experience of the user, and the image quality can be improved to some extent by noise reduction.
The color space conversion (color space transform, CST) 211 is used to convert the YUV domain image into an RGB domain image.
The local tone mapping (local tone mapping, LTM) 212 is used to map the luminance and dynamic range of an HDR image to the standard of an LDR image so that the HDR image can be displayed on an LDR display. The image is partitioned in the mapping process, and different transformation functions are used for different partitions.
The Gamma process (Gamma) 213 is used to adjust the brightness value, contrast, dynamic range, etc. of the image by adjusting the Gamma curve.
A two-dimensional color look-up table (two-dimension lut, TDL) 214 is used to color correct the image.
The color space conversion (color space transform, CST) 215 is used to convert the RGB domain image into a YUV domain image.
Sharpening (sharpen) 216 improves image sharpness by enhancing edges in the image.
It should be noted that, in the embodiment of the present application, the electronic device may implement a scheme of detecting and compensating the dead pixel in the dead pixel correction 202.
The following description is given by way of example only, and it should be understood that the following examples are illustrative of the present application and are not to be construed as limiting the application.
In general, for electronic devices that have been shipped, the number of adjacent dead pixels on the same color channel in the image sensor is at most three. An electronic device with four or more adjacent dead pixels on the same color channel is considered an unqualified product in factory inspection and cannot subsequently be shipped and sold. Thus, in actual implementation, the dead pixels may include a single dead pixel, a double dead pixel, and a dead pixel cluster (e.g., three dead pixels).
In some embodiments of the present application, for a RAW domain image including a plurality of pixels acquired by a camera (including the image sensor 20 described above), when detecting whether a single dead pixel is included in the plurality of pixels, the electronic device sequentially uses each pixel in the plurality of pixels as a pixel to be detected, and determines whether the pixel to be detected is the single dead pixel. Correspondingly, under the condition that whether the plurality of pixels comprise double dead pixels or not is detected, the electronic equipment sequentially takes every two adjacent pixels in the plurality of pixels as pixels to be detected, and whether the pixels to be detected are double dead pixels or not is determined.
In actual implementation, for the same RAW domain image, the electronic device sequentially detects whether there are a single dead pixel and a double dead pixel in a plurality of pixels included in the RAW domain image. That is, the electronic device first uses each pixel as a pixel to be detected, and detects whether the pixel to be detected is a single dead pixel. After all the pixels are detected, the electronic equipment takes every two adjacent pixels as pixels to be detected, and detects whether the pixels to be detected are double dead pixels or not.
The following embodiments respectively describe the specific implementation processes by which the electronic device detects single dead pixels and double dead pixels. In the following examples of the present application, detecting whether a bright point (hot pixel) is a dead pixel is taken as an example.
(One) Single bad point
For example, as shown in (1) in fig. 9, for one pixel to be detected in the RAW domain image, a pixel value of each pixel in m (e.g., 8) pixels in the same color channel (e.g., red channel) as the pixel to be detected in a first pixel matrix (e.g., 5×5 matrix) is obtained with the pixel to be detected (e.g., point a) as a center, so as to obtain m (i.e., 8) pixel values. m is a positive integer.
Fig. 9 illustrates an example in which the first pixel matrix is a 5×5 matrix, and does not limit the present application. Of course, the first pixel matrix may also be a3×3 matrix, or a7×7 matrix, without limitation. The value of m is not particularly limited, and is set as appropriate. Illustratively, the value of m is related to the size of the color channel and the first matrix of pixels. For example, in the green channel with the first pixel matrix of 3×3, m is 4, etc. will not be described herein.
In addition, in the embodiment of the application, in order to reduce the computational load and power consumption of the electronic device, the first pixel matrix can be set as small as possible, such as a 3×3 matrix or a 5×5 matrix.
Illustratively, the pixel value of each pixel shown in FIG. 9 is related to the number of bits of the pixel. For example, if the number of bits of a pixel is 8 bits, the pixel value ranges from 0 to 255. As another example, if the number of bits of a pixel is 10 bits, the pixel value ranges from 0 to 1023. For another example, if the number of bits of a pixel is 18 bits, the pixel value ranges from 0 to 262143. In the embodiment of the present application, the bit number of the pixel is 18 bits.
The number of bits of a pixel has a certain relationship with the image quality. The number of bits determines the number of different colors or gray levels that a pixel can represent, i.e. the color depth of the image. The higher the number of bits, the more colors the image colors are represented, and the higher the image quality. For example, if the number of bits of a pixel is 1bit, two colors of 0 and 1, that is, black and white, can be represented. As another example, if the number of bits of a pixel is 8 bits, 256 colors can be represented.
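The ranges quoted above follow directly from the color depth: an n-bit pixel can take 2^n values, from 0 to 2^n − 1. A one-line helper illustrating this (the function name is illustrative):

```python
def pixel_value_range(bits: int) -> tuple:
    """Return the valid (min, max) pixel value for a given bit depth."""
    return 0, (1 << bits) - 1
```

For example, `pixel_value_range(10)` gives (0, 1023) and `pixel_value_range(18)` gives (0, 262143), matching the ranges above.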
Then, the electronic device may determine whether the pixel to be detected is a dead pixel according to the obtained pixel value of each pixel in the m pixels which are in the same color channel with the pixel to be detected and are in the first pixel matrix and using the pixel to be detected as a center.
In some embodiments of the present application, as shown in (1) of fig. 9, for a pixel to be detected (e.g., point a), the electronic device may determine a pixel threshold (or a second pixel threshold) according to a maximum value of 8 pixel values within the 5×5 matrix and a preset variable value. If the pixel value of the pixel to be detected is greater than the pixel threshold value, determining that the pixel to be detected is a dead pixel (namely a single dead pixel). If the pixel value of the pixel to be detected is smaller than the pixel threshold value, determining that the pixel to be detected is not a dead pixel.
Illustratively, taking an example in which the preset variable value includes a preset variable value 1 and a preset variable value 2, the pixel threshold value may satisfy the following expression (1):
K=Fmax*MP+offset (1);
Where K represents the pixel threshold, Fmax represents the preset variable value 1 (or third variable value), MP represents the maximum pixel value (maximum pixel), i.e. the maximum of the 8 pixel values, and offset represents the preset variable value 2 (or fourth variable value).
The preset variable value 1 is an modifiable variable value, and the preset variable value 1 may be modified according to practical situations, and the specific value of the preset variable value 1 is not limited in the embodiment of the present application.
Accordingly, the preset variable value 2 is a modifiable variable value, and the preset variable value 2 is related to the bit number of the pixel. For example, when the number of bits of a pixel is 8 bits, the value range of the preset variable value 2 is 0 to 255. For another example, when the bit number of the pixel is 10 bits, the value range of the preset variable value 2 is 0 to 1023. For another example, when the number of bits of the pixel is 18 bits, the value range of the preset variable value 2 is 0 to 262143.
Illustratively, assuming Fmax=68/64 and offset=11413, then K = (68/64)×76477 + 11413 ≈ 92669.8. It can be seen that the pixel value of the pixel to be detected (point A) is 92670, which is greater than the pixel threshold K (i.e., 92669.8); therefore, the pixel to be detected, i.e., point A, is determined to be a dead pixel.
In some embodiments, the electronic device may also compensate for dead pixels. The electronic device may replace the pixel value of the dead pixel with the pixel threshold K (i.e., 92669.8), replace the pixel value of the dead pixel with the maximum value of m pixel values (i.e., 76477), replace the pixel value of the dead pixel with the next largest value of m pixel values (i.e., 71766), or replace the pixel value of the dead pixel with other pixel values.
For example, the electronic device may determine a median pixel value, a mean pixel value, a standard pixel value, or the like based on the above-described pixel values of 8 pixels, without limitation. Then, the electronic device replaces the pixel value of the dead pixel by adopting a pixel median value, a pixel mean value, a pixel standard deviation value and the like.
Or the electronic device may also replace the pixel value of the dead pixel with a preset pixel value. The preset pixel value may be a pixel value preconfigured by the electronic device and stored in the electronic device. Specific values of the preset pixel values are not limited, and are set in practice. For example, the preset pixel value may be less than or equal to a maximum value of the m pixel values.
As an example, as shown in (2) in fig. 9, the pixel value of the pixel to be detected is replaced with the maximum value of m pixel values (i.e., 76477).
For each pixel in the RAW domain image, the above processing is performed, and it is possible to detect whether a single dead pixel is included in a plurality of pixels.
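The single dead-pixel check and compensation described above can be sketched as follows. This is a sketch, not the patented implementation; only the maximum (76477) and next-largest (71766) of the 8 neighborhood values are quoted in the text, so the neighbor list in the usage example is partial and illustrative:

```python
def single_dead_pixel_threshold(neighbor_values, fmax, offset):
    """Expression (1): K = Fmax * MP + offset, where MP is the
    maximum of the m same-channel pixel values around the pixel
    under test (8 values for a 5x5 window on the red channel)."""
    return fmax * max(neighbor_values) + offset

def check_and_compensate(center, neighbor_values, fmax, offset):
    """Flag the center pixel as a single dead pixel if its value
    exceeds K, and replace it with the neighborhood maximum (one
    of the compensation strategies described above)."""
    k = single_dead_pixel_threshold(neighbor_values, fmax, offset)
    return max(neighbor_values) if center > k else center
```

With Fmax = 68/64 and offset = 11413, the worked example gives K = (68/64)×76477 + 11413 ≈ 92669.8, so the value 92670 at point A is flagged and replaced with 76477.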
In summary, the electronic device may detect a single dead pixel on the RAW domain image acquired by the image sensor by using the method, and compensate the single dead pixel, so as to improve the quality of the shot picture, and further improve the user experience.
(II) double bad points
For example, as shown in (1) in fig. 10, for two adjacent pixels to be detected in the RAW domain image, the pixel value of each of m (e.g., 10) pixels in the first pixel matrix (e.g., 5×7 matrix) that are the same color channel (e.g., blue channel) as the two adjacent pixels to be detected is obtained by taking the two adjacent pixels to be detected (e.g., points a and B) as the center, so as to obtain m (i.e., 10) pixel values.
It should be noted that, for the first pixel matrix and the illustration of the corresponding value of m, reference may be made to the above embodiment, which is not repeated here. Likewise, for the illustration of the pixel value of each pixel shown in fig. 10, reference may be made to the above embodiment, which is not repeated here. In addition, the two adjacent pixels to be detected refer to two consecutive pixels in the same color channel.
Then, the electronic device may determine whether the two adjacent pixels to be detected are dead pixels according to the obtained pixel value of each pixel in the m pixels, which are centered on the two adjacent pixels to be detected and are in the same color channel with the two adjacent pixels to be detected, in the first pixel matrix.
In some embodiments of the present application, as shown in (1) of fig. 10, for two adjacent pixels to be detected (e.g., point a and point B), the electronic device may determine the pixel threshold according to the maximum value of 10 pixel values within the 5×7 matrix, and the preset variable value. If the pixel values of the two adjacent pixels to be detected are both larger than the pixel threshold value, determining that the two adjacent pixels to be detected are dead pixels (namely double dead pixels). And if the pixel values of the two adjacent pixels to be detected are smaller than the pixel threshold value, determining that the two adjacent pixels to be detected are not dead pixels.
For example, the pixel threshold may satisfy the above expression (1), and for the illustration of the expression (1), reference may be made to the above embodiment, which is not repeated herein.
Illustratively, assuming Fmax=68/64 and offset=1759, then K = (68/64)×54778 + 1759 ≈ 59960.6. It can be seen that the pixel value of the pixel to be detected (point A) is 59961 and the pixel value of the pixel to be detected (point B) is 73951, both of which are greater than the pixel threshold K (i.e., 59960.6); therefore, the two adjacent pixels to be detected (points A and B) are determined to be dead pixels (i.e., a double dead pixel).
In some embodiments, the electronic device may also compensate for dead pixels. For an example of compensating the dead pixel, reference may be made to the above embodiment, and details are not repeated.
Illustratively, as shown in (2) of fig. 10, the pixel value of the dead pixel is replaced with the maximum value (i.e., 54778) of the above 10 pixel values.
The above processing is performed for every two adjacent pixels in the RAW domain image, so that whether the plurality of pixels include a double dead pixel can be detected. Here, every two adjacent pixels refers to any two adjacent pixels among the plurality of pixels included in the RAW domain image, for example, point A and point B, point B and its next adjacent pixel, and so on, which are not described in detail.
In summary, the electronic device may detect the double dead pixels on the RAW domain image acquired by the image sensor by using the method, and compensate the double dead pixels, so as to improve the quality of the shot picture, and further improve the user experience.
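The double dead-pixel check differs from the single-pixel case only in that both values of the adjacent pair must exceed the threshold. A minimal sketch under the same assumptions (expression (1); only the neighborhood maximum MP of 54778 is quoted in the text, so the neighbor list in the usage example is illustrative):

```python
def is_double_dead_pixel(v_a, v_b, neighbor_values, fmax, offset):
    """Flag two adjacent same-channel pixels as a double dead pixel
    when both values exceed K = Fmax * MP + offset (expression (1)).

    Returns (is_dead, threshold)."""
    k = fmax * max(neighbor_values) + offset
    return (v_a > k and v_b > k), k
```

In the worked example, K = (68/64)×54778 + 1759 ≈ 59960.6, and both 59961 (point A) and 73951 (point B) exceed it.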
In the related art, an electronic device sequentially detects single dead pixels and double dead pixels. Namely, after the electronic device detects the single dead pixel, the electronic device continues to detect the double dead pixels. However, in the related art, when the electronic device detects the double bad points, part of the double bad points cannot be detected and eliminated due to the existence of the bad point clusters. For example, as shown in (1) in fig. 11, for two adjacent pixels to be detected (e.g., point a and point B) in the RAW domain image, the pixel value of each of m (e.g., 10) pixels in the same color channel (e.g., green channel) as the two adjacent pixels to be detected in the first pixel matrix (e.g., 5×7 matrix) is obtained with the two adjacent pixels to be detected (e.g., point a and point B) as the center, so as to obtain m (i.e., 10) pixel values.
For example, the electronic device may determine whether the adjacent two pixels to be detected are dead pixels using the above expression (1). For example, assume Fmax=1 and offset=0; in this case, all pixels larger than MP (the maximum pixel value) are determined to be dead pixels, i.e., the dead pixel correction intensity is strongest. As shown at point A in (1) of fig. 11, when single dead pixel detection is performed, according to expression (1), K=1×29149+0=29149; the pixel value of pixel A is larger than K, so it is determined to be a dead pixel and is replaced with 29149. However, when double dead pixel detection is performed, as shown in (2) of fig. 11, the maximum value of the 10 pixel values around points A and B is 30548, and therefore K=1×30548+0=30548. It can be seen that the pixel values of the pixels to be detected (points A and B) are both 29149, which is smaller than the pixel threshold K (i.e., 30548); therefore, the pixels to be detected (points A and B) are not determined to be dead pixels.
Therefore, the adjacent three pixels to be detected are not processed, and finally the quality of the shot picture is affected, so that the user experience is affected.
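The failure mode above can be reproduced numerically. The script below replays the related-art sequence with Fmax = 1 and offset = 0, using the values from fig. 11 (a sketch for illustration only):

```python
fmax, offset = 1, 0

# Single-pixel pass around point A: neighborhood maximum is 29149.
k_single = fmax * 29149 + offset          # K = 29149
a_after_single = 29149                    # A exceeded K and was clamped to MP

# Double-pixel pass around the pair (A, B): point C, itself a dead
# pixel, sits among the 10 neighbors and inflates MP to 30548.
k_double = fmax * 30548 + offset          # K = 30548
a, b = a_after_single, 29149
pair_detected = a > k_double and b > k_double   # False: the pair is missed
```

Because the cluster member C raises the threshold, the genuinely dead pair (A, B) passes the test undetected.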
Based on this, in other embodiments of the present application, after the electronic device obtains the pixel value of each of m pixels in the first pixel matrix, which are the same color channel as the adjacent two pixels to be detected, the electronic device first determines the first pixel threshold according to the m pixel values. If the first pixel value in the m pixel values is larger than the first pixel threshold value, the electronic device determines that the first pixel corresponding to the first pixel value is a dead pixel.
For example, as shown in fig. 12, the pixel values of 10 pixels of the green channel in the 5×7 matrix are obtained with the adjacent two pixels to be detected (points a and B) as the centers, to obtain 10 pixel values. For example, the 10 pixel values may be 16525, 11614, 9034, 7823, 11476, 12802, 16665, 6781, 2609, and 3048. On this basis, the electronic device may determine the first pixel threshold from the 10 pixel values.
For example, the electronic device may determine the first pixel threshold based on the next largest value of the 10 pixel values, and the first and second variable values. Illustratively, the first pixel threshold may satisfy the following expression (2):
K1=Fmax*SMP+TBP_offset (2);
Where K 1 denotes the first pixel threshold, Fmax denotes the first variable value, SMP (sub-maximum pixel) denotes the next-largest value of the 10 pixel values, and TBP_offset denotes the second variable value.
The first variable value and the second variable value are modifiable variable values, and specific values of the first variable value and the second variable value are not limited, and are set as appropriate.
Illustratively, the first variable value may be any value in the interval [1,2]. The second variable value is associated with the number of bits of the pixel, or the second variable value is zero. For the illustration of the second variable value, reference may be made to the illustration of the preset variable value 2 in the above embodiment, which is not repeated here.
As an example, when the first variable value is 2, the second variable value may be zero.
Illustratively, in connection with the example of fig. 12, assuming Fmax=1 and TBP_offset=1000, K 1 = 1×16665 + 1000 = 17665. It can be seen that, if the first pixel value (e.g. 30548) of the 10 pixel values is greater than the first pixel threshold (i.e., 17665), the electronic device may determine that the first pixel (e.g. point C) corresponding to the first pixel value is a dead pixel.
In some embodiments, the electronic device may also compensate for the dead pixel (i.e., the first pixel). The electronic device may illustratively replace the first pixel value with the first pixel threshold to compensate for the dead pixel, or as shown in fig. 13, the electronic device may replace the first pixel value with the next largest of the 10 pixel values.
In some embodiments of the present application, after the electronic device compensates the first pixel, the electronic device determines whether the two adjacent pixels to be detected (i.e., the point a and the point B) are dead pixels. Or in other embodiments of the present application, if the first pixel (e.g., point C) corresponding to the first pixel value is not a dead pixel, the electronic device continues to determine whether the two adjacent pixels to be detected (points a and B) are dead pixels.
As one example, the electronic device may determine whether two adjacent pixels to be detected are dead pixels based on the first pixel threshold. If the pixel values corresponding to the two adjacent pixels to be detected are larger than the first pixel threshold value, the electronic equipment determines that the two adjacent pixels to be detected are dead pixels. As shown in fig. 12, the pixel values of two adjacent pixels to be detected (points a and B) are 30471 and 29149, respectively, which are both greater than the first pixel threshold (i.e., 17665), and thus the two adjacent pixels to be detected are dead pixels. If the pixel values of the two adjacent pixels to be detected are smaller than the first pixel threshold value, the electronic device determines that the two adjacent pixels to be detected are not dead pixels.
On this basis, the electronic device can also compensate for the dead pixel (i.e., points a and B). The electronic device replaces pixel values corresponding to two adjacent pixels to be detected with the first pixel threshold value to compensate the pixels to be detected. Or as shown in fig. 13, the electronic device replaces the pixel values corresponding to the two adjacent pixels to be detected with the next largest value (i.e. 16665) of the above 10 pixel values to compensate the pixels to be detected.
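The pair check and compensation described above can be sketched as follows (illustrative names; the replace-by-next-largest variant of fig. 13 is shown):

```python
def check_pair(pair, k1, smp):
    """Return (is_dead, compensated_pair): if both adjacent pixels under
    test exceed the first pixel threshold K1, they are dead pixels and
    are replaced by the next-largest neighbor value SMP; otherwise the
    values are kept unchanged."""
    if all(v > k1 for v in pair):
        return True, [smp, smp]
    return False, list(pair)

# Fig. 12 numbers: A = 30471, B = 29149, K1 = 17665, SMP = 16665
is_dead, fixed = check_pair([30471, 29149], 17665, 16665)
# is_dead is True; both pixel values are replaced by 16665
```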
As another example, after the electronic device compensates the first pixel, the electronic device may further determine whether the two adjacent pixels to be detected are dead pixels using the above expression (1). Illustratively, as shown in fig. 14, after the first pixel value is replaced, the pixel value at point C is 16665. On this basis, the electronic device takes the two adjacent pixels to be detected (i.e. point A and point B) as the center and obtains the pixel value of each of the n pixels in the second pixel matrix that are in the same color channel as the two adjacent pixels to be detected, yielding n pixel values, where n is a positive integer and the n pixel values include the replaced first pixel value (i.e. the replaced value of point C, such as 16665).
Illustratively, the electronic device determines the second pixel threshold from the n pixel values. If the pixel values corresponding to the two adjacent pixels to be detected are larger than the second pixel threshold value, the electronic equipment determines that the two adjacent pixels to be detected are dead pixels. If the pixel values corresponding to the two adjacent pixels to be detected are smaller than the second pixel threshold value, the electronic device determines that the two adjacent pixels to be detected are not dead pixels.
For the second pixel matrix and the value of n, reference may be made to the above embodiments, and details thereof are omitted herein.
Taking the second pixel matrix as a 5×7 matrix as an example, for the two adjacent pixels to be detected, as shown in fig. 14, the electronic device may determine the second pixel threshold according to the maximum value of the 10 pixel values in the 5×7 matrix, the preset variable value 1, and the preset variable value 2. If the pixel values of the two adjacent pixels to be detected are both larger than the second pixel threshold, the electronic device determines that the two adjacent pixels to be detected are dead pixels.
For example, the electronic device may determine the second pixel threshold using the above expression (1). Illustratively, assuming Fmax = 68/64 and offset = 11413, K 2 = (68/64)×16665 + 11413 ≈ 29119.6. It can be seen that the pixel value of the pixel to be detected (point A) is 30471, which is greater than the second pixel threshold K 2 (i.e., 29119.6), and the pixel value of the pixel to be detected (point B) is 29149, which is also greater than K 2, so the two adjacent pixels to be detected are determined to be dead pixels.
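Numerically, the second-threshold computation above can be sketched as follows (only the values 16665, 68/64, 11413, 30471 and 29149 come from the example; the remaining neighbor values are hypothetical):

```python
def second_pixel_threshold(neighbor_values, fmax, offset):
    """K2 = Fmax * MP + offset, with MP the maximum of the n same-channel
    neighbor values (after point C has been replaced)."""
    return fmax * max(neighbor_values) + offset

# Neighbor maximum after replacement is 16665 (fig. 14); other values assumed.
k2 = second_pixel_threshold([16665, 16010, 15980, 16200], 68/64, 11413)
# (68/64) * 16665 + 11413 = 29119.5625, i.e. about 29119.6
assert 30471 > k2 and 29149 > k2   # points A and B exceed K2 -> dead pixels
```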
On this basis, the electronic device replaces the pixel values corresponding to the two adjacent pixels to be detected with the second pixel threshold to compensate the pixels to be detected, or replaces them with the maximum value among the n pixel values to compensate the pixels to be detected.
In summary, taking two adjacent pixels to be detected as the center, the electronic device may first detect and compensate a dead pixel present in the first pixel matrix, and then determine whether the two adjacent pixels to be detected are themselves dead pixels. This ensures that the electronic device can detect three adjacent dead pixels (i.e., a dead pixel cluster), thereby improving the accuracy of detecting dead pixel clusters.
By way of example, as shown in fig. 15, by adopting the scheme provided by the embodiment of the application, aiming at the large-area dark spots caused by the dead spot clusters, the electronic device can detect the dead spot clusters and compensate the dead spot clusters, so that the large-area dark spots caused by the dead spot clusters can be eliminated, the picture quality is improved, and the user experience is further improved.
In addition, in the embodiment of the application, aging of the image sensor may also cause four (or more) adjacent dead pixels in the RAW domain image acquired by the image sensor. For example, in a detection method for four (or more) adjacent dead pixels, the pixel values of the pixels around the four adjacent pixels to be detected are obtained with those pixels as the center, and a pixel threshold is determined based on the obtained pixel values. If the pixel values of the four adjacent pixels to be detected are all greater than the pixel threshold, the electronic device determines that the four adjacent pixels to be detected are dead pixels; if they are all smaller than the pixel threshold, the electronic device determines that they are not dead pixels. For the specific implementation, reference may be made to the above embodiments, and details are not repeated here.
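The cluster generalization above can be sketched as one function (illustrative names; the threshold form follows the expression described earlier, and all numeric values in the usage example are hypothetical):

```python
def cluster_is_dead(cluster_values, neighbor_values, fmax=1.0, offset=0.0):
    """Generalized check: every pixel of a candidate cluster (two, three,
    four or more adjacent pixels) must exceed the threshold derived from
    the same-channel neighbors outside the cluster."""
    smp = sorted(neighbor_values)[-2]      # next-largest neighbor value
    threshold = fmax * smp + offset
    return all(v > threshold for v in cluster_values)

# Hypothetical four-pixel cluster against neighbors near the 16000 level
result = cluster_is_dead([30471, 29149, 30548, 29800],
                         [16665, 16010, 15980, 16200], fmax=1, offset=1000)
# result is True: all four cluster values exceed 16200 + 1000 = 17200
```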
In addition, the embodiments of the present application illustrate detecting and compensating dead pixels in a scene with a bright point in a dark environment. For a scene with a dark point in a bright environment, expression (3) may be employed to detect whether a dead pixel exists.
In expression (3), Fmin denotes the first variable value, MP denotes the minimum pixel value (i.e., the minimum pixel value around the pixel to be detected), and offset denotes the second variable value.
The principle of detecting whether a dead pixel exists or not and compensating the dead pixel is similar to that of detecting the bright point, and the specific implementation process can refer to the above embodiment and will not be repeated here.
As an example, if the pixel value of the pixel to be detected is smaller than the pixel minimum value, the pixel to be detected is determined to be a dark point (i.e., a dead pixel in a bright environment).
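Since expression (3) itself is not reproduced in this text, the sketch below ASSUMES a mirror of the bright-point threshold, with the threshold taken as Fmin × MP − offset; all names and numbers are hypothetical:

```python
def is_dark_point(value, neighbor_values, fmin=1.0, offset=0.0):
    """ASSUMED form of the dark-point test (expression (3) is not shown
    above): threshold = Fmin * MP - offset, with MP the minimum
    same-channel neighbor value; a pixel below the threshold is
    treated as a dark point (dead pixel in a bright environment)."""
    threshold = fmin * min(neighbor_values) - offset
    return value < threshold

# Hypothetical numbers: neighbors near 5000, suspect pixel reads 120
flag = is_dark_point(120, [5000, 5100, 4900, 5050], fmin=1.0, offset=100)
# flag is True: 120 < 4900 - 100 = 4800
```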
The content described in each embodiment of the present application may be used to explain other embodiments, the technical solutions described in one embodiment may be applied to other embodiments, and the technical features described in different embodiments may be combined to form new solutions.
The embodiment of the application provides an electronic device, which can comprise a camera, a memory and one or more processors, wherein the memory stores computer program codes, and the computer program codes comprise computer instructions which, when executed by the processors, cause the electronic device to execute the functions or steps in the embodiment. The structure of the electronic device may refer to the structure of the electronic device 100 shown in fig. 3 described above.
The embodiment of the application also provides a chip system which is applied to the electronic equipment. As shown in fig. 16, the chip system 1100 includes at least one processor 1101 and at least one interface circuit 1102. The processor 1101 may be the processor 110 shown in fig. 3 in the foregoing embodiment, and the interface circuit 1102 may be, for example, an interface circuit between the processor 1101 and an external memory, or an interface circuit between the processor 1101 and an internal memory.
The processor 1101 and interface circuit 1102 may be interconnected by wires. For example, interface circuit 1102 may be used to receive signals from other devices (e.g., a memory of electronic device 100). For another example, the interface circuit 1102 may be used to send signals to other devices (e.g., the processor 1101). The interface circuit 1102 may, for example, read instructions stored in a memory and send the instructions to the processor 1101. The instructions, when executed by the processor 1101, may cause the electronic device to perform the various functions or steps performed by the handset in the above-described embodiments. Of course, the system-on-chip may also include other discrete devices, which are not particularly limited in accordance with embodiments of the present application.
Embodiments of the present application also provide a computer readable storage medium, where the computer readable storage medium includes computer instructions, which when executed on an electronic device, cause the electronic device to perform the functions or steps performed by the electronic device in the above-described method embodiments.
Embodiments of the present application also provide a computer program product which, when run on a computer, causes the computer to perform the functions or steps performed by the electronic device in the method embodiments described above.
It should be noted that the terms "first" and "second" and the like in the description, the claims and the drawings of the present application are used for distinguishing between different objects and not for describing a particular sequential order. The terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present embodiment, unless otherwise specified, the meaning of "plurality" is two or more.
Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present application, "at least one (item)" means one or more, and "plurality" means two or more. "At least two (items)" means two, three, or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: only A exists, only B exists, or both A and B exist, where A and B may be singular or plural. The character "/" generally indicates that the associated objects are in an "or" relationship. "At least one of the following items" or similar expressions means any combination of these items, including any combination of single items or plural items. For example, at least one of a, b or c may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b and c may each be singular or plural. The terms "when" and "if" refer to processing performed under an objective condition, are not intended to limit timing, do not require a determining action in implementation, and imply no other limitation.
In embodiments of the application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "e.g." in an embodiment should not be taken as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion that may be readily understood.
It will be apparent to those skilled in the art from this description that the above division into functional modules is merely illustrative, for convenience and brevity. In practical applications, the above functions may be allocated to different functional modules as required; that is, the internal structure of the apparatus may be divided into different functional modules to perform all or part of the functions described above.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application. The storage medium includes a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media capable of storing program code.
The foregoing is merely illustrative of specific embodiments of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present application should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (15)

1. A method for detecting a dead pixel, applied to an electronic device, wherein the electronic device comprises a camera, the camera is configured to acquire an original RAW domain image, the RAW domain image comprises a plurality of pixels, and the electronic device sequentially takes every two adjacent pixels of the plurality of pixels as pixels to be detected to detect whether a dead pixel exists among the plurality of pixels, the method comprising:
The electronic equipment takes the pixel to be detected as a center, and obtains a pixel value of each pixel in m pixels which are in the same color channel with the pixel to be detected in a first pixel matrix to obtain m pixel values, wherein m is a positive integer;
the electronic equipment determines a first pixel threshold according to the m pixel values;
if a first pixel value is larger than the first pixel threshold value in the m pixel values, the electronic equipment determines that a first pixel corresponding to the first pixel value is a dead pixel;
after the electronic device compensates the first pixel, the electronic device determines whether the pixel to be detected is a dead pixel.
2. The method of claim 1, wherein the electronic device determining that the pixel to be detected is a dead pixel comprises:
and if the pixel values corresponding to the pixels to be detected are all larger than the first pixel threshold value, the electronic equipment determines that the pixels to be detected are dead pixels.
3. The method according to claim 2, wherein the method further comprises:
the electronic device replaces the pixel value corresponding to the pixel to be detected with the first pixel threshold value to compensate the pixel to be detected, or
And the electronic equipment replaces the pixel value corresponding to the pixel to be detected with the next largest value in the m pixel values so as to compensate the pixel to be detected.
4. The method of claim 1, wherein the electronic device determining whether the pixel to be detected is a dead pixel comprises:
The electronic equipment takes the pixel to be detected as a center, and obtains a pixel value of each pixel in a second pixel matrix, wherein the pixel value is in the same color channel with the pixel to be detected, so as to obtain n pixel values;
The electronic equipment determines a second pixel threshold according to the n pixel values;
And if the pixel value corresponding to the pixel to be detected is larger than the second pixel threshold value, the electronic equipment determines that the pixel to be detected is a dead pixel.
5. The method according to claim 4, wherein the method further comprises:
The electronic device replaces the pixel value corresponding to the pixel to be detected with the second pixel threshold value to compensate the pixel to be detected, or
And the electronic equipment replaces the pixel value corresponding to the pixel to be detected with the maximum value in the n pixel values so as to compensate the pixel to be detected.
6. The method of any of claims 1-5, wherein the electronic device compensating the first pixel comprises:
the electronic device replaces the first pixel value with the first pixel threshold value, or
The electronic device replaces the first pixel value with a next largest value of the m pixel values.
7. The method of any of claims 1-6, wherein the electronic device determining a first pixel threshold from the m pixel values comprises:
the electronic device determines the first pixel threshold according to a first variable value, a second variable value and a next-largest value in the m pixel values;
wherein the first variable value is any one of [1,2], the second variable value is related to the number of bits of a pixel, or the second variable value is zero.
8. The method of claim 7, wherein the first pixel threshold satisfies the following expression:
K1=Fmax*SMP+TBP_offset;
Wherein K 1 denotes the first pixel threshold, fmax denotes the first variable value, SMP denotes the next largest value, and tbp_offset denotes the second variable value.
9. The method of claim 4 or 5, wherein the electronic device determining a second pixel threshold from the n pixel values comprises:
the electronic device determines the second pixel threshold according to the third variable value, the fourth variable value and the maximum value of the n pixel values;
Wherein the third variable value is either one of [1,2], the fourth variable value is related to the number of bits of the pixel, or the fourth variable value is zero.
10. The method of claim 9, wherein the second pixel threshold satisfies the following expression:
K2=Fmax*MP+offset;
wherein K 2 denotes the second pixel threshold, fmax denotes the third variable value, MP denotes the maximum value, and offset denotes the fourth variable value.
11. The method according to any one of claims 1 to 10, wherein,
The pixels are divided into P pixel units, each pixel unit in the P pixel units comprises a red pixel, two green pixels and a blue pixel, and P is more than or equal to 1.
12. An electronic device is characterized by comprising a memory, one or more processors and a camera;
The memory having stored therein computer program code comprising computer instructions which, when executed by the processor, cause the electronic device to perform the method of any of claims 1-11.
13. A chip system is characterized by comprising at least one processor and an interface;
the interface for receiving instructions and transmitting to the at least one processor, the at least one processor executing the instructions to cause the electronic device to perform the method of any of claims 1-11.
14. A computer readable storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method of any of claims 1-11.
15. A computer program product, characterized in that the computer program product, when run on a computer, causes the computer to perform the method according to any of claims 1-11.
CN202410214648.2A 2024-01-29 2024-02-26 A method for detecting bad pixel and electronic device Pending CN119277045A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202410124205 2024-01-29
CN2024101242054 2024-01-29

Publications (1)

Publication Number Publication Date
CN119277045A true CN119277045A (en) 2025-01-07

Family

ID=94107721

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410214648.2A Pending CN119277045A (en) 2024-01-29 2024-02-26 A method for detecting bad pixel and electronic device

Country Status (1)

Country Link
CN (1) CN119277045A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6965395B1 (en) * 2000-09-12 2005-11-15 Dialog Semiconductor Gmbh Methods and systems for detecting defective imaging pixels and pixel values
CN102640489A (en) * 2009-10-20 2012-08-15 苹果公司 System and method for detecting and correcting defective pixels in an image sensor
CN110288599A (en) * 2019-07-10 2019-09-27 浙江大华技术股份有限公司 A kind of dead pixel detection method, device, electronic equipment and storage medium
CN112911174A (en) * 2021-01-18 2021-06-04 珠海全志科技股份有限公司 Image dead pixel cluster correction method, computer device and computer readable storage medium


Similar Documents

Publication Publication Date Title
EP4024323A1 (en) Image processing method and apparatus
KR102480600B1 (en) Method for low-light image quality enhancement of image processing devices and method of operating an image processing system for performing the method
CN115601244B (en) Image processing method and device and electronic equipment
CN114693580B (en) Image processing method and related device
US20240119566A1 (en) Image processing method and apparatus, and electronic device
CN116052568B (en) A display screen calibration method and related equipment
CN115696078B (en) Color filter arrays, image sensors, camera modules and electronics
CN115550570A (en) Image processing method and electronic equipment
CN117135293A (en) Image processing methods and electronic devices
CN115550575B (en) Image processing method and related equipment
US20240397015A1 (en) Image Processing Method and Electronic Device
CN116668838A (en) Image processing method and electronic equipment
CN115631250B (en) Image processing methods and electronic equipment
US20240144451A1 (en) Image Processing Method and Electronic Device
CN115460343B (en) Image processing method, device and storage medium
CN115955611B (en) Image processing method and electronic equipment
CN119277045A (en) A method for detecting bad pixel and electronic device
CN115760652A (en) Method and electronic device for extending image dynamic range
CN116029914A (en) Image processing method and electronic device
CN117711300B (en) Image display method, electronic device, readable storage medium and chip
CN118982493B (en) Tone mapping method and electronic equipment
CN115118963A (en) Image quality adjustment method, electronic device, and storage medium
CN118509715B (en) Light spot display method, device and electronic equipment
CN118488319B (en) Image processing method and electronic device
CN117710265B (en) Image processing method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: Unit 3401, unit a, building 6, Shenye Zhongcheng, No. 8089, Hongli West Road, Donghai community, Xiangmihu street, Futian District, Shenzhen, Guangdong 518040

Applicant after: Honor Terminal Co.,Ltd.

Address before: 3401, unit a, building 6, Shenye Zhongcheng, No. 8089, Hongli West Road, Donghai community, Xiangmihu street, Futian District, Shenzhen, Guangdong

Applicant before: Honor Device Co.,Ltd.

Country or region before: China