Detailed Description
The terminology used in the embodiments of the application is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the embodiments of the present application to describe various information, this information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the application. Furthermore, depending on the context, the word "if" as used herein may be interpreted as "when," "upon," or "in response to determining."
An embodiment of the application provides an image correction method, which can be applied to any electronic device. Referring to fig. 2, a schematic flowchart of the image correction method, the method may include:
Step 201, obtaining a pixel point to be corrected in an original image, and obtaining a first image area corresponding to the pixel point to be corrected from the original image, where the center pixel point of the first image area is the pixel point to be corrected. The pixel point to be corrected may be an abnormal pixel point belonging to a dead pixel block.
Step 202, dividing the first image area into K pixel blocks, where K is a positive integer greater than 1.
Step 203, selecting a target pixel block from the K pixel blocks based on the pixel feature corresponding to each pixel block, that is, selecting one of the K pixel blocks as the target pixel block corresponding to the pixel point to be corrected. The target pixel block is the pixel block with the greatest similarity to the pixel point to be corrected, that is, the pixel block that can provide the most reference information for the pixel point to be corrected.
Step 204, correcting the pixel value of the pixel point to be corrected based on the pixel feature corresponding to the target pixel block, to obtain a corrected pixel value of the pixel point to be corrected. For example, a second image area may be obtained from the original image, and when the pixel value of the pixel point to be corrected is corrected, the pixel value of each pixel point in the second image area is corrected based on the pixel feature corresponding to the target pixel block.
Step 205, generating a corrected image based on the corrected pixel values of the pixel points to be corrected.
For example, the K pixel blocks form a plurality of pixel block sets, each pixel block set including two pixel blocks, where the line connecting the central pixel points of the two pixel blocks in a pixel block set passes through the pixel point to be corrected. On this basis, selecting the target pixel block from the K pixel blocks based on the pixel feature corresponding to each pixel block may include, but is not limited to: for each pixel block set, determining a gradient value corresponding to the pixel block set based on the pixel features respectively corresponding to the two pixel blocks in the set, where the pixel feature corresponding to a pixel block may be the average value, median value, maximum value, or minimum value of the pixel values of all pixel points in the pixel block; and then, based on the gradient value corresponding to each pixel block set, selecting the pixel block with the smaller pixel feature in the pixel block set corresponding to the maximum gradient value as the target pixel block.
The pixel block with the smaller pixel feature in the pixel block set corresponding to the maximum gradient value is the pixel block with the greatest similarity to the pixel point to be corrected, and this pixel block is taken as the target pixel block. Obviously, based on the gradient values and the pixel features corresponding to the pixel blocks, the target pixel block can be found from all the pixel blocks.
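As an illustration, the selection of steps 202-203 can be sketched as follows. This is a non-limiting sketch under assumed concrete choices: K = 4 pixel blocks taken as the left/right and top/bottom strips of the first image area, the pixel feature of a block taken as its mean value, and the gradient value of a pixel block set taken as the absolute difference of its two block features; none of these choices are mandated by the embodiment.

```python
import numpy as np

def select_target_feature(area):
    """Given a first image area (square array with the pixel point to be
    corrected at its center), pick the target pixel block and return its
    pixel feature, which can serve as the corrected pixel value."""
    area = np.asarray(area, dtype=np.float64)
    half = area.shape[0] // 2
    # Two pixel block sets; the line through the centers of each pair of
    # blocks passes through the central pixel point to be corrected.
    block_sets = [
        (area[:, :half], area[:, half + 1:]),   # left / right strips
        (area[:half, :], area[half + 1:, :]),   # top / bottom strips
    ]
    features = [(a.mean(), b.mean()) for a, b in block_sets]
    gradients = [abs(fa - fb) for fa, fb in features]
    fa, fb = features[int(np.argmax(gradients))]  # set with max gradient
    return min(fa, fb)  # block with the smaller feature is the target
```

For a dark dead pixel sitting on the boundary between a dark region and a bright (e.g., overexposed) region, the maximum-gradient set is the one straddling the boundary, and the smaller-feature block lies on the dark side, which is the side the dead pixel should resemble.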
For example, a second image area may be obtained from the original image, where the center pixel point of the second image area is the pixel point to be corrected, and the size of the second image area may be smaller than that of the first image area. The second image area is used for determining the pixel points to be corrected (that is, each pixel point in the second image area is taken as a pixel point to be corrected), the first image area is used for determining the target pixel block, and the target pixel block is used for correcting the pixel points of the second image area (all pixel points of the second image area). The pixel value of each pixel point in the second image area is corrected based on the pixel feature corresponding to the target pixel block to obtain a corrected pixel value of that pixel point, where the pixel feature corresponding to the target pixel block may be the average value, median value, maximum value, or minimum value of the pixel values of all pixel points in the target pixel block.
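A minimal sketch of correcting the second image area with the selected target pixel block's feature follows; writing the feature directly as the corrected value is an assumed simplification (the embodiment only requires the correction to be based on that feature, e.g., possibly via interpolation or fusion).

```python
import numpy as np

def correct_second_area(img, i, j, target_feature, half2=1):
    """Replace every pixel of the second image area (a (2*half2+1) x
    (2*half2+1) window centered on the pixel point to be corrected at
    (i, j)) with the target pixel block's feature. half2=1 gives a 3x3
    second image area, smaller than the first image area."""
    out = np.asarray(img, dtype=np.float64).copy()
    out[i - half2:i + half2 + 1, j - half2:j + half2 + 1] = target_feature
    return out
```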
For example, the size of the first image area may be determined based on a target gain value corresponding to the original image, where the target gain value may be the gain value employed when the original image is acquired by the sensor; the size of the second image area may likewise be determined based on the target gain value.
For example, a target gain value interval to which the target gain value belongs is determined, and a target intensity control parameter corresponding to the target gain value interval is obtained by querying a first mapping relationship, where the first mapping relationship includes the correspondence between gain value intervals and intensity control parameters: the larger the gain value interval, the larger the corresponding intensity control parameter. A first target size corresponding to the target intensity control parameter is then obtained by querying a second mapping relationship, where the first target size is the size of the first image area, and a second target size corresponding to the target intensity control parameter is obtained by querying a third mapping relationship, where the second target size is the size of the second image area and is smaller than the first target size. The second mapping relationship includes the correspondence between intensity control parameters and first sizes (the larger the intensity control parameter, the larger the corresponding first size), and the third mapping relationship includes the correspondence between intensity control parameters and second sizes (the larger the intensity control parameter, the larger the corresponding second size).
Obtaining the pixel points to be corrected in the original image may include, but is not limited to: for a pixel point in the original image, obtaining the first image area corresponding to the pixel point from the original image, selecting adjacent pixel points corresponding to the pixel point from the first image area, and determining an upper limit pixel value and a lower limit pixel value corresponding to the pixel point based on the pixel values of the adjacent pixel points. If the pixel value of the pixel point lies between the upper limit pixel value and the lower limit pixel value, it is determined that the pixel point is not a pixel point to be corrected, that is, not an abnormal pixel point belonging to a dead pixel block. If the pixel value of the pixel point does not lie between the upper limit pixel value and the lower limit pixel value, it is determined that the pixel point is a pixel point to be corrected, that is, an abnormal pixel point belonging to a dead pixel block; alternatively, in that case, whether the pixel point is a pixel point to be corrected may be further determined based on an auxiliary feature corresponding to the first image area.
For example, if the auxiliary feature includes the pixel value of each pixel point in the first image area, determining based on the auxiliary feature whether the pixel point is a pixel point to be corrected may include, but is not limited to: if the pixel value of the pixel point is less than a first threshold and an overexposed area exists in the first image area, determining that the pixel point is a pixel point to be corrected, that is, an abnormal pixel point belonging to a dead pixel block; if the pixel value of the pixel point is not less than the first threshold and/or no overexposed area exists in the first image area, determining that the pixel point is not a pixel point to be corrected, that is, not an abnormal pixel point belonging to a dead pixel block.
If the pixel value of every pixel point in the first image area is smaller than a second threshold (the second threshold being larger than the first threshold), no overexposed area exists in the first image area; if the pixel value of at least one pixel point in the first image area is not smaller than the second threshold, an overexposed area exists in the first image area.
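The two-part rule above can be sketched as a simple predicate; the thresholds t1 and t2 are assumed empirical inputs, with t2 > t1:

```python
import numpy as np

def is_pixel_to_correct(pixel_value, first_area, t1, t2):
    """A pixel point is an abnormal pixel of a dead pixel block only if
    its value is below the first threshold t1 AND the first image area
    contains an overexposed area, i.e., at least one pixel value in the
    area is not smaller than the second threshold t2 (t2 > t1)."""
    overexposed = bool((np.asarray(first_area) >= t2).any())
    return pixel_value < t1 and overexposed
```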
As can be seen from the above technical solutions, in the embodiments of the present application, the first image area is divided into K pixel blocks, a target pixel block is selected from the K pixel blocks based on the pixel feature corresponding to each pixel block, and the pixel value of the pixel point to be corrected is corrected based on the pixel feature corresponding to the target pixel block, so as to obtain a corrected image. By correcting the pixel values of the pixel points to be corrected (i.e., the dead pixels), the image quality can be improved and the image definition increased; image details are not lost or degraded, the image integrity is relatively good, the problem of serious loss of detail information is solved, and the detail restoration degree and integrity of the image are effectively improved.
The image correction method according to the embodiment of the present application will be described below with reference to specific embodiments.
When an image acquisition device acquires an image of a target scene through a sensor, abnormal pixel points may exist on the image due to sensor design defects, limitations of the manufacturing process, or defects in the transmission link of the imaging system. The pixel values of these pixel points are obviously inconsistent with the pixel values of the surrounding pixel points, harming the definition and integrity of the image; such abnormal pixel points are called dead pixels.
Dead pixels in an image are generally classified into static dead pixels and dynamic dead pixels. A static dead pixel is a pixel point with a fixed position and an inaccurate pixel value caused by manufacturing process defects. A dynamic dead pixel is a pixel point with an unfixed position caused by manufacturing process defects or errors in the photoelectric signal conversion process; a dynamic dead pixel is normal within a certain brightness range, but when that brightness range is exceeded, its pixel value changes abnormally, and the difference between the dynamic dead pixel and the surrounding pixel points changes with the temperature and gain value of the sensor.
A dead pixel region formed by the adhesion of a large number of dynamic dead pixels can be called a dead pixel block (i.e., a dynamic dead pixel block). Dead pixel blocks are caused by sensor design defects or data transmission link design defects, and usually appear at the edge of a highly reflective region of the image. Compared with static and dynamic dead pixels, a dead pixel block is influenced less by its own dead pixel points and more by the surrounding pixel points; it has the uncertainty and randomness of dynamic dead pixels but covers a larger area, so it damages the definition, details, and integrity of the image more severely. An unstable area and many invalid neighborhood pixels are the main characteristics of dead pixel blocks.
Referring to fig. 3A, dead pixels (static or dynamic) are illustrated, and referring to fig. 3B, a dead pixel block is illustrated: the dead pixels are only a small number of isolated points, whereas the dead pixel block is formed by the adhesion of a large number of dynamic dead pixels. During detection, due to reflection from internal structures of the human body and other causes, an obvious overexposed area exists on the imaged image, dead pixels exist at the edge of the overexposed area, and a large number of dynamic dead pixels adhere to form dead pixel blocks; these dead pixel blocks degrade the quality of the image and interfere with the doctor's observation.
The existence of dead pixel blocks reduces image quality, affecting image definition, image details, and image integrity: the image definition decreases, image details are lost, and the image integrity is poor.
In view of the above, an embodiment of the application provides an image correction method that can correct the dead pixel blocks (i.e., dead pixel regions formed by the adhesion of a large number of dynamic dead pixels) in an original image to obtain a corrected image. The method can be applied to an image acquisition device (such as a camera), which performs image correction on the original image after acquiring it. The method can also be applied to a back-end device (such as a server, a host, an NVR, or a storage device): for example, after the image acquisition device acquires the original image, the original image is input to the back-end device, and the back-end device performs image correction on the original image.
Referring to fig. 3C, a flowchart of an image correction method is shown, and the method may include:
Step 301, acquiring an original image. The original image may be a Bayer-format image, or may be an image in another format, such as an RGB-format image or a YUV-format image; the format of the image is not limited here. Taking a Bayer-format image as an example, it simulates the human eye's sensitivity to color and converts grayscale information into color information by adopting a 1-red, 2-green, 1-blue arrangement.
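For illustration, the repeating 2×2 cell of a Bayer-format image can be sketched as follows; the RGGB phase shown is an assumption, as the actual phase depends on the sensor:

```python
def bayer_channel(i, j):
    """Return the color channel carried by the pixel at row i, column j
    of a Bayer-format image with an assumed RGGB cell: each 2x2 cell
    holds 1 red, 2 green, and 1 blue sample."""
    if i % 2 == 0:
        return "R" if j % 2 == 0 else "G"
    return "G" if j % 2 == 0 else "B"
```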
For example, in some application scenarios, an endoscope may be used to acquire an image of a specific type of tissue (e.g., nerve tissue or organ tissue, etc.) within a target object, and the image may be used as the original image.
For another example, in some application scenarios, a camera may be used to capture an image (e.g., a vehicle image, etc.) of a target scene (e.g., a highway, etc.), which may be taken as the original image.
For another example, in some application scenarios, a camera may be used to capture an image (e.g., a human body image, etc.) of a target scene (e.g., an access control system, etc.), which may be used as an original image.
Of course, the above is just a few examples and there is no limitation on the source of this original image.
Step 302, obtaining a target gain value corresponding to the original image, where the target gain value may be the gain value adopted when the image acquisition device acquires the original image through the sensor. For example, when the image acquisition device acquires the original image through the sensor, the brightness of the image can be controlled through parameters such as shutter, gain value, and aperture; this gain value can be used as the target gain value corresponding to the original image.
For example, since the area of a dead pixel block changes with the gain value, the target gain value corresponding to the original image may be obtained, the degree of the dead pixel block may be estimated from the target gain value, and the size of the dead pixel correction window may be estimated from the target gain value. In this way, dead pixel block detection and dead pixel block correction are linked through the target gain value; the detection and correction processes are described in the subsequent steps.
Step 303, locating the dead pixel blocks in the original image.
For example, since a dead pixel block is formed by the adhesion of a large number of dynamic dead pixels, the pixel information in its neighborhood is unreliable. To avoid missed detections, false detections, and the like, the detection precision of dead pixel blocks can be improved through dead pixel degree estimation, pixel constraint, auxiliary feature judgment, and other methods. For example, the following steps may be used to locate the dead pixel blocks in the original image (i.e., determine whether a pixel point in the original image belongs to a dead pixel block):
Step 3031, estimating the dead pixel degree. For example, for a pixel point in the original image, a first image area corresponding to the pixel point is obtained from the original image, and this first image area is the dead pixel degree estimation result.
The central pixel point of the first image area is the pixel point itself, and the size of the first image area is m×m, where the value of m is configured empirically and is not limited here. The value of m may be odd or even; the following examples use odd values. The value of m may lie between a minimum value and a maximum value, each configured empirically (e.g., a minimum of 5 or 7 and a maximum of 9 or 11). For example, taking a minimum of 7 and a maximum of 11, m satisfies 7 ≤ m ≤ 11, and when m is odd, m may be 7, 9, or 11.
In one possible embodiment, the size (m×m) of the first image area may be determined based on the target gain value corresponding to the original image. For example, assuming that m has K size values, all gain values may be divided into K gain value intervals, with gain value interval 1 corresponding to the 1st size value of m, gain value interval 2 corresponding to the 2nd size value of m, and so on. After the target gain value corresponding to the original image is obtained, the target gain value interval to which the target gain value belongs is determined, and the size value corresponding to that target gain value interval is taken as the size of the first image area.
For example, assume that m has 3 size values, 7, 9, and 11, and all gain values are divided into 3 gain value intervals, where gain value interval 1 (e.g., minimum gain value to gain value A) corresponds to size value 7, gain value interval 2 (e.g., gain value A to gain value B) corresponds to size value 9, and gain value interval 3 (e.g., gain value B to maximum gain value) corresponds to size value 11. After the target gain value is obtained: if it belongs to gain value interval 1, size value 7 is taken as the size of the first image area, i.e., the first image area is 7×7; if it belongs to gain value interval 2, size value 9 is taken, i.e., the first image area is 9×9; and if it belongs to gain value interval 3, size value 11 is taken, i.e., the first image area is 11×11.
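The interval lookup above can be sketched as follows; the boundary gain values 100 and 400 stand in for gain value A and gain value B and are placeholders, not values given by the embodiment:

```python
import bisect

def first_area_size(gain, boundaries=(100, 400), sizes=(7, 9, 11)):
    """Map a target gain value to the size m of the first image area:
    gains up to the first boundary give 7 (a 7x7 area), gains up to the
    second boundary give 9, and larger gains give 11."""
    return sizes[bisect.bisect_right(boundaries, gain)]
```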
In one possible embodiment, the size (m×m) of the first image area may be determined based on the target gain value corresponding to the original image as follows. Assuming that m has K size values, all gain values may be divided into K gain value intervals. A first mapping relationship may be preconfigured, including the correspondence between gain value intervals and intensity control parameters: the larger the gain value interval, the larger the corresponding intensity control parameter. The first mapping relationship may be a mapping table, a mapping function, or a mapping curve; it is not limited, as long as it includes the mapping between gain value intervals and intensity control parameters. A second mapping relationship may also be preconfigured, including the correspondence between intensity control parameters and first sizes: the larger the intensity control parameter, the larger the corresponding first size. The second mapping relationship may likewise be a mapping table, a mapping function, or a mapping curve. On this basis, the target gain value interval to which the target gain value belongs can be determined, the target intensity control parameter corresponding to the target gain value interval can be obtained by querying the first mapping relationship, and the first target size corresponding to the target intensity control parameter can then be obtained by querying the second mapping relationship, where the first target size is the size of the first image area.
For example, assume that m has 3 size values, 7, 9, and 11, and all gain values are divided into gain value interval 1 (e.g., minimum gain value to gain value A), gain value interval 2 (e.g., gain value A to gain value B), and gain value interval 3 (e.g., gain value B to maximum gain value). The first mapping relationship may include the correspondence between gain value interval 1 and intensity control parameter 1, between gain value interval 2 and intensity control parameter 2, and between gain value interval 3 and intensity control parameter 3, where intensity control parameter 3 is greater than intensity control parameter 2, which is greater than intensity control parameter 1. The second mapping relationship may include the correspondence between intensity control parameter 1 and size value 7 (i.e., a first size), between intensity control parameter 2 and size value 9, and between intensity control parameter 3 and size value 11. After the target gain value is obtained, if it belongs to gain value interval 1, intensity control parameter 1 corresponding to gain value interval 1 is obtained by querying the first mapping relationship; then, size value 7 corresponding to intensity control parameter 1 is obtained by querying the second mapping relationship, and size value 7 is taken as the size of the first image area, i.e., the first image area is a 7×7 image area.
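The two-step lookup (and the analogous third mapping for the second image area) can be sketched as follows; the second-size values 3, 5, and 7 are assumed for illustration, chosen only so that each second size is smaller than the corresponding first size:

```python
def target_sizes(gain_interval):
    """Given the index (1..3) of the target gain value interval, query
    the first mapping (interval -> intensity control parameter alpha),
    then the second and third mappings (alpha -> first/second size)."""
    first_mapping = {1: 1, 2: 2, 3: 3}     # gain value interval -> alpha
    second_mapping = {1: 7, 2: 9, 3: 11}   # alpha -> first target size
    third_mapping = {1: 3, 2: 5, 3: 7}     # alpha -> second target size (assumed)
    alpha = first_mapping[gain_interval]
    return second_mapping[alpha], third_mapping[alpha]
```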
Of course, the above are only examples of determining the size (m×m) of the first image area based on the target gain value; there is no limitation on this, as long as the size of the first image area is related to the target gain value.
For example, the intensity control parameter may be a dead pixel block degree estimation parameter, the intensity control parameter may be denoted as α, and the intensity control parameter may control the size of the first image area, and since the intensity control parameter is determined based on the target gain value, the size of the first image area is determined based on the target gain value.
For example, for a pixel point in an original image, a first image area with the pixel point as a center pixel point, that is, an image area with a size of m×m, is obtained from the original image based on the size of the first image area.
Step 3032, analyzing whether a pixel point in the original image belongs to a dead pixel block based on the pixel constraint.
For example, for a pixel point in the original image, adjacent pixel points corresponding to the pixel point may be selected from the first image area, and an upper limit pixel value and a lower limit pixel value corresponding to the pixel point may be determined based on the pixel values of the adjacent pixel points. If the pixel value of the pixel point lies between the upper limit pixel value and the lower limit pixel value, it is determined that the pixel point is not a pixel point to be corrected, i.e., it is a normal pixel point that does not belong to a dead pixel block. If the pixel value of the pixel point does not lie between the upper limit pixel value and the lower limit pixel value, it is determined that the pixel point is a pixel point to be corrected, i.e., an abnormal pixel point belonging to a dead pixel block, or it is further determined, based on the auxiliary feature corresponding to the first image area, whether the pixel point is a pixel point to be corrected.
For example, the value of a normal pixel point in the original image should fall within a range constrained by its adjacent pixel points, so dead pixels can be preliminarily screened out according to the pixel constraint method. For a pixel point P(i, j) in the original image, a 1×m adjacent pixel point array may be constructed with P(i, j) as the central pixel point, i.e., the pixel points in the same row as P(i, j) in the first image area form the adjacent pixel point array. Alternatively, an m×1 adjacent pixel point array may be constructed with P(i, j) as the central pixel point, i.e., the pixel points in the same column as P(i, j) in the first image area form the adjacent pixel point array.
Taking the 1×m adjacent pixel point array as an example, adjacent pixel points corresponding to the pixel point P(i, j) may be selected from the adjacent pixel point array, and the upper limit pixel value Pmax and the lower limit pixel value Pmin corresponding to P(i, j) are determined based on the pixel values of the adjacent pixel points. For example, the upper limit pixel value Pmax may be determined as shown in formula (1); however, formula (1) is merely an example, and the determination of Pmax is not limited to it, as long as Pmax is determined based on the pixel values of the adjacent pixel points. The lower limit pixel value Pmin may be determined as shown in formula (2); likewise, formula (2) is merely an example, and the determination of Pmin is not limited to it, as long as Pmin is determined based on the pixel values of the adjacent pixel points.
Assuming that the size of the first image area is 9×9, that is, m is 9, formula (1) can be converted into formula (3), i.e., the upper limit pixel value Pmax may be determined using formula (3), and formula (2) can be converted into formula (4), i.e., the lower limit pixel value Pmin may be determined using formula (4).
Pmax = MAX(2×P2 − P1, 2×P3 − P4, P2, P3)    formula (3)
Pmin = MIN(2×P2 − P1, 2×P3 − P4, P2, P3)    formula (4)
In formulas (3) and (4), P1, P2, P3, and P4 are the pixel values of the adjacent pixel points corresponding to the pixel point P(i, j); their positional relationship to P(i, j) is shown in fig. 4, i.e., P1 is the pixel value of the 4th adjacent pixel point to the left of P(i, j), P2 is the pixel value of the 2nd adjacent pixel point to the left of P(i, j), P3 is the pixel value of the 2nd adjacent pixel point to the right of P(i, j), and P4 is the pixel value of the 4th adjacent pixel point to the right of P(i, j).
After the upper limit pixel value Pmax and the lower limit pixel value Pmin are obtained, if the pixel value of the pixel point P(i, j) lies between Pmax and Pmin, it is determined that P(i, j) is a normal pixel point not belonging to a dead pixel block. If the pixel value of P(i, j) does not lie between Pmax and Pmin, it is determined that P(i, j) is a suspected dead pixel (whether it belongs to a dead pixel block cannot yet be confirmed), and step 3033 is executed to analyze, based on the auxiliary feature, whether the pixel point belongs to a dead pixel block.
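Formulas (3) and (4) and the resulting constraint check can be written directly as:

```python
def pixel_constraint_limits(p1, p2, p3, p4):
    """Upper and lower limit pixel values per formulas (3) and (4),
    where P1..P4 are the 4th-left, 2nd-left, 2nd-right, and 4th-right
    neighbors of P(i, j) in its 1x9 adjacent pixel point array."""
    p_max = max(2 * p2 - p1, 2 * p3 - p4, p2, p3)
    p_min = min(2 * p2 - p1, 2 * p3 - p4, p2, p3)
    return p_min, p_max

def is_suspected_dead_pixel(p, p1, p2, p3, p4):
    """A pixel whose value falls outside [Pmin, Pmax] is a suspected
    dead pixel and proceeds to the auxiliary-feature judgment (3033)."""
    p_min, p_max = pixel_constraint_limits(p1, p2, p3, p4)
    return not (p_min <= p <= p_max)
```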
Step 3033, analyzing whether the pixel points in the original image belong to the dead pixel blocks based on the auxiliary features.
For example, the auxiliary feature may include, but is not limited to, the pixel value of each pixel point in the first image area. If the pixel value of the pixel point P(i, j) is smaller than a first threshold (configured empirically) and an overexposed area exists in the first image area, P(i, j) is determined to be a pixel point to be corrected, i.e., an abnormal pixel point belonging to a dead pixel block. If the pixel value of P(i, j) is not smaller than the first threshold and/or no overexposed area exists in the first image area, P(i, j) is determined not to be a pixel point to be corrected, i.e., it is a normal pixel point that does not belong to a dead pixel block.
If the pixel value of every pixel point in the first image area is smaller than a second threshold (the second threshold being larger than the first threshold), no overexposed area exists in the first image area; if the pixel value of at least one pixel point in the first image area is not smaller than the second threshold, an overexposed area exists in the first image area.
For example, the auxiliary feature judgment serves as a supplement to the pixel constraint method, using scene characteristics, pixel features, and the like of dead pixel blocks as auxiliary features to improve the precision of dead pixel judgment. Dead pixel blocks appear at the edge of highly reflective areas, i.e., an overexposed area exists in the dead pixel neighborhood, while the pixel value of a dead pixel is far lower than the surrounding pixel values. Based on this, these two characteristics (an overexposed area exists in the dead pixel neighborhood, and the dead pixel value is far lower than the surrounding pixel values) can be used as an auxiliary judgment supplementing the pixel constraint method, detecting whether a pixel point belongs to a dead pixel block and effectively improving the accuracy of dead pixel block detection.
Regarding whether an overexposed area exists in the dead pixel neighborhood: if the pixel value of every pixel point in the first image area is smaller than a second threshold (configured empirically as a relatively large pixel value, i.e., the second threshold is larger than the first threshold), no overexposed area exists in the dead pixel neighborhood; if the pixel value of at least one pixel point in the first image area is not smaller than the second threshold, an overexposed area exists in the dead pixel neighborhood.
"The dead pixel value is far lower than the surrounding pixel values" means the following: if the pixel value of the pixel point P(i, j) is smaller than the first threshold (configured empirically; it may be a relatively small pixel value), the pixel point P(i, j) is far lower than the surrounding pixel values; if the pixel value of the pixel point P(i, j) is not smaller than the first threshold, the pixel point P(i, j) is not far lower than the surrounding pixel values.
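As an illustrative sketch (not part of the claimed embodiments), the auxiliary-feature judgment described above can be expressed as follows; the function name and the concrete threshold values are hypothetical, and in practice both thresholds are configured empirically:

```python
import numpy as np

def is_pixel_to_correct(first_image_area, pixel_value,
                        first_threshold=30, second_threshold=240):
    """A pixel point is judged to belong to a dead pixel block when its value
    is far below the surroundings (below the first threshold) AND an
    overexposed area exists in the first image area (some pixel value is not
    smaller than the second threshold)."""
    has_overexposed_area = bool((first_image_area >= second_threshold).any())
    return pixel_value < first_threshold and has_overexposed_area
```

With a first image area containing an overexposed pixel, a very dark center pixel is flagged, while a moderately bright one is not; without any overexposed pixel, nothing is flagged.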
At this point step 303 is completed: the pixel points to be corrected in the original image can be located, where a pixel point to be corrected is an abnormal pixel point belonging to a dead pixel block. After a pixel point to be corrected is found, the subsequent step 304 is executed for it. For example, each pixel point in the original image is traversed in turn; for the currently traversed pixel point, if it is not a pixel point to be corrected, traversal continues with the next pixel point, and if it is a pixel point to be corrected, step 304 is executed for it.
Through steps 3031, 3032 and 3033, the dead pixel block area can be effectively located and the pixel points to be corrected can be found, so that the dead pixel block (i.e., the pixel points to be corrected) can be corrected conveniently.
Step 304, performing dead pixel block correction on the dead pixel blocks in the original image.
By way of example, dead pixels are distributed in large numbers within a certain range and stick together into blocks, so the number of invalid pixels within a certain neighborhood range is relatively large and the information provided by pixels close to the dead pixels is unreliable. Normal pixel blocks exist in regions of the neighborhood farther from the correction center, and the detail information contained in these normal pixel blocks is similar to the detail information lost in the dead pixel block region. Therefore, a normal pixel block can be migrated to the dead pixel block region through whole-block migration and fusion, and pixel smoothness can be improved by interpolating and fusing with the original pixels of the dead pixel block region. Based on this principle, to perform dead pixel block correction on a dead pixel block in the original image, a pixel block selection direction may first be determined, a target pixel block selected based on that direction, and the pixel values of the dead pixel block interpolated and fused with the pixel values of the target pixel block, thereby correcting the dead pixel block. In this pixel migration mode, a pixel block far from the dead pixel block region undergoes whole-block pixel value migration and interpolation fusion with the original pixels, so that a more accurate correction value is obtained.
For example, the following steps may be used to perform dead pixel block correction on a dead pixel block in the original image:
Step 3041, determining the pixel migration direction. For example, for a pixel point to be corrected in the original image (the pixel point belongs to a dead pixel block), a first image area corresponding to the pixel point is obtained from the original image, the first image area is divided into K pixel blocks, and a target pixel block is selected from the K pixel blocks based on the pixel feature corresponding to each pixel block; the target pixel block is the pixel block in the pixel migration direction.
For example, after the pixel point to be corrected is obtained from the original image, a first image area whose center pixel point is the pixel point to be corrected and whose size is m×m may be obtained. Since the first image area has already been obtained in step 303, it can be reused in step 304.
The first image area may be divided into K pixel blocks, and the K pixel blocks form a plurality of pixel block sets, each including two pixel blocks. For example, the first image area may be divided into 4, 6, 8, or 10 pixel blocks; the number of pixel blocks is not limited. For each pixel block set, the line connecting the center pixel points of its two pixel blocks passes through the pixel point to be corrected. Alternatively, the connecting line may not pass through the pixel point to be corrected, as long as the perpendicular distance between the pixel point to be corrected and the connecting line is smaller than a preset threshold; this is not limited.
For example, taking division into 8 pixel blocks as an example, fig. 5 shows an example of the first image area: the first image area is divided into 8 pixel blocks using an eight-neighborhood method, and the center pixel point of the first image area is the pixel point to be corrected. The 8 pixel blocks are Block1 through Block8, and they may be of the same size or of different sizes. The 8 pixel blocks may have overlapping pixels (e.g., Block1 and Block2 overlap, Block1 and Block4 overlap, Block2 and Block3 overlap, and so on; such an overlapping relationship is not shown in fig. 5), or may have no overlapping pixels (the non-overlapping case is shown in fig. 5). The size of each pixel block is not limited in this embodiment as long as it is smaller than the size of the first image area; for example, if the size of the first image area is 9×9, each pixel block may be 3×3 or 5×5.
Referring to fig. 5, Block1 and Block8 form pixel block set 1, Block2 and Block7 form pixel block set 2, Block3 and Block6 form pixel block set 3, and Block4 and Block5 form pixel block set 4. The first image area may include all of pixel block sets 1 through 4, or only some of them.
Obviously, the line connecting the center pixel points of Block1 and Block8 in pixel block set 1 passes through the pixel point to be corrected, and the same holds for Block2 and Block7 in pixel block set 2, Block3 and Block6 in pixel block set 3, and Block4 and Block5 in pixel block set 4.
For each pixel block set, the gradient value corresponding to the set may be determined based on the pixel features corresponding to its two pixel blocks. The pixel feature corresponding to a pixel block may be the average, median, maximum, or minimum of the pixel values of all pixel points in the pixel block; these are, of course, only a few examples. Taking the average as an example, the averages P1 through P8 of the pixel values of all pixel points in Block1 through Block8, respectively, are calculated.
Then, based on the pixel features corresponding to Block1 and Block8 in pixel block set 1, the gradient value corresponding to pixel block set 1 (i.e., the pixel block set in the northwest direction) is determined as Gradient_WN = |P1 − P8|. Based on the pixel features corresponding to Block2 and Block7 in pixel block set 2, the gradient value corresponding to pixel block set 2 (i.e., the pixel block set in the vertical direction) is determined as Gradient_V = |P2 − P7|. Based on the pixel features corresponding to Block3 and Block6 in pixel block set 3, the gradient value corresponding to pixel block set 3 (i.e., the pixel block set in the northeast direction) is determined as Gradient_EN = |P3 − P6|. Based on the pixel features corresponding to Block4 and Block5 in pixel block set 4, the gradient value corresponding to pixel block set 4 (i.e., the pixel block set in the horizontal direction) is determined as Gradient_H = |P5 − P4|.
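The four directional gradient values can be sketched as follows, taking the average as the pixel feature; representing the eight blocks of fig. 5 as a dictionary keyed by block index is an illustrative assumption:

```python
import numpy as np

def directional_gradients(blocks):
    """blocks: dict mapping block index 1..8 (Block1..Block8 of fig. 5) to a
    pixel array. Returns the four gradient values Gradient_WN, Gradient_V,
    Gradient_EN, Gradient_H, plus the per-block averages P1..P8."""
    p = {k: float(np.mean(v)) for k, v in blocks.items()}
    gradients = {
        "WN": abs(p[1] - p[8]),  # northwest: Block1 vs Block8
        "V":  abs(p[2] - p[7]),  # vertical:  Block2 vs Block7
        "EN": abs(p[3] - p[6]),  # northeast: Block3 vs Block6
        "H":  abs(p[5] - p[4]),  # horizontal: Block5 vs Block4
    }
    return gradients, p
```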
For example, based on the gradient value corresponding to each pixel block set, the pixel block set corresponding to the maximum gradient value may be selected, and the pixel block with the smaller pixel feature within that set is selected as the target pixel block; the target pixel block corresponding to the pixel point to be corrected is thus successfully selected. The target pixel block is the pixel block with the greatest similarity to the pixel point to be corrected, that is, the pixel block with the smaller pixel feature in the pixel block set corresponding to the maximum gradient value is the pixel block most similar to the pixel point to be corrected.
For example, gradient H is a horizontal Gradient (Gradient value corresponding to a pixel block set in the horizontal direction), gradient V is a vertical Gradient (Gradient value corresponding to a pixel block set in the vertical direction), gradient EN is a northeast Gradient (Gradient value corresponding to a pixel block set in the northeast direction), gradient WN is a northwest Gradient (Gradient value corresponding to a pixel block set in the northwest direction), and based on this, the maximum Gradient value can be expressed as Graident max=MAX(GraidentH,GraidentV,GraidentEN,GraidentWN.
If the maximum gradient value is Gradient_H, the target pixel block is selected from pixel block set 4 in the horizontal direction: when P5 > P4, the target pixel block is Block4; otherwise, it is Block5.
If the maximum gradient value is Gradient_V, the target pixel block is selected from pixel block set 2 in the vertical direction: when P2 > P7, the target pixel block is Block7; otherwise, it is Block2.
If the maximum gradient value is Gradient_EN, the target pixel block is selected from pixel block set 3 in the northeast direction: when P3 > P6, the target pixel block is Block6; otherwise, it is Block3.
If the maximum gradient value is Gradient_WN, the target pixel block is selected from pixel block set 1 in the northwest direction: when P1 > P8, the target pixel block is Block8; otherwise, it is Block1.
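The four selection cases above reduce to one rule: take the pixel block set with the maximum gradient value, then the block with the smaller feature within it. A sketch follows; the dictionary conventions (direction labels, block indices 1 to 8) are illustrative assumptions:

```python
def select_target_block(gradients, means):
    """gradients: {"WN": ..., "V": ..., "EN": ..., "H": ...};
    means: {1: P1, ..., 8: P8}. Pick the pixel block set with the maximum
    gradient value, then return the index of the block with the smaller mean
    in that set, i.e. the block most similar to the pixel point to be
    corrected."""
    pairs = {"WN": (1, 8), "V": (2, 7), "EN": (3, 6), "H": (4, 5)}
    direction = max(gradients, key=gradients.get)
    a, b = pairs[direction]
    return a if means[a] <= means[b] else b
```

For instance, when the northwest gradient dominates and P1 < P8, Block1 is returned, matching the Gradient_WN case above.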
So far, the target pixel block corresponding to the pixel point to be corrected is successfully determined from all the pixel blocks.
Step 3042, pixel interpolation fusion. For example, when correcting the pixel value of the pixel point to be corrected, a second image area may be obtained from the original image, and the pixel value of each pixel point in the second image area may be corrected based on the pixel feature corresponding to the target pixel block. That is, the second image area can be understood as a dead pixel block, and the pixel value of each pixel point in the dead pixel block is corrected.
The second image area is used to determine the pixel points to be corrected, the first image area is used to determine the target pixel block (see step 3041 for determining the target pixel block based on the first image area), and the target pixel block is used to correct the pixel points of the second image area, that is, the pixel value of each pixel point in the second image area is corrected based on the pixel feature corresponding to the target pixel block.
The center pixel point of the second image area is the pixel point to be corrected, and the size of the second image area is n×n, where n is configured empirically and may be smaller than m; this is not limited. The value of n may lie between a minimum value and a maximum value, both configured empirically (e.g., a minimum of 2 or 3 and a maximum of 4 or 5). For example, with a minimum of 3 and a maximum of 5, n satisfies 3 ≤ n ≤ 5, i.e., n may be 3, 4, or 5.
The size (n×n) of the second image area may be determined based on a target gain value corresponding to the original image. For example, assuming n has K size values, all gain values are divided into K gain value intervals; gain value interval 1 corresponds to the 1st size value of n, and so on. After the target gain value corresponding to the original image is obtained, the target gain value interval to which it belongs is determined, and the size value corresponding to that interval is taken as the size of the second image area.
In another example, the size (n×n) of the second image area may be determined based on the target gain value as follows. Assuming n has K size values, all gain values are divided into K gain value intervals. A first mapping relationship may be preconfigured, which includes the correspondence between gain value intervals and intensity control parameters. A third mapping relationship may also be preconfigured, which includes the correspondence between intensity control parameters and second sizes: the larger the intensity control parameter, the larger the corresponding second size. The third mapping relationship may be a mapping table, a mapping function, or a mapping curve; this is not limited. The target gain value interval to which the target gain value belongs is determined, the target intensity control parameter corresponding to that interval is obtained by querying the first mapping relationship, and the second target size corresponding to the target intensity control parameter is obtained by querying the third mapping relationship. The second target size is the size of the second image area and is smaller than the first target size of the first image area.
For example, assume n has 3 size values, such as 3, 4, and 5, and all gain values are divided into gain value intervals 1, 2, and 3. The first mapping relationship may include the correspondences between gain value interval 1 and intensity control parameter 1, between gain value interval 2 and intensity control parameter 2, and between gain value interval 3 and intensity control parameter 3; the third mapping relationship may include the correspondences between intensity control parameter 1 and size value 3 (i.e., the second size), between intensity control parameter 2 and size value 4, and between intensity control parameter 3 and size value 5. After the target gain value is obtained, if it belongs to gain value interval 1, intensity control parameter 1 is obtained by querying the first mapping relationship, size value 3 is obtained by querying the third mapping relationship, and size value 3 is taken as the size of the second image area, i.e., the second image area is a 3×3 image area.
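The two-table lookup described above can be sketched as follows; the interval boundaries and table contents are hypothetical example values, not values from this embodiment:

```python
def second_region_size(target_gain):
    """Determine the size n of the second image area from the target gain
    value: gain value -> gain value interval -> intensity control parameter
    (first mapping) -> size n (third mapping)."""
    gain_breaks = (8.0, 16.0)        # hypothetical interval boundaries
    first_map = {1: 1, 2: 2, 3: 3}   # gain value interval -> intensity control parameter
    third_map = {1: 3, 2: 4, 3: 5}   # intensity control parameter -> second size n
    interval = 1 + sum(target_gain >= b for b in gain_breaks)
    return third_map[first_map[interval]]
```

Larger gain values thus land in higher intervals and yield a larger correction area, reflecting the gain linkage described above.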
For a pixel to be corrected in the original image, a second image area with the pixel to be corrected as a central pixel may be obtained from the original image based on a size (n×n) of the second image area.
For example, the pixel feature corresponding to the target pixel block may be determined; it may be the average, median, maximum, or minimum of the pixel values of all pixel points in the target pixel block. These are, of course, only a few examples, and the pixel feature is not limited; the median of all pixel points in the target pixel block is taken as an example here. Assuming the original image is in Bayer format, the target pixel block (denoted Block_i) may include pixel values of the B channel, Gb channel, Gr channel, and R channel. Based on this, the median of the pixel values of all B channels within the target pixel block may be calculated as B_val = median(Block_i(B)); similarly, Gb_val = median(Block_i(Gb)), Gr_val = median(Block_i(Gr)), and R_val = median(Block_i(R)).
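As a sketch, the per-channel medians can be computed by de-interleaving the Bayer mosaic. An RGGB phase (R at the top-left of the block) is assumed here for illustration; the actual phase depends on the sensor:

```python
import numpy as np

def bayer_channel_medians(block):
    """Return (R_val, Gr_val, Gb_val, B_val), the medians of the four Bayer
    channels of a block, assuming an RGGB layout."""
    r  = block[0::2, 0::2]   # R  samples: even rows, even columns
    gr = block[0::2, 1::2]   # Gr samples: even rows, odd columns
    gb = block[1::2, 0::2]   # Gb samples: odd rows, even columns
    b  = block[1::2, 1::2]   # B  samples: odd rows, odd columns
    return (float(np.median(r)), float(np.median(gr)),
            float(np.median(gb)), float(np.median(b)))
```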
For example, after the pixel features corresponding to the target pixel block are obtained (e.g., B_val, Gb_val, Gr_val, R_val), the pixel value of each pixel point in the second image area may be corrected based on them: for each pixel point in the second image area, the corrected pixel value is determined based on the pixel feature corresponding to the target pixel block and the pixel value of that pixel point.
For example, after determining the pixel to be corrected, a second image area corresponding to the pixel to be corrected may be obtained from the original image, and the second image area may be used as a dead pixel block, and the pixel value of each pixel in the dead pixel block may be corrected, so that the dead pixel block is corrected by using an area correction method.
Assuming the coordinates of the pixel point to be corrected are (i, j) and the size of the second image area is n×n, the coordinate range of the second image area may be from (i − ⌊n/2⌋, j − ⌊n/2⌋) to (i + ⌊n/2⌋, j + ⌊n/2⌋). Each pixel point within this coordinate range, including the pixel point to be corrected (i, j) itself, may be corrected.
For example, for each pixel point in the second image area, the corrected pixel value of that pixel point may be determined using a correction formula in which: the result is the corrected pixel value of the pixel point; P represents the pixel value of the pixel point before correction, i.e., its pixel value in the original image; val represents the pixel feature corresponding to the target pixel block (such as the median of all its pixel points); and 255 represents the maximum pixel value, which may be set to other values.
For example, for each pixel point in the second image area, the corrected pixel value is determined using the above formula with val set according to the channel of the pixel point: for a B-channel pixel point, val is B_val, the median of the pixel values of all B channels within the target pixel block; for a Gb-channel pixel point, val is Gb_val; for a Gr-channel pixel point, val is Gr_val; and for an R-channel pixel point, val is R_val.
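The per-channel correction loop can be sketched as follows. The embodiment's exact fusion formula is not reproduced here (it appears only as an image in the source), so this sketch substitutes a simple hypothetical blend P' = (P + val) / 2, capped at the maximum pixel value 255; an RGGB Bayer phase is also an assumption:

```python
import numpy as np

def correct_second_image_area(image, i, j, n, channel_medians):
    """Correct every pixel point of the n x n second image area centered on
    (i, j). channel_medians maps 'R'/'Gr'/'Gb'/'B' to the target pixel
    block's median for that channel (R_val, Gr_val, Gb_val, B_val)."""
    half = n // 2
    out = image.astype(np.float64).copy()
    for r in range(i - half, i + half + 1):
        for c in range(j - half, j + half + 1):
            channel = ("R", "Gr", "Gb", "B")[(r % 2) * 2 + (c % 2)]
            val = channel_medians[channel]
            # Hypothetical stand-in for the embodiment's fusion formula.
            out[r, c] = min(255.0, (image[r, c] + val) / 2.0)
    return out
```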
Thus, the correction of the pixel point to be corrected is completed; that is, with the pixel point to be corrected as the center pixel point of the dead pixel block (i.e., the second image area), the correction process of the dead pixel block is completed.
After the correction of the pixel point to be corrected is completed, the next pixel point is traversed, step 303 is executed for it, and steps 303 and 304 are repeated until the traversal of the original image is completed.
Step 305, outputting the corrected image. For example, for each pixel in the original image, if the pixel is corrected, the corrected image includes the corrected pixel value of the pixel, and if the pixel is not corrected, the corrected image includes the original pixel value of the pixel. Referring to fig. 6A, an example of an original image before correction is shown, and referring to fig. 6B, an example of an image after correction is shown.
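Putting steps 303 through 305 together, the traversal can be sketched at a high level as below; detect and correct are placeholders standing in for the detection of step 303 and the dead pixel block correction of step 304:

```python
import numpy as np

def correct_image(original, detect, correct):
    """Traverse every pixel point of the original image; when detect() flags
    a pixel point to be corrected, apply correct() to the working copy.
    Unflagged pixel points keep their original pixel values (step 305)."""
    corrected = original.copy()
    height, width = original.shape
    for i in range(height):
        for j in range(width):
            if detect(original, i, j):
                corrected = correct(corrected, i, j)
    return corrected
```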
As can be seen from the above technical solutions, in the embodiments of the present application, the first image area is divided into K pixel blocks, a target pixel block is selected from the K pixel blocks based on the pixel feature corresponding to each pixel block, and the pixel value of the pixel point to be corrected is corrected based on the pixel feature corresponding to the target pixel block, so as to obtain a corrected image. By correcting the pixel value of the pixel point to be corrected (i.e., the dead pixel), the image quality and definition can be improved without losing or reducing image detail; image integrity is relatively good, the problem of severe loss of detail information is avoided, and the detail restoration degree and integrity of the image are improved. The method adopts gain linkage to address the risk of dead pixel block area expansion caused by sensor gain changes, and proposes pixel migration with interpolation fusion to address inaccurate interpolation data and the problems caused by interpolation from a single data source.
Based on the same application concept as the above method, an embodiment of the application provides an image correction method, which may include acquiring a first image area from an original image, where the first image area includes a pixel point to be corrected and an overexposed image area, or the first image area includes the pixel point to be corrected and is connected to the overexposed image area. For the process of acquiring the first image area, refer to steps 301, 302 and 303; it is not repeated here.
A target pixel block is extracted from the first image area, where the target pixel block is a pixel block in the first image area, and its selection is related to the gradient values of the pixel blocks and the pixel values of the pixel blocks. For example, a block pixel value is obtained for each pixel block in the first image area, represented by any one of the average, median, maximum and minimum; gradient values in the horizontal, vertical and diagonal directions are obtained from the block pixel values; and the pixel block corresponding to the minimum block pixel value in the direction of the maximum gradient value is taken as the target pixel block, the target pixel block being the pixel block with the greatest similarity to the pixel point to be corrected. For the process of obtaining the target pixel block, refer to step 3041; the detailed description is not repeated here.
The pixel value of the pixel point to be corrected is corrected based on the pixel feature of the target pixel block to obtain a corrected pixel value. Specifically, a block correction size is obtained, the coordinate range of the area to be corrected is determined based on the block correction size and the pixel point to be corrected, and the pixel values of all pixels within that coordinate range are corrected based on the pixel feature of the target pixel block to obtain the corrected pixel values, where the pixel feature of the target pixel block is represented by any one of the average, median, maximum and minimum of all pixel points in the target pixel block. For the process of correcting the pixel value of the pixel point to be corrected, refer to step 3042; the detailed description is not repeated here. The area to be corrected may be the second image area of the above embodiment.
A corrected image is generated based on the corrected pixel values; for this process, see step 305.
Based on the same application concept as the above method, an embodiment of the present application provides an image correction device. Fig. 7 is a schematic structural diagram of the image correction device, and the device may include:
The acquisition module 71 is configured to acquire a pixel to be corrected in an original image, and acquire a first image area corresponding to the pixel to be corrected from the original image, wherein a center pixel of the first image area is the pixel to be corrected;
The processing module 72 is configured to divide the first image area into K pixel blocks, select a target pixel block from the K pixel blocks based on the pixel feature corresponding to each pixel block, and correct the pixel value of the pixel point to be corrected based on the pixel feature corresponding to the target pixel block to obtain a corrected pixel value of the pixel point to be corrected, where the target pixel block is the pixel block with the greatest similarity to the pixel point to be corrected;
a generating module 73, configured to generate a corrected image based on the corrected pixel values of the pixel points to be corrected.
The processing module 72 is specifically configured, when selecting a target pixel block from the K pixel blocks based on the pixel feature corresponding to each pixel block, to: for each pixel block set, determine the gradient value corresponding to the pixel block set based on the pixel features corresponding to the two pixel blocks in the set, where the pixel feature corresponding to a pixel block is the average, median, maximum, or minimum of the pixel values of all pixel points in the pixel block; and, based on the gradient value corresponding to each pixel block set, select the pixel block with the smaller pixel feature in the pixel block set corresponding to the maximum gradient value as the target pixel block.
The processing module 72 is specifically configured, when obtaining the corrected pixel value of the pixel point to be corrected based on the pixel feature corresponding to the target pixel block, to: obtain a second image area from the original image, where the size of the second image area is smaller than the size of the first image area, the second image area is used to determine the pixel points to be corrected, the first image area is used to determine the target pixel block, and the target pixel block is used to correct the pixel points of the second image area; and, for each pixel point in the second image area, correct the pixel value of the pixel point based on the pixel feature corresponding to the target pixel block to obtain the corrected pixel value of the pixel point, where the pixel feature corresponding to the target pixel block is the average, median, maximum, or minimum of the pixel values of all pixel points in the target pixel block.
The size of the first image area is determined based on a target gain value corresponding to the original image, wherein the target gain value is a gain value adopted when the original image is acquired through a sensor, and the size of the second image area is determined based on the target gain value.
The obtaining module 71 is further configured to: determine the target gain value interval to which the target gain value belongs; obtain the target intensity control parameter corresponding to the target gain value interval by querying a first mapping relationship, where the first mapping relationship includes the correspondence between gain value intervals and intensity control parameters, and the larger the gain value interval, the larger the corresponding intensity control parameter; obtain a first target size corresponding to the target intensity control parameter by querying a second mapping relationship, where the first target size is the size of the first image area, the second mapping relationship includes the correspondence between intensity control parameters and first sizes, and the larger the intensity control parameter, the larger the corresponding first size; and obtain a second target size corresponding to the target intensity control parameter by querying a third mapping relationship, where the second target size is the size of the second image area and is smaller than the first target size, the third mapping relationship includes the correspondence between intensity control parameters and second sizes, and the larger the intensity control parameter, the larger the corresponding second size.
The obtaining module 71 is specifically configured, when obtaining a pixel point to be corrected in the original image, to: for a pixel point in the original image, obtain the first image area corresponding to the pixel point from the original image; select adjacent pixel points corresponding to the pixel point from the first image area, and determine an upper limit pixel value and a lower limit pixel value corresponding to the pixel point based on the pixel values of the adjacent pixel points; if the pixel value of the pixel point is located between the upper limit pixel value and the lower limit pixel value, determine that the pixel point is not a pixel point to be corrected; and if the pixel value of the pixel point is not located between the upper limit pixel value and the lower limit pixel value, determine, based on the auxiliary feature corresponding to the first image area, whether the pixel point is a pixel point to be corrected.
The obtaining module 71 is specifically configured, when determining based on the auxiliary feature corresponding to the first image area whether the pixel point is a pixel point to be corrected, to: where the auxiliary feature includes the pixel value of each pixel point in the first image area, determine that the pixel point is a pixel point to be corrected if the pixel value of the pixel point is smaller than a first threshold and an overexposed area exists in the first image area; and determine that the pixel point is not a pixel point to be corrected if the pixel value of the pixel point is not smaller than the first threshold and/or no overexposed area exists in the first image area. If the pixel value of each pixel point in the first image area is smaller than a second threshold (the second threshold being greater than the first threshold), no overexposed area exists in the first image area; if the pixel value of at least one pixel point in the first image area is not smaller than the second threshold, an overexposed area exists in the first image area.
Based on the same application concept as the method, an embodiment of the application provides an image correction device, which comprises an acquisition module, an extraction module, a correction module and a generation module. The acquisition module is configured to acquire a first image area from an original image, where the first image area comprises a pixel point to be corrected and an overexposed image area, or the first image area comprises the pixel point to be corrected and adjoins the overexposed image area. The extraction module is configured to extract a target pixel block from the first image area, where the target pixel block is any pixel block in the first image area, and selection of the target pixel block is related to the gradient values of the pixel blocks and the pixel values of the pixel blocks. The correction module is configured to correct the pixel value of the pixel point to be corrected based on a pixel characteristic of the target pixel block to obtain a corrected pixel value. The generation module is configured to generate a corrected image based on the corrected pixel value.
The extraction module is specifically configured to, when extracting the target pixel block from the first image area, obtain a block pixel value of each pixel block in the first image area, where the block pixel value is represented by any one of the average value, the median value, the maximum value and the minimum value of the pixels in the block; obtain gradient values in the horizontal direction, the vertical direction and the diagonal direction from the block pixel values; and take the pixel block with the minimum block pixel value in the direction of the maximum gradient value as the target pixel block.
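The selection procedure above can be sketched as follows. Several details are assumptions for illustration: the first image area is split into a 3x3 grid of pixel blocks, the block pixel value is taken as the block mean, and each directional gradient is the absolute difference between the two outer block values along that direction through the center block:

```python
import numpy as np

def select_target_block(area):
    """Return the (row, col) grid index of the target pixel block in `area`."""
    h, w = area.shape
    bh, bw = h // 3, w // 3
    # Block pixel values: here the mean of each of the nine blocks
    # (the median, maximum or minimum could be used instead).
    block = np.array([[area[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw].mean()
                       for c in range(3)] for r in range(3)])
    # Lines of blocks through the center, one per direction.
    directions = {
        "horizontal": [(1, 0), (1, 1), (1, 2)],
        "vertical": [(0, 1), (1, 1), (2, 1)],
        "diagonal": [(0, 0), (1, 1), (2, 2)],
        "anti-diagonal": [(0, 2), (1, 1), (2, 0)],
    }
    # Gradient per direction: absolute difference of the two outer block values.
    gradients = {name: abs(block[line[0]] - block[line[2]])
                 for name, line in directions.items()}
    # Direction of the maximum gradient, then the block with the minimum
    # block pixel value along that direction.
    best_direction = max(gradients, key=gradients.get)
    return min(directions[best_direction], key=lambda rc: block[rc])
```

The intuition, as stated earlier in the description, is that the chosen block is the one offering the most reference information for the pixel point to be corrected: along the direction of strongest change, the block with the smallest value is the least likely to be contaminated by the overexposed region.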
When correcting the pixel value of the pixel point to be corrected based on the pixel characteristic of the target pixel block to obtain the corrected pixel value, the correction module is specifically configured to obtain a block correction size; determine a coordinate range of a region to be corrected based on the block correction size and the pixel point to be corrected; and correct the pixel values of all pixels within the coordinate range of the region to be corrected based on the pixel characteristic of the target pixel block, so as to obtain the corrected pixel values. The pixel characteristic of the target pixel block is represented by any one of the average value, the median value, the maximum value and the minimum value of the pixel points in the target pixel block.
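The correction step can be sketched as follows under stated assumptions: the region to be corrected is taken as a square of side `block_correction_size` centered on the pixel to be corrected, and the pixel characteristic is taken as the mean of the target pixel block:

```python
import numpy as np

def correct_region(image, target_block, center_row, center_col,
                   block_correction_size=3):
    """Return a copy of `image` with the region around the center pixel corrected."""
    # Pixel characteristic of the target pixel block; the median, maximum
    # or minimum could be used instead of the mean.
    characteristic = target_block.mean()
    half = block_correction_size // 2
    corrected = image.astype(float).copy()
    # Coordinate range of the region to be corrected, clipped to the image.
    r0 = max(center_row - half, 0)
    r1 = min(center_row + half + 1, image.shape[0])
    c0 = max(center_col - half, 0)
    c1 = min(center_col + half + 1, image.shape[1])
    # Every pixel in the coordinate range receives the corrected pixel value.
    corrected[r0:r1, c0:c1] = characteristic
    return corrected
```

With a block correction size of 3, a 3x3 neighborhood around the pixel to be corrected is replaced by the characteristic value, which is then used by the generation module to produce the corrected image.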
Based on the same application concept as the above method, an electronic device is provided in an embodiment of the present application, and referring to fig. 8, the electronic device includes a processor 81 and a machine-readable storage medium 82, where the machine-readable storage medium 82 stores machine executable instructions that can be executed by the processor 81, and the processor 81 is configured to execute the machine executable instructions to implement the image correction method disclosed in the above example of the present application.
Based on the same application concept as the above method, an embodiment of the present application further provides a machine-readable storage medium on which a plurality of computer instructions are stored; when executed by a processor, the computer instructions implement the image correction method disclosed in the above examples of the present application.
Wherein the machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device that can contain or store information, such as executable instructions, data, and the like. For example, the machine-readable storage medium may be RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (e.g., a hard drive), a solid state disk, any type of storage disk (e.g., an optical disk or DVD), a similar storage medium, or a combination thereof.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer entity or by an article of manufacture having some functionality. A typical implementation device is a computer, which may be in the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email device, game console, tablet computer, wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in the same piece or pieces of software and/or hardware when implementing the present application.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the application may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Moreover, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.