CN114627029B - Image processing method and device, computing equipment and computer readable storage medium - Google Patents
Info
- Publication number
- CN114627029B (application CN202210511646.0A)
- Authority
- CN
- China
- Prior art keywords
- peak
- interval
- value
- processed
- infrared image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000003672 processing method Methods 0.000 title claims abstract description 37
- 238000003860 storage Methods 0.000 title claims abstract description 11
- 238000012545 processing Methods 0.000 claims abstract description 119
- 238000000034 method Methods 0.000 claims description 76
- 238000012937 correction Methods 0.000 claims description 37
- 238000009826 distribution Methods 0.000 claims description 37
- 238000013507 mapping Methods 0.000 claims description 36
- 238000001914 filtration Methods 0.000 claims description 21
- 238000004891 communication Methods 0.000 claims description 7
- 238000012163 sequencing technique Methods 0.000 claims description 6
- 230000000694 effects Effects 0.000 abstract description 3
- 230000008447 perception Effects 0.000 abstract description 3
- 238000013473 artificial intelligence Methods 0.000 abstract description 2
- 238000010586 diagram Methods 0.000 description 22
- 238000004364 calculation method Methods 0.000 description 12
- 238000012544 monitoring process Methods 0.000 description 12
- 238000009499 grossing Methods 0.000 description 8
- 239000007789 gas Substances 0.000 description 7
- 230000000007 visual effect Effects 0.000 description 6
- 238000005516 engineering process Methods 0.000 description 5
- 230000006870 function Effects 0.000 description 4
- 230000000630 rising effect Effects 0.000 description 4
- 230000002159 abnormal effect Effects 0.000 description 3
- 230000001174 ascending effect Effects 0.000 description 3
- 230000009191 jumping Effects 0.000 description 3
- 230000005855 radiation Effects 0.000 description 3
- 238000009825 accumulation Methods 0.000 description 2
- 230000003044 adaptive effect Effects 0.000 description 2
- 238000012271 agricultural production Methods 0.000 description 2
- 238000009776 industrial production Methods 0.000 description 2
- 238000007781 pre-processing Methods 0.000 description 2
- 238000011160 research Methods 0.000 description 2
- 238000012935 Averaging Methods 0.000 description 1
- 238000004458 analytical method Methods 0.000 description 1
- 238000003491 array Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000012824 chemical production Methods 0.000 description 1
- 230000007423 decrease Effects 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 239000003814 drug Substances 0.000 description 1
- 238000004880 explosion Methods 0.000 description 1
- 238000005286 illumination Methods 0.000 description 1
- 238000003331 infrared imaging Methods 0.000 description 1
- 230000002093 peripheral effect Effects 0.000 description 1
- 230000001953 sensory effect Effects 0.000 description 1
- 230000003595 spectral effect Effects 0.000 description 1
- 230000006641 stabilisation Effects 0.000 description 1
- 238000011105 stabilization Methods 0.000 description 1
- 230000002123 temporal effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
- G06T5/92—Dynamic range modification of images or parts thereof based on global image properties
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/40—Image enhancement or restoration using histogram techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10048—Infrared image
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
- Studio Devices (AREA)
Abstract
The embodiments of the present application relate to the technical field of artificial intelligence, and in particular to an image processing method, an image processing device, a computing device and a storage medium. The specific implementation scheme is as follows: acquiring histogram information of an infrared image to be processed; processing the histogram information to determine a main peak position and a secondary peak position; determining an effective signal interval corresponding to the main peak according to the main peak position; determining an effective signal interval corresponding to the secondary peak according to the secondary peak position; determining a stable boundary of the main peak according to the frame identifier of the infrared image to be processed and the effective signal interval corresponding to the main peak; determining a stable boundary of the secondary peak according to the frame identifier of the infrared image to be processed and the effective signal interval corresponding to the secondary peak; and stretching the infrared image to be processed based on the main peak position, the secondary peak position, the stable boundary of the main peak and the stable boundary of the secondary peak to obtain a global scene stretched image of the infrared image to be processed. The embodiments of the present application reduce the influence of objects with a large temperature difference and improve the identification of targets and the perception effect of the image.
Description
Technical Field
The present invention relates to the field of artificial intelligence technology, and in particular, to an image processing method and apparatus, a computing device, and a computer-readable storage medium.
Background
In the fields of industrial and agricultural production, scientific research and medicine, special-purpose illumination appliances and high-temperature operation processes generate strong infrared radiation, and infrared monitoring technology can be adopted to monitor, track and mark targets. Taking industrial safety monitoring as an example, the demand for industrial safety monitoring technology has become increasingly urgent in recent years. Medium leakage accidents occur frequently in the field of chemical production, causing consequences such as explosions and fires and seriously threatening the safety of lives and property. For gas leaks, infrared monitoring of the gas is currently a relatively advanced method. Because most gases have characteristic features in the infrared band, a wide-spectral-range infrared engine can show a mass of black gas gushing out when a gas leak occurs, and different locations and temperatures can be monitored to determine the risk.
However, in the process of stretching the infrared image, the offset changes with the temperature of the infrared camera and the ambient temperature, so the infrared image cannot be displayed satisfactorily with a fixed stretching range. In addition, an object with a large temperature difference may exist in the infrared image; under the influence of such an object, the infrared image easily becomes too gray or too black, and objects other than it cannot be clearly displayed in the image.
Disclosure of Invention
In view of the above problems in the prior art, the present application provides an image processing method and apparatus, a computing device, and a computer-readable storage medium. The infrared image to be processed is discriminated to find the effective signal intervals of the main scene and of the object with a large temperature difference in the infrared image to be processed; the infrared image is processed using these effective signal intervals while keeping the information boundaries stable, so that a visual image fusing the main scene and the object with a large temperature difference is obtained, the influence of the object with a large temperature difference is reduced, and the identification of targets and the visual effect of the image are improved.
To achieve the above object, a first aspect of the present application provides an image processing method, including:
acquiring histogram information of an infrared image to be processed;
processing the histogram information to determine a main peak position and a secondary peak position;
determining an effective signal interval corresponding to a main peak according to the position of the main peak; determining an effective signal interval corresponding to the secondary peak according to the position of the secondary peak;
determining a stable boundary of the main peak according to the frame identifier of the infrared image to be processed and the effective signal interval corresponding to the main peak; determining a stable boundary of the secondary peak according to the frame identifier of the infrared image to be processed and the effective signal interval corresponding to the secondary peak;
and stretching the to-be-processed infrared image based on the main peak position, the secondary peak position, the stable boundary of the main peak and the stable boundary of the secondary peak to obtain a global scene stretched image of the to-be-processed infrared image.
As a possible implementation manner of the first aspect, processing the histogram information to determine a main peak position and a secondary peak position includes:
processing the histogram information to obtain each first peak position of the histogram information;
determining a full width half maximum interval for each of the first peak locations;
and determining the positions of the main peak and the secondary peak according to the full width half maximum interval of each first peak position.
As a possible implementation manner of the first aspect, processing the histogram information to obtain each first peak position of the histogram information includes:
carrying out mean value filtering processing on the histogram information;
carrying out interpolation processing on the information after the average filtering processing;
and obtaining each first peak position of the histogram information based on the information after the interpolation processing.
As a possible implementation manner of the first aspect, determining a full-width half maximum interval of each first peak position includes:
calculating a half-height difference value of the histogram information aiming at each first peak value position, wherein the half-height difference value is used for measuring the difference degree between the value of the histogram information and the half-peak value corresponding to the first peak value position;
processing the half-height difference value to obtain each second peak value position of the half-height difference value;
and determining a full-width half-maximum interval corresponding to each first peak position according to each second peak position.
As a possible implementation manner of the first aspect, determining, according to the respective second peak positions, a full-width-half-maximum interval corresponding to the respective first peak positions includes:
and if the value of the histogram information corresponding to the nth second peak position is greater than the value of the histogram information corresponding to the first peak position, and the value of the histogram information corresponding to the (n-1)th second peak position is less than the value of the histogram information corresponding to the first peak position, determining an interval between the nth second peak position and the (n-1)th second peak position as a full width half maximum interval corresponding to the first peak position.
As a possible implementation manner of the first aspect, determining the primary peak position and the secondary peak position according to the full-width-half-maximum interval of each first peak position includes:
accumulating the values of the histogram information in the full-width half maximum interval corresponding to each first peak value position to obtain an accumulated sum corresponding to each first peak value position;
sorting the accumulated sums corresponding to the first peak positions to obtain a sorting result from large to small;
and sequentially determining the first peak positions corresponding to the first two values in the sorting result as a main peak position and a secondary peak position.
As a possible implementation manner of the first aspect, determining an effective signal interval corresponding to a main peak according to the position of the main peak includes: obtaining an effective signal interval corresponding to the main peak according to the full width half maximum interval corresponding to the main peak position and a preset signal range threshold;
determining an effective signal interval corresponding to the secondary peak according to the position of the secondary peak, wherein the effective signal interval comprises the following steps: and obtaining an effective signal interval corresponding to the secondary peak according to the full width half maximum interval corresponding to the secondary peak position and a preset signal range threshold.
As a possible implementation manner of the first aspect, a stationary boundary of the main peak is determined according to the frame identifier of the infrared image to be processed and the effective signal interval corresponding to the main peak; determining a stationary boundary of the secondary peak according to the frame identifier of the infrared image to be processed and the effective signal interval corresponding to the secondary peak, including:
calculating a stationary boundary of a main peak and a stationary boundary of a secondary peak of a first frame in the infrared image to be processed according to a predetermined algorithm by using the initialized parameter offset, the initialized error reference, the initialized correction value and the initialized reference variable;
after calculating the stable boundary of the main peak and the stable boundary of the secondary peak of each frame in the infrared image to be processed, updating the correction value and the reference variable according to the preset algorithm;
calculating a stationary boundary of a main peak and a stationary boundary of a secondary peak of each frame in the infrared image to be processed according to the predetermined algorithm by using the initialized parameter offset, the initialized error reference, the updated correction value and the updated reference variable; wherein the updated correction value and the updated reference variable are updated according to the predetermined algorithm after calculating the stationary boundary of the main peak and the stationary boundary of the secondary peak of the previous frame.
As a possible implementation manner of the first aspect, the predetermined algorithm includes:
calculating a bias coefficient according to the reference variable;
calculating a scaling coefficient according to the correction value and the parameter offset;
calculating a gain factor according to the scaling factor and the error reference quantity;
calculating a stationary boundary according to the bias coefficient, the gain coefficient, the reference variable and the effective signal interval;
updating the correction value using the gain factor and the scaling factor;
updating the reference variable with the stationary boundary.
As a possible implementation manner of the first aspect, the stretching the to-be-processed infrared image based on the main peak position, the secondary peak position, the stationary boundary of the main peak, and the stationary boundary of the secondary peak to obtain a global scene stretched image of the to-be-processed infrared image includes:
setting a signal pre-distribution proportion corresponding to each interval in the histogram according to the position relation of the main peak position and the secondary peak position in the histogram;
the signal pre-allocation proportion comprises a dark part proportion, a main body proportion, a bright part proportion, a weak information coefficient and a secondary information coefficient; the main body proportion represents a signal pre-allocation proportion corresponding to an interval within a stationary boundary of the main peak; the secondary information coefficient represents a signal pre-allocation proportion corresponding to an interval within a stationary boundary of the secondary peak; the weak information coefficient represents a signal pre-distribution proportion corresponding to an interval between the position of the main peak and the position of the secondary peak, and is outside a stable boundary of the main peak and a stable boundary of the secondary peak; the dark part proportion represents a signal pre-distribution proportion corresponding to an interval with minimum brightness, wherein the interval is outside a stable boundary of the main peak and a stable boundary of the secondary peak; the bright part proportion represents a signal pre-distribution proportion corresponding to an interval with the maximum brightness, wherein the interval is outside a stable boundary of the main peak and a stable boundary of the secondary peak;
and according to the signal pre-distribution proportion corresponding to each interval, respectively carrying out pixel value mapping on the infrared image to be processed aiming at each interval in the histogram to obtain a global scene stretching image of the infrared image to be processed.
As a possible implementation manner of the first aspect, performing pixel value mapping on the infrared image to be processed according to the signal pre-allocation ratio corresponding to each interval in the histogram, to obtain a global scene stretched image of the infrared image to be processed, includes:
respectively obtaining pixel stretching values corresponding to the pixel points in each interval by utilizing the signal pre-distribution proportion corresponding to each interval;
respectively accumulating the pixel stretching value of the interval and the signal pre-distribution proportion corresponding to each interval with the brightness smaller than the interval to obtain the pixel mapping value corresponding to the pixel point in the interval;
and obtaining a global scene stretching image of the infrared image to be processed according to the pixel mapping value.
A second aspect of the present application provides an image processing apparatus comprising:
the acquisition unit is used for acquiring the histogram information of the infrared image to be processed;
the processing unit is used for processing the histogram information and determining the position of a main peak and the position of a secondary peak;
the first determining unit is used for determining an effective signal interval corresponding to a main peak according to the position of the main peak; determining an effective signal interval corresponding to the secondary peak according to the position of the secondary peak;
the second determining unit is used for determining a stable boundary of the main peak according to the frame identifier of the infrared image to be processed and the effective signal interval corresponding to the main peak; determining a stable boundary of the secondary peak according to the frame identifier of the infrared image to be processed and the effective signal interval corresponding to the secondary peak;
and the stretching unit is used for stretching the infrared image to be processed based on the main peak position, the secondary peak position, the stable boundary of the main peak and the stable boundary of the secondary peak to obtain a global scene stretched image of the infrared image to be processed.
As a possible implementation manner of the second aspect, the processing unit includes:
the processing subunit is used for processing the histogram information to obtain each first peak position of the histogram information;
a first determining subunit, configured to determine a full-width half-maximum interval of the respective first peak positions;
and the second determining subunit is used for determining the positions of the main peak and the secondary peak according to the full width half maximum interval of each first peak position.
As a possible implementation manner of the second aspect, the processing subunit is configured to:
carrying out mean value filtering processing on the histogram information;
carrying out interpolation processing on the information after the average filtering processing;
and obtaining each first peak position of the histogram information based on the information after the interpolation processing.
As a possible implementation manner of the second aspect, the first determining subunit is configured to:
calculating a half-height difference value of the histogram information aiming at each first peak value position, wherein the half-height difference value is used for measuring the difference degree between the value of the histogram information and the half-peak value corresponding to the first peak value position;
processing the half-height difference value to obtain each second peak value position of the half-height difference value;
and determining a full-width half-maximum interval corresponding to each first peak position according to each second peak position.
As a possible implementation manner of the second aspect, the first determining subunit is configured to:
and if the value of the histogram information corresponding to the nth second peak position is greater than the value of the histogram information corresponding to the first peak position, and the value of the histogram information corresponding to the (n-1)th second peak position is less than the value of the histogram information corresponding to the first peak position, determining an interval between the nth second peak position and the (n-1)th second peak position as a full width half maximum interval corresponding to the first peak position.
As a possible implementation manner of the second aspect, the second determining subunit is configured to:
accumulating the values of the histogram information in the full-width half maximum interval corresponding to each first peak value position to obtain an accumulated sum corresponding to each first peak value position;
sorting the accumulated sums corresponding to the first peak positions to obtain a sorting result from large to small;
and sequentially determining the first peak positions corresponding to the first two values in the sorting result as a main peak position and a secondary peak position.
As a possible implementation manner of the second aspect, the first determining unit is configured to: obtaining an effective signal interval corresponding to the main peak according to the full width half maximum interval corresponding to the main peak position and a preset signal range threshold;
the first determination unit is configured to: and obtaining an effective signal interval corresponding to the secondary peak according to the full width half maximum interval corresponding to the secondary peak position and a preset signal range threshold.
As a possible implementation manner of the second aspect, the second determining unit is configured to:
calculating a stationary boundary of a main peak and a stationary boundary of a secondary peak of a first frame in the infrared image to be processed according to a predetermined algorithm by using the initialized parameter offset, the initialized error reference, the initialized correction value and the initialized reference variable;
after calculating the stable boundary of the main peak and the stable boundary of the secondary peak of each frame in the infrared image to be processed, updating the correction value and the reference variable according to the preset algorithm;
calculating a stationary boundary of a main peak and a stationary boundary of a secondary peak of each frame in the infrared image to be processed according to the predetermined algorithm by using the initialized parameter offset, the initialized error reference, the updated correction value and the updated reference variable; wherein the updated correction value and the updated reference variable are updated according to the predetermined algorithm after calculating the stationary boundary of the main peak and the stationary boundary of the secondary peak of the previous frame.
As a possible implementation manner of the second aspect, the predetermined algorithm includes:
calculating a bias coefficient according to the reference variable;
calculating a scaling coefficient according to the correction value and the parameter offset;
calculating a gain factor according to the scaling factor and the error reference quantity;
calculating a stationary boundary according to the bias coefficient, the gain coefficient, the reference variable and the effective signal interval;
updating the correction value using the gain factor and the scaling factor;
updating the reference variable with the stationary boundary.
As a possible implementation manner of the second aspect, the stretching unit includes:
the setting subunit is used for setting the signal pre-distribution proportion corresponding to each interval in the histogram according to the position relation of the main peak position and the secondary peak position in the histogram;
the signal pre-allocation proportion comprises a dark part proportion, a main body proportion, a bright part proportion, a weak information coefficient and a secondary information coefficient; the main body proportion represents a signal pre-allocation proportion corresponding to an interval within a stationary boundary of the main peak; the secondary information coefficient represents a signal pre-allocation proportion corresponding to an interval within a stationary boundary of the secondary peak; the weak information coefficient represents a signal pre-distribution proportion corresponding to an interval between the position of the main peak and the position of the secondary peak, and is outside a stable boundary of the main peak and a stable boundary of the secondary peak; the dark part proportion represents a signal pre-distribution proportion corresponding to an interval with minimum brightness, wherein the interval is outside a stable boundary of the main peak and a stable boundary of the secondary peak; the bright part proportion represents a signal pre-distribution proportion corresponding to an interval with the maximum brightness, wherein the interval is outside a stable boundary of the main peak and a stable boundary of the secondary peak;
and the mapping subunit is configured to perform pixel value mapping on the infrared image to be processed respectively for each interval in the histogram according to the signal pre-allocation proportion corresponding to each interval, so as to obtain a global scene stretching image of the infrared image to be processed.
As a possible implementation manner of the second aspect, the mapping subunit is configured to:
respectively obtaining pixel stretching values corresponding to the pixel points in each interval by utilizing the signal pre-distribution proportion corresponding to each interval;
respectively accumulating the pixel stretching value of the interval and the signal pre-distribution proportion corresponding to each interval with the brightness smaller than the interval to obtain the pixel mapping value corresponding to the pixel point in the interval;
and obtaining a global scene stretching image of the infrared image to be processed according to the pixel mapping value.
A third aspect of the present application provides a computing device comprising:
a communication interface;
at least one processor coupled with the communication interface; and
at least one memory coupled to the processor and storing program instructions that, when executed by the at least one processor, cause the at least one processor to perform the method of any of the first aspects.
A fourth aspect of the present application provides a computer readable storage medium having stored thereon program instructions which, when executed by a computer, cause the computer to perform the method of any of the first aspects described above.
These and other aspects of the invention will be apparent from and elucidated with reference to the embodiment(s) described hereinafter.
Drawings
The various features and the connections between the various features of the present invention are further described below with reference to the attached figures. The figures are exemplary, some features are not shown to scale, and some of the figures may omit features that are conventional in the art to which the application relates and are not essential to the application, or show additional features that are not essential to the application, and the combination of features shown in the figures is not intended to limit the application. In addition, the same reference numerals are used throughout the specification to designate the same components. The specific drawings are illustrated as follows:
fig. 1 is a schematic diagram of an embodiment of an image processing method according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of an embodiment of an image processing method according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of an embodiment of an image processing method according to an embodiment of the present disclosure;
FIG. 4 is a flowchart illustrating an embodiment of an image processing method according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram illustrating an embodiment of an image processing method according to an embodiment of the present disclosure;
FIG. 6 is a flowchart illustrating an embodiment of an image processing method according to the present disclosure;
fig. 7 is a schematic diagram illustrating an embodiment of an image processing method according to an embodiment of the present disclosure;
FIG. 8 is a flowchart illustrating an embodiment of an image processing method according to the present disclosure;
fig. 9 is a schematic diagram illustrating an embodiment of an image processing method according to an embodiment of the present disclosure;
FIG. 10 is a flowchart illustrating an embodiment of an image processing method according to the present disclosure;
fig. 11 is a schematic diagram illustrating an embodiment of an image processing method according to an embodiment of the present disclosure;
FIG. 12 is a flowchart illustrating an embodiment of an image processing method according to an embodiment of the present disclosure;
FIG. 13 is a flowchart illustrating an exemplary process of an exemplary embodiment of a method for processing an image;
fig. 14 is a schematic diagram illustrating an embodiment of an image processing method according to an embodiment of the present disclosure;
fig. 15 is a schematic diagram illustrating an embodiment of an image processing method according to an embodiment of the present application;
FIG. 16 is a diagram illustrating an embodiment of an image processing apparatus according to the present application;
fig. 17 is a schematic diagram of an embodiment of an image processing apparatus according to the present application;
fig. 18 is a schematic diagram of an embodiment of an image processing apparatus according to the present application;
fig. 19 is a schematic diagram of a computing device provided in an embodiment of the present application.
Detailed Description
Before describing the embodiments, the terms used in the present specification are given the following explanations or definitions:
1) Histogram: a statistical chart in which the distribution of data is represented by a series of vertical bars or line segments of unequal height. The horizontal axis generally represents the data type and the vertical axis represents the distribution. A histogram is a precise graphical representation of the distribution of numerical data; it is an estimate of the probability distribution of a continuous (quantitative) variable and is a kind of bar chart. To construct a histogram, the first step is to segment the range of values, i.e. to divide the entire range of values into a series of intervals, and then to count how many values fall into each interval. The intervals are usually specified as consecutive, non-overlapping intervals of the variable; they must be adjacent and are typically (but not necessarily) of equal size.
2) Image stretching: also called contrast stretching or contrast enhancement, a method of improving image quality by changing the brightness values of the picture elements so as to increase the contrast of all or part of the image.
3) Interpolation: an important method for approximating a discrete function. From the values of a function at a finite number of points, approximate values of the function at other points can be estimated. Interpolation can be used to fill in the gaps between pixels when an image is transformed.
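Purely as an illustration of this definition (this example is not part of the patent text), linear interpolation of a discretely sampled sequence can be written with numpy as follows:

```python
import numpy as np

# Linear interpolation: estimate values of a discretely sampled function
# at positions between the known sample points.
x = np.array([0, 1, 2, 3])               # positions where the function value is known
y = np.array([10.0, 30.0, 20.0, 40.0])   # known function values
xi = np.linspace(0, 3, 13)               # denser grid of query positions
yi = np.interp(xi, x, y)                 # interpolated values on the dense grid
```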
The prior art method is described first, and then the technical solution of the present application is described in detail.
Any object with a temperature above absolute zero (-273 °C) constantly radiates infrared light. The higher the temperature of the object, the higher the radiation intensity and the shorter the radiation wavelength. Therefore, in the fields of industrial and agricultural production, scientific research and medicine, infrared monitoring technology can be adopted to monitor, track and mark targets. Taking the application scenario of image processing in industrial safety monitoring as an example, the infrared monitoring technology applied in the prior art can monitor different locations and temperatures to determine danger.
The prior art has the following defects: in the process of stretching the infrared image, the offset changes with the temperature of the infrared camera and the ambient temperature, so the infrared image cannot be well displayed visually through a fixed stretching range. An object with a large temperature difference may exist in the infrared image; under its influence, the infrared image easily becomes too gray or too black, and objects other than it cannot be clearly displayed in the image.
Based on the above technical problems in the prior art, the present application provides an image processing method. The method discriminates the infrared image to be processed and finds the effective signal intervals of the main scene and of the object with a large temperature difference in the infrared image to be processed. The infrared image is processed using the effective signal intervals while keeping the information boundaries stable, and a visual image fusing the main scene and the object with a large temperature difference is obtained. This reduces the influence of the object with a large temperature difference, avoids the infrared image appearing too gray or too black, clearly improves the identification of targets and the perception effect of the image, and solves the problem in the prior art that the image cannot be displayed clearly.
Fig. 1 is a schematic diagram of an embodiment of an image processing method according to an embodiment of the present disclosure. As shown in fig. 1, the image processing method may include:
step S110, acquiring histogram information of the infrared image to be processed;
step S120, processing the histogram information, and determining a main peak position and a secondary peak position;
step S130, determining an effective signal interval corresponding to the main peak according to the position of the main peak; determining an effective signal interval corresponding to the secondary peak according to the position of the secondary peak;
step S140, determining a stable boundary of the main peak according to the frame identifier of the infrared image to be processed and the effective signal interval corresponding to the main peak; determining a stable boundary of the secondary peak according to the frame identifier of the infrared image to be processed and the effective signal interval corresponding to the secondary peak;
and S150, stretching the infrared image to be processed based on the main peak position, the secondary peak position, the stable boundary of the main peak and the stable boundary of the secondary peak to obtain a global scene stretched image of the infrared image to be processed.
Taking the application scenario of image processing in industrial safety monitoring as an example, a medium that is prone to leaking can be taken as the monitoring target, and hidden dangers can be found in time by monitoring the target so as to avoid medium leakage accidents. For example, in an application scenario of infrared gas monitoring, an infrared camera device can be arranged in a gas identification and marking system, an infrared image of the target to be identified is obtained by the infrared camera device, and the infrared image is then stretched based on the effective signal intervals of the main scene and of the object with a large temperature difference, so that a global scene stretched image with a high degree of identification is finally obtained.
In step S110, a histogram may be constructed based on the infrared image to be processed. In the histogram, the horizontal axis may represent the pixel values of the pixels in the infrared image, and the vertical axis represents the distribution of the pixel values. For example, the vertical axis may represent the number of pixels corresponding to each pixel value, that is, the number of pixels having that pixel value in the infrared image. From the histogram of the infrared image, the histogram information data can be obtained. The histogram information data comprises the pixel values of the pixel points in the infrared image and the numbers of the corresponding pixel points.
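As a minimal sketch of step S110 (the 14-bit depth and the bin count are illustrative assumptions, not values fixed by the patent), the histogram information of a raw infrared frame could be collected as follows:

```python
import numpy as np

def histogram_info(ir_frame: np.ndarray, num_levels: int = 16384) -> np.ndarray:
    """Count how many pixels take each raw value of the infrared frame.

    The index of the returned array is the pixel value (horizontal axis of the
    histogram) and the entry is the number of pixels carrying that value
    (vertical axis). A 14-bit sensor with 16384 levels is assumed purely for
    illustration.
    """
    hist, _ = np.histogram(ir_frame.ravel(), bins=num_levels, range=(0, num_levels))
    return hist
```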
In step S120, the histogram information data may be traversed, and each first peak position of the histogram information may be obtained according to the rising and falling trends of the number of pixel points in the histogram. For each first peak position, the amount of signal contained in the neighborhood interval corresponding to that first peak position is counted. The first peak positions are sorted according to the amount of signal, and the top-ranked first peak positions are determined, in order, as the main peak position and the secondary peak position. For example, the full width half maximum interval may be used as the neighborhood interval corresponding to each first peak position, the amount of signal contained in that neighborhood interval may be counted, and the main peak position and the secondary peak position may be determined according to that amount.
In step S130, a signal range threshold may be preset, and the neighborhood intervals corresponding to the main peak position and the secondary peak position are adjusted according to the signal range threshold, so as to obtain the effective signal intervals corresponding to the main peak and the secondary peak. The signal range threshold can be set according to the specific situation and processing requirements of image processing in the actual application scenario.
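A small sketch of this adjustment is given below; since this section does not spell out how the signal range threshold is combined with the full width half maximum interval, a symmetric widening is assumed for illustration.

```python
def effective_signal_interval(fwhm, signal_range_thre: int, num_levels: int = 16384):
    """Widen a full width half maximum interval by a preset signal range threshold.

    `fwhm` is the (left, right) pair found for one peak; the symmetric widening
    and the clamping to the sensor range are assumptions made for illustration.
    """
    left, right = fwhm
    return max(0, left - signal_range_thre), min(num_levels - 1, right + signal_range_thre)
```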
In step S140, each frame of the infrared image to be processed is sequentially subjected to signal smoothing processing to obtain the stable boundary of the main peak and the stable boundary of the secondary peak. Specifically, the parameters of the signal smoothing processing may be updated based on the processing result of the previous frame of the infrared image to be processed, and the signal smoothing processing may be performed on the current frame according to the updated parameters. The first frame of the infrared image to be processed may be subjected to signal smoothing processing according to the initialized parameters.
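The description names the quantities used by the per-frame smoothing (parameter offset, error reference, correction value, reference variable, and the bias, scaling and gain coefficients of the predetermined algorithm) and the order in which they are computed and updated, but it does not give the exact formulas here. The sketch below fills them in with a standard scalar Kalman-style update that matches those dependencies; the concrete expressions are therefore assumptions, not the claimed algorithm. One such filter would be kept per boundary and fed the per-frame effective-signal boundary, which keeps the displayed range from jumping between frames.

```python
class StationaryBoundaryFilter:
    """Temporal smoothing of one boundary of an effective signal interval (sketch)."""

    def __init__(self, param_offset: float = 1e-2, error_ref: float = 1.0):
        self.param_offset = param_offset  # initialized parameter offset
        self.error_ref = error_ref        # initialized error reference
        self.correction = 1.0             # initialized correction value
        self.reference = None             # initialized reference variable

    def update(self, measured_boundary: float) -> float:
        if self.reference is None:        # first frame: start from the measured boundary
            self.reference = float(measured_boundary)
        bias = self.reference                                 # bias coefficient from the reference variable (assumed)
        scaling = self.correction + self.param_offset         # scaling coefficient from correction value and offset
        gain = scaling / (scaling + self.error_ref)           # gain factor from scaling coefficient and error reference
        boundary = bias + gain * (measured_boundary - bias)   # stationary boundary for this frame
        self.correction = (1.0 - gain) * scaling              # update correction value with gain and scaling
        self.reference = boundary                             # update reference variable with the stationary boundary
        return boundary
```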
In step S150, the infrared image to be processed is subjected to stretching processing using the main peak position, the secondary peak position, the stable boundary of the main peak, and the stable boundary of the secondary peak. For example, the histogram may be divided into a plurality of intervals according to the stable boundary of the main peak and the stable boundary of the secondary peak. Pixel value mapping is then performed on the infrared image to be processed for each of the intervals by using the stable boundary of the main peak and the stable boundary of the secondary peak, and a global scene stretched image of the infrared image to be processed is finally obtained.
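To make the interval-wise mapping concrete, the sketch below uses a simple piecewise-linear mapping: each pixel receives the accumulated pre-allocation of all darker intervals plus a linear stretch inside its own interval. The edge layout and the share values are illustrative assumptions; the description only prescribes that each interval receives a pre-allocated share of the output range.

```python
import numpy as np

def stretch_global_scene(raw: np.ndarray, edges, shares, out_max: int = 255) -> np.ndarray:
    """Piecewise-linear global stretch over the intervals of the histogram.

    `edges` lists the interval boundaries in increasing raw-value order, e.g.
    [raw_min, main_low, main_high, sec_low, sec_high, raw_max] built from the
    stationary boundaries (assuming the main peak lies below the secondary
    peak); `shares` holds the signal pre-allocation proportion of each of the
    five intervals between the edges.
    """
    shares = np.asarray(shares, dtype=float)
    shares = shares / shares.sum()                                   # normalise the pre-allocation proportions
    levels = np.concatenate(([0.0], np.cumsum(shares))) * out_max    # output level reached at each edge
    return np.interp(raw.astype(float), edges, levels).astype(np.uint8)

# Illustrative call: 5% dark part, 55% main body, 5% weak information,
# 30% secondary information, 5% bright part (all values are assumptions).
# out = stretch_global_scene(frame, [0, 2000, 4000, 9000, 11000, 16383],
#                            [0.05, 0.55, 0.05, 0.30, 0.05])
```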
In one example, an infrared image is captured in a scene where most of the scene is at a temperature of 25 degrees, but a few objects at 100 degrees are present. Without proper processing, the infrared imaging picture is greatly affected. Because of the large temperature difference between 25 degrees and 100 degrees, the stretching algorithm of the prior art maps the objects at 100 degrees to a pixel value of 255 in the image, while the other temperature ranges appear particularly dark. In the prior art, the infrared image cannot be well displayed through a fixed stretching range. The present application stretches the infrared image to be processed by using the stable boundary of the main peak and the stable boundary of the secondary peak, which can effectively avoid the infrared image appearing too gray or too black.
The embodiments of the present application discriminate the infrared image to be processed and find the effective signal intervals of the main scene and of the object with a large temperature difference in the infrared image to be processed. The infrared image is processed using the effective signal intervals while keeping the information boundaries stable, a visual image fusing the main scene and the object with a large temperature difference is obtained, the influence of the object with a large temperature difference is reduced, and the identification of targets and the perception effect of the image are improved.
Fig. 2 is a schematic diagram of an embodiment of an image processing method according to an embodiment of the present disclosure. As shown in fig. 2, in one embodiment, step S120 in fig. 1, processing the histogram information to determine the primary peak position and the secondary peak position includes:
step S210, processing the histogram information to obtain each first peak position of the histogram information;
step S220, determining a full width half maximum interval of each first peak position;
and step S230, determining the positions of the main peak and the secondary peak according to the full width half maximum interval of each first peak position.
In the histogram of the infrared image, the main peak position corresponds to the main scene of the infrared image. After the primary and secondary peak positions are determined, the stationary boundaries of the primary and secondary peaks may be determined. The stationary boundary of the main peak and the stationary boundary of the secondary peak correspond to the main scene and the object having a large temperature difference, respectively. The histogram may be divided into a plurality of bins on the basis of stationary boundaries of the major and minor peaks. In the subsequent steps, pixel value mapping is respectively carried out on the infrared image to be processed aiming at the intervals, so that a main scene and a visual image of an object with large temperature difference can be better fused, and the influence caused by the object with large temperature difference is reduced.
In step S210, the histogram information data may be traversed, and if the number of the pixel points increases with the increase of the pixel value, the data change trend of the histogram may be marked as an ascending trend; conversely, if the number of pixels decreases as the pixel value increases, the data change trend of the histogram may be marked as a downward trend. Between the rising trend and the falling trend, the number of the pixel points has a maximum value. This maximum corresponds to the first peak position of the histogram information. The histogram information may be processed in the above manner to obtain each first peak position of the histogram information.
In step S220, for each first peak position of the histogram information, a degree of difference between a value of the histogram information and a half-peak value corresponding to the first peak position may be quantitatively calculated. The value of the histogram information refers to the number of pixels corresponding to each pixel value in the histogram, and the half-peak value refers to the number of pixels corresponding to half of the peak value. And determining the full width half maximum interval of each first peak position according to the calculated difference degree.
In step S230, the values of the histogram information in the full width half maximum interval corresponding to each first peak position may be accumulated to obtain an accumulated sum. The accumulated sums are sorted from large to small, and the first peak positions corresponding to the top two accumulated sums are determined, in order, as the main peak position and the secondary peak position.
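A compact sketch of this selection (step S230); the use of numpy arrays is an implementation choice and not something required by the patent:

```python
import numpy as np

def select_main_and_secondary(hist, peak_locs, fwhm_intervals):
    """Pick the main and secondary peak by the signal mass in their FWHM intervals.

    `peak_locs` are the first peak positions and `fwhm_intervals` the matching
    (left, right) full width half maximum bounds. The histogram values inside
    each interval are accumulated, the sums are sorted from large to small, and
    the peaks with the two largest sums are returned in that order.
    """
    sums = np.array([hist[left:right + 1].sum() for left, right in fwhm_intervals])
    order = np.argsort(sums)[::-1]          # indices of the accumulated sums, largest first
    return peak_locs[order[0]], peak_locs[order[1]]
```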
Fig. 3 is a schematic diagram of an embodiment of an image processing method according to an embodiment of the present disclosure. As shown in fig. 3, in an embodiment, step S210 in fig. 2, processing the histogram information to obtain each first peak position of the histogram information includes:
step S310, carrying out mean value filtering processing on the histogram information;
step S320, carrying out interpolation processing on the information after the average value filtering processing;
step S330, obtaining each first peak position of the histogram information based on the information after the interpolation processing.
Fig. 4 is a processing flowchart of an embodiment of an image processing method according to an embodiment of the present application. In the example of fig. 4, a process of finding the respective first peak positions using histogram information is shown. Referring to fig. 3 and 4, first, in step S310, the histogram information data is subjected to an average filtering process. The mean filtering is also called linear filtering. The main mode adopted by the mean filtering is a neighborhood averaging method, and the basic principle is to replace each pixel value in the original image with the mean value. For example, a template may be given to the target pixel on the image, the template including its surrounding neighboring pixels, and the average of all pixels in the template may be used instead of the original pixel value. The number of smoothing processes may be set in advance and then the histogram information data may be subjected to the mean filtering process in the loop structure. An exemplary mean filtering process is illustrated in fig. 4 as step 3.1 through step 3.4:
step 3.1: inputting histogram information data;
step 3.2: presetting smoothing times smoothtimes, and initializing smoothing iteration times smoothindex = 0;
step 3.3: judging whether the smoothindex < smoothtimes is met, if so, skipping to step 3.4, otherwise, skipping to step 3.5; carrying out mean value filtering on the data in step 3.4; after the mean filtering process is completed, the interpolation process is started from step 3.5.
Step 3.4: and (4) carrying out mean value filtering on the data, wherein smoothindex = smoothindex +1, skipping to the step 3.3, and continuing the next round of loop processing.
Referring to fig. 3 and 4, in step S320, interpolation processing is performed on the information after the mean value filtering processing. Interpolation coefficients can be preset, and then the information after mean value filtering processing is subjected to interpolation processing according to the interpolation coefficients. An exemplary interpolation process is shown in steps 3.5 to 3.6 of fig. 4:
step 3.5: setting an interpolation coefficient INTERRATIO;
step 3.6: and carrying out interpolation preprocessing on the filtered data according to an interpolation coefficient INTERRATIO, wherein the number of the data is I after interpolation.
Referring to fig. 3 and 4, in step S330, based on the information interpolated in step S320, each first peak position of the histogram information is found. The first peak position refers to a mark position of the corresponding original image when the number of the pixel points in the histogram information is maximum.
Specifically, the histogram information data may be traversed in the direction of the coordinate axis. In order to record the data variation trend of the histogram, a flag bit flag_up may be set in advance. If the number of pixel points increases as the pixel value increases, the data change trend of the histogram may be marked as a rising trend and the flag bit flag_up is assigned 1; conversely, if the number of pixel points decreases as the pixel value increases, the data change trend of the histogram may be marked as a falling trend and the flag bit flag_up is assigned 0. Between the rising trend and the falling trend, the number of pixel points has a maximum value. This maximum corresponds to a first peak position of the histogram information. The histogram information may be processed in the above manner to obtain each first peak position of the histogram information.
In order to accelerate the processing flow, a threshold value peakthre can be preset for the processing process, and only the histogram information larger than the threshold value is processed, so that the algorithm efficiency can be optimized. The threshold value peakthre can be set according to the specific situation and processing requirement of the image processing in the actual application scene.
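Before the individual steps of fig. 4 are walked through below, the whole search can be summarised in a short sketch: the histogram is mean-filtered and interpolated (steps 3.1 to 3.6), then scanned for rising-to-falling transitions above the threshold peakthre, and each estimated peak position is refined inside a small neighborhood interval (steps 3.7 to 3.20). The 3-tap averaging template, the linear interpolation and the default parameter values are assumptions; the patent fixes only the overall structure.

```python
import numpy as np

def preprocess_histogram(hist, smooth_times: int = 3, inter_ratio: int = 4):
    """Repeated mean filtering followed by interpolation (steps 3.1-3.6)."""
    data = np.asarray(hist, dtype=float)
    kernel = np.ones(3) / 3.0                      # neighbourhood averaging template (assumed 3-tap)
    for _ in range(smooth_times):                  # smoothindex < smoothtimes loop
        data = np.convolve(data, kernel, mode="same")
    x = np.arange(data.size)
    xi = np.linspace(0, data.size - 1, data.size * inter_ratio)  # INTERRATIO-times denser grid
    return np.interp(xi, x, data)                  # I = data.size * inter_ratio points

def find_first_peaks(data, ratio: float = 0.05, x_thre: int = 8):
    """Rising/falling-trend scan for first peak positions (steps 3.7-3.20).

    Positions are returned on the (interpolated) grid of `data`.
    """
    peakthre = np.max(data) * ratio                # only values above the threshold are examined
    peaks, val_loop, flag_up = [], data[0], 1
    for i, v in enumerate(data):
        if v < peakthre:
            continue
        if v > val_loop:                           # still rising
            val_loop, flag_up = v, 1
        elif flag_up == 1:                         # first sample after the top: rising -> falling
            peaks.append(i)                        # record the estimated peak position
            flag_up = 0
        else:                                      # still falling
            val_loop, flag_up = v, 0
    refined = []                                   # refine inside the +/- x_thre neighbourhood interval
    for p in peaks:
        lo, hi = max(0, p - x_thre), min(len(data), p + x_thre + 1)
        refined.append(lo + int(np.argmax(data[lo:hi])))
    return refined
```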
In the example of fig. 4, the respective first peak positions of the histogram information are found using steps 3.7 to 3.20. The specific processing procedures from step 3.7 to step 3.20 are as follows:
step 3.7: searching for the maximum value maxValue of the data;
step 3.8: setting a threshold proportion ratio, and initializing counters k = 0 and i = 0;
step 3.9: calculating the threshold value peakthre = maxValue * ratio; the threshold value peakthre is set according to the maximum value maxValue of the data and the threshold proportion ratio, and in the subsequent step 3.12 only histogram information greater than the threshold is processed, which optimizes the efficiency of the algorithm;
step 3.10: initializing the value marker val_loop = data(1) and the rising flag bit flag_up = 1;
step 3.11: judging whether the loop condition i < I is met; if so, executing step 3.12, otherwise executing step 3.19; the number of loop iterations is controlled by the number I of interpolated data points obtained in step 3.6, and the I data points obtained by interpolation preprocessing are traversed in turn in the loop body;
step 3.12: judging whether data(i) < peakthre is met; if so, executing step 3.13, otherwise executing step 3.14;
step 3.13: i = i + 1, and jumping to step 3.11;
step 3.14: judging whether data(i) > val_loop is met; if so, executing step 3.15, otherwise executing step 3.16;
step 3.15: val_loop = data(i), flag_up = 1, jumping to step 3.13; here data(i) > val_loop is satisfied in step 3.14, so the data change trend of the histogram is a rising trend and flag_up is assigned 1;
step 3.16: judging whether flag_up == 1 is met; if so, executing step 3.17, otherwise executing step 3.18;
step 3.17: recording the peak estimation position location_ori(k) = i/interval, k = k + 1, flag_up = 0, jumping to step 3.13; if data(i) > val_loop is not satisfied in step 3.14 and flag_up == 1 is satisfied in step 3.16, the data change trend of the histogram has just changed from rising to falling, and in this case the position corresponding to the i-th of the I data points is marked as a peak estimation position;
step 3.18: val_loop = data(i), flag_up = 0, jumping to step 3.13; in this case the data change trend of the histogram is a falling trend and flag_up is assigned 0;
step 3.19: setting an interval threshold X_THRE; the peak estimation positions are close to the accurate peak positions, so an interval threshold for the neighborhood interval can be set, and the accurate peak positions are calculated within the neighborhood intervals of the peak estimation positions in the following step 3.20;
step 3.20: for each of the K peak estimation positions obtained, calculating the maximum value and its position within the data interval [location_ori(k) - X_THRE, location_ori(k) + X_THRE] to obtain the accurate peak position location(k); the accurate peak positions are taken as the first peak positions.
Fig. 5 is a schematic diagram of an embodiment of an image processing method according to an embodiment of the present application. As shown in fig. 5, in one embodiment, step S220 in fig. 2, determining a full-width-half-maximum interval of the first peak positions includes:
step S410, calculating a half-height difference value of histogram information for each first peak position, where the half-height difference value is used to measure a difference degree between a value of the histogram information and a half-peak value corresponding to the first peak position;
step S420, processing the half-height difference value to obtain each second peak position of the half-height difference value;
step S430, determining a full-width half-maximum interval corresponding to each first peak position according to each second peak position.
Fig. 6 is a processing flowchart of an embodiment of an image processing method according to an embodiment of the present application. In the example of fig. 6, a calculation is performed using the obtained accurate peak positions and the histogram data to obtain the corresponding full width half maximum positions, i.e. to determine the full width half maximum intervals of the respective first peak positions. Referring to fig. 4 to 6, first, in step S410, for the K accurate peak positions obtained in step 3.20, a loop structure is used: each accurate peak position is processed in the loop body in turn, and the half-height difference values of the histogram information are calculated. Referring to steps 5.1 to 5.4 in fig. 6, the specific processing procedure is as follows:
step 5.1: obtaining the K peak positions location(1) to location(K);
step 5.2: initializing the loop variable k = 1;
step 5.3: judging whether k ≤ K is met; if so, executing step 5.4, otherwise the calculation is finished;
step 5.4: processing the data and calculating the half-height difference value of the histogram information, minushalf_loop = data(location(k))/2 - abs(data - data(location(k))/2).
The half-height difference value minushalf_loop is used to measure the degree of difference between the values of the histogram information data and the half-peak value corresponding to the first peak position. data(location(k))/2 represents the half-peak value corresponding to the first peak position, i.e. the value corresponding to one half of the peak. In the above formula, the half-height difference value minushalf_loop is calculated from the difference between the histogram information data and the half-peak value.
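This formula translates directly into a few lines of numpy; the sketch below is only a transcription of the expression above:

```python
import numpy as np

def half_height_difference(hist, peak_loc: int) -> np.ndarray:
    """Half-height difference curve for one first peak position (step 5.4).

    half_peak = hist[peak_loc] / 2; the resulting curve attains its maxima
    where the histogram crosses the half-peak level, which is exactly what the
    subsequent second-peak search exploits.
    """
    hist = np.asarray(hist, dtype=float)
    half_peak = hist[peak_loc] / 2.0
    return half_peak - np.abs(hist - half_peak)
```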
The data of the half-height difference value minushalf_loop also has an upward variation tendency and a downward variation tendency. Therefore, by performing statistics on the data of the half-height difference, the corresponding peak values can be obtained. In step S420, the data of the half-height difference is processed to find the respective second peak positions of the half-height difference. The peak positions may be found using a process similar to that shown in fig. 4. In step 5.5 of fig. 6, each second peak position of the half-height difference is searched; the specific processing procedure is as follows:
step 5.5: according to the peak position calculation method, the peak positions of minushalf_loop are calculated, and a group of N peak positions half_locs_tem corresponding to the data is obtained. Here, the "peak position calculation method" refers to the processing shown in fig. 4. The N peak positions half_locs_tem represent the N second peak positions.
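As an illustrative sketch of steps 5.4 and 5.5, the half-height difference and its peaks may be computed as below; find_peak_positions stands for the peak-position search of fig. 4 and is passed in as an assumption of this sketch.

```python
import numpy as np

def half_height_difference(data, peak_loc):
    # Step 5.4: minushalf_loop = data(location(k))/2 - |data - data(location(k))/2|
    half_peak = data[peak_loc] / 2.0
    return half_peak - np.abs(np.asarray(data, dtype=float) - half_peak)

def second_peak_positions(data, peak_loc, find_peak_positions):
    # Step 5.5: the peaks of the half-height difference are the second peak positions.
    minushalf_loop = half_height_difference(data, peak_loc)
    return find_peak_positions(minushalf_loop)
```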
In step S430, a full-width half-maximum interval corresponding to each first peak position is determined according to each second peak position corresponding to each first peak position. For example, if a first peak position is between two adjacent second peak positions, an interval between the two adjacent second peak positions may be determined as a full width half maximum interval corresponding to the first peak position.
In one embodiment, the step S430 of determining a full-width half-maximum interval corresponding to each first peak position according to each second peak position includes:
and if the value of the histogram information corresponding to the nth second peak position is greater than the value of the histogram information corresponding to the first peak position, and the value of the histogram information corresponding to the nth-1 second peak position is less than the value of the histogram information corresponding to the first peak position, determining an interval between the nth second peak position and the nth-1 second peak position as a full width half maximum interval corresponding to the first peak position.
Referring to the example of fig. 6 again, the full-width half maximum interval corresponding to the first peak position is determined by using steps 5.6 to 5.12, and the specific processing procedure is as follows:
step 5.6: initializing a loop variable n = 2;
step 5.7: judging whether n is less than or equal to N, if so, executing step 5.8, otherwise, executing step 5.11;
step 5.8: judging whether half _ locs _ tem (n-1) < location (k) & & half _ locs _ tem (n) > location (k) is satisfied, if so, executing 5.10, otherwise, executing 5.9;
wherein location(k) represents the first peak position, half_locs_tem(n) represents the n-th second peak position, and half_locs_tem(n-1) represents the (n-1)-th second peak position; step 5.8 determines whether the first peak position lies between the (n-1)-th second peak position and the n-th second peak position.
Step 5.9: n = n +1, jump to 5.7; if the condition in the step 5.8 is not met, the next cycle is continuously executed;
step 5.10: obtaining the full width half maximum positions corresponding to the peak value k, respectively marked as half_locs(2*k-1) and half_locs(2*k), and jumping to step 5.12;
if the condition in step 5.8 is met, half_locs_tem(n-1) and half_locs_tem(n) are assigned to half_locs(2*k-1) and half_locs(2*k) respectively, as the two end points of the full width half maximum interval corresponding to the peak value k;
step 5.11: half_locs(2*k-1) and half_locs(2*k) are both assigned the value location(k); this step is reached when the loop in step 5.7 ends without the condition in step 5.8 being met, for example when only one second peak position is present, in which case both end points of the full-width-half-maximum interval are assigned location(k);
step 5.12: k = k +1, jumps to 5.3, and repeats the above process for the next first peak position.
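A compact, zero-indexed sketch of steps 5.6 to 5.12 for a single first peak position is given below; the function name is an assumption of this illustration.

```python
def full_width_half_maximum_interval(peak_loc, half_locs_tem):
    """Steps 5.6-5.11 (sketch): bracket the first peak position between two
    adjacent second peak positions; indices here are zero-based."""
    for n in range(1, len(half_locs_tem)):
        if half_locs_tem[n - 1] < peak_loc and half_locs_tem[n] > peak_loc:
            return half_locs_tem[n - 1], half_locs_tem[n]   # step 5.10
    # Step 5.11: no bracketing pair found (e.g. only one second peak position).
    return peak_loc, peak_loc
```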
In the embodiment of the present application, the full width half maximum interval corresponding to a first peak position can be used to describe the valid signal of that first peak position, and the amount of signal contained at each first peak position can be calculated from the full width half maximum positions.
Fig. 7 is a schematic diagram of an embodiment of an image processing method according to an embodiment of the present application. As shown in fig. 7, in one embodiment, step S230 in fig. 2, determining the primary peak position and the secondary peak position according to the full width half maximum interval of the respective first peak positions includes:
step S610, accumulating the values of the histogram information in the full-width half maximum interval corresponding to each first peak value position to obtain the accumulated sum corresponding to each first peak value position;
step S620, sorting the accumulated sums corresponding to the first peak positions to obtain a sorting result from large to small;
step S630, sequentially determining the first peak positions corresponding to the first two values in the sorting result as the main peak position and the secondary peak position.
The primary and secondary peak positions are determined from the plurality of first peak positions in step S230. Specifically, the dominant peak position can be calculated using the half _ locs and the histogram information data obtained in steps 5.6 to 5.12. The specific treatment process is as follows:
step 6.1: loop variables K =1 to K = K, step 6.2 calculation;
step 6.2: accumulating the values of data in the full width half maximum interval [half_locs(2*k-1), half_locs(2*k)] corresponding to the peak value k to obtain the accumulated sum sum_val(k); in this step, the total number of pixels in the full width half maximum interval is accumulated.
Step 6.3: sequencing the obtained accumulation sum sequence sum _ val by any method;
step 6.4: k corresponding to the maximum value in sum _ val is the main peak position loc _ final and the corresponding full width half maximum positions half _ left _ final and half _ right _ final.
In the sorting result of sum _ val from large to small, k corresponding to the value arranged at the second position is the secondary peak position location _ sec.
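For illustration, steps 6.1 to 6.4 may be sketched as follows, under the assumptions that the full width half maximum end points are stored as consecutive zero-based pairs in half_locs and that at least two first peak positions exist.

```python
import numpy as np

def select_main_and_secondary_peaks(data, location, half_locs):
    """Steps 6.1-6.4 (sketch): rank peaks by the histogram sum inside their FWHM interval."""
    data = np.asarray(data)
    sum_val = []
    for k in range(len(location)):
        left, right = half_locs[2 * k], half_locs[2 * k + 1]
        sum_val.append(data[left:right + 1].sum())      # step 6.2
    order = np.argsort(sum_val)[::-1]                    # step 6.3, descending
    loc_final = location[order[0]]                       # step 6.4, main peak
    location_sec = location[order[1]]                    # second largest: secondary peak
    return loc_final, location_sec
```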
In one embodiment, step S130 in fig. 1, determining an effective signal interval corresponding to a main peak according to the position of the main peak, includes: obtaining an effective signal interval corresponding to the main peak according to the full width half maximum interval corresponding to the main peak position and a preset signal range threshold;
step S130 in fig. 1, determining an effective signal interval corresponding to the secondary peak according to the position of the secondary peak, including: and obtaining an effective signal interval corresponding to the secondary peak according to the full width half maximum interval corresponding to the secondary peak position and a preset signal range threshold.
Fig. 8 is a processing flowchart of an embodiment of an image processing method according to an embodiment of the present application. In the example shown in fig. 8, the full width half maximum positions half_left_final and half_right_final of the obtained main peak loc_final and the histogram information data are processed using the preset signal range threshold MIN_THRE to obtain the effective signal interval. The full width half maximum positions half_left_final and half_right_final define the full width half maximum interval, and the interval between the left-end valid bit MINV and the right-end valid bit MAXV is the effective signal interval. The subject scene range can be obtained by setting MIN_THRE. The effective signal interval of the main scene, namely the effective signal interval corresponding to the main peak, is obtained by adjusting the full width half maximum positions according to the signal range threshold MIN_THRE.
The specific processing procedure in the example of fig. 8 is as follows:
step 7.1: acquiring half-maximum width positions half _ left _ final and half _ right _ final corresponding to the main peak;
and 7.2: setting a threshold MIN _ THRE;
step 7.3: initializing a left-end valid bit MINV = half _ left _ final and a right-end valid bit MAXV = half _ right _ final;
step 7.4: judging whether data(MINV) is less than or equal to MIN_THRE; if so, executing step 7.6, otherwise executing step 7.5;
if the histogram information data corresponding to MINV is larger than the threshold MIN_THRE, the value of MINV is reduced in step 7.5; if the histogram information data corresponding to MINV is smaller than or equal to the threshold MIN_THRE, the adjustment of the MINV value is finished, and step 7.6 is executed to adjust the MAXV value;
step 7.5: MINV = MINV-1, jump to 7.4;
step 7.6: judging whether data(MAXV) is less than or equal to MIN_THRE; if so, executing step 7.8, otherwise executing step 7.7;
if the histogram information data corresponding to MAXV is greater than the threshold MIN_THRE, the value of MAXV is increased in step 7.7; if the histogram information data corresponding to MAXV is less than or equal to the threshold MIN_THRE, the adjustment of the MAXV value is finished, and step 7.8 is executed to obtain the final valid bits MINV and MAXV;
step 7.7: MAXV = MAXV + 1, jump to 7.6;
step 7.8: and obtaining the final left-end effective bit MINV and the right-end effective bit MAXV.
In the step of calculating the main peak position using the obtained half_locs and the data, the k corresponding to the second largest value in sum_val is the secondary peak position, and by continuing the calculation with a processing flow similar to that of fig. 8, the left-end effective bit MINV_sec and the right-end effective bit MAXV_sec of the secondary peak can be obtained.
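An illustrative sketch of steps 7.1 to 7.8 follows; the added bound checks are assumptions of the sketch and are not part of the flow of fig. 8.

```python
def effective_signal_interval(data, half_left, half_right, min_thre):
    """Steps 7.1-7.8 (sketch): widen the FWHM interval outwards until the
    histogram value falls to MIN_THRE or below."""
    minv, maxv = half_left, half_right                       # step 7.3
    while minv > 0 and data[minv] > min_thre:                # steps 7.4-7.5
        minv -= 1
    while maxv < len(data) - 1 and data[maxv] > min_thre:    # steps 7.6-7.7
        maxv += 1
    return minv, maxv                                        # step 7.8
```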
Fig. 9 is a schematic diagram of an embodiment of an image processing method according to an embodiment of the present application. As shown in fig. 9, in an embodiment, in step S140 in fig. 1, a stationary boundary of the main peak is determined according to the identifier of the frame of the infrared image to be processed and the effective signal interval corresponding to the main peak; determining a stationary boundary of the secondary peak according to the frame identifier of the infrared image to be processed and the effective signal interval corresponding to the secondary peak, including:
step S710, calculating a stationary boundary of a main peak and a stationary boundary of a secondary peak of a first frame in the infrared image to be processed according to a predetermined algorithm by using the initialized parameter offset, the initialized error reference, the initialized correction value and the initialized reference variable;
step S720, after calculating the stable boundary of the main peak and the stable boundary of the secondary peak of each frame in the infrared image to be processed, updating the correction value and the reference variable according to the preset algorithm;
step S730, calculating a stationary boundary of a main peak and a stationary boundary of a secondary peak of each frame in the infrared image to be processed according to the predetermined algorithm by using the initialized parameter offset, the initialized error reference, the updated correction value and the updated reference variable; wherein the updated correction value and the updated reference variable are updated according to the predetermined algorithm after calculating the stationary boundary of the main peak and the stationary boundary of the secondary peak of the previous frame.
In one example, the image capture frequency of the infrared camera is 25 Hz. In general, it can be considered that the picture changes of two frames before and after the infrared image are not too large, that is, the difference between the images of the two frames before and after the infrared image is not large. The signal stabilization processing can be carried out on the infrared image to be processed by utilizing the temporal correlation of the front frame image and the rear frame image. Specifically, when calculating the stationary boundary of the main peak and the secondary peak, the updated parameters in the previous frame of image processing may be used, so that the boundary value of the signal interval may not cause the flicker of the picture due to the abrupt change of the abnormal value. After the stable boundary of the main peak and the secondary peak of the image of the frame is calculated, the parameters are iteratively updated again by using the calculation result of the image of the frame, and the updated parameters can be used for image processing of the next frame.
In one embodiment, the predetermined algorithm comprises:
calculating a bias coefficient according to the reference variable;
calculating a scaling coefficient according to the correction value and the parameter offset;
calculating a gain factor according to the scaling factor and the error reference quantity;
calculating a stationary boundary according to the bias coefficient, the gain coefficient, the reference variable and the effective signal interval;
updating the correction value using the gain factor and the scaling factor;
updating the reference variable with the stationary boundary.
Fig. 10 is a processing flowchart of an embodiment of an image processing method according to an embodiment of the present application. In the example shown in fig. 10, stationary boundaries MINV _ final and MAXV _ final of the main peak are calculated from the frame number index and the MINV, MAXV corresponding to the present frame. The specific treatment process is as follows:
step 8.1: inputting frame numbers index, MINV and MAXV;
step 8.2: judging whether index = =1 is met, if yes, executing step 8.3, and otherwise, executing step 8.5;
in this step, if index = =1 is satisfied, which indicates that the infrared image is the first frame image, step 8.3 is executed to calculate a stationary boundary of the main peak by using the initialized parameters; if index = =1 is not satisfied, indicating that the infrared image is not the first frame image, executing step 8.5 to calculate a stationary boundary of the main peak by using the iteratively updated parameters;
step 8.3: initializing parameter offset quantities Q1 and Q2, initializing error reference quantities R1 and R2, and initializing correction values P1 and P2;
step 8.4: initialization reference variable xh1= MINV, initialization reference variable xh2 = MAXV;
step 8.5: calculating a bias coefficient xminus 1= xh1, and calculating a bias coefficient xminus2 = xh 2;
step 8.6: calculating a scaling factor Pminus 1= P1 + Q1, calculating a scaling factor Pminus2 = P2 + Q2;
step 8.7: the gain factor K1 = Pminus1/(Pminus1+ R1) is calculated,
calculating the gain factor K2 = Pminus2/(Pminus2+ R2);
step 8.8: MINV_final = xminus1 + K1*(MINV - xh1),
MAXV_final = xminus2 + K2*(MAXV - xh2);
step 8.9: update correction value P1 = (1-K1)*Pminus1, update correction value P2 = (1-K2)*Pminus2;
Step 8.10: update reference variable xh1= MINV _ final, update reference variable xh2 = MAXV _ final.
Similarly, the minor peak stationary boundaries MINV_sec_final and MAXV_sec_final can be calculated according to the frame number index and the MINV_sec and MAXV_sec corresponding to the frame.
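The predetermined algorithm of steps 8.1 to 8.10 may be sketched as a small per-boundary smoothing filter; the class name and the numeric initial values of Q, R and P below are assumptions of this illustration.

```python
class StationaryBoundaryFilter:
    """Steps 8.1-8.10 (sketch): one instance smooths one boundary value over frames.
    Q is the parameter offset, R the error reference, P the correction value and
    xh the reference variable; the initial values are assumptions."""

    def __init__(self, q=0.01, r=1.0, p=1.0):
        self.q, self.r, self.p = q, r, p
        self.xh = None

    def update(self, boundary):
        if self.xh is None:                         # first frame: steps 8.3-8.4
            self.xh = float(boundary)
        xminus = self.xh                            # step 8.5: bias coefficient
        pminus = self.p + self.q                    # step 8.6: scaling coefficient
        k = pminus / (pminus + self.r)              # step 8.7: gain coefficient
        final = xminus + k * (boundary - self.xh)   # step 8.8: stationary boundary
        self.p = (1.0 - k) * pminus                 # step 8.9: update correction value
        self.xh = final                             # step 8.10: update reference variable
        return final

# Usage sketch: MINV and MAXV of the main peak each use their own filter
# (parameters Q1/R1/P1 and Q2/R2/P2), and likewise for the secondary peak boundaries.
min_filter, max_filter = StationaryBoundaryFilter(), StationaryBoundaryFilter()
```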
Fig. 11 is a schematic diagram of an embodiment of an image processing method according to an embodiment of the present application. As shown in fig. 11, in an embodiment, in step S150 in fig. 1, performing stretching processing on the to-be-processed infrared image based on the main peak position, the secondary peak position, the stationary boundary of the main peak, and the stationary boundary of the secondary peak, to obtain a global scene stretched image of the to-be-processed infrared image, includes:
step S810, setting a signal pre-distribution proportion corresponding to each interval in the histogram according to the position relation of the main peak position and the secondary peak position in the histogram;
the signal pre-allocation proportion comprises a dark part proportion, a main body proportion, a bright part proportion, a weak information coefficient and a secondary information coefficient; the main body proportion represents a signal pre-allocation proportion corresponding to an interval within a stationary boundary of the main peak; the secondary information coefficient represents a signal pre-allocation proportion corresponding to an interval within a stationary boundary of the secondary peak; the weak information coefficient represents a signal pre-distribution proportion corresponding to an interval between the position of the main peak and the position of the secondary peak, and is outside a stable boundary of the main peak and a stable boundary of the secondary peak; the dark part proportion represents a signal pre-distribution proportion corresponding to an interval with minimum brightness, wherein the interval is outside a stable boundary of the main peak and a stable boundary of the secondary peak; the bright part proportion represents a signal pre-distribution proportion corresponding to an interval with the maximum brightness, wherein the interval is outside a stable boundary of the main peak and a stable boundary of the secondary peak;
step S820, according to the signal pre-allocation proportions corresponding to the intervals, performing pixel value mapping on the infrared image to be processed respectively for each interval in the histogram, so as to obtain a global scene stretching image of the infrared image to be processed.
In the embodiment of the present application, the histogram may be divided into a plurality of intervals according to the stationary boundary of the main peak and the stationary boundary of the secondary peak. The interval within the stationary boundary of the main peak may be referred to as interval 1, and the interval within the stationary boundary of the secondary peak may be referred to as interval 2. Interval 1 and interval 2 divide the entire histogram into intervals, including: an interval located between interval 1 and interval 2, an interval located to the left of interval 1 and interval 2 where the image brightness is smallest, and an interval located to the right of interval 1 and interval 2 where the image brightness is largest.
And respectively setting corresponding signal pre-distribution proportions for each divided interval. The divided intervals and the corresponding signal pre-allocation proportions can comprise:
interval 1: an interval within the stationary boundary of the main peak; corresponding to the bulk ratio.
Interval 2: an interval within the stationary boundary of the secondary peak; corresponding to the secondary information coefficients.
Interval 3: the interval between interval 1 and interval 2 in the histogram; the interval between the position of the main peak and the position of the secondary peak, which is outside the stable boundary of the main peak and the stable boundary of the secondary peak; corresponding to the weak information coefficients.
Interval 4: the interval located on the leftmost side in the histogram; the section with the minimum brightness is outside the stable boundary of the main peak and the stable boundary of the secondary peak; corresponding to the dark portion ratio.
Interval 5: the interval located at the rightmost side in the histogram; i.e. the section where the luminance is maximum, outside the stationary boundary of the main peak and the stationary boundary of the secondary peak. Corresponding to the bright portion ratio.
On the basis of dividing intervals and setting signal pre-distribution proportion, pixel value mapping is carried out on the infrared image to be processed aiming at each interval, and finally a global scene stretching image of the infrared image to be processed can be obtained.
Fig. 12 is a processing flowchart of an embodiment of an image processing method according to an embodiment of the present application. In the example shown in fig. 12, an adaptive calculation is performed according to the location, location _ sec, MINV _ final, MAXV _ final, MINV _ sec _ final, MAXV _ sec _ final, and the infrared Image to be processed, so as to obtain a global scene stretching Image _ final. In steps S1 to S6-2 of fig. 12, the signal pre-allocation ratio corresponding to each section is set, and the specific processing procedure is as follows:
step S1: acquiring location, location _ sec, MINV _ final, MAXV _ final, MINV _ sec _ final, MAXV _ sec _ final and Image;
step S2: obtaining a minimum value min _ final of a whole graph effective interval and a maximum value max _ final of the whole graph effective interval by using Image; one way to obtain the valid interval based on the threshold value THRE is provided in fig. 13, which is described below in relation to fig. 13;
step S3: judging whether the location _ sec > location is satisfied, if so, executing the step S4-1, otherwise, executing the step S4-2;
step S4-1: initializing a dark part ratio _ dark = 0.1, a body ratio _ main = 0.75, and a light part ratio _ light = 0.15;
If location_sec > location is satisfied, it indicates that the main peak position is to the left of the secondary peak position in the histogram. In this case, the main peak corresponds to a darker area of the image and the secondary peak corresponds to a brighter area of the image. Since the secondary peak lies in the bright portion and contains more information than the other peaks apart from the main peak, the bright portion ratio 0.15 is set to be larger than the dark portion ratio 0.1, indicating that the amount of information in the bright portion is relatively large.
Step S5-1: calculating a weak information coefficient ratio_low = (MINV_sec_final - MAXV_final) / (5 * (MAXV_sec_final - MAXV_final)) * ratio_light;
step S6-1: calculating a secondary information coefficient ratio_sec = ratio_light - ratio_low;
If location _ sec > location is satisfied, the amount of information in the bright portion is relatively large. The weak information coefficients and the secondary information coefficients are thus calculated using the highlight ratio _ light.
Step S4-2: initializing a dark part ratio _ dark = 0.15, a body ratio _ main = 0.75, and a light part ratio _ light = 0.1;
If location_sec > location is not satisfied, it indicates that the main peak position is to the right of the secondary peak position in the histogram. In this case, the main peak corresponds to a brighter area of the image and the secondary peak corresponds to a darker area of the image. Since the secondary peak lies in the dark portion and contains more information than the other peaks apart from the main peak, the dark portion ratio 0.15 is set to be larger than the bright portion ratio 0.1, indicating that the amount of information in the dark portion is relatively large.
Step S5-2: calculating a weak information coefficient ratio_low = (MINV_final - MAXV_sec_final) / (5 * (MINV_final - MINV_sec_final)) * ratio_dark;
Step S6-2: calculating a secondary information coefficient ratio_sec = ratio_dark - ratio_low;
If location _ sec > location is not satisfied, the amount of information in the dark portion is relatively large. The weak information coefficients and the secondary information coefficients are thus calculated using the dark portion ratio _ dark.
In the above formulas of step 5-1 and step 5-2, the coefficient "5" may be replaced with other numerical values. For example, the coefficient "5" may be replaced by a value within a preset value range. In one example, the preset value range may be 4-6.
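For illustration, the setting of the signal pre-allocation proportions in steps S3 to S6-2 may be sketched as follows; the grouping of the denominators follows the reconstructed formulas of steps S5-1 and S5-2 above and should be read as an assumption of this sketch.

```python
def signal_preallocation_ratios(location, location_sec,
                                minv_final, maxv_final,
                                minv_sec_final, maxv_sec_final):
    """Steps S3 to S6-2 (sketch): choose dark/main/bright ratios and derive
    the weak information and secondary information coefficients."""
    if location_sec > location:          # main peak darker, secondary peak brighter
        ratio_dark, ratio_main, ratio_light = 0.10, 0.75, 0.15
        ratio_low = ((minv_sec_final - maxv_final)
                     / (5.0 * (maxv_sec_final - maxv_final))) * ratio_light
        ratio_sec = ratio_light - ratio_low
    else:                                # main peak brighter, secondary peak darker
        ratio_dark, ratio_main, ratio_light = 0.15, 0.75, 0.10
        ratio_low = ((minv_final - maxv_sec_final)
                     / (5.0 * (minv_final - minv_sec_final))) * ratio_dark
        ratio_sec = ratio_dark - ratio_low
    return ratio_dark, ratio_main, ratio_light, ratio_low, ratio_sec
```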
Fig. 13 is a processing flowchart of an embodiment of an image processing method according to an embodiment of the present application. In the above-described step S2, the full map valid interval minimum value min _ final and the full map valid interval maximum value max _ final can be obtained by the process flow of fig. 13.
Referring to fig. 13, an iteration threshold THRE is set in step 13.1. Outliers with very large and very small pixel values in the image are removed in fig. 13. The iteration threshold THRE is set as the number of outlier points to be removed. The threshold can be set according to the specific situation and the processing requirements of the image processing in the actual application scene.
In one example, the iteration threshold THRE is set to 20. The minimum pixel value of the pixel points in the image is 2000; there are 6 pixel points with a pixel value of 2000, 0 pixel points with a pixel value of 2001, 5 pixel points with a pixel value of 2002, 16 pixel points with a pixel value of 2003, 7 pixel points with a pixel value of 2004, … …. Then, starting from the minimum pixel value 2000, the numbers of pixel points are accumulated in ascending order of pixel value until the accumulated number of pixel points is larger than the value of THRE. In the above example, from the minimum pixel value 2000 to the pixel value 2003, the running total is 6+5+16=27; since the running total 27 is greater than the THRE value of 20, the pixel points with pixel values less than 2003 are removed, and min_val + i = 2003 is finally obtained in step 13.13.
In yet another example, the iteration threshold THRE is set to 20. The maximum pixel value of the pixel points in the image is 4000; there are 5 pixel points with a pixel value of 4000, 0 pixel points with a pixel value of 3999, 8 pixel points with a pixel value of 3998, 15 pixel points with a pixel value of 3997, 6 pixel points with a pixel value of 3996, … …. Then, starting from the maximum pixel value 4000, the numbers of pixel points are accumulated in descending order of pixel value until the accumulated number of pixel points is larger than the value of THRE. In the above example, from the maximum pixel value 4000 to the pixel value 3997, the running total is 5+8+15=28; since the running total 28 is greater than the THRE value of 20, the pixel points with pixel values greater than 3997 are removed, and max_val - j = 3997 is finally obtained in step 13.13.
The specific calculation process of fig. 13 is as follows:
step 13.1: inputting an Image, and setting a threshold value THRE;
step 13.2: calculating an image histogram by taking the group interval as 1 to obtain a minimum value min _ val and a maximum value max _ val;
step 13.3: initializing a statistic value sum =0, and initializing a minimum value recording variable i = 0;
step 13.4: acquiring the number count (min _ val + i) of pixels with the pixel value (min _ val + i);
step 13.5: sum = sum + count (min _ val + i);
step 13.6: judging whether sum > THRE is met, if yes, executing step 13.8, otherwise, executing step 13.7;
step 13.7: i = i +1, jumping to step 13.4;
step 13.8: initializing a statistic value sum =0, and initializing a maximum value recording variable j = 0;
step 13.9: acquiring the number count (max _ val-j) of pixels with the pixel value (max _ val-j);
step 13.10: sum = sum + count (max _ val-j);
step 13.11: judging whether sum > THRE is met, if yes, executing step 13.13, otherwise, executing step 13.12;
step 13.12: j = j +1, jump to step 13.9;
step 13.13: and obtaining a minimum value min _ final = min _ val + i of the whole graph effective interval and a maximum value max _ final = max _ val-j of the whole graph effective interval.
The variables are initially assigned in step 13.3, and in steps 13.4 to 13.7 the numbers of pixel points are accumulated in ascending order of pixel value until the accumulated number of pixel points is greater than the value of THRE. In the subsequent processing flow, the removed outlier points with small pixel values are not within the calculation range.
Similarly, the variables are initially assigned in step 13.8, and in steps 13.9 to 13.12 the numbers of pixel points are accumulated in descending order of pixel value until the accumulated number of pixel points is greater than the value of THRE. In the subsequent processing flow, the removed outlier points with large pixel values are not within the calculation range.
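An illustrative sketch of the processing flow of fig. 13 follows; integer pixel values (bin width 1, as in step 13.2) and the NumPy helpers used are assumptions of this sketch.

```python
import numpy as np

def whole_image_effective_interval(image, thre):
    """Fig. 13 (sketch): trim outlier pixels from both ends of the value range."""
    values = np.asarray(image).ravel()
    min_val, max_val = int(values.min()), int(values.max())
    counts = np.bincount(values - min_val)           # histogram with bin width 1

    i, acc = 0, counts[0]
    while acc <= thre and i < len(counts) - 1:       # steps 13.3-13.7
        i += 1
        acc += counts[i]

    j, acc = 0, counts[-1]
    while acc <= thre and j < len(counts) - 1:       # steps 13.8-13.12
        j += 1
        acc += counts[len(counts) - 1 - j]

    return min_val + i, max_val - j                  # step 13.13
```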
Fig. 14 is a schematic diagram of an embodiment of an image processing method according to an embodiment of the present application. As shown in fig. 14, in an embodiment, in step S820 in fig. 11, performing pixel value mapping on the infrared image to be processed respectively for each interval in the histogram according to a signal pre-allocation ratio corresponding to each interval, to obtain a global scene stretching image of the infrared image to be processed, includes:
step S910, respectively obtaining pixel stretching values corresponding to the pixel points in each interval by utilizing the signal pre-distribution proportion corresponding to each interval;
step S920, respectively aiming at each interval, accumulating the pixel stretching value of the interval and the signal pre-allocation proportion corresponding to each interval with the brightness smaller than the interval to obtain the pixel mapping value corresponding to the pixel point in the interval;
and step S930, obtaining a global scene stretching image of the infrared image to be processed according to the pixel mapping value.
Referring to fig. 12 again, in steps S7 to S16 of fig. 12, an adaptive calculation is performed according to location, location _ sec, MINV _ final, MAXV _ final, MINV _ sec _ final, MAXV _ sec _ final, and the to-be-processed infrared Image, so as to obtain a global scene stretch Image _ final. The specific treatment process is as follows:
if location _ sec > location is satisfied, then steps S7 to S10 are executed after step S6-1, and finally steps S15 and S16 are executed to obtain a global scene extension Image _ final.
Step S7: the pixel value val size in the Image is mapped to [0, MINV _ final ] by val _ process = (val-min _ final)/(MINV _ final-min _ final) × ratio _ dark.
In step S7, the pixel values in section 4 are mapped. The interval 4 is the interval located at the leftmost side in the histogram, that is, the interval with the minimum brightness, which is outside the stationary boundary of the main peak and the stationary boundary of the secondary peak; the corresponding signal pre-allocation ratio is the dark portion ratio. Therefore, the dark portion ratio ratio_dark corresponding to the interval 4 is used to obtain the pixel stretching value (val - min_final)/(MINV_final - min_final) × ratio_dark corresponding to the pixel points in the interval 4. The interval 4 is the interval with the minimum brightness, and does not need to accumulate the signal pre-allocation ratios corresponding to other intervals, and the pixel stretching value is equal to the pixel mapping value val_process.
Step S8: the size of the pixel value val in the Image is mapped to (MINV _ final, MAXV _ final) by val _ process = (val-MINV _ final)/(MAXV _ final-MINV _ final) × ratio _ main + ratio _ dark.
In step S8, the pixel values in section 1 are mapped. Interval 1 is the interval within the stationary boundary of the main peak; the corresponding signal pre-allocation proportion is the main proportion. Therefore, by using the body proportion ratio_main corresponding to the interval 1, the pixel stretching value (val - MINV_final)/(MAXV_final - MINV_final) × ratio_main corresponding to the pixel points in the interval 1 is obtained. Then the pixel stretching value of the interval 1 is accumulated with the signal pre-allocation ratio ratio_dark corresponding to the interval 4, whose brightness is smaller than that of the interval, to obtain the pixel mapping value val_process corresponding to the pixel points in the interval.
Step S9: mapping the pixel value val size in the Image to (MAXV _ final, MINV _ sec _ final) val _ process = (val-MAXV _ final)/(MINV _ sec _ final-MAXV _ final) ratio _ low + ratio _ main + ratio _ dark.
In step S9, the pixel values in section 3 are mapped. Bin 3, i.e. the bin in the histogram that lies between bin 1 and bin 2, is the interval between the position of the main peak and the position of the secondary peak, which is outside the stable boundary of the main peak and the stable boundary of the secondary peak; the corresponding signal pre-allocation proportion is the weak information coefficient. Therefore, the weak information coefficient ratio_low corresponding to the interval 3 is used to obtain the pixel stretching value (val - MAXV_final)/(MINV_sec_final - MAXV_final) × ratio_low corresponding to the pixel points in the interval 3. Then the pixel stretching value of section 3 is accumulated with the signal pre-allocation proportions ratio_dark and ratio_main corresponding to section 4 and section 1, whose brightness is smaller than that of section 3, so as to obtain the pixel mapping value val_process corresponding to the pixel points in the section.
Step S10: mapping the pixel value val in the Image to (MINV _ sec _ final, MAXV _ sec _ final) with val _ process = (val-MINV _ sec _ final)/(MAXV _ sec _ final-MINV _ sec _ final): ratio _ sec + ratio _ low + ratio _ main + ratio _ dark, and then proceeding to step S15.
In step S10, the pixel values in section 2 are mapped. Interval 2 is the interval within the stationary boundary of the secondary peak; the corresponding signal pre-allocation proportion is the secondary information coefficient. Therefore, using the secondary information coefficient ratio_sec corresponding to the interval 2, the pixel stretching value (val - MINV_sec_final)/(MAXV_sec_final - MINV_sec_final) × ratio_sec corresponding to the pixel points in the interval 2 is obtained. Then the pixel stretching value of the interval 2 is accumulated with the signal pre-allocation proportions ratio_dark, ratio_main and ratio_low corresponding to the interval 4, the interval 1 and the interval 3, whose brightness is smaller than that of the interval, to obtain the pixel mapping value val_process corresponding to the pixel points in the interval.
In steps S7 to S10, the pixel stretching value in the section and the signal pre-allocation ratio corresponding to each section having a luminance smaller than the section are added for each section to obtain the pixel mapping value corresponding to the pixel point in the section. The pixel values can be reasonably mapped between 0 and 1 through accumulation, and the identification degree of the target of the global scene stretching image and the perception effect of the image are improved.
In the case where location _ sec > location is satisfied, the primary peak position is to the left of the secondary peak position. In this case, the main peak corresponds to a darker area of the image, by comparison. Therefore, for the section 5 with the maximum brightness, the section belongs to the information which is weakest in relation to the main scene, and the processing flow of pixel mapping does not need to be performed. Unnecessary processing flows are reduced, and the execution efficiency of image processing is improved.
If location _ sec > location is not satisfied, then steps S11 to S14 are executed after step S6-2, and finally steps S15 and S16 are executed to obtain a global scene extension Image _ final.
Step S11: the size of the pixel value val in the Image is mapped to [ MINV _ sec _ final, MAXV _ sec _ final ] by val _ process = (val-MINV _ sec _ final)/(MAXV _ sec _ final-MINV _ sec _ final) × ratio _ sec.
In step S11, the pixel values in section 2 are mapped. Interval 2 is the interval within the stationary boundary of the secondary peak; its corresponding signal pre-allocation ratio is the secondary information coefficient ratio sec.
Step S12: the size of the pixel value val in the Image is mapped to (MAXV _ sec _ final, MINV _ final) by val _ process = (val-MAXV _ sec _ final)/(MINV _ final-MAXV _ sec _ final) ratio _ low + ratio _ sec.
In step S12, the pixel values in section 3 are mapped. Bin 3, i.e. the bin in the histogram that lies between bin 1 and bin 2; the interval between the position of the main peak and the position of the secondary peak, which is outside the stable boundary of the main peak and the stable boundary of the secondary peak; the corresponding signal pre-allocation proportion is a weak information coefficient ratio _ low.
Step S13: the size of the pixel value val in the Image is mapped to (MINV _ final, MAXV _ final) by val _ process = (val-MINV _ final)/(MAXV _ final-MINV _ final) × ratio _ main + ratio _ low + ratio _ sec.
In step S13, the pixel values in section 1 are mapped. Interval 1 is the interval within the stationary boundary of the main peak; its corresponding signal pre-allocation proportion is the body proportion ratio _ main.
Step S14: the pixel value val size in the Image is mapped to (MAXV _ final, max _ final) by val _ process = (val-MAXV _ final)/(max _ final-MAXV _ final) × ratio _ light + ratio _ main + ratio _ low + ratio _ sec.
In step S14, the pixel values in section 5 are mapped. Bin 5, i.e. the bin located at the rightmost in the histogram; the section with the maximum brightness is outside the stable boundary of the main peak and the stable boundary of the secondary peak; the corresponding signal pre-allocation proportion is the highlight proportion ratio _ light.
In the processing flow of steps S11 to S14, similarly to steps S7 to S10, the pixel stretching value of the section and the signal pre-allocation ratio corresponding to the section having the luminance smaller than the section are added for each section, and the pixel mapping value corresponding to the pixel point in the section is obtained. The related processing procedure can be referred to the descriptions of steps S7 to S10, and is not described herein again.
In the case that location _ sec > location is not satisfied, the primary peak position is to the right of the secondary peak position. In this case, the main peak corresponds to a brighter region of the image, by comparison. Therefore, the section 4 with the minimum brightness belongs to the information weakest in relation to the main scene, and the processing flow of pixel mapping may not be performed. Unnecessary processing flows are reduced, and the execution efficiency of image processing is improved.
In step S930 of fig. 14, the pixel mapping values are further processed to obtain a global scene stretching image of the infrared image to be processed. The step of further processing may include steps S15 and S16 in fig. 12.
Step S15: the hash information is processed to make the portion of the pixel map value val _ process less than 0 equal to 0 and the portion of val _ process greater than 1 equal to 1. The processed signal of the hash information is denoted as val _ process'.
Step S16: and processing the signal obtained in the step S15 into an 8-bit Image form, and finally obtaining a global scene stretching Image _ final = Uint8 (val _ process'. 255) of the infrared Image to be processed.
Here, Uint8 is an 8-bit rounding operation. The result (val_process' * 255) may contain decimal values, and therefore the rounding process is performed by Uint8.
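For illustration, the mapping of steps S7 to S10 together with steps S15 and S16 (the case where location_sec > location) may be sketched as a piecewise-linear transfer function; the handling of interval 5 and the function name are assumptions of this sketch, and the case where location_sec is smaller than location follows symmetrically from steps S11 to S14.

```python
import numpy as np

def stretch_when_main_peak_darker(image, min_final,
                                  minv_final, maxv_final,
                                  minv_sec_final, maxv_sec_final,
                                  ratio_dark, ratio_main, ratio_low, ratio_sec):
    """Steps S7-S10 and S15-S16 (sketch) for the case location_sec > location."""
    val = np.asarray(image, dtype=float)
    out = np.zeros_like(val)

    m4 = val <= minv_final                                       # interval 4
    out[m4] = (val[m4] - min_final) / (minv_final - min_final) * ratio_dark

    m1 = (val > minv_final) & (val <= maxv_final)                # interval 1
    out[m1] = ((val[m1] - minv_final) / (maxv_final - minv_final)
               * ratio_main + ratio_dark)

    m3 = (val > maxv_final) & (val <= minv_sec_final)            # interval 3
    out[m3] = ((val[m3] - maxv_final) / (minv_sec_final - maxv_final)
               * ratio_low + ratio_main + ratio_dark)

    m2 = (val > minv_sec_final) & (val <= maxv_sec_final)        # interval 2
    out[m2] = ((val[m2] - minv_sec_final) / (maxv_sec_final - minv_sec_final)
               * ratio_sec + ratio_low + ratio_main + ratio_dark)

    # Interval 5 is not mapped in steps S7-S10; saturating it to 1 here is an
    # assumption of the sketch so that the brightest pixels remain bright.
    out[val > maxv_sec_final] = 1.0

    out = np.clip(out, 0.0, 1.0)                                 # step S15
    return (out * 255).astype(np.uint8)                          # step S16
```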
In summary, in the embodiment of the present application, the effective signal intervals of the main scene and the object with a large temperature difference are obtained by performing algorithm analysis and self-adaptation on the histogram, and the infrared image is stretched by using the effective signal intervals of the main scene and the object with a large temperature difference.
Referring to fig. 15, in the embodiment of the present application, first, an infrared Image to be processed and a frame number index corresponding to the infrared Image to be processed are obtained, then, histogram processing is performed on the infrared Image to be processed, each peak value in the histogram is searched, and a main peak position location, a secondary peak position location _ sec, and full width half maximum positions corresponding to the main peak position location, the secondary peak position location _ sec, and the full width half maximum positions are calculated. The processing is then performed for the major and minor peaks. The processing for the main peak may include: calculating effective ranges MINV and MAXV corresponding to the main peak according to the position of the main peak and the corresponding full width half maximum position of the main peak; and calculating main peak stationary boundary MINV _ final and MAXV _ final according to the frame number index and the MINV and MAXV corresponding to the frame. The processing for the secondary peak may include: calculating effective ranges MINV _ sec and MAXV _ sec corresponding to the secondary peak according to the position of the secondary peak and the corresponding full width half maximum position of the secondary peak; and calculating minor peak stationary boundaries MINV _ sec _ final and MAXV _ sec _ final according to the frame number index and the MINV _ sec and MAXV _ sec corresponding to the frame. And finally, performing self-adaptive calculation according to the location, the location _ sec, the MINV _ final, the MAXV _ final, the MINV _ sec _ final, the MAXV _ sec _ final and the Image to obtain a final visual Image _ final.
As shown in fig. 16, the present application further provides an embodiment of an image processing apparatus, and for beneficial effects or technical problems to be solved by the apparatus, reference may be made to descriptions in methods respectively corresponding to the apparatuses, or to descriptions in the summary of the invention, and details are not repeated here.
In an embodiment of the image processing apparatus, the apparatus comprises:
an obtaining unit 100, configured to obtain histogram information of an infrared image to be processed;
the processing unit 200 is configured to process the histogram information, and determine a main peak position and a secondary peak position;
a first determining unit 300, configured to determine an effective signal interval corresponding to a main peak according to the position of the main peak; determining an effective signal interval corresponding to the secondary peak according to the position of the secondary peak;
a second determining unit 400, configured to determine a stationary boundary of the main peak according to the identifier of the frame of the infrared image to be processed and the effective signal interval corresponding to the main peak; determining a stable boundary of the secondary peak according to the frame identifier of the infrared image to be processed and the effective signal interval corresponding to the secondary peak;
and the stretching unit 500 is configured to stretch the to-be-processed infrared image based on the main peak position, the secondary peak position, the stable boundary of the main peak, and the stable boundary of the secondary peak, so as to obtain a global scene stretched image of the to-be-processed infrared image.
As shown in fig. 17, in one embodiment, the processing unit 200 includes:
a processing subunit 210, configured to process the histogram information to obtain each first peak position of the histogram information;
a first determining subunit 220, configured to determine a full-width half-maximum interval of the respective first peak position;
a second determining subunit 230, configured to determine the primary peak position and the secondary peak position according to the full width half maximum interval of the respective first peak positions.
In one embodiment, the processing subunit 210 is configured to:
carrying out mean value filtering processing on the histogram information;
carrying out interpolation processing on the information after the average filtering processing;
and obtaining each first peak position of the histogram information based on the information after the interpolation processing.
In one embodiment, the first determining subunit 220 is configured to:
calculating a half-height difference value of the histogram information aiming at each first peak value position, wherein the half-height difference value is used for measuring the difference degree between the value of the histogram information and the half-peak value corresponding to the first peak value position;
processing the half-height difference value to obtain each second peak value position of the half-height difference value;
and determining a full-width half-maximum interval corresponding to each first peak position according to each second peak position.
In one embodiment, the first determining subunit 220 is configured to:
and if the value of the histogram information corresponding to the nth second peak position is greater than the value of the histogram information corresponding to the first peak position, and the value of the histogram information corresponding to the nth-1 second peak position is less than the value of the histogram information corresponding to the first peak position, determining an interval between the nth second peak position and the nth-1 second peak position as a full width half maximum interval corresponding to the first peak position.
In one embodiment, the second determining subunit 230 is configured to:
accumulating the values of the histogram information in the full-width half maximum interval corresponding to each first peak value position to obtain an accumulated sum corresponding to each first peak value position;
sorting the accumulated sums corresponding to the first peak positions to obtain a sorting result from large to small;
and sequentially determining the first peak positions corresponding to the first two values in the sequencing result as a main peak position and a secondary peak position.
In one embodiment, the first determining unit 300 is configured to: obtaining an effective signal interval corresponding to the main peak according to the full width half maximum interval corresponding to the main peak position and a preset signal range threshold;
the first determination unit is configured to: and obtaining an effective signal interval corresponding to the secondary peak according to the full width half maximum interval corresponding to the secondary peak position and a preset signal range threshold.
In one embodiment, the second determining unit 400 is configured to:
calculating a stationary boundary of a main peak and a stationary boundary of a secondary peak of a first frame in the infrared image to be processed according to a predetermined algorithm by using the initialized parameter offset, the initialized error reference, the initialized correction value and the initialized reference variable;
after calculating the stable boundary of the main peak and the stable boundary of the secondary peak of each frame in the infrared image to be processed, updating the correction value and the reference variable according to the preset algorithm;
calculating a stationary boundary of a main peak and a stationary boundary of a secondary peak of each frame in the infrared image to be processed according to the predetermined algorithm by using the initialized parameter offset, the initialized error reference, the updated correction value and the updated reference variable; wherein the updated correction value and the updated reference variable are updated according to the predetermined algorithm after calculating the stationary boundary of the main peak and the stationary boundary of the secondary peak of the previous frame.
In one embodiment, the predetermined algorithm comprises:
calculating a bias coefficient according to the reference variable;
calculating a scaling coefficient according to the correction value and the parameter offset;
calculating a gain factor according to the scaling factor and the error reference quantity;
calculating a stationary boundary according to the bias coefficient, the gain coefficient, the reference variable and the effective signal interval;
updating the correction value using the gain factor and the scaling factor;
updating the reference variable with the stationary boundary.
As shown in fig. 18, in one embodiment, the stretching unit 500 includes:
a setting subunit 510, configured to set, according to a position relationship between the main peak position and the secondary peak position in the histogram, a signal pre-allocation proportion corresponding to each interval in the histogram;
the signal pre-allocation proportion comprises a dark part proportion, a main body proportion, a bright part proportion, a weak information coefficient and a secondary information coefficient; the main body proportion represents a signal pre-allocation proportion corresponding to an interval within a stationary boundary of the main peak; the secondary information coefficient represents a signal pre-allocation proportion corresponding to an interval within a stationary boundary of the secondary peak; the weak information coefficient represents a signal pre-distribution proportion corresponding to an interval between the position of the main peak and the position of the secondary peak, and is outside a stable boundary of the main peak and a stable boundary of the secondary peak; the dark part proportion represents a signal pre-distribution proportion corresponding to an interval with minimum brightness, wherein the interval is outside a stable boundary of the main peak and a stable boundary of the secondary peak; the bright part proportion represents a signal pre-distribution proportion corresponding to an interval with the maximum brightness, wherein the interval is outside a stable boundary of the main peak and a stable boundary of the secondary peak;
a mapping subunit 520, configured to perform pixel value mapping on the infrared image to be processed respectively for each interval in the histogram according to the signal pre-allocation proportion corresponding to each interval, so as to obtain a global scene stretching image of the infrared image to be processed.
In one embodiment, the mapping subunit 520 is configured to:
respectively obtaining pixel stretching values corresponding to the pixel points in each interval by utilizing the signal pre-distribution proportion corresponding to each interval;
respectively accumulating the pixel stretching value of the interval and the signal pre-distribution proportion corresponding to each interval with the brightness smaller than the interval to obtain the pixel mapping value corresponding to the pixel point in the interval;
and obtaining a global scene stretching image of the infrared image to be processed according to the pixel mapping value.
Fig. 19 is a schematic structural diagram of a computing device 900 provided in an embodiment of the present application. The computing device 900 includes: a processor 910, a memory 920, and a communication interface 930.
It is to be appreciated that the communication interface 930 in the computing device 900 shown in fig. 19 may be used to communicate with other devices.
The processor 910 may be connected to the memory 920. The memory 920 may be used to store the program codes and data. Accordingly, the memory 920 may be a storage unit inside the processor 910, an external storage unit independent of the processor 910, or a component including a storage unit inside the processor 910 and an external storage unit independent of the processor 910.
Optionally, computing device 900 may also include a bus. The memory 920 and the communication interface 930 may be connected to the processor 910 through a bus. The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc.
It should be understood that, in the embodiment of the present application, the processor 910 may employ a Central Processing Unit (CPU). The processor may also be other general purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. Or the processor 910 may employ one or more integrated circuits for executing related programs to implement the technical solutions provided in the embodiments of the present application.
The memory 920 may include a read-only memory and a random access memory, and provides instructions and data to the processor 910. A portion of the processor 910 may also include non-volatile random access memory. For example, the processor 910 may also store information of the device type.
When the computing device 900 is running, the processor 910 executes the computer-executable instructions in the memory 920 to perform the operational steps of the above-described method.
It should be understood that the computing device 900 according to the embodiment of the present application may correspond to a corresponding main body for executing the method according to the embodiments of the present application, and the above and other operations and/or functions of each module in the computing device 900 are respectively for implementing corresponding flows of each method of the embodiment, and are not described herein again for brevity.
Claims (13)
1. An image processing method, comprising:
acquiring histogram information of an infrared image to be processed;
processing the histogram information to determine the position of a main peak and the position of a secondary peak;
determining an effective signal interval corresponding to a main peak according to the position of the main peak; determining an effective signal interval corresponding to the secondary peak according to the position of the secondary peak;
determining a stable boundary of the main peak according to the frame identifier of the infrared image to be processed and the effective signal interval corresponding to the main peak; determining a stable boundary of the secondary peak according to the frame identifier of the infrared image to be processed and the effective signal interval corresponding to the secondary peak;
stretching the to-be-processed infrared image based on the main peak position, the secondary peak position, the stable boundary of the main peak and the stable boundary of the secondary peak to obtain a global scene stretched image of the to-be-processed infrared image;
determining a stable boundary of the main peak according to the frame identifier of the infrared image to be processed and the effective signal interval corresponding to the main peak; determining a stationary boundary of the secondary peak according to the frame identifier of the infrared image to be processed and the effective signal interval corresponding to the secondary peak, including:
calculating a stationary boundary of a main peak and a stationary boundary of a secondary peak of a first frame in the infrared image to be processed according to a predetermined algorithm by using the initialized parameter offset, the initialized error reference, the initialized correction value and the initialized reference variable;
after calculating the stable boundary of the main peak and the stable boundary of the secondary peak of each frame in the infrared image to be processed, updating the correction value and the reference variable according to the preset algorithm;
calculating a stationary boundary of a main peak and a stationary boundary of a secondary peak of each frame in the infrared image to be processed according to the predetermined algorithm by using the initialized parameter offset, the initialized error reference, the updated correction value and the updated reference variable; wherein the updated correction value and the updated reference variable are updated according to the predetermined algorithm after calculating the stationary boundary of the main peak and the stationary boundary of the secondary peak of the previous frame.
2. The method of claim 1, wherein processing the histogram information to determine the main peak position and the secondary peak position comprises:
processing the histogram information to obtain each first peak position of the histogram information;
determining a full-width half-maximum interval for each first peak position;
and determining the main peak position and the secondary peak position according to the full-width half-maximum interval of each first peak position.
3. The method of claim 2, wherein processing the histogram information to obtain each first peak position of the histogram information comprises:
performing mean filtering on the histogram information;
performing interpolation on the mean-filtered information;
and obtaining each first peak position of the histogram information based on the interpolated information.
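Claim 3 decomposes the peak search into mean filtering, interpolation, and peak extraction, without fixing the filter window, the interpolation scheme, or the peak criterion. The sketch below assumes a moving-average filter, linear upsampling, and strict local maxima; these choices are illustrative, not the patented implementation.

```python
import numpy as np

def first_peak_positions(hist: np.ndarray, win: int = 5, upsample: int = 4) -> np.ndarray:
    """Smooth the histogram, upsample it, and return local-maximum positions.
    Positions are in the upsampled coordinate system; divide by `upsample`
    to map them back to original histogram bins."""
    # Mean filtering: simple moving average (window size is an assumption).
    kernel = np.ones(win) / win
    smooth = np.convolve(hist.astype(float), kernel, mode="same")
    # Interpolation: linear upsampling onto a finer grid (factor is an assumption).
    x = np.arange(smooth.size)
    xf = np.linspace(0, smooth.size - 1, smooth.size * upsample)
    fine = np.interp(xf, x, smooth)
    # A "first peak position" is taken here as a strict local maximum.
    interior = (fine[1:-1] > fine[:-2]) & (fine[1:-1] > fine[2:])
    return np.flatnonzero(interior) + 1
```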
4. The method of claim 2, wherein determining a full-width half-maximum interval for each first peak position comprises:
for each first peak position, calculating a half-height difference value of the histogram information, wherein the half-height difference value measures the degree of difference between the values of the histogram information and the half-peak value corresponding to the first peak position;
processing the half-height difference value to obtain each second peak position of the half-height difference value;
and determining a full-width half-maximum interval corresponding to each first peak position according to each second peak position.
5. The method of claim 4, wherein determining a full-width half-maximum interval corresponding to each first peak position according to each second peak position comprises:
if the value of the histogram information corresponding to the nth second peak position is greater than the value of the histogram information corresponding to the first peak position, and the value of the histogram information corresponding to the (n-1)th second peak position is less than the value of the histogram information corresponding to the first peak position, determining the interval between the nth second peak position and the (n-1)th second peak position as the full-width half-maximum interval corresponding to the first peak position.
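Claims 4 and 5 locate the full-width half-maximum interval of a candidate peak through a half-height difference value and its "second peak positions". The claims do not define the difference value explicitly; the sketch below assumes it is the negated distance of the histogram from the half-peak level, so that its local maxima fall at the half-height crossings, and assumes the crossings immediately to the left and right of the candidate peak delimit the interval. This is one plausible reading, not the patented formula.

```python
import numpy as np

def fwhm_interval(hist: np.ndarray, peak_pos: int) -> tuple[int, int]:
    """Return an interval (lo, hi) around peak_pos bounded by the half-height
    crossings of the histogram, under the assumed reading of claims 4-5."""
    half = hist[peak_pos] / 2.0
    # Half-height difference: large where the histogram is close to the half level.
    diff = -np.abs(hist.astype(float) - half)
    # "Second peak positions": local maxima of the half-height difference,
    # i.e. positions where the histogram crosses the half-peak level.
    interior = (diff[1:-1] >= diff[:-2]) & (diff[1:-1] >= diff[2:])
    second_peaks = np.flatnonzero(interior) + 1
    # Assumed interpretation: the crossings immediately left and right of the
    # candidate peak delimit the full-width half-maximum interval.
    left = second_peaks[second_peaks < peak_pos]
    right = second_peaks[second_peaks > peak_pos]
    lo = int(left[-1]) if left.size else 0
    hi = int(right[0]) if right.size else hist.size - 1
    return lo, hi
```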
6. The method of claim 2, wherein determining the main peak position and the secondary peak position from the full-width half-maximum interval of each first peak position comprises:
accumulating the values of the histogram information within the full-width half-maximum interval corresponding to each first peak position to obtain an accumulated sum corresponding to each first peak position;
sorting the accumulated sums corresponding to the first peak positions in descending order;
and determining the first peak positions corresponding to the first two values in the sorted result as the main peak position and the secondary peak position, in that order.
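Claim 6 ranks the candidate peaks by the histogram mass accumulated inside their full-width half-maximum intervals and takes the two largest as the main peak and the secondary peak. A minimal sketch, assuming at least two candidate peaks are supplied:

```python
import numpy as np

def pick_main_and_secondary(hist, peak_positions, fwhm_intervals):
    """Accumulate the histogram values inside each peak's full-width
    half-maximum interval, sort the sums in descending order, and return
    (main_peak_position, secondary_peak_position)."""
    peak_positions = np.asarray(peak_positions)
    sums = np.array([hist[lo:hi + 1].sum() for lo, hi in fwhm_intervals])
    order = np.argsort(sums)[::-1]            # indices of sums, largest first
    return int(peak_positions[order[0]]), int(peak_positions[order[1]])
```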
7. The method of claim 2,
determining an effective signal interval corresponding to a main peak according to the position of the main peak comprises: obtaining the effective signal interval corresponding to the main peak according to the full-width half-maximum interval corresponding to the main peak position and a preset signal range threshold;
and determining an effective signal interval corresponding to the secondary peak according to the position of the secondary peak comprises: obtaining the effective signal interval corresponding to the secondary peak according to the full-width half-maximum interval corresponding to the secondary peak position and the preset signal range threshold.
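Claim 7 derives the effective signal interval from the full-width half-maximum interval and a preset signal range threshold, without stating how the two are combined. One plausible reading, assumed in the sketch below, is to widen the interval by the threshold on each side and clamp it to the valid gray-level range:

```python
def effective_signal_interval(fwhm_interval, range_threshold, num_bins):
    """Widen a full-width half-maximum interval by a preset signal range
    threshold on each side and clamp it to the histogram range (assumed
    reading of claim 7)."""
    lo, hi = fwhm_interval
    return max(0, lo - range_threshold), min(num_bins - 1, hi + range_threshold)
```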
8. The method according to any one of claims 1 to 7, wherein the predetermined algorithm comprises:
calculating a bias coefficient according to the reference variable;
calculating a scaling coefficient according to the correction value and the parameter offset;
calculating a gain coefficient according to the scaling coefficient and the error reference;
calculating a stationary boundary according to the bias coefficient, the gain coefficient, the reference variable and the effective signal interval;
updating the correction value using the gain coefficient and the scaling coefficient;
updating the reference variable with the stationary boundary.
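The recursion in claim 8 (bias coefficient, scaling coefficient, gain coefficient, boundary, then updates of the correction value and the reference variable) is structured like a one-dimensional Kalman-style smoothing of the measured interval boundary across frames. The sketch below assumes that correspondence; the mapping of claim terms to filter quantities, the constants, and the exact formulas are guesses, since the claim does not disclose them.

```python
from dataclasses import dataclass

@dataclass
class BoundarySmoother:
    """Assumed Kalman-style reading of the claim-8 recursion."""
    parameter_offset: float = 1.0    # process-noise-like constant ("parameter offset")
    error_reference: float = 4.0     # measurement-noise-like constant ("error reference")
    correction_value: float = 1.0    # running uncertainty ("correction value")
    reference_variable: float = 0.0  # last stationary boundary ("reference variable")
    initialized: bool = False

    def update(self, measured_boundary: float) -> float:
        if not self.initialized:
            # First frame: seed the recursion directly from the measurement.
            self.reference_variable = measured_boundary
            self.initialized = True
            return measured_boundary
        bias = self.reference_variable                            # bias coefficient
        scaling = self.correction_value + self.parameter_offset   # scaling coefficient
        gain = scaling / (scaling + self.error_reference)         # gain coefficient
        boundary = bias + gain * (measured_boundary - bias)       # stationary boundary
        self.correction_value = (1.0 - gain) * scaling            # update correction value
        self.reference_variable = boundary                        # update reference variable
        return boundary
```

In practice one such smoother would typically be kept per boundary (for example the low and high ends of the main-peak and secondary-peak effective signal intervals) and updated once per frame, matching the frame-by-frame update order of claim 1.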
9. The method according to any one of claims 1 to 7, wherein stretching the to-be-processed infrared image based on the main peak position, the secondary peak position, the stationary boundary of the main peak, and the stationary boundary of the secondary peak to obtain a global scene stretched image of the to-be-processed infrared image comprises:
setting a signal pre-allocation proportion corresponding to each interval in the histogram according to the positional relationship between the main peak position and the secondary peak position in the histogram;
wherein the signal pre-allocation proportions comprise a dark part proportion, a main body proportion, a bright part proportion, a weak information coefficient and a secondary information coefficient; the main body proportion represents the signal pre-allocation proportion corresponding to the interval within the stationary boundary of the main peak; the secondary information coefficient represents the signal pre-allocation proportion corresponding to the interval within the stationary boundary of the secondary peak; the weak information coefficient represents the signal pre-allocation proportion corresponding to the interval that lies between the main peak position and the secondary peak position and outside the stationary boundary of the main peak and the stationary boundary of the secondary peak; the dark part proportion represents the signal pre-allocation proportion corresponding to the interval of minimum brightness outside the stationary boundary of the main peak and the stationary boundary of the secondary peak; and the bright part proportion represents the signal pre-allocation proportion corresponding to the interval of maximum brightness outside the stationary boundary of the main peak and the stationary boundary of the secondary peak;
and according to the signal pre-allocation proportion corresponding to each interval, performing pixel value mapping on the infrared image to be processed for each interval in the histogram to obtain a global scene stretched image of the infrared image to be processed.
10. The method according to claim 9, wherein performing pixel value mapping on the infrared image to be processed for each interval in the histogram according to the signal pre-allocation proportion corresponding to each interval to obtain a global scene stretched image of the infrared image to be processed comprises:
obtaining, by using the signal pre-allocation proportion corresponding to each interval, the pixel stretching values corresponding to the pixel points in that interval;
for each interval, accumulating the pixel stretching values of the interval and the signal pre-allocation proportions corresponding to the intervals whose brightness is smaller than that of the interval, to obtain the pixel mapping values corresponding to the pixel points in the interval;
and obtaining a global scene stretched image of the infrared image to be processed according to the pixel mapping values.
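Claims 9 and 10 describe a piecewise mapping: each histogram interval receives a signal pre-allocation proportion, a pixel is stretched within its own interval, and the proportions of all darker intervals are accumulated on top. The sketch below assumes the intervals are given as ascending gray-level edges derived from the stationary boundaries, that the proportions sum to 1, and that the output is 8-bit; none of these specifics are fixed by the claims.

```python
import numpy as np

def piecewise_stretch(frame, edges, proportions, out_max=255):
    """Piecewise-linear pixel value mapping (assumed reading of claims 9-10).
    `edges` are the ascending interval boundaries (dark | main body | weak
    info | secondary info | bright), one more entry than `proportions`;
    `proportions` are the per-interval pre-allocation proportions."""
    edges = np.asarray(edges, dtype=float)
    props = np.asarray(proportions, dtype=float)
    # Output level already used up below each interval: the accumulated
    # proportions of all darker intervals (second step of claim 10).
    starts = np.concatenate(([0.0], np.cumsum(props)[:-1]))
    x = frame.astype(float)
    # Index of the interval each pixel falls into.
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, props.size - 1)
    lo, hi = edges[idx], edges[idx + 1]
    span = np.maximum(hi - lo, 1e-12)          # guard against zero-width intervals
    # Pixel stretching value inside the interval (first step of claim 10),
    # plus the accumulated offset, scaled to the output range.
    frac = np.clip((x - lo) / span, 0.0, 1.0)
    mapped = (starts[idx] + props[idx] * frac) * out_max
    return mapped.astype(np.uint8)
```

As a purely illustrative call, piecewise_stretch(frame, [0, 1000, 5000, 7000, 11000, 16384], [0.05, 0.55, 0.05, 0.30, 0.05]) would give the main-peak interval a little over half of the 8-bit output range; the edge values and proportions here are assumptions, not values taken from the patent.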
11. An image processing apparatus characterized by comprising:
an acquisition unit configured to acquire histogram information of an infrared image to be processed;
a processing unit configured to process the histogram information and determine a main peak position and a secondary peak position;
a first determining unit configured to determine an effective signal interval corresponding to a main peak according to the position of the main peak, and determine an effective signal interval corresponding to the secondary peak according to the position of the secondary peak;
a second determining unit configured to determine a stationary boundary of the main peak according to the frame identifier of the infrared image to be processed and the effective signal interval corresponding to the main peak, and determine a stationary boundary of the secondary peak according to the frame identifier of the infrared image to be processed and the effective signal interval corresponding to the secondary peak;
a stretching unit configured to stretch the infrared image to be processed based on the main peak position, the secondary peak position, the stationary boundary of the main peak and the stationary boundary of the secondary peak to obtain a global scene stretched image of the infrared image to be processed;
wherein the second determination unit is configured to:
calculating a stationary boundary of a main peak and a stationary boundary of a secondary peak of a first frame in the infrared image to be processed according to a predetermined algorithm by using the initialized parameter offset, the initialized error reference, the initialized correction value and the initialized reference variable;
after calculating the stationary boundary of the main peak and the stationary boundary of the secondary peak of each frame in the infrared image to be processed, updating the correction value and the reference variable according to the predetermined algorithm;
calculating a stationary boundary of a main peak and a stationary boundary of a secondary peak of each frame in the infrared image to be processed according to the predetermined algorithm by using the initialized parameter offset, the initialized error reference, the updated correction value and the updated reference variable; wherein the updated correction value and the updated reference variable are updated according to the predetermined algorithm after calculating the stationary boundary of the main peak and the stationary boundary of the secondary peak of the previous frame.
12. A computing device, comprising:
a communication interface;
at least one processor coupled with the communication interface; and
at least one memory coupled to the processor and storing program instructions that, when executed by the at least one processor, cause the at least one processor to perform the method of any of claims 1-10.
13. A computer-readable storage medium having stored thereon program instructions, which, when executed by a computer, cause the computer to perform the method of any of claims 1-10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210511646.0A CN114627029B (en) | 2022-05-12 | 2022-05-12 | Image processing method and device, computing equipment and computer readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210511646.0A CN114627029B (en) | 2022-05-12 | 2022-05-12 | Image processing method and device, computing equipment and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114627029A (en) | 2022-06-14 |
CN114627029B (en) | 2022-08-02 |
Family
ID=81906045
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210511646.0A Active CN114627029B (en) | 2022-05-12 | 2022-05-12 | Image processing method and device, computing equipment and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114627029B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102129675A (en) * | 2011-02-24 | 2011-07-20 | 中国兵器工业系统总体部 | Nonlinear adaptive infrared image enhancing method |
CN103530847A (en) * | 2013-09-24 | 2014-01-22 | 电子科技大学 | Infrared image enhancing method |
US20200242809A1 (en) * | 2017-10-25 | 2020-07-30 | Roche Diabetes Care, Inc. | Methods and devices for performing an analytical measurement based on a color formation reaction |
Also Published As
Publication number | Publication date |
---|---|
CN114627029A (en) | 2022-06-14 |
Similar Documents
Publication | Title | Publication Date |
---|---|---|
AU2017204855B2 (en) | Logo presence detector based on blending characteristics | |
JP5908174B2 (en) | Image processing apparatus and image processing method | |
KR100809349B1 (en) | Apparatus and method for image brightness correction | |
US9578211B2 (en) | Image de-noising methods and apparatuses using the same | |
WO2017047494A1 (en) | Image-processing device | |
CN108198152B (en) | Image processing method and device, electronic equipment and computer readable storage medium | |
CN117455802B (en) | Noise reduction and enhancement method for image acquisition of intrinsic safety type miner lamp | |
US10602111B2 (en) | Auto white balance control algorithm based upon flicker frequency detection | |
CN111083388B (en) | Light supplement lamp control method and device, electronic equipment and storage medium | |
CN105163038A (en) | Linked light-supplementing method and device thereof | |
CN108063891B (en) | Image processing method, image processing device, computer-readable storage medium and computer equipment | |
CN102542576A (en) | Image processing device, image processing method and program | |
CN107872663B (en) | Image processing method and device, computer readable storage medium and computer equipment | |
CN113140197B (en) | Display screen adjusting method and device, electronic equipment and readable storage medium | |
JP2013041400A (en) | Image processing device, image processing method and program | |
CN113596422A (en) | Method for adjusting color correction matrix CCM and monitoring equipment | |
CN114627029B (en) | Image processing method and device, computing equipment and computer readable storage medium | |
KR101598701B1 (en) | Flicker detection method and flicker detection apparatus | |
CN113643651B (en) | Image enhancement method and device, computer equipment and storage medium | |
KR101516963B1 (en) | Automatic white balance adjustment device and method using effective area | |
JP5901685B2 (en) | Image display apparatus and control method thereof | |
CN114897695A (en) | Image processing method and device, computing equipment and computer readable storage medium | |
CN104992418B (en) | A kind of abnormal color bearing calibration suitable for thermal imagery video color | |
US8325279B2 (en) | Flicker suppression | |
US11631183B2 (en) | Method and system for motion segmentation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||