CN118570099B - Postoperative rehabilitation effect monitoring system based on image enhancement - Google Patents
Postoperative rehabilitation effect monitoring system based on image enhancement
- Publication number
- CN118570099B (application CN202411057296.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- postoperative
- historical
- wound
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T5/70—Image enhancement or restoration; Denoising; Smoothing
- G06T7/11—Image analysis; Segmentation; Region-based segmentation
- G06T7/90—Image analysis; Determination of colour characteristics
- G06V10/762—Image or video recognition using pattern recognition or machine learning; Clustering, e.g. of similar faces in social networks
- G06V10/774—Image or video recognition using pattern recognition or machine learning; Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
Abstract
The invention discloses a postoperative rehabilitation effect monitoring system based on image enhancement, which comprises an image acquisition module, an image enhancement module, an image segmentation module and a postoperative rehabilitation effect monitoring module. The invention belongs to the technical field of postoperative rehabilitation detection. The system performs secondary pixel screening by setting thresholds, improving white-point detection accuracy; local processing and improved color balance make the image colors more realistic and uniform; information entropy is introduced to improve image quality evaluation and avoid over- or under-processing of postoperative wound images; multi-layer feature extraction and dual clustering allow the wound area to be segmented accurately; edge perception and superpixel techniques improve segmentation robustness and can handle complex wound boundaries; and label fusion with the loss design gives the final segmentation result higher accuracy and consistency, thereby making postoperative rehabilitation effect monitoring efficient, accurate and robust.
Description
Technical Field
The invention relates to the technical field of postoperative rehabilitation detection, in particular to a postoperative rehabilitation effect monitoring system based on image enhancement.
Background
The postoperative rehabilitation effect monitoring system is a medical system for tracking and evaluating a patient's postoperative rehabilitation progress. The main purpose of such systems is to help medical professionals monitor the healing of a postoperative wound in real time through image data, sensor data and other relevant information, so as to optimize treatment and rehabilitation programs. However, typical postoperative rehabilitation effect monitoring systems suffer from a large amount of noise interference in the original images, an unnatural visual effect when postoperative wound images are enhanced, and an inability to perform color correction and noise filtering accurately; they also fail to process the image data in postoperative rehabilitation effect monitoring effectively, and their segmentation of postoperative wound images has poor accuracy and low robustness.
Disclosure of Invention
Aiming at the problems that the original images of typical postoperative rehabilitation effect monitoring systems contain a large amount of noise, that the visual effect after image enhancement of postoperative wound images is unnatural, and that color correction and noise filtering cannot be performed accurately, the scheme performs secondary pixel screening by setting thresholds, which helps focus on the wound area; white-point detection accuracy is improved by adding a control threshold, reducing artifacts caused by uneven brightness or other factors; color deviations are effectively corrected by local processing and improved color balance, making the image colors more realistic and uniform; and information entropy is introduced to improve image quality evaluation, objectively measuring the sharpness and information content of the image and avoiding over- or under-processing of the postoperative wound image. Aiming at the problems that typical postoperative rehabilitation effect monitoring systems cannot process the image data in postoperative rehabilitation monitoring effectively, and that segmentation of postoperative wound images has poor accuracy and low robustness, the scheme segments the wound area accurately and refines details through multi-layer feature extraction and dual clustering; edge perception and superpixel techniques improve segmentation robustness and handle complex wound boundaries; and label fusion with the loss design gives the final segmentation result higher accuracy and consistency, thereby making postoperative rehabilitation effect monitoring efficient, accurate and robust.
The technical scheme adopted by the invention is as follows: the invention provides an image enhancement-based postoperative rehabilitation effect monitoring system, which comprises an image acquisition module, an image enhancement module, an image segmentation module and a postoperative rehabilitation effect monitoring module, wherein the image acquisition module is used for acquiring an image of a patient;
the image acquisition module acquires historical postoperative wound images;
The image enhancement module performs secondary screening of pixels by setting a threshold value; dividing image blocks of the historical postoperative wound images; performing color correction based on the selected alternate white point; introducing information entropy to evaluate the quality of the color corrected historical postoperative wound image;
The image segmentation module introduces fuzzy clusters to segment images; dividing a wound area through multi-layer feature extraction and double clustering; performing wound boundary treatment based on edge perception and super-pixel technology; constructing model loss based on label fusion and loss design;
The postoperative rehabilitation effect monitoring module constructs a postoperative wound image classification model based on the historical postoperative wound images processed by the image segmentation module, so as to predict the postoperative wound images acquired in real time and realize postoperative rehabilitation effect monitoring.
Further, the image enhancement module is used for processing the historical postoperative wound images acquired by the image acquisition module; the method specifically comprises the following steps:
color space conversion: converting the historical post-operative wound image to a YCrCb color space;
Adding a preliminary constraint condition: the pixels used in screening the historical post-operative wound images are represented as follows:
$$Y > \tau,\qquad |C_b| < \alpha,\qquad |C_r| < \beta ;$$

where $Y$ is the luminance component in the YCrCb color space, $\tau$ is the luminance threshold, $C_b$ and $C_r$ are the blue and red color difference components respectively, and $\alpha$ and $\beta$ control the maximum absolute values of $C_b$ and $C_r$. The values of $\alpha$ and $\beta$ lie in the range 10-30, and the value of $\tau$ lies in the range 50-100. $\tau$ is a threshold on the luminance component, used to screen out pixels with higher luminance: the luminance component $Y$ typically ranges from 0 to 255, and values of 50-100 correspond to brighter regions of the image, suitable for detecting white or bright areas; the lower limit of 50 avoids misidentifying darker pixels as white points, while the upper limit of 100 ensures that the selected pixels are bright enough for white-point regions to be identified effectively. $\alpha$ is the control parameter for the blue color difference component, defining the maximum absolute blue deviation allowed; the range 10-30 tolerates slight blue deviations while preventing excessive color differences from being wrongly included, so that the selected pixel colors do not deviate too far from a standard white point or bright color. $\beta$ is the control parameter for the red color difference component, defining the maximum absolute red deviation allowed; the range 10-30 keeps the selected pixel colors close to neutral white or light tones and excludes excessive color differences, so that white points and bright regions are detected more accurately;
adding a control threshold: adding a control threshold to improve the accuracy of white point detection is represented as follows:
$$Y - \max\bigl(|C_b|,\,|C_r|\bigr) > \gamma ;$$

where $\gamma$ is the control threshold, with values in the range 10-70. $\gamma$ bounds the difference between the luminance and the color difference components to ensure that detected white-point regions match the expected color characteristics: white or bright regions generally have stable color characteristics, i.e. a high luminance value together with small color difference components. Setting $\gamma$ therefore excludes pixels whose color difference components weigh too heavily against their luminance. The range 10-70 gives enough flexibility to adjust to the image characteristics in practice: a lower value, e.g. 10-30, suits scenes with little variation in the color difference components, while a higher value, e.g. 40-70, suits scenes with large variation;
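As an illustration, the two screening stages above can be sketched in a few lines of Python with OpenCV and NumPy. This is a minimal sketch, not the patent's implementation: the function name, the default parameter values (chosen from the stated ranges), the recentring of OpenCV's offset color difference channels, and the max-based form of the control-threshold inequality are assumptions for the example.

```python
import cv2
import numpy as np

def screen_white_points(img_bgr, tau=80, alpha=20, beta=20, gamma=40):
    """Two-stage screening of candidate white-point pixels in YCrCb space."""
    ycrcb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    y = ycrcb[:, :, 0]
    # OpenCV stores Cr and Cb with a +128 offset; recentre to signed values.
    cr = ycrcb[:, :, 1] - 128.0
    cb = ycrcb[:, :, 2] - 128.0
    # Preliminary constraint: bright pixels with small color differences.
    prelim = (y > tau) & (np.abs(cb) < alpha) & (np.abs(cr) < beta)
    # Control threshold: luminance must clearly dominate the color differences.
    control = (y - np.maximum(np.abs(cb), np.abs(cr))) > gamma
    return prelim & control  # boolean mask of candidate white points
```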
Dividing image blocks: dividing the whole historical post-operative wound image into small blocks based on the image pyramid, calculating the mean and mean square error of the color components for each image block, and selecting a spare white point based on the following criteria:
$$\bigl|C_b(i,j) - \bigl(\mu_b + \sigma_b\,\operatorname{sign}(\mu_b)\bigr)\bigr| < 1.5\,\sigma_b,\qquad \bigl|C_r(i,j) - \bigl(\mu_r + \sigma_r\,\operatorname{sign}(\mu_r)\bigr)\bigr| < 1.5\,\sigma_r ;$$

where $C_b(i,j)$ and $C_r(i,j)$ are the blue and red color difference components of pixel $(i,j)$ in the image block, $\mu_b$ and $\mu_r$ are the means of the blue and red color difference components, $\sigma_b$ and $\sigma_r$ are the standard deviations of the blue and red color difference components respectively, and $\operatorname{sign}(\cdot)$ is the sign function;
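A block-wise selection of spare white points under the criterion reconstructed above might look as follows; the block size and the 1.5 multiplier are illustrative assumptions of this sketch.

```python
import numpy as np

def select_spare_white_points(ycrcb, block=64):
    """Mark spare white points block by block from a YCrCb image (uint8)."""
    h, w = ycrcb.shape[:2]
    mask = np.zeros((h, w), dtype=bool)
    for y0 in range(0, h, block):
        for x0 in range(0, w, block):
            cr = ycrcb[y0:y0 + block, x0:x0 + block, 1].astype(np.float32) - 128.0
            cb = ycrcb[y0:y0 + block, x0:x0 + block, 2].astype(np.float32) - 128.0
            mb, mr = cb.mean(), cr.mean()        # per-block means
            sb, sr = cb.std(), cr.std()          # per-block standard deviations
            ok = (np.abs(cb - (mb + sb * np.sign(mb))) < 1.5 * sb) & \
                 (np.abs(cr - (mr + sr * np.sign(mr))) < 1.5 * sr)
            mask[y0:y0 + block, x0:x0 + block] = ok
    return mask
```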
Color correction: for all selected spare white points, calculate the mean of the chromaticity values, where the chromaticity values are defined as the color components of each pixel in the color space, represented by the chromaticity components, i.e. the blue and red color difference components; based on the mean chromaticity of the spare white points, adjust each color channel of the historical postoperative wound image using OpenCV's cv2.convertScaleAbs(·) function;
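The correction step can be realized as a per-channel gain, for example as below. The patent only states that the channels are adjusted with cv2.convertScaleAbs based on the spare-white-point chromaticity mean; deriving the gains so that the mean of the masked white points maps to a neutral grey is an assumption of this sketch.

```python
import cv2
import numpy as np

def correct_colors(img_bgr, white_mask):
    """Von-Kries-style gain per channel, anchored on the spare white points."""
    means = [float(img_bgr[:, :, c][white_mask].mean()) for c in range(3)]
    target = float(np.mean(means))  # map the white-point mean to grey
    channels = [cv2.convertScaleAbs(img_bgr[:, :, c], alpha=target / m)
                for c, m in enumerate(means)]
    return cv2.merge(channels)
```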
image evaluation: introducing information entropy to evaluate the quality of the color corrected historic postoperative wound image, wherein the information entropy is expressed as follows:
$$H(d,\beta_1,\beta_2) = -\sum_{i}\sum_{j} p\bigl(i,j \mid d,\beta_1,\beta_2\bigr)\,\log_2 p\bigl(i,j \mid d,\beta_1,\beta_2\bigr) ;$$

where $H(\cdot)$ is the image quality assessment function, $d$ is the distance between pixels, $\beta_1$ and $\beta_2$ are the azimuth and altitude angles of the polar coordinates, $p(\cdot)$ is the associated probability distribution function, and $i$ and $j$ denote the gray values of the two pixels;
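One way to realize such an entropy in code is over a gray-level co-occurrence distribution; the sketch below fixes a single horizontal offset d, whereas the patent's formula additionally parameterizes the offset by the two polar angles β1 and β2.

```python
import numpy as np

def cooccurrence_entropy(gray, d=1):
    """Entropy of the gray-level co-occurrence distribution at offset d."""
    pairs = np.stack([gray[:, :-d].ravel(), gray[:, d:].ravel()], axis=1)
    hist, _ = np.histogramdd(pairs, bins=(256, 256),
                             range=((0, 256), (0, 256)))
    p = hist / hist.sum()        # joint probability p(i, j)
    nz = p[p > 0]                # avoid log(0)
    return float(-(nz * np.log2(nz)).sum())
```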
Image judgment: an evaluation threshold is preset; if the quality evaluation value of the color-corrected historical postoperative wound image exceeds the evaluation threshold, image enhancement is complete; otherwise the parameters are adjusted and image enhancement is applied to the historical postoperative wound image again.
Further, the image segmentation module processes the historical postoperative wound images processed by the image enhancement module; the images are marked with recovery evaluation grades in advance, and these grades serve as image labels; the image segmentation module specifically comprises the following contents:
A multi-layer feature extraction network unit: the multi-layer feature extraction network unit includes a convolution layer, applying convolution operations to extract low-level features of the historical postoperative wound images; a pooling layer, downsampling the features using max pooling; a deconvolution layer, restoring the resolution of the feature map; skip connections, combining the low-level and high-level features to generate the feature map of the complete historical postoperative wound image; and an output layer, outputting the extracted feature maps of the historical postoperative wound images;
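A minimal PyTorch sketch of such an encoder-decoder with one skip connection is given below; the layer widths and depth are illustrative assumptions, not values fixed by the patent.

```python
import torch
import torch.nn as nn

class FeatureNet(nn.Module):
    """Conv -> max-pool -> deconv, with a skip concatenation before the output."""
    def __init__(self, in_ch=3, feat_ch=32):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)
        self.deep = nn.Sequential(nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(feat_ch, feat_ch, 2, stride=2)
        self.out = nn.Conv2d(2 * feat_ch, feat_ch, 1)  # fuse skip + upsampled path

    def forward(self, x):
        low = self.conv(x)                          # low-level features
        high = self.up(self.deep(self.pool(low)))   # high-level, resolution restored
        return self.out(torch.cat([low, high], dim=1))
```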
Differentiable dual clustering unit: the differentiable dual clustering unit comprises a skip-connection clustering sub-network and a deep clustering sub-network. The skip-connection clustering sub-network includes: feature map processing, in which a convolution operation is applied to the feature map of the input historical postoperative wound image and the convolved feature map is concatenated with the feature map before processing to obtain the feature map $F$; response value calculation, in which a softmax function is applied to $F$ to obtain the response value of each pixel in each channel,

$$r_{n,p} = \frac{\exp(F_{n,p})}{\sum_{q}\exp(F_{n,q})} ;$$

and initial label generation based on the channel holding the maximum confidence,

$$c_n = \arg\max_{p}\, r_{n,p},$$

where $r_{n,p}$ is the response value of the $n$-th pixel in channel $p$ of the feature map $F$. The deep clustering sub-network includes: construction of an association matrix for feature weighting,

$$w_{nm}^{(i_1)} = \frac{\exp\bigl(-\|x_n - v_m^{(i_1-1)}\|^2\bigr)}{\sum_{j=1}^{M}\exp\bigl(-\|x_n - v_j^{(i_1-1)}\|^2\bigr)},$$

where $w_{nm}^{(i_1)}$ is the weight of the $n$-th feature to the $m$-th cluster in the $i_1$-th iteration, $x_n$ is the $n$-th feature vector, $v_j^{(i_1-1)}$ and $v_m^{(i_1-1)}$ are the values of the $j$-th and $m$-th cluster centers in the $(i_1-1)$-th iteration, and $M$ is the number of cluster centers; weighting the features to update the cluster centers,

$$v_j^{(i_1)} = \frac{\sum_{m} w_{mj}^{(i_1)}\, x_m}{\sum_{m} w_{mj}^{(i_1)}},$$

where $v_j^{(i_1)}$ is the $j$-th cluster center in the $i_1$-th iteration and $w_{mj}^{(i_1)}$ is the weight of the $m$-th feature to the $j$-th cluster in the $i_1$-th iteration; computing the feature map from the associations and cluster centers,

$$F' = W V,$$

where $W$ is the association matrix and $V$ is the cluster center matrix; processing the feature map with a softmax function to obtain the response value of each pixel in each channel,

$$r'_{n,p} = \frac{\exp(F'_{n,p})}{\sum_{q}\exp(F'_{n,q})} ;$$

and generating an initial label from the channel holding the maximum confidence,

$$c'_n = \arg\max_{p}\, r'_{n,p},$$

where $r'_{n,p}$ is the response value of the $n$-th pixel in channel $p$ of the feature map $F'$, $N$ is the total number of iterations, and $w_{nq}^{(N)}$ is the weight of the $n$-th feature to the $q$-th cluster in the $N$-th iteration;
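The deep-clustering iteration can be sketched in NumPy as below, assuming the softmax-of-negative-distance weighting used in the reconstruction above; `features` is an (N_pixels, D) array of per-pixel feature vectors, and the cluster count and iteration budget are illustrative.

```python
import numpy as np

def deep_cluster(features, n_clusters=4, n_iter=10, seed=0):
    """Soft clustering with weighted centre updates and argmax labels."""
    rng = np.random.default_rng(seed)
    v = features[rng.choice(len(features), n_clusters, replace=False)]
    for _ in range(n_iter):
        d2 = ((features[:, None, :] - v[None, :, :]) ** 2).sum(-1)  # (N, M)
        w = np.exp(-d2)
        w /= w.sum(axis=1, keepdims=True)               # association matrix W
        v = (w.T @ features) / w.sum(axis=0)[:, None]   # weighted centre update
    labels = w.argmax(axis=1)   # channel holding the maximum confidence
    return labels, w, v
```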
Edge-aware superpixel unit: an edge-aware (EA) segmentation algorithm is used to generate T superpixels $S = \{S_1, S_2, \dots, S_T\}$ from the historical postoperative wound image, where $S$ is the set of superpixels and $S_1$, $S_2$ and $S_T$ are the 1st, 2nd and $T$-th superpixel regions respectively; boundary information is processed by combining edge gradient information with the superpixel segmentation result;
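The patent does not spell out the EA segmentation algorithm, so the sketch below substitutes scikit-image's SLIC superpixels and a Sobel gradient map as stand-ins for the superpixel and edge-gradient components.

```python
from skimage.segmentation import slic
from skimage.filters import sobel
from skimage.color import rgb2gray

def edge_aware_superpixels(img_rgb, n_segments=200):
    """Superpixel map plus per-pixel edge gradients for boundary processing."""
    segments = slic(img_rgb, n_segments=n_segments, compactness=10)
    edges = sobel(rgb2gray(img_rgb))  # gradient magnitude per pixel
    # The mean edge strength along each superpixel border could then decide
    # whether neighbouring regions are split or merged.
    return segments, edges
```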
Label fusion unit: the label distribution within each superpixel is collected as $C_h = \{(c_1, u_1), \dots, (c_k, u_k)\}$, where $c_1$ and $c_k$ are the 1st and $k$-th labels and $u_1$ and $u_k$ are the numbers of times those labels appear in superpixel $h$; sorting the labels in descending order of count yields the descending label distribution set; the intersection $C = C^{(1)} \cap C^{(2)}$ is then processed, where $C^{(1)}$ and $C^{(2)}$ are the label sets of the two superpixel segmentation results; the label distribution is recalculated as pairs $(c_{i_2}, u_{i_2})$, where $c_{i_2}$ is the recalculated label and $u_{i_2} = \min\bigl(u_{i_2}^{(1)}, u_{i_2}^{(2)}\bigr)$ the recalculated count, $u_{i_2}^{(1)}$ and $u_{i_2}^{(2)}$ being the numbers of occurrences of label $c_{i_2}$ in the first and second superpixel regions; based on this minimum, the final label is selected as the common label with the largest recalculated count and assigned to all pixels in the historical postoperative wound image;
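Within a single superpixel, the fusion step might be coded as follows; scoring common labels by the minimum of their two counts is one plausible reading of the reconstructed text, and the function and argument names are assumptions.

```python
from collections import Counter

def fuse_labels(labels_a, labels_b, pixel_ids):
    """Fuse two per-pixel labelings over the pixels of one superpixel."""
    ca = Counter(labels_a[p] for p in pixel_ids)   # distribution in result 1
    cb = Counter(labels_b[p] for p in pixel_ids)   # distribution in result 2
    common = set(ca) & set(cb)                     # intersection of label sets
    if not common:                                 # fall back to dominant label
        return ca.most_common(1)[0][0]
    return max(common, key=lambda c: min(ca[c], cb[c]))
```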
Loss design unit: minimizing the gap between the features of the current pixel and its neighboring pixels is expressed as

$$L_s = \frac{1}{WH}\sum_{i=1}^{W-1}\sum_{j=1}^{H-1}\Bigl(\bigl\|f_{i+1,j} - f_{i,j}\bigr\|^2 + \bigl\|f_{i,j+1} - f_{i,j}\bigr\|^2\Bigr),$$

where $L_s$ is the loss function measuring the gap between the current pixel's and the neighboring pixels' features, $W$ is the width of the image, $H$ is the height of the image, and $f_{i+1,j}$, $f_{i,j}$ and $f_{i,j+1}$ are the feature vectors of the pixels at positions $(i+1, j)$, $(i, j)$ and $(i, j+1)$ respectively; minimizing the gap between each pixel label and the superpixel label is expressed as

$$L_c = \frac{1}{N}\sum_{n=1}^{N}\bigl(1 - \delta(c_n, t)\bigr),$$

where $L_c$ is the loss function measuring the gap between the pixel labels and the superpixel label, $N$ is the total number of pixels, $\delta(\cdot,\cdot)$ is the Kronecker delta function, and $t$ is a built-in parameter (the superpixel label); the final loss $L$ is obtained as

$$L = L_s + \lambda \sum_{p=1}^{P} L_c^{(p)},$$

where $\lambda$ is the loss weight, $p$ is the index of the currently processed superpixel, and $P$ is the total number of superpixels; $\lambda$ ranges over 0.1-10 and balances the smoothness of the image against the consistency between pixel and superpixel labels; the range provides flexibility, the balance of the loss function being adjusted according to model performance, with a higher $\lambda$ chosen when stronger label consistency is required;
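The two loss terms can be sketched in PyTorch as below; the 1/(W·H) and 1/N normalizations follow the reconstructed formulas and are otherwise assumptions of this example.

```python
import torch

def smoothness_loss(feat):
    """L_s: penalise feature gaps between vertically/horizontally adjacent pixels.
    `feat` has shape (C, H, W)."""
    dh = (feat[:, 1:, :] - feat[:, :-1, :]).pow(2).sum()
    dw = (feat[:, :, 1:] - feat[:, :, :-1]).pow(2).sum()
    return (dh + dw) / (feat.shape[1] * feat.shape[2])

def consistency_loss(pixel_labels, superpixel_label):
    """L_c: fraction of pixels whose label disagrees with the superpixel label."""
    return (pixel_labels != superpixel_label).float().mean()

# Final loss, with lam drawn from the stated 0.1-10 range:
# L = smoothness_loss(feat) + lam * sum(consistency_loss(lbl[p], sp_lbl[p])
#                                       for p in range(P))
```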
Segmentation determination unit: a maximum number of iterations is preset; if the loss of the image segmentation module on the historical postoperative wound images converges, construction of the image segmentation module is complete; if the maximum number of iterations is reached, the module parameters are adjusted and the module is rebuilt; otherwise iteration continues.
Further, the postoperative rehabilitation effect monitoring module divides the historical postoperative wound images processed by the image segmentation module into a training set and a testing set, and uses a support vector machine to perform classification modeling on the historical postoperative wound images, obtaining a postoperative wound image classification model; postoperative wound images are then acquired in real time, processed by the image enhancement module and the image segmentation module, and input into the postoperative wound image classification model; if the rehabilitation evaluation grade output by the wound image classification model is too low, the rehabilitation effect is poor and an early warning is issued, realizing postoperative rehabilitation effect monitoring.
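A scikit-learn sketch of this classification stage is given below; the feature representation fed to the SVM (per-image vectors derived from the enhanced, segmented images) and the train/test split ratio are assumptions of the example.

```python
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def train_wound_classifier(features, grades):
    """Fit an SVM mapping wound-image features to recovery evaluation grades."""
    X_tr, X_te, y_tr, y_te = train_test_split(features, grades, test_size=0.2)
    clf = SVC(kernel="rbf").fit(X_tr, y_tr)
    print("test accuracy:", clf.score(X_te, y_te))
    return clf

# Monitoring: if clf.predict(new_feature)[0] falls below a preset grade,
# raise an early warning that the rehabilitation effect is poor.
```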
By adopting the scheme, the beneficial effects obtained by the invention are as follows:
(1) Aiming at the problems that the original images of typical postoperative rehabilitation effect monitoring systems contain a large amount of noise, that the visual effect after image enhancement of postoperative wound images is unnatural, and that color correction and noise filtering cannot be performed accurately, the scheme performs secondary pixel screening by setting thresholds, which helps focus on the wound area; adding a control threshold improves white-point detection accuracy and reduces artifacts caused by uneven brightness or other factors; local processing and improved color balance effectively correct color deviations, making the image colors more realistic and uniform; and information entropy is introduced to improve image quality evaluation, objectively measuring the sharpness and information content of the image and avoiding over- or under-processing of the postoperative wound image.
(2) Aiming at the problems that typical postoperative rehabilitation effect monitoring systems cannot process the image data in postoperative rehabilitation monitoring effectively, and that segmentation of postoperative wound images has poor accuracy and low robustness, the scheme segments the wound area accurately and refines details through multi-layer feature extraction and dual clustering; edge perception and superpixel techniques improve segmentation robustness and handle complex wound boundaries; and label fusion with the loss design gives the final segmentation result higher accuracy and consistency, thereby making postoperative rehabilitation effect monitoring efficient, accurate and robust.
Drawings
FIG. 1 is a schematic diagram of an image enhancement-based postoperative rehabilitation effect monitoring system provided by the invention;
FIG. 2 is a flow chart of an image enhancement module;
Fig. 3 is a flow chart of the image segmentation module.
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the invention; all other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the description of the present invention, it should be understood that the terms "upper," "lower," "front," "rear," "left," "right," "top," "bottom," "inner," "outer," and the like indicate orientation or positional relationships based on those shown in the drawings, merely to facilitate describing the present invention and simplify the description, and do not indicate or imply that the system or element being referred to must have a particular orientation, be constructed and operate in a particular orientation, and thus should not be construed as limiting the present invention.
Referring to fig. 1, the postoperative rehabilitation effect monitoring system based on image enhancement provided by the invention comprises an image acquisition module, an image enhancement module, an image segmentation module and a postoperative rehabilitation effect monitoring module;
the image acquisition module acquires historical postoperative wound images; and transmitting the data to an image enhancement module;
The image enhancement module receives the data sent by the image acquisition module and performs secondary screening on pixels by setting a threshold value; dividing image blocks of the historical postoperative wound images; performing color correction based on the selected alternate white point; introducing information entropy to evaluate the quality of the color corrected historical postoperative wound image; and sending the data to an image segmentation module;
The image segmentation module receives the data sent by the image enhancement module, and introduces fuzzy clusters to segment the image; dividing a wound area through multi-layer feature extraction and double clustering; performing wound boundary treatment based on edge perception and super-pixel technology; constructing model loss based on label fusion and loss design; the data is sent to a postoperative rehabilitation effect monitoring module;
the postoperative rehabilitation effect monitoring module receives data sent by the image segmentation module, constructs a postoperative wound image classification model based on historical postoperative wound images processed by the image segmentation module, and further predicts the postoperative wound images acquired in real time to realize postoperative rehabilitation effect monitoring.
Referring to fig. 1 and 2, the second embodiment is based on the above embodiment, and the image enhancement module processes the historical post-operation wound image acquired by the image acquisition module; the method specifically comprises the following steps:
Color space conversion; converting the historical post-operative wound image to a YCrCb color space;
Adding a preliminary constraint condition; the method is used for screening pixels in the historical postoperative wound image; the expression is as follows:
$$Y > \tau,\qquad |C_b| < \alpha,\qquad |C_r| < \beta ;$$

where $Y$ is the luminance component in the YCrCb color space, $\tau$ is the luminance threshold, $C_b$ and $C_r$ are the blue and red color difference components respectively, and $\alpha$ and $\beta$ control the maximum absolute values of $C_b$ and $C_r$. The values of $\alpha$ and $\beta$ lie in the range 10-30, and the value of $\tau$ lies in the range 50-100. $\tau$ is a threshold on the luminance component, used to screen out pixels with higher luminance: the luminance component $Y$ typically ranges from 0 to 255, and values of 50-100 correspond to brighter regions of the image, suitable for detecting white or bright areas; the lower limit of 50 avoids misidentifying darker pixels as white points, while the upper limit of 100 ensures that the selected pixels are bright enough for white-point regions to be identified effectively. $\alpha$ is the control parameter for the blue color difference component, defining the maximum absolute blue deviation allowed; the range 10-30 tolerates slight blue deviations while preventing excessive color differences from being wrongly included, so that the selected pixel colors do not deviate too far from a standard white point or bright color. $\beta$ is the control parameter for the red color difference component, defining the maximum absolute red deviation allowed; the range 10-30 keeps the selected pixel colors close to neutral white or light tones and excludes excessive color differences, so that white points and bright regions are detected more accurately;
adding a control threshold: adding a control threshold to improve the accuracy of white point detection is represented as follows:
$$Y - \max\bigl(|C_b|,\,|C_r|\bigr) > \gamma ;$$

where $\gamma$ is the control threshold, with values in the range 10-70. $\gamma$ bounds the difference between the luminance and the color difference components to ensure that detected white-point regions match the expected color characteristics: white or bright regions generally have stable color characteristics, i.e. a high luminance value together with small color difference components. Setting $\gamma$ therefore excludes pixels whose color difference components weigh too heavily against their luminance. The range 10-70 gives enough flexibility to adjust to the image characteristics in practice: a lower value, e.g. 10-30, suits scenes with little variation in the color difference components, while a higher value, e.g. 40-70, suits scenes with large variation;
Dividing image blocks; dividing the whole historical post-operation wound image into small blocks based on an image pyramid; for each image block, calculating the mean and mean square error of the color components; the alternate white point is selected based on the following criteria:
$$\bigl|C_b(i,j) - \bigl(\mu_b + \sigma_b\,\operatorname{sign}(\mu_b)\bigr)\bigr| < 1.5\,\sigma_b,\qquad \bigl|C_r(i,j) - \bigl(\mu_r + \sigma_r\,\operatorname{sign}(\mu_r)\bigr)\bigr| < 1.5\,\sigma_r ;$$

where $C_b(i,j)$ and $C_r(i,j)$ are the blue and red color difference components of pixel $(i,j)$ in the image block; $\mu_b$ and $\mu_r$ are the means of the blue and red color difference components; $\sigma_b$ and $\sigma_r$ are the standard deviations of the blue and red color difference components respectively; and $\operatorname{sign}(\cdot)$ is the sign function;
Color correction: for all selected spare white points, calculate the mean of the chromaticity values, where the chromaticity values are defined as the color components of each pixel in the color space, represented by the chromaticity components, i.e. the blue and red color difference components; based on the mean chromaticity of the spare white points, adjust each color channel of the historical postoperative wound image using OpenCV's cv2.convertScaleAbs(·) function;
Evaluating an image; introducing information entropy to evaluate the quality of the color corrected historical postoperative wound image; the expression is as follows:
$$H(d,\beta_1,\beta_2) = -\sum_{i}\sum_{j} p\bigl(i,j \mid d,\beta_1,\beta_2\bigr)\,\log_2 p\bigl(i,j \mid d,\beta_1,\beta_2\bigr) ;$$

wherein $H(\cdot)$ is the image quality assessment function; $d$ is the distance between pixels; $\beta_1$ and $\beta_2$ are the azimuth and altitude angles of the polar coordinates; $p(\cdot)$ is the associated probability distribution function; and $i$ and $j$ denote the gray values of the two pixels;
Image judgment: an evaluation threshold is preset, and if the quality evaluation value of the color-corrected historical postoperative wound image exceeds the evaluation threshold, image enhancement is complete; otherwise the parameters are adjusted and image enhancement is applied to the historical postoperative wound image again.
By executing the above operations, and aiming at the problems that the original images in typical postoperative rehabilitation effect monitoring systems contain a large amount of noise, that the visual effect after image enhancement of postoperative wound images is unnatural, and that color correction and noise filtering cannot be performed accurately, the scheme performs secondary pixel screening by setting thresholds, which helps focus on the wound area; white-point detection accuracy is improved by adding a control threshold, reducing artifacts caused by uneven brightness or other factors; color deviations are effectively corrected by local processing and improved color balance, making the image colors more realistic and uniform; and information entropy is introduced to improve image quality evaluation, objectively measuring the sharpness and information content of the image and avoiding over- or under-processing of the postoperative wound image.
Referring to fig. 1 and 3, in the third embodiment, which is based on the above embodiments, the image segmentation module processes the historical postoperative wound images processed by the image enhancement module; the images are marked with recovery evaluation grades in advance, and these grades serve as image labels; the image segmentation module specifically comprises the following contents:
A multi-layer feature extraction network unit: the multi-layer feature extraction network unit includes a convolution layer, applying convolution operations to extract low-level features of the historical postoperative wound images; a pooling layer, downsampling the features using max pooling; a deconvolution layer, restoring the resolution of the feature map; skip connections, combining the low-level and high-level features to generate the feature map of the complete historical postoperative wound image; and an output layer, outputting the extracted feature maps of the historical postoperative wound images;
A differentiable dual clustering unit: the differentiable dual clustering unit comprises a skip-connection clustering sub-network and a deep clustering sub-network. The skip-connection clustering sub-network includes: feature map processing, in which a convolution operation is applied to the feature map of the input historical postoperative wound image and the convolved feature map is concatenated with the feature map before processing to obtain the feature map $F$; response value calculation, in which a softmax function is applied to $F$ to obtain the response value of each pixel in each channel,

$$r_{n,p} = \frac{\exp(F_{n,p})}{\sum_{q}\exp(F_{n,q})} ;$$

and initial label generation based on the channel holding the maximum confidence,

$$c_n = \arg\max_{p}\, r_{n,p},$$

where $r_{n,p}$ is the response value of the $n$-th pixel in channel $p$ of the feature map $F$. The deep clustering sub-network includes: construction of an association matrix for feature weighting,

$$w_{nm}^{(i_1)} = \frac{\exp\bigl(-\|x_n - v_m^{(i_1-1)}\|^2\bigr)}{\sum_{j=1}^{M}\exp\bigl(-\|x_n - v_j^{(i_1-1)}\|^2\bigr)},$$

where $w_{nm}^{(i_1)}$ is the weight of the $n$-th feature to the $m$-th cluster in the $i_1$-th iteration; $x_n$ is the $n$-th feature vector; $v_j^{(i_1-1)}$ and $v_m^{(i_1-1)}$ are the values of the $j$-th and $m$-th cluster centers in the $(i_1-1)$-th iteration; and $M$ is the number of cluster centers; weighting the features to update the cluster centers,

$$v_j^{(i_1)} = \frac{\sum_{m} w_{mj}^{(i_1)}\, x_m}{\sum_{m} w_{mj}^{(i_1)}},$$

where $v_j^{(i_1)}$ is the $j$-th cluster center in the $i_1$-th iteration and $w_{mj}^{(i_1)}$ is the weight of the $m$-th feature to the $j$-th cluster in the $i_1$-th iteration; computing the feature map from the associations and cluster centers,

$$F' = W V,$$

where $W$ is the association matrix and $V$ is the cluster center matrix; processing the feature map with a softmax function to obtain the response value of each pixel in each channel,

$$r'_{n,p} = \frac{\exp(F'_{n,p})}{\sum_{q}\exp(F'_{n,q})} ;$$

and generating an initial label from the channel holding the maximum confidence,

$$c'_n = \arg\max_{p}\, r'_{n,p},$$

where $r'_{n,p}$ is the response value of the $n$-th pixel in channel $p$ of the feature map $F'$; $N$ is the total number of iterations; and $w_{nq}^{(N)}$ is the weight of the $n$-th feature to the $q$-th cluster in the $N$-th iteration;
An edge-aware superpixel unit: an edge-aware (EA) segmentation algorithm is used to generate T superpixels $S = \{S_1, S_2, \dots, S_T\}$ from the historical postoperative wound image, where $S$ is the set of superpixels and $S_1$, $S_2$ and $S_T$ are the 1st, 2nd and $T$-th superpixel regions respectively; boundary information is processed by combining edge gradient information with the superpixel segmentation result;
A label fusion unit: the label distribution within each superpixel is collected as $C_h = \{(c_1, u_1), \dots, (c_k, u_k)\}$, where $c_1$ and $c_k$ are the 1st and $k$-th labels and $u_1$ and $u_k$ are the numbers of times those labels appear in superpixel $h$; sorting the labels in descending order of count yields the descending label distribution set; the intersection $C = C^{(1)} \cap C^{(2)}$ is then processed, where $C^{(1)}$ and $C^{(2)}$ are the label sets of the two superpixel segmentation results; the label distribution is recalculated as pairs $(c_{i_2}, u_{i_2})$, where $c_{i_2}$ is the recalculated label and $u_{i_2} = \min\bigl(u_{i_2}^{(1)}, u_{i_2}^{(2)}\bigr)$ the recalculated count, $u_{i_2}^{(1)}$ and $u_{i_2}^{(2)}$ being the numbers of occurrences of label $c_{i_2}$ in the first and second superpixel regions; based on this minimum, the final label is selected as the common label with the largest recalculated count and assigned to all pixels in the historical postoperative wound image;
A loss design unit: minimizing the gap between the features of the current pixel and its neighboring pixels is expressed as

$$L_s = \frac{1}{WH}\sum_{i=1}^{W-1}\sum_{j=1}^{H-1}\Bigl(\bigl\|f_{i+1,j} - f_{i,j}\bigr\|^2 + \bigl\|f_{i,j+1} - f_{i,j}\bigr\|^2\Bigr),$$

where $L_s$ is the loss function measuring the gap between the current pixel's and the neighboring pixels' features; $W$ is the width of the image; $H$ is the height of the image; and $f_{i+1,j}$, $f_{i,j}$ and $f_{i,j+1}$ are the feature vectors of the pixels at positions $(i+1, j)$, $(i, j)$ and $(i, j+1)$ respectively; minimizing the gap between each pixel label and the superpixel label is expressed as

$$L_c = \frac{1}{N}\sum_{n=1}^{N}\bigl(1 - \delta(c_n, t)\bigr),$$

where $L_c$ is the loss function measuring the gap between the pixel labels and the superpixel label; $N$ is the total number of pixels; $\delta(\cdot,\cdot)$ is the Kronecker delta function; and $t$ is a built-in parameter (the superpixel label); the final loss $L$ is obtained as

$$L = L_s + \lambda \sum_{p=1}^{P} L_c^{(p)},$$

where $\lambda$ is the loss weight; $p$ is the index of the currently processed superpixel; and $P$ is the total number of superpixels; $\lambda$ ranges over 0.1-10 and balances the smoothness of the image against the consistency between pixel and superpixel labels; the range provides flexibility, the balance of the loss function being adjusted according to model performance, with a higher $\lambda$ chosen when stronger label consistency is required;
A segmentation determination unit: a maximum number of iterations is preset; if the loss of the image segmentation module on the historical postoperative wound images converges, construction of the image segmentation module is complete; if the maximum number of iterations is reached, the module parameters are adjusted and the module is rebuilt; otherwise iteration continues.
By executing the above operations, and aiming at the problems that typical postoperative rehabilitation effect monitoring systems cannot process the image data in postoperative rehabilitation monitoring effectively, and that segmentation of postoperative wound images has poor accuracy and low robustness, the scheme segments the wound area accurately and refines details through multi-layer feature extraction and dual clustering; edge perception and superpixel techniques improve segmentation robustness and handle complex wound boundaries; and label fusion with the loss design gives the final segmentation result higher accuracy and consistency, thereby making postoperative rehabilitation effect monitoring efficient, accurate and robust.
In the fourth embodiment, referring to fig. 1 and based on the above embodiments, the postoperative rehabilitation effect monitoring module divides the historical postoperative wound images processed by the image segmentation module into a training set and a testing set, and uses a support vector machine to perform classification modeling on the historical postoperative wound images, obtaining a postoperative wound image classification model; postoperative wound images are acquired in real time, processed by the image enhancement module and the image segmentation module, and input into the postoperative wound image classification model; if the rehabilitation evaluation grade output by the wound image classification model is too low, the rehabilitation effect is poor and an early warning is issued, realizing postoperative rehabilitation effect monitoring.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made hereto without departing from the spirit and principles of the present invention.
The invention and its embodiments have been described above without limitation, and the actual construction is not limited to the embodiments shown in the drawings. In summary, if a person of ordinary skill in the art, informed by this disclosure, devises structures and embodiments similar to this technical solution without inventive effort, they shall fall within the protection scope of the invention.
Claims (3)
1. Postoperative rehabilitation effect monitoring system based on image enhancement, its characterized in that: the system comprises an image acquisition module, an image enhancement module, an image segmentation module and a postoperative rehabilitation effect monitoring module;
the image acquisition module acquires historical postoperative wound images;
The image enhancement module performs secondary screening of pixels by setting a threshold value; dividing image blocks of the historical postoperative wound images; performing color correction based on the selected alternate white point; introducing information entropy to evaluate the quality of the color corrected historical postoperative wound image;
The image segmentation module introduces fuzzy clusters to segment images; dividing a wound area through multi-layer feature extraction and double clustering; performing wound boundary treatment based on edge perception and super-pixel technology; constructing model loss based on label fusion and loss design;
The postoperative rehabilitation effect monitoring module builds a postoperative wound image classification model based on the historical postoperative wound images processed by the image segmentation module, so as to predict the postoperative wound images acquired in real time and realize postoperative rehabilitation effect monitoring;
The image enhancement module includes an addition control threshold: adding a control threshold to improve the accuracy of white point detection is represented as follows:
$$Y - \max\bigl(|C_b|,\,|C_r|\bigr) > \gamma ;$$

wherein $\gamma$ is the control threshold; $Y$ is the luminance component in the YCrCb color space; $C_b$ and $C_r$ are the blue and red color difference components respectively; and the value of $\gamma$ lies in the range 10-70;
the image segmentation module processes the historical postoperative wound images processed by the image enhancement module; the images are marked with recovery evaluation grades in advance, and these grades serve as image labels; the module specifically comprises the following contents:
A multi-layer feature extraction network unit: the multi-layer feature extraction network unit includes a convolution layer, applying convolution operations to extract low-level features of the historical postoperative wound images; a pooling layer, downsampling the features using max pooling; a deconvolution layer, restoring the resolution of the feature map; skip connections, combining the low-level and high-level features to generate the feature map of the complete historical postoperative wound image; and an output layer, outputting the extracted feature maps of the historical postoperative wound images;
a differentiable dual clustering unit;
an edge-aware superpixel unit: an edge-aware (EA) segmentation algorithm is used to generate T superpixels $S = \{S_1, S_2, \dots, S_T\}$ from the historical postoperative wound image, where $S$ is the set of superpixels and $S_1$, $S_2$ and $S_T$ are the 1st, 2nd and $T$-th superpixel regions respectively; boundary information is processed by combining edge gradient information with the superpixel segmentation result;
a label fusion unit: the label distribution within each superpixel is collected as $C_h = \{(c_1, u_1), \dots, (c_k, u_k)\}$, where $c_1$ and $c_k$ are the 1st and $k$-th labels and $u_1$ and $u_k$ are the numbers of times those labels appear in superpixel $h$; sorting the labels in descending order of count yields the descending label distribution set; the intersection $C = C^{(1)} \cap C^{(2)}$ is processed, where $C^{(1)}$ and $C^{(2)}$ are the label sets of the two superpixel segmentation results; the label distribution is recalculated as pairs $(c_{i_2}, u_{i_2})$, where $c_{i_2}$ is the recalculated label and $u_{i_2} = \min\bigl(u_{i_2}^{(1)}, u_{i_2}^{(2)}\bigr)$ the recalculated count, $u_{i_2}^{(1)}$ and $u_{i_2}^{(2)}$ being the numbers of occurrences of label $c_{i_2}$ in the first and second superpixel regions; based on this minimum, the final label is selected as the common label with the largest recalculated count and assigned to all pixels in the historical postoperative wound image;
a loss design unit: minimizing the gap between the features of the current pixel and its neighboring pixels is expressed as

$$L_s = \frac{1}{WH}\sum_{i=1}^{W-1}\sum_{j=1}^{H-1}\Bigl(\bigl\|f_{i+1,j} - f_{i,j}\bigr\|^2 + \bigl\|f_{i,j+1} - f_{i,j}\bigr\|^2\Bigr),$$

where $L_s$ is the loss function measuring the gap between the current pixel's and the neighboring pixels' features, $W$ is the width of the image, $H$ is the height of the image, and $f_{i+1,j}$, $f_{i,j}$ and $f_{i,j+1}$ are the feature vectors of the pixels at positions $(i+1, j)$, $(i, j)$ and $(i, j+1)$ respectively; minimizing the gap between each pixel label and the superpixel label is expressed as

$$L_c = \frac{1}{N}\sum_{n=1}^{N}\bigl(1 - \delta(c_n, t)\bigr),$$

where $L_c$ is the loss function measuring the gap between the pixel labels and the superpixel label, $N$ is the total number of pixels, $\delta(\cdot,\cdot)$ is the Kronecker delta function, and $t$ is a built-in parameter; the final loss $L$ is obtained as

$$L = L_s + \lambda \sum_{p=1}^{P} L_c^{(p)},$$

where $\lambda$ is the loss weight, $p$ is the index of the currently processed superpixel, $P$ is the total number of superpixels, and $\lambda$ ranges over 0.1-10;
a segmentation determination unit: a maximum number of iterations is preset; if the loss of the image segmentation module on the historical postoperative wound images converges, construction of the image segmentation module is complete; if the maximum number of iterations is reached, the module parameters are adjusted and the module is rebuilt; otherwise iteration continues;
the differentiable dual clustering unit comprises a skip-connection clustering sub-network and a deep clustering sub-network; the skip-connection clustering sub-network includes: feature map processing, in which a convolution operation is applied to the feature map of the input historical postoperative wound image and the convolved feature map is concatenated with the feature map before processing to obtain the feature map $F$; response value calculation, in which a softmax function is applied to $F$ to obtain the response value of each pixel in each channel,

$$r_{n,p} = \frac{\exp(F_{n,p})}{\sum_{q}\exp(F_{n,q})} ;$$

and initial label generation based on the channel holding the maximum confidence,

$$c_n = \arg\max_{p}\, r_{n,p},$$

where $r_{n,p}$ is the response value of the $n$-th pixel in channel $p$ of the feature map $F$; the deep clustering sub-network includes: construction of an association matrix for feature weighting,

$$w_{nm}^{(i_1)} = \frac{\exp\bigl(-\|x_n - v_m^{(i_1-1)}\|^2\bigr)}{\sum_{j=1}^{M}\exp\bigl(-\|x_n - v_j^{(i_1-1)}\|^2\bigr)},$$

where $w_{nm}^{(i_1)}$ is the weight of the $n$-th feature to the $m$-th cluster in the $i_1$-th iteration, $x_n$ is the $n$-th feature vector, $v_j^{(i_1-1)}$ and $v_m^{(i_1-1)}$ are the values of the $j$-th and $m$-th cluster centers in the $(i_1-1)$-th iteration, and $M$ is the number of cluster centers; weighting the features to update the cluster centers,

$$v_j^{(i_1)} = \frac{\sum_{m} w_{mj}^{(i_1)}\, x_m}{\sum_{m} w_{mj}^{(i_1)}},$$

where $v_j^{(i_1)}$ is the $j$-th cluster center in the $i_1$-th iteration and $w_{mj}^{(i_1)}$ is the weight of the $m$-th feature to the $j$-th cluster in the $i_1$-th iteration; computing the feature map from the associations and cluster centers,

$$F' = W V,$$

where $W$ is the association matrix and $V$ is the cluster center matrix; processing the feature map with a softmax function to obtain the response value of each pixel in each channel,

$$r'_{n,p} = \frac{\exp(F'_{n,p})}{\sum_{q}\exp(F'_{n,q})} ;$$

and generating an initial label from the channel holding the maximum confidence,

$$c'_n = \arg\max_{p}\, r'_{n,p},$$

where $r'_{n,p}$ is the response value of the $n$-th pixel in channel $p$ of the feature map $F'$, $N$ is the total number of iterations, and $w_{nq}^{(N)}$ is the weight of the $n$-th feature to the $q$-th cluster in the $N$-th iteration.
2. The image-enhancement-based postoperative rehabilitation effect monitoring system according to claim 1, wherein: the image enhancement module is used for processing the historical postoperative wound images acquired by the image acquisition module; the method specifically comprises the following steps:
color space conversion: converting the historical post-operative wound image to a YCrCb color space;
Adding a preliminary constraint condition: the pixels used in screening the historical post-operative wound images are represented as follows:
$$Y > \tau,\qquad |C_b| < \alpha,\qquad |C_r| < \beta ;$$

where $Y$ is the luminance component in the YCrCb color space; $\tau$ is the luminance threshold; $C_b$ and $C_r$ are the blue and red color difference components respectively; $\alpha$ and $\beta$ control the maximum absolute values of $C_b$ and $C_r$; the values of $\alpha$ and $\beta$ lie in the range 10-30; and the value of $\tau$ lies in the range 50-100;
adding a control threshold;
Dividing image blocks: dividing the whole historical post-operative wound image into small blocks based on the image pyramid, calculating the mean and mean square error of the color components for each image block, and selecting a spare white point based on the following criteria:
$$\bigl|C_b(i,j) - \bigl(\mu_b + \sigma_b\,\operatorname{sign}(\mu_b)\bigr)\bigr| < 1.5\,\sigma_b,\qquad \bigl|C_r(i,j) - \bigl(\mu_r + \sigma_r\,\operatorname{sign}(\mu_r)\bigr)\bigr| < 1.5\,\sigma_r ;$$

where $C_b(i,j)$ and $C_r(i,j)$ are the blue and red color difference components of pixel $(i,j)$ in the image block; $\mu_b$ and $\mu_r$ are the means of the blue and red color difference components; $\sigma_b$ and $\sigma_r$ are the standard deviations of the blue and red color difference components respectively; and $\operatorname{sign}(\cdot)$ is the sign function;
color correction: for all selected spare white points, calculate the mean of the chromaticity values, where the chromaticity values are defined as the color components of each pixel in the color space, represented by the chromaticity components, i.e. the blue and red color difference components; based on the mean chromaticity of the spare white points, adjust each color channel of the historical postoperative wound image using OpenCV's cv2.convertScaleAbs(·) function;
image evaluation: introducing information entropy to evaluate the quality of the color corrected historic postoperative wound image, wherein the information entropy is expressed as follows:
$$H(d,\beta_1,\beta_2) = -\sum_{i}\sum_{j} p\bigl(i,j \mid d,\beta_1,\beta_2\bigr)\,\log_2 p\bigl(i,j \mid d,\beta_1,\beta_2\bigr) ;$$

wherein $H(\cdot)$ is the image quality assessment function; $d$ is the distance between pixels; $\beta_1$ and $\beta_2$ are the azimuth and altitude angles of the polar coordinates; $p(\cdot)$ is the associated probability distribution function; and $i$ and $j$ denote the gray values of the two pixels;
Image judgment: and presetting an evaluation threshold, if the quality evaluation value of the historical postoperative wound image after the color correction is higher than the evaluation threshold, completing image enhancement, otherwise, adjusting parameters to carry out image enhancement on the historical postoperative wound image again.
3. The image-enhancement-based postoperative rehabilitation effect monitoring system according to claim 1, wherein: the postoperative rehabilitation effect monitoring module divides the historical postoperative wound images processed by the image segmentation module into a training set and a testing set, and uses a support vector machine to perform classification modeling on the historical postoperative wound images, obtaining a postoperative wound image classification model; postoperative wound images are acquired in real time, processed by the image enhancement module and the image segmentation module, and input into the postoperative wound image classification model; if the rehabilitation evaluation grade output by the wound image classification model is too low, the rehabilitation effect is poor and an early warning is issued, realizing postoperative rehabilitation effect monitoring.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202411057296.0A CN118570099B (en) | 2024-08-02 | 2024-08-02 | Postoperative rehabilitation effect monitoring system based on image enhancement |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202411057296.0A CN118570099B (en) | 2024-08-02 | 2024-08-02 | Postoperative rehabilitation effect monitoring system based on image enhancement |
Publications (2)
Publication Number | Publication Date |
---|---|
CN118570099A CN118570099A (en) | 2024-08-30 |
CN118570099B true CN118570099B (en) | 2024-10-11 |
Family
ID=92478656
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202411057296.0A Active CN118570099B (en) | 2024-08-02 | 2024-08-02 | Postoperative rehabilitation effect monitoring system based on image enhancement |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118570099B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN119151934B (en) * | 2024-11-19 | 2025-01-28 | 山东衡昊信息技术有限公司 | Extracardiac operation monitoring system and method based on image processing |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20220012628A (en) * | 2020-07-23 | 2022-02-04 | 강원대학교산학협력단 | Deep Learning based Gastric Classification System using Data Augmentation and Image Segmentation |
CN117611824A (en) * | 2023-12-07 | 2024-02-27 | 福建恒智信息技术有限公司 | Digital retina image segmentation method based on improved UNET |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109274948B (en) * | 2017-07-17 | 2021-03-26 | 深圳市道通智能航空技术有限公司 | Image color correction method, device, storage medium and computer equipment |
KR20210102039A (en) * | 2020-02-11 | 2021-08-19 | 삼성전자주식회사 | Electronic device and control method thereof |
CN112001391A (en) * | 2020-05-11 | 2020-11-27 | 江苏鲲博智行科技有限公司 | A method of image feature fusion for image semantic segmentation |
CN111915629B (en) * | 2020-07-06 | 2023-11-21 | 天津大学 | Superpixel segmentation method based on boundary detection |
CN112288011B (en) * | 2020-10-30 | 2022-05-13 | 闽江学院 | Image matching method based on self-attention deep neural network |
CN113592894B (en) * | 2021-08-29 | 2024-02-02 | 浙江工业大学 | Image segmentation method based on boundary box and co-occurrence feature prediction |
US11847811B1 (en) * | 2022-07-26 | 2023-12-19 | Nanjing University Of Posts And Telecommunications | Image segmentation method combined with superpixel and multi-scale hierarchical feature recognition |
CN115170805A (en) * | 2022-07-26 | 2022-10-11 | 南京邮电大学 | Image segmentation method combining super-pixel and multi-scale hierarchical feature recognition |
CN116128766A (en) * | 2023-03-10 | 2023-05-16 | 昆明理工大学 | Improved Retinex-Net-based infrared image enhancement method for power equipment |
CN116612041A (en) * | 2023-05-31 | 2023-08-18 | 山东大学 | Low-illumination image enhancement method and system based on superpixel analysis |
CN117522817A (en) * | 2023-11-10 | 2024-02-06 | 华北理工大学 | A medical image processing method and system based on artificial intelligence algorithm |
CN117557579A (en) * | 2023-11-23 | 2024-02-13 | 电子科技大学长三角研究院(湖州) | Method and system for assisting non-supervision super-pixel segmentation by using cavity pyramid collaborative attention mechanism |
CN118154984B (en) * | 2024-04-09 | 2024-10-29 | 山东财经大学 | Method and system for generating non-supervision neighborhood classification superpixels by fusing guided filtering |
- 2024-08-02: CN application CN202411057296.0A granted as patent CN118570099B (Active)
Also Published As
Publication number | Publication date |
---|---|
CN118570099A (en) | 2024-08-30 |
Legal Events

Date | Code | Title
---|---|---
 | PB01 | Publication
 | SE01 | Entry into force of request for substantive examination
 | GR01 | Patent grant