Disclosure of Invention
Therefore, the present invention provides an image processing method, an image processing device, an electronic device, and a computer-readable medium, which solve the prior-art problem of poor image quality caused by artifacts in a monitoring image.
The first aspect of the present invention provides an image processing method, wherein the method includes:
acquiring a first image and a second image generated when an ultrasonic probe is positioned at different positions, wherein the generation time interval of the first image and the second image is smaller than a preset time threshold;
determining an initial superposition area of the first image and the second image according to the preset mark of the first image and the preset mark of the second image;
and determining a target image according to the non-artifact area of the first image, the non-artifact area of the second image and the initial superposition area.
In some embodiments, the step of determining the initial superposition area of the first image and the second image according to the preset mark of the first image and the preset mark of the second image includes:
identifying a preset mark of the first image and a preset mark of the second image;
moving the first image or the second image so that the preset mark of the first image overlaps the preset mark of the second image;
and determining the portion formed by superimposing the first image and the second image as the initial superposition area.
In some embodiments, before the step of determining the portion of the first image superimposed with the second image as the initial superimposed area, the method further comprises:
preprocessing the first image and the second image so that the portions of the first image and the second image that can be superimposed have the same brightness and contrast.
In some embodiments, the non-artifact area of the first image and the non-artifact area of the second image each comprise an independent non-artifact area and a non-artifact area corresponding to the initial superposition area;
the step of determining a target image according to the non-artifact area of the first image, the non-artifact area of the second image and the initial superposition area comprises:
performing replacement processing on the initial superposition area according to the non-artifact area corresponding to the initial superposition area to obtain a final superposition area;
and splicing the independent non-artifact area of the first image, the independent non-artifact area of the second image and the final superposition area to obtain the target image.
In some embodiments, the step of performing the replacement processing on the initial superposition area according to the non-artifact area corresponding to the initial superposition area includes:
replacing a part of the initial superposition area corresponding to the artifact area in the first image with a non-artifact area of a corresponding position range in the second image;
replacing a part of the initial superposition area corresponding to the artifact area in the second image with a non-artifact area of a corresponding position range in the first image;
and determining a target area from the non-artifact area corresponding to the initial superposition area according to the pixel values of the first image and the pixel values of the second image, and replacing the non-artifact area in the initial superposition area with the target area.
In some embodiments, the first image is generated when the ultrasonic probe is at a position closer to a target monitoring position than when the second image is generated, and the step of determining a target area from the non-artifact area corresponding to the initial superposition area according to the pixel values of the first image and the pixel values of the second image comprises:
in a case where the difference between the pixel value of the first image and the pixel value of the second image is smaller than a preset pixel threshold, determining the target area according to the pixel information of the first image, or determining the target area according to the average of the pixel information of the first image and the pixel information of the second image.
In some embodiments, the step of acquiring the first and second images generated when the ultrasound probe is located at different positions comprises:
controlling, according to a vibration signal, the ultrasonic probe to vibrate in a direction perpendicular to a target position at a preset vibration frequency and to emit detection ultrasonic waves;
generating a plurality of images according to ultrasonic echoes received by the ultrasonic probe;
and determining the first image and the second image from the plurality of images according to the vibration signal.
In some embodiments, the step of controlling, according to the vibration signal, the ultrasonic probe to vibrate in a direction perpendicular to the target position at the preset vibration frequency and to emit the detection ultrasonic waves includes:
sending the vibration signal to a vibration mechanism, so that the vibration mechanism pushes the ultrasonic probe to vibrate in a direction perpendicular to the target position at the preset vibration frequency and to emit the detection ultrasonic waves.
In some embodiments, the vibration mechanism is a cam mechanism, the cycle angle of the cam mechanism at the short diameter and the cycle angle of the cam mechanism at the long diameter are both within the interval [90°, 140°], and the cycle angle of the cam mechanism between the short diameter and the long diameter is within the interval [80°, 180°].
A second aspect of the present invention provides an image processing apparatus, wherein the image processing apparatus includes:
the acquisition module is used for acquiring a first image and a second image which are generated when the ultrasonic probe is positioned at different positions and the generation time interval is smaller than a preset time threshold;
the first processing module is used for determining an initial superposition area of the first image and the second image according to the preset mark of the first image and the preset mark of the second image;
and the second processing module is used for determining a target image according to the non-artifact area of the first image, the non-artifact area of the second image and the initial superposition area.
A third aspect of the present invention provides an electronic apparatus, comprising:
one or more processors;
a storage device having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the image processing method described above; and
one or more I/O interfaces connected between the one or more processors and the storage device and configured to enable information exchange between the processors and the storage device.
A fourth aspect of the present invention provides a computer-readable medium having stored thereon a computer program which, when executed by a processor, implements the image processing method described above.
The invention has the following advantages:
With the image processing method provided by the embodiments of the present invention, a first image and a second image that are generated when the ultrasonic probe is located at different positions, with a generation time interval smaller than a preset time threshold, are acquired; an initial superposition area of the first image and the second image is determined according to the preset mark of the first image and the preset mark of the second image; and a target image is determined according to the non-artifact area of the first image, the non-artifact area of the second image and the initial superposition area. The clear parts of the two images, namely the non-artifact areas, are comprehensively utilized, and an artifact-free target image is obtained through superposition processing. The ultrasonic probe neither needs to be lifted over a long distance nor pressed tightly against the skin, so a high-quality monitoring image can be obtained safely and efficiently. Moreover, because the generation time interval of the first image and the second image is smaller than the preset time threshold, the state of the human body hardly changes between the two images, so the obtained target image truly and accurately reflects the condition of the body's organs and tissues, which helps to improve the treatment effect.
Detailed Description
The following describes specific embodiments of the present invention in detail with reference to the drawings. It should be understood that the detailed description and specific examples, while indicating and illustrating the invention, are not intended to limit the invention.
As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
When the terms "comprises," "comprising," and/or "including" are used in this specification, they specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Embodiments of the invention may be described with reference to plan and/or cross-sectional illustrations that are schematic illustrations of idealized embodiments of the invention. Accordingly, the example illustrations may be modified in accordance with manufacturing techniques and/or tolerances.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the present invention and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
In the course of minimally invasive treatments such as focused ultrasound surgery (FUS), B-mode ultrasound is often used as a real-time image monitoring means. However, the ultrasonic probe is usually mounted on a mechanical device, and media such as water cannot be used as a couplant, so the ultrasonic probe cannot make good contact with the skin and other media exist between the probe and the skin. This causes multiple reflections of the ultrasound, which produce artifacts in the images: one or more reflected images of the surface of the FUS transducer appear in the image as one or more arcs that cover and float over the real monitored object, occluding part of it. This degrades the quality of the monitoring image, whose anatomical expressiveness is lower than that of MRI or CT, and even lower than that of diagnostic ultrasound.
When the ultrasonic probe is in full contact with the skin, the artifacts are markedly reduced or even disappear entirely. At that point, however, the ultrasonic probe blocks the high-intensity focused ultrasound of the FUS and presses against the skin, which attenuates the FUS energy, distorts the focal spot, and may even scald the skin and damage the ultrasonic probe. A safe and effective means of solving the artifact problem has therefore been proposed: manually raising and lowering the ultrasonic probe continuously, so that the body tissue and the target area can be seen clearly when FUS irradiation is not in progress, while the safety of the skin and the ultrasonic probe is ensured during FUS irradiation.
This method can alleviate the artifact problem to a certain extent, but because the ultrasonic probe must be continuously moved away from and back toward the skin, the operating burden on the doctor is high. In some cases the ultrasonic probe must be pressed tightly against the skin for the doctor to see the monitored object, causing skin abrasion; if the probe is additionally moved horizontally while pressed against the skin in the direction perpendicular to the target position, the risk of abrasion is even higher, and the mechanical mechanism is easily damaged by high-frequency, long-distance movement. It has therefore been proposed to control the continuous lifting of the ultrasonic probe automatically, which reduces the doctor's workload; however, moving the ultrasonic probe at high frequency over a long distance remains inefficient, and the image cannot be kept in a stable, clear state, which still adversely affects the treatment.
In view of this, the inventors carefully studied the problem and found that when the ultrasonic probe moves up and down toward or away from the skin, the body tissue moves the same distance in the image: for example, for every 1 cm that the distance between the ultrasonic probe and the skin decreases, the position of the body tissue in the image moves 1 cm closer to the bottom of the image (i.e., toward the ultrasonic probe position). The position of the artifact in the image, however, does not follow this rule. The artifact is produced by reflection, and its moving speed in the image is 2 times (for a single reflection) or 4 times (for a double reflection) the moving speed of the body tissue. Fig. 1 is a schematic view of the monitoring images when the vibrating ultrasonic probe is at the high position and at the low position. As shown in fig. 1a, image A on the left is generated when the vibrating ultrasonic probe is at the high position, and image B on the right is generated when it is at the low position; the broken lines in image B map the artifact area and the skin of image A onto image B at the same positions. Compared with when the ultrasonic probe is at the low position, the ultrasonic probe at the high position is closer to the skin, so the skin in image A is closer to the bottom of the image than the skin in image B. Thus, when the ultrasonic probe moves quickly and its distance from the skin changes, the artifact occludes different body tissues in the image.
This phenomenon can be fully exploited. Based on the characteristic that, when the ultrasonic probe moves quickly, the moving speed of the body tissue in the image is smaller than the moving speed of the artifact, two images generated when the ultrasonic probe is located at different positions are acquired, and the clear parts of the two images, namely the non-artifact areas, are comprehensively utilized and superimposed to obtain an artifact-free target image. The ultrasonic probe neither needs to be lifted over a long distance nor pressed tightly against the skin, so the artifacts in the image can be eliminated safely and efficiently.
Accordingly, in a first aspect, an embodiment of the present invention provides an image processing method, as shown in fig. 2, where the method may include the following steps:
In step S1, a first image and a second image generated when the ultrasound probe is located at different positions are acquired, wherein a generation time interval of the first image and the second image is smaller than a preset time threshold.
In step S2, an initial superposition area of the first image and the second image is determined according to the preset mark of the first image and the preset mark of the second image.
In step S3, a target image is determined from the non-artifact region of the first image, the non-artifact region of the second image, and the initial overlap region.
Here, the ultrasonic probe being located at different positions means that the ultrasonic probe is at different heights from the target detection position, and the preset time threshold may be a short duration, for example, 0.5 seconds, 1 second, or 1.5 seconds. The preset mark may comprise an object, such as a certain body organ or tissue, whose moving speed in the image is smaller than the moving speed of the artifact when the ultrasonic probe moves quickly. A non-artifact area is an area of the image in which no artifact is present.
As can be seen from steps S1 to S3 above, with the image processing method provided by the embodiment of the present invention, the first image and the second image, generated when the ultrasonic probe is located at different positions with a generation time interval smaller than the preset time threshold, are used; the initial superposition area of the first image and the second image is determined according to the preset mark of the first image and the preset mark of the second image; and the target image is determined according to the non-artifact area of the first image, the non-artifact area of the second image and the initial superposition area. The clear parts of the two images, namely the non-artifact areas, are comprehensively utilized, and an artifact-free target image is obtained through superposition processing. The ultrasonic probe neither needs to be lifted over a long distance nor pressed tightly against the skin, so a high-quality monitoring image can be obtained safely and efficiently. Moreover, because the generation time interval of the first image and the second image is smaller than the preset time threshold, the state of the human body hardly changes between the two images, so the obtained target image truly and accurately reflects the condition of the body's organs and tissues, which helps to improve the treatment effect.
An existing method of eliminating image artifacts is to pre-store a clear image with few artifacts, captured when the ultrasonic probe is close to the skin. When a real-time image is subsequently received, the pre-stored clear image is superimposed on the current real-time image according to position information; that is, a software algorithm fills the corresponding part of the pre-stored clear image into the current real-time image, so that the resulting image is clear and artifact-free. However, because the clear image is pre-stored, the positions of organs and tissues may have changed slightly due to physiological motion and other causes, so the pre-stored clear image cannot accurately reflect the current condition of the body's organs and tissues. In contrast, in the method of the embodiment of the present invention, the two images used to eliminate the artifacts are acquired in real time with a generation time interval smaller than the preset time threshold, so they truly and accurately reflect the condition of the body's organs and tissues, thereby improving the treatment effect.
Specifically, using the preset mark of the first image and the preset mark of the second image as references, the first image or the second image may be moved so that the two preset marks overlap, thereby obtaining the portion where the first image and the second image are superimposed. Accordingly, in some embodiments, as shown in fig. 3, the step of determining the initial superposition area of the first image and the second image according to the preset mark of the first image and the preset mark of the second image (i.e., step S2) may further include the following steps:
in step S21, the preset mark of the first image and the preset mark of the second image are identified.
In step S22, the first image or the second image is moved such that the preset mark of the first image overlaps with the preset mark of the second image.
In step S23, a portion where the first image and the second image are superimposed is determined as an initial superimposition area.
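The alignment of steps S21 to S23 can be sketched in Python under simplifying assumptions: the images are grayscale arrays of equal width, the preset mark is reduced to a single row index per image (the mark-detection step itself is assumed to happen upstream and is hypothetical here), and the probe motion is purely vertical, so aligning the marks amounts to a vertical shift.

```python
import numpy as np

def align_by_marker(img_a, img_b, marker_row_a, marker_row_b):
    """Shift img_b vertically so its preset mark (e.g. the skin line)
    coincides with the mark in img_a, then report which rows of img_a
    the shifted img_b covers (the initial superposition area).

    marker_row_a / marker_row_b are hypothetical outputs of an
    upstream mark-detection step."""
    shift = marker_row_a - marker_row_b      # rows to move img_b down
    h = img_a.shape[0]
    top = max(0, shift)                      # first overlapping row of img_a
    bottom = min(h, shift + img_b.shape[0])  # one past the last overlapping row
    shifted = np.zeros_like(img_a)
    src_top = max(0, -shift)
    shifted[top:bottom] = img_b[src_top:src_top + (bottom - top)]
    return shifted, (top, bottom)
```

The returned row range marks the initial superposition area in the first image's coordinates; rows outside it belong to the independent (non-superimposable) portions of the two images.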
To further improve the image quality, the first image and the second image may be preprocessed so that the portions of the first image and the second image that can be superimposed have the same brightness and contrast. Accordingly, in some embodiments, before the step of determining the portion where the first image and the second image are superimposed as the initial superposition area (i.e., step S23), the method may further include a step of preprocessing the first image and the second image so that the portions of the first image and the second image that can be superimposed have the same brightness and contrast.
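One simple way to realize this preprocessing is to equalize the first- and second-order statistics of the superimposable portions, matching mean (brightness) and standard deviation (contrast). The text does not prescribe a particular normalization, so this is an illustrative choice:

```python
import numpy as np

def match_brightness_contrast(overlap_a, overlap_b):
    """Rescale overlap_b so its mean (brightness) and standard
    deviation (contrast) match those of overlap_a. An illustrative
    normalization; the patent text does not specify the method."""
    mu_a, sd_a = overlap_a.mean(), overlap_a.std()
    mu_b, sd_b = overlap_b.mean(), overlap_b.std()
    if sd_b == 0:                       # flat image: only shift brightness
        return overlap_b + (mu_a - mu_b)
    return (overlap_b - mu_b) * (sd_a / sd_b) + mu_a
```

Only the superimposable portions should be passed in, since the independent portions of the two images depict different depths and need not share statistics.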
The initial superposition area is formed by superimposing the images using the preset mark of the first image and the preset mark of the second image as references. Because the first image and the second image are generated when the ultrasonic probe is located at different positions, the position of the preset mark in the first image differs from its position in the second image, so the first image and the second image cannot overlap completely. The portions of the two images that cannot be superimposed, owing to this offset, contain no artifacts, so the first image and the second image each comprise an independent non-artifact area that does not correspond to the initial superposition area and a non-artifact area that does correspond to the initial superposition area. The artifact area within the initial superposition area can then be replaced according to the non-artifact areas of the first image and the second image corresponding to the initial superposition area, yielding an artifact-free final superposition area.
Accordingly, in some embodiments, as shown in fig. 4, the non-artifact area of the first image and the non-artifact area of the second image each include an independent non-artifact area and a non-artifact area corresponding to the initial superposition area, and the step of determining the target image according to the non-artifact area of the first image, the non-artifact area of the second image and the initial superposition area (i.e., step S3) may further include the following steps:
in step S31, the initial superimposition area is replaced according to the non-artifact area corresponding to the initial superimposition area, so as to obtain a final superimposition area.
In step S32, the independent non-artifact region of the first image, the independent non-artifact region of the second image, and the final overlapping region are stitched to obtain the target image.
First, the artifact area and the non-artifact area within the initial superposition area are replaced according to the non-artifact area of the first image corresponding to the initial superposition area and the non-artifact area of the second image corresponding to the initial superposition area, so that the resulting final superposition area contains no artifacts. Then, the independent non-artifact area of the first image (the part that cannot be superimposed with the second image), the independent non-artifact area of the second image (the part that cannot be superimposed with the first image) and the artifact-free final superposition area are spliced together, yielding a high-quality, artifact-free target image.
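Under the assumption that the probe motion is vertical, so that the independent non-artifact areas are horizontal strips sitting above and below the superposition area, the splicing of step S32 reduces to a vertical concatenation. This is an illustrative sketch, not the patent's exact stitching procedure:

```python
import numpy as np

def stitch_target_image(independent_a, final_overlap, independent_b):
    """Splice the first image's independent non-artifact strip, the
    artifact-free final superposition area, and the second image's
    independent non-artifact strip into the target image (step S32).
    A purely vertical layout is assumed, since the probe vibrates
    perpendicular to the target position."""
    return np.vstack([independent_a, final_overlap, independent_b])
```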
Specifically, the portion of the initial superposition area corresponding to the artifact area in the first image may be called the artifact area superimposed from the first image, and the portion corresponding to the artifact area in the second image may be called the artifact area superimposed from the second image. For the portion corresponding to the artifact area in the first image, the corresponding position range in the second image is typically a non-artifact area, so this portion can be directly replaced with the non-artifact area of the corresponding position range in the second image. Likewise, for the portion corresponding to the artifact area in the second image, the corresponding position range in the first image is typically a non-artifact area, so this portion can be directly replaced with the non-artifact area of the corresponding position range in the first image. The remaining non-artifact area of the initial superposition area receives no artifact from either image; in principle it could be replaced directly with the non-artifact area of the corresponding position range in either the first image or the second image. To further improve the image quality, however, a target area may be determined from the first image and the second image according to the pixel values of the first image and the pixel values of the second image, and the non-artifact area in the initial superposition area is replaced with that target area.
Accordingly, in some embodiments, as shown in fig. 5, the step of performing the replacement processing on the initial superposition area according to the non-artifact area corresponding to the initial superposition area (i.e., step S31) may further include the following steps:
In step S311, the portion of the initial superimposed region corresponding to the artifact region in the first image is replaced with a non-artifact region of the corresponding position range in the second image.
In step S312, the portion of the initial superimposed region corresponding to the artifact region in the second image is replaced with a non-artifact region of the corresponding position range in the first image.
In step S313, a target area is determined from the non-artifact area corresponding to the initial overlapping area according to the pixel value of the first image and the pixel value of the second image, and the non-artifact area in the initial overlapping area is replaced with the target area.
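Steps S311 to S313 can be sketched as per-pixel mask operations on the aligned superposition area. The boolean artifact masks are hypothetical inputs from an upstream artifact-detection step, and averaging where both images are clean is one of the options the text allows for step S313:

```python
import numpy as np

def build_final_overlap(overlap_a, overlap_b, artifact_a, artifact_b):
    """Sketch of steps S311-S313. artifact_a / artifact_b are boolean
    masks marking pixels that are artifact in the first / second image
    (mask detection is assumed to happen upstream)."""
    out = np.empty_like(overlap_a)
    only_a_bad = artifact_a & ~artifact_b
    only_b_bad = artifact_b & ~artifact_a
    both_clean = ~artifact_a & ~artifact_b
    out[only_a_bad] = overlap_b[only_a_bad]   # S311: take second image
    out[only_b_bad] = overlap_a[only_b_bad]   # S312: take first image
    # S313 (one allowed option): average where both images are clean
    out[both_clean] = (overlap_a[both_clean] + overlap_b[both_clean]) / 2
    # pixels artifacted in both images (rare if the probe moved enough)
    # fall back to the first image's values in this sketch
    both_bad = artifact_a & artifact_b
    out[both_bad] = overlap_a[both_bad]
    return out
```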
Fig. 1b is a schematic diagram of image superposition provided in the embodiment of the present invention. Referring to figs. 1a and 1b, assume that image A is the first image, image B is the second image, and the preset mark is the skin. Because image A and image B are generated when the ultrasonic probe is located at different positions, the position of the skin in image A differs from the position of the skin in image B. Image A is kept fixed and image B is moved so that the skin in image B overlaps the skin in image A; after superposition, the skin in image A is also the skin in image B. Because the artifact moves faster in the image than the skin does, the artifact area superimposed from image A and the artifact area superimposed from image B do not coincide after superposition, and a certain distance separates them. The artifact area superimposed from image A can therefore be replaced with the non-artifact area of the corresponding position range in image B, and the artifact area superimposed from image B can be replaced with the non-artifact area of the corresponding position range in image A, with the skin serving as the positional reference: for example, if the artifact area superimposed from image A lies 3 cm from the skin, it is replaced with the area lying 3 cm from the skin in image B.
In some embodiments, the first image is generated when the ultrasonic probe is at a position closer to the target monitoring position than when the second image is generated, and the step of determining the target area from the non-artifact area corresponding to the initial superposition area according to the pixel values of the first image and the pixel values of the second image (i.e., step S313) may further include the following step: in a case where the difference between the pixel value of the first image and the pixel value of the second image is smaller than a preset pixel threshold, the target area is determined according to the pixel information of the first image, or the target area is determined according to the average of the pixel information of the first image and the pixel information of the second image.
The non-artifact area within the initial superposition area receives no artifact from the first image or from the second image; both the first image and the second image have a non-artifact area corresponding to it. When the pixel values of the first image differ little from the pixel values of the second image, the non-artifact area in the initial superposition area can be replaced directly with the pixel information of the corresponding non-artifact area of the first image, or with the average of the pixel information of the corresponding non-artifact areas of the first image and the second image.
In some embodiments, when the pixel values of the first image differ significantly from the pixel values of the second image, the non-artifact area in the initial superposition area is replaced with the pixel information of the corresponding non-artifact area of the second image if it is close to the skin, and with the pixel information of the corresponding non-artifact area of the first image otherwise.
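The per-pixel rule of this and the preceding paragraph can be sketched as follows. The threshold value and the near-skin mask are illustrative assumptions, and the average-where-similar branch is one of the two options the text allows:

```python
import numpy as np

def blend_clean_pixels(px_a, px_b, near_skin, pixel_threshold=10.0):
    """Per-pixel choice for the region that is artifact-free in both
    images. Where the two images nearly agree, use their average;
    where they differ strongly, prefer the second image near the skin
    and the first image elsewhere. Threshold and near_skin mask are
    hypothetical inputs."""
    diff_small = np.abs(px_a - px_b) < pixel_threshold
    averaged = (px_a + px_b) / 2.0
    fallback = np.where(near_skin, px_b, px_a)
    return np.where(diff_small, averaged, fallback)
```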
Acquiring the first image and the second image generated when the ultrasonic probe is located at different positions may be achieved by controlling the ultrasonic probe to vibrate up and down. Accordingly, in some embodiments, as shown in fig. 6, the step of acquiring the first image and the second image generated when the ultrasonic probe is located at different positions (i.e., step S1) may further include the following steps:
in step S11, according to the vibration signal, the ultrasonic probe is controlled to vibrate in a direction perpendicular to the target position at a preset vibration frequency and to emit detection ultrasonic waves.
In step S12, a plurality of images are generated from the ultrasound echoes received by the ultrasound probe.
In step S13, a first image and a second image are determined from the plurality of images based on the vibration signal.
The vibration signal controls the ultrasonic probe to vibrate in the direction perpendicular to the target position at the preset vibration frequency, so the position of the ultrasonic probe can be determined from the vibration signal. Because the images generated from the ultrasonic echoes received by the ultrasonic probe are nearly real-time, the first image, generated when the vibrating ultrasonic probe is at the high position, and the second image, generated when it is at the low position, can be determined from the plurality of images according to the vibration signal.
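A minimal sketch of step S13, assuming each frame is timestamp-aligned with one sample of a vibration signal that encodes probe displacement; both interfaces are illustrative, not the patent's exact ones:

```python
import numpy as np

def pick_high_low_frames(frames, vibration_signal):
    """From a sequence of near-real-time frames, pick the one generated
    when the probe was at the high extreme of its vibration and the one
    generated at the low extreme. frames[i] is assumed to be generated
    at the instant vibration_signal[i] was sampled."""
    sig = np.asarray(vibration_signal)
    high_frame = frames[int(np.argmax(sig))]  # probe at the high position
    low_frame = frames[int(np.argmin(sig))]   # probe at the low position
    return high_frame, low_frame
```

In practice this selection would be repeated once per vibration cycle so that each pair of images has a generation time interval below the preset time threshold.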
In some embodiments, the step of controlling, according to the vibration signal, the ultrasonic probe to vibrate in the direction perpendicular to the target position at the preset vibration frequency and to emit the detection ultrasonic waves (i.e., step S11) may further include a step of sending the vibration signal to a vibration mechanism, so that the vibration mechanism pushes the ultrasonic probe to vibrate in the direction perpendicular to the target position at the preset vibration frequency and to emit the detection ultrasonic waves.
In some embodiments, the vibration mechanism is a cam mechanism, the cycle angle of the cam mechanism at the short diameter and the cycle angle of the cam mechanism at the long diameter are both within the interval [90°, 140°], and the cycle angle of the cam mechanism between the short diameter and the long diameter is within the interval [80°, 180°].
The vibration mechanism drives the ultrasonic probe to vibrate up and down continuously so as to acquire a plurality of real-time images. By designing the vibration mechanism as a cam mechanism whose cycle angles at the short diameter and at the long diameter are within the interval [90°, 140°] and whose cycle angle between the long diameter and the short diameter is within the interval [80°, 180°], the vibrating ultrasonic probe can be kept at the low position or the high position for most of the time, with only a small part of the vibration time spent between the two. The first image and the second image generated when the ultrasonic probe is located at different positions can therefore provide a good characteristic representation, and the positions of the preset mark and the artifact region in the first image are not so close to those in the second image as to interfere with the subsequent replacement processing.
The image processing method provided by the present invention is briefly described below in connection with a specific embodiment.
As shown in fig. 7, an embodiment of the present invention provides an image processing apparatus including a FUS transducer, an ultrasonic probe, a vibration mechanism, an ultrasonic imaging module, an image acquisition module, an image processing module, and a control module, where the workflow may include the following steps:
And firstly, the control module transmits a vibration signal to the vibration mechanism to initiate vibration control, so that the vibration mechanism vibrates at a preset vibration frequency; meanwhile, the control module transmits the vibration signal to the image processing module.
Wherein setting the vibration frequency too low impairs real-time performance, while setting it too high blurs the image. To ensure image quality, the preset vibration frequency may be set between 1 Hz and 20 Hz, and the vibration amplitude may be set between 3 mm and 15 mm.
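The parameter ranges above can be captured in a small validation helper; the function name and interface are illustrative, not from the patent.

```python
def vibration_params_ok(freq_hz, amp_mm):
    """Check vibration parameters against the ranges given above:
    1-20 Hz keeps imaging real-time without blurring the image,
    and the amplitude lies between 3 mm and 15 mm."""
    return 1 <= freq_hz <= 20 and 3 <= amp_mm <= 15
```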
And secondly, the vibration mechanism pushes the ultrasonic probe to vibrate up and down.
The vibration mechanism may be a dedicated cam mechanism that pushes the ultrasonic probe to vibrate up and down. The cam may be designed so that the portions of the cycle spent at the short diameter (the ultrasonic probe at its lowest) and at the long diameter (the ultrasonic probe at its highest) each exceed 40 percent, while the transition portion (the ultrasonic probe between the lowest and highest points) is kept within 20 percent.
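The dwell split described above can be checked with a short helper operating on fractions of one cam revolution; the function name and interface are illustrative assumptions.

```python
def cam_profile_ok(low_dwell, high_dwell):
    """Check the dwell split described above (fractions of one cam cycle):
    more than 40% at the short diameter (probe at its lowest) and more
    than 40% at the long diameter (probe at its highest), with the
    transition portion limited to within 20%."""
    transition = 1.0 - low_dwell - high_dwell
    return low_dwell > 0.40 and high_dwell > 0.40 and 0 <= transition <= 0.20
```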
And thirdly, the ultrasonic probe continuously vibrating up and down emits detection ultrasonic waves and receives ultrasonic echoes.
And fourthly, the ultrasonic probe transmits the ultrasonic echo to the ultrasonic imaging module.
And fifthly, the image acquisition module acquires images from the ultrasonic imaging module and transmits the acquired images to the image processing module.
And sixthly, the image processing module receives, in real time, both the image transmitted by the image acquisition module and the vibration signal transmitted by the control module, and judges the position of the ultrasonic probe according to the vibration signal. Based on the vibration signal, an image img1 generated when the vibrating ultrasonic probe is at the high position (see diagram A in fig. 1 a) and an image img2 generated when it is at the low position (see diagram B in fig. 1 a) are acquired. The artifact region on each of img1 and img2 is found and removed, img1 and img2 are superimposed in a staggered manner according to their upper and lower positions to form an image, and the image is then displayed.
Wherein the step of removing the artifact region may further comprise the steps of:
(1) The vertical distance d1 between the ultrasonic probe and the skin when img1 is generated is determined, the vertical distance d2 between the ultrasonic probe and the skin when img2 is generated is determined, and the difference dis = d2 - d1 between the two is calculated.
(2) With img1 as the base map, img2 is moved down by the distance dis so that the skin line of img2 overlaps the skin line of img1, and the superimposable region (equivalent to the intersection of the two sectors) is calculated.
(3) Img1 and img2 are pre-processed such that the superimposable regions of img1 and img2 have the same brightness and contrast.
(4) The part formed by superposing img1 and img2 is determined as the initial superposition area, and the initial superposition area is compared with img1 and img2: the part of the initial superposition area corresponding to the artifact region in img1 is replaced with the non-artifact region in the corresponding position range in img2, and the part corresponding to the artifact region in img2 is replaced with the non-artifact region in the corresponding position range in img1. For the non-artifact area in the initial superposition area, if the difference between the pixel values of img1 and img2 is small, the pixel information of the non-artifact region in the corresponding position range in img1 may be adopted directly, or the average of the pixel information of the corresponding non-artifact regions in img1 and img2 may be adopted. As another embodiment, if the difference between the pixel values of img1 and img2 is large and the non-artifact area in the initial superposition area is close to the skin, the replacement may be performed with the pixel information of the non-artifact region in the corresponding position range in img2; otherwise, it is performed with the pixel information of the non-artifact region in the corresponding position range in img1.
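Steps (2)-(4) above can be sketched as follows. This is a minimal pure-Python illustration operating on lists of pixel rows (for alignment) and flat pixel lists (for matching and fusion); the function names, the integer pixel offset for dis, the artifact masks, the near-skin mask, and the pixel threshold are all illustrative assumptions, not specified by the patent.

```python
from statistics import mean, pstdev

def overlap_after_shift(img1, img2, dis):
    """Step (2): with img1 as the base map, move img2 down by dis rows and
    return the aligned row slices of the superimposable region."""
    h = len(img1)
    # After the shift, rows [dis, h) of img1 face rows [0, h - dis) of img2.
    return img1[dis:], img2[:h - dis]

def match_brightness_contrast(src, ref):
    """Step (3): linearly rescale the flat pixel list src so that its mean
    (brightness) and standard deviation (contrast) match those of ref."""
    s_mean, s_std = mean(src), pstdev(src)
    r_mean, r_std = mean(ref), pstdev(ref)
    if s_std == 0:
        return [float(r_mean)] * len(src)
    scale = r_std / s_std
    return [min(255.0, max(0.0, (p - s_mean) * scale + r_mean)) for p in src]

def fuse_overlap(img1, img2, artifact1, artifact2, near_skin, pixel_thresh=10):
    """Step (4): per-pixel fusion of the aligned overlap (flat lists).
    artifact1/artifact2 mark artifact pixels in each image; near_skin
    marks pixels close to the skin line."""
    out = []
    for p1, p2, a1, a2, skin in zip(img1, img2, artifact1, artifact2, near_skin):
        if a1 and not a2:
            out.append(p2)             # artifact in img1: take img2
        elif a2 and not a1:
            out.append(p1)             # artifact in img2: take img1
        elif a1 and a2:
            out.append(p1)             # artifact in both: no clean source
        elif abs(p1 - p2) <= pixel_thresh:
            out.append((p1 + p2) / 2)  # small difference: img1 or the average
        elif skin:
            out.append(p2)             # large difference near the skin: img2
        else:
            out.append(p1)             # large difference elsewhere: img1
    return out
```

The three helpers mirror the order of the steps: alignment first, then photometric matching of the superimposable crops, then the replacement rules of step (4).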
Wherein, in this step, an optical flow method may be used to reduce errors caused by slight displacement of the body and to find the drift direction of the artifact.
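The patent does not specify the optical flow method further. As a hedged stand-in, the sketch below estimates the small integer displacement between two aligned crops by brute-force search, which is what a dense optical flow reduces to when the body shift is rigid and sub-frame; the names and the search radius are illustrative assumptions.

```python
def estimate_shift(a, b, max_shift=3):
    """Brute-force estimate of the small integer (dy, dx) displacement of
    image b relative to image a (lists of pixel rows), minimising the sum
    of squared differences over wrapped shifts.  A crude stand-in for the
    optical flow step mentioned above."""
    h, w = len(a), len(a[0])
    best, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = 0.0
            for y in range(h):
                for x in range(w):
                    err += (a[y][x] - b[(y + dy) % h][(x + dx) % w]) ** 2
            if err < best_err:
                best, best_err = (dy, dx), err
    return best
```

In practice a library routine such as OpenCV's dense optical flow would replace this exhaustive search; the returned (dy, dx) indicates the drift direction.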
The above division of the methods into steps is made only for clarity of description; when implemented, the steps may be combined into one step or split into multiple steps, and such variants fall within the protection scope of this patent as long as they include the same logical relationship. Adding insignificant modifications to the algorithm or process, or introducing insignificant designs, without changing its core design also falls within the protection scope of this patent.
Based on the same technical idea, in a second aspect, an embodiment of the present invention provides an image processing apparatus. As shown in fig. 8, the image processing apparatus may include:
The acquisition module 101 is configured to acquire a first image and a second image that are generated when the ultrasound probe is located at different positions and have a generation time interval smaller than a preset time threshold.
The first processing module 102 is configured to determine an initial overlapping area of the first image and the second image according to the preset flag of the first image and the preset flag of the second image.
A second processing module 103, configured to determine a target image according to the non-artifact area of the first image, the non-artifact area of the second image, and the initial overlapping area.
In some embodiments, the first processing module 102 is configured to:
identifying a preset mark of the first image and a preset mark of the second image;
Moving the first image or the second image so that a preset mark of the first image overlaps a preset mark of the second image;
And determining a part formed by overlapping the first image and the second image as the initial overlapping area.
In some embodiments, the first processing module 102 is further configured to:
The first image and the second image are preprocessed so that portions of the first image and the second image that can be superimposed have the same brightness and contrast.
In some embodiments, the non-artifact region of the first image and the non-artifact region of the second image each comprise an independent non-artifact region and a non-artifact region corresponding to the initial overlay region, the second processing module 103 is configured to:
performing replacement processing on the initial superposition area according to the non-artifact area corresponding to the initial superposition area to obtain a final superposition area;
And splicing the independent non-artifact region of the first image, the independent non-artifact region of the second image and the final superposition region to obtain the target image.
In some embodiments, the second processing module 103 is configured to:
replacing a part of the initial superposition area corresponding to the artifact area in the first image with a non-artifact area of a corresponding position range in the second image;
replacing a part of the initial superposition area corresponding to the artifact area in the second image with a non-artifact area of a corresponding position range in the first image;
And determining a target area from the non-artifact area corresponding to the initial superposition area according to the pixel value of the first image and the pixel value of the second image, and replacing the non-artifact area in the initial superposition area with the target area.
In some embodiments, the first image is generated when the ultrasonic probe is at a position closer to a target monitoring position than the position at which the second image is generated, and the second processing module 103 is configured to:
And under the condition that the difference value between the pixel value of the first image and the pixel value of the second image is smaller than a preset pixel threshold value, determining the target area according to the pixel information of the first image, or determining the target area according to the average value of the pixel information of the first image and the pixel information of the second image.
In some embodiments, the acquisition module 101 is configured to:
Controlling, according to a vibration signal, the ultrasonic probe to vibrate in a direction perpendicular to a target position at a preset vibration frequency and to emit detection ultrasonic waves;
generating a plurality of images according to ultrasonic echoes received by the ultrasonic probe;
And determining the first image and the second image from the plurality of images according to the vibration signal.
In some embodiments, the acquisition module 101 is configured to send a vibration signal to the vibration mechanism, so that the vibration mechanism pushes the ultrasonic probe to vibrate in the direction perpendicular to the target position at the preset vibration frequency and to emit detection ultrasonic waves.
In some embodiments, the vibration mechanism is a cam mechanism, the cycle angle of the cam mechanism at the short diameter and the cycle angle of the cam mechanism at the long diameter are both within the interval [90°, 140°], and the cycle angle of the cam mechanism between the short diameter and the long diameter is within the interval [80°, 180°].
It should be clear that the invention is not limited to the specific arrangements and processes described in the foregoing embodiments and shown in the drawings. For convenience and brevity of description, detailed descriptions of known methods are omitted herein, and specific working processes of the systems, modules and units described above may refer to corresponding processes in the foregoing method embodiments, which are not repeated herein.
As shown in fig. 9, an embodiment of the present invention provides an electronic device, which may include:
one or more processors 901;
A memory 902 having one or more programs stored thereon, which when executed by one or more processors, cause the one or more processors to implement the image processing method of any of the above;
One or more I/O interfaces 903, coupled between the processor and the memory, are configured to enable information interaction of the processor with the memory.
The processor 901 is a device with data processing capability, including but not limited to a central processing unit (CPU); the memory 902 is a device with data storage capability, including but not limited to random access memory (RAM, more specifically SDRAM, DDR, etc.), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), and flash memory (FLASH); the I/O interface 903 is connected between the processor 901 and the memory 902 to enable information interaction between them, and includes but is not limited to a data bus (Bus), etc.
In some embodiments, processor 901, memory 902, and I/O interface 903 are connected to each other via a bus, which in turn connects to other components of the computing device.
The present embodiment also provides a computer readable medium on which a computer program is stored, where the program, when executed by a processor, implements the image processing method provided in the present embodiment; to avoid repetitive description, the specific steps of the image processing method are not described herein again.
Those of ordinary skill in the art will appreciate that all or some of the steps, systems, functional modules/units in the apparatus, and methods of the invention described above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components, for example, one physical component may have a plurality of functions, or one function or step may be cooperatively performed by several physical components. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as known to those skilled in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. 
Furthermore, as is well known to those of ordinary skill in the art, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a/an ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
Those skilled in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the embodiments and form different embodiments.
It is to be understood that the above embodiments are merely illustrative of the application of the principles of the present invention, but not in limitation thereof. Various modifications and improvements may be made by those skilled in the art without departing from the spirit and substance of the invention, and are also considered to be within the scope of the invention.