
CN120746952A - Bleeding point detection method, computer device, storage medium, and program product - Google Patents

Bleeding point detection method, computer device, storage medium, and program product

Info

Publication number
CN120746952A
CN120746952A (application CN202510806954.XA)
Authority
CN
China
Prior art keywords
image
detected
sample image
bleeding
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202510806954.XA
Other languages
Chinese (zh)
Inventor
施再峰
盛元一
张贻彤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changzhou Lianying Zhirong Medical Technology Co ltd
Original Assignee
Changzhou Lianying Zhirong Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changzhou Lianying Zhirong Medical Technology Co ltd filed Critical Changzhou Lianying Zhirong Medical Technology Co ltd
Priority to CN202510806954.XA priority Critical patent/CN120746952A/en
Publication of CN120746952A publication Critical patent/CN120746952A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10068 Endoscopic image
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Endoscopes (AREA)

Abstract

The present application relates to a bleeding point detection method, a computer device, a storage medium, and a program product. The method comprises: obtaining an image to be detected; determining a conversion relationship between the image to be detected and a sample image based on the image to be detected; and converting the positions of bleeding points in the sample image into the image to be detected according to the conversion relationship, thereby obtaining the positions of the bleeding points in the image to be detected. The sample image is an image that includes bleeding points in a historical sequence image of the target endoscope. Because a bleeding point is stationary within the operating scene, its position in the image to be detected can be derived from its position in the sample image via the conversion relationship, which improves the accuracy of bleeding point detection.

Description

Bleeding point detection method, computer device, storage medium, and program product
Technical Field
The present application relates to the field of medical technology, and in particular, to a method for detecting a bleeding point, a computer device, a storage medium, and a program product.
Background
An endoscope is a common medical instrument, mostly used for observing, suturing, and excising tissue inside a target object.
When an endoscope is used for excision of tissue in a target object, bleeding points form at the excision site, and medical staff need to locate these bleeding points for suturing after the excision is completed. In practice, however, a bleeding point may bleed so quickly and profusely that a large amount of pooled blood covers it, preventing medical staff from finding it in time.
In the conventional technique, an image of the internal tissue is acquired by an image sensor at the end of the endoscope body, and the image is processed by an image signal processor (ISP) on a computer device to enhance the subtle differences between the bleeding point position and the surrounding blood region, so that the bleeding point position can be determined. However, the bleeding point location determined by this conventional method is not very accurate.
Disclosure of Invention
Based on this, it is necessary to provide a bleeding point detection method, a computer device, a storage medium, and a program product, in view of the above-described technical problems.
In a first aspect, a method for detecting a bleeding point is provided, comprising:
Acquiring an image to be detected;
Determining a conversion relation between an image to be detected and a sample image based on the image to be detected, wherein the sample image is an image comprising bleeding points in a historical sequence image of the target endoscope;
and converting the positions of the bleeding points in the sample image into the image to be detected according to the conversion relation to obtain the positions of the bleeding points in the image to be detected.
In one embodiment, determining a conversion relationship between the image to be detected and the sample image based on the image to be detected includes:
acquiring position information of a matched characteristic point pair between an image to be detected and a sample image;
determining an affine transformation matrix between the image to be detected and the sample image according to the position information of the matched characteristic point pairs;
The affine transformation matrix is determined as a conversion relationship between the image to be detected and the sample image.
In one embodiment, acquiring position information of a matched feature point pair between an image to be detected and a sample image includes:
extracting features of the image to be detected to obtain multi-dimensional features of at least one first feature point in the image to be detected;
Performing feature matching on the multi-dimensional features of at least one first feature point and the multi-dimensional features of at least one second feature point in the sample image, and determining the first feature point and the second feature point with the highest feature matching degree as matched feature point pairs;
and acquiring the position information of the first characteristic point in the matched characteristic point pair in the image to be detected and the position information of the second characteristic point in the matched characteristic point pair in the sample image.
In one embodiment, acquiring an image to be detected includes:
Acquiring an initial image to be detected, and determining the similarity between the initial image to be detected and a sample image;
and if the similarity is greater than the similarity threshold, determining the initial image to be detected as the image to be detected.
In one embodiment, the method further comprises:
Acquiring initial images under different light source directions of a target endoscope;
synthesizing the initial images in different light source directions to obtain a stereoscopic image at a target moment;
And arranging the stereoscopic images at the target time according to the time sequence to obtain a historical sequence image.
In one embodiment, acquiring initial images at different light source directions of a target endoscope includes:
Changing the direction of the light source by adjusting the shading position of a shading sheet on the target endoscope, wherein the shading sheet is positioned at the end part of the endoscope body of the target endoscope and covers part of the light source;
an image of the operating environment of the target endoscope in different light shielding positions of the light shielding sheet is acquired as an initial image.
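The application elsewhere describes the stereoscopic image as a photometric stereo synthesis of the initial images taken under different light source directions. The patent does not spell out the algorithm; as an illustration only (all function names and data here are mine), classic Lambertian photometric stereo recovers per-pixel surface normals and albedo from images under known light directions by per-pixel least squares:

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Classic Lambertian photometric stereo: each pixel obeys
    I_k = rho * dot(L_k, n). Solve G = rho * n per pixel by least
    squares, then split G into albedo rho and unit normal n."""
    L = np.asarray(light_dirs, dtype=float)          # k x 3 light directions
    I = np.stack([im.ravel() for im in images])      # k x P pixel intensities
    G, *_ = np.linalg.lstsq(L, I, rcond=None)        # 3 x P, G = rho * n
    rho = np.linalg.norm(G, axis=0)                  # per-pixel albedo
    n = G / np.where(rho > 0, rho, 1.0)              # unit normals
    h, w = images[0].shape
    return n.reshape(3, h, w), rho.reshape(h, w)

# A flat Lambertian patch (normal (0, 0, 1), albedo 1) under three lights.
L = np.array([[0.0, 0.0, 1.0],
              [0.6, 0.0, 0.8],
              [0.0, 0.6, 0.8]])
images = [np.full((4, 4), L[k] @ [0.0, 0.0, 1.0]) for k in range(3)]
normals, albedo = photometric_stereo(images, L)
```

With at least three non-coplanar light directions the per-pixel system is determined, which is consistent with the patent's use of several shading positions of the light shielding sheet.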
In one embodiment, the method further comprises:
performing differential processing on two adjacent frames of stereoscopic images in the historical sequence image to obtain differential characteristics between the two adjacent frames of stereoscopic images in the historical sequence image;
and determining a stereoscopic image of the first bleeding point according to the difference characteristics, and taking the stereoscopic image of the first bleeding point as a sample image.
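A minimal sketch of this differencing step (the function name, threshold, and toy frames are illustrative, not from the patent): the first frame whose mean absolute difference from its predecessor exceeds a threshold is taken as the sample image.

```python
import numpy as np

def first_bleeding_index(frames, threshold=10.0):
    """Return the index of the first frame whose mean absolute difference
    from the previous frame exceeds a threshold -- a crude stand-in for the
    'differential feature' used to pick the sample image."""
    for i in range(1, len(frames)):
        diff = np.abs(frames[i].astype(float) - frames[i - 1].astype(float))
        if diff.mean() > threshold:
            return i            # frames[i] would serve as the sample image
    return None                 # no bleeding detected in the sequence

# Three identical frames, then one with a bright 'bleeding' patch.
quiet = np.zeros((32, 32), dtype=np.uint8)
bleed = quiet.copy()
bleed[8:24, 8:24] = 200         # 256 of 1024 pixels change by 200
frames = [quiet, quiet, quiet, bleed]
print(first_bleeding_index(frames))  # 3
```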
In a second aspect, the present application also provides a bleeding point detection device, including:
The device comprises: an image acquisition module, configured to acquire an image to be detected; a conversion determining module, configured to determine a conversion relationship between the image to be detected and a sample image based on the image to be detected, wherein the sample image is an image including bleeding points in a historical sequence image of the target endoscope; and
a position conversion module, configured to convert the positions of the bleeding points in the sample image into the image to be detected according to the conversion relationship, to obtain the positions of the bleeding points in the image to be detected.
In a third aspect, the present application also provides a computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
Determining a conversion relation between the image to be detected and a sample image based on the image to be detected, wherein the sample image is an image comprising bleeding points in a historical sequence image of the target endoscope;
and converting the positions of the bleeding points in the sample image into the image to be detected according to the conversion relation to obtain the positions of the bleeding points in the image to be detected.
In a fourth aspect, the present application also provides a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
Determining a conversion relation between the image to be detected and a sample image based on the image to be detected, wherein the sample image is an image comprising bleeding points in a historical sequence image of the target endoscope;
and converting the positions of the bleeding points in the sample image into the image to be detected according to the conversion relation to obtain the positions of the bleeding points in the image to be detected.
In a fifth aspect, the application also provides a computer program product comprising a computer program which, when executed by a processor, performs the steps of:
Determining a conversion relation between the image to be detected and a sample image based on the image to be detected, wherein the sample image is an image comprising bleeding points in a historical sequence image of the target endoscope;
and converting the positions of the bleeding points in the sample image into the image to be detected according to the conversion relation to obtain the positions of the bleeding points in the image to be detected.
According to the bleeding point detection method, device, computer equipment, storage medium, and computer program product described above, an image to be detected is obtained, a conversion relationship between the image to be detected and a sample image is determined based on the image to be detected, and the positions of the bleeding points in the sample image are then converted into the image to be detected according to the conversion relationship, yielding the positions of the bleeding points in the image to be detected. The sample image is an image that includes bleeding points in a historical sequence image of the target endoscope. Because bleeding points are stationary within the operating scene, their positions in the image to be detected can be derived from their positions in the sample image via the conversion relationship, which improves the accuracy of bleeding point detection.
Drawings
FIG. 1 is a diagram of an application environment of a method for detecting bleeding points in an embodiment;
FIG. 2 is a flow chart of a method for detecting bleeding points according to an embodiment;
FIG. 3 is a flowchart illustrating a method for determining a conversion relationship between an image to be detected and a sample image according to an embodiment;
FIG. 4 is a flowchart of acquiring location information of matched feature point pairs in one embodiment;
FIG. 5 is a schematic flow chart of acquiring an image to be detected in one embodiment;
FIG. 6 is a flow diagram of constructing a historical sequence image in one embodiment;
FIG. 7 is a flowchart of acquiring initial images in different light source directions in one embodiment;
FIG. 8 is a schematic diagram of the scope end of a target endoscope in one embodiment;
FIG. 9 is a schematic structural view of a light shielding sheet according to an embodiment;
FIG. 10 is a flow diagram of determining a sample image in one embodiment;
FIG. 11 is a schematic illustration of images included in a history sequence image in one embodiment;
FIG. 12 is a flow chart of a method for detecting bleeding points according to another embodiment;
FIG. 13 is a schematic diagram of a process of detecting a bleeding point in one embodiment;
FIG. 14 is a block diagram of a bleeding point detection device in one embodiment;
fig. 15 is an internal structural view of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The bleeding point detection method provided by the embodiment of the application can be applied to an application environment shown in fig. 1. The endoscope system comprises a data acquisition end and a data processing end, wherein the data acquisition end comprises a target endoscope 102, the data processing end comprises a computer device 104 for realizing bleeding point detection, and the target endoscope 102 and the computer device 104 communicate with each other. The target endoscope 102 is used for acquiring a current image of the operating environment; the computer device 104 acquires the current image through the target endoscope 102 and uses it as an image to be detected, determines a conversion relationship between the image to be detected and the sample image based on the image to be detected, and then converts the position of the bleeding point in the sample image into the image to be detected according to the conversion relationship, so as to obtain the position of the bleeding point in the image to be detected. The sample image is an image comprising bleeding points in a historical sequence image of the target endoscope. The computer device 104 may be a general-purpose or special-purpose computer device, such as a desktop computer, a portable computer, a network server, a palm computer (Personal Digital Assistant, PDA), a mobile phone, a tablet computer, a wireless terminal device, a communication device, or an embedded device; the specific type of the computer device 104 is not limited in this embodiment.
In one embodiment, as shown in fig. 2, a method for detecting a bleeding point is provided. The method is described here, by way of example, as applied to the computer device in fig. 1, and includes the following steps:
s210, acquiring an image to be detected.
Wherein the image to be detected comprises an image of the tissue/lumen interior acquired by the target endoscope. The target endoscope is used to acquire images of the operating environment in which it is located (e.g., simulating the interior of human tissue/lumen).
Optionally, the target endoscope and the computer device can communicate in a wired or wireless manner, and the computer device can obtain the image to be detected acquired by the target endoscope. The image to be detected can be an image obtained by photometric stereo synthesis of images acquired by the target endoscope in different light source directions.
S220, based on the image to be detected, determining a conversion relation between the image to be detected and the sample image.
The sample image is an image comprising bleeding points in a historical sequence image of the target endoscope. The historical sequence image of the target endoscope is the image acquired by the target endoscope under the same operation environment before the image to be detected is acquired. The conversion relationship between the image to be detected and the sample image substantially reflects the coordinate conversion relationship between the image to be detected and the sample image.
Optionally, the computer device may perform feature contrast on the image to be detected and the sample image to determine a conversion relationship between the image to be detected and the sample image.
S230, converting the positions of the bleeding points in the sample image into the image to be detected according to the conversion relation, and obtaining the positions of the bleeding points in the image to be detected.
Specifically, the computer equipment acquires the positions of the bleeding points in the sample image, and then converts the positions of the bleeding points in the sample image into an image to be detected according to the conversion relation, so as to obtain the positions of the bleeding points in the image to be detected.
In this embodiment, the computer device obtains the image to be detected, determines a conversion relationship between the image to be detected and the sample image based on the image to be detected, and then converts the position of the bleeding point in the sample image into the image to be detected according to the conversion relationship, so as to obtain the position of the bleeding point in the image to be detected. The image to be detected is a current image acquired by the target endoscope, and the sample image is an image comprising bleeding points in a historical sequence image of the target endoscope. The method can be used for detecting the positions of the bleeding points, and the positions of the bleeding points in the image to be detected can be obtained based on the conversion relation between the positions of the bleeding points in the sample image by utilizing the immobility of the bleeding points, so that the accuracy of detecting the bleeding points is improved.
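The conversion in S230 amounts to applying the coordinate transform to each known bleeding-point position. A minimal sketch, assuming the conversion relationship is a 2x3 affine matrix as in the embodiments below (the matrix and coordinates here are illustrative, not from the patent):

```python
import numpy as np

def transform_point(affine, point):
    """Map a bleeding-point coordinate from the sample image into the image
    to be detected using a 2x3 affine matrix (the conversion relationship)."""
    x, y = point
    u, v = affine @ np.array([x, y, 1.0])  # homogeneous coordinates
    return (float(u), float(v))

# Illustrative conversion relation: the view shifted 5 px right and 3 px up.
M = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -3.0]])
bleeding_point_in_sample = (10.0, 20.0)
print(transform_point(M, bleeding_point_in_sample))  # (15.0, 17.0)
```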
In practical application, the conversion relationship between the image to be detected and the sample image can be determined according to the matched characteristic point pairs between the image to be detected and the sample image. As shown in fig. 3, S220, determining a conversion relationship between the image to be detected and the sample image based on the image to be detected, includes:
s310, acquiring position information of a matched characteristic point pair between the image to be detected and the sample image.
Here, a feature is a distinctive structure in the image, and a feature point is a point or region exhibiting such a feature; in this embodiment, a feature point is described as a single point exhibiting a feature. A matched feature point pair is a pair of feature points whose features match. The position information correspondingly includes the position, within the image to be detected, of the pair's feature point in the image to be detected, and the position, within the sample image, of the pair's feature point in the sample image.
Optionally, the computer device extracts feature points of the image to be detected to obtain feature points in the image to be detected, and performs feature matching on the obtained feature points and the feature points in the sample image to obtain matched feature point pairs between the image to be detected and the sample image, so as to obtain position information of the feature points in the image to be detected in the matched feature point pairs in the image to be detected and position information of the feature points in the sample image in the matched feature point pairs in the sample image respectively.
S320, determining an affine transformation matrix between the image to be detected and the sample image according to the position information of the matched feature point pairs.
It should be noted that, in the actual use process of the target endoscope, there may be transformation operations such as movement, rotation, or scaling, so that it is difficult to ensure that the target endoscope is in the same operation state when the sample image and the image to be detected are obtained, and the affine transformation matrix can accurately reflect the association relationship between the image to be detected and the sample image.
Alternatively, the position information of the matched pair of feature points may be coordinates of the feature points in the image. The computer device can obtain an affine transformation matrix between the image to be detected and the sample image based on the coordinates of the plurality of sets of characteristic point pairs in the image to be detected and the coordinates in the sample image.
S330, determining an affine transformation matrix as a conversion relation between the image to be detected and the sample image.
Specifically, after the computer device obtains the affine transformation matrix, the affine transformation matrix is determined as the conversion relationship between the image to be detected and the sample image.
In this embodiment, the computer device obtains the position information of the matched pair of feature points between the image to be detected and the sample image, and determines an affine transformation matrix between the image to be detected and the sample image according to the position information of the matched pair of feature points, thereby determining the affine transformation matrix as a conversion relationship between the image to be detected and the sample image. The affine transformation matrix can accurately reflect the conversion relation between the image to be detected and the sample image, is beneficial to accurately determining the position of the bleeding point in the image to be detected based on the conversion relation, and further improves the accuracy of bleeding point detection.
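In practice this estimation is often done with OpenCV's cv2.estimateAffinePartial2D, which adds RANSAC-style outlier rejection. A dependency-free least-squares sketch of the same idea (the point pairs below are illustrative, not from the patent):

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Least-squares 2x3 affine matrix A such that A @ [x, y, 1] ~ dst for
    every matched pair (src in the sample image, dst in the image to detect)."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    X = np.hstack([src, np.ones((len(src), 1))])   # N x 3 homogeneous coords
    W, *_ = np.linalg.lstsq(X, dst, rcond=None)    # 3 x 2 solution
    return W.T                                     # 2 x 3 affine matrix

# Point pairs related by a known rotation + translation recover it exactly.
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
t = np.array([2.0, 1.0])
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = src @ R.T + t
A = estimate_affine(src, dst)
```

At least three non-collinear pairs determine the six affine parameters; more pairs over-determine the system and the least-squares fit averages out localization noise.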
In an alternative embodiment, the accuracy of the affine transformation matrix, and thus the accuracy of detecting the bleeding point, can be improved by improving the matching characteristic point pairs between the image to be detected and the sample image. Based on this, as shown in fig. 4, the step S310 of acquiring the position information of the matched feature point pair between the image to be detected and the sample image includes:
s410, extracting features of the image to be detected to obtain multi-dimensional features of at least one first feature point in the image to be detected.
The multidimensional feature is various types of feature information. Optionally, the multi-dimensional features include various types of feature information including color, texture, size, location information, and the like. The position information includes a distance between the feature point and a certain reference point, and the reference point may be another feature point or an image center point.
It should be noted that the feature points may be more significant points in the image, such as contour points, bright points in darker areas, dark points in lighter areas, points with color distinction from surrounding areas, and so on. The feature points in different images are different and the number is also uncertain.
Optionally, the computer device may process the image to be detected using an image processing method based on the characteristics of feature points, so as to obtain the feature points in the image to be detected, that is, the first feature points, and further obtain the multi-dimensional information of each first feature point. For example, the computer device may employ the ORB (Oriented FAST and Rotated BRIEF) feature extraction algorithm to derive the first feature points in the image to be detected. The computer device may also input the image to be detected into a deep learning neural network model trained to identify feature points, so that the first feature points in the image to be detected are identified by the neural network model.
And S420, performing feature matching on the multi-dimensional features of at least one first feature point and the multi-dimensional features of at least one second feature point in the sample image, and determining the first feature point and the second feature point with the highest feature matching degree as matched feature point pairs.
The feature points in the sample image are the second feature points, and the multi-dimensional features of the second feature points in the sample image are matched with the feature types of the multi-dimensional features of the first feature points in the image to be detected.
Alternatively, the process of extracting the second feature point in the sample image may refer to the process of extracting the first feature point in the image to be detected, for example, an ORB feature extraction algorithm or a deep learning neural network model is used to obtain the second feature point in the sample image. Similarly, the first feature points in different images to be detected are different, and the second feature points in different sample images are different, but the images to be detected and the sample images are images acquired by the target endoscope under the same operation scene, and the first feature points and the second feature points which are matched are feature point pairs.
In a specific embodiment, the extraction process of the second feature point in the sample image is a preprocessing process performed according to the sample image. That is, before the step of acquiring the image to be detected, feature extraction is performed on the sample image in advance, so as to obtain multi-dimensional features of at least one second feature point in the sample image.
Alternatively, the feature matching degree is positively correlated with the number of matched feature types, i.e., the more the number of matched feature types, the higher the feature matching degree, and conversely, the fewer the number of matched feature types, the lower the feature matching degree. The feature matching degree is also inversely related to the feature deviation among various types of features, namely the larger the feature deviation is, the lower the feature matching degree is, and conversely, the smaller the feature deviation is, the higher the feature matching degree is.
Specifically, the computer device performs feature matching of multi-dimensional features on each first feature point in the image to be detected and each second feature point in the sample image to determine the matching degree between the corresponding first feature point and the second feature point, and for each first feature point, the first feature point and the second feature point which have the highest feature matching degree and are larger than a matching degree threshold value are determined to be matched feature point pairs.
S430, acquiring position information of a first feature point in the matched feature point pair in the image to be detected and position information of a second feature point in the matched feature point pair in the sample image.
Specifically, after determining the matched feature point pair between the image to be detected and the sample image, the computer equipment obtains the coordinates of the first feature point of the pair in the image to be detected and the coordinates of the second feature point of the pair in the sample image.
In this embodiment, the computer device performs feature extraction on an image to be detected to obtain a multi-dimensional feature of at least one first feature point in the image to be detected, performs feature matching on the multi-dimensional feature of the at least one first feature point and the multi-dimensional feature of at least one second feature point in the sample image, determines a first feature point and a second feature point with the highest feature matching degree as a matched feature point pair, and further obtains position information of the first feature point in the image to be detected in the matched feature point pair and position information of the second feature point in the sample image in the matched feature point pair. The matched characteristic point pairs can be accurately determined in the multi-dimensional characteristic matching mode, the reliability of the determined matched characteristic point pairs is improved, and the affine transformation matrix with high accuracy and high reliability can be determined based on the position information of the matched characteristic point pairs with high accuracy and high reliability, so that the accuracy of bleeding point detection is improved.
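A minimal stand-in for this matching step: mutual nearest neighbour on descriptor vectors, with a distance threshold playing the role of the matching-degree threshold. The descriptors and threshold below are toy values for illustration, not real ORB outputs:

```python
import numpy as np

def match_features(desc_a, desc_b, max_dist=0.5):
    """Pair descriptors that are each other's nearest neighbour under
    Euclidean distance and closer than max_dist -- a simple stand-in for
    'highest matching degree above a threshold'."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    a_to_b = d.argmin(axis=1)          # best match in b for each a
    b_to_a = d.argmin(axis=0)          # best match in a for each b
    pairs = []
    for i, j in enumerate(a_to_b):
        if b_to_a[j] == i and d[i, j] < max_dist:  # mutual best match
            pairs.append((i, int(j)))
    return pairs

desc_a = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])  # image to detect
desc_b = np.array([[1.1, 0.9], [0.1, 0.0], [9.0, 9.0]])  # sample image
print(match_features(desc_a, desc_b))  # [(0, 1), (1, 0)]
```

Note that the third descriptor in each set finds no partner within the threshold, mirroring the patent's point that feature points differ between images and their number is not fixed.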
In one embodiment, in order to further improve the accuracy of the determined position of the bleeding point, as shown in fig. 5, S210, acquiring the image to be detected includes:
s510, acquiring an initial image to be detected, and determining the similarity between the initial image to be detected and the sample image.
The initial image to be detected is an image of an operation environment acquired by a target endoscope when a user detects a bleeding point, and may be an image acquired by the target endoscope at any acquisition position in the whole operation process. Under the same operation scene, different acquisition positions correspond to different operation environments.
Optionally, the computer device may directly calculate the structural similarity index measure (SSIM) between the initial image to be detected and the sample image; it may represent both images as vectors and calculate the cosine similarity between those vectors; or it may perform histogram matching between the initial image to be detected and the sample image. Any of these yields the similarity between the initial image to be detected and the sample image.
Optionally, the computer device may further use a plurality of reference images at the same acquisition position as the sample image as training samples, train to obtain a neural network model for identifying an operation environment corresponding to the acquisition position, further input the initial image to be detected into the neural network model, and output the similarity between the initial image to be detected and the sample image by the neural network model.
S520, if the similarity is greater than a similarity threshold, determining the initial image to be detected as the image to be detected.
Specifically, after obtaining the similarity between the initial image to be detected and the sample image, the computer device compares the similarity with a preset similarity threshold, and determines, according to the comparison result, whether the initial image to be detected is to be used as the image to be detected.
Otherwise, if the similarity is less than or equal to the similarity threshold, the computer device determines that the initial image to be detected is not to be used as the image to be detected, and reminds the user to reacquire an initial image to be detected.
In this embodiment, the computer device obtains an initial image to be detected, determines a similarity between the initial image to be detected and the sample image, and determines that the initial image to be detected is the image to be detected if the similarity is greater than a similarity threshold. The method can realize screening of the images to be detected, so that the initial images to be detected acquired near the bleeding points are used as the images to be detected for detecting the bleeding points, the accuracy of the determined bleeding points is improved, unnecessary detection is avoided, the detection time is reduced, and the detection efficiency is correspondingly improved.
In one embodiment, the method further includes a process of constructing a historical sequence image. As shown in fig. 6, the method further includes:
S610, acquiring initial images under different light source directions of a target endoscope.
Optionally, an adjusting component is arranged at the end part of the endoscope body of the target endoscope, the adjusting component is used for changing the light emitting direction of the light source, and the computer equipment can acquire initial images acquired by the target endoscope in different light source directions. For example, a user can trigger a control button for controlling the adjusting assembly at the control end of the target endoscope so as to control the adjusting assembly to change the light emitting direction of the light source and control the target endoscope to acquire the initial image under different light source directions.
Alternatively, the target endoscope may stop acquiring initial images in response to a stop instruction from the user or the computer device, or may stop automatically after a bleeding point is detected in an initial image.
It should be noted that, the initial images in different light source directions are acquired based on the target endoscope at the same acquisition position, and a plurality of initial images in different light source directions can be acquired at the same acquisition position as a set of initial images, and each set of initial images is used for synthesizing one frame of stereoscopic image. The target endoscope can move in the using process, and the computer equipment can obtain initial images acquired by the target endoscope under different light source directions at different acquisition positions. For example, taking an operation environment of the target endoscope as an example of a simulated lumen of a living body, the target endoscope may collect multiple sets of initial images in different light source directions at a collection position a, may continue to collect multiple sets of initial images in different light source directions at a collection position B, and may also continue to collect multiple sets of initial images in different light source directions at a collection position C.
S620, synthesizing the initial images in different light source directions to obtain a stereoscopic image at the target moment.
Optionally, the target time may be the earliest, the latest, or the middle of the acquisition times corresponding to the acquired initial images in different light source directions.
Specifically, for the same acquisition position, the computer device synthesizes each group of initial images in different light source directions by a photometric stereo synthesis method, obtaining a stereo image at the corresponding target time. For example, the computer device acquires two sets of initial images acquired by the target endoscope at the acquisition position A. The first set of initial images includes a first initial image (acquisition time t1, light source direction F1), a second initial image (acquisition time t2, light source direction F2), and a third initial image (acquisition time t3, light source direction F3); the second set of initial images includes a fourth initial image (acquisition time t4, light source direction F1), a fifth initial image (acquisition time t5, light source direction F2), and a sixth initial image (acquisition time t6, light source direction F3). Wherein t1< t2< t3< t4< t5< t6. The computer device performs photometric stereo synthesis on the first set of initial images to obtain the stereo image at target time t1, and performs photometric stereo synthesis on the second set of initial images to obtain the stereo image at target time t4.
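The embodiment does not fix a particular synthesis algorithm; as one classical realization, Lambertian photometric stereo recovers per-pixel surface normals (and albedo) from a group of images under known light source directions by per-pixel least squares. A minimal sketch on synthetic data (the light directions F1–F3 below are hypothetical unit vectors):

```python
import numpy as np

def photometric_stereo(intensities, light_dirs):
    """Recover unit surface normals and albedo from k images under known light directions.
    intensities: (k, h, w) grayscale stack; light_dirs: (k, 3) unit light vectors.
    Lambertian model I = L @ (albedo * n), solved per pixel in the least-squares sense."""
    k, h, w = intensities.shape
    I = intensities.reshape(k, -1)                        # (k, h*w)
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)    # (3, h*w), G = albedo * n
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-12)
    return normals.reshape(3, h, w), albedo.reshape(h, w)

# Synthetic group: a flat patch facing the camera, normal (0, 0, 1), albedo 1,
# imaged under three hypothetical light source directions F1, F2, F3
L = np.array([[0.0, 0.0, 1.0],
              [0.6, 0.0, 0.8],
              [0.0, 0.6, 0.8]])
imgs = (L @ np.array([0.0, 0.0, 1.0])).reshape(3, 1, 1) * np.ones((3, 4, 4))
normals, albedo = photometric_stereo(imgs, L)
```

At least three non-coplanar light directions are needed for the per-pixel system to be solvable, which matches the multiple shielding positions of the later embodiment.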
S630, arranging the stereoscopic images at the target time according to the time sequence to obtain a historical sequence image.
The historical sequence image includes stereo images corresponding to a plurality of target times, and may also include stereo images corresponding to a plurality of acquisition positions.
Specifically, the computer device sorts all the obtained stereoscopic images according to the target time of each stereoscopic image to obtain a history sequence image with a time sequence relationship.
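The arrangement of S630 reduces to sorting the synthesized frames by their target times (the records below are hypothetical stand-ins for the stereo images):

```python
# Hypothetical (target_time, acquisition_position, frame_id) records for synthesized stereo images
frames = [
    (4.0, "B", "stereo_B_t4"),
    (1.0, "A", "stereo_A_t1"),
    (6.0, "C", "stereo_C_t6"),
    (2.5, "A", "stereo_A_t2"),
]
# Historical sequence image: all stereo frames arranged in time order
history = sorted(frames, key=lambda record: record[0])
ordered_ids = [record[2] for record in history]
```

Grouping `history` by acquisition position then yields, for each position, the time-ordered sub-sequence on which the differential processing of the later embodiment operates.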
In an alternative embodiment, the end of the lens body of the target endoscope is provided with a light shielding sheet, and the shielding position of the light shielding sheet on the end of the lens body can be adjusted so as to shield part of the light source at the end of the lens body and thereby change the light source direction. Based on this, as shown in fig. 7, the step S610 of acquiring initial images in different light source directions of the target endoscope includes:
S710, changing the light source direction by adjusting the shielding position of the light shielding sheet on the target endoscope.
The light source direction is the light emitting direction of the light source. The light shielding sheet is positioned at the end of the endoscope body of the target endoscope and covers part of the light source. As shown in fig. 8, the end of the endoscope body of the target endoscope includes a light outlet 801 for placing the light source module, and a light shielding sheet disposed at the light outlet, and further includes an opening for realizing other functions, such as a forceps opening 802 for passing a surgical instrument, and a camera opening 803 for placing the camera module.
Alternatively, as shown in fig. 9, the light shielding sheet may be of a quarter type or an eighth type. The quarter-type light shielding sheet leaves a 1/4 area open for light emission and shields the remaining 3/4 area; the eighth-type light shielding sheet leaves a 1/8 area open for light emission and shields the remaining 7/8 area. In this embodiment, the light shielding sheet is adapted to the shape of the light outlet and is circular.
Alternatively, the computer device may adjust the shielding position of the light shielding sheet on the target endoscope based on a control operation of the user to change the light source direction. Optionally, the light shielding sheet may be controlled to rotate, clockwise or anticlockwise, in several steps until it completes a 360° rotation, the light source direction changing with each step. For example, taking the quarter-type light shielding sheet in fig. 9 as an example, rotating the light-transmitting region (white region) of the light shielding sheet to position ① corresponds to light source direction S1, to position ② corresponds to light source direction S2, to position ③ corresponds to light source direction S3, and to position ④ corresponds to light source direction S4, at which point the light shielding sheet has completed a 360° rotation.
S720, acquiring images of the operating environment of the target endoscope at different shielding positions of the light shielding sheet as the initial images.
Specifically, the target endoscope acquires images of the current operating environment while the light shielding sheet is at different shielding positions, and the computer device correspondingly obtains these images as the initial images. Continuing the above example, the target endoscope acquires an image of the current operating environment in light source direction S1, with the light-transmitting region of the light shielding sheet rotated to position ①, as a first initial image; an image in light source direction S2, with the light-transmitting region rotated to position ②, as a second initial image; an image in light source direction S3, with the light-transmitting region rotated to position ③, as a third initial image; and an image in light source direction S4, with the light-transmitting region rotated to position ④, as a fourth initial image.
It should be noted that, in the foregoing embodiment, the image to be detected and the initial image to be detected are stereo images obtained by photometric stereo synthesis of the initial images in different light source directions, and specific processes of collecting the initial images and photometric synthesis are the same as those described above.
In this embodiment, the computer device acquires initial images in different light source directions of the target endoscope, synthesizes the initial images in different light source directions to obtain a stereo image at each target time, and arranges the stereo images at the target times in time order to obtain the historical sequence image. The historical sequence image can cover the whole application process of the target endoscope, recording the operation environment all the way from no bleeding point to the appearance of a bleeding point, which facilitates determining the bleeding point position based on the historical sequence image. Moreover, each frame in the historical sequence image is a stereo image obtained by photometric synthesis of initial images in different light source directions and thus carries multi-dimensional information, which facilitates determining the conversion relationship between the image to be detected and the sample image and further improves the accuracy of bleeding point detection.
To further improve the accuracy of the determined bleeding point position, the sample image may be the image in which the bleeding point first appears. To determine, in the historical sequence image, the image in which the bleeding point first appears, as shown in fig. 10, the method further includes:
S1010, carrying out differential processing on two adjacent frames of stereoscopic images in the historical sequence image to obtain differential characteristics between the two adjacent frames of stereoscopic images in the historical sequence image.
Optionally, when the historical sequence image includes a stereo image set corresponding to a plurality of acquisition positions (each acquisition position corresponds to one stereo image set), the computer device performs differential processing on two adjacent frames of stereo images belonging to the same stereo image set in the historical sequence image, that is, weakens a similar part between the two frames of stereo images, highlights a changed part between the two frames of stereo images, and further obtains a differential feature between the two adjacent frames of stereo images.
As shown in fig. 11, the historical sequence image includes, for the same acquisition position, multiple frames of stereo images, each frame obtained by photometric synthesis of multiple initial images. In order to determine, from the historical sequence image, the stereo image in which the bleeding point first appears (i.e., the sample image), the computer device performs differential processing on every two adjacent frames of stereo images, namely the 2nd and 1st frames, the 3rd and 2nd frames, the 4th and 3rd frames, and so on, to obtain the difference feature between each pair of adjacent frames.
S1020, determining the stereo image in which the bleeding point first appears according to the difference features, and taking the stereo image in which the bleeding point first appears as the sample image.
It should be noted that, for the same acquisition position, the operation environment of the target endoscope generally does not change much, so before a bleeding point appears there is essentially no difference feature between two adjacent frames of stereo images, whereas once a bleeding point appears in the operation environment, a difference feature exists between the two adjacent frames straddling its appearance.
Specifically, after obtaining the difference feature between two adjacent frames of stereo images, the computer device determines, based on the difference feature, whether a bleeding point appeared in the time interval between the earlier frame and the later frame, and, if a bleeding point appeared, determines the later of the two adjacent frames as the stereo image in which the bleeding point first appears and uses it as the sample image.
Optionally, the computer device may determine that a bleeding point has appeared if the difference feature indicates that red pixel points appear, or pixel colors change, in the later frame relative to the earlier frame and the changed area meets a preset condition. The computer device may also determine that a bleeding point has appeared if the difference feature indicates that the texture of a partial region of the later frame has deepened relative to the earlier frame. Continuing the above example, the computer device may determine from the difference feature between the 5th and 4th frames that a bleeding point has appeared, and determine the 5th frame as the sample image.
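A minimal sketch of this differencing criterion on synthetic frames (the thresholds `red_delta` and `area_ratio`, and the red-channel test itself, are illustrative stand-ins for the "preset condition"):

```python
import numpy as np

def first_bleeding_frame(frames, red_delta=40, area_ratio=0.01):
    """Return the index of the first frame whose red channel rises markedly,
    over a sufficiently large area, relative to the previous frame; else None.
    frames: time-ordered list of (h, w, 3) uint8 stereo images, one acquisition position."""
    for i in range(1, len(frames)):
        prev = frames[i - 1].astype(int)
        curr = frames[i].astype(int)
        red_rise = (curr[..., 0] - prev[..., 0]) > red_delta   # difference feature
        if red_rise.mean() > area_ratio:       # changed area meets the preset condition
            return i
    return None

# Hypothetical sequence: a red patch (bleeding point) first appears at frame index 3
frames = [np.full((32, 32, 3), 90, dtype=np.uint8) for _ in range(5)]
for f in frames[3:]:
    f[8:16, 8:16, 0] = 220
first_idx = first_bleeding_frame(frames)
```

The returned index identifies the later frame of the first differing pair, i.e. the candidate sample image.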
In this embodiment, the computer device performs differential processing on adjacent frames of stereo images in the historical sequence image to obtain the difference feature between each pair of adjacent frames, determines from the difference features the stereo image in which the bleeding point first appears, and uses that stereo image as the sample image. During the application of the target endoscope, the operation scene goes through the whole process from initially having no bleeding point to a bleeding point appearing, so the corresponding historical sequence image includes, for the same acquisition position, both images without the bleeding point and images with it. Through the above differential processing, the stereo image in which the bleeding point first appears can be determined from the multiple frames of the historical sequence image for the same acquisition position and used as the sample image. Because this first-appearance stereo image contains few interfering elements, performing the subsequent bleeding point detection based on it improves the accuracy of the determined bleeding point position.
In one embodiment, as shown in fig. 12, there is also provided a bleeding point detection method, including the steps of:
S1210, acquiring an initial image to be detected, and determining the similarity between the initial image to be detected and a sample image, wherein the sample image is an image comprising bleeding points in a historical sequence image.
S1220, if the similarity is greater than the similarity threshold, determining the initial image to be detected as the image to be detected.
S1230, converting the positions of the bleeding points in the sample image into the image to be detected according to the affine transformation matrix between the image to be detected and the sample image, and obtaining the positions of the bleeding points in the image to be detected.
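Step S1230 then amounts to applying the 2x3 affine transformation matrix to the bleeding point coordinates (the matrix values below are hypothetical):

```python
import numpy as np

def transform_point(affine_2x3, point):
    """Map a point from the sample image into the image to be detected."""
    x, y = point
    return tuple(affine_2x3 @ np.array([x, y, 1.0]))

# Hypothetical affine matrix: a pure translation of (+5, -3) pixels,
# applied to a bleeding point at (10, 20) in the sample image
M = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -3.0]])
bleeding_in_detected = transform_point(M, (10.0, 20.0))
```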
Optionally, the computer device performs the detection of the bleeding point, and the specific process of determining the location of the bleeding point is as follows:
A doctor uses the target endoscope to perform a medical operation (such as a suturing operation) in a corresponding operation environment (such as a simulated human tissue or cavity). During the medical operation, under the doctor's operation instruction or a control instruction sent by the computer device, the target endoscope acquires multiple groups of initial images in different light source directions at its acquisition position. The computer device obtains the initial images acquired by the target endoscope; each group of initial images is photometrically synthesized into one frame of stereo image, and all the obtained stereo images are arranged by acquisition time to form the historical sequence image. The computer device then determines, according to the difference features between adjacent frames, the stereo image in which the bleeding point first appears, and takes it as the sample image. Meanwhile, the neural network model for identifying the operation environment in an image and the affine transformation matrix between the image to be detected and the sample image are obtained; for the specific process, reference is made to the above related embodiments, and details are not repeated here.
After the doctor performs the above medical operation, the bleeding points generated during the operation need to be located. With reference to fig. 13, the computer device obtains a frame of initial image to be detected through the target endoscope; the initial image to be detected is a stereo image synthesized from a group of initial images acquired in different light source directions. The computer device identifies the initial image to be detected with the neural network model to obtain the similarity between the initial image to be detected and the sample image, determines the initial image to be detected as the image to be detected if the similarity is greater than the similarity threshold, and then converts the bleeding point position in the sample image into the image to be detected according to the affine transformation matrix between the image to be detected and the sample image, thereby obtaining the bleeding point position in the image to be detected and displaying the bleeding point in the image to be detected in real time.
It should be noted that, the specific process of the related steps in this embodiment may be referred to the related embodiments, and will not be described herein. By the method, the bleeding point can be detected, and the accuracy of the determined bleeding point position is improved.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown sequentially as indicated by the arrows, these steps are not necessarily performed in the order indicated. Unless explicitly stated herein, the order of execution of these steps is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in the above flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and are not necessarily performed sequentially but may be performed in turn or alternately with at least some of the other steps or stages.
In one embodiment, as shown in fig. 14, there is provided a bleeding point detection apparatus including an image acquisition module 1401, a conversion determination module 1402, and a position conversion module 1403, wherein:
the image acquisition module 1401 is used for acquiring an image to be detected, and the conversion determination module 1402 is used for determining a conversion relation between the image to be detected and a sample image based on the image to be detected, wherein the sample image is an image comprising bleeding points in a historical sequence image of a target endoscope;
The position conversion module 1403 is configured to convert the position of the bleeding point in the sample image into the image to be detected according to the conversion relationship, so as to obtain the position of the bleeding point in the image to be detected.
In one embodiment, the transition determination module 1402 is specifically configured to:
the method comprises the steps of obtaining position information of matched characteristic point pairs between an image to be detected and a sample image, determining an affine transformation matrix between the image to be detected and the sample image according to the position information of the matched characteristic point pairs, and determining the affine transformation matrix as a conversion relation between the image to be detected and the sample image.
In one embodiment, the transition determination module 1402 is specifically configured to:
The method comprises the steps of obtaining a first characteristic point in a sample image, carrying out characteristic extraction on the image to be detected to obtain a multi-dimensional characteristic of at least one first characteristic point in the image to be detected, carrying out characteristic matching on the multi-dimensional characteristic of at least one first characteristic point and the multi-dimensional characteristic of at least one second characteristic point in the sample image, determining the first characteristic point and the second characteristic point with the highest characteristic matching degree as matched characteristic point pairs, and obtaining position information of the first characteristic point in the image to be detected in the matched characteristic point pairs and position information of the second characteristic point in the sample image in the matched characteristic point pairs.
In one embodiment, the image acquisition module 1401 is specifically configured to:
The method comprises the steps of acquiring an initial image to be detected, determining the similarity between the initial image to be detected and the sample image, and, if the similarity is greater than a similarity threshold, determining the initial image to be detected as the image to be detected.
In one embodiment, the image acquisition module 1401 is further configured to:
The method comprises the steps of obtaining initial images in different light source directions of a target endoscope, synthesizing the initial images in the different light source directions to obtain stereoscopic images at target moments, and arranging the stereoscopic images at the target moments according to time sequence to obtain historical sequence images.
In one embodiment, the image acquisition module 1401 is specifically configured to:
The method comprises the steps of changing the light source direction by adjusting the shielding position of the light shielding sheet on the target endoscope, wherein the light shielding sheet is positioned at the end of the lens body of the target endoscope and covers part of the light source, and acquiring images of the operating environment of the target endoscope at different shielding positions of the light shielding sheet as the initial images.
In one embodiment, the image acquisition module 1401 is further configured to:
The method comprises the steps of performing differential processing on two adjacent frames of stereo images in the historical sequence image to obtain a difference feature between the two adjacent frames of stereo images, determining the stereo image in which the bleeding point first appears according to the difference feature, and taking the stereo image in which the bleeding point first appears as the sample image.
The various modules in the bleeding point detection device described above may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a terminal, and an internal structure diagram thereof may be as shown in fig. 15. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless mode can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a method of bleeding point detection. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 15 is merely a block diagram of a portion of the structure associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements are applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided comprising a memory and a processor, the memory having stored therein a computer program, the processor when executing the computer program performing the steps of:
The method comprises the steps of obtaining an image to be detected, determining a conversion relation between the image to be detected and a sample image based on the image to be detected, wherein the sample image is an image comprising bleeding points in a historical sequence image of a target endoscope, and converting the bleeding point positions in the sample image into the image to be detected according to the conversion relation to obtain the bleeding point positions in the image to be detected.
In one embodiment, the processor when executing the computer program further performs the steps of:
the method comprises the steps of obtaining position information of matched characteristic point pairs between an image to be detected and a sample image, determining an affine transformation matrix between the image to be detected and the sample image according to the position information of the matched characteristic point pairs, and determining the affine transformation matrix as a conversion relation between the image to be detected and the sample image.
In one embodiment, the processor when executing the computer program further performs the steps of:
The method comprises the steps of obtaining a first characteristic point in a sample image, carrying out characteristic extraction on the image to be detected to obtain a multi-dimensional characteristic of at least one first characteristic point in the image to be detected, carrying out characteristic matching on the multi-dimensional characteristic of at least one first characteristic point and the multi-dimensional characteristic of at least one second characteristic point in the sample image, determining the first characteristic point and the second characteristic point with the highest characteristic matching degree as matched characteristic point pairs, and obtaining position information of the first characteristic point in the image to be detected in the matched characteristic point pairs and position information of the second characteristic point in the sample image in the matched characteristic point pairs.
In one embodiment, the processor when executing the computer program further performs the steps of:
The method comprises the steps of acquiring an initial image to be detected, determining the similarity between the initial image to be detected and the sample image, and, if the similarity is greater than a similarity threshold, determining the initial image to be detected as the image to be detected.
In one embodiment, the processor when executing the computer program further performs the steps of:
The method comprises the steps of obtaining initial images in different light source directions of a target endoscope, synthesizing the initial images in the different light source directions to obtain stereoscopic images at target moments, and arranging the stereoscopic images at the target moments according to time sequence to obtain historical sequence images.
In one embodiment, the processor when executing the computer program further performs the steps of:
The method comprises the steps of changing the light source direction by adjusting the shielding position of the light shielding sheet on the target endoscope, wherein the light shielding sheet is positioned at the end of the lens body of the target endoscope and covers part of the light source, and acquiring images of the operating environment of the target endoscope at different shielding positions of the light shielding sheet as the initial images.
In one embodiment, the processor when executing the computer program further performs the steps of:
The method comprises the steps of performing differential processing on two adjacent frames of stereo images in the historical sequence image to obtain a difference feature between the two adjacent frames of stereo images, determining the stereo image in which the bleeding point first appears according to the difference feature, and taking the stereo image in which the bleeding point first appears as the sample image.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of:
The method comprises the steps of obtaining an image to be detected, determining a conversion relation between the image to be detected and a sample image based on the image to be detected, wherein the sample image is an image comprising bleeding points in a historical sequence image of a target endoscope, and converting the bleeding point positions in the sample image into the image to be detected according to the conversion relation to obtain the bleeding point positions in the image to be detected.
In one embodiment, the computer program when executed by the processor further performs the steps of:
Position information of matched feature point pairs between the image to be detected and the sample image is acquired; an affine transformation matrix between the image to be detected and the sample image is determined according to the position information of the matched feature point pairs; and the affine transformation matrix is determined as the conversion relationship between the image to be detected and the sample image.
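As a rough illustration of this step (not the application's actual implementation), the affine matrix can be estimated from the matched point coordinates by linear least squares, and a bleeding-point position can then be mapped through it. In practice a library routine such as OpenCV's `cv2.estimateAffine2D` would typically be used; the function names below are illustrative:

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Estimate a 2x3 affine matrix mapping src_pts -> dst_pts
    by linear least squares (needs >= 3 non-collinear pairs)."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    # Design matrix: one row [x, y, 1] per source point.
    A = np.hstack([src, np.ones((len(src), 1))])
    # Solve A @ X = dst for the 3x2 parameter block, column-wise.
    X, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return X.T  # 2x3 affine matrix

def transform_point(M, pt):
    """Map one point (e.g. a bleeding-point position in the sample
    image) into the coordinates of the image to be detected."""
    x, y = pt
    return (M @ np.array([x, y, 1.0])).tolist()
```

With enough well-distributed matched pairs, the least-squares fit also averages out small localization errors in individual feature points.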
In one embodiment, the computer program when executed by the processor further performs the steps of:
Feature extraction is performed on the image to be detected to obtain multi-dimensional features of at least one first feature point in the image to be detected; the multi-dimensional features of the at least one first feature point are matched against the multi-dimensional features of at least one second feature point in the sample image; the first feature point and the second feature point with the highest matching degree are determined as a matched feature point pair; and the position information of the first feature point of the matched pair in the image to be detected and the position information of the second feature point of the matched pair in the sample image are acquired.
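A minimal sketch of the "highest matching degree" criterion above, realized as nearest-neighbor matching on descriptor distance. It assumes the multi-dimensional features have already been extracted as NumPy arrays (e.g. by an ORB-style extractor); the function name is hypothetical:

```python
import numpy as np

def match_descriptors(desc_a, desc_b):
    """For each descriptor in desc_a, pick the index in desc_b with the
    smallest Euclidean distance (the highest matching degree).
    Returns (index_in_a, index_in_b, distance) tuples."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j = int(np.argmin(dists))
        matches.append((i, j, float(dists[j])))
    return matches
```

A production matcher would usually add a ratio test or cross-check to reject ambiguous pairs before estimating the transformation.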
In one embodiment, the computer program when executed by the processor further performs the steps of:
If the similarity is greater than a similarity threshold, the initial image to be detected is determined as the image to be detected.
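One way to realize the similarity gate above is a vector cosine measure between the candidate image and the sample image (the claims also mention structural similarity as an alternative). A minimal sketch; the threshold value of 0.9 is an assumption for illustration:

```python
import numpy as np

def cosine_similarity(img_a, img_b):
    """Cosine of the angle between the two images viewed as vectors."""
    a = np.asarray(img_a, dtype=float).ravel()
    b = np.asarray(img_b, dtype=float).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def accept_for_detection(candidate, sample, threshold=0.9):
    """Gate: pass the candidate on as the image to be detected only if
    it is sufficiently similar to the sample image."""
    return cosine_similarity(candidate, sample) > threshold
```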
In one embodiment, the computer program when executed by the processor further performs the steps of:
Initial images of the target endoscope in different light source directions are acquired; the initial images in the different light source directions are synthesized to obtain a stereoscopic image at a target moment; and the stereoscopic images at the respective target moments are arranged in time order to obtain the historical sequence image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
The light source direction is changed by adjusting the shading position of a shading sheet on the target endoscope, where the shading sheet is located at the end of the lens body of the target endoscope and covers part of the light source; images of the operating environment of the target endoscope at different shading positions of the shading sheet are acquired as the initial images.
In one embodiment, the computer program when executed by the processor further performs the steps of:
The stereoscopic image in which a bleeding point first appears is determined according to the difference features, and that stereoscopic image is taken as the sample image.
In one embodiment, a computer program product is provided comprising a computer program which, when executed by a processor, performs the steps of:
An image to be detected is acquired; a conversion relationship between the image to be detected and a sample image is determined based on the image to be detected, where the sample image is an image that includes a bleeding point in a historical sequence image of a target endoscope; and the bleeding point position in the sample image is converted into the image to be detected according to the conversion relationship to obtain the bleeding point position in the image to be detected.
In one embodiment, the computer program when executed by the processor further performs the steps of:
Position information of matched feature point pairs between the image to be detected and the sample image is acquired; an affine transformation matrix between the image to be detected and the sample image is determined according to the position information of the matched feature point pairs; and the affine transformation matrix is determined as the conversion relationship between the image to be detected and the sample image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
Feature extraction is performed on the image to be detected to obtain multi-dimensional features of at least one first feature point in the image to be detected; the multi-dimensional features of the at least one first feature point are matched against the multi-dimensional features of at least one second feature point in the sample image; the first feature point and the second feature point with the highest matching degree are determined as a matched feature point pair; and the position information of the first feature point of the matched pair in the image to be detected and the position information of the second feature point of the matched pair in the sample image are acquired.
In one embodiment, the computer program when executed by the processor further performs the steps of:
If the similarity is greater than a similarity threshold, the initial image to be detected is determined as the image to be detected.
In one embodiment, the computer program when executed by the processor further performs the steps of:
Initial images of the target endoscope in different light source directions are acquired; the initial images in the different light source directions are synthesized to obtain a stereoscopic image at a target moment; and the stereoscopic images at the respective target moments are arranged in time order to obtain the historical sequence image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
The light source direction is changed by adjusting the shading position of a shading sheet on the target endoscope, where the shading sheet is located at the end of the lens body of the target endoscope and covers part of the light source; images of the operating environment of the target endoscope at different shading positions of the shading sheet are acquired as the initial images.
In one embodiment, the computer program when executed by the processor further performs the steps of:
The stereoscopic image in which a bleeding point first appears is determined according to the difference features, and that stereoscopic image is taken as the sample image.
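The adjacent-frame differencing used to locate the frame in which a bleeding point first appears can be sketched as follows. The mean-absolute-difference statistic and the threshold value are illustrative assumptions, not the application's actual criterion:

```python
import numpy as np

def first_bleeding_frame(frames, diff_threshold=10.0):
    """Return the index of the first frame whose mean absolute
    difference from the previous frame exceeds the threshold
    (a candidate sample image), or None if no frame qualifies."""
    for t in range(1, len(frames)):
        diff = np.abs(frames[t].astype(float) - frames[t - 1].astype(float))
        if diff.mean() > diff_threshold:
            return t
    return None
```

In a real pipeline the difference map would also be localized (e.g. thresholded per region) so the bleeding-point position, not just the frame index, is recovered.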
Those skilled in the art will appreciate that all or part of the procedures in the methods described above may be implemented by a computer program stored on a non-transitory computer-readable storage medium; when executed, the program may include the procedures of the embodiments of the methods above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase-change memory (PCM), graphene memory, and the like. Volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take various forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided in the present application may be, but are not limited to, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, or data processing logic units based on quantum computing.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, any combination of these technical features should be considered within the scope of this specification as long as it involves no contradiction.
The foregoing examples represent only a few embodiments of the application, and their description, while specific and detailed, should not be construed as limiting the scope of the application. It should be noted that those skilled in the art can make several variations and improvements without departing from the concept of the application, all of which fall within the protection scope of the application. Accordingly, the protection scope of the application shall be subject to the appended claims.

Claims (10)

1. A method of detecting bleeding points, comprising:
Acquiring initial images under different light source directions of a target endoscope;
synthesizing the initial images in different light source directions to obtain a stereoscopic image at a target moment;
Arranging the stereoscopic images at the target moment according to a time sequence to obtain a historical sequence image;
acquiring an image to be detected, wherein the image to be detected comprises an image of tissue or the inside of a cavity acquired by the target endoscope;
Determining a conversion relation between the image to be detected and a sample image based on the image to be detected, wherein the sample image is an image comprising bleeding points in the historical sequence image;
Converting the positions of the bleeding points in the sample image into the image to be detected according to the conversion relation to obtain the positions of the bleeding points in the image to be detected;
the acquiring initial images under different light source directions of the target endoscope comprises:
changing the direction of a light source by adjusting the shading position of a shading sheet on the target endoscope, wherein the shading sheet is positioned at the end part of a lens body of the target endoscope and covers part of the light source;
And acquiring images of the operating environment of the target endoscope under different shading positions of the shading sheet as the initial images.
2. The method of claim 1, wherein the determining a conversion relationship between the image to be detected and a sample image based on the image to be detected comprises:
acquiring position information of a matched characteristic point pair between the image to be detected and the sample image;
determining an affine transformation matrix between the image to be detected and the sample image according to the position information of the matched characteristic point pairs;
the affine transformation matrix is determined as a conversion relationship between the image to be detected and the sample image.
3. The method according to claim 2, wherein the acquiring the position information of the matched pair of feature points between the image to be detected and the sample image includes:
extracting features of the image to be detected to obtain multi-dimensional features of at least one first feature point in the image to be detected;
Performing feature matching on the multi-dimensional features of the at least one first feature point and the multi-dimensional features of the at least one second feature point in the sample image, and determining the first feature point and the second feature point with the highest feature matching degree as the matched feature point pair;
and acquiring the position information of the first characteristic point in the matched characteristic point pair in the image to be detected and the position information of the second characteristic point in the matched characteristic point pair in the sample image.
4. The method of claim 1, wherein the acquiring the image to be detected comprises:
Acquiring an initial image to be detected, and determining the similarity between the initial image to be detected and the sample image;
and if the similarity is greater than a similarity threshold, determining the initial image to be detected as the image to be detected.
5. A method according to claim 3, characterized in that the method further comprises:
and obtaining a first characteristic point in the image to be detected and a second characteristic point in the sample image by adopting a rapid characteristic point extraction and description characteristic extraction algorithm or a deep learning neural network model.
6. The method of claim 4, wherein the similarity is determined based on a structural similarity measure or a vector cosine value between the initial image to be detected and the sample image.
7. The method according to any one of claims 1-4, further comprising:
Performing differential processing on two adjacent frames of stereoscopic images in the historical sequence image to obtain differential characteristics between the two adjacent frames of stereoscopic images in the historical sequence image;
And determining a stereoscopic image of the first bleeding point according to the difference characteristics, and taking the stereoscopic image of the first bleeding point as the sample image.
8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 7 when executing the computer program.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.
10. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.
CN202510806954.XA 2022-06-27 2022-06-27 Bleeding point detection method, computer device, storage medium, and program product Pending CN120746952A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202510806954.XA CN120746952A (en) 2022-06-27 2022-06-27 Bleeding point detection method, computer device, storage medium, and program product

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202510806954.XA CN120746952A (en) 2022-06-27 2022-06-27 Bleeding point detection method, computer device, storage medium, and program product
CN202210736992.9A CN117372313B (en) 2022-06-27 2022-06-27 Bleeding point detection method, computer device, storage medium, and program product

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202210736992.9A Division CN117372313B (en) 2022-06-27 2022-06-27 Bleeding point detection method, computer device, storage medium, and program product

Publications (1)

Publication Number Publication Date
CN120746952A true CN120746952A (en) 2025-10-03

Family

ID=89389690

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202510806954.XA Pending CN120746952A (en) 2022-06-27 2022-06-27 Bleeding point detection method, computer device, storage medium, and program product
CN202210736992.9A Active CN117372313B (en) 2022-06-27 2022-06-27 Bleeding point detection method, computer device, storage medium, and program product

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202210736992.9A Active CN117372313B (en) 2022-06-27 2022-06-27 Bleeding point detection method, computer device, storage medium, and program product

Country Status (1)

Country Link
CN (2) CN120746952A (en)

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102163327B1 (en) * 2012-07-25 2020-10-08 인튜어티브 서지컬 오퍼레이션즈 인코포레이티드 Efficient and interactive bleeding detection in a surgical system
WO2017006618A1 (en) * 2015-07-09 2017-01-12 オリンパス株式会社 Server, endoscopic system, transmission method, and program
CN110245671B (en) * 2019-06-17 2021-05-28 艾瑞迈迪科技石家庄有限公司 Endoscope image feature point matching method and system
EP3769659A1 (en) * 2019-07-23 2021-01-27 Koninklijke Philips N.V. Method and system for generating a virtual image upon detecting an obscured image in endoscopy
CN111080593B (en) * 2019-12-07 2023-06-16 上海联影智能医疗科技有限公司 An image processing device, method and storage medium
CN115135223A (en) * 2020-02-21 2022-09-30 奥林巴斯株式会社 Image processing system, endoscope system, and image processing method
CN112348125B (en) * 2021-01-06 2021-04-02 安翰科技(武汉)股份有限公司 Capsule endoscope image recognition method, equipment and medium based on deep learning
CN113421231B (en) * 2021-06-08 2023-02-28 杭州海康威视数字技术股份有限公司 Bleeding point detection method, device and system
CN113781499B (en) * 2021-08-27 2024-07-26 上海微创医疗机器人(集团)股份有限公司 Medical mirror state detection method, robot control method and system

Also Published As

Publication number Publication date
CN117372313A (en) 2024-01-09
CN117372313B (en) 2025-06-24


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination