CN113017544B - Sectional completeness self-checking method and device for capsule endoscope and readable storage medium - Google Patents
- Publication number
- CN113017544B (granted from application CN202110284699.9A / CN202110284699A)
- Authority
- CN
- China
- Prior art keywords
- capsule endoscope
- working
- area
- voxel
- virtual positioning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- A61B1/00006 — Operational features of endoscopes characterised by electronic signal processing of control signals
- A61B1/00057 — Operational features of endoscopes provided with means for testing or calibration
- A61B1/041 — Capsule endoscopes for imaging
- A61B1/273 — Endoscopes for the upper alimentary canal, e.g. oesophagoscopes, gastroscopes
Abstract
The invention provides a partition-completeness self-checking method and device for a capsule endoscope, and a readable storage medium. The method comprises: dividing a working area into a plurality of sub-working areas; establishing a virtual positioning area corresponding to each sub-working area; and driving the capsule endoscope to move in the working area while shooting images and synchronously executing step A, where for each virtual positioning area, step A is no longer executed synchronously once the proportion of voxels marked with the lighting identifier in that area is not less than a preset proportion threshold. Step A comprises: sequentially recording the position and field-of-view orientation of each working point in a space coordinate system; at each working point, confirming the intersection region of the field of view of the capsule endoscope and the virtual positioning area in which it is currently located, according to the position and field-of-view orientation of the current working point; and marking the lighting identifier on voxel points that lie in the intersection region and are not yet marked. The invention realizes completeness self-checking of the capsule endoscope.
Description
Technical Field
The invention relates to the field of medical equipment, and in particular to a partition-completeness self-checking method and device for a capsule endoscope, and a readable storage medium.
Background
Capsule endoscopes are increasingly used for examination of the digestive tract. A capsule endoscope is swallowed, passes through the oral cavity, esophagus, stomach, small intestine and large intestine, and is finally excreted from the body. In general, the capsule is propelled passively by peristalsis of the digestive tract and takes images at a certain frame rate along the way, so that doctors can examine the health of every section of the patient's digestive tract.
Compared with a traditional intubation endoscope, the capsule endoscope has advantages such as no cross-infection risk, no injury to the human body, and good tolerability. However, the traditional endoscope is far more controllable, and long practice has produced relatively complete operating procedures that ensure the relative completeness of an examination; completeness self-checking schemes for the newer capsule endoscope remain insufficient.
On the one hand, the capsule endoscope is poorly controllable: affected by peristalsis and other motion of the examined space, its shooting is partly random, and even when operated with an external magnetic control device the examined space is difficult to photograph completely, i.e. regions can be missed. On the other hand, this poor controllability, together with the lack of position and attitude feedback, means that no well-established operating rules guarantee completeness of the examination. In addition, the capsule endoscope cannot clean its lens, so its image resolution is noticeably lower than that of an intubation endoscope and the images are not always clear, which can also leave the examination incomplete.
Disclosure of Invention
In order to solve the above technical problems, the present invention provides a method for self-checking the completeness of a capsule endoscope, an electronic device and a readable storage medium.
To achieve one of the above objects, an embodiment of the present invention provides a partition-completeness self-checking method for a capsule endoscope, comprising: acquiring a working area of the capsule endoscope and dividing it into a plurality of sub-working areas, wherein any straight line passing through a sub-working area has at most two intersection points with that sub-working area;
correspondingly establishing a virtual positioning area for each sub-working area, wherein the virtual positioning area and the working area are in the same space coordinate system, and the virtual positioning area completely covers the working area;
dividing each virtual positioning area into a plurality of adjacent voxels with the same size, wherein each voxel has a unique identifier and a coordinate;
driving the capsule endoscope to move in the working area, sequentially recording, at a predetermined frequency, the images shot when the capsule endoscope reaches each working point, while synchronously executing step A to mark lighting identifiers on voxels in each virtual positioning area; for each virtual positioning area, once the proportion of voxels marked with the lighting identifier in that area is not less than a preset proportion threshold, step A is no longer executed synchronously for it;
In the initial state, each voxel point is not marked with a lighting identifier;
the step A comprises the following steps:
sequentially recording the position and the view field orientation of each working point in the space coordinate system;
at each working point in turn, confirming the intersection region of the field of view of the capsule endoscope and the virtual positioning area in which it is currently located, according to the position and field-of-view orientation of the current working point;
and marking the lightening identification for the voxel points which are in the intersection region and are not marked with the lightening identification.
As a further improvement of an embodiment of the present invention, the driving of the capsule endoscope in the working area, the sequential recording at a predetermined frequency of the images shot at each working point, and the synchronous execution of step A to mark lighting identifiers on voxels comprise:
scoring the image acquired at each working point; if the score of the image acquired at the current working point is not less than a preset score, executing step A synchronously, and if it is less than the preset score, skipping step A for the current working point.
As a further improvement of an embodiment of the present invention, when step a is executed, the method further includes:
if, at any working point, the virtual positioning area in which the capsule endoscope is currently located is not unique, taking the virtual positioning area with the smallest volume as the one corresponding to the current working point by default.
As a further improvement of an embodiment of the present invention, in executing step a, the labeling a lighting identifier for a voxel point which is located in the intersection area and is not labeled with a lighting identifier includes:
in the intersection region, acquiring the sight-line vector from the current working point to each voxel not yet marked with the lighting identifier, and merging the sight-line vectors corresponding to each voxel into the same vector set in the order in which the intersection regions are acquired;
and traversing the vector sets, and, for any vector set containing at least two sight-line vectors whose included angle is greater than a preset included-angle threshold, marking the lighting identifier on the voxel corresponding to that vector set.
As a further improvement of an embodiment of the present invention, when step a is executed, the method further includes:
if the distance between two positioning points is less than a preset distance threshold and the included angle between their view directions is less than a preset included-angle threshold, then, when traversing the vector sets within the intersection of the fields of view of the two positioning points, omitting the calculation of the included angle between the two sight-line vectors corresponding to the two positioning points for each voxel in that intersection.
As a further improvement of an embodiment of the present invention, the method further comprises:
judging whether the ratio of the voxel points marked with the lightening marks in each virtual positioning area is not less than a preset ratio threshold value in real time,
if so, driving the capsule endoscope to exit the working mode;
if not, the capsule endoscope is driven to continue the working mode.
As a further improvement of an embodiment of the present invention, the method further comprises:
when the capsule endoscope runs in a working area for a preset working time, judging whether the proportion of the voxel point marked with the lightening mark in each virtual positioning area is not less than a preset proportion threshold value,
if so, driving the capsule endoscope to exit the working mode;
if not, the capsule endoscope is driven to continue the working mode.
As a further improvement of an embodiment of the present invention, each of the virtual positioning areas is configured to be a sphere.
To achieve one of the above objects, an embodiment of the present invention provides an electronic device comprising a memory and a processor, the memory storing a computer program operable on the processor, wherein the processor, when executing the program, implements the steps of the above partition-completeness self-checking method for a capsule endoscope.
To achieve one of the above objects, an embodiment of the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the above partition-completeness self-checking method for a capsule endoscope.
Compared with the prior art, the invention has the following beneficial effects: the partition-completeness self-checking method, device and readable storage medium divide the working area into sub-working areas bounded by convex curved surfaces, establish for each sub-working area a virtual positioning area in the same space coordinate system, and mark lighting identifiers where the field of view intersects a virtual positioning area, so that completeness self-checking of the capsule endoscope is achieved and the detection rate is improved.
Drawings
FIG. 1 is a schematic flow chart of a self-test method for zonal completeness of a capsule endoscope according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of step A in FIG. 1;
fig. 3 and 4 are schematic structural diagrams of a specific example of the present invention, respectively.
Detailed Description
The present invention will be described in detail below with reference to specific embodiments shown in the drawings. These embodiments are not intended to limit the present invention, and structural, methodological, or functional changes made by those skilled in the art according to these embodiments are included in the scope of the present invention.
With reference to fig. 1 and 2, a first embodiment of the present invention provides a self-test method for partition completeness of a capsule endoscope, including:
s1, acquiring a working area of the capsule endoscope, dividing the working area into a plurality of sub-working areas, wherein a straight line passing through any one sub-working area has at most two intersection points with the current sub-working area;
correspondingly establishing a virtual positioning area for each sub-working area, wherein the virtual positioning area and the working area are in the same space coordinate system, and the virtual positioning area completely covers the working area;
s2, dividing each virtual positioning area into a plurality of adjacent voxels with the same size, wherein each voxel has a unique identifier and a coordinate;
s3, driving the capsule endoscope to move in the working area, sequentially recording images shot when the capsule endoscope reaches each working point according to a preset frequency, synchronously executing the step A to mark a lighting mark on a voxel in each virtual positioning area, and respectively executing the step A no longer synchronously when the proportion of the voxel marked with the lighting mark in the current virtual positioning area is not less than a preset proportion threshold value for each virtual positioning area;
In the initial state, each voxel point is not marked with a lighting identifier;
the step A comprises the following steps:
sequentially recording the position and the view field orientation of each working point in the space coordinate system;
confirming an intersection area of the visual field of the capsule endoscope and the virtual positioning area where the visual field is located under the current working point according to the position and the visual field direction of the current working point at each working point in sequence;
and marking the lightening identification for the voxel points which are in the intersection region and are not marked with the lightening identification.
For step S1: after the capsule endoscope reaches the working area, it quickly traverses the working area to outline it; key areas within the working area are then retrieved and located according to the specific working environment, the coordinates of key points in each key area are obtained, and the key points are connected in sequence to form a working area composed of key points. Referring to fig. 3, a specific example of the present invention takes a virtual stomach environment as the working area, with several key areas of the stomach. After the capsule endoscope rapidly traverses the working area, a working area composed of curved surfaces can be drawn; the more sampling points in each key area, the smoother the curved surface of the working area, at the cost of a correspondingly larger amount of computation. Further, the coordinates of a plurality of working points are obtained in each key area of the stomach and connected in sequence with straight lines to form the working area.
In a specific example of the present invention, the coordinates of a working point in each key area can be roughly estimated using the methods of the Chinese patent applications published as CN110335318A and CN110327046A, entitled "A method for measuring an object in the digestive tract based on a camera system"; the acquired coordinates are then connected with straight lines to form the working area described in the steps of the present invention.
After the working area is determined, it is divided into a plurality of sub-working areas; preferably, the division producing the fewest sub-working areas is taken as the optimal division. Specifically, any straight line passing through a sub-working area has at most two intersection points with that sub-working area, i.e. each sub-working area is a convex curved surface. In the specific example of the present invention shown in fig. 4, a sub-working area every surface of which bulges outward is defined as a sub-working area with a convex curved surface.
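As an illustrative sketch (not part of the patent text), the convexity condition above — any straight line crosses a sub-working area's boundary at most twice — can be checked computationally. The following Python tests the planar analogue, convexity of a closed polygonal cross-section, via the sign of consecutive edge cross products; the function name and the 2-D simplification are assumptions for the example.

```python
def is_convex(polygon):
    """Convexity test for a closed 2-D polygon, the planar analogue of a
    convex sub-working surface: the polygon is convex iff the cross
    products of all consecutive edge pairs share one sign, which is
    equivalent to any straight line crossing the boundary at most twice."""
    n, sign = len(polygon), 0
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        x3, y3 = polygon[(i + 2) % n]
        cross = (x2 - x1) * (y3 - y2) - (y2 - y1) * (x3 - x2)
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                # a reflex vertex: some straight line cuts the boundary more than twice
                return False
    return True
```

A square passes the test, while a dart-shaped (reflex) outline fails it, mirroring the patent's convex/non-convex distinction between admissible and inadmissible sub-working areas.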
Further, after the working area is determined, a virtual positioning area is correspondingly established for each sub-working area under the same space coordinate system with the working area; preferably, the plurality of virtual positioning areas have the same shape.
In a specific example of the present invention, the virtual positioning area is configured to be spherical, and for convenience of illustration, only one cross section is shown in the example of fig. 3, where each virtual positioning area covers its corresponding sub-working area.
For step S2: the virtual positioning area is discretized into a plurality of adjacent voxels of the same size. In a specific example of the invention, each voxel is configured as a cube whose side length lies in the range [1 mm, 5 mm]. Accordingly, each voxel has a unique identifier, e.g. a number, and coordinates; the coordinates may be the coordinate value of a fixed position of each voxel, for example one of its corner points. In a specific example of the present invention, the coordinate value of the center point of each voxel is used as the coordinate value of that voxel.
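As an illustrative sketch of the voxelization in step S2 (not the patent's implementation), the following Python discretizes a spherical virtual positioning area into cubic voxels, each with a unique identifier and a center-point coordinate; the function name, the 2 mm default side length, and the center-inside-sphere criterion are assumptions for the example.

```python
import numpy as np

def build_voxel_grid(center, radius, side=2.0):
    """Discretize a spherical virtual positioning area into cubic voxels.

    Returns {unique_id: centre_coordinate}. `side` is the cube edge
    length in mm (the description suggests the range [1 mm, 5 mm]).
    """
    center = np.asarray(center, dtype=float)
    n = int(np.ceil(radius / side))
    voxels, vid = {}, 0
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            for k in range(-n, n + 1):
                c = center + side * np.array([i, j, k], dtype=float)
                # keep only voxels whose centre point lies inside the sphere;
                # the centre point serves as the voxel's coordinate value
                if np.linalg.norm(c - center) <= radius:
                    voxels[vid] = c
                    vid += 1
    return voxels

grid = build_voxel_grid(center=(0.0, 0.0, 0.0), radius=10.0, side=2.0)
```

The voxel dictionary then serves as the bookkeeping structure on which lighting identifiers are marked in step A.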
It can be understood that, in practical application, a platform may be provided, after a user is located in a monitoring area of the platform, a virtual positioning area is automatically constructed according to the position of the user, and in the working process of the capsule endoscope, the user is always located in the monitoring area, that is, the virtual positioning area and the working area are ensured to be located in the same spatial coordinate system.
For step S3: after the sub-working areas and the corresponding virtual positioning areas are determined, the capsule endoscope is driven into the working area, and each working point is recorded at a predetermined frequency. According to specific requirements, the image shot at each working point, the coordinate value P(x, y, z) of each working point, and the field-of-view orientation M can be selectively recorded. The field-of-view orientation here is the attitude of the capsule endoscope, for example Euler angles (yaw, pitch, roll), a quaternion, or the vector coordinates of the orientation. From the field-of-view orientation, the field of view shot by the capsule endoscope in direction M at the current coordinate point can be obtained: it is a cone whose apex is the current coordinate point and whose axis extends along the direction M. Image shooting, position-coordinate localization, and recording of the field-of-view orientation by a capsule endoscope are prior art and are not further described herein.
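The conical field-of-view model described above can be sketched as a simple membership test (an assumption-laden illustration, not the patent's code): a voxel center lies in the view when the angle between its sight line and the cone axis M is within the half-angle. The half-angle and optional depth cutoff are hypothetical parameters.

```python
import numpy as np

def in_view_cone(p, m, half_angle_deg, voxel_center, max_depth=None):
    """True when voxel_center lies in the cone of view whose apex is the
    working-point position P(x, y, z) and whose axis runs along the
    field-of-view orientation M."""
    v = np.asarray(voxel_center, float) - np.asarray(p, float)
    d = np.linalg.norm(v)
    if d == 0.0:
        return True               # the apex itself is trivially visible
    if max_depth is not None and d > max_depth:
        return False              # beyond the assumed effective imaging depth
    axis = np.asarray(m, float)
    axis = axis / np.linalg.norm(axis)
    # cosine of the angle between the sight line and the cone axis
    cos_angle = float(np.dot(v, axis)) / d
    return cos_angle >= np.cos(np.radians(half_angle_deg))
```

Intersecting this cone with the spherical virtual positioning area (fig. 3) yields the region whose unlit voxels step A marks with the lighting identifier.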
In a preferred embodiment of the present invention, step S3 further includes: and B, scoring the image acquired by each working point, if the score of the image acquired by the current working point is not smaller than a preset score, synchronously executing the step A, and if the score of the image acquired by the current working point is smaller than the preset score, skipping the step A for the current working point.
Images can be scored in a number of ways in the prior art; for example, the Chinese patent published as CN111932532B, entitled "Capsule endoscope no-reference image evaluation method, electronic device and medium", is incorporated in the present application. The score in the present invention may be the image-quality evaluation score and/or the image-content evaluation score and/or the comprehensive score of the incorporated patent, and is not described again here.
Preferably, on reaching each working point, step A is executed synchronously to mark lighting identifiers on voxels; for each virtual positioning area, step A is no longer executed synchronously once the proportion of voxels marked with the lighting identifier in that area is not less than the preset proportion threshold. The detection completeness of the capsule endoscope can be determined from the proportion of lit identifiers: the higher the proportion, the more comprehensively the capsule has covered the working area.
For step A: in the initial state, every voxel point is unmarked by default. The lighting identifier is a generic label; after step A, a voxel point can be identified in a number of ways, for example by marking the corresponding voxel points with the same code or the same color. As the procedure runs, different voxel points are lit in turn, and the detection progress over the working area is determined from the proportion of voxels marked with the lighting identifier. Of course, in other embodiments of the invention, all voxels may be lit in the initial state and extinguished in turn following step A, which is not described again here.
It should be noted that, because each sub-working area is a convex curved surface, there is no occlusion within the field of view when shooting from any working point of the current sub-working area, so the voxels within the shooting field of view can be photographed completely.
As shown in fig. 3, the working area is divided into two sub-working areas, covered respectively by a larger virtual positioning area X1 and a smaller virtual positioning area X2. For step A, the conical view region of each working point can be calculated from its field-of-view orientation; this cone and the spherical virtual positioning area then have an intersection region. For example, coordinate point P1 with field-of-view orientation M yields the intersection region a1; accordingly, when the capsule endoscope is at coordinate point P1, all voxels within the intersection region a1 are marked with the lighting identifier.
Preferably, if the virtual positioning area in which the capsule endoscope is currently located is not unique at a working point, the smallest-volume virtual positioning area among those whose voxel points are not all lit is taken by default as the virtual positioning area corresponding to the current working point.
When the sub-working areas are divided, errors may make some of them non-convex. To guard against this, in a second preferred embodiment of the present invention, when step A is executed, marking the lighting identifier on voxel points that lie in the intersection region and are not yet marked comprises: in the intersection region, acquiring the sight-line vector from the current working point to each voxel not yet marked with the lighting identifier, and merging the sight-line vectors corresponding to each voxel into the same vector set in the order in which the intersection regions are acquired; then traversing the vector sets, and, for any vector set containing at least two sight-line vectors whose included angle is greater than a preset included-angle threshold, marking the lighting identifier on the voxel corresponding to that vector set.
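The sight-vector accumulation of this second embodiment can be sketched as follows (an illustrative assumption-based example, not the patent's code): each voxel keeps a set of unit sight-line vectors, and lights up once two of them differ by more than the included-angle threshold, i.e. the voxel has been seen from two sufficiently different viewpoints. The function name and the 20° default threshold are hypothetical.

```python
import numpy as np

def update_and_light(sight_vectors, lit, vid, new_vec, angle_thresh_deg=20.0):
    """Merge the sight-line vector from the current working point to voxel
    `vid` into that voxel's vector set, and light the voxel as soon as two
    of its sight vectors differ by more than the preset angle threshold."""
    new_vec = np.asarray(new_vec, float)
    new_vec = new_vec / np.linalg.norm(new_vec)
    vecs = sight_vectors.setdefault(vid, [])
    for v in vecs:
        cos = float(np.clip(np.dot(v, new_vec), -1.0, 1.0))
        if np.degrees(np.arccos(cos)) > angle_thresh_deg:
            # observed from two sufficiently different viewpoints: mark it lit
            lit.add(vid)
            break
    vecs.append(new_vec)

sight, lit = {}, set()
update_and_light(sight, lit, 7, (0.0, 0.0, 1.0))
update_and_light(sight, lit, 7, (0.0, 0.05, 1.0))  # ~2.9 deg apart: not enough
update_and_light(sight, lit, 7, (0.0, 1.0, 1.0))   # ~45 deg apart: voxel 7 lights
```

Requiring two well-separated sight lines ensures a voxel is only counted as covered when a non-convex fold could not have occluded it from a single viewpoint.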
Preferably, when the second preferred embodiment executes step A, the method further includes: if the distance between two positioning points is less than a preset distance threshold and the included angle between their view directions is less than a preset included-angle threshold, then, when traversing the vector sets within the intersection of the fields of view of the two positioning points, the calculation of the included angle between the two sight-line vectors corresponding to the two positioning points is omitted for each voxel in that intersection. When the deviation between the two positioning points is small, their intersection regions approximately coincide, and the voxel points inside them are very unlikely to become newly markable; this step therefore reduces the amount of computation while preserving the accuracy of the result.
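This optimization can be sketched as a cheap pre-check before the per-voxel pass (names and default thresholds are assumptions for illustration):

```python
import numpy as np

def can_skip_angle_pass(p1, m1, p2, m2, dist_thresh=3.0, angle_thresh_deg=5.0):
    """Pre-check for two successive positioning points: when they are close
    in both position and view direction, their intersection regions nearly
    coincide, so the per-voxel included-angle computation may be skipped."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    if np.linalg.norm(p1 - p2) >= dist_thresh:
        return False
    u = np.asarray(m1, float); u = u / np.linalg.norm(u)
    w = np.asarray(m2, float); w = w / np.linalg.norm(w)
    cos = float(np.clip(np.dot(u, w), -1.0, 1.0))
    return bool(np.degrees(np.arccos(cos)) < angle_thresh_deg)
```

Two nearly identical frames (small baseline, near-parallel view axes) cannot push any voxel's sight-vector pair past the included-angle threshold, so skipping them changes nothing in the marking result.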
In general, the two positioning points are usually two coordinate points that are located in the same detection area and are obtained sequentially, which is not described herein again.
As step A runs, the voxel points of the virtual positioning areas are marked and lit one after another. Ideally, every voxel point of each virtual positioning area would be lit by the time the capsule endoscope finishes working; in actual operation, interference from various factors causes errors, so a preset proportion threshold is set. When the proportion of voxels marked with the lighting identifier in a virtual positioning area is not less than the preset proportion threshold, the monitoring range of the capsule endoscope is deemed to meet the standard; in this way, the lit voxels of the virtual positioning areas support the completeness self-check of the capsule endoscope.
Furthermore, the detection result can be visualized: by observing the lighting identifiers marked on the virtual positioning areas, the user can assist in checking the detection area of the capsule endoscope, which is not described again here.
Preferably, the method further comprises: judging whether the proportion of the voxel points marked with the lightening marks in the virtual positioning area is not less than a preset proportion threshold value or not in real time, if so, driving the capsule endoscope to exit the working mode; if not, the capsule endoscope is driven to continue the working mode.
Preferably, the method further comprises: when the capsule endoscope has run in the working area for a preset working time, judging whether the proportion of the voxel points marked with the lighting mark in the virtual positioning area is not less than the preset proportion threshold; if so, driving the capsule endoscope to exit the working mode; if not, driving the capsule endoscope to continue the working mode. Judging whether to end the working mode according to the proportion of lit voxel points in the virtual positioning area allows the working area to be observed from multiple visual angles. Under this multi-visual-angle criterion, the number of images shot of the same area increases, which guarantees the completeness of shooting; in later image applications the same area can then be observed from multiple angles, yielding a better observation result and improving the detection rate.
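The multi-visual-angle criterion behind this judgment (a voxel counts as observed only when at least two sight-line vectors to it subtend a sufficiently large included angle, cf. claim 4) could be sketched as follows; the function name and threshold value are illustrative assumptions, not part of the patent:

```python
import math

def mark_lit(sight_vectors, angle_threshold_deg=20.0):
    """Mark a voxel lit only when at least two recorded sight-line
    vectors to it subtend an included angle above the threshold, i.e.
    the voxel has been observed from sufficiently different viewpoints.
    The threshold value is an illustrative assumption."""
    def angle(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(a * a for a in v))
        cos_a = max(-1.0, min(1.0, dot / (nu * nv)))
        return math.degrees(math.acos(cos_a))

    if len(sight_vectors) < 2:
        return False
    return any(angle(u, v) > angle_threshold_deg
               for i, u in enumerate(sight_vectors)
               for v in sight_vectors[i + 1:])

# Seen twice from almost the same direction: not lit yet.
print(mark_lit([(0, 0, 1), (0.01, 0, 1)]))   # False
# Seen from two clearly different angles: lit.
print(mark_lit([(0, 0, 1), (1, 0, 1)]))      # True
```

A voxel observed from only one viewpoint, or repeatedly from near-identical viewpoints, stays unlit, which is what forces the system to accumulate views of the same area from different angles.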
Further, an embodiment of the present invention provides an electronic device comprising a memory and a processor, wherein the memory stores a computer program operable on the processor, and the processor, when executing the program, implements the steps of the sectional completeness self-checking method for a capsule endoscope.
Further, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the sectional completeness self-checking method for a capsule endoscope.
In summary, the sectional completeness self-checking method and device for a capsule endoscope and the readable storage medium of the present invention divide the working area into sub-working areas bounded by convex curved surfaces, establish for each sub-working area a corresponding virtual positioning area in the same spatial coordinate system, and mark lighting marks where the sub-working areas intersect the virtual positioning areas, thereby implementing the completeness self-check of the capsule endoscope and improving the detection probability. At the same time, the detection result can be visualized, improving the convenience of operating the capsule endoscope.
It should be understood that although this description refers to embodiments, not every embodiment contains only a single technical solution; this manner of description is adopted for clarity only. Those skilled in the art should take the description as a whole, and the technical solutions in the embodiments may also be combined appropriately to form other embodiments understandable to those skilled in the art.
The detailed description above sets out only specific descriptions of feasible embodiments of the present invention and is not intended to limit the scope of protection of the present invention; equivalent embodiments or modifications made without departing from the technical spirit of the present invention shall fall within the scope of the present invention.
Claims (10)
1. A sectional completeness self-checking method for a capsule endoscope, which is characterized by comprising the following steps:
acquiring a working area of the capsule endoscope and dividing the working area into a plurality of sub-working areas, wherein any straight line passing through any one sub-working area has at most two intersection points with the current sub-working area;
correspondingly establishing a virtual positioning area for each sub-working area, wherein the virtual positioning area and the working area are in the same space coordinate system, and the virtual positioning area completely covers the working area;
dividing each virtual positioning area into a plurality of adjacent voxels with the same size, wherein each voxel has a unique identifier and a unique coordinate;
driving the capsule endoscope to move in the working area, sequentially recording the images shot when the capsule endoscope reaches each working point according to a preset frequency, synchronously executing step A to mark a lighting mark on voxels in each virtual positioning area, and, for each virtual positioning area, no longer synchronously executing step A once the proportion of voxels marked with the lighting mark in the current virtual positioning area is not less than a preset proportion threshold;
wherein, in the initial state, no voxel point is marked with a lighting mark;
the step A comprises the following steps:
sequentially recording the position and the view field orientation of each working point in the space coordinate system;
at each working point in sequence, confirming, according to the position and view field orientation of the current working point, the intersection area of the visual field of the capsule endoscope at the current working point and the virtual positioning area in which it is located;
and marking the lighting mark on the voxel points which are in the intersection area and are not yet marked with the lighting mark.
2. The sectional completeness self-checking method for a capsule endoscope according to claim 1, wherein driving the capsule endoscope to move in the working area, sequentially recording the images shot when the capsule endoscope reaches each working point according to the preset frequency, and synchronously executing step A to mark the lighting mark on voxels comprises:
scoring the image acquired at each working point; if the score of the image acquired at the current working point is not smaller than a preset score, synchronously executing step A; and if the score of the image acquired at the current working point is smaller than the preset score, skipping step A for the current working point.
3. The sectional completeness self-checking method for a capsule endoscope according to claim 1, wherein, when performing step A, the method further comprises:
if, at any working point, the virtual positioning area where the capsule endoscope is currently located is not unique, taking by default, as the virtual positioning area corresponding to the current working point, the virtual positioning area with the smallest volume among those whose voxel points are not all lit.
4. The sectional completeness self-checking method for a capsule endoscope according to claim 1, wherein, in step A, marking the lighting mark on the voxel points which are in the intersection area and are not marked with the lighting mark comprises:
in the intersection area, acquiring the sight-line vector between the current working point and each voxel not marked with the lighting mark and, at the same time, sequentially merging the sight-line vector corresponding to each voxel into the same vector set in the acquisition order of the intersection area;
and traversing the vector sets, and, if the number of sight-line vectors in any vector set is at least 2 and the included angle between two of its sight-line vectors is larger than a preset included-angle threshold, marking the lighting mark on the voxel corresponding to the current vector set.
5. The sectional completeness self-checking method for a capsule endoscope according to claim 4, wherein, when performing step A, the method further comprises:
if the distance between two positioning points is smaller than a preset distance threshold and the included angle between the view directions of the two positioning points is smaller than the preset included-angle threshold, omitting, when traversing the vector sets that intersect within the view ranges of the two positioning points, the calculation of the included angle between each voxel in the view intersection range and the two sight-line vectors corresponding to the two positioning points.
6. The sectional completeness self-checking method for a capsule endoscope according to claim 1, wherein the method further comprises:
judging in real time whether the proportion of the voxel points marked with the lighting mark in each virtual positioning area is not less than the preset proportion threshold,
if so, driving the capsule endoscope to exit the working mode;
if not, the capsule endoscope is driven to continue the working mode.
7. The sectional completeness self-checking method for a capsule endoscope according to claim 1, wherein the method further comprises:
when the capsule endoscope has run in the working area for a preset working time, judging whether the proportion of the voxel points marked with the lighting mark in each virtual positioning area is not less than the preset proportion threshold,
if so, driving the capsule endoscope to exit the working mode;
if not, the capsule endoscope is driven to continue the working mode.
8. The sectional completeness self-checking method for a capsule endoscope according to claim 1, wherein each of the virtual positioning areas is configured to be spherical.
9. An electronic device comprising a memory and a processor, the memory storing a computer program operable on the processor, wherein the processor, when executing the program, implements the steps of the sectional completeness self-checking method for a capsule endoscope according to any one of claims 1-8.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the sectional completeness self-checking method for a capsule endoscope according to any one of claims 1-8.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110284699.9A CN113017544B (en) | 2021-03-17 | 2021-03-17 | Sectional completeness self-checking method and device for capsule endoscope and readable storage medium |
PCT/CN2022/080076 WO2022194015A1 (en) | 2021-03-17 | 2022-03-10 | Area-by-area completeness self-checking method of capsule endoscope, electronic device, and readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110284699.9A CN113017544B (en) | 2021-03-17 | 2021-03-17 | Sectional completeness self-checking method and device for capsule endoscope and readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113017544A CN113017544A (en) | 2021-06-25 |
CN113017544B true CN113017544B (en) | 2022-07-29 |
Family
ID=76470911
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110284699.9A Active CN113017544B (en) | 2021-03-17 | 2021-03-17 | Sectional completeness self-checking method and device for capsule endoscope and readable storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113017544B (en) |
WO (1) | WO2022194015A1 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112998630B (en) * | 2021-03-17 | 2022-07-29 | 安翰科技(武汉)股份有限公司 | Self-checking method for completeness of capsule endoscope, electronic equipment and readable storage medium |
CN113017544B (en) * | 2021-03-17 | 2022-07-29 | 安翰科技(武汉)股份有限公司 | Sectional completeness self-checking method and device for capsule endoscope and readable storage medium |
CN113951808A (en) * | 2021-12-10 | 2022-01-21 | 广州思德医疗科技有限公司 | Method, device and system for acquiring stomach image of non-magnetic control capsule endoscope |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7194117B2 (en) * | 1999-06-29 | 2007-03-20 | The Research Foundation Of State University Of New York | System and method for performing a three-dimensional virtual examination of objects, such as internal organs |
US7447342B2 (en) * | 2003-09-22 | 2008-11-04 | Siemens Medical Solutions Usa, Inc. | Method and system for using cutting planes for colon polyp detection |
US20080117210A1 (en) * | 2006-11-22 | 2008-05-22 | Barco N.V. | Virtual endoscopy |
DE102010009884A1 (en) * | 2010-03-02 | 2011-09-08 | Friedrich-Alexander-Universität Erlangen-Nürnberg | Method and device for acquiring information about the three-dimensional structure of the inner surface of a body cavity |
DE102011076928A1 (en) * | 2011-06-03 | 2012-12-06 | Siemens Ag | Method and device for carrying out an examination of a body cavity of a patient |
CN109907720A (en) * | 2019-04-12 | 2019-06-21 | 重庆金山医疗器械有限公司 | Video image dendoscope auxiliary examination method and video image dendoscope control system |
CN110335318B (en) * | 2019-04-28 | 2022-02-11 | 安翰科技(武汉)股份有限公司 | Method for measuring object in digestive tract based on camera system |
CN110136808B (en) * | 2019-05-23 | 2022-05-24 | 安翰科技(武汉)股份有限公司 | Auxiliary display system of shooting device |
CN112998630B (en) * | 2021-03-17 | 2022-07-29 | 安翰科技(武汉)股份有限公司 | Self-checking method for completeness of capsule endoscope, electronic equipment and readable storage medium |
CN113017544B (en) * | 2021-03-17 | 2022-07-29 | 安翰科技(武汉)股份有限公司 | Sectional completeness self-checking method and device for capsule endoscope and readable storage medium |
- 2021-03-17: CN application CN202110284699.9A granted as patent CN113017544B/en (status: Active)
- 2022-03-10: WO application PCT/CN2022/080076 filed as WO2022194015A1/en (Application Filing)
Also Published As
Publication number | Publication date |
---|---|
WO2022194015A1 (en) | 2022-09-22 |
CN113017544A (en) | 2021-06-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112998630B (en) | Self-checking method for completeness of capsule endoscope, electronic equipment and readable storage medium | |
CN113017544B (en) | Sectional completeness self-checking method and device for capsule endoscope and readable storage medium | |
US9460536B2 (en) | Endoscope system and method for operating endoscope system that display an organ model image to which an endoscopic image is pasted | |
US7922652B2 (en) | Endoscope system | |
US9538907B2 (en) | Endoscope system and actuation method for displaying an organ model image pasted with an endoscopic image | |
CN113052956B (en) | Method, device and medium for constructing film reading model based on capsule endoscope | |
US20070161854A1 (en) | System and method for endoscopic measurement and mapping of internal organs, tumors and other objects | |
US9521944B2 (en) | Endoscope system for displaying an organ model image to which an endoscope image is pasted | |
CN102596003B (en) | System for determining airway diameter using endoscope | |
CN108289598A (en) | Trace system | |
CN113768527A (en) | Real-time three-dimensional reconstruction method, device and medium based on CT and ultrasonic image fusion | |
CN111091562A (en) | A method and system for measuring the size of gastrointestinal lesions | |
US12299922B2 (en) | Luminal structure calculation apparatus, creation method for luminal structure information, and non-transitory recording medium recording luminal structure information creation program | |
KR102285008B1 (en) | System for tracking motion of medical device using marker | |
CN114916898A (en) | Automatic control inspection method, system, equipment and medium for magnetic control capsule | |
Dimas et al. | Endoscopic single-image size measurements | |
CN115120350A (en) | Computer-readable storage medium, electronic device, position calibration and robotic system | |
US20230419535A1 (en) | Endoscope system and method of operating the same | |
US20230215022A1 (en) | Image-based motion detection method | |
CN115024805A (en) | Method, system and storage medium for assisting puncture of endoscopic surgery | |
EP1992274B1 (en) | Medical image processing device and medical image processing method | |
CN114391954B (en) | Computer-readable storage medium, electronic device, and surgical robot system | |
JPS6323636A (en) | Endoscope image diagnostic apparatus | |
JPH02297515A (en) | Stereoscopic electronic endoscope | |
CN119367010A (en) | A method for surgical puncture navigation using virtual reality technology |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||