Disclosure of Invention
The invention aims to provide a focus confirmation method and a focus confirmation system for endoscopic surgery, which address the prior-art problems that a focus is often located at a position that is not easy to perceive and that adjusting the operating table takes considerable time.
In a first aspect, an embodiment of the present application provides a method for confirming a lesion in an endoscopic surgery, including the steps of:
acquiring a scanning image of a focus organ, wherein the scanning image comprises a preliminary positioning focus position;
performing three-dimensional modeling on the focus organ according to the scanning image of the focus organ to obtain a focus organ three-dimensional model, wherein the focus organ three-dimensional model comprises the preliminary positioning focus position;
inputting the scanning image of the focus organ into a preset bed adjustment model to generate bed angle information, and adjusting the operating table according to the bed angle information;
acquiring real-time picture information of the vision endoscope rod and adjusting the focus organ three-dimensional model accordingly, so that the display view angle of the focus organ three-dimensional model is consistent with the view angle of the real-time picture;
and calculating the relative position of the positioning endoscope rod and the preliminary positioning focus position according to the current positioning endoscope rod position and the preliminary positioning focus position, so as to control the positioning endoscope rod and the vision endoscope rod to determine the focus position.
In the implementation process, a scanning image of the focus organ is acquired, and the scanning image comprises a preliminary positioning focus position, so that the preliminary positioning focus position can be displayed. The focus organ is then three-dimensionally modeled according to the scanning image to obtain a focus organ three-dimensional model that also contains the preliminary positioning focus position; the three-dimensional model helps the doctor observe the focus organ. Next, the scanning image of the focus organ is input into a preset bed adjustment model, which outputs the most suitable bed angle information, and the operating table is adjusted accordingly, saving operating table adjustment time. The focus organ three-dimensional model is then adjusted according to real-time picture information of the vision endoscope rod, so that the display view angle of the three-dimensional model is consistent with the view angle of the real-time picture, assisting the doctor in observing the focus. Finally, the relative position of the positioning endoscope rod and the preliminary positioning focus position is calculated according to the current positioning endoscope rod position and the preliminary positioning focus position, and the positioning endoscope rod and the vision endoscope rod are controlled accordingly to determine the focus position. In this way the focus position can be confirmed quickly, and the time spent adjusting the operating table is shortened.
Based on the first aspect, in some embodiments of the invention, the method further comprises the steps of:
and inputting the scanning image of the focus organ into a preset operation parameter preconditioning model to generate optimal bed angle information, optimal positioning endoscope rod parameter information and optimal vision endoscope rod parameter information.
Based on the first aspect, in some embodiments of the invention, the method further comprises the steps of:
acquiring patient information corresponding to the scanning image of the focus organ;
and matching in a preset bed adjustment model library according to the sex information in the patient information to obtain a corresponding bed adjustment model.
Based on the first aspect, in some embodiments of the invention, the method further comprises the steps of:
acquiring final operating table angle information of multiple endoscopic surgeries and scanning image information of the corresponding focus organs as sample information;
training the sample information with a neural network algorithm to obtain the bed adjustment model.
Based on the first aspect, in some embodiments of the invention, the method further comprises the steps of:
acquiring final operating table angle information of multiple endoscopic surgeries and scanning image information of the corresponding focus organs as initial samples;
acquiring medical record information of each endoscopic surgery;
classifying the initial samples according to the medical record information and preset classification rules to obtain sub-sample information of multiple categories;
and training the sub-sample information of each category with a neural network algorithm to obtain bed adjustment models of multiple categories, so as to form a bed adjustment model library.
Based on the first aspect, in some embodiments of the invention, the step of acquiring a scanning image of a focus organ, wherein the scanning image comprises a preliminary positioning focus position, includes the steps of:
acquiring focus organ information;
and scanning the focus organ according to the focus organ information to obtain a scanning image of the focus organ, wherein the scanning image comprises the preliminary positioning focus position.
Based on the first aspect, in some embodiments of the present invention, the step of performing three-dimensional modeling on the focus organ according to the scanning image of the focus organ to obtain a focus organ three-dimensional model, wherein the focus organ three-dimensional model comprises the preliminary positioning focus position, includes the steps of:
acquiring image sequences of multiple angles according to the scanning image of the focus organ;
and performing three-dimensional modeling according to the image sequences of the multiple angles to obtain the focus organ three-dimensional model, wherein the focus organ three-dimensional model comprises the preliminary positioning focus position.
In a second aspect, embodiments of the present application provide a lesion confirmation system for use in endoscopic surgery, comprising:
the scanning image acquisition module is used for acquiring a scanning image of a focus organ, wherein the scanning image comprises a preliminary positioning focus position;
the three-dimensional modeling module is used for performing three-dimensional modeling on the focus organ according to the scanning image of the focus organ to obtain a focus organ three-dimensional model, wherein the focus organ three-dimensional model comprises a preliminary positioning focus position;
the bed angle adjustment module is used for inputting the scanning image of the focus organ into a preset bed adjustment model to generate bed angle information and adjust the operating table accordingly;
the vision adjustment module is used for acquiring real-time picture information of the vision endoscope rod and adjusting the focus organ three-dimensional model accordingly, so that the display view angle of the focus organ three-dimensional model is consistent with the view angle of the real-time picture;
and the focus determination module is used for calculating the relative position of the positioning endoscope rod and the preliminary positioning focus position according to the current positioning endoscope rod position and the preliminary positioning focus position, so as to control the positioning endoscope rod and the vision endoscope rod to determine the focus position.
In the implementation process, a scanning image of the focus organ is acquired through the scanning image acquisition module, and the scanning image comprises a preliminary positioning focus position, so that the preliminary positioning focus position can be displayed. The three-dimensional modeling module performs three-dimensional modeling on the focus organ according to the scanning image to obtain a focus organ three-dimensional model that also contains the preliminary positioning focus position; the three-dimensional model helps the doctor observe the focus organ. The bed angle adjustment module inputs the scanning image of the focus organ into a preset bed adjustment model, which outputs the most suitable bed angle information, and the operating table is adjusted accordingly, saving operating table adjustment time. The vision adjustment module adjusts the focus organ three-dimensional model according to real-time picture information of the vision endoscope rod, so that the display view angle of the three-dimensional model is consistent with the view angle of the real-time picture, assisting the doctor in observing the focus. Finally, the focus determination module calculates the relative position of the positioning endoscope rod and the preliminary positioning focus position according to the current positioning endoscope rod position and the preliminary positioning focus position, and controls the positioning endoscope rod and the vision endoscope rod accordingly to determine the focus position. In this way the focus position can be confirmed quickly, and the time spent adjusting the operating table is shortened.
In a third aspect, an embodiment of the present application provides an electronic device comprising a memory for storing one or more programs, and a processor; when the one or more programs are executed by the processor, the method of any of the first aspects described above is implemented.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method as in any of the first aspects described above.
The embodiment of the invention has at least the following advantages or beneficial effects:
The embodiment of the invention provides a focus confirmation method and a focus confirmation system for endoscopic surgery. A scanning image of the focus organ is acquired, and the scanning image comprises a preliminary positioning focus position, so that the preliminary positioning focus position can be displayed. The focus organ is then three-dimensionally modeled according to the scanning image to obtain a focus organ three-dimensional model that also contains the preliminary positioning focus position; the three-dimensional model helps the doctor observe the focus organ. Next, the scanning image of the focus organ is input into a preset bed adjustment model, which outputs the most suitable bed angle information, and the operating table is adjusted accordingly, saving operating table adjustment time. The focus organ three-dimensional model is then adjusted according to real-time picture information of the vision endoscope rod, so that the display view angle of the three-dimensional model is consistent with the view angle of the real-time picture, assisting the doctor in observing the focus. Finally, the relative position of the positioning endoscope rod and the preliminary positioning focus position is calculated according to the current positioning endoscope rod position and the preliminary positioning focus position, and the positioning endoscope rod and the vision endoscope rod are controlled accordingly to determine the focus position, so that the focus position can be confirmed quickly. By matching the corresponding bed adjustment model, the obtained bed angle information better matches the operation requirement, and the adjustment time of the operating table is further shortened.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. The components of the embodiments of the present application generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the application, as presented in the figures, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Examples
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. The various embodiments and features of the embodiments described below may be combined with one another without conflict.
Referring to fig. 1, fig. 1 is a flowchart of a method for confirming a focus in an endoscopic surgery according to an embodiment of the present invention. The focus confirmation method for the endoscopic surgery comprises the following steps:
Step S110, acquiring a scanning image of a focus organ, wherein the scanning image comprises a preliminary positioning focus position. The scanning image may be image data output by medical imaging equipment such as CT or MRI. In this embodiment, a preliminary examination may be carried out on the organ before scanning, and a focus positioning identifier may be placed, which specifically comprises the following steps:
First, when a focus is detected by an endoscope, a positioning hook with a marking function is clamped at the focus, thereby obtaining focus organ information. For example, in a gastroscopy operation, the positioning hook is clamped onto the focus on the inside of the stomach wall; the hook can be carried in by the endoscope and clamped into the stomach wall through endoscopic operation, and it is small, light, safe, non-toxic, and so on. A positioning tag with a positioning recognition function is arranged at the grip of the hook, and it can be implemented in two ways: the first is a miniature RFID tag, whose recognition requires a positioning antenna to be installed on the positioning endoscope rod; the second is a small permanent magnet, which the positioning endoscope rod can recognize with a magnetic sensor. The two technologies can also be used simultaneously as a double safeguard.
Then, the focus organ is scanned according to the focus organ information to obtain a scanning image of the focus organ, wherein the scanning image comprises the preliminary positioning focus position. Because the metal component of the hook images with high brightness in a scanning image such as a CT image, the high-brightness position in the image is the preliminary focus position. By acquiring a scanning image of the target organ containing the preliminary positioning focus position, preliminary localization of the focus position is achieved.
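For illustration only, the localization step above can be sketched in a few lines: since the metal hook images with high brightness, the brightest pixel of a scan slice serves as the preliminary focus position. The function name and the sample slice are purely illustrative and not part of the claimed method.

```python
# Illustrative sketch: locate the preliminary focus position as the
# brightest pixel of a 2D grayscale scan slice. The metal hook images
# with high brightness, so the maximum-intensity pixel is taken as the
# preliminary localization. Image data and names are hypothetical.

def locate_lesion(image):
    """Return (row, col) of the brightest pixel in a 2D grayscale image."""
    best, best_pos = -1, (0, 0)
    for r, row in enumerate(image):
        for c, value in enumerate(row):
            if value > best:
                best, best_pos = value, (r, c)
    return best_pos

# Illustrative 4x4 slice; 255 marks the hook's high-brightness spot.
scan_slice = [
    [12, 30, 28, 15],
    [22, 40, 255, 31],
    [18, 25, 33, 20],
    [10, 14, 19, 11],
]
lesion_rc = locate_lesion(scan_slice)  # -> (1, 2)
```

A real implementation would operate on DICOM pixel data and use a brightness threshold rather than a single maximum, but the principle is the same.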
Step S120, performing three-dimensional modeling on the focus organ according to the scanning image of the focus organ to obtain a focus organ three-dimensional model, wherein the focus organ three-dimensional model comprises the preliminary positioning focus position. Because the metal component of the hook images with high brightness in the CT image, the resulting three-dimensional model of the stomach also carries an obvious highlight mark at the focus position. The three-dimensional modeling can be performed by volume rendering or surface rendering, and the three-dimensional reconstruction can be carried out with the Visualization Toolkit (VTK). VTK-based three-dimensional reconstruction of images mainly comprises two types, surface rendering and volume rendering. The ray casting algorithm belongs to volume rendering; volume rendering works on the whole volume data and processes each voxel in the volume data field, so the reconstruction effect is more accurate. The marching cubes algorithm belongs to surface rendering; surface rendering extracts the surface contour information of the part to be reconstructed from the image and performs three-dimensional drawing, reconstructing only the surface of the object. Surface rendering may be performed as follows:
First, according to the obtained scanning image and the corresponding image parameters, the pixel values of all converted pixel points are calculated; then a new image is constructed from these pixel values, and binarization and median filtering are applied to the new image to obtain a basic CT image; the similarity between adjacent basic CT images is then calculated to obtain CT image sequences of multiple angles of the part to be processed.
Then, three-dimensional modeling is carried out according to the image sequences of the multiple angles to obtain the focus organ three-dimensional model, wherein the focus organ three-dimensional model comprises the preliminary positioning focus position. The modeling first calculates the pixel spacing contained in each CT image in each angle image sequence, and then, according to the pixel spacing, establishes a three-dimensional model of the part to be processed using the marching cubes algorithm. The marching cubes algorithm belongs to the prior art and will not be described in detail here.
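The preprocessing described above (threshold binarization followed by median filtering) can be sketched as follows. This is a minimal illustration on a tiny pixel grid; the threshold value and image are hypothetical, not clinical settings, and a real pipeline would use the imaging library's filters.

```python
# Illustrative sketch of the preprocessing step: threshold binarization
# followed by a 3x3 median filter on a 2D pixel grid. The median filter
# removes isolated noise pixels before slice comparison.

def binarize(image, threshold):
    """Map each pixel to 1 if at or above the threshold, else 0."""
    return [[1 if v >= threshold else 0 for v in row] for row in image]

def median3x3(image):
    """3x3 median filter; border pixels are left unchanged."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            window = sorted(
                image[rr][cc]
                for rr in (r - 1, r, r + 1)
                for cc in (c - 1, c, c + 1)
            )
            out[r][c] = window[4]  # median of the 9 window values
    return out

raw = [
    [10, 10, 10, 10],
    [10, 200, 10, 10],   # isolated bright pixel = noise
    [10, 10, 10, 10],
    [10, 10, 10, 10],
]
binary = binarize(raw, 100)    # the noise pixel survives binarization
cleaned = median3x3(binary)    # the median filter removes it
```

The isolated bright pixel passes the threshold but is suppressed by the median filter, which is exactly why the filter precedes the slice-similarity computation.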
Step S130, inputting the scanning image of the focus organ into a preset bed adjustment model to generate bed angle information, and adjusting the operating table accordingly. The operating table may be one whose angle and height can be adjusted under automatic control. The preset bed adjustment model may be a model obtained by training a neural network on historical bed data; the model calculates the corresponding bed angle information from the input scanning image. When the bed adjustment model performs its calculation, it first extracts the focus position information from the scanning image of the focus organ and then derives the bed angle information from the focus position information. The bed angle information includes the inclination angle of the operating table. The focus position information may be coordinate information: a coordinate system may be established on the focus organ so as to determine the focus position. To adjust the operating table, the current angle information is acquired through an angle sensor on the table, the adjustment angle is calculated from the bed angle information and the current angle information, and an angle adjustment command is generated according to the adjustment angle to adjust the operating table. The bed adjustment model can be obtained through the following steps:
First, final operating table angle information of multiple endoscopic surgeries and the scanning image information of the corresponding focus organs are acquired as sample information. The sample information covers multiple completed operations and includes the final operating table angle information, the scanning image information of the corresponding focus organ, and so on. The scanning image information of the focus organ includes the image information, the focus position information, and so on; the focus position information may be coordinate information.
Then, the sample information is trained with a neural network algorithm to obtain the bed adjustment model. During training, the focus position information is first extracted from the scanning image information of the focus organ in the sample information, and then the focus position information and the corresponding final operating table angle information are trained with the neural network algorithm to obtain the bed adjustment model. The neural network algorithm belongs to the prior art and will not be described in detail here.
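The adjustment step of step S130 — comparing the model's target bed angle with the reading from the table's angle sensor and issuing a command for the difference — can be sketched as follows. The command format is a hypothetical stand-in; a real operating table exposes a vendor-specific control interface.

```python
# Illustrative sketch of the table-adjustment step: the bed adjustment
# model outputs a target angle, the table's angle sensor reports the
# current angle, and the difference becomes an adjustment command.
# The dict-based command format is hypothetical.

def adjustment_command(target_angle_deg, current_angle_deg):
    """Build an adjustment command from target and sensor angles."""
    delta = target_angle_deg - current_angle_deg
    direction = "raise" if delta > 0 else "lower"
    return {"direction": direction, "degrees": abs(delta)}

# Example: the model asks for 12 degrees, the sensor reads 5 degrees.
cmd = adjustment_command(12.0, 5.0)  # -> {"direction": "raise", "degrees": 7.0}
```

Computing only the difference, rather than re-seeking an absolute pose, is what saves adjustment time: the table moves once, by exactly the required amount.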
Step S140, acquiring the real-time picture information of the vision endoscope rod and adjusting the focus organ three-dimensional model accordingly, so that the display view angle of the focus organ three-dimensional model is consistent with the view angle of the real-time picture. The real-time picture information can be acquired by wired transmission and comprises the angle information of the vision endoscope rod, monitored in real time by a displacement sensor at the tail of the rod, and the real-time visual information collected by an oblique-view camera at the head of the rod. In this step, the oblique-view camera is arranged at the head of the vision endoscope rod, and its imaging direction forms an angle of 30 degrees with the rod. While the vision endoscope rod moves and rotates, the displacement sensor at the tail monitors in real time the distance of the rod from the focus, the rotation angle, and the zoom factor of the camera, and sends them to the main control unit for processing, so that the relative distance between the current vision endoscope rod and the focus position, and the current view angle direction, are known. This provides the basis for subsequently adjusting the three-dimensional model so that its display view angle stays synchronized with the current view angle of the vision endoscope rod. The three-dimensional model is then adjusted and transformed according to the angle information and visual information of the vision endoscope rod, so that the display view angle of the three-dimensional model is consistent with the view angle of the picture returned by the rod. The adjustment transformations include rotation, scaling, and so on.
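The rotation-and-scaling transform of step S140 can be sketched as below. This is a simplified single-axis illustration: it rotates the model's points about the vertical axis by the yaw angle reported by the rod's sensor and scales by the camera zoom factor; the names and the one-axis simplification are assumptions, as a real implementation applies a full 3D view transform.

```python
import math

# Illustrative sketch of keeping the model view in step with the scope:
# rotate the model's (x, y, z) points about the z axis by the sensed
# yaw angle, then apply a uniform zoom. Single-axis rotation is a
# simplification of the full view transform.

def sync_view(points, yaw_deg, zoom):
    """Rotate points about the z axis by yaw_deg and scale by zoom."""
    t = math.radians(yaw_deg)
    cos_t, sin_t = math.cos(t), math.sin(t)
    return [
        (zoom * (x * cos_t - y * sin_t),
         zoom * (x * sin_t + y * cos_t),
         zoom * z)
        for x, y, z in points
    ]

model = [(1.0, 0.0, 0.0)]
adjusted = sync_view(model, 90.0, 2.0)  # point moves to ~(0, 2, 0)
```

Each time the displacement sensor reports a new angle or zoom, the same transform is reapplied, which is what keeps the displayed model view synchronized with the live picture.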
Step S150, calculating the relative position of the positioning endoscope rod and the preliminary positioning focus position according to the current positioning endoscope rod position and the preliminary positioning focus position, so as to control the positioning endoscope rod and the vision endoscope rod to determine the focus position. A three-dimensional coordinate system is established with the focus as the center. The real-time position information of the positioning endoscope rod is monitored by a displacement sensor on the rod, and the coordinates of the rod are updated in real time and sent by wired transmission to the main control unit for processing, so that the current position of the positioning endoscope rod is obtained. The current position of the rod can be projected onto the three-dimensional image, and the relative position of the current positioning endoscope rod and the preliminary positioning focus position is then calculated. A corresponding control command can be generated according to the relative position and sent to the positioning endoscope rod to control the movement of the positioning endoscope rod and the vision endoscope rod, thereby assisting the doctor in rapidly determining the focus position.
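With the coordinate system centered on the focus, the relative-position calculation of step S150 reduces to a vector difference and a Euclidean distance, as in this minimal sketch (coordinates are illustrative):

```python
import math

# Illustrative sketch of step S150: with a coordinate system centered
# on the focus, compute the vector from the positioning rod's sensed
# tip position to the focus and its straight-line distance. A control
# command would then move the rod along this vector.

def relative_position(rod_pos, lesion_pos):
    """Vector from rod tip to lesion, and its Euclidean distance."""
    vec = tuple(l - r for r, l in zip(rod_pos, lesion_pos))
    dist = math.sqrt(sum(v * v for v in vec))
    return vec, dist

rod = (1.0, 2.0, 2.0)        # current sensed rod-tip coordinates
lesion = (0.0, 0.0, 0.0)     # origin: coordinate system centered on the focus
vec, dist = relative_position(rod, lesion)  # vec points toward the focus
```

The returned vector gives the direction the control command should steer the rod, and the distance tells the main control unit how far it still is from the focus.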
In the implementation process, a scanning image of the focus organ is acquired, and the scanning image comprises a preliminary positioning focus position, so that the preliminary positioning focus position can be displayed. The focus organ is then three-dimensionally modeled according to the scanning image to obtain a focus organ three-dimensional model that also contains the preliminary positioning focus position; the three-dimensional model helps the doctor observe the focus organ. Next, the scanning image of the focus organ is input into a preset bed adjustment model, which outputs the most suitable bed angle information, and the operating table is adjusted accordingly, saving operating table adjustment time. The focus organ three-dimensional model is then adjusted according to real-time picture information of the vision endoscope rod, so that the display view angle of the three-dimensional model is consistent with the view angle of the real-time picture, assisting the doctor in observing the focus. Finally, the relative position of the positioning endoscope rod and the preliminary positioning focus position is calculated according to the current positioning endoscope rod position and the preliminary positioning focus position, and the positioning endoscope rod and the vision endoscope rod are controlled accordingly to determine the focus position. In this way the focus position can be confirmed quickly, and the time spent adjusting the operating table is shortened.
The scanning image of the focus organ can also be input into a preset operation parameter preconditioning model to generate optimal bed angle information, optimal positioning endoscope rod parameter information, and optimal vision endoscope rod parameter information. The preset operation parameter preconditioning model may be a model obtained by training a neural network on historical operation data; the model calculates the optimal bed angle information, optimal positioning endoscope rod parameter information, and optimal vision endoscope rod parameter information from the input scanning image. The historical operation data comprises the scanning image at the time of operation and the bed parameter information, positioning endoscope rod parameter information, and vision endoscope rod parameter information at the time the focus position was finally determined; the historical operation data is trained with the neural network model, and the operation parameter preconditioning model is obtained through machine learning. The optimal bed angle information comprises the optimal inclination angle and optimal height of the operating table, and so on; the optimal positioning endoscope rod parameter information comprises the optimal extension length and optimal extension angle of the positioning endoscope rod, and so on; and the optimal vision endoscope rod parameter information comprises the optimal extension length and optimal extension angle of the vision endoscope rod, and so on.
Before an operation is performed, the operating table, the positioning endoscope rod, and the vision endoscope rod can be adjusted respectively according to the obtained optimal bed angle information, optimal positioning endoscope rod parameter information, and optimal vision endoscope rod parameter information, so that the doctor can quickly find the focus during the operation, further saving operation time.
In order to further make the bed angle information meet the operation requirement, the preset bed adjustment model may also be matched. Referring to fig. 2, fig. 2 is a flowchart of the steps of matching the bed adjustment model according to the organ according to an embodiment of the present invention. The method specifically comprises the following steps:
First, the organ information corresponding to the scanning image of the focus organ is acquired; the organ is the organ requiring surgery, such as the stomach or the intestine.
Then, matching is carried out in a preset bed adjustment model library according to the organ information to obtain the corresponding bed adjustment model. The preset bed adjustment model library comprises multiple types of bed adjustment models, which may be different models for different organs; the matching finds the bed adjustment model of the corresponding organ. By matching the bed adjustment model of the corresponding organ, the obtained bed angle information better matches the operation requirement, and the adjustment time of the operating table is further shortened.
In order to further make the bed angle information meet the operation requirement, the preset bed adjustment model may be matched through patient information. Referring to fig. 3, fig. 3 is a flowchart of the steps of matching the bed adjustment model according to sex according to an embodiment of the present invention. The method specifically comprises the following steps:
First, the patient information corresponding to the scanning image of the focus organ is acquired; the patient information includes the name, age, sex, and other information.
Then, matching is carried out in the preset bed adjustment model library according to the sex information in the patient information to obtain the corresponding bed adjustment model. The preset bed adjustment model library comprises multiple types of bed adjustment models, which can be divided by sex into a bed adjustment model suitable for men and a bed adjustment model suitable for women; the matching finds the bed adjustment model of the corresponding sex, so that the obtained bed angle information better matches the operation requirement and the adjustment time of the operating table is further shortened. Likewise, corresponding bed adjustment models can be set for different age groups; this is similar to the above and is not repeated here.
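The model-library matching described above is, in essence, a keyed lookup: the library maps a category (here sex) to a trained model, and the patient's record selects the entry. The sketch below uses simple stand-in functions in place of trained neural network models; all names and values are hypothetical.

```python
# Illustrative sketch of matching in a preset bed adjustment model
# library: the library maps a category key (here sex) to a model, and
# matching is a lookup on the patient's record. The lambda models are
# hypothetical stand-ins for trained neural network models.

def make_model(offset):
    # Stand-in model: bed angle from lesion height plus a per-category offset.
    return lambda lesion_height: 2.0 * lesion_height + offset

model_library = {"M": make_model(3.0), "F": make_model(1.0)}

def match_model(patient, library):
    """Select the bed adjustment model for the patient's sex."""
    return library[patient["sex"]]

patient = {"name": "example", "age": 52, "sex": "F"}
model = match_model(patient, model_library)
angle = model(2.0)  # female-category model: 2.0 * 2.0 + 1.0 = 5.0
```

Extending the library to age groups or organs only means adding keys (or composite keys) to the mapping; the matching step itself is unchanged.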
The bed adjustment model library can be obtained through the following steps:
First, the final operating table angle information of multiple endoscopic surgeries and the scanning image information of the corresponding focus organs are acquired as initial samples.
Then, the medical record information of each endoscopic surgery is acquired; the medical record information includes the patient information, the focus organ information, and so on.
Next, the initial samples are classified according to the medical record information and preset classification rules to obtain sub-sample information of multiple categories. The classification rules may classify by the sex information in the patient information, by the focus organ information, and so on, and can be set according to the bed adjustment model library that is actually needed. By classifying the initial samples, sub-sample information of multiple categories is obtained, which makes it convenient to subsequently train each category of sub-sample information to obtain the corresponding bed adjustment model library.
And finally, training the sub-sample information of each category by adopting a neural network algorithm to obtain bed adjustment models of a plurality of categories, so as to form the bed adjustment model library. During training, the focus position information in the scanned image information of the focal organ in the sub-sample information is first extracted, and then the focus position information and the corresponding final operating table angle information are trained by adopting a neural network algorithm to obtain the bed adjustment model. The neural network algorithm belongs to the prior art and will not be described in detail herein. By classifying the initial samples and training each category of sub-sample information separately, different bed adjustment models are obtained to form the bed adjustment model library.
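The per-category training step can be sketched as follows. This is a minimal sketch under strong assumptions: lesion positions are taken as already-extracted normalized (x, y, z) coordinates, the final operating-table angle is a scalar, and a tiny linear model trained by batch gradient descent stands in for the neural network named in the text; all sample values are invented for illustration.

```python
# Minimal sketch: train one bed adjustment model per sample category.
# A linear model fitted by gradient descent substitutes for the neural
# network algorithm; data and hyperparameters are illustrative only.

def train_bed_model(samples, lr=0.1, epochs=2000):
    """samples: list of ((x, y, z), angle) pairs for one category."""
    w = [0.0, 0.0, 0.0]  # weights for the lesion coordinates
    b = 0.0              # bias term
    n = len(samples)
    for _ in range(epochs):
        gw, gb = [0.0, 0.0, 0.0], 0.0
        for pos, angle in samples:
            pred = sum(wi * xi for wi, xi in zip(w, pos)) + b
            err = pred - angle
            for i in range(3):
                gw[i] += err * pos[i]
            gb += err
        for i in range(3):          # averaged gradient step
            w[i] -= lr * gw[i] / n
        b -= lr * gb / n
    return w, b

def predict_angle(model, pos):
    """Predict a bed angle from a lesion position using a trained model."""
    w, b = model
    return sum(wi * xi for wi, xi in zip(w, pos)) + b

# One model per category of sub-sample forms the model library.
sub_samples = {
    "male": [((0.2, 0.5, 0.1), 10.0), ((0.6, 0.4, 0.3), 18.0)],
    "female": [((0.3, 0.6, 0.2), 12.0), ((0.7, 0.5, 0.4), 20.0)],
}
model_library = {cat: train_bed_model(s) for cat, s in sub_samples.items()}
```

In practice the disclosed method would use a neural network and many more samples per category; the sketch only shows the shape of the pipeline (extract positions, pair with final angles, fit per category).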
Based on the same inventive concept, the invention also provides a focus confirmation system for use in endoscopic surgery. Please refer to fig. 4, which is a block diagram of a focus confirmation system for use in endoscopic surgery according to an embodiment of the invention. The focus confirmation system for use in endoscopic surgery includes:
a scan image acquisition module 110, configured to acquire a scan image of a focal organ, where the scan image includes a preliminary localization focal position;
The three-dimensional modeling module 120 is configured to perform three-dimensional modeling on a focal organ according to the scanned image of the focal organ, so as to obtain a focal organ three-dimensional model, where the focal organ three-dimensional model includes a preliminary positioning focal position;
the bed angle adjusting module 130 is used for inputting the scanning image of the focus organ into a preset bed adjusting model, generating and adjusting the operating bed according to the bed angle information;
The vision adjusting module 140 is configured to acquire and adjust the focal organ three-dimensional model according to the real-time image information of the vision cavity mirror rod, so that the display view angle of the focal organ three-dimensional model is consistent with the view angle of the real-time image;
The focus determining module 150 is configured to obtain and calculate a relative position of the positioning endoscope rod and the preliminary positioning focus position according to the current positioning endoscope rod position and the preliminary positioning focus position, so as to control the positioning endoscope rod and the vision endoscope rod to determine the focus position.
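The view-angle alignment performed by the visual adjustment module 140 can be illustrated with a small rotation sketch. The camera pose format (yaw/pitch angles), the function names, and the point representation are assumptions for illustration, not the disclosed implementation.

```python
# Illustrative sketch of the visual adjustment step: given the viewing
# direction reported for the endoscope camera, rotate the lesion-organ
# model's points so the rendered view angle matches the live frame.
# Pose format and names are assumptions, not the disclosed method.

import math

def rotation_from_view(yaw, pitch):
    """3x3 rotation matrix: rotate about z by yaw, then about x by pitch."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    return [
        [cy, -sy, 0.0],
        [cp * sy, cp * cy, -sp],
        [sp * sy, sp * cy, cp],
    ]

def align_model(points, yaw, pitch):
    """Rotate every model point so the display matches the camera view."""
    R = rotation_from_view(yaw, pitch)
    return [
        tuple(sum(R[i][j] * p[j] for j in range(3)) for i in range(3))
        for p in points
    ]
```

A real system would estimate the camera pose from the live endoscope frame (e.g. by tracking the rod) before applying such a rotation to the rendered model.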
In the implementation process, the scan image acquisition module 110 acquires a scanned image of the focal organ, the scanned image including a preliminary positioning focus position, so that the preliminary positioning focus position can be displayed. The three-dimensional modeling module 120 performs three-dimensional modeling on the focal organ according to the scanned image to obtain a focal organ three-dimensional model including the preliminary positioning focus position; the three-dimensional model helps the doctor observe the focal organ. The bed angle adjustment module 130 inputs the scanned image of the focal organ into the preset bed adjustment model, generates bed angle information, and adjusts the operating table accordingly; the most suitable bed angle information can be obtained through the preset bed adjustment model, so the adjustment time of the operating table is saved. The visual adjustment module 140 acquires the real-time picture information of the vision endoscope rod and adjusts the focal organ three-dimensional model so that the display view angle of the three-dimensional model is consistent with the view angle of the real-time picture, thereby assisting the doctor in observing the focus. Finally, the focus determination module 150 obtains the current positioning endoscope rod position and the preliminary positioning focus position and calculates their relative position, so as to control the positioning endoscope rod and the vision endoscope rod to determine the focus position. In this way, the focus position can be determined quickly, solving the problem that a focus located at a position which is not easy to perceive takes more time to adjust the operating table.
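The relative-position calculation performed by the focus determination module 150 can be sketched as simple vector arithmetic. The coordinate convention (rod tip and lesion given as 3D points in a common frame) is an assumption for illustration.

```python
# Minimal sketch of the focus-determination step: compute the offset
# vector and Euclidean distance from the current positioning-rod tip
# to the preliminarily located lesion, which could then guide the rod.
# The shared coordinate frame is an illustrative assumption.

import math

def relative_position(rod_tip, lesion):
    """Return the (dx, dy, dz) offset and distance from rod tip to lesion."""
    offset = tuple(l - r for r, l in zip(rod_tip, lesion))
    distance = math.sqrt(sum(d * d for d in offset))
    return offset, distance

offset, dist = relative_position((10.0, 5.0, 2.0), (13.0, 9.0, 2.0))
print(offset, dist)  # (3.0, 4.0, 0.0) 5.0
```

In the disclosed system this relative position would be recomputed as the positioning rod moves, until the vision endoscope rod confirms the focus position.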
Referring to fig. 5, fig. 5 is a schematic block diagram of an electronic device according to an embodiment of the present application. The electronic device comprises a memory 101, a processor 102 and a communication interface 103, wherein the memory 101, the processor 102 and the communication interface 103 are electrically connected with each other directly or indirectly to realize data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The memory 101 may be used to store software programs and modules, such as program instructions/modules corresponding to the focus confirmation system for use in endoscopic surgery according to embodiments of the present application, and the processor 102 executes the software programs and modules stored in the memory 101, thereby performing various functional applications and data processing. The communication interface 103 may be used for communication of signaling or data with other node devices.
The memory 101 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), etc.
The processor 102 may be an integrated circuit chip with signal processing capabilities. The processor 102 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), etc., or may be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
It will be appreciated that the configuration shown in fig. 5 is merely illustrative, and that the electronic device may also include more or fewer components than shown in fig. 5, or have a different configuration than shown in fig. 5. The components shown in fig. 5 may be implemented in hardware, software, or a combination thereof.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative, for example, of the flowcharts and block diagrams in the figures that illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. The storage medium includes a U disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, an optical disk, or other various media capable of storing program codes.
The above description is only of the preferred embodiments of the present application and is not intended to limit the present application, but various modifications and variations can be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.
It will be evident to those skilled in the art that the application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.