
CN114121218A - Virtual scene construction method, device, equipment and medium applied to operation - Google Patents


Info

Publication number
CN114121218A
CN114121218A (application CN202111420464.4A; granted as CN114121218B)
Authority
CN
China
Prior art keywords
target
information
scene
virtual
processed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111420464.4A
Other languages
Chinese (zh)
Other versions
CN114121218B (en)
Inventor
陈方印
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongke Maidi Artificial Intelligence Research Institute Suzhou Co ltd
Original Assignee
Zhongke Maidi Artificial Intelligence Research Institute Suzhou Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongke Maidi Artificial Intelligence Research Institute Suzhou Co ltd filed Critical Zhongke Maidi Artificial Intelligence Research Institute Suzhou Co ltd
Priority to CN202111420464.4A priority Critical patent/CN114121218B/en
Priority claimed from CN202111420464.4A external-priority patent/CN114121218B/en
Publication of CN114121218A publication Critical patent/CN114121218A/en
Application granted granted Critical
Publication of CN114121218B publication Critical patent/CN114121218B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 20/00 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H 20/40 - ICT specially adapted for therapies or health-improving plans relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 - Information retrieval of unstructured textual data
    • G06F 16/36 - Creation of semantic tools, e.g. ontology or thesauri
    • G06F 16/367 - Ontology
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/213 - Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 10/00 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H 10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining for computer-aided diagnosis, e.g. based on medical expert systems
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/50 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining for simulation or modelling of medical disorders

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • General Health & Medical Sciences (AREA)
  • Epidemiology (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Pathology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Animal Behavior & Ethology (AREA)
  • Computational Linguistics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Surgery (AREA)
  • Urology & Nephrology (AREA)
  • Instructional Devices (AREA)

Abstract


An embodiment of the present invention discloses a virtual scene construction method, device, equipment and medium applied to surgery. The method includes: determining scene construction information associated with a target user, and determining a target to-be-processed matrix of the scene construction information, where the scene construction information includes target basic information corresponding to the target user, at least one piece of to-be-processed organ information associated with a target lesion, and target historical diagnosis and treatment information associated with the target lesion; and processing the target to-be-processed matrix with a pre-trained virtual scene construction model to determine a virtual surgical scene corresponding to the target user, so that surgical simulation can be performed based on the virtual surgical scene. The technical solution of this embodiment realizes automatic and efficient construction of virtual surgical scenes, making the generated scenes more diverse and better suited to the actual needs of clinical settings.

Figure 202111420464

Description

Virtual scene construction method, device, equipment and medium applied to operation
Technical Field
The embodiment of the invention relates to the technical field of computers, in particular to a method, a device, equipment and a medium for constructing a virtual scene applied to an operation.
Background
At present, when medical staff undergo surgical training, mechanical devices such as springs and gears are generally used to simulate human organs, and training materials are displayed in combination with three-dimensional images or augmented reality technology, so as to train clinical judgment and clinical handling abilities during surgery.
However, the solutions provided in the prior art still have drawbacks. On the one hand, the procedure steps of surgical training are fixed: the training content is limited by the mechanical device and the initial settings of the images, so diversified surgical scenes cannot be simulated. On the other hand, a surgical scene constructed in this way does not fit the patient's actual condition and cannot meet the real surgical-training needs of clinical settings.
Disclosure of Invention
The invention provides a virtual scene construction method, device, equipment and medium applied to surgery, so that virtual surgical scenes can be constructed automatically and efficiently, the generated virtual surgical scenes are more diverse, and the actual needs of clinical settings are better met.
In a first aspect, an embodiment of the present invention provides a virtual scene construction method applied to a surgery, where the method includes:
determining scene construction information associated with a target user, and determining a target to-be-processed matrix of the scene construction information, where the scene construction information includes target basic information corresponding to the target user, at least one piece of to-be-processed organ information associated with a target lesion, and target historical diagnosis and treatment information associated with the target lesion;
and processing the target matrix to be processed based on a virtual scene construction model obtained by pre-training, determining a virtual operation scene corresponding to the target user, and performing operation simulation based on the virtual operation scene.
In a second aspect, an embodiment of the present invention further provides a virtual scene constructing apparatus applied to a surgery, where the apparatus includes:
a scene construction information determining module, configured to determine scene construction information associated with a target user and to determine a target to-be-processed matrix of the scene construction information, where the scene construction information includes target basic information corresponding to the target user, at least one piece of to-be-processed organ information associated with a target lesion, and target historical diagnosis and treatment information associated with the target lesion;
and the virtual operation scene determining module is used for processing the target matrix to be processed based on a virtual scene building model obtained through pre-training, determining a virtual operation scene corresponding to the target user, and performing operation simulation based on the virtual operation scene.
In a third aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
one or more processors;
a storage device for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors implement the virtual scene construction method applied to the operation according to any one of the embodiments of the present invention.
In a fourth aspect, the present invention further provides a storage medium containing computer-executable instructions, which when executed by a computer processor, are configured to perform the virtual scene construction method applied to surgery according to any one of the embodiments of the present invention.
According to the technical scheme of the embodiment of the invention, scene construction information associated with a target user is determined, and a target to-be-processed matrix of that information is determined, so that the target basic information of the target user, at least one piece of to-be-processed organ information of the target lesion, and the historical diagnosis and treatment information of the target lesion are obtained. The target to-be-processed matrix is then processed by a pre-trained virtual scene construction model to determine a virtual surgical scene corresponding to the target user, and surgical simulation is performed based on that scene. This realizes automatic and efficient construction of virtual surgical scenes; at the same time, because information from multiple dimensions is fused during scene construction, the scheme is freed from the limitations of traditional mechanical devices and initial image parameters, so the generated virtual surgical scenes are more diverse and better match the actual needs of clinical settings.
Drawings
To illustrate the technical solutions of the exemplary embodiments of the present invention more clearly, the drawings used in describing the embodiments are briefly introduced below. Obviously, the described drawings cover only some of the embodiments of the invention, not all of them; for those skilled in the art, other drawings can be derived from these without inventive effort.
Fig. 1 is a schematic flowchart of a virtual scene construction method applied to an operation according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a virtual scene construction method applied to an operation according to a second embodiment of the present invention;
fig. 3 is a flowchart of a virtual scene construction method applied to an operation according to a third embodiment of the present invention;
fig. 4 is a block diagram of a virtual scene constructing apparatus applied to a surgery according to a fourth embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a schematic flowchart of a method for constructing a virtual scene applied to a surgery according to an embodiment of the present invention, where the method is applicable to a case where diversified virtual surgical scenes are constructed based on multidimensional information, and the method can be executed by a virtual scene construction device applied to a surgery, where the device can be implemented in the form of software and/or hardware, and the hardware can be an electronic device, such as a mobile terminal, a PC terminal, or a server.
As shown in fig. 1, the method specifically includes the following steps:
s110, scene construction information associated with the target user is determined, and a target to-be-processed matrix of the scene construction information is determined.
In this embodiment, the target user may be a patient who needs surgery on a certain lesion of the body. To construct, around the target user, diversified virtual surgical scenes that meet actual clinical needs, information of multiple dimensions must first be acquired as a data basis; that is, the scene construction information associated with the target user must be acquired.
The scene construction information includes target basic information corresponding to the target user, at least one piece of to-be-processed organ information associated with a target lesion, and target historical diagnosis and treatment information associated with the target lesion. Specifically, the target basic information may be information such as the name, sex and age stored for the target user in a medical information base. Meanwhile, to determine the operation object in the virtual surgical scene, at least one piece of to-be-processed organ information associated with the target lesion needs to be retrieved once the target lesion of the target user has been determined. It can be understood that the to-be-processed organ information includes not only information on the organ corresponding to the target lesion, but also information on associated organs that may be affected by it; for example, when the organ information corresponding to the target lesion is a liver angiogram image, the to-be-processed organ information also includes a heart angiogram image. Further, to construct a virtual surgical scene around the target lesion, information on similar cases, that is, the target historical diagnosis and treatment information, needs to be acquired from the information base of the medical system.
In this embodiment, since the information will subsequently be processed by a neural network model, the scene construction information obtained for the target user must be reconstructed and transformed; that is, the target to-be-processed matrix corresponding to the scene construction information must be determined. It can be understood that the target to-be-processed matrix contains at least a plurality of elements corresponding to the target basic information, the to-be-processed organ information, and the target historical diagnosis and treatment information.
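The assembly step described above can be sketched in code. This is an illustrative assumption rather than the patent's actual implementation: feature vectors for the three information groups are simply stacked and padded into one matrix, and all field names are hypothetical.

```python
# Hypothetical sketch: assembling a "target to-be-processed matrix" by
# concatenating numeric features from the three information groups named
# above. Field names and encodings are illustrative assumptions only.

def build_target_matrix(basic_info, organ_info_list, history_info):
    """Concatenate per-source feature vectors into one matrix (list of rows).

    basic_info:      dict of scalar patient features (age, heart rate, ...)
    organ_info_list: list of per-organ feature vectors (already extracted)
    history_info:    list of feature vectors from similar historical cases
    """
    rows = []
    # Row 0: target basic information, in a fixed (sorted) field order.
    rows.append([float(basic_info[k]) for k in sorted(basic_info)])
    # One row per to-be-processed organ.
    for organ_vec in organ_info_list:
        rows.append([float(v) for v in organ_vec])
    # One row per historical diagnosis-and-treatment record.
    for case_vec in history_info:
        rows.append([float(v) for v in case_vec])
    # Pad rows to equal length so the result is a proper matrix.
    width = max(len(r) for r in rows)
    return [r + [0.0] * (width - len(r)) for r in rows]
```

In a real system each row would come from a learned or hand-crafted feature extractor; the point here is only the stacking of heterogeneous sources into one model input.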
Those skilled in the art will understand that, in practice, a transfer function may be used to iterate over this large amount of multi-dimensional data, generating a high-dimensional matrix that corresponds to the scene construction information and meets the input requirements of the neural network model; this is not described further here.
And S120, processing the target matrix to be processed based on the virtual scene construction model obtained through pre-training, determining a virtual operation scene corresponding to the target user, and performing operation simulation based on the virtual operation scene.
In this embodiment, the virtual scene construction model may be a pre-trained neural network model whose input is the target to-be-processed matrix corresponding to the scene construction information and whose output is the data basis required for constructing the virtual surgical scene. In particular, the virtual scene construction model may be pre-stored in surgical training equipment, e.g. in the memory of equipment that provides three-dimensional virtual reality and standard surgical procedures. Correspondingly, the determined virtual surgical scene may be displayed on the display of the surgical training equipment. Those skilled in the art should understand that the constructed virtual surgical scene may be a simulation of a real-world environment, a semi-simulated semi-fictional virtual environment, or a purely fictional virtual environment; in practice, whichever kind it is, the specific information it reflects is associated with the target lesion of the target user.
It should be noted that, when training the virtual scene construction model, historical scene construction information and the corresponding historical virtual surgical scenes are selected from a historical scene database as a training set; a further portion of the historical scenes is selected as a validation set to estimate the relevant parameters of the model; finally, model performance is evaluated and optimized with a test set, yielding the trained virtual scene construction model.
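The train/validation/test partitioning described above can be sketched as follows. Only the split is shown; the model, its training loop, and the record format are outside the scope of this hedged example, and the split fractions are illustrative assumptions.

```python
import random

# Illustrative sketch of the split described above: historical
# (scene_construction_info, virtual_surgical_scene) records are divided
# into training, validation and test sets. Fractions are assumptions.

def split_history_database(records, train_frac=0.8, val_frac=0.1, seed=0):
    """Shuffle historical records deterministically and partition them
    into train / validation / test subsets."""
    shuffled = list(records)
    random.Random(seed).shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test
```

Seeding the shuffle keeps the partition reproducible across training runs, which matters when validation results are used to tune model parameters.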
Further, after the virtual surgical scene is displayed on the display of the surgical training device, the user can perform surgical simulation based on other components provided by the surgical training device, and the simulation process includes clinical processing operation training, clinical judgment training and the like. Continuing with the above example, after a virtual surgical scene of the liver of the target user is constructed, the physicians participating in the surgical training can perform a simulation operation on the liver in the virtual surgical scene by using the operation component on the surgical training device, so as to perform a future operation on the target user and predict the surgical result.
According to the technical scheme of this embodiment, scene construction information associated with a target user is determined, and a target to-be-processed matrix of that information is determined, so that the target basic information of the target user, at least one piece of to-be-processed organ information of the target lesion, and the historical diagnosis and treatment information of the target lesion are obtained. The target to-be-processed matrix is then processed by a pre-trained virtual scene construction model to determine a virtual surgical scene corresponding to the target user, and surgical simulation is performed based on that scene. This realizes automatic and efficient construction of virtual surgical scenes; meanwhile, by fusing information from multiple dimensions during scene construction, the scheme escapes the limitations of traditional mechanical devices and initial image parameters, so the generated virtual surgical scenes are more diverse and better match the actual needs of clinical settings.
Example two
Fig. 2 is a schematic flowchart of a virtual scene construction method applied to an operation according to the second embodiment of the present invention, which builds on the first embodiment. After the multi-dimensional scene construction information of the target user is determined, target contraindication information may also be determined, so as to simulate intraoperative sudden events. Redundant information in the scene construction information is screened out with a preset template, realizing templated arrangement of the information. The target to-be-processed matrix is reduced in dimension by a node similarity submodel, and the resulting to-be-processed feature sequence is input into a pre-trained GAT model, making it easier for the surgical training system to construct a virtual surgical scene from a high-dimensional matrix. Finally, the virtual operation result is fed back based on a surgical evaluation knowledge graph called in real time, improving the intelligence of the surgical training system. For the specific implementation, refer to the technical scheme of this embodiment. Technical terms identical or corresponding to those of the above embodiment are not repeated here.
As shown in fig. 2, the method specifically includes the following steps:
s210, scene construction information associated with the target user is determined, and a target to-be-processed matrix of the scene construction information is determined.
When determining the scene construction information associated with the target user, information of different dimensions differs in data type, data format, and data storage and transmission modes, so the corresponding determination methods also differ. Optionally, the target basic information is determined by obtaining to-be-processed basic parameters corresponding to the target user and extracting from them the target basic information corresponding to preset fields; the to-be-processed organ information is determined from to-be-processed views associated with the target lesion based on an image recognition algorithm; and the target historical diagnosis and treatment information associated with the target lesion is determined in a historical diagnosis and treatment information base according to a pre-established diagnosis and treatment knowledge graph.
Specifically, the to-be-processed basic parameters corresponding to the target user include the patient's basic information (such as age, gender and family information), the patient's physiological index information (such as body temperature, heart rate and blood pressure), and the patient's preoperative tracking information (such as electrocardiogram data and the blood pressure curve over a specific time period). It can be understood that this information can be obtained through corresponding information acquisition modules. Further, after the to-be-processed basic parameters containing this information are acquired, the parts required for constructing the virtual scene can be extracted as the target basic information according to preset field extraction rules, thereby eliminating redundant data.
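The preset-field extraction can be sketched as a simple whitelist filter. The field names below are illustrative assumptions, not fields defined by the patent.

```python
# Hedged sketch of "extracting the target basic information corresponding
# to preset fields": raw to-be-processed parameters are filtered down to a
# fixed whitelist so redundant data is dropped. Field names are assumptions.

PRESET_FIELDS = ("name", "gender", "age", "heart_rate", "blood_pressure")

def extract_target_basic_info(raw_params, preset_fields=PRESET_FIELDS):
    """Keep only whitelisted fields; ignore everything else in the record."""
    return {field: raw_params[field] for field in preset_fields
            if field in raw_params}
```

Fields absent from the raw record are simply skipped, so partially populated acquisition records still yield usable target basic information.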
In this embodiment, the to-be-processed organ information includes at least contrast screening information of the target lesion and similar case information. For example, when the target lesion of the target user is determined to be in the liver, a scan or contrast image of the patient's liver region may serve as the to-be-processed organ information; like the physiological index information and the preoperative tracking information, it can be extracted from the corresponding scanning or imaging device. Meanwhile, records of other patients who underwent liver surgery can be screened out of the medical system's image database and organized into a set serving as the similar case information.
In this embodiment, to determine the construction basis of the virtual surgical scene, the target historical diagnosis and treatment information associated with the target lesion must be determined in the historical diagnosis and treatment information base according to the diagnosis and treatment knowledge graph. Specifically, after information extraction is performed on the contrast screening information and the similar case information, the target historical diagnosis and treatment information may be determined in the target historical information base based on the knowledge graph.
A knowledge graph combines the theories and methods of mathematics, graphics, information visualization and information science with methods such as citation analysis and co-occurrence analysis, and uses visualized graphs to display the core structure, development history, frontier fields and overall knowledge architecture of a discipline. On this basis, the diagnosis and treatment knowledge graph can be understood as a map that presents clinical diagnosis and treatment knowledge in visual form: a structural graph showing the development and structural relationships of that knowledge. As a carrier of historical diagnosis and treatment information, it can at least be used to describe, mine and analyze the associations among the information involved in multiple historical surgical scenes.
In this embodiment, the target basic information further includes target contraindication information corresponding to the target user. When determining the target basic information, optionally, the target contraindication information corresponding to it is determined according to a pre-established contraindication knowledge graph and updated into the target basic information, so that intraoperative sudden events can be simulated according to the target contraindication information.
The target contraindication information describes the drugs that must be avoided and cannot be selected while treating the target lesion. Similar to the diagnosis and treatment knowledge graph, a corresponding contraindication knowledge graph can be used when determining the target contraindication information for the target lesion. Further, after the corresponding target contraindication information is determined for the patient's target lesion, it needs to be updated into the target basic information so that it can serve as part of the data basis for constructing the virtual surgical scene.
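The "update into the target basic information" step can be sketched as a dictionary merge. The contraindication lookup itself (the knowledge-graph query) is stubbed with a plain mapping, and every value below is fabricated for illustration only.

```python
# Minimal sketch of updating target contraindication information into the
# target basic information. The knowledge-graph query is stubbed as a dict;
# lesion names, drug names and dose values are fabricated illustrations.

CONTRAINDICATION_GRAPH = {
    "liver": {"forbidden_drugs": ["drug_x"], "max_anesthetic_dose_mg": 150},
}

def update_with_contraindications(basic_info, target_lesion,
                                  graph=CONTRAINDICATION_GRAPH):
    """Return a copy of basic_info extended with the lesion's contraindications."""
    updated = dict(basic_info)  # avoid mutating the caller's record
    updated["contraindications"] = graph.get(target_lesion, {})
    return updated
```

In the scheme described above, the merged record is what the scene constructor later uses to trigger simulated intraoperative emergencies and to display warnings on the training equipment.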
For example, when the target lesion is determined to be in the liver and a metabolic disorder of the relevant organs is found, the drugs prohibited during the surgical procedure and the patient's maximum anesthetic dose can be determined from the contraindication knowledge graph. The prohibited-drug information and the maximum anesthetic dose are then updated into the patient's target basic information, so that after the virtual surgical scene is constructed this information is displayed on the display screen associated with the surgical training equipment.
In particular, after the target contraindication information is updated into the target basic information, it can be used to simulate emergencies during virtual surgical training, making the virtual surgical scene fit the patient's actual condition better and enhancing the intelligence of the virtual scene construction scheme.
In this embodiment, after the scene construction information of the target user is determined, in order to obtain the input of the model, a target to-be-processed matrix corresponding to the scene construction information needs to be determined. Optionally, the target basic information, the at least one to-be-processed organ information, and the target historical diagnosis and treatment information are spliced to construct a target to-be-processed matrix.
Specifically, after the target basic information, the at least one to-be-processed organ information, and the target historical diagnosis and treatment information are determined, since specific contents of the target basic information, the at least one to-be-processed organ information, and the target historical diagnosis and treatment information relate to macro information (such as basic information of the patient and similar medical records) and micro information (such as contrast images and physiological index information of the patient and similar medical records) of multiple dimensions of the patient and similar medical records, redundant information in the scene construction information can be screened by using a preset template, so that templatized arrangement of the scene construction information is realized.
Further, after the information is arranged, the multi-dimensional scene construction information may be packed into triples or n-tuples conforming to the Resource Description Framework (RDF) to facilitate inputting the data into the neural network algorithm model. RDF is a data model expressed in XML syntax and is used at least to describe the characteristics of Web resources and the association relationships between resources. It will be appreciated that RDF provides a general framework for expressing data that can be processed by associated applications, enabling the data to be exchanged between applications without loss of semantics. Those skilled in the art will understand that, for multi-dimensional scene construction information, an RDF parser may be used to package the data, which is not detailed in the embodiments of the present disclosure. By packaging the multi-dimensional scene construction information, the corresponding target to-be-processed matrix is obtained.
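The packing into RDF-style triples can be illustrated with plain (subject, predicate, object) tuples; a real pipeline would use an RDF parser as noted above, and the predicate names here are assumptions:

```python
def pack_scene_info(patient_uri, basic, organ_infos, history):
    """Pack multi-dimensional scene construction information into RDF-style
    (subject, predicate, object) triples."""
    triples = [
        (patient_uri, "hasAge", basic["age"]),
        (patient_uri, "hasSex", basic["sex"]),
    ]
    for organ in organ_infos:
        triples.append((patient_uri, "hasTargetOrgan", organ))
    for case_id in history:
        triples.append((patient_uri, "hasSimilarCase", case_id))
    return triples
```

Each dimension of the scene construction information becomes one predicate, so the collection of triples preserves the semantics of the original records when exchanged between applications.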
And S220, performing dimension reduction processing on the target matrix to be processed based on the node similarity submodel to obtain a corresponding characteristic sequence to be processed.
In this embodiment, the virtual scene building model includes a node similarity submodel and a Graph Attention Network (GAT) submodel. It can be understood that, before processing data by using the GAT model and constructing a corresponding virtual surgical scene, a node similarity sub-model is first used to perform dimension reduction processing on a target matrix to be processed.
In this embodiment, before describing the node similarity submodel, the graph embedding (Graph Embedding) process involved in the scheme is introduced. When data contains a high-dimensional matrix, inputting it into the corresponding machine learning model can be slow to load and operate on. Graph embedding is the process of mapping graph data (usually a high-dimensional dense matrix, such as the target to-be-processed matrix corresponding to the scene construction information in this embodiment) into low-dimensional dense vectors, which solves the problem that graph data is difficult to input efficiently into a machine learning algorithm. In this embodiment, the node similarity submodel performs the dimension reduction processing on the target to-be-processed matrix, which completes the graph embedding of the data corresponding to the scene construction information and facilitates subsequent processing by the GAT model.
Conventional graph embedding methods such as DeepWalk, LINE, and SDNE are all based on a neighbor-similarity assumption: the more common neighbors two vertices share, the more similar they are. In many scenarios, however, two vertices that are not neighbors may still be highly similar; in this embodiment, for example, the target user and a similar case are associated with information of multiple dimensions, and the roles of these nodes within their respective neighborhoods are similar. Therefore, in practical application, the struc2vec model can be selected as the node similarity submodel to execute the graph embedding process.
Specifically, struc2vec is a model proposed to capture the structural role similarity of nodes. When using this submodel to perform dimensionality reduction on the target to-be-processed matrix, the structural similarity of node pairs is first calculated from their neighbor information at different distances, and a multilayer weighted directed graph is then constructed, in which each layer is a weighted undirected graph and edges are directed only between layers. Further, random walks are performed in the multilayer weighted directed graph to construct sequences of contexts in the target to-be-processed matrix, and finally the low-dimensional vector representation of each node is obtained by training on the sequences with skip-gram.
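A simplified sketch of two core ingredients of this process, structural similarity from degree sequences and random-walk context generation; the full struc2vec model walks over a multilayer similarity graph and then trains skip-gram on the walks, which is omitted here:

```python
import random

def degree_distance(graph, u, v):
    """Simplified stand-in for struc2vec's hop-0 structural distance: compare
    the sorted degree sequences of the two vertices' neighborhoods (L1)."""
    du = sorted(len(graph[n]) for n in graph[u])
    dv = sorted(len(graph[n]) for n in graph[v])
    k = max(len(du), len(dv))
    du += [0] * (k - len(du))      # pad the shorter sequence with zeros
    dv += [0] * (k - len(dv))
    return sum(abs(a - b) for a, b in zip(du, dv))

def context_walks(graph, num_walks=2, walk_length=5, seed=0):
    """Generate the context sequences a skip-gram model would be trained on.
    struc2vec walks on its multilayer similarity graph; a uniform walk on the
    original graph is used here to keep the sketch short."""
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        for start in graph:
            walk = [start]
            while len(walk) < walk_length:
                walk.append(rng.choice(list(graph[walk[-1]])))
            walks.append(walk)
    return walks
```

Vertices with a zero degree distance play the same structural role even when they are not neighbors, which is exactly the property the neighbor-similarity methods above fail to capture.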
In the feature sequences to be processed obtained by performing dimensionality reduction on the target to-be-processed matrix, each feature sequence corresponds to the content of one dimension of the scene construction information. Specifically, the feature sequence corresponding to the basic information of the target user may be P[b] = {a, s, f}, where a represents the age feature of the target user, s the gender feature, and f the family-information feature; the feature sequence corresponding to the physiological indexes of the target user may be P[x] = {x1, x2, …}; the feature sequence corresponding to the preoperative tracking information of the target user may be P[tx] = ΣP[x]; the feature sequence corresponding to the target taboo information, determined from the target basic information and the taboo knowledge graph, may be D = f(ΣP[b], ΣP[tx]); for the at least one piece of organ information to be processed, the feature sequence corresponding to the contrast screening information may be P[c] = {c1, c2, …}, and the feature sequence corresponding to the similar case information may be P[r] = {r1, r2, …}; since the similar operation information is determined from the historical diagnosis and treatment information base based on the contrast screening information and the similar case information, the feature sequence corresponding to the similar operation information may be O = f(ΣP[c], ΣP[r]).
And S230, processing the characteristic sequence to be processed based on the pre-trained graph attention network submodel to obtain a target characteristic sequence, and constructing a virtual operation scene according to the target characteristic sequence.
In this embodiment, after the dimension reduction processing is performed on the target to-be-processed matrix to obtain the corresponding to-be-processed feature sequence, the to-be-processed feature sequence may be processed by using the graph attention network submodel, so as to obtain the target feature sequence. Optionally, inputting the feature sequence to be processed into a pre-trained graph attention network sub-model, and obtaining the attention coefficients of each element in the feature sequence to be processed respectively; and determining the target characteristic sequence according to the attention coefficients and the corresponding elements.
Because graph data usually contains the relationships between vertices and edges, and each vertex also has its own features (i.e., a feature sequence to be processed), processing such data with a conventional GCN model cannot handle dynamic graphs, nor does it readily assign different learning weights to different neighbors during processing. Therefore, in this embodiment the GAT model is used to process the feature sequence to be processed: during processing, each vertex can compute attention coefficients with respect to any vertex on the graph, entirely independent of the graph structure. The processing based on the GAT model is described below.
When the GAT model is used to process the feature sequence to be processed, a similarity coefficient e_ij between the vertex and each of its neighbor nodes is first calculated. Specifically, a linear mapping with shared parameters is used to raise the dimension of the vertex features (for example, of the feature sequence to be processed corresponding to the basic information of the target user), and the spliced high-dimensional features are then mapped onto a real number to obtain the similarity coefficient e_ij.
After the similarity coefficients between the neighboring nodes and the vertex are determined, the attention coefficient a_ij can be calculated with a softmax function. softmax, a function common in machine learning and especially deep learning, maps its inputs to real numbers between 0 and 1 that are normalized to sum to 1. Finally, according to the computed attention coefficients, the features are weighted and summed to determine the target feature sequence corresponding to the feature sequence to be processed. Illustratively, when the feature sequence to be processed is {h_1, h_2, …, h_N}, the corresponding target feature sequence output by the GAT model is h'_i = σ(Σ_{j∈N_i} a_ij · W·h_j), where N_i denotes the neighborhood of vertex i, W is the shared linear mapping, and σ is a nonlinear activation function.
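The attention computation described above can be sketched for a single vertex as follows; the learned parameters W (shared linear mapping) and a (attention vector) are assumed given, and the nonlinearity σ is omitted for brevity:

```python
import math

def leaky_relu(x, slope=0.2):
    return x if x > 0 else slope * x

def matvec(W, h):
    return [sum(w * x for w, x in zip(row, h)) for row in W]

def gat_update(h_i, neighbors, W, a):
    """One GAT vertex update: similarity coefficients e_ij from a shared
    linear map, softmax-normalized attention coefficients a_ij, then a
    weighted sum of the mapped neighbor features."""
    Wh_i = matvec(W, h_i)
    Wh_js = [matvec(W, h_j) for h_j in neighbors]
    # e_ij: map the spliced high-dimensional features [Wh_i || Wh_j] to a real
    e = [leaky_relu(sum(ak * xk for ak, xk in zip(a, Wh_i + Wh_j)))
         for Wh_j in Wh_js]
    # softmax: attention coefficients in (0, 1) that sum to 1
    m = max(e)
    exps = [math.exp(x - m) for x in e]
    att = [x / sum(exps) for x in exps]
    # weighted sum of the mapped neighbor features
    h_out = [sum(att[j] * Wh_js[j][d] for j in range(len(neighbors)))
             for d in range(len(W))]
    return h_out, att
```

Because the attention coefficients come from the features alone, any vertex pair can be scored, independent of the graph structure, as noted above.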
It should be noted that before the target to-be-processed matrix is processed based on the virtual scene construction model, the model needs to be trained. Optionally, a training sample set is obtained; the historical scene construction information in each training sample is used as the input of the virtual scene construction model to be trained, and the actual operation scene information as its output, to train the model; with loss convergence in the virtual scene construction model to be trained as the training target, the virtual scene construction model is obtained through training.
The training sample set includes a plurality of training samples, each comprising historical scene construction information and actual operation scene information associated with a historical user. A loss function is preset, and the parameters in the virtual scene construction model to be trained can be modified based on it. It can be understood that, after the model is trained with the historical scene construction information and actual operation scene information in the training set, the training error of the loss function, i.e., the loss parameter, can be used as the condition for detecting whether the loss function has converged: for example, whether the training error is smaller than a preset error, whether the error variation trend tends to be stable, or whether the current iteration count equals a preset count. If the convergence condition is reached, for example the training error of the loss function is smaller than the preset error or the error change tends to be stable, the training of the virtual scene construction model to be trained is complete, and the iterative training can be stopped. If the condition is not yet met, historical scene construction information and corresponding actual operation scene information from other training sets can be obtained to continue training the model until the training error of the loss function falls within the preset range. When the training error of the loss function converges, the trained model can be used as the virtual scene construction model; that is, after the target to-be-processed matrix associated with the scene construction information is input into the trained virtual scene construction model, the corresponding target feature sequence can be obtained.
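The convergence check described here might be sketched as follows, with `train_step` standing in for one pass of a hypothetical training procedure that returns the current loss:

```python
def train_until_converged(train_step, max_iters=1000, tol=1e-4, patience=5):
    """Run train_step (which returns the current loss) until the loss change
    stays below tol for `patience` consecutive iterations, mirroring the
    'error variation trend tends to be stable' condition, or until the
    preset iteration count is reached."""
    prev_loss, stable = float("inf"), 0
    for it in range(1, max_iters + 1):
        loss = train_step(it)
        stable = stable + 1 if abs(prev_loss - loss) < tol else 0
        if stable >= patience:
            return it, loss          # converged
        prev_loss = loss
    return max_iters, prev_loss      # hit the iteration cap
```

The tolerance, patience, and iteration cap are illustrative hyperparameters; any of the three convergence conditions named above could be substituted for the stability test.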
In this embodiment, after the target feature sequence corresponding to the scene construction information is determined, the operation training system can construct the virtual operation scene corresponding to the target user based on these data. The virtual operation scene includes virtual user information, virtual organ information, virtual diagnosis and treatment information, and intraoperative sudden events. It can be understood that, centering on the target user, the virtual operation scene may simulate basic information, physiological index information, virtual image information of the organs associated with the target lesion, and the surgical equipment required for the virtual operation. Meanwhile, the constructed virtual operation scene may consist of a plurality of scene segments, and within each segment an emergency situation during the operation can be simulated based on the target taboo information. For example, when the target lesion in the virtual operation scene is the liver of the target user, other related organs may bleed, or the virtual blood pressure of the user may drop sharply, in some scene segments.
Meanwhile, since the constructed virtual operation scene may consist of a plurality of scene segments, when medical staff perform operation training in the virtual operation scene as training participants, the operation training equipment can adaptively adjust the operation scenes in subsequent segments according to the operation information received in the current segment, so that the constructed virtual operation scene better fits actual operation conditions and helps train the clinical operation handling and judgment capabilities of the medical staff. Furthermore, after the training participants complete the operation training in the virtual operation scene, the operation training system can also feed back the virtual operation result based on the operation information and an operation evaluation knowledge graph called in real time, improving the framework of virtual operation training and the intelligence of the operation training system.
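A sketch of how subsequent scene segments might be adjusted from received operation information; the segment names, event labels, and score thresholds are illustrative assumptions, not values from this disclosure:

```python
class SceneSegment:
    """One segment of the virtual operation scene; the optional emergency
    event is derived from the target taboo information."""
    def __init__(self, name, emergency=None, difficulty=1):
        self.name = name
        self.emergency = emergency      # e.g. "blood_pressure_drop"
        self.difficulty = difficulty

def adapt_following_segments(segments, current_index, operation_score):
    """Raise the difficulty of subsequent segments after a well-scored
    operation, lower it after a poor one, and leave completed segments
    untouched (thresholds are illustrative)."""
    delta = 1 if operation_score >= 0.8 else (-1 if operation_score < 0.5 else 0)
    for seg in segments[current_index + 1:]:
        seg.difficulty = max(1, seg.difficulty + delta)
    return segments
```

An actual training system would adjust far richer state than a difficulty integer, but the segment-by-segment feedback loop is the point being illustrated.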
According to the technical scheme of the embodiment, after the multi-dimensional scene construction information of the target user is determined, the target contraindication information can be determined, so that the intraoperative burst event can be simulated; redundant information in the scene construction information is screened by using a preset template, so that templated arrangement of the information is realized; performing dimensionality reduction on a target matrix to be processed based on the node similarity submodel, and inputting the obtained characteristic sequence to be processed into a pre-trained GAT model, so that the operation training system can conveniently construct a virtual operation scene based on a high-dimensional matrix; the virtual operation result is fed back based on the operation evaluation knowledge graph called in real time, and the intelligence of the operation training system is improved.
EXAMPLE III
As an alternative to the foregoing embodiments, fig. 3 is a flowchart of a virtual scene construction method applied to an operation according to a third embodiment of the present invention. To describe the technical solution of this embodiment clearly, the application scene of constructing a diversified virtual operation scene based on multi-dimensional information is taken as an example, but the invention is not limited to this scene and can be applied to any case requiring virtual scene construction.
Referring to fig. 3, in practical application, to construct a virtual operation scene, multi-dimensional information needs to be acquired first, including at least patient information, organ information, and operation information. The patient information includes the patient's basic information, physiological indexes, preoperative tracking, and contraindication risks; it should be noted that the contraindication risk information is determined by extracting information from the first three items and screening and matching in a knowledge graph. The organ information includes contrast screening information and similar case information; after these two kinds of information are extracted, the operation scene information of cases similar to the patient can be determined based on the corresponding knowledge graph. Further, combining this with preoperative preparation information containing the necessary operation steps and materials yields the operation information.
With continued reference to fig. 3, after the patient information, organ information, and operation information in the scene construction information are determined, the information may be templated based on a semantic segmentation technique to eliminate invalid and redundant information. Further, the scene construction information is packaged into RDF triples or n-tuples, and graph embedding is performed using the struc2vec model, i.e., the high-dimensional matrix is converted into low-dimensional feature sequences so that the data can be conveniently input into the algorithm model.
With continued reference to fig. 3, after the feature sequence to be processed is determined, the feature sequence may be input into a GAT model trained in advance, so as to obtain a target feature sequence. The operation training system can construct a virtual operation scene corresponding to the target user based on the output data. In the constructed virtual operation scene, patient information, organ information and operation information can be simulated, and meanwhile, the tabu risk information is fused in the multi-dimensional scene construction information, so that the emergency situation in the operation process can be simulated in the scene. Finally, based on the operation information received by the operation training system and the knowledge map model called in real time, the effect generated in the operation process can be fed back in real time, and after the operation training is finished, the operation result can be evaluated.
The beneficial effects of the above technical scheme are: the automatic and efficient construction of the virtual operation scene is realized, meanwhile, the limitation of the traditional mechanical device and the image initial parameter is eliminated by fusing information of various dimensions in the scene construction process, so that the generated virtual operation scene is more diversified, and the actual requirements under the clinical scene are better met.
Example four
Fig. 4 is a block diagram of a virtual scene constructing apparatus for surgery according to a fourth embodiment of the present invention, which is capable of executing a virtual scene constructing method for surgery according to any embodiment of the present invention, and has functional modules and beneficial effects corresponding to the executing method. As shown in fig. 4, the apparatus specifically includes: a scene construction information determination module 310 and a virtual surgical scene determination module 320.
A scene construction information determining module 310, configured to determine scene construction information associated with a target user, and determine a target to-be-processed matrix of the scene construction information; wherein the scene construction information comprises target basic information corresponding to the target user, at least one piece of to-be-processed organ information associated with a target lesion, and target historical diagnosis and treatment information associated with the target lesion.
The virtual surgical scene determining module 320 is configured to process the target matrix to be processed based on a virtual scene building model obtained through pre-training, determine a virtual surgical scene corresponding to the target user, and perform surgical simulation based on the virtual surgical scene.
On the basis of the above technical solutions, the scene construction information determining module 310 includes a target basic information determining unit, a to-be-processed organ information determining unit, and a target historical diagnosis and treatment information determining unit.
And the target basic information determining unit is used for acquiring basic parameters to be processed corresponding to the target user and extracting target basic information corresponding to a preset field from the basic parameters to be processed.
A to-be-processed organ information determination unit for determining the at least one to-be-processed organ information from a to-be-processed view associated with the target lesion based on an image recognition algorithm.
And the target historical diagnosis and treatment information determining unit is used for determining target historical diagnosis and treatment information associated with the target focus in a historical diagnosis and treatment information base according to a diagnosis and treatment knowledge map established in advance.
On the basis of the above technical solutions, the target basic information further includes target taboo information corresponding to the target user.
Optionally, the target basic information determining unit is further configured to determine, according to a tabu knowledge graph established in advance, target tabu information corresponding to the target basic information, and update the target tabu information into the target basic information, so as to simulate an intraoperative emergency according to the target tabu information.
On the basis of the above technical solutions, the scene construction information determining module 310 further includes a target to-be-processed matrix determining unit.
And the target to-be-processed matrix determining unit is used for splicing the target basic information, the at least one to-be-processed organ information and the target historical diagnosis and treatment information to construct the target to-be-processed matrix.
On the basis of the technical schemes, the virtual scene construction model comprises a node similarity submodel and a graph attention network submodel.
On the basis of the above technical solutions, the virtual surgical scene determining module 320 includes a to-be-processed feature sequence determining unit and a virtual scene constructing unit.
And the to-be-processed characteristic sequence determining unit is used for performing dimensionality reduction processing on the target to-be-processed matrix based on the node similarity submodel to obtain a corresponding to-be-processed characteristic sequence.
The virtual scene construction unit is used for processing the characteristic sequence to be processed based on a pre-trained graph attention network submodel to obtain a target characteristic sequence and constructing the virtual operation scene according to the target characteristic sequence; the virtual operation scene comprises virtual user information, virtual organ information, virtual diagnosis and treatment information and intraoperative sudden events.
Optionally, the virtual scene constructing unit is further configured to input the feature sequence to be processed into a pre-trained graph attention network sub-model, and obtain an attention coefficient of each element in the feature sequence to be processed respectively; and determining the target characteristic sequence according to the attention coefficients and the corresponding elements.
On the basis of the technical schemes, the virtual scene construction device applied to the operation further comprises a model training module.
The model training module is used for acquiring a training sample set; wherein the training sample set comprises a plurality of training samples, each training sample comprising: historical scene construction information and actual surgical scene information associated with historical users; taking historical scene construction information in each training sample as input of a virtual scene construction model to be trained, taking actual operation scene information as output of the virtual scene construction model to be trained, and training the virtual scene construction model to be trained; and training to obtain the virtual scene construction model by taking the loss convergence in the virtual scene construction model to be trained as a training target.
According to the technical scheme provided by the embodiment, scene construction information associated with a target user is determined, and a target to-be-processed matrix of the scene construction information is determined, so that target basic information of the target user, at least one piece of to-be-processed organ information of a target focus and historical diagnosis and treatment information of the target focus are determined; furthermore, a target matrix to be processed is processed based on a virtual scene construction model obtained through pre-training, a virtual operation scene corresponding to a target user is determined, operation simulation is performed based on the virtual operation scene, automatic and efficient construction of the virtual operation scene is achieved, meanwhile, the limitation of traditional mechanical devices and image initial parameters is eliminated by fusing information of multiple dimensions in the scene construction process, the generated virtual operation scene is more diversified, and the actual requirements under clinical scenes are better met.
The virtual scene construction device applied to the operation provided by the embodiment of the invention can execute the virtual scene construction method applied to the operation provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
It should be noted that, the units and modules included in the apparatus are merely divided according to functional logic, but are not limited to the above division as long as the corresponding functions can be implemented; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the embodiment of the invention.
EXAMPLE five
Fig. 5 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention. FIG. 5 illustrates a block diagram of an exemplary electronic device 40 suitable for use in implementing embodiments of the present invention. The electronic device 40 shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiment of the present invention.
As shown in fig. 5, electronic device 40 is embodied in the form of a general purpose computing device. The components of electronic device 40 may include, but are not limited to: one or more processors or processing units 401, a system memory 402, and a bus 403 that couples the various system components (including the system memory 402 and the processing unit 401).
Bus 403 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Electronic device 40 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by electronic device 40 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 402 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM)404 and/or cache memory 405. The electronic device 40 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 406 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 5, commonly referred to as a "hard drive"). Although not shown in FIG. 5, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to the bus 403 by one or more data media interfaces. Memory 402 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 408 having a set (at least one) of program modules 407 may be stored, for example, in memory 402, such program modules 407 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 407 generally perform the functions and/or methods of the described embodiments of the invention.
The electronic device 40 may also communicate with one or more external devices 409 (e.g., keyboard, pointing device, display 410, etc.), with one or more devices that enable a user to interact with the electronic device 40, and/or with any devices (e.g., network card, modem, etc.) that enable the electronic device 40 to communicate with one or more other computing devices. Such communication may be through input/output (I/O) interface 411. Also, the electronic device 40 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via the network adapter 412. As shown, the network adapter 412 communicates with the other modules of the electronic device 40 over the bus 403. It should be appreciated that although not shown in FIG. 5, other hardware and/or software modules may be used in conjunction with electronic device 40, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 401 executes various functional applications and data processing by running a program stored in the system memory 402, for example, to implement the virtual scene construction method applied to the surgery provided by the embodiment of the present invention.
EXAMPLE six
The sixth embodiment of the present invention further provides a storage medium containing computer-executable instructions, which are used to execute a virtual scene construction method applied to a surgery when the computer-executable instructions are executed by a computer processor.
The method comprises the following steps:
determining scene construction information associated with a target user, and determining a target to-be-processed matrix of the scene construction information; wherein the scene construction information comprises target basic information corresponding to the target user, at least one piece of to-be-processed organ information associated with a target lesion, and target historical diagnosis and treatment information associated with the target lesion;
and processing the target matrix to be processed based on a virtual scene construction model obtained by pre-training, determining a virtual operation scene corresponding to the target user, and performing operation simulation based on the virtual operation scene.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of embodiments of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It should be noted that the foregoing describes only the preferred embodiments of the present invention and the technical principles employed. Those skilled in the art will understand that the present invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements and substitutions can be made without departing from the scope of the invention. Therefore, although the present invention has been described in detail through the above embodiments, it is not limited to them and may include other equivalent embodiments without departing from its spirit; the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A virtual scene construction method applied to surgery is characterized by comprising the following steps:
determining scene construction information associated with a target user, and determining a target matrix to be processed of the scene construction information; the scene construction information comprises target basic information corresponding to the target user, at least one piece of to-be-processed organ information associated with a target lesion, and target historical diagnosis and treatment information associated with the target lesion;
and processing the target matrix to be processed based on a virtual scene construction model obtained by pre-training, determining a virtual operation scene corresponding to the target user, and performing operation simulation based on the virtual operation scene.
2. The method of claim 1, wherein determining scene building information associated with the target user comprises:
acquiring basic parameters to be processed corresponding to the target user, and extracting target basic information corresponding to a preset field from the basic parameters to be processed;
determining the at least one piece of to-be-processed organ information from a to-be-processed view associated with the target lesion based on an image recognition algorithm;
and determining, according to a pre-established diagnosis and treatment knowledge graph, the target historical diagnosis and treatment information associated with the target lesion in a historical diagnosis and treatment information base.
3. The method according to claim 2, wherein the target basic information further includes target taboo information corresponding to the target user, and the extracting the target basic information corresponding to a preset field from the basic parameters to be processed includes:
and determining target taboo information corresponding to the target basic information according to a taboo knowledge graph established in advance, and updating the target taboo information into the target basic information so as to simulate intraoperative sudden events according to the target taboo information.
4. The method of claim 1, wherein the determining the target pending matrix of the scene construction information comprises:
and splicing the target basic information, the at least one to-be-processed organ information and the target historical diagnosis and treatment information to construct the target to-be-processed matrix.
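The "splicing" in claim 4 is row-wise concatenation of the three information blocks. The patent gives no implementation; the sketch below is illustrative only, and assumes each block has already been encoded to the same feature width (a hypothetical encoding):

```python
import numpy as np

def splice_target_matrix(basic_info, organ_info_list, history_info):
    """Splice the target basic information, each piece of to-be-processed
    organ information, and the target historical diagnosis and treatment
    information into one target to-be-processed matrix (one row per
    information entry; equal feature widths assumed)."""
    organ_block = np.vstack([np.atleast_2d(o) for o in organ_info_list])
    return np.vstack([np.atleast_2d(basic_info),
                      organ_block,
                      np.atleast_2d(history_info)])

# one basic-info row, two organ rows, one history row -> a 4 x 2 matrix
m = splice_target_matrix([1.0, 0.0], [[0.0, 1.0], [1.0, 1.0]], [0.5, 0.5])
```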
5. The method of claim 1, wherein the virtual scene construction model comprises a node similarity submodel and a graph attention network submodel, and wherein processing the target matrix to be processed based on the virtual scene construction model obtained by pre-training to determine the virtual operation scene corresponding to the target user comprises:
performing dimensionality reduction on the target matrix to be processed based on the node similarity submodel to obtain a corresponding feature sequence to be processed;
processing the feature sequence to be processed based on the pre-trained graph attention network submodel to obtain a target feature sequence, and constructing the virtual operation scene according to the target feature sequence;
the virtual operation scene comprises virtual user information, virtual organ information, virtual diagnosis and treatment information and intraoperative sudden events.
6. The method of claim 5, wherein the processing the to-be-processed feature sequence based on the pre-trained graph attention network submodel to obtain a target feature sequence comprises:
inputting the feature sequence to be processed into the pre-trained graph attention network submodel to obtain an attention coefficient for each element in the feature sequence to be processed;
and determining the target feature sequence according to the attention coefficients and the corresponding elements.
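Claim 6 weights each element of the sequence by a learned attention coefficient. The patent does not specify how the coefficients are computed; the sketch below assumes the submodel has already produced raw attention scores and shows only the normalization-and-reweighting step (one simplified graph-attention-style operation, not the patent's actual model):

```python
import numpy as np

def attention_weighted_sequence(features, scores):
    """Normalize the raw attention scores into coefficients with a
    numerically stable softmax, then scale each sequence element by
    its coefficient to form the target feature sequence."""
    exp = np.exp(scores - np.max(scores))   # stable softmax
    coeffs = exp / exp.sum()
    return coeffs[:, None] * features       # per-element reweighting

feats = np.array([[1.0, 2.0], [3.0, 4.0]])
# equal raw scores -> equal attention coefficients of 0.5 each
target_seq = attention_weighted_sequence(feats, np.array([0.0, 0.0]))
```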
7. The method according to claim 1, wherein before processing the target matrix to be processed based on the virtual scene construction model obtained by pre-training, the method further comprises:
acquiring a training sample set; wherein the training sample set comprises a plurality of training samples, each training sample comprising: historical scene construction information and actual surgical scene information associated with historical users;
taking historical scene construction information in each training sample as input of a virtual scene construction model to be trained, taking actual operation scene information as output of the virtual scene construction model to be trained, and training the virtual scene construction model to be trained;
and training to obtain the virtual scene construction model by taking the loss convergence in the virtual scene construction model to be trained as a training target.
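Claim 7's training target is loss convergence. As a toy illustration only (the patent discloses neither the model nor the loss; the 1-D linear "model", learning rate, and tolerance below are all hypothetical stand-ins), a loop that stops once the loss change falls below a tolerance might look like:

```python
import numpy as np

def train_until_convergence(x, y, lr=0.1, tol=1e-6, max_epochs=10000):
    """Fit a 1-D linear stand-in model w*x to targets y by gradient
    descent on a mean-squared-error loss, stopping when the change in
    loss falls below `tol` (the 'loss convergence' training target)."""
    w, prev_loss = 0.0, float("inf")
    loss = prev_loss
    for _ in range(max_epochs):
        pred = w * x
        loss = float(np.mean((pred - y) ** 2))
        if abs(prev_loss - loss) < tol:     # loss has converged
            break
        grad = float(np.mean(2.0 * (pred - y) * x))
        w -= lr * grad
        prev_loss = loss
    return w, loss

# data generated by y = 2x, so training should recover w close to 2
w, loss = train_until_convergence(np.array([1.0, 2.0, 3.0]),
                                  np.array([2.0, 4.0, 6.0]))
```

In the patent's setting, `x` would be the historical scene construction information, `y` the actual surgical scene information, and `w` the parameters of the virtual scene construction model to be trained.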
8. A virtual scene constructing apparatus applied to a surgery, comprising:
the scene construction information determining module is used for determining scene construction information associated with a target user and determining a target matrix to be processed of the scene construction information; the scene construction information comprises target basic information corresponding to the target user, at least one piece of to-be-processed organ information associated with a target lesion, and target historical diagnosis and treatment information associated with the target lesion;
and the virtual operation scene determining module is used for processing the target matrix to be processed based on a virtual scene building model obtained through pre-training, determining a virtual operation scene corresponding to the target user, and performing operation simulation based on the virtual operation scene.
9. An electronic device, characterized in that the electronic device comprises:
one or more processors;
a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the virtual scene construction method applied to surgery of any one of claims 1-7.
10. A storage medium containing computer-executable instructions for performing the virtual scene construction method for surgery of any one of claims 1-7 when executed by a computer processor.
CN202111420464.4A 2021-11-26 Virtual scene construction method, device, equipment and medium applied to surgery Active CN114121218B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111420464.4A CN114121218B (en) 2021-11-26 Virtual scene construction method, device, equipment and medium applied to surgery

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111420464.4A CN114121218B (en) 2021-11-26 Virtual scene construction method, device, equipment and medium applied to surgery

Publications (2)

Publication Number Publication Date
CN114121218A true CN114121218A (en) 2022-03-01
CN114121218B CN114121218B (en) 2025-08-01


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117457150A (en) * 2023-11-06 2024-01-26 中国人民解放军总医院第一医学中心 A method, equipment and medium for automatically generating surgical plans
CN118609825A (en) * 2024-05-27 2024-09-06 江苏世康启航医疗器械有限公司 A data management system and method for establishing a virtual simulation surgery model
CN119107435A (en) * 2024-09-10 2024-12-10 成都真叶科技有限公司 Optimization method and system for automatically generating immersive 3D scenes based on AIGC

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106708260A (en) * 2016-11-30 2017-05-24 宇龙计算机通信科技(深圳)有限公司 Generation method and device for virtual reality surgery scene
CN106874700A (en) * 2017-04-01 2017-06-20 上海术理智能科技有限公司 Surgical simulation method, surgical simulation device and electronic equipment based on Web
CN108133755A (en) * 2017-12-20 2018-06-08 安徽紫薇帝星数字科技有限公司 A kind of atlas pivot surgery simulation system and its analogy method based on three-dimensional visualization
CN108335599A (en) * 2018-01-19 2018-07-27 武汉康慧然信息技术咨询有限公司 Operation model training method based on three-dimensional modeling image technology
CN110796739A (en) * 2019-09-27 2020-02-14 哈雷医用(广州)智能技术有限公司 Virtual reality simulation method and system for craniocerebral operation
CN113486190A (en) * 2021-06-21 2021-10-08 北京邮电大学 Multi-mode knowledge representation method integrating entity image information and entity category information

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106708260A (en) * 2016-11-30 2017-05-24 宇龙计算机通信科技(深圳)有限公司 Generation method and device for virtual reality surgery scene
CN106874700A (en) * 2017-04-01 2017-06-20 上海术理智能科技有限公司 Surgical simulation method, surgical simulation device and electronic equipment based on Web
CN108133755A (en) * 2017-12-20 2018-06-08 安徽紫薇帝星数字科技有限公司 A kind of atlas pivot surgery simulation system and its analogy method based on three-dimensional visualization
CN108335599A (en) * 2018-01-19 2018-07-27 武汉康慧然信息技术咨询有限公司 Operation model training method based on three-dimensional modeling image technology
CN110796739A (en) * 2019-09-27 2020-02-14 哈雷医用(广州)智能技术有限公司 Virtual reality simulation method and system for craniocerebral operation
CN113486190A (en) * 2021-06-21 2021-10-08 北京邮电大学 Multi-mode knowledge representation method integrating entity image information and entity category information

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG YANGWU: "Heart of the Machine: Topic Learning of Legal Texts", 30 September 2021, China University of Political Science and Law Press, pages 53-55 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117457150A (en) * 2023-11-06 2024-01-26 中国人民解放军总医院第一医学中心 A method, equipment and medium for automatically generating surgical plans
CN118609825A (en) * 2024-05-27 2024-09-06 江苏世康启航医疗器械有限公司 A data management system and method for establishing a virtual simulation surgery model
CN118609825B (en) * 2024-05-27 2025-01-28 江苏世康启航医疗器械有限公司 A data management system and method for establishing a virtual simulation surgery model
CN119107435A (en) * 2024-09-10 2024-12-10 成都真叶科技有限公司 Optimization method and system for automatically generating immersive 3D scenes based on AIGC

Similar Documents

Publication Publication Date Title
Ueda et al. Technical and clinical overview of deep learning in radiology
WO2020215984A1 (en) Medical image detection method based on deep learning, and related device
Lee et al. Cephalometric landmark detection in dental x-ray images using convolutional neural networks
Chen et al. Generative AI-driven human digital twin in IoT-healthcare: A comprehensive survey
US20250148050A1 (en) Similarity determining method and device, network training method and device, search method and device, and electronic device and storage medium
CN114121213A (en) Anesthesia medicine information rechecking method and device, electronic equipment and storage medium
CN117009924B (en) Multi-mode self-adaptive multi-center data fusion method and system guided by electronic medical records
CN115994902A (en) Medical image analysis method, electronic device and storage medium
CN116433605A (en) Medical image analysis mobile augmented reality system and method based on cloud intelligence
WO2023160157A1 (en) Three-dimensional medical image recognition method and apparatus, and device, storage medium and product
Meng et al. Knowledge distillation in medical data mining: a survey
CN113822439A (en) Task prediction method, device, equipment and storage medium
Benbelkacem et al. Lung infection region quantification, recognition, and virtual reality rendering of CT scan of COVID-19
CN116994695A (en) Training method, device, equipment and storage medium of report generation model
Wu et al. Diagnosis assistant for liver cancer utilizing a large language model with three types of knowledge
US12141923B2 (en) Health management system, and human body information display method and human body model generation method applied to same
Eapen et al. LesionMap: A method and tool for the semantic annotation of dermatological lesions for documentation and machine learning
Qin et al. Virtual reality video image classification based on texture features
Huang et al. External validation based on transfer learning for diagnosing atelectasis using portable chest X-rays
Zhou et al. MedVersa: A Generalist Foundation Model for Medical Image Interpretation
Liu et al. Detection of fetal facial anatomy in standard ultrasonographic sections based on real‐time target detection network
CN114121218B (en) Virtual scene construction method, device, equipment and medium applied to surgery
WO2024221231A1 (en) Semi-supervised learning method and apparatus based on model framework
CN114121218A (en) Virtual scene construction method, device, equipment and medium applied to operation
US20240062857A1 (en) Systems and methods for visualization of medical records

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant