
CN121460091A - Method and system for assisting doctor in working - Google Patents

Method and system for assisting doctor in working

Info

Publication number
CN121460091A
Authority
CN
China
Prior art keywords
doctor
patient
surgical
data
terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202411385860.1A
Other languages
Chinese (zh)
Inventor
张丽
郭培涵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Lianying Zhiyuan Medical Technology Co ltd
Original Assignee
Shanghai Lianying Zhiyuan Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Lianying Zhiyuan Medical Technology Co ltd filed Critical Shanghai Lianying Zhiyuan Medical Technology Co ltd
Publication of CN121460091A


Landscapes

  • Medical Treatment And Welfare Office Work (AREA)

Abstract


This disclosure provides a method and system for assisting doctors in their work. The method is implemented on a processing device that is communicatively connected to a doctor's terminal. The method includes: obtaining an access request for a medical space application from the doctor's terminal; in response to the access request, determining the doctor's pending tasks based on the time the access request was received and the doctor's schedule information; and controlling the doctor's terminal to present an interactive interface through the medical space application. The interactive interface includes a first interface element for obtaining assistance services related to at least one pending task. The processing device is equipped with an intelligent agent capable of self-evolving based on artificial intelligence technology, and the layout of the interactive interface is determined by the intelligent agent based on the doctor's behavioral data and preference information.

Description

Method and system for assisting doctor in working
Cross reference
The present application claims priority from International Application No. PCT/CN2024/109063, filed on July 31, 2024, the entire contents of which are incorporated herein by reference.
Technical Field
The application relates to the technical field of medical services, in particular to a method and a system for assisting doctors in working.
Background
In a doctor's daily work scenarios, including ward rounds, consultations, surgery, attending meetings, etc., the doctor needs to know various work-related information in advance, such as the work schedule, inpatient data, information of patients to be consulted, surgical plans, etc. Whether a doctor can view this information comprehensively and conveniently and process the related work directly affects the doctor's working efficiency.
Based on this, it is desirable to provide a method and system for assisting a doctor in working, which assists the doctor in efficiently performing daily work.
Disclosure of Invention
One or more embodiments of the present specification provide a method for assisting a doctor in working, implemented on a processing device communicatively coupled to the doctor's terminal. The method comprises: obtaining an access request for a medical space application from the doctor terminal; in response to the access request, determining the doctor's tasks to be completed based on the receiving time of the access request and the doctor's schedule information; and controlling the doctor terminal to present an interactive interface through the medical space application, the interactive interface including a first interface element for obtaining assistance services related to at least one task to be completed. The processing device is configured with an agent capable of self-evolving based on artificial intelligence technology, and the layout of the interactive interface is determined by the agent based on the doctor's behavior data and preference information.
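The core step above — determining the tasks to be completed from the receiving time of the access request and the schedule information — could be sketched as follows. This is an illustrative sketch only, not the patent's implementation; the `pending_tasks` function and the schedule structure are hypothetical.

```python
from datetime import datetime

def pending_tasks(schedule, received_at):
    """Return schedule entries not yet finished at `received_at`.

    `schedule` is a list of dicts with 'name', 'start', and 'end'
    (datetime objects); entries whose window has already ended are
    treated as completed and excluded.
    """
    return sorted(
        (t for t in schedule if t["end"] > received_at),
        key=lambda t: t["start"],
    )

# Example schedule for one working day (all values invented)
schedule = [
    {"name": "ward round",   "start": datetime(2024, 7, 31, 8),  "end": datetime(2024, 7, 31, 9)},
    {"name": "consultation", "start": datetime(2024, 7, 31, 10), "end": datetime(2024, 7, 31, 12)},
    {"name": "surgery",      "start": datetime(2024, 7, 31, 14), "end": datetime(2024, 7, 31, 17)},
]

todo = pending_tasks(schedule, datetime(2024, 7, 31, 9, 30))
print([t["name"] for t in todo])  # → ['consultation', 'surgery']
```

The same partition also yields the completed tasks (everything filtered out), which a later embodiment presents as foldable elements.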
In some embodiments, the behavior data reflects the doctor's daily behavior and/or the doctor's interactive behavior with the medical space application, and the preference information reflects the doctor's preferences regarding the interface style and interface elements of the interactive interface.
In some embodiments, determining, by the agent, the layout of the interactive interface based on the doctor's behavior data and preference information includes: determining the doctor's preference information based on the doctor's historical operation data; and determining the layout of the interactive interface corresponding to the doctor based on the preference information and the behavior data.
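One simple way the two steps above could work is to derive preferences from usage frequency in the historical operation data and then order interface elements accordingly. This is a minimal sketch under that assumption; the element identifiers and both functions are hypothetical, and the patent's self-evolving agent would be far more elaborate.

```python
from collections import Counter

def infer_preferences(history):
    """Derive preference information from historical operation data,
    here modeled as a list of interface-element ids the doctor used,
    oldest first."""
    return Counter(history)

def layout(elements, preferences):
    """Order interface elements by descending historical usage, so
    the most-used elements appear first in the interactive interface."""
    return sorted(elements, key=lambda e: -preferences.get(e, 0))

history = ["schedule", "ward_round", "schedule", "surgery", "schedule", "ward_round"]
prefs = infer_preferences(history)
print(layout(["surgery", "ward_round", "schedule", "messages"], prefs))
# → ['schedule', 'ward_round', 'surgery', 'messages']
```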
In some embodiments, the method further comprises: autonomously obtaining real-time behavior data of the doctor; updating the layout of the interactive interface in real time based on the real-time behavior data; and controlling the doctor terminal to present the updated interactive interface through the medical space application.
In some embodiments, determining the doctor's tasks to be completed in response to the access request includes: controlling the doctor terminal to present an initial interactive interface through the medical space application in response to the access request, the initial interactive interface including an interface element for reminding the doctor to view the work schedule; and determining the tasks to be completed in response to a request for accessing the work schedule entered by the doctor through the doctor terminal.
In some embodiments, the initial interactive interface further provides a virtual character configured to communicate with the doctor, the request to access a work schedule and/or the request to access assistance services associated with at least one of the tasks to be completed being entered by the doctor through voice interaction with the virtual character.
In some embodiments, the interactive interface further includes a second interface element, where the second interface element is configured to obtain a real-time 3D map related to a target location corresponding to at least one of the tasks to be completed.
In some embodiments, the method further comprises determining a completed task for the physician based on the time of receipt of the access request and schedule information for the physician, the interactive interface further comprising foldable elements related to the completed task.
In some embodiments, the method further comprises: in response to a request related to at least one of the assistance services entered by the physician through the medical space application, determining a digital twin associated with the assistance service, the digital twin reflecting the real-time state of a corresponding physical entity; and controlling the physician terminal to present the digital twin through the medical space application.
In some embodiments, the method further comprises: receiving, through the medical space application, interaction information of the physician with the digital twin; and updating the state of the physical entity corresponding to the digital twin based on the interaction information.
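The two-way digital-twin loop described above (twin mirrors the physical entity; interaction with the twin is propagated back) could be sketched as below. All class and attribute names are invented for illustration; a real system would use IoT messaging rather than direct method calls.

```python
class InfusionPump:
    """Stand-in physical entity with a controllable state."""
    def __init__(self):
        self.state = {"rate_ml_h": 50}

    def apply(self, state):
        # In practice this would be an IoT command to the device.
        self.state = dict(state)

class DigitalTwin:
    """Mirrors a physical entity and syncs interactions back to it."""
    def __init__(self, entity):
        self.entity = entity
        self.state = dict(entity.state)  # twin starts as a copy

    def interact(self, updates):
        """Apply the doctor's interaction to the twin, then update the
        corresponding physical entity to match."""
        self.state.update(updates)
        self.entity.apply(self.state)

pump = InfusionPump()
twin = DigitalTwin(pump)
twin.interact({"rate_ml_h": 75})   # doctor adjusts the twin in the app
print(pump.state["rate_ml_h"])     # → 75 (physical entity updated)
```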
In some embodiments, the method further includes: in response to a request related to at least one of the assistance services entered by the physician through the medical space application, determining an agent associated with the assistance service; and implementing at least one of the assistance services using the agent associated with the assistance service.
In some embodiments, the tasks to be completed include a ward round, and the interactive interface includes a first interface element for applying for a remote ward round. The method further includes: obtaining, from the doctor terminal, a request entered by the doctor to remotely participate in the ward round; obtaining first perception information acquired during the ward round by a first perception device in a hospital ward; generating a virtual ward space based on the first perception information and patient data of a target patient; and controlling the doctor terminal to present the virtual ward space.
In some embodiments, the first interface element comprises a first interface element for obtaining an initial ward-round record. The method further comprises: obtaining, from the doctor terminal, a request to access the initial ward-round record of the target patient; and controlling the doctor terminal to present the initial ward-round record of the target patient in response to the request, the initial ward-round record being generated by an agent corresponding to the ward-round service based on the first perception information.
In some embodiments, the tasks to be completed include providing a consultation service in a consulting room, and the interactive interface includes a first interface element for obtaining patient data of patients who have booked the consultation service. The method further includes: obtaining, from the doctor terminal, a request to access patient data of a target patient among the patients who have booked the consultation service; generating a patient virtual character representing the target patient based on the patient data of the target patient; and controlling the doctor terminal to present the patient virtual character such that the patient virtual character explains the target patient's patient data to the doctor.
In some embodiments, the first interface element comprises a first interface element for obtaining an initial diagnostic record associated with the consultation service. The method further comprises: obtaining, from the doctor terminal, a request to access the initial diagnostic record of the target patient; and in response to the request, controlling the doctor terminal to present the initial diagnostic record, the initial diagnostic record being generated by an agent corresponding to the consultation service based on second perception information acquired by a second perception device in the consulting room during the consultation service.
In some embodiments, the task to be completed includes providing a remote inquiry service, the interactive interface includes a first interface element for entering a virtual consulting room, the method further includes obtaining a request from the doctor terminal to enter the virtual consulting room to provide the remote inquiry service to a target patient, controlling the doctor terminal to present a 3D patient model of the target patient, obtaining from the doctor terminal a query entered by the doctor through interaction with the 3D patient model, and controlling a wearable device worn by the target patient to acquire query data of the target patient based on the query.
In some embodiments, the task to be completed includes performing a procedure on a target patient, and the interactive interface includes a first interface element for acquiring patient data related to the target patient.
In some embodiments, the first interface element comprises a first interface element for obtaining an initial surgical record of the target patient. The method further comprises: obtaining, from the doctor terminal, a request to access the initial surgical record of the target patient; and controlling the doctor terminal to present the initial surgical record of the target patient in response to the request, the initial surgical record being generated by an agent corresponding to the surgical service based on third perception information acquired during the surgery by a third perception device in the operating room.
In some embodiments, the interactive interface further comprises a third interface element for performing preoperative patient teaching, and the doctor terminal comprises a second XR device worn by the doctor. The method further comprises: obtaining, from the second XR device worn by the doctor, a request for performing preoperative patient teaching on a target patient, the request being entered by the doctor through interaction with the third interface element; generating, in response to the request, teaching materials explaining the surgical plan of the target patient; and controlling a first XR device worn by the target patient and the second XR device worn by the doctor to simultaneously display the teaching materials to the target patient and the doctor.
In some embodiments, the method further includes: obtaining a first confirmation instruction regarding the surgical plan entered by the target patient through the first XR device; obtaining a second confirmation instruction regarding the surgical plan entered by a family member of the target patient through a third XR device worn by the family member; controlling the first XR device, the second XR device, and the third XR device to respectively present a surgical consent form in response to the first confirmation instruction and the second confirmation instruction; and obtaining signature information on the surgical consent form from the first XR device, the second XR device, and the third XR device, respectively.
In some embodiments, the interactive interface further comprises a fourth interface element for performing a surgical simulation.
In some embodiments, the physician terminal includes a second XR device worn by the physician, the method further comprising obtaining a request to simulate a target procedure from the second XR device worn by the physician, the request being entered by the physician through interaction with the fourth interface element, generating a virtual surgical scene corresponding to the target procedure in response to the request, the virtual surgical scene including a virtual surgical site and a virtual surgical device, and controlling the second XR device worn by the physician to present the virtual surgical scene to the physician for surgical simulation by the physician.
In some embodiments, the method further comprises: obtaining an interaction instruction regarding the virtual surgical device entered by the physician through the worn second XR device or an interaction device corresponding to the virtual surgical device; determining a possible emergency situation in the virtual surgical scene based on the interaction instruction; and updating the virtual surgical site and/or the virtual surgical device in the virtual surgical scene based on the possible emergency situation.
In some embodiments, the interactive interface further comprises a fifth interface element for executing a surgical plan. The method further comprises: obtaining, from the doctor terminal, a request to execute a surgical plan for a target patient, the request being entered by the doctor through interaction with the fifth interface element; determining a surgical difficulty factor based on patient data of the target patient in response to the request; determining whether an expert conference is required based on the surgical difficulty factor; and controlling the doctor terminal to present a sixth interface element for initiating the expert conference in response to determining that the expert conference is required.
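The difficulty-factor-then-threshold decision described above could look like the following sketch. The scoring weights, threshold, and patient-data fields are all invented for illustration; the patent does not specify how the factor is computed.

```python
def difficulty_factor(patient):
    """Score surgical difficulty from patient data (weights hypothetical)."""
    score = 0.0
    score += 0.3 if patient.get("age", 0) >= 70 else 0.0        # advanced age
    score += 0.2 * len(patient.get("comorbidities", []))        # per comorbidity
    score += 0.4 if patient.get("prior_surgeries", 0) > 0 else 0.0
    return score

def needs_expert_conference(patient, threshold=0.6):
    """Require an expert conference when the factor crosses the threshold."""
    return difficulty_factor(patient) >= threshold

patient = {"age": 74, "comorbidities": ["diabetes", "hypertension"], "prior_surgeries": 1}
print(needs_expert_conference(patient))  # → True
```

When this returns True, the interface would surface the sixth interface element for initiating the expert conference.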
In some embodiments, the physician terminal includes a second XR device worn by the physician. The method further comprises: in response to determining that an expert conference is required, controlling the physician's second XR device and a fourth XR device of a remote expert to respectively present a virtual conference space; acquiring fifth perception information acquired by the physician's second XR device and the remote expert's fourth XR device; and generating a surgical plan for the target patient based on the target patient's patient data and the fifth perception information.
In some embodiments, the method further includes processing the surgical plan and at least a portion of the patient data through a risk assessment model, generating a risk assessment result for the surgical plan, the risk assessment model being a trained machine learning model, determining a risk precaution based on the risk assessment result, and presenting the risk assessment result for the surgical plan and the risk precaution to the physician.
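The risk-assessment pipeline above (model scores the plan plus patient data, a precaution is derived from the result, and both are presented) could be sketched as follows. The patent specifies a trained machine-learning model; here a trivial scoring stub stands in for it, and every name, weight, and precaution string is hypothetical.

```python
def risk_model(surgical_plan, patient_data):
    """Stand-in for the trained risk-assessment model (not a real model)."""
    base = {"open": 0.5, "minimally_invasive": 0.2}[surgical_plan["approach"]]
    return min(1.0, base + 0.1 * len(patient_data.get("comorbidities", [])))

# Invented lookup from risk level to a risk precaution
PRECAUTIONS = {
    "high": "schedule ICU bed and cross-match blood",
    "low": "standard post-operative monitoring",
}

def assess(surgical_plan, patient_data):
    """Run the model, classify the risk, and attach a precaution."""
    risk = risk_model(surgical_plan, patient_data)
    level = "high" if risk >= 0.5 else "low"
    return {"risk": risk, "level": level, "precaution": PRECAUTIONS[level]}

result = assess({"approach": "open"}, {"comorbidities": ["diabetes"]})
print(result["level"])  # → high
```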
In some embodiments, the interactive interface further comprises a seventh interface element for patient management, the method further comprising obtaining a request from the physician terminal to access an initial admission record for a target patient, the request being entered by the physician through interaction with the seventh interface element, controlling the physician terminal to present the initial admission record in response to the request, and updating the initial admission record based on feedback information of the initial admission record entered by the physician through the physician terminal.
In some embodiments, updating the initial admission record includes: determining query content of a supplemental query based on the feedback information; controlling a terminal device provided in the hospital ward where the target patient is located to perform the supplemental query based on the query content; and updating the initial admission record based on fourth perception information acquired by a fourth perception device of the hospital ward during the supplemental query.
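The record-update loop above (doctor feedback → supplemental questions → answers merged into the record) could be sketched as below. The question catalogue, field names, and both functions are invented for illustration; in the described system the answers would come from the ward's perception device rather than a literal dictionary.

```python
def query_content_from_feedback(feedback):
    """Map each field the doctor flagged as missing to a question
    (catalogue is hypothetical)."""
    questions = {
        "allergies": "Do you have any drug or food allergies?",
        "medication": "What medication are you currently taking?",
    }
    return [questions[f] for f in feedback["missing_fields"] if f in questions]

def update_record(record, answers):
    """Merge the supplemental answers into the admission record."""
    updated = dict(record)
    updated.update(answers)
    return updated

record = {"name": "target patient", "chief_complaint": "chest pain"}
feedback = {"missing_fields": ["allergies"]}
queries = query_content_from_feedback(feedback)     # asked via the ward terminal
answers = {"allergies": "penicillin"}               # captured by the perception device
new_record = update_record(record, answers)
print(new_record["allergies"])  # → penicillin
```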
Drawings
The present specification will be further elucidated by way of example embodiments, which will be described in detail by means of the accompanying drawings. The embodiments are not limiting, in which like numerals represent like structures, wherein:
FIG. 1 is a block diagram of an exemplary healthcare system shown according to some embodiments of the present application;
FIG. 2 is a schematic diagram of an exemplary healthcare system shown according to some embodiments of the present application;
FIG. 3 is a schematic diagram of an exemplary hospital support platform shown according to some embodiments of the present application;
FIG. 4 is a flow diagram illustrating an exemplary method for assisting a physician in a task according to some embodiments of the present description;
FIG. 5 is a schematic diagram of an exemplary initial interactive interface shown in accordance with some embodiments of the present description;
FIG. 6 is a schematic diagram of an exemplary to-do task interface shown in accordance with some embodiments of the present description;
FIG. 7 is a schematic diagram of an exemplary interaction interface shown in accordance with some embodiments of the present description;
FIG. 8 is an interface schematic of an exemplary in-hospital smooth-trip service shown in accordance with some embodiments of the present description;
FIG. 9 is an interface schematic diagram of an exemplary ward-round service according to some embodiments of the present description;
FIG. 10 is a flow diagram of an exemplary remotely engaged ward round shown in accordance with some embodiments of the present description;
FIG. 11A is an interface schematic diagram of an exemplary inquiry service according to some embodiments of the present description;
FIG. 11B is a schematic illustration of a presentation interface for exemplary patient data of a patient being interviewed according to some embodiments of the present disclosure;
FIG. 11C is a diagram of a presentation interface for prior multimodal data of an exemplary interviewed patient according to some embodiments of the present disclosure;
FIG. 12 is a schematic flow chart of an exemplary view of patient data shown in accordance with some embodiments of the present description;
FIG. 13 is a flow diagram of an exemplary remote interrogation shown in accordance with some embodiments of the present description;
FIG. 14 is an interface schematic diagram of an exemplary surgery execution service shown in accordance with some embodiments of the present disclosure;
FIG. 15 is a flow diagram of an exemplary pre-operative patient teaching shown in accordance with some embodiments of the present description;
FIG. 16 is a flow diagram of an exemplary surgical simulation shown in accordance with some embodiments of the present description;
FIG. 17 is a flow diagram illustrating an exemplary execution of a surgical plan according to further embodiments of the present disclosure;
FIG. 18 is a schematic flow chart of an exemplary updating initial admission record shown in accordance with some embodiments of the present description, and
Fig. 19 is a flow chart of an exemplary method of assisting a physician in a task according to some embodiments of the present description.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present specification, the drawings that are required to be used in the description of the embodiments will be briefly described below. It is apparent that the drawings in the following description are only some examples or embodiments of the present specification, and it is possible for those of ordinary skill in the art to apply the present specification to other similar situations according to the drawings without inventive effort. Unless otherwise apparent from the context of the language or otherwise specified, like reference numerals in the figures refer to like structures or operations.
It will be appreciated that "system," "apparatus," "unit" and/or "module" as used herein is one method for distinguishing between different components, elements, parts, portions or assemblies at different levels. However, if other words can achieve the same purpose, the words can be replaced by other expressions.
Generally, the terms "module," "unit," or "block" as used herein refer to logic embodied in hardware or firmware, or to a collection of software instructions. The modules, units, or blocks described herein may be implemented as software and/or hardware and may be stored in any type of non-transitory computer-readable medium or other storage device. In some embodiments, software modules/units/blocks may be compiled and linked into an executable program. It should be appreciated that software modules may be invoked from other modules/units/blocks or from themselves, and/or may be invoked in response to a detected event or interrupt. Software modules/units/blocks configured to execute on a computing device may be provided on a computer-readable medium (e.g., a CD-ROM, a digital video disc, a flash drive, a floppy disk, or any other tangible medium), or as a digital download (which may be initially stored in a compressed or installable format requiring installation, decompression, or decryption prior to execution). The software code herein may be stored in part or in whole in a memory device of the computing device performing the operations and applied to the operations of the computing device. The software instructions may be embedded in firmware, such as an EPROM. It will also be appreciated that hardware modules/units/blocks may be included in connected logic components, such as gates and flip-flops, and/or may include programmable units, such as programmable gate arrays or processors. The modules/units/blocks or computing device functions described herein may be implemented as software modules/units/blocks, but may also be implemented as hardware or firmware. In general, the modules/units/blocks referred to herein are logical modules/units/blocks, which may be combined with other modules/units/blocks or divided into sub-modules/sub-units/sub-blocks regardless of their physical organization or storage.
The present description may be applicable to a system, an engine, or a portion thereof.
It will be understood that when an element, engine, module or block is referred to as being "on" or "connected" or "coupled" to another element, engine, module or block, it can be directly on, connected or coupled or in communication with the other element, engine, module or block, or intervening elements, engines, modules or blocks may be present unless the context clearly indicates otherwise. In this disclosure, the term "and/or" may include any one or more of the associated listed items or combinations thereof. In the present disclosure, the term "image" may refer to a 2D image, a 3D image, or a 4D image.
These and other features, and methods of function and operation of the related structural elements of the present disclosure, as well as the combination of components and economies of manufacture, will become more apparent upon consideration of the following description and the accompanying drawings, all of which form a part of this specification. It is to be understood, however, that the drawings are designed solely for the purposes of illustration and description and are not intended to limit the scope of the present disclosure. It should be understood that the accompanying drawings are not to scale.
As used in this specification and the claims, the terms "a," "an," and/or "the" are not specific to the singular but may include the plural, unless the context clearly dictates otherwise. In general, the terms "comprise" and "include" merely indicate that explicitly identified steps and elements are included; they do not constitute an exclusive list, and a method or apparatus may also include other steps or elements.
A flowchart is used in this specification to describe the operations performed by the system according to embodiments of the present specification. It should be appreciated that the preceding or following operations are not necessarily performed in order precisely. Rather, the steps may be processed in reverse order or simultaneously. Also, other operations may be added to or removed from these processes.
In a physician's daily work scenario, a physician may need to handle multiple tasks, such as ward rounds, consultations, surgery, attending meetings, etc. To accomplish these tasks, a doctor needs to know not only the scheduled time of each task but also the information related to each task (e.g., inpatient data, information of patients to be consulted, surgical plans, etc.) in advance, or to create/add information related to each task (e.g., create a surgical plan). Most of this information is viewed by the doctor in the form of text or pictures, which may not fully and intuitively display the pathological features of the patient (e.g., complex geometric features of a lesion), the real-time condition of the patient (e.g., the recovery condition of an inpatient), or details of a surgical plan (e.g., the implantation path of an implantation surgery). As a result, the doctor may not be able to conveniently and intuitively understand the detailed information required by each task, which in turn affects the doctor's working efficiency. On the other hand, ward rounds for inpatients have traditionally required doctors to be on site in the inpatient area, and consultations have required doctors to be on site in the consulting room; however, in these areas, doctors cannot view patient data comprehensively and intuitively, which also affects their working efficiency and may even lead inexperienced doctors to misjudgments with serious consequences.
Some embodiments of the present specification provide a method and system for assisting a physician in working. The method and system combine a physical hospital with a digital twin hospital and realize interaction between the doctor and the system through virtual-real linkage, which not only eliminates the incompleteness of 2D image information and reduces the difficulty of task processing, but also enables the doctor to handle certain tasks remotely, thereby improving the doctor's working efficiency. Further description of the above method can be found in FIGS. 4-19 and their related descriptions.
Fig. 1 is a block diagram of an exemplary healthcare system shown according to some embodiments of the present application.
The healthcare system 100, which may also be referred to as a meta-hospital system, is built based on a variety of innovative technologies, including metaverse technology, XR technology (e.g., augmented reality (AR) technology, virtual reality (VR) technology, mixed reality (MR) technology, etc.), AI technology, digital twin technology, IoT technology, data flow technology (e.g., blockchain technology, data privacy computing technology), spatial computing technology, image rendering technology, etc.
As shown in fig. 1, the healthcare system 100 may include a physical hospital 110, a virtual hospital 130, a user space application 120, and a hospital support platform 140. In some embodiments, the hospital support platform 140 may map data related to the physical hospital 110 into a virtual hospital 130 corresponding to the physical hospital 110 and provide user services to related users of the physical hospital 110 through the user space application 120.
The physical hospital 110 refers to a hospital existing in the physical world and having a tangible attribute. Health care institutions that provide medical, surgical and psychiatric care and treatment for humans are collectively referred to herein as hospitals.
As shown in fig. 1, the physical hospital 110 may include a plurality of physical entities. For example, the plurality of physical entities may include departments, users, hardware devices, user services, public areas, medical service procedures, and the like, or any combination thereof.
A department refers to a specialized unit that is dedicated to providing a particular type of medical care, treatment, and service. Each department may be focused on a particular medical field and may be equipped with healthcare professionals having expertise in that field. For example, the departments may include an outpatient department, an inpatient department, a surgical department, a support department (e.g., a registration department, a pharmacy department), a medical department, a specialty medical department, a child care department, etc., or any combination thereof.
The user may include any user associated with the physical hospital 110 (also referred to as a related user of the physical hospital 110). For example, the user may include a patient (or a portion of a patient (e.g., an organ)), a physician, a visitor of a patient, a hospital staff member of the physical hospital 110, a provider of the physical hospital 110, an application developer of the physical hospital 110, or the like, or any combination thereof. Hospital staff of the physical hospital 110 may include healthcare providers (e.g., doctors, nurses, technicians, etc.), hospital administrators, support staff, or the like, or any combination thereof. Exemplary hospital administrators may include department care administrators, clinical administrators, department heads, hospital administrative staff, operations management staff, or the like, or any combination thereof.
The hardware devices may include hardware devices located in the physical hospital 110 and/or hardware devices in communication with hardware devices in the physical hospital 110. Exemplary hardware devices may include terminal devices, healthcare devices, sensing devices, base devices, etc., or any combination thereof.
The terminal device may comprise a terminal device that interacts with a user of the medical services system 100. For example, the terminal devices may include terminal devices that interact with patients (also referred to as patient terminals), terminal devices that interact with a patient's doctor (also referred to as doctor terminals), terminal devices that interact with nurses (also referred to as nurse terminals), terminal devices that interact with remote users (also referred to as remote terminal devices), or public terminals of the hospital (e.g., clinic terminals, bedside terminal devices, terminal devices in waiting areas, intelligent surgical terminals), etc., or any combination thereof. In the present application, unless otherwise clear from or stated in the context, the terminal devices owned by the user and the terminal devices provided to the user by the physical hospital 110 are collectively referred to as the user's terminal devices or the terminal devices interacting with the user.
The terminal device may include a mobile terminal, an XR device, an intelligent wearable device, etc. The mobile terminal may include a smart phone, a personal digital assistant (PDA), a display, a gaming device, a navigation device, a point-of-sale (POS) terminal, a tablet computer, etc., or any combination thereof.
The XR device may comprise a device that allows a user to participate in an extended reality experience. For example, the XR device may include VR components, AR components, MR components, and the like, or any combination thereof. In some embodiments, the XR device may include an XR helmet, XR glasses, an XR eye patch, a stereo headset, or the like, or any combination thereof. For example, the XR device may include Google Glass™, Oculus Rift™, Gear VR™, Apple Vision Pro™, and the like. In particular, the XR device may include a display component on which virtual content may be presented and/or displayed. In some embodiments, the XR device may further comprise an input component. The input component can enable user interaction between a user and virtual content (e.g., a virtual surgical environment) displayed by the display component. For example, the input component may include a touch sensor, a microphone, an image sensor, etc. configured to receive user input, which may be provided to the XR device and used to control the virtual world by changing the visual content presented on the display component. The input component may also include handles, gloves, styluses, consoles, and the like.
The intelligent wearable device may include an intelligent wristband, intelligent footwear, intelligent glasses, intelligent helmet, intelligent watch, intelligent garment, intelligent backpack, intelligent accessory, etc., or any combination thereof. In some embodiments, the smart wearable device may acquire physiological data of the user (e.g., heart rate, blood pressure, body temperature, etc.).
The healthcare device may be configured to provide healthcare to the patient. For example, the medical services device may include an examination device, a care device, a treatment device, etc., or any combination thereof.
The examination device may be configured to provide examination services to a patient, e.g., to collect examination data of the patient. Exemplary examination data may include heart rate, respiratory rate, body temperature, blood pressure, medical imaging data, body fluid test reports (e.g., blood test reports), and the like, or any combination thereof. Accordingly, the examination device may include a vital sign monitor (e.g., a blood pressure monitor, a blood glucose meter, a heart rate meter, a thermometer, a digital stethoscope, etc.), a medical imaging device (e.g., a computed tomography (CT) device, a digital subtraction angiography (DSA) device, a magnetic resonance (MR) device, etc.), a laboratory device (e.g., a blood routine examination device, etc.), or any combination thereof.
The care device may be configured to provide care services to the patient and/or assist the healthcare provider in providing care services. Exemplary care devices may include hospital beds, patient care robots, smart care carts, smart kits, smart wheelchairs, and the like.
The treatment device may be configured to provide treatment services to the patient and/or assist the medical service provider in providing treatment services. Exemplary treatment devices may include surgical devices, radiation treatment devices, physical treatment devices, and the like, or any combination thereof.
The sensing device may be configured to gather sensing information related to the environment in which it is located. For example, the sensing device may include an image sensor, a sound sensor, or the like. The image sensor may be configured to collect image data in the physical hospital 110 and the sound sensor may be configured to collect voice signals in the physical hospital 110. In some embodiments, the sensing device may be a stand-alone device or may be integrated into another device. For example, the sound sensor may be part of a medical service device or a terminal device.
The base device may be configured to support data transmission, storage, and processing. For example, the base devices may include networks, machine room facilities, computing devices, computing chips, storage devices, and the like.
In some embodiments, at least a portion of the hardware devices of the physical hospital 110 are Internet of Things (IoT) devices. An IoT device refers to a device with sensors, processing power, software, and other technologies that connects and exchanges data with other devices and systems through the Internet or other communication networks. For example, one or more healthcare devices and/or sensing devices of the physical hospital 110 are IoT devices and are configured to transmit collected data to the hospital support platform 140 for storage and/or processing.
The user services may include any service provided by the hospital support platform 140 to the user. For example, user services include medical services provided to patients and/or accompanying persons, support services provided to staff members of physical hospital 110 and/or suppliers of physical hospital 110, and the like. In some embodiments, user services may be provided to patients, doctors, and hospital administrators through the user space application 120, which will be described in detail in the following description.
The public area refers to a shared space accessible to users (or portions of users) in the physical hospital 110. For example, the public area may include a reception area (e.g., a foreground), a waiting area, hallways, etc., or any combination thereof.
A healthcare procedure is a procedure that provides a corresponding healthcare to a patient. Medical service procedures typically include several links and/or steps through which a user may need to obtain a corresponding medical service. Exemplary healthcare procedures may include outpatient procedures, hospitalization procedures, surgical procedures, or the like, or any combination thereof. In some embodiments, the healthcare procedures may include corresponding healthcare procedures for different departments, different diseases, and the like. In some embodiments, a preset data acquisition protocol may be set and specify the standard links involved in the healthcare procedure and how to acquire data related to the healthcare procedure.
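For illustration only, a preset data acquisition protocol of the kind described above might be expressed as a simple configuration. This is a hypothetical sketch; the procedure, link names, and data fields are assumptions for illustration, not details taken from the present disclosure.

```python
# Hypothetical preset data acquisition protocol for an outpatient procedure.
# Each entry names a standard link of the procedure, the data to acquire at
# that link, and the assumed source device. All identifiers are illustrative.
OUTPATIENT_PROTOCOL = {
    "procedure": "outpatient",
    "links": [
        {"link": "registration",     "acquire": ["basic_info"],             "source": "patient_terminal"},
        {"link": "pre_consultation", "acquire": ["chief_complaint"],        "source": "patient_terminal"},
        {"link": "consultation",     "acquire": ["symptoms", "diagnosis"],  "source": "clinic_terminal"},
        {"link": "examination",      "acquire": ["imaging", "lab_results"], "source": "examination_device"},
    ],
}

def fields_for(protocol, link_name):
    """Return the data fields the protocol specifies for a given link."""
    for link in protocol["links"]:
        if link["link"] == link_name:
            return link["acquire"]
    return []
```

Such a configuration would let the platform check, at each link of the procedure, which data items still need to be collected and from where.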
The user space application 120 provides the user with access to user services provided by the hospital support platform 140. The user space application 120 may be an application, plug-in, website, applet, or any other suitable form. For example, the user space application 120 is an application installed on a user terminal device that includes a user interface for a user to initiate requests and receive corresponding services.
In some embodiments, user space application 120 may include different applications corresponding to different types of users. For example, the user space application 120 includes a patient space application corresponding to a patient, a medical space application corresponding to a doctor, a management space application corresponding to an administrator, and the like, or any combination thereof. User services provided through the patient space application, the medical space application, and the management space application are also referred to as patient space services, medical space services, and management space services, respectively. Exemplary patient space services include registration services, route guidance services, pre-consultation services, remote consultation services, hospitalization services, discharge services, and the like. Exemplary medical space services include scheduling services, surgical planning services, surgical simulation services, patient management services, remote ward services, remote outpatient services, and the like. Exemplary management space services include monitoring services, medical service assessment services, device parameter setting services, service parameter setting services, resource scheduling services, and the like.
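As a hypothetical sketch only, the role-to-application correspondence described above could be represented as a simple lookup; the identifiers below are illustrative assumptions, not part of the disclosed implementation.

```python
# Illustrative mapping from user role to its space application and the
# services that application exposes. Names are assumptions for illustration.
SPACE_APPS = {
    "patient": {
        "app": "patient_space",
        "services": ["registration", "route_guidance", "pre_consultation",
                     "remote_consultation", "hospitalization", "discharge"],
    },
    "doctor": {
        "app": "medical_space",
        "services": ["scheduling", "surgical_planning", "surgical_simulation",
                     "patient_management", "remote_ward", "remote_outpatient"],
    },
    "administrator": {
        "app": "management_space",
        "services": ["monitoring", "service_assessment", "device_parameters",
                     "service_parameters", "resource_scheduling"],
    },
}

def services_for(role):
    """Return the services available to a role; empty list for unknown roles."""
    return SPACE_APPS.get(role, {}).get("services", [])
```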
In some embodiments, the patient space application, the medical space application, and the management space application may be integrated into one user space application 120, and the user space application 120 may be configured to provide access portals for each type of user (e.g., patient, healthcare provider, manager, etc.). By way of example only, a particular user may have a corresponding account number that may be used to log into a user space application, view corresponding diagnostic data, and obtain corresponding user services.
According to some embodiments of the present application, by providing user space applications for different types of users, each type of user can easily obtain the various user services that he/she may need through his/her corresponding user space application. In addition, users currently often need to install various applications to obtain different user services, which results in a poor user experience and high development costs. Therefore, the user space applications of the present application can improve the user experience, improve service quality and efficiency, enhance service safety, and reduce development and operation costs.
In some embodiments, the user space application 120 may be configured to provide access portals for relevant users of the physical hospital 110 to interact with the virtual hospital 130. For example, through the user space application 120, a user may enter instructions for retrieving digital content of the virtual hospital 130 (e.g., digital twin models of hardware devices, patient organs, or public areas), view the digital content, and interact with the digital content. As another example, through the user space application 120, a user may communicate with an avatar representing an agent. In some embodiments, a public terminal of a hospital may install a management space application, and an administrator account of the department to which the public terminal corresponds may be logged into the management space application. A user may receive user services through the management space application installed in the public terminal.
The virtual hospital 130 is a digital twin (i.e., virtual representation or virtual copy) of the physical hospital 110 for simulating, analyzing, predicting, and optimizing the operating state of the physical hospital 110. For example, the virtual hospital 130 may be a real-time digital copy of the physical hospital 110.
In some embodiments, the virtual hospital 130 may be presented to the user using digital technology. For example, when the relevant user interacts with the virtual hospital 130, at least a portion of the virtual hospital 130 may be presented to the relevant user using XR technology. For example only, MR technology may be used to superimpose at least a portion of the virtual hospital 130 on the real-world view of the relevant user.
In some embodiments, the virtual hospital 130 may include digital twins of physical entities associated with the physical hospital 110. A digital twin refers to a virtual representation (e.g., a virtual copy, mapping, or digital simulation) of a physical entity. The digital twin can reflect and predict the state, behavior, and performance of the physical entity in real time. For example, the virtual hospital 130 may include digital twins of at least a portion of the medical services, departments, users, hardware devices, user services, public areas, medical service procedures, and the like of the physical hospital 110. The digital twin of a physical entity can take a variety of forms, including models, images, graphics, text, numerical values, and the like. For example, the digital twin may be a virtual hospital corresponding to a physical hospital, virtual personnel (e.g., virtual doctors, virtual nurses, and virtual patients) corresponding to personnel entities (e.g., doctors, nurses, and patients), virtual devices (e.g., virtual imaging devices and virtual scalpels) corresponding to medical service devices (e.g., imaging devices and scalpels), and the like.
In some embodiments, the digital twins may include one or more first digital twins and/or one or more second digital twins. The state of each first digital twin may be updated based on an update of the state of the corresponding physical entity. For example, one or more first digital twins may be updated during the mapping of data associated with the physical hospital 110 to the virtual hospital 130. One or more second digital twins can be updated through at least one of the user space applications 120, and the update of each second digital twin can result in a status update of the corresponding physical entity. In other words, a first digital twin is updated accordingly when the corresponding physical entity changes its state, and the state of the corresponding physical entity changes accordingly when a second digital twin is updated. For example, the one or more first digital twins may include digital twins of a public area, a medical service, a user, a hardware device, etc., and the one or more second digital twins may include digital twins of a hardware device, a user service, a medical service procedure, etc. It should be appreciated that a digital twin may be both a first digital twin and a second digital twin.
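The two update directions described above can be sketched as follows. This is a minimal illustrative model assuming a simple in-memory state dictionary; all class and method names are assumptions, not part of the disclosure.

```python
# Sketch of the two twin kinds: a first digital twin mirrors state changes
# FROM its physical entity, while a second digital twin pushes updates TO it.
class PhysicalEntity:
    def __init__(self, **state):
        self.state = state

    def apply(self, changes):
        # State change driven from the virtual side (second twin).
        self.state.update(changes)


class FirstDigitalTwin:
    """Mirrors its physical entity: physical state changes flow into the twin."""
    def __init__(self, entity):
        self.entity = entity
        self.state = dict(entity.state)  # initial copy of the physical state

    def sync_from_entity(self):
        # Called when the physical entity reports a state change.
        self.state = dict(self.entity.state)


class SecondDigitalTwin:
    """Drives its physical entity: twin updates flow out to the entity."""
    def __init__(self, entity):
        self.entity = entity
        self.state = dict(entity.state)

    def update(self, **changes):
        # Called from a user space application; changes are pushed to the entity.
        self.state.update(changes)
        self.entity.apply(changes)
```

A hardware device such as a hospital bed could be represented by both kinds at once: a first twin for monitoring and a second twin for control.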
According to some embodiments of the present application, the physical hospital 110 (including hardware devices, users, user services, healthcare procedures, etc.) may be simulated and tested in a secure and controllable environment by generating a virtual hospital 130 that includes digital twins of physical entities associated with the physical hospital 110. Through virtual-physical linkage (e.g., real-time interaction between the physical hospital 110 and the virtual hospital 130), various medical scenarios can be more accurately predicted and responded to, thereby improving the quality and efficiency of medical services. In addition, the application of XR technology and virtual-physical integration technology makes the interaction of related users more natural and intuitive, and provides a more comfortable and efficient medical environment, thereby improving the user experience.
In some embodiments, the virtual hospital 130 may further include agents that implement self-evolution based on data related to the physical hospital 110 and AI technology.
An agent refers to an entity that acts in an intelligent manner. For example, an agent may include a computing/software entity that can autonomously learn and evolve, and sense and analyze data to perform specific tasks and/or achieve specific goals (e.g., healthcare procedures). Through AI techniques (e.g., reinforcement learning, deep learning, etc.), an agent can constantly learn and self-optimize through interactions with the environment. In addition, the agent can collect and analyze massive data (e.g., data related to the physical hospital 110) through big data technology, mine patterns and learn rules from the data, and optimize decision flows, thereby identifying environmental changes in uncertain or dynamic environments, responding quickly, and making reasonable judgments. For example, agents may learn and evolve autonomously based on AI technology to accommodate changes in the physical hospital 110. By way of example only, agents may be built based on NLP technology (e.g., large language models, etc.) and may automatically learn and autonomously update through large amounts of language text (e.g., hospital business data and patient feedback information) to improve the quality of user services provided by the physical hospital 110.
In some embodiments, the agents may include different types of agents corresponding to different healthcare procedures, different user services, different departments, different diseases, different hospital positions (e.g., nurses, doctors, technicians, etc.), different links of healthcare procedures, and the like. A particular type of agent is used to process tasks corresponding to the particular type. In some embodiments, one agent may correspond to a different healthcare procedure (or a different healthcare, or a different department, or a different disease, or a different hospital location). In some embodiments, an agent may operate with reference to basic configuration data (e.g., dictionary, knowledge graph, template, etc.) of a department and/or disease corresponding to the agent. In some embodiments, multiple agents may cooperate and share information through network communications to collectively accomplish complex tasks.
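By way of hypothetical illustration, routing a task to the agent type matching its healthcare procedure might look like the following sketch; the registry API and agent behaviors are assumptions for illustration, not part of the disclosure.

```python
# Illustrative registry that dispatches a task to the agent registered for a
# given healthcare procedure, mirroring the per-procedure agents described above.
class AgentRegistry:
    def __init__(self):
        self._agents = {}

    def register(self, procedure, agent_fn):
        # agent_fn stands in for an agent; here it is a simple callable.
        self._agents[procedure] = agent_fn

    def dispatch(self, procedure, task):
        agent_fn = self._agents.get(procedure)
        if agent_fn is None:
            raise KeyError(f"no agent registered for procedure: {procedure}")
        return agent_fn(task)

registry = AgentRegistry()
registry.register("surgery", lambda task: f"surgical agent handling {task}")
registry.register("hospitalization", lambda task: f"ward agent handling {task}")
```

In a fuller design, multiple registered agents could also cooperate on one task by sharing intermediate results over network communication.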
In some embodiments, a configuration of the agent may be provided. For example, basic configuration data for use by the agent in operation may be set. The basic configuration data may include dictionaries, knowledge databases, templates, etc. As another example, usage rights of the agent may be set for different users. In some embodiments, an administrator of the physical hospital 110 may set the configuration of the agent through a managed space application.
In some embodiments, the agent may be integrated into or deployed on a hardware device. For example, agents corresponding to hospitalization services may be integrated into a hospital bed or a presentation device of a hospital bed. In some embodiments, the agent may be integrated into or deployed on an intelligent robot. An embodied intelligent robot refers to a robotic system that combines physical presence (embodiment) with intelligent behavior (cognition). The embodied intelligent robot may be configured to interact with the real world in a manner that mimics or complements human capabilities, utilizing physical morphology and cognitive functions to perform tasks, make decisions, and adapt to the environment. By utilizing artificial intelligence and sensor technology, the embodied intelligent robot can operate autonomously, interact with the environment, and continuously improve performance. For example, the embodied intelligent robot may be configured with an agent corresponding to a surgical service and assist a doctor in performing a surgery.
In some embodiments, at least a portion of the user services may be provided based on the agent. For example, at least a portion of the user services may be provided to the relevant users based on the processing results, wherein the processing results are generated by at least one of the agents based on data related to the physical hospital 110. For example only, the data related to the physical hospital 110 may include data related to a healthcare procedure of the physical hospital 110, the agent may include an agent corresponding to the healthcare procedure, and the user service may be provided to an associated user of the healthcare procedure by using the agent processing data corresponding to the healthcare procedure.
The hospital support platform 140 may be configured to provide technical support to the healthcare system 100. For example, the hospital support platform 140 may include computing hardware and software to support innovative technologies including XR technology, AI technology, digital twinning technology, data flow technology, and the like. In some embodiments, the hospital support platform 140 may include at least a storage device for data storage and a processing device for data computation.
In some embodiments, the hospital support platform 140 may support interactions between the physical hospital 110 and the virtual hospital 130. For example, the processing device of the hospital support platform 140 may obtain data related to the physical hospital 110 from the hardware devices and map the data related to the physical hospital 110 into the virtual hospital 130. For example, the processing device of the hospital support platform 140 may update a portion of the digital twins (e.g., one or more first digital twins) in the virtual hospital 130 based on the obtained data, such that each of this portion of the digital twins in the virtual hospital 130 may reflect the updated status of the corresponding physical entity in the physical hospital 110. Based on digital twins that are continuously updated along with their corresponding physical entities, users can learn the state of the physical entities related to the physical hospital 110 in real time, thereby realizing monitoring and evaluation of the physical entities. As another example, agents corresponding to data related to the physical hospital 110 may be trained and/or updated based on the data related to the physical hospital 110 to self-evolve and self-learn.
In some embodiments, the hospital support platform 140 may support and/or provide user services to the relevant users of the physical hospital 110. For example, in response to receiving a user service request from a user, the processing device of the hospital support platform 140 may provide a user service corresponding to the service request. As another example, in response to detecting a need to provide a user service to a user, the processing device of the hospital support platform 140 may control a physical entity or virtual entity corresponding to the user service to provide the user service. For example, in response to detecting that a patient is being sent to a hospital ward, the processing device of the hospital support platform 140 may control the intelligent care cart to direct a nurse to the hospital ward for a hospital admission check of the patient.
In some embodiments, at least a portion of the user services may be provided to the relevant users based on interactions between the relevant users and the virtual hospital 130. Interaction refers to interactions or effects (e.g., conversations, behaviors, etc.) between the relevant user and the virtual hospital 130. For example, interactions between the relevant user and the virtual hospital 130 may include interactions between the relevant user and a digital twin in the virtual hospital 130, interactions between the relevant user and an agent, interactions between the relevant user and a virtual character, and the like, or any combination thereof.
In some embodiments, at least a portion of the user services may be provided to the associated user based on interactions between the associated user and at least one of the digital twins. For example, an update instruction of the second digital twin input by the relevant user may be received by the user space application 120, and the corresponding physical entity of the second digital twin may be updated according to the update instruction. As another example, a user may view a first digital twin of a physical entity (e.g., a 3D digital twin model of a patient organ or hardware device) through the user space application 120 to learn about the state of the physical entity. Alternatively, the user may change the display angle, display size, etc. of the digital twin.
In some embodiments, the processing device of the hospital support platform 140 may present virtual characters corresponding to the agents through the user space application, interact with the associated user, and provide at least a portion of the user services to the associated user based on the interactions between the associated user and the virtual characters.
In some embodiments, the hospital support platform 140 may have a five-layer structure including a hardware device layer, an interface layer, a data processing layer, an application development layer, and a service layer, see fig. 3 and its associated description. In some embodiments, the hardware devices of the physical hospital 110 may be part of the hospital support platform 140.
According to some embodiments of the present application, a virtual hospital corresponding to a physical hospital may be established by integrating various internal and external resources (e.g., medical service equipment, hospital personnel, medicines and consumables, etc.) of the physical hospital. The virtual hospital may reflect real-time status (e.g., changes, updates, etc.) of physical entities associated with the physical hospital, thereby enabling monitoring and assessment of the physical entities. Such integration may provide accurate data support for the operation and intelligent decision-making of medical services. In addition, through the virtual hospital, users related to medical services can commonly establish an open shared ecosystem, thereby promoting innovation and promotion of medical services.
In addition, through linkage between the inside and outside of the hospital, medical care services covering the whole life cycle of the patient can be provided. The perspective of medical services extends from mere disease treatment to covering the entire life cycle of a patient, including prevention, diagnosis, treatment, rehabilitation, health management, and the like. By establishing intra- and extra-hospital linkage, the physical hospital can better integrate online and offline resources and provide comprehensive and continuous medical and health services for patients. For example, through remote monitoring and online consultation, the health condition of the patient can be followed in real time, the treatment scheme can be adjusted in time, and the treatment effect can be improved.
Fig. 2 is a schematic diagram of an exemplary healthcare system 200 shown according to some embodiments of the present application.
As shown in fig. 2, the healthcare system 200 may include a processing device 210, a network 220, a storage device 230, one or more healthcare devices 240, one or more perception devices 250, one or more patient terminals 260 of a patient 261, and one or more doctor terminals 270 of a doctor 271 associated with the patient 261. In some embodiments, components in the healthcare system 200 may be interconnected and/or communicate by a wireless connection, a wired connection, or a combination thereof. The connections between the components of the healthcare system 200 may be variable.
The processing device 210 may process data and/or information obtained from the storage device 230, the healthcare device 240, the sensing device 250, the patient terminal 260, and/or the doctor terminal 270. For example, the processing device 210 may map data related to a physical hospital to a virtual hospital corresponding to the physical hospital and, by processing the data related to the physical hospital, provide user services to the patient 261 and the doctor 271 through the patient terminal 260 and/or the doctor terminal 270, respectively. As another example, the processing device 210 may maintain an agent and, by engaging the agent in processing data related to the physical hospital, provide user services to the patient 261 and the doctor 271 through the patient terminal 260 and/or the doctor terminal 270, respectively.
In some embodiments, the processing device 210 may be a single server or a group of servers. The server group may be centralized or distributed. In some embodiments, the processing device 210 may be located locally or remotely from the healthcare system 200. In some embodiments, the processing device 210 may be implemented on a cloud platform. For example, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or a combination thereof.
In some embodiments, the processing device 210 may include one or more processors (e.g., single-core processors or multi-core processors). For illustration only, only one processing device 210 is depicted in the healthcare system 200. It should be noted, however, that the healthcare system 200 of the present application may also include multiple processing devices. Thus, operations and/or method steps described in the present application as performed by one processing device 210 may also be performed jointly or separately by multiple processing devices.
The network 220 may include any suitable network capable of facilitating the exchange of information and/or data for the healthcare system 200. The network 220 may be or include a wired network, a wireless network (e.g., an 802.11 network, a Wi-Fi network), a Bluetooth™ network, a near field communication (NFC) network, etc., or any combination thereof.
Storage device 230 may store data, instructions, and/or any other information. In some embodiments, the storage device 230 may store data obtained from other components of the medical services system 200. In some embodiments, storage device 230 may store data and/or instructions that processing device 210 may perform or be used to perform the exemplary methods described herein.
In some embodiments, the data stored in the storage device 230 may include multi-modal data. Multi-modal data may include various forms of data (e.g., images, graphics, video, text, etc.), various types of data, data obtained from different sources, data related to different medical services (e.g., diagnosis, surgery, rehabilitation, etc.), and data related to different users (e.g., patients, medical personnel, management personnel, etc.). For example, the data stored in the storage device 230 may include medical data of the patient 261 reflecting the health of the patient 261. For example, the medical data can include an electronic medical record of the patient 261. An electronic medical record refers to an electronic file that records various types of patient data (e.g., basic information, examination data, imaging data). For example, the electronic medical record can include three-dimensional models of a plurality of organs and/or tissues of the patient 261.
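For illustration, an electronic medical record holding the multi-modal data types listed above might be modeled as follows; the field names are assumptions, not part of the disclosed record format.

```python
# Hypothetical data model for a multi-modal electronic medical record.
from dataclasses import dataclass, field
from typing import Any

@dataclass
class ElectronicMedicalRecord:
    patient_id: str
    basic_info: dict = field(default_factory=dict)        # text data
    examination_data: dict = field(default_factory=dict)  # e.g. vital signs
    imaging_data: list = field(default_factory=list)      # image/video references
    organ_models: dict = field(default_factory=dict)      # 3D models keyed by organ

    def add_examination(self, name: str, value: Any) -> None:
        """Record one examination result, e.g. a vital sign reading."""
        self.examination_data[name] = value

# Example usage with an illustrative patient identifier.
emr = ElectronicMedicalRecord(patient_id="P-001")
emr.add_examination("heart_rate", 72)
```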
In some embodiments, storage device 230 may include mass storage devices, removable storage devices, volatile read-write memory, read-only memory (ROM), and the like, or any combination thereof. In some embodiments, storage device 230 may include a data lake and a data warehouse, as will be described in detail in connection with FIG. 3.
The healthcare device 240 may be used to provide or assist in healthcare. As shown in FIG. 2, the medical service devices 240 may include a clinic terminal 240-1, a hospital bed 240-2, an intelligent surgical terminal 240-3, an intelligent care cart 240-4, an intelligent wheelchair 240-5, etc., or any combination thereof.
The clinic terminal 240-1 is a terminal device that is configured within a clinic for use by doctors and patients in an outpatient procedure. For example, the clinic terminal 240-1 may include one or more of a screen, a sound output component, an image sensor, or a sound sensor. A doctor interface may be displayed on the screen of the clinic terminal 240-1, and data may be displayed on the doctor interface to facilitate communication between the doctor and the patient. Exemplary data may include electronic medical records (or portions thereof), pre-consultation records, medical images, 3D organ models, examination results, consultation advice, and the like.
The hospital bed 240-2 refers to a bed in a hospital ward that is capable of supporting inpatients and providing user services to the patient. The hospital bed 240-2 may include a bed body, bedside terminal equipment, bedside examination equipment, sensors, and the like, or any combination thereof. The bedside terminal device may include an XR device, a display device, a mobile device, etc., or any combination thereof. In some embodiments, the hospital bed 240-2 may be controlled by an agent corresponding to the hospitalization service, in which case the hospital bed may also be referred to as a smart hospital bed or a meta-hospital bed.
The intelligent surgical terminal 240-3 refers to a device configured with an agent for assisting surgery and is controlled by the agent corresponding to the surgical service. The intelligent surgical terminal 240-3 may sense interactions (e.g., conversations, behaviors, etc.) among the healthcare provider, the patient, and the agent, and obtain data captured by the sensing device 250 to provide surgical assistance. In some embodiments, the intelligent surgical terminal 240-3 may be configured to provide risk alerts for a surgical procedure, generate a surgical record of a surgical procedure, etc., based on the agent configured therein.
The intelligent care cart 240-4 is a care cart having an autonomous driving function that is capable of assisting patient treatment and care. For example, the intelligent care cart 240-4 may be configured to guide a nurse to a hospital ward for admission of the patient. In some embodiments, the intelligent care cart may be controlled by an agent (e.g., an agent corresponding to a hospitalization service, a care agent). In some embodiments, the intelligent care cart 240-4 may include a cart body, a presentation device, one or more examination devices and/or care tools, a sensing device (e.g., an image sensor, a GPS sensor, a sound sensor, etc.), and so on. In some embodiments, the intelligent care cart 240-4 may be configured to obtain relevant treatment and care information for the patient and generate physical examination data, care data, and the like. The physical examination data may include vital sign data of the patient. The care data may include detailed records of care operations, such as the care time, the care operator, the care measures, the patient's response, and the like.
The intelligent wheelchair 240-5 refers to a transport device for intelligently transferring patients. In some embodiments, the intelligent wheelchair 240-5 may be configured to perform autonomous navigation through integrated sensors and maps, locate the patient using radio frequency identification (RFID), Bluetooth, or Wi-Fi signals, and identify the patient through biometric technology. In some embodiments, the intelligent wheelchair 240-5 may be controlled by an agent (e.g., an agent corresponding to a hospitalization service, an agent corresponding to a surgical service). In some embodiments, the intelligent wheelchair 240-5 may be configured to generate data (e.g., a record of the interaction between the agent and the patient) by sensing interaction data through built-in cameras/sensors.
The sensing device 250 may be configured to gather sensing information related to the environment in which it is located. In some embodiments, the sensing device 250 may comprise a sensing device in a physical hospital 110. For example, the sensing device 250 may include an image sensor 250-1, a sound sensor 250-2, a temperature sensor, a humidity sensor, and the like.
The patient terminal 260 may be a terminal device that interacts with the patient 261. In some embodiments, patient terminal 260 may include a mobile terminal 260-1, an XR device 260-2, a smart wearable device 260-3, and so forth. Doctor terminal 270 may be a terminal device that interacts with doctor 271. In some embodiments, the physician terminal 270 may include a mobile terminal 270-1, an XR device 270-2, or the like. In some embodiments, patient 261 may access a user space application (e.g., a patient space application) through patient terminal 260 and doctor 271 may access a user space application (e.g., a doctor space application) through doctor terminal 270. In some embodiments, patient 261 and doctor 271 may communicate with each other remotely through patient terminal 260 and doctor terminal 270, thereby providing remote medical services, such as remote outpatient services, remote ward services, remote follow-up services, and the like.
The sensing device 250, patient terminal 260, and doctor terminal 270 may be configured as data sources to provide information to the healthcare system 200. For example, the devices may transmit the collected data to the processing device 210, and the processing device 210 may provide user services based on the received data.
It should be noted that the above description of the healthcare systems 100 and 200 is intended to be illustrative, and not limiting of the scope of the present application. Many alternatives, modifications, and variations will be apparent to those skilled in the art. The features, structures, methods, and other characteristics of the example embodiments herein may be combined in various ways to obtain additional and/or alternative example embodiments. For example, the healthcare system 200 may include one or more additional components, such as other users' terminal devices, a hospital's public terminal devices, and the like. As another example, two or more components of the healthcare system 200 may be integrated into a single component.
Fig. 3 is a schematic diagram of an exemplary hospital support platform 300 shown according to some embodiments of the present application.
As shown in fig. 3, the hospital support platform 300 may include a hardware layer 310 (also referred to as a hardware module), an interface layer 320 (also referred to as an interface module), a data processing layer 330 (also referred to as a data processing module), an application development layer 340 (also referred to as an application development module), and a service layer 350 (also referred to as a service module). It should be understood that the "layers" and "modules" in this disclosure are used only for logically dividing the components of the hospital support platform and are not intended to be limiting.
The hardware layer 310 may be configured to provide a hardware basis for interactions between the real world and the digital world, and may include one or more hardware devices related to hospital operations. Exemplary hardware devices may include healthcare devices, sensing devices, terminal devices, and base devices.
The interface layer 320 may be connected with the hardware layer 310 and the data processing layer 330. The interface layer 320 may be configured to obtain data collected by hardware devices of the hardware layer 310 and send the data to the data processing layer 330 for storage and/or processing. Interface layer 320 may also be configured to control at least a portion of the hardware devices of hardware layer 310. In some embodiments, interface layer 320 may include hardware interfaces and software interfaces (e.g., data interfaces, control interfaces).
The data processing layer 330 may be configured to store and/or process data. The data processing layer 330 may include a processing device on which a plurality of data processing units may be configured. The data processing layer 330 may be configured to obtain data from the interface layer 320 and process the data by at least one data processing unit to enable user services related to hospital services.
The data processing unit may comprise various preset algorithms for implementing data processing. In some embodiments, data processing layer 330 may include a processing device (e.g., processing device 210 in fig. 2). The data processing unit may be configured on the processing device. In some embodiments, the data processing unit may include an XR unit configured to process data using XR technology to implement XR services, an AI unit (e.g., an agent unit) configured to process data using AI technology to implement AI services, a digital twin unit configured to process data using digital twin technology to implement digital twin services, a data flow unit configured to process data using data flow technology (e.g., blockchain technology, data privacy computing technology) to implement data flow services, and so forth.
In some embodiments, data processing layer 330 may also include a data center configured to store data. In some embodiments, the data center may employ a lake-warehouse integrated architecture, which may include data lakes and data warehouses. The data lake may be used to persist large amounts of data in a tamper-proof manner. The data warehouse may be used to store index data corresponding to data in the data lake. The data stored in the data lake may include native (or raw) data collected by the hardware device, derived data generated based on the native data, and the like. In some embodiments, the data in the data lake may be processed by a processing device (e.g., processing device 210).
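The lake-warehouse integrated architecture described above can be illustrated with a minimal sketch. The following Python code is illustrative only and is not part of the claimed platform; the class and field names are assumptions. It shows the division of labor: the data lake persists native (raw) records in a tamper-evident, content-addressed manner, while the data warehouse stores index data that maps query metadata to lake keys.

```python
import hashlib
import json


class DataLake:
    """Append-only store for native (raw) data; records are never mutated."""

    def __init__(self):
        self._blobs = {}

    def put(self, raw_bytes):
        # Content addressing makes tampering detectable: the key is the hash.
        key = hashlib.sha256(raw_bytes).hexdigest()
        self._blobs.setdefault(key, raw_bytes)
        return key

    def get(self, key):
        blob = self._blobs[key]
        assert hashlib.sha256(blob).hexdigest() == key, "tampered record"
        return blob


class DataWarehouse:
    """Stores index data (metadata -> lake keys) for fast retrieval."""

    def __init__(self, lake):
        self._lake = lake
        self._index = {}  # (patient_id, modality) -> [lake keys]

    def ingest(self, patient_id, modality, record):
        # Persist the record in the lake, keep only the index here.
        key = self._lake.put(json.dumps(record).encode())
        self._index.setdefault((patient_id, modality), []).append(key)
        return key

    def query(self, patient_id, modality):
        keys = self._index.get((patient_id, modality), [])
        return [json.loads(self._lake.get(k)) for k in keys]
```

Under this sketch, derived data generated by the processing device would be ingested the same way as native data, with the warehouse index distinguishing the two by metadata.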
The application development layer 340 may be configured to support application development, publishing, subscription, and the like. The application development layer 340 may also be referred to as an ecological suite layer. In some embodiments, the application development layer 340 may be configured to provide an open interface for application developers to access or invoke at least a portion of the data processing units and to utilize at least a portion of the data processing units to develop applications. In some embodiments, as shown in fig. 3, the application development layer 340 may provide development kits, an application market, a multi-tenant operation platform, official cloud websites, workspaces, and other support kits to assist developers in their work.
The service layer 350 may be configured to enable relevant users of the hospital service to access user services related to the hospital service through the user space application.
The present application provides a hospital support platform designed for comprehensive management of various resources in a hospital, including hardware resources, software resources, and data resources. In some embodiments, the platform further integrates data processing units capable of supporting advanced technologies, such as artificial intelligence, XR, digital twin, and blockchain technologies. These advanced technologies are used to improve the efficiency and quality of service in the healthcare industry. For example, artificial intelligence technologies enable autonomous evolution and continuous optimization of hospital operations, while XR and digital twin technologies facilitate the creation and maintenance of virtual hospitals. A virtual hospital can interact with users, providing an immersive, novel service experience. In addition, the platform includes an application development layer for granting third-party developers in the healthcare industry access to these advanced technologies. This access fosters an open ecosystem, which promotes the development and innovation of applications and thus advances medical services.
Fig. 4 is a flow chart illustrating an exemplary method for assisting a physician in a task according to some embodiments of the present description. As shown in fig. 4, in some embodiments, the flow 400 may include the following steps. In some embodiments, the process 400 may be performed by the processing device 210.
In step 410, an access request for the medical space application (also referred to as the doctor space application) is obtained from the doctor terminal.
The doctor terminal refers to equipment used by a doctor. For example, as shown in fig. 4, doctor terminal 270 may include a mobile terminal 270-1 (e.g., a smart phone or tablet computer) used by doctor 271.
In some embodiments, the doctor terminal 270 may include an extended reality (XR) device. For example, as shown in FIG. 4, the doctor terminal 270 may include an XR device 270-2 (also referred to as a second XR device) worn by the doctor.
The medical space application refers to a program that assists doctors in their daily clinical work. The doctor may perform at least part of the daily work tasks through the medical space application; for example, the doctor may conduct remote ward rounds, view patient data, etc. through the medical space application. The medical space application may be installed and run on the doctor terminal 270. For example, doctor 271 may wake up the medical space application through the worn second XR device 270-2.
The access request refers to a doctor's invocation request for the medical space application. The doctor may generate the access request in a number of ways. For example, the medical space application may be displayed on the display of the mobile terminal 270-1, and doctor 271 may generate the access request by clicking on the application icon on the display or by a voice command, e.g., doctor 271 saying "launch the medical space application." For another example, the second XR device 270-2 may present the medical space application in virtual reality, and doctor 271 may generate the access request by interacting with it or by speaking a voice instruction.
In step 420, in response to the access request, one or more tasks to be completed by the doctor are determined based on the receipt time of the access request and the schedule information of the doctor.
The receipt time of the access request refers to the point in time at which the processing device 210 receives the access request for the medical space application issued from the doctor terminal 270.
The schedule information of the doctor refers to the doctor's detailed work schedule for the day. The schedule information may include the various work tasks that the doctor needs to complete that day and the time corresponding to each work task (e.g., the planned start time and the planned end time of each work task). For example, the doctor schedule information may be "ward preparation (7:45-8:00), ward rounds (8:00-9:00), pre-diagnosis preview (9:00-9:10), inquiry (9:10-11:30), pre-operation preparation (13:30-14:00), surgery (14:00-17:00)." Ward preparation refers to one or more tasks that a doctor needs to perform before going to the ward for rounds, e.g., viewing a patient's surgical records and medical records. Pre-diagnosis preview refers to one or more tasks that a doctor needs to perform before performing an inquiry task, e.g., viewing the patient data of target patients among the patients who have booked the inquiry service. Pre-operation preparation refers to one or more tasks that a doctor needs to perform before performing surgery, e.g., pre-operative cleaning, pre-operative sterilization, etc.
A task to be completed refers to a work task that the doctor has not yet completed that day, and may include a task that the doctor is currently processing and/or a task that has not yet started. For example, taking the doctor schedule information in the above example, if the doctor is currently performing an inquiry, the tasks to be completed include the inquiry, pre-operation preparation, and surgery.
In some embodiments, the one or more tasks to be completed include making rounds in a hospital ward, or making remote rounds.
In some embodiments, a doctor (e.g., doctor 271) may conduct a ward round in the form of remotely participating in the ward round. Further description of remote rounds may be found in fig. 7, 9 or 10 and related descriptions thereof.
In some embodiments, the one or more tasks to be completed include providing a consultation service within the consulting room. For further description of this embodiment, see fig. 11A, fig. 12 and their associated description.
In some embodiments, the one or more tasks to be completed include providing a remote inquiry service.
A remote inquiry service refers to a doctor providing medical diagnosis and inquiry services through an online platform (e.g., the medical space application). For further description of the above embodiments, see fig. 13 and its associated description.
In some embodiments, the one or more tasks to be completed include performing a procedure on the target patient. Further description of this embodiment can be found in fig. 14 and its associated description.
In some embodiments, the one or more tasks to be completed include at least writing a work record.
Work records refer to records of a doctor's detailed activities on the day, such as the work content, working time, emergencies during tasks, task summaries, and the like. For example, the work records may include a ward round work record for the day (including the actual start/end time of the ward rounds, emergencies encountered during the rounds, the corresponding solutions, etc.), an inquiry record (including the actual start/end time of the inquiry, the number of patients seen, special patients encountered, patient problems, etc.), a surgical record (including the actual start/end time of the surgery, whether the surgery was successful, whether an abnormal condition occurred during the surgery, etc.), work errors (e.g., a faulty operation during surgery), and so on.
In some embodiments, the work records may include daily work records, work records for each task, or work records for a preset period of time (e.g., 3 hours, 5 hours, 24 hours, etc.). In some embodiments, the to-be-completed task of writing a work record may be displayed at all times. For example, whether or not the doctor has completed the task of writing a work record, the to-do task is displayed in the display interface.
In some embodiments, the one or more tasks to be completed include at least reviewing the records of one or more tasks that have been completed. For example, doctor 271 may review the records of tasks related to ward rounds, inquiries, surgeries, etc. via the doctor terminal 270. When reviewing the record of one or more completed tasks, the doctor may add, modify, or delete content in the task record. For example, doctor 271 may input (e.g., by text input or voice input) the surgery end time through the doctor terminal 270 to modify the surgery end time already in the surgical record.
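The record-review behavior described above (adding, modifying, or deleting content in a task record based on the doctor's input) can be sketched as a feedback-merging step. The field names and feedback format below are illustrative assumptions, not part of the described system.

```python
def apply_feedback(initial_record, feedback):
    """Merge a doctor's feedback (add / modify / delete) into an initial record.

    initial_record: dict produced from perception information.
    feedback: list of (action, field, value) tuples entered via the doctor terminal.
    Returns a new record; the initial record is left unchanged.
    """
    record = dict(initial_record)
    for action, field, value in feedback:
        if action in ("add", "modify"):
            record[field] = value
        elif action == "delete":
            record.pop(field, None)
    return record


initial = {"surgery_start": "14:05", "surgery_end": "16:40", "abnormal_events": "none"}
# e.g., the doctor corrects the surgery end time by text or voice input
final = apply_feedback(initial, [("modify", "surgery_end", "16:55")])
```

In this sketch, the corrected record (rather than the initial record) would then be persisted as the task record.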
A ward round record refers to a record of related data at the time of a ward round. The ward round record may include at least one of the ward round time (e.g., the ward round start time), the ward round participants (e.g., the names of the doctors and nurses participating in the round), patient data, communication between the patient and at least one doctor, medical orders issued during the round, etc. In some embodiments, the processing device 210 may generate an initial ward round record based on first perception information (e.g., doctor and patient voice information) acquired by a first perception device in the ward during the round, and generate the ward round record based on feedback information (e.g., modification information or confirmation information) from the doctor on the initial ward round record. Further description of the initial ward round record may be found in fig. 9 and its associated description.
An inquiry record refers to a record of the patient's condition, etiology, diagnosis result, medical orders, etc. during the inquiry process. In some embodiments, the processing device 210 may generate an initial inquiry record based on second perception information (e.g., doctor and patient voice information) acquired by a second perception device during the inquiry, and generate the inquiry record based on feedback information (e.g., modification information) from the doctor on the initial inquiry record. For further description of the initial inquiry record, see FIG. 11A and its associated description.
Surgical records refer to records of events (e.g., doctor operations) during surgery, and the like. The procedure records may include procedure related information (e.g., procedure start and end times, whether the procedure was successful, etc.), patient related records (e.g., vital sign records of the patient, etc.), participant related records (e.g., physician and nurse procedure records, voice information records, etc.), and the like.
In some embodiments, the processing device 210 may generate an initial surgical record based on data acquired during the execution of the surgery (e.g., perception information acquired during the surgery by one or more perception devices in the operating room). In some embodiments, the processing device 210 may generate the surgical record based on the initial surgical record and feedback information (e.g., modification information) entered by the doctor regarding the initial surgical record. For further description of the initial surgical record, see fig. 14 and its associated description.
In some embodiments, the processing device 210 may determine a task whose planned end time is later than the receipt time of the access request as a task to be completed. For example, suppose the receipt time of the access request is 11:13 and the doctor schedule information is "ward preparation (7:45-8:00), ward rounds (8:00-9:00), pre-diagnosis preview (9:00-9:10), inquiry (9:10-11:30), pre-operation preparation (13:30-14:00), surgery (14:00-17:00)." The tasks whose planned end times are later than 11:13 include the inquiry, pre-operation preparation, and surgery, so the processing device 210 may determine the inquiry, pre-operation preparation, and surgery as tasks to be completed.
In some embodiments, the processing device 210 may determine one or more tasks that the doctor has completed based on the receipt time and the doctor's schedule information.
In some embodiments, the processing device 210 may determine a work task whose planned end time is earlier than the receipt time of the access request as a completed task. For example, suppose the receipt time of the access request is 11:13 and the doctor schedule information is "ward preparation (7:45-8:00), ward rounds (8:00-9:00), pre-diagnosis preview (9:00-9:10), inquiry (9:10-11:30), pre-operation preparation (13:30-14:00), surgery (14:00-17:00)." The tasks whose planned end times are earlier than 11:13 include the ward preparation, ward rounds, and pre-diagnosis preview, so the processing device 210 may determine the ward preparation, ward rounds, and pre-diagnosis preview as completed tasks.
In some embodiments, since some work tasks may be completed ahead of schedule, a doctor (e.g., doctor 271) may input a task completion instruction through the doctor terminal 270 according to the actual completion status of a task (e.g., doctor 271 may input a voice instruction "the inquiry task is completed"). The doctor terminal 270 may transmit the instruction to the processing device 210, and the processing device 210 may determine that the corresponding task is completed based on the instruction. For example, in the previous example, if the access request is received at 11:13 but the doctor has previously indicated that the inquiry task has been completed, the inquiry is also included in the determined one or more completed tasks. In this case, the one or more tasks to be completed include only the pre-operation preparation and the surgery.
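The scheduling logic of step 420 and the surrounding paragraphs can be sketched as follows. This Python sketch is illustrative; the task names and data layout are assumptions drawn from the example schedule above. A task is treated as completed if its planned end time is earlier than (or equal to) the receipt time of the access request, or if the doctor has explicitly marked it completed; all other tasks are tasks to be completed.

```python
from datetime import time


def split_tasks(schedule, received_at, completed_override=()):
    """Partition a day's schedule into completed tasks and tasks to be completed.

    schedule: list of (name, planned_start, planned_end) tuples.
    received_at: time at which the access request was received.
    completed_override: task names the doctor has explicitly marked as done.
    """
    completed, pending = [], []
    for name, start, end in schedule:
        if end <= received_at or name in completed_override:
            completed.append(name)
        else:
            pending.append(name)  # currently in progress or not yet started
    return completed, pending


schedule = [
    ("ward preparation", time(7, 45), time(8, 0)),
    ("ward rounds", time(8, 0), time(9, 0)),
    ("pre-diagnosis preview", time(9, 0), time(9, 10)),
    ("inquiry", time(9, 10), time(11, 30)),
    ("pre-operation preparation", time(13, 30), time(14, 0)),
    ("surgery", time(14, 0), time(17, 0)),
]

# Access request received at 11:13 -> the inquiry is still pending...
done, todo = split_tasks(schedule, time(11, 13))
# ...unless the doctor has indicated that the inquiry finished early.
done2, todo2 = split_tasks(schedule, time(11, 13), completed_override={"inquiry"})
```

With the 11:13 example above, `todo` contains the inquiry, pre-operation preparation, and surgery, while `todo2` contains only the pre-operation preparation and surgery, matching the behavior described in the text.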
In some embodiments, the processing device 210 may control the doctor terminal 270 to present an initial interactive interface through the doctor space application in response to the access request, the initial interactive interface including an eighth interface element for prompting the doctor to view the work schedule (i.e., doctor schedule information). In some embodiments, the processing device 210 may determine one or more tasks to complete in response to a request entered by a physician through the physician terminal 270 to access a work schedule. For further description of the above embodiments, see fig. 5 and its associated description.
In step 430, the doctor terminal is controlled to present an interactive interface through the doctor space application. For example, as shown in fig. 4, the processing device 210 may control the doctor terminal 270 to present an interactive interface 431. If the doctor terminal 270 is the mobile terminal 270-1, the interactive interface 431 may be presented directly on its display screen; if the doctor terminal 270 is the second XR device 270-2, the second XR device 270-2 may present the interactive interface 431 in the virtual reality space it generates. For more explanation of the interactive interface 431, see fig. 7 and the associated description.
The interactive interface (e.g., the interactive interface 431) may include at least one interface element; by accessing an interface element, a doctor (e.g., doctor 271) may obtain the assistance service corresponding to that interface element. The doctor may access an interface element by clicking, long pressing, voice selection (i.e., specifying by voice the interface element to be accessed), etc.
Assistance services refer to functions provided by the doctor space application to help a doctor complete tasks. For example, assistance services may include presenting patient data to the doctor, presenting a real-time 3D (three-dimensional) map of a target location within the hospital campus to the doctor, and providing the doctor with remote inquiry services, remote ward round services, surgical planning services, surgical simulation services, patient management services, and the like. Further description of the above services may be found hereinafter.
In some embodiments, the interactive interface (e.g., interactive interface 431) includes a first interface element for obtaining assistance services related to at least one of the one or more tasks to be completed. The doctor (doctor 271) can select the first interface element by clicking or voice to obtain the assistance service corresponding to the first interface element. For more explanation of the interactive interface, see fig. 7 and the associated description.
In some embodiments, the processing device 210 determines a digital twin associated with at least one assistance service in response to a request related to the assistance service entered by the doctor, and controls the doctor terminal to present the digital twin associated with the assistance service through the doctor space application, the digital twin reflecting the real-time status of the corresponding physical entity. The digital twins associated with the assistance services may include digital twins corresponding to various physical entities of a physical hospital, such as hardware devices, users, public areas, medical service procedures, and the like. For example, if the assistance service is a remote ward round, the associated digital twins include a virtual patient and virtual medical service devices in the ward where the virtual patient is located. For another example, if the assistance service is providing a remote inquiry, the associated digital twins include a virtual patient or a 3D patient model. In some embodiments, the processing device 210 receives interaction information between the doctor and the digital twin (e.g., an update instruction for a second digital twin entered by the doctor) through the doctor space application, and updates the state of the physical entity corresponding to the digital twin based on the interaction information. For more description of digital twins, reference may be made to fig. 1.
In some embodiments, the processing device 210 determines an agent associated with at least one assistance service in response to a request related to the assistance service entered by the doctor via the medical space application, and implements the at least one assistance service with the agent associated with the assistance service. For example, if the assistance service is performing surgical planning for a patient, the processing device 210 may process the patient data of the patient with a surgical agent to perform the surgical planning (e.g., determine a surgical plan based on the patient data, determine a risk assessment result and/or risk preventive measures for the surgical plan based on the patient data, etc.). For another example, if the assistance service is obtaining an initial ward round record, the processing device 210 may process the perception information acquired during the round by the perception device in the hospital ward using the agent corresponding to the ward round service to generate the initial ward round record of the patient. For more description of the agent, reference may be made to fig. 1.
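The agent-based implementation of assistance services described above can be sketched as a simple dispatch: each assistance service is associated with an agent, and the processing device routes the doctor's request to that agent. The class names, registry, and service names below are hypothetical and for illustration only; a real agent would invoke planning or record-generation models rather than return placeholder dictionaries.

```python
class SurgicalAgent:
    """Stand-in for an agent corresponding to the surgical service."""

    def handle(self, request, patient_data):
        # Illustrative: a real agent would run surgical planning / risk models.
        return {"service": request, "plan_for": patient_data["patient_id"]}


class WardRoundAgent:
    """Stand-in for an agent corresponding to the ward round service."""

    def handle(self, request, patient_data):
        # Illustrative: a real agent would process perception information.
        return {"service": request, "initial_record_for": patient_data["patient_id"]}


# Association between assistance services and their agents (hypothetical names).
AGENT_REGISTRY = {
    "surgical planning": SurgicalAgent(),
    "initial ward round record": WardRoundAgent(),
}


def dispatch(service_name, patient_data):
    """Route an assistance-service request to its associated agent."""
    agent = AGENT_REGISTRY.get(service_name)
    if agent is None:
        raise KeyError(f"no agent associated with service: {service_name}")
    return agent.handle(service_name, patient_data)
```

Keeping the service-to-agent association in a registry means new assistance services can be added without changing the dispatch logic itself.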
In some embodiments, the processing device 210 may control the doctor terminal 270 to present an initial interactive interface through the doctor space application in response to the access request, the initial interactive interface including an eighth interface element for prompting the doctor to view the work schedule. For example, FIG. 5 is a schematic diagram of an exemplary initial interactive interface shown in accordance with some embodiments of the present description. The processing device 210 may control the doctor terminal 270 to present the initial interactive interface 500 as shown in fig. 5 through the doctor space application in response to the access request, the initial interactive interface 500 comprising an eighth interface element 510.
As shown in fig. 5, the eighth interface element 510 includes an input box element (such as the "view today's schedule" position in fig. 5) and function elements (such as the "return to home" element and the "open" element in fig. 5). The input box element can display reminder information for viewing the work schedule (such as the text "view today's schedule") in gray font or another form to remind the doctor to view the work schedule. For example, as shown in fig. 5, doctor 271 may initiate a request to access the work schedule by entering the text "view today's schedule" in the input box and clicking the "open" element.
It is to be appreciated that fig. 5 and its description are by way of example only, and in some embodiments eighth interface element 510 may be presented in other manners, such as a "view work schedule" button, a voice prompt to "view work schedule," etc., which is not limiting in this specification.
In some embodiments, the processing device 210 may control a doctor terminal (e.g., the doctor terminal 270) to present a corresponding interface based on a doctor's (e.g., doctor 271's) interaction (e.g., clicking or voice selection) with the eighth interface element 510. For example, in response to doctor 271 clicking on the "open" element, the processing device 210 may control the doctor terminal 270 to present the doctor schedule information. For another example, in response to doctor 271 entering the name of an assistance service (e.g., hospital campus roaming) within the input box element and clicking on the "open" element, the processing device 210 may control the doctor terminal 270 to present the interface of that assistance service for the doctor. For another example, doctor 271 may click on the "return to home" element, causing the doctor terminal 270 to present a main interface (e.g., the interactive interface 431 as shown in FIG. 7). Hospital campus roaming refers to accessing at least one real-time 3D map related to a target location within the hospital campus. For more explanation of the target location and the real-time 3D map, see fig. 7 and its associated description.
In some embodiments, when the doctor clicks on the "open" element, the processing device 210 may control the doctor terminal (e.g., doctor terminal 270) to present a work schedule interface.
FIG. 6 is a schematic diagram of an exemplary work schedule interface shown according to some embodiments of the present description.
In some embodiments, as shown in FIG. 6, the work schedule interface 600 may include the tasks that a doctor (e.g., doctor 271) needs to process on the day. For example, as shown in FIG. 6, the tasks presented by the work schedule interface 600 include a ward-round task 610, a consultation task 620, and a surgical task 630. In some embodiments, the work schedule interface 600 may further include the specific work content included in each task and its corresponding time information (e.g., a planned start time). For example, as shown in fig. 6, the ward-round task 610 may include a ward round with a planned start time of 8:00, and the surgical task 630 may include pre-operative preparation and intra-operative execution with planned start times of 13:30 and 14:00, respectively.
In some embodiments, as shown in FIG. 6, the work schedule interface 600 may also include a roaming element 640. The processing device 210 may control a doctor terminal (e.g., doctor terminal 270) to present an interface corresponding to in-hospital roaming (e.g., the interface shown in fig. 8) based on the doctor's (e.g., doctor 271) interaction (e.g., click or voice selection, etc.) with the roaming element 640. For further description of the in-hospital roaming service, see fig. 8 and its associated description.
In some embodiments, the initial interactive interface further provides an avatar (also referred to as a second avatar) configured to communicate with the doctor. For example, as shown in FIG. 5, the initial interactive interface 500 includes a virtual character 520.
In some embodiments, a doctor (e.g., doctor 271) may obtain a desired assistance service through voice communication with virtual character 520. For example, doctor 271 may speak the phrase "return to home"; after receiving the voice information, doctor terminal 270 may transmit it to processing device 210, which may perform voice recognition on the voice information and control doctor terminal 270 to jump to a main interface (e.g., interactive interface 431 as shown in fig. 7) according to the recognition result.
In some embodiments, the request for access to the work schedule is entered by the doctor through communication with the avatar. For example, doctor 271 may speak the phrase "view today's schedule"; after receiving the voice information, doctor terminal 270 may transmit it to processing device 210, which may perform voice recognition on the voice information and control doctor terminal 270 to jump to a work schedule interface (e.g., work schedule interface 600 as shown in fig. 6) according to the recognition result.
In some embodiments, the processing device 210 may determine one or more tasks to be completed in response to a request for access to the work schedule entered by the doctor through the doctor terminal. That is, after receiving such a request, the processing device 210 determines the one or more tasks that the doctor still needs to complete.
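The determination described above can be sketched as filtering the day's schedule by the time the request was received. The schedule format and function names are illustrative assumptions, not the claimed implementation:

```python
from datetime import time

# Hypothetical sketch: determine the doctor's tasks still to be completed,
# given the receipt time of the access request and the day's schedule.

def pending_tasks(schedule: list, received_at: time) -> list:
    """Keep tasks whose planned start time has not yet passed."""
    return [t for t in schedule if t["start"] >= received_at]

schedule = [
    {"name": "ward round", "start": time(8, 0)},
    {"name": "pre-operative preparation", "start": time(13, 30)},
    {"name": "intra-operative execution", "start": time(14, 0)},
]

# A request received at 9:15 leaves only the two surgical sub-tasks.
todo = pending_tasks(schedule, received_at=time(9, 15))
```

A real system would also consult the doctor's schedule information stored on the processing device rather than an in-memory list.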
FIG. 7 is a schematic diagram of an exemplary interaction interface shown in accordance with some embodiments of the present description.
In some embodiments, as shown in fig. 7, the interactive interface 431 includes one or more first interface elements 710 (hereinafter referred to as the plurality of first interface elements 710) for obtaining assistance services related to at least one of the one or more tasks to be completed.
In some embodiments, when the one or more tasks to be completed include making rounds in a hospital ward, the first interface elements include a first interface element for remote ward rounds (hereinafter simply referred to as a ward-round interface element). For example, as shown in fig. 7, the plurality of first interface elements 710 includes a ward-round interface element 711.
In some embodiments, processing device 210 may control doctor terminal 270 to display ward-round service interface 7110 as shown in fig. 9 in response to a doctor's (e.g., doctor 271) interaction with ward-round interface element 711 (e.g., doctor 271 clicks on ward-round interface element 711).
In some embodiments, a doctor (e.g., doctor 271) may obtain assistance services related to remotely participating in ward rounds (e.g., issuing a request to participate in a remote ward round, accessing patient data of a target patient, etc.) by interacting with the ward-round service interface 7110 as shown in fig. 9 (e.g., clicking on an interface element in ward-round service interface 7110, voice interaction, etc.).
Remote participation in a ward round refers to a doctor (e.g., doctor 271) conducting the ward round through a virtual ward space presented by doctor terminal 270 (e.g., second XR device 270-2).
Further description of the above embodiments and virtual ward space can be found in fig. 9 or 10 and their associated description.
In some embodiments, when the one or more tasks to be completed include providing a consultation service within the consulting room, the first interface elements include a first interface element for acquiring patient data of the patients who have reserved the consultation service (hereinafter simply referred to as a consultation interface element). For example, as shown in fig. 7, the plurality of first interface elements 710 includes a consultation interface element 712.
In some embodiments, processing device 210 may control doctor terminal 270 to display a consultation service interface 7120 as shown in fig. 11A in response to a doctor's (e.g., doctor 271) interaction with consultation interface element 712 (e.g., doctor 271 clicking on consultation interface element 712).
In some embodiments, a doctor (e.g., doctor 271) may obtain assistance services related to a consultation (e.g., viewing patient data of a patient who has reserved a consultation, etc.) by interacting with the consultation service interface 7120 as shown in fig. 11A (e.g., clicking on interface elements in the consultation service interface 7120, voice interaction, etc.).
For further description of the above embodiments, see fig. 11A and its associated description.
In some embodiments, when the one or more tasks to be completed include providing a remote consultation service, the first interface elements include a first interface element for entering a virtual consulting room (hereinafter simply referred to as a consulting room interface element). For example, as shown in fig. 7, the plurality of first interface elements 710 includes a consulting room interface element 713.
In some embodiments, processing device 210 may control doctor terminal 270 (e.g., second XR device 270-2) to present a virtual consulting room in response to a doctor's (e.g., doctor 271) interaction with consulting room interface element 713 (e.g., doctor 271 clicking on consulting room interface element 713).
Further description of the above embodiments and virtual clinics can be found in fig. 13 and its associated description.
In some embodiments, when the one or more tasks to be completed include performing surgery on the target patient, the first interface elements include a first interface element for accessing patient data related to the target patient (hereinafter simply referred to as a surgical interface element). For example, as shown in fig. 7, the plurality of first interface elements 710 includes a surgical interface element 714.
In some embodiments, processing device 210 may control doctor terminal 270 to display surgical service interface 7140 as shown in fig. 14 in response to a doctor's (e.g., doctor 271) interaction with surgical interface element 714 (e.g., doctor 271 clicking on surgical interface element 714).
In some embodiments, a doctor (e.g., doctor 271) may obtain assistance services related to surgery (e.g., viewing patient data of the target patient of the surgery, etc.) by interacting with the surgical service interface 7140 as shown in fig. 14 (e.g., clicking on interface elements in the surgical service interface 7140, voice interaction, etc.).
Further description of the above embodiments can be found in fig. 14 and its associated description.
In some embodiments, as shown in fig. 7, the interactive interface 431 further includes a second interface element 720. The second interface element 720 is used to access a real-time 3D map associated with a target location corresponding to the at least one task to be completed (i.e., to roam the target location corresponding to the at least one task to be completed).
The target position corresponding to the task to be completed refers to the spatial position where the doctor is located when executing the task to be completed. For example, if the task to be completed is a ward, the target location is a ward. For another example, if the task to be completed is to perform an operation, the target location is an operating room.
In some embodiments, the real-time 3D map of the target location is generated based on an initial 3D map of the hospital and real-time status information of the users and/or devices in the target location.
The initial 3D map of the hospital is a static map that reflects the physical structure and layout of the hospital. In some embodiments, the processing device 210 may generate the initial 3D map of the hospital based on a three-dimensional model of the hospital.
The real-time information is real-time dynamic information of the users and hospital devices within the hospital. The real-time information may include current location information of a user (e.g., the current location of a patient, doctor, or nurse), information on the medical procedure the user is currently in (e.g., the patient is receiving a consultation service), medical service information of the user (e.g., pre-consultation records, diagnosis records, examination results, etc.), and operation information of hospital devices (e.g., operation information of elevators, smart hospital beds, smart care carts, smart chairs, etc.).
In some embodiments, the real-time information may be acquired based on a sensing device of the hospital (e.g., an in-hospital camera device, a sound sensing device, etc.) and/or a user terminal (e.g., a doctor terminal 270). For example, the doctor terminal 270 may acquire current location information of the doctor, tasks being processed by the doctor, and the like. For another example, a sensing device of a hospital may acquire a consultation session of a doctor and a patient.
The real-time 3D map of the target location is a 3D map reflecting the in-hospital layout of the target location and its real-time status (e.g., the real-time location of a doctor or patient). Specifically, the processing device 210 may dynamically update the initial 3D map of the target location according to the real-time information of the target location to obtain the real-time 3D map of the target location. For example, taking a room (hereinafter referred to as room A) as the target location, if the real-time information of a doctor shows that the doctor's location is within room A, the processing device 210 may place a three-dimensional model representing the doctor in the real-time 3D map of room A.
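The dynamic-update step above can be sketched as merging the static map with whatever real-time status entries fall inside the target location. The dict-based map representation is an illustrative assumption; a real system would use a scene graph or engine-level 3D objects:

```python
from copy import deepcopy

# Hypothetical sketch: update an initial (static) 3D map with real-time
# status information to obtain the real-time 3D map of a target location.

def update_map(initial_map: dict, real_time_info: list) -> dict:
    """Place a model for each user/device currently inside the location."""
    live = deepcopy(initial_map)  # keep the static initial map unchanged
    live["models"] = []
    for item in real_time_info:
        if item["location"] == live["location_id"]:
            live["models"].append(
                {"id": item["id"], "role": item["role"], "pos": item["pos"]}
            )
    return live

room_a = {"location_id": "room_A", "layout": "static-geometry"}
info = [
    {"id": "doctor_271", "role": "doctor", "location": "room_A", "pos": (3, 1)},
    {"id": "patient_102", "role": "patient", "location": "room_B", "pos": (0, 0)},
]
live_map = update_map(room_a, info)  # only doctor_271 is placed in room_A
```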
Fig. 8 is a schematic diagram of an exemplary in-hospital roaming service interface shown in accordance with some embodiments of the present description.
In some embodiments, processing device 210 may control doctor terminal 270 (e.g., second XR device 270-2) to present the in-hospital roaming service interface 7200 in response to a doctor's (e.g., doctor 271) interaction with second interface element 720 (e.g., click or voice selection, etc.).
Illustratively, taking the doctor's tasks to be completed as including ward rounds, consultation, and surgery, as shown in fig. 8, the in-hospital roaming service interface 7200 includes a ward roaming element 7201, a consulting room roaming element 7202, and an operating room roaming element 7203. The processing device 210 may control the doctor terminal 270 (e.g., the second XR device 270-2) to present real-time 3D maps of the ward, the consulting room, or the operating room, respectively, in response to the interaction (e.g., clicking or voice selection, etc.) of the doctor (e.g., doctor 271) with at least one of the ward roaming element 7201, the consulting room roaming element 7202, or the operating room roaming element 7203.
In some embodiments, as shown in fig. 7, the interactive interface 431 further includes a third interface element 730 for performing preoperative patient teaching.
In some embodiments, doctor terminal 270 may generate and send to processing device 210 a request for preoperative patient education for the target patient in response to a doctor's (e.g., doctor 271) interaction with third interface element 730 (e.g., doctor 271 clicks or voice-selects third interface element 730).
Preoperative patient teaching refers to the process of explaining the condition and the surgical plan to the patient and/or the patient's family members and previewing the patient's postoperative rehabilitation. Illustratively, the processing device 210 may visually display the patient's lesion and its surrounding tissues or organs, based on a three-dimensional anatomical model of the patient, via terminal devices (e.g., XR glasses or smart display terminals) worn by the participants (e.g., the patient, the doctor, the patient's family members, etc.). Then, the processing device 210 may obtain the doctor's explanation data for the patient and/or the family members through sensors on the terminal device worn by the doctor, and send the explanation data to the terminal devices of the patient or the family members so that they receive it. The explanation data may include at least one of explaining the surgical plans and simulated surgical steps, informing of potential risks, previewing the postoperative rehabilitation progress, explaining the discharge criteria, etc., and may take the form of voice information, image information, text information, etc.
In some embodiments, the processing device 210 may obtain a request for preoperative patient teaching for the target patient. In response to the request, the processing device 210 may generate explanation data for explaining the surgical plan of the target patient. Processing device 210 may control a first XR device worn by the target patient and a second XR device worn by the doctor to present the explanation data to the target patient and the doctor simultaneously. For more details on preoperative patient teaching and the above embodiments, see fig. 15 and its associated description.
In some embodiments, as shown in fig. 7, the interactive interface 431 further includes a fourth interface element 740 for performing surgical simulation.
In some embodiments, doctor terminal 270 can generate a request to simulate the target procedure and send the request to processing device 210 in response to a doctor's (e.g., doctor 271) interaction with fourth interface element 740 (e.g., doctor 271 clicking on fourth interface element 740). The target surgery refers to a surgery corresponding to a target surgery plan.
Surgical simulation refers to the process by which a doctor (e.g., doctor 271) performs surgical exercises in a safe and controlled environment to perfect a surgical plan and/or improve surgical skills. For example, for complex or rare surgeries, a doctor may use an extended reality device (e.g., second XR device 270-2) to simulate the surgery on a virtual patient in a virtual surgical scene (e.g., an extended reality surgical scene) based on the formulated surgical plan, so as to identify risk points that may occur during the surgery and formulate corresponding risk prevention measures. A risk point refers to an operation that carries risk during the surgery, an abnormal state of an organ, another emergency situation, etc. For example, the risk point may be patient hypotension, patient bradycardia, patient liver failure, etc. For another example, for multiple surgical plans, a doctor may simulate each surgical plan in a virtual surgical scene using an extended reality device (e.g., second XR device 270-2) to compare the merits and drawbacks of the different surgical plans and determine an optimal surgical plan. For another example, a doctor may repeatedly practice the same surgical procedure in a virtual surgical scene using an extended reality device (e.g., second XR device 270-2) to enhance understanding and memory of the surgical procedure and improve the accuracy and proficiency of the surgical operation.
In some embodiments, the processing device 210 may obtain a request to simulate a target surgery. The processing device 210 may generate a virtual surgical scene corresponding to the target surgery in response to the request, the virtual surgical scene including a virtual surgical site and one or more virtual surgical devices. Processing device 210 may control the second XR device to present the virtual surgical scene to the doctor. Processing device 210 may obtain interaction instructions regarding a virtual surgical device entered by the doctor via the second XR device or an interaction device corresponding to the virtual surgical device. The processing device 210 may update the virtual surgical site and the virtual surgical devices in the virtual surgical scene based on the interaction instructions. For more details on surgical simulation and the above embodiments, see fig. 16 and its associated description.
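The update loop above can be sketched as applying each interaction instruction to a scene state. The scene and instruction structures below are illustrative assumptions for the sketch, not the claimed implementation:

```python
# Hypothetical sketch: apply a doctor's interaction instructions to the
# virtual surgical scene (virtual surgical site + virtual surgical devices).

def apply_instruction(scene: dict, instruction: dict) -> dict:
    """Update the scene in response to one interaction instruction."""
    kind = instruction["kind"]
    if kind == "move_device":
        scene["devices"][instruction["device"]] = instruction["pos"]
    elif kind == "incision":
        scene["site"]["incisions"].append(instruction["path"])
    return scene

scene = {
    "site": {"organ": "liver", "incisions": []},
    "devices": {"scalpel": (0, 0, 0)},
}
scene = apply_instruction(
    scene, {"kind": "move_device", "device": "scalpel", "pos": (1, 2, 0)}
)
scene = apply_instruction(
    scene, {"kind": "incision", "path": [(1, 2, 0), (1, 4, 0)]}
)
```

In practice each update would be re-rendered on the second XR device so the doctor sees the modified scene immediately.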
In some embodiments, as shown in fig. 7, the interactive interface 431 further includes a fifth interface element 750 for executing a surgical plan.
In some embodiments, doctor terminal 270 may generate and send to processing device 210 a request to perform a surgical plan on the target patient in response to a doctor's (e.g., doctor 271) interaction with fifth interface element 750 (e.g., doctor 271 clicks or voice-picks on fifth interface element 750).
Surgical planning refers to a surgical plan in which surgery is performed for a patient undergoing surgery. In some embodiments, the surgical plan may include at least one of a surgical site (e.g., abdomen, chest, etc.), a surgical time, a surgical procedure, a predicted surgical duration, an anesthetic dose, a primary surgeon, a surgical type (e.g., minimally invasive or laparoscopic or open surgery, etc.), a surgical incision location, a depth of cut, a cutting path, an implant (e.g., a cardiac stent), an implantation path, a type and number of surgical tools, risk precautions, and the like.
Performing a surgical plan is the process of performing surgery on a patient based on the surgical plan.
In some embodiments, the processing device 210 may determine the surgical plan using a plan generation model. For example, the condition evaluation result is processed and analyzed using the plan generation model to determine the surgical plan. The plan generation model may be a machine learning model. In some embodiments, the input of the plan generation model may include the condition evaluation result, and the output may include the surgical plan. In some embodiments, the plan generation model may be integrated in the processing device 210. After the processing device 210 determines the surgical plan using the plan generation model, the output surgical plan may be presented to the doctor through an interaction device of the doctor workstation (e.g., doctor terminal 270, a smart display screen, etc.). In some embodiments, the processing device 210 may confirm or update the surgical plan based on doctor feedback information obtained from the interaction device (e.g., confirmation of or modification information for the surgical plan, etc.).
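The confirm-or-update flow above can be sketched as follows. The rule-based "model" stands in for the machine learning model mentioned in the text, and all names and fields are illustrative assumptions:

```python
# Hypothetical sketch of the plan-generation flow: a model proposes a
# surgical plan from the condition evaluation result, and doctor feedback
# either confirms the plan or merges in modifications.

def generate_plan(assessment: dict) -> dict:
    """Toy stand-in for the plan generation model."""
    plan = {"surgical_type": "open", "anesthesia": "general"}
    if assessment.get("tolerance") == "high" and assessment.get("stage") != "advanced":
        plan["surgical_type"] = "minimally invasive"
    return plan

def finalize_plan(plan, feedback=None):
    """Confirm the plan as-is, or apply the doctor's modifications."""
    return plan if not feedback else {**plan, **feedback}

draft = generate_plan({"stage": "advanced", "tolerance": "high"})
final = finalize_plan(draft, {"surgical_type": "laparoscopic"})
```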
The condition evaluation result reflects the health condition and/or illness state of the patient. The condition evaluation result may include the stage of the disease, the progression of the disease, the rate of deterioration of the disease, the patient's tolerance to drugs or surgery, and the like. For example, taking cancer as an example, the condition evaluation result may be "the disease is at an advanced stage, the cancer focus has spread, the cancer focus develops and spreads quickly, the patient has no history of drug allergy, and the surgery-bearing capability is strong".
The evaluation mode of the condition evaluation result may include manual evaluation and/or intelligent evaluation.
Manual evaluation refers to a condition evaluator (e.g., a doctor) determining the condition evaluation result from the patient data. In some embodiments, after the patient data is retrieved and presented on a display interface of an interaction device (e.g., doctor terminal 270, the smart display terminal of the doctor's office), the doctor may comprehensively evaluate the health and condition of the patient based on the patient data and enter the condition evaluation result via the interaction device (e.g., by voice input, interface input, etc.). Illustratively, a doctor may review the patient data in the UHR through the smart display terminal of the doctor workstation, precisely locate a lesion through a tool provided by the UHR, and view the exact location and size of the lesion, its relationship with the surrounding tissue, etc., so as to intuitively understand the patient's condition and perform preoperative evaluation; a sensing device (e.g., a microphone, a gesture sensor, etc. on the second terminal device worn by the doctor) may capture the doctor's input information (e.g., voice information, gesture information, etc.) to generate the condition evaluation result.
Intelligent evaluation refers to processing the patient data using a condition evaluation model to determine the condition evaluation result. The condition evaluation model may be a machine learning model; its input may include the patient data, and its output may include the condition evaluation result. In some embodiments, the condition evaluation model may be integrated in the processing device 210. The processing device 210 may determine the patient's condition evaluation result using the condition evaluation model, and may output and display the condition evaluation result to the doctor via an interaction terminal of the doctor workstation (e.g., doctor terminal 270, a smart display screen, etc.). In some embodiments, the processing device 210 may also update the condition evaluation result based on the doctor's feedback information (e.g., confirmation or modification information for the presented condition evaluation result, etc.).
Patient data refers to physiological or pathological information related to a patient. The patient data may include patient personal data, historical medical treatment data, medical examination data, patient digital twins (e.g., a three-dimensional anatomical model of the patient), and the like.
Patient personal data refers to basic information data of a patient. The patient personal data may include information of the patient's sex, age, height, weight, etc. For example, the patient personal data may be "sex male, age 39 years, height 174cm, weight 73kg".
The historical medical treatment data refers to data related to the treatment and care of the patient before the current surgery or the current time. The historical medical treatment data may include patient medical records, historical treatment records (e.g., historical surgery records, historical chemotherapy records, etc.), hospitalization records (e.g., doctor's order records during hospitalization, care records, medication records, etc.), historical medication records (e.g., medication records prior to the present hospitalization, etc.), and the like.
Medical examination data refers to data reflecting the examination results of a medical examination made by a patient. Medical examinations may include blood tests, biochemical tests, urine tests, immunological tests, microbiological tests, allergen tests, imaging tests (e.g., CT scan tests, MR scan tests, PET scan tests, ultrasound scan tests, etc.), and the like. The medical examination data may be an examination report of a medical examination, e.g. a blood test report, a urine test report, an immunological test report, a CT image report, an MR image report, a PET image report, etc.
The patient data may be obtained from Unified Health Records (UHR) or may be supplemented by the doctor. The UHR is an archive that stores or displays at least part of the patient data. For example, a patient's personal information (e.g., after first registration entry), physical examination reports, medical examination reports, etc. may be stored in a folder in the UHR that corresponds to the patient (e.g., a folder named by patient number, name, etc.). Illustratively, the doctor initiates a patient data acquisition instruction by operating an interaction terminal of the doctor workstation (e.g., doctor terminal 270, a smart display terminal), and the processing device 210 retrieves the patient data of the current patient from the UHR based on the acquisition instruction and presents it via the interaction terminal.
In some embodiments, the doctor may determine the surgical plan based on the condition evaluation result. For example, based on the condition evaluation result, the doctor may determine the surgical necessity for the current patient, the most appropriate surgical method and technique, the surgery time, the predicted surgical duration, and the like, thereby determining the surgical plan.
In some embodiments, processing device 210 may generate a surgical plan by introducing an expert conference (also known as a multidisciplinary team conference) when preparing the surgical plan.
In some embodiments, the processing device 210 may obtain a request to perform a surgical plan on the target patient. The processing device 210 may determine a surgical difficulty coefficient based on the patient data of the target patient, and determine whether an expert conference needs to be held based on the surgical difficulty coefficient. In response to determining that an expert conference is required, the processing device 210 may control the doctor terminal to present a sixth interface element (e.g., sixth interface element 760) for initiating the expert conference. For further description of the above embodiments, see fig. 17 and its associated description.
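The decision above can be sketched as a weighted score compared against a threshold. The weights, factor names, and threshold below are illustrative assumptions only; the specification does not fix how the coefficient is computed:

```python
# Hypothetical sketch: compute a surgical difficulty coefficient from
# patient data and decide whether an expert (multidisciplinary team)
# conference should be held.

WEIGHTS = {"lesion_spread": 0.4, "comorbidities": 0.3, "age_risk": 0.3}

def difficulty_coefficient(patient: dict) -> float:
    """Weighted sum of normalized risk factors in [0, 1]."""
    return sum(WEIGHTS[k] * patient.get(k, 0.0) for k in WEIGHTS)

def needs_expert_conference(patient: dict, threshold: float = 0.6) -> bool:
    return difficulty_coefficient(patient) >= threshold

complex_case = {"lesion_spread": 0.9, "comorbidities": 0.7, "age_risk": 0.5}
routine_case = {"lesion_spread": 0.2, "comorbidities": 0.1, "age_risk": 0.3}
# complex_case scores about 0.72 -> conference; routine_case about 0.20 -> none
```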
In some embodiments, as shown in fig. 7, the interactive interface 431 further includes a seventh interface element 770 for patient management.
In some embodiments, doctor terminal 270 can generate and send to processing device 210 a request to access an initial admission record for a target patient in response to a doctor's (e.g., doctor 271) interaction with seventh interface element 770 (e.g., doctor 271 clicking on seventh interface element 770).
Patient management refers to managing relevant data of a patient (e.g., patient data, surgical records, admission records, care records, post-operative recovery records, etc.).
In some embodiments, the processing device 210 may obtain a request to access an initial admission record for the target patient. The processing device 210 may control the doctor terminal to present the initial admission record in response to the above-described request to access the initial admission record for the target patient. The processing device 210 may update the initial admission record based on feedback information of the initial admission record entered by the doctor through the doctor terminal. Further description of the above embodiments can be found in fig. 18 and its associated description.
In some embodiments, the interactive interface 431 further includes scheduling information 780 associated with at least one task to be completed. For example, as shown in FIG. 7, the interactive interface 431 may include schedule information such as ward rounds (scheduled start time 8:00), pre-diagnosis preview (scheduled start time 9:00), consultation (scheduled start time 9:10), and the like.
In some embodiments, interactive interface 431 further includes one or more foldable elements related to one or more completed tasks. A foldable element refers to an element in the interactive interface that can be zoomed out (e.g., reduced to 1/4 of its original size) or zoomed in through human-machine interaction (e.g., clicking or voice selection, etc.); accordingly, a foldable element has a folded state (the zoomed-out state) and an unfolded state (the zoomed-in state). In the unfolded state, the foldable element may present a completed task and its time information (e.g., the start time of the completed task). For example, as shown in FIG. 7, the interactive interface 431 includes a foldable element 790 (in the unfolded state); the foldable element 790 presents the ward-round preparation task, whose start time is 7:45. A doctor (e.g., doctor 271) may click on the foldable element 790 on the interactive interface 431, and the processing device 210 may control the foldable element 790 to enter the folded state in response to the click operation (the foldable element 790 is reduced in area and no longer presents the completed task).
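The two states of a foldable element can be sketched as a simple toggle. Class and method names are illustrative assumptions:

```python
# Hypothetical sketch of the foldable element: a click toggles between
# the unfolded state (shows task and start time) and the folded state.

class FoldableElement:
    def __init__(self, task: str, start_time: str):
        self.task, self.start_time = task, start_time
        self.folded = False  # presented in the unfolded state first

    def on_click(self) -> None:
        """Toggle between folded and unfolded on each interaction."""
        self.folded = not self.folded

    def render(self) -> str:
        """Folded elements no longer present the completed task."""
        return "" if self.folded else f"{self.task} ({self.start_time})"

el = FoldableElement("ward round preparation", "7:45")
el.on_click()  # the doctor clicks; the element enters the folded state
```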
In some embodiments, the layout of the interactive interface 431 is determined based at least in part on the physician's preference information.
The preference information reflects the doctor's (e.g., doctor 271's) preferences for the content, specifications (including shape, color, font, etc.), and positions of the elements on the interface (e.g., the first interface elements, the second interface element, etc.). For example, if the preference information is "circular interface elements, regular-script font, 5 mm spacing", the processing device 210 may set the shape of each interface element on the interactive interface 431 to be circular, the font included in each interface element to be regular script, and the spacing between interface elements to be 5 mm.
In some embodiments, the preference information may also reflect the doctor's (e.g., doctor 271's) preference for the interface style. The interface styles may include a concise style and a detailed style, where the concise style contains fewer interface elements than the detailed style. For example, the interactive interface 431 in the detailed style may include all the interface elements shown in fig. 7, while the interactive interface 431 in the concise style includes only the first interface elements 710, the second interface element 720, the third interface element 730, the fourth interface element 740, the fifth interface element 750, and the seventh interface element 770 shown in fig. 7.
In some embodiments, the preference information further includes whether to display the avatar 520 shown in FIG. 5. Further, the preference information may also include appearance preferences for the avatar 520; for example, the avatar 520 may take the form of a cartoon person or a cartoon animal, etc., and the preference information may include the doctor's (e.g., doctor 271's) selection of a specific appearance.
The preference information may be entered actively by the doctor or determined by the processing device 210 based on the doctor's historical operation data. For example, the processing device 210 collects historical operation data on the interactive interface during the doctor's use of the medical space application and determines the doctor's preference information based on it. The historical operation data may include the doctor's settings for parameters of the interactive interface (e.g., settings of style, font, content, etc.) and element operations (e.g., the frequency of use of individual elements presented in the interactive interface, state operations on individual elements, etc.). For example, if a doctor always sets the foldable elements in the interactive interface to the folded state, the processing device 210 may determine that the doctor prefers the concise style. For another example, if the doctor rarely or never clicks on some of the elements presented in the interactive interface (e.g., has never used the roaming element), those elements may not be presented in the doctor's interactive interface.
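The inference described above can be sketched as follows. The thresholds, log format, and function name are illustrative assumptions, not the claimed implementation:

```python
from collections import Counter

# Hypothetical sketch: derive preference information from historical
# operation data. Rarely used elements are dropped from the layout, and
# a doctor who keeps foldable elements folded is treated as preferring
# the concise style.

def infer_preferences(click_log, fold_ops, all_elements, min_clicks=2):
    counts = Counter(click_log)
    shown = [e for e in all_elements if counts[e] >= min_clicks]
    concise = fold_ops.count("fold") > fold_ops.count("unfold")
    return {"elements": shown, "style": "concise" if concise else "detailed"}

prefs = infer_preferences(
    click_log=["ward_round", "ward_round", "surgery", "surgery", "roaming"],
    fold_ops=["fold", "fold", "unfold"],
    all_elements=["ward_round", "surgery", "roaming"],
)
# the roaming element was clicked only once, so it is hidden,
# and the fold operations dominate, so the style is concise
```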
In some embodiments, the layout of the interactive interface 431 is determined based at least in part on the behavior data of the doctor. The behavior data may reflect the doctor's daily behavior (e.g., the doctor's living habits and/or work habits) and the doctor's interactions with the medical space application. For example, the doctor's living habits may include noon break time, outing time, meal time, driving time, sleeping time, etc., and the work habits may include stretching time during work breaks, eye rest time, water-drinking time, etc. The doctor's interactions with the medical space application may include voice interactions, gesture interactions, etc.
The behavior data of the doctor can be collected autonomously by an agent configured in the processing device. The agent serves as the doctor's intelligent assistant and can provide intelligent, personalized services. For example, the agent may obtain real-time behavior data of the doctor through a wearable sensing device worn by the doctor or a sensing device in the doctor's environment, and determine the doctor's living habits and/or work habits based on the real-time behavior data.
In some embodiments, the processing device 210 determines the interface style and/or interface elements of the interactive interface based on the doctor's behavior data. For example, if a doctor initiates an access request to the medical space application during a period when manual operation is inconvenient (e.g., the doctor's usual noon break, driving time, or meal time), a virtual character capable of voice interaction may be configured in the presented interactive interface, and relevant assistance services may be provided through voice interaction between the virtual character and the doctor. The virtual character may announce to the doctor the status of physical entities (e.g., patient entities, medical device entities) related to the assistance services.
In some embodiments, the processing device 210 updates the doctor's behavior data and preference information periodically or in real time, and adjusts the interface layout based on the updated behavior data and preference information. In some embodiments, the processing device 210 autonomously obtains real-time behavior data of the doctor, updates the layout of the interactive interface in real time based on that data, and controls the doctor terminal to present the updated interactive interface through the medical space application. For example, if the doctor is resting with closed eyes, the brightness of the interactive interface may be dimmed and/or related data may be broadcast by voice.
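The real-time adjustments just described can be sketched as a small rule table. The behavior labels, layout keys, and brightness value are hypothetical illustrations, not values defined by this disclosure:

```python
def adjust_interface(behavior, layout):
    """Return an adjusted copy of the interface layout for a real-time behavior."""
    layout = dict(layout)  # do not mutate the caller's layout
    if behavior == "eyes_closed":
        # Dim the screen and switch to voice broadcast of related data.
        layout["brightness"] = min(layout.get("brightness", 1.0), 0.2)
        layout["voice_broadcast"] = True
    elif behavior in ("noon_break", "driving", "dining"):
        # Manual operation is inconvenient: surface a voice-interactive avatar.
        layout["show_avatar"] = True
    return layout
```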
Fig. 9 is an interface diagram of an exemplary ward-round service according to some embodiments of the present description.
In some embodiments, as shown in fig. 9, the ward-round service interface 7110 may include a first interface element 7111 for applying for a remote ward round (hereinafter referred to as the remote ward-round element 7111). A doctor terminal (e.g., doctor terminal 270) may generate and send a request to remotely participate in a ward round to the processing device 210 in response to a doctor's (e.g., doctor 271's) interaction with the remote ward-round element 7111 (e.g., clicking or voice-selecting the element).
In some embodiments, as shown in fig. 9, the ward-round service interface 7110 may further include a first interface element 7112 (hereinafter referred to as the patient data acquisition element 7112) for acquiring patient data of a patient (e.g., a target patient) to be visited during the ward round. A doctor terminal (e.g., doctor terminal 270) may generate and send a request to access this patient data to the processing device 210 in response to the doctor's (e.g., doctor 271's) interaction with the patient data acquisition element 7112 (e.g., clicking or voice-selecting the element). In response to the request, the processing device 210 may retrieve the patient data (e.g., from the storage device 230 or the UHR), send it to the doctor terminal (e.g., the doctor terminal 270), and then control the doctor terminal to present the patient data for the doctor to view.
For further description of patient data, see fig. 7 and the associated description.
In some embodiments, as shown in fig. 9, the ward-round service interface 7110 may further include a first interface element 7113 (hereinafter referred to as the ward-round record acquisition element 7113) for acquiring an initial ward-round record. A doctor terminal (e.g., doctor terminal 270) may generate and send a request to access the initial ward-round record of a target patient to the processing device 210 in response to the doctor's (e.g., doctor 271's) interaction with the ward-round record acquisition element 7113 (e.g., clicking or voice-selecting the element). In response to the request, the processing device 210 may retrieve the initial ward-round record (e.g., from the storage device 230), send it to the doctor terminal (e.g., the doctor terminal 270), and then control the doctor terminal to present the initial ward-round record for the doctor to view.
In some embodiments, the initial ward-round record is generated based on perception information (also referred to as first perception information) acquired by one or more sensing devices (also referred to as first sensing devices) in the hospital ward during the ward round.
The one or more sensing devices in the hospital ward may include an image sensor, a sound sensor, a light intensity sensor, etc. in the hospital ward.
The first perception information is information collected by the one or more sensing devices during the ward round. For example, the first perception information collected by the image sensor may be image information within the patient room, and the first perception information collected by the sound sensor may be sound information within the patient room (e.g., the patient's voice, the voices of ward-round participants (e.g., doctors and/or nurses in the room), interactive dialogue between the patient and the ward-round participants, etc.).
In some embodiments, the processing device 210 may obtain a ward-round record template. For example, the ward-round record template may be preset and stored in the storage device 230, and the processing device 210 may retrieve it from the storage device 230. In some embodiments, the processing device 210 may populate the ward-round record template based on the first perception information to generate the initial ward-round record. For example, the processing device 210 may perform face recognition on the ward-round participants based on the image information in the first perception information, determine identity information (e.g., names) of the participants, and populate the template with that identity information. As another example, the processing device 210 may perform speech recognition on the doctor's voice information in the first perception information, determine the content of the doctor's medical orders, and populate the template accordingly.
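The template-filling step can be sketched as follows, assuming the face-recognition and speech-recognition results have already been produced by upstream components. All names and field labels are hypothetical:

```python
def generate_initial_ward_record(template_fields, face_ids, transcripts):
    """Assemble an initial ward-round record from first perception information.

    face_ids: identities recognized from ward images (output of a hypothetical
              upstream face-recognition step).
    transcripts: transcribed speech segments keyed by speaker role (output of a
                 hypothetical speech-recognition step).
    """
    extractors = {
        "participants": lambda: ", ".join(face_ids),
        "medical_orders": lambda: transcripts.get("doctor", ""),
        "patient_feedback": lambda: transcripts.get("patient", ""),
    }
    # Fields with no extractor stay blank for the doctor to confirm or modify.
    return {f: extractors[f]() if f in extractors else "" for f in template_fields}
```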
In some embodiments, after viewing the initial ward-round record through the doctor terminal (e.g., doctor terminal 270), the doctor may further input (e.g., by voice) feedback information (e.g., confirmation or modification information) through the doctor terminal to confirm or modify the initial ward-round record, thereby obtaining the final ward-round record.
The processing device 210 may be configured with an agent (e.g., a nurse agent or a ward-round agent) corresponding to the ward-round service, and the ward-round record (e.g., the initial ward-round record, an updated ward-round record, etc.) may be generated by that agent based on the first perception information. For more about agents, see the description of fig. 11A.
Fig. 10 is a flow diagram of an exemplary remotely engaged ward round shown in accordance with some embodiments of the present description. As shown in fig. 10, in some embodiments, the process 1000 may include the following steps. In some embodiments, the process 1000 may be performed by the processing device 210.
At step 1010, a request entered by a doctor to remotely participate in a ward round is obtained from the doctor's terminal. Further description of remote participation in a ward round may be found in fig. 9 and its associated description.
Step 1020, in response to determining that the ward round takes place in a hospital ward, acquiring perception information (e.g., the first perception information) collected by one or more sensing devices (e.g., the first sensing devices) in the hospital ward during the ward round. See fig. 9 and the associated description for more content regarding the first perception information.
Step 1030, generating a virtual ward space based on the first perception information.
A virtual ward space is a virtual reality representation of a ward. It may be the result of virtualizing the people or objects in the real ward, and may include a virtual hospital bed, a virtual patient, virtual ward devices (e.g., a virtual electrocardiograph, a virtual blood pressure monitor, etc.), and so forth.
In some embodiments, when a doctor (e.g., doctor 271) remotely participates in a ward round, on-site ward-round personnel (e.g., doctors and nurses participating on site) are also present in the real ward corresponding to the virtual ward space. Accordingly, the first perception information may include image information and voice information from inside the physical ward, and the generated virtual ward space may further include virtual on-site ward-round personnel. A doctor (e.g., doctor 271) remotely participating in the ward round may perceive the virtual persons (e.g., the virtual on-site ward-round personnel, the virtual patient) and corresponding voice information (e.g., the patient's voice, the on-site personnel's voices, etc.) via the doctor terminal 270 (e.g., the second XR device 270-2).
In some embodiments, the processing device 210 may construct a three-dimensional model (i.e., the virtual ward space) corresponding to the real ward based on the first perception information (e.g., interior images of the ward) at a preset ratio (e.g., a volume ratio of objects in the virtual ward space to objects in the real ward of 1:100) using techniques such as three-dimensional modeling. For example, the processing device 210 may construct a three-dimensional model of the on-site ward-round personnel at a preset ratio (e.g., a volume ratio of the virtual personnel to the real personnel of 1:100), and may likewise construct three-dimensional models of the ward devices at a preset ratio (e.g., a volume ratio of the virtual ward device to the real ward device of 1:100).
In some embodiments, the processing device 210 may generate the virtual ward space based on patient data of a target patient (e.g., the patient for whom the doctor is to make a remote ward round) and the first perception information. For example, the processing device 210 may construct the virtual objects in the virtual ward space (e.g., a virtual hospital bed, a virtual patient, virtual ward devices, virtual on-site ward-round personnel, etc.) using the methods described above. On this basis, the processing device 210 may construct, based at least in part on the patient data (e.g., the patient's pathology data, medical examination data, etc.), a virtual display screen in the virtual ward space for presenting the patient data to the doctor remotely participating in the ward round.
Step 1040, controlling the doctor terminal to present the virtual ward space. For example, the processing device 210 may send the generated virtual ward space to the second XR device 270-2 and control the second XR device 270-2 to render it.
Fig. 11A is an interface diagram of an exemplary consultation service according to some embodiments of the present disclosure, fig. 11B is a presentation interface diagram of patient data of an exemplary consultation patient according to some embodiments of the present disclosure, and fig. 11C is a presentation interface diagram of prior multimodal data of an exemplary consultation patient according to some embodiments of the present disclosure.
In some embodiments, as shown in fig. 11A, the consultation service interface 7120 may include a first interface element 7121 (hereinafter referred to as the data access element 7121) for accessing patient data of patients who have reserved the consultation service. A doctor terminal (e.g., doctor terminal 270) may generate and send a request to the processing device 210 to access patient data of a target patient among those patients in response to the doctor's (e.g., doctor 271's) interaction with the data access element 7121 (e.g., clicking or voice-selecting the element). In response to the request, the processing device 210 may retrieve the target patient's data (e.g., from the storage device 230), send it to the doctor terminal (e.g., the doctor terminal 270), and then control the doctor terminal to present the patient data for the doctor to view.
In some embodiments, as shown in fig. 11B, the patient data presented by the doctor terminal 270 for the target patient may include the consultation date (e.g., December 29, 2022, as shown in fig. 11B), the patient number (ID), name (Name), primary physician (Primary provider), medical insurance time, contact phone number, the time of the last consultation, etc.
In some embodiments, the processing device 210 may obtain, from the doctor terminal, a request to access patient data of a target patient. The processing device 210 may generate a patient avatar (also referred to as a first avatar) representing the target patient based on the patient data of the target patient (e.g., a patient who has reserved the consultation service). The processing device 210 may cause the doctor terminal to present the patient avatar so that the avatar explains the target patient's data to the doctor. Further description of this embodiment may be found in fig. 12 and the related description.
In some embodiments, the consultation service interface 7120 may further include a first interface element 7122 (hereinafter referred to as the diagnostic record acquisition element 7122) for acquiring an initial diagnostic record related to the consultation service. A doctor terminal (e.g., doctor terminal 270) may generate and send a request to access the initial diagnostic record of a target patient to the processing device 210 in response to the doctor's (e.g., doctor 271's) interaction with the diagnostic record acquisition element 7122 (e.g., clicking or voice-selecting the element). In response to the request, the processing device 210 may retrieve the initial diagnostic record (e.g., from the storage device 230), send it to the doctor terminal (e.g., the doctor terminal 270), and then control the doctor terminal to present the record for the doctor to view.
In some embodiments, the initial diagnostic record is generated based on perception information (hereinafter referred to as second perception information) acquired by one or more sensing devices (also referred to as second sensing devices) in the consulting room during the consultation service.
The second perception information is information collected by the one or more sensing devices in the consulting room during the consultation service. For example, taking a sound sensor as the sensing device, the second perception information may be the voice information of the doctor and the patient in the consulting room during the consultation.
In some embodiments, the initial diagnostic record is an automatically generated diagnostic record. In some embodiments, the initial diagnostic record may include at least one of an initial patient medical record, an initial diagnostic opinion, an initial diagnostic prescription (e.g., an initial treatment prescription and an initial examination prescription), an initial medical order, and the like.
In some embodiments, the initial diagnostic record may be generated by an agent corresponding to the consultation service. The agent may learn a diagnostic-record generation mechanism from various types of data, such as diagnostic record templates, a knowledge dictionary, a knowledge database, and the like, and process the second perception information and the patient data based on that mechanism to generate the diagnostic record.
An agent is a program (e.g., a machine learning model with self-evolution capability) that can make decisions or provide services based on environmental information (e.g., the second perception information), user input (e.g., voice information entered by a doctor), or empirical data. Such programs may autonomously collect information and make decisions periodically, at scheduled times, or in real time when prompted by the user. In some embodiments, different types of agents may be configured in a computing device. For example, a general server side of a hospital may be deployed with hospital agents covering various in-hospital medical activities, including a consultation agent, hospitalization agents (e.g., a ward-round agent, a care agent, etc.), a surgery agent, a rehabilitation agent, a physical examination agent, etc., for making decisions, providing guidance, or providing services for a patient's consultation, hospitalization (e.g., ward rounds, daily care, etc.), surgery (including preoperative preparation, preoperative patient education, surgical execution, postoperative review, and surgical simulation), rehabilitation, physical examination, and so on. The surgery agent may include a preoperative preparation agent, a preoperative patient education agent, a surgical execution agent, a postoperative review agent, and a surgical simulation agent. As another example, agents may be divided by role, such as nurse agents and doctor agents. A nurse agent is an agent that completes part of a nurse's work in the nurse's place (e.g., preoperative patient education, postoperative care, etc.); a doctor agent is an agent that completes part of a doctor's work in the doctor's place.
In some embodiments, the processing device 210 may extract key content from the second perception information based on the diagnostic record template.
The diagnostic record template defines the format and content of the diagnostic record. For example, a diagnostic record template includes a plurality of template fields arranged in a particular format, with the template fields representing what is to be included in the diagnostic record. In some embodiments, the diagnostic record template may include template fields for the patient medical record (including the patient's basic information, descriptions of the patient's condition, physical examination data, etc.), diagnostic opinions, diagnostic prescriptions (e.g., treatment prescriptions and examination prescriptions), medical orders, and the like.
The key content is content related to the template fields in the diagnostic record template. In some embodiments, the second perception information includes a speech signal. Because the speech signal records the dialogue between the patient and the doctor, the key content extracted from it takes the form of natural language. Specifically, the processing device 210 may transcribe the speech signal into text and extract key content from the transcribed text based on the plurality of template fields in the diagnostic record template. For example, based on the template field "examination prescription", the key content "a CT on the leg" may be extracted from the transcribed text.
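The extraction step can be sketched with simple keyword matching; a deployed system would use a trained extractor or language model instead. The field names and trigger words are hypothetical:

```python
def extract_key_content(transcript, field_keywords):
    """Collect transcript sentences relevant to each template field.

    field_keywords maps a template field to lowercase trigger words; a real
    system would use a trained extractor rather than substring matching.
    """
    sentences = [s.strip() for s in transcript.split(".") if s.strip()]
    return {
        field: [s for s in sentences if any(k in s.lower() for k in keywords)]
        for field, keywords in field_keywords.items()
    }
```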
In some embodiments, the processing device 210 may extract key content from physical examination data collected by a physical examination device during the consultation. A physical examination device is a device for performing a physical examination of the patient in the consulting room, such as a blood pressure monitor; the physical examination data is the patient data it collects, e.g., the patient's blood pressure data when the device is a blood pressure monitor. For example, when the template field "blood pressure" is included in the diagnostic record template, the patient's blood pressure value may be extracted from the data collected by the blood pressure monitor as key content.
In some embodiments, the processing device 210 may convert the key content into professional content based on a knowledge dictionary.
The knowledge dictionary is a comparison dictionary between natural-language descriptions and professional-language descriptions. Specifically, the processing device 210 may use the key content as an index to retrieve the corresponding professional language from the knowledge dictionary as the professional content. For example, based on the knowledge dictionary, the key content "a CT on the leg" may be converted into professional content such as "lower-limb CT examination".
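The dictionary lookup can be sketched as follows; the example entries are illustrative, not part of any real department's dictionary:

```python
def to_professional(key_content, knowledge_dict):
    """Look up the professional-language description for natural-language
    key content; phrases absent from the dictionary pass through unchanged."""
    return knowledge_dict.get(key_content.lower().strip(), key_content)
```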
In some embodiments, the processing device 210 may convert the key content into professional content based on a term conversion model. The term conversion model is a machine learning model: the processing device 210 may process the key content using the term conversion model to obtain the professional content. The input of the term conversion model may include the key content, and the output may include the professional content.
In some embodiments, the professional content may be generated using a knowledge dictionary and/or term conversion model corresponding to the department with which the patient is registered, where different departments correspond to different knowledge dictionaries and/or term conversion models.
In some embodiments, the processing device 210 may populate the diagnostic record template based on the professional content and a knowledge database to generate the initial diagnostic record.
The knowledge database is a knowledge database of the registered department, which includes the department's visit specifications (such as disorder description specifications, diagnosis specifications, prescription specifications, medical order specifications, etc.). In some embodiments, the processing device 210 may evaluate and/or adjust the professional content based on the knowledge database so that it conforms to the department's visit specifications. Further, the processing device 210 may populate the corresponding template fields in the diagnostic record template with the evaluated or adjusted professional content to generate the initial diagnostic record. In some embodiments, the processing device 210 may further generate the initial diagnostic record based on the patient's electronic medical record. For example, the processing device 210 can search the electronic medical record for content corresponding to the template fields, evaluate and/or adjust that content against the knowledge database, and fill the result into the corresponding template fields in the diagnostic record template.
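The final population step can be sketched as follows, with the knowledge-database validation reduced to a pluggable check. All names are hypothetical, and `spec_check` stands in for whatever specification logic the department actually uses:

```python
def generate_initial_diagnostic_record(template_fields, professional_content, spec_check):
    """Populate the diagnostic record template with professional content that
    passes the department's visit-specification check.

    spec_check stands in for validation against the knowledge database and
    returns True when a value conforms to the department's specifications.
    """
    record = {}
    for field in template_fields:
        value = professional_content.get(field, "")
        # Non-conforming or missing content is left blank for the doctor.
        record[field] = value if spec_check(field, value) else ""
    return record
```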
In some embodiments, as shown in fig. 11C, the initial diagnostic record presented by the doctor terminal 270 may include the patient number (ID), name (Name), age (Age), a preliminary diagnostic result, examination images (e.g., the liver CT image in fig. 11C), electrocardiographic data, electroencephalogram data, a plurality of index examination results (e.g., glucose, BUN, etc. in fig. 11C), the normal range corresponding to each examination item (e.g., the normal ranges of glucose and BUN in fig. 11C), and so forth.
Fig. 12 is a flow chart illustrating exemplary viewing of patient data according to some embodiments of the present description. As shown in fig. 12, in some embodiments, the process 1200 may include the following steps. In some embodiments, the process 1200 may be performed by the processing device 210.
At step 1210, a request to access patient data of a target patient is obtained from the doctor terminal. The target patient may be a patient to be visited during a ward round, or a patient who has reserved the consultation service. Further description of requests to access patient data may be found in figs. 9 and 11A and their related descriptions.
Step 1220, generating, based on the patient data of the target patient, an avatar representing the target patient (hereinafter simply referred to as the patient avatar, also referred to as the first avatar).
The patient avatar is a virtual representation of the target patient. It may be used to present the patient's physical features (e.g., height, weight, body contour, etc.) and lesion conditions (e.g., lesion location, size, shape, etc.).
In some embodiments, the processing device 210 may employ techniques such as three-dimensional modeling to construct, based on the patient data of the target patient, a three-dimensional model (i.e., the patient avatar) corresponding to the target patient at a preset ratio (e.g., a volume ratio of the patient avatar to the target patient's real volume of 1:10).
Step 1230, controlling the doctor terminal to present the patient avatar so that it explains the patient data of the target patient to the doctor.
The processing device 210 may send the patient avatar 1221 to the second XR device 270-2 and control the second XR device 270-2 to present it. At the same time, the processing device 210 may control the patient avatar 1221 presented on the second XR device 270-2 to explain the target patient's data in a preset form (e.g., voice or video). For example, the patient avatar 1221 may describe, in voice form, the corresponding patient's basic information, historical diagnosis and treatment data, current symptoms, and the like.
In some embodiments, the processing device 210 may control the patient avatar to interact with the doctor by voice to answer the doctor's questions. For example, the doctor may ask a question after the explanation ends; the second XR device 270-2 may collect the doctor's question and send it to the processing device 210. The processing device 210 may perform speech recognition on the question, determine response information, send it to the second XR device 270-2, and control the patient avatar to express the response to the doctor. For example, if the doctor asks "how many surgeries has the patient undergone?", the processing device 210 may recognize the speech, look up the surgical records in the patient data to determine the number of surgeries, send that number to the second XR device 270-2, and control the patient avatar to tell the doctor the number by voice.
In some embodiments, the explanation content described above may be obtained by the processing device 210 processing the patient data of the target patient based on a language model. For example, the processing device 210 may obtain one or more template fields (e.g., patient basic information, symptoms, surgical records, medical examination reports, etc.) from a preset explanation content template, retrieve the content corresponding to each template field from the target patient's data, and generate corresponding voice or video information based on the retrieved content, thereby obtaining the explanation content.
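The template-driven drafting of explanation content can be sketched as follows; the field names are hypothetical, and a language model would be a separate downstream step that rewrites the draft into natural speech:

```python
def build_explanation(template_fields, patient_data):
    """Draft explanation content field by field; a language model could then
    rewrite these lines into natural speech for the patient avatar."""
    lines = [
        f"{field}: {patient_data[field]}."
        for field in template_fields
        if patient_data.get(field)  # skip fields absent from the patient data
    ]
    return " ".join(lines)
```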
In some embodiments, the processing device 210 may update the patient avatar's explanation content based on the patient's real-time data (e.g., the patient's real-time actions, voice, diagnostic information, etc.). For example, doctor 271 may ask the patient avatar a question via the second XR device 270-2, and the processing device 210 may forward the question to the first XR device worn by the corresponding target patient, obtain the target patient's real-time feedback, and control the patient avatar to answer the question based on that feedback.
Fig. 13 is a flow diagram of an exemplary remote interrogation shown in accordance with some embodiments of the present description. As shown in fig. 13, in some embodiments, the process 1300 may include the following steps. In some embodiments, the process 1300 may be performed by the processing device 210.
At step 1310, a request to enter the virtual consulting room to provide a remote consultation service to the target patient is obtained from the doctor terminal.
A virtual consulting room is a virtual reality consulting room. It may be the result of virtualizing objects in the real consulting room, and may include at least one of a virtual diagnostic bed, a virtual diagnostic chair, a virtual diagnostic device, and the like.
In some embodiments, the processing device 210 may employ techniques such as three-dimensional modeling to construct a three-dimensional model (i.e., the virtual consulting room) corresponding to the real consulting room at a preset ratio (e.g., a volume ratio of objects in the virtual room to objects in the real room of 1:100).
Step 1320, controlling the doctor terminal to present a three-dimensional patient model of the target patient.
A three-dimensional patient model (3D patient model) is a virtual model corresponding to the patient's whole body or a part of the body (e.g., the upper body). For example only, the processing device 210 may obtain an initial three-dimensional patient model from the patient's electronic medical record and update it based on the patient's real-time dynamic data and physiological data to obtain the three-dimensional patient model. Further, the processing device 210 may control the doctor terminal to present the three-dimensional patient model to the doctor; for example, the processing device 210 may control the second XR device 270-2 to display the three-dimensional patient model over the doctor's field of view. The doctor's field of view may be the real scene within the doctor's gaze range, or a virtual background (e.g., the virtual consulting room).
Step 1330, obtaining from the doctor terminal the physical examination instruction input by the doctor through interaction with the three-dimensional patient model.
A physical examination instruction is an instruction for examining the patient's vital signs, and may specify a physical examination part, a physical examination device, and a physical examination operation. The doctor can input examination instructions in various ways, such as voice, gestures, or operating an input device (e.g., smart gloves, a smart handle, etc.). For example, the second XR device 270-2 may present virtual examination devices corresponding to a variety of real examination devices; the doctor may pick up a virtual examination device through an input device and perform a virtual examination operation on the three-dimensional patient model with it. The processing device 210 may determine the examination part, examination device, examination operation, etc., based on the virtual examination operation performed by the doctor, thereby generating the physical examination instruction.
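The mapping from a virtual examination operation to a physical examination instruction can be sketched as follows. The device-to-part table and all field names are illustrative assumptions, not part of this disclosure:

```python
def build_examination_instruction(virtual_operation):
    """Derive (part, device, operation) from a virtual examination operation.

    The device-to-part mapping is an illustrative assumption; a real system
    would derive the part from where the virtual device touches the model.
    """
    device_to_part = {"blood_pressure_monitor": "upper arm", "stethoscope": "chest"}
    device = virtual_operation["device"]
    return {
        "part": virtual_operation.get("part") or device_to_part.get(device, "unspecified"),
        "device": device,
        "operation": virtual_operation["action"],
    }
```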
In addition to the examination instructions, the physician may also input other instructions, such as operation instructions that instruct the augmented reality device 270-2 to rotate, zoom in on, or zoom out of the three-dimensional patient model.
Step 1340, based on the physical examination instruction, controlling the wearable device worn by the target patient to acquire physical examination data of the target patient.
The wearable device may be a wearable physical examination device. For example, the wearable device may be a blood pressure meter, an electrocardiographic band, or the like. Based on this, the physical examination data of the target patient acquired by the wearable device may be blood pressure data, electrocardiographic data, and the like of the patient. For more description of the data, see fig. 11A and its related description.
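For illustration only, the routing of a physical examination instruction to a matching wearable device (step 1340) might be sketched as follows; the device names, instruction fields, and command format are hypothetical assumptions, not part of the disclosed system:

```python
# Hypothetical sketch of dispatching an examination instruction to a wearable
# device. All names and fields below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ExamInstruction:
    part: str       # e.g. "upper arm" (the physical examination part)
    device: str     # e.g. "blood_pressure_meter" (the examination equipment)
    operation: str  # e.g. "measure" (the examination operation)

# Assumed registry of wearable devices currently worn by the target patient.
WEARABLES = {
    "blood_pressure_meter": {"measures": "blood_pressure"},
    "ecg_band": {"measures": "electrocardiogram"},
}

def dispatch(instruction: ExamInstruction) -> dict:
    """Translate a doctor's examination instruction into a device command."""
    if instruction.device not in WEARABLES:
        raise ValueError(f"no wearable available for {instruction.device!r}")
    return {
        "device": instruction.device,
        "command": instruction.operation,
        "target_part": instruction.part,
    }

cmd = dispatch(ExamInstruction(part="upper arm",
                               device="blood_pressure_meter",
                               operation="measure"))
```

The returned command dict stands in for whatever control message the processing device 210 would actually send to the wearable device.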
Fig. 14 is an interface schematic diagram of an exemplary surgical execution service shown in accordance with some embodiments of the present description.
In some embodiments, as shown in fig. 14, the surgical service interface 7140 may include a first interface element 7141 (hereinafter referred to as the surgical related data access element 7141) for accessing patient data related to the target patient of the surgical execution (i.e., the patient on whom the surgery is performed). A doctor terminal (e.g., doctor terminal 270) may generate and transmit a request to access the patient data to the processing device 210 in response to a doctor's (e.g., doctor 271) interaction with the surgical related data access element 7141 (e.g., clicking or voice-selecting the surgical related data access element 7141). The processing device 210 may retrieve the patient data (e.g., from the storage device 230) and send it to the doctor terminal in response to the request, and then control the doctor terminal to present the patient data for viewing by the doctor.
In some embodiments, the patient data of the patient includes data related to a target surgical plan corresponding to the target patient, e.g., the patient data of the patient includes content of the target surgical plan. The target surgical plan refers to the finalized surgical plan (i.e., the surgical plan employed in performing the surgery on the patient). For example, the target surgical plan may be an optimal (e.g., least damaging to the patient) surgical plan selected from a plurality of surgical plans. For further description of patient data and surgical planning, see fig. 7 and its associated description.
In some embodiments, as shown in fig. 14, the surgical service interface 7140 may further include a first interface element 7142 for updating orders for the target patient (hereinafter order update element 7142).
Illustratively, a physician terminal (e.g., physician terminal 270) may generate and send a request to access the order of the target patient to the processing device 210 in response to interaction of a physician (e.g., physician 271) with the order update element 7142 (e.g., clicking or voice-selecting the order update element 7142). The processing device 210 may retrieve the order of the target patient (e.g., from the storage device 230) and send it to the physician terminal in response to the request, and then control the physician terminal to present the order for viewing by the physician. The physician terminal may then receive physician input (e.g., text input, voice input, etc.) of modification information for the order and send it to the processing device 210, and the processing device 210 may update the order of the target patient based on the modification information. For example, if the modification information entered by the doctor is "modify medication frequency to three times a day," the processing device 210 may update the medication frequency in the order to three times a day based on this information.
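As a hedged sketch of the order-update step, the following shows one way the modification information could be applied to an order record; the order fields and the free-text parsing rule are illustrative assumptions, not the disclosed implementation:

```python
# Minimal sketch: applying a doctor's free-text modification to an order
# record. The "modify <field> to <value>" parsing rule is an assumption.
import re

def update_order(order: dict, modification: str) -> dict:
    """Apply a modification such as
    'modify medication frequency to three times a day'."""
    updated = dict(order)
    m = re.match(r"modify (.+) to (.+)", modification)
    if not m:
        raise ValueError("unrecognized modification")
    field = m.group(1).strip().replace(" ", "_")
    updated[field] = m.group(2).strip()
    return updated

order = {"drug": "amoxicillin", "medication_frequency": "twice a day"}
order = update_order(order, "modify medication frequency to three times a day")
```

A production system would of course validate the modification against the order schema rather than trust free text; the sketch only mirrors the example in the text above.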
In some embodiments, as shown in fig. 14, the surgical service interface 7140 may further include a first interface element 7143 (hereinafter referred to as the surgical record access element 7143) for accessing an initial surgical record of the target patient's surgery. A doctor terminal (e.g., doctor terminal 270) may generate and send a request to access the initial surgical record to the processing device 210 in response to a doctor's (e.g., doctor 271) interaction with the surgical record access element 7143 (e.g., clicking or voice-selecting the surgical record access element 7143). The processing device 210 may retrieve the initial surgical record (e.g., from the storage device 230) and transmit it to the doctor terminal in response to the request, and then control the doctor terminal to present the initial surgical record for viewing by the doctor.
Initial surgical records refer to raw, recorded data of surgical related information and events occurring during the surgical procedure. For example, the initial procedure record may include procedure related information, patient related records, participant related records, and the like.
The operation-related information includes the operation start/end time, the operation duration, risks that occurred, the kind and number of surgical tools before and after the operation (to prevent a surgical tool from being left inside the patient), the kind and number of surgical consumables before and after the operation, and the like. For example, the operation-related information may be: the operation start time is 14:00, the operation end time is 17:12, and the operation duration is 3 hours and 12 minutes; the patient suffered major bleeding during the operation; the surgical tools before the operation included 2 scalpels and 2 hemostatic forceps; the surgical consumables before the operation included 2 blood transfusion bags (400 ml per bag), 1 dose of cardiotonic injection, and 5 bags of hemostatic gauze; after the operation, 0 blood transfusion bags remained (the blood transfusion bags were used up), 1 dose of cardiotonic injection remained (the cardiotonic injection was unused), and 1 bag of hemostatic gauze remained.
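The pre-/post-operative tool count check mentioned above (for preventing a tool from being left inside the patient) can be illustrated with a minimal sketch; the counts follow the example, while the function itself is hypothetical:

```python
# Illustrative sketch of reconciling pre- and post-operative tool counts.
# Only reusable tools are reconciled; consumables are expected to be used up.
def reconcile(pre_counts: dict, post_counts: dict) -> list:
    """Return tool types whose post-operative count does not match the
    pre-operative count, i.e. candidates for a retained-tool alert."""
    return [tool for tool, n in pre_counts.items()
            if post_counts.get(tool, 0) != n]

pre = {"scalpel": 2, "hemostatic_forceps": 2}
post = {"scalpel": 2, "hemostatic_forceps": 1}  # one forceps unaccounted for
missing = reconcile(pre, post)                  # -> ["hemostatic_forceps"]
```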
Patient-related records include patient anesthesia duration (the duration taken from beginning anesthesia to waking up), vital sign data of the patient during surgery (heart rate, blood pressure, respiratory rate, etc.), bleeding volume, etc. For example, the patient-related record may be in the form of a patient-related record table in which patient anesthesia duration (e.g., 2 hours), bleeding volume (e.g., 350 ml), and heart rate values, blood pressure values, respiratory rate values, etc. of the patient at various points in time during the procedure (e.g., every 10s during the procedure) are recorded.
The participant-related records include action records, force records, and standing-position records of the chief surgeon, as well as action records and standing-position records of the nursing staff (nurses). An action record includes the type of each action of a surgical participant (e.g., cutting, pressing, lifting) and the time corresponding to each action. A force record includes the magnitude of the force applied by a surgical participant at various moments in the surgical procedure (e.g., recorded every 1 s). A standing-position record includes the standing positions of a surgical participant at a plurality of moments during the surgical procedure. The action records can be obtained through action recognition equipment, and the force records can be obtained through force sensing equipment.
In some embodiments, the initial surgical record is generated based on sensory information (hereinafter referred to as third sensory information) acquired during the surgery by one or more sensory devices (also referred to as third sensory devices) in the operating room.
The third perception information may include all types of data during the surgical procedure, for example, participant (including doctor, nurse participating in the surgery) gesture data, image data in the operating room, voice data, etc.
In some embodiments, the processing device 210 may generate the initial surgical record based on the third perception information using a record generation model (e.g., a preset record template). Alternatively, the initial surgical record may be generated, based on the third perception information, by an agent (e.g., a surgical agent, a nurse agent) configured in the processing device.
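A minimal sketch of the preset-record-template variant of the record generation model might look as follows; the template text and the perception-information keys are assumptions for illustration only:

```python
# Hypothetical preset record template filled from perception information.
# Field names ("start", "end", ...) are illustrative assumptions.
TEMPLATE = ("Operation started at {start}, ended at {end}; "
            "anesthesia duration {anesthesia}; bleeding volume {bleeding} ml.")

def generate_initial_record(perception: dict) -> str:
    """Render an initial surgical record from summarized perception data."""
    return TEMPLATE.format(**perception)

record = generate_initial_record({
    "start": "14:00", "end": "17:12",
    "anesthesia": "2 hours", "bleeding": 350,
})
```

In the described system this filling step would be preceded by extracting the summary fields from raw gesture, image, and voice data, which the sketch does not attempt.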
Fig. 15 is a flow diagram of exemplary preoperative patient education shown in accordance with some embodiments of the present description. As shown in fig. 15, in some embodiments, the process 1500 may include the following steps. In some embodiments, the process 1500 may be performed by the processing device 210.
At step 1510, a request for preoperative patient education for the target patient is obtained.
Preoperative patient education refers to explaining preoperative matters to the patient, for example, explaining to the patient the content of the surgical plan, the execution process of the surgical plan at the patient's surgical site, the patient's postoperative rehabilitation process under the surgical plan, and the like.
In some embodiments, the request may be obtained from the second XR device and entered by the physician through interaction with third interface element 730.
Illustratively, the second XR device may generate and send to the processing device 210 a request for preoperative patient education for the target patient in response to the interaction of a physician (e.g., physician 271) with the third interface element 730 (e.g., clicking or voice selecting the third interface element 730).
At step 1520, in response to the request for preoperative patient education for the target patient, interpretation data for interpreting the surgical plan of the target patient is generated.
The interpretation data (also referred to as instructional material) is used to explain information related to the surgical plan, such as an explanation of the surgical plan itself, the execution process of the surgical plan at the patient's surgical site, and the patient's postoperative rehabilitation process under the surgical plan. In some embodiments, the interpretation data may comprise text, pictures, audio, or video.
In some embodiments, the instructional material may be generated based on a digital twin model (e.g., a three-dimensional anatomical model) of the surgical site of the patient. For example, the processing device 210 may simulate the procedure and the outcome of the surgery (e.g., incision size after surgery, etc.) based on the surgical plan on a three-dimensional anatomical model of the patient's surgical site.
In some embodiments, the instructional material comprises a surgical video. The surgical video illustrates the procedure of performing the surgery at the patient's surgical site based on the surgical plan. For example, taking an open surgery as the surgery type in the surgical plan, the surgical video shows the appearance of the surgical site before it is opened, the procedure of opening the surgical site with the scalpel, the procedure of excising the lesion, the incision suturing procedure, the appearance after suturing, and the like.
In some embodiments, the instructional material may reflect a patient's post-operative rehabilitation process. In some embodiments, the processing device 210 may generate text, pictures, audio or video, etc. to present the patient's post-operative rehabilitation process.
In some embodiments, the surgical video may further demonstrate the patient's post-operative rehabilitation process.
In some embodiments, the processing device 210 or the preoperative preparation module 430 may predict a patient's post-operative rehabilitation process based on patient data of the patient.
The rehabilitation process reflects the patient's postoperative vital signs and wound recovery progress. In some embodiments, the rehabilitation process may include the incision healing rate, whether the patient's vital signs are normal, whether complications exist, the predicted recovery time, and the like. For example, the rehabilitation process may be "incision healing rate of 1 mm/day, normal postoperative vital signs, no complications, predicted recovery time of 1 month".
In some embodiments, the processing device 210 or the preoperative preparation module 430 may process the patient data and the surgical plan with a rehabilitation prediction model to determine a rehabilitation process. The rehabilitation prediction model may be a machine learning model, the inputs of which may include patient data and surgical plans, and the outputs may include rehabilitation procedures.
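The rehabilitation prediction model described above is a trained machine learning model; the following stand-in merely mimics its input/output interface with hand-written rules, and every threshold and field name below is an illustrative assumption:

```python
# Rule-based stand-in for the rehabilitation prediction model. This is NOT
# the trained model the text describes; it only mimics the interface
# (patient data + surgical plan in, rehabilitation process out).
def predict_rehabilitation(patient_data: dict, surgical_plan: dict) -> dict:
    base_days = {"minimally_invasive": 14, "open": 30}[surgical_plan["type"]]
    if patient_data["age"] > 65:          # assumed age adjustment
        base_days = int(base_days * 1.5)
    return {
        "incision_healing_rate_mm_per_day": 1.0,
        "predicted_recovery_days": base_days,
        "complications_expected": False,
    }

process = predict_rehabilitation({"age": 40}, {"type": "open"})
```

The real model would instead be fitted to historical patient outcomes; the dict returned here simply mirrors the fields of the example rehabilitation process given above.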
Step 1530, controlling the first XR device worn by the target patient and the second XR device worn by the physician to simultaneously present the interpretation data to the target patient and the physician.
In some embodiments, processing device 210 may send and control presentation of the instructional material to a first XR device worn by the patient and a second XR device worn by the physician, respectively.
In some embodiments, processing device 210 may also send the instructional material to a third XR device worn by the patient's family member and control the third XR device to present the instructional material.
In some embodiments, after determining the surgical plan, the processing device 210 may further obtain the first confirmation instruction and/or the second confirmation instruction. The first confirmation instruction is an instruction entered by the patient through the first XR device regarding the surgical plan. The second confirmation instruction is an instruction regarding the surgical plan entered by the patient family through the third XR device. In response to receiving the first confirmation instruction and the second confirmation instruction, processing device 210 may cause the first XR device, the second XR device 270-2, and the third XR device to present surgical consent, respectively. Processing device 210 may obtain signature information for the surgical consent form from the first XR device, second XR device 270-2, and third XR device, respectively.
The first confirmation instruction and the second confirmation instruction carry confirmation, by the patient and the patient's family respectively, that the currently determined surgical plan is the final surgical plan (i.e., the target surgical plan). The confirmation instruction may be input by key press, gesture, voice, etc. For example, the patient may say "confirm the surgery plan" to the first XR device, and the patient's family may say "confirm the surgery plan" to the third XR device; the devices respectively acquire the voice signals through their microphones and send them to the processing device 210.
The surgical consent form refers to an electronic informed consent form for the surgical plan: the patient and the patient's family have communicated with the doctor to determine the optimal surgical plan, the doctor has informed the patient and the patient's family of the perioperative precautions, and the three parties need to sign the form synchronously.
In some embodiments, the patient, the physician, and the patient's family may view and sign the surgical consent form via the first XR device, the second XR device 270-2, and the third XR device, respectively. For example, the processing device 210 may generate signature marks (e.g., signed text or patterns) on the surgical consent form based on the signature information acquired from the first, second, and third XR devices. In some embodiments, the patient may input the signature information by fingerprint; the processing device 210 may verify the patient's identity based on the input fingerprint and generate the patient's signature information in response to successful identity verification. The patient may input the fingerprint through a fingerprint sensor of the first XR device or another terminal device (e.g., a touch screen device used by the patient).
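The synchronous tri-party signing flow can be sketched as follows; the party names and the notion of an "executed" form are assumptions made purely for illustration:

```python
# Illustrative sketch of tri-party signing of the electronic surgical
# consent form. Party names and the flow are hypothetical.
REQUIRED_SIGNERS = {"patient", "doctor", "family"}

class ConsentForm:
    def __init__(self):
        self.signatures = {}

    def sign(self, party: str, signature: str) -> None:
        if party not in REQUIRED_SIGNERS:
            raise ValueError(f"unknown party {party!r}")
        self.signatures[party] = signature

    @property
    def executed(self) -> bool:
        """The form takes effect only once all three parties have signed."""
        return set(self.signatures) == REQUIRED_SIGNERS

form = ConsentForm()
form.sign("patient", "sig_p")
form.sign("doctor", "sig_d")
incomplete = form.executed        # family has not signed yet
form.sign("family", "sig_f")
```

Identity checks (e.g., the fingerprint verification mentioned above) would happen inside `sign` in a real system; the sketch omits them.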
Fig. 16 is a flow diagram of an exemplary surgical simulation shown in accordance with some embodiments of the present description. As shown in fig. 16, in some embodiments, the process 1600 may include the following steps. In some embodiments, the process 1600 may be performed by the processing device 210.
At step 1610, a request to simulate a target procedure is obtained.
In some embodiments, the request may be obtained from the second XR device 270-2 and entered by the physician through interaction with the fourth interface element 740.
Illustratively, the second XR device may generate and send a request to simulate the target procedure to the processing device 210 in response to an interaction of a physician (e.g., physician 271) with the fourth interface element 740 (e.g., clicking or voice selecting the fourth interface element 740).
Step 1620, in response to the request for simulating the target surgery, generating a virtual surgery scene corresponding to the target surgery.
The virtual surgical scene may be generated based on digital twinning techniques and presented to the physician 271. The virtual surgical scene may be a virtual form of an actual operating room.
The virtual surgical scene includes a virtual surgical site and one or more virtual surgical devices.
Virtual surgical equipment refers to the virtual form of the real surgical equipment that is used as needed for surgery. For example, virtual surgical devices may include virtual scalpels, virtual hemostats, virtual implants, virtual blood transfusion bags, virtual hemostatic gauze, and the like.
The virtual surgical site refers to the virtual form of the site of the patient where surgery is desired. For example, the virtual surgical site may be a three-dimensional anatomical model of the patient's chest cavity and organs and tissues within the chest cavity.
In some embodiments, the processing device 210 may generate a virtual surgical scene for surgical simulation based on the surgical plan of the target surgery. For example, the processing device 210 may determine the surgical site based on the surgical plan and construct a three-dimensional anatomical model corresponding to the surgical site at a preset ratio (e.g., a volume ratio of the actual surgical site to the three-dimensional anatomical model of 1:1) using techniques such as three-dimensional modeling. The processing device 210 may determine the model and specification of the target implant based on the surgical plan, and select a target implant model from a database or determine a personalized target implant model (i.e., a virtual target implant). The processing device 210 may also determine the target surgical devices to be used based on the surgical plan, and construct three-dimensional models of the target surgical devices at a preset ratio (e.g., a 1:1 ratio of the real surgical device to its three-dimensional model).
Step 1630, controlling the second XR device to present the virtual surgical scene to the physician.
In some embodiments, processing device 210 may send the generated virtual surgical scene to the second XR device and control the second XR device to render the virtual surgical scene.
Step 1640, obtain interactive instructions regarding the virtual surgical device entered by the physician via the second XR device or an interactive device corresponding to the virtual surgical device.
The interaction device corresponding to the virtual surgical device refers to a device for perceiving the behavior of a wearer (e.g., doctor), for example, the interaction device may be a perceiving wearable device. The sensing wear device may sense an action or gesture of a wearer (e.g., doctor), e.g., the sensing wear device may include a sensing glove, a sensing bracelet, a sensing garment, etc.
The interactive instructions reflect the physician's operational data on the virtual surgical device. For example, the operation data for the virtual surgical device may include data of a virtual surgical device type used by the doctor 271, a fixed point position for the virtual surgical device, a moving direction, a moving amplitude (e.g., a moving distance and angle), and the like.
In some embodiments, the interaction instruction may be a voice instruction, or may be an operation instruction (e.g., a key input, a gesture input, etc.) input by the doctor through the second XR device or the interaction device (e.g., a perception wearable device). By way of example, the interactive instructions may be instructions by the doctor 271 to pick up, put down, or move the virtual surgical device through the perception wearable device 271-3.
In some embodiments, doctor 271 (e.g., a surgeon and/or medical student) can perform a simulated procedure in a virtual surgical environment by wearing second XR device 270-2 to enter the virtual surgical environment, performing an operation on the three-dimensional anatomical model of the patient in the virtual surgical scene via second XR device 270-2 or an interactive device (e.g., a perception wearable device) corresponding to the virtual surgical device.
In some embodiments, doctor 271 (e.g., a surgeon and/or medical student) can perform an operation on the three-dimensional anatomical model of the patient in the virtual surgical scene using real simulated instruments (e.g., a scalpel, an operating table, forceps) in physical space, thereby performing a simulated operation.
Step 1650, updating the virtual surgical site and the virtual surgical device in the virtual surgical scene based on the interactive instructions.
Updating the virtual surgical site and the virtual surgical device in the virtual surgical scene refers to updating the presented form or position of the virtual surgical site and the virtual surgical device based on the content of the interaction instruction after the interaction instruction is acquired. For example, take the interactive instruction "lower the knife tip until it contacts the skin, then cut 1 cm down into the skin, then cut 3 cm to the left": the processing device 210 may present a virtual incision with a depth of 1 cm and a length of 3 cm along the left-right direction at the skin surface of the virtual surgical site. Assuming the virtual scalpel was 5 cm above the skin of the virtual surgical site before the update, after the update the virtual scalpel has moved down 6 cm and left 3 cm compared with its position before the update.
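The position update in the example above can be checked numerically with a small sketch; the 2-D simplification and the coordinate convention (+y downward, +x to the left) are assumptions:

```python
# Numeric sketch of the scalpel-position update from the example: the knife
# starts 5 cm above the skin, descends to contact, cuts 1 cm deep, then
# moves 3 cm to the left. Axes (+y down, +x left) are an assumed convention.
def apply_instruction(pos, steps):
    """Apply a list of (dx, dy) displacements in cm to a 2-D position."""
    x, y = pos
    for dx, dy in steps:
        x, y = x + dx, y + dy
    return (x, y)

start = (0.0, 0.0)                     # knife tip, 5 cm above the skin
steps = [(0, 5), (0, 1), (3, 0)]       # descend 5, cut 1 deep, cut 3 left
end = apply_instruction(start, steps)  # net displacement: 6 cm down, 3 cm left
```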
In some embodiments, updating the virtual surgical site and the virtual surgical device in the virtual surgical scene based on the interactive instructions includes determining an emergency situation that may occur in the virtual surgical scene based on the interactive instructions, and updating the virtual surgical site and/or the virtual surgical device in the virtual surgical scene based on that emergency situation. An emergency situation refers to a sudden adverse condition that may occur during surgery, e.g., bleeding at the virtual surgical site. In some embodiments, the processing device 210 may process the interaction instructions using an emergency determination model to determine an emergency situation. The emergency determination model may be a machine learning model whose input may be the surgical plan and the interactive instructions, and whose output may be whether an emergency is caused and/or the type of emergency (e.g., bleeding, blood pressure drop, cardiac arrest).
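The emergency determination model is described as a machine learning model; the following rule-based stand-in only illustrates its interface, and the depth threshold and field names are assumptions:

```python
# Rule-based stand-in for the emergency determination model. Real inputs
# would be the full surgical plan and interaction instruction; here both
# are reduced to a single assumed depth field for illustration.
def determine_emergency(surgical_plan: dict, instruction: dict):
    """Return an emergency type, or None if the instruction looks safe."""
    if instruction["cut_depth_cm"] > surgical_plan["max_safe_depth_cm"]:
        return "bleeding"
    return None

plan = {"max_safe_depth_cm": 1.5}
event = determine_emergency(plan, {"cut_depth_cm": 2.0})  # too deep
safe = determine_emergency(plan, {"cut_depth_cm": 1.0})   # within bounds
```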
Fig. 17 is a flow diagram illustrating an exemplary execution of a surgical plan according to further embodiments of the present description. As shown in fig. 17, in some embodiments, the process 1700 may include the following steps. In some embodiments, the process 1700 may be performed by the processing device 210.
Step 1710, a request to perform a surgical plan for a target patient is obtained.
In some embodiments, the above-described request to perform a surgical plan for the target patient may be obtained from a physician terminal (e.g., physician terminal 270) entered by a physician through interaction with fifth interface element 750.
Illustratively, doctor terminal 270 may generate and send to processing device 210 a request to perform a surgical plan on a target patient in response to a doctor's (e.g., doctor 271) interaction with fifth interface element 750 (e.g., clicking or voice-selecting fifth interface element 750).
In some embodiments, the processing device 210, upon obtaining a request to perform a surgical plan on a target patient, further obtains patient data for the target patient (e.g., recalled from the storage device 230 or UHR).
At step 1720, a surgical difficulty coefficient is determined based on patient data of the target patient.
The surgical difficulty coefficient reflects the degree of surgical difficulty. The surgical difficulty coefficient may be characterized based on the surgical difficulty coefficient value. By way of example, the surgical difficulty coefficient value may be an integer in the range of [1,10], with a larger value representing a higher surgical difficulty coefficient (i.e., the more difficult the surgery).
The manner of determining the surgical difficulty coefficient may include manual determination and/or intelligent determination.
Manual determination refers to determining, by a difficulty factor determining person (e.g., a doctor expert), a surgical difficulty factor from patient data. Illustratively, the doctor may determine the surgical difficulty factor based on the surgical simulation or historical surgical experience through the worn second terminal device or the intelligent display terminal of the workstation.
The intelligent determination means that the coefficient determination model is utilized to process the patient data and determine the operation difficulty coefficient. The coefficient determination model may be a machine learning model, the input of the coefficient determination model may include patient data, and the output may include a surgical difficulty coefficient value. In some embodiments, the coefficient determination model may be integrated in the processing device 210. After determining the surgical difficulty coefficient using the coefficient determination model, the processing device 210 may display the surgical difficulty coefficient output to the doctor through an interaction device (e.g., a second terminal device, an intelligent display terminal, etc.) of the doctor workstation. Further, the doctor may input modification information of the surgical difficulty coefficient through an interaction device (for example, a microphone, a gesture sensor, a touch screen, etc. on a second terminal device worn by the doctor) to modify the surgical difficulty coefficient determined by the coefficient determination model (for example, increase the surgical difficulty coefficient value determined by the coefficient determination model by 2 as a final surgical difficulty coefficient). In some embodiments, to ensure accuracy of the surgical difficulty factor, the processing device 210 may combine intelligent determination with manual determination. For example, the processing device 210 may first make an intelligent determination and send the results of the intelligent determination to a coefficient determination personnel, who may confirm or modify the results of the intelligent determination to determine the final surgical difficulty coefficient.
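Combining the intelligent determination with the doctor's manual modification (e.g., raising the model's value by 2, as in the example above) might be sketched as follows, with a clamp to the documented [1, 10] range; the function and its signature are hypothetical:

```python
# Hypothetical sketch: combine the coefficient determination model's output
# with the doctor's modification, clamped to the documented [1, 10] range.
def final_difficulty(model_value: int, doctor_delta: int = 0) -> int:
    return max(1, min(10, model_value + doctor_delta))

coeff = final_difficulty(6, doctor_delta=2)  # doctor raises the value to 8
```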
Step 1730, determine if expert meetings are required based on the surgical difficulty coefficient.
In some embodiments, processing device 210 may determine whether the surgical difficulty factor is greater than a difficulty factor threshold to determine whether an expert conference needs to be held. Responsive to determining that the surgical difficulty coefficient is greater than the difficulty coefficient threshold, step 1740 is performed, and responsive to determining that the surgical difficulty coefficient is not greater than the difficulty coefficient threshold, processing device 210 directly generates a surgical plan (e.g., generates a surgical plan in the flow described in FIG. 7). The difficulty coefficient threshold may be manually preset or automatically determined by the system.
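The threshold check that decides whether an expert conference is needed can be sketched as follows; the threshold value of 7 is an assumption, since the text only says the threshold may be preset manually or determined automatically:

```python
# Sketch of the conference-triggering check. The coefficient is documented
# as an integer in [1, 10]; the threshold value here is an assumption.
DIFFICULTY_THRESHOLD = 7

def needs_expert_conference(difficulty: int) -> bool:
    if not 1 <= difficulty <= 10:
        raise ValueError("difficulty coefficient must be in [1, 10]")
    return difficulty > DIFFICULTY_THRESHOLD
```

When the check returns `False`, the flow proceeds directly to surgical plan generation, matching the branch described above.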
Step 1740, in response to determining that an expert conference is required, controlling the physician terminal to present a sixth interface element 760 for initiating the expert conference.
In some embodiments, a doctor terminal (e.g., doctor terminal 270) may generate and send a request to open an expert conference to processing device 210 in response to a doctor's (e.g., doctor 271) interaction with sixth interface element 760 (e.g., clicking or voice selecting sixth interface element 760). Processing device 210 may issue meeting invitations to terminal devices (e.g., XR devices) used by other meeting participants (e.g., remote specialists, other department doctors) in response to the requests. In response to receiving confirmation of the meeting information sent back by the meeting participant's terminal device, the processing device 210 may generate a virtual meeting space and control the terminal device used by the meeting participant (including doctors, remote specialists, or other department doctors, etc. interacting with the sixth interface element 760) to present the virtual meeting space.
The virtual conference space refers to a virtual conference scene presented by an augmented reality device (e.g., second XR device 270-2).
Conference participants of the expert conference may enter the virtual conference space through respective worn interactive devices (e.g., fourth XR devices), and may then view patient data presented in the virtual conference space (e.g., three-dimensional anatomical models of the patient), and share respective views in real-time (e.g., conduct a voice discussion in the virtual conference space based on the respective worn interactive devices), and determine a final surgical plan.
In some embodiments, a doctor (e.g., doctor 271) can determine whether an expert conference is required to be held based on patient data. If the doctor determines that an expert conference is required, the doctor may initiate a request to hold the expert conference via doctor terminal 270 (e.g., second XR device 270-2) and send to processing device 210. The processing device 210 may control the doctor terminal 270 to present the sixth interface element 760 for initiating the expert conference in response to the above-described request for the expert conference. The manner in which the doctor initiates the request to hold the expert conference may be to interact with the doctor terminal 270, for example, to interact with the virtual character 520 shown in fig. 5 in voice.
In some embodiments, processing device 210 may control the second XR device of the physician and the fourth XR device of the remote specialist to present virtual conference spaces, respectively, in response to the request to hold the specialist conference, obtain perception information (also referred to as fifth perception information) collected by the second XR device of the physician and the fourth XR device of the remote specialist, and generate a surgical plan for the target patient based on the patient data of the target patient and the fifth perception information. In some embodiments, the processing device 210 may determine the surgical plan for the target patient by other means. For example, processing device 210 may generate an initial surgical plan based on patient data of a target patient, present the initial surgical plan to a physician via a second XR device, and generate the surgical plan based on the initial surgical plan and feedback information regarding the initial surgical plan entered by the physician via the second XR device.
In some embodiments, the processing device 210 may process the surgical plan and at least a portion of the patient data through a risk assessment model (a trained machine learning model) to generate a risk assessment result for the surgical plan, determine risk precautions based on the risk assessment result, and present the risk assessment result and the risk precautions of the surgical plan to the physician.
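The risk-assessment step described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the model, feature names, score thresholds, and precaution lists are all hypothetical.

```python
# Hypothetical sketch of the risk-assessment step: a trained model scores a
# surgical plan against patient data, and precautions are looked up from the
# resulting risk level. All names and thresholds are illustrative.

def assess_risk(surgical_plan: dict, patient_data: dict, model) -> dict:
    # Combine plan and patient attributes into a single feature vector.
    features = [
        patient_data.get("age", 0),
        patient_data.get("bmi", 0.0),
        len(surgical_plan.get("steps", [])),
        surgical_plan.get("estimated_duration_min", 0),
    ]
    score = model.predict(features)  # trained ML model returns a score in [0, 1]

    # Map the score to a risk level and corresponding precautions.
    if score >= 0.7:
        level, precautions = "high", ["reserve ICU bed", "prepare blood products"]
    elif score >= 0.3:
        level, precautions = "medium", ["extended intraoperative monitoring"]
    else:
        level, precautions = "low", ["standard perioperative care"]
    return {"score": score, "level": level, "precautions": precautions}


class DummyModel:
    """Stand-in for a trained risk-assessment model (illustrative only)."""
    def predict(self, features):
        # Toy scoring rule; a real model would be trained on historical data.
        return min(1.0, 0.01 * features[0] + 0.05 * len(features))

result = assess_risk(
    {"steps": ["incision", "resection"], "estimated_duration_min": 120},
    {"age": 65, "bmi": 27.4},
    DummyModel(),
)
# result["level"] -> "high"
```

In practice, the result dictionary would then be rendered to the physician via the doctor terminal alongside the surgical plan.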
Fig. 18 is a flowchart of an exemplary process for updating an initial admission record according to some embodiments of the present description. As shown in fig. 18, in some embodiments, the process 1800 may include the following steps. In some embodiments, the process 1800 may be performed by the processing device 210.
At step 1810, a request to access an initial admission record for a target patient is obtained.
In some embodiments, the above-described request to access the initial admission record of the target patient may be obtained from a physician terminal (e.g., the physician terminal 270); the request is entered by the physician through interaction with the seventh interface element 770.
Illustratively, the doctor terminal 270 may generate and send a request to access the initial admission record of the target patient to the processing device 210 in response to the interaction of a doctor (e.g., doctor 271) with the seventh interface element 770 (e.g., clicking or voice-selecting the seventh interface element 770).
An initial admission record refers to recorded data of information about a patient at the time the patient begins a hospital stay. The initial admission record may include patient basic information (e.g., the patient's sex, age, height, weight, etc.), inquiry records (including the doctor's questions during the inquiry and the patient's replies), physical examination data, medical examination data (including blood tests, biochemical tests, urine tests, immunological tests, microbiological tests, allergen tests, and imaging examinations (e.g., CT scans, MR scans, PET scans, ultrasound scans, etc.)), preliminary diagnosis results, etc.
Patient basic information, physical examination data and medical examination data may be obtained from a unified health record (Unified Health Records, UHR) or may be supplemented by a physician. For example, personal information of a patient (e.g., after first registration entry), physical examination reports, medical examination reports, etc. may be stored in a folder (e.g., a folder named by patient number, name, etc.) in the UHR that corresponds to the patient.
The inquiry records may be acquired at the time of inquiry by a sound sensor (e.g., microphone) configured at a doctor terminal (e.g., doctor terminal 270).
The preliminary diagnostic result may be determined by the doctor based on the inquiry records, physical examination data, and medical examination data.
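The initial admission record described above might be represented as a simple data structure along the following lines; the field names are illustrative assumptions for this sketch, not part of the disclosure.

```python
# Illustrative data structure for an initial admission record; field names
# are hypothetical and chosen to mirror the categories described above.
from dataclasses import dataclass, field

@dataclass
class InitialAdmissionRecord:
    patient_basic_info: dict                               # sex, age, height, weight, ...
    inquiry_records: list = field(default_factory=list)    # doctor questions + patient replies
    physical_exam_data: dict = field(default_factory=dict)
    medical_exam_data: dict = field(default_factory=dict)  # blood, urine, imaging, ...
    preliminary_diagnosis: str = ""

# Basic information might come from the UHR; inquiry records accumulate
# during the consultation.
record = InitialAdmissionRecord(
    patient_basic_info={"sex": "F", "age": 52, "height_cm": 163, "weight_kg": 58},
)
record.inquiry_records.append(
    {"question": "Any chest pain?", "answer": "Occasionally at night."}
)
```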
Step 1820, in response to a request to access an initial admission record for a target patient, controlling the doctor terminal to present the initial admission record.
Illustratively, the processing device 210 may retrieve (e.g., retrieve from the storage device 230) and transmit the initial admission record 1821 to a doctor terminal (e.g., the doctor terminal 270) in response to a request to access the initial admission record for the target patient, and then control the doctor terminal (e.g., the doctor terminal 270) to present the initial admission record 1821 for viewing by the doctor. For example, the initial admission record 1821 described above may be presented for viewing by the physician via the second XR device 270-2 or a display large screen of the physician's office.
In step 1830, the initial admission record is updated based on feedback information of the initial admission record entered by the doctor through the doctor terminal.
Feedback information refers to feedback on missing information or altered information in the initial admission record. Missing information refers to information that the physician wants to obtain from the initial admission record but that is not recorded in it. Altered information refers to information that is recorded incorrectly in the initial admission record and needs to be adjusted.
Illustratively, the physician terminal 270 may receive and transmit feedback information of the initial admission record entered by the physician (e.g., text input, voice input, etc.) to the processing device 210, and the processing device 210 may update the initial admission record of the target patient based on the feedback information. For example, if the feedback information entered by the doctor is "lack of abdominal CT images of the patient in the initial admission record", the processing device 210 may supplement the abdominal CT images of the patient into the initial admission record according to this information to update the initial admission record.
In some embodiments, the processing device 210 may update the initial admission record through steps 1831-1834.
In step 1831, query content of the supplemental query is determined based on the feedback information.
A supplemental query refers to the process of communicating with the patient to obtain information that needs to be supplemented or corrected in the initial admission record. The query content of a supplemental query may include questions that the patient needs to answer during the query.
In some embodiments, the processing device 210 may determine the corresponding query content based on the content of the missing information or the altered information in the feedback information. For example, taking the feedback information "lack of family genetic medical history of patient in initial admission record" as an example, the query content of the corresponding supplemental query may be "Does anyone in your family have a hereditary medical history?"
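Step 1831 can be sketched as a lookup from feedback topics to question text. The mapping below is a hypothetical example; a real system might instead use a language model or clinical knowledge base to generate questions.

```python
# Hypothetical sketch of step 1831: map the missing/altered item named in the
# doctor's feedback to the question text of a supplemental query.

QUERY_TEMPLATES = {
    "family genetic medical history": "Does anyone in your family have a hereditary medical history?",
    "allergy history": "Are you allergic to any medication or food?",
    "smoking history": "Do you smoke, and if so, for how long?",
}

def determine_query_content(feedback: str) -> list:
    # Collect a question for every known topic mentioned in the feedback text.
    return [q for topic, q in QUERY_TEMPLATES.items() if topic in feedback]

questions = determine_query_content(
    "lack of family genetic medical history of patient in initial admission record"
)
# questions -> ["Does anyone in your family have a hereditary medical history?"]
```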
At step 1832, a third terminal device arranged in the hospital ward where the target patient is located is controlled to perform the supplemental query based on the query content.
The third terminal device may be a terminal device provided by the hospital for the patient's use in the ward during hospitalization. The third terminal device may be an XR device, a mobile device, a presentation device, or the like, or any combination thereof. In some embodiments, the third terminal device may be part of a hospital bed.
In some embodiments, the processing device 210 may conduct the supplemental query by presenting the query content through the third terminal device. For example, the processing device 210 may present the query content to the target patient in text form on the screen of the third terminal device. As another example, the processing device 210 may broadcast the query content to the target patient through a speaker on the third terminal device.
In some embodiments, if the third terminal device is an XR device worn by the target patient, the processing device 210 may control the third terminal device to present a virtual query role when performing the supplemental query. A virtual query role refers to a computer-generated virtual character or virtual object intended to interact with the patient in an extended-reality environment. The virtual query role may be configured to conduct the supplemental query by communicating with the target patient. For example, the virtual query role may be a digital character having certain appearance characteristics, acoustic characteristics, and the like. By way of example only, the processing device 210 may cause the XR device worn by the target patient to present the virtual query role and cause the virtual query role to express the query content of the supplemental query to the target patient. Meanwhile, the virtual query role can simulate human facial expressions, movements, and the like while speaking, providing a near-realistic communication experience for the patient.
At step 1833, perception information (also referred to as fourth perception information) acquired during the supplemental query by one or more sensing devices (also referred to as fourth sensing devices) in the hospital ward is obtained.
The fourth perception information refers to information acquired by one or more sensing devices (e.g., microphones) in the patient's ward during the supplemental query. For example, the fourth perception information may be voice information uttered by the target patient, acquired by the microphone on the third terminal device during the supplemental query.
At step 1834, the initial admission record is updated based on the fourth perception information.
In some embodiments, the processing device 210 may determine the patient's answer content to the query content based on the fourth perception information. The processing device 210 may update the initial admission record 1821 based on the answer content.
Illustratively, still taking the query content "Does anyone in your family have a hereditary medical history?" as an example, the processing device 210 may perform voice recognition on the patient's voice information in the fourth perception information, determine the sentence related to genetic disease in the voice information, and determine the answer content based on that sentence. For example, if the content related to genetic disease in the voice information is "no one in my family has a genetic disease," the processing device 210 may determine that the answer content is "no family genetic history." Based on this, the processing device 210 may, according to the answer content, supplement the initial admission record with a record that the target patient has no familial genetic disease, or modify the original genetic-disease-related record in the initial admission record to "no genetic disease."
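Steps 1833-1834 can be sketched as extracting an answer from the recognized speech and writing it back into the record. The keyword rules and record layout below are illustrative assumptions; a production system would use proper speech recognition and clinical NLP.

```python
# Hypothetical sketch of steps 1833-1834: derive answer content from the
# recognized patient speech and update the admission record accordingly.
# Keyword rules and field names are illustrative only.

def extract_answer(recognized_speech: str) -> str:
    text = recognized_speech.lower()
    if "genetic" in text or "hereditary" in text:
        # A negation anywhere in the sentence is treated as a negative answer.
        if "no" in text or "none" in text:
            return "no family genetic history"
        return "family genetic history reported"
    return "unrecognized answer"

def update_record(record: dict, answer: str) -> dict:
    # Supplement (or overwrite) the relevant field of the admission record.
    record["family_genetic_history"] = answer
    return record

record = {"patient_id": "P001"}
speech = "No one in my family has a genetic disease."
updated = update_record(record, extract_answer(speech))
# updated["family_genetic_history"] -> "no family genetic history"
```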
Fig. 19 is a flow chart of an exemplary method of assisting a physician in a task according to some embodiments of the present description. As shown in fig. 19, in some embodiments, the process 1900 may include the following steps. In some embodiments, the process 1900 may be performed by a physician terminal (e.g., physician terminal 270).
At step 1910, an access request to a doctor space application entered by a doctor is received.
The above access request refers to a doctor's call request for the medical space application. The doctor may enter the access request to the medical space application in a number of ways. For example, the medical space application may be displayed on a display screen of the mobile terminal 270-1, and the doctor 271 may generate an access request by clicking on the medical space application or by entering a voice request to access it (e.g., the doctor 271 says "launch the medical space application"). As another example, the second XR device 270-2 may present a virtual-reality medical space application, and the doctor 271 may generate the access request by clicking on the virtual-reality medical space application or by entering the above voice request.
At step 1920, an access request to the medical space application is sent to a processing device (e.g., the processing device 210).
At step 1930, instructions are received from the processing device (e.g., the processing device 210) to present an interactive interface through the medical space application.
In response to receiving the above-described instruction to present the interactive interface through the medical space application, the doctor terminal 270 may present the interactive interface through the medical space application.
In some embodiments, the interactive interface includes a first interface element for obtaining assistance services related to at least one of the one or more tasks to be completed by the physician. For example, the interactive interface includes a first interface element for remote ward rounds, a first interface element for obtaining patient data for patients who have subscribed to the inquiry service, a first interface element for entering a virtual office, a first interface element for obtaining patient data related to a target patient, and so forth. For further description of the first interface element, see FIG. 7 and its associated description.
In some embodiments, a processing device (e.g., the processing device 210) determines the one or more tasks to be completed based on the time at which the processing device received the access request and the doctor's schedule information. Further description of this embodiment may be found in fig. 4 and the related description.
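The task-determination step can be sketched as filtering the doctor's schedule by the request's receipt time; the schedule entry format below is a hypothetical assumption for illustration.

```python
# Hypothetical sketch: determine tasks still to be completed by comparing the
# access request's receipt time with the doctor's schedule entries.
from datetime import datetime

def pending_tasks(schedule: list, received_at: datetime) -> list:
    # A schedule entry is assumed to be {"task": ..., "end": datetime}.
    # Any entry whose end time has not yet passed is still to be completed.
    return [e["task"] for e in schedule if e["end"] > received_at]

schedule = [
    {"task": "morning ward round", "end": datetime(2024, 9, 30, 10, 0)},
    {"task": "outpatient consultations", "end": datetime(2024, 9, 30, 12, 0)},
    {"task": "surgery for patient A", "end": datetime(2024, 9, 30, 17, 0)},
]
todo = pending_tasks(schedule, datetime(2024, 9, 30, 11, 30))
# todo -> ["outpatient consultations", "surgery for patient A"]
```

The resulting task list would then drive which first interface elements the interactive interface presents.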
Some embodiments of the present description also provide a system comprising at least one storage medium comprising a set of instructions, and one or more processors in communication with the at least one storage medium. Wherein the one or more processors are configured to perform the methods (e.g., flow 400 through flow 1900) as described above when executing the instructions.
Some embodiments of the present description also provide a computer-readable storage medium storing computer instructions that, when read by a computer, perform a method as described above (e.g., flow 400 through flow 1900).
While the basic concepts have been described above, it will be apparent to those skilled in the art that the foregoing detailed disclosure is by way of example only and is not intended to be limiting. Although not explicitly stated herein, various modifications, improvements, and adaptations of the present disclosure may occur to those skilled in the art. Such modifications, improvements, and adaptations are suggested within this specification and are therefore intended to fall within the spirit and scope of the exemplary embodiments of this specification.
Meanwhile, the specification uses specific words to describe the embodiments of the specification. Reference to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic is associated with at least one embodiment of the present description. Thus, it should be emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various positions in this specification are not necessarily referring to the same embodiment. Furthermore, certain features, structures, or characteristics of one or more embodiments of the present description may be combined as suitable.
Furthermore, the order in which the elements and sequences are processed, the use of numerical letters, or other designations in the description are not intended to limit the order in which the processes and methods of the description are performed unless explicitly recited in the claims. While certain presently useful inventive embodiments have been discussed in the foregoing disclosure, by way of various examples, it is to be understood that such details are merely illustrative and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements included within the spirit and scope of the embodiments of the present disclosure. For example, while the system components described above may be implemented by hardware devices, they may also be implemented solely by software solutions, such as installing the described system on an existing server or mobile device.
Likewise, it should be noted that, in order to simplify the disclosure in this specification and thereby aid in understanding one or more inventive embodiments, various features are sometimes grouped together in a single embodiment, figure, or description thereof. This method of disclosure does not imply that the subject matter of the present description requires more features than are set forth in the claims. Indeed, claimed subject matter may lie in less than all features of a single disclosed embodiment.
In some embodiments, numbers describing quantities of components and attributes are used; it should be understood that such numbers used in the description of embodiments are modified in some examples by the modifiers "about," "approximately," or "substantially." Unless otherwise indicated, "about," "approximately," or "substantially" indicates that the number allows for a variation of ±20%. Accordingly, in some embodiments, the numerical parameters set forth in the specification and claims are approximations that may vary depending upon the desired properties sought by individual embodiments. In some embodiments, numerical parameters should take into account the specified significant digits and employ a general method of preserving digits. Although the numerical ranges and parameters used to confirm the breadth of ranges in some embodiments of this specification are approximations, in particular embodiments such numerical values are set as precisely as practicable.
Each patent, patent application publication, and other material, such as an article, book, specification, publication, or document, referred to in this specification is hereby incorporated by reference in its entirety. Excepted are application history documents that are inconsistent with or in conflict with the content of this specification, as well as documents (currently or later appended to this specification) that limit the broadest scope of the claims of this specification. It is noted that if the description, definition, and/or use of a term in material appended to this specification is inconsistent with or conflicts with what is described in this specification, the description, definition, and/or use of the term in this specification controls.
Finally, it should be understood that the embodiments described in this specification are merely illustrative of the principles of the embodiments of this specification. Other variations are possible within the scope of this description. Thus, by way of example, and not limitation, alternative configurations of embodiments of the present specification may be considered as consistent with the teachings of the present specification. Accordingly, the embodiments of the present specification are not limited to only the embodiments explicitly described and depicted in the present specification.

Claims (28)

1. A method for assisting a physician in working, implemented on a processing device communicatively coupled to a physician terminal of the physician, the method comprising:
acquiring an access request for a doctor space application from the doctor terminal;
in response to the access request, determining a task to be completed by the doctor based on a receiving time of the access request and schedule information of the doctor; and
Controlling the doctor terminal to present an interactive interface through the doctor space application, wherein the interactive interface comprises a first interface element, and the first interface element is used for acquiring assistance services related to at least one task to be completed;
The processing equipment is provided with an agent capable of self-evolution based on artificial intelligence technology, and the layout of the interactive interface is determined by the agent based on behavior data and preference information of the doctor.
2. The method of claim 1, wherein the behavioral data reflects daily behavior of the physician and/or interactive behavior of the physician with the physician-space application, and the preference information reflects preferences of the physician for interface styles and interface elements of the interactive interface.
3. The method of claim 2, wherein determining, by the agent, a layout of the interactive interface based on the behavioral data and preference information of the doctor comprises:
Determining preference information of the doctor based on the historical operation data of the doctor;
and determining the layout of the interaction interface corresponding to the doctor based on the preference information and the behavior data of the doctor.
4. The method of claim 1, wherein the method further comprises:
autonomously acquiring real-time behavior data of the doctor;
Updating the layout of the interactive interface in real time based on the real-time behavior data, and
And controlling the doctor terminal to present an updated interactive interface through the doctor space application.
5. The method of claim 1, wherein determining the task to be completed by the doctor in response to the access request comprises:
Controlling the doctor terminal to present an initial interaction interface through the doctor space application in response to the access request, wherein the initial interaction interface comprises interface elements for reminding the doctor to view the working schedule, and
And responding to the request of accessing the working schedule, which is input by the doctor through the doctor terminal, and determining the task to be completed.
6. The method of claim 5, wherein the initial interactive interface further provides a virtual character configured to communicate with the doctor, the request to access a work schedule and/or the request to access assistance services associated with at least one of the tasks to be completed being entered by the doctor through voice interaction with the virtual character.
7. The method of claim 1, wherein the interactive interface further comprises a second interface element for obtaining a real-time 3D map associated with a target location corresponding to at least one of the tasks to be completed.
8. The method of claim 1, wherein the method further comprises:
determining a completed task of the doctor based on the receiving time of the access request and schedule information of the doctor, wherein the interactive interface further comprises foldable elements related to the completed task.
9. The method of claim 1, wherein the method further comprises:
Responsive to a request entered by the physician via the physician-space application in relation to at least one of the assistance services, determining a digital twin relating to the assistance service, the digital twin reflecting a real-time status of the respective physical entity;
And controlling the doctor terminal to present the digital twin body through the doctor space application.
10. The method of claim 9, wherein the method further comprises:
Receiving interaction information of the doctor and the digital twin body through the medical space application;
Based on the interaction information, updating the state of the physical entity corresponding to the digital twin.
11. The method of claim 1, wherein the method further comprises:
in response to a request entered by the physician via the medical space application relating to at least one of the assistance services, determining an agent relating to the assistance service and implementing at least one of the assistance services with the agent relating to the assistance service.
12. The method of claim 1, wherein the task to be completed comprises a ward round, the interactive interface comprising a first interface element for applying for a remote ward round, the method further comprising:
Acquiring a request input by the doctor to remotely participate in a ward round from the doctor terminal;
acquiring first perception information acquired by first perception equipment in a hospital ward during ward rounds;
Generating a virtual ward space based on the first perceived information and patient data of the target patient, and
And controlling the doctor terminal to present the virtual ward space.
13. The method of claim 12, wherein the first interface element comprises a first interface element for obtaining an initial ward record, the method further comprising:
Acquiring a request for accessing an initial ward round record of the target patient from the doctor terminal;
And responding to the request, controlling the doctor terminal to present the initial ward record of the target patient, wherein the initial ward record is generated by an agent corresponding to ward service based on the first perception information.
14. The method of claim 1, wherein the task to be completed comprises providing a consultation service within a consulting room, the interactive interface comprising a first interface element for obtaining patient data for a patient for whom the consultation service has been reserved, the method further comprising:
acquiring a request for accessing patient data of a target patient among the patients who have subscribed to the inquiry service from the doctor terminal;
Generating a patient avatar representing the target patient based on patient data of the target patient, and
And controlling the doctor terminal to present the patient virtual character so that the patient virtual character explains the patient data of the target patient to the doctor.
15. The method of claim 14, wherein the first interface element comprises a first interface element for obtaining an initial diagnostic record associated with the interview service, the method further comprising:
obtaining a request from the doctor terminal to access an initial diagnostic record of the target patient;
and in response to the request, controlling the doctor terminal to present the initial diagnosis record, wherein the initial diagnosis record is generated by an agent corresponding to the inquiry service based on second perception information acquired by second perception equipment in the consulting room during the inquiry service.
16. The method of claim 1, wherein the task to be completed comprises providing a remote interrogation service, the interactive interface comprising a first interface element for entering a virtual clinic, the method further comprising:
acquiring a request for entering a virtual consulting room to provide remote consultation service for a target patient from the doctor terminal;
Controlling the doctor terminal to present a 3D patient model of the target patient;
Acquiring from the doctor terminal a physical examination instruction entered by the doctor by interacting with the 3D patient model, and
Based on the physical examination instruction, the wearable device worn by the target patient is controlled to acquire physical examination data of the target patient.
17. The method of claim 1, wherein the task to be completed comprises performing a procedure on a target patient, the interactive interface comprising a first interface element for acquiring patient data related to the target patient.
18. The method of claim 17, wherein the first interface element comprises a first interface element for obtaining an initial surgical record for a target patient, the method further comprising:
acquiring a request from the doctor terminal to access an initial surgical record of a target patient;
and responding to the request, controlling the doctor terminal to present an initial operation record of the target patient, wherein the initial operation record is generated by an agent corresponding to operation service based on third perception information acquired during operation by third perception equipment in an operation room.
19. The method of claim 1, wherein the interactive interface further comprises a third interface element for performing preoperative patient teaching, the physician terminal comprising a second XR device worn by the physician, the method further comprising:
Obtaining a request for preoperative patient education of a target patient from a second XR device worn by the physician, the request being entered by the physician through interaction with the third interface element;
generating, in response to the request, explanation data for explaining the surgical plan of the target patient;
And controlling the first XR equipment worn by the target patient and the second XR equipment worn by the doctor to display the explanation data to the target patient and the doctor simultaneously.
20. The method of claim 19, wherein the method further comprises:
Acquiring a first confirmation instruction about the surgical plan entered by the target patient through the first XR device;
acquiring a second confirmation instruction about the surgical plan input by a patient family of the patient through a worn third XR device;
in response to the first confirmation instruction and the second confirmation instruction,
controlling the first XR device, the second XR device, and the third XR device to respectively present a surgical consent form; and
acquiring signature information on the surgical consent form from the first, second, and third XR devices, respectively.
21. The method of claim 1, wherein the interactive interface further comprises a fourth interface element for performing surgical simulation.
22. The method of claim 21, wherein the physician terminal includes a second XR device worn by the physician, the method further comprising:
Obtaining a request to simulate a target procedure from a second XR device worn by the physician, the request being entered by the physician through interaction with the fourth interface element;
in response to the request, generating a virtual surgical scene corresponding to the target surgery, the virtual surgical scene including a virtual surgical site and a virtual surgical device;
And controlling a second XR device worn by the doctor to present the virtual surgical scene to the doctor for the doctor to perform surgical simulation.
23. The method of claim 22, wherein the method further comprises:
Acquiring an interaction instruction about the virtual surgical device, which is input by the doctor through a second XR device worn by the doctor or an interaction device corresponding to the virtual surgical device;
Determining an emergency situation which may occur in the virtual surgical scene based on the interactive instruction, and
Updating the virtual surgical portion and/or the virtual surgical device in the virtual surgical scene based on the possible emergency situation.
24. The method of claim 1, wherein the interactive interface further comprises a fifth interface element for performing a surgical plan, the method further comprising:
acquiring a request from the doctor terminal to perform a surgical plan for a target patient, the request being input by the doctor through interaction with the fifth interface element;
Determining a surgical difficulty factor based on patient data of the target patient in response to the request;
determining, based on the surgical difficulty factor, whether an expert conference needs to be held; and
in response to determining that the expert conference is required, controlling the doctor terminal to present a sixth interface element that initiates the expert conference.
25. The method of claim 24, wherein the physician terminal includes a second XR device worn by the physician, the method further comprising:
In response to a need to hold an expert conference, controlling the second XR device of the doctor and a fourth XR device of a remote expert to respectively present a virtual conference space;
Acquiring fifth perception information acquired by the second XR device of the physician and a fourth XR device of a remote expert;
and generating a surgical plan of the target patient according to the patient data of the target patient and the fifth perception information.
26. The method of claim 25, wherein the method further comprises:
Processing the surgical plan and at least a portion of the patient data through a risk assessment model, the risk assessment model being a trained machine learning model, generating a risk assessment result for the surgical plan;
Determining risk preventive measures based on the risk assessment results, and
And displaying the risk assessment result of the operation plan and the risk preventive measures to the doctor.
27. The method of claim 1, wherein the interactive interface further comprises a seventh interface element for patient management, the method further comprising:
obtaining a request from the doctor terminal to access an initial admission record of a target patient, the request being entered by the doctor through interaction with the seventh interface element;
controlling the doctor terminal to present the initial admission record in response to the request; and
updating the initial admission record based on feedback information on the initial admission record input by the doctor through the doctor terminal.
28. The method of claim 27, wherein the updating the initial admission record comprises:
determining query content of the supplementary query based on the feedback information;
controlling a terminal device arranged in the hospital ward where the target patient is located to perform the supplementary query based on the query content; and
updating the initial admission record based on fourth perception information acquired by a fourth perception device in the hospital ward during the supplementary query.
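Claims 27-28 describe a three-step loop: doctor feedback flags gaps in the admission record, a ward terminal runs a supplementary query, and the answers are merged back. A minimal sketch under assumed data shapes (field names, the feedback format, and the stand-in for the ward terminal's interaction are all hypothetical):

```python
def query_content_from_feedback(record: dict, feedback: dict) -> list[str]:
    """Fields the doctor marked as missing or doubtful (False in feedback)."""
    return [field for field, ok in feedback.items() if not ok]

def run_supplementary_query(fields: list[str], ward_answers: dict) -> dict:
    """Stand-in for the ward terminal's voice/text interaction with the patient."""
    return {f: ward_answers.get(f, "unknown") for f in fields}

def update_record(record: dict, answers: dict) -> dict:
    """Merge the supplementary answers into the admission record."""
    updated = dict(record)
    updated.update(answers)
    return updated

record = {"name": "patient A", "allergy_history": "", "smoking": ""}
feedback = {"allergy_history": False, "smoking": False}
answers = run_supplementary_query(query_content_from_feedback(record, feedback),
                                  {"allergy_history": "penicillin", "smoking": "no"})
print(update_record(record, answers))
```

In the claimed system the answers would come from the ward's perception devices rather than a prefilled dict; the merge step is otherwise the same.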
CN202411385860.1A 2024-07-31 2024-09-30 Method and system for assisting doctor in working Pending CN121460091A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
PCT/CN2024/109063 WO2026025418A1 (en) 2024-07-31 2024-07-31 Methods and systems for assisting doctors in work
CNPCT/CN2024/109063 2024-07-31

Publications (1)

Publication Number Publication Date
CN121460091A true CN121460091A (en) 2026-02-03

Family

ID=98579655

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411385860.1A Pending CN121460091A (en) 2024-07-31 2024-09-30 Method and system for assisting doctor in working

Country Status (2)

Country Link
CN (1) CN121460091A (en)
WO (1) WO2026025418A1 (en)

Also Published As

Publication number Publication date
WO2026025418A1 (en) 2026-02-05

Similar Documents

Publication Publication Date Title
WO2026025413A1 (en) Methods, systems, and storage mediums for providing medical consultation services
Sharma Personalized telemedicine utilizing artificial intelligence, robotics, and Internet of Medical Things (IoMT)
WO2026025415A1 (en) Methods and systems for surgery planning and executing
WO2026025414A1 (en) Systems and methods for providing hospitalization services
WO2026025416A1 (en) Systems for hospital management
WO2026025419A1 (en) Hospital support platforms
WO2026025418A1 (en) Methods and systems for assisting doctors in work
WO2026025420A1 (en) Medical service systems, devices, and methods
WO2026025411A1 (en) Medical service systems and methods
WO2026025412A1 (en) Medical service systems and methods
Ashwini et al. MultiSense Diagnosis: Navigating Disease Diagnosis with Metaverse Multimodal Interaction
CN119964751A (en) A medical service system, device, equipment and method
WO2026025417A1 (en) Methods, systems, and storage media for providing medical services
CN119811610A (en) A system and method for providing inpatient nursing services
Johnson et al. Enhancing Healthcare Through AI, AR, and VR: Integrating Human–Computer Interaction for Improved Teleconsultations and Diagnostic Accuracy
CN120496723A (en) Ophthalmic care plan generation method and system based on multiple intelligent agents, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication