Detailed Description
The following description of the embodiments of the present application is made clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some embodiments of the present application, and not all embodiments.
The components of the embodiments of the present application generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the application, as presented in the figures, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present application.
The terms "comprises," "comprising," "including," or any other variation thereof are intended to indicate the presence of a specific feature, number, step, operation, element, component, or combination of the foregoing that may be used in various embodiments of the present application, and are not intended to exclude the presence or addition of one or more other features, numbers, steps, operations, elements, components, or combinations of the foregoing.
Furthermore, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and should not be construed as indicating or implying relative importance.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the various embodiments of the application belong. Terms such as those defined in commonly used dictionaries will be interpreted as having a meaning consistent with their meaning in the context of the relevant art, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein in connection with the various embodiments of the application.
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. The embodiments described below and features of the embodiments may be combined with each other without conflict.
In a first aspect, an embodiment of the present application provides a home robot. The home robot serves as the carrier of a user-side tool body and integrates autonomous movement and environment sensing capabilities, an interactive large screen, a lightweight mechanical arm, and a modularized diagnosis and treatment interface base, so that the home robot can autonomously move in a home environment, interact naturally with a user, and complete blood pressure, blood sampling, ultrasound, and other feature acquisition operations under the control of a digital doctor agent or a real doctor, thereby providing a data basis for subsequent intervention, diagnosis, and treatment.
As shown in fig. 1, the home robot includes a diagnosis and treatment device interface base 101, an interaction module 102, and a mechanical arm module 103. The diagnosis and treatment equipment interface base 101 is used for bearing at least one diagnosis and treatment equipment detachably mounted on a household robot, the type of the diagnosis and treatment equipment comprises basic diagnosis and treatment equipment and special diagnosis and treatment equipment determined according to the illness state of a user, the interaction module 102 is used for displaying a digital doctor intelligent agent through an interaction interface, interacting with the user through the digital doctor intelligent agent and collecting body index data of the user, and the mechanical arm module 103 is used for responding to control instructions of the digital doctor intelligent agent and controlling the target diagnosis and treatment equipment to execute intervention operation and diagnosis and treatment operation.
In an embodiment, the home robot may be rented or purchased from a hospital by a user. Before the home robot is put into use, it needs to be initialized and fitted with diagnosis and treatment equipment. According to the personalized health requirements and disease type of the user, a real doctor inputs the user's basic information, medical record data, and personalized service requirements into the home robot, and basic diagnosis and treatment equipment and special diagnosis and treatment equipment are installed on the diagnosis and treatment equipment interface base of the home robot. On the one hand, the interface base can carry a set of basic diagnosis and treatment equipment suitable for most chronic diseases, for example, a fingertip blood detection module (for blood sugar, blood fat, and some biochemical indexes), a blood pressure measurement module, an infrared temperature measurement module, and an electroencephalogram/electrocardiogram acquisition module. On the other hand, the interface base can carry a set of special diagnosis and treatment equipment matching the user's personalized health requirements and disease type. For example, for IVF/assisted-reproduction users, abdominal/gynecological ultrasound equipment, hormone-related blood sampling equipment, intramuscular injection equipment (for hormone injection), and reproduction-related hormone detection equipment may be assembled to support multi-cycle ovulation induction, follicle monitoring, and medication administration. For users with mental illnesses such as depression, electroencephalogram acquisition equipment, sleep monitoring equipment, heart rate variability detection equipment, and blood sampling equipment for depression-related biomarkers may be assembled to cooperate with the behavior monitoring and psychological intervention strategies of the digital doctor agent. For users with diabetes, fingertip blood sugar detection equipment, blood pressure monitoring equipment, body composition and body weight monitoring equipment, and the necessary insulin injection equipment may be assembled, optionally extended with foot-care and other imaging modules for long-term diabetes management.
It should be noted that the diagnosis and treatment equipment interface base decouples the home robot body from the various diagnosis and treatment equipment to form a "general base + pluggable diagnosis and treatment equipment" structure. The base provides a unified physical fixing structure, a data communication channel, and a power supply interface, so that plug-and-play of the diagnosis and treatment equipment is realized. The physical interface may take various forms, such as a sliding-rail type, a magnetic-attraction type, or a buckle type, to adapt to diagnosis and treatment equipment of different sizes and installation postures. The interface base not only supports standardized access of basic diagnosis and treatment equipment, but also supports access of special diagnosis and treatment equipment for different disease paths and individual differences, so that a hardware platform does not have to be independently designed for each disease, and the reusability and expansibility of the home robot are improved.
In an embodiment, the home robot further comprises a diagnosis and treatment equipment management module. When a piece of diagnosis and treatment equipment is installed on the home robot through the diagnosis and treatment equipment interface base, the management module detects the equipment's electrical connection, data link, and safety state; when the detection passes, it identifies the equipment type, equipment function attributes, and equipment state information, generates diagnosis and treatment equipment information, and writes that information into a configuration file of the home robot.
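By way of illustration only, the detect-then-register flow described above may be sketched as follows; every name here (`DeviceInfo`, `register_device`, the check arguments) is an assumption introduced for the example and not part of the claimed implementation:

```python
# Illustrative sketch of plug-and-play device registration on the interface base.
from dataclasses import dataclass, asdict

@dataclass
class DeviceInfo:
    device_type: str   # e.g. "blood_pressure_module"
    function: str      # equipment function attribute
    status: str        # equipment state information

def checks_pass(power_ok: bool, link_ok: bool, safety_ok: bool) -> bool:
    """Electrical connection, data link, and safety state must all pass."""
    return power_ok and link_ok and safety_ok

def register_device(config: dict, info: DeviceInfo,
                    power_ok=True, link_ok=True, safety_ok=True) -> dict:
    """Write the device information into the robot's configuration
    only when all three detection steps succeed."""
    if not checks_pass(power_ok, link_ok, safety_ok):
        raise RuntimeError("device detection failed; not registered")
    config.setdefault("devices", []).append(asdict(info))
    return config

config = {}
register_device(config, DeviceInfo("blood_pressure_module", "NIBP measurement", "idle"))
```

In this sketch, the configuration is an in-memory dictionary; in practice the record would be persisted to the robot's configuration file.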
In an embodiment, since the home robot is used in the user's home, the home robot also needs to perform multi-modal sensing calibration and environment modeling before being put into use, to support subsequent personalized active services and accurate navigation. The home robot performs spatial coordinate calibration on each module or piece of equipment it carries and establishes a unified coordinate system to spatially align the vision, mechanical, and sensing systems. It completes SLAM mapping of the home space through a laser radar, a depth camera, and a vision recognition algorithm, and generates semantic-layer marking information (such as bedroom, living room, kitchen, and toilet). It marks inaccessible areas (such as bedrooms, private spaces, and pet activity areas) according to user settings or AI recognition results and establishes virtual boundaries to ensure operation safety. It also recognizes furniture positions, common activity areas, illumination states, and user daily-behavior hot spots, locally caches environment and user activity data, and establishes a situation model index for subsequent personalized task scheduling (such as actively initiating health reminders in the user's common areas) and dynamic model updating.
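The virtual-boundary check described above can be illustrated with a minimal sketch; the semantic map structure and function names below are assumptions for illustration, not the claimed mapping system:

```python
# Illustrative sketch: rooms from the semantic layer are marked accessible or
# not, and navigation goals inside an inaccessible area are rejected.
SEMANTIC_MAP = {
    "living_room": {"accessible": True},
    "kitchen":     {"accessible": True},
    "bedroom":     {"accessible": False},  # marked private by the user
}

def goal_allowed(room: str) -> bool:
    """Reject navigation goals that fall inside a virtual boundary.
    Unknown rooms are treated as inaccessible for safety."""
    return SEMANTIC_MAP.get(room, {"accessible": False})["accessible"]
```

Defaulting unknown rooms to inaccessible follows the safety-first framing of the paragraph above, where virtual boundaries exist to ensure operation safety.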
In an embodiment, the interaction module 102 includes an image rendering sub-module. The image rendering sub-module defines a unified digital doctor agent avatar for the real doctor corresponding to the user, binding the real doctor's portrait, gender, voice features, and language style to ensure the user's cognitive continuity across different modes. The interaction modes are distinguished by adjusting the background, clothing, and rendering style. In the interaction mode of autonomous service by the digital doctor agent, the background is a home/living scene, the clothing is casual or cartoon-styled, and more anthropomorphic and approachable animated expressions are supported. In the interaction mode of real doctor access, the background is switched to a consulting room or office environment, the clothing is formal doctor attire, the rendering style is closer to the real person, and only the real doctor's voice and expressions are carried.
In one embodiment, the interaction module 102 includes a behavior and speech synthesis sub-module that exhibits different behaviors and voices on the display screen in different interaction modes. In the interaction mode of autonomous service by the digital doctor agent, the dialogue content output by the agent is converted into natural synthesized speech with mouth-shape and expression animation; the agent is driven to make body actions such as nodding, pointing, and smiling, to perform gesture demonstration and rhythm guidance matching intervention tasks (such as exercise guidance and meditation guidance), and to generate corresponding emotional feedback behaviors (encouragement, soothing, reminding, and the like) in combination with task states. In the interaction mode of real doctor access, speech synthesis by the digital doctor agent is no longer performed; instead, the real doctor's voice stream drives the avatar's mouth shape and expressions in real time, and behavior actions (such as pointing, emphasis, and gaze direction) may be simply controlled from the real doctor's end or automatically supplemented by the agent according to semantics, but no independent voice content is generated, so that a single interaction channel is maintained. In both modes, the sub-module supports cooperation with animations and prompt sound effects of other screen interfaces, improving the consistency and understandability of the whole interaction.
In an embodiment, the interaction module 102 includes a multi-mode interface generation sub-module. In the interaction mode of autonomous service by the digital doctor agent, it generates reminder cards, task progress bars, simple charts (blood sugar/blood pressure curves), questionnaire forms, and the like on the display screen according to the agent's tasks and prompt requirements, and superimposes a lightweight avatar interface of the real doctor in a small window for non-intrusive reminders and information collection while a video or another application is running. In the interaction mode of real doctor access, semantic analysis is performed in the background on the real doctor's real-time voice content and operation intention; auxiliary display content is automatically selected and generated, such as key sign trend graphs and historical data, examination image comparison graphs, medicine description cards, structured inquiry abstracts, and dynamic schematics, short videos, or animations for explaining disease mechanisms, treatment schemes, or operation steps; the interface layout and hierarchy are automatically adjusted so that information related to the current consultation theme is presented preferentially; and interface state changes and data-loading progress are fed back with prompt tones, loading animations, and the like.
In one embodiment, the mechanical arm module 103 supports two modes: autonomous operation by the digital doctor agent and remote teleoperation by the real doctor. The mechanical arm module 103 can be configured with various types of lightweight or flexible mechanical arms for operating diagnosis and treatment equipment (such as a blood pressure cuff, a B-ultrasound probe, and a blood sampling device). The mechanical arm module 103 can be provided with various end effectors (clamping jaws, a probe holder, a syringe interface, and the like), and the digital doctor agent or a real doctor can control end-effector replacement according to the task type to adapt to different diagnosis, treatment, or sampling tasks. The mechanical arm module 103 supports force feedback and limit control to ensure safety and comfort during interaction with the user.

In an embodiment, the digital doctor agent is further configured to receive user individual data, generate a monitoring scheme, an intervention scheme, and a consultation scheme for the user according to the user individual data and preset auxiliary data, send these schemes to the real doctor associated with the user for auditing and confirmation, receive the target monitoring scheme, target intervention scheme, and target consultation scheme audited and confirmed by the real doctor, and execute intervention operations and diagnosis operations based on them.
In an embodiment, the user individual data collected by the digital doctor agent includes the user's case information, disease type, health indexes, and treatment history, and the preset auxiliary data includes long-term health data of similar historical patients, wearable device data accessed through the external interface module, electronic medical record (EMR/HIS) records, home IoT health monitoring data, and other stored treatment cases. Based on a built-in dedicated health-care large model, the digital doctor agent fuses the user individual data and the preset auxiliary data to complete cognitive modeling and health target planning of the user's state. The process comprises: normalizing and extracting features from the user individual data and the preset auxiliary data; generating a user health profile; calling the dedicated health-care large model to reason about and optimize the monitoring items, intervention measures, and inquiry arrangements; and outputting a monitoring scheme, an intervention scheme, and an inquiry scheme for the user.
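The normalize-profile-infer pipeline above can be sketched minimally; the large-model call is stubbed with a rule, and every function name and field here is an assumption for illustration:

```python
# Illustrative sketch of the scheme-generation pipeline: merge individual and
# auxiliary data into a health profile, then derive the three schemes.
def normalize(individual: dict, auxiliary: dict) -> dict:
    """Build a health profile; user-specific values override auxiliary defaults,
    and empty fields are dropped."""
    profile = {**auxiliary, **individual}
    return {k: v for k, v in profile.items() if v is not None}

def infer_schemes(profile: dict) -> dict:
    """Stand-in for the dedicated health-care large model's reasoning step."""
    schemes = {"monitoring": [], "intervention": [], "consultation": []}
    if profile.get("disease_type") == "diabetes":
        schemes["monitoring"].append("fingertip_blood_glucose")
        schemes["intervention"].append("medication_reminder")
        schemes["consultation"].append("weekly_agent_inquiry")
    return schemes

profile = normalize({"disease_type": "diabetes", "age": 62},
                    {"similar_patients": 120, "disease_type": None})
schemes = infer_schemes(profile)
```

The rule-based stub only marks where the model inference sits in the flow; the real system would replace `infer_schemes` with the large-model call.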
In an embodiment, the monitoring scheme generated by the digital doctor agent comprises: monitoring parameters such as physical sign items and monitoring indexes (blood sugar, blood pressure, heart rate, electroencephalogram, sleep, and the like); time planning for fixed-period, event-triggered, or multi-condition combined monitoring; weights defined for high-frequency items and key indexes to ensure reasonable resource allocation; corresponding action paths of the diagnosis and treatment equipment and the mechanical arm module 103; and data quality and alarm rules such as abnormality detection, compensated acquisition on signal loss, and threshold-triggered alarms.
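One possible (assumed, illustrative) encoding of such a monitoring scheme, together with a threshold-triggered alarm check, is:

```python
# Illustrative record for a monitoring scheme: items, schedules, weights,
# and alarm rules. Field names and thresholds are assumptions for the example.
monitoring_scheme = {
    "items": ["blood_glucose", "blood_pressure", "heart_rate"],
    "schedule": {
        "blood_glucose": {"type": "fixed_period", "every_hours": 6},
        "heart_rate":    {"type": "event_triggered", "trigger": "after_exercise"},
    },
    "weights": {"blood_glucose": 0.5, "blood_pressure": 0.3, "heart_rate": 0.2},
    "alarm_rules": {"blood_glucose": {"low": 3.9, "high": 10.0}},  # mmol/L
}

def should_alarm(item: str, value: float) -> bool:
    """Threshold-triggered alarm: fire when a measured value leaves its band."""
    rule = monitoring_scheme["alarm_rules"].get(item)
    return bool(rule) and not (rule["low"] <= value <= rule["high"])
```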
In one embodiment, the intervention scheme generated by the digital doctor agent comprises: intervention types such as medication reminders, physical therapy, illumination intervention, exercise guidance, and psychological support; intervention timing and frequency, adjusted at fixed times or dynamically based on monitoring feedback; intervention intensity and duration set according to the disease type and individual differences; a post-intervention feedback mechanism that tracks user behaviors and physical sign responses to form a curative-effect evaluation loop; and interaction forms for implementing intervention in multiple modes such as voice, screen, and action feedback.
In one embodiment, the inquiry scheme generated by the digital doctor agent comprises automatic scheduling of agent inquiries and real doctor inquiries, inquiry content defined according to disease progress (such as medication response, living habits, and emotional state), and mainly auxiliary information-acquisition tasks that help the real doctor obtain the necessary structured information before a formal consultation.
In one embodiment, the digital doctor agent transmits the monitoring scheme, the intervention scheme, and the inquiry scheme to the user's corresponding real doctor for auditing and confirmation. Before sending, the review necessity of each item in the three schemes is analyzed and marked, so as to direct the real doctor's attention to the links needing careful review and reduce the real doctor's burden. The digital doctor agent receives the target monitoring scheme, target intervention scheme, and target inquiry scheme checked and confirmed by the real doctor, and diagnoses the user based on them. Meanwhile, the built-in dedicated health-care large model can be automatically retrained on the schemes checked and confirmed by the real doctor, so that self-learning updating of the digital doctor agent is realized and the accuracy and individual matching degree of future scheme generation are continuously improved.
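The pre-send review marking can be sketched as a simple flagging pass; the flagging rule (invasive or newly added items need review) is an assumption chosen for illustration, not the claimed criterion:

```python
# Illustrative sketch: mark which scheme items the real doctor should
# review carefully before confirming the scheme.
def mark_review_necessity(items: list) -> list:
    """Flag invasive or newly added items as requiring careful review."""
    for item in items:
        item["needs_review"] = item.get("invasive", False) or item.get("new", False)
    return items

plan = mark_review_necessity([
    {"name": "blood_sampling", "invasive": True},
    {"name": "sleep_monitoring", "invasive": False},
])
```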
In one embodiment, the digital doctor agent generates an executable behavior plan from the target monitoring scheme, target intervention scheme, and target consultation scheme, in combination with local situation labels and the user's state. The plan includes scheduling the time and mode of health reminders, information acquisition, and agent inquiries; scheduling planned intervention tasks (such as medication reminders, exercise guidance, and meditation guidance); calling the robot chassis, the mechanical arm, and the diagnosis equipment to complete physical sign acquisition or equipment operation; and supporting both periodic tasks and event-triggered tasks (such as dynamic intervention after a monitoring abnormality or inquiry abnormality).
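The mix of periodic and event-triggered tasks described above can be sketched with a small planner; the class and method names are assumptions introduced for this example:

```python
# Illustrative sketch: periodic tasks fire on schedule and are rescheduled,
# while event tasks fire when a matching event arrives.
import heapq

class Planner:
    def __init__(self):
        self.periodic = []      # heap of (next_time, interval, task)
        self.event_tasks = {}   # event name -> task

    def add_periodic(self, start, interval, task):
        heapq.heappush(self.periodic, (start, interval, task))

    def add_event_task(self, event, task):
        self.event_tasks[event] = task

    def due(self, now):
        """Return tasks due at `now` and reschedule them one interval later."""
        tasks = []
        while self.periodic and self.periodic[0][0] <= now:
            t, interval, task = heapq.heappop(self.periodic)
            tasks.append(task)
            heapq.heappush(self.periodic, (t + interval, interval, task))
        return tasks

    def on_event(self, event):
        return self.event_tasks.get(event)

p = Planner()
p.add_periodic(start=8, interval=24, task="medication_reminder")
p.add_event_task("abnormal_blood_pressure", "dynamic_intervention")
```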
In an embodiment, the digital doctor agent executes the target monitoring scheme. Unlike passive monitoring in the traditional sense, the digital doctor agent can control the home robot, which has autonomous movement, environment understanding, and task planning capabilities in the home scene, to "supervise" the user's daily life through behavior recognition and natural interaction. The robot uses the SLAM map and spatial semantic information to autonomously plan paths and to identify and record the patient's daily activities (such as diet, exercise, rest, and entertainment). To reduce interference and improve acceptance, the digital doctor agent generates personalized dialogue strategies to communicate with the user naturally, and health-related information (such as dining time, sleep quality, and emotional state) is acquired during the dialogue.
In one embodiment, the digital doctor agent executes the target intervention scheme. Target intervention schemes are divided into two categories: planned intervention and situational intervention. For planned intervention, according to the target intervention scheme, the digital doctor agent automatically executes fixed tasks such as daily medication reminders, meditation guidance, exercise guidance, and work-and-rest reminders at preset times, and guides the user to complete the tasks through voice and screen interaction; during the intervention, the display screen of the home robot can render the digital doctor agent as the real doctor's avatar, providing voice prompts, expression feedback, and visual guidance in a personified manner. For situational intervention, when an abnormal condition occurs in the monitoring stage or the inquiry stage (such as detecting that the user is sedentary, drinking, delaying medication, or showing abnormal physical signs), the digital doctor agent selects an intervention strategy (prompting, warning, guiding an action, or automatically starting an inquiry) according to the specific abnormality type; during intervention execution, the digital doctor agent dynamically adjusts its behavior according to the user's verbal responses, emotional state, and behavior feedback to achieve a personalized intervention rhythm.
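The abnormality-to-strategy selection for situational intervention can be sketched as a simple mapping; the specific pairings below are illustrative assumptions, not the claimed decision logic:

```python
# Illustrative sketch: map detected anomaly type to an intervention strategy
# (prompt, warn, guide an action, or automatically start an inquiry).
STRATEGY_BY_ANOMALY = {
    "sedentary":        "prompt",
    "drinking":         "warn",
    "medication_delay": "guide_action",
    "abnormal_vitals":  "start_inquiry",
}

def select_intervention(anomaly: str) -> str:
    # Default to a gentle prompt for unrecognized anomaly types.
    return STRATEGY_BY_ANOMALY.get(anomaly, "prompt")
```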
In one embodiment, the digital doctor agent executes the target inquiry scheme. Inquiry by the digital doctor agent is an important functional link, mainly used to collect structured health information and assist the real doctor's diagnosis. Inquiry is classified into two modes: active inquiry and passive inquiry. In active inquiry, the digital doctor agent actively initiates an inquiry at a suitable time according to the time and theme set in the inquiry scheme, interacts with the patient by voice and screen, and collects illness information and behavior feedback. In passive inquiry, the patient actively initiates an inquiry request, and the digital doctor agent responds immediately and executes the information acquisition and preliminary diagnosis process. During the inquiry, the digital doctor agent can autonomously call the mechanical arm module 103 and the diagnosis and treatment equipment in the home robot to complete the necessary physical sign acquisition for the user; during sampling, the agent guides the user interactively through voice and screen to ensure the accuracy and safety of the sampling process.
In an embodiment, the home robot further comprises an autonomous movement and situation awareness module. The module acquires environment information of the environment where the home robot is located and pose information of the user, determines the user's current situation state, and sends the situation state to the digital doctor agent; the digital doctor agent is further used to judge whether to execute an intervention operation and a diagnosis and treatment operation according to the situation state, the target monitoring scheme, the target intervention scheme, and the target consultation scheme. It will be appreciated that the digital doctor agent maintains the patient's current context state, such as "eating at the dining table", "resting on the sofa", "sleeping", and "watching television", by combining the environment information collected by the home robot and the user's pose information (SLAM map, spatial semantics, user position and pose, lighting/time information, and the like), and combines this context information with historical interaction records and health data to decide whether to disturb, how to disturb, and when to execute the target monitoring scheme, target intervention scheme, and target consultation scheme.
In a second aspect, as shown in fig. 2, the embodiment of the present application provides a remote diagnosis and treatment system, where the remote diagnosis and treatment system includes a remote diagnosis and treatment cloud platform 202 and a home robot 201, where the home robot 201 is configured to send, to the remote diagnosis and treatment cloud platform 202, interaction information of interaction with a user through the digital doctor agent and diagnosis and treatment information of diagnosis and treatment performed by the user, and the remote diagnosis and treatment cloud platform 202 is configured to store the interaction information and the diagnosis and treatment information, and display the interaction information and the diagnosis and treatment information to a real doctor.
In an embodiment, the home robot 201 is further configured to receive user individual data, generate a monitoring scheme, an intervention scheme, and a consultation scheme for the user according to the user individual data and preset auxiliary data, and send the generated monitoring scheme, intervention scheme, and consultation scheme to the remote diagnosis and treatment cloud platform 202. The remote diagnosis and treatment cloud platform 202 is further configured to display the monitoring scheme, the intervention scheme, and the consultation scheme to a real doctor, generate a target monitoring scheme, a target intervention scheme, and a target consultation scheme based on the real doctor's audit and confirmation, and send these target schemes to the home robot 201.
In an embodiment, the remote diagnosis and treatment cloud platform 202 is further configured to establish a communication channel with the home robot 201 in response to an access request of a real doctor, and send audio-visual information and/or control instructions of the real doctor to the home robot 201 through the communication channel, wherein the home robot 201 is further configured to display an image of the real doctor in the interaction module to interact with a user based on the audio-visual information, and/or control the mechanical arm module to operate the diagnosis and treatment device to diagnose the user based on the control instructions.
In an embodiment, a real doctor accesses the remote diagnosis and treatment cloud platform 202. After logging in to his or her own account on the remote diagnosis and treatment cloud platform 202, the real doctor can check the patient's real-time state, historical data, and reservation plan, and select a target patient to initiate a remote consultation. The remote diagnosis and treatment cloud platform 202 automatically establishes an encrypted communication channel and switches the home robot 201 to the "real doctor access mode".
In one embodiment, if a real doctor initiates an appointment for a remote consultation, the digital doctor agent in the home robot 201 will prompt the patient to prepare: if the consultation requires fasting, the patient is reminded to fast for 8 hours in advance; if blood pressure is to be measured or physical signs collected, the patient is reminded to rest quietly beforehand; if a mental state assessment is involved, the patient is guided to maintain good environmental and lighting conditions. The digital doctor agent records the preparation state in the background for the real doctor to check and confirm.
In one embodiment, the display screen of the home robot 201 shows a switchable avatar of the real doctor; the digital doctor agent's avatar and the real doctor's image never appear in voice or visual form at the same time. In the autonomous service stage of the digital doctor agent, the real doctor's avatar represents the digital doctor agent: the background is a home scene, the clothing is a casual or cartoon visual style, and the avatar has voice interaction and anthropomorphic expression functions. In the real doctor access stage, the avatar is switched to an image mapping the real doctor's characteristics (portrait, voice, language style, and the like remain consistent), the background is switched to a consulting room or office environment, and the clothing changes to professional doctor attire; the agent closes its voice channel and is only responsible for generating the screen information interface in the background, executing the real doctor's instructions, and generating sound-effect feedback (such as button sounds and reminder tones). Through this switching mechanism, the patient always sees the image of the same real doctor, but can clearly distinguish, through visual semantics (clothing, background, and rendering style), whether the current mode is the agent mode or the real doctor access mode, thereby reducing mental burden and communication confusion.
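The two rendering presets and the exclusivity rule (only one voice channel open at a time) can be sketched as follows; all field values are illustrative assumptions:

```python
# Illustrative sketch of the two avatar modes and the single-voice-channel rule.
PRESETS = {
    "agent_mode":  {"background": "home_scene", "clothing": "casual_cartoon",
                    "agent_voice": True,  "doctor_voice": False},
    "doctor_mode": {"background": "consulting_room", "clothing": "formal_attire",
                    "agent_voice": False, "doctor_voice": True},
}

def switch_mode(mode: str) -> dict:
    state = PRESETS[mode]
    # Exactly one voice channel may be open in either mode.
    assert state["agent_voice"] != state["doctor_voice"]
    return state
```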
In one embodiment, during real doctor access, the digital doctor agent does not produce speech or independent expressions, but acts as a background assistant performing generated-interface management and action control. For generated-interface presentation, the digital doctor agent analyzes the real doctor's voice content and instruction intention in real time and generates an auxiliary information display interface, including key sign trend graphs, visualizations of historical detection data, comparison images, structured inquiry abstracts, medicine descriptions, and auxiliary pictures, short videos, charts, or dynamic labels supporting the real doctor's explanations. The agent also generates auxiliary sound effects and interface transition animations according to the real doctor's operation rhythm, to signal information changes, data loading, or action confirmation and to enhance the sense of interaction rhythm. For action-execution coordination, when the real doctor issues a specific operation instruction (such as "start measurement" or "move the mechanical arm close to the abdominal area"), the agent is responsible for parsing the command, generating control parameters, and driving the robot end to complete the action.
In an embodiment, the remote diagnosis and treatment cloud platform 202 further includes a remote control module. The remote control module is configured to respond to the real doctor's real-time operations, generate remote control instructions, and send them to the home robot 201 over the communication channel, so that the real doctor can remotely control the mechanical arm module to operate the diagnosis and treatment equipment and diagnose the user.
In one embodiment, the remote control module supports two teleoperation modes to meet remote diagnosis and treatment requirements of different precision and safety levels. In mode one, an instruction control mode (executed by the digital doctor agent), the real doctor operates the remote control module to send remote control instructions to the home robot 201 through the communication channel of the remote diagnosis and treatment cloud platform 202; after parsing the instructions, the digital doctor agent automatically plans an action path and executes tasks such as blood pressure measurement, body temperature detection, and blood sampling. Before executing a task, the digital doctor agent sends a confirmation prompt to the real doctor's device and waits for the real doctor's confirmation; the real doctor can confirm by voice through the communication channel of the remote diagnosis and treatment cloud platform 202 and verify operational readiness with the patient, and during execution the display screen of the home robot 201 shows real-time progress, module states, and sampling feedback. In mode two, a fine remote control mode (direct control by the real doctor), when a task involves high-precision operations (such as B-ultrasound probe positioning, local blood sampling, or electroencephalogram electrode attachment), the real doctor can directly control the home robot 201 in real time through the remote control module.
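The confirmation gate in the instruction control mode can be sketched as a small function that refuses to act until the real doctor confirms. The callback names (`confirm`, `run_task`) are illustrative assumptions:

```python
from typing import Callable

def execute_instruction(instruction: str,
                        confirm: Callable[[str], bool],
                        run_task: Callable[[str], str]) -> str:
    """Instruction-control mode sketch: the agent plans the task but waits for
    the real doctor's confirmation before executing anything."""
    prompt = f"Confirm execution of task: {instruction}?"
    if not confirm(prompt):
        return "cancelled"          # no confirmation, no action
    return run_task(instruction)    # agent-planned execution proceeds
```

Injecting `confirm` as a callback keeps the safety decision on the doctor's side of the channel rather than inside the agent.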
The remote control module can be equipped with a haptic-feedback control stick, a VR handle console, or a dedicated medical remote mechanical arm control system. During remote control, the remote diagnosis and treatment cloud platform 202 transmits low-latency video streams and force feedback data, giving the real doctor real-time perception; meanwhile, the home robot 201 automatically monitors the mechanical arm's range, safety domain, and force feedback thresholds to ensure operational safety, and the digital doctor agent performs safety judgment and action prediction in the background to prevent equipment overload or false triggering. In either mode, the real doctor holds primary control: all key actions must be confirmed and triggered by the real doctor, and the home robot 201 responds and confirms through voice or the touch screen, ensuring the safety and controllability of medical actions.
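The local safety monitoring on the robot side might, in the simplest case, clamp each teleoperation command against a workspace radius and a force threshold. The specific limits and the function shape below are hypothetical, chosen only to show the idea:

```python
import math

def safe_arm_command(position_mm: tuple[float, float, float],
                     force_n: float,
                     workspace_mm: float = 600.0,
                     max_force_n: float = 5.0) -> dict:
    """Clamp a teleoperation command to the arm's safety domain.

    Commanded forces above the threshold abort the motion outright;
    positions outside the reachable sphere are scaled back onto its surface.
    """
    if force_n > max_force_n:
        return {"allowed": False, "reason": "force_over_threshold"}
    r = math.sqrt(sum(c * c for c in position_mm))
    if r > workspace_mm:
        scale = workspace_mm / r
        position_mm = tuple(c * scale for c in position_mm)
    return {"allowed": True, "position_mm": position_mm}
```

Rejecting excess force (rather than scaling it) reflects the document's emphasis that overload should never reach the patient.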
In an embodiment, the remote diagnosis and treatment cloud platform 202 is further configured to optimize the target monitoring scheme, the target intervention scheme, and the target inquiry scheme according to the interaction information and the diagnosis and treatment information, so as to obtain an optimized monitoring scheme, an optimized intervention scheme, and an optimized inquiry scheme.
In an embodiment, the remote diagnosis and treatment cloud platform 202 is further configured to extract key items from the optimized monitoring scheme, the optimized intervention scheme, and the optimized inquiry scheme, display the key items to the real doctor, generate optimized key items based on the real doctor's review and confirmation, update the optimized monitoring scheme, the optimized intervention scheme, and the optimized inquiry scheme based on the optimized key items, and send the optimized key items to the home robot 201.
In an embodiment, the home robot 201 classifies and caches the interaction information and diagnosis and treatment information generated while interacting with and examining the user, and sends them to the remote diagnosis and treatment cloud platform 202. The interaction information and diagnosis and treatment information include, for example, abnormal signs, intervention failures, emergency events, daily behavior logs, and light interaction records during video and audio playback. All data carry timestamps, spatial location information (from the environmental map and semantic map), task identifiers (corresponding to specific monitoring/intervention/inquiry tasks), and scheme version numbers, ensuring subsequent traceability analysis.
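A minimal sketch of the cached record and its classification, using hypothetical field and function names to mirror the metadata listed above (timestamp, location, task identifier, scheme version):

```python
from dataclasses import dataclass, field
import time

@dataclass
class CachedRecord:
    category: str          # e.g. "abnormal_sign", "intervention_failure"
    payload: dict          # raw measurement or log content
    task_id: str           # links to a monitoring/intervention/inquiry task
    scheme_version: str    # scheme version number for traceability
    location: str          # from the environmental / semantic map
    timestamp: float = field(default_factory=time.time)

def classify_and_cache(records: list[CachedRecord]) -> dict[str, list[CachedRecord]]:
    """Group cached records by category before uploading to the cloud."""
    buckets: dict[str, list[CachedRecord]] = {}
    for r in records:
        buckets.setdefault(r.category, []).append(r)
    return buckets
```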
In an embodiment, after receiving the interaction information and the diagnosis and treatment information, the remote diagnosis and treatment cloud platform 202 performs fusion processing on the data, including denoising, missing-value handling, and outlier labeling; aligns and associates monitoring data, intervention records, inquiry records, and real doctor operation records based on a unified patient identifier and time axis; and establishes a time-series health data view for each patient and a statistical view for groups with similar diseases.
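The alignment step can be sketched as merging several source streams onto one per-patient timeline, with a trivial missing-value filter standing in for the full cleaning pipeline (function and key names are assumptions):

```python
def align_by_patient(streams: dict[str, list[dict]]) -> dict[str, list[dict]]:
    """Merge monitoring, intervention and inquiry records onto one time axis
    per patient, dropping entries with missing values (simple cleaning step)."""
    merged: dict[str, list[dict]] = {}
    for source, records in streams.items():
        for rec in records:
            if rec.get("value") is None:       # missing-value handling
                continue
            merged.setdefault(rec["patient_id"], []).append({**rec, "source": source})
    for timeline in merged.values():
        timeline.sort(key=lambda r: r["timestamp"])  # unified time axis
    return merged
```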
In an embodiment, the remote diagnosis and treatment cloud platform 202 performs multidimensional analysis on the aggregated data based on a proprietary health and medical knowledge model, including but not limited to: health trend analysis, such as long-term fluctuation trends of blood sugar/blood pressure, sleep quality changes, and whether exercise targets are met; intervention effect evaluation, such as comparing vital-sign data and behavior changes before and after an intervention to evaluate its effectiveness; compliance and risk evaluation, such as counting the patient's completion rate, delays, and abnormal event frequency when executing an intervention plan, and outputting risk classification prompts; and inquiry and operation quality evaluation, such as analyzing the coverage, duration, and key-question hit rate of agent inquiries and the real doctor's remote inquiries, providing a basis for process optimization. Based on the multidimensional analysis results, the remote diagnosis and treatment cloud platform 202 generates structured optimization suggestions and decision support information, including optimization suggestions for monitoring schemes (e.g., increasing/decreasing the frequency of certain monitoring items, adjusting trigger thresholds), for intervention schemes (e.g., adjusting intervention time periods, introducing alternative intervention modes, strengthening certain types of behavioral guidance), and for inquiry schemes (e.g., adding targeted questions, shortening unnecessary exchanges, adding follow-ups on specific topics).
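The compliance and risk evaluation above could, in its simplest form, grade adherence from a completion rate and an abnormal-event count. The thresholds and the three-level grading are illustrative assumptions, not values from the disclosure:

```python
def compliance_risk(planned: int, completed: int, abnormal_events: int) -> str:
    """Grade adherence to an intervention plan into a coarse risk level."""
    if planned == 0:
        return "no_data"
    rate = completed / planned
    if rate >= 0.8 and abnormal_events == 0:
        return "low_risk"
    if rate >= 0.5:
        return "medium_risk"
    return "high_risk"
```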
In an embodiment, the remote diagnosis and treatment cloud platform 202 automatically optimizes the original target monitoring scheme, target intervention scheme, and target inquiry scheme according to the optimization suggestions, so as to obtain the optimized monitoring scheme, the optimized intervention scheme, and the optimized inquiry scheme. Examples of the optimization process include: for the monitoring scheme, adjusting monitoring frequency, adding key-indicator monitoring, and relaxing or tightening alarm thresholds; for the intervention scheme, adding or removing certain types of intervention tasks, changing the intervention period or rhythm, and substituting intervention forms the user more readily accepts (such as changing a voice reminder to a soft visual reminder); and for the inquiry scheme, rescheduling inquiry times, adding certain types of follow-up questions (for agent inquiry), and presetting a more streamlined inquiry script for the real doctor. The remote diagnosis and treatment cloud platform 202 analyzes each item in the optimized monitoring scheme, optimized intervention scheme, and optimized inquiry scheme according to its impact scope and risk level, and marks each item with an application strategy such as "apply automatically", "requires real doctor confirmation", or "recommendation for reference only".
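The per-item strategy labeling can be sketched as a small classifier over each item's risk metadata. The field names and the rule that medication-related items always need confirmation are assumptions made for this sketch:

```python
def application_strategy(item: dict) -> str:
    """Assign an application strategy to one optimization item based on its
    impact scope and risk level (illustrative rules)."""
    if item.get("medication_related") or item.get("risk") == "high":
        return "requires_real_doctor_confirmation"
    if item.get("risk") == "medium":
        return "recommendation_only"
    return "apply_automatically"
```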
In one embodiment, the remote diagnosis and treatment cloud platform 202 determines items labeled "requires real doctor confirmation" or "recommendation for reference only" as key items (e.g., adjustments to medication-related intervention alerts or key vital-sign monitoring thresholds) and sends the key items to the real doctor's device. The real doctor can review the key items through the device and accept, modify, or reject each of them; the real doctor's final decision is recorded as a strategy update version and used as labeled data for subsequent self-learning of the model.
In an embodiment, the real doctor's device sends the final decision to the remote diagnosis and treatment cloud platform 202, and the remote diagnosis and treatment cloud platform 202 updates the optimized monitoring scheme, the optimized intervention scheme, and the optimized inquiry scheme based on the final decision and sends them to the home robot 201. The digital doctor agent in the home robot 201 can adjust task scheduling rules and dialogue strategies according to the updated schemes sent by the remote diagnosis and treatment cloud platform 202, and the robot adjusts its task timetable, device invocation strategy, and the like. Safety-related strategies that must take effect immediately (such as newly added alarm rules) are given elevated system priority and are distributed and enabled at once.
In one embodiment, the remote diagnosis and treatment cloud platform 202 uses the real doctor's modification behavior, the effect of applied strategies, and patient feedback as training data to continuously optimize the proprietary health and medical knowledge model. When a certain type of strategy is frequently modified manually across multiple patients, the system automatically adjusts its generation logic to reduce similar errors or inappropriate schemes; strategy combinations with proven good effect can be marked as "preferred paths" to serve as references when generating schemes for subsequent similar patients. Through the cycle of data acquisition, cloud analysis, strategy optimization, real doctor verification, issuing and execution, and re-acquisition, adaptive updating of the strategies of the agent and the real doctor is realized, forming a dynamically evolving health management closed loop.
According to the remote diagnosis and treatment system, the autonomously mobile home robot 201 and a digital doctor agent work cooperatively, using environment mapping, situation understanding, and behavior recognition to perform supervised monitoring, situational intervention, and agent inquiry in the user's real life scenarios, rather than passively waiting for the user to upload data or initiate operations. The diagnosis and treatment interface base of the home robot 201 can connect a set of basic diagnosis and treatment devices suitable for managing most chronic diseases, as well as special diagnosis and treatment devices matching the user's personalized health needs and disease types, allowing flexible combinations of diagnosis, detection, and monitoring equipment. Through the remote diagnosis and treatment cloud platform, the real doctor can either issue high-level instructions that the agent executes locally on the home robot 201 and its diagnosis and treatment devices in a standardized manner, or perform high-precision, force-feedback fine teleoperation through dedicated remote control equipment, thereby actively examining and treating the user. Data links among the patient side, the real doctor side, and the cloud are opened up: monitoring logs, intervention records, inquiry content, teleoperation records, and environmental situation data are continuously acquired and analyzed; monitoring, intervention, and inquiry schemes are generated and optimized based on a proprietary health/medical model; and real doctor review and feedback are introduced as training signals, realizing self-learning strategy updates through cooperation between the digital doctor agent and the real doctor, and forming a personalized health management closed loop that can evolve over the long term. In implementation, referring to fig. 3, a schematic structural diagram of one implementation of a remote diagnosis and treatment system according to an embodiment of the present application is shown. The home robot comprises an autonomous movement and situation sensing module 301, an interactive large screen and audio/video module 302, a lightweight/flexible mechanical arm module 303, a modular diagnosis and treatment interface base 304, a local control and communication module 305, and a digital doctor avatar module 306. The digital doctor avatar module 306 includes an autonomous monitoring and intervention scheduling sub-module 3061, a doctor access and intent mapping sub-module 3062, a context and scenario understanding sub-module 3063, an image rendering sub-module 3064, a behavior and speech synthesis sub-module 3065, and a multimodal interface generation sub-module 3066. The remote diagnosis and treatment cloud platform includes a doctor access and consultation management module 307, a device control and data management module 308, an intelligent analysis and scheme support module 309, and a system security and external interface module 3010.
The autonomous movement and situation sensing module 301 comprises a movement and positioning system, a lidar, an RGB-D camera, an IMU, and a depth sensor, and supports SLAM mapping and semantic modeling. The movement and positioning system adopts a wheeled chassis and a multi-sensor fusion navigation system to realize autonomous movement, path planning, and fixed-point stopping, and supports autonomous obstacle avoidance and user-approach actions in complex indoor environments. The module combines visual recognition and behavioral analysis to recognize semantic elements in the environment (e.g., furniture layout, lighting status, user activity area) and generate personalized context labels, such as "the user is dining at the table", "the user is resting on the sofa", or "the bedroom lights are off", which are used for the agent's behavior planning and for triggering intervention strategies.
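The context labels mentioned above could be derived from semantic-map and activity inputs roughly as follows; the inputs, label strings, and function name are hypothetical stand-ins for the module's actual fusion logic:

```python
def context_label(user_location: str, user_activity: str, lights_on: bool) -> str:
    """Combine semantic-map location, recognized activity and lighting state
    into a personalized context label used for intervention triggering."""
    if user_activity == "eating" and user_location == "dining_table":
        return "user_dining_at_table"
    if user_activity == "resting" and user_location == "sofa":
        return "user_resting_on_sofa"
    if not lights_on and user_location == "bedroom":
        return "bedroom_lights_off"
    # Generic fallback label for unrecognized combinations.
    return f"{user_activity}_at_{user_location}"
```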
The interactive large screen and audio/video module 302 is configured with a rotatable interactive large screen that integrates an RGB-D or depth camera for capturing the user's expressions, gestures, and interactive actions. An electric rotating mechanism can adjust the screen between vertical and overhead angles to support interaction and hand-eye coordination in different scenarios; for example, the screen can automatically tilt to an overhead angle when examination equipment is operated, assisting visual guidance of the mechanical arm. By default, the large screen always presents the doctor's avatar and provides an anthropomorphic visual interactive interface. In a daily, non-medical state, the user can also invoke other applications to play media or practical content (such as music, video, and health information), and the two types of interfaces can run simultaneously. When a critical task (remote inquiry, intervention, monitoring, or diagnosis) begins, the screen automatically interrupts other applications and switches to the doctor avatar interaction interface, ensuring the priority and safety of interaction and diagnosis tasks. The lightweight/flexible mechanical arm module 303 is configured with several types of lightweight or flexible mechanical arms for operating diagnosis and treatment devices (such as a blood pressure cuff, a B-ultrasound probe, or a blood sampling device), and is provided with multiple end effectors (clamping jaws, a probe holder, an injector interface, etc.); the digital doctor agent autonomously swaps end effectors according to the task type to adapt to different diagnosis, treatment, or sampling tasks.
The mechanical arm supports force feedback and limit control to ensure safety and comfort during interaction with the user, and supports two modes: autonomous operation by the agent and remote teleoperation by the doctor. The modular diagnosis and treatment interface base 304 decouples the robot body from the various diagnosis and treatment devices, forming a structure of a universal base plus pluggable devices. The base provides a unified physical mounting structure, data communication channel, and power supply interface, enabling plug-and-play of diagnosis and treatment equipment. The physical interface can take various forms, such as slide-rail, magnetic, or snap-fit, to accommodate devices of different sizes and mounting orientations. Both basic diagnosis and treatment devices and special devices matching the user's personalized health needs and disease types are supported. The local control and communication module 305 uniformly schedules task execution of the chassis, the mechanical arm, the interaction screen, and the diagnosis and treatment devices, and provides an interface for the agent and the cloud to call. It maintains a real-time connection to the remote cloud platform over an encrypted communication protocol to ensure data transmission security, monitors device states, power supply, and network connectivity in real time, and triggers local protection and cloud alarm mechanisms when anomalies occur.
The autonomous monitoring and intervention scheduling sub-module 3061 is responsible for maintaining the overall working-mode state machine, and generates an executable behavior plan from the monitoring scheme, intervention scheme, and inquiry scheme issued by the cloud platform, combined with local context labels and the user's state.
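A minimal sketch of such a working-mode state machine follows; the state and event names are assumptions chosen to match the modes discussed in this document (autonomous service, doctor access, emergency):

```python
class WorkModeStateMachine:
    """Illustrative working-mode state machine for the scheduling sub-module."""
    TRANSITIONS = {
        ("autonomous", "doctor_connect"): "doctor_access",
        ("doctor_access", "doctor_disconnect"): "autonomous",
        ("autonomous", "emergency"): "emergency",
        ("emergency", "resolved"): "autonomous",
    }

    def __init__(self) -> None:
        self.state = "autonomous"

    def on_event(self, event: str) -> str:
        # Unknown (state, event) pairs leave the state unchanged.
        self.state = self.TRANSITIONS.get((self.state, event), self.state)
        return self.state
```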
The doctor access and intent mapping sub-module 3062 is responsible for receiving the real doctor's voice instructions and interface operations from the cloud platform during remote access, performing semantic analysis and intent recognition, and mapping the doctor's intent to specific control instructions for the home robot. Before key operations, it cooperates with the cloud platform to trigger a doctor confirmation prompt; during execution, it performs safety monitoring and state feedback, and returns information such as execution results and failure reasons to the real doctor.
The context and scenario understanding sub-module 3063 integrates multimodal inputs from the home robot (SLAM map, spatial semantics, user position and posture, lighting/time information, etc.) and maintains the user's current context state, such as "eating at the dining table", "resting on the sofa", "asleep", or "watching TV". This state is used to decide whether to interrupt, how to interrupt, and when to perform a task, and to select tone, topics, and presentation (e.g., a lightweight small-window reminder in an entertainment scenario rather than complete questioning). Meanwhile, during agent inquiry and intervention, the digital doctor continuously updates the context state (such as user answers, emotional changes, and task completion) for subsequent rounds of dialogue and behavior planning.
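The whether/how-to-interrupt decision can be sketched as a small policy over the context state and the pending task's priority; states, priorities, and return values here are illustrative assumptions:

```python
def interruption_style(context: str, task_priority: str) -> str:
    """Decide whether and how to interrupt the user for a pending task."""
    if context == "asleep" and task_priority != "urgent":
        return "defer"                      # do not disturb a sleeping user
    if context in ("watching_tv", "dining") and task_priority == "routine":
        return "small_window_reminder"      # lightweight overlay, no full dialog
    return "full_interaction"
```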
The image rendering sub-module 3064 defines only one unified digital avatar for the real doctor, binding the doctor's portrait, gender, voice characteristics, and language style, ensuring cognitive continuity for the user across modes. The avatar in the agent's autonomous service mode is distinguished from that in the real doctor access mode by adjusting the background, clothing, and rendering style. At any moment, the foreground presents only one avatar identity, avoiding patient confusion between the digital doctor agent and the real doctor.
The behavior and speech synthesis sub-module 3065 coordinates the animations and prompt sound effects of the other on-screen interfaces in both the agent's autonomous service mode and the real doctor access mode, improving the consistency and comprehensibility of the overall interaction.
In the agent mode, the multimodal interface generation sub-module 3066 generates reminder cards, task progress bars, simple charts (blood sugar/blood pressure curves), questionnaire forms, and the like on the screen according to the digital doctor agent's tasks and prompting needs, and overlays the doctor avatar and a lightweight interface in a small window for non-intrusive reminders and information collection while video or another application is running. In the real doctor access mode, it performs background semantic analysis of the real doctor's real-time speech and operational intent and automatically selects and generates auxiliary display content. It automatically adjusts the interface layout and hierarchy so that information related to the current consultation topic is presented first, and combines prompt voices, loading animations, and similar feedback to indicate interface state changes and data loading progress.
The doctor access and consultation management module 307 is responsible for doctor identity and authority management, patient list and follow-up plan management, appointment and consultation scheduling, and session establishment and mode switching. It supports doctor account registration and authentication, and performs access control together with the hospital-side account system so that only authorized doctors can access specific patients and the corresponding device control channels. Doctors are provided with a patient list view grouped by disease type, risk level, and follow-up stage, and can configure remote follow-up plans and inquiry periods that are synchronized to the patient-side agent. The module receives appointment requests initiated by the patient-side agent; after the doctor confirms on a calendar interface, a consultation task is generated, and a consultation preparation reminder is triggered on the robot before the appointment time arrives. When a doctor initiates a remote consultation, an encrypted audio and video channel is established and a "doctor access mode" switching instruction is issued to the robot, so that the digital doctor avatar switches from agent mode to doctor mode and the interaction channels are unified.
The device control and data management module 308 is responsible for unified management and control of the home robots and diagnosis and treatment devices. Its main functions include recording the type, state, and configuration of each robot and its assembled diagnosis and treatment devices (such as a fingertip blood module, abdominal/gynecological ultrasound module, depression blood sampling module, or intramuscular injection module), and maintaining the device topology and usage status. In the instruction control mode, a high-level instruction issued by the doctor at the cloud (such as "start blood pressure measurement" or "abdominal ultrasound") is forwarded to the patient-side agent, which automatically generates and executes the action sequence. In the fine remote control mode, a low-latency channel is established with the doctor-side dedicated control equipment (haptic control stick, VR control terminal, etc.), and position, posture, and force feedback instructions are forwarded to realize real-time teleoperation of the mechanical arm and diagnosis and treatment devices. The module also receives vital-sign data, behavior logs, device state data, and doctor operation logs from the robot, and completes basic verification, distribution, and storage, providing raw input for the subsequent intelligent analysis module.
The intelligent analysis and scheme support module 309 is responsible for generating monitoring, intervention, and inquiry schemes by invoking the proprietary model, and for outputting optimization suggestions during fusion analysis of multi-source data.
The system security and external interface module 3010 provides security and interconnection capabilities for the whole remote diagnosis and treatment cloud platform.
The home robot 201 provided by the present application may, for example, further comprise a processor and a memory, wherein the memory stores a computer program, and the processor executes the computer program to cause the home robot 201 to perform the corresponding actions.
The processor may be an integrated circuit chip with signal processing capabilities. The processor may be a general-purpose processor, including at least one of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), and a Network Processor (NP); a Digital Signal Processor (DSP); an Application Specific Integrated Circuit (ASIC); a Field Programmable Gate Array (FPGA) or other programmable logic device; a discrete gate or transistor logic device; or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like capable of implementing or performing the methods, steps, and logic blocks disclosed in the embodiments of the present application.
The memory may be, but is not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The memory is used for storing a computer program, and the processor executes the computer program accordingly after receiving an execution instruction.
The present application also provides a computer storage medium for storing the computer program used by the above home robot 201. The computer storage medium may be a readable storage medium, a nonvolatile storage medium, or a volatile storage medium. For example, the computer storage medium may include, but is not limited to, a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media that can store program code.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative, for example, of the flow diagrams and block diagrams in the figures, which illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules or units in various embodiments of the application may be integrated together to form a single part, or the modules may exist alone, or two or more modules may be integrated to form a single part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, or the part thereof that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a smartphone, a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods according to the embodiments of the present application.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application.