
CN120544861A - Patient psychological assessment method, equipment and storage medium based on information interaction - Google Patents

Patient psychological assessment method, equipment and storage medium based on information interaction

Info

Publication number
CN120544861A
CN120544861A (application number CN202510419196.6A)
Authority
CN
China
Prior art keywords
psychological assessment
patient
psychological
multimodal
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202510419196.6A
Other languages
Chinese (zh)
Inventor
周蜜
刘坚
杨超
张温泉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji Hospital Affiliated to Tongji Medical College of Huazhong University of Science and Technology
Original Assignee
Tongji Hospital Affiliated to Tongji Medical College of Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji Hospital Affiliated to Tongji Medical College of Huazhong University of Science and Technology filed Critical Tongji Hospital Affiliated to Tongji Medical College of Huazhong University of Science and Technology
Priority application: CN202510419196.6A
Publication: CN120544861A
Legal status: Pending

Links

Classifications

    • G — PHYSICS
    • G16 — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H — HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 — ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/30 — for calculating health indices; for individual health risk assessment
    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 — Pattern recognition
    • G06F18/20 — Analysing
    • G06F18/25 — Fusion techniques
    • G06F18/254 — Fusion techniques of classification results, e.g. of results related to same input data
    • G06F18/256 — of results relating to different input data, e.g. multimodal recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Public Health (AREA)
  • Theoretical Computer Science (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Pathology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

The invention discloses a patient psychological assessment method, equipment, and storage medium based on information interaction, in the technical field of medical information processing. The method comprises: constructing a scene interaction module and collecting a multimodal dataset of a patient under a plurality of scene samples; performing spatio-temporal alignment on the multimodal dataset and inputting it into a multimodal fusion psychological assessment model, which learns the correlations among different modalities and outputs a plurality of psychological assessment indexes corresponding respectively to the plurality of scene samples; performing fusion analysis on these indexes to obtain a fused psychological assessment index; and sending the fused psychological assessment index to an interactive host computer for medical record storage. The invention solves the technical problems that traditional psychological assessment relies on manual observation, evaluates along a single dimension, cannot comprehensively capture emotional change, and is poor in accuracy, real-time performance, and comprehensiveness, and achieves the technical effects of automatically assessing the patient's psychological state by fusing multidimensional data and improving the accuracy, real-time performance, and comprehensiveness of psychological assessment.

Description

Patient psychological assessment method, equipment and storage medium based on information interaction
Technical Field
The invention relates to the technical field of medical information processing, in particular to a patient psychological assessment method, equipment and a storage medium based on information interaction.
Background
With the continued development of medical technology, mental health is increasingly regarded as an important aspect of disease treatment and rehabilitation, especially for serious diseases and complex treatments, where the patient's mental state has a significant impact on the therapeutic outcome. Patients with advanced disease (e.g., advanced liver cancer) often experience mood swings such as anxiety, depression, and fear during treatment. These psychological states not only affect the patient's compliance with treatment but may also affect the patient's immunity and overall health. Therefore, accurately and promptly assessing the patient's psychological state is important for improving the treatment outcome and the patient's quality of life.
Traditional methods of patient psychological assessment typically rely on manual observation and evaluation by doctors and psychologists. These methods are effective in some situations but still have limitations. First, they often require a large amount of manual work, the evaluation process depends on the experience of medical staff, and the results may be subjective and inconsistent. Second, traditional assessment is often limited to a single evaluation dimension (e.g., interviews or questionnaires) and cannot comprehensively evaluate the patient's psychological state. Third, because the patient's psychological state changes continuously with disease progression and the course of treatment, traditional methods have difficulty monitoring those changes in real time.
Disclosure of Invention
The application provides a patient psychological assessment method, equipment, and storage medium based on information interaction, which are used to solve the technical problems in the prior art that traditional psychological assessment relies on manual observation, evaluates along a single dimension, cannot comprehensively capture emotional change, and is poor in accuracy, real-time performance, and comprehensiveness.
In a first aspect, the application provides a patient psychological assessment method based on information interaction, comprising: constructing a scene interaction module, the scene interaction module comprising a plurality of scene samples; collecting a multimodal dataset of a patient through multimodal sensors under the plurality of scene samples; performing spatio-temporal alignment on the multimodal dataset and inputting it into a multimodal fusion psychological assessment model; according to the correlations among different modalities learned by the model, outputting a plurality of psychological assessment indexes corresponding respectively to the plurality of scene samples; performing fusion analysis on the plurality of psychological assessment indexes to obtain a fused psychological assessment index of the patient; and sending the fused psychological assessment index to an interactive host computer for medical record storage.
In a second aspect, the application provides patient psychological assessment equipment based on information interaction, comprising a scene interaction module, a multimodal data acquisition module, a multimodal fusion psychological assessment module, a fusion analysis module, and an assessment index interaction module. The scene interaction module comprises a plurality of scene samples. The multimodal data acquisition module is used to collect a multimodal dataset of a patient through multimodal sensors under the plurality of scene samples, the multimodal dataset comprising voice data, facial expression data, and behavior data. The multimodal fusion psychological assessment module is used to perform spatio-temporal alignment on the multimodal dataset, input it into a multimodal fusion psychological assessment model, and output, according to the correlations among different modalities learned by the model, a plurality of psychological assessment indexes corresponding respectively to the plurality of scene samples. The fusion analysis module is used to perform fusion analysis on the plurality of psychological assessment indexes to obtain a fused psychological assessment index of the patient. The assessment index interaction module is used to send the fused psychological assessment index to an interactive host computer, which stores it in the medical record.
In a third aspect of the present application, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of the first aspect.
One or more technical schemes provided by the application have at least the following technical effects or advantages:
The application provides a patient psychological assessment method, equipment, and storage medium based on information interaction in the technical field of medical information processing. By constructing a scene interaction module, multimodal data are collected under a plurality of scene samples; after spatio-temporal alignment the data are input into a multimodal fusion psychological assessment model, which learns the correlations among different modalities; fusion analysis then generates a comprehensive psychological assessment index for the patient, which is sent to an interactive host computer for medical record storage, supporting personalized treatment and psychological management. This solves the technical problems that traditional psychological assessment relies on manual observation, evaluates along a single dimension, cannot comprehensively capture emotional change, and is poor in accuracy, real-time performance, and comprehensiveness, and achieves the technical effects of automatically assessing the patient's psychological state by fusing multidimensional data and improving the accuracy, real-time performance, and comprehensiveness of psychological assessment.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the following description will briefly explain the drawings needed in the description of the embodiments, which are merely examples of the present invention, and other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a patient psychological assessment method based on information interaction according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a patient psychological assessment device based on information interaction according to an embodiment of the present application.
Reference numerals: scene interaction module 11; multi-mode data acquisition module 12; multi-mode fusion psychological assessment module 13; fusion analysis module 14; assessment index interaction module 15.
Detailed Description
The application provides a patient psychological assessment method, equipment and a storage medium based on information interaction, which are used for solving the technical problems that the traditional psychological assessment method in the prior art relies on manual observation, the assessment dimension is single, emotion change cannot be comprehensively captured, and the assessment accuracy, instantaneity and comprehensiveness are poor.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be noted that the terms "first," "second," and the like in the description of the present application and the above drawings are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In a first embodiment, as shown in fig. 1, the present application provides a method for psychological assessment of a patient based on information interaction, the method comprising:
and P10, constructing a scene interaction module, wherein the scene interaction module comprises a plurality of scene samples.
Specifically, the core objective of the step is to construct a scene interaction module, and the core function of the module is to provide rich scene simulation for a psychological assessment system, so that psychological reactions of patients in different scenes can be accurately captured. The design of the scenario interaction module should include a plurality of scenario samples, each representing a mental state reaction of the patient in a particular context. To fully evaluate the psychological condition of a patient, the scene sample should cover various possible psychological states and situations, ensuring that the model is able to evaluate different types of emotional responses.
The design of the scene interaction module firstly considers the construction of scene samples, wherein the scene samples refer to psychological reactions and behavioral patterns of patients under different situations. To cover the possible emotional changes of the patient entirely, these scenarios need to be defined according to clinical experience or psychological studies. For example, in the course of treatment of patients with advanced liver cancer, there may be situations of emotional states such as anxiety, depression, fear, etc., or psychological reactions of patients during different treatment phases (e.g., before and after chemotherapy). Each scene sample should include data such as the patient's language, facial expression, and behavioral response in that context.
For example, an anxiety scenario simulates the patient waiting for an important examination result: speech may be rapid with unstable intonation, the facial expression tense, and the body language restless (e.g., frequent hand rubbing). A depression scenario simulates the patient's psychological state when the illness worsens: speech may exhibit a low intonation, facial expressions may appear tired and dejected, and limb movements may be slow or stiff. A fear scenario simulates the fear a patient may experience after receiving an adverse treatment result: speech may be accompanied by noticeable tremor or high-frequency intonation changes, the facial expression may appear panicked, and limb movements may show tension or avoidance.
The design of the scene samples not only needs to cover the psychological state change of the patient, but also needs to consider the interaction condition of the patient and the medical staff. For example, patients may exhibit different emotional responses in face-to-face consultation or remote video interactions. Therefore, the design needs to ensure the diversity of scenes, and covers various psychological states of patients in different environments and treatment backgrounds.
And integrating all scene samples into a unified scene interaction module, so that the module can dynamically select a proper scene for interaction according to different patient situations and psychological reactions. In the optimization process of the module, a machine learning algorithm and a data analysis method are adopted, so that the system can automatically select a scene which is most matched with the current situation of the patient and trigger a corresponding psychological assessment flow. In addition, the module should have the ability to respond efficiently and feed back in real time, ensuring that the system can quickly adapt to psychological changes in the patient.
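The dynamic scene selection described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the scene names, feature keys, and nearest-profile distance measure are all assumptions for illustration.

```python
# Hypothetical sketch: pick the scene sample whose expected state profile
# is closest (squared distance) to the patient's current observed state.

def select_scene(patient_state, scene_profiles):
    """Return the name of the scene profile closest to the patient state."""
    def distance(profile):
        return sum((patient_state[k] - profile.get(k, 0.0)) ** 2
                   for k in patient_state)
    return min(scene_profiles, key=lambda name: distance(scene_profiles[name]))

# Toy profiles: normalized speech rate and observed tension per scene.
scenes = {
    "anxiety":    {"speech_rate": 0.8, "tension": 0.9},
    "depression": {"speech_rate": 0.2, "tension": 0.3},
    "fear":       {"speech_rate": 0.6, "tension": 0.95},
}
current = {"speech_rate": 0.75, "tension": 0.85}
print(select_scene(current, scenes))  # closest profile: anxiety
```

A real system would of course derive the patient-state features from the multimodal sensors rather than hand-coding them.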
And P20, collecting a multi-modal data set of the patient through a multi-modal sensor under the plurality of scene samples, wherein the multi-modal data set comprises voice data, facial expression data and behavior data.
Optionally, in a plurality of scene samples, data acquisition is performed on the patient through the multi-mode sensor, psychological states and behavioral characteristics of the patient are comprehensively captured, and multi-dimensional data support is provided for subsequent evaluation. The multi-modal data is a multi-source data set combining the voice, facial expression and behavioral characteristics of the patient, and the data are derived from different sensory information inputs and can reflect the overall performance of the patient in a complex psychological state.
In particular, each scene sample represents a particular context that the patient may experience, such as anxiety states waiting for treatment results, fear of facing a treatment regimen, and the like. In these scenarios, the patient is subjected to real-time data acquisition using multimodal sensors (e.g., voice recognition devices, facial recognition cameras, and behavior monitoring devices). The scene-specific data acquisition mode can ensure that acquired data has scene pertinence and context relevance.
Illustratively, the language content, intonation, and speech rate of the patient are captured by speech recognition techniques. Speech data can reflect the patient's emotional state: in an anxious state, speech is generally faster and more hurried with larger pitch fluctuations, while in a depressed state the intonation is relatively low, the speech rate is slow, and pauses are more frequent. A facial recognition camera analyzes the patient's expression changes; for example, furrowed brows may indicate anxiety, while facial stiffness or lack of expression may suggest a depressed mood. Through computer vision techniques (e.g., a convolutional neural network, CNN), the patient's micro-expression changes can be captured and classified in real time to judge the emotional state. These expressive features complement the shortcomings of the speech data and provide an additional dimension for psychological assessment. Behavior data are acquired through motion capture devices or video monitoring and include the patient's body language and behavioral patterns. For example, a patient may move hurriedly or repetitively in an anxiety scenario, while in a depression scenario movements may become sluggish or weak. The behavioral features are processed by an action recognition algorithm and a behavior analysis model and are analyzed jointly with the voice and facial expression data.
The multi-mode data can comprehensively reflect the psychological state of the patient in a specific scene, and provide multi-dimensional accurate support for analysis of a subsequent psychological assessment module. The multi-mode data acquisition mode not only enriches the dimension of psychological assessment, but also provides a powerful data base for subsequent personalized health propaganda and education.
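As a concrete illustration, one way a single multimodal observation could be represented in software is sketched below. The field names and example values are assumptions for illustration, not part of the disclosure.

```python
# Hypothetical record structure for one multimodal sample from one scene.
from dataclasses import dataclass, field

@dataclass
class MultimodalSample:
    scene: str                                    # scene sample identifier
    timestamp: float                              # seconds since session start
    speech: dict = field(default_factory=dict)    # e.g. rate, pitch variance
    face: dict = field(default_factory=dict)      # e.g. brow tension
    behavior: dict = field(default_factory=dict)  # e.g. motion energy

sample = MultimodalSample(
    scene="anxiety",
    timestamp=12.4,
    speech={"rate": 0.8, "pitch_var": 0.6},
    face={"brow_tension": 0.9},
    behavior={"motion_energy": 0.7},
)
print(sample.scene, sample.timestamp)
```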
And P30, performing spatio-temporal alignment on the multimodal dataset and inputting it into a multimodal fusion psychological assessment model, and outputting, according to the correlations among different modalities learned by the model, a plurality of psychological assessment indexes corresponding respectively to the plurality of scene samples.
Specifically, the acquired multi-mode data sets are subjected to space-time alignment and are processed through a multi-mode fusion psychological assessment model, and finally accurate psychological assessment indexes are generated. The multimodal dataset comprises speech data, facial expression data and behavioural data reflecting respectively the linguistic manifestation, mood swings and limb behaviour of the patient in different situations. By means of space-time alignment and multi-mode fusion technology, data information of different sources can be integrated, and a group of comprehensive psychological assessment results can be output for each scene sample.
Optionally, after multi-modal data acquisition, data from different sensors may first need to be spatially and temporally aligned. Spatiotemporal alignment refers to aligning data of different time sequences and sources onto a unified time axis, ensuring that all data reflect the mental state of the patient at the same time. For example, the speech data and the facial expression data may have different time stamps, respectively, and after time-space alignment, they are synchronized to the same point in time, ensuring that the speech, facial expression, and behavioral data at each point in time reflect the emotional state of the patient at the same time. The technical implementation of spatio-temporal alignment typically relies on data preprocessing algorithms to ensure consistency when all data is input to the fusion model.
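The temporal part of the alignment can be sketched as a nearest-neighbor resampling of each asynchronously timestamped stream onto a shared time axis. This is one simple realization, not the patented preprocessing algorithm; a production system might interpolate or window-average instead, and the timestamps below are illustrative.

```python
# Sketch: resample a timestamped stream onto a common time axis by
# picking, for each target time, the nearest sample in the stream.
import bisect

def align(stream, axis):
    """stream: time-sorted list of (t, value); axis: target timestamps."""
    times = [t for t, _ in stream]
    out = []
    for t in axis:
        i = bisect.bisect_left(times, t)
        if i == 0:
            out.append(stream[0][1])        # before first sample
        elif i == len(times):
            out.append(stream[-1][1])       # after last sample
        else:
            before, after = stream[i - 1], stream[i]
            # choose whichever neighbor is closer in time
            out.append(before[1] if t - before[0] <= after[0] - t else after[1])
    return out

speech = [(0.0, "s0"), (0.9, "s1"), (2.1, "s2")]
face   = [(0.1, "f0"), (1.0, "f1"), (1.9, "f2")]
axis = [0.0, 1.0, 2.0]
print(align(speech, axis), align(face, axis))
```

After this step every target timestamp carries one value per modality, so the speech, facial expression, and behavior features at each point reflect the patient's state at the same moment.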
After spatio-temporal alignment is complete, the data are input into the multimodal fusion psychological assessment model. The core of the model is to convert the various types of data into a comprehensive psychological assessment result by learning correlations between the different modalities (speech, facial expression, and behavior data). The information provided by each modality reflects the patient's psychological state in a different way: speech can reveal emotional intensity, the facial expression provides a visual expression of emotion, and the behavior data shows the patient's emotional response pattern. The model can extract high-level emotional features from the multimodal data through a convolutional neural network (CNN) or a recurrent neural network (RNN), and can assign different weights to different modalities through an attention mechanism so that the most critical modality information receives more attention during evaluation.
Moreover, the key of multi-modal fusion is to learn the relevance between modalities. For example, the model may recognize that in certain scenarios, anxiety speech features (e.g., increased speech speed, intonation changes) are often accompanied by tense facial expressions (e.g., frowning, eye enlargement) and restless performance (e.g., hands-on, walking). By learning the multi-modal features, the model can effectively fuse data of different modalities, thereby providing accurate psychological assessment.
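The attention-weighted fusion idea can be illustrated with a minimal sketch: each modality contributes an embedding and a scalar relevance score, a softmax over the scores yields the attention weights, and the fused representation is the weighted sum. The embeddings and scores below are toy values, not outputs of a trained model.

```python
# Sketch of attention-style modality fusion over toy embeddings.
import math

def softmax(xs):
    m = max(xs)                               # subtract max for stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def fuse(embeddings, scores):
    """Weighted sum of modality embeddings under softmax attention."""
    weights = softmax(scores)
    dim = len(embeddings[0])
    return [sum(w * emb[d] for w, emb in zip(weights, embeddings))
            for d in range(dim)]

speech_emb, face_emb, behavior_emb = [1.0, 0.0], [0.0, 1.0], [0.5, 0.5]
# Higher score for speech: the model "attends" to it most.
fused = fuse([speech_emb, face_emb, behavior_emb], scores=[2.0, 1.0, 0.5])
print([round(x, 3) for x in fused])
```

In a real model the scores themselves would be learned functions of the embeddings, so the weighting adapts per scene and per patient.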
According to the trained multi-mode fusion psychological assessment model, psychological assessment indexes are generated, and the psychological assessment indexes reflect psychological states of a patient in each scene sample, such as corresponding emotional states (such as anxiety, depression, fear and the like) and severity (such as mild, moderate and severe) of the patient in each scene. These assessment indicators will provide basis for subsequent health announcements and psychological interventions.
Further, after outputting the plurality of psychological assessment indexes corresponding to the plurality of scene samples, the embodiment of the present application further includes step P30a, where step P30a further includes:
Step P31a: according to the plurality of psychological assessment indexes, calculate the performance consistency of the patient's psychological assessment indexes under different scene samples to obtain a consistency coefficient.
Step P32a: when the consistency coefficient is greater than a preset threshold, perform weighted fusion analysis on the plurality of psychological assessment indexes to obtain a fused psychological assessment index.
Step P33a: when the consistency coefficient is less than the preset threshold, generate reminder information, re-acquire the patient's multimodal dataset under the plurality of scene samples according to the reminder information, and output an updated multimodal dataset.
Performance consistency is calculated over the patient's psychological assessment indexes under the different scene samples with the following formula:

$$C = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^2}{\sigma_T^2}\right)$$

where $C$ is the consistency coefficient, $k$ is the number of test items in each scenario, $\sigma_i^2$ is the variance of the $i$-th test item, $\sum_{i=1}^{k}\sigma_i^2$ is the sum of the variances of all test items, and $\sigma_T^2$ is the total variance weighted over all test items.
It should be understood that after outputting multiple psychological assessment indexes of the patient under different scene samples, in order to ensure the reliability and consistency of the assessment results, the assessment results are further optimized, so that the assessment results can comprehensively and accurately reflect the psychological states of the patient, and more accurate basis is provided for subsequent treatment and intervention.
Specifically, firstly, performance consistency calculation is required to be performed on a plurality of psychological assessment indexes so as to judge whether psychological reactions of patients in different scenes are stable and consistent. The core goal of the performance consistency calculation is to quantify the fluctuation degree of the evaluation index under different situations and know whether the psychological evaluation index under each scene shows a similar mode. If the psychological assessment index in different scenes is greatly different, it may mean that some data are abnormal, and the psychological state of the patient needs to be further confirmed.
The calculation of the consistency coefficient is the core of evaluating the consistency of the psychological indexes across the different scene samples. By way of example, it may be calculated by the following formula:

$$C = \frac{n}{n-1}\left(1 - \frac{\sum_{i=1}^{n} w_i\,\sigma_i^2}{\sigma_T^2}\right)$$

where $C$ denotes the consistency coefficient, $\sigma_i^2$ the variance of the $i$-th test item, $w_i$ the weight of the $i$-th test item, $n$ the total number of test items, and $\sigma_T^2$ the weighted total variance over all test items. The test items are the dimensions or measurement items of the psychological assessment in each scene, such as anxiety, depression, and other emotional states. The variance reflects the degree of fluctuation of each test item in a given scene: the larger the variance, the more markedly the index changes in that scene. From this calculation the consistency coefficient is obtained and used to judge whether the patient's emotional fluctuations are consistent across different situations.
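Reading the consistency coefficient as a Cronbach-alpha-style statistic over $k$ test items (an assumption consistent with the symbols described, with unit item weights), it could be computed as follows; the item scores are toy values.

```python
# Sketch: consistency coefficient as a Cronbach-alpha-style statistic.
# Assumes unit weights; item scores below are illustrative.
from statistics import pvariance

def consistency(item_scores):
    """item_scores: one list of scores per test item, across scene samples."""
    k = len(item_scores)
    item_var_sum = sum(pvariance(s) for s in item_scores)
    totals = [sum(col) for col in zip(*item_scores)]  # per-sample total score
    total_var = pvariance(totals)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Toy per-scene scores for three test items (e.g. anxiety, depression, fear)
# measured over four scene samples.
anxiety    = [3, 4, 3, 5]
depression = [2, 4, 3, 5]
fear       = [3, 5, 2, 5]
print(round(consistency([anxiety, depression, fear]), 3))  # prints 0.928
```

Items that move together across scenes drive the coefficient toward 1; items that fluctuate independently pull it down, which is exactly the "stable versus inconsistent reaction" signal the threshold test in step P32a/P33a relies on.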
If the consistency coefficient is greater than the preset threshold, the patient's psychological state changes consistently across different situations and the assessment result is reliable and stable. The assessment indexes can then be subjected to weighted fusion analysis to obtain a fused psychological assessment index that accurately represents the patient's overall psychological state. Weighted fusion analysis weights the evaluation result of each scene; the weights reflect the importance of each scene and its influence on the patient's psychological state, and can be derived from a deep learning model (e.g., a neural network) or a statistical method (e.g., a weighted average), which can automatically adjust the weights according to the importance of the different evaluation indexes and integrate the results of the several scenes into a final psychological assessment output. In this way the evaluation results of different scenes are combined into a unified psychological assessment index, providing a more comprehensive assessment result.
If the consistency coefficient is smaller than the preset threshold, the psychological assessment results differ excessively across scenes, which may indicate abnormal data or a need to adjust the assessment model. In this case, reminder information is generated to prompt the clinical staff or the AI system to re-acquire the patient's multimodal data. The re-collected data form an updated multimodal dataset, which is again spatio-temporally aligned and input into the multimodal fusion psychological assessment model, finally outputting an updated psychological assessment result to ensure the accuracy of the psychological assessment. This process improves the reliability and individualization of the psychological assessment and provides stronger support for subsequent treatment adjustment and health guidance.
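The branch logic of steps P31a–P33a can be sketched as follows; the threshold value, scene indexes, and scene weights are assumptions for illustration.

```python
# Sketch of the P32a/P33a decision: weighted fusion when consistency is
# acceptable, otherwise flag the data for re-acquisition.
def fuse_or_reacquire(indexes, weights, coeff, threshold=0.7):
    """indexes: per-scene assessment indexes; coeff: consistency coefficient."""
    if coeff <= threshold:
        return {"status": "reacquire",
                "message": "consistency below threshold; recollect multimodal data"}
    total = sum(weights)
    fused = sum(x * w for x, w in zip(indexes, weights)) / total
    return {"status": "ok", "fused_index": fused}

scene_indexes = [0.6, 0.7, 0.65]   # e.g. anxiety severity in three scenes
scene_weights = [0.5, 0.3, 0.2]    # assumed importance of each scene
print(fuse_or_reacquire(scene_indexes, scene_weights, coeff=0.9))
print(fuse_or_reacquire(scene_indexes, scene_weights, coeff=0.4))
```

The first call clears the threshold and yields a fused index; the second falls below it and returns the re-acquisition flag that drives step P33a.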
Further, when the consistency coefficient is greater than a preset threshold, step P32a of the embodiment of the present application further includes:
P32-1a, performing variance calculation on the plurality of psychological assessment indexes, and detecting the response sensitivity of the patient under the different scene samples, wherein the response sensitivity is obtained by calculating the fluctuation of each index across the scene samples;
P32-2a, obtaining identification scene samples whose response sensitivity is greater than a preset response sensitivity;
P32-3a, performing secondary multimodal data acquisition on the identification scene samples, and outputting a secondary multimodal dataset;
P32-4a, inputting the multimodal dataset and the secondary multimodal dataset into the multimodal fusion psychological assessment model, and updating the plurality of psychological assessment indexes.
Optionally, the evaluation process is further refined, and when the consistency coefficient is greater than a preset threshold value, not only the weighted fusion analysis is performed, but also a mechanism for detecting the reaction sensitivity can be introduced.
First, variance calculation is performed on the plurality of psychological assessment indexes to detect the patient's response sensitivity in different scenes. Variance is a standard measure of data fluctuation and here reflects the amplitude of the patient's emotional fluctuation in a specific scene. Response sensitivity refers to the intensity of the patient's reaction to external stimuli or changes in psychological factors under different circumstances: if a patient shows large psychological fluctuations in a certain scene in response to factors such as treatment results or disease changes, the patient's response sensitivity in that situation is high. For example, each psychological assessment indicator (such as anxiety or depression) in each scene has a corresponding variance value that describes the magnitude of its fluctuation in that scene. If the variance in a certain scene is significantly higher than in the other scenes, the patient's emotional response in that scene is more intense and the response sensitivity is stronger. By calculating the variance in each scene and comparing it with that of the other scenes, the scenes with higher response sensitivity can be identified, providing a basis for further data acquisition and analysis.
After the variance calculation is completed, the identification scene samples whose response sensitivity exceeds the preset threshold are automatically screened out. These identified scenes are the situations in which the psychological assessment indexes show significant fluctuation or instability; in them, the patient typically responds more strongly to psychological stimuli and may require more attention and intervention. By setting a reasonable response sensitivity threshold, the system automatically identifies the highly sensitive scene samples.
Once scene samples with high response sensitivity are identified, the system initiates a secondary multimodal data acquisition, i.e., it re-acquires the patient's multimodal dataset in these specific scenes. Through secondary acquisition the system obtains more accurate response data, compensating for errors or gaps that may exist in the first acquisition. For example, secondary acquisition may capture the patient's emotional fluctuations in highly sensitive scenes more accurately by extending the observation time, ensuring that all emotional responses are recorded. Besides speech and facial expressions, the secondary acquisition can also add monitoring of the patient's behavioral data, to better understand body language or other nonverbal behavior as the context changes.
And finally, inputting the first multi-modal data set and the secondarily acquired multi-modal data set into a multi-modal fusion psychological assessment model for processing. The adaptability of the model to the hypersensitive scene can be enhanced by adding the secondary data, so that the model can fully consider the emotion change of the patient under a specific situation, and the corresponding psychological assessment index is updated. By fusing the data of different time points, the model not only can reflect the psychological state of the patient in the sensitive scene more accurately, but also can further optimize the evaluation result. For example, if the patient's emotion fluctuations are large in certain situations, the system may adjust personalized health advice and treatment advice based on these fluctuations, thereby providing a more accurate intervention plan.
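The variance-based sensitivity screening described above can be sketched as follows; population variance stands in for the fluctuation measure, and the scene names, scores and threshold are hypothetical:

```python
from statistics import pvariance

def flag_sensitive_scenes(scene_scores, threshold):
    """Flag scenes whose index fluctuation (population variance) exceeds threshold.

    scene_scores: {scene: [repeated index measurements in that scene]}
    Returns the identification scene samples needing secondary acquisition.
    """
    sensitivity = {scene: pvariance(scores) for scene, scores in scene_scores.items()}
    return [scene for scene, var in sensitivity.items() if var > threshold]

# Hypothetical anxiety-index measurements repeated within each scene.
scene_scores = {
    "diagnosis disclosure": [0.9, 0.4, 0.8, 0.3],  # strong fluctuation
    "routine check":        [0.5, 0.5, 0.6, 0.5],  # stable
}
flagged = flag_sensitive_scenes(scene_scores, threshold=0.02)
# Only "diagnosis disclosure" exceeds the threshold and is re-acquired.
```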
Further, the embodiment of the present application further includes a step P30b, where the step P30b further includes:
P31b, constructing a multimodal sample dataset under the plurality of scene samples, wherein the multimodal sample dataset comprises voice sample data, facial expression sample data and behavior sample data;
P32b, extracting features from the voice sample data, the facial expression sample data and the behavior sample data to obtain voice feature data, facial feature data and behavior feature data;
P33b, initializing a multi-layer fusion network, performing layer-by-layer correlation training on the multi-layer fusion network according to the voice feature data, the facial feature data and the behavior feature data, and outputting a trained multimodal fusion psychological assessment model, taking an emotion recognition accuracy greater than a preset threshold as the convergence condition.
Specifically, the construction process of the multimodal fusion psychological assessment model is further refined. First, a multi-modal sample dataset is constructed containing a plurality of scene samples. Each scene sample represents a psychological response of the patient in different situations, and the multimodal sample data set includes speech sample data, facial expression sample data, and behavior sample data of the patient in each scene.
Wherein the voice sample data comprise recordings of the patient's speech in different scenes. Through speech recognition technology, the emotional information in the voice can be extracted, reflecting the patient's emotional fluctuations (such as anxiety, depression or fear) in a given scene. The facial expression sample data comprise facial expression information collected in different scenes through facial recognition technology, such as furrowed brows or raised mouth corners, which can reveal the patient's emotional state (such as pleasure or tension). The behavior sample data comprise the patient's behavior in different scenes, such as limb movements and their frequency, for example frequent pacing or finger trembling during anxiety.
After the dataset is constructed, feature extraction is performed on the data of each modality so that subsequent model training can draw more informative emotion recognition features from the raw data; this includes extracting voice feature data, facial feature data and behavior feature data. Illustratively, the voice feature data may capture emotion-related features, such as speech rate, pitch, voice intensity and tone, through speech signal processing techniques (e.g., Mel-frequency cepstral coefficients (MFCC) or waveform analysis). These features reflect the intensity and fluctuation of the patient's mood. The facial feature data may be obtained with computer vision algorithms (e.g., a convolutional neural network, CNN) that extract key features (facial expressions, gaze, mouth-corner movements, etc.) from facial images and map them into numerical data, helping to assess the patient's facial emotional expression. The behavior feature data may capture the patient's limb movement characteristics, including movement amplitude, frequency and duration, through action recognition algorithms (such as deep-learning-based gesture recognition or object tracking). These features help determine the patient's behavioral patterns and emotional responses.
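A full MFCC pipeline normally relies on a signal-processing library; as a stdlib-only sketch of two elementary speech features often computed alongside MFCCs (short-time energy and zero-crossing rate), with a hypothetical toy frame as input:

```python
def short_time_energy(frame):
    """Mean squared amplitude of one audio frame; a crude loudness/intensity cue."""
    return sum(x * x for x in frame) / len(frame)

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose sign changes; a crude pitch/voicing cue."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0))
    return crossings / (len(frame) - 1)

# Toy six-sample frame with alternating signs (hypothetical data, not real speech).
frame = [0.0, 0.5, -0.5, 0.5, -0.5, 0.5]
energy = short_time_energy(frame)   # (0 + 5 * 0.25) / 6
zcr = zero_crossing_rate(frame)     # 4 sign changes over 5 pairs
```

Real systems would frame the signal, apply a window, and feed a Mel filterbank before the cepstral transform; this sketch only shows the per-frame feature idea.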
Next, the multi-layer fusion network is initialized and trained layer by layer on the correlations among the voice, facial and behavior feature data. The network consists of multiple layers, each of which learns the correlation of different features. After initialization, layer-by-layer correlation training proceeds on the extracted feature data. Specifically, the training process includes: voice feature training, in which the model first learns to extract emotion-related patterns such as speech rate and intonation changes from the voice features; facial expression training, in which subsequent layers learn to recognize different emotional states from facial expression features, capturing subtle changes in the facial muscles; and behavior feature training, in which other layers of the network focus on extracting emotional fluctuation information from the behavior data and learn how to associate the patient's actions with emotions. During the training of each layer, the model continuously adjusts the network weights with emotion recognition accuracy as the optimization target, improving the training effect through a back-propagation algorithm (e.g., gradient descent).
The ultimate goal of training is that the emotion recognition accuracy is greater than a preset threshold, i.e., the model is able to accurately recognize the patient's emotional state on the test dataset. At this time, the training process of the model can be regarded as convergence, and the training process is ended. The obtained multi-modal fusion psychological assessment model can accurately assess the psychological state of the patient and output corresponding psychological assessment indexes.
Further, the step P31b of the embodiment of the present application further includes:
P31-1b, determining a plurality of emotional states, classifying the multi-modal sample data set according to the plurality of emotional states, and outputting the multi-modal sample data set under each emotional state;
P31-2b, generating a plurality of emotional state comparison libraries from the multimodal sample dataset under each emotional state;
P31-3b, traversing, by the multimodal fusion psychological assessment model, the plurality of emotional state comparison libraries for comparison according to the multimodal dataset, and outputting a plurality of psychological assessment indexes.
Optionally, the multi-modal sample dataset is further refined by first determining a plurality of emotional states that the patient may experience, such as common emotional states of anxiety, depression, fear, pleasure, anger, and the like. Each emotional state reflects the psychological response pattern of the patient in a specific situation, and is closely related to the physiological and psychological characteristics of the patient. And classifying the data according to the emotional state after the multi-modal sample dataset is collected.
Illustratively, voice data are classified because the patient's speech performance (pace, pitch, tone) varies significantly with emotion: speech may be rapid and jerky during anxiety, relaxed and pleasant during pleasure, and low and flat during depression. Based on these differences, the voice data are assigned to the corresponding emotion categories. Facial expression data are classified because facial expressions are very sensitive to emotional state: a patient may show tense expressions such as frowning or widely opened eyes when anxious, raised mouth corners when pleased, and tired or blank expressions when depressed. Facial expression features are extracted through computer vision algorithms and matched to the corresponding emotional state classes. Behavior data are classified because behavior reflects the patient's response to emotional states: anxiety is often accompanied by restless behavior such as hand rubbing or pacing; when pleased, the patient may exhibit more social behavior such as smiling and interaction; when depressed, behavior may become sluggish or stiff. Through this classification process, the multimodal data under the different emotional states are organized, providing an ordered structure for subsequent emotion comparison.
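The grouping of samples by emotional state can be sketched as a simple label-keyed partition; the sample payloads below are hypothetical:

```python
from collections import defaultdict

def classify_by_emotion(samples):
    """Group multimodal samples by their annotated emotional state.

    samples: list of dicts, each with an 'emotion' label plus modality payloads.
    Returns {emotion: [samples]} — the per-state datasets that feed the
    emotional state comparison libraries.
    """
    by_state = defaultdict(list)
    for sample in samples:
        by_state[sample["emotion"]].append(sample)
    return dict(by_state)

# Hypothetical annotated samples.
samples = [
    {"emotion": "anxiety",  "speech": "fast, jerky",  "face": "frown"},
    {"emotion": "pleasure", "speech": "relaxed",      "face": "smile"},
    {"emotion": "anxiety",  "speech": "trembling",    "face": "wide eyes"},
]
grouped = classify_by_emotion(samples)
```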
After the emotional state classification is completed, an emotional state comparison library is generated based on the multimodal data for each emotional state. The comparison library is a multi-modal data sample set containing each emotional state, and can provide a reference standard for subsequent emotion recognition and psychological assessment. Each emotional state comparison library contains characteristic information from voice data, facial expression data and behavior data, and all the data are extracted emotional characteristics and can reflect typical psychological behaviors of a patient in the emotional state. By means of the comparison library, the system can be rapidly compared with data in the multi-mode data set, and therefore the most matched emotion assessment result is output.
Once the emotional state comparison libraries are established, a new multimodal dataset (i.e., sample data of the patient's current emotional state) can be input into the multimodal fusion psychological assessment model and compared against the libraries. The multimodal data of the current patient are input into the model, which traverses the emotional state comparison libraries, calculates the similarity between the current data and the samples of each emotional state, and outputs the corresponding psychological assessment indexes according to the comparison results, usually the patient's current emotional states and their severity. For example, evaluation indexes such as "anxiety: high", "depression: medium" and "pleasure: low" are output. These indexes provide a detailed analysis of the emotional state, helping the medical staff or AI robot understand the patient's emotional changes in time and supporting subsequent intervention and treatment.
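The traversal of the comparison libraries can be sketched as a nearest-reference search; here similarity is measured as Euclidean distance over a two-dimensional feature vector, and all feature values are hypothetical:

```python
import math

def match_emotion(current, libraries):
    """Traverse emotion comparison libraries and return the closest state.

    current:   feature vector of the patient's current multimodal data.
    libraries: {emotion: [reference feature vectors]}.
    The best match is the state containing the reference sample at the
    smallest Euclidean distance from the current vector.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    best_state, best_d = None, float("inf")
    for state, refs in libraries.items():
        d = min(dist(current, r) for r in refs)
        if d < best_d:
            best_state, best_d = state, d
    return best_state

# Hypothetical 2-D features: [speech arousal, facial tension].
libraries = {
    "anxiety":  [[0.9, 0.8], [0.8, 0.9]],
    "pleasure": [[0.2, 0.1], [0.1, 0.2]],
}
state = match_emotion([0.85, 0.75], libraries)  # closest to the anxiety references
```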
And P40, carrying out fusion analysis on the multiple psychological assessment indexes to obtain the fusion psychological assessment indexes of the patient.
It should be appreciated that during the early multi-modal data acquisition and processing, multiple psychological assessment indicators have been extracted by different sensors (speech recognition, facial expression analysis, behavioral pattern recognition, etc.). These assessment indices provide preliminary quantitative information on the mental state of multiple dimensions, but a single index cannot fully describe the complex emotional response of a patient, so it is necessary to integrate it into one comprehensive index by fusion analysis.
First, each evaluation index needs to be assigned a different weight according to its relevance and importance. Different emotional states may be more sensitive to certain modal data (e.g., speech data) and less influential to other data (e.g., behavioral data). By giving appropriate weights to the various assessment indicators, the impact of the important indicators can be enhanced, ensuring that the final fusion result more represents the true psychological state of the patient. And combining the evaluation indexes of each dimension according to the weight of the evaluation indexes through weighted fusion, so as to generate comprehensive evaluation values of each dimension.
Furthermore, the comprehensive evaluation values of a plurality of data dimensions (such as voice, facial expression, behavior and the like) are integrated, so that the accuracy of the evaluation result is further enhanced. According to the characteristics of each dimension, the data can be comprehensively considered through statistical analysis or a machine learning model (such as a support vector machine, a neural network and the like), so that the full utilization of the information of each dimension is ensured.
In order to improve the accuracy of the evaluation result, the system also needs to perform feature selection on each psychological evaluation index. The purpose of feature selection is to select the most representative features from mass data, remove redundant data, reduce noise and ensure that the fused evaluation result is more accurate. For example, in the fusion analysis process, the features most representing the emotional state of the patient are automatically selected through principal component analysis, so that the prediction capability of the model is improved.
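The text names principal component analysis; as a simpler stand-in that illustrates the same goal of discarding low-information features, a variance-threshold selection can be sketched (feature names and values are hypothetical):

```python
from statistics import pvariance

def select_features(feature_matrix, names, min_variance):
    """Keep only features whose variance across samples exceeds min_variance.

    feature_matrix: rows are samples, columns are features.
    Near-constant features carry little emotional information and are dropped.
    """
    columns = list(zip(*feature_matrix))  # transpose: one tuple per feature
    return [name for name, col in zip(names, columns) if pvariance(col) > min_variance]

# Hypothetical extracted features for three samples.
feature_matrix = [
    [0.9, 0.50, 0.2],
    [0.1, 0.51, 0.8],
    [0.8, 0.50, 0.3],
]
names = ["speech_pitch", "blink_rate", "gesture_freq"]
kept = select_features(feature_matrix, names, min_variance=0.01)
# blink_rate is nearly constant and is removed as low-information.
```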
Finally, a comprehensive fusion psychological assessment index is output through weighted fusion and feature selection. The index integrates emotion information from multidimensional data such as voice, facial expression, behavior and the like, and provides a comprehensive and accurate psychological state assessment for a doctor.
And P50, sending the fusion psychological assessment index to an interaction upper computer, and storing medical records by the interaction upper computer.
Specifically, the fusion psychological assessment index is sent to the interaction upper computer, and the upper computer stores medical records. The core goal of this step is to effectively integrate the psychological assessment of the patient with their clinical treatment data, ensuring that the psychological state of the patient is fully reflected in the medical records, providing data support for subsequent treatment and health management.
The interactive upper computer is an integrated system used for receiving information from various data acquisition devices (such as sensors and analysis modules) and uniformly processing, storing and displaying it. The upper computer typically integrates a database management system, such as MySQL or MongoDB, for storing all health data of the patient, including clinical data, psychological assessment data, treatment records, and the like. After receiving the fused psychological assessment index, the interactive upper computer stores the psychological assessment data in association with the patient's other medical records according to the patient's unique identification information (such as patient ID or medical record number). The medical record of each patient may contain multi-dimensional information such as personal information, medical history, examination results, treatment plans, treatment effects, and psychological assessment indexes.
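The association of assessment data with a patient record by unique ID can be sketched with an in-memory stand-in for the upper computer's database (a real deployment would use MySQL or MongoDB as described; all IDs and values are hypothetical):

```python
class RecordStore:
    """Minimal in-memory stand-in for the upper computer's medical-record database."""

    def __init__(self):
        self.records = {}  # patient_id -> medical record dict

    def attach_assessment(self, patient_id, fused_index):
        """Associate a fused psychological assessment index with the patient's record,
        creating the record if it does not yet exist."""
        record = self.records.setdefault(patient_id, {"assessments": []})
        record["assessments"].append(fused_index)

store = RecordStore()
store.attach_assessment("P-001", {"anxiety": 0.74, "depression": 0.27})
```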
By sending the fusion psychological assessment index to the interaction upper computer and storing medical records, accurate recording and management of psychological states of patients can be achieved, safe storage of psychological assessment data is guaranteed, and necessary basis is provided for follow-up treatment, intervention decision-making and health tracking.
Further, the embodiment of the present application further includes a step P60, where the step P60 further includes:
P61, obtaining medical record information of the patient;
P62, extracting, according to the medical record information, the operation information to be executed for the patient, and obtaining the required psychological assessment index corresponding to the operation information to be executed;
P63, comparing the fused psychological assessment index with the required psychological assessment index, and obtaining a psychological-assessment-passed result if the fused psychological assessment index meets the required psychological assessment index;
P64, obtaining a psychological-assessment-failed result if the fused psychological assessment index does not meet the required psychological assessment index.
In one possible embodiment of the present application, the medical record information and psychological assessment index of the patient are compared to ensure that the psychological state of the patient matches the requirements of the operation to be performed. The key objective of this step is to determine whether it is appropriate to perform a procedure based on the current mental state of the patient, ensuring that the patient is able to obtain the necessary mental support and intervention prior to the procedure. And outputting a psychological assessment passing result or a psychological assessment failing result through the comparison of the fusion psychological assessment index and the required psychological assessment index, and providing guidance for clinical decision.
Specifically, first, medical record information of a patient needs to be acquired from an interactive host computer or a hospital information management system (HIS). The medical record information includes personal information of the patient, disease history, treatment records, diagnostic results, and all relevant health data. By uniquely identifying information (e.g., patient ID, medical record number, etc.), complete medical record information of the patient, including basic personal information, disease history, surgical records, etc., can be quickly retrieved from the database.
Then, the operation information to be executed for the patient, such as operation type, date and duration, operation risk and operation requirements, is extracted from the medical record information, and the required psychological assessment index is derived from the type of operation to be performed. The required psychological assessment index is extracted according to the specific requirements of the operation and generally represents the expected psychological state of the patient before the operation. Different types of surgery may require different psychological assessment indicators: anxiety assessment indicators, since some surgery, particularly major surgery, may require an assessment of the patient's anxiety level beforehand; depression assessment indicators, since some patients may need to be screened for depression before surgery to evaluate whether their emotional state is suitable; and emotional stability assessment indicators, since for some high-risk procedures the patient's emotional stability may require special attention to ensure that their psychological state during the procedure does not affect it.
Further, the fusion psychological assessment index is compared with the required psychological assessment index, and the fusion psychological assessment index is a comprehensive assessment result obtained through fusion analysis according to the voice, facial expression and behavior data of the patient. This index reflects the current mental state of the patient and takes into account a number of emotional dimensions (e.g., anxiety, depression, fear, etc.). And judging whether the fusion psychological assessment index meets the standard of the required psychological assessment index or not through a comparison algorithm (such as similarity calculation, threshold judgment and the like). For example, if the anxiety level of a patient is too high (i.e., the anxiety assessment exceeds a preset threshold), the patient may require further psychological intervention or treatment.
Finally, a psychological-assessment-passed or psychological-assessment-failed result is output according to the comparison. If the fused psychological assessment index meets the standard of the required psychological assessment index (for example, the anxiety score stays below the allowed ceiling), a passed result is output, indicating that the patient's psychological state is suitable for the operation; the operation can then be planned or performed without excessive psychological intervention. If the fused psychological assessment index fails to meet the standard (e.g., excessive anxiety or mood swings), a failed result is output, indicating that the patient's current psychological state may not be suitable for surgery; the patient may then need further psychological intervention, treatment or assessment until the psychological state meets the criteria. This process not only ensures that the patient's psychological state is effectively evaluated, but also provides data support for the decision of whether to perform the operation, improving the scientific rigor and accuracy of medical decision-making.
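The pass/fail comparison can be sketched as a per-index ceiling check; the index names, scores and limits below are hypothetical:

```python
def assess_readiness(fused, required_max):
    """Compare fused indexes against per-index ceilings required by the operation.

    fused:        {index: score}, where higher means more severe.
    required_max: {index: maximum acceptable score} for the planned operation.
    Returns (passed, violations): passed is True only when no index
    exceeds its ceiling; violations lists the offending scores.
    """
    violations = {
        name: fused[name]
        for name, limit in required_max.items()
        if fused.get(name, 0.0) > limit
    }
    return (not violations), violations

passed, violations = assess_readiness(
    fused={"anxiety": 0.74, "depression": 0.27},
    required_max={"anxiety": 0.6, "depression": 0.5},
)
# anxiety exceeds its ceiling, so the assessment does not pass.
```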
Further, step P64 of the embodiment of the present application further includes:
P64-1, if the fused psychological assessment index does not meet the required psychological assessment index, generating an intervention adjustment instruction according to the psychological-assessment-failed result;
P64-2, sending the intervention adjustment instruction to a medical terminal of the patient to remind the medical staff to perform intervention adjustment.
Optionally, when the patient's fused psychological assessment index fails to meet the required psychological assessment index, the system generates corresponding intervention adjustment instructions and sends them to the patient's medical terminal so that the medical staff can carry out appropriate psychological intervention. This ensures that the patient's psychological state can be adjusted before the operation, avoiding emotional problems that would affect the success rate of the operation and postoperative recovery.
Specifically, when the system detects that the fused psychological assessment index does not meet the required psychological assessment index, it indicates that the psychological state of the patient may have a certain risk, and intervention adjustment is needed. At this time, the system generates an intervention regulation instruction according to the psychological assessment failure result, and the contents of the instruction can include emotional intervention advice, operation delay advice and the like, and the aim of the instruction is to assist medical staff in regulating the emotional state of a patient through scientific psychological intervention so as to achieve the optimal psychological state before operation.
After generating the intervention adjustment instructions, the system will send these instructions to the patient's medical terminal via a medical information delivery system (such as a hospital information management system HIS or an electronic health record system EHR). The medical terminal refers to equipment used by medical staff, such as a smart phone, a tablet computer, a medical information terminal and the like. Through these terminals, medical personnel can in time receive the intervention instruction to carry out corresponding intervention operation according to the instruction.
After sending the intervention command, the medical staff verifies the psychological state of the patient according to the command and adjusts accordingly. At this time, the system may also dynamically monitor the mental state of the patient and adjust the intervention based on the patient's feedback after the intervention. For example, if the patient has improved emotion after receiving psychological distraction, the psychological assessment may be updated and a decision may be made as to whether to continue the intervention alert. The feedback mechanism of the system will ensure the effectiveness and timely adjustment of the intervention. The medical staff can also check the real-time feedback of the patient through the medical terminal as psychological intervention reference.
In summary, the embodiment of the application has at least the following technical effects:
The application collects multi-mode data under a plurality of scene samples by constructing a scene interaction module, inputs the data into a multi-mode fusion psychological assessment model by space-time alignment, learns the correlation among different modes and outputs psychological assessment indexes, generates comprehensive psychological assessment indexes of patients by fusion analysis, and sends the comprehensive psychological assessment indexes to an interaction upper computer for medical record storage, thereby supporting personalized treatment and psychological management.
The technical effects of automatically evaluating the psychological state of the patient by fusing multidimensional data and improving the accuracy, the instantaneity and the comprehensiveness of psychological evaluation are achieved.
In a second embodiment, based on the same inventive concept as the information-interaction-based patient psychological assessment method of the previous embodiment, as shown in fig. 2, the present application provides an information-interaction-based patient psychological assessment device. The evaluation device comprises:
A scene interaction module 11, said scene interaction module 11 comprising a plurality of scene samples.
The multi-modal data collection module 12 is configured to collect, by the multi-modal sensor, a multi-modal data set of the patient under the plurality of scene samples, wherein the multi-modal data set includes speech data, facial expression data, and behavior data.
The multi-modal fusion psychological assessment module 13 is configured to input a multi-modal fusion psychological assessment model by performing space-time alignment on the multi-modal data set, learn correlations between different modalities according to the multi-modal fusion psychological assessment model, and output a plurality of psychological assessment indexes corresponding to the plurality of scene samples respectively.
And the fusion analysis module 14, which is used for carrying out fusion analysis on the plurality of psychological assessment indexes to obtain the fused psychological assessment index of the patient.
And the evaluation index interaction module 15 is used for sending the fusion psychological evaluation index to an interaction upper computer, and the interaction upper computer stores medical records.
Further, the multimodal fusion psychological assessment module 13 is further configured to perform the following steps:
Performing performance consistency calculation on psychological assessment indexes of a patient under different scene samples according to the psychological assessment indexes to obtain consistency coefficients, performing weighted fusion analysis on the psychological assessment indexes to obtain fusion psychological assessment indexes when the consistency coefficients are larger than a preset threshold, generating reminding information when the consistency coefficients are smaller than the preset threshold, re-acquiring a multi-mode dataset of the patient under the scene samples according to the reminding information, and outputting an updated multi-mode dataset.
Further, the multimodal fusion psychological assessment module 13 is further configured to perform the following steps:
and performing performance consistency calculation on psychological assessment indexes of the patient under different scene samples, wherein the calculation formula is as follows:

$$\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_t^{2}}\right)$$

wherein $\alpha$ is the consistency coefficient, $k$ is the number of test items in each scenario, $\sigma_i^{2}$ is the variance of the i-th test item, $\sum_{i=1}^{k}\sigma_i^{2}$ is the sum of the variances of all test items, and $\sigma_t^{2}$ is the total variance weighted over all test items.
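Reading the consistency formula as the classical Cronbach's-alpha form, which is consistent with the variable definitions given (an assumption, since the original rendering is garbled), the coefficient can be computed as follows; the item scores are hypothetical:

```python
from statistics import pvariance

def consistency_coefficient(item_scores):
    """Consistency coefficient over k test items, each scored across scenarios.

    item_scores: list of k lists, item_scores[i][j] = score of item i in scenario j.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of per-scenario totals)
    """
    k = len(item_scores)
    item_var_sum = sum(pvariance(item) for item in item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]  # total per scenario
    return k / (k - 1) * (1 - item_var_sum / pvariance(totals))

# Two perfectly parallel items across three scenarios -> maximal consistency.
item_scores = [
    [2, 4, 6],  # item 1
    [3, 5, 7],  # item 2
]
alpha = consistency_coefficient(item_scores)
```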
Further, the multimodal fusion psychological assessment module 13 is further configured to perform the following steps:
Variance calculation is performed on the plurality of psychological assessment indexes to detect the patient's response sensitivity under the different scene samples, where the response sensitivity is obtained by calculating how strongly the indexes fluctuate within each scene sample. Identified scene samples whose response sensitivity exceeds a preset value are obtained, secondary multimodal data acquisition is performed for those identified scene samples, and a secondary multimodal dataset is output. The multimodal dataset and the secondary multimodal dataset are then input into the multimodal fusion psychological assessment model to update the plurality of psychological assessment indexes.
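The sensitivity-screening step above can be sketched as a per-scene variance check. This is a hedged illustration only; the scene names, index values, and threshold are invented for the example.

```python
def flag_sensitive_scenes(scene_indexes, sensitivity_threshold):
    """Flag scene samples whose assessment indexes fluctuate strongly.

    scene_indexes: dict of scene name -> list of psychological assessment
    index values recorded in that scene.
    Sensitivity is taken here as the variance of the indexes within a
    scene; scenes above the threshold are returned for re-acquisition.
    """
    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    return [
        scene for scene, values in scene_indexes.items()
        if variance(values) > sensitivity_threshold
    ]

scenes = {
    "waiting_room": [0.52, 0.55, 0.53],     # stable responses
    "pre_op_briefing": [0.30, 0.80, 0.45],  # strongly fluctuating
}
print(flag_sensitive_scenes(scenes, sensitivity_threshold=0.01))
# ['pre_op_briefing']
```

Only the flagged scenes would then be revisited for secondary data collection, which keeps the re-acquisition burden on the patient proportional to where the signal is actually unstable.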
Further, the multimodal fusion psychological assessment module 13 is further configured to perform the following steps:
A multimodal sample dataset is constructed under the plurality of scene samples, comprising voice sample data, facial expression sample data, and behavior sample data. Feature extraction is performed on the voice, facial expression, and behavior sample data to obtain voice feature data, facial feature data, and behavior feature data. A multi-layer fusion network is initialized and trained layer by layer on the correlations among the voice, facial, and behavior feature data; with an emotion recognition accuracy greater than a preset threshold as the convergence condition, a trained multimodal fusion psychological assessment model is output.
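The training loop above converges on emotion-recognition accuracy rather than a fixed epoch count. The following toy stand-in shows only that control flow: a single fused linear layer over the concatenated modality features, trained with a perceptron rule until accuracy exceeds the preset threshold. It is a deliberately simplified sketch, not the patented multi-layer fusion network; every name and the data are hypothetical.

```python
def train_fusion_model(samples, labels, accuracy_threshold=0.9, max_epochs=200):
    """Toy late-fusion trainer with an accuracy-based convergence condition.

    samples: list of feature dicts {"voice": [...], "face": [...], "behavior": [...]}
    labels:  binary emotion label per sample (0/1 for simplicity).
    """
    def concat(s):
        return s["voice"] + s["face"] + s["behavior"]

    def predict(w, b, s):
        return sum(wi * xi for wi, xi in zip(w, concat(s))) + b > 0

    w = [0.0] * len(concat(samples[0]))
    b = 0.0
    acc = 0.0
    for _ in range(max_epochs):
        for s, y in zip(samples, labels):
            if predict(w, b, s) != (y == 1):  # perceptron update on error
                sign = 1 if y == 1 else -1
                w = [wi + sign * xi for wi, xi in zip(w, concat(s))]
                b += sign
        acc = sum(predict(w, b, s) == (y == 1)
                  for s, y in zip(samples, labels)) / len(samples)
        if acc > accuracy_threshold:  # convergence condition from the text
            break
    return w, b, acc

# Two linearly separable toy samples: positive emotion has higher energy.
samples = [
    {"voice": [1.0], "face": [1.0], "behavior": [0.5]},
    {"voice": [-1.0], "face": [-1.0], "behavior": [-0.5]},
]
labels = [1, 0]
w, b, acc = train_fusion_model(samples, labels)
print(acc)  # 1.0
```

In the real model each modality would pass through its own encoder before fusion, and the layer-by-layer correlation training would operate on those encoder outputs; the accuracy-gated stopping rule is the part this sketch preserves.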
Further, the multimodal fusion psychological assessment module 13 is further configured to perform the following steps:
A plurality of emotional states is determined, the multimodal sample dataset is classified according to those emotional states, and a multimodal sample dataset for each emotional state is output. From the per-state sample datasets, a plurality of emotional state comparison libraries is generated. Given the multimodal dataset, the multimodal fusion psychological assessment model traverses and compares against the plurality of emotional state comparison libraries and outputs the plurality of psychological assessment indexes.
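A minimal way to picture the comparison libraries is one reference vector per emotional state, with traversal reduced to a nearest-centroid lookup. This is an assumption-laden sketch (the source does not specify the library representation or distance measure); the state names and feature values are invented.

```python
def build_state_libraries(labeled_samples):
    """Group multimodal feature vectors by emotion state and keep the
    per-state mean vector as a simple comparison library."""
    groups = {}
    for state, features in labeled_samples:
        groups.setdefault(state, []).append(features)
    return {
        state: [sum(r[i] for r in rows) / len(rows) for i in range(len(rows[0]))]
        for state, rows in groups.items()
    }

def match_state(libraries, features):
    """Traverse the state libraries and return the closest state
    (squared Euclidean distance) together with that distance."""
    best_state, best_dist = None, float("inf")
    for state, centroid in libraries.items():
        d = sum((f - c) ** 2 for f, c in zip(features, centroid))
        if d < best_dist:
            best_state, best_dist = state, d
    return best_state, best_dist

libs = build_state_libraries([
    ("calm", [0.1, 0.2]), ("calm", [0.3, 0.2]),
    ("anxious", [0.9, 0.8]), ("anxious", [0.7, 0.9]),
])
best = match_state(libs, [0.25, 0.15])
print(best[0])  # calm
```

The matched state (and the distance to each library) is the kind of raw signal from which per-scene psychological assessment indexes could then be derived.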
Further, the evaluation apparatus further includes:
The system further comprises a medical record information acquisition module, a to-be-executed operation information extraction module, an index comparison module, and a psychological assessment result acquisition module. The medical record information acquisition module is configured to acquire the patient's medical record information. The to-be-executed operation information extraction module is configured to extract the patient's pending surgery information from the medical record information and to obtain the required psychological assessment indexes corresponding to that surgery. The index comparison module is configured to compare the fused psychological assessment index with the required psychological assessment indexes: if the fused index satisfies the required indexes, a psychological-assessment-passed result is obtained; if it does not, a psychological-assessment-failed result is obtained.
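The pass/fail comparison step above can be sketched as checking each required index against the fused values. The index names, values, and the "fused value must reach the required minimum" rule are illustrative assumptions, not taken from the source.

```python
def assess_against_requirements(fused_indexes, required_indexes):
    """Compare fused psychological assessment indexes against the
    requirements attached to the pending surgery.

    Both arguments are dicts of index name -> value; an index passes when
    the fused value reaches the required minimum. Returns the overall
    verdict plus the list of failing index names.
    """
    failing = [
        name for name, required in required_indexes.items()
        if fused_indexes.get(name, float("-inf")) < required
    ]
    return ("pass" if not failing else "fail"), failing

fused = {"emotional_stability": 0.82, "anxiety_control": 0.55}
required = {"emotional_stability": 0.70, "anxiety_control": 0.60}
result = assess_against_requirements(fused, required)
print(result)  # ('fail', ['anxiety_control'])
```

Returning the specific failing indexes, rather than a bare verdict, is what would let a downstream module generate a targeted intervention adjustment instruction.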
In a third embodiment, based on the same inventive concept as the information-interaction-based patient psychological assessment method of the preceding embodiments, the present application further provides a computer-readable storage medium having a computer program stored thereon which, when executed by a processor, implements the method of the first embodiment.
The patient psychological assessment method based on information interaction has been described in detail above and will be clear to those skilled in the art; for brevity, the corresponding apparatus and storage medium of this embodiment are not described again here. Since the apparatus disclosed in the embodiment corresponds to the method disclosed in the embodiment, its description is relatively brief, and the relevant points can be found in the description of the method.
It should be noted that the numbering of the embodiments of the present application is for description only and does not imply any ranking of the embodiments. The foregoing description has been directed to specific embodiments of this specification. The processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results; in some embodiments, multitasking and parallel processing are also possible and may be advantageous.
The foregoing description of the preferred embodiments is not intended to limit the application to the precise forms disclosed; the specification and figures are merely exemplary illustrations of the present application. It will be apparent to those skilled in the art that various modifications and variations can be made without departing from the scope of the application, and any modifications, equivalents, combinations, and alternatives falling within the scope of the application or its equivalents are intended to be included within the scope of the application.

Claims (10)

1. A patient psychological assessment method based on information interaction, characterized in that the method comprises:
constructing a scene interaction module, the scene interaction module comprising a plurality of scene samples;
collecting, under the plurality of scene samples, a multimodal dataset of a patient through multimodal sensors, wherein the multimodal dataset comprises voice data, facial expression data, and behavior data;
performing spatiotemporal alignment on the multimodal dataset, inputting it into a multimodal fusion psychological assessment model, learning correlations between different modalities according to the multimodal fusion psychological assessment model, and outputting a plurality of psychological assessment indexes respectively corresponding to the plurality of scene samples;
performing fusion analysis on the plurality of psychological assessment indexes to obtain a fused psychological assessment index of the patient;
sending the fused psychological assessment index to an interactive host computer, the interactive host computer storing it in the medical record.
2. The method according to claim 1, wherein after outputting the plurality of psychological assessment indexes respectively corresponding to the plurality of scene samples, the method comprises:
calculating, from the plurality of psychological assessment indexes, the performance consistency of the patient's psychological assessment indexes under different scene samples to obtain a consistency coefficient;
when the consistency coefficient is greater than a preset threshold, performing weighted fusion analysis on the plurality of psychological assessment indexes to obtain the fused psychological assessment index;
when the consistency coefficient is smaller than the preset threshold, generating reminder information, re-collecting the patient's multimodal dataset under the plurality of scene samples according to the reminder information, and outputting an updated multimodal dataset.
3. The method according to claim 2, wherein the performance consistency of the patient's psychological assessment indexes under different scene samples is calculated as:
$\alpha = \dfrac{k}{k-1}\left(1 - \dfrac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_t^{2}}\right)$
where $\alpha$ is the consistency coefficient, $k$ is the number of test items in each scenario, $\sigma_i^{2}$ is the variance of the $i$-th test item, $\sum_{i=1}^{k}\sigma_i^{2}$ is the sum of the variances of all test items, and $\sigma_t^{2}$ is the weighted total variance of all test items.
4. The method according to claim 2, wherein when the consistency coefficient is greater than the preset threshold, the method further comprises:
performing variance calculation on the plurality of psychological assessment indexes to detect the patient's response sensitivity under different scene samples, wherein the response sensitivity is obtained by calculating the fluctuation of the indexes across the different scene samples;
obtaining identified scene samples whose response sensitivity exceeds a preset value;
performing secondary multimodal data collection based on the identified scene samples and outputting a secondary multimodal dataset;
inputting the multimodal dataset and the secondary multimodal dataset into the multimodal fusion psychological assessment model to update the plurality of psychological assessment indexes.
5. The method according to claim 1, further comprising:
constructing a multimodal sample dataset under the plurality of scene samples, the multimodal sample dataset comprising voice sample data, facial expression sample data, and behavior sample data;
performing feature extraction on the voice sample data, facial expression sample data, and behavior sample data to obtain voice feature data, facial feature data, and behavior feature data;
initializing a multi-layer fusion network, performing layer-by-layer correlation training on the multi-layer fusion network according to the voice feature data, facial feature data, and behavior feature data, and, with an emotion recognition accuracy greater than a preset threshold as the convergence condition, outputting a trained multimodal fusion psychological assessment model.
6. The method according to claim 5, wherein after constructing the multimodal sample dataset under the plurality of scene samples, the method further comprises:
determining a plurality of emotional states, classifying the multimodal sample dataset according to the plurality of emotional states, and outputting a multimodal sample dataset under each emotional state;
generating a plurality of emotional state comparison libraries from the multimodal sample datasets under the respective states;
the multimodal fusion psychological assessment model traversing and comparing against the plurality of emotional state comparison libraries according to the multimodal dataset, and outputting the plurality of psychological assessment indexes.
7. The method according to claim 1, wherein after the storage by the interactive host computer, the method further comprises:
acquiring the medical record information of the patient;
extracting the patient's pending surgery information from the medical record information, and obtaining the required psychological assessment indexes corresponding to the pending surgery information;
comparing the fused psychological assessment index with the required psychological assessment indexes, and obtaining a psychological-assessment-passed result if the fused psychological assessment index satisfies the required psychological assessment indexes;
obtaining a psychological-assessment-failed result if the fused psychological assessment index does not satisfy the required psychological assessment indexes.
8. The method according to claim 7, wherein if the fused psychological assessment index does not satisfy the required psychological assessment indexes, an intervention adjustment instruction is generated according to the psychological-assessment-failed result;
the intervention adjustment instruction is sent to the patient's medical terminal to remind medical staff to perform intervention adjustment.
9. A patient psychological assessment device based on information interaction, characterized in that the assessment device comprises:
a scene interaction module, the scene interaction module comprising a plurality of scene samples;
a multimodal data acquisition module configured to collect, under the plurality of scene samples, a multimodal dataset of the patient through multimodal sensors, wherein the multimodal dataset comprises voice data, facial expression data, and behavior data;
a multimodal fusion psychological assessment module configured to perform spatiotemporal alignment on the multimodal dataset, input it into a multimodal fusion psychological assessment model, learn correlations between different modalities according to the multimodal fusion psychological assessment model, and output a plurality of psychological assessment indexes respectively corresponding to the plurality of scene samples;
a fusion analysis module configured to perform fusion analysis on the plurality of psychological assessment indexes to obtain a fused psychological assessment index of the patient;
an evaluation index interaction module configured to send the fused psychological assessment index to an interactive host computer, the interactive host computer storing it in the medical record.
10. A readable storage medium having a computer program stored thereon, characterized in that the computer program, when executed by a processor, implements the method steps of any one of claims 1 to 8.
CN202510419196.6A 2025-04-03 2025-04-03 Patient psychological assessment method, equipment and storage medium based on information interaction Pending CN120544861A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202510419196.6A CN120544861A (en) 2025-04-03 2025-04-03 Patient psychological assessment method, equipment and storage medium based on information interaction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202510419196.6A CN120544861A (en) 2025-04-03 2025-04-03 Patient psychological assessment method, equipment and storage medium based on information interaction

Publications (1)

Publication Number Publication Date
CN120544861A true CN120544861A (en) 2025-08-26

Family

ID=96779119

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202510419196.6A Pending CN120544861A (en) 2025-04-03 2025-04-03 Patient psychological assessment method, equipment and storage medium based on information interaction

Country Status (1)

Country Link
CN (1) CN120544861A (en)

Similar Documents

Publication Publication Date Title
Skaramagkas et al. Multi-modal deep learning diagnosis of Parkinson’s disease—a systematic review
US11776669B2 (en) System and method for synthetic interaction with user and devices
CN117064388A (en) System for realizing mental disorder assessment analysis based on emotion recognition
CN118352083A (en) An AI-assisted assessment method and system for mental health during pregnancy
US20190013092A1 (en) System and method for facilitating determination of a course of action for an individual
CN114969557A (en) Propaganda and education pushing method and system based on multi-source information fusion
WO2026025413A1 (en) Methods, systems, and storage mediums for providing medical consultation services
Vakanski et al. Metrics for performance evaluation of patient exercises during physical therapy
Mousavi et al. Emotion recognition in adaptive virtual reality settings: Challenges and opportunities
Beschi et al. Advancing Remote Cognitive Assessments through Telemedicine with Cloud Computing and AdaBoost Classifiers
Wang et al. Visual Human Behavior Sensing and Understanding for Autism Spectrum Disorder Treatment: A Review.
Beriwal et al. Techniques for suicidal ideation prediction: a qualitative systematic review
Hsu et al. Movement disorder evaluation of Parkinson’s disease severity based on deep neural network models
CN120611149A (en) Guidance method, device, equipment and medium based on multimodal perception
CN119791659A (en) Mental health assessment method, system and storage medium based on virtual reality equipment
CN119650054A (en) Emotion management method, device, equipment and storage medium
Rumahorbo et al. Exploring recurrent neural network models for depression detection through facial expressions: A systematic literature review
CN120544861A (en) Patient psychological assessment method, equipment and storage medium based on information interaction
Abeydeera et al. Smart mirror with virtual twin
Despotovic et al. Multimodal Fusion for Vocal Biomarkers Using Vector Cross-Attention.
EP4342168B1 (en) Image capturing method
Zuo et al. Intelligent assessment algorithm of mental health status based on multimodal data fusion
US20250299836A1 (en) Artificial intelligence/machine learning-based bioinformatics platform for encephalopathy and multifactorial evidence-based analysis method
US20240347206A1 (en) Early detection of conditions and/or events
US20250017535A1 (en) Systems and methods for assessing a patient and predicting patient outcomes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination