
CN116206779A - Smart ward interaction system based on visual perception - Google Patents

Smart ward interaction system based on visual perception

Info

Publication number
CN116206779A
CN116206779A (publication number); CN202310475672.7A, CN202310475672A (application number)
Authority
CN
China
Prior art keywords
image
module
patient
processing unit
action state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310475672.7A
Other languages
Chinese (zh)
Inventor
王广政
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Mingtai Medical Equipment Group Co ltd
Original Assignee
Shandong Mingtai Medical Equipment Group Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Mingtai Medical Equipment Group Co ltd filed Critical Shandong Mingtai Medical Equipment Group Co ltd
Priority to CN202310475672.7A priority Critical patent/CN116206779A/en
Publication of CN116206779A publication Critical patent/CN116206779A/en
Pending legal-status Critical Current


Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H80/00 - ICT specially adapted for facilitating communication between medical practitioners or patients, e.g. for collaborative diagnosis, therapy or health monitoring
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0012 - Biomedical image inspection
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/13 - Edge detection
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 - Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/20 - for the management or administration of healthcare resources or facilities, e.g. managing hospital staff or surgery rooms
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30196 - Human being; Person
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30196 - Human being; Person
    • G06T2207/30201 - Face
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 - Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Epidemiology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Public Health (AREA)
  • Primary Health Care (AREA)
  • Biomedical Technology (AREA)
  • General Business, Economics & Management (AREA)
  • Business, Economics & Management (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Pathology (AREA)
  • Infusion, Injection, And Reservoir Apparatuses (AREA)

Abstract


Figure 202310475672

The invention relates to the technical field of ward interaction systems, and in particular to a smart ward interaction system based on visual perception. The system comprises a mobile module, a visual perception module, an interaction module, a processing module and a data storage module. The processing module drives the mobile module to each check point to collect images and determines the patient's action state: in the first action state it directs the visual perception module to capture drip images and judges whether the drip rate meets the standard; in the second action state it directs the interaction module to issue voice inquiries; in the third action state it extends the storage bin of the interaction module. The dwell time of the mobile module at each check point is also adjusted according to the patient's action state. Through this process the invention improves ward inspection efficiency while keeping information collection reliable, shortens the inspection window period, reduces disturbance to patients, and lightens the workload of medical staff.


Description

Smart ward interaction system based on visual perception
Technical Field
The invention relates to the technical field of ward interaction systems, in particular to an intelligent ward interaction system based on visual perception.
Background
A ward is the main space in which patients receive treatment and rehabilitation services. A more scientific, harmonious and humanized ward is a shared goal of hospital builders, managers and patients. However, because hospitals handle a large volume of clinical work and medical staff have limited capacity, it is difficult to meet patients' demands for fine-grained care during the treatment and rehabilitation stages, so smart ward interaction systems that better meet these demands are being deployed. At the present stage, smart ward interaction systems closely connect patients with medical staff and intelligent medical equipment by building an in-hospital Internet of Things platform, and in recent years smart ward interaction systems based on visual perception have also been developed and applied on a large scale.
Chinese patent publication No. CN114567701A discloses an intelligent ward system, characterized by comprising: a terminal comprising a shell and a circuit board, the shell being provided with a communication interface, a display screen and a camera, and the circuit board being arranged inside the shell; the circuit board carrying an infusion monitoring module, an electronic medical record module, a meal ordering module, a patrol module, a voice calling module and an equipment management module; a server for receiving and processing the data transmitted by the terminal; and a display screen arranged at the nurse station for displaying the infusion process and checking the data when the infusion is finished.
However, the prior art still has the following problems: it does not consider that a patient is in different action states in different time periods, which affects the interaction between the patient and the inspection robot, so inspection efficiency and reliability are poor and applicability is low.
Disclosure of Invention
In order to solve the above-mentioned problems, the present invention provides an intelligent ward interaction system based on visual perception, which comprises:
the moving module is used for moving among preset check points according to a preset track and comprises a body and a moving chassis arranged at the bottom of the body, so that the moving chassis drives the body to move;
the visual perception module is arranged on the mobile module and used for shooting images, and comprises a steering unit arranged on the body through a telescopic rod and an image acquisition unit arranged on the steering unit, so that the shooting direction of the image acquisition unit is changed through the steering unit;
the interaction module is arranged on the mobile module and comprises a voice unit, for issuing query voice and receiving voice feedback and voice instructions from the patient, and a storage bin for storing a contact oximeter and a contact sphygmomanometer; the storage bin is mounted inside the body on a sliding rail so that it can extend out of the body;
the interaction module is connected with a nurse station terminal so as to input inquiry voice to the interaction module through the nurse station terminal;
the processing module is connected with the mobile module, the visual perception module and the interaction module and used for controlling the mobile module, the visual perception module and the interaction module, and comprises an image processing unit, a first control unit, a second control unit and a mobile control unit which are mutually connected.
Further, the image processing unit is connected with the visual perception module and is used for controlling the image acquisition unit to shoot image data under a preset condition, the image data comprises a facial feature image, a body contour image and an infusion hanger area image, and the action state of a patient is judged based on the image data, and the action state comprises a first action state, a second action state and a third action state;
the first control unit is used for controlling the image acquisition unit to adjust its focal magnification to photograph the dropper and obtain a drip image when the image processing unit judges that the patient is in the first action state, and for judging whether the drip rate is abnormal based on the drip image;
the second control unit is used for controlling the interaction module to send out query voice when the image processing unit judges that the patient is in a second action state, recording received voice feedback and controlling the storage bin to extend out of the body when the image processing unit judges that the patient is in a third action state;
the movement control unit is used for controlling the movement module to move along a preset track between check points, determining waiting time based on the action state of the patient determined by the image processing unit and controlling the movement module to stay for the waiting time at the corresponding check point;
the preset condition is when the mobile module moves to a check point.
Further, the image processing unit controls the image acquisition unit to photograph all areas of the ward. When the mobile module reaches a check point, the image processing unit raises the telescopic rod and rotates the steering unit to different angles so that the image acquisition unit captures image data in different shooting directions: the infusion hanger area image is captured from the area above the sickbed corresponding to the check point, the body contour image is captured from the body of the patient on the sickbed, and the facial feature image is captured from the patient's face.
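The checkpoint routine above (raise the telescopic rod, steer to several angles, capture one image per shooting direction) can be sketched as follows. The robot interface, the three preset angles and all identifiers are hypothetical, since the patent specifies no API; a stub stands in for the hardware:

```python
class StubRobot:
    """Minimal stand-in for the body, telescopic rod and steering unit;
    records the commands it receives instead of moving real hardware."""
    def __init__(self):
        self.log = []

    def raise_telescopic_rod(self):
        self.log.append("rod_up")

    def steer_to(self, angle_deg):
        self.log.append(("steer", angle_deg))

    def capture(self):
        return object()  # placeholder for an image frame

def capture_checkpoint_images(robot):
    """One capture pass at a check point: rod up, then one shot per
    preset shooting direction (the angles here are illustrative)."""
    robot.raise_telescopic_rod()
    shots = {}
    for name, angle_deg in (("hanger_area", 60.0),    # above the sickbed
                            ("body_contour", 20.0),   # patient's body
                            ("face", 35.0)):          # patient's face
        robot.steer_to(angle_deg)
        shots[name] = robot.capture()
    return shots
```

A real deployment would replace the stub with drivers for the chassis, rod and steering joint; the routine itself only fixes the order of operations described in the text.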
Further, the image processing unit determines an action state of the patient based on the image data;
the visual judging unit judges whether eye-closing features appear in the facial feature image, judges the pose state corresponding to the body contour image and judges whether the infusion bottle contour appears in the infusion hanger area image;
if the infusion bottle outline appears in the infusion hanger area image, the image processing unit judges that the patient is in a first action state;
if the infusion bottle outline does not appear in the infusion hanger area image and the eye-closing feature does not appear in the facial feature image, the image processing unit judges that the patient is in a second action state;
and if the eye-closing feature appears in the facial feature image and the pose state corresponding to the body contour image is a lying state, the image processing unit judges that the patient is in a third action state.
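The three decision rules above form a small priority-ordered classifier: the infusion-bottle check is evaluated first, then the eye-closing and pose checks. A minimal sketch, assuming the boolean detector outputs (bottle contour found, eyes closed, lying pose) come from the upstream image recognition models that the patent leaves unspecified; all names are illustrative:

```python
from enum import Enum

class ActionState(Enum):
    """Illustrative labels for the three action states in the text."""
    FIRST = 1    # infusion bottle contour present: patient on a drip
    SECOND = 2   # no bottle and eyes open: awake, not on a drip
    THIRD = 3    # eyes closed and lying: patient asleep
    UNKNOWN = 0  # none of the stated rules matched

def classify_action_state(bottle_contour_found: bool,
                          eyes_closed: bool,
                          lying: bool) -> ActionState:
    # Rule order follows the description: the infusion hanger area
    # image is decisive before any facial or pose feature is consulted.
    if bottle_contour_found:
        return ActionState.FIRST
    if not eyes_closed:
        return ActionState.SECOND
    if lying:  # eyes_closed is already known to be True here
        return ActionState.THIRD
    return ActionState.UNKNOWN  # e.g. eyes closed but sitting upright
```

Note that the rules as written do not cover a patient whose eyes are closed but who is not lying down, hence the fallback value.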
Further, the image processing unit determines a pose state corresponding to the body contour image based on the body contour image;
the image processing unit identifies a head node and a foot node of a patient based on the body contour image, determines an included angle between a connecting line and a horizontal plane based on the connecting line of the head node and the foot node, compares the included angle with a preset pose reference included angle, and judges whether a pose state corresponding to the body contour image is a lying state or not according to a comparison result;
under the first included angle comparison condition, the image processing unit judges that the pose state corresponding to the body contour image is a lying state;
the first included angle comparison condition is that the included angle is smaller than or equal to the pose reference included angle.
Further, the first control unit controls the image acquisition unit to adjust its focal magnification to photograph the dropper and obtain a drip image; the first control unit also controls the image acquisition unit to adjust the focal magnification to read the medicament label in the infusion hanger area image, and the image processing unit looks up the standard drip rate interval of that medicament in the database based on the label.
Further, the first control unit determines whether a drip rate is abnormal based on the drip image;
the first control unit controls the image acquisition unit to adjust its focal magnification and photographs the dropper within a preset time to obtain the current drip rate, compares the current drip rate with the standard drip rate interval, and judges whether the drip rate is abnormal according to the comparison result;
under a first rate comparison condition, the first control unit judges that the dropping rate is not abnormal;
under a second rate comparison condition, the first control unit judges that the dropping rate is abnormal;
the first rate comparison condition is that the current dropping rate is in the standard dropping rate interval, and the second rate comparison condition is that the current dropping rate is not in the standard dropping rate interval.
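The two rate comparison conditions reduce to an interval-membership test on the measured rate. A sketch under the assumption that drops are counted over the preset observation window (drop detection in the drip image itself is out of scope here); function names are illustrative:

```python
def current_drip_rate(drop_count: int, elapsed_seconds: float) -> float:
    """Drip rate in drops per minute, from drops counted in the window."""
    return drop_count * 60.0 / elapsed_seconds

def drip_rate_abnormal(rate: float, standard_interval) -> bool:
    """First rate comparison condition: rate inside the standard drip
    rate interval -> not abnormal. Second condition: outside -> abnormal."""
    low, high = standard_interval
    return not (low <= rate <= high)
```

The standard interval itself is the one looked up from the medicament label, as described above.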
Further, the movement control unit adjusts the preset waiting time when the image processing unit judges that the patient is in different action states, wherein the movement control unit determines that the waiting time is the first waiting time when the image processing unit judges that the patient is in the first action state or the second action state;
the mobile control unit determines the waiting time to be the second waiting time when the image processing unit judges that the patient is in the third action state;
wherein the first latency is greater than the second latency.
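The dwell-time rule only fixes an ordering (the first waiting time must exceed the second); the concrete durations below are illustrative placeholders, not values from the patent:

```python
def checkpoint_wait_seconds(action_state: int,
                            first_wait: float = 120.0,
                            second_wait: float = 15.0) -> float:
    """Action states 1 and 2 get the longer first waiting time (an
    interaction window for the patient); state 3 (sleeping) gets the
    shorter second waiting time so the inspection moves on quickly."""
    if not first_wait > second_wait:
        raise ValueError("the first waiting time must exceed the second")
    return second_wait if action_state == 3 else first_wait
```

In practice the two durations would be tuned per ward; only their relative order is mandated by the text.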
Further, the system also comprises a data storage module, wherein the data storage module is connected with the visual perception module and the interaction module so as to store the data sent by the visual perception module and the interaction module.
Further, the contact oximeter and the contact sphygmomanometer stored in the storage bin are connected with the data storage module, so that detected data can be sent to the data storage module.
Compared with the prior art, the invention controls the mobile module through the processing module to move to each check point and collect images, and judges the action state of the patient: in the first action state the visual perception module is actuated to collect the drip image and judge whether the drip rate meets the standard; in the second action state the interaction module is controlled to issue a voice query; in the third action state the storage bin of the interaction module is extended; and the dwell time of the mobile module at each check point is controlled according to the patient's action state.
In particular, the image processing unit controls the image acquisition unit to capture image data comprising facial feature images, body contour images and infusion hanger area images. This image data characterizes the patient's action state well and provides the data basis for the system to subsequently execute different actions, realizing intelligent, unmanned automatic inspection and thereby reducing the burden on medical staff.
In particular, the image processing unit controls the image acquisition unit to capture image data and determines the patient's action state. In practice, different patients are in different action states at different times and cannot always interact with the interaction module, which degrades data collection. The invention therefore determines the patient's action state: the first action state characterizes a patient receiving an infusion, and the collected drip image is used to judge whether the current drip rate is abnormal; the second action state characterizes a patient who is awake and not receiving an infusion. The voice interaction unit is controlled to issue query voice to the patient in the first and second action states, improving the interaction effect of the interaction module, and only in the second action state can the patient use the contact sphygmomanometer and contact oximeter to measure vital signs by themselves.
In particular, the invention provides the interaction module. In practice, medical staff must routinely ask patients about daily symptoms such as defecation and respiration, which consumes considerable effort; moreover, patients may be in different action states when asked, so their rest is interrupted and their cooperation is poor. Query voice can therefore be entered into the interaction module in advance to replace the medical staff's inquiry, conducted when the patient is in the first or second action state, which improves the patient experience.
In particular, the movement control unit determines the waiting time based on the action state of the patient as judged by the image processing unit. In practice the patient can call medical staff or perform other operations through the interaction module, so the module stays for a certain time when the patient is in the first or second action state, providing an interaction window; when the patient is in the third action state the dwell time is reduced, which improves inspection efficiency, shortens the inspection window period and avoids missing abnormal conditions.
Drawings
Fig. 1 is a schematic structural diagram of an intelligent ward interaction system based on visual perception according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a processing module according to an embodiment of the invention.
Detailed Description
In order that the objects and advantages of the invention will become more apparent, the invention will be further described with reference to the following examples; it should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are merely for explaining the technical principles of the present invention, and are not intended to limit the scope of the present invention.
It should be noted that, in the description of the present invention, terms such as "upper," "lower," "left," "right," "inner," "outer," and the like indicate directions or positional relationships based on the directions or positional relationships shown in the drawings, which are merely for convenience of description, and do not indicate or imply that the apparatus or elements must have a specific orientation, be constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention.
Furthermore, it should be noted that, in the description of the present invention, unless explicitly specified and limited otherwise, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be either fixedly connected, detachably connected, or integrally connected, for example; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present invention can be understood by those skilled in the art according to the specific circumstances.
Referring to fig. 1 and 2, fig. 1 is a schematic structural diagram of an intelligent ward interaction system based on visual perception according to an embodiment of the present invention, and fig. 2 is a schematic structural diagram of a processing module according to an embodiment of the present invention, where the intelligent ward interaction system based on visual perception includes:
the moving module is used for moving among preset check points according to a preset track and comprises a body and a moving chassis arranged at the bottom of the body, so that the moving chassis drives the body to move;
the visual perception module is arranged on the mobile module and used for shooting images, and comprises a steering unit arranged on the body through a telescopic rod and an image acquisition unit arranged on the steering unit, so that the shooting direction of the image acquisition unit is changed through the steering unit;
the interaction module is arranged on the mobile module and comprises a voice unit, for issuing query voice and receiving voice feedback and voice instructions from the patient, and a storage bin for storing a contact oximeter and a contact sphygmomanometer; the storage bin is mounted inside the body on a sliding rail so that it can extend out of the body;
the processing module is connected with and controls the mobile module, the visual perception module and the interaction module, and comprises an image processing unit, a first control unit, a second control unit and a movement control unit which are connected with one another; the image processing unit is connected with the visual perception module, controls the image acquisition unit to capture image data under a preset condition, the image data comprising a facial feature image, a body contour image and an infusion hanger area image, and judges the action state of the patient based on the image data, the action state comprising a first action state, a second action state and a third action state;
the first control unit controls the image acquisition unit to adjust its focal magnification to photograph the dropper and obtain a drip image when the image processing unit judges that the patient is in the first action state, and judges whether the drip rate is abnormal based on the drip image;
the second control unit controls the interaction module to issue query voice and record the received voice feedback when the image processing unit judges that the patient is in the first action state or the second action state, and controls the storage bin to extend out of the body when the image processing unit judges that the patient is in the second action state;
the movement control unit is used for controlling the movement module to move along a preset track between check points, determining waiting time based on the action state of the patient determined by the image processing unit and controlling the movement module to stay for the waiting time at the corresponding check point;
the preset condition is when the mobile module moves to a check point.
Specifically, the invention does not limit the specific structure of the mobile module; the mobile chassis only needs to be able to move in all directions and, preferably, to be remotely controllable so that it moves according to the control instructions sent by the processing module. This is prior art and is not described in detail.
Specifically, the specific structures of the steering unit and the image acquisition unit are not limited: the steering unit can be a steering joint of the kind used in a mechanical arm, driving the image acquisition unit to rotate, and the image acquisition unit can be a camera with adjustable focal magnification, which is not described further here.
Specifically, the invention does not limit the specific structure of the interaction module; the voice unit only needs to emit and receive speech. Preferably, the voice unit can be provided with a speech recognition logic component that recognizes voice commands issued by the patient, converts them into electrical signals and sends them to the nurse station, realizing functions such as calling medical staff.
Specifically, the specific structure of the processing module is not limited; the processing module and each of its units may be logic components such as a field-programmable processor, a computer, or a processor within a computer, which is not described further here.
Specifically, the specific structures of the contact oximeter and the contact sphygmomanometer are not limited: the contact oximeter can be a clip-type or bracelet-type oximeter, and the contact sphygmomanometer can be a bracelet-type sphygmomanometer, which is not described further here.
Specifically, the check points are arranged at one side of each sickbed, with a single check point corresponding to a single sickbed.
Specifically, the method by which the image processing unit identifies the infusion bottle contour, the body contour, the head and foot nodes within the body contour, and the eye-closing feature in the facial feature image is not particularly limited; an image recognition model can be trained in advance and loaded into the image processing unit to realize these functions, which is prior art and is not repeated here.
Specifically, the image processing unit controls the image acquisition unit to photograph all areas of the ward. When the mobile module reaches a check point, the image processing unit raises the telescopic rod and rotates the steering unit to different angles so that the image acquisition unit captures image data in different shooting directions: the infusion hanger area image is captured from the area above the sickbed corresponding to the check point, the body contour image is captured from the body of the patient on the sickbed, and the facial feature image is captured from the patient's face.
Specifically, the image processing unit controls the image acquisition unit to capture image data comprising facial feature images, body contour images and infusion hanger area images. This image data characterizes the patient's action state well and provides the data basis for the system to subsequently execute different actions, realizing intelligent, unmanned automatic inspection and thereby reducing the burden on medical staff.
Specifically, the image processing unit determines the action state of the patient based on the image data;
the visual judging unit judges whether eye-closing features appear in the facial feature image, judges the pose state corresponding to the body contour image and judges whether the infusion bottle contour appears in the infusion hanger area image;
if the infusion bottle outline appears in the infusion hanger area image, the image processing unit judges that the patient is in a first action state;
if the infusion bottle outline does not appear in the infusion hanger area image and the eye-closing feature does not appear in the facial feature image, the image processing unit judges that the patient is in a second action state;
and if the eye-closing feature appears in the facial feature image and the pose state corresponding to the body contour image is a lying state, the image processing unit judges that the patient is in a third action state.
Specifically, the image processing unit determines a pose state corresponding to the body contour image based on the body contour image;
the image processing unit identifies a head node and a foot node of a patient based on the body contour image, determines an included angle between a connecting line and a horizontal plane based on the connecting line of the head node and the foot node, compares the included angle with a preset pose reference included angle, and judges whether a pose state corresponding to the body contour image is a lying state or not according to a comparison result;
under the first included angle comparison condition, the image processing unit judges that the pose state corresponding to the body contour image is a lying state;
the first included angle comparison condition is that the included angle is smaller than or equal to the pose reference included angle.
Specifically, as an implementation, the judging process may be as follows: the included angle α is compared with a preset pose reference included angle α', and whether the pose state corresponding to the body contour image is a lying state is judged according to the comparison result; if α ≤ α', the image processing unit determines that the pose state corresponding to the body contour image is a lying state;
wherein the pose reference included angle can be set manually by medical staff according to the actual situation; preferably, α' = 20° may be used.
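The angle test can be sketched as follows, assuming the head and foot nodes are given as image-plane coordinates. The function name and the default reference angle of 20° (taken from the preferred value above) are illustrative assumptions:

```python
import math

# Illustrative sketch of the lying-state test: the included angle between
# the head-foot connecting line and the horizontal is compared with a
# reference angle alpha' (staff-settable; 20 degrees used as the default).

def is_lying(head_xy, foot_xy, ref_angle_deg: float = 20.0) -> bool:
    dx = foot_xy[0] - head_xy[0]   # horizontal offset between the nodes
    dy = foot_xy[1] - head_xy[1]   # vertical offset between the nodes
    if dx == 0 and dy == 0:
        raise ValueError("head and foot nodes coincide")
    angle = math.degrees(math.atan2(abs(dy), abs(dx)))
    return angle <= ref_angle_deg  # first included-angle comparison condition

# a nearly horizontal body line (small included angle) is judged as lying
assert is_lying((0, 0), (100, 10)) is True
assert is_lying((0, 0), (10, 100)) is False
```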
According to the invention, the image processing unit controls the image acquisition unit to capture image data and determines the action state of the patient. In practice, the action states of different patients differ across time periods; a patient may be unable to interact with the interaction module, and the data acquisition effect would then be poor. The invention therefore determines the action state of the patient: the first action state characterizes a patient undergoing infusion, so the drip image is acquired to judge whether the current drip rate is abnormal, and the second action state characterizes a patient who is neither sleeping nor on infusion. The voice interaction unit is controlled to send inquiry voice to patients in the first and second action states, improving the interaction effect of the interaction module, and the storage bin is extended only in the second action state, so that the patient can use the contact sphygmomanometer and the contact oximeter unaided to measure vital signs.
Specifically, the first control unit controls the image acquisition unit to adjust the focusing magnification to photograph the dropper and obtain a drip image; the first control unit also controls the image acquisition unit to adjust the focusing magnification to acquire the label of the medicament in the infusion hanger area image, and the image processing unit confirms the standard drip rate interval of the medicament in the database based on that label.
Specifically, when the image acquisition unit acquires the label of the medicament in the infusion hanger area image, the steering unit is actuated to photograph the infusion bottle from multiple angles. As an implementation, preferably, the steering unit is a rotating joint on a mechanical arm whose one end is mounted on the telescopic rod; when the moving module reaches a check point, the first control unit extends the telescopic rod and actuates the steering unit to photograph the infusion bottle from multiple angles, acquiring a set of images, and the image processing unit identifies the label in each image of the set one by one. The degrees of freedom of the mechanical arm and the link structure can be set by a person skilled in the art according to specific requirements.
The first control unit determines whether a drip rate is abnormal based on the drip image;
the first control unit controls the image acquisition unit to adjust the focusing magnification, photographs the dropper for a preset time to obtain the current drip rate, compares the current drip rate with the standard drip rate interval, and judges whether the drip rate is abnormal according to the comparison result;
when adjusting the focusing magnification, the focusing magnification is increased;
under a first rate comparison condition, the first control unit judges that the drip rate is not abnormal;
under a second rate comparison condition, the first control unit judges that the drip rate is abnormal;
the first rate comparison condition is that the current drip rate falls within the standard drip rate interval, and the second rate comparison condition is that the current drip rate falls outside the standard drip rate interval.
Specifically, when obtaining the drip rate, the first control unit may calculate it by identifying the total number of drops within a certain time period; the invention does not particularly limit the manner of identifying the total number of drops.
In particular, as an implementation, the drip rate can be calculated according to formula (1):

v = N / T (1)

wherein v is the current drip rate, T is the preset time period in seconds (its preferred value may be set as required), and N is the total number of drops counted within the preset time period.
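Formula (1) and the two rate comparison conditions can be sketched together. The function names and the interval values below are placeholders for illustration, not clinical guidance:

```python
# Sketch of formula (1) and the rate comparison conditions: the current
# drip rate v is the drop count N over the preset period T, and it is
# normal only if it falls inside the medicament's standard drip rate interval.

def current_drip_rate(total_drops: int, period_s: float) -> float:
    """v = N / T, in drops per second."""
    if period_s <= 0:
        raise ValueError("period must be positive")
    return total_drops / period_s

def rate_is_abnormal(rate: float, standard: tuple) -> bool:
    """Second rate comparison condition: rate outside the standard interval."""
    low, high = standard
    return not (low <= rate <= high)

v = current_drip_rate(total_drops=45, period_s=60.0)  # 0.75 drops/s
assert rate_is_abnormal(v, (0.5, 1.0)) is False       # within interval: normal
assert rate_is_abnormal(v, (1.0, 2.0)) is True        # below interval: abnormal
```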
Specifically, the first control unit is provided with a database storing the standard drip rate intervals of different medicaments; the source of the data in the database can be set by a person skilled in the art based on the standard drip rate intervals required for different medicaments in clinical medication guidelines, and is not described in detail herein.
Specifically, the movement control unit determines the waiting time based on the action state of the patient determined by the image processing unit, wherein the movement control unit determines the waiting time to be a first waiting time when the image processing unit judges that the patient is in the first action state or the second action state;
the mobile control unit determines the waiting time to be the second waiting time when the image processing unit judges that the patient is in the third action state;
wherein the first latency is greater than the second latency.
Specifically, the first waiting time may be set within [10, 30] seconds and the second waiting time within [0, 2] seconds, consistent with the first waiting time being greater than the second.
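The waiting-time rule can be sketched as below; the concrete second values are illustrative picks, chosen only to satisfy the stated requirement that the first waiting time exceed the second:

```python
# Minimal sketch of the waiting-time rule: the robot lingers at a check
# point when interaction is possible (first/second action state) and
# moves on quickly when the patient is asleep (third action state).

FIRST_WAIT_S = 20.0   # longer stay: give the patient time to interact
SECOND_WAIT_S = 1.0   # brief stay when the patient is asleep

def waiting_time(action_state: int) -> float:
    if action_state in (1, 2):
        return FIRST_WAIT_S
    if action_state == 3:
        return SECOND_WAIT_S
    raise ValueError(f"unknown action state: {action_state}")

assert waiting_time(1) == waiting_time(2) == 20.0
assert waiting_time(3) == 1.0
```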
Specifically, the invention further comprises a data storage module, wherein the data storage module is connected with the visual perception module and the interaction module so as to store data sent by the visual perception module and the interaction module.
Specifically, the contact oximeter and the contact sphygmomanometer stored in the storage bin are connected with the data storage module, so that detected data can be sent to the data storage module.
Specifically, the movement control unit determines the waiting time based on the action state of the patient determined by the image processing unit, and the stay time is reduced when the patient is in the third action state. In practice, the patient can call medical staff or perform other operations through the interaction module; the system therefore stays for a certain time when the patient is in the first or second action state, providing interaction time for the patient, and shortens the stay when the patient is in the third action state, which improves inspection efficiency, shortens the inspection empty-window period, and avoids missing abnormal conditions.
Thus far, the technical solution of the present invention has been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of protection of the present invention is not limited to these specific embodiments. Equivalent modifications and substitutions for related technical features may be made by those skilled in the art without departing from the principles of the present invention, and such modifications and substitutions will be within the scope of the present invention.
The foregoing description is only of the preferred embodiments of the invention and is not intended to limit the invention; various modifications and variations of the present invention will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An intelligent ward interaction system based on visual perception, comprising:
the moving module is used for moving among preset check points according to a preset track and comprises a body and a moving chassis arranged at the bottom of the body, so that the moving chassis drives the body to move;
the visual perception module is arranged on the mobile module and used for shooting images, and comprises a steering unit arranged on the body through a telescopic rod and an image acquisition unit arranged on the steering unit, so that the shooting direction of the image acquisition unit is changed through the steering unit;
the interaction module is arranged on the mobile module and comprises a voice unit, for sending out query voice and receiving voice feedback and voice instructions from the patient, and a storage bin for storing the contact oximeter and the contact sphygmomanometer, the storage bin being mounted inside the body via a sliding rail so that it can extend out of the body;
the interaction module is connected with a nurse station terminal so as to input inquiry voice to the interaction module through the nurse station terminal;
the processing module is connected with the mobile module, the visual perception module and the interaction module and used for controlling the mobile module, the visual perception module and the interaction module, and comprises an image processing unit, a first control unit, a second control unit and a mobile control unit which are mutually connected.
2. The visual perception-based intelligent ward interaction system of claim 1, wherein the image processing unit is connected with the visual perception module and is used for controlling the image acquisition unit to shoot image data under preset conditions, wherein the image data comprises facial feature images, body contour images and infusion hanger area images, and the action state of a patient is judged based on the image data, and the action state comprises a first action state, a second action state and a third action state;
the first control unit is used for controlling the image acquisition unit to adjust the focusing magnification to photograph the dropper and obtain a drip image when the image processing unit judges that the patient is in the first action state, and for judging whether the drip rate is abnormal based on the drip image;
the second control unit is used for controlling the interaction module to send out query voice and record the received voice feedback when the image processing unit judges that the patient is in the first action state or the second action state, and for controlling the storage bin to extend out of the body when the image processing unit judges that the patient is in the second action state;
the movement control unit is used for controlling the movement module to move along a preset track between check points, determining waiting time based on the action state of the patient determined by the image processing unit and controlling the movement module to stay for the waiting time at the corresponding check point;
the preset condition is when the mobile module moves to a check point.
3. The visual perception-based intelligent ward interaction system according to claim 2, wherein the image processing unit controls the image acquisition unit to capture images of each area of the ward, wherein the image processing unit raises the telescopic rod when the moving module reaches a check point and rotates the steering unit to different angles so that the image acquisition unit captures image data in different shooting directions, wherein the infusion hanger area image in the image data is obtained by the image acquisition unit imaging the area above the sickbed corresponding to the check point, the body contour image is obtained by imaging the body of the patient on the sickbed, and the facial feature image is obtained by imaging the face of the patient on the sickbed.
4. The visual perception-based intelligent ward interaction system of claim 2, wherein the image processing unit determines an action state of a patient based on the image data;
the visual judging unit judges whether eye-closing features appear in the facial feature image, judges the pose state corresponding to the body contour image and judges whether the infusion bottle contour appears in the infusion hanger area image;
if the infusion bottle outline appears in the infusion hanger area image, the image processing unit judges that the patient is in a first action state;
if the infusion bottle outline does not appear in the infusion hanger area image and the eye-closing feature does not appear in the facial feature image, the image processing unit judges that the patient is in a second action state;
and if the eye-closing feature appears in the facial feature image and the pose state corresponding to the body contour image is a lying state, the image processing unit judges that the patient is in a third action state.
5. The visual perception-based intelligent ward interaction system of claim 4, wherein the image processing unit determines a pose state corresponding to the body contour image based on the body contour image;
the image processing unit identifies a head node and a foot node of a patient based on the body contour image, determines an included angle between a connecting line and a horizontal plane based on the connecting line of the head node and the foot node, compares the included angle with a preset pose reference included angle, and judges whether a pose state corresponding to the body contour image is a lying state or not according to a comparison result;
under the first included angle comparison condition, the image processing unit judges that the pose state corresponding to the body contour image is a lying state;
the first included angle comparison condition is that the included angle is smaller than or equal to the pose reference included angle.
6. The visual perception-based intelligent ward interaction system according to claim 2, wherein the first control unit controls the image acquisition unit to adjust the focusing magnification to photograph the dropper and obtain a drip image,
wherein the first control unit further controls the image acquisition unit to adjust the focusing magnification to acquire the label of the medicament in the infusion hanger area image, and the image processing unit confirms the standard drip rate interval of the medicament in the database based on the label of the medicament.
7. The visual perception-based intelligent ward interaction system according to claim 2, wherein the first control unit determines whether the drip rate is abnormal based on the drip image;
the first control unit controls the image acquisition unit to adjust the focusing magnification, photographs the dropper for a preset time to obtain the current drip rate, compares the current drip rate with the standard drip rate interval, and judges whether the drip rate is abnormal according to the comparison result;
under a first rate comparison condition, the first control unit judges that the drip rate is not abnormal;
under a second rate comparison condition, the first control unit judges that the drip rate is abnormal;
the first rate comparison condition is that the current drip rate falls within the standard drip rate interval, and the second rate comparison condition is that the current drip rate falls outside the standard drip rate interval.
8. The visual perception-based intelligent ward interaction system according to claim 2, wherein the movement control unit determines the waiting time based on the action state of the patient determined by the image processing unit, wherein,
the mobile control unit determines the waiting time to be a first waiting time when the image processing unit judges that the patient is in a first action state or a second action state;
the mobile control unit determines the waiting time to be the second waiting time when the image processing unit judges that the patient is in the third action state;
wherein the first latency is greater than the second latency.
9. The visual perception-based intelligent ward interaction system of claim 1, further comprising a data storage module coupled to the visual perception module and the interaction module to store data transmitted by the visual perception module and the interaction module.
10. The visual perception based intelligent ward interaction system of claim 9, wherein the contact oximeter and the contact sphygmomanometer stored in the storage compartment are each connected to the data storage module to send detected data to the data storage module.
CN202310475672.7A 2023-04-28 2023-04-28 Wisdom ward interactive system based on visual perception Pending CN116206779A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310475672.7A CN116206779A (en) 2023-04-28 2023-04-28 Wisdom ward interactive system based on visual perception


Publications (1)

Publication Number Publication Date
CN116206779A true CN116206779A (en) 2023-06-02

Family

ID=86513279

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310475672.7A Pending CN116206779A (en) 2023-04-28 2023-04-28 Wisdom ward interactive system based on visual perception

Country Status (1)

Country Link
CN (1) CN116206779A (en)


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102103663A (en) * 2011-02-26 2011-06-22 山东大学 Ward visit service robot system and target searching method thereof
JP2015079339A (en) * 2013-10-16 2015-04-23 セコム株式会社 Posture estimation device
KR20150136255A (en) * 2014-05-27 2015-12-07 심재환 Patient Monitoring System and Method
CN105701355A (en) * 2016-02-01 2016-06-22 中国人民解放军第四军医大学 Interactive sickroom nursing management system
CN106773950A (en) * 2016-12-13 2017-05-31 安徽乐年健康养老产业有限公司 A kind of intelligence patrols the control method of room and dispensing
CN107133612A (en) * 2017-06-06 2017-09-05 河海大学常州校区 Based on image procossing and the intelligent ward of speech recognition technology and its operation method
JP2019162241A (en) * 2018-03-19 2019-09-26 アイホン株式会社 Nurse call system
CN110405789A (en) * 2019-08-01 2019-11-05 昆山市工研院智能制造技术有限公司 Make the rounds of the wards accompany and attend to robot, robot of one kind makes the rounds of the wards system and method for accompanying and attending to
WO2020129993A1 (en) * 2018-12-17 2020-06-25 Groove X株式会社 Autonomous robot
CN111590582A (en) * 2020-05-27 2020-08-28 华南理工大学 An intelligent medical service robot that can realize biofeedback and remote visit


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116883220A (en) * 2023-09-07 2023-10-13 成都鼎毅维元科技有限公司 Intelligent campus dangerous situation sensing method and system based on multi-source data fusion
CN116883220B (en) * 2023-09-07 2023-11-17 成都鼎毅维元科技有限公司 Smart campus hazard situation awareness method and system based on multi-source data fusion

Similar Documents

Publication Publication Date Title
CN111687852B (en) Medical emergency robot, working method and system
EP2732761A1 (en) Augmented reality system in the patient care environment
US20200237225A1 (en) Wearable patient monitoring systems and associated devices, systems, and methods
CN105082149B (en) A kind of health service robot of achievable blood oxygen saturation detection
CN110850898A (en) Intelligent medical nursing following trolley and following method
CN114533446A (en) Medical mobile nursing robot
CN109129502A (en) A kind of intelligent robot applied to clinical medical and nursing
US20220395984A1 (en) Nursing robot for monitoring critical patients
CN114947774B (en) Medical injury detection system, group injury scheduling control method and rescue scheduling control device
CN116206779A (en) Wisdom ward interactive system based on visual perception
CN211862774U (en) Medical auxiliary device
US12427071B2 (en) Image-based pairing and controlling of devices in a clinical environment
CN207087880U (en) A kind of medical nurse robot
CN204971307U (en) Can realize blood oxygen saturability detection&#39;s health service robot
CN111544720A (en) Method and system for identifying false positive alarm of breathing machine
CN108766553A (en) A kind of state of an illness real-time analyzer based on Internet of Things
CN115429581B (en) Facial state recording instrument for severe bedridden patient
CN111482974A (en) A health detection robot
CN210408620U (en) Ward monitoring system
CN112618863A (en) Robot intelligent transfusion system based on medical robot and implementation method
CN111710380A (en) Medical method, apparatus, device and readable storage medium based on medical pump
CN106859593A (en) A kind of new Internet of Things intelligent guarding system
CN116072310A (en) An intelligent edge computing system applied to medical and health services
CN204496497U (en) For the robot system of ward data acquisition
CN218428373U (en) Ward inspection robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20230602