CN117565076A - Intelligent medicine feeding inspection robot for pension institution
- Publication number: CN117565076A
- Application number: CN202311580856.6A
- Authority: CN (China)
- Prior art keywords: robot, mechanical arm, human body, module, pension
- Legal status: Pending (status assumed by Google; not a legal conclusion)
Classifications
- B25J11/008: Manipulators for service tasks
- B25J19/00: Accessories fitted to manipulators, e.g. for monitoring, for viewing; safety devices combined with or specially adapted for use in connection with manipulators
- Y02P90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Abstract
The invention relates to an intelligent medicine feeding and inspection robot for a pension institution, comprising a first mechanical arm, a second mechanical arm, a mobile navigation AMR chassis and a control system. The first mechanical arm is a six-axis mechanical arm whose distal end is connected to a bionic dexterous hand for grasping medicines; a robot vision system comprising a 2D camera and a 3D camera is arranged at the joint between the bionic dexterous hand and the first mechanical arm. The second mechanical arm carries a storage table. The mobile navigation AMR chassis enables the robot to move autonomously to designated positions. The control system comprises a voice interaction module, a face recognition module, a fall monitoring module and a main control module. The invention can move autonomously while avoiding collisions, automatically carry out medicine distribution and safety inspection, and support voice interaction and visual recognition, improving the service quality and efficiency of the pension institution.
Description
Technical Field
The invention belongs to the technical field of robots, and particularly relates to an intelligent medicine feeding and inspection robot for a pension institution.
Background
With the accelerating aging of the population, pension institutions face problems such as staff shortages and uneven service quality. Because of the reduced physical function of elderly persons, falls occur frequently and may lead to serious injuries such as fractures, sprains, bruises, head injuries and internal bleeding. Adding a fall monitoring function to the drug delivery and patrol process of a pension institution is therefore necessary. In addition, traditional medicine distribution and inspection are generally performed manually, which is error-prone and time-consuming.
Disclosure of Invention
The invention aims to provide an intelligent medicine feeding and inspection robot for a pension institution that can automatically carry out medicine distribution and safety inspection and improve the service quality and efficiency of the institution. The robot also has a fall monitoring function: it can promptly detect when an elderly person has fallen and trigger corresponding rescue measures, effectively reducing fall injuries and improving the pension institution's ability to keep the elderly safe.
The invention provides an intelligent medicine feeding and inspection robot for a pension institution, comprising a first mechanical arm, a second mechanical arm, a mobile navigation AMR chassis and a control system;
the first mechanical arm is a six-axis mechanical arm whose distal end is connected to a bionic dexterous hand for grasping medicines; a robot vision system comprising a 2D camera and a 3D camera is arranged at the joint between the bionic dexterous hand and the first mechanical arm; the second mechanical arm carries a storage table;
the mobile navigation AMR chassis moves the robot autonomously to designated positions;
the control system comprises a voice interaction module, a face recognition module, a fall monitoring module and a main control module, which interact as follows:
the voice interaction module receives a voice command word that wakes the robot and then receives a medicine-taking instruction; the main control module recognizes the instruction and, once recognition succeeds, parses it; it then controls the mobile navigation AMR chassis so that the robot reaches the medicine-taking position, controls the first mechanical arm, and uses the robot vision system to identify the two-dimensional code on the medicine box; after successful identification, the robot vision system and the bionic dexterous hand execute the medicine-taking operation at a nurse station or medicine cabinet;
the robot vision system and the face recognition module perform face recognition and localization to find the recipient; once the person is found and the voice interaction module receives a release instruction, the bionic dexterous hand releases the medicine box and places it at the specified position;
the robot vision system records a video of the patient taking the medicine and uploads it to the cloud for record-keeping and archiving;
and fall detection is performed by the fall monitoring module together with the robot vision system; when a fall is detected, an alarm notification is issued through the voice interaction module.
Further, the mobile navigation AMR chassis is a wheeled or crawler-type mobile navigation AMR chassis.
Further, the control system further comprises a storage module, wherein the storage module is used for storing the medicine taking video.
Further, the robot also includes a head display and a main system display.
Further, the storage table is used for placing items dispensed along with the medicine boxes.
Further, the control system further comprises an adaptive grasping module, which executes the medicine-taking operation based on an adaptive grasping algorithm.
Further, the control system further comprises a medicine-taking reminder module, which sends timed medication reminders to the patient via voice prompt, SMS or APP notification.
Further, the fall monitoring module performs fall monitoring using a human skeletal point recognition algorithm and a human fall detection algorithm.
Further, the human skeletal point recognition algorithm adopts an improved YOLOv8 algorithm; within each detected human bounding box, OpenPose detects skeletal points, identifying and labeling 18 points per detected human region: nose, neck, right shoulder, right elbow, right wrist, left shoulder, left elbow, left wrist, right hip, right knee, right ankle, left hip, left knee, left ankle, right eye, left eye, right ear and left ear. Combining the human position information provided by YOLOv8 with the skeletal point information provided by OpenPose, the mean of the four key point coordinates of the left and right shoulders and the left and right knees is taken as the coordinate of the body center point, on which human fall detection is based.
Further, the human fall detection algorithm uses the formula

$$v_y = \frac{y_j^{(10)} - y_j^{(1)}}{t_{10} - t_{1}}$$

where $v_y$ is the longitudinal (vertical) velocity, $y_j^{(10)}$ and $y_j^{(1)}$ are the ordinates of the $j$-th node in frame 10 and frame 1 respectively, and $t_{10}$ and $t_1$ are the timestamps of frame 10 and frame 1;
the human body fall detection method comprises the following steps:
first, judge whether the human body is static: if $v_y$ fluctuates only within a small range, the body is static or not moving severely; when $v_y$ changes greatly and exceeds a set threshold $v_{th}$, the motion is intense. If $v_y \ge v_{th}$ is detected several times in succession within a short time and again on the next detection, the change is judged to be persistent; if instead $v_y \ge v_{th}$ and $v_y < v_{th}$ alternate, the change is periodic. If $v_y$ changes greatly but neither periodically nor persistently, i.e. $v_y \ge v_{th}$ is detected repeatedly within a short time and thereafter $v_y < v_{th}$ is detected continuously, a fall is preliminarily judged. Combined with the aspect ratio of the human body, when the change is small ($v_y$ < 10 cm), behaviors such as squatting are ruled out; the human posture is finally determined and it is decided whether a fall has occurred.
With this scheme, the intelligent medicine feeding inspection robot for pension institutions can move autonomously while avoiding collisions, automatically carry out medicine distribution and safety inspection, and support voice interaction and visual recognition, improving the service quality and efficiency of the pension institution. Speech recognition and interaction, image recognition and analysis, and face recognition technologies realize intelligent medicine taking, medicine delivery, and monitoring and recording of medication; human motion recognition technology monitors falls of the elderly and issues alarm notifications in time.
The foregoing is merely an overview of the technical solution of the invention. To make the technical means of the invention clearer and implementable in accordance with the description, and to make the above and other objects, features and advantages more apparent, preferred embodiments are described in detail below with reference to the accompanying drawings.
Drawings
Fig. 1 is a schematic structural view of an intelligent drug delivery inspection robot for a pension institution;
FIG. 2 is a flow chart of medicine identification and grasping in accordance with an embodiment of the present invention;
FIG. 3 is a flow chart of person finding and medicine delivery in accordance with an embodiment of the present invention;
FIG. 4 shows the 18 skeletal points of a human body region in the improved YOLOv8 algorithm of the present invention;
FIG. 5 is a diagram of a conventional YOLOv8 network architecture;
FIG. 6 is a diagram of the improved YOLOv8 network of the present invention;
FIG. 7 is a schematic diagram of a robot fall monitoring algorithm in accordance with an embodiment of the present invention.
Detailed Description
The following describes in further detail the embodiments of the present invention with reference to the drawings and examples. The following examples are illustrative of the invention and are not intended to limit the scope of the invention.
Referring to fig. 1, this embodiment provides an intelligent drug delivery inspection robot for a pension institution, comprising a first mechanical arm 2, a second mechanical arm 6, a mobile navigation AMR chassis 8 and a control system;
the first mechanical arm 2 is a six-axis mechanical arm whose distal end is connected to a bionic dexterous hand 4 for grasping medicines; a robot vision system 3 comprising a 2D camera and a 3D camera is arranged at the joint between the bionic dexterous hand 4 and the first mechanical arm 2; the second mechanical arm 6 carries a storage table 7;
the mobile navigation AMR chassis 8 moves the robot autonomously to designated positions;
the control system comprises a voice interaction module, a face recognition module, a fall monitoring module and a main control module, which interact as follows:
the voice interaction module receives a voice command word that wakes the robot and then receives a medicine-taking instruction; the main control module recognizes the instruction and, once recognition and parsing succeed, controls the mobile navigation AMR chassis 8 so that the robot reaches the medicine-taking position, controls the first mechanical arm 2, and uses the robot vision system 3 to identify the two-dimensional code on the medicine box; after successful identification, the robot vision system 3 and the bionic dexterous hand 4 execute the medicine-taking operation at a nurse station or medicine cabinet;
the robot vision system 3 and the face recognition module perform face recognition and localization to find the recipient; once the person is found and the voice interaction module receives a release instruction, the bionic dexterous hand releases the medicine box 9 and places it at the specified position;
the robot vision system 3 records a video of the patient taking the medicine and uploads it to the cloud for record-keeping and archiving;
fall detection is performed by the fall monitoring module together with the robot vision system 3; when a fall is detected, an alarm notification is issued through the voice interaction module.
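As a minimal sketch of how this module hand-off could be orchestrated in software (every class, method and attribute name below is an illustrative assumption, not taken from the patent):

```python
# Illustrative orchestration of the delivery and inspection flow above.
# All module interfaces here are assumptions made for this sketch.

class DrugDeliveryRobot:
    def __init__(self, voice, master, chassis, arm, hand, vision, face_id, fall_monitor):
        self.voice, self.master = voice, master          # voice interaction, main control
        self.chassis, self.arm, self.hand = chassis, arm, hand
        self.vision, self.face_id = vision, face_id      # 2D/3D cameras, face recognition
        self.fall_monitor = fall_monitor

    def deliver(self):
        command = self.voice.wait_for_wake_and_instruction()
        task = self.master.parse(command)                 # failure -> caller re-wakes robot
        self.chassis.navigate_to(task.pickup_position)    # nurse station or medicine cabinet
        box = self.vision.locate_box_by_qr(task.medicine_id)
        self.arm.reach(box.pose)
        self.hand.grasp()
        patient = self.face_id.find_person(task.patient_id)
        self.chassis.navigate_to(patient.position)
        if self.voice.wait_for_release_instruction():
            self.hand.release_at(task.dropoff_pose)
        self.vision.record_and_upload_medication_video()  # cloud record-keeping

    def patrol_step(self):
        # Inspection runs alongside delivery: raise a voice alarm on a detected fall.
        if self.fall_monitor.fall_detected(self.vision.current_frame()):
            self.voice.announce_alarm()
```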
The robot, equipped with a six-axis collaborative mechanical arm and a mobile navigation AMR chassis, can provide medicine taking, medicine delivery and fall monitoring services for the elderly in a pension institution or home environment. Through speech recognition and speech synthesis, a user can control and operate the robot by voice, for example starting the robot or designating a patrol target.
In this embodiment, the mobile navigation AMR chassis 8 is a wheeled or tracked mobile navigation AMR chassis.
In this embodiment, the control system further includes a storage module, where the storage module is configured to store the medication video.
In this embodiment, the robot further comprises a head display 1 and a main system display 5.
In this embodiment, the storage table 7 is used for placing items dispensed along with the medicine box.
In this embodiment, the control system further includes an adaptive grasping module, which performs the medicine-taking operation based on an adaptive grasping algorithm.
In this embodiment, the control system further includes a medicine-taking reminder module, which sends timed medication reminders to the patient via voice prompt, SMS or APP notification.
In this embodiment, the fall monitoring module performs fall monitoring through a human skeleton point recognition algorithm and a human fall detection algorithm.
Referring to fig. 2 and 3, robotic drug delivery is divided into two steps: identifying and grasping the medicine, and finding the person to deliver it to:
1. Identifying and grasping the medicine
1) Waking up the robot: before waking the robot with a command word, ensure that ambient noise is low and the voice input is clear and accurate. The wake-up command word must be trained and set in advance so that the robot recognizes it reliably. If wake-up fails (for example because of environmental noise or unclear voice input), the user must try again until wake-up succeeds.
2) Issuing the medicine-taking instruction: instructions given to the robot must be clear and accurate, otherwise the robot cannot recognize them correctly. The robot recognizes the instruction; if recognition fails, the robot must be woken again and the instruction reissued.
3) Parsing the medicine-taking instruction: after the instruction is recognized successfully, the robot parses it and performs the subsequent operations. If parsing fails, the robot must be woken again and given an accurate medicine-taking instruction, after which it re-enters the parsing state.
4) Identifying the medicine box: after the instruction is parsed successfully, the robot identifies the medicine box for the subsequent operation. If identification fails, it tries again.
5) Identifying the medicine box two-dimensional code: after the medicine box is identified, the robot reads the two-dimensional code on it. If reading fails, it tries again.
6) Hand-eye-foot linkage medicine taking: after the two-dimensional code is read successfully, the robot enters the hand-eye-foot linkage stage and starts the actual medicine-taking operation using the camera module and the bionic dexterous hand. During grasping, the medicine box is recognized and grasped through the vision system, which requires the robot's image processing and mechanical arm systems to operate in coordination to ensure accurate, stable grasping. If the dexterous hand fails to grasp the medicine box, it grasps again; once grasping succeeds, the grasping process ends: the robot delivers the medicine box to the designated position, the medicine-taking task is complete, and the whole flow ends. A sketch of this retry pipeline follows.
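A minimal sketch of the retry structure in steps 1) to 6), assuming each stage is a function returning success or failure (the stage names are illustrative):

```python
# Retry pipeline for the medicine-taking flow above; each stage function is
# assumed to return True on success and False on failure.

def run_pipeline(stages, max_restarts=3, retries_per_stage=3):
    """Run stages in order. A stage is retried a few times on failure;
    exhausting a stage's retries restarts the flow from wake-up,
    matching the 'wake the robot again' behavior described above."""
    for _ in range(max_restarts):
        for name, stage in stages:
            for _ in range(retries_per_stage):
                if stage():
                    break        # stage succeeded, move on to the next one
            else:
                print(f"{name} failed, restarting from wake-up")
                break            # abandon this pass, restart the whole flow
        else:
            return True          # all stages succeeded
    return False

# Example with stub stages (real implementations would call the robot modules):
stages = [
    ("wake", lambda: True), ("hear instruction", lambda: True),
    ("parse instruction", lambda: True), ("identify box", lambda: True),
    ("read QR code", lambda: True), ("hand-eye-foot grasp", lambda: True),
]
print(run_pipeline(stages))  # True
```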
When implementing the medicine-taking function, the robot adopts an adaptive grasping algorithm, AOCA (Adaptive Object-Center Application):
This yields the pose of the flange tool head relative to the robot base coordinate system, computed from the position coordinates of the grasped object and of the tool-head flange center.
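The AOCA formula itself is not reproduced above; the following is only a rough sketch, under the assumption that the flange target pose is derived from the object centroid (as measured by the 3D camera) and a chosen grasp orientation:

```python
import numpy as np

def flange_pose_from_object(p_obj, R_grasp, approach_offset=0.10):
    """Sketch of an object-centered grasp target (an assumption, not the
    patent's actual AOCA formula).
    p_obj:   (3,) object centroid in the robot base frame, from the 3D camera.
    R_grasp: (3, 3) desired flange orientation in the base frame.
    Returns a 4x4 homogeneous pose of the flange tool head relative to the
    base, backed off along the tool z-axis by approach_offset meters."""
    T = np.eye(4)
    T[:3, :3] = R_grasp
    T[:3, 3] = p_obj - approach_offset * R_grasp[:, 2]
    return T
```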
2. Delivering and monitoring medication for a user
1) Locating and finding the person: after the medicine box is grasped successfully, the robot enters person-finding mode. It performs face recognition and localization through the 2D and 3D cameras in the vision system. If the robot fails to find the person, it repeats the recognition and search.
2) Releasing the medicine: after the person is found, the robot waits for a release instruction. On receiving it, the dexterous hand releases the medicine box and places it at a specified position (for example a table or the patient's hand).
3) Recording the medicine-taking video: after the medicine box is placed, the robot records a video of the patient taking the medicine and uploads it to the cloud for record-keeping and archiving, ensuring that the patient takes the medication on time and leaving a medication record.
In addition, the robot can send timed medication reminders to the patient via voice prompt, SMS or APP notification, and can customize delivery according to the patient's personal information and medication requirements. A small scheduling sketch follows.
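A minimal sketch of such a reminder scheduler, assuming placeholder delivery backends (the patent does not specify the voice/SMS/APP integrations):

```python
import sched
import time

def notify(patient, message, channel):
    # Placeholder delivery backend; real voice/SMS/APP integrations assumed.
    print(f"[{channel}] reminder for {patient}: {message}")

scheduler = sched.scheduler(time.time, time.sleep)

def schedule_dose(patient, message, dose_time, channel="voice"):
    """Queue one timed medication reminder at absolute time dose_time."""
    scheduler.enterabs(dose_time, 1, notify, argument=(patient, message, channel))

# Example: remind in 5 seconds, then run the scheduler loop.
schedule_dose("Mr. Zhang", "time to take your blood pressure medicine",
              time.time() + 5)
scheduler.run()
```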
In this embodiment, the robot implements the fall monitoring function using a human skeletal point recognition algorithm and a human fall detection algorithm. When a fall is detected, the robot immediately issues an alarm notification so that the fallen person can be rescued in time.
Basic principle of human skeleton point recognition algorithm:
YOLOv8 is a published object detection algorithm, the newest model in the YOLO series, proposed by Ultralytics. It offers state-of-the-art object detection performance and can be used for detection, classification, segmentation, tracking and even pose estimation. Building on techniques from previous YOLO versions, the YOLOv8 model runs faster and more accurately while providing a unified framework for training models for these tasks. The conventional YOLOv8 network structure is shown in fig. 5.
In this embodiment, the conventional YOLOv8 is improved and, within each human bounding box detected by the improved YOLOv8, OpenPose is used to detect skeletal points. OpenPose is a deep-learning-based human pose estimation library that accurately detects and estimates human key points and posture from images or video; its goal is to make human pose estimation a real-time, multi-person, accurate task.
Referring to fig. 6, the improved YOLOv8 detects the human body in the image and outputs a bounding box giving the position of the region in which the body lies. For each human region detected by the improved YOLOv8 algorithm, OpenPose identifies and labels 18 skeletal points: nose, neck, right shoulder, right elbow, right wrist, left shoulder, left elbow, left wrist, right hip, right knee, right ankle, left hip, left knee, left ankle, right eye, left eye, right ear and left ear (see fig. 4). Human fall detection combines the human position information provided by YOLOv8 with the skeletal point information provided by OpenPose: the mean of the four key point coordinates of the left and right shoulders and the left and right knees is taken as the body center point, as in the sketch below.
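A short sketch of the center-point computation, assuming keypoints arrive in the standard OpenPose COCO-18 order (which matches the list above):

```python
import numpy as np

# OpenPose COCO-18 keypoint indices (order matches the 18-point list above).
R_SHOULDER, L_SHOULDER, R_KNEE, L_KNEE = 2, 5, 9, 12

def body_center(keypoints):
    """keypoints: (18, 2) array of (x, y) skeletal points for one person.
    Returns the body center used by the fall detector: the mean of the
    left/right shoulder and left/right knee coordinates."""
    return keypoints[[R_SHOULDER, L_SHOULDER, R_KNEE, L_KNEE]].mean(axis=0)
```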
Basic principle of the human body target detection algorithm based on the improved YOLOv8:
The attention mechanism CBAM is added at the end of the neck of the conventional YOLOv8 network. This strengthens the model's attention to important features, reduces interference from noise and unimportant features, and improves the model's accuracy and robustness.
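A standard CBAM block, as commonly implemented in PyTorch, is sketched below; exactly where it sits in the YOLOv8 neck follows the patent's fig. 6, which is not reproduced here:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))      # global average pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))       # global max pooling branch
        return x * torch.sigmoid(avg + mx).view(b, c, 1, 1)

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)       # channel-wise average map
        mx = x.amax(dim=1, keepdim=True)        # channel-wise max map
        return x * torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel then spatial attention."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention(kernel_size)

    def forward(self, x):
        return self.sa(self.ca(x))

# Example: y = CBAM(256)(torch.randn(1, 256, 40, 40)) keeps the input shape.
```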
Evaluation index: precision, recall, mAP value.
Precision is defined on the prediction results and indicates how many of the samples predicted as positive are truly positive:

$$\text{Precision} = \frac{TP}{TP + FP}$$

where TP is the number of positive samples predicted as positive and FP is the number of negative samples predicted as positive.
Recall is defined on the original samples and indicates how many of the positive samples are predicted correctly:

$$\text{Recall} = \frac{TP}{TP + FN}$$

where TP is the number of positive samples predicted as positive and FN is the number of positive samples predicted as negative.
AP (Average Precision) is defined as the mean of the precision values on the PR curve, i.e. the area under the PR curve; it measures the average accuracy of the algorithm on a single class, and a higher AP means higher detection accuracy for that class. mAP (Mean Average Precision) is the average of the per-class AP values and measures the overall accuracy of the algorithm across all classes; it is one of the most important evaluation indexes for object detection algorithms. A small computation sketch follows.
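A minimal sketch of these metrics (AP computed here by simple numerical integration of an already-built PR curve; detection matching and confidence thresholds are omitted):

```python
import numpy as np

def precision(tp, fp):
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp, fn):
    return tp / (tp + fn) if (tp + fn) else 0.0

def average_precision(recalls, precisions):
    """Area under a PR curve given as matched arrays sorted by recall."""
    return float(np.trapz(precisions, recalls))

def mean_average_precision(aps):
    """mAP: the mean of the per-class AP values."""
    return sum(aps) / len(aps)

print(precision(90, 10), recall(90, 5))  # 0.9, ~0.947
```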
Evaluation comparison: after the improvement, the mAP value rises markedly from 92.2% to 93.7%, an increase of 1.5 percentage points. The experimental results show that the improved YOLOv8 model performs better.
TABLE 1 YOLOv8 human recognition performance before and after improvement

Model | mAP
---|---
Conventional YOLOv8 | 92.2%
Improved YOLOv8 (with CBAM) | 93.7%
Basic principle of human body fall detection algorithm:
$$v_y = \frac{y_j^{(10)} - y_j^{(1)}}{t_{10} - t_{1}}$$

where $v_y$ is the longitudinal (vertical) velocity, $y_j^{(10)}$ and $y_j^{(1)}$ are the ordinates of the $j$-th node in frame 10 and frame 1 respectively, and $t_{10}$ and $t_1$ are the timestamps of frame 10 and frame 1.
Referring to fig. 7, the algorithm first judges whether the human body is static: if $v_y$ fluctuates only within a small range, the body is static or not moving severely. When $v_y$ changes greatly and exceeds a set threshold $v_{th}$, the motion is intense. If $v_y \ge v_{th}$ is detected several times in succession within a short time (e.g. 4 seconds) and again on the next detection, the change is judged to be persistent; if instead $v_y \ge v_{th}$ and $v_y < v_{th}$ alternate, the change is periodic. If $v_y$ changes greatly but neither periodically nor persistently, i.e. $v_y \ge v_{th}$ is detected repeatedly within a short time and thereafter $v_y < v_{th}$ continuously, a fall is preliminarily judged. Combined with the aspect ratio and other parameters of the body (e.g. $v_y$ < 10 cm), actions such as squatting can be ruled out; the human posture is finally determined and it is decided whether a fall has occurred. A sketch of this decision logic follows.
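A minimal sketch of this decision logic, assuming one $v_y$ sample per frame pair; the concrete thresholds and the aspect-ratio test are assumptions, since the text gives only the examples above:

```python
def classify_motion(vy_series, v_th, tail=3):
    """vy_series: recent |v_y| samples, newest last, covering a short window
    (e.g. ~4 s). Returns 'static', 'persistent', 'periodic' or 'fall'."""
    over = [v >= v_th for v in vy_series]
    if not any(over):
        return "static"                        # only small fluctuation
    if all(over[-tail:]):
        return "persistent"                    # keeps exceeding the threshold
    flips = sum(a != b for a, b in zip(over, over[1:]))
    if flips >= len(over) // 2:
        return "periodic"                      # alternating above/below v_th
    if any(over[:-tail]) and not any(over[-tail:]):
        return "fall"                          # burst above v_th, then below
    return "static"

def is_fall(vy_series, v_th, box_w, box_h, drop_cm):
    """Final decision: fall-like velocity pattern, plus a lying-down aspect
    ratio (assumed: bounding box wider than tall) and a vertical drop larger
    than ~10 cm to rule out squatting, as described above."""
    return (classify_motion(vy_series, v_th) == "fall"
            and box_w > box_h
            and drop_cm >= 10)
```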
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, and it should be noted that it is possible for those skilled in the art to make several improvements and modifications without departing from the technical principle of the present invention, and these improvements and modifications should also be regarded as the protection scope of the present invention.
Claims (10)
1. An intelligent medicine delivery inspection robot for a pension institution, characterized by comprising a first mechanical arm, a second mechanical arm, a mobile navigation AMR chassis and a control system;
wherein the first mechanical arm is a six-axis mechanical arm whose distal end is connected to a bionic dexterous hand for grasping medicines; a robot vision system comprising a 2D camera and a 3D camera is arranged at the joint between the bionic dexterous hand and the first mechanical arm; and the second mechanical arm carries a storage table;
the mobile navigation AMR chassis moves the robot autonomously to designated positions;
the control system comprises a voice interaction module, a face recognition module, a fall monitoring module and a main control module, which interact as follows:
the voice interaction module receives a voice command word that wakes the robot and then receives a medicine-taking instruction; the main control module recognizes the instruction and, once recognition succeeds, parses it; it then controls the mobile navigation AMR chassis so that the robot reaches the medicine-taking position, controls the first mechanical arm, and uses the robot vision system to identify the two-dimensional code on the medicine box; after successful identification, the robot vision system and the bionic dexterous hand execute the medicine-taking operation at a nurse station or medicine cabinet;
the robot vision system and the face recognition module perform face recognition and localization to find the recipient; once the person is found and the voice interaction module receives a release instruction, the bionic dexterous hand releases the medicine box and places it at the specified position;
the robot vision system records a video of the patient taking the medicine and uploads it to the cloud for record-keeping and archiving;
and fall detection is performed; when a fall is detected, an alarm notification is issued through the voice interaction module.
2. The intelligent drug delivery inspection robot for a pension institution of claim 1, wherein the mobile navigation AMR chassis is a wheeled or tracked mobile navigation AMR chassis.
3. The intelligent drug delivery inspection robot for a pension institution of claim 1, wherein the control system further comprises a storage module for storing the medicine-taking video.
4. The intelligent drug delivery inspection robot for a pension institution of claim 1, further comprising a head display and a main system display.
5. The intelligent drug delivery inspection robot for a pension institution of claim 3, wherein the storage table is used for placing items dispensed along with the medicine box.
6. The intelligent drug delivery inspection robot for a pension institution of claim 1, wherein the control system further comprises an adaptive grasping module that performs the medicine-taking operation based on an adaptive grasping algorithm.
7. The intelligent drug delivery inspection robot for a pension institution of claim 1, wherein the control system further comprises a medicine-taking reminder module for sending timed medication reminders to the patient via voice prompt, SMS or APP notification.
8. The intelligent drug delivery inspection robot for a pension institution of claim 1, wherein the fall monitoring module performs fall monitoring through a human skeletal point recognition algorithm and a human fall detection algorithm.
9. The intelligent drug delivery inspection robot for a pension institution of claim 8, wherein the human skeletal point recognition algorithm adopts an improved YOLOv8 algorithm; within each detected human bounding box, OpenPose detects skeletal points, identifying and labeling 18 points per detected human region: nose, neck, right shoulder, right elbow, right wrist, left shoulder, left elbow, left wrist, right hip, right knee, right ankle, left hip, left knee, left ankle, right eye, left eye, right ear and left ear; combining the human position information provided by YOLOv8 with the skeletal point information provided by OpenPose, the mean of the four key point coordinates of the left and right shoulders and the left and right knees is taken as the body center point coordinate, realizing human fall detection.
10. The intelligent drug delivery inspection robot for a pension institution of claim 9, wherein the human fall detection algorithm uses the formula

$$v_y = \frac{y_j^{(10)} - y_j^{(1)}}{t_{10} - t_{1}}$$

where $v_y$ is the longitudinal (vertical) velocity, $y_j^{(10)}$ and $y_j^{(1)}$ are the ordinates of the $j$-th node in frame 10 and frame 1 respectively, and $t_{10}$ and $t_1$ are the timestamps of frame 10 and frame 1;
the human body fall detection method comprises the following steps:
first, judge whether the human body is static: if $v_y$ fluctuates only within a small range, the body is static or not moving severely; when $v_y$ changes greatly and exceeds a set threshold $v_{th}$, the motion is intense. If $v_y \ge v_{th}$ is detected several times in succession within a short time and again on the next detection, the change is judged to be persistent; if instead $v_y \ge v_{th}$ and $v_y < v_{th}$ alternate, the change is periodic. If $v_y$ changes greatly but neither periodically nor persistently, i.e. $v_y \ge v_{th}$ is detected repeatedly within a short time and thereafter $v_y < v_{th}$ is detected continuously, a fall is preliminarily judged. Combined with the aspect ratio of the human body, when the change is small ($v_y$ < 10 cm), behaviors such as squatting are ruled out; the human posture is finally determined and it is decided whether a fall has occurred.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311580856.6A CN117565076A (en) | 2023-11-24 | 2023-11-24 | Intelligent medicine feeding inspection robot for pension institution |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117565076A true CN117565076A (en) | 2024-02-20 |
Family
ID=89885952
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118968566A (en) * | 2024-10-17 | 2024-11-15 | 吉林大学第一医院 | Fall event identification method, device and medium |
CN118968566B (en) * | 2024-10-17 | 2024-12-13 | 吉林大学第一医院 | Fall event identification method, device and medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |