
CN115880852B - Infant monitoring method and device - Google Patents

Infant monitoring method and device

Info

Publication number
CN115880852B
CN115880852B (application CN202211467650.8A)
Authority
CN
China
Prior art keywords
recognition
current
posture
gesture
center point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211467650.8A
Other languages
Chinese (zh)
Other versions
CN115880852A (en)
Inventor
游经纬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kunshan Q Technology Co Ltd
Original Assignee
Kunshan Q Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kunshan Q Technology Co Ltd
Priority to CN202211467650.8A
Publication of CN115880852A
Application granted
Publication of CN115880852B
Status: Active
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides an infant monitoring method and device, comprising the following steps: identifying the current scene based on a preset identification period and, if the current scene is determined to be the target scene, determining the current identification count for the infant position; performing posture recognition on the target infant to obtain a posture recognition result, where the result is one of: supine posture, prone posture, upright posture, and unknown posture; determining the current recognition count for the prone posture and the current recognition count for the unknown posture according to the posture recognition result; and pushing alarm prompt information if the current identification count for the infant position is greater than or equal to a preset first count, the current recognition count for the prone posture is greater than or equal to a preset second count, or the current recognition count for the unknown posture is greater than or equal to a preset third count. The infant's position and posture are thus identified continuously and cyclically, and when an alarm is triggered, alarm prompt information is pushed to the user, reminding the user to check on the infant in time; this improves the timeliness and safety of infant monitoring.

Description

Infant monitoring method and device
Technical Field
The application relates to the technical field of safety monitoring, in particular to a baby monitoring method and device.
Background
With the improvement of living standards, new parents pay ever greater attention to the care and safety of newborns. For example, an infant's mobility is very limited, and an improper sleeping position (such as lying prone) or a movement that blocks the mouth and nose can cause suffocation and other hazards.
Generally, an adult watches the child at home during the day. However, at night, or during the day when the adult has to leave the child alone in a room, or when the infant is not attended to in time due to human oversight (for example, when the infant wakes and plays alone, or lies prone for a long time), the above dangers are unavoidable.
Disclosure of Invention
In view of the problems in the prior art, embodiments of the invention provide an infant monitoring method and device, which solve, or partially solve, the technical problem that the prior art cannot guarantee the timeliness of infant monitoring and therefore cannot guarantee the infant's safety.
In a first aspect of the invention, there is provided an infant monitoring method, the method comprising:
identifying the current scene based on a preset identification period and, if the current scene is determined to be the target scene, determining the current identification count for the infant position;
performing posture recognition on the target infant through a camera to obtain a posture recognition result, where the result is one of: supine posture, prone posture, upright posture, and unknown posture;
determining the current recognition count for the prone posture and the current recognition count for the unknown posture according to the posture recognition result;
pushing alarm prompt information if the current identification count for the infant position is greater than or equal to a preset first count, the current recognition count for the prone posture is greater than or equal to a preset second count, or the current recognition count for the unknown posture is greater than or equal to a preset third count.
In the above solution, determining the current identification count for the infant position includes:
if the target infant is present in the current scene, clearing the current identification count for the infant position;
if it is determined that the target infant is not present in the current scene, incrementing the current identification count for the infant position by one.
In the above solution, performing posture recognition on the target infant through the camera to obtain a posture recognition result includes:
determining a head center point, a body center point, and a leg center point of the target infant;
determining a first included angle between the X axis and a first line connecting the head center point and the body center point;
determining a second included angle between the X axis and a second line connecting the body center point and the leg center point;
determining a third included angle between the first line and the second line;
performing posture recognition on the target infant according to the first, second, and third included angles to obtain a posture recognition result.
In the above solution, performing posture recognition on the target infant according to the first, second, and third included angles to obtain a posture recognition result includes:
if the first included angle is less than or equal to a first angle threshold, the second included angle is less than or equal to the first angle threshold, and the third included angle is less than or equal to a second angle threshold, judging whether the facial features of the target infant can be acquired;
if the facial features of the target infant can be acquired and the recognition counts satisfy the recognition condition, determining that the posture recognition result is the supine posture;
if the facial features of the target infant cannot be acquired and the recognition counts satisfy the recognition condition, determining that the posture recognition result is the prone posture.
In the above solution, performing posture recognition on the target infant according to the first, second, and third included angles to obtain a posture recognition result includes:
if the first included angle is greater than or equal to the first angle threshold, the second included angle is greater than or equal to the first angle threshold, and the third included angle is less than or equal to the second angle threshold, acquiring the ordinate of the head center point, the ordinate of the body center point, and the ordinate of the leg center point of the target infant;
if it is determined that these ordinates satisfy YH > YB > YL and that the recognition counts satisfy the recognition condition, determining that the posture recognition result is the upright posture; wherein
YH is the ordinate of the head center point, YB is the ordinate of the body center point, and YL is the ordinate of the leg center point.
In the above solution, determining the current recognition counts for the prone posture and the unknown posture according to the posture recognition result includes:
if the posture recognition result is the upright posture and the current recognition count for the prone posture is greater than 0, decrementing the current recognition count for the prone posture by one; and
if the current recognition count for the unknown posture is greater than 0, decrementing the current recognition count for the unknown posture by one.
In the above solution, determining the current recognition counts for the prone posture and the unknown posture according to the posture recognition result includes:
if the posture recognition result is the prone posture, incrementing the current recognition count for the prone posture by one; and
if the current recognition count for the unknown posture is greater than 0, decrementing the current recognition count for the unknown posture by one.
In the above solution, the method further comprises:
if the posture recognition result is not the prone posture and the current recognition count for the prone posture is greater than 0, decrementing the current recognition count for the prone posture by one; and
incrementing the current recognition count for the unknown posture by one.
In the above solution, the method further comprises:
if the current identification count for the infant position is less than the preset first count, the current recognition count for the prone posture is less than the preset second count, or the current recognition count for the unknown posture is less than the preset third count, judging whether a caregiver is present in the current scene;
if it is determined that no caregiver is present, incrementing the current caregiver identification count by one;
pushing alarm prompt information when the current caregiver identification count is greater than or equal to a preset fourth count.
In a second aspect of the invention, there is provided a monitoring device comprising:
a first recognition unit, configured to identify the current scene based on a preset identification period and, if the current scene is determined to be the target scene, determine the current identification count for the infant position;
a second recognition unit, configured to perform posture recognition on the target infant to obtain a posture recognition result, where the result is one of: supine posture, prone posture, upright posture, and unknown posture;
a determining unit, configured to determine the current recognition counts for the prone posture and the unknown posture according to the posture recognition result;
a pushing unit, configured to push alarm prompt information if the current identification count for the infant position is greater than or equal to the preset first count, the current recognition count for the prone posture is greater than or equal to the preset second count, or the current recognition count for the unknown posture is greater than or equal to the preset third count.
The invention provides an infant monitoring method and device, wherein the method comprises: identifying the current scene based on a preset identification period and, if the current scene is determined to be the target scene, determining the current identification count for the infant position; performing posture recognition on the target infant through a camera to obtain a posture recognition result, where the result is one of: supine posture, prone posture, upright posture, and unknown posture; determining the current recognition counts for the prone posture and the unknown posture according to the posture recognition result; and pushing alarm prompt information if the current identification count for the infant position is greater than or equal to the preset first count, the current recognition count for the prone posture is greater than or equal to the preset second count, or the current recognition count for the unknown posture is greater than or equal to the preset third count. The infant's position and posture are thus identified continuously and cyclically, and when an alarm is triggered, alarm prompt information is pushed to the user in time, reminding the user to check on the infant; this improves the timeliness of infant monitoring and, in turn, the infant's safety.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
FIG. 1 shows a flow diagram of a baby monitoring method according to one embodiment of the invention;
FIG. 2 shows an overall logic diagram of an infant monitoring method according to one embodiment of the present invention;
FIGS. 2-1 to 2-4 are partial enlarged schematic views of FIG. 2;
FIG. 3 shows a schematic view of an infant site acquisition reference point in accordance with one embodiment of the invention;
FIG. 4 shows a schematic structural view of an infant monitoring device according to one embodiment of the present invention;
FIG. 5 shows a schematic diagram of a computer device architecture according to one embodiment of the invention;
FIG. 6 shows a schematic diagram of a computer-readable storage medium structure according to one embodiment of the invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
An embodiment of the invention provides an infant monitoring method which, with reference to FIGS. 1, 2, and 2-1 to 2-4, comprises the following steps:
S110, identifying the current scene based on a preset identification period and, if the current scene is determined to be the target scene, determining the current identification count for the infant position;
In this embodiment, the caregiver features, the infant features, and the scene features need to be recorded before identification. For example, the caregiver features may include the caregiver's facial and appearance features; the infant features may include the infant's facial and appearance features; and the scene features may include environmental features such as furniture features and room-layout features.
In this embodiment, the camera is installed in the infant's bedroom, at a corner near the ceiling, so that it covers the range of movement of the infant and the caregiver and monitors the scene in the bedroom from all directions.
The scene level, the identification period, the maximum identification count for the infant position, the maximum recognition count for the prone posture, the maximum recognition count for the unknown posture, and the maximum caregiver identification count are then determined.
When identification is needed, the camera is started and identification is performed once every preset identification period. The preset identification period may be 1 s, or may be set based on actual conditions; it is not limited here.
That is, this embodiment identifies the current scene based on a preset identification period and judges whether the current scene is the target scene according to the scene features: if the scene features of the current scene are consistent with the recorded scene features, the current scene is determined to be the target scene, and the current identification count for the infant position is determined.
In one embodiment, determining the current identification count for the infant position includes:
judging whether the target infant is present in the target scene and, if so, clearing the current identification count for the infant position;
if the target infant is not present in the current scene, incrementing the current identification count for the infant position by one.
When judging whether the target infant is present in the target scene, the currently acquired facial and appearance features are compared with the recorded infant features; if the similarity is greater than or equal to a similarity threshold, the target infant is determined to be present.
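The position-counter update described here is a simple reset-or-increment rule; a minimal Python sketch follows (the function and parameter names are illustrative, not taken from the patent):

```python
def update_position_count(infant_detected: bool, position_count: int) -> int:
    """Update the infant-position identification count for one period.

    If the target infant is detected in the current scene, the count is
    cleared; otherwise it is incremented by one.  An alarm fires elsewhere
    when the count reaches the preset first count.
    """
    return 0 if infant_detected else position_count + 1
```

Because the count is cleared on every detection, only consecutive periods in which the infant is missing can accumulate toward the alarm threshold.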
S111, performing posture recognition on the target infant through the camera to obtain a posture recognition result, where the result is one of: supine posture, prone posture, upright posture, and unknown posture;
Then, taking the horizontal plane of the scene as a reference, posture recognition is performed on the target infant to obtain a posture recognition result, specifically:
determining a head center point, a body center point, and a leg center point of the target infant;
determining a first included angle between the X axis and a first line connecting the head center point and the body center point;
determining a second included angle between the X axis and a second line connecting the body center point and the leg center point;
determining a third included angle between the first line and the second line;
performing posture recognition on the target infant according to the first, second, and third included angles to obtain a posture recognition result.
In one embodiment, performing posture recognition on the target infant according to the first, second, and third included angles to obtain a posture recognition result includes:
if the first included angle is less than or equal to the first angle threshold, the second included angle is less than or equal to the first angle threshold, and the third included angle is less than or equal to the second angle threshold, judging whether the facial features of the target infant can be acquired;
if the facial features of the target infant can be acquired and the recognition counts satisfy the recognition condition, determining that the posture recognition result is the supine posture;
if the facial features of the target infant cannot be acquired and the recognition counts satisfy the recognition condition, determining that the posture recognition result is the prone posture.
The first angle threshold and the second angle threshold may be determined based on practical situations, which is not limited in this embodiment. For example, the first angle threshold may be 30 ° and the second angle threshold may be 15 °.
In one embodiment, performing posture recognition on the target infant according to the first, second, and third included angles to obtain a posture recognition result includes:
if the first included angle is greater than or equal to the first angle threshold, the second included angle is greater than or equal to the first angle threshold, and the third included angle is less than or equal to the second angle threshold, acquiring the ordinate of the head center point, the ordinate of the body center point, and the ordinate of the leg center point of the target infant;
if it is determined that these ordinates satisfy YH > YB > YL and that the recognition counts satisfy the recognition condition, determining that the posture recognition result is the upright posture; where YH is the ordinate of the head center point, YB is the ordinate of the body center point, and YL is the ordinate of the leg center point.
Specifically, referring to fig. 3, before determining the head center point, the body center point, and the leg center point, 5 reference points H1, H2, H3, H4, and H5, respectively, need to be set at the head; setting 9 reference points on the body, namely B1, B2, B3, B4, B5, B6, B7, B8 and B9; at the legs 6 reference points are set, L1, L2, L3, L4, L5 and L6, respectively.
Then, the coordinates of the center point of each part are determined as the average of the coordinates of that part's reference points. Taking the head center point as an example: the coordinates of H1-H5 are determined, the average of their abscissas is taken as the abscissa of the head center point, and the average of their ordinates is taken as the ordinate of the head center point.
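The per-part center point is simply the mean of that part's reference-point coordinates; a small Python sketch (the helper name is hypothetical):

```python
def center_point(points):
    """Center of a body part as the mean of its reference points.

    `points` is a list of (x, y) tuples, e.g. the head points H1..H5;
    returns the (x, y) center used for the angle computations.
    """
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return sum(xs) / len(xs), sum(ys) / len(ys)
```

The same helper serves the head (5 points), body (9 points), and legs (6 points).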
It should be noted that, in practical application, if the number of acquired reference points is less than the preset number of reference points, the following processing needs to be performed:
Taking the head reference points as an example: if the number of acquired reference points is 0-3, posture recognition cannot be performed and acquisition is repeated. If the number of acquired reference points is 4, the coordinates of the remaining missing reference point are determined from the coordinates of the 4 acquired reference points, as follows:
First, the coordinates of each reference point acquired this time and the corresponding coordinates acquired last time are obtained, and the coordinate difference of each reference point is determined from them.
Second, the last-acquired coordinates of the reference point that is missing this time are obtained, and the coordinates of the currently missing reference point are determined from those coordinates and the coordinate differences of the other reference points.
For example: suppose the missing reference point this time is H5, and at the current acquisition the coordinates of H1 are (X1, Y1), of H2 are (X2, Y2), of H3 are (X3, Y3), and of H4 are (X4, Y4);
and at the last acquisition the coordinates of H1 were (X1′, Y1′), of H2 were (X2′, Y2′), of H3 were (X3′, Y3′), of H4 were (X4′, Y4′), and of H5 were (X5′, Y5′);
taking H1 as an example, the coordinate difference of H1 may be determined based on equation (1) and equation (2):
ΔXH1=X1′-X1 (1)
ΔYH1=Y1′-Y1 (2)
Similarly, the coordinate differences ΔXH2 and ΔYH2 of H2, ΔXH3 and ΔYH3 of H3, and ΔXH4 and ΔYH4 of H4 are obtained.
Then the abscissa X5 of the H5 reference point this time may be determined based on equation (3):
X5=X5′-(ΔXH1+ΔXH2+ΔXH3+ΔXH4)/4 (3)
The ordinate Y5 of the H5 reference point this time may be determined based on equation (4):
Y5=Y5′-(ΔYH1+ΔYH2+ΔYH3+ΔYH4)/4 (4)
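The missing-point estimate can be sketched in Python as follows. This assumes equations (3) and (4) subtract the mean per-point displacement (last frame minus current frame, per the Δ definitions in equations (1) and (2)) from the missing point's last-frame coordinates, which is our reading of the text; the names are illustrative:

```python
def estimate_missing_point(prev_missing, current_pts, prev_pts):
    """Estimate a missing reference point (H5 in the example above).

    current_pts / prev_pts are aligned lists of (x, y) tuples for the
    points acquired in both frames.  Each known point yields a
    displacement (Xi' - Xi, Yi' - Yi); the mean displacement is
    subtracted from the missing point's previous position.
    """
    dxs = [xp - x for (x, _), (xp, _) in zip(current_pts, prev_pts)]
    dys = [yp - y for (_, y), (_, yp) in zip(current_pts, prev_pts)]
    mean_dx = sum(dxs) / len(dxs)
    mean_dy = sum(dys) / len(dys)
    return prev_missing[0] - mean_dx, prev_missing[1] - mean_dy
```

Intuitively, if the four visible head points all shifted by the same amount between frames, the estimate shifts the missing point by that same amount.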
After the coordinates of the 5 head reference points are determined, the average of their abscissas is taken as the abscissa XH of the head center point, and the average of their ordinates as the ordinate YH, that is, the head center point is (XH, YH).
Similarly, for the body reference points: if the number acquired this time is 0-5, posture recognition cannot be performed and acquisition is repeated; if at least 6 but fewer than 9 reference points are acquired, the coordinates of the remaining missing body reference points may be determined from the coordinates of the acquired ones. The specific method follows the determination of missing head reference points and is not repeated here.
After the coordinates of the 9 body reference points are determined, the abscissa average value of the 9 body reference points is taken as the abscissa XB of the body center point, and the ordinate average value of the 9 body reference points is taken as the ordinate YB of the body center point, that is, the coordinates of the body center point are (XB, YB).
For the leg reference points, if the number of the reference points acquired at the current time is 0-3, gesture recognition cannot be performed, and the acquisition is repeated. If the number of acquired reference points is 4 or more and less than 6, the coordinates of the remaining missing leg reference points may be determined based on the acquired coordinates of the known reference points. The specific determination method can refer to the determination method of the head missing reference point, and is not described herein.
After the coordinates of the 6 leg reference points are determined, the average of their abscissas is taken as the abscissa XL of the leg center point, and the average of their ordinates as the ordinate YL of the leg center point, that is, the leg center point is (XL, YL).
After the head, body, and leg center points are determined, the first included angle ∠HB between the X axis and the first line connecting the head center point and the body center point is determined according to equation (5):
∠HB=tan⁻¹[|(YH-YB)/(XH-XB)|] (5)
The second included angle ∠LB between the X axis and the second line connecting the body center point and the leg center point is determined according to equation (6):
∠LB=tan⁻¹[|(YL-YB)/(XL-XB)|] (6)
The third included angle between the first line and the second line is determined according to equation (7):
∠HBL=|∠HB-∠LB| (7)
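Equations (5)-(7) translate directly into code; a small Python sketch (the function names are illustrative, and the vertical-line case, where the denominator is zero, is mapped to 90°):

```python
import math

def incline(p, q):
    """Inclination (degrees) of line p-q to the X axis, per eqs. (5)/(6)."""
    dx, dy = p[0] - q[0], p[1] - q[1]
    if dx == 0:
        return 90.0  # vertical line: tan^-1 of an infinite slope
    return math.degrees(math.atan(abs(dy / dx)))

def body_angles(head, body, leg):
    """Return (angle HB, angle LB, angle HBL) for the three center points."""
    hb = incline(head, body)
    lb = incline(leg, body)
    return hb, lb, abs(hb - lb)  # equation (7)
```

Because of the absolute values, both angles lie in [0°, 90°], so ∠HBL measures only how far the head-body and body-leg lines diverge, not their orientation.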
After the three included angles are determined: if ∠HB ≤ 30°, ∠LB ≤ 30°, and ∠HBL ≤ 15°, it is further judged whether the facial features of the target infant can be acquired;
If the facial features of the target infant can be acquired and the recognition counts satisfy the recognition condition, the posture recognition result is determined to be the supine posture. Here, the recognition counts satisfy the recognition condition when, within a preset number of posture recognitions, a preset reference number of results are the same. For example, in 7 posture recognitions, the condition is satisfied if 6 of the results are the same.
If ∠HB ≥ 30°, ∠LB ≥ 30°, and ∠HBL ≤ 15°, the ordinates of the head, body, and leg center points of the target infant are acquired; if these satisfy YH > YB > YL and the recognition counts satisfy the recognition condition, the posture recognition result is determined to be the upright posture.
If none of the judgment conditions for the supine, prone, or upright posture is satisfied within the preset number of posture recognitions, the posture recognition result is determined to be the unknown posture.
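The angle-based decision rules and the k-of-n recognition condition can be sketched as follows. The 30° and 15° thresholds and the 6-of-7 window follow the example values in the text; the function and parameter names are illustrative:

```python
from collections import Counter

def classify_posture(hb, lb, hbl, face_visible, yh, yb, yl,
                     a1=30.0, a2=15.0):
    """Single-frame posture decision following the rules above.

    a1/a2 are the first and second angle thresholds.  The repeated-
    recognition condition is applied separately by the caller.
    """
    if hb <= a1 and lb <= a1 and hbl <= a2:
        return "supine" if face_visible else "prone"
    if hb >= a1 and lb >= a1 and hbl <= a2 and yh > yb > yl:
        return "upright"
    return "unknown"

def meets_recognition_condition(results, window=7, needed=6):
    """True when, within the last `window` results, at least `needed`
    agree (the 6-of-7 example above)."""
    recent = results[-window:]
    if len(recent) < window:
        return False
    return Counter(recent).most_common(1)[0][1] >= needed
```

A caller would classify each identification period with `classify_posture` and only commit a result once `meets_recognition_condition` holds for it.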
S112, determining the current recognition counts for the prone posture and the unknown posture according to the posture recognition result;
After the posture recognition result is determined, the current recognition counts for the prone posture and the unknown posture are determined from it as follows:
if the posture recognition result is the upright posture and the current recognition count for the prone posture is greater than 0, the current recognition count for the prone posture is decremented by one; and
if the current recognition count for the unknown posture is greater than 0, the current recognition count for the unknown posture is decremented by one.
In one embodiment, determining the current recognition counts for the prone posture and the unknown posture according to the posture recognition result includes:
if the posture recognition result is the prone posture, the current recognition count for the prone posture is incremented by one; and
if the current recognition count for the unknown posture is greater than 0, the current recognition count for the unknown posture is decremented by one.
In one embodiment, if the posture recognition result is not the prone posture and the current recognition count for the prone posture is greater than 0, the current recognition count for the prone posture is decremented by one; and
the current recognition count for the unknown posture is incremented by one.
Therefore, after each recognition, the current recognition counts for the prone posture and the unknown posture are determined, providing a data basis for the subsequent alarm prompts.
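The counter updates of this step can be sketched as follows. Treating a supine result like an upright one (both decrement the counters) is our assumption, since the text states rules only for the upright, prone, and "not prone" results; the names are illustrative:

```python
def update_posture_counts(result, prone_count, unknown_count):
    """Apply one identification period's counter rules.

    `result` is one of "supine", "prone", "upright", "unknown".
    Returns the updated (prone_count, unknown_count) pair; alarms fire
    elsewhere when either count reaches its preset maximum.
    """
    if result == "prone":
        prone_count += 1
        if unknown_count > 0:
            unknown_count -= 1
    elif result in ("upright", "supine"):  # safe postures: decay both
        if prone_count > 0:
            prone_count -= 1
        if unknown_count > 0:
            unknown_count -= 1
    else:  # unknown result: decay prone, accumulate unknown
        if prone_count > 0:
            prone_count -= 1
        unknown_count += 1
    return prone_count, unknown_count
```

The decay-on-safe-posture behavior means that only sustained prone or unknown results can drive a count up to its alarm threshold.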
S113, pushing alarm prompt information if the current identification count for the infant position is greater than or equal to the preset first count, the current recognition count for the prone posture is greater than or equal to the preset second count, or the current recognition count for the unknown posture is greater than or equal to the preset third count.
As described above, the maximum identification count for the infant position (the first count), the maximum recognition count for the prone posture (the second count), and the maximum recognition count for the unknown posture (the third count) have been set in advance. After the three current counts are determined, in one embodiment, if the current identification count for the infant position is greater than or equal to the preset first count, the current recognition count for the prone posture is greater than or equal to the preset second count, or the current recognition count for the unknown posture is greater than or equal to the preset third count, the alarm prompt information is pushed.
In one embodiment, if it is determined that the current recognition times of the baby position are smaller than the preset first number of times, the current recognition times of the prone posture are smaller than the preset second number of times, and the current recognition times of the unknown posture are smaller than the preset third number of times, whether a looking-after person exists in the current scene is judged;
if it is determined that no looking-after person exists, the current recognition times of the looking-after person are increased by one;
when the current recognition times of the looking-after person are greater than or equal to a preset fourth number of times, alarm prompt information is pushed.
In one embodiment, if the posture recognition result is determined to be the prone posture, the current recognition times of the prone posture are increased by one; and if the current recognition times of the unknown posture are greater than 0, the current recognition times of the unknown posture are reduced by one; the method further comprises the following steps:
judging whether a looking-after person exists in the current scene;
if it is determined that no looking-after person exists, increasing the current recognition times of the looking-after person by one; and
when the current recognition times of the looking-after person are greater than or equal to the preset fourth number of times, pushing alarm prompt information.
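One possible per-period reading of the alarm decision above is sketched below. The function shape and the threshold defaults are assumptions for illustration only; the patent states merely that the four numbers are preset and determined by actual conditions.

```python
def check_alarms(position_count, prone_count, unknown_count,
                 caretaker_count, caretaker_present,
                 first=5, second=5, third=5, fourth=5):
    """Return (push_alarm, updated_caretaker_count) for one recognition period.

    Thresholds default to 5 purely for illustration; the patent only says
    they are preset values chosen according to actual conditions.
    """
    # Any position/posture counter at its preset maximum triggers the alarm.
    if position_count >= first or prone_count >= second or unknown_count >= third:
        return True, caretaker_count
    # Otherwise fall back to the looking-after-person check: count periods
    # with no caretaker present, and alarm once that count reaches the
    # fourth threshold.
    if not caretaker_present:
        caretaker_count += 1
    return caretaker_count >= fourth, caretaker_count
```

Called once per recognition period, this mirrors the cycle in which posture alarms take priority and caretaker absence accumulates separately toward its own threshold.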
The first number, the second number, the third number, and the fourth number may be determined based on actual conditions, and are not limited herein.
In one embodiment, after pushing the alarm prompt information, the method further includes:
sending the user prompt information asking whether to reset the warning; and
if an instruction to reset the warning fed back by the user is received, clearing the corresponding recognition times.
For example, when it is determined that the current recognition times of the looking-after person are greater than or equal to the preset fourth number of times, after the alarm prompt information is pushed, if an instruction to reset the warning fed back by the user is received, the current recognition times of the looking-after person are cleared.
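The reset flow in the example above can be sketched as follows; the dictionary of counters and its key names are hypothetical, not identifiers from the patent.

```python
def handle_reset_feedback(reset_requested, counters, triggered_key):
    """Clear the counter that triggered the alarm if the user confirms a reset.

    `counters` maps hypothetical counter names to their current values; only
    the counter associated with the pushed alarm is cleared.
    """
    if reset_requested and triggered_key in counters:
        counters[triggered_key] = 0
    return counters
```

For instance, after a caretaker-absence alarm, a confirmed reset clears only that counter and leaves the posture counters untouched.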
In this way, the position and posture of the baby and the presence of a looking-after person are continuously and cyclically recognized, and when an alarm is triggered, alarm prompt information is pushed to the user so that the looking-after person can be reminded in time, which improves the timeliness of baby monitoring and further improves the safety of the baby.
Based on the same inventive concept as the previous embodiments, this embodiment further provides a monitoring device. As shown in fig. 4, the device includes:
the first identifying unit 41, configured to identify the current scene based on a preset identification period and, if the current scene is determined to be a target scene, determine the current recognition times of the baby position;
the second recognition unit 42, configured to perform posture recognition on the target infant to obtain a posture recognition result, the posture recognition result including: a supine posture, a prone posture, an upright posture and an unknown posture;
the determining unit 43, configured to determine the current recognition times of the prone posture and the current recognition times of the unknown posture according to the posture recognition result; and
the pushing unit 44, configured to push alarm prompt information if it is determined that the current recognition times of the baby position are greater than or equal to a preset first number of times, the current recognition times of the prone posture are greater than or equal to a preset second number of times, or the current recognition times of the unknown posture are greater than or equal to a preset third number of times.
Since the device described in the embodiments of the present invention is a device for implementing the infant monitoring method of the embodiments of the present invention, a person skilled in the art can understand the specific structure and variations of the device based on the method described herein, and a detailed description is therefore omitted. All devices used in the method of the embodiments of the present invention fall within the protection scope of the present invention.
Based on the same inventive concept, the present embodiment provides a computer device 500, as shown in fig. 5, including a memory 510, a processor 520, and a computer program 511 stored on the memory 510 and executable on the processor 520, wherein the processor 520 implements the following steps when executing the computer program 511:
identifying a current scene based on a preset identification period, and if the current scene is determined to be a target scene, determining the current identification times of the baby position;
Carrying out gesture recognition on the target infant to obtain a gesture recognition result; the gesture recognition result includes: supine posture, prone posture, upright posture and unknown posture;
determining the current recognition times of the prone posture and the current recognition times of the unknown posture according to the posture recognition result;
If the current recognition times of the infant position are greater than or equal to the preset first number of times, the current recognition times of the prone posture are greater than or equal to the preset second number of times, or the current recognition times of the unknown posture are greater than or equal to the preset third number of times, pushing alarm prompt information.
Based on the same inventive concept, the present embodiment provides a computer-readable storage medium 600, as shown in fig. 6, having stored thereon a computer program 611, which computer program 611 when executed by a processor implements the steps of:
identifying a current scene based on a preset identification period, and if the current scene is determined to be a target scene, determining the current identification times of the baby position;
Carrying out gesture recognition on the target infant to obtain a gesture recognition result; the gesture recognition result includes: supine posture, prone posture, upright posture and unknown posture;
determining the current recognition times of the prone posture and the current recognition times of the unknown posture according to the posture recognition result;
If the current recognition times of the infant position are greater than or equal to the preset first number of times, the current recognition times of the prone posture are greater than or equal to the preset second number of times, or the current recognition times of the unknown posture are greater than or equal to the preset third number of times, pushing alarm prompt information.
Through one or more embodiments of the present invention, the present invention has the following benefits or advantages:
The invention provides a baby monitoring method and a baby monitoring device. The method comprises: identifying a current scene based on a preset identification period, and if the current scene is determined to be a target scene, determining the current recognition times of the baby position; performing posture recognition on the target infant to obtain a posture recognition result, where the posture recognition result includes a supine posture, a prone posture, an upright posture and an unknown posture; determining the current recognition times of the prone posture and the current recognition times of the unknown posture according to the posture recognition result; and pushing alarm prompt information if the current recognition times of the baby position are greater than or equal to a preset first number of times, the current recognition times of the prone posture are greater than or equal to a preset second number of times, or the current recognition times of the unknown posture are greater than or equal to a preset third number of times. In this way, the position and the posture of the baby are continuously and cyclically recognized, and when an alarm is triggered, alarm prompt information is pushed to the user, so that the looking-after person can be reminded in time; this improves the timeliness of baby monitoring and further improves the safety of the baby.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general-purpose systems may also be used with the teachings herein. The required structure for constructing such a system is apparent from the description above. In addition, the present invention is not directed to any particular programming language. It will be appreciated that the teachings of the present invention described herein may be implemented in a variety of programming languages, and the above description of specific languages is provided to disclose the enablement and best mode of the present invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be construed as reflecting the intention that: i.e., the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component and, furthermore, they may be divided into a plurality of sub-modules or sub-units or sub-components. Any combination of all features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be used in combination, except insofar as at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments can be used in any combination.
Various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some or all of the components in a gateway, proxy server, system according to embodiments of the present invention may be implemented in practice using a microprocessor or Digital Signal Processor (DSP). The present invention can also be implemented as an apparatus or device program (e.g., a computer program and a computer program product) for performing a portion or all of the methods described herein. Such a program embodying the present invention may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. do not denote any order. These words may be interpreted as names.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
The above description is not intended to limit the scope of the invention, but is intended to cover any modifications, equivalents, and improvements within the spirit and principles of the invention.

Claims (7)

1. A method of monitoring an infant, the method comprising:
identifying a current scene based on a preset identification period, and if the current scene is determined to be a target scene, determining the current identification times of the baby position;
carrying out gesture recognition on the target infant through a camera to obtain a gesture recognition result; the gesture recognition result includes: supine posture, prone posture, upright posture and unknown posture;
determining the current recognition times of the prone posture and the current recognition times of the unknown posture according to the posture recognition result;
pushing alarm prompt information if the current recognition times of the infant position are greater than or equal to a preset first number of times, the current recognition times of the prone posture are greater than or equal to a preset second number of times, or the current recognition times of the unknown posture are greater than or equal to a preset third number of times; wherein,
Carrying out gesture recognition on the target infant through a camera to obtain a gesture recognition result, wherein the gesture recognition result comprises the following steps:
determining a head center point, a body center point, and a leg center point of the target infant;
Determining a first angle between a first line connecting the head center point and the body center point and an X-axis;
Determining a second angle between a second line between the body center point and the leg center point and the X-axis;
Determining a third included angle between the first connecting line and a third connecting line;
carrying out gesture recognition on the target infant according to the first included angle, the second included angle and the third included angle to obtain a gesture recognition result;
The step of carrying out gesture recognition on the target infant according to the first included angle, the second included angle and the third included angle to obtain a gesture recognition result comprises the following steps:
If the first included angle is smaller than or equal to a first angle threshold, the second included angle is smaller than or equal to the first angle threshold and the third included angle is smaller than or equal to a second angle threshold, judging whether facial features of the target infant can be acquired or not;
if the facial features of the target infant can be obtained and the recognition times meet the recognition conditions, determining that the posture recognition result is a supine posture;
if the facial features of the target infant are not obtained and the recognition times meet the recognition conditions, determining that the gesture recognition result is the prone gesture;
The step of carrying out gesture recognition on the target infant according to the first included angle, the second included angle and the third included angle to obtain a gesture recognition result comprises the following steps:
if the first included angle is larger than or equal to a first angle threshold, the second included angle is larger than or equal to the first angle threshold and the third included angle is smaller than or equal to a second angle threshold, acquiring the longitudinal coordinate of the head center point, the longitudinal coordinate of the body center point and the longitudinal coordinate of the leg center point of the target infant;
if it is determined that the ordinate of the head center point, the ordinate of the body center point and the ordinate of the leg center point of the target infant satisfy YH > YB > YL, and the recognition times meet the recognition conditions, determining that the gesture recognition result is an upright gesture; wherein,
The YH is the ordinate of the head center point, the YB is the ordinate of the body center point, and the YL is the ordinate of the leg center point.
2. The method of claim 1, wherein determining the current number of identifications of the infant's location comprises:
if the target baby exists in the current scene, resetting the current identification times of the baby position;
if it is determined that the target infant does not exist in the current scene, the current identification number of the infant position is increased once.
3. The method of claim 1, wherein the determining the current recognition times of the prone posture and the current recognition times of the unknown posture according to the posture recognition result includes:
if the gesture recognition result is determined to be the upright gesture, and the current recognition times of the prone gesture are greater than 0, reducing the current recognition times of the prone gesture by one; and
if the current recognition times of the unknown gesture are greater than 0, reducing the current recognition times of the unknown gesture by one.
4. The method of claim 1, wherein the determining the current recognition times of the prone posture and the current recognition times of the unknown posture according to the posture recognition result includes:
if the gesture recognition result is determined to be the prone gesture, increasing the current recognition times of the prone gesture by one; and
if the current recognition times of the unknown gesture are greater than 0, reducing the current recognition times of the unknown gesture by one.
5. The method of claim 4, wherein the method further comprises:
If the gesture recognition result is not the prone gesture and the current recognition times of the prone gesture are greater than 0, reducing the current recognition times of the prone gesture by one; and
increasing the current recognition times of the unknown gesture by one.
6. The method of claim 1, wherein the method further comprises:
If the current recognition times of the infant position are smaller than the preset first number of times, the current recognition times of the prone posture are smaller than the preset second number of times, and the current recognition times of the unknown posture are smaller than the preset third number of times, judging whether a looking-after person exists in the current scene;
if it is determined that no looking-after person exists, increasing the current recognition times of the looking-after person by one; and
pushing alarm prompt information when the current recognition times of the looking-after person are greater than or equal to a preset fourth number of times.
7. A monitoring device, the device comprising:
the first recognition unit is used for recognizing the current scene based on a preset recognition period, and if the current scene is determined to be a target scene, the current recognition times of the baby position are determined;
The second recognition unit is used for carrying out gesture recognition on the target infant to obtain a gesture recognition result; the gesture recognition result includes: supine posture, prone posture, upright posture and unknown posture;
the determining unit is used for determining the current recognition times of the prone posture and the current recognition times of the unknown posture according to the posture recognition result;
The pushing unit is used for pushing alarm prompt information if the current recognition times of the infant position are greater than or equal to a preset first number of times, the current recognition times of the prone posture are greater than or equal to a preset second number of times, or the current recognition times of the unknown posture are greater than or equal to a preset third number of times; wherein,
The gesture recognition of the target infant is performed by a camera to obtain a gesture recognition result, comprising:
determining a head center point, a body center point, and a leg center point of the target infant;
Determining a first angle between a first line connecting the head center point and the body center point and an X-axis;
Determining a second angle between a second line between the body center point and the leg center point and the X-axis;
Determining a third included angle between the first connecting line and a third connecting line;
carrying out gesture recognition on the target infant according to the first included angle, the second included angle and the third included angle to obtain a gesture recognition result;
The step of carrying out gesture recognition on the target infant according to the first included angle, the second included angle and the third included angle to obtain a gesture recognition result comprises the following steps:
If the first included angle is smaller than or equal to a first angle threshold, the second included angle is smaller than or equal to the first angle threshold and the third included angle is smaller than or equal to a second angle threshold, judging whether facial features of the target infant can be acquired or not;
if the facial features of the target infant can be obtained and the recognition times meet the recognition conditions, determining that the posture recognition result is a supine posture;
if the facial features of the target infant are not obtained and the recognition times meet the recognition conditions, determining that the gesture recognition result is the prone gesture;
The step of carrying out gesture recognition on the target infant according to the first included angle, the second included angle and the third included angle to obtain a gesture recognition result comprises the following steps:
if the first included angle is larger than or equal to a first angle threshold, the second included angle is larger than or equal to the first angle threshold and the third included angle is smaller than or equal to a second angle threshold, acquiring the longitudinal coordinate of the head center point, the longitudinal coordinate of the body center point and the longitudinal coordinate of the leg center point of the target infant;
if it is determined that the ordinate of the head center point, the ordinate of the body center point and the ordinate of the leg center point of the target infant satisfy YH > YB > YL, and the recognition times meet the recognition conditions, determining that the gesture recognition result is an upright gesture; wherein,
The YH is the ordinate of the head center point, the YB is the ordinate of the body center point, and the YL is the ordinate of the leg center point.
CN202211467650.8A 2022-11-22 2022-11-22 Infant monitoring method and device Active CN115880852B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211467650.8A CN115880852B (en) 2022-11-22 2022-11-22 Infant monitoring method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211467650.8A CN115880852B (en) 2022-11-22 2022-11-22 Infant monitoring method and device

Publications (2)

Publication Number Publication Date
CN115880852A CN115880852A (en) 2023-03-31
CN115880852B true CN115880852B (en) 2024-07-16

Family

ID=85760571

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211467650.8A Active CN115880852B (en) 2022-11-22 2022-11-22 Infant monitoring method and device

Country Status (1)

Country Link
CN (1) CN115880852B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105813562A (en) * 2013-12-13 2016-07-27 皇家飞利浦有限公司 Sleep monitoring system and method
CN108932803A (en) * 2018-07-27 2018-12-04 广东交通职业技术学院 A kind of method that preventing sudden death of the baby, warning device and system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2396043A (en) * 2002-12-04 2004-06-09 John Phizackerley Baby alarm with tilt sensor
US9572528B1 (en) * 2012-08-06 2017-02-21 Los Angeles Biomedical Research Insitute at Harbor-UCLA Medical Center Monitor for SIDS research and prevention
CN114216564B (en) * 2021-11-26 2025-02-11 杭州七格智联科技有限公司 An intelligent temperature detection method for infants and young children based on multi-region positioning of the head
CN115331308A (en) * 2022-06-15 2022-11-11 南京木马牛智能科技有限公司 Identification method suitable for motion posture of baby


Also Published As

Publication number Publication date
CN115880852A (en) 2023-03-31


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant