WO2017085743A2 - Wearable personal safety device with image and voice processing capabilities - Google Patents
- Publication number
- WO2017085743A2 (PCT/IN2016/050404)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- child
- person
- voice
- images
- safety device
- Prior art date
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B21/00—Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
- G08B21/02—Alarms for ensuring the safety of persons
- G08B21/0202—Child monitoring systems using a transmitter-receiver system carried by the parent and the child
- G08B21/0205—Specific application combined with child monitoring using a transmitter-receiver system
- G08B21/0208—Combination with audio or video communication, e.g. combination with "baby phone" function
- G08B21/0211—Combination with medical sensor, e.g. for measuring heart rate, temperature
- G08B21/028—Communication between parent and child units via remote transmission means, e.g. satellite network
Definitions
- the embodiments herein generally relate to a wearable personal safety device, and, more particularly, to a system and a method for tracking a child using a wearable personal safety device with image and voice processing capabilities.
- an embodiment herein provides a system for tracking a child's day-to-day activities.
- the system includes a wearable personal safety device, and a tracking tool.
- the wearable personal safety device analyzes the child's day-to-day activities.
- the wearable personal safety device includes an image capturing unit, a voice recognition unit, a voice comparison unit, a sensor unit, and a communication unit.
- the image capturing unit captures a plurality of images of a person at different angles and different time intervals while the person interacts with the child.
- the voice recognition unit recognizes a conversation between the child and the person.
- the voice comparison unit records the conversation and compares it with voice signatures that are stored in the wearable personal safety device to compute a situation.
- the sensor unit senses a movement or a location of the child.
- the communication unit communicates (i) the plurality of images, (ii) the conversation, (iii) the movement or the location.
- the tracking tool receives at least one of (i) the plurality of images of the person, (ii) the conversation, and (iii) the movement or the location of the child to track the wearable personal safety device.
- the tracking tool includes a memory, and a processor.
- the memory stores a database and a set of modules.
- the database stores predefined images, and predefined voice signatures.
- the processor executes the set of modules.
- the set of modules includes an image retrieval module, an image sorting module, a voice recognition module, a situation analyzing module, an image voice correlating module, a male person tracking module, and a feelings analyzing module.
- the image retrieval module retrieves each of the plurality of images from the wearable personal safety device to identify a gender of the person based on the predefined images corresponding to the gender.
- the image sorting module segregates each of the plurality of images based on the gender.
- the voice recognition module recognizes voice of the person based on the predefined voice signatures corresponding to the gender.
- the situation analyzing module analyzes the situation of the child based on a facial expression of the child and activities of the person in (a) the conversation, and (b) each of the plurality of images.
- the image voice correlating module correlates the voice of the person with each corresponding image of the plurality of images.
- the male person tracking module assigns a priority value to a male person corresponding to each of the plurality of images and continuously tracks the male person when the gender is male.
- the feelings analyzing module analyzes the conversation to determine different types of impressions of the child.
- the tracking tool further includes a health analyzing module that analyzes health data of the child and communicates the health data to the tracking tool of a user.
- the tracking tool further includes a voice comparison module that compares the voice of the person with the predefined voice signatures stored in the database to determine whether the voice already exists.
- In one aspect, a method for tracking day-to-day activities of a child is provided.
- the method includes the steps of: (a) capturing a plurality of images of a person at different angles and different time intervals while the person is speaking to the child; (b) recognizing a conversation between the child and the person to compute a situation of the child; (c) recording the conversation to compare with voice signatures already stored in a wearable personal safety device; (d) sensing a movement or a location of the child; (e) communicating (i) the plurality of images, (ii) the conversation, and (iii) the movement or the location to a user device to track the child's activities; (f) retrieving each of the plurality of images from the wearable personal safety device to identify a gender of the person; (g) segregating each of the plurality of images based on the gender; (h) recognizing the voice of the person based on the gender; (i) comparing the voice of the person with the voice signatures stored in the database; and (j) analyzing the situation of the child based on a facial expression of the child and activities of the person in (a) the conversation, and (b) each of the plurality of images.
- the method further includes the steps of: (k) correlating the voice with each corresponding image of the plurality of images; (l) assigning a priority value to a male person corresponding to each of the plurality of images and continuously tracking the male person; (m) analyzing the conversation to determine different types of impressions of the child and to measure the number of times each type of impression occurs; (n) analyzing health data of the child to communicate the health data to the tracking tool of a user; and (o) comparing the voice of the person with the predefined voice signatures stored in the database to determine whether the voice already exists.
- one or more non-transitory computer readable storage mediums storing one or more sequences of instructions.
- the one or more non-transitory computer readable storage mediums perform the steps of: (a) capturing a plurality of images of a person at different angles and different time intervals while the person is speaking to the child; (b) recognizing a conversation between the child and the person to compute a situation of the child; (c) recording the conversation to compare with voice signatures already stored in a wearable personal safety device; (d) sensing a movement or a location of the child; (e) communicating (i) the plurality of images, (ii) the conversation, and (iii) the movement or the location to a user device to track the child's activities; (f) retrieving each of the plurality of images from the wearable personal safety device to identify a gender of the person; (g) segregating each of the plurality of images based on the gender; (h) recognizing the voice of the person based on the gender; (i) comparing the voice of the person with the voice signatures stored in the database; and (j) analyzing the situation of the child based on a facial expression of the child and activities of the person in (a) the conversation, and (b) each of the plurality of images.
- the one or more non-transitory computer readable storage mediums further perform the steps of: (k) correlating the voice with each corresponding image of the plurality of images; (l) assigning a priority value to a male person corresponding to each of the plurality of images and continuously tracking the male person; (m) analyzing the conversation to determine different types of impressions of the child and to measure the number of times each type of impression occurs; (n) analyzing health data of the child to communicate the health data to the tracking tool of a user; and (o) comparing the voice of the person with the predefined voice signatures stored in the database to determine whether the voice already exists.
- FIG. 1 illustrates a system view of a user tracking a child's day-to-day activities using a wearable personal safety device according to an embodiment herein;
- FIG. 2 illustrates an exploded view of the wearable personal safety device of FIG. 1 according to an embodiment herein;
- FIG. 3 illustrates an exploded view of a tracking tool of FIG. 1 according to an embodiment herein;
- FIG. 4 is an exemplary view of the user downloading information from the wearable personal safety device by connecting the wearable personal safety device with a user device of FIG. 1 according to an embodiment herein;
- FIG. 5 is an exemplary view of the user analyzing the child (for example, the child getting distressed by the activities of a male person) of FIG. 1 according to an embodiment herein;
- FIG. 6 is an exemplary view of the child wearing the wearable personal safety device of FIG. 1 according to an embodiment herein;
- FIGS. 7A-7B are flow diagrams that illustrate a method of a user tracking a child's activities using a wearable personal safety device according to an embodiment herein;
- FIG. 8 illustrates an exploded view of a receiver of FIG. 1 according to an embodiment herein;
- FIG. 9 illustrates a schematic diagram of computer architecture of a user device, in accordance with the embodiments herein.
- In FIGS. 1 through 9, where similar reference characters denote corresponding features consistently throughout the figures, preferred embodiments are shown.
- FIG. 1 illustrates a system view 100 of a user 102 tracking a child's day-to-day activities using a wearable personal safety device 108 according to an embodiment herein.
- the system view 100 includes the user 102, a user device 104, a tracking tool 106, and the wearable personal safety device 108.
- the tracking tool 106 is installed in the user device 104.
- the wearable personal safety device 108 is worn by the child to track the child's activities when the child is conversing with a known/unknown person.
- the tracking tool 106 shows the child's activities to the user 102. In one embodiment, the tracking tool 106 presents only pictures of males to the user 102.
- the tracking tool 106 analyzes the voice and the pictures of the male who is speaking to the child and standing in front of the child. In one embodiment, the tracking tool 106 analyzes the distress of the child based on facial recognition of the child and/or the person. In one embodiment, a reset option may be provided in the wearable personal safety device 108 to conserve the storage and battery of the wearable personal safety device 108.
- the user 102 may be the child's mother, the child's father, and/or the child's relatives, etc.
- a teacher/a male staff may wear the wearable personal safety device that points downward to analyze a conversation between the teacher and the child at a school.
- the user device 104 may be a computer, a mobile phone, a tablet, and/or a smart phone.
- FIG. 2 illustrates an exploded view 200 of the wearable personal safety device 108 of FIG. 1 according to an embodiment herein.
- the wearable personal safety device 108 includes an image capturing unit 202, a voice recognition unit 204, a voice comparison unit 206, information download unit 208, a sensor unit 210, a photo detector unit 212, a memory unit 214 and an enclosure 216.
- the information download unit 208 includes a security unit 208A.
- the image capturing unit 202 captures an image of a person who is speaking to the child. The person may be a known person, a third person (i.e. stranger), and friends of the child, etc.
- the image capturing unit 202 captures images every 30 minutes.
- 4000 images may be captured by the image capturing unit 202 each day.
- the image capturing unit 202 may be mounted at an angle suitable for capturing the person's face which might be at a greater height than the child.
- the voice recognition unit 204 recognizes the conversation between the child and the person while the child is conversing with the person.
- the voice recognition unit tracks the voices occurring around the child.
- the wearable personal safety device 108 may trigger a video recording event and may record for 50 minutes.
- the voice comparison unit 206 compares recorded conversations with the voice signatures already stored in the wearable personal safety device 108.
- the voice comparison unit 206 may be used to filter out the background noise and provide only conversations between the child and the person.
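The voice comparison described above can be illustrated with a minimal sketch. The patent does not specify a matching algorithm; the following assumes each voice has already been reduced to a fixed-length feature vector, and the vector values, names, and threshold are purely illustrative assumptions.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def match_voice(sample, stored_signatures, threshold=0.85):
    """Return the name of the best-matching stored signature, or None."""
    best_name, best_score = None, threshold
    for name, signature in stored_signatures.items():
        score = cosine_similarity(sample, signature)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical stored signatures; real systems would derive these vectors
# from enrolled audio.
signatures = {"child": [0.9, 0.1, 0.2], "teacher": [0.2, 0.8, 0.5]}
print(match_voice([0.88, 0.12, 0.25], signatures))  # → child
```

A threshold on the similarity score also gives a natural way to reject background noise that matches no stored signature, which is one plausible reading of the filtering behaviour described above.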
- the information download unit 208 is configured to download the information, including the conversations between the child and the person and the images of the person, and provide it to the user 102.
- the information download unit 208 may be rendered accessible only to an authorized person (e.g. the user 102) using the security unit 208A.
- the access to the information may be username and password protected.
- the access to the information may be physically locked with a tumbler lock.
- the sensor unit 210 senses the movement of the wearable personal safety device 108. When the wearable personal safety device 108 is moved from its original place, a light in the wearable personal safety device 108 may change. This indication helps the user 102 determine that the child is in trouble.
- the photo detector unit 212 analyzes the light received by the wearable personal safety device 108. If the wearable personal safety device 108 receives less or more light than the predetermined level, an alarm may be sent to the user 102 to track the child.
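A minimal sketch of the photo-detector check described above, assuming the light level is available as a lux reading; the band limits are illustrative assumptions, not values from the patent.

```python
def light_alarm(lux, low=50, high=10000):
    """Return True when the reading falls outside the expected light band,
    indicating the device may be covered, removed, or exposed."""
    return lux < low or lux > high

assert light_alarm(10)        # too dark: device may be covered
assert light_alarm(20000)     # too bright: device may be removed/exposed
assert not light_alarm(300)   # normal indoor light, no alarm
print("alarm checks passed")
```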
- the memory unit 214 stores all the data, including the voice conversation between the child and the person, the images of the person, etc.
- the enclosure 216 covers the wearable personal safety device 108.
- FIG. 3 illustrates an exploded view 300 of the tracking tool 106 of FIG. 1 according to an embodiment herein.
- the exploded view 300 includes a database 302, an image retrieval module 304, an image sorting module 306, a voice recognizing module 308, a situation analyzing module 310, an image voice correlating module 312, a male person tracking module 314, a feelings analyzing module 316, and a health analyzing module 318.
- the database 302 may store simulation, emulation and/or prototype data, and stores predefined images and predefined voice signatures.
- the image retrieval module 304 retrieves each of said plurality of images from the wearable personal safety device 108 to identify a gender of the person based on the predefined images corresponding to the gender.
- the tracking tool 106 is trained initially to recognize the gender using a machine learning method.
- the image is given to the database 302 to recognize whether the image shows a male or a female.
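The gender-recognition step can be sketched as follows. The patent names machine learning but no specific model; this hedged example uses a simple nearest-centroid rule over hypothetical face-feature vectors, standing in for the trained classifier a real system would use. All feature values are made up.

```python
def nearest_centroid(features, centroids):
    """Classify a feature vector by its closest class centroid
    (squared Euclidean distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(features, centroids[label]))

# Illustrative centroids learned during the initial training phase.
centroids = {"male": [0.8, 0.2], "female": [0.3, 0.7]}
print(nearest_centroid([0.75, 0.25], centroids))  # → male
```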
- the image sorting module 306 segregates each of the plurality of images based on the gender.
- the voice recognition module 308 recognizes the voice of the person based on the predefined voice signatures corresponding to the gender. For example, all instances of conversation with a person whose voice signature is stored in the wearable personal safety device 108 may be identified and played back.
- the situation analyzing module 310 analyzes the situation of the child (for example, whether the child is feeling distressed because of the person) based on a facial expression of the child and activities of the person in (a) the conversation, and (b) each of the plurality of images.
- the image voice correlating module 312 correlates the voice of the person with the corresponding each of the plurality of images by tracking the image for either two minutes before and/or after the voice to analyze the male image.
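The two-minute correlation window described above can be sketched directly: a voice event is paired with any images time-stamped within two minutes before or after it. Timestamps here are seconds since the start of capture, and the file names are illustrative.

```python
WINDOW = 120  # two minutes, per the description above

def correlate(voice_time, image_log):
    """Return images captured within WINDOW seconds of the voice event."""
    return [img for img, t in image_log if abs(t - voice_time) <= WINDOW]

# Illustrative capture log: (file name, timestamp in seconds).
image_log = [("img_001.jpg", 0), ("img_002.jpg", 100), ("img_003.jpg", 400)]
print(correlate(150, image_log))  # → ['img_002.jpg']
```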
- the male person tracking module 314 is configured to track the male person who disturbs the child.
- the male person tracking module assigns a priority value to a male person corresponding to each of the plurality of images and continuously tracks the male person when the gender is male.
- the user 102 can track the male person who disturbs the child frequently.
- the user 102 can give priority to the male person's voice and images using a search option, and the user 102 may be able to recognize the voice of the male person in the tracking tool 106.
- the next time the male person speaks to the child, the speech conversation between the child and the same male person is saved to the same list in the male person's conversation folder.
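The priority-and-folder behaviour described in these bullets can be sketched as a small bookkeeping structure. The class name, person identifiers, and clip names are assumptions for illustration only.

```python
from collections import defaultdict

class PersonTracker:
    """Keeps a priority value per recognised person and groups each
    person's conversations into one folder (a list per person id)."""
    def __init__(self):
        self.priority = {}                # person id -> priority value
        self.folders = defaultdict(list)  # person id -> conversation clips

    def set_priority(self, person_id, value):
        self.priority[person_id] = value

    def add_conversation(self, person_id, clip):
        # New conversations with a known person land in the same folder.
        self.folders[person_id].append(clip)

tracker = PersonTracker()
tracker.set_priority("male_502", 1)  # person flagged by the user
tracker.add_conversation("male_502", "monday_clip.wav")
tracker.add_conversation("male_502", "tuesday_clip.wav")
print(tracker.folders["male_502"])  # both clips in the same folder
```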
- the feelings analyzing module 316 analyzes the conversation to determine different types of impressions of the child.
- the analysis may include whether the child is happy, sad, adjusted, sociable and/or reserved on a given day.
- the feelings analyzing module 316 analyzes the feelings of the child by measuring the number of times the child laughs, cries, and interacts with others etc.
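The counting approach to feelings analysis described above can be sketched as below. The event labels are assumed to come from an upstream audio/image classifier that is not shown, and the happy/sad rule is a deliberately crude illustration.

```python
from collections import Counter

def summarize_feelings(events):
    """Tally labelled events from a day's recording and derive a rough
    mood label from the laugh/cry balance."""
    counts = Counter(events)
    mood = "happy" if counts["laugh"] > counts["cry"] else "sad"
    return counts, mood

# Illustrative day of classified events.
events = ["laugh", "laugh", "cry", "interaction", "laugh"]
counts, mood = summarize_feelings(events)
print(counts["laugh"], mood)  # → 3 happy
```

The health analyzing module described next could reuse the same tally over "cough" and "sneeze" labels.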
- the health analyzing module 318 is configured to analyze the health of the child by counting the number of times the child coughs, sneezes, etc.
- FIG. 4 is an exemplary view 400 of the user 102 downloading information from the wearable personal safety device 108 by connecting the wearable personal safety device 108 with a user device of FIG. 1 according to an embodiment herein.
- the exemplary view 400 shows the user 102 downloading the information that includes the voice conversation between the child and the person, the images of the person who interacts with the child, etc. from the wearable personal safety device 108.
- the user 102 watches the child's day-to-day activities using the tracking tool 106 in the user device 104.
- the wearable personal safety device 108 stores all the data including the images of the persons who are interacting with the child, the voice conversation between the child and the person, and the situation of the child during conversation and/or interacting with the male person.
- the wearable personal safety device 108 is connected to the user device 104 by wired and/or wireless connection.
- the user 102 can identify the voice conversation, and the images of the male person, and what is going on around the child. In one embodiment, the user 102 identifies where the child is based on the images retrieved from the wearable personal safety device 108.
- FIG. 5 is an exemplary view 500 of user 102 analyzing the child (for example, the child getting distressed by the activities of a male person 502) of FIG. 1 according to an embodiment herein.
- the wearable personal safety device 108 captures images of the male person 502 who is standing in front of the child. Using the images, the user 102 may analyze the situation of the child while interacting with the male person 502. The user 102 may determine that the child gets distressed by images showing, for example, that the male person 502 (a) takes his shirt off, (b) exposes his body, or (c) exposes himself, etc. The user 102 may recognize the voice conversation between the child and the male person.
- the exemplary view 500 shows the conversation between the child and the male person 502.
- in one embodiment, even when the image capturing unit does not capture the images of the male person 502, the user 102 may analyze the male person 502 anyway.
- the image voice correlating module 312 matches the voice with the corresponding images of the male person 502 by tracking the images for two minutes before and/or after the voice, using a time stamp, to analyze the images of the male person 502.
- FIG. 6 is an exemplary view 600 of the child wearing the wearable personal safety device 108 of FIG. 1 according to an embodiment herein.
- the exemplary view 600 shows the child/dear one wearing the wearable personal safety device 108 on the child's belt.
- the wearable personal safety device 108 may be placed in the child's bag, the child's dress, the child's chain, and the child's shoes, etc., to capture the images of the person who is interacting with the child and to record the voice conversation between the child and the male person 502.
- FIGS. 7A-7B are flow diagrams that illustrate a method of a user 102 tracking a child's activities using a wearable personal safety device 108 according to an embodiment herein.
- In step 702, one or more images of a person are captured at different angles and different time intervals while the person is speaking to the child.
- a conversation between the child and the person is recognized to compute a situation of the child.
- In step 706, the conversation is recorded to compare with the voice signatures already stored in the wearable personal safety device 108.
- a movement or a location of the child is sensed.
- (i) each of the plurality of images, (ii) the conversation, and (iii) the movement or the location are communicated to a user device 104 to track the child's activities.
- each of the plurality of images from the wearable personal safety device 108 is retrieved to identify gender of the person.
- the image and voice of the male person 502 can be correlated using a time stamp even when the male person 502 is not standing in front of the child while speaking with the child.
- each of the plurality of images is segregated based on the gender.
- voice of the person is recognized based on the gender.
- the voice of the person is compared with the voice signatures stored in the database 302.
- the situation of the child is analyzed based on a facial expression of the child and activities of the person in (a) the conversation, and (b) each of the plurality of images.
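The flow of FIGS. 7A-7B can be sketched end to end as a small pipeline. Every function body here is a placeholder assumption standing in for the capture, recognition, and analysis units the patent describes; the field names and sample data are illustrative.

```python
def track_child(images, conversation, signatures):
    """Illustrative pipeline: segregate by gender, match the speaker's
    voice against stored signatures, then flag a distressed situation."""
    males = [img for img in images if img["gender"] == "male"]
    known = conversation["speaker"] in signatures
    distressed = conversation["child_expression"] == "distressed"
    return {"male_images": males, "known_voice": known, "alert": distressed}

report = track_child(
    images=[{"id": 1, "gender": "male"}, {"id": 2, "gender": "female"}],
    conversation={"speaker": "stranger", "child_expression": "distressed"},
    signatures={"teacher", "parent"},
)
print(report["alert"])  # → True
```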
- FIG. 8 illustrates an exploded view of a receiver 800 of FIG. 1 having a memory 802 having a set of instructions, a bus 804, a display 806, a speaker 808, and a processor 810 capable of processing the set of instructions to perform any one or more of the methodologies herein, according to an embodiment herein.
- the processor 810 may also enable digital content to be consumed in the form of video for output via one or more displays 806 or audio for output via speaker and/or earphones 808.
- the processor 810 may also carry out the methods described herein and in accordance with the embodiments herein.
- Digital content may also be stored in the memory 802 for future processing or consumption.
- the memory 802 may also store program specific information and/or service information (PSI/SI), including information about digital content (e.g., the detected information bits) available in the future or stored from the past.
- a user of the receiver 800 may view this stored information on display 806 and select an item for viewing, listening, or other uses via input, which may take the form of keypad, scroll, or other input device(s) or combinations thereof.
- the processor 810 may pass information such as the content and PSI/SI among functions within the receiver 800 using the bus 804.
- the techniques provided by the embodiments herein may be implemented on an integrated circuit chip (not shown).
- the chip design is created in a graphical computer programming language, and stored in a computer storage medium (such as a disk, tape, physical hard drive, or virtual hard drive such as inside a storage access network). If the designer does not fabricate chips or the photolithographic masks used to fabricate the chips, the designer transmits the resulting design by physical means (e.g., by providing a copy of the storage medium storing the design) or electronically (e.g., through the Internet) to such entities, directly or indirectly.
- the stored design is then converted into the appropriate format (e.g., GDSII) for the fabrication of photolithographic masks, which typically includes multiple copies of the chip design in question that are to be formed on a wafer.
- the photolithographic masks are utilized to define areas of the wafer (and/or the layers thereon) to be etched or otherwise processed.
- the resulting integrated circuit chips can be distributed by the fabricator in raw wafer form (i.e. as a single wafer that has multiple unpackaged chips), as a bare die, or in a packaged form.
- the chip is mounted in a single chip package (such as a plastic carrier, with leads that are affixed to a motherboard or other higher level carrier) or in a multichip package (such as a ceramic carrier that has either or both surface interconnections or buried interconnections).
- the chip is then integrated with other chips, discrete circuit elements, and/or other signal processing devices as part of either (a) an intermediate product, such as a motherboard, or (b) an end product.
- the end product can be any product that includes integrated circuit chips ranging from toys and other low-end applications to advanced computer products having a display, a keyboard or other input device, and a central processor.
- the embodiments herein can take the form of, an entirely hardware embodiment, an entirely software embodiment, or an embodiment including both hardware and software elements.
- the embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc.
- the embodiments herein can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system.
- a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
- the medium can be an electronic, magnetic, optical, electromagnetic, infrared, a semiconductor system (or apparatus or device), or a propagation medium.
- Examples of a computer-readable medium include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk, and an optical disk.
- Current examples of optical disks include compact disk - read only memory (CD-ROM), compact disk - read/write (CD-R/W) and DVD.
- a data processing system suitable for storing and/or executing program code will include at least one processor coupled directly or indirectly to memory elements through a system bus.
- the memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
- I/O devices can be coupled to the system either directly or through intervening I/O controllers.
- Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modem and Ethernet cards are just a few of the currently available types of network adapters.
- A representative hardware environment for practicing the embodiments herein is depicted in FIG. 9.
- the system comprises at least one processor or central processing unit (CPU) 10.
- the CPUs 10 are interconnected via a system bus 12 to various devices such as a random access memory (RAM) 14, read-only memory (ROM) 16, and an input/output (I/O) adapter 18.
- the I/O adapter 18 can connect to peripheral devices, such as disk units 11 and tape drives 13, or other program storage devices that are readable by the system.
- the system can read the inventive instructions on the program storage devices and follow these instructions to execute the methodology of the embodiments herein.
- the system further includes a user interface adapter 19 that connects a keyboard 15, mouse 17, speaker 24, microphone 22, and/or other user interface devices such as a touch screen device (not shown) or a remote control to the bus 12 to gather user input.
- a communication adapter 20 connects the bus 12 to a data processing network 25, and a display adapter 21 connects the bus 12 to a display device 23 which may be embodied as an output device such as a monitor, printer, or transmitter, for example.
- the wearable personal safety device 108 is used to capture images and recognize the voice of the male person 502 when the male person 502 is interacting with the child.
- the tracking tool 106 is used to track the activities of the child and report them to the user 102.
- the wearable personal safety device 108 is capable of triggering a 50-minute video recording when a bad situation is caused to the child by the male person 502.
- the beauty of the device is that the user 102 has whole day to analyze the data. There is no need of the presence of the child when the user 102 watches the activities of the child.
Abstract
The embodiments herein describe a system and method for tracking a child's day-to-day activities using a wearable personal safety device (108). A tracking tool (106) tracks the child's day-to-day activities and reports the activities to a user. The tracking tool (106) receives data that includes images, and a voice conversation between the child and a male person, from the wearable personal safety device (108). The situation of the child (e.g. the child being distressed by the male person) is analyzed using the tracking tool (106). The feelings of the child and the health of the child can also be analyzed using the tracking tool (106).
Description
WEARABLE PERSONAL SAFETY DEVICE WITH IMAGE AND VOICE
PROCESSING CAPABILITIES BACKGROUND
Technical Field
[0001] The embodiments herein generally relate to a wearable personal safety device, and, more particularly, to a system and a method for tracking a child using a wearable personal safety device with image and voice processing capabilities.
Description of the Related Art
[0002] In recent years, people have become increasingly concerned about the safety and security of themselves and their near and dear ones. With rising competition, enmity among people and sexual offences have also increased. For these reasons, unknown people and strangers kidnap children and women, and/or cause different kinds of harm to them. Such incidents make people cautious and they keep worrying about their safety and security. In India, compared to other countries, many crimes occur every day (e.g. rape of children or women, murder, theft, etc.). With the development of science and technology, various inventions have been made to provide safety to individuals and keep track of them. Even though these technologies provide safety to individuals, the images of suspected people may not be tracked and captured easily. Accordingly, there remains a need for tracking a child's day-to-day activities, and unknown persons, using a wearable personal safety device.
SUMMARY
[0003] In view of the foregoing, an embodiment herein provides a system for tracking a child's day-to-day activities. The system includes a wearable personal safety device and a tracking tool. The wearable personal safety device analyzes the child's day-to-day activities. The wearable personal safety device includes an image capturing unit, a voice recognition unit, a voice comparison unit, a sensor unit, and a communication unit. The image capturing unit captures a plurality of images of a person at different angles and different time intervals when the person interacts with the child. The voice recognition unit recognizes a conversation between the child and the person. The voice comparison unit records the conversation and compares it with voice signatures stored in the wearable personal safety device to compute a situation. The sensor unit senses a movement or a location of the child. The communication unit communicates (i) the plurality of images, (ii) the conversation, and (iii) the movement or the location. The tracking tool receives at least one of (i) the plurality of images of the person, (ii) the conversation, and (iii) the movement or the location of the child to track the wearable personal safety device. The tracking tool includes a memory and a processor. The memory stores a database and a set of modules. The database stores predefined images and predefined voice signatures. The processor executes the set of modules.
[0004] The set of modules includes an image retrieval module, an image sorting module, a voice recognition module, a situation analyzing module, an image voice correlating module, a male person tracking module, and a feelings analyzing module. The image retrieval module retrieves each of the plurality of images from the wearable personal safety device to identify a gender of the person based on the predefined images corresponding to the gender. The image sorting module segregates each of the plurality of images based on the gender. The voice recognition module recognizes the voice of the person based on the predefined voice signatures corresponding to the gender. The situation analyzing module analyzes the situation of the child based on a facial expression of the child and activities of the person in (a) the conversation and (b) each of the plurality of images. The image voice correlating module correlates the voice of the person with each corresponding image of the plurality of images. The male person tracking module assigns a priority value to a male person corresponding to each of the plurality of images and continuously tracks the male person when the gender is male. The feelings analyzing module analyzes the conversation to determine a different type of impression of the child.
[0005] The tracking tool further includes a health analyzing module that analyzes health data of the child to communicate the health data to the tracking tool of a user. The tracking tool further includes a voice comparison module that compares the voice of the person with the predefined voice signatures stored in the database to determine whether the voice already exists.
[0006] In one aspect, a method for tracking the day-to-day activities of a child is provided. The method includes the steps of: (a) capturing a plurality of images of a person at different angles and different time intervals while the person speaks to the child; (b) recognizing a conversation between the child and the person to compute a situation of the child; (c) recording the conversation for comparison with voice signatures already stored in a wearable personal safety device; (d) sensing a movement or a location of the child; (e) communicating (i) the plurality of images, (ii) the conversation, and (iii) the movement or the location to a user device to track the child's activities; (f) retrieving each of the plurality of images from the wearable personal safety device to identify the gender of the person; (g) segregating each of the plurality of images based on the gender; (h) recognizing the voice of the person based on the gender; (i) comparing the voice of the person with the voice signatures stored in the database; and (j) analyzing the situation of the child based on a facial expression of the child and activities of the person in (a) the conversation and (b) each of the plurality of images.
[0007] The method further includes the steps of: (k) correlating the voice with each corresponding image of the plurality of images; (l) assigning a priority value to a male person corresponding to each of the plurality of images and continuously tracking the male person; (m) analyzing the conversation to determine a different type of impression of the child and to measure the number of times that corresponds to the different type of impression; (n) analyzing health data of the child to communicate the health data to the tracking tool of a user; and (o) comparing the voice of the person with the predefined voice signatures stored in the database to determine whether the voice already exists.
[0008] In another aspect, one or more non-transitory computer readable storage mediums storing one or more sequences of instructions are provided. The one or more non-transitory computer readable storage mediums perform the steps of: (a) capturing a plurality of images of a person at different angles and different time intervals while the person speaks to the child; (b) recognizing a conversation between the child and the person to compute a situation of the child; (c) recording the conversation for comparison with voice signatures already stored in a wearable personal safety device; (d) sensing a movement or a location of the child; (e) communicating (i) the plurality of images, (ii) the conversation, and (iii) the movement or the location to a user device to track the child's activities; (f) retrieving each of the plurality of images from the wearable personal safety device to identify the gender of the person; (g) segregating each of the plurality of images based on the gender; (h) recognizing the voice of the person based on the gender; (i) comparing the voice of the person with the voice signatures stored in the database; and (j) analyzing the situation of the child based on a facial expression of the child and activities of the person in (a) the conversation and (b) each of the plurality of images.
[0009] The one or more non-transitory computer readable storage mediums further perform the steps of: (k) correlating the voice with each corresponding image of the plurality of images; (l) assigning a priority value to a male person corresponding to each of the plurality of images and continuously tracking the male person; (m) analyzing the conversation to determine a different type of impression of the child and to measure the number of times that corresponds to the different type of impression; (n) analyzing health data of the child to communicate the health data to the tracking tool of a user; and (o) comparing the voice of the person with the predefined voice signatures stored in the database to determine whether the voice already exists.
[0010] These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating preferred embodiments and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the embodiments herein without departing from the spirit thereof, and the embodiments herein include all such modifications.
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] The embodiments herein will be better understood from the following detailed description with reference to the drawings, in which:
[0012] FIG. 1 illustrates a system view of a user tracking a child's day-to-day activities using a wearable personal safety device according to an embodiment herein;
[0013] FIG. 2 illustrates an exploded view of the wearable personal safety device of FIG. 1 according to an embodiment herein;
[0014] FIG. 3 illustrates an exploded view of a tracking tool of FIG. 1 according to an embodiment herein;
[0015] FIG. 4 is an exemplary view of the user downloading information from the wearable personal safety device by connecting the wearable personal safety device with a user device of FIG. 1 according to an embodiment herein;
[0016] FIG. 5 is an exemplary view of the user analyzing the child (for example, the child getting distressed by the activities of a male person) of FIG. 1 according to an embodiment herein;
[0017] FIG. 6 is an exemplary view of the child wearing the wearable personal safety device of FIG. 1 according to an embodiment herein;
[0018] FIGS. 7A-7B are flow diagrams that illustrate a method of a user tracking a child's activities using a wearable personal safety device according to an embodiment herein;
[0019] FIG. 8 illustrates an exploded view of a receiver of FIG. 1 according to an embodiment herein; and
[0020] FIG. 9 illustrates a schematic diagram of computer architecture of a user device, in accordance with the embodiments herein.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
[0021] The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
[0022] As mentioned, there remains a need for a system and method to track a child's day-to-day activities using a wearable personal safety device. The embodiments herein achieve this by using a tracking tool to report the child's day-to-day activities to the user by receiving information from the wearable personal safety device. The situation of the child (for example, the child getting distressed by the activities of a person) can be analyzed, during a conversation between the child and the person, based on facial recognition of the child and/or the person using the tracking tool. Furthermore, a teacher/a male staff member may wear the wearable personal safety device pointing downward to analyze a conversation between the teacher and the child. Referring now to the drawings, and more particularly to FIGS. 1 through 9, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments.
[0023] FIG. 1 illustrates a system view 100 of a user 102 tracking a child's day-to-day activities using a wearable personal safety device 108 according to an embodiment herein. The system view 100 includes the user 102, a user device 104, a tracking tool 106, and the wearable personal safety device 108. The tracking tool 106 is installed in the user device 104. The wearable personal safety device 108 is worn by the child to track the child's activities when the child is conversing with a known/unknown person. The tracking tool 106 shows the child's activities to the user 102. In one embodiment, the tracking tool 106 sorts out only pictures of males for the user 102. The tracking tool 106 analyzes the voice and the pictures of the male who is speaking to the child and standing in front of the child. In one embodiment, the tracking tool 106 analyzes the distress of the child based on facial recognition of the child and/or the person. In one embodiment, a reset option may be provided in the wearable personal safety device 108 to conserve the storage and battery of the wearable personal safety device 108. In one embodiment, the user 102 may be the child's mother, the child's father, and/or the child's relatives, etc. In one embodiment, a teacher/a male staff member may wear the wearable personal safety device pointing downward to analyze a conversation between the teacher and the child at a school. The user device 104 may be a computer, a mobile phone, a tablet, and/or a smart phone.
[0024] FIG. 2 illustrates an exploded view 200 of the wearable personal safety device 108 of FIG. 1 according to an embodiment herein. The wearable personal safety device 108 includes an image capturing unit 202, a voice recognition unit 204, a voice comparison unit 206, an information download unit 208, a sensor unit 210, a photo detector unit 212, a memory unit 214, and an enclosure 216. The information download unit 208 includes a security unit 208A. The image capturing unit 202 captures an image of a person who is speaking to the child. The person may be a known person, a third person (i.e. a stranger), a friend of the child, etc. The image capturing unit 202 captures images every 30 minutes. Up to 4000 images may be captured by the image capturing unit 202 each day. In one embodiment, the image capturing unit 202 may be mounted at an angle suitable for capturing the person's face, which might be at a greater height than the child. The voice recognition unit 204 recognizes the conversation between the child and the person while the child is conversing with the person. In one embodiment, the voice recognition unit tracks the voices that occur around the child. In one embodiment, if a bad situation is happening to the child because of the male, the wearable personal safety device 108 may trigger a video recording event and record for 50 minutes. In one embodiment, the voice comparison unit 206 compares recorded conversations with the voice signatures already stored in the wearable personal safety device 108. For example, all instances of conversation by a child whose voice signature is stored in the wearable personal safety device 108 may be identified and played back. In another embodiment, the voice comparison unit 206 may be used to filter out background noise and provide only the conversations between the child and the person.
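The voice-signature comparison described above can be sketched as a nearest-match lookup. This is a minimal illustration, assuming voice signatures have already been reduced to fixed-length feature vectors; the vector values, names, and similarity threshold below are hypothetical, not from the source.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def match_signature(sample, stored_signatures, threshold=0.9):
    """Return the label of the best-matching stored voice signature,
    or None when no signature exceeds the similarity threshold."""
    best_name, best_score = None, threshold
    for name, signature in stored_signatures.items():
        score = cosine_similarity(sample, signature)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

signatures = {"child": [0.9, 0.1, 0.2], "known_adult": [0.1, 0.8, 0.5]}
print(match_signature([0.88, 0.12, 0.21], signatures))  # → child
```

A returned None would correspond to an unknown speaker, which is the case the device flags for review.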
[0025] In one embodiment, the information download unit 208 is configured to download the information data of the conversations between the child and the person and the images of the person, and provide them to the user 102. The information download unit 208 may be rendered accessible only to an authorized person (e.g. the user 102) using the security unit 208A. In one embodiment, access to the information may be username and password protected. In another embodiment, access to the information may be physically locked with a tumbler lock. The sensor unit 210 senses the movement of the wearable personal safety device 108. When the wearable personal safety device 108 is moved from its original place, a light in the wearable personal safety device 108 may change. This indication helps the user 102 to determine that the child is in trouble. In one embodiment, the photo detector unit 212 analyzes the light received by the wearable personal safety device 108. If the wearable personal safety device 108 receives less or more light than the predetermined level, an alarm may be sent to the user 102 to track the child. The memory unit 214 stores all the data, including the voice conversation between the child and the person, the images of the person, etc. The enclosure 216 covers the wearable personal safety device 108.
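The photo detector's threshold check can be sketched as a simple range test; the lux bounds below are illustrative assumptions, not values specified by the device.

```python
def light_alarm(lux, low=50.0, high=10000.0):
    """Signal an alert when the measured light level falls outside the
    predetermined range (e.g. the device is covered or removed)."""
    return lux < low or lux > high

# Very dark and very bright readings trip the alarm; ambient light does not.
print(light_alarm(5.0), light_alarm(300.0), light_alarm(20000.0))  # → True False True
```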
[0026] FIG. 3 illustrates an exploded view 300 of the tracking tool 106 of FIG. 1 according to an embodiment herein. The exploded view 300 includes a database 302, an image retrieval module 304, an image sorting module 306, a voice recognizing module 308, a
situation analyzing module 310, an image voice correlating module 312, a male person tracking module 314, a feelings analyzing module 316, and a health analyzing module 318. The database 302 may store simulation, emulation and/or prototype data, as well as the predefined images and predefined voice signatures. The image retrieval module 304 retrieves each of the plurality of images from the wearable personal safety device 108 to identify a gender of the person based on the predefined images corresponding to the gender. The tracking tool 106 is initially trained to recognize the gender using a machine learning method. When the face in the image is recognized, the image is given to the database 302 to determine whether the image is of a male or a female. The image sorting module 306 segregates each of the plurality of images based on the gender. The voice recognition module 308 recognizes the voice of the person based on the predefined voice signatures corresponding to the gender. For example, all instances of conversation by a child whose voice signature is stored in the wearable personal safety device 108 may be identified and played back.
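As a rough sketch of the lookup against predefined images, a nearest-prototype comparison over face embeddings could look like the following. The embeddings and labels are hypothetical; the source describes a trained machine learning classifier, for which this is only a stand-in.

```python
def nearest_prototype(embedding, prototypes):
    """Classify a face embedding by its nearest labelled prototype
    (squared Euclidean distance), mirroring the lookup of a captured
    face against predefined images in the database."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda label: dist(embedding, prototypes[label]))

prototypes = {"male": [0.2, 0.8], "female": [0.7, 0.3]}
print(nearest_prototype([0.25, 0.75], prototypes))  # → male
```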
[0027] The situation analyzing module 310 analyzes the situation of the child (for example, whether the child is feeling distressed because of the person) based on a facial expression of the child and activities of the person in (a) the conversation and (b) each of the plurality of images. The image voice correlating module 312 correlates the voice of the person with each corresponding image of the plurality of images by tracking the images for two minutes before and/or after the voice to analyze the male image. The male person tracking module 314 is configured to track a male person who disturbs the child. The male person tracking module assigns a priority value to a male person corresponding to each of the plurality of images and continuously tracks the male person when the gender is male. The user 102 can track a male person who disturbs the child frequently. Hence, the user 102 can give priority to the male person's voice and images using a search option, and the user 102 may be able to recognize the voice of the male person in the tracking tool 106. The next time the male person speaks to the child, the speech conversation between the child and the same male person is saved in the same list in the male person's conversation folder.
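The two-minute correlation window can be sketched as a timestamp filter: keep only the frames captured within two minutes of the voice event. Timestamps are in seconds, and the frame names are illustrative.

```python
def correlate_images(voice_ts, images, window_s=120):
    """Return frames whose timestamps fall within +/- window_s seconds
    of a voice event, so a speaker who is heard but not currently in
    front of the camera can still be matched to nearby frames."""
    return [img for ts, img in images if abs(ts - voice_ts) <= window_s]

frames = [(0, "a.jpg"), (100, "b.jpg"), (500, "c.jpg")]
print(correlate_images(130, frames))  # → ['b.jpg']
```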
[0028] The feelings analyzing module 316 analyzes the conversation to determine a different type of impression of the child.
[0029] The analysis may include whether the child is happy, sad, well-adjusted, sociable and/or reserved on the day. The feelings analyzing module 316 analyzes the feelings of the child by measuring the number of times the child laughs, cries, and interacts with others, etc. The health analyzing module 318 is configured to analyze the health of the child by counting the number of times the child coughs, sneezes, etc.
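The counting performed by the feelings and health analyzing modules can be sketched as a tally over labelled audio events, assuming an upstream classifier has already labelled each detected event; the labels and event stream below are illustrative.

```python
def summarize_day(events):
    """Tally labelled audio events (laugh, cry, cough, sneeze) into
    simple feeling and health counters for the day's report."""
    feelings = {"laugh": 0, "cry": 0}
    health = {"cough": 0, "sneeze": 0}
    for label in events:
        if label in feelings:
            feelings[label] += 1
        elif label in health:
            health[label] += 1
    return feelings, health

feelings, health = summarize_day(["laugh", "cough", "laugh", "sneeze", "cry"])
print(feelings["laugh"], health["cough"])  # → 2 1
```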
[0030] FIG. 4 is an exemplary view 400 of the user 102 downloading information from the wearable personal safety device 108 by connecting the wearable personal safety device 108 to a user device of FIG. 1 according to an embodiment herein. The exemplary view 400 shows the user 102 downloading information that includes the voice conversation between the child and the person, the images of the person who interacts with the child, etc., from the wearable personal safety device 108. The user 102 watches the child's day-to-day activities using the tracking tool 106 in the user device 104. The wearable personal safety device 108 stores all the data, including the images of the persons who interact with the child, the voice conversation between the child and the person, and the situation of the child during conversation and/or interaction with the male person. In one embodiment, the wearable personal safety device 108 is connected to the user device 104 by a wired and/or wireless connection. The user 102 can identify the voice conversation, the images of the male person, and what is going on around the child. In one embodiment, the user 102 identifies where the child is based on the images retrieved from the wearable personal safety device 108.
[0031] FIG. 5 is an exemplary view 500 of the user 102 analyzing the child (for example, the child getting distressed by the activities of a male person 502) of FIG. 1 according to an embodiment herein. The wearable personal safety device 108 captures images of the male person 502 who is standing in front of the child. Using the images, the user 102 may analyze the situation of the child while the child interacts with the male person 502. The user 102 may determine from the images that the child is distressed, for example when the male person 502 (a) takes his shirt off, (b) exposes his body, or (c) exposes a private body part, etc. The user 102 may recognize the voice conversation between the child and the male person. The exemplary view 500 shows the conversation between the child and the male person 502. If the male person 502 is standing in front of the child but not speaking to the child, the user 102 may still analyze the image of the male person 502. Conversely, if the male person's 502 voice is identified but the male person 502 is not standing in front of the child, the image capturing unit does not capture the images of the male person 502. In that case, the image voice correlating module 312 matches the voice with the corresponding images of the male person 502 by tracking the images for two minutes before and/or after the voice, using a time stamp, to analyze the male person's 502 images.
[0032] FIG. 6 is an exemplary view 600 of the child wearing the wearable personal safety device 108 of FIG. 1 according to an embodiment herein. The exemplary view 600 shows the child/dear one wearing the wearable personal safety device 108 on the child's belt. The wearable personal safety device 108 may alternatively be placed in the child's bag, or on the child's dress, chain, or shoes, etc., to capture the images of the person who is interacting with the child and to record the voice conversation between the child and the male person 502.
[0033] FIGS. 7A-7B are flow diagrams that illustrate a method of a user 102 tracking a child's activities using a wearable personal safety device 108 according to an embodiment herein. At step 702, one or more images of a person are captured at different angles and different time intervals while the person speaks to the child. At step 704, a conversation between the child and the person is recognized to compute a situation of the child. At step 706, the conversation is recorded for comparison with voice signatures already stored in the wearable personal safety device 108. At step 708, a movement or a location of the child is sensed. At step 710, (i) the plurality of images, (ii) the conversation, and (iii) the movement or the location are communicated to a user device 104 to track the child's activities. At step 712, each of the plurality of images is retrieved from the wearable personal safety device 108 to identify the gender of the person. The image and voice of the male person 502 can be correlated using a time stamp when the male person 502 is speaking with the child but not standing in front of the child. At step 714, each of the plurality of images is segregated based on the gender. At step 716, the voice of the person is recognized based on the gender. At step 718, the voice of the person is compared with the voice signatures stored in the database 302. At step 720, the situation of the child is analyzed based on a facial expression of the child and activities of the person in (a) the conversation and (b) each of the plurality of images.
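A compressed sketch of steps 712-720 — gender identification followed by a voice-signature check and a situation flag — is shown below. The record fields and signature table are hypothetical; a real implementation would call the image retrieval, sorting, and voice recognition modules described above.

```python
def analyze_capture(record, known_signatures):
    """Hedged sketch of steps 712-720: look the speaker up against
    stored voice signatures and flag any conversation with an
    unrecognized male for the user's review. Field names are
    illustrative only."""
    speaker = known_signatures.get(record["voice_id"])  # step 718
    is_male = record["gender"] == "male"                # steps 712-716
    flagged = is_male and speaker is None               # step 720, simplified
    return {"speaker": speaker, "flagged": flagged}

signatures = {"v001": "teacher", "v002": "father"}
print(analyze_capture({"voice_id": "v999", "gender": "male"}, signatures))
# → {'speaker': None, 'flagged': True}
```

A known speaker (e.g. `"v001"`) would return unflagged, matching the text's intent that familiar voices are identified rather than reported.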
[0034] FIG. 8 illustrates an exploded view of a receiver 800 of FIG. 1 having a memory 802 with a set of instructions, a bus 804, a display 806, a speaker 808, and a processor 810 capable of processing the set of instructions to perform any one or more of the methodologies herein, according to an embodiment herein. The processor 810 may also enable digital content to be consumed in the form of video for output via one or more displays 806 or audio for output via the speaker and/or earphones 808. The processor 810 may also carry out the methods described herein and in accordance with the embodiments herein.
[0035] Digital content may also be stored in the memory 802 for future processing or consumption. The memory 802 may also store program specific information and/or service information (PSI/SI), including information about digital content (e.g., the detected information bits) available in the future or stored from the past. A user of the receiver 800 may view this stored information on display 806 and select an item for viewing, listening, or other uses via input, which may take the form of keypad, scroll, or other input device(s) or combinations thereof. When digital content is selected, the processor 810 may pass information. The content and PSI/SI may be passed among functions within the receiver 800 using the bus 804.
[0036] The techniques provided by the embodiments herein may be implemented on an integrated circuit chip (not shown). The chip design is created in a graphical computer programming language, and stored in a computer storage medium (such as a disk, tape, physical hard drive, or virtual hard drive such as inside a storage access network). If the designer does not fabricate chips or the photolithographic masks used to fabricate the chips, the designer transmits the resulting design by physical means (e.g., by providing a copy of the storage medium storing the design) or electronically (e.g., through the Internet) to such entities, directly or indirectly.
[0037] The stored design is then converted into the appropriate format (e.g., GDSII) for the fabrication of photolithographic masks, which typically includes multiple copies of the chip design in question that are to be formed on a wafer. The photolithographic masks are utilized to define areas of the wafer (and/or the layers thereon) to be etched or otherwise processed.
[0038] The resulting integrated circuit chips can be distributed by the fabricator in raw wafer form (i.e. as a single wafer that has multiple unpackaged chips), as a bare die, or in a packaged form. In the latter case, the chip is mounted in a single chip package (such as a plastic carrier, with leads that are affixed to a motherboard or other higher level carrier) or in a multichip package (such as a ceramic carrier that has either or both surface interconnections or buried interconnections). In any case the chip is then integrated with other chips, discrete
circuit elements, and/or other signal processing devices as part of either (a) an intermediate product, such as a motherboard, or (b) an end product. The end product can be any product that includes integrated circuit chips ranging from toys and other low-end applications to advanced computer products having a display, a keyboard or other input device, and a central processor.
[0039] The embodiments herein can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment including both hardware and software elements. The embodiments that are implemented in software include, but are not limited to, firmware, resident software, microcode, etc. Furthermore, the embodiments herein can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system. For the purposes of this description, a computer-usable or computer-readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
[0040] The medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device), or a propagation medium. Examples of a computer-readable medium include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk, and an optical disk. Current examples of optical disks include compact disk - read only memory (CD-ROM), compact disk - read/write (CD-R/W) and DVD.
[0041] A data processing system suitable for storing and/or executing program code will include at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
[0042] Input/output (I/O) devices (including but not limited to keyboards, displays, pointing devices, remote controls, etc.) can be coupled to the system either directly or through intervening I/O controllers. Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or
remote printers or storage devices through intervening private or public networks. Modems, cable modem and Ethernet cards are just a few of the currently available types of network adapters.
[0043] A representative hardware environment for practicing the embodiments herein is depicted in FIG. 9. This schematic drawing illustrates a hardware configuration of a computer architecture/computer system in accordance with the embodiments herein. The system comprises at least one processor or central processing unit (CPU) 10. The CPUs 10 are interconnected via a system bus 12 to various devices such as a random access memory (RAM) 14, read-only memory (ROM) 16, and an input/output (I/O) adapter 18. The I/O adapter 18 can connect to peripheral devices, such as disk units 11 and tape drives 13, or other program storage devices that are readable by the system. The system can read the inventive instructions on the program storage devices and follow these instructions to execute the methodology of the embodiments herein.
[0044] The system further includes a user interface adapter 19 that connects a keyboard 15, mouse 17, speaker 24, microphone 22, and/or other user interface devices such as a touch screen device (not shown) or a remote control to the bus 12 to gather user input. Additionally, a communication adapter 20 connects the bus 12 to a data processing network 25, and a display adapter 21 connects the bus 12 to a display device 23 which may be embodied as an output device such as a monitor, printer, or transmitter, for example.
[0045] The wearable personal safety device 108 is used to capture images and to recognize the voice of the male person 502 when the male person 502 is interacting with the child. The tracking tool 106 is used to report the activities of the child to the user 102. The wearable personal safety device 108 is capable of triggering a 50-minute video recording when a bad situation is happening to the child because of the male person 502. An advantage of the device is that the user 102 has the whole day to analyze the data. The child need not be present when the user 102 watches the activities of the child.
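The recording trigger described above can be sketched as follows. This is an illustrative approximation only: every name here (`SafetyDevice`, `on_situation`, the severity score and its threshold) is a hypothetical stand-in, not taken from the specification.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: the specification says a 50-minute video recording
# is triggered when a bad situation happens; how "bad" is scored is not
# specified, so a numeric severity with a threshold is assumed here.
RECORDING_MINUTES = 50

@dataclass
class SafetyDevice:
    recordings: list = field(default_factory=list)

    def on_situation(self, severity: float, threshold: float = 0.8) -> bool:
        """Start a fixed-length recording when severity crosses the threshold."""
        if severity >= threshold:
            self.recordings.append({"minutes": RECORDING_MINUTES})
            return True
        return False

device = SafetyDevice()
device.on_situation(0.3)   # benign interaction: nothing recorded
device.on_situation(0.95)  # distress detected: 50-minute recording queued
```

Because the recording is stored rather than streamed, the user can review it at any later time, matching the "whole day to analyze the data" behavior described above.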
[0046] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and therefore such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the
disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the appended claims.
Claims
1. A system for tracking a child's day-to-day activities, comprising:
a wearable personal safety device 108 that analyzes said child's day-to-day activities, wherein said wearable personal safety device 108 comprises
an image capturing unit 202 that captures a plurality of images of a person at different angles and different time intervals corresponding to said person's interaction with said child;
a voice recognition unit 204 that recognizes a conversation between said child and said person;
a voice comparison unit 206 that records said conversation to compare it with voice signatures that are stored in said wearable personal safety device 108 to compute a situation;
a sensor unit 210 that senses a movement or a location of said child; and a communication unit that communicates (i) said plurality of images, (ii) said conversation, (iii) said movement or said location;
a tracking tool 106 that receives at least one of (i) said plurality of images of said person, (ii) said conversation, and (iii) said movement or said location of said child to track said wearable personal safety device 108, wherein said tracking tool 106 comprises
a memory that stores a database and a set of modules, wherein said database stores predefined images, and predefined voice signatures; and
a processor that executes said set of modules, wherein said set of modules comprises
an image retrieval module 304 that retrieves each of said plurality of images from said wearable personal safety device to identify a gender of said person based on said predefined images corresponding to said gender;
an image sorting module 306 that segregates each of said plurality of images based on said gender;
a voice recognition module 308 that recognizes voice of said person based on said predefined voice signatures corresponding to said gender;
a situation analyzing module 310 that analyzes said situation of said child based on a facial expression of said child and activities of said person in (a) said conversation, and (b) each of said plurality of images;
an image voice correlating module 312 that correlates said voice of said person with the corresponding each of said plurality of images;
a male person tracking module 314 that assigns a priority value to a male person corresponding to each of said plurality of images and continuously tracks said male person when said gender is said male person; and
a feelings analyzing module 316 that analyzes said conversation to determine a different type of impression of said child.
2. The system as claimed in claim 1, wherein said tracking tool further comprises a health analyzing module that analyzes health data of said child to communicate said health data to said tracking tool of a user.
3. The system as claimed in claim 2, wherein said tracking tool further comprises a voice comparison module that compares said voice of said person with said predefined voice signatures stored in said database to determine whether said voice already exists.
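The voice comparison of claims 1 and 3 — checking whether a captured voice matches one of the predefined signatures in the database — could be implemented, for example, as a similarity match over feature vectors. The sketch below uses cosine similarity over plain lists; the function names, the vector representation of a voice, and the 0.9 match threshold are all assumptions for illustration, not taken from the patent.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def voice_already_exists(voice, signatures, threshold=0.9):
    """Return True if the voice matches any stored predefined signature."""
    return any(cosine_similarity(voice, s) >= threshold for s in signatures)

# Hypothetical stored signatures (e.g. embeddings of previously heard voices).
stored = [[1.0, 0.0, 0.5], [0.2, 0.9, 0.1]]

voice_already_exists([1.0, 0.0, 0.5], stored)  # matches the first signature
voice_already_exists([0.0, 0.0, 1.0], stored)  # no signature is close enough
```

In a real device the feature vectors would come from a speaker-embedding model rather than being hand-written lists; only the comparison logic is sketched here.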
4. A method for tracking day-to-day activities of a child, wherein said method comprises:
capturing a plurality of images of a person at different angles and different time intervals corresponding to said person speaking to said child;
recognizing a conversation between said child and said person to compute a situation of said child;
recording said conversation to compare it with voice signatures already stored in a wearable personal safety device;
sensing a movement or a location of said child;
communicating (i) said plurality of images, (ii) said conversation, (iii) said movement or said location to a user device to track said child's activities;
retrieving each of said plurality of images from said wearable personal safety device to identify gender of said person;
segregating each of said plurality of images based on said gender;
recognizing voice of said person based on said gender;
comparing said voice of said person with said voice signatures stored in said database; and
analyzing said situation of said child based on a facial expression of said child and activities of said person in (a) said conversation, and (b) each of said plurality of images.
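The gender identification and segregation steps of claim 4 can be illustrated with a minimal sketch. Here `classify_gender` is only a placeholder for matching a captured image against the predefined images in the database (a real device would run a face classifier); the function names and the string labels standing in for images are hypothetical.

```python
def classify_gender(image_label):
    # Placeholder for matching against predefined images; labels prefixed
    # with "m" stand in for images classified as male, all others female.
    return "male" if image_label.startswith("m") else "female"

def segregate(images):
    """Group each captured image under the gender identified for it."""
    buckets = {"male": [], "female": []}
    for img in images:
        buckets[classify_gender(img)].append(img)
    return buckets

# Hypothetical frames captured at different angles and time intervals.
groups = segregate(["m_frame1", "f_frame1", "m_frame2"])
```

Segregating by gender first lets the later steps (voice recognition based on gender, male-person priority tracking) operate only on the relevant bucket.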
5. The method as claimed in claim 4, further comprising
correlating said voice with the corresponding each of said plurality of images;
assigning a priority value to a male person corresponding to each of said plurality of images and continuously tracking said male person; and
analyzing said conversation to determine a different type of impression of said child and to measure a number of times that corresponds to said different type of impression.
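The impression-measuring step of claim 5 — determining impression types and counting how many times each occurs — might look like the following keyword-counting sketch. The keyword-to-impression mapping is an assumption made purely for illustration; the patent does not specify how impressions are derived from the conversation.

```python
from collections import Counter

# Hypothetical mapping from transcribed keywords to impression types.
IMPRESSIONS = {
    "crying": "sad",
    "help": "scared",
    "laughing": "happy",
}

def count_impressions(transcript_words):
    """Count how many times each impression type appears in the transcript."""
    return Counter(IMPRESSIONS[w] for w in transcript_words if w in IMPRESSIONS)

counts = count_impressions(["hello", "crying", "help", "crying"])
```

A production system would use a speech-emotion classifier rather than keyword spotting; only the counting ("measure a number of times") is the point of the sketch.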
6. The method as claimed in claim 5, further comprising
analyzing a health data of said child to communicate said health data to said tracking tool of a user; and
comparing said voice of said person with said predefined voice signatures stored in said database to determine whether said voice already exists.
7. One or more non-transitory computer readable storage mediums storing one or more sequences of instructions, which when executed by one or more processors, causes tracking day-to-day activities of a child, by performing the steps of:
capturing a plurality of images of a person at different angles and different time intervals corresponding to said person speaking to said child;
recognizing a conversation between said child and said person to compute a situation of said child;
recording said conversation to compare it with voice signatures already stored in a wearable personal safety device;
sensing a movement or a location of said child;
communicating (i) said plurality of images, (ii) said conversation, (iii) said movement or said location to a user device to track said child's activities;
retrieving each of said plurality of images from said wearable personal safety device to identify gender of said person;
segregating each of said plurality of images based on said gender;
recognizing voice of said person based on said gender;
comparing said voice of said person with said voice signatures stored in said database; and
analyzing said situation of said child based on a facial expression of said child and activities of said person in (a) said conversation, and (b) each of said plurality of images.
8. The one or more non-transitory computer readable storage mediums storing one or more sequences of instructions of claim 7, further causes:
correlating said voice with the corresponding each of said plurality of images;
assigning a priority value to a male person corresponding to each of said plurality of images and continuously tracking said male person; and
analyzing said conversation to determine a different type of impression of said child and to measure a number of times that corresponds to said different type of impression.
9. The one or more non-transitory computer readable storage mediums storing one or more sequences of instructions of claim 8, further causes:
analyzing a health data of said child to communicate said health data to said tracking tool of a user; and
comparing said voice of said person with said predefined voice signatures stored in said database to determine whether said voice already exists.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
IN6199/CHE/2015 | 2015-11-17 | ||
IN6199CH2015 | 2015-11-17 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2017085743A2 true WO2017085743A2 (en) | 2017-05-26 |
WO2017085743A3 WO2017085743A3 (en) | 2017-10-05 |
Family
ID=58718538
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IN2016/050404 WO2017085743A2 (en) | 2015-11-17 | 2016-11-16 | Wearable personal safety device with image and voice processing capabilities |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2017085743A2 (en) |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6275806B1 (en) * | 1999-08-31 | 2001-08-14 | Andersen Consulting, Llp | System method and article of manufacture for detecting emotion in voice signals by utilizing statistics for voice signal parameters |
CA2385595C (en) * | 1999-09-15 | 2010-07-27 | Quid Technologies Llc | Biometric recognition utilizing unique energy characteristics of an individual organism |
2016-11-16 — WO PCT/IN2016/050404 patent/WO2017085743A2/en: active Application Filing
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108022585A (en) * | 2017-12-13 | 2018-05-11 | 四川西谷物联科技有限公司 | Information processing method, device and electronic equipment |
CN108109331A (en) * | 2017-12-13 | 2018-06-01 | 四川西谷物联科技有限公司 | Monitoring method and monitoring system |
CN108109628A (en) * | 2017-12-13 | 2018-06-01 | 四川西谷物联科技有限公司 | Information collecting method, device and electronic equipment |
US20220139204A1 (en) * | 2019-02-14 | 2022-05-05 | Ruth Nicola Millican | Mobile personal-safety apparatus |
US12165482B2 (en) * | 2019-02-14 | 2024-12-10 | Ruth Nicola Millican | Mobile personal-safety apparatus |
US20240029541A1 (en) * | 2020-02-14 | 2024-01-25 | Ruth Nicola Millican | Personal mobile safety apparatus and evidence secure method |
Also Published As
Publication number | Publication date |
---|---|
WO2017085743A3 (en) | 2017-10-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Hussain et al. | Activity-aware fall detection and recognition based on wearable sensors | |
He et al. | A smart device enabled system for autonomous fall detection and alert | |
JP7405200B2 (en) | person detection system | |
Bedri et al. | EarBit: using wearable sensors to detect eating episodes in unconstrained environments | |
Porzi et al. | A smart watch-based gesture recognition system for assisting people with visual impairments | |
WO2017085743A2 (en) | Wearable personal safety device with image and voice processing capabilities | |
CN112364696B (en) | Method and system for improving family safety by utilizing family monitoring video | |
Zhang et al. | A comprehensive study of smartphone-based indoor activity recognition via Xgboost | |
KR101165537B1 (en) | User Equipment and method for cogniting user state thereof | |
JP6447108B2 (en) | Usability calculation device, availability calculation method, and availability calculation program | |
CN105631403B (en) | Face identification method and device | |
US12333882B2 (en) | Multiple-factor recognition and validation for security systems | |
US20090226043A1 (en) | Detecting Behavioral Deviations by Measuring Respiratory Patterns in Cohort Groups | |
Mansoor et al. | A machine learning approach for non-invasive fall detection using Kinect | |
CN113591701A (en) | Respiration detection area determination method and device, storage medium and electronic equipment | |
CN109495727A (en) | Intelligent control method and device, system, readable storage medium storing program for executing | |
Ramanujam et al. | A vision-based posture monitoring system for the elderly using intelligent fall detection technique | |
WO2020144835A1 (en) | Information processing device and information processing method | |
US20170316258A1 (en) | Augmenting gesture based security technology for improved differentiation | |
JP2008176689A (en) | Age verification device, age verification method, and age verification program | |
CN110458052B (en) | Target object identification method, device, equipment and medium based on augmented reality | |
US10628682B2 (en) | Augmenting gesture based security technology using mobile devices | |
Kodikara et al. | Surveillance based Child Kidnap Detection and Prevention Assistance | |
US20170316259A1 (en) | Augmenting gesture based security technology for improved classification and learning | |
Guney et al. | A deep neural network based toddler tracking system |
Legal Events
Date | Code | Title | Description
---|---|---|---
| NENP | Non-entry into the national phase | Ref country code: DE
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16865899; Country of ref document: EP; Kind code of ref document: A2
| 122 | Ep: pct application non-entry in european phase | Ref document number: 16865899; Country of ref document: EP; Kind code of ref document: A2