
CN109935228B - Identity information association system and method, computer storage medium and user equipment - Google Patents


Info

Publication number
CN109935228B
CN109935228B
Authority
CN
China
Prior art keywords
action
trigger
sound
individual
sounds
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711354200.7A
Other languages
Chinese (zh)
Other versions
CN109935228A (en)
Inventor
林忠亿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Futaihua Industry Shenzhen Co Ltd
Hon Hai Precision Industry Co Ltd
Original Assignee
Futaihua Industry Shenzhen Co Ltd
Hon Hai Precision Industry Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Futaihua Industry Shenzhen Co Ltd, Hon Hai Precision Industry Co Ltd filed Critical Futaihua Industry Shenzhen Co Ltd
Priority to CN201711354200.7A priority Critical patent/CN109935228B/en
Publication of CN109935228A publication Critical patent/CN109935228A/en
Application granted granted Critical
Publication of CN109935228B publication Critical patent/CN109935228B/en

Landscapes

  • Emergency Alarm Devices (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

An identity information association method, applied in an identity information association system, comprises: identifying individuals in a scene; recording sound information in the scene; identifying individual sounds in the sound information; judging whether a target individual has a response action to a trigger sound among the individual sounds; recording the trigger sound; and associating the trigger sound with the target individual. The invention also discloses an identity information association system, a computer storage medium, and user equipment for implementing the above identity information association method.

Description

Identity information association system and method, computer storage medium and user equipment
Technical Field
The present invention relates to big data analysis technologies, and in particular, to an identity information association system and method, a computer storage medium, and a user device.
Background
At present, most software requires a user to register, and the user's identity information, particularly a user name, must be entered manually during registration. Likewise, applying for a membership card requires manually filling in a personal information form and even submitting a photo. This approach requires the information to be recorded in advance, and in some special situations, such as a one-time conference, participant information cannot be recorded in advance.
Disclosure of Invention
In view of the foregoing, there is a need for an identity information association system and method, a computer storage medium, and a user device that require no pre-registration and instead obtain identity information through analysis of user behavior.
An identity information association method is applied to an identity information association device, and comprises the following steps:
identifying individuals within a scene;
recording sound information within the scene;
identifying individual sounds in the sound information;
judging whether a target individual has a response action to a trigger sound in the individual sounds;
recording the trigger sound; and
associating the trigger sound with the target individual.
Further, prior to associating the trigger sound with the target individual, the method further comprises the steps of:
analyzing the semantics of a plurality of trigger sounds; and
judging whether the number of identical semantics among the plurality of trigger sounds exceeds a preset number.
Further, the judging whether a target individual has a response action to a trigger sound in the individual sounds comprises:
judging whether the target individual has a body action after the trigger sound;
judging whether the amplitude of the body action exceeds a predetermined amplitude; and
judging whether a plurality of individuals have the body action simultaneously.
Further, the body action comprises at least one of a head action, a face action and a hand action, the head action comprises raising or turning the head, the face action comprises a specific mouth action or eye action, and the hand action comprises a hand raising response action.
An identity information association system comprising:
the video monitoring module is used for identifying individuals in a scene;
the sound monitoring module is used for recording sound information in the scene;
the sound recognition module is used for identifying the individual sounds in the sound information;
the response judging module is used for judging whether a target individual has a response action to a trigger sound in the individual sounds;
the trigger recording module is used for recording the trigger sound; and
and the identity association module is used for associating the trigger sound with the target individual.
Furthermore, the identity information association system further comprises a semantic analysis module, a sound conversion module, and a semantic judging module. The semantic analysis module is used for analyzing the semantics of the plurality of trigger sounds, and the semantic judging module is used for judging whether the number of identical semantics among the plurality of trigger sounds exceeds a preset number; the sound conversion module is used for converting the trigger sound into text to be associated with the target individual when the number of identical semantics among the plurality of trigger sounds exceeds the preset number.
Further, the response judging module is further configured to judge whether the target individual has a body motion after the trigger sound, judge whether the body motion amplitude exceeds a predetermined amplitude, and judge whether a plurality of individuals have the body motion at the same time.
Further, the body action comprises at least one of a head action, a face action and a hand action, the head action comprises raising or turning the head, the face action comprises a specific mouth action or eye action, and the hand action comprises a hand raising response action.
A computer storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform the above identity information association method.
A user equipment, comprising:
a processor to implement one or more instructions; and
a computer storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform the above identity information association method.
According to the above identity information association system and method, the target individual is stored in association with the text converted from the corresponding appellation; when identity information such as a user name is later needed, only individual features such as physical signs and posture need to be recognized, and no manual entry is required.
Drawings
Fig. 1 is a flowchart illustrating steps of an identity information association method according to an embodiment of the present invention.
Fig. 2 is a flowchart of the steps of determining whether the target individual has a body response action in the identity information association method of fig. 1.
Fig. 3 is a block diagram of an identity information association system according to an embodiment of the present invention.
Fig. 4 is a block diagram of a user equipment according to an embodiment of the present invention.
Description of the main elements
Video monitoring module 31
Sound monitoring module 32
Sound recognition module 33
Response judging module 34
Trigger recording module 35
Identity association module 36
Sound conversion module 37
Semantic analysis module 38
Semantic judging module 39
Processor 71
Computer storage medium 73
The following detailed description will further illustrate the invention in conjunction with the above-described figures.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that when an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may also be present. When an element is referred to as being "disposed on" another element, it can be directly on the other element or intervening elements may also be present.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Referring to fig. 1, an embodiment of the present invention provides an identity information association method, which can obtain and record identity information of an individual through behavior analysis. The method comprises the following steps:
step S101: individuals within the scene are identified. The scene is a fixed-area activity space, such as a conference room, a supermarket, a laboratory, a classroom, a restaurant, a mall and the like, which is monitored by a video monitoring device. The individual may be a human body, an animal or a man-made object, such as an artificial intelligence robot or the like. The video monitoring device is a camera. The video monitoring device can determine and track each individual through sign recognition (such as face recognition) and posture recognition.
Step S102: recording sound information within the scene. Recording the sound in the target scene through a sound monitoring device; in one embodiment, the sound monitoring device is a microphone. Sounds in a scene may include sounds made by individuals and other sounds.
Step S103: individual sounds in the sound information are identified. The individual sounds in the sound information may be identified by sound frequency, by combining changes in the individual's posture in the video, such as mouth opening actions, or by identifying semantics.
Step S104: judging whether a target individual has a response action to a trigger sound in the individual sounds; if so, executing step S105, and if not, returning to step S101. The response action comprises at least one of a head action, a face action and a hand action; the head action comprises raising or turning the head, the face action comprises a specific mouth action or eye action, and the hand action comprises a hand-raising response. The trigger sound includes an appellation of the target individual, such as a name, an alias, or a nickname.
Step S105: recording the trigger sound.
Step S106: the semantics of a plurality of trigger sounds are analyzed.
Step S107: and judging whether the number of the same semantics in the plurality of trigger sounds exceeds a preset number, if so, executing the step S108, and if not, returning to the step S104.
Step S108: associating the trigger sound with the target individual.
Step S109: and converting the trigger sound into characters to be associated with the target individual.
After the trigger sound is converted into characters to be associated with the target individual, the target individual can be directly registered through sign recognition (such as face recognition) or posture recognition without manual registration when the target individual needs to be registered. By using the target individual and the associated characters, other personal data corresponding to the characters, such as career experience, diagnosis data, health condition and the like, can be associated in the database through big data analysis. After physical sign recognition or posture recognition, the corresponding name and the related personal data can be known.
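The flow S104-S109 can be summarized in a short sketch. All names below (Individual, on_trigger, PRESET_COUNT) are hypothetical, and comparing normalized transcripts stands in for the semantic analysis of steps S106-S107, which the patent does not tie to any specific technique.

```python
# Hedged sketch of steps S104-S109. Treating "same semantics" as equality of
# normalized transcripts is an illustrative assumption.
from collections import Counter
from dataclasses import dataclass, field
from typing import Optional

PRESET_COUNT = 2  # the preset number of identical semantics (at least two)

@dataclass
class Individual:
    track_id: int
    associated_name: Optional[str] = None
    trigger_transcripts: list = field(default_factory=list)

def on_trigger(individual: Individual, transcript: str) -> bool:
    """S105: record the trigger sound (as a transcript); S106-S107: count
    identical semantics; S108-S109: associate the converted text once the
    count exceeds PRESET_COUNT. Returns True when association happens."""
    individual.trigger_transcripts.append(transcript.strip().lower())
    text, count = Counter(individual.trigger_transcripts).most_common(1)[0]
    if count > PRESET_COUNT and individual.associated_name is None:
        individual.associated_name = text
        return True
    return False

# Usage: hearing the same appellation answered three times associates it.
# person = Individual(track_id=7)
# for _ in range(3): on_trigger(person, "Xiao Ming")
```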
Referring to fig. 2, step S104 includes:
step S201: judging whether the target individual has physical movement after the sound is triggered, if so, executing step S202, and if not, only executing step S201.
Step S202, judging whether the body motion amplitude exceeds a preset amplitude, if so, executing step S203, and if not, returning to step S201. The body action comprises at least one of head action, face action and hand action, the head action comprises head raising and head turning, the face action comprises specific mouth action and eye action, and the hand action comprises hand raising response.
Step S203: and judging whether a plurality of individuals have the physical movement at the same time, if not, executing the step S204, and if so, returning to the step S201.
Step S204: the physical action is recorded as a response action.
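A minimal sketch of the three-part test S201-S204 follows, assuming a pose tracker that reports one motion-amplitude value per individual for the window immediately after the trigger sound. The normalized amplitude metric and its threshold are assumptions for illustration, not values from the patent.

```python
# Sketch of the response test S201-S204. The patent only requires the amplitude
# to exceed a predetermined amplitude and the action not to be shared by many.
PREDETERMINED_AMPLITUDE = 0.15  # hypothetical normalized motion threshold

def is_response_action(motion_amplitudes: dict, target_id: int) -> bool:
    """motion_amplitudes maps individual id -> motion amplitude observed
    in the window right after the trigger sound."""
    amplitude = motion_amplitudes.get(target_id, 0.0)
    if amplitude <= 0.0:                        # S201: no body action at all
        return False
    if amplitude < PREDETERMINED_AMPLITUDE:     # S202: amplitude too small
        return False
    movers = [i for i, a in motion_amplitudes.items()
              if a >= PREDETERMINED_AMPLITUDE]
    if len(movers) > 1:                         # S203: several moved at once
        return False
    return True                                 # S204: record as a response
```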
Referring to fig. 3, an identity information association system according to an embodiment of the present invention includes:
the video monitoring module 31 is used for identifying individuals in the scene. The scene is a fixed area activity space such as a conference room, supermarket, laboratory, classroom, restaurant, mall, etc. The individual may be a human body, an animal or a man-made object, such as an artificial intelligence robot or the like. The video monitoring device is a camera. The video monitoring module 31 can determine and track each individual through sign recognition (such as face recognition) and posture recognition.
And the sound monitoring module 32 is used for recording sound information in the scene. Recording the sound in the target scene by installing a sound monitoring device in the target scene; in one embodiment, the sound monitoring device is a microphone. Sounds in a scene may include sounds made by individuals and other sounds.
The sound recognition module 33 is used for identifying the individual sounds in the sound information. The individual sounds may be identified by sound frequency, by combining changes in the individual's posture in the video (such as a mouth-opening action), or by identifying semantics.
The response determining module 34 is used to determine whether a target individual has a response action to a trigger sound in the individual sounds. The response action comprises at least one of head action, face action and hand action, the head action comprises head raising and head turning, the face action comprises specific mouth action and eye action, and the hand action comprises hand raising response.
The response judging module 34 determines whether a response has occurred by judging whether there is a body action in the target scene after the trigger sound, whether the amplitude of the body action exceeds a predetermined amplitude, and whether a plurality of individuals have the body action at the same time. When the amplitude of the body action is too small, or when many individuals move at the same time, the action is not regarded as a response action.
The trigger recording module 35 is used to record the trigger sound that triggered the response action.
The semantic analysis module 38 is used to analyze the semantics of the trigger sounds.
The semantic judging module 39 is used to judge whether the number of identical semantics among the trigger sounds exceeds a preset number. The preset number is at least two.
The identity association module 36 is used to associate the trigger sound with the target individual when the number of identical semantics among the trigger sounds exceeds the preset number.
The sound conversion module 37 is used to convert the trigger sound into text to be associated with the target individual.
After the trigger sound is converted into characters to be associated with the target individual, the target individual can be directly registered through sign recognition (such as face recognition) or posture recognition without manual registration when the target individual needs to be registered. By using the target individual and the associated characters, other personal data corresponding to the characters, such as career experience, diagnosis data, health condition and the like, can be associated in the database through big data analysis. After physical sign recognition or posture recognition, the corresponding name and the related personal data can be known.
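For orientation, the following sketch shows one plausible way the modules 31-39 of fig. 3 could be wired together. Every class and method name is a hypothetical placeholder: the patent defines module responsibilities, not programming interfaces.

```python
# Hypothetical wiring of modules 31-39 from fig. 3, using duck-typed modules.
class IdentityAssociationSystem:
    def __init__(self, modules: dict):
        # expected keys: video(31), sound(32), recognize(33), respond(34),
        # trigger(35), associate(36), convert(37), semantics(38), judge(39)
        self.m = modules

    def process(self, frame, audio_chunk):
        individuals = self.m["video"].identify(frame)               # module 31
        sound_info = self.m["sound"].record(audio_chunk)            # module 32
        for voice in self.m["recognize"].individual_sounds(sound_info):  # 33
            target = self.m["respond"].find_responder(individuals, voice)  # 34
            if target is None:
                continue
            self.m["trigger"].record(voice)                         # module 35
            meaning = self.m["semantics"].analyze(voice)            # module 38
            if self.m["judge"].exceeds_preset(target, meaning):     # module 39
                text = self.m["convert"].to_text(voice)             # module 37
                self.m["associate"].link(target, text)              # module 36
```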
Referring to fig. 4, the present invention also discloses a user equipment, which may include at least one processor 71 (one processor 71 is shown as an example) and a computer storage medium 73. Processor 71 may invoke logic instructions in computer storage medium 73 to perform the methods in the embodiments described above. In one embodiment, the user equipment is a server.
Furthermore, the logic instructions in the computer storage medium 73 can be implemented in the form of software functional units and stored in a computer storage medium when sold or used as a stand-alone product.
The computer storage medium 73 may be configured to store software programs, computer-executable programs, such as program instructions or modules corresponding to the methods in the embodiments of the present disclosure. The processor 71 executes functional applications and data processing, i.e. implements the methods in the above-described embodiments, by running software programs, instructions or modules stored in the computer storage medium 73.
The computer storage medium 73 may include a storage program area and a storage data area, wherein the storage program area may store an operating system and an application program required for at least one function, and the storage data area may store data created according to the use of the terminal device. Further, the computer storage medium 73 may include a high-speed random access storage medium and may also include a non-volatile storage medium. For example, it may be any of a variety of media that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk; it may also be a transitory storage medium.
The processor 71 loads and executes one or more instructions stored in the computer storage medium 73 to implement the corresponding steps of the method flows shown in figs. 1-2; in a specific implementation, the one or more instructions are loaded by the processor and perform steps S101 through S109 and S201 through S204 exactly as described above.
In addition, other modifications within the spirit of the invention will occur to those skilled in the art, and it is understood that such modifications are included within the scope of the invention as claimed.

Claims (8)

1. An identity information association method, characterized in that the method comprises:
identifying individuals within a scene;
recording sound information within the scene;
identifying individual sounds in the sound information;
judging whether a target individual has a response action to a trigger sound in the individual sounds; wherein the judging whether a target individual has a response action to a trigger sound in the individual sounds comprises judging whether the target individual has a body action after the trigger sound; if yes, judging whether the body motion amplitude exceeds a preset amplitude or not; if yes, judging whether a plurality of individuals have the body motion at the same time; if not, recording the body action as a response action;
if yes, recording the trigger sound; and
associating the trigger sound with the target individual.
2. The identity information association method of claim 1, wherein: prior to said associating said trigger sound with said target individual, said method further comprising the steps of:
analyzing semantics of a plurality of trigger sounds; and
and judging whether the number of the same semantics in the multiple trigger sounds exceeds a preset number or not, so as to associate the trigger sounds with the target individual when the number of the same semantics in the multiple trigger sounds exceeds the preset number.
3. The identity information association method of claim 1, wherein: the body action comprises at least one of head action, face action and hand action, the head action comprises head raising or head turning, the face action comprises specific mouth action or eye action, and the hand action comprises hand raising response action.
4. An identity information association system, comprising:
the video monitoring module is used for identifying individuals in a scene;
the sound monitoring module is used for recording sound information in the scene;
the sound recognition module is used for identifying the individual sounds in the sound information;
the response judging module is used for judging whether a target individual has a response action on a trigger sound in the individual sounds; the response judging module is further used for judging whether the target individual has a body action after the trigger sound, if so, judging whether the body action amplitude exceeds a preset amplitude, if so, judging whether a plurality of individuals have the body action at the same time, and if not, recording the body action as a response action;
the trigger recording module is used for recording a trigger sound when judging that a target individual has a response action to the trigger sound in the individual sounds; and
and the identity correlation module is used for correlating the trigger sound with the target individual.
5. The identity information association system of claim 4, wherein: the identity information association system further comprises a semantic analysis module, a sound conversion module and a semantic judging module, wherein the semantic analysis module is used for analyzing the semantics of the multiple trigger sounds, and the semantic judging module is used for judging whether the number of identical semantics among the multiple trigger sounds exceeds a preset number; the sound conversion module is used for converting the trigger sound into text to be associated with the target individual when the number of identical semantics among the trigger sounds exceeds the preset number.
6. The identity information association system of claim 4, wherein: the body action comprises at least one of head action, face action and hand action, the head action comprises head raising or head turning, the face action comprises specific mouth action or eye action, and the hand action comprises hand raising response action.
7. A computer storage medium, characterized in that: the computer storage medium stores a plurality of instructions adapted to be loaded by a processor and to perform the identity information association method of any of claims 1-3.
8. A user device, comprising:
a processor to implement one or more instructions; and
computer storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform the identity information association method according to any one of claims 1 to 3.
CN201711354200.7A 2017-12-15 2017-12-15 Identity information association system and method, computer storage medium and user equipment Active CN109935228B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711354200.7A CN109935228B (en) 2017-12-15 2017-12-15 Identity information association system and method, computer storage medium and user equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711354200.7A CN109935228B (en) 2017-12-15 2017-12-15 Identity information association system and method, computer storage medium and user equipment

Publications (2)

Publication Number Publication Date
CN109935228A CN109935228A (en) 2019-06-25
CN109935228B true CN109935228B (en) 2021-06-22

Family

ID=66980742

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711354200.7A Active CN109935228B (en) 2017-12-15 2017-12-15 Identity information association system and method, computer storage medium and user equipment

Country Status (1)

Country Link
CN (1) CN109935228B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0343656A2 (en) * 1988-05-27 1989-11-29 Kyowa Metal Works Co., Ltd. Vibration free handle
JP2002018147A (en) * 2000-07-11 2002-01-22 Omron Corp Automatic answering machine
CN103488073A (en) * 2013-09-29 2014-01-01 陕西科技大学 Sports watch with musical function
CN205584578U (en) * 2015-11-25 2016-09-14 惠阳帝宇工业有限公司 A fully automatic infrared wireless lighting control system
CN107466389A (en) * 2015-04-30 2017-12-12 谷歌公司 The unknowable RF signals of type represent

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8140340B2 (en) * 2008-01-18 2012-03-20 International Business Machines Corporation Using voice biometrics across virtual environments in association with an avatar's movements
US8847881B2 (en) * 2011-11-18 2014-09-30 Sony Corporation Gesture and voice recognition for control of a device
US9536528B2 (en) * 2012-07-03 2017-01-03 Google Inc. Determining hotword suitability
TW201409351A (en) * 2012-08-16 2014-03-01 Hon Hai Prec Ind Co Ltd Electronic device with voice control function and voice control method
US9020194B2 (en) * 2013-06-14 2015-04-28 Qualcomm Incorporated Systems and methods for performing a device action based on a detected gesture
IN2014DE00332A (en) * 2014-02-05 2015-08-07 Nitin Vats
US20160103655A1 (en) * 2014-10-08 2016-04-14 Microsoft Corporation Co-Verbal Interactions With Speech Reference Point
US10192549B2 (en) * 2014-11-28 2019-01-29 Microsoft Technology Licensing, Llc Extending digital personal assistant action providers
JP2017049471A (en) * 2015-09-03 2017-03-09 カシオ計算機株式会社 Dialogue control apparatus, dialogue control method, and program
CN105845144A (en) * 2016-03-21 2016-08-10 陈宁 Intelligent health management system for realizing animal sound and form translation function
CN206441536U (en) * 2016-10-10 2017-08-25 德尔福电子(苏州)有限公司 A kind of active voice assistant based on recognition of face
CN106528859A (en) * 2016-11-30 2017-03-22 英华达(南京)科技有限公司 Data pushing system and method


Also Published As

Publication number Publication date
CN109935228A (en) 2019-06-25

Similar Documents

Publication Publication Date Title
Bogomolov et al. Daily stress recognition from mobile phone data, weather conditions and individual traits
Albertson et al. Did that scare you? Tips on creating emotion in experimental subjects
DeSteno et al. Detecting the trustworthiness of novel partners in economic exchange
KR102611751B1 (en) Augmentation of key phrase user recognition
JP6857581B2 (en) Growth interactive device
CN107256428B (en) Data processing method, data processing device, storage equipment and network equipment
US10836044B2 (en) Robot control device and robot control method
CN106113054A (en) service processing method based on robot
Djupe et al. The prosperity gospel of coronavirus response
CN112151027B (en) Method, device and storage medium for querying specific person based on digital person
US10991142B1 (en) Computer-implemented essence generation platform for posthumous persona simulation
Le et al. Deep learning based multi-modal addressee recognition in visual scenes with utterances
CN109421044A (en) Intelligent robot
US20140163986A1 (en) Voice-based captcha method and apparatus
US11483593B2 (en) System for providing a virtual focus group facility
JP6516805B2 (en) DECISION DEVICE, DECISION METHOD, AND DECISION PROGRAM
CN109935228B (en) Identity information association system and method, computer storage medium and user equipment
CN114008621A (en) Determining observations about a topic in a meeting
TWI661329B (en) Identity information interconnected system and method,computer storage medium and use device
Wilks Is a Companion a distinctive kind of relationship with a machine?
Walecka Determinants of managers' behaviour in a crisis situation in an enterprise-an attempt at model construction
CN114445052A (en) Intelligent education student attendance big data statistical method and system based on block chain
KR20170036927A (en) System for building social emotion network and method thereof
McGlynn Blowing the whistle is laden with risk
KR101997161B1 (en) Method And Apparatus for Classifying User Persona by Using Sensors Data and Online Messenger Data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant