
CN110209264B - Behavior data processing system and method - Google Patents

Behavior data processing system and method

Info

Publication number
CN110209264B
CN110209264B (application CN201910242486.2A / CN201910242486A)
Authority
CN
China
Prior art keywords
information
user
interaction
limb
interactive
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910242486.2A
Other languages
Chinese (zh)
Other versions
CN110209264A (en)
Inventor
钟炜凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201910242486.2A priority Critical patent/CN110209264B/en
Publication of CN110209264A publication Critical patent/CN110209264A/en
Application granted granted Critical
Publication of CN110209264B publication Critical patent/CN110209264B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63H TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
    • A63H 13/00 Toy figures with self-moving parts, with or without movement of the toy as a whole
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a behavior data processing system and method. The system includes a smart doll and a Virtual Reality (VR) device, wherein: the smart doll is configured to detect, through built-in sensors, interaction feature information generated when a user interacts with the doll, and to send the interaction feature information to the VR device; the VR device is configured to receive the interaction feature information, determine the user's interaction behavior based on the interaction feature information and a pre-stored interaction judgment database, and output corresponding voice information and/or image information based on the user's interaction behavior. In this way, the user's behavior data is processed interactively between the smart doll and the VR display device, and the user's behavior data is identified with high accuracy, thereby improving the user's simulated social experience.

Description

Behavior data processing system and method
Technical Field
The application relates to the technical field of mixed reality, in particular to a system and a method for processing behavior data.
Background
Work pressure in modern society is gradually increasing, and the time available for accompanying the elderly and children and for one's own social contact is correspondingly decreasing. Meanwhile, technologies such as Virtual Reality (VR) and Mixed Reality (MR) are developing rapidly, and simulated social interaction is increasingly becoming a function that people need.
In current simulated social interaction, the user's interaction behavior is judged from a single element, the judgment criteria are simple, and the judgment accuracy is low, so the user's behavior is easily misjudged or missed, leading to many erroneous interactions and a poor user experience. It is therefore difficult to meet users' requirements for highly anthropomorphic simulated social interaction.
Disclosure of Invention
In view of this, an object of the embodiments of the present application is to provide a behavior data processing system and method that determine the user's interaction purpose from multiple elements acting together, so as to reduce misjudgment and improve the user's simulated social experience.
In a first aspect, an embodiment of the present application provides a behavior data processing system, including a smart doll and a Virtual Reality (VR) device; wherein:
the smart doll is configured to detect, through built-in sensors, interaction feature information generated when the user interacts with the doll, and to send the interaction feature information to the VR device;
the VR device is configured to receive the interaction feature information, determine the user's interaction behavior based on the interaction feature information and a pre-stored interaction judgment database, and output corresponding voice information and/or image information based on the user's interaction behavior.
In one possible embodiment, the built-in sensor comprises at least one of the following sensors:
a pressure sensor; a temperature sensor; a humidity sensor;
the intelligent doll is used for obtaining the interactive feature information by adopting the following steps:
receiving pressure data uploaded by a built-in pressure sensor; and/or receiving temperature data uploaded by a built-in temperature sensor; and/or receiving humidity data uploaded by a built-in humidity sensor.
In a possible implementation manner, the interaction feature information further includes interaction position information when the user performs an interaction action on the intelligent doll;
the intelligent doll is further used for obtaining the interaction position information by adopting the following steps:
determining the identification of the sensor uploading the interactive characteristic information, and determining the position information of each triggered sensor based on the determined identification of the sensor;
and according to the determined position information of each triggered sensor, determining the interaction position information when the user carries out interaction action on the intelligent doll.
In one possible implementation manner, the interaction characteristic information corresponding to a preset behavior mode is stored in the interaction judgment database;
the VR device is configured to determine the interaction behavior of the user in the following manner:
matching the received interactive characteristic information with interactive characteristic information stored in the interactive judgment database, and determining target interactive characteristic information with the highest matching degree in the interactive judgment database;
and determining a preset behavior mode corresponding to the target interaction characteristic information as the interaction behavior of the user.
In one possible embodiment, the VR device is configured to determine the output voice information and/or image information by:
the method comprises the steps of determining preset output information corresponding to interaction behaviors of a user based on a pre-stored mapping relation between the preset user interaction behaviors and the preset output information, wherein the preset output information comprises voice information and/or image information.
In one possible implementation, the VR device is further configured to:
capturing a voice to be recognized of a user;
matching the voice to be recognized of the user with the voice stored in a pre-stored voice judgment database, and determining the target voice with the highest matching degree in the voice judgment database;
determining the target voice as user interaction voice;
the method comprises the steps of determining preset output information corresponding to user interaction voice based on a mapping relation between the preset user interaction voice and the preset output information which are stored in advance, wherein the preset output information comprises voice information and/or image information.
In one possible implementation, the VR device is further configured to:
capturing the body motion information to be recognized of a user;
when the fact that the limbs of the user are in contact with the intelligent doll is detected, matching the to-be-identified limb action information of the user with limb action characteristic information stored in a pre-stored limb action judgment database, and determining the target limb action with the highest matching degree in the limb action judgment database; the user limb motion information to be identified is user limb track information and limb motion speed information in a preset time period before contact;
determining the target limb motion as a user interaction limb motion;
the method comprises the steps of determining preset output information corresponding to user interaction limb actions based on a pre-stored mapping relation between the preset user interaction limb actions and the preset output information, wherein the preset output information comprises voice information and/or image information.
In one possible embodiment, the smart figure is further configured to:
sending the current terminal state information of the intelligent doll to the VR equipment;
the VR device further to:
receiving the current terminal state information of the intelligent doll;
and generating the virtual image of the intelligent doll based on the terminal state information and preset virtual appearance information of the intelligent doll.
In a second aspect, an embodiment of the present application further provides a behavior data processing method, where the method includes:
detecting interactive characteristic information made by a user on the intelligent doll through a built-in sensor;
sending the interactive feature information to VR equipment so that the VR equipment can determine user interactive behaviors based on the interactive feature information and a pre-stored interactive judgment database; and outputting corresponding voice information and/or image information based on the interaction behavior of the user.
In a possible implementation, the detecting, by a built-in sensor, the interactive feature information made by the user to the smart doll includes:
receiving pressure data uploaded by a built-in pressure sensor; and/or receiving temperature data uploaded by a built-in temperature sensor; and/or receiving humidity data uploaded by a built-in humidity sensor.
In a possible implementation manner, the interaction feature information further includes interaction location information when the user performs an interaction action on the smart figure:
the interactive characteristic information of the user to the intelligent doll is detected through the built-in sensor, and the method further comprises the following steps:
determining the identification of the sensor uploading the interactive characteristic information, and determining the position information of each triggered sensor based on the determined identification of the sensor;
and according to the determined position information of each triggered sensor, determining the interaction position information when the user carries out interaction action on the intelligent doll.
In a third aspect, an embodiment of the present application further provides a behavior data processing method, where the method includes:
receiving interactive characteristic information sent by the intelligent doll;
determining the interaction behavior of the user based on the interaction feature information and a pre-stored interaction judgment database;
and outputting corresponding voice information and/or image information based on the interaction behavior of the user.
In a fourth aspect, an embodiment of the present application further provides a behavior data processing apparatus, including:
the detection module is used for detecting the interactive characteristic information made by the user on the behavior data processing device through a built-in sensor;
and the sending module is used for sending the interactive feature information to the VR equipment.
In a possible implementation, the detection module is further configured to: receive pressure data uploaded by a built-in pressure sensor; and/or receive temperature data uploaded by a built-in temperature sensor; and/or receive humidity data uploaded by a built-in humidity sensor.
In a possible implementation, the detection module is further configured to: the identification of the sensors uploading the interactive feature information is determined, and the position information of each triggered sensor is determined based on the determined identification of the sensors.
In a possible implementation, the sending module is further configured to: and sending interaction position information when a user carries out interaction action on the behavior data processing device, wherein the interaction position information is determined by the detection module based on the position information of the triggered sensor.
In a fifth aspect, an embodiment of the present application further provides another behavior data processing apparatus, where the apparatus includes:
the receiving module is used for receiving the interactive characteristic information sent by the intelligent doll;
the comparison module is used for determining the interaction behavior of the user based on the interaction characteristic information and a pre-stored interaction judgment database;
and the output module is used for outputting corresponding voice information and/or image information based on the interactive behavior of the user.
In a sixth aspect, an embodiment of the present application further provides an electronic device, including a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, and the machine-readable instructions, when executed by the processor, performing the steps of the behavior data processing method in the second aspect or any possible implementation of the second aspect, or the steps of the behavior data processing method in the third aspect or any possible implementation of the third aspect.
In a seventh aspect, an embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, performs the steps of the behavior data processing method in the second aspect or any possible implementation of the second aspect, or the steps of the behavior data processing method in the third aspect or any possible implementation of the third aspect.
The embodiments of the present application provide a behavior data processing system and method that capture, through the built-in sensors of a smart doll, interaction feature information generated when a user interacts with the doll, determine the user's interaction behavior based on the captured interaction feature information, then determine corresponding voice and/or image information based on that interaction behavior, and output it to the user through a VR device, thereby realizing a simulated interaction process between the user and the smart doll.
With this system, when the user interacts with the smart doll, the user's behavior can be judged from multiple elements, such as different interaction actions and the user's voice, considered separately or together, so that the user's interaction purpose is determined accurately and corresponding voice and/or image information is given as a response. This improves the anthropomorphic degree of simulated social interaction and enhances the user's immersive experience. In contrast, simulated social interaction in the prior art is largely based on a single fixed instruction and a single judgment element, so the interaction result is relatively fixed and rigid and the misjudgment rate is high. With this system, various interaction actions of the user can be judged and the judgment accuracy is improved.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
FIG. 1 illustrates a structural framework of a behavioral data processing system provided by an embodiment of the present application;
FIG. 2 is a flow chart illustrating a behavior data processing method according to an embodiment of the present application;
FIG. 3 is a flow chart of another behavior data processing method provided by an embodiment of the application;
fig. 4 is a schematic diagram illustrating an apparatus for processing behavior data according to an embodiment of the present application;
FIG. 5 is a schematic diagram of another behavior data processing device provided in an embodiment of the present application;
fig. 6 shows a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
At present, in simulated social interaction realized with companion robots and smart dolls, the main behavior data processing modes include key control, fixed voice commands, fixed motion commands, and the like. When processing behavior data, such a system judges the purpose of the user's behavior based on a single element and responds according to a fixed judgment.
However, in simulated social interaction the types of user interaction behavior are numerous, and in some cases an interaction behavior simultaneously contains several judgment elements such as voice and actions. Judging the user's behavior from a single element then easily leads to misjudging or missing the user's interaction purpose and producing wrong responses, which degrades the user experience and cannot meet the user's needs.
Based on this, the embodiments of the present application provide a behavior data processing system and method. When the behavior data processing system detects that a user is interacting with the smart doll, it captures the interaction feature information of the interaction behavior through the built-in sensors of the smart doll, captures the interaction voice through a microphone, captures the user's limb motion information through a camera or a motion-sensing controller, and judges the user's interaction purpose from these elements together. Compared with the prior art, the judgment accuracy is higher and the misjudgment rate is lower, and the corresponding voice and/or image information output through the VR device is more realistic, which brings an immersive experience to the user and meets the user's requirements for simulated social interaction.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
Example one
The embodiment of the application provides a behavior data processing system, which can be applied to a companion-type smart doll system based on VR technology and is used for processing the user's action behavior data and outputting corresponding voice and/or image information. Fig. 1 shows a behavior data processing system 100 provided by the first embodiment of the present application, which includes a smart doll 101 and a VR device 102. The communication modes between the smart doll 101 and the VR device 102 include, but are not limited to, UWB (Ultra Wide Band), Radio Frequency Identification (RFID), Bluetooth, and the like; other communication methods may also be used. Wherein:
the intelligent doll 101 is configured to detect, through a built-in sensor, interactive feature information made by a user on the intelligent doll, and send the interactive feature information to the VR device 102;
here, the smart doll 101 may be a human-shaped, semi-human-shaped, or doll-shaped figure made of any material and containing electronic devices. Interaction feature information generated by the user during the interaction is captured by the built-in sensors and sent to the VR device 102, so that the user's interaction behavior can be judged.
The interactive characteristic information comprises the running state information and the triggering data of the sensor. Therefore, the intelligent doll 101 acquires the operating condition information and the trigger data of each built-in sensor of the intelligent doll in real time.
The built-in sensor comprises at least one of a pressure sensor, a temperature sensor and a humidity sensor. The trigger data for the different sensors include:
the trigger start time, trigger pressure, and trigger duration of the pressure sensor; the trigger start time, trigger temperature, and trigger duration of the temperature sensor; and the trigger start time, trigger humidity, and trigger duration of the humidity sensor.
The operation condition information of the sensors comprises the number of triggered sensors in one interactive action and the position information of the triggered sensors. Here, the location information of the triggered sensor is determined by the sensor identification of the uploaded trigger data.
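As an illustration only, the interaction feature information described above (sensor trigger data plus operating-state information) could be organized as plain data structures as sketched below; all class and field names are assumptions of this sketch and are not part of the disclosed embodiments.

from dataclasses import dataclass, field
from typing import List


@dataclass
class TriggerData:
    """Trigger data uploaded by one built-in sensor (illustrative fields)."""
    sensor_id: str        # e.g. "pressure-001"
    sensor_type: str      # "pressure", "temperature" or "humidity"
    start_time: float     # trigger start time (seconds since epoch)
    value: float          # trigger pressure / temperature / humidity
    duration: float       # trigger duration in seconds


@dataclass
class InteractionFeatureInfo:
    """Operating-state information plus trigger data for one interaction."""
    triggers: List[TriggerData] = field(default_factory=list)

    @property
    def triggered_count(self) -> int:
        # number of distinct sensors triggered in one interaction action
        return len({t.sensor_id for t in self.triggers})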
The position information of the triggered sensor can be obtained through the following implementation modes:
in one possible implementation, the smart figure may derive the triggered sensor identification characteristic by parsing the received trigger data for the triggered sensor. For example: by extracting the head message of the data frame containing the trigger data, the identification characteristics of the device for sending the data frame are obtained, and the position information of the intelligent doll 101 area where the triggered sensor is located is judged according to the identification characteristics.
The above-mentioned identification features include: the type of sensor and the unique number of the sensor.
In a possible implementation manner, the smart doll stores a mapping table between sensor identification features and the regions where the corresponding sensors are installed; after the identification feature of a triggered sensor is determined, the position information of the triggered sensor can be determined from the mapping table.
For example, if the extracted identification feature is pressure sensor No. 001, it can be determined from the mapping table that pressure sensor No. 001 is installed in the face region of the smart doll, and the position information of the triggered sensor is therefore determined to be the face region.
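A minimal sketch of the frame-header parsing and mapping-table lookup described above, assuming a hypothetical "type|number|payload" frame layout; the table contents and function names are illustrative only.

from typing import Tuple

# Hypothetical mapping table: (sensor type, sensor number) -> doll region.
SENSOR_REGION_TABLE = {
    ("pressure", "001"): "face",
    ("pressure", "002"): "left_hand",
    ("temperature", "001"): "torso",
}


def parse_frame_header(frame: bytes) -> Tuple[str, str]:
    """Parse an assumed frame layout of the form b'type|number|payload'."""
    sensor_type, sensor_number, _payload = frame.split(b"|", 2)
    return sensor_type.decode(), sensor_number.decode()


def locate_triggered_sensor(frame: bytes) -> str:
    """Return the doll region where the triggered sensor is installed."""
    sensor_type, sensor_number = parse_frame_header(frame)
    return SENSOR_REGION_TABLE.get((sensor_type, sensor_number), "unknown")


# Example: a frame from pressure sensor No. 001 maps to the face region.
assert locate_triggered_sensor(b"pressure|001|raw-data") == "face"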
After determining the interactive feature information, the smart doll 101 sends the interactive feature information to the VR device 102.
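The disclosure does not specify a payload format for the UWB/RFID/Bluetooth links mentioned above; purely as an illustrative stand-in, the sketch below serializes the collected feature information as JSON and sends it over a TCP socket to a placeholder address.

import json
import socket


def send_feature_info(feature_info: dict, host: str = "192.168.0.10", port: int = 9000) -> None:
    """Send interaction feature information to the VR device (illustrative only).

    The host/port are placeholders; running this requires a listener on the VR side.
    """
    payload = json.dumps(feature_info).encode("utf-8")
    with socket.create_connection((host, port)) as conn:
        conn.sendall(payload)


# Example payload: two triggered sensors in one interaction action.
send_feature_info({
    "triggers": [
        {"sensor_id": "pressure-001", "value": 3.2, "duration": 0.8},
        {"sensor_id": "temperature-001", "value": 36.5, "duration": 0.8},
    ],
    "positions": ["face", "torso"],
})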
The VR device 102 is configured to receive the interactive feature information; determining user interaction behaviors based on the interaction feature information and a pre-stored interaction judgment database; and outputting corresponding voice information and/or image information based on the interactive behaviors of the user. The image information includes, for example, video, animation, and the like, and the present application is not limited thereto.
The VR device 102 may be a VR head-mounted display device that includes at least an information receiving apparatus, a built-in processor, a storage apparatus, and video/audio output devices; it may also be a VR device built around a mobile phone, in which case the phone serves as the information receiving apparatus, processor, and video/audio output device of the VR device.
Specifically, the VR device 102 is configured to determine the user interaction behavior information in the following manner:
after receiving the interactive feature information sent by the smart doll 101, the interactive feature information may be matched with the interactive feature information in the extracted interaction determination database stored in advance.
In this embodiment, the pre-stored interaction determination database includes a pre-set interaction behavior set and an interaction feature information set corresponding to the pre-set interaction behavior set one to one.
Specifically, the interaction behavior set may contain a sample set of interaction actions that occur in social contact, where an interaction action can be understood as physical contact that may occur during social interaction. Such limb contact may arise from different social relationships, including, but not limited to, caressing and kissing between lovers, and hugging and shaking hands between friends. The interaction feature information in the interaction feature information set is the sensor operating-state information and trigger data corresponding to these different interaction actions.
In the embodiment of the application, after the interaction feature information sent by the smart doll 101 is received, the interaction feature information set in the pre-stored interaction judgment database is retrieved; the interaction feature information sent by the smart doll is matched one by one against the interaction feature information in that set, and the entry with the highest similarity is determined as the target interaction feature information.
According to the correspondence between the interaction feature information set and the interaction behavior set, the preset interaction behavior corresponding to the target interaction feature information is then determined as the interaction behavior performed by the user.
Voice and/or image information corresponding to the user's interaction behavior is then output according to the preset mapping relation between user interaction behaviors and preset output information.
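The following sketch illustrates, under simplifying assumptions, the matching and mapping steps described above: each stored entry is reduced to a numeric feature vector, the entry with the highest similarity to the received features is selected, and the corresponding preset output is looked up. The similarity measure, feature layout, and database contents are assumptions of this sketch.

from typing import Dict, List


def similarity(a: List[float], b: List[float]) -> float:
    """Toy similarity: inverse of the mean absolute difference between feature vectors."""
    diff = sum(abs(x - y) for x, y in zip(a, b)) / max(len(a), 1)
    return 1.0 / (1.0 + diff)


def match_interaction(features: List[float],
                      judgment_db: Dict[str, List[float]]) -> str:
    """Return the preset behavior whose stored feature vector matches best."""
    return max(judgment_db, key=lambda behavior: similarity(features, judgment_db[behavior]))


# Hypothetical database: behavior -> stored feature vector (pressure, duration, sensor count).
JUDGMENT_DB = {"stroke_head": [1.5, 0.8, 1.0], "hug": [6.0, 2.5, 4.0], "handshake": [3.0, 1.0, 1.0]}
OUTPUT_MAP = {"stroke_head": "blink_and_blush.mp4", "hug": "hug_back.mp4", "handshake": "greeting.wav"}

behavior = match_interaction([1.4, 0.9, 1.0], JUDGMENT_DB)   # -> "stroke_head"
print(behavior, OUTPUT_MAP[behavior])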
In some embodiments of the application, the user's interaction behavior may also be predicted based on artificial intelligence techniques. For example, a prediction model for predicting the user's interaction behavior is trained on a training sample set by machine learning; the received interaction feature information is then input into the trained prediction model, which outputs the predicted user interaction behavior. A convolutional neural network model, for instance, may be used as the prediction model, which is not limited in the present application.
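As a sketch of the machine-learning variant mentioned above, the snippet below trains a small logistic-regression classifier on labelled feature vectors; the feature layout, labels, and model choice are illustrative assumptions (the embodiment itself names a convolutional neural network only as one possible model).

from sklearn.linear_model import LogisticRegression

# Hypothetical training samples: [max pressure, trigger duration, triggered-sensor count]
X_train = [
    [1.5, 0.8, 1], [1.3, 0.6, 1],      # stroking the head
    [6.0, 2.5, 4], [5.5, 2.0, 5],      # hugging
    [3.0, 1.0, 1], [2.8, 1.2, 1],      # shaking hands
]
y_train = ["stroke_head", "stroke_head", "hug", "hug", "handshake", "handshake"]

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Predict the interaction behavior for newly received feature information.
print(model.predict([[5.8, 2.2, 4]]))   # expected: ['hug']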
In one possible implementation, when the user's interaction behavior is determined to be stroking the doll's head, image information of blinking and blushing may be output by the VR device 102. The image information can be output by the built-in screen of the output device, or by a mobile phone connected to the VR glasses.
The VR device 102 may also be configured to process voice behavior data of the user and output corresponding voice and image information.
The VR device 102 is used for capturing the voice to be recognized of the user; matching the voice to be recognized of the user with the voice stored in a pre-stored voice judgment database, and determining the target voice with the highest matching degree in the voice judgment database; determining the target voice as user interaction voice; the method comprises the steps of determining preset output information corresponding to user interaction voice based on a mapping relation between the preset user interaction voice and the preset output information which are stored in advance, wherein the preset output information comprises voice information and/or image information.
The VR device 102 captures voice information to be recognized of a user through a microphone, and after capturing the voice information to be recognized, calls voice information stored in a pre-stored voice determination database to perform matching.
In this embodiment, the pre-stored voice determination database includes a pre-set interactive voice set and an interactive voice feature set corresponding to the pre-set interactive voice set one to one.
In particular, the set of interactive voices may include a set of conversation samples of interactive voices in social interaction. The interactive voice features include tone, language and semantic information.
The process of matching the speech to be recognized and the speech determination database described in this possible embodiment to determine the user interaction speech is similar to the process of matching the interaction feature information and determining the user interaction behavior in the interaction determination database described in the previous possible embodiment, and details are not repeated here.
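A minimal sketch of the voice-matching step, assuming the captured speech has already been converted to text by some recognizer; the stored phrases, output files, and use of a plain string-similarity measure are assumptions of this sketch.

import difflib

# Hypothetical voice judgment database: stored interactive phrase -> preset output.
VOICE_DB = {
    "good morning": "speech/good_morning_reply.wav",
    "how was your day": "speech/day_reply.wav",
    "i missed you": "speech/missed_you_reply.wav",
}


def match_voice(recognized_text: str) -> str:
    """Pick the stored phrase with the highest textual similarity to the input."""
    best = max(VOICE_DB, key=lambda phrase: difflib.SequenceMatcher(
        None, recognized_text.lower(), phrase).ratio())
    return VOICE_DB[best]


print(match_voice("Good morning!"))   # -> speech/good_morning_reply.wav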
The VR device 102 may further process the body movement behavior data of the user, and output corresponding voice and image information.
The VR device 102 is configured to capture the user's limb motion information; when it is detected that the user's limb is in contact with the smart doll 101, match the user's limb motion information against the limb motion feature information stored in a pre-stored limb motion judgment database, and determine the target limb motion with the highest matching degree in the limb motion judgment database; determine the target limb motion as the user's interactive limb motion; and determine preset output information corresponding to the user's interactive limb motion based on a pre-stored mapping relation between preset user interactive limb motions and preset output information, where the preset output information includes voice information and/or image information.
The VR device 102 captures the user's limb motion through a camera or a motion-sensing controller. When the user's limb is detected to be in contact with the smart doll 101, the user's limb trajectory information and limb movement speed information within a preset time period before the contact, together with the contact position information, are determined as the limb motion information to be recognized. In a specific embodiment, the limb motion trajectory information and limb speed information within 0.5 second or 1 second before the user's limb is detected to contact the smart doll 101, together with the position information of the contact between the limb and the smart doll 101, constitute the limb motion information to be recognized.
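One possible way to keep the limb samples for the preset time period before contact is a sliding window, sketched below under the assumption of a 0.5-second window and three-dimensional position samples; the sampling source and data layout are illustrative only.

import time
from collections import deque

WINDOW_SECONDS = 0.5          # preset time period before contact (assumed 0.5 s here)
samples = deque()             # (timestamp, (x, y, z)) limb positions from camera / motion controller


def add_limb_sample(position):
    """Record one limb position sample and drop samples older than the window."""
    now = time.time()
    samples.append((now, position))
    while samples and now - samples[0][0] > WINDOW_SECONDS:
        samples.popleft()


def window_before_contact():
    """Trajectory and average speed over the window preceding a detected contact."""
    trajectory = [pos for _, pos in samples]
    if len(samples) < 2:
        return trajectory, 0.0
    dist = sum(
        sum((a - b) ** 2 for a, b in zip(p1, p2)) ** 0.5
        for (_, p1), (_, p2) in zip(samples, list(samples)[1:])
    )
    elapsed = samples[-1][0] - samples[0][0]
    return trajectory, dist / elapsed if elapsed > 0 else 0.0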
After the limb motion information to be recognized is determined, the preset limb motion feature information in the pre-stored limb motion judgment database is retrieved for matching.
In this embodiment, the pre-stored limb movement determination database includes a pre-set interactive limb movement set and a limb movement characteristic information set corresponding to the pre-set interactive limb movement set one to one.
Specifically, the limb movement characteristic information includes limb movement track information, limb movement speed information, and position information of the contact of the limb and the intelligent doll 101.
The process of matching the limb motion to be recognized against the limb motion judgment database to determine the user's interactive limb motion is similar to the process, described in the previous possible embodiment, of matching the interaction feature information against the interaction judgment database to determine the user's interaction behavior, and is not repeated here.
In a possible embodiment, before a user interacts with a smart doll, an avatar of a current smart doll is first output in a VR device, which specifically includes:
the intelligent doll sends current terminal state information to VR equipment, wherein the current terminal state information comprises position information of the intelligent doll, speed information of the intelligent doll and inertial direction information of the intelligent doll; and the VR equipment receives the terminal state information sent by the intelligent doll and generates the virtual image of the intelligent doll based on the preset virtual appearance information of the intelligent doll.
The terminal state information can be obtained by installing electronic equipment on the body and joint parts of the intelligent doll. The electronic device includes one or more of a locator, a gyroscope, and an accelerometer.
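The sketch below shows one possible way for the VR device to combine the received terminal state information with preset virtual appearance information to generate the avatar; all field names and the render-description format are assumptions of this sketch.

from dataclasses import dataclass
from typing import Tuple


@dataclass
class TerminalState:
    """Current terminal state reported by the smart doll (illustrative fields)."""
    position: Tuple[float, float, float]      # from the locator
    velocity: Tuple[float, float, float]      # from the accelerometer
    orientation: Tuple[float, float, float]   # inertial direction from the gyroscope


def build_avatar(state: TerminalState, appearance: dict) -> dict:
    """Combine preset virtual appearance with live terminal state into a render description."""
    return {
        "mesh": appearance.get("mesh", "default_doll.obj"),
        "texture": appearance.get("texture", "default_skin.png"),
        "position": state.position,
        "orientation": state.orientation,
    }


avatar = build_avatar(
    TerminalState(position=(0.0, 1.2, 0.5), velocity=(0.0, 0.0, 0.0), orientation=(0.0, 90.0, 0.0)),
    {"mesh": "doll_v1.obj", "texture": "doll_v1.png"},
)
print(avatar)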
In this way, the user's interaction action is judged from several different elements, namely the interaction voice, the limb motion, and the sensor trigger data, used separately or together. When the user's action data is processed, different interaction actions can be effectively distinguished so that a correct anthropomorphic response is made, avoiding the misjudgment of user interaction behavior caused by too few judgment elements. In addition, VR technology is introduced so that the output voice and/or image information is more human-like. Compared with simulated social interaction in the prior art, the scheme provided by the application can effectively improve the immersive experience of simulated social interaction and meet user needs.
Example two
Referring to fig. 2, an embodiment of the present application further provides a behavior data processing method, which is executed by an intelligent doll in the behavior data processing system, and includes the following steps:
step S201, detecting the interactive characteristic information of the user to the intelligent doll through a built-in sensor.
Step S202, sending the interactive feature information to the VR equipment so that the VR equipment can determine user interactive behaviors based on the interactive feature information and a pre-stored interactive judgment database; and outputting corresponding voice information and/or image information based on the interaction behavior of the user.
In a possible implementation, the detecting, by a built-in sensor, the interactive feature information made by the user to the smart doll includes:
receiving pressure data uploaded by a built-in pressure sensor; and/or receiving temperature data uploaded by a built-in temperature sensor; and/or receiving humidity data uploaded by a built-in humidity sensor.
In a possible implementation, the interactive feature information further includes: the interaction position information when the user carries out interaction action on the intelligent doll;
in specific implementation, the detecting, by a built-in sensor, the interactive feature information made by the user to the smart figure specifically includes:
determining the identification of the sensor uploading the interactive characteristic information, and determining the position information of each triggered sensor based on the determined identification of the sensor;
and according to the determined position information of each triggered sensor, determining the interaction position information when the user carries out interaction action on the intelligent doll.
The specific execution flow of the behavior data processing method can refer to the related description of the intelligent doll device in the behavior data processing system, and is not explained herein.
EXAMPLE III
Referring to fig. 3, an embodiment of the present application further provides a behavior data processing method, where the method is executed by a VR device in the behavior data processing system, and includes the following steps:
s301, receiving interactive characteristic information sent by the intelligent doll;
step S302, determining the interaction behavior of the user based on the interaction characteristic information and a pre-stored interaction judgment database;
and step S303, outputting corresponding voice information and/or image information based on the interaction behavior of the user.
The specific execution flow of the behavior data processing method can refer to the relevant description about the VR device in the behavior data processing system, and is not explained here.
Example four
Referring to fig. 4, an embodiment of the present application further provides a behavior data processing apparatus 400, including:
a detection module 401, configured to detect, through a built-in sensor, interactive feature information made by a user to the behavior data processing apparatus 400;
a sending module 402, configured to send the interaction feature information to the VR device, so that the VR device can determine a user interaction behavior based on the interaction feature information and a pre-stored interaction determination database; and outputting corresponding voice information and/or image information based on the interaction behavior of the user.
In a possible implementation manner, the detecting module 401, when detecting the interaction feature information made by the user to the behavior data processing apparatus 400 through a built-in sensor, is specifically configured to:
receiving pressure data uploaded by a built-in pressure sensor; and/or receiving temperature data uploaded by a built-in temperature sensor; and/or receiving humidity data uploaded by a built-in humidity sensor.
In a possible implementation, the interaction feature information further includes interaction position information generated when the user interacts with the behavior data processing apparatus 400. When detecting, through the built-in sensors, the interaction feature information made by the user on the behavior data processing apparatus 400, the detection module 401 is further configured to:
the identification of the sensors uploading the interactive feature information is determined, and the position information of each triggered sensor is determined based on the determined identification of the sensors.
And determining interactive position information when a user interacts with the behavior data processing device according to the determined position information of each triggered sensor.
EXAMPLE five
Referring to fig. 5, an embodiment of the present application further provides a behavior data processing apparatus 500, including:
the receiving module 501 is configured to receive interaction feature information sent by an intelligent doll;
a comparison module 502, configured to determine an interaction behavior of the user based on the interaction feature information and a pre-stored interaction decision database;
an output module 503, configured to output corresponding voice information and/or image information based on the interaction behavior of the user.
EXAMPLE six
Fig. 6 shows an electronic device 600 provided in an embodiment of the present application, which includes a processor 601, a memory 602, and a bus 603, where the processor 601 and the memory 602 are connected via the bus 603; the processor 601 is used to execute executable modules, such as computer programs, stored in the memory 602.
The memory 602 may include a Random Access Memory (RAM) and may further include a non-volatile memory, such as at least one disk memory.
The bus 603 may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 6, but that does not indicate only one bus or one type of bus.
The memory 602 is configured to store a program, and the processor 601 executes the program after receiving an execution instruction; the behavior data processing method disclosed in any of the foregoing embodiments of the present application may be applied to the processor 601 or implemented by the processor 601.
The processor 601 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 601. The Processor 601 may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but also Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field-Programmable Gate arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The steps of the method disclosed in connection with the embodiments of the present application may be implemented directly by a hardware decoding processor, or by a combination of hardware and software modules in the decoding processor. The software module may be located in RAM, flash memory, ROM, PROM or EPROM, registers, or other storage media well known in the art. The storage medium is located in the memory 602; the processor 601 reads the information in the memory 602 and, in combination with its hardware, completes the steps of the behavior data processing method of the second embodiment or performs the steps of the behavior data processing method of the third embodiment.
The computer program product of the behavior data processing method provided in the embodiment of the present application includes a computer-readable storage medium storing a program code, where instructions included in the program code may be used to execute the behavior data processing method described in the foregoing method embodiment, and specific implementations may refer to the method embodiment and are not described herein again.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that: the above-mentioned embodiments are only specific embodiments of the present application, and are used to illustrate the technical solutions of the present application, but not to limit the technical solutions, and the scope of the present application is not limited to the above-mentioned embodiments, although the present application is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive the technical solutions described in the foregoing embodiments or equivalent substitutes for some technical features within the technical scope disclosed in the present application; such modifications, changes or substitutions do not depart from the spirit and scope of the exemplary embodiments of the present application, and are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (11)

1. A behavior data processing system, comprising: a smart doll and a Virtual Reality (VR) device; wherein:
the intelligent doll is used for detecting interaction feature information made by a user on the intelligent doll through a built-in sensor and sending the interaction feature information to the VR equipment; the interactive characteristic information comprises running state information and triggering data of the sensor; the running state information comprises the number of triggered sensors and the position information of the triggered sensors in one interactive action;
the VR equipment is used for receiving the interactive feature information; determining user interaction behavior based on the interaction feature information and a pre-stored interaction judgment database; outputting corresponding voice information and/or image information based on the interaction behavior of the user;
the VR device further to:
capturing the body motion information to be identified of a user;
when the fact that the limbs of the user are in contact with the intelligent doll is detected, matching the to-be-identified limb action information of the user with limb action characteristic information stored in a pre-stored limb action judgment database, and determining the target limb action with the highest matching degree in the limb action judgment database; the user limb action information to be identified is user limb track information and limb movement speed information in a preset time period before contact and position information of contact between a limb and the intelligent doll;
determining the target limb motion as a user interaction limb motion;
the method comprises the steps of determining preset output information corresponding to user interaction limb actions based on a pre-stored mapping relation between the preset user interaction limb actions and the preset output information, wherein the preset output information comprises voice information and/or image information.
2. The behavioral data processing system according to claim 1, characterized in that the built-in sensors include at least one of the following sensors:
a pressure sensor; a temperature sensor; a humidity sensor;
the intelligent doll is used for obtaining the interactive feature information by adopting the following steps:
receiving pressure data uploaded by a built-in pressure sensor; and/or receiving temperature data uploaded by a built-in temperature sensor; and/or receiving humidity data uploaded by a built-in humidity sensor.
3. The behavior data processing system according to claim 2, wherein the interaction feature information further includes interaction location information when a user makes an interaction with the smart figure;
the intelligent doll is further used for obtaining the interaction position information by adopting the following steps:
determining the identification of the sensor uploading the interactive characteristic information, and determining the position information of each triggered sensor based on the determined identification of the sensor;
and according to the determined position information of each triggered sensor, determining the interaction position information when the user carries out interaction action on the intelligent doll.
4. The behavior data processing system according to claim 1, wherein the interaction determination database stores therein interaction feature information corresponding to a preset behavior pattern;
the VR device is used for determining the interaction behavior of the user in the following way:
matching the received interactive characteristic information with interactive characteristic information stored in the interactive judgment database, and determining target interactive characteristic information with the highest matching degree in the interactive judgment database;
and determining a preset behavior mode corresponding to the target interaction characteristic information as the interaction behavior of the user.
5. A behavioural data processing system as claimed in claim 1, wherein the VR device is configured to determine the output speech information and/or image information by:
the method comprises the steps of determining preset output information corresponding to interaction behaviors of a user based on a pre-stored mapping relation between the preset user interaction behaviors and the preset output information, wherein the preset output information comprises voice information and/or image information.
6. The behavioral data processing system according to claim 1, wherein the VR device is further configured to:
capturing a voice to be recognized of a user;
matching the voice to be recognized of the user with the voice stored in a pre-stored voice judgment database, and determining the target voice with the highest matching degree in the voice judgment database;
determining the target voice as user interaction voice;
the method comprises the steps of determining preset output information corresponding to user interaction voice based on a mapping relation between the preset user interaction voice and the preset output information which are stored in advance, wherein the preset output information comprises voice information and/or image information.
7. The behavioral data processing system according to claim 1, wherein the smart figure is further configured to:
sending the current terminal state information of the intelligent doll to the VR equipment; the terminal state information comprises intelligent doll position information, intelligent doll speed information and intelligent doll inertia direction information;
the VR device further to:
receiving the current terminal state information of the intelligent doll;
and generating the virtual image of the intelligent doll based on the terminal state information and preset virtual appearance information of the intelligent doll.
8. A method of behavioral data processing, the method comprising:
detecting interaction characteristic information made by a user on the intelligent doll through a built-in sensor; the interactive characteristic information comprises running state information and triggering data of the sensor; the running state information comprises the number of triggered sensors and the position information of the triggered sensors in one interactive action;
sending the interaction feature information to a Virtual Reality (VR) device, so that the VR device determines the interaction behavior of the user based on the interaction feature information and a pre-stored interaction judgment database, and outputs corresponding voice information and/or image information based on the interaction behavior of the user;
wherein the VR device further captures to-be-recognized limb action information of the user; when contact between a limb of the user and the intelligent doll is detected, the to-be-recognized limb action information is matched with limb action feature information stored in a pre-stored limb action judgment database, and the target limb action with the highest matching degree in the limb action judgment database is determined; the to-be-recognized limb action information comprises limb trajectory information and limb movement speed information of the user within a preset time period before the contact, and position information of the contact between the limb and the intelligent doll;
the target limb action is determined as the user interaction limb action;
and preset output information corresponding to the user interaction limb action is determined based on a pre-stored mapping relation between preset user interaction limb actions and preset output information, wherein the preset output information comprises voice information and/or image information.
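A sketch of the limb-action matching step under the same caveats: the template database, distance measure, and field names are invented for illustration and are not the patented matching procedure:

```python
# Hypothetical sketch: compare the captured trajectory, movement speed and
# contact position against stored templates; the closest template wins.
LIMB_ACTION_DB = [
    {"action": "stroke", "trajectory": [(0, 0), (1, 0), (2, 0)], "speed": 0.3, "contact": "head"},
    {"action": "poke",   "trajectory": [(0, 0), (0, 1), (0, 2)], "speed": 1.2, "contact": "arm"},
]

def distance(sample, template):
    traj_d = sum(abs(a - c) + abs(b - d)
                 for (a, b), (c, d) in zip(sample["trajectory"], template["trajectory"]))
    speed_d = abs(sample["speed"] - template["speed"])
    contact_d = 0.0 if sample["contact"] == template["contact"] else 1.0
    return traj_d + speed_d + contact_d

def match_limb_action(sample):
    """Return the template action with the smallest distance (highest match)."""
    return min(LIMB_ACTION_DB, key=lambda t: distance(sample, t))["action"]

sample = {"trajectory": [(0, 0), (1, 0), (2, 1)], "speed": 0.4, "contact": "head"}
print(match_limb_action(sample))  # -> "stroke"
```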
9. The behavior data processing method according to claim 8, wherein the detecting, through the built-in sensor, of the interaction feature information made by the user on the intelligent doll comprises:
receiving pressure data uploaded by a built-in pressure sensor; and/or receiving temperature data uploaded by a built-in temperature sensor; and/or receiving humidity data uploaded by a built-in humidity sensor.
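A small sketch of collecting pressure, temperature and humidity readings into one interaction-feature payload; the sensor driver interface shown is assumed, not specified by the method:

```python
# Hypothetical sketch: FakeSensor stands in for a real sensor driver.
class FakeSensor:
    def __init__(self, sensor_id, value):
        self.sensor_id = sensor_id
        self._value = value
    def read(self):
        return self._value

def read_sensors(pressure, temperature, humidity):
    """Collect the latest reading of every built-in sensor into one payload."""
    return {
        "pressure":    {s.sensor_id: s.read() for s in pressure},
        "temperature": {s.sensor_id: s.read() for s in temperature},
        "humidity":    {s.sensor_id: s.read() for s in humidity},
    }

payload = read_sensors([FakeSensor("press_01", 3.2)],
                       [FakeSensor("temp_01", 36.4)],
                       [FakeSensor("hum_01", 0.45)])
print(payload)
```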
10. The behavior data processing method according to claim 9, wherein the interaction feature information further comprises interaction position information of the interaction action performed by the user on the intelligent doll;
and the detecting, through the built-in sensor, of the interaction feature information made by the user on the intelligent doll further comprises:
determining the identification of each sensor that uploads the interaction feature information, and determining the position information of each triggered sensor based on the determined sensor identifications;
and determining, according to the determined position information of each triggered sensor, the interaction position information of the interaction action performed by the user on the intelligent doll.
11. A method of behavioral data processing, the method comprising:
receiving interactive characteristic information sent by the intelligent doll; the interactive characteristic information comprises running state information and triggering data of the sensor; the running state information comprises the number of triggered sensors and the position information of the triggered sensors in one interactive action;
determining the interaction behavior of the user based on the interaction feature information and a pre-stored interaction judgment database;
outputting corresponding voice information and/or image information based on the interaction behavior of the user;
capturing to-be-recognized limb action information of the user; when contact between a limb of the user and the intelligent doll is detected, matching the to-be-recognized limb action information with limb action feature information stored in a pre-stored limb action judgment database, and determining the target limb action with the highest matching degree in the limb action judgment database; the to-be-recognized limb action information comprises limb trajectory information and limb movement speed information of the user within a preset time period before the contact, and position information of the contact between the limb and the intelligent doll;
determining the target limb action as the user interaction limb action;
and determining preset output information corresponding to the user interaction limb action based on a pre-stored mapping relation between preset user interaction limb actions and preset output information, wherein the preset output information comprises voice information and/or image information.
CN201910242486.2A 2019-03-28 2019-03-28 Behavior data processing system and method Active CN110209264B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910242486.2A CN110209264B (en) 2019-03-28 2019-03-28 Behavior data processing system and method

Publications (2)

Publication Number Publication Date
CN110209264A (en) 2019-09-06
CN110209264B (en) 2022-07-05

Family

ID=67785235

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910242486.2A Active CN110209264B (en) 2019-03-28 2019-03-28 Behavior data processing system and method

Country Status (1)

Country Link
CN (1) CN110209264B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN206906843U (en) * 2017-06-30 2018-01-19 深圳光启合众科技有限公司 The control device and robot of robot
CN107643820A (en) * 2016-07-20 2018-01-30 郎焘 The passive humanoid robots of VR and its implementation method
CN107639620A (en) * 2017-09-29 2018-01-30 西安交通大学 A kind of control method of robot, body feeling interaction device and robot
KR20180077974A (en) * 2016-12-29 2018-07-09 유한책임회사 매드제너레이터 Vr-robot synchronize system and method for providing feedback using robot

Also Published As

Publication number Publication date
CN110209264A (en) 2019-09-06

Similar Documents

Publication Publication Date Title
TWI751161B (en) Terminal equipment, smart phone, authentication method and system based on face recognition
JP2020507835A5 (en)
CN111324253B (en) Virtual article interaction method and device, computer equipment and storage medium
US20140351720A1 (en) Method, user terminal and server for information exchange in communications
CN109521927B (en) Robot interaction method and equipment
CN109872297A (en) Image processing method and device, electronic equipment and storage medium
CN111368796B (en) Face image processing method and device, electronic equipment and storage medium
CN110609620A (en) Human-computer interaction method and device based on virtual image and electronic equipment
CN109254650B (en) Man-machine interaction method and device
CN110781881B (en) A method, device, equipment and storage medium for identifying sports scores in video
CN111240482B (en) Special effect display method and device
JP2023524119A (en) Facial image generation method, device, electronic device and readable storage medium
KR20210124307A (en) Interactive object driving method, apparatus, device and recording medium
KR20210084444A (en) Gesture recognition method and apparatus, electronic device and recording medium
KR101452359B1 (en) Method for providing of toy assembly video
CN110688319A (en) Application keep-alive capability test method and related device
JP2020201926A (en) System and method for generating haptic effect based on visual characteristics
CN114063784A (en) Simulated virtual XR BOX somatosensory interaction system and method
CN114513694B (en) Score determination method, device, electronic equipment and storage medium
CN108537149B (en) Image processing method, image processing device, storage medium and electronic equipment
CN110209264B (en) Behavior data processing system and method
CN111176435A (en) A human-computer interaction method and speaker based on user behavior
CN107885318A (en) A kind of virtual environment exchange method, device, system and computer-readable medium
CN112698747A (en) Robot touch interaction method and robot
CN111651054A (en) Sound effect control method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant