CN109166365A - The method and system of more mesh robot language teaching - Google Patents
The method and system of more mesh robot language teaching Download PDFInfo
- Publication number
- CN109166365A CN109166365A CN201811109873.0A CN201811109873A CN109166365A CN 109166365 A CN109166365 A CN 109166365A CN 201811109873 A CN201811109873 A CN 201811109873A CN 109166365 A CN109166365 A CN 109166365A
- Authority
- CN
- China
- Prior art keywords
- information
- target user
- video
- video information
- mesh
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 57
- 230000019771 cognition Effects 0.000 claims abstract description 36
- 230000008569 process Effects 0.000 claims abstract description 36
- 230000009885 systemic effect Effects 0.000 claims abstract description 16
- 238000004364 calculation method Methods 0.000 claims abstract description 13
- 230000008921 facial expression Effects 0.000 claims abstract description 9
- 238000012545 processing Methods 0.000 claims description 15
- 238000004891 communication Methods 0.000 claims description 3
- 238000003384 imaging method Methods 0.000 claims description 2
- 230000000875 corresponding effect Effects 0.000 description 44
- 230000001815 facial effect Effects 0.000 description 10
- 210000005252 bulbus oculi Anatomy 0.000 description 9
- 230000005540 biological transmission Effects 0.000 description 7
- 238000010586 diagram Methods 0.000 description 6
- 210000001508 eye Anatomy 0.000 description 5
- 230000000007 visual effect Effects 0.000 description 5
- 210000003128 head Anatomy 0.000 description 4
- 238000012546 transfer Methods 0.000 description 4
- 206010057315 Daydreaming Diseases 0.000 description 3
- 230000009471 action Effects 0.000 description 3
- 238000006243 chemical reaction Methods 0.000 description 3
- 230000001149 cognitive effect Effects 0.000 description 3
- 230000006870 function Effects 0.000 description 3
- 238000011835 investigation Methods 0.000 description 3
- 230000007613 environmental effect Effects 0.000 description 2
- 239000000284 extract Substances 0.000 description 2
- 210000001747 pupil Anatomy 0.000 description 2
- 230000006978 adaptation Effects 0.000 description 1
- 238000013473 artificial intelligence Methods 0.000 description 1
- 230000006399 behavior Effects 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000004397 blinking Effects 0.000 description 1
- 230000008878 coupling Effects 0.000 description 1
- 238000010168 coupling process Methods 0.000 description 1
- 238000005859 coupling reaction Methods 0.000 description 1
- 238000007405 data analysis Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 239000003205 fragrance Substances 0.000 description 1
- 230000014509 gene expression Effects 0.000 description 1
- 230000010365 information processing Effects 0.000 description 1
- 238000009434 installation Methods 0.000 description 1
- 230000004807 localization Effects 0.000 description 1
- 239000000203 mixture Substances 0.000 description 1
- 235000012736 patent blue V Nutrition 0.000 description 1
- 150000003003 phosphines Chemical class 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
- 230000007704 transition Effects 0.000 description 1
- 238000013519 translation Methods 0.000 description 1
- 230000014616 translation Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/06—Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
- G09B5/065—Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Business, Economics & Management (AREA)
- Physics & Mathematics (AREA)
- Educational Administration (AREA)
- Educational Technology (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Manipulator (AREA)
- Electrically Operated Instructional Devices (AREA)
Abstract
Present invention discloses a kind of methods of more mesh robot language teaching, comprising steps of obtaining the video information of multiple angles in target user's learning process;The video information of the multiple angle is subjected to overlapping calculation, obtains the characteristic information of the target user, the characteristic information includes the phonetic feature of the target user, limbs feature and facial expression feature;Corresponding cognition is searched in systemic presupposition information according to the characteristic information as a result, wherein including various characteristic informations, and cognition result corresponding with various characteristic informations in the systemic presupposition information;Execute the corresponding instruction of the cognition result.The video information of multiple target users can not only be obtained simultaneously, moreover it is possible to be made the characteristic information more true and accurate of same target user, and then target user can be instructed to learn in time, improve the efficiency of teaching of robot.
Description
Technical field
The present invention relates to field of artificial intelligence more particularly to a kind of method that more mesh robot languages are imparted knowledge to students and it is
System.
Background technique
According to the quantity and characteristic of visual sensor, the mobile robotic vision system of mainstream has monocular vision, double at present
Item stereo vision, multi-vision visual and panoramic vision etc..
Single camera vision system projects on N-dimensional image in imaging process from three-dimensional objective world, has lost depth information.
Binocular vision system is made of two video cameras, and the depth information of scene is obtained using principle of triangulation, and
3D shape and the position of surrounding scenes can be rebuild, similar to the stereoscopic function of human eye, principle is simple.Binocular vision system needs
The spatial relation being accurately known that between two video cameras, and the 3D information of scene environment needs two video cameras never
Same angle, while the two images of Same Scene are shot, and carry out complicated matching, it could accurately obtain stereo visual system energy
Enough three-dimensional informations for relatively accurately restoring visual scene, localization for Mobile Robot navigation, avoidance and in terms of
It is widely used.However, stereo visual system corresponding point matching there is a problem of it is larger.
In existing teaching robot's system, existing can not observe that two and two or more students' is instant in the same time
Reaction acquires inaccuracy to the information data of student, to cannot find what a certain position or multidigit student were shown in time
Study puzzlement, and then under causing efficiency of teaching relatively low.
Therefore, it is necessary to more mesh machines that one kind can accurately obtain multiple target users and accurate information data
People.
Summary of the invention
The main object of the present invention is to provide one kind can accurately obtain multiple target users and accurate information data
More mesh robot languages teaching method and system.
A kind of method that the present invention proposes more mesh robot language teaching, comprising steps of
Obtain the video information of multiple angles in target user's learning process;
The video information of the multiple angle is subjected to overlapping calculation, obtains the characteristic information of the target user, it is described
Characteristic information includes the phonetic feature of the target user, limbs feature and facial expression feature;
Corresponding cognition is searched in systemic presupposition information according to the characteristic information as a result, wherein the systemic presupposition is believed
It include various characteristic informations, and cognition result corresponding with various characteristic informations in breath;
Execute the corresponding instruction of the cognition result.
Further, described the step of obtaining the video information of multiple angles in target user's learning process, comprising:
The video information in multiple target user's learning processes is obtained respectively, wherein the video information includes different angle
Video;
By the video storage of the different angle containing same target user in same reservoir, the target user is formed
The video information of multiple angles in learning process.
Further, described the step of executing the cognition result corresponding instruction, comprising:
Identify the corresponding classes of instructions of the cognition result;
If the first instruction, then control system enters next teaching link;
If the second instruction, then the teaching link is repeated, and issue suggestion voice information.
It further, further include the location information of the target user, the repetition teaching ring in the characteristic information
Section, and the step of issuing suggestion voice information, comprising:
The location information of the target user is obtained according to the positional information;
Suggestion voice information is issued according to the location information directive property.
Further, the robot includes trick photographic device, the side that the trick photographic device passes through wireless connection
Formula and more mesh robot bodies establish communication connection, described to obtain the video information of multiple angles in target user's learning process
Step, further includes:
The long-distance video information being located in target user's learning process of second area is obtained by trick photographic device,
Described in second area be more mesh robot bodies region except region;
The long-distance video information is stored in remote storage device, the video information of remote object user is formed.
The system that the present invention proposes a kind of more mesh robot language teaching, comprising:
Photographing module, for obtaining the video information of multiple angles in target user's learning process;
Processing module obtains the target user's for the video information of the multiple angle to be carried out overlapping calculation
Characteristic information, the characteristic information include the phonetic feature of target user, limbs feature and facial expression feature;
Service module searches corresponding cognition according to the characteristic information as a result, wherein described in systemic presupposition information
It include various characteristic informations, and cognition result corresponding with various characteristic informations in systemic presupposition information;
Execution module, for executing the corresponding instruction of the cognition result.
Further, the photographing module includes:
First camera unit, for obtaining the video information in multiple target user's learning processes respectively, wherein the view
Frequency information includes the video of different angle;
First storage unit, by the video storage of the different angle containing same target user in same reservoir, shape
At the video information of multiple angles in target user's learning process.
Further, the execution module includes:
First teaching unit, for identification corresponding classes of instructions of the cognition result;
First execution unit, if the first instruction, then control system enters next teaching link;
Second execution unit then repeats the teaching link, and issue suggestion voice information if the second instruction.
Further, second execution unit includes:
Locator unit, for obtaining the location information of the target user according to the positional information;
Subelement is prompted, for issuing suggestion voice information according to the location information directive property.
Further, the photographing module further include:
Second camera unit is located in target user's learning process of second area for being obtained by trick photographic device
Long-distance video information, wherein the second area be more mesh robot bodies region except region;
Second storage unit forms remote object and uses for the long-distance video information to be stored in remote storage device
The video information at family.
A kind of method and system of more mesh robot language teaching of the present invention have the beneficial effect that, by obtaining multiple angles
Video information, by image overlapping calculation handle, the video information of multiple target users can not only be obtained simultaneously, moreover it is possible to make
The characteristic information of same target user more true and accurate, and then target user can be instructed to learn in time, improve machine
The efficiency of teaching of people.
Detailed description of the invention
Fig. 1 is the step schematic diagram of the method for the more mesh robot language teaching of one embodiment of the invention;
Fig. 2 is another step schematic diagram of the method for the more mesh robot language teaching of one embodiment of the invention;
Fig. 3 is the system structure diagram of the more mesh robot language teaching of one embodiment of the invention;
Fig. 4 is the structural schematic diagram of one embodiment of the invention photographing module;
Fig. 5 is the structural schematic diagram of another embodiment of the present invention photographing module;
Fig. 6 is the system structure diagram of the more mesh robots of another embodiment of the present invention.
The embodiments will be further described with reference to the accompanying drawings for the realization, the function and the advantages of the object of the present invention.
Specific embodiment
It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention, it is not intended to limit the present invention.
Those skilled in the art of the present technique are appreciated that unless expressly stated, singular " one " used herein, " one
It is a ", " described " " above-mentioned " and "the" may also comprise plural form.It is to be further understood that making in specification of the invention
Wording " comprising " refers to that there are the feature, integer, step, operation, element, unit, module and/or components, but simultaneously
Do not preclude the presence or addition of other one or more features, integer, step, operation, element, unit, module, component and/or it
Group.It should be understood that when we say that an element is " connected " or " coupled " to another element, it can be directly connected to or couple
To other elements, or there may also be intermediary elements.In addition, " connection " used herein or " coupling " may include wirelessly connecting
It connects or wirelessly couples.Wording "and/or" used herein includes one or more associated listing the whole or any of item
Unit and all combination.
Those skilled in the art of the present technique are appreciated that unless otherwise defined, all terms used herein (including technology art
Language and scientific term), there is meaning identical with the general understanding of those of ordinary skill in fields of the present invention.Should also
Understand, those terms such as defined in the general dictionary, it should be understood that have in the context of the prior art
The consistent meaning of meaning, and unless idealization or meaning too formal otherwise will not be used by specific definitions as here
To explain.
As shown in Figs. 1-2, the method for proposing a kind of more mesh robot language teaching of one embodiment of the present invention, including
Step:
S1 obtains the video information of multiple angles in target user's learning process;
The present embodiment is by the multiple a picture pick-up devices of different direction installation with robot, for acquiring target user's
Video information, in one embodiment, specially five picture pick-up devices, respectively the first video camera, the second video camera, third camera shooting
Machine, the 4th video camera and the 5th video camera, the first video camera setting is in the front of robot, and the second video camera setting is in robot
The back side, third video camera setting sets in the left side of robot, the 4th video camera setting in the right side of robot, the 5th video camera
It is placed in the top of robot, video camera is installed by the different direction in robot, can be obtained all in the same space
The video information of angle, first video camera, the second video camera, third video camera, the 4th video camera and the 5th video camera can be with
Being set as wide angle cameras may be alternatively provided as general camera, can be arranged according to different demands.The video information contains image
Information and voice messaging.Image information contains robot in teaching process, the limb action and facial expression of target user,
Voice messaging contains robot in teaching process, the voice messaging of target user, the facial information include eye information and
Eyeball information.
The video information of multiple angles is carried out overlapping calculation, obtains the characteristic information of target user, characteristic information packet by S2
Include phonetic feature, limbs feature and the facial expression feature of target user;
The collected video information of multiple video cameras is subjected to overlapping calculation in the present embodiment, which refers to acquisition
The video information of multiple a students in the same area establishes just identical target user according between different video informations
True corresponding relationship, the same target user observed to multiple video cameras are indicated using the same label, are established and are corresponded to
Target handoff process.The video information that the different angle of the same target user can be obtained collects each video camera
Video information carry out overlapping calculation after, according to grey scale carry out identifying processing, obtain the characteristic information of the target user, this reality
Applying the characteristic information in example is the eye information in the facial information of target user, further according to eye information extraction target user's
Expressive features may be selected in the characteristic information of eyeball information, the facial area, have both judged whether the facial area of the target user is deposited
In the expressions such as stupefied, in one embodiment to extract the eyeball feature of target user's facial area in preset time range,
The characteristic information of target user can be accurately extracted using eyeball feature, this feature information more can be showed accurately
State of the target user in teaching process out can have more the feedback information of target user in time convenient for teaching robot, refer to
It leads target user to learn, improves the efficiency of teaching of robot.
S3 searches corresponding cognition according to characteristic information as a result, wherein in systemic presupposition information in systemic presupposition information
Including various characteristic informations, and cognition result corresponding with various characteristic informations;
In the present embodiment, for the ease of judging whether the attention of the student is concentrated, one can be pre-established and preset
Special information collection, that is to say, that include several eye features for being able to reflect attention concentration in the presupposed information, pass through feature
Whether matched mode is concentrated judging the attention of the student or to the cognitive assessments of education informations, video camera is collected
Video information carry out data analysis, corresponding parameter is compared with the parameter area of systemic presupposition, obtains target user
Cognition as a result, the systemic presupposition parameter include limbs information, facial expression and voice messaging, by carrying out figure to video information
As identification, in language education courses, by showing education informations, allows target user to become literate, judge the limb of target user
Body movement nods or shakes the head, or judges whether target user shows to recognize the behavior act of the text or voice letter
It ceases, when occurring the information of illiterate property such as " not knowing " in the language message of target user, equally can also determine target use
Family to the cognitive assessment of the text, in one embodiment by by preset time range eyeball feature and default feature carry out
Matching, when preset matching structure is to mismatch successfully, assert that the target user's is absent minded, attention of student collection
When middle, pupil is usually to stare at educational robot, and therefore, eyeball feature can be pupil orientation information, certainly, eyeball feature
Other information can be used, the present embodiment is without restriction to this.
It should be understood that, if matched in student's blink, will lead to since student is there may be blinking
The case where erroneous matching, occurs, to avoid the problem, in the present embodiment, it may also be used for extracts facial area in preset time range
Interior eyeball feature matches the eyeball feature in preset time range with default feature set, is in matching result
With it is unsuccessful when, assert that student's is absent minded.
S4 executes the corresponding instruction of cognition result.
There is the corresponding characteristic information of different situations in advance in the present embodiment system, which includes voice
Information, limbs information and facial information utilize the characteristic information and system of the target user that overlapping calculation and image procossing obtain
Preset characteristic information compares, and finds out cognitive assessment corresponding to individual features information, can be according to corresponding cognition
Situation executes corresponding instruction, judges that the limb action of target user is to shake the head or without limb action, then weigh in the present embodiment
Current education informations are shown again, and are issued language of instruction information and known that target user becomes literate.Target in one embodiment
The voice messaging of user is " not knowing ", and system repeats to show current education informations, and issues language of instruction information and know mesh
Mark user becomes literate.If the characteristic information that target user is fed back is the education informations for recognizing the link, system is jumped automatically
Next teaching link is gone to, which can be the identification link of next text, be also possible to teaching shape alternatively
Formula.
In one embodiment, the student it is absent minded when, can prejudge whether learning time is more than pre-
If time span, if being less than predetermined time period, it can be sent to the reminder feature and remind instruction, when the preset time is exceeded
Length then illustrates that student is likely to be at the fatigue phase, target user can be prompted to rest, and target user is made to suspend study.
It should be noted that reminding it, needing for the ease of in the scatterbrained situation of target user
Prompting message is issued, but the mode of broadcasting music and color conversion can be used in the concrete realization to be reminded, to stimulate
The learning interest of target user, that is to say, that can be reminded by modes such as music or color translations, specific real
In existing, the music can be the music for making us loosening, such as: brisk absolute music etc..It will be appreciated that robot is at work
Would generally shine (such as: current luminous color is white, i.e. sending white light), to prompt target user, it is in work shape
State, the color conversion 50 can convert current luminous color, such as: it is from white transition by current luminous color
Sky blue.
Certainly, in the concrete realization, other modes also can be used to be reminded, such as: fragrant (such as fragrance of a flower), machine
The modes such as people's deformation, the exchange of remote personnel video are realized that the present embodiment is without restriction to this.
In the present embodiment, the step of obtaining the video information of multiple angles in target user's learning process, comprising:
S101 obtains the video information in multiple target user's learning processes respectively, and wherein video information includes different angles
The video of degree;
In the present embodiment, by the way that multiple video cameras are arranged, video letter comprehensive in the same space can be collected
Breath, can detect teaching robot to the teaching affairs of multiple target users, and then in time to the feedback of teaching affairs simultaneously
Corresponding instruction is made, and then target user can be instructed to learn in time, improves the efficiency of teaching of robot.
S102 forms target and uses by the video storage of the different angle containing same target user in same reservoir
The video information of multiple angles in the learning process of family.
In the present embodiment, according to the collected video information of different cameras, image recognition is carried out, by the same target
The video information of user is stored in the same reservoir, and the video information containing same target user is carried out overlapping calculation,
Each target user's characteristic information can accurately be obtained.
In the present embodiment, the step of executing cognition result corresponding instruction, comprising:
S41, the corresponding classes of instructions of identification cognition result;
Video information is acquired to target user by multiple video cameras in the present embodiment, then weight is carried out to multiple video informations
It is folded to calculate, the characteristic information of corresponding target user is obtained, the characteristic information of the user and systemic presupposition characteristic information are carried out
Compare, is executed instruction accordingly.In the characteristic information of target user containing " not knowing ", " will not " negative of relevant nature
Word or shake the head etc. information when, then judge the cognition result of target user to contain in the characteristic information of target user not recognize
When the affirmative information of " I knows ", the voice messaging relevant nature nodded or issue the education informations, then judge target user's
Recognizing result is understanding.After judging the cognition result of target user, its corresponding instruction can be executed according to the cognition result.
S42, if the first instruction, then control system enters next teaching link;
In this embodiment, if the corresponding property of cognition result is to recognize the education informations, which is
Next teaching link is jumped to, system screen shows that next spoken and written languages identify target user, and believes by voice
Breath auxiliary mark user learns.
S43 then repeats the teaching link, and issue suggestion voice information if the second instruction.
In this embodiment, if the corresponding property of cognition result is not recognize the education informations, corresponding first instruction
To repeat to show the education informations, and pass through the study of voice prompting mode auxiliary mark user progress education informations.
In this embodiment, further include the location information of target user in characteristic information, repeat the teaching link, and issue and mention
The step of showing voice messaging, comprising:
S431 obtains the location information of target user according to location information;
S432 issues suggestion voice information according to location information directive property.
After video information by obtaining target user's different angle, characteristics of image is carried out to the video information of different angle
Matching, obtains the relative positional relationship of the target user and teaching robot, with the artificial origin of target machine, obtains target user
Relative coordinate values, teaching robot to do not recognize corresponding education informations target user carry out suggestion voice information auxiliary learn
When habit, which can be exported using the sound equipment output equipment of corresponding orientation, can reach and remind corresponding target
The function of user.
In the present embodiment, robot includes trick photographic device, trick photographic device by way of wireless connection with
The step of more mesh robot bodies establish communication connection, obtain the video information of multiple angles in target user's learning process, also
Include:
S111 is obtained the long-distance video being located in target user's learning process of second area by trick photographic device and believed
Breath, wherein second area is the region except more mesh robot bodies region;
Long-distance video information is stored in remote storage device by S112, forms the video information of remote object user.
In one embodiment, trick photographic device, the target in the available different zones of trick photographic device are equipped with
Collected video information is transmitted to control system by wireless transmission method by the instructional video information of user, trick photographic device
In system, which includes WiFi and bluetooth two ways.By the collected video information of trick photographic device
Carry out information processing, obtain the characteristic information of different zones target user, so as to be capable of tremendous range to target user
Study situation detected.
As in Figure 3-5, propose one embodiment of the present invention more mesh robot languages teaching system include:
Photographing module 1, for obtaining the video information of multiple angles in target user's learning process;
Processing module 2 obtains the feature letter of target user for the video information of multiple angles to be carried out overlapping calculation
Breath, characteristic information includes the phonetic feature of target user, limbs feature and facial expression feature;
Service module 3 searches corresponding cognition according to characteristic information as a result, wherein systemic presupposition in systemic presupposition information
It include various characteristic informations, and cognition result corresponding with various characteristic informations in information;
Execution module 4, for executing the corresponding instruction of cognition result.
In the present embodiment, photographing module 1 includes:
First camera unit 11, for obtaining the video information in multiple target user's learning processes respectively, wherein video
Information includes the video of different angle;
First storage unit 12, by the video storage of the different angle containing same target user in same reservoir,
Form the video information of multiple angles in target user's learning process.
In the present embodiment, execution module 4 includes:
First teaching unit recognizes the corresponding classes of instructions of result for identification;
First execution unit, if the first instruction, then control system enters next teaching link;
Second execution unit then repeats the teaching link, and issue suggestion voice information if the second instruction.
In the present embodiment, the second execution unit includes:
Locator unit, for obtaining the location information of target user according to location information;
Subelement is prompted, for issuing suggestion voice information according to location information directive property.
In the present embodiment, photographing module 1 further include:
Second camera unit 13, for obtaining the target user's learning process for being located at second area by trick photographic device
In long-distance video information, wherein second area be more mesh robot bodies region except region;
Second storage unit 14 forms remote object user for long-distance video information to be stored in remote storage device
Video information.
To be convenient for resource-sharing, in another embodiment, more mesh robots further include: long-range link block;
Long-range link block, for remotely being connect with other robot, to form block chain network.
It should be noted that block chain network is for region locating for robot, there is no limit, that is to say, that can not only make
Several robots in identical area form block chain network, can also make several robots composition block in different regions
Chain network.
To realize the shared of video pictures, in the present embodiment, the educational robot may also include that data transmission module and
Human-machine interface module;
The data transmission module, for obtaining target robot the first multimedia messages being played on, and by first
Multimedia messages are sent to processing module 2, and target machine artificially belongs to the robot of same block chain with more mesh robots;Processing
Module 2 is also used to the first multimedia Information Transfer to human-machine interface module;
Human-machine interface module, for playing the multimedia messages.
It should be noted that human-machine interface module is the module that robot is interacted with target user, it can be aobvious
Show the combination of device and keyboard, can also be touch screen, can also be other component certainly, the present embodiment is without restriction to this.
For the lookup convenient for carrying out multimedia messages according to the demand of target user, in the present embodiment, more mesh robots are also
It include: data memory module;
Human-machine interface module, is also used to receive the keyword of user's input, and the keyword is transmitted to processing module
2;
In the concrete realization, keyword can be inputted by keyboard, certainly, can also be used the modes such as voice carry out it is defeated
Enter, the present embodiment is without restriction to this.
It should be noted that the target user can be student, it can also be teacher certainly, the present embodiment does not limit this
System.
Processing module 2, is also used to search the second multimedia messages corresponding with keyword in data memory module, and by
Two multimedia messages feed back to human-machine interface module.
Due to the limited storage space of the data memory module of a robot, to guarantee the resource in block chain as far as possible
It is more, different multimedia messages can be stored in data memory module, that is to say, that processing module 2 is also used to keyword
It is forwarded to data transmission module;
Data transmission module is also used to search third multimedia messages corresponding with keyword in target robot, and will
Third multimedia messages are sent to processing module 2;
Processing module 2 is also used to third multimedia Information Transfer to human-machine interface module.
It is exchanged to be easy to implement remote speech, in the present embodiment, more mesh robots further include: connect with processing module 2
Voice module.
To be easy to implement movement, in the present embodiment, more mesh robots further include: the mobile module being connect with processing module 2
And vision module.
It is also settable to be connected with processing module 2 in the present embodiment it will be appreciated that for convenient for being managed to power supply
Power management module.
As shown in fig. 6, in one embodiment, more mesh robots are equipped with top view video camera, test video camera, forward looking camera
And hand-eye camera, the top formula video camera are fixedly installed on the top of more mesh robots, which is fixedly installed on more
The side of mesh robot, the forward looking camera are set to the positive portion of more mesh robots, which is equipped with images with trick
The groove of machine adaptation, the hand-eye camera are installed in the groove, and hand-eye camera and more mesh robots pass through wireless connection,
It is equipped with corresponding top view image processor corresponding to top formula video camera, is equipped with corresponding side view corresponding to side view video camera
As processor, correspond to forward looking camera and be equipped with corresponding forward sight image processor, corresponds to hand-eye camera equipped with right with it
The chiral phosphines guidance processor answered.Each processor is electrically connected primary server.Further include Context awareness, environmental positioning and
Robot controller, the Context awareness are used to carry out collected video information the identification of environment, the information recognized are passed
Primary server is transported to, environmental positioning is used to detect the relative positional relationship between each object of reference in video information, thus right
Target user and environment position, and robot controller is used for when being matched to corresponding instruction, hold according to corresponding instruction
The corresponding movement of row.The present embodiment investigates the character learning situation of two students simultaneously, constantly becomes on the display of more mesh robots
Different Chinese characters is presented dynamicly, such as: Heaven, Earth and Man, road, flower etc., the sedulous Chinese for staring at above-mentioned appearance of two students
Word indicates whether the seen Chinese character of understanding with " nodding " or " shaking the head ", and in the process, more mesh robots transfer its " top view
The first student that video camera " is aligned in two students transfers the second student that its " forward looking camera " is aligned in two students, and top view takes the photograph
Camera carries out the video information transmission of collected first classmate to top view image processor, top view image processor to the video
Processing, and extracts the facial area area information of first classmate, and forward looking camera is by the video information transmission of collected second classmate
To forward sight image processor, the facial area characteristic information of second classmate is extracted further according to forward sight image processor.By this feature
Information is transmitted to primary server, has preset feature set in primary server, and this feature collection includes several being able to reflect student couple
The characteristic information of Chinese character recognization situation passes through the facial area information for extracting processor and the progress of preset feature set
Match, can obtain first classmate and second classmate to the character learning situation of Chinese character, when more mesh robotic views to a wherein classmate are " late
After doubtful certain time " it " nods " again, when indicating to recognize seen Chinese character, internal processor is again presented related Chinese character
Once, to further confirm that whether two students really recognize related Chinese character, after investigation, more mesh robots pass through master
The structure of investigation is exported a investigation report by preset table by server, which is labeled with two video cameras and is seen
" situation of hesitating " of the student observed.
A kind of method and system of more mesh robot language teaching of the present invention, by obtaining the video information of multiple angles,
It is handled by image overlapping calculation, the video information of multiple target users can not only be obtained simultaneously, moreover it is possible to use same target
The characteristic information at family more true and accurate, and then target user can be instructed to learn in time, improve the teaching effect of robot
Rate.
The above description is only a preferred embodiment of the present invention, is not intended to limit the scope of the invention, all utilizations
Equivalent structure or equivalent flow shift made by description of the invention and accompanying drawing content is applied directly or indirectly in other correlations
Technical field, be included within the scope of the present invention.
Claims (10)
1. a kind of method of more mesh robot language teaching, which is characterized in that comprising steps of
Obtain the video information of multiple angles in target user's learning process;
The video information of the multiple angle is subjected to overlapping calculation, obtains the characteristic information of the target user, the feature
Information includes the phonetic feature of the target user, limbs feature and facial expression feature;
Corresponding cognition is searched in systemic presupposition information according to the characteristic information as a result, wherein in the systemic presupposition information
Including various characteristic informations, and cognition result corresponding with various characteristic informations;
Execute the corresponding instruction of the cognition result.
2. the method for more mesh robot language teaching according to claim 1, which is characterized in that the acquisition target user
In learning process the step of the video information of multiple angles, comprising:
The video information in multiple target user's learning processes is obtained respectively, wherein the video information includes the view of different angle
Frequently;
By the video storage of the different angle containing same target user in same reservoir, target user's study is formed
The video information of multiple angles in the process.
3. the method for more mesh robot language teaching according to claim 1, which is characterized in that described to execute the cognition
As a result the step of corresponding instruction, comprising:
Identify the corresponding classes of instructions of the cognition result;
If the first instruction, then control system enters next teaching link;
If the second instruction, then the teaching link is repeated, and issue suggestion voice information.
4. the method for more mesh robot language teaching according to claim 3, which is characterized in that in the characteristic information also
Location information including the target user, the repetition teaching link, and the step of issuing suggestion voice information, comprising:
The location information of the target user is obtained according to the positional information;
Suggestion voice information is issued according to the location information directive property.
5. the method for more mesh robot language teaching according to claim 1, which is characterized in that the robot includes hand
Eye imaging device, the trick photographic device establish communication connection with more mesh robot bodies by way of wireless connection, institute
State the step of obtaining the video information of multiple angles in target user's learning process, further includes:
The long-distance video information being located in target user's learning process of second area is obtained by trick photographic device, wherein institute
Stating second area is the region except more mesh robot bodies region;
The long-distance video information is stored in remote storage device, the video information of remote object user is formed.
6. a kind of system of more mesh robot language teaching characterized by comprising
Photographing module, for obtaining the video information of multiple angles in target user's learning process;
Processing module obtains the feature of the target user for the video information of the multiple angle to be carried out overlapping calculation
Information, the characteristic information include the phonetic feature of target user, limbs feature and facial expression feature;
Service module searches corresponding cognition according to the characteristic information as a result, the wherein system in systemic presupposition information
It include various characteristic informations, and cognition result corresponding with various characteristic informations in presupposed information;
Execution module, for executing the corresponding instruction of the cognition result.
7. the system of more mesh robot language teaching according to claim 6, which is characterized in that the photographing module packet
It includes:
First camera unit, for obtaining the video information in multiple target user's learning processes respectively, wherein the video is believed
Breath includes the video of different angle;
First storage unit forms institute by the video storage of the different angle containing same target user in same reservoir
State the video information of multiple angles in target user's learning process.
8. the system of more mesh robot language teaching according to claim 6, which is characterized in that the execution module packet
It includes:
First teaching unit, for identification corresponding classes of instructions of the cognition result;
First execution unit, if the first instruction, then control system enters next teaching link;
Second execution unit then repeats the teaching link, and issue suggestion voice information if the second instruction.
9. the system of more mesh robot language teaching according to claim 8, which is characterized in that second execution unit
Include:
Locator unit, for obtaining the location information of the target user according to the positional information;
Subelement is prompted, for issuing suggestion voice information according to the location information directive property.
10. the system of more mesh robot language teaching according to claim 6, which is characterized in that the photographing module is also
Include:
Second camera unit, it is remote in the target user's learning process for being located at second area for being obtained by trick photographic device
Journey video information, wherein the second area is the region except more mesh robot bodies region;
Second storage unit forms remote object user's for the long-distance video information to be stored in remote storage device
Video information.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811109873.0A CN109166365A (en) | 2018-09-21 | 2018-09-21 | The method and system of more mesh robot language teaching |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811109873.0A CN109166365A (en) | 2018-09-21 | 2018-09-21 | The method and system of more mesh robot language teaching |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109166365A true CN109166365A (en) | 2019-01-08 |
Family
ID=64879968
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811109873.0A Pending CN109166365A (en) | 2018-09-21 | 2018-09-21 | The method and system of more mesh robot language teaching |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109166365A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110009941A (en) * | 2019-04-12 | 2019-07-12 | 广东小天才科技有限公司 | Robot-assisted learning method and system |
CN111093113A (en) * | 2019-04-22 | 2020-05-01 | 广东小天才科技有限公司 | A method and electronic device for outputting video content |
CN111243353A (en) * | 2020-02-10 | 2020-06-05 | 南通大学 | A smart reading method |
CN111273558A (en) * | 2020-02-10 | 2020-06-12 | 南通大学 | An intelligent reading system |
CN112306832A (en) * | 2020-10-27 | 2021-02-02 | 北京字节跳动网络技术有限公司 | User state response method and device, electronic equipment and storage medium |
CN112330998A (en) * | 2020-11-18 | 2021-02-05 | 托普爱英(北京)科技有限公司 | Learning support method and learning support device |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN201304244Y (en) * | 2008-11-14 | 2009-09-09 | 成都绿芽科技发展有限公司 | Multifunctional robot |
CN102446428A (en) * | 2010-09-27 | 2012-05-09 | 北京紫光优蓝机器人技术有限公司 | Robot-based interactive learning system and interactive method thereof |
CN102905094A (en) * | 2012-10-26 | 2013-01-30 | 鸿富锦精密工业(深圳)有限公司 | Voice-activated TV set and method for improving voice reception effect |
CN105205455A (en) * | 2015-08-31 | 2015-12-30 | 李岩 | Liveness detection method and system for face recognition on mobile platform |
CN107203953A (en) * | 2017-07-14 | 2017-09-26 | 深圳极速汉语网络教育有限公司 | It is a kind of based on internet, Expression Recognition and the tutoring system of speech recognition and its implementation |
CN107293236A (en) * | 2017-07-27 | 2017-10-24 | 耿凯 | The intelligent display device of adaptive different user |
CN107369341A (en) * | 2017-06-08 | 2017-11-21 | 深圳市科迈爱康科技有限公司 | Educational robot |
CN107544266A (en) * | 2016-06-28 | 2018-01-05 | 广州零号软件科技有限公司 | Health Care Services robot |
CN108537321A (en) * | 2018-03-20 | 2018-09-14 | 北京智能管家科技有限公司 | A kind of robot teaching's method, apparatus, server and storage medium |
-
2018
- 2018-09-21 CN CN201811109873.0A patent/CN109166365A/en active Pending
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN201304244Y (en) * | 2008-11-14 | 2009-09-09 | 成都绿芽科技发展有限公司 | Multifunctional robot |
CN102446428A (en) * | 2010-09-27 | 2012-05-09 | 北京紫光优蓝机器人技术有限公司 | Robot-based interactive learning system and interactive method thereof |
CN102905094A (en) * | 2012-10-26 | 2013-01-30 | 鸿富锦精密工业(深圳)有限公司 | Voice-activated TV set and method for improving voice reception effect |
CN105205455A (en) * | 2015-08-31 | 2015-12-30 | 李岩 | Liveness detection method and system for face recognition on mobile platform |
CN107544266A (en) * | 2016-06-28 | 2018-01-05 | 广州零号软件科技有限公司 | Health Care Services robot |
CN107369341A (en) * | 2017-06-08 | 2017-11-21 | 深圳市科迈爱康科技有限公司 | Educational robot |
CN107203953A (en) * | 2017-07-14 | 2017-09-26 | 深圳极速汉语网络教育有限公司 | It is a kind of based on internet, Expression Recognition and the tutoring system of speech recognition and its implementation |
CN107293236A (en) * | 2017-07-27 | 2017-10-24 | 耿凯 | The intelligent display device of adaptive different user |
CN108537321A (en) * | 2018-03-20 | 2018-09-14 | 北京智能管家科技有限公司 | A kind of robot teaching's method, apparatus, server and storage medium |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110009941A (en) * | 2019-04-12 | 2019-07-12 | 广东小天才科技有限公司 | Robot-assisted learning method and system |
CN111093113A (en) * | 2019-04-22 | 2020-05-01 | 广东小天才科技有限公司 | A method and electronic device for outputting video content |
CN111243353A (en) * | 2020-02-10 | 2020-06-05 | 南通大学 | A smart reading method |
CN111273558A (en) * | 2020-02-10 | 2020-06-12 | 南通大学 | An intelligent reading system |
CN112306832A (en) * | 2020-10-27 | 2021-02-02 | 北京字节跳动网络技术有限公司 | User state response method and device, electronic equipment and storage medium |
CN112330998A (en) * | 2020-11-18 | 2021-02-05 | 托普爱英(北京)科技有限公司 | Learning support method and learning support device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109166365A (en) | The method and system of more mesh robot language teaching | |
CN108427910B (en) | Deep neural network AR sign language translation learning method, client and server | |
Breazeal et al. | Challenges in building robots that imitate people | |
CN103460256B (en) | In Augmented Reality system, virtual image is anchored to real world surface | |
JP6722786B1 (en) | Spatial information management device | |
US11132845B2 (en) | Real-world object recognition for computing device | |
KR102585311B1 (en) | Non-face-to-face real-time education method that uses 360-degree images and HMD, and is conducted within the metaverse space | |
CN109545003B (en) | Display method, display device, terminal equipment and storage medium | |
CN110969905A (en) | Remote teaching interaction and teaching aid interaction system for mixed reality and interaction method thereof | |
JP6094131B2 (en) | Education site improvement support system, education site improvement support method, information processing device, communication terminal, control method and control program thereof | |
CN107656505A (en) | Use the methods, devices and systems of augmented reality equipment control man-machine collaboration | |
CN107369341A (en) | Educational robot | |
CN108303994A (en) | Team control exchange method towards unmanned plane | |
CN109376737A (en) | Method and system for assisting user in solving learning problem | |
CN109582123B (en) | Information processing device, information processing system and information processing method | |
CN106409033A (en) | Remote teaching assisting system and remote teaching method and device for system | |
Li et al. | Action recognition based on multimode fusion for VR online platform | |
CN112489138A (en) | Target situation information intelligent acquisition system based on wearable equipment | |
CN110841266A (en) | Auxiliary training system and method | |
Banes et al. | The potential evolution of universal design for learning (UDL) through the lens of technology innovation | |
Gandhi et al. | A CMUcam5 computer vision based arduino wearable navigation system for the visually impaired | |
CN106708266A (en) | AR action correction projection method and system based on binocular gesture recognition | |
CN112748800B (en) | Intelligent glove-based experimental scene perception interaction method | |
CN113010009A (en) | Object sharing method and device | |
Tan et al. | Impacts of Teaching towards Training Gesture Recognizers for Human-Robot Interaction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20190108 |