CN106055105A - Robot and man-machine interactive system - Google Patents
- Publication number
- CN106055105A CN106055105A CN201610389676.3A CN201610389676A CN106055105A CN 106055105 A CN106055105 A CN 106055105A CN 201610389676 A CN201610389676 A CN 201610389676A CN 106055105 A CN106055105 A CN 106055105A
- Authority
- CN
- China
- Prior art keywords
- robot
- behavior
- expression
- processing means
- sensor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J11/00—Manipulators not otherwise provided for
- B25J11/0005—Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
- B25J11/001—Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means with emotions simulating means
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/14—Digital output to display device ; Cooperation and interconnection of the display device with other functional units
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- General Health & Medical Sciences (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Toys (AREA)
Abstract
The invention provides a robot and a man-machine interactive system. The robot comprises a processing device, at least one sensor, a pronunciation device, and a face display screen, the pronunciation device and the face display screen each being connected to the processing device. The sensor detects behaviors performed on the robot by users; the processing device determines, according to a detected behavior, the voice and expression corresponding to that behavior; the pronunciation device plays the corresponding voice; and the face display screen displays the corresponding expression. The robot can thus communicate with users through voice, expressions, and the like, increasing the emotional exchange between users and the robot and improving the user experience.
Description
Technical field
Embodiments of the present invention relate to the field of robotics, and in particular to a robot and a man-machine interactive system.
Background art
With the development of science and technology, robots have been put to use in many industries such as healthcare, catering, and construction, and people's requirements for robots are becoming ever higher.
At present, robots are already able to carry out simple physical interaction with people, for example by shaking hands, waving, or nodding. However, an existing robot can only perform simple actions; like a cold machine, it cannot hold a person's interest for long, and the user experience is poor.
Summary of the invention
Embodiments of the present invention provide a robot and a man-machine interactive system, so that the robot can communicate with people through voice, expressions, and the like, increasing the emotional exchange between people and the robot and improving the user experience.
In one aspect, an embodiment of the present invention provides a robot, comprising: a processing device, at least one sensor, and a pronunciation device and a face display screen each connected to the processing device.
The sensor is configured to detect a behavior performed on the robot by a user.
The processing device is configured to determine, according to the behavior, a voice and an expression corresponding to the behavior.
The pronunciation device is configured to play the voice corresponding to the behavior.
The face display screen is configured to display the expression corresponding to the behavior.
In an embodiment of the present invention, each behavior corresponds to at least one voice and one expression.
In an embodiment of the present invention, the processing device is further configured to set the correspondence between behaviors and voices and expressions.
The processing device determining, according to the behavior, the voice and expression corresponding to the behavior includes: the processing device, based on the set correspondence, determining according to the behavior the voice and expression corresponding to the behavior.
In an embodiment of the present invention, the robot further includes a memory connected to the processing device; the memory is configured to store the correspondence and the voices and expressions corresponding to the behaviors.
In an embodiment of the present invention, the number of pronunciation devices is greater than or equal to one.
In an embodiment of the present invention, the behavior is approaching, contacting, touching, patting, striking, or a speech act.
In an embodiment of the present invention, the sensor is an ultrasonic sensor, a laser sensor, an infrared sensor, or a speech sensor.
In an embodiment of the present invention, the pronunciation device is a speaker.
In an embodiment of the present invention, the processing device is integrated inside the sensor.
In another aspect, the present invention further provides a man-machine interactive system, including at least one robot as described in any of the above embodiments.
With the robot and man-machine interactive system provided by these embodiments, when the sensor detects a behavior performed on the robot by a user, the processing device determines the voice and expression corresponding to that behavior, the pronunciation device plays the voice, and the face display screen shows the expression. The robot can thus communicate with people through voice, expressions, and the like, increasing the emotional exchange between people and the robot and improving the user experience.
Brief description of the drawings
To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Evidently, the drawings described below show some embodiments of the present invention; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a structural schematic diagram of a robot provided by an embodiment of the present invention;
Fig. 2 is a structural schematic diagram of a robot provided by another embodiment of the present invention.
Description of reference numerals:
1: processing device;
2: sensor;
3: pronunciation device;
4: face display screen;
5: memory.
Detailed description of the embodiments
To make the purposes, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. Evidently, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art from these embodiments without creative effort fall within the scope of protection of the present invention.
Fig. 1 is a structural schematic diagram of a robot provided by an embodiment of the present invention. As shown in Fig. 1, the robot includes a processing device 1, at least one sensor 2, and a pronunciation device 3 and a face display screen 4 each connected to the processing device 1. The sensor 2 detects a behavior performed on the robot by a user; the processing device 1 determines, according to the behavior, a voice and an expression corresponding to the behavior; the pronunciation device 3 plays the voice corresponding to the behavior; and the face display screen 4 displays the expression corresponding to the behavior.
In the present embodiment, when the sensor 2 detects a behavior performed on the robot by a user, the processing device 1 determines the voice and expression corresponding to that behavior, the pronunciation device 3 plays the voice, and the face display screen 4 shows the expression. For example, when a person reaches out a hand toward the robot's belly button, the sensor detects that the person's behavior is "approaching the belly button", and the processing device determines the preset voice and expression matching that behavior and outputs them through the pronunciation device and the face display screen; e.g., the pronunciation device plays the voice "Don't touch my belly, I get shy", and the face display screen shows a shy expression.
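The behavior-to-response lookup described above can be sketched as a small dispatch table. This is a hypothetical illustration only; the behavior names, voice lines, and expression labels are invented for the example and are not specified by the patent:

```python
# Hypothetical sketch of the behavior -> (voice, expression) lookup performed
# by the processing device. All names and responses are illustrative only.
RESPONSES = {
    "approach_belly": ("Don't touch my belly, I get shy!", "shy"),
    "pat_head": ("Hee hee, that tickles!", "happy"),
    "strike": ("Ouch! Please be gentle.", "sad"),
}

def respond(behavior: str) -> tuple[str, str]:
    """Return the (voice, expression) pair for a detected behavior."""
    # Unknown behaviors fall back to a neutral default response.
    return RESPONSES.get(behavior, ("Hello!", "neutral"))

if __name__ == "__main__":
    voice, expression = respond("approach_belly")
    print("play voice:", voice)          # routed to the pronunciation device
    print("show expression:", expression)  # routed to the face display screen
```

In a real robot the string placeholders would be replaced by audio clips and display frames, but the control flow (detect, look up, play, display) would be the same.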
It should be noted that only one sensor is shown schematically in the present embodiment; a robot may include multiple sensors, the types of the sensors may be the same or different, and sensors may be selected according to actual needs. The present invention places no limitation on this.
With the robot provided by the present embodiment, when the sensor detects a behavior performed on the robot by a user, the processing device determines the voice and expression corresponding to that behavior, the pronunciation device plays the voice, and the face display screen shows the expression, so that the robot can communicate with people through voice, expressions, and the like, increasing the emotional exchange between people and the robot and improving the user experience.
Optionally, each behavior corresponds to at least one voice and one expression.
In the present embodiment, each behavior may correspond to one voice and one expression, or to multiple voices and multiple expressions. When a user performs a behavior on the robot, the pronunciation device can play several segments of voice, and the face display screen can display several expressions in succession, making the robot's speech and expressions richer and more lively.
Optionally, the processing device 1 is further configured to set the correspondence between behaviors and voices and expressions. The processing device 1 determining, according to the behavior, the voice and expression corresponding to the behavior includes: the processing device 1, based on the set correspondence, determining according to the behavior the voice and expression corresponding to the behavior.
In the present embodiment, the processing device 1 can set a specific voice and expression for each behavior and record the correspondence between each behavior and its voice and expression. When the sensor detects a behavior performed on the robot by a user, the corresponding voice and expression can be determined for that behavior according to the recorded correspondence.
Alternatively, the correspondence between behaviors and voices and expressions may be preset manually and then stored by the processing device. When the sensor detects a behavior performed on the robot by a user, the stored correspondence can be looked up directly to determine the voice and expression corresponding to that behavior.
Optionally, the processing device 1 is integrated inside the sensor.
In the present embodiment, the processing device 1 may be integrated inside the sensor 2. Highly integrating the processing device 1 and the sensor 2 can reduce the device's volume and reduce wiring faults. For example, an intelligent sensor that combines the sensor function and the processing function can directly replace the separate processing device 1 and sensor 2.
Fig. 2 is a structural schematic diagram of a robot provided by another embodiment of the present invention. As shown in Fig. 2, the robot further includes a memory 5 connected to the processing device 1; the memory 5 is configured to store the correspondence and the voices and expressions corresponding to the behaviors.
In the present embodiment, the correspondence may be manually preconfigured and stored in the memory, or may be a correspondence set by the processing device; the voices and expressions corresponding to the behaviors are prestored in the memory.
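The correspondence table held in the memory could be persisted and reloaded, for example as a small JSON file. This is a sketch under the assumption of a JSON representation; the file name and entry format are hypothetical, not prescribed by the patent:

```python
import json
import os
import tempfile

# Hypothetical on-disk format for the correspondence stored in the memory:
# behavior -> {"voice": audio clip id, "expression": expression id}.
correspondence = {
    "pat": {"voice": "giggle.wav", "expression": "happy"},
    "strike": {"voice": "ouch.wav", "expression": "sad"},
}

def save_correspondence(path: str, table: dict) -> None:
    """Write the behavior correspondence table to persistent storage."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(table, f, ensure_ascii=False, indent=2)

def load_correspondence(path: str) -> dict:
    """Read the behavior correspondence table back from storage."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)

if __name__ == "__main__":
    path = os.path.join(tempfile.gettempdir(), "correspondence.json")
    save_correspondence(path, correspondence)
    restored = load_correspondence(path)
    assert restored == correspondence  # round-trip preserves the table
```

A manually preconfigured table would simply be authored as such a file and loaded at startup, while a processor-set table would be written back whenever it changes.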
Optionally, the number of pronunciation devices is greater than or equal to one.
Optionally, the pronunciation device is a speaker.
Optionally, the behavior is approaching, contacting, touching, patting, striking, or a speech act.
In the present embodiment, the behavior may be any one or more of behaviors such as approaching, contacting, touching, patting, striking, or speaking. The above behaviors are given merely as examples; the behavior may also be many other behaviors, for example body language such as gestures and actions made by the user, and may even include certain facial expressions of a person.
Optionally, the sensor is an ultrasonic sensor, a laser sensor, an infrared sensor, or a speech sensor.
In the present embodiment, the sensor may be any one of an ultrasonic sensor, a laser sensor, an infrared sensor, or a speech sensor. Using various types of sensors allows more user behaviors to be detected, so that the robot can demonstrate richer speech and expressive abilities, increasing its approachability.
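How different sensor types might map to the classes of behavior they can detect can be sketched as follows. The groupings below (including the "touch" sensor) are an illustrative assumption for the sketch, not a specification from the patent:

```python
# Illustrative mapping from sensor type to the user behaviors it can detect;
# the groupings are an assumption for this sketch, not defined by the patent.
SENSOR_BEHAVIORS = {
    "ultrasonic": {"approach"},
    "laser": {"approach"},
    "infrared": {"approach", "contact"},
    "touch": {"contact", "touch", "pat", "strike"},  # hypothetical extra type
    "speech": {"speech"},
}

def detectable_behaviors(sensors: list[str]) -> set[str]:
    """Union of behaviors detectable by the robot's installed sensors."""
    behaviors: set[str] = set()
    for s in sensors:
        behaviors |= SENSOR_BEHAVIORS.get(s, set())
    return behaviors

if __name__ == "__main__":
    # A robot with more sensor types can detect more kinds of behavior.
    print(sorted(detectable_behaviors(["ultrasonic", "speech"])))
```

The embodiment's point that "more sensor types means more detectable behaviors" falls out directly: the union grows monotonically as sensors are added.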
An embodiment of the present invention further provides a man-machine interactive system including at least one robot as described in any of the above embodiments. When the sensor detects a behavior performed on the robot by a user, the processing device determines the voice and expression corresponding to that behavior, the pronunciation device plays the voice, and the face display screen shows the expression, so that the robot can communicate with people through voice, expressions, and the like, increasing the emotional exchange between people and the robot and improving the user experience.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be implemented by hardware controlled by program instructions. The program may be stored in a computer-readable storage medium; when executed, it performs the steps of the above method embodiments. The storage medium includes various media capable of storing program code, such as read-only memory (ROM), random access memory (RAM), magnetic disks, or optical discs.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features replaced by equivalents, and that such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (10)
1. A robot, characterized by comprising: a processing device, at least one sensor, and a pronunciation device and a face display screen each connected to the processing device; wherein
the sensor is configured to detect a behavior performed on the robot by a user;
the processing device is configured to determine, according to the behavior, a voice and an expression corresponding to the behavior;
the pronunciation device is configured to play the voice corresponding to the behavior; and
the face display screen is configured to display the expression corresponding to the behavior.
2. The robot according to claim 1, characterized in that each behavior corresponds to at least one voice and one expression.
3. The robot according to claim 1 or 2, characterized in that the processing device is further configured to set the correspondence between behaviors and voices and expressions; and
the processing device determining, according to the behavior, the voice and expression corresponding to the behavior comprises: the processing device, based on the set correspondence, determining according to the behavior the voice and expression corresponding to the behavior.
4. The robot according to claim 3, characterized in that the robot further comprises a memory connected to the processing device; the memory is configured to store the correspondence and the voices and expressions corresponding to the behaviors.
5. The robot according to claim 1 or 2, characterized in that the number of pronunciation devices is greater than or equal to one.
6. The robot according to claim 1 or 2, characterized in that the behavior is approaching, contacting, touching, patting, striking, or a speech act.
7. The robot according to claim 1 or 2, characterized in that the sensor is an ultrasonic sensor, a laser sensor, an infrared sensor, or a speech sensor.
8. The robot according to claim 1 or 2, characterized in that the pronunciation device is a speaker.
9. The robot according to claim 1 or 2, characterized in that the processing device is integrated inside the sensor.
10. A man-machine interactive system, characterized by comprising: at least one robot according to any one of claims 1-9.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201610389676.3A CN106055105A (en) | 2016-06-02 | 2016-06-02 | Robot and man-machine interactive system |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201610389676.3A CN106055105A (en) | 2016-06-02 | 2016-06-02 | Robot and man-machine interactive system |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN106055105A true CN106055105A (en) | 2016-10-26 |
Family
ID=57170027
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201610389676.3A Pending CN106055105A (en) | 2016-06-02 | 2016-06-02 | Robot and man-machine interactive system |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN106055105A (en) |
- 2016-06-02: application CN201610389676.3A filed in China; published as CN106055105A; status Pending
Patent Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20080010070A1 (en) * | 2006-07-10 | 2008-01-10 | Sanghun Kim | Spoken dialog system for human-computer interaction and response method therefor |
| CN102500113A (en) * | 2011-11-11 | 2012-06-20 | 山东科技大学 | Comprehensive greeting robot based on smart phone interaction |
| CN203861914U (en) * | 2014-01-07 | 2014-10-08 | 深圳市中科睿成智能科技有限公司 | Pet robot |
| CN104800950A (en) * | 2015-04-22 | 2015-07-29 | 中国科学院自动化研究所 | Robot and system for assisting autistic child therapy |
| CN105159148A (en) * | 2015-07-16 | 2015-12-16 | 深圳前海达闼科技有限公司 | Robot instruction processing method and device |
| CN105159111A (en) * | 2015-08-24 | 2015-12-16 | 百度在线网络技术(北京)有限公司 | Artificial intelligence-based control method and control system for intelligent interaction equipment |
| CN205201537U (en) * | 2015-11-04 | 2016-05-04 | 上海拓趣信息技术有限公司 | Robot of accompanying and attending to |
| CN107030717A (en) * | 2017-06-22 | 2017-08-11 | 山东英才学院 | A kind of child intelligence educational robot |
Cited By (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106313079A (en) * | 2016-11-05 | 2017-01-11 | 杭州畅动智能科技有限公司 | Robot man-machine interaction method and system |
| CN106393132A (en) * | 2016-11-09 | 2017-02-15 | 苏州市职业大学 | Electronic hand for pacifying infant |
| CN106557164A (en) * | 2016-11-18 | 2017-04-05 | 北京光年无限科技有限公司 | It is applied to the multi-modal output intent and device of intelligent robot |
| CN106773923A (en) * | 2016-11-30 | 2017-05-31 | 北京光年无限科技有限公司 | The multi-modal affection data exchange method and device of object manipulator |
| CN106873773A (en) * | 2017-01-09 | 2017-06-20 | 北京奇虎科技有限公司 | Robot interactive control method, server and robot |
| CN107243905A (en) * | 2017-06-28 | 2017-10-13 | 重庆柚瓣科技有限公司 | Mood Adaptable System based on endowment robot |
| CN108133503A (en) * | 2017-12-29 | 2018-06-08 | 北京物灵智能科技有限公司 | A kind of method and system that expression animation is realized using game engine |
| CN109189363A (en) * | 2018-07-24 | 2019-01-11 | 上海常仁信息科技有限公司 | A kind of robot of human-computer interaction |
| CN109129509A (en) * | 2018-09-17 | 2019-01-04 | 金碧地智能科技(珠海)有限公司 | A kind of endowment based on screen intelligent interaction is accompanied and attended to robot |
| CN110480648A (en) * | 2019-07-30 | 2019-11-22 | 深圳市琅硕海智科技有限公司 | A kind of ball shape robot intelligent interactive system |
| CN112363789A (en) * | 2020-11-11 | 2021-02-12 | 上海擎朗智能科技有限公司 | Page interaction method, device, terminal and storage medium |
| CN112363789B (en) * | 2020-11-11 | 2024-06-04 | 上海擎朗智能科技有限公司 | Page interaction method, device, terminal and storage medium |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN106055105A (en) | Robot and man-machine interactive system | |
| AU2018202162B2 (en) | Methods and systems of handling a dialog with a robot | |
| US8203528B2 (en) | Motion activated user interface for mobile communications device | |
| JP6719741B2 (en) | Dialogue method, dialogue device, and program | |
| Kim et al. | Designers characterize naturalness in voice user interfaces: their goals, practices, and challenges | |
| CN101391146A (en) | Method and apparatus for enhancing entertainment software through haptic insertion | |
| US20130031476A1 (en) | Voice activated virtual assistant | |
| CN106548773A (en) | Child user searching method and device based on artificial intelligence | |
| CN108549662A (en) | The supplement digestion procedure and device of semantic analysis result in more wheel sessions | |
| US20160125295A1 (en) | User-interaction toy and interaction method of the toy | |
| Gonzalez Diaz et al. | Interactive machine learning for more expressive game interactions | |
| US11762451B2 (en) | Methods and apparatus to add common sense reasoning to artificial intelligence in the context of human machine interfaces | |
| Krings et al. | “What if everyone is able to program?”–Exploring the Role of Software Development in Science Fiction | |
| Divekar et al. | HUMAINE: human multi-agent immersive negotiation competition | |
| Farkas et al. | How boardgame players imagine interacting with technology | |
| WO2017200079A1 (en) | Dialog method, dialog system, dialog device, and program | |
| Zhang et al. | Prompting an Embodied AI Agent: How Embodiment and Multimodal Signaling Affects Prompting Behaviour | |
| Tian et al. | Recognizing emotions in dialogues with acoustic and lexical features | |
| US20210174703A1 (en) | Methods and systems for facilitating learning of a language through gamification | |
| Hoffmann | Technological Brave new world? Eschatological narratives on digitalization and their flaws | |
| JP6755509B2 (en) | Dialogue method, dialogue system, dialogue scenario generation method, dialogue scenario generator, and program | |
| Ko et al. | A novel affinity enhancing method for human robot interaction-preliminary study with proactive docent avatar | |
| US20260024450A1 (en) | Methods and systems for facilitating learning of a language through gamification | |
| JP7809156B2 (en) | Behavior Control System | |
| Zargham | Expanding speech interaction for domestic activities |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | C06 | Publication | |
| | PB01 | Publication | |
| | C10 | Entry into substantive examination | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20161026 |