CN101618280B - Humanoid-head robot device with human-computer interaction function and behavior control method thereof - Google Patents
- Publication number
- CN101618280B CN101618280B CN2009100724055A CN200910072405A CN101618280B CN 101618280 B CN101618280 B CN 101618280B CN 2009100724055 A CN2009100724055 A CN 2009100724055A CN 200910072405 A CN200910072405 A CN 200910072405A CN 101618280 B CN101618280 B CN 101618280B
- Authority
- CN
- China
- Prior art keywords
- robot
- human
- humanoid
- emotion
- facial
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Manipulator (AREA)
Abstract
Disclosed are a humanoid-head robot device with a human-computer interaction function and a behavior control method thereof, relating to humanoid-head robots and their behavior control. The invention addresses the problems that existing humanoid-head robots cannot fully reproduce human facial expressions, have limited perception, and lack an artificial emotion model and a human-computer interaction function. The sensor perception system outputs the perceived information to the main control computer for processing; according to the artificial emotion model, the control software of the robot behavior control system computes the control quantities for the corresponding motors and issues motion commands through the motion control card, which outputs PWM pulses that drive the motors to the designated positions, realizing human-computer interaction and a range of emotional responses. The sensor perception system perceives and recognizes external emotional signals, and the artificial emotion model drives the robot's behavior. The invention reproduces human facial expressions and provides anthropomorphic multimodal perception such as smell, touch, and vision.
Description
Technical Field
The invention relates to a humanoid-head robot device and a behavior control method thereof, belonging to the field of robot applications.
Background Art
Research on humanoid robots began in the 1960s and, after more than fifty years of development, has become one of the main research directions in robotics. The field integrates mechanics, electronics, computing, materials, sensing, and control technology, and reflects a country's level of high-technology development. "Humanoid" means that the robot has human-like perception, decision-making, behavior, and interaction capabilities. Humanoid-head robots are an important direction for realizing human-machine emotional interaction within humanoid robotics. Emotion can improve a robot's usability and credibility, while giving users feedback about the robot's internal state, goals, and intentions. In human-computer interaction, a machine designed with emotion establishes a friendly interface with people, gains the ability to participate in social affairs and interpersonal communication, and is more readily accepted by humans. Endowing the machine with "life" gives its thinking and behavior clear purpose and direction, markedly improving their efficiency and speed; in dynamic, unpredictable, and potentially dangerous environments, it grants the machine creativity in thought and self-awareness in behavior, improving its ability to adapt to the environment.
At present, some humanoid-head robots under development lack multiple perception functions, and their realization of basic facial expressions is limited to a single expression. A literature search found Chinese patent publication No. CN 201088839 (application No. 200720189947.7), titled "Muscle action mechanism for a robot's smiling expression". The mechanism comprises a skull frame and a rubber skin attached to it; its merit is a simple mechanism able to produce various smiling expressions, but it cannot realize any other facial expression. Chinese patent publication No. CN 101020315A (application No. 200710038295.1), titled "Head system of anthropomorphic robot", includes a six-degree-of-freedom serial mechanism and an expandable processor network with a DSP F2812 as the master control node. The serial mechanism is driven by six servos that emulate the movements of the eyes, neck, and jaw. The processor network consists of a video processor, a speech processor, interface modules, and the DSP F2812 master control circuit, and meets the motion-control and computing requirements of human-computer interaction. However, this head system has no elastic facial skin, so it cannot reproduce human facial expressions, and it lacks anthropomorphic multimodal perception such as smell, touch, and vision. Moreover, the robots of the above patents have neither an artificial emotion model nor a human-computer interaction function.
Summary of the Invention
In view of the above state of the art, the object of the present invention is to provide a humanoid-head robot device with a human-computer interaction function and a behavior control method thereof, so as to solve the problems that existing humanoid-head robots cannot fully reproduce human facial expressions, have limited perception, and lack an artificial emotion model and a human-computer interaction function.
The technical solution adopted by the present invention to solve the above technical problems is as follows:
The humanoid-head robot device with a human-computer interaction function according to the invention consists of three parts: the humanoid-head robot body, the robot behavior control system, and the sensor perception system. The robot body comprises an eyeball movement unit, an upper-and-lower-jaw movement unit, an artificial lung device, a facial expression and mouth-shape drive mechanism, a front support plate, a rear support plate, a stand, a facial shell, and elastic facial skin. The eyeball movement unit consists of two eyeballs, an eyeball transmission mechanism, two eyeball servo motors, two eyelids, an eyelid transmission mechanism, two eyelid servo motors, and a further servo motor. The jaw movement unit consists of the upper jaw, the lower jaw, a motor, and a rotating shaft. The artificial lung device consists of a hose, a cylinder, a piston, a nut, a gas-drive guide shaft, a gas-drive lead screw, a drive motor, and an air intake pipe. The facial expression and humanoid mouth-shape drive mechanism consists of first through sixth drive servo motors and several groups of facial-skin rope-and-pulley drives. The robot behavior control system comprises control system hardware and control system software; the hardware comprises a main control computer and a motion control card, and the software embodies the behavior control method. The sensor perception system comprises two small CCD sensors, a speech recognition single-chip microcomputer, and an olfactory sensor. The front and rear support plates are arranged in parallel and fixed to the stand to form the head skeleton of the robot; the eyeball movement unit and the jaw movement unit are mounted on the stand from top to bottom. A small CCD sensor is embedded in each eyeball to give the robot binocular vision; each eyelid is arranged above the corresponding eyeball; the two eyeball servo motors turn the two eyeballs left and right through the eyeball transmission mechanism; the two eyelid servo motors move the two eyelids through the eyelid transmission mechanism; a further servo motor drives both eyeballs up and down together. The upper jaw sits above the lower jaw, and the motor turns the shaft to drive the lower jaw. The olfactory sensor is mounted on the stand between the eyeball movement unit and the jaw movement unit; one end of the hose connects to the olfactory sensor and the other to the cylinder; the drive motor is rotationally coupled to the gas-drive lead screw, on which the nut is mounted; the nut travels along the gas-drive guide shaft and drives the piston fixed to it, realizing the robot's sense of smell; one end of the air intake pipe connects to the cylinder. The six drive servo motors are all mounted on the head skeleton formed by the front support plate, rear support plate, and stand, and are connected through the facial-skin rope-and-pulley drives to the corresponding control points of the elastic facial skin. The facial shell and elastic facial skin match the contour of a human face and are mounted, from the inside outward, on the front of the eyeball and jaw movement units to form the external contour of the device. The motion control card is mounted on the rear support plate, and the speech recognition single-chip microcomputer on the upper end of the front support plate. The sensor perception system outputs the perceived information to the main control computer for processing; according to the artificial emotion model, the control software computes the control quantities for the corresponding motors and issues motion commands through the motion control card (motor controller), which outputs PWM pulses that drive the motors to the designated positions, thereby realizing the robot's human-computer interaction function and its various emotional responses.
The behavior control method of the above humanoid-head robot device with a human-computer interaction function is realized in the following steps:
Step 1: the sensor perception system (CCD and MIC sensors) perceives external emotional signals, which are analyzed, feature-extracted, and recognized;
Step 2: the recognized emotional signals are passed to the artificial emotion model. The model comprises three main parts: emotion generation, emotional interaction, and emotional expression. Emotion generation comprises four modules: stimulus signal acquisition, emotion definition, emotion drive, and emotion transition. Two thresholds α and β are defined in the emotion drive: α is the activation threshold of an emotion and β its saturation threshold. Emotion state transitions take three influences into account: external factors, internal factors, and the previous emotional state;
On the basis of identifying each state of the artificial emotion model, and given that the transition relations between states can be fully determined by trigger events, an extended finite state machine is used to realize the state transitions between emotions. A set of variable attributes is defined on each state, yielding an extended finite state machine (EFSM). Once the emotional interaction model has been analyzed with the EFSM, the variables of each function in the artificial emotion model can be determined precisely, variable-definition conflicts are effectively avoided, and a basis is provided for the subsequent robot behavior control;
Step 3: the control quantities (rotation angles) of the corresponding drive motors are computed from the artificial emotion model, yielding the robot's facial expression and humanoid mouth shape, and thus the behavior through which the robot expresses its emotion.
The invention has the following beneficial technical effects: it reproduces human facial expressions and also provides anthropomorphic multimodal perception such as smell, touch, and vision. Through the elastic facial skin, it realizes the robot's basic facial expressions and dynamic mouth shapes. A behavior control method based on an emotion model realizes the robot's behavior control: the sensors in the device perceive external emotional stimuli, the artificial emotion model computes and recognizes them, and the control system produces the robot's behavioral responses, giving the robot a human-computer interaction function. A distinguishing feature of the invention is that the humanoid-head robot is designed at a 1:1 scale with an adult head and has a compact structure.
Brief Description of the Drawings
Fig. 1a is a perspective view of the robot of the invention, and Fig. 1b is a perspective view with the facial shell, elastic facial skin, and related parts omitted for clarity; Fig. 2a is a perspective view of the robot's jaw movement unit, and Fig. 2b is a front perspective view of the same unit; Fig. 3 is a perspective view of the robot's eyeball movement unit; Fig. 4a shows the positions of the skin feature points defined on the robot's facial skin, and Fig. 4b illustrates the principle by which the robot's facial expressions and mouth shapes are formed; Fig. 5 illustrates the principle of the robot's human-computer interaction; Fig. 6 is a block diagram of the robot's behavior control hardware; Fig. 7a is a flow chart of the robot's behavior control software; Fig. 7b-1 shows the speech signal when the robot says "Nice to meet you", and Fig. 7b-2 the rotation angle of the jaw drive motor during that utterance; Figs. 8a-8c illustrate the fuzzy neural network method for the robot's facial expression recognition; Fig. 9 illustrates the robot's speech recognition method; Fig. 10 shows the structure of the robot's artificial emotion model; Fig. 11 shows the robot's emotional interaction model based on a finite state machine; Fig. 12a shows some of the basic mouth shapes the robot can express, Fig. 12b the robot's expressions at successive moments while saying "Nice to meet you", and Fig. 12c photographs of the robot's human-computer interaction experiments.
Detailed Description of the Embodiments
Embodiment 1: As shown in Figs. 1a, 1b, 2a, 2b, 3, 4a, 4b, and 6, the humanoid-head robot device with a human-computer interaction function of this embodiment consists of three parts: the humanoid-head robot body, the robot behavior control system, and the sensor perception system. The robot body comprises an eyeball movement unit 1, a jaw movement unit 61, an artificial lung device 71, a facial expression and mouth-shape drive mechanism 81, a front support plate 7, a rear support plate 6, a stand 51, a facial shell 17, and elastic facial skin 18. The eyeball movement unit 1 consists of two eyeballs 12, an eyeball transmission mechanism, two eyeball servo motors 14, two eyelids 13, an eyelid transmission mechanism, two eyelid servo motors 16, and a servo motor 29. The jaw movement unit 61 consists of the upper jaw 8, the lower jaw 9, a motor 27, and a rotating shaft 28. The artificial lung device 71 consists of a hose 19, a cylinder 20, a piston 21, a nut 22, a gas-drive guide shaft 23, a gas-drive lead screw 25, a drive motor 26, and an air intake pipe 92. The facial expression and humanoid mouth-shape drive mechanism 81 consists of a first drive servo motor 35, a second drive servo motor 34, a third drive servo motor 33, a fourth drive servo motor 40, a fifth drive servo motor 41, a sixth drive servo motor 43, and several groups of facial-skin rope-and-pulley drives.
The robot behavior control system comprises control system hardware and control system software; the hardware comprises a main control computer 91 and a motion control card 5, and the software embodies the behavior control method.
The sensor perception system comprises two small CCD sensors 3, a speech recognition single-chip microcomputer 4, and an olfactory sensor 24.
The front support plate 7 and rear support plate 6 are arranged in parallel and fixed to the stand 51 to form the head skeleton of the robot; the eyeball movement unit 1 and the jaw movement unit 61 are mounted on the stand 51 from top to bottom. A small CCD sensor 3 is embedded in each eyeball 12 to give the robot binocular vision; each eyelid 13 is arranged above the corresponding eyeball 12; the two eyeball servo motors 14 turn the two eyeballs 12 left and right through the eyeball transmission mechanism; the two eyelid servo motors 16 move the two eyelids 13 through the eyelid transmission mechanism; the servo motor 29 drives both eyeballs 12 up and down together. The upper jaw 8 sits above the lower jaw 9, and the motor 27 turns the shaft 28 to drive the lower jaw 9. The olfactory sensor 24 is mounted on the stand 51 between the eyeball movement unit 1 and the jaw movement unit 61; one end of the hose 19 connects to the olfactory sensor 24 and the other to the cylinder 20; the motor 26 is rotationally coupled to the gas-drive lead screw 25, on which the nut 22 is mounted; the nut 22 travels along the gas-drive guide shaft 23 and drives the piston 21 fixed to it, realizing the robot's sense of smell; one end of the air intake pipe 92 connects to the cylinder 20. The first drive servo motor 35, second drive servo motor 34, third drive servo motor 33, fourth drive servo motor 40, fifth drive servo motor 41, and sixth drive servo motor 43 are all mounted on the head skeleton formed by the front support plate 7, rear support plate 6, and stand 51, and are connected through the facial-skin rope-and-pulley drives to the corresponding control points of the elastic facial skin 18. The facial shell 17 and elastic facial skin 18 match the contour of a human face and are mounted, from the inside outward, on the front of the eyeball movement unit 1 and jaw movement unit 61 to form the external contour of the device. The motion control card 5 is mounted on the rear support plate 6, and the speech recognition single-chip microcomputer 4 on the upper end of the front support plate 7. The sensor perception system outputs the perceived information to the main control computer 91 for processing; according to the artificial emotion model, the control software computes the control quantities for the corresponding motors and issues motion commands through the motion control card 5 (motor controller), which outputs PWM pulses that drive the motors to the designated positions, thereby realizing the robot's human-computer interaction function and its various emotional responses.
The humanoid-head robot device of this embodiment is 162 mm long, 156 mm wide, and 184 mm high, weighs 2.8 kg, and has 14 degrees of freedom in total. The eyeball movement unit 1 and jaw movement unit 61 realize the robot's human-like facial organ movements; the motor 27 turns the shaft 28 to drive the lower jaw 9, realizing the movement of the robot's mouth. In the artificial lung device, the motor 26 rotates the gas-drive lead screw 25 so that the nut 22 travels along the gas-drive guide shaft 23, driving the piston 21 fixed to the nut 22 and realizing the robot's sense of smell; odors such as alcohol, smoke, and ammonia can be identified, and the artificial lung device can be placed at a suitable position in the chest of a humanoid robot. Fig. 3 shows the robot's eyeball movement unit 1: the eyeballs have two degrees of freedom and the eyelids one. A small CCD sensor 3 is embedded in each eyeball 12, forming the robot's binocular vision. The eyeballs move at up to 500 deg/s and the eyelids at up to 900 deg/s. Given the limited space inside the head, the behavior control uses the compact, high-torque, easily position-controlled eyeball servo motors 14, eyelid servo motors 16, and servo motor 29, with synchronous belt drives as the motors' transmission mechanisms; to support the motor drive shafts, bearing seats 15 are designed to stiffen the mechanism. The eyeball mechanism is left-right symmetric. The eyeball servo motors 14 turn the eyeballs 12 left and right through the first rope pulley 31 and second rope pulley 11; the servo motors 16 drive the corresponding eyelids 13 through the third rope pulley 30 and fourth rope pulley 33; the servo motor 29 drives both eyeballs 12 up and down together.
Considering the communication between the servo control card and the main control computer, the rational allocation and effective use of the card's hardware resources, and the hardware integration of the humanoid-head robot system, the SSC-32 board is selected as the motion control card 5; it can coordinate up to 32 servo motors. The motion control hardware is the SSC-32 board 5 itself. This servo controller communicates with the PC over an RS-232 serial port: host software on the PC sends control command signals to the controller, which then outputs PWM signals of a given duty cycle, so that multiple servos can be controlled individually or simultaneously. The card's command set is compact, its control resolution is 0.09°/μs, and it supports both position and speed control of the servos.
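As a sketch of how such commands are formed, the SSC-32's serial protocol addresses each servo by channel number and target pulse width, with an optional time parameter that lets the board interpolate the move. The helper below only builds the command strings; the channel numbers and pulse widths in the examples are hypothetical, since the patent does not publish its channel assignments.

```python
def ssc32_command(channel, pulse_us, time_ms=None):
    """Build one SSC-32 positioning command.

    channel: servo channel 0-31; pulse_us: target pulse width in
    microseconds (~500-2500, 1500 is typically neutral); time_ms:
    optional duration of the move for board-side interpolation.
    """
    if not 0 <= channel <= 31:
        raise ValueError("SSC-32 drives channels 0-31")
    cmd = "#%dP%d" % (channel, pulse_us)
    if time_ms is not None:
        cmd += "T%d" % time_ms
    return cmd + "\r"  # SSC-32 commands are terminated by a carriage return

def ssc32_group(moves, time_ms):
    """Group move: several (channel, pulse_us) pairs sharing one T value,
    so all listed servos reach their targets together."""
    body = "".join("#%dP%d" % (ch, pw) for ch, pw in moves)
    return body + "T%d\r" % time_ms
```

Sent over the RS-232 link with any serial library, a single group-move line is how several expression control points can be driven simultaneously to form one facial expression.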
The speech recognition single-chip microcomputer 4 is an SPCE061A. The robot's auditory system combines the main control computer with the SPCE061A single-chip microcomputer 4. The SPCE061A, a 16-bit microprocessor from Sunplus with 32 K words of embedded flash and high processing speed, conveniently performs speech recognition and speech signal processing. The system's communication circuitry mainly comprises a 32-bit I/O communication circuit and a universal asynchronous serial interface (UART) circuit; the UART realizes communication with the main control computer, i.e., data transfer between the microcomputer and the host. The voice prompts used during recognition are implemented with the API speech functions provided by Sunplus. Template matching during recognition uses the Viterbi algorithm, a forward search that, given an observation sequence, finds the best state sequence in the model, i.e., selects the template with the highest output probability as the result. Template training, which is computationally heavy, is done on the main control computer: speech samples are collected by the SPCE061A system and transferred over the RS-232 communication module to the host for storage, which keeps the speech features used for training consistent with those used for recognition and reduces errors caused by hardware inconsistency.
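To make the template-matching step concrete, the log-domain Viterbi recursion below finds the best state path and its score for one HMM; run against each word template's HMM, the template whose best path scores highest is chosen as the recognition result. The two-state model in the example is a toy stand-in for the patent's trained templates, with quantized symbols in place of real MFCC frames.

```python
import math

def viterbi(obs, states, log_start, log_trans, log_emit):
    """Most likely HMM state path for an observation sequence (log domain)."""
    V = [{s: log_start[s] + log_emit[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            # Best predecessor state and its accumulated log-probability.
            prev, lp = max(((p, V[t - 1][p] + log_trans[p][s]) for p in states),
                           key=lambda x: x[1])
            V[t][s] = lp + log_emit[s][obs[t]]
            back[t][s] = prev
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(obs) - 1, 0, -1):  # trace the backpointers
        path.append(back[t][path[-1]])
    path.reverse()
    return V[-1][last], path

# Toy 2-state model (illustrative only).
lg = math.log
states = ["H", "C"]
log_start = {"H": lg(0.6), "C": lg(0.4)}
log_trans = {"H": {"H": lg(0.7), "C": lg(0.3)},
             "C": {"H": lg(0.4), "C": lg(0.6)}}
log_emit = {"H": {1: lg(0.2), 2: lg(0.4), 3: lg(0.4)},
            "C": {1: lg(0.5), 2: lg(0.4), 3: lg(0.1)}}
score, path = viterbi([3, 1, 3], states, log_start, log_trans, log_emit)
```

Working in log probabilities avoids the numeric underflow that plain probability products suffer on long utterances.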
The olfactory sensor 24 is an FIS-series gas sensor.
Embodiment 2: The sensor perception system of this embodiment further includes a tactile sensor arranged in the middle of the forehead. The other components and connections are the same as in Embodiment 1.
Embodiment 3: The sensor perception system of this embodiment further includes two temperature sensors arranged on the left and right sides of the forehead. The other components and connections are the same as in Embodiment 1 or 2.
Embodiment 4: The small CCD sensors 3 capture video using DirectShow, Microsoft's COM-based streaming-media development kit built on ActiveMovie and Video for Windows. The image acquisition card is installed on the motherboard via PCI; the robot's vision, built on the Windows software development platform supplied with the small CCD sensors 3, recognizes the external environment, here chiefly the recognition of human facial expressions. The main functions of the small CCD sensors 3 are:
(1) static and dynamic image acquisition, including parameter setting of the acquisition card, bitmap reading and storage, and multi-channel acquisition control, display, and switching;
(2) the dynamic and static image analysis algorithms and the facial expression recognition algorithm involved. The rest is the same as in Embodiment 1.
Embodiment 5: As shown in Figs. 1a, 1b, 4a, 4b, 12a, and 12b, the facial expressions and humanoid mouth shapes of this embodiment are realized as follows:
Step 1: skin feature points a, a-1, b, b-1, c, c-1, d, d-1, e, e-1, and g are defined on the elastic facial skin 18; these points serve as the motion control points;
Step 2: a slider is arranged at each motion control point and connected to the corresponding control point of the elastic facial skin 18; each slider slides in a corresponding guide groove formed in the facial shell 17, and the orientation of each groove determines the direction of the force applied at that control point; one end of a rope of the facial-skin rope-and-pulley drive is attached to the slider and the other end to the corresponding drive servo motor;
Step 3: different basic facial expressions and robot mouth shapes are realized through combinations of the expression control points and changes in their displacements.
To realize expression control, skin-movement guide bent plates 10 with guide grooves may also be arranged at the corresponding control points. To realize the robot's facial expressions and mouth shapes, skin feature points (a, a-1, b, b-1, c, c-1, d, d-1, e, e-1, g) are defined on the facial skin 18. In the figures, the dots mark the control points set on the robot's skin and the arrows indicate the directions in which they move. In the design of the facial expressions, basic expressions and mouth shapes are produced by controlling the combination of these control points and the displacement and direction of their motion, emulating the bidirectional action of human muscles. In the actual mechanism, the facial-skin drive rope groups are connected to the control points, and different basic expressions are realized through combinations of the expression control points and changes in their displacements. Table 1 gives the assignment of control points to drive rope groups and drive servo motors. To use space effectively, the motion of the control points is left-right symmetric (the fourth drive servo motor 40 and fifth drive servo motor 41 are arranged symmetrically; 38 and 38-1 are symmetric; 39 and 39-1 are symmetric).
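The control-point scheme above can be sketched as a lookup from expression name to displacement targets for the named feature points. All numeric values below are invented for illustration; the real point-to-servo pairings and travel distances come from the patent's Table 1 and the rope-and-pulley geometry.

```python
# Hypothetical displacement tables (mm) for two of the basic expressions;
# points not listed stay at the neutral position.
EXPRESSIONS = {
    "smile":    {"c": 4.0, "c-1": 4.0, "e": 2.0, "e-1": 2.0},
    "surprise": {"a": 3.0, "a-1": 3.0, "b": 2.5, "b-1": 2.5, "g": 5.0},
}

CONTROL_POINTS = ["a", "a-1", "b", "b-1", "c", "c-1",
                  "d", "d-1", "e", "e-1", "g"]

def control_point_targets(expression, neutral=0.0):
    """Displacement (mm) each rope-driven slider must travel from the
    neutral face to form the requested expression."""
    table = EXPRESSIONS[expression]
    return {p: table.get(p, neutral) for p in CONTROL_POINTS}
```

Because each guide groove fixes the direction of motion, a single scalar displacement per point is enough to specify an expression; left/right symmetry shows up as paired entries such as `c` and `c-1`.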
Embodiment 6: As shown in Figs. 1a, 1b, 5, 6, 7a, 7b-1, 7b-2, 10, 11, 12a, 12b, and 12c, the behavior control method of the humanoid-head robot device with a human-computer interaction function of this embodiment is realized in the following steps:
Step 1: the sensor perception system (CCD and MIC sensors) perceives external emotional signals, which are analyzed, feature-extracted, and recognized. The emotional signals include basic human facial expressions and speech signals; the basic facial expressions are recognized with a fuzzy neural network, and the speech signals with a CHMM speech recognition model;
Step 2: the recognized emotional signals are passed to the artificial emotion model. The model comprises three main parts: emotion generation, emotional interaction, and emotional expression. Emotion generation comprises four modules: stimulus signal acquisition, emotion definition, emotion drive, and emotion transition. Two thresholds α and β are defined in the emotion drive: α is the activation threshold of an emotion and β its saturation threshold. Emotion state transitions take three influences into account: external factors, internal factors, and the previous emotional state. In emotion generation, sensors realize the human-like perception used to sense external events; since vision and hearing capture most information about the environment, the robot's perception is realized through visual and auditory sensors. Emotional expression includes facial expression and speech.
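The threshold logic of the emotion drive can be sketched minimally: each emotion keeps an intensity that is raised by external stimuli, decays internally, and carries over from the previous state, with α gating expression and β capping it. The default numeric values are assumptions for illustration, not values taken from the patent.

```python
def update_emotion(intensity, stimulus, alpha=0.3, beta=1.0, decay=0.05):
    """One update step for a single emotion's intensity.

    alpha: activation threshold -- below it the emotion is not expressed;
    beta:  saturation threshold -- intensity is clipped there;
    decay: spontaneous drift back toward the calm state.
    The previous intensity, the external stimulus, and the internal decay
    together mirror the three influences named above.
    """
    intensity = max(0.0, min(beta, intensity + stimulus - decay))
    active = intensity >= alpha
    return intensity, active
```

A strong stimulus activates the emotion in one step, a weak one dies away below α, and repeated stimuli saturate at β instead of growing without bound.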
Emotional interaction is established on finite state machine theory: once each state of the emotion model is identified and the transition relations between states can be fully determined by trigger events, a finite state machine realizes the state transitions between emotions. The purpose of modeling emotional interaction is to control the behavior of the humanoid-head robot so that it responds according to its emotional state; therefore, once the emotional state machine is established, the main functions and variables it uses must be determined clearly and precisely. To this end, the basic finite state machine is extended with a set of variable attributes defined on each state, yielding an extended finite state machine (EFSM). After the emotional interaction model is analyzed with the EFSM, the variables of each function in the emotion model can be determined precisely, variable-definition conflicts are effectively avoided, and a basis is provided for the subsequent robot behavior control. In Fig. 8, E1 denotes the individual's initial emotional state and the conditions denote inputs; given the current emotional state and the input, the emotional state of the carrier changes and the corresponding behavior is produced. The voice and expression in the behavior represent the variable set V on the state;
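The EFSM described above can be sketched as a transition table plus a per-state variable set V. The states, trigger events, and attached behaviors in the example are illustrative stand-ins for the patent's Fig. 11 model, not its actual transition graph.

```python
class EmotionEFSM:
    """Minimal extended finite state machine: each state carries a
    variable set V (here: an expression and a spoken reply), and
    transitions fire on named trigger events."""

    def __init__(self, start="normal"):
        self.state = start
        self.transitions = {}  # (state, event) -> next state
        self.V = {}            # state -> attached variable set

    def add(self, src, event, dst):
        self.transitions[(src, event)] = dst

    def attach(self, state, **variables):
        self.V[state] = variables

    def fire(self, event):
        # Unmatched events leave the current state unchanged.
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state, self.V.get(self.state, {})

fsm = EmotionEFSM()
fsm.add("normal", "greeting", "happy")
fsm.add("happy", "insult", "angry")
fsm.attach("happy", expression="smile", reply="Nice to meet you")
```

Keeping the behavior variables on the states rather than on the transitions is what makes the variable composition of each control function explicit, which is the conflict-avoidance property claimed above.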
Step 3: the control quantities (rotation angles) of the corresponding drive motors are computed from the artificial emotion model, yielding the robot's facial expression and humanoid mouth shape, and thus the behavior through which the robot expresses its emotion.
Embodiment 7: As shown in Figs. 8a, 8b, and 8c, the fuzzy neural network of this embodiment recognizes basic human facial expressions as follows: the input layer has 6 nodes, the facial expression feature values {θ1, θ2, θ3, θ4, L1, L2}; the output layer has 7 nodes, one per basic facial expression (happy, surprised, sad, angry, disgusted, afraid, and neutral). The desired network output sets the i-th output node to 1 and all other output nodes to zero, while the actual output is a specific value around the desired one. By competitive selection, the input sample is assigned to the class of the output node with the maximum actual output; if several output nodes share the maximum, the sample is rejected. Because of individual differences and expression changes, the relative positions of the feature points are not fixed but vary, and factors such as the distance between the person and the camera change during image acquisition, causing the feature values to vary; dimensionless values are therefore used as the feature values for facial expression recognition.
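The winner-take-all decision with rejection can be written down directly; the label order below is one plausible arrangement of the seven expressions named above, assumed for illustration.

```python
EXPRESSION_LABELS = ["happy", "surprised", "sad", "angry",
                     "disgusted", "afraid", "neutral"]

def classify(outputs, labels=EXPRESSION_LABELS):
    """Competitive selection over the 7 output nodes: the sample takes
    the class of the single maximal output; if the maximum is shared
    by several nodes, the sample is rejected (None)."""
    top = max(outputs)
    winners = [i for i, o in enumerate(outputs) if o == top]
    if len(winners) != 1:
        return None  # rejection: no unique winner
    return labels[winners[0]]
```

In practice a near-tie (two outputs within some ε) would usually also be rejected; the exact-tie rule here matches the description above as literally as possible.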
Embodiment 8: As shown in Fig. 9, the CHMM speech recognition model of this embodiment recognizes speech signals as follows: a speech signal can usually be viewed as observations produced by a sequence of HMM states, each observation being one frame of MFCC parameters. During recognition, endpoint detection of the speech signal is performed using the short-time average energy and the short-time zero-crossing rate; the classic Baum-Welch algorithm solves the HMM parameter estimation problem, and recognition uses a dynamic programming algorithm, the Viterbi algorithm. The rest is the same as in Embodiment 6.
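The endpoint-detection step can be sketched on already-framed samples: a frame counts as speech when its short-time energy or its zero-crossing rate exceeds a threshold, the latter catching low-energy fricatives. The thresholds in the example are arbitrary illustrative values; in practice they are calibrated against the background noise level.

```python
def frame_energy(frame):
    """Short-time average energy of one frame of samples."""
    return sum(x * x for x in frame) / len(frame)

def zero_crossings(frame):
    """Number of sign changes within the frame."""
    return sum(1 for a, b in zip(frame, frame[1:]) if (a < 0) != (b < 0))

def detect_endpoints(frames, e_thresh, z_thresh):
    """Return (first, last) indices of frames judged to be speech,
    or None when no frame crosses either threshold."""
    speech = [i for i, f in enumerate(frames)
              if frame_energy(f) > e_thresh or zero_crossings(f) > z_thresh]
    if not speech:
        return None
    return speech[0], speech[-1]
```

Only the frames between the detected endpoints are then converted to MFCC observation sequences and passed to the CHMM recognizer, which keeps silence from diluting the templates.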
Principle of the method of the invention: the CCD and MIC sensors perceive external emotional signals and recognize them. From the elements of the variable set V in the emotion model's extended finite state machine, such as the facial expression control points and basic mouth shapes, the behavior through which the robot expresses its emotion is obtained. Some of the basic mouth shapes the robot can express are shown in Fig. 12a; each mouth shape in the figure corresponds to the pronunciation of a Chinese pinyin syllable. According to artificial emotion model theory, external stimuli influence the robot's behavior; in the experiments, these stimuli comprised speech signals and visual signals. Depending on the stimulus, the robot engages in different emotional interactions with people, here mainly the interaction of language and the robot's facial expressions; driven by the emotion model, the robot responds with expressions and spoken answers. The control method realizes the robot's "human-like" behavior through the interaction of three parts: emotion, drive, and behavior. In this architecture, the drive decides what to do and the emotion decides how to do it. Fig. 7a is the flow chart of the behavior control software of the humanoid-head robot device. After the robot fuses the facial expression and speech information collected by its sensors, the control software derives the control quantities of the drive motors from the artificial emotion model and controls the robot's behavioral expression. Figs. 7b-1 and 7b-2 show the rotation angle of the jaw drive motor while the robot says "Nice to meet you"; the interval 0-11.8 s in the figure is the human greeting "Hello" addressed to the robot. The device recognizes external speech through the speech recognition single-chip microcomputer and makes the corresponding triggered reply; when replying, the response time of each isolated word and its corresponding mouth shape are derived from the reply sentence, yielding the corresponding rotation angles of the drive motors.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2009100724055A CN101618280B (en) | 2009-06-30 | 2009-06-30 | Humanoid-head robot device with human-computer interaction function and behavior control method thereof |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2009100724055A CN101618280B (en) | 2009-06-30 | 2009-06-30 | Humanoid-head robot device with human-computer interaction function and behavior control method thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101618280A CN101618280A (en) | 2010-01-06 |
CN101618280B true CN101618280B (en) | 2011-03-23 |
Family
ID=41511797
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2009100724055A Expired - Fee Related CN101618280B (en) | 2009-06-30 | 2009-06-30 | Humanoid-head robot device with human-computer interaction function and behavior control method thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN101618280B (en) |
Families Citing this family (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101885181B (en) * | 2010-05-24 | 2011-08-10 | 上海电气集团股份有限公司 | Robot for playing cucurbit flute and control method thereof |
CN102825604A (en) * | 2012-09-18 | 2012-12-19 | 广西玉林正方机械有限公司 | Motion control programming system of six-DOF (degree of freedom) robot |
CN103413113A (en) * | 2013-01-15 | 2013-11-27 | 上海大学 | A method for intelligent emotional interaction of service robots |
CN103358310B (en) * | 2013-07-04 | 2015-05-27 | 上海大学 | Mouth movement mechanism of humanoid robot |
FR3011375B1 (en) * | 2013-10-01 | 2017-01-27 | Aldebaran Robotics | METHOD FOR DIALOGUE BETWEEN A MACHINE, SUCH AS A HUMANOID ROBOT, AND A HUMAN INTERLOCUTOR, COMPUTER PROGRAM PRODUCT AND HUMANOID ROBOT FOR IMPLEMENTING SUCH A METHOD |
CN103853071B (en) * | 2014-01-20 | 2016-09-28 | 南京升泰元机器人科技有限公司 | Man-machine facial expression interactive system based on bio signal |
CN104091370B (en) * | 2014-08-01 | 2017-02-15 | 哈尔滨工业大学 | Human head imitation portrait robot capable of changing facial form and face organs as well as mathematical modeling method and control method thereof |
CN106325228B (en) * | 2015-06-26 | 2020-03-20 | 北京贝虎机器人技术有限公司 | Method and device for generating control data of robot |
WO2016206642A1 (en) * | 2015-06-26 | 2016-12-29 | 北京贝虎机器人技术有限公司 | Method and apparatus for generating control data of robot |
CN105244042B (en) * | 2015-08-26 | 2018-11-13 | 安徽建筑大学 | A kind of speech emotional interactive device and method based on finite-state automata |
CN105843118B (en) * | 2016-03-25 | 2018-07-27 | 北京光年无限科技有限公司 | A kind of robot interactive method and robot system |
CN106203344A (en) * | 2016-07-12 | 2016-12-07 | 北京光年无限科技有限公司 | A kind of Emotion identification method and system for intelligent robot |
CN106371583B (en) * | 2016-08-19 | 2020-07-31 | 北京儒博科技有限公司 | Control method and device of intelligent equipment |
CN106426196B (en) * | 2016-08-31 | 2018-01-30 | 佛山博文机器人自动化科技有限公司 | A kind of service-delivery machine head part |
CN108115678B (en) * | 2016-11-28 | 2020-10-23 | 深圳光启合众科技有限公司 | Robot and its motion control method and device |
CN108732943A (en) * | 2017-04-18 | 2018-11-02 | 深圳市丰巨泰科电子有限公司 | Expression robot man-machine interaction method |
CN107813294B (en) * | 2017-10-31 | 2019-01-01 | 梅其珍 | A kind of nonmetal flexible anthropomorphic robot |
CN108577866A (en) * | 2018-04-03 | 2018-09-28 | 中国地质大学(武汉) | A kind of system and method for multidimensional emotion recognition and alleviation |
CN108714902A (en) * | 2018-06-28 | 2018-10-30 | 香港中文大学(深圳) | Apery expression robot head construction and robot head control system |
CN108563138A (en) * | 2018-07-04 | 2018-09-21 | 深圳万发创新进出口贸易有限公司 | A kind of intelligent domestic system |
CN110103234B (en) * | 2019-04-30 | 2024-05-24 | 广东工业大学 | Humanoid facial expression robot |
SG10201910949PA (en) * | 2019-11-21 | 2020-11-27 | Lian Wang | Artificial Intelligence Brain |
CN112991886B (en) * | 2021-03-09 | 2023-05-09 | 湖北工业大学 | Barrier-free communication and learning assistance system for the deaf-mute |
- 2009-06-30 CN CN2009100724055A patent/CN101618280B/en not_active Expired - Fee Related
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1262844A1 (en) * | 2001-06-01 | 2002-12-04 | Sony International (Europe) GmbH | Method for controlling a man-machine-interface unit |
EP1406135A1 (en) * | 2001-06-01 | 2004-04-07 | Sony International (Europe) GmbH | Man-machine interface unit control method; robot apparatus; and its action control method |
JP2004174692A (en) * | 2002-11-29 | 2004-06-24 | Mitsubishi Heavy Ind Ltd | Man-machine robot and control method of man machine robot |
CN200998593Y (en) * | 2007-02-14 | 2008-01-02 | 杨建良 | Face act device of the robot |
CN101020315A (en) * | 2007-03-22 | 2007-08-22 | 上海交通大学 | Head system of anthropomorphic robot |
CN101373380A (en) * | 2008-07-14 | 2009-02-25 | 浙江大学 | A humanoid robot control system and robot manipulation method |
Also Published As
Publication number | Publication date |
---|---|
CN101618280A (en) | 2010-01-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101618280B (en) | Humanoid-head robot device with human-computer interaction function and behavior control method thereof | |
CN101474481B (en) | emotional robot system | |
CN102354349B (en) | Human-machine interaction multi-mode early intervention system for improving social interaction capacity of autistic children | |
KR100813668B1 (en) | How to express emotions in android robot | |
CN104493827A (en) | Intelligent cognitive robot and cognitive system thereof | |
WO2002076687A1 (en) | Robot device and control method therefor, and storage medium | |
CN101458778A (en) | Artificial head robot with facial expression and multiple perceptional functions | |
CN102566474A (en) | Interaction system and method for robot with humanoid facial expressions, and face detection and tracking method | |
CN109773807B (en) | Motion control method and robot | |
CN117093689A (en) | Question-answering robot device and method for simulating human image and response | |
Shidujaman et al. | “roboquin”: A mannequin robot with natural humanoid movements | |
CN113246156A (en) | Child accompanying robot based on intelligent emotion recognition and control method | |
Silva et al. | Mirroring and recognizing emotions through facial expressions for a RoboKind platform | |
Weiguo et al. | Development of the humanoid head portrait robot system with flexible face and expression | |
Pelachaud et al. | Expressive gestures displayed by a humanoid robot during a storytelling application | |
De Beir et al. | Enhancing nao expression of emotions using pluggable eyebrows | |
Esfandbod et al. | Human-robot interaction based on facial expression imitation | |
Kirandziska et al. | Human-robot interaction based on human emotions extracted from speech | |
CN214955998U (en) | Voice interaction equipment based on deep learning | |
Bottega et al. | Jubileo: an open-source robot and framework for research in human-robot social interaction | |
Shaif et al. | Vision and voice-based human-robot interactive interface for humanoid robot | |
Naeem et al. | An AI based voice controlled humanoid robot | |
Hu et al. | Implementation of voice interaction based on questions and answers for guide robot in exhibition hall | |
Cid et al. | A real-time synchronization algorithm between Text-To-Speech (TTS) system and Robot Mouth for Social Robotic Applications | |
Aidokas | Human–robot interaction communication control system using Lithuanian language |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20110323 |