
CN101618280B - Humanoid-head robot device with human-computer interaction function and behavior control method thereof - Google Patents


Info

Publication number
CN101618280B
CN101618280B
Authority
CN
China
Prior art keywords
robot
human
humanoid
emotion
facial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2009100724055A
Other languages
Chinese (zh)
Other versions
CN101618280A (en)
Inventor
吴伟国 (Wu Weiguo)
孟庆梅 (Meng Qingmei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology Shenzhen
Original Assignee
Harbin Institute of Technology Shenzhen
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology Shenzhen filed Critical Harbin Institute of Technology Shenzhen
Priority to CN2009100724055A priority Critical patent/CN101618280B/en
Publication of CN101618280A publication Critical patent/CN101618280A/en
Application granted granted Critical
Publication of CN101618280B publication Critical patent/CN101618280B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

A humanoid-head robot device with a human-computer interaction function and a behavior control method thereof, relating to humanoid-head robots and their behavior control. The invention addresses the problems that existing humanoid-head robots cannot fully reproduce human facial expressions, have limited perception, and lack an artificial emotion model and a human-computer interaction function. The sensor perception system outputs the sensed information to the main control computer for processing; the control system software in the robot behavior control system derives the control quantities of the corresponding motors from the artificial emotion model and sends motion control commands through the motion control card, which outputs PWM pulses that drive the motors to the designated positions, realizing the robot's human-computer interaction function and its various emotional responses. The sensor perception system perceives external emotional signals and recognizes them, and the artificial emotion model is used to realize the robot's behavior control. The invention reproduces human facial expressions and has anthropomorphic multi-sensory functions such as smell, touch, and vision.

Description

Humanoid-head robot device with human-computer interaction function and behavior control method thereof

Technical Field

The present invention relates to a humanoid-head robot device and a behavior control method thereof, and belongs to the field of robot applications.

Background Art

Research on humanoid robots began in the 1960s and, after more than 50 years of development, has become one of the main research directions in robotics. It integrates machinery, electronics, computing, materials, sensing, control, and other disciplines, and reflects a country's level of high-technology development. "Humanoid" means that the robot has human-like perception, decision-making, behavior, and interaction capabilities. The humanoid-head robot is an important direction for realizing human-robot emotional interaction within humanoid robot research. Emotion can improve a robot's usability and credibility, while giving users feedback on the robot's internal state, goals, and intentions. In human-robot interaction, a machine designed with emotions establishes a friendly interface with people, giving it the capacity to participate in social affairs and interpersonal communication and making it more readily accepted by humans. Endowing the machine with "life" gives its thinking (or behavior) clear purpose and direction, markedly improving its efficiency and speed. In dynamic, unpredictable, and potentially "dangerous" environments, it endows the machine with creativity in thinking and self-direction in behavior, improving its ability to adapt to the environment.

At present, some humanoid-head robots lack multiple perception functions, and their realization of basic facial expressions is limited to a single expression. A literature search found Chinese patent CN 201088839 (patent No. 200720189947.7), "Muscle action mechanism for a robot's smiling expression", which comprises a skull frame and a rubber skin attached to it. Its advantage is a simple mechanism able to produce various smiling expressions; its shortcoming is that it cannot realize other facial expressions. Chinese patent CN 101020315A (patent No. 200710038295.1), "Humanoid robot head system", includes a six-degree-of-freedom serial mechanism and an expandable processor network with a DSP F2812 as the main control node. The six-degree-of-freedom serial mechanism is driven by six servos to simulate the movements of the eyes, neck, and jaw. The expandable processor network consists of a video processor, a speech processor, interface modules, and the DSP F2812 main control circuit, and meets the motion-control and computation requirements of human-computer interaction. However, this head system has no elastic facial skin, so it cannot reproduce human facial expressions, and it lacks anthropomorphic multi-sensory functions such as smell, touch, and vision. Moreover, the robots in the above patents have neither an artificial emotion model nor a human-computer interaction function.

Summary of the Invention

In view of the above state of the art, the object of the present invention is to provide a humanoid-head robot device with a human-computer interaction function and a behavior control method thereof, so as to solve the problems that existing humanoid-head robots cannot fully reproduce human facial expressions, have limited perception, and lack an artificial emotion model and a human-computer interaction function.

The technical solution adopted by the present invention to solve the above technical problems is as follows:

The humanoid-head robot device with a human-computer interaction function of the present invention consists of three parts: the humanoid-head robot body, the robot behavior control system, and the sensor perception system. The robot body comprises an eyeball movement unit, a jaw movement unit, an artificial lung device, a facial expression and mouth-shape driving mechanism, a front support plate, a rear support plate, a stand, a facial shell, and elastic facial skin. The eyeball movement unit consists of two eyeballs, an eyeball transmission mechanism, two eyeball servomotors, two eyelids, an eyelid transmission mechanism, two eyelid servomotors, and a further servomotor. The jaw movement unit consists of an upper jaw, a lower jaw, a motor, and a rotating shaft. The artificial lung device consists of a hose, a cylinder, a piston, a nut, a gas-drive guide shaft, a gas-drive lead screw, a drive motor, and an intake pipe. The facial expression and mouth-shape driving mechanism consists of first through sixth drive servomotors and several sets of facial-skin rope-and-pulley drives. The robot behavior control system comprises control system hardware and control system software; the hardware comprises a main control computer and a motion control card, and the software implements the behavior control method. The sensor perception system comprises two small CCD sensors, a speech-recognition single-chip microcomputer, and an olfactory sensor. The front and rear support plates are arranged in parallel and fixed to the stand to form the head skeleton of the robot; the eyeball movement unit and the jaw movement unit are mounted on the stand from top to bottom. A small CCD sensor embedded in each eyeball forms the robot's binocular vision; each eyelid is set above the corresponding eyeball; the two eyeball servomotors turn the two eyeballs left and right through the eyeball transmission mechanism; the two eyelid servomotors move the two eyelids through the eyelid transmission mechanism; and the further servomotor drives both eyeballs up and down together. The upper jaw is set above the lower jaw, and the motor turns the rotating shaft to drive the lower jaw. The olfactory sensor is mounted on the stand between the eyeball movement unit and the jaw movement unit; one end of the hose connects to the olfactory sensor and the other end to the cylinder; the drive motor is rotationally coupled to the gas-drive lead screw; the nut rides on the lead screw and moves along the gas-drive guide shaft, driving the piston fixed to it and realizing the robot's sense of smell; one end of the intake pipe connects to the cylinder. The six drive servomotors are all mounted on the head skeleton formed by the front support plate, rear support plate, and stand, and are connected through the facial-skin rope-and-pulley drives to the corresponding control points of the elastic facial skin. The facial shell and elastic facial skin match the contour of a human face and are mounted, from inside to outside, on the front of the eyeball and jaw movement units to form the outer contour of the device. The motion control card is mounted on the rear support plate, and the speech-recognition microcomputer on the upper end of the front support plate. The sensor perception system outputs the sensed information to the main control computer for processing; the control system software derives the control quantities of the corresponding motors from the artificial emotion model and, through the motion control card (motor controller), outputs PWM pulses that drive the motors to the designated positions, thereby realizing the robot's human-computer interaction function and its various emotional responses.
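As a hedged illustration of this sensing-to-actuation pipeline, a minimal control-loop sketch follows. All class and method names are hypothetical stand-ins for the components named above (the CCD/microphone/olfactory sensors, the artificial emotion model, and the motion control card), not identifiers from the patent.

```python
# Hypothetical sketch of the sense -> emotion model -> actuate loop.
# Every name here is illustrative; only the data flow mirrors the text above.

class BehaviorController:
    def __init__(self, sensors, emotion_model, motion_card):
        self.sensors = sensors            # CCD cameras, microphone, olfactory sensor
        self.emotion_model = emotion_model
        self.motion_card = motion_card    # wraps the PWM motion control card

    def step(self):
        stimulus = self.sensors.read()                       # perceive external signals
        emotion = self.emotion_model.update(stimulus)        # appraise via emotion model
        targets = self.emotion_model.motor_targets(emotion)  # per-motor target angles
        for motor_id, angle in targets.items():
            self.motion_card.move_to(motor_id, angle)        # PWM position command
```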

The behavior control method of the above humanoid-head robot device with a human-computer interaction function is realized in the following steps:

Step 1: perceive external emotional signals through the sensor perception system (CCD sensors and microphone), and analyze the perceived signals, extract features, and recognize them.

Step 2: pass the recognized emotional signals to the artificial emotion model. The artificial emotion model comprises three parts: the robot's emotion generation, emotion interaction, and emotion expression. Emotion generation comprises four modules: stimulus signal acquisition, emotion definition, emotion drive, and emotion conversion. Two thresholds α and β are defined in the emotion drive: α is the activation threshold of an emotion, and β is its saturation threshold. Transitions between emotional states must take into account three influences: external factors, internal factors, and the previous emotional state.
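A minimal sketch of the two-threshold emotion drive described above follows. The patent only specifies an activation threshold α and a saturation threshold β; the update rule, decay constant, and numeric values below are assumptions made for illustration.

```python
# Sketch of the emotion drive with activation/saturation thresholds.
# ALPHA/BETA correspond to the alpha/beta of the text; values are assumed.

ALPHA = 0.2   # activation threshold: below this the emotion stays dormant
BETA = 0.9    # saturation threshold: intensity never exceeds this

def drive_emotion(intensity, stimulus, decay=0.05):
    """Update one emotion's intensity; all values lie in [0, 1]."""
    intensity = min(max(0.0, intensity - decay) + stimulus, BETA)  # saturate at beta
    active = intensity >= ALPHA        # only activated emotions drive behavior
    return intensity, active
```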

On the basis that each state of the artificial emotion model is explicit and that some state transitions can be fully determined by trigger events, an extended finite state machine is used to realize the state transitions between emotions. A set of variable attributes is added to each defined state, yielding an extended finite state machine (EFSM). Once the emotion interaction model has been analyzed with the EFSM, the variable composition of every function in the artificial emotion model can be determined precisely, variable-definition conflicts can be effectively avoided, and a basis is provided for the subsequent control of the robot's behavior.
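The following is a minimal EFSM sketch in the sense described above: states carry a variable attribute set V that the behavior layer consumes. The concrete states, trigger events, and attribute values are illustrative assumptions, not taken from the patent.

```python
# Sketch of an extended finite state machine (EFSM) for emotion transitions.
# States, events and the attribute set V below are assumed examples.

class EmotionEFSM:
    def __init__(self):
        self.state = "normal"
        # V: per-state variable attributes consumed by the behavior layer
        self.attributes = {
            "normal": {"control_points": "neutral", "voice": None},
            "happy":  {"control_points": "smile",   "voice": "greeting"},
            "angry":  {"control_points": "frown",   "voice": "warning"},
        }
        # (current state, trigger event) -> next state
        self.transitions = {
            ("normal", "praise"): "happy",
            ("normal", "insult"): "angry",
            ("happy",  "insult"): "angry",
            ("angry",  "praise"): "normal",
        }

    def fire(self, event):
        self.state = self.transitions.get((self.state, event), self.state)
        return self.attributes[self.state]   # variable set V for the new state
```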

Step 3: compute from the artificial emotion model the control quantities (corresponding rotation angles) of the relevant drive motors, map them to the robot's facial expression and human-like mouth shape, and thereby obtain the behavior through which the robot expresses its emotion.

The present invention has the following beneficial technical effects: it reproduces human facial expressions and also has anthropomorphic multi-sensory functions such as smell, touch, and vision. Through the elastic facial skin it realizes the robot's basic facial expressions and dynamic mouth shapes. A behavior control method based on the emotion model realizes the robot's behavior control: the sensors in the device perceive external emotional stimuli, the artificial emotion model computes and recognizes them, and the control system produces the robot's behavioral responses, giving the robot a human-computer interaction function. The most distinctive feature of the invention is that the humanoid-head robot is designed at a 1:1 scale to an adult head, with a compact structure.

Brief Description of the Drawings

Fig. 1a is a perspective view of the robot of the present invention; Fig. 1b is a perspective view of the robot (for clarity, the facial shell, elastic facial skin, etc. are omitted). Fig. 2a is a perspective view of the robot's jaw movement unit; Fig. 2b is a front perspective view of the same unit. Fig. 3 is a perspective view of the robot's eyeball movement unit. Fig. 4a shows the positions of the skin feature points defined on the robot's facial skin; Fig. 4b is a schematic diagram of how the robot's facial expressions and mouth shapes are formed. Fig. 5 is a schematic diagram of the robot's human-computer interaction principle. Fig. 6 is a block diagram of the robot's behavior-control hardware. Fig. 7a is a flow chart of the robot's behavior-control software; Fig. 7b-1 shows the speech signal when the robot says "Nice to meet you", and Fig. 7b-2 the rotation angle of the jaw drive motor during that utterance. Figs. 8a-8c illustrate the fuzzy neural network method for facial expression recognition. Fig. 9 illustrates the speech recognition method. Fig. 10 shows the structure of the robot's artificial emotion model. Fig. 11 shows the robot's emotion interaction model based on a finite state machine. Fig. 12a shows some of the basic mouth shapes the robot can express; Fig. 12b shows the robot's expressions at successive moments while saying "Nice to meet you"; Fig. 12c is a group of photographs from the robot's human-computer interaction experiments.

Detailed Description of the Embodiments

Embodiment 1: As shown in Figs. 1a, 1b, 2a, 2b, 3, 4a, 4b, and 6, the humanoid-head robot device with a human-computer interaction function of this embodiment consists of three parts: the humanoid-head robot body, the robot behavior control system, and the sensor perception system. The robot body comprises an eyeball movement unit 1, a jaw movement unit 61, an artificial lung device 71, a facial expression and mouth-shape driving mechanism 81, a front support plate 7, a rear support plate 6, a stand 51, a facial shell 17, and elastic facial skin 18. The eyeball movement unit 1 consists of two eyeballs 12, an eyeball transmission mechanism, two eyeball servomotors 14, two eyelids 13, an eyelid transmission mechanism, two eyelid servomotors 16, and a servomotor 29. The jaw movement unit 61 consists of an upper jaw 8, a lower jaw 9, a motor 27, and a rotating shaft 28. The artificial lung device 71 consists of a hose 19, a cylinder 20, a piston 21, a nut 22, a gas-drive guide shaft 23, a gas-drive lead screw 25, a drive motor 26, and an intake pipe 92. The facial expression and mouth-shape driving mechanism 81 consists of a first drive servomotor 35, a second drive servomotor 34, a third drive servomotor 33, a fourth drive servomotor 40, a fifth drive servomotor 41, a sixth drive servomotor 43, and several sets of facial-skin rope-and-pulley drives.

The robot behavior control system comprises control system hardware and control system software; the hardware comprises a main control computer 91 and a motion control card 5, and the software implements the behavior control method.

The sensor perception system comprises two small CCD sensors 3, a speech-recognition single-chip microcomputer 4, and an olfactory sensor 24.

The front support plate 7 and rear support plate 6 are arranged in parallel and fixed to the stand 51 to form the head skeleton of the humanoid-head robot; the eyeball movement unit 1 and the jaw movement unit 61 are mounted on the stand 51 from top to bottom. A small CCD sensor 3 embedded in each eyeball 12 forms the robot's binocular vision; each eyelid 13 is set above the corresponding eyeball 12; the two eyeball servomotors 14 turn the two eyeballs 12 left and right through the eyeball transmission mechanism; the two eyelid servomotors 16 move the two eyelids 13 through the eyelid transmission mechanism; and the servomotor 29 drives both eyeballs 12 up and down together. The upper jaw 8 is set above the lower jaw 9, and the motor 27 turns the rotating shaft 28 to drive the lower jaw 9. The olfactory sensor 24 is mounted on the stand 51 between the eyeball movement unit 1 and the jaw movement unit 61; one end of the hose 19 connects to the olfactory sensor 24 and the other end to the cylinder 20; the motor 26 is rotationally coupled to the gas-drive lead screw 25; the nut 22 rides on the lead screw 25 and moves along the gas-drive guide shaft 23, driving the piston 21 fixed to it and realizing the robot's sense of smell; one end of the intake pipe 92 connects to the cylinder 20. The first drive servomotor 35, second drive servomotor 34, third drive servomotor 33, fourth drive servomotor 40, fifth drive servomotor 41, and sixth drive servomotor 43 are all mounted on the head skeleton formed by the front support plate 7, rear support plate 6, and stand 51, and are connected through the facial-skin rope-and-pulley drives to the corresponding control points of the elastic facial skin 18. The facial shell 17 and elastic facial skin 18 match the contour of a human face and are mounted, from inside to outside, on the front of the eyeball movement unit 1 and jaw movement unit 61 to form the outer contour of the device. The motion control card 5 is mounted on the rear support plate 6, and the speech-recognition microcomputer 4 on the upper end of the front support plate 7. The sensor perception system outputs the sensed information to the main control computer 91 for processing; the control system software derives the control quantities of the corresponding motors from the artificial emotion model and, through the motion control card 5 (motor controller), outputs PWM pulses that drive the motors to the designated positions, thereby realizing the robot's human-computer interaction function and its various emotional responses.

The humanoid-head robot device of this embodiment is 162 mm long, 156 mm wide, and 184 mm high, weighs 2.8 kg, and has 14 degrees of freedom in total. The eyeball movement unit 1 and jaw movement unit 61 realize the robot's human-like facial organ movements; the motor 27, through the rotating shaft 28, drives the lower jaw 9 to realize the mouth movement. In the artificial lung device, the motor 26 turns the gas-drive lead screw 25 and the nut 22 moves along the gas-drive guide shaft 23, driving the piston 21 fixed to the nut 22 and realizing the robot's sense of smell; odors such as alcohol, smoke, and ammonia can be identified, and the artificial lung device can be placed at a suitable position in the humanoid robot's chest. Fig. 3 shows the eyeball movement unit 1: eyeball movement has two degrees of freedom and eyelid movement has one. A small CCD sensor 3 is embedded in each eyeball 12 to form the robot's binocular vision. The eyeballs can move at up to 500 deg/s and the eyelids at up to 900 deg/s. Given the limited space available inside the device, the behavior control uses eyeball servomotors 14, eyelid servomotors 16, and servomotor 29 that are small, deliver comparatively high torque, and are easy to position-control; synchronous belts serve as their transmission, and, considering the support of the motor drive shafts, bearing supports 15 were designed to stiffen the mechanism. The eye mechanism is left-right symmetric: each eyeball servomotor 14 turns its eyeball 12 left and right through the first sheave 31 and second sheave 11; each eyelid servomotor 16 drives its eyelid 13 through the third sheave 30 and fourth sheave 33; and the servomotor 29 drives both eyeballs 12 up and down together.

Considering both the communication between the servo control card and the main control computer and the rational allocation and effective use of the card's hardware resources for integrating the robot system, the SSC-32 was selected as motion control card 5; it can coordinate up to 32 servomotors. The motion-control hardware is the SSC-32 board itself. This servo controller communicates with the PC over an RS232 serial port; host software on the PC sends control command signals to the controller, which then outputs PWM signals of a given duty cycle, so that multiple servomotors can be controlled individually or simultaneously. The card's command set is compact, its control resolution is 0.09°/μs, and it supports both position and speed control of the servos.
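A hedged sketch of driving the SSC-32 from the host PC follows. The "#<ch>P<pulse>T<time>" command syntax follows the SSC-32 documentation; the port name, baud rate, and the linear angle-to-pulse mapping below are assumptions for illustration.

```python
# Sketch: position command to one SSC-32 servo channel over RS232.

import serial  # pyserial

ser = serial.Serial("COM1", 115200, timeout=1)  # serial link to the SSC-32 (assumed port/baud)

def move_servo(channel, angle_deg, ms=500):
    """Move one servo to angle_deg (0..180) over ms milliseconds."""
    pulse = int(500 + (angle_deg / 180.0) * 2000)       # map to 500..2500 us pulse width
    ser.write(f"#{channel}P{pulse}T{ms}\r".encode("ascii"))  # e.g. "#5P1500T500"

move_servo(5, 90)   # e.g. center a jaw servo assumed to sit on channel 5
```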

The speech-recognition single-chip microcomputer 4 is an SPCE061A. The humanoid-head robot uses the main control computer combined with the SPCE061A microcomputer 4 as its auditory system. The SPCE061A, a 16-bit microprocessor from Sunplus with 32 K words of embedded flash and high processing speed, conveniently performs speech recognition and speech-signal processing. The system's communication circuits mainly comprise a 32-bit I/O circuit and a universal asynchronous receiver-transmitter (UART) circuit; the UART realizes the communication and data transfer between the microcomputer and the main control computer. The voice prompts used during recognition are implemented with the API speech functions provided by Sunplus. Template matching during recognition uses the Viterbi algorithm, a forward search that, given an observation sequence, finds the best state sequence in the model, i.e., selects the template with the highest output probability as the result. Template training is computationally heavy and is performed on the main control computer: speech samples are collected by the SPCE061A system and transferred over the RS232 communication module to the main control computer for storage, which keeps the speech features used in training consistent with those used in recognition and thus reduces errors caused by hardware inconsistencies.
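To make the template-matching step concrete, here is a minimal log-domain Viterbi sketch for scoring one MFCC observation sequence against one word template (HMM); the best-scoring template wins, as described above. The model parameters and the `templates` structure are assumptions.

```python
# Log-domain Viterbi scoring of an observation sequence against one HMM.

import numpy as np

def viterbi_score(log_pi, log_A, log_B):
    """log_pi: (N,) initial log-probs; log_A: (N,N) transition log-probs;
    log_B: (T,N) per-frame emission log-likelihoods of the MFCC frames."""
    T, N = log_B.shape
    delta = log_pi + log_B[0]          # best path score ending in each state
    for t in range(1, T):
        delta = np.max(delta[:, None] + log_A, axis=0) + log_B[t]
    return delta.max()                 # likelihood of the best state sequence

# Recognition: pick the word whose HMM template yields the highest score, e.g.
#   best_word = max(templates, key=lambda w: viterbi_score(*templates[w]))
```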

The olfactory sensor 24 is an FIS-series gas sensor.

Embodiment 2: The sensor perception system of this embodiment further includes a tactile sensor arranged in the middle of the forehead. The other components and connections are the same as in Embodiment 1.

Embodiment 3: The sensor perception system of this embodiment further includes two temperature sensors arranged on the left and right sides of the forehead, respectively. The other components and connections are the same as in Embodiment 1 or 2.

Embodiment 4: The small CCD sensors 3 capture video using DirectShow, the COM-based streaming-media development kit that Microsoft introduced as the successor to ActiveMovie and Video for Windows. The image acquisition card is mounted on the motherboard via PCI; the robot's vision, built on the Windows software development platform supplied with the small CCD sensors 3, performs recognition of the external environment, here chiefly the recognition of human facial expressions. The main functions of the small CCD sensors 3 are:

(1) static and dynamic image acquisition, including parameter setting of the image acquisition card, reading and storing bitmap images, and multi-channel image acquisition control, display, and switching;

(2) the dynamic and static image analysis algorithms and the facial expression recognition algorithm involved. The rest is the same as in Embodiment 1.
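For a compact illustration of the capture-and-store step in function (1): the patent's implementation uses DirectShow on Windows, and OpenCV's VideoCapture is used here purely as a stand-in for that pipeline; the device index and file name are assumptions.

```python
# Illustrative single-frame capture (OpenCV stand-in for the DirectShow pipeline).

import cv2

cap = cv2.VideoCapture(0)            # first camera / capture-card channel (assumed)
ok, frame = cap.read()               # grab one frame for expression analysis
if ok:
    cv2.imwrite("frame.bmp", frame)  # bitmap storage, as in function (1)
cap.release()
```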

Embodiment 5: As shown in Figs. 1a, 1b, 4a, 4b, 12a, and 12b, the facial expressions and human-like mouth shapes of this embodiment are realized as follows:

Step 1: define skin feature points a, a-1, b, b-1, c, c-1, d, d-1, e, e-1, g on the elastic facial skin 18; these points are the motion control points.

Step 2: place a slider at each motion control point and connect it to the corresponding control point of the elastic facial skin 18; each slider slides in a corresponding guide groove formed in the facial shell 17, and the orientation of each groove determines the direction of the force applied at that control point. One end of each rope of the facial-skin rope-and-pulley drive is connected to a slider, the other end to the corresponding drive servomotor.

Step 3: realize the different basic facial expressions and robot mouth shapes through combinations of the expression control points and changes in their displacements.

To realize expression control, a skin-movement guide plate 10 with guide grooves may also be provided at the corresponding control points. To produce the robot's facial expressions and mouth shapes, the skin feature points (a, a-1, b, b-1, c, c-1, d, d-1, e, e-1, g) are defined on the facial skin 18. In the figures, the dots represent the control points set on the robot's skin and the arrows the directions in which they move. In the facial expression design, basic expressions and mouth shapes are realized by controlling the combinations of these control points and the displacement and direction of their motion, simulating the bidirectional movement of human muscles. In the actual mechanism, the facial-skin drive rope sets are connected to the control points, and different basic expressions are produced through combinations of the expression control points and changes in their displacements. Table 1 gives the assignment of each control point to its facial-skin drive rope set and drive servomotor; a sketch of composing expressions from these control points follows the table below. To use the space efficiently, the motion of the control points is left-right symmetric (the fourth drive servomotor 40 and fifth drive servomotor 41 are arranged symmetrically about the centerline; 38 is symmetric with 38-1, and 39 with 39-1).

Table 1. Assignment of skin control points to drive servomotors and facial-skin drive sheaves

| Skin control points | Drive servomotor | Facial-skin drive sheaves |
|---------------------|------------------|---------------------------|
| a, a-1              | 35               | 2-1, 2-2                  |
| b, b-1              | 34               | 37-1, 37-2                |
| c, c-1              | 33               | 36-1, 36-2                |
| d, d-1              | 40               | 38, 39                    |
| e, e-1              | 41               | 38-1, 39-1                |
| g                   | 43               | 42                        |
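The sketch below composes an expression from control-point displacements. The point-to-servo pairing follows Table 1; the displacement values and the two example expressions are illustrative assumptions, not calibrated figures from the patent.

```python
# Sketch: expression -> control-point displacements -> servo commands.

POINT_TO_SERVO = {"a": 35, "b": 34, "c": 33, "d": 40, "e": 41, "g": 43}  # Table 1

# expression -> per-control-point displacement (mm, assumed values); the
# symmetric points (a-1, b-1, ...) mirror these on the other side of the face.
EXPRESSIONS = {
    "happy":    {"c": 3.0, "d": 2.0, "g": 1.5},
    "surprise": {"a": 2.5, "b": 2.0, "g": 3.0},
}

def servo_commands(expression):
    """Map one named expression to {servo id: displacement} commands."""
    return {POINT_TO_SERVO[p]: disp for p, disp in EXPRESSIONS[expression].items()}

print(servo_commands("happy"))   # e.g. {33: 3.0, 40: 2.0, 43: 1.5}
```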

Embodiment 6: As shown in Figs. 1a, 1b, 5, 6, 7a, 7b-1, 7b-2, 10, 11, 12a, 12b, and 12c, the behavior control method of the humanoid-head robot device with a human-computer interaction function of this embodiment is realized in the following steps:

Step 1: perceive external emotional signals through the sensor perception system (CCD sensors and microphone), and analyze the perceived signals, extract features, and recognize them. The emotional signals include basic human facial expressions and speech signals; basic facial expressions are recognized with a fuzzy neural network structure, and speech signals with a CHMM speech-recognition model structure.

Step 2: pass the recognized emotional signals to the artificial emotion model. The model comprises three parts: the robot's emotion generation, emotion interaction, and emotion expression. Emotion generation comprises four modules: stimulus signal acquisition, emotion definition, emotion drive, and emotion conversion. Two thresholds α and β are defined in the emotion drive: α is the activation threshold of an emotion and β its saturation threshold. Transitions between emotional states must take into account three influences: external factors, internal factors, and the previous emotional state. In emotion generation, sensors realize the human perceptual functions and are used to sense external events. Since vision and hearing can perceive most information about the external environment, the humanoid-head robot realizes these perceptual functions through visual and auditory sensors. Emotion expression includes facial expression and speech.

Emotion interaction is built on finite-state-machine theory: on the basis that each state of the emotion model is explicit and that some state transitions can be fully determined by trigger events, a finite state machine realizes the state transitions between emotions. The purpose of establishing emotion interaction is to control the behavior of the humanoid-head robot so that it responds according to its emotional state. Once the emotional state machine is established, the main functions and variables it uses should therefore be determined clearly and precisely. To this end, the basic concept of the finite state machine is extended by adding a set of variable attributes to each defined state, yielding an extended finite state machine (EFSM). Once the emotion interaction model has been analyzed with the EFSM, the variable composition of every function in the emotion model can be determined precisely, variable-definition conflicts can be effectively avoided, and a basis is provided for the subsequent control of the robot's behavior. In Fig. 8, E1 represents the individual's initial emotional state and the conditions represent the input states; according to the current emotional state and the input state, the emotional state of the emotion carrier changes and the corresponding behavior is produced. The voice and expression in the behavior represent the variable set V attached to the state.

Step 3: compute from the artificial emotion model the control quantities (corresponding rotation angles) of the relevant drive motors, map them to the robot's facial expression and human-like mouth shape, and thereby obtain the behavior through which the robot expresses its emotion.

Embodiment 7: As shown in Figs. 8a, 8b, and 8c, the fuzzy neural network of this embodiment recognizes basic human facial expressions as follows. The network input layer has 6 nodes, corresponding to the facial expression feature values {θ1, θ2, θ3, θ4, L1, L2}; the output layer has 7 nodes for the seven basic expressions (happy, surprised, sad, angry, disgusted, afraid, and normal). The desired network output sets the i-th output node to 1 and all other output nodes to 0, while the actual output is some concrete value around the desired interval. By competitive selection, the input sample is assigned to the class of the output node with the largest value in the actual output; if several output nodes share the maximum, the sample is rejected. Because of individual differences and variations in expression, the relative positions of the feature points are not fixed but somewhat variable, and factors such as the changing distance between the person and the camera during image acquisition alter the feature values; dimensionless values are therefore used as the features for facial expression recognition, where θ̄_i = θ_i / Δθ and L̄_i = L_i / ΔL. The rest is the same as in Embodiment 6.
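A minimal sketch of this decision rule follows: normalize the six features as above, run the already-trained network, take the largest output, and reject when the maximum is not unique. The forward pass is left as a stub, since the patent does not give the network weights.

```python
# Sketch of the competitive-selection decision rule with rejection.

import numpy as np

LABELS = ["happy", "surprise", "sad", "angry", "disgust", "fear", "normal"]

def classify(theta, L, d_theta, d_L, forward):
    """theta: 4 angle features; L: 2 distance features;
    d_theta, d_L: normalization constants (the Δθ and ΔL of the text);
    forward: trained network mapping 6 inputs to 7 outputs (stubbed here)."""
    x = np.concatenate([np.asarray(theta) / d_theta, np.asarray(L) / d_L])
    y = forward(x)                           # 7 outputs, ideally near one-hot
    winners = np.flatnonzero(y == y.max())
    if len(winners) > 1:
        return None                          # multiple maxima -> reject
    return LABELS[winners[0]]                # competitive selection
```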

Embodiment 8: As shown in Fig. 9, the CHMM speech-recognition model of this embodiment recognizes speech signals as follows. A speech signal can usually be regarded as observations generated by a sequence of HMM states, each observation being one frame of MFCC parameters. During recognition, endpoint detection of the speech signal is performed using the short-time average energy and the short-time zero-crossing rate; the classic Baum-Welch algorithm solves the HMM parameter-estimation problem, and recognition uses a dynamic-programming algorithm, the Viterbi algorithm. The rest is the same as in Embodiment 6.
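A minimal sketch of endpoint detection with short-time energy and zero-crossing rate follows; the frame size and both thresholds are assumptions, since the patent does not specify them.

```python
# Sketch: speech endpoint detection via short-time energy and zero-crossing rate.

import numpy as np

def endpoints(x, frame=256, e_thr=0.01, z_thr=0.25):
    """x: 1-D numpy array of audio samples; returns (start, end) in samples."""
    frames = x[: len(x) // frame * frame].reshape(-1, frame)
    energy = (frames ** 2).mean(axis=1)                         # short-time energy
    zcr = (np.diff(np.sign(frames), axis=1) != 0).mean(axis=1)  # zero-crossing rate
    voiced = np.flatnonzero((energy > e_thr) | (zcr > z_thr))
    if voiced.size == 0:
        return None                                   # no speech detected
    return voiced[0] * frame, (voiced[-1] + 1) * frame
```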

Principle of the method of the present invention: external emotional signals are perceived through the CCD sensors and microphone, and the corresponding signals are recognized. From the elements of the variable set V of the extended finite state machine in the emotion model, such as the facial expression control points and the basic mouth shapes, the behavior by which the robot expresses its emotion is obtained. Some of the basic mouth shapes this robot can express are shown in Fig. 12a; each mouth shape corresponds to the pronunciation of a Chinese pinyin syllable. According to artificial emotion model theory, external stimuli influence the robot's behavior; in the experiments these stimuli comprised speech and visual signals. Depending on the stimulus, the robot carries out different emotional interactions with people, here chiefly the interaction of language and the robot's facial expressions. Driven by the emotion model, the robot responds accordingly with expressions and spoken answers. The control method realizes the robot's "human-like" behavior through the interaction of three parts: emotion, drive, and behavior; in this architecture the drive decides "what to do" and the emotion decides "how to do it". Fig. 7a is the flow chart of the device's behavior-control software. After the robot fuses the facial expression and speech information collected by its sensors, the control software obtains the control quantities of the drive motors from the artificial emotion model and controls the robot's behavioral expression. Figs. 7b-1 and 7b-2 show the rotation angle of the jaw drive motor while the robot says "Nice to meet you"; in the figure, 0-11.8 s corresponds to the human greeting the robot with "Hello". The device recognizes external speech signals with the speech-recognition microcomputer and produces the corresponding triggered reply. When replying, the reply time of each isolated word and its corresponding mouth shape are derived from the reply sentence, giving the corresponding rotation angles of the drive motors.

Claims (10)

1. A humanoid-head robot device with a human-computer interaction function, the device consisting of three parts: a humanoid-head robot body, a robot behavior control system, and a sensor perception system; characterized in that:
The humanoid-head robot body comprises an eyeball movement unit (1), a jaw movement unit (61), an artificial lung device (71), a facial expression and mouth-shape driving mechanism (81), a front support plate (7), a rear support plate (6), a stand (51), a facial shell (17), and elastic facial skin (18); the eyeball movement unit (1) consists of two eyeballs (12), an eyeball transmission mechanism, two eyeball servomotors (14), two eyelids (13), an eyelid transmission mechanism, two eyelid servomotors (16), and a servomotor (29); the jaw movement unit (61) consists of an upper jaw (8), a lower jaw (9), a motor (27), and a rotating shaft (28); the artificial lung device (71) consists of a hose (19), a cylinder (20), a piston (21), a nut (22), a gas-drive guide shaft (23), a gas-drive lead screw (25), a drive motor (26), and an intake pipe (92); the facial expression and mouth-shape driving mechanism (81) consists of a first drive servomotor (35), a second drive servomotor (34), a third drive servomotor (33), a fourth drive servomotor (40), a fifth drive servomotor (41), a sixth drive servomotor (43), and several sets of facial-skin rope-and-pulley drives;
The robot behavior control system comprises control system hardware and control system software; the hardware comprises a main control computer (91) and a motion control card (5), and the software implements the behavior control method;
The sensor perception system comprises two small CCD sensors (3), a speech-recognition single-chip microcomputer (4), and an olfactory sensor (24);
The front support plate (7) and rear support plate (6) are arranged in parallel and fixed to the stand (51) to form the head skeleton of the humanoid-head robot; the eyeball movement unit (1) and jaw movement unit (61) are mounted on the stand (51) from top to bottom; a small CCD sensor (3) embedded in each eyeball (12) forms the robot's binocular vision; each eyelid (13) is set above the corresponding eyeball (12); the two eyeball servomotors (14) turn the two eyeballs (12) left and right through the eyeball transmission mechanism; the two eyelid servomotors (16) move the two eyelids (13) through the eyelid transmission mechanism; and the servomotor (29) drives both eyeballs (12) up and down together; the upper jaw (8) is set above the lower jaw (9), and the motor (27) turns the rotating shaft (28) to drive the lower jaw (9); the olfactory sensor (24) is mounted on the stand (51) between the eyeball movement unit (1) and the jaw movement unit (61); one end of the hose (19) is connected to the olfactory sensor (24) and the other end to the cylinder (20); the drive motor (26) is rotationally coupled to the gas-drive lead screw (25); the nut (22) is mounted on the lead screw (25) and moves along the gas-drive guide shaft (23), driving the piston (21) fixed to the nut (22) and realizing the robot's sense of smell; one end of the intake pipe (92) is connected to the cylinder (20); the first drive servomotor (35), second drive servomotor (34), third drive servomotor (33), fourth drive servomotor (40), fifth drive servomotor (41), and sixth drive servomotor (43) are all mounted on the head skeleton formed by the front support plate (7), rear support plate (6), and stand (51), and the six drive servomotors are connected through the facial-skin rope-and-pulley drives to the corresponding control points of the elastic facial skin (18); the facial shell (17) and elastic facial skin (18) match the contour of a human face and are mounted, from inside to outside, on the front of the eyeball movement unit (1) and jaw movement unit (61) to form the outer contour of the humanoid-head robot device; the motion control card (5) is mounted on the rear support plate (6), and the speech-recognition microcomputer (4) on the upper end of the front support plate (7); the sensor perception system outputs the sensed information to the main control computer (91) for processing; the control system software derives the control quantities of the corresponding motors from the artificial emotion model and, through the motion control card (5), outputs PWM pulses that drive the corresponding motors to the designated positions, thereby realizing the robot's human-computer interaction function and its various emotional responses.
2. The humanoid-head robot device with a human-computer interaction function according to claim 1, characterized in that the sensor perception system further comprises a touch sensor arranged at the middle of the forehead.
3. The humanoid-head robot device with a human-computer interaction function according to claim 1 or 2, characterized in that the sensor perception system further comprises two temperature sensors arranged on the left and right sides of the forehead, respectively.
4. The humanoid-head robot device with a human-computer interaction function according to claim 1, characterized in that the small CCD sensors (3) capture video using DirectShow, the COM-based streaming-media development kit that Microsoft introduced as the successor to ActiveMovie and Video for Windows; the image acquisition card is mounted on the motherboard via PCI, and the robot's vision, built on the Windows software development platform supplied with the small CCD sensors (3), performs recognition of the external environment, here chiefly the recognition of human facial expressions; the main functions of the small CCD sensors (3) are:
(1) static and dynamic image acquisition, including parameter setting of the acquisition card, reading and storing bitmap images, and multi-channel image acquisition control, display, and switching;
(2) the dynamic and static image analysis algorithms and the facial expression recognition algorithm involved.
5. The humanoid-head robot device with a human-computer interaction function according to claim 1, characterized in that the facial expressions and human-like mouth shapes are realized as follows:
Step 1: define skin feature points a, a-1, b, b-1, c, c-1, d, d-1, e, e-1, g on the elastic facial skin (18); these points are the motion control points;
Step 2: place a slider at each motion control point and connect it to the corresponding control point of the elastic facial skin (18); each slider slides in a corresponding guide groove formed in the facial shell (17), the orientation of each groove determining the direction of the force applied at the corresponding motion control point; one end of each rope of the facial-skin rope-and-pulley drive is connected to a slider, the other end to the corresponding drive servomotor;
Step 3: realize the different basic facial expressions and robot mouth shapes through combinations of the expression control points and changes in their displacements.
6. A behavior control method for the humanoid-head robot device with human-computer interaction function according to claim 1, characterized in that the method is realized according to the following steps:
Step 1: perceive external emotion signals through the sensor perception system, then analyze the perceived emotion signals, extract their features, and recognize them;
Step 2: pass the recognized emotion signals to the artificial emotion model; the artificial emotion model mainly comprises three parts: emotion generation, emotion interaction, and emotion expression; emotion generation mainly comprises four modules: stimulus-signal acquisition, emotion definition, emotion drive, and emotion conversion; two thresholds α and β are defined in the emotion drive, where α is the activation threshold of an emotion and β is the saturation threshold of an emotion; the conversion of affective states takes three aspects into account: external factors, internal factors, and past affective states;
once every state of the artificial emotion model is definite and the transition relations and trigger events between the states can be fully determined, the state transitions between emotions are realized with an extended finite state machine: a set of variable attributes is added to the state definitions, yielding an extended finite state machine (EFSM); after the emotion interaction model has been analyzed with the EFSM, the variable composition of each function in the artificial emotion model can be determined exactly and variable-definition conflicts can be avoided effectively, which provides the basis for the next step of robot behavior control;
Step 3: calculate the relevant control quantities of the corresponding drive motors according to the artificial emotion model to obtain the facial expression and humanoid lip shape of the robot, and thereby obtain the behavior through which the robot expresses its emotion.
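A minimal EFSM sketch of the emotion drive in step 2, assuming invented state names, stimulus strengths, and values for the thresholds α and β (the patent does not publish these numbers):

```python
# Minimal EFSM sketch of the claim-6 emotion drive: stimulus strength
# accumulates per emotion as an EFSM variable attribute, activates the
# state above alpha, and saturates at beta; the past state persists
# between stimuli. Names and numeric values are illustrative only.
ALPHA, BETA = 0.3, 1.0   # activation and saturation thresholds

class EmotionEFSM:
    def __init__(self):
        self.state = "neutral"
        self.intensity = {"happy": 0.0, "angry": 0.0}  # variable attributes

    def stimulate(self, emotion, strength):
        """One transition: external stimulus + internal state -> new state."""
        level = min(self.intensity[emotion] + strength, BETA)  # saturate at beta
        self.intensity[emotion] = level
        if level >= ALPHA:          # activation threshold crossed
            self.state = emotion
        return self.state

fsm = EmotionEFSM()
print(fsm.stimulate("happy", 0.2))  # 0.2 < alpha: still 'neutral'
print(fsm.stimulate("happy", 0.2))  # 0.4 >= alpha: now 'happy'
```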
7. The behavior control method for the humanoid-head robot device with human-computer interaction function according to claim 6, characterized in that the emotion signals comprise basic human facial expressions and speech signals.
8. The behavior control method for the humanoid-head robot device with human-computer interaction function according to claim 7, characterized in that basic human facial expressions are recognized with a fuzzy neural network structure, and speech signals are recognized with a CHMM speech recognition model structure.
9. The behavior control method for the humanoid-head robot device with human-computer interaction function according to claim 8, characterized in that the detailed process of recognizing basic human facial expressions with the fuzzy neural network structure is: the network input layer has 6 nodes, namely the facial expression feature values [the formulas defining the six dimensionless feature values appear as images in the original]; the output layer has 7 nodes, one for each basic facial expression: happy, surprised, sad, angry, disgusted, fearful, and neutral; the expected network output is 1 at the i-th output node and 0 at the remaining output nodes, while the actual output is a concrete value near the expected one; classification is by competition selection: an input sample is assigned to the class of the output node with the maximum value in the actual network output, and if several output nodes share the maximum simultaneously the sample is rejected; because the relative positions of the feature points are not fixed but vary with individual differences and with the expression itself, and because factors such as the person-to-camera distance change the feature values during image acquisition, dimensionless feature values are used for facial expression recognition.
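A minimal sketch of the competition-selection rule of claim 9 (the fuzzy network itself is omitted; the activation values below are invented):

```python
# Sketch of the competition-selection rule only: pick the output node
# with the maximum activation, and refuse to judge when the maximum is
# shared by several nodes. Activation values are illustrative.
EXPRESSIONS = ["happy", "surprised", "sad", "angry",
               "disgusted", "fearful", "neutral"]

def classify(outputs):
    """Return the winning expression, or None (rejection) on a tie."""
    peak = max(outputs)
    winners = [i for i, v in enumerate(outputs) if v == peak]
    if len(winners) != 1:
        return None                 # multiple maxima: rejection
    return EXPRESSIONS[winners[0]]

print(classify([0.1, 0.7, 0.2, 0.1, 0.0, 0.1, 0.3]))  # 'surprised'
print(classify([0.5, 0.5, 0.1, 0.1, 0.0, 0.1, 0.3]))  # None (tie)
```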
10. The behavior control method for the humanoid-head robot device with human-computer interaction function according to claim 8, characterized in that the detailed process of recognizing speech signals with the CHMM speech recognition model structure is: a speech signal can usually be regarded as observations produced from a sequence of HMM states, each observation being one frame of MFCC parameters; during recognition, endpoint detection of the speech signal is realized through the short-time average energy and the short-time zero-crossing rate; the classical Baum-Welch algorithm is adopted for the HMM parameter-estimation problem, and a dynamic programming algorithm, the Viterbi algorithm, is adopted in the recognition process.
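A minimal sketch of the endpoint detection named in claim 10, with an assumed frame length and thresholds (the MFCC, Baum-Welch, and Viterbi stages are omitted):

```python
# Sketch of endpoint detection via frame-wise short-time average
# energy and short-time zero-crossing rate, thresholded to find the
# first and last speech frames. Frame length and thresholds are
# illustrative assumptions.
import numpy as np

def frame_features(signal, frame_len=256):
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, frame_len)]
    energy = [float(np.mean(f ** 2)) for f in frames]        # short-time energy
    zcr = [float(np.mean(np.abs(np.diff(np.sign(f))) > 0))   # zero-crossing rate
           for f in frames]
    return energy, zcr

def endpoints(signal, e_thresh=0.01, z_thresh=0.25):
    """Indices of the first and last frame judged to contain speech."""
    energy, zcr = frame_features(signal)
    voiced = [i for i, (e, z) in enumerate(zip(energy, zcr))
              if e > e_thresh or z > z_thresh]
    return (voiced[0], voiced[-1]) if voiced else None

tone = 0.5 * np.sin(2 * np.pi * 440 * np.linspace(0, 1, 8000))
signal = np.concatenate([np.zeros(2000), tone, np.zeros(2000)])
print(endpoints(signal))   # prints the (first, last) voiced frame indices
```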
CN2009100724055A 2009-06-30 2009-06-30 Humanoid-head robot device with human-computer interaction function and behavior control method thereof Expired - Fee Related CN101618280B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009100724055A CN101618280B (en) 2009-06-30 2009-06-30 Humanoid-head robot device with human-computer interaction function and behavior control method thereof

Publications (2)

Publication Number Publication Date
CN101618280A CN101618280A (en) 2010-01-06
CN101618280B true CN101618280B (en) 2011-03-23

Family

ID=41511797

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009100724055A Expired - Fee Related CN101618280B (en) 2009-06-30 2009-06-30 Humanoid-head robot device with human-computer interaction function and behavior control method thereof

Country Status (1)

Country Link
CN (1) CN101618280B (en)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101885181B (en) * 2010-05-24 2011-08-10 上海电气集团股份有限公司 Robot for playing cucurbit flute and control method thereof
CN102825604A (en) * 2012-09-18 2012-12-19 广西玉林正方机械有限公司 Motion control programming system of six-DOF (degree of freedom) robot
CN103413113A (en) * 2013-01-15 2013-11-27 上海大学 A method for intelligent emotional interaction of service robots
CN103358310B (en) * 2013-07-04 2015-05-27 上海大学 Mouth movement mechanism of humanoid robot
FR3011375B1 (en) * 2013-10-01 2017-01-27 Aldebaran Robotics METHOD FOR DIALOGUE BETWEEN A MACHINE, SUCH AS A HUMANOID ROBOT, AND A HUMAN INTERLOCUTOR, COMPUTER PROGRAM PRODUCT AND HUMANOID ROBOT FOR IMPLEMENTING SUCH A METHOD
CN103853071B (en) * 2014-01-20 2016-09-28 南京升泰元机器人科技有限公司 Man-machine facial expression interactive system based on bio signal
CN104091370B (en) * 2014-08-01 2017-02-15 哈尔滨工业大学 Human head imitation portrait robot capable of changing facial form and face organs as well as mathematical modeling method and control method thereof
CN106325228B (en) * 2015-06-26 2020-03-20 北京贝虎机器人技术有限公司 Method and device for generating control data of robot
WO2016206642A1 (en) * 2015-06-26 2016-12-29 北京贝虎机器人技术有限公司 Method and apparatus for generating control data of robot
CN105244042B (en) * 2015-08-26 2018-11-13 安徽建筑大学 A kind of speech emotional interactive device and method based on finite-state automata
CN105843118B (en) * 2016-03-25 2018-07-27 北京光年无限科技有限公司 A kind of robot interactive method and robot system
CN106203344A (en) * 2016-07-12 2016-12-07 北京光年无限科技有限公司 A kind of Emotion identification method and system for intelligent robot
CN106371583B (en) * 2016-08-19 2020-07-31 北京儒博科技有限公司 Control method and device of intelligent equipment
CN106426196B (en) * 2016-08-31 2018-01-30 佛山博文机器人自动化科技有限公司 A kind of service-delivery machine head part
CN108115678B (en) * 2016-11-28 2020-10-23 深圳光启合众科技有限公司 Robot and its motion control method and device
CN108732943A (en) * 2017-04-18 2018-11-02 深圳市丰巨泰科电子有限公司 Expression robot man-machine interaction method
CN107813294B (en) * 2017-10-31 2019-01-01 梅其珍 A kind of nonmetal flexible anthropomorphic robot
CN108577866A (en) * 2018-04-03 2018-09-28 中国地质大学(武汉) A kind of system and method for multidimensional emotion recognition and alleviation
CN108714902A (en) * 2018-06-28 2018-10-30 香港中文大学(深圳) Apery expression robot head construction and robot head control system
CN108563138A (en) * 2018-07-04 2018-09-21 深圳万发创新进出口贸易有限公司 A kind of intelligent domestic system
CN110103234B (en) * 2019-04-30 2024-05-24 广东工业大学 Humanoid facial expression robot
SG10201910949PA (en) * 2019-11-21 2020-11-27 Lian Wang Artificial Intelligence Brain
CN112991886B (en) * 2021-03-09 2023-05-09 湖北工业大学 Barrier-free communication and learning assistance system for the deaf-mute

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1262844A1 (en) * 2001-06-01 2002-12-04 Sony International (Europe) GmbH Method for controlling a man-machine-interface unit
EP1406135A1 (en) * 2001-06-01 2004-04-07 Sony International (Europe) GmbH Man-machine interface unit control method; robot apparatus; and its action control method
JP2004174692A (en) * 2002-11-29 2004-06-24 Mitsubishi Heavy Ind Ltd Man-machine robot and control method of man machine robot
CN200998593Y (en) * 2007-02-14 2008-01-02 杨建良 Face act device of the robot
CN101020315A (en) * 2007-03-22 2007-08-22 上海交通大学 Head system of anthropomorphic robot
CN101373380A (en) * 2008-07-14 2009-02-25 浙江大学 A humanoid robot control system and robot manipulation method

Also Published As

Publication number Publication date
CN101618280A (en) 2010-01-06

Similar Documents

Publication Publication Date Title
CN101618280B (en) Humanoid-head robot device with human-computer interaction function and behavior control method thereof
CN101474481B (en) emotional robot system
CN102354349B (en) Human-machine interaction multi-mode early intervention system for improving social interaction capacity of autistic children
KR100813668B1 (en) How to express emotions in android robot
CN104493827A (en) Intelligent cognitive robot and cognitive system thereof
WO2002076687A1 (en) Robot device and control method therefor, and storage medium
CN101458778A (en) Artificial head robot with facial expression and multiple perceptional functions
CN102566474A (en) Interaction system and method for robot with humanoid facial expressions, and face detection and tracking method
CN109773807B (en) Motion control method and robot
CN117093689A (en) Question-answering robot device and method for simulating human image and response
Shidujaman et al. “roboquin”: A mannequin robot with natural humanoid movements
CN113246156A (en) Child accompanying robot based on intelligent emotion recognition and control method
Silva et al. Mirroring and recognizing emotions through facial expressions for a RoboKind platform
Weiguo et al. Development of the humanoid head portrait robot system with flexible face and expression
Pelachaud et al. Expressive gestures displayed by a humanoid robot during a storytelling application
De Beir et al. Enhancing nao expression of emotions using pluggable eyebrows
Esfandbod et al. Human-robot interaction based on facial expression imitation
Kirandziska et al. Human-robot interaction based on human emotions extracted from speech
CN214955998U (en) Voice interaction equipment based on deep learning
Bottega et al. Jubileo: an open-source robot and framework for research in human-robot social interaction
Shaif et al. Vision and voice-based human-robot interactive interface for humanoid robot
Naeem et al. An AI based voice controlled humanoid robot
Hu et al. Implementation of voice interaction based on questions and answers for guide robot in exhibition hall
Cid et al. A real-time synchronization algorithm between Text-To-Speech (TTS) system and Robot Mouth for Social Robotic Applications
Aidokas Human–robot interaction communication control system using Lithuanian language

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20110323