
CN104769645A - virtual companion - Google Patents


Info

Publication number
CN104769645A
Authority
CN
China
Prior art keywords
virtual
companion
user
virtual companion
partner
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201480002468.2A
Other languages
Chinese (zh)
Inventor
王·维克多
邓硕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhe Rui Co Ltd
Original Assignee
Zhe Rui Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US13/939,172 external-priority patent/US20140125678A1/en
Application filed by Zhe Rui Co Ltd filed Critical Zhe Rui Co Ltd
Publication of CN104769645A publication Critical patent/CN104769645A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 Input arrangements for video game devices
    • A63F13/21 Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/214 Input arrangements for video game devices characterised by their sensors, purposes or types for locating contacts on a surface, e.g. floor mats or touch pads
    • A63F13/2145 Input arrangements for video game devices characterised by their sensors, purposes or types for locating contacts on a surface, e.g. floor mats or touch pads, the surface being also a display device, e.g. touch screens
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/825 Fostering virtual characters
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics, for computer-aided diagnosis, e.g. based on medical expert systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Multimedia (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Pathology (AREA)
  • Epidemiology (AREA)
  • Theoretical Computer Science (AREA)
  • Primary Health Care (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • User Interface Of Digital Computer (AREA)
  • Medical Treatment And Welfare Office Work (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Rehabilitation Tools (AREA)

Abstract

The virtual companion described in this invention responds naturally to touch input. With real-time human assistance behind the scenes, it can carry on intelligent spoken conversations with the user. The invention can be applied to elder care, promoting mental health through continuous companionship.

Description

Virtual Companion

Cross-Reference to Related Applications

Provisional patent application 61/774,591, filed March 8, 2013.

Provisional patent application 61/670,154, filed July 11, 2012.

The contents of the above applications are incorporated herein by reference; neither constitutes prior art on which this application is based.

Background of the Invention

In China, the United States, and around the world, the elderly population is growing rapidly. Studies of healthcare systems show that society pays far less attention to the elderly than expected. At present, owing to a shortage of geriatric-care professionals, the services offered by the elder-care industry cannot meet the physical and mental health needs of the elderly.

Research shows that loneliness and isolation from society and other people are a clear contributor to dementia, depression, physical decline, and even death. The severity of the problem is increasingly apparent: in the United States, one in eight people over 65 suffers from Alzheimer's disease. Depression among the elderly is also common: 9.4% of elderly people living alone show depressive symptoms, and the rate among nursing-home residents is as high as 42%.

Moreover, studies show that the physical and mental health problems of the elderly also take a psychological and physical toll on their families. Because nursing labor in the United States is expensive, averaging as much as 21 US dollars per hour, many families cannot afford round-the-clock care; they must either leave the elderly person alone at home for long stretches or sacrifice their own rest and working hours to provide care. In the United States, the total cost to families of hiring caregivers, or of cutting back their own working hours to care for the elderly, runs as high as 3 trillion US dollars per year.

On the other hand, the technology products currently on the market that aim to promote social interaction for the elderly usually require a degree of computer and Internet knowledge and operating experience. They are unsuitable for the elderly, especially those unfamiliar with computers or already suffering from dementia.

Summary of the Invention

This invention comprises: a virtual companion device; an operating procedure by which multiple people collaborate to remotely control the virtual companion; and a system that connects back-office staff to the virtual companion over a network.

These functions, and the practical advantages of each, are described in detail below.

Description of the Figures

Figure 1: A 2D example of the virtual companion. The companion is presented to the user as a pet; the user can feed it, stroke it, and otherwise interact with it by tapping and dragging with a finger.

Figure 2: Example of the feeding interface: the user feeds a drink to the virtual companion by touch.

Figure 3: A 3D example of the companion's variable appearance: the companion can change its appearance according to the user's preferences. The figure shows a virtual companion in the form of a dog; its raised foreleg is a reaction to the user's touch.

Figure 4: A 3D example of the companion's variable appearance: the companion can change its appearance according to the user's preferences. The figure shows a virtual companion with a cartoon appearance.

Figure 5: Example of the picture-display function: at the user's request, the companion can fetch pictures from the Internet and display them.

Figure 6: Example of the back-office login screen of the virtual companion system.

Figure 7: The back-office multi-window monitoring interface, used by back-office staff for remote control. Each window shows a bandwidth-optimized live video stream from a client (the eight picture windows in the figure). The interface also includes: an audio-level indicator bar (the bar below each picture window); a list of scheduled items (the list areas below five of the eight windows); alert notifications (the areas marked with an X below the other three windows); current service-period information (the line of text at the upper left); user and companion information (the line of text above each video window); a chat window between back-office staff (the scrollbar area at the lower right); indicators of which users other staff members are currently serving (the hourglass icon in the upper-right window, and the hand icon with accompanying user name in the lower-right window); and a logout button (at the very top right of the interface).

Figure 8: The back-office one-on-one service interface, including: current service-period information (the line of text at the upper left); the live video stream from the client (a photo of the user is shown as an example); the point on which the companion's eyes are currently focused (the light circular area marked "Look" between the eyes in the user's photo); the interaction history (the two-column list to the right of the video stream); the pre-set schedule of user-companion conversations (the two-column table below the interaction history); the companion's appearance as rendered on the client (the penguin image below the video stream); a text box for entering content the companion should speak aloud (the rectangular area below the penguin image); companion action-control buttons (the twelve buttons flanking the penguin image); the companion state-setting area (three checkboxes and three sliders to the left of the left-hand buttons); the back-office team notice board (the lower-left area); the chat window between back-office team members (the lower-right area); a tabbed miscellaneous-information area (the tab marked "tab" to the left of the video window and the blank area below it); and a button for returning to the multi-window monitoring interface (the "Monitor All" button at the upper right).

Figure 9: A system diagram in Unified Modeling Language (UML), showing several possible modes of interaction between front-end users and back-office staff. The functions inside the frame marked "Prototype 1" have already been implemented as examples to verify the feasibility of the system.

Figure 10: A UML workflow diagram for back-office staff, describing the sequence of steps a staff member follows after logging in to the system and operating the interfaces of Figures 6-8.

Figure 11: A UML deployment diagram showing one feasible deployment. In this example, a central control server connects the front-end virtual companions with the back-end control interfaces: it connects to multiple front-end tablets to control the companions running on them, and to multiple computers in use by back-office staff. To reduce network latency, video and audio streams travel directly between the front-end tablets and the staff computers over a dedicated protocol (for example, RTMP) rather than through the central control server.

Detailed Description of the Invention

Virtual Companion System Front End

The virtual companion front end is presented to the user as a pet, helping the user form an emotional bond with it similar to the bond between people. Such a bond helps improve the physical and mental health of elderly people living alone, and also gives them an extremely easy way to obtain information from the Internet. Unlike a desktop computer, laptop, or conventional tablet application, the virtual companion requires no computer skills at all: the elderly user simply gives instructions in natural language. Implementations of the invention are described in detail below.

Presentation of the Virtual Companion

The virtual companion can be displayed in two dimensions (Figures 1 and 2) or three dimensions (Figures 3 and 4) on an LCD or OLED screen, a projector, or a liquid-crystal panel. Its appearance can be a cartoon (Figures 1 and 2), a lifelike animal (Figure 3), or something in between (Figure 4). It can take the form of a human or humanoid cartoon character; a realistic animal, such as a penguin (Figures 1 and 2) or a dog (Figure 3); any fictional creature, such as a dragon, a unicorn, or a ball; or even a blend of several creatures (the character in Figure 4 was generated by combining features of a dog, a cat, and a seal). The advantage of a fictional character is that the user holds no subconscious, preconceived notion of how the companion should or should not behave, so when the companion cannot perform some action, the user is not disappointed by an unmet expectation. Users can also create their own favorite companion character according to their tastes and imagination.

For a given user, the companion's appearance may be fixed, may vary randomly, or may be chosen according to the user's preferences, either when first starting to use the companion or at any point during use. A user who wishes to choose a companion can pick from a range of character types, including dogs, cats, aliens, and so on, and can then customize the character's colors, size, and body proportions through an interactive interface. In another usage scenario, the companion's appearance is set in advance by a different user, such as a family member or other guardian of the elderly person. Customization can also cover behavior unrelated to appearance, defined in detail below. Once the character is set, the companion can simply be shown to the user without any introduction, or it can be revealed on screen in a more creative and touching way: hatching from an egg, being lifted out of a gift box, and so on. Figure 4 shows one example of a companion character, a dog somewhere between lifelike and cartoon that has not yet been customized.

Technically, the companion can be rendered through a combination of 2D and 3D image rendering and video processing. Its body parts can be drawn as a set of 2D static images, with an independent displacement animation applied to each image to simulate body movement; or as a set of vector graphics, animated through mathematical vector transformations. In a 3D presentation, the companion's body is defined by a set of points, lines, and surfaces with attached material information, textured from 2D bitmaps. The companion can also be a physical robot fitted with actuators to drive its movements, touch sensors, and so on.
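The vector-transform approach mentioned above (animating 2D body parts mathematically rather than swapping static images) can be illustrated with a minimal sketch. The function name and the point data are illustrative only; the patent does not specify an implementation language or API.

```python
import math

def rotate_part(points, pivot, angle_rad):
    """Rotate a body part's 2D outline points around a joint pivot."""
    px, py = pivot
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    out = []
    for x, y in points:
        dx, dy = x - px, y - py
        out.append((px + dx * c - dy * s, py + dx * s + dy * c))
    return out

# Raise a foreleg by rotating its outline 30 degrees about the shoulder joint.
leg = [(0.0, 0.0), (0.0, -2.0)]  # shoulder point and paw point
raised = rotate_part(leg, (0.0, 0.0), math.radians(30))
```

Applying the same transform to each part independently, frame by frame, yields the simulated body movement the text describes.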

Defining the companion's appearance involves not only its static form but also its movements. A movement may be defined as a series of keyframe images of the companion in different body poses, or as 3D vector weights applied to a 3D character. In one exemplary embodiment (Figure 4), the companion is a 3D model built in 3D modeling software, with a set of movement definitions attached. The modeling software first defines the character's skeleton, built with reference to a real animal's skeleton so that the companion's movements can be defined from how real animal bones move. Additional virtual bones are added to the face so that facial expressions can be defined later. Once the skeleton is defined, movement keyframes are created by repositioning particular bones. The companion's movements include a default idle pose plus others such as turning the head side to side, nodding, lifting the chin, raising the left or right foreleg, sad and happy facial expressions, breathing, blinking, looking around, wagging the tail, barking, mouth movements while speaking, and so on. All model appearance and movement information can be exported in the common FBX file format. When the companion takes the form of a physical robot, its movements can be defined by predefining sequences of actuator motor states.
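The keyframe mechanism described above, where poses are defined by repositioned bones and blended at run time, might look like the following simplified sketch. The bone names and angle values are hypothetical, and real engines interpolate rotations more carefully (e.g. with quaternions):

```python
def blend_pose(pose_a, pose_b, t):
    """Linearly interpolate between two keyframe poses.

    Each pose maps a bone name to a joint angle in degrees; t in [0, 1].
    """
    return {bone: (1.0 - t) * pose_a[bone] + t * pose_b[bone] for bone in pose_a}

# Hypothetical keyframes: the default idle pose and a "greeting" pose.
idle  = {"head": 0.0,  "left_foreleg": 0.0,  "tail": 0.0}
greet = {"head": 10.0, "left_foreleg": 45.0, "tail": 20.0}

half = blend_pose(idle, greet, 0.5)  # the in-between frame at t = 0.5
```

Stepping t from 0 to 1 over successive frames produces the smooth transition between keyframes.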

As the user continues to use the companion, its model settings (bone sizes, skin texture, and so on) can be adjusted to portray the companion growing up from a young animal. The multi-touch interactions described below, as well as the movements defined above, can likewise change as the companion grows.

The Virtual Companion's Response to Multi-Touch

One key point of this invention is that the user can touch the virtual companion, and the companion reacts to the touch with a series of lifelike movements. The exemplary embodiment below assumes the companion software runs on a tablet (iPad, Android tablet, etc.) that accepts multi-touch input; on other platforms, equivalent behavior can be achieved from other kinds of input. For example, instead of a touch screen, the user can simulate different touch states with mouse movement; clicks of the left, middle, and right buttons, singly or together; click-and-drag; and so on. When the companion takes the form of a physical robot, the robot receives input through attached touch sensors. In the most basic embodiment, the companion is shown on a tablet's LCD screen; such touch screens detect which regions are currently being touched from changes in capacitance or resistance. One innovation of this invention is that when the screen detects touches at multiple points, the companion can react independently at each touched part (the head, the left foreleg, and so on), with the 3D software merging all the movements so that the user sees one smooth visual in which multiple parts move at once. This split-then-merge handling of multi-touch ultimately gives the user a realistic experience similar to interacting with a real pet. In addition, by analyzing touch pressure, stroke length, and similar information, the companion can distinguish different kinds of touch (stroking, poking, and so on) and react differently to each.
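The split-then-merge handling of multi-touch could be sketched as follows. The hit regions and per-part reactions are invented placeholders; a real implementation would test against the 3D envelopes described later rather than flat rectangles:

```python
# Hypothetical 2D hit rectangles: part name -> (x0, y0, x1, y1) in screen space.
PART_REGIONS = {
    "head": (40, 0, 80, 30),
    "left_foreleg": (30, 60, 50, 100),
}

# Hypothetical per-part reactions, expressed as bone-angle adjustments.
REACTIONS = {
    "head": {"head": 5.0},
    "left_foreleg": {"left_foreleg": 30.0},
}

def hit_part(x, y):
    """Return the body part under screen point (x, y), or None."""
    for part, (x0, y0, x1, y1) in PART_REGIONS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return part
    return None

def merge_reactions(touches):
    """Split: map each touch point to a part. Merge: combine the reactions."""
    pose = {}
    for x, y in touches:
        part = hit_part(x, y)
        if part is not None:
            pose.update(REACTIONS[part])  # independent reactions, one merged pose
    return pose
```

With two simultaneous touches, one on the head and one on the left foreleg, the merged pose moves both parts in the same frame.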

The first step in making the companion respond to touch is monitoring touch events. When the companion runs inside a game engine or other software environment, the state of the touch sensors is queried in the program's main loop; touch signals can also be received through trigger mechanisms, callback mechanisms, and the like.
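The two mechanisms above can be combined: the main loop polls the platform's touch state once per frame and forwards it to registered callbacks. A minimal sketch, with all names illustrative:

```python
class TouchDispatcher:
    """Forwards the touch state polled each frame to registered handlers."""

    def __init__(self):
        self._handlers = []

    def on_touch(self, handler):
        """Register a callback invoked once per touch point per frame."""
        self._handlers.append(handler)

    def main_loop_step(self, touches):
        """Called from the program's main loop with this frame's touch points."""
        for point in touches:
            for handler in self._handlers:
                handler(point)

# Usage: a handler that records every touch point it is given.
seen = []
dispatcher = TouchDispatcher()
dispatcher.on_touch(seen.append)
dispatcher.main_loop_step([(120, 45), (300, 210)])  # one frame's poll result
```

Handlers registered this way would implement the body-part mapping described in the next step.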

触摸交互实现的第二个步骤是获取触摸位置对应的虚拟伴侣身体位置。在平板电脑或其他二维可触摸平面设备中,设备可以告知软件每次触摸发生时被触摸点相对整个触摸屏的坐标位置。当虚拟伴侣以二维图像方式呈现给用户时,可以通过比对当前被触摸位置的坐标,以及代表虚拟伴侣每个身体部位的几何包络线所包含的坐标点集位置,从而得知虚拟伴侣的哪个身体部位被触摸。当虚拟伴侣以3维图形呈现给用户时,对三维模型中的每一个骨骼都需要建立三维的几何包络体。例如,对于一个模型的腿部骨骼,可以在建模软件中建立一个胶囊状的包络体(一个圆柱体,两个顶端为半球型)。包络体的长、宽、高设置保证整个腿部骨骼被完全包覆在包络体内部,并且在模型中定义包络体与骨骼不会存在相对位移,即包络体能够随时根据骨骼的移动而移动到相应位置。不同的身体部位可以定义不同的包络体形状。例如,对于一个短粗的身体部位,包络体可以定义为球形。理论上,包络体的几何形状可以与模型中定义的、用户可见的各身体部位的几何形状完全重合。但是由于在定义身体部位几何形状时,考虑到美观程度,几何形状包含的点、线、面数量众多,因此如果包络体与之相同,则包络体数目繁多的点、线、面定义将会对后续软件运行带来较大的计算量负担。此外,将包络体定义为简单的几何形状、并且总体积略大于实际身体部位几何形状的好处还在于,当触摸信号返回的位置信息与实际值有一些误差时,程序仍然可以判定当前用户触摸到了虚拟伴侣的某个身体部位。为了容许误差,另一种方法是定义一个与该身体部位形状相似,但大小略大的包络体,但是此解决方案仍存在计算量负担重的问题。在定义包络体时,可以为三维模型的每一根骨骼定义一个包络体,也可以只为那些程序中设定可以活动的身体部位对应的骨骼定义包络体。为了更好的接收多点触摸信号,还可以对一个骨骼定义多个包络体,每一个对多点触摸中的一个点做出反应。因此,对于一个虚拟伴侣来说,可触摸部分与身体部位并不是一对一的关系,而是多对一的关系。此外,如果虚拟伴侣的运行环境不支持对一个骨骼建立多个包络体,可以通过创建多个虚拟骨骼,每个骨骼附带一个包络体的方式实现对多点触摸的处理。为了提高虚拟伴侣软件运行的流畅性,所有的包络体最好在建模是预先设置,从而在运行时,呈现虚拟伴侣的三维引擎只需要逐帧记录每一个包络体的旋转与位移信息。当虚拟伴侣以三维方式呈现时,2维的触摸信号坐标被投射到三维的虚拟伴侣身上,第一个与投射线相交的包络体则对应了用户想要触摸的虚拟伴侣的身体部位。此外,还可以对屏幕中的其他物体定义包络体,并且定义其他物体与虚拟伴侣的联动关系。当用户触摸其他物体时,有联动关系的虚拟伴侣的身体部位也将做出相应反应。触摸输入除了可以与宠物的身体部位相对应,还可以进一步细分为“开始触摸”(如果此触摸信号在前一帧没有出现),“停止触摸”(上一帧中有触摸信息,当前帧中没有),“连续触摸”(当前帧中的触摸信息在上一帧中也存在)。在“连续触摸”状态,虚拟伴侣软件会记录在上一帧中此触摸信号的二维坐标位置、投射对应的三维身体部位信息等等。The second step of the touch interaction is to obtain the body position of the virtual partner corresponding to the touch position. In a tablet computer or other two-dimensional touchable planar devices, the device can inform the software of the coordinate position of the touched point relative to the entire touch screen when each touch occurs. When the virtual partner is presented to the user in the form of a two-dimensional image, the virtual partner can be known by comparing the coordinates of the current touched position with the coordinate points contained in the geometric envelope representing each body part of the virtual partner which part of the body was touched. 
When the virtual partner is presented to the user in 3D graphics, a 3D geometric envelope needs to be established for each bone in the 3D model. For example, for the leg bones of a model, a capsule-shaped envelope (a cylinder with two hemispherical ends) can be created in the modeling software. The length, width, and height settings of the envelope ensure that the entire leg bones are completely covered inside the envelope, and there will be no relative displacement between the envelope and the bones in the model, that is, the envelope can be adjusted at any time according to the bone's Move to the corresponding position. Different body parts can define different envelope shapes. For example, for a stubby body part, the volume can be defined as a sphere. In theory, the geometry of the volume can exactly coincide with the geometry of each body part that is visible to the user as defined in the model. However, when defining the geometric shape of body parts, considering the degree of aesthetics, the geometric shape contains a large number of points, lines, and surfaces, so if the envelope is the same, the definition of points, lines, and surfaces with a large number of envelopes will be It will bring a large computational burden to the subsequent software operation. In addition, the advantage of defining the envelope as a simple geometric shape with a total volume slightly larger than the actual body part geometry is that when there is some error between the position information returned by the touch signal and the actual value, the program can still determine that the current user touches to a body part of the virtual partner. To allow for error, another approach is to define an enclosing volume that is similar in shape to the body part, but slightly larger in size, but this solution is still computationally expensive. 
When defining the envelope, you can define an envelope for each bone in the 3D model, or you can define the envelope only for the bones corresponding to the body parts that are set to be active in the program. In order to better receive multi-touch signals, you can also define multiple envelopes for a bone, each of which responds to a point in the multi-touch. Therefore, for a virtual companion, there is not a one-to-one relationship between touchable parts and body parts, but a many-to-one relationship. In addition, if the operating environment of the virtual companion does not support the establishment of multiple envelopes for one bone, multiple virtual bones can be created, and each bone is accompanied by an envelope to handle multi-touch. In order to improve the smoothness of the virtual companion software, it is best to pre-set all the envelopes during modeling, so that at runtime, the 3D engine that presents the virtual companion only needs to record the rotation and displacement information of each envelope frame by frame . When the virtual companion is presented in 3D, the 2D touch signal coordinates are projected onto the 3D virtual companion, and the first envelope that intersects the projection line corresponds to the body part of the virtual companion that the user wants to touch. In addition, an envelope can also be defined for other objects on the screen, and a linkage relationship between other objects and the virtual partner can be defined. When the user touches other objects, the body parts of the linked virtual partner will also respond accordingly. In addition to corresponding to the pet's body parts, the touch input can be further subdivided into "start touch" (if the touch signal did not appear in the previous frame), "stop touch" (the touch information in the previous frame, the current frame None in ), "continuous touch" (the touch information in the current frame also exists in the previous frame). 
In the "continuous touch" state, the virtual companion software records the 2D coordinates of the touch signal in the previous frame, the corresponding projected 3D body part, and related information.
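The envelope picking and start/stop/continuous classification described above can be sketched as follows. This is a simplified illustration only: envelopes are reduced to spheres, the projection ray is assumed to have a unit direction, and the names (`Envelope`, `pick_part`, `classify`) are hypothetical, since the text does not prescribe an implementation.

```python
import math

class Envelope:
    """Spherical bounding volume attached to one bone of the 3D model."""
    def __init__(self, part, center, radius):
        self.part = part          # body-part name, e.g. "head"
        self.center = center      # (x, y, z) world position, updated per frame
        self.radius = radius

def pick_part(ray_origin, ray_dir, envelopes):
    """Return the body part whose envelope the projected touch ray hits first."""
    best, best_t = None, float("inf")
    for env in envelopes:
        # Ray-sphere intersection: solve |o + t*d - c|^2 = r^2 (unit d, so a = 1).
        oc = [o - c for o, c in zip(ray_origin, env.center)]
        b = 2 * sum(d * v for d, v in zip(ray_dir, oc))
        c = sum(v * v for v in oc) - env.radius ** 2
        disc = b * b - 4 * c
        if disc < 0:
            continue                       # ray misses this envelope
        t = (-b - math.sqrt(disc)) / 2     # nearest intersection distance
        if 0 <= t < best_t:
            best, best_t = env.part, t
    return best

def classify(prev_parts, cur_parts):
    """Split this frame's touched parts into start / continuous / stop events."""
    return {
        "start": cur_parts - prev_parts,       # new this frame
        "continuous": cur_parts & prev_parts,  # present in both frames
        "stop": prev_parts - cur_parts,        # gone this frame
    }
```

Because hits are ordered by ray distance, the nearest envelope wins, matching the "first envelope intersected by the projection ray" rule above.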

By buffering and analyzing the touch state over a short period of time, further properties of a touch can be abstracted. In the virtual companion software, a "touch buffer" is defined for the envelope of each body part and stores several frames of the user's touch history for that part. In an exemplary embodiment, three quantitative indicators can be computed by analyzing the information stored in each touch buffer: persistence, intermittent count, and displacement. The persistence indicator starts at 0; when the touch buffer of a body part received a touch signal in the previous frame and receives another in the current frame, the indicator is incremented by 1. Otherwise it is decremented by 1 until it reaches 0, so after the user stops touching a body part the persistence indicator eventually falls back to 0. The persistence indicator is thus proportional to how long the user has continuously touched that body part. The intermittent count also starts at 0.
If there is no touch signal in the current frame but the previous frame contained a touch-end signal, the intermittent count is incremented by 1. Otherwise it is reduced by a fixed value x (x < 1) each frame until it reaches 0. For a given x, when the user taps (repeatedly touches and releases) the screen faster than a certain frequency, the intermittent count keeps increasing; it therefore measures how many times in succession the user has tapped a given body part of the virtual companion. In a practical implementation, a fixed upper limit should be placed on the intermittent count. The displacement can be defined either as a multi-dimensional vector recording the distance moved along each dimension, or as a scalar recording the trajectory length of a continuous touch. In both cases it can be computed as follows: the displacement starts at 0; if a touch exists in both the current and the previous frame, the distance between the touched coordinates in the two frames is added in the current frame. When the touch ends, the displacement can be reduced frame by frame either by a fixed value or by a value related to the current persistence indicator (for example, a multiple of the persistence indicator) until it reaches 0. The displacement describes the length of the touch trajectory while the user continuously strokes a body part of the virtual companion. In summary, persistence, intermittent count, and displacement quantitatively describe touch events that span multiple frames.
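The per-frame update of the three indicators can be sketched as a minimal touch buffer. The class name, the decay constant `TAP_DECAY` (the x < 1 above), and the cap `TAP_CAP` are illustrative assumptions, not values from the text.

```python
class TouchBuffer:
    """Per-body-part indicators: persistence, intermittent count, displacement."""
    TAP_DECAY = 0.2      # x < 1: intermittent count decays by this per frame
    TAP_CAP = 10         # fixed upper limit on the intermittent count

    def __init__(self):
        self.persistence = 0.0
        self.taps = 0.0              # the "intermittent count"
        self.displacement = 0.0
        self.prev_pos = None         # touch position in the previous frame
        self.just_released = False   # a touch ended in the previous frame

    def update(self, pos):
        """Call once per frame; pos is the (x, y) touch point or None."""
        if pos is not None and self.prev_pos is not None:
            # Touch in both frames: persistence grows, trajectory accumulates.
            self.persistence += 1
            dx = pos[0] - self.prev_pos[0]
            dy = pos[1] - self.prev_pos[1]
            self.displacement += (dx * dx + dy * dy) ** 0.5
        else:
            self.persistence = max(0.0, self.persistence - 1)
            self.displacement = max(0.0, self.displacement - 1.0)
        if pos is None and self.just_released:
            # No touch now, and a touch-end signal in the previous frame.
            self.taps = min(self.TAP_CAP, self.taps + 1)
        else:
            self.taps = max(0.0, self.taps - self.TAP_DECAY)
        self.just_released = (pos is None and self.prev_pos is not None)
        self.prev_pos = pos
```

Driving one buffer per envelope from the main loop reproduces the increase/decay behavior described above.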

Touch-state indicators other than persistence, intermittent count, and displacement can also be extracted by analyzing the underlying touch input signal, and the three indicators themselves are not limited to the calculation methods described above. For example, each indicator can grow or decay exponentially over time; or each increment can shrink as the current value grows, which removes the need for an explicit upper limit because growth slows as the value rises. To reduce computation, multi-touch can also be simplified to single touch: when two fingers touch the screen, the program processes only the first finger's touch information and ignores the second. In addition, random noise can be added to the indicator values in each touch buffer, or to their increments. Adding a certain amount of random noise to the touch buffers lets the virtual companion twitch or make other random movements of its own accord, and when several body parts respond to touch simultaneously it also makes the transitions between their motions more natural, avoiding visual jumps.

Using animation blending, the virtual companion converts multi-touch information into dynamic, lifelike motion responses based on the indicator values computed from the touch buffers in each pass of the program's main loop. Animation blending refers generally to a family of established techniques for merging several independent animations of a 3D character into one continuous motion. For example, a head-lowering motion of the virtual companion corresponds to a set of displacement and rotation animations of the neck bones over a period of time, while a head-turning-right motion corresponds to a different set of neck-bone displacement and rotation animations. Blending these two independent motions yields a combined motion in which the virtual companion lowers its head while turning it to the right, with an amplitude equal to the average of the bone displacements and rotations of the two source motions. An alternative implementation records relative rather than absolute bone displacements and rotations when defining each individual motion; blending is then the summation of these relative quantities, and the resulting motion has a larger amplitude than under the averaging scheme.
In both implementations, different weights can be assigned to the individual motions being blended, so that the final blended motion is visually biased toward the more heavily weighted ones.
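The two blending schemes can be illustrated on a single bone, reduced to one rotation angle for brevity. The function names are hypothetical; real engines blend full transforms per bone.

```python
def blend_absolute(angles, weights):
    """First scheme: weighted average of absolute bone angles."""
    total = sum(weights)
    if total == 0:
        return 0.0                      # no active pose: stay at rest
    return sum(a * w for a, w in zip(angles, weights)) / total

def blend_relative(idle, offsets, weights):
    """Second scheme: idle pose plus weighted sum of per-pose offsets.
    Summing offsets yields a larger combined amplitude than averaging."""
    return idle + sum(o * w for o, w in zip(offsets, weights))
```

With equal weights, averaging two poses of 30 and 10 degrees gives 20 degrees, while summing the same values as offsets from idle gives 40 degrees, which is the amplitude difference noted above.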

One innovation of the present invention is the use of multi-touch signals as the input control signals for animation blending. In an exemplary embodiment, the software predefines a set of poses for the virtual companion (the default idle pose being one of them). Each pose definition contains two keyframes: the first is the virtual companion in the idle pose, and the second is the virtual companion in the pose being defined. The differences in displacement and rotation of each bone of the model between these two keyframes can be used for relative-quantity animation blending. (For absolute-quantity blending, a single keyframe describing the state of each bone in the defined pose suffices.) Each defined pose corresponds to the virtual companion's response to a sustained single-point touch. For example, a pose that lifts the left front leg corresponds to the response when the left front leg is being continuously touched, and a pose that tilts the head to the right corresponds to the response when the left side of the face is being continuously touched. Both examples are responses to persistent touch. Another class of poses can be defined for the virtual companion's responses to intermittent touch.
For example, a pose in which the virtual companion tilts its head back can correspond to intermittent taps on its nose. Similarly, a set of poses can be defined for displacement touches.

While the program runs, all poses corresponding to persistent touches are blended with weights, where the weight of each pose is the persistence indicator of the touch it corresponds to. Thus, when a body part has not been touched for some time, its weight is 0, and the blended motion does not include the predefined reaction pose for that part. If the program predefines a set of persistent-touch reaction poses together with reasonable rules for increasing and decreasing the persistence indicator, then handling persistent touch alone already presents the user with a lifelike visual effect resembling a real pet reacting to being stroked. For example, when the user sweeps one finger across several body parts of the virtual companion in succession, the reaction pose of each part is triggered in turn, and as the finger leaves each part its reaction pose loses weight over time.
The user thus sees a series of body parts responding to the touch in a coherent sequence. On this basis, a set of reaction poses to intermittent touch can be defined to present a range of realistic emotional changes. For example, a pose in which the virtual companion retracts its left foreleg can be assigned to intermittent touches of the left foreleg, with a weight proportional to that touch's intermittent count. When the user taps the left foreleg rapidly and repeatedly, the intermittent count rises, the retract-left-foreleg pose gains weight in the overall blend, and the virtual companion visibly pulls its foreleg back. When the user taps the left foreleg slowly, the intermittent count stays low while the persistence indicator stays high, the lift-left-foreleg pose carries more weight, and the virtual companion raises the foreleg instead. Another way to handle intermittent touch is to sum, over a period of time, the intermittent counts received by all body parts that have no individually defined reaction pose; the sum corresponds to the number of random pokes the user has given the virtual companion. A "sad" facial expression pose is then defined whose weight grows with the poke count. Further emotional poses can be defined for intermittent touches of specific body parts: for example, a "happy" facial expression pose for intermittent touches of the head, which are then excluded from the poke count. Similarly, expression poses can be defined for displacement touches.
For example, for a virtual companion shaped like a pet dog, a happy expression pose can be defined for displacement touches of the chin. Through the definition and animation blending of such motions and expression poses, the virtual companion presents the user with a lifelike visual experience similar to interacting with a real pet. Without being told in advance, the user can also explore how the virtual companion responds to different touches by trying out the various possible touch locations and touch styles.
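The main-loop step that turns the touch-buffer indicators into blend weights, including the poke-count aggregation for the "sad" expression, might look like the following sketch. Pose names such as `hold_head` and the flat mapping are illustrative assumptions.

```python
def pose_weights(metrics, tap_pose_parts):
    """Compute blend weights from touch indicators.

    metrics: {body_part: (persistence, intermittent_count)}
    tap_pose_parts: parts that have their own intermittent-touch pose
                    (e.g. the head's "happy" expression), which are
                    excluded from the random-poke count.
    """
    weights = {}
    pokes = 0.0
    for part, (persistence, taps) in metrics.items():
        weights[f"hold_{part}"] = persistence      # persistent-touch pose
        if part in tap_pose_parts:
            weights[f"tap_{part}"] = taps          # dedicated tap pose
        else:
            pokes += taps                          # counts as a random poke
    weights["sad_face"] = pokes                    # sadness grows with pokes
    return weights
```

Feeding these weights into the blending functions each frame yields the behavior described above: stroked parts follow the finger coherently, while undirected poking accumulates into a sad expression.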

Beyond the multi-touch control of the virtual companion's motions described above, the following optional variations can also be used. For example, the weight of a pose can be determined jointly by the persistence, intermittent count, and displacement of the corresponding touch. Random variables can be added to the touch responses to make them less deterministic and more natural. The touch responses assigned to different body parts can be randomly interchanged over time; alternatively, the weights of the different touch responses of a single body part can vary over time according to a predefined random process, or according to the virtual companion's current emotional state. More complex touch responses can also be defined: for example, a bone position of the virtual companion can follow the position of a displacement touch, producing effects such as the companion's paw tracking the user's fingertip, or something resembling a handshake between the companion and the user.
In addition to static poses, animations with multiple keyframes can also be defined as responses to touch; for example, a continuous side-to-side head shake can be defined as the response to the user touching the companion's head. When blending animations, specific constraints must be imposed on the blend computation to keep the result believable. For example, when the blend includes the virtual pet's raise-left-foreleg pose, the weight of the raise-right-foreleg pose must be held at 0 so that the right foreleg stays on the ground and the result remains visually plausible. The program must also limit the amplitude of blended motions so that the 3D model of the virtual pet renders correctly throughout. When the virtual companion runs on a tablet or another device with an accelerometer, the accelerometer readings can be used to determine how the device is currently oriented, and the direction of gravity can serve as one of the input control parameters for the companion's motion responses. The device's camera can capture images of the user for gesture analysis, making gesture input another way to interact with the virtual companion. Ambient sound received through the device's microphone can likewise drive the companion's behavior; for example, when the sound is loud, the companion's ears flap back and forth.
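The forelegs constraint mentioned above amounts to a mutual-exclusion rule applied to the weight set before blending. A minimal sketch, with hypothetical pose names:

```python
def apply_constraints(weights):
    """Zero the right-foreleg raise whenever the left one is active,
    so that at least one foreleg always touches the ground."""
    w = dict(weights)
    if w.get("raise_left_foreleg", 0.0) > 0.0:
        w["raise_right_foreleg"] = 0.0
    return w
```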

In addition to the touch-response behaviors and poses described above, the virtual companion also predefines a set of spontaneous actions, such as blinking, breathing, tail wagging, barking, jumping, and speaking. These actions can be performed automatically even when no touch events occur.

When the virtual companion is implemented as a physical robot, touch input can be collected through the robot's touch sensors, and the motion blending algorithm can be used to control the robot's motors.

Emotional control of the virtual companion

Besides responding to the user's touch with motion, the virtual companion can also respond to a software-defined internal emotion model. In an exemplary embodiment, the emotion model is built on the PAD model proposed by Albert Mehrabian and James A. Russell. PAD stands for Pleasure-Arousal-Dominance; the model represents all emotion types along the three quantitative dimensions of pleasure, arousal, and dominance. Prior research has applied the PAD model to defining the facial expressions of virtual characters.

In an exemplary embodiment of the present invention, the virtual companion's main program maintains two sets of PAD values: a long-term PAD and a short-term PAD. The long-term PAD corresponds to the virtual companion's enduring personality traits, while the short-term PAD represents its current, temporary emotional state. The initial values of both can be set in any of the following ways: 1) to neutral defaults; 2) by the user; 3) by the user's guardian. At any given moment the short-term PAD may differ from the long-term PAD, but over time the short-term PAD converges toward the long-term PAD. The convergence can follow any complex functional relationship, or simply be linear in the current difference between the long-term and short-term values. Likewise, the long-term PAD tends to converge toward the short-term PAD, though more slowly; this allows the virtual companion's personality traits to change gradually as it continues to interact with the user. Combined with the correspondence between multi-touch input and motion and emotional responses described above, the virtual companion's short-term PAD values can be changed in the following ways:

When the number of pokes received by the virtual companion exceeds a certain threshold, the Pleasure value of the PAD decreases.

When the user intermittently touches a body part for which the virtual companion has a defined "happy" expression response, the Pleasure value increases.

When the user applies a displacement or continuous touch to a body part for which the virtual companion has a defined "happy" expression response, the Pleasure value increases.

Any touch input from the user can raise the PAD's Arousal value and lower its Dominance value; the amount by which the indicators change can differ for touches of different body parts.

Specific body parts of the virtual companion can be given PAD effects that differ from those above. For example, touch input to the virtual companion's eyes can lower Pleasure, while a touch to its chin can raise Pleasure sharply.

Besides changing in response to user input, the long-term PAD values can also change gradually over time. For example, Arousal can decline slowly over time; since there is no user touch input at night to raise it through touch responses, Arousal reaches its minimum in the early morning, matching the natural rhythm of biological emotion.

When the virtual companion program includes speech analysis, the short-term PAD values can also vary with the intonation of the user's speech. For example, when the user addresses the virtual companion in a harsh or commanding tone, the short-term Pleasure and Dominance values decrease. Voice and video input can further be analyzed for the user's current breathing rate, facial emotion, and so on, yielding the user's current arousal level, and the virtual companion's Arousal value can be adjusted to match.
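The linear form of the mutual convergence between short-term and long-term PAD described at the start of this section can be sketched in a few lines. The rates `k_short` and `k_long` are illustrative assumptions, chosen so the short-term values converge much faster than the long-term values drift.

```python
def step_pad(short, long_, k_short=0.1, k_long=0.001):
    """Advance both (P, A, D) triples by one tick of linear convergence.

    Each component moves toward its counterpart by a fraction of the
    current difference; the long-term drift is far slower, so personality
    changes only gradually under sustained interaction."""
    new_short = tuple(s + k_short * (l - s) for s, l in zip(short, long_))
    new_long = tuple(l + k_long * (s - l) for s, l in zip(short, long_))
    return new_short, new_long
```

Touch- and speech-driven adjustments would be applied to the short-term triple between ticks, with the long-term triple slowly absorbing them.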

Changes in the virtual companion's short-term PAD values can influence its behavior as follows:

When the Pleasure value rises above or falls below a certain level, the weight of the corresponding "happy" or "unhappy" facial expression pose among all blended motions increases, so the virtual companion visibly appears happy or unhappy even while performing other actions. Similarly, when the Arousal and Dominance values change, or when Pleasure, Arousal, and Dominance change together beyond certain ranges, the weights of other expressions can be raised accordingly; for example, when Pleasure is below a threshold, Arousal is above a threshold, and Dominance is high, the weight of the "angry" expression can be increased.

The Arousal value can affect the speed of the virtual companion's breathing motion: the higher the arousal, the faster the breathing. The amplitude of the breathing motion can also grow with the Arousal value, as can the amplitude of tail wagging and other motions.

As the Pleasure, Arousal, and Dominance values rise, the weight of the corresponding motions in the blend can also increase. For example, for a virtual companion shaped like a pet dog, the tail droops in the default idle pose; but as the companion's Pleasure, Arousal, and Dominance values rise, the weight of the "tail up" pose increases, and after blending with the idle pose the dog appears with its tail raised.

As the virtual companion software runs, the virtual companion also ages, accompanied by changes in its long-term PAD values; for example, long-term Arousal declines year by year. Conversely, the long-term PAD values can influence how quickly the virtual companion ages. For example, a virtual companion with a high long-term Pleasure value may age more slowly, or look younger than one with a lower Pleasure value, for instance by having more vividly colored fur.
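The threshold-based mapping from short-term PAD values to expression-pose weights described in this list can be sketched as follows, on a PAD scale of roughly -1 to 1. The threshold values and weight formulas are illustrative assumptions.

```python
def expression_weights(p, a, d):
    """Map a short-term (Pleasure, Arousal, Dominance) triple to
    blend weights for three expression poses."""
    w = {"happy": 0.0, "unhappy": 0.0, "angry": 0.0}
    if p > 0.5:
        w["happy"] = p - 0.5           # grows with pleasure above threshold
    elif p < -0.5:
        w["unhappy"] = -0.5 - p        # grows with displeasure below threshold
    # "Angry": low pleasure, high arousal, and high dominance together.
    if p < -0.3 and a > 0.5 and d > 0.5:
        w["angry"] = min(a, d)
    return w
```

The returned weights feed into the same animation-blending step as the touch-reaction poses, so an expression can overlay any ongoing motion.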

Caregiving needs of the virtual companion

The virtual companion can present virtual physiological needs to the user, with the intensity of each need growing over time. Virtual physiological needs include the need for food, for water, for excreting waste, for bathing and cleaning, for entertainment, and so on. The spontaneous behaviors mentioned above, such as sleeping, breathing, and blinking, can also be defined as physiological needs. The intensity of each need can be defined as a numeric variable in the main program whose value rises continuously in the main loop; alternatively, a timer can be created for each need that raises its value by a fixed amount at fixed intervals. The rate of increase can differ at different times of day and can also be affected by the current short-term PAD values.

The intensity of some physiological needs can be presented to the user directly by changing the weights of the corresponding poses and motions of the virtual companion. For example, as the need for sleep grows, the weight of the companion's drooping-eyelid motion keeps rising. The intensity of a physiological need can also affect the short-term PAD values.

Each need has a variable threshold, which can change with the time of day, with the current short-term PAD values, or according to some predefined random process. When the intensity of a need reaches its current threshold, the virtual companion exhibits the corresponding behavior. For a simple need such as blinking, when the intensity exceeds the threshold the companion blinks and the intensity is reset to 0; likewise, when the breathing need exceeds its threshold the companion takes a breath and resets the value to 0. When the sleep need exceeds its threshold, the companion enters a sustained sleep state, from which it can be woken by external sounds or touch signals.

In an exemplary embodiment, the more complex physiological needs must instead be satisfied through interaction with the user, helping to build a needing-and-being-needed emotional bond between the virtual companion and the user, similar to the relationship between a plant and its gardener or a pet and its owner. Research has shown that being needed has a positive effect on mental health for users, particularly for the elderly. When a complex need exceeds a certain threshold, the virtual companion can prompt the user to perform the corresponding interaction by exhibiting specific behaviors; for example, the need for entertainment can be shown by the companion jumping repeatedly. The companion can also prompt the user with specific sounds, such as a rumbling stomach for the food need.
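The intensity/threshold mechanism for both simple (self-satisfying) and complex (user-satisfied) needs can be sketched as follows. The growth rates, thresholds, and the `prompt_` naming are illustrative assumptions.

```python
class Need:
    """One physiological need with a growing intensity and a threshold."""
    def __init__(self, name, rate, threshold, simple=True):
        self.name = name
        self.rate = rate            # intensity gained per main-loop tick
        self.threshold = threshold  # may vary with time of day or short-term PAD
        self.simple = simple        # simple needs auto-satisfy (blink, breathe)
        self.intensity = 0.0

    def tick(self):
        """Advance one main-loop iteration; return a triggered action or None."""
        self.intensity += self.rate
        if self.intensity < self.threshold:
            return None
        if self.simple:
            self.intensity = 0.0        # e.g. blink once and reset
            return self.name
        # Complex need: keep prompting until the user's interaction
        # (feeding, cleaning, play) resets the intensity externally.
        return f"prompt_{self.name}"
```

A simple need like blinking fires and resets itself, while a complex need like food keeps emitting its prompt until the corresponding interaction lowers its intensity.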

Specific implementation examples of the interactions corresponding to different caregiving needs are listed below:

When the virtual companion's need for food exceeds its threshold, the user can be alerted by a stomach-rumbling sound; the companion can also show an unhappy or very hungry expression, and a container of food can be displayed in the program interface. Alternatively, the food container is always present in the interface, and when the companion needs food the container changes from closed to partially open, or flashes, to attract the user's attention. When the user touches the container it opens fully and displays a series of selectable food pictures, which the user can choose among by touching and sliding. As shown in Figure 1, the user performs the virtual feeding action by dragging any food picture onto the virtual companion. On receiving the food, the companion's body pose changes to an eating state. When feeding ends, the companion's food-need value decreases, by an amount that can depend on the kind of food the user selected. Different food choices can also affect the companion's long-term PAD values differently: for example, choosing meat raises the companion's Arousal, while choosing vegetables lowers it. Food choices can further influence the companion's virtual growth; for example, meat can make the companion's appearance more robust.

When the virtual companion's need for water exceeds a threshold, the user can be alerted by a raspy breathing sound while the companion displays an unhappy or thirsty expression and a water container appears in the program interface. When the user taps the container, the interface can display a variety of drinks for the user to choose from. The user can perform the watering action with a touch gesture similar to feeding, as shown in Figure 2. As with feeding, giving water can change the long-term PAD values accordingly and affect the companion's growth process. For example, unhealthy drinks can make the companion's figure obese and lower its long-term activation value, while sugary drinks can raise the activation component of its short-term PAD.

When the virtual companion's need to excrete exceeds a threshold, it can enter the excretion state on its own: that is, excrement appears on the screen. While excrement is present in the interface, the companion's pleasure value decreases, and it performs a series of actions showing that it dislikes the bad smell. The user can perform the cleaning action either by touching the image representing the excrement and sliding it off the screen, or by dragging a cleaning tool onto the excrement image.

When the virtual companion's need for cleaning exceeds a threshold, the companion can alert the user with a scratching motion, by showing stains on its body, by rendering a three-dimensional smoke effect rising from its body, by displaying a picture of a bathtub on the screen, and so on. The user can perform the cleaning function by dragging the companion into the bathtub with a touch gesture, or by touching the stains on the companion's body one by one, with each touched stain disappearing in turn.

When the virtual companion's need for entertainment exceeds a threshold, the companion can raise the activation component of its short-term PAD and alert the user by barking, jumping, or picking up a toy from the on-screen scene. The user can satisfy this need by playing interactive games with the companion. Game interactions can include the multi-touch interactions described above, among others. For example, one game can have the user use multi-touch to pull a toy out of the companion's mouth. Interactive games can greatly increase the pleasure and activation of the companion's short-term PAD, and can also directly increase the pleasure and activation of its long-term PAD.
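Each of the care needs above follows the same pattern: a need value accumulates over time and, once it crosses a threshold, triggers an alert behavior. A minimal sketch of that shared loop follows; the growth rates, threshold, and alert strings are illustrative assumptions, not values from the specification.

```python
# Generic need-threshold loop shared by the food, water, excretion,
# cleanliness, and entertainment behaviors described above.
# Growth rates and alert behaviors are illustrative assumptions.

NEEDS = {
    #  need            (growth per tick, alert behavior)
    "food":          (0.02, "play stomach-growl sound"),
    "water":         (0.03, "play raspy breathing sound"),
    "excretion":     (0.01, "enter excretion state"),
    "cleanliness":   (0.01, "show scratching animation"),
    "entertainment": (0.02, "bark and jump"),
}

THRESHOLD = 0.8

def tick(levels):
    """Advance one time step; return the alerts triggered this step."""
    alerts = []
    for need, (rate, behavior) in NEEDS.items():
        levels[need] = min(1.0, levels.get(need, 0.0) + rate)
        if levels[need] >= THRESHOLD:
            alerts.append(behavior)
    return alerts

levels = {need: 0.0 for need in NEEDS}
for _ in range(50):            # simulate 50 ticks
    alerts = tick(levels)
# fast-growing needs (water, food, entertainment) have crossed the
# threshold by now; slow-growing ones (excretion, cleanliness) have not
```

A satisfied need would then be reset by the corresponding user action, as in the feeding example earlier.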

Intelligent dialogue function of the virtual companion and the back-end assistance system

A key technology of the present invention is adding an intelligent dialogue function to the virtual companion. The dialogue function can be implemented with artificial intelligence, or it can be controlled remotely by a human through a network system. When a human participates, the virtual companion serves as the avatar that the remote operator presents in the user interface. Compared with implementing the dialogue entirely with artificial intelligence, having a human remotely control the dialogue content has the advantage of providing the user with an experience much closer to conversing with a real person.

A human assistant can remotely control the virtual companion from a location geographically distant from the user; for example, the user is in the United States while the assistant is in the Philippines or India. Through a remote-control interface running on a computer, the assistant connects over the Internet to the tablet or other device running the virtual companion. In an exemplary embodiment, the assistant logs into the assistant software system through the interface shown in Figure 6. After logging in, the assistant can monitor the status of multiple virtual companions through the interface shown in Figure 7. Multiple assistants can also log into their own copies of the software and monitor the same virtual companion, or the same group of companions, simultaneously. When a virtual companion needs human assistance, the assistant software can enter the one-to-one control interface shown in Figure 8 automatically, or the assistant can enter it manually. In this interface, the assistant controls the companion's behavior by clicking buttons or entering text. The interaction flow described above corresponds to the interaction functions contained in the Prototype 1 wireframe of Figure 9. Figure 10 shows the human assistant's workflow, and Figure 11 shows the deployment relationship among the assistant software, the virtual companion software, and the central control server.

When the virtual companion uses computer-generated artificial intelligence to converse with the user, its AI processing program can hand off interaction tasks with high uncertainty to a remote human assistant. The assistant software can alert the remote assistant to step into the companion's current interaction with the user by displaying a prompt in the corresponding window of the multi-window monitoring interface (similar to the prompt display in Figure 7). Alternatively, for any assistant not currently in one-to-one control of a companion, the assistant program can jump directly to the one-to-one control interface shown in Figure 8. While the assistant software is in the multi-window monitoring state, various display cues can tell the assistant which companions are currently interacting with their users. These cues include the audio-stream content in the multi-window interface, changes in the audio volume received by the microphone of the tablet running the companion, and so on. Guided by these cues, the assistant can choose to enter the one-to-one control interface of a companion that is mid-interaction and, by controlling the companion's behavior, give the user a better interactive experience. When the assistant joins partway through a conversation between the companion and the user, the one-to-one interface also displays the conversation history obtained through speech recognition, so that the assistant can quickly pick up the context of the current dialogue. Each time an assistant intervenes in an interaction, the control information applied to the companion can serve as input for later improvement of the artificial-intelligence model.

When the human assistant controls the virtual companion through the one-to-one control interface, the assistant can learn the user's current state and speech content from the companion's video and audio streams displayed on the interface. The assistant types the verbal reply that the companion should make into the chat window of the interface. On the companion's side, when the text input is received, it can be read aloud using text-to-speech technology. The companion's three-dimensional model can simultaneously perform speaking motions, such as repeatedly opening and closing its mouth. The motion can play at a constant or random speed, or at a variable speed matched to the content and pace of the speech. Alternatively, lip-sync information output by the speech engine can control the frequency and amplitude of the speaking motion. While the companion reads out the received text, the text can also be shown on screen as subtitles, so that users with hearing problems can still communicate with the companion.

There is a time delay between the moment the human assistant hears what the user says and the moment a complete reply has been typed on the keyboard. To reduce this delay, assistants can be specially trained to press the send key as soon as part of the reply has been typed, and to continue typing the rest while the companion reads out what it has already received. Alternatively, the chat window of the one-to-one control interface can automatically send content to the companion as soon as the first word arrives (for example, after a run of letters followed by the first space). A computer-generated automatic reply, produced by speech recognition plus artificial intelligence, can also be shown in the dialogue input window; when the assistant judges that this response is suitable, it can be sent to the companion directly. All interaction content entered under the assistant's supervision, whether machine-generated or typed by hand, can serve as training data for later improvement of the AI module through machine learning. In addition, the one-to-one control interface can display a set of candidate replies, so that the assistant only needs to click the best match. When the assistant types, an autocomplete function can use the letters typed so far to display common phrases or words beginning with that string, reducing how much the assistant must type. The assistant can also speed up input using information in the customer relationship management (CRM) system (the customer logs and memos mentioned in the appendix, among others). For example, clicking a user name in the CRM interface inserts the customer's name into the dialogue box. Input can also use escape characters and aliases: for example, when the assistant types "/owner", the chat input box automatically expands /owner into the name of the customer using that companion. CRM data can likewise serve as input to the autocomplete function described above.
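The autocomplete and alias features described here amount to prefix matching against a phrase list plus a simple substitution. A minimal sketch follows; the phrase list, the "/owner" alias value, and the customer name "Betty" are hypothetical examples, since in the described system they would come from the CRM database.

```python
# Sketch of the assistant-side autocomplete and alias expansion.
# The phrase list and the "/owner" value are hypothetical examples;
# in the described system both would be populated from the CRM system.

COMMON_PHRASES = [
    "How are you feeling today?",
    "How was your lunch?",
    "Have a wonderful day!",
]

ALIASES = {"/owner": "Betty"}   # looked up per companion in the CRM

def autocomplete(prefix, phrases=COMMON_PHRASES):
    """Return the phrases that start with what the assistant typed so far."""
    prefix = prefix.lower()
    return [p for p in phrases if p.lower().startswith(prefix)]

def expand_aliases(text, aliases=ALIASES):
    """Replace escape-character aliases with their CRM values."""
    for alias, value in aliases.items():
        text = text.replace(alias, value)
    return text
```

For example, typing "how" would offer both "How ..." phrases, and sending "Good morning, /owner!" would reach the companion as "Good morning, Betty!".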

The dialogue between the virtual companion and the user can also be generated by an expert system, or by an artificial-intelligence system that incorporates professional knowledge, such as knowledge of psychology, psychiatry, geriatric care, or sociology. Such an expert system can generate optimized replies based on the user's current condition and speech, for example question-and-answer content designed specifically to stimulate brain activity in patients with Alzheimer's disease. The limitation of such an expert system, however, is that it is difficult to use speech recognition to convert arbitrary user speech into input that matches the expert system's expected format. For example, when the user is asked "How have you been lately?", the expert system can proceed from three specific responses: "I'm fine", "So-so", and "Not so good". But when the user answers "Well, I'm not really sure, I guess okay", the expert system cannot take the next step. At that point a human assistant must listen to the audio from the user, make a judgment, and choose the closest standardized input (in this example, "So-so"). The assistant's involvement lets the user keep conversing with the expert system in natural language while reducing the errors the expert system might otherwise make because of mismatched input. The assistant can also choose to end expert-system assistance at any time during the dialogue, edit the expert system's output to make it more colloquial, or adjust the parameters the expert system needs at any time according to the user's current condition. For example, one input parameter of the expert system is the user's current pain index; the assistant can observe the user's facial expression through the video stream and adjust the pain index with a slider on the one-to-one interface. In addition, PAD values analogous to the companion's emotion index can be given to the expert system, so that it can respond appropriately to what the user says in different emotional states.
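The hand-off between automatic recognition and the human assistant can be sketched as follows: the software first tries to map the transcribed reply onto one of the expert system's standardized inputs, and signals for human judgment when the mapping is missing or ambiguous. The keyword table is an illustrative assumption; the three standardized answers come from the example in the text.

```python
# Sketch of routing a free-form transcribed reply to one of the expert
# system's standardized inputs, deferring to the human assistant when
# the mapping is uncertain. The keyword table is an assumption.

STANDARD_INPUTS = ["I'm fine", "So-so", "Not so good"]

KEYWORDS = {
    "fine": "I'm fine",
    "great": "I'm fine",
    "so-so": "So-so",
    "bad": "Not so good",
    "terrible": "Not so good",
}

def standardize(transcript):
    """Return a standardized input, or None to request human judgment."""
    text = transcript.lower()
    matches = {standard for word, standard in KEYWORDS.items() if word in text}
    if len(matches) == 1:
        return matches.pop()
    return None                 # ambiguous or unmatched: ask the human

def route(transcript, human_choice):
    """Use the automatic mapping when unambiguous, else the human's pick."""
    return standardize(transcript) or human_choice
```

On the example from the text, "Well, I'm not really sure" matches no keyword, so the assistant's choice ("So-so") is forwarded to the expert system.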

In an exemplary embodiment, as shown in Figure 8, the human assistant must press the "Alt" and "Enter" keys together to send content to the virtual companion, whereas in the chat interface among multiple assistants, pressing "Enter" alone sends a message. Using different send gestures prevents the assistant from mistakenly sending the companion content that was meant for other assistants.

When the virtual companion uses text-to-speech to read out what the assistant typed, the voice can be set to a cute child's voice at a slow speaking rate, so that people with hearing difficulties can understand the companion more easily. The companion's voice, intonation, and speaking rate can all change with its emotional PAD values, and can also change with its virtual age and appearance; alternatively, they can be set remotely by a human assistant through the assistant interface.

The human assistant can annotate the intonation while typing the reply into the dialogue box. After receiving the message, the companion can adjust its intonation settings while reading out the content. The companion can also adjust its intonation on its own according to its current emotional PAD values. For example, when the companion's activation is high, its speaking rate and volume increase accordingly, and it more often uses a rising tone at the end of sentences.
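The intonation adjustment described here is a mapping from the companion's current PAD values to speech-synthesis parameters. A minimal sketch follows; the numeric ranges are assumptions, since the text only states that high activation raises rate and volume and favors a rising final tone.

```python
# Sketch of mapping the companion's emotional PAD values onto
# text-to-speech parameters. The numeric ranges are assumptions; the
# text only says high activation raises speaking rate and volume and
# favors a rising tone at sentence end.

def prosody_from_pad(pleasure, activation, dominance):
    """Map PAD values in [0, 1] to simple speech parameters."""
    return {
        "rate": 0.8 + 0.4 * activation,     # speaking-rate multiplier
        "volume": 0.5 + 0.5 * activation,   # fraction of maximum volume
        "rising_final_tone": activation > 0.6,
        "pitch": 0.9 + 0.2 * pleasure,      # base-pitch multiplier
    }

calm = prosody_from_pad(pleasure=0.5, activation=0.2, dominance=0.5)
excited = prosody_from_pad(pleasure=0.9, activation=0.9, dominance=0.5)
```

These parameters would then be passed to the companion's text-to-speech engine before each utterance.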

As shown in Figure 8, the one-to-one interface displays the companion's current emotional state (PAD) values, the history of interactions with the user, the user's activity schedule, and so on. This information lets the assistant understand the user's state, what can be mentioned in the interaction, and so on. For example, in Figure 8, based on the companion's PAD values, the assistant can have the companion speak to the user in a happy tone, ask the user Betty how her friend Bob is doing, or remind Betty that lunch is coming up soon.

Besides typed replies, other input methods are possible. For example, the human assistant can speak the reply, which the assistant software converts to text through speech recognition before sending it to the companion. Alternatively, the assistant's audio can be captured directly, sent to the companion, and converted on the companion's side into a preset voice and intonation through pitch-shifting technology. These input-processing methods ensure that when different assistants control the same companion, the companion's voice heard by the user stays consistent and does not change from one assistant to another.

As shown in Figure 8, in addition to the companion's voice-dialogue function, the human assistant can control the companion's physiological needs, emotional PAD values, facial expressions, and spontaneous behaviors (such as breathing and blinking), and can trigger specific actions (such as dancing or rolling over). The assistant can even record custom actions for the companion by clicking and dragging different body parts in the companion display frame. When the user touches the companion, the companion sends the location of the current touch back over the network for display in the assistant interface. The assistant can also control where the companion's eyes are focused. In Figure 8, the "Look" marker shows that the companion's eyes are fixed on the user's nose. The assistant can drag the "Look" marker to change its position; the position information is passed to the companion program, which directs the eyes of the companion's three-dimensional model toward the corresponding location.

Virtual companion monitoring system

A key function of the present invention is improving the quality and efficiency of elderly-care and home-service institutions through remote monitoring. Because the virtual companion can satisfy the user's need for emotional companionship and provide remote safety monitoring, an elderly-care institution only needs to dispatch a small number of staff for tasks that require a person on site, such as cleaning and cooking for the elderly.

When human assistants monitor virtual companions through the multi-window monitoring interface shown in Figure 7, multiple assistants, together with several supervisors, can watch their respective interfaces simultaneously and share information through a staff chat window. The eight companions shown in Figure 7 can be deployed in the same nursing home, with the assistants and supervisors who monitor them designated to serve that nursing home exclusively. Another feasible arrangement makes the correspondence between companions and assistants dynamic: when an assistant logs into the system and begins a working session, the assistant can be dynamically assigned a set of similar companions. Here, "similar" means similar user backgrounds, similar companion states (for example, all in pet-dog form), and so on.
In dynamic allocation, each companion can be assigned at least two assistants to monitor it, and the sets of companions monitored by any two assistants can partially overlap. In Figure 7, each multi-window interface monitors eight companions. In an actual implementation, the number of monitoring windows can grow or shrink dynamically for each assistant, based on how many video streams the assistant's network connection can receive, the size of the assistant's display, the assistant's proficiency at monitoring multiple windows, and so on. To further increase the number of companions each assistant can watch, the video windows of the multi-window interface can be replaced with smaller abstract data windows, in which a set of dynamic icons shows the sounds the companion receives, changes in its video input, touch input, and so on. For each kind of icon change, the monitoring interface can also alert the assistant with a sound effect: for example, one tone when a companion's sound input changes abruptly, and another when a companion is touched. Another basis for dynamic allocation is that companions whose past usage records show frequent use of human assistance are given more redundant assistants; that is, at any given time their information appears on the monitoring interfaces of more assistants. The central control system of the assistant software dynamically matches the companions currently connected to the system with the assistants.
The central control system can assign each assistant a mix of companions that need human assistance frequently and companions that need it rarely, so that the workload is distributed fairly evenly across assistants. When the system can automatically switch any assistant into the one-to-one control interface of any companion, as described above, a newly assigned companion may face a long wait if its assigned assistant is already interacting with another companion; when the central control system detects that the wait has grown too long, it automatically reassigns. The one-to-one monitoring time between an assistant and each companion can be recorded in a database as the basis for billing the user a time-based service fee.
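The even workload distribution described above can be sketched as a greedy assignment: repeatedly give the next companion to the assistant with the lowest expected load. This is one simple heuristic consistent with the text, not the specification's own algorithm, and the assistance-frequency figures are hypothetical.

```python
# Greedy sketch of the workload balancing described above: companions
# with known assistance frequencies (interventions per hour,
# hypothetical numbers) are distributed so each assistant's expected
# load stays roughly even. One heuristic among many, not the spec's own.

import heapq

def balance(companion_freqs, assistants):
    """Assign each companion to the currently least-loaded assistant."""
    # Min-heap of (expected load, assistant name).
    heap = [(0.0, name) for name in sorted(assistants)]
    heapq.heapify(heap)
    assignment = {}
    # Place the most demanding companions first.
    for companion, freq in sorted(companion_freqs.items(),
                                  key=lambda kv: -kv[1]):
        load, assistant = heapq.heappop(heap)
        assignment[companion] = assistant
        heapq.heappush(heap, (load + freq, assistant))
    return assignment

freqs = {"betty": 6.0, "carl": 5.0, "dora": 2.0, "ed": 1.0}
plan = balance(freqs, ["helper1", "helper2"])
```

With these example figures, each helper ends up with an expected load of 7.0 interventions per hour.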

The specific timekeeping method can draw on the following information: when a one-to-one monitoring interface is opened or closed; when the audio and video streams start or end; when the assistant begins keyboard input in the one-to-one interface; start times recorded manually by the assistant; a weighted combination of the above time points; and so on.

Human assistants can be divided into different tiers, such as paid assistants, assistant supervisors, and volunteer assistants; even the user's family members can serve as assistants.

When multiple human assistants monitor one companion or one group of companions at the same time, team-member status indicators such as those shown in Figure 7 are needed to coordinate the collaboration. Status can be indicated by showing, on each assistant's monitoring interface, the mouse positions of the other assistants, with different cursor shapes representing their current working states. For example, an hourglass icon indicates that an assistant is busy or away, and a finger icon indicates that an assistant is looking at the pointed-to location in the multi-window monitoring interface. When an assistant enters a one-to-one monitoring interface, that assistant's icon disappears from the other assistants' multi-window interfaces, but a line of text appears under the window of the companion being monitored one-to-one, indicating who is monitoring it (for example, "BillHelper9" in Figure 7). Before using the system, assistants receive dedicated training to move the mouse to whatever part of the screen they are looking at. The team-member status cues described above keep multiple assistants from concentrating on a single companion at the same time, maximizing monitoring coverage while preserving redundancy. The system can also adjust the number of companions each assistant monitors, maximizing the number of companions per assistant while guaranteeing a minimum response speed.

Another important function is that after a new virtual companion connects to the system, the system can dynamically assign human assistants to it while guaranteeing that at least one assistant is monitoring every companion at every moment. This dynamic allocation process consists of two phases:

1. Learning phase. When a new companion joins the system, the system first assigns it a fixed number of assistants whose monitoring times do not overlap; that is, only one assistant monitors the companion at any moment. During the learning phase, the central control server records every interaction between the companion and the assistants. Each record includes a timestamp, the assistant's ID, and an interaction score (generated automatically from the user's satisfaction, or given by the assistant's supervisor or by the user). After a two-week learning period, the central control server analyzes all of the companion's interaction data and ranks each assistant who interacted with it by average score. An alternative is to skip the learning phase, proceed directly to the matching phase, and rank the assistants dynamically as matching proceeds.

2. Matching phase. After the learning phase ends, the central control server selects assistants for the companion according to the following rules. When the companion is in an inactive period (that is, the user is unlikely to interact with it during that time), the server assigns it the minimum number (for example, one) of assistants satisfying the following conditions: 1) the assistant is currently monitoring the fewest companions; and/or 2) over the past hour, the assistant's total one-to-one monitoring time across all monitored companions is the shortest; and/or 3) the assistant scored highest during this companion's learning phase. These three conditions can carry different weights in the matching decision. When the companion is in an active period (that is, the user is likely to interact with it during that time), the server preferentially assigns assistants satisfying the following conditions: 1) the assistant is currently monitoring the fewest companions; and/or 2) over the past hour, the assistant's total one-to-one monitoring time across all monitored companions is the shortest; and/or 3) the assistant's cumulative one-to-one monitoring time with this companion is the longest; and/or 4) the assistant scored highest during this companion's learning and matching phases. During the matching phase, assistants continue to be scored while monitoring each companion, so that the scores stay up to date.
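The inactive-period matching rule above can be sketched as a weighted score over its three conditions. The weights and candidate records below are illustrative assumptions; the text states only that the conditions may carry different weights.

```python
# Sketch of the inactive-period matching rule: score each candidate
# assistant on (fewest companions monitored, least one-to-one time in
# the past hour, highest learning-phase score). Weights and candidate
# records are illustrative assumptions.

def match_score(assistant, weights=(0.4, 0.3, 0.3)):
    w_load, w_recent, w_learn = weights
    # Lower load and lower recent monitoring time are better, so negate them.
    return (-w_load * assistant["companions_monitored"]
            - w_recent * assistant["recent_minutes"] / 60.0
            + w_learn * assistant["learning_score"])

def pick_assistant(candidates):
    """Choose the highest-scoring assistant for an inactive companion."""
    return max(candidates, key=match_score)["id"]

candidates = [
    {"id": "helper1", "companions_monitored": 8, "recent_minutes": 40,
     "learning_score": 4.5},
    {"id": "helper2", "companions_monitored": 5, "recent_minutes": 10,
     "learning_score": 4.0},
]
```

With these figures, helper2's lighter load outweighs helper1's slightly better learning-phase score. The active-period rule would add a fourth term for cumulative one-to-one time with this companion.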

Human assistants can also be scored by software analysis of the user's vocal prosody: a tone of voice with high arousal and pleasantness indicates that the user is satisfied with the current interaction. Other sensor data can also be used for scoring, such as the user's skin conductance, skin color, or changes in pupil size, or quantitative measures of how the user touches the screen.

Because the virtual companion can capture video and audio from the user's side through the camera and microphone of the device it runs on, the user may have privacy concerns. To reduce them, whenever the virtual companion enables high-resolution input, the user should be given a clear cue that a remote person may currently hear or see them. This cue can be conveyed naturally through a change in the virtual companion's appearance: for example, the collar on its neck lights up, its eye color changes, or it opens its eyes wide. In one exemplary embodiment, the virtual companion's sleeping and waking poses indicate that the video and audio stream input is off or on, respectively. This presentation lets users without a technical background understand intuitively whether audio or video is currently being captured. Information that does not implicate privacy, such as changes in ambient volume, lighting changes, or screen touch input, can be collected and uploaded to the back end at any time, regardless of whether video or audio input is enabled.

Another important service function of the remote human assistant is to contact third parties promptly when the user has an emergency or needs human help. The third party may be a staff member of the nursing home where the virtual companion is deployed, a family member of the user, and so on. Third parties' contact information is stored in the system database along with the other log records associated with the virtual companion. In FIG. 8, the contact information may be displayed in the area marked "tab". When the human assistant clicks a contact, the system can automatically dial the corresponding phone number or open an email window.

The remote human assistant system can also be used to provide remote technical support for the virtual companion. One purpose of technical support is to keep the virtual companion's hardware and software running normally. The virtual companion periodically sends status information to the server over the network, and the human assistant monitoring it receives that status in real time. The human assistant can also send specific commands to the virtual companion to obtain more information, such as a screenshot of the device the virtual companion is currently running on. The human assistant can likewise send commands for the companion-side software to execute, such as "restart the virtual companion software", "change the speaker volume", or "restart the device". When the virtual companion software malfunctions, or during daily routine operation and maintenance, remote commands can restore the virtual companion to its initial running state. In the software design, a simple but stable daemon can be used to monitor and control the operation of the main program, while the main program handles the more complex and failure-prone functions such as the virtual companion's visual presentation and interaction control. On receiving remote commands, the daemon can shut down, restart, or run other diagnostic operations on the main program. The daemon can be invoked periodically by the main program or the operating system to send and receive status information and pending commands.
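The daemon-plus-main-program split described above can be sketched as a small watchdog loop. This is one possible shape, not the patent's implementation; the command names ("restart", "stop") and the polling scheme stand in for whatever protocol the central server actually uses:

```python
import subprocess
import time

def daemon_loop(main_cmd, poll_seconds=5.0, fetch_command=lambda: None):
    """Keep the main companion program alive and apply remote commands.

    `main_cmd` is the argv list for the main program. `fetch_command`
    stands in for polling the central server; it returns "restart",
    "stop", or None. Returns the number of (re)starts performed.
    """
    proc = subprocess.Popen(main_cmd)
    restarts = 0
    while True:
        time.sleep(poll_seconds)
        if proc.poll() is not None:            # main program exited/crashed
            proc = subprocess.Popen(main_cmd)  # restore initial state
            restarts += 1
        cmd = fetch_command()
        if cmd == "restart":
            proc.terminate()
            proc.wait()
            proc = subprocess.Popen(main_cmd)
            restarts += 1
        elif cmd == "stop":
            proc.terminate()
            proc.wait()
            return restarts
```

The design choice matches the text: the daemon stays trivially simple so it rarely fails itself, while everything complex lives in the child process it supervises.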

Other functions of the virtual companion

In addition to the functions described above, the virtual companion can implement other functions that enrich the user experience and improve the user's quality of life.

The virtual companion can, on request, read the user news, weather, or any other text-based online information. The user can express a request to the virtual companion by voice, for example "news about the general election" or "the weather in Tokyo", and the virtual companion can understand the request through speech recognition or with the help of a human assistant. Once it understands the request, the virtual companion can act out taking out a newspaper or document while its software searches the Internet for the corresponding content over its background network connection. After retrieving the content, the virtual companion reads it aloud using text-to-speech technology. In addition to fetching and reading information on demand as described above, the user's family can supply in advance content the user may find interesting, and the virtual companion can periodically and spontaneously read that content to the user. For example, the virtual companion might announce "your son posted a new status online today" and then read out the status.

The virtual companion can present image information to the user in a similar way, for example by showing the user a picture frame or photo album in which the requested pictures appear (as shown in FIG. 5). It can also download photos uploaded by family members from a download address preset by the family and display them to the user. Similarly, the virtual companion can present audio (played through a virtual radio or gramophone) and video material; the material can be downloaded on demand from sites such as YouTube, or uploaded in advance by the user's family to a designated location. The user can also instruct the virtual companion by voice to start a video chat with family members. On receiving the instruction, the virtual companion displays a picture frame or television on screen and shows the family member's video feed inside it. The family member's video chat account information is stored in advance in the virtual companion's memo data.

The virtual companion can detect the user's breathing rate through the camera or microphone and synchronize its own breathing rate with it. The virtual companion can then help stabilize the user's mood by gradually slowing its own breathing. This feature can be used to calm agitated Alzheimer's patients or anxious children with autism.
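The calming behavior above is "start in sync, then ease down". A minimal sketch of the pacing schedule the animation could follow, assuming linear easing (the patent does not specify the ramp shape, and the function name is illustrative):

```python
def pacing_schedule(user_bpm, target_bpm, steps):
    """Breathing rates for the companion to animate, one per breath cycle.

    Starts synchronized to the user's measured rate (breaths per
    minute), then eases linearly toward a calmer target rate.
    """
    if steps < 2:
        return [float(target_bpm)]
    delta = (target_bpm - user_bpm) / (steps - 1)
    return [round(user_bpm + i * delta, 3) for i in range(steps)]
```

For a user measured at 18 breaths per minute being guided toward 10, `pacing_schedule(18, 10, 5)` yields `[18.0, 16.0, 14.0, 12.0, 10.0]`.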

Physical objects can be used to interact with the virtual companion through techniques similar to virtual reality. For example, during product development we found that users like to share common experiences with the virtual companion; users want, for instance, to take naps together with it. The virtual companion can use such techniques to share other experiences, such as taking medication. In one exemplary embodiment, the user places the container of a medication they need to take in front of the camera. The virtual companion can identify the object as a medicine container using image recognition or with the help of the remote human assistant, and after checking the user's preset schedule to confirm that the medication is due, a piece of virtual food appears on the virtual companion's screen. The virtual food can take the shape of the user's medication or another shape, for example a bone. When the user starts taking the medication, the virtual companion detects this behavior through similar techniques and begins to show an eating animation and a happy expression. By linking the virtual food to the real medication, the user feeds the virtual pet by taking the medication on schedule. This helps the user develop a habit of taking medication regularly while reinforcing a sense of personal responsibility, and the virtual companion's happy expression at receiving food provides positive feedback that improves the user's medication experience.

In addition, the user can interact with the virtual companion by showing a card bearing a specific pattern to the camera of the virtual companion's device. When the virtual companion program detects the content on the card, it displays the corresponding virtual item in the on-screen virtual environment. By moving the card in the physical environment, the user moves the corresponding item in the virtual environment, which provides a new way for the user to interact with the virtual companion.

Some tablets have near-field communication (NFC) or RFID capability. The user can interact with the virtual companion by placing a communication tag near the tablet; the virtual companion software receives the tag information and displays the corresponding item on screen. The tablet hosting the virtual companion may include a tag collection slot for receiving NFC tags. The slot sits near the tablet's NFC module and must be positioned so that when the user drops a tag into the slot, the tag can be read by the module. When the virtual companion software detects an NFC read event, it shows the corresponding item being dropped into the environment on screen. When the tablet lacks NFC, a similar effect of dropping items into the virtual companion's environment can be achieved by using the camera to detect a card with an icon sliding past.

The virtual companion can also be implemented as a web page, with human assistants able to control the web version one-on-one only for a limited period of time. This implementation mainly lets first-time users try the virtual companion without dedicated hardware. A user without a registered virtual companion account can open the trial web page in a browser and click the start button. The user then enters a waiting state: if only one web user is waiting, the system assigns a currently idle human assistant and the trial begins. If multiple web users are waiting, the system queues them in the order their start-button clicks were received; after the user at the head of the queue has tried for a set time, the system automatically ends that trial and starts the trial period of the next user in the queue. During the trial, the virtual companion receives video and audio through the camera and microphone of the computer the user is using. If the computer does not support touch input, mouse clicks can simulate the user touching the virtual companion. When a user's trial period begins, a countdown timer can start from a fixed duration (for example, 2 minutes); when the countdown ends, the trial ends automatically. The human assistant can also end the current trial through the assistant interface before the countdown expires. When the current trial ends, if users are still waiting in the queue, the system automatically connects the next user to the human assistant service.
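The trial-queue rules above (FIFO by click time, fixed trial length, early termination by the assistant) can be sketched as a small state machine. The class and method names are illustrative, and times are plain seconds to keep the logic testable:

```python
from collections import deque

class TrialQueue:
    """FIFO trial queue for the web version of the virtual companion.

    Users are ordered by when their start click arrived. Each trial
    runs for a fixed duration unless the human assistant ends it early,
    after which the next waiting user is admitted automatically.
    """
    def __init__(self, trial_seconds=120):
        self.trial_seconds = trial_seconds
        self.waiting = deque()
        self.active = None  # (user_id, trial_end_time) or None

    def click_start(self, user_id, now):
        self.waiting.append(user_id)
        self._advance(now)

    def tick(self, now):
        # Called periodically: end expired trials, admit the next user.
        if self.active and now >= self.active[1]:
            self.active = None
        self._advance(now)

    def assistant_ends_trial(self, now):
        # The human assistant cuts the trial short via the assistant UI.
        self.active = None
        self._advance(now)

    def _advance(self, now):
        if self.active is None and self.waiting:
            self.active = (self.waiting.popleft(), now + self.trial_seconds)
```

A production version would drive `tick` from a real clock and notify users on admission; the queueing logic itself would be unchanged.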

The virtual companion system also includes an automated training function for human assistants. During automated training, a special version of the human assistant software is used that includes simulated companion-side video and audio input as well as simulated user touch input to the virtual companion. All of the trainee assistant's operations, including keyboard input, mouse movement, and clicks, are recorded by the software and used in subsequent training evaluation.

Improvements over existing inventions

Nursing homes, retirement communities, and similar institutions suffer chronic shortages of care staff and high staff turnover; in some U.S. care facilities, caregiver turnover approaches 100%. As a result, the elderly in such institutions spend most of their time without human company. The resulting sense of social isolation has a negative impact on their mood and mental state and can accelerate the onset and progression of age-related conditions such as Alzheimer's disease. Because caregivers are expensive and their time is limited, and because pet companionship requires additional labor to care for the pets, a number of solutions have emerged that use artificial intelligence to keep the elderly company. Several feasible approaches are listed below, along with how they differ from the present invention.

Paro (http://www.parorobots.com) is a physical robot designed specifically to address loneliness in the elderly. Its high manufacturing cost and price make it difficult to deploy at scale. In addition, its movements are limited to simple crawling and limb motion, its facial expressions are limited, and it lacks human-level language ability.

Virtual pet toys for children (e.g., US Patent Application No. 2011/0086702) are electronic games developed for children. Their complex features and cumbersome operation make them unsuitable for the elderly. Some inventions in this category accept the user's touch or mouse clicks (e.g., US Patent Application No. 2009/0204909; Talking Tom: http://outfit7.com/apps/talking-tom-cat), but their responses to input are mostly drawn from a small pre-generated set, or are simple imitations of the user's input, so users grow bored after seeing repeated content over time. Compared with these inventions, the present invention's responses to touch and mouse click input are generated dynamically and are visually more natural.

Existing virtual assistant systems that handle voice input can only simply repeat the user's input as output (e.g., Talking Tom), or recognize the user's speech and generate output through artificial intelligence (e.g., US Patent No. 6,772,989; US Patent Application No. 2006/0074831; Siri: US Patent Application No. 2012/0016678). Because speech recognition and artificial intelligence technologies are not yet mature, those inventions can hardly achieve the effect of conversing with a real person that the present invention achieves through the intervention of human assistants.

Human-assisted intelligent systems (e.g., US Patent Application No. 2011/0191681) have been used in retail customer service, remote monitoring, and the like, but have not been applied to scenarios resembling a virtual companion.

Other applications of the invention

When a user interacts with the virtual companion, usage statistics can be collected and used as a source of analytical data for diagnosing the user's physical and mental health (e.g., functional decline, progression of Alzheimer's disease). For example, when an elderly person shows symptoms of depression or social withdrawal, their frequency of interaction with the virtual companion decreases. Virtual companion usage statistics can therefore serve as a relatively accurate, non-invasive source of clinical diagnostic data.

The invention can also be applied to other populations, such as young people. With human assistants involved, the invention has strong entertainment value; in addition, it can act as a life assistant for the user, managing their schedule or completing other network-related tasks.

The invention can also be applied to the care and treatment of children with autism. Autistic children tend to communicate with non-human subjects. Because the virtual companion can appear in the form of an animal, while the presence of a human assistant gives it a human-like way of communicating, autistic children readily accept the virtual companion and can, with its help, gradually adapt to communicating with humans.

The invention can also be used as a children's toy. In that case, more game-like interactive features should be added, along with a richer set of 3D model poses corresponding to the different ways the user cares for the virtual companion.

The invention can be used by orthodontists to provide their patients with daily dental care training and reminders. Over an orthodontic treatment cycle, multiple virtual companions can be preconfigured to handle different stages of the cycle, each preset in advance with what it must tell the user by voice or other means during its stage, so that companionship and reminders run fully automatically across the whole cycle.

The invention's way of responding to multi-touch can be applied to a variety of 3D models, not only 3D human or animal models. For example, multi-touch responses can be defined for a 3D flower model.

The invention can be used to control physical robots. For example, a tablet running the virtual companion can be connected to mechanical components whose movements are controlled by commands issued by the virtual companion.

Other physical components can be added to the tablet running the virtual companion to enhance the user's visual experience. For example, when the virtual companion appears as a pet dog, a kennel-shaped outer frame can be added around the tablet.

Physical components with special tactile effects can also be added to the tablet running the virtual companion to enhance the user's tactile experience. For example, a soft, plush-textured shell can be added around the tablet, with openings into which the user can insert their fingers.

The virtual companion can also determine, by analyzing the characteristics of touch input, whether the input was caused by liquid on the screen, since liquid-triggered touch input typically fluctuates rapidly in both intensity and position. When the virtual companion detects a liquid touch, it can show the virtual companion interacting with the liquid (for example, rain) in the virtual scene.
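The liquid-versus-finger distinction above rests on one observable: frame-to-frame jitter in position and intensity. A heuristic sketch of that test; the thresholds are illustrative and would need tuning per device, and the function name is not from the patent:

```python
def looks_like_liquid(samples, pos_jitter=20.0, force_jitter=0.3):
    """Heuristic liquid-vs-finger classifier for one touch track.

    `samples` is a sequence of (x, y, force) readings for a single
    touch ID. Liquid on a capacitive screen tends to fluctuate rapidly
    in both position and intensity, so a large frame-to-frame jump in
    either suggests liquid rather than a finger.
    """
    if len(samples) < 2:
        return False
    for (x0, y0, f0), (x1, y1, f1) in zip(samples, samples[1:]):
        jump = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        if jump > pos_jitter or abs(f1 - f0) > force_jitter:
            return True
    return False
```

A steady finger drag (small jumps, stable force) stays below both thresholds, while raindrop-like readings trip either one.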

Claims (20)

1. A virtual companion apparatus, comprising:
   means for displaying a virtual companion;
   means for detecting user input;
   means for changing the presentation of the virtual companion based on the detected user input.

2. The apparatus of claim 1, wherein:
   the detected user input is touch input;
   changing the presentation of the virtual companion comprises reading the position of the user input and determining which body part of the virtual companion it corresponds to;
   changing the presentation of the virtual companion comprises movement of the virtual companion's body parts.

3. The apparatus of claim 1, wherein:
   the virtual companion is an animated figure that can exhibit one or more actions, each action representing the virtual companion's response to some stimulus;
   when multiple stimuli are received, the multiple corresponding actions can be blended according to weights.

4. The apparatus of claim 3, wherein:
   the virtual companion can display a virtual item containing specific content, the content being retrieved from a remote database.

5. The apparatus of claim 2, further comprising:
   means for remotely controlling the virtual companion;
   means for causing the virtual companion to produce voice output under remote control.

6. The apparatus of claim 5, wherein:
   the virtual companion can be presented in a non-human form.

7. The apparatus of claim 1, further comprising:
   means for stimulating the virtual companion's physiological needs;
   means for presenting the virtual companion's physiological needs through blended animation.

8. The apparatus of claim 1, further comprising:
   means for identifying a physical item through image recognition and displaying it as a virtual item that the virtual companion can exchange.

9. The apparatus of claim 8, further comprising:
   a physical structure for guiding the user to place the physical item in a specific position for recognition; the physical structure may also support and hold the virtual companion display device.

10. The apparatus of claim 1, further comprising:
   a physical structure attached around or near the virtual companion display device as part of the virtual companion's visual presentation.

11. The apparatus of claim 1, wherein:
   the user's touch input is detected with a capacitive touch screen;
   the virtual companion can respond to touch signals triggered by the presence of liquid on the screen.

12. A method of controlling one or more virtual companions, comprising:
   the virtual companions transmitting data to a server;
   the server forwarding the data received from the virtual companions to a networked computer;
   displaying the status of each virtual companion on the networked computer;
   a user of the networked computer selecting any virtual companion and opening a detailed view containing its detailed status;
   sending video and audio stream data from the virtual companion selected for the detailed view to the networked computer user;
   the networked computer user sending commands to the selected virtual companion.

13. The apparatus of claim 12, wherein:
   virtual companions not in the detailed view send information to the networked computer at lower fidelity than the virtual companion in the detailed view;
   when a virtual companion is selected for the detailed view on the networked computer, the virtual companion's 3D model display presents a more attentive state.

14. The apparatus of claim 12, wherein:
   each controlled virtual companion is an apparatus as described in claim 5.

15. The apparatus of claim 12, wherein:
   the commands sent to the virtual companion are generated by artificial intelligence;
   the commands generated by artificial intelligence can be modified and approved by the user of the networked computer before being sent.

16. A system in which multiple persons control multiple avatars, comprising:
   multiple avatars, each with its own record of historical events and personality traits;
   multiple persons, each able to remotely control each avatar in some manner through a networked computer;
   a means by which a person can enter events into an avatar's history;
   a means by which a person can read out each avatar's event history.

17. The system of claim 16, wherein:
   each avatar is an apparatus as described in claim 5.

18. The system of claim 16, wherein:
   each person controls the avatars in the manner described in claim 12.

19. The system of claim 16, further comprising:
   a means of dynamically assigning the correspondence between controllers and controlled avatars so as to maximize labor efficiency.

20. The system of claim 19, further comprising:
   a means of recording each person's performance while controlling each avatar;
   a means of assigning controllers to avatars during dynamic assignment according to historical performance so as to maximize overall performance.
CN201480002468.2A 2013-07-10 2014-07-09 virtual companion Pending CN104769645A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US13/939,172 2013-07-10
US13/939,172 US20140125678A1 (en) 2012-07-11 2013-07-10 Virtual Companion
PCT/IB2014/062986 WO2015004620A2 (en) 2013-07-10 2014-07-09 Virtual companion

Publications (1)

Publication Number Publication Date
CN104769645A true CN104769645A (en) 2015-07-08

Family

ID=52280769

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201480002468.2A Pending CN104769645A (en) 2013-07-10 2014-07-09 virtual companion

Country Status (2)

Country Link
CN (1) CN104769645A (en)
WO (1) WO2015004620A2 (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104959985A (en) * 2015-07-16 2015-10-07 深圳狗尾草智能科技有限公司 Robot control system and robot control method thereof
CN105141587A (en) * 2015-08-04 2015-12-09 广东小天才科技有限公司 Virtual doll interaction method and device
CN105957140A (en) * 2016-05-31 2016-09-21 成都九十度工业产品设计有限公司 Pet dog interaction system based on technology of augmented reality, and analysis method
CN106375774A (en) * 2016-08-31 2017-02-01 广州酷狗计算机科技有限公司 Live broadcast room display content control method, apparatus and system
CN106850824A (en) * 2017-02-22 2017-06-13 北京爱惠家网络有限公司 A kind of intelligent service system and implementation method
CN107168174A (en) * 2017-06-15 2017-09-15 重庆柚瓣科技有限公司 A kind of method that use robot does family endowment
CN107322593A (en) * 2017-06-15 2017-11-07 重庆柚瓣家科技有限公司 Home-based elderly care companion robot capable of outdoor movement
CN107808191A (en) * 2017-09-13 2018-03-16 北京光年无限科技有限公司 Output method and system for multi-modal interaction of a virtual human
CN108536386A (en) * 2018-03-30 2018-09-14 联想(北京)有限公司 Data processing method, equipment and system
CN108874123A (en) * 2018-05-07 2018-11-23 北京理工大学 A general modular virtual reality passive haptic feedback system
CN108886532A (en) * 2016-01-14 2018-11-23 三星电子株式会社 Device and method for operating personal agent
WO2019037076A1 (en) * 2017-08-25 2019-02-28 深圳市得道健康管理有限公司 Artificial intelligence terminal system, server and behavior control method thereof
CN109521878A (en) * 2018-11-08 2019-03-26 歌尔科技有限公司 Interaction method, device and computer-readable storage medium
CN109965466A (en) * 2018-05-29 2019-07-05 北京心有灵犀科技有限公司 AR virtual character smart jewelry
CN110653813A (en) * 2018-06-29 2020-01-07 深圳市优必选科技有限公司 A robot control method, robot and computer storage medium
CN113313836A (en) * 2021-04-26 2021-08-27 广景视睿科技(深圳)有限公司 Method for controlling virtual pet and intelligent projection equipment
CN115793938A (en) * 2022-11-08 2023-03-14 Oppo广东移动通信有限公司 Interaction method for interactive wallpaper, electronic device, and computer-readable storage medium
CN116312527A (en) * 2018-03-26 2023-06-23 苹果公司 Natural assistant interaction

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018073832A1 (en) * 2016-10-20 2018-04-26 Rn Chidakashi Technologies Pvt. Ltd. Emotionally intelligent companion device
US12112285B1 (en) 2021-07-07 2024-10-08 Wells Fargo Bank, N.A. Systems and methods for measuring employee experience

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6545682B1 (en) * 2000-05-24 2003-04-08 There, Inc. Method and apparatus for creating and customizing avatars using genetic paradigm
US20090055019A1 (en) * 2007-05-08 2009-02-26 Massachusetts Institute Of Technology Interactive systems employing robotic companions
CN101828161A (en) * 2007-10-18 2010-09-08 微软公司 Three-dimensional object simulation using audio, visual, and tactile feedback
CN201611889U (en) * 2010-02-10 2010-10-20 深圳先进技术研究院 Instant messaging partner robot

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6239785B1 (en) * 1992-10-08 2001-05-29 Science & Technology Corporation Tactile computer input device
EP1092458A1 (en) * 1999-04-30 2001-04-18 Sony Corporation Electronic pet system, network system, robot, and storage medium
US8795072B2 (en) * 2009-10-13 2014-08-05 Ganz Method and system for providing a virtual presentation including a virtual companion and virtual photography
JP5812665B2 (en) * 2011-04-22 2015-11-17 任天堂株式会社 Information processing system, information processing apparatus, information processing method, and information processing program

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104959985A (en) * 2015-07-16 2015-10-07 深圳狗尾草智能科技有限公司 Robot control system and robot control method thereof
CN105141587A (en) * 2015-08-04 2015-12-09 广东小天才科技有限公司 Virtual doll interaction method and device
CN105141587B (en) * 2015-08-04 2019-01-01 广东小天才科技有限公司 Virtual doll interaction method and device
CN108886532A (en) * 2016-01-14 2018-11-23 三星电子株式会社 Device and method for operating personal agent
CN108886532B (en) * 2016-01-14 2021-12-17 三星电子株式会社 Apparatus and method for operating personal agent
CN105957140A (en) * 2016-05-31 2016-09-21 成都九十度工业产品设计有限公司 Augmented-reality-based pet dog interaction system and analysis method
CN106375774A (en) * 2016-08-31 2017-02-01 广州酷狗计算机科技有限公司 Method, apparatus and system for controlling display content of a live broadcast room
CN106375774B (en) * 2016-08-31 2019-12-27 广州酷狗计算机科技有限公司 Method, device and system for controlling display content of live broadcast room
CN106850824A (en) * 2017-02-22 2017-06-13 北京爱惠家网络有限公司 An intelligent service system and implementation method
CN107168174A (en) * 2017-06-15 2017-09-15 重庆柚瓣科技有限公司 A method of using a robot for home-based elderly care
CN107168174B (en) * 2017-06-15 2019-08-09 重庆柚瓣科技有限公司 A method of using a robot for home-based elderly care
CN107322593A (en) * 2017-06-15 2017-11-07 重庆柚瓣家科技有限公司 Home-based elderly care companion robot capable of outdoor movement
WO2019037076A1 (en) * 2017-08-25 2019-02-28 深圳市得道健康管理有限公司 Artificial intelligence terminal system, server and behavior control method thereof
CN107808191A (en) * 2017-09-13 2018-03-16 北京光年无限科技有限公司 Output method and system for multi-modal interaction of a virtual human
CN116312527A (en) * 2018-03-26 2023-06-23 苹果公司 Natural assistant interaction
CN108536386A (en) * 2018-03-30 2018-09-14 联想(北京)有限公司 Data processing method, equipment and system
CN108874123A (en) * 2018-05-07 2018-11-23 北京理工大学 A general modular virtual reality passive haptic feedback system
CN109965466A (en) * 2018-05-29 2019-07-05 北京心有灵犀科技有限公司 AR virtual character smart jewelry
CN110653813A (en) * 2018-06-29 2020-01-07 深圳市优必选科技有限公司 A robot control method, robot and computer storage medium
CN109521878A (en) * 2018-11-08 2019-03-26 歌尔科技有限公司 Interaction method, device and computer-readable storage medium
CN113313836A (en) * 2021-04-26 2021-08-27 广景视睿科技(深圳)有限公司 Method for controlling virtual pet and intelligent projection equipment
WO2022227290A1 (en) * 2021-04-26 2022-11-03 广景视睿科技(深圳)有限公司 Method for controlling virtual pet and intelligent projection device
CN115793938A (en) * 2022-11-08 2023-03-14 Oppo广东移动通信有限公司 Interaction method for interactive wallpaper, electronic device, and computer-readable storage medium

Also Published As

Publication number Publication date
WO2015004620A3 (en) 2015-05-14
WO2015004620A2 (en) 2015-01-15

Similar Documents

Publication Publication Date Title
CN104769645A (en) virtual companion
Salichs et al. Mini: a new social robot for the elderly
US20140125678A1 (en) Virtual Companion
Ihamäki et al. Robot pets as “serious toys”: activating social and emotional experiences of elderly people
Fong et al. A survey of socially interactive robots
Turkle et al. Relational artifacts with children and elders: the complexities of cybercompanionship
Tulsulkar et al. Can a humanoid social robot stimulate the interactivity of cognitively impaired elderly? A thorough study based on computer vision methods
CN101648079B (en) Emotional doll
CN107077229B (en) Human-machine interface device and system
Marchetti et al. Pet-robot or appliance? Care home residents with dementia respond to a zoomorphic floor washing robot
Tanner et al. The development of spontaneous gestures in zoo-living gorillas and sign-taught gorillas: from action and location to object representation
Diana My robot gets me: how social design can make new products more human
Jewitt et al. Digital touch
Row et al. CAMY: applying a pet dog analogy to everyday ubicomp products
Jeong The impact of social robots on young patients' socio-emotional wellbeing in a pediatric inpatient care context
Blocher Affective Social Quest (ASQ): teaching emotion recognition with interactive media & wireless expressive toys
Hanke et al. Embodied ambient intelligent systems
Kennedy-Costantini et al. Neonatal imitation
Henig The real transformers
Marshall Designing technology to promote play between parents and their infant children
Johal Companion Robots Behaving with Style: Towards Plasticity in Social Human-Robot Interaction
Cottrell Supporting Self-Regulation with Deformable Controllers
Huang Development of human-computer interaction for holographic ais
Lee Expressive entities: An Exploration and Critical Reflection on Poetic Engagements with Technology
KR102366054B1 (en) Healing system using equine

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20150708

WD01 Invention patent application deemed withdrawn after publication