CN113176827B - AR interaction method and system based on expressions, electronic device and storage medium - Google Patents
- Publication number
- CN113176827B (application CN202110571684.0A)
- Authority
- CN
- China
- Prior art keywords
- person
- expression
- real
- data
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G06N3/08—Learning methods
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Computational Linguistics (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Biophysics (AREA)
- Data Mining & Analysis (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Human Computer Interaction (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Multimedia (AREA)
- User Interface Of Digital Computer (AREA)
- Processing Or Creating Images (AREA)
Abstract
The present invention provides an expression-based AR interaction method and system, an electronic device, and a storage medium. The method comprises: collecting real-time image data of a physical object and a first person in a real scene, while simultaneously collecting voice data of the first person and real-time image data of the environment of the real scene; generating, from the real-time image data of the physical object, the first person, and the environment, an AR picture containing virtual images of the physical object, the first person, and the environment, superimposing an expression element on the virtual image of the physical object in the AR picture, simultaneously generating an agent avatar in the AR picture, and displaying the AR picture on a screen; and having the agent avatar, on the basis of the virtual image of the physical object with the superimposed expression element, interact with the first person in the real scene according to the voice data of the first person, the interaction including the agent avatar conversing with the first person according to a preset corpus. The invention solves the problem that existing VR-based or AR-based interactive systems fail to take the group of autistic children into consideration.
Description
Technical Field
The invention belongs to the technical field of augmented reality, and in particular relates to an expression-based AR interaction method and system, an electronic device, and a storage medium.
Background Art
In emotional intervention for autistic children, the prior art typically relies on virtual reality (VR). Because existing VR products require the user to wear a virtual-reality headset or other wearable device, the interaction space is limited and children are prone to operating errors; autistic children in particular dislike wearing devices on their bodies. Existing augmented reality (AR) solutions, meanwhile, require an AR headset or a handheld digital device to achieve a stereoscopic imaging effect, which makes the equipment expensive, keeps the user's hands occupied, and leads to low efficiency and poor recognition accuracy. Moreover, such solutions ignore the development needs of autistic children as a user group: their operation is complicated and their workflows cumbersome, making them unsuitable for children, especially autistic children.
Summary of the Invention
Embodiments of the present application provide an expression-based AR interaction method and system, an electronic device, and a storage medium, so as to at least solve the problem that existing VR-based or AR-based interactive systems fail to take the group of autistic children into consideration.
In a first aspect, an embodiment of the present application provides an expression-based AR interaction method, comprising: a reality data collection step of collecting real-time image data of a physical object and a first person in a real scene, while simultaneously collecting voice data of the first person and real-time image data of the environment of the real scene; an AR picture generation step of generating, from the real-time image data of the physical object, the first person, and the environment, an AR picture containing virtual images of the physical object, the first person, and the environment, superimposing an expression element on the virtual image of the physical object in the AR picture, simultaneously generating an agent avatar in the AR picture, and displaying the AR picture on a screen; and an AR intelligent interaction step in which the agent avatar, on the basis of the virtual image of the physical object with the superimposed expression element, interacts with the first person in the real scene according to the voice data of the first person and the real-time image data, the interaction including the agent avatar conversing with the first person according to a preset corpus.
Preferably, the method further comprises an operator intervention step: if the preset corpus cannot support the dialogue between the agent avatar and the first person, a second person intervenes in the interaction.
Preferably, the reality data collection step further comprises: an expression training step of training an expression recognition and classification model on a facial expression dataset using a CNN; and an expression classification step of recognizing the collected real-time image data of the first person through the OpenCV interface, extracting the facial expression data of the first person, and feeding it into the expression recognition and classification model for classification.
Preferably, the surface of the physical object is covered with an identification image comprising a two-dimensional figure with patterns and colors, which is used for collecting the real-time image data of the physical object.
In a second aspect, an embodiment of the present application provides an expression-based AR interaction system suitable for the above expression-based AR interaction method, comprising: a reality data collection module that collects real-time image data of a physical object and a first person in a real scene, while simultaneously collecting voice data of the first person and real-time image data of the environment of the real scene; an AR picture generation module that generates, from the real-time image data of the physical object, the first person, and the environment, an AR picture containing virtual images of the physical object, the first person, and the environment, superimposes an expression element on the virtual image of the physical object in the AR picture, simultaneously generates an agent avatar in the AR picture, and displays the AR picture on a screen; and an AR intelligent interaction module in which the agent avatar, on the basis of the virtual image of the physical object with the superimposed expression element, interacts with the first person in the real scene according to the voice data of the first person and the real-time image data, the interaction including the agent avatar conversing with the first person according to a preset corpus.
In some of these embodiments, the system further comprises an operator intervention module: if the preset corpus cannot support the dialogue between the agent avatar and the first person, a second person intervenes in the interaction.
In some of these embodiments, the reality data collection module further comprises: an expression training unit that trains an expression recognition and classification model on a facial expression dataset using a CNN; and an expression classification unit that recognizes the collected real-time image data of the first person through the OpenCV interface, extracts the facial expression data of the first person, and feeds it into the expression recognition and classification model for classification.
In some of these embodiments, the surface of the physical object is covered with an identification image comprising a two-dimensional figure with patterns and colors, which is used for collecting the real-time image data of the physical object.
In a third aspect, an embodiment of the present application provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the expression-based AR interaction method described in the first aspect above.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the expression-based AR interaction method described in the first aspect above.
Compared with the related art, the embodiments of the present application capture the people and environment of a real scene and display them directly on a screen; in addition, physical objects for interaction are designed and captured, and expression elements are superimposed when the AR picture is generated. Together, these elements create an environment suitable for autistic children to interact with an AR system. Furthermore, the embodiments of the present application can capture and classify people's facial expressions, and by designing interactive games they enable autistic children to carry out expression-based AR interaction.
Brief Description of the Drawings
The drawings described here are provided for a further understanding of the present application and constitute a part of it; the illustrative embodiments of the present application and their descriptions are used to explain the application and do not constitute an improper limitation of it. In the drawings:
Fig. 1 is a flowchart of the expression-based AR interaction method of the present invention;
Fig. 2 is a flowchart of the sub-steps of step S1 in Fig. 1;
Fig. 3 is a block diagram of the expression-based AR interaction system of the present invention;
Fig. 4 is a block diagram of the electronic device of the present invention;
Fig. 5 is an effect diagram of a physical object according to an embodiment of the present application;
Fig. 6 is an effect diagram of an agent avatar according to an embodiment of the present application;
Fig. 7 is a diagram of one interaction effect according to an embodiment of the present application;
Fig. 8 is a diagram of another interaction effect according to an embodiment of the present application;
Reference numerals in the figures:
1. reality data collection module; 2. AR picture generation module; 3. AR intelligent interaction module; 4. operator intervention module; 11. expression training unit; 12. expression classification unit; 60. bus; 61. processor; 62. memory; 63. communication interface.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the application is described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present application and are not intended to limit it. All other embodiments obtained by a person of ordinary skill in the art on the basis of the embodiments provided in this application, without creative effort, fall within the scope of protection of this application.
Obviously, the accompanying drawings in the following description are only some examples or embodiments of the present application, and a person of ordinary skill in the art may, on the basis of these drawings and without creative effort, apply the present application to other similar scenarios. In addition, it should be understood that although the development effort involved may be complex and lengthy, for a person of ordinary skill in the art familiar with the content disclosed in this application, certain changes in design, manufacture, or production made on the basis of the disclosed technical content are merely conventional technical means, and should not be taken to mean that the disclosure of this application is insufficient.
Reference in this application to "an embodiment" means that a particular feature, structure, or characteristic described in connection with that embodiment may be included in at least one embodiment of the present application. The appearance of this phrase in various places in the specification does not necessarily refer to the same embodiment, nor to a separate or alternative embodiment mutually exclusive with other embodiments. A person of ordinary skill in the art understands, explicitly and implicitly, that the embodiments described in this application may be combined with other embodiments where no conflict arises.
Unless otherwise defined, the technical or scientific terms used in this application have the ordinary meaning understood by a person of ordinary skill in the technical field to which this application belongs. Words such as "a", "an", "one", and "the" in this application do not denote a limitation on quantity and may indicate the singular or the plural. The terms "comprise", "include", "have", and any of their variants are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device comprising a series of steps or modules (units) is not limited to the listed steps or units, but may further include steps or units not listed, or other steps or units inherent to that process, method, product, or device.
Embodiments of the present invention are described in detail below with reference to the accompanying drawings:
Fig. 1 is a flowchart of the expression-based AR interaction method of the present invention. Referring to Fig. 1, the method comprises the following steps:
S1: Collect real-time image data of a physical object and a first person in a real scene, while simultaneously collecting voice data of the first person and real-time image data of the environment of the real scene. Optionally, the surface of the physical object is covered with an identification image comprising a two-dimensional figure with patterns and colors, which is used for collecting the real-time image data of the physical object.
In a specific implementation, a physical object is designed as an external tangible object of the augmented reality system, sized to fit the first person's hand; optionally, building blocks may be used as the physical object. In a specific implementation, two-dimensional graphics recognizable by a computer vision system are affixed to the physical object, and the complexity of their patterns and colors is designed to improve the accuracy of the computer recognition.
In a specific implementation, a camera captures real-time images of the physical object and of the first person, and the first person's voice data is collected at the same time; optionally, the real-time images of the first person include the first person's facial expressions. In addition, the camera also captures real-time image data of the current real environment.
Optionally, Fig. 2 is a flowchart of the sub-steps of step S1 in Fig. 1. Referring to Fig. 2:
S11: Train an expression recognition and classification model on a facial expression dataset using a CNN;
S12: Recognize the collected real-time image data of the first person through the OpenCV interface, extract the facial expression data of the first person, and feed it into the expression recognition and classification model for classification.
In a specific implementation, the Fer2013 facial expression dataset is used to train a CNN; after the real-time image data of the first person is collected, the OpenCV interface is called to detect the facial expression, and the expression is passed to the trained model for classification.
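By way of illustration only, the following Python sketch shows one way this training-and-classification pipeline could look, using Keras for the CNN and OpenCV's Haar cascade for face detection. The network layout, the file name fer2013.csv, the hyper-parameters, and the Haar-cascade detector are assumptions of the sketch; the patent itself only specifies Fer2013, a CNN, and the OpenCV interface.

```python
import cv2
import numpy as np
import pandas as pd
from tensorflow import keras
from tensorflow.keras import layers

# The seven FER2013 emotion classes, indexed as in the dataset.
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def load_fer2013(csv_path="fer2013.csv"):
    # Each FER2013 row stores a 48x48 grayscale face as a space-separated pixel string.
    df = pd.read_csv(csv_path)
    x = np.stack([np.array(p.split(), dtype="float32") for p in df["pixels"]])
    x = x.reshape(-1, 48, 48, 1) / 255.0
    y = keras.utils.to_categorical(df["emotion"], num_classes=len(EMOTIONS))
    return x, y

def build_cnn():
    # Small convolutional classifier; the exact architecture is an assumption.
    return keras.Sequential([
        keras.Input(shape=(48, 48, 1)),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"), layers.Dropout(0.5),
        layers.Dense(len(EMOTIONS), activation="softmax"),
    ])

def train(csv_path="fer2013.csv"):
    # S11: train the expression recognition and classification model.
    x, y = load_fer2013(csv_path)
    model = build_cnn()
    model.compile("adam", "categorical_crossentropy", metrics=["accuracy"])
    model.fit(x, y, batch_size=64, epochs=20, validation_split=0.1)
    return model

# S12: face detection via OpenCV, then classification with the trained model.
FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_frame(frame, model):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (fx, fy, fw, fh) in FACE_CASCADE.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[fy:fy + fh, fx:fx + fw], (48, 48))
        face = face.astype("float32")[None, :, :, None] / 255.0
        return EMOTIONS[int(np.argmax(model.predict(face, verbose=0)))]
    return None  # no face found in this frame
```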
Referring again to Fig. 1:
S2: From the real-time image data of the physical object, the first person, and the environment, generate an AR picture containing virtual images of the physical object, the first person, and the environment, superimpose an expression element on the virtual image of the physical object in the AR picture, and simultaneously generate an agent avatar in the AR picture; the AR picture is displayed on a screen.
In a specific implementation, the real-time image data collected in step S1 is mirrored on the display screen and the physical objects are identified; through augmented reality technology, an expression element, i.e. an expression pattern, is superimposed on each physical object, and the expression pattern moves with the physical object. Fig. 5 is an effect diagram of a physical object according to an embodiment of the present application. Referring to Fig. 5, a building block in the real scene serves as the physical object; a two-dimensional figure with patterns and colors is affixed to it, and an expression element is superimposed on the block through augmented reality technology for interaction with the first person.
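The patent implements the marker tracking and overlay with Vuforia inside Unity (see step S3 below). Purely as a language-consistent illustration of the same idea, detecting a printed 2D figure and pinning an expression image to it so that it moves with the block, here is a hedged sketch using OpenCV's ArUco module; the OpenCV 4.7+ API, the marker dictionary, and the image file name are all assumptions of the sketch.

```python
import cv2
import numpy as np

# Assumed marker dictionary; the patent uses custom 2D graphics, not ArUco.
DETECTOR = cv2.aruco.ArucoDetector(
    cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50))
EXPRESSION = cv2.imread("happy_face.png")  # illustrative expression element

def overlay_expressions(frame):
    # Find every marker in the frame and warp the expression image onto it,
    # so the superimposed expression follows the physical block as it moves.
    corners, ids, _rejected = DETECTOR.detectMarkers(
        cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    if ids is None:
        return frame
    h, w = EXPRESSION.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    for quad in corners:
        m = cv2.getPerspectiveTransform(src, quad.reshape(4, 2).astype(np.float32))
        warped = cv2.warpPerspective(EXPRESSION, m, (frame.shape[1], frame.shape[0]))
        mask = warped.sum(axis=2) > 0  # copy only the warped expression pixels
        frame[mask] = warped[mask]
    return frame
```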
In addition, a virtual agent avatar is designed with three-dimensional design software and superimposed on the picture using augmented reality technology. Optionally, the virtual agent avatar is a human figure. Fig. 6 is an effect diagram of the agent avatar according to an embodiment of the present application. Referring to Fig. 6, a human-figure virtual agent avatar is designed and rendered in three dimensions.
In a specific implementation, the actions and voice of the agent avatar are controlled by the Unity game engine.
In a specific implementation, a wake-up phrase is preset, and the first person activates the agent avatar by speaking it.
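The patent does not disclose how the wake-up phrase is detected. One plausible sketch, assuming the third-party speech_recognition package and its free Google STT endpoint (neither of which the patent specifies), is:

```python
import speech_recognition as sr  # assumed STT backend, not named by the patent

WAKE_PHRASE = "hello friend"  # illustrative wake-up phrase

def wait_for_wake():
    # Listen on the microphone until the transcript contains the wake phrase.
    recognizer = recognizer_instance = sr.Recognizer()
    with sr.Microphone() as source:
        while True:
            audio = recognizer.listen(source)
            try:
                text = recognizer.recognize_google(audio).lower()
            except sr.UnknownValueError:
                continue  # speech was unintelligible; keep listening
            if WAKE_PHRASE in text:
                return True  # activate the agent avatar
```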
S3: On the basis of the virtual image of the physical object with the superimposed expression element, the agent avatar interacts with the first person in the real scene according to the voice data of the first person and the real-time image data; the interaction includes the agent avatar conversing with the first person according to a preset corpus.
In a specific implementation, the Vuforia AR engine interface is used to locate the target physical object, so that the first person can control the virtual expression on the screen by manipulating the physical object.
In a specific implementation, Fig. 7 is a diagram of one interaction effect according to an embodiment of the present application. Referring to Fig. 7, a mirror image of the real scene is displayed on the screen, a virtual agent is generated through augmented reality technology, real-time image data of the building blocks in the real scene is collected, and the blocks are displayed on the screen with expression elements superimposed through augmented reality technology. As shown in Fig. 7, the embodiment of the present application provides a first interaction rule: the first person completes a memory game by conversing with the virtual agent; the augmented reality engine scans the building blocks serving as physical objects and generates virtual expressions. On the screen, the virtual expressions automatically change angle and position, and the first person must guess and find a specified virtual expression. Optionally, the first person may identify, by voice or by action, the real building block corresponding to that virtual expression.
In a specific implementation, Fig. 8 is a diagram of another interaction effect according to an embodiment of the present application. Referring to Fig. 8, a mirror image of the real scene is displayed on the screen, a virtual agent is generated through augmented reality technology, real-time image data of the building blocks in the real scene is collected, and the blocks are displayed on the screen with expression elements superimposed through augmented reality technology; in addition, a virtual whiteboard presenting a two-dimensional cartoon is generated in the augmented reality picture. As shown in Fig. 8, the embodiment of the present application provides a second interaction rule: the virtual agent asks the first person questions, and the first person must answer what expression the character in the cartoon should make at that moment. In a specific implementation, optionally, the first person inputs an answer by holding up a building block, and the virtual agent judges whether it is correct and gives a prompt. In a specific implementation, a social scene is preset as the content of the cartoon.
In a specific implementation, the embodiment of the present application provides a third interaction rule: the image of the first person is displayed directly in the augmented reality picture, i.e. the first person appears on the screen; the first person makes a specified expression at the request of the virtual agent, and the first person's facial expression is collected and fed into the expression classification model for classification and detection. Optionally, the duration of the expression is also calculated, and the completion result is presented visually.
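The patent does not specify how the expression duration is computed. One simple possibility, sketched below under the assumption of a fixed camera frame rate, is to count classified frames:

```python
def expression_duration(frame_labels, target, fps=30.0):
    # frame_labels: per-frame expression labels from the classifier sketched
    # above; duration = matching frames divided by frames per second.
    return sum(1 for label in frame_labels if label == target) / fps

# e.g. 45 frames labelled "happy" at 30 fps -> held for 1.5 seconds
```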
In a specific implementation, the dialogue between the virtual agent and the first person is supported by an artificial-intelligence corpus: the corresponding reply is retrieved according to keywords in the first person's utterance.
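The patent discloses no retrieval mechanism beyond keyword matching, so the following minimal keyword-indexed lookup in Python is an assumption; the corpus contents are illustrative placeholders.

```python
# Minimal keyword-based corpus lookup; entries are illustrative placeholders,
# not material disclosed by the patent.
CORPUS = {
    ("hello", "hi"): "Hello! Shall we play the expression game?",
    ("happy", "smile"): "Great! Can you show me a happy face?",
    ("bye", "goodbye"): "Goodbye! See you next time.",
}
FALLBACK = None  # None signals that the second person should intervene (step S4)

def retrieve_reply(utterance: str):
    words = utterance.lower().split()
    for keywords, reply in CORPUS.items():
        if any(k in words for k in keywords):
            return reply
    return FALLBACK
```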
S4: If the preset corpus cannot support the dialogue between the agent avatar and the first person, a second person intervenes in the interaction.
In a specific implementation, a separate operator terminal is designed, operated by a second person other than the first person. If the agent avatar cannot complete the dialogue with the first person on the basis of the existing corpus, the second person intervenes and drives the agent's conversation; optionally, the second person can also control the pace of the interaction and handle any contingencies that the preset rules cannot cope with.
In a specific implementation, the UDP network protocol allows the second person to control the virtual agent's dialogue, control the pace of the interaction, and trigger subsequent interaction events.
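A minimal sketch of such an operator channel over UDP, in Python; the port number and the text-based message format ("say:...", "event:...") are assumptions, since the patent only names the UDP protocol.

```python
import socket

OPERATOR_PORT = 9050  # assumed port, not specified by the patent

def operator_listener(handle_say, handle_event):
    # AR side: receive operator commands over UDP and dispatch them.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", OPERATOR_PORT))
    while True:
        data, _addr = sock.recvfrom(1024)
        msg = data.decode("utf-8")
        if msg.startswith("say:"):      # second person supplies the agent's line
            handle_say(msg[4:])
        elif msg.startswith("event:"):  # second person triggers an interaction event
            handle_event(msg[6:])

def operator_send(text, host="127.0.0.1"):
    # Operator side, e.g. operator_send("say:Well done!")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(text.encode("utf-8"), (host, OPERATOR_PORT))
```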
Fig. 3 is a block diagram of the expression-based AR interaction system according to the present invention. Referring to Fig. 3, the system comprises:
Reality data collection module 1: collects real-time image data of a physical object and a first person in a real scene, while simultaneously collecting voice data of the first person and real-time image data of the environment of the real scene. Optionally, the surface of the physical object is covered with an identification image comprising a two-dimensional figure with patterns and colors, which is used for collecting the real-time image data of the physical object.
In a specific implementation, a physical object is designed as an external tangible object of the augmented reality system, sized to fit the first person's hand; optionally, building blocks may be used as the physical object. In a specific implementation, two-dimensional graphics recognizable by a computer vision system are affixed to the physical object, and the complexity of their patterns and colors is designed to improve the accuracy of the computer recognition.
In a specific implementation, a camera captures real-time images of the physical object and of the first person, and the first person's voice data is collected at the same time; optionally, the real-time images of the first person include the first person's facial expressions. In addition, the camera also captures real-time image data of the current real environment.
Optionally, the reality data collection module 1 further comprises:
Expression training unit 11: trains an expression recognition and classification model on a facial expression dataset using a CNN;
Expression classification unit 12: recognizes the collected real-time image data of the first person through the OpenCV interface, extracts the facial expression data of the first person, and feeds it into the expression recognition and classification model for classification.
In a specific implementation, the Fer2013 facial expression dataset is used to train a CNN; after the real-time image data of the first person is collected, the OpenCV interface is called to detect the facial expression, and the expression is passed to the trained model for classification.
AR picture generation module 2: from the real-time image data of the physical object, the first person, and the environment, generates an AR picture containing virtual images of the physical object, the first person, and the environment, superimposes an expression element on the virtual image of the physical object in the AR picture, and simultaneously generates an agent avatar in the AR picture; the AR picture is displayed on a screen.
In a specific implementation, the real-time image data collected by the reality data collection module 1 is mirrored on the display screen and the physical objects are identified; through augmented reality technology, an expression element, i.e. an expression pattern, is superimposed on each physical object, and the expression pattern moves with the physical object. Fig. 5 is an effect diagram of a physical object according to an embodiment of the present application. Referring to Fig. 5, a building block in the real scene serves as the physical object; a two-dimensional figure with patterns and colors is affixed to it, and an expression element is superimposed on the block through augmented reality technology for interaction with the first person.
In addition, a virtual agent avatar is designed with three-dimensional design software and superimposed on the picture using augmented reality technology. Optionally, the virtual agent avatar is a human figure. Fig. 6 is an effect diagram of the agent avatar according to an embodiment of the present application. Referring to Fig. 6, a human-figure virtual agent avatar is designed and rendered in three dimensions.
In a specific implementation, the actions and voice of the agent avatar are controlled by the Unity game engine.
In a specific implementation, a wake-up phrase is preset, and the first person activates the agent avatar by speaking it.
AR intelligent interaction module 3: on the basis of the virtual image of the physical object with the superimposed expression element, the agent avatar interacts with the first person in the real scene according to the voice data of the first person and the real-time image data; the interaction includes the agent avatar conversing with the first person according to a preset corpus.
In a specific implementation, the Vuforia AR engine interface is used to locate the target physical object, so that the first person can control the virtual expression on the screen by manipulating the physical object.
In a specific implementation, Fig. 7 is a diagram of one interaction effect according to an embodiment of the present application. Referring to Fig. 7, a mirror image of the real scene is displayed on the screen, a virtual agent is generated through augmented reality technology, real-time image data of the building blocks in the real scene is collected, and the blocks are displayed on the screen with expression elements superimposed through augmented reality technology. As shown in Fig. 7, the embodiment of the present application provides a first interaction rule: the first person completes a memory game by conversing with the virtual agent; the augmented reality engine scans the building blocks serving as physical objects and generates virtual expressions. On the screen, the virtual expressions automatically change angle and position, and the first person must guess and find a specified virtual expression. Optionally, the first person may identify, by voice or by action, the real building block corresponding to that virtual expression.
In a specific implementation, Fig. 8 is a diagram of another interaction effect according to an embodiment of the present application. Referring to Fig. 8, a mirror image of the real scene is displayed on the screen, a virtual agent is generated through augmented reality technology, real-time image data of the building blocks in the real scene is collected, and the blocks are displayed on the screen with expression elements superimposed through augmented reality technology; in addition, a virtual whiteboard presenting a two-dimensional cartoon is generated in the augmented reality picture. As shown in Fig. 8, the embodiment of the present application provides a second interaction rule: the virtual agent asks the first person questions, and the first person must answer what expression the character in the cartoon should make at that moment. In a specific implementation, optionally, the first person inputs an answer by holding up a building block, and the virtual agent judges whether it is correct and gives a prompt. In a specific implementation, a social scene is preset as the content of the cartoon.
In a specific implementation, the embodiment of the present application provides a third interaction rule: the image of the first person is displayed directly in the augmented reality picture, i.e. the first person appears on the screen; the first person makes a specified expression at the request of the virtual agent, and the first person's facial expression is collected and fed into the expression classification model for classification and detection. Optionally, the duration of the expression is also calculated, and the completion result is presented visually.
In a specific implementation, the dialogue between the virtual agent and the first person is supported by an artificial-intelligence corpus: the corresponding reply is retrieved according to keywords in the first person's utterance.
Operator intervention module 4: if the preset corpus cannot support the dialogue between the agent avatar and the first person, a second person intervenes in the interaction.
In a specific implementation, a separate operator terminal is designed, operated by a second person other than the first person. If the agent avatar cannot complete the dialogue with the first person on the basis of the existing corpus, the second person intervenes and drives the agent's conversation; optionally, the second person can also control the pace of the interaction and handle any contingencies that the preset rules cannot cope with.
In a specific implementation, the UDP network protocol allows the second person to control the virtual agent's dialogue, control the pace of the interaction, and trigger subsequent interaction events.
In addition, the expression-based AR interaction method described in conjunction with Figs. 1 and 2 can be implemented by an electronic device. Fig. 4 is a block diagram of the electronic device of the present invention.
The electronic device may include a processor 61 and a memory 62 storing computer program instructions.
Specifically, the processor 61 may include a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
The memory 62 may include mass storage for data or instructions. By way of example and not limitation, the memory 62 may include a hard disk drive (HDD), a floppy disk drive, a solid-state drive (SSD), flash memory, an optical disc, a magneto-optical disc, magnetic tape, a universal serial bus (USB) drive, or a combination of two or more of these. Where appropriate, the memory 62 may include removable or non-removable (or fixed) media. Where appropriate, the memory 62 may be internal or external to the data processing apparatus. In a particular embodiment, the memory 62 is non-volatile memory. In a particular embodiment, the memory 62 includes read-only memory (ROM) and random-access memory (RAM). Where appropriate, the ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory (FLASH), or a combination of two or more of these. Where appropriate, the RAM may be static random-access memory (SRAM) or dynamic random-access memory (DRAM), where the DRAM may be fast page mode DRAM (FPM DRAM), extended data out DRAM (EDO DRAM), synchronous DRAM (SDRAM), or the like.
The memory 62 may be used to store or cache various data files that need to be processed and/or used in communication, as well as the computer program instructions executed by the processor 61.
The processor 61 reads and executes the computer program instructions stored in the memory 62 to implement any of the expression-based AR interaction methods in the above embodiments.
In some of these embodiments, the electronic device may further include a communication interface 63 and a bus 60. As shown in Fig. 4, the processor 61, the memory 62, and the communication interface 63 are connected via the bus 60 and communicate with one another.
The communication interface 63 enables data communication with other components such as external devices, image/data acquisition devices, databases, external storage, and image/data processing workstations.
The bus 60 comprises hardware, software, or both, and couples the components of the electronic device to one another. The bus 60 includes, but is not limited to, at least one of the following: a data bus, an address bus, a control bus, an expansion bus, or a local bus. By way of example and not limitation, the bus 60 may include an Accelerated Graphics Port (AGP) or other graphics bus, an Extended Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCI-X) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association Local Bus (VLB), or another suitable bus, or a combination of two or more of these. Where appropriate, the bus 60 may include one or more buses. Although specific buses are described and illustrated in the embodiments of the present application, any suitable bus or interconnect is contemplated.
The electronic device can execute the expression-based AR interaction method of the embodiments of the present application.
In addition, in combination with the expression-based AR interaction method in the above embodiments, the embodiments of the present application may be implemented by providing a computer-readable storage medium. Computer program instructions are stored on the computer-readable storage medium; when the computer program instructions are executed by a processor, any of the expression-based AR interaction methods in the above embodiments is implemented.
The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random-access memory (RAM), a magnetic disk, or an optical disc.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features have been described; however, any combination of them that contains no contradiction should be regarded as within the scope of this specification.
The above embodiments express only several implementations of the present application, and their descriptions are relatively specific and detailed, but they should not be construed as limiting the scope of the invention patent. It should be noted that a person of ordinary skill in the art may make several modifications and improvements without departing from the concept of the present application, all of which fall within the scope of protection of this application. Therefore, the scope of protection of this patent application shall be determined by the appended claims.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110571684.0A CN113176827B (en) | 2021-05-25 | 2021-05-25 | AR interaction method and system based on expressions, electronic device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110571684.0A CN113176827B (en) | 2021-05-25 | 2021-05-25 | AR interaction method and system based on expressions, electronic device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113176827A | 2021-07-27 |
CN113176827B | 2022-10-28 |
Family
ID=76928211
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110571684.0A Active CN113176827B (en) | 2021-05-25 | 2021-05-25 | AR interaction method and system based on expressions, electronic device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113176827B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116643675B (en) * | 2023-07-27 | 2023-10-03 | 苏州创捷传媒展览股份有限公司 | Intelligent interaction system based on AI virtual character |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109841217A (en) * | 2019-01-18 | 2019-06-04 | 苏州意能通信息技术有限公司 | A kind of AR interactive system and method based on speech recognition |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN209821887U (en) * | 2019-03-26 | 2019-12-20 | 广东虚拟现实科技有限公司 | Mark |
JP7150894B2 (en) * | 2019-10-15 | 2022-10-11 | ベイジン・センスタイム・テクノロジー・デベロップメント・カンパニー・リミテッド | AR scene image processing method and device, electronic device and storage medium |
CN112053449A (en) * | 2020-09-09 | 2020-12-08 | 脸萌有限公司 | Augmented reality-based display method, device and storage medium |
- 2021-05-25: Application CN202110571684.0A filed in China; granted as patent CN113176827B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN113176827A (en) | 2021-07-27 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |