CN109542230B - Image processing method, device, electronic device and storage medium - Google Patents
- Publication number
- CN109542230B (application CN201811440635.8A)
- Authority
- CN
- China
- Prior art keywords
- user
- state
- interactive
- facial
- parameters
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Human Computer Interaction (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Probability & Statistics with Applications (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Multimedia (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Embodiments of the present application provide an image processing method and apparatus, an electronic device, and a storage medium, relating to the technical field of image processing. The method includes: obtaining a facial image of a user while the user participates in an interactive activity; obtaining demeanor parameters of the user based on the facial image; determining, based on the demeanor parameters, the user's interaction state with respect to the next operation of the interactive activity; and generating and outputting prompt information corresponding to the interaction state. Because the user's interaction state for the next operation of the interactive activity can be determined from the demeanor parameters, a prompt can be generated and output whenever the interaction state indicates that the user's current decision needs improvement, helping the user quickly improve decision-making in interactive activities.
Description
Technical Field
The present application relates to the technical field of image processing, and in particular to an image processing method and apparatus, an electronic device, and a storage medium.
Background
At present, users can participate in many interactive activities, such as chess or card games. During such activities, however, it is difficult for a user to actively discover which of his or her decisions need improvement, so the user easily hits a bottleneck and cannot continue to improve his or her decision-making in the activity.
Summary of the Invention
The present application aims to provide an image processing method and apparatus, an electronic device, and a storage medium that prompt the user about decisions that need improvement, thereby helping the user quickly improve decision-making in interactive activities.
To achieve the above purpose, the embodiments of the present application are implemented as follows:
In a first aspect, an embodiment of the present application provides an image processing method, the method including:
obtaining a facial image of a user while the user participates in an interactive activity;
obtaining demeanor parameters of the user based on the facial image;
determining, based on the demeanor parameters, the user's interaction state with respect to the next operation of the interactive activity; and
generating and outputting prompt information corresponding to the interaction state.
With reference to the first aspect, in some possible implementations, the facial image includes M facial images obtained within a first preset duration before the current moment, the demeanor parameters include M demeanor parameters corresponding to the M facial images, and M is an integer greater than 1; determining, based on the demeanor parameters, the user's interaction state with respect to the next operation of the interactive activity includes:
obtaining, based on the M demeanor parameters, M gaze focal points of the user on the interactive interface of the interactive activity; and
determining, based on the M gaze focal points, whether the user's interaction state with respect to the next operation of the interactive activity is a gaze-focused state.
With reference to the first aspect, in some possible implementations, determining, based on the M gaze focal points, whether the user's interaction state with respect to the next operation of the interactive activity is a gaze-focused state includes:
determining whether the number of the M gaze focal points that fall within the same region, among at least two regions of the interactive interface, is greater than or equal to a first preset number;
if so, the user's interaction state with respect to the next operation of the interactive activity is the gaze-focused state.
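The region-counting check above can be sketched as follows. The region labels and threshold are hypothetical stand-ins for the interface regions and the first preset number; the patent does not prescribe a concrete data structure.

```python
from collections import Counter

def is_gaze_focused(focal_regions, first_preset_number):
    """Return True if at least `first_preset_number` of the M gaze focal
    points fall within the same region of the interactive interface.

    `focal_regions` maps each of the M gaze focal points to the region
    it lies in (the region labels here are illustrative).
    """
    if not focal_regions:
        return False
    # Count focal points per region and look at the most-hit region.
    _, top_count = Counter(focal_regions).most_common(1)[0]
    return top_count >= first_preset_number
```

For example, with five gaze samples of which four land in the same board region and a threshold of three, the user would be judged gaze-focused.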
With reference to the first aspect, in some possible implementations, obtaining, based on the M demeanor parameters, the M gaze focal points of the user on the interactive interface of the interactive activity includes:
determining, for each of the M demeanor parameters, a corresponding pair of gaze directions of the user; and
determining the gaze focal point formed on the interactive interface of the interactive activity by each pair of gaze directions, thereby determining the M gaze focal points in total.
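One way to realize the step above, assuming the interactive interface lies in the plane z = 0 and each demeanor parameter yields an eye position and gaze direction per eye, is to intersect each line of sight with that plane and take the midpoint. This geometric model is an assumption for illustration, not the patent's stated method.

```python
def ray_plane_hit(eye_pos, gaze_dir, screen_z=0.0):
    """Intersect one eye's line of sight with the interface plane z = screen_z."""
    (ox, oy, oz), (dx, dy, dz) = eye_pos, gaze_dir
    t = (screen_z - oz) / dz           # ray parameter where the ray meets the plane
    return (ox + t * dx, oy + t * dy)  # (x, y) on the interactive interface

def gaze_focal_point(left_eye, left_dir, right_eye, right_dir):
    """The two lines of sight rarely intersect exactly in 3-D, so take the
    midpoint of their two intersections with the interface plane as the
    single focal point formed by the pair of gaze directions."""
    lx, ly = ray_plane_hit(left_eye, left_dir)
    rx, ry = ray_plane_hit(right_eye, right_dir)
    return ((lx + rx) / 2, (ly + ry) / 2)
```

Repeating this for each of the M demeanor parameters yields the M gaze focal points.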
With reference to the first aspect, in some possible implementations, the facial image further includes N facial images obtained within a second preset duration before the current moment, where N is an integer greater than 1, and the demeanor parameters include N demeanor parameters corresponding to the N facial images; after determining, based on the demeanor parameters, the user's interaction state with respect to the next operation of the interactive activity, and before generating and outputting the prompt information corresponding to the interaction state, the method further includes:
after determining that the user is in the gaze-focused state, determining, based on the N demeanor parameters, the emotion type corresponding to each of the N demeanor parameters, for N emotion types in total;
determining, based on the N emotion types, whether the user's interaction state is a non-positive emotional state; and
if so, performing the step of generating and outputting the prompt information corresponding to the interaction state.
With reference to the first aspect, in some possible implementations, determining, based on the N emotion types, whether the user's interaction state is a non-positive emotional state includes:
determining whether the number of non-positive emotions among the N emotion types is greater than or equal to a second preset number, where the number of non-positive emotions being greater than or equal to the second preset number indicates that the user's interaction state is a non-positive emotional state.
With reference to the first aspect, in some possible implementations, determining, based on the N demeanor parameters, the emotion type corresponding to each of the N demeanor parameters, for N emotion types in total, includes:
analyzing each of the N demeanor parameters with a facial emotion analysis model to obtain, for each demeanor parameter, the probability output by the model that the parameter corresponds to each of a plurality of candidate emotion types; and
determining, from the probabilities of the candidate emotion types, the candidate emotion type with the highest probability, where for each demeanor parameter the candidate emotion type with the highest probability is the emotion type corresponding to that demeanor parameter.
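The per-image argmax over model probabilities, followed by the non-positive count from the preceding implementation, can be sketched as below. The emotion label set is hypothetical; the patent does not enumerate which emotions count as non-positive.

```python
# Illustrative label set; which emotions are "non-positive" is an assumption here.
NON_POSITIVE = {"anxious", "frustrated", "sad", "angry"}

def classify_emotion(probabilities):
    """Pick the candidate emotion type with the highest model probability."""
    return max(probabilities, key=probabilities.get)

def is_non_positive_state(per_image_probs, second_preset_number):
    """Classify each of the N demeanor parameters, then check whether the
    count of non-positive emotions reaches the second preset number."""
    labels = [classify_emotion(p) for p in per_image_probs]
    return sum(label in NON_POSITIVE for label in labels) >= second_preset_number
```

Here `per_image_probs` stands in for the facial emotion analysis model's per-image probability outputs.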
With reference to the first aspect, in some possible implementations, generating and outputting the prompt information corresponding to the interaction state includes:
determining, from the user's interaction state being the gaze-focused state, the object of the interactive activity contained in the aforementioned same region; and
generating prompt information for the next operation according to the object, and outputting the prompt information.
With reference to the first aspect, in some possible implementations, generating the prompt information for the next operation according to the object and outputting the prompt information includes:
determining whether the object is an entity in the interactive activity or part of the background of the interactive activity;
if the object is an entity in the interactive activity, raising the weight used to score that entity in the current evaluation function from a first value to a second value to obtain a currently adjusted evaluation function, and generating the prompt information for the next operation, related to the entity, based on the currently adjusted evaluation function; if the object is part of the background, generating the prompt information for the next operation based on the current evaluation function.
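A minimal sketch of the weight-adjusted evaluation function follows. The move representation, weight values, and entity names are all assumptions for illustration; the patent only specifies that the focused entity's weight is raised from a first value to a second value before the evaluation function is used.

```python
def suggest_next_move(candidate_moves, entity_weights, focused_entity=None,
                      second_value=2.0):
    """Score candidate moves with a weighted evaluation function and return
    the best one. If the user's gaze region contains a specific entity, its
    weight is raised to `second_value` before scoring, biasing the suggested
    next operation toward moves involving that entity."""
    weights = dict(entity_weights)
    if focused_entity is not None:
        weights[focused_entity] = second_value  # first value -> second value

    def score(move):
        # Linear evaluation: weighted sum of per-entity feature values.
        return sum(weights.get(ent, 0.0) * v for ent, v in move["features"].items())

    return max(candidate_moves, key=score)["name"]
```

With no focused entity the suggestion follows the unadjusted weights; focusing on an entity can flip the suggestion toward it.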
With reference to the first aspect, in some possible implementations, the facial image includes M facial images obtained within a first preset duration before the current moment and N facial images obtained within a second preset duration before the current moment, the demeanor parameters include M demeanor parameters corresponding to the M facial images and N demeanor parameters corresponding to the N facial images, and M and N are integers greater than 1; determining, based on the demeanor parameters, the user's interaction state with respect to the next operation of the interactive activity includes:
obtaining, based on the M demeanor parameters, M gaze focal points of the user on the interactive interface of the interactive activity, and determining, based on the N demeanor parameters, the emotion type corresponding to each of the N demeanor parameters, for N emotion types in total;
determining whether the number of the M gaze focal points that fall within the same region, among at least two regions of the interactive interface, is greater than or equal to a first preset number, and determining whether the number of non-positive emotions among the N emotion types is greater than or equal to a second preset number; and
when the number of gaze focal points in the same region satisfies the first preset number, determining that the user's interaction state with respect to the next operation of the interactive activity is the gaze-focused state; and when the number of non-positive emotions among the N emotion types satisfies the second preset number, determining that the user's interaction state is a non-positive emotional state.
With reference to the first aspect, in some possible implementations, the method further includes:
when it is determined that the facial image does not contain at least some of the user's facial features, generating and outputting an image-capture angle adjustment prompt, where the user's facial features include the features of the user's face (eyes, eyebrows, nose, mouth, and ears).
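The completeness check behind that prompt can be sketched as below. The landmark names are hypothetical stand-ins for the user's facial features; any face-landmark detector's output labels could be substituted.

```python
# Illustrative landmark names standing in for the user's facial features.
REQUIRED_FEATURES = {"left_eye", "right_eye", "nose", "mouth", "eyebrows"}

def angle_adjustment_prompt(detected_features):
    """Return an image-capture angle adjustment prompt when at least part
    of the user's facial features is missing from the image, else None."""
    missing = REQUIRED_FEATURES - set(detected_features)
    if missing:
        return ("Please adjust the camera angle; missing features: "
                + ", ".join(sorted(missing)))
    return None
```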
In a second aspect, an embodiment of the present application provides an image processing apparatus, the apparatus including:
an image obtaining module, configured to obtain a facial image of a user while the user participates in an interactive activity;
a demeanor obtaining module, configured to obtain demeanor parameters of the user based on the facial image;
an operation judging module, configured to determine, based on the demeanor parameters, the user's interaction state with respect to the next operation of the interactive activity; and
a prompt output module, configured to generate and output prompt information corresponding to the interaction state.
With reference to the second aspect, in some optional implementations, the facial image includes M facial images obtained within a first preset duration before the current moment, the demeanor parameters include M demeanor parameters corresponding to the M facial images, and M is an integer greater than 1;
the operation judging module is further configured to obtain, based on the M demeanor parameters, M gaze focal points of the user on the interactive interface of the interactive activity, and to determine, based on the M gaze focal points, whether the user's interaction state with respect to the next operation of the interactive activity is a gaze-focused state.
With reference to the second aspect, in some optional implementations,
the operation judging module is further configured to determine whether the number of the M gaze focal points that fall within the same region, among at least two regions of the interactive interface, is greater than or equal to a first preset number; if so, the user's interaction state with respect to the next operation of the interactive activity is the gaze-focused state.
With reference to the second aspect, in some optional implementations,
the operation judging module is further configured to determine, for each of the M demeanor parameters, a corresponding pair of gaze directions of the user, and to determine the gaze focal point formed on the interactive interface of the interactive activity by each pair of gaze directions, thereby determining the M gaze focal points in total.
With reference to the second aspect, in some optional implementations, the facial image further includes N facial images obtained within a second preset duration before the current moment, where N is an integer greater than 1, and the demeanor parameters include N demeanor parameters corresponding to the N facial images; the apparatus further includes:
an emotion determination module, configured to determine, after it is determined that the user is in the gaze-focused state, the emotion type corresponding to each of the N demeanor parameters based on the N demeanor parameters, for N emotion types in total;
a prompt determination module, configured to determine, based on the N emotion types, whether the user's interaction state is a non-positive emotional state; and
a prompt execution module, configured to, if so, perform the step of generating and outputting the prompt information corresponding to the interaction state.
With reference to the second aspect, in some optional implementations,
the prompt determination module is further configured to determine whether the number of non-positive emotions among the N emotion types is greater than or equal to a second preset number, where the number of non-positive emotions being greater than or equal to the second preset number indicates that the user's interaction state is a non-positive emotional state.
With reference to the second aspect, in some optional implementations,
the emotion determination module is further configured to analyze each of the N demeanor parameters with a facial emotion analysis model, obtain the probability output by the model that each demeanor parameter corresponds to each of a plurality of candidate emotion types, and determine, from those probabilities, the candidate emotion type with the highest probability, where for each demeanor parameter the candidate emotion type with the highest probability is the emotion type corresponding to that demeanor parameter.
With reference to the second aspect, in some optional implementations,
the prompt output module is further configured to determine, from the user's interaction state being the gaze-focused state, the object of the interactive activity contained in the same region, generate prompt information for the next operation according to the object, and output the prompt information.
With reference to the second aspect, in some optional implementations,
the prompt output module is further configured to determine whether the object is an entity in the interactive activity or part of the background of the interactive activity; if the object is an entity, to raise the weight used to score that entity in the current evaluation function from a first value to a second value to obtain a currently adjusted evaluation function and generate the prompt information for the next operation, related to the entity, based on the currently adjusted evaluation function; and if the object is part of the background, to generate the prompt information for the next operation based on the current evaluation function.
With reference to the second aspect, in some optional implementations, the facial image includes M facial images obtained within a first preset duration before the current moment and N facial images obtained within a second preset duration before the current moment, the demeanor parameters include M demeanor parameters corresponding to the M facial images and N demeanor parameters corresponding to the N facial images, and M and N are integers greater than 1;
the operation judging module is further configured to obtain, based on the M demeanor parameters, M gaze focal points of the user on the interactive interface of the interactive activity; determine, based on the N demeanor parameters, the emotion type corresponding to each of the N demeanor parameters, for N emotion types in total; determine whether the number of the M gaze focal points that fall within the same region, among at least two regions of the interactive interface, is greater than or equal to a first preset number; determine whether the number of non-positive emotions among the N emotion types is greater than or equal to a second preset number; when the number of gaze focal points in the same region satisfies the first preset number, determine that the user's interaction state with respect to the next operation of the interactive activity is the gaze-focused state; and when the number of non-positive emotions among the N emotion types satisfies the second preset number, determine that the user's interaction state is a non-positive emotional state.
With reference to the second aspect, in some optional implementations, the apparatus further includes:
an angle prompting module, configured to generate and output an image-capture angle adjustment prompt when it is determined that the facial image does not contain at least some of the user's facial features, where the user's facial features include the features of the user's face (eyes, eyebrows, nose, mouth, and ears).
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a bus, and a communication interface; the processor, the communication interface, and the memory are connected through the bus; the memory is configured to store a program; and the processor is configured to execute the image processing method according to the first aspect or any implementation of the first aspect by invoking the program stored in the memory.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing computer-executable non-volatile program code, the program code causing a computer to execute the image processing method according to the first aspect or any implementation of the first aspect.
The beneficial effects of the embodiments of the present application are as follows:
Because the user's interaction state with respect to the next operation of the interactive activity can be determined based on the user's demeanor parameters, a prompt can be generated and output for the user whenever the interaction state indicates that the user's current decision needs improvement, helping the user quickly improve decision-making in interactive activities.
To make the above objects, features, and advantages of the present application clearer and easier to understand, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief Description of the Drawings
To explain the technical solutions of the embodiments of the present application more clearly, the drawings used in the embodiments are briefly introduced below. It should be understood that the following drawings show only some embodiments of the present application and therefore should not be regarded as limiting the scope; a person of ordinary skill in the art can derive other related drawings from these drawings without creative effort.
Fig. 1 is a structural block diagram of an electronic device provided by the first embodiment of the present application;
Fig. 2 is a first flowchart of an image processing method provided by the second embodiment of the present application;
Fig. 3 is a first sub-flowchart of step S130 in the first flowchart of the image processing method provided by the second embodiment of the present application;
Fig. 4 is a second flowchart of the image processing method provided by the second embodiment of the present application;
Fig. 5 is a second sub-flowchart of step S130 in the first flowchart of the image processing method provided by the second embodiment of the present application;
Fig. 6 is a structural block diagram of an image processing apparatus provided by the third embodiment of the present application.
Detailed Description of Embodiments
The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. The components of the embodiments of the present application, as generally described and illustrated in the drawings, may be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments provided in the drawings is not intended to limit the claimed scope of the present application but merely represents selected embodiments. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further defined or explained in subsequent drawings. The terms "first", "second", and so on are used only to distinguish the description and should not be understood as indicating or implying relative importance.
First Embodiment
Referring to Fig. 1, an embodiment of the present application provides an electronic device 10, which may be a terminal device or a server. The terminal device may be a personal computer (PC), a tablet computer, a smartphone, a personal digital assistant (PDA), or the like; the server may be a web server, a database server, a cloud server, or a server cluster composed of multiple sub-servers.
In this embodiment, the electronic device 10 may include a memory 11, a communication interface 12, a bus 13, and a processor 14, with the processor 14, the communication interface 12, and the memory 11 connected through the bus 13. The processor 14 is configured to execute executable modules, such as computer programs, stored in the memory 11. The components and structure of the electronic device 10 shown in Fig. 1 are exemplary rather than limiting; the electronic device 10 may have other components and structures as required.
The memory 11 may include high-speed random access memory (RAM) and may also include non-volatile memory, such as at least one disk memory. In this embodiment, the memory 11 stores the program required for executing the image processing method.
The bus 13 may be an ISA bus, a PCI bus, an EISA bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of presentation, only one double-headed arrow is used in Fig. 1, but this does not mean that there is only one bus or one type of bus.
The processor 14 may be an integrated circuit chip with signal processing capability. During implementation, the steps of the above method may be completed by integrated logic circuits in hardware or by instructions in software form within the processor 14. The processor 14 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in the embodiments of the present invention may be directly embodied as being executed by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software modules may be located in a storage medium mature in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register.
The flow disclosed in any embodiment of the present invention, or the method executed by the defined apparatus, may be applied to or implemented by the processor 14. After receiving an execution instruction, the processor 14 invokes the program stored in the memory 11 through the bus 13 and controls the communication interface 12 through the bus 13 to execute the flow of the image processing method.
In addition, in some cases, if the electronic device 10 is a terminal device, the electronic device 10 may further include a camera 15, which may be a conventional high-definition camera. The camera 15 may be connected to the bus 13 and used to capture images containing an object, so that the processor 14 of the electronic device 10 obtains the images captured by the camera 15 via the bus 13 and executes the flow of the image processing method.
In other cases, if the electronic device 10 is a server, the electronic device 10 may obtain the user's images from a terminal that captures them, so that the electronic device 10 can execute the flow of the image processing method based on the obtained images.
Second Embodiment
This embodiment provides an image processing method. It should be noted that the steps shown in the flowcharts of the accompanying drawings may be executed in a computer system such as a set of computer-executable instructions; moreover, although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from that presented here. This embodiment is described in detail below.
Referring to FIG. 2, the image processing method provided in this embodiment includes: step S110, step S120, step S130, and step S140.
Step S110: during the user's participation in an interactive activity, obtain facial images of the user.
Step S120: obtain demeanor parameters of the user based on the facial images.
Step S130: determine, based on the demeanor parameters, the interaction state of the user with respect to the next operation of the interactive activity.
Step S140: generate and output, according to the interaction state, prompt information corresponding to the interaction state.
Each step of the present application will be described in detail below with reference to FIG. 2 and FIG. 3.
Step S110: during the user's participation in an interactive activity, obtain facial images of the user.
The interactive activity may be an interactive application. The interactive application may be, for example, a web-style cloud application in which a small part of the program is installed on the terminal device while most of it runs in the cloud, or a traditional application in which most of the program is installed and runs on the terminal device.
When the electronic device is a terminal device, it can provide an interactive interface for the user by running the interactive application, so that the user can participate in the interactive activities of the application through the interactive interface. Optionally, the interactive activity may be a chess or card activity, for example Gomoku, Go, Chinese chess, international chess, playing cards, or mahjong, which is not limited in this embodiment.
To give the user a more realistic experience after joining these interactive activities, the interactive interface may present a realistic game interface for these chess and card activities; for example, the interactive interface may display a Chinese chess board, or it may display a playing-card table.
Furthermore, after the user joins the interactive activity, the user's opponent in the activity may be an AI (Artificial Intelligence) computer, or it may be another user. For example, a user may play Chinese chess against an AI computer on the electronic device, but the user may also play Chinese chess against other users on the electronic device. In other words, although the opponents may differ, the game interface presented on the electronic device can be the same.
Furthermore, the game interface of the electronic device may provide the user with an option of whether to enable operation prompts. If the user chooses to enable operation prompts through this option, the electronic device starts executing the image processing method in response to the user's enabling operation; otherwise, the image processing method is not executed, and the user participates in the interactive activity in the traditional mode.
After the electronic device starts executing the image processing method, it can photograph the user's face with the camera and obtain a video stream of the user's face. Since the video stream is composed of multiple consecutive frames of the user's facial images, obtaining the video of the user's face can also be understood as the electronic device obtaining multiple frames of the user's facial images.
When the electronic device is a server, it does not need to interact with the user directly to execute the image processing method. In this case, the interactive application runs on the user terminal used by the user; when that user terminal participates in the interactive activities of the application, the electronic device executes the image processing method by interacting with the user terminal. The electronic device can therefore obtain from the user terminal the video stream of the user's face captured by the terminal's camera, thereby also obtaining multiple frames of the user's facial images.
As an optional way of storing the video stream: since the electronic device continuously obtains the video stream of the user's face from the start of the image processing method until the user finishes participating in the interactive activity, and the storage space in the electronic device is limited, the electronic device can update the stored video stream in real time so that it always stores the video stream of a period of time immediately preceding the current moment. For example, throughout the interactive activity, the electronic device can, by updating the stored video stream, guarantee that it stores in real time the video stream of the 1-5 minutes preceding the current moment.
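The rolling storage described above amounts to a fixed-capacity frame buffer that always holds the most recent window of video. A minimal sketch follows; the class name, frame rate, and window length are illustrative assumptions, not values taken from the patent:

```python
from collections import deque

class RollingFrameBuffer:
    """Hypothetical sketch: keep only the most recent frames of video."""

    def __init__(self, fps, window_seconds):
        # A deque with maxlen silently drops the oldest item when full,
        # so the buffer never exceeds `window_seconds` of video.
        self.fps = fps
        self.frames = deque(maxlen=fps * window_seconds)

    def push(self, frame):
        # Append the newest frame; the oldest one is evicted automatically.
        self.frames.append(frame)

    def latest(self, seconds):
        # Return the frames covering the most recent `seconds` of video.
        n = min(len(self.frames), seconds * self.fps)
        return list(self.frames)[-n:]
```

With a 30 fps camera and a 60-second window, pushing frames continuously keeps only the last 1800 frames in memory, and `latest(2)` yields the 60 frames of the most recent 2 seconds.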
After obtaining multiple frames of the user's facial images, the electronic device may continue to perform step S120.
Step S120: obtain demeanor parameters of the user based on the facial images.
It can be understood that, during the user's participation in the interactive activity, the user may not know, or may be unsure of, what the appropriate next operation of the interactive activity is. For example, during a Chinese chess game, the user may not know where a certain piece should best move. In such cases, as a physiological reaction, the user often enters a contemplative state: the user's gaze focuses on a certain spot to facilitate thinking. The electronic device can therefore rely on this physiological reaction to determine whether the user is unsure of the appropriate next operation.
When the user is in this contemplative state, the focusing of the gaze on a certain spot is a process-like behavior: the gaze stays focused on one spot for a period of time so that the user can concentrate during that time. Consequently, among the multiple frames of the user's facial images, a single frame can hardly reflect this contemplative state accurately; multiple facial images spanning a period of time are needed to reflect it more accurately. The electronic device therefore processes multiple facial images to determine whether the user is in a contemplative state of not knowing the appropriate next operation of the interactive activity.
Optionally, since the contemplative state in most cases does not last very long, if the time span covered by the facial images processed by the electronic device is too long, the result may be inaccurate. For example, the user may be contemplating during the first 3 seconds and distracted during the following 10 seconds; processing facial images covering the entire 13 seconds would very likely fail to yield the result that the user is in a contemplative state. The electronic device can therefore process M facial images obtained within a relatively short first preset duration before the current moment to ensure the accuracy of the result, where M may be an integer greater than 1 and the first preset duration may be, without limitation, 1-3 seconds.
In this embodiment, one way for the electronic device to obtain the M facial images is to extract from the stored 1-5 minute video stream a clip covering the most recent 1-3 seconds and take the frames contained in that clip as the M facial images.
As another way of obtaining the M facial images: at the frame level, the facial demeanor corresponding to two adjacent frames can hardly change abruptly, so this principle can be used to reduce the computation load of the electronic device while preserving the accuracy of the result. That is, the electronic device can extract only a subset of the frames of the 1-3 second clip as the M facial images, for example extracting one frame out of every two or every three consecutive frames, without limitation.
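The clip extraction and subsampling just described can be sketched in a few lines; the function name, frame rate, and step size below are illustrative assumptions:

```python
def select_recent_frames(frames, fps, clip_seconds, step=1):
    """Hypothetical sketch: take the most recent `clip_seconds` of frames,
    then keep one frame out of every `step` consecutive frames."""
    # Slice off the tail of the stored stream covering the recent clip.
    clip = frames[-fps * clip_seconds:]
    # Subsample, relying on the observation that adjacent frames rarely
    # show an abrupt change in facial demeanor.
    return clip[::step]
```

For a 30 fps stream, `select_recent_frames(frames, 30, 3, 2)` yields the M = 45 frames covering the last 3 seconds, one out of every two consecutive frames.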
It can also be understood that, since the user's contemplative state is generally reflected by the focusing of the user's gaze, the electronic device can determine whether the user is in a contemplative state by processing and analyzing the eye-region images in the user's facial images.
In this embodiment, a trained facial emotion analysis model is preset in the electronic device. The electronic device can invoke this model and input each of the M facial images into it; based on a deep neural network, the model crops each facial image and determines the image of the eye region of the user's face in each facial image. The electronic device thus obtains the image parameters corresponding to the eye-region image of each facial image output by the facial emotion analysis model, M image parameters in total.
It can be understood that, in this embodiment, the eye-region images serve to determine the user's demeanor, i.e., whether the user's demeanor is contemplative; the image data corresponding to the eye-region images can therefore serve as the user's demeanor parameters. In this way, obtaining the M image parameters of the user amounts to obtaining M demeanor parameters of the user.
After obtaining the M demeanor parameters, the electronic device may continue to perform step S130.
Step S130: determine, based on the demeanor parameters, the interaction state of the user with respect to the next operation of the interactive activity.
As shown in FIG. 3, in this embodiment, the sub-flow of step S130 may include step S131 and step S132.
Step S131: obtain, based on the M demeanor parameters, M gaze focal points of the user on the interactive interface of the interactive activity.
Step S132: determine, based on the M gaze focal points, whether the interaction state of the user with respect to the next operation of the interactive activity is a gaze-focused state.
The flows of step S131 and step S132 are described in detail below.
Step S131: obtain, based on the M demeanor parameters, M gaze focal points of the user on the interactive interface of the interactive activity.
Since what the electronic device can determine is whether the user's gaze is focused, the electronic device can determine the user's gaze focal points based on the M demeanor parameters.
In detail, the electronic device can again invoke the facial emotion analysis model and input the M demeanor parameters into it. Based on a deep neural network, the model computes each of the M demeanor parameters and thereby determines the pair of gaze directions of the user corresponding to each demeanor parameter.
It should be noted that, in general, the user watches with both eyes, and the direction in which each eyeball gazes corresponds to one gaze direction. Since each demeanor parameter corresponds to an image of the user's eye region, each demeanor parameter yields a pair of gaze directions. When the user's two eyes gaze in different directions, the eye-region image reflects different eyeball positions; therefore, when the user gazes at different locations, different pairs of gaze directions can be determined from the different demeanor parameters corresponding to those locations.
In this embodiment, to determine exactly where the user is gazing, after each pair of gaze directions is obtained, the electronic device can input the pair into the facial emotion analysis model for computation, which estimates the gaze focal point that the pair of gaze directions forms on the interactive interface of the interactive activity. In this way, the electronic device obtains a total of M gaze focal points formed on the interactive interface.
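The patent leaves the focal-point estimation to the model; a purely geometric sketch of the idea is to intersect each eye's gaze ray with the screen plane and take the midpoint of the two hit points. All names and the coordinate convention (screen lies in the plane z = 0, gaze directions point toward it) are assumptions for illustration:

```python
def ray_plane_hit(origin, direction):
    """Intersect one eye's gaze ray with the screen plane z = 0.
    Assumes the direction has a non-zero z component toward the screen."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    t = -oz / dz                       # ray parameter where z reaches 0
    return (ox + t * dx, oy + t * dy)  # 2-D point on the screen

def gaze_focal_point(left_eye, left_dir, right_eye, right_dir):
    # Project each eye's gaze ray onto the screen, then take the midpoint
    # of the two hit points as the estimated focal point of the pair.
    lx, ly = ray_plane_hit(left_eye, left_dir)
    rx, ry = ray_plane_hit(right_eye, right_dir)
    return ((lx + rx) / 2, (ly + ry) / 2)
```

Two converging rays from eyes at (±3, 0, 60) aimed at the screen origin both hit (0, 0), so the estimated focal point is (0, 0).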
The electronic device can then continue to perform step S132 using these M gaze focal points.
Step S132: determine, based on the M gaze focal points, whether the interaction state of the user with respect to the next operation of the interactive activity is a gaze-focused state.
To accurately determine where on the interactive interface the user is gazing, the electronic device can divide the interactive interface into at least two regions in advance, for example, but without limitation, dividing it evenly into 20 regions, and use the regions as the yardstick for the user's gaze location.
Since determining the M gaze focal points essentially means determining the coordinates of each of the M focal points on the interactive interface, the electronic device can determine, based on those coordinates, which of the at least two regions each focal point lies in. The electronic device can thus count the number of gaze focal points in each region.
In this embodiment, the more gaze focal points fall in the same region, the longer the user may have been gazing at that region, and the more likely it is that the user is in a contemplative gaze-focused state. A first preset number can therefore be preset in the electronic device, representing the lower bound for the user being in the gaze-focused state.
Based on the counted number of focal points per region, the electronic device determines whether the number of the M gaze focal points located in the same region of the interactive interface is greater than or equal to the first preset number. For example, the first preset number may be set to 30-60.
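The region counting and threshold test above can be sketched as a grid histogram; the grid size (5 x 4 = 20 regions, matching the example) and function names are illustrative assumptions:

```python
def max_focal_points_in_region(points, screen_w, screen_h, cols, rows):
    """Divide the interface into a cols x rows grid and count how many
    gaze focal points fall into each cell; return the largest count."""
    counts = {}
    for x, y in points:
        # Clamp to the last cell so points on the screen edge stay valid.
        col = min(int(x / screen_w * cols), cols - 1)
        row = min(int(y / screen_h * rows), rows - 1)
        counts[(col, row)] = counts.get((col, row), 0) + 1
    return max(counts.values()) if counts else 0

def is_gaze_focused(points, screen_w, screen_h, cols=5, rows=4, threshold=30):
    # Gaze-focused state: at least `threshold` of the M focal points
    # land in the same region of the interface.
    return max_focal_points_in_region(points, screen_w, screen_h, cols, rows) >= threshold
```

With a 1000 x 800 interface, 35 focal points clustered in one cell exceed a threshold of 30, while 10 points in one cell do not.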
If the number of gaze focal points in the same region is not greater than or equal to the first preset number, the user's gaze focal points within the first preset duration are not concentrated; the interaction state of the user with respect to the next operation of the interactive activity is therefore not the gaze-focused state, and it can be concluded that the user knows the appropriate next operation of the interactive activity. In this case, the electronic device can terminate the remaining flow of the current execution of the image processing method and wait for the next polling round to execute it again.
It should be noted that, from the moment the electronic device starts executing the image processing method until the user disables the operation prompts via the corresponding option or the interactive activity ends, the electronic device can execute the image processing method in a polling manner. For example, the electronic device establishes one execution flow of the image processing method based on M facial images; when 5 of the M facial images it holds are newly obtained, the electronic device can, while continuing the original flow, establish an additional new flow based on the M facial images updated with those 5 new frames to execute the image processing method.
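The polling scheme just described amounts to evaluating overlapping windows of M frames, opening a new window every few newly arrived frames. A minimal sketch, with the window size and stride as illustrative assumptions:

```python
def sliding_windows(total_frames, window, stride):
    """Hypothetical sketch: yield (start, end) index pairs for overlapping
    analysis windows; a new window opens every `stride` arrived frames."""
    start = 0
    while start + window <= total_frames:
        yield (start, start + window)
        start += stride
```

With 100 stored frames, M = 90, and a stride of 5 new frames per flow, three overlapping flows would be established over the windows (0, 90), (5, 95), and (10, 100).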
If the number of gaze focal points in the same region is greater than or equal to the first preset number, the user's gaze focal points within the first preset duration are concentrated; the interaction state of the user with respect to the next operation of the interactive activity is therefore the gaze-focused state, and it can be concluded that the user does not know the appropriate next operation of the interactive activity.
When it is determined that the user does not know the appropriate next operation, the electronic device can continue to perform step S140.
步骤S140:根据所述交互状态生成并输出与所述交互状态对应的提示信息。Step S140: Generate and output prompt information corresponding to the interaction state according to the interaction state.
为提高用户的体验,使得生成的提示信息可以尽量与用户的不知道或不确定的下一步操作相关,本实施例中,电子设备可以基于用户在互动界面上关注的对象来生成提示信息。In order to improve the user's experience so that the generated prompt information can be related to the user's unknown or uncertain next operation as much as possible, in this embodiment, the electronic device can generate the prompt information based on the object that the user pays attention to on the interactive interface.
电子设备基于确定出大于第一预设数量的视线焦点所在的同一区域,即基于确定该用户的交互状态处于视线聚焦状态,电子设备则还可以基于对该互动界面中同一区域的图像进行分析,则确定出同一区域的中包含的互动活动中的对象,其中,该对象则可以包括:互动活动中的实体或互动活动中的背景。以中国象棋为例,互动活动中的实体可以为中国象棋中的棋子,而互动活动中的背景则可以为互动界面中除了棋子之外其它区域;再者,以扑克牌为例,互动活动中的实体可以为扑克牌中属于用户的牌面,而互动活动中的背景则可以为互动界面中除了属于用户的牌面之外其它区域。Based on the electronic device determining the same area where the focus of sight is greater than the first preset number, that is, based on determining that the user's interactive state is in the focused state of sight, the electronic device may also analyze the image of the same area in the interactive interface, Then, the objects in the interactive activities contained in the same area are determined, wherein the objects may include: entities in the interactive activities or backgrounds in the interactive activities. Taking Chinese chess as an example, the entities in the interactive activity can be the pieces in Chinese chess, and the background in the interactive activity can be other areas in the interactive interface except the pieces; The entity of the poker can be the card face belonging to the user in the playing card, and the background in the interactive activity can be other areas in the interactive interface except the card face belonging to the user.
可以理解到,为保证确定出的对象为实体还是为背景的准确性,划分出至少两个区域中的每个区域所包含的内容要么可以为互动活动中的实体,反之要么可以为互动活动中背景,而不建议每个区域同时包含互动活动中的实体和互动活动中的背景。It can be understood that, in order to ensure the accuracy of determining whether the object is an entity or a background, the content contained in each area divided into at least two areas can either be an entity in the interactive activity, or vice versa. Background, it is not recommended that each area contain both the entity in the interactive activity and the background in the interactive activity.
因此,电子设备确定出同一区域中包含的互动活动中的对象,电子设备则可以判断该对象为互动活动中的实体还是为互动活动中的背景。Therefore, the electronic device determines the object in the interactive activity contained in the same area, and the electronic device can determine whether the object is the entity in the interactive activity or the background in the interactive activity.
若确定为该对象为互动活动中的实体,即可以确定该用户所注视可以是例如某个棋子或某张牌。If it is determined that the object is an entity in the interactive activity, it can be determined that the user's gaze may be, for example, a certain piece or a certain card.
作为一种可选地方式,电子设备生成提示信息的方式可以为:电子设备中预设了训练好的互动活动评估模型,该互动活动评估模型中可以包括一个评估函数,而评估函数中将该互动活动中的每个实体的参数可以由每个实体在互动活动中的情况决定,例如,评估函数可以基于当前对局局势中每个棋子的位置而确定出每个棋子的参数。这样互动活动评估模型基于每个实体的参数来计算该评估函数,就可以确定出符合当前对局局势的提示信息。As an optional method, the electronic device may generate the prompt information as follows: a trained interactive activity evaluation model is preset in the electronic device, and the interactive activity evaluation model may include an evaluation function, and the evaluation function includes the The parameters of each entity in the interactive activity can be determined by the situation of each entity in the interactive activity, for example, the evaluation function can determine the parameters of each piece based on the position of each piece in the current game situation. In this way, the interactive activity evaluation model calculates the evaluation function based on the parameters of each entity, so that prompt information that conforms to the current game situation can be determined.
因此,电子设备在确定出对象为互动活动中的实体后,电子设备便可以将当前的评估函数中用于计算该实体的权重从第一值提高到第二值,获得当前调整后的评估函数,使得在该当前调整后的评估函数中该实体对最终得到的结果能够起到更大的影响。这样,互动活动评估模型基于该当前调整后的评估函数进行计算就可以生成与实体相关的下一步操作的提示信息。Therefore, after the electronic device determines that the object is an entity in the interactive activity, the electronic device can increase the weight used to calculate the entity in the current evaluation function from the first value to the second value, and obtain the currently adjusted evaluation function , so that the entity can have a greater influence on the final result in the currently adjusted evaluation function. In this way, the interactive activity evaluation model performs calculation based on the currently adjusted evaluation function to generate prompt information for the next operation related to the entity.
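The weight adjustment can be sketched with a simple linear evaluation function; the data layout (moves as dictionaries of per-piece scores) and all names are illustrative assumptions, not the patent's actual model:

```python
def evaluate(move_scores, weights):
    """Linear evaluation: weighted sum of each piece's contribution."""
    return sum(weights.get(piece, 1.0) * s for piece, s in move_scores.items())

def best_move(candidate_moves, weights, focused_piece=None, boost=3.0):
    # If the user's gaze is fixed on one piece, raise that piece's weight
    # from its first value to a larger second value, so moves involving
    # it gain more influence on the final result.
    if focused_piece is not None:
        weights = dict(weights, **{focused_piece: weights.get(focused_piece, 1.0) * boost})
    return max(candidate_moves, key=lambda m: evaluate(candidate_moves[m], weights))
```

Without a focused piece the model recommends the globally best-scoring move; once the user stares at the Cannon, the boosted weight makes a Cannon move win the evaluation, mirroring the "Cannon three advances five" example below.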
After the prompt information has been computed, if the electronic device is a server, it can output the prompt information to the user terminal used by the user so that the terminal displays it. If the electronic device is a terminal device, it can display the prompt information as an animation or as text. Accordingly, the user receives prompt information corresponding to the operation the user is unsure of or does not know, giving the user a better experience.
Taking Chinese chess as an example, if the object is determined to be the "Cannon" piece, this indicates that the user wants to make the next move with the Cannon but is unsure how the Cannon should best be played in the current game situation. The electronic device can therefore increase the weight of the "Cannon" entity in the evaluation function when computing the prompt information, thereby obtaining prompt information related to operations of the Cannon. On this basis, the prompt information displayed by the electronic device may be, for example: "Cannon three advances five".
If the object is determined to be the background of the interactive activity, it can be concluded that what the user is gazing at may be, for example, a certain square of the board.
In this case, the electronic device need not adjust the weight of any entity in the current evaluation function; the interactive evaluation model computes with the current evaluation function and generates prompt information for the next operation related to the current game situation. The electronic device can likewise output this prompt information to the user terminal or display it.
As an optional way of avoiding false prompts: after the electronic device finishes step S130 and before it starts step S140, the electronic device can analyze the image of the interactive interface to determine whether the user has already made the next move. If it is determined that the user has already done so, the electronic device can terminate the remaining flow of the current execution of the image processing method; otherwise, it continues to perform step S140.
请参阅图4,作为本实施例中一种可选地实施方式,在电子设备执行完成步骤S130后,以及在电子设备开始执行步骤S140之前,电子设备还可以执行步骤S101和步骤S102。Referring to FIG. 4 , as an optional implementation in this embodiment, after the electronic device performs step S130 and before the electronic device starts to perform step S140, the electronic device may further perform steps S101 and S102.
步骤S101:在确定所述用户处于所述视线聚焦状态后,基于所述N个神态参数,确定出所述N个神态参数中每个神态参数对应的情绪类型,共N个情绪类型。Step S101: After it is determined that the user is in the focus state, based on the N state parameters, determine the emotion type corresponding to each state parameter in the N state parameters, and there are N emotion types in total.
步骤S102:基于所述N个情绪类型,判断所述用户的所述交互状态是否处于非正面情绪状态。Step S102: Based on the N emotion types, determine whether the interaction state of the user is in a non-positive emotional state.
下面将对步骤S101和步骤S102的流程进行详细地说明。The flow of steps S101 and S102 will be described in detail below.
可以理解到,虽然确定出用户不知道互动活动接下来的合适操作后,但为提升用户的体验,可以不用马上生成提示信息并推送给用户,而可以继续检测用户的情绪,在检测到用户的情绪已经在一段时间内处于非正面情绪状态时,才可以给用户进行提示。It can be understood that although it is determined that the user does not know the next appropriate operation of the interactive activity, in order to improve the user's experience, it is not necessary to generate prompt information immediately and push it to the user, but to continue to detect the user's emotions. The user can only be prompted when the emotion has been in a non-positive emotional state for a period of time.
Therefore, before step S101 is performed, the electronic device in this embodiment may extract the facial images needed for emotion detection from the facial-image frames of the stored video stream. These may be N facial images captured within a second preset duration before the current moment, where the second preset duration may be 10–30 seconds.

As one way to obtain the N facial images, the electronic device may extract the most recent 10–30-second segment from the stored 1–5-minute video stream and use all the facial-image frames contained in that segment as the N facial images.

As another way to obtain the N facial images, note that the facial expressions in adjacent frames rarely change abruptly. The device can exploit this to reduce its computational load without sacrificing accuracy: it may sample only a subset of the frames of the 10–30-second segment as the N facial images, for example extracting one frame from every five or six consecutive facial-image frames, though this is not limiting.
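As a rough illustration of this strided sampling, the following Python sketch assumes the decoded frames are available as a list and uses an illustrative stride of 5 (the text permits five or six); the function name, frame rate, and clip length are assumptions for the example, not details from the patent:

```python
def sample_facial_frames(frames, stride=5):
    """Keep one frame out of every `stride` consecutive frames.

    Adjacent frames rarely show abrupt changes in facial expression,
    so a strided subsample preserves the emotion signal while cutting
    the per-frame analysis cost roughly by a factor of `stride`.
    """
    return frames[::stride]

# A 20-second clip at an assumed 25 fps yields 500 frames; stride 5
# keeps 100 of them as the N facial images.
frames = list(range(500))  # stand-in for decoded facial-image frames
n_images = sample_facial_frames(frames, stride=5)
print(len(n_images))  # 100
```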
The electronic device may also invoke a preset facial-emotion analysis model and input each of the N facial images into it. Based on a deep neural network, the model segments each facial image, removing the background other than the user's face, yielding a face-only image. The electronic device thus obtains the image parameters output by the model for each face-only image, N image parameters in total.

It can be understood that in this embodiment the face-only images also serve to determine the user's expression, i.e., whether the expression indicates that the user's interaction state is still a non-positive emotional state; the image data of a face-only image can therefore serve as an expression parameter of the user. Obtaining the N face-only images of the user thus amounts to obtaining N expression parameters of the user.

Having obtained the N expression parameters, the electronic device may execute step S101.
Step S101: after determining that the user is in the gaze-focus state, determine, based on the N expression parameters, the emotion type corresponding to each of the N expression parameters, N emotion types in total.
The electronic device may process and analyze the N expression parameters with the facial-emotion analysis model: it may invoke the model and input each of the N expression parameters into it. Multiple emotion types are preset in the model; they may include, for example: happy, optimistic, neutral, anxious, and sad.

The model processes and analyzes each expression parameter against these preset emotion types and determines, for each expression parameter, the probability of each candidate emotion type. Based on these probabilities, the electronic device selects, for each expression parameter, the candidate emotion type with the highest probability; that highest-probability candidate is the emotion type corresponding to the expression parameter. For example, if for one expression parameter the probability that the candidate emotion type is happy is 0.05, optimistic is 0.05, neutral is 0.01, anxious is 0.7, and sad is 0.1, the electronic device determines that the emotion type corresponding to that expression parameter is anxious.
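The highest-probability selection described above is a plain argmax over the model's per-type probabilities. A minimal sketch, assuming the model returns one probability per preset emotion type (the dictionary layout and function name are illustrative assumptions):

```python
def dominant_emotion(probs):
    """Return the candidate emotion type with the highest probability."""
    return max(probs, key=probs.get)

# Probabilities from the example in the text; they need not sum
# exactly to 1 -- only the ranking matters for the argmax decision.
probs = {"happy": 0.05, "optimistic": 0.05, "neutral": 0.01,
         "anxious": 0.7, "sad": 0.1}
print(dominant_emotion(probs))  # anxious
```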
The electronic device can thus determine the emotion type corresponding to each of the N expression parameters, obtaining N emotion types in total.
Step S102: based on the N emotion types, determine whether the user's interaction state is a non-positive emotional state.
In this embodiment, the electronic device may group the emotion types into positive and non-positive emotions; for example, happy and optimistic may be classified as positive emotions, while neutral, anxious, and sad may be classified as non-positive emotions.

Among the N emotion types, the more that are non-positive, the longer the user can be taken to have been in a non-positive emotional state. The electronic device may therefore preset a second preset number, which represents the lower bound at which the user's interaction state counts as a non-positive emotional state.

Based on the N determined emotion types, the electronic device can then judge whether the number of non-positive emotions among them is greater than or equal to the second preset number. For example, the second preset number may be set to 70–90.
If the number of non-positive emotions among the N emotion types is less than the second preset number, the user's interaction state was not a non-positive emotional state during the second preset duration, and it can be concluded that the user's current state does not require a prompt. The electronic device may then terminate the remaining steps of the current run of the image processing method and wait for the next polling round to execute it again.

If the number of non-positive emotions among the N emotion types is greater than or equal to the second preset number, the user's interaction state was a non-positive emotional state during the second preset duration; it can be concluded that the user is already rather anxious and may need to be prompted about the next operation.
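The counting-and-threshold decision above can be sketched as follows. The grouping of neutral, anxious, and sad as non-positive follows the text, while the concrete threshold of 80 is an illustrative value within the stated 70–90 range, not a value fixed by the patent:

```python
NON_POSITIVE = {"neutral", "anxious", "sad"}

def needs_prompt(emotion_types, second_preset_number=80):
    """Return True when the user's interaction state counts as a
    non-positive emotional state: the number of non-positive labels
    among the N emotion types reaches the preset lower bound."""
    non_positive = sum(1 for e in emotion_types if e in NON_POSITIVE)
    return non_positive >= second_preset_number

# 100 sampled frames, 85 of which were classified as non-positive.
labels = ["anxious"] * 60 + ["sad"] * 15 + ["neutral"] * 10 + ["happy"] * 15
print(needs_prompt(labels))  # True
```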
Then, after determining that a prompt about the next operation is needed, and after further determining from the analysis of the interactive-interface image that the user has still not performed the next operation, the electronic device may execute step S140 to prompt the user.

As some optional approaches in this embodiment, the electronic device may further analyze the facial image to determine whether it lacks at least some of the user's facial features, the facial features including the user's five facial organs (eyes, eyebrows, nose, mouth, and ears). If so, the electronic device may generate and output an image-capture-angle adjustment prompt, so that the user can adjust his or her posture accordingly and bring the face within the capture range of the camera.
As shown in FIG. 5, as another possible implementation of step S130 in this embodiment, step S130 may include steps S1301, S1302, and S1303.
Step S1301: obtain, based on the M expression parameters, M gaze focus points of the user on the interactive interface of the interactive activity; and determine, based on the N expression parameters, the emotion type corresponding to each of the N expression parameters, N emotion types in total.

Step S1302: based on the M gaze focus points, judge whether the number of gaze focus points falling in the same one of at least two regions of the interactive interface is greater than or equal to a first preset number; and based on the N emotion types, judge whether the number of non-positive emotions among the N emotion types is greater than or equal to a second preset number.

Step S1303: when the number of gaze focus points in the same region satisfies the first preset number, determine that the user's interaction state with respect to the next operation of the interactive activity is the gaze-focus state; when the number of non-positive emotions among the N emotion types satisfies the second preset number, determine that the user's interaction state is a non-positive emotional state.
That is, the electronic device may treat both the gaze-focus state and the non-positive emotional state as conditions for judging whether the user needs a prompt; when the user is in either state, the electronic device can determine that the user needs to be prompted.

It can be understood that the detailed implementation flows of steps S1301, S1302, and S1303 may refer to the foregoing implementations and are not repeated here.
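Treating the two states as alternative triggers amounts to a logical OR of the two threshold checks from step S1302. A hedged sketch with illustrative argument names (none of these identifiers appear in the patent):

```python
def should_prompt(same_region_focus_count, first_preset_number,
                  non_positive_count, second_preset_number):
    """Prompt when either condition of step S1303 holds: enough gaze
    focus points land in one region of the interactive interface
    (gaze-focus state), or enough of the N emotion types are
    non-positive (non-positive emotional state)."""
    gaze_focused = same_region_focus_count >= first_preset_number
    non_positive = non_positive_count >= second_preset_number
    return gaze_focused or non_positive

print(should_prompt(12, 10, 40, 80))  # True: gaze condition holds
print(should_prompt(3, 10, 40, 80))   # False: neither holds
```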
Third Embodiment
Referring to FIG. 6, an embodiment of the present application provides an image processing apparatus 100. The image processing apparatus 100 may be applied to an electronic device and includes:

an image obtaining module 110, configured to obtain a facial image of a user while the user is participating in an interactive activity;

an expression obtaining module 120, configured to obtain an expression parameter of the user based on the facial image;

an operation judging module 130, configured to judge, based on the expression parameter, the user's interaction state with respect to the next operation of the interactive activity; and

a prompt output module 140, configured to generate and output, according to the interaction state, prompt information corresponding to the interaction state.
It should be noted that, as those skilled in the art can clearly understand, for convenience and brevity of description, the specific working processes of the systems, apparatuses, and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.

Those skilled in the art should understand that the embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the embodiments of the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
To sum up, the embodiments of the present application provide an image processing method, an apparatus, an electronic device, and a storage medium. The method includes: obtaining a facial image of a user while the user is participating in an interactive activity; obtaining an expression parameter of the user based on the facial image; judging, based on the expression parameter, the user's interaction state with respect to the next operation of the interactive activity; and generating and outputting, according to the interaction state, prompt information corresponding to the interaction state.

Because the user's interaction state with respect to the next operation of the interactive activity can be determined from the user's expression parameters, a prompt can be generated and output whenever the interaction state indicates that the user's current decision-making could be improved, helping the user quickly raise their level of decision-making in the interactive activity.
The above are only preferred embodiments of the present application and are not intended to limit it; for those skilled in the art, the present application may have various modifications and changes. Any modification, equivalent replacement, or improvement made within the spirit and principles of this application shall fall within its protection scope. It should be noted that like numerals and letters denote like items in the following figures; once an item is defined in one figure, it requires no further definition or explanation in subsequent figures.

The above are only specific embodiments of the present application, but the protection scope of the present application is not limited thereto; any person skilled in the art could readily conceive of changes or replacements within the technical scope disclosed herein, and these shall be covered by the protection scope of this application. Therefore, the protection scope of the present application shall be subject to that of the claims.
Claims (13)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201811440635.8A CN109542230B (en) | 2018-11-28 | 2018-11-28 | Image processing method, device, electronic device and storage medium |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201811440635.8A CN109542230B (en) | 2018-11-28 | 2018-11-28 | Image processing method, device, electronic device and storage medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN109542230A CN109542230A (en) | 2019-03-29 |
| CN109542230B true CN109542230B (en) | 2022-09-27 |
Family
ID=65851075
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201811440635.8A Active CN109542230B (en) | 2018-11-28 | 2018-11-28 | Image processing method, device, electronic device and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN109542230B (en) |
Families Citing this family (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| TWI769419B (en) * | 2019-12-10 | 2022-07-01 | 中華電信股份有限公司 | System and method for public opinion sentiment analysis |
| CN113487670B (en) * | 2020-10-26 | 2024-09-13 | 海信集团控股股份有限公司 | Cosmetic mirror and state adjustment method |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101071252A (en) * | 2006-05-10 | 2007-11-14 | 佳能株式会社 | Focus adjustment method, focus adjustment apparatus, and control method thereof |
| JP2016081319A (en) * | 2014-10-17 | 2016-05-16 | キヤノン株式会社 | Edition support device for image material, album creation method and program for layout |
| CN107635147A (en) * | 2017-09-30 | 2018-01-26 | 上海交通大学 | Health information management TV based on multi-modal human-computer interaction |
| CN108388889A (en) * | 2018-03-23 | 2018-08-10 | 百度在线网络技术(北京)有限公司 | Method and apparatus for analyzing facial image |
Family Cites Families (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8237807B2 (en) * | 2008-07-24 | 2012-08-07 | Apple Inc. | Image capturing device with touch screen for adjusting camera settings |
| US20150186912A1 (en) * | 2010-06-07 | 2015-07-02 | Affectiva, Inc. | Analysis in response to mental state expression requests |
| CN106708257A (en) * | 2016-11-23 | 2017-05-24 | 网易(杭州)网络有限公司 | Game interaction method and device |
| US10558701B2 (en) * | 2017-02-08 | 2020-02-11 | International Business Machines Corporation | Method and system to recommend images in a social application |
| CN107784281B (en) * | 2017-10-23 | 2019-10-11 | 北京旷视科技有限公司 | Method for detecting human face, device, equipment and computer-readable medium |
| CN108197533A (en) * | 2017-12-19 | 2018-06-22 | 迈巨(深圳)科技有限公司 | A kind of man-machine interaction method based on user's expression, electronic equipment and storage medium |
| CN108434757A (en) * | 2018-05-25 | 2018-08-24 | 深圳市零度智控科技有限公司 | intelligent toy control method, intelligent toy and computer readable storage medium |
- 2018-11-28: CN application CN201811440635.8A filed; granted as patent CN109542230B (status: Active)
Non-Patent Citations (2)
| Title |
|---|
| A joint-attention method for a class of facial-expression robots with identical structure; Wang Wei et al.; Robot (《机器人》); 2012-05-15 (No. 03); full text * |
| A semiotic analysis of TV program hosts and branded TV programs; Huang Yushui; China Radio & TV Academic Journal (《中国广播电视学刊》); 2009-05-20 (No. 05); full text * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN109542230A (en) | 2019-03-29 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US9679258B2 (en) | Methods and apparatus for reinforcement learning | |
| JP6467965B2 (en) | Emotion estimation device and emotion estimation method | |
| JP5483899B2 (en) | Information processing apparatus and information processing method | |
| CN109254650B (en) | Man-machine interaction method and device | |
| US12198470B2 (en) | Server device, terminal device, and display method for controlling facial expressions of a virtual character | |
| CN110837294B (en) | Facial expression control method and system based on eyeball tracking | |
| CN109902189B (en) | Picture selection method and related equipment | |
| WO2017206400A1 (en) | Image processing method, apparatus, and electronic device | |
| CN111768478B (en) | Image synthesis method and device, storage medium and electronic equipment | |
| JP2015220574A (en) | Information processing system, storage medium, and content acquisition method | |
| US11800975B2 (en) | Eye fatigue prediction based on calculated blood vessel density score | |
| US11543884B2 (en) | Headset signals to determine emotional states | |
| CN117472315B (en) | Tablet computer eye protection system and method | |
| JP2018045435A (en) | Detection device, detection method, and detection program | |
| CN109542230B (en) | Image processing method, device, electronic device and storage medium | |
| CN115100728A (en) | Method, apparatus, storage medium and program product for detecting visual acuity status | |
| CN118690453A (en) | Emotion determination method of architectural elements based on eyeball information and related equipment | |
| CN109242031B (en) | Training method, using method, device and processing equipment of posture optimization model | |
| CN114170187A (en) | Image aesthetic scoring method and device, electronic equipment and storage medium | |
| CN108665455B (en) | Method and device for evaluating image significance prediction result | |
| CN117132689A (en) | User state-based virtual image generation method, device and equipment | |
| US20250047973A1 (en) | Information processing apparatus, information processing method, and non-transitory recording medium | |
| CN109432779B (en) | Difficulty adjusting method and device, electronic equipment and computer readable storage medium | |
| Novotny et al. | Face-based difficulty adjustment for the game five in a row | |
| US20250114706A1 (en) | Virtual environment augmentation methods and systems |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| TR01 | Transfer of patent right |
Effective date of registration: 2024-12-02. Patentee after: Yuanli Jinzhi (Chongqing) Technology Co., Ltd., No. 257, 2nd Floor, Building 9, No. 2 Huizhu Road, Liangjiang New District, Yubei District, Chongqing 401100, China. Patentee before: BEIJING KUANGSHI TECHNOLOGY Co., Ltd., 313, Block A, No. 2 South Academy of Sciences Road, Haidian District, Beijing, China.