CN114840086A - Control method, electronic device and computer storage medium - Google Patents
- Publication number
- CN114840086A (application CN202210507343.1A)
- Authority
- CN
- China
- Prior art keywords
- distance
- electronic device
- target object
- boundary
- touch
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Social Psychology (AREA)
- Psychiatry (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Computation (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- User Interface Of Digital Computer (AREA)
- Image Analysis (AREA)
Abstract
Description
Technical Field
The present application relates to non-contact touch control technology in electronic devices, and in particular to a control method, an electronic device, and a computer storage medium.
Background
With the rapid development of smartphones in recent years, people use mobile phones in more and more situations and scenarios. The mainstream human-computer interaction method is still touch operation; on this basis, gesture interaction, as a new form of interaction, is being applied more and more in scenarios such as driving and dining. Gesture interaction can implement various controls on a page, for example scrolling the page up and down, turning pages, taking photos, taking screenshots, and ending a recording.
However, when an electronic device is controlled through changes in gesture type or changes in other body parts, the device can only be controlled to execute the functions it already has, so the control is not fine-grained enough. It can therefore be seen that existing non-contact touch control methods lack refinement.
Summary of the Invention
Embodiments of the present application provide a control method, an electronic device, and a computer storage medium, which can make the non-contact touch control performed by the electronic device more fine-grained.
The technical solution of the present application is implemented as follows:
An embodiment of the present application provides a control method applied to an electronic device, including:
when the non-contact touch control function of the electronic device is enabled, acquiring a video sequence corresponding to a non-contact touch operation;
performing touch recognition on the video sequence to obtain a target object and a touch type;
determining a first distance between the target object and the electronic device;
determining touch operation parameters at the first distance;
controlling the electronic device according to the touch operation parameters to execute a target function corresponding to the touch type.
An embodiment of the present application provides an electronic device, including:
an acquisition module, configured to acquire a video sequence corresponding to a non-contact touch operation when the non-contact touch control function of the electronic device is enabled;
a processing module, configured to perform touch recognition on the video sequence to obtain a target object and a touch type;
a first determining module, configured to determine a first distance between the target object and the electronic device;
a second determining module, configured to determine touch operation parameters at the first distance;
a control module, configured to control the electronic device according to the touch operation parameters to execute a target function corresponding to the touch type.
An embodiment of the present application provides an electronic device, including:
a processor and a storage medium storing instructions executable by the processor, the storage medium relying on the processor to perform operations through a communication bus; when the instructions are executed by the processor, the control method described in one or more of the above embodiments is performed.
An embodiment of the present application provides a computer storage medium storing executable instructions; when the executable instructions are executed by one or more processors, the processors perform the control method described in one or more of the embodiments.
Embodiments of the present application provide a control method, an electronic device, and a computer storage medium, including: when the non-contact touch control function of the electronic device is enabled, acquiring a video sequence corresponding to a non-contact touch operation; performing touch recognition on the video sequence to obtain a target object and a touch type; determining a first distance between the target object and the electronic device; determining touch operation parameters at the first distance; and controlling the electronic device according to the touch operation parameters to execute a target function corresponding to the touch type. That is, in the embodiments of the present application, when realizing the non-contact touch control function of the electronic device, the first distance between the target object and the electronic device is determined, the touch operation parameters at that first distance are then determined, and the electronic device is controlled to execute the target function corresponding to the touch type according to those parameters. In this way, for the same touch type, different first distances can yield different touch operation parameters, so the electronic device can respond to the touch type according to the distance between the target object and the electronic device. Compared with the existing approach of merely controlling the target function corresponding to the touch type, the response to the same touch type differs with the touch operation parameters at different distances, which makes the non-contact touch control more fine-grained.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of an optional control method provided by an embodiment of the present application;
FIG. 2 is a schematic flowchart of a page control method in the related art;
FIG. 3a is a schematic diagram of Example 1 of an optional gesture frame provided by an embodiment of the present application;
FIG. 3b is a schematic diagram of Example 2 of an optional gesture frame provided by an embodiment of the present application;
FIG. 3c is a schematic diagram of Example 3 of an optional gesture frame provided by an embodiment of the present application;
FIG. 3d is a schematic diagram of Example 4 of an optional gesture frame provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of Example 1 of an optional control method provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of Example 2 of an optional control method provided by an embodiment of the present application;
FIG. 6a is a schematic diagram of Example 5 of an optional gesture frame provided by an embodiment of the present application;
FIG. 6b is a schematic diagram of Example 6 of an optional gesture frame provided by an embodiment of the present application;
FIG. 6c is a schematic diagram of Example 7 of an optional gesture frame provided by an embodiment of the present application;
FIG. 7a is a schematic layout diagram of Example 1 of an optional screen provided by an embodiment of the present application;
FIG. 7b is a schematic layout diagram of Example 2 of an optional screen provided by an embodiment of the present application;
FIG. 7c is a schematic layout diagram of Example 3 of an optional screen provided by an embodiment of the present application;
FIG. 8 is a schematic structural diagram of an optional electronic device provided by an embodiment of the present application;
FIG. 9 is a schematic structural diagram of another optional electronic device provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present application.
An embodiment of the present application provides a control method applied to an electronic device. FIG. 1 is a schematic flowchart of an optional control method provided by an embodiment of the present application. As shown in FIG. 1, the control method may include:
S101: when the non-contact touch control function of the electronic device is enabled, acquire a video sequence corresponding to a non-contact touch operation.
FIG. 2 is a schematic flowchart of a page control method in the related art. As shown in FIG. 2, taking gesture-based page control as an example, the page control method may include:
S201: turn on the front camera;
S202: capture a picture;
S203: perform continuous gesture detection;
S204: judge the gesture;
S205: respond to the operation of the gesture.
Here, the electronic device turns on the front camera and uses it to capture a picture containing a gesture. After the picture is captured, the gesture in the picture is continuously detected, the gesture is obtained and judged, and the operation corresponding to the gesture, for example a move-up operation, is determined; finally, the device responds to the operation of the gesture. However, with this approach the electronic device can only be controlled to execute the functions it already has, so the control is not fine-grained enough.
To make non-contact touch control more fine-grained, an embodiment of the present application provides an optional control method. First, when the non-contact touch control function of the electronic device is enabled, the front camera of the electronic device is turned on to capture the area in front of the screen. When a target object is detected in the captured pictures, a video sequence is acquired, so that the video sequence corresponding to the non-contact touch operation is obtained, where the video sequence includes a plurality of consecutive image frames. A minimal capture sketch is given below.
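A minimal sketch of this acquisition step, assuming an OpenCV capture loop: `detect_target` is a hypothetical placeholder for the touch-recognition detector, and the camera index and frame counts are illustrative assumptions.

```python
import cv2

def acquire_video_sequence(detect_target, num_frames=8, max_attempts=300):
    """Buffer a short run of consecutive frames once a target object is detected."""
    cap = cv2.VideoCapture(0)               # front camera; the index is an assumption
    sequence = []
    try:
        for _ in range(max_attempts):
            ok, frame = cap.read()
            if not ok:
                break
            if sequence or detect_target(frame):  # start buffering at first detection
                sequence.append(frame)
            if len(sequence) >= num_frames:
                break
    finally:
        cap.release()
    return sequence                          # consecutive frames of the touch operation
```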
The above touch operation may include a slide-up operation, a slide-down operation, a long-press operation, a page-up operation, a page-down operation, and a click operation; this is not specifically limited in the embodiments of the present application.
S102: perform touch recognition on the video sequence to obtain a target object and a touch type.
When the video sequence of the non-contact touch operation is acquired, touch recognition needs to be performed on it, so that the target object contained in the video sequence and the touch type can be obtained.
The target object may be a body part of the human body, for example a hand, the head, or the eyes; this is not specifically limited in the embodiments of the present application.
Regarding the target object, the user may use gestures to control the electronic device to execute the target function corresponding to the touch type of the gesture; the user may also use head movements, for example shaking the head left and right, to control the electronic device to execute the target function corresponding to the touch type of that movement; and the user may also use eye movements, for example blinking, to control the electronic device to execute the target function corresponding to the touch type of the blink. This is not specifically limited in the embodiments of the present application.
Based on the above description of touch operations, the touch type may include a slide type, a page-turning type, a click type, and a long-press type, where the slide-up and slide-down operations belong to the slide type, the long-press operation belongs to the long-press type, the page-up and page-down operations belong to the page-turning type, and the click operation belongs to the click type. This is not specifically limited in the embodiments of the present application.
In an optional embodiment, S102 may include:
performing touch recognition on the video sequence to obtain the target object, the bounding box of the target object, and confidence values of the touch operations of the target object;
determining the touch type based on the confidence values of the touch operations of the target object.
Through touch recognition of the video sequence, the target object, the bounding box of the target object, and the confidence values of the touch operations of the target object can be obtained. The bounding box represents the boundary information of the target object; the boundary information of the target object refers to its contour information, which may include the shape of the contour, the area of the contour, and the size of the contour lines. The bounding box is a rectangular box enclosing the target object, and it has a boundary height and a boundary width.
That is to say, through touch recognition of the video sequence, not only the target object but also its boundary information can be obtained. At the same time, by matching the posture category of the target object, the touch operation of the target object can be obtained. It should be noted that one or more touch operations of the target object may be matched, and touch recognition also yields a confidence value for each matched touch operation of the target object. In order to determine the touch type, it can be determined based on the confidence values of the touch operations of the target object.
It can be understood that when only one touch operation of the target object is matched, the type to which that touch operation belongs is directly determined as the touch type. For the case where more than one touch operation of the target object is matched, in an optional embodiment, determining the touch type based on the confidence values of the touch operations of the target object includes:
determining, as the touch type, the type to which the touch operation with the maximum confidence value belongs.
The confidence value of each touch operation of the target object reflects how credible that touch operation is; the larger the confidence value, the more credible the touch operation. Therefore, the maximum confidence value among the touch operations of the target object is selected, and the type of the touch operation corresponding to that maximum value is determined as the touch type. Determining the touch type through the confidence value in this way makes the determination more accurate and thus enables precise, fine-grained control of the electronic device.
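A minimal sketch of this maximum-confidence selection, assuming each matched touch operation is represented by a (type, confidence) pair; the structure and names are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TouchCandidate:
    touch_type: str    # e.g. "slide", "page_turn", "click", "long_press"
    confidence: float  # confidence value produced by touch recognition

def select_touch_type(candidates: list) -> Optional[str]:
    """Return the touch type of the matched operation with the highest confidence."""
    if not candidates:
        return None
    best = max(candidates, key=lambda c: c.confidence)
    return best.touch_type

# Two matched operations: the slide operation wins with confidence 0.9.
print(select_touch_type([TouchCandidate("slide", 0.9), TouchCandidate("click", 0.4)]))
```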
S103: determine a first distance between the target object and the electronic device.
Here, after the target object is obtained through touch recognition, the first distance between the target object and the electronic device needs to be determined. It should be noted that the first distance may be the distance between the target object and the screen of the electronic device, the distance between the target object and the back panel of the electronic device, or the distance between the target object and the image sensor of the electronic device; this is not specifically limited in the embodiments of the present application.
It can be understood that if each image frame in the video sequence captured by the front camera also has a corresponding depth image, the first distance between the target object and the electronic device can be determined from the depth image; the first distance can also be determined from the boundary information of the target object. This is not specifically limited in the embodiments of the present application.
When the first distance between the target object and the electronic device is determined from the boundary information of the target object, since each image frame in the video sequence has corresponding boundary information, the first distance can be determined from the boundary information of the target object in every image frame, or from the boundary information of the target object in a single one of those image frames. This is not specifically limited in the embodiments of the present application.
To determine the first distance between the target object and the electronic device from the boundary information of the target object, in an optional embodiment, S103 may include:
selecting the boundary information of the target object in one image frame from the boundary information of the target object in each image frame of the video sequence;
determining the first distance between the target object and the electronic device according to the selected boundary information of the target object.
Here, after the boundary information of the target object in each image frame of the video sequence is obtained through touch recognition, the boundary information of the target object in one image frame is selected, and the first distance between the target object and the electronic device is determined from this selected boundary information.
In an optional embodiment, when the boundary information is represented by a bounding box, determining the first distance between the target object and the electronic device according to the selected boundary information of the target object includes:
determining a second distance between each boundary of the bounding box and the corresponding edge of the screen of the electronic device;
determining a target boundary according to the second distances between the boundaries and the corresponding screen edges;
determining the first distance between the target object and the electronic device according to the length value of the target boundary, using a preset relationship between boundary length values and the distance from the target object to the electronic device.
It can be understood that, when the boundary information is represented by a bounding box, in determining the first distance between the target object and the electronic device from the selected bounding box, the second distance between each boundary of the bounding box and the corresponding screen edge is determined first. By judging the second distance from each boundary to the corresponding screen edge, it can be determined whether the bounding box encloses the complete target object, that is, whether the target object is cut off by the screen.
After the second distances between the boundaries of the bounding box and the corresponding screen edges are determined, whether the target object is cut off by the screen can be determined from these second distances. On this basis, one boundary is selected from the boundaries as the target boundary; the length value of the target boundary is the width or height value that remains usable when the target object is not cut off or only partially cut off. The length value of the selected target boundary is then used to determine the first distance between the target object and the electronic device.
It should be noted that two target boundaries may also be selected, for example both a width value and a height value, so that the width and the height of the bounding box are used together to determine the first distance between the target object and the electronic device.
In this way, once the boundaries have been screened, the target boundary can be used to determine the first distance between the target object and the electronic device, and the touch operation parameters at the first distance can then be determined.
In order to select a target object that is not cut off or only partially cut off, so as to obtain a usable target boundary and improve the accuracy of the first distance between the target object and the electronic device, in an optional embodiment, determining the target boundary according to the second distances between the boundaries of the bounding box and the corresponding edges of the screen of the electronic device includes:
when the second distances between all boundaries and the corresponding screen edges are greater than a preset threshold, selecting one boundary from the boundaries;
when exactly one boundary has a second distance to its corresponding screen edge that is less than or equal to the preset threshold, selecting the boundary whose second distance to the corresponding screen edge is less than or equal to the preset threshold;
determining the selected boundary as the target boundary.
That is to say, the second distance between each boundary and the corresponding screen edge is compared with the preset threshold. If all of them are greater than the preset threshold, the target object in the bounding box is not cut off by the screen, so any one boundary can be selected as the target boundary. If exactly one boundary has a second distance to its corresponding screen edge that is less than or equal to the preset threshold, the target object in the bounding box is cut off by the screen; however, the boundary whose second distance is less than or equal to the preset threshold can still reflect the size of the target object, so that boundary is selected as the target boundary, and its length value is used to determine the first distance between the target object and the electronic device.
In addition, if all second distances are less than or equal to the preset threshold, the target object in the bounding box is cut off by the screen and neither the height value nor the width value of the bounding box reflects the size of the target object, so the length value and/or width value of the bounding box cannot be used to calculate the first distance. In this case, prompt information can be generated to indicate that the target object is not within the control range of the electronic device.
In the following, the bounding box is described taking the hand as the target object and a gesture frame as the bounding box. FIG. 3a is a schematic diagram of Example 1 of an optional gesture frame provided by an embodiment of the present application. As shown in FIG. 3a, the second distances between all boundaries of the gesture frame and the corresponding screen edges are greater than the preset threshold, so the gesture is not cut off by the screen, and any boundary of the gesture frame can be selected as the target boundary to determine the first distance between the hand and the electronic device. FIG. 3b is a schematic diagram of Example 2 of an optional gesture frame provided by an embodiment of the present application. As shown in FIG. 3b, only one width boundary has a second distance to the corresponding screen edge that is less than or equal to the preset threshold, so the hand is cut off by the screen, but the width boundary of the gesture frame can be selected as the target boundary to determine the first distance between the hand and the electronic device. FIG. 3c is a schematic diagram of Example 3 of an optional gesture frame provided by an embodiment of the present application. As shown in FIG. 3c, only one height boundary has a second distance to the corresponding screen edge that is less than or equal to the preset threshold, so the hand is cut off by the screen, but the height boundary of the gesture frame can be selected as the target boundary to determine the first distance between the hand and the electronic device. FIG. 3d is a schematic diagram of Example 4 of an optional gesture frame provided by an embodiment of the present application. As shown in FIG. 3d, two boundaries, a height boundary and a width boundary, have second distances to the corresponding screen edges that are less than or equal to the preset threshold, so the hand is cut off by the screen. In this case no boundary of the gesture frame can be selected as the target boundary to determine the first distance between the hand and the electronic device, and prompt information can be generated to indicate that the target object is not within the control range of the electronic device.
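A minimal sketch of this target-boundary selection, assuming the gesture frame is given in screen pixel coordinates and that a "width boundary" denotes a horizontal edge of the frame (whose length is the frame width) while a "height boundary" denotes a vertical edge; the threshold value is an illustrative assumption.

```python
def select_target_boundary(box, screen_w, screen_h, threshold=5):
    """Return "width", "height", or None depending on which edge length is usable.

    box: (left, top, right, bottom) of the gesture frame in pixels.
    """
    left, top, right, bottom = box
    # Second distances from the height boundaries (left/right edges) and the
    # width boundaries (top/bottom edges) to the corresponding screen edges.
    height_boundary_near_edge = left <= threshold or (screen_w - right) <= threshold
    width_boundary_near_edge = top <= threshold or (screen_h - bottom) <= threshold

    if height_boundary_near_edge and width_boundary_near_edge:
        return None       # truncated in both directions: distance cannot be computed
    if height_boundary_near_edge:
        return "height"   # hand truncated horizontally: the height is still usable
    if width_boundary_near_edge:
        return "width"    # hand truncated vertically: the width is still usable
    return "width"        # not truncated: either boundary works, pick the width
```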
In an optional embodiment, calculating the first distance between the target object and the electronic device according to the length value of the target boundary, using the preset relationship between boundary length values and the distance from the target object to the electronic device, includes:
when the target boundary is a width boundary, calculating the first distance between the target object and the electronic device according to the length value of the target boundary, using a preset relationship between boundary width values and the distance from the target object to the electronic device.
The relationship between boundary width values and the distance from the target object to the electronic device is pre-stored in the electronic device. Once the width value of the target boundary is known, the first distance between the target object and the electronic device can be calculated from it using this preset relationship.
It can be understood that there is a definite relationship between the boundary width value and the distance from the target object to the electronic device, and this relationship can be used to calculate the first distance between the target object and the electronic device. Taking the hand as the target object and a gesture frame as the bounding box, the relationship between the boundary width value and the distance from the target object to the electronic device can be determined as follows:
record the width W1 of the gesture frame when the distance between the user's hand and the electronic device is D1, and the width W2 when the distance is D2; then, when the distance is D, the current gesture frame width W can be calculated as:
W=W1+(D-D1)(W2-W1)/(D2-D1) (1)W=W1+(D-D1)(W2-W1)/(D2-D1) (1)
From this, the relationship between the boundary width value and the distance from the target object to the electronic device can be determined as:
D=(W-W1)(D2-D1)/(W2-W1)+D1 (2)D=(W-W1)(D2-D1)/(W2-W1)+D1 (2)
Here, W1 and W2 are both width values of the gesture frame when the hand is not cut off by the screen.
In an optional embodiment, calculating the first distance between the target object and the electronic device according to the length value of the target boundary, using the preset relationship between boundary length values and the distance from the target object to the electronic device, includes:
when the target boundary is a height boundary, calculating the first distance between the target object and the electronic device according to the length value of the target boundary, using a preset relationship between boundary height values and the distance from the target object to the electronic device.
The relationship between boundary height values and the distance from the target object to the electronic device is pre-stored in the electronic device. Once the height value of the target boundary is known, the first distance between the target object and the electronic device can be calculated from it using this preset relationship.
It can be understood that there is a definite relationship between the boundary height value and the distance from the target object to the electronic device, and this relationship can be used to calculate the first distance between the target object and the electronic device. Taking the hand as the target object and a gesture frame as the bounding box, the relationship between the boundary height value and the distance from the target object to the electronic device can be determined as follows:
record the height H1 of the gesture frame when the distance between the user's hand and the electronic device is D1, and the height H2 when the distance is D2; then, when the distance is D, the current gesture frame height H can be calculated as:
H=H1+(D-D1)(H2-H1)/(D2-D1) (3)H=H1+(D-D1)(H2-H1)/(D2-D1) (3)
From this, the relationship between the boundary height value and the distance from the target object to the electronic device can be determined as:
D=(H-H1)(D2-D1)/(H2-H1)+D1 (4)D=(H-H1)(D2-D1)/(H2-H1)+D1 (4)
Here, H1 and H2 are both height values of the gesture frame when the hand is not cut off by the screen.
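A minimal sketch of formulas (2) and (4): the hand-to-device distance D is recovered from the gesture-frame width or height through two calibration points. The calibration numbers below are illustrative assumptions, not values from the application.

```python
def distance_from_boundary(length, l1, d1, l2, d2):
    """Invert the linear relationship L = L1 + (D - D1)(L2 - L1)/(D2 - D1)."""
    return (length - l1) * (d2 - d1) / (l2 - l1) + d1

# Calibration: at D1 = 20 cm the frame width is 300 px; at D2 = 50 cm it is 120 px.
W1, D1, W2, D2 = 300.0, 20.0, 120.0, 50.0

# If the current frame width is 210 px, formula (2) gives the hand-to-device distance.
print(distance_from_boundary(210.0, W1, D1, W2, D2))  # 35.0 cm with these numbers

# The height-based formula (4) uses the same helper with H1, D1, H2, D2 instead.
```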
In addition to determining the first distance between the target object and the electronic device from the boundary information selected from a single image frame as above, the first distance can also be determined from the boundary information of the target object in every image frame. In an optional embodiment, after acquiring the boundary information of the target object in each image frame of the video sequence, the above method further includes:
determining the distance between the target object and the electronic device in each image frame according to the boundary information of the target object in that image frame;
determining the average of the distances between the target object and the electronic device in the image frames as the first distance between the target object and the electronic device.
It can be understood that determining the distance between the target object and the electronic device in each image frame from the boundary information of the target object in that frame is implemented in the same way as determining the first distance from the selected boundary information of the target object, and is not repeated here.
After the distance between the target object and the electronic device in each image frame is determined, an averaging algorithm can be used to calculate the average of these distances, and this average is determined as the first distance between the target object and the electronic device.
Further, in order to determine the first distance accurately, in an optional embodiment, determining the average of the per-frame distances between the target object and the electronic device as the first distance includes:
when the absolute value of the difference between any two of the per-frame distances between the target object and the electronic device is less than or equal to a preset error threshold, determining the average of the per-frame distances as the first distance between the target object and the electronic device.
Here, before the average is calculated, it is first judged whether the absolute value of the difference between any two of the per-frame distances is less than or equal to the preset error threshold. If so, the distance between the target object and the electronic device only jitters within the preset error threshold and does not change much, so the average can be used to determine the first distance between the target object and the electronic device.
In addition, in an optional embodiment, the above method further includes:
when, among the absolute values of the differences between any two of the per-frame distances, there is an absolute value greater than the preset error threshold, controlling the electronic device according to preset touch operation parameters at a standard distance to execute the target function corresponding to the touch type.
That is to say, if the absolute value of the difference between some pair of per-frame distances is greater than the preset error threshold, the distance between the target object and the electronic device jitters strongly and changes considerably, so the first distance between the target object and the electronic device cannot be determined. In that case, the electronic device can be controlled directly according to the preset touch operation parameters at the standard distance to execute the target function corresponding to the touch type.
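A minimal sketch of this averaging with a jitter check; the error threshold and the fallback value used when the distance is too unstable are illustrative assumptions.

```python
def first_distance(per_frame_distances, error_threshold=3.0, standard_distance=25.0):
    """Average the per-frame distances if they agree within the error threshold,
    otherwise signal that the preset standard distance should be used instead."""
    d = per_frame_distances
    if not d:
        return standard_distance
    stable = all(abs(a - b) <= error_threshold
                 for i, a in enumerate(d) for b in d[i + 1:])
    if stable:
        return sum(d) / len(d)    # jitter is small: use the average as the first distance
    return standard_distance      # jitter is large: fall back to the standard distance

print(first_distance([24.0, 25.5, 24.8]))  # stable: returns the average
print(first_distance([24.0, 40.0, 25.0]))  # unstable: returns the standard distance
```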
S104: determine touch operation parameters at the first distance.
After the first distance is determined, the touch operation parameters at the first distance can be determined. For example, the electronic device stores, for each touch operation or touch type, a correspondence between distance and touch operation parameters; this correspondence can be used to determine the touch operation parameters at the first distance, or a preset parameter calculation formula can be used to calculate them. This is not specifically limited in the embodiments of the present application.
In an optional embodiment, the touch operation parameters include any one of the following: the sliding distance of a slide operation, the sliding speed of a slide operation, the page-turning speed of a page-turning operation, the long-press duration of a long-press operation, and the click frequency of a click operation.
That is to say, the touch operation of the target object may be a slide operation, a long-press operation, a click operation, and so on. The operation parameters of a slide operation may include the sliding distance and the sliding speed, the operation parameters of a long-press operation may include the long-press duration, and the operation parameters of a click operation may include the click frequency. This is not specifically limited in the embodiments of the present application.
In order to determine the touch operation parameters at the first distance, in an optional embodiment, S104 may include:
calculating the sensitivity coefficient at the first distance based on a preset correspondence between distance and sensitivity coefficient;
determining the touch operation parameters at the first distance according to the sensitivity coefficient at the first distance.
Here, after the first distance is determined, since the correspondence between distance and sensitivity coefficient is pre-stored in the electronic device, the sensitivity coefficient at the first distance can be calculated based on this correspondence, and the touch operation parameters at the first distance are then determined based on that sensitivity coefficient.
In an optional embodiment, determining the touch operation parameters at the first distance according to the sensitivity coefficient at the first distance includes:
calculating the touch operation parameters at the first distance using the sensitivity coefficient at the first distance and preset touch operation parameters at a standard distance.
It can be understood that the electronic device stores the touch operation parameters corresponding to each touch operation. Therefore, in determining the touch operation parameters at the first distance according to the sensitivity coefficient at the first distance, for example, the product of the sensitivity coefficient at the first distance and the preset touch operation parameters at the standard distance is determined as the touch operation parameters at the first distance.
That is to say, the sensitivity coefficient at the first distance is used to determine the touch operation parameters at the first distance, so that target objects at different distances from the electronic device correspond to different touch operation parameters even for the same touch operation. This extends the functionality of non-contact touch control, makes the user's non-contact control of the electronic device more fine-grained, and improves the user experience.
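A minimal sketch of this step, assuming a lookup table as the preset distance-to-sensitivity correspondence and the product rule for scaling; the table entries and standard parameter values are illustrative assumptions.

```python
# Preset correspondence between distance (cm) and sensitivity coefficient.
SENSITIVITY_BY_DISTANCE = {15: 1.10, 25: 1.00, 40: 0.85, 55: 0.70}

# Preset touch operation parameters at the standard distance (25 cm).
STANDARD_PARAMETERS = {"slide_distance_px": 200.0, "page_turn_speed": 1.0}

def parameters_at(distance_cm):
    """Scale the standard-distance parameters by the sensitivity at the first distance."""
    # Simple lookup policy: use the sensitivity of the nearest tabulated distance.
    nearest = min(SENSITIVITY_BY_DISTANCE, key=lambda d: abs(d - distance_cm))
    k = SENSITIVITY_BY_DISTANCE[nearest]
    # Product rule: parameter at the first distance = K * standard-distance parameter.
    return {name: k * value for name, value in STANDARD_PARAMETERS.items()}

print(parameters_at(38))  # uses the 40 cm entry, so all parameters are scaled by 0.85
```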
S105: control the electronic device according to the touch operation parameters to execute the target function corresponding to the touch type.
After the touch operation parameters are determined, the electronic device executes the target function corresponding to the touch type according to the determined touch operation parameters. The electronic device may control a page on the screen to execute the target function corresponding to the touch type according to the touch operation parameters, or may perform data processing according to the touch operation parameters, for example an image processing function. This is not specifically limited in the embodiments of the present application.
For controlling the screen of the electronic device, in an optional embodiment, S105 may include:
controlling, according to the touch operation parameters, a controlled object on the screen of the electronic device to execute the target function corresponding to the touch type;
where the controlled object includes any one of the following: a page, an interface, and a control. That is to say, in addition to controlling a page on the screen, the above control method can also control an interface, or control a specific control on the screen. This is not specifically limited in the embodiments of the present application.
Taking page sliding as an example, in an optional embodiment, controlling the controlled object on the screen of the electronic device according to the touch operation parameters to execute the target function corresponding to the touch type includes:
controlling the page on the screen according to the sliding distance of the slide operation to execute the sliding function.
For example, when the user slides the page with the palm, the electronic device responds to the sliding motion of the palm, determines that the operation corresponding to the motion is a slide operation, determines the distance between the palm and the screen, and then determines the sliding distance of the slide operation at that distance. The electronic device then controls the page to execute the sliding function corresponding to the slide operation according to that sliding distance. In this way, even when palms at different distances from the screen make the same sliding motion, the corresponding slide operations have different sliding distances, so the user's control of the page is more fine-grained and the user experience is improved.
The control method of one or more of the above embodiments is described below with examples.
The above control method is described below taking the hand as the target object and a gesture frame as the bounding box. FIG. 4 is a schematic diagram of Example 1 of an optional control method provided by an embodiment of the present application. As shown in FIG. 4, the control method may include:
S401: capture a gesture picture;
In S401, the gesture interaction application turns on the front camera and captures a gesture picture through it.
S402: gesture detection;
S403: obtain the gesture frame;
In S402 to S403, after the picture is captured, it is detected to obtain the gesture category, the gesture confidence, and the gesture frame. For the gesture detection, FIG. 5 is a schematic diagram of Example 2 of an optional control method provided by an embodiment of the present application. As shown in FIG. 5, the gesture detection method may include:
S501: obtain a picture containing a gesture;
S502: perform model inference on the picture;
S503: model post-processing;
S504: non-maximum suppression;
S505: obtain the gesture frame.
Here, after the picture containing the gesture is obtained, detecting the picture mainly involves model inference, model post-processing, non-maximum suppression, and similar operations, so that the gesture in the picture can be detected. Each detection result contains a gesture frame, a gesture category (the category corresponding to the change in hand posture, which corresponds to a touch type), and a gesture confidence (the confidence of the touch type corresponding to the gesture category, for example 0.9). If the picture contains detection results for multiple gesture categories, the gesture category with the highest confidence is taken as the final detection result.
S404:计算手与屏幕的距离;S404: Calculate the distance between the hand and the screen;
这里,根据手势框,计算用户的手与手机屏幕间的距离,即为手距离屏幕的距离。其中,可以通过下述方式计算手与屏幕的距离:Here, according to the gesture frame, the distance between the user's hand and the screen of the mobile phone is calculated, that is, the distance between the hand and the screen. Among them, the distance between the hand and the screen can be calculated in the following ways:
First, the width and height of the gesture box are recorded for a hand-to-screen distance of D1. FIG. 6a is a schematic diagram of Example 5 of an optional gesture box provided by an embodiment of the present application. As shown in FIG. 6a, the width W1 and the height H1 of the gesture box are recorded when the distance between the user's hand and the screen is D1;
then, the width and height of the gesture box are recorded for a hand-to-screen distance of D2. FIG. 6b is a schematic diagram of Example 6 of an optional gesture box provided by an embodiment of the present application. As shown in FIG. 6b, the width W2 and the height H2 of the gesture box are recorded when the distance between the user's hand and the screen is D2;
finally, FIG. 6c is a schematic diagram of Example 7 of an optional gesture box provided by an embodiment of the present application. As shown in FIG. 6c, when the distance is D, the width W and the height H of the current gesture box are given by formula (1) and formula (3).
Conversely, when the width and height of the gesture box are W and H, the current distance D between the hand and the screen can be obtained from formula (2) and formula (4).
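Formulas (1)–(4) themselves appear earlier in the description; purely as an illustration, the sketch below assumes the simplest relationship consistent with the two calibration records above, namely a straight line through (D1, W1) and (D2, W2) that is then inverted to recover D from a measured width W (the same pattern would apply to the height H). The function names, the linear form and the example numbers are assumptions for this sketch.

```python
def calibrate(d1, w1, d2, w2):
    # Fit W = a * D + b through the two recorded calibration points.
    a = (w2 - w1) / (d2 - d1)
    b = w1 - a * d1
    return a, b

def distance_from_width(w, a, b):
    # Invert the fitted relation (stand-in for formula (2)): D = (W - b) / a.
    return (w - b) / a

# Example: calibration at 20 cm and 40 cm, then a box 150 px wide is observed.
a, b = calibrate(20.0, 200.0, 40.0, 100.0)
print(distance_from_width(150.0, a, b))  # -> 30.0 cm under these assumed numbers
```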
It should be noted that the distance between the hand and the screen can be calculated from either the width or the height of the gesture box. When the hand is at the edge of the screen, the hand may be cut off by the screen boundary. In this case, the distances from the four sides of the gesture box to the screen edges can be calculated to judge whether the width or the height of the gesture box is still usable when the hand is cut off, and a usable side is selected to calculate the distance between the hand and the screen. If neither the width nor the height is usable, the distance between the hand and the screen cannot be calculated.
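One plausible reading of this note, sketched below, checks how far each side of the gesture box lies from the screen edges: if the left or right side sits on an edge the width is treated as unusable, if the top or bottom side sits on an edge the height is unusable, and the distance is computed from whichever dimension survives. The margin value and the helper names are assumptions.

```python
def hand_to_screen_distance(box, screen_w, screen_h,
                            width_to_distance, height_to_distance, margin=2):
    """box = (x1, y1, x2, y2) in screen pixels; returns None if the hand is
    cut off on both axes and the distance cannot be calculated."""
    x1, y1, x2, y2 = box
    width_usable = x1 > margin and (screen_w - x2) > margin    # not cut off left/right
    height_usable = y1 > margin and (screen_h - y2) > margin   # not cut off top/bottom
    if width_usable:
        return width_to_distance(x2 - x1)    # e.g. formula (2)
    if height_usable:
        return height_to_distance(y2 - y1)   # e.g. formula (4)
    return None
```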
S405: update the sensitivity coefficient of the touch type corresponding to the gesture;
In S405, the sensitivity coefficient of the touch type corresponding to the gesture is adjusted according to the distance between the user's hand and the screen of the mobile phone.
S406: control the sliding distance of the page.
In S406, the sliding distance of the page is calculated according to the sensitivity coefficient of the touch type corresponding to the current gesture.
After the distance between the hand and the screen has been calculated in S404, the sensitivity coefficient of the touch type corresponding to the gesture can be adjusted. Assuming that the sensitivity coefficient is 1 when the distance between the hand and the screen is 25 cm, then for a distance D (10 cm < D < 60 cm) the updated sensitivity coefficient K of the touch type corresponding to the gesture is:
K = 1 - 0.01 * (D - 25)    (5)
That is, when the distance between the hand and the screen is less than 25 cm, the sensitivity coefficient of the touch type corresponding to the gesture is increased; when the distance between the hand and the screen is greater than 25 cm, the sensitivity coefficient of the touch type corresponding to the gesture is decreased.
When calculating the sliding distance on the mobile phone page according to the sensitivity coefficient of the touch type corresponding to the current gesture, take the case where the touch type corresponding to the gesture is the sliding type as an example. Suppose that, when the distance between the hand and the screen is 25 cm, each sliding gesture slides the page by a distance M0; then, when the distance between the hand and the screen is D, the sliding distance M of the page for each sliding gesture is:
M = M0 * K = M0 * (1 - 0.01 * (D - 25))    (6)
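A minimal sketch of formulas (5) and (6), assuming distances are given in centimetres and using the 25 cm reference distance stated above:

```python
def sensitivity_coefficient(d_cm):
    # Formula (5): K = 1 - 0.01 * (D - 25), stated for 10 cm < D < 60 cm.
    return 1.0 - 0.01 * (d_cm - 25.0)

def slide_distance(d_cm, m0):
    # Formula (6): M = M0 * K.
    return m0 * sensitivity_coefficient(d_cm)

print(slide_distance(20.0, 100.0))  # closer than 25 cm -> 105.0, more sensitive
print(slide_distance(40.0, 100.0))  # farther than 25 cm -> 85.0, less sensitive
```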
Finally, the distance D between the hand and the screen is obtained by substituting into formula (2) or formula (4), and the page is then controlled according to D to execute the sliding function corresponding to the sliding operation.
FIG. 7a is a schematic layout diagram of Example 1 of an optional screen provided by an embodiment of the present application. As shown in FIG. 7a, a communication record list is displayed on the screen, and the user can swing four fingers (all except the thumb) up and down to control the electronic device to slide the screen page. FIG. 7b is a schematic layout diagram of Example 2 of an optional screen provided by an embodiment of the present application. As shown in FIG. 7b, when the distance between the user's hand and the screen is 25 cm, each up-and-down swing of the four fingers slides the page by the distance of one communication record, so the first communication record displayed on the page is "4-17 incoming call from Sister". FIG. 7c is a schematic layout diagram of Example 3 of an optional screen provided by an embodiment of the present application. As shown in FIG. 7c, when the distance between the user's hand and the screen is 26 cm, each up-and-down swing of the four fingers slides the page by the distance of two communication records, so the first communication record displayed on the page is "4-16 outgoing call to Li Si". In this way, the distance between the hand and the screen determines the sliding distance of the page for each up-and-down swing of the four fingers, so that different distances correspond to different sliding distances.
In this example, the sliding distance is intelligently adjusted based on the gesture sensitivity coefficient obtained from gesture detection: the current distance between the hand and the screen is calculated from the gesture detection result, the current gesture sensitivity coefficient is adjusted according to this distance, and the adjusted coefficient is applied to the control of the page sliding distance when the user operates the mobile phone. When the user's hand is far from the screen, a lower sensitivity coefficient is obtained; when the user's hand is close to the screen, a higher sensitivity coefficient is obtained.
In this way, more refined page control is achieved. Users can choose a comfortable distance for gesture control of the page according to their own usage habits, which improves the user experience of controlling pages by gestures.
In addition, it should be noted that this example can be used not only to adjust the sensitivity coefficient of the sliding distance in gesture interaction, but also to adjust the sliding speed of a sliding operation, the click frequency of a click operation, the page-turning speed of a page-turning operation, the long-press time of a long-press operation, the response time of a return-to-previous-level operation, the moving distance of a movement operation that controls a character in a game, and the like. It can also be applied to the control of the sensitivity coefficient of new interaction modes such as face recognition, body recognition and eye gaze, providing a more comfortable user experience in these scenarios.
An embodiment of the present application provides a control method, including: when the non-contact touch control function of an electronic device is in the on state, acquiring a video sequence corresponding to a non-contact touch operation; performing touch recognition on the video sequence to obtain a target object and a touch type; determining a first distance between the target object and the electronic device; determining touch operation parameters at the first distance; and, according to the touch operation parameters, controlling the electronic device to execute a target function corresponding to the touch type. That is to say, in the embodiments of the present application, when realizing the non-contact touch control function of the electronic device, the first distance between the target object and the electronic device is determined, the touch operation parameters at the first distance are then determined, and the controlled object of the electronic device is controlled to execute the target function corresponding to the touch type according to those parameters. In this way, for the same touch type, different first distances can yield different touch operation parameters, so the electronic device can respond to a touch type according to the distance between the target object and the electronic device. Compared with existing solutions that only control the target function corresponding to the touch type, the same touch type is handled differently through touch operation parameters that vary with distance, which makes non-contact touch control more refined.
Based on the same inventive concept as the foregoing embodiments, an embodiment of the present application provides an electronic device. FIG. 8 is a schematic structural diagram of an optional electronic device provided by an embodiment of the present application. As shown in FIG. 8, the electronic device includes:
an acquisition module 81, configured to acquire, when the non-contact touch control function of the electronic device is in the on state, a video sequence corresponding to a non-contact touch operation;
a processing module 82, configured to perform touch recognition on the video sequence to obtain a target object and a touch type;
a first determining module 83, configured to determine a first distance between the target object and the electronic device;
a second determining module 84, configured to determine touch operation parameters at the first distance; and
a control module 85, configured to control, according to the touch operation parameters, the electronic device to execute a target function corresponding to the touch type.
In an optional embodiment, the first determining module 83 is specifically configured to:
select the boundary information of one target object from the boundary information of the target object in each image frame of the video sequence; and
determine the first distance between the target object and the screen of the electronic device according to the selected boundary information of the target object.
In an optional embodiment, when the boundary information is represented by a bounding box, the first determining module 83 determining the first distance between the target object and the electronic device according to the selected boundary information of the target object includes:
determining a second distance between each boundary of the bounding box and the corresponding edge of the screen of the electronic device;
determining a target boundary according to the second distance between each boundary and the corresponding edge of the screen; and
determining the first distance between the target object and the electronic device according to the length value of the target boundary, using a preset relationship between the boundary length value and the distance from the target object to the electronic device.
In an optional embodiment, the first determining module 83 determining the target boundary according to the second distance between each boundary of the bounding box and the corresponding edge of the screen includes:
when the second distances between the boundaries and the corresponding edges of the screen are all greater than a preset threshold, selecting one boundary from the boundaries;
when, among the second distances between the boundaries and the corresponding edges of the screen, only one boundary has a second distance to the corresponding edge of the screen that is less than or equal to the preset threshold, selecting from the boundaries the boundary whose second distance to the corresponding edge of the screen is less than or equal to the preset threshold; and
determining the selected boundary as the target boundary.
In an optional embodiment, the first determining module 83 calculating the first distance between the target object and the electronic device according to the length value of the target boundary, using the preset relationship between the boundary length value and the distance from the target object to the electronic device, includes:
when the target boundary is a width boundary, calculating the first distance between the target object and the electronic device according to the length value of the target boundary, using a preset relationship between the boundary width value and the distance from the target object to the electronic device.
In an optional embodiment, the first determining module 83 calculating the first distance between the target object and the electronic device according to the length value of the target boundary, using the preset relationship between the boundary length value and the distance from the target object to the electronic device, includes:
when the target boundary is a height boundary, calculating the first distance between the target object and the electronic device according to the length value of the target boundary, using a preset relationship between the boundary height value and the distance from the target object to the electronic device.
In an optional embodiment, the first determining module 83 is specifically configured to:
determine the distance between the target object and the electronic device in each image frame according to the boundary information of the target object in each image frame of the video sequence; and
determine the average of the distances between the target object and the electronic device in the image frames as the first distance between the target object and the electronic device.
In an optional embodiment, the first determining module 83 determining the average of the distances between the target object and the electronic device in the image frames as the first distance between the target object and the electronic device includes:
when the absolute value of the difference between any two of the distances between the target object and the electronic device in the image frames is less than or equal to a preset error threshold, determining the average of the distances between the target object and the electronic device in the image frames as the first distance between the target object and the electronic device.
In an optional embodiment, the electronic device is further configured to:
when any of the absolute values of the differences between any two of the distances between the target object and the electronic device in the image frames is greater than the preset error threshold, control the electronic device, according to preset touch operation parameters at a standard distance, to execute the target function corresponding to the touch type.
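A small sketch of this per-frame averaging and its fallback, assuming the per-frame distances have already been computed and that returning None signals "fall back to the preset standard-distance touch operation parameters"; the function name and return convention are assumptions.

```python
def first_distance(per_frame_distances, error_threshold):
    """Average the per-frame distances when every pair differs by at most the
    error threshold; otherwise return None so the caller falls back to the
    preset touch operation parameters at the standard distance."""
    ds = list(per_frame_distances)
    if not ds:
        return None
    for i, a in enumerate(ds):
        for b in ds[i + 1:]:
            if abs(a - b) > error_threshold:
                return None   # frames disagree too much
    return sum(ds) / len(ds)
```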
In an optional embodiment, the second determining module 84 is specifically configured to:
calculate the sensitivity coefficient at the first distance based on a preset correspondence between distance and sensitivity coefficient; and
determine the touch operation parameters at the first distance according to the sensitivity coefficient at the first distance.
In an optional embodiment, the second determining module 84 determining the touch operation parameters at the first distance according to the sensitivity coefficient at the first distance includes:
calculating the touch operation parameters at the first distance using the sensitivity coefficient at the first distance and preset touch operation parameters at a standard distance.
In an optional embodiment, the processing module 82 performing touch recognition on the video sequence to obtain the target object and the touch type includes:
performing touch recognition on the video sequence to obtain the target object, the bounding box of the target object and confidence values of touch operations of the target object, where the bounding box represents the boundary information; and
determining the touch type based on the confidence values of the touch operations of the target object.
In an optional embodiment, the processing module 82 determining the touch type based on the confidence values of the touch operations of the target object includes:
determining, as the touch type, the type to which the touch operation corresponding to the maximum confidence value among the touch operations of the target object belongs.
In an optional embodiment, the touch operation parameters include any one of the following: a sliding distance of a sliding operation, a sliding speed of a sliding operation, a page-turning speed of a page-turning operation, a long-press time of a long-press operation and a click frequency of a click operation.
In an optional embodiment, the control module 85 is specifically configured to:
control, according to the touch operation parameters, a controlled object on the screen of the electronic device to execute the target function corresponding to the touch type, where the controlled object includes any one of the following: a page, an interface and a control.
In an optional embodiment, the control module 85 controlling, according to the touch operation parameters, the controlled object on the screen of the electronic device to execute the target function corresponding to the touch type includes:
controlling the page of the screen to execute the sliding function according to the sliding distance of the sliding operation.
In practical applications, the above acquisition module 81, processing module 82, first determining module 83, second determining module 84 and control module 85 may be implemented by a processor located on the electronic device, specifically a central processing unit (CPU), a microprocessor unit (MPU), a digital signal processor (DSP), a field programmable gate array (FPGA) or the like.
FIG. 9 is a schematic structural diagram of another optional electronic device provided by an embodiment of the present application. As shown in FIG. 9, an embodiment of the present application provides an electronic device 900, including:
a processor 91 and a storage medium 92 storing instructions executable by the processor 91, where the storage medium 92 relies on the processor 91 to perform operations through a communication bus 93, and when the instructions are executed by the processor 91, the control method described in one or more of the foregoing embodiments is executed.
It should be noted that, in practical applications, the components in the terminal are coupled together through the communication bus 93. It can be understood that the communication bus 93 is used to implement connection and communication among these components. In addition to a data bus, the communication bus 93 also includes a power bus, a control bus and a status signal bus. However, for the sake of clarity, the various buses are all labelled as the communication bus 93 in FIG. 9.
An embodiment of the present application provides a computer storage medium storing executable instructions. When the executable instructions are executed by one or more processors, the processors execute the control method performed by the first electronic device in one or more of the foregoing embodiments.
The computer-readable storage medium may be a memory such as a ferromagnetic random access memory (FRAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a flash memory, a magnetic surface memory, an optical disc, or a compact disc read-only memory (CD-ROM).
Those skilled in the art should understand that the embodiments of the present application may be provided as a method, a system, or a computer program product. Therefore, the present application may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, optical storage and the like) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or the other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are performed on the computer or the other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or the other programmable device provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
The above descriptions are only preferred embodiments of the present application and are not intended to limit the protection scope of the present application.
Claims (19)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210507343.1A CN114840086B (en) | 2022-05-10 | 2022-05-10 | Control method, electronic equipment and computer storage medium |
PCT/CN2022/141461 WO2023216613A1 (en) | 2022-05-10 | 2022-12-23 | Control method, electronic device and computer storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210507343.1A CN114840086B (en) | 2022-05-10 | 2022-05-10 | Control method, electronic equipment and computer storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114840086A true CN114840086A (en) | 2022-08-02 |
CN114840086B CN114840086B (en) | 2024-07-30 |
Family
ID=82568888
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210507343.1A Active CN114840086B (en) | 2022-05-10 | 2022-05-10 | Control method, electronic equipment and computer storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN114840086B (en) |
WO (1) | WO2023216613A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023216613A1 (en) * | 2022-05-10 | 2023-11-16 | Oppo广东移动通信有限公司 | Control method, electronic device and computer storage medium |
Citations (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050025345A1 (en) * | 2003-07-30 | 2005-02-03 | Nissan Motor Co., Ltd. | Non-contact information input device |
US20090228841A1 (en) * | 2008-03-04 | 2009-09-10 | Gesture Tek, Inc. | Enhanced Gesture-Based Image Manipulation |
JP2010147784A (en) * | 2008-12-18 | 2010-07-01 | Fujifilm Corp | Three-dimensional imaging device and three-dimensional imaging method |
US20120218183A1 (en) * | 2009-09-21 | 2012-08-30 | Extreme Reality Ltd. | Methods circuits device systems and associated computer executable code for facilitating interfacing with a computing platform display screen |
US20120229377A1 (en) * | 2011-03-09 | 2012-09-13 | Kim Taehyeong | Display device and method for controlling the same |
CN103017730A (en) * | 2012-11-30 | 2013-04-03 | 中兴通讯股份有限公司 | Single-camera ranging method and single-camera ranging system |
CN103472916A (en) * | 2013-09-06 | 2013-12-25 | 东华大学 | Man-machine interaction method based on human body gesture recognition |
US8902198B1 (en) * | 2012-01-27 | 2014-12-02 | Amazon Technologies, Inc. | Feature tracking for device input |
KR20150064597A (en) * | 2013-12-03 | 2015-06-11 | 엘지전자 주식회사 | Video display device and operating method thereof |
US20160026254A1 (en) * | 2013-03-14 | 2016-01-28 | Lg Electronics Inc. | Display device and method for driving the same |
CN105308536A (en) * | 2013-01-15 | 2016-02-03 | 厉动公司 | Dynamic user interactions for display control and customized gesture interpretation |
US20170139482A1 (en) * | 2014-06-03 | 2017-05-18 | Lg Electronics Inc. | Image display apparatus and operation method thereof |
CN107291221A (en) * | 2017-05-04 | 2017-10-24 | 浙江大学 | Across screen self-adaption accuracy method of adjustment and device based on natural gesture |
CN109782906A (en) * | 2018-12-28 | 2019-05-21 | 深圳云天励飞技术有限公司 | A kind of gesture identification method of advertisement machine, exchange method, device and electronic equipment |
CN111084606A (en) * | 2019-10-12 | 2020-05-01 | 深圳壹账通智能科技有限公司 | Vision detection method and device based on image recognition and computer equipment |
CN112394811A (en) * | 2019-08-19 | 2021-02-23 | 华为技术有限公司 | Interaction method for air-separating gesture and electronic equipment |
CN114449153A (en) * | 2020-10-31 | 2022-05-06 | 广东小天才科技有限公司 | Shooting control method of wearable device, wearable device and storage medium |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8890812B2 (en) * | 2012-10-25 | 2014-11-18 | Jds Uniphase Corporation | Graphical user interface adjusting to a change of user's disposition |
CN110414495B (en) * | 2019-09-24 | 2020-05-19 | 图谱未来(南京)人工智能研究院有限公司 | Gesture recognition method and device, electronic equipment and readable storage medium |
CN112947755A (en) * | 2021-02-24 | 2021-06-11 | Oppo广东移动通信有限公司 | Gesture control method and device, electronic equipment and storage medium |
CN114840086B (en) * | 2022-05-10 | 2024-07-30 | Oppo广东移动通信有限公司 | Control method, electronic equipment and computer storage medium |
Also Published As
Publication number | Publication date |
---|---|
WO2023216613A1 (en) | 2023-11-16 |
CN114840086B (en) | 2024-07-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10514842B2 (en) | Input techniques for virtual reality headset devices with front touch screens | |
CN106406710B (en) | Screen recording method and mobile terminal | |
US11886643B2 (en) | Information processing apparatus and information processing method | |
EP3049908B1 (en) | Presentation of a control interface on a touch-enabled device based on a motion or absence thereof | |
US11573627B2 (en) | Method of controlling device and electronic device | |
EP3382510A1 (en) | Visibility improvement method based on eye tracking, machine-readable storage medium and electronic device | |
CN107392933B (en) | A kind of image segmentation method and mobile terminal | |
KR20120068253A (en) | Method and apparatus for providing response of user interface | |
CN112585566A (en) | Hand-covering face input sensing for interacting with device having built-in camera | |
EP3757878B1 (en) | Head pose estimation | |
CN106980840A (en) | Shape of face matching process, device and storage medium | |
JP2025508595A (en) | Method, device and storage medium for input recognition in a virtual scene | |
US10444831B2 (en) | User-input apparatus, method and program for user-input | |
CN114840086A (en) | Control method, electronic device and computer storage medium | |
CN110069126B (en) | Virtual object control method and device | |
US12366926B2 (en) | Finger orientation touch detection | |
US12189888B1 (en) | Speed adapted touch detection | |
KR20160022832A (en) | Method and device for character input | |
CN112256126A (en) | Method, electronic circuit, electronic device, and medium for recognizing gesture | |
CN113243000A (en) | Capture range for augmented reality objects | |
US9761009B2 (en) | Motion tracking device control systems and methods | |
KR101558094B1 (en) | Multi-modal system using for intuitive hand motion and control method thereof | |
US11237671B1 (en) | Temporal filter touch detection | |
CN119225538B (en) | Data interaction method and system of palm tablet personal computer | |
CN118625989A (en) | Touch object recognition method, device, storage medium and electronic device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |