
CN104679270B - System and method for receiving user input, program storage medium and program therefor - Google Patents


Info

Publication number
CN104679270B
CN104679270B (application CN201410527952.9A)
Authority
CN
China
Prior art keywords
user
control area
keys
area
display
Prior art date
Legal status
Active
Application number
CN201410527952.9A
Other languages
Chinese (zh)
Other versions
CN104679270A (en)
Inventor
张明伟
张家铭
阙志克
Current Assignee
Industrial Technology Research Institute ITRI
Original Assignee
Industrial Technology Research Institute ITRI
Priority date
Filing date
Publication date
Priority claimed from US 14/303,438 (granted as US9857971B2)
Application filed by Industrial Technology Research Institute ITRI
Publication of CN104679270A
Application granted
Publication of CN104679270B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F 3/042 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
    • G06F 3/0425 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means using a single imaging device like a video camera for tracking the absolute position of a single or a plurality of objects with respect to an imaged reference surface, e.g. video camera imaging a display or a projection screen, a table or a wall surface, on which a computer generated image is displayed or projected
    • G06F 3/0426 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means using a single imaging device like a video camera for tracking the absolute position of a single or a plurality of objects with respect to an imaged reference surface, e.g. video camera imaging a display or a projection screen, a table or a wall surface, on which a computer generated image is displayed or projected tracking fingers with respect to a virtual keyboard projected or printed on the surface
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04886 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/107 Static hand or arm
    • G06V 40/113 Recognition of static hand signs

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Input From Keyboards Or The Like (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A system and method for receiving user input, together with a program storage medium and program therefor. The method for receiving user input comprises the following steps: displaying a virtual keyboard configuration and a control area, wherein the virtual keyboard configuration comprises a plurality of subgroup keys and each subgroup key corresponds to one of a plurality of regions of the control area; extracting a plurality of positions of an object from at least one captured image and identifying the position of a characteristic point of the object; determining a selected region, namely the region of the control area in which the characteristic point is located; determining a plurality of keys corresponding to the selected region; and interpreting movement of the object as input data to a user interface system.

Description

System and Method for Receiving User Input, and Program Storage Medium and Program Therefor

Technical Field

The present invention relates to a system and method for receiving user input, and to a program storage medium and program therefor.

Background

In recent years, various user interface systems and systems for receiving user input have been proposed. In general, a user input system is one through which a person interacts with a machine such as a personal computer (PC). A user interface system provides an input means that allows the user to operate the system, and an output means that lets the user see the results of the operation.

What is desired is a simple, efficient and easy-to-use system for receiving user input.

Summary of the Invention

The present disclosure relates to a system and method for receiving user input, and to a program storage medium and program therefor.

According to an embodiment of the present disclosure, a method for receiving user input comprises the following steps: displaying a virtual keyboard configuration and a control area, the virtual keyboard configuration comprising a plurality of subgroup keys, each subgroup key corresponding to one of a plurality of regions of the control area; extracting a plurality of positions of an object from at least one captured image and identifying the position of a characteristic point of the object; determining a selected region, namely the region, among all the regions of the control area, in which the characteristic point is located; determining a plurality of keys corresponding to the selected region; and interpreting movement of the object as input data to a user interface system.

Another embodiment of the present disclosure provides a system for receiving user input. The system comprises: a display for displaying a virtual keyboard configuration and a control area, the virtual keyboard configuration comprising a plurality of subgroup keys, each subgroup key corresponding to one of a plurality of regions of the control area; one or more sensors for sensing movement of an object; and a computing system coupled to the one or more sensors and the display. The computing system extracts a plurality of positions of the object from at least one captured image to identify the position of a characteristic point of the object, determines a selected region (the region of the control area in which the characteristic point is located), determines a plurality of keys corresponding to the selected region, and interprets the movement of the object as input data to the system.

Yet another embodiment of the present disclosure provides a program storage medium storing a computer program that causes an electronic device to perform the following steps: displaying a virtual keyboard configuration and a control area, the virtual keyboard configuration comprising a plurality of subgroup keys, each subgroup key corresponding to one of a plurality of regions of the control area; extracting a plurality of positions of an object from at least one captured image and identifying the position of a characteristic point of the object; determining a selected region, namely the region, among the regions of the control area, in which the characteristic point is located; determining a plurality of keys corresponding to the selected region; and interpreting movement of the object as input data to a user interface system.

A further embodiment of the present disclosure provides a computer program product stored on a computer-readable medium, comprising a computer-readable program that, when executed on an electronic device, carries out the method described above.

For a better understanding of the above and other aspects of the present application, embodiments are described in detail below with reference to the accompanying drawings:

Brief Description of the Drawings

FIG. 1 illustrates a user interface system according to an embodiment.

FIG. 2A illustrates a wearable computing device according to an embodiment of the user interface system.

FIG. 2B illustrates operation of the user interface system of FIG. 2A according to an embodiment.

FIG. 2C illustrates the embodiment of the user interface system of FIG. 2A, showing one mode of virtual input.

FIGS. 3A-3D show an embodiment of inputting data to the user interface system, in which the user wears markers or a glove to input data.

FIG. 4 discloses a keyboard configuration of another embodiment, in which the user wears markers or a glove to input data.

FIG. 5 shows another embodiment of the present disclosure, in which the user inputs data bare-handed.

FIG. 6 shows another embodiment of the present disclosure, in which the user inputs data with both hands.

FIG. 7 shows another embodiment of the present disclosure, illustrating that the control area can be combined with the subgroup keys and that the key correspondence can change as the hand moves.

FIGS. 8A and 8B show another embodiment of the present disclosure, in which the control area can be combined with the subgroup keys and the key correspondence can change as the hand moves.

FIG. 8C shows another embodiment of the present disclosure, in which the user types with a single finger.

FIGS. 8D-8E show another embodiment of the present disclosure, in which the user types with a single finger.

FIG. 9 shows a user input system comprising a user interface system and a host system according to another embodiment of the present disclosure.

FIG. 10 shows an embodiment of the system of FIG. 9.

FIG. 11 illustrates a method of an embodiment of the present disclosure applicable to the system shown in FIG. 1 or FIG. 9.

FIG. 12 illustrates a system for receiving user input according to yet another embodiment of the present disclosure.

In the following detailed description, specific details are set forth for purposes of explanation, in order to provide a thorough understanding of the disclosed embodiments. The embodiments may nevertheless be practiced without these specific details. In other instances, well-known structures and devices are shown schematically in simplified form in the drawings.

[Description of Reference Numerals]

100, 200, 900A: user interface system

110, 208, 902: sensor

120, 904, 914: processor

130: memory

140, 202, 918: display

204: computer system

206: video camera

240: virtual input pattern

241: QWERTY-like keyboard

242: control area

310: hand

320: glove

330: feature point marker

340: nail-shaped markers

360: active identifier

370: typing result

380: hand/finger tracking information

900: system

900B: host system

908: transceiver

910: communication link

912: receiver

906, 916: memory

1110-1180: steps

1210: mobile device

Detailed Description

In a first embodiment, referring to FIG. 1, a user interface system 100 can receive data and display information corresponding to the received data. As shown in FIG. 1, the user interface system includes a sensor 110, a processor 120, a memory 130 and a display 140.

The user interface system 100 can receive data and display information corresponding to the received data. For example, the user interface system 100 may be a heads-up display system, such as glasses or any type of near-to-eye display unit that includes a display.

The sensor 110 detects movements of the user's fingers and the user's hand, and provides the detection results to the processor 120. The user interface system may include one or more sensors coupled to the processor.

The processor 120 can interpret hand/finger movements as data input to the user interface system 100, and can process the data to be shown on the display. The processor 120 may be any type of processor, such as a microprocessor (single-core or multi-core), a digital signal processor (DSP), a graphics processing unit, and so on.

The memory 130 is the built-in data storage of the user interface system 100. The memory 130 is coupled to the processor 120 and can store software to be accessed and executed by the processor 120.

The display 140 may be, for example, an optical see-through display, an optical see-around display, a video see-through display, a liquid crystal display (LCD), a plasma display, or another type of display. The display 140 may receive data from the processor 120 in a wired or wireless manner.

Referring to FIG. 2A, an embodiment of a user interface system 200 is a wearable computing device, such as a head-mounted display (HMD) in the form of glasses. The user interface system 200 includes a display 202, an on-board computer system 204, a video camera 206 and a sensor 208.

The display 202 can overlay computer-generated graphics on the user's view of the physical world. That is, if the display 202 shows a virtual keyboard and the user's hand is visible to the user, the user can see the virtual keyboard and his/her hand at the same time.

As shown in FIG. 2A, a single display 202 is located at the center of one lens of the glasses. However, the present application is not limited thereto; for example, the display 202 may be placed at other positions. In addition, the user interface system 200 may include more than one display, for example a second display on the other lens of the glasses, or multiple displays on the same lens. The display 202 is connected to and controlled by the computer system 204.

The on-board computer system 204 is located, for example, on the bridge of the glasses or at another position (e.g., at a nose pad), but is not limited thereto. The on-board computer system 204 may include a processor (e.g., the processor 120 in FIG. 1) and memory (e.g., the memory 130 in FIG. 1). The on-board computer system 204 may receive and analyze data from the camera 206 (as well as data from other sensing devices and/or user interfaces) and control the display 202 accordingly. In addition, graphics data, video data, image data, text data and the like from other data sources may be relayed through the on-board computer system 204 to the display 202.

The camera 206 is mounted, for example, on the frame of the glasses or at another position. The camera 206 may capture images at different resolutions and/or at different frame rates. The camera may be a small form-factor camera, such as those used in mobile phones, webcams and the like, and may be incorporated into the user interface system 200. It should be understood, however, that the embodiments described herein are not limited to any particular type of video camera. Instead of the single camera 206 shown in FIG. 2A, multiple cameras may be provided, and each camera may be configured to capture the same scene or different angles/scenes, and may thus be mounted at different areas of the glasses. The camera 206 may be a two-dimensional (2D) camera or a three-dimensional (3D) camera.

What the camera 206 captures may resemble the user's field of view. Other configurations are also possible. As shown in FIG. 2A, the camera 206 may be mounted on the bridge of the glasses; in other embodiments, the camera 206 may be mounted at a nose pad or near the user's forehead, between the eyes. The camera 206 may be oriented in the same direction as the user's eyes to capture images of the area in front of the glasses.

The position and size of the display 202 are set such that the displayed images appear to "float" in the user's view of the real world, so that the computer-generated information appears to merge with the user's perception of the physical world. To achieve this, the on-board computer system 204 may analyze the images captured by the camera 206 to determine which images should be displayed and how they should be displayed (e.g., their position on the display, their displayed size, and so on).

The sensor 208 is mounted on the bridge of the glasses; however, the sensor 208 may be mounted at other areas of the user interface system 200. In addition, further sensors may be included in the user interface system 200.

FIG. 2B illustrates an example operation of the user interface system 200 of FIG. 2A. In an embodiment, the user may wear the user interface system 200 just like wearing glasses. FIG. 2C illustrates an example of the user interface system of FIG. 2A. As shown in FIG. 2C, a virtual input pattern 240 may be shown on the display 202 of the glasses.

The virtual input pattern 240 may include a QWERTY-like keyboard 241 and a control area 242. The pattern 240 is sent from the on-board computer system 204 to the display 202.

In this example, the QWERTY-like keyboard 241 may be divided into a plurality of subgroup keys, each subgroup key corresponding to a respective region of the control area 242 according to a predetermined correspondence.

As shown in FIG. 2C, the QWERTY-like keyboard 241 may be divided into 9 subgroup keys. The relationship between the subgroup keys of the QWERTY-like keyboard 241 and the 9 regions of the control area 242 can be expressed, for example, as a key-region correspondence table such as Table 1 below.

Table 1

In the above, the user's first, second, third, fourth and fifth fingers to which the keys correspond may be, for example, the thumb, index finger, middle finger, ring finger and little finger of the user's right hand.

In another example, the first, second, third, fourth and fifth fingers to which the keys correspond may be, for example, the little finger, ring finger, middle finger, index finger and thumb of the user's left hand, respectively.

The subgroup keys of the keyboard correspond to the respective regions according to the relationship between the positions of the subgroup keys on the QWERTY-like keyboard 241 and the positions of the regions of the control area. For example, the subgroup key "qwert" at the upper-left corner of the keyboard corresponds to region "0" at the upper-left corner of the control area. In other words, each subgroup key of the virtual keyboard configuration corresponds to one of the regions of the control area according to the respective positions of the subgroup keys in the virtual keyboard configuration and the respective positions of the regions in the control area. Embodiments of what the above table expresses, and of how input is performed on the user interface system 200, are described below.
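
Table 1 itself did not survive this extraction, so the correspondence can only be sketched from the entries the description confirms. Below is a minimal Python sketch of such a key-region table; only the entries for regions 0, 3 and 4 come from the text (note that regions 3 and 4 share the key "g"), and the lookup helper is an illustrative assumption, not part of the patent.

```python
# Key-region correspondence for the 9-region layout of FIG. 2C.
# Only these three entries are confirmed by the description; the other
# six regions of (the missing) Table 1 would be filled in analogously.
KEY_REGION_TABLE = {
    0: "qwert",  # upper-left subgroup -> upper-left region "0"
    3: "asdfg",  # region "3" (the FIG. 3A example)
    4: "ghjkl",  # middle subgroup -> middle region "4" (the FIG. 3B example)
}

def keys_for_region(region: int) -> str:
    """Return the subgroup of keys that becomes active when the
    feature point enters the given region of the control area."""
    return KEY_REGION_TABLE.get(region, "")
```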

FIGS. 2A-2C show an embodiment in which the user interface system includes glasses. Alternatively, components of the system may be separate from the glasses. For example, the camera 206 and/or the sensor 208 may be separate from, or removable from, the glasses and may be attached to other body parts of the user, including the neck, chest, wrist or a (waist) belt, to capture images and sense the movements of the user's hand/fingers.

Referring to FIGS. 3A-3D, which show an embodiment of inputting to the user interface system ("air typing"), the user may wear markers or a glove to input data. When the user wants to type, the user extends his/her right hand into the range of the camera and the sensor. In this example, before air typing, the user puts on a glove 320 having a feature point marker 330 and five nail-shaped markers 340. When the user wears the glove 320, the position of the feature point marker 330 corresponds to a feature point of the user's hand. The feature point is, for example but not limited to, any point on the back of the user's hand; for example, the feature point may be the web between the thumb and index finger, a finger pit of the user's hand, or the center of the back of the user's hand. The nail-shaped markers 340 are associated with the nails of the user's hand 310. For example, a nail-shaped marker 340 may be a ring, a removable tape adhered to the nail/finger, or nail polish applied to the nail. In the following description, the position of the feature point marker 330 corresponds to the back of the user's hand.

When inputting, the user extends his/her right hand 310 until the on-board computer system 204 determines that the feature point marker 330 of the glove 320 is located within a selected region of the control area. For example, as shown in FIG. 3A, the user interface system 200 determines whether the feature point marker 330 of the glove 320 is located within region "3" of the control area. If so, the user interface system 200 determines that the user wants to input one of the keys a, s, d, f and g.

To enable this determination by the on-board computer system 204, the sensor 110 or the camera 206 senses/captures the feature point marker 330 and the nail-shaped markers 340, and sends the sensing/capture results to the on-board computer system 204.

After the on-board computer system 204 extracts the images of the feature point marker 330 and the nail-shaped markers 340 from the captured images, the on-board computer system 204 obtains the coordinates of the feature point marker 330 and the nail-shaped markers 340. The on-board computer system 204 then compares the coordinates of the feature point marker 330 with the region coordinates of the control area to determine the region in which the feature point marker 330 is located.
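
The comparison described above reduces to a point-in-region test against each region of the control area. A minimal sketch, assuming the regions are axis-aligned rectangles in image coordinates (the patent does not fix the region geometry):

```python
from typing import NamedTuple, Optional, Sequence

class Region(NamedTuple):
    index: int   # region number, e.g. 0..8 for the 3x3 layout of FIG. 2C
    x0: float    # left edge in image coordinates
    y0: float    # top edge
    x1: float    # right edge
    y1: float    # bottom edge

def locate_region(fx: float, fy: float,
                  regions: Sequence[Region]) -> Optional[int]:
    """Compare the feature-point coordinates (fx, fy) against each
    region's bounds and return the selected region's index, or None
    when the feature point lies outside the control area."""
    for r in regions:
        if r.x0 <= fx < r.x1 and r.y0 <= fy < r.y1:
            return r.index
    return None
```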

After the on-board computer system 204 determines the selected region in which the feature point marker 330 is located, the on-board computer system 204 controls the display 202 to show an active identifier 360. The active identifier 360 shows which keys correspond to the selected region in which the feature point marker 330 is located. The active identifier 360 may be highlighted so that the user can easily see which keys are mapped (or active). The user can thereby type/input on the user interface system 200.

For example, if the user wants to input the key "d", which corresponds to region "3", the user moves his/her right hand until the on-board computer system 204 of the user interface system 200 determines that the feature point marker 330 on the user's right hand 310 is located within region "3" of the control area. The on-board computer system 204 then controls the display 202 to show the active identifier 360. After the active identifier 360 (surrounding the keys a, s, d, f and g) is displayed, the user moves/taps/bends the middle finger. The on-board computer system 204 treats this finger movement as a strike of the key "d". By recognizing/interpreting this finger movement as striking the key "d", the on-board computer system 204 controls the display 202 to show a typing result 370 displaying "d", and sends a typing event indicating that the key "d" was struck to a host (not shown).

Similarly, as shown in FIG. 3B, if the user wants to input "h", the user moves the glove 320 until the on-board computer system 204 of the user interface system 200 determines that the feature point marker 330 on the user's right hand 310 is located within region "4" of the control area. When the user sees the active identifier 360, the user moves/taps/bends the index finger to indicate striking the key "h". By recognizing the finger tap as striking the key "h", the on-board computer system 204 controls the display 202 to show the typing result 370 (which now displays "dh"), and sends a typing event indicating that the key "h" was struck. The operations of FIGS. 3C and 3D are similar and are omitted for brevity.
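
Resolving a tap therefore amounts to indexing the active subgroup by the finger that moved. The sketch below reuses keys_for_region from the earlier sketch and assumes the right-hand finger assignment quoted above for Table 1 (first finger = thumb, ..., fifth finger = little finger); how the tap itself is detected is left abstract, since the patent does not commit to a particular gesture classifier.

```python
def resolve_key(region: int, finger: int) -> str:
    """Map a detected tap of finger `finger` (0 = thumb .. 4 = little
    finger, right-hand assignment) while the feature point sits in
    `region` to the corresponding key of the active subgroup."""
    subgroup = keys_for_region(region)
    if not subgroup or not 0 <= finger < len(subgroup):
        raise ValueError("no key mapped to this region/finger")
    return subgroup[finger]

# FIG. 3A: feature point in region "3", middle finger taps -> "d".
assert resolve_key(3, 2) == "d"
# FIG. 3B: feature point in region "4", index finger taps -> "h".
assert resolve_key(4, 1) == "h"
```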

In addition, after the on-board computer system 204 obtains the coordinates of the nail-shaped markers 340, the on-board computer system 204 controls the display 202 to show hand/finger tracking information 380, which indicates the positions of the nail-shaped markers 340 of the glove 320 (i.e., the positions of the nails of the user's hand 310).

Because the correspondence between the layout of the QWERTY-like keyboard 241 and the regions of the control area 242 follows human intuition, the user can memorize the layout of the QWERTY-like keyboard 241 and the correspondence with little or no mental burden. For example, the keys q, w, e, r and t (at the upper-left corner of the QWERTY-like keyboard 241) correspond to region "0" at the upper-left corner of the control area 242; likewise, the keys g, h, j, k and l in the middle of the QWERTY-like keyboard 241 correspond to the middle region "4" of the control area 242.

Referring to FIG. 4, which shows another keyboard layout according to an embodiment of the present disclosure, the user may wear markers or a glove to input data. In this example, the QWERTY-like keyboard may be divided into 12 subgroup keys. Likewise, the relationship between the 12 subgroup keys of the QWERTY-like keyboard and the individual regions of the control area can be summarized in a key-region correspondence table, as shown in Table 2. In other possible embodiments of the present disclosure, the user may wear markers on the fingers (without a glove) to input data. In still other possible embodiments, the user may wear a glove or gloves that include feature points but no markers. That is, in the present disclosure, the glove, the nail-shaped markers and the feature point marker are all optional.

Table 2

The user operation with the keyboard layout shown in FIG. 4 is similar to that described for FIGS. 3A-3D, so for brevity the details are omitted here.

Referring to FIG. 5, which shows another embodiment of the present disclosure, the user may input data bare-handed. As described above, in FIGS. 3A-3D and FIG. 4 the user wears a glove having the feature point marker 330 and five nail-shaped markers 340 to input data into the user interface system. By contrast, in FIG. 5, the user wears no glove when air typing on the user interface system 200. The user interface system 200 therefore detects and tracks a feature point of the user's hand to determine whether the feature point of the user's hand is located within a selected region of the control area. Similarly, the feature point may be any point on the back of the user's hand, or the web between the thumb and index finger. In addition, the user interface system 200 detects and tracks the fingertips of the user's hand and controls the display to show the hand/finger tracking information.

The user operation with the keyboard layout shown in FIG. 5 is similar to that described for FIGS. 3A-3D, so for brevity the details are omitted here.

Referring now to FIG. 6, which shows another example of an embodiment of the present disclosure: in FIG. 6, the user wears no marked glove when typing. That is, the user may input data with both (bare) hands.

In FIG. 6, the user uses two hands to input data into the user interface system 200. As shown in FIG. 6, the QWERTY-like keyboard may be divided into 12 subgroup keys, each subgroup key corresponding to a respective region of the control area according to a predetermined correspondence. Likewise, the relationship between the 12 subgroup keys of the QWERTY-like keyboard and the 12 regions of the control area can be summarized in a key-region correspondence table such as Table 3. In addition, the left half of the control area (regions 0, 1, 4, 5, 8 and 9) corresponds to keys to be pressed by the user's left hand, and the right half of the control area (regions 2, 3, 6, 7, 10 and 11) corresponds to keys to be pressed by the user's right hand; a small sketch of this split follows Table 3 below.

Table 3
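
Table 3 was likewise not preserved in this extraction, so only the left/right split stated above can be illustrated. A hedged Python sketch of that partition; the two region index sets are taken directly from the preceding paragraph, everything else is illustrative:

```python
LEFT_HAND_REGIONS = {0, 1, 4, 5, 8, 9}     # left half of the control area
RIGHT_HAND_REGIONS = {2, 3, 6, 7, 10, 11}  # right half of the control area

def hand_for_region(region: int) -> str:
    """Return which hand is expected to type the subgroup mapped to
    `region` in the two-handed 12-region layout of FIG. 6."""
    if region in LEFT_HAND_REGIONS:
        return "left"
    if region in RIGHT_HAND_REGIONS:
        return "right"
    raise ValueError("region outside the 12-region control area")
```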

After the user interface system determines that the feature point of either of the user's hands has entered a region of the control area, the active identifier is displayed, and the user taps/bends the fingers of the left/right hand to type on the user interface system 200.

For example, if the user wants to input "a" and "&", the user moves the left hand into region "4" and taps/bends the left little finger; the user then moves the right hand into region "7" and taps/bends the right thumb.

Referring now to FIG. 7, which shows another embodiment of the present disclosure: in FIG. 7, the user wears no marked glove when typing, the control area may be combined with the subgroup keys, and the key correspondence changes with the movement of the hand.

The QWERTY-like keyboard of FIG. 7 may be divided into 6 subgroup keys. As the layout of FIG. 7 shows, the subgroup keys overlap the regions of the control area. In other words, in the layout example of FIG. 7, the regions of the control area are combined with the subgroup keys of the virtual keyboard configuration; the left-side subgroup keys (regions) are assigned to the user's left hand, and the right-side subgroup keys (regions) are assigned to the user's right hand. After the user interface system 200 detects and determines that a feature point of the user's left/right hand has entered any subgroup key (region), the user interface system 200 controls the display to show an active identifier surrounding the selected region that the feature point has entered. For example, if the user interface system 200 detects and determines that a feature point of the user's left/right hand has entered the upper-left subgroup key (upper-left region), the user interface system 200 controls the display to show the active identifier surrounding the upper-left region (which corresponds to the keys "qwert"). Upon seeing the active identifier, the user can tap/bend a finger to type.

Furthermore, in the example of FIG. 7, as soon as the user's hand is detected within a subgroup key/region, the user can tap/bend a finger to start typing. For example, if the user interface system detects and determines that a feature point of the user's hand has entered the range corresponding to the "asdfg" subgroup key/region, the user interface system determines that the user is about to type in the middle-left subgroup (the keys "asdfg"). Likewise, if the user interface system detects and determines that a feature point of the user's hand has entered the subgroup key/region corresponding to the keys "nmSB.", the user interface system determines that the user is about to type in the bottom-right subgroup (the keys "nmSB.").

Referring now to FIGS. 8A and 8B, which show another example of an embodiment of the present disclosure, the control area may be combined with the subgroup keys, and the key correspondence changes with the movement of the hand. As shown in FIGS. 8A and 8B, the user wears no marked glove when inputting. FIGS. 8A and 8B illustrate a key layout different from that of FIG. 7. The operation of FIGS. 8A and 8B is similar to that of FIG. 7, so the details are not repeated here. In FIGS. 7, 8A and 8B, the feature point includes a corresponding point on the back of the user's right/left hand.

As shown in FIGS. 6-8B, the user wears no marked glove. Alternatively, when typing in the examples of FIGS. 6-8B, the user may wear marked gloves to assist the tracking of the feature points and of the user's fingers.

In addition, in other possible embodiments, the keyboard layout may be adapted for one-handed typing, besides the examples of FIG. 7, FIG. 8A and FIG. 8B in which the user types with both hands. For example, the subgroup keys of the keyboard layout may be arranged in an order (e.g., from top to bottom) such that the user can input with one hand. This remains within the spirit and scope of the invention. In an embodiment, the respective positions of the regions of the control area may be superimposed on the respective positions of the subgroup keys of the virtual keyboard configuration.

In the above examples, the keyboard layout, the control area, the active identifier, the typing result and the hand/finger tracking information are shown on the display of the user interface system. The captured images of the user's hand(s) need not be shown on the display of the user interface system, because the user can see his/her own hand(s) directly.

FIG. 8C discloses another example of an embodiment of the present disclosure, in which the user may input with a single finger. As shown in FIG. 8C, when the system determines that the feature point is located in region "3", the system displays the active identifier 360 surrounding the keys "asdfg". The system then detects the position of the user's finger (for example but not limited to the index finger); when the system determines that the user wants to input "d", the system displays another active identifier 390 surrounding the key "d". The user may then tap the finger, and the system interprets the fingertip movement as pressing the key "d".
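
In this single-finger mode the second selection step uses the fingertip's position within the active subgroup rather than the identity of the finger. A sketch under the assumption that the subgroup keys are rendered as equal-width slots and that the fingertip's x-position has been normalized to [0, 1) across the highlighted subgroup; the patent leaves the exact hit-testing open.

```python
def key_under_fingertip(subgroup: str, norm_x: float) -> str:
    """Return the key of the active subgroup (e.g. "asdfg") under a
    fingertip whose x-position has been normalized to [0, 1) across
    the subgroup's on-screen extent; this key receives the second
    active identifier (390 in FIG. 8C)."""
    slot = min(int(norm_x * len(subgroup)), len(subgroup) - 1)
    return subgroup[slot]

# FIG. 8C: a fingertip roughly centered over "asdfg" selects "d".
assert key_under_fingertip("asdfg", 0.5) == "d"
```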

FIGS. 8D and 8E disclose another embodiment in which the user may input with a single finger. As shown in FIG. 8D, after the system determines that the feature point is located in region "3", the system displays the active identifier 360 surrounding the keys "asdfg". After the system detects the position of the user's finger (for example but not limited to the index finger), the system also displays the keys "asdfg" 395 near the position of the user's finger. When the system then determines that the user wants to input "d", the system displays another active identifier 397 surrounding the key "d" of the keys "asdfg" 395. The user may then tap the finger, and the system interprets the fingertip movement as pressing the key "d". Similarly, FIG. 8E shows that the user may move the finger to strike the key "a".

The above figures show examples of the user typing with a single finger. Of course, the user may type with any finger of either hand. In other words, these examples may be combined with the embodiments of the other figures and still fall within the scope of the present disclosure.

Referring now to FIG. 9, which shows a system 900 for receiving user input according to another embodiment of the present disclosure: the system 900 includes a user interface system 900A and a host system 900B. The user interface system 900A includes a sensor 902, a processor 904, a memory 906 and a transceiver 908. The host system 900B includes a receiver 912, a processor 914, a memory 916 and a display 918. The user interface system 900A and the host system 900B are coupled via a communication link 910. The communication link 910 may be a wireless or wired connection; for example, it may be a wired connection such as a serial bus (e.g., USB) or a parallel bus, or a wireless connection such as Bluetooth, IEEE 802.11 or another wireless communication link.

The components 902, 904 and 906 of the user interface system 900A are similar to the corresponding components of the user interface systems 100/200; the difference is that the user interface system 900A includes a transceiver 908 instead of a display. The transceiver 908 of the user interface system 900A is used to send data to and receive data from the host system 900B. The receiver 912 of the host system 900B is used to send data to and receive data from the user interface system 900A.

The functions and operations of the processor 120 of the user interface system 100 may be performed by the processor 904 and/or the processor 914. For example, the processor 904 determines whether the feature point of the glove/user's hand is located within a selected subgroup key/region, and interprets the user's finger taps as key presses. Through the transceiver 908, the communication link 910 and the receiver 912, the determination results of the processor 904 and the captured images may be sent to the processor 914. The processor 914 controls the display 918 to show the keyboard layout, the control area, the images of the user's hands, the active identifier, the typing results, the hand/finger tracking information and so on.

FIG. 10 illustrates an example of the system 900 of FIG. 9. As shown in FIG. 10, the user interface system 900A may be a wearable computing device, and the host system 900B may be any type of computing device, such as a PC, a laptop computer, a mobile phone, etc., that exchanges data with the user interface system 900A in a wired or wireless manner.

After the user puts on the user interface system 900A, the user may extend the hand/glove into the sensing range of the sensor 902 and/or the capture range of the image capture device, and the user can see the image of the hand shown on the display 918 of the host system 900B.

After the processor 904 of the user interface system 900A determines that the feature point of the hand/glove is located within a subgroup key/region, the active identifier is shown on the display 918 of the host system 900B, and the user can tap/bend fingers to type on the user interface system 900A. The typing result is shown on the display 918. That is, during typing, the user watches the display 918 of the host system 900B.

The keyboard layouts and control areas shown above are also applicable to FIGS. 9 and 10. Furthermore, the keyboard layout, the control area, the active identifier, the images of the user's hand(s), the typing results and the hand/finger tracking information may be shown on the display of the host system. On the other hand, in the examples of FIGS. 9 and 10, the user interface system may be used to capture images of the user's hand(s) and to sense the movements of the user's hand(s)/finger(s). Determining whether the user's hand(s) are located within the selected region, and detecting the user's finger taps, may be performed by either the user interface system 900A or the host system 900B.

Referring to FIG. 11, which shows a method according to another embodiment of the present disclosure: the method of FIG. 11 may be used with the system shown in FIG. 1 or FIG. 9. FIG. 11 may include one or more operations, functions or actions. Although the steps are shown in sequential order, they may also be performed in parallel and/or in a different order. Furthermore, steps may be combined into fewer steps, split into additional steps, and/or removed as desired.

Furthermore, for the methods disclosed herein, the flowchart of FIG. 11 shows one possible implementation of the functions and operations of this embodiment. In this regard, each step may be implemented as a module, a circuit, a segment or a portion of program code that comprises one or more instructions executable by a processor for implementing the disclosed functions or steps. The program code may be stored on any type of computer-readable medium, for example a storage device such as a hard disk or flash memory. The computer-readable medium may include non-transitory computer-readable media such as random access memory (RAM). The computer-readable medium may also include non-transitory media such as read-only memory (ROM), optical or magnetic disks, or the built-in memory of firmware. The computer-readable medium may further be any other volatile or non-volatile storage system.

In step 1110, the keyboard layout and the control area are displayed. For example, the keyboard layout and the control area may be shown on the display of the user interface system and/or of the host system.

In step 1120, images of the user's hand(s) are captured. For example, the images of the user's hand(s) may be captured by the camera of the user interface system. The order of steps 1110 and 1120 is interchangeable.

In step 1130, finger images, fingertip images and/or feature point images are extracted from the captured images. For example, the finger images, fingertip images and/or feature point images may be extracted by the processor(s) of the user interface system and/or of the host system. The step of extracting the positions of the object may include extracting, from at least one image of the object (for example, the user's hand) in the at least one captured image, the positions of a plurality of fingers, a plurality of fingertips and/or at least one feature point.

In step 1140, the selected region in which the feature point is located is determined. For example, the processor(s) of the user interface system and/or of the host system may determine the selected region of the control area in which the feature point is located.

In step 1150, the keys corresponding to the user's fingers are determined, for example, by the processor(s) of the user interface system and/or of the host system. Key mapping refers to mapping keys to the selected region in which the feature point is located, and/or mapping keys to the user's fingers. As described above for FIG. 6, for example, if the feature point is located in region "4" and the keys "asdfg" correspond to the user's fingers, key mapping means mapping the keys "asdfg" to region "4".

In step 1160, the user's finger taps are interpreted as key strikes, for example, by the processor(s) of the user interface system and/or of the host system.

In step 1170, after the finger taps are interpreted, the typing results and the hand/finger tracking information are transmitted and shown on, for example, the display of the user interface system and/or of the host system.

In step 1180, it is determined whether the user has terminated or interrupted the data input. For example, the user may terminate or interrupt data input by voice control or by pressing a physical or virtual key. If the determination in step 1180 is no, the flow returns to step 1110; if yes, the flow ends.
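
Read as code, the flow of FIG. 11 is a capture-classify loop. A minimal Python sketch of that control flow follows, reusing locate_region and keys_for_region from the earlier sketches; every other name (the ui object and its methods) is a hypothetical stand-in for the corresponding numbered step, not an API defined by the patent.

```python
def input_loop(ui):
    """Control flow of FIG. 11 (steps 1110-1180), written as a loop."""
    while True:
        ui.show_keyboard_and_control_area()                  # step 1110
        frame = ui.capture_frame()                           # step 1120
        fingers, tips, feature = ui.extract_features(frame)  # step 1130
        region = locate_region(feature.x, feature.y,
                               ui.regions)                   # step 1140
        if region is not None:
            subgroup = keys_for_region(region)               # step 1150
            tap = ui.detect_tap(fingers)                     # step 1160
            if tap is not None and subgroup:
                key = subgroup[tap]
                ui.show_result_and_tracking(key, tips)       # step 1170
        if ui.user_terminated():                             # step 1180
            break
```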

FIG. 12 illustrates a system for receiving user input according to yet another embodiment of the present disclosure. As shown in FIG. 12, during data input the user holds a mobile device 1210 (for example but not limited to a mobile phone or a tablet computer). The mobile device 1210 includes sensors, a processor, a memory and a display similar or identical to those of FIG. 1. During user input, the mobile device 1210 captures and senses one of the user's hands. The captured hand images, the typing results, the virtual keyboard configuration and the control area are shown on the mobile device 1210. The operation of inputting data into the mobile device 1210 is the same as or similar to that discussed above and is therefore omitted here.

The present disclosure also provides a program storage medium storing a computer program that causes an electronic device to perform the following steps: displaying a virtual keyboard configuration and a control area, the virtual keyboard configuration comprising a plurality of subgroup keys, each subgroup key corresponding to one of a plurality of regions of the control area; extracting an image of an object from at least one captured image to identify the position of a characteristic point of the object; determining the selected region, among the regions of the control area, in which the characteristic point is located; determining a plurality of keys corresponding to the selected region; and interpreting movement of the object as input data to a user interface system.

In addition, in one embodiment, the computer system further includes at least one processor configured to execute the related control programs.

In another embodiment, the computer system may be realized as a circuit design implemented on a chip. Specifically, the embodiments may be implemented as circuit designs, including integrated circuits and layouts, using a hardware description language such as Verilog or VHDL. The circuits may be designed in any of a variety of hardware description languages, and an integrated circuit manufacturer may implement them as application-specific integrated circuits (ASICs) or customer-oriented integrated circuits.

As described above, because the regions of the control area are very close to one another, a feature of the disclosed embodiments is that hand travel distance can be minimized when typing in mid-air. By contrast, when typing on a traditional keyboard, the hand travel distance is typically long, so the user may spend more time typing.

In summary, while the present application has been disclosed above by way of embodiments, these embodiments are not intended to limit the present application. Those skilled in the art to which this application pertains may make various changes and modifications without departing from the spirit and scope of the application. Therefore, the scope of protection of the present application shall be defined by the appended claims.

Claims (18)

1. A method for receiving user input, comprising the following steps:
displaying a virtual keyboard configuration and a control area, the virtual keyboard configuration including a plurality of subgroups of keys, each subgroup of keys corresponding to one of a plurality of regions of the control area;
extracting a plurality of positions of an object from at least one captured image to identify the position of a feature point of the object;
determining a selected region, being the region, among the regions of the control area, in which the position of the feature point is located;
determining a plurality of keys corresponding to the selected region; and
interpreting a movement of the object as input data to a user interface system.
2. The input method as claimed in claim 1, wherein the virtual keyboard configuration is displayed according to the respective positions of the subgroups of keys in the virtual keyboard configuration and the respective positions of the regions in the control area, each subgroup of keys of the virtual keyboard configuration corresponding to one of the regions of the control area.
3. The input method as claimed in claim 1, wherein the step of extracting the plurality of positions of the object includes:
extracting, from the at least one captured image, at least one image of a plurality of fingers of a user, at least one image of a plurality of fingertips, and/or at least one image of the feature point of at least one hand.
4. The input method as claimed in claim 1, wherein the object is a bare hand with or without a marker,
the feature point includes at least one point on the back of at least one hand of a user, and
the at least one point on the back of the at least one hand of the user includes the web between the thumb and index finger or the center of the back of the hand of the user.
5. The input method as claimed in claim 1, wherein the object is one or more gloves including a feature point marker and a plurality of nail-shaped markers; and
the step of extracting the plurality of positions of the object includes:
extracting, from the at least one captured image, at least one position of the feature point marker and a plurality of positions of the nail-shaped markers.
6. The input method as claimed in claim 1, further comprising the step of:
displaying an active identifier around the keys corresponding to the selected region.
7. The input method as claimed in claim 1, further comprising the step of:
displaying at least one of an input result, tracking information, and the at least one captured image of the object.
8. The input method as claimed in claim 1, wherein the regions of the control area and the subgroups of keys of the virtual keyboard configuration are displayed in combination, or the respective positions of the regions of the control area and the respective positions of the subgroups of keys of the virtual keyboard configuration coincide.
9. A system for receiving user input, the system comprising:
a display for displaying a virtual keyboard configuration and a control area, the virtual keyboard configuration including a plurality of subgroups of keys, each subgroup of keys corresponding to one of a plurality of regions of the control area;
one or more sensors for sensing a movement of an object; and
a computing system coupled to the one or more sensors and the display, the computing system extracting a plurality of positions of the object from at least one captured image to identify the position of a feature point of the object, determining a selected region, being the region, among the regions of the control area, in which the position of the feature point is located, determining a plurality of keys corresponding to the selected region, and interpreting the movement of the object as input data to the system.
10. The input system as claimed in claim 9, wherein the computing system controls the display to display the virtual keyboard configuration according to the respective positions of the subgroups of keys in the virtual keyboard configuration and the respective positions of the regions in the control area, each subgroup of keys corresponding to one of the regions of the control area.
11. The input system as claimed in claim 9, wherein, when extracting the positions of the object, the computing system extracts, from the at least one captured image, at least one of: at least one image of a plurality of fingers of a user, at least one image of a plurality of fingertips, and at least one image of the feature point of at least one hand of the user.
12. The input system as claimed in claim 9, wherein the object is a bare hand with or without a marker,
the feature point includes at least one point on the back of at least one hand of a user; and
the at least one point on the back of the at least one hand of the user includes the web between the thumb and index finger or the center of the back of the hand of the user.
13. The input system as claimed in claim 9, wherein
the object is one or more gloves including a feature point marker and a plurality of nail-shaped markers; and
the computing system extracts, from the at least one captured image, at least one position of the feature point marker and a plurality of positions of the nail-shaped markers.
14. The input system as claimed in claim 9, wherein the computing system controls the display to display an active identifier around the keys corresponding to the selected region.
15. The input system as claimed in claim 9, wherein the computing system controls the display to display at least one of an input result, tracking information, and the at least one captured image of the object.
16. The input system as claimed in claim 9, wherein the computing system controls the display such that the regions of the control area and the subgroups of keys of the virtual keyboard configuration are displayed in combination, or such that the respective positions of the regions of the control area and the respective positions of the subgroups of keys of the virtual keyboard configuration coincide.
17. A program storage medium for receiving user input, wherein the stored computer program causes an electronic device to execute the following steps:
displaying a virtual keyboard configuration and a control area, the virtual keyboard configuration including a plurality of subgroups of keys, each subgroup of keys corresponding to one of a plurality of regions of the control area;
extracting a plurality of positions of an object from at least one captured image to identify the position of a feature point of the object;
determining a selected region, being the region, among the plurality of regions of the control area, in which the position of the feature point is located;
determining a plurality of keys corresponding to the selected region; and
interpreting the movement of the object and the keys corresponding to the selected region as input data to a user interface system.
18. The program storage medium as claimed in claim 17, wherein the stored computer program causes the electronic device to execute the method as claimed in any one of claims 2, 3, 4, 5, 6, 7, and 8.
CN201410527952.9A 2013-12-02 2014-10-09 System and method for receiving user input, program storage medium and program therefor Active CN104679270B (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US201361910932P 2013-12-02 2013-12-02
US61/910,932 2013-12-02
US14/303,438 US9857971B2 (en) 2013-12-02 2014-06-12 System and method for receiving user input and program storage medium thereof
US14/303,438 2014-06-12
TW103127626 2014-08-12
TW103127626A TWI525477B (en) 2013-12-02 2014-08-12 System and method for receiving user input and program storage medium thereof

Publications (2)

Publication Number Publication Date
CN104679270A CN104679270A (en) 2015-06-03
CN104679270B true CN104679270B (en) 2018-05-25

Family

ID=53314461

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410527952.9A Active CN104679270B (en) 2013-12-02 2014-10-09 System and method for receiving user input, program storage medium and program therefor

Country Status (1)

Country Link
CN (1) CN104679270B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1273649A (en) * 1998-06-30 2000-11-15 皇家菲利浦电子有限公司 Fingerless glove for interacting with a data processing system
JP2010277469A (en) * 2009-05-29 2010-12-09 Brother Ind Ltd Input device
CN102063183A (en) * 2011-02-12 2011-05-18 深圳市亿思达显示科技有限公司 Virtual input device of grove type
CN102906623A (en) * 2010-02-28 2013-01-30 奥斯特豪特集团有限公司 Local advertising content on an interactive head-mounted eyepiece

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040001097A1 (en) * 2002-07-01 2004-01-01 Frank Zngf Glove virtual keyboard for baseless typing

Also Published As

Publication number Publication date
CN104679270A (en) 2015-06-03

Legal Events

Code Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant