TWI611340B - Method for determining non-contact gesture and device for the same - Google Patents
Method for determining non-contact gesture and device for the same
- Publication number
- TWI611340B (application TW105116464A)
- Authority
- TW
- Taiwan
- Prior art keywords
- gesture
- image
- contact
- detected
- processor
- Prior art date
Landscapes
- User Interface Of Digital Computer (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a non-contact gesture determination method and a device for the same. An image sensor detects image sensing information while an inertial sensor detects an inertial sensing signal. A processor then determines whether object information is detected and whether an inertial event has occurred, and discards image sensing information captured in the same image frame as the inertial event and/or in adjacent image frames. As a result, only image sensing information unaffected by inertial events is used for gesture determination, improving the accuracy of the recognized gesture.
Description
The invention relates to a non-contact gesture determination method and a device for the same, and more particularly to a method and device that determine non-contact gestures by means of optical image sensing.
With advances in technology, the input methods of electronic products have evolved from traditional physical buttons to virtual touch panels, and more recently to non-contact input. The user no longer needs to touch the electronic device; instead, the device detects the user's mid-air gestures through non-contact sensors, recognizes them, and executes the corresponding commands. Non-contact input is especially valuable for devices with Augmented Reality (AR) functionality, where it makes interaction more intuitive and convenient. Combined with electronic devices, particularly wearable ones, augmented reality enables a wide variety of applications, such as games, teleconferencing, and navigation maps.
A non-contact input system uses an image sensor to capture the moving distance, speed, angle, and similar attributes of the palm or fingers the user waves in the air, determines the corresponding gesture, and triggers the corresponding command. However, whether the electronic device is worn on the body or held in the hand, it easily moves along with the user's own unintentional motions. Relative movement may then occur between the device and a finger or palm that is in fact stationary, and the image sensor will interpret this relative movement as movement of the user's palm or fingers. The resulting misjudged gesture triggers a command the user never intended, which is a serious usability problem.
In view of this, the present invention addresses the misjudgment of non-contact gestures, so as to solve this problem in the prior art.
To achieve the above object, the technical means adopted by the present invention is a non-contact gesture determination method comprising: a. detecting image sensing information within the sensing range of a non-contact sensing device, and detecting an inertial sensing signal of the non-contact sensing device itself; b. determining, according to the inertial sensing signal, whether an inertial event has occurred in the non-contact sensing device, and determining, according to the image sensing information, whether object information is detected; c. when it is determined that no inertial event has occurred and object information is detected, performing gesture determination based on at least one piece of the image sensing information and outputting the corresponding gesture.
Further, in conjunction with the foregoing non-contact gesture determination method, the present invention provides a non-contact sensing device comprising: a processor; an inertial sensor connected to the processor; and at least one image sensor connected to the processor, wherein the processor executes the steps of the non-contact gesture determination method of the present invention described above.
The advantage of the present invention is that, by providing an inertial sensor and determining whether an inertial event has occurred, the output gesture is made independent of inertial events, thereby avoiding misjudged gestures and unintended commands.
The present invention further adopts another technical means: a non-contact gesture determination method applied to a portable electronic device, the portable electronic device comprising a gesture detection unit for detecting a user's gesture operation and a first sensor for detecting motion of the portable electronic device, the method comprising: a. determining, according to the output of the first sensor, that the portable electronic device is in motion; and b. after step a, aborting the triggering of the user's gesture operation.
10‧‧‧processor
11‧‧‧storage unit
20‧‧‧inertial sensor
30‧‧‧image sensor
40‧‧‧storage unit
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 is a block diagram of a first embodiment of the device of the present invention.
Fig. 2 is a block diagram of a second embodiment of the device of the present invention.
Fig. 3 is a flowchart of the method of the present invention.
Fig. 4 is a flowchart of a first embodiment of the method of the present invention.
Fig. 5 is a flowchart of a second embodiment of the method of the present invention.
Fig. 6 is a flowchart of a third embodiment of the method of the present invention.
Fig. 7 is a flowchart of a fourth embodiment of the method of the present invention.
Fig. 8 is a timing diagram of an implementation of the present invention.
Fig. 9 is a flowchart of another method of the present invention.
The technical means adopted by the present invention to achieve its intended objects are further explained below in conjunction with the drawings and the embodiments of the invention.
As shown in Fig. 1, the non-contact sensing device of the present invention comprises a processor 10, an inertial sensor 20 (an Inertial Measurement Unit, IMU), and at least one image sensor 30.
In one embodiment (as shown in Fig. 1), a storage unit 11 is built into the processor 10; in another embodiment (as shown in Fig. 2), the processor 10 is connected to an external storage unit 40.
The inertial sensor 20 is connected to the processor 10 and transmits the detected inertial sensing signal SI to the processor 10. The inertial sensor 20 may be any sensor capable of detecting the motion of an object. In one embodiment, the inertial sensor 20 is an accelerometer (G-sensor); in another embodiment it is a gyroscope; in yet another it is a magnetometer.
The image sensor 30 transmits the detected non-contact image sensing information SG to the processor 10. In one embodiment, the non-contact sensing device of the present invention has several image sensors 30 that capture the image sensing information from different angles and positions, improving image accuracy or serving other image processing applications. In one embodiment, the image sensor 30 is an optical sensor.
As shown in Fig. 3, the non-contact gesture determination method of the present invention comprises the following steps:
Detect the image sensing information SG and the inertial sensing signal SI (S1): the inertial sensor 20 detects the inertial sensing signal SI of the non-contact sensing device worn on the user, and the image sensor 30 detects the image sensing information SG within its sensing range. In one embodiment, step S1 detects the inertial sensing signal SI and the image sensing information SG within a single image frame.
Determine, according to the image sensing information SG, whether object information is detected, and determine, according to the inertial sensing signal SI, whether an inertial event has occurred (S2): the processor 10 determines from the inertial sensing signal SI whether an inertial event has occurred, and from the image sensing information SG whether object information is detected. If no inertial event has occurred and object information is detected, the method proceeds to step S3; if either condition is not met, it returns to step S1. In one embodiment, both conditions (no inertial event, object information detected) are evaluated within the same image frame. Whether an inertial event has occurred can be decided in several ways; in one embodiment, the magnitude of the inertial sensing signal SI is compared with a threshold value, and an inertial event is deemed to have occurred when the magnitude exceeds the threshold. Whether object information is detected can likewise be decided in several ways; in one embodiment, the image sensing information SG is compared with specific features of preset objects (such as a fingertip or a palm) to determine whether it matches one of the preset objects, and if so, object information is deemed present.
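The two checks of step S2 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the threshold value, the feature representation, and the function names are all assumptions, since the patent specifies only "compare the signal magnitude with a threshold" and "compare with preset object features".

```python
import math

# Hypothetical values: the patent names neither a concrete threshold nor units.
INERTIA_THRESHOLD = 1.5
PRESET_FEATURES = {"fingertip", "palm"}

def inertial_event_occurred(accel_xyz, threshold=INERTIA_THRESHOLD):
    """S2, inertia branch: an inertial event is deemed to have occurred
    when the magnitude of the inertial sensing signal SI exceeds the threshold."""
    magnitude = math.sqrt(sum(a * a for a in accel_xyz))
    return magnitude > threshold

def object_detected(image_features, presets=PRESET_FEATURES):
    """S2, image branch: object information is deemed present when features
    extracted from the image sensing information SG match a preset object."""
    return any(f in presets for f in image_features)

def step_s2(accel_xyz, image_features):
    """Returns True when the frame may proceed to step S3:
    no inertial event AND object information detected."""
    return (not inertial_event_occurred(accel_xyz)
            and object_detected(image_features))
```

In practice the feature extraction would be an image processing pipeline; here it is reduced to a set of labels purely to show the decision logic.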
When it is determined that no inertial event has occurred and object information is detected, perform gesture determination and output the corresponding gesture (S3): when the processor 10 determines that no inertial event has occurred and that object information is detected, it performs gesture determination based on at least one piece of the image sensing information and outputs the corresponding gesture.
In one embodiment, referring to Fig. 4, step S2 comprises steps S21A and S22A: the processor 10 determines from the inertial sensing signal SI whether an inertial event has occurred (S21A); if so, it returns to step S1A to continue detecting the image sensing information SG and the inertial sensing signal SI; if not, it proceeds to step S22A. The processor 10 then determines from the image sensing information SG whether object information is detected (S22A); if not, it returns to step S1A; if so, it proceeds to step S30A. Step S3 comprises steps S30A, S31A, and S32A: the processor 10 marks in the storage unit 11, 40 the image frame for which step S22A returned yes (S30A); the processor 10 then determines from the image sensing information in the marked image frames whether a gesture is satisfied (S31A); if so, it outputs the corresponding gesture (S32A); if not, it returns to step S1A.
In another embodiment, referring to Fig. 5, step S2 comprises steps S21B, S20B, and S22B: the processor 10 determines from the inertial sensing signal SI whether an inertial event has occurred (S21B); if so, it first marks the image frame in which the inertial event occurred and records it in the storage unit 11, 40 (S20B), then returns to step S1B to continue detecting the image sensing information SG and the inertial sensing signal SI; if not, it proceeds to step S22B. The processor 10 then determines from the image sensing information SG whether object information is detected (S22B); if not, it returns to step S1B; if so, it proceeds to step S31B. Step S3 comprises steps S31B and S32B: the processor 10 determines from the image sensing information in the unmarked image frames whether a gesture is satisfied (S31B); if so, it outputs the corresponding gesture (S32B); if not, it returns to step S1B.
In one embodiment, referring to Fig. 6, step S2 comprises steps S21C and S22C: the processor 10 determines from the image sensing information SG whether object information is detected (S21C); if not, it returns to step S1C to continue detecting the image sensing information SG and the inertial sensing signal SI; if so, it proceeds to step S22C. The processor 10 then determines from the inertial sensing signal SI whether an inertial event has occurred (S22C); if so, it returns to step S1C; if not, it proceeds to step S30C. Step S3 comprises steps S30C, S31C, and S32C: the processor 10 marks and records in the storage unit 11, 40 the image frame for which step S22C returned no (S30C); the processor 10 then determines from the image sensing information in the marked image frames whether a gesture is satisfied (S31C); if so, it outputs the corresponding gesture (S32C); if not, it returns to step S1C.
In another embodiment, referring to Fig. 7, step S2 comprises steps S21D, S20D, and S22D: the processor 10 determines from the image sensing information SG whether object information is detected (S21D); if so, it proceeds to step S22D; if not, it returns to step S1D to continue detecting the image sensing information SG and the inertial sensing signal SI. The processor 10 then determines from the inertial sensing signal SI whether an inertial event has occurred (S22D); if so, it first marks the image frame in which the inertial event occurred and records it in the storage unit 11, 40 (S20D), then returns to step S1D; if not, it proceeds to step S31D. Step S3 comprises steps S31D and S32D: the processor 10 determines from the image sensing information in the unmarked image frames whether a gesture is satisfied (S31D); if so, it outputs the corresponding gesture (S32D); if not, it returns to step S1D.
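The four per-frame flows of Figs. 4 to 7 differ only in (a) which condition is checked first and (b) whether the good frames or the bad frames are marked. A single sketch can capture all four variants; the function name and boolean flags are illustrative assumptions, not terminology from the patent.

```python
def handle_frame(has_inertia, has_object, marked, frame_idx,
                 inertia_first=True, mark_good_frames=True):
    """One pass of step S2 for a single image frame.

    inertia_first=True  -> check the inertial signal first (Figs. 4 and 5)
    inertia_first=False -> check the object information first (Figs. 6 and 7)
    mark_good_frames=True  -> mark frames WITH object info (Figs. 4 and 6)
    mark_good_frames=False -> mark frames WITH inertial events (Figs. 5 and 7)

    Returns True when the frame may enter step S3 (gesture judgment)."""
    if inertia_first:                        # Figs. 4 and 5
        if has_inertia:
            if not mark_good_frames:
                marked.add(frame_idx)        # S20B (Fig. 5): mark the bad frame
            return False                     # back to S1
        if not has_object:
            return False                     # back to S1
    else:                                    # Figs. 6 and 7
        if not has_object:
            return False                     # back to S1, no inertia check needed
        if has_inertia:
            if not mark_good_frames:
                marked.add(frame_idx)        # S20D (Fig. 7): mark the bad frame
            return False                     # back to S1
    if mark_good_frames:
        marked.add(frame_idx)                # S30A/S30C: mark the good frame
    return True                              # proceed to step S3
```

The early returns make the load-reduction argument of the next paragraph concrete: whichever check comes first can short-circuit the other.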
In the embodiments of Figs. 4 and 5, it is first determined from the inertial sensing signal SI whether an inertial event has occurred. If an inertial event has occurred, that piece of image sensing information SG has been affected by the event and cannot truly reflect the manipulation or gesture the user intends, so there is no need to further determine whether object information is detected. Conversely, in the embodiments of Figs. 6 and 7, it is first determined from the image sensing information SG whether object information is detected; if there is none, there is no subsequent need to recognize a gesture, so there is naturally no need to determine whether an inertial event has occurred. Thus, in the embodiments of Figs. 4 to 7, the processor 10 can decide from the first determination result whether to perform the other determination or return directly to step S1, reducing the load on the processor 10 and lowering power consumption.
In the embodiments of Figs. 5 and 7, the image frame in which the inertial event occurred is marked. Because the moments just before and after an inertial event can also affect the correctness of gesture determination, when determining whether the image sensing information in the unmarked image frames satisfies a gesture (S31B) (S31D), the processor 10 excludes both the marked image frames and the image frames adjacent to them, and then determines whether the image sensing information in the remaining image frames, which are unaffected by the inertial event, matches a gesture. In this way, the influence of the inertial event on gesture determination is eliminated even more effectively, improving the correctness of gesture determination.
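The frame-exclusion rule described above can be sketched as a small filter. This is an illustrative sketch assuming zero-based frame indices and one excluded neighbor on each side; the patent says only "adjacent" frames, without fixing how many.

```python
def frames_for_gesture(num_frames, inertia_frames, exclude_adjacent=True):
    """Return the indices of frames whose image sensing information may be
    used for gesture determination. inertia_frames holds the indices of the
    marked frames (those in which an inertial event occurred, Figs. 5 and 7).
    When exclude_adjacent is True, the frames immediately before and after
    each inertial event are discarded as well."""
    excluded = set(inertia_frames)
    if exclude_adjacent:
        for i in inertia_frames:
            excluded.add(i - 1)   # frame just before the inertial event
            excluded.add(i + 1)   # frame just after the inertial event
    return [i for i in range(num_frames) if i not in excluded]
```

With six frames and inertial events in frames 2 and 3 (the F3/F4 situation of Fig. 8), the strict filter keeps only frames 0 and 5 (F1 and F6), while the non-strict filter keeps 0, 1, 4, and 5 (F1, F2, F5, F6), matching the two variants described in the walkthrough below.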
A specific implementation is described below in conjunction with the foregoing embodiments to illustrate the method of the present invention:
Referring to Fig. 8 together with Figs. 4 and 6: several image frames F1 to F6 are produced during a detection period. For image frame F1, when steps (S21A, S22A) or steps (S21C, S22C) determine that object information G1 is detected and no inertial event has occurred, steps S30A, S30C are executed to mark image frame F1, and steps S31A, S31C are then executed to determine whether the image sensing information in the marked image frame F1 matches a gesture. At this point the image sensing information is judged insufficient to match a gesture, so the method returns to steps S1A, S1C to continue detection. Image frame F2 is in the same situation as image frame F1 (only object information G2 is detected and no inertial event occurs), so image frame F2 is likewise marked according to the foregoing steps. When steps S31A, S31C are now executed, the image sensing information in the two marked image frames F1 and F2 is combined to determine whether it matches a gesture; if it does, steps S32A, S32C are executed to output the corresponding gesture; if not, the method returns to steps S1A, S1C to continue detection. When image frame F3 is determined by steps S21A, S22C to involve an inertial event I3, the method returns to steps S1A, S1C to continue detection; when image frame F4 is determined by steps S21A, S22C to involve an inertial event I4, the method likewise returns to steps S1A, S1C to continue detection. Image frame F5 is then in the same situation as image frame F1 (only object information G5 is detected and no inertial event occurs), so image frame F5 is likewise marked according to the foregoing steps. When steps S31A, S31C are executed, the image sensing information in the three marked image frames F1, F2, and F5 is combined to determine whether it matches a gesture; if it does, steps S32A, S32C are executed to output the corresponding gesture; if not, the method returns to steps S1A, S1C to continue detection. In one embodiment, when steps S31A, S31C are executed, the image frames F2 and F5 adjacent to the unmarked image frames F3 and F4 may also be excluded, so that only the image sensing information in image frame F1 is used to determine whether a gesture is matched.
Next, image frame F6 is in the same situation as image frame F1 (only object information G6 is detected and no inertial event occurs), so image frame F6 is likewise marked according to the foregoing steps. When steps S31A, S31C are executed, the image sensing information in the four marked image frames F1, F2, F5, and F6 is combined to determine whether it matches a gesture; if it does, steps S32A, S32C are executed to output the corresponding gesture; if not, the method returns to steps S1A, S1C to continue detection. In one embodiment, when steps S31A, S31C are executed, the image frames F2 and F5 adjacent to the unmarked image frames F3 and F4 may be excluded, so that only the image sensing information in image frames F1 and F6 is used to determine whether a gesture is matched.
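The F1-to-F6 walkthrough above can be replayed as a short simulation. This is a toy sketch: the "gesture is satisfied once four good frames have accumulated" criterion is an assumption chosen only so the run terminates at F6 as in the example, not a rule stated by the patent.

```python
def run_detection(frames, gesture_min_frames=4):
    """Replay the Fig. 8 timeline. Each frame is ('object', Gx) when only
    object information is detected, or ('inertia', Ix) when an inertial
    event occurs. Good frames are marked and accumulated; a gesture is
    (hypothetically) deemed matched once gesture_min_frames good frames
    have accumulated."""
    marked = []
    for label, info in frames:
        if label == 'inertia':
            continue                         # S21/S22: inertial event, back to S1
        marked.append(info)                  # S30: mark the frame with object info
        if len(marked) >= gesture_min_frames:  # S31: gesture satisfied?
            return marked                    # S32: output gesture from these frames
    return None
```

Feeding in the Fig. 8 timeline (G1, G2, I3, I4, G5, G6) accumulates exactly the marked frames F1, F2, F5, F6 that the walkthrough combines for the final gesture judgment.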
Referring to Fig. 8 together with Figs. 5 and 7: several image frames F1 to F6 are produced during a detection period. For image frame F1, when steps (S21B, S22B) or steps (S21D, S22D) determine that object information G1 is detected and no inertial event has occurred, steps S31B, S31D are executed to determine whether the image sensing information in the unmarked image frame F1 matches a gesture; at this point it is judged insufficient to match a gesture, so the method returns to steps S1B, S1D to continue detection. Image frame F2 is in the same situation as image frame F1 (only object information G2 occurs and no inertial event occurs), so steps S31B, S31D are likewise executed according to the foregoing steps, combining the image sensing information in the two unmarked image frames F1 and F2 to determine whether it matches a gesture; if it does, steps S32B, S32D are executed to output the corresponding gesture; if not, the method returns to steps S1B, S1D to continue detection. When image frame F3 is determined by steps S21B, S22D to involve an inertial event I3, image frame F3 is marked and the method returns to steps S1B, S1D to continue detection; when image frame F4 is determined by steps S21B, S22D to involve an inertial event I4, image frame F4 is marked and the method returns to steps S1B, S1D to continue detection. Image frame F5 is then in the same situation as image frame F1 (only object information G5 is detected and no inertial event occurs), so steps S31B, S31D are likewise executed, combining the image sensing information in the three unmarked image frames F1, F2, and F5 to determine whether it matches a gesture; if it does, steps S32B, S32D are executed to output the corresponding gesture; if not, the method returns to steps S1B, S1D to continue detection. In one embodiment, when steps S31B, S31D are executed, the image frames F2 and F5 adjacent to the marked image frames F3 and F4 may also be excluded, so that only the image sensing information in image frame F1 is used to determine whether a gesture is matched.
Next, image frame F6 is in the same situation as image frame F1 (only object information G6 is detected and no inertial event occurs), so when steps S31B, S31D are likewise executed, the image sensing information in the four unmarked image frames F1, F2, F5, and F6 is combined to determine whether it matches a gesture; if it does, steps S32B, S32D are executed to output the corresponding gesture; if not, the method returns to steps S1B, S1D to continue detection. In one embodiment, when steps S31B, S31D are executed, the image frames F2 and F5 adjacent to the marked image frames F3 and F4 may be excluded, so that only the image sensing information in image frames F1 and F6 is used to determine whether a gesture is matched.
In summary, the device and method of the present invention effectively exclude the influence of inertial events (vibration, movement, shaking, and the like unintentionally produced by the user) on non-contact gesture determination, and determine non-contact gestures from image sensing information unaffected by inertial events, thereby improving the correctness of the determined non-contact gestures.
In another embodiment, referring to Fig. 9, the method of the present invention is applied to a portable electronic device comprising a gesture detection unit for detecting a user's gesture operation and a first sensor for detecting motion of the portable electronic device itself. The method comprises: determining, according to the output of the first sensor, that the portable electronic device is in motion (S1E); and, after step S1E, aborting the triggering of the user's gesture operation (S2E). The portable electronic device may be a non-contact sensing device; the gesture detection unit may comprise one or more of the aforementioned image sensors 30; and the first sensor may comprise one or more of the aforementioned inertial sensors 20. Determining that the portable electronic device is in motion may mean that the magnitude of the sensing signal detected by the first sensor exceeds a preset threshold; aborting the triggering of the user's gesture operation may mean discarding the detected gesture operation.
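The Fig. 9 flow can be sketched as a small controller. The class name, the threshold value, and the event-callback interface are all illustrative assumptions; the patent specifies only that motion above a preset threshold causes the detected gesture operation to be discarded.

```python
class GestureController:
    """Sketch of the Fig. 9 flow: when the first sensor reports that the
    portable electronic device itself is moving (S1E), the pending gesture
    operation is discarded rather than triggered (S2E)."""

    MOTION_THRESHOLD = 2.0   # hypothetical preset threshold

    def __init__(self):
        self.pending_gesture = None

    def on_gesture_detected(self, gesture):
        # Gesture detection unit (e.g. image sensors 30) reports a gesture.
        self.pending_gesture = gesture

    def on_sensor_output(self, magnitude):
        # S1E: the device is judged to be in motion when the sensed
        # magnitude exceeds the preset threshold.
        if magnitude > self.MOTION_THRESHOLD:
            # S2E: abort triggering by discarding the detected gesture.
            self.pending_gesture = None

    def trigger(self):
        # Trigger (and consume) the pending gesture operation, if any.
        gesture, self.pending_gesture = self.pending_gesture, None
        return gesture
```

A gesture detected while the device is still is triggered normally; one detected around a motion spike is dropped, which is the behavior the embodiment describes.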
The foregoing are merely embodiments of the present invention and impose no formal limitation on it. Although the invention has been disclosed through the above embodiments, they are not intended to limit it; any person of ordinary skill in the art may, without departing from the scope of the technical solution of the invention, make minor changes or modifications to the disclosed technical content to produce equivalent embodiments. Any simple modification, equivalent change, or refinement of the above embodiments made according to the technical essence of the invention, without departing from the content of its technical solution, still falls within the scope of the technical solution of the invention.
Claims (24)
Priority Applications (2)
- CN201610464402.6A / CN106560766A (priority 2015-10-04, filed 2016-06-23): Non-contact gesture judgment method and device
- US15/262,315 / US10558270B2 (priority 2015-10-04, filed 2016-09-12): Method for determining non-contact gesture and device for the same
Applications Claiming Priority (2)
- US201562236963P (2015-10-04)
- US62/236,963 (2015-10-04)
Publications (2)
- TW201714075A (published 2017-04-16)
- TWI611340B (published 2018-01-11)
Family
- ID=59256593
Family Applications (1)
- TW105116464A / TWI611340B (filed 2016-05-26): Method for determining non-contact gesture and device for the same
Country Status (1)
- TW: TWI611340B (en)
Citations (4)
- US20110227820A1 (priority 2010-02-28, published 2011-09-22, Osterhout Group, Inc.): Lock virtual keyboard position in an augmented reality eyepiece
- US20120092328A1 (priority 2010-10-15, published 2012-04-19, Jason Flaks): Fusing virtual content into real content
- TWI437464B (priority 2011-09-01, published 2014-05-11, Ind Tech Res Inst): Head mount personal computer and interactive system using the same
- TWI464640B (priority 2012-04-03, published 2014-12-11, Wistron Corp): Gesture sensing apparatus and electronic system having gesture input function
Also Published As
- TW201714075A (2017-04-16)