CN110109599A - Method for interaction between a user and a stylus, tap-event classification system, and stylus product - Google Patents
- Publication number
- CN110109599A (application number CN201810101597.7A)
- Authority
- CN
- China
- Prior art keywords
- stylus
- tapping
- neural network
- acceleration signal
- deep neural
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/0346—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/0354—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
- G06F3/03545—Pens or stylus
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/041—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Biophysics (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Biomedical Technology (AREA)
- Health & Medical Sciences (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The present disclosure proposes a novel way to operate a stylus product: signals from an inertial measurement unit (IMU) are used to compute the tilt angle of the stylus product, and the acceleration signals measured when a finger taps the stylus are collected to train a deep neural network as a tap classifier. Combining the tilt angle with the tap classifier lets the user interact with peripheral devices (such as a touch screen) by rotating and tapping the stylus product. The present disclosure also provides a tap classification system and a stylus product.
Description
Technical Field
The present disclosure relates to human-computer interaction technology, and in particular to a method for interaction between a user and a stylus, a system for classifying the tap events a user performs on a stylus, and a stylus product.
Background
As a pointing tool, a stylus is usually used together with a touch screen to select objects displayed on the screen. With higher screen resolutions and improvements in stylus design, styluses are also commonly used as writing tools or as paintbrushes. Drawing tablets and writing tablets of all grades likewise require a stylus. However, the functionality of current styluses remains limited, so developing a new generation of stylus with richer functions and better product competitiveness has become an important goal.
Summary of the Invention
The purpose of the present invention is to provide a method for interaction between a user and a stylus, a system for classifying the tap events a user performs on a stylus, and a stylus product, thereby offering a novel way of interacting.
To achieve the above purpose, one aspect of the present invention provides a method for interaction between a user and a stylus. The method includes: using a sensor to sense the various tap events generated when a user taps a stylus, measuring a number of acceleration signals; sampling each acceleration signal to obtain a number of feature values for it; taking the feature values of one acceleration signal, together with a classification label recorded according to the type of the tap event that produced that signal, as one sample, and generating a sample set containing a number of such samples; feeding the feature values of a sample as input, with a freely chosen set of weighting parameters as adjustable parameters, into a deep neural network to obtain a predicted classification label; adjusting the set of weighting parameters with a backpropagation algorithm according to the error between the predicted classification label and the true classification label of the sample; and reading out the samples of the sample set in mini-batches to train the deep neural network, fine-tuning the weighting parameters to determine an optimized set of weighting parameters.
Another aspect of the present invention provides a system for classifying the tap events a user performs on a stylus. The system includes: a stylus with a built-in sensor for sensing the various tap events generated when the user taps the stylus, measuring a number of acceleration signals; and a computer device coupled to the stylus. The computer device includes a processor that receives the acceleration signals from the sensor, and a memory connected to the processor. The memory contains program instructions executable by the processor, and the processor executes them to perform a method that includes: sampling each acceleration signal to obtain a number of feature values for it; taking the feature values of one acceleration signal, together with a classification label recorded according to the type of the corresponding tap event, as one sample, and generating a sample set containing a number of such samples; feeding the feature values of a sample as input, with a freely chosen set of weighting parameters as adjustable parameters, into a deep neural network to obtain a predicted classification label; adjusting the set of weighting parameters with a backpropagation algorithm according to the error between the predicted classification label and the true classification label of the sample; and reading out the samples of the sample set in mini-batches to train the deep neural network, fine-tuning the weighting parameters to determine an optimized set of weighting parameters.
A further aspect of the present invention provides a stylus product. The stylus product includes: a sensor for sensing the acceleration signal produced by a tap operation on the stylus product, and for computing the tilt angle of the stylus product through a fusion algorithm; a controller coupled to the sensor, in which a deep neural network corresponding to the deep neural network of the aforementioned method is deployed, the controller using that corresponding network together with the optimized set of weighting parameters obtained by the aforementioned method as a tap classifier, and feeding the acceleration signal from the sensor into the tap classifier to obtain a predicted tap type; and a wireless transmission module coupled to the controller for transmitting a wireless signal carrying the predicted tap type and the computed tilt angle.
The present disclosure deploys the sensor, its fusion algorithm, and the tap classifier in a stylus product to provide a novel mode of operation. In some embodiments, the user can perform many convenient interactions by rotating and tapping the stylus product without touching the touch screen either directly or indirectly.
To make the above content of the present disclosure easier to understand, preferred embodiments are given below and described in detail with reference to the accompanying drawings.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of a user holding a stylus.
FIG. 2 is a schematic diagram of a tap-event classification system implemented according to an embodiment of the present disclosure.
FIG. 3 is a flowchart of a method for interaction between a user and a stylus implemented according to an embodiment of the present disclosure.
FIG. 4 is a schematic diagram of the deep neural network in an embodiment of the present disclosure.
FIG. 5 is a schematic diagram of a stylus product implemented according to an embodiment of the present disclosure.
FIG. 6 is a flowchart of a method for interaction between a user and a stylus implemented according to an embodiment of the present disclosure.
FIG. 7A is a schematic diagram of an adjustment bar displayed on a screen in an embodiment of the present disclosure.
FIG. 7B is a schematic diagram of a color pie chart displayed on a screen in an embodiment of the present disclosure.
FIG. 7C is a schematic diagram of a coordinate system of the stylus product in an embodiment of the present disclosure.
Detailed Description
To make the purpose, technical solutions, and effects of the present disclosure clearer, the disclosure is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present disclosure; the word "embodiment" in this specification means an instance, example, or illustration, and is not intended to limit the disclosure. In addition, as used in this specification and the appended claims, the article "a" can generally be construed to mean "one or more" unless otherwise specified or clear from context to refer to the singular.
FIG. 1 is a schematic diagram of a user holding a stylus. The user's hand can interact with the stylus 10, for example by changing the tilt angle of the stylus 10 or by tapping the stylus 10 with a finger. The purpose of the present disclosure is to provide a novel mode of interaction that lets the user rotate and tap the stylus 10 to interact with peripheral devices (such as a touch screen).
The present disclosure combines sensor fusion technology with a tap classifier obtained through deep learning, so that the user can perform various operations by changing the tilt angle of the stylus 10 and by tapping it. Sensor fusion can use a six-axis or nine-axis inertial measurement unit (IMU) and a fusion algorithm (for example, Madgwick's algorithm or a Kalman filter) to compute the tilt angle of the stylus 10. In addition, the tap events generated when the user taps the stylus 10 are learned and classified to produce a tap classifier, which can then categorize the way the user taps the stylus 10.
The types of tilt-angle change may include at least one of a roll angle (rotation of the stylus 10 about the X axis), a pitch angle (rotation about the Y axis), and a yaw angle (rotation about the Z axis); the right-hand rule determines the positive and negative directions of rotation (see FIG. 7C).
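As a concrete illustration, the roll and pitch components of the tilt angle can be estimated from a static accelerometer reading alone, using gravity as the reference. This is only a simplified sketch: the fusion algorithms named above (Madgwick, Kalman) additionally use the gyroscope (and magnetometer) to recover yaw and to stay accurate while the stylus is moving.

```python
import math

def tilt_angles(ax, ay, az):
    """Estimate roll (rotation about X) and pitch (rotation about Y), in
    degrees, from one static accelerometer sample (ax, ay, az) by comparing
    it against the gravity vector. Accelerometer-only; no yaw estimate."""
    roll = math.degrees(math.atan2(ay, az))
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
    return roll, pitch

# A stylus held so gravity lies entirely on its +Z axis has zero roll/pitch.
roll, pitch = tilt_angles(0.0, 0.0, 1.0)
```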
The type of a tap event may depend, for example, on how, or how many times, the user's finger taps the stylus 10: a single tap, a double tap, a triple tap, and so on.
Combining the stylus's tilt angle with tap classification enables many application scenarios, for example rotating the stylus to change the stroke color or font size and then tapping the stylus with a finger to confirm or cancel; the configuration can differ across application scenarios. Those skilled in the art will appreciate that the inventive concepts of the present disclosure can also be applied elsewhere. Of course, the mapping between interaction-event types and the operations they trigger can also be defined by the user.
FIG. 2 is a schematic diagram of a tap-event classification system implemented according to an embodiment of the present disclosure. The system includes a stylus 10 and a computer device 40 coupled to it. The stylus 10 can be a capacitive stylus that changes the capacitance on a touch panel (not shown) to produce touch operations. The computer device 40 can be any machine with sufficient computing power, such as a personal computer or a notebook computer. In the present disclosure, classifying the tap events between the user and the stylus 10 first requires collecting tap events: various actions are deliberately performed on the stylus 10, the signals corresponding to the tap events are transmitted to the computer device 40, and the computer device 40 learns from them with a deep neural network.
As shown in FIG. 2, the stylus 10 includes at least one sensor 20. The sensor 20 can be a (nine-axis) IMU, for example a combination of a three-axis accelerometer, a three-axis gyroscope, and a three-axis magnetometer. Depending on the application, the sensor 20 can also be a combination of a (three-axis) accelerometer and a (three-axis) gyroscope. The sensor 20 can be placed at any position inside the stylus 10.
The computer device 40 receives the acceleration signals generated by the sensor 20 through an interface and feeds them into the deep neural network for classification learning. After tap events are generated by hand, the type of each tap event can also be entered into the computer device 40 for supervised learning. As shown in FIG. 2, the computer device 40 includes a processor 41 and a memory 42. The processor 41 receives the acceleration signals from the sensor 20; the memory 42, connected to the processor 41, contains program instructions executable by the processor 41, which the processor executes to carry out the computations of the deep neural network. The computer device 40 can also run these computations on a GPU or TPU to speed them up.
FIG. 3 is a flowchart of a method for interaction between a user and a stylus implemented according to an embodiment of the present disclosure. Referring to FIG. 3 together with FIG. 2, the method includes the following steps S31 to S36, which correspond to the training process of the tap classifier.
Step S31: use the sensor 20 to sense the various tap events generated when the user taps the stylus 10, measuring a number of acceleration signals. In this step, the stylus 10 is deliberately tapped to produce various tap events, and the sensor 20 inside the stylus 10 senses them and generates acceleration signals. In the present disclosure, the number of sensors 20 is not limited to one; there can be several, and they can be placed anywhere inside the stylus 10.
In one embodiment, the sensor 20 can be implemented as a nine-axis IMU. The acceleration it detects is a function of time with three directional components, each of which can be projected into frequency space with a Fourier transform. Specifically, the method can further include a step of converting each acceleration signal from its time-domain form into frequency space.
After conversion into frequency space, the low-frequency DC component and the high-frequency noise can be filtered out, so that the classification result is not affected by gravitational acceleration or noise. Specifically, the method can further include a step of filtering each acceleration signal to remove its high-frequency and low-frequency parts.
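The two preprocessing steps above, a Fourier transform followed by removal of the DC component and high-frequency noise, can be sketched as follows. The sampling rate and cut-off frequencies are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def bandpass_spectrum(accel_window, fs=1000.0, low_hz=5.0, high_hz=300.0):
    """Project a window of 3-axis acceleration samples (shape (N, 3)) into
    frequency space with an FFT, then zero out the low-frequency bins
    (dominated by the gravity/DC component) and the high-frequency bins
    (dominated by noise)."""
    spec = np.abs(np.fft.rfft(accel_window, axis=0))        # (N//2 + 1, 3) magnitudes
    freqs = np.fft.rfftfreq(accel_window.shape[0], d=1.0 / fs)
    mask = (freqs >= low_hz) & (freqs <= high_hz)
    return spec * mask[:, None]

rng = np.random.default_rng(0)
window = rng.normal(size=(256, 3)) + 9.8                    # noise plus a gravity offset
spec = bandpass_spectrum(window)
```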
In another embodiment, the sensor 20 can be implemented as a three-axis accelerometer plus a three-axis gyroscope. The three-axis accelerometer measures the three acceleration components of the stylus 10 moving in three-dimensional space, yielding the acceleration signal.
Step S32: sample each acceleration signal to obtain a number of feature values for it. In this step, each acceleration signal generated by the sensor 20 is sampled, for example at fixed frequency intervals in frequency space, producing a number of data points. These data points are the feature values and, after normalization, serve as training data for the deep neural network.
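A minimal sketch of this sampling-and-normalization step, assuming 32 feature values per axis (the actual feature count is not specified in the patent):

```python
import numpy as np

def extract_features(spectrum, n_features=32):
    """Down-sample a one-axis magnitude spectrum to a fixed number of
    feature values by picking evenly spaced frequency bins, then min-max
    normalize so every sample enters the network on the same scale."""
    idx = np.linspace(0, len(spectrum) - 1, n_features).astype(int)
    feats = spectrum[idx].astype(float)
    span = feats.max() - feats.min()
    return (feats - feats.min()) / span if span > 0 else np.zeros_like(feats)

# Example: feature values from the spectrum of a synthetic one-axis signal.
signal = np.sin(np.linspace(0.0, 20.0, 256))
features = extract_features(np.abs(np.fft.rfft(signal)))
```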
Step S33: take the feature values of one acceleration signal, together with the classification label recorded according to the type of the tap event that produced that signal, as one sample, and generate a sample set containing a number of such samples. In this step, an acceleration signal measured by the sensor 20 and the type of its corresponding tap event form one record, i.e. one sample, and a number of samples constitute the sample set. Specifically, a sample contains the feature values of one acceleration signal and the classification label corresponding to that signal.
The sample set can be divided into a training set and a test set: the training set is used to train the deep neural network, and the test set is used to measure the classification accuracy of the trained network model.
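Building the sample set and splitting it might look like the following sketch, with random arrays standing in for the real measurements and an assumed 80/20 split ratio (the patent specifies neither):

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-ins for step S33: each sample pairs the feature values of one
# acceleration signal with the classification label of the tap event that
# produced it (e.g. 0 = single tap, 1 = double tap, 2 = triple tap).
features = rng.normal(size=(500, 32))
labels = rng.integers(0, 3, size=500)

# Shuffle once, then split into a training set and a test set.
perm = rng.permutation(len(features))
cut = int(0.8 * len(features))
train_idx, test_idx = perm[:cut], perm[cut:]
X_train, y_train = features[train_idx], labels[train_idx]
X_test, y_test = features[test_idx], labels[test_idx]
```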
Step S34: feed the feature values of a sample as input, with a freely chosen set of weighting parameters as adjustable parameters, into a deep neural network to obtain a predicted classification label. The feature values of a sample obtained in step S33 enter at the input layer, and the network outputs a predicted classification label.
FIG. 4 shows an example of a deep neural network. A deep neural network can generally be divided into an input layer, an output layer, and the learning layers in between; each sample of the sample set enters at the input layer, and the predicted classification label comes out of the output layer. In general, a deep neural network contains many learning layers, often quite a few (for example, 50 to 100), which is what enables deep learning. The network shown in FIG. 4 is only illustrative; the deep neural network of the present disclosure is not limited to it.
A deep neural network can contain multiple convolutional layers, batch normalization layers, pooling layers, fully connected layers, rectified linear units (ReLU), a softmax output layer, and so on. The present disclosure can learn with an appropriate number of layers to balance prediction accuracy against computational efficiency; note, however, that too many layers can also degrade accuracy. The network can also contain multiple cascaded sub-networks, each connected to every sub-network after it, as in DenseNet (Dense Convolutional Network), which improves prediction accuracy through feature re-usage.
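A toy forward pass through a small network of this kind, from input layer through ReLU learning layers to a softmax output layer; the layer sizes are arbitrary stand-ins for the much deeper network described here:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # subtract max for numerical stability
    return e / e.sum(axis=-1, keepdims=True)

def forward(x, params):
    """Forward pass: hidden fully connected layers with ReLU, then a
    softmax output layer producing class probabilities."""
    h = x
    for W, b in params[:-1]:
        h = relu(h @ W + b)
    W, b = params[-1]
    return softmax(h @ W + b)

rng = np.random.default_rng(0)
sizes = [32, 64, 64, 3]                            # 32 feature values in, 3 tap classes out
params = [(rng.normal(scale=0.1, size=(a, b)), np.zeros(b))
          for a, b in zip(sizes[:-1], sizes[1:])]
probs = forward(rng.normal(size=(5, 32)), params)  # 5 samples -> 5 probability rows
```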
Step S35: adjust the set of weighting parameters with a backpropagation algorithm according to the error between the predicted classification label and the true classification label of the sample. The optimization objective of the deep neural network is to minimize the classification error (loss), and the optimization method is backpropagation: the prediction from the output layer is compared with the true value to obtain an error, which is then propagated back layer by layer to correct the parameters of each layer.
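For a softmax output layer trained with cross-entropy loss, the backpropagated output error is simply the predicted probabilities minus the one-hot true label. A single-layer sketch of one such update (shapes, seed, and learning rate are illustrative; a deep network repeats this correction layer by layer):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(16, 8))             # a batch of feature vectors
y = rng.integers(0, 3, size=16)          # true classification labels
W = rng.normal(scale=0.1, size=(8, 3))   # weights of a single softmax layer

def loss_and_probs(W):
    z = X @ W
    e = np.exp(z - z.max(axis=1, keepdims=True))
    p = e / e.sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(len(y)), y]).mean(), p

before, p = loss_and_probs(W)
onehot = np.eye(3)[y]
grad = X.T @ (p - onehot) / len(y)       # output error propagated to the weights
W = W - 0.1 * grad                       # gradient-descent correction
after, _ = loss_and_probs(W)
```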
Step S36: read out the samples of the sample set in mini-batches, train the deep neural network, and fine-tune the weighting parameters to determine an optimized set. Each time a mini-batch of samples is used for training, the weighting parameters are adjusted once; this repeats iteratively until the classification error converges. Finally, the parameter set with the highest prediction accuracy on the test set is chosen as the optimized model parameter set.
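Putting steps S35 and S36 together, a mini-batch training loop for a single softmax layer on toy, well-separated data might look like this; a real implementation would update every layer of the deep network the same way, and the data, epoch count, and learning rate here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: three well-separated "tap" classes of 60 samples each
# (real inputs would be the normalized feature values from step S32).
X = np.concatenate([rng.normal(loc=c, scale=0.5, size=(60, 8))
                    for c in (-2.0, 0.0, 2.0)])
y = np.repeat([0, 1, 2], 60)
W, b = np.zeros((8, 3)), np.zeros(3)

# Mini-batch training: every batch triggers one small adjustment of the
# parameters, repeated over epochs until the classification error converges.
for epoch in range(50):
    perm = rng.permutation(len(X))
    for i in range(0, len(X), 16):
        sl = perm[i:i + 16]
        xb, yb = X[sl], y[sl]
        z = xb @ W + b
        p = np.exp(z - z.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        err = (p - np.eye(3)[yb]) / len(yb)
        W -= 0.1 * xb.T @ err
        b -= 0.1 * err.sum(axis=0)

accuracy = float(((X @ W + b).argmax(axis=1) == y).mean())
```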
FIG. 5 is a schematic diagram of a stylus product implemented according to an embodiment of the present disclosure. As shown in FIG. 5, the stylus product 10' includes one or more sensors 20', a controller 30, and a wireless transmission module 50. The sensor 20' can be placed anywhere inside the stylus product 10'. In one example, the sensor 20' is an inertial measurement unit (IMU) with a built-in three-axis accelerometer and three-axis gyroscope, sensing the acceleration and angular velocity of the stylus product 10'. In another example, the sensor 20' is a nine-axis IMU comprising a three-axis accelerometer, a three-axis gyroscope, and a three-axis magnetometer.
On the one hand, the sensor 20' senses tap operations performed on the stylus product 10' and produces acceleration signals. On the other hand, the sensor 20' computes the tilt angle of the stylus product through a fusion algorithm (for example, Madgwick's algorithm or a Kalman filter). The controller 30 is coupled to the sensor 20' and receives the acceleration signals it produces and the tilt angle it computes.
The controller 30 classifies the tap events produced when the user taps the stylus product 10', obtaining a predicted tap type. For example, the controller 30 carries a deep neural network identical or corresponding to the one used in steps S34 to S36 above and stores the optimized set of weighting parameters obtained in step S36. That network and parameter set together serve as a tap classifier deployed in the stylus product 10'. The controller 30 feeds the acceleration signal from the sensor 20' into the tap classifier and obtains the classification label of the corresponding tap event, i.e. the predicted tap type. In this way, the stylus product 10' performs classification prediction of tap events.
Specifically, the program code of the deep neural network and the optimized weight parameter set can be written into the firmware of the controller 30, and the controller 30 executes the deep neural network algorithm to predict the type of the tap event.
The wireless transmission module 50, connected or coupled to the controller 30, receives the tap-event type predicted by the controller 30 and the inclination angle of the stylus product 10' from the sensor 20', and transmits a wireless signal carrying the inclination angle and the predicted tap type. For example, the wireless transmission module 50 can be a Bluetooth module that communicates with a tablet computer (or notebook computer) and transmits this information to the tablet via a Bluetooth signal.
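The disclosure does not define the payload format of the wireless signal. One plausible encoding of the predicted tap type plus the tilt angles, assuming a compact little-endian layout (an assumption, not part of the disclosure), is:

```python
import struct

def pack_notification(tap_type, pitch_deg, roll_deg):
    """Pack the predicted tap type and tilt angles into a compact payload
    suitable for a wireless notification: u8 tap type + two f32 angles,
    little-endian (9 bytes total)."""
    return struct.pack("<Bff", tap_type, pitch_deg, roll_deg)

def unpack_notification(payload):
    """Host-side decoding of the same 9-byte payload."""
    return struct.unpack("<Bff", payload)
```

The tablet-side software would decode each notification and dispatch the corresponding UI action.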
Referring to FIG. 6 in conjunction with FIG. 5, the method further includes the following steps S38 to S41, which correspond to the process of deploying the tap classifier into the stylus product 10' to realize various applications.
Step S38: deploy the deep neural network and the optimized weight parameter set into a stylus product 10' as a tap classifier. In this step, the stylus product 10' carries the tap classifier, which includes a deep neural network that is the same as, or corresponds to, the one used in steps S34 to S36 above, together with the optimized weight parameter set obtained in step S36.
Step S39: receive the acceleration signal generated by a tap operation on the stylus product, and input it into the tap classifier to obtain a predicted tap type. In this step, when the user interacts with the stylus product 10' and generates a tap event, the sensor 20' in the stylus product 10' measures the acceleration signal, which is fed into the tap classifier, and the controller 30 predicts the type of the tap event.
Step S40: compute the inclination angle of the stylus product 10' through a fusion algorithm, using the IMU disposed in the stylus product 10'. In this step, the IMU can be a six-axis or nine-axis IMU, and the inclination angle of the stylus product 10' can be computed by Madgwick's algorithm or a Kalman filter.
Step S41: execute a predetermined operation according to the predicted tap type and the computed inclination angle. In this step, the wireless transmission module 50 in the stylus product 10' transmits the tap-event type predicted by the controller 30 and the inclination angle to, for example, a tablet computer (or notebook computer) through a wireless signal, and software installed on the tablet (or notebook) performs the operation corresponding to the prediction.
In one example application scenario, a single tap on the stylus product can trigger a confirmation operation, while a double tap can trigger a cancel operation.
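On the host side, the single-tap/confirm and double-tap/cancel mapping above could be a simple lookup; the label strings below are hypothetical, not from the disclosure.

```python
def action_for_tap(tap_type):
    """Map a predicted tap-type label to a UI command.
    Unknown labels are ignored rather than raising."""
    return {"single_tap": "confirm", "double_tap": "cancel"}.get(tap_type, "ignore")
```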
In another example application scenario, after the user taps the stylus product, the screen 70 (of the tablet computer) displays an adjustment bar indicating the stroke thickness of the stylus product, as shown in FIG. 7A. When the user then rotates the stylus product, the stroke thickness is changed, and the adjustment process and result can be shown on the adjustment bar. For example, rotating the stylus product clockwise thickens the stroke, and rotating it counter-clockwise thins it. Finally, the user taps the stylus once or twice to confirm or cancel the change of setting.
In another example application scenario, after the user taps the stylus product, the screen 70 (of the tablet computer) displays a color pie chart indicating the stroke color of the stylus product, as shown in FIG. 7B. When the user then rotates the stylus product, a pointer is moved to select the stroke color, and the adjustment process and result can be shown on the color pie chart.
In another example application scenario, after the user taps the stylus product, interactive operations on the stylus can control the scroll bar of a window (e.g., a Microsoft Word window) to scroll the displayed content up and down. For example, when the user rotates the stylus in the positive direction about its X-axis (see FIG. 7C), the content scrolls up; otherwise, the content scrolls down.
In another example application scenario, the user taps the stylus once to enter a control mode, whereupon the touch display device shows the corresponding menu. The user then rotates the stylus about the X-axis shown in FIG. 7C, i.e., performs a roll action; by varying the roll angle, the user can select the stroke color, adjust the pen-tip thickness, select text, or scroll a window. Afterwards, the user taps the stylus once more to confirm the change, or taps twice to cancel it. The above adjustments can also be implemented by changing the pitch angle, and changes in pitch angle can likewise be mapped to other types of operations.
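The roll-angle-to-thickness adjustment described above could, for example, be a linear mapping with clamping. The gain and limits below are illustrative assumptions, not values from the disclosure.

```python
def width_from_roll(base_width, roll_deg, step_per_deg=0.05, lo=0.5, hi=10.0):
    """Map a change in roll angle to a pen-stroke width: positive roll
    (clockwise rotation) thickens the stroke, negative roll thins it.
    The result is clamped to the [lo, hi] range."""
    return min(hi, max(lo, base_width + roll_deg * step_per_deg))
```

The same pattern applies to the other roll-driven adjustments (color index on the pie chart, scroll offset), with the roll angle scaled into the appropriate range.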
The present disclosure combines sensor fusion technology with a tap classifier to support various interactive operations on a stylus product, letting the user interact conveniently in many scenarios without touching the screen directly or indirectly.
The present disclosure has been described above by way of preferred embodiments, which are not intended to limit it. Those skilled in the art can make various modifications and refinements without departing from the spirit and scope of the present disclosure; the scope of protection is therefore defined by the appended claims.
Claims (11)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810101597.7A CN110109599A (en) | 2018-02-01 | 2018-02-01 | The interactive approach of user and stylus, categorizing system and stylus product |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810101597.7A CN110109599A (en) | 2018-02-01 | 2018-02-01 | The interactive approach of user and stylus, categorizing system and stylus product |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110109599A true CN110109599A (en) | 2019-08-09 |
Family
ID=67483067
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810101597.7A Pending CN110109599A (en) | 2018-02-01 | 2018-02-01 | The interactive approach of user and stylus, categorizing system and stylus product |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110109599A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101872259A (en) * | 2010-06-03 | 2010-10-27 | 中国人民解放军第二炮兵工程学院 | Natural interaction pen capable of automatically inputting written contents and handwriting detection method |
CN102135823A (en) * | 2011-04-28 | 2011-07-27 | 华南理工大学 | Intelligent electronic handwriting pen |
US20110304573A1 (en) * | 2010-06-14 | 2011-12-15 | Smith George C | Gesture recognition using neural networks |
TW201601011A (en) * | 2014-06-30 | 2016-01-01 | Univ Nat Central | Method and module, computer program product of identifying user of mobile device |
CN105677039A (en) * | 2016-02-16 | 2016-06-15 | 北京博研智通科技有限公司 | Method, device and wearable device for gesture-based driving status detection |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111078064A (en) * | 2019-12-31 | 2020-04-28 | 北京航空航天大学 | A Touch Angle Estimation Method Based on Capacitance Detection and Machine Learning |
CN111078064B (en) * | 2019-12-31 | 2021-01-01 | 北京航空航天大学 | Touch angle estimation method based on capacitance detection and machine learning |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11287903B2 (en) | User interaction method based on stylus, system for classifying tap events on stylus, and stylus product | |
US7952561B2 (en) | Method and apparatus for controlling application using motion of image pickup unit | |
CN102566896B (en) | For input time and the system and method on date | |
US8059111B2 (en) | Data transfer using hand-held device | |
JP2021034021A (en) | Pose prediction with recurrent neural networks | |
JP2019194892A (en) | Crown input for wearable electronic devices | |
US8810511B2 (en) | Handheld electronic device with motion-controlled cursor | |
CN105389043B (en) | Instrument interface for reducing the effects of irregular motion | |
JP2004227563A (en) | Integration of inertia sensor | |
JP2015072534A (en) | Information processor, and information processing method and program | |
KR101609553B1 (en) | Apparatus and method for 3d motion recognition information input, and recording medium storing program for executing the same | |
TWI567592B (en) | Gesture recognition method and wearable apparatus | |
CN103492986A (en) | Input device, input method, and recording medium | |
CN103294226B (en) | A virtual input device and method | |
CN107390867A (en) | A kind of man-machine interactive system based on Android wrist-watch | |
WO2015049934A1 (en) | Information processing device, information processing method, and program | |
CN110109599A (en) | The interactive approach of user and stylus, categorizing system and stylus product | |
US9256360B2 (en) | Single touch process to achieve dual touch user interface | |
CN103984407B (en) | Method and apparatus for motion recognition using motion sensor fusion | |
US9927917B2 (en) | Model-based touch event location adjustment | |
CN103348307B (en) | User interface | |
US20210311621A1 (en) | Swipe gestures on a virtual keyboard with motion compensation | |
CN110647282A (en) | A method for acquiring handwritten track information | |
EP2965177B1 (en) | Using portable electronic devices for user input on a computer | |
KR101759829B1 (en) | Interfacing method, wearable device and user terminal using sensor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 20190809 |