
CN115454262A - Information input method, device, equipment, medium and program product - Google Patents


Info

Publication number
CN115454262A
CN115454262A (application number CN202211168201.3A)
Authority
CN
China
Prior art keywords
angle
pointer
motion vector
determining
keyboard
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211168201.3A
Other languages
Chinese (zh)
Inventor
杨天翼
尹子硕
陈昊芝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Positive Negative Infinite Technology Co ltd
Original Assignee
Beijing Positive Negative Infinite Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Positive Negative Infinite Technology Co ltd
Priority to CN202211168201.3A
Publication of CN115454262A
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233Character input methods
    • G06F3/0236Character input methods using selection techniques to select from displayed items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04847Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the application provides an information input method, apparatus, device, medium and program product, relating to the field of AR. The method is applied to an AR device configured with a rotary keyboard, and the AR device is connected with a detection device controlled by a target control object. A rotatable pointer is arranged at the center of the rotary keyboard, and all keys are arranged in a circle around the pointer. The method specifically includes: if the rotary keyboard is in an activated state, receiving a first motion vector detected by the detection device, and determining the target key pointed to by the pointer according to the first motion vector; in response to a determination operation, inputting target information corresponding to the target key. The scheme provided by the embodiment of the application not only achieves higher input efficiency, but also allows the user to input information in a comfortable posture.

Description

Information input method, device, equipment, medium and program product
Technical Field
The present application relates to the field of AR technologies, and in particular, to an information input method, apparatus, electronic device, computer-readable storage medium, and computer program product.
Background
Currently, AR devices generally use a ray-cursor virtual keyboard as the text input keyboard. The principle of the ray cursor keyboard is as follows: a ray is sent from the control end as a cursor to position and select keys on the virtual keyboard, and an activation event is executed through a control such as a button.
However, when using a ray cursor virtual keyboard, the following problems exist:
1. To position the ray accurately on each key, the virtual keyboard must be made large enough. On the one hand, a sufficiently large size means that a large amount of display space on the AR device is occupied. On the other hand, it also means that the distance between keys, especially non-adjacent keys, is large, so moving back and forth between different keys takes more of the user's time and attention.
2. The control end must be held steadily enough to select keys accurately. Therefore, to avoid ray drift, the user of the AR device must spend extra time and attention keeping the control end stable.
Disclosure of Invention
An embodiment of the present application provides an information input method, an information input apparatus, and related products, aiming to solve at least one of the above technical problems. To this end, the embodiments of the present application provide the following technical solutions.
On one hand, the embodiment of the application provides an information input method, which is applied to AR equipment provided with a rotary keyboard; the AR equipment is connected with the detection equipment, and the movement of the detection equipment is controlled by a target control object; the method comprises the following steps:
if the rotary keyboard is in an activated state, receiving a first motion vector detected by the detection equipment; a pointer is arranged at the center of the rotary keyboard, and all keys are circularly arranged around the pointer; the first motion vector describes the angle change when the detection equipment rotates; determining a target key pointed by the pointer according to the first motion vector; in response to the determination operation, target information corresponding to the target key is input.
Optionally, before the rotary keyboard is in the activated state, the method includes:
receiving a second motion vector detected by the detection device in response to a trigger operation for the input frame; the second motion vector is used for describing the acceleration of the detection equipment during motion; determining an initial first angle of the detection device according to the second motion vector; determining a second angle of the pointer in the initial frame image based on the initial first angle; the initial frame image is the first frame image displayed when the rotary keyboard is in the activated state.
Optionally, determining an initial first angle of the detection device according to the second motion vector includes:
determining an azimuth angle based on the coordinates of the second motion vector on the first coordinate axis and the second coordinate axis respectively; an initial first angle of the detection device is determined based on the azimuth angle.
Optionally, the detection device includes an infrared positioning function; before the rotary keyboard is in the activated state, the method further comprises:
in response to a trigger operation for the input box, an initial first angle detected by the detection device based on the infrared positioning function is received.
Optionally, determining the target key pointed to by the pointer according to the first motion vector includes:
determining a second angle of the pointer in the next frame image according to the first motion vector; and determining a target key pointed to by the pointer in the next frame image according to the second angle of the pointer in the next frame image.
Optionally, determining a second angle of the pointer in the next frame image according to the first motion vector includes:
acquiring a motion value of the first motion vector on a third coordinate axis; taking the sum of the product of the motion value and the preset multiple and a second angle of the pointer in the current frame image as a second angle of the pointer in the next frame image; and the sum of the motion value and the first angle of the detection device in the display period of the current frame image is the first angle of the detection device in the display period of the next frame image.
Optionally, inputting the target information corresponding to the target key includes:
acquiring a keyboard mode of the rotary keyboard and an information content set corresponding to the target key in the keyboard mode; determining target information from the information content set according to the determination operation; inputting the target information to the input box.
Optionally, the determining operation includes any one of:
determining operation input by triggering a preset button on the detection device or the AR device; a determination operation input by the detection device or the AR device detecting a preset gesture.
On the other hand, the embodiment of the application provides an information input device, which is applied to an AR device configured with a rotary keyboard; the AR equipment is connected with the detection equipment, and the movement of the detection equipment is controlled by a target control object; the device includes:
the receiving and transmitting module is used for receiving a first motion vector detected by the detection equipment if the rotary keyboard is in an activated state; a pointer is arranged at the center of the rotary keyboard, and all keys are circularly arranged around the pointer; the first motion vector is descriptive of the angular change when the detection device is rotated.
And the first determining module is used for determining the target key pointed by the pointer according to the first motion vector.
And the second determining module is used for responding to the determining operation and inputting the target information corresponding to the target key.
In another aspect, an embodiment of the present application provides an electronic device, including a memory and a processor, where the memory stores a computer program, and the processor, when executing the computer program, implements the steps of the information input method provided by the embodiment of the present application.
Embodiments of the present application also provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of an information input method.
Embodiments of the present application further provide a computer program product, which includes a computer program, and when the computer program is executed by a processor, the steps of the information input method are implemented.
The technical scheme provided by the embodiment of the application has the following beneficial effects:
the embodiment of the application provides an information input method, which is applied to an AR device connected with a detection device. The motion of the detection device is controlled by a target control object; a rotary keyboard is arranged on the AR device, a rotatable pointer is arranged at the center of the rotary keyboard, and all keys of the rotary keyboard are arranged in a circle around the pointer. Under the motion control of the target control object, the detection device detects the angle change of the current rotation, i.e. the first motion vector. Specifically, if the rotary keyboard is in an activated state, the first motion vector detected by the detection device is received, and the target key pointed to by the pointer is determined according to the first motion vector; in response to a determination operation, the target information corresponding to the target key is input. That is, what matters is the angle change when the target control object controls the detection device, not the amplitude of the movement; the movement or position of the target control object has no direct interaction with the rotary keyboard, only an indirect one.
Therefore, the method shown in the embodiment of the application has the following technical effects. On the one hand, high information input efficiency can be achieved with small hand movements, so the user does not tire even after inputting information for a long time. On the other hand, input on the rotary keyboard is not affected by the user's position or hand posture, so the user can input information in any comfortable posture.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments of the present application will be briefly described below.
Fig. 1 is a schematic flowchart of an information input method according to an embodiment of the present disclosure;
fig. 2a is a schematic structural diagram of a rotary keyboard according to an embodiment of the present application;
fig. 2b is a schematic view of an application scenario of a rotary keyboard according to an embodiment of the present application;
fig. 2c is a schematic view of an application scenario of another rotary keyboard according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an information input device according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described below in conjunction with the drawings in the present application. It should be understood that the embodiments set forth below in connection with the drawings are exemplary descriptions for explaining technical solutions of the embodiments of the present application, and do not limit the technical solutions of the embodiments of the present application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, information, data, steps, operations, elements, and/or components, but do not preclude the presence or addition of other features, information, data, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. The term "and/or" as used herein indicates at least one of the items defined by the term, e.g., "A and/or B" can be implemented as "A", or as "B", or as "A and B".
To make the objects, technical solutions and advantages of the present application more clear, the following detailed description of the embodiments of the present application will be made with reference to the accompanying drawings.
In order to solve the problems described in the background art, embodiments of the present application provide an information input method, which may be applied to various AR (Augmented Reality) devices. The AR device is configured with a rotary keyboard and is connected with a detection device whose motion is controlled by a target control object (e.g., a user's hand); a pointer is arranged at the center of the rotary keyboard, and all keys are arranged in a circle around the pointer. After the AR device is started, if the rotary keyboard is detected to be in an activated state, a first motion vector detected by the detection device may be received, where the first motion vector describes the angle change when the detection device rotates; a target key pointed to by the pointer is determined according to the first motion vector, and in response to a determination operation, target information corresponding to the target key is input. The method provided by the embodiment of the application can achieve high information input efficiency through small hand movements, so the user does not tire even when inputting information for a long time, and the user may also input information in a comfortable posture.
Alternatively, the AR device may be a head-worn AR helmet, or monocular/binocular/headband AR glasses.
Optionally, the detection device is a smart detection bracelet. When the user wears the bracelet, the motion of the user's hand drives the bracelet, and the detection unit of the bracelet detects various motion parameters, such as the angle change and acceleration during motion. The detection device may also be a smart handle: when the user grips the handle, the detection unit of the handle detects various motion parameters, such as angle change and acceleration during motion, through the user's control of the handle.
Optionally, the method provided in the embodiment of the present application may be implemented as an independent application program or a functional module/plug-in of an application program. For example, the application may be a dedicated information input application or an information input function application. By the application program, information input operation can be realized.
In order to more clearly understand the solutions provided in the embodiments of the present application, the following describes several exemplary embodiments to explain the technical solutions of the embodiments of the present application and the technical effects produced by the technical solutions of the present application. It should be noted that the following embodiments may be referred to, referred to or combined with each other, and the description of the same terms, similar features, similar implementation steps and the like in different embodiments is not repeated.
Fig. 1 shows an information input method. Wherein the method is applied to an AR device configured with a rotating keyboard; the AR equipment is connected with the detection equipment, and the movement of the detection equipment is controlled by a target control object. Specifically, the method includes steps S110 to S130.
S110, if the rotary keyboard is in an activated state, receiving a first motion vector detected by detection equipment; a pointer is arranged at the center of the rotary keyboard, and all keys are circularly arranged around the pointer; the first motion vector is descriptive of the angular change when the detection device is rotated.
Alternatively, the rotatable keyboard may be determined to be activated when any input box in the display interface of the AR device is triggered. When the rotary keyboard is activated, an operation interface of the rotary keyboard is displayed in the current page. The rotary keyboard is a virtual keyboard, and keys and pointers on the keyboard are virtual and do not correspond to a real keyboard.
Alternatively, the target control object may operate a detection device, such as a user's hand.
Optionally, the AR device may perform Bluetooth pairing and connection with the detection device through a supported Bluetooth function; the AR device and the detection device may also be connected directly via WiFi Direct, with the AR device taking the GO (Group Owner) role in the WiFi Direct network architecture. WiFi Direct connects devices that support peer-to-peer connection, and such devices support both infrastructure networks and P2P connections.
Optionally, the distance between the center of the rotary keyboard and each key is fixed, that is, each key is located on the circumference of the same circle; optionally, the pitch of each key on the rotary keyboard is the same. Optionally, the rotary keyboard may provide different keyboard modes, such as a numeric mode, an english mode, a chinese mode, an emoticon mode, a picture mode, a music mode, and the like.
Alternatively, different keyboard modes allow different target information to be input. For example, in the numeric mode, any one of the numbers 0 to 9 may be input; in the English mode, any one of the letters A-Z and a-z may be input; in the Chinese mode, any Chinese character may be input; in the expression mode, any configured expression may be input; in the picture mode, one picture may be preset for each key, i.e. multiple pictures may be configured for selection; in the music mode, one piece of music may be preset for each key, i.e. multiple pieces of music may be configured for selection. Providing keyboard modes in various forms improves the practicality and appeal of the rotary keyboard.
As shown in fig. 2a, the embodiment of the present application provides a schematic structural diagram of a rotary keyboard, that is, the operation interface of the keyboard. The rotary keyboard provides 10 keys in total. In different keyboard modes, each key corresponds to different information: in the numeric mode, key 9 corresponds to the number "9"; in the English mode, key 9 may correspond to any letter in "wxyz", and which letter is selected is processed according to the determination operation; in the expression mode, key 9 indicates a "smile" expression.
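As a minimal sketch of the mode-dependent content sets described above (the dictionary layout, names, and the exact key-to-content assignments are assumptions for illustration, not the patent's actual data):

```python
# Hypothetical per-mode information content sets for the 10-key rotary keyboard.
# Key 9 maps to "9" in numeric mode, the candidate letters "wxyz" in English
# mode, and a smile expression in expression mode, mirroring the Fig. 2a example.
KEYBOARD_MODES = {
    "numeric": {k: [str(k)] for k in range(10)},
    "english": {7: list("pqrs"), 8: list("tuv"), 9: list("wxyz")},
    "expression": {9: ["\U0001F600"]},  # "smile" expression on key 9
}

def content_set(mode: str, key: int) -> list:
    """Return the information content set for a key in the given keyboard mode."""
    return KEYBOARD_MODES[mode].get(key, [])
```

The determination operation would then pick one element from the returned set, e.g. the second letter of `content_set("english", 9)`.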
And S120, determining a target key pointed by the pointer according to the first motion vector.
And S130, responding to the determination operation, inputting target information corresponding to the target key.
Optionally, the target information may be text content, or an expression, or picture information, etc.
Since the rotary keyboard determines key selection by the pointing direction of the pointer, and the pointing direction can be determined from the pointer's angle, how the initial angle of the pointer is obtained affects how key selection proceeds later. The following explains how the initial angle of the pointer is acquired.
In an alternative embodiment, the method comprises the following steps Sa1 to Sa3 before the rotary keyboard is in the active state.
Sa1, in response to a trigger operation for the input box, receiving a second motion vector detected by the detection device; the second motion vector describes the acceleration of the detection device during motion.
Specifically, the detection device obtains various motion parameters of the current device, such as an acceleration of the current device during motion, and sends the motion parameters as a second motion vector to the AR device.
The space in which the detection device moves is a three-dimensional space, and thus, the detected motion vector has three-dimensional characteristics. For example, the first motion vector and the second motion vector are both three-dimensional vectors.
Sa2, determining the initial first angle of the detection device according to the second motion vector.
Optionally, the azimuth angle is determined based on coordinates of the second motion vector on the first coordinate axis and the second coordinate axis, respectively; an initial first angle of the detection device is determined based on the azimuth angle.
In one example, the preset algorithm is the atan2 algorithm (a function that can be used to calculate an azimuth), and the first and second coordinate axes are the Y axis and the Z axis, respectively. By substituting the Y and Z coordinates of the second motion vector into the atan2 equation, the included angle relative to the Z axis can be obtained. The atan2 algorithm refers to the following Equation 1:
atan2(y, z) = arctan(y/z),        if z > 0;
              arctan(y/z) + pi,   if z < 0 and y >= 0;
              arctan(y/z) - pi,   if z < 0 and y < 0;
              pi/2,               if z = 0 and y > 0;
              -pi/2,              if z = 0 and y < 0.    (Equation 1)
in other examples, the first and second coordinate axes may also be the X and Y axes, or the X and Z axes; when the atan2 algorithm is used, the included angle relative to the Y axis can be calculated in the same way. It should be noted that the choice can be made according to the user's needs, or the comfort required when rotating.
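The azimuth computation of Equation 1 can be sketched with the standard library's atan2 (function and parameter names are assumptions; the optional deviation term reflects the preset deviation angle this description mentions for compensating accelerometer error):

```python
import math

def initial_first_angle(second_motion_vector, deviation=0.0):
    """Compute the initial first angle (azimuth) from the Y and Z coordinates
    of the three-dimensional second motion vector, per Equation 1, plus an
    optional preset deviation angle. Result is in radians."""
    _, y, z = second_motion_vector       # vector (x, y, z); X is unused here
    azimuth = math.atan2(y, z)           # included angle relative to the Z axis
    return azimuth + deviation
```

`math.atan2` implements exactly the piecewise definition of Equation 1, including the z = 0 cases.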
Sa3, determining a second angle of the pointer in the initial frame image based on the initial first angle; the initial frame image is the first frame image displayed when the rotary keyboard is in the activated state.
Specifically, the initial first angle is determined as the second angle of the pointer in the initial image frame. When the AR device displays the first frame image after activating the rotary keyboard, the second angle of the pointer in that frame has been obtained, so the pointer's initial pointing direction is determined.
Since the azimuth calculated from the accelerometer may deviate, the accuracy of the obtained first angle can be improved by setting a deviation value. Optionally, the sum of the initial first angle and a preset deviation angle is used as the second angle of the pointer in the initial image frame.
In an alternative embodiment, the detection device may also include an infrared positioning function or a magnetometer. Before the rotary keyboard is in the activated state, the method further comprises:
in response to a triggering operation for the input box, an initial first angle detected by the detection device based on the infrared positioning function is received.
Specifically, the rotation angle of the device may be acquired based on the infrared positioning function and determined as the initial first angle of the detection device. For details of the infrared positioning function or the geomagnetic detection function, reference may be made to the related art.
Optionally, the detection device includes an inertial measurement unit and a magnetometer. For example, the initial first angle may be detected by the inertial measurement unit and the magnetometer. Optionally, the inertial measurement unit may further detect the first motion vector and the second motion vector.
When the rotary keyboard is in the activated state, how to convert the motion parameters detected by the detection device into key selections on the rotary keyboard is a technical problem to be solved by the present application.
In an alternative embodiment, the step of determining the target key pointed by the pointer according to the first motion vector may specifically include the following steps Sb1 to Sb2.
And Sb1, determining a second angle of the pointer in the next frame image according to the first motion vector.
Specifically, a motion numerical value of the first motion vector on a third coordinate axis is obtained; and taking the sum of the product of the motion value and the preset multiple and the second angle of the pointer in the current frame image as the second angle of the pointer in the next frame image.
Since the first motion vector is a vector in a three-dimensional coordinate system and describes the angle change when the detection device rotates, the change is expressed in three aspects: first, the included angle of the vector relative to the plane formed by the first and second coordinate axes; second, the included angle relative to the plane formed by the second and third coordinate axes; third, the included angle relative to the plane formed by the first and third coordinate axes. The first, second and third included angles are the motion values of the first motion vector on the third, first and second coordinate axes, respectively.
Obtaining the motion value of the first motion vector on the third coordinate axis means obtaining the first included angle. For example, treat the detection device as a rotatable "third axis". When this axis rotates around the first coordinate axis while remaining perpendicular to it, an included angle forms between it and the second coordinate axis during rotation. The angle can also be understood as the angle between the "third axis" and the plane formed by the first and second coordinate axes.
Optionally, the sum of the motion value and the first angle of the detection device in the display period of the current frame image is the first angle of the detection device in the display period of the next frame image.
In accordance with the above example, when the first coordinate axis and the second coordinate axis are the Y axis and the Z axis, the third coordinate axis is the X axis, and the first motion vector may be represented as (x1, y1, z1). The motion value x1 on the X axis is taken as the reference for adjusting the angle change. For example, when the hand rotates and drives the detection device to rotate to a specified angle, the angle of the pointer changes correspondingly, and the magnitude of that change is controlled by setting a linkage factor (i.e., the preset multiple). If the preset multiple is 2, rotating the detection device by 90 degrees rotates the pointer by 180 degrees. For angles that the hand cannot reach when rotating because of physiological constraints, the rotation angle of the pointer can still be obtained in this way, so that the pointer can be adjusted accordingly.
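The angle update in step Sb1 and the example above can be sketched as follows. This is a minimal illustration, not the application's implementation: the linkage factor of 2, the wrap-around at 360 degrees, and the axis ordering of the motion vector are assumptions.

```python
# Sketch of the pointer-angle update described in step Sb1 (assumed values).
# The first motion vector (x1, y1, z1) comes from the detection device;
# only the component on the third coordinate axis (here x1) is used.

PRESET_MULTIPLE = 2.0  # assumed linkage factor: 90 deg of device rotation -> 180 deg of pointer rotation

def next_pointer_angle(current_pointer_angle: float, motion_vector: tuple) -> float:
    """Second angle of the pointer in the next frame image."""
    x1, _y1, _z1 = motion_vector  # motion value on the third coordinate axis
    return (current_pointer_angle + PRESET_MULTIPLE * x1) % 360.0

# Example: the device rotates 90 degrees about the first coordinate axis.
print(next_pointer_angle(0.0, (90.0, 0.0, 0.0)))  # 180.0
```

Note how the linkage factor lets a comfortable 90-degree wrist rotation drive the pointer through 180 degrees, which is the point made above about physiologically unreachable angles.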
And Sb2, determining a target key pointed by the pointer in the next frame image according to the second angle of the pointer in the next frame image.
Specifically, when the next frame image is rendered, the pointer is rendered according to the second angle of the pointer in the next frame image, so that the pointer points to the target key in the obtained image.
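A minimal sketch of step Sb2, mapping the pointer's second angle to one of the circularly arranged keys. The 12-key layout and the convention that key sectors start at 0 degrees are assumptions for illustration; the application does not fix a particular layout.

```python
# Assumed circular key layout around the pointer (phone-pad style).
KEYS = ["1", "2", "3", "4", "5", "6", "7", "8", "9", "*", "0", "#"]

def target_key(pointer_angle: float, keys=KEYS) -> str:
    """Return the key whose angular sector contains the pointer angle."""
    sector = 360.0 / len(keys)                 # each key occupies an equal arc
    index = int((pointer_angle % 360.0) // sector)
    return keys[index]

print(target_key(0.0))    # "1"
print(target_key(185.0))  # "7"
```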
When the pointer is determined to point at the target key, how to further input the content corresponding to the target key into the input box is also a technical problem to be solved by the application.
In an optional embodiment, the process of inputting the target information corresponding to the target key specifically includes the following steps Sc1 to Sc3.
And Sc1, acquiring a keyboard mode of the rotary keyboard and an information content set corresponding to the target key in the keyboard mode.
Optionally, during the stage of activating the rotary keyboard, the keyboard mode of the rotary keyboard may be set by default to one of a plurality of keyboard modes. Alternatively, the keyboard mode of the rotary keyboard may be adjusted by triggering a preset button on the detection device or the AR device. Each keyboard mode is represented by a preset identifier.
Optionally, the identification information of the current keyboard mode of the rotary keyboard is obtained, so as to determine the keyboard mode of the rotary keyboard.
And Sc2, determining target information from the information content set according to the determination operation.
Optionally, the determining operation may specifically include: determining operation input by triggering a preset button on the detection device or the AR device; a determination operation input by the detection device or the AR device detecting a preset gesture.
For example, if the current keyboard mode is the numeric mode, when the pointer points to key 9, the target information is "9". If the current keyboard mode is the letter mode, when the pointer points to key 9, an information content set is obtained, the information content set comprising: "w", "x", "y", "z"; the target information may then be selected by the duration for which the key is pressed, or selected from the information content set by other methods. It should be noted that, for the selection of the keyboard mode and its target information, reference may also be made to the related art; for simplicity of description, details are not repeated here.
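The mode-dependent selection described above (steps Sc1 and Sc2) can be sketched as below. The mode identifiers, the content set for key 9, and the press-duration rule (each additional 0.5 s advances to the next candidate, wrapping around) are assumptions for illustration only.

```python
# Assumed mapping from keyboard mode to the information content set of key 9.
CONTENT_SETS = {
    "numeric": {"9": ["9"]},
    "letter":  {"9": ["w", "x", "y", "z"]},
}

def select_target_info(mode: str, key: str, press_seconds: float) -> str:
    """Pick target information from the key's content set by press duration
    (assumed rule: every 0.5 s advances to the next candidate, wrapping)."""
    candidates = CONTENT_SETS[mode][key]
    index = int(press_seconds // 0.5) % len(candidates)
    return candidates[index]

print(select_target_info("numeric", "9", 0.2))  # "9"
print(select_target_info("letter", "9", 1.2))   # "y"
```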
And Sc3, inputting the target information into the input box.
At this point, the operation of inputting the target information by means of the rotary keyboard is completed.
The information input method provided by the embodiments of the present application is applicable to scenarios in which information can be input in an AR device, such as a call scenario, a chat scenario, a music playing scenario, and a picture display scenario. To more clearly illustrate the effect of the scheme shown in the embodiments of the present application, an example is further provided with reference to fig. 2b and fig. 2c.
In this example, the selected scenario is a call scenario, in which inputting a telephone number is a key step. The example below implements the input of a telephone number based on the numeric mode of the rotary keyboard.
In this example, the user wears AR glasses and wears a smart detection bracelet on one hand.
In this example, the number to be dialed is 1895555555555.
As shown in fig. 2b, the user's hand starts to rotate and drives the smart detection bracelet to rotate, and the angle of the pointer corresponding to the smart detection bracelet is the angle corresponding to key 1 in fig. 2b. When rendering the next frame image of the display interface, the AR glasses perform rendering according to the angle corresponding to key 1, thereby obtaining the next frame image, in which the pointer points to key 1. After the user makes a determination gesture that is captured by the AR glasses, the number "1" corresponding to key 1 is input into the input box, and "1" is displayed in the display interface of the AR glasses.
Next, in the same way as the number "1", the numbers "8", "9" and the subsequent "5"s are input one after another.
As shown in fig. 2c, the angle of the smart detection bracelet when the pointer points to key 1 is different from its angle when the pointer points to key 5.
After the last digit "5" is entered, a call-making operation may be initiated.
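The dialing flow of figs. 2b and 2c can be sketched as a loop that, for each captured determination gesture, maps the current pointer angle to a digit and appends it to the input box. The key layout and the specific angles are assumptions for illustration.

```python
def dial(angles):
    """Append, for each determination gesture, the digit under the pointer."""
    keys = ["1", "2", "3", "4", "5", "6", "7", "8", "9", "*", "0", "#"]  # assumed layout
    sector = 360.0 / len(keys)
    input_box = ""
    for angle in angles:                        # one pointer angle per gesture
        input_box += keys[int((angle % 360.0) // sector)]
    return input_box

# Pointer angles captured at each determination gesture (assumed values).
print(dial([5.0, 215.0, 245.0, 125.0]))  # "1895"
```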
Fig. 3 shows an information input apparatus 300 applied to an AR device equipped with a rotary keyboard; the AR equipment is connected with the detection equipment, and the movement of the detection equipment is controlled by a target control object; the apparatus 300 comprises:
the transceiver module 310 is configured to receive a first motion vector detected by the detection device if the rotary keyboard is in an activated state; a pointer is arranged at the center of the rotary keyboard, and all keys are circularly arranged around the pointer; the first motion vector is a vector describing the angular change when the detection device is rotated.
The first determining module 320 is configured to determine the target key pointed by the pointer according to the first motion vector.
And a second determining module 330, configured to input target information corresponding to the target key in response to the determining operation.
Optionally, before the rotary keyboard is in the activated state, the method further includes:
the transceiver module 310 is further configured to receive a second motion vector detected by the detection device in response to a triggering operation for the input box; the second motion vector is used for describing the acceleration of the detection device during motion;
a first determining module 320, further configured to determine a first angle of the detection device based on the second motion vector; determining a second angle of the pointer in the initial frame image based on the initial first angle; the initial frame of image is the first frame of image displayed when the rotary keyboard is in the activated state.
Optionally, the first determining module 320 is specifically configured to, in determining the initial first angle of the detection device according to the second motion vector:
determining an azimuth angle based on the coordinates of the second motion vector on the first coordinate axis and the second coordinate axis respectively; an initial first angle of the detection device is determined based on the azimuth angle.
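The azimuth determination described above can be sketched with a two-argument arctangent. Treating the first and second coordinate axes as the Y and Z axes (as in the earlier example), and expressing the result in degrees, are assumptions for illustration.

```python
import math

def initial_first_angle(motion_vector: tuple) -> float:
    """Azimuth (degrees) from the second motion vector's coordinates on the
    first and second coordinate axes, taken here as Y and Z (assumed)."""
    _x, y, z = motion_vector
    return math.degrees(math.atan2(y, z)) % 360.0

print(initial_first_angle((0.0, 1.0, 1.0)))  # 45.0
```

`atan2` is used rather than a plain arctangent so that the azimuth is resolved to the correct quadrant for all sign combinations of the two coordinates.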
Optionally, the detection device includes an infrared positioning function or a geomagnetic detection function; before the rotary keyboard is in the activated state, the transceiver module 310 is further configured to:
in response to a trigger operation for the input box, an initial first angle detected by the detection device based on an infrared positioning function or a geomagnetic detection function is received.
Optionally, the first determining module 320, in determining the target key pointed to by the pointer according to the first motion vector, is specifically configured to:
determining a second angle of the pointer in the next frame image according to the first motion vector; and determining a target key pointed by the pointer in the next frame image according to the second angle of the pointer in the next frame image.
Optionally, the first determining module 320, in determining the second angle of the pointer in the next frame of image according to the first motion vector, is specifically configured to:
acquiring a motion value of the first motion vector on a third coordinate axis; taking the sum of the product of the motion value and the preset multiple and the second angle of the pointer in the current frame image as the second angle of the pointer in the next frame image; and the sum of the motion value and the first angle of the detection device in the display period of the current frame image is the first angle of the detection device in the display period of the next frame image.
Optionally, the second determining module 330 is specifically configured to, in inputting the target information corresponding to the target key:
acquiring a keyboard mode of the rotary keyboard and an information content set corresponding to the target key in the keyboard mode; determining target information from the information content set according to the determination operation; inputting the target information to the input box.
Optionally, the determining operation includes any one of:
determining operation input by triggering a preset button on the detection device or the AR device; a determination operation input by the detection device or the AR device detecting a preset gesture.
The apparatus of the embodiments of the present application can execute the method provided by the embodiments of the present application, and its implementation principle is similar. The actions executed by the modules in the apparatus correspond to the steps in the method of the embodiments; for a detailed functional description of each module, reference may be made to the description of the corresponding method shown above, and details are not repeated here.
An embodiment of the present application provides an electronic device, comprising a memory, a processor, and a computer program stored on the memory; the processor executes the computer program to implement the steps of the information input method. Compared with the related art, this not only achieves higher input efficiency but also allows the user to input information in any comfortable posture.
In an alternative embodiment, an electronic device is provided, as shown in fig. 4, the electronic device 4000 shown in fig. 4 comprising: a processor 4001 and a memory 4003. Processor 4001 is coupled to memory 4003, such as via bus 4002. Optionally, the electronic device 4000 may further include a transceiver 4004, and the transceiver 4004 may be used for data interaction between the electronic device and other electronic devices, such as data transmission and/or data reception. In addition, the transceiver 4004 is not limited to one in practical applications, and the structure of the electronic device 4000 is not limited to the embodiment of the present application.
The Processor 4001 may be a CPU (Central Processing Unit), a general-purpose Processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other Programmable logic device, a transistor logic device, a hardware component, or any combination thereof. Which may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with the disclosure. The processor 4001 may also be a combination that performs a computational function, including, for example, a combination of one or more microprocessors, a combination of a DSP and a microprocessor, or the like.
Bus 4002 may include a path that carries information between the aforementioned components. The bus 4002 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus 4002 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 4, but this does not indicate only one bus or one type of bus.
The memory 4003 may be a ROM (Read-Only Memory) or other type of static storage device capable of storing static information and instructions, a RAM (Random Access Memory) or other type of dynamic storage device capable of storing information and instructions, an EEPROM (Electrically Erasable Programmable Read-Only Memory), a CD-ROM (Compact Disc Read-Only Memory) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium, other magnetic storage devices, or any other medium that can be used to carry or store a computer program and that can be read by a computer; this is not limited herein.
The memory 4003 is used for storing computer programs for executing the embodiments of the present application, and is controlled by the processor 4001 to execute. The processor 4001 is used to execute computer programs stored in the memory 4003 to implement the steps shown in the foregoing method embodiments.
Electronic devices include, but are not limited to: a head-mounted AR helmet, or monocular/binocular/headband AR glasses.
Embodiments of the present application provide a computer-readable storage medium, on which a computer program is stored, and when being executed by a processor, the computer program may implement the steps and corresponding contents of the foregoing method embodiments.
Embodiments of the present application further provide a computer program product, which includes a computer program, and when the computer program is executed by a processor, the steps and corresponding contents of the foregoing method embodiments can be implemented.
The terms "first," "second," "third," "fourth," "1," "2," and the like in the description and claims of this application and in the preceding drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used are interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in other sequences than described or illustrated herein.
It should be understood that, although each operation step is indicated by an arrow in the flowchart of the embodiment of the present application, the implementation order of the steps is not limited to the order indicated by the arrow. In some implementation scenarios of the embodiments of the present application, the implementation steps in the flowcharts may be performed in other sequences as desired, unless explicitly stated otherwise herein. In addition, some or all of the steps in each flowchart may include multiple sub-steps or multiple stages based on an actual implementation scenario. Some or all of these sub-steps or stages may be performed at the same time, or each of these sub-steps or stages may be performed at different times, respectively. In a scenario where execution times are different, an execution sequence of the sub-steps or the phases may be flexibly configured according to requirements, which is not limited in the embodiment of the present application.
The foregoing is only an optional implementation manner of a part of implementation scenarios in this application, and it should be noted that, for those skilled in the art, other similar implementation means based on the technical idea of this application are also within the protection scope of the embodiments of this application without departing from the technical idea of this application.

Claims (12)

1. An information input method is characterized in that the method is applied to AR equipment provided with a rotary keyboard; the AR equipment is connected with detection equipment, and the movement of the detection equipment is controlled by a target control object; the method comprises the following steps:
if the rotary keyboard is in an activated state, receiving a first motion vector detected by the detection equipment; a pointer is arranged at the center of the rotary keyboard, and all keys are circularly arranged around the pointer; the first motion vector is used for describing the angle change of the detection equipment during rotation;
determining a target key pointed by the pointer according to the first motion vector;
and responding to the determined operation, and inputting target information corresponding to the target key.
2. The method of claim 1, wherein prior to the rotary keyboard being in the activated state, the method comprises:
receiving a second motion vector detected by the detection device in response to a trigger operation for an input box; the second motion vector is used for describing the acceleration of the detection device during motion;
determining an initial first angle of the detection device according to the second motion vector;
determining a second angle of the pointer in an initial frame image based on the initial first angle; the initial frame image is a first frame image displayed when the rotary keyboard is in an activated state.
3. The method of claim 2, wherein determining the initial first angle of the detection device based on the second motion vector comprises:
determining an azimuth angle based on the coordinates of the second motion vector on the first coordinate axis and the second coordinate axis respectively;
an initial first angle of the detection device is determined based on the azimuth angle.
4. The method of claim 2, wherein the detection device includes an infrared positioning function; before the rotary keyboard is in the activated state, the method further comprises the following steps:
receiving the initial first angle detected by the detection device based on an infrared positioning function in response to a trigger operation for the input box.
5. The method of claim 1, wherein determining the target key to which the pointer is pointing according to the first motion vector comprises:
determining a second angle of the pointer in the next frame image according to the first motion vector;
and determining a target key pointed by the pointer in the next frame image according to the second angle of the pointer in the next frame image.
6. The method of claim 5, wherein determining the second angle of the pointer in the next frame of image according to the first motion vector comprises:
acquiring a motion value of the first motion vector on a third coordinate axis;
taking the sum of the product of the motion value and a preset multiple and a second angle of the pointer in the current frame image as a second angle of the pointer in the next frame image;
wherein the sum of the motion value and the first angle of the detection device in the display period of the current frame image is the first angle of the detection device in the display period of the next frame image.
7. The method of claim 2, wherein the inputting the target information corresponding to the target key comprises:
acquiring a keyboard mode of the rotary keyboard and an information content set corresponding to the target key in the keyboard mode;
determining target information from the set of information content according to the determination operation;
inputting the target information into the input box.
8. The method of claim 7, wherein the determining operation comprises any one of:
determining operation input by triggering a preset button on the detection device or the AR device;
and determining operation input by the detection device or the AR device detecting a preset gesture.
9. An information input device is characterized by being applied to an AR device provided with a rotary keyboard; the AR equipment is connected with detection equipment, and the movement of the detection equipment is controlled by a target control object; the device comprises:
the receiving and transmitting module is used for receiving a first motion vector detected by the detection equipment if the rotary keyboard is in an activated state; a pointer is arranged at the center of the rotary keyboard, and all the keys are circularly arranged around the pointer; the first motion vector is used for describing the angle change of the detection equipment during rotation;
the first determining module is used for determining a target key pointed by the pointer according to the first motion vector;
and the second determining module is used for responding to the determining operation and inputting the target information corresponding to the target key.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory, characterized in that the processor executes the computer program to implement the steps of the method of any of claims 1-8.
11. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 8.
12. A computer program product comprising a computer program, wherein the computer program when executed by a processor performs the steps of the method of any one of claims 1 to 8.
CN202211168201.3A 2022-09-23 2022-09-23 Information input method, device, equipment, medium and program product Pending CN115454262A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211168201.3A CN115454262A (en) 2022-09-23 2022-09-23 Information input method, device, equipment, medium and program product

Publications (1)

Publication Number Publication Date
CN115454262A true CN115454262A (en) 2022-12-09

Family

ID=84306223

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211168201.3A Pending CN115454262A (en) 2022-09-23 2022-09-23 Information input method, device, equipment, medium and program product

Country Status (1)

Country Link
CN (1) CN115454262A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103197772A (en) * 2013-04-15 2013-07-10 沈正达 Angle value based keyboard/mouse information transmission display method and angle value based keyboard/mouse information transmission display device
CN108376029A (en) * 2017-01-30 2018-08-07 精工爱普生株式会社 Display system
CN111338556A (en) * 2020-02-25 2020-06-26 韦季李 Input method, input device, terminal equipment and storage medium
JP2021077097A (en) * 2019-11-08 2021-05-20 国立大学法人 筑波大学 Information processing device, information processing method, and program
CN114341779A (en) * 2019-09-04 2022-04-12 脸谱科技有限责任公司 System, method, and interface for performing input based on neuromuscular control
CN114546102A (en) * 2020-11-26 2022-05-27 幻蝎科技(武汉)有限公司 Eye tracking sliding input method and system, intelligent terminal and eye tracking device

Similar Documents

Publication Publication Date Title
US8549419B2 (en) Mobile terminal providing graphic user interface and method of providing graphic user interface using the same
US10754546B2 (en) Electronic device and method for executing function using input interface displayed via at least portion of content
CN109260702A (en) Virtual carrier control method, computer equipment and storage medium in virtual scene
EP2814000A1 (en) Image processing apparatus, image processing method, and program
CN107533374A (en) Switching at runtime and the merging on head, gesture and touch input in virtual reality
CN111880648B (en) Three-dimensional element control method and terminal
CN111324250B (en) Three-dimensional image adjusting method, device and equipment and readable storage medium
EP2558924B1 (en) Apparatus, method and computer program for user input using a camera
CN118379428B (en) 3D model for displayed 2D elements
CN108427479B (en) Wearable device, environment image data processing system, method and readable medium
KR20210064378A (en) Perspective rotation method and apparatus, device and storage medium
US20210042980A1 (en) Method and electronic device for displaying animation
CN115480639A (en) Human-computer interaction system, human-computer interaction method, wearable device and head display device
JP2021185498A (en) Method for generating 3d object arranged in augmented reality space
CN109960404B (en) Data processing method and device
JP6088787B2 (en) Program, information processing apparatus, information processing method, and information processing system
CN108355352B (en) Virtual object control method and device, electronic device and storage medium
CN113296605A (en) Force feedback method, force feedback device and electronic equipment
CN111897437A (en) Cross-terminal interaction method and device, electronic equipment and storage medium
CN114564106B (en) Method and device for determining interaction indication line, electronic equipment and storage medium
Cho et al. Multi-scale 7DOF view adjustment
CN115454262A (en) Information input method, device, equipment, medium and program product
WO2024066756A1 (en) Interaction method and apparatus, and display device
CN117111742A (en) Image interaction method and device, electronic equipment and storage medium
CN115981544A (en) Interaction method and device based on augmented reality, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination