
CN112462932B - Self-mixing interferometry-based gesture input system for wearable or handheld devices - Google Patents


Info

Publication number
CN112462932B
CN112462932B (application CN202010885729.7A)
Authority
CN
China
Prior art keywords
smi
wearable device
sensors
sensor
processor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010885729.7A
Other languages
Chinese (zh)
Other versions
CN112462932A (en)
Inventor
M·姆特鲁
A·F·西罕
M·T·温克勒
陈童
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apple Inc
Original Assignee
Apple Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US16/934,988, granted as US11409365B2
Application filed by Apple Inc
Priority to CN202510005863.6A, published as CN119916943A
Publication of CN112462932A
Application granted
Publication of CN112462932B
Legal status: Active


Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/491 - Details of non-pulse systems
    • G01S7/4912 - Receivers
    • G01S7/4916 - Receivers using self-mixing in the laser cavity
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/014 - Hand-worn input/output arrangements, e.g. data gloves
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 - Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304 - Detection arrangements using opto-electronic means
    • G06F3/0308 - Detection arrangements using opto-electronic means comprising a plurality of distinctive and separately oriented light emitters or reflectors associated to the pointing device, e.g. remote cursor controller with distinct and separately oriented LEDs at the tip whose radiations are captured by a photo-detector associated to the screen
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 - Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033 - Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346 - Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Optics & Photonics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract


The present disclosure relates to a self-mixing interferometry-based gesture input system with a wearable or handheld device. A wearable device is disclosed that includes a device housing configured to be worn on a first surface of a user, a set of one or more SMI sensors, and a processor. The set of one or more SMI sensors is mounted within the device housing and is configured to emit a set of one or more beams of electromagnetic radiation, wherein each beam is emitted in a different direction extending away from the first surface. The set of one or more SMI sensors is also configured to generate a set of one or more SMI signals containing information about a relationship between the device housing and a second surface. The processor is configured to extract the relationship between the device housing and the second surface from digitized samples of the set of one or more SMI signals.

Description

Gesture input system with wearable or handheld device based on self-mixing interferometry
Technical Field
This patent application is a non-provisional of, and claims the benefit under 35 U.S.C. § 119(e) of, U.S. provisional patent application No. 62/896,801 filed on September 6, 2019, the contents of which are incorporated herein by reference as if fully set forth herein.
The described embodiments relate generally to devices including self-mixing interferometry (SMI) sensors, and more particularly, to SMI-based gesture input systems including at least one of a wearable device or a handheld device.
Background
Sensor systems are included in many of today's electronic devices, including smartphones, computers (e.g., tablet or laptop computers), wearable electronic devices (e.g., electronic watches or health monitors), game controllers, navigation systems (e.g., vehicle or robotic navigation systems), and the like. A sensor system may variously sense the presence of an object, the distance to or proximity of an object, the movement of an object (e.g., whether an object is moving, or the speed, acceleration, or direction of movement of an object), and so on.
Given the wide range of applications for sensor systems, any new development in their configuration or operation can be useful. Developments that may be particularly useful are those that reduce the cost, size, complexity, part count, or manufacturing time of a sensor system, or that improve its sensitivity or speed of operation.
Disclosure of Invention
Embodiments of the systems, devices, methods, and apparatuses described in this disclosure relate to the configuration and operation of SMI-based gesture input systems including one or more SMI sensors. The SMI sensor may be used to determine a relationship between a wearable device or a handheld device and a surface, or a relationship between a wearable device and a handheld device, or a relationship between different wearable devices or different handheld devices. The relationship may include a characterization of one or more of a position, orientation, or motion of the wearable device or handheld device relative to the one or more surfaces. In some cases, the relationship may be used to identify one or more gestures made by a user of the SMI-based gesture input system.
An SMI sensor is defined herein as a sensor configured to generate electromagnetic radiation (e.g., light), emit electromagnetic radiation from a resonant cavity (e.g., a resonant optical cavity), receive reflected or backscattered electromagnetic radiation (e.g., electromagnetic radiation reflected or backscattered from a surface or object having a surface (collectively referred to herein as a surface) back into the resonant cavity), coherently or partially coherently self-mix the generated and reflected/backscattered electromagnetic radiation within the resonant cavity, and generate an output (i.e., an SMI signal) indicative of self-mixing. The electromagnetic radiation generated, transmitted, and received may be coherent or partially coherent. In some examples, the electromagnetic radiation emitted by the SMI sensor may be generated by an electromagnetic radiation source such as a Vertical Cavity Surface Emitting Laser (VCSEL), a Vertical External Cavity Surface Emitting Laser (VECSEL), a Quantum Dot Laser (QDL), a Quantum Cascade Laser (QCL), or a Light Emitting Diode (LED) (e.g., an Organic LED (OLED), a resonant cavity LED (RC-LED), a micro-LED (mLED), a Superluminescent LED (SLED), or an edge-emitting LED), or the like. The electromagnetic radiation generated, emitted, and received may include, for example, visible or invisible light (e.g., green light, infrared (IR) light, ultraviolet (UV) light, etc.). The output of the SMI sensor (i.e., the SMI signal) may include a photocurrent generated by a photodetector (e.g., a photodiode) that is integrated with or positioned below, above, or beside the electromagnetic radiation source of the sensor. Alternatively or in addition, the output of the SMI sensor may comprise a measurement of the current or junction voltage of the electromagnetic radiation source of the SMI sensor.
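As a rough numerical illustration of the self-mixing output described above, the sketch below (an idealized toy model, not the patent's implementation; the 940 nm wavelength and all other numbers are assumptions) simulates an SMI photocurrent for a target moving at a constant velocity along the beam axis, where the self-mixing term beats at the Doppler frequency f_D = 2v/λ:

```python
import numpy as np

def smi_photocurrent(velocity_m_s, wavelength_m=940e-9, fs=1e6,
                     duration_s=1e-3, p0=1.0, m=0.05):
    """Idealized SMI photodetector output for a target moving at constant
    velocity along the beam axis. The self-mixing term modulates the
    detected power at the Doppler beat frequency f_D = 2*v/lambda.
    (Toy model: the feedback parameter C and the linewidth-enhancement
    factor of a real laser are omitted.)"""
    n = int(round(fs * duration_s))
    t = np.arange(n) / fs
    f_doppler = 2.0 * velocity_m_s / wavelength_m
    return t, p0 * (1.0 + m * np.cos(2.0 * np.pi * f_doppler * t))

# A target moving at 1 mm/s, observed at a (hypothetical) 940 nm
# wavelength, produces a beat at about 2.13 kHz.
t, p = smi_photocurrent(1e-3)
print(round(2.0 * 1e-3 / 940e-9))  # 2128 (Hz)
```

Counting interference fringes in such a signal (one fringe per half-wavelength of displacement along the beam) is what gives SMI its sub-wavelength displacement resolution.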
In a first aspect, the present disclosure describes a wearable device including a device housing configured to be worn on a first surface of a user, a set of one or more SMI sensors, and a processor. The set of one or more SMI sensors may be mounted within the device housing and configured to emit a set of one or more beams of electromagnetic radiation, wherein each beam is emitted in a different direction extending away from the first surface. The set of one or more SMI sensors may be further configured to generate a set of one or more SMI signals containing information about a relationship between the device housing and a second surface. The processor may be configured to extract the relationship between the device housing and the second surface from digitized samples of the set of one or more SMI signals.
In another aspect of the disclosure, the disclosure describes a gesture input system. The gesture input system includes a wearable device configured to be worn by a user and an object configured to be held by the user. The gesture input system also includes a set of one or more SMI sensors and a processing system. Each SMI sensor may be mounted within the wearable device or the object and may be configured to emit a beam of electromagnetic radiation and generate an SMI signal. The processing system may be housed within at least one of the wearable device or the object and may be configured to receive a set of one or more SMI signals from the set of one or more SMI sensors. The processing system may be further configured to extract information from the set of one or more SMI signals regarding at least one of a time-varying relationship between the wearable device and the object, or a time-varying relationship between the wearable device and a surface other than the surface of the object.
In another aspect, the present disclosure describes a method of recognizing a gesture type. The method may include transmitting a beam of electromagnetic radiation from each SMI sensor of a set of one or more SMI sensors disposed in a wearable device, sampling SMI signals generated by each SMI sensor to generate a time-varying sample stream for each SMI sensor, determining movement of the wearable device relative to a surface using a processor of the wearable device and the time-varying sample stream of at least one SMI sensor of the set of one or more SMI sensors, and transmitting information indicative of the movement of the wearable device from the wearable device to a remote device.
In addition to the exemplary aspects and embodiments, further aspects and embodiments will become apparent by reference to the drawings and by study of the following descriptions.
Drawings
The disclosure will be readily understood by the following detailed description in conjunction with the accompanying drawings, wherein like reference numerals designate like structural elements, and in which:
FIG. 1 illustrates an exemplary SMI-based gesture input system including a wearable device;
FIGS. 2 and 3 illustrate additional examples of SMI-based gesture input systems including wearable devices;
FIG. 4 illustrates a wearable device having a set of SMI sensors from which a processor of the device may select a subset of the SMI sensors to determine a relationship between the wearable device and a surface;
FIG. 5 illustrates another exemplary SMI-based gesture input system that includes more than one device;
FIG. 6 shows an example of the system described with reference to FIG. 5, wherein the wearable device is a ring and the object is shaped as one or more of a stylus, pen, pencil, marker, or brush;
FIG. 7 shows an alternative embodiment of the system described with reference to FIG. 5, wherein the object is also a wearable device;
FIGS. 8A-8D illustrate exemplary SMI sensors that may be used in one or more of the SMI-based gesture input systems described with reference to FIGS. 1-7;
FIGS. 9A-9D illustrate different beam shaping or beam steering optics that may be used with any of the SMI sensors described with reference to FIGS. 1-8D;
FIG. 10 illustrates a triangulation process using self-mixing interferometry to determine velocity and absolute distance of a surface (or object);
FIG. 11 shows a block diagram of a system for implementing a spectrum analysis process using the process described with reference to FIG. 10;
FIG. 12 illustrates a sinusoidal bias process for determining displacement of a surface (or object) using orthogonal demodulation with self-mixing interferometry;
FIG. 13 illustrates an exemplary method of recognizing gesture types; and
FIG. 14 shows an exemplary electrical block diagram of an electronic device.
The use of cross-hatching or shading in the accompanying drawings is generally provided to clarify the boundaries between adjacent elements and to facilitate legibility of the drawings. Accordingly, neither the presence nor the absence of cross-hatching or shading conveys or indicates any preference or requirement for particular materials, material properties, proportions of elements, dimensions of elements, commonalities of similarly illustrated elements, or any other characteristic, attribute, or property of any element shown in the drawings.
Furthermore, it should be understood that the proportions and dimensions (either relative or absolute) of the various features and elements (and collections and groupings thereof), and the boundaries, separations, and positional relationships presented between them, are provided in the drawings merely to facilitate an understanding of the various embodiments described herein and, accordingly, may not necessarily be presented or illustrated to scale, and are not intended to indicate any preference or requirement for an illustrated embodiment to the exclusion of embodiments described with reference thereto.
Detailed Description
Reference will now be made in detail to the exemplary embodiments illustrated in the drawings. It should be understood that the following description is not intended to limit the embodiments to one preferred embodiment. On the contrary, it is intended to cover alternatives, modifications and equivalents as may be included within the spirit and scope of the embodiments as defined by the appended claims.
The following description relates to the configuration and operation of an SMI-based gesture input system (i.e., a system that may use signals received from one or more SMI sensors to recognize gestures made by a user). An SMI sensor may be used to optically measure the relative motion (displacement) between the SMI sensor and a target (e.g., a surface or object) with sub-wavelength resolution. When a displacement measurement is correlated with the measurement time, the velocity of the target may also be measured. Furthermore, by modulating the SMI sensor with a known wavelength modulation (e.g., a triangular modulation), the absolute distance from the SMI sensor to the target may be measured.
In Augmented Reality (AR), virtual Reality (VR), and Mixed Reality (MR) applications, among others, it may be useful to track a user's finger movements and/or to recognize gestures of the user (e.g., gestures made with one or more fingers, hands, arms, etc.). In some applications, it may be useful for a user to be able to provide input to the system by interacting with a surface (e.g., making gestures on any random surface such as a desktop, wall, or paper) or by making gestures in free space. In such applications, the SMI-based gesture input system may be used to track the user's finger movements with reference to any surface (in some cases, including the surface of another finger, the palm of the user, etc.).
An SMI-based gesture input system, and devices that may be worn or held by a user, are described herein. Some systems include a single wearable or handheld device. Other systems may include two or more wearable and/or handheld devices. A system may be provided with more or fewer SMI sensors; more sensors typically enable finer (higher resolution) tracking and more complex gesture detection/recognition, while fewer sensors enable coarser tracking and simpler gesture detection/recognition. For example, with one SMI sensor, scrolling along a single axis may be detected. With two SMI sensors, user motion in a plane may be tracked. With three or more SMI sensors, movement in the x, y, and z directions may be tracked. Motion with six degrees of freedom may also be tracked with three or more SMI sensors, in some cases by modulating the SMI sensors in particular or different ways.
In contrast to conventional optical tracking methods such as optical flow and speckle tracking, SMI-based tracking methods are largely insensitive to ambient light (e.g., sunlight or other ambient light) and can track motion with six degrees of freedom without requiring additional sensors to determine the distance to the target surface. An SMI-based gesture input system may also be used in a darkened room (e.g., a room without ambient light).
These and other techniques are described with reference to FIGS. 1-14. However, those skilled in the art will readily appreciate that the detailed description given herein with respect to these figures is for explanatory purposes only and should not be construed as limiting.
Directional terminology, such as "top", "bottom", "upper", "lower", "front", "rear", "above", "below", "left", "right", and the like, is used with reference to the orientation of some of the components in some of the figures described below. Because components in various embodiments can be positioned in a number of different orientations, the directional terminology is used for purposes of illustration only and is in no way limiting. The directional terminology is intended to be construed broadly and therefore should not be interpreted as excluding components oriented in a different manner. The use of alternatives such as "or" is intended to indicate any combination of the alternative elements. For example, "A or B" is intended to include A, or B, or A and B.
FIG. 1 illustrates an exemplary SMI-based gesture input system 100. The system 100 includes a device housing 102, a set of one or more SMI sensors 104 mounted within the device housing 102, a processing system 106 mounted within the device housing 102, and/or a communication interface 108 mounted within the device housing 102.
The device housing 102 may take various forms and, in some cases, may be configured to be worn or held by the user 110. When the device housing 102 is configured to be worn by the user 110, the device housing 102 may define a wearable device such as a ring, a whole or partial glove, a sleeve, or the like. When the device housing 102 is configured to be held by the user 110, the device housing 102 may define a stylus, another writing instrument (e.g., a pen or pencil), any object, and so on. In any event, the device housing 102 may be made of various materials (such as plastic, metal, or ceramic materials). In some cases, the device housing 102 may include multiple components, such as first and second rings that snap together or are otherwise joined (e.g., by adhesive or solder), first and second semicircular tubes that snap together or are otherwise joined (e.g., by adhesive or solder), or one or more pieces that define an open partial circle having one or more open ends that are plugged by a cap.
Each of the SMI sensors 104 may include a source of electromagnetic radiation. The electromagnetic radiation source may include a resonant cavity from which the electromagnetic radiation beam 112 is emitted. The electromagnetic radiation beam 112 may comprise a coherent (or partially coherent) mixture of 1) electromagnetic radiation generated by an electromagnetic radiation source, and 2) electromagnetic radiation received into a resonant cavity of the electromagnetic radiation source after reflection or backscatter from the surface 114. Each of the SMI sensors 104 may include a photodetector that generates an SMI signal 116 containing information about the relationship between the SMI sensor 104 and the surface 114. The SMI signal 116 generated by the SMI sensor 104 contains information corresponding to information contained in the electromagnetic radiation waveform received by the SMI sensor 104. Alternatively, SMI sensor 104 may output a measurement of the current or junction voltage of its electromagnetic radiation source as SMI signal 116.
The one or more SMI sensors 104 may emit a set of one or more beams of electromagnetic radiation 112. Different beams 112 may be emitted in different directions. In some cases, some or all of the beams 112 may be emitted in a direction extending away from a first surface of the user 110 (e.g., away from a surface of the user 110 on which the device housing 102 is worn). Some (or all) of the beams 112 may be emitted toward a second surface (e.g., the surface 114). The SMI signals 116 generated by the set of one or more SMI sensors 104 may contain information not only about the relationships between individual SMI sensors 104 and the surface 114, but also about the relationship between the device housing 102 and the surface 114, and thus about the position, orientation, or movement of the user 110 who is wearing or holding the device housing 102.
The processing system 106 may include, for example, one or more analog-to-digital converters (ADCs) 118 (e.g., one ADC 118 per SMI sensor 104), a processor 120, and/or other components for digitizing the SMI signals 116 output by the SMI sensors 104. In some cases, the processing system 106 may include filters, amplifiers, or other discrete circuitry for processing the SMI signals 116. The processor 120 may take various forms, such as that of a microprocessor, microcontroller, application-specific integrated circuit (ASIC), or the like.
The processor 120 may be configured to extract the relationship between the device housing 102 and the surface 114 from digitized samples of the one or more SMI signals 116. When the system 100 includes only one SMI sensor 104, or when the processor 120 uses only one SMI signal 116, the processor 120 may determine, for example, movement of the device housing 102 (and thus movement of the user 110) along the axis of that SMI sensor's emitted beam 112 (e.g., in the x, y, or z direction of a Cartesian coordinate system). When the system 100 includes only two SMI sensors 104, or when the processor 120 uses only two SMI signals 116, the processor 120 may determine, for example, movement of the device housing 102 (and thus movement of the user 110) in a plane (e.g., in the xy, xz, or yz plane of a Cartesian coordinate system, assuming the beams 112 are oblique to the plane (i.e., neither perpendicular nor parallel to it)). When the system 100 includes at least three SMI sensors 104, or when the processor 120 uses at least three SMI signals 116, the processor 120 may determine, for example, movement of the device housing 102 (and thus movement of the user 110) in free space (e.g., in the xyz space of a Cartesian coordinate system).
When the system 100 includes two or three SMI sensors 104, the beams 112 emitted by the SMI sensors 104 preferably have orthogonal axes, which decouples the SMI signals 116 to improve sensitivity and minimize error, and simplifies the processing burden (i.e., the computational burden) placed on the processor 120. However, the beams 112 need not have orthogonal axes if the angles between the beams 112 and the directions of the measured displacements are known. When the system 100 generates more SMI signals 116 than the processor 120 requires, or when the system 100 includes more than three SMI sensors 104, the processor 120 may analyze digitized samples of the plurality of SMI signals 116 and identify (based at least in part on the analysis) at least one SMI signal of the plurality of SMI signals 116 from which the relationship between the device housing 102 and the surface 114 is extracted. In the latter case, it is recognized that, in some instances, the device housing 102 may be positioned such that some of its SMI sensors 104 emit beams of electromagnetic radiation 112 in directions that are not useful, or in directions that result in different beams 112 impinging on different surfaces. Thus, the processor 120 may analyze digitized samples of the plurality of SMI signals 116 to determine which SMI signals 116 appear to contain useful information about the same surface (e.g., the processor 120 may be programmed to assume that SMI signals 116 indicating that a surface is within a threshold distance are generated by SMI sensors 104 facing the user's palm or another nearby body part, and then ignore those SMI signals 116). Alternatively, the user 110 of the system may position the device housing 102 so that its SMI sensors 104 emit beams of electromagnetic radiation 112 in useful directions.
In some embodiments, the processor 120 may be configured to transmit information indicative of the relationship between the device housing 102 and the surface 114 using the communication interface 108. The information may be transmitted to a remote device. In some cases, the transmitted information may include a sequence of time-dependent measurements, or a sequence of time-dependent positions, orientations, or movements. In other cases, the processor 120 may be configured to recognize one or more gestures made by the user 110 and transmit indications of the one or more gestures (these indications being a form of information indicating the relationship between the device housing 102 and the surface 114). The processor 120 may identify a gesture of the user 110 by comparing a sequence of changes in one or more SMI signals 116, obtained from one or more SMI sensors 104, to one or more stored sequences that have been associated with one or more gestures. For example, the processor 120 may compare a sequence of changes in an SMI signal 116 to a stored sequence corresponding to a press or tap and, upon determining a match (or determining that the sequences are sufficiently similar to indicate a match), the processor 120 may indicate that the user 110 has made a press or tap gesture. Similarly, upon comparing the sequences of changes in a set of SMI signals 116 to a set of stored sequences corresponding to the user 110 writing the letter "A", or to a set of stored sequences corresponding to the user 110 making a circular motion, and determining a match to one of these gestures, the processor 120 may indicate that the user 110 has drawn the letter "A" or made a circular gesture.
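The sequence-comparison approach just described can be sketched as a nearest-template classifier. The template shapes and gesture names below are invented for illustration; a production system might instead use dynamic time warping or a learned classifier:

```python
import numpy as np

def match_gesture(sample_seq, templates):
    """Return the name of the stored gesture template closest to the
    recorded sequence of SMI-derived motion samples, after amplitude
    normalization. (Sketch: plain Euclidean distance on equal-length
    sequences; template names and shapes are hypothetical.)"""
    def normalize(s):
        s = np.asarray(s, dtype=float)
        return (s - s.mean()) / (s.std() + 1e-12)

    x = normalize(sample_seq)
    best_name, best_dist = None, np.inf
    for name, template in templates.items():
        dist = np.linalg.norm(x - normalize(template))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name

templates = {
    "press":  [0, 0, -1, -1, 0, 0],  # brief dip toward the surface
    "circle": [0, 1, 0, -1, 0, 1],   # oscillating displacement
}
print(match_gesture([0, 0, -0.9, -1.1, 0.1, 0], templates))  # press
```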
In addition to or instead of comparing the sequence of changes in the one or more SMI signals 116 to stored sequences of changes, the processor 120 may determine a set of time-dependent positions, orientations, motion vectors, or other pieces of information in one, two, or three dimensions from the sequence of changes in the one or more SMI signals 116, and may compare this alternative information to stored information that has been associated with one or more predetermined gestures.
When determining the movement of the device housing 102 relative to the surface 114 using only a sequence of three time-dependent measurements, there is ambiguity between displacement and rotation. This is because characterizing motion in a Cartesian coordinate system requires characterizing six degrees of freedom (6DoF). A 6DoF characterization requires characterizing six unknowns, which in turn requires a sequence of six time-dependent measurements: for example, not only measurements of displacement along three axes (the x-axis, y-axis, and z-axis), but also measurements of rotation (e.g., yaw, pitch, and roll) about each of the three axes. In other words, the processor 120 cannot solve for six unknowns using a sequence of only three time-dependent measurements. To provide a sequence of three additional time-dependent measurements, the processor 120 may use SMI signals 116 obtained from six different SMI sensors 104 that emit beams 112 directed in six different directions toward the surface 114. Alternatively, the processor 120 may obtain a sequence of two or more time-dependent measurements from each of a fewer number of SMI sensors 104. For example, the processor 120 may alternately modulate the input of each of a set of three SMI sensors 104 using a sinusoidal waveform and a triangular waveform, and obtain a sequence of time-dependent measurements for each modulation type from each of the three SMI sensors 104 (e.g., the processor 120 may modulate the input of each SMI sensor 104 using a sinusoidal waveform during a first set of time periods, and modulate the input of each SMI sensor 104 using a triangular waveform during a second set of time periods). Modulating the input with a triangular waveform may provide absolute distance measurements, which may not be obtainable using sinusoidal waveform modulation.
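The six-measurement requirement above corresponds to a 6x6 linear system under a rigid-body model: a sensor at position r (in the housing frame) with beam direction b measures m = b · v + (r × b) · ω, where v is the translation rate and ω the rotation rate. A sketch with hypothetical sensor placements (the positions, directions, and motion values below are invented for the example):

```python
import numpy as np

def solve_6dof(positions, directions, measurements):
    """Solve for the rigid-body translation rate v and rotation rate omega
    of the device housing from six line-of-sight SMI velocity measurements.
    A sensor at position r with beam direction b measures
    m = b . v + (r x b) . omega, giving one row of a 6x6 system.
    (Sketch; the sensor geometry is assumed known and non-degenerate.)"""
    rows = [np.concatenate([np.asarray(b, float),
                            np.cross(np.asarray(r, float), np.asarray(b, float))])
            for r, b in zip(positions, directions)]
    solution = np.linalg.solve(np.array(rows), np.asarray(measurements, float))
    return solution[:3], solution[3:]  # (v, omega)

# Six sensors, 1 cm from the housing center, with mixed beam directions.
pos  = [[0.01, 0, 0], [0, 0.01, 0], [0, 0, 0.01],
        [0.01, 0, 0], [0, 0.01, 0], [0, 0, 0.01]]
dirs = [[1, 0, 0], [0, 1, 0], [0, 0, 1],
        [0, 1, 0], [0, 0, 1], [1, 0, 0]]

# Forward model: synthesize measurements from a known motion, then recover it.
v_true, w_true = np.array([1e-3, -2e-3, 5e-4]), np.array([0.2, -0.1, 0.3])
meas = [np.dot(b, v_true) + np.dot(np.cross(r, b), w_true)
        for r, b in zip(pos, dirs)]
v, w = solve_6dof(pos, dirs, meas)
print(np.allclose(v, v_true) and np.allclose(w, w_true))  # True
```

With fewer sensors, alternating sinusoidal and triangular modulation, as described above, can supply the extra equations in time rather than in hardware.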
Communication interface 108 may include a wired and/or wireless communication interface operable to communicate with a remote device such as a mobile phone, electronic watch, tablet, or laptop computer (e.g., a Bluetooth Low Energy (BLE), Wi-Fi, or Universal Serial Bus (USB) interface).
Fig. 2 and 3 illustrate examples of SMI-based gesture input systems, which may be embodiments of the system described with reference to FIG. 1. FIG. 2 illustrates an exemplary SMI-based gesture input system in the form of a closed loop 200. The closed loop 200 may be configured to receive a finger 202 of a user (i.e., the closed loop 200 may be a finger ring). A set of SMI sensors 204 housed within the closed loop 200 may emit beams 206 of electromagnetic radiation through apertures and/or window elements transparent to the wavelengths of the emitted beams 206. By way of example, the closed loop 200 includes three SMI sensors 204 that emit orthogonal beams 206 of electromagnetic radiation. In alternative embodiments, the closed loop 200 may include more or fewer SMI sensors 204 that emit orthogonal or non-orthogonal beams 206 of electromagnetic radiation.
FIG. 3 illustrates an exemplary SMI-based gesture input system in the form of an open loop 300. The open loop 300 may be configured to receive a finger 302 of a user (e.g., the open loop 300 may be a ring). The open loop 300 may include SMI sensors 304 disposed to emit beams 306 of electromagnetic radiation along the open loop body 308 and/or from one or both ends 310, 312 of the open loop body 308 (e.g., from covers at the ends 310, 312 of the open loop body 308). By way of example, the open loop 300 includes three SMI sensors 304 that emit orthogonal beams 306 of electromagnetic radiation. In alternative embodiments, the open loop 300 may include more or fewer SMI sensors 304 that emit orthogonal or non-orthogonal beams 306 of electromagnetic radiation. While the SMI sensors 304 are shown in FIG. 3 as being proximate to both ends 310, 312 of the open loop 300, all of the SMI sensors 304 (or a different number of SMI sensors 304) may alternatively be disposed proximate to just one end of the open loop 300.
As shown in FIG. 3, an open loop may be useful because it may leave the inner surface of the user's hand unobstructed, which in some cases may improve the user's ability to grasp an object, feel texture on a surface, or receive tactile output provided via a surface.
In some embodiments, the wearable device described with reference to any of FIGS. 1-3 may determine the absolute distance, direction, and speed of a surface relative to an SMI sensor by modulating the input of the SMI sensor with a triangular waveform, as described with reference to FIGS. 10 and 11. The displacement of the surface can then be obtained by integrating the velocity. In some embodiments, the wearable device may use I/Q demodulation to determine the displacement and direction of the surface relative to the SMI sensor (in the time domain), as described with reference to FIG. 12. The absolute distance can then be obtained using triangular waveform modulation.
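The displacement-by-integration step can be sketched minimally, assuming uniformly spaced velocity samples and trapezoidal integration (the function and parameter names here are illustrative):

```python
def displacement_from_velocity(velocities, dt):
    """Trapezoidal integration of a uniformly sampled velocity trace.

    velocities: list of velocity samples (m/s) along one axis.
    dt: sample spacing in seconds.
    Returns the cumulative displacement at each sample time."""
    disp = [0.0]
    for v0, v1 in zip(velocities, velocities[1:]):
        disp.append(disp[-1] + 0.5 * (v0 + v1) * dt)
    return disp
```

Because integration accumulates error, an absolute-distance fix (e.g., from triangular modulation) can be used periodically to re-anchor the integrated displacement.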
In some cases, a wearable device such as a finger ring may include a deformable or compressible insert that enables the finger ring to be worn farther from or closer to the user's fingertip.
In some cases, the ring may be rotated by the user such that the ring may alternately sense a surface under the user's hand, a surface of an object held by the user, an adjacent finger, and the like.
In some cases, the wearable device may include sensors other than SMI sensors, such as an inertial measurement unit (IMU). In some cases, the additional sensors may also be used to characterize motion. The wearable device may also include a haptic engine to provide haptic feedback to a user, a battery, and/or other components.
Fig. 4 illustrates a wearable device 400 having a set of SMI sensors 402, from which a processor of the device 400 may select a subset 404 to determine a relationship between the wearable device 400 and a surface 406. Alternatively, the processor of the device 400 may use the SMI signals generated by different subsets 404, 408 of the SMI sensors 402 to determine the relationships between the wearable device 400 and different surfaces 406, 410 (e.g., a desktop 406 and a user's finger 410 adjacent to the finger on which the device 400 is worn). By way of example, the wearable device 400 is shown as a closed finger ring (e.g., a wearable device having a form factor similar to that of the closed ring described with reference to FIG. 2). In alternative embodiments, the device 400 may take other forms.
In fig. 4, SMI sensors 402 are grouped into subsets of three SMI sensors 402, and the subsets are located at different locations around the circumference of device 400. In other embodiments, a subset of SMI sensors 402 may have a different number of SMI sensors 402 (in some cases, only one SMI sensor 402 is included). In some embodiments, SMI sensors 402 may not be arranged in discrete subsets, and the processor of apparatus 400 may analyze the SMI signals received from SMI sensors 402 and dynamically identify a subset of one or more of SMI sensors 402 in response to analyzing the SMI signals. The processor may also determine that one or more of the SMI sensors did not generate a useful SMI signal and exclude those SMI sensors from inclusion in any subset (and in some cases, those SMI sensors may not be used until a change in the SMI signal of those SMI sensors is identified).
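The dynamic identification of a useful subset of SMI sensors may, for example, be based on the AC energy of each sensor's SMI signal; the sketch below (with an illustrative threshold and data layout) excludes sensors whose signals carry little interferometric content:

```python
def active_sensors(sample_streams, min_ac_rms):
    """Identify sensors whose SMI signals have enough AC energy to be useful.

    sample_streams: dict mapping sensor_id -> list of recent SMI samples.
    min_ac_rms: minimum RMS of the mean-removed (AC) signal to keep a sensor.
    Returns the list of sensor ids to include in the active subset."""
    active = []
    for sensor_id, samples in sample_streams.items():
        mean = sum(samples) / len(samples)
        ac_rms = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
        if ac_rms >= min_ac_rms:
            active.append(sensor_id)
    return active
```

Excluded sensors could be polled at a reduced rate until a change in their SMI signals is detected, consistent with the behavior described above.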
In some embodiments of the device 400 (or in embodiments of other devices described herein), the device 400 may include one or more sensors for determining an orientation of the device 400 relative to its user (e.g., relative to a finger on which the device 400 is worn, one or more adjacent fingers, a palm of the user, etc.) or a surface (e.g., a desktop, paper, wall, surface of the user's body, etc.). The sensors may include, for example, one or more of a proximity sensor, a contact sensor, a pressure sensor, an accelerometer, an IMU, and the like.
FIG. 5 illustrates another exemplary SMI-based gesture input system 500. In contrast to the system described with reference to fig. 1, system 500 may include more than one device. For example, system 500 may include a wearable device 502 configured to be worn by a user, and an object 504 configured to be held by the user.
In some embodiments, wearable device 502 may be configured similar to the wearable device described with reference to fig. 1, and may include a device housing 506, a set of one or more SMI sensors 508 mounted within device housing 506, a processing system 510 mounted within device housing 506, and/or a communication interface 512 mounted within device housing 506. Device housing 506, SMI sensor 508, processing system 510, and/or communication interface 512 may be configured similar to the corresponding components described with reference to fig. 1. In some embodiments, wearable device 502 may be a ring, such as described with reference to fig. 2 or 3.
In some implementations, the object 504 may be shaped as one or more of a stylus, pen, pencil, marker, or brush. The object 504 may also take other forms.
In some cases, one or more of SMI sensors 508 in wearable device 502 may emit a beam of electromagnetic radiation 514 that impinges on object 504. As object 504 is moved by a user, such as for writing or drawing, the relationship between wearable device 502 and object 504 may change. Processing system 510 may extract information regarding the time-varying relationship between wearable device 502 and object 504 (and/or information regarding the time-varying relationship between wearable device 502 and a surface other than the surface of object 504) from the SMI signals of SMI sensors 508, and in some cases may identify one or more gestures made by the user. In some cases, a gesture may include an alphanumeric string (one or more characters) written by the user. In these cases, processing system 510 may be configured to identify the alphanumeric string from the information regarding the time-varying relationship between wearable device 502 and object 504. SMI sensors 508 may also or alternatively be used to determine whether a user is holding object 504 and to track or predict movement of object 504. For example, if object 504 is a writing instrument (e.g., a pen), the SMI signals generated by SMI sensors 508 may be analyzed to determine whether the user is holding object 504, and in some cases, whether the user is holding object 504 loosely or tightly. The processing system 510 may determine whether the user is about to write, make a gesture, etc., based on the presence of the object 504 and/or the user's grip and/or movement of the object 504. The processing system 510 may then fully wake the wearable device 502 in response to the presence, grasp, and/or movement of the object 504, or begin recording the movement of the object 504 and/or recognizing letters written by the user with the object 504, gestures made, and the like.
In some implementations, processing system 510 may switch wearable device 502 to a first mode, in which SMI sensors 508 are used to track movement relative to a desktop or the user, when object 504 is not detected, and switch wearable device 502 to a second mode, in which SMI sensors 508 are used to track movement of object 504, when object 504 is detected. In some embodiments, SMI sensors 508 may track the movement of object 504 by tracking the movement of wearable device 502 relative to a desktop or other surface (i.e., a surface other than the surface of object 504). This is because the user's grip on the object 504 may affect the manner in which the user holds their hand or moves their fingers, which may in turn indicate the manner in which the user moves the object 504 (e.g., indicate the letter or gesture the user is writing with the object 504). In some cases, wearable device 502 may effectively transform any object (including non-electronic or "dumb" objects) into a smart pen or the like.
In some cases, wearable device 502 may have a relatively greater number of SMI sensors 508, such as described with reference to fig. 4. In some cases, object 504 may have one or more SMI sensors 516 therein, in addition to wearable device 502 having one or more SMI sensors 508 therein. When provided, SMI sensors 516 may be used similarly to SMI sensors 508 included in wearable device 502, and may determine the relationship of object 504 to the wearable device, the user's skin (i.e., a surface of the user), or a remote surface (e.g., surface 518). SMI sensors 516 may be positioned along the body of object 504 (e.g., proximate to where the user may hold object 504) or near the tip of object 504 (e.g., proximate to the pointing, writing, or painting tip of object 504). In some embodiments, object 504 may include a processing system and/or a communication interface for communicating the SMI signals generated by SMI sensors 516, or information related thereto or derived therefrom, to wearable device 502. Alternatively or additionally, the processing system and/or communication interface may receive SMI signals, or information related thereto or derived therefrom, from wearable device 502. The wearable device 502 and the object 504 may communicate wirelessly or may be connected by wires, cables, and/or leads. In some implementations, the processing system 510 of the wearable device 502 may bear a majority of the processing burden (e.g., recognizing gestures). In other embodiments, the processing system of object 504 may bear a majority of the processing burden, or the processing burden may be shared. In still other embodiments, object 504 may include all of the system's SMI sensors and processing.
Fig. 6 shows an example of the system described with reference to fig. 5, where wearable device 502 is a ring and object 504 is shaped as one or more of a stylus, pen, pencil, marker, or brush.
In some cases, the SMI-based gesture input system may include more than one wearable device and/or more than one handheld device. For example, fig. 7 shows an alternative embodiment of the system described with reference to fig. 5, wherein the object 504 is also a wearable device. By way of example, both wearable device 502 and object 504 are shown as rings. For example, rings worn on the user's thumb and forefinger may be used to recognize gestures, such as pinching, zooming, rotating, etc.
An SMI-based gesture input system, such as one of the systems described with reference to fig. 1-7, may be used in some cases to provide input to AR, VR, or MR applications. The SMI-based gesture input system may also be used as an anchor for another system. For example, in a camera-based gesture input system, it is difficult to determine whether a camera or a user's hand (or finger) is moving. The SMI-based gesture input system may replace the camera-based gesture input system or may provide anchoring information to the camera-based gesture input system.
FIG. 8A illustrates a first exemplary SMI sensor 800 that may be used in one or more of the SMI-based gesture input systems described with reference to FIGS. 1-7. In this example, the SMI sensor 800 may include a VCSEL 802 with an integrated resonant cavity (or intra-cavity) photodetector (RCPD) 804.
FIG. 8B illustrates a second exemplary SMI sensor 810 that may be used in one or more of the SMI-based gesture input systems described with reference to FIGS. 1-7. In this example, the SMI sensor 810 may include a VCSEL 812 with an external on-chip RCPD 814. For example, the RCPD 814 may form a disk surrounding the VCSEL 812.
FIG. 8C illustrates a third exemplary SMI sensor 820 that may be used in one or more of the SMI-based gesture input systems described with reference to FIGS. 1-7. In this example, the SMI sensor 820 may include a VCSEL 822 with an external off-chip photodetector 824.
FIG. 8D illustrates a fourth exemplary SMI sensor 830 that may be used in one or more of the SMI-based gesture input systems described with reference to FIGS. 1-7. In this example, the SMI sensor 830 may comprise a dual emission VCSEL 832 with an external off-chip photodetector 834. For example, top emission may be emitted toward the optics and/or another target, and bottom emission may be provided to an external off-chip photodetector 834.
Fig. 9A-9D illustrate different beam shaping or beam steering optics that may be used with any of the SMI sensors described with reference to fig. 1-8D. Fig. 9A illustrates beam shaping optics 900 (e.g., a lens or collimator) that collimate an electromagnetic radiation beam 902 emitted by an SMI sensor 904. A collimated beam may be useful when the range supported by the device is relatively large (e.g., when the device has a range of about ten centimeters). Fig. 9B illustrates beam shaping optics 910 (e.g., a lens) that focus an electromagnetic radiation beam 912 emitted by an SMI sensor 914. Focusing the beam of electromagnetic radiation may be useful when the range supported by the device is limited (e.g., limited to a few centimeters). Fig. 9C shows electromagnetic radiation beams 922 emitted by a set of SMI sensors 924 and directed toward beam steering optics 920 (e.g., a lens or a set of lenses) such that the beams 922 converge. Alternatively, the SMI sensors 924 may be configured or oriented such that their beams converge without the optics 920. In some embodiments, the beam steering optics 920 may include or be associated with beam shaping optics (such as the beam shaping optics described with reference to fig. 9A or 9B). Fig. 9D shows electromagnetic radiation beams 932 emitted by a set of SMI sensors 934 and directed toward beam steering optics 930 (e.g., a lens or a set of lenses) such that the beams 932 diverge. Alternatively, the SMI sensors 934 may be configured or oriented such that their beams diverge without the optics 930. In some embodiments, the beam steering optics 930 may include or be associated with beam shaping optics (such as the beam shaping optics described with reference to fig. 9A or 9B).
Fig. 10 illustrates a triangular-modulation process 1000 for determining the velocity and absolute distance of a surface (or object) using self-mixing interferometry. Process 1000 may be used by one or more of the systems or devices described with reference to figs. 1-7 to modulate an SMI sensor using a triangular waveform, for example, as described with reference to fig. 1.
At an initial stage 1002, an initial signal is generated, such as by a digital or analog signal generator. At stage 1006-1, the generated initial signal is processed as needed to produce a triangle waveform modulation current 1102 (see fig. 11) that is applied to the VCSEL. Stage 1006-1 may include, as needed, operation of a digital-to-analog converter (DAC) (e.g., when the initial signal is the output of a digital step generator), low pass filtering (e.g., to remove quantization noise introduced by the DAC), and voltage-to-current conversion.
Application of the modulation current 1102 to the VCSEL induces an SMI output 1118 (i.e., a change in the interference characteristics of the VCSEL). For simplicity of discussion, it will be assumed that the SMI output 1118 is from a photodetector, but in other embodiments the SMI output may be from another component.
At initial stage 1004 of FIG. 10, SMI output 1118 is received. At stage 1006-2, initial processing of SMI output 1118 is performed as needed. Stage 1006-2 may include high pass filtering or digital subtraction.
At stage 1008, the processor may equalize the received signals, if desired, to match the peak-to-peak, average, root mean square, or other characteristic values of the signals. For example, the SMI output 1118 may have a primary triangle waveform component that matches the modulation current 1102, with smaller, higher-frequency components superimposed due to changes in the interferometric properties. High pass filtering may be applied to the SMI output 1118 to obtain the component signal related to the interferometric properties. This stage may also include separating and/or subtracting the portions of the SMI output 1118 and the modulation current 1102 corresponding to the rising and falling time intervals of the modulation current 1102, and may include sampling the separated information.
At stages 1010 and 1012, separate Fast Fourier Transforms (FFTs) are performed on the portions of the processed SMI output 1118 corresponding to the rising and falling time intervals. The two FFT spectra may then be analyzed at stage 1014.
At stage 1016, the FFT spectrum may be further processed, for example, to remove artifacts and reduce noise. Such further processing may include peak detection and gaussian fitting around the detected peaks to improve frequency accuracy. From the processed FFT spectral data, information about absolute distance may be obtained at stage 1018.
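The Gaussian fit around a detected FFT peak may be sketched as a log-parabolic interpolation over the peak bin and its two neighbors. This interpolation is exact for an ideally Gaussian peak shape; the function and variable names are illustrative:

```python
import math

def refine_peak(mags, k):
    """Refine the location of an FFT magnitude peak at integer bin k.

    Fits a parabola to the log-magnitudes of bins k-1, k, k+1 (equivalent
    to a Gaussian fit on the magnitudes) and returns the interpolated,
    fractional peak bin, improving frequency accuracy beyond bin spacing."""
    a = math.log(mags[k - 1])
    b = math.log(mags[k])
    c = math.log(mags[k + 1])
    delta = 0.5 * (a - c) / (a - 2.0 * b + c)
    return k + delta
```

The refined fractional bin is then multiplied by the FFT bin spacing to obtain the refined peak frequency.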
Fig. 11 illustrates a block diagram of a system (e.g., a portion or all of the processing system described with reference to figs. 1-7) that may implement the spectral analysis of the method described above with respect to fig. 10. In the exemplary system shown, an initial digital signal is generated and processed as needed to produce a modulation current 1102 as an input to the VCSEL 1110. In an illustrative example, an initial step signal may be generated by a digital step generator to approximate a triangle waveform. The digital output values of the generator are converted to an analog voltage signal by a digital-to-analog converter (DAC) 1104. The resulting voltage signal may then be filtered by a low pass filter 1106 to remove quantization noise. Alternatively, an integrator-based analog signal generator may be used to generate the equivalent voltage signal directly. The filtered voltage signal is then the input to a voltage-to-current converter 1108, which produces the desired modulation current 1102 for input to the VCSEL 1110.
As described above, movement of the target may cause a change in interferometric parameters, such as parameters of the VCSEL 1110 or parameters of a photodetector operating in the system. These changes may be measured to generate the SMI output 1118. In the illustrated embodiment, it will be assumed that the SMI output 1118 is measured by a photodetector. For modulated current 1102 having a triangular waveform, SMI output 1118 may be a triangular wave of similar period combined with smaller and higher frequency signals associated with interferometric properties. In some cases, even though modulation current 1102 is linear, SMI output 1118 may not be perfectly linear. This may be due to the non-linearity of the bias current versus the optical output curve of the VCSEL 1110 (e.g., due to non-idealities such as self-heating effects).
The SMI output 1118 is first passed to a high pass filter 1120 that effectively converts the dominant rising and falling ramp components of the SMI output 1118 to DC offsets. Because the SMI output 1118 may typically be a current, the transimpedance amplifier 1122 may generate a corresponding voltage output (amplified or not) for further processing.
The voltage output may then be sampled and quantized by ADC block 1124. It may be helpful to apply equalization immediately before applying the digital FFT to the output of ADC block 1124. The initial digital signal value from the digital generator used to generate the modulated current 1102 is used as an input to a digital high pass filter 1112 to generate a digital signal associated with the output of the ADC block 1124. The digital variable gain block 1114 may apply an adjustable gain to the output of the digital high pass filter 1112.
The output of the digital variable gain block 1114 serves as one input to a digital equalizer and subtractor block 1116. The other input to the digital equalizer and subtractor block 1116 is the output of ADC block 1124. The two signals are differenced, and the difference is used as part of a feedback loop to adjust the gain applied by the digital variable gain block 1114.
Equalization and subtraction may be used to remove any remaining triangle-waveform artifacts from the SMI output 1118. For example, if there is a slope error or non-linearity in the SMI output 1118, the digital high pass filter 1112 may not completely eliminate the triangle waveform, and artifacts may remain. Such artifacts may appear as low-frequency components after the FFT, making peak detection difficult for nearby objects. These artifacts may be partially or completely removed by applying equalization and subtraction.
Once the feedback has converged on the best cancellation, the FFT indicated by block 1128 may be applied to the components of the output of ADC block 1124 corresponding to the rising and falling sides of the triangle wave. From the obtained FFT spectra, the peak frequencies detected on the rising and falling sides may be used to infer absolute distance and/or directional velocity, as described above and indicated by block 1126.
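The inference of absolute distance and directional velocity from the rising-side and falling-side peak frequencies can be sketched as follows, under the usual triangular-FMCW assumptions (known optical frequency sweep rate, known wavelength, and beat frequencies that remain positive on both ramps); the parameter names are illustrative:

```python
C_M_PER_S = 299_792_458.0  # speed of light

def distance_and_velocity(f_up, f_down, sweep_rate_hz_per_s, wavelength_m):
    """Infer absolute distance and directional velocity from the beat
    frequencies on the rising (f_up) and falling (f_down) ramps.

    The range contribution is common to both ramps, while the Doppler
    contribution adds on one ramp and subtracts on the other:
        f_up   = f_range - f_doppler
        f_down = f_range + f_doppler
    """
    f_range = 0.5 * (f_up + f_down)          # Hz, round-trip delay term
    f_doppler = 0.5 * (f_down - f_up)        # Hz, target motion term
    distance = C_M_PER_S * f_range / (2.0 * sweep_rate_hz_per_s)
    velocity = f_doppler * wavelength_m / 2.0  # sign gives direction
    return distance, velocity
```

Sign conventions vary with the direction of target motion relative to the sweep; a practical implementation would also handle the case where the Doppler shift exceeds the range term on one ramp.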
The method just described and its variants involve applying spectral analysis to the SMI output. However, it should be understood that this is an example. In other implementations, alternative methods for determining absolute distance may be obtained directly from the time domain SMI output without applying spectral analysis. Various configurations are possible and contemplated without departing from the scope of this disclosure.
Fig. 12 illustrates a sinusoidal bias process 1200 for determining displacement of a surface (or object) using quadrature demodulation with self-mixing interferometry. Process 1200 may be used by one or more of the systems or devices described with reference to fig. 1-7 to modulate an SMI sensor using a sinusoidal waveform, for example, as described with reference to fig. 1.
As explained in more detail below, fig. 12 shows the components that generate and apply a sinusoidal modulated bias current to the VCSEL. The sinusoidal bias current may generate an output current in the photodetector 1216, depending on the frequency of the sinusoidal bias and the displacement of the structural components of the device. In the circuit of fig. 12, the output current of the photodetector 1216 is digitally sampled and then multiplied by a first sine wave at the original sinusoidal modulation frequency of the bias current and a second sine wave at a frequency twice that original frequency. The two separate multiplied outputs are then each low pass filtered and the phase of the interferometry parameter can be calculated. Thereafter, at least the phase is used to determine the displacement.
The DC voltage generator 1202 is used to generate a constant bias voltage. Sine wave generator 1204 may generate a sine signal of about a single frequency to be combined with a constant voltage. As shown in fig. 12, the sine wave generator 1204 is a digital generator, but in other implementations, the sine wave generator can generate an analog sine wave. The low pass filter 1206-1 provides filtering of the output of the DC voltage generator 1202 to reduce unwanted variations in the constant bias voltage. The band pass filter 1206-2 may be used to reduce distortion and noise in the output of the sine wave generator 1204 to reduce noise, quantization or other distortion, or frequency components whose signals are far from their intended modulation frequency omega m.
The circuit summer 1208 combines the low-pass filtered constant bias voltage with the band-pass filtered sine wave to produce a combined voltage signal on the link 1209, which in the embodiment of fig. 12 has the form V0 + Vm sin(ωm t). The voltage signal is used as an input to a voltage-to-current converter 1210 to generate a current to drive the lasing of the VCSEL 1214. The current on line 1213 from the voltage-to-current converter 1210 may have the form I0 + Im sin(ωm t).
The VCSEL 1214 is thus driven to emit laser light modulated as described above. The reflection of the modulated laser light may then be received back into the laser cavity of the VCSEL 1214 and cause self-mixing interference. The resulting emitted optical power of the VCSEL 1214 may be modified by the self-mixing interference, and the modification may be detected by the photodetector 1216. As described above, in this case, the photocurrent output of the photodetector 1216 on link 1215 can have the form iPD(t) = i0 + im sin(ωm t) + γ cos(φ0 + b sin(ωm t)), where the first two terms track the modulated bias current and the third term carries the self-mixing interference information.
Since the I/Q components to be used in the subsequent stages are based on the third term only, the first two terms can be removed or reduced by a differential transimpedance amplifier and anti-aliasing (DTIA/AA) filter 1218. To do this, a scaled version of the first two terms is generated by the voltage divider 1212. The voltage divider 1212 may use as input the combined voltage signal generated by the circuit summer 1208 on the link 1209. The output of voltage divider 1212 on link 1211 may then have the form α(V0 + Vm sin(ωm t)). The photodetector current and this output of the voltage divider 1212 may be the inputs to the DTIA/AA filter 1218. The output of the DTIA/AA filter 1218 may then be at least largely proportional to the third term of the photodetector current.
The output of the DTIA/AA filter 1218 may then be quantized for subsequent computation by ADC block 1220. Further, the output of the ADC block 1220 may have a residual signal component proportional to the sine wave initially generated by the sine wave generator 1204. To filter out the residual signal component, the initially generated sine wave may be scaled at multiplier block 1224-3 (such as by a factor β) and then subtracted from the output of ADC block 1220 at subtraction block 1222. According to the Fourier expansion of the cosine term above, the filtered output on link 1221 may have the form A + B sin(ωm t) + C cos(2ωm t) + D sin(3ωm t) + …. The filtered output may then be used to extract the I/Q components by mixing.
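The stated form of the filtered output follows from the Jacobi–Anger expansion of the interference term; writing φ for the interferometric phase, the expansion (a standard identity, shown here for reference) is:

```latex
\cos\bigl(\varphi + b\sin(\omega_m t)\bigr)
  = \Bigl[J_0(b) + 2\sum_{k\ge 1} J_{2k}(b)\cos(2k\,\omega_m t)\Bigr]\cos\varphi
  - \Bigl[2\sum_{k\ge 1} J_{2k-1}(b)\sin\bigl((2k-1)\,\omega_m t\bigr)\Bigr]\sin\varphi
```

Truncating after the 3ωm term yields the quoted form A + B sin(ωm t) + C cos(2ωm t) + D sin(3ωm t) + …, with B ∝ J1(b) sin φ and C ∝ J2(b) cos φ (up to sign). This is why mixing the filtered output with sin(ωm t) and with cos(2ωm t), followed by low pass filtering, recovers the Q and I components, and why the condition J1(b) = J2(b) discussed below equalizes their amplitudes.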
Multiplier block 1224-1 mixes (multiplies) the digital sine wave initially generated by sine wave generator 1204 on link 1207 with the filtered output on link 1221. The product is then low pass filtered at block 1228-1 to obtain the Q component described above, possibly after scaling by a number related to the amount of frequency modulation of the laser and the distance to the target.
In addition, the initially generated digital sine wave is used as an input into the squaring/filtering block 1226 to generate a digital cosine wave having a frequency twice that of the initially generated digital sine wave. The digital cosine wave is then mixed (multiplied) with the filtered output of ADC block 1220 on link 1221 at multiplier block 1224-2. The product is then low pass filtered at block 1228-2 to obtain the I component described above, possibly after scaling by a number related to the amount of frequency modulation of the laser and the distance to the target.
The phase calculation section 1230 then uses the Q component and the I component to obtain a phase from which the displacement of the target can be calculated, as described above.
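The phase-to-displacement step may be sketched as follows, assuming the round-trip interferometric phase changes by 4π/λ per unit of target displacement and that per-sample motion stays below λ/4 so that simple sample-to-sample unwrapping suffices (function and parameter names are illustrative):

```python
import math

def iq_to_displacement(i_samples, q_samples, wavelength):
    """Convert sampled I/Q components to target displacement.

    The wrapped phase is atan2(Q, I); successive samples are unwrapped by
    limiting each phase step to (-pi, pi], and the unwrapped phase is
    scaled by wavelength / (4*pi) (round-trip: one fringe per lambda/2).
    Returns displacement relative to the first sample, in the same units
    as wavelength."""
    wrapped = [math.atan2(q, i) for i, q in zip(i_samples, q_samples)]
    unwrapped = [wrapped[0]]
    for prev_w, cur_w in zip(wrapped, wrapped[1:]):
        step = cur_w - prev_w
        step = (step + math.pi) % (2.0 * math.pi) - math.pi  # wrap step
        unwrapped.append(unwrapped[-1] + step)
    scale = wavelength / (4.0 * math.pi)
    return [(p - unwrapped[0]) * scale for p in unwrapped]
```

If per-sample motion can exceed λ/4, the sample rate must be increased or a more elaborate unwrapping strategy used.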
Those skilled in the art will appreciate that while the embodiment shown in fig. 12 utilizes a digital version of the initially generated sine wave generated by sine wave generator 1204 on link 1207, in other embodiments the initially generated sine wave may be an analog signal and mixed with the analog output of DTIA/AA filter 1218. In other embodiments, the voltage divider 1212 may be a variable voltage divider. In other embodiments, the voltage divider 1212 may be omitted and DTIA/AA filter 1218 may be a single-ended DTIA/AA filter. In such implementations, the subtraction may be done digitally only at subtraction block 1222. In other implementations, the subtraction block 1222 may be omitted, and subtraction of the modulation current may not be performed.
The circuit of FIG. 12 may be adapted to implement the modified I/Q method described above, using Q′ ∝ lowpass{IPD × sin(3ωm t)}. Some such circuit adaptations may include directly generating the mixing signals sin(2ωm t) and sin(3ωm t), multiplying each mixing signal with the output of the ADC block 1220, and then applying corresponding low pass filtering, such as by blocks 1228-1, 1228-2. The DTIA/AA filter 1218 may then be replaced by a filter that removes or substantially reduces the entire component of IPD at the initial modulation frequency ωm. Those skilled in the art will recognize other circuit adaptations for implementing the modified I/Q method. For example, the signal sin(3ωm t) may be generated by multiplying the signal on link 1207 with the output of the squaring/filtering block 1226, and then band-pass filtering to suppress frequency components other than sin(3ωm t).
In additional and/or alternative embodiments, the I/Q time-domain based method just described may be used with the spectrum-based method of the first series of embodiments. The spectral methods of the first series may be used at a particular time to determine the absolute distance to the target and provide the value L0. Thereafter, during subsequent time intervals, ΔL may be determined using any of the various I/Q methods just described.
In additional and/or alternative embodiments, a spectral method based on triangular wave modulation of the VCSEL bias current can be used as a guide for the I/Q time-domain method. The I/Q method operates optimally when J1(b) = J2(b), so that the I and Q components have the same amplitude. However, b depends on the distance L. One embodiment may apply triangular wave modulation to the bias current of the VCSEL to determine the distance to the point of interest, and then use this distance to find the optimal peak-to-peak sinusoidal modulation of the bias current for use in the I/Q method. Such a dual method may improve the signal-to-noise ratio and displacement accuracy obtained with the I/Q method.
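The condition J1(b) = J2(b) can be solved numerically; the sketch below uses a truncated power series for the Bessel functions and bisection (the bracketing interval is an assumption based on the known shapes of J1 and J2, which cross once between their peaks):

```python
import math

def bessel_j(n, x, terms=30):
    """Bessel function of the first kind via its power series:
    J_n(x) = sum_k (-1)^k / (k! (k+n)!) * (x/2)^(2k+n)."""
    s = 0.0
    for k in range(terms):
        s += ((-1) ** k / (math.factorial(k) * math.factorial(k + n))
              * (x / 2.0) ** (2 * k + n))
    return s

def optimal_modulation_depth(lo=2.0, hi=3.0, tol=1e-10):
    """Find b with J1(b) = J2(b) by bisection on the assumed bracket."""
    f = lambda b: bessel_j(1, b) - bessel_j(2, b)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

The resulting depth lies near b ≈ 2.6; the triangular-modulation distance estimate would then be used to translate this optical phase-modulation depth into a peak-to-peak bias-current amplitude.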
FIG. 13 illustrates an exemplary method 1300 of identifying gesture types. The method 1300 may be performed, for example, by any of the processing systems or processors described herein.
At block 1302, method 1300 may include emitting a beam of electromagnetic radiation from each SMI sensor of a set of one or more SMI sensors disposed in a wearable device. Alternatively, a beam of electromagnetic radiation may be emitted from each SMI sensor of a set of one or more SMI sensors disposed in a handheld device.
At block 1304, method 1300 may include sampling an SMI signal generated by each SMI sensor to generate a time-varying sample stream for each SMI sensor.
At block 1306, method 1300 may include determining movement of the wearable device (or handheld device) relative to the surface using a processor of the wearable device and a time-varying sample stream of at least one SMI sensor of the set of one or more SMI sensors. The operations at block 1306 may also or alternatively include determining a position and/or orientation of the wearable device (or handheld device) relative to the surface.
At block 1308, method 1300 may include transmitting information from the wearable device (or handheld device) to a remote device indicating movement of the wearable device (or handheld device).
In some embodiments, method 1300 may include modulating the input to the SMI sensors (or to each SMI sensor) using a triangular waveform or a sinusoidal waveform. In some embodiments, method 1300 may include 1) modulating an input to an SMI sensor (or to each SMI sensor) using a first type of modulation when generating a first subset of samples in the time-varying sample stream for the SMI sensor, and 2) modulating an input to the SMI sensor (or to each SMI sensor) using a second type of modulation when generating a second subset of samples in the time-varying sample stream for the SMI sensor, wherein the first type of modulation is different from the second type of modulation (e.g., triangle modulation versus sine modulation).
In some embodiments of method 1300, the at least one SMI sensor may include three SMI sensors, and determining movement of the wearable device (or handheld device) relative to the surface may include determining movement of the wearable device in six degrees of freedom (6 DoF).
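As one way to see why three SMI sensors help: each sensor measures only the projection of the device's velocity onto its beam direction, and three linearly independent beams let a processor solve for the full three-dimensional velocity (a building block of a 6 DoF estimate, which additionally requires orientation information). The beam geometry and velocity below are hypothetical.

```python
import numpy as np

# Assumed geometry: three SMI beams tilted from the device normal so that
# their unit direction vectors are linearly independent (hypothetical angles).
s, c = np.sin(np.radians(20)), np.cos(np.radians(20))
D = np.array([
    [ s, 0.0, c],
    [0.0,  s, c],
    [-s, 0.0, c],
])

v_true = np.array([3e-3, -1e-3, 5e-3])  # device velocity, m/s (synthetic)
v_radial = D @ v_true                   # each SMI sensor senses d_i . v

v_hat = np.linalg.solve(D, v_radial)    # recover the full 3-D velocity
print(v_hat)  # ≈ [0.003, -0.001, 0.005]
```

With fewer than three independent beams the system is underdetermined, which is why a single-sensor embodiment can only report motion along (and, via speckle, across) its one beam axis.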
In some embodiments of method 1300, the set of one or more SMI sensors includes a plurality of SMI sensors, and method 1300 may include analyzing a time-varying sample stream generated for the plurality of SMI sensors, and identifying the at least one SMI sensor for determining movement of the wearable device (or handheld device) relative to the surface based at least in part on the analysis.
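A simple stand-in for such an analysis is to score each sensor's sample stream and keep the strongest. The sketch below uses signal variance as the (assumed) quality metric over synthetic streams; a real implementation might instead use SNR, fringe visibility, or spectral peak height.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic time-varying sample streams for several SMI sensors; in this
# sketch the "best" sensor is simply the one with the strongest AC content.
streams = {
    "smi_0": 0.01 * rng.standard_normal(1000),          # noise only
    "smi_1": np.sin(np.linspace(0, 40 * np.pi, 1000)),  # strong SMI fringes
    "smi_2": 0.05 * rng.standard_normal(1000),          # noise only
}

best = max(streams, key=lambda name: np.var(streams[name]))
print(best)  # 'smi_1'
```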
In some embodiments of method 1300, the at least one SMI sensor may be a first subset of one or more SMI sensors and the surface may be a first surface. In these embodiments, method 1300 may include determining movement of the wearable device (or handheld device) relative to the second surface using a processor of the wearable device (or handheld device) and a time-varying sample stream of the second subset of one or more SMI sensors of the set of one or more SMI sensors.
Fig. 14 illustrates an exemplary electrical block diagram of an electronic device 1400 that may be implemented in some cases as any of the devices described with reference to fig. 1-7 and 13. Electronic device 1400 may include electronic display 1402 (e.g., a light emitting display), processor 1404, power source 1406, memory 1408 or storage device, sensor system 1410, or input/output (I/O) mechanism 1412 (e.g., an input/output device, input/output port, or tactile input/output interface). The processor 1404 may control some or all of the operations of the electronic device 1400. The processor 1404 may communicate directly or indirectly with some or all of the other components of the electronic device 1400. For example, a system bus or other communication mechanism 1414 may provide for communication between the electronic display 1402, the processor 1404, the power supply 1406, the memory 1408, the sensor system 1410, and the I/O mechanism 1412.
The processor 1404 may be implemented as any electronic device capable of processing, receiving, or transmitting data or instructions, whether in the form of software or firmware or otherwise encoded. For example, the processor 1404 may include a microprocessor, a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a controller, or a combination of such devices. As described herein, the term "processor" is intended to encompass a single processor or processing unit, multiple processors, multiple processing units, or one or more other suitably configured computing elements. In some cases, the processor 1404 may provide a portion or all of the processing systems or processors described with reference to fig. 1-7 and 10-13.
It should be noted that the components of the electronic device 1400 may be controlled by multiple processors. For example, select components of the electronic device 1400 (e.g., the sensor system 1410) may be controlled by a first processor and other components of the electronic device 1400 (e.g., the electronic display 1402) may be controlled by a second processor, wherein the first processor and the second processor may or may not be in communication with each other.
The power source 1406 may be implemented with any device capable of providing energy to the electronic device 1400. For example, power source 1406 may include one or more batteries or rechargeable batteries. Additionally or alternatively, the power source 1406 may include a power connector or cord that connects the electronic device 1400 to another power source, such as a wall outlet.
Memory 1408 may store electronic data that may be used by electronic device 1400. For example, the memory 1408 may store electronic data or content such as, for example, audio and video files, documents and applications, device settings and user preferences, timing signals, control signals, and data structures or databases. Memory 1408 may include any type of memory. By way of example only, the memory 1408 may include random access memory, read-only memory, flash memory, removable memory, other types of storage elements, or a combination of these memory types.
The electronic device 1400 may also include one or more sensor systems 1410 positioned at virtually any location on the electronic device 1400. In some cases, the sensor system 1410 may include one or more SMI sensors positioned as described with reference to any of FIGS. 1-13. The sensor system 1410 may be configured to sense one or more types of parameters such as, but not limited to, vibration, light, touch, force, heat, movement, relative motion, biometric data of a user (e.g., biometric parameters), air quality, proximity, location, connectivity, and so on. By way of example, the sensor system 1410 may include an SMI sensor, a thermal sensor, a position sensor, a light or optical sensor, an accelerometer, a pressure transducer, a gyroscope, a magnetometer, a health monitoring sensor, an air quality sensor, and so on. Additionally, the one or more sensor systems 1410 may utilize any suitable sensing technology, including, but not limited to, interferometric, magnetic, capacitive, ultrasonic, resistive, optical, acoustic, piezoelectric, or thermal technologies.
The I/O mechanism 1412 may transmit or receive data from a user or another electronic device. The I/O mechanism 1412 may include the electronic display 1402, a touch-sensing input surface, a crown, one or more buttons (e.g., a graphical user interface "home" button), one or more cameras (including an under-display camera), one or more microphones or speakers, one or more ports, such as a microphone port, and/or a keyboard. Additionally or alternatively, the I/O mechanism 1412 may transmit electronic signals via a communication interface, such as a wireless, wired, and/or optical communication interface. Examples of wireless and wired communication interfaces include, but are not limited to, cellular and Wi-Fi communication interfaces.
The foregoing description, for purposes of explanation, used specific nomenclature to provide a thorough understanding of the embodiments. However, it will be apparent to one skilled in the art, after reading this description, that the specific details are not required in order to practice the embodiments. Thus, the foregoing descriptions of the specific embodiments described herein are presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art in light of the above teachings.

Claims (17)

1. A wearable device, comprising:
a device housing configured to be worn on a first surface of a user;
a set of one or more SMI sensors mounted within the device housing and configured to:
emit a set of one or more beams of electromagnetic radiation, each beam being emitted in a different direction extending away from the first surface; and
generate a set of one or more SMI signals containing information regarding a relationship between the device housing and a second surface; and
a processor configured to:
modulate an input of each SMI sensor of the set of one or more SMI sensors using a sinusoidal waveform during a first set of time periods;
modulate the input of each SMI sensor of the set of one or more SMI sensors using a triangular waveform during a second set of time periods; and
extract the relationship between the device housing and the second surface from digitized samples of the set of one or more SMI signals, the set of one or more SMI signals being generated based on the input of each SMI sensor being modulated using the sinusoidal waveform and the triangular waveform.
2. The wearable device of claim 1, wherein the set of one or more SMI sensors comprises at least three SMI sensors.
3. The wearable device of claim 2, wherein the at least three SMI sensors are configured to emit orthogonal beams of electromagnetic radiation.
4. The wearable device of claim 2, wherein the at least three SMI sensors are configured to emit converging beams of electromagnetic radiation.
5. The wearable device of claim 1, wherein the device housing defines a closed loop configured to receive a finger.
6. The wearable device of claim 1, wherein the device housing defines an open loop configured to receive a finger.
7. The wearable device of claim 1, wherein the set of one or more SMI signals comprises a plurality of SMI signals, and the processor is configured to:
analyze digitized samples of the plurality of SMI signals; and
identify, based at least in part on the analysis, at least one SMI signal of the plurality of SMI signals from which the relationship between the device housing and the second surface is extracted.
8. The wearable device of claim 1, further comprising:
a wireless communication interface mounted within the device housing, wherein:
The processor is configured to transmit information indicative of the relationship between the device housing and the second surface using the wireless communication interface.
9. A gesture input system comprising:
a wearable device configured to be worn by a user;
an object configured to be held by the user;
a set of one or more SMI sensors, each SMI sensor mounted within the wearable device or the object and configured to:
emit a beam of electromagnetic radiation; and
generate an SMI signal, the SMI signal being generated by modulating an input of the set of one or more SMI sensors using a sinusoidal waveform during a first set of time periods and modulating the input of the set of one or more SMI sensors using a triangular waveform during a second set of time periods; and
a processing system housed within at least one of the wearable device or the object, the processing system configured to:
receive a set of one or more SMI signals from the set of one or more SMI sensors; and
extract, from the set of one or more SMI signals generated by modulating the input of the set of one or more SMI sensors using the sinusoidal waveform and the triangular waveform, information about a time-varying relationship between the wearable device and the object, or a time-varying relationship between the wearable device and a surface other than a surface of the object.
10. The gesture input system of claim 9, wherein the processing system is further configured to identify an alphanumeric string from the information about the time-varying relationship between the wearable device and the object.
11. The gesture input system of claim 9, wherein the wearable device comprises a ring.
12. The gesture input system of claim 9, wherein the object is shaped as at least one of a stylus, pen, pencil, marker, or brush.
13. A method of recognizing gesture types, comprising:
emitting a beam of electromagnetic radiation from each SMI sensor of a set of one or more SMI sensors disposed in the wearable device;
sampling an SMI signal generated by each SMI sensor to generate a time-varying sample stream for each SMI sensor, the SMI signal generated by modulating an input of the set of one or more SMI sensors during a first set of time periods using a first type of modulation and modulating the input of the set of one or more SMI sensors during a second set of time periods using a second type of modulation, the first type of modulation being different from the second type of modulation;
determining, using a processor of the wearable device and the time-varying sample stream of at least one SMI sensor of the set of one or more SMI sensors, movement of the wearable device relative to a surface based on the SMI signal generated by modulating the input of the set of one or more SMI sensors using the first type of modulation and the second type of modulation; and
transmitting, from the wearable device to a remote device, information indicative of the movement of the wearable device.
14. The method according to claim 13, wherein:
the at least one SMI sensor comprises three SMI sensors, and
Determining the movement of the wearable device relative to the surface includes determining movement of the wearable device in six degrees of freedom.
15. The method of claim 13, wherein the set of one or more SMI sensors comprises a plurality of SMI sensors, the method further comprising:
analyzing the time-varying sample streams generated for the plurality of SMI sensors; and
identifying, based at least in part on the analysis, the at least one SMI sensor used to determine the movement of the wearable device relative to the surface.
16. The method of claim 13, wherein the at least one SMI sensor is a first subset of the set of one or more SMI sensors and the surface is a first surface, the method further comprising:
determining movement of the wearable device relative to a second surface using the processor of the wearable device and the time-varying sample stream of a second subset of one or more SMI sensors of the set of one or more SMI sensors.
17. The method of claim 13, wherein the first type of modulation uses a triangular waveform and the second type of modulation uses a sinusoidal waveform, or the first type of modulation uses a sinusoidal waveform and the second type of modulation uses a triangular waveform.
CN202010885729.7A 2019-09-06 2020-08-28 Self-mixing interferometry-based gesture input system for wearable or handheld devices Active CN112462932B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202510005863.6A CN119916943A (en) 2019-09-06 2020-08-28 Self-mixing interferometry-based gesture input system for wearable or handheld devices

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201962896801P 2019-09-06 2019-09-06
US62/896,801 2019-09-06
US16/934,988 2020-07-21
US16/934,988 US11409365B2 (en) 2019-09-06 2020-07-21 Self-mixing interferometry-based gesture input system including a wearable or handheld device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202510005863.6A Division CN119916943A (en) 2019-09-06 2020-08-28 Self-mixing interferometry-based gesture input system for wearable or handheld devices

Publications (2)

Publication Number Publication Date
CN112462932A CN112462932A (en) 2021-03-09
CN112462932B 2025-01-10

Family

ID=72243210

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010885729.7A Active CN112462932B (en) 2019-09-06 2020-08-28 Self-mixing interferometry-based gesture input system for wearable or handheld devices

Country Status (1)

Country Link
CN (1) CN112462932B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116482854A (en) * 2021-09-22 2023-07-25 苹果公司 Eye tracking using self-mixing interferometry

Citations (1)

Publication number Priority date Publication date Assignee Title
WO2018001597A1 (en) * 2016-06-30 2018-01-04 Robert Bosch Gmbh System and method for user identification and/or gesture control

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
WO2012014124A1 (en) * 2010-07-26 2012-02-02 Koninklijke Philips Electronics N.V. Apparatus for measuring a distance
CN103946732B (en) * 2011-09-26 2019-06-14 微软技术许可有限责任公司 Video display modification based on sensor input to see-through, near-eye displays
CN110045824B (en) * 2014-02-10 2022-06-17 苹果公司 Motion gesture input detected using optical sensors
WO2017017572A1 (en) * 2015-07-26 2017-02-02 Vocalzoom Systems Ltd. Laser microphone utilizing speckles noise reduction
CN106980131A (en) * 2016-01-19 2017-07-25 阿里巴巴集团控股有限公司 A kind of localization method, device and intelligent terminal
US10704449B2 (en) * 2016-02-05 2020-07-07 Cummins Inc. Systems and methods for equalizing backpressure in engine cylinders
BR112018073691A2 (en) * 2016-05-19 2019-02-26 Koninklijke Philips N.V. particle sensor, air purifier, sensor housing or body-worn device, and method for determining a particle density of a particle flow with an unknown particle flow velocity vector
US9910544B1 (en) * 2016-12-14 2018-03-06 Cypress Semiconductor Corporation Uniformity correction method for low cost and non-rectangular touch sensor matrices

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
WO2018001597A1 (en) * 2016-06-30 2018-01-04 Robert Bosch Gmbh System and method for user identification and/or gesture control

Also Published As

Publication number Publication date
CN112462932A (en) 2021-03-09

Similar Documents

Publication Publication Date Title
US11861072B2 (en) Self-mixing interferometry-based input device
US11422638B2 (en) Input devices that use self-mixing interferometry to determine movement within an enclosure
US11137841B2 (en) Devices and methods for determining relative motion
US11243068B1 (en) Configuration and operation of array of self-mixing interferometry sensors
JP6423116B2 (en) Electronic pen
CN102460563B (en) The position measuring system of use location sensitive detectors
US11614806B1 (en) Input device with self-mixing interferometry sensors
CN104536558B (en) A smart ring and method for controlling smart devices
US20120183156A1 (en) Microphone system with a hand-held microphone
US20090183929A1 (en) Writing system with camera
JP2012508408A (en) Mouse controlled through finger movement in the air
US11692809B2 (en) Self-mixing interferometry-based absolute distance measurement with distance reference
JP2010015535A (en) Input device, control system, handheld device, and calibration method
CN114252411A (en) Surface quality sensing using self-mixing interferometry
KR20200112095A (en) the Electronic Device measuring the Blood Pressure and the Method for measuring the Blood Pressure
CN112462932B (en) Self-mixing interferometry-based gesture input system for wearable or handheld devices
Zhang et al. Towards an ubiquitous wireless digital writing instrument using MEMS motion sensing technology
KR100360477B1 (en) Wireless electronic pen
US20100259475A1 (en) Angle sensor-based pointer and a cursor control system with the same
CN106716311A (en) Vibration-based trajectory calculation of a freely-guided device
Silva et al. PHYS. IO: Wearable hand tracking device
CN113039511B (en) Writing device with electromagnetic tracking
JP3712835B2 (en) Pen-type input device
US20250278146A1 (en) Input device including optical sensor
JPH09274535A (en) Pen-type input device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant