WO2024253867A1 - Devices, methods, and graphical user interfaces for presenting content
- Publication number: WO2024253867A1 (application PCT/US2024/030858)
- Authority: WIPO (PCT)
- Prior art keywords: media item, display, user, displaying, input
- Legal status: Pending
Classifications
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F1/163—Wearable computers, e.g. on a belt
- G06F16/44—Browsing; Visualisation therefor
- G06F3/013—Eye tracking input arrangements
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04815—Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
- G06F3/04842—Selection of displayed objects or displayed text elements
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for image manipulation, e.g. dragging, rotation, expansion or change of colour
- G06F3/0485—Scrolling or panning
- G06F2203/0381—Multimodal input, i.e. interface arrangements enabling the user to issue commands by simultaneous use of input devices of different nature, e.g. voice plus gesture on digitizer
- G06F2203/04804—Transparency, e.g. transparent or translucent windows
- G06F2203/04806—Zoom, i.e. interaction techniques or interactors for controlling the zooming operation
Definitions
- the present disclosure relates generally to computer systems that are in communication with one or more display generation components and, optionally, one or more input devices that provide computer-generated experiences, including, but not limited to, electronic devices that provide virtual reality and mixed reality experiences via one or more displays.
- Example augmented reality environments include at least some virtual elements that replace or augment the physical world.
- Input devices such as cameras, controllers, joysticks, touch-sensitive surfaces, and touch-screen displays for computer systems and other electronic computing devices are used to interact with virtual/augmented reality environments.
- Example virtual elements include virtual objects, such as digital images, video, text, icons, and control elements such as buttons and other graphics.
- Some methods and interfaces for presenting content are cumbersome, inefficient, and limited. For example, systems that provide insufficient feedback for performing actions associated with virtual objects, systems that require a series of inputs to achieve a desired outcome in an augmented reality environment, and systems in which manipulation of virtual objects is complex, tedious, and error-prone, create a significant cognitive burden on a user, and detract from the experience with the virtual/augmented reality environment. In addition, these methods take longer than necessary, thereby wasting energy of the computer system.
- the computer system is a desktop computer with an associated display.
- the computer system is a portable device (e.g., a notebook computer, tablet computer, or handheld device).
- the computer system is a personal electronic device (e.g., a wearable electronic device, such as a watch, or a head-mounted device).
- the computer system has a touchpad.
- the computer system has one or more cameras.
- the computer system has a touch-sensitive display (also known as a “touch screen” or “touch-screen display”).
- the computer system has one or more eye-tracking components. In some embodiments, the computer system has one or more hand-tracking components. In some embodiments, the computer system has one or more output devices in addition to the display generation component, the output devices including one or more tactile output generators and/or one or more audio output devices. In some embodiments, the computer system has a graphical user interface (GUI), one or more processors, memory and one or more modules, programs or sets of instructions stored in the memory for performing multiple functions.
- the user interacts with the GUI through a stylus and/or finger contacts and gestures on the touch-sensitive surface, movement of the user’s eyes and hand in space relative to the GUI (and/or computer system) or the user’s body as captured by cameras and other movement sensors, and/or voice inputs as captured by one or more audio input devices.
- the functions performed through the interactions optionally include image editing, drawing, presenting, word processing, spreadsheet making, game playing, telephoning, video conferencing, e-mailing, instant messaging, workout support, digital photographing, digital videoing, web browsing, digital music playing, note taking, and/or digital video playing. Executable instructions for performing these functions are, optionally, included in a transitory and/or non-transitory computer readable storage medium or other computer program product configured for execution by one or more processors.
- a method comprises: at a computer system that is in communication with one or more display generation components and one or more input devices: displaying, via the one or more display generation components, a first object of a plurality of objects at a first display position; while displaying the first object, detecting, via the one or more input devices, a first user input corresponding to a user request to navigate within the plurality of objects; and in response to detecting the first user input: displaying, via the one or more display generation components, movement of the first object from the first display position to a second display position different from the first display position while reducing a visual prominence of the first object by modifying an opacity of the first object relative to a background over which the first object is displayed.
- a non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more display generation components and one or more input devices.
- the one or more programs including instructions for: displaying, via the one or more display generation components, a first object of a plurality of objects at a first display position; while displaying the first object, detecting, via the one or more input devices, a first user input corresponding to a user request to navigate within the plurality of objects; and in response to detecting the first user input: displaying, via the one or more display generation components, movement of the first object from the first display position to a second display position different from the first display position while reducing a visual prominence of the first object by modifying an opacity of the first object relative to a background over which the first object is displayed.
- a transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more display generation components and one or more input devices.
- the one or more programs including instructions for: displaying, via the one or more display generation components, a first object of a plurality of objects at a first display position; while displaying the first object, detecting, via the one or more input devices, a first user input corresponding to a user request to navigate within the plurality of objects; and in response to detecting the first user input: displaying, via the one or more display generation components, movement of the first object from the first display position to a second display position different from the first display position while reducing a visual prominence of the first object by modifying an opacity of the first object relative to a background over which the first object is displayed.
- a computer system configured to communicate with one or more display generation components and one or more input devices.
- the computer system comprises: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors.
- the one or more programs include instructions for: displaying, via the one or more display generation components, a first object of a plurality of objects at a first display position; while displaying the first object, detecting, via the one or more input devices, a first user input corresponding to a user request to navigate within the plurality of objects; and in response to detecting the first user input: displaying, via the one or more display generation components, movement of the first object from the first display position to a second display position different from the first display position while reducing a visual prominence of the first object by modifying an opacity of the first object relative to a background over which the first object is displayed.
- a computer system configured to communicate with one or more display generation components and one or more input devices.
- the computer system comprises: means for displaying, via the one or more display generation components, a first object of a plurality of objects at a first display position; means for, while displaying the first object, detecting, via the one or more input devices, a first user input corresponding to a user request to navigate within the plurality of objects; and means for, in response to detecting the first user input: displaying, via the one or more display generation components, movement of the first object from the first display position to a second display position different from the first display position while reducing a visual prominence of the first object by modifying an opacity of the first object relative to a background over which the first object is displayed.
- a computer program product comprises one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more display generation components and one or more input devices.
- the one or more programs include instructions for: displaying, via the one or more display generation components, a first object of a plurality of objects at a first display position; while displaying the first object, detecting, via the one or more input devices, a first user input corresponding to a user request to navigate within the plurality of objects; and in response to detecting the first user input: displaying, via the one or more display generation components, movement of the first object from the first display position to a second display position different from the first display position while reducing a visual prominence of the first object by modifying an opacity of the first object relative to a background over which the first object is displayed.
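- The navigation behavior summarized in the preceding paragraphs pairs two effects: the first object moves from its first display position toward a second display position while its opacity relative to the background is reduced. The Swift sketch below illustrates one way a single progress value could drive both effects at once; the type names, the linear ramps, and the minimum opacity are illustrative assumptions, not the claimed implementation.

```swift
import Foundation

// Minimal sketch: a navigation request moves an object from its current
// display position toward a new one while fading its opacity against the
// background. Names and the linear ramps are assumptions, not the claimed method.
struct DisplayedObject {
    var x: Double
    var y: Double
    var opacity: Double  // 1.0 = fully opaque over the background
}

/// Returns the object's state at `progress` (0...1) through the navigation,
/// moving it from `start` toward `end` while reducing its visual prominence.
func navigationFrame(start: DisplayedObject,
                     end: (x: Double, y: Double),
                     minimumOpacity: Double = 0.2,
                     progress: Double) -> DisplayedObject {
    let t = max(0.0, min(1.0, progress))
    return DisplayedObject(
        x: start.x + (end.x - start.x) * t,
        y: start.y + (end.y - start.y) * t,
        // Opacity drops as the object travels, so it recedes visually
        // while the next object in the plurality gains prominence.
        opacity: start.opacity - (start.opacity - minimumOpacity) * t
    )
}

// Example: halfway through a swipe, the object has moved half the distance
// and is noticeably more translucent than when the input began.
let photo = DisplayedObject(x: 0, y: 0, opacity: 1.0)
let midSwipe = navigationFrame(start: photo, end: (x: -300, y: 0), progress: 0.5)
print(midSwipe)  // x: -150, y: 0, opacity: 0.6
```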
- a method comprises: at a computer system that is in communication with one or more display generation components and one or more input devices: receiving, via the one or more input devices, a sequence of one or more inputs corresponding to a request to change a display state of a first media item relative to a three-dimensional environment; and in response to receiving the sequence of one or more inputs, displaying, via the one or more display generation components, a first media item, in a respective display state in the three-dimensional environment, including: in accordance with a determination that a camera that captured the first media item had a first field of view with a first value for a respective size parameter when capturing the first media item, displaying the first media item at a first size in the three-dimensional environment; and in accordance with a determination that a camera that captured the first media item had a second field of view with a second value for the respective size parameter different from the first value for the respective size parameter when capturing the first media item, displaying the first media item at a second size in the three-dimensional environment, wherein the second size is different from the first size.
- a non-transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more display generation components and one or more input devices.
- the one or more programs include instructions for: receiving, via the one or more input devices, a sequence of one or more inputs corresponding to a request to change a display state of a first media item relative to a three-dimensional environment; and in response to receiving the sequence of one or more inputs, displaying, via the one or more display generation components, a first media item, in a respective display state in the three-dimensional environment, including: in accordance with a determination that a camera that captured the first media item had a first field of view with a first value for a respective size parameter when capturing the first media item, displaying the first media item at a first size in the three-dimensional environment; and in accordance with a determination that a camera that captured the first media item had a second field of view with a second value for the respective size parameter different from the first value for the respective size parameter when capturing the first media item, displaying the first media item at a second size in the three-dimensional environment, wherein the second size is different from the first size.
- a transitory computer-readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more display generation components and one or more input devices.
- the one or more programs include instructions for: receiving, via the one or more input devices, a sequence of one or more inputs corresponding to a request to change a display state of a first media item relative to a three-dimensional environment; and in response to receiving the sequence of one or more inputs, displaying, via the one or more display generation components, a first media item, in a respective display state in the three-dimensional environment, including: in accordance with a determination that a camera that captured the first media item had a first field of view with a first value for a respective size parameter when capturing the first media item, displaying the first media item at a first size in the three-dimensional environment; and in accordance with a determination that a camera that captured the first media item had a second field of view with a second value for the respective size parameter different from the first value for the respective size parameter when capturing the first media item, displaying the first media item at a second size in the three-dimensional environment, wherein the second size is different from the first size.
- a computer system configured to communicate with one or more display generation components and one or more input devices.
- the computer system comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors.
- the one or more programs include instructions for: receiving, via the one or more input devices, a sequence of one or more inputs corresponding to a request to change a display state of a first media item relative to a three-dimensional environment; and in response to receiving the sequence of one or more inputs, displaying, via the one or more display generation components, a first media item, in a respective display state in the three-dimensional environment, including: in accordance with a determination that a camera that captured the first media item had a first field of view with a first value for a respective size parameter when capturing the first media item, displaying the first media item at a first size in the three-dimensional environment; and in accordance with a determination that a camera that captured the first media item had a second field of view with a second value for the respective size parameter different from the first value for the respective size parameter when capturing the first media item, displaying the first media item at a second size in the three-dimensional environment, wherein the second size is different from the first size.
- a computer system configured to communicate with one or more display generation components and one or more input devices.
- the computer system comprises: means for receiving, via the one or more input devices, a sequence of one or more inputs corresponding to a request to change a display state of a first media item relative to a three-dimensional environment; and means for, in response to receiving the sequence of one or more inputs, displaying, via the one or more display generation components, a first media item, in a respective display state in the three-dimensional environment, including: in accordance with a determination that a camera that captured the first media item had a first field of view with a first value for a respective size parameter when capturing the first media item, displaying the first media item at a first size in the three-dimensional environment; and in accordance with a determination that a camera that captured the first media item had a second field of view with a second value for the respective size parameter different from the first value for the respective size parameter when capturing the first media item, displaying the first media item at a second size in the three-dimensional environment, wherein the second size is different from the first size.
- a computer program product comprises: one or more programs configured to be executed by one or more processors of a computer system that is in communication with one or more display generation components and one or more input devices.
- the one or more programs include instructions for: receiving, via the one or more input devices, a sequence of one or more inputs corresponding to a request to change a display state of a first media item relative to a three-dimensional environment; and in response to receiving the sequence of one or more inputs, displaying, via the one or more display generation components, a first media item, in a respective display state in the three-dimensional environment, including: in accordance with a determination that a camera that captured the first media item had a first field of view with a first value for a respective size parameter when capturing the first media item, displaying the first media item at a first size in the three-dimensional environment; and in accordance with a determination that a camera that captured the first media item had a second field of view with a second value for the respective size parameter different from the first value for the respective size parameter when capturing the first media item, displaying the first media item at a second size in the three-dimensional environment, wherein the second size is different from the first size.
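- The sizing behavior summarized in the preceding paragraphs selects a display size for a media item from the field of view of the camera that captured it: a wider capture field of view corresponds to a larger displayed item. The Swift sketch below shows one plausible rule, sizing the item so that it subtends the capture field of view at a chosen viewing distance; the formula, function name, and example values are assumptions for illustration only, not the claimed method.

```swift
import Foundation

// Sketch of sizing a media item from the capturing camera's field of view.
// The specific rule (making the item subtend the capture FOV at a chosen
// viewing distance) is an illustrative assumption.
/// Height, in meters, of a media item placed `distance` meters from the
/// viewer so that it spans the camera's vertical field of view.
func displayHeight(forVerticalFOVDegrees fov: Double, distance: Double) -> Double {
    let halfAngle = (fov * .pi / 180.0) / 2.0
    return 2.0 * distance * tan(halfAngle)
}

// A media item captured with a wider field of view is displayed larger
// than one captured with a narrower field of view, at the same distance.
let narrow = displayHeight(forVerticalFOVDegrees: 60, distance: 2.0)   // ~2.31 m
let wide   = displayHeight(forVerticalFOVDegrees: 100, distance: 2.0)  // ~4.77 m
print(narrow, wide)
```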
- Figure 1A is a block diagram illustrating an operating environment of a computer system for providing XR experiences in accordance with some embodiments.
- Figures 1B-1P are examples of a computer system for providing XR experiences in the operating environment of Figure 1A.
- Figure 2 is a block diagram illustrating a controller of a computer system that is configured to manage and coordinate an XR experience for the user in accordance with some embodiments.
- Figure 3 is a block diagram illustrating a display generation component of a computer system that is configured to provide a visual component of the XR experience to the user in accordance with some embodiments.
- Figure 4 is a block diagram illustrating a hand tracking unit of a computer system that is configured to capture gesture inputs of the user in accordance with some embodiments.
- Figure 5 is a block diagram illustrating an eye tracking unit of a computer system that is configured to capture gaze inputs of the user in accordance with some embodiments.
- Figure 6 is a flow diagram illustrating a glint-assisted gaze tracking pipeline in accordance with some embodiments.
- Figures 7A-7S illustrate example techniques for presenting content, in accordance with some embodiments.
- Figure 8 is a flow diagram of methods of presenting content, in accordance with various embodiments.
- Figure 9 is a flow diagram of methods of presenting content, in accordance with various embodiments.
- the present disclosure relates to user interfaces for providing an extended reality (XR) experience to a user, in accordance with some embodiments.
- a computer system displays a first object of a plurality of objects at a first display position. While displaying the first object, the computer system detects a first user input corresponding to a user request to navigate within the plurality of objects. In response to detecting the first user input, the computer system displays movement of the first object from the first display position to a second display position while reducing a visual prominence of the first object by modifying an opacity of the first object relative to a background over which the first object is displayed.
- In some embodiments, a computer system receives a sequence of one or more inputs corresponding to a request to change a display state of a first media item relative to a three-dimensional environment.
- In response to receiving the sequence of one or more inputs, the computer system displays the first media item in the respective display state in the three-dimensional environment, which includes dynamically determining a size (e.g., a height) at which the first media item will be displayed based on the field of view of the camera that captured the first media item at the time of capturing the first media item. If the camera that captured the first media item had a first field of view with a first value for a respective size parameter when capturing the first media item, the media item is displayed at a first size, and if the camera had a second field of view with a second value for the respective size parameter when capturing the first media item, the media item is displayed at a second size different from the first size.
- Figures 1A-6 provide a description of example computer systems for providing XR experiences to users.
- Figures 7A-7S illustrate example techniques for providing content, in accordance with some embodiments.
- Figure 8 is a flow diagram of methods of providing content, in accordance with various embodiments.
- Figure 9 is a flow diagram of methods of providing content, in accordance with various embodiments. The user interfaces in Figures 7A-7S are used to illustrate the processes in Figures 8 and 9.
- the processes described below enhance the operability of the devices and make the user-device interfaces more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) through various techniques, including by providing improved visual feedback to the user, reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, performing an operation when a set of conditions has been met without requiring further user input, improving privacy and/or security, providing a more varied, detailed, and/or realistic user experience while saving storage space, and/or additional techniques. These techniques also reduce power usage and improve battery life of the device by enabling the user to use the device more quickly and efficiently. Saving on battery power, and thus weight, improves the ergonomics of the device.
- These techniques also enable real-time communication, allow for the use of fewer and/or less precise sensors resulting in a more compact, lighter, and cheaper device, and enable the device to be used in a variety of lighting conditions. These techniques reduce energy usage, thereby reducing heat emitted by the device, which is particularly important for a wearable device where a device well within operational parameters for device components can become uncomfortable for a user to wear if it is producing too much heat.
- a system or computer readable medium contains instructions for performing the contingent operations based on the satisfaction of the corresponding one or more conditions and thus is capable of determining whether the contingency has or has not been satisfied without explicitly repeating steps of a method until all of the conditions upon which steps in the method are contingent have been met.
- a system or computer readable storage medium can repeat the steps of a method as many times as are needed to ensure that all of the contingent steps have been performed.
- the XR experience is provided to the user via an operating environment 100 that includes a computer system 101.
- the computer system 101 includes a controller 110 (e.g., processors of a portable electronic device or a remote server), a display generation component 120 (e.g., a head-mounted device (HMD), a display, a projector, a touch-screen, etc.), one or more input devices 125 (e.g., an eye tracking device 130, a hand tracking device 140, other input devices 150), one or more output devices 155 (e.g., speakers 160, tactile output generators 170, and other output devices 180), one or more sensors 190 (e.g., image sensors, light sensors, depth sensors, tactile sensors, orientation sensors, proximity sensors, temperature sensors, location sensors, motion sensors, velocity sensors, etc.), and optionally one or more peripheral devices 195 (e.g., home appliances, wearable devices, etc.).
- Physical environment refers to a physical world that people can sense and/or interact with without aid of electronic systems.
- Physical environments, such as a physical park, include physical articles, such as physical trees, physical buildings, and physical people. People can directly sense and/or interact with the physical environment, such as through sight, touch, hearing, taste, and smell.
- Extended reality: In contrast, an extended reality (XR) environment refers to a wholly or partially simulated environment that people sense and/or interact with via an electronic system.
- an XR system may detect a person’s head turning and, in response, adjust graphical content and an acoustic field presented to the person in a manner similar to how such views and sounds would change in a physical environment.
- adjustments to characteristic(s) of virtual object(s) in an XR environment may be made in response to representations of physical motions (e.g., vocal commands).
- a person may sense and/or interact with an XR object using any one of their senses, including sight, sound, touch, taste, and smell.
- a person may sense and/or interact with audio objects that create a 3D or spatial audio environment that provides the perception of point audio sources in 3D space.
- audio objects may enable audio transparency, which selectively incorporates ambient sounds from the physical environment with or without computer-generated audio.
- a person may sense and/or interact only with audio objects.
- Examples of XR include virtual reality and mixed reality.
- Virtual reality: A virtual reality (VR) environment refers to a simulated environment that is designed to be based entirely on computer-generated sensory inputs for one or more senses.
- a VR environment comprises a plurality of virtual objects with which a person may sense and/or interact.
- computer-generated imagery of trees, buildings, and avatars representing people are examples of virtual objects.
- a person may sense and/or interact with virtual objects in the VR environment through a simulation of the person’s presence within the computer-generated environment, and/or through a simulation of a subset of the person’s physical movements within the computer-generated environment.
- Mixed reality: In contrast to a VR environment, which is designed to be based entirely on computer-generated sensory inputs, a mixed reality (MR) environment refers to a simulated environment that is designed to incorporate sensory inputs from the physical environment, or a representation thereof, in addition to including computer-generated sensory inputs (e.g., virtual objects).
- a mixed reality environment is anywhere between, but not including, a wholly physical environment at one end and a virtual reality environment at the other end.
- computer-generated sensory inputs may respond to changes in sensory inputs from the physical environment.
- some electronic systems for presenting an MR environment may track location and/or orientation with respect to the physical environment to enable virtual objects to interact with real objects (that is, physical articles from the physical environment or representations thereof). For example, a system may account for movements so that a virtual tree appears stationary with respect to the physical ground.
- Examples of mixed realities include augmented reality and augmented virtuality.
- Augmented reality: An augmented reality (AR) environment refers to a simulated environment in which one or more virtual objects are superimposed over a physical environment, or a representation thereof.
- an electronic system for presenting an AR environment may have a transparent or translucent display through which a person may directly view the physical environment.
- the system may be configured to present virtual objects on the transparent or translucent display, so that a person, using the system, perceives the virtual objects superimposed over the physical environment.
- a system may have an opaque display and one or more imaging sensors that capture images or video of the physical environment, which are representations of the physical environment. The system composites the images or video with virtual objects, and presents the composition on the opaque display.
- a person, using the system, indirectly views the physical environment by way of the images or video of the physical environment, and perceives the virtual objects superimposed over the physical environment.
- a video of the physical environment shown on an opaque display is called “pass-through video,” meaning a system uses one or more image sensor(s) to capture images of the physical environment, and uses those images in presenting the AR environment on the opaque display.
- a system may have a projection system that projects virtual objects into the physical environment, for example, as a hologram or on a physical surface, so that a person, using the system, perceives the virtual objects superimposed over the physical environment.
- An augmented reality environment also refers to a simulated environment in which a representation of a physical environment is transformed by computer-generated sensory information.
- a system may transform one or more sensor images to impose a select perspective (e.g., viewpoint) different than the perspective captured by the imaging sensors.
- a representation of a physical environment may be transformed by graphically modifying (e.g., enlarging) portions thereof, such that the modified portion may be representative but not photorealistic versions of the originally captured images.
- a representation of a physical environment may be transformed by graphically eliminating or obfuscating portions thereof.
- Augmented virtuality: An augmented virtuality (AV) environment refers to a simulated environment in which a virtual or computer-generated environment incorporates one or more sensory inputs from the physical environment.
- the sensory inputs may be representations of one or more characteristics of the physical environment.
- an AV park may have virtual trees and virtual buildings, but people with faces photorealistically reproduced from images taken of physical people.
- a virtual object may adopt a shape or color of a physical article imaged by one or more imaging sensors.
- a virtual object may adopt shadows consistent with the position of the sun in the physical environment.
- a view of a three-dimensional environment is visible to a user.
- the view of the three-dimensional environment is typically visible to the user via one or more display generation components (e.g., a display or a pair of display modules that provide stereoscopic content to different eyes of the same user) through a virtual viewport that has a viewport boundary that defines an extent of the three-dimensional environment that is visible to the user via the one or more display generation components.
- the region defined by the viewport boundary is smaller than a range of vision of the user in one or more dimensions (e.g., based on the range of vision of the user, size, optical properties or other physical characteristics of the one or more display generation components, and/or the location and/or orientation of the one or more display generation components relative to the eyes of the user). In some embodiments, the region defined by the viewport boundary is larger than a range of vision of the user in one or more dimensions (e.g., based on the range of vision of the user, size, optical properties or other physical characteristics of the one or more display generation components, and/or the location and/or orientation of the one or more display generation components relative to the eyes of the user).
- the viewport and viewport boundary typically move as the one or more display generation components move (e.g., moving with a head of the user for a head mounted device or moving with a hand of a user for a handheld device such as a tablet or smartphone).
- a viewpoint of a user determines what content is visible in the viewport; a viewpoint generally specifies a location and a direction relative to the three-dimensional environment, and as the viewpoint shifts, the view of the three-dimensional environment will also shift in the viewport.
- a viewpoint is typically based on a location and direction of the head, face, and/or eyes of a user to provide a view of the three-dimensional environment that is perceptually accurate and provides an immersive experience when the user is using the head-mounted device.
- the viewpoint shifts as the handheld or stationed device is moved and/or as a position of a user relative to the handheld or stationed device changes (e.g., a user moving toward, away from, up, down, to the right, and/or to the left of the device).
- portions of the physical environment that are visible (e.g., displayed, and/or projected) via the one or more display generation components are based on a field of view of one or more cameras in communication with the display generation components, which typically move with the display generation components (e.g., moving with a head of the user for a head mounted device or moving with a hand of a user for a handheld device such as a tablet or smartphone), because the viewpoint of the user moves as the field of view of the one or more cameras moves (and the appearance of one or more virtual objects displayed via the one or more display generation components is updated based on the viewpoint of the user (e.g., displayed positions and poses of the virtual objects are updated based on the movement of the viewpoint of the user)).
- portions of the physical environment that are visible (e.g., optically visible through one or more partially or fully transparent portions of the display generation component) via the one or more display generation components are based on a field of view of a user through the partially or fully transparent portion(s) of the display generation component (e.g., moving with a head of the user for a head mounted device or moving with a hand of a user for a handheld device such as a tablet or smartphone) because the viewpoint of the user moves as the field of view of the user through the partially or fully transparent portions of the display generation components moves (and the appearance of one or more virtual objects is updated based on the viewpoint of the user).
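- As a concrete (and deliberately simplified) illustration of how a viewpoint and a viewport boundary determine what is visible, the Swift sketch below tests whether a point lies within an angular viewport around the viewer's heading in a 2D plane; the structure, names, and 2D simplification are assumptions for illustration, not part of the disclosure.

```swift
import Foundation

// Sketch of the viewport idea above: the viewpoint (a location plus a
// direction) and the viewport's angular extent decide what is visible.
struct Viewpoint {
    var position: (x: Double, z: Double)
    var headingRadians: Double      // direction the viewer faces in the plane
    var fieldOfViewRadians: Double  // angular width of the viewport boundary
}

/// True when `point` falls within the viewport's angular extent.
func isVisible(_ point: (x: Double, z: Double), from viewpoint: Viewpoint) -> Bool {
    let toPoint = atan2(point.z - viewpoint.position.z,
                        point.x - viewpoint.position.x)
    // Smallest signed angle between the viewing direction and the point.
    var delta = toPoint - viewpoint.headingRadians
    while delta > .pi { delta -= 2 * .pi }
    while delta < -.pi { delta += 2 * .pi }
    return abs(delta) <= viewpoint.fieldOfViewRadians / 2
}

// Turning the viewpoint (e.g., a head-mounted display moving with the head)
// changes which points lie inside the viewport.
let vp = Viewpoint(position: (0, 0), headingRadians: 0, fieldOfViewRadians: .pi / 2)
print(isVisible((x: 5, z: 1), from: vp))   // true: within 45 degrees of the heading
print(isVisible((x: 0, z: 5), from: vp))   // false: 90 degrees off to the side
```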
- a representation of a physical environment can be partially or fully obscured by a virtual environment.
- the amount of virtual environment that is displayed is based on an immersion level for the virtual environment (e.g., with respect to the representation of the physical environment). For example, increasing the immersion level optionally causes more of the virtual environment to be displayed, replacing and/or obscuring more of the physical environment, and reducing the immersion level optionally causes less of the virtual environment to be displayed, revealing portions of the physical environment that were previously not displayed and/or obscured.
- a level of immersion includes an associated degree to which the virtual content displayed by the computer system (e.g., the virtual environment and/or the virtual content) obscures background content (e.g., content other than the virtual environment and/or the virtual content) around/behind the virtual content, optionally including the number of items of background content displayed and/or the visual characteristics (e.g., colors, contrast, and/or opacity) with which the background content is displayed, the angular range of the virtual content displayed via the display generation component (e.g., 60 degrees of content displayed at low immersion, 120 degrees of content displayed at medium immersion, or 180 degrees of content displayed at high immersion), and/or the proportion of the field of view displayed via the display generation component that is consumed by the virtual content (e.g., 33% of the field of view consumed by the virtual content at low immersion, 66% of the field of view consumed by the virtual content at medium immersion, or 100% of the field of view consumed by the virtual content at high immersion).
- the background content is included in a background over which the virtual content is displayed (e.g., background content in the representation of the physical environment).
- the background content includes user interfaces (e.g., user interfaces generated by the computer system corresponding to applications), virtual objects (e.g., files or representations of other users generated by the computer system) not associated with or included in the virtual environment and/or virtual content, and/or real objects (e.g., pass-through objects representing real objects in the physical environment around the user that are visible such that they are displayed via the display generation component and/or visible via a transparent or translucent component of the display generation component because the computer system does not obscure/prevent visibility of them through the display generation component).
- the background, virtual and/or real objects are displayed in an unobscured manner.
- a virtual environment with a low level of immersion is optionally displayed concurrently with the background content, which is optionally displayed with full brightness, color, and/or translucency.
- the background, virtual and/or real objects are displayed in an obscured manner (e.g., dimmed, blurred, or removed from display).
- a respective virtual environment with a high level of immersion is displayed without concurrently displaying the background content (e.g., in a full screen or fully immersive mode).
- a virtual environment displayed with a medium level of immersion is displayed concurrently with darkened, blurred, or otherwise de-emphasized background content.
- the visual characteristics of the background objects vary among the background objects. For example, at a particular immersion level, one or more first background objects are visually de-emphasized (e.g., dimmed, blurred, and/or displayed with increased transparency) more than one or more second background objects, and one or more third background objects cease to be displayed.
- a null or zero level of immersion corresponds to the virtual environment ceasing to be displayed and instead a representation of a physical environment is displayed (optionally with one or more virtual objects such as applications, windows, or virtual three-dimensional objects) without the representation of the physical environment being obscured by the virtual environment.
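- The immersion levels discussed above can be summarized with the example values given (e.g., 60, 120, or 180 degrees of angular range, and background content shown, de-emphasized, or not displayed). The Swift sketch below pairs each level with those example values; the enum, the exact figures, and the background treatments are illustrative assumptions rather than a specification.

```swift
import Foundation

// Sketch pairing an immersion level with the angular range of virtual
// content and the treatment of background content, following the example
// values in the passage above.
enum ImmersionLevel {
    case none, low, medium, high

    /// Degrees of the user's view occupied by the virtual environment.
    var angularRangeDegrees: Double {
        switch self {
        case .none:   return 0
        case .low:    return 60
        case .medium: return 120
        case .high:   return 180
        }
    }

    /// How background content (pass-through objects, other windows) is shown.
    var backgroundTreatment: String {
        switch self {
        case .none:   return "physical environment shown unobscured"
        case .low:    return "background shown with full brightness and color"
        case .medium: return "background dimmed, blurred, or de-emphasized"
        case .high:   return "background not displayed (fully immersive)"
        }
    }
}

for level in [ImmersionLevel.none, .low, .medium, .high] {
    print(level, level.angularRangeDegrees, level.backgroundTreatment)
}
```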
- Adjusting the level of immersion using a physical input element provides for quick and efficient method of adjusting immersion, which enhances the operability of the computer system and makes the user-device interface more efficient.
- Viewpoint-locked virtual object: A virtual object is viewpoint-locked when a computer system displays the virtual object at the same location and/or position in the viewpoint of the user, even as the viewpoint of the user shifts (e.g., changes).
- the viewpoint of the user is locked to the forward facing direction of the user’s head (e.g., the viewpoint of the user is at least a portion of the field-of-view of the user when the user is looking straight ahead); thus, the viewpoint of the user remains fixed even as the user’s gaze is shifted, without moving the user’s head.
- when the computer system has a display generation component (e.g., a display screen) that can be repositioned with respect to the user’s head, the viewpoint of the user is the augmented reality view that is being presented to the user on a display generation component of the computer system.
- a viewpoint-locked virtual object that is displayed in the upper left corner of the viewpoint of the user when the viewpoint of the user is in a first orientation (e.g., with the user’s head facing north) continues to be displayed in the upper left corner of the viewpoint of the user, even as the viewpoint of the user changes to a second orientation (e.g., with the user’s head facing west).
- the location and/or position at which the viewpoint-locked virtual object is displayed in the viewpoint of the user is independent of the user’s position and/or orientation in the physical environment.
- the viewpoint of the user is locked to the orientation of the user’s head, such that the virtual object is also referred to as a “head-locked virtual object.”
- Environment-locked virtual object: A virtual object is environment-locked (alternatively, “world-locked”) when a computer system displays the virtual object at a location and/or position in the viewpoint of the user that is based on (e.g., selected in reference to and/or anchored to) a location and/or object in the three-dimensional environment (e.g., a physical environment or a virtual environment). As the viewpoint of the user shifts, the location and/or object in the environment relative to the viewpoint of the user changes, which results in the environment-locked virtual object being displayed at a different location and/or position in the viewpoint of the user.
- an environment-locked virtual object that is locked onto a tree that is immediately in front of a user is displayed at the center of the viewpoint of the user.
- as the viewpoint of the user shifts to the right (e.g., the user’s head is turned to the right) so that the tree is now left-of-center in the viewpoint of the user (e.g., the tree’s position in the viewpoint of the user shifts), the environment-locked virtual object that is locked onto the tree is displayed left-of-center in the viewpoint of the user.
- the location and/or position at which the environment-locked virtual object is displayed in the viewpoint of the user is dependent on the position and/or orientation of the location and/or object in the environment onto which the virtual object is locked.
- the computer system uses a stationary frame of reference (e.g., a coordinate system that is anchored to a fixed location and/or object in the physical environment) in order to determine the position at which to display an environment- locked virtual object in the viewpoint of the user.
- An environment-locked virtual object can be locked to a stationary part of the environment (e.g., a floor, wall, table, or other stationary object) or can be locked to a moveable part of the environment (e.g., a vehicle, animal, person, or even a representation of a portion of the user’s body that moves independently of a viewpoint of the user, such as a user’s hand, wrist, arm, or foot) so that the virtual object is moved as the viewpoint or the portion of the environment moves to maintain a fixed relationship between the virtual object and the portion of the environment.
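- The two anchoring modes described above can be contrasted in a simplified form: a viewpoint-locked object keeps a fixed offset in the viewport regardless of how the viewpoint moves, while an environment-locked object is re-projected from its anchor in the environment whenever the viewpoint changes. The following Swift sketch (2D, ignoring rotation) is an illustrative assumption of how that distinction could be modeled, not the claimed implementation.

```swift
import Foundation

// Sketch contrasting the two anchoring modes in a 2D simplification.
enum Anchor {
    case viewpointLocked(offsetInViewport: (x: Double, y: Double))
    case environmentLocked(worldPosition: (x: Double, y: Double))
}

/// Where the object appears in viewport coordinates for a viewpoint at
/// `viewerPosition` (rotation omitted for brevity).
func viewportPosition(of anchor: Anchor,
                      viewerPosition: (x: Double, y: Double)) -> (x: Double, y: Double) {
    switch anchor {
    case .viewpointLocked(let offset):
        // Independent of where the viewer is or looks: e.g., an element
        // pinned to the upper-left corner of the view.
        return offset
    case .environmentLocked(let world):
        // Moves within the viewport as the viewer moves, staying with the
        // environment location it is locked to (e.g., a tree or a wall).
        return (x: world.x - viewerPosition.x, y: world.y - viewerPosition.y)
    }
}

let hud = Anchor.viewpointLocked(offsetInViewport: (x: -0.4, y: 0.3))
let label = Anchor.environmentLocked(worldPosition: (x: 2.0, y: 1.0))
print(viewportPosition(of: hud, viewerPosition: (x: 5, y: 0)))    // (-0.4, 0.3)
print(viewportPosition(of: label, viewerPosition: (x: 5, y: 0)))  // (-3.0, 1.0)
```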
- a virtual object that is environment-locked or viewpoint-locked exhibits lazy follow behavior, which reduces or delays motion of the environment-locked or viewpoint-locked virtual object relative to movement of a point of reference which the virtual object is following.
- when exhibiting lazy follow behavior, the computer system intentionally delays movement of the virtual object when detecting movement of a point of reference (e.g., a portion of the environment, the viewpoint, or a point that is fixed relative to the viewpoint, such as a point that is between 5-300 cm from the viewpoint) which the virtual object is following.
- when the point of reference (e.g., the portion of the environment or the viewpoint) moves with a first speed, the virtual object is moved by the device to remain locked to the point of reference but moves with a second speed that is slower than the first speed (e.g., until the point of reference stops moving or slows down, at which point the virtual object starts to catch up to the point of reference).
- the device ignores small amounts of movement of the point of reference (e.g., ignoring movement of the point of reference that is below a threshold amount of movement such as movement by 0-5 degrees or movement by 0-50 cm).
- when the point of reference (e.g., the portion of the environment or the viewpoint to which the virtual object is locked) moves by a first amount, a distance between the point of reference and the virtual object increases (e.g., because the virtual object is being displayed so as to maintain a fixed or substantially fixed position relative to a viewpoint or portion of the environment that is different from the point of reference to which the virtual object is locked), and when the point of reference (e.g., the portion of the environment or the viewpoint to which the virtual object is locked) moves by a second amount that is greater than the first amount, a distance between the point of reference and the virtual object initially increases (e.g., because the virtual object is being displayed so as to maintain a fixed or substantially fixed position relative to a viewpoint or portion of the environment that is different from the point of reference to which the virtual object is locked) and then decreases as the amount of movement of the point of reference increases above a threshold (e.g., a “lazy follow” threshold) because the virtual object is moved by the computer system to maintain a fixed or substantially fixed position relative to the point of reference.
- the virtual object maintaining a substantially fixed position relative to the point of reference includes the virtual object being displayed within a threshold distance (e.g., 1, 2, 3, 5, 15, 20, 50 cm) of the point of reference in one or more dimensions (e.g., up/down, left/right, and/or forward/backward relative to the position of the point of reference).
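- The lazy follow behavior described above delays and attenuates the motion of a followed virtual object: small movements of the point of reference are ignored, and larger movements are tracked at a slower speed so the object catches up gradually. The Swift sketch below models this in one dimension; the dead-band, catch-up fraction, and update rule are illustrative assumptions, not the claimed implementation.

```swift
import Foundation

// Sketch of lazy follow: the followed point of reference moves, and the
// virtual object trails it, ignoring movement below a dead-band and
// otherwise closing only a fraction of the remaining gap per update.
struct LazyFollower {
    var objectPosition: Double            // 1D for brevity
    let deadBand: Double = 0.05           // small reference movement is ignored
    let catchUpFraction: Double = 0.2     // object moves slower than the reference

    mutating func update(referencePosition: Double) {
        let gap = referencePosition - objectPosition
        guard abs(gap) > deadBand else { return }   // ignore small movements
        // Move a fraction of the gap, so the object lags the reference and
        // then catches up once the reference slows or stops.
        objectPosition += gap * catchUpFraction
    }
}

var follower = LazyFollower(objectPosition: 0)
for reference in [0.02, 0.4, 0.8, 0.8, 0.8, 0.8] {
    follower.update(referencePosition: reference)
    print(follower.objectPosition)
}
// The first tiny movement is ignored; afterwards the object closes the
// distance to the reference gradually rather than jumping with it.
```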
- Hardware: There are many different types of electronic systems that enable a person to sense and/or interact with various XR environments. Examples include head-mounted systems, projection-based systems, heads-up displays (HUDs), vehicle windshields having integrated display capability, windows having integrated display capability, displays formed as lenses designed to be placed on a person’s eyes (e.g., similar to contact lenses), headphones/earphones, speaker arrays, input systems (e.g., wearable or handheld controllers with or without haptic feedback), smartphones, tablets, and desktop/laptop computers.
- a head-mounted system may include speakers and/or other audio output devices integrated into the head-mounted system for providing audio output.
- a head-mounted system may have one or more speaker(s) and an integrated opaque display.
- a head-mounted system may be configured to accept an external opaque display (e.g., a smartphone).
- the head-mounted system may incorporate one or more imaging sensors to capture images or video of the physical environment, and/or one or more microphones to capture audio of the physical environment.
- a head-mounted system may have a transparent or translucent display.
- the transparent or translucent display may have a medium through which light representative of images is directed to a person’s eyes.
- the display may utilize digital light projection, OLEDs, LEDs, uLEDs, liquid crystal on silicon, laser scanning light source, or any combination of these technologies.
- the medium may be an optical waveguide, a hologram medium, an optical combiner, an optical reflector, or any combination thereof.
- the transparent or translucent display may be configured to become opaque selectively.
- Projection-based systems may employ retinal projection technology that projects graphical images onto a person’s retina.
- Projection systems also may be configured to project virtual objects into the physical environment, for example, as a hologram or on a physical surface.
- the controller 110 is configured to manage and coordinate an XR experience for the user.
- the controller 110 includes a suitable combination of software, firmware, and/or hardware. The controller 110 is described in greater detail below with respect to Figure 2.
- the controller 110 is a computing device that is local or remote relative to the scene 105 (e.g., a physical environment).
- the controller 110 is a local server located within the scene 105.
- the controller 110 is a remote server located outside of the scene 105 (e.g., a cloud server, central server, etc.).
- the controller 110 is communicatively coupled with the display generation component 120 (e.g., an HMD, a display, a projector, a touch-screen, etc.) via one or more wired or wireless communication channels 144 (e.g., BLUETOOTH, IEEE 802.11x, IEEE 802.16x, IEEE 802.3x, etc.).
- the controller 110 is included within the enclosure (e.g., a physical housing) of the display generation component 120 (e.g., an HMD, or a portable electronic device that includes a display and one or more processors, etc.), one or more of the input devices 125, one or more of the output devices 155, one or more of the sensors 190, and/or one or more of the peripheral devices 195, or shares the same physical enclosure or support structure with one or more of the above.
- the display generation component 120 is configured to provide the XR experience (e.g., at least a visual component of the XR experience) to the user.
- the display generation component 120 includes a suitable combination of software, firmware, and/or hardware. The display generation component 120 is described in greater detail below with respect to Figure 3.
- the functionalities of the controller 110 are provided by and/or combined with the display generation component 120.
- the display generation component 120 provides an XR experience to the user while the user is virtually and/or physically present within the scene 105.
- the display generation component is worn on a part of the user’s body (e.g., on his/her head, on his/her hand, etc.).
- the display generation component 120 includes one or more XR displays provided to display the XR content.
- the display generation component 120 encloses the field-of-view of the user.
- the display generation component 120 is a handheld device (such as a smartphone or tablet) configured to present XR content, and the user holds the device with a display directed towards the field-of-view of the user and a camera directed towards the scene 105.
- the handheld device is optionally placed within an enclosure that is worn on the head of the user.
- the handheld device is optionally placed on a support (e.g., a tripod) in front of the user.
- the display generation component 120 is an XR chamber, enclosure, or room configured to present XR content in which the user does not wear or hold the display generation component 120.
- Many user interfaces described with reference to one type of hardware for displaying XR content (e.g., a handheld device or a device on a tripod) could be implemented on another type of hardware for displaying XR content (e.g., an HMD or other wearable computing device).
- a user interface showing interactions with XR content triggered based on interactions that happen in a space in front of a handheld or tripod mounted device could similarly be implemented with an HMD where the interactions happen in a space in front of the HMD and the responses of the XR content are displayed via the HMD.
- a user interface showing interactions with XR content triggered based on movement of a handheld or tripod mounted device relative to the physical environment could similarly be implemented with an HMD where the movement is caused by movement of the HMD relative to the physical environment (e.g., the scene 105 or a part of the user’s body (e.g., the user’s eye(s), head, or hand)).
- Figures 1A-1P illustrate various examples of a computer system that is used to perform the methods and provide audio, visual and/or haptic feedback as part of user interfaces described herein.
- computer system includes one or more display generation components (e.g., first and second display assemblies 1-120a, 1-120b and/or first and second optical modules 11.1.1-104a and 11.1.1-104b) for displaying virtual elements and/or a representation of a physical environment to a user of the computer system, optionally generated based on detected events and/or user inputs detected by the computer system.
- User interfaces generated by the computer system are optionally corrected by one or more corrective lenses 11.3.2-216 (sometimes referred to as prescription lenses or nonprescription lenses) that are optionally removably attached to one or more of the optical modules to enable the user interfaces to be more easily viewed by users who would otherwise use glasses or contacts to correct their vision. While many user interfaces illustrated herein show a single view of a user interface, user interfaces in an HMD are optionally displayed using two optical modules (e.g., first and second display assemblies 1-120a, 1-120b and/or first and second optical modules 11.1.1-104a and 11.1.1-104b).
- computer system includes one or more external displays (e.g., display assembly 1-108) for displaying status information for the computer system to the user of the computer system (when the computer system is not being worn) and/or to other people who are near the computer system, optionally generated based on detected events and/or user inputs detected by the computer system.
- computer system includes one or more audio output components (e.g., electronic component 1-112) for generating audio feedback, optionally generated based on detected events and/or user inputs detected by the computer system.
- the computer system includes one or more input devices for detecting input such as one or more sensors (e.g., one or more sensors in sensor assembly 1-356, and/or Figure 1I) for detecting information about a physical environment of the device which can be used (optionally in conjunction with one or more illuminators such as the illuminators described in Figure 1I) to generate a digital passthrough image, capture visual media corresponding to the physical environment (e.g., photos and/or video), or determine a pose (e.g., position and/or orientation) of physical objects and/or surfaces in the physical environment so that virtual objects can be placed based on a detected pose of physical objects and/or surfaces.
- the computer system includes one or more input devices for detecting input such as one or more sensors for detecting hand position and/or movement (e.g., one or more sensors in sensor assembly 1-356, and/or Figure 1I) that can be used (optionally in conjunction with one or more illuminators such as the illuminators 6-124 described in Figure 1I) to determine when one or more air gestures have been performed.
- the computer system includes one or more input devices for detecting input such as one or more sensors for detecting eye movement (e.g., eye tracking and gaze tracking sensors in Figure 1I) which can be used (optionally in conjunction with one or more lights such as lights 11.3.2-110 in Figure 1O) to determine attention or gaze position and/or gaze movement, which can optionally be used to detect gaze-only inputs based on gaze movement and/or dwell.
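- As a rough illustration of the gaze-only (dwell) inputs mentioned above, a system might accumulate how long the tracked gaze remains on one target and trigger a selection once a dwell threshold is reached. This is a minimal sketch under assumed names and an assumed 0.8-second threshold, not the disclosed implementation.

```python
class DwellDetector:
    """Triggers a selection when gaze stays on one target long enough."""

    def __init__(self, dwell_seconds=0.8):
        self.dwell_seconds = dwell_seconds  # assumed threshold
        self._current_target = None
        self._elapsed = 0.0

    def update(self, gazed_target, dt):
        """gazed_target: identifier of the element under the gaze ray (or
        None); dt: seconds since the previous update.
        Returns the target to select this frame, or None."""
        if gazed_target != self._current_target:
            # Gaze moved to a different element: restart the dwell timer.
            self._current_target = gazed_target
            self._elapsed = 0.0
            return None
        if gazed_target is None:
            return None
        self._elapsed += dt
        if self._elapsed >= self.dwell_seconds:
            self._elapsed = 0.0  # re-arm for a possible repeated selection
            return gazed_target
        return None
```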
- a combination of the various sensors described above can be used to determine user facial expressions and/or hand movements for use in generating an avatar or representation of the user such as an anthropomorphic avatar or representation for use in a real-time communication session where the avatar has facial expressions, hand movements, and/or body movements that are based on or similar to detected facial expressions, hand movements, and/or body movements of a user of the device.
- Gaze and/or attention information is, optionally, combined with hand tracking information to determine interactions between the user and one or more user interfaces based on direct and/or indirect inputs such as air gestures or inputs that use one or more hardware input devices such as one or more buttons (e.g., first button 1-128, button 11.1.1-114, second button 1-132, and/or dial or button 1-328), knobs (e.g., first button 1-128, button 11.1.1-114, and/or dial or button 1-328), digital crowns (e.g., first button 1-128 which is depressible and twistable or rotatable, button 11.1.1-114, and/or dial or button 1-328), trackpads, touch screens, keyboards, mice, and/or other input devices.
- buttons are optionally used to perform system operations such as recentering content in a three-dimensional environment that is visible to a user of the device, displaying a home user interface for launching applications, starting real-time communication sessions, or initiating display of virtual three-dimensional backgrounds.
- Knobs or digital crowns are optionally rotatable to adjust parameters of the visual content such as a level of immersion of a virtual three-dimensional environment (e.g., a degree to which virtual content occupies the viewport of the user into the three-dimensional environment) or other parameters associated with the three-dimensional environment and the virtual content that is displayed via the optical modules (e.g., first and second display assemblies 1-120a, 1-120b and/or first and second optical modules 11.1.1-104a and 11.1.1-104b).
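- To make the combination of gaze/attention information with hand tracking information described above concrete, the sketch below routes an air pinch gesture to whichever element the user was looking at when the pinch began, which is the basic pattern behind an indirect input. The class, event names, and latching rule are illustrative assumptions, not the input pipeline of any particular system.

```python
class IndirectInputRouter:
    """Routes air-pinch gestures to the element under the user's gaze."""

    def __init__(self):
        self._active_target = None  # element captured when the pinch began

    def update(self, gaze_target, pinch_down):
        """gaze_target: element under the gaze ray (or None);
        pinch_down: True while thumb and index finger are touching.
        Returns an (event, target) tuple or None."""
        if pinch_down and self._active_target is None:
            if gaze_target is None:
                return None
            # Latch the gaze target at the moment the pinch begins, so the
            # interaction stays with that element even if the gaze moves.
            self._active_target = gaze_target
            return ("select_began", self._active_target)
        if not pinch_down and self._active_target is not None:
            target, self._active_target = self._active_target, None
            return ("select_ended", target)
        return None
```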
- FIG. 1B illustrates a front, top, perspective view of an example of a head-mountable display (HMD) device 1-100 configured to be donned by a user and provide virtual and altered/mixed reality (VR/AR) experiences.
- the HMD 1-100 can include a display unit 1-102 or assembly, an electronic strap assembly 1-104 connected to and extending from the display unit 1-102, and a band assembly 1-106 secured at either end to the electronic strap assembly 1-104.
- the electronic strap assembly 1-104 and the band 1-106 can be part of a retention assembly configured to wrap around a user’s head to hold the display unit 1-102 against the face of the user.
- the band assembly 1-106 can include a first band 1-116 configured to wrap around the rear side of a user’s head and a second band 1-117 configured to extend over the top of a user’s head.
- the second strap can extend between first and second electronic straps 1-105a, 1-105b of the electronic strap assembly 1-104 as shown.
- the strap assembly 1-104 and the band assembly 1-106 can be part of a securement mechanism extending rearward from the display unit 1-102 and configured to hold the display unit 1-102 against a face of a user.
- the securement mechanism includes a first electronic strap l-105a including a first proximal end 1-134 coupled to the display unit 1-102, for example a housing 1-150 of the display unit 1-102, and a first distal end 1-136 opposite the first proximal end 1-134.
- the securement mechanism can also include a second electronic strap 1- 105b including a second proximal end 1-138 coupled to the housing 1-150 of the display unit 1-102 and a second distal end 1-140 opposite the second proximal end 1-138.
- the securement mechanism can also include the first band 1-116 including a first end 1-142 coupled to the first distal end 1-136 and a second end 1-144 coupled to the second distal end 1-140 and the second band 1-117 extending between the first electronic strap 1-105a and the second electronic strap l-105b.
- the straps l-105a-b and band 1-116 can be coupled via connection mechanisms or assemblies 1-114.
- the second band 1-117 includes a first end 1-146 coupled to the first electronic strap l-105a between the first proximal end 1- 134 and the first distal end 1-136 and a second end 1-148 coupled to the second electronic strap 1 -105b between the second proximal end 1-138 and the second distal end 1-140.
- the first and second electronic straps 1-105a-b include plastic, metal, or other structural materials forming the shape of the substantially rigid straps 1-105a-b.
- the first and second bands 1-116, 1-117 are formed of elastic, flexible materials including woven textiles, rubbers, and the like. The first and second bands 1-116, 1-117 can be flexible to conform to the shape of the user's head when donning the HMD 1-100.
- one or more of the first and second electronic straps 1- 105a-b can define internal strap volumes and include one or more electronic components disposed in the internal strap volumes.
- the first electronic strap l-105a can include an electronic component 1-112.
- the electronic component 1-112 can include a speaker.
- the electronic component 1-112 can include a computing component such as a processor.
- the housing 1-150 defines a first, front-facing opening 1- 152.
- the front-facing opening is labeled in dotted lines at 1-152 in Figure IB because the front cover assembly 1-108 is disposed to occlude the first opening 1-152 from view when the HMD is assembled.
- the housing 1-150 can also define a rear-facing second opening 1- 154.
- the housing 1-150 also defines an internal volume between the first and second openings 1-152, 1-154.
- the HMD 1-100 includes the display assembly 1-108, which can include a front cover and display screen (shown in other figures) disposed in or across the front opening 1-152 to occlude the front opening 1-152.
- the display screen of the display assembly 1-108 has a curvature configured to follow the curvature of a user’s face.
- the display screen of the display assembly 1-108 can be curved as shown to complement the user's facial features and general curvature from one side of the face to the other, for example from left to right and/or from top to bottom where the display unit 1-102 is pressed.
- the housing 1-150 can define a first aperture 1-126 between the first and second openings 1-152, 1-154 and a second aperture 1-130 between the first and second openings 1-152, 1-154.
- the HMD 1-100 can also include a first button 1-128 disposed in the first aperture 1-126 and a second button 1-132 disposed in the second aperture 1-130.
- the first and second buttons 1-128, 1-132 can be depressible through the respective apertures 1-126, 1-130.
- the first button 1-128 and/or second button 1-132 can be twistable dials as well as depressible buttons.
- the first button 1-128 is a depressible and twistable dial button and the second button 1-132 is a depressible button.
- FIG. 1C illustrates a rear, perspective view of the HMD 1-100.
- the HMD 1-100 can include a light seal 1-110 extending rearward from the housing 1-150 of the display unit 1-102 around a perimeter of the housing 1-150 as shown.
- the light seal 1-110 can be configured to extend from the housing 1-150 to the user’s face around the user’s eyes to block external light from being visible.
- the HMD 1-100 can include first and second display assemblies l-120a, l-120b disposed at or in the rearward facing second opening 1-154 defined by the housing 1-150 and/or disposed in the internal volume of the housing 1-150 and configured to project light through the second opening 1-154.
- each display assembly l-120a-b can include respective display screens l-122a, l-122b configured to project light in a rearward direction through the second opening 1-154 toward the user’s eyes.
- the display assembly 1-108 can be a front-facing, forward display assembly including a display screen configured to project light in a first, forward direction and the rear facing display screens l-122a-b can be configured to project light in a second, rearward direction opposite the first direction.
- the light seal 1-110 can be configured to block light external to the HMD 1-100 from reaching the user’s eyes, including light projected by the forward facing display screen of the display assembly 1-108 shown in the front perspective view of Figure IB.
- the HMD 1-100 can also include a curtain 1-124 occluding the second opening 1-154 between the housing 1-150 and the rear-facing display assemblies l-120a-b.
- the curtain 1-124 can be elastic or at least partially elastic.
- any of the features, components, and/or parts, including the arrangements and configurations thereof shown in Figures 1B and 1C can be included, either alone or in any combination, in any of the other examples of devices, features, components, and parts shown in Figures 1D-1F and described herein.
- any of the features, components, and/or parts, including the arrangements and configurations thereof shown and described with reference to Figures 1D-1F can be included, either alone or in any combination, in the example of the devices, features, components, and parts shown in Figures 1B and 1C.
- Figure 1D illustrates an exploded view of an example of an HMD 1-200 including various portions or parts thereof separated according to the modularity and selective coupling of those parts.
- the HMD 1-200 can include a band 1-216 which can be selectively coupled to first and second electronic straps l-205a, l-205b.
- the first securement strap l-205a can include a first electronic component l-212a and the second securement strap l-205b can include a second electronic component 1-212b.
- the first and second straps l-205a-b can be removably coupled to the display unit 1-202.
- the HMD 1-200 can include a light seal 1-210 configured to be removably coupled to the display unit 1-202.
- the HMD 1-200 can also include lenses 1-218 which can be removably coupled to the display unit 1-202, for example over first and second display assemblies including display screens.
- the lenses 1-218 can include customized prescription lenses configured for corrective vision.
- each part shown in the exploded view of Figure ID and described above can be removably coupled, attached, reattached, and changed out to update parts or swap out parts for different users.
- bands such as the band 1-216, light seals such as the light seal 1-210, lenses such as the lenses 1-218, and electronic straps such as the straps l-205a-b can be swapped out depending on the user such that these parts are customized to fit and correspond to the individual user of the HMD 1-200.
- any of the features, components, and/or parts, including the arrangements and configurations thereof shown in Figure 1D can be included, either alone or in any combination, in any of the other examples of devices, features, components, and parts shown in Figures 1B, 1C, and 1E-1F and described herein.
- any of the features, components, and/or parts, including the arrangements and configurations thereof shown and described with reference to Figures 1B, 1C, and 1E-1F can be included, either alone or in any combination, in the example of the devices, features, components, and parts shown in Figure 1D.
- Figure 1E illustrates an exploded view of an example of a display unit 1-306 of an HMD.
- the display unit 1-306 can include a front display assembly 1-308, a frame/housing assembly 1-350, and a curtain assembly 1-324.
- the display unit 1-306 can also include a sensor assembly 1-356, logic board assembly 1-358, and cooling assembly 1-360 disposed between the frame assembly 1-350 and the front display assembly 1-308.
- the display unit 1-306 can also include a rear-facing display assembly 1-320 including first and second rear-facing display screens l-322a, l-322b disposed between the frame 1-350 and the curtain assembly 1-324.
- the display unit 1-306 can also include a motor assembly 1-362 configured as an adjustment mechanism for adjusting the positions of the display screens l-322a-b of the display assembly 1-320 relative to the frame 1-350.
- the display assembly 1-320 is mechanically coupled to the motor assembly 1-362, with at least one motor for each display screen l-322a-b, such that the motors can translate the display screens l-322a-b to match an interpupillary distance of the user’s eyes.
- the display unit 1-306 can include a dial or button 1-328 depressible relative to the frame 1-350 and accessible to the user outside the frame 1-350.
- the button 1-328 can be electronically connected to the motor assembly 1-362 via a controller such that the button 1-328 can be manipulated by the user to cause the motors of the motor assembly 1-362 to adjust the positions of the display screens l-322a-b.
- any of the features, components, and/or parts, including the arrangements and configurations thereof shown in Figure 1E can be included, either alone or in any combination, in any of the other examples of devices, features, components, and parts shown in Figures 1B-1D and 1F and described herein.
- any of the features, components, and/or parts, including the arrangements and configurations thereof shown and described with reference to Figures 1B-1D and 1F can be included, either alone or in any combination, in the example of the devices, features, components, and parts shown in Figure 1E.
- FIG. 1F illustrates an exploded view of another example of a display unit 1-406 of an HMD device similar to other HMD devices described herein.
- the display unit 1-406 can include a front display assembly 1-402, a sensor assembly 1-456, a logic board assembly 1-458, a cooling assembly 1-460, a frame assembly 1-450, a rear-facing display assembly 1-421, and a curtain assembly 1-424.
- the display unit 1-406 can also include a motor assembly 1-462 for adjusting the positions of first and second display sub-assemblies l-420a, l-420b of the rear-facing display assembly 1-421, including first and second respective display screens for interpupillary adjustments, as described above.
- The various parts, systems, and assemblies shown in the exploded view of Figure 1F are described in greater detail herein with reference to Figures 1B-1E as well as subsequent figures referenced in the present disclosure.
- the display unit 1-406 shown in Figure IF can be assembled and integrated with the securement mechanisms shown in Figures 1B-1E, including the electronic straps, bands, and other components including light seals, connection assemblies, and so forth.
- any of the features, components, and/or parts, including the arrangements and configurations thereof shown in Figure 1F can be included, either alone or in any combination, in any of the other examples of devices, features, components, and parts shown in Figures 1B-1E and described herein.
- any of the features, components, and/or parts, including the arrangements and configurations thereof shown and described with reference to Figures 1B-1E can be included, either alone or in any combination, in the example of the devices, features, components, and parts shown in Figure 1F.
- FIG. 1G illustrates a perspective, exploded view of a front cover assembly 3-100 of an HMD device described herein, for example the front cover assembly 3-1 of the HMD 3-100 shown in Figure 1G or any other HMD device shown and described herein.
- the front cover assembly 3-100 shown in Figure IB can include a transparent or semi-transparent cover 3-102, shroud 3-104 (or “canopy”), adhesive layers 3-106, display assembly 3-108 including a lenticular lens panel or array 3-110, and a structural trim 3-112.
- the adhesive layer 3-106 can secure the shroud 3-104 and/or transparent cover 3-102 to the display assembly 3-108 and/or the trim 3-112.
- the trim 3-112 can secure the various components of the front cover assembly 3-100 to a frame or chassis of the HMD device.
- the transparent cover 3-102, shroud 3-104, and display assembly 3-108 can be curved to accommodate the curvature of a user’s face.
- the transparent cover 3-102 and the shroud 3-104 can be curved in two or three dimensions, e.g., vertically curved in the Z- direction in and out of the Z-X plane and horizontally curved in the X-direction in and out of the Z-X plane.
- the display assembly 3-108 can include the lenticular lens array 3-110 as well as a display panel having pixels configured to project light through the shroud 3-104 and the transparent cover 3-102.
- the display assembly 3-108 can be curved in at least one direction, for example the horizontal direction, to accommodate the curvature of a user’s face from one side (e.g., left side) of the face to the other (e.g., right side).
- each layer or component of the display assembly 3-108 which will be shown in subsequent figures and described in more detail, but which can include the lenticular lens array 3-110 and a display layer, can be similarly or concentrically curved in the horizontal direction to accommodate the curvature of the user’s face.
- the shroud 3-104 can include a transparent or semitransparent material through which the display assembly 3-108 projects light.
- the shroud 3-104 can include one or more opaque portions, for example opaque ink-printed portions or other opaque film portions on the rear surface of the shroud 3-104.
- the rear surface can be the surface of the shroud 3-104 facing the user’s eyes when the HMD device is donned.
- opaque portions can be on the front surface of the shroud 3- 104 opposite the rear surface.
- the opaque portion or portions of the shroud 3-104 can include perimeter portions visually hiding any components around an outside perimeter of the display screen of the display assembly 3-108. In this way, the opaque portions of the shroud hide any other components, including electronic components, structural components, and so forth, of the HMD device that would otherwise be visible through the transparent or semi-transparent cover 3-102 and/or shroud 3-104.
- the shroud 3-104 can define one or more apertures or transparent portions 3-120 through which sensors can send and receive signals.
- the portions 3-120 are apertures through which the sensors can extend or send and receive signals.
- the portions 3-120 are transparent portions, or portions more transparent than surrounding semi-transparent or opaque portions of the shroud, through which sensors can send and receive signals through the shroud and through the transparent cover 3-102.
- the sensors can include cameras, IR sensors, LUX sensors, or any other visual or non-visual environmental sensors of the HMD device.
- any of the features, components, and/or parts, including the arrangements and configurations thereof shown in Figure 1G can be included, either alone or in any combination, in any of the other examples of devices, features, components, and parts described herein.
- any of the features, components, and/or parts, including the arrangements and configurations thereof shown and described herein can be included, either alone or in any combination, in the example of the devices, features, components, and parts shown in Figure 1G.
- FIG. 1H illustrates an exploded view of an example of an HMD device 6-100.
- the HMD device 6-100 can include a sensor array or system 6-102 including one or more sensors, cameras, projectors, and so forth mounted to one or more components of the HMD 6-100.
- the sensor system 6-102 can include a bracket 1-338 on which one or more sensors of the sensor system 6-102 can be fixed/secured.
- Figure 1I illustrates a portion of an HMD device 6-100 including a front transparent cover 6-104 and a sensor system 6-102.
- the sensor system 6-102 can include a number of different sensors, emitters, receivers, including cameras, IR sensors, projectors, and so forth.
- the transparent cover 6-104 is illustrated in front of the sensor system 6-102 to illustrate relative positions of the various sensors and emitters as well as the orientation of each sensor/emitter of the system 6-102.
- “sideways,” “side,” “lateral,” “horizontal,” and other similar terms refer to orientations or directions as indicated by the X-axis shown in Figure 1J.
- the transparent cover 6-104 can define a front, external surface of the HMD device 6-100 and the sensor system 6-102, including the various sensors and components thereof, can be disposed behind the cover 6-104 in the Y-axis/direction.
- the cover 6-104 can be transparent or semi-transparent to allow light to pass through the cover 6- 104, both light detected by the sensor system 6-102 and light emitted thereby.
- the HMD device 6-100 can include one or more controllers including processors for electrically coupling the various sensors and emitters of the sensor system 6-102 with one or more mother boards, processing units, and other electronic devices such as display screens and the like.
- the various sensors, emitters, and other components of the sensor system 6-102 can be coupled to various structural frame members, brackets, and so forth of the HMD device 6-100 not shown in Figure 1I.
- Figure 1I shows the components of the sensor system 6-102 unattached and un-coupled electrically from other components for the sake of illustrative clarity.
- the device can include one or more controllers having processors configured to execute instructions stored on memory components electrically coupled to the processors.
- the instructions can include, or cause the processor to execute, one or more algorithms for self-correcting angles and positions of the various cameras described herein over time with use as the initial positions, angles, or orientations of the cameras get bumped or deformed due to unintended drop events or other events.
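- One plausible shape for such a self-correction step, offered only as an illustrative sketch rather than the disclosed algorithm, is to blend a freshly estimated camera orientation into the stored calibration a little at a time, so that occasional bad estimates do not overwrite it while a genuine shift from a drop event is gradually absorbed. The function name and blending factor are assumptions.

```python
def update_camera_extrinsics(stored_angles, observed_angles, blend=0.02):
    """Slowly pull stored camera mounting angles toward angles estimated at
    runtime (e.g., from features seen by overlapping cameras).

    stored_angles, observed_angles: (roll, pitch, yaw) in degrees.
    blend: fraction of the discrepancy absorbed per estimate (assumed).
    Returns the updated calibration angles.
    """
    return tuple(s + blend * (o - s)
                 for s, o in zip(stored_angles, observed_angles))
```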
- the sensor system 6-102 can include one or more scene cameras 6-106.
- the system 6-102 can include two scene cameras 6-106 disposed on either side of the nasal bridge or arch of the HMD device 6-100 such that each of the two cameras 6-106 corresponds generally in position with the left and right eyes of the user behind the cover 6-104.
- the scene cameras 6-106 are oriented generally forward in the Y-direction to capture images in front of the user during use of the HMD 6-100.
- the scene cameras are color cameras and provide images and content for MR video pass through to the display screens facing the user’s eyes when using the HMD device 6-100.
- the scene cameras 6-106 can also be used for environment and object reconstruction.
- the sensor system 6-102 can include a first depth sensor 6- 108 pointed generally forward in the Y-direction.
- the first depth sensor 6-108 can be used for environment and object reconstruction as well as user hand and body tracking.
- the sensor system 6-102 can include a second depth sensor 6-110 disposed centrally along the width (e.g., along the X-axis) of the HMD device 6-100.
- the second depth sensor 6-110 can be disposed above the central nasal bridge or accommodating features over the nose of the user when donning the HMD 6-100.
- the second depth sensor 6-110 can be used for environment and object reconstruction as well as hand and body tracking.
- the second depth sensor can include a LIDAR sensor.
- the sensor system 6-102 can include a depth projector 6- 112 facing generally forward to project electromagnetic waves, for example in the form of a predetermined pattern of light dots, out into and within a field of view of the user and/or the scene cameras 6-106 or a field of view including and beyond the field of view of the user and/or scene cameras 6-106.
- the depth projector can project electromagnetic waves of light in the form of a dotted light pattern to be reflected off objects and back into the depth sensors noted above, including the depth sensors 6-108, 6-110.
- the depth projector 6-112 can be used for environment and object reconstruction as well as hand and body tracking.
- the sensor system 6-102 can include downward facing cameras 6-114 with a field of view pointed generally downward relative to the HMD device 6-100 in the Z-axis.
- the downward cameras 6-114 can be disposed on left and right sides of the HMD device 6-100 as shown and used for hand and body tracking, headset tracking, and facial avatar detection and creation for displaying a user avatar on the forward facing display screen of the HMD device 6-100 described elsewhere herein.
- the downward cameras 6-114 can be used to capture facial expressions and movements for the face of the user below the HMD device 6-100, including the cheeks, mouth, and chin.
- the sensor system 6-102 can include jaw cameras 6-116.
- the jaw cameras 6-116 can be disposed on left and right sides of the HMD device 6-100 as shown and used for hand and body tracking, headset tracking, and facial avatar detection and creation for displaying a user avatar on the forward facing display screen of the HMD device 6-100 described elsewhere herein.
- the jaw cameras 6-116 can be used to capture facial expressions and movements for the face of the user below the HMD device 6-100, including the user's jaw, cheeks, mouth, and chin, for hand and body tracking, headset tracking, and facial avatar detection and creation.
- the sensor system 6-102 can include side cameras 6-118.
- the side cameras 6-118 can be oriented to capture side views left and right in the X-axis or direction relative to the HMD device 6-100.
- the side cameras 6-118 can be used for hand and body tracking, headset tracking, and facial avatar detection and recreation.
- the sensor system 6-102 can include a plurality of eye tracking and gaze tracking sensors for determining an identity, status, and gaze direction of a user’s eyes during and/or before use.
- the eye/gaze tracking sensors can include nasal eye cameras 6-120 disposed on either side of the user’s nose and adjacent the user’s nose when donning the HMD device 6-100.
- the eye/gaze sensors can also include bottom eye cameras 6-122 disposed below respective user eyes for capturing images of the eyes for facial avatar detection and creation, gaze tracking, and iris identification functions.
- the sensor system 6-102 can include infrared illuminators 6-124 pointed outward from the HMD device 6-100 to illuminate the external environment and any object therein with IR light for IR detection with one or more IR sensors of the sensor system 6-102.
- the sensor system 6-102 can include a flicker sensor 6-126 and an ambient light sensor 6-128.
- the flicker sensor 6-126 can detect overhead light refresh rates to avoid display flicker.
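- As an illustration of how a detected ambient flicker frequency might be used, a system could pick the display or capture rate that best synchronizes with the detected flicker so that successive frames sample it at a nearly constant phase and no slow, visible pulsing results. The candidate rates and selection rule below are assumptions for illustration only, not the disclosed method.

```python
def pick_refresh_rate(flicker_hz, candidates=(90.0, 96.0, 100.0)):
    """Choose the candidate rate for which the detected flicker frequency is
    closest to an integer multiple of the rate (e.g., 100 Hz under 50 Hz
    mains lighting, whose lamps flicker at 100 Hz)."""
    def mismatch(rate):
        ratio = flicker_hz / rate
        return abs(ratio - round(ratio))  # 0.0 means perfectly synchronized

    return min(candidates, key=mismatch)
```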
- the infrared illuminators 6-124 can include light emitting diodes and can be used especially for low light environments for illuminating user hands and other objects in low light for detection by infrared sensors of the sensor system 6-102.
- multiple sensors including the scene cameras 6-106, the downward cameras 6-114, the jaw cameras 6-116, the side cameras 6-118, the depth projector 6-112, and the depth sensors 6-108, 6-110 can be used in combination with an electrically coupled controller to combine depth data with camera data for hand tracking and for size determination for better hand tracking and object recognition and tracking functions of the HMD device 6-100.
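- A very small illustration of the depth-plus-camera combination described above: a hand keypoint detected in a 2D camera image can be lifted into 3D by sampling the depth map at that pixel and back-projecting through the camera intrinsics. The pinhole model below is standard; the function name and the suggestion that this is how the device combines the data are assumptions.

```python
def keypoint_to_3d(u, v, depth_m, fx, fy, cx, cy):
    """Back-project a 2D keypoint (pixel u, v) with a sampled depth value
    into a 3D point in the camera frame using a pinhole camera model.

    fx, fy: focal lengths in pixels; cx, cy: principal point in pixels.
    Returns (x, y, z) in meters.
    """
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)
```

Once keypoints are expressed in 3D, physical hand size follows from the distances between joints, which is one way the size determination mentioned above could stabilize tracking when depth is briefly unavailable.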
- the downward cameras 6-114, jaw cameras 6- 116, and side cameras 6-118 described above and shown in Figure II can be wide angle cameras operable in the visible and infrared spectrums.
- these cameras 6-114, 6-116, 6-118 can operate only in black and white light detection to simplify image processing and gain sensitivity.
- any of the features, components, and/or parts, including the arrangements and configurations thereof shown in Figure II can be included, either alone or in any combination, in any of the other examples of devices, features, components, and parts shown in Figures 1J-1L and described herein.
- any of the features, components, and/or parts, including the arrangements and configurations thereof shown and described with reference to Figures 1J-1L can be included, either alone or in any combination, in the example of the devices, features, components, and parts shown in Figure II.
- Figure 1J illustrates a lower perspective view of an example of an HMD 6-200 including a cover or shroud 6-204 secured to a frame 6-230.
- the sensors 6-203 of the sensor system 6-202 can be disposed around a perimeter of the HMD 6-200 such that the sensors 6-203 are outwardly disposed around a perimeter of a display region or area 6-232 so as not to obstruct a view of the displayed light.
- the sensors can be disposed behind the shroud 6-204 and aligned with transparent portions of the shroud allowing sensors and projectors to allow light back and forth through the shroud 6-204.
- opaque ink or other opaque material or films/layers can be disposed on the shroud 6-204 around the display area 6-232 to hide components of the HMD 6-200 outside the display area 6-232 other than the transparent portions defined by the opaque portions, through which the sensors and projectors send and receive light and electromagnetic signals during operation.
- the shroud 6-204 allows light to pass therethrough from the display (e.g., within the display region 6-232) but not radially outward from the display region around the perimeter of the display and shroud 6- 204.
- the shroud 6-204 includes a transparent portion 6-205 and an opaque portion 6-207, as described above and elsewhere herein.
- the opaque portion 6-207 of the shroud 6-204 can define one or more transparent regions 6-209 through which the sensors 6-203 of the sensor system 6-202 can send and receive signals.
- the sensors 6-203 of the sensor system 6-202 sending and receiving signals through the shroud 6-204, or more specifically through the transparent regions 6-209 of the (or defined by) the opaque portion 6-207 of the shroud 6-204 can include the same or similar sensors as those shown in the example of Figure II, for example depth sensors 6-108 and 6-110, depth projector 6-112, first and second scene cameras 6-106, first and second downward cameras 6-114, first and second side cameras 6-118, and first and second infrared illuminators 6-124. These sensors are also shown in the examples of Figures IK and IL.
- Other sensors, sensor types, numbers of sensors, and relative positions thereof can be included in one or more other examples of HMDs.
- any of the features, components, and/or parts, including the arrangements and configurations thereof shown in Figure 1J can be included, either alone or in any combination, in any of the other examples of devices, features, components, and parts shown in Figures 1I and 1K-1L and described herein.
- any of the features, components, and/or parts, including the arrangements and configurations thereof shown and described with reference to Figures 1I and 1K-1L can be included, either alone or in any combination, in the example of the devices, features, components, and parts shown in Figure 1J.
- Figure 1K illustrates a front view of a portion of an example of an HMD device 6-300 including a display 6-334, brackets 6-336, 6-338, and frame or housing 6-330.
- the example shown in Figure 1K does not include a front cover or shroud in order to illustrate the brackets 6-336, 6-338.
- the shroud 6-204 shown in Figure 1J includes the opaque portion 6-207 that would visually cover/block a view of anything outside (e.g., radially/peripherally outside) the display/display region 6-334, including the sensors 6-303 and bracket 6-338.
- the various sensors of the sensor system 6-302 are coupled to the brackets 6-336, 6-338.
- the scene cameras 6-306 are mounted with tight tolerances of angles relative to one another.
- the tolerance of mounting angles between the two scene cameras 6-306 can be 0.5 degrees or less, for example 0.3 degrees or less.
- the scene cameras 6-306 can be mounted to the bracket 6-338 and not the shroud.
- the bracket can include cantilevered arms on which the scene cameras 6-306 and other sensors of the sensor system 6-302 can be mounted to remain un-deformed in position and orientation in the case of a drop event by a user resulting in any deformation of the other bracket 6-226, housing 6-330, and/or shroud.
- any of the features, components, and/or parts, including the arrangements and configurations thereof shown in Figure 1K can be included, either alone or in any combination, in any of the other examples of devices, features, components, and parts shown in Figures 1I-1J and 1L and described herein.
- any of the features, components, and/or parts, including the arrangements and configurations thereof shown and described with reference to Figures 1I-1J and 1L can be included, either alone or in any combination, in the example of the devices, features, components, and parts shown in Figure 1K.
- Figure 1L illustrates a bottom view of an example of an HMD 6-400 including a front display/cover assembly 6-404 and a sensor system 6-402.
- the sensor system 6-402 can be similar to other sensor systems described above and elsewhere herein, including in reference to Figures 1I-1K.
- the jaw cameras 6-416 can be facing downward to capture images of the user’s lower facial features.
- the jaw cameras 6-416 can be coupled directly to the frame or housing 6-430 or one or more internal brackets directly coupled to the frame or housing 6-430 shown.
- the frame or housing 6-430 can include one or more apertures/openings 6-415 through which the jaw cameras 6-416 can send and receive signals.
- any of the features, components, and/or parts, including the arrangements and configurations thereof shown in Figure 1L can be included, either alone or in any combination, in any of the other examples of devices, features, components, and parts shown in Figures 1I-1K and described herein.
- any of the features, components, and/or parts, including the arrangements and configurations thereof shown and described with reference to Figures 1I-1K can be included, either alone or in any combination, in the example of the devices, features, components, and parts shown in Figure 1L.
- Figure 1M illustrates a rear perspective view of an inter-pupillary distance (IPD) adjustment system 11.1.1-102 including first and second optical modules 11.1.1-104a-b slidably engaging/coupled to respective guide-rods 11.1.1-108a-b and motors 11.1.1-110a-b of left and right adjustment subsystems 11.1.1-106a-b.
- the IPD adjustment system 11.1.1-102 can be coupled to a bracket 11.1.1-112 and include a button 11.1.1-114 in electrical communication with the motors 11.1.1-110a-b.
- the button 11.1.1-114 can electrically communicate with the first and second motors 11.1.1-110a-b via a processor or other circuitry components to cause the first and second motors 11.1.1-110a-b to activate and cause the first and second optical modules 11.1.1-104a-b, respectively, to change position relative to one another.
- the first and second optical modules 11.1.1-104a-b can include respective display screens configured to project light toward the user's eyes when donning the HMD 11.1.1-100.
- the user can manipulate (e.g., depress and/or rotate) the button 11.1.1-114 to activate a positional adjustment of the optical modules 11.1.1-104a-b to match the inter-pupillary distance of the user's eyes.
- the optical modules 11.1.1-104a-b can also include one or more cameras or other sensors/sensor systems for imaging and measuring the IPD of the user such that the optical modules 11.1.1-104a-b can be adjusted to match the IPD.
- the user can manipulate the button 11.1.1-114 to cause an automatic positional adjustment of the first and second optical modules 11.1.1-104a-b.
- the user can manipulate the button 11.1.1-114 to cause a manual adjustment such that the optical modules 11.1.1-104a-b move farther apart or closer together, for example when the user rotates the button 11.1.1-114 one way or the other, until the user visually matches her/his own IPD.
- the manual adjustment is electronically communicated via one or more circuits, and power for the movements of the optical modules 11.1.1-104a-b via the motors 11.1.1-110a-b is provided by an electrical power source.
- the adjustment and movement of the optical modules 11.1.1-104a-b via a manipulation of the button 11.1.1-114 is mechanically actuated via the movement of the button 11.1.1-114.
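- As a schematic illustration of the button-driven adjustment described above (not the disclosed control scheme), each rotation step of the button could be translated into a small, clamped change in the spacing between the two optical modules, which the motors then drive to. The step size and travel limits below are assumed values for illustration only.

```python
# Assumed mechanical limits and step size, for illustration only.
IPD_MIN_MM, IPD_MAX_MM = 51.0, 75.0
STEP_MM_PER_DETENT = 0.2

def adjust_ipd(current_ipd_mm, detents):
    """Return the new target spacing between the two optical modules after
    the user rotates the adjustment button by `detents` clicks (positive
    widens the spacing, negative narrows it)."""
    target = current_ipd_mm + detents * STEP_MM_PER_DETENT
    return max(IPD_MIN_MM, min(IPD_MAX_MM, target))
```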
- any of the features, components, and/or parts, including the arrangements and configurations thereof shown in Figure 1M can be included, either alone or in any combination, in any of the other examples of devices, features, components, and parts shown in any other figures shown and described herein.
- Figure 1N illustrates a front perspective view of a portion of an HMD 11.1.2-100, including an outer structural frame 11.1.2-102 and an inner or intermediate structural frame 11.1.2-104.
- the HMD 11.1.2-100 can include a first mounting bracket 11.1.2-108 coupled to the inner frame 11.1.2-104.
- the mounting bracket 11.1.2-108 is coupled to the inner frame 11.1.2-104 between the first and second apertures 11.1.2-106a-b.
- the mounting bracket 11.1.2-108 can include a middle or central portion 11.1.2-109 coupled to the inner frame 11.1.2-104. In some examples, the middle or central portion 11.1.2-109 of the mounting bracket 11.1.2-108 includes a first cantilever arm 11.1.2-112 and a second cantilever arm 11.1.2-114 extending away from the middle portion 11.1.2-109 of the mounting bracket 11.1.2-108.
- the outer frame 11.1.2-102 can define a curved geometry on a lower side thereof to accommodate a user’s nose when the user dons the HMD 11.1.2- 100.
- the curved geometry can be referred to as a nose bridge 11.1.2-111 and be centrally located on a lower side of the HMD 11.1.2-100 as shown.
- the mounting bracket 11.1.2-108 can be connected to the inner frame 11.1.2-104 between the apertures 11.1.2-106a-b such that the cantilevered arms 11.1.2-112, 11.1.2-114 extend downward and laterally outward away from the middle portion 11.1.2-109 to complement the nose bridge 11.1.2-111 geometry of the outer frame 11.1.2-102.
- the mounting bracket 11.1.2-108 is configured to accommodate the user’s nose as noted above.
- the nose bridge 11.1.2-111 geometry accommodates the nose in that the nose bridge 11.1.2-111 provides a curvature that curves with, above, over, and around the user’s nose for comfort and fit.
- the first cantilever arm 11.1.2-112 can extend away from the middle portion 11.1.2-109.
- the first and second cantilever arms 11.1.2-112, 11.1.2-114 are referred to as “cantilevered” or “cantilever” arms because each arm 11.1.2-112, 11.1.2-114 includes a distal free end 11.1.2-116, 11.1.2-118, respectively, which is free of affixation from the inner and outer frames 11.1.2-104, 11.1.2-102. In this way, the arms 11.1.2-112, 11.1.2-114 are cantilevered from the middle portion 11.1.2-109, which can be connected to the inner frame 11.1.2-104, with the distal ends 11.1.2-116, 11.1.2-118 unattached.
- the HMD 11.1.2-100 can include one or more components coupled to the mounting bracket 11.1.2-108.
- the components include a plurality of sensors 11.1.2-110a-f.
- Each sensor of the plurality of sensors 11.1.2-110a-f can include various types of sensors, including cameras, IR sensors, and so forth.
- one or more of the sensors 11.1.2-110a-f can be used for object recognition in three-dimensional space such that it is important to maintain a precise relative position of two or more of the plurality of sensors 11.1.2-110a-f.
- the cantilevered nature of the mounting bracket 11.1.2-108 can protect the sensors 11.1.2-110a-f from damage and altered positioning in the case of accidental drops by the user. Because the sensors 11.1.2-110a-f are cantilevered on the arms 11.1.2-112, 11.1.2-114 of the mounting bracket 11.1.2-108, stresses and deformations of the inner and/or outer frames 11.1.2-104, 11.1.2-102 are not transferred to the cantilevered arms 11.1.2-112, 11.1.2-114 and thus do not affect the relative positioning of the sensors 11.1.2-110a-f coupled/mounted to the mounting bracket 11.1.2-108.
- any of the features, components, and/or parts, including the arrangements and configurations thereof shown in Figure 1N can be included, either alone or in any combination, in any of the other examples of devices, features, components, and parts described herein.
- any of the features, components, and/or parts, including the arrangements and configurations thereof shown and described herein can be included, either alone or in any combination, in the example of the devices, features, components, and parts shown in Figure IN.
- FIG. 1O illustrates an example of an optical module 11.3.2-100 for use in an electronic device such as an HMD, including HMD devices described herein.
- the optical module 11.3.2-100 can be one of two optical modules within an HMD, with each optical module aligned to project light toward a user’s eye.
- a first optical module can project light via a display screen toward a user’s first eye
- a second optical module of the same device can project light via another display screen toward the user’s second eye.
- the optical module 11.3.2-100 can include an optical frame or housing 11.3.2-102, which can also be referred to as a barrel or optical module barrel.
- the optical module 11.3.2-100 can also include a display 11.3.2-104, including a display screen or multiple display screens, coupled to the housing 11.3.2-102.
- the display 11.3.2-104 can be coupled to the housing 11.3.2-102 such that the display 11.3.2-104 is configured to project light toward the eye of a user when the HMD of which the display module 11.3.2-100 is a part is donned during use.
- the housing 11.3.2- 102 can surround the display 11.3.2-104 and provide connection features for coupling other components of optical modules described herein.
- the optical module 11.3.2-100 can include one or more cameras 11.3.2-106.
- the optical module 11.3.2-100 can also include a light strip 11.3.2-108 surrounding the display 11.3.2-104.
- the light strip 11.3.2-108 is disposed between the display 11.3.2-104 and the camera 11.3.2-106.
- the light strip 11.3.2-108 can include a plurality of lights 11.3.2-110.
- the plurality of lights can include one or more light emitting diodes (LEDs) or other lights configured to project light toward the user’s eye when the HMD is donned.
- the individual lights 11.3.2-110 of the light strip 11.3.2-108 can be spaced about the strip 11.3.2-108 and thus spaced about the display 11.3.2-104 uniformly or non- uniformly at various locations on the strip 11.3.2-108 and around the display 11.3.2-104.
- the housing 11.3.2-102 defines a viewing opening 11.3.2-101 through which the user can view the display 11.3.2-104 when the HMD device is donned.
- the LEDs are configured and arranged to emit light through the viewing opening 11.3.2-101 and onto the user’s eye.
- the camera 11.3.2-106 is configured to capture one or more images of the user's eye through the viewing opening 11.3.2-101.
- the optical module 11.3.2-100 shown in Figure 1O can be replicated in another (e.g., second) optical module disposed with the HMD to interact with (e.g., project light toward and capture images of) another eye of the user.
- any of the features, components, and/or parts, including the arrangements and configurations thereof shown in Figure 1O can be included, either alone or in any combination, in any of the other examples of devices, features, components, and parts shown in Figure 1P or otherwise described herein.
- any of the features, components, and/or parts, including the arrangements and configurations thereof shown and described with reference to Figure 1P or otherwise described herein can be included, either alone or in any combination, in the example of the devices, features, components, and parts shown in Figure 1O.
- Figure 1P illustrates a cross-sectional view of an example of an optical module 11.3.2-200.
- the housing 11.3.2-202 defines a first aperture or channel 11.3.2-212 and a second aperture or channel 11.3.2-214.
- the channels 11.3.2-212, 11.3.2-214 can be configured to slidably engage respective rails or guide rods of an HMD device to allow the optical module 11.3.2-200 to be adjusted in position, for example for the inter-pupillary distance adjustment described with reference to Figure 1M.
- the housing 11.3.2-202 can slidably engage the guide rods to secure the optical module 11.3.2-200 in place within the HMD.
- the optical module 11.3.2-200 can also include a lens 11.3.2-216.
- the lens 11.3.2-216 can be configured to direct light from the display assembly 11.3.2-204 to the user’s eye.
- the lens 11.3.2-216 can be a part of a lens assembly including a corrective lens removably attached to the optical module 11.3.2-200.
- the lens 11.3.2-216 is disposed over the light strip 11.3.2-208 and the one or more eye-tracking cameras 11.3.2-206.
- the camera 11.3.2-206 is configured to capture images of the user's eye through the lens 11.3.2-216 and the light strip 11.3.2-208 includes lights configured to project light through the lens 11.3.2-216 to the user's eye during use.
- any of the features, components, and/or parts, including the arrangements and configurations thereof shown in Figure 1P can be included, either alone or in any combination, in any of the other examples of devices, features, components, and parts described herein.
- any of the features, components, and/or parts, including the arrangements and configurations thereof shown and described herein can be included, either alone or in any combination, in the example of the devices, features, components, and parts shown in Figure 1P.
- FIG. 2 is a block diagram of an example of the controller 110 in accordance with some embodiments. While certain specific features are illustrated, those skilled in the art will appreciate from the present disclosure that various other features have not been illustrated for the sake of brevity, and so as not to obscure more pertinent aspects of the embodiments disclosed herein.
- the controller 110 includes one or more processing units 202 (e.g., microprocessors, application-specific integrated-circuits (ASICs), field-programmable gate arrays (FPGAs), graphics processing units (GPUs), central processing units (CPUs), processing cores, and/or the like), one or more input/output (I/O) devices 206, one or more communication interfaces 208 (e.g., universal serial bus (USB), FIREWIRE, THUNDERBOLT, IEEE 802.3x, IEEE 802.11x, IEEE 802.16x, global system for mobile communications (GSM), code division multiple access (CDMA), time division multiple access (TDMA), global positioning system (GPS), infrared (IR), BLUETOOTH, ZIGBEE, and/or the like type interface), one or more programming (e.g., I/O) interfaces 210, a memory 220, and one or more communication buses 204 for interconnecting these and various other components.
- the one or more communication buses 204 include circuitry that interconnects and controls communications between system components.
- the one or more I/O devices 206 include at least one of a keyboard, a mouse, a touchpad, a joystick, one or more microphones, one or more speakers, one or more image sensors, one or more displays, and/or the like.
- the memory 220 includes high-speed random-access memory, such as dynamic random-access memory (DRAM), static random-access memory (SRAM), double-data-rate random-access memory (DDR RAM), or other random-access solid-state memory devices.
- the memory 220 includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
- the memory 220 optionally includes one or more storage devices remotely located from the one or more processing units 202.
- the memory 220 comprises a non-transitory computer readable storage medium.
- the memory 220 or the non-transitory computer readable storage medium of the memory 220 stores the following programs, modules and data structures, or a subset thereof including an optional operating system 230 and an XR experience module 240.
- the operating system 230 includes instructions for handling various basic system services and for performing hardware dependent tasks.
- the XR experience module 240 is configured to manage and coordinate one or more XR experiences for one or more users (e.g., a single XR experience for one or more users, or multiple XR experiences for respective groups of one or more users).
- the XR experience module 240 includes a data obtaining unit 241, a tracking unit 242, a coordination unit 246, and a data transmitting unit 248.
- the data obtaining unit 241 is configured to obtain data (e.g., presentation data, interaction data, sensor data, location data, etc.) from at least the display generation component 120 of Figure 1A, and optionally one or more of the input devices 125, output devices 155, sensors 190, and/or peripheral devices 195.
- the data obtaining unit 241 includes instructions and/or logic therefor, and heuristics and metadata therefor.
- the tracking unit 242 is configured to map the scene 105 and to track the position/location of at least the display generation component 120 with respect to the scene 105 of Figure 1A, and optionally, to one or more of the input devices 125, output devices 155, sensors 190, and/or peripheral devices 195.
- the tracking unit 242 includes instructions and/or logic therefor, and heuristics and metadata therefor.
- the tracking unit 242 includes hand tracking unit 244 and/or eye tracking unit 243.
- the hand tracking unit 244 is configured to track the position/location of one or more portions of the user's hands, and/or motions of one or more portions of the user's hands with respect to the scene 105 of Figure 1A, relative to the display generation component 120, and/or relative to a coordinate system defined relative to the user's hand.
- the hand tracking unit 244 is described in greater detail below with respect to Figure 4.
- the eye tracking unit 243 is configured to track the position and movement of the user’s gaze (or more broadly, the user’s eyes, face, or head) with respect to the scene 105 (e.g., with respect to the physical environment and/or to the user (e.g., the user’s hand)) or with respect to the XR content displayed via the display generation component 120.
- the eye tracking unit 243 is described in greater detail below with respect to Figure 5.
- the coordination unit 246 is configured to manage and coordinate the XR experience presented to the user by the display generation component 120, and optionally, by one or more of the output devices 155 and/or peripheral devices 195. To that end, in various embodiments, the coordination unit 246 includes instructions and/or logic therefor, and heuristics and metadata therefor.
- the data transmitting unit 248 is configured to transmit data (e.g., presentation data, location data, etc.) to at least the display generation component 120, and optionally, to one or more of the input devices 125, output devices 155, sensors 190, and/or peripheral devices 195.
- data transmitting unit 248 includes instructions and/or logic therefor, and heuristics and metadata therefor.
- although the data obtaining unit 241, the tracking unit 242 (e.g., including the eye tracking unit 243 and the hand tracking unit 244), the coordination unit 246, and the data transmitting unit 248 are shown as residing on a single device (e.g., the controller 110), it should be understood that in other embodiments, any combination of the data obtaining unit 241, the tracking unit 242 (e.g., including the eye tracking unit 243 and the hand tracking unit 244), the coordination unit 246, and the data transmitting unit 248 may be located in separate computing devices.
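- as a minimal, hypothetical sketch of this module decomposition (not the actual implementation), the following Python outlines how the data obtaining, tracking, coordination, and data transmitting units of the XR experience module 240 could be grouped, whether they run on one device or are split across several; all class and method names are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List


@dataclass
class TrackingUnit:
    """Stands in for tracking unit 242 (with eye tracking 243 and hand tracking 244)."""
    hand_poses: List[Dict[str, Any]] = field(default_factory=list)
    gaze: Any = None

    def update(self, frame: Dict[str, Any]) -> None:
        # Hypothetical per-frame update from sensor data.
        self.hand_poses = frame.get("hands", [])
        self.gaze = frame.get("gaze")


@dataclass
class XRExperienceModule:
    """Groups units 241, 242, 246, and 248; they could equally live on separate devices."""
    tracking: TrackingUnit = field(default_factory=TrackingUnit)

    def obtain_data(self, source: Any) -> Dict[str, Any]:
        # Data obtaining unit 241: pull presentation/interaction/sensor/location data.
        return source.read_frame()

    def coordinate(self, frame: Dict[str, Any]) -> Dict[str, Any]:
        # Coordination unit 246: decide what the display generation component presents.
        self.tracking.update(frame)
        return {"scene": "render", "gaze": self.tracking.gaze}

    def transmit(self, sink: Any, payload: Dict[str, Any]) -> None:
        # Data transmitting unit 248: push presentation/location data to the display side.
        sink.send(payload)
```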
- Figure 2 is intended more as a functional description of the various features that may be present in a particular implementation as opposed to a structural schematic of the embodiments described herein.
- items shown separately could be combined and some items could be separated.
- some functional modules shown separately in Figure 2 could be implemented in a single module and the various functions of single functional blocks could be implemented by one or more functional blocks in various embodiments.
- the actual number of modules and the division of particular functions and how features are allocated among them will vary from one implementation to another and, in some embodiments, depends in part on the particular combination of hardware, software, and/or firmware chosen for a particular implementation.
- FIG. 3 is a block diagram of an example of the display generation component 120 in accordance with some embodiments. While certain specific features are illustrated, those skilled in the art will appreciate from the present disclosure that various other features have not been illustrated for the sake of brevity, and so as not to obscure more pertinent aspects of the embodiments disclosed herein.
- the display generation component 120 includes one or more processing units 302 (e.g., microprocessors, ASICs, FPGAs, GPUs, CPUs, processing cores, and/or the like), one or more input/output (I/O) devices and sensors 306, one or more communication interfaces 308 (e.g., USB, FIREWIRE, THUNDERBOLT, IEEE 802.3x, IEEE 802.11x, IEEE 802.16x, GSM, CDMA, TDMA, GPS, IR, BLUETOOTH, ZIGBEE, and/or the like type interface), one or more programming (e.g., I/O) interfaces 310, one or more XR displays 312, one or more optional interior- and/or exterior-facing image sensors 314, a memory 320, and one or more communication buses 304 for interconnecting these and various other components.
- the one or more communication buses 304 include circuitry that interconnects and controls communications between system components.
- the one or more I/O devices and sensors 306 include at least one of an inertial measurement unit (IMU), an accelerometer, a gyroscope, a thermometer, one or more physiological sensors (e.g., blood pressure monitor, heart rate monitor, blood oxygen sensor, blood glucose sensor, etc.), one or more microphones, one or more speakers, a haptics engine, one or more depth sensors (e.g., a structured light, a time-of-flight, or the like), and/or the like.
- the one or more XR displays 312 are configured to provide the XR experience to the user.
- the one or more XR displays 312 correspond to holographic, digital light processing (DLP), liquid-crystal display (LCD), liquid-crystal on silicon (LCoS), organic light-emitting transistor (OLET), organic light-emitting diode (OLED), surface-conduction electron-emitter display (SED), field-emission display (FED), quantum-dot light-emitting diode (QD-LED), micro-electro-mechanical system (MEMS), and/or the like display types.
- the one or more XR displays 312 correspond to diffractive, reflective, polarized, holographic, etc. waveguide displays.
- the display generation component 120 (e.g., HMD) includes a single XR display.
- the display generation component 120 includes an XR display for each eye of the user.
- the one or more XR displays 312 are capable of presenting MR and VR content.
- the one or more XR displays 312 are capable of presenting MR or VR content.
- the one or more image sensors 314 are configured to obtain image data that corresponds to at least a portion of the face of the user that includes the eyes of the user (and may be referred to as an eye-tracking camera). In some embodiments, the one or more image sensors 314 are configured to obtain image data that corresponds to at least a portion of the user’s hand(s) and optionally arm(s) of the user (and may be referred to as a hand-tracking camera).
- the one or more image sensors 314 are configured to be forward-facing so as to obtain image data that corresponds to the scene as would be viewed by the user if the display generation component 120 (e.g., HMD) was not present (and may be referred to as a scene camera).
- the one or more optional image sensors 314 can include one or more RGB cameras (e.g., with a complementary metal-oxide-semiconductor (CMOS) image sensor or a charge-coupled device (CCD) image sensor), one or more infrared (IR) cameras, one or more event-based cameras, and/or the like.
- the memory 320 includes high-speed random-access memory, such as DRAM, SRAM, DDR RAM, or other random-access solid-state memory devices.
- the memory 320 includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
- the memory 320 optionally includes one or more storage devices remotely located from the one or more processing units 302.
- the memory 320 comprises a non-transitory computer readable storage medium.
- the memory 320 or the non-transitory computer readable storage medium of the memory 320 stores the following programs, modules and data structures, or a subset thereof including an optional operating system 330 and an XR presentation module 340.
- the operating system 330 includes instructions for handling various basic system services and for performing hardware dependent tasks.
- the XR presentation module 340 is configured to present XR content to the user via the one or more XR displays 312.
- the XR presentation module 340 includes a data obtaining unit 342, an XR presenting unit 344, an XR map generating unit 346, and a data transmitting unit 348.
- the data obtaining unit 342 is configured to obtain data (e.g., presentation data, interaction data, sensor data, location data, etc.) from at least the controller 110 of Figure 1A.
- the data obtaining unit 342 includes instructions and/or logic therefor, and heuristics and metadata therefor.
- the XR presenting unit 344 is configured to present XR content via the one or more XR displays 312. To that end, in various embodiments, the XR presenting unit 344 includes instructions and/or logic therefor, and heuristics and metadata therefor.
- the XR map generating unit 346 is configured to generate an XR map (e.g., a 3D map of the mixed reality scene or a map of the physical environment into which computer-generated objects can be placed to generate the extended reality) based on media content data.
- the XR map generating unit 346 includes instructions and/or logic therefor, and heuristics and metadata therefor.
- the data transmitting unit 348 is configured to transmit data (e.g., presentation data, location data, etc.) to at least the controller 110, and optionally one or more of the input devices 125, output devices 155, sensors 190, and/or peripheral devices 195.
- data transmitting unit 348 includes instructions and/or logic therefor, and heuristics and metadata therefor.
- although the data obtaining unit 342, the XR presenting unit 344, the XR map generating unit 346, and the data transmitting unit 348 are shown as residing on a single device (e.g., the display generation component 120 of Figure 1A), it should be understood that in other embodiments, any combination of the data obtaining unit 342, the XR presenting unit 344, the XR map generating unit 346, and the data transmitting unit 348 may be located in separate computing devices.
- Figure 3 is intended more as a functional description of the various features that could be present in a particular implementation as opposed to a structural schematic of the embodiments described herein.
- items shown separately could be combined and some items could be separated.
- some functional modules shown separately in Figure 3 could be implemented in a single module and the various functions of single functional blocks could be implemented by one or more functional blocks in various embodiments.
- the actual number of modules and the division of particular functions and how features are allocated among them will vary from one implementation to another and, in some embodiments, depends in part on the particular combination of hardware, software, and/or firmware chosen for a particular implementation.
- Figure 4 is a schematic, pictorial illustration of an example embodiment of the hand tracking device 140.
- hand tracking device 140 (Figure 1A) is controlled by hand tracking unit 244 (Figure 2) to track the position/location of one or more portions of the user's hands, and/or motions of one or more portions of the user's hands with respect to the scene 105 of Figure 1A (e.g., with respect to a portion of the physical environment surrounding the user, with respect to the display generation component 120, or with respect to a portion of the user (e.g., the user's face, eyes, or head), and/or relative to a coordinate system defined relative to the user's hand).
- the hand tracking device 140 is part of the display generation component 120 (e.g., embedded in or attached to a head-mounted device). In some embodiments, the hand tracking device 140 is separate from the display generation component 120 (e.g., located in separate housings or attached to separate physical support structures).
- the hand tracking device 140 includes image sensors 404 (e.g., one or more IR cameras, 3D cameras, depth cameras, and/or color cameras, etc.) that capture three-dimensional scene information that includes at least a hand 406 of a human user.
- the image sensors 404 capture the hand images with sufficient resolution to enable the fingers and their respective positions to be distinguished.
- the image sensors 404 typically capture images of other parts of the user’s body, as well, or possibly all of the body, and may have either zoom capabilities or a dedicated sensor with enhanced magnification to capture images of the hand with the desired resolution.
- the image sensors 404 also capture 2D color video images of the hand 406 and other elements of the scene.
- the image sensors 404 are used in conjunction with other image sensors to capture the physical environment of the scene 105, or serve as the image sensors that capture the physical environments of the scene 105. In some embodiments, the image sensors 404 are positioned relative to the user or the user's environment in a way that a field of view of the image sensors or a portion thereof is used to define an interaction space in which hand movement captured by the image sensors is treated as input to the controller 110.
- the image sensors 404 output a sequence of frames containing 3D map data (and possibly color image data, as well) to the controller 110, which extracts high-level information from the map data.
- This high-level information is typically provided via an Application Program Interface (API) to an application running on the controller, which drives the display generation component 120 accordingly.
- the user may interact with software running on the controller 110 by moving his hand 406 and changing his hand posture.
- the image sensors 404 project a pattern of spots onto a scene containing the hand 406 and capture an image of the projected pattern.
- the controller 110 computes the 3D coordinates of points in the scene (including points on the surface of the user’s hand) by triangulation, based on transverse shifts of the spots in the pattern. This approach is advantageous in that it does not require the user to hold or wear any sort of beacon, sensor, or other marker. It gives the depth coordinates of points in the scene relative to a predetermined reference plane, at a certain distance from the image sensors 404.
- the image sensors 404 are assumed to define an orthogonal set of x, y, z axes, so that depth coordinates of points in the scene correspond to z components measured by the image sensors.
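- as a rough, hypothetical illustration of this triangulation (not the method of the disclosure), the sketch below converts the transverse shift of a projected spot into a depth value using the standard relation z = f·b/d; the focal length and baseline values are made-up parameters.

```python
def depth_from_spot_shift(shift_px: float,
                          focal_length_px: float = 580.0,   # assumed camera focal length
                          baseline_m: float = 0.075) -> float:  # assumed projector-camera baseline
    """Estimate the depth (meters) of a projected spot from its transverse shift.

    Uses the standard triangulation relation z = f * b / d, where d is the
    observed shift of the spot in pixels. All parameter values are illustrative.
    """
    if shift_px <= 0:
        raise ValueError("shift must be positive")
    return focal_length_px * baseline_m / shift_px


# Example: a spot shifted by 29 pixels corresponds to roughly 1.5 m from the sensor.
print(round(depth_from_spot_shift(29.0), 2))
```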
- the hand tracking device 140 captures and processes a temporal sequence of depth maps containing the user’s hand, while the user moves his hand (e.g., whole hand or one or more fingers).
- Software running on a processor in the image sensors 404 and/or the controller 110 processes the 3D map data to extract patch descriptors of the hand in these depth maps.
- the software matches these descriptors to patch descriptors stored in a database 408, based on a prior learning process, in order to estimate the pose of the hand in each frame.
- the pose typically includes 3D locations of the user’s hand joints and finger tips.
- the software may also analyze the trajectory of the hands and/or fingers over multiple frames in the sequence in order to identify gestures.
- the pose estimation functions described herein may be interleaved with motion tracking functions, so that patch-based pose estimation is performed only once in every two (or more) frames, while tracking is used to find changes in the pose that occur over the remaining frames.
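- the interleaving described above could be sketched as follows, where estimate_pose stands in for the expensive patch-descriptor matching and track_pose for the cheaper frame-to-frame tracking; both callables and the keyframe interval are illustrative assumptions.

```python
def process_depth_sequence(frames, estimate_pose, track_pose, keyframe_interval=2):
    """Interleave full patch-based pose estimation with lightweight tracking.

    estimate_pose(frame) -> pose            # expensive, database-matched estimate
    track_pose(prev_pose, frame) -> pose    # cheap frame-to-frame update
    Both callables are placeholders for the functions described in the text.
    """
    poses = []
    last_pose = None
    for i, frame in enumerate(frames):
        if last_pose is None or i % keyframe_interval == 0:
            last_pose = estimate_pose(frame)          # full estimation on keyframes
        else:
            last_pose = track_pose(last_pose, frame)  # tracking on in-between frames
        poses.append(last_pose)
    return poses
```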
- the pose, motion, and gesture information are provided via the above-mentioned API to an application program running on the controller 110. This program may, for example, move and modify images presented on the display generation component 120, or perform other functions, in response to the pose and/or gesture information.
- a gesture includes an air gesture.
- An air gesture is a gesture that is detected without the user touching (or independently of) an input element that is part of a device (e.g., computer system 101, one or more input device 125, and/or hand tracking device 140) and is based on detected motion of a portion (e.g., the head, one or more arms, one or more hands, one or more fingers, and/or one or more legs) of the user's body through the air, including motion of the user's body relative to an absolute reference (e.g., an angle of the user's arm relative to the ground or a distance of the user's hand relative to the ground), relative to another portion of the user's body (e.g., movement of a hand of the user relative to a shoulder of the user, movement of one hand of the user relative to another hand of the user, and/or movement of a finger of the user relative to another finger or portion of a hand of the user), and/or absolute motion of a portion of the user's body through the air.
- input gestures used in the various examples and embodiments described herein include air gestures performed by movement of the user’s finger(s) relative to other finger(s) (or part(s) of the user’s hand) for interacting with an XR environment (e.g., a virtual or mixed-reality environment), in accordance with some embodiments.
- an air gesture is a gesture that is detected without the user touching an input element that is part of the device (or independently of an input element that is a part of the device) and is based on detected motion of a portion of the user's body through the air including motion of the user's body relative to an absolute reference (e.g., an angle of the user's arm relative to the ground or a distance of the user's hand relative to the ground), relative to another portion of the user's body (e.g., movement of a hand of the user relative to a shoulder of the user, movement of one hand of the user relative to another hand of the user, and/or movement of a finger of the user relative to another finger or portion of a hand of the user), and/or absolute motion of a portion of the user's body (e.g., a tap gesture that includes movement of a hand in a predetermined pose by a predetermined amount and/or speed, or a shake gesture that includes a predetermined speed or amount of rotation of a portion of the user's body).
- the input gesture is an air gesture (e.g., in the absence of physical contact with an input device that provides the computer system with information about which user interface element is the target of the user input, such as contact with a user interface element displayed on a touchscreen, or contact with a mouse or trackpad to move a cursor to the user interface element), the gesture takes into account the user’s attention (e.g., gaze) to determine the target of the user input (e.g., for direct inputs, as described below).
- the input gesture is, for example, detected attention (e.g., gaze) toward the user interface element in combination (e.g., concurrent) with movement of a user's finger(s) and/or hands to perform a pinch and/or tap input, as described in more detail below.
- input gestures that are directed to a user interface object are performed directly or indirectly with reference to a user interface object.
- a user input is performed directly on the user interface object in accordance with performing the input gesture with the user’s hand at a position that corresponds to the position of the user interface object in the three-dimensional environment (e.g., as determined based on a current viewpoint of the user).
- the input gesture is performed indirectly on the user interface object in accordance with the user performing the input gesture while a position of the user’s hand is not at the position that corresponds to the position of the user interface object in the three-dimensional environment while detecting the user’s attention (e.g., gaze) on the user interface object.
- the user is enabled to direct the user’s input to the user interface object by initiating the gesture at, or near, a position corresponding to the displayed position of the user interface object (e.g., within 0.5 cm, 1 cm, 5 cm, or a distance between 0-5 cm, as measured from an outer edge of the option or a center portion of the option).
- the user is enabled to direct the user’s input to the user interface object by paying attention to the user interface object (e.g., by gazing at the user interface object) and, while paying attention to the option, the user initiates the input gesture (e.g., at any position that is detectable by the computer system) (e.g., at a position that does not correspond to the displayed position of the user interface object).
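- a hypothetical sketch of how direct and indirect targeting could be distinguished is shown below; the 5 cm threshold mirrors the example distances above, and the data layout (positions and gaze target) is assumed for illustration only.

```python
from math import dist


def classify_input_target(pinch_position, gaze_target, ui_elements,
                          direct_threshold_m=0.05):
    """Decide whether an air gesture is a direct or an indirect input.

    pinch_position: (x, y, z) of the hand at gesture start, in environment coordinates.
    gaze_target: the element the user's attention is directed to, or None.
    ui_elements: mapping of element id -> (x, y, z) displayed position.
    All names and the threshold are illustrative assumptions.
    """
    for element_id, element_pos in ui_elements.items():
        if dist(pinch_position, element_pos) <= direct_threshold_m:
            return ("direct", element_id)       # gesture started at/near the element
    if gaze_target is not None:
        return ("indirect", gaze_target)        # gesture started elsewhere, gaze picks target
    return ("none", None)
```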
- input gestures used in the various examples and embodiments described herein include pinch inputs and tap inputs, for interacting with a virtual or mixed-reality environment, in accordance with some embodiments.
- the pinch inputs and tap inputs described below are performed as air gestures.
- a pinch input is part of an air gesture that includes one or more of a pinch gesture, a long pinch gesture, a pinch and drag gesture, or a double pinch gesture.
- a pinch gesture that is an air gesture includes movement of two or more fingers of a hand to make contact with one another, that is, optionally, followed by an immediate (e.g., within 0-1 seconds) break in contact from each other.
- a long pinch gesture that is an air gesture includes movement of two or more fingers of a hand to make contact with one another for at least a threshold amount of time (e.g., at least 1 second), before detecting a break in contact with one another.
- a long pinch gesture includes the user holding a pinch gesture (e.g., with the two or more fingers making contact), and the long pinch gesture continues until a break in contact between the two or more fingers is detected.
- a double pinch gesture that is an air gesture comprises two (e.g., or more) pinch inputs (e.g., performed by the same hand) detected in immediate (e.g., within a predefined time period) succession of each other.
- the user performs a first pinch input (e.g., a pinch input or a long pinch input), releases the first pinch input (e.g., breaks contact between the two or more fingers), and performs a second pinch input within a predefined time period (e.g., within 1 second or within 2 seconds) after releasing the first pinch input.
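- a hypothetical classifier for these pinch variants, using the example timing thresholds above (about 1 second of contact for a long pinch and a second pinch within about 1 second for a double pinch), might look like the following; the event format is assumed.

```python
def classify_pinch_events(pinch_events,
                          long_pinch_min_s=1.0,
                          double_pinch_gap_s=1.0):
    """Label pinch events as 'pinch', 'long pinch', or 'double pinch'.

    pinch_events: list of (contact_start_s, contact_end_s) tuples for one hand.
    Thresholds follow the example values in the text; the function itself is
    only an illustrative sketch.
    """
    labels = []
    i = 0
    while i < len(pinch_events):
        start, end = pinch_events[i]
        nxt = pinch_events[i + 1] if i + 1 < len(pinch_events) else None
        if end - start >= long_pinch_min_s:
            labels.append("long pinch")
        elif nxt is not None and nxt[0] - end <= double_pinch_gap_s:
            labels.append("double pinch")
            i += 1  # consume the second pinch of the pair
        else:
            labels.append("pinch")
        i += 1
    return labels


# Example: a quick pinch, then two pinches 0.4 s apart, then a 1.5 s hold.
print(classify_pinch_events([(0.0, 0.2), (2.0, 2.2), (2.6, 2.8), (5.0, 6.5)]))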
- a pinch and drag gesture that is an air gesture includes a pinch gesture (e.g., a pinch gesture or a long pinch gesture) performed in conjunction with (e.g., followed by) a drag input that changes a position of the user’s hand from a first position (e.g., a start position of the drag) to a second position (e.g., an end position of the drag).
- the user maintains the pinch gesture while performing the drag input, and releases the pinch gesture (e.g., opens their two or more fingers) to end the drag gesture (e.g., at the second position).
- the pinch input and the drag input are performed by the same hand (e.g., the user pinches two or more fingers to make contact with one another and moves the same hand to the second position in the air with the drag gesture).
- the pinch input is performed by a first hand of the user and the drag input is performed by the second hand of the user (e.g., the user’s second hand moves from the first position to the second position in the air while the user continues the pinch input with the user’s first hand).
- an input gesture that is an air gesture includes inputs (e.g., pinch and/or tap inputs) performed using both of the user’s two hands.
- the input gesture includes two (e.g., or more) pinch inputs performed in conjunction with (e.g., concurrently with, or within a predefined time period of) each other.
- for example, a first pinch gesture is performed using a first hand of the user (e.g., a pinch input, a long pinch input, or a pinch and drag input) and, in conjunction with performing the pinch input using the first hand, a second pinch input is performed using the other hand (e.g., the second hand of the user's two hands).
- a tap input (e.g., directed to a user interface element) performed as an air gesture includes movement of a user's finger(s) toward the user interface element, movement of the user's hand toward the user interface element optionally with the user’s finger(s) extended toward the user interface element, a downward motion of a user's finger (e.g., mimicking a mouse click motion or a tap on a touchscreen), or other predefined movement of the user’s hand.
- a tap input that is performed as an air gesture is detected based on movement characteristics of the finger or hand performing the tap gesture (e.g., movement of a finger or hand away from the viewpoint of the user and/or toward an object that is the target of the tap input, followed by an end of the movement).
- the end of the movement is detected based on a change in movement characteristics of the finger or hand performing the tap gesture (e.g., an end of movement away from the viewpoint of the user and/or toward the object that is the target of the tap input, a reversal of direction of movement of the finger or hand, and/or a reversal of a direction of acceleration of movement of the finger or hand).
- attention of a user is determined to be directed to a portion of the three-dimensional environment based on detection of gaze directed to the portion of the three-dimensional environment (optionally, without requiring other conditions).
- attention of a user is determined to be directed to a portion of the three- dimensional environment based on detection of gaze directed to the portion of the three-dimensional environment with one or more additional conditions such as requiring that gaze is directed to the portion of the three-dimensional environment for at least a threshold duration (e.g., a dwell duration) and/or requiring that the gaze is directed to the portion of the three-dimensional environment while the viewpoint of the user is within a distance threshold from the portion of the three-dimensional environment in order for the device to determine that attention of the user is directed to the portion of the three-dimensional environment, where if one of the additional conditions is not met, the device determines that attention is not directed to the portion of the three-dimensional environment toward which gaze is directed (e.g., until the one or more additional conditions are met).
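- one possible sketch of such an attention check, combining a gaze-region test with an illustrative dwell duration and distance threshold (both assumed values, not values from the disclosure), is shown below.

```python
def attention_directed(gaze_samples, region_contains, viewpoint_distance_m,
                       dwell_s=0.3, max_distance_m=3.0):
    """Return True if the user's attention is considered directed to a region.

    gaze_samples: list of (timestamp_s, gaze_point) ordered in time.
    region_contains: callable reporting whether a gaze point falls in the region.
    The dwell duration and distance threshold stand in for the 'additional
    conditions' described above; all values here are illustrative.
    """
    if viewpoint_distance_m > max_distance_m:
        return False
    dwell_start = None
    for t, point in gaze_samples:
        if region_contains(point):
            dwell_start = t if dwell_start is None else dwell_start
            if t - dwell_start >= dwell_s:
                return True
        else:
            dwell_start = None  # gaze left the region, reset the dwell timer
    return False
```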
- the detection of a ready state configuration of a user or a portion of a user is detected by the computer system.
- Detection of a ready state configuration of a hand is used by a computer system as an indication that the user is likely preparing to interact with the computer system using one or more air gesture inputs performed by the hand (e.g., a pinch, tap, pinch and drag, double pinch, long pinch, or other air gesture described herein).
- the ready state of the hand is determined based on whether the hand has a predetermined hand shape (e.g., a pre-pinch shape with a thumb and one or more fingers extended and spaced apart ready to make a pinch or grab gesture or a pre-tap with one or more fingers extended and palm facing away from the user), based on whether the hand is in a predetermined position relative to a viewpoint of the user (e.g., below the user’s head and above the user’s waist and extended out from the body by at least 15, 20, 25, 30, or 50cm), and/or based on whether the hand has moved in a particular manner (e.g., moved toward a region in front of the user above the user’s waist and below the user’s head or moved away from the user’s body or leg).
- the ready state is used to determine whether interactive elements of the user interface respond to attention (e.g., gaze) inputs.
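- a hypothetical ready-state check along the lines described above might combine a hand-shape test, a position test, and an extension test, as in the following sketch; the field names and the 15 cm threshold are illustrative assumptions.

```python
def hand_in_ready_state(hand):
    """Hypothetical check for the ready-state configuration of a hand.

    hand: dict with 'shape' (e.g., 'pre-pinch', 'pre-tap', 'fist'),
    'below_head' (bool), 'height_m' (relative to the user's waist, positive = above),
    and 'extension_m' (distance out from the body). The data layout is assumed.
    """
    has_ready_shape = hand.get("shape") in ("pre-pinch", "pre-tap")
    in_ready_zone = hand.get("below_head", False) and hand.get("height_m", -1.0) > 0.0
    extended = hand.get("extension_m", 0.0) >= 0.15
    return has_ready_shape and in_ready_zone and extended
```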
- User inputs can be detected with controls contained in the hardware input device such as one or more touch-sensitive input elements, one or more pressure-sensitive input elements, one or more buttons, one or more knobs, one or more dials, one or more joysticks, one or more hand or finger coverings that can detect a position or change in position of portions of a hand and/or fingers relative to each other, relative to the user’s body, and/or relative to a physical environment of the user, and/or other hardware input device controls, where the user inputs with the controls contained in the hardware input device are used in place of hand and/or finger gestures such as air taps or air pinches in the corresponding air gesture(s).
- a selection input that is described as being performed with an air tap or air pinch input could be alternatively detected with a button press, a tap on a touch-sensitive surface, a press on a pressure-sensitive surface, or other hardware input.
- a movement input that is described as being performed with an air pinch and drag (e.g., an air drag gesture or air swipe gesture) could be alternatively detected based on an interaction with a hardware input control, such as a button press and hold, a touch on a touch-sensitive surface, or a press on a pressure-sensitive surface, that is followed by movement of the hardware input device (e.g., along with the hand with which the hardware input device is associated) through space.
- a two-handed input that includes movement of the hands relative to each other could be performed with one air gesture and one hardware input device in the hand that is not performing the air gesture, two hardware input devices held in different hands, or two air gestures performed by different hands using various combinations of air gestures and/or the inputs detected by one or more hardware input devices that are described above.
- the software may be downloaded to the controller 110 in electronic form, over a network, for example, or it may alternatively be provided on tangible, non-transitory media, such as optical, magnetic, or electronic memory media.
- the database 408 is likewise stored in a memory associated with the controller 110.
- some or all of the described functions of the computer may be implemented in dedicated hardware, such as a custom or semi-custom integrated circuit or a programmable digital signal processor (DSP).
- although controller 110 is shown in Figure 4, by way of example, as a separate unit from the image sensors 404, some or all of the processing functions of the controller may be performed by a suitable microprocessor and software or by dedicated circuitry within the housing of the image sensors 404 (e.g., a hand tracking device) or otherwise associated with the image sensors 404. In some embodiments, at least some of these processing functions may be carried out by a suitable processor that is integrated with the display generation component 120 (e.g., in a television set, a handheld device, or head-mounted device, for example) or with any other suitable computerized device, such as a game console or media player.
- the sensing functions of image sensors 404 may likewise be integrated into the computer or other computerized apparatus that is to be controlled by the sensor output.
- Figure 4 further includes a schematic representation of a depth map 410 captured by the image sensors 404, in accordance with some embodiments.
- the depth map, as explained above, comprises a matrix of pixels having respective depth values.
- the pixels 412 corresponding to the hand 406 have been segmented out from the background and the wrist in this map.
- the brightness of each pixel within the depth map 410 corresponds inversely to its depth value, i.e., the measured z distance from the image sensors 404, with the shade of gray growing darker with increasing depth.
- the controller 110 processes these depth values in order to identify and segment a component of the image (i.e., a group of neighboring pixels) having characteristics of a human hand. These characteristics may include, for example, overall size, shape, and motion from frame to frame of the sequence of depth maps.
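- the brightness convention and the depth-based segmentation described above could be sketched as follows; a real segmentation would also use shape and frame-to-frame motion, and all thresholds here are assumed for illustration.

```python
def depth_to_brightness(depth_m, near_m=0.2, far_m=1.2):
    """Map a depth value to an 8-bit gray level that darkens with distance,
    matching the 'brightness corresponds inversely to depth' convention above."""
    clamped = min(max(depth_m, near_m), far_m)
    return round(255 * (far_m - clamped) / (far_m - near_m))


def segment_hand(depth_map, hand_depth_m, tolerance_m=0.06, min_pixels=4):
    """Return the set of (row, col) pixels within a depth band around the hand.

    depth_map: 2D list of depth values in meters (0 or None = no depth return).
    Only the depth-band and minimum-size criteria are modeled in this sketch.
    """
    pixels = {
        (r, c)
        for r, row in enumerate(depth_map)
        for c, z in enumerate(row)
        if z and abs(z - hand_depth_m) <= tolerance_m
    }
    return pixels if len(pixels) >= min_pixels else set()
```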
- Figure 4 also schematically illustrates a hand skeleton 414 that controller 110 ultimately extracts from the depth map 410 of the hand 406, in accordance with some embodiments.
- the hand skeleton 414 is superimposed on a hand background 416 that has been segmented from the original depth map.
- key feature points of the hand (e.g., points corresponding to knuckles, finger tips, center of the palm, end of the hand connecting to wrist, etc.) and location and movements of these key feature points over multiple image frames are used by the controller 110 to determine the hand gestures performed by the hand or the current state of the hand, in accordance with some embodiments.
- Figure 5 illustrates an example embodiment of the eye tracking device 130 (Figure 1A).
- the eye tracking device 130 is controlled by the eye tracking unit 243 ( Figure 2) to track the position and movement of the user’s gaze with respect to the scene 105 or with respect to the XR content displayed via the display generation component 120.
- the eye tracking device 130 is integrated with the display generation component 120.
- the display generation component 120 is a head-mounted device such as a headset, helmet, goggles, or glasses, or a handheld device placed in a wearable frame, and the head-mounted device includes both a component that generates the XR content for viewing by the user and a component for tracking the gaze of the user relative to the XR content.
- the eye tracking device 130 is separate from the display generation component 120.
- the eye tracking device 130 is optionally a separate device from the handheld device or XR chamber.
- the eye tracking device 130 is a head-mounted device or part of a head-mounted device.
- the head-mounted eye-tracking device 130 is optionally used in conjunction with a display generation component that is also head-mounted, or a display generation component that is not head-mounted.
- the eye tracking device 130 is not a head-mounted device, and is optionally used in conjunction with a head-mounted display generation component.
- the eye tracking device 130 is not a head-mounted device, and is optionally part of a non-head-mounted display generation component.
- the display generation component 120 uses a display mechanism (e.g., left and right near-eye display panels) for displaying frames including left and right images in front of a user’s eyes to thus provide 3D virtual views to the user.
- a head-mounted display generation component may include left and right optical lenses (referred to herein as eye lenses) located between the display and the user’s eyes.
- the display generation component may include or be coupled to one or more external video cameras that capture video of the user’s environment for display.
- a head-mounted display generation component may have a transparent or semi-transparent display through which a user may view the physical environment directly and display virtual objects on the transparent or semi-transparent display.
- the display generation component projects virtual objects into the physical environment.
- the virtual objects may be projected, for example, on a physical surface or as a holograph, so that an individual, using the system, observes the virtual objects superimposed over the physical environment. In such cases, separate display panels and image frames for the left and right eyes may not be necessary.
- the eye tracking device 130 (e.g., a gaze tracking device) includes at least one eye tracking camera (e.g., infrared (IR) or near-IR (NIR) cameras) and illumination sources (e.g., IR or NIR light sources such as an array or ring of LEDs) that emit light towards the user's eyes.
- the eye tracking cameras may be pointed towards the user’s eyes to receive reflected IR or NIR light from the light sources directly from the eyes, or alternatively may be pointed towards “hot” mirrors located between the user’s eyes and the display panels that reflect IR or NIR light from the eyes to the eye tracking cameras while allowing visible light to pass.
- the eye tracking device 130 optionally captures images of the user's eyes (e.g., as a video stream captured at 60-120 frames per second (fps)), analyzes the images to generate gaze tracking information, and communicates the gaze tracking information to the controller 110.
- two eyes of the user are separately tracked by respective eye tracking cameras and illumination sources.
- only one eye of the user is tracked by a respective eye tracking camera and illumination sources.
- the eye tracking device 130 is calibrated using a device-specific calibration process to determine parameters of the eye tracking device for the specific operating environment 100, for example the 3D geometric relationship and parameters of the LEDs, cameras, hot mirrors (if present), eye lenses, and display screen.
- the device-specific calibration process may be performed at the factory or another facility prior to delivery of the AR/VR equipment to the end user.
- the device- specific calibration process may be an automated calibration process or a manual calibration process.
- a user-specific calibration process may include an estimation of a specific user’s eye parameters, for example the pupil location, fovea location, optical axis, visual axis, eye spacing, etc.
- images captured by the eye tracking cameras can be processed using a glint-assisted method to determine the current visual axis and point of gaze of the user with respect to the display, in accordance with some embodiments.
- the eye tracking device 130 (e.g., 130A or 130B) includes eye lens(es) 520, and a gaze tracking system that includes at least one eye tracking camera 540 (e.g., infrared (IR) or near-IR (NIR) cameras) positioned on a side of the user’s face for which eye tracking is performed, and an illumination source 530 (e.g., IR or NIR light sources such as an array or ring of NIR light-emitting diodes (LEDs)) that emit light (e.g., IR or NIR light) towards the user’s eye(s) 592.
- the eye tracking cameras 540 may be pointed towards mirrors 550 located between the user’s eye(s) 592 and a display 510 (e.g., a left or right display panel of a head-mounted display, or a display of a handheld device, a projector, etc.) that reflect IR or NIR light from the eye(s) 592 while allowing visible light to pass (e.g., as shown in the top portion of Figure 5), or alternatively may be pointed towards the user’s eye(s) 592 to receive reflected IR or NIR light from the eye(s) 592 (e.g., as shown in the bottom portion of Figure 5).
- the controller 110 renders AR or VR frames 562 (e.g., left and right frames for left and right display panels) and provides the frames 562 to the display 510.
- the controller 110 uses gaze tracking input 542 from the eye tracking cameras 540 for various purposes, for example in processing the frames 562 for display.
- the controller 110 optionally estimates the user’s point of gaze on the display 510 based on the gaze tracking input 542 obtained from the eye tracking cameras 540 using the glint-assisted methods or other suitable methods.
- the point of gaze estimated from the gaze tracking input 542 is optionally used to determine the direction in which the user is currently looking.
- the controller 110 may render virtual content differently based on the determined direction of the user’s gaze. For example, the controller 110 may generate virtual content at a higher resolution in a foveal region determined from the user’s current gaze direction than in peripheral regions. As another example, the controller may position or move virtual content in the view based at least in part on the user’s current gaze direction. As another example, the controller may display particular virtual content in the view based at least in part on the user’s current gaze direction. As another example use case in AR applications, the controller 110 may direct external cameras for capturing the physical environments of the XR experience to focus in the determined direction.
- the autofocus mechanism of the external cameras may then focus on an object or surface in the environment that the user is currently looking at on the display 510.
- the eye lenses 520 may be focusable lenses, and the gaze tracking information is used by the controller to adjust the focus of the eye lenses 520 so that the virtual object that the user is currently looking at has the proper vergence to match the convergence of the user’s eyes 592.
- the controller 110 may leverage the gaze tracking information to direct the eye lenses 520 to adjust focus so that close objects that the user is looking at appear at the right distance.
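- a minimal, hypothetical sketch of gaze-dependent resolution selection (foveated rendering) along these lines is shown below; the foveal radius and peripheral scale are assumed values, not parameters from the disclosure.

```python
def render_scale_for_tile(tile_center_deg, gaze_dir_deg,
                          foveal_radius_deg=10.0, peripheral_scale=0.5):
    """Choose a resolution scale for a screen tile from the gaze direction.

    Tiles whose angular distance from the current gaze direction falls within
    the foveal radius render at full resolution; the rest render at a reduced
    scale. All numeric values are illustrative assumptions.
    """
    dx = tile_center_deg[0] - gaze_dir_deg[0]
    dy = tile_center_deg[1] - gaze_dir_deg[1]
    angular_distance = (dx * dx + dy * dy) ** 0.5
    return 1.0 if angular_distance <= foveal_radius_deg else peripheral_scale


# Example: a tile about 4 degrees from the gaze point renders at full resolution,
# while a tile 25 degrees away renders at half resolution.
print(render_scale_for_tile((3.0, 2.5), (0.0, 0.0)),
      render_scale_for_tile((20.0, 15.0), (0.0, 0.0)))
```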
- the eye tracking device is part of a head-mounted device that includes a display (e.g., display 510), two eye lenses (e.g., eye lens(es) 520), eye tracking cameras (e.g., eye tracking camera(s) 540), and light sources (e.g., illumination sources 530 (e.g., IR or NIR LEDs)), mounted in a wearable housing.
- the light sources emit light (e.g., IR or NIR light) towards the user’s eye(s) 592.
- the light sources may be arranged in rings or circles around each of the lenses as shown in Figure 5.
- in some embodiments, eight illumination sources 530 (e.g., LEDs) are arranged around each of the eye lenses 520.
- the display 510 emits light in the visible light range and does not emit light in the IR or NIR range, and thus does not introduce noise in the gaze tracking system.
- the location and angle of eye tracking camera(s) 540 are given by way of example, and are not intended to be limiting.
- a single eye tracking camera 540 is located on each side of the user’s face.
- two or more NIR cameras 540 may be used on each side of the user’s face.
- a camera 540 with a wider field of view (FOV) and a camera 540 with a narrower FOV may be used on each side of the user’s face.
- a camera 540 that operates at one wavelength (e.g., 850nm) and a camera 540 that operates at a different wavelength (e.g., 940nm) may be used on each side of the user’s face.
- Embodiments of the gaze tracking system as illustrated in Figure 5 may, for example, be used in computer-generated reality, virtual reality, and/or mixed reality applications to provide computer-generated reality, virtual reality, augmented reality, and/or augmented virtuality experiences to the user.
- FIG. 6 illustrates a glint-assisted gaze tracking pipeline, in accordance with some embodiments.
- the gaze tracking pipeline is implemented by a glint-assisted gaze tracking system (e.g., eye tracking device 130 as illustrated in Figures 1A-1P and 5).
- the glint-assisted gaze tracking system may maintain a tracking state. Initially, the tracking state is off or “NO”. When in the tracking state, the glint-assisted gaze tracking system uses prior information from the previous frame when analyzing the current frame to track the pupil contour and glints in the current frame. When not in the tracking state, the glint-assisted gaze tracking system attempts to detect the pupil and glints in the current frame and, if successful, initializes the tracking state to “YES” and continues with the next frame in the tracking state.
- the gaze tracking cameras may capture left and right images of the user’s left and right eyes.
- the captured images are then input to a gaze tracking pipeline for processing beginning at 610.
- the gaze tracking system may continue to capture images of the user’s eyes, for example at a rate of 60 to 120 frames per second.
- each set of captured images may be input to the pipeline for processing. However, in some embodiments or under some conditions, not all captured frames are processed by the pipeline.
- if the tracking state is YES, the method proceeds to element 640.
- if the tracking state is NO, then as indicated at 620, the images are analyzed to detect the user's pupils and glints in the images.
- if the pupils and glints are successfully detected, the method proceeds to element 640. Otherwise, the method returns to element 610 to process next images of the user's eyes.
- the current frames are analyzed to track the pupils and glints based in part on prior information from the previous frames.
- the tracking state is initialized based on the detected pupils and glints in the current frames.
- Results of processing at element 640 are checked to verify that the results of tracking or detection can be trusted. For example, results may be checked to determine if the pupil and a sufficient number of glints to perform gaze estimation are successfully tracked or detected in the current frames.
- if the results are not trusted, the tracking state is set to NO at element 660, and the method returns to element 610 to process next images of the user's eyes.
- if the results are trusted, the method proceeds to element 670.
- the tracking state is set to YES (if not already YES), and the pupil and glint information is passed to element 680 to estimate the user’s point of gaze.
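- the control flow of this pipeline (elements 610-680) could be sketched as follows; the detection, tracking, trust, and gaze-estimation callables are placeholders, and only the YES/NO tracking-state logic follows the figure.

```python
def gaze_tracking_loop(capture_frames, detect, track, trust, estimate_gaze):
    """Sketch of the glint-assisted pipeline of Figure 6 (elements 610-680).

    capture_frames: iterable yielding eye-camera frames.
    detect(frame) / track(prev, frame): return (pupil, glints) or None.
    trust(result): True if enough glints and the pupil were found (element 650).
    estimate_gaze(result): point-of-gaze estimate (element 680).
    All callables are placeholders; only the control flow follows the figure.
    """
    tracking = False                                     # tracking state starts as NO
    prev = None
    for frame in capture_frames:                         # element 610
        result = track(prev, frame) if tracking else detect(frame)  # 640 / 620
        if result is None or not trust(result):          # elements 630 / 650
            tracking, prev = False, None                 # element 660
            continue
        tracking, prev = True, result                    # element 670
        yield estimate_gaze(result)                      # element 680
```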
- Figure 6 is intended to serve as one example of eye tracking technology that may be used in a particular implementation.
- eye tracking technologies that currently exist or are developed in the future may be used in place of or in combination with the glint-assisted eye tracking technology described herein in the computer system 101 for providing XR experiences to users, in accordance with various embodiments.
- the captured portions of real world environment 602 are used to provide an XR experience to the user, for example, a mixed reality environment in which one or more virtual objects are superimposed over representations of real world environment 602.
- a three-dimensional environment optionally includes a representation of a table that exists in the physical environment, which is captured and displayed in the three-dimensional environment (e.g., actively via cameras and displays of a computer system, or passively via a transparent or translucent display of the computer system).
- the three-dimensional environment is optionally a mixed reality system in which the three-dimensional environment is based on the physical environment that is captured by one or more sensors of the computer system and displayed via a display generation component.
- the computer system is optionally able to selectively display portions and/or objects of the physical environment such that the respective portions and/or objects of the physical environment appear as if they exist in the three-dimensional environment displayed by the computer system.
- the computer system is optionally able to display virtual objects in the three-dimensional environment to appear as if the virtual objects exist in the real world (e.g., physical environment) by placing the virtual objects at respective locations in the three-dimensional environment that have corresponding locations in the real world.
- the computer system optionally displays a vase such that it appears as if a real vase is placed on top of a table in the physical environment.
- a respective location in the three- dimensional environment has a corresponding location in the physical environment.
- the computer system when the computer system is described as displaying a virtual object at a respective location with respect to a physical object (e.g., such as a location at or near the hand of the user, or at or near a physical table), the computer system displays the virtual object at a particular location in the three-dimensional environment such that it appears as if the virtual object is at or near the physical object in the physical world (e.g., the virtual object is displayed at a location in the three-dimensional environment that corresponds to a location in the physical environment at which the virtual object would be displayed if it were a real object at that particular location).
- real world objects that exist in the physical environment that are displayed in the three-dimensional environment can interact with virtual objects that exist only in the three- dimensional environment.
- a three-dimensional environment can include a table and a vase placed on top of the table, with the table being a view of (or a representation of) a physical table in the physical environment, and the vase being a virtual object.
- in describing a three-dimensional environment (e.g., a real environment, a virtual environment, or an environment that includes a mix of real and virtual objects), objects are sometimes referred to as having a depth or simulated depth, or objects are referred to as being visible, displayed, or placed at different depths.
- depth refers to a dimension other than height or width.
- depth is defined relative to a fixed set of coordinates (e.g., where a room or an object has a height, depth, and width defined relative to the fixed set of coordinates).
- depth is defined relative to a location or viewpoint of a user, in which case, the depth dimension varies based on the location of the user and/or the location and angle of the viewpoint of the user.
- depth is defined relative to a location of a user that is positioned relative to a surface of an environment (e.g., a floor of an environment, or a surface of the ground), in which case objects that are further away from the user along a line that extends parallel to the surface are considered to have a greater depth in the environment, and/or the depth of an object is measured along an axis that extends outward from a location of the user and is parallel to the surface of the environment (e.g., depth is defined in a cylindrical or substantially cylindrical coordinate system with the position of the user at the center of the cylinder that extends from a head of the user toward feet of the user).
- depth is defined relative to a viewpoint of a user (e.g., a direction relative to a point in space that determines which portion of an environment is visible via a head mounted device or other display), in which case objects that are further away from the viewpoint of the user along a line that extends parallel to the direction of the viewpoint of the user are considered to have a greater depth in the environment, and/or the depth of an object is measured along an axis that extends outward from a line that extends from the viewpoint of the user and is parallel to the direction of the viewpoint of the user (e.g., depth is defined in a spherical or substantially spherical coordinate system with the origin of the viewpoint at the center of the sphere that extends outwardly from a head of the user).
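- the difference between the cylindrical and spherical depth conventions above can be illustrated with the following hypothetical sketch, where the y axis is height and the x/z axes span the floor plane (an assumed coordinate convention).

```python
from math import hypot


def cylindrical_depth(point, user_position):
    """Depth measured parallel to the floor, ignoring height differences
    (the cylindrical convention described above for a user positioned on a surface)."""
    dx = point[0] - user_position[0]
    dz = point[2] - user_position[2]        # x/z span the floor plane, y is height
    return hypot(dx, dz)


def spherical_depth(point, viewpoint_position):
    """Depth measured as straight-line distance from the viewpoint
    (the spherical convention described above for a head-mounted viewpoint)."""
    return hypot(point[0] - viewpoint_position[0],
                 point[1] - viewpoint_position[1],
                 point[2] - viewpoint_position[2])


# Example: an object 3 m in front of and 1 m above the user has a depth of 3 m in
# the cylindrical sense but about 3.16 m in the spherical sense.
print(cylindrical_depth((0.0, 1.0, 3.0), (0.0, 0.0, 0.0)),
      round(spherical_depth((0.0, 1.0, 3.0), (0.0, 0.0, 0.0)), 2))
```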
- depth is defined relative to a user interface container (e.g., a window or application in which application and/or system content is displayed) where the user interface container has a height and/or width, and depth is a dimension that is orthogonal to the height and/or width of the user interface container.
- the height and/or width of the container are typically orthogonal or substantially orthogonal to a line that extends from a location based on the user (e.g., a viewpoint of the user or a location of the user) to the user interface container (e.g., the center of the user interface container, or another characteristic point of the user interface container) when the container is placed in the three-dimensional environment or is initially displayed (e.g., so that the depth dimension for the container extends outward away from the user or the viewpoint of the user).
- depth of an object relative to the user interface container refers to a position of the object along the depth dimension for the user interface container.
- multiple different containers can have different depth dimensions (e.g., different depth dimensions that extend away from the user or the viewpoint of the user in different directions and/or from different starting points).
- the direction of the depth dimension remains constant for the user interface container as the location of the user interface container, the user and/or the viewpoint of the user changes (e.g., or when multiple different viewers are viewing the same container in the three-dimensional environment such as during an in-person collaboration session and/or when multiple participants are in a real-time communication session with shared virtual content including the container).
- the depth dimension optionally extends into a surface of the curved container.
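As a rough illustration of the container-relative depth convention described above, the sketch below fixes a depth axis when the container is placed and measures an object's position along it. The struct and member names are assumptions for illustration only.

```swift
import simd

struct ContainerDepthAxis {
    var origin: SIMD3<Double>   // e.g., the center of the container when it was placed
    var normal: SIMD3<Double>   // unit vector orthogonal to the container's height and width

    // Position of an object along the container's depth dimension. The direction of this axis
    // stays constant even if the container, the user, or the viewpoint later moves.
    func depth(of object: SIMD3<Double>) -> Double {
        dot(object - origin, normal)
    }
}
```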
- z-separation e.g., separation of two objects in a depth dimension
- z-height e.g., distance of one object from another in a depth dimension
- z-position e.g., position of one object in a depth dimension
- z-depth e.g., position of one object in a depth dimension
- simulated z dimension e.g., depth used as a dimension of an object, dimension of an environment, a direction in space, and/or a direction in simulated space
- a user is optionally able to interact with virtual objects in the three-dimensional environment using one or more hands as if the virtual objects were real objects in the physical environment.
- one or more sensors of the computer system optionally capture one or more of the hands of the user and display representations of the hands of the user in the three-dimensional environment (e.g., in a manner similar to displaying a real world object in three-dimensional environment described above), or in some embodiments, the hands of the user are visible via the display generation component via the ability to see the physical environment through the user interface due to the transparency/translucency of a portion of the display generation component that is displaying the user interface or due to projection of the user interface onto a transparent/translucent surface or projection of the user interface onto the user’s eye or into a field of view of the user’s eye.
- the hands of the user are displayed at a respective location in the three-dimensional environment and are treated as if they were objects in the three-dimensional environment that are able to interact with the virtual objects in the three-dimensional environment as if they were physical objects in the physical environment.
- the computer system is able to update display of the representations of the user’s hands in the three-dimensional environment in conjunction with the movement of the user’s hands in the physical environment.
- the computer system is optionally able to determine the “effective” distance between physical objects in the physical world and virtual objects in the three-dimensional environment, for example, for the purpose of determining whether a physical object is directly interacting with a virtual object (e.g., whether a hand is touching, grabbing, holding, etc. a virtual object or within a threshold distance of a virtual object).
- a hand directly interacting with a virtual object optionally includes one or more of a finger of a hand pressing a virtual button, a hand of a user grabbing a virtual vase, two fingers of a hand of the user coming together and pinching/holding a user interface of an application, and any of the other types of interactions described here.
- the computer system optionally determines the distance between the hands of the user and virtual objects when determining whether the user is interacting with virtual objects and/or how the user is interacting with virtual objects.
- the computer system determines the distance between the hands of the user and a virtual object by determining the distance between the location of the hands in the three- dimensional environment and the location of the virtual object of interest in the three- dimensional environment.
- the one or more hands of the user are located at a particular position in the physical world, which the computer system optionally captures and displays at a particular corresponding position in the three-dimensional environment (e.g., the position in the three-dimensional environment at which the hands would be displayed if the hands were virtual, rather than physical, hands).
- the position of the hands in the three- dimensional environment is optionally compared with the position of the virtual object of interest in the three-dimensional environment to determine the distance between the one or more hands of the user and the virtual object.
- the computer system optionally determines a distance between a physical object and a virtual object by comparing positions in the physical world (e.g., as opposed to comparing positions in the three- dimensional environment).
- the computer system when determining the distance between one or more hands of the user and a virtual object, the computer system optionally determines the corresponding location in the physical world of the virtual object (e.g., the position at which the virtual object would be located in the physical world if it were a physical object rather than a virtual object), and then determines the distance between the corresponding physical position and the one or more hands of the user. In some embodiments, the same techniques are optionally used to determine the distance between any physical object and any virtual object.
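A minimal sketch of the distance determination described above, assuming a rigid transform between tracked physical positions and environment coordinates and an arbitrary example threshold; the names and values are illustrative, not the disclosed implementation.

```swift
import simd

struct InteractionSpace {
    // Rigid transform mapping positions tracked in the physical environment into the
    // coordinate space of the three-dimensional environment.
    var physicalToEnvironment: simd_double4x4

    func environmentPosition(ofPhysical p: SIMD3<Double>) -> SIMD3<Double> {
        let mapped = physicalToEnvironment * SIMD4<Double>(p.x, p.y, p.z, 1)
        return SIMD3(mapped.x, mapped.y, mapped.z)
    }

    // Whether a tracked hand is close enough to a virtual object to be treated as directly
    // interacting with it (touching, grabbing, pinching, and so on).
    func handIsDirectlyInteracting(handPhysicalPosition: SIMD3<Double>,
                                   virtualObjectPosition: SIMD3<Double>,
                                   threshold: Double = 0.02) -> Bool {
        let hand = environmentPosition(ofPhysical: handPhysicalPosition)
        return length(hand - virtualObjectPosition) <= threshold
    }
}
```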
- the computer system when determining whether a physical object is in contact with a virtual object or whether a physical object is within a threshold distance of a virtual object, the computer system optionally performs any of the techniques described above to map the location of the physical object to the three-dimensional environment and/or map the location of the virtual object to the physical environment.
- the same or similar technique is used to determine where and what the gaze of the user is directed to and/or where and at what a physical stylus held by a user is pointed. For example, if the gaze of the user is directed to a particular position in the physical environment, the computer system optionally determines the corresponding position in the three-dimensional environment (e.g., the virtual position of the gaze), and if a virtual object is located at that corresponding virtual position, the computer system optionally determines that the gaze of the user is directed to that virtual object. Similarly, the computer system is optionally able to determine, based on the orientation of a physical stylus, to where in the physical environment the stylus is pointing.
- the computer system determines the corresponding virtual position in the three-dimensional environment that corresponds to the location in the physical environment to which the stylus is pointing, and optionally determines that the stylus is pointing at the corresponding virtual position in the three-dimensional environment.
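One simple way to realize the gaze and stylus mapping described above is to treat the gaze or stylus as a ray in the three-dimensional environment and test it against a bounding volume of a virtual object. The sketch below uses a bounding-sphere test; the names and the sphere simplification are assumptions.

```swift
import simd

struct PointingRay {
    var origin: SIMD3<Double>
    var direction: SIMD3<Double>   // assumed to be normalized
}

// Returns true if the ray (e.g., a gaze direction or the axis of a physical stylus, already
// mapped into environment coordinates) is directed at a virtual object approximated by a
// bounding sphere.
func isDirected(_ ray: PointingRay,
                atObjectCenteredAt center: SIMD3<Double>,
                radius: Double) -> Bool {
    let toCenter = center - ray.origin
    let along = dot(toCenter, ray.direction)        // distance along the ray to the closest point
    guard along >= 0 else { return false }          // the object is behind the ray origin
    let closestPoint = ray.origin + along * ray.direction
    return length(center - closestPoint) <= radius
}
```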
- the embodiments described herein may refer to the location of the user (e.g., the user of the computer system) and/or the location of the computer system in the three-dimensional environment.
- the user of the computer system is holding, wearing, or otherwise located at or near the computer system.
- the location of the computer system is used as a proxy for the location of the user.
- the location of the computer system and/or user in the physical environment corresponds to a respective location in the three-dimensional environment.
- the location of the computer system would be the location in the physical environment (and its corresponding location in the three-dimensional environment) from which, if a user were to stand at that location facing a respective portion of the physical environment that is visible via the display generation component, the user would see the objects in the physical environment in the same positions, orientations, and/or sizes as they are displayed by or visible via the display generation component of the computer system in the three-dimensional environment (e.g., in absolute terms and/or relative to each other).
- the location of the computer system and/or user is the position from which the user would see the virtual objects in the physical environment in the same positions, orientations, and/or sizes as they are displayed by the display generation component of the computer system in the three- dimensional environment (e.g., in absolute terms and/or relative to each other and the real world objects).
- various input methods are described with respect to interactions with a computer system.
- each example may be compatible with and optionally utilizes the input device or input method described with respect to another example.
- various output methods are described with respect to interactions with a computer system.
- each example may be compatible with and optionally utilizes the output device or output method described with respect to another example.
- various methods are described with respect to interactions with a virtual environment or a mixed reality environment through a computer system.
- user interfaces ("UI")
- a computer system such as a portable multifunction device or a head-mounted device, in communication with a display generation component, and (optionally) one or more input devices.
- Figures 7A-7S illustrate examples of providing content.
- Figure 8 is a flow diagram of an exemplary method 800 for providing content.
- Figure 9 is a flow diagram of an exemplary method 900 for providing content.
- the user interfaces in Figures 7A-7S are used to illustrate the processes described below, including the processes in Figures 8 and 9.
- FIG. 7A depicts electronic device 700, which is a tablet that includes touch- sensitive display 702, one or more input sensors 704 (e.g., one or more cameras, eye gaze trackers, hand movement trackers, and/or head movement trackers), and one or more buttons 706a-706c.
- electronic device 700 is a tablet.
- electronic device 700 is a smart phone, a wearable device, a wearable smartwatch device, a head-mounted system (e.g., a headset), or other computer system that includes and/or is in communication with one or more display devices (e.g., display screen, projection device, or the like).
- electronic device 700 optionally includes two displays (e.g., one for each eye of a user), with each display displaying respective various content, to enable a user of electronic device 700 to perceive the various depths of the various content (e.g., physical objects and/or virtual objects) of three-dimensional environments.
- Electronic device 700 is a computer system (e.g., computer system 101 in Figure 1 A).
- electronic device 700 displays, via display 702, user interface 710 overlaid on three-dimensional environment 708.
- three-dimensional environment 708 includes objects 708a-708c.
- three-dimensional environment 708 is displayed by a display (e.g., display 702, as depicted in Figure 7A).
- three-dimensional environment 708 includes a virtual environment or an image (or video) of a physical environment captured by one or more cameras (e.g., one or more cameras that are part of input sensors 704 and/or one or more external cameras).
- object 708a is a virtual object that is representative of a physical object that has been captured by one or more cameras and/or detected by one or more sensors; and object 708b is a virtual object that is representative of a second physical object that has been captured by one or more cameras and/or detected by one or more sensors, and so forth.
- three-dimensional environment 708 is visible to a user behind user interface 710 but is not displayed by a display.
- three-dimensional environment 708 is a physical environment (and, for example, objects 708a-708c are physical objects) that is visible to a user (e.g., through one or more transparent displays) behind user interface 710 without being displayed by a display.
- user interface 710 and/or three-dimensional environment 708 are part of an extended reality experience.
- User interface 710 is a media gallery user interface, and includes a plurality of media items 712a-712k arranged in a grid.
- media items 712a-712k in user interface 710 include media items of a plurality of different types including, for example: panoramic images, non-panoramic images, videos, stereoscopic media items, and/or non-stereoscopic media items.
- media items of different types are presented differently and include different features, as will be described in greater detail below.
- electronic device 700 detects user input 714b and gaze input 714a directed to media item 712e.
- user input 714b is an air gesture input.
- user input 714b is a pinch air gesture or a tap air gesture.
- electronic device 700 displays an enlarged viewer user interface 713 that includes media item 712e in an enlarged state, labeled as media item 712e-l in Figure 7B, and outputs audio output 720a and haptic output 720b.
- media item 712e-l is displayed in a center position of a viewing region 715 of enlarged viewer user interface 713, and is displayed with controls 718a-l, 718b-l.
- Control 718a-l is selectable to cease display of enlarged viewer user interface 713, and return to display of user interface 710 from Figure 7A.
- Control 718b- 1 is selectable to expand enlarged media item 712e-l even further.
- Enlarged viewer user interface 713 also includes scrubber 716, which is displayed below viewing region 715 and includes smaller representations of media items 712b-2, 712c-2, 712d-2, 712e-2, 712f-2, 712g-2, 712k-2.
- a user can interact with scrubber 716 to navigate between different media items within enlarged viewer user interface 713, as will be described in greater detail below.
- Electronic device 700 also displays, within viewing region 715 of enlarged viewer user interface 713, media item 712d- 1 to the left of and behind media item 712e-l, and media item 712f-l to the right of and behind media item 712e-l.
- the plurality of media items from Figure 7A are arranged in an ordered sequence, and media item 712d- 1 immediately precedes media item 712e-l in the ordered sequence, and media item 712f-l immediately follows media item 712e-l in the ordered sequence.
- a user is able to provide a user input on viewing area 715 and/or scrubber 716 to navigate between different media items.
- electronic device 700 detects user input 722b (e.g., a user input that includes movement in a direction (e.g., a swipe gesture and/or an air swipe gesture to the left)) and gaze input 722a-l directed to viewing area 715 or gaze input 722a-2 directed to scrubber 716.
- a user can navigate between different media items by interacting with viewing area 715 or by interacting with scrubber 716.
- a user input that includes movement in a first direction and having a first magnitude causes a greater degree of navigation when the user is interacting with scrubber 716 than when the user is interacting with viewing area 715.
- interacting with scrubber 716 causes scrolling of media items at a faster rate than the same input provided while interacting with viewing area 715.
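A minimal sketch of this difference in navigation gain, with purely illustrative gain values and names (no specific numbers are given in the description above):

```swift
enum NavigationTarget {
    case viewingArea
    case scrubber
}

// Maps the magnitude of a swipe (in arbitrary normalized units) to a number of media items to
// advance. The same swipe advances farther when the user's gaze is on the scrubber.
func itemsToAdvance(forSwipeMagnitude magnitude: Double, target: NavigationTarget) -> Int {
    let itemsPerUnit = (target == .scrubber) ? 8.0 : 1.0   // hypothetical gains
    return Int((magnitude * itemsPerUnit).rounded())
}
```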
- electronic device 700 in response to detecting user input 722b (e.g., a swipe gesture or air swipe gesture moving to the left) while the gaze of the user is directed to either viewing area 715 or scrubber 716, electronic device 700 outputs audio output 721a and haptic output 721b, and displays media item 712e-l moving away from its center position of viewing area 715 and to the left, and displays media item 712f- 1 moving from the right side of viewing area 715 towards the center position.
- controls 718a- 1, 718b- 1 cease to be displayed, and portions of media item 712e-l become feathered, blurred, and/or cropped.
- media item 712e-l is gradually visually deemphasized by, for example, decreasing the size of media item 712e-l, increasing a transparency of media item 712e-l, decreasing an opacity of media item 712e-l, decreasing a saturation of media item 712e-l, and/or decreasing a brightness of media item 712e-l.
- media item 712f-l As media item 712f-l is moved from its right position of viewing area 715 towards the center position, media item 712f-l is gradually visually emphasized by, for example, increasing the size of media item 712f- 1 , decreasing a transparency of media item 712f- 1 , increasing an opacity of media item 712f- 1 , increasing a saturation of media item 712f- 1 , and/or increasing a brightness of media item 712f-l . Additionally, as media items 712e-l, 712f-l are translated to the left, scrubber 716 is also scrolled to the left. At Figure 7C, electronic device 700 continues to detect user input 722b and gaze input 722a- 1 or gaze input 722a-2.
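The gradual emphasis and de-emphasis described above can be modeled as interpolating several visual characteristics as a function of how far a media item has moved from the center position. The ranges in the sketch below are assumptions for illustration only.

```swift
struct ItemAppearance {
    var scale: Double
    var opacity: Double
    var saturation: Double
    var brightness: Double
}

// `progress` is 0 when a media item occupies the center position of the viewing area and 1
// when it reaches the edge position; an item moving toward the center runs this in reverse.
func appearance(forProgressAwayFromCenter progress: Double) -> ItemAppearance {
    let t = min(max(progress, 0), 1)
    func lerp(_ a: Double, _ b: Double) -> Double { a + (b - a) * t }
    return ItemAppearance(scale: lerp(1.0, 0.8),
                          opacity: lerp(1.0, 0.4),
                          saturation: lerp(1.0, 0.6),
                          brightness: lerp(1.0, 0.7))
}
```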
- electronic device 700 displays media item 712e-l translated further away from the center position and further to the left of viewing area 715 and further visually de-emphasized, while media item 712f- 1 is translated closer to the center position and is further visually emphasized, as described above.
- electronic device 700 continues to detect user input 722b and gaze input 722a- 1 or gaze input 722a-2.
- Control 724b is selectable to further expand media item 712f-l.
- media item 712f-l is a stereoscopic media item
- media item 712e-l was a non-stereoscopic media item.
- media item 712f-l is displayed with a different expand control 724b than media item 712e-l was displayed with (control 718b- 1).
- electronic device 700 detects user input 726b and gaze input 726a directed toward control 724b.
- user input 726b is an air gesture input.
- user input 726b is a pinch air gesture or a tap air gesture input.
- electronic device 700 in response to detecting user input 726b (e.g., an air pinch gesture, an air tap gesture, a pinch gesture, a tap gesture, an air gesture and drag gesture, an air drag gesture, a drag gesture, a click and drag gesture, a gaze gesture, and/or other gesture) and gaze input 726a directed toward control 724b, electronic device 700 outputs audio output 728a and haptic output 728b, and displays media item 712f- 1 in a further expanded state, labeled as media item 712f-3.
- media item 712f-l is a stereoscopic image with media captured at the same time from two different cameras (or sets of cameras) that is displayed by displaying an image from a first set of one or more cameras for a first eye of a user and an image from a second set of one or more cameras for a second eye of the user.
- electronic device 700 displays the media item with a set of visual characteristics that correspond to stereoscopic media items. For example, in the depicted embodiment, media item 712f-3 is displayed in a three-dimensional manner with elements of media item 712f-3 expanded along an axis.
- media item 712f-3 is shown with front layer 732c, rear layer 732a, and one or more intermediate layers 732b between front layer 732c and rear layer 732a.
- the displayed media item in accordance with a determination that the displayed media item is a stereoscopic media item, is displayed within a continuous three-dimensional shape with continuous edges and/or blurred edges.
- media item 712f-3 is displayed with a glow effect that emanates outward from media item 712f-3 into three-dimensional environment 708.
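As a rough illustration of the layered presentation described above, the sketch below distributes a rear layer, one or more intermediate layers, and a front layer along a depth axis; the spacing value and names are assumptions.

```swift
// Returns depth offsets (in meters, rear layer first) for a stereoscopic media item that is
// expanded along an axis into `layerCount` layers spanning `totalDepth`.
func layerDepthOffsets(layerCount: Int, totalDepth: Double = 0.05) -> [Double] {
    guard layerCount > 1 else { return [0] }
    return (0..<layerCount).map { Double($0) / Double(layerCount - 1) * totalDepth }
}
```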
- Media item 712f-3 is displayed with option 730, which is selectable to return to display of enlarged viewer user interface 713.
- electronic device 700 detects user input 734b and gaze input 734a directed toward option 730.
- electronic device 700 in response to detecting user input 734b (e.g., an air pinch gesture, an air tap gesture, a pinch gesture, a tap gesture, an air gesture and drag gesture, an air drag gesture, a drag gesture, a click and drag gesture, a gaze gesture, and/or other gesture) and gaze input 734a, electronic device 700 outputs audio output 736a and haptic output 736b, and re-displays enlarged viewer user interface 713.
- electronic device 700 while displaying enlarged viewer user interface 713 with media item 712f- 1 occupying a center position of viewing area 715, electronic device 700 detects user input 738b and gaze input 738a-l directed to viewing area 715 or gaze input 738a-2 directed to scrubber 716.
- electronic device 700 in response to detecting user input 738b (e.g., an air pinch gesture, an air tap gesture, a pinch gesture, a tap gesture, an air gesture and drag gesture, an air drag gesture, a drag gesture, a click and drag gesture, a gaze gesture, and/or other gesture) and gaze input 738a-l or gaze input 738a-2, electronic device 700 outputs audio output 740a and haptic output 740b, and displays media item 712f- 1 moving away from the center position of viewing area 715 and media item 712k-l moving toward the center position of viewing area 715. As media item 712f-l is moved away from the center position of viewing area 715, it is visually de-emphasized, as discussed above.
- Figure 7H2 illustrates an embodiment in which user interface 713 and scrubber 716 (e.g., as described in Figures 7B-7H1) are displayed on display module X702 of head-mounted device (HMD) X700.
- device X700 includes a pair of display modules that provide stereoscopic content to different eyes of the same user.
- HMD X700 includes display module X702 (which provides content to a left eye of the user) and a second display module (which provides content to a right eye of the user).
- the second display module displays a slightly different image than display module X702 to generate the illusion of stereoscopic depth.
- HMD X700 outputs audio output X740a and haptic output X740b, and displays media item 712f- 1 moving away from the center position of viewing area 715 and media item 712k-l moving toward the center position of viewing area 715. As media item 712f-l is moved away from the center position of viewing area 715, it is visually de-emphasized, as discussed above.
- HMD X700 continues to detect user input X738b and gaze input X738a-1 or gaze input X738a-2.
- user input X738b includes an air gesture performed by a user of HMD X700.
- HMD X700 detects hands X750A and/or X750B of the user of HMD X700 and determines whether motion of hands X750A and/or X750B perform a predetermined air gesture corresponding to scrolling of media items within viewing area 715.
- the predetermined air gesture of user input X738b includes a pinch and swipe gesture.
- the pinch and swipe gesture includes detecting movement of finger X750C and thumb X750D toward one another, and detecting hand X750B moving in a first direction (e.g., to the left in Figure 7H2).
- HMD X700 detects a user request to scroll media items within viewing area 715 based on a gaze and air gesture input performed by the user of HMD X700.
- detecting the gaze and air gesture input includes detecting that the user of HMD X700 is looking at viewing area 715 and hand X750A of the user of HMD X700 perform a pinch and swipe gesture.
- HMD X700 includes any of the features, components, and/or parts of HMD 1-100, 1-200, 3-100, 6-100, 6-200, 6-300, 6-400, 11.1.1-100, and/or 11.1.2-100, either alone or in any combination.
- display module X702 includes any of the features, components, and/or parts of display unit 1-102, display unit 1-202, display unit 1-306, display unit 1-406, display generation component 120, display screens l-122a-b, first and second rear-facing display screens l-322a, l-322b, display 11.3.2-104, first and second display assemblies l-120a, 1- 120b, display assembly 1-320, display assembly 1-421, first and second display subassemblies l-420a, l-420b, display assembly 3-108, display assembly 11.3.2-204, first and second optical modules 11.1. l-104a and 11.
- HMD X700 includes a sensor that includes any of the features, components, and/or parts of any of sensors 190, sensors 306, image sensors 314, image sensors 404, sensor assembly 1- 356, sensor assembly 1-456, sensor system 6-102, sensor system 6-202, sensors 6-203, sensor system 6-302, sensors 6-303, sensor system 6-402, and/or sensors 11.1.2-1 lOa-f, either alone or in any combination.
- HMD X700 includes one or more input devices, which include any of the features, components, and/or parts of any of first button 1-128, button 11.1.1-114, second button 1-132, and/or dial or button 1-328, either alone or in any combination.
- HMD X700 includes one or more audio output components (e.g., electronic component 1-112) for generating audio feedback (e.g., audio output X740a), optionally generated based on detected events and/or user inputs detected by the HMD X700.
- electronic device 700 in response to continued detection of user input 738b and gaze input 738a-l or gaze input 738a-2, electronic device 700 continues to output audio output 740a and haptic output 740b, and displays further movement of media item 712f-l further from the center position and further visual de-emphasis of media item 712f-l .
- electronic device 700 displays further movement of media item 712k-l toward the center position of viewing area 715, and further visual emphasis of media item 712k-l.
- electronic device 700 continues to detect user input 738b and gaze input 738a-l or gaze input 738a-2.
- Control 742 is selectable to display media item 712k-l in a further expanded state.
- media item 712k-l is a panoramic image
- control 742 is displayed based on a determination that media item 712k-l is a panoramic image.
- electronic device 700 detects user input 744b and gaze input 744a directed to control 742.
- electronic device 700 displays media item 712k-4, which is a panoramic expansion of media item 712k-l.
- media item 712k-4 is displayed projected onto a simulated curved surface, such as a surface of cylinder 748, that is a predetermined distance away from the viewpoint of the user.
- the “simulated curved surface” described here does not correspond to a physical or even a virtual surface but is a curved region onto which content is projected and is optionally visible to the extent that the content is projected on the surface (e.g., and does not otherwise affect the three-dimensional environment in which the content is displayed).
- media item 712k-4 is projected onto a cylinder that has a radius of 10m, 100m, or 1000m.
- electronic device 700 is a head-mounted system. Displaying media item 712k-4 on a simulated curved surface that is set away from the user provides the user with an immersive experience that gives the impression of the user being surrounded by and/or positioned within the media item.
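A sketch of the projection geometry described above: each point of the panoramic media item is mapped onto a simulated cylinder centered on the viewpoint at a fixed radius. The function and parameter names are assumptions for illustration.

```swift
import Foundation
import simd

// Position in the three-dimensional environment of a point of the panoramic media item,
// given its horizontal angle around the user (0 = straight ahead, positive to the right)
// and its vertical offset from eye level, projected onto a cylinder of the given radius.
func pointOnSimulatedCylinder(viewpoint: SIMD3<Double>,
                              radius: Double,
                              horizontalAngle: Double,
                              verticalOffset: Double) -> SIMD3<Double> {
    SIMD3(viewpoint.x + radius * sin(horizontalAngle),
          viewpoint.y + verticalOffset,
          viewpoint.z - radius * cos(horizontalAngle))
}
```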
- the viewpoint of the user has shifted to the left to view a left end of media item 712k-4 (e.g., the user turns his or her head to the left while wearing electronic device 700), and in Figure 7M, the viewpoint of the user has shifted to the right to view a right end of media item 712k-4 (e.g., the user turns his or her head to the right while wearing electronic device 700).
- the techniques and user interface(s) described in Figures 7A-7R are provided by one or more of the devices described in Figures 1 A-1P.
- Figure 7K2 illustrates an embodiment in which media item 712k-4 (e.g., as described in Figure 7K1) is displayed on display module X702 of head-mounted device (HMD) X700.
- device X700 includes a pair of display modules that provide stereoscopic content to different eyes of the same user.
- HMD X700 includes display module X702 (which provides content to a left eye of the user) and a second display module (which provides content to a right eye of the user).
- the second display module displays a slightly different image than display module X702 to generate the illusion of stereoscopic depth.
- HMD X700 displays media item 712k-4, which is a panoramic expansion of media item 712k-l.
- media item 712k-4 is displayed projected onto a simulated curved surface, such as a surface of cylinder 748, that is a predetermined distance away from the viewpoint of the user.
- media item 712k-4 is projected onto a cylinder that has a radius of 10m, 100m, or 1000m.
- HMD X700 is a head-mounted system. Displaying media item 712k-4 on a simulated curved surface that is set away from the user provides the user with an immersive experience that gives the impression of the user being surrounded by and/or positioned within the media item.
- HMD X700 includes any of the features, components, and/or parts of HMD 1-100, 1-200, 3-100, 6-100, 6-200, 6-300, 6-400, 11.1.1-100, and/or 11.1.2-100, either alone or in any combination.
- display module X702 includes any of the features, components, and/or parts of display unit 1-102, display unit 1-202, display unit 1-306, display unit 1-406, display generation component 120, display screens l-122a-b, first and second rear-facing display screens l-322a, l-322b, display 11.3.2-104, first and second display assemblies l-120a, 1- 120b, display assembly 1-320, display assembly 1-421, first and second display subassemblies l-420a, l-420b, display assembly 3-108, display assembly 11.3.2-204, first and second optical modules 11.1.
- HMD X700 includes a sensor that includes any of the features, components, and/or parts of any of sensors 190, sensors 306, image sensors 314, image sensors 404, sensor assembly 1- 356, sensor assembly 1-456, sensor system 6-102, sensor system 6-202, sensors 6-203, sensor system 6-302, sensors 6-303, sensor system 6-402, and/or sensors 11.1.2-1 lOa-f, either alone or in any combination.
- HMD X700 includes one or more input devices, which include any of the features, components, and/or parts of any of first button 1-128, button 11.1.1-114, second button 1-132, and/or dial or button 1-328, either alone or in any combination.
- HMD X700 includes one or more audio output components (e.g., electronic component 1-112) for generating audio feedback (e.g., audio output X746a), optionally generated based on detected events and/or user inputs detected by the HMD X700.
- electronic device 700 modifies one or more size dimensions of media item 712k-4 based on the field of view of the camera that captured media item 712k-4 at the time of capture.
- the field of view of a camera can change by changing a focal length of a lens of the camera, or by changing a distance between the camera and the captured subject matter, or by changing a viewing angle of the camera.
- By dynamically adjusting the height of the media item based on the field of view of the camera that captured it, users are provided with an immersive viewing experience that allows them to view the media item at a scale that imitates the real-life scale of the subject that was captured.
- Figures 7N-7S illustrate various different example scenarios demonstrating how the dimensions of media item 712k-4 are changed based on the field of view of the camera when media item 712k-4 was captured.
- a height of media item 712k-4 is adjusted based on a field of view of the camera at the time media item 712k-4 was captured.
- Figures 7N-7P show media item 712k-4 having a first width
- Figures 7Q-7S show media item 712k-4 having a second, shorter width.
- media item 712k-4 occupies less of cylinder 748 because it has a shorter width than media item 712k-4 in Figures 7N-7P.
- media item 712k-4 is captured with the same camera, but the field of view of the camera is changed by, for example, changing a focal length of the lens and/or changing a viewing angle of the camera.
- D represents the size of an image sensor in the camera
- F represents the focal length of the lens.
- R represents the radius of cylinder 748
- a (al, a2, or a3) represents the viewing angle of the camera
- H (Hl, H2, or H3) represents the resulting height of media item 712k-4
- D is the same across all three scenarios, as the same camera is used, and R is also constant, as the radius of cylinder 748 does not change. Accordingly, as F or a change, the height of media item 712k-4 changes.
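Although the description above is qualitative, these relationships are consistent with ordinary pinhole-camera geometry. The following is a hedged reconstruction under that assumption (no explicit formula is given in the description):

```latex
% Assumed pinhole model: vertical angle of view from sensor size D and focal length F
\alpha = 2\arctan\!\left(\frac{D}{2F}\right)
% Height of the media item on a cylinder of radius R that subtends the angle \alpha
H = 2R\tan\!\left(\frac{\alpha}{2}\right) = \frac{R\,D}{F}
```

Under this reading, a shorter focal length yields a larger viewing angle and a greater projected height, and a longer focal length yields a smaller viewing angle and a shorter projected height, matching the scenarios described below.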
- the focal length of the camera is Fl
- the viewing angle of the camera is al
- the resulting height of media item 712k-4 is Hl.
- F2 is shorter than Fl
- a2 is greater than al
- resulting in a height, H2 that is greater than the height Hl of Figure 7N or Figure 7Q.
- F3 is longer than Fl, and a3 is smaller than al, resulting in a height, H3, that is shorter than the height Hl of Figure 7N or Figure 7Q.
- FIG 8 is a flow diagram of an exemplary method 800 for providing content, in accordance with some embodiments.
- method 800 is performed at a computer system (e.g., 700 and/or X700) (e.g., computer system 101 in Figure 1 A) (e.g., a smart phone, a smart watch, a tablet, a laptop, a desktop, a wearable device, and/or headmounted device) that is in communication with one or more display generation components (e.g., 702 and/or X702) (e.g., display generation component 120 in Figures 1 A, 3, and 4) (e.g., a visual output device, a 3D display, a display having at least a portion that is transparent or translucent on which images can be projected (e.g., a see-through display), a projector, a heads-up display, and/or a display controller) and one or more input devices (e.g., 702, 704, and/or 706a-706c)
- the method 800 is governed by instructions that are stored in a non-transitory (or transitory) computer-readable storage medium and that are executed by one or more processors of a computer system, such as the one or more processors 202 of computer system 101 (e.g., control 110 in Figure 1 A). Some operations in method 800 are, optionally, combined and/or the order of some operations is, optionally, changed.
- the computer system displays (802), via the one or more display generation components (e.g., 702 and/or X702), a first object (e.g., 712e- 1 ) (e.g., a first media item, a first photo, a first image, or a first video) of a plurality of objects (e.g., a plurality of media items (e.g., photos, images, and/or videos)) at a first display position (e.g., 712e-l in Figure 7B) (e.g., a first display position within a first user interface, and/or a first display position of the one or more display generation components) (e.g., a first display position indicative of the first object being currently selected (e.g., selected for display)).
- the computer system detects (804), via the one or more input devices, a first user input (e.g., 722a-l, 722a-2, and/or 722b) (e.g., a touchscreen input, a gaze input, a gesture input, and/or a mechanical input (e.g., pressing of a physical button and/or rotation of a rotatable input mechanism), and/or an air gesture input) corresponding to a user request to navigate within the plurality of objects (e.g., a first user input corresponding to a user request to transition from displaying the first object of the plurality of objects to displaying a second object of the plurality of objects; and/or a first user input corresponding to a user request to transition from displaying the first object at the first display position to displaying a second object at the first display position).
- the computer system displays (806), via the one or more display generation components, movement of the first object (e.g., 712e-l) from the first display position (e.g., Figure 7B) to a second display position (e.g., Figures 7C-7E) different from the first display position (e.g., a second display position within a first user interface, and/or a second display position of the one or more display generation components) (e.g., a second display position indicative of the first object not being and/or no longer being selected (e.g., selected for display)) while reducing a visual prominence of the first object (e.g., 712e- 1) by modifying an opacity (and optionally one or more other visual characteristics such as size, contrast, color, saturation, and/or degree of blurring) of the first object relative to a background (e.g., 708) over which the first object is displayed (e.g., reducing the visual prominence of the first object relative to a
- the visual prominence of the first object is gradually reduced as the first object is moved away from the first display position (and/or towards the second display position).
- a first characteristic is gradually changed from a first value to a second value as the first object is moved away from the first display position (and/or towards the second display position); and, optionally, a second characteristic is gradually changed from a third value to a fourth value as the first object is moved away from the first display position (and/or towards the second display position).
- the first object while the first object is displayed at the first display position, the first object is displayed having a first value for a first characteristic (e.g., size, contrast, color, saturation, and/or opacity); and while the first object is displayed at the second display position, the first object is displayed having a second value for the first characteristic.
- the first object is displayed having a third value for a second characteristic (e.g., size, contrast, color, saturation, and/or opacity); and while the first object (e.g., 712e- 1) is displayed at the second display position (e.g., Figures 7C-7E), the first object is displayed having a fourth value for the second characteristic.
- the first object is displayed at the first intermediate position between the first display position and the second display position
- the first object is displayed having an intermediate value for the second characteristic that is between the third value and the fourth value.
- reducing the visual prominence of the first object includes making the first object smaller in size. In some embodiments, reducing the visual prominence of the first object includes changing one or more colors of the first object. In some embodiments, reducing the visual prominence of the first object includes reducing the contrast of the first object. In some embodiments, reducing the visual prominence of the first object includes reducing the saturation of the first object. In some embodiments, reducing the visual prominence of the first object includes reducing the opacity (and/or increasing the transparency) of the first object.
- Displaying movement of an object and reducing a visual prominence of the first object in response to a user request to navigate a set of objects enhances the operability of the system and makes the user-system interface more efficient (e.g., by helping the user to provide proper inputs and reducing errors) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the system more quickly and efficiently. Doing so also provides the user with feedback about a state of the device (e.g., the device has detected the user input corresponding to the user request to navigate the set of objects).
- the first object (e.g., 712e- 1) is displayed within a display region (e.g., 715) (e.g., a window; a user interface; a portion of the one or more display generation components; the entire displayable area of the one or more display generation components; a viewpoint of the user; and/or a viewpoint of the computer system);
- the first display position (e.g., Figure 7B) is closer to a center (e.g., a horizontal center and/or a vertical center) of the display region than the second display position;
- the second display position (e.g., Figures 7C-7E) is closer to a first edge of the display region (e.g., a left edge, a right edge, a top edge, and/or a bottom edge) than the first display position.
- Displaying movement of an object from a center position towards an edge of a display region, and reducing a visual prominence of the first object in response to a user request to navigate a set of objects enhances the operability of the system and makes the user-system interface more efficient (e.g., by helping the user to provide proper inputs and reducing errors) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the system more quickly and efficiently. Doing so also provides the user with feedback about a state of the device (e.g., the device has detected the user input corresponding to the user request to navigate the set of objects).
- the computer system in response to detecting the first user input (e.g., 722a- 1, 722a-2, and/or 722b), while displaying movement of the first object (e.g., 712e-l) from the first display position (e.g., Figure 7B) to the second display position (e.g., Figures 7C-7E), displays movement of a second object (e.g., 712f-l) of the plurality of objects (e.g., a second media item, a second photo, a second image, and/or a second video) from a third display position different from the first display position to the first display position.
- the first display position is closer to a center of the display region than the third display position; and the third display position is closer to a second edge of the display region (e.g., a second edge different from or the same as the first edge) than the first display position.
- Displaying movement of a second object from an edge position towards a center of a display region in response to a user request to navigate a set of objects enhances the operability of the system and makes the user-system interface more efficient (e.g., by helping the user to provide proper inputs and reducing errors) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the system more quickly and efficiently.
- the computer system in response to detecting the first user input (e.g., 722a- 1, 722a-2, and/or 722b), while displaying movement of the second object (e.g., 712f- 1) from the third display position (e.g., Figure 7B) to the first display position (e.g., Figures 7C-7E) (and, optionally, while displaying movement of the first object from the first display position to the second display position), the computer system increases a visual prominence of the second object (e.g., 712f- 1) by modifying an opacity (and, optionally, one or more other visual characteristics such as size, contrast, color, saturation, and/or degree of blurring) of the second object relative to the background (e.g., 708) over which the first object is displayed.
- the visual prominence of the second object is gradually increased as the second object is moved away from the third display position (and/or towards the first display position).
- a first characteristic is gradually changed from a first value to a second value as the second object is moved away from the third display position (and/or towards the first display position); and, optionally, a second characteristic is gradually changed from a third value to a fourth value as the second object is moved away from the third display position (and/or towards the first display position).
- the second object while the second object is displayed at the third display position, the second object is displayed having a first value for a first characteristic (e.g., size, contrast, color, saturation, and/or opacity); and while the second object is displayed at the first display position, the second object is displayed having a second value for the first characteristic.
- the second object while the second object is displayed at the third display position, the second object is displayed having a third value for a second characteristic (e.g., size, contrast, color, saturation, and/or opacity); and while the second object is displayed at the first display position, the second object is displayed having a fourth value for the second characteristic.
- increasing the visual prominence of the second object includes making the second object larger in size.
- increasing the visual prominence of the second object includes changing one or more colors of the second object.
- increasing the visual prominence of the second object includes increasing the contrast of the second object. In some embodiments, increasing the visual prominence of the second object includes increasing the saturation of the second object. In some embodiments, increasing the visual prominence of the second object includes increasing the opacity (and/or decreasing the transparency) of the second object. Displaying movement of a second object from an edge position towards a center of a display region and increasing a visual prominence of the second object in response to a user request to navigate a set of objects enhances the operability of the system and makes the user-system interface more efficient (e.g., by helping the user to provide proper inputs and reducing errors) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the system more quickly and efficiently. Doing so also provides the user with feedback about a state of the device (e.g., the device has detected the user input corresponding to the user request to navigate the set of objects).
- reducing the visual prominence of the first object further includes feathering (e.g., gradually blending the edges of the first object with content around the first object using changes in opacity and/or blurring) a first edge of the first object that is furthest (e.g., furthest of all the edges of the first object) from the first display position (e.g., Figures 7C-7D, feathering of the left edge of 712f-l) and closest (e.g., closest of all the edges of the first object) to the second display position.
- the first edge of the first object is not feathered (e.g., is not blended with content around the first edge using changes in opacity and/or blurring).
- the first edge of the first object is feathered. Feathering a first edge of the first object in response to a user request to navigate a set of objects enhances the operability of the system and makes the user-system interface more efficient (e.g., by helping the user to provide proper inputs and reducing errors) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the system more quickly and efficiently. Doing so also provides the user with feedback about a state of the device (e.g., the device has detected the user input corresponding to the user request to navigate the set of objects).
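A minimal sketch of such feathering: the opacity of the first object ramps up from the edge that leads its movement over a narrow feather band, so that the edge blends gradually with surrounding content. The band width and names are illustrative assumptions.

```swift
// Opacity applied to a point of the object, given its distance (in normalized object widths)
// from the edge that leads the movement. Points within the feather band fade toward zero.
func featheredOpacity(distanceFromLeadingEdge: Double,
                      featherWidth: Double = 0.08,
                      baseOpacity: Double = 1.0) -> Double {
    let t = min(max(distanceFromLeadingEdge / featherWidth, 0), 1)
    return baseOpacity * t
}
```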
- reducing the visual prominence of the first object includes decreasing an opacity (and increasing a transparency) of the first object (e.g., as the first object is moved from the first display position to the second display position).
- the opacity of the first object is gradually decreased (and/or the transparency of the first object is gradually increased) as the first object moves further from the first display position and/or closer to the second display position.
- the first object is fully opaque such that the background over which the first object is displayed is not visible through the first object.
- the first object while the first object is moving from the first display position towards the second display position, the first object is at least partially transparent such that the background over which the first object is displayed is at least partially visible through the first object.
- Decreasing an opacity and/or increasing a transparency of the first object in response to a user request to navigate a set of objects enhances the operability of the system and makes the user-system interface more efficient (e.g., by helping the user to provide proper inputs and reducing errors) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the system more quickly and efficiently. Doing so also provides the user with feedback about a state of the device (e.g., the device has detected the user input corresponding to the user request to navigate the set of objects).
- the background (e.g., 708) over which the first object is displayed includes a three-dimensional environment (e.g., a virtual three-dimensional environment, an optical passthrough environment, and/or a virtual passthrough environment); while the first object (e.g., 712e- 1) is displayed at the first display position (e.g., Figure 7B), the first object has a first level of opacity relative to the three-dimensional environment; and while the first object (e.g., 712e- 1) is moving from the first display position towards the second display position (e.g., Figures 7C-7E), the first object has a second level of opacity that is lower than the first level of opacity such that the three-dimensional environment (e.g., 708) is more visible through the first object than it was when it was displayed at the first display position.
- Decreasing an opacity and/or increasing a transparency of the first object to reveal a background three-dimensional environment in response to a user request to navigate a set of objects enhances the operability of the system and makes the user-system interface more efficient (e.g., by helping the user to provide proper inputs and reducing errors) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the system more quickly and efficiently. Doing so also provides the user with feedback about a state of the device (e.g., the device has detected the user input corresponding to the user request to navigate the set of objects).
- the three-dimensional environment is a virtual three-dimensional environment (e.g., a virtual passthrough environment, and/or a virtual environment that is representative of a physical environment that surrounds the computer system).
- Decreasing an opacity and/or increasing a transparency of the first object to reveal a background three-dimensional environment in response to a user request to navigate a set of objects enhances the operability of the system and makes the user-system interface more efficient (e.g., by helping the user to provide proper inputs and reducing errors) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the system more quickly and efficiently. Doing so also provides the user with feedback about a state of the device (e.g., the device has detected the user input corresponding to the user request to navigate the set of objects).
- the three-dimensional environment (e.g. ,708) includes a representation of a passthrough environment (e.g., an optical passthrough environment and/or a virtual passthrough environment).
- Decreasing an opacity and/or increasing a transparency of the first object to reveal a background three-dimensional environment in response to a user request to navigate a set of objects enhances the operability of the system and makes the usersystem interface more efficient (e.g., by helping the user to provide proper inputs and reducing errors) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the system more quickly and efficiently. Doing so also provides the user with feedback about a state of the device (e.g., the device has detected the user input corresponding to the user request to navigate the set of objects).
- reducing the visual prominence of the first object further includes reducing a size (e.g., height, width, and/or area) of the first object (e.g., image 712e-l reducing in size from Figures 7B-7E) (e.g., as the first object is moved from the first display position to the second display position).
- the size of the first object is gradually decreased as the first object moves further from the first display position and/or closer to the second display position.
- the first object is displayed at a first size.
- while the first object is moving away from the first display position towards the second display position, the first object is gradually decreased from the first size to a second size that is smaller than the first size. For example, at a first intermediate display position between the first display position and the second display position, the first object is displayed at a first intermediate size that is smaller than the first size and larger than the second size.
- Decreasing the size of the first object in response to a user request to navigate a set of objects enhances the operability of the system and makes the user-system interface more efficient (e.g., by helping the user to provide proper inputs and reducing errors) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the system more quickly and efficiently. Doing so also provides the user with feedback about a state of the device (e.g., the device has detected the user input corresponding to the user request to navigate the set of objects).
- reducing the visual prominence of the first object further includes moving the first object relative to (e.g., away from or towards) the viewpoint of the user (e.g., in a z-direction that extends forwards and backwards from the viewpoint of the user).
- moving the first object away from the viewpoint of the user includes moving the first object away from the viewpoint of the user along a z-direction that extends forwards and backwards from the viewpoint of the user.
- the second display position is further away from the viewpoint of the user in the z-direction than the first display position.
- Moving the first object further away from the viewpoint of the user in response to a user request to navigate a set of objects enhances the operability of the system and makes the user-system interface more efficient (e.g., by helping the user to provide proper inputs and reducing errors) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the system more quickly and efficiently. Doing so also provides the user with feedback about a state of the device (e.g., the device has detected the user input corresponding to the user request to navigate the set of objects).
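- As a further non-limiting sketch under the same assumptions, the size reduction and the movement relative to the viewpoint described above can be driven by the same navigation progress value; the helper below and its parameter values are hypothetical placeholders, not a recitation of any particular embodiment.

```swift
import Foundation

/// Illustrative sketch only: reduced visual prominence of the first object while it
/// moves from the first display position toward the second display position.
struct ObjectProminence {
    var scale: Double    // 1.0 = the first size; values below 1.0 = reduced size
    var zOffset: Double  // meters the object has moved away from the viewpoint
}

func reducedProminence(progress: Double,
                       minimumScale: Double = 0.6,                      // assumed value
                       maximumZOffset: Double = 0.5) -> ObjectProminence {  // assumed value
    let t = min(max(progress, 0.0), 1.0)
    // The size gradually decreases and the object gradually recedes in the
    // z-direction as it gets closer to the second display position.
    return ObjectProminence(scale: 1.0 + (minimumScale - 1.0) * t,
                            zOffset: maximumZOffset * t)
}
```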
- Allowing a user to navigate to a collection user interface with a user input enhances the operability of the system and makes the user-system interface more efficient (e.g., by helping the user to provide proper inputs and reducing errors) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the system more quickly and efficiently.
- the first user input (e.g., 722a-1, 722a-2, and/or 722b) includes: a first input portion that includes movement in a first input direction; a second input portion that includes movement in a second input direction different from (e.g., opposite) the first input direction; and a third input portion that includes movement in the first input direction; and displaying movement of the first object (e.g., 712e-1) from the first display position to the second display position includes: in response to detecting the first input portion that includes movement in the first input direction, displaying movement of the first object (e.g., 712e-1) away from the first display position and toward the second display position (e.g., displaying movement of the first object in a first direction that corresponds to the first input direction) (e.g., moving media item 712e-1 to the left in Figures 7B-7C); in response to detecting the second input portion that includes movement in the second input direction, displaying movement of the first
- movement of the first object follows a direction of movement of the first user input.
- Displaying movement of an object in different directions based on the direction of movement of a user input enhances the operability of the system and makes the user-system interface more efficient (e.g., by helping the user to provide proper inputs and reducing errors) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the system more quickly and efficiently. Doing so also provides the user with feedback about a state of the device (e.g., the device has detected the direction of movement of the user input).
- the computer system displays, via the one or more display generation components, a first set of controls (e.g., 718a-1 and/or 718b-1) (e.g., a first set of controls for interacting with the first object; a first set of controls corresponding to the first object; a close option; an enlarge option; a share option; and/or a media options button); and while displaying movement of the first object from the first display position to the second display position (e.g., Figures 7C-7E), the computer system fades out (e.g., reducing a visual prominence such as an opacity or brightness of and/or ceasing to display) at least a portion of the first set of controls (e.g., ceasing display of the first set of controls, and/or reducing a visual prominence of the first set of controls (e.g.
- Fading out a first set of controls while the first object is moved from the first display position to the second display position enhances the operability of the system and makes the user-system interface more efficient (e.g., by helping the user to provide proper inputs and reducing errors) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the system more quickly and efficiently.
- the computer system displays, via the one or more display generation components, a scrubber (e.g., 716) that includes representations of a first plurality of objects (e.g., a plurality of media items (e.g., photos, images, and/or videos)) (e.g., a first plurality of objects of the plurality of objects, and/or a subset of the plurality of objects) arranged in a first order (e.g., a first sequence and/or a first sequential order), including a representation of the first object (e.g., 712e-2), wherein, while the first object (e.g., 712e-1) is displayed at the first display position (e.g., Figure 7B), the representation of the first object (e.g., 712e-2) is displayed at a first scrubber position along the scrubber
- Displaying movement (e.g., scrolling) of a scrubber concurrently with displaying movement of the first object in response to the first user input enhances the operability of the system and makes the user-system interface more efficient (e.g., by helping the user to provide proper inputs and reducing errors) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the system more quickly and efficiently. Doing so also provides the user with feedback about a state of the device (e.g., the device has detected the first user input, and is moving and/or navigating through a set of objects in response to the user input).
- while displaying the first object (e.g., 712e-1) at the first display position (e.g., Figure 7B), the computer system detects, via the one or more input devices, a scrubber interaction user input (e.g., 722a-2 and 722b) (e.g., a touchscreen input, a gaze input, a gesture input, and/or a mechanical input (e.g., pressing of a physical button and/or rotation of a rotatable input mechanism), and/or an air gesture input) corresponding to a user request to scroll the scrubber (e.g., 716) (e.g., a user request to move representations of objects within the scrubber) (e.g., a user input interacting with the scrubber, and/or a user input (e.g., an air gesture input) that is detected while the user is looking at the scrubber); and in response to detecting the scrubber interaction user input, the computer system navigates through the objects in accordance
- in response to detecting the scrubber interaction user input, the computer system displays, via the one or more display generation components, movement of the first object from the first display position to the second display position while reducing the visual prominence of the first object by modifying an opacity of the first object relative to the background over which the first object is displayed.
- the scrubber includes reduced scale representations of at least some of the plurality of objects, including a reduced scale representation of the first object (and, in some embodiments, a reduced scale representation of a second object different from the first object).
- Allowing a user to navigate through a set of objects by interacting with a scrubber enhances the operability of the system and makes the user-system interface more efficient (e.g., by helping the user to provide proper inputs and reducing errors) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the system more quickly and efficiently.
- a first edge (e.g., a left edge, a right edge, a top edge, and/or a bottom edge) of the scrubber is visually deemphasized (e.g., blurred, feathered and/or faded).
- a first edge of the scrubber and a second edge of the scrubber different from the first edge are visually deemphasized (e.g., blurred, feathered and/or faded).
- Displaying a blurred edge of the scrubber provides the user with an indication that there are additional objects in the plurality of objects that are not represented in the scrubber, which provides the user with visual feedback about a state of the device, and enhances the operability of the system and makes the user-system interface more efficient (e.g., by helping the user to provide proper inputs and reducing errors) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the system more quickly and efficiently.
- in response to detecting the first user input (e.g., 722a-1, 722a-2, and/or 722b), the computer system outputs first non-visual feedback (e.g., 721 and/or 721b) (e.g., audio feedback and/or haptic feedback) in conjunction with moving one or more of the objects.
- Outputting non-visual feedback in response to the first user input enhances the operability of the system and makes the user-system interface more efficient (e.g., by helping the user to provide proper inputs and reducing errors) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the system more quickly and efficiently. Doing so also provides the user with feedback about a state of the device (e.g., the device has detected the user input corresponding to the user request to navigate the set of objects).
- displaying the first object (e.g., 712e-1) at the first display position comprises displaying the first object at the first display position at a first size (e.g., in some embodiments, within a user interface in which a single object (e.g., media item) is in a focused state (e.g., a selected state) and/or is visually emphasized while other objects and/or media items are not in the focused state (e.g., selected state) and/or are visually de-emphasized); the computer system displays the first object (e.g., 712e) concurrently with a second plurality of objects (e.g., 712a-k) (e.g., the plurality of objects and/or a subset of the plurality of objects) within a collection user interface (e.g., 710), wherein the first object has a second size, smaller than the first size, in the collection user interface, including displaying the first object (e
- Allowing a user to navigate a collection user interface with one or more user inputs enhances the operability of the system and makes the user-system interface more efficient (e.g., by helping the user to provide proper inputs and reducing errors) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the system more quickly and efficiently.
- displaying the first object (e.g., 712e-1) at the first display position comprises displaying the first object at the first display position at a first size (e.g., in some embodiments, within a user interface in which a single object (e.g., media item) is in a focused state (e.g., a selected state) and/or is visually emphasized while other objects and/or media items are not in the focused state (e.g., selected state) and/or are visually de-emphasized); the computer system displays the first object concurrently with a second plurality of objects (e.g., the plurality of objects and/or a subset of the plurality of objects) within a collection user interface (e.g., 710), wherein the first object has a second size, smaller than the first size, in the collection user interface, including displaying the first object at a first position within the collection user interface; while displaying the first object at the first position within the collection user
- Allowing a user to navigate a collection user interface with one or more user inputs enhances the operability of the system and makes the user-system interface more efficient (e.g., by helping the user to provide proper inputs and reducing errors) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the system more quickly and efficiently.
- aspects/operations of methods 800 and/or 900 may be interchanged, substituted, and/or added between these methods.
- the first media item recited in method 900 is the first object recited in method 800.
- these details are not repeated here.
- FIG. 9 is a flow diagram of an exemplary method 900 for providing content, in accordance with some embodiments.
- method 900 is performed at a computer system (e.g., 700 and/or X700) (e.g., computer system 101 in Figure 1A) (e.g., a smart phone, a smart watch, a tablet, a laptop, a desktop, a wearable device, and/or head-mounted device) that is in communication with one or more display generation components (e.g., 702 and/or X702) (e.g., display generation component 120 in Figures 1A, 3, and 4) (e.g., a visual output device, a 3D display, a display having at least a portion that is transparent or translucent on which images can be projected (e.g., a see-through display), a projector, a heads-up display, and/or a display controller) and one or more input devices (e.g., 702, 704, and/or 706a-706
- the method 900 is governed by instructions that are stored in a non-transitory (or transitory) computer-readable storage medium and that are executed by one or more processors of a computer system, such as the one or more processors 202 of computer system 101 (e.g., control 110 in Figure 1A).
- Some operations in method 900 are, optionally, combined and/or the order of some operations is, optionally, changed.
- the computer system receives (902), via the one or more input devices, a sequence of one or more inputs (e.g., 744a-744b) (e.g., one or more touch inputs, one or more gaze inputs, one or more gesture inputs, one or more air gesture inputs, and/or one or more hardware inputs) corresponding to a request to change a display state (e.g., a request to display the media item, a request to enlarge the media item, or a request to display the media item in a respective display state in which it is not currently displayed such as an immersive display state) of a first media item (e.g., 712k-l) relative to a three-dimensional environment (e.g., 708).
- the computer system displays (904), via the one or more display generation components, a first media item (e.g., 712k-1 and/or 712k-4) (e.g., a photograph, an image, and/or a video), in a respective display state in the three-dimensional environment (e.g., 708), including: in accordance with a determination that a camera that captured the first media item (e.g., 712k-1 and/or 712k-4) had a first field of view with a first value for a respective size parameter (e.g., a first focal length, a first angle of view, and/or a first viewing angle) when capturing the first media item (e.g., at the time of capturing the first media item and/or while capturing the first media item), the computer system displays (906) the first media item at a first size (e.g., a first height, a first width, a first area, and/or a first
- Automatically selecting a display size for a media item based on a field of view of a camera when capturing the media item allows for this operation to be performed automatically without further user input. Furthermore, doing so also enhances the operability of the system and makes the user-system interface more efficient (e.g., by helping the user to provide proper inputs and reducing errors) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the system more quickly and efficiently. Doing so also provides the user with a more detailed and/or immersive user experience.
- the first field of view with the first value for the respective size parameter is a narrower field of view (e.g., a larger focal length, a smaller angle of view, and/or a smaller viewing angle) than the second field of view with the second value for the respective size parameter (e.g., in Figure 7N, the camera had a narrower field of view than the camera in Figure 7O, and/or in Figure 7P, the camera had a narrower field of view than the camera in Figure 7N or Figure 7O); and the first size is smaller than the second size (e.g., the first size is a first height, the second size is a second height, and the first height is smaller than the second height) (e.g., in Figure 7N, media item 712k-4 is shown having a smaller height than media item 712k-4 in Figure 7O).
- Automatically selecting a display size for a media item based on a field of view of a camera when capturing the media item allows for this operation to be performed automatically without further user input. Furthermore, doing so also enhances the operability of the system and makes the user-system interface more efficient (e.g., by helping the user to provide proper inputs and reducing errors) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the system more quickly and efficiently. Doing so also provides the user with a more detailed and/or immersive user experience.
- the first field of view with the first value for the respective size parameter is a wider field of view (e.g., a shorter focal length, a larger angle of view, and/or a larger viewing angle) than the second field of view with the second value for the respective size parameter; and the first size is larger than the second size (e.g., the first size is a first height, the second size is a second height, and the first height is larger than the second height) (e.g., in Figure 7O, the camera had a wider field of view than in Figures 7N or 7P, and media item 712k-4 is shown having a larger height than media item 712k-4 in Figure 7N or Figure 7P).
- Automatically selecting a display size for a media item based on a field of view of a camera when capturing the media item allows for this operation to be performed automatically without further user input. Furthermore, doing so also enhances the operability of the system and makes the user-system interface more efficient (e.g., by helping the user to provide proper inputs and reducing errors) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the system more quickly and efficiently. Doing so also provides the user with a more detailed and/or immersive user experience.
- displaying the first media item (e.g., 712k-4) at the first size in the three-dimensional environment (e.g., 708) includes displaying the first media item in the respective display state in the three-dimensional environment at a first distance (e.g., a first simulated distance and/or a first virtual distance) (e.g., a first predetermined and/or predefined distance) from a viewpoint of a user of the computer system (e.g., radius R of cylinder 748); and the first size is determined (e.g., calculated) based on the first value for the respective size parameter and the first distance (e.g., the first size is determined using an equation that includes the first value for the respective size and the first distance).
- displaying the first media item at the second size in the three-dimensional environment includes displaying the first media item in the respective display state in the three-dimensional environment at a second distance (e.g., a second simulated distance and/or a second virtual distance) (e.g., a second predetermined and/or predefined distance) (e.g., a second distance that is the same as the first distance or different from the first distance) from a viewpoint of a user of the computer system; and the second size is determined (e.g., calculated) based on the second value for the respective size parameter and the second distance (e.g., the second size is determined using an equation that includes the second value for the respective size parameter and the second distance).
- Automatically selecting a display size for a media item based on a field of view of a camera when capturing the media and a distance from a viewpoint of the user at which the media item is to be displayed allows for this operation to be performed automatically without further user input. Furthermore, doing so also enhances the operability of the system and makes the user-system interface more efficient (e.g., by helping the user to provide proper inputs and reducing errors) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the system more quickly and efficiently. Doing so also provides the user with a more detailed and/or immersive user experience.
- Automatically selecting a display size for a media item based on a field of view of a camera when capturing the media and a distance from a viewpoint of the user at which the media item is to be displayed also makes the media item appear to be true to scale with the subject of the content that is depicted and/or was captured in the media item.
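- One plausible formulation of such a size calculation, offered purely as an illustration (the equation, helper name, and numeric values are assumptions rather than a recitation of the embodiments), treats the capture-time field of view as an angular extent and solves for the linear size that subtends the same angle at the chosen display distance, which is why a wider field of view yields a larger displayed media item.

```swift
import Foundation

/// Illustrative sketch only: linear size (in meters) at which a media item could be
/// displayed `distance` meters from the viewpoint so that it subtends roughly the
/// same angle as the camera's field of view at the time of capture.
func displayedSize(fieldOfViewDegrees fov: Double, distance: Double) -> Double {
    let halfAngle = (fov / 2.0) * .pi / 180.0   // half the field of view, in radians
    // A narrower field of view produces a smaller size; a wider one, a larger size.
    return 2.0 * distance * tan(halfAngle)
}

// Example (assumed numbers): at a 30 m display distance,
// displayedSize(fieldOfViewDegrees: 63, distance: 30) ≈ 36.8 m, whereas
// displayedSize(fieldOfViewDegrees: 40, distance: 30) ≈ 21.8 m.
```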
- displaying the first media item (e.g., 712k-4) in the respective display state in the three-dimensional environment (e.g., 708) comprises displaying the first media item in the respective display state in the three-dimensional environment at a first distance (e.g., radius R of cylinder 748) (e.g., a first simulated distance and/or a first virtual distance) (e.g., a first predetermined and/or predefined distance) from a viewpoint of a user of the computer system (e.g., regardless of the field of view of the camera that captured the first media item and/or without regard for the field of view of the camera that captured the first media item).
- displaying the first media item at the first size in the three-dimensional environment includes displaying the first media item at the first distance from the viewpoint of the user of the computer system; and displaying the first media item at the second size in the three-dimensional environment includes displaying the first media item at the first distance from the viewpoint of the user of the computer system.
- the first distance (e.g., radius R of cylinder 748) is selected to be greater than a threshold distance (e.g., the first distance is selected from a range of distances, but the first distance must be greater than the threshold distance) (e.g., greater than 10m, greater than 25m, or greater than 50m).
- displaying the first media item (e.g., 712k-4) in the respective display state in the three-dimensional environment (e.g., 708) includes displaying the first media item as though it is being projected onto a simulated curved surface (e.g., interior surface of cylinder 748) (e.g., a cylinder and/or a sphere).
- displaying the first media item at the first size in the three-dimensional environment includes projecting the first media item onto a curved surface (e.g., a cylinder and/or a sphere).
- displaying the first media item at the second size in the three-dimensional environment includes projecting the first media item onto a simulated curved surface (e.g., a simulated surface of a cylinder and/or a sphere). Automatically selecting a display size for a media item based on a field of view of a camera when capturing the media, and projecting the media item onto a simulated curved surface, allows for this operation to be performed automatically without further user input.
- the first media item (e.g., 712k-4) curves based on a curvature of the simulated curved surface and a length of the first media item (e.g., media item 712k-4 curves a different amount in Figures 7N-7P than in Figures 7Q-7S based on media item 712k-4 having a shorter width in Figures 7Q-7S).
- displaying the first media item in the respective display state in the three-dimensional environment further includes: in accordance with a determination that the first media item has a first width, displaying the first media item on the simulated curved surface occupying a first arc length of the simulated curved surface (e.g., having a first curved width on the simulated curved surface); and in accordance with a determination that the first media item has a second width smaller than the first width, displaying the first media item on the simulated curved surface occupying a second arc length of the simulated curved surface smaller than the first arc length (e.g., having a second curved width on the simulated curved surface that is smaller than the first curved width).
- Automatically determining a degree of curvature of the first media item based on the width of the first media item allows for this operation to be performed automatically without further user input. Furthermore, doing so also enhances the operability of the system and makes the user-system interface more efficient (e.g., by helping the user to provide proper inputs and reducing errors) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the system more quickly and efficiently. Doing so also provides the user with a more detailed and/or immersive user experience.
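- The curved presentation can be pictured with simple cylinder geometry. The sketch below is illustrative only and assumes a simulated cylinder of radius R centered on the viewpoint (as with cylinder 748); the helper names are hypothetical. Wrapping a flat item onto the cylinder wall preserves its length, so a narrower media item occupies a smaller arc, consistent with the description above.

```swift
import Foundation

/// Illustrative sketch only: the arc a media item occupies when shown as though
/// projected onto the interior of a simulated cylinder of the given radius.
struct CurvedPlacement {
    var subtendedAngle: Double   // radians around the cylinder axis
    var arcLength: Double        // meters measured along the curved surface
}

func curvedPlacement(flatWidth: Double, radius: Double) -> CurvedPlacement {
    // The arc length equals the flat width; the subtended angle is width / radius.
    return CurvedPlacement(subtendedAngle: flatWidth / radius, arcLength: flatWidth)
}

// Example (assumed numbers): a 36.8 m wide panorama on a 30 m radius cylinder
// spans about 1.23 radians (~70°); an 18 m wide item spans about 0.6 radians.
```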
- the first media item (e.g., 712k-4) is a panoramic image (e.g., an image that has an aspect ratio that is greater than a threshold aspect ratio (e.g., greater than 16:9, greater than 2:1, and/or greater than 3:1)).
- the first media item is identified as a panoramic image and/or determined to be a panoramic image by the computer system based on an aspect ratio of the first media item.
- the first media item is displayed in the respective display state in the three-dimensional environment based on a determination that the first media item is a panoramic image.
- media items that are not identified as panoramic images are not displayed and/or are not displayable in the respective display state in the three-dimensional environment.
- the first media item and/or a panoramic image is generated by stitching multiple images together to create an image that is wider than the field of view of the camera that captured the multiple images.
- the first media item and/or a panoramic image is generated by stitching multiple images together that were captured while a camera is moving to create an image that is wider than the field of view of the camera that captured the multiple images. Automatically selecting a display size for a panoramic image based on a field of view of a camera when capturing the panoramic image allows for this operation to be performed automatically without further user input.
- the first media item (e.g., 712k-4) is a landscape photograph.
- the first media item is identified as a landscape photograph and/or determined to be a landscape photograph by the computer system based on subject matter depicted in the first media item and/or based on depth information associated with and/or corresponding to the first media item.
- the first media item is displayed in the respective display state in the three-dimensional environment based on a determination that the first media item is a landscape photograph.
- media items that are not identified as landscape photographs (e.g., by the computer system) are not displayed and/or are not displayable in the respective display state in the three-dimensional environment.
- the respective display state is enabled for certain types of media items, and disabled and/or suppressed for other types of media items.
- the respective display state is disabled and/or suppressed for media items that have an aspect ratio that is smaller than a threshold aspect ratio (e.g., smaller than 3:2, smaller than 16:9, smaller than 2:1, or smaller than 3:1).
- the respective display state is disabled and/or suppressed for media items in which the primary subject was closer to the camera that captured the media item than a threshold distance (e.g., closer than 10m, 50m, or 100m).
- the respective display state is disabled and/or suppressed for media items in which a primary subject and/or primary depicted object occupies greater than a threshold area of the media item (e.g., greater than 20%, greater than 25%, greater than 33%, or greater than 50%).
- the respective display state is disabled and/or suppressed by forgoing display of a selectable object that is selectable to display a media item in the respective display state, and/or disabling selection of the selectable object. Automatically selecting a display size for a landscape photograph based on a field of view of a camera when capturing the landscape photograph allows for this operation to be performed automatically without further user input.
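- Purely as an illustrative sketch of the kind of gating described above (the property names and threshold values below are assumptions chosen for the example, not fixed values of the embodiments), a computer system might decide whether the respective display state is enabled for a given media item roughly as follows.

```swift
import Foundation

/// Illustrative sketch only: metadata a system might consult when deciding whether
/// to offer the respective (e.g., immersive) display state for a media item.
struct MediaItemInfo {
    var aspectRatio: Double                 // width divided by height
    var primarySubjectDistance: Double      // meters from the camera at capture time
    var primarySubjectAreaFraction: Double  // fraction of the frame (0...1) the subject occupies
}

func isRespectiveDisplayStateEnabled(for item: MediaItemInfo,
                                     minimumAspectRatio: Double = 2.0,          // assumed threshold
                                     minimumSubjectDistance: Double = 50.0,     // assumed threshold
                                     maximumSubjectAreaFraction: Double = 0.33) -> Bool {  // assumed threshold
    // Suppress the display state for items that are not wide enough, whose primary
    // subject was too close to the camera, or whose primary subject dominates the frame.
    return item.aspectRatio >= minimumAspectRatio
        && item.primarySubjectDistance >= minimumSubjectDistance
        && item.primarySubjectAreaFraction <= maximumSubjectAreaFraction
}
```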
- the three-dimensional environment (e.g., 708) is part of an extended reality environment.
- Automatically selecting a display size for a media item displayed within an extended reality environment based on a field of view of a camera when capturing the media item allows for this operation to be performed automatically without further user input. Furthermore, doing so also enhances the operability of the system and makes the user-system interface more efficient (e.g., by helping the user to provide proper inputs and reducing errors) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the system more quickly and efficiently. Doing so also provides the user with a more detailed and/or immersive user experience.
- the computer system reduces a visual prominence of the three-dimensional environment in which the first media item is displayed by modifying at least some visual characteristics of the first set of visual characteristics (e.g., 708 in Figures 7K1 and/or 7K2) (e.g., darkening the three-dimensional environment, decreasing a saturation of the three-dimensional environment, decreasing a contrast of the three
- Reducing a visual prominence of the three-dimensional environment in response to receiving the sequence of one or more inputs enhances the operability of the system and makes the user-system interface more efficient (e.g., by helping the user to provide proper inputs and reducing errors) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the system more quickly and efficiently. Doing so also provides the user with a more detailed and/or immersive user experience.
- reducing the visual prominence of the three-dimensional environment comprises: for a first region of the three-dimensional environment that immediately surrounds the first media item (e.g., while the first media item is displayed in the respective display state in the three-dimensional environment), modifying at least some visual characteristics of the first set of visual characteristics by a first amount (e.g., changing one or more values corresponding to the at least some visual characteristics of the first set of visual characteristics by a first amount); and for a second region of the three-dimensional environment that is different from the first region (e.g., the second region is further away from the first media item than the first region is from the first media item and is optionally non-overlapping with the first region and/or that does not immediately surround the first media item (e.g., while the first media item is displayed in the respective display state in the three-dimensional environment)), modifying at least some visual characteristics of the first set of visual characteristics by a second amount (e.g., changing one or more values corresponding to the
- the reduction in visual prominence is stronger in the first region (e.g., closer to the edge of the first media item) than in the second region.
- reducing the visual prominence of the three-dimensional environment comprises reducing a saturation of the three-dimensional environment, and the saturation of the first region is reduced by a greater amount than the saturation of the second region.
- reducing the visual prominence of the three-dimensional environment comprises increasing blurring of the three-dimensional environment, and the first region is blurred more than the second region.
- reducing the visual prominence of the three-dimensional environment comprises darkening the three-dimensional environment, and the first region is darker than the second region.
- Reducing a visual prominence of the three-dimensional environment in response to receiving the sequence of one or more inputs enhances the operability of the system and makes the user-system interface more efficient (e.g., by helping the user to provide proper inputs and reducing errors) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the system more quickly and efficiently. Doing so also provides the user with a more detailed and/or immersive user experience.
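- A minimal sketch of the region-dependent reduction described above (hypothetical names and values; the falloff curve is an assumption) applies a stronger darkening and desaturation to the region immediately surrounding the media item and a weaker reduction to regions farther away.

```swift
import Foundation

/// Illustrative sketch only: per-region reduction applied to the three-dimensional
/// environment while a media item is shown in the respective display state.
struct ProminenceReduction {
    var dimming: Double       // 0 = unchanged, 1 = fully darkened
    var desaturation: Double  // 0 = unchanged, 1 = fully desaturated
}

func environmentReduction(distanceFromMediaEdge d: Double,
                          nearRegionRadius: Double = 0.5,                       // assumed value, meters
                          farRegionRadius: Double = 3.0) -> ProminenceReduction {  // assumed value, meters
    // Interpolate from full strength near the item's edge to reduced strength farther away.
    let clamped = min(max(d, nearRegionRadius), farRegionRadius)
    let t = (clamped - nearRegionRadius) / (farRegionRadius - nearRegionRadius)
    let strength = 1.0 - 0.6 * t   // 1.0 in the first region, 0.4 by the second region
    return ProminenceReduction(dimming: 0.7 * strength, desaturation: 0.8 * strength)
}
```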
- displaying the first media item (e.g., 712k-4) in the respective display state in the three-dimensional environment (e.g., 708) comprises displaying a first visual effect (e.g., a simulated light effect and/or a visual effect that is based on content of the first media item or content near an edge of the first media item) that extends from an edge of the first media item outside of the first media item (e.g., displaying a glow effect or other effect that extends from the edge(s) of media item 712k-4 into three-dimensional environment 708) (e.g., into the three-dimensional environment).
- the first visual effect comprises copying content from an edge region of the first media item (e.g., content displayed at the edges of the first media item) and blurring the copied content (e.g., to create a glow effect).
- Displaying a visual effect at the edge of the first media item in response to receiving the sequence of one or more inputs enhances the operability of the system and makes the user-system interface more efficient (e.g., by helping the user to provide proper inputs and reducing errors) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the system more quickly and efficiently. Doing so also provides the user with a more detailed and/or immersive user experience.
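- The glow-like effect can be sketched, again purely illustratively, by sampling colors from the item's edge region, averaging them, and rendering that color as a soft halo whose opacity falls off with distance from the edge; the helper names and falloff values below are assumptions.

```swift
import Foundation

/// Illustrative sketch only: a simple RGB color with components in the range 0...1.
struct RGB { var r: Double; var g: Double; var b: Double }

/// Average the colors sampled along the edge region of the media item; the result
/// could tint a blurred halo that extends from the edge into the environment.
func haloColor(edgeSamples: [RGB]) -> RGB {
    guard !edgeSamples.isEmpty else { return RGB(r: 0, g: 0, b: 0) }
    let n = Double(edgeSamples.count)
    return RGB(r: edgeSamples.reduce(0.0) { $0 + $1.r } / n,
               g: edgeSamples.reduce(0.0) { $0 + $1.g } / n,
               b: edgeSamples.reduce(0.0) { $0 + $1.b } / n)
}

/// Opacity of the halo at `distance` meters outside the item's edge; the effect
/// fades out completely by `reach` meters (assumed values).
func haloOpacity(distance: Double, reach: Double = 0.3, peak: Double = 0.5) -> Double {
    let t = min(max(distance / reach, 0.0), 1.0)
    return peak * (1.0 - t)
}
```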
- the computer system receives, via one or more input devices, a second sequence of one or more inputs (e.g., 726a-726b and/or 744a-744b) corresponding to a request to change a display state (e.g., a request to display the media item, a request to enlarge the media item, or a request to display the media item in a respective display state in which it is not currently displayed such as an immersive display state) of a respective media item (e.g., 712f-1 and/or 712k-1) relative to the three-dimensional environment (e.g., 708); and in response to receiving the second sequence of one or more inputs: in accordance with a determination that the respective media item is a media item of a first type (e.g., a standard (e.g., non-panoramic) photograph and/or video, a panoramic photograph and/or video, a landscape photograph
- Automatically displaying different types of media items differently in response to the same user input allows for this operation to be performed automatically without further user input. Furthermore, doing so also enhances the operability of the system and makes the user-system interface more efficient (e.g., by helping the user to provide proper inputs and reducing errors) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the system more quickly and efficiently. Doing so also provides the user with a more detailed, interesting, and/or immersive user experience.
- the first type of media item (e.g., 712k-1 and/or 712k-4) is a panoramic media item (e.g., a media item that has an aspect ratio that is greater than a threshold aspect ratio (e.g., greater than 16:9, greater than 2:1, and/or greater than 3:1)); and the second type of media item (e.g., 712f-1 and/or 712f-3) is a non-panoramic media item (e.g., a media item that has an aspect ratio that is less than a threshold aspect ratio (e.g., less than 16:9, less than 2:1, and/or less than 3:1)).
- panoramic media items and non-panoramic media items are displayed differently in response to the second sequence of one or more inputs.
- Automatically displaying different types of media items differently in response to the same user input allows for this operation to be performed automatically without further user input. Furthermore, doing so also enhances the operability of the system and makes the user-system interface more efficient (e.g., by helping the user to provide proper inputs and reducing errors) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the system more quickly and efficiently. Doing so also provides the user with a more detailed, interesting, and/or immersive user experience.
- the first type of media item is a spatial media item (e.g., 712f-1 and/or 712f-3) (e.g., a photograph and/or video that includes a respective type of depth information (e.g., a stereoscopic media item with media captured at the same time from two different cameras (or sets of cameras) that is displayed by displaying an image from a first set of one or more cameras for a first eye of a user and an image from a second set of one or more cameras for a second eye of the user)), and/or a non-spatial photograph and/or video (e.g., a non-stereoscopic media item (e.g., a media item that is captured by only one camera)); and the second type of media item is a panoramic media item (e.g., 712k-1 and/or 712k-4) (e.g., a media item that has an aspect ratio that is greater than a threshold aspect ratio (e.g., greater
- the first type of media item is a spatial media item (e.g., a photograph and/or video that includes a respective type of depth information (e.g., a stereoscopic media item with media captured at the same time from two different cameras (or sets of cameras) that is displayed by displaying an image from a first set of one or more cameras for a first eye of a user and an image from a second set of one or more cameras for a second eye of the user)), and/or a non-spatial photograph and/or video (e.g., a non-stereoscopic media item (e.g., a media item that is captured by only one camera)), and the second type of media item is a non-spatial media item (e.g., a non-stereoscopic media item (e.g., a media item that does not include the respective type of depth information and/or a media
- Automatically displaying different types of media items differently in response to the same user input allows for this operation to be performed automatically without further user input. Furthermore, doing so also enhances the operability of the system and makes the user-system interface more efficient (e.g., by helping the user to provide proper inputs and reducing errors) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the system more quickly and efficiently. Doing so also provides the user with a more detailed, interesting, and/or immersive user experience.
- the computer system displays, via the one or more display generation components, the first media item (e.g., the first media item, a representation of the first media item, and/or a thumbnail of the first media item) (e.g., concurrently with representations of a plurality of other media items) in a collection display state (e.g., within a media collection user interface and/or a media gallery user interface) different from the respective display state (e.g., 712k-1 in Figure 7J and/or 712k in Figure 7A), wherein: in the respective display state (e.g., 712k-4 in Figures 7K1 and/or 7K2), the first media item is displayed at a first distance (e.g., a first simulated distance and/or
- in the collection display state, the first media item is displayed concurrently with other media items (e.g., other media items in a collection of media items). In some embodiments, in the respective display state, the first media item is not displayed with other media items (e.g., is displayed by itself within the three-dimensional environment). Displaying the first media item at different distances from the viewpoint of the user in different display modes enhances the operability of the system and makes the user-system interface more efficient (e.g., by helping the user to provide proper inputs and reducing errors) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the system more quickly and efficiently. Doing so also provides the user with a more detailed and/or immersive user experience.
- the first media item is displayed at a third size (e.g., a third height, a third width, a third area, and/or a third diagonal size) in the three-dimensional environment (e.g.
- the third size is independent of (e.g., not determined based on) a field of view of a camera that captured the first media item (e.g., the third size is determined independently and/or without regard for the field of view of the camera that captured the first media item).
- in the respective display state, the size of the first media item is determined based on a field of view of a camera that captured the first media item at the time of capturing the first media item; and in the collection display state, the size of the first media item is determined independent of and/or without regard for the field of view of the camera that captured the first media item at the time of capturing the first media item.
- Displaying the first media item at different sizes in different display modes enhances the operability of the system and makes the user-system interface more efficient (e.g., by helping the user to provide proper inputs and reducing errors) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the system more quickly and efficiently. Doing so also provides the user with a more detailed and/or immersive user experience.
- while displaying the first media item in the collection display state (e.g., 712k-1 in Figure 7J and/or 712k in Figure 7A), the computer system receives, via the one or more input devices, a first navigation input (e.g., an input such as 722a-1, 722a-2, and/or 722b; and/or 738a-1, X738a-1, 738a-2, X738a-2, 738b, and/or X738b) that includes movement in a first direction (e.g., an air gesture input that includes movement in a first direction (e.g., an air pinch gesture and an air swipe in a first direction)); and in response to receiving the first navigation input, the computer system navigates from the first media item (e.g., similar to Figures 7G-7I, navigating from one media item to another) to a second media item (e.g., displaying movement of the first media item and/or the second media item; and
- Allowing a user to navigate through different types of media items enhances the operability of the system and makes the user-system interface more efficient (e.g., by helping the user to provide proper inputs and reducing errors) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the system more quickly and efficiently. Doing so also provides the user with a more detailed and/or immersive user experience.
- while displaying the first media item in the collection display state (e.g., 712k in Figure 7A), the computer system receives, via the one or more input devices, a first selection input (e.g., similar to selection input 714a, 714b, but selecting media item 712k) (e.g., an air gesture input (e.g., an air pinch gesture)); and in response to receiving the first selection input, the computer system expands the first media item (e.g., displaying the first media item at a larger size and/or displaying expansion of the first media item) (e.g., as media item 712e is shown expanding from Figures 7A-7B).
- Allowing a user to expand a media item with a user input enhances the operability of the system and makes the user-system interface more efficient (e.g., by helping the user to provide proper inputs and reducing errors) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the system more quickly and efficiently. Doing so also provides the user with a more detailed and/or immersive user experience.
- the computer system receives, via the one or more input devices, a third sequence of one or more inputs (e.g., 744a and/or 744b) corresponding to a request to change a display state (e.g., a request to display the media item, a request to enlarge the media item, or a request to display the media item in a respective display state in which it is not currently displayed such as an immersive display state) of a first respective media item (e.g., a photograph, an image, and/or a video) relative to the three-dimensional environment; and in response to receiving the third sequence of one or more inputs, the computer system displays, via the one or more display generation components, the first respective media item in the respective display state (e.g., 712k-4) in the three-dimensional environment (e.g., 708), including displaying the first respective media item (e.g., 712k-4) at a first respective size, wherein: the first respective media item was captured by a third sequence of one or more inputs
- Automatically selecting a display size for a media item based on a field of view of a camera when capturing the media item and based on post-capture cropping of the media item allows for this operation to be performed automatically without further user input. Furthermore, doing so also enhances the operability of the system and makes the user-system interface more efficient (e.g., by helping the user to provide proper inputs and reducing errors) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the system more quickly and efficiently. Doing so also provides the user with a more detailed and/or immersive user experience.
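- To illustrate how post-capture cropping might factor into such a size, one plausible approach (assumed for this sketch, not recited by the embodiments) scales the capture-time field of view by the fraction of the original frame that survives the crop, and then applies the same angular-size calculation sketched earlier.

```swift
import Foundation

/// Illustrative sketch only: effective field of view of a cropped media item,
/// assuming the crop keeps `croppedWidthFraction` (a value in (0, 1]) of the
/// originally captured width under a simple pinhole-camera model.
func effectiveFieldOfView(captureFOVDegrees: Double, croppedWidthFraction: Double) -> Double {
    let halfAngle = (captureFOVDegrees / 2.0) * .pi / 180.0
    // The cropped frame covers a proportionally smaller slice of the scene,
    // so its effective angular extent shrinks accordingly.
    let croppedHalfAngle = atan(croppedWidthFraction * tan(halfAngle))
    return 2.0 * croppedHalfAngle * 180.0 / .pi
}

// Example (assumed numbers): cropping a 63° capture to half its width yields an
// effective field of view of roughly 34°, and hence a smaller displayed size.
```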
- in response to receiving the sequence of one or more inputs (e.g., 744a and/or 744b), the computer system outputs first non-visual feedback (e.g., 746a, X746a, 746b, and/or X746b) (e.g., first audio feedback and/or first haptic feedback) in conjunction with displaying the first media item (e.g., 712k-4) (e.g., a photograph, an image, and/or a video), in the respective display state in the three-dimensional environment (e.g., 708); and while displaying the first media item in the respective display state in the three-dimensional environment, the computer system receives, via the one or more input devices, a subsequent sequence of one or more inputs corresponding to a request to change the display state of the first media item from the respective display state to a second respective display state different from the respective display state (e.g., one or more user inputs from Figures 7K1 and/or 7
- when the computer system transitions into displaying the first media item in the respective display state, the computer system outputs first non-visual feedback; and when the computer system transitions out of displaying the first media item in the respective display state, the computer system outputs second non-visual feedback different from the first non-visual feedback.
- in the respective display state, the first media item is displayed with a size that is determined based on a field of view of the camera at the time the first media item was captured (and/or based on a first respective size parameter of the field of view of the camera at the time the first media item was captured).
- in the second respective display state, the first media item is displayed with a second size that is determined independent of and/or without regard for the field of view of the camera at the time the first media item was captured (and/or without regard for the first respective size parameter of the field of view of the camera at the time the first media item was captured).
- in the respective display state, the first media item is displayed on a simulated curved surface; and in the second respective display state, the first media item is not displayed on a simulated curved surface (e.g., is displayed flat).
- in the respective display state, the first media item is displayed at a simulated distance that is further away from the viewpoint of the user than in the second respective display state.
- Outputting different non-visual feedback when the computer system transitions into or out of the respective display state enhances the operability of the system and makes the user-system interface more efficient (e.g., by helping the user to provide proper inputs and reducing errors) which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the system more quickly and efficiently. Doing so also provides the user with a more detailed and/or immersive user experience. Doing so also provides the user with feedback about a state of the device.
- the computer system displays, via the one or more display generation components, a representation of the first media item (e.g., 712k) in a first media collection user interface (e.g., 710) (e.g., a media collection user interface that includes representations of a plurality of media items) (e.g., a first media collection user interface that is positioned within the three-dimensional environment and/or that is displayed in the three-dimensional environment); while displaying the representation of the first media item in the first media collection user interface, the computer system receives, via the one or more input devices, a first user input (e.g., user input similar to 714a and 714b) (e.g., one or more touch inputs, one or more gaze inputs, one or more gesture inputs, one or more air gesture inputs, and/or one or more hardware inputs) corresponding to a user request to display the first media item in a first expanded view; in response to receiving the first user input: the computer system displays, via the first user input.
- in the respective display state, the first media item is displayed with a size that is determined based on a field of view of the camera at the time the first media item was captured (and/or based on a first respective size parameter of the field of view of the camera at the time the first media item was captured).
- in the media collection user interface and/or in the first expanded view, the first media item is displayed at a size that is determined independent of and/or without regard for the field of view of the camera at the time the first media item was captured (and/or without regard for the first respective size parameter of the field of view of the camera at the time the first media item was captured).
- in the respective display state, the first media item is displayed on a simulated curved surface, and in the media collection user interface and/or in the first expanded view, the first media item is not displayed on a simulated curved surface (e.g., is displayed flat). In some embodiments, in the respective display state, the first media item is displayed at a simulated distance that is further away from the viewpoint of the user than in the media collection user interface and/or in the first expanded view.
- aspects/operations of methods 800 and 900 may be interchanged, substituted, and/or added between these methods.
- the first media item recited in method 900 is the first object recited in method 800.
- these details are not repeated here.
- this gathered data may include personal information data that uniquely identifies or can be used to contact or locate a specific person.
- personal information data can include demographic data, location-based data, telephone numbers, email addresses, twitter IDs, home addresses, data or records relating to a user’s health or level of fitness (e.g., vital signs measurements, medication information, exercise information), date of birth, or any other identifying or personal information.
- the present disclosure recognizes that the use of such personal information data, in the present technology, can be used to the benefit of users.
- the personal information data can be used to improve an XR experience of a user.
- other uses for personal information data that benefit the user are also contemplated by the present disclosure.
- health and fitness data may be used to provide insights into a user’s general wellness, or may be used as positive feedback to individuals using technology to pursue wellness goals.
- Entities should implement and consistently use privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining personal information data private and secure. Such policies should be easily accessible by users, and should be updated as the collection and/or use of data changes. Personal information from users should be collected for legitimate and reasonable uses of the entity and not shared or sold outside of those legitimate uses. Further, such collection/sharing should occur after receiving the informed consent of the users. Additionally, such entities should consider taking any needed steps for safeguarding and securing access to such personal information data and ensuring that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices.
- policies and practices should be adapted for the particular types of personal information data being collected and/or accessed and adapted to applicable laws and standards, including jurisdiction-specific considerations. For instance, in the US, collection of or access to certain health data may be governed by federal and/or state laws, such as the Health Insurance Portability and Accountability Act (HIPAA); whereas health data in other countries may be subject to other regulations and policies and should be handled accordingly. Hence different privacy practices should be maintained for different personal data types in each country.
- the present disclosure also contemplates embodiments in which users selectively block the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware and/or software elements can be provided to prevent or block access to such personal information data.
- the present technology can be configured to allow users to select to “opt in” or “opt out” of participation in the collection of personal information data during registration for services or anytime thereafter.
- users can select not to provide data for customization of services.
- users can select to limit the length of time data is maintained or entirely prohibit the development of a customized service.
- the present disclosure contemplates providing notifications relating to the access or use of personal information. For instance, a user may be notified upon downloading an app that their personal information data will be accessed and then reminded again just before personal information data is accessed by the app.
- it is the intent of the present disclosure that personal information data should be managed and handled in a way to minimize risks of unintentional or unauthorized access or use. Risk can be minimized by limiting the collection of data and deleting data once it is no longer needed. In addition, and when applicable, including in certain health related applications, data de-identification can be used to protect a user’s privacy.
- De-identification may be facilitated, when appropriate, by removing specific identifiers (e.g., date of birth, etc.), controlling the amount or specificity of data stored (e.g., collecting location data at a city level rather than at an address level), controlling how data is stored (e.g., aggregating data across users), and/or other methods.
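A minimal sketch of those de-identification approaches, with hypothetical record fields and helper names (none of which come from this disclosure), might look like the following.

```swift
import Foundation

// Hypothetical per-user record; fields are illustrative only.
struct UserRecord {
    var dateOfBirth: Date?
    var streetAddress: String?
    var city: String
    var mediaItemsViewed: Int
}

// Drop direct identifiers and keep location only at the city level.
func deidentify(_ record: UserRecord) -> UserRecord {
    var copy = record
    copy.dateOfBirth = nil     // remove specific identifiers
    copy.streetAddress = nil   // store location at a city level rather than an address level
    return copy
}

// Aggregate across users so that only per-city totals are retained.
func aggregateByCity(_ records: [UserRecord]) -> [String: Int] {
    records.reduce(into: [:]) { totals, record in
        totals[record.city, default: 0] += record.mediaItemsViewed
    }
}
```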
- while the present disclosure broadly covers use of personal information data to implement one or more various disclosed embodiments, the present disclosure also contemplates that the various embodiments can also be implemented without the need for accessing such personal information data. That is, the various embodiments of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data.
- an XR experience can be generated by inferring preferences based on non-personal information data or a bare minimum amount of personal information, such as the content being requested by the device associated with a user, other non-personal information available to the service, or publicly available information.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Computer Hardware Design (AREA)
- Multimedia (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
Claims
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202480037644.XA CN121285790A (en) | 2023-06-04 | 2024-05-23 | Apparatus, method and graphical user interface for presenting content |
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202363470927P | 2023-06-04 | 2023-06-04 | |
| US63/470,927 | 2023-06-04 | ||
| US18/615,944 US20240402870A1 (en) | 2023-06-04 | 2024-03-25 | Devices, methods, and graphical user interfaces for presenting content |
| US18/615,944 | 2024-03-25 |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| WO2024253867A1 true WO2024253867A1 (en) | 2024-12-12 |
| WO2024253867A4 WO2024253867A4 (en) | 2025-01-23 |
Family
ID=91586048
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2024/030858 Pending WO2024253867A1 (en) | 2023-06-04 | 2024-05-23 | Devices, methods, and graphical user interfaces for presenting content |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2024253867A1 (en) |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20180157381A1 (en) * | 2016-12-02 | 2018-06-07 | Facebook, Inc. | Systems and methods for media item selection within a grid-based content feed |
| US20180286126A1 (en) * | 2017-04-03 | 2018-10-04 | Microsoft Technology Licensing, Llc | Virtual object user interface display |
| US20180364872A1 (en) * | 2016-06-12 | 2018-12-20 | Apple Inc. | User interfaces for retrieving contextually relevant media content |
| CN111314770A (en) * | 2020-03-06 | 2020-06-19 | 网易(杭州)网络有限公司 | In-game list display method and device and terminal equipment |
| US20200304863A1 (en) * | 2019-03-24 | 2020-09-24 | Apple Inc. | User interfaces for a media browsing application |
| WO2022147146A1 (en) * | 2021-01-04 | 2022-07-07 | Apple Inc. | Devices, methods, and graphical user interfaces for interacting with three-dimensional environments |
- 2024-05-23: WO PCT/US2024/030858 patent/WO2024253867A1/en active Pending
Also Published As
| Publication number | Publication date |
|---|---|
| WO2024253867A4 (en) | 2025-01-23 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20240103684A1 (en) | Methods for displaying objects relative to virtual surfaces | |
| US20240361835A1 (en) | Methods for displaying and rearranging objects in an environment | |
| US20250029319A1 (en) | Devices, methods, and graphical user interfaces for sharing content in a communication session | |
| US20240104819A1 (en) | Representations of participants in real-time communication sessions | |
| US20240281108A1 (en) | Methods for displaying a user interface object in a three-dimensional environment | |
| US20250031002A1 (en) | Systems, devices, and methods for audio presentation in a three-dimensional environment | |
| US12524142B2 (en) | Devices, methods, and graphical user interfaces for displaying sets of controls in response to gaze and/or gesture inputs | |
| WO2024253979A1 (en) | Methods for moving objects in a three-dimensional environment | |
| CN120266082A (en) | Method for reducing depth jostling in three-dimensional environments | |
| US20240257486A1 (en) | Techniques for interacting with virtual avatars and/or user representations | |
| US20240402869A1 (en) | Devices, methods, and graphical user interfaces for content collaboration and sharing | |
| US12374069B2 (en) | Devices, methods, and graphical user interfaces for real-time communication | |
| EP4569397A1 (en) | User interfaces for managing sharing of content in three-dimensional environments | |
| US20240402870A1 (en) | Devices, methods, and graphical user interfaces for presenting content | |
| WO2024253867A1 (en) | Devices, methods, and graphical user interfaces for presenting content | |
| US20250110569A1 (en) | Devices, Methods, and Graphical User Interfaces for Processing Inputs to a Three-Dimensional Environment | |
| US20240404217A1 (en) | Techniques for displaying representations of physical items within three-dimensional environments | |
| US20240402871A1 (en) | Devices, methods, and graphical user interfaces for managing the display of an overlay | |
| WO2025072024A1 (en) | Devices, methods, and graphical user interfaces for processing inputs to a three-dimensional environment | |
| WO2024020061A1 (en) | Devices, methods, and graphical user interfaces for providing inputs in three-dimensional environments | |
| WO2024253913A1 (en) | Techniques for displaying representations of physical items within three-dimensional environments | |
| WO2024205852A1 (en) | Sound randomization | |
| WO2024158843A1 (en) | Techniques for interacting with virtual avatars and/or user representations | |
| EP4591562A1 (en) | Representations of participants in real-time communication sessions | |
| WO2024253842A1 (en) | Devices, methods, and graphical user interfaces for real-time communication |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24734648; Country of ref document: EP; Kind code of ref document: A1 |
| | WWE | Wipo information: entry into national phase | Ref document number: 2024734648; Country of ref document: EP |
| | ENP | Entry into the national phase | Ref document number: 2024734648; Country of ref document: EP; Effective date: 20251202 |
| | NENP | Non-entry into the national phase | Ref country code: DE |