
CN118946871A - Device, method and graphical user interface for a three-dimensional user experience session in an extended reality environment

Info

Publication number
CN118946871A
Authority
CN
China
Prior art keywords
user
user experience
experience session
audio
computer system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202380029272.1A
Other languages
Chinese (zh)
Inventor
P·洛克尔
G·I·布彻
D·D·达尔甘
A·E·德多纳托
J·M·德赛罗
C·C·霍伊特
M·斯陶贝尔
H·D·弗维杰
K·E·鲍尔里
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apple Inc
Original Assignee
Apple Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US18/108,852 (US20230306695A1)
Application filed by Apple Inc
Priority to CN202411619537.6A (CN119576128A)
Priority claimed from PCT/US2023/015826 (WO2023183340A1)
Publication of CN118946871A
Legal status: Pending


Abstract

The present disclosure relates to techniques for providing a computer-generated user experience session in an extended reality (XR) environment. In some embodiments, the computer system provides the computer-generated user experience session with particles that move based on breathing characteristics of the user. In some embodiments, the computer system provides the computer-generated user experience session with options selected based on characteristics of the XR environment. In some embodiments, the computer system provides the computer-generated user experience session with a soundscape having randomly selected, curated sound components.

Description

Device, method, and graphical user interface for three-dimensional user experience sessions in an extended reality environment
Cross Reference to Related Applications
The present application claims priority to U.S. Provisional Application No. 63/322,502, entitled "DEVICES, METHODS, AND GRAPHICAL USER INTERFACES FOR THREE-DIMENSIONAL USER EXPERIENCE SESSIONS IN AN EXTENDED REALITY ENVIRONMENT", filed on March 22, 2022, and to U.S. Patent Application No. 18/108,852, entitled "DEVICES, METHODS, AND GRAPHICAL USER INTERFACES FOR THREE-DIMENSIONAL USER EXPERIENCE SESSIONS IN AN EXTENDED REALITY ENVIRONMENT", filed on February 13, 2023, each of which is incorporated herein by reference in its entirety.
Technical Field
The present disclosure relates generally to computer systems in communication with a display generating component, one or more sensors, and optionally one or more audio generating components that provide a computer-generated experience, including, but not limited to, electronic devices that provide virtual reality and mixed reality experiences via a display.
Background
In recent years, the development of computer systems for augmented reality has increased significantly. An example augmented reality environment includes at least some virtual elements that replace or augment the physical world. Input devices (such as cameras, controllers, joysticks, touch-sensitive surfaces, and touch screen displays) for computer systems and other electronic computing devices are used to interact with the virtual/augmented reality environment. Example virtual elements include virtual objects such as digital images, videos, text, icons, and control elements (such as buttons and other graphics).
Disclosure of Invention
Some methods and interfaces for providing computer-generated user experience sessions in an extended reality environment are cumbersome, inefficient, and limited. For example, systems that provide insufficient feedback for actions associated with virtual objects, systems that require a series of inputs to achieve a desired result in an extended reality environment, and systems in which manipulation of virtual objects is complex, tedious, and error-prone create a significant cognitive burden on the user and detract from the experience of the virtual/extended reality environment. In addition, these methods take longer than necessary, thereby wasting energy of the computer system. This latter consideration is particularly important in battery-operated devices.
Accordingly, there is a need for computer systems with improved methods and interfaces for providing a computer-generated user experience session in an extended reality environment, so that user interactions with the computer system during the computer-generated user experience session are more efficient and intuitive. Such methods and interfaces optionally complement or replace conventional methods for providing an extended reality experience to a user. Such methods and interfaces reduce the number, extent, and/or nature of inputs from a user by helping the user understand the connection between the inputs provided and the device's responses to those inputs, thereby creating a more efficient human-machine interface.
The above-described drawbacks and other problems associated with user interfaces of computer systems are reduced or eliminated by the disclosed systems. In some embodiments, the computer system is a desktop computer with an associated display. In some embodiments, the computer system is a portable device (e.g., a notebook computer, tablet computer, or handheld device). In some embodiments, the computer system is a personal electronic device (e.g., a wearable electronic device such as a watch or a head-mounted device). In some embodiments, the computer system has a touchpad. In some embodiments, the computer system has one or more cameras. In some embodiments, the computer system has a touch-sensitive display (also referred to as a "touch screen" or "touch screen display"). In some embodiments, the computer system has one or more eye tracking components. In some embodiments, the computer system has one or more hand tracking components. In some embodiments, the computer system has, in addition to the display generating component, one or more output devices including one or more haptic output generators and/or one or more audio output devices. In some embodiments, the computer system has a Graphical User Interface (GUI), one or more processors, memory, and one or more modules, programs, or sets of instructions stored in the memory for performing multiple functions. In some embodiments, the user interacts with the GUI through stylus and/or finger contacts and gestures on the touch-sensitive surface, movements of the user's eyes and hands in space relative to the GUI (and/or the computer system) or the user's body (as captured by cameras and other motion sensors), and/or voice inputs (as captured by one or more audio input devices). In some embodiments, the functions performed through the interactions optionally include image editing, drawing, presentation, word processing, spreadsheet making, game playing, phone calls, video conferencing, email sending and receiving, instant messaging, workout support, digital photography, digital video recording, web browsing, digital music playing, note taking, and/or digital video playing. Executable instructions for performing these functions are optionally included in a transitory and/or non-transitory computer readable storage medium or other computer program product configured for execution by one or more processors.
There is a need for electronic devices with improved methods and interfaces for providing a computer-generated user experience session in an extended reality environment. Such methods and interfaces may complement or replace conventional methods for interacting with a three-dimensional environment. Such methods and interfaces reduce the number, extent, and/or nature of inputs from a user and produce a more efficient human-machine interface. For battery-operated computing devices, such methods and interfaces conserve power and increase the time between battery charges. Such methods and interfaces also provide a more realistic user experience while saving storage space for the visual and audio components of the user experience session.
According to some embodiments, a method is described. The method is performed at a computer system in communication with a display generation component and one or more sensors. The method comprises the following steps: displaying, via the display generating component, a user interface for a user experience session, comprising: while the user experience session is active: detecting, via the one or more sensors, one or more breathing characteristics of a user of the computer system; and displaying a user interface object as having particles that move based on the one or more breathing characteristics of the user of the computer system, comprising: in accordance with a determination that a first breathing event of the user of the computer system meets a first set of criteria, displaying the particles of the user interface object as moving in a first manner during the first breathing event of the user of the computer system; and in accordance with a determination that the first breathing event of the user of the computer system meets a second set of criteria, displaying the particles of the user interface object as moving in a second manner different from the first manner during the first breathing event of the user of the computer system.
According to some embodiments, a non-transitory computer readable storage medium is described. The non-transitory computer readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system in communication with the display generation component and the one or more sensors, the one or more programs comprising instructions for: displaying, via the display generating component, a user interface for a user experience session, comprising: while the user experience session is active: detecting, via the one or more sensors, one or more breathing characteristics of a user of the computer system; and displaying a user interface object as having particles that move based on the one or more breathing characteristics of the user of the computer system, comprising: in accordance with a determination that a first breathing event of the user of the computer system meets a first set of criteria, displaying the particles of the user interface object as moving in a first manner during the first breathing event of the user of the computer system; and in accordance with a determination that the first breathing event of the user of the computer system meets a second set of criteria, displaying the particles of the user interface object as moving in a second manner different from the first manner during the first breathing event of the user of the computer system.
According to some embodiments, a transitory computer readable storage medium is described. The transitory computer readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system in communication with the display generation component and the one or more sensors, the one or more programs comprising instructions for: displaying, via the display generating component, a user interface for a user experience session, comprising: while the user experience session is active: detecting, via the one or more sensors, one or more breathing characteristics of a user of the computer system; and displaying a user interface object as having particles that move based on the one or more breathing characteristics of the user of the computer system, comprising: in accordance with a determination that a first breathing event of the user of the computer system meets a first set of criteria, displaying the particles of the user interface object as moving in a first manner during the first breathing event of the user of the computer system; and in accordance with a determination that the first breathing event of the user of the computer system meets a second set of criteria, displaying the particles of the user interface object as moving in a second manner different from the first manner during the first breathing event of the user of the computer system.
According to some embodiments, a computer system is described. The computer system is configured to communicate with the display generation component and the one or more sensors. The computer system includes: one or more processors; and a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: displaying, via the display generating component, a user interface for a user experience session, comprising: while the user experience session is active: detecting, via the one or more sensors, one or more breathing characteristics of a user of the computer system; and displaying a user interface object as having particles that move based on the one or more breathing characteristics of the user of the computer system, comprising: in accordance with a determination that a first breathing event of the user of the computer system meets a first set of criteria, displaying the particles of the user interface object as moving in a first manner during the first breathing event of the user of the computer system; and in accordance with a determination that the first breathing event of the user of the computer system meets a second set of criteria, displaying the particles of the user interface object as moving in a second manner different from the first manner during the first breathing event of the user of the computer system.
According to some embodiments, a computer system is described. The computer system is configured to communicate with the display generation component and the one or more sensors. The computer system includes: means for displaying a user interface for a user experience session via the display generating component, the displaying comprising: while the user experience session is active: detecting, via the one or more sensors, one or more breathing characteristics of a user of the computer system; and displaying a user interface object as having particles that move based on the one or more breathing characteristics of the user of the computer system, comprising: in accordance with a determination that a first breathing event of the user of the computer system meets a first set of criteria, displaying the particles of the user interface object as moving in a first manner during the first breathing event of the user of the computer system; and in accordance with a determination that the first breathing event of the user of the computer system meets a second set of criteria, displaying the particles of the user interface object as moving in a second manner different from the first manner during the first breathing event of the user of the computer system.
According to some embodiments, a computer program product is described. The computer program product includes one or more programs configured to be executed by one or more processors of a computer system in communication with the display generation component and the one or more sensors, the one or more programs including instructions for: displaying, via the display generating component, a user interface for a user experience session, comprising: while the user experience session is active: detecting, via the one or more sensors, one or more breathing characteristics of a user of the computer system; and displaying a user interface object as having particles that move based on the one or more breathing characteristics of the user of the computer system, comprising: in accordance with a determination that a first breathing event of the user of the computer system meets a first set of criteria, displaying the particles of the user interface object as moving in a first manner during the first breathing event of the user of the computer system; and in accordance with a determination that the first breathing event of the user of the computer system meets a second set of criteria, displaying the particles of the user interface object as moving in a second manner different from the first manner during the first breathing event of the user of the computer system.
According to some embodiments, a method is described. The method is performed at a computer system in communication with a display generation component and one or more sensors. The method comprises the following steps: while displaying an XR environment having one or more characteristics, detecting, via the one or more sensors, a request to initiate a user experience session in the XR environment; and in response to detecting the request to initiate the user experience session in the XR environment, initiating the user experience session in the XR environment, comprising: displaying, via the display generating component, a user interface for the user experience session, wherein displaying the user interface for the user experience session comprises: in accordance with a determination that the one or more characteristics of the XR environment meet a first set of criteria, displaying the user interface for the user experience session as having a first set of one or more options enabled for the user experience session; and displaying the user interface for the user experience session as having a second set of one or more options enabled for the user experience session, wherein the second set of one or more options is different from the first set of one or more options, based on a determination that the one or more characteristics of the XR environment satisfy a second set of criteria different from the first set of criteria.
According to some embodiments, a non-transitory computer readable storage medium is described. The non-transitory computer readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system in communication with the display generation component and the one or more sensors, the one or more programs comprising instructions for: while displaying an XR environment having one or more characteristics, detecting, via the one or more sensors, a request to initiate a user experience session in the XR environment; and in response to detecting the request to initiate the user experience session in the XR environment, initiating the user experience session in the XR environment, comprising: displaying, via the display generating component, a user interface for the user experience session, wherein displaying the user interface for the user experience session comprises: in accordance with a determination that the one or more characteristics of the XR environment meet a first set of criteria, displaying the user interface for the user experience session as having a first set of one or more options enabled for the user experience session; and displaying the user interface for the user experience session as having a second set of one or more options enabled for the user experience session, wherein the second set of one or more options is different from the first set of one or more options, based on a determination that the one or more characteristics of the XR environment satisfy a second set of criteria different from the first set of criteria.
According to some embodiments, a transitory computer readable storage medium is described. The transitory computer readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system in communication with the display generation component and the one or more sensors, the one or more programs comprising instructions for: while displaying an XR environment having one or more characteristics, detecting, via the one or more sensors, a request to initiate a user experience session in the XR environment; and in response to detecting the request to initiate the user experience session in the XR environment, initiating the user experience session in the XR environment, comprising: displaying, via the display generating component, a user interface for the user experience session, wherein displaying the user interface for the user experience session comprises: in accordance with a determination that the one or more characteristics of the XR environment meet a first set of criteria, displaying the user interface for the user experience session as having a first set of one or more options enabled for the user experience session; and displaying the user interface for the user experience session as having a second set of one or more options enabled for the user experience session, wherein the second set of one or more options is different from the first set of one or more options, based on a determination that the one or more characteristics of the XR environment satisfy a second set of criteria different from the first set of criteria.
According to some embodiments, a computer system is described. The computer system is configured to communicate with the display generation component and the one or more sensors. The computer system includes: one or more processors; and a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: while displaying an XR environment having one or more characteristics, detecting, via the one or more sensors, a request to initiate a user experience session in the XR environment; and in response to detecting the request to initiate the user experience session in the XR environment, initiating the user experience session in the XR environment, comprising: displaying, via the display generating component, a user interface for the user experience session, wherein displaying the user interface for the user experience session comprises: in accordance with a determination that the one or more characteristics of the XR environment meet a first set of criteria, displaying the user interface for the user experience session as having a first set of one or more options enabled for the user experience session; and displaying the user interface for the user experience session as having a second set of one or more options enabled for the user experience session, wherein the second set of one or more options is different from the first set of one or more options, based on a determination that the one or more characteristics of the XR environment satisfy a second set of criteria different from the first set of criteria.
According to some embodiments, a computer system is described. The computer system is configured to communicate with the display generation component and the one or more sensors. The computer system includes: means for detecting, via the one or more sensors, a request to initiate a user experience session in an XR environment having one or more characteristics while the XR environment is displayed; and means for initiating the user experience session in the XR environment in response to detecting the request to initiate the user experience session in the XR environment, comprising: means for displaying a user interface for the user experience session via the display generating component, wherein displaying the user interface for the user experience session comprises: in accordance with a determination that the one or more characteristics of the XR environment meet a first set of criteria, displaying the user interface for the user experience session as having a first set of one or more options enabled for the user experience session; and displaying the user interface for the user experience session as having a second set of one or more options enabled for the user experience session, wherein the second set of one or more options is different from the first set of one or more options, based on a determination that the one or more characteristics of the XR environment satisfy a second set of criteria different from the first set of criteria.
According to some embodiments, a computer program product is described. The computer program product includes one or more programs configured to be executed by one or more processors of a computer system in communication with the display generation component and the one or more sensors, the one or more programs including instructions for: while displaying an XR environment having one or more characteristics, detecting, via the one or more sensors, a request to initiate a user experience session in the XR environment; and in response to detecting the request to initiate the user experience session in the XR environment, initiating the user experience session in the XR environment, comprising: displaying, via the display generating component, a user interface for the user experience session, wherein displaying the user interface for the user experience session comprises: in accordance with a determination that the one or more characteristics of the XR environment meet a first set of criteria, displaying the user interface for the user experience session as having a first set of one or more options enabled for the user experience session; and displaying the user interface for the user experience session as having a second set of one or more options enabled for the user experience session, wherein the second set of one or more options is different from the first set of one or more options, based on a determination that the one or more characteristics of the XR environment satisfy a second set of criteria different from the first set of criteria.
According to some embodiments, a method is described. The method is performed at a computer system in communication with a display generation component, an audio generation component, and one or more sensors. The method comprises the following steps: detecting, via the one or more sensors, a request to initiate a corresponding type of user experience session in the XR environment at a first time; in response to detecting the request to initiate the user experience session in the XR environment, initiating the first user experience session of the respective type in the XR environment, comprising: displaying, via the display generating component, a user interface for the first user experience session; and outputting, via the audio generation component, a first audio soundscape for the first user experience session, wherein the first audio soundscape is output concurrently with displaying the user interface for the first user experience session, and outputting the first audio soundscape comprises: outputting the first audio soundscape having a first set of two or more audio components randomly or pseudo-randomly selected from a set of available audio components; detecting, via the one or more sensors, a request to initiate the respective type of user experience session in an XR environment at a second time different from the first time; and in response to detecting the request to initiate the user experience session in the XR environment, initiating a second user experience session of the respective type in the XR environment, comprising: displaying, via the display generating component, a user interface for the second user experience session; and outputting, via the audio generation component, a second audio soundscape for the second user experience session, wherein the second audio soundscape is output concurrently with displaying the user interface for the second user experience session, and outputting the second audio soundscape comprises: the second audio soundscape is output with a second set of two or more audio components randomly or pseudo-randomly selected from the set of available audio components.
According to some embodiments, a non-transitory computer readable storage medium is described. The non-transitory computer readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system in communication with a display generation component, an audio generation component, and one or more sensors, the one or more programs comprising instructions for: detecting, via the one or more sensors, a request to initiate a corresponding type of user experience session in the XR environment at a first time; in response to detecting the request to initiate the user experience session in the XR environment, initiating the first user experience session of the respective type in the XR environment, comprising: displaying, via the display generating component, a user interface for the first user experience session; and outputting, via the audio generation component, a first audio soundscape for the first user experience session, wherein the first audio soundscape is output concurrently with displaying the user interface for the first user experience session, and outputting the first audio soundscape comprises: outputting the first audio soundscape having a first set of two or more audio components randomly or pseudo-randomly selected from a set of available audio components; detecting, via the one or more sensors, a request to initiate the respective type of user experience session in an XR environment at a second time different from the first time; and in response to detecting the request to initiate the user experience session in the XR environment, initiating a second user experience session of the respective type in the XR environment, comprising: displaying, via the display generating component, a user interface for the second user experience session; and outputting, via the audio generation component, a second audio soundscape for the second user experience session, wherein the second audio soundscape is output concurrently with displaying the user interface for the second user experience session, and outputting the second audio soundscape comprises: the second audio soundscape is output with a second set of two or more audio components randomly or pseudo-randomly selected from the set of available audio components.
According to some embodiments, a transitory computer readable storage medium is described. The transitory computer readable storage medium stores one or more programs configured to be executed by one or more processors of a computer system in communication with a display generation component, an audio generation component, and one or more sensors, the one or more programs comprising instructions for: detecting, via the one or more sensors, a request to initiate a corresponding type of user experience session in the XR environment at a first time; in response to detecting the request to initiate the user experience session in the XR environment, initiating the first user experience session of the respective type in the XR environment, comprising: displaying, via the display generating component, a user interface for the first user experience session; and outputting, via the audio generation component, a first audio soundscape for the first user experience session, wherein the first audio soundscape is output concurrently with displaying the user interface for the first user experience session, and outputting the first audio soundscape comprises: outputting the first audio soundscape having a first set of two or more audio components randomly or pseudo-randomly selected from a set of available audio components; detecting, via the one or more sensors, a request to initiate the respective type of user experience session in an XR environment at a second time different from the first time; and in response to detecting the request to initiate the user experience session in the XR environment, initiating a second user experience session of the respective type in the XR environment, comprising: displaying, via the display generating component, a user interface for the second user experience session; and outputting, via the audio generation component, a second audio soundscape for the second user experience session, wherein the second audio soundscape is output concurrently with displaying the user interface for the second user experience session, and outputting the second audio soundscape comprises: the second audio soundscape is output with a second set of two or more audio components randomly or pseudo-randomly selected from the set of available audio components.
According to some embodiments, a computer system is described. The computer system is configured to communicate with the display generation component, the audio generation component, and the one or more sensors. The computer system includes: one or more processors; and a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: detecting, via the one or more sensors, a request to initiate a corresponding type of user experience session in the XR environment at a first time; in response to detecting the request to initiate the user experience session in the XR environment, initiating the first user experience session of the respective type in the XR environment, comprising: displaying, via the display generating component, a user interface for the first user experience session; and outputting, via the audio generation component, a first audio soundscape for the first user experience session, wherein the first audio soundscape is output concurrently with displaying the user interface for the first user experience session, and outputting the first audio soundscape comprises: outputting the first audio soundscape having a first set of two or more audio components randomly or pseudo-randomly selected from a set of available audio components; detecting, via the one or more sensors, a request to initiate the respective type of user experience session in an XR environment at a second time different from the first time; and in response to detecting the request to initiate the user experience session in the XR environment, initiating a second user experience session of the respective type in the XR environment, comprising: displaying, via the display generating component, a user interface for the second user experience session; and outputting, via the audio generation component, a second audio soundscape for the second user experience session, wherein the second audio soundscape is output concurrently with displaying the user interface for the second user experience session, and outputting the second audio soundscape comprises: the second audio soundscape is output with a second set of two or more audio components randomly or pseudo-randomly selected from the set of available audio components.
According to some embodiments, a computer system is described. The computer system is configured to communicate with the display generation component, the audio generation component, and the one or more sensors. The computer system includes: means for detecting, via the one or more sensors, a request to initiate a corresponding type of user experience session in the XR environment at a first time; in response to detecting the request to initiate the user experience session in the XR environment, means for initiating the first user experience session of the respective type in the XR environment, comprising: means for displaying a user interface for the first user experience session via the display generating means; and means for outputting, via the audio generation component, a first audio soundscape for the first user experience session, wherein the first audio soundscape is output concurrently with displaying the user interface for the first user experience session, and outputting the first audio soundscape comprises: outputting the first audio soundscape having a first set of two or more audio components randomly or pseudo-randomly selected from a set of available audio components; means for detecting, via the one or more sensors, a request to initiate the respective type of user experience session in an XR environment at a second time different from the first time; and means for initiating a second user experience session of the respective type in the XR environment in response to detecting the request to initiate the user experience session in the XR environment, comprising: means for displaying, via the display generating means, a user interface for the second user experience session; and means for outputting, via the audio generation component, a second audio soundscape for the second user experience session, wherein the second audio soundscape is output concurrently with displaying the user interface for the second user experience session, and outputting the second audio soundscape comprises: the second audio soundscape is output with a second set of two or more audio components randomly or pseudo-randomly selected from the set of available audio components.
According to some embodiments, a computer program product is described. The computer program product includes one or more programs configured to be executed by one or more processors of a computer system in communication with a display generation component, an audio generation component, and one or more sensors, the one or more programs including instructions for: detecting, via the one or more sensors, a request to initiate a corresponding type of user experience session in the XR environment at a first time; in response to detecting the request to initiate the user experience session in the XR environment, initiating the first user experience session of the respective type in the XR environment, comprising: displaying, via the display generating component, a user interface for the first user experience session; and outputting, via the audio generation component, a first audio soundscape for the first user experience session, wherein the first audio soundscape is output concurrently with displaying the user interface for the first user experience session, and outputting the first audio soundscape comprises: outputting the first audio soundscape having a first set of two or more audio components randomly or pseudo-randomly selected from a set of available audio components; detecting, via the one or more sensors, a request to initiate the respective type of user experience session in an XR environment at a second time different from the first time; and in response to detecting the request to initiate the user experience session in the XR environment, initiating a second user experience session of the respective type in the XR environment, comprising: displaying, via the display generating component, a user interface for the second user experience session; and outputting, via the audio generation component, a second audio soundscape for the second user experience session, wherein the second audio soundscape is output concurrently with displaying the user interface for the second user experience session, and outputting the second audio soundscape comprises: the second audio soundscape is output with a second set of two or more audio components randomly or pseudo-randomly selected from the set of available audio components.
It is noted that the various embodiments described above may be combined with any of the other embodiments described herein. The features and advantages described in this specification are not all-inclusive and, in particular, many additional features and advantages will be apparent to one of ordinary skill in the art in view of the drawings, specification, and claims. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter.
Drawings
For a better understanding of the various described embodiments, reference should be made to the following detailed description taken in conjunction with the following drawings, in which like reference numerals designate corresponding parts throughout the several views.
FIG. 1 is a block diagram illustrating an operating environment for a computer system for providing an XR experience, according to some embodiments.
FIG. 2 is a block diagram illustrating a controller of a computer system configured to manage and coordinate a user's XR experience, according to some embodiments.
FIG. 3 is a block diagram illustrating a display generation component of a computer system configured to provide a visual component of an XR experience to a user, according to some embodiments.
FIG. 4 is a block diagram illustrating a hand tracking unit of a computer system configured to capture gesture inputs of a user, according to some embodiments.
Fig. 5 is a block diagram illustrating an eye tracking unit of a computer system configured to capture gaze input of a user, according to some embodiments.
Fig. 6 is a flow diagram illustrating a glint-assisted gaze tracking pipeline, in accordance with some embodiments.
Fig. 7A-7L illustrate example techniques for providing a computer-generated user experience session in an extended reality environment, according to some embodiments.
Fig. 8 is a flow diagram of a method of providing a computer-generated user experience session with particles that move based on breathing characteristics of a user, according to various embodiments.
FIG. 9 is a flowchart of a method of providing a computer-generated user experience session with options selected based on characteristics of an XR environment, in accordance with various embodiments.
Figs. 10A-10B are flowcharts of methods of providing a computer-generated user experience session with a soundscape having randomly selected, curated sound components, according to various embodiments.
Detailed Description
According to some embodiments, the present disclosure relates to a user interface for providing an extended reality (XR) experience to a user.
The systems, methods, and GUIs described herein improve user interface interactions with virtual/augmented reality environments in a variety of ways.
In some embodiments, the computer system provides a computer-generated user experience session with particles that move based on breathing characteristics of the user. The computer system displays a user interface for the user experience session. While the user experience session is active, the computer system detects one or more breathing characteristics of the user of the computer system and displays a user interface object as having a plurality of particles that move based on the one or more breathing characteristics of the user. When a first breathing event of the user meets a first set of criteria, the computer system displays the particles of the user interface object as moving in a first manner during the first breathing event of the user. When the first breathing event of the user meets a second set of criteria, the computer system displays the particles of the user interface object as moving in a second manner different from the first manner during the first breathing event of the user.
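Purely as an illustration of this conditional behavior (and not as part of the disclosed method), the following Swift sketch shows one way a per-breathing-event branch could be organized. The BreathingEvent and ParticleMotion types, and the specific criteria of inhale versus exhale, are assumptions introduced only for this example.

```swift
/// Hypothetical summary of a detected breathing event (not defined in the disclosure).
struct BreathingEvent {
    let isInhale: Bool          // true for an inhale, false for an exhale
    let durationSeconds: Double // length of the breathing event
}

/// Hypothetical motion styles standing in for the "first" and "second" manners of movement.
enum ParticleMotion {
    case expandOutward(speed: Double)  // particles drift apart
    case contractInward(speed: Double) // particles drift together
}

/// Chooses how the particles of the user interface object move during a breathing event,
/// depending on which (assumed) set of criteria the event meets.
func particleMotion(for event: BreathingEvent) -> ParticleMotion {
    let speed = 1.0 / max(event.durationSeconds, 0.5) // slower breaths map to gentler motion
    if event.isInhale {
        // First set of criteria met (assumed: the event is an inhale): first manner of movement.
        return .expandOutward(speed: speed)
    } else {
        // Second set of criteria met (assumed: the event is an exhale): second manner of movement.
        return .contractInward(speed: speed)
    }
}
```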
In some embodiments, the computer system provides the computer-generated user experience session with options selected based on characteristics of the XR environment. While the computer system displays an XR environment having one or more characteristics, the computer system detects a request to initiate a user experience session in the XR environment. In response to detecting the request to initiate the user experience session in the XR environment, the computer system initiates the user experience session in the XR environment, including displaying a user interface for the user experience session. When the one or more characteristics of the XR environment satisfy a first set of criteria, the computer system displays the user interface for the user experience session as having a first set of one or more options enabled for the user experience session. When the one or more characteristics of the XR environment satisfy a second set of criteria different from the first set of criteria, the computer system displays the user interface for the user experience session as having a second set of one or more options enabled for the user experience session, wherein the second set of one or more options is different from the first set of one or more options.
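For illustration only, a minimal Swift sketch of this characteristic-dependent option selection is given below; the environment characteristic and the option names are hypothetical and are not drawn from the disclosure.

```swift
/// Hypothetical characteristic of the currently displayed XR environment.
struct XREnvironmentCharacteristics {
    let isFullyImmersive: Bool // assumed: fully virtual surroundings vs. passthrough of the physical room
}

/// Illustrative options that could be enabled for a user experience session.
struct SessionOptions: OptionSet {
    let rawValue: Int
    static let guidedAudio      = SessionOptions(rawValue: 1 << 0)
    static let dimSurroundings  = SessionOptions(rawValue: 1 << 1)
    static let breathingVisuals = SessionOptions(rawValue: 1 << 2)
}

/// Returns the options enabled for the session, depending on which (assumed)
/// set of criteria the environment's characteristics satisfy.
func enabledOptions(for environment: XREnvironmentCharacteristics) -> SessionOptions {
    if environment.isFullyImmersive {
        // First set of criteria met: enable a first set of options.
        return [.guidedAudio, .breathingVisuals]
    } else {
        // Second, different set of criteria met: enable a different set of options.
        return [.guidedAudio, .dimSurroundings, .breathingVisuals]
    }
}
```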
In some embodiments, the computer system provides the computer-generated user experience session with a soundscape having randomly selected, curated sound components. The computer system detects, at a first time, a request to initiate a user experience session of a respective type in an XR environment. In response to detecting the request to initiate the user experience session in the XR environment, the computer system initiates a first user experience session of the respective type in the XR environment. Initiating the first user experience session includes: displaying a user interface for the first user experience session; and outputting a first audio soundscape for the first user experience session. The first audio soundscape is output concurrently with displaying the user interface for the first user experience session. Outputting the first audio soundscape includes outputting the first audio soundscape with a first set of two or more audio components randomly or pseudo-randomly selected from a set of available audio components. The computer system detects, at a second time different from the first time, a request to initiate a user experience session of the respective type in the XR environment. In response to detecting the request to initiate the user experience session in the XR environment, the computer system initiates a second user experience session of the respective type in the XR environment. Initiating the second user experience session includes: displaying a user interface for the second user experience session; and outputting a second audio soundscape for the second user experience session. The second audio soundscape is output concurrently with displaying the user interface for the second user experience session. Outputting the second audio soundscape includes outputting the second audio soundscape with a second set of two or more audio components randomly or pseudo-randomly selected from the set of available audio components.
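A minimal Swift sketch of this random selection, assuming a simple pool of named audio components (the component names and the count of three per session are illustrative, not from the disclosure):

```swift
/// Illustrative pool of available audio components (names are hypothetical).
let availableAudioComponents = [
    "wind", "rain", "birdsong", "chimes", "low drone", "distant waves"
]

/// Randomly selects two or more components for one session's soundscape,
/// so that two sessions of the same type can sound different from one another.
func composeSoundscape(componentCount: Int = 3) -> [String] {
    precondition(componentCount >= 2, "a soundscape uses two or more audio components")
    return Array(availableAudioComponents.shuffled().prefix(componentCount))
}

// Each call yields a (pseudo-)randomly assembled soundscape for a new session.
let firstSessionSoundscape = composeSoundscape()
let secondSessionSoundscape = composeSoundscape()
```

Because the selection is repeated for each new session, two sessions of the same type can receive different combinations drawn from the same stored pool.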
Figs. 1-6 provide a description of an example computer system for providing an XR experience to a user. Figs. 7A-7L illustrate example techniques for providing a computer-generated user experience session in an extended reality environment, according to some embodiments. Fig. 8 is a flow diagram of a method of providing a computer-generated user experience session with particles that move based on breathing characteristics of a user, according to various embodiments. Fig. 9 is a flowchart of a method of providing a computer-generated user experience session with options selected based on characteristics of an XR environment, according to various embodiments. Figs. 10A-10B are flowcharts of methods of providing a computer-generated user experience session with a soundscape having randomly selected, curated sound components, according to various embodiments. The user interfaces in Figs. 7A-7L are used to illustrate the processes in Figs. 8, 9, 10A, and 10B.
The processes described below enhance the operability of a device and make the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and reducing user errors when operating/interacting with the device) through various techniques, including by providing improved visual feedback to the user, reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, performing an operation when a set of conditions has been met without requiring further user input, improving privacy and/or security, providing a richer, more detailed, and/or more realistic user experience while saving storage space, and/or additional techniques. These techniques also reduce power usage and extend the battery life of the device by enabling the user to use the device more quickly and efficiently. Saving on battery power, and thus weight, improves the ergonomics of the device. These techniques also enable real-time communication, allow the use of fewer and/or less precise sensors, resulting in a more compact, lighter, and cheaper device, and enable the device to be used in a variety of lighting conditions. These techniques reduce energy usage and, thus, the heat emitted by the device, which is particularly important for wearable devices, where the device can become uncomfortable for the user to wear if it generates too much heat, even when operating entirely within the operating parameters of the device components.
The processes described below enhance the operability of the device and make the user-device interface more efficient by providing a richer, more detailed, and/or more realistic user experience while conserving storage space. For example, these techniques allow devices (e.g., computer systems, tablet devices, and/or HMDs) to provide a user experience while conserving space by reducing the number of visual and/or audio components that need to be stored to generate a user experience session. For example, in some embodiments, the device stores a superset of audio characteristics from which the device can select various combinations of audio characteristics (randomly or pseudo-randomly) to create a corresponding soundscape for the user experience session. This technique saves space because the device is able to generate many different soundscapes from a subset of the audio characteristics without having to store a fully assembled soundscape for the user experience session. Similarly, the device stores a superset of visual characteristics from which the device can select various combinations of visual characteristics (randomly or pseudo-randomly) to create the visual components and visual effects of the user experience session. This technique saves space because the device is able to generate many different views from a subset of the visual characteristics without having to store the fully rendered visual components of the user experience session. Additional examples illustrating these techniques are described below with reference to the accompanying drawings.
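As a rough sketch of the combinatorics behind this saving, assuming components are selected without repetition, the number of distinct soundscapes grows combinatorially with the number of stored components while the stored assets stay fixed; the specific numbers used below are hypothetical.

```swift
/// Number of distinct k-component combinations that can be drawn from n stored components.
func combinationCount(n: Int, k: Int) -> Int {
    guard k >= 0, k <= n else { return 0 }
    var result = 1
    for i in 0..<k {
        result = result * (n - i) / (i + 1) // exact at every step for binomial coefficients
    }
    return result
}

// With, say, 12 stored audio components and 3 selected per session, the device could
// produce 220 distinct soundscapes while storing only 12 assets.
let distinctSoundscapes = combinationCount(n: 12, k: 3) // == 220
```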
Furthermore, in methods described herein in which one or more steps are contingent upon one or more conditions having been met, it should be understood that the method can be repeated in multiple repetitions so that, over the course of the repetitions, all of the conditions upon which steps in the method are contingent have been met in different repetitions of the method. For example, if a method requires performing a first step if a condition is satisfied, and a second step if the condition is not satisfied, a person of ordinary skill will appreciate that the stated steps are repeated until the condition has been both satisfied and not satisfied, in no particular order. Thus, a method described with one or more steps that are contingent upon one or more conditions having been met could be rewritten as a method that is repeated until each of the conditions described in the method has been met. This, however, is not required of a system or computer-readable medium claim in which the system or computer-readable medium contains instructions for performing the contingent operations based on the satisfaction of the corresponding one or more conditions, and thus the system or computer-readable medium is capable of determining whether the contingency has or has not been satisfied without explicitly repeating the steps of the method until all of the conditions upon which steps in the method are contingent have been met. A person having ordinary skill in the art will also understand that, similar to a method with contingent steps, a system or computer-readable storage medium can repeat the steps of the method as many times as needed to ensure that all of the contingent steps have been performed.
In some embodiments, as shown in Fig. 1, an XR experience is provided to a user via an operating environment 100 comprising a computer system 101. Computer system 101 includes a controller 110 (e.g., one or more processors of a portable electronic device or a remote server), a display generation component 120 (e.g., a Head-Mounted Device (HMD), a display, a projector, a touch screen, etc.), one or more input devices 125 (e.g., eye tracking device 130, hand tracking device 140, other input devices 150), one or more output devices 155 (e.g., speaker 160, haptic output generator 170, and other output devices 180), one or more sensors 190 (e.g., image sensor, light sensor, depth sensor, haptic sensor, orientation sensor, proximity sensor, temperature sensor, position sensor, motion sensor, velocity sensor, etc.), and optionally one or more peripheral devices 195 (e.g., home appliances, wearable devices, etc.). In some embodiments, one or more of the input devices 125, output devices 155, sensors 190, and peripheral devices 195 are integrated with the display generation component 120 (e.g., in a head-mounted device or a handheld device).
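Purely as an organizational aid, the sketch below models the components of operating environment 100 as a simple Swift value type; the field names mirror the reference numerals of Fig. 1, but the structure and the string values are assumptions made for illustration only.

```swift
/// Simplified, hypothetical model of the components that make up operating environment 100.
struct OperatingEnvironment {
    var controller: String                  // controller 110
    var displayGenerationComponent: String  // display generation component 120
    var inputDevices: [String]              // input devices 125
    var outputDevices: [String]             // output devices 155
    var sensors: [String]                   // sensors 190
    var peripherals: [String]               // optional peripheral devices 195
}

let environment100 = OperatingEnvironment(
    controller: "controller 110 (processor of a portable device or a remote server)",
    displayGenerationComponent: "display generation component 120 (e.g., an HMD)",
    inputDevices: ["eye tracking device 130", "hand tracking device 140", "other input devices 150"],
    outputDevices: ["speaker 160", "haptic output generator 170", "other output devices 180"],
    sensors: ["image sensor", "depth sensor", "orientation sensor", "motion sensor"],
    peripherals: ["home appliance", "wearable device"]
)
```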
In describing an XR experience, various terms are used to refer differently to several related but different environments that a user may sense and/or interact with (e.g., interact with inputs detected by computer system 101 that generated the XR experience, such inputs causing the computer system that generated the XR experience to generate audio, visual, and/or tactile feedback corresponding to various inputs provided to computer system 101). The following are a subset of these terms:
Physical environment: a physical environment refers to a physical world in which people can sense and/or interact without the assistance of an electronic system. Physical environments such as physical parks include physical objects such as physical trees, physical buildings, and physical people. People can directly sense and/or interact with a physical environment, such as by visual, tactile, auditory, gustatory, and olfactory.
Extended reality: In contrast, an extended reality (XR) environment refers to a wholly or partially simulated environment that people sense and/or interact with via an electronic system. In XR, a subset of a person's physical motions, or representations thereof, are tracked, and in response, one or more characteristics of one or more virtual objects simulated in the XR environment are adjusted in a manner consistent with at least one law of physics. For example, an XR system may detect a person's head turning and, in response, adjust the graphical content and sound field presented to the person in a manner similar to how such views and sounds would change in a physical environment. In some situations (e.g., for accessibility reasons), adjustments to characteristics of virtual objects in the XR environment may be made in response to representations of physical motions (e.g., vocal commands). A person may sense and/or interact with an XR object using any of their senses, including sight, hearing, touch, taste, and smell. For example, a person may sense and/or interact with audio objects that create a 3D or spatial audio environment that provides the perception of point audio sources in 3D space. In another example, audio objects may enable audio transparency that selectively incorporates ambient sounds from the physical environment with or without computer-generated audio. In some XR environments, a person may sense and/or interact only with audio objects.
Examples of XR include virtual reality and mixed reality.
Virtual reality: a Virtual Reality (VR) environment refers to a simulated environment designed to be based entirely on computer-generated sensory input for one or more senses. The VR environment includes a plurality of virtual objects that a person can sense and/or interact with. For example, computer-generated images of trees, buildings, and avatars representing people are examples of virtual objects. A person may sense and/or interact with virtual objects in a VR environment through a simulation of the presence of the person within the computer-generated environment and/or through a simulation of a subset of the physical movements of the person within the computer-generated environment.
Mixed reality: in contrast to VR environments designed to be based entirely on computer-generated sensory input, a Mixed Reality (MR) environment refers to a simulated environment designed to introduce sensory input from a physical environment or a representation thereof in addition to including computer-generated sensory input (e.g., virtual objects). On a virtual continuum, a mixed reality environment is any condition between, but not including, a full physical environment as one end and a virtual reality environment as the other end. In some MR environments, the computer-generated sensory input may be responsive to changes in sensory input from the physical environment. In addition, some electronic systems for rendering MR environments may track the position and/or orientation relative to the physical environment to enable virtual objects to interact with real objects (i.e., physical objects or representations thereof from the physical environment). For example, the system may cause the motion such that the virtual tree appears to be stationary relative to the physical ground.
Examples of mixed reality include augmented reality and augmented virtuality.
Augmented reality: an Augmented Reality (AR) environment refers to a simulated environment in which one or more virtual objects are superimposed over a physical environment or a representation of a physical environment. For example, an electronic system for presenting an AR environment may have a transparent or translucent display through which a person may directly view the physical environment. The system may be configured to present the virtual object on a transparent or semi-transparent display such that a person perceives the virtual object superimposed over the physical environment with the system. Alternatively, the system may have an opaque display and one or more imaging sensors that capture images or videos of the physical environment, which are representations of the physical environment. The system combines the image or video with the virtual object and presents the composition on an opaque display. A person utilizes the system to indirectly view the physical environment via an image or video of the physical environment and perceive a virtual object superimposed over the physical environment. As used herein, video of a physical environment displayed on an opaque display is referred to as "pass-through video," meaning that the system captures images of the physical environment using one or more image sensors and uses those images when rendering an AR environment on the opaque display. Further alternatively, the system may have a projection system that projects the virtual object into the physical environment, for example as a hologram or on a physical surface, such that a person perceives the virtual object superimposed on top of the physical environment with the system. An augmented reality environment also refers to a simulated environment in which a representation of a physical environment is transformed by computer-generated sensory information. For example, in providing a passthrough video, the system may transform one or more sensor images to apply a selected viewing angle (e.g., a viewpoint) that is different from the viewing angle captured by the imaging sensor. As another example, the representation of the physical environment may be transformed by graphically modifying (e.g., magnifying) portions thereof such that the modified portions may be representative but not real versions of the original captured image. For another example, the representation of the physical environment may be transformed by graphically eliminating or blurring portions thereof.
Enhanced virtualization: enhanced virtual (AV) environment refers to a simulated environment in which a virtual environment or computer-generated environment incorporates one or more sensory inputs from a physical environment. The sensory input may be a representation of one or more characteristics of the physical environment. For example, an AV park may have virtual trees and virtual buildings, but the face of a person is realistically reproduced from an image taken of a physical person. As another example, the virtual object may take the shape or color of a physical object imaged by one or more imaging sensors. For another example, the virtual object may employ shadows that conform to the positioning of the sun in the physical environment.
Viewpoint-locked virtual object: When the computer system displays the virtual object at the same location and/or position in the user's viewpoint, the virtual object is viewpoint-locked even if the user's viewpoint shifts (e.g., changes). In embodiments in which the computer system is a head-mounted device, the user's point of view is locked to the forward direction of the user's head (e.g., when the user looks straight ahead, the user's point of view is at least a portion of the user's field of view); thus, the user's point of view remains fixed without moving the user's head, even when the user's gaze shifts. In embodiments in which the computer system has a display generating component (e.g., a display screen) that is repositionable relative to the user's head, the user's point of view is the augmented reality view presented to the user on the display generating component of the computer system. For example, a viewpoint-locked virtual object that is displayed in the upper left corner of the user's viewpoint when the user's viewpoint is in a first orientation (e.g., the user's head faces north) continues to be displayed in the upper left corner of the user's viewpoint even when the user's viewpoint changes to a second orientation (e.g., the user's head faces west). In other words, the position and/or orientation at which the viewpoint-locked virtual object is displayed in the user's viewpoint is independent of the position and/or orientation of the user in the physical environment. In embodiments in which the computer system is a head-mounted device, the user's point of view is locked to the orientation of the user's head, such that the virtual object is also referred to as a "head-locked virtual object."
Environment-locked visual object: when a computer system displays a virtual object at a location and/or position in a user's point of view, the virtual object is environment-locked (alternatively, "world-locked"), the location and/or position being based on (e.g., selected and/or anchored to) a location and/or object in a three-dimensional environment (e.g., a physical environment or virtual environment) with reference to the location and/or object. As the user's point of view moves, the position and/or object in the environment relative to the user's point of view changes, which results in the environment-locked virtual object being displayed at a different position and/or location in the user's point of view. For example, an environmentally locked virtual object that locks onto a tree immediately in front of the user is displayed at the center of the user's viewpoint. When the user's viewpoint is shifted to the right (e.g., the user's head is turned to the right) such that the tree is now to the left of center in the user's viewpoint (e.g., the tree positioning in the user's viewpoint is shifted), the environmentally locked virtual object that is locked onto the tree is displayed to the left of center in the user's viewpoint. In other words, the position and/or orientation at which the environment-locked virtual object is displayed in the user's viewpoint depends on the position and/or orientation of the object in the environment to which the virtual object is locked. In some embodiments, the computer system uses a stationary frame of reference (e.g., a coordinate system anchored to a fixed location and/or object in the physical environment) in order to determine the location of the virtual object that displays the environmental lock in the viewpoint of the user. The environment-locked virtual object may be locked to a stationary portion of the environment (e.g., a floor, wall, table, or other stationary object), or may be locked to a movable portion of the environment (e.g., a representation of a vehicle, animal, person, or even a portion of a user's body such as a user's hand, wrist, arm, or foot that moves independent of the user's point of view) such that the virtual object moves as the point of view or the portion of the environment moves to maintain a fixed relationship between the virtual object and the portion of the environment.
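As a non-limiting illustration of the two anchoring behaviors described above, the following Swift sketch computes where an object would be placed in the viewer's frame for each anchoring mode. The pose representation (a position plus a single yaw angle), the type names, and the coordinate conventions are assumptions introduced here for illustration, not part of the disclosed system.

```swift
import Foundation

// Illustrative pose: a viewer position in world space plus a single yaw angle.
struct Viewpoint {
    var position: SIMD3<Double>   // viewer position in world coordinates
    var yaw: Double               // heading about the vertical axis, in radians
}

enum Anchor {
    case viewpointLocked(offsetInView: SIMD3<Double>)     // fixed offset in the viewer's frame
    case environmentLocked(worldPosition: SIMD3<Double>)  // fixed point in the world
}

// Returns where the object should appear, expressed in the viewer's frame.
func positionInView(of anchor: Anchor, from viewpoint: Viewpoint) -> SIMD3<Double> {
    switch anchor {
    case .viewpointLocked(let offset):
        // Independent of where the viewer is or which way the viewer faces.
        return offset
    case .environmentLocked(let world):
        // Translate into the viewer's origin, then rotate by -yaw about the vertical
        // axis, so the apparent position shifts as the viewpoint moves or turns.
        let d = world - viewpoint.position
        let c = cos(-viewpoint.yaw)
        let s = sin(-viewpoint.yaw)
        return SIMD3(c * d.x - s * d.z, d.y, s * d.x + c * d.z)
    }
}
```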
In some implementations, an environment-locked or viewpoint-locked virtual object exhibits an inert follow-up behavior, which reduces or delays movement of the virtual object relative to movement of the reference point that the virtual object follows. In some embodiments, when exhibiting inert follow-up behavior, the computer system intentionally delays movement of the virtual object when it detects movement of the reference point that the virtual object is following (e.g., a portion of the environment, the viewpoint, or a point fixed relative to the viewpoint, such as a point between 5 cm and 300 cm from the viewpoint). For example, when the reference point (e.g., the portion of the environment or the viewpoint) moves at a first speed, the virtual object is moved by the device so as to remain locked to the reference point, but moves at a second speed that is slower than the first speed (e.g., until the reference point stops moving or slows down, at which point the virtual object begins to catch up with the reference point). In some embodiments, when the virtual object exhibits inert follow-up behavior, the device ignores small movements of the reference point (e.g., movements below a threshold amount, such as movements of 0 to 5 degrees or movements of 0 to 50 cm). For example, when the reference point (e.g., the portion of the environment or the viewpoint to which the virtual object is locked) moves a first amount, the distance between the reference point and the virtual object increases (e.g., because the virtual object is being displayed so as to maintain a fixed or substantially fixed position relative to a viewpoint or portion of the environment that is different from the reference point to which the virtual object is locked). When the reference point moves a second amount that is greater than the first amount, the distance between the reference point and the virtual object initially increases and then decreases as the amount of movement of the reference point rises above a threshold (e.g., an "inert follow-up" threshold), because the virtual object is moved by the computer system to maintain a fixed or substantially fixed position relative to the reference point. In some embodiments, maintaining a substantially fixed position of the virtual object relative to the reference point includes displaying the virtual object within a threshold distance (e.g., 1 cm, 2 cm, 3 cm, 5 cm, 15 cm, 20 cm, or 50 cm) of the reference point in one or more dimensions (e.g., up/down, left/right, and/or forward/backward relative to the position of the reference point).
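A minimal sketch of one way such inert follow-up behavior could be approximated per display frame is given below. The dead-zone and catch-up values are illustrative placeholders, and the sketch reacts to the accumulated offset between object and reference point rather than tracking per-frame reference motion, which is a simplification of the behavior described above.

```swift
import Foundation

// One update step of a hypothetical inert follow-up ("lazy follow") behavior.
// `deadZone` and `catchUpFactor` are illustrative tuning values only.
func lazyFollowStep(objectPosition: SIMD3<Double>,
                    referencePoint: SIMD3<Double>,
                    deadZone: Double = 0.05,       // metres of offset to ignore
                    catchUpFactor: Double = 0.2) -> SIMD3<Double> {
    let offset = referencePoint - objectPosition
    let distance = (offset * offset).sum().squareRoot()
    // Small offsets are ignored: the object stays put, so the distance to the
    // reference point is allowed to grow while the reference point moves a little.
    guard distance > deadZone else { return objectPosition }
    // Larger offsets are followed at a reduced rate, so the object lags behind the
    // reference point and then catches up once the reference point slows or stops.
    return objectPosition + offset * catchUpFactor
}
```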
In some implementations, the spatial media includes spatial visual media and/or spatial audio. In some implementations, the spatial capture is a capture of spatial media. In some implementations, a spatially visual medium (also referred to as stereoscopic medium) (e.g., a spatial image and/or spatial video) is a medium that includes two different images or sets of images representing two perspectives with the same or overlapping fields of view for simultaneous display. A first image representing a first viewing angle is presented to a first eye of a viewer and a second image representing a second viewing angle different from the first viewing angle is concurrently presented to a second eye of the viewer. The first image and the second image have the same or overlapping fields of view. In some embodiments, the computer system displays the first image via a first display positioned for viewing by a first eye of the viewer and concurrently displays the second image via a second display, different from the first display, positioned for viewing by a second eye of the viewer. In some implementations, when viewed together, the first image and the second image create a depth effect and provide a viewer with a perception of depth for the content of the images. In some embodiments, a first video representing a first viewing angle is presented to a first eye of a viewer and a second video representing a second viewing angle different from the first viewing angle is presented concurrently to a second eye of the viewer. The first video and the second video have the same or overlapping fields of view. In some implementations, when viewed together, the first video and the second video create a depth effect and provide a viewer with a perception of depth for the content of the video. In some implementations, a spatial audio experience in the headphones is created by manipulating the sound in the two audio channels (e.g., left and right) of the headphones such that they resemble directional sound reaching the ear canal. For example, headphones may reproduce a spatial audio signal that simulates a sound scene around a listener (also referred to as a user). An effective spatial sound reproduction may present sound such that a listener perceives the sound as a location within the sound scene from outside the listener's head, just as the listener would experience sound if it were encountered in the real world.
The geometry of the listener's ear, and in particular the outer ear (pinna), has a significant effect on the sound reaching the listener's eardrum from the sound source. By taking into account the influence of the listener's auricle, the listener's head and the listener's torso on the sound entering the listener's ear canal, a spatial audio sound experience is possible. The geometry of the user's ear is optionally determined using a three-dimensional scanning device that generates a three-dimensional model of at least a portion of the visible portion of the user's ear. The geometry is optionally used to generate filters for producing a spatial audio experience. In some implementations, the spatial audio is audio that has been filtered such that a listener of the audio perceives the audio as coming from one or more directions and/or locations in three-dimensional space (e.g., from above, below, and/or in front of the listener).
An example of such a filter is a Head Related Transfer Function (HRTF) filter. These filters are used to provide effects similar to how human ears, head and torso filter sound. When the geometry of the listener's ears is known, a personalized filter (e.g., a personalized HRTF filter) may be created so that the sound experienced by the listener through headphones (e.g., in-ear headphones, on-ear headphones, and/or over-ear headphones) is more realistic. In some implementations, two filters, one for each ear, are produced such that each ear of the listener has a corresponding personalized filter (e.g., a personalized HRTF filter), because the listener's ears may have different geometries.
In some implementations, the HRTF filter includes some (or all) of the acoustic information needed to describe how the sound is reflected or diffracted around the listener's head before entering the listener's auditory system. In some implementations, the personalized HRTF filters may be selected from a database of previously determined HRTFs for users having similar anatomical characteristics. In some implementations, the personalized HRTF filters may be generated by digital modeling based on the geometry of the listener's ears. The one or more processors of the computer system optionally apply a personalized HRTF filter for the listener to the audio input signal to generate a spatial input signal for playback by headphones connected (e.g., wirelessly or wired) to the computer system.
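As a non-limiting sketch, applying a personalized HRTF filter can be approximated in the time domain by convolving a mono source with a per-ear head-related impulse response (HRIR), the time-domain counterpart of the HRTF. The direct-form convolution below is for clarity only (real-time systems typically use frequency-domain or partitioned convolution), and the type names are assumptions of the sketch.

```swift
import Foundation

// Direct-form convolution of a mono signal with one ear's head-related impulse
// response (HRIR), the time-domain counterpart of an HRTF filter.
func convolve(_ signal: [Double], with impulseResponse: [Double]) -> [Double] {
    guard !signal.isEmpty, !impulseResponse.isEmpty else { return [] }
    var output = [Double](repeating: 0, count: signal.count + impulseResponse.count - 1)
    for (i, x) in signal.enumerated() {
        for (j, h) in impulseResponse.enumerated() {
            output[i + j] += x * h
        }
    }
    return output
}

// A personalized pair of filters, one per ear, applied to the same mono source.
struct PersonalizedHRTF {
    var leftHRIR: [Double]
    var rightHRIR: [Double]

    func spatialize(_ monoSource: [Double]) -> (left: [Double], right: [Double]) {
        (convolve(monoSource, with: leftHRIR),
         convolve(monoSource, with: rightHRIR))
    }
}
```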
Hardware: there are many different types of electronic systems that enable a person to sense and/or interact with various XR environments. Examples include head-mounted systems, projection-based systems, head-up displays (HUDs), vehicle windshields integrated with display capabilities, windows integrated with display capabilities, displays formed as lenses designed for placement on a person's eyes (e.g., similar to contact lenses), headphones/earphones, speaker arrays, input systems (e.g., wearable or handheld controllers with or without haptic feedback), smart phones, tablet devices, and desktop/laptop computers. The head-mounted system may include speakers and/or other audio output devices integrated into the head-mounted system for providing audio output. The head-mounted system may have one or more speakers and an integrated opaque display. Alternatively, the head-mounted system may be configured to accept an external opaque display (e.g., a smart phone). The head-mounted system may incorporate one or more imaging sensors for capturing images or video of the physical environment and/or one or more microphones for capturing audio of the physical environment. The head-mounted system may have a transparent or translucent display instead of an opaque display. The transparent or translucent display may have a medium through which light representing an image is directed to the eyes of a person. The display may utilize digital light projection, OLED, LED, uLED, liquid crystal on silicon, laser scanning light sources, or any combination of these techniques. The medium may be an optical waveguide, a holographic medium, an optical combiner, an optical reflector, or any combination thereof. In one embodiment, the transparent or translucent display may be configured to selectively become opaque. Projection-based systems may employ retinal projection techniques that project a graphical image onto a person's retina. The projection system may also be configured to project the virtual object into the physical environment, for example as a hologram or on a physical surface. In some embodiments, the controller 110 is configured to manage and coordinate the XR experience of the user. In some embodiments, controller 110 includes suitable combinations of software, firmware, and/or hardware. the controller 110 is described in more detail below with respect to fig. 2. In some implementations, the controller 110 is a computing device that is in a local or remote location relative to the scene 105 (e.g., physical environment). For example, the controller 110 is a local server located within the scene 105. As another example, the controller 110 is a remote server (e.g., cloud server, central server, etc.) located outside of the scene 105. In some implementations, the controller 110 is communicatively coupled with the display generation component 120 (e.g., HMD, display, projector, touch-screen, etc.) via one or more wired or wireless communication channels 144 (e.g., bluetooth, IEEE 802.11x, IEEE 802.16x, IEEE 802.3x, etc.). In another example, the controller 110 is included within a housing (e.g., a physical enclosure) of the display generation component 120 (e.g., an HMD or portable electronic device including a display and one or more processors, etc.), one or more of the input devices 125, one or more of the output devices 155, one or more of the sensors 190, and/or one or more of the peripheral devices 195, or shares the same physical housing or support structure with one or more of the above.
In some embodiments, display generation component 120 is configured to provide an XR experience (e.g., at least a visual component of the XR experience) to a user. In some embodiments, display generation component 120 includes suitable combinations of software, firmware, and/or hardware. The display generating section 120 is described in more detail below with respect to fig. 3. In some embodiments, the functionality of the controller 110 is provided by and/or combined with the display generating component 120.
According to some embodiments, display generation component 120 provides an XR experience to a user when the user is virtually and/or physically present within scene 105.
In some embodiments, the display generating component is worn on a portion of the user's body (e.g., on his/her head, on his/her hand, etc.). As such, display generation component 120 includes one or more XR displays provided for displaying XR content. For example, in various embodiments, the display generation component 120 encloses a field of view of a user. In some embodiments, display generation component 120 is a handheld device (such as a smart phone or tablet device) configured to present XR content, and the user holds the device with a display facing the user's field of view and a camera facing scene 105. In some embodiments, the handheld device is optionally placed within a housing that is worn on the head of the user. In some embodiments, the handheld device is optionally placed on a support (e.g., tripod) in front of the user. In some embodiments, display generation component 120 is an XR room, housing, or room configured to present XR content, wherein the user does not wear or hold display generation component 120. Many of the user interfaces described with reference to one type of hardware for displaying XR content (e.g., a handheld device or a device on a tripod) may be implemented on another type of hardware for displaying XR content (e.g., an HMD or other wearable computing device). For example, a user interface showing interactions with XR content triggered based on interactions occurring in a space in front of a handheld device or a tripod-mounted device may similarly be implemented with an HMD, where the interactions occur in the space in front of the HMD and responses to the XR content are displayed via the HMD. Similarly, a user interface showing interaction with XR content triggered based on movement of a handheld device or tripod-mounted device relative to a physical environment (e.g., a scene 105 or a portion of a user's body (e.g., a user's eye, head, or hand)) may similarly be implemented with an HMD, where the movement is caused by movement of the HMD relative to the physical environment (e.g., the scene 105 or a portion of the user's body (e.g., a user's eye, head, or hand)).
While relevant features of the operating environment 100 are shown in fig. 1, those of ordinary skill in the art will recognize from this disclosure that various other features are not shown for the sake of brevity and so as not to obscure more relevant aspects of the example embodiments disclosed herein.
Fig. 2 is a block diagram of an example of controller 110 in some embodiments. While certain specific features are shown, those of ordinary skill in the art will appreciate from the disclosure that various other features are not shown for the sake of brevity and so as not to obscure more pertinent aspects of the embodiments disclosed herein. To this end, as a non-limiting example, in some embodiments, the controller 110 includes one or more processing units 202 (e.g., microprocessors, application Specific Integrated Circuits (ASICs), field Programmable Gate Arrays (FPGAs), graphics Processing Units (GPUs), central Processing Units (CPUs), processing cores, etc.), one or more input/output (I/O) devices 206, one or more communication interfaces 208 (e.g., universal Serial Bus (USB), IEEE 802.3x, IEEE 802.11x, IEEE 802.16x, global system for mobile communications (GSM), code Division Multiple Access (CDMA), time Division Multiple Access (TDMA), global Positioning System (GPS), infrared (IR), bluetooth, ZIGBEE, and/or similar types of interfaces), one or more programming (e.g., I/O) interfaces 210, memory 220, and one or more communication buses 204 for interconnecting these components and various other components.
In some embodiments, one or more of the communication buses 204 include circuitry that interconnects and controls communications between system components. In some embodiments, the one or more I/O devices 206 include at least one of a keyboard, a mouse, a touchpad, a joystick, one or more microphones, one or more speakers, one or more image sensors, one or more displays, and the like.
Memory 220 includes high-speed random access memory such as Dynamic Random Access Memory (DRAM), static Random Access Memory (SRAM), double data rate random access memory (DDR RAM), or other random access solid state memory devices. In some embodiments, memory 220 includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 220 optionally includes one or more storage devices located remotely from the one or more processing units 202. Memory 220 includes a non-transitory computer-readable storage medium. In some embodiments, memory 220 or a non-transitory computer readable storage medium of memory 220 stores the following programs, modules, and data structures, or a subset thereof, including optional operating system 230 and XR experience module 240.
Operating system 230 includes instructions for handling various basic system services and for performing hardware-related tasks. In some embodiments, XR experience module 240 is configured to manage and coordinate single or multiple XR experiences of one or more users (e.g., single XR experiences of one or more users, or multiple XR experiences of a respective group of one or more users). To this end, in various embodiments, the XR experience module 240 includes a data acquisition unit 241, a tracking unit 242, a coordination unit 246, and a data transmission unit 248.
In some embodiments, the data acquisition unit 241 is configured to acquire data (e.g., presentation data, interaction data, sensor data, location data, etc.) from at least the display generation component 120 of fig. 1, and optionally from one or more of the input device 125, the output device 155, the sensor 190, and/or the peripheral device 195. To this end, in various embodiments, the data acquisition unit 241 includes instructions and/or logic for instructions as well as heuristics and metadata for heuristics.
In some embodiments, tracking unit 242 is configured to map scene 105 and track at least the location/position of display generation component 120 relative to scene 105 of fig. 1, and optionally the location/position of one or more of input device 125, output device 155, sensor 190, and/or peripheral device 195. To this end, in various embodiments, the tracking unit 242 includes instructions and/or logic for instructions as well as heuristics and metadata for heuristics. In some embodiments, tracking unit 242 includes a hand tracking unit 244 and/or an eye tracking unit 243. In some embodiments, the hand tracking unit 244 is configured to track the location/position of one or more portions of the user's hand, and/or the motion of one or more portions of the user's hand relative to the scene 105 of fig. 1, relative to the display generating component 120, and/or relative to a coordinate system defined relative to the user's hand. The hand tracking unit 244 is described in more detail below with respect to fig. 4. In some embodiments, the eye tracking unit 243 is configured to track the positioning or movement of the user gaze (or more generally, the user's eyes, face, or head) relative to the scene 105 (e.g., relative to the physical environment and/or relative to the user (e.g., the user's hand)) or relative to XR content displayed via the display generating component 120. The eye tracking unit 243 is described in more detail below with respect to fig. 5.
In some embodiments, coordination unit 246 is configured to manage and coordinate XR experiences presented to a user by display generation component 120, and optionally by one or more of output device 155 and/or peripheral device 195. For this purpose, in various embodiments, coordination unit 246 includes instructions and/or logic for instructions as well as heuristics and metadata for heuristics.
In some embodiments, the data transmission unit 248 is configured to transmit data (e.g., presentation data, location data, etc.) to at least the display generation component 120, and optionally to one or more of the input device 125, the output device 155, the sensor 190, and/or the peripheral device 195. For this purpose, in various embodiments, the data transmission unit 248 includes instructions and/or logic for instructions as well as heuristics and metadata for heuristics.
While the data acquisition unit 241, tracking unit 242 (e.g., including eye tracking unit 243 and hand tracking unit 244), coordination unit 246, and data transmission unit 248 are shown as residing on a single device (e.g., controller 110), it should be understood that in other embodiments, any combination of the data acquisition unit 241, tracking unit 242 (e.g., including eye tracking unit 243 and hand tracking unit 244), coordination unit 246, and data transmission unit 248 may reside in separate computing devices.
Furthermore, FIG. 2 is a functional description of various features that may be present in a particular implementation, as opposed to a schematic of the embodiments described herein. As will be appreciated by one of ordinary skill in the art, the individually displayed items may be combined and some items may be separated. For example, some of the functional blocks shown separately in fig. 2 may be implemented in a single block, and the various functions of a single functional block may be implemented by one or more functional blocks in various embodiments. The actual number of modules and the division of particular functions, and how features are allocated among them, will vary depending upon the particular implementation, and in some embodiments, depend in part on the particular combination of hardware, software, and/or firmware selected for a particular implementation.
Fig. 3 is a block diagram of an example of display generation component 120 in some embodiments. While certain specific features are shown, those of ordinary skill in the art will appreciate from the disclosure that various other features are not shown for the sake of brevity and so as not to obscure more pertinent aspects of the embodiments disclosed herein. For this purpose, as a non-limiting example, in some embodiments, display generation component 120 (e.g., HMD) includes one or more processing units 302 (e.g., microprocessors, ASIC, FPGA, GPU, CPU, processing cores, etc.), one or more input/output (I/O) devices and sensors 306, one or more communication interfaces 308 (e.g., USB, FIREWIRE, THUNDERBOLT, IEEE 802.3x, IEEE 802.11x, IEEE 802.16x, GSM, CDMA, TDMA, GPS, IR, BLUETOOTH, ZIGBEE, and/or similar types of interfaces), one or more programming (e.g., I/O) interfaces 310, one or more XR displays 312, one or more optional internally and/or externally facing image sensors 314, memory 320, and one or more communication buses 304 for interconnecting these components and various other components.
In some embodiments, one or more communication buses 304 include circuitry for interconnecting and controlling communications between various system components. In some embodiments, the one or more I/O devices and sensors 306 include an Inertial Measurement Unit (IMU), an accelerometer, a gyroscope, a thermometer, one or more physiological sensors (e.g., blood pressure monitor, heart rate monitor, blood oxygen sensor, blood glucose sensor, etc.), one or more microphones, one or more speakers, a haptic engine, and/or one or more depth sensors (e.g., structured light, time of flight, etc.), and/or the like.
In some embodiments, one or more XR displays 312 are configured to provide an XR experience to a user. In some embodiments, one or more XR displays 312 correspond to holographic, digital Light Processing (DLP), liquid Crystal Displays (LCD), liquid crystal on silicon (LCoS), organic light emitting field effect transistors (OLET), organic Light Emitting Diodes (OLED), surface conduction electron emitting displays (SED), field Emission Displays (FED), quantum dot light emitting diodes (QD-LED), microelectromechanical systems (MEMS), and/or similar display types. In some embodiments, one or more XR displays 312 correspond to diffractive, reflective, polarizing, holographic, etc. waveguide displays. For example, the display generation component 120 (e.g., HMD) includes a single XR display. In another example, display generation component 120 includes an XR display for each eye of the user. In some embodiments, one or more XR displays 312 are capable of presenting MR and VR content. In some implementations, one or more XR displays 312 can present MR or VR content.
In some embodiments, the one or more image sensors 314 are configured to acquire image data corresponding to at least a portion of the user's face including the user's eyes (and may be referred to as an eye tracking camera). In some embodiments, the one or more image sensors 314 are configured to acquire image data corresponding to at least a portion of the user's hand and optionally the user's arm (and may be referred to as a hand tracking camera). In some implementations, the one or more image sensors 314 are configured to face forward in order to acquire image data corresponding to a scene that a user would see in the absence of the display generating component 120 (e.g., HMD) (and may be referred to as a scene camera). The one or more optional image sensors 314 may include one or more RGB cameras (e.g., with Complementary Metal Oxide Semiconductor (CMOS) image sensors or Charge Coupled Device (CCD) image sensors), one or more Infrared (IR) cameras, and/or one or more event-based cameras, etc.
Memory 320 includes high-speed random access memory such as DRAM, SRAM, DDR RAM or other random access solid state memory devices. In some embodiments, memory 320 includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 320 optionally includes one or more storage devices located remotely from the one or more processing units 302. Memory 320 includes a non-transitory computer-readable storage medium. In some embodiments, memory 320 or a non-transitory computer readable storage medium of memory 320 stores the following programs, modules, and data structures, or a subset thereof, including optional operating system 330 and XR presentation module 340.
Operating system 330 includes processes for handling various basic system services and for performing hardware-related tasks. In some embodiments, XR presentation module 340 is configured to present XR content to a user via one or more XR displays 312. To this end, in various embodiments, the XR presentation module 340 includes a data acquisition unit 342, an XR presentation unit 344, an XR map generation unit 346, and a data transmission unit 348.
In some embodiments, the data acquisition unit 342 is configured to at least acquire data (e.g., presentation data, interaction data, sensor data, location data, etc.) from the controller 110 of fig. 1. For this purpose, in various embodiments, the data acquisition unit 342 includes instructions and/or logic for instructions and heuristics and metadata for heuristics.
In some embodiments, XR presentation unit 344 is configured to present XR content via one or more XR displays 312. For this purpose, in various embodiments, XR presentation unit 344 includes instructions and/or logic for instructions and heuristics and metadata for heuristics.
In some embodiments, XR map generation unit 346 is configured to generate an XR map based on the media content data (e.g., a 3D map of a mixed reality scene or a map of a physical environment in which computer-generated objects may be placed to generate an augmented reality). For this purpose, in various embodiments, XR map generation unit 346 includes instructions and/or logic for the instructions as well as heuristics and metadata for the heuristics.
In some embodiments, the data transmission unit 348 is configured to transmit data (e.g., presentation data, location data, etc.) to at least the controller 110, and optionally one or more of the input device 125, the output device 155, the sensor 190, and/or the peripheral device 195. For this purpose, in various embodiments, the data transmission unit 348 includes instructions and/or logic for instructions and heuristics and metadata for heuristics.
Although the data acquisition unit 342, the XR presentation unit 344, the XR map generation unit 346, and the data transmission unit 348 are shown as residing on a single device (e.g., the display generation component 120 of fig. 1), it should be understood that in other embodiments, any combination of the data acquisition unit 342, the XR presentation unit 344, the XR map generation unit 346, and the data transmission unit 348 may be located in separate computing devices.
Furthermore, fig. 3 is used more as a functional description of various features that may be present in a particular embodiment, as opposed to a schematic of the embodiments described herein. As will be appreciated by one of ordinary skill in the art, the individually displayed items may be combined and some items may be separated. For example, some of the functional blocks shown separately in fig. 3 may be implemented in a single block, and the various functions of a single functional block may be implemented by one or more functional blocks in various embodiments. The actual number of modules and the division of particular functions, and how features are allocated among them, will vary depending upon the particular implementation, and in some embodiments, depend in part on the particular combination of hardware, software, and/or firmware selected for a particular implementation.
Fig. 4 is a schematic illustration of an example embodiment of a hand tracking device 140. In some embodiments, the hand tracking device 140 (fig. 1) is controlled by the hand tracking unit 244 (fig. 2) to track the position/location of one or more portions of the user's hand, and/or the movement of one or more portions of the user's hand relative to the scene 105 of fig. 1 (e.g., relative to a portion of the physical environment surrounding the user, relative to the display generating component 120, or relative to a portion of the user (e.g., the user's face, eyes, or head), and/or relative to a coordinate system defined relative to the user's hand). In some implementations, the hand tracking device 140 is part of the display generation component 120 (e.g., embedded in or attached to a head-mounted device). In some embodiments, the hand tracking device 140 is separate from the display generation component 120 (e.g., in a separate housing or attached to a separate physical support structure).
In some implementations, the hand tracking device 140 includes an image sensor 404 (e.g., one or more IR cameras, 3D cameras, depth cameras, and/or color cameras, etc.) that captures three-dimensional scene information including at least a human user's hand 406. The image sensor 404 captures the hand image with sufficient resolution to enable the fingers and their corresponding locations to be distinguished. The image sensor 404 typically captures images of other parts of the user's body, and possibly also all parts of the body, and may have a zoom capability or a dedicated sensor with increased magnification to capture images of the hand with a desired resolution. In some implementations, the image sensor 404 also captures 2D color video images of the hand 406 and other elements of the scene. In some implementations, the image sensor 404 is used in conjunction with other image sensors to capture the physical environment of the scene 105, or as an image sensor that captures the physical environment of the scene 105. In some embodiments, the image sensor 404, or a portion thereof, is positioned relative to the user or the user's environment in a manner that uses the field of view of the image sensor to define an interaction space in which hand movements captured by the image sensor are considered input to the controller 110.
In some embodiments, the image sensor 404 outputs a sequence of frames containing 3D map data (and, in addition, possible color image data) to the controller 110, which extracts high-level information from the map data. This high-level information is typically provided via an Application Program Interface (API) to an application running on the controller, which drives the display generating component 120 accordingly. For example, a user may interact with software running on the controller 110 by moving his hand 406 and changing his hand pose.
In some implementations, the image sensor 404 projects a speckle pattern onto a scene that includes the hand 406 and captures an image of the projected pattern. In some implementations, the controller 110 calculates 3D coordinates of points in the scene (including points on the surface of the user's hand) by triangulation based on lateral offsets of the blobs in the pattern. This approach is advantageous because it does not require the user to hold or wear any kind of beacon, sensor or other marker. The method gives the depth coordinates of points in the scene relative to a predetermined reference plane at a specific distance from the image sensor 404. In this disclosure, it is assumed that the image sensor 404 defines an orthogonal set of x-axis, y-axis, z-axis such that the depth coordinates of points in the scene correspond to the z-component measured by the image sensor. Alternatively, the image sensor 404 (e.g., a hand tracking device) may use other 3D mapping methods, such as stereoscopic imaging or time-of-flight measurements, based on single or multiple cameras or other types of sensors.
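A minimal sketch of the underlying triangulation relation is shown below, assuming a pinhole camera model in which depth z = f * b / d for disparity d (the lateral shift of a speckle, in pixels), focal length f (in pixels), and projector-camera baseline b. The described system measures offsets relative to a reference plane, which this sketch omits; the function and parameter names are assumptions.

```swift
import Foundation

// Triangulated depth from the lateral shift (disparity) of a projected speckle,
// under a pinhole model: z = f * b / d. Reference-plane handling is omitted.
func depthFromDisparity(disparityPixels d: Double,
                        focalLengthPixels f: Double,
                        baselineMeters b: Double) -> Double? {
    guard d > 0 else { return nil }   // zero shift carries no depth information
    return f * b / d
}

// Back-project an image point (u, v) at depth z into camera-space coordinates,
// with (cx, cy) the principal point of the sensor.
func backProject(u: Double, v: Double, z: Double,
                 f: Double, cx: Double, cy: Double) -> SIMD3<Double> {
    SIMD3((u - cx) * z / f, (v - cy) * z / f, z)
}
```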
In some implementations, the hand tracking device 140 captures and processes a time series of depth maps containing the user's hand as the user moves his hand (e.g., the entire hand or one or more fingers). Software running on the image sensor 404 and/or a processor in the controller 110 processes the 3D map data to extract image block descriptors of the hand in these depth maps. The software may match these descriptors with image block descriptors stored in database 408 based on previous learning processes in order to estimate the pose of the hand in each frame. The pose typically includes the 3D position of the user's hand joints and finger tips.
The software may also analyze the trajectory of the hand and/or finger over multiple frames in the sequence to identify gestures. The pose estimation functions described herein may alternate with motion tracking functions such that image block-based pose estimation is performed only once every two (or more) frames while tracking changes used to find poses that occur on the remaining frames. Pose, motion, and gesture information are provided to an application running on the controller 110 via the APIs described above. The program may move and modify images presented on the display generation component 120, for example, in response to pose and/or gesture information, or perform other functions.
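The alternation between descriptor-based pose estimation and cheaper frame-to-frame tracking could be organized as in the following sketch. The protocol, type names, and the exact "once every two frames" cadence are assumptions introduced here for illustration rather than the disclosed implementation.

```swift
import Foundation

// Placeholder types standing in for the estimator described above.
struct HandPose { var jointPositions: [SIMD3<Double>] }

protocol PoseSource {
    func estimateFromDescriptors(_ depthFrame: [Float]) -> HandPose                // expensive descriptor match
    func trackFromPrevious(_ depthFrame: [Float], previous: HandPose) -> HandPose  // cheap incremental tracking
}

final class HandPosePipeline {
    private let source: PoseSource
    private var lastPose: HandPose?
    private var frameIndex = 0
    private let fullEstimateEvery = 2   // descriptor-based estimate once per two frames

    init(source: PoseSource) { self.source = source }

    func process(_ depthFrame: [Float]) -> HandPose {
        defer { frameIndex += 1 }
        if let previous = lastPose, frameIndex % fullEstimateEvery != 0 {
            // Off-cycle frames reuse the previous pose and track changes only.
            let pose = source.trackFromPrevious(depthFrame, previous: previous)
            lastPose = pose
            return pose
        }
        // On-cycle frames (and the first frame) run the full descriptor-based estimate.
        let pose = source.estimateFromDescriptors(depthFrame)
        lastPose = pose
        return pose
    }
}
```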
In some implementations, the gesture includes an air gesture. An air gesture is a motion of a portion of the user's body that is detected without the user touching an input element that is part of the device (or independently of an input element that is part of the device, e.g., computer system 101, one or more input devices 125, and/or hand tracking device 140), including a motion of the user's body relative to an absolute reference (e.g., an angle of the user's arm relative to the ground or a distance of the user's hand relative to the ground), a motion relative to another portion of the user's body (e.g., a movement of the user's hand relative to the user's shoulder, a movement of one of the user's hands relative to the other hand, and/or a movement of the user's finger relative to another finger or portion of the user's hand), and/or an absolute motion of a portion of the user's body (e.g., a tap gesture that includes the hand moving a predetermined amount and/or at a predetermined speed in a predetermined pose, or a shake gesture that includes a predetermined speed or amount of rotation of a portion of the user's body).
In some embodiments, the input gestures used in the various examples and embodiments described herein include air gestures performed by movement of a user's finger relative to other fingers (or portions of the user's hand) for interacting with an XR environment (e.g., a virtual or mixed reality environment) in some embodiments. In some embodiments, the air gesture is a gesture that is detected without the user touching an input element that is part of the device (or independent of an input element that is part of the device) and based on a detected movement of a portion of the user's body through the air, including a movement of the user's body relative to an absolute reference (e.g., an angle of the user's arm relative to the ground or a distance of the user's hand relative to the ground), a movement relative to another portion of the user's body (e.g., a movement of the user's hand relative to the user's shoulder, a movement of the user's hand relative to the other hand, and/or a movement of the user's finger relative to the other finger or part of the hand), and/or an absolute movement of a portion of the user's body (e.g., a flick gesture that includes the hand moving a predetermined amount and/or speed in a predetermined pose, or a shake gesture that includes a predetermined speed or amount of rotation of the portion of the user's body).
In some embodiments in which the input gesture is an air gesture (e.g., in the absence of physical contact with an input device that provides the computer system with information about which user interface element is the target of the user input, such as contact with a user interface element displayed on a touch screen, or contact with a mouse or touchpad to move a cursor to the user interface element), the gesture takes into account the user's attention (e.g., gaze) to determine the target of the user input (e.g., for direct input, as described below). Thus, in embodiments involving air gestures, the input gesture is, for example, a detected attention (e.g., gaze) toward a user interface element in combination (e.g., concurrently) with movement of the user's finger and/or hand to perform pinch and/or tap inputs, as described below.
In some implementations, an input gesture directed to a user interface object is performed with direct or indirect reference to the user interface object. For example, user input is performed directly on a user interface object when an input gesture is performed with the user's hand at a location corresponding to the location of the user interface object in a three-dimensional environment (e.g., as determined based on the user's current viewpoint). In some implementations, upon detecting a user's attention (e.g., gaze) to a user interface object, an input gesture is performed indirectly on the user interface object in accordance with a positioning of a user's hand while the user performs the input gesture not being at the positioning corresponding to the positioning of the user interface object in a three-dimensional environment. For example, for a direct input gesture, the user can direct the user's input to the user interface object by initiating the gesture at or near a location corresponding to the displayed location of the user interface object (e.g., within 0.5cm, 1cm, 5cm, or within a distance between 0 and 5cm measured from the outer edge of the option or the center portion of the option). For indirect input gestures, a user can direct the user's input to a user interface object by focusing on the user interface object (e.g., by looking at the user interface object), and while focusing on an option, the user initiates the input gesture (e.g., at any location that is detectable by the computer system) (e.g., at a location that does not correspond to a display location of the user interface object).
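A minimal sketch of resolving a gesture's target under the direct/indirect distinction is shown below. The 5 cm direct-input radius echoes the range given above, while the data types and the nearest-element heuristic are assumptions of the sketch.

```swift
import Foundation

struct UIElement {
    var id: String
    var worldPosition: SIMD3<Double>
}

// Resolve the target of an air gesture: direct if the hand is close enough to an
// element, otherwise indirect via the element the user's attention (gaze) is on.
func resolveTarget(handPosition: SIMD3<Double>,
                   gazedElement: UIElement?,
                   elements: [UIElement],
                   directRadius: Double = 0.05) -> UIElement? {
    let nearest = elements.min { a, b in
        separation(handPosition, a.worldPosition) < separation(handPosition, b.worldPosition)
    }
    if let nearest = nearest, separation(handPosition, nearest.worldPosition) <= directRadius {
        return nearest        // direct input: gesture performed at the element's location
    }
    return gazedElement       // indirect input: gesture anywhere, target chosen by attention
}

private func separation(_ a: SIMD3<Double>, _ b: SIMD3<Double>) -> Double {
    let d = a - b
    return (d * d).sum().squareRoot()
}
```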
In some embodiments, the input gestures (e.g., air gestures) used in the various examples and embodiments described herein include pinch inputs and tap inputs that are used in some embodiments to interact with a virtual or mixed reality environment. For example, pinch and tap inputs described below are performed as air gestures.
In some implementations, the pinch input is part of an air gesture that includes one or more of: pinch gestures, long pinch gestures, pinch and drag gestures, or double pinch gestures. For example, pinch gestures as air gestures include movements of two or more fingers of a hand to contact each other, i.e., optionally, immediately followed by interruption of contact with each other (e.g., within 0 to 1 second). A long pinch gesture, which is an air gesture, includes movement of two or more fingers of a hand into contact with each other for at least a threshold amount of time (e.g., at least 1 second) before a break in contact with each other is detected. For example, a long pinch gesture includes a user holding a pinch gesture (e.g., where two or more fingers make contact), and the long pinch gesture continues until a break in contact between the two or more fingers is detected. In some implementations, the double pinch gesture as an air gesture includes two (e.g., or more) pinch inputs (e.g., performed by the same hand) that are detected in succession with each other immediately (e.g., within a predefined period of time). For example, the user performs a first pinch input (e.g., a pinch input or a long pinch input), releases the first pinch input (e.g., breaks contact between two or more fingers), and performs a second pinch input within a predefined period of time (e.g., within 1 second or within 2 seconds) after releasing the first pinch input.
In some implementations, the pinch-and-drag gesture as an air gesture includes a pinch gesture (e.g., a pinch gesture or a long pinch gesture) that is performed in conjunction with (e.g., followed by) a drag input that changes a position of a user's hand from a first position (e.g., a start position of the drag) to a second position (e.g., an end position of the drag). In some implementations, the user holds the pinch gesture while performing the drag input, and releases the pinch gesture (e.g., opens their two or more fingers) to end the drag gesture (e.g., at the second position). In some implementations, the pinch input and the drag input are performed by the same hand (e.g., the user pinches two or more fingers into contact with each other and moves the same hand to the second position in the air with the drag gesture). In some implementations, the pinch input is performed by a first hand of the user and the drag input is performed by a second hand of the user (e.g., the second hand of the user moves in the air from a first position to a second position as the user continues the pinch input with the first hand). In some implementations, the input gesture as an air gesture includes an input (e.g., pinch and/or tap input) performed using both hands of the user. For example, an input gesture includes two (e.g., or more) pinch inputs performed in conjunction with each other (e.g., concurrently or within a predefined time period). For example, a first pinch gesture (e.g., a pinch input, long pinch input, or pinch-and-drag input) is performed using a first hand of the user, and a second pinch input is performed using the other hand (e.g., the second of the user's two hands) in combination with the pinch input performed using the first hand. In some embodiments, the input gesture also includes movement between the user's two hands (e.g., increasing and/or decreasing the distance or relative orientation between the user's two hands).
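One way the pinch variants described above could be distinguished from contact timing is sketched below. The event representation is an assumption of the sketch, while the thresholds (1 second for a long pinch, 1 second between the pinches of a double pinch) follow the examples above.

```swift
import Foundation

enum PinchKind { case pinch, longPinch, doublePinch }

// A finger-contact interval reported by hand tracking (an assumed event model).
struct PinchContact {
    var start: TimeInterval
    var end: TimeInterval
    var duration: TimeInterval { end - start }
}

// Classify the most recent contact as a pinch, long pinch, or double pinch.
func classify(_ contacts: [PinchContact],
              longPinchThreshold: TimeInterval = 1.0,
              doublePinchGap: TimeInterval = 1.0) -> PinchKind? {
    guard let last = contacts.last else { return nil }
    // Two short pinches whose releases fall within the gap window form a double pinch.
    if contacts.count >= 2 {
        let previous = contacts[contacts.count - 2]
        if last.start - previous.end <= doublePinchGap,
           previous.duration < longPinchThreshold,
           last.duration < longPinchThreshold {
            return .doublePinch
        }
    }
    return last.duration >= longPinchThreshold ? .longPinch : .pinch
}
```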
In some implementations, the tap input (e.g., directed to a user interface element) performed as an air gesture includes movement of a user's finger toward the user interface element, movement of a user's hand toward the user interface element (optionally, with the user's finger extended toward the user interface element), downward movement of the user's finger (e.g., mimicking a mouse click motion or a tap on a touch screen), or other predefined movement of the user's hand. In some embodiments, a tap input performed as an air gesture is detected based on movement characteristics of the finger or hand performing the tap gesture: movement of the finger or hand away from the user's point of view and/or toward the object that is the target of the tap input, followed by an end of the movement. In some embodiments, the end of movement is detected based on a change in movement characteristics of the finger or hand performing the tap gesture (e.g., the end of movement away from the user's point of view and/or toward the object that is the target of the tap input, a reversal of the direction of movement of the finger or hand, and/or a reversal of the direction of acceleration of the finger or hand).
In some embodiments, the determination that the user's attention is directed to a portion of the three-dimensional environment is based on detection of gaze directed to that portion (optionally, without other conditions). In some embodiments, the portion of the three-dimensional environment to which the user's attention is directed is determined based on detecting a gaze directed to the portion of the three-dimensional environment with one or more additional conditions, such as requiring the gaze to be directed to the portion of the three-dimensional environment for at least a threshold duration (e.g., dwell duration) and/or requiring the gaze to be directed to the portion of the three-dimensional environment when the point of view of the user is within a distance threshold from the portion of the three-dimensional environment, such that the device determines the portion of the three-dimensional environment to which the user's attention is directed, wherein if one of the additional conditions is not met, the device determines that the attention is not directed to the portion of the three-dimensional environment to which the gaze is directed (e.g., until the one or more additional conditions are met).
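A minimal sketch of attributing attention based on gaze dwell and viewpoint distance is shown below. The threshold values and the region representation are placeholders introduced here, not values taken from the description above.

```swift
import Foundation

// Attention is attributed to a region only after gaze has dwelled on it for a
// minimum duration and the viewpoint is close enough. Thresholds are placeholders.
struct AttentionTracker {
    var dwellThreshold: TimeInterval = 0.5
    var maxViewpointDistance: Double = 3.0

    private var gazeStart: TimeInterval?
    private var currentRegion: String?

    // Returns true once the user's attention is considered directed to `region`.
    mutating func update(region: String?, distanceToRegion: Double, now: TimeInterval) -> Bool {
        guard let region = region, distanceToRegion <= maxViewpointDistance else {
            gazeStart = nil
            currentRegion = nil
            return false
        }
        if region != currentRegion {
            currentRegion = region
            gazeStart = now          // gaze moved to a new region: restart the dwell timer
        }
        guard let start = gazeStart else { return false }
        return now - start >= dwellThreshold
    }
}
```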
In some embodiments, detection of the ready state configuration of the user or a portion of the user is detected by the computer system. Detection of a ready state configuration of a hand is used by a computer system as an indication that a user may be ready to interact with the computer system using one or more air gesture inputs (e.g., pinch, tap, pinch and drag, double pinch, long pinch, or other air gestures described herein) performed by the hand. For example, the ready state of the hand is determined based on whether the hand has a predetermined hand shape (e.g., a pre-pinch shape in which the thumb and one or more fingers extend and are spaced apart in preparation for making a pinch or grasp gesture, or a pre-flick in which the one or more fingers extend and the palm faces away from the user), based on whether the hand is in a predetermined position relative to the user's point of view (e.g., below the user's head and above the user's waist and extending at least 15cm, 20cm, 25cm, 30cm, or 50cm from the body), and/or based on whether the hand has moved in a particular manner (e.g., toward an area above the user's waist and in front of the user's head or away from the user's body or legs). In some implementations, the ready state is used to determine whether an interactive element of the user interface is responsive to an attention (e.g., gaze) input.
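The ready-state check could be approximated as in the following sketch. The hand-sample fields, the pre-pinch finger-gap range, and the body geometry are assumptions of the sketch, while the 15 cm extension threshold echoes one of the examples above.

```swift
import Foundation

// A rough ready-state check: pre-pinch hand shape, hand held between waist and head,
// and extended far enough from the body. Field names and geometry are assumptions.
struct HandSample {
    var thumbTip: SIMD3<Double>
    var indexTip: SIMD3<Double>
    var palmCenter: SIMD3<Double>
}

func isInReadyState(hand: HandSample,
                    headHeight: Double,
                    waistHeight: Double,
                    bodyCenter: SIMD3<Double>,
                    minExtension: Double = 0.15) -> Bool {
    // Pre-pinch shape: thumb and index extended and spaced apart, but not touching.
    let gap = magnitude(hand.thumbTip - hand.indexTip)
    let prePinch = gap > 0.01 && gap < 0.08

    // Held between waist and head height.
    let heightOK = hand.palmCenter.y > waistHeight && hand.palmCenter.y < headHeight

    // Extended at least `minExtension` metres horizontally from the body.
    let horizontal = SIMD3(hand.palmCenter.x - bodyCenter.x, 0, hand.palmCenter.z - bodyCenter.z)
    let extendedOK = magnitude(horizontal) >= minExtension

    return prePinch && heightOK && extendedOK
}

private func magnitude(_ v: SIMD3<Double>) -> Double {
    (v * v).sum().squareRoot()
}
```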
In scenarios where inputs are described with reference to air gestures, it should be appreciated that similar gestures may be detected using a hardware input device that is attached to or held by one or more hands of a user, where the positioning of the hardware input device in space may be tracked using optical tracking, one or more accelerometers, one or more gyroscopes, one or more magnetometers, and/or one or more inertial measurement units, and the positioning and/or movement of the hardware input device is used in place of the positioning and/or movement of the one or more hands in the corresponding air gesture. In scenarios where inputs are described with reference to air gestures, it should also be appreciated that user inputs may be detected using controls contained in the hardware input device, such as one or more touch-sensitive input elements, one or more pressure-sensitive input elements, one or more buttons, one or more knobs, one or more dials, one or more joysticks, one or more hand or finger covers that can detect changes in the positioning or location of portions of the hands and/or fingers relative to each other, relative to the user's body, and/or relative to the user's physical environment, and/or other hardware input device controls, wherein user inputs made using controls contained in the hardware input device are used in place of finger gestures, such as air taps and/or air pinches, in the corresponding air gestures. For example, a selection input described as being performed with an air tap or air pinch input may alternatively be detected with a button press, a tap on a touch-sensitive surface, a press on a pressure-sensitive surface, or another hardware input. As another example, a movement input described as being performed with an air pinch and drag may alternatively be detected based on interaction with hardware input controls, such as a button press and hold, a touch on a touch-sensitive surface, or a press on a pressure-sensitive surface, or other hardware input that is followed by movement of the hardware input device (e.g., along with the hand associated with the hardware input device) through space. Similarly, a two-handed input that includes movement of the hands relative to each other may be performed using one air gesture and one hardware input device held in the hand that is not performing the air gesture, using two hardware input devices held in different hands, or using two air gestures performed by different hands, in various combinations of air gestures and/or inputs detected by the one or more hardware input devices.
In some embodiments, the software may be downloaded to the controller 110 in electronic form, over a network, for example, or may alternatively be provided on tangible non-transitory media, such as optical, magnetic, or electronic memory media. In some embodiments, database 408 is also stored in a memory associated with controller 110. Alternatively or in addition, some or all of the described functions of the computer may be implemented in dedicated hardware, such as a custom or semi-custom integrated circuit or a programmable Digital Signal Processor (DSP). Although the controller 110 is shown in fig. 4, for example, as a separate unit from the image sensor 404, some or all of the processing functions of the controller may be performed by a suitable microprocessor and software or by dedicated circuitry within the housing of the image sensor 404 (e.g., a hand tracking device) or other devices associated with the image sensor 404. In some embodiments, at least some of these processing functions may be performed by a suitable processor integrated with display generation component 120 (e.g., in a television receiver, handheld device, or head mounted device) or with any other suitable computerized device (such as a game console or media player). The sensing functionality of the image sensor 404 may likewise be integrated into a computer or other computerized device to be controlled by the sensor output.
Fig. 4 also includes a schematic representation of a depth map 410 captured by the image sensor 404 in some embodiments. As described above, the depth map comprises a matrix of pixels having corresponding depth values. The pixels 412 corresponding to the hand 406 have been segmented from the background and wrist in the figure. The brightness of each pixel within the depth map 410 is inversely proportional to its depth value (i.e., the measured z-distance from the image sensor 404), where the gray shade becomes darker with increasing depth. The controller 110 processes these depth values to identify and segment components of the image (i.e., a set of adjacent pixels) that have human hand characteristics. These characteristics may include, for example, overall size, shape, and frame-to-frame motion from a sequence of depth maps.
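As an illustrative sketch only, segmenting the hand pixels of a depth map and rendering them with brightness inversely proportional to depth could be approximated as follows; the depth range, array layout, and function names are assumptions rather than details of the described system.

```python
import numpy as np

def segment_hand(depth_mm: np.ndarray, near_mm: float = 300.0, far_mm: float = 900.0) -> np.ndarray:
    """Boolean mask of pixels whose depth falls within an assumed hand range."""
    return (depth_mm >= near_mm) & (depth_mm <= far_mm)

def hand_brightness(depth_mm: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Brightness image in [0, 1] where nearer hand pixels are brighter,
    i.e., the gray shade darkens with increasing depth; background stays 0."""
    out = np.zeros(depth_mm.shape, dtype=np.float32)
    if not mask.any():
        return out
    d = depth_mm[mask].astype(np.float32)
    span = max(float(d.max() - d.min()), 1e-6)
    out[mask] = 1.0 - (d - float(d.min())) / span
    return out
```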
Fig. 4 also schematically illustrates the hand skeleton 414 that the controller 110 ultimately extracts from the depth map 410 of the hand 406 in some embodiments. In fig. 4, the hand skeleton 414 is superimposed over the hand background 416 that has been segmented from the original depth map. In some embodiments, key feature points of the hand, and optionally of the wrist or arm connected to the hand (e.g., points corresponding to knuckles, finger tips, the center of the palm, and the end of the hand connecting to the wrist), are identified and located on the hand skeleton 414. In some embodiments, the controller 110 uses the positions and movements of these key feature points across multiple image frames to determine the gesture performed by the hand or the current state of the hand.
Fig. 5 shows an example embodiment of the eye tracking device 130 (fig. 1). In some embodiments, eye tracking device 130 is controlled by eye tracking unit 243 (fig. 2) to track the positioning and movement of the user gaze relative to scene 105 or relative to XR content displayed via display generation component 120. In some embodiments, the eye tracking device 130 is integrated with the display generation component 120. For example, in some embodiments, when display generating component 120 is a head-mounted device (such as a headset, helmet, goggles, or glasses) or a handheld device placed in a wearable frame, the head-mounted device includes both components that generate XR content for viewing by a user and components for tracking the user's gaze with respect to the XR content. In some embodiments, the eye tracking device 130 is separate from the display generation component 120. For example, when the display generating component is a handheld device or an XR chamber, the eye tracking device 130 is optionally a device separate from the handheld device or XR chamber. In some embodiments, the eye tracking device 130 is a head mounted device or a portion of a head mounted device. In some embodiments, the head-mounted eye tracking device 130 is optionally used in combination with a display generating component that is also head-mounted or a display generating component that is not head-mounted. In some embodiments, the eye tracking device 130 is not a head mounted device and is optionally used in conjunction with a head mounted display generating component. In some embodiments, the eye tracking device 130 is not a head mounted device and optionally is part of a non-head mounted display generating component.
In some embodiments, the display generation component 120 uses a display mechanism (e.g., a left near-eye display panel and a right near-eye display panel) to display frames including left and right images in front of the user's eyes, thereby providing a 3D virtual view to the user. For example, the head mounted display generating component may include left and right optical lenses (referred to herein as eye lenses) located between the display and the user's eyes. In some embodiments, the display generation component may include or be coupled to one or more external cameras that capture video of the user's environment for display. In some embodiments, the head mounted display generating component may have a transparent or translucent display and the virtual object is displayed on the transparent or translucent display through which the user may directly view the physical environment. In some embodiments, the display generation component projects the virtual object into the physical environment. The virtual object may be projected, for example, on a physical surface or as a hologram, such that an individual uses the system to observe the virtual object superimposed over the physical environment. In this case, separate display panels and image frames for the left and right eyes may not be required.
As shown in fig. 5, in some embodiments, the eye tracking device 130 (e.g., a gaze tracking device) includes at least one eye tracking camera (e.g., an Infrared (IR) or Near Infrared (NIR) camera) and an illumination source (e.g., an IR or NIR light source, such as an array or ring of LEDs) that emits light (e.g., IR or NIR light) toward the user's eye. The eye-tracking camera may be directed toward the user's eye to receive IR or NIR light reflected directly from the eye by the light source, or alternatively may be directed toward "hot" mirrors located between the user's eye and the display panel that reflect IR or NIR light from the eye to the eye-tracking camera while allowing visible light to pass through. The eye tracking device 130 optionally captures images of the user's eyes (e.g., as a video stream captured at 60-120 frames per second (fps)), analyzes the images to generate gaze tracking information, and communicates the gaze tracking information to the controller 110. In some embodiments, both eyes of the user are tracked separately by the respective eye tracking camera and illumination source. In some embodiments, only one eye of the user is tracked by the respective eye tracking camera and illumination source.
In some embodiments, the eye tracking device 130 is calibrated using a device-specific calibration process to determine parameters of the eye tracking device for the particular operating environment 100, such as the 3D geometry and parameters of the LEDs, cameras, hot mirrors (if present), eye lenses, and display screen. The device-specific calibration process may be performed at the factory or another facility prior to delivery of the AR/VR equipment to the end user. The device-specific calibration process may be an automatic calibration process or a manual calibration process. A user-specific calibration process may include estimating the eye parameters of a particular user, such as pupil position, foveal position, optical axis, visual axis, eye distance, and the like. In some embodiments, once the device-specific parameters and the user-specific parameters are determined for the eye tracking device 130, the images captured by the eye tracking camera may be processed using a glint-assisted method to determine the current visual axis and gaze point of the user relative to the display.
As shown in fig. 5, the eye tracking device 130 (e.g., 130A or 130B) includes an eye lens 520 and a gaze tracking system including at least one eye tracking camera 540 (e.g., an Infrared (IR) or Near Infrared (NIR) camera) positioned on a side of the user's face on which eye tracking is performed, and an illumination source 530 (e.g., an IR or NIR light source such as an array or ring of NIR Light Emitting Diodes (LEDs)) that emits light (e.g., IR or NIR light) toward the user's eyes 592. The eye-tracking camera 540 may be directed toward a mirror 550 (which reflects IR or NIR light from the eye 592 while allowing visible light to pass) located between the user's eye 592 and the display 510 (e.g., left or right display panel of a head-mounted display, or display of a handheld device, projector, etc.) (e.g., as shown in the top portion of fig. 5), or alternatively may be directed toward the user's eye 592 to receive reflected IR or NIR light from the eye 592 (e.g., as shown in the bottom portion of fig. 5).
In some implementations, the controller 110 renders AR or VR frames 562 (e.g., left and right frames for the left and right display panels) and provides the frames 562 to the display 510. The controller 110 uses the gaze tracking input 542 from the eye tracking camera 540 for various purposes, such as for processing the frames 562 for display. The controller 110 optionally estimates the user's point of gaze on the display 510 based on the gaze tracking input 542 acquired from the eye tracking camera 540 using a glint-assisted method or another suitable method. The point of gaze estimated from the gaze tracking input 542 is optionally used to determine the direction in which the user is currently looking.
Several possible use cases of the current gaze direction of the user are described below and are not intended to be limiting. As an example use case, the controller 110 may render virtual content differently based on the determined direction of the user's gaze. For example, the controller 110 may generate virtual content in a foveal region determined according to a current gaze direction of the user at a higher resolution than in a peripheral region. As another example, the controller may position or move virtual content in the view based at least in part on the user's current gaze direction. As another example, the controller may display particular virtual content in the view based at least in part on the user's current gaze direction. As another example use case in an AR application, the controller 110 may direct an external camera used to capture the physical environment of the XR experience to focus in the determined direction. The autofocus mechanism of the external camera may then focus on an object or surface in the environment that the user is currently looking at on display 510. As another example use case, the eye lens 520 may be a focusable lens, and the controller uses the gaze tracking information to adjust the focus of the eye lens 520 such that the virtual object that the user is currently looking at has the appropriate vergence to match the convergence of the user's eyes 592. The controller 110 may utilize the gaze tracking information to direct the eye lens 520 to adjust the focus such that the approaching object the user is looking at appears at the correct distance.
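As a hedged sketch of the foveated-rendering use case above (higher resolution in the foveal region determined by the current gaze direction, lower resolution in the periphery), a resolution-scale function might look like the following; the angular threshold, falloff constant, and minimum scale are arbitrary example values.

```python
import math

def resolution_scale(angle_from_gaze_deg: float,
                     foveal_radius_deg: float = 10.0,
                     min_scale: float = 0.25) -> float:
    """Render-resolution scale for content at a given angular distance from the
    estimated gaze direction: full resolution inside the foveal region, then a
    smooth falloff toward a reduced peripheral resolution."""
    if angle_from_gaze_deg <= foveal_radius_deg:
        return 1.0
    falloff = math.exp(-(angle_from_gaze_deg - foveal_radius_deg) / 15.0)
    return max(min_scale, falloff)
```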
In some embodiments, the eye tracking device is part of a head mounted device that includes a display (e.g., display 510), two eye lenses (e.g., eye lens 520), an eye tracking camera (e.g., eye tracking camera 540), and a light source (e.g., light source 530 (e.g., IR or NIR LED)) mounted in a wearable housing. The light source emits light (e.g., IR or NIR light) toward the user's eye 592. In some embodiments, the light sources may be arranged in a ring or circle around each of the lenses, as shown in fig. 5. In some embodiments, for example, eight light sources 530 (e.g., LEDs) are arranged around each lens 520. However, more or fewer light sources 530 may be used, and other arrangements and locations of light sources 530 may be used.
In some implementations, the display 510 emits light in the visible range and does not emit light in the IR or NIR range, and thus does not introduce noise in the gaze tracking system. Note that the position and angle of the eye tracking camera 540 is given by way of example and is not intended to be limiting. In some implementations, a single eye tracking camera 540 is located on each side of the user's face. In some implementations, two or more NIR cameras 540 may be used on each side of the user's face. In some implementations, a camera 540 with a wider field of view (FOV) and a camera 540 with a narrower FOV may be used on each side of the user's face. In some implementations, a camera 540 operating at one wavelength (e.g., 850 nm) and a camera 540 operating at a different wavelength (e.g., 940 nm) may be used on each side of the user's face.
The embodiment of the gaze tracking system as shown in fig. 5 may be used, for example, in computer-generated reality, virtual reality, and/or mixed reality applications to provide the user with a computer-generated reality, virtual reality, augmented reality, and/or augmented virtuality experience.
Fig. 6 illustrates a glint-assisted gaze tracking pipeline in some embodiments. In some embodiments, the gaze tracking pipeline is implemented by a glint-assisted gaze tracking system (e.g., eye tracking device 130 as shown in fig. 1 and 5). The glint-assisted gaze tracking system may maintain a tracking state. Initially, the tracking state is off or "no". When in the tracking state, the glint-assisted gaze tracking system uses previous information from a previous frame when analyzing the current frame to track pupil contours and glints in the current frame. When not in the tracking state, the glint-assisted gaze tracking system attempts to detect the pupil and glints in the current frame and, if successful, initializes the tracking state to "yes" and continues with the next frame in the tracking state.
As shown in fig. 6, the gaze tracking camera may capture left and right images of the left and right eyes of the user. The captured image is then input to the gaze tracking pipeline for processing beginning at 610. As indicated by the arrow returning to element 600, the gaze tracking system may continue to capture images of the user's eyes, for example, at a rate of 60 to 120 frames per second. In some embodiments, each set of captured images may be input to a pipeline for processing. However, in some embodiments or under some conditions, not all captured frames are pipelined.
At 610, for the currently captured image, if the tracking state is yes, the method proceeds to element 640. At 610, if the tracking state is no, the image is analyzed to detect a user's pupil and glints in the image, as indicated at 620. At 630, if the pupil and glints are successfully detected, the method proceeds to element 640. Otherwise, the method returns to element 610 to process the next image of the user's eye.
At 640, if proceeding from element 610, the current frame is analyzed to track the pupil and glints based in part on previous information from the previous frame. At 640, if proceeding from element 630, the tracking state is initialized based on the pupil and glints detected in the current frame. The results of the processing at element 640 are checked to verify that the results of the tracking or detection can be trusted. For example, the results may be checked to determine whether the pupil and a sufficient number of glints for performing gaze estimation are successfully tracked or detected in the current frame. If, at 650, the results are determined not to be trustworthy, then the tracking state is set to "no" at element 660 and the method returns to element 610 to process the next image of the user's eye. At 650, if the results are trusted, the method proceeds to element 670. At 670, the tracking state is set to "yes" (if not already "yes"), and the pupil and glint information is passed to element 680 to estimate the user's point of gaze.
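The tracking-state control flow of fig. 6 can be outlined as in the following sketch. Only the loop structure is shown; the detection, tracking, quality-check, and gaze-estimation steps are caller-supplied callables standing in for the underlying image processing, and the numbers in the comments refer to the elements described above.

```python
def gaze_tracking_pipeline(frames, detect, track, trusted, estimate):
    """Yield gaze-point estimates from a stream of eye-camera frames.

    detect(frame) -> result or None      # pupil/glint detection (620/630)
    track(frame, previous) -> result     # tracking using previous frame (640)
    trusted(result) -> bool              # quality check (650)
    estimate(result) -> gaze point       # gaze estimation (680)
    """
    tracking = False                           # tracking state, initially "no"
    previous = None
    for frame in frames:                       # frames captured at ~60-120 fps
        if tracking:                           # 610: tracking state is "yes"
            result = track(frame, previous)    # 640: reuse previous-frame info
        else:
            result = detect(frame)             # 620: detect pupil and glints
            if result is None:                 # 630: detection failed
                continue                       # return to 610 for the next image
        if not trusted(result):                # 650: verify the results
            tracking, previous = False, None   # 660: leave the tracking state
            continue
        tracking = True                        # 670: tracking state set to "yes"
        previous = result
        yield estimate(result)                 # 680: estimate the user's gaze point
```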
Fig. 6 is intended to serve as one example of an eye tracking technique that may be used in a particular implementation. As will be appreciated by one of ordinary skill in the art, in some embodiments, other eye tracking techniques, either currently existing or developed in the future, may be used in computer system 101 in place of or in combination with the glint-assisted eye tracking techniques described herein for providing an XR experience to a user.
In this disclosure, various input methods are described with respect to interactions with a computer system. When one input device or input method is used to provide an example and another input device or input method is used to provide another example, it should be understood that each example may be compatible with and optionally utilize the input device or input method described with respect to the other example. Similarly, various output methods are described with respect to interactions with a computer system. When one output device or output method is used to provide an example and another output device or output method is used to provide another example, it should be understood that each example may be compatible with and optionally utilize the output device or output method described with respect to the other example. Similarly, the various methods are described with respect to interactions with a virtual environment or mixed reality environment through a computer system. When examples are provided using interactions with a virtual environment, and another example is provided using a mixed reality environment, it should be understood that each example may be compatible with and optionally utilize the methods described with respect to the other example. Thus, the present disclosure discloses embodiments that are combinations of features of multiple examples, without the need to list all features of the embodiments in detail in the description of each example embodiment.
User interface and associated process
Attention is now directed to embodiments of user interfaces ("UIs") and associated processes that may be implemented on a computer system, such as a portable multifunction device or a head-mounted device, in communication with a display generating component, one or more sensors, and (optionally) one or more audio generating components.
Fig. 7A-7L illustrate examples of providing a computer-generated user experience session in an augmented reality (XR) environment. FIG. 8 is a flow chart of an exemplary method 800 for providing particles that move based on a respiratory characteristic of a user to a computer-generated user experience session. FIG. 9 is a flow diagram of an exemplary method 900 for providing a computer-generated user experience session with options selected based on characteristics of an XR environment. Fig. 10A-10B are flowcharts of an exemplary method 1000 for providing a soundscape with randomly selected planned sound components to a computer-generated user experience session. The user interfaces in fig. 7A to 7L are used to illustrate the processes described below, including the processes in fig. 8, 9, 10A, and 10B.
The figures and accompanying description are provided to describe various embodiments of a computer-generated user experience session in an XR environment. Various embodiments are described with respect to an example user experience session provided by a meditation application operating at a computer system. This user experience session is referred to as a "meditation session". The meditation session is described herein as having various stages or parts with respective visual and audio characteristics of the meditation session and/or respective parts of the meditation session. For example, meditation sessions are described as having an introductory portion, a instructional respiratory portion, a reactive portion, and an ending portion. In addition, meditation sessions are described as having various visual characteristics, such as, for example, virtual objects (e.g., shapes, particles, and/or user interfaces), virtual overlays, virtual wallpaper, and visual effects. Meditation sessions are also described as having various audio characteristics, such as, for example, planned soundtracks that may include sound effects, music, and instruction instructions. Meditation sessions are described as being provided in the XR environment. In some implementations, the XR environment is an AR environment. In some implementations, the XR environment is a VR environment. In some embodiments, various options, settings, parameters, and/or characteristics of the meditation session are determined based on whether the meditation session is performed in an AR environment or a VR environment. For example, in some implementations, the audio and/or visual characteristics for the meditation session in the VR environment are selected from a subset of the audio and/or visual characteristics available for the meditation session in the AR environment. In some embodiments, the visual and/or audio characteristics of the meditation session may optionally be randomly (or pseudo-randomly) selected in a manner that favors not repeating the audio and/or visual characteristics of the previous meditation session in order to provide a unique user experience for each session. In some embodiments, the audio and/or visual characteristics are selected pseudo-randomly. For example, a random number generator is used to select audio characteristics from a superset of audio characteristics. Similarly, a random number generator is used to select visual characteristics from a superset of visual characteristics. In some embodiments, when the audio and/or visual characteristics are selected pseudo-randomly (or randomly) and the selected audio and/or visual characteristics have been previously used (or have been previously used with other particular audio and/or visual characteristics), the selected audio and/or visual characteristics are ignored and a different audio and/or visual characteristic is selected pseudo-randomly (or randomly). It should be understood that aspects of the various embodiments described herein may be combined, rearranged and/or omitted in accordance with the examples described and illustrated in the respective figures. 
Randomly or pseudo-randomly selecting elements from a set of elements means selecting elements with significant randomness, based either on numbers generated from a truly random source (such as radioactive decay or another non-deterministic source) or on numbers generated from a deterministic source that are statistically random and produce an apparently random result, even though the result is not truly random in nature. Selecting elements from a set of elements with significant randomness produces results that have no pattern discernible with certainty by a typical user of the device.
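For illustration, a minimal sketch of pseudo-random selection that is biased against repeating recently used audio or visual characteristics might look like the following; the helper and the example option names are hypothetical.

```python
import random

def pick_with_no_repeat(options, recently_used, rng=random):
    """Pseudo-randomly pick an option, preferring options not used in recent
    sessions; falls back to the full set if everything has been used recently."""
    fresh = [o for o in options if o not in recently_used]
    return rng.choice(fresh if fresh else list(options))

# Hypothetical example: choosing a particle appearance for a new session.
particle_shapes = ["petal", "leaf", "droplet", "brush stroke", "cloud"]
shape = pick_with_no_repeat(particle_shapes, recently_used={"petal", "cloud"})
```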
FIG. 7A depicts a physical environment 700, which is a physical room that includes a physical table 700-1, a physical chair 700-2, and a physical drawing 700-3. The user 701 is in a physical environment 700, holding a device 702 (e.g., a tablet device or smart phone) and wearing an audio output device 703 (e.g., a headset or earbud) connected (e.g., wirelessly or through a wired connection) to the device 702. The device 702 includes a display 702-1 and one or more cameras 702-2. The device cameras are collectively referred to as cameras 702-2 and may include cameras located on the display side of the device (front-facing cameras) and/or cameras located on a different side of the device than the display (rear-facing cameras). In the embodiment depicted in FIG. 7A, the physical table 700-1, chair 700-2, and drawing 700-3 are within the field of view of the rear camera of the device 702, and a view of the physical environment is displayed on the device display 702-1 while the user experience session is inactive. The view of the physical environment is a representation 700a of the physical environment 700, including a representation 700-1a of a physical table 700-1, a representation 700-2a of a physical chair 700-2, and a representation 700-3a of a physical drawing 700-3. In the embodiments described herein, the device 702 is used to provide a user experience session (also referred to as a meditation session) in an XR environment displayed on a display 702-1 of the device 702. The audio of the user experience session is output at audio output device 703. However, in some implementations, different audio sources may be used to output audio, such as one or more speakers of device 702. Display 702-1 is a touch screen display that is operable to display an XR environment and detect user inputs (e.g., touch inputs, tap gestures, swipe gestures, and/or text inputs) for interacting with device 702, a user experience session, and/or the XR environment. In some implementations, the camera 702-2 may be used to detect user inputs (e.g., hand gestures, biometric inputs, and/or respiratory actions or gestures) for interacting with the device 702, the user experience session, and/or the XR environment. In some implementations, the device 702 includes a microphone that is operable to detect user input (e.g., voice commands and/or ambient sounds) for interacting with the device 702, user experience session, and/or XR environment.
In the embodiment depicted in fig. 7A-7L, device 702 is a computer system (similar to computer system 101 in fig. 1) for providing a user experience session in an XR environment. However, it should be appreciated that different types of computer systems may be used to provide user experience sessions in an XR environment. For example, instead of (or in addition to) using device 702, the computer system may be a Head Mounted Device (HMD) worn by user 701. In such implementations, the HMD includes a display component similar to display 702-1 and one or more sensors similar to camera 702-2. For example, the display may be an opaque display screen with display components and/or a transparent or translucent display through which the user 701 may directly view the physical environment 700 and on which virtual elements of the user experience session may be displayed or projected. The HMD may also include speakers and/or other audio output devices integrated into the HMD for providing audio output, as well as one or more cameras, microphones, and/or other sensors for capturing images (e.g., video and/or pictures) of the physical environment 700 (e.g., for display at the HMD and/or for detecting input) and receiving user input in the form of hand gestures, voice gestures, gaze gestures, and/or other inputs discussed herein. Although methods for providing user experience sessions in an XR environment are discussed herein with respect to device 702, it should be appreciated that the methods may be performed using other computer systems including, for example, HMDs.
Fig. 7B-7L depict a user interface at the device 702 for an embodiment in which meditation application activities and in some instances meditation user sessions are ongoing. Fig. 7B to 7I depict user interfaces for meditation sessions in an AR environment. Fig. 7J-7L depict user interfaces for meditation sessions in a VR environment.
In fig. 7B, the device 702 is providing a first portion of the meditation session (an introductory portion, a starting portion, and/or a portion preceding the guided breathing portion), and a virtual interface 705 overlaying the displayed representation 700a of the physical environment 700 is displayed via the display 702-1. In some implementations, the virtual interface is displayed as part of the AR environment for the user experience session. In such embodiments, the virtual interface includes a dimming effect 704 displayed over the representation 700a of the physical environment 700. The dimming effect allows, for example, 98% visibility of the physical environment by being displayed with 98% transparency (or 2% opacity) to dim the user's view of the physical environment. In some embodiments, the dimming effect may be greater or lesser and may apply varying amounts of dimming to different portions of the device display (e.g., less opacity of the dimming effect near the center of the display and greater opacity of the dimming effect toward the periphery of the device display). The dimming effect may indicate to the user 701 that the user is viewing the AR environment and, in some implementations, help focus the user on aspects of the user experience session by reducing distractions from the physical environment. In some implementations, the virtual interface is displayed as part of the VR environment for the user experience session. In such embodiments, the virtual interface is opaque and the physical environment 700 is not visible through the virtual interface. Examples of such interfaces are shown in fig. 7J and 7L, discussed in more detail below. The values and ranges discussed herein for various visual effects, such as dimming effects, are intended as non-limiting examples unless specifically indicated otherwise.
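As an example only, a dimming layer whose opacity is lower near the center of the display and higher toward the periphery might be computed as follows; the specific opacity values are illustrative and are not prescribed by the embodiments.

```python
def dimming_opacity(radial_fraction: float,
                    center_opacity: float = 0.02,
                    edge_opacity: float = 0.10) -> float:
    """Opacity of the dimming layer at a fractional distance from the display
    center (0.0 = center, 1.0 = periphery): lighter dimming at the center and
    heavier dimming toward the edges."""
    radial_fraction = max(0.0, min(1.0, radial_fraction))
    return center_opacity + (edge_opacity - center_opacity) * radial_fraction
```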
In some embodiments, virtual interface 705 is displayed as part of a user experience session in an XR environment. For example, in fig. 7B, the device 702 displays the virtual interface 705 as part of the meditation session provided by the meditation application operating at the device 702, and thus includes additional virtual elements associated with the meditation session. In particular, in fig. 7B, device 702 displays virtual object 710, virtual particle 712, and virtual menu 715 as part of an introductory portion or introductory phase of a meditation session in an XR environment. In some embodiments, a virtual interface 705 comprising a dimming effect 704, virtual objects 710, particles 712, and menu 715 is displayed by the device 702 in response to launching the meditation application in the XR environment (e.g., by detecting selection of a virtual icon for the meditation application in the XR environment and/or by detecting audio instructions to launch the meditation application in the XR environment). In some embodiments, virtual interface 705 may include virtual wallpaper.
In some embodiments, the virtual interface is displayed as a portion of the XR environment separate from or disassociated from the user experience session. For example, the virtual interface may be displayed as part of a virtual transfer room or virtual "home" environment that the device displays as part of an XR environment prior to starting the user experience session (e.g., prior to starting the meditation application and/or after starting the meditation application and prior to starting the user experience session). Sometimes, in such embodiments, device 702 does not display virtual object 710, particle 712, or menu 715, but optionally displays icons or other virtual elements that may be selected to manage the user's experience with the XR environment and/or device 702. For example, device 702 may display icons that may be selected to launch applications operating at the device and/or to access various settings of the XR environment and/or device 702. In some embodiments, user 701 may interact with and share an XR environment with other users, including applications launched in the XR environment.
The virtual object 710 is a graphic element that moves and changes appearance when the meditation application is active (e.g., during meditation session or not during meditation session). As shown in fig. 7B, virtual object 710 has a generally triangular macroscopic shape formed by the collection of particles 712. During the introductory portion of the meditation session, the device 702 displays the particles 712 as moving in a rhythmic mode such that the virtual object 710 has pulsatile, rocking, and/or other rhythmic movements (represented by line 714 in the figure) that convey a relaxed or soothing environment to the user 701. In some embodiments, the device 702 displays the particles 712 as moving closer together and then farther apart, causing the virtual object 710 to expand and contract in a rhythm pattern mimicking a predetermined breathing rhythm (or other biometric rhythm, such as heart beat, brain wave, or walking rate) while maintaining a generally triangular macroscopic shape. In some embodiments, virtual object 710 pulses (expands and contracts) at a rate faster or slower than a predetermined breathing rhythm. In addition, the virtual object 710 may have a macroscopic shape or appearance that is different from the macroscopic shape or appearance shown in fig. 7B. For example, in some embodiments, virtual object 710 is a two-dimensional object having a macroscopic shape that is a circle, square, triangle, rectangle, or abstract two-dimensional shape. In some embodiments, virtual object 710 is a three-dimensional object having a macroscopic shape that is a sphere, cube, cuboid, pyramid, or abstract three-dimensional shape (such as a cloud), or a moving arrangement of particles having the appearance of a floating mist or a swirling brush stroke.
In the embodiment shown in fig. 7B-7I, particles 712 are shown as having a triangular shape. However, the particles may have different two-dimensional and/or three-dimensional shapes or appearances, such as circles, spheres, pyramids, petals, leaves, squares, cubes, clouds, droplets, brush strokes, or any combination thereof. In some embodiments, some of the particles have a uniform spacing between adjacent particles. In some embodiments, some of the particles have non-uniform spacing between adjacent particles. In some embodiments, some of the particles have an overlapping arrangement. In some embodiments, as virtual object 710 moves in a rhythmic pattern, the particles move between various appearances and spacing arrangements. For example, in the expanded state of the virtual object 710, the particles 712 have a spaced arrangement, and in the contracted state of the virtual object 710, the particles 712 have an overlapping or smaller pitch arrangement. Further, the virtual object 710 and/or the particle 712 may have different visual characteristics, such as an animation effect or appearance, a translucency of the particle (e.g., partial translucency or full translucency), a simulated reflection of light detected in a physical environment (e.g., a simulated reflection from the particle and/or a macroscopic shape and/or a simulated curvature of the light), a simulated reflection of virtual lighting, or a combination thereof. In some embodiments, the visual characteristics of the particles 712 and/or virtual objects 710 change based on various criteria (such as, for example, physical environment, XR environment, state of meditation session, movement of the user and/or device 702, user input, and/or any combination thereof). In some embodiments, the visual characteristics of the user experience session are randomly (or pseudo-randomly) generated by the device 702. For example, in some embodiments, each time a meditation application is initiated, the device 702 generates virtual objects and/or particles having different, randomly selected visual characteristics. In some implementations, the visual characteristics are randomly selected in a manner that favors a set of visual characteristics that do not repeat a previous display of the user experience session.
In fig. 7B, the device 702 displays a menu 715, which is a virtual menu user interface for selecting or modifying aspects of the meditation session. The menu 715 includes text 715-1 that provides the user 701 with context for the meditation session (e.g., informs the user that the meditation application is active, indicates that the user experience session is a meditation session, and/or instructs the user to prepare for the meditation session). The menu 715 also includes an option element 720 selectable to display an options menu for the meditation session. The option element 720 further includes a duration indication 720-1 indicating the currently selected duration of the meditation session (e.g., the duration of one or more portions of the meditation session and/or the combined duration of the guided breathing and reactive portions of the meditation session and, optionally, the ending portion) and a coach indication 720-2 indicating the currently selected coach for the meditation session. As shown in fig. 7B, the coach indication 720-2 indicates that a male coach is currently selected for the meditation session, and the duration indication 720-1 indicates that the duration of the meditation session is selected to be 10 minutes. The menu 715 also includes a start element 725 that is selectable (e.g., via input 724 and/or via audio input) to start the meditation session by transitioning from the introductory portion of the meditation session to the guided breathing portion. In some embodiments, the meditation application is active, but the meditation session is not considered to be ongoing, during the introductory portion and the ending portion. In such embodiments, the meditation session is considered to be started by selecting the start element 725. In some embodiments, the meditation session is considered to be ongoing during the introductory portion and the ending portion. In such embodiments, the meditation session is started by launching the meditation application, and selecting the start element 725 transitions from one portion of the meditation session to a different portion of the meditation session (e.g., from the introductory portion to the guided breathing portion or to another portion of the meditation session).
Fig. 7B also depicts an audio schematic 707 that is a schematic representation of some characteristics of the audio output by the device 702 for the user experience session. The audio schematic 707 is not part of the user interface, but is provided for a better understanding of the described technology. Audio schematic 707 shows the relative positioning of user 701 and audio output device 703 in physical environment 700 (showing a top-down view of the user's head). The audio schematic 707 further includes an audio indicator 711 representing a particular soundscape output during the meditation session and spatial audio indicators 709-1 and 709-2 indicating the perceived position of the audio of the meditation session relative to the position of the user's head (i.e., the position from which the user 701 perceives the audio to originate when it is played using the audio output device 703). In the implementation depicted in fig. 7B, the audio is output in stereo. Thus, spatial audio indicator 709-1 represents a perceived location of an audio channel (e.g., left channel and/or first channel) adjacent to the left side of the user's head, and spatial audio indicator 709-2 represents a perceived location of another audio channel (e.g., right channel and/or second channel) adjacent to the right side of the user's head. Thus, as depicted in fig. 7B, the user 701 is facing forward (e.g., facing the physical environment 700 as shown in fig. 7A and/or facing the device 702), and the device 702 is causing the soundscape of the meditation session to be played in stereo using the audio output device 703. In some implementations, the audio output in the implementation depicted in fig. 7B is a preview of the soundscape to be played during the meditation session (e.g., after the introductory portion, during the guided breathing portion, during the reactive portion, and/or during the ending portion). In some implementations, the soundscape is played with different audio characteristics during the preview than during the meditation session. For example, the audio is output at a lower volume, as indicated by the smaller size of the audio indicator 711 in fig. 7B (compared to fig. 7D). As another example, as shown in fig. 7D and discussed below, during the preview the soundscape is output in stereo rather than with full audio immersion (referred to herein as "spatial audio") having three-dimensionally perceived spatial locations.
In some implementations, the device 702 uses the set of planned sound components to create a sound scene for a particular user experience session. The device assembles the sound components to provide a harmonious audio experience for the user. In some embodiments, soundtracks for a particular user experience session are created to convey a particular emotion or theme. For example, for meditation sessions, soundtracks are created to provide relaxed audio that helps users focus on their breathing rhythms or on specific subjects or topics. In some implementations, the device 702 randomly (or pseudo-randomly) selects sound components to create a sound scene. In some embodiments, the sound component is randomly selected, but biased to not repeat a previously created or previously used sound scene. In some embodiments, the sound components available for creating the corresponding soundscapes are selected to be harmonious when played together at randomly selected times. In some implementations, the sound component selected for the soundscape is cyclically repeated for the user experience session or for a portion of the user experience session. In some embodiments, the device 702 incorporates randomly (or pseudo-randomly) generated sound components that are played at various (in some embodiments, random) moments throughout the soundscape in order to introduce diversity to the soundscape. In some embodiments, the soundscapes have common starting and ending sounds. In some implementations, the audio characteristics of the soundscape change based on the animation of the virtual object 710. For example, the audio volume is modulated with a pulse animation of the virtual object 710, increases as the virtual object 710 expands, and decreases as the virtual object 710 contracts. In some implementations, the spatial position of the audio changes as the pulses of virtual object 710 animate. For example, when an object expands, the audio sounds as if it is moving closer towards the user, and when the object contracts, the audio sounds as if it is moving away from the user.
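Purely as an illustration of assembling a soundscape from planned sound components with a bias against repeating the previous session's selection, and of modulating volume with the pulse animation, a sketch might look like the following; the component names, the shared start/end sound, and the volume formula are assumptions rather than details of the described embodiments.

```python
import random

def assemble_soundscape(component_library, count, previous_components, rng=random):
    """Pick `count` planned sound components for a session, biased against the
    components used in the previous session, framed by shared start/end sounds."""
    candidates = [c for c in component_library if c not in previous_components]
    if len(candidates) < count:
        candidates = list(component_library)
    layers = rng.sample(candidates, min(count, len(candidates)))
    return {"start": "bell", "layers": layers, "end": "bell"}

def modulated_volume(base_volume, expansion):
    """Scale volume with the pulse animation: louder as the virtual object
    expands (expansion -> 1.0), quieter as it contracts (expansion -> 0.0)."""
    return base_volume * (0.6 + 0.4 * expansion)

# Hypothetical usage:
library = ["low drone", "wind layer", "chime pattern", "soft pad", "water texture"]
scene = assemble_soundscape(library, count=3, previous_components={"low drone"})
```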
In fig. 7B, device 702 detects input 722 on option element 720 and, in response, displays an options menu 730, as shown in fig. 7C. The options menu 730 is a user interface for customizing various settings of the meditation session. Options menu 730 includes duration option 732, cadence option 734, and guide option 736. Duration option 732 may be selected to set the duration of the meditation session. As shown in fig. 7C, duration option 732-2 is currently selected and the duration of the meditation session is set to 10 minutes. Other available duration options include a five minute duration option 732-1 and a fifteen minute duration option 732-3. Cadence option 734 may be selected to set a defined breathing rhythm for the meditation session. In some embodiments, an animation effect (such as the pulsing rhythm of virtual object 710) is based on the selected breathing rhythm. As shown in fig. 7C, cadence option 734-2 is currently selected and the breathing cadence is set to seven breaths per minute (bpm). Other available cadence options include a five bpm cadence option 734-1 and a ten bpm cadence option 734-3. Guide option 736 may be selected to choose a coach for the meditation session. The selected coach is associated with one or more audio recordings (e.g., audio guidance 740 and/or other audio recordings) that the device 702 outputs as part of the soundscape during the meditation session, thereby providing instructions, encouragement, and/or guidance to the user 701 for the meditation session. In some embodiments, the audio recordings consist of different scripts recorded by the coach for playback during the various portions of the meditation session. Each coach has their own distinct voice, with a speaking cadence, pitch, intonation, accent, and other speech characteristics unique to the selected coach. As shown in fig. 7C, coach option 736-1 is currently selected, which indicates that a male coach is selected for the meditation session. The other available coach option is female coach option 736-2.
While the device 702 is displaying the options menu 730, the device 702 continues to display the virtual object 710 with the animated, pulsing effect represented by line 714. In fig. 7C, virtual object 710 is shown having an expanded state, as indicated by the increased spacing between particles 712 and the increased size of particles 712 (when compared to fig. 7B). In some embodiments, the spacing of the particles is changed without changing the size of the particles. The device 702 continues to output a preview of the soundscape in stereo at the audio output device 703.
In fig. 7C, device 702 detects input 731 selecting the five minute duration option 732-1, input 733 selecting the five bpm cadence option 734-1, and input 735 selecting the female coach option 736-2. In response to the respective inputs, the device 702 updates the settings of the meditation session based on the selected options. Specifically, the duration of the meditation session is changed from ten minutes to five minutes, the breathing rhythm is changed from 7 bpm to 5 bpm, and the coach is changed from a male coach to a female coach. The device 702 then detects an input 739 selecting the completion affordance 738. In response to detecting input 739, device 702 displays menu 715, which is similar to the menu shown in fig. 7B but is updated based on the selected options. For example, option element 720 is updated to show the selected duration of five minutes and updated to include a representation of the female coach. The device 702 continues to display the animated movement of the virtual object 710 with the rhythmic pulse at the selected breathing rhythm, which is now changed to five bpm.
With the changes selected in fig. 7C, fig. 7D and 7E illustrate embodiments in which the device 702 outputs a second portion (e.g., a guided breathing portion, a portion subsequent to the introductory portion, and/or a non-introductory portion) of the meditation session in response to an input (e.g., input 724 and/or an audio input) selecting the start element 725. When the user selects the start element 725, the device 702 transitions from the introductory portion of the meditation session to the guided breathing portion of the meditation session shown in fig. 7D and 7E. When transitioning from the introductory portion of the meditation session to the guided breathing portion, the device 702 increases the user's visual and audio immersion by displaying virtual object 710 growing to a larger size, increasing the dimming effect 704, and transitioning from outputting audio in stereo to outputting spatial audio. During the transition, the device 702 increases the dimming effect 704 by gradually increasing the opacity of the dimming effect from, for example, 2% to 95% or from 5% to 90%, and displays the virtual object 710 and optionally the particles 712 as growing to a larger average size than in the introductory portion. In some embodiments, virtual object 710 and particles 712 are output as three-dimensional objects. In some embodiments, the transition from the introductory portion to the guided breathing portion includes displaying the virtual object and particles transitioning from two-dimensional objects to three-dimensional objects. In some embodiments, other visual characteristics of virtual object 710 and particles 712 change during the transition. For example, the object is displayed with increased brightness, increased translucency, and/or with varying reflection of simulated or detected light. The device 702 increases the user's audio immersion by outputting the soundscape with an audio effect at the audio output device 703, whereby the volume of the audio gradually increases (as indicated by the larger size of the audio indicator 711) and the audio sounds as if it were moving from the right and left stereo positions to different perceived positions around the user, as discussed in more detail below. In some embodiments, the device 702 outputs, via the audio output device 703, a starting sound (such as, for example, a bell or ringing sound) indicating the transition from the introductory portion to the guided breathing portion. In some embodiments, when transitioning to the end of the meditation session, the device 702 outputs the same bell or ringing sound via the audio output device 703.
In the guided breathing portion, device 702 provides a combination of visual and audio effects to help user 701 focus their breathing at a controlled breathing rate. The device 702 displays the virtual object 710 and optionally the particles 712 as having a larger average size than in the introductory portion. In some embodiments, the device 702 displays the particles 712 as being spaced apart by a greater amount (e.g., a greater average amount and/or a greater instantaneous amount) than during the introductory portion. During the guided breathing portion of the meditation session, the device 702 displays the virtual object 710 as expanding and contracting at a rate set by the selected breathing rhythm, and outputs (e.g., using the audio output device 703 and/or an audio generating component integrated into the HMD) audio guidance 740 instructing the user 701 to control their breathing to match the selected breathing rhythm. For example, because the user changed the breathing cadence to 5 bpm using input 733, device 702 displays virtual object 710 as expanding and contracting at a rate that matches the breathing cadence of five breaths per minute, and outputs a soundscape having audio characteristics that encourage the user to conform their own breathing rate to the selected breathing cadence of five breaths per minute. In some embodiments, the virtual object 710 expands to a larger size or contracts to a smaller size in the guided breathing portion than during the introductory portion of the meditation session.
Fig. 7D and 7E depict respective moments of the guided breathing portion of the meditation session. In particular, fig. 7D depicts a contracted state of virtual object 710 that is consistent with the expectation that user 701 is exhaling, or has just completed exhaling and is about to inhale. When the virtual object is displayed in the contracted state, device 702 outputs the soundscape with audio guidance 740-1, in the voice of the female coach (selected via input 735), instructing the user to inhale. The device 702 then displays the virtual object 710 as expanding at a steady rate (presumably while the user is inhaling) until it reaches an expanded state, as depicted in fig. 7E, consistent with the expectation that the user 701 has inhaled and is about to exhale. Device 702 then outputs audio guidance 740-2 at audio output device 703 instructing user 701 to begin exhaling. The device 702 then displays the virtual object 710 as contracting at a steady rate (presumably while the user is exhaling) until it reaches the contracted state, as depicted in fig. 7D. This process is repeated during the guided breathing portion of the meditation session. In some embodiments, the process is repeated for a predetermined amount of time (e.g., half or one third of the duration selected in fig. 7C). In some embodiments, the process is repeated for a predetermined number of breathing cycles (e.g., seven inhalations and seven exhalations, or ten inhalations and ten exhalations). In some embodiments, the process is repeated until device 702 determines that user 701 has matched their breathing rhythm to the selected breathing rhythm. In some embodiments, the device 702 includes a sensor and/or camera for detecting the breathing rhythm of the user to determine whether the breathing rhythm of the user matches the selected breathing rhythm.
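For illustration only, the expansion and contraction of the virtual object at a selected breathing rhythm can be modeled as a periodic scale factor, as in the sketch below; the scale range and the cosine profile are assumptions rather than details of the described embodiments.

```python
import math

def object_scale(t_seconds: float,
                 breaths_per_minute: float = 5.0,
                 min_scale: float = 0.8,
                 max_scale: float = 1.2) -> float:
    """Scale factor for the virtual object at time t: it expands during the
    inhale half of each breath cycle and contracts during the exhale half,
    at the selected cadence (5 bpm -> one 12-second cycle)."""
    cycle = 60.0 / breaths_per_minute
    phase = (t_seconds % cycle) / cycle                    # 0..1 through one breath
    blend = 0.5 - 0.5 * math.cos(2.0 * math.pi * phase)    # smooth expand/contract
    return min_scale + (max_scale - min_scale) * blend
```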
During the guided breathing portion of the meditation session, while the device outputs audio guidance 740, the device 702 continues to output the soundscape (represented at least in part by audio indicator 711) and displays the virtual object 710 as expanding and contracting at the selected breathing rhythm. In some embodiments, one or more components of the soundscape audio are output as spatial audio. For example, as shown in fig. 7D, the audio schematic 707 includes spatial audio indicators 709-3 through 709-9, each of which represents a perceived position of audio (e.g., the soundscape and/or audio guidance) of the meditation session relative to the position of the user's head. Thus, during the guided breathing portion of the meditation session, the user 701 perceives the audio as being located at multiple points around the user's head. In some embodiments, the spatial position of the audio changes during the meditation session. For example, as virtual object 710 is expanding, device 702 adjusts the spatial position of the audio to sound as if the particles 712 are moving closer toward user 701, and as virtual object 710 is contracting, device 702 adjusts the spatial position of the audio to sound as if the particles 712 are moving away from user 701. This movement of the spatial audio is represented by audio indicators 709-3 through 709-9, which have positions closer to the user's head in fig. 7E and positions farther from the user's head in fig. 7D. In some implementations, the device 702 outputs sound effects as part of the soundscape. In some implementations, portions of the soundscape are selected based on an animation effect of the virtual object and/or particles. For example, as virtual object 710 expands and the particles 712 move closer toward user 701, the soundscape may include an increased hissing sound, and a reduced hissing sound as the virtual object contracts and the particles move away from user 701.
Fig. 7F-7H depict embodiments in which the device 702 has transitioned from the guided breathing portion of the meditation session to a third portion (e.g., a reactive portion and/or a portion subsequent to the guided breathing portion). As depicted in fig. 7F, the device 702 transitions from the guided breathing portion of the meditation session to the reactive portion and displays the virtual object in a proliferated state in which the particles 712 are displayed in a three-dimensional arrangement. In some implementations, the device 702 displays the transition as an animated effect depicting the expansion of the particles 712 from the configuration forming the virtual object 710 to the three-dimensional arrangement depicted in fig. 7F.
In the reactive portion, the device 702 prompts the user 701 to focus on or consider a particular topic or theme, and provides a combination of visual and audio effects that create a soothing, relaxed environment to help the user focus on that topic. In some embodiments, the reactive portion builds on the relaxation of the user 701 achieved during the guided breathing portion of the meditation session. During the reactive portion, the device 702 continues to output the soundscape and displays particles 712 that move based on detection of the user's breathing. When the device 702 detects inhalation by the user 701, the device displays particles 712 as moving toward and/or around the user 701. When the device 702 detects exhalation by the user 701, the device displays particles 712 as moving away from the user 701. Fig. 7F and 7G depict examples of display states of the meditation session based on detection of the user's breathing. In some embodiments, the device 702 displays particles that move based on detecting different biometric inputs (such as heart rate, brain waves, or walking speed).
For example, fig. 7F depicts particles 712 in an XR environment prior to inhalation (or after exhalation) by a user, and fig. 7G depicts particles 712 in an XR environment after inhalation (or prior to exhalation) by a user. In response to detecting user inhalation, the device 702 displays the particles 712 as moving toward the user 701, e.g., from the arrangement in fig. 7F to the arrangement in fig. 7G. Conversely, when the device 702 detects a user exhaling, the device displays the particles 712 as moving away from the user 701, e.g., from the arrangement in fig. 7G to the arrangement in fig. 7F. Generally, the process continues with moving particles based on the detected respiration of the user for the duration of the reactive portion of the meditation session.
The embodiments depicted in fig. 7F and 7G are provided as non-limiting examples of movement of particles 712 in response to detecting respiration of user 701. In some implementations, the device 702 displays the particles as moving differently based on the user's breath. For example, in some embodiments, the magnitude of movement of particles 712 is based on the magnitude and/or duration of the user's breath (inward and/or outward). Thus, if the device detects that the user inhales with a greater magnitude and/or duration than in the example discussed above, the device 702 displays the particles 712 as having a greater amount of movement toward the user 701 than shown in fig. 7G. Similarly, if the device detects that the user inhales with a smaller amplitude and/or duration than in the examples discussed above, the device 702 displays the particles 712 as having less movement toward the user 701 than shown in fig. 7G. In some embodiments, the displayed movement of the particles away from the user similarly varies based on the amplitude and/or duration of the user's exhalation as detected by device 702.
In some embodiments, when the user exhales, the device 702 displays the particles 712 as moving at a greater movement speed than when the user inhales (or vice versa). In some embodiments, device 702 displays particles 712 in a three-dimensional arrangement, with some particles being displayed as appearing closer to user 701 and some particles being displayed as appearing farther from user 701. For example, in FIG. 7F, particle 712-1 is shown as appearing closer to user 701 than particle 712-2. In some embodiments, particles displayed closer to user 701 are shown as having a greater amount of movement (e.g., distance and/or amplitude) in the XR environment than particles displayed farther from user 701 when the user breathes. In some implementations, the device 702 displays the particles 712 as having a floating or rocking movement when they are not moving based on the user's breath (e.g., between the user's breaths and/or when the user's breath is below a threshold amplitude). In some embodiments, the device 702 displays particles 712 that move according to simulated physical parameters (such as simulated inertia, spring constant, and/or friction). In some embodiments, the device 702 displays the particles 712 as moving off-screen or on-screen as the user breathes. For example, when the device 702 detects a user inhalation, the device displays particles 712 as moving toward the user, with some particles or portions thereof moving off-screen as the user is inhaling, simulating particles moving past the user and/or out of the user's field of view. Similarly, when the device 702 detects exhalation by the user, the device displays particles 712 as moving away from the user, with some particles or portions of particles moving on-screen, simulating particles moving into the user's field of view.
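The depth-dependent movement and the gentle idle motion described above could be modeled with two small helper functions, sketched below in Swift. The function names, distance range, and oscillation constants are illustrative assumptions rather than values taken from this disclosure.

```swift
import Foundation

/// Returns a per-particle movement scale so that particles displayed closer to
/// the viewer travel farther during a breath than particles displayed farther
/// away. The distance range and scale range are illustrative.
func movementScale(distanceFromViewer: Float,
                   nearDistance: Float = 0.5,
                   farDistance: Float = 4.0) -> Float {
    let clamped = min(max(distanceFromViewer, nearDistance), farDistance)
    let t = (clamped - nearDistance) / (farDistance - nearDistance)
    return 1.0 - 0.8 * t   // 1.0 at the near distance, 0.2 at the far distance
}

/// Returns a small, slow oscillation used for the floating or rocking motion
/// between breaths; `seed` offsets the phase so particles do not move in unison.
func idleSwayOffset(time: Double, seed: Double) -> SIMD3<Float> {
    let x = Float(0.01 * sin(time * 0.7 + seed))
    let y = Float(0.01 * sin(time * 0.9 + seed * 1.3))
    let z = Float(0.01 * sin(time * 0.5 + seed * 2.1))
    return SIMD3<Float>(x, y, z)
}
```

Under these assumptions, the scale returned by movementScale would multiply the breath-driven displacement so that nearby particles travel farther than distant ones, while idleSwayOffset supplies the small floating motion between breaths.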
During the reactive portion of the meditation session, as the device outputs audio guidance 740, the device 702 continues to output the soundscape (represented at least in part by audio indicators 711) and display particles 712 that move based on the detected user breath. In some embodiments, when the reactive portion begins, the device 702 outputs audio guidance 740-3 that prompts the user (in the voice of the coach) to focus on a particular topic or theme. The device 702 outputs the soundscape as spatial audio and adjusts the audio characteristics of the soundscape during the reactive portion of the meditation session. For example, as the particles 712 move toward the user, the device 702 adjusts the spatial position of the audio to sound as if the particles 712 are moving closer toward the user 701, and as the particles 712 move away from the user, the device 702 adjusts the spatial position of the audio to sound as if the particles 712 are moving away from the user 701. For example, in fig. 7F, audio schematic 707 shows spatial audio indicators 709-3 through 709-9 positioned to the sides and front of the representation of the user's head, which indicate that the audio is perceived to originate from the front and sides of the user's head. As the user inhales and, as shown in fig. 7G, the particles 712 are displayed as moving toward and around the user 701, the device 702 adjusts the spatial position of the audio so as to sound as if the particles 712 are moving around (including behind) the user's head, as indicated by the positions of the spatial audio indicators 709-3 through 709-9 in fig. 7G. When the user exhales and the particles 712 are displayed as moving away from the user 701 to the arrangement shown in fig. 7F, the device 702 adjusts the spatial position of the audio to sound as if the particles were moving back to positions at the sides and front of the user, as depicted in fig. 7F.
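One way to realize this coupling between breath-driven particle motion and perceived audio position is to interpolate each soundscape component between two anchor arrangements. The Swift sketch below assumes hypothetical anchor arrays and an inhaleProgress value supplied by the breath-detection pipeline; it is not the spatial audio API of any particular framework.

```swift
/// Interpolates the perceived spatial position of each soundscape component
/// between a "front and sides" arrangement (exhaled state, as in fig. 7F) and
/// a "surrounding the head" arrangement (inhaled state, as in fig. 7G).
/// `inhaleProgress` is 0 when fully exhaled and 1 when fully inhaled.
func soundscapeSourcePositions(frontAndSides: [SIMD3<Float>],
                               surround: [SIMD3<Float>],
                               inhaleProgress: Float) -> [SIMD3<Float>] {
    precondition(frontAndSides.count == surround.count,
                 "Both arrangements must describe the same set of components")
    let t = min(max(inhaleProgress, 0), 1)
    return zip(frontAndSides, surround).map { start, end in
        start + (end - start) * t
    }
}
```

The returned positions would then be handed to whatever spatial audio renderer the system uses, so the soundscape appears to surround the head at full inhalation and to retreat to the front and sides at full exhalation.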
In some embodiments, the device 702 continues the reactive portion of the meditation session for a predetermined amount of time (e.g., half or one third of the duration selected in fig. 7C), continuing to modify the display state of the particles based on the detected user breath. In some embodiments, the reactive portion continues for a predetermined number of respiratory cycles (e.g., seven inhalations and seven exhalations or ten inhalations and ten exhalations). In some embodiments, this process continues until the device 702 determines that the user 701 is no longer focusing on the meditation session or has indicated that the user wishes to end the meditation session.
Fig. 7H depicts an embodiment in which the device 702 has detected that the user is not focused on the meditation session. As shown in fig. 7H, the user 701 has diverted their attention from the meditation session, rotating their head and the positioning of the device 702 to see the person 745 located in the physical environment 700. In some embodiments, the device 702 determines whether the user is focused on the meditation session by detecting different attention- and concentration-based indicators, such as the user's gaze, respiratory rhythm, heart rate, brain waves, body movement, movement of the device 702, detected sounds (e.g., sounds of another person nearby, background sounds, and/or sounds indicative of movement or agitation of the user), and/or other inputs detected by the device 702 and indicative of the user being distracted. In some embodiments, the device 702 detects that the user is distracted when the user's concentration has drifted from the meditation session for at least a threshold amount of time (e.g., a non-zero amount of time, 2 seconds, and/or 5 seconds). In some embodiments, the device 702 detects that the user has regained focus based on detecting attention- and concentration-based indicators. For example, in some embodiments, device 702 determines that the user has regained focus when the user has returned to the position they were in before being distracted (e.g., facing forward and/or eyes focused on particles in the XR environment) and their breathing has returned to the respiration rate from before the distraction.
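A simple way to implement the "drifted for at least a threshold amount of time" check is to track how long the combined focus indicators have been negative. The Swift sketch below is a hypothetical illustration; the AttentionMonitor type, the five-second default, and the single boolean isFocused input (which a real system would derive from gaze, head pose, breathing, and sound cues) are assumptions, not elements of this disclosure.

```swift
import Foundation

/// Tracks whether the user's attention has drifted from the session for at
/// least a threshold amount of time. Names and the threshold are illustrative.
struct AttentionMonitor {
    var distractionThreshold: TimeInterval = 5.0
    private var distractionStart: Date?

    /// Returns true once focus has been lost continuously for the threshold.
    mutating func update(isFocused: Bool, now: Date = Date()) -> Bool {
        if isFocused {
            distractionStart = nil
            return false
        }
        let start = distractionStart ?? now
        distractionStart = start
        return now.timeIntervalSince(start) >= distractionThreshold
    }
}
```

Calling update(isFocused:) each frame returns true only after sustained loss of focus, at which point the session could be paused and a refocusing prompt such as audio guidance 740-4 could be output.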
In some embodiments, when the device 702 detects that the user 701 is distracted, the device 702 prompts the user to refocus on the meditation session. For example, as shown in fig. 7H, the device prompts the user to focus on their breath, as indicated by audio guidance 740-4. In some implementations, the device 702 pauses the meditation session (e.g., pauses movement of the particles 712 and/or modifies or pauses output of the soundscape) when the user is distracted. In some embodiments, the device 702 continues the current portion of the meditation session when the device determines that the user has regained focus. In some embodiments, the device 702 returns to the guided breathing portion of the meditation session to help the user regain focus. Fig. 7H shows the user being distracted during the reactive portion; however, the device is able to detect that the user is distracted in other portions of the meditation session, such as the guided breathing portion. In such embodiments, the device pauses the guided breathing portion and encourages the user to refocus on the meditation session in a manner similar to that described above for a distraction occurring during the reactive portion.
In some implementations, the user's location (e.g., approximated by the location and/or positioning of device 702) and virtual objects (e.g., virtual object 710 and/or particles 712) in the XR environment are world-locked. For example, in fig. 7H, as the user rotates the positioning of the device 702, the device 702 displays the particles 712 from the changed perspective of the device 702, because the device has rotated from the positioning in fig. 7G to the positioning in fig. 7H. For example, the device 702 displays particle 712-3 in both fig. 7G and fig. 7H. However, in FIG. 7H, because the user has rotated the device 702 from the previous position in FIG. 7G, and because the positioning of the particles 712 is world-locked, particle 712-3 is displayed in FIG. 7H in an orientation that is rotated relative to the orientation in which particle 712-3 was previously displayed in FIG. 7G. Thus, as the user rotates or moves device 702, the displayed view of the XR environment changes based on the world-locked configuration. In some implementations, the world-locked configuration is demonstrated by device 702 modifying the output of the soundscape to preserve the spatial audio location of the soundscape relative to the XR environment as the user moves. For example, as shown in fig. 7H, the user's head rotates to the left. Instead of keeping the spatial audio locked to the positioning of the user's head, device 702 adjusts the spatial audio to remain fixed relative to the XR environment, as indicated by spatial audio indicators 709-3 through 709-9 maintaining their positions from FIG. 7G. Thus, as the user moves their head, the user perceives the audio as keeping the same relative position in the XR environment. In some embodiments, such as, for example, when device 702 is an HMD, particles 712 are spatially arranged around a viewpoint of user 701 and move relative to the viewpoint of the user as the particles move in the XR environment (e.g., as described with respect to fig. 7D, 7E, 7F, 7G, 7H, and/or 7I).
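Keeping a sound source fixed in the environment while the head turns amounts to re-expressing its world-space position in the listener's head frame on every frame. The Swift sketch below handles only yaw for brevity and uses hypothetical parameter names; it is an illustration of the world-locked behavior described above, not the audio pipeline of any specific system.

```swift
import Foundation

/// Re-expresses a world-locked audio source position in the listener's head
/// frame so the source keeps its place in the environment as the head turns.
/// Only yaw (rotation about the vertical axis) is handled, for brevity.
func headRelativePosition(worldPosition: SIMD3<Float>,
                          headPosition: SIMD3<Float>,
                          headYawRadians: Double) -> SIMD3<Float> {
    let relative = worldPosition - headPosition
    // Rotate by the inverse of the head yaw about the vertical (y) axis.
    let c = Float(cos(-headYawRadians))
    let s = Float(sin(-headYawRadians))
    return SIMD3<Float>(c * relative.x - s * relative.z,
                        relative.y,
                        s * relative.x + c * relative.z)
}
```

Feeding the returned head-relative position to the spatial audio renderer each frame keeps the perceived source stationary in the room even as its head-relative coordinates change.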
In some implementations, the user 701 may interact with virtual objects in an XR environment. For example, the particles 712 may react to a user's position or touch detected in an XR environment. In some implementations, device 702 determines that the body of the user (e.g., the hand, wrist, and/or arm) is co-located with particles 712 in the XR environment, and in response, displays particles 712 as changing appearance (e.g., changing color, moving based on the user's touch, emitting light, and/or becoming more or less translucent). In some embodiments, user 701 may interact with other users in an XR environment. For example, in some embodiments, representation 745a is a virtual representation of person 745 sharing an XR environment with user 701. In some embodiments, representation 745a may interact with user 701 and, optionally, view and/or manipulate virtual objects displayed as part of the meditation session.
Fig. 7I depicts the transition of the device 702 from the reactive portion of the meditation session to a fourth portion (e.g., an ending portion and/or a portion following the reactive portion). In some embodiments, the ending portion transitions the user out of the meditation session and brings the user's attention back to the physical world (or to a different experience in the XR environment). In some implementations, the device 702 displays the particles 712 as moving together (e.g., with an animated flight effect and/or with coordinated movement) to form the virtual object 710. In some implementations, the device 702 reduces the dimming effect 704 (e.g., reduces the opacity of the dimming effect and/or otherwise reduces the dimming effect), thereby increasing the visibility of the representation 700a of the physical environment 700 on the device 702. In some embodiments, the transition to the ending portion is an inversion of the audio and visual effects provided in the transition from the introductory portion to the guided breathing portion. In some embodiments, the device 702 darkens the appearance of the particles 712 and reduces the movement of the particles 712 relative to their movement in the guided breathing and reactive portions of the meditation session. In some implementations, the device 702 displays the virtual object 710 as pulsating in a manner similar to that in the introductory portion. In some implementations, the device 702 displays the virtual object 710 with less movement than in the introductory portion.
As shown in fig. 7I, when transitioning to the ending portion, device 702 transitions the soundscape audio from spatial audio output to stereo, gradually decreases the volume of the soundscape (as indicated by the smaller size of audio indicator 711), and outputs audio guidance 740-5 in the coach's voice prompting the user to increase their awareness of their surroundings. In some embodiments, device 702 outputs an audio effect via audio output device 703, such as a bell or ringing sound, similar to the sound output when transitioning from the introductory portion to the guided breathing portion. In some embodiments, when transitioning to the ending portion, the device 702 outputs the same audio that was output when transitioning from the introductory portion to the guided breathing portion. In some embodiments, the device 702 outputs the same starting sound for each meditation session (e.g., when transitioning from the introductory portion to the guided breathing portion, when ending the introductory portion, and/or when starting the guided breathing portion). In some embodiments, the device 702 outputs the same ending sound for each meditation session (e.g., when transitioning to the reactive portion and/or when transitioning to the ending portion). In some embodiments, the starting sound is different from the ending sound of the corresponding meditation session. In some embodiments, the starting sound is the same as the ending sound, and the sound is different for a plurality of respective meditation sessions or unique for each respective meditation session.
As depicted in fig. 7I, the device 702 displays a menu 750 in the ending portion of the meditation session. The menu 750 includes text providing historical data for the meditation sessions that the user has completed. Text 754 indicates that the user 701 has spent five minutes in the meditation session today, and text 756 indicates that the user has completed at least one meditation session in the last three days of the week. Other historical data may be provided, such as, for example, the number of meditation sessions, the specific amount of time spent in each meditation session, and/or the average amount of time spent in meditation sessions. Menu 750 also includes a coach indicator 752 indicating the female coach providing audio guidance 740 for the meditation session. The menu 750 includes a continue option 758 that is selectable to continue or extend the meditation session. For example, in some embodiments, in response to detecting selection of the continue option 758, the device 702 resumes the meditation session for a predetermined amount of time or until the user indicates that they wish to end the meditation session. In some embodiments, when the device 702 resumes the meditation session, the device 702 returns to the guided breathing portion of the meditation session. In some embodiments, when the device 702 resumes the meditation session, the device 702 returns to the reactive portion of the meditation session. In some embodiments, device 702 provides user 701 with the option of selecting whether to return to the guided breathing portion or to the reactive portion. Menu 750 also includes completion option 760. In some embodiments, when the user selects the completion option 760, the device 702 returns to the introductory portion of the meditation session (e.g., displays the UI depicted in fig. 7B and/or displays a UI similar to the UI depicted in fig. 7B) or exits the meditation application (e.g., displays a UI similar to the UI depicted in fig. 7A or returns to a staging room or virtual home environment in the XR environment).
In some implementations, the meditation session can be performed in an AR environment or a VR environment. Fig. 7B-7I depict various embodiments of meditation sessions provided in an AR environment, and fig. 7J-7L depict various embodiments of different meditation sessions provided in a VR environment. The device 702 may provide meditation sessions in a VR environment in a manner similar to meditation sessions in an AR environment, but with various differences, such as, for example, using a virtual environment instead of an AR environment. In some implementations, the visual effects and soundscapes provided for a meditation session in the VR environment are a subset of the visual effects and soundscapes available for a meditation session in the AR environment. In some implementations, the device 702 automatically selects visual and audio characteristics based on characteristics of the XR environment. For example, if the XR environment is an AR environment (e.g., providing a meditation session in the AR environment and/or providing a portion of a meditation session in the AR environment), the device 702 automatically selects visual and audio characteristics based on various aspects of the AR environment (such as, for example, detected lighting conditions and/or a history of previously used audio and/or visual characteristics of AR meditation sessions) or simply based on the fact that the XR environment is an AR environment. As another example, if the XR environment is a VR environment (e.g., providing virtual wallpaper for the meditation session and/or providing the meditation session in a virtual environment), the device 702 automatically selects visual and audio characteristics based on aspects of the VR environment (such as, for example, the scene in the virtual wallpaper and/or a history of previously used audio and/or visual characteristics of VR meditation sessions) or simply based on the fact that the XR environment is a VR environment.
Fig. 7J depicts the device 702 displaying the introductory portion of a different meditation session (e.g., a session occurring after the meditation session depicted in fig. 7B-7I or a session occurring independently of the meditation session depicted in fig. 7B-7I) to be performed in a VR environment. During the VR meditation session, the device 702 displays virtual wallpaper 765 providing an opaque virtual background for the meditation session. Device 702 displays virtual wallpaper 765 over a portion of representation 700a of physical environment 700. The virtual wallpaper 765 is a virtual interface depicting a beach scene in fig. 7J; however, the virtual wallpaper may have different images or background scenes for different meditation sessions. For example, fig. 7L depicts a different meditation session, where virtual wallpaper 787 has a mountain scene. In some implementations, the user can select a background scene from a set of background scenes for the virtual wallpaper. In some embodiments, the device 702 randomly selects a background scene from a set of background scenes (optionally in combination with other visual characteristics such as virtual objects and/or particles, and optionally biased against repeating previously used background scenes).
In the embodiment depicted in fig. 7J, the device 702 displays a virtual object 770 having particles 772. Virtual object 770 and particles 772 have different visual characteristics than virtual object 710 and particles 712. For example, the virtual object 770 has an overall shape that is generally circular or rounded. Further, the particles 772 are circles or spheres that form the virtual object 770 and move around within the shape of the virtual object 770. In some embodiments, particles 772 have different material properties than particles 712. For example, the particles 772 may have different simulated optical properties, shapes, and/or curvatures, and the simulated light reflected off the particles may have different optical properties (e.g., direction, color, and/or intensity), e.g., because the particles 772 are rounded while the particles 712 are triangular.
Although virtual object 770 and particles 772 have different visual characteristics than virtual object 710 and particles 712, in some implementations, the virtual objects and particles may exhibit similar behavior across the various portions of the meditation session output by device 702. For example, device 702 displays virtual object 770 and particles 772 as rhythmically moving (e.g., pulsing and/or rocking) in the introductory portion and expanding to a larger arrangement that rhythmically moves based on a predetermined breathing rhythm for the guided breathing portion, similar to virtual object 710 and particles 712. In addition, the device 702 displays the particles 772 as moving to a dispersed state for the reactive portion and moving based on the detected respiration of the user, similar to the particles 712. In the ending portion, the device 702 displays the particles 772 as reassembling to form the virtual object 770, similar to the particles 712 forming the virtual object 710, as discussed above with respect to fig. 7I.
In some embodiments, the device 702 selects visual and audio characteristics of meditation sessions to provide a varying user experience across multiple meditation sessions and/or a unique user experience for each meditation session. For example, in the implementation depicted in fig. 7J and 7K, the device 702 outputs a different soundscape than in the implementation discussed above with respect to fig. 7B-7I, as indicated by the audio indicator 774 depicted in fig. 7J. The audio indicator 774 is similar to the audio indicator 711 but has a different appearance (e.g., a different set of notes and/or different placements) to indicate the output of a soundscape different from that output in the meditation session depicted in fig. 7B-7I. In some embodiments, the device 702 outputs different soundtracks for the various meditation sessions. In some embodiments, the device 702 selects or creates a soundscape to harmonize with the visual characteristics. For example, the device 702 may select or create a relaxing soundscape to be in harmony with the relaxing beach scene and the gentle rocking of the particles 772.
As the device 702 generates or selects different combinations of visual and audio characteristics for the respective meditation session, in some embodiments, some visual and/or audio characteristics may be repeated. For example, virtual objects or particles may have the same appearance in two different meditation sessions, but exhibit different movement characteristics (e.g., rocking in one session and pulsing in another session and/or moving faster/farther in one session than in another session). As another example, the same virtual wallpaper may be used in two different meditation sessions, but the virtual objects, particles, and/or soundtracks are different for these sessions.
In fig. 7J, device 702 displays a menu 767 that is similar to menu 715. Menu 767 includes an option element 773 that is similar to option element 720 and that can be selected to display an option menu similar to that depicted in fig. 7C. In the embodiment depicted in fig. 7J, the meditation session is set to a ten-minute duration with the female coach that was selected for the previous meditation session, as indicated by duration indicator 773-1 and coach indicator 773-2, respectively. Menu 767 also includes a start element 776, which is similar to start element 725. In response to detecting the selection of the start element 776 via the input 778, the device 702 transitions from the introductory portion of the meditation session to the guided breathing portion, as depicted in fig. 7K.
In fig. 7K, the device 702 has transitioned to the guided breathing portion and is displaying virtual wallpaper 765 at an expanded size and displaying virtual object 770 and particles 772 that move based on a predefined breathing rhythm, similar to the embodiments discussed above with respect to fig. 7D and 7E. In addition, the device 702 has transitioned the soundscape from stereo audio to spatial audio, including gradually increasing the volume, as indicated by the larger displayed size of the audio indicator 774.
In some embodiments, different audio guidance is used for each meditation session. For example, while the embodiments of the VR meditation session depicted in fig. 7J and 7K use the same female coach voice as in the AR meditation session, the audio guidance output by the device 702 is different for the various portions of the meditation session. In some embodiments, the audio guidance is selected from a subset of audio recordings of the selected coach that are available for the respective portions of the meditation session. Thus, in FIG. 7K, device 702 outputs guidance 780-1, which provides instructions similar to guidance 740-1 but uses different words (and, in some instances, different speaking characteristics such as intonation) because it is a different audio recording than guidance 740-1.
Fig. 7L shows examples of different audio and visual characteristics for different meditation sessions, showing the guided breathing portion of a different meditation session. For example, in fig. 7L, device 702 displays virtual wallpaper 787, which is similar to virtual wallpaper 765 but has a mountain scene rather than a beach scene. The device 702 also displays virtual object 790 and particles 792 (similar to virtual objects 710 and 770 and particles 712 and 772), which show additional examples of different visual characteristics for different meditation sessions. Device 702 also outputs a different soundtrack (as indicated by audio indicator 794) and audio guidance 785-1, which is similar to audio guidance 740-1 and 780-1 but is a different audio recording, as indicated by the different words included in audio guidance 785-1. In some embodiments, the audio guidance is a different audio recording that uses the same words but has different audio characteristics such as pitch, accent, or cadence.
Additional description regarding fig. 7A-7L is provided below with reference to methods 800, 900, and 1000 described with respect to fig. 7A-7L.
Fig. 8 is a flowchart of an exemplary method 800 of providing particles that move based on a respiratory characteristic of a user to a computer-generated user experience session, according to some embodiments. In some embodiments, the method 800 is performed at a computer system (e.g., computer system 101 in fig. 1 and/or device 702) (e.g., a smartphone, a device, and/or a head-mounted device) that is in communication with a display generation component (e.g., a display, touch screen, visual output device, 3D display, and/or a display having at least a portion on which an image may be projected (e.g., a see-through display and/or transparent display)) and one or more sensors (e.g., camera 702-2) (e.g., a gyroscope, accelerometer, motion sensor, microphone, infrared sensor, camera sensor, depth camera, visible light camera, eye tracking sensor, gaze tracking sensor, physiological sensor, image sensor, a camera directed downward at the user's hand (e.g., color sensor, infrared sensor, and depth sensor), and/or other cameras directed forward from the user's head). In some embodiments, the method 800 is governed by instructions stored in a non-transitory (or transitory) computer-readable storage medium and executed by one or more processors of a computer system, such as the one or more processors 202 of the computer system 101 (e.g., the controller 110 in fig. 1). Some of the operations in method 800 are optionally combined and/or the order of some of the operations is optionally changed.
In method 800, a computer system (e.g., 702) displays (802), via a display generation component (e.g., 702-1), a user interface (e.g., 705, 765, and/or 787) for a user experience session (e.g., a UI for an application of an XR environment, optionally including guided instructions for breathing to relax and/or focus a user) (e.g., in an XR environment or in a non-XR environment). While the user experience session is active (e.g., after initiating the user experience session and before the user experience session has ended), the computer system detects (804), via one or more sensors (e.g., 702-2), one or more respiratory characteristics of a user (e.g., 701) of the computer system (e.g., whether the user is currently exhaling, whether the user is currently inhaling, an inhalation rate, an exhalation rate, an inhalation duration, an exhalation duration, a pause duration (e.g., after inhaling, after exhaling, during inhaling, and/or during exhaling), a change in inhalation and/or exhalation rate, and/or an inhalation and/or exhalation pattern), and displays (806) a user interface object (e.g., 710, 770, and/or 790) (e.g., a cube, sphere, spheroid, cone, and/or abstract object) having a plurality of particles (e.g., 712, 772, and/or 792) that move based on the one or more respiratory characteristics of the user of the computer system.
As part of displaying a user interface for a user experience session, and in accordance with a determination that a first respiratory event of a user of the computer system (e.g., when the user inhales, exhales, and/or pauses while breathing) meets a first set of criteria (e.g., a characteristic of the respiratory event indicates that the user is inhaling), the computer system (e.g., 702) displays (808) particles (e.g., 712, 772, and/or 792) of a user interface object (e.g., 710, 770, and/or 790) as moving in a first manner (e.g., as depicted in fig. 7G) during the first respiratory event of the user of the computer system (e.g., having a first direction of movement (e.g., expanding away from a fixed point (e.g., in the user's field of view)), a first rate of movement or speed, and/or a first movement pattern (e.g., expanding away from the fixed point while the user inhales, wherein the particles move at a rate determined based on the user's inhalation rate)).
As part of displaying a user interface for a user experience session, and in accordance with a determination that the first respiratory event of the user of the computer system meets a second set of criteria (e.g., a characteristic of the respiratory event indicates that the user is exhaling), the computer system (e.g., 702) displays (810) particles (e.g., 712, 772, and/or 792) of the user interface object (e.g., 710, 770, and/or 790) as moving in a second manner (e.g., as depicted in fig. 7F) different from the first manner during the first respiratory event of the user of the computer system (e.g., having a second direction of movement (e.g., contracting toward the fixed point (e.g., in the user's field of view)), a second rate of movement or speed, and/or a second movement pattern (e.g., contracting toward the fixed point as the user exhales, wherein the particles move at a rate determined based on the user's exhalation rate)). Displaying particles of the user interface object as moving in a first manner in accordance with a determination that the first respiratory event of the user meets the first set of criteria and displaying particles of the user interface object as moving in a second manner in accordance with a determination that the first respiratory event of the user meets the second set of criteria provides feedback regarding a state of the computer system (e.g., a state of providing the user experience session).
In some implementations, a computer system (e.g., 702) communicates with an audio generation component (e.g., 703) (e.g., speakers, bone-conduction audio output devices, and/or audio generation components integrated into an HMD). In some embodiments, as part of displaying a user interface for a user experience session, before the user experience session is active (e.g., after launching an application available for providing the user experience session and prior to initiating (e.g., starting) the user experience session (e.g., by a user of the computer system)), the computer system concurrently displays, via the display generation component, a plurality of particles of the user interface object (e.g., particles 712 as depicted in fig. 7B and/or 7C, particles 772 as depicted in fig. 7J, and/or particles 792 as depicted in fig. 7L) and outputs, via the audio generation component, an audio soundscape for the user experience session (e.g., audio 711 as depicted in fig. 7B and/or 7C, audio 774 as depicted in fig. 7J, and/or audio 794 as depicted in fig. 7L) (e.g., a sound composition created from a set of automatically and/or manually selected planned sound components for the audio environment of the user experience session). Displaying representations of the plurality of particles of the user interface object before the user experience session is active and outputting an audio soundscape for the user experience session provides feedback regarding the state of the computer system. For example, the audio and visual feedback indicates to the user what the user experience session will include when active.
In some embodiments, the computer system outputs a portion of the audio soundscape (e.g., a preview of the audio soundscape) before the user experience session is active and outputs the complete audio soundscape (e.g., in fig. 7D, 7E, 7F, 7G, 7H, 7K, and/or 7L) while the user experience session is active. In some implementations, outputting the audio soundscape includes outputting an audio soundscape (or a portion of an audio soundscape) having a set of two or more audio components randomly or pseudo-randomly selected from a set of available audio components.
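One straightforward way to implement such a random or pseudo-random selection is sketched below in Swift. The component names, the optional bias against the previous session's picks, and the function signature are illustrative assumptions rather than details taken from this disclosure.

```swift
/// Picks a pseudo-random set of sound components for a session, optionally
/// biased against repeating components used in the previous session.
func selectSoundscapeComponents<G: RandomNumberGenerator>(
    from available: [String],
    count: Int,
    avoiding previous: Set<String> = [],
    using generator: inout G
) -> [String] {
    let fresh = available.filter { !previous.contains($0) }
    // Prefer components not used last time; fall back to the full set if needed.
    let pool = fresh.count >= count ? fresh : available
    return Array(pool.shuffled(using: &generator).prefix(count))
}

// Example usage with the system generator (a seeded generator could be
// substituted for reproducible selections).
var rng = SystemRandomNumberGenerator()
let components = selectSoundscapeComponents(from: ["pad", "chimes", "water", "wind", "birds"],
                                            count: 3,
                                            avoiding: ["wind"],
                                            using: &rng)
```

Under these assumptions, each session would receive a fresh combination of two or more components drawn from the available set, giving the varying soundscapes described elsewhere in this section.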
In some implementations, as part of displaying a user interface for a user experience session, before the user experience session is active, the computer system (e.g., 702) displays a dimmed appearance (e.g., 704) (e.g., a fade effect and/or a dim effect) of an environment (e.g., 700a) of the user experience session (e.g., a virtual environment and/or a physical environment (e.g., visible via pass-through video and/or due to the transparent nature of the display)) (e.g., dimming at least a portion of the environment by 0.5%, 1%, 2%, 3%, 5%, 7%, 10%, 15%, 20%, and/or another amount greater than 0%) (e.g., as depicted in fig. 7B and 7C). Displaying the dimmed appearance of the environment before the user experience session is active encourages the user of the computer system to focus on the user experience session.
In some embodiments, displaying the dimmed appearance of the environment includes visually obscuring the view of the physical environment (e.g., 700a) that is visible (e.g., via pass-through video and/or due to the transparent nature of the display) to the user (e.g., 701) of the computer system (e.g., 702). In some embodiments, displaying the dimmed appearance of the environment includes displaying a partially transparent virtual overlay (e.g., 704) through which the physical environment is shown. In some embodiments, the dimmed appearance is uniform. In some embodiments, the dimmed appearance is variable.
In some embodiments, displaying a user interface for a user experience session includes: before the user experience session is active, the computer system (e.g., 702) displays a start option (e.g., 725 and/or 776) (e.g., an affordance, graphical user interface object, and/or graphical element) selectable (e.g., via input 724 and/or input 778) (e.g., via a pinch gesture, tap input, gaze gesture, gaze dwell gesture, and/or other input gesture) to initiate (e.g., start and/or activate) the user experience session, and displays an indication (e.g., 720-1, 732-2, 732-3, and/or 773-1) of the duration of the user experience session (e.g., 3 minutes, 5 minutes, 10 minutes, 12 minutes, 15 minutes, or 20 minutes) (e.g., while the user experience session is active). Displaying the start affordance and an indication of the duration of the user experience session before the user experience session is active provides feedback regarding the state of the computer system. For example, the indication of the duration provides feedback to the user regarding the selected length of the user experience session. In some implementations, in response to detecting an input (e.g., 724 and/or 778) directed to the start option (e.g., a pinch gesture, tap input, gaze gesture, gaze dwell gesture, and/or other input gesture), the computer system initiates the user experience session.
In some embodiments, displaying a user interface for a user experience session includes: before the user experience session is active, the computer system (e.g., 702) displays a set of one or more duration options (e.g., 732-1, 732-2, and/or 732-3) that are selectable (e.g., via a pinch gesture, a tap input, a gaze gesture, a gaze dwell gesture, and/or other input gesture) to modify a duration of the user experience session (e.g., a duration of at least a portion of the user experience session, such as a guided breathing portion and/or a reactive portion). The computer system detects an input (e.g., 731) directed to a first duration option (e.g., 732-1) of the set of one or more duration options selectable to modify the duration of the user experience session (e.g., a pinch gesture, tap input, gaze gesture, gaze dwell gesture, and/or other input gesture), and in response to detecting the input directed to the first duration option of the set of one or more duration options, the computer system selects (e.g., modifies and/or sets based on the detected input) the duration of the user experience session to be a first duration (e.g., 5 minutes or 10 minutes) (e.g., from a default duration of 1 minute or from a previously selected duration of 20 minutes). In some implementations, detecting an input directed to a second duration option causes the computer system to set the duration to a second duration different from the first duration. Modifying the duration of the user experience session to the first duration in response to detecting the input directed to the first duration option causes the computer system to automatically optimize the user experience session to satisfy the requested duration provided by the user of the computer system.
In some embodiments, displaying a user interface for a user experience session includes: before the user experience session is active, the computer system (e.g., 702) displays a set of one or more audio options (e.g., 736-1 and/or 736-2) selectable (e.g., via a pinch gesture, a tap input, a gaze gesture, a gaze dwell gesture, and/or other input gesture) to pick an audio guide (e.g., an audio source, narrator, speaker, coach, and/or person) from a plurality of audio guides for the user experience session. The computer system detects an input (e.g., 735) directed to a first audio option (e.g., 736-2) of the set of one or more audio options selectable to pick an audio guide from the plurality of audio guides for the user experience session. In response to detecting the input directed to the first audio option, the computer system selects a first audio guide from the plurality of audio guides for the user experience session (e.g., and optionally does not select a second audio guide for the user experience session). Selecting the first audio guide for the user experience session in response to detecting input directed to a first audio option selectable to pick the audio guide causes the computer system to automatically optimize the user experience session to provide the requested audio guide for the user experience session. In some implementations, in response to detecting an input directed to a second audio option (e.g., 736-1), the computer system selects a second audio guide, different from the first audio guide, for the user experience session. In some embodiments, the selected audio guide provides verbal instructions, encouragement, teaching, prompting, and/or guidance to the user for the user experience session.
In some embodiments, displaying a user interface for a user experience session includes: while the user experience session is active (e.g., after (e.g., in response to) starting the user experience session and/or in a first phase or portion of the user experience session (e.g., a phase or portion of the user experience session that precedes a subsequent phase or portion of the user experience session)), the computer system (e.g., 702) displays the user interface object (e.g., 710, 770, and/or 790) with an animation effect (e.g., an animated movement of the user interface object and/or of particles of the user interface object) (e.g., a pulsing animation (e.g., increasing the size of the user interface object (e.g., expanding the user interface object) and decreasing the size of the user interface object (e.g., contracting the user interface object) in a repeated alternating pattern)), where the animation effect is based on a predetermined biometric rhythm (e.g., a breathing rhythm (e.g., an inhalation and exhalation pattern)) (e.g., 3 breaths per minute, 5 breaths per minute, 7 breaths per minute, or 10 breaths per minute) (e.g., as depicted in fig. 7D, 7E, and/or 7K). In some embodiments, the predetermined biometric rhythm is a default setting set by an application for providing the user experience session and/or by an operating system of the computer system. In some embodiments, the predetermined biometric rhythm is a user-selected setting. In some embodiments, the predetermined biometric rhythm comprises a heart rate. In some embodiments, the predetermined biometric rhythm comprises a walking speed. Displaying the user interface object as having an animation effect based on a predetermined biometric rhythm includes: in accordance with a determination (e.g., based on one or more settings of the user experience session) that the predetermined biometric rhythm is a first biometric rhythm (e.g., 5 breaths per minute (e.g., inhaling and then exhaling (optionally pausing therebetween) five times during one minute) or 6 breaths per minute), the computer system animates the user interface object based on a first pattern corresponding to the first biometric rhythm (e.g., expanding the user interface object with an inhalation portion of the breathing rhythm and contracting the user interface object with an exhalation portion of the breathing rhythm, and/or rotating the user interface object in a first direction with the inhalation portion of the breathing rhythm and rotating the user interface object in a second direction with the exhalation portion of the breathing rhythm), and in accordance with a determination that the predetermined biometric rhythm is a second biometric rhythm different from the first biometric rhythm (e.g., 7 breaths per minute (e.g., inhaling and then exhaling (optionally pausing therebetween) seven times per minute) or 8 breaths per minute), the computer system animates the user interface object based on a second pattern corresponding to the second biometric rhythm (e.g., expanding the user interface object with an inhalation portion of the breathing rhythm and contracting the user interface object with an exhalation portion of the breathing rhythm, and/or rotating the user interface object in a first direction with the inhalation portion of the breathing rhythm and rotating the user interface object in a second direction with the exhalation portion of the breathing rhythm).
Displaying the user interface object in an animated manner based on the first pattern corresponding to the first biometric rhythm in accordance with a determination that the predetermined biometric rhythm is the first biometric rhythm, and displaying the user interface object in an animated manner based on the second pattern corresponding to the second biometric rhythm in accordance with a determination that the predetermined biometric rhythm is the second biometric rhythm, provides feedback regarding the state of the computer system.
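A rhythm-driven expansion and contraction of this kind reduces to mapping elapsed time and a breaths-per-minute setting onto a scale factor. The Swift sketch below is a hypothetical illustration; the function name, scale range, and sinusoidal shape are assumptions, not details specified in this disclosure.

```swift
import Foundation

/// Computes a scale factor for the virtual object so it expands and contracts
/// at a predetermined breathing rhythm (e.g., 5 or 7 breaths per minute).
func guidedBreathScale(elapsedSeconds: Double,
                       breathsPerMinute: Double,
                       minScale: Double = 0.8,
                       maxScale: Double = 1.2) -> Double {
    let breathPeriod = 60.0 / breathsPerMinute             // seconds per full breath
    let phase = (elapsedSeconds / breathPeriod) * 2 * .pi  // radians through the cycle
    let normalized = (sin(phase) + 1) / 2                  // maps the oscillation into 0...1
    return minScale + (maxScale - minScale) * normalized
}
```

Calling guidedBreathScale each frame with breathsPerMinute set to 5 versus 7 yields the two different animation patterns described above, with one full expansion and contraction per breath period.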
In some embodiments, the predetermined biometric rhythm is a biometric rhythm (e.g., user selectable or customizable setting) selected (e.g., via input 733) by a user (e.g., 701) of the computer system (e.g., 702). In some embodiments, the first biometric rhythm is a default biometric rhythm (e.g., the biometric rhythm is set by an application for providing a user experience session and/or by an operating system of the computer system), and the second biometric rhythm is a biometric rhythm selected by a user of the computer system (e.g., user selectable or customizable). In some embodiments, both the first biometric rhythm and the second biometric rhythm are user selectable settings.
In some implementations, a computer system (e.g., 702) communicates with an audio generation component (e.g., 703) (e.g., speakers, bone-conduction audio output devices, and/or audio generation components integrated into an HMD). In some implementations, while the user experience session is active, the computer system outputs audio components (e.g., 711, 774, and/or 794) having perceived spatial positions (e.g., 709-3 to 709-9) that move (e.g., automatically) (e.g., relative to the user's position) based on one or more respiratory characteristics of the user of the computer system (e.g., one or more audio components of a soundscape selected (e.g., automatically) for the user experience session) (e.g., as depicted in fig. 7D, 7E, 7F, 7G, 7H, and/or 7K). In accordance with a determination that a second respiratory event (in some embodiments, the second respiratory event is the first respiratory event) of a user (e.g., 701) of the computer system meets a third set of criteria (in some embodiments, the third set of criteria is the first set of criteria), the computer system outputs an audio component having a first perceived spatial location (e.g., 709-3 through 709-9 as depicted in fig. 7G) relative to the user of the computer system. In accordance with a determination that the second respiratory event of the user of the computer system meets a fourth set of criteria (in some embodiments, the fourth set of criteria is the second set of criteria), the computer system outputs an audio component having a second perceived spatial location (e.g., 709-3 to 709-9 as depicted in fig. 7F) that is different from the first perceived spatial location relative to the user of the computer system. Outputting an audio component having a first perceived spatial position relative to a user of the computer system in accordance with a determination that the second respiratory event meets a third set of criteria and outputting an audio component having a second perceived spatial position relative to the user in accordance with a determination that the second respiratory event meets a fourth set of criteria causes the computer system to automatically adjust the spatial position of the audio component based on the detected respiratory event of the user of the computer system.
In some embodiments, the spatial position of the audio component changes in conjunction with changes in respiratory events. In some implementations, the perceived spatial audio position of the audio component changes as particles of the user interface object move. For example, as the particles move in a first manner (e.g., expand away from the fixed point), the perceived spatial audio location moves toward the user of the computer system, and as the particles move in a second manner (e.g., contract toward the fixed point), the perceived spatial audio location moves away from the user of the computer system.
In some implementations, a portion of the physical environment (e.g., 700) of a user (e.g., 701) of the computer system (e.g., 702) is visible (e.g., 700a) before the user experience session is active (e.g., via a pass-through video display and/or due to the transparent nature of the display). In some embodiments, the computer system initiates (e.g., starts, begins, and/or activates) the user experience session. In some implementations, as part of initiating the user experience session, the computer system displays, via the display generation component (e.g., 702-1), a dimming effect (e.g., 704) (e.g., a fade-in effect) (e.g., dimming 99.9%, 99%, 98%, 97%, 95%, 90%, 85%, 80%, and/or another amount less than 100%) that gradually reduces the visibility of the physical environment (e.g., as depicted in fig. 7B, 7C, and/or 7D) (e.g., the visibility of the physical environment is reduced at a first time (e.g., 50% visible), is further reduced at a second time after the first time (e.g., 25% visible), is further reduced at a third time after the second time (e.g., 10% visible), and is further reduced at a fourth time after the third time (e.g., 2% visible)). Displaying a dimming effect that gradually reduces the visibility of the physical environment reduces distractions and encourages the user of the computer system to focus on the user experience session.
In some implementations, displaying the dimming effect (e.g., 704) includes visually obscuring the view of the physical environment (e.g., 700a) that is visible to the user of the computer system (e.g., 702) (e.g., via pass-through video and/or due to the transparent nature of the display). In some embodiments, displaying the dimming effect includes displaying a partially opaque virtual overlay with increasing opacity that reduces the visibility of the physical environment through the virtual overlay, resulting in an effect in which the physical environment appears to fade out of view (or appears to be faintly visible). In some embodiments, the dimming effect is uniform. In some embodiments, the dimming effect is variable.
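As a small illustration, the gradual fade described above could be driven by a time-based easing function like the Swift sketch below. The ramp duration, target opacity, and smoothstep easing are hypothetical choices, not values given in this disclosure.

```swift
/// Computes the opacity of the dimming overlay as the session starts,
/// ramping from no dimming toward a target level over a fixed duration.
func dimmingOpacity(elapsedSeconds: Double,
                    rampDuration: Double = 6.0,
                    targetOpacity: Double = 0.9) -> Double {
    let progress = min(max(elapsedSeconds / rampDuration, 0), 1)
    // Smoothstep easing so the fade-in starts and ends gently.
    let eased = progress * progress * (3 - 2 * progress)
    return targetOpacity * eased
}
```

The returned value would set the opacity of the dimming overlay (e.g., dimming effect 704) on each frame, so the physical environment appears progressively less visible as the session begins.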
In some embodiments, while the user experience session is active, the computer system (e.g., 702) detects, via one or more sensors (e.g., 702-2), one or more concentration-based characteristics of a user (e.g., 701) of the computer system (e.g., one or more biometric characteristics (e.g., head movement, body movement, respiration rate, heart rate, and/or eye gaze) and/or user input indicating whether the user is concentrating on the user experience session (e.g., on breathing and/or listening to audio guidance) and/or indicating a particular concentration level for the user experience session), and outputs feedback (e.g., 740-4) (e.g., audio feedback and/or visual feedback) based on the one or more concentration-based characteristics of the user of the computer system (e.g., as depicted in fig. 7H). Outputting feedback based on one or more concentration-based characteristics of a user of the computer system enables the computer system to automatically encourage the user of the computer system to participate in the user experience session and provides feedback to the user regarding the detected concentration-based characteristics.
In some embodiments, outputting feedback based on one or more concentration-based characteristics of a user (e.g., 701) of a computer system (e.g., 702) includes displaying, via the display generation component, visual feedback (e.g., movement of particles (e.g., 712, 772, and/or 792) of the user interface object and/or text indicating a level of user focus) based on the one or more concentration-based characteristics of the user of the computer system. Displaying visual feedback based on one or more concentration-based characteristics of a user of the computer system enables the computer system to automatically encourage the user of the computer system to participate in the user experience session and provide visual feedback to the user regarding the detected concentration-based characteristics. In some embodiments, the visual feedback includes movement of particles of the user interface object when the one or more concentration-based characteristics of the user indicate that the user is concentrating on the user experience session. In some implementations, the visual feedback includes a pause (e.g., a temporary pause) of movement of the particles when the one or more concentration-based characteristics of the user indicate that the user is not focused on the user experience session (e.g., as depicted in fig. 7H).
In some embodiments, displaying visual feedback based on one or more concentration-based characteristics of a user (e.g., 701) of a computer system (e.g., 702) includes: in accordance with a determination that one or more concentration criteria are met (e.g., one or more biometric characteristics of the user (e.g., head movement, body movement, respiratory rate, heart rate, and/or eye gaze) indicate that the user is concentrating their attention on the user experience session (e.g., on breathing and/or listening to audio guidance)) (in some embodiments, and in accordance with a determination that a first set of criteria is met by a first respiratory event of a user of the computer system), displaying particles (e.g., 712, 772, and/or 792) of the user interface object as moving away from a reference location (e.g., a viewpoint or predefined location of the user) during outward breathing (e.g., exhalation) of the user of the computer system (e.g., as depicted in fig. 7F). Displaying particles of the user interface object as moving away from the reference location during outward respiration of the user of the computer system provides feedback regarding the state of the computer system (e.g., the state in which the displayed particles are responsive to the detected outward respiration of the user). In some implementations, displaying particles of the user interface object as moving in a first manner during a first respiratory event of the user includes displaying the particles as moving away from the reference location during outward respiration.
In some embodiments, displaying visual feedback based on one or more concentration-based characteristics of a user of the computer system includes: in accordance with a determination that one or more concentration criteria are met (e.g., one or more biometric characteristics of the user (e.g., head movement, body movement, respiratory rate, heart rate, and/or eye gaze) indicate that the user is concentrating their attention on the user experience session (e.g., on breathing and/or listening to audio guidance)) (in some embodiments, and in accordance with a determination that the first set of criteria is met by the first respiratory event of the user of the computer system), displaying particles (e.g., 712, 772, and/or 792) of the user interface object as moving toward a reference location (e.g., a viewpoint or predefined location of the user) during inward breathing (e.g., inhalation) of the user of the computer system (e.g., as depicted in fig. 7G). Displaying particles of the user interface object as moving toward the reference location during inward respiration of a user of the computer system provides feedback regarding a state of the computer system (e.g., a state in which the displayed particles are responsive to the detected inward respiration of the user). In some embodiments, displaying particles of the user interface object as moving in a second manner during a first respiratory event of the user includes displaying the particles as moving toward the reference location during inward breathing.
In some implementations, a computer system (e.g., 702) displays particles (e.g., 712, 772, and/or 792) of a user interface object as moving toward a reference position at a first rate during inward breathing. In some embodiments, the computer system displays particles of the user interface object as moving away from the reference location at a second rate during outward breathing. In some embodiments, the second rate is different (e.g., slower; faster) than the first rate. In some embodiments, the second rate is different from the first rate regardless of the characteristics (e.g., speed, volume, and/or duration) of the inward and outward breaths.
In some implementations, the reference location is a world-locked (e.g., environment-locked) location (e.g., a location that is selected and/or anchored based on (e.g., with reference to) a location and/or object in a three-dimensional environment (e.g., a physical environment or a virtual environment), rather than based on a location corresponding to a viewpoint of the user (e.g., 701) of the computer system (e.g., 702)) (e.g., as depicted in fig. 7H).
In some implementations, a computer system (e.g., 702) communicates with an audio generation component (e.g., 703). In some embodiments, outputting feedback based on one or more concentration-based characteristics of a user of the computer system comprises outputting, via the audio generation component, audio feedback (e.g., 740-4) based on the one or more concentration-based characteristics of the user of the computer system (e.g., audio guidance that instructs the user of the computer system to focus on one or more elements of the user experience session (e.g., breathing and/or listening to audio) and/or an audio tone). Outputting audio feedback based on one or more concentration-based characteristics of a user of the computer system enables the computer system to automatically encourage the user of the computer system to participate in the user experience session.
In some embodiments, the one or more concentration-based characteristics of the user (e.g., 701) of the computer system (e.g., 702) include a plurality of biometric indicators (e.g., gaze (e.g., increased movement/glance), head pose (significant changes in head direction (e.g., rotation and/or tilt)), and/or breathing (e.g., increased or irregular frequency)).
In some implementations, the one or more focus-based characteristics of the user (e.g., 701) of the computer system (e.g., 702) include an indication of whether the user's focus failed to meet focus criteria (e.g., the user's focus has stopped focusing on the user experience session) for a threshold amount of time (e.g., 7 seconds, 8 seconds, 9 seconds, or 10 seconds).
In some implementations, displaying particles (e.g., 712, 772, and/or 792) of the user interface object as moving in the first manner or the second manner includes: the computer system (e.g., 702) displays a first set of one or more particles (e.g., 712-2) having a first distance from a location corresponding to a viewpoint of a user of the computer system and having a first amount of movement during a first respiratory event (e.g., as depicted in fig. 7F or 7G) (e.g., moving a first distance, from the perspective of the user of the computer system, toward or away from a reference point), and displays a second set of one or more particles (e.g., 712-1) different from the first set of one or more particles having a second distance from a location corresponding to a viewpoint of the user of the computer system that is different from (e.g., less than; greater than) the first distance and having a second amount of movement during the first respiratory event that is different from (e.g., less than; greater than) the first amount of movement (e.g., moving a second distance toward or away from the reference point as depicted in fig. 7F or 7G) (e.g., moving a second distance, from the perspective of the user of the computer system). Displaying the second set of one or more particles as having a second distance from the user that is different from the first distance and having a second amount of movement during the first respiratory event that is different from the first amount of movement provides feedback regarding the state of the computer system.
In some embodiments, the first set of one or more particles (e.g., 712-2) is displayed with a perceived distance farther from the user than the second set of one or more particles (e.g., 712-1), and the first set of one or more particles moves a smaller amount during the respective respiratory event than the second set of one or more particles. In some embodiments, from the perspective of the user, the shorter the distance of the respective particle, the greater the amount of movement of the respective particle during the respective respiratory event. For example, as the user inhales or exhales, particles displayed closer to the user appear to move (to the user) a greater amount than particles displayed farther from the user, and particles displayed farther from the user appear to move a smaller amount than particles displayed closer to the user.
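A minimal sketch of the depth-dependent movement described above, assuming a simple inverse-distance scaling; the function name and constants are hypothetical and chosen only to illustrate that nearer particles receive larger displacements.

    // Displacement applied to a particle during a respiratory event, scaled so
    // that particles nearer the viewpoint move more than particles farther away.
    func movementAmount(baseAmount: Double, distanceFromViewpoint: Double) -> Double {
        let clamped = max(distanceFromViewpoint, 0.5)   // avoid unbounded motion up close
        return baseAmount / clamped
    }

    let near = movementAmount(baseAmount: 0.2, distanceFromViewpoint: 1.0)   // 0.2
    let far  = movementAmount(baseAmount: 0.2, distanceFromViewpoint: 4.0)   // 0.05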
In some embodiments, movement of the plurality of particles (e.g., 712, 772, and/or 792) is based on a set of one or more simulated physical parameters (e.g., inertia, spring constant, and/or friction).
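One common way to realize such simulated physical parameters is a damped spring model; the sketch below is illustrative only, and its parameter values do not reflect any particular embodiment.

    // Damped spring toward a breathing-driven target position: the mass term
    // supplies inertia, the spring constant supplies stiffness, and the
    // friction term supplies damping.
    struct SpringState {
        var position: Double
        var velocity: Double
    }

    func step(_ state: inout SpringState, target: Double, deltaTime: Double,
              springConstant: Double = 8.0, friction: Double = 2.0, mass: Double = 1.0) {
        let springForce = -springConstant * (state.position - target)
        let frictionForce = -friction * state.velocity
        let acceleration = (springForce + frictionForce) / mass
        state.velocity += acceleration * deltaTime
        state.position += state.velocity * deltaTime
    }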
In some embodiments, displaying a user interface for a user experience session includes: while the user experience session is active, the computer system (e.g., 702) detects gaze data indicative of a gaze of a user (e.g., 701) of the computer system via one or more sensors (e.g., 702-2). In some embodiments, the computer system detects updated gaze data while the user interface object is displayed as having a plurality of particles (e.g., 712, 772, and/or 792) that move based on one or more respiratory characteristics of a user of the computer system. In some implementations, in response to detecting the updated gaze data, and in accordance with a determination that the updated gaze data indicates that the user's gaze exceeds a gaze deviation threshold (e.g., the user's gaze is not focused on or is not looking at one or more elements (e.g., particles) of the user experience session) for at least a threshold amount of time (e.g., 7 seconds, 8 seconds, 9 seconds, 10 seconds, or 12 seconds), the computer system pauses the user experience session (e.g., temporarily stops the user experience session) (e.g., as depicted in fig. 7H). In some embodiments, in response to detecting the updated gaze data, and in accordance with a determination that the updated gaze data does not indicate that the user's gaze exceeds the gaze deviation threshold, the computer system forgoes pausing the user experience session (e.g., continues or resumes the user experience session). Selectively pausing the user experience session based on a determination of whether the updated gaze data indicates that the user's gaze exceeds the gaze deviation threshold enables the computer system to automatically pause the user experience session when the user of the computer system is distracted. In some embodiments, pausing the user experience session includes: ceasing display of the user interface object. In some embodiments, pausing the user experience session includes: displaying the user interface object in a stationary state. In some embodiments, pausing the user experience session includes: outputting an audio instruction to resume focusing on the user experience session.
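For illustration, a hypothetical gaze monitor of the kind described above might track how long the user's gaze has been off the session content and report when the pause threshold is crossed; the names and the 8-second default are assumptions.

    import Foundation

    struct GazeSample {
        let isOnSessionContent: Bool
        let timestamp: TimeInterval
    }

    final class SessionGazeMonitor {
        private let deviationThreshold: TimeInterval
        private var deviationStart: TimeInterval?

        init(deviationThreshold: TimeInterval = 8.0) {
            self.deviationThreshold = deviationThreshold
        }

        // Returns true when the gaze has stayed off the session content long
        // enough that the session should be paused.
        func process(_ sample: GazeSample) -> Bool {
            if sample.isOnSessionContent {
                deviationStart = nil          // gaze returned; reset the timer
                return false
            }
            let start = deviationStart ?? sample.timestamp
            deviationStart = start
            return sample.timestamp - start >= deviationThreshold
        }
    }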
In some embodiments, displaying a user interface for a user experience session includes: the computer system (e.g., 702) displays the user interface (e.g., 705, 710, 712, 770, 772, 790, and/or 794) with a set of visual characteristics (e.g., shapes (e.g., cubes, spheres, spheroids, clouds, pyramids, and/or abstract objects), components (e.g., particles), and/or visual appearance (e.g., animation effects, translucency, movement characteristics, and/or background of display)) randomly or pseudo-randomly selected from a set of available visual characteristics (e.g., a superset of visual characteristics). In some embodiments, the computer system communicates with an audio generation component (e.g., 703). In some embodiments, the computer system outputs audio soundscapes (e.g., 711, 774, and/or 794) (e.g., a set of planned sound components of an audio environment selected to create the user experience session) for the user experience session via the audio generation component, wherein the audio soundscapes are output concurrently with displaying the user interface for the user experience session, and outputting the audio soundscapes comprises: an audio soundscape is output having a first set of two or more audio components (e.g., a set of sound components) randomly or pseudo-randomly selected from a set of available audio components. Displaying the user interface with a set of visual characteristics randomly or pseudo-randomly selected from a set of available visual characteristics and outputting an audio soundscape with a first set of two or more audio components randomly or pseudo-randomly selected from a set of available audio components enables the computer system to provide a more realistic user experience while saving storage space by not requiring multiple different complete audio tracks and/or visual components to be stored and selected for playback/display. In some implementations, the two or more audio components of the first set of audio components are a subset of the planned sound components selected from a superset of the planned sound components for the audio soundscape. Additional aspects of the soundscape are discussed in more detail below with respect to method 1000.
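A brief sketch of the random or pseudo-random selection described above; the pools of characteristics and sound components are placeholders, and standard-library random APIs are used only to illustrate drawing a subset rather than storing complete prerendered assets.

    let availableVisuals = ["sphere", "cloud", "cube", "pyramid", "abstract"]
    let availableSounds  = ["wind", "chimes", "low drone", "water", "birds", "pad"]

    var generator = SystemRandomNumberGenerator()
    let chosenVisual = availableVisuals.randomElement(using: &generator)!
    let chosenSounds = Array(availableSounds.shuffled(using: &generator).prefix(3))

    print("visual:", chosenVisual, "soundscape components:", chosenSounds)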
In some embodiments, displaying the user interface for the user experience session includes, upon displaying the user interface object (e.g., 710, 770, and/or 790) with a first display state (e.g., as depicted in fig. 7B, 7C, and/or 7J) in which a plurality of particles (e.g., 712, 772, and/or 792) are displayed for a first portion (e.g., introductory portion) of the user experience session with a first amount of spacing (e.g., dense, compact, and/or tight groupings of particles; forming a shape (e.g., a sphere, cloud, triangle, and/or cube)), the computer system (e.g., 702) detects a transition from the first portion of the user experience session to a second portion of the user experience session (e.g., a directed respiratory portion of the user experience session). In response to detecting a transition from a first portion of the user experience session to a second portion of the user experience session, the computer system displays the user interface object as having a second display state (e.g., transition to the second display state) that is different from the first display state, wherein the plurality of particles are displayed for the second portion of the user experience session as having a second amount of spacing that is different from (e.g., greater than) the first amount of spacing (e.g., loose grouping of particles; more spaced apart positioning of particles, but still forming a respective shape (e.g., spheres, clouds, triangles, and/or cubes)) (e.g., as depicted in fig. 7D, 7E, 7K, and/or 7L). In response to detecting a transition from a first portion of the user experience session to a second portion of the user experience session, displaying the user interface object to have a second display state that is different from the first display state, wherein the plurality of particles are displayed with a second amount of spacing for the second portion of the user experience session that is different from the first amount of spacing, feedback is provided regarding a state of the computer system (e.g., a state in which the computer system is providing a particular portion of the user experience session).
In some embodiments, displaying a user interface for a user experience session includes: while the computer system (e.g., 702) displays the user interface object (e.g., 710, 770, and/or 790) as having the second display state (e.g., as depicted in fig. 7D, 7E, 7K, and/or 7L), the computer detects a transition from the second portion of the user experience session to a third portion of the user experience session (e.g., a reactive portion of the user experience session). In response to detecting the transition from the second portion of the user experience session to the third portion of the user experience session, the computer system displays the user interface object as having (e.g., transitioning to) a third display state that is different from the first display state and the second display state, wherein the plurality of particles (e.g., 712, 772, and/or 792) are displayed for the third portion of the user experience session as having a third amount of spacing (e.g., the spaced apart positioning of the particles and/or the spacing of the particles that no longer form a different shape) that is different from (e.g., greater than) the second amount of spacing (e.g., as depicted in fig. 7F, 7G, and/or 7H). In response to detecting a transition from the second portion of the user experience session to the third portion of the user experience session, displaying the user interface object having a third display state that is different from the first display state and the second display state, wherein the plurality of particles are displayed with a third amount of spacing for the third portion of the user experience session that is different from the second amount of spacing, feedback is provided regarding a state of the computer system (e.g., a state of the computer system that is providing the particular portion of the user experience session).
In some embodiments, displaying a user interface for a user experience session includes: when a computer system (e.g., 702) displays a user interface object (e.g., 710, 770, and/or 790) as having a plurality of particles (e.g., 712, 772, and/or 792) including an arrangement having a first average spacing between particles (e.g., as depicted in fig. 7F, 7G, and/or 7H) (e.g., spaced apart positioning of particles, spacing of particles that do not form different shapes, uniform spacing of particles, irregular spaced positioning of particles, and/or non-uniform manner of inter-particle spacing) the computer system detects a termination of a user experience session (e.g., a transition to an end portion of a user experience session (e.g., end) or an immediate termination). In response to detecting termination of the user experience session, the computer system displays an animation of the plurality of particles as moving to an arrangement having a second average spacing between the particles, wherein the second average spacing is less than the first average spacing (e.g., as depicted in fig. 7I) (e.g., dense, compact, and/or tight grouping of particles, forming shapes (e.g., spheres, clouds, triangles, pyramids, and/or cubes), uniform spacing of particles, irregular spacing positioning of particles, and/or non-uniform manner of inter-particle spacing) (e.g., and terminates the user experience session (e.g., initiates termination of the user experience session)). Displaying an animation of the plurality of particles as moving to the grouping arrangement in response to detecting termination of the user experience session provides feedback regarding a state of the computer system (e.g., a state in which the computer system is terminating the user experience session). In some embodiments, in response to detecting termination of the user experience session, the computer system terminates the user experience session (or initiates termination of the user experience session) while (or after) the plurality of particles are animated to move to an arrangement having a second average spacing between the particles. In some embodiments, terminating the user experience session includes: transition to the end of the user experience session.
In some embodiments, displaying a user interface for a user experience session includes: when a user experience session is active and the environment (e.g., 700 a) of a user (e.g., 701) of a computer system (e.g., 702) (e.g., an element in a physical environment or a representation of a virtual element in an XR environment) is visually obscured (e.g., via a dimming effect 704) (e.g., dimmed (e.g., 1% visible or 2% visible) or not displayed), the computer system detects a termination of the user experience session (e.g., a transition to an end portion (e.g., end) of the user experience session). In response to detecting the termination of the user experience session, the computer system initiates the termination of the user experience session (e.g., transitions to an end portion (e.g., end) of the user experience session and/or ends the user experience session) and gradually increases the visibility of the user's environment (e.g., fades the environment into view) (e.g., as depicted in fig. 7I). Initiating termination of the user experience session and gradually increasing visibility of the user's environment in response to detecting termination of the user experience session provides feedback regarding the state of the computer system (e.g., the state in which the computer system is terminating the user experience session). In some embodiments, increasing the visibility of the environment includes: reducing dimming effects that are displayed (e.g., overlaid) onto an environment (e.g., a physical environment). In some embodiments, increasing the visibility of the environment includes: a larger portion of the display environment (e.g., virtual or XR environment). In some embodiments, the visibility of the environment increases at a first time (e.g., 10% visible), further increases at a second time after the first time (e.g., 25% visible), further increases at a third time after the second time (e.g., 50% visible), and further increases at a fourth time after the third time (e.g., 98% visible).
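The gradual increase in visibility could, for example, be driven by an eased ramp such as the hypothetical smoothstep function sketched below; the fade duration and printed percentages are illustrative.

    // Eased ramp from fully obscured (0) to fully visible (1) over a few seconds.
    func environmentVisibility(elapsed: Double, fadeDuration: Double = 3.0) -> Double {
        let t = min(max(elapsed / fadeDuration, 0), 1)
        return t * t * (3 - 2 * t)   // smoothstep
    }

    for step in 0...4 {
        let elapsed = Double(step) * 0.75
        let percent = Int(environmentVisibility(elapsed: elapsed) * 100)
        print("t=\(elapsed)s visibility=\(percent)%")
    }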
In some embodiments, displaying a user interface for a user experience session includes: while the user experience session is active, the computer system (e.g., 702) detects termination of the user experience session (e.g., a transition to an ending portion (e.g., end) of the user experience session). In response to detecting termination of the user experience session, the computer system initiates termination of the user experience session (e.g., transitions to an end portion (e.g., end) of the user experience session and/or ends the user experience session) and displays an option (e.g., 758) selectable to continue the user experience session (e.g., resume the user experience session after termination of the session; resume the user experience session before termination of the session). Initiating termination of the user experience session and displaying an option selectable to continue the user experience session reduces the amount of input required to continue the user experience session by automatically displaying the option to continue the user experience session without additional user input. In some embodiments, in response to detecting a selection of the option selectable to continue the user experience session, the computer system resumes the user experience session and stops display of the option selectable to continue the user experience session.
In some embodiments, displaying a user interface for a user experience session includes: while the user experience session is active, the computer system (e.g., 702) detects termination of the user experience session (e.g., a transition to an ending portion (e.g., end) of the user experience session). In response to detecting termination of the user experience session, the computer system initiates termination of the user experience session (e.g., transitions to an end portion (e.g., end) of the user experience session and/or ends the user experience session) and displays a data history (in some embodiments, the one or more previous user experience sessions include the current user experience session) associated with (e.g., based on or corresponding to) one or more previous user experience sessions (e.g., 754 and/or 756). Initiating termination of the user experience session and displaying a data history related to one or more previous user experience sessions reduces the amount of input required to view the history data by automatically displaying the history data without additional user input.
In some embodiments, displaying a user interface for a user experience session includes: at least a portion of an environment (e.g., 700 a) (e.g., a representation of an element in a physical environment and/or a virtual element in an XR environment) of a user (e.g., 701) of a computer system (e.g., 702) is rendered visible (e.g., via a passthrough video display and/or due to the transparent nature of the display) while the user experience session is active (e.g., as depicted in fig. 7D, 7E, 7F, 7G, and/or 7H). Causing at least a portion of the environment of a user of the computer system to be visible while the user experience session is active provides feedback regarding the state of the computer system. In some embodiments, the user interface for the user experience session is displayed with an amount of opacity or transparency such that the user's environment is visible through (or behind) the user interface. In some embodiments, the user interface is displayed with an opacity of 99.9%, 99%, 98%, 97%, 95%, 90%, 85%, 80%, and/or another amount less than 100%.
In some implementations, while a user experience session is active and while a computer system (e.g., 702) displays a user interface object (e.g., 710, 770, and/or 790) with a first display orientation (e.g., the user interface object is displayed at a first angle relative to the user) relative to a user (e.g., 701) of the computer system, the computer system receives data indicating a change in location of the user of the computer system from a first location in an environment (e.g., 700) (e.g., as depicted in fig. 7G) to a second location in the environment that is different from the first location (e.g., a change in location of the user relative to a physical environment and/or XR environment of the user) (e.g., as depicted in fig. 7H). In response to receiving data indicative of a change in location of a user of the computer system, the computer system displays the user interface object with a second display orientation (e.g., the user interface object is displayed at a second angle relative to the user) that is different from the first display orientation relative to the user of the computer system (e.g., as depicted in fig. 7H). Displaying the user interface object as having a second display orientation, different from the first display orientation, relative to the user of the computer system in response to receiving data indicative of a change in location of the user of the computer system causes the computer system to automatically adjust the displayed view of the user experience session based on the detected change in location of the user of the computer system.
In some implementations, displaying the user interface object with the second display orientation includes: at least a portion of the user interface object that is not visible (e.g., displayed) when the user interface object is displayed with the first orientation is displayed. In some implementations, displaying the user interface object with the second display orientation includes: at least a portion of the user interface object that is visible (e.g., displayed) when the user interface object is displayed with the first orientation is hidden (stopped from displaying).
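As a purely illustrative sketch of world-locked behavior, the viewing angle to an anchored object can be recomputed from the user's new location, which is why a different portion of the object becomes visible; the geometry below is simplified to two dimensions and all names are hypothetical.

    import Foundation

    struct Point2D { var x: Double; var z: Double }

    // The object stays anchored in the environment; moving the viewpoint
    // changes the angle at which it is seen, and therefore which of its
    // sides are visible.
    let objectPosition = Point2D(x: 0, z: -2)

    func viewingAngle(from user: Point2D) -> Double {
        // Angle, in degrees, from the user's position to the anchored object.
        atan2(objectPosition.x - user.x, -(objectPosition.z - user.z)) * 180 / .pi
    }

    print(viewingAngle(from: Point2D(x: 0, z: 0)))   // 0 degrees: seen head-on
    print(viewingAngle(from: Point2D(x: 2, z: 0)))   // about -45 degrees: a different face is visible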
In some embodiments, the user interface for the user experience session can be displayed at one or more external computer systems (e.g., devices or computer systems associated with other users (e.g., users in an XR environment)).
In some implementations, the computer system receives data indicating a change in pose of a portion of a user (e.g., 701) of the computer system (e.g., a user's hand and/or arm moving in an XR environment) while the user experience session is active and while the computer system (e.g., 702) is displaying a user interface object as having a plurality of particles (e.g., 712, 772, and/or 792) that move based on one or more respiratory characteristics of the user of the computer system. In response to receiving data indicative of a change in pose of a portion of a user of a computer system, the computer system updates a display of a plurality of particles, comprising: modifying display characteristics of the respective particles (e.g., changing color, positioning, and/or orientation of the respective particles, and/or displaying an animation effect based on the intersection of the user's hand with the respective particles) in accordance with a determination that the data indicative of the pose change of the user's portion includes an indication that the user's portion intersects (or intersects during movement) the display position of the respective particles; and forgoing modifying display characteristics of the respective particles (e.g., maintaining display colors, positioning, and/or orientation of the respective particles, and/or forgoing displaying an animation effect based on the intersection of the user's hand with the respective particles) in accordance with a determination that the data indicative of the pose change of the portion of the user does not include an indication that the portion of the user intersects (or intersects during movement) the display position of the respective particles. Modifying the display characteristics of the respective particles in accordance with a determination that the data indicative of the pose change of the portion of the user includes an indication that the portion of the user intersects the display position of the respective particles causes the computer system to automatically adjust the respective particles displayed based on the detected positioning of the portion of the user intersecting the respective particles. In some implementations, some of the particles move and/or change color based on movement of portions of the user (e.g., the user may interact with the particles).
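A minimal sketch of the hand-particle intersection test described above, assuming a simple radius check against each particle's displayed position; the types, the touch radius, and the "highlighted" display characteristic are hypothetical.

    struct Vec3 { var x, y, z: Double }

    func distance(_ a: Vec3, _ b: Vec3) -> Double {
        let dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z
        return (dx * dx + dy * dy + dz * dz).squareRoot()
    }

    struct SessionParticle {
        var position: Vec3
        var highlighted = false   // stand-in for a modified display characteristic
    }

    // Particles whose displayed position the hand passes through are modified;
    // all other particles keep their current appearance.
    func applyHandPose(hand: Vec3, to particles: inout [SessionParticle],
                       touchRadius: Double = 0.05) {
        for index in particles.indices where distance(hand, particles[index].position) <= touchRadius {
            particles[index].highlighted = true
        }
    }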
In some embodiments, aspects/operations of methods 900 and/or 1000 may be interchanged, substituted, and/or added between the methods. For the sake of brevity, these details are not repeated here.
Fig. 9 is a flowchart of an exemplary method 900 for providing a computer-generated user experience session with options selected based on characteristics of an XR environment, according to some embodiments. In some embodiments, the method 900 is performed at a computer system (e.g., computer system 101 and/or device 702 in fig. 1) (e.g., a smart phone, a tablet device, and/or a head mounted display generating component) in communication with a display generating component (e.g., display generating component 120 and/or display 702-1 in figs. 1, 3, and 4) (e.g., a display, a touch screen, a visual output device, a 3D display, a display having at least a transparent or translucent portion on which an image may be projected (e.g., a see-through display), a projector, a heads-up display, and/or a display controller) and one or more sensors (e.g., camera 702-2) (e.g., a gyroscope, an accelerometer, a motion sensor, a microphone, an infrared sensor, a camera sensor, a depth camera, a visible light camera, an eye tracking sensor, a gaze tracking sensor, a physiological sensor, an image sensor, a camera pointing downward at a user's hand (e.g., a color sensor, an infrared sensor, and/or other depth-sensing camera), and/or a camera pointing forward from the user's head). In some embodiments, method 900 is managed by instructions stored in a non-transitory (or transitory) computer-readable storage medium and executed by one or more processors of a computer system (such as one or more processors 202 of computer system 101) (e.g., control 110 in fig. 1). Some operations in method 900 are optionally combined and/or the order of some operations is optionally changed.
In method 900, upon displaying an XR environment (e.g., 705, 765, and/or 787) having one or more characteristics (e.g., lighting characteristics, display of virtual objects, direct display of physical objects, and/or user activity history in the XR environment), a computer system (e.g., 702) detects (902), via one or more sensors (e.g., 702-1 and/or 702-2), a request to initiate a user experience session in the XR environment (e.g., via input 724 and/or input 778) (e.g., a request to initiate a user experience in the XR environment, the user experience optionally including instruction for breathing exercises to relax and/or focus the user and/or reaction exercises to help the user react to scenes, topics, ideas, concepts, etc.).
In response to detecting a request to initiate a user experience session in an XR environment, a computer system (e.g., 702) initiates (904) a user experience session in the XR environment (e.g., initiates a user experience in the XR environment that optionally includes instructional instructions for relaxing and/or focusing on breathing exercises and/or reaction exercises for helping a user react to scenes, topics, ideas, concepts, etc.). In some embodiments, initiating the user experience in the XR environment comprises: an application for providing a user experience is launched.
As part of initiating the user experience session in the XR environment, the computer system (e.g., 702) displays (906) (e.g., in the XR environment) a user interface (e.g., 705, 765, and/or 787) (e.g., a UI for an application of breathing exercises and/or reaction exercises) for the user experience session via a display generation component (e.g., 702-1). Displaying a user interface for a user experience session includes: upon a determination that one or more characteristics of the XR environment satisfy a first set of criteria (e.g., the XR environment is an AR environment; the XR environment includes a first environment; the XR environment has a first set of lighting conditions; one or more user experience sessions have been previously initiated in the XR environment), the computer system displays (908) a user interface for the user experience session as having a first set of one or more options enabled for the user experience session (e.g., a first set of visual and/or audio characteristics for the session, displaying a first virtual environment, and/or displaying one or more virtual objects as having a first appearance).
In accordance with a determination that one or more characteristics of the XR environment satisfy a second set of criteria different from the first set of criteria (e.g., the XR environment is a VR environment; the XR environment includes a second environment; the XR environment has a second set of lighting conditions; a user experience session has not been previously initiated in the XR environment), the computer system (e.g., 702) displays (910) a user interface for the user experience session as having a second set of one or more options enabled for the user experience session (e.g., a second set of visual and/or audio characteristics for the session, displaying a second virtual environment, and/or displaying one or more virtual objects as having a second appearance), wherein the second set of one or more options is different from the first set of one or more options. Displaying the user interface for the user experience session with the first set of one or more options enabled for the user experience session based on a determination that the one or more characteristics of the XR environment meet the first set of criteria, and displaying the user interface for the user experience session with the second set of one or more options enabled for the user experience session based on a determination that the one or more characteristics of the XR environment meet the second set of criteria, reduces an amount of input required to display the user interface for the user experience session with the particular options enabled for the user experience session.
In some implementations, displaying the user interface for the user experience session as having a first set of one or more options enabled for the user experience session (e.g., a first set of visual and/or audio characteristics for the session) includes: the computer system (e.g., 702) displays an Augmented Reality (AR) environment (e.g., 705) for the user experience session (e.g., as depicted in fig. 7B, 7C, 7D, 7E, 7F, 7G, 7H, and/or 7I). In some implementations, displaying the user interface for the user experience session as having a second set of one or more options enabled for the user experience session (e.g., a second set of visual and/or audio characteristics for the session) includes: the computer system displays a Virtual Reality (VR) environment (e.g., 765 and/or 787) for the user experience session (e.g., as depicted in fig. 7J, 7K, and/or 7L). Displaying the AR environment or VR environment for the user experience session when one or more characteristics of the XR environment satisfy the first set of criteria or the second set of criteria causes the device to automatically enable the AR environment or VR environment based on the characteristics of the XR environment without displaying additional controls.
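By way of illustration, the criteria-dependent enabling of options could be expressed as a simple branch on the kind of XR environment; the option names below are hypothetical and stand in for the visual and/or audio characteristics discussed above.

    enum EnvironmentKind { case augmentedReality, virtualReality }

    struct SessionOptions {
        let dimsPassthrough: Bool
        let displaysVirtualEnvironment: Bool
    }

    func options(for kind: EnvironmentKind) -> SessionOptions {
        switch kind {
        case .augmentedReality:
            // First set of criteria met: keep the passthrough view, but dimmed.
            return SessionOptions(dimsPassthrough: true, displaysVirtualEnvironment: false)
        case .virtualReality:
            // Second set of criteria met: present a virtual environment instead.
            return SessionOptions(dimsPassthrough: false, displaysVirtualEnvironment: true)
        }
    }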
In some embodiments, the first set of one or more options includes a first subset of options (e.g., visual and/or audio characteristics for a user experience session) for the AR environment (e.g., as depicted in fig. 7B, 7C, 7D, 7E, 7F, 7G, 7H, and/or 7I) selected (e.g., randomly or pseudo-randomly) from a first set of available options (e.g., a superset of options for the AR environment). In some implementations, the second set of one or more options includes a second subset of options (e.g., visual and/or audio characteristics for the user experience session) for the VR environment (e.g., as depicted in fig. 7J, 7K, and/or 7L) selected (e.g., randomly or pseudo-randomly) from a second set of available options (e.g., a superset of options for the VR environment). In some embodiments, the first set of available options includes a different number (fewer or more) of options than the second set of available options. A first subset of options for the AR environment is selected from a first set of available options and a second subset of options for the VR environment is selected from a second set of available options, wherein the first set of available options includes a different number of options than the second set of available options, enabling the computer system to provide a more realistic user experience while conserving storage space by not requiring multiple different complete audio tracks and/or visual components to be stored and selected for playback/display. In some implementations, the options available for VR environments are more limited than the options available for AR environments. In some embodiments, the options available for the VR environment may be used to create a smaller set of recipes for creating the VR environment or include a smaller range of variables than are otherwise available for creating the AR environment.
In some implementations, displaying the user interface for the user experience session as having a first set of one or more options enabled for the user experience session (e.g., a first set of visual and/or audio characteristics for the session) includes: the computer system (e.g., 702) displays a first environment (e.g., 765) (e.g., a first virtual environment) for the user experience session. In some implementations, displaying the user interface for the user experience session as having a second set of one or more options enabled for the user experience session (e.g., a second set of visual and/or audio characteristics for the session) includes: a second environment (e.g., 787) different from the first environment for the user experience session is displayed (e.g., a second virtual environment). Displaying the first environment or the second environment for the user experience session when one or more characteristics of the XR environment satisfy the first set of criteria or the second set of criteria causes the device to automatically display the first environment or the second environment based on the characteristics of the XR environment without displaying additional controls.
In some embodiments, the virtual environment is a curtain or wallpaper of a three-dimensional environment. In some embodiments, the virtual environment may be displayed outside (e.g., before and/or after) the user experience session. In some embodiments, the virtual environment provides a virtual three-dimensional space in which a user performs an activity using a computer system, such as using an application, playing a game, communicating with other users, experiencing co-occurrence with other users, and/or interacting with elements of an operating system of the computer system. In some implementations, the computer system provides respective spatial audio soundtracks for the various virtual environments. In some embodiments, the spatial audio soundscapes are unique to each respective virtual environment. In some embodiments, spatial audio soundtracks are planned for respective virtual environments, for example, in order to convey specific moods and/or themes of the respective virtual environments.
In some implementations, the first set of one or more options includes a first set of variables (e.g., 770, 772, 774, and/or 780-1) (e.g., visual and/or audio characteristics for the user experience session) for the first environment based on one or more characteristics of the first environment (e.g., 765) (e.g., variables for the first environment are customized or optimized for the first environment). In some embodiments, the second set of one or more options includes a second set of variables (e.g., 790, 792, 794, and/or 785-1) for the second environment (e.g., 787) based on one or more characteristics of the second environment (e.g., visual and/or audio characteristics for the user experience session) (e.g., different from the first set of variables) (e.g., the variables for the second environment are customized or optimized for the second environment).
In some implementations, the first set of criteria includes a first criterion that is met when a previous user interface (e.g., 765) for a previous user experience session (e.g., a previous instance of the user experience session) has been displayed with a set of previous visual characteristics (e.g., 770 and/or 772) for the previous user experience session (e.g., visual characteristics, color, shape, size, graphics, and/or animation effects of the displayed components of the previous user experience session). In some implementations, displaying a user interface for a user experience session with a first set of one or more options enabled for the user experience session (e.g., a current instance of the user experience session) includes: the computer system (e.g., 702) displays a user interface (e.g., 787) having a first set of visual characteristics (e.g., 790 and/or 792) for the user experience session that is different from a set of previous visual characteristics for the previous user experience session (e.g., visual characteristics, color, shape, size, graphics, and/or animation effects of the displayed components of the user experience session). Displaying the user interface with a first set of visual characteristics for the user experience session that is different from a set of previous visual characteristics for a previous user experience session enables the computer system to generate a unique or fresh user interface for the user experience session based on a history of the user experience session without displaying additional controls. In some implementations, subsequent instances of the user experience session (e.g., repeated user experience sessions) have different visual characteristics selected as unique or different from previous instances of the user experience session. In some embodiments, the third set of criteria includes the first criteria. In some implementations, the second set of criteria includes a second criterion that is met when a second previous user interface for a second previous user experience session has been displayed with a second set of previous visual characteristics for the second previous user experience session. In some embodiments, displaying the user interface for the user experience session with the second set of one or more options enabled for the user experience session includes: the user interface is displayed with a second set of previous visual characteristics for the user experience session that is different from (and optionally different from) the second set of previous visual characteristics for the second previous user experience session.
In some embodiments, a first set of visual characteristics for a user experience session is randomly or pseudo-randomly selected by a computer system (e.g., 702) from a set of available visual characteristics (e.g., a superset of visual characteristics). Randomly or pseudo-randomly selecting a first set of visual characteristics for a user experience session from the set of available visual characteristics enables the computer system to provide a more realistic user experience while conserving storage space by not requiring multiple different full visual components to be stored and selected for display. In some embodiments, the set of visual characteristics for the user experience session is selected with a bias toward not repeating one or more of the visual characteristics already used in the previous user experience session. For example, the set of available visual characteristics includes the set of previous visual characteristics, and the first set of visual characteristics is selected from a subset of the available visual characteristics that does not include the set of previous visual characteristics.
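A short sketch of selection biased against repetition, assuming the previous session's characteristics are simply excluded from the candidate pool before a pseudo-random draw; the characteristic names are placeholders.

    let allCharacteristics: Set = ["warm light", "cool light", "soft edges",
                                   "glassy material", "matte material"]
    let previousSessionCharacteristics: Set = ["warm light", "glassy material"]

    // Exclude whatever the previous session used, then draw pseudo-randomly.
    let candidates = allCharacteristics.subtracting(previousSessionCharacteristics)
    let newCharacteristics = Array(candidates.shuffled().prefix(2))
    print(newCharacteristics)   // never repeats the previous session's picks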
In some implementations, the set of previous visual characteristics includes a first light attribute (e.g., direction, color, and/or intensity of light) of a simulated lighting effect (e.g., simulated light reflection) for a user interface object (e.g., 710, 712, 770, 772, 790, and/or 792) (e.g., one or more particles forming a cube, sphere, cloud, pyramid, and/or abstract object) displayed in the previous user experience session. In some embodiments, the first set of visual characteristics includes a second light attribute different from the first light attribute for a simulated lighting effect of a user interface object displayed in the user experience session (e.g., a user interface for the user experience session includes a user interface object including a plurality of particles having a set of light attributes of simulated light reflected from the particles). Displaying the user interface with a first set of visual characteristics including a second light attribute different from the first light attribute for a simulated lighting effect of a user interface object displayed in the user experience session enables the computer system to provide a more realistic user experience while saving storage space by not requiring multiple different full visual components to be stored and selected for display.
In some implementations, the set of previous visual characteristics includes a first material property (e.g., simulated optical property, shape, and/or curvature) for user interface objects (e.g., 710, 712, 770, 772, 790, and/or 792) (e.g., one or more particles forming cubes, spheres, spheroids, clouds, pyramids, and/or abstract objects) displayed in the previous user experience session. In some embodiments, the first set of visual properties includes a second material property different from the first material property for a user interface object displayed in the user experience session (e.g., a user interface for the user experience session includes a user interface object including a plurality of particles having a set of material properties). Displaying the user interface with a first set of visual properties including a second material property different from the first material property for user interface objects displayed in the user experience session enables the computer system to provide a more realistic user experience while conserving storage space by not requiring multiple different full visual components to be stored and selected for display.
In some implementations, displaying the user interface for the user experience session as having a first set of one or more options enabled for the user experience session (e.g., a first set of visual and/or audio characteristics for the session) includes: the computer system (e.g., 702) displays the VR environment (e.g., 765 and/or 787) for the user experience session with a first amount of emphasis (e.g., a low amount of dimming or no dimming; no dimming effect; no fading of the VR environment). In some implementations, displaying the user interface for the user experience session as having a second set of one or more options enabled for the user experience session (e.g., a second set of visual and/or audio characteristics for the session) includes: the AR environment (e.g., 705) for the user experience session is displayed with a second amount of emphasis (e.g., 704) that is less than the first amount of emphasis (e.g., the AR environment is displayed with a greater fading or dimming effect) (e.g., the AR environment is dimmed by 99.9%, 99%, 98%, 97%, 95%, 90%, 85%, 80%, 50%, 20%, 15%, 10%, 7%, 5%, 3%, 2%, 1%, 0.5%, and/or another amount that is greater than 0%). Displaying the AR environment for the user experience session with a second amount of emphasis that is less than the first amount of emphasis for the VR environment reduces interference with the user experience session.
In some implementations, the AR environment is displayed with a greater amount of dimming than the VR environment. In some embodiments, displaying the dimming effect includes: visually obscuring a view (e.g., 700 a) of a physical environment that is visible (e.g., via passthrough video or due to the transparent nature of the display) to a user (e.g., 701) of a computer system (e.g., 702). In some implementations, displaying the dimming effect (e.g., 704) includes: displaying a partially transparent virtual overlay through which the physical environment remains visible. In some implementations, the AR environment is dimmed to reduce interference in a different manner than in the VR environment. For example, the computer system may coordinate user experience sessions in the VR environment by displaying a particular virtual environment, thereby reducing potential interference to the user, while interference in the AR environment is reduced by the computer system dimming the visibility of objects in the physical environment.
In some embodiments, displaying a user interface for a user experience session includes: a plurality of user interface objects (e.g., 710, 712, 770, 772, 790, and/or 792) (e.g., one or more particles forming cubes, spheres, spheroids, clouds, pyramids, and/or abstract objects) are displayed as having a translucent appearance (e.g., the translucent appearance of particles is affected by elements of the environment for the user experience session) based on the environment (e.g., 700a, 705, 765, and/or 787) used for the user experience session (e.g., physical objects, virtual objects, detected illumination, and/or virtual illumination in an XR environment). Displaying the plurality of user interface objects with a translucent appearance based on the environment for the user experience session enables the computer system to provide a more realistic user experience based on the environment for the user experience session while saving storage space by not requiring the plurality of different full visual components to be stored and selected for display.
In some implementations, displaying the plurality of user interface objects (e.g., 710, 712, 770, 772, 790, and/or 792) as having a semi-transparent appearance based on the environment for the user experience session includes: in accordance with a determination that the environment for the user experience session includes a first virtual lighting effect (e.g., a simulated localization, direction, color, and/or intensity of light in an XR environment), the computer system (e.g., 702) displays the plurality of user interface objects as having a first semi-transparent appearance (e.g., the translucence of a particle has a first localization and/or a first semi-transparent measure on the corresponding particle) (e.g., in fig. 7J and/or fig. 7K). In some implementations, in accordance with a determination that the environment for the user experience session includes a second virtual lighting effect (e.g., a simulated localization, direction, color, and/or intensity of light in the XR environment) that is different from the first virtual lighting effect, the plurality of user interface objects are displayed as having a second translucent appearance that is different from the first translucent appearance (e.g., the translucency of the particles has a second localization and/or a second translucent metric on the respective particles) (e.g., in fig. 7L). Displaying the plurality of user interface objects as having a second translucent appearance in accordance with a determination that the environment includes a second virtual lighting effect enables the computer system to provide a more realistic user experience based on the virtual lighting effects in the environment for the user experience session while saving storage space by not requiring the plurality of different full visual components to be stored and selected for display. In some implementations, the visual appearance of the user interface object having a translucent appearance is affected by the virtual lighting of the AR/VR environment.
In some embodiments, initiating the user experience session in the XR environment comprises: the computer system (e.g., 702) increases one or more immersive aspects of the user experience session (e.g., increases the proportion of the user's field of view occupied by the environment, and/or increases the spatial immersion of the environment's audio) (e.g., as depicted in fig. 7D and/or fig. 7K). Adding one or more immersive aspects of the user experience session provides feedback regarding the state of the computer system and eliminates interference with the user experience session.
In some implementations, adding one or more immersive aspects of the user experience session includes: the computer system (e.g., 702) increases the proportion (e.g., increases the displayed size of virtual wallpaper 765, virtual object 770, and/or particle 772 in fig. 7K) of the user field of view (e.g., the field of view of the display generation component (e.g., 702-1)) occupied by the user interface for the user experience session (e.g., increases the displayed size of virtual interface 705, virtual object 710, and/or particle 712 in fig. 7D). Increasing the proportion of the user field of view occupied by the user interface for the user experience session provides feedback regarding the state of the computer system and eliminates interference with the user experience session.
In some implementations, adding one or more immersive aspects of the user experience session includes: the computer system (e.g., 702) increases the spatial immersion of the audio generated for the user experience session (e.g., causes the perceived spatial position of the audio to move from a first position perceived as a distance away from the user of the computer system to a second position perceived (by the user) as being near and/or around the user) (e.g., transitioning from stereo audio to spatial audio in fig. 7D and/or fig. 7K). Increasing the spatial immersion of audio generated for the user experience session provides feedback about the state of the computer system and eliminates interference with the user experience session.
In some embodiments, aspects/operations of methods 800 and 1000 may be interchanged, substituted, and/or added between the methods. For the sake of brevity, these details are not repeated here.
Figs. 10A-10B are flowcharts of an exemplary method 1000 for providing a soundscape with randomly selected planned sound components to a computer-generated user experience session, according to some embodiments. In some embodiments, the method 1000 is performed at a computer system (e.g., computer system 101 and/or device 702 in fig. 1) (e.g., a smart phone, a tablet device, and/or a head-mounted display generating component) that is in communication with a display generating component (e.g., display generating component 120 and/or display 702-1 in figs. 1, 3, and 4) (e.g., a display, a touch screen, a visual output device, a 3D display, a display having at least a transparent or translucent portion on which an image may be projected (e.g., a see-through display), a projector, a heads-up display, and/or a display controller), an audio generating component (e.g., the audio output device 703 and/or speakers at the device 702, or audio generating components integrated into the HMD) (e.g., speakers and/or bone conduction audio output devices), and one or more sensors (e.g., camera 702-2) (e.g., gyroscopes, accelerometers, motion sensors, movement sensors, microphones, infrared sensors, camera sensors, depth cameras, visible light cameras, eye tracking sensors, gaze tracking sensors, physiological sensors, image sensors, cameras pointing downward at the user's hand (e.g., color sensors, infrared sensors, and other depth sensing cameras), and/or cameras pointing forward from the user's head). In some embodiments, method 1000 is managed by instructions stored in a non-transitory (or transitory) computer-readable storage medium and executed by one or more processors of a computer system (such as one or more processors 202 of computer system 101) (e.g., control 110 in fig. 1). Some operations in method 1000 are optionally combined and/or the order of some operations is optionally changed.
In method 1000, a computer system (702) detects (1002) a request (e.g., input 724, input 778, or the like) to initiate a corresponding type of user experience session in an XR environment via one or more sensors (e.g., 702-1 and/or 702-2) at a first time (e.g., as depicted in fig. 7B or as depicted in fig. 7J) (e.g., a request to initiate a user experience in an XR environment, the user experience optionally including instruction instructions for breathing exercises to relax and/or focus a user and/or reaction exercises to help a user react to a scene, topic, idea, concept, etc.).
In response to detecting a request to initiate a user experience session in an XR environment, a computer system (e.g., 702) initiates (1004) a corresponding type of first user experience session in the XR environment (e.g., as depicted in fig. 7D and as depicted in fig. 7K) (e.g., initiates a user experience in the XR environment that optionally includes instruction instructions for breathing exercises to relax and/or focus the user and/or reaction exercises to help the user react to scenes, topics, ideas, concepts, etc.). In some embodiments, initiating the user experience in the XR environment comprises: an application for providing a user experience is launched.
As part of initiating a respective type of first user experience session in an XR environment, a computer system (e.g., 702) displays (1006) (e.g., in an XR environment) a user interface (e.g., 705, 710, 712, 765, 770, and/or 772) (e.g., a UI for an application of breathing exercise and/or reaction exercise) for the first user experience session via a display generation component (e.g., 702-1). The computer system also outputs (1008), via an audio generation component (e.g., 703), a first audio soundscape (e.g., 711 or 774) for the first user experience session (e.g., a set of planned sound components selected to create an audio environment for the user experience session) (e.g., when displaying a user interface for the first user experience session). The computer system outputs the first audio soundscape concurrently with displaying a user interface for the first user experience session. Outputting the first audio soundscape comprises: a first audio soundscape is output having a first set of two or more audio components randomly or pseudo-randomly selected from a set of available audio components. In some implementations, the two or more audio components of the first set of audio components are a subset of the planned sound components selected from a superset of the planned sound components for the audio soundscape.
In method 1000, a computer system (e.g., 702) detects (1010), via one or more sensors (e.g., 702-1 and/or 702-2), a request (e.g., input 724, input 778, or similar input) to initiate a corresponding type of user experience session in an XR environment at a second time (e.g., depicted in fig. 7B or depicted in fig. 7J) different from the first time (e.g., a request to initiate a user experience in an XR environment, the user experience optionally including instructional instructions for relaxing and/or focusing on breathing exercises and/or reaction exercises to help a user react to a scene, topic, idea, concept, etc.).
In response to detecting a request to initiate a user experience session in an XR environment, a computer system (e.g., 702) initiates (1012) a corresponding type of second user experience session in the XR environment (e.g., depicted in fig. 7L) (e.g., initiates a user experience in the XR environment that optionally includes instruction for breathing exercises to relax and/or focus the user and/or reaction exercises to help the user react to scenes, topics, ideas, concepts, etc.). In some embodiments, initiating the user experience in the XR environment comprises: an application for providing a user experience is launched.
As part of initiating a respective type of second user experience session in an XR environment, a computer system (e.g., 702) displays (1014) (e.g., in an XR environment) a user interface (e.g., 787, 790, and/or 792) (e.g., a UI for breathing exercise and/or an application reflecting exercise) for the second user experience session via a display generation component (e.g., 702-1). The computer system also outputs (1016), via the audio generating component (e.g., 703), a second audio soundscape (e.g., 794) for the second user experience session (e.g., a set of planned sound components selected to create the audio environment for the user experience session) (e.g., when displaying a user interface for the second user experience session). The computer system outputs the second audio soundscape concurrently with displaying the user interface for the second user experience session. Outputting the second audio soundscape comprises: a second audio soundscape is output having a second set of two or more audio components randomly or pseudo-randomly selected from the set of available audio components (e.g., different from the first set of two or more audio components). Outputting the second audio soundscape for the second user experience session, wherein the second audio soundscape is output concurrently with displaying the user interface for the second user experience session, and outputting the second audio soundscape comprises outputting the second audio soundscape having a second set of two or more audio components randomly or pseudo-randomly selected from the set of available audio components, enabling the computer system to provide a more realistic user experience while saving storage space by not requiring a plurality of different complete audio tracks to be stored and selected for playback. In some implementations, the two or more audio components of the second set of audio components are a subset of the planned sound components selected from the superset of planned sound components for the audio soundscape. In some embodiments, at least one audio component of the two or more audio components of the second set of audio components is selected to be different from an audio component of the first set of audio components.
In some implementations, outputting the first audio soundscape (e.g., 711 or 774) for the first user experience session includes: the computer system (e.g., 702) repeats (e.g., continuously repeats over a loop) the first set of two or more audio components during the first user experience session. In some implementations, outputting the second audio soundscape for the second user experience session includes: the second set of two or more audio components is repeated (e.g., repeated continuously over a loop) during the second user experience session. Repeating the first set of two or more audio components during the first user experience session provides feedback regarding the state of the computer system (e.g., outputting the state of the first user experience session).
In some implementations, a first set of two or more audio components of a first audio soundscape (e.g., 711 or 774) is different from a second set of two or more audio components of a second audio soundscape (e.g., 794). Outputting the first soundscape and/or the second soundscape, wherein the first set of two or more audio components is different from the second set of two or more audio components, enables the computer system to provide a more realistic user experience while conserving storage space by not requiring a plurality of different full audio tracks to be stored and selected for playback.
In some implementations, outputting the first audio soundscape (e.g., 711 or 774) for the first user experience session concurrently with displaying the user interface (e.g., 705, 710, 712, 765, 770, and/or 772) for the first user experience session includes the following. When the user interface for the first user experience session includes a first predetermined animation effect (e.g., an animation effect of a visual component displayed for a particular stage or portion of the first user experience session) (e.g., a pulsating animation (e.g., an alternating pattern of increasing and decreasing the size of the visual component) of a visual component (e.g., a sphere, cube, vortex, and/or cloud) of a guided breathing portion of the first user experience), the computer system (e.g., 702) outputs the first set of two or more audio components with a first set of audio characteristics (e.g., the first set of two or more audio components has a set of audio characteristics determined based on the predetermined animation effect) (e.g., the volume of the two or more audio components in the first set increases as the size of the visual component increases and decreases as the size of the visual component decreases). When the user interface for the first user experience session includes a second predetermined animation effect that is different from the first predetermined animation effect (e.g., a floating/swaying animated appearance of visual components (e.g., particles, triangles, circles, squares, and/or cubes) of a reflection portion of the first user experience), the computer system outputs the first set of two or more audio components with a second set of audio characteristics that is different from the first set of audio characteristics (e.g., the audio is output with a spatial position that changes based on the floating/swaying movements of the visual components). Outputting the first set of two or more audio components with the first set of audio characteristics when the user interface for the first user experience session includes the first predetermined animation effect, and outputting the first set of two or more audio components with the second set of audio characteristics when the user interface includes the second predetermined animation effect, causes the computer system to automatically modify the first audio soundscape based on the animation effect output in the user interface for the first user experience session.
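One possible reading of the coupling described above is that the current animation state drives audio parameters such as volume or spatial offset. The Swift sketch below assumes a pulsing animation whose normalized size is known per frame and a floating animation whose offset is known; the names AnimationEffect, AudioCharacteristics, and characteristics(for:) are illustrative only and are not taken from the disclosure.

```swift
import Foundation

// Hypothetical sketch: derive audio characteristics from the current animation
// state of the session's visual component. Names are illustrative only.
enum AnimationEffect {
    case pulsing(normalizedSize: Double)                     // 0.0 (smallest) ... 1.0 (largest)
    case floating(offset: (x: Double, y: Double, z: Double)) // visual sway offset
}

struct AudioCharacteristics {
    var volume: Double                                       // 0.0 ... 1.0
    var spatialOffset: (x: Double, y: Double, z: Double)
}

func characteristics(for effect: AnimationEffect,
                     baseVolume: Double = 0.6) -> AudioCharacteristics {
    switch effect {
    case .pulsing(let size):
        // Louder as the visual component grows, quieter as it shrinks.
        let volume = min(1.0, max(0.0, baseVolume * (0.5 + 0.5 * size)))
        return AudioCharacteristics(volume: volume, spatialOffset: (0, 0, 0))
    case .floating(let offset):
        // Keep volume steady but move the perceived source with the visuals.
        return AudioCharacteristics(volume: baseVolume, spatialOffset: offset)
    }
}

// Example: a breath-paced pulse sampled at a few points of its cycle.
for phase in stride(from: 0.0, through: 1.0, by: 0.25) {
    let size = 1.0 - abs(2 * phase - 1)   // grows then shrinks over the cycle
    let c = characteristics(for: .pulsing(normalizedSize: size))
    print(String(format: "size %.2f -> volume %.2f", size, c.volume))
}
```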
In some implementations, outputting the second audio soundscape (e.g., 794) for the second user experience session concurrently with displaying the user interface (e.g., 787, 790, and/or 792) for the second user experience session includes the following. When the user interface for the second user experience session includes a third predetermined animation effect (in some embodiments, the third predetermined animation effect is the same as the first predetermined animation effect), the computer system (e.g., 702) outputs a second set of two or more audio components having a third set of audio characteristics (in some embodiments, the third set of audio characteristics is the same as the first set of audio characteristics). When the user interface for the second user experience session includes a fourth predetermined animation effect that is different from the third predetermined animation effect (in some embodiments, the fourth predetermined animation effect is the same as the second predetermined animation effect), the computer system outputs a second set of two or more audio components having a fourth set of audio characteristics that is different from the third set of audio characteristics (in some embodiments, the fourth set of audio characteristics is the same as the second set of audio characteristics).
In some implementations, both the visual effects (e.g., the predetermined animation effects) and the audio components (e.g., 711, 774, and/or 794) (e.g., the corresponding audio soundscapes) are selected based on certain criteria (e.g., the same criteria). For example, when a particular soundscape is selected (e.g., by a user and/or by the computer system), a corresponding visual effect is automatically selected (e.g., by the computer system) to accompany the selected soundscape, or when a particular visual effect is selected, a corresponding soundscape is automatically selected (e.g., by the computer system) to accompany the selected visual effect. As another example, a particular virtual environment, theme, and/or emotion is selected, and a particular soundscape and/or visual effect is automatically selected (e.g., by the computer system) for the selected environment, theme, and/or emotion.
In some implementations, while outputting the first audio soundscape (e.g., 711 or 774) for the first user experience session, the computer system (e.g., 702) detects (e.g., determines) (e.g., using a sensor) a biometric input (e.g., an inward breath, an outward breath, a heartbeat, and/or a body movement). In some embodiments, in response to detecting the biometric input, the computer system modifies the first audio soundscape, including: in accordance with a determination that the biometric input includes a first biometric input (e.g., an inward breath), the computer system modifies the first set of two or more audio components in a first manner (e.g., outputs the first soundscape with a perceived spatial location of the audio components that moves toward the user) (e.g., as depicted by the locations of spatial audio indicators 709-3 through 709-9 in fig. 7G); and in accordance with a determination that the biometric input includes a second biometric input (e.g., an outward breath) that is different from the first biometric input, the computer system modifies the first set of two or more audio components in a second manner that is different from the first manner (e.g., outputs the first soundscape with a perceived spatial location of the audio components that moves away from the user) (e.g., as depicted by the locations of spatial audio indicators 709-3 through 709-9 in fig. 7F). Modifying the first set of two or more audio components in the first manner when the biometric input includes the first biometric input, and modifying the first set of two or more audio components in the second manner when the biometric input includes the second biometric input, causes the computer system to automatically modify the first audio soundscape based on the biometric input detected during the first user experience session.
In some implementations, the computer system (e.g., 702) detects the biometric input while outputting the second audio soundscape (e.g., 774 or 794) for the second user experience session. In response to detecting the biometric input, the computer system modifies the second audio soundscape, including: modifying the second set of two or more audio components in a third manner (e.g., in the first manner) in accordance with a determination that the biometric input includes a third biometric input (e.g., the first biometric input) (e.g., an inward breath) (e.g., outputting the second soundscape with a perceived spatial location of the audio components that moves toward the user); and modifying the second set of two or more audio components in a fourth manner (e.g., in the second manner) different from the third manner (e.g., outputting the second soundscape with a perceived spatial location of the audio components that moves away from the user) in accordance with a determination that the biometric input includes a fourth biometric input (e.g., the second biometric input) (e.g., an outward breath) different from the third biometric input. In some implementations, characteristics of the audio (e.g., the spatial position of the audio and/or spatial movement of the audio) are based on biometric measurements of the user (e.g., respiration, movement, and/or heart rate).
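A minimal sketch of the breath-driven modification described in the two preceding paragraphs is shown below, assuming that each soundscape component has a single scalar distance from the listener and that an inhale or exhale nudges that distance by a fixed step. The types BreathEvent and SpatialComponent, and the step and minimum-distance values, are assumptions made for this example and are not from the disclosure.

```swift
import Foundation

// Hypothetical sketch: nudge the perceived spatial position of each soundscape
// component toward the listener on an inhale and away on an exhale.
enum BreathEvent { case inhale, exhale }

struct SpatialComponent {
    var name: String
    var distance: Double        // metres from the listener, along a fixed bearing
}

func applyBreathEvent(_ event: BreathEvent,
                      to components: [SpatialComponent],
                      step: Double = 0.2,
                      minimumDistance: Double = 0.5) -> [SpatialComponent] {
    components.map { component in
        var updated = component
        switch event {
        case .inhale:
            // Move toward the user, but never closer than a minimum distance.
            updated.distance = max(minimumDistance, component.distance - step)
        case .exhale:
            // Move away from the user.
            updated.distance = component.distance + step
        }
        return updated
    }
}

var soundscape = [SpatialComponent(name: "chimes", distance: 2.0),
                  SpatialComponent(name: "wind", distance: 3.5)]
soundscape = applyBreathEvent(.inhale, to: soundscape)
soundscape = applyBreathEvent(.exhale, to: soundscape)
print(soundscape.map { "\($0.name): \($0.distance) m" })
```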
In some implementations, outputting the first audio soundscape (e.g., 711 or 774) for the first user experience session includes: in accordance with a determination that the first set of criteria is met (e.g., a predetermined amount of time has elapsed and/or the first user experience is transitioning (or has transitioned) from a first portion of the user experience to a second portion of the user experience), the computer system (e.g., 702) causes the output volume of the first audio soundscape to gradually increase (e.g., as depicted in fig. 7D and/or fig. 7K); and in accordance with a determination that the second set of criteria is met (e.g., a predetermined amount of time has elapsed and/or the first user experience is transitioning (or has transitioned) from the second portion of the user experience to the third portion of the user experience), cause the output volume of the first audio soundscape to gradually decrease (e.g., as depicted in fig. 7I). Causing the output volume of the first audio soundscape to gradually increase when the first set of criteria is met and causing the output volume of the first audio soundscape to gradually decrease when the second set of criteria is met causes the computer system to automatically modify the first audio soundscape based on the criteria met during the first user experience session.
In some implementations, outputting the second audio soundscape (e.g., 774 or 794) for the second user experience session includes: in accordance with a determination that a third set of criteria is met (e.g., a first set of criteria is met), the computer system (e.g., 702) causes the output volume of the second audio soundscape to gradually increase (e.g., as depicted in fig. 7D and/or fig. 7K); and in accordance with a determination that the fourth set of criteria is met (e.g., the second set of criteria is met), the computer system causes the output volume of the second audio soundscape to gradually decrease (e.g., as depicted in fig. 7I).
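The gradual volume increases and decreases described above amount to fading the soundscape in or out when the corresponding criteria are met. The following sketch assumes a simple linear ramp evaluated against elapsed time; the VolumeRamp type and the chosen durations are illustrative only and are not specified by the disclosure.

```swift
import Foundation

// Hypothetical sketch: fade the soundscape volume up when one set of criteria
// is met (e.g., entering a portion of the session) and down when another set
// is met (e.g., leaving it).
struct VolumeRamp {
    let start: Double
    let end: Double
    let duration: TimeInterval

    /// Volume at `elapsed` seconds into the ramp, clamped to the ramp's range.
    func volume(at elapsed: TimeInterval) -> Double {
        guard duration > 0 else { return end }
        let t = min(1.0, max(0.0, elapsed / duration))
        return start + (end - start) * t
    }
}

let fadeIn = VolumeRamp(start: 0.0, end: 0.8, duration: 4.0)
let fadeOut = VolumeRamp(start: 0.8, end: 0.0, duration: 6.0)

for second in stride(from: 0.0, through: 4.0, by: 1.0) {
    print(String(format: "fade-in  t=%.0fs volume=%.2f", second, fadeIn.volume(at: second)))
}
for second in stride(from: 0.0, through: 6.0, by: 2.0) {
    print(String(format: "fade-out t=%.0fs volume=%.2f", second, fadeOut.volume(at: second)))
}
```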
In some implementations, initiating (e.g., starting) a respective type of first user experience session in an XR environment includes: the computer system (e.g., 702) outputting the first audio soundscape (e.g., 711 and/or 774) with a respective audio component (e.g., a start sound and/or a sound that coincides with the start and/or end of the first user experience session). In some implementations, while outputting the first audio soundscape for the first user experience session (e.g., and while the first audio soundscape does not include the respective audio component), the computer system initiates (e.g., in response to user input, after a particular amount of time has elapsed, and/or after at least a portion of the first user experience session is completed) termination of the first user experience session (e.g., as depicted in fig. 7I), wherein terminating the first user experience session includes: outputting the first audio soundscape with the respective audio component. Outputting the first audio soundscape with the respective audio component when the respective type of first user experience session is initiated and when termination of the first user experience session is initiated provides feedback to a user of the computer system regarding a state of the computer system (e.g., a state of starting or ending the first user experience session).
In some embodiments, initiating a respective type of second user experience session in an XR environment includes: the computer system (e.g., 702) outputting the second audio soundscape (e.g., 774 and/or 794) with a second respective audio component (e.g., a start sound and/or a sound that coincides with the start and/or end of the second user experience session). While outputting the second audio soundscape for the second user experience session (e.g., and while the second audio soundscape does not include the second respective audio component), the computer system initiates (e.g., in response to user input, after a particular amount of time has elapsed, and/or after at least a portion of the second user experience session is completed) termination of the second user experience session (e.g., as depicted in fig. 7I), wherein terminating the second user experience session includes: outputting the second audio soundscape with the second respective audio component.
In some implementations, the computer system (e.g., 702) does not output the first audio soundscape after termination of the first user experience session or the second user experience session. In some embodiments, the soundscapes have the same sound output at the beginning of a particular user experience session and at the end of the particular user experience session. In some embodiments, the soundscapes have the same starting sound at the beginning of different user experience sessions (and optionally have different ending sounds for each user experience session, which ending sounds are optionally the same ending sound). In some embodiments, the soundscapes have the same ending sound at the end of different user experience sessions (and optionally have different starting sounds for each user experience session, which starting sounds are optionally the same starting sound). In some embodiments, the soundscapes have different starting sounds at the beginning of different user experience sessions. In some embodiments, the soundscapes have different ending sounds at the end of different user experience sessions.
In some implementations, outputting the first audio soundscape (e.g., 711 and/or 774) includes: while outputting the first audio soundscape with the first set of two or more audio components, the computer system (e.g., 702) outputs a third set of one or more audio components (e.g., audio components different from the first set), wherein the third set of one or more audio components is output at one or more randomly selected or pseudo-randomly selected instances (e.g., moments in time) during the first user experience session. Outputting the third set of one or more audio components at one or more randomly selected or pseudo-randomly selected instances during the first user experience session enables the computer system to provide a more realistic user experience while saving storage space by not requiring multiple different complete audio tracks to be stored and selected for playback. In some implementations, outputting the second audio soundscape includes: while outputting the second audio soundscape with the second set of two or more audio components, outputting a fourth set of one or more audio components, wherein the fourth set of one or more audio components is output at one or more randomly or pseudo-randomly selected instances during the second user experience session. In some embodiments, the first audio soundscape and/or the second audio soundscape includes one or more sounds that are randomly (or pseudo-randomly) generated throughout the soundscape to introduce variety into the soundscape.
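The randomly timed extra components described above could be realized by pre-computing a schedule of one-shot "accent" sounds for the session. The sketch below assumes a uniform distribution of instants over the session duration and a fixed number of accents; the names ScheduledAccent and scheduleAccents, and those assumptions, are illustrative only.

```swift
import Foundation

// Hypothetical sketch: pre-compute randomly chosen instants at which extra
// accent components are layered over the looping soundscape.
struct ScheduledAccent {
    let component: String
    let time: TimeInterval     // seconds from the start of the session
}

func scheduleAccents(components: [String],
                     sessionDuration: TimeInterval,
                     count: Int) -> [ScheduledAccent] {
    guard !components.isEmpty, sessionDuration > 0, count > 0 else { return [] }
    return (0..<count).map { _ in
        ScheduledAccent(component: components.randomElement()!,
                        time: Double.random(in: 0...sessionDuration))
    }
    .sorted { $0.time < $1.time }
}

let accents = scheduleAccents(components: ["single chime", "soft bell", "distant call"],
                              sessionDuration: 300,   // a five-minute session
                              count: 6)
for accent in accents {
    print(String(format: "%6.1fs", accent.time) + "  " + accent.component)
}
```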
In some embodiments, the set of available audio components includes a plurality of audio components selected to meet a harmony criterion (e.g., the audio components are indicated as harmony) when concurrently output at one or more randomly selected or pseudo-randomly selected instances (e.g., moments) during a respective type of user experience session (e.g., during a first user experience session and/or during a second user experience session).
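A harmony criterion of the kind described above could be enforced in several ways; the paragraph above describes curating the set of available components so that concurrently output components are harmonious. As one illustrative alternative, the sketch below tags each component with the musical keys it fits and accepts a random draw only if all drawn components share at least one key. The TaggedComponent type, the key tags, and the retry strategy are assumptions made for this example, not the disclosed approach.

```swift
import Foundation

// Hypothetical sketch of enforcing a harmony criterion at selection time.
struct TaggedComponent {
    let name: String
    let compatibleKeys: Set<String>
}

func drawHarmoniousSet(from pool: [TaggedComponent],
                       count: Int,
                       maxAttempts: Int = 100) -> [TaggedComponent]? {
    for _ in 0..<maxAttempts {
        let candidate = Array(pool.shuffled().prefix(count))
        // The combination is accepted only if every drawn component shares a key.
        let sharedKeys = candidate.dropFirst().reduce(candidate.first?.compatibleKeys ?? []) {
            $0.intersection($1.compatibleKeys)
        }
        if !sharedKeys.isEmpty { return candidate }
    }
    return nil   // no harmonious combination found within the attempt budget
}

let pool = [
    TaggedComponent(name: "drone", compatibleKeys: ["C", "G"]),
    TaggedComponent(name: "chimes", compatibleKeys: ["C", "D"]),
    TaggedComponent(name: "pad", compatibleKeys: ["C", "G", "D"]),
    TaggedComponent(name: "bells", compatibleKeys: ["F"]),
]
if let set = drawHarmoniousSet(from: pool, count: 2) {
    print(set.map(\.name))
}
```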
In some embodiments, the computer system receives updated data (e.g., data indicating a change in state of a first user experience session) when outputting a first audio soundscape (e.g., 711 and/or 774) having a first set of two or more audio components comprising a first perceived spatial location (e.g., 709-1 to 709-9) relative to a user (e.g., 701) of the computer system (e.g., 702). In response to receiving the update data, the computer system updates a state of the first user experience session, including: in accordance with a determination that the update data includes an indication that a first set of audio update criteria is met (e.g., a first predetermined amount of time has elapsed and/or the first user experience is transitioning (or has transitioned) from a first portion of the user experience to a second portion of the user experience), outputting a first audio soundscape having a first set of two or more audio components that includes a second perceived spatial location (e.g., as depicted in fig. 7F) relative to a user of the computer system, wherein the second perceived spatial location is different from the first perceived spatial location (e.g., outputting the first soundscape as having a perceived spatial location of the audio components that is moving toward the user); and outputting, in accordance with a determination that the update data includes an indication that the second set of audio update criteria is met (e.g., that a second predetermined amount of time has elapsed and/or that the first user experience is transitioning (or has transitioned) from the second portion of the user experience to a third portion of the user experience), a first audio soundscape having a first set of two or more audio components that includes a third perceived spatial location (e.g., as depicted in fig. 7G) relative to a user of the computer system, wherein the third perceived spatial location is different from the second perceived spatial location (e.g., outputting the first soundscape as having a perceived spatial location of the audio components that is moving away from the user). In some implementations, the perceived spatial audio position of the soundscape changes over time. Outputting a first audio soundscape having a first set of two or more audio components comprising a second perceived spatial location or a third perceived spatial location relative to a user of the computer system based on whether the update data includes an indication of whether the first set of audio update criteria or the second set of audio update criteria are met causes the computer system to automatically modify the perceived spatial location of the first audio soundscape based on the audio update criteria met during the first user experience session.
In some embodiments, the first set of two or more audio components includes a first audio recording (e.g., 740-1, 740-2, 740-3, and/or 740-4) from a first audio source (e.g., a narrator, mentor, coach, and/or other person), and the second set of two or more audio components includes a second audio recording (e.g., 780-1 and/or 785-1) from the first audio source. Outputting the first soundscape with a first set of two or more audio components that includes a first audio recording from a first audio source, and outputting the second soundscape with a second set of two or more audio components that includes a second audio recording from the first audio source, enable the computer system to provide a more realistic user experience while saving storage space by not requiring a plurality of different complete audio tracks to be stored and selected for playback. In some implementations, the second audio recording is different from the first audio recording. In some implementations, the second audio recording has one or more elements (e.g., dialogs, instructions, and/or phrases) that are common to the first audio recording. In some embodiments, the second audio recording has one or more elements (e.g., intonation, tone, rhythm, enhancement, pitch, and/or accent) that are different from the first audio recording.
In some implementations, the first audio recording (e.g., 740-1, 740-2, 740-3, and/or 740-4) from the first audio source includes a first conversation (e.g., spoken words, instructions, and/or directions) having a first set of speaking characteristics (e.g., intonation, rhythm, tone, enhancement, pitch, and/or accent), and the second audio recording (e.g., 780-1 and/or 785-1) from the first audio source includes a first conversation having a second set of speaking characteristics different from the first set of speaking characteristics. Outputting a first soundscape having a first audio recording from a first audio source comprising a first dialog having a first set of speaking characteristics and outputting a second soundscape having a second audio recording comprising a first dialog having a second set of speaking characteristics different from the first set of speaking characteristics enable the computer system to provide a more realistic user experience while conserving storage space by not requiring a plurality of different complete audio tracks to be stored and selected for playback.
In some implementations, the first audio recording (e.g., 740-1, 740-2, 740-3, and/or 740-4) from the first audio source includes a second dialog (e.g., spoken words, instructions, and/or directions), and the second audio recording (e.g., 780-1 and/or 785-1) from the first audio source includes a third dialog that is different from the second dialog. Outputting a first soundscape having a first audio recording from a first audio source comprising a second dialog and outputting a second soundscape having a second audio recording comprising a third dialog different from the second dialog enable the computer system to provide a more realistic user experience while conserving storage space by not requiring multiple different complete audio tracks to be stored and selected for playback.
In some embodiments, aspects/operations of methods 800, 900, and/or 1000 may be interchanged, substituted, and/or added among these methods. For brevity, these details are not repeated here.
The foregoing description, for purposes of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application to thereby enable others skilled in the art to best utilize the invention and various described embodiments with various modifications as are suited to the particular use contemplated.
As described above, one aspect of the present technology is to collect and use data from various sources to improve the XR experience of the user. The present disclosure contemplates that in some instances, such collected data may include personal information data that uniquely identifies or may be used to contact or locate a particular person. Such personal information data may include demographic data, location-based data, telephone numbers, email addresses, tweet IDs, home addresses, data or records related to the user's health or fitness level (e.g., vital sign measurements, medication information, exercise information), date of birth, or any other identification or personal information.
The present disclosure recognizes that the use of such personal information data in the present technology may be used to benefit users. For example, personal information data may be used to improve the XR experience of the user. In addition, the present disclosure contemplates other uses for personal information data that are beneficial to the user. For example, the health and fitness data may be used to provide insight into the general health of the user, or may be used as positive feedback to individuals who use the technology to pursue health goals.
The present disclosure contemplates that entities responsible for the collection, analysis, disclosure, transmission, storage, or other use of such personal information data will adhere to well-established privacy policies and/or privacy practices. In particular, such entities should implement and consistently adhere to privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining the privacy and security of personal information data. Such policies should be easily accessible by users and should be updated as the collection and/or use of data changes. Personal information from users should be collected for legitimate and reasonable uses and not shared or sold outside of those legitimate uses. Further, such collection/sharing should occur only after receiving the informed consent of the users. Additionally, such entities should consider taking any steps needed to safeguard and secure access to such personal information data and to ensure that other entities with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices. In addition, policies and practices should be adapted to the particular types of personal information data being collected and/or accessed, and to applicable laws and standards, including jurisdiction-specific considerations. For instance, in the United States, the collection of or access to certain health data may be governed by federal and/or state laws, such as the Health Insurance Portability and Accountability Act (HIPAA), whereas health data in other countries may be subject to other regulations and policies and should be handled accordingly. Hence, different privacy practices should be maintained for different types of personal data in each country.
Regardless of the foregoing, the present disclosure also contemplates embodiments in which a user selectively prevents use or access to personal information data. That is, the present disclosure contemplates hardware elements and/or software elements to prevent or block access to such personal information data. For example, with respect to an XR experience, the present technology may be configured to allow a user to choose to "opt-in" or "opt-out" to participate in the collection of personal information data during or at any time after registration with a service. As another example, the user may choose not to provide data for service customization. For another example, the user may choose to limit the length of time that data is maintained or to prohibit development of the customized service altogether. In addition to providing the "opt-in" and "opt-out" options, the present disclosure contemplates providing notifications related to accessing or using personal information. For example, the user may be notified that his personal information data will be accessed when the application is downloaded, and then be reminded again just before the personal information data is accessed by the application.
Furthermore, it is the intent of the present disclosure that personal information data should be managed and handled in a way that minimizes risks of unintentional or unauthorized access or use. Risk can be minimized by limiting the collection of data and deleting the data once it is no longer needed. In addition, and when applicable, including in certain health-related applications, data de-identification can be used to protect a user's privacy. De-identification may be facilitated, where appropriate, by removing specific identifiers (e.g., date of birth), controlling the amount or specificity of data stored (e.g., collecting location data at a city level rather than at an address level), controlling how data is stored (e.g., aggregating data across users), and/or by other methods.
Thus, while the present disclosure broadly covers the use of personal information data to implement one or more of the various disclosed embodiments, the present disclosure also contemplates that the various embodiments may be implemented without accessing such personal information data. That is, various embodiments of the present technology do not fail to function properly due to the lack of all or a portion of such personal information data. For example, an XR experience may be generated by inferring preferences based on non-personal information data or absolute minimum metrics of personal information, such as content requested by a device associated with the user, other non-personal information available to the service, or publicly available information.

Claims (85)

1.一种方法,所述方法包括:1. A method, comprising: 在与显示生成部件和一个或多个传感器通信的计算机系统处:At a computer system in communication with a display generating component and one or more sensors: 经由所述显示生成部件显示用于用户体验会话的用户界面,包括:Displaying a user interface for a user experience session via the display generation component includes: 在所述用户体验会话活动时:While the user is experiencing a session activity: 经由所述一个或多个传感器检测所述计算机系统的用户的一个或多个呼吸特性;以及detecting, via the one or more sensors, one or more breathing characteristics of a user of the computer system; and 显示具有基于所述计算机系统的所述用户的所述一个或多个呼吸特性而移动的多个粒子的用户界面对象,包括:Displaying a user interface object having a plurality of particles that move based on the one or more breathing characteristics of the user of the computer system, comprising: 根据确定所述计算机系统的所述用户的第一呼吸事件满足第一组标准,在所述计算机系统的所述用户的所述第一呼吸事件期间将所述用户界面对象的所述粒子显示为以第一方式移动;以及Based on determining that a first breathing event of the user of the computer system satisfies a first set of criteria, displaying the particles of the user interface object as moving in a first manner during the first breathing event of the user of the computer system; and 根据确定所述计算机系统的所述用户的所述第一呼吸事件满足第二组标准,在所述计算机系统的用户的所述第一呼吸事件期间将所述用户界面对象的所述粒子显示为以与所述第一方式不同的第二方式移动。Based on determining that the first respiratory event of the user of the computer system meets a second set of criteria, the particles of the user interface object are displayed as moving in a second manner different from the first manner during the first respiratory event of the user of the computer system. 2.根据权利要求1所述的方法,其中所述计算机系统与音频生成部件通信,并且其中显示用于所述用户体验会话的所述用户界面包括:2. The method of claim 1, wherein the computer system is in communication with an audio generation component, and wherein displaying the user interface for the user experience session comprises: 在所述用户体验会话活动之前,并发地:Prior to the user experience session activity, concurrently: 经由所述显示生成部件显示所述用户界面对象的所述多个粒子的表示;以及displaying, via the display generation component, a representation of the plurality of particles of the user interface object; and 经由所述音频生成部件输出用于所述用户体验会话的音频声景。An audio soundscape for the user experience session is output via the audio generation component. 3.根据权利要求2所述的方法,其中显示用于所述用户体验会话的所述用户界面还包括:在所述用户体验会话活动之前显示所述用户体验的环境的调光的外观。3. The method of claim 2, wherein displaying the user interface for the user experience session further comprises: displaying a dimmed appearance of an environment of the user experience prior to the user experience session being active. 4.根据权利要求1至3中任一项所述的方法,其中显示用于所述用户体验会话的所述用户界面包括:4. The method of any one of claims 1 to 3, wherein displaying the user interface for the user experience session comprises: 在所述用户体验会话活动之前:Prior to the User Experience Session activity: 显示能够选择来发起所述用户体验会话的开始选项;以及displaying a start option selectable to initiate the user experience session; and 显示对所述用户体验会话的持续时间的指示。An indication of a duration of the user experience session is displayed. 5.根据权利要求4所述的方法,其中显示用于所述用户体验会话的所述用户界面包括:5. 
The method of claim 4, wherein displaying the user interface for the user experience session comprises: 在所述用户体验会话活动之前:Prior to the User Experience Session activity: 显示能够选择来修改所述用户体验会话的所述持续时间的一组一个或多个持续时间选项;displaying a set of one or more duration options selectable to modify the duration of the user experience session; 检测指向所述一组一个或多个持续时间选项中的能够选择来修改所述用户体验会话的所述持续时间的第一持续时间选项的输入;以及detecting input directed to a first duration option of the set of one or more duration options selectable to modify the duration of the user experience session; and 响应于检测到指向所述一组一个或多个持续时间选项中的所述第一持续时间选项的所述输入,将所述用户体验会话的持续时间选择为第一持续时间。In response to detecting the input directed to the first duration option of the set of one or more duration options, a duration of the user experience session is selected as a first duration. 6.根据权利要求1至5中任一项所述的方法,其中显示用于所述用户体验会话的所述用户界面包括:6. The method of any one of claims 1 to 5, wherein displaying the user interface for the user experience session comprises: 在所述用户体验会话活动之前:Prior to the User Experience Session activity: 显示能够选择来从用于所述用户体验会话的多个音频指导挑选音频指导的一组一个或多个音频选项;displaying a set of one or more audio options selectable to select audio guidance from a plurality of audio guidance for the user experience session; 检测指向所述一组一个或多个音频选项中的能够选择来从用于所述用户体验会话的所述多个音频指导挑选音频指导的第一音频选项的输入;以及detecting input directed to a first audio option of the set of one or more audio options selectable to select audio guidance from the plurality of audio guidance for the user experience session; and 响应于检测到指向所述第一音频选项的所述输入,从用于所述用户体验会话的所述多个音频指导选择第一音频指导。In response to detecting the input directed to the first audio option, a first audio guide is selected from the plurality of audio guides for the user experience session. 7.根据权利要求1至6中任一项所述的方法,其中显示用于所述用户体验会话的所述用户界面包括:7. The method of any one of claims 1 to 6, wherein displaying the user interface for the user experience session comprises: 在所述用户体验会话活动时:While the user is experiencing a session activity: 将所述用户界面对象显示为具有基于预先确定的生物识别节律的动画效果,包括:Displaying the user interface object with an animation effect based on a predetermined biometric rhythm comprises: 根据确定所述预先确定的生物识别节律是第一生物识别节律,基于对应于所述第一生物识别节律的第一模式来以动画方式显示所述用户界面对象;以及Based on determining that the predetermined biometric rhythm is a first biometric rhythm, animating the user interface object based on a first pattern corresponding to the first biometric rhythm; and 根据确定所述预先确定的生物识别节律是与所述第一生物识别节律不同的第二生物识别节律,基于对应于所述第二生物识别节律的第二模式来以动画方式显示所述用户界面对象。Based on determining that the predetermined biometric rhythm is a second biometric rhythm different from the first biometric rhythm, the user interface object is displayed in an animated manner based on a second pattern corresponding to the second biometric rhythm. 8.根据权利要求7所述的方法,其中所述预先确定的生物识别节律是由所述计算机系统的用户选择的生物识别节律。8. The method of claim 7, wherein the predetermined biometric rhythm is a biometric rhythm selected by a user of the computer system. 9.根据权利要求1至8中任一项所述的方法,其中所述计算机系统与音频生成部件通信,所述方法还包括:9. 
The method according to any one of claims 1 to 8, wherein the computer system is in communication with an audio generating component, the method further comprising: 在所述用户体验会话活动时:While the user is experiencing a session activity: 输出具有基于所述计算机系统的用户的所述一个或多个呼吸特性而移动的感知空间位置的音频组成部分,包括:Outputting an audio component having a perceived spatial location that moves based on the one or more breathing characteristics of a user of the computer system comprises: 根据确定所述计算机系统的所述用户的第二呼吸事件满足第三组标准,输出具有相对于所述计算机系统的所述用户的第一感知空间位置的所述音频组成部分;以及outputting the audio component having a first perceptual spatial location relative to the user of the computer system based on determining that a second respiratory event of the user of the computer system satisfies a third set of criteria; and 根据确定所述计算机系统的用户的所述第二呼吸事件满足第四组标准,输出具有相对于所述计算机系统的所述用户的与所述第一感知空间位置不同的第二感知空间位置的所述音频组成部分。Based on determining that the second respiratory event of the user of the computer system satisfies a fourth set of criteria, outputting the audio component having a second perceptual spatial location relative to the user of the computer system that is different from the first perceptual spatial location. 10.根据权利要求1至9中任一项所述的方法,其中所述计算机系统的所述用户的物理环境的部分在所述用户体验会话活动之前可见,所述方法还包括:10. The method of any one of claims 1 to 9, wherein the portion of the physical environment of the user of the computer system is visible prior to the user experience session activity, the method further comprising: 发起所述用户体验会话,包括:Initiating the user experience session includes: 经由所述显示生成部件显示逐渐减小所述物理环境的可见度的调光效果。A dimming effect that gradually reduces visibility of the physical environment is displayed via the display generation component. 11.根据权利要求1至10中任一项所述的方法,所述方法还包括:11. The method according to any one of claims 1 to 10, further comprising: 在所述用户体验会话活动时:While the user is experiencing a session activity: 经由所述一个或多个传感器检测所述计算机系统的所述用户的一个或多个基于专注的特性;以及detecting, via the one or more sensors, one or more concentration-based characteristics of the user of the computer system; and 输出基于所述计算机系统的用户的所述一个或多个基于专注的特性的反馈。Feedback based on the one or more concentration-based characteristics of a user of the computer system is output. 12.根据权利要求11所述的方法,其中输出基于所述计算机系统的用户的所述一个或多个基于专注的特性的反馈包括:经由所述显示生成部件显示基于所述计算机系统的用户的所述一个或多个基于专注的特性的视觉反馈。12. The method of claim 11, wherein outputting feedback based on the one or more concentration-based characteristics of the user of the computer system comprises: displaying, via the display generating component, visual feedback based on the one or more concentration-based characteristics of the user of the computer system. 13.根据权利要求12所述的方法,其中显示基于所述计算机系统的用户的所述一个或多个基于专注的特性的视觉反馈包括:13. The method of claim 12, wherein displaying visual feedback based on the one or more focus-based characteristics of a user of the computer system comprises: 根据确定所述计算机系统的用户的所述一个或多个基于专注的特性满足第一组专注标准,将所述用户界面对象的所述粒子显示为在所述计算机系统的所述用户的向外呼吸期间远离参考位置移动。Based on determining that the one or more concentration-based characteristics of the user of the computer system satisfy a first set of concentration criteria, the particles of the user interface object are displayed as moving away from a reference position during outward breathing of the user of the computer system. 14.根据权利要求13所述的方法,其中显示基于所述计算机系统的用户的所述一个或多个基于专注的特性的视觉反馈包括:14. 
The method of claim 13, wherein displaying visual feedback based on the one or more focus-based characteristics of a user of the computer system comprises: 根据确定所述计算机系统的用户的所述一个或多个基于专注的特性满足所述第一组专注标准,将所述用户界面对象的所述粒子显示为在所述计算机系统的所述用户的向内呼吸期间朝向所述参考位置移动。Based on determining that the one or more concentration-based characteristics of the user of the computer system satisfy the first set of concentration criteria, the particles of the user interface object are displayed as moving toward the reference location during inward breathing of the user of the computer system. 15.根据权利要求14所述的方法,其中:15. The method according to claim 14, wherein: 所述用户界面对象的所述粒子被显示为在所述向内呼吸期间以第一速率朝向所述参考位置移动;the particles of the user interface object being displayed as moving toward the reference position at a first rate during the inward breathing; 所述用户界面对象的所述粒子被显示为在所述向外呼吸期间以第二速率远离所述参考位置移动;并且The particles of the user interface object are displayed as moving away from the reference position at a second rate during the outward breathing; and 所述第二速率与所述第一速率不同。The second rate is different from the first rate. 16.根据权利要求13所述的方法,其中所述参考位置是对应于所述计算机系统的所述用户在物理环境中的视点的世界锁定的位置。16. The method of claim 13, wherein the reference location is a world-locked location corresponding to a viewpoint of the user of the computer system in a physical environment. 17.根据权利要求11至16中任一项所述的方法,其中所述计算机系统与音频生成部件通信,并且其中输出基于所述计算机系统的用户的所述一个或多个基于专注的特性的反馈包括:经由所述音频生成部件输出基于所述计算机系统的用户的所述一个或多个基于专注的特性的音频反馈。17. A method according to any one of claims 11 to 16, wherein the computer system communicates with an audio generating component, and wherein outputting feedback based on the one or more concentration-based characteristics of a user of the computer system comprises: outputting audio feedback based on the one or more concentration-based characteristics of a user of the computer system via the audio generating component. 18.根据权利要求11至17中任一项所述的方法,其中所述计算机系统的用户的所述一个或多个基于专注的特性包括多个生物识别指示符。18. The method of any one of claims 11 to 17, wherein the one or more concentration-based characteristics of the user of the computer system include a plurality of biometric indicators. 19.根据权利要求11至18中任一项所述的方法,其中所述计算机系统的用户的所述一个或多个基于专注的特性包括对所述用户的专注是否未能满足专注标准达阈值时间量的指示。19. The method of any one of claims 11 to 18, wherein the one or more concentration-based characteristics of the user of the computer system include an indication of whether the user's concentration has failed to meet a concentration criterion for a threshold amount of time. 20.根据权利要求1至19中任一项所述的方法,其中将所述用户界面对象的所述粒子显示为以所述第一方式或所述第二方式移动包括:20. The method of any one of claims 1 to 19, wherein displaying the particles of the user interface object as moving in the first manner or the second manner comprises: 将第一组一个或多个粒子显示为具有距对应于所述计算机系统的用户的所述视点的位置的第一距离并且在所述第一呼吸事件期间具有第一移动量;以及displaying a first set of one or more particles having a first distance from a location corresponding to the viewpoint of a user of the computer system and having a first amount of movement during the first respiratory event; and 将与所述第一组一个或多个粒子不同的第二组一个或多个粒子显示为具有距对应于所述计算机系统的用户的所述视点的所述位置的与所述第一距离不同的第二距离并且在所述第一呼吸事件期间具有与所述第一移动量不同的第二移动量。A second group of one or more particles different from the first group of one or more particles is displayed having a second distance different from the first distance from the location corresponding to the viewpoint of a user of the computer system and having a second amount of movement different from the first amount of movement during the first respiratory event. 21.根据权利要求1至20中任一项所述的方法,其中所述多个粒子的移动基于一组一个或多个模拟物理参数。21. 
The method of any one of claims 1 to 20, wherein the movement of the plurality of particles is based on a set of one or more simulated physical parameters. 22.根据权利要求1至21中任一项所述的方法,其中显示用于所述用户体验会话的所述用户界面包括:22. The method of any one of claims 1 to 21, wherein displaying the user interface for the user experience session comprises: 在所述用户体验会话活动时,经由所述一个或多个传感器检测指示所述计算机系统的所述用户的注视的注视数据;detecting, via the one or more sensors, gaze data indicative of a gaze of the user of the computer system while the user experience session is active; 在显示具有基于所述计算机系统的用户的所述一个或多个呼吸特性而移动的所述多个粒子的所述用户界面对象时,检测更新的注视数据;以及detecting updated gaze data while displaying the user interface object having the plurality of particles that move based on the one or more breathing characteristics of a user of the computer system; and 响应于检测到所述更新的注视数据:In response to detecting the updated gaze data: 根据确定所述更新的注视数据指示所述用户的所述注视超过注视偏离阈值,暂停所述用户体验会话;以及pausing the user experience session based on determining that the updated gaze data indicates that the gaze of the user exceeds a gaze deviation threshold; and 根据确定所述更新的注视数据不指示所述用户的所述注视超过所述注视偏离阈值,放弃暂停所述用户体验会话。Based on determining that the updated gaze data does not indicate that the gaze of the user exceeds the gaze deviation threshold, pausing the user experience session is abandoned. 23.根据权利要求1至22中任一项所述的方法,其中:23. The method according to any one of claims 1 to 22, wherein: 显示用于所述用户体验会话的所述用户界面包括:将所述用户界面显示为具有从一组可用视觉特性随机地或伪随机地选择的一组视觉特性;并且Displaying the user interface for the user experience session includes: displaying the user interface with a set of visual characteristics randomly or pseudo-randomly selected from a set of available visual characteristics; and 所述计算机系统与音频生成部件通信,所述方法还包括:The computer system is in communication with an audio generating component, the method further comprising: 经由所述音频生成部件输出用于所述用户体验会话的音频声景,其中所述音频声景与显示用于所述用户体验会话的所述用户界面并发地输出,并且输出所述音频声景包括:输出具有从一组可用音频组成部分随机地或伪随机地选择的第一组两个或更多个音频组成部分的所述音频声景。Outputting, via the audio generation component, an audio soundscape for the user experience session, wherein the audio soundscape is output concurrently with displaying the user interface for the user experience session, and outputting the audio soundscape comprises: outputting the audio soundscape having a first set of two or more audio components randomly or pseudo-randomly selected from a set of available audio components. 24.根据权利要求1至23中任一项所述的方法,其中显示用于所述用户体验会话的所述用户界面包括:24. The method of any one of claims 1 to 23, wherein displaying the user interface for the user experience session comprises: 在显示具有第一显示状态的所述用户界面对象时,检测从所述用户体验会话的所述第一部分到所述用户体验会话的第二部分的转变,所述第一显示状态中所述多个粒子针对所述用户体验会话的第一部分被显示为具有第一间距量;以及detecting a transition from the first portion of the user experience session to a second portion of the user experience session while displaying the user interface object having a first display state in which the plurality of particles are displayed with a first spacing amount for the first portion of the user experience session; and 响应于检测到从所述用户体验会话的所述第一部分到所述用户体验会话的所述第二部分的所述转变,显示具有与所述第一显示状态不同的第二显示状态的所述用户界面对象,其中所述多个粒子针对所述用户体验会话的所述第二部分被显示为具有与所述第一间距量不同的第二间距量。In response to detecting the transition from the first portion of the user experience session to the second portion of the user experience session, displaying the user interface object having a second display state different from the first display state, wherein the plurality of particles are displayed for the second portion of the user experience session with a second spacing amount different from the first spacing amount. 
25.根据权利要求24所述的方法,其中显示用于所述用户体验会话的所述用户界面包括:25. The method of claim 24, wherein displaying the user interface for the user experience session comprises: 在显示具有所述第二显示状态的所述用户界面对象时,检测从所述用户体验会话的所述第二部分到所述用户体验会话的第三部分的转变;以及While displaying the user interface object having the second display state, detecting a transition from the second portion of the user experience session to a third portion of the user experience session; and 响应于检测到从所述用户体验会话的所述第二部分到所述用户体验会话的所述第三部分的所述转变,显示具有与所述第一显示状态和所述第二显示状态不同的第三显示状态的所述用户界面对象,其中所述多个粒子针对所述用户体验会话的所述第三部分被显示为具有与所述第二间距量不同的第三间距量。In response to detecting the transition from the second portion of the user experience session to the third portion of the user experience session, displaying the user interface object having a third display state different from the first display state and the second display state, wherein the plurality of particles are displayed for the third portion of the user experience session with a third spacing amount different from the second spacing amount. 26.根据权利要求1至25中任一项所述的方法,其中显示用于所述用户体验会话的所述用户界面包括:26. The method of any one of claims 1 to 25, wherein displaying the user interface for the user experience session comprises: 在显示具有包括在粒子之间具有第一平均间距的布置的所述多个粒子的所述用户界面对象时,检测所述用户体验会话的终止;以及detecting termination of the user experience session while displaying the user interface object having the plurality of particles including an arrangement having a first average spacing between particles; and 响应于检测到所述用户体验会话的终止,将所述多个粒子的动画显示为移动到在粒子之间具有第二平均间距的布置,其中所述第二平均间距小于所述第一平均间距。In response to detecting termination of the user experience session, the plurality of particles are animated to move to an arrangement having a second average spacing between particles, wherein the second average spacing is less than the first average spacing. 27.根据权利要求1至26中任一项所述的方法,其中显示用于所述用户体验会话的所述用户界面包括:27. The method of any one of claims 1 to 26, wherein displaying the user interface for the user experience session comprises: 在所述用户体验会话活动并且所述计算机系统的所述用户的环境在视觉上模糊时,检测所述用户体验会话的终止;以及detecting termination of the user experience session while the user experience session is active and an environment of the user of the computer system is visually obscured; and 响应于检测到所述用户体验会话的终止,发起所述用户体验会话的终止并且逐渐增大所述用户的所述环境的可见度。In response to detecting termination of the user experience session, termination of the user experience session is initiated and visibility of the environment to the user is gradually increased. 28.根据权利要求1至27中任一项所述的方法,其中显示用于所述用户体验会话的所述用户界面包括:28. The method of any one of claims 1 to 27, wherein displaying the user interface for the user experience session comprises: 在所述用户体验会话活动时,检测所述用户体验会话的终止;以及detecting termination of the user experience session while the user experience session is active; and 响应于检测到所述用户体验会话的终止,发起所述用户体验会话的终止并且显示能够选择来继续所述用户体验会话的选项。In response to detecting termination of the user experience session, termination of the user experience session is initiated and an option selectable to continue the user experience session is displayed. 29.根据权利要求1至28中任一项所述的方法,其中显示用于所述用户体验会话的所述用户界面包括:29. 
The method of any one of claims 1 to 28, wherein displaying the user interface for the user experience session comprises: 在所述用户体验会话活动时,检测所述用户体验会话的终止;以及detecting termination of the user experience session while the user experience session is active; and 响应于检测到所述用户体验会话的终止,发起所述用户体验会话的终止并且显示与一个或多个先前用户体验会话相关的数据历史。In response to detecting termination of the user experience session, termination of the user experience session is initiated and a data history associated with one or more previous user experience sessions is displayed. 30.根据权利要求1至29中任一项所述的方法,其中显示用于所述用户体验会话的所述用户界面包括:致使所述计算机系统的所述用户的环境的至少部分在所述用户体验会话活动时可见。30. The method of any one of claims 1 to 29, wherein displaying the user interface for the user experience session comprises causing at least a portion of an environment of the user of the computer system to be visible while the user experience session is active. 31.根据权利要求1至30中任一项所述的方法,所述方法还包括:31. The method according to any one of claims 1 to 30, further comprising: 在所述用户体验会话活动时并且在显示具有相对于所述计算机系统的所述用户的第一显示取向的所述用户界面对象时,接收指示所述计算机系统的所述用户从环境中的第一定位到所述环境中的与所述第一定位不同的第二定位的定位变化的数据;以及receiving, while the user experience session is active and while the user interface object is displayed having a first display orientation relative to the user of the computer system, data indicating a change in position of the user of the computer system from a first position in an environment to a second position in the environment that is different than the first position; and 响应于接收到指示所述计算机系统的所述用户的定位变化的所述数据,显示具有相对于所述计算机系统的所述用户的与所述第一显示取向不同的第二显示取向的所述用户界面对象。In response to receiving the data indicative of a change in the positioning of the user of the computer system, the user interface object is displayed having a second display orientation relative to the user of the computer system that is different from the first display orientation. 32.根据权利要求1至31中任一项所述的方法,其中用于所述用户体验会话的所述用户界面能够在一个或多个外部计算机系统处显示。32. The method of any one of claims 1 to 31, wherein the user interface for the user experience session is displayable at one or more external computer systems. 33.根据权利要求1至32中任一项所述的方法,所述方法还包括:33. The method according to any one of claims 1 to 32, further comprising: 在所述用户体验会话活动时并且在显示具有基于所述计算机系统的用户的所述一个或多个呼吸特性而移动的所述多个粒子的所述用户界面对象时,接收指示所述计算机系统的所述用户的部分的位姿变化的数据;以及receiving data indicating a change in pose of a portion of the user of the computer system while the user experience session is active and while displaying the user interface object having the plurality of particles that move based on the one or more breathing characteristics of the user of the computer system; and 响应于接收到指示所述计算机系统的用户的所述部分的位姿变化的所述数据,更新所述多个粒子的显示,包括:In response to receiving the data indicating a change in the pose of the portion of the user of the computer system, updating the display of the plurality of particles comprises: 根据确定指示所述用户的所述部分的位姿变化的所述数据包括所述用户的所述部分与相应粒子的显示位置相交的指示,In response to determining that the data indicating a change in the posture of the portion of the user includes an indication that the portion of the user intersects a displayed position of a corresponding particle, 修改所述相应粒子的显示特性;以及modifying display characteristics of the corresponding particles; and 根据确定指示所述用户的所述部分的位姿变化的所述数据不包括所述用户的所述部分与所述相应粒子的所述显示位置相交的指示,放弃修改所述相应粒子的所述显示特性。Based on determining that the data indicative of the change in pose of the portion of the user does not include an indication that the portion of the user intersects the display location of the corresponding particle, modifying the display characteristic of the corresponding particle is abandoned. 
34.一种非暂态计算机可读存储介质,所述非暂态计算机可读存储介质存储被配置为由与显示生成部件和一个或多个传感器通信的计算机系统的一个或多个处理器执行的一个或多个程序,所述一个或多个程序包括用于执行根据权利要求1至33中任一项所述的方法的指令。34. A non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that communicates with a display generating component and one or more sensors, the one or more programs comprising instructions for executing the method according to any one of claims 1 to 33. 35.一种被配置为与显示生成部件和一个或多个传感器通信的计算机系统,所述计算机系统包括:35. A computer system configured to communicate with a display generating component and one or more sensors, the computer system comprising: 一个或多个处理器;和one or more processors; and 存储器,所述存储器存储被配置为由所述一个或多个处理器执行的一个或多个程序,所述一个或多个程序包括用于执行根据权利要求1至33中任一项所述的方法的指令。A memory storing one or more programs configured to be executed by the one or more processors, the one or more programs comprising instructions for executing the method according to any one of claims 1 to 33. 36.一种被配置为与显示生成部件和一个或多个传感器通信的计算机系统,所述计算机系统包括:36. A computer system configured to communicate with a display generating component and one or more sensors, the computer system comprising: 用于执行根据权利要求1至33中任一项所述的方法的构件。A component for carrying out the method according to any one of claims 1 to 33. 37.一种计算机程序产品,所述计算机程序产品包括被配置为由与显示生成部件和一个或多个传感器通信的计算机系统的一个或多个处理器执行的一个或多个程序,所述一个或多个程序包括用于执行根据权利要求1至33中任一项所述的方法的指令。37. A computer program product comprising one or more programs configured to be executed by one or more processors of a computer system in communication with a display generating component and one or more sensors, the one or more programs comprising instructions for performing the method according to any one of claims 1 to 33. 38.一种非暂态计算机可读存储介质,所述非暂态计算机可读存储介质存储被配置为由与显示生成部件和一个或多个传感器通信的计算机系统的一个或多个处理器执行的一个或多个程序,所述一个或多个程序包括用于以下操作的指令:38. A non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system in communication with a display generating component and one or more sensors, the one or more programs comprising instructions for: 经由所述显示生成部件显示用于用户体验会话的用户界面,包括:Displaying a user interface for a user experience session via the display generation component includes: 在所述用户体验会话活动时:While the user is experiencing a session activity: 经由所述一个或多个传感器检测所述计算机系统的用户的一个或多个呼吸特性;以及detecting, via the one or more sensors, one or more breathing characteristics of a user of the computer system; and 显示具有基于所述计算机系统的用户的所述一个或多个呼吸特性而移动的多个粒子的用户界面对象,包括:Displaying a user interface object having a plurality of particles that move based on the one or more breathing characteristics of a user of the computer system, comprising: 根据确定所述计算机系统的所述用户的第一呼吸事件满足第一组标准,在所述计算机系统的用户的所述第一呼吸事件期间将所述用户界面对象的所述粒子显示为以第一方式移动;以及Based on determining that a first breathing event of the user of the computer system satisfies a first set of criteria, displaying the particles of the user interface object as moving in a first manner during the first breathing event of the user of the computer system; and 根据确定所述计算机系统的用户的所述第一呼吸事件满足第二组标准,在所述计算机系统的用户的所述第一呼吸事件期间将所述用户界面对象的所述粒子显示为以与所述第一方式不同的第二方式移动。Based on determining that the first respiratory event of the user of the computer system satisfies a second set of criteria, the particles of the user interface object are displayed as moving in a second manner different from the first manner during the first respiratory event of the user of the computer system. 39.一种被配置为与显示生成部件和一个或多个传感器通信的计算机系统,所述计算机系统包括:39. 
A computer system configured to communicate with a display generating component and one or more sensors, the computer system comprising: 一个或多个处理器;和one or more processors; and 存储器,所述存储器存储被配置为由所述一个或多个处理器执行的一个或多个程序,所述一个或多个程序包括用于以下操作的指令:A memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: 经由所述显示生成部件显示用于用户体验会话的用户界面,包括:Displaying a user interface for a user experience session via the display generation component includes: 在所述用户体验会话活动时:While the user is experiencing a session activity: 经由所述一个或多个传感器检测所述计算机系统的用户的一个或多个呼吸特性;以及detecting, via the one or more sensors, one or more breathing characteristics of a user of the computer system; and 显示具有基于所述计算机系统的用户的所述一个或多个呼吸特性而移动的多个粒子的用户界面对象,displaying a user interface object having a plurality of particles that move based on the one or more breathing characteristics of a user of the computer system, 包括:include: 根据确定所述计算机系统的所述用户的第一呼吸事件满足第一组标准,在所述计算机系统的用户的所述第一呼吸事件期间将所述用户界面对象的所述粒子显示为以第一方式移动;以及Based on determining that a first breathing event of the user of the computer system satisfies a first set of criteria, displaying the particles of the user interface object as moving in a first manner during the first breathing event of the user of the computer system; and 根据确定所述计算机系统的用户的所述第一呼吸事件满足第二组标准,在所述计算机系统的用户的所述第一呼吸事件期间将所述用户界面对象的所述粒子显示为以与所述第一方式不同的第二方式移动。Based on determining that the first respiratory event of the user of the computer system satisfies a second set of criteria, the particles of the user interface object are displayed as moving in a second manner different from the first manner during the first respiratory event of the user of the computer system. 40.一种被配置为与显示生成部件和一个或多个传感器通信的计算机系统,所述计算机系统包括:40. A computer system configured to communicate with a display generating component and one or more sensors, the computer system comprising: 用于经由所述显示生成部件显示用于用户体验会话的用户界面的构件,所述显示包括:means for displaying, via the display generating component, a user interface for a user experience session, the display comprising: 在所述用户体验会话活动时:While the user is experiencing a session activity: 经由所述一个或多个传感器检测所述计算机系统的用户的一个或多个呼吸特性;以及detecting, via the one or more sensors, one or more breathing characteristics of a user of the computer system; and 显示具有基于所述计算机系统的用户的所述一个或多个呼吸特性而移动的多个粒子的用户界面对象,包括:Displaying a user interface object having a plurality of particles that move based on the one or more breathing characteristics of a user of the computer system, comprising: 根据确定所述计算机系统的所述用户的第一呼吸事件满足第一组标准,在所述计算机系统的用户的所述第一呼吸事件期间将所述用户界面对象的所述粒子显示为以第一方式移动;以及Based on determining that a first breathing event of the user of the computer system satisfies a first set of criteria, displaying the particles of the user interface object as moving in a first manner during the first breathing event of the user of the computer system; and 根据确定所述计算机系统的用户的所述第一呼吸事件满足第二组标准,在所述计算机系统的用户的所述第一呼吸事件期间将所述用户界面对象的所述粒子显示为以与所述第一方式不同的第二方式移动。Based on determining that the first respiratory event of the user of the computer system satisfies a second set of criteria, the particles of the user interface object are displayed as moving in a second manner different from the first manner during the first respiratory event of the user of the computer system. 41.一种计算机程序产品,所述计算机程序产品包括被配置为由与显示生成部件和一个或多个传感器通信的计算机系统的一个或多个处理器执行的一个或多个程序,所述一个或多个程序包括用于以下操作的指令:41. 
41. A computer program product comprising one or more programs configured to be executed by one or more processors of a computer system in communication with a display generating component and one or more sensors, the one or more programs comprising instructions for:
displaying, via the display generating component, a user interface for a user experience session, including:
while the user experience session is active:
detecting, via the one or more sensors, one or more breathing characteristics of a user of the computer system; and
displaying a user interface object having a plurality of particles that move based on the one or more breathing characteristics of the user of the computer system, including:
based on determining that a first breathing event of the user of the computer system satisfies a first set of criteria, displaying the particles of the user interface object as moving in a first manner during the first breathing event of the user of the computer system; and
based on determining that the first breathing event of the user of the computer system satisfies a second set of criteria, displaying the particles of the user interface object as moving in a second manner different from the first manner during the first breathing event of the user of the computer system.

42. A method, comprising:
at a computer system in communication with a display generating component and one or more sensors:
while displaying an XR environment having one or more characteristics, detecting, via the one or more sensors, a request to initiate a user experience session in the XR environment; and
in response to detecting the request to initiate the user experience session in the XR environment, initiating the user experience session in the XR environment, including:
displaying, via the display generating component, a user interface for the user experience session, wherein displaying the user interface for the user experience session includes:
based on determining that the one or more characteristics of the XR environment satisfy a first set of criteria, displaying the user interface for the user experience session having a first set of one or more options enabled for the user experience session; and
based on determining that the one or more characteristics of the XR environment satisfy a second set of criteria that is different from the first set of criteria, displaying the user interface for the user experience session having a second set of one or more options enabled for the user experience session, wherein the second set of one or more options is different from the first set of one or more options.
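Illustrative sketch (not part of the claims): the method of claim 42 keys the enabled options off characteristics of the current XR environment. A minimal Swift rendering, assuming (per claim 43) that the distinguishing characteristic is whether the environment is AR or VR; the option names are placeholders, not values taken from the patent.

    enum XREnvironmentKind { case augmentedReality, virtualReality }

    struct XREnvironmentState {
        let kind: XREnvironmentKind
        let hasVirtualLighting: Bool
    }

    // Returns the set of options enabled for the user experience session.
    func enabledOptions(for environment: XREnvironmentState) -> [String] {
        switch environment.kind {
        case .augmentedReality:
            // First set of one or more options, when the first set of criteria is met.
            return ["ambient-audio", "passthrough-dimming"]
        case .virtualReality:
            // Second, different set of options, when the second set of criteria is met.
            return ["full-immersion", "scene-selection", "spatial-audio"]
        }
    }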
43. The method of claim 42, wherein: displaying the user interface for the user experience session having the first set of one or more options enabled for the user experience session includes displaying an augmented reality (AR) environment for the user experience session; and displaying the user interface for the user experience session having the second set of one or more options enabled for the user experience session includes displaying a virtual reality (VR) environment for the user experience session.

44. The method of claim 43, wherein: the first set of one or more options comprises a first subset of options for the AR environment selected from a first set of available options; the second set of one or more options comprises a second subset of options for the VR environment selected from a second set of available options; and the first set of available options includes a different number of options than the second set of available options.

45. The method of any one of claims 42 to 44, wherein: displaying the user interface for the user experience session having the first set of one or more options enabled for the user experience session includes displaying a first environment for the user experience session; and displaying the user interface for the user experience session having the second set of one or more options enabled for the user experience session includes displaying a second environment for the user experience session that is different from the first environment.

46. The method of claim 45, wherein: the first set of one or more options includes a first set of variables for the first environment based on one or more characteristics of the first environment; and the second set of one or more options includes a second set of variables for the second environment based on one or more characteristics of the second environment.

47. The method of any one of claims 42 to 46, wherein: the first set of criteria includes a first criterion that is satisfied when a previous user interface for a previous user experience session has been displayed with a previous set of visual characteristics for the previous user experience session; and displaying the user interface for the user experience session having the first set of one or more options enabled for the user experience session includes displaying the user interface with a first set of visual characteristics for the user experience session that is different from the previous set of visual characteristics for the previous user experience session.
48. The method of claim 47, wherein the first set of visual characteristics for the user experience session is randomly or pseudo-randomly selected from a set of available visual characteristics.

49. The method of any one of claims 47 to 48, wherein: the previous set of visual characteristics includes a first light property that simulates a lighting effect for user interface objects displayed in the previous user experience session; and the first set of visual characteristics includes a second light property, different from the first light property, that simulates a lighting effect for user interface objects displayed in the user experience session.

50. The method of any one of claims 47 to 49, wherein: the previous set of visual characteristics includes a first material property for user interface objects displayed in the previous user experience session; and the first set of visual characteristics includes a second material property, different from the first material property, for user interface objects displayed in the user experience session.

51. The method of any one of claims 42 to 50, wherein: displaying the user interface for the user experience session having the first set of one or more options enabled for the user experience session includes displaying a VR environment for the user experience session with a first amount of emphasis; and displaying the user interface for the user experience session having the second set of one or more options enabled for the user experience session includes displaying an AR environment for the user experience session with a second amount of emphasis that is less than the first amount of emphasis.

52. The method of any one of claims 42 to 51, wherein displaying the user interface for the user experience session includes displaying a plurality of user interface objects having a translucent appearance based on an environment for the user experience session.

53. The method of claim 52, wherein displaying the plurality of user interface objects having the translucent appearance based on the environment for the user experience session includes: based on determining that the environment for the user experience session includes a first virtual lighting effect, displaying the plurality of user interface objects having a first translucent appearance; and based on determining that the environment for the user experience session includes a second virtual lighting effect different from the first virtual lighting effect, displaying the plurality of user interface objects having a second translucent appearance different from the first translucent appearance.
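Illustrative sketch (not part of the claims): claims 47 and 48 describe choosing a new session's visual characteristics pseudo-randomly while avoiding the set used in the previous session. One way to express that selection in Swift, with placeholder characteristic names:

    // Placeholder pool; the patent does not enumerate the available characteristics.
    let availableCharacteristics: Set<String> = [
        "warm-light", "cool-light", "frosted-material", "polished-material"
    ]

    // Picks characteristics for the new session, excluding those of the previous one.
    func nextSessionCharacteristics(previous: Set<String>,
                                    count: Int = 2) -> Set<String> {
        let candidates = availableCharacteristics.subtracting(previous)
        return Set(candidates.shuffled().prefix(count))
    }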
54. The method of any one of claims 42 to 53, wherein initiating the user experience session in the XR environment includes increasing one or more immersive aspects of the user experience session.

55. The method of claim 54, wherein increasing one or more immersive aspects of the user experience session includes increasing a proportion of the user's field of view that is occupied by the user interface for the user experience session.

56. The method of any one of claims 54 to 55, wherein increasing one or more immersive aspects of the user experience session includes increasing a spatial immersion of audio generated for the user experience session.

57. A non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with a display generating component and one or more sensors, the one or more programs comprising instructions for performing the method according to any one of claims 42 to 56.

58. A computer system configured to communicate with a display generating component and one or more sensors, the computer system comprising:
one or more processors; and
memory storing one or more programs configured to be executed by the one or more processors, the one or more programs comprising instructions for performing the method according to any one of claims 42 to 56.

59. A computer system configured to communicate with a display generating component and one or more sensors, the computer system comprising:
means for performing the method according to any one of claims 42 to 56.

60. A computer program product comprising one or more programs configured to be executed by one or more processors of a computer system in communication with a display generating component and one or more sensors, the one or more programs comprising instructions for performing the method according to any one of claims 42 to 56.
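Illustrative sketch (not part of the claims): claims 54 to 56 describe ramping up immersion when the session starts, by growing the share of the field of view the session UI occupies and the spatialization of its audio. A minimal Swift model of that ramp; the 0-to-1 scales and step size are assumptions.

    struct ImmersionState {
        var fieldOfViewFraction: Double   // portion of the user's view occupied by the session UI
        var audioSpatialization: Double   // 0 = non-spatial bed, 1 = fully spatialized
    }

    // Increases both immersive aspects, clamped to their maximum.
    func increaseImmersion(_ state: ImmersionState,
                           by step: Double = 0.1) -> ImmersionState {
        ImmersionState(
            fieldOfViewFraction: min(1.0, state.fieldOfViewFraction + step),
            audioSpatialization: min(1.0, state.audioSpatialization + step)
        )
    }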
61. A non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system in communication with a display generating component and one or more sensors, the one or more programs comprising instructions for:
while displaying an XR environment having one or more characteristics, detecting, via the one or more sensors, a request to initiate a user experience session in the XR environment; and
in response to detecting the request to initiate the user experience session in the XR environment, initiating the user experience session in the XR environment, including:
displaying, via the display generating component, a user interface for the user experience session, wherein displaying the user interface for the user experience session includes:
based on determining that the one or more characteristics of the XR environment satisfy a first set of criteria, displaying the user interface for the user experience session having a first set of one or more options enabled for the user experience session; and
based on determining that the one or more characteristics of the XR environment satisfy a second set of criteria that is different from the first set of criteria, displaying the user interface for the user experience session having a second set of one or more options enabled for the user experience session, wherein the second set of one or more options is different from the first set of one or more options.
62. A computer system configured to communicate with a display generating component and one or more sensors, the computer system comprising:
one or more processors; and
memory storing one or more programs configured to be executed by the one or more processors, the one or more programs comprising instructions for:
while displaying an XR environment having one or more characteristics, detecting, via the one or more sensors, a request to initiate a user experience session in the XR environment; and
in response to detecting the request to initiate the user experience session in the XR environment, initiating the user experience session in the XR environment, including:
displaying, via the display generating component, a user interface for the user experience session, wherein displaying the user interface for the user experience session includes:
based on determining that the one or more characteristics of the XR environment satisfy a first set of criteria, displaying the user interface for the user experience session having a first set of one or more options enabled for the user experience session; and
based on determining that the one or more characteristics of the XR environment satisfy a second set of criteria that is different from the first set of criteria, displaying the user interface for the user experience session having a second set of one or more options enabled for the user experience session, wherein the second set of one or more options is different from the first set of one or more options.
63. A computer system configured to communicate with a display generating component and one or more sensors, the computer system comprising:
means for detecting, via the one or more sensors, while displaying an XR environment having one or more characteristics, a request to initiate a user experience session in the XR environment; and
means for initiating, in response to detecting the request to initiate the user experience session in the XR environment, the user experience session in the XR environment, including:
means for displaying, via the display generating component, a user interface for the user experience session, wherein displaying the user interface for the user experience session includes:
based on determining that the one or more characteristics of the XR environment satisfy a first set of criteria, displaying the user interface for the user experience session having a first set of one or more options enabled for the user experience session; and
based on determining that the one or more characteristics of the XR environment satisfy a second set of criteria that is different from the first set of criteria, displaying the user interface for the user experience session having a second set of one or more options enabled for the user experience session, wherein the second set of one or more options is different from the first set of one or more options.
64. A computer program product comprising one or more programs configured to be executed by one or more processors of a computer system in communication with a display generating component and one or more sensors, the one or more programs comprising instructions for:
while displaying an XR environment having one or more characteristics, detecting, via the one or more sensors, a request to initiate a user experience session in the XR environment; and
in response to detecting the request to initiate the user experience session in the XR environment, initiating the user experience session in the XR environment, including:
displaying, via the display generating component, a user interface for the user experience session, wherein displaying the user interface for the user experience session includes:
based on determining that the one or more characteristics of the XR environment satisfy a first set of criteria, displaying the user interface for the user experience session having a first set of one or more options enabled for the user experience session; and
based on determining that the one or more characteristics of the XR environment satisfy a second set of criteria that is different from the first set of criteria, displaying the user interface for the user experience session having a second set of one or more options enabled for the user experience session, wherein the second set of one or more options is different from the first set of one or more options.
65. A method, comprising:
at a computer system in communication with a display generating component, an audio generating component, and one or more sensors:
detecting, at a first time and via the one or more sensors, a request to initiate a user experience session of a corresponding type in an XR environment;
in response to detecting the request to initiate the user experience session in the XR environment, initiating a first user experience session of the corresponding type in the XR environment, including:
displaying, via the display generating component, a user interface for the first user experience session; and
outputting, via the audio generating component, a first audio soundscape for the first user experience session, wherein the first audio soundscape is output concurrently with displaying the user interface for the first user experience session, and outputting the first audio soundscape includes outputting the first audio soundscape having a first set of two or more audio components randomly or pseudo-randomly selected from a set of available audio components;
detecting, at a second time different from the first time and via the one or more sensors, a request to initiate a user experience session of the corresponding type in an XR environment; and
in response to detecting the request to initiate the user experience session in the XR environment, initiating a second user experience session of the corresponding type in the XR environment, including:
displaying, via the display generating component, a user interface for the second user experience session; and
outputting, via the audio generating component, a second audio soundscape for the second user experience session, wherein the second audio soundscape is output concurrently with displaying the user interface for the second user experience session, and outputting the second audio soundscape includes outputting the second audio soundscape having a second set of two or more audio components randomly or pseudo-randomly selected from the set of available audio components.

66. The method of claim 65, wherein outputting the first audio soundscape for the first user experience session includes repeating the first set of two or more audio components during the first user experience session.

67. The method of any one of claims 65 to 66, wherein the first set of two or more audio components of the first audio soundscape is different from the second set of two or more audio components of the second audio soundscape.
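Illustrative sketch (not part of the claims): in claims 65 to 67 each session's soundscape is assembled from two or more components drawn randomly or pseudo-randomly from a shared pool and repeated for the duration of the session. A minimal Swift rendering with placeholder component names:

    // Placeholder pool; the patent does not name the available audio components.
    let availableComponents = ["low-drone", "wind", "chimes", "water", "birds"]

    // Draws two or more components pseudo-randomly for one session's soundscape.
    func buildSoundscape(componentCount: Int = 2) -> [String] {
        Array(availableComponents.shuffled().prefix(componentCount))
    }

    // A second session draws again from the same pool, so its soundscape
    // generally differs from the first (claim 67), and each selection is
    // looped while its session is active (claim 66).
    let firstSessionSoundscape = buildSoundscape()
    let secondSessionSoundscape = buildSoundscape()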
68. The method of any one of claims 65 to 67, wherein outputting the first audio soundscape for the first user experience session concurrently with displaying the user interface for the first user experience session includes: when the user interface for the first user experience session includes a first predetermined animation effect, outputting the first set of two or more audio components having a first set of audio characteristics; and when the user interface for the first user experience session includes a second predetermined animation effect different from the first predetermined animation effect, outputting the first set of two or more audio components having a second set of audio characteristics different from the first set of audio characteristics.

69. The method of any one of claims 65 to 68, further comprising: while outputting the first audio soundscape for the first user experience session, detecting a biometric input; and in response to detecting the biometric input, modifying the first audio soundscape, including: based on determining that the biometric input includes a first biometric input, modifying the first set of two or more audio components in a first manner; and based on determining that the biometric input includes a second biometric input different from the first biometric input, modifying the first set of two or more audio components in a second manner different from the first manner.

70. The method of any one of claims 65 to 69, wherein outputting the first audio soundscape for the first user experience session includes: based on determining that a first set of criteria is satisfied, causing an output volume of the first audio soundscape to gradually increase; and based on determining that a second set of criteria is satisfied, causing the output volume of the first audio soundscape to gradually decrease.

71. The method of any one of claims 65 to 70, wherein initiating the first user experience session of the corresponding type in the XR environment includes outputting the first audio soundscape having a corresponding audio component, the method further comprising: while outputting the first audio soundscape for the first user experience session, initiating termination of the first user experience session, wherein terminating the first user experience session includes outputting the first audio soundscape having the corresponding audio component.
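Illustrative sketch (not part of the claims): claim 70 has the soundscape volume ramp up while one set of criteria is met and ramp down while another is met. A minimal Swift rendering; the criteria themselves and the step size are assumptions.

    // Returns the next output volume given which (if either) set of criteria is met.
    func adjustedVolume(current: Double,
                        firstCriteriaMet: Bool,
                        secondCriteriaMet: Bool,
                        step: Double = 0.05) -> Double {
        if firstCriteriaMet { return min(1.0, current + step) }   // gradual increase
        if secondCriteriaMet { return max(0.0, current - step) }  // gradual decrease
        return current
    }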
72. The method of any one of claims 65 to 71, wherein outputting the first audio soundscape includes: while outputting the first audio soundscape having the first set of two or more audio components, outputting a third set of one or more audio components, wherein the third set of one or more audio components is output at one or more randomly selected or pseudo-randomly selected instances during the first user experience session.

73. The method of any one of claims 65 to 72, wherein the set of available audio components includes a plurality of audio components selected to satisfy a harmony criterion when output concurrently at one or more randomly selected or pseudo-randomly selected instances during a user experience session of the corresponding type.

74. The method of any one of claims 65 to 73, further comprising: while outputting the first audio soundscape having the first set of two or more audio components including a first perceived spatial position relative to a user of the computer system, receiving update data; and in response to receiving the update data, updating a state of the first user experience session, including: based on determining that the update data includes an indication that a first set of audio update criteria is satisfied, outputting the first audio soundscape having the first set of two or more audio components including a second perceived spatial position relative to the user of the computer system, wherein the second perceived spatial position is different from the first perceived spatial position; and based on determining that the update data includes an indication that a second set of audio update criteria is satisfied, outputting the first audio soundscape having the first set of two or more audio components including a third perceived spatial position relative to the user of the computer system, wherein the third perceived spatial position is different from the second perceived spatial position.

75. The method of any one of claims 65 to 74, wherein the first set of two or more audio components includes a first audio recording from a first audio source, and the second set of two or more audio components includes a second audio recording from the first audio source.

76. The method of claim 75, wherein the first audio recording from the first audio source includes a first conversation having a first set of speech characteristics, and the second audio recording from the first audio source includes the first conversation having a second set of speech characteristics different from the first set of speech characteristics.
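Illustrative sketch (not part of the claims): claim 74 re-renders the soundscape components at a different perceived spatial position relative to the user when incoming update data satisfies one of two sets of audio update criteria. A minimal Swift rendering, using SIMD3<Float> as a stand-in for a spatial position and arbitrary example offsets:

    // Computes the new perceived spatial position of the audio components.
    func updatedPerceivedPosition(current: SIMD3<Float>,
                                  meetsFirstCriteria: Bool,
                                  meetsSecondCriteria: Bool) -> SIMD3<Float> {
        if meetsFirstCriteria {
            return current + SIMD3<Float>(0.5, 0, 0)    // second perceived spatial position
        }
        if meetsSecondCriteria {
            return current + SIMD3<Float>(0, 0, -0.5)   // third perceived spatial position
        }
        return current                                   // unchanged otherwise
    }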
77. The method of claim 75, wherein the first audio recording from the first audio source includes a second conversation, and the second audio recording from the first audio source includes a third conversation different from the second conversation.

78. A non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with a display generating component, an audio generating component, and one or more sensors, the one or more programs comprising instructions for performing the method according to any one of claims 65 to 77.

79. A computer system configured to communicate with a display generating component, an audio generating component, and one or more sensors, the computer system comprising:
one or more processors; and
memory storing one or more programs configured to be executed by the one or more processors, the one or more programs comprising instructions for performing the method according to any one of claims 65 to 77.

80. A computer system configured to communicate with a display generating component, an audio generating component, and one or more sensors, the computer system comprising:
means for performing the method according to any one of claims 65 to 77.

81. A computer program product comprising one or more programs configured to be executed by one or more processors of a computer system in communication with a display generating component, an audio generating component, and one or more sensors, the one or more programs comprising instructions for performing the method according to any one of claims 65 to 77.
82. A non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system in communication with a display generating component, an audio generating component, and one or more sensors, the one or more programs comprising instructions for:
detecting, at a first time and via the one or more sensors, a request to initiate a user experience session of a corresponding type in an XR environment;
in response to detecting the request to initiate the user experience session in the XR environment, initiating a first user experience session of the corresponding type in the XR environment, including:
displaying, via the display generating component, a user interface for the first user experience session; and
outputting, via the audio generating component, a first audio soundscape for the first user experience session, wherein the first audio soundscape is output concurrently with displaying the user interface for the first user experience session, and outputting the first audio soundscape includes outputting the first audio soundscape having a first set of two or more audio components randomly or pseudo-randomly selected from a set of available audio components;
detecting, at a second time different from the first time and via the one or more sensors, a request to initiate a user experience session of the corresponding type in an XR environment; and
in response to detecting the request to initiate the user experience session in the XR environment, initiating a second user experience session of the corresponding type in the XR environment, including:
displaying, via the display generating component, a user interface for the second user experience session; and
outputting, via the audio generating component, a second audio soundscape for the second user experience session, wherein the second audio soundscape is output concurrently with displaying the user interface for the second user experience session, and outputting the second audio soundscape includes outputting the second audio soundscape having a second set of two or more audio components randomly or pseudo-randomly selected from the set of available audio components.
83. A computer system configured to communicate with a display generating component, an audio generating component, and one or more sensors, the computer system comprising:
one or more processors; and
memory storing one or more programs configured to be executed by the one or more processors, the one or more programs comprising instructions for:
detecting, at a first time and via the one or more sensors, a request to initiate a user experience session of a corresponding type in an XR environment;
in response to detecting the request to initiate the user experience session in the XR environment, initiating a first user experience session of the corresponding type in the XR environment, including:
displaying, via the display generating component, a user interface for the first user experience session; and
outputting, via the audio generating component, a first audio soundscape for the first user experience session, wherein the first audio soundscape is output concurrently with displaying the user interface for the first user experience session, and outputting the first audio soundscape includes outputting the first audio soundscape having a first set of two or more audio components randomly or pseudo-randomly selected from a set of available audio components;
detecting, at a second time different from the first time and via the one or more sensors, a request to initiate a user experience session of the corresponding type in an XR environment; and
in response to detecting the request to initiate the user experience session in the XR environment, initiating a second user experience session of the corresponding type in the XR environment, including:
displaying, via the display generating component, a user interface for the second user experience session; and
outputting, via the audio generating component, a second audio soundscape for the second user experience session, wherein the second audio soundscape is output concurrently with displaying the user interface for the second user experience session, and outputting the second audio soundscape includes outputting the second audio soundscape having a second set of two or more audio components randomly or pseudo-randomly selected from the set of available audio components.
84. A computer system configured to communicate with a display generating component, an audio generating component, and one or more sensors, the computer system comprising:
means for detecting, at a first time and via the one or more sensors, a request to initiate a user experience session of a corresponding type in an XR environment;
means for initiating, in response to detecting the request to initiate the user experience session in the XR environment, a first user experience session of the corresponding type in the XR environment, including:
means for displaying, via the display generating component, a user interface for the first user experience session; and
means for outputting, via the audio generating component, a first audio soundscape for the first user experience session, wherein the first audio soundscape is output concurrently with displaying the user interface for the first user experience session, and outputting the first audio soundscape includes outputting the first audio soundscape having a first set of two or more audio components randomly or pseudo-randomly selected from a set of available audio components;
means for detecting, at a second time different from the first time and via the one or more sensors, a request to initiate a user experience session of the corresponding type in an XR environment; and
means for initiating, in response to detecting the request to initiate the user experience session in the XR environment, a second user experience session of the corresponding type in the XR environment, including:
means for displaying, via the display generating component, a user interface for the second user experience session; and
means for outputting, via the audio generating component, a second audio soundscape for the second user experience session, wherein the second audio soundscape is output concurrently with displaying the user interface for the second user experience session, and outputting the second audio soundscape includes outputting the second audio soundscape having a second set of two or more audio components randomly or pseudo-randomly selected from the set of available audio components.
85. A computer program product comprising one or more programs configured to be executed by one or more processors of a computer system in communication with a display generating component, an audio generating component, and one or more sensors, the one or more programs comprising instructions for:
detecting, at a first time and via the one or more sensors, a request to initiate a user experience session of a corresponding type in an XR environment;
in response to detecting the request to initiate the user experience session in the XR environment, initiating a first user experience session of the corresponding type in the XR environment, including:
displaying, via the display generating component, a user interface for the first user experience session; and
outputting, via the audio generating component, a first audio soundscape for the first user experience session, wherein the first audio soundscape is output concurrently with displaying the user interface for the first user experience session, and outputting the first audio soundscape includes outputting the first audio soundscape having a first set of two or more audio components randomly or pseudo-randomly selected from a set of available audio components;
detecting, at a second time different from the first time and via the one or more sensors, a request to initiate a user experience session of the corresponding type in an XR environment; and
in response to detecting the request to initiate the user experience session in the XR environment, initiating a second user experience session of the corresponding type in the XR environment, including:
displaying, via the display generating component, a user interface for the second user experience session; and
outputting, via the audio generating component, a second audio soundscape for the second user experience session, wherein the second audio soundscape is output concurrently with displaying the user interface for the second user experience session, and outputting the second audio soundscape includes outputting the second audio soundscape having a second set of two or more audio components randomly or pseudo-randomly selected from the set of available audio components.
CN202380029272.1A 2022-03-22 2023-03-21 Device, method and graphical user interface for a three-dimensional user experience session in an extended reality environment Pending CN118946871A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411619537.6A CN119576128A (en) 2022-03-22 2023-03-21 Device, method and graphical user interface for three-dimensional user experience session in an extended reality environment

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US63/322,502 2022-03-22
US18/108,852 2023-02-13
US18/108,852 US20230306695A1 (en) 2022-03-22 2023-02-13 Devices, methods, and graphical user interfaces for three-dimensional user experience sessions in an extended reality environment
PCT/US2023/015826 WO2023183340A1 (en) 2022-03-22 2023-03-21 Devices, methods, and graphical user interfaces for three-dimensional user experience sessions in an extended reality environment

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202411619537.6A Division CN119576128A (en) 2022-03-22 2023-03-21 Device, method and graphical user interface for three-dimensional user experience session in an extended reality environment

Publications (1)

Publication Number Publication Date
CN118946871A true CN118946871A (en) 2024-11-12

Family

ID=93356981

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202380029272.1A Pending CN118946871A (en) 2022-03-22 2023-03-21 Device, method and graphical user interface for a three-dimensional user experience session in an extended reality environment

Country Status (1)

Country Link
CN (1) CN118946871A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230306695A1 (en) * 2022-03-22 2023-09-28 Apple Inc. Devices, methods, and graphical user interfaces for three-dimensional user experience sessions in an extended reality environment


Similar Documents

Publication Publication Date Title
JP7587689B2 (en) DEVICE, METHOD AND GRAPHICAL USER INTERFACE FOR INTERACTING WITH A THREE-DIM
US20230171484A1 (en) Devices, methods, and graphical user interfaces for generating and displaying a representation of a user
US20230384860A1 (en) Devices, methods, and graphical user interfaces for generating and displaying a representation of a user
US20240103678A1 (en) Devices, methods, and graphical user interfaces for interacting with extended reality experiences
US20240104859A1 (en) User interfaces for managing live communication sessions
US20240118746A1 (en) User interfaces for gaze tracking enrollment
US20230306695A1 (en) Devices, methods, and graphical user interfaces for three-dimensional user experience sessions in an extended reality environment
US20240103617A1 (en) User interfaces for gaze tracking enrollment
US20240404227A1 (en) Devices, methods, and graphical user interfaces for real-time communication
US20240372968A1 (en) Devices, methods, and graphical user interfaces for displaying a representation of a person
CN119404170A (en) Occluded objects in a 3D environment
US20240152244A1 (en) Devices, Methods, and Graphical User Interfaces for Interacting with Three-Dimensional Environments
US20240404210A1 (en) Devices, methods, and graphical user interfaces forgenerating reminders for a user experience session in an extended reality environment
US20250031002A1 (en) Systems, devices, and methods for audio presentation in a three-dimensional environment
CN118946871A (en) Device, method and graphical user interface for a three-dimensional user experience session in an extended reality environment
CN119576128A (en) Device, method and graphical user interface for three-dimensional user experience session in an extended reality environment
US20240395073A1 (en) Devices, methods, and graphical user interfaces for biometric feature enrollment
US20240103679A1 (en) Devices, Methods, and Graphical User Interfaces for Interacting with Three-Dimensional Environments
WO2024253834A1 (en) Devices, methods, and graphical user interfaces for generating reminders for a user experience session in an extended reality environment
US20250110551A1 (en) Devices, methods, and graphical user interfaces for displaying presentation environments for a presentation application
US20240370344A1 (en) Devices, methods, and graphical user interfaces for providing environment tracking content
US20250232541A1 (en) Methods of updating spatial arrangements of a plurality of virtual objects within a real-time communication session
US20240404216A1 (en) Devices and methods for presenting system user interfaces in an extended reality environment
WO2023230088A1 (en) Devices, methods, and graphical user interfaces for generating and displaying a representation of a user
KR20250050092A (en) User interfaces for eye tracking registration

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination