WO2019118002A1 - Methods and systems for generating augmented reality environments - Google Patents

Methods and systems for generating augmented reality environments

Info

Publication number
WO2019118002A1
Authority
WO
WIPO (PCT)
Prior art keywords
environment
world environment
physical real
physical
computing device
Application number
PCT/US2018/042244
Other languages
French (fr)
Inventor
Aaron Joseph CAMMARATA
Brian James Clark
Original Assignee
Google Llc
Application filed by Google Llc
Publication of WO2019118002A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/006 - Mixed reality

Definitions

  • the present disclosure relates generally to augmented reality (AR). More particularly, the present disclosure relates to methods and systems for generating AR environments.
  • Augmented reality can provide a live direct or indirect view of a physical real-world environment whose elements are “augmented” by computer-generated audio, video, graphics, haptics, and/or the like.
  • AR can enhance a user’s perception of reality and can provide an enriched, immersive experience, and/or the like.
  • AR can be used for entertainment, education, business, communication, data visualization, and/or the like.
  • One example aspect of the present disclosure is directed to a computer-implemented method.
  • the method can include receiving, from at least one of two computing devices, sensor data describing a physical real-world environment in which the two computing devices are located.
  • the method can further include determining, based on the sensor data, locations of the two computing devices relative to one another in the physical real-world environment.
  • the method can further include generating, based on the locations and for display by at least one of the two computing devices, an augmented reality (AR) environment comprising at least a portion of the physical real-world environment.
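The flow described above (receive sensor data, determine the devices' relative locations, generate the AR environment) can be summarized with a minimal sketch. The type and function names below (`SensorFrame`, `estimate_relative_pose`, `build_ar_scene`) are hypothetical placeholders rather than terms from the disclosure, and the sketch assumes each device can already express the pose of a shared stationary object in its own camera frame.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class SensorFrame:
    """Hypothetical container for one device's sensor data."""
    device_id: str
    # 4x4 pose of a shared stationary object in this device's camera frame.
    object_pose: np.ndarray


def estimate_relative_pose(frame_a: SensorFrame, frame_b: SensorFrame) -> np.ndarray:
    """Locate device B relative to device A using the shared object as an anchor."""
    # T_a maps object coordinates into camera A; T_b maps them into camera B.
    # The transform from camera B's frame to camera A's frame is T_a @ inv(T_b).
    return frame_a.object_pose @ np.linalg.inv(frame_b.object_pose)


def build_ar_scene(relative_pose: np.ndarray) -> dict:
    """Generate a toy AR environment description that encodes the relative locations."""
    return {"virtual_elements": [{"type": "turn_indicator",
                                  "points_toward": relative_pose[:3, 3].tolist()}]}


if __name__ == "__main__":
    identity = np.eye(4)
    shifted = np.eye(4)
    shifted[:3, 3] = [0.5, 0.0, 0.0]  # device B sees the object half a metre to the side
    scene = build_ar_scene(estimate_relative_pose(
        SensorFrame("device_a", identity), SensorFrame("device_b", shifted)))
    print(scene)
```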
  • Another example aspect of the present disclosure is directed to another computer-implemented method.
  • the method can include generating, for display by a computing device, an AR environment comprising one or more virtual elements and at least a portion of a physical real-world environment.
  • the method can further include receiving, from the computing device, data generated by user input moving an element of the one or more virtual elements from a first location within the AR environment to a second location within the AR environment.
  • the method can further include adjusting, based on the data and a physics-based model of the AR environment, one or more aspects of the one or more virtual elements within the AR environment.
  • the method can include generating, for display by a computing device, an AR environment comprising one or more virtual elements and at least a portion of a physical real-world environment.
  • the method can further include determining an area of interest of the AR environment.
  • the method can further include generating, for display by the computing device within the AR environment, one or more virtual elements identifying the area of interest.
  • the method can include generating, for display by a computing device, an AR environment comprising one or more virtual elements and at least a portion of a physical real-world environment.
  • the method can further include identifying a physical surface located in the physical real-world environment.
  • the method can further include generating, for display by the computing device within the AR environment, one or more virtual elements scaled to fit in a space defined at least in part by the physical surface located in the physical real-world environment.
  • Another example aspect of the present disclosure is directed to another computer-implemented method.
  • the method can include generating, for display by a computing device, an AR environment comprising one or more virtual elements and at least a portion of a physical real-world environment.
  • the method can further include identifying one or more locations within the AR environment for locating one or more advertisements.
  • the method can further include generating, for display by the computing device at the one or more locations within the AR environment, one or more virtual elements comprising the one or more advertisements.
  • Other aspects of the present disclosure are directed to various systems, apparatuses, non-transitory computer-readable media, user interfaces, and electronic devices.
  • FIG. 1 depicts an example computing environment according to example embodiments of the present disclosure.
  • FIGs. 2, 3, 4, and 5 depict example scenes according to example embodiments of the present disclosure.
  • FIG. 6 depicts an example method according to example embodiments of the present disclosure.
  • FIG. 7 depicts an example scene according to example embodiments of the present disclosure.
  • FIG. 8 depicts an example method according to example embodiments of the present disclosure.
  • FIG. 9 depicts an example scene according to example embodiments of the present disclosure.
  • FIG. 10 depicts an example method according to example embodiments of the present disclosure.
  • FIGs. 11 and 12 depict example scenes according to example embodiments of the present disclosure.
  • FIG. 13 depicts an example method according to example embodiments of the present disclosure.
  • FIG. 14 depicts an example scene according to example embodiments of the present disclosure.
  • FIG. 15 depicts an example method according to example embodiments of the present disclosure.
  • Example aspects of the present disclosure are directed to methods and systems for generating augmented reality (AR) environments.
  • an AR environment can be generated for display by at least one of two computing devices.
  • the AR environment can include at least a portion of a physical real-world environment in which the two computing devices are located.
  • Data describing the physical real-world environment can be received from at least one of the two computing devices.
  • Such data can be utilized to determine locations of the two computing devices relative to one another in the physical real-world environment.
  • At least a portion of the AR environment (e.g., one or more virtual elements, and/or the like) can be generated based on the locations.
  • For example, users of the computing devices can gather around a table to play a game in an AR environment.
  • Sensor data from at least one of the users’ computing devices can be utilized to determine the locations of the computing devices relative to one another (e.g., the location of the users around the table, and/or the like).
  • One or more elements of the AR environment (e.g., virtual elements indicating which user should play next in the game, and/or the like) can be generated based on the locations.
  • the sensor data can include data generated by one or more cameras, accelerometers, gyroscopes, global positioning system (GPS) receivers, wireless network interfaces, and/or the like.
  • the sensor data can be received from cameras of the two computing devices.
  • the cameras of the computing devices can capture portions of the physical real-world environment (e.g., from the perspectives of their respective users, and/or the like).
  • the data received from the camera of the first computing device can represent a first portion of the physical real-world environment (e.g., from the perspective of a user of the first computing device, and/or the like)
  • the data received from the camera of the second computing device can represent a second portion of the physical real-world environment (e.g., from the perspective of a user of the second computing device, and/or the like).
  • the physical real-world environment can include one or more stationary objects (e.g., a dollar bill, marker, coin, cutting board, user’s hand, and/or the like), and both the first portion and the second portion can include the stationary object(s) (e.g., from different orientations, perspectives, and/or the like).
  • determining the locations can include comparing the data representing the first portion of the physical real-world environment to the data representing the second portion of the physical real-world environment to determine a difference in vantage point (e.g., orientation, perspective, and/or the like) between the first computing device and the second computing device with respect to the stationary object(s).
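As one hedged illustration of this comparison, each camera can estimate the pose of the shared stationary object from its own image, and the two poses can then be composed into a device-to-device transform. The sketch below assumes a roughly planar reference object of known size (a dollar-bill-sized rectangle), corner pixel coordinates that have already been detected, and placeholder camera intrinsics; none of these values come from the disclosure.

```python
import cv2
import numpy as np

# Approximate corner layout of a planar reference object (metres), e.g. a dollar bill.
OBJECT_CORNERS = np.array([[0.0, 0.0, 0.0],
                           [0.156, 0.0, 0.0],
                           [0.156, 0.066, 0.0],
                           [0.0, 0.066, 0.0]], dtype=np.float64)

# Placeholder pinhole intrinsics; a real device would supply calibrated values.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])


def object_pose(image_corners) -> np.ndarray:
    """Estimate the 4x4 pose of the reference object in one camera's frame
    from the object's four corner pixel coordinates (a 4x2 array)."""
    corners = np.asarray(image_corners, dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(OBJECT_CORNERS, corners, K, None)
    if not ok:
        raise RuntimeError("pose estimation failed")
    pose = np.eye(4)
    pose[:3, :3], _ = cv2.Rodrigues(rvec)
    pose[:3, 3] = tvec.ravel()
    return pose


def vantage_point_difference(corners_cam_a, corners_cam_b) -> np.ndarray:
    """Transform from camera B's frame to camera A's frame via the shared object."""
    return object_pose(corners_cam_a) @ np.linalg.inv(object_pose(corners_cam_b))
```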
  • the data representing the first portion of the physical real-world environment can be utilized to render, for display by the first computing device, an AR environment comprising the first portion of the physical real-world environment and one or more virtual elements (e.g., one or more virtual depictions of the stationary object(s), one or more targeting reticles, and/or the like).
  • the data representing the second portion of the physical real-world environment can be utilized to render, for display by the second computing device, an AR environment comprising the second portion of the physical real-world environment and the virtual element(s) (e.g., the virtual depiction(s) of the stationary object(s), the targeting reticle(s), and/or the like).
  • Users of the computing devices can align one or more of the virtual element(s) with the stationary object(s) (e.g., by manipulating their computing device to move one or more of the virtual element(s), and/or the like), for example, in response to a prompt (e.g., included as part of the virtual element(s), and/or the like).
  • data generated by user input aligning the one or more virtual element(s) with the stationary object(s) within the AR environment comprising the first portion of the physical real-world environment can be received from the first computing device.
  • data generated by user input aligning the one or more virtual element(s) with the stationary object(s) within the AR environment comprising the second portion of the physical real-world environment can be received from the second computing device.
  • determining the locations can include comparing the data generated by the user input aligning the one or more virtual element(s) with the stationary object(s) within the AR environment comprising the first portion of the physical real-world environment and the data generated by the user input aligning the one or more virtual element(s) with the stationary object(s) within the AR environment comprising the second portion of the physical real-world environment.
  • the data representing the first portion of the physical real-world environment can be utilized to render, for display by the first computing device, an image comprising the first portion of the physical real-world environment.
  • the data representing the second portion of the physical real-world environment can be utilized to render, for display by the second computing device, an image comprising the second portion of the physical real-world environment.
  • Users of the computing devices can select one or more portions (e.g., corners, and/or the like) of the stationary object(s), for example, in response to a prompt (e.g., included in the image(s) as one or more virtual elements, and/or the like).
  • data generated by user input selecting the portion(s) of the stationary object(s) within the image comprising the first portion of the physical real-world environment can be received from the first computing device.
  • determining the locations can include comparing the data generated by the user input selecting the portion(s) of the stationary object(s) within the image comprising the first portion of the physical real-world environment and the data generated by the user input selecting the portion(s) of the stationary object(s) within the image comprising the second portion of the physical real-world environment.
  • an image (e.g., of a quick response (QR) code, and/or the like) can be generated for display by the first computing device.
  • a camera of the second computing device can be used to capture the image being displayed by the first computing device, and data representing a portion of the physical real-world environment comprising the image being displayed by the first computing device can be received from the camera of the second computing device.
  • determining the locations can include utilizing the data representing the portion of the physical real-world environment comprising the image to determine an orientation of the image being displayed by the first computing device relative to the second computing device within the physical real-world environment.
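A sketch of this step using OpenCV's built-in QR detector is shown below; the physical on-screen size of the code, the camera intrinsics, and the corner ordering convention are assumptions rather than details taken from the disclosure.

```python
import cv2
import numpy as np


def qr_relative_orientation(frame, camera_matrix, code_size_m=0.05):
    """Estimate the pose of a QR code shown on the first device's screen, as seen
    by the second device's camera.  `code_size_m` is the assumed physical size of
    the code as rendered on the first device's display."""
    detector = cv2.QRCodeDetector()
    found, corners = detector.detect(frame)  # corners: 1x4x2 image points
    if not found:
        return None
    # Corners of the code in its own plane, matching the detector's corner order.
    s = code_size_m
    object_points = np.array([[0, 0, 0], [s, 0, 0], [s, s, 0], [0, s, 0]],
                             dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(object_points,
                                  corners.reshape(4, 2).astype(np.float64),
                                  camera_matrix, None)
    if not ok:
        return None
    rotation, _ = cv2.Rodrigues(rvec)
    # `rotation`/`tvec` give the displayed code's orientation and position relative
    # to the second device's camera, which in turn relates the two devices.
    return rotation, tvec
```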
  • the AR environment can include one or more virtual elements corresponding to a game being played by users of the computing devices.
  • the locations can be utilized to determine an element of the game (e.g., which user will play next, and/or the like) in accordance with one or more rules of the game (e.g., play to the left, and/or the like).
  • the proximity of the computing devices to one another can be determined (e.g., based on the sensor data, locations, and/or the like).
  • In such embodiments, a level of audio and/or a degree of physical feedback (e.g., vibration, and/or the like) associated with the AR environment to be produced by at least one of the two computing devices can be determined based on the proximity.
  • For example, users of the computing devices can be playing a game associated with the AR environment, a user of the first computing device can experience an event (e.g., a bomb detonation, and/or the like), and an extent to which the user of the second computing device will experience (e.g., in terms of audio, physical feedback, and/or the like) the event can be determined based on the proximity.
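One simple way to realize this proximity-dependent feedback is a linear falloff from the event location, as in the sketch below; the falloff radius and the mapping of intensity onto audio gain and vibration amplitude are illustrative assumptions.

```python
def event_intensity(distance_m: float, max_range_m: float = 5.0) -> float:
    """Linear falloff from 1.0 at the event location to 0.0 at max_range_m."""
    if max_range_m <= 0:
        return 0.0
    return max(0.0, min(1.0, 1.0 - distance_m / max_range_m))


def feedback_for_device(distance_to_event_m: float) -> dict:
    """Audio level and vibration amplitude a nearby device should produce."""
    intensity = event_intensity(distance_to_event_m)
    return {"audio_gain": intensity, "vibration_amplitude": intensity}


# Example: a bomb detonates 1.5 m from the second device.
print(feedback_for_device(1.5))  # -> {'audio_gain': 0.7, 'vibration_amplitude': 0.7}
```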
  • an AR environment comprising one or more virtual elements and at least a portion of a physical real-world environment can be generated for display by a computing device (e.g., a computing device located in the physical real-world environment, and/or the like).
  • a user can move an element of the virtual element(s) from a first location within the AR environment to a second location within the AR environment, and data indicating the user input can be received.
  • Such data and a physics-based model of the AR environment can be utilized to adjust one or more aspects (e.g., positions, rotations, scales, colors, textures, velocities, accelerations, and/or the like) of the virtual element(s) within the AR environment.
  • a user can be playing a game in an AR environment that involves moving a virtual element (e.g., a targeting reticle, and/or the like) around another virtual element (e.g., a game board, and/or the like) such that it aligns with one or more further virtual elements (e.g., one or more cubes appearing on the game board, and/or the like).
  • one or more aspects of the virtual element(s) can be adjusted based on a physics-based model of the AR environment (e.g., a model that takes into account one or more of the virtual element(s), one or more physical objects in the physical real-world environment, and/or the like).
  • the aspect(s) can be adjusted based on a distance between the first location and the second location (e.g., a distance the user moved the targeting reticle, and/or the like). Additionally or alternatively, the aspect(s) can be adjusted based on a velocity at which the element was moved from the first location to the second location (e.g., how fast the user moved the targeting reticle, and/or the like).
  • the physics-based model can be based on one or more locations and/or one or more dimensions of the virtual element(s) (e.g., the targeting reticle, the game board, the cube(s), and/or the like). Additionally or alternatively, the physics-based model can be based on one or more locations and/or one or more dimensions of one or more physical objects in the physical real-world environment (e.g., a tabletop surface on which one or more of the virtual element(s) appear to rest, and/or the like).
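As an illustration of such a physics-based adjustment, the sketch below gives the moved element an initial velocity taken from the gesture, integrates a simple friction model, and clamps the result to the board's extent; the friction coefficient, time step, and board size are placeholder assumptions rather than parameters from the disclosure.

```python
import numpy as np


def settle_element(start, gesture_velocity, board_half_extent=0.5,
                   friction=4.0, dt=1.0 / 60.0):
    """Slide a virtual element (e.g., a targeting reticle) across a virtual board.

    `start` and `gesture_velocity` are 2D vectors in the board's plane (metres,
    metres/second).  The element decelerates under friction and is kept on the
    board by clamping, approximating a collision with the board's rim.
    """
    position = np.asarray(start, dtype=float)
    velocity = np.asarray(gesture_velocity, dtype=float)
    while np.linalg.norm(velocity) > 1e-3:
        position = position + velocity * dt
        velocity = velocity * max(0.0, 1.0 - friction * dt)  # simple friction decay
        # Keep the element on the board (collision with the rim).
        position = np.clip(position, -board_half_extent, board_half_extent)
    return position


print(settle_element(start=(0.0, 0.0), gesture_velocity=(1.2, 0.3)))
```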
  • an AR environment comprising one or more virtual elements and at least a portion of a physical real-world environment can be generated for display by a computing device (e.g., a computing device located in the physical real-world environment, and/or the like).
  • An area of interest of the AR environment can be determined, and one or more virtual elements identifying the area of interest can be generated for display by the computing device.
  • a user can be utilizing a computing device in a physical space (e.g., a room, and/or the like) that includes a tabletop surface.
  • An AR environment can be generated that includes the tabletop surface and surrounding portions of the physical space.
  • the AR environment can include one or more virtual elements (e.g., one or more cubes, and/or the like) located on the tabletop surface. Accordingly (e.g., due to the location of the cube(s), and/or the like), the tabletop surface can be determined to be an area of interest, and one or more virtual elements identifying the area of interest (e.g., a virtual border delineating the tabletop surface, and/or the like) can be generated for display by the computing device within the AR environment.
  • determining the area of interest can include determining one or more locations of one or more virtual elements within the AR environment (e.g., the cubes, and/or the like).
  • the virtual element(s) identifying the area of interest can define a space that includes the location(s) of the virtual element(s) within the AR environment (e.g., a portion of the tabletop surface where the cubes are located, and/or the like).
  • determining the area of interest can include identifying a physical surface located in the physical real-world environment (e.g., the tabletop surface, and/or the like).
  • the virtual element(s) identifying the area of interest can define a space that includes at least a portion of the physical surface (e.g., the tabletop surface, and/or the like).
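One straightforward way to derive such an area of interest is to bound the virtual elements' positions on the surface and pad the result, as sketched below; the margin value is an arbitrary illustrative choice.

```python
import numpy as np


def area_of_interest(element_positions, margin=0.05):
    """Return (min_corner, max_corner) of a box on the surface that contains all
    of the given virtual elements, padded by `margin` metres, suitable for
    rendering a virtual border around the area of interest."""
    positions = np.asarray(element_positions, dtype=float)
    return positions.min(axis=0) - margin, positions.max(axis=0) + margin


# Example: three cubes resting on a tabletop (x, z coordinates on the surface).
cubes = [(0.10, 0.20), (0.35, 0.15), (0.22, 0.40)]
print(area_of_interest(cubes))
```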
  • an AR environment comprising one or more virtual elements and at least a portion of a physical real-world environment can be generated for display by a computing device (e.g., a computing device located in the physical real-world environment, and/or the like).
  • a physical surface located in the physical real-world environment can be identified.
  • One or more virtual elements scaled to fit in a space defined at least in part by the physical surface can be generated for display by the computing device within the AR environment.
  • a user can be utilizing a computing device in a physical space (e.g., a room, and/or the like) that includes a tabletop surface.
  • the tabletop surface can define at least in part a space within the physical space (e.g., from the tabletop surface to the ceiling, and/or the like).
  • the tabletop surface can be identified, and one or more virtual elements (e.g., one or more bricks, and/or the like) scaled to fit in the space defined at least in part by the tabletop surface can be generated for display by the computing device within the AR environment (e.g., in the space defined at least in part by the tabletop surface, and/or the like).
  • the virtual element(s) scaled to fit in the space can represent a data set (e.g., search results, and/or the like).
  • the virtual element(s) scaled to fit in the space can be generated based on the size of the data set (e.g., to fill the available space, and/or the like).
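A minimal sketch of this kind of data-set-driven scaling is shown below: given the dimensions of the available space and the number of results, it chooses a per-element size and grid positions that fill the space. The uniform cubic-grid layout and the function name are illustrative assumptions.

```python
def layout_results_in_space(num_results, space_width, space_depth, space_height):
    """Scale and arrange `num_results` identical virtual elements (e.g., bricks
    representing search results) so they fill a box above a physical surface.

    Returns the per-element edge length and a list of (x, y, z) centre positions.
    """
    if num_results <= 0:
        return 0.0, []
    # Smallest cubic grid that can hold all results.
    per_axis = 1
    while per_axis ** 3 < num_results:
        per_axis += 1
    edge = min(space_width, space_depth, space_height) / per_axis
    positions = []
    for i in range(num_results):
        x = (i % per_axis) * edge + edge / 2
        y = ((i // per_axis) % per_axis) * edge + edge / 2
        z = (i // (per_axis * per_axis)) * edge + edge / 2
        positions.append((x, y, z))
    return edge, positions


edge, centres = layout_results_in_space(20, 1.2, 0.8, 1.5)
print(edge, len(centres))
```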
  • multiple physical surfaces located in the physical real-world environment can be identified (e.g., the tabletop surface, a floor of the room, a wall of the room, a ceiling of the room, and/or the like).
  • In such embodiments, the virtual element(s) scaled to fit in the space (e.g., the brick(s), and/or the like) can be generated for display within a space defined at least in part by one or more of the identified surfaces (e.g., a space defined by the tabletop surface and the wall of the room, and/or the like).
  • the physical surface (e.g., the tabletop surface, and/or the like) can be selected from amongst multiple physical surfaces located in the physical real-world environment (e.g., the tabletop surface, a floor of the room, a wall of the room, a ceiling of the room, and/or the like).
  • a user can select the physical surface (e.g., the tabletop surface, and/or the like) by aligning one or more virtual elements (e.g., a virtual surface, plane, and/or the like) with the physical surface (e.g., the tabletop surface, and/or the like).
  • one or more dimensions of the space defined at least in part by the physical surface (e.g., a space between the tabletop surface and the ceiling, and/or the like) can be determined.
  • one or more of the virtual element(s) scaled to fit in the space can be selected from amongst multiple different virtual elements (e.g., a set of possible virtual elements that includes the brick(s), the Eiffel Tower, the Washington Monument, and/or the like) based on the dimension(s).
  • the physical real-world environment can include multiple computing devices, and determining the dimension(s) can include determining a distance between the devices (e.g., based on sensor data from one or more of the devices, and/or the like).
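The selection step can be illustrated as a simple fit test over a candidate catalog, as in the sketch below; the candidate names echo the examples above, but their bounding-box dimensions are placeholder values rather than figures from the disclosure.

```python
# Candidate virtual models and their (width, depth, height) bounding boxes in metres
# after scaling; the figures are placeholder values for illustration only.
CANDIDATES = {
    "brick_stack": (0.3, 0.3, 0.5),
    "washington_monument_model": (0.4, 0.4, 1.2),
    "eiffel_tower_model": (0.8, 0.8, 1.8),
}


def pick_model_for_space(space_dims):
    """Choose the tallest candidate whose bounding box fits within the measured
    space (e.g., from the tabletop surface up to the ceiling)."""
    sw, sd, sh = space_dims
    fitting = [(dims[2], name) for name, dims in CANDIDATES.items()
               if dims[0] <= sw and dims[1] <= sd and dims[2] <= sh]
    return max(fitting)[1] if fitting else None


# A space measured between two devices on a table and the ceiling above it.
print(pick_model_for_space((1.0, 1.0, 1.5)))  # -> 'washington_monument_model'
```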
  • an AR environment comprising one or more virtual elements and at least a portion of a physical real-world environment can be generated for display by a computing device (e.g., a computing device located in the physical real-world environment, and/or the like).
  • One or more locations within the AR environment for locating one or more advertisements can be identified.
  • One or more virtual elements comprising the advertisement(s) can be generated for display by the computing device at the location(s) within the AR environment.
  • a user can be utilizing a computing device in a physical space (e.g., a room, and/or the like) that includes a tabletop surface.
  • The physical real-world environment can also include a physical object (e.g., a piece of paper, and/or the like).
  • a surface of the physical object can be identified for locating one or more advertisements, and one or more virtual elements (e.g., virtual text, images, and/or the like) comprising the advertisement(s) can be generated for display by the computing device on the surface of the physical object within the AR environment.
  • one or more of the advertisement(s) can be selected from amongst multiple different possible advertisements.
  • one or more of the advertisement(s) can be selected based on: a geographic location in the physical real-world environment at which the computing device is located (e.g., the computing device can be located in a bar, and an advertisement related to the bar, one or more of its products or services, and/or the like can be selected); a search history associated with the computing device (e.g., the computing device can have been utilized to search for a particular product or service, and an advertisement related to the product or service, and/or the like can be selected); a context of the AR environment (e.g., the computing device can be located in a bar where users are utilizing the AR environment to play a trivia game, and an advertisement related to the bar, a trivia question, and/or the like can be selected); user performance within the AR environment (e.g., an advertisement for a discount at the bar can be selected based on the user’s performance in the trivia game, and/or the like).
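A hedged sketch of such context-based selection is shown below: candidate advertisements are scored against simple signals (venue, search history, AR session context, user performance). The signal names, tag vocabulary, and weights are illustrative assumptions, not part of the disclosure.

```python
def select_advertisement(candidates, context):
    """Score candidate advertisements against simple context signals and return
    the best match.

    `candidates`: list of dicts with 'id' and 'tags' (e.g., {'bar', 'trivia'}).
    `context`: dict that may contain 'venue_tags', 'search_tags',
               'ar_context_tags', and a 0..1 'user_performance' score.
    """
    def score(ad):
        tags = set(ad.get("tags", ()))
        s = 0.0
        s += 2.0 * len(tags & set(context.get("venue_tags", ())))       # geographic venue
        s += 1.5 * len(tags & set(context.get("search_tags", ())))      # search history
        s += 1.0 * len(tags & set(context.get("ar_context_tags", ())))  # AR session context
        if "discount" in tags:                                          # reward good play
            s += context.get("user_performance", 0.0)
        return s

    return max(candidates, key=score) if candidates else None


ads = [{"id": "bar_drink_special", "tags": {"bar", "discount"}},
       {"id": "running_shoes", "tags": {"shoes"}}]
print(select_advertisement(ads, {"venue_tags": {"bar"},
                                 "ar_context_tags": {"trivia"},
                                 "user_performance": 0.9})["id"])
```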
  • identifying the location(s) can include identifying a physical object in the physical real-world environment (e.g., the monitor, and/or the like).
  • the virtual element(s) comprising the advertisement(s) can include one or more virtual elements that depict at least a portion of the physical object (e.g., a virtual monitor, and/or the like), outline at least a portion of the physical object (e.g., outline the monitor, and/or the like), highlight at least a portion of the physical object (e.g., cover at least a portion of the monitor with a transparent layer, and/or the like), and/or identify at least a portion of the physical object (e.g., draw a box around at least a portion of the monitor, and/or the like).
  • identifying the location(s) can include identifying text in the physical real-world environment recognized by a computing device (e.g., the text identifying the monitor brand, and/or the like).
  • the virtual element(s) comprising the advertisement(s) can include one or more virtual elements that depict at least a portion of the text (e.g., virtual text, and/or the like), outline at least a portion of the text (e.g., outline the text identifying the monitor brand, and/or the like), highlight at least a portion of the text (e.g., cover at least a portion of the text identifying the monitor brand with a transparent layer, and/or the like), and/or identify at least a portion of the text (e.g., draw a box around at least a portion of the text identifying the monitor brand, and/or the like).
  • generating the virtual element(s) comprising the advertisement(s) can include modifying one or more dimensions, colors, finishes, lightings, and/or the like of at least one of the advertisement(s).
  • For example, the physical real-world environment could include a chair, and the chair can be identified as a location for an advertisement.
  • an advertisement related to the chair can be selected, and generating the virtual element(s) comprising the selected advertisement can include modifying one or more dimensions of the advertisement (e.g., so that the advertisement will fit on a surface of the chair, and/or the like), colors of the advertisement (e.g., so that the advertisement will be visible on the surface of the chair, and/or the like), finishes (e.g., matte, glossy, and/or the like) of the advertisement (e.g., so the advertisement will be aesthetically accentuated on the surface of the chair, and/or the like), and/or lightings (e.g., levels of brightness, contrast, and/or the like) of the advertisement (e.g., so the advertisement will be visible on the surface of the chair, and/or the like).
  • a user can invoke (e.g., select, interact with, and/or the like) one or more of the virtual element(s) comprising the advertisement(s), and data generated by the user input can be received. Responsive to receiving the data generated by the user input, one or more virtual elements comprising content associated with one or more of the advertisement(s) associated with the virtual element(s) invoked by the user can be generated for display by the computing device within the AR environment. For example, a user can invoke a virtual element located on a surface of the chair, and one or more virtual elements comprising an advertisement for the chair can be generated for display by the computing device within the AR environment (e.g., alongside the chair, on a surface of the chair, and/or the like).
  • an application distinct from an application providing the AR environment can be directed to content associated with one or more of the advertisement(s) associated with the virtual element(s) invoked by the user for display by the computing device within the application distinct from the application associated with the AR environment.
  • the computing device can include one or more applications (e.g., a web browser, an application associated with a merchant, service provider, and/or the like) distinct from an application providing the AR environment, and responsive to the user invoking the virtual element located on the surface of the chair, one or more of such application(s) can be directed (e.g., via an application programming interface (API) of such application(s), an advertisement identifier, a uniform resource locator (URL), and/or the like) to content (e.g., for display by the computing device within such application(s), and/or the like) enabling the user to learn more about the chair, purchase the chair, and/or the like.
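As a toy illustration of this hand-off, the sketch below builds a URL carrying an advertisement identifier and opens it in the system web browser (a stand-in for any application distinct from the AR application); the endpoint and parameter name are fictional placeholders.

```python
import webbrowser
from urllib.parse import urlencode


def open_advertisement_content(ad_id: str, base_url: str = "https://example.com/ads"):
    """Direct an application distinct from the AR application (here, the system
    web browser) to content for the invoked advertisement.  The base URL and
    query parameter are placeholders, not a real endpoint."""
    url = f"{base_url}?{urlencode({'ad': ad_id})}"
    webbrowser.open(url)  # hands off to the separate application
    return url


# Example: the user taps the virtual element placed on the chair.
print(open_advertisement_content("chair_promo_42"))
```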
  • the AR environment, including a particular identified location within it, can be generated for display by multiple different computing devices (e.g., utilized by different users located in the physical real-world environment, and/or the like).
  • virtual elements depicting different advertisements can be generated for display at the particular location by the different computing devices.
  • the physical real-world environment can include two different computing devices, one or more virtual elements depicting a first advertisement (e.g., based on a search history associated with the first computing device, and/or the like) can be generated for display at a particular location within the AR environment by the first computing device, and one or more virtual elements depicting a second advertisement (e.g., based on a search history associated with the second computing device, and/or the like) can be generated for display at the particular location within the AR environment by the second computing device.
  • the technologies described herein can provide a number of technical effects and benefits.
  • the technologies can enable an AR environment comprising a physical real-world environment that includes multiple users to be modified (e.g., calibrated, and/or the like) based on the users’ locations within the physical real-world environment.
  • the technologies can enable an AR environment to be modified in a predictable manner (e.g., based on physics, and/or the like), creating a uniform and consistent user experience, and/or the like.
  • the technologies can allow developers of AR applications to focus and/or guide a user’s experience within the environment, by identifying areas of interest to the user, and/or the like.
  • the technologies can optimize utilization of space within an AR environment by, for example, scaling virtual elements to fit within the available space. Additionally or alternatively, the technologies can support the integration of advertisements and/or supplemental information into AR environments in a contextually useful manner that minimizes their potential intrusiveness on the user experience.
  • FIG. 1 depicts an example computing environment according to example embodiments of the present disclosure.
  • environment 100 can include computing devices 102, 104, and 106 (e.g., user devices, and/or the like), one or more networks 108 (e.g., local area networks (LANs), wide area networks (WANs), portions of the Internet, and/or the like), and computing system 110 (e.g., a backend system, and/or the like).
  • Network(s) 108 can include one or more networks (e.g., wired networks, wireless networks, and/or the like) that interface (e.g., support communications between, and/or the like) devices 102, 104, and/or 106 with one another and/or with system 110.
  • Devices 102, 104, and/or 106 can include one or more computing devices (e.g., laptop computers, desktop computers, tablet computers, mobile devices, smartphones, wearable devices, head-mounted displays, and/or the like) capable of performing one or more of the operations and/or functions described herein. It will be appreciated that references herein to any one of devices 102, 104, and/or 106 could refer to multiple associated computing devices (e.g., a mobile device, wearable device, and/or the like) functioning together (e.g., for a particular user, and/or the like).
  • Device 102 can include one or more processors 112, one or more communication interfaces 114, one or more displays 116, one or more sensors 118, and memory 120.
  • Communication interface(s) 114 can support communications between device 102 and devices 104 and/or 106 and/or system 110 (e.g., via network(s) 108, and/or the like).
  • Display(s) 116 can include one or more devices (e.g., panel displays, touch screens, head-mounted displays, and/or the like) that allow a user of device 102 to view imagery, and/or the like.
  • Sensor(s) 118 can include one or more components (e.g., cameras, accelerometers, gyroscopes, global positioning system (GPS) receivers, wireless network interfaces, and/or the like) that can perceive one or more aspects of a physical real-world environment in which device 102 is located and can generate data representing those aspect(s).
  • Memory 120 can include (e.g., store, and/or the like) instructions 122, which when executed by processor(s) 112 can cause device 102 to perform one or more of the operations and/or functions described herein. It will be appreciated that devices 104 and/or 106 can include one or more of the components described above with respect to device 102.
  • System 110 can include one or more computing devices (e.g., servers, and/or the like) capable of performing one or more of the operations and/or functions described herein.
  • System 110 can include one or more processors 124, one or more communication interfaces 126, and memory 128.
  • Communication interface(s) 126 can support communications between system 110 and devices 102, 104, and/or 106 (e.g., via network(s) 108, and/or the like).
  • Memory 128 can include (e.g., store, and/or the like) instructions 130, which when executed by processor(s) 124 can cause system 110 to perform one or more of the operations and/or functions described herein.
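The components enumerated for FIG. 1 can be mirrored in a small data model, sketched below purely for orientation; the class and field names are illustrative and do not correspond to any actual implementation.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class UserDevice:
    """Mirrors device 102/104/106: a display, sensors, and memory holding instructions."""
    device_id: str
    sensors: List[str] = field(default_factory=lambda: ["camera", "accelerometer",
                                                        "gyroscope", "gps", "wifi"])


@dataclass
class BackendSystem:
    """Mirrors system 110: a backend reachable over network(s) 108."""
    connected_devices: List[UserDevice] = field(default_factory=list)

    def register(self, device: UserDevice) -> None:
        self.connected_devices.append(device)


backend = BackendSystem()
for name in ("device_102", "device_104", "device_106"):
    backend.register(UserDevice(name))
print([d.device_id for d in backend.connected_devices])
```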
  • FIGs. 2, 3, 4, and 5 depict example scenes according to example embodiments of the present disclosure.
  • A computing system (e.g., devices 102, 104, and/or 106, system 110, and/or the like) can generate an AR environment comprising scene 200 for display (e.g., via display(s) 116, and/or the like) by devices 102, 104, and/or 106.
  • Scene 200 can include a portion of a physical real-world environment (e.g., a room, a field, and/or the like) in which devices 102, 104, and/or 106 can be located.
  • scene 200 can include one or more physical objects.
  • scene 200 can include object 202 (e.g., a dollar bill, and/or the like).
  • Scene 200 can also include virtual elements 204, 206, 208, and/or 210.
  • the computing system can receive data describing the physical real-world environment (e.g., data generated by sensor(s) 118, and/or the like) from devices 102, 104, and/or 106; the data can be utilized to determine locations of devices 102, 104, and/or 106 relative to one another in the physical real-world environment; and at least a portion of the AR environment (e.g., element 210, and/or the like) can be generated based on the locations. For example, users of devices 102, 104, and/or 106 can gather around a table to play a game in an AR environment.
  • Sensor data from devices 102, 104, and/or 106 can be utilized to determine the locations of devices 102, 104, and/or 106 relative to one another (e.g., the location of the users around the table, and/or the like).
  • One or more elements of the AR environment (e.g., element 210, and/or the like) can be generated based on the locations; for example, element 210 can indicate which user should play next in the game, and/or the like.
  • one or more of the generated element(s) can depict one or more of the locations.
  • scene 200 can be displayed by device 102, and element 210 can point toward the location of device 104 in the physical real-world environment.
  • the computing system can determine the proximity of devices 102, 104, and/or 106 to one another (e.g., based on data generated by sensor(s) 118, determined locations of devices 102, 104, and/or 106, and/or the like). In such embodiments, the computing system can determine, based on the proximity of devices 102, 104, and/or 106, a level of audio and/or a degree of physical feedback (e.g., vibration, and/or the like) associated with the AR environment to be produced by devices 102, 104, and/or 106.
  • users of devices 102, 104, and/or 106 can be playing a game associated with the AR environment, a user of device 102 can experience an event (e.g., a bomb detonation, and/or the like), and an extent to which users of devices 104 and/or 106 will experience (e.g., in terms of audio, physical feedback, and/or the like) the event can be determined based on the proximity of devices 104 and/or 106 to device 102 (e.g., at the time of the event, and/or the like).
  • the computing system can receive data describing the physical real-world environment (e.g., data generated by sensor(s) 118, and/or the like) from cameras of devices 102, 104, and/or 106.
  • the cameras of devices 102, 104, and/or 106 can capture portions of the physical real-world environment (e.g., from the perspectives of their respective users, and/or the like).
  • data received from a camera of device 102 can represent a portion of the physical real-world environment including object 202 (e.g., from the perspective of a user of device 102, and/or the like).
  • data received from a camera of device 104 can represent a portion of the physical real-world environment including object 202 (e.g., from the perspective of a user of device 104, and/or the like); and/or data received from a camera of device 106 can represent a portion of the physical real-world environment including object 202 (e.g., from the perspective of a user of device 106, and/or the like).
  • the portions of the physical real-world environment captured by the cameras of devices 102, 104, and/or 106 can include common features (e.g., object 202, and/or the like), and such features can be captured from different perspectives (e.g., the cameras of devices 102, 104, and/or 106 can generate data representing object 202 from different positions (e.g., positions around a tabletop surface on which object 202 is resting, and/or the like) within the physical real-world environment, and/or the like).
  • the computing system can determine the locations of devices 102, 104, and/or 106 by comparing the data representing the portions of the physical real-world environment (e.g., the various portions including object 202, and/or the like) to determine a difference in vantage point (e.g., orientation, perspective, and/or the like) between devices 102, 104, and/or 106 with respect to stationary physical object(s) included in the portions (e.g., object 202, and/or the like).
  • the computing system can determine the locations of devices 102, 104, and/or 106 by comparing the data received from the cameras of devices 102, 104, and/or 106 to determine a difference in vantage point (e.g., orientation, perspective, and/or the like) between devices 102, 104, and/or 106 with respect to object 202.
  • the computing system can determine the locations of devices 102, 104, and/or 106 by determining, comparing, and/or the like one or more positions, viewing angles, and/or the like of devices 102, 104, and/or 106 (e.g., using one or more camera projections, and/or the like) relative to a single, common, shared, and/or the like coordinate space for the AR environment (e.g., based on object 202, and/or the like).
  • the computing system can utilize data representing the portions of the physical real-world environment to render, for display by devices 102, 104, and/or 106, one or more AR environments including the portions of the physical real-world environment (e.g., the portions including object 202, and/or the like) and one or more virtual elements (e.g., elements 204, 206, 208, and/or the like).
  • data representing a portion of the physical real-world environment from the perspective of a user of device 102 can be utilized to render, for display by device 102, an AR environment including the portion of the physical real-world environment from the perspective of the user of device 102, and elements 204, 206, and/or 208.
  • data representing a portion of the physical real-world environment from the perspective of a user of device 104 can be utilized to render, for display by device 104, an AR environment including the portion of the physical real-world environment from the perspective of the user of device 104, and elements 204, 206, and/or 208; and/or data representing a portion of the physical real-world environment from the perspective of a user of device 106 (e.g., data received from a camera of device 106, and/or the like) can be utilized to render, for display by device 106, an AR environment including the portion of the physical real-world environment from the perspective of the user of device 106, and elements 204, 206, and/or 208.
  • the virtual element(s) can include one or more virtual depictions of stationary object(s) included in the portion(s).
  • element 204 can virtually depict object 202.
  • Users of devices 102, 104, and/or 106 can align one or more of the virtual element(s) with the stationary object(s) (e.g., by manipulating devices 102, 104, and/or 106 to move elements 204, 206, 208, and/or the like), for example, in response to a prompt (e.g., provided by elements 206, 208, and/or the like).
  • the computing system can receive (e.g., from device 102, and/or the like) data generated by user input aligning element 204 with object 202 within the AR environment including the portion of the physical real-world environment from the perspective of the user of device 102.
  • the computing system can receive (e.g., from device 104, and/or the like) data generated by user input aligning element 204 with object 202 within the AR environment including the portion of the physical real-world environment from the perspective of the user of device 104; and/or the computing system can receive (e.g., from device 106, and/or the like) data generated by user input aligning element 204 with object 202 within the AR environment including the portion of the physical real-world environment from the perspective of the user of device 106.
  • the computing system can determine the locations of devices 102, 104, and/or 106 by comparing the data generated by the user input aligning the one or more virtual element(s) (e.g., element 204, and/or the like) with stationary object(s) (e.g., object 202, and/or the like) within the AR environments including the portions of the physical real-world environment.
  • the computing system can determine the locations of devices 102, 104, and/or 106 by comparing data (e.g., received from devices 102, 104, 106, and/or the like) generated by user input aligning element 204 with object 202 within the AR environment including the portion of the physical real-world environment from the perspective of the user of device 102, the AR environment including the portion of the physical real-world environment from the perspective of the user of device 104, and/or the AR environment including the portion of the physical real-world environment from the perspective of the user of device 106.
  • A computing system (e.g., devices 102, 104, and/or 106, system 110, and/or the like) can generate an AR environment comprising scene 300 for display (e.g., via display(s) 116, and/or the like) by devices 102, 104, and/or 106.
  • Scene 300 can include a portion of a physical real-world environment (e.g., a room, a field, and/or the like) in which devices 102, 104, and/or 106 can be located. Accordingly, scene 300 can include one or more physical objects.
  • scene 300 can include object 302 (e.g., a marker, and/or the like) and object 304 (e.g., a coin, and/or the like).
  • Scene 300 can also include virtual elements 306, 308, 310, 312, and/or 314.
  • the computing system can receive data describing the physical real-world environment (e.g., data generated by sensor(s) 118, and/or the like) from devices 102, 104, and/or 106; the data can be utilized to determine locations of devices 102, 104, and/or 106 relative to one another in the physical real-world environment; and at least a portion of the AR environment (e.g., element 314, and/or the like) can be generated based on the locations.
  • one or more of the generated element(s) can depict one or more of the locations.
  • scene 300 can be displayed by device 102, and element 314 can point toward the location of device 104 in the physical real-world environment.
  • the computing system can receive data describing the physical real-world environment (e.g., data generated by sensor(s) 118, and/or the like) from cameras of devices 102, 104, and/or 106.
  • the cameras of devices 102, 104, and/or 106 can capture portions of the physical real-world environment (e.g., from the perspectives of their respective users, and/or the like).
  • data received from a camera of device 102 can represent a portion of the physical real-world environment including objects 302 and/or 304 (e.g., from the perspective of a user of device 102, and/or the like).
  • data received from a camera of device 104 can represent a portion of the physical real-world environment including objects 302 and/or 304 (e.g., from the perspective of a user of device 104, and/or the like); and/or data received from a camera of device 106 can represent a portion of the physical real-world environment including objects 302 and/or 304 (e.g., from the perspective of a user of device 106, and/or the like).
  • the portions of the physical real-world environment captured by the cameras of devices 102, 104, and/or 106 can include common features (e.g., objects 302, 304, and/or the like), and such features can be captured from different perspectives (e.g., the cameras of devices 102, 104, and/or 106 can generate data representing objects 302 and/or 304 from different positions (e.g., positions around a tabletop surface on which objects 302 and/or 304 are resting, and/or the like) within the physical real-world environment, and/or the like).
  • the computing system can determine the locations of devices 102, 104, and/or 106 by comparing the data representing the portions of the physical real-world environment (e.g., the various portions including objects 302, 304, and/or the like) to determine a difference in vantage point (e.g., orientation, perspective, and/or the like) between devices 102, 104, and/or 106 with respect to stationary physical object(s) included in the portions (e.g., objects 302, 304, and/or the like).
  • the computing system can determine the locations of devices 102, 104, and/or 106 by comparing the data received from the cameras of devices 102, 104, and/or 106 to determine a difference in vantage point (e.g., orientation, perspective, and/or the like) between devices 102, 104, and/or 106 with respect to objects 302 and/or 304.
  • the computing system can determine the locations of devices 102, 104, and/or 106 by determining, comparing, and/or the like one or more positions, viewing angles, and/or the like of devices 102, 104, and/or 106 (e.g., using one or more camera projections, and/or the like) relative to a single, common, shared, and/or the like coordinate space for the AR environment (e.g., based on objects 302, 304, and/or the like).
  • the computing system can utilize the data representing the portions of the physical real-world environment to render, for display by devices 102, 104, and/or 106, one or more AR environments including the portions of the physical real-world environment (e.g., the portions including objects 302, 304, and/or the like) and one or more virtual elements (e.g., elements 306, 308, 310, 312, and/or the like).
  • data representing a portion of the physical real-world environment from the perspective of a user of device 102 can be utilized to render, for display by device 102, an AR environment including the portion of the physical real-world environment from the perspective of the user of device 102, and elements 306, 308, 310, and/or 312.
  • data representing a portion of the physical real-world environment from the perspective of a user of device 104 can be utilized to render, for display by device 104, an AR environment including the portion of the physical real-world environment from the perspective of the user of device 104, and elements 306, 308, 310, and/or 312; and/or data representing a portion of the physical real-world environment from the perspective of a user of device 106 (e.g., data received from a camera of device 106, and/or the like) can be utilized to render, for display by device 106, an AR environment including the portion of the physical real-world environment from the perspective of the user of device 106, and elements 306, 308, 310, and/or 312.
  • Users of devices 102, 104, and/or 106 can align one or more of the virtual element(s) with the stationary object(s) (e.g., by manipulating devices 102, 104, and/or 106 to move elements 306, 308, 310, 312, and/or the like), for example, in response to a prompt (e.g., provided by elements 310, 312, and/or the like).
  • elements 306 and 308 can be distinguishable from one another (e.g., be different colors, have different shapes, and/or the like), users of devices 102, 104, and/or 106 can agree to align element 306 with object 302 and/or element 308 with object 304, and the computing system can receive (e.g., from device 102, and/or the like) data generated by user input aligning element 306 with object 302 and/or element 308 with object 304 within the AR environment including the portion of the physical real-world environment from the perspective of the user of device 102.
  • the computing system can receive (e.g., from device 104, and/or the like) data generated by user input aligning element 306 with object 302 and/or element 308 with object 304 within the AR environment including the portion of the physical real-world environment from the perspective of the user of device 104; and/or the computing system can receive (e.g., from device 106, and/or the like) data generated by user input aligning element 306 with object 302 and/or element 308 with object 304 within the AR environment including the portion of the physical real-world environment from the perspective of the user of device 106.
  • the computing system can determine the locations of devices 102, 104, and/or 106 by comparing the data generated by the user input aligning the one or more virtual element(s) (e.g., elements 306, 308, and/or the like) with stationary object(s) (e.g., objects 302, 304, and/or the like) within the AR environments including the portions of the physical real-world environment.
  • the computing system can determine the locations of devices 102, 104, and/or 106 by comparing data (e.g., received from devices 102, 104, 106, and/or the like) generated by user input aligning element 306 with object 302 and/or element 308 with object 304 within the AR environment including the portion of the physical real-world environment from the perspective of the user of device 102, the AR environment including the portion of the physical real-world environment from the perspective of the user of device 104, and/or the AR environment including the portion of the physical real-world environment from the perspective of the user of device 106.
  • scene 400 can include a portion of a physical real-world environment in which devices 102, 104, and/or 106 can be located.
  • FIG. 4 illustrates a user operating device 102 in a physical real-world environment (e.g., a room, and/or the like) that includes object 402 (e.g., a cutting board, and/or the like).
  • a computing system can receive data describing the physical real-world environment (e.g., data generated by sensor(s) 118, and/or the like) from cameras of devices 102, 104, and/or 106.
  • the cameras of devices 102, 104, and/or 106 can capture portions of the physical real-world environment (e.g., from the perspectives of their respective users, and/or the like).
  • data received from a camera of device 102 can represent a portion of the physical real-world environment including object 402 (e.g., from the perspective of a user of device 102, and/or the like).
  • data received from a camera of device 104 can represent a portion of the physical real-world environment including object 402 (e.g., from the perspective of a user of device 104, and/or the like); and/or data received from a camera of device 106 can represent a portion of the physical real-world environment including object 402 (e.g., from the perspective of a user of device 106, and/or the like).
  • the portions of the physical real-world environment captured by the cameras of devices 102, 104, and/or 106 can include common features (e.g., object 402, and/or the like), such features can be captured from different perspectives (e.g., the cameras of devices 102, 104, and/or 106 can generate data representing object 402 from different positions (e.g., positions from a floor surface on which object 402 is resting, and/or the like) within the physical real-world environment, and/or the like).
  • the computing system can utilize the data representing the portions of the physical real-world environment to render, for display by devices 102, 104, and/or 106, one or more images including the portions of the physical real-world environment (e.g., the portions including object 402, and/or the like).
  • For example, data representing a portion of the physical real-world environment from the perspective of a user of device 102 (e.g., data received from a camera of device 102, and/or the like) can be utilized to render, for display by device 102, image 404 of the portion of the physical real-world environment from the perspective of the user of device 102, which can include imagery 406 of object 402.
  • data representing a portion of the physical real-world environment from the perspective of a user of device 104 can be utilized to render, for display by device 104, an image of the portion of the physical real-world environment from the perspective of the user of device 104, which can include imagery of object 402; and/or data representing a portion of the physical real-world environment from the perspective of a user of device 106 (e.g., data received from a camera of device 106, and/or the like) can be utilized to render, for display by device 106, an image of the portion of the physical real-world environment from the perspective of the user of device 106, which can include imagery of object 402.
  • Users of devices 102, 104, and/or 106 can select (e.g., by manipulating devices 102, 104, 106, and/or the like) one or more portions (e.g., corners, and/or the like) of one or more objects (e.g., stationary objects, and/or the like) within the image(s), for example, in response to a prompt, an agreement amongst the users, and/or the like.
  • the computing system can receive (e.g., from device 102, and/or the like) data generated by user input selecting one or more portions of imagery 406 of object 402 within image 404.
  • the computing system can receive (e.g., from device 104, and/or the like) data generated by user input selecting the portion(s) of imagery of object 402 within the image of the portion of the physical real-world environment from the perspective of the user of device 104; and/or the computing system can receive (e.g., from device 106, and/or the like) data generated by user input selecting the portion(s) of imagery of object 402 within the image of the portion of the physical real-world environment from the perspective of the user of device 106.
  • In such embodiments, the computing system can determine the locations of devices 102, 104, and/or 106 by comparing the data generated by the user input selecting the portion(s) of the object(s) (e.g., object 402, and/or the like) within the image(s).
  • the computing system can determine the locations of devices 102, 104, and/or 106 by comparing data (e.g., received from devices 102, 104, 106, and/or the like) generated by user input selecting the portion(s) of imagery 406 of object 402 within image 404, the portion(s) of the imagery of object 402 within the image of the portion of the physical real- world environment from the perspective of the user of device 104, and/or the portion(s) of the imagery of object 402 within the image of the portion of the physical real-world environment from the perspective of the user of device 106.
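  • One way such corner selections could be turned into relative device locations is sketched below: each device solves a perspective-n-point problem against an assumed physical size for object 402 (the cutting board), and the per-device poses are then composed. The corner ordering, board dimensions, and camera intrinsics here are assumptions for illustration, not values from the disclosure.

```python
import cv2
import numpy as np

# Assumed physical corner coordinates of the cutting board (metres), in a
# board-centred frame; the user-selected pixel corners must follow the
# same ordering.
BOARD_CORNERS_3D = np.array([[0.0, 0.0, 0.0],
                             [0.4, 0.0, 0.0],
                             [0.4, 0.3, 0.0],
                             [0.0, 0.3, 0.0]], dtype=np.float32)

def device_pose_from_corners(pixel_corners, camera_matrix, dist_coeffs):
    """Return a 4x4 transform from the board frame to the device's camera frame."""
    ok, rvec, tvec = cv2.solvePnP(BOARD_CORNERS_3D,
                                  np.asarray(pixel_corners, dtype=np.float32),
                                  camera_matrix, dist_coeffs)
    if not ok:
        raise ValueError("pose estimation failed")
    rotation, _ = cv2.Rodrigues(rvec)
    pose = np.eye(4)
    pose[:3, :3] = rotation
    pose[:3, 3] = tvec.ravel()
    return pose

def relative_pose(pose_device_a, pose_device_b):
    """Pose of device B's camera expressed in device A's camera frame."""
    return pose_device_a @ np.linalg.inv(pose_device_b)
```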
  • scene 500 can include a portion of a physical real-world environment in which devices 102, 104, and/or 106 can be located.
  • FIG. 5 illustrates a user operating device 104 in a physical real-world environment (e.g., a room, and/or the like).
  • A computing system (e.g., devices 102, 104, and/or 106, system 110, and/or the like) can generate an image (e.g., of a quick response (QR) code, and/or the like) for display by device 102. For example, image 506 can be generated for display by device 102.
  • the computing system can receive data describing the physical real-world environment (e.g., data generated by sensor(s) 118, and/or the like) from cameras of devices 102, 104, and/or 106.
  • the cameras of devices 102, 104, and/or 106 can capture portions of the physical real-world environment (e.g., from the perspectives of their respective users, and/or the like).
  • data received from a camera of device 104 can represent a portion of the physical real-world environment (e.g., from the perspective of a user of device 104, and/or the like) including device 102 displaying image 506.
  • the computing system can utilize the data representing the portions of the physical real-world environment to render, for display by devices 102, 104, and/or 106, one or more images including the portions of the physical real-world environment.
  • the computing system can determine the locations of devices 102, 104, and/or 106 by utilizing data representing one or more portions of the physical real-world environment including image 506 being displayed by device 102 to determine an orientation of image 506 relative to devices 102, 104, and/or 106 within the physical real-world environment.
  • the computing system can determine the locations of devices 102 and/or 104 by utilizing the data received from the camera of device 104 including imagery 504 of image 506 being displayed by device 102 to determine an orientation of image 506 (e.g., of device 102, and/or the like) relative to device 104 within the physical real-world environment.
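  • A minimal sketch of that determination, assuming image 506 is rendered at a known physical size on device 102's screen and that OpenCV is available on the receiving device, is shown below; the corner ordering and size constant are illustrative assumptions.

```python
import cv2
import numpy as np

QR_SIDE_M = 0.05  # assumed physical side length of image 506 on the screen

# 3D corners of the displayed QR code in a frame attached to device 102's screen.
QR_CORNERS_3D = np.array([[0.0, 0.0, 0.0],
                          [QR_SIDE_M, 0.0, 0.0],
                          [QR_SIDE_M, QR_SIDE_M, 0.0],
                          [0.0, QR_SIDE_M, 0.0]], dtype=np.float32)

def pose_of_device_102(frame_from_device_104, camera_matrix, dist_coeffs):
    """Pose of the displayed QR code (and hence of device 102's screen)
    in device 104's camera frame, or None if no code is found."""
    found, corners = cv2.QRCodeDetector().detect(frame_from_device_104)
    if not found:
        return None
    ok, rvec, tvec = cv2.solvePnP(QR_CORNERS_3D,
                                  corners.reshape(4, 2).astype(np.float32),
                                  camera_matrix, dist_coeffs)
    if not ok:
        return None
    rotation, _ = cv2.Rodrigues(rvec)  # orientation of device 102 relative to 104
    return rotation, tvec.ravel()      # tvec: position of the code in 104's frame
```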
  • FIG. 6 depicts an example method according to example embodiments of the present disclosure.
  • A computing system (e.g., devices 102, 104, and/or 106, system 110, and/or the like) can receive, from at least one of multiple computing devices, sensor data describing a physical real-world environment in which the computing devices are located. For example, the computing system can receive, from devices 102, 104, and/or 106, data (e.g., generated by sensor(s) 118, and/or the like) describing a physical real-world environment in which devices 102, 104, and/or 106 are located (e.g., a physical real-world environment that includes object 202, and/or the like).
  • the computing system can determine, based on the sensor data, locations of the computing devices relative to one another in the physical real-world environment. For example, the computing system can determine, based on the data received from devices 102, 104, and/or 106, locations of devices 102, 104, and/or 106 relative to one another in the physical real- world environment in which devices 102, 104, and/or 106 are located (e.g., the physical real- world environment that includes object 202, and/or the like).
  • the computing system can generate, based on the locations and for display by at least one of the computing devices, an AR environment including at least a portion of the physical real-world environment. For example, the computing system can generate, based on the locations of devices 102, 104, and/or 106 and for display by devices 102, 104, and/or 106, an AR environment that includes object 202, element 210, and/or the like.
  • FIG. 7 depicts an example scene according to example embodiments of the present disclosure.
  • A computing system (e.g., devices 102, 104, and/or 106, system 110, and/or the like) can generate an AR environment comprising scene 700 for display (e.g., via display(s) 116, and/or the like) by device 102.
  • Scene 700 can include a portion of a physical real-world environment (e.g., a room, and/or the like) in which device 102 can be located. Accordingly, scene 700 can include one or more physical objects.
  • scene 700 can include object 702 (e.g., a tabletop surface, and/or the like), object 704 (e.g., a wall surface, and/or the like), and/or object 706 (e.g., an electrical outlet, and/or the like).
  • Scene 700 can also include virtual element 708 (e.g., a virtual game board, and/or the like) and/or virtual element 710 (e.g., a virtual cube, and/or the like).
  • a user can move (e.g., by manipulating device 102, and/or the like) one or more virtual elements from a first location within the AR environment to a second location within the AR environment.
  • the computing system can receive data generated by user input moving a virtual element (e.g., a targeting reticle, and/or the like) from location 714 to location 712.
  • the computing system can utilize the data generated by the user input and a physics-based model of the AR environment to adjust one or more aspects (e.g., positions, rotations, scales, colors, textures, velocities, accelerations, and/or the like) of one or more virtual elements within the AR environment.
  • a user of device 102 can be playing a game in an AR environment comprising scene 700 that involves moving a virtual element (e.g., the targeting reticle, and/or the like) around element 708 (e.g., the game board, and/or the like) such that it aligns with element 710 (e.g., the cube, and/or the like).
  • one or more aspects of one or more virtual elements can be adjusted based on a physics-based model of the AR environment comprising scene 700, for example, a model that takes into account one or more of the virtual element(s) (e.g., the targeting reticle, elements 708 and/or 710, and/or the like) and/or one or more physical objects in the physical real-world environment (e.g., objects 702, 704, and/or 706, and/or the like).
  • the aspect(s) can be adjusted based on a distance between the first location and the second location (e.g., a distance between locations 712 and 714, and/or the like). Additionally or alternatively, the aspect(s) can be adjusted based on a velocity at which the virtual element(s) were moved (e.g., how fast the user moved the virtual element (e.g., the targeting reticle, and/or the like) from location 714 to location 712, and/or the like).
  • the physics-based model can be based on one or more locations and/or one or more dimensions of the virtual element(s) (e.g., the targeting reticle, elements 708 and/or 710, and/or the like). Additionally or alternatively, the physics-based model can be based on one or more locations and/or one or more dimensions of one or more physical objects in the physical real-world environment (e.g., objects 702, 704, and/or 706, and/or the like).
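  • The sketch below illustrates one possible physics-based adjustment of the kind described above: the drag distance and speed determine the velocity imparted to a moved element, and the result is clamped to the extent of a physical surface such as object 702. The constants and names are invented for illustration and are not the disclosure's model.

```python
from dataclasses import dataclass

@dataclass
class VirtualElement:
    x: float
    y: float
    vx: float = 0.0
    vy: float = 0.0

def apply_move(element, start, end, move_duration_s, surface_bounds,
               impulse_gain=0.5, friction=0.9):
    """Adjust an element after the user drags it from `start` to `end`.

    The farther and faster the drag, the larger the velocity imparted to
    the element; positions are clamped to the physical surface bounds
    (min_x, min_y, max_x, max_y) so elements stay on, e.g., the tabletop.
    """
    dx, dy = end[0] - start[0], end[1] - start[1]
    element.x, element.y = end

    # Impart velocity proportional to drag distance divided by duration.
    if move_duration_s > 0:
        element.vx = impulse_gain * dx / move_duration_s
        element.vy = impulse_gain * dy / move_duration_s

    # One integration step with simple friction, then clamp to the surface.
    element.x += element.vx
    element.y += element.vy
    element.vx *= friction
    element.vy *= friction
    min_x, min_y, max_x, max_y = surface_bounds
    element.x = min(max(element.x, min_x), max_x)
    element.y = min(max(element.y, min_y), max_y)
    return element
```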
  • FIG. 8 depicts an example method according to example embodiments of the present disclosure.
  • A computing system (e.g., devices 102, 104, and/or 106, system 110, and/or the like) can generate, for display by a computing device, an AR environment comprising one or more virtual elements and at least a portion of a physical real-world environment.
  • the computing system can receive data generated by user input moving an element of the virtual element(s) from a first location within the AR environment to a second location within the AR environment.
  • the computing system can receive data generated by user input moving a virtual element (e.g., a targeting reticle, and/or the like) from location 714 to location 712.
  • the computing system can adjust, based on the data generated by the user input and a physics-based model of the AR environment, one or more aspects of the virtual element(s) within the AR environment.
  • For example, responsive to the user input moving the virtual element (e.g., moving the targeting reticle from location 714 to location 712, and/or the like), the computing system can adjust one or more aspects of one or more virtual elements (e.g., the targeting reticle, elements 708 and/or 710, and/or the like) within the AR environment based on the physics-based model.
  • FIG. 9 depicts an example scene according to example embodiments of the present disclosure.
  • A computing system (e.g., devices 102, 104, and/or 106, system 110, and/or the like) can generate an AR environment comprising scene 900 for display (e.g., via display(s) 116, and/or the like) by device 102.
  • Scene 900 can include a portion of a physical real-world environment (e.g., a room, and/or the like) in which device 102 can be located. Accordingly, scene 900 can include one or more physical objects.
  • scene 900 can include object 902 (e.g., a tabletop surface, and/or the like), object 904 (e.g., a chair, and/or the like), and/or object 906 (e.g., a computer monitor, and/or the like).
  • Scene 900 can also include virtual elements 910 (e.g., virtual cubes, and/or the like).
  • the computing system can determine an area of interest of the AR environment and can generate, for display by the computing device within the AR environment, one or more virtual elements identifying the area of interest.
  • the computing system can determine that the area surrounding object 902 (e.g., the tabletop surface, and/or the like) is an area of interest for the AR environment comprising scene 900, and the computing system can generate, for display by device 102 within the AR environment comprising scene 900, virtual element 908 (e.g., a virtual border delineating the tabletop surface, and/or the like) identifying the area of interest.
  • the computing system can determine the area of interest by determining one or more locations of one or more virtual elements within the AR environment. For example, the computing system can determine that the area surrounding object 902 (e.g., the tabletop surface, and/or the like) is an area of interest for the AR environment comprising scene 900 by determining one or more locations of elements 910 (e.g., the virtual cubes, and/or the like).
  • the virtual element(s) identifying the area of interest can define a space that includes the location(s) of the virtual element(s) within the AR environment.
  • element 908 can define a space (e.g., a portion of the tabletop surface where the virtual cubes are located, and/or the like) that includes the location(s) of elements 910 (e.g., the virtual cubes, and/or the like).
  • the computing system can determine the area of interest by identifying a physical surface located in the physical real-world environment. For example, the computing system can determine that the area surrounding object 902 (e.g., the tabletop surface, and/or the like) is an area of interest for the AR environment comprising scene 900 by identifying object 902.
  • the virtual element(s) identifying the area of interest can define a space that includes at least a portion of the physical surface.
  • element 908 can define a space that includes at least a portion of object 902 (e.g., a portion of the tabletop surface where the virtual cubes are located, and/or the like).
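  • A simple way such an area of interest could be computed, assuming the virtual element positions and (optionally) the surface extent are known in a shared 2D frame, is sketched below; the margin value and function name are illustrative.

```python
def area_of_interest(element_positions, surface_bounds=None, margin=0.1):
    """Return (min_x, min_y, max_x, max_y) of a border like element 908.

    The box encloses the given virtual element positions (e.g., the cubes,
    elements 910) plus a margin; if the extent of a physical surface such
    as object 902 is known, the box is clipped to it.
    """
    xs = [p[0] for p in element_positions]
    ys = [p[1] for p in element_positions]
    box = [min(xs) - margin, min(ys) - margin,
           max(xs) + margin, max(ys) + margin]
    if surface_bounds is not None:
        sx0, sy0, sx1, sy1 = surface_bounds
        box = [max(box[0], sx0), max(box[1], sy0),
               min(box[2], sx1), min(box[3], sy1)]
    return tuple(box)

# A border element for three cubes resting on a 1.2 m x 0.8 m tabletop:
border = area_of_interest([(0.2, 0.3), (0.4, 0.35), (0.3, 0.5)],
                          surface_bounds=(0.0, 0.0, 1.2, 0.8))
```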
  • FIG. 10 depicts an example method according to example embodiments of the present disclosure.
  • A computing system (e.g., devices 102, 104, and/or 106, system 110, and/or the like) can generate, for display by a computing device, an AR environment comprising one or more virtual elements and at least a portion of a physical real-world environment.
  • the computing system can determine an area of interest of the AR environment.
  • the computing system can determine that the area surrounding object 902 (e.g., the tabletop surface, and/or the like) is an area of interest for the AR environment comprising scene 900.
  • the computing system can generate, for display by the computing device within the AR environment, one or more virtual elements identifying the area of interest.
  • the computing system can generate, for display by device 102 within the AR environment comprising scene 900, virtual element 908 (e.g., a virtual border delineating the tabletop surface, and/or the like) identifying the area of interest.
  • FIGs. 11 and 12 depict example scenes according to example embodiments of the present disclosure.
  • A computing system (e.g., devices 102, 104, and/or 106, system 110, and/or the like) can generate an AR environment comprising scene 1100 for display (e.g., via display(s) 116, and/or the like) by device 102.
  • Scene 1100 can include a portion of a physical real-world environment (e.g., a room, and/or the like) in which device 102 can be located. Accordingly, scene 1100 can include one or more physical objects.
  • scene 1100 can include object 1102 (e.g., a wall surface, and/or the like), object 1104 (e.g., a floor surface, and/or the like), and/or object 1106 (e.g., a tabletop surface, and/or the like).
  • the computing system can identify one or more physical surfaces located in the physical real-world environment. For example, the computing system can identify (e.g., based on data generated by sensor(s) 118, and/or the like) objects 1102, 1104, and/or 1106.
  • a physical surface of the identified physical surface(s) can be selected as a surface defining at least in part a space for which one or more virtual elements can be scaled.
  • the computing system can select, from amongst objects 1102, 1104, and/or 1106, object 1106 (e.g., the tabletop surface, and/or the like) as a surface defining at least in part a space for which one or more virtual elements can be scaled.
  • a user can select (e.g., by manipulating device 102, and/or the like) the surface defining in part the space for which the virtual element(s) can be scaled.
  • the computing system can receive data generated by user input aligning virtual element 1108 (e.g., a virtual surface, plane, and/or the like) with object 1106 (e.g., the tabletop surface, and/or the like).
  • the computing system can generate one or more virtual elements scaled to fit in the space defined at least in part by the physical surface (e.g., the selected physical surface, and/or the like).
  • the computing system can generate (e.g., for display by device 102, and/or the like) an AR environment comprising scene 1200.
  • scene 1200 can include objects 1102, 1104, and/or 1106.
  • Scene 1200 can also include virtual element 1202 (e.g., a virtual game board, and/or the like) and/or virtual elements 1204 (e.g., virtual bricks, and/or the like).
  • Elements 1202 and/or 1204 can be scaled to fit a space defined at least in part by object 1106 (e.g., the tabletop surface, and/or the like).
  • the virtual element(s) scaled to fit in the space can represent a data set (e.g., search results, and/or the like).
  • the computing system can generate the virtual element(s) scaled to fit in the space based on the size of the data set (e.g., to fill the available space, and/or the like).
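  • For instance, under the assumption that the data set is laid out as a grid of uniform tiles, the sketch below picks a column count and tile size so the results fill the available surface; the function name and gap value are illustrative only.

```python
import math

def layout_results(result_count, space_width, space_depth, gap=0.02):
    """Choose a grid and tile size so `result_count` virtual tiles
    (e.g., search results) fill a space of the given width and depth.

    Returns (columns, rows, tile_size); dimensions are in the same units
    as the space (e.g., metres of tabletop).
    """
    columns = max(1, math.ceil(math.sqrt(result_count * space_width / space_depth)))
    rows = math.ceil(result_count / columns)
    tile_size = min((space_width - (columns + 1) * gap) / columns,
                    (space_depth - (rows + 1) * gap) / rows)
    return columns, rows, tile_size

# 17 results laid out on a 1.2 m x 0.8 m tabletop:
cols, rows, size = layout_results(17, 1.2, 0.8)
```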
  • the virtual element(s) scaled to fit in the space can be generated such that they fit in a space defined at least in part by at least two of the identified surfaces.
  • the computing system can generate elements 1204 such that they fit in a space defined at least in part by objects 1106 and 1102 (e.g., a space defined by the tabletop surface and the wall, and/or the like).
  • the computing system can determine one or more dimensions (e.g., height, width, depth, and/or the like) of the space defined at least in part by the physical surface.
  • the computing system can determine one or more dimensions of the space defined at least in part by object 1106 (e.g., a space between the tabletop surface and the ceiling (or top of the wall), and/or the like).
  • the computing system can select, from amongst multiple different virtual elements, one or more of the virtual element(s) scaled to fit in the space based on the dimension(s).
  • the computing system can select (e.g., from amongst a set of possible virtual elements that includes the bricks, the Eiffel Tower, the Washington Monument, and/or the like) elements 1204 (e.g., the bricks, and/or the like) based on the dimension(s) of the space defined at least in part by object 1106 (e.g., because elements 1204 are size appropriate for such a space, and/or the like).
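  • A sketch of that selection step is shown below: candidate elements are kept only if their nominal dimensions fit the measured space, possibly after a bounded uniform scale-down. The catalogue entries, dimensions, and threshold are illustrative assumptions, not values from the disclosure.

```python
def select_elements(candidates, space, max_scale_down=10.0):
    """Pick candidate virtual elements that suit the available space.

    candidates: mapping of name -> (width, height, depth) in metres.
    space: (width, height, depth) of the space defined by, e.g., object 1106.
    A candidate is kept if it fits as-is or can be shrunk by no more than
    `max_scale_down` while keeping its proportions.
    """
    chosen = {}
    for name, dims in candidates.items():
        scale = max(d / s for d, s in zip(dims, space))
        if scale <= max_scale_down:
            chosen[name] = min(1.0, 1.0 / scale)  # uniform scale to apply
    return chosen

catalogue = {
    "brick": (0.2, 0.06, 0.1),
    "eiffel_tower": (125.0, 324.0, 125.0),
    "washington_monument": (16.8, 169.0, 16.8),
}
# A space above the tabletop roughly 1.2 m x 1.0 m x 0.8 m keeps only the bricks:
print(select_elements(catalogue, (1.2, 1.0, 0.8)))
```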
  • the physical real-world environment can include multiple computing devices, and determining the dimension(s) can include determining a distance between the devices (e.g., based on sensor data from one or more of the devices, and/or the like).
  • For example, the physical real-world environment can include devices 102, 104, and/or 106; the computing system can determine a distance between device 102 and devices 104 and/or 106 (e.g., based on data generated by sensor(s) 118, and/or the like); and the computing system can determine one or more of the dimension(s) of the space defined at least in part by object 1106 based on such distance(s).
  • FIG. 13 depicts an example method according to example embodiments of the present disclosure. Referring to FIG. 13, at (1302), a computing system can generate, for display by a computing device, an AR environment comprising one or more virtual elements and at least a portion of a physical real-world environment.
  • a computing system can generate an AR environment comprising scene 1200 for display (e.g., via display(s) 116, and/or the like) by device 102.
  • the computing system can identify a physical surface located in the physical real-world environment.
  • the computing system can identify object 1106 (e.g., the tabletop surface, and/or the like).
  • the computing system can generate, for display by the computing device within the AR environment, one or more virtual elements scaled to fit in a space defined at least in part by the physical surface located in the physical real-world environment.
  • the computing system can generate elements 1202, 1204, and/or the like.
  • FIG. 14 depicts an example scene according to example embodiments of the present disclosure.
  • A computing system (e.g., devices 102, 104, and/or 106, system 110, and/or the like) can generate an AR environment comprising scene 1400 for display (e.g., via display(s) 116, and/or the like) by device 102.
  • Scene 1400 can include a portion of a physical real-world environment (e.g., a room, and/or the like) in which device 102 can be located. Accordingly, scene 1400 can include one or more physical objects.
  • scene 1400 can include object 1402 (e.g., a chair, and/or the like), object 1404 (e.g., a computer monitor, and/or the like), object 1406 (e.g., a tabletop surface, and/or the like), and/or object 1408 (e.g., a piece of paper, and/or the like).
  • scene 1400 can also include virtual elements 1410 (e.g., virtual paper weights, and/or the like).
  • the computing system can identify one or more locations within the AR environment for locating one or more advertisements, and the computing system can generate one or more virtual elements comprising the advertisement(s) for display by the computing device at the location(s) within the AR environment.
  • the computing system can identify locations corresponding to objects 1402, 1404, and/or 1408, and/or elements 1410 for locating one or more advertisements.
  • the computing system can generate (e.g., for display by device 102 within the AR environment comprising scene 1400, and/or the like) virtual elements 1410, 1412, 1414, and/or 1418 comprising the advertisement(s).
  • In some embodiments, the computing system can select one or more of the advertisement(s) from amongst multiple different possible advertisements.
  • one or more of the advertisement(s) can be selected based on: a geographic location in the physical real-world environment at which the computing device is located (e.g., device 102 can be located in a bar, and an advertisement related to the bar, one or more of its products or services, and/or the like can be selected); a search history associated with the computing device (e.g., device 102 can have been utilized to search for a particular product or service, and an advertisement related to the product or service, and/or the like can be selected); a context of the AR environment (e.g., device 102 can be located in a bar where users are utilizing the AR environment comprising scene 1400 to play a trivia game, and an advertisement related to the bar, a trivia question, and/or the like can be selected); user performance within the AR environment (e.g., an advertisement for a discount at the bar can be selected based on the user's performance in the trivia game, and/or the like); one or more objects depicted by one or more virtual elements of the AR environment (e.g., the AR environment comprising scene 1400 can include elements 1410 (e.g., virtual paper weights, and/or the like), and an advertisement related to elements 1410 (e.g., related to paper weights, and/or the like) can be selected); one or more physical objects in the physical real-world environment (e.g., the physical real-world environment in which device 102 is located can include object 1404 (e.g., a computer monitor, and/or the like), and an advertisement related to object 1404 (e.g., related to computer monitors, and/or the like) can be selected); and/or text in the physical real-world environment recognized by a computing device (e.g., text 1416 identifying the brand of object 1404 can be recognized, and an advertisement related to that brand, and/or the like can be selected).
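  • One possible (purely illustrative) way to rank candidate advertisements against such signals is sketched below; the field names, weights, and scoring scheme are assumptions rather than anything specified by the disclosure.

```python
def score_advertisement(ad, context):
    """Score a candidate advertisement against context signals like those
    listed above; all field names and weights are illustrative placeholders.

    ad: dict with keys such as 'keywords' and 'venue'.
    context: dict with keys such as 'venue', 'search_history',
             'recognized_objects', 'recognized_text'.
    """
    score = 0.0
    if ad.get("venue") and ad["venue"] == context.get("venue"):
        score += 2.0  # e.g., an ad for the bar the devices are located in
    keywords = set(ad.get("keywords", []))
    score += 1.0 * len(keywords & set(context.get("search_history", [])))
    score += 1.5 * len(keywords & set(context.get("recognized_objects", [])))
    score += 1.5 * len(keywords & set(context.get("recognized_text", [])))
    return score

def select_advertisement(candidate_ads, context):
    """Return the highest-scoring candidate for the given context."""
    return max(candidate_ads, key=lambda ad: score_advertisement(ad, context))
```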
  • the computing system can identify the location(s) by identifying a physical object in the physical real-world environment.
  • the computing system can identify object 1404 (e.g., the monitor, and/or the like).
  • the virtual element(s) comprising the advertisement(s) can include one or more virtual elements that depict at least a portion of object 1404 (e.g., a virtual monitor, and/or the like), outline at least a portion of object 1404 (e.g., outline the monitor, and/or the like), highlight at least a portion of object 1404 (e.g., cover at least a portion of the monitor with a transparent layer, and/or the like), and/or identify at least a portion of object 1404 (e.g., draw a box around at least a portion of the monitor, and/or the like).
  • the computing system can identify the location(s) by identifying text in the physical real-world environment recognized by a computing device.
  • the computing system can identify text 1416 (e.g., the text identifying the brand of the monitor, and/or the like).
  • the virtual element(s) (e.g., element 1418, and/or the like) comprising the advertisement(s) can include one or more virtual elements that depict at least a portion of text 1416 (e.g., virtual text, and/or the like), outline at least a portion of text 1416 (e.g., outline the text identifying the brand of the monitor, and/or the like), highlight at least a portion of text 1416 (e.g., cover at least a portion of the text identifying the brand of the monitor with a transparent layer, and/or the like), and/or identify at least a portion of text 1416 (e.g., draw a box around at least a portion of the text identifying the brand of the monitor, and/or the like).
  • generating the virtual element(s) comprising the advertisement(s) can include modifying one or more dimensions, colors, finishes, lightings, and/or the like of at least one of the advertisement(s).
  • For example, the physical real-world environment can include object 1402 (e.g., the chair, and/or the like); object 1402 can be identified as a location for an advertisement; an advertisement related to object 1402 (e.g., related to chairs, and/or the like) can be selected; and generating the virtual element(s) comprising the selected advertisement (e.g., elements 1414) can include modifying one or more dimensions of the advertisement (e.g., so that the advertisement will fit on a surface of object 1402, and/or the like), colors of the advertisement (e.g., so that the advertisement will be visible on the surface of object 1402, and/or the like), finishes (e.g., matte, glossy, and/or the like) of the advertisement (e.g., so that the advertisement will be aesthetically accentuated on the surface of object 1402, and/or the like), and/or lightings of the advertisement (e.g., so that the advertisement will appear consistent with the lighting of the physical real-world environment, and/or the like).
  • a user can (e.g., by manipulating device 102, and/or the like) invoke (e.g., select, interact with, and/or the like) one or more of the virtual element(s) comprising the advertisement(s), and the computing system can receive data generated by the user input. Responsive to receiving the data generated by the user input, the computing system can generate (e.g., for display by device 102 within the AR environment comprising scene 1400, and/or the like) one or more virtual elements comprising content associated with one or more of the advertisement(s) associated with the virtual element(s) invoked by the user.
  • a user can invoke a virtual element (e.g., an element of elements 1414, and/or the like) located on a surface of object 1402 (e.g., the chair, and/or the like), and one or more virtual elements (e.g., one or more other elements of elements 1414, and/or the like) comprising an advertisement for object 1402 (e.g., including more details about the chair, and/or the like) can be generated for display by device 102 within the AR environment comprising scene 1400 (e.g., alongside object 1402, on a surface of object 1402, and/or the like).
  • the computing system can direct an application distinct from an application providing the AR environment to content associated with one or more of the advertisement(s) associated with the virtual element(s) invoked by the user, for display by the computing device within the application distinct from the application providing the AR environment.
  • device 102 can include (e.g., execute, and/or the like) one or more applications (e.g., a web browser, an application associated with a merchant, service provider, and/or the like) distinct from an application providing the AR environment comprising scene 1400, and responsive to the user invoking the virtual element (e.g., the element of elements 1414, and/or the like) located on the surface of object 1402, one or more of such application(s) can be directed (e.g., via an application programming interface (API) of such application(s), an advertisement identifier, a uniform resource locator (URL), and/or the like) to content for display by device 102 within such application(s) (e.g., enabling the user to learn more about object 1402, purchase object 1402, and/or the like).
  • In some embodiments, a particular identified location within the AR environment can be displayed within the AR environment by multiple different computing devices (e.g., devices utilized by different users located in the physical real-world environment, and/or the like).
  • virtual elements depicting different advertisements can be generated for display at the particular location by the different computing devices.
  • For example, the physical real-world environment can include devices 102 and 104; one or more virtual elements (e.g., element 1412, and/or the like) depicting a first advertisement (e.g., selected based on a search history associated with device 102, and/or the like) can be generated for display by device 102 at a particular location within the AR environment comprising scene 1400; and one or more virtual elements depicting a second advertisement (e.g., selected based on a search history associated with device 104, and/or the like) can be generated for display by device 104 at the same particular location.
  • FIG. 15 depicts an example method according to example embodiments of the present disclosure.
  • A computing system (e.g., devices 102, 104, and/or 106, system 110, and/or the like) can generate, for display by a computing device, an AR environment comprising one or more virtual elements and at least a portion of a physical real-world environment.
  • the computing system can identify one or more locations within the AR environment for locating one or more advertisements.
  • the computing system can identify locations corresponding to objects 1402, 1404, and/or 1408, and/or elements 1410 for locating one or more advertisements.
  • the computing system can generate, for display by the computing device at the location(s) within the AR environment, one or more virtual elements comprising the advertisement(s).
  • the computing system can generate (e.g., for display by device 102 within the AR environment comprising scene 1400, and/or the like) virtual elements 1410, 1412, 1414, and/or 1418 comprising the advertisement(s).
  • the functions and/or steps described herein can be embodied in computer-usable data and/or computer-executable instructions, executed by one or more computers and/or other devices to perform one or more functions described herein.
  • data and/or instructions include routines, programs, objects, components, data structures, or the like that perform particular tasks and/or implement particular data types when executed by one or more processors in a computer and/or other data-processing device.
  • the computer-executable instructions can be stored on a computer-readable medium such as a hard disk, optical disk, removable storage media, solid-state memory, random-access memory (RAM), or the like.
  • the functionality can be embodied in whole or in part in firmware and/or hardware equivalents, such as integrated circuits, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), or the like.
  • Particular data structures can be used to more effectively implement one or more aspects of the disclosure, and such data structures are contemplated to be within the scope of computer- executable instructions and/or computer-usable data described herein.
  • aspects described herein can be embodied as a method, system, apparatus, and/or one or more computer-readable media storing computer-executable instructions. Accordingly, aspects can take the form of an entirely hardware embodiment, an entirely software embodiment, an entirely firmware embodiment, and/or an embodiment combining software, hardware, and/or firmware aspects in any combination.
  • the various methods and acts can be operative across one or more computing devices and/or networks.
  • the functionality can be distributed in any manner or can be located in a single computing device (e.g., server, client computer, user device, or the like).
  • a computer-implemented method comprising:
  • an augmented reality (AR) environment comprising one or more virtual elements and at least a portion of a physical real-world environment
  • adjusting the one or more aspects comprises adjusting the one or more aspects based on a distance between the first location and the second location.
  • adjusting the one or more aspects comprises adjusting the one or more aspects based on a velocity at which the element was moved from the first location to the second location.
  • the computer-implemented method of embodiment 1, wherein the physics-based model is based at least in part on one or more of: one or more locations of one or more physical objects in the physical real-world environment, or one or more dimensions of one or more physical objects in the physical real-world environment.
  • a computer-implemented method comprising:
  • an augmented reality (AR) environment comprising one or more virtual elements and at least a portion of a physical real-world environment
  • determining the area of interest comprises determining one or more locations of one or more virtual elements within the AR environment.
  • the computer-implemented method of embodiment 6, wherein determining the area of interest comprises identifying a physical surface located in the physical real-world environment.
  • an augmented reality (AR) environment comprising one or more virtual elements and at least a portion of a physical real-world environment
  • the computer-implemented method of embodiment 11, comprising selecting, by the computing system, from amongst a plurality of different advertisements, and based on a geographic location in the physical real-world environment at which the computing device is located, at least one of the one or more advertisements.
  • the computer-implemented method of embodiment 11, comprising selecting, by the computing system, from amongst a plurality of different advertisements, and based on a search history associated with the computing device, at least one of the one or more advertisements.
  • the computer-implemented method of embodiment 11, comprising selecting, by the computing system, from amongst a plurality of different advertisements, and based on a context of the AR environment, at least one of the one or more advertisements.
  • the computer-implemented method of embodiment 11, comprising selecting, by the computing system, from amongst a plurality of different advertisements, and based on user performance within the AR environment, at least one of the one or more advertisements.
  • the computer-implemented method of embodiment 11, comprising selecting, by the computing system, from amongst a plurality of different advertisements, and based on one or more physical objects in the physical real-world environment, at least one of the one or more advertisements.
  • the computer-implemented method of embodiment 11, comprising selecting, by the computing system, from amongst a plurality of different advertisements, and based on text in the physical real-world environment recognized by the computing system, at least one of the one or more advertisements.
  • identifying the one or more locations comprises identifying a physical object in the physical real-world environment
  • generating the one or more virtual elements comprising the one or more advertisements comprises generating one or more virtual elements that one or more of depict at least a portion of the physical object, outline at least a portion of the physical object, highlight at least a portion of the physical object, or identify at least a portion of the physical object.
  • identifying the one or more locations comprises identifying text in the physical real-world environment recognized by the computing system.
  • generating the one or more virtual elements comprising the one or more advertisements comprises generating one or more virtual elements that one or more of depict at least a portion of the text, outline at least a portion of the text, highlight at least a portion of the text, or identify at least a portion of the text.
  • the computer-implemented method of embodiment 11, wherein generating the one or more virtual elements comprising the one or more advertisements comprises modifying one or more of:
  • the computer-implemented method of embodiment 11, comprising, responsive to receiving, from the computing device, data generated by user input invoking at least one of the one or more virtual elements comprising the one or more advertisements, generating, by the computing system and for display by the computing device within the AR environment, one or more virtual elements comprising content associated with at least one of the one or more advertisements associated with the at least one of the one or more virtual elements.
  • the computer-implemented method of embodiment 11, comprising, responsive to receiving, from the computing device, data generated by user input invoking at least one of the one or more virtual elements comprising the one or more advertisements, directing, by the computing system and for display by the computing device within an application distinct from an application providing the AR environment, the application distinct from the application providing the AR environment to content associated with at least one of the one or more advertisements associated with the at least one of the one or more virtual elements.
  • identifying the one or more locations within the AR environment for locating the one or more advertisements comprises identifying a particular location within the AR environment
  • generating the one or more virtual elements comprising the one or more advertisements comprises generating, for display by the computing device at the particular location within the AR environment, one or more virtual elements comprising a first advertisement; and the method comprises generating, by the computing system and for display by a different computing device at the particular location within the AR environment, one or more virtual elements comprising a second advertisement, the second advertisement being different from the first advertisement.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure is directed to methods and systems for generating augmented reality (AR) environments. In particular, a computing system can receive, from at least one of two computing devices, sensor data describing a physical real-world environment in which the two computing devices are located. The computing system can determine, based on the sensor data, locations of the two computing devices relative to one another in the physical real-world environment. The computing system can generate, based on the locations and for display by at least one of the two computing devices, an augmented reality (AR) environment comprising at least a portion of the physical real-world environment.

Description

METHODS AND SYSTEMS FOR GENERATING
AUGMENTED REALITY ENVIRONMENTS
PRIORITY CLAIM
[0001] This application claims priority to U.S. Patent Application Serial No. 62/599,432, filed December 15, 2017, and entitled “METHODS AND SYSTEMS FOR GENERATING AUGMENTED REALITY ENVIRONMENTS,” the disclosure of which is incorporated by reference herein in its entirety.
FIELD
[0002] The present disclosure relates generally to augmented reality (AR). More particularly, the present disclosure relates to methods and systems for generating AR environments.
BACKGROUND
[0003] Augmented reality (AR) can provide a live direct or indirect view of a physical real-world environment whose elements are “augmented” by computer-generated audio, video, graphics, haptics, and/or the like. AR can enhance a user’s perception of reality and can provide an enriched, immersive experience, and/or the like. AR can be used for entertainment, education, business, communication, data visualization, and/or the like.
SUMMARY
[0004] Aspects and advantages of embodiments of the present disclosure will be set forth in part in the following description, or can be learned from the description, or can be learned through practice of the embodiments.
[0005] One example aspect of the present disclosure is directed to a computer- implemented method. The method can include receiving, from at least one of two computing devices, sensor data describing a physical real-world environment in which the two computing devices are located. The method can further include determining, based on the sensor data, locations of the two computing devices relative to one another in the physical real-world environment. The method can further include generating, based on the locations and for display by at least one of the two computing devices, an augmented reality (AR) environment comprising at least a portion of the physical real-world environment. [0006] Another example aspect of the present disclosure is directed to another computer- implemented method. The method can include generating, for display by a computing device, an AR environment comprising one or more virtual elements and at least a portion of a physical real-world environment. The method can further include receiving, from the computing device, data generated by user input moving an element of the one or more virtual elements from a first location within the AR environment to a second location within the AR environment. The method can further include adjusting, based on the data and a physics- based model of the AR environment, one or more aspects of the one or more virtual elements within the AR environment.
[0007] Another example aspect of the present disclosure is directed to another computer- implemented method. The method can include generating, for display by a computing device, an AR environment comprising one or more virtual elements and at least a portion of a physical real-world environment. The method can further include determining an area of interest of the AR environment. The method can further include generating, for display by the computing device within the AR environment, one or more virtual elements identifying the area of interest.
[0008] Another example aspect of the present disclosure is directed to another computer- implemented method. The method can include generating, for display by a computing device, an AR environment comprising one or more virtual elements and at least a portion of a physical real-world environment. The method can further include identifying a physical surface located in the physical real-world environment. The method can further include generating, for display by the computing device within the AR environment, one or more virtual elements scaled to fit in a space defined at least in part by the physical surface located in the physical real-world environment.
[0009] Another example aspect of the present disclosure is directed to another computer- implemented method. The method can include generating, for display by a computing device, an AR environment comprising one or more virtual elements and at least a portion of a physical real-world environment. The method can further include identifying one or more locations within the AR environment for locating one or more advertisements. The method can further include generating, for display by the computing device at the one or more locations within the AR environment, one or more virtual elements comprising the one or more advertisements. [0010] Other aspects of the present disclosure are directed to various systems, apparatuses, non-transitory computer-readable media, user interfaces, and electronic devices.
[0011] These and other features, aspects, and advantages of various embodiments of the present disclosure will become better understood with reference to the following description and appended claims. The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate example embodiments of the present disclosure and, together with the description, serve to explain the related principles.
BRIEF DESCRIPTION OF THE DRAWINGS
[0012] Detailed discussion of embodiments directed to one of ordinary skill in the art is set forth in the specification, which makes reference to the appended figures, in which:
[0013] FIG. 1 depicts an example computing environment according to example embodiments of the present disclosure;
[0014] FIGs. 2, 3, 4, and 5 depict example scenes according to example embodiments of the present disclosure;
[0015] FIG. 6 depicts an example method according to example embodiments of the present disclosure;
[0016] FIG. 7 depicts an example scene according to example embodiments of the present disclosure;
[0017] FIG. 8 depicts an example method according to example embodiments of the present disclosure;
[0018] FIG. 9 depicts an example scene according to example embodiments of the present disclosure;
[0019] FIG. 10 depicts an example method according to example embodiments of the present disclosure;
[0020] FIGs. 11 and 12 depict example scenes according to example embodiments of the present disclosure;
[0021] FIG. 13 depicts an example method according to example embodiments of the present disclosure; [0022] FIG. 14 depicts an example scene according to example embodiments of the present disclosure; and
[0023] FIG. 15 depicts an example method according to example embodiments of the present disclosure.
DETAILED DESCRIPTION
[0024] Example aspects of the present disclosure are directed to methods and systems for generating augmented reality (AR) environments. In particular, an AR environment can be generated for display by at least one of two computing devices. The AR environment can include at least a portion of a physical real-world environment in which the two computing devices are located. Data describing the physical real-world environment can be received from at least one of the two computing devices. Such data can be utilized to determine locations of the two computing devices relative to one another in the physical real-world environment. At least a portion of the AR environment (e.g., one or more virtual elements, and/or the like) can be generated based on the locations. For example, multiple users can gather around a table to play a game in an AR environment. Each of the users can utilize a computing device. Sensor data from at least one of the users’ computing devices can be utilized to determine the locations of the computing devices relative to one another (e.g., the location of the users around the table, and/or the like). One or more elements of the AR environment (e.g., virtual elements indicating which user should play next in the game, and/or the like) can be generated based on the determined locations. In some embodiments, one or more of the generated element(s) can depict one or more of the locations. The sensor data can include data generated by one or more cameras, accelerometers, gyroscopes, global position system (GPS) receivers, wireless network interfaces, and/or the like.
[0025] In some embodiments, the sensor data can be received from cameras of the two computing devices. The cameras of the computing devices can capture portions of the physical real-world environment (e.g., from the perspectives of their respective users, and/or the like). For example, the data received from the camera of the first computing device can represent a first portion of the physical real-world environment (e.g., from the perspective of a user of the first computing device, and/or the like), and the data received from the camera of the second computing device can represent a second portion of the physical real-world environment (e.g., from the perspective of a user of the second computing device, and/or the like). The physical real-world environment can include one or more stationary objects (e.g., a dollar bill, marker, coin, cutting board, user’s hand, and/or the like), and both the first portion and the second portion can include the stationary object(s) (e.g., from different orientations, perspectives, and/or the like). In such embodiments, determining the locations can include comparing the data representing the first portion of the physical real-world environment to the data representing the second portion of the physical real-world environment to determine a difference in vantage point (e.g., orientation, perspective, and/or the like) between the first computing device and the second computing device with respect to the stationary object(s).
[0026] Additionally or alternatively, the data representing the first portion of the physical real-world environment can be utilized to render, for display by the first computing device, an AR environment comprising the first portion of the physical real-world environment and one or more virtual elements (e.g., one or more virtual depictions of the stationary object(s), one or more targeting reticles, and/or the like). Similarly, the data representing the second portion of the physical real-world environment can be utilized to render, for display by the second computing device, an AR environment comprising the second portion of the physical real-world environment and the virtual element(s) (e.g., the virtual depiction(s) of the stationary object(s), the targeting reticle(s), and/or the like). Users of the computing devices can align one or more of the virtual element(s) with the stationary object(s) (e.g., by manipulating their computing device to move one or more of the virtual element(s), and/or the like), for example, in response to a prompt (e.g., included as part of the virtual element(s), and/or the like). For example, data generated by user input aligning the one or more virtual element(s) with the stationary object(s) within the AR environment comprising the first portion of the physical real-world environment can be received from the first computing device. Similarly, data generated by user input aligning the one or more virtual element(s) with the stationary object(s) within the AR environment comprising the second portion of the physical real-world environment can be received from the second computing device. In such embodiments, determining the locations can include comparing the data generated by the user input aligning the one or more virtual element(s) with the stationary object(s) within the AR environment comprising the first portion of the physical real-world environment and the data generated by the user input aligning the one or more virtual element(s) with the stationary object(s) within the AR environment comprising the second portion of the physical real-world environment.
[0027] Additionally or alternatively, the data representing the first portion of the physical real-world environment can be utilized to render, for display by the first computing device, an image comprising the first portion of the physical real-world environment.
Similarly, the data representing the second portion of the physical real-world environment can be utilized to render, for display by the second computing device, an image comprising the second portion of the physical real-world environment. Users of the computing devices can select one or more portions (e.g., comers, and/or the like) of the stationary object(s), for example, in response to a prompt (e.g., included in the image(s) as one or more virtual elements, and/or the like). For example, data generated by user input selecting the portion(s) of the stationary object(s) within the image comprising the first portion of the physical real- world environment can be received from the first computing device. Similarly, data generated by user input selecting the portion(s) of the stationary object(s) within the image comprising the second portion of the physical real-world environment can be received from the second computing device. In such embodiments, determining the locations can include comparing the data generated by the user input selecting the portion(s) of the stationary object(s) within the image comprising the first portion of the physical real-world environment and the data generated by the user input selecting the portion(s) of the stationary object(s) within the image comprising the second portion of the physical real-world environment.
[0028] In some embodiments, an image (e.g., of a quick response (QR) code, and/or the like) can be generated for display by the first computing device. A camera of the second computing device can be used to capture the image being displayed by the first computing device, and data representing a portion of the physical real-world environment comprising the image being displayed by the first computing device can be received from the camera of the second computing device. In such embodiments, determining the locations can include utilizing the data representing the portion of the physical real-world environment comprising the image to determine an orientation of the image being displayed by the first computing device relative to the second computing device within the physical real-world environment.
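The following Python sketch illustrates, under stated assumptions, how the detected image could be used to relate the two devices: given the pose of the displayed image in the second device's camera frame (as reported by some marker or QR detector, not shown) and the pose of the screen in the first device's own frame, the transforms can be composed to express the first device relative to the second. The variable names (e.g., marker_in_second) are illustrative assumptions only.

import numpy as np

def to_homogeneous(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Pack a 3x3 rotation and a 3-vector translation into a 4x4 transform."""
    m = np.eye(4)
    m[:3, :3] = rotation
    m[:3, 3] = translation
    return m

# Assumed input: pose of the displayed image (e.g., the QR code) in the second
# device's camera frame, as reported by whatever marker detector is in use.
marker_in_second = to_homogeneous(np.eye(3), np.array([0.0, 0.0, 0.6]))

# Assumed input: pose of the first device's screen, where the image is shown,
# in the first device's own frame (roughly the device pose for a handheld).
marker_in_first = to_homogeneous(np.eye(3), np.zeros(3))

# First device expressed in the second device's frame: second<-marker
# composed with marker<-first.
first_in_second = marker_in_second @ np.linalg.inv(marker_in_first)
relative_position = first_in_second[:3, 3]    # where the first device sits
relative_rotation = first_in_second[:3, :3]   # how it is oriented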
[0029] In some embodiments, the AR environment can include one or more virtual elements corresponding to a game being played by users of the computing devices. In such embodiments, the locations can be utilized to determine an element of the game (e.g., which user will play next, and/or the like) in accordance with one or more rules of the game (e.g., play to the left, and/or the like).
[0030] In some embodiments, the proximity of the computing devices to one another can be determined (e.g., based on the sensor data, locations, and/or the like). In such
embodiments, a level of audio and/or a degree of physical feedback (e.g., vibration, and/or the like) associated with the AR environment to be produced by at least one of the two computing devices can be determined based on the proximity. For example, users of the computing devices can be playing a game associated with the AR environment, a user of the first computing device can experience an event (e.g., a bomb detonation, and/or the like), and an extent to which the user of the second computing device will experience (e.g., in terms of audio, physical feedback, and/or the like) the event can be determined based on the proximity of the devices to one another.
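A minimal Python sketch of such proximity-based attenuation is shown below; the particular falloff curve, the reference distance, and the function name attenuate are assumptions chosen purely for illustration.

def attenuate(base_level: float, distance_m: float,
              reference_m: float = 0.5, floor: float = 0.05) -> float:
    """Scale an audio volume or vibration strength by device proximity,
    using a simple inverse-distance falloff with a minimum floor."""
    if distance_m <= reference_m:
        return base_level
    return max(floor, base_level * (reference_m / distance_m))

# An event (e.g., a virtual detonation) rendered at full strength on the
# nearby device is rendered more faintly on a device three meters away.
volume_near = attenuate(1.0, 0.3)   # -> 1.0
volume_far = attenuate(1.0, 3.0)    # -> ~0.17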
[0031] In accordance with additional aspects of the disclosure, an AR environment comprising one or more virtual elements and at least a portion of a physical real-world environment can be generated for display by a computing device (e.g., a computing device located in the physical real-world environment, and/or the like). A user can move an element of the virtual element(s) from a first location within the AR environment to a second location within the AR environment, and data indicating the user input can be received. Such data and a physics-based model of the AR environment can be utilized to adjust one or more aspects (e.g., positions, rotations, scales, colors, textures, velocities, accelerations, and/or the like) of the virtual element(s) within the AR environment. For example, a user can be playing a game in an AR environment that involves moving a virtual element (e.g., a targeting reticle, and/or the like) around another virtual element (e.g., a game board, and/or the like) such that it aligns with one or more further virtual elements (e.g., one or more cubes appearing on the game board, and/or the like). Responsive to the user moving the virtual element (e.g., the targeting reticle, and/or the like), one or more aspects of the virtual element(s) (e.g., the targeting reticle, the game board, the cube(s), and/or the like) can be adjusted based on a physics-based model of the AR environment (e.g., a model that takes into account one or more of the virtual element(s), one or more physical objects in the physical real-world environment, and/or the like).
[0032] In some embodiments, the aspect(s) can be adjusted based on a distance between the first location and the second location (e.g., a distance the user moved the targeting reticle, and/or the like). Additionally or alternatively, the aspect(s) can be adjusted based on a velocity at which the element was moved from the first location to the second location (e.g., how fast the user moved the targeting reticle, and/or the like).
[0033] In some embodiments, the physics-based model can be based on one or more locations and/or one or more dimensions of the virtual element(s) (e.g., the targeting reticle, the game board, the cube(s), and/or the like). Additionally or alternatively, the physics-based model can be based on one or more locations and/or one or more dimensions of one or more physical objects in the physical real-world environment (e.g., a tabletop surface on which one or more of the virtual element(s) appear to rest, and/or the like).
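The following Python sketch illustrates, under simplifying assumptions, a physics-based model of this kind: a user drag is converted into an initial velocity, and the dragged element is then advanced frame by frame with friction and with bounces off the edges of a game board whose dimensions are part of the model. The class and function names (Reticle, release, step) and all numeric constants are illustrative assumptions, not part of the disclosed embodiments.

from dataclasses import dataclass

@dataclass
class Reticle:
    x: float
    z: float          # position on the (horizontal) tabletop plane, in meters
    vx: float = 0.0
    vz: float = 0.0

def release(reticle: Reticle, dx: float, dz: float, drag_seconds: float) -> None:
    """Convert a user drag (distance and duration) into an initial velocity."""
    reticle.vx = dx / max(drag_seconds, 1e-3)
    reticle.vz = dz / max(drag_seconds, 1e-3)

def step(reticle: Reticle, dt: float, friction: float, half_extent: float) -> None:
    """Advance the reticle one frame: apply friction and bounce off the
    edges of the game board (whose size comes from the physics model)."""
    reticle.x += reticle.vx * dt
    reticle.z += reticle.vz * dt
    reticle.vx *= (1.0 - friction * dt)
    reticle.vz *= (1.0 - friction * dt)
    for axis in ("x", "z"):
        value = getattr(reticle, axis)
        if abs(value) > half_extent:                       # hit the board edge
            setattr(reticle, axis, max(-half_extent, min(half_extent, value)))
            setattr(reticle, "v" + axis, -0.5 * getattr(reticle, "v" + axis))

reticle = Reticle(x=0.0, z=0.0)
release(reticle, dx=0.20, dz=0.05, drag_seconds=0.25)      # fast, short flick
for _ in range(120):                                       # ~2 seconds at 60 fps
    step(reticle, dt=1.0 / 60.0, friction=2.0, half_extent=0.25)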
[0034] In accordance with additional aspects of the disclosure, an AR environment comprising one or more virtual elements and at least a portion of a physical real-world environment can be generated for display by a computing device (e.g., a computing device located in the physical real-world environment, and/or the like). An area of interest of the AR environment can be determined, and one or more virtual elements identifying the area of interest can be generated for display by the computing device. For example, a user can be utilizing a computing device in a physical space (e.g., a room, and/or the like) that includes a tabletop surface. An AR environment can be generated that includes the tabletop surface and surrounding portions of the physical space. The AR environment can include one or more virtual elements (e.g., one or more cubes, and/or the like) located on the tabletop surface. Accordingly (e.g., due to the location of the cube(s), and/or the like), the tabletop surface can be determined to be an area of interest, and one or more virtual elements identifying the area of interest (e.g., a virtual border delineating the tabletop surface, and/or the like) can be generated for display by the computing device within the AR environment.
[0035] In some embodiments, determining the area of interest can include determining one or more locations of one or more virtual elements within the AR environment (e.g., the cubes, and/or the like). In such embodiments, the virtual element(s) identifying the area of interest can define a space that includes the location(s) of the virtual element(s) within the AR environment (e.g., a portion of the tabletop surface where the cubes are located, and/or the like).
[0036] In some embodiments, determining the area of interest can include identifying a physical surface located in the physical real-world environment (e.g., the tabletop surface, and/or the like). In such embodiments, the virtual element(s) identifying the area of interest can define a space that includes at least a portion of the physical surface (e.g., the tabletop surface, and/or the like).
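A minimal Python sketch of one such area-of-interest computation is shown below: the bounding rectangle of the virtual elements' positions on a detected surface plane, padded by a small margin, which could then be rendered as a delineating virtual border. The function name area_of_interest and the example positions are assumptions for illustration.

from typing import List, Tuple

Point = Tuple[float, float]  # (x, z) coordinates on a detected surface plane

def area_of_interest(element_positions: List[Point],
                     margin: float = 0.05) -> Tuple[Point, Point]:
    """Return the (min, max) corners of a rectangle on the surface plane
    that encloses all virtual elements plus a small margin; the rectangle
    can then be rendered as a border delineating the area of interest."""
    xs = [p[0] for p in element_positions]
    zs = [p[1] for p in element_positions]
    lo = (min(xs) - margin, min(zs) - margin)
    hi = (max(xs) + margin, max(zs) + margin)
    return lo, hi

# Cubes scattered on the tabletop surface.
cube_positions = [(0.10, 0.05), (-0.08, 0.12), (0.02, -0.20)]
border_min, border_max = area_of_interest(cube_positions)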
[0037] In accordance with additional aspects of the disclosure, an AR environment comprising one or more virtual elements and at least a portion of a physical real-world environment can be generated for display by a computing device (e.g., a computing device located in the physical real-world environment, and/or the like). A physical surface located in the physical real-world environment can be identified. One or more virtual elements scaled to fit in a space defined at least in part by the physical surface can be generated for display by the computing device within the AR environment. For example, a user can be utilizing a computing device in a physical space (e.g., a room, and/or the like) that includes a tabletop surface. The tabletop surface can define at least in part a space within the physical space (e.g., from the tabletop surface to the ceiling, and/or the like). The tabletop surface can be identified, and one or more virtual elements (e.g., one or more bricks, and/or the like) scaled to fit in the space defined at least in part by the tabletop surface can be generated for display by the computing device within the AR environment (e.g., in the space defined at least in part by the tabletop surface, and/or the like).
[0038] In some embodiments, the virtual element(s) scaled to fit in the space (e.g., the brick(s), and/or the like) can represent a data set (e.g., search results, and/or the like). In such embodiments, the virtual element(s) scaled to fit in the space can be generated based on the size of the data set (e.g., to fill the available space, and/or the like).
[0039] In some embodiments, multiple physical surfaces located in the physical real-world environment can be identified (e.g., the tabletop surface, a floor of the room, a wall of the room, a ceiling of the room, and/or the like). In such embodiments, the virtual element(s) scaled to fit in the space (e.g., the brick(s), and/or the like) can be generated such that they fit in a space defined at least in part by at least two of the identified surfaces (e.g., a space defined by the tabletop surface and the wall of the room, and/or the like).
[0040] In some embodiments, the physical surface (e.g., the tabletop surface, and/or the like) can be selected from amongst multiple physical surfaces located in the physical real-world environment (e.g., the tabletop surface, a floor of the room, a wall of the room, a ceiling of the room, and/or the like). In such embodiments, the physical surface (e.g., the tabletop surface, and/or the like) can be selected based on user input identifying the physical surface. For example, a user can select the physical surface (e.g., the tabletop surface, and/or the like) by aligning one or more virtual elements (e.g., a virtual surface, plane, and/or the like) with the physical surface (e.g., the tabletop surface, and/or the like).
[0041] In some embodiments, one or more dimensions (e.g., height, width, depth, and/or the like) of the space defined at least in part by the physical surface (e.g., a space between the tabletop surface and the ceiling, and/or the like) can be determined. In such embodiments, one or more of the virtual element(s) scaled to fit in the space (e.g., the brick(s), and/or the like) can be selected from amongst multiple different virtual elements (e.g., a set of possible virtual elements that includes the brick(s), the Eiffel Tower, the Washington Monument, and/or the like) based on the dimension(s). In some embodiments, the physical real-world environment can include multiple computing devices, and determining the dimension(s) can include determining a distance between the devices (e.g., based on sensor data from one or more of the devices, and/or the like).
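By way of illustration only, the following Python sketch shows one way of scaling virtual elements to a measured space and of selecting a virtual element from a catalog based on the space's dimensions; the catalog contents, dimensions, and function names (fit_scale, choose_model) are assumptions and not part of the disclosed embodiments.

from typing import Dict, Tuple

Size = Tuple[float, float, float]  # width, height, depth in meters

def fit_scale(model_size: Size, space_size: Size) -> float:
    """Uniform scale factor that makes a model exactly fit the space."""
    return min(s / m for m, s in zip(model_size, space_size))

def choose_model(catalog: Dict[str, Size], space_size: Size) -> str:
    """Pick the catalog model needing the least rescaling, i.e., the one
    whose fitted scale is closest to 1 for the available space."""
    return min(catalog, key=lambda n: abs(fit_scale(catalog[n], space_size) - 1.0))

# Space between the tabletop surface and the ceiling (width, height, depth),
# e.g., derived from detected surfaces and/or the distance between devices.
space = (1.2, 1.8, 0.8)
catalog = {"brick_stack": (0.5, 1.0, 0.5),
           "tower_model": (0.9, 2.5, 0.9),
           "monument_model": (0.3, 1.7, 0.3)}
selected = choose_model(catalog, space)          # -> "monument_model"
scale = fit_scale(catalog[selected], space)      # ~1.06, a slight upscale to fill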
[0042] In accordance with additional aspects of the disclosure, an AR environment comprising one or more virtual elements and at least a portion of a physical real-world environment can be generated for display by a computing device (e.g., a computing device located in the physical real-world environment, and/or the like). One or more locations within the AR environment for locating one or more advertisements can be identified. One or more virtual elements comprising the advertisement(s) can be generated for display by the computing device at the location(s) within the AR environment. For example, a user can be utilizing a computing device in a physical space (e.g., a room, and/or the like) that includes a tabletop surface. A physical object (e.g., a piece of paper, and/or the like) can be located on the tabletop surface. A surface of the physical object can be identified for locating one or more advertisements, and one or more virtual elements (e.g., virtual text, images, and/or the like) comprising the advertisement(s) can be generated for display by the computing device on the surface of the physical object within the AR environment.
[0043] In some embodiments, one or more of the advertisement(s) can be selected from amongst multiple different possible advertisements. For example, one or more of the advertisement(s) can be selected based on: a geographic location in the physical real-world environment at which the computing device is located (e.g., the computing device can be located in a bar, and an advertisement related to the bar, one or more of its products or services, and/or the like can be selected); a search history associated with the computing device (e.g., the computing device can have been utilized to search for a particular product or service, and an advertisement related to the product or service, and/or the like can be selected); a context of the AR environment (e.g., the computing device can be located in a bar where users are utilizing the AR environment to play a trivia game, and an advertisement related to the bar, a trivia question, and/or the like can be selected); user performance within the AR environment (e.g., an advertisement for a discount at the bar can be selected based on the user’s performance in the trivia game, and/or the like); one or more objects depicted by one or more virtual elements of the AR environment (e.g., the AR environment can include one or more virtual elements depicting a paper weight, and an advertisement related to paper weights, and/or the like can be selected); one or more physical objects in the physical real-world environment (e.g., the physical real-world environment can include a computer monitor, and an advertisement related to computer monitors, and/or the like can be selected); text in the physical real-world environment recognized by a computing device (e.g., the computer monitor could include text identifying its brand, a camera of the computing device can capture imagery of the text, the computing device (or another computing device) can recognize the text, and an advertisement related to the brand, and/or the like can be selected); and/or the like.
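A simplified Python sketch of one possible selection strategy is shown below, in which each candidate advertisement is scored by how many contextual signals it matches; the signal strings, candidate list, and function name select_advertisement are illustrative assumptions only.

from typing import Dict, List, Set

def select_advertisement(candidates: List[Dict], signals: Set[str]) -> Dict:
    """Score each candidate by how many contextual signals it matches.
    Signals may come from geographic location, search history, AR context,
    recognized objects or text, and so on, as described above."""
    return max(candidates, key=lambda ad: len(signals & ad["keywords"]))

signals = {"bar", "trivia", "computer monitor", "acme"}      # gathered at runtime
candidates = [
    {"id": "happy_hour", "keywords": {"bar", "drinks"}},
    {"id": "monitor_sale", "keywords": {"computer monitor", "acme"}},
    {"id": "paperweight", "keywords": {"paper weight", "desk"}},
]
chosen = select_advertisement(candidates, signals)           # -> "monitor_sale"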
[0044] In some embodiments, identifying the location(s) can include identifying a physical object in the physical real-world environment (e.g., the monitor, and/or the like). In such embodiments, the virtual element(s) comprising the advertisement(s) can include one or more virtual elements that depict at least a portion of the physical object (e.g., a virtual monitor, and/or the like), outline at least a portion of the physical object (e.g., outline the monitor, and/or the like), highlight at least a portion of the physical object (e.g., cover at least a portion of the monitor with a transparent layer, and/or the like), and/or identify at least a portion of the physical object (e.g., draw a box around at least a portion of the monitor, and/or the like).
[0045] In some embodiments, identifying the location(s) can include identifying text in the physical real-world environment recognized by a computing device (e.g., the text identifying the monitor brand, and/or the like). In such embodiments, the virtual element(s) comprising the advertisement(s) can include one or more virtual elements that depict at least a portion of the text (e.g., virtual text, and/or the like), outline at least a portion of the text (e.g., outline the text identifying the monitor brand, and/or the like), highlight at least a portion of the text (e.g., cover at least a portion of the text identifying the monitor brand with a transparent layer, and/or the like), and/or identify at least a portion of the text (e.g., draw a box around at least a portion of the text identifying the monitor brand, and/or the like).
[0046] In some embodiments, generating the virtual element(s) comprising the advertisement(s) can include modifying one or more dimensions, colors, finishes, lightings, and/or the like of at least one of the advertisement(s). For example, the physical real-world environment could include a chair, the chair can be identified as a location for an
advertisement, an advertisement related to the chair can be selected, and generating the virtual element(s) comprising the selected advertisement can include modifying one or more dimensions of the advertisement (e.g., so that the advertisement will fit on a surface of the chair, and/or the like), colors of the advertisement (e.g., so that the advertisement will be visible on the surface of the chair, and/or the like), finishes (e.g., matte, glossy, and/or the like) of the advertisement (e.g., so the advertisement will be aesthetically accentuated on the surface of the chair, and/or the like), and/or lightings (e.g., levels of brightness, contrast, and/or the like) of the advertisement (e.g., so the advertisement will be visible on the surface of the chair, and/or the like).
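The following Python sketch illustrates, under assumptions, two such modifications: uniformly rescaling an advertisement to fit a target surface, and choosing a text color that contrasts with the surface's apparent brightness. The function names and numeric values are illustrative only and do not reflect any particular disclosed implementation.

from typing import Tuple

def fit_ad_to_surface(ad_size: Tuple[float, float],
                      surface_size: Tuple[float, float]) -> Tuple[float, float]:
    """Shrink (or grow) the ad uniformly so it fits the target surface."""
    scale = min(surface_size[0] / ad_size[0], surface_size[1] / ad_size[1])
    return ad_size[0] * scale, ad_size[1] * scale

def contrasting_text_color(surface_rgb: Tuple[int, int, int]) -> Tuple[int, int, int]:
    """Pick black or white text depending on the surface's perceived brightness."""
    r, g, b = surface_rgb
    luminance = 0.2126 * r + 0.7152 * g + 0.0722 * b
    return (0, 0, 0) if luminance > 128 else (255, 255, 255)

# An ad sized for a 0.6 m x 0.3 m banner, placed on a 0.45 m x 0.45 m chair seat.
placed_size = fit_ad_to_surface((0.6, 0.3), (0.45, 0.45))    # -> (0.45, 0.225)
text_color = contrasting_text_color((40, 42, 54))            # dark seat -> white text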
[0047] In some embodiments, a user can invoke (e.g., select, interact with, and/or the like) one or more of the virtual element(s) comprising the advertisement(s), and data generated by the user input can be received. Responsive to receiving the data generated by the user input, one or more virtual elements comprising content associated with one or more of the advertisement(s) associated with the virtual element(s) invoked by the user can be generated for display by the computing device within the AR environment. For example, a user can invoke a virtual element located on a surface of the chair, and one or more virtual elements comprising an advertisement for the chair can be generated for display by the computing device within the AR environment (e.g., alongside the chair, on a surface of the chair, and/or the like). Additionally or alternatively, responsive to receiving the data generated by the user input, an application distinct from an application providing the AR environment can be directed to content associated with one or more of the advertisement(s) associated with the virtual element(s) invoked by the user for display by the computing device within the application distinct from the application associated with the AR
environment. For example, the computing device can include one or more applications (e.g., a web browser, an application associated with a merchant, service provider, and/or the like) distinct from an application providing the AR environment, and responsive to the user invoking the virtual element located on the surface of the chair, one or more of such application(s) can be directed (e.g., via an application programming interface (API) of such application(s), an advertisement identifier, a uniform resource locator (URL), and/or the like) to content (e.g., for display by the computing device within such application(s), and/or the like) enabling the user to learn more about the chair, purchase the chair, and/or the like.
[0048] In some embodiments, a particular identified location within the AR environment (e.g., the surface of a sheet of paper, and/or the like) can be generated for display within the AR environment by multiple different computing devices (e.g., utilized by different users located in the physical real-world environment, and/or the like). In such embodiments, virtual elements depicting different advertisements can be generated for display at the particular location by the different computing devices. For example, the physical real-world environment can include two different computing devices, one or more virtual elements depicting a first advertisement (e.g., based on a search history associated with the first computing device, and/or the like) can be generated for display at a particular location within the AR environment by the first computing device, and one or more virtual elements depicting a second advertisement (e.g., based on a search history associated with the second computing device, and/or the like) can be generated for display at the particular location within the AR environment by the second computing device.
[0049] The technologies described herein can provide a number of technical effects and benefits. For example, the technologies can enable an AR environment comprising a physical real-world environment that includes multiple users to be modified (e.g., calibrated, and/or the like) based on the users’ locations within the physical real-world environment. Additionally or alternatively, the technologies can enable an AR environment to be modified in a predictable manner (e.g., based on physics, and/or the like), creating a uniform and consistent user experience, and/or the like. Additionally or alternatively, the technologies can allow developers of AR applications to focus and/or guide a user’s experience within the environment, by identifying areas of interest to the user, and/or the like. Additionally or alternatively, the technologies can optimize utilization of space within an AR environment by, for example, scaling virtual elements to fit within the available space. Additionally or alternatively, the technologies can support the integration of advertisements and/or supplemental information into AR environments in a contextually useful manner that minimizes their potential intrusiveness on the user experience.
[0050] With reference now to the Figures, example embodiments of the present disclosure will be discussed in further detail.
[0051] FIG. 1 depicts an example computing environment according to example embodiments of the present disclosure. Referring to FIG. 1, environment 100 can include computing devices 102, 104, and 106 (e.g., user devices, and/or the like), one or more networks 108 (e.g., local area networks (LANs), wide area networks (WANs), portions of the Internet, and/or the like), and computing system 110 (e.g., a backend system, and/or the like). Network(s) 108 can include one or more networks (e.g., wired networks, wireless networks, and/or the like) that interface (e.g., support communications between, and/or the like) devices 102, 104, and/or 106 with one another and/or with system 110.
[0052] Devices 102, 104, and/or 106 can include one or more computing devices (e.g., laptop computers, desktop computers, tablet computers, mobile devices, smartphones, wearable devices, head-mounted displays, and/or the like) capable of performing one or more of the operations and/or functions described herein. It will be appreciated that references herein to any one of devices 102, 104, and/or 106 could refer to multiple associated computing devices (e.g., a mobile device, wearable device, and/or the like) functioning together (e.g., for a particular user, and/or the like). Device 102 can include one or more processors 112, one or more communication interfaces 114, one or more displays 116, one or more sensors 118, and memory 120. Communication interface(s) 114 can support communications between device 102 and devices 104 and/or 106 and/or system 110 (e.g., via network(s) 108, and/or the like). Display(s) 116 can include one or more devices (e.g., panel displays, touch screens, head-mounted displays, and/or the like) that allow a user of device 102 to view imagery, and/or the like. Sensor(s) 118 can include one or more components (e.g., cameras, accelerometers, gyroscopes, global positioning system (GPS) receivers, wireless network interfaces, and/or the like) that can perceive one or more aspects of a physical real-world environment in which device 102 is located and can generate data representing those aspect(s). Memory 120 can include (e.g., store, and/or the like) instructions 122, which when executed by processor(s) 112 can cause device 102 to perform one or more of the operations and/or functions described herein. It will be appreciated that devices 104 and/or 106 can include one or more of the components described above with respect to device 102.
[0053] System 110 can include one or more computing devices (e.g., servers, and/or the like) capable of performing one or more of the operations and/or functions described herein. System 110 can include one or more processors 124, one or more communication interfaces 126, and memory 128. Communication interface(s) 126 can support communications between system 110 and devices 102, 104, and/or 106 (e.g., via network(s) 108, and/or the like). Memory 128 can include (e.g., store, and/or the like) instructions 130, which when executed by processor(s) 124 can cause system 110 to perform one or more of the operations and/or functions described herein.
[0054] FIGs. 2, 3, 4, and 5 depict example scenes according to example embodiments of the present disclosure.
[0055] Referring to FIG. 2, a computing system (e.g., devices 102, 104, and/or 106, system 110, and/or the like) can generate an AR environment comprising scene 200 for display (e.g., via display(s) 116, and/or the like) by devices 102, 104, and/or 106. Scene 200 can include a portion of a physical real-world environment (e.g., a room, a field, and/or the like) in which devices 102, 104, and/or 106 can be located. Accordingly, scene 200 can include one or more physical objects. For example, scene 200 can include object 202 (e.g., a dollar bill, and/or the like). Scene 200 can also include virtual elements 204, 206, 208, and/or 210.
[0056] In accordance with aspects of the disclosure, the computing system can receive data describing the physical real-world environment (e.g., data generated by sensor(s) 118, and/or the like) from devices 102, 104, and/or 106, the data can be utilized to determine locations of devices 102, 104, and/or 106 relative to one another in the physical real-world environment, and at least a portion of the AR environment (e.g., element 210, and/or the like) can be generated based on the locations. For example, users of devices 102, 104, and/or 106 can gather around a table to play a game in an AR environment. Sensor data from devices 102, 104, and/or 106 can be utilized to determine the locations of devices 102, 104, and/or 106 relative to one another (e.g., the location of the users around the table, and/or the like). One or more elements of the AR environment (e.g., element 210, and/or the like) can be generated based on the determined locations. For example, element 210 can indicate which user should play next in the game, and/or the like. In some embodiments, one or more of the generated element(s) can depict one or more of the locations. For example, scene 200 can be displayed by device 102, and element 210 can point toward the location of device 104 in the physical real-world environment.
[0057] In some embodiments, the computing system can determine the proximity of devices 102, 104, and/or 106 to one another (e.g., based on data generated by sensor(s) 118, determined locations of devices 102, 104, and/or 106, and/or the like). In such embodiments, the computing system can determine, based on the proximity of devices 102, 104, and/or 106, a level of audio and/or a degree of physical feedback (e.g., vibration, and/or the like) associated with the AR environment to be produced by devices 102, 104, and/or 106. For example, users of devices 102, 104, and/or 106 can be playing a game associated with the AR environment, a user of device 102 can experience an event (e.g., a bomb detonation, and/or the like), and an extent to which users of devices 104 and/or 106 will experience (e.g., in terms of audio, physical feedback, and/or the like) the event can be determined based on the proximity of devices 104 and/or 106 to device 102 (e.g., at the time of the event, and/or the like).
[0058] In some embodiments, the computing system can receive data describing the physical real-world environment (e.g., data generated by sensor(s) 118, and/or the like) from cameras of devices 102, 104, and/or 106. The cameras of devices 102, 104, and/or 106 can capture portions of the physical real-world environment (e.g., from the perspectives of their respective users, and/or the like). For example, data received from a camera of device 102 can represent a portion of the physical real-world environment including object 202 (e.g., from the perspective of a user of device 102, and/or the like). Similarly, data received from a camera of device 104 can represent a portion of the physical real-world environment including object 202 (e.g., from the perspective of a user of device 104, and/or the like); and/or data received from a camera of device 106 can represent a portion of the physical real-world environment including object 202 (e.g., from the perspective of a user of device 106, and/or the like). It will be appreciated that while the portions of the physical real-world environment captured by the cameras of devices 102, 104, and/or 106 can include common features (e.g., object 202, and/or the like), such features can be captured from different perspectives (e.g., the cameras of devices 102, 104, and/or 106 can generate data representing object 202 from different positions (e.g., positions around a tabletop surface on which object 202 is resting, and/or the like) within the physical real-world environment, and/or the like).
[0059] In such embodiments, the computing system can determine the locations of devices 102, 104, and/or 106 by comparing the data representing the portions of the physical real-world environment (e.g., the various portions including object 202, and/or the like) to determine a difference in vantage point (e.g., orientation, perspective, and/or the like) between devices 102, 104, and/or 106 with respect to stationary physical object(s) included in the portions (e.g., object 202, and/or the like). For example, the computing system can determine the locations of devices 102, 104, and/or 106 by comparing the data received from the cameras of devices 102, 104, and/or 106 to determine a difference in vantage point (e.g., orientation, perspective, and/or the like) between devices 102, 104, and/or 106 with respect to object 202. In some embodiments, the computing system can determine the locations of devices 102, 104, and/or 106 by determining, comparing, and/or the like one or more positions, viewing angles, and/or the like of devices 102, 104, and/or 106 (e.g., using one or more camera projections, and/or the like) relative to a single, common, shared, and/or the like coordinate space for the AR environment (e.g., based on object 202, and/or the like).
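For illustration only, the following Python sketch expresses two device poses in a single shared coordinate frame anchored at a common stationary object, assuming each device can estimate the object's pose in its own local frame; the helper name make_pose and the example poses are assumptions introduced here and are not part of the disclosed embodiments.

import numpy as np

def make_pose(yaw_deg: float, translation) -> np.ndarray:
    """4x4 pose: a rotation about the vertical axis plus a translation."""
    c, s = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
    m = np.eye(4)
    m[:3, :3] = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    m[:3, 3] = translation
    return m

# Assumed input: pose of the common stationary object (e.g., object 202) as
# estimated in each device's local frame.
object_in_102 = make_pose(0.0, [0.0, -0.3, 0.6])
object_in_104 = make_pose(120.0, [0.1, -0.3, 0.5])

# Each device's pose in a single shared frame anchored at the object is the
# inverse of the object's pose in that device's frame.
pose_102 = np.linalg.inv(object_in_102)
pose_104 = np.linalg.inv(object_in_104)

# Device 104 expressed in device 102's frame (e.g., so that element 210 can
# point toward device 104): 102<-object composed with object<-104.
relative_104_in_102 = object_in_102 @ pose_104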
[0060] Additionally or alternatively, the computing system can utilize data representing the portions of the physical real-world environment to render, for display by devices 102, 104, and/or 106, one or more AR environments including the portions of the physical real-world environment (e.g., the portions including object 202, and/or the like) and one or more virtual elements (e.g., elements 204, 206, 208, and/or the like). For example, data representing a portion of the physical real-world environment from the perspective of a user of device 102 (e.g., data received from a camera of device 102, and/or the like) can be utilized to render, for display by device 102, an AR environment including the portion of the physical real-world environment from the perspective of the user of device 102, and elements 204, 206, and/or 208. Similarly, data representing a portion of the physical real-world environment from the perspective of a user of device 104 (e.g., data received from a camera of device 104, and/or the like) can be utilized to render, for display by device 104, an AR environment including the portion of the physical real-world environment from the perspective of the user of device 104, and elements 204, 206, and/or 208; and/or data representing a portion of the physical real-world environment from the perspective of a user of device 106 (e.g., data received from a camera of device 106, and/or the like) can be utilized to render, for display by device 106, an AR environment including the portion of the physical real-world environment from the perspective of the user of device 106, and elements 204, 206, and/or 208. In some embodiments, the virtual element(s) can include one or more virtual depictions of stationary object(s) included in the portion(s). For example, element 204 can virtually depict object 202.
[0061] Users of devices 102, 104, and/or 106 can align one or more of the virtual element(s) with the stationary object(s) (e.g., by manipulating devices 102, 104, and/or 106 to move elements 204, 206, 208, and/or the like), for example, in response to a prompt (e.g., provided by elements 206, 208, and/or the like). For example, the computing system can receive (e.g., from device 102, and/or the like) data generated by user input aligning element 204 with object 202 within the AR environment including the portion of the physical real-world environment from the perspective of the user of device 102. Similarly, the computing system can receive (e.g., from device 104, and/or the like) data generated by user input aligning element 204 with object 202 within the AR environment including the portion of the physical real-world environment from the perspective of the user of device 104; and/or the computing system can receive (e.g., from device 106, and/or the like) data generated by user input aligning element 204 with object 202 within the AR environment including the portion of the physical real-world environment from the perspective of the user of device 106.
[0062] In such embodiments, the computing system can determine the locations of devices 102, 104, and/or 106 by comparing the data generated by the user input aligning the one or more virtual element(s) (e.g., element 204, and/or the like) with stationary object(s) (e.g., object 202, and/or the like) within the AR environments including the portions of the physical real-world environment. For example, the computing system can determine the locations of devices 102, 104, and/or 106 by comparing data (e.g., received from devices 102, 104, 106, and/or the like) generated by user input aligning element 204 with object 202 within the AR environment including the portion of the physical real-world environment from the perspective of the user of device 102, the AR environment including the portion of the physical real-world environment from the perspective of the user of device 104, and/or the AR environment including the portion of the physical real-world environment from the perspective of the user of device 106.
[0063] Referring to FIG. 3, a computing system (e.g., devices 102, 104, and/or 106, system 110, and/or the like) can generate an AR environment comprising scene 300 for display (e.g., via display(s) 116, and/or the like) by devices 102, 104, and/or 106. Scene 300 can include a portion of a physical real-world environment (e.g., a room, a field, and/or the like) in which devices 102, 104, and/or 106 can be located. Accordingly, scene 300 can include one or more physical objects. For example, scene 300 can include object 302 (e.g., a marker, and/or the like) and object 304 (e.g., a coin, and/or the like). Scene 300 can also include virtual elements 306, 308, 310, 312, and/or 314.
[0064] As indicated above, in accordance with aspects of the disclosure, the computing system can receive data describing the physical real-world environment (e.g., data generated by sensor(s) 118, and/or the like) from devices 102, 104, and/or 106, the data can be utilized to determine locations of devices 102, 104, and/or 106 relative to one another in the physical real-world environment, and at least a portion of the AR environment (e.g., element 314, and/or the like) can be generated based on the locations. In some embodiments, one or more of the generated element(s) can depict one or more of the locations. For example, scene 300 can be displayed by device 102, and element 314 can point toward the location of device 104 in the physical real-world environment.
[0065] In some embodiments, the computing system can receive data describing the physical real-world environment (e.g., data generated by sensor(s) 118, and/or the like) from cameras of devices 102, 104, and/or 106. The cameras of devices 102, 104, and/or 106 can capture portions of the physical real-world environment (e.g., from the perspectives of their respective users, and/or the like). For example, data received from a camera of device 102 can represent a portion of the physical real-world environment including objects 302 and/or 304 (e.g., from the perspective of a user of device 102, and/or the like). Similarly, data received from a camera of device 104 can represent a portion of the physical real-world environment including objects 302 and/or 304 (e.g., from the perspective of a user of device 104, and/or the like); and/or data received from a camera of device 106 can represent a portion of the physical real-world environment including objects 302 and/or 304 (e.g., from the perspective of a user of device 106, and/or the like). It will be appreciated that while the portions of the physical real-world environment captured by the cameras of devices 102, 104, and/or 106 can include common features (e.g., objects 302, 304, and/or the like), such features can be captured from different perspectives (e.g., the cameras of devices 102, 104, and/or 106 can generate data representing objects 302 and/or 304 from different positions (e.g., positions around a tabletop surface on which objects 302 and/or 304 are resting, and/or the like) within the physical real-world environment, and/or the like).
[0066] In such embodiments, the computing system can determine the locations of devices 102, 104, and/or 106 by comparing the data representing the portions of the physical real-world environment (e.g., the various portions including objects 302, 304, and/or the like) to determine a difference in vantage point (e.g., orientation, perspective, and/or the like) between devices 102, 104, and/or 106 with respect to stationary physical object(s) included in the portions (e.g., objects 302, 304, and/or the like). For example, the computing system can determine the locations of devices 102, 104, and/or 106 by comparing the data received from the cameras of devices 102, 104, and/or 106 to determine a difference in vantage point (e.g., orientation, perspective, and/or the like) between devices 102, 104, and/or 106 with respect to objects 302 and/or 304. In some embodiments, the computing system can determine the locations of devices 102, 104, and/or 106 by determining, comparing, and/or the like one or more positions, viewing angles, and/or the like of devices 102, 104, and/or 106 (e.g., using one or more camera projections, and/or the like) relative to a single, common, shared, and/or the like coordinate space for the AR environment (e.g., based on objects 302, 304, and/or the like).
[0067] Additionally or alternatively, the computing system can utilize the data representing the portions of the physical real-world environment to render, for display by devices 102, 104, and/or 106, one or more AR environments including the portions of the physical real-world environment (e.g., the portions including objects 302, 304, and/or the like) and one or more virtual elements (e.g., elements 306, 308, 310, 312, and/or the like).
For example, data representing a portion of the physical real-world environment from the perspective of a user of device 102 (e.g., data received from a camera of device 102, and/or the like) can be utilized to render, for display by device 102, an AR environment including the portion of the physical real-world environment from the perspective of the user of device 102, and elements 306, 308, 310, and/or 312. Similarly, data representing a portion of the physical real-world environment from the perspective of a user of device 104 (e.g., data received from a camera of device 104, and/or the like) can be utilized to render, for display by device 104, an AR environment including the portion of the physical real-world environment from the perspective of the user of device 104, and elements 306, 308, 310, and/or 312; and/or data representing a portion of the physical real-world environment from the perspective of a user of device 106 (e.g., data received from a camera of device 106, and/or the like) can be utilized to render, for display by device 106, an AR environment including the portion of the physical real-world environment from the perspective of the user of device 106, and elements 306, 308, 310, and/or 312.
[0068] Users of devices 102, 104, and/or 106 can align one or more of the virtual element(s) with the stationary object(s) (e.g., by manipulating devices 102, 104, and/or 106 to move elements 306, 308, 310, 312, and/or the like), for example, in response to a prompt (e.g., provided by elements 310, 312, and/or the like). For example, elements 306 and 308 (e.g., targeting reticles, cursors, and/or the like) can be distinguishable from one another (e.g., be different colors, have different shapes, and/or the like), users of devices 102, 104, and/or 106 can agree to align element 306 with object 302 and/or element 308 with object 304, and the computing system can receive (e.g., from device 102, and/or the like) data generated by user input aligning element 306 with object 302 and/or element 308 with object 304 within the AR environment including the portion of the physical real-world environment from the perspective of the user of device 102. Similarly, the computing system can receive (e.g., from device 104, and/or the like) data generated by user input aligning element 306 with object 302 and/or element 308 with object 304 within the AR environment including the portion of the physical real-world environment from the perspective of the user of device 104; and/or the computing system can receive (e.g., from device 106, and/or the like) data generated by user input aligning element 306 with object 302 and/or element 308 with object 304 within the AR environment including the portion of the physical real-world environment from the perspective of the user of device 106.
[0069] In such embodiments, the computing system can determine the locations of devices 102, 104, and/or 106 by comparing the data generated by the user input aligning the one or more virtual element(s) (e.g., elements 306, 308, and/or the like) with stationary object(s) (e.g., objects 302, 304, and/or the like) within the AR environments including the portions of the physical real-world environment. For example, the computing system can determine the locations of devices 102, 104, and/or 106 by comparing data (e.g., received from devices 102, 104, 106, and/or the like) generated by user input aligning element 306 with object 302 and/or element 308 with object 304 within the AR environment including the portion of the physical real-world environment from the perspective of the user of device 102, the AR environment including the portion of the physical real-world environment from the perspective of the user of device 104, and/or the AR environment including the portion of the physical real-world environment from the perspective of the user of device 106.
[0070] Referring to FIG. 4, scene 400 can include a portion of a physical real-world environment in which devices 102, 104, and/or 106 can be located. For example, FIG. 4 illustrates a user operating device 102 in a physical real-world environment (e.g., a room, and/or the like) that includes object 402 (e.g., a cutting board, and/or the like).
[0071] As indicated above, a computing system (e.g., devices 102, 104, and/or 106, system 110, and/or the like) can receive data describing the physical real-world environment (e.g., data generated by sensor(s) 118, and/or the like) from cameras of devices 102, 104, and/or 106. The cameras of devices 102, 104, and/or 106 can capture portions of the physical real-world environment (e.g., from the perspectives of their respective users, and/or the like). For example, data received from a camera of device 102 can represent a portion of the physical real-world environment including object 402 (e.g., from the perspective of a user of device 102, and/or the like). Similarly, data received from a camera of device 104 can represent a portion of the physical real-world environment including object 402 (e.g., from the perspective of a user of device 104, and/or the like); and/or data received from a camera of device 106 can represent a portion of the physical real-world environment including object 402 (e.g., from the perspective of a user of device 106, and/or the like). It will be appreciated that while the portions of the physical real-world environment captured by the cameras of devices 102, 104, and/or 106 can include common features (e.g., object 402, and/or the like), such features can be captured from different perspectives (e.g., the cameras of devices 102, 104, and/or 106 can generate data representing object 402 from different positions (e.g., positions from a floor surface on which object 402 is resting, and/or the like) within the physical real-world environment, and/or the like).
[0072] In some embodiments, the computing system can utilize the data representing the portions of the physical real-world environment to render, for display by devices 102, 104, and/or 106, one or more images including the portions of the physical real-world environment (e.g., the portions including object 402, and/or the like). For example, data representing a portion of the physical real-world environment from the perspective of a user of device 102 (e.g., data received from a camera of device 102, and/or the like) can be utilized to render, for display by device 102, image 404 of the portion of the physical real-world environment from the perspective of the user of device 102, which can include imagery 406 of object 402.
Similarly, data representing a portion of the physical real-world environment from the perspective of a user of device 104 (e.g., data received from a camera of device 104, and/or the like) can be utilized to render, for display by device 104, an image of the portion of the physical real-world environment from the perspective of the user of device 104, which can include imagery of object 402; and/or data representing a portion of the physical real-world environment from the perspective of a user of device 106 (e.g., data received from a camera of device 106, and/or the like) can be utilized to render, for display by device 106, an image of the portion of the physical real-world environment from the perspective of the user of device 106, which can include imagery of object 402.
Users of devices 102, 104, and/or 106 can select (e.g., by manipulating devices 102, 104, 106, and/or the like) one or more portions (e.g., corners, and/or the like) of one or more objects (e.g., stationary objects, and/or the like) within the image(s), for example, in response to a prompt, an agreement amongst the users, and/or the like. For example, the computing system can receive (e.g., from device 102, and/or the like) data generated by user input selecting one or more portions of imagery 406 of object 402 within image 404.
Similarly, the computing system can receive (e.g., from device 104, and/or the like) data generated by user input selecting the portion(s) of imagery of object 402 within the image of the portion of the physical real-world environment from the perspective of the user of device 104; and/or the computing system can receive (e.g., from device 106, and/or the like) data generated by user input selecting the portion(s) of imagery of object 402 within the image of the portion of the physical real-world environment from the perspective of the user of device 106.
[0074] In such embodiments, the computing system can determine the locations of devices 102, 104, and/or 106 by comparing the data generated by the user input selecting the portion(s) of the object(s) (e.g., object 402, and/or the like) within the image(s). For example, the computing system can determine the locations of devices 102, 104, and/or 106 by comparing data (e.g., received from devices 102, 104, 106, and/or the like) generated by user input selecting the portion(s) of imagery 406 of object 402 within image 404, the portion(s) of the imagery of object 402 within the image of the portion of the physical real-world environment from the perspective of the user of device 104, and/or the portion(s) of the imagery of object 402 within the image of the portion of the physical real-world environment from the perspective of the user of device 106.
[0075] Referring to FIG. 5, scene 500 can include a portion of a physical real-world environment in which devices 102, 104, and/or 106 can be located. For example, FIG. 5 illustrates a user operating device 104 in a physical real-world environment (e.g., a room, and/or the like).
[0076] In some embodiments, a computing system (e.g., devices 102, 104, and/or 106, system 110, and/or the like) can generate an image (e.g., of a quick response (QR) code, and/or the like) for display by devices 102, 104, and/or 106. For example, image 506 can be generated for display by device 102.
[0077] As indicated above, the computing system can receive data describing the physical real-world environment (e.g., data generated by sensor(s) 118, and/or the like) from cameras of devices 102, 104, and/or 106. The cameras of devices 102, 104, and/or 106 can capture portions of the physical real-world environment (e.g., from the perspectives of their respective users, and/or the like). For example, data received from a camera of device 104 can represent a portion of the physical real-world environment (e.g., from the perspective of a user of device 104, and/or the like) including device 102 displaying image 506. The computing system can utilize the data representing the portions of the physical real-world environment to render, for display by devices 102, 104, and/or 106, one or more images including the portions of the physical real-world environment. For example, data
representing a portion of the physical real-world environment from the perspective of a user of device 104 (e.g., data received from a camera of device 104, and/or the like) can be utilized to render, for display by device 104, image 502 of the portion of the physical real-world environment from the perspective of the user of device 104, which can include imagery 504 of device 102 displaying image 506.
[0078] In such embodiments, the computing system can determine the locations of devices 102, 104, and/or 106 by utilizing data representing one or more portions of the physical real-world environment including image 506 being displayed by device 102 to determine an orientation of image 506 relative to devices 102, 104, and/or 106 within the physical real-world environment. For example, the computing system can determine the locations of devices 102 and/or 104 by utilizing the data received from the camera of device 104 including imagery 504 of image 506 being displayed by device 102 to determine an orientation of image 506 (e.g., of device 102, and/or the like) relative to device 104 within the physical real-world environment.
[0079] FIG. 6 depicts an example method according to example embodiments of the present disclosure. Referring to FIG. 6, at (602), a computing system can receive, from at least one of multiple computing devices, sensor data describing a physical real-world environment in which the computing devices are located. For example, a computing system (e.g., devices 102, 104, and/or 106, system 110, and/or the like) can receive, from devices 102, 104, and/or 106, data (e.g., generated by sensor(s) 118, and/or the like) describing a physical real-world environment in which devices 102, 104, and/or 106 are located (e.g., a physical real-world environment that includes object 202, and/or the like). At (604), the computing system can determine, based on the sensor data, locations of the computing devices relative to one another in the physical real-world environment. For example, the computing system can determine, based on the data received from devices 102, 104, and/or 106, locations of devices 102, 104, and/or 106 relative to one another in the physical real-world environment in which devices 102, 104, and/or 106 are located (e.g., the physical real-world environment that includes object 202, and/or the like). At (606), the computing system can generate, based on the locations and for display by at least one of the computing devices, an AR environment including at least a portion of the physical real-world environment. For example, the computing system can generate, based on the locations of devices 102, 104, and/or 106 and for display by devices 102, 104, and/or 106, an AR environment that includes object 202, element 210, and/or the like.
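A high-level Python sketch of steps (602) through (606) is shown below; it deliberately collapses the location determination of (604) into a trivial, rotation-free placement so that the flow is runnable end to end, and all names and values are illustrative assumptions rather than a definitive implementation of the method.

import math
from typing import Dict, Tuple

Vec2 = Tuple[float, float]

def locate_devices(object_offsets: Dict[str, Vec2]) -> Dict[str, Vec2]:
    """Step (604), greatly simplified: place each device in a shared frame
    anchored at the common stationary object, ignoring rotation. If a device
    observes the object at offset (dx, dz), the device sits at (-dx, -dz)."""
    return {device: (-dx, -dz) for device, (dx, dz) in object_offsets.items()}

def build_turn_indicator(locations: Dict[str, Vec2], current: str) -> Dict:
    """Step (606): generate a virtual element (such as element 210) pointing
    from the current device toward the next player, picked here by a simple
    bearing-based proxy for a seating-order rule (e.g., play to the left)."""
    cx, cz = locations[current]
    bearings = {d: math.atan2(z - cz, x - cx)
                for d, (x, z) in locations.items() if d != current}
    next_device = min(bearings, key=bearings.get)
    return {"type": "turn_indicator", "from": current,
            "points_at": locations[next_device]}

# Step (602): sensor-derived offsets of object 202 as observed by each device.
offsets = {"device_102": (0.0, 0.6), "device_104": (0.5, -0.3),
           "device_106": (-0.5, -0.3)}
element_210 = build_turn_indicator(locate_devices(offsets), "device_102")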
[0080] FIG. 7 depicts an example scene according to example embodiments of the present disclosure. Referring to FIG. 7, a computing system (e.g., devices 102, 104, and/or 106, system 110, and/or the like) can generate an AR environment comprising scene 700 for display (e.g., via display(s) 116, and/or the like) by device 102. Scene 700 can include a portion of a physical real-world environment (e.g., a room, and/or the like) in which device 102 can be located. Accordingly, scene 700 can include one or more physical objects. For example, scene 700 can include object 702 (e.g., a tabletop surface, and/or the like), object 704 (e.g., a wall surface, and/or the like), and/or object 706 (e.g., an electrical outlet, and/or the like). Scene 700 can also include virtual element 708 (e.g., a virtual game board, and/or the like) and/or virtual element 710 (e.g., a virtual cube, and/or the like).
[0081] A user can move (e.g., by manipulating device 102, and/or the like) one or more virtual elements from a first location within the AR environment to a second location within the AR environment. For example, the computing system can receive data generated by user input moving a virtual element (e.g., a targeting reticle, and/or the like) from location 714 to location 712. In accordance with aspects of the disclosure, the computing system can utilize the data generated by the user input and a physics-based model of the AR environment to adjust one or more aspects (e.g., positions, rotations, scales, colors, textures, velocities, accelerations, and/or the like) of one or more virtual elements within the AR environment. For example, a user of device 102 can be playing a game in an AR environment comprising scene 700 that involves moving a virtual element (e.g., the targeting reticle, and/or the like) around element 708 (e.g., the game board, and/or the like) such that it aligns with element 710 (e.g., the cube, and/or the like). Responsive to the user moving the virtual element(s) (e.g., moving the targeting reticle from location 714 to location 712, and/or the like), one or more aspects of one or more virtual elements (e.g., the targeting reticle, elements 708 and/or 710, and/or the like) can be adjusted based on a physics-based model of the AR environment comprising scene 700, for example, a model that takes into account one or more of the virtual element(s) (e.g., the targeting reticle, elements 708 and/or 710, and/or the like) and/or one or more physical objects in the physical real-world environment (e.g., objects 702, 704, and/or 706, and/or the like).
[0082] In some embodiments, the aspect(s) can be adjusted based on a distance between the first location and the second location (e.g., a distance between locations 712 and 714, and/or the like). Additionally or alternatively, the aspect(s) can be adjusted based on a velocity at which the virtual element(s) were moved (e.g., how fast the user moved the virtual element (e.g., the targeting reticle, and/or the like) from location 714 to location 712, and/or the like).
[0083] In some embodiments, the physics-based model can be based on one or more locations and/or one or more dimensions of the virtual element(s) (e.g., the targeting reticle, elements 708 and/or 710, and/or the like). Additionally or alternatively, the physics-based model can be based on one or more locations and/or one or more dimensions of one or more physical objects in the physical real-world environment (e.g., objects 702, 704, and/or 706, and/or the like).
[0084] FIG. 8 depicts an example method according to example embodiments of the present disclosure. Referring to FIG. 8, at (802), a computing system can generate, for display by a computing device, an AR environment comprising one or more virtual elements and at least a portion of a physical real-world environment. For example, a computing system (e.g., devices 102, 104, and/or 106, system 110, and/or the like) can generate an AR environment comprising scene 700 for display (e.g., via display(s) 116, and/or the like) by device 102. At (804), the computing system can receive data generated by user input moving an element of the virtual element(s) from a first location within the AR environment to a second location within the AR environment. For example, the computing system can receive data generated by user input moving a virtual element (e.g., a targeting reticle, and/or the like) from location 714 to location 712. At (806), the computing system can adjust, based on the data generated by the user input and a physics-based model of the AR environment, one or more aspects of the virtual element(s) within the AR environment. For example, responsive to the user moving the virtual element (e.g., moving the targeting reticle from location 714 to location 712, and/or the like), one or more aspects of one or more virtual elements (e.g., the targeting reticle, elements 708 and/or 710, and/or the like) can be adjusted based on a physics-based model of the AR environment comprising scene 700.
[0085] FIG. 9 depicts an example scene according to example embodiments of the present disclosure. Referring to FIG. 9, a computing system (e.g., devices 102, 104, and/or 106, system 110, and/or the like) can generate an AR environment comprising scene 900 for display (e.g., via display(s) 116, and/or the like) by device 102. Scene 900 can include a portion of a physical real-world environment (e.g., a room, and/or the like) in which device 102 can be located. Accordingly, scene 900 can include one or more physical objects. For example, scene 900 can include object 902 (e.g., a tabletop surface, and/or the like), object 904 (e.g., a chair, and/or the like), and/or object 906 (e.g., a computer monitor, and/or the like). Scene 900 can also include virtual elements 910 (e.g., virtual cubes, and/or the like).
[0086] In accordance with aspects of the disclosure, the computing system can determine an area of interest of the AR environment and can generate, for display by the computing device within the AR environment, one or more virtual elements identifying the area of interest. For example, the computing system can determine that the area surrounding object 902 (e.g., the tabletop surface, and/or the like) is an area of interest for the AR environment comprising scene 900, and the computing system can generate, for display by device 102 within the AR environment comprising scene 900, virtual element 908 (e.g., a virtual border delineating the tabletop surface, and/or the like) identifying the area of interest.
[0087] In some embodiments, the computing system can determine the area of interest by determining one or more locations of one or more virtual elements within the AR environment. For example, the computing system can determine that the area surrounding object 902 (e.g., the tabletop surface, and/or the like) is an area of interest for the AR environment comprising scene 900 by determining one or more locations of elements 910 (e.g., the virtual cubes, and/or the like). In such embodiments, the virtual element(s) identifying the area of interest can define a space that includes the location(s) of the virtual element(s) within the AR environment. For example, element 908 can define a space (e.g., a portion of the tabletop surface where the virtual cubes are located, and/or the like) that includes the location(s) of elements 910 (e.g., the virtual cubes, and/or the like).
[0088] In some embodiments, the computing system can determine the area of interest by identifying a physical surface located in the physical real-world environment. For example, the computing system can determine that the area surrounding object 902 (e.g., the tabletop surface, and/or the like) is an area of interest for the AR environment comprising scene 900 by identifying object 902. In such embodiments, the virtual element(s) identifying the area of interest can define a space that includes at least a portion of the physical surface. For example, element 908 can define a space that includes at least a portion of object 902 (e.g., a portion of the tabletop surface where the virtual cubes are located, and/or the like).
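A minimal sketch of one way an area of interest could be derived from virtual element locations and delineated by a border such as element 908 follows; the padding value and the rectangle representation are illustrative assumptions.

```python
# Compute an axis-aligned area of interest on a horizontal surface from the
# positions of virtual elements (y is treated as the up axis).
def area_of_interest(element_positions, padding=0.1):
    """Return (min_x, min_z, max_x, max_z) enclosing all element positions."""
    xs = [p[0] for p in element_positions]
    zs = [p[2] for p in element_positions]
    return (min(xs) - padding, min(zs) - padding,
            max(xs) + padding, max(zs) + padding)


def border_corners(rect):
    """Corner points of a virtual border (like element 908) around the area."""
    min_x, min_z, max_x, max_z = rect
    return [(min_x, min_z), (max_x, min_z), (max_x, max_z), (min_x, max_z)]


# Example: three virtual cubes resting on a tabletop 0.76 m above the floor.
cubes = [(0.10, 0.76, 0.20), (0.35, 0.76, 0.05), (0.22, 0.76, 0.30)]
print(border_corners(area_of_interest(cubes)))
```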
[0089] FIG. 10 depicts an example method according to example embodiments of the present disclosure. Referring to FIG. 10, at (1002), a computing system can generate, for display by a computing device, an AR environment comprising one or more virtual elements and at least a portion of a physical real-world environment. For example, a computing system (e.g., devices 102, 104, and/or 106, system 110, and/or the like) can generate an AR environment comprising scene 900 for display (e.g., via display(s) 116, and/or the like) by device 102. At (1004), the computing system can determine an area of interest of the AR environment. For example, the computing system can determine that the area surrounding object 902 (e.g., the tabletop surface, and/or the like) is an area of interest for the AR environment comprising scene 900. At (1006), the computing system can generate, for display by the computing device within the AR environment, one or more virtual elements identifying the area of interest. For example, the computing system can generate, for display by device 102 within the AR environment comprising scene 900, virtual element 908 (e.g., a virtual border delineating the tabletop surface, and/or the like) identifying the area of interest.
[0090] FIGs. 11 and 12 depict example scenes according to example embodiments of the present disclosure.
[0091] Referring to FIG. 11, a computing system (e.g., devices 102, 104, and/or 106, system 110, and/or the like) can generate an AR environment comprising scene 1100 for display (e.g., via display(s) 116, and/or the like) by device 102. Scene 1100 can include a portion of a physical real-world environment (e.g., a room, and/or the like) in which device 102 can be located. Accordingly, scene 1100 can include one or more physical objects. For example, scene 1100 can include object 1102 (e.g., a wall surface, and/or the like), object 1104 (e.g., a floor surface, and/or the like), and/or object 1106 (e.g., a tabletop surface, and/or the like).
[0092] The computing system can identify one or more physical surfaces located in the physical real-world environment. For example, the computing system can identify (e.g., based on data generated by sensor(s) 118, and/or the like) objects 1102, 1104, and/or 1106.
In some embodiments, a physical surface of the identified physical surface(s) can be selected as a surface defining at least in part a space for which one or more virtual elements can be scaled. For example, the computing system can select, from amongst objects 1102, 1104, and/or 1106, object 1106 (e.g., the tabletop surface, and/or the like) as a surface defining at least in part a space for which one or more virtual elements can be scaled. In some embodiments, a user can select (e.g., by manipulating device 102, and/or the like) the surface defining in part the space for which the virtual element(s) can be scaled. For example, the computing system can receive data generated by user input aligning virtual element 1108 (e.g., a virtual surface, plane, and/or the like) with object 1106 (e.g., the tabletop surface, and/or the like).
[0093] In accordance with aspects of the disclosure, the computing system can generate one or more virtual elements scaled to fit in the space defined at least in part by the physical surface (e.g., the selected physical surface, and/or the like). For example, referring to FIG. 12, the computing system can generate (e.g., for display by device 102, and/or the like) an AR environment comprising scene 1200. As illustrated, scene 1200 can include objects 1102, 1104, and/or 1106. Scene 1200 can also include virtual element 1202 (e.g., a virtual game board, and/or the like) and/or virtual elements 1204 (e.g., virtual bricks, and/or the like). Elements 1202 and/or 1204 can be scaled to fit a space defined at least in part by object 1106 (e.g., the tabletop surface, and/or the like).
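As a rough illustration of the scaling step, the sketch below computes a uniform scale factor so that authored content (e.g., a virtual game board) fits a detected surface; the surface extents, the margin, and the uniform-scale policy are assumptions, not the disclosed implementation.

```python
# Scale virtual content to fit inside a physical surface, leaving a margin.
def fit_scale(content_width, content_depth, surface_width, surface_depth, margin=0.05):
    """Largest uniform scale (capped at 1.0) that keeps content on the surface."""
    usable_w = max(surface_width - 2 * margin, 0.0)
    usable_d = max(surface_depth - 2 * margin, 0.0)
    return min(usable_w / content_width, usable_d / content_depth, 1.0)


# A 2.0 m x 2.0 m virtual game board placed on a 1.2 m x 0.8 m tabletop.
scale = fit_scale(2.0, 2.0, 1.2, 0.8)
print(f"render the game board at {scale:.2f}x of its authored size")
```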
[0094] In some embodiments, the virtual element(s) scaled to fit in the space (e.g., elements 1204, and/or the like) can represent a data set (e.g., search results, and/or the like).
In such embodiments, the computing system can generate the virtual element(s) scaled to fit in the space based on the size of the data set (e.g., to fill the available space, and/or the like).
[0095] In some embodiments, the virtual element(s) scaled to fit in the space can be generated such that they fit in a space defined at least in part by at least two of the identified surfaces. For example, the computing system can generate elements 1204 such that they fit in a space defined at least in part by objects 1106 and 1102 (e.g., a space defined by the tabletop surface and the wall, and/or the like).
[0096] In some embodiments, the computing system can determine one or more dimensions (e.g., height, width, depth, and/or the like) of the space defined at least in part by the physical surface. For example, the computing system can determine one or more dimensions of the space defined at least in part by object 1106 (e.g., a space between the tabletop surface and the ceiling (or top of the wall), and/or the like). In such embodiments, the computing system can select, from amongst multiple different virtual elements, one or more of the virtual element(s) scaled to fit in the space based on the dimension(s). For example, the computing system can select (e.g., from amongst a set of possible virtual elements that includes the bricks, the Eiffel Tower, the Washington Monument, and/or the like) elements 1204 (e.g., the bricks, and/or the like) based on the dimension(s) of the space defined at least in part by object 1106 (e.g., because elements 1204 are size appropriate for such a space, and/or the like). In some embodiments, the physical real-world environment can include multiple computing devices, and determining the dimension(s) can include determining a distance between the devices (e.g., based on sensor data from one or more of the devices, and/or the like). For example, the physical real-world environment can include devices 102, 104, and/or 106, the computing system can determine a distance between device 102 and devices 104 and/or 106 (e.g., based on data generated by sensor(s) 118, and/or the like), and the computing system can determine one or more of the dimension(s) of the space defined at least in part by object 1106 based on such distance(s).

[0097] FIG. 13 depicts an example method according to example embodiments of the present disclosure. Referring to FIG. 13, at (1302), a computing system can generate, for display by a computing device, an AR environment comprising one or more virtual elements and at least a portion of a physical real-world environment. For example, a computing system (e.g., devices 102, 104, and/or 106, system 110, and/or the like) can generate an AR environment comprising scene 1200 for display (e.g., via display(s) 116, and/or the like) by device 102. At (1304), the computing system can identify a physical surface located in the physical real-world environment. For example, the computing system can identify object 1106 (e.g., the tabletop surface, and/or the like). At (1306), the computing system can generate, for display by the computing device within the AR environment, one or more virtual elements scaled to fit in a space defined at least in part by the physical surface located in the physical real-world environment. For example, the computing system can generate elements 1202, 1204, and/or the like.
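The sketch below illustrates one way size-appropriate virtual elements could be selected from a catalog given the measured dimensions of the space, as described in paragraph [0096]; the candidate catalog and its nominal heights are hypothetical.

```python
# Pick a candidate element set whose nominal size fits the measured space.
CANDIDATES = {
    "bricks": 0.1,               # nominal height in meters (assumed)
    "washington_monument": 3.0,  # scaled-down model (assumed)
    "eiffel_tower": 5.0,         # scaled-down model (assumed)
}


def select_elements(space_height_m, catalog=CANDIDATES):
    """Return the tallest candidate that still fits the space above the surface."""
    fitting = {name: h for name, h in catalog.items() if h <= space_height_m}
    if not fitting:
        return None
    return max(fitting, key=fitting.get)


# Height between the tabletop and the ceiling, which could be derived in part
# from distances measured between devices in the room.
print(select_elements(space_height_m=1.6))  # -> "bricks"
```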
[0098] FIG. 14 depicts an example scene according to example embodiments of the present disclosure. Referring to FIG. 14, a computing system (e.g., devices 102, 104, and/or 106, system 110, and/or the like) can generate an AR environment comprising scene 1400 for display (e.g., via display(s) 116, and/or the like) by device 102. Scene 1400 can include a portion of a physical real-world environment (e.g., a room, and/or the like) in which device 102 can be located. Accordingly, scene 1400 can include one or more physical objects. For example, scene 1400 can include object 1402 (e.g., a chair, and/or the like), object 1404 (e.g., a computer monitor, and/or the like), object 1406 (e.g., a tabletop surface, and/or the like), and/or object 1408 (e.g., a piece of paper, and/or the like). Scene 1400 can also include virtual elements 1410 (e.g., virtual paper weights, and/or the like).
[0099] In accordance with aspects of the disclosure, the computing system can identify one or more locations within the AR environment for locating one or more advertisements, and the computing system can generate one or more virtual elements comprising the advertisement(s) for display by the computing device at the location(s) within the AR environment. For example, the computing system can identify locations corresponding to objects 1402, 1404, and/or 1408, and/or elements 1410 for locating one or more
advertisements, and the computing system can generate (e.g., for display by device 102 within the AR environment comprising scene 1400, and/or the like) virtual elements 1410, 1412, 1414, and/or 1418 comprising the advertisement(s).

[0100] In some embodiments, the computing system can select one or more of the advertisement(s) from amongst multiple different possible advertisements. For example, one or more of the advertisement(s) can be selected based on: a geographic location in the physical real-world environment at which the computing device is located (e.g., device 102 can be located in a bar, and an advertisement related to the bar, one or more of its products or services, and/or the like can be selected); a search history associated with the computing device (e.g., device 102 can have been utilized to search for a particular product or service, and an advertisement related to the product or service, and/or the like can be selected); a context of the AR environment (e.g., device 102 can be located in a bar where users are utilizing the AR environment comprising scene 1400 to play a trivia game, and an
advertisement related to the bar, a trivia question, and/or the like can be selected); user performance within the AR environment (e.g., an advertisement for a discount at the bar can be selected based on the user’s performance in the trivia game, and/or the like); one or more objects depicted by one or more virtual elements of the AR environment (e.g., the AR environment comprising scene 1400 can include elements 1410 (e.g., virtual paper weights, and/or the like), and an advertisement related to elements 1410 (e.g., related to paper weights, and/or the like) can be selected); one or more physical objects in the physical real-world environment (e.g., the physical real-world environment in which device 102 is located can include object 1404 (e.g., a computer monitor, and/or the like), and an advertisement related to object 1404 (e.g., related to computer monitors, and/or the like) can be selected); text in the physical real-world environment recognized by a computing device (e.g., object 1404 (e.g., the computer monitor, and/or the like) can include text 1416 (e.g., identifying the brand of the computer monitor, and/or the like), a camera of device 102 can capture imagery of text 1416, the computing system can recognize text 1416, and an advertisement related to text 1416 (e.g., related to the brand of the computer monitor, and/or the like) can be selected); and/or the like.
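As an illustration of combining such signals, the following sketch scores candidate advertisements against a simple context; the candidate ads, the signal names, and the weights are illustrative assumptions rather than a disclosed ranking function.

```python
# Score candidate ads against contextual signals (venue, search history,
# recognized text, nearby physical objects) and pick the highest scorer.
def score_ad(ad, context):
    score = 0.0
    if ad.get("venue") == context.get("venue"):                 # geographic location
        score += 2.0
    if ad.get("topic") in context.get("search_history", []):    # search history
        score += 1.5
    if ad.get("topic") in context.get("recognized_text", []):   # recognized text (e.g., text 1416)
        score += 1.0
    if ad.get("topic") in context.get("nearby_objects", []):    # physical objects (e.g., object 1404)
        score += 1.0
    return score


def select_ad(ads, context):
    return max(ads, key=lambda ad: score_ad(ad, context))


ads = [
    {"id": "bar-happy-hour", "venue": "bar"},
    {"id": "monitor-sale", "topic": "computer monitor"},
]
context = {
    "venue": "bar",
    "search_history": ["hiking boots"],
    "recognized_text": ["acme displays"],
    "nearby_objects": ["computer monitor", "chair"],
}
print(select_ad(ads, context)["id"])  # -> "bar-happy-hour"
```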
[0101] In some embodiments, the computing system can identify the location(s) by identifying a physical object in the physical real-world environment. For example, the computing system can identify object 1404 (e.g., the monitor, and/or the like). In such embodiments, the virtual element(s) comprising the advertisement(s) (e.g., element 1418, and/or the like) can include one or more virtual elements that depict at least a portion of object 1404 (e.g., a virtual monitor, and/or the like), outline at least a portion of object 1404 (e.g., outline the monitor, and/or the like), highlight at least a portion of object 1404 (e.g., cover at least a portion of the monitor with a transparent layer, and/or the like), and/or identify at least a portion of object 1404 (e.g., draw a box around at least a portion of the monitor, and/or the like).
[0102] In some embodiments, the computing system can identify the location(s) by identifying text in the physical real-world environment recognized by a computing device.
For example, the computing system can identify text 1416 (e.g., the text identifying the brand of the monitor, and/or the like). In such embodiments, the virtual element(s) (e.g., element 1418, and/or the like) comprising the advertisement(s) can include one or more virtual elements that depict at least a portion of text 1416 (e.g., virtual text, and/or the like), outline at least a portion of text 1416 (e.g., outline the text identifying the brand of the monitor, and/or the like), highlight at least a portion of text 1416 (e.g., cover at least a portion of the text identifying the brand of the monitor with a transparent layer, and/or the like), and/or identify at least a portion of text 1416 (e.g., draw a box around at least a portion of the text identifying the brand of the monitor, and/or the like).
[0103] In some embodiments, generating the virtual element(s) comprising the advertisement(s) can include modifying one or more dimensions, colors, finishes, lightings, and/or the like of at least one of the advertisement(s). For example, the physical real-world environment can include object 1402 (e.g., the chair, and/or the like); object 1402 can be identified as a location for an advertisement; an advertisement related to object 1402 (e.g., related to chairs, and/or the like) can be selected; and generating the virtual element(s) comprising the selected advertisement (e.g., elements 1414) can include modifying one or more dimensions of the advertisement (e.g., so that the advertisement will fit on a surface of object 1402, and/or the like), colors of the advertisement (e.g., so that the advertisement will be visible on the surface of object 1402, and/or the like), finishes (e.g., matte, glossy, and/or the like) of the advertisement (e.g., so the advertisement will be aesthetically accentuated on the surface of object 1402, and/or the like), and/or lightings (e.g., levels of brightness, contrast, and/or the like) of the advertisement (e.g., so the advertisement will be visible on the surface of object 1402, and/or the like).
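A minimal sketch of adapting an advertisement's presentation to the surface it is placed on follows; the luminance heuristic, the scale rule, and the fixed matte finish are illustrative assumptions.

```python
# Adapt an ad's render parameters (scale, text color, brightness, finish) to
# the dimensions and brightness of the target surface.
def adapt_ad(ad_width, ad_height, surface_width, surface_height, surface_luminance):
    """Return a render spec that fits the surface and stays readable on it."""
    scale = min(surface_width / ad_width, surface_height / ad_height, 1.0)
    # Contrast heuristic: light text on dark surfaces, dark text on light ones.
    text_color = "white" if surface_luminance < 0.5 else "black"
    # Brighten the ad slightly in dim scenes so it remains visible.
    brightness_boost = 0.2 if surface_luminance < 0.3 else 0.0
    return {
        "scale": scale,
        "text_color": text_color,
        "brightness_boost": brightness_boost,
        "finish": "matte",  # avoids specular glare on most surfaces
    }


print(adapt_ad(0.5, 0.3, 0.4, 0.4, surface_luminance=0.2))
```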
[0104] In some embodiments, a user can (e.g., by manipulating device 102, and/or the like) invoke (e.g., select, interact with, and/or the like) one or more of the virtual element(s) comprising the advertisement(s), and the computing system can receive data generated by the user input. Responsive to receiving the data generated by the user input, the computing system can generate (e.g., for display by device 102 within the AR environment comprising scene 1400, and/or the like) one or more virtual elements comprising content associated with one or more of the advertisement(s) associated with the virtual element(s) invoked by the user. For example, a user can invoke a virtual element (e.g., an element of elements 1414, and/or the like) located on a surface of object 1402 (e.g., the chair, and/or the like), and one or more virtual elements (e.g., one or more other elements of elements 1414, and/or the like) comprising an advertisement for object 1402 (e.g., including more details about the chair, and/or the like) can be generated for display by device 102 within the AR environment comprising scene 1400 (e.g., alongside object 1402, on a surface of object 1402, and/or the like). Additionally or alternatively, responsive to receiving the data generated by the user input, the computing system can direct an application distinct from an application providing the AR environment to content associated with one or more of the advertisement(s) associated with the virtual element(s) invoked by the user for display by the computing device within the application distinct from the application associated with the AR
environment. For example, device 102 can include (e.g., execute, and/or the like) one or more applications (e.g., a web browser, an application associated with a merchant, service provider, and/or the like) distinct from an application providing the AR environment comprising scene 1400, and responsive to the user invoking the virtual element (e.g., the element of elements 1414, and/or the like) located on the surface of object 1402, one or more of such application(s) can be directed (e.g., via an application programming interface (API) of such application(s), an advertisement identifier, a uniform resource locator (URL), and/or the like) to content for display by device 102 within such application(s) (e.g., enabling the user to learn more about object 1402, purchase object 1402, and/or the like).
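The following sketch illustrates one possible hand-off when a user invokes an advertisement element, as described above; the advertisement identifier, the landing-page URL, and the use of the system browser as the distinct application are hypothetical.

```python
# Route an invoked ad either to in-AR detail content or to an application
# distinct from the AR application (here, the system web browser).
import webbrowser

AD_LANDING_PAGES = {
    "chair-ad-001": "https://example.com/products/chair?utm_source=ar",  # hypothetical
}


def on_ad_invoked(ad_id, open_externally=True):
    """Return True if the invocation was handled."""
    url = AD_LANDING_PAGES.get(ad_id)
    if url is None:
        return False
    if open_externally:
        # Hands control to an app outside the AR session.
        return webbrowser.open(url)
    # Otherwise the AR app could render the details on a virtual panel instead.
    return True


if __name__ == "__main__":
    on_ad_invoked("chair-ad-001", open_externally=False)
```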
[0105] In some embodiments, the computing system can generate a particular identified location within the AR environment for display within the AR environment by multiple different computing devices (e.g., utilized by different users located in the physical real- world environment, and/or the like). In such embodiments, virtual elements depicting different advertisements can be generated for display at the particular location by the different computing devices. For example, the physical real-world environment can include devices 102 and 104, one or more virtual elements (e.g., element 1412, and/or the like) depicting a first advertisement (e.g., based on a search history associated with device 102, and/or the like) can be generated for display at a particular location within the AR
environment (e.g., on the surface of object 1408, and/or the like) by device 102, and one or more virtual elements (e.g., element 1412, and/or the like) depicting a second advertisement (e.g., based on a search history associated with device 104, and/or the like) can be generated for display at the particular location within the AR environment (e.g., on the surface of object 1408, and/or the like) by device 104.
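A small sketch of serving different creatives to different devices at the same shared anchor follows; the device profiles and anchor identifier are illustrative assumptions.

```python
# Both devices render an ad at the same anchor, but the creative differs per device.
DEVICE_PROFILES = {
    "device_102": {"search_history": ["hiking boots"]},
    "device_104": {"search_history": ["espresso machines"]},
}


def ad_for_device(anchor_id, device_id, profiles=DEVICE_PROFILES):
    interests = profiles.get(device_id, {}).get("search_history", [])
    creative = interests[0] if interests else "generic brand spot"
    return {"anchor": anchor_id, "device": device_id, "creative": f"ad about {creative}"}


for device in DEVICE_PROFILES:
    print(ad_for_device("anchor_object_1408", device))
```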
[0106] FIG. 15 depicts an example method according to example embodiments of the present disclosure. Referring to FIG. 15, at (1502), a computing system can generate, for display by a computing device, an AR environment comprising one or more virtual elements and at least a portion of a physical real-world environment. For example, a computing system (e.g., devices 102, 104, and/or 106, system 110, and/or the like) can generate an AR environment comprising scene 1400 for display (e.g., via display(s) 116, and/or the like) by device 102. At (1504), the computing system can identify one or more locations within the AR environment for locating one or more advertisements. For example, the computing system can identify locations corresponding to objects 1402, 1404, and/or 1408, and/or elements 1410 for locating one or more advertisements. At (1506), the computing system can generate, for display by the computing device at the location(s) within the AR environment, one or more virtual elements comprising the advertisement(s). For example, the computing system can generate (e.g., for display by device 102 within the AR environment comprising scene 1400, and/or the like) virtual elements 1410, 1412, 1414, and/or 1418 comprising the advertisement(s).
[0107] The technology discussed herein makes reference to servers, databases, software applications, and/or other computer-based systems, as well as actions taken and information sent to and/or from such systems. The inherent flexibility of computer-based systems allows for a great variety of possible configurations, combinations, and/or divisions of tasks and/or functionality between and/or among components. For instance, processes discussed herein can be implemented using a single device or component and/or multiple devices or components working in combination. Databases and/or applications can be implemented on a single system and/or distributed across multiple systems. Distributed components can operate sequentially and/or in parallel.
[0108] Various connections between elements are discussed in the above description. These connections are general and, unless specified otherwise, can be direct and/or indirect, wired and/or wireless. In this respect, the specification is not intended to be limiting.
[0109] The depicted and/or described steps are merely illustrative and can be omitted, combined, and/or performed in an order other than that depicted and/or described; the numbering of depicted steps is merely for ease of reference and does not imply any particular ordering is necessary or preferred.
[0110] The functions and/or steps described herein can be embodied in computer-usable data and/or computer-executable instructions, executed by one or more computers and/or other devices to perform one or more functions described herein. Generally, such data and/or instructions include routines, programs, objects, components, data structures, or the like that perform particular tasks and/or implement particular data types when executed by one or more processors in a computer and/or other data-processing device. The computer-executable instructions can be stored on a computer-readable medium such as a hard disk, optical disk, removable storage media, solid-state memory, random-access memory (RAM), or the like. As will be appreciated, the functionality of such instructions can be combined and/or distributed as desired. In addition, the functionality can be embodied in whole or in part in firmware and/or hardware equivalents, such as integrated circuits, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), or the like. Particular data structures can be used to more effectively implement one or more aspects of the disclosure, and such data structures are contemplated to be within the scope of computer-executable instructions and/or computer-usable data described herein.
[0111] Although not required, one of ordinary skill in the art will appreciate that various aspects described herein can be embodied as a method, system, apparatus, and/or one or more computer-readable media storing computer-executable instructions. Accordingly, aspects can take the form of an entirely hardware embodiment, an entirely software embodiment, an entirely firmware embodiment, and/or an embodiment combining software, hardware, and/or firmware aspects in any combination.
[0112] As described herein, the various methods and acts can be operative across one or more computing devices and/or networks. The functionality can be distributed in any manner or can be located in a single computing device (e.g., server, client computer, user device, or the like).
[0113] Aspects of the disclosure have been described in terms of illustrative
embodiments thereof. Numerous other embodiments, modifications, and/or variations within the scope and spirit of the appended claims can occur to persons of ordinary skill in the art from a review of this disclosure. For example, one of ordinary skill in the art can appreciate that the steps depicted and/or described can be performed in other than the recited order and/or that one or more illustrated steps can be optional and/or combined. Any and all features in the following claims can be combined and/or rearranged in any way possible.
[0114] While the present subject matter has been described in detail with respect to various specific example embodiments thereof, each example is provided by way of explanation, not limitation of the disclosure. Those skilled in the art, upon attaining an understanding of the foregoing, can readily produce alterations to, variations of, and/or equivalents to such embodiments. Accordingly, the subject disclosure does not preclude inclusion of such modifications, variations, and/or additions to the present subject matter as would be readily apparent to one of ordinary skill in the art. For instance, features illustrated and/or described as part of one embodiment can be used with another embodiment to yield a still further embodiment. Thus, it is intended that the present disclosure cover such alterations, variations, and/or equivalents.
EXAMPLE EMBODIMENTS:
1. A computer-implemented method comprising:
generating, by a computing system and for display by a computing device, an augmented reality (AR) environment comprising one or more virtual elements and at least a portion of a physical real-world environment;
receiving, from the computing device, data generated by user input moving an element of the one or more virtual elements from a first location within the AR environment to a second location within the AR environment; and
adjusting, based on the data and a physics-based model of the AR environment, one or more aspects of the one or more virtual elements within the AR environment.
2. The computer-implemented method of embodiment 1, wherein adjusting the one or more aspects comprises adjusting the one or more aspects based on a distance between the first location and the second location.
3. The computer-implemented method of embodiment 1, wherein adjusting the one or more aspects comprises adjusting the one or more aspects based on a velocity at which the element was moved from the first location to the second location.

4. The computer-implemented method of embodiment 1, wherein the physics-based model is based at least in part on one or more of: one or more locations of the one or more virtual elements, or one or more dimensions of the one or more virtual elements.

5. The computer-implemented method of embodiment 1, wherein the physics-based model is based at least in part on one or more of: one or more locations of one or more physical objects in the physical real-world environment, or one or more dimensions of one or more physical objects in the physical real-world environment.

6. A computer-implemented method comprising:
generating, by a computing system and for display by a computing device, an augmented reality (AR) environment comprising one or more virtual elements and at least a portion of a physical real-world environment;
determining, by the computing system, an area of interest of the AR
environment; and
generating, by the computing system and for display by the computing device within the AR environment, one or more virtual elements identifying the area of interest.

7. The computer-implemented method of embodiment 6, wherein determining the area of interest comprises determining one or more locations of one or more virtual elements within the AR environment.

8. The computer-implemented method of embodiment 7, wherein the one or more virtual elements identifying the area of interest define a space comprising the one or more locations of the one or more virtual elements within the AR environment.

9. The computer-implemented method of embodiment 6, wherein determining the area of interest comprises identifying a physical surface located in the physical real-world environment.

10. The computer-implemented method of embodiment 9, wherein the one or more virtual elements identifying the area of interest define a space comprising at least a portion of the physical surface located in the physical real-world environment.

11. A computer-implemented method comprising:
generating, by a computing system and for display by a computing device, an augmented reality (AR) environment comprising one or more virtual elements and at least a portion of a physical real-world environment;
identifying, by the computing system, one or more locations within the AR environment for locating one or more advertisements; and
generating, by the computing system and for display by the computing device at the one or more locations within the AR environment, one or more virtual elements comprising the one or more advertisements.

12. The computer-implemented method of embodiment 11, comprising selecting, by the computing system, from amongst a plurality of different advertisements, and based on a geographic location in the physical real-world environment at which the computing device is located, at least one of the one or more advertisements.

13. The computer-implemented method of embodiment 11, comprising selecting, by the computing system, from amongst a plurality of different advertisements, and based on a search history associated with the computing device, at least one of the one or more advertisements.

14. The computer-implemented method of embodiment 11, comprising selecting, by the computing system, from amongst a plurality of different advertisements, and based on a context of the AR environment, at least one of the one or more advertisements.

15. The computer-implemented method of embodiment 11, comprising selecting, by the computing system, from amongst a plurality of different advertisements, and based on user performance within the AR environment, at least one of the one or more advertisements.

16. The computer-implemented method of embodiment 11, comprising selecting, by the computing system, from amongst a plurality of different advertisements, and based on one or more objects depicted by one or more virtual elements of the AR environment, at least one of the one or more advertisements.

17. The computer-implemented method of embodiment 11, comprising selecting, by the computing system, from amongst a plurality of different advertisements, and based on one or more physical objects in the physical real-world environment, at least one of the one or more advertisements.

18. The computer-implemented method of embodiment 11, comprising selecting, by the computing system, from amongst a plurality of different advertisements, and based on text in the physical real-world environment recognized by the computing system, at least one of the one or more advertisements.

19. The computer-implemented method of embodiment 11, wherein:
identifying the one or more locations comprises identifying a physical object in the physical real-world environment; and
generating the one or more virtual elements comprising the one or more advertisements comprises generating one or more virtual elements that one or more of depict at least a portion of the physical object, outline at least a portion of the physical object, highlight at least a portion of the physical object, or identify at least a portion of the physical object.

20. The computer-implemented method of embodiment 11, wherein:
identifying the one or more locations comprises identifying text in the physical real-world environment recognized by the computing system; and
generating the one or more virtual elements comprising the one or more advertisements comprises generating one or more virtual elements that one or more of depict at least a portion of the text, outline at least a portion of the text, highlight at least a portion of the text, or identify at least a portion of the text.

21. The computer-implemented method of embodiment 11, wherein generating the one or more virtual elements comprising the one or more advertisements comprises modifying one or more of:
one or more dimensions of at least one of the one or more advertisements;
one or more colors of at least one of the one or more advertisements;
one or more finishes of at least one of the one or more advertisements; or
one or more lightings of at least one of the one or more advertisements.

22. The computer-implemented method of embodiment 11, comprising, responsive to receiving, from the computing device, data generated by user input invoking at least one of the one or more virtual elements comprising the one or more advertisements, generating, by the computing system and for display by the computing device within the AR environment, one or more virtual elements comprising content associated with at least one of the one or more advertisements associated with the at least one of the one or more virtual elements.

23. The computer-implemented method of embodiment 11, comprising, responsive to receiving, from the computing device, data generated by user input invoking at least one of the one or more virtual elements comprising the one or more advertisements, directing, by the computing system and for display by the computing device within an application distinct from an application providing the AR environment, the application distinct from the application providing the AR environment to content associated with at least one of the one or more advertisements associated with the at least one of the one or more virtual elements.

24. The computer-implemented method of embodiment 11, wherein:
identifying the one or more locations within the AR environment for locating the one or more advertisements comprises identifying a particular location within the AR environment;
generating the one or more virtual elements comprising the one or more advertisements comprises generating, for display by the computing device at the particular location within the AR environment, one or more virtual elements comprising a first advertisement; and the method comprises generating, by the computing system and for display by a different computing device at the particular location within the AR environment, one or more virtual elements comprising a second advertisement, the second advertisement being different from the first advertisement.

Claims

WHAT IS CLAIMED IS:
1. A computer-implemented method comprising:
receiving, by a computing system and from at least one of two computing devices, sensor data describing a physical real-world environment in which the two computing devices are located, the receiving comprising:
receiving, from a camera of a first computing device of the two computing devices, data representing a first portion of the physical real-world environment comprising at least one stationary object, and
receiving, from a camera of a second computing device of the two computing devices, data representing a second portion of the physical real-world environment comprising the at least one stationary object;
determining, by the computing system and based on the sensor data, locations of the two computing devices relative to one another in the physical real-world environment, the determining comprising comparing the data representing the first portion of the physical real-world environment to the data representing the second portion of the physical real-world environment to determine a difference in vantage point between the first computing device and the second computing device with respect to the at least one stationary object; and
generating, by the computing system, based on the locations, and for display by at least one of the two computing devices, an augmented reality (AR) environment comprising at least a portion of the physical real-world environment.
2. The computer-implemented method of claim 1, wherein generating the AR
environment comprises generating an AR environment that depicts at least one of the locations.
3. The computer-implemented method of claim 1, wherein receiving the sensor data comprises receiving sensor data generated by one or more accelerometers, gyroscopes, global positioning system (GPS) receivers, or wireless network interfaces.
4. The computer-implemented method of claim 1, comprising:
rendering, for display by the first computing device and based on the data representing the first portion of the physical real-world environment, an AR environment comprising the first portion of the physical real-world environment and one or more virtual elements;
rendering, for display by the second computing device and based on the data representing the second portion of the physical real-world environment, an AR environment comprising the second portion of the physical real-world environment and the one or more virtual elements;
receiving, from the first computing device, data generated by user input aligning the one or more virtual elements with the at least one stationary object within the AR environment comprising the first portion of the physical real-world environment; and
receiving, from the second computing device, data generated by user input aligning the one or more virtual elements with the at least one stationary object within the AR environment comprising the second portion of the physical real-world environment.
5. The computer-implemented method of claim 4, wherein determining the locations comprises comparing the data generated by the user input aligning the one or more virtual elements with the at least one stationary object within the AR environment comprising the first portion of the physical real-world environment and the data generated by the user input aligning the one or more virtual elements with the at least one stationary object within the AR environment comprising the second portion of the physical real-world environment.
6. The computer-implemented method of claim 4, wherein the one or more virtual
elements comprise one or more virtual depictions of the at least one stationary object.
7. The computer-implemented method of claim 1, comprising:
rendering, for display by the first computing device and based on the data representing the first portion of the physical real-world environment, an image comprising the first portion of the physical real-world environment;
rendering, for display by the second computing device and based on the data representing the second portion of the physical real-world environment, an image comprising the second portion of the physical real-world environment;
receiving, from the first computing device, data generated by user input selecting one or more portions of the at least one stationary object within the image comprising the first portion of the physical real-world environment; and
receiving, from the second computing device, data generated by user input selecting one or more portions of the at least one stationary object within the image comprising the second portion of the physical real-world environment.
8. The computer-implemented method of claim 7, wherein determining the locations comprises comparing the data generated by the user input selecting the one or more portions of the at least one stationary object within the image comprising the first portion of the physical real-world environment and the data generated by the user input selecting the one or more portions of the at least one stationary object within the image comprising the second portion of the physical real-world environment.
9. The computer-implemented method of claim 1, comprising generating, for display by a first computing device of the two computing devices, an image, and wherein:
receiving the sensor data comprises receiving, from a camera of a second computing device of the two computing devices, data representing a portion of the physical real-world environment comprising the image being displayed by the first computing device; and
determining the locations comprises determining, based on the data representing the portion of the physical real-world environment, an orientation of the image being displayed by the first computing device relative to the second computing device within the physical real-world environment.
10. The computer-implemented method of claim 1, wherein the AR environment
comprises one or more virtual elements corresponding to a game being played by users of the two computing devices, the method comprising determining, based on the locations and in accordance with one or more rules of the game, an element of the game.
11. The computer-implemented method of claim 1, comprising determining a proximity of the two computing devices to one another.
12. The computer-implemented method of claim 11, comprising determining, based on the proximity, one or more of a level of audio associated with the AR environment to be produced by at least one of the two computing devices, or a degree of physical feedback associated with the AR environment to be produced by at least one of the two computing devices.
13. A computer-implemented method comprising:
generating, by a computing system and for display by a computing device, an augmented reality (AR) environment comprising one or more virtual elements and at least a portion of a physical real-world environment comprising the computing device;
identifying, by the computing system, a physical surface located in the physical real-world environment; and
generating, by the computing system and for display by the computing device within the AR environment, one or more virtual elements scaled to fit in a space defined at least in part by the physical surface located in the physical real-world environment.
14. The computer-implemented method of claim 13, wherein:
the one or more virtual elements scaled to fit in the space represent a data set; and
generating the one or more virtual elements scaled to fit in the space comprises generating, based on a size of the data set, the one or more virtual elements scaled to fit in the space.
15. The computer-implemented method of claim 13, wherein:
identifying the physical surface comprises identifying at least two physical surfaces located in the physical real-world environment; and
generating the one or more virtual elements scaled to fit in the space comprises generating one or more virtual elements scaled to fit in a space defined at least in part by the at least two physical surfaces located in the physical real-world environment.
16. The computer-implemented method of claim 13, wherein identifying the physical surface comprises selecting the physical surface from amongst at least two physical surfaces located in the physical real-world environment.
17. The computer-implemented method of claim 16, wherein the selecting is based on user input identifying the physical surface.
18. The computer-implemented method of claim 17, wherein the user input comprises aligning one or more virtual elements with the physical surface.
19. The computer-implemented method of claim 13, comprising determining, by the computing system, one or more dimensions of the space defined at least in part by the physical surface located in the physical real-world environment.
20. The computer-implemented method of claim 19, comprising selecting, from amongst a plurality of different virtual elements and based on the one or more dimensions, at least one of the one or more virtual elements scaled to fit in the space.
21. The computer-implemented method of claim 19, wherein:
the physical real-world environment comprises another computing device; and
determining the one or more dimensions comprises determining the one or more dimensions based on a distance between the computing device and the other computing device.
PCT/US2018/042244 2017-12-15 2018-07-16 Methods and systems for generating augmented reality environments WO2019118002A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762599432P 2017-12-15 2017-12-15
US62/599,432 2017-12-15

Publications (1)

Publication Number Publication Date
WO2019118002A1 true WO2019118002A1 (en) 2019-06-20

Family

ID=63207857

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2018/042244 WO2019118002A1 (en) 2017-12-15 2018-07-16 Methods and systems for generating augmented reality environments

Country Status (1)

Country Link
WO (1) WO2019118002A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130293548A1 (en) * 2009-11-16 2013-11-07 Sony Corporation Information processing apparatus, information processing method, program, and information processing system
US20140362084A1 (en) * 2011-02-15 2014-12-11 Sony Corporation Information processing device, authoring method, and program
US20130328927A1 (en) * 2011-11-03 2013-12-12 Brian J. Mount Augmented reality playspaces with adaptive game rules
US20150356774A1 (en) * 2014-06-09 2015-12-10 Microsoft Corporation Layout design using locally satisfiable proposals
US20160189426A1 (en) * 2014-12-30 2016-06-30 Mike Thomas Virtual representations of real-world objects
US20170076499A1 (en) * 2015-09-11 2017-03-16 Futurewei Technologies, Inc. Markerless Multi-User, Multi-Object Augmented Reality on Mobile Devices

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
ARNE SCHMITZ ET AL: "Ad-Hoc Multi-Displays for Mobile Interactive Applications", 31ST ANNUAL CONFERENCE OF THE EUROPEAN ASSOCIATION FOR COMPUTER GRAPHICS (EUROGRAPHICS 2,, vol. 29, no. 2, 3 May 2010 (2010-05-03), pages 1 - 8, XP007920453 *
BARRETT ENS ET AL: "Spatial Constancy of Surface-Embedded Layouts across Multiple Environments", PROCEEDINGS OF THE 3RD ACM SYMPOSIUM ON SPATIAL USER INTERACTION, SUI '15, 8 August 2015 (2015-08-08), New York, New York, USA, pages 65 - 68, XP055530210, ISBN: 978-1-4503-3703-8, DOI: 10.1145/2788940.2788954 *
BROLL W ET AL: "Toward Next-Gen Mobile AR Games", IEEE COMPUTER GRAPHICS AND APPLICATIONS, IEEE SERVICE CENTER, NEW YORK, NY, US, vol. 28, no. 4, 1 July 2008 (2008-07-01), pages 40 - 48, XP011229502, ISSN: 0272-1716, DOI: 10.1109/MCG.2008.85 *
CHEN WEN-JIE ET AL: "Effective Registration for Multiple Users AR System", 2016 IEEE INTERNATIONAL SYMPOSIUM ON MIXED AND AUGMENTED REALITY (ISMAR-ADJUNCT), IEEE, 19 September 2016 (2016-09-19), pages 270 - 271, XP033055463, DOI: 10.1109/ISMAR-ADJUNCT.2016.0092 *
DANPING ZOU ET AL: "CoSLAM: Collaborative Visual SLAM in Dynamic Environments", IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, IEEE COMPUTER SOCIETY, USA, vol. 35, no. 2, 1 February 2013 (2013-02-01), pages 354 - 366, XP011490796, ISSN: 0162-8828, DOI: 10.1109/TPAMI.2012.104 *
JASON ORLOSKY ET AL: "Dynamic text management for see-through wearable and heads-up display systems", INTELLIGENT USER INTERFACES, ACM, 2 PENN PLAZA, SUITE 701 NEW YORK NY 10121-0701 USA, 19 March 2013 (2013-03-19), pages 363 - 370, XP058064736, ISBN: 978-1-4503-1965-2, DOI: 10.1145/2449396.2449443 *
JENS GRUBERT ET AL: "A Survey of Calibration Methods for Optical See-Through Head-Mounted Displays", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 13 September 2017 (2017-09-13), XP080820787, DOI: 10.1109/TVCG.2017.2754257 *
KIYOUNG KIM ET AL: "Keyframe-based modeling and tracking of multiple 3D objects", MIXED AND AUGMENTED REALITY (ISMAR), 2010 9TH IEEE INTERNATIONAL SYMPOSIUM ON, 1 October 2010 (2010-10-01), Piscataway, NJ, USA, pages 193 - 198, XP055491854, ISBN: 978-1-4244-9343-2, DOI: 10.1109/ISMAR.2010.5643569 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115004130A (en) * 2020-01-24 2022-09-02 斯纳普公司 Augmented reality presentation
US12299827B2 (en) 2022-10-17 2025-05-13 T-Mobile Usa, Inc. Generating mixed reality content based on a location of a wireless device

Similar Documents

Publication Publication Date Title
US10325410B1 (en) Augmented reality for enhancing sporting events
US11854148B2 (en) Virtual content display opportunity in mixed reality
Anthes et al. State of the art of virtual reality technology
KR101637990B1 (en) Spatially correlated rendering of three-dimensional content on display components having arbitrary positions
EP2887322B1 (en) Mixed reality holographic object development
US20170084084A1 (en) Mapping of user interaction within a virtual reality environment
Pucihar et al. Exploring the evolution of mobile augmented reality for future entertainment systems
CN103324453B (en) Display
KR101732839B1 (en) Segmentation of content delivery
CN107168534B (en) Rendering optimization method and projection method based on CAVE system
US11188975B2 (en) Digital model optimization responsive to orientation sensor data
KR20140082610A (en) Method and apaaratus for augmented exhibition contents in portable terminal
KR20210034692A (en) Mixed reality graduated information delivery
KR20160112898A (en) Method and apparatus for providing dynamic service based augmented reality
CN116917842A (en) Systems and methods for generating stable images of real environments in artificial reality
JP2022507502A (en) Augmented Reality (AR) Imprint Method and System
US20170214980A1 (en) Method and system for presenting media content in environment
JP2020509505A (en) Method, apparatus and computer program for providing augmented reality
WO2019118002A1 (en) Methods and systems for generating augmented reality environments
Schmalstieg et al. Augmented reality as a medium for cartography
CN106683152B (en) 3D visual effect analogy method and device
Kim et al. A view direction-driven approach for automatic room mapping in mixed reality
CN111344744A (en) Method for presenting a three-dimensional object, and related computer program product, digital storage medium and computer system
US11302079B2 (en) Systems and methods for displaying and interacting with a dynamic real-world environment
Hamadouche Augmented reality X-ray vision on optical see-through head mounted displays

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18755336

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18755336

Country of ref document: EP

Kind code of ref document: A1