CN104011788A - Systems and methods for augmented and virtual reality - Google Patents
- Publication number
- CN104011788A (application CN201280064922.8A)
- Authority
- CN
- China
- Prior art keywords
- user
- virtual
- virtual world
- data
- world
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/40—Support for services or applications
- H04L65/401—Support for services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time sensitive sessions, e.g. white board sharing or spawning of a subconference
- H04L65/4015—Support for services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time sensitive sessions, e.g. white board sharing or spawning of a subconference where at least one of the additional parallel sessions is real time or time sensitive, e.g. white board sharing, collaboration or spawning of a subconference
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/20—Input arrangements for video game devices
- A63F13/21—Input arrangements for video game devices characterised by their sensors, purposes or types
- A63F13/211—Input arrangements for video game devices characterised by their sensors, purposes or types using inertial sensors, e.g. accelerometers or gyroscopes
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/20—Input arrangements for video game devices
- A63F13/21—Input arrangements for video game devices characterised by their sensors, purposes or types
- A63F13/213—Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/20—Input arrangements for video game devices
- A63F13/21—Input arrangements for video game devices characterised by their sensors, purposes or types
- A63F13/215—Input arrangements for video game devices characterised by their sensors, purposes or types comprising means for detecting acoustic signals, e.g. using a microphone
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/20—Input arrangements for video game devices
- A63F13/21—Input arrangements for video game devices characterised by their sensors, purposes or types
- A63F13/216—Input arrangements for video game devices characterised by their sensors, purposes or types using geographical information, e.g. location of the game device or player using GPS
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/25—Output arrangements for video game devices
- A63F13/28—Output arrangements for video game devices responding to control signals received from the game device for affecting ambient conditions, e.g. for vibrating players' seats, activating scent dispensers or affecting temperature or light
- A63F13/285—Generating tactile feedback signals via the game input device, e.g. force feedback
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/30—Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
- A63F13/35—Details of game servers
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/40—Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
- A63F13/42—Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
- A63F13/424—Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving acoustic input signals, e.g. by using the results of pitch or rhythm extraction or voice recognition
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/40—Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
- A63F13/42—Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
- A63F13/428—Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving motion or position input signals, e.g. signals representing the rotation of an input controller or a player's arm motions sensed by accelerometers or gyroscopes
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/60—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
- A63F13/65—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
- A63F13/655—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition by importing photos, e.g. of the player
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/80—Special adaptations for executing a specific game genre or game mode
- A63F13/812—Ball games, e.g. soccer or baseball
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/90—Constructional details or arrangements of video game devices not provided for in groups A63F13/20 or A63F13/25, e.g. housing, wiring, connections or cabinets
- A63F13/92—Video game devices specially adapted to be hand-held while playing
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/017—Head mounted
- G02B27/0172—Head mounted characterised by optical features
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B30/00—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/954—Navigation, e.g. using categorised browsing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/002—Specific input/output arrangements not covered by G06F3/01 - G06F3/16
- G06F3/005—Input arrangements through a video camera
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/016—Input arrangements with force or tactile feedback as computer generated output to the user
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/0346—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/02—Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/131—Protocols for games, networked simulations or virtual reality
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/10—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals
- A63F2300/1087—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals comprising photodetecting means, e.g. a camera
- A63F2300/1093—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals comprising photodetecting means, e.g. a camera using visible light
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/50—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
- A63F2300/55—Details of game data or player data management
- A63F2300/5546—Details of game data or player data management using player registration data, e.g. identification, account, preferences, game history
- A63F2300/5553—Details of game data or player data management using player registration data, e.g. identification, account, preferences, game history user representation in the game field, e.g. avatar
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/50—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
- A63F2300/57—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers details of game services offered to the player
- A63F2300/577—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers details of game services offered to the player for watching a game played by other players
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/60—Methods for processing data by generating or executing the game program
- A63F2300/66—Methods for processing data by generating or executing the game program for rendering three dimensional images
- A63F2300/6607—Methods for processing data by generating or executing the game program for rendering three dimensional images for animating game characters, e.g. skeleton kinematics
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/60—Methods for processing data by generating or executing the game program
- A63F2300/69—Involving elements of the real world in the game world, e.g. measurement in live races, real video
- A63F2300/695—Imported photos, e.g. of the player
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/80—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
- A63F2300/8082—Virtual reality
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/014—Head-up displays characterised by optical features comprising information/image processing systems
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/0141—Head-up displays characterised by optical features characterised by the informative content of the display
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/011—Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L69/00—Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
- H04L69/14—Multichannel or multilink protocols
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- General Physics & Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Databases & Information Systems (AREA)
- Radar, Positioning & Navigation (AREA)
- Computer Hardware Design (AREA)
- Software Systems (AREA)
- Computer Graphics (AREA)
- Data Mining & Analysis (AREA)
- Remote Sensing (AREA)
- Optics & Photonics (AREA)
- Environmental & Geological Engineering (AREA)
- Acoustics & Sound (AREA)
- Computer Security & Cryptography (AREA)
- User Interface Of Digital Computer (AREA)
- Processing Or Creating Images (AREA)
- Stereophonic System (AREA)
- Information Transfer Between Computers (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
Description
Related Application Data
This application claims the benefit under 35 U.S.C. § 119 of U.S. Provisional Application Serial No. 61/552,941, filed October 28, 2011, which is hereby incorporated by reference in its entirety.
Technical Field
The present invention generally relates to systems and methods configured to facilitate interactive virtual or augmented reality environments for one or more users.
Background
Virtual and augmented reality environments are generated, at least in part, by computers using data that describes the environment. This data may describe, for example, various objects that a user can sense and interact with. Examples of such objects include objects rendered and displayed for a user to see, audio played for a user to hear, and tactile (or haptic) feedback for a user to feel. Users may sense and interact with virtual and augmented reality environments through a variety of visual, auditory, and tactile means.
Summary of the Invention
One embodiment is directed to a system for enabling two or more users to interact within a virtual world comprising virtual world data, the system comprising a computer network comprising one or more computing devices, the one or more computing devices comprising memory, processing circuitry, and software stored at least in part in the memory and executable by the processing circuitry to process at least a portion of the virtual world data; wherein at least a first portion of the virtual world data originates from a first virtual world local to a first user, and wherein the computer network is operable to transmit the first portion to a user device for presentation to a second user, such that the second user may experience the first portion from the second user's location, such that aspects of the first user's virtual world are effectively passed to the second user. The first and second users may be in different physical locations or in substantially the same physical location.
At least a portion of the virtual world may be configured to change in response to a change in the virtual world data. At least a portion of the virtual world may be configured to change in response to a physical object sensed by the user device. The change in the virtual world data may represent a virtual object having a predetermined relationship with the physical object. The change in the virtual world data may be provided to a second user device for presentation to the second user in accordance with the predetermined relationship. The virtual world may be operable to be rendered by at least one of a computer server or a user device. The virtual world may be presented in a two-dimensional format. The virtual world may be presented in a three-dimensional format. The user device may be operable to provide an interface for enabling interaction between a user and the virtual world in an augmented reality mode. The user device may be operable to provide an interface for enabling interaction between a user and the virtual world in a virtual reality mode. The user device may be operable to provide an interface for enabling interaction between a user and the virtual world in a combination of augmented and virtual reality modes. The virtual world data may be transmitted over a data network. The computer network may be operable to receive at least a portion of the virtual world data from a user device. At least a portion of the virtual world data transmitted to the user device may comprise instructions for generating at least a portion of the virtual world. At least a portion of the virtual world data may be transmitted to a gateway for at least one of processing or distribution. At least one of the one or more computer servers may be operable to process virtual world data distributed by the gateway.
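The data flow just described — a change originating in one user's local virtual world being transmitted through the computer network so a second user can experience it — can be sketched in miniature. The patent discloses no implementation, so every class, field, and method name here is an illustrative assumption, not part of the disclosure:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

@dataclass
class VirtualObject:
    """A virtual object, optionally anchored to a sensed physical object."""
    object_id: str
    position: Tuple[float, float, float]
    anchored_to: Optional[str] = None  # id of a sensed physical object, if any

@dataclass
class ComputeServer:
    """Relays one user's local virtual world changes to other users' devices."""
    inboxes: Dict[str, List[VirtualObject]] = field(default_factory=dict)

    def register(self, user_id: str) -> None:
        self.inboxes[user_id] = []

    def publish(self, source_user: str, change: VirtualObject) -> None:
        # Transmit the first user's local change to every other user's
        # device, so each may experience it from their own location.
        for user_id, inbox in self.inboxes.items():
            if user_id != source_user:
                inbox.append(change)

server = ComputeServer()
server.register("user_1")
server.register("user_2")
# user_1's device senses a physical table and anchors a virtual object to it
# (a predetermined relationship between a virtual and a physical object).
server.publish("user_1", VirtualObject("monster", (1.0, 0.0, 2.0), anchored_to="table"))
print(len(server.inboxes["user_2"]))  # → 1
```

A real system would of course replace the in-memory inboxes with network transport and would render the received change on the second user's device rather than merely queue it.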
Another embodiment is directed to a system for virtual and/or augmented user experience wherein an avatar is animated based at least in part upon data on a wearable device, with optional input from voice inflection and facial recognition software.
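The blending of wearable-device data with the optional voice and facial inputs might look like the following sketch; the function and field names are assumptions made for illustration only:

```python
from typing import Optional

def animate_avatar(pose_data: dict,
                   voice_inflection: Optional[float] = None,
                   facial_expression: Optional[str] = None) -> dict:
    """Build one avatar animation frame from wearable pose data, blending
    in voice-inflection and facial-recognition inputs when present."""
    frame = {"pose": pose_data}
    if voice_inflection is not None:
        frame["mouth_open"] = voice_inflection   # e.g. amplitude-driven mouth shape
    if facial_expression is not None:
        frame["expression"] = facial_expression  # e.g. label from face recognition
    return frame

# A frame driven by head pose and voice alone, with no facial input available.
frame = animate_avatar({"head": (0.0, 0.0, 0.0)}, voice_inflection=0.7)
print(frame["mouth_open"])  # → 0.7
```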
Another embodiment is directed to a system for virtual and/or augmented user experience wherein a camera pose, or viewpoint position and vector, may be placed anywhere in a world sector.
Another embodiment is directed to a system for virtual and/or augmented user experience wherein a world, or portions thereof, may be rendered for observing users at a variety of selectable scales.
Another embodiment is directed to a system for virtual and/or augmented user experience wherein features, such as points or parametric lines, in addition to pose-tagged images, may be utilized as base data for a world model from which software robots, or object recognizers, may be utilized to create parametric representations of real-world objects, tagging source features for mutual inclusion in segmented objects and the world model.
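A toy stand-in for such an object recognizer can show the idea of turning raw point features into a parametric representation — here, a least-squares line fit over 2-D points. This is purely illustrative; the disclosure does not specify any particular fitting method or representation:

```python
def recognize_line(points):
    """Fit a parametric line y = m*x + b to 2-D point features by ordinary
    least squares, returning a parametric representation of the object."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return {"type": "line", "slope": m, "intercept": b}

# Point features sampled along the edge of a (hypothetical) real-world table.
model = recognize_line([(0, 1), (1, 3), (2, 5)])
print(model["slope"])  # → 2.0
```

A production recognizer would work on 3-D features, handle noise and outliers, and tag which source features contributed to each segmented object so they can be shared back into the world model.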
Brief Description of the Drawings
FIG. 1 illustrates a representative embodiment of the disclosed system for facilitating an interactive virtual or augmented reality environment for multiple users.
FIG. 2 illustrates an example of a user device for interacting with the system illustrated in FIG. 1.
FIG. 3 illustrates an example embodiment of a mobile, wearable user device.
FIG. 4 illustrates an example of objects viewed by a user when the mobile, wearable user device of FIG. 3 is operating in an augmented mode.
FIG. 5 illustrates an example of objects viewed by a user when the mobile, wearable user device of FIG. 3 is operating in a virtual mode.
FIG. 6 illustrates an example of objects viewed by a user when the mobile, wearable user device of FIG. 3 is operating in a blended virtual interface mode.
FIG. 7 illustrates an embodiment wherein two users located in different geographic locations each interact with the other user and a common virtual world through their respective user devices.
FIG. 8 illustrates the embodiment of FIG. 7 expanded to include the use of a haptic device.
FIG. 9A illustrates an example of mixed-mode interfacing, wherein a first user is interfacing with a digital world in a blended virtual interface mode and a second user is interfacing with the same digital world in a virtual reality mode.
FIG. 9B illustrates another example of mixed-mode interfacing, wherein the first user is interfacing with a digital world in a blended virtual interface mode and the second user is interfacing with the same digital world in an augmented reality mode.
FIG. 10 illustrates an example of a user's view when interfacing with the system in an augmented reality mode.
FIG. 11 illustrates an example of a user's view showing a virtual object triggered by a physical object when the user is interfacing with the system in an augmented reality mode.
FIG. 12 illustrates one embodiment of an augmented and virtual reality integration configuration, wherein one user in an augmented reality experience visualizes the presence of another user in a virtual reality experience.
FIG. 13 illustrates one embodiment of a time- and/or contingency-event-based augmented reality experience configuration.
FIG. 14 illustrates one embodiment of a user display configuration suitable for virtual and/or augmented reality experiences.
FIG. 15 illustrates one embodiment of coordination between local and cloud-based computing.
FIG. 16 illustrates various aspects of registration configurations.
具体实施方式Detailed ways
Referring to FIG. 1, system 100 is representative hardware for implementing the processes described below. This representative system comprises a computing network 105 composed of one or more computer servers 110 connected through one or more high-bandwidth interfaces 115. The servers in the computing network need not be co-located. Each of the one or more servers 110 comprises one or more processors for executing program instructions. The servers also include memory for storing the program instructions and data that is used and/or generated by processes carried out by the servers under the direction of the program instructions.
The computing network 105 communicates data between the servers 110, and between the servers and one or more user devices 120, over one or more data network connections 130. Examples of such data networks include, without limitation, any and all types of public and private data networks, both mobile and wired, including for example the interconnection of many such networks commonly referred to as the Internet. No particular medium, topology, or protocol is intended to be implied by the figure.
User devices are configured for communicating directly with the computing network 105, or with any of the servers 110. Alternatively, user devices 120 communicate with the remote servers 110, and optionally with other user devices locally, through a specially programmed local gateway 140 for processing data and/or for communicating data between the network 105 and one or more local user devices 120.
As illustrated, gateway 140 is implemented as a separate hardware component, which includes a processor for executing software instructions and memory for storing software instructions and data. The gateway has its own wired and/or wireless connection to data networks for communicating with the servers 110 comprising computing network 105. Alternatively, gateway 140 can be integrated with a user device 120, which is worn or carried by a user. For example, the gateway 140 may be implemented as a downloadable software application installed and running on a processor included in the user device 120. In one embodiment, the gateway 140 provides one or more users access to the computing network 105 via the data network 130.
Servers 110 each include, for example, working memory and storage for storing data and software programs, microprocessors for executing program instructions, and graphics processors and other special-purpose processors for rendering and generating graphics, images, video, audio, and multimedia files. Computing network 105 may also comprise devices for storing data that is accessed, used, or created by the servers 110.
Software programs running on the servers, and optionally on the user devices 120 and gateways 140, are used to generate digital worlds (also referred to herein as virtual worlds) with which users interact using user devices 120. A digital world is represented by data and processes that describe and/or define virtual, non-existent entities, environments, and conditions that can be presented to a user through a user device 120 for the user to experience and interact with. For example, some type of object, entity, or item that will appear to be physically present when instantiated in a scene being viewed or experienced by a user may include a description of its appearance, its behavior, how a user is permitted to interact with it, and other characteristics. Data used to create an environment of a virtual world (including virtual objects) may include, for example, atmospheric data, terrain data, weather data, temperature data, location data, and other data used to define and/or describe a virtual environment. Additionally, data defining various conditions that govern the operation of a virtual world may include, for example, laws of physics, time, spatial relationships, and other data that may be used to define and/or create virtual conditions that govern the operation of a virtual world (including virtual objects).
Entities, objects, conditions, characteristics, behaviors, or other features of a digital world will be generically referred to herein, unless the context indicates otherwise, as objects (e.g., digital objects, virtual objects, rendered physical objects, etc.). Objects may be any type of animate or inanimate object, including but not limited to buildings, plants, vehicles, people, animals, creatures, machines, data, video, text, pictures, and other users. Objects may also be defined in a digital world for storing information about items, behaviors, or conditions actually present in the physical world. The data that describes or defines an entity, object, or item, or that stores its current state, is generally referred to herein as object data. This data is processed by the servers 110, or, depending on the implementation, by a gateway 140 or user device 120, to instantiate an instance of the object and render the object in an appropriate manner for the user to experience through a user device.
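The notion of object data that both describes an object and is processed to instantiate an instance of it can be sketched as a small data structure. The class, field, and function names below are illustrative assumptions for this sketch, not part of the disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class ObjectData:
    """Describes/defines an object of a digital world (its appearance,
    behavior, and interaction rules) and stores its current state."""
    object_id: str
    appearance: dict          # e.g., mesh/texture references
    behavior: dict            # e.g., scripted responses
    interaction_rules: dict   # how users are permitted to interact with it
    state: dict = field(default_factory=dict)  # current mutable state

def instantiate(data: ObjectData, position: tuple) -> dict:
    """Create a renderable instance of an object at a given position.
    Depending on the implementation, this processing could run on a
    server 110, a gateway 140, or a user device 120."""
    return {
        "object_id": data.object_id,
        "position": position,
        "appearance": data.appearance,
        # Each instance carries its own copy of the state, so changing
        # one instance does not alter the defining object data.
        "state": dict(data.state),
    }
```

A renderer would then draw each returned instance; the separation between shared definition and per-instance state mirrors the object-data/state-data distinction used later in this description.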
The programmers who develop and/or curate a digital world create or define the objects and the conditions under which they are instantiated. However, a digital world can allow others to create or modify objects. Once an object is instantiated, the state of the object may be permitted to be altered, controlled, or manipulated by one or more users experiencing the digital world.
For example, in one embodiment, development, production, and administration of a digital world are generally provided by one or more system administrative programmers. In some embodiments, this may include development, design, and/or execution of story lines, themes, and events in the digital worlds, as well as distribution of narratives through various forms of events and media such as, for example, film, digital, network, mobile, augmented reality, and live entertainment. The system administrative programmers may also handle technical administration, moderation, and curation of the digital worlds and the user communities associated therewith, as well as other tasks typically performed by network administrative personnel.
Users interact with one or more digital worlds using some type of local computing device, which is generally designated as a user device 120. Examples of such user devices include, but are not limited to, a smart phone, tablet device, heads-up display (HUD), gaming console, or any other device capable of communicating data and providing an interface or display to the user, as well as combinations of such devices. In some embodiments, the user device 120 may include, or communicate with, local peripheral or input/output components such as, for example, a keyboard, mouse, joystick, gaming controller, haptic interface device, motion capture controller, optical tracking devices (such as those available from Leap Motion, Inc., or those available from Microsoft under the trade name Kinect (RTM)), audio equipment, voice equipment, projector systems, 3D displays, and holographic 3D contact lenses.
An example of a user device 120 for interacting with the system 100 is illustrated in FIG. 2. In the example embodiment shown in FIG. 2, a user 210 may interface with one or more digital worlds through a smart phone 220. The gateway is implemented by a software application 230 stored on and running on the smart phone 220. In this particular example, the data network 130 includes a wireless mobile network connecting the user device (i.e., smart phone 220) to the computer network 105.
In one implementation of a preferred embodiment, the system 100 is capable of supporting a large number of simultaneous users (e.g., millions of users), each interfacing with the same digital world, or with multiple digital worlds, using some type of user device 120.
The user device provides to the user an interface for enabling visual, audible, and/or physical interaction between the user and a digital world generated by the servers 110, including other users and objects (real or virtual) presented to the user. The interface provides the user with a rendered scene that can be viewed, heard, or otherwise sensed, and the ability to interact with the scene in real time. The manner in which the user interacts with the rendered scene may be dictated by the capabilities of the user device. For example, if the user device is a smart phone, the user interaction may be implemented by the user contacting a touch screen. In another example, if the user device is a computer or gaming console, the user interaction may be implemented using a keyboard or gaming controller. User devices may include additional components that enable user interaction, such as sensors, wherein the objects and information (including gestures) detected by the sensors may be provided as input representing user interaction with the virtual world using the user device.
The rendered scene may be presented in various formats such as, for example, two-dimensional or three-dimensional visual displays (including projections), sound, and haptic or tactile feedback. The rendered scene may be interfaced by the user in one or more modes including, for example, augmented reality, virtual reality, and combinations thereof. The format of the rendered scene, as well as the interface modes, may be dictated by one or more of the following: the user device, the data processing capability, the user device connectivity, the network capacity, and the system workload. Having a large number of users simultaneously interacting with the digital worlds, and the real-time nature of the data exchange, is enabled by the computing network 105, the servers 110, the gateway component 140 (optionally), and the user device 120.
In one example, the computing network 105 is comprised of a large-scale computing system having single- and/or multi-core servers (i.e., servers 110) connected through high-speed connections (e.g., high-bandwidth interfaces 115). The computing network 105 may form a cloud or grid network. Each of the servers includes memory, or is coupled with computer-readable memory, for storing software for implementing data to create, design, alter, or process objects of a digital world. These objects and their instantiations may be dynamic, come in and out of existence, change over time, and change in response to other conditions. Examples of dynamic capabilities of the objects are generally discussed herein with respect to various embodiments. In some embodiments, each user interfacing with the system 100 may also be represented as an object, and/or a collection of objects, within one or more digital worlds.
The servers 110 within the computing network 105 also store computational state data for each of the digital worlds. The computational state data (also referred to herein as state data) may be a component of the object data, and generally defines the state of an instance of an object at a given instant in time. Thus, the computational state data may change over time, and may be impacted by the actions of one or more users and/or programmers maintaining the system 100. As a user impacts the computational state data (or other data comprising the digital worlds), the user directly alters or otherwise manipulates the digital world. If the digital world is shared with, or interfaced by, other users, the actions of the user may affect what is experienced by other users interacting with the digital world. Thus, in some embodiments, changes to the digital world made by a user will be experienced by other users interfacing with the system 100.
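The propagation described above, where one user's change to the computational state is experienced by every other user interfacing with the same digital world, can be sketched as a publish/subscribe update loop. This is a minimal sketch under assumed names; the disclosure does not prescribe any particular mechanism:

```python
class SharedWorldState:
    """Minimal sketch of shared computational state: a change made by one
    user to an object instance is pushed to every other interfacing user."""

    def __init__(self):
        self._instance_states = {}   # instance_id -> state dict (server-side copy)
        self._subscribers = []       # one callback per interfacing user

    def subscribe(self, callback):
        """Register a user session to receive state updates."""
        self._subscribers.append(callback)

    def update(self, instance_id, new_state):
        # A user action alters the state of an object instance...
        self._instance_states[instance_id] = new_state
        # ...and the change is propagated to all interfacing users, so the
        # shared digital world stays consistent across sessions.
        for notify in self._subscribers:
            notify(instance_id, new_state)
```

In a deployment, the callbacks would send the update over the data network connections 130 to each user's device or gateway rather than invoke a local function.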
In one embodiment, data stored in one or more of the servers 110 within the computing network 105 is transmitted or deployed at high speed and with low latency to one or more user devices 120 and/or gateway components 140. In one embodiment, the object data shared by the servers may be complete data, or may be compressed and contain instructions for recreating the full object data on the user side, to be rendered and visualized by the user's local computing device (e.g., gateway 140 and/or user device 120). In some embodiments, software running on the servers 110 of the computing network 105 may adapt the data it generates and sends to a particular user device 120 for objects within the digital world (or any other data exchanged by the computing network 105) as a function of the user's specific device and bandwidth. For example, when a user interacts with the digital world through a user device 120, a server 110 may recognize the specific type of device being used by the user, the device's connectivity, and/or the available bandwidth between the user device and the server, and appropriately size and balance the data being delivered to the device to optimize the user interaction. An example of this may include reducing the size of the transmitted data to low-resolution quality, so that the data may be displayed on a particular user device having a low-resolution display. In a preferred embodiment, the computing network 105 and/or gateway component 140 deliver data to the user device 120 at a rate sufficient to present an interface operating at 15 frames per second or higher, and at a resolution of high definition quality or greater.
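The device- and bandwidth-aware sizing described above can be sketched as a simple profile selector. The specific tiers and thresholds here are illustrative assumptions; the disclosure only requires that data be scaled to the device (e.g., low-resolution quality for low-resolution displays) while sustaining at least 15 frames per second:

```python
def choose_delivery_profile(bandwidth_mbps, display_height_px):
    """Pick a resolution/frame-rate profile for a user device as a function
    of its available bandwidth and display capability (illustrative tiers)."""
    profiles = [
        # (min bandwidth Mbps, min display height px, resolution, fps)
        (25.0, 1080, "1920x1080", 60),
        (10.0, 720,  "1280x720",  30),
        (0.0,  0,    "640x360",   15),   # floor: 15 fps minimum
    ]
    for min_bw, min_height, resolution, fps in profiles:
        # First profile the device and link can sustain wins.
        if bandwidth_mbps >= min_bw and display_height_px >= min_height:
            return {"resolution": resolution, "fps": fps}
```

A server 110 or gateway 140 would re-evaluate such a selection as connectivity changes, rebalancing the stream to keep the interaction smooth.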
The gateway 140 provides a local connection to the computing network 105 for one or more users. In some embodiments, it may be implemented by a downloadable software application running on the user device 120 or another local device, such as that shown in FIG. 2. In other embodiments, it may be implemented by a hardware component (with appropriate software/firmware stored on the component, the component having a processor) that is either in communication with, but not incorporated with or attached to, the user device 120, or incorporated with the user device 120. The gateway 140 communicates with the computing network 105 via the data network 130, and provides data exchange between the computing network 105 and one or more local user devices 120. As discussed in greater detail below, the gateway component 140 may include software, firmware, memory, and processing circuitry, and may be capable of processing data communicated between the network 105 and one or more local user devices 120.
In some embodiments, the gateway component 140 monitors and regulates the rate of the data exchanged between the user device 120 and the computer network 105 to allow optimum data processing capabilities for the particular user device 120. For example, in some embodiments, the gateway 140 buffers and downloads both static and dynamic aspects of a digital world, even those beyond the field of view presented to the user through the interface connected to the user device. In such embodiments, instances of static objects (structured data, software-implemented methods, or both) may be stored in memory (local to the gateway component 140, the user device 120, or both) and referenced against the local user's current position, as indicated by data provided by the computing network 105 and/or the user device 120. Instances of dynamic objects, which may include, for example, intelligent software agents and objects controlled by other users and/or the local user, may be stored in a high-speed memory buffer. Dynamic objects representing a two- or three-dimensional object within the scene presented to a user can, for example, be broken down into component shapes, such as a static shape that moves but does not change, and a dynamic shape that changes. The part of the dynamic object that is changing can be updated by a real-time, threaded, high-priority data stream from a server 110, through the computing network 105, managed by the gateway component 140. As one example of a prioritized threaded data stream, data that is within a 60-degree field of view of the user's eyes may be given higher priority than data that is more peripheral. Another example includes prioritizing dynamic characters and/or objects within the user's field of view over static objects in the background.
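The two prioritization examples above can be sketched as a scoring function used to order update streams. The numeric weights are illustrative assumptions; the disclosure only states that data within a 60-degree field of view, and dynamic objects in view, are favored over peripheral data and static background objects:

```python
def stream_priority(angle_from_gaze_deg, is_dynamic, in_field_of_view):
    """Score an object's update stream for the gateway-managed,
    prioritized threaded data flow (higher score = updated first)."""
    priority = 0
    if angle_from_gaze_deg <= 30:      # within a 60-degree cone (+/- 30 degrees)
        priority += 2
    if is_dynamic and in_field_of_view:
        priority += 1                  # dynamic objects in view beat static background
    return priority

# The gateway would sort pending updates by this score before streaming.
updates = [
    ("background_building", stream_priority(70, False, False)),
    ("nearby_character",    stream_priority(10, True, True)),
]
updates.sort(key=lambda u: u[1], reverse=True)
```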
In addition to managing the data connection between the computing network 105 and a user device 120, the gateway component 140 may store and/or process data that may be presented to the user device 120. For example, in some embodiments, the gateway component 140 may receive compressed data from the computing network 105 describing, for example, graphical objects to be rendered for viewing by a user, and perform advanced rendering techniques to alleviate the data load transmitted to the user device 120 from the computing network 105. In another example, in which the gateway 140 is a separate device, the gateway 140 may store and/or process data for a local instance of an object rather than transmitting that data to the computing network 105 for processing.
Referring now also to FIG. 3, the digital worlds may be experienced by one or more users in various formats, which may depend on the capabilities of the user's device. In some embodiments, the user device 120 may include, for example, a smart phone, tablet device, heads-up display (HUD), gaming console, or wearable device. Generally, the user device will include a processor for executing program code stored in memory on the device, coupled with a display, and a communications interface. An example embodiment of a user device is illustrated in FIG. 3, wherein the user device comprises a mobile, wearable device, namely a head-mounted display system 300. In accordance with an embodiment of the present disclosure, the head-mounted display system 300 includes a user interface 302, a user-sensing system 304, an environment-sensing system 306, and a processor 308. Although the processor 308 is shown in FIG. 3 as a separate component isolated from the head-mounted display system 300, in alternate embodiments the processor 308 may be integrated with one or more components of the head-mounted display system 300, or may be integrated into other system 100 components such as, for example, the gateway 140.
The user device presents to the user an interface 302 for interacting with and experiencing a digital world. Such interaction may involve the user and the digital world, one or more other users interfacing with the system 100, and objects within the digital world. The interface 302 generally provides image and/or audio sensory input (and, in some embodiments, physical sensory input) to the user. Thus, the interface 302 may include speakers (not shown) and a display component 303 capable, in some embodiments, of enabling stereoscopic 3D viewing and/or 3D viewing that embodies more natural characteristics of the human vision system. In some embodiments, the display component 303 may comprise a transparent interface (such as a clear OLED) which, when in an "off" setting, enables an optically correct view of the physical environment around the user with little to no optical distortion or computing overlay. As discussed in greater detail below, the interface 302 may include additional settings that allow for a variety of visual/interface performance and functionality.
In some embodiments, the user-sensing system 304 may include one or more sensors 310 operable to detect certain features, characteristics, or information related to the individual user wearing the system 300. For example, in some embodiments, the sensors 310 may include a camera or optical detection/scanning circuitry capable of detecting real-time optical characteristics/measurements of the user such as, for example, one or more of the following: pupil constriction/dilation, angular measurement/positioning of each pupil, spherocity, eye shape (as eye shape changes over time), and other anatomic data. This data may provide, or be used to calculate, information (e.g., the user's visual focal point) that may be used by the head-mounted system 300 and/or interface system 100 to optimize the user's viewing experience. For example, in one embodiment, the sensors 310 may each measure a rate of pupil contraction for each of the user's eyes. This data may be transmitted to the processor 308 (or the gateway component 140 or a server 110), where the data is used to determine, for example, the user's reaction to a brightness setting of the interface display 303. The interface 302 may then be adjusted in accordance with the user's reaction by, for example, dimming the display 303 if the user's reaction indicates that the brightness level of the display 303 is too high. The user-sensing system 304 may include other components in addition to those discussed above or illustrated in FIG. 3. For example, in some embodiments, the user-sensing system 304 may include a microphone for receiving voice input from the user. The user-sensing system may also include one or more infrared camera sensors, one or more visible-spectrum camera sensors, structured light emitters and/or sensors, infrared light emitters, coherent light emitters and/or sensors, gyros, accelerometers, magnetometers, proximity sensors, GPS sensors, ultrasonic emitters and detectors, and haptic interfaces.
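The pupil-driven brightness adjustment described above can be sketched as a small feedback rule. The threshold and step values are illustrative assumptions; the disclosure only describes dimming the display when the measured reaction indicates the brightness level is too high:

```python
def adjust_brightness(current_brightness, pupil_contraction_rate,
                      contraction_threshold=0.5, dim_step=0.1):
    """Dim the display if the user's pupil response suggests it is too
    bright. Brightness is a fraction in [0, 1]; the contraction rate
    would come from the sensors 310 (units are an assumption here)."""
    if pupil_contraction_rate > contraction_threshold:
        # Rapid pupil constriction -> display likely too bright; dim it.
        return max(0.0, current_brightness - dim_step)
    return current_brightness
```

In the described system, this decision could run on the processor 308, the gateway component 140, or a server 110, with the result applied to the interface display 303.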
The environment-sensing system 306 includes one or more sensors 312 for obtaining data from the physical environment around the user. Objects or information detected by the sensors may be provided as input to the user device. In some embodiments, this input may represent user interaction with the virtual world. For example, a user viewing a virtual keyboard on a desk may gesture with his fingers as if he were typing on the virtual keyboard. The motion of the fingers moving may be captured by the sensors 312 and provided to the user device or system as input, wherein the input may be used to change the virtual world or create new virtual objects. For example, the motion of the fingers may be recognized (using a software program) as typing, and the recognized gesture of typing may be combined with the known locations of the virtual keys on the virtual keyboard. The system may then render a virtual monitor displayed to the user (or to other users interfacing with the system), wherein the virtual monitor displays the text being typed by the user.
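Combining recognized typing gestures with the known locations of virtual keys can be sketched as a nearest-key hit test. The coordinate scheme and hit radius are illustrative assumptions; the disclosure only states that the recognized gestures are combined with known key locations to produce the typed text:

```python
def keys_from_fingertips(fingertip_positions, key_layout, radius=0.02):
    """Map detected fingertip 'press' positions (x, y in meters on the
    desk plane, an assumed convention) onto virtual key locations,
    yielding the characters to show on the virtual monitor."""
    typed = []
    for fx, fy in fingertip_positions:
        for char, (kx, ky) in key_layout.items():
            # A press counts if the fingertip lands within the key's radius.
            if (fx - kx) ** 2 + (fy - ky) ** 2 <= radius ** 2:
                typed.append(char)
                break
    return "".join(typed)
```

Upstream, a gesture recognizer would decide which fingertip samples constitute presses; downstream, the resulting string would be rendered on the virtual monitor for the user and any other interfacing users.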
The sensors 312 may include, for example, a generally outward-facing camera, or a scanner for interpreting scene information, for example, through continuously and/or intermittently projected infrared structured light. The environment-sensing system 306 may be used for mapping one or more elements of the physical environment around the user by detecting and registering the local environment, including static objects, dynamic objects, people, gestures, and various lighting, atmospheric, and acoustic conditions. Thus, in some embodiments, the environment-sensing system 306 may include image-based 3D reconstruction software embedded in a local system (e.g., the gateway component 140 or the processor 308) and operable to digitally reconstruct one or more objects or information detected by the sensors 312. In one exemplary embodiment, the environment-sensing system 306 provides one or more of the following: motion-capture data (including gesture recognition), depth sensing, facial recognition, object recognition, unique object feature recognition, voice/audio recognition and processing, acoustic source localization, noise reduction, infrared or similar laser projection, monochrome and/or color CMOS sensors (or other similar sensors), field-of-view sensors, and a variety of other optical-enhancing sensors. It should be appreciated that the environment-sensing system 306 may include other components in addition to those discussed above or illustrated in FIG. 3. For example, in some embodiments, the environment-sensing system 306 may include a microphone for receiving audio from the local environment. The environment-sensing system may also include one or more infrared camera sensors, one or more visible-spectrum camera sensors, structured light emitters and/or sensors, infrared light emitters, coherent light emitters and/or sensors, gyros, accelerometers, magnetometers, proximity sensors, GPS sensors, ultrasonic emitters and detectors, and haptic interfaces.
As mentioned above, the processor 308 may, in some embodiments, be integrated with other components of the head-mounted system 300, integrated with other components of the interface system 100, or may be an isolated device (wearable or separate from the user) as shown in FIG. 3. The processor 308 may be connected to the various components of the head-mounted system 300 and/or components of the interface system 100 through a physical, wired connection, or through a wireless connection such as, for example, mobile network connections (including cellular telephone and data networks), Wi-Fi, or Bluetooth. The processor 308 may include a memory module, an integrated and/or additional graphics processing unit, wireless and/or wired Internet connectivity, and codec and/or firmware capable of transforming data from a source (e.g., the computing network 105, the user-sensing system 304, the environment-sensing system 306, or the gateway component 140) into image and audio data, wherein the images/video and audio may be presented to the user via the interface 302.
The processor 308 handles data processing for the various components of the head-mounted system 300, as well as data exchange between the head-mounted system 300 and the gateway component 140 and, in some embodiments, the computing network 105. For example, the processor 308 may be used to buffer and process data streaming between the user and the computing network 105, thereby enabling a smooth, continuous, and high-fidelity user experience. In some embodiments, the processor 308 may process data at a rate sufficient to achieve anywhere between 8 frames/second at 320x240 resolution and 24 frames/second at high-definition resolution (1280x720), or greater, such as 60-120 frames/second and 4k resolution and higher (10k+ resolution and 50,000 frames/second). Additionally, the processor 308 may store and/or process data that may be presented to the user, rather than streamed in real time from the computing network 105. For example, in some embodiments, the processor 308 may receive compressed data from the computing network 105 and perform advanced rendering techniques (such as lighting or shading) to alleviate the data load transmitted from the computing network 105 to the user device 120. In another example, the processor 308 may store and/or process local object data rather than transmitting the data to the gateway component 140 or the computing network 105.
In some embodiments, the head-mounted system 300 may include various settings, or modes, that allow for a variety of visual/interface performance and functionality. The modes may be selected manually by the user, or automatically by components of the head-mounted system 300 or the gateway component 140. As previously mentioned, one example mode of the head-mounted system 300 includes an "off" mode, wherein the interface 302 provides substantially no digital or virtual content. In the off mode, the display component 303 may be transparent, thereby enabling an optically correct view of the physical environment around the user with little to no optical distortion or computing overlay.
In one example embodiment, the head-mounted system 300 includes an "augmented" mode, wherein the interface 302 provides an augmented reality interface. In the augmented mode, the interface display 303 may be substantially transparent, thereby allowing the user to view the local, physical environment. At the same time, virtual object data provided by the computing network 105, the processor 308, and/or the gateway component 140 is presented on the display 303 in combination with the physical, local environment.
FIG. 4 illustrates an example embodiment of objects viewed by a user when the interface 302 is operating in the augmented mode. As shown in FIG. 4, the interface 302 presents a physical object 402 and a virtual object 404. In the embodiment illustrated in FIG. 4, the physical object 402 is a real, physical object existing in the user's local environment, whereas the virtual object 404 is an object created by the system 100 and displayed via the user interface 302. In some embodiments, the virtual object 404 may be displayed at a fixed position or location within the physical environment (e.g., a virtual monkey standing next to a particular street sign located in the physical environment), or may be displayed to the user as an object located at a position relative to the user interface/display 303 (e.g., a virtual clock or thermometer visible in the upper-left corner of the display 303).
In some embodiments, virtual objects may be cued off of, or triggered by, an object physically present within or outside the user's field of view. The virtual object 404 may be cued off of, or triggered by, the physical object 402. For example, the physical object 402 may actually be a stool, and the virtual object 404 may be displayed to the user (and, in some embodiments, to other users interfacing with the system 100) as a virtual animal standing on the stool. In such an embodiment, the environment-sensing system 306 may use software and/or firmware stored, for example, in the processor 308 to recognize various features and/or shape patterns (captured by the sensors 312) to identify the physical object 402 as a stool. These recognized shape patterns, such as, for example, the stool top, may be used to trigger the placement of the virtual object 404. Other examples include walls, tables, furniture, cars, buildings, people, floors, plants, animals - any object that can be seen can be used to trigger an augmented reality experience in some relationship to the object or objects.
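The stool-to-animal cueing described above can be pictured as a lookup from a recognized shape pattern to a virtual-object placement. The following is a minimal sketch in Python; the trigger table, class labels, and function names are illustrative assumptions, not part of the disclosed system.

```python
# Illustrative sketch of cueing a virtual object off of a recognized
# physical object, in the spirit of the stool/animal example above.
# The trigger table, labels, and anchors are hypothetical.

TRIGGER_TABLE = {
    "stool":        {"object": "virtual_animal",   "anchor": "top_surface"},
    "diving_board": {"object": "snorkel_creature", "anchor": "free_end"},
    "wall":         {"object": "virtual_poster",   "anchor": "center"},
}

def trigger_virtual_object(recognized_class, pose):
    """Return a virtual-object placement for a recognized shape pattern,
    or None when no augmentation is associated with the object."""
    entry = TRIGGER_TABLE.get(recognized_class)
    if entry is None:
        return None
    return {"object": entry["object"],
            "anchor": entry["anchor"],
            "position": pose}

placement = trigger_virtual_object("stool", (1.0, 0.0, 2.5))
```

In this reading, the recognition stage (software/firmware acting on sensor 312 data) supplies only the class label and pose; the table is what associates a physical feature with the experience it triggers.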
In some embodiments, the particular virtual object 404 that is triggered may be selected by the user, or automatically selected by other components of the head-mounted system 300 or the interface system 100. Additionally, in embodiments in which the virtual object 404 is automatically triggered, the particular virtual object 404 may be selected based upon the particular physical object 402 (or feature thereof) off of which the virtual object 404 is cued or triggered. For example, if the physical object is identified as a diving board extending over a pool, the triggered virtual object may be a creature wearing a snorkel, bathing suit, flotation device, or other related item.
In another example embodiment, the head-mounted system 300 may include a "virtual" mode, wherein the interface 302 provides a virtual reality interface. In the virtual mode, the physical environment is omitted from the display 303, and virtual object data provided by the computing network 105, the processor 308, and/or the gateway component 140 is presented on the display 303. The omission of the physical environment may be accomplished by physically blocking the visual display 303 (e.g., via a cover) or through a feature of the interface 302 wherein the display 303 transitions to an opaque setting. In the virtual mode, live and/or stored visual and audio sensory input may be presented to the user through the interface 302, and the user experiences and interacts with a digital world (digital objects, other users, etc.) through the virtual mode of the interface 302. Thus, the interface provided to the user in the virtual mode is comprised of virtual object data comprising a virtual, digital world.
FIG. 5 illustrates an example embodiment of a user interface when the head-mounted interface 302 is operating in the virtual mode. As shown in FIG. 5, the user interface presents a virtual world 500 comprised of digital objects 510, wherein the digital objects 510 may include atmosphere, weather, terrain, buildings, and people. Although not illustrated in FIG. 5, digital objects may also include, for example, plants, vehicles, animals, creatures, machines, artificial intelligence, location information, and any other object or information defining the virtual world 500.
In another example embodiment, the head-mounted system 300 may include a "blended" mode, wherein various features of the head-mounted system 300 (as well as features of the virtual and augmented modes) may be combined to create one or more custom interface modes. In one example custom interface mode, the physical environment is omitted from the display 303, and virtual object data is presented on the display 303 in a manner similar to the virtual mode. However, in this example custom interface mode, virtual objects may be fully virtual (i.e., they do not exist in the local, physical environment), or they may be real, local, physical objects rendered as virtual objects in the interface 302 in place of the physical objects. Thus, in this particular custom mode (referred to herein as a blended virtual interface mode), live and/or stored visual and audio sensory input may be presented to the user through the interface 302, and the user experiences and interacts with a digital world comprising fully virtual objects and rendered physical objects.
FIG. 6 illustrates an example embodiment of a user interface operating in accordance with the blended virtual interface mode. As shown in FIG. 6, the user interface presents a virtual world 600 comprised of fully virtual objects 610 and rendered physical objects 620 (renderings of objects otherwise physically present in the scene). In accordance with the example illustrated in FIG. 6, the rendered physical objects 620 include a building 620A, the ground 620B, and a platform 620C, and are shown with a bold outline 630 to indicate to the user that the objects are rendered. Additionally, the fully virtual objects 610 include an additional user 610A, clouds 610B, the sun 610C, and flames 610D on top of the platform 620C. It should be appreciated that the fully virtual objects 610 may additionally include, for example, atmosphere, weather, terrain, buildings, people, plants, vehicles, animals, creatures, machines, artificial intelligence, location information, and any other object or information defining the virtual world 600 that is not rendered from objects existing in the local, physical environment. Conversely, the rendered physical objects 620 are real, local, physical objects rendered as virtual objects in the interface 302. The bold outline 630 represents one example for indicating rendered physical objects to a user. As such, rendered physical objects may be indicated as such using methods other than those disclosed herein.
In some embodiments, the rendered physical objects 620 may be detected using the sensors 312 of the environment-sensing system 306 (or using other devices, such as a motion or image capture system) and converted into digital object data by software and/or firmware stored, for example, in the processing circuitry 308. Thus, as the user interfaces with the system 100 in the blended virtual interface mode, various physical objects may be displayed to the user as rendered physical objects. This may be especially useful for allowing the user to interface with the system 100 while still being able to safely navigate the local, physical environment. In some embodiments, the user may be able to selectively remove or add the rendered physical objects to the interface display 303.
In another example custom interface mode, the interface display 303 may be substantially transparent, thereby allowing the user to view the local, physical environment, while various local, physical objects are displayed to the user as rendered physical objects. This example custom interface mode is similar to the augmented mode, except that one or more of the virtual objects may be rendered physical objects, as discussed above with respect to the previous example.
These example custom interface modes represent a few example embodiments of the various custom interface modes capable of being provided by the blended mode of the head-mounted system 300. Accordingly, various other custom interface modes may be created from the various combinations of features and functionality provided by the components of the head-mounted system 300 and the various modes discussed above without departing from the scope of the present disclosure.
The embodiments discussed herein merely describe a few examples for providing an interface operating in an off, augmented, virtual, or blended mode, and are not intended to limit the scope or content of the respective interface modes or the functionality of the components of the head-mounted system 300. For example, in some embodiments, the virtual objects may include data displayed to the user (time, temperature, elevation, etc.), objects created and/or selected by the system 100, objects created and/or selected by the user, or even objects representing other users interfacing with the system 100. Additionally, the virtual objects may include an extension of a physical object (e.g., a virtual sculpture growing from a physical platform), and may be visually connected to, or disconnected from, a physical object.
The virtual objects may also be dynamic and change with time, change in accordance with various relationships (e.g., location, distance, etc.) between the user or other users, physical objects, and other virtual objects, and/or change in accordance with other variables specified in the software and/or firmware of the head-mounted system 300, the gateway component 140, or the servers 110. For example, in certain embodiments, a virtual object may respond to a user device or component thereof (e.g., a virtual ball moves when a haptic device is placed next to it), physical or verbal user interaction (e.g., a virtual creature runs away when the user approaches it, or speaks when the user speaks to it), a chair being thrown at a virtual creature such that the creature dodges the chair, other virtual objects (e.g., a first virtual creature reacts when it sees a second virtual creature), physical variables such as location, distance, temperature, time, etc., or other physical objects in the user's environment (e.g., a virtual creature shown standing in a physical street becomes flattened when a physical automobile passes by).
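A hedged way to picture the proximity-driven behavior in this paragraph (a creature that runs away when approached, or speaks when spoken to) is a per-frame state function of simple physical variables. The thresholds and names below are invented for illustration and are not part of the disclosure.

```python
import math

# Illustrative thresholds (meters); not from the disclosure.
FLEE_RADIUS = 2.0
TALK_RADIUS = 1.0

def creature_state(user_pos, creature_pos, user_speaking):
    """Choose a virtual creature's behavior from simple physical
    variables: distance to the user and whether the user is speaking."""
    d = math.dist(user_pos, creature_pos)
    if user_speaking and d <= TALK_RADIUS:
        return "speak"
    if d <= FLEE_RADIUS:
        return "run_away"
    return "idle"

state = creature_state((0.0, 0.0, 0.0), (1.5, 0.0, 0.0), False)
```

Evaluated once per rendered frame, a rule of this shape makes the object "dynamic" in exactly the sense above: its behavior is a function of location, distance, and user interaction rather than a fixed animation.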
The various modes discussed herein may be applied to user devices other than the head-mounted system 300. For example, an augmented reality interface may be provided via a mobile phone or tablet device. In such an embodiment, the phone or tablet may use a camera to capture the physical environment around the user, and virtual objects may be overlaid on the phone/tablet display screen. Additionally, the virtual mode may be provided by displaying the digital world on the phone/tablet display screen. Accordingly, these modes may be blended to create various custom interface modes, as described above, using the components of the phone or tablet discussed herein, as well as other components connected to, or used in combination with, the user device. For example, the blended virtual interface mode may be provided by a computer monitor, television screen, or other device lacking a camera, operating in combination with a motion or image capture system. In this example embodiment, the virtual world may be viewed from the monitor/screen, and the object detection and rendering may be performed by the motion or image capture system.
FIG. 7 illustrates an example embodiment of the present disclosure, wherein two users located in different geographical locations each interact with the other user and a common virtual world through their respective user devices. In this embodiment, the two users 701 and 702 are throwing a virtual ball 703 (a type of virtual object) back and forth, wherein each user is capable of observing the impact of the other user on the virtual world (e.g., each user observes the virtual ball changing directions, being caught by the other user, etc.). Since the movement and location of the virtual ball 703 are tracked by the servers 110 in the computing network 105, the system 100 may, in some embodiments, communicate to the users 701 and 702 the exact location and timing of the arrival of the ball 703 with respect to each user. For example, if the first user 701 is located in London, the user 701 may throw the ball 703 to the second user 702 located in Los Angeles at a velocity calculated by the system 100. Accordingly, the system 100 may communicate to the second user 702 (e.g., via email, text message, instant message, etc.) the exact time and location of the ball's arrival. As such, the second user 702 may use his device to see the ball 703 arrive at the specified time and location. One or more users may also use geo-location mapping software (or similar) to track one or more virtual objects as they travel virtually across the globe. An example of this may be a user wearing a 3D head-mounted display looking up in the sky and seeing a virtual plane flying overhead, superimposed on the real world. The virtual plane may be flown by the user, by intelligent software agents (software running on the user device or gateway), by other users who may be local and/or remote, and/or by any combination of these.
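The arrival bookkeeping the servers 110 perform for the thrown ball can be sketched under a simple constant-speed assumption; the distance and speed figures below are illustrative, not taken from the disclosure.

```python
from datetime import datetime, timedelta, timezone

def arrival_estimate(distance_km, speed_kmh, thrown_at):
    """Estimate when a virtual object thrown toward a remote user
    arrives, assuming a constant virtual travel speed."""
    return thrown_at + timedelta(hours=distance_km / speed_kmh)

# London to Los Angeles is roughly 8,750 km; at a virtual speed of
# 1,750 km/h the ball arrives five hours after the throw.
t0 = datetime(2012, 6, 1, 12, 0, tzinfo=timezone.utc)
eta = arrival_estimate(8750, 1750, t0)
```

The computed `eta` is what the system would then push to the second user (e.g., via email or text message) so that his device can show the ball 703 arriving at the expected time.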
As previously mentioned, the user device may include a haptic interface device, wherein the haptic interface device provides feedback (e.g., resistance, vibration, lights, sound, etc.) to the user when the system 100 determines that the haptic device is located at a physical, spatial location relative to a virtual object. For example, the embodiment described above with respect to FIG. 7 may be expanded to include the use of a haptic device 802, as shown in FIG. 8.
In this example embodiment, the haptic device 802 may be displayed in the virtual world as a baseball bat. When the ball 703 arrives, the user 702 may swing the haptic device 802 at the virtual ball 703. If the system 100 determines that the virtual bat provided by the haptic device 802 made "contact" with the ball 703, the haptic device 802 may vibrate or provide other feedback to the user 702, and the virtual ball 703 may ricochet off the virtual bat in a direction calculated by the system 100 in accordance with the detected speed, direction, and timing of the ball-to-bat contact.
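The ball-to-bat contact test might be approximated as a simple proximity check that reflects the ball's velocity and signals the haptic device to fire; the contact radius and the crude velocity reflection below are illustrative assumptions rather than the disclosed method.

```python
import math

BAT_RADIUS = 0.3  # meters; illustrative reach of the virtual bat

def check_contact(ball_pos, bat_pos, ball_velocity):
    """If the ball is within reach of the bat, report a hit (so the
    haptic device can vibrate) and crudely reflect the ball's velocity."""
    if math.dist(ball_pos, bat_pos) <= BAT_RADIUS:
        return True, tuple(-v for v in ball_velocity)
    return False, ball_velocity

hit, velocity = check_contact((0.1, 0.0, 0.0), (0.0, 0.0, 0.0),
                              (3.0, 0.0, 0.0))
```

A fuller treatment would reflect about the bat's surface normal and scale the response by swing timing, per the "speed, direction, and timing" language above; the sketch keeps only the contact/feedback structure.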
In some embodiments, the disclosed system 100 may facilitate mixed-mode interfacing, wherein multiple users may interface with a common virtual world (and the virtual objects contained therein) using different interface modes (e.g., augmented, virtual, blended, etc.). For example, a first user interfacing with a particular virtual world in a virtual interface mode may interact with a second user interfacing with the same virtual world in an augmented reality mode.
FIG. 9A illustrates an example wherein a first user 901 (interfacing with the digital world of the system 100 in a blended virtual interface mode) and a first object 902 appear as virtual objects to a second user 922 interfacing with the same digital world of the system 100 in a full virtual reality mode. As described above, when interfacing with the digital world via the blended virtual interface mode, local, physical objects (e.g., the first user 901 and the first object 902) may be scanned and rendered as virtual objects in the virtual world. The first user 901 may be scanned, for example, by a motion capture system or similar device, and rendered in the virtual world (by software/firmware stored in the motion capture system, the gateway component 140, the user device 120, the system servers 110, or other devices) as a first rendered physical object 931. Similarly, the first object 902 may be scanned, for example, by the environment-sensing system 306 of the head-mounted interface 300, and rendered in the virtual world (by software/firmware stored in the processor 308, the gateway component 140, the system servers 110, or other devices) as a second rendered physical object 932. The first user 901 and the first object 902 are shown in a first portion 910 of FIG. 9A as physical objects in the physical world. In a second portion 920 of FIG. 9A, the first user 901 and the first object 902 are shown as they appear to the second user 922 interfacing with the same digital world of the system 100 in the full virtual reality mode: as the first rendered physical object 931 and the second rendered physical object 932.
FIG. 9B illustrates another example embodiment of mixed-mode interfacing, wherein the first user 901 is interfacing with the digital world in the blended virtual interface mode, as discussed above, and the second user 922 is interfacing with the same digital world (and the second user's physical, local environment 925) in an augmented reality mode. In the embodiment of FIG. 9B, the first user 901 and the first object 902 are located at a first physical location 915, and the second user 922 is located at a different, second physical location 925 separated by some distance from the first location 915. In this embodiment, the virtual objects 931 and 932 may be transposed in real time (or near real time) to a location within the virtual world corresponding to the second location 925. Thus, the second user 922 may observe and interact, in the second user's physical, local environment 925, with the rendered physical objects 931 and 932 representing the first user 901 and the first object 902, respectively.
FIG. 10 illustrates an example of a user's view when interfacing with the system 100 in an augmented reality mode. As shown in FIG. 10, the user sees the local, physical environment (i.e., a city having multiple buildings) as well as a virtual character 1010 (i.e., a virtual object). The position of the virtual character 1010 may be triggered by a 2D visual target (e.g., a billboard, postcard, or magazine) and/or one or more 3D reference frames, such as buildings, cars, people, animals, airplanes, portions of a building, and/or any 3D physical object, virtual object, and/or combinations thereof. In the example illustrated in FIG. 10, the known positions of the buildings in the city may provide the registration fiducials and/or information and key features for rendering the virtual character 1010. Additionally, the user's geospatial location (e.g., provided by GPS, attitude/position sensors, etc.) or mobile location relative to the buildings may comprise data used by the computing network 105 to trigger the transmission of data used to display the virtual character(s) 1010. In some embodiments, the data used to display the virtual character 1010 may comprise the rendered character 1010 and/or instructions (to be carried out by the gateway component 140 and/or the user device 120) for rendering the virtual character 1010 or portions thereof. In some embodiments, if the user's geospatial location is unavailable or unknown, the server 110, the gateway component 140, and/or the user device 120 may still display the virtual object 1010 using an estimation algorithm that estimates where particular virtual and/or physical objects may be located, using the user's last known position as a function of time and/or other parameters. This may also be used to determine the position of any virtual objects should the user's sensors become occluded and/or experience other malfunctions.
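The fallback estimation algorithm mentioned here could, in a minimal hedged reading, amount to dead reckoning from the user's last known fix, degrading to "no placement" when the fix is too stale; the constants and function names are invented for the sketch.

```python
def estimate_position(last_fix, last_velocity, seconds_since_fix,
                      max_age=30.0):
    """Dead-reckon a user's position from the last known fix, returning
    None when the fix is too stale to extrapolate sensibly (e.g., after
    prolonged sensor occlusion or GPS loss)."""
    if seconds_since_fix > max_age:
        return None
    return tuple(p + v * seconds_since_fix
                 for p, v in zip(last_fix, last_velocity))

pos = estimate_position((10.0, 0.0), (1.0, 0.5), 4.0)
```

In the scenario above, the server, gateway, or device would anchor the virtual object 1010 against `pos` until a fresh geospatial fix arrives.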
In some embodiments, virtual characters or virtual objects may comprise a virtual statue, wherein the rendering of the virtual statue is triggered by a physical object. For example, referring now to FIG. 11, a virtual statue 1110 may be triggered by a real, physical platform 1120. The triggering of the statue 1110 may be in response to a visual object or feature (e.g., fiducials, design features, geometry, patterns, physical location, altitude, etc.) detected by the user device or other components of the system 100. When the user views the platform 1120 without the user device, the user sees the platform 1120 with no statue 1110. However, when the user views the platform 1120 through the user device, the user sees the statue 1110 on the platform 1120, as shown in FIG. 11. The statue 1110 is a virtual object and may therefore be stationary, animated, change over time or with respect to the user's viewing position, or even change depending upon which particular user is viewing the statue 1110. For example, if the user is a small child, the statue may be a dog; yet, if the viewer is an adult male, the statue may be the large robot shown in FIG. 11. These are examples of user-dependent and/or state-dependent experiences. This enables one or more users to perceive one or more virtual objects alone and/or in combination with physical objects, and to experience customized and personalized versions of the virtual objects. The statue 1110 (or portions thereof) may be rendered by various components of the system, including, for example, software/firmware installed on the user device. Using data indicating the location and attitude of the user device, in combination with the registration features of the virtual object (i.e., the statue 1110), the virtual object (i.e., the statue 1110) forms a relationship with the physical object (i.e., the platform 1120). For example, the relationship between one or more virtual objects and one or more physical objects may be a function of distance, positioning, time, geo-location, proximity to one or more other virtual objects, and/or any other functional relationship that includes virtual and/or physical data of any kind. In some embodiments, image recognition software in the user device may further enhance the digital-to-physical object relationship.
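One hedged way to express such a functional relationship (here, viewer distance and viewer identity driving the statue's rendering) is a small appearance function; the distance cutoff, opacity falloff, and profile labels are illustrative assumptions, not part of the disclosure.

```python
MAX_VIEW_DISTANCE = 50.0  # meters; illustrative cutoff

def statue_appearance(viewer_distance, viewer_profile):
    """User- and distance-dependent rendering of the statue: the model
    depends on who is looking, and opacity falls off with distance."""
    if viewer_distance > MAX_VIEW_DISTANCE:
        return None  # too far from the platform: not rendered
    model = "dog" if viewer_profile == "child" else "robot"
    opacity = 1.0 - viewer_distance / MAX_VIEW_DISTANCE
    return {"model": model, "opacity": round(opacity, 2)}

view = statue_appearance(10.0, "child")
```

The child/robot branch mirrors the user-dependent example above (a dog for a child, the large robot for an adult), while the distance term stands in for the broader class of functional relationships the paragraph lists.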
The interactive interfaces provided by the systems and methods disclosed herein may be implemented to facilitate various activities such as, for example, interacting with one or more virtual environments and objects, interacting with other users, as well as experiencing various forms of media content, including advertisements, music concerts, and movies. Accordingly, the disclosed system facilitates user interaction such that the user not only views or listens to the media content, but rather actively participates in and experiences the media content. In some embodiments, the user participation may include altering existing content or creating new content to be rendered in one or more virtual worlds. In some embodiments, the media content, and/or the users creating the content, may be themed around a mythopoeia of one or more virtual worlds.
In one example, musicians (or other users) may create musical content to be rendered to users interacting with a particular virtual world. The musical content may include, for example, various singles, EPs, albums, videos, short films, and concert performances. In one example, a large number of users may interface with the system 100 to simultaneously experience a virtual concert performed by the musicians.
In some embodiments, the produced media may contain a unique identifier code associated with a particular entity (e.g., a band, artist, user, etc.). The code may be in the form of a set of alphanumeric characters, UPC codes, QR codes, 2D image triggers, 3D physical-object feature triggers, or other digital marks, as well as a sound, an image, and/or both. In some embodiments, the code may also be embedded with digital media which may be interfaced with using the system 100. A user may obtain the code (e.g., via payment of a fee) and redeem the code to access the media content produced by the entity associated with the identifier code. The media content may be added to or removed from the user's interface.
In one embodiment, to avoid the computation and bandwidth limits of passing real-time or near-real-time video data with low latency from one computing system to another, such as from a cloud computing system to a local processor coupled to a user, parametric information regarding various shapes and geometries may be transferred and utilized to define surfaces, while textures may be transferred and added to these surfaces to bring about static or dynamic detail, such as bitmap-based video detail of a person's face mapped upon a parametrically reproduced face geometry. As another example, if a system is configured to recognize a person's face, and knows that the person's avatar is located in an augmented world, the system may be configured to pass the pertinent world information and the person's avatar information in one relatively large setup transfer, after which remaining transfers to a local computing system, such as that (308) depicted in FIG. 1, may be limited to parameter and texture updates, such as motion parameters of the person's skeletal structure and moving bitmaps of the person's face—all at orders of magnitude less bandwidth relative to the initial setup transfer or the passing of real-time video.
Thus, cloud-based and local computing assets may be used in an integrated fashion, with the cloud handling computation that does not require relatively low latency, and the local processing assets handling tasks wherein low latency is at a premium; in such cases, the form of data transferred to the local system preferably is passed at relatively low bandwidth due to the form or amount of such data (i.e., parametric information, textures, etc., versus real-time video of everything).
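The bandwidth argument above can be made concrete with a back-of-the-envelope comparison of the two transfer strategies: streaming raw avatar video versus a one-time parametric setup followed by small pose-parameter and texture-patch updates. All sizes below are illustrative assumptions, not measurements from any particular system.

```python
# Rough bandwidth comparison of the two avatar-transfer schemes described
# in the text. Every constant here is a hypothetical, illustrative value.

def raw_video_bytes(seconds, fps=30, width=1280, height=720, bytes_per_pixel=3):
    """Bandwidth for naively streaming uncompressed video of the avatar."""
    return seconds * fps * width * height * bytes_per_pixel

def parametric_bytes(seconds, setup_bytes=5_000_000, update_hz=30,
                     pose_params=60, bytes_per_param=4,
                     face_patch_bytes=20_000, face_patch_hz=10):
    """One large setup transfer, then skeletal pose parameters plus an
    occasional face-texture patch, per the scheme in the text."""
    pose = seconds * update_hz * pose_params * bytes_per_param
    face = seconds * face_patch_hz * face_patch_bytes
    return setup_bytes + pose + face

if __name__ == "__main__":
    t = 60  # one minute of avatar presence
    raw, param = raw_video_bytes(t), parametric_bytes(t)
    print(f"raw video : {raw / 1e6:8.1f} MB")
    print(f"parametric: {param / 1e6:8.1f} MB")
    print(f"ratio     : {raw / param:.0f}x")
```

Even with generous per-update allowances, the parametric path comes in orders of magnitude below raw video, which is the point the embodiment makes.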
Referring ahead to FIG. 15, the cooperation between cloud computing assets (46) and local processing assets (308, 120) is schematically illustrated. In one embodiment, the cloud (46) assets are operatively coupled (40, 42), such as via wired or wireless networking (wireless being preferred for mobility, wired being preferred for certain high-bandwidth or high-data-volume transfers that may be desired), directly to one or both of the local computing assets (120, 308), such as processor and memory configurations which may be housed in structures configured to be coupled to a user's head (120) or belt (308). These computing assets local to the user may be operatively coupled to each other as well, via wired and/or wireless connectivity configurations (44). In one embodiment, to maintain a low-inertia and small-size head-mounted subsystem (120), the primary transfer between the user and the cloud (46) may be via the link between the belt-based subsystem (308) and the cloud, with the head-mounted subsystem (120) primarily data-tethered to the belt-based subsystem (308) using wireless connectivity, such as ultra-wideband ("UWB") connectivity, as is currently employed, for example, in personal computing peripheral connectivity applications.
With efficient local and remote processing cooperation, and an appropriate display device for a user, such as the user interface 302 or user "display device" featured in FIG. 3, the display device 14 described below with reference to FIG. 14, or variations thereof, aspects of one world pertinent to a user's current actual or virtual location may be transferred or "passed" to the user and updated in an efficient fashion. Indeed, in one embodiment, with one person utilizing a virtual reality system ("VRS") in an augmented reality mode, and another person utilizing a VRS in a completely virtual mode to explore the same world local to the first person, the two users may experience one another in that world in various fashions. For example, referring to FIG. 12, a scenario similar to that described with reference to FIG. 11 is depicted, with the addition of a visualization of an avatar 2 of a second user who is flying through the depicted augmented reality world from a completely virtual reality scenario. In other words, the scene depicted in FIG. 12 may be experienced and displayed in augmented reality for the first person—with two augmented reality elements (the statue 1110 and the second person's flying bumblebee avatar 2) displayed in addition to the actual physical elements around the local world in the scene, such as the ground, the buildings in the background, and the statue platform 1120. Dynamic updating may be utilized to allow the first person to visualize the progress of the second person's avatar 2 as the avatar 2 flies through the world local to the first person.
Again, with a configuration as described above, wherein there is one world model that may reside upon cloud computing resources and be distributed from there, such a world may be "passable" to one or more users in a relatively low-bandwidth form, preferable to trying to pass around real-time video data or the like. The augmented experience of a person standing near the statue (i.e., as shown in FIG. 12) may be informed by the cloud-based world model, a subset of which may be passed down to them and their local display device to complete the view. A person sitting at a remote display device, which may be as simple as a personal computer sitting on a desk, may efficiently download that same section of information from the cloud and have it rendered on their display. Indeed, one person actually present in the park near the statue may take a remotely located friend for a walk in that park, with the friend joining through virtual and augmented reality. The system will need to know where the street is, where the trees are, where the statue is—but with that information on the cloud, the joining friend may download aspects of the scenario from the cloud, and then start walking along as an augmented reality local relative to the person who is actually in the park.
Referring to FIG. 13, an embodiment is depicted that is based upon time and/or other contingency parameters, wherein a person engaged with a virtual and/or augmented reality interface (such as the user interface 302 or user display device featured in FIG. 3, the display device 14 described below with reference to FIG. 14, or combinations thereof) is utilizing the system (4) and enters a coffee establishment to order a cup of coffee (6). The VRS may be configured to utilize sensing and data gathering capabilities, locally and/or remotely, to provide display enhancements in augmented and/or virtual reality for the person, such as highlighting the locations of doors in the coffee establishment or bubble windows of the pertinent coffee menu (8). When the person receives the cup of coffee he has ordered, or upon the system's detection of some other pertinent parameter, the system may be configured to display (10) one or more time-based augmented and/or virtual reality images, video, and/or sound in the local environment with the display device, such as a Madagascar jungle scene from the walls and ceilings, with or without jungle sounds and other effects, either static or dynamic. Such presentation to the user may be discontinued based upon a timing parameter (i.e., 5 minutes after the full coffee cup has been recognized and handed to the user; 10 minutes after the user has been recognized walking through the front door of the establishment, etc.) or other parameter, such as the system recognizing that the user has finished the coffee by noting the upturned orientation of the coffee cup as the user ingests the last sip of coffee from the cup, or the system recognizing that the user has left the front door of the establishment (12).
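The contingency logic just described—begin a presentation upon a trigger event, discontinue it upon a timeout or a terminating event—can be sketched as a small state machine. All event names and the timeout value below are hypothetical stand-ins for whatever the system's perception pipeline actually reports.

```python
# Minimal sketch of the time/contingency-parameter logic from the coffee
# establishment example. Event names are illustrative assumptions.

class ContingentPresentation:
    def __init__(self, start_events, stop_events, timeout_s):
        self.start_events = set(start_events)
        self.stop_events = set(stop_events)
        self.timeout_s = timeout_s
        self.active = False
        self.started_at = None

    def observe(self, event, t):
        """Feed one perceived event (e.g. from object recognizers) at time t."""
        if not self.active and event in self.start_events:
            self.active, self.started_at = True, t
        elif self.active and event in self.stop_events:
            self.active = False
        return self.active

    def tick(self, t):
        """Check the timing parameter; discontinue after the timeout."""
        if self.active and t - self.started_at >= self.timeout_s:
            self.active = False
        return self.active

jungle = ContingentPresentation(
    start_events={"coffee_handed_to_user"},
    stop_events={"cup_upturned_last_sip", "user_left_front_door"},
    timeout_s=300)  # e.g. 5 minutes after the full cup is recognized

jungle.observe("coffee_handed_to_user", t=0.0)   # jungle scene begins
jungle.tick(t=120.0)                             # still active at 2 minutes
jungle.observe("user_left_front_door", t=180.0)  # scene discontinued
```

In practice the same structure generalizes to any of the "other contingency parameters" the embodiment mentions: each is just another entry in the start or stop event sets.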
Referring to FIG. 14, one embodiment of a suitable user display device (14) is shown, comprising a display lens (82) which may be mounted to a user's head or eyes by a housing or frame (84). The display lens (82) may comprise one or more transparent mirrors, positioned by the housing (84) in front of the user's eyes (20), configured to reflect projected light into the eyes (20) and facilitate beam shaping, while also allowing for transmission of at least some light from the local environment in an augmented reality configuration (in a virtual reality configuration, it may be desirable for the display system 14 to be capable of blocking substantially all light from the local environment, such as by a darkened visor, blocking curtain, all-black LCD panel mode, or the like). In the depicted embodiment, two wide-field-of-view machine vision cameras (16) are coupled to the housing (84) to image the environment around the user; in one embodiment these cameras (16) are dual-capture visible-light/infrared-light cameras. The depicted embodiment also comprises a pair of scanned-laser, shaped-wavefront (i.e., for depth) light projector modules with display mirrors and optics configured to project light (38) into the eyes (20), as shown. The depicted embodiment also comprises two miniature infrared cameras (24), paired with infrared light sources (26, such as light-emitting diodes, "LEDs"), which are configured to be able to track the eyes (20) of the user to support rendering and user input. The system (14) further features a sensor assembly (39), which may comprise X, Y, and Z axis accelerometer capability as well as a magnetic compass and X, Y, and Z axis gyro capability, preferably providing data at a relatively high frequency, such as 200 Hz. The depicted system (14) also comprises a head pose processor (36), such as an ASIC (application-specific integrated circuit), FPGA (field-programmable gate array), and/or ARM (Advanced Reduced-Instruction-Set Machine) processor, which may be configured to calculate real-time or near-real-time user head pose from the wide-field-of-view image information output from the capture devices (16).
Also shown is another processor (32), configured to execute digital and/or analog processing to derive pose from the gyro, compass, and/or accelerometer data from the sensor assembly (39). The depicted embodiment also features a GPS (37, Global Positioning Satellite) subsystem to assist with pose and positioning. Finally, the depicted embodiment comprises a rendering engine (34), which may feature hardware running a software program configured to provide rendering information local to the user, for the user's view of the world, to facilitate operation of the scanners and imaging into the eyes of the user. The rendering engine (34) is operatively coupled (81, 70, 76/78, 80; i.e., via wired or wireless connectivity) to the sensor pose processor (32), the image pose processor (36), the eye tracking cameras (24), and the projecting subsystem (18), such that light of rendered augmented and/or virtual reality objects is projected using a scanned laser arrangement (18) in a manner analogous to a retinal scanning display. The wavefront of the projected light beam (38) may be bent or focused to coincide with a desired focal distance of the augmented and/or virtual reality object. The miniature infrared cameras (24) may be utilized to track the eyes to support rendering and user input (i.e., where the user is looking, at what depth he is focusing; as discussed below, eye vergence may be utilized to estimate depth of focus). The GPS (37), gyros, compass, and accelerometers (39) may be utilized to provide coarse and/or fast pose estimates. The camera (16) images and pose, in conjunction with data from an associated cloud computing resource, may be utilized to map the local world and to share user views with a virtual or augmented reality community.
While much of the hardware in the display system (14) featured in FIG. 14 is depicted directly coupled to the housing (84) adjacent the display (82) and the eyes (20) of the user, the hardware components depicted may be mounted to or housed within other components, such as a belt-mounted component, as shown, for example, in FIG. 3.
In one embodiment, all of the components of the system (14) featured in FIG. 14 are directly coupled to the display housing (84) except for the image pose processor (36), the sensor pose processor (32), and the rendering engine (34), and communication between the latter three and the remaining components of the system (14) may be by wireless communication, such as ultra-wideband, or by wired communication. The depicted housing (84) preferably is head-mountable and wearable by the user. It may also feature speakers, such as those which may be inserted into the ears of a user and utilized to provide sound to the user which may be pertinent to an augmented or virtual reality experience (such as the jungle sounds referred to with reference to FIG. 13), and microphones, which may be utilized to capture sounds local to the user.
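The pose processor (32) described above derives pose from gyro, compass, and/or accelerometer data. One standard way to fuse such sensors is a complementary filter: integrate the fast-but-drifting gyro rate, then pull the estimate toward the gravity direction implied by the accelerometer. The single-axis sketch below is a generic illustration of that technique, not the specific algorithm of the embodiment.

```python
# Single-axis (pitch) complementary filter: a common, minimal approach to
# deriving orientation from gyro + accelerometer data, as performed by a
# sensor pose processor. Gains and rates here are illustrative only.

import math

def complementary_filter(pitch, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """One filter step. gyro_rate in rad/s; accelerometer axes in m/s^2."""
    gyro_pitch = pitch + gyro_rate * dt          # fast path: integrate gyro
    accel_pitch = math.atan2(accel_x, accel_z)   # slow path: gravity vector
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch

# Stationary device with a small gyro bias: the accelerometer term keeps
# the estimate from drifting away, unlike pure gyro integration.
pitch = 0.0
for _ in range(1000):  # 5 s of data at 200 Hz, matching the sensor assembly rate
    pitch = complementary_filter(pitch, gyro_rate=0.01,
                                 accel_x=0.0, accel_z=9.81, dt=1 / 200)
print(f"pitch after 5 s: {pitch:.4f} rad")
```

Pure gyro integration would have drifted 0.05 rad in the same interval; the filter instead settles near a small bound set by the blend factor, which is why such fusion supplies the "coarse and/or fast pose estimates" the text mentions.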
Regarding the projection of light (38) into the eyes (20) of the user, in one embodiment the miniature cameras (24) may be utilized to measure where the centers of the user's eyes (20) are geometrically verged, which, in general, coincides with a position of focus, or "depth of focus", of the eyes (20). A three-dimensional surface of all points verged toward by the eyes is called the "horopter". The focal distance may take on a finite number of depths, or may be infinitely varying. Light projected from the vergence distance appears to be focused to the subject eye (20), while light in front of or behind the vergence distance is blurred. Further, it has been discovered that spatially coherent light with a beam diameter of less than about 0.7 millimeters is correctly resolved by the human eye regardless of where the eye focuses; given this understanding, to create an illusion of proper focal depth, eye vergence may be tracked with the miniature cameras (24), and the rendering engine (34) and projection subsystem (18) may be utilized to render all objects on or close to the horopter in focus, and all other objects at varying degrees of defocus (i.e., using intentionally created blurring). A see-through light-guiding element configured to project coherent light into the eye may be provided by suppliers such as Lumus, Inc. Preferably, the system (14) renders to the user at a frame rate of about 60 frames per second or greater. As described above, preferably the miniature cameras (24) may be utilized for eye tracking, and software may be configured to pick up not only vergence geometry but also focus location cues to serve as user inputs. Preferably, such a system is configured with brightness and contrast suitable for day or night use. In one embodiment, such a system preferably has a latency of less than about 20 milliseconds for virtual object alignment, less than about 0.1 degree of angular alignment, and about 1 arc minute of resolution, which is approximately the limit of the human eye. The display system (14) may be integrated with a localization system, which may involve GPS elements, optical tracking, compass, accelerometers, and/or other data sources, to assist with position and pose determination; localization information may be utilized to facilitate accurate rendering in the user's view of the pertinent world (i.e., such information would facilitate the glasses knowing where they are with respect to the real world).
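The vergence-to-depth relationship used above reduces to simple triangulation: with the two eye centers separated by the interpupillary distance (IPD), the inward rotation of each eye fixes where the gaze rays intersect. The sketch below is a toy symmetric-fixation model with illustrative numbers, not the system's actual eye-tracking geometry.

```python
# Toy vergence model: estimate the depth of focus from the inward rotation
# of the eyes, given the interpupillary distance. Values are illustrative.

import math

def vergence_distance(ipd_m, left_angle_rad, right_angle_rad):
    """Distance to the fixation point, eyes at (-ipd/2, 0) and (+ipd/2, 0).
    Angles are each eye's inward rotation from straight ahead; symmetric
    fixation at distance d satisfies tan(angle) = (ipd / 2) / d."""
    theta = (left_angle_rad + right_angle_rad) / 2  # symmetric approximation
    if theta <= 0:
        return float("inf")  # parallel gaze: fixation at optical infinity
    return (ipd_m / 2) / math.tan(theta)

ipd = 0.063  # a typical adult IPD of about 63 mm
for target in (0.5, 2.0, 10.0):
    theta = math.atan((ipd / 2) / target)  # rotation each eye must make
    est = vergence_distance(ipd, theta, theta)
    print(f"target {target:5.1f} m -> estimated {est:5.1f} m")
```

Given such an estimate, the rendering engine (34) would draw objects near this distance in focus and apply intentional blur to the rest, per the horopter discussion above. Note how quickly the angles shrink with distance: this is why vergence is a useful depth cue at near range and degrades at far range.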
Other suitable display devices include, but are not limited to: desktop and mobile computers; smartphones, which may additionally be enhanced with software and hardware features to facilitate or simulate three-dimensional perspective viewing (for example, in one embodiment, a frame may be removably coupled to a smartphone, the frame featuring a 200 Hz gyro and accelerometer sensor subset, two small machine vision cameras with wide-field-of-view lenses, and an ARM processor, to simulate some of the functionality of the configuration featured in FIG. 14); tablet computers, which may be enhanced as described above for smartphones; tablet computers enhanced with additional processing and sensing hardware; head-mounted systems that use smartphones and/or tablets to display augmented and virtual viewpoints (visual accommodation via magnifying optics, mirrors, contact lenses, or light-structuring elements); non-see-through displays of light-emitting elements (LCDs, OLEDs, vertical-cavity surface-emitting lasers, steered laser beams, etc.); see-through displays that simultaneously allow humans to see the natural world and artificially generated images (e.g., light-guide optical elements, transparent and polarized OLEDs shining into close-focus contact lenses, steered laser beams, etc.); contact lenses with light-emitting elements (such as those available from Innovega, Inc., of Bellevue, WA, under the trade name Ioptik RTM; they may be combined with specialized complementary eyeglass components); implantable devices with light-emitting elements; and implantable devices that stimulate the optical receptors of the human brain.
With a system such as those depicted in FIGS. 3 and 14, three-dimensional points may be captured from the environment, and the pose (i.e., vector and/or origin position information relative to the world) of the cameras that capture those images or points may be determined, so that these points or images may be "tagged" with, or associated with, this pose information. Then points captured by a second camera may be utilized to determine the pose of the second camera. In other words, one can orient and localize the second camera based upon comparisons with tagged images from the first camera. This knowledge may then be utilized to extract textures, make maps, and create a virtual copy of the real world (because then there are two cameras around it that are registered). So at the base level, in one embodiment you have a person-worn system that can be utilized to capture both 3-D points and the 2-D images that produced those points, and these points and images may be sent out to cloud storage and processing resources. They may also be cached locally with embedded pose information (i.e., cache the tagged images), so the cloud may have on the ready (i.e., in available cache) tagged 2-D images (i.e., tagged with a 3-D pose), along with 3-D points.
If a user is observing something dynamic, he may also send additional information up to the cloud pertinent to the motion (for example, if looking at another person's face, the user can take a texture map of the face and push that up at an optimized frequency, even though the surrounding world is otherwise basically static).
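The pose-tagging idea above can be illustrated schematically: images are cached alongside the pose of the camera that captured them, and a second camera is then localized by finding the tagged image its observed features best overlap and adopting a pose near that match. A real system would solve a full perspective-n-point registration against the 3-D points; in this deliberately simplified sketch, "features" are just hashable identifiers and the class name is hypothetical.

```python
# Greatly simplified sketch of pose-tagged image caching and second-camera
# localization by feature overlap. Names and structure are illustrative.

class PoseTaggedCache:
    def __init__(self):
        self.entries = []  # list of (feature_set, pose)

    def add(self, features, pose):
        """Tag a captured image's features with the capturing camera's pose."""
        self.entries.append((frozenset(features), pose))

    def localize(self, features):
        """Best-overlap match against the cached, pose-tagged images."""
        features = set(features)
        best = max(self.entries,
                   key=lambda e: len(e[0] & features), default=None)
        if best is None or not (best[0] & features):
            return None  # no shared features: cannot localize
        return best[1]

cache = PoseTaggedCache()
cache.add({"corner_a", "corner_b", "statue_base"}, pose=(0.0, 0.0, 0.0))
cache.add({"door_edge", "window_sill"}, pose=(4.0, 0.0, 1.57))

# A second camera sees two features from the first view, so it is
# localized near the first tagged pose.
print(cache.localize({"corner_a", "statue_base", "unknown_blob"}))
```

The same cache could live locally or on the cloud, matching the text's point that tagged 2-D images plus 3-D points are kept "on the ready" in both places.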
The cloud system may be configured to save some points as fiducials for pose only, to reduce overall pose tracking computation. Generally, it may be desirable to have some outline features to be able to track major items in a user's environment, such as walls, a table, etc., as the user moves around the room, and the user may want to be able to "share" the world and have some other user walk into that room and also see those points. Such useful and key points may be termed "fiducials", because they are fairly useful as anchoring points—they are related to features that may be recognized with machine vision, and that can be extracted from the world consistently and repeatedly on different pieces of user hardware. Thus these fiducials preferably may be saved to the cloud for further use.
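The defining property of a fiducial above is repeatability: the feature must be re-extracted consistently, and by different pieces of user hardware. A minimal promotion policy along those lines might look like the sketch below; the thresholds are illustrative assumptions, not values from the embodiment.

```python
# Sketch of a fiducial-selection policy: promote a point to a cloud-saved
# fiducial only if it has been extracted repeatedly and by several distinct
# devices. Thresholds are hypothetical.

from collections import defaultdict

def select_fiducials(observations, min_devices=3, min_sightings=10):
    """observations: iterable of (point_id, device_id) extraction events."""
    devices = defaultdict(set)
    sightings = defaultdict(int)
    for point_id, device_id in observations:
        devices[point_id].add(device_id)
        sightings[point_id] += 1
    return sorted(p for p in devices
                  if len(devices[p]) >= min_devices
                  and sightings[p] >= min_sightings)

obs = ([("wall_corner", f"hw{i % 4}") for i in range(12)]   # seen by 4 devices
       + [("coffee_cup", "hw0") for _ in range(20)])        # one device only
print(select_fiducials(obs))  # only the wall corner qualifies as an anchor
```

Note that the frequently seen coffee cup still fails the cross-device test, which is consistent with the later discussion of movable objects making poor shared anchors.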
In one embodiment, it is preferable to have a relatively even distribution of fiducials throughout the pertinent world, because they are the kinds of items that cameras can easily utilize to recognize a location.
In one embodiment, the pertinent cloud computing configuration may be configured to groom the database of 3-D points and any associated metadata periodically, to use the best data from various users for both fiducial refinement and world creation. In other words, the system may be configured to get the best dataset by using inputs from various users looking at and functioning within the pertinent world. In one embodiment the database is intrinsically fractal—as users move closer to objects, the cloud passes higher-resolution information down to such users. As a user maps an object more closely, that data is sent up to the cloud, and the cloud can add the new 3-D points and image-based texture maps to the database if they are better than those previously stored. All of this may be configured to happen from many users simultaneously.
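The grooming pass just described—replace stored data only when a new contribution is better—can be reduced to a few lines. In this sketch a single "quality" score stands in for whatever a real system would compare (resolution, recency, multi-user consistency), and all field names are hypothetical.

```python
# Sketch of periodic database grooming: for each mapped region, keep only
# the best contribution seen so far. The quality score is an illustrative
# stand-in for resolution/recency/consistency metrics.

def groom(database, contributions):
    """database / contributions: dicts mapping region -> (quality, payload).
    New data replaces stored data only when it is strictly better."""
    for region, (quality, payload) in contributions.items():
        stored = database.get(region)
        if stored is None or quality > stored[0]:
            database[region] = (quality, payload)
    return database

db = {"park_statue": (0.4, "coarse scan")}
updates = {"park_statue": (0.9, "close-up scan"),   # a user walked closer
           "park_path":   (0.5, "first mapping")}
groom(db, updates)
print(db["park_statue"][1])  # the closer, higher-resolution scan wins
```

Because updates from many users commute under this strictly-better rule, the pass can run concurrently over contributions from many users, matching the last sentence above.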
As described above, an augmented or virtual reality experience may be based upon recognizing certain types of objects. For example, it may be important to understand that a particular object has a depth in order to recognize and understand such an object. Recognizer software objects ("recognizers") may be deployed on cloud or local resources to specifically assist with recognition of various objects on either or both platforms as a user navigates data in the world. For example, if a system has data for a world model comprising 3-D point clouds and pose-tagged images, and there is a desk with a bunch of points on it as well as an image of the desk, the system may not be able to determine that what is being observed is, indeed, a desk as humans would know it. In other words, some 3-D points in space and an image from somewhere off in space that shows most of the desk may not be enough to instantly recognize that a desk is being observed. To assist with this identification, a specific object recognizer may be created that goes into the raw 3-D point cloud, segments out a set of points, and, for example, extracts the plane of the top surface of the desk.
Similarly, a recognizer may be created to segment out a wall from the 3-D points, so that a user could change wallpaper or remove part of the wall in virtual or augmented reality, and have a portal to another room that is not actually there in the real world. Such recognizers operate within the data of a world model, and may be thought of as software "robots" that crawl a world model and imbue that world model with semantic information, or an ontology about what is believed to exist amongst the points in space. Such recognizers or software robots may be configured such that their entire existence is about going around the pertinent world of data and finding things that it believes are walls, or chairs, or other items. They may be configured to tag a set of points with the functional equivalent of "this set of points belongs to a wall", and may comprise a combination of point-based algorithms and pose-tagged image analysis for mutually informing the system regarding what is in the points.
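The desk-top recognizer described above must segment a set of points and extract the plane of the desk's top surface. The sketch below is a crude, axis-aligned stand-in for that step: it finds the horizontal plane supported by the most points. A production recognizer would instead run something like RANSAC plane fitting on the raw cloud; the point here is only to show segmentation of a plane out of unstructured points.

```python
# Toy "desk top" recognizer: find the dominant horizontal plane in a point
# cloud by binning heights, then collect its supporting (inlier) points.
# A real recognizer would use RANSAC or similar; values are illustrative.

from collections import Counter

def extract_horizontal_plane(points, tolerance=0.02):
    """points: (x, y, z) tuples with z up. Returns (plane_z, inlier_points)."""
    bins = Counter(round(z / tolerance) for _, _, z in points)
    best_bin, _ = bins.most_common(1)[0]
    plane_z = best_bin * tolerance
    inliers = [p for p in points if abs(p[2] - plane_z) <= tolerance]
    return plane_z, inliers

cloud = ([(x * 0.1, y * 0.1, 0.74) for x in range(10) for y in range(6)]  # desk top
         + [(0.2, 0.1, 0.80), (0.3, 0.2, 0.95), (0.1, 0.4, 1.10)])        # clutter
z, desk = extract_horizontal_plane(cloud)
print(f"desk surface at z = {z:.2f} m with {len(desk)} supporting points")
```

Once the plane's points are segmented out, the recognizer can tag them with the functional equivalent of "this set of points belongs to a desk top", as the text describes for walls.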
可以取决于视角出于各种效用的许多目的来创建对象识别器。例如,在一个实施例中,咖啡的供应商(诸如星巴克)可以投资于在数据的相关世界内创建星巴克的咖啡杯的准确识别器。此类识别器可以被配置为爬行大大小小的数据世界以搜索星巴克的咖啡杯,因此当操作在相关的附近空间中时(即,可能的是,在刚好位于角落附近的星巴克的批发商店中,当用户在一段时间内看他的星巴克杯时向用户提供咖啡),可以将它们分割出并且识别到用户。随着杯子被分割出,当用户在他的桌子上移动它时,它可以被快速地识别。取决于可以使用的计算资源,此类识别器可以被配置为不仅运行或操作在云计算资源和数据上,而且运行或操作在本地资源和数据上,或者云和本地两者。在一个实施例中,在具有数百万的用户贡献于那个全球模型的云上有世界模型的全球拷贝,但是对于类似于特定城镇中的特定个体的办公室的小世界或子世界而言,全球世界的大多数将不关心该办公室看起来像什么,因此系统可以被配置为整理数据以及移动到本地缓存信息(其被认为是与给定用户最本地相关的)。在一个实施例中,例如,当用户走向桌子时,有关信息(诸如在他的桌子上的特定杯子的分割)可以被配置为仅驻留在他的本地计算资源而不是在云上,因为被识别为经常移动的对象(诸如桌子上的杯子)的对象不必加重云模型以及云和本地资源之间的传输负担的负担。因此,云计算资源可以被配置为分割三维点和图像,因此将永久(即,一般不移动的)对象从可移动对象分解开,以及这可能影响哪里的相关联数据将保留,哪里的将被处理,针对于更永久的对象有关的某种数据而言从可穿戴/本地系统中移除处理负担,允许位置的一次处理(然后,该位置可以与无限的其它用户共享),允许多个数据源在特定物理位置中同时地建立固定和可移动对象的数据库,以及从背景中分割出对象以创建对象特定的基准点和纹理图。Object recognizers can be created for many purposes of various utility depending on the viewing angle. For example, in one embodiment, a supplier of coffee, such as Starbucks, may invest in creating an accurate identifier of Starbucks' coffee cups within the relevant world of data. Such a recognizer can be configured to crawl the world of data large and small in search of Starbucks coffee cups, so when operating in the relevant nearby space (i.e., likely, in a Starbucks wholesale store just around the corner , offering coffee to the user while looking at his Starbucks mug for a period of time), they can be segmented out and the user identified. With the cup segmented out, it can be quickly identified when the user moves it across his desk. Depending on the computing resources available, such recognizers may be configured to run or operate not only on cloud computing resources and data, but also on local resources and data, or both cloud and local. 
In one embodiment, there is a global copy of the world model on the cloud, with millions of users contributing to that global model; but for smaller worlds or sub-worlds, like the office of a particular individual in a particular town, most of the global world will not care what that office looks like, so the system may be configured to groom data and move to locally cached information believed to be most locally pertinent to a given user. In one embodiment, for example, when a user walks up to a desk, related information, such as the segmentation of a particular cup on his desk, may be configured to reside only upon his local computing resources and not the cloud, because objects identified as ones that move often, such as cups on tables, need not burden the cloud model and the transmission load between the cloud and local resources. Thus the cloud computing resource may be configured to segment three-dimensional points and images, factoring permanent (i.e., generally not moving) objects from movable ones. This may affect where the associated data is to remain and where it is to be processed; it removes processing burden from the wearable/local system for certain data pertinent to more permanent objects; it allows one-time processing of a location, which then may be shared with limitless other users; it allows multiple sources of data to simultaneously build a database of fixed and movable objects in a particular physical location; and it allows objects to be segmented from the background to create object-specific fiducials and texture maps.
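As an illustration of the recognizer-as-software-robot idea above, the following is a minimal, hypothetical sketch. All names and the coplanarity heuristic are invented here for illustration; a production recognizer would fit planes robustly (e.g., with RANSAC) over the cloud-hosted point data rather than binning one coordinate.

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class WorldModel:
    points: list                                 # (x, y, z) tuples
    labels: dict = field(default_factory=dict)   # point index -> semantic tag

def wall_recognizer(model, axis=0, tol=0.05, min_support=4):
    """Crawl the model and tag groups of near-coplanar points as 'wall'.
    Points are simply binned by one coordinate within a tolerance; any
    bin with enough support is labeled."""
    bins = defaultdict(list)
    for i, p in enumerate(model.points):
        bins[round(p[axis] / tol)].append(i)
    for idxs in bins.values():
        if len(idxs) >= min_support:             # enough support for a wall
            for i in idxs:
                model.labels[i] = "wall"
    return model
```

Such a recognizer could then be scheduled to run on cloud data for static geometry, or locally for frequently moving objects, per the partitioning described above.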
In one embodiment, the system may be configured to query a user for input about the identity of certain objects (for example, the system may present the user with a question such as, "Is that a Starbucks coffee cup?"), so that the user may train the system and allow it to associate semantic information with objects in the real world. An ontology may provide guidance regarding what objects segmented from the world can do, how they behave, and so on. In one embodiment, the system may feature a virtual or actual keypad, such as a wirelessly connected keypad, connectivity to the keypad of a smartphone, or the like, to facilitate certain user input to the system.
The system may be configured to share basic elements (walls, windows, desk geometry, etc.) with any user who walks into a room in virtual or augmented reality, and in one embodiment that person's system will be configured to take images from his particular perspective and upload them to the cloud. The cloud then becomes populated with old and new sets of data and can run optimization routines and establish fiducials that exist on individual objects.
GPS and other localization information may be utilized as inputs to such processing. Further, other computing systems and data, such as a person's online calendar or Facebook account information, may be utilized as inputs (for example, in one embodiment, a cloud and/or local system may be configured to analyze the content of a user's calendar for airline tickets, dates, and destinations, so that over time information may be moved from the cloud to the user's local systems in preparation for the user's arrival time at a given destination).
In one embodiment, tags such as QR codes and the like may be inserted into the world for use in non-statistical pose calculation, security/access control, communication of special information, spatial messaging, non-statistical object recognition, and the like.
In one embodiment, cloud resources may be configured to pass digital models of real and virtual worlds between users, as described above with reference to "passable worlds", with the models being rendered by the individual users based upon parameters and textures. This reduces bandwidth relative to passing real-time video, permits rendering of virtual viewpoints of a scene, and allows millions or more users to participate in one virtual gathering without sending each of them the data that they need to see (such as video), because their views are rendered by their local computing resources.
The virtual reality system ("VRS") may be configured to register the user location and field of view (together known as the "pose") through one or more of the following: real-time metric computer vision using the cameras, simultaneous localization and mapping techniques, maps, and data from sensors such as gyroscopes, accelerometers, compasses, barometers, GPS, radio signal strength triangulation, signal time-of-flight analysis, LIDAR ranging, RADAR ranging, odometry, and sonar ranging. The wearable device system may be configured to simultaneously map and orient. For example, in unknown environments the VRS may be configured to collect information about the environment, ascertaining fiducial points suitable for user pose calculations, other points for world modeling, and images for providing texture maps of the world. Fiducial points may be used to optically calculate pose. As the world is mapped with greater detail, more objects may be segmented out and given their own texture maps, but the world still preferably is representable at low spatial resolution in simple polygons with low-resolution texture maps. Other sensors, such as those discussed above, may be utilized to support this modeling effort. The world may be inherently fractal, in that moving or otherwise seeking a better view (through viewpoints, "supervision" modes, zooming, etc.) requires high-resolution information from the cloud resources. Moving closer to objects captures higher-resolution data, and this may be sent to the cloud, which can calculate and/or insert the new data at interstitial sites in the world model.
Referring now to Fig. 16, the wearable system may be configured to capture image information and extract fiducials and recognized points (52). The wearable local system may calculate pose using one of the pose calculation techniques mentioned below. The cloud (54) may be configured to use images and fiducials to segment three-dimensional objects out of a more static three-dimensional background; the images provide texture maps for the objects and the world (the textures may be real-time videos). The cloud resources (56) may be configured to store and make available static fiducials and textures for world registration. The cloud resources may be configured to groom the point cloud for optimal point density for registration. The cloud resources (62) may be configured to use all valid points and textures to generate fractal solid models of objects; the cloud may groom point cloud information for optimal fiducial density. The cloud resources (64) may be configured to query users for training on the identity of segmented objects and the world; an ontology database may use the answers to imbue objects and the world with actionable properties.
The following specific modes of registration and mapping feature the terms "O-pose", which represents a pose determined from an optical or camera system; "S-pose", which represents a pose determined from sensors (i.e., a combination of data such as GPS, gyroscope, compass, and accelerometer, as discussed above); and "MLC", which represents the cloud computing and data management resource.

1. Orient: Make a basic map of a new environment

Purpose: establish pose if the environment is not mapped, or (the equivalent) if not connected to the MLC.

· Extract points from images, track them from frame to frame, and triangulate fiducials using the S-pose.
· Use the S-pose, as there are no fiducials yet.
· Filter out poor fiducials based on persistence.
· This is the most basic mode: it will always work for low-precision pose. With a little time and some relative motion, it will establish a minimum fiducial set for O-pose and/or mapping.
· Jump out of this mode as soon as the O-pose is reliable.

2. Map and O-pose: Map the environment

Purpose: establish a high-accuracy pose, map the environment, and provide the map (with images) to the MLC.

· Calculate the O-pose from mature world fiducials. Use the S-pose as a check on the O-pose solution and to speed computation (the O-pose is a non-linear gradient search).
· Mature fiducials may come from the MLC, or be those determined locally.
· Extract points from images, track them from frame to frame, and triangulate fiducials using the O-pose.
· Filter out poor fiducials based on persistence.
· Provide the MLC with fiducials and pose-tagged images.
· The last three steps need not happen in real time.

3. O-pose: Determine pose

Purpose: establish a high-accuracy pose in an already mapped environment using minimum processing power.

· Use historical S-poses and O-poses (n-1, n-2, n-3, etc.) to estimate the pose at n.
· Use the pose at n to project fiducials into the image captured at n, then create an image mask from that projection.
· Extract points from the masked regions (the processing burden is greatly reduced by searching for/extracting points only from the masked subset of the image).
· Calculate the O-pose from the extracted points and the mature world fiducials.
· Use the S-pose and O-pose at n to estimate the pose at n+1.
· Option: provide pose-tagged images/video to the MLC cloud.

4. Super-res: Determine super-resolution imagery and fiducials

Purpose: create super-resolution imagery and fiducials.

· Composite pose-tagged images to create super-resolution images.
· Use the super-resolution images to improve estimates of fiducial position.
· Iterate O-pose estimates from the super-resolution fiducials and imagery.
· Option: loop the above steps either on the wearable device (in real time) or in the MLC (for a better world model).
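The O-pose mode above can be illustrated with a deliberately simplified toy model: pose is reduced to a 2D translation, the "S-pose" prior is a linear extrapolation of past poses, and the mask is a radius around each projected fiducial. A real system solves a six-degree-of-freedom non-linear problem from camera geometry, so this sketch only shows the data flow, not the actual solver.

```python
def predict_pose(history):
    """'S-pose'-style prior: linearly extrapolate the poses at n-1 and
    n-2 to estimate the pose at frame n (pose here is a 2D translation)."""
    (x2, y2), (x1, y1) = history[-2], history[-1]
    return (2 * x1 - x2, 2 * y1 - y2)

def o_pose(fiducials, detections, prior, mask_radius=1.0):
    """Project each known fiducial with the prior pose, search for a
    detection only inside a mask around that projection, then solve the
    translation from the matched pairs (falling back to the prior)."""
    dx = dy = n = 0.0
    for fx, fy in fiducials:
        px, py = fx + prior[0], fy + prior[1]        # projected location
        best = min(detections,
                   key=lambda d: (d[0] - px) ** 2 + (d[1] - py) ** 2)
        if (best[0] - px) ** 2 + (best[1] - py) ** 2 <= mask_radius ** 2:
            dx += best[0] - fx
            dy += best[1] - fy
            n += 1
    return (dx / n, dy / n) if n else prior
```

The masking step is what reduces the processing burden: points are only searched for near where the prior pose says the fiducials should be.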
In one embodiment, the VLS system may be configured to have certain base functionality, plus functionality facilitated by "apps" or applications that may be distributed through the VLS to provide certain specialized functionality. For example, the following apps may be installed to the subject VLS to provide specialized functionality.

A painterly-rendering app. Artists create image transforms that represent the world the way they see it. Users enable these transforms, thus viewing the world "through" the artists' eyes.

A table-top modeling app. Users "build" objects from physical objects placed on a table.

A virtual-presence app. Users pass virtual models of a space to other users, who then move around that space using virtual avatars.

An avatar emotion app. Measurements of subtle voice inflection, minor head movement, body temperature, heart rate, and the like animate subtle effects upon virtual-presence avatars. Digitizing human state information and passing it to a remote avatar uses less bandwidth than video. In addition, such data is mappable to non-human avatars capable of emotion. For example, a dog avatar can show excitement by wagging its tail based on excited vocal inflections.
An efficient mesh-type network may be desirable for moving data, as opposed to sending everything back to a server. Many mesh networks, however, have suboptimal performance because positional information and topology are not well characterized. In one embodiment, the system may be utilized to determine the location of all users with high precision, and thus a mesh network configuration may be utilized for high performance.
In one embodiment, the system may be utilized for searching. With augmented reality, for example, users will generate and leave content related to many aspects of the physical world. Much of this content is not text, and thus is not easily searched by typical methods. The system may be configured to provide a facility for keeping track of personal and social network content for searching and reference purposes.
In one embodiment, if the display device tracks two-dimensional points through successive frames, then fits a vector-valued function to the time evolution of those points, it is possible to sample the vector-valued function at any point in time (e.g., between frames) or at some point in the near future (by projecting the vector-valued function forward in time). This allows the creation of high-resolution post-processing, and the prediction of future pose before the next image is actually captured (e.g., doubling the registration speed is possible without doubling the camera frame rate).
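A minimal sketch of this idea, assuming a constant-velocity model (the simplest useful vector-valued function); each image coordinate of a tracked point is fitted independently by least squares:

```python
def fit_track(times, coords):
    """Least-squares line c(t) = a + b*t through one coordinate of a
    tracked image point. Evaluating between sample times interpolates;
    evaluating past the last frame predicts the point's future position,
    so pose can be updated before the next camera frame arrives."""
    n = len(times)
    mt = sum(times) / n
    mc = sum(coords) / n
    b = sum((t - mt) * (c - mc) for t, c in zip(times, coords)) \
        / sum((t - mt) ** 2 for t in times)
    a = mc - b * mt
    return lambda t: a + b * t
```

Sampling the fitted function at t = n + 0.5 yields a registration estimate between frames, which is how the registration rate can be doubled without doubling the camera frame rate.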
For body-fixed rendering (as opposed to head-fixed or world-fixed rendering), an accurate view of the body is desired. Rather than measuring the body, in one embodiment it is possible to derive its location through the average position of the user's head. If the user's face points forward most of the time, a multi-day average of head position will reveal that direction. In conjunction with the gravity vector, this provides a reasonably stable coordinate frame for body-fixed rendering. Using current measures of head position with respect to this long-duration coordinate frame allows consistent rendering of objects on or about the user's body, with no extra instrumentation. For implementation of this embodiment, single registers of the head-direction vector average may be started, and the running sum of the data divided by delta-t gives the current average head position. Keeping on the order of five registers, started on day n-5, day n-4, day n-3, day n-2, and day n-1, allows use of rolling averages over only the past "n" days.
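A rough sketch of the multi-day register scheme described above; the register layout and the normalization step are illustrative assumptions, not the disclosure's exact accumulation rule:

```python
import math

class HeadingAverager:
    """Keep one register of summed head-direction vectors per day; the
    mean over the last few days, combined with the gravity vector, gives
    a stable 'body forward' axis for body-fixed rendering."""
    def __init__(self, days=5):
        self.registers = [[0.0, 0.0, 0.0] for _ in range(days)]

    def add(self, day, vec):
        reg = self.registers[day % len(self.registers)]
        for i in range(3):
            reg[i] += vec[i]

    def forward(self):
        total = [sum(reg[i] for reg in self.registers) for i in range(3)]
        norm = math.sqrt(sum(c * c for c in total)) or 1.0
        return tuple(c / norm for c in total)
```

Because the oldest register is overwritten modulo the number of days, the estimate is a rolling average, so a gradual change in the user's habitual facing direction is tracked over time.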
In one embodiment, a scene may be scaled down and presented to a user in a smaller-than-actual space. For example, in a situation where there is a scene that must be rendered in a huge space (i.e., such as a soccer stadium), there may be no equivalently huge space present, or such a large space may be inconvenient for the user. In one embodiment, the system may be configured to reduce the scale of the scene, so that the user may watch it in miniature. For example, one might have a god's-eye-view video game, or a world championship soccer game, play out in an unscaled field, or scaled down and presented on a living-room floor. The system may be configured to simply shift the rendering perspective, scale, and associated accommodation distance.
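The scale shift can be pictured as a uniform scaling of world-space geometry about a chosen origin; in a real renderer the accommodation (focus) distance would be multiplied by the same factor, as the passage notes. A minimal sketch:

```python
def scale_scene(vertices, scale, origin=(0.0, 0.0, 0.0)):
    """Uniformly shrink world-space vertices about an origin so, e.g., a
    stadium-sized scene fits on a living-room floor. A real pipeline
    would also multiply the accommodation (focus) distance by `scale`."""
    ox, oy, oz = origin
    return [((x - ox) * scale + ox,
             (y - oy) * scale + oy,
             (z - oz) * scale + oz) for x, y, z in vertices]
```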
The system may also be configured to draw a user's attention to specific items by manipulating the focus of virtual or augmented reality objects, highlighting them, changing contrast, brightness, scale, and so on.
Preferably, the system may be configured to accomplish the following modes:

Open-space rendering:

· Grab key points from a structured environment, then fill in the space between with ML renderings.
· Potential venues: stages, output spaces, large indoor spaces (stadiums).

Object wrapping:

· Recognize three-dimensional objects in the real world, then augment them.
· "Recognition" here means identifying a three-dimensional blob with high enough precision to anchor imagery to it.
· There are two types of recognition: 1) classifying the type of an object (e.g., a "face"); 2) classifying a particular instance of an object (e.g., Joe, a person).
· Build recognizer software objects for various things: walls, ceilings, floors, faces, roads, the sky, skyscrapers, ranch houses, tables, chairs, cars, road signs, billboards, doors, windows, bookshelves, etc.
· Some recognizers are Type I and have generic functionality, e.g., "Put my video on that wall", "That is a dog".
· Other recognizers are Type II and have specific functionality, e.g., "My TV is on _my_ living-room wall, 3.2 feet from the ceiling", "That is Fido" (this is a more capable version of the generic recognizer).
· Building recognizers as software objects allows metered release of functionality and finer-grained control of the experience.

Body-centered rendering

· Render virtual objects fixed to the user's body.
· Some things should float around the user's body, like a digital tool belt.
· This requires knowing where the body is, not just the head. Body position may be estimated reasonably accurately from a long-term average of the user's head position (the head usually points forward, parallel to the ground).
· The trivial case is objects floating around the head.

Transparency/cutaway

· For Type II recognized objects, show cutaways.
· Link Type II recognized objects to an online database of three-dimensional models.
· One should start with objects that have commonly available three-dimensional models, such as cars and public utilities.

Virtual presence

· Paint remote people's avatars into open spaces.
o A subset of "open-space rendering" (above).
o Users create rough geometry of the local environment and iteratively send both geometry and texture maps to other users.
o Users must grant other users permission to enter their environment.
o Subtle voice cues, hand-gesture tracking, and head motion are sent to the remote avatar, which is animated from these fuzzy inputs.
o The above minimizes bandwidth.
· Make a wall a "portal" to another room.
o As with other methods, pass geometry and texture maps.
o Instead of showing an avatar in the local room, designate a recognized object (e.g., a wall) as a portal to the other's environment. In this way, multiple people could sit in their own rooms, looking "through" walls at other people's environments.

Virtual viewpoints

· A dense digital model of an area is created when a group of cameras (people) view a scene from different perspectives. This rich digital model is renderable from any vantage point that at least one camera can see.
· Example: people at a wedding. The scene is jointly modeled by all the attendees. Recognizers differentiate stationary objects from moving ones and texture-map them differently (e.g., walls have stable texture maps, while people have higher-frequency moving texture maps).
· With a rich digital model updated in real time, the scene can be rendered from any perspective. Attendees at the back can fly through the air to the front row for a better view.
· Attendees can show their moving avatars, or keep their vantage points hidden.
· Off-site attendees can find a "seat" either with their avatars or, if the organizers permit, invisibly.
· This is likely to require extremely high bandwidth. Notionally, high-frequency data would stream through the crowd over high-speed local wireless, while low-frequency data would come from the MLC.
· Because all attendees have highly accurate position information, establishing an optimal routing path for the local network is trivial.
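One way to picture why accurate positions simplify routing is greedy geographic forwarding, where each node hands a packet to the in-range neighbor geographically closest to the destination. The node names and radio-range cutoff below are illustrative assumptions, not part of the disclosure:

```python
import math

def greedy_route(positions, src, dst, radio_range=1.5):
    """Greedy geographic routing: with accurate positions for every node,
    each hop simply forwards to the in-range neighbour closest to the
    destination, stopping if no hop makes progress."""
    def dist(a, b):
        return math.dist(positions[a], positions[b])
    path, cur = [src], src
    while cur != dst:
        neighbours = [n for n in positions
                      if n != cur and dist(cur, n) <= radio_range]
        if not neighbours:
            break
        nxt = min(neighbours, key=lambda n: dist(n, dst))
        if dist(nxt, dst) >= dist(cur, dst):
            break                                # local minimum, give up
        path.append(nxt)
        cur = nxt
    return path
```

Without reliable positions, greedy forwarding like this is impossible, which is the performance problem the passage attributes to ordinary mesh networks.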
Messaging

· Simple silent messaging may be desirable.
· For this and other applications, it may be desirable to have a finger chording keyboard.
· Tactile glove solutions may offer enhanced performance.

Full virtual reality (VR)

· With the vision system darkened, show a view that is not overlaid on the real world.
· The registration system is still needed to track head position.
· A "couch mode" allows the user to fly.
· A "walking mode" re-renders real-world objects as virtual ones, so the user is not confused with the real world.
· Rendering body parts is essential for suspension of disbelief. This implies having a method for tracking and rendering body parts within the field of view.
· A non-see-through visor is a form of VR with many image-enhancement advantages that are impossible with direct overlay.
· Wide FOV, perhaps even with the ability to see to the rear.
· Various forms of "super vision": telescopic, see-through, infrared, God's-eye, etc.
In one embodiment, a system for virtual and/or augmented user experience is configured such that remote avatars associated with users may be animated based at least in part upon data on a wearable device, with input from sources such as voice inflection analysis and facial recognition analysis, as conducted by pertinent software modules. For example, referring back to Fig. 12, the bee avatar (2) may be animated to have a friendly smile based upon facial recognition of a smile on the user's face, or based upon a friendly tone of voice or speaking, as determined by software configured to analyze voice inputs to microphones which may capture voice samples locally from the user. Further, the avatar character may be animated in a manner in which the avatar appears to express a certain emotion. For example, in an embodiment wherein the avatar is a dog, a happy smile or tone detected by the system local to the human user may be expressed in the avatar as a wagging tail of the dog avatar.
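A hypothetical sketch of the emotion mapping described above; the normalizing constant and the maximum tail-wag rate are invented for illustration and are not taken from the disclosure:

```python
def avatar_emotion(smile_detected, pitch_variance):
    """Map low-bandwidth user-state measurements to avatar animation
    parameters. The 50.0 normalizer and 4 Hz maximum wag rate are
    made-up illustrative constants."""
    excitement = min(1.0, pitch_variance / 50.0)
    return {
        "smile": 1.0 if smile_detected else 0.0,
        "tail_wag_hz": 4.0 * excitement,   # for a dog avatar
    }
```

Transmitting a handful of such parameters per update is what keeps the bandwidth well below that of streaming video, as the passage notes.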
Various exemplary embodiments of the invention are described herein. Reference is made to these examples in a non-limiting sense; they are provided to illustrate more broadly applicable aspects of the invention. Various changes may be made to the invention described, and equivalents may be substituted, without departing from the true spirit and scope of the invention. In addition, many modifications may be made to adapt a particular situation, material, composition of matter, process, process act(s), or step(s) to the objective(s), spirit, or scope of the present invention. Further, as will be appreciated by those with skill in the art, each of the individual variations described and illustrated herein has discrete components and features which may be readily separated from or combined with the features of any of the other several embodiments without departing from the scope or spirit of the present invention. All such modifications are intended to be within the scope of the claims associated with this disclosure.

The invention includes methods that may be performed using the subject devices. The methods may comprise the act of providing such a suitable device. Such provision may be performed by the end user. In other words, the "providing" act merely requires the end user to obtain, access, approach, position, set up, activate, power up, or otherwise act to provide the requisite device in the subject method. Methods recited herein may be carried out in any order of the recited events which is logically possible, as well as in the recited order of events.

Exemplary aspects of the invention, together with details regarding material selection and manufacture, have been set forth above. As for other details of the present invention, these may be appreciated in connection with the above-referenced patents and publications, as well as generally known or appreciated by those with skill in the art. The same may hold true with respect to method-based aspects of the invention in terms of additional acts as commonly or logically employed.

In addition, though the invention has been described in reference to several examples optionally incorporating various features, the invention is not to be limited to that which is described or indicated as contemplated with respect to each variation of the invention. Various changes may be made to the invention described, and equivalents (whether recited herein or not included for the sake of some brevity) may be substituted, without departing from the true spirit and scope of the invention. In addition, where a range of values is provided, it is understood that every intervening value between the upper and lower limit of that range, and any other stated or intervening value in that stated range, is encompassed within the invention.

Also, it is contemplated that any optional feature of the inventive variations described may be set forth and claimed independently, or in combination with any one or more of the features described herein. Reference to a singular item includes the possibility that there are plural of the same items present. More specifically, as used herein and in the claims associated hereto, the singular forms "a", "an", "said", and "the" include plural referents unless specifically stated otherwise. In other words, use of the articles allows for "at least one" of the subject item in the description above as well as the claims associated with this disclosure. It is further noted that such claims may be drafted to exclude any optional element. As such, this statement is intended to serve as antecedent basis for use of such exclusive terminology as "solely", "only", and the like in connection with the recitation of claim elements, or use of a "negative" limitation.

Without the use of such exclusive terminology, the term "comprising" in claims associated with this disclosure shall allow for the inclusion of any additional element, irrespective of whether a given number of elements are enumerated in such claims, or whether the addition of a feature could be regarded as transforming the nature of an element set forth in such claims. Except as specifically defined herein, all technical and scientific terms used herein are to be given as broad a commonly understood meaning as possible while maintaining claim validity.

The breadth of the present invention is not to be limited to the examples provided and/or the subject specification, but rather only by the scope of claim language associated with this disclosure.
Claims (18)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201610908994.6A CN106484115B (en) | 2011-10-28 | 2012-10-29 | Systems and methods for augmented and virtual reality |
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201161552941P | 2011-10-28 | 2011-10-28 | |
| US61/552,941 | 2011-10-28 | ||
| PCT/US2012/062500 WO2013085639A1 (en) | 2011-10-28 | 2012-10-29 | System and method for augmented and virtual reality |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201610908994.6A Division CN106484115B (en) | 2011-10-28 | 2012-10-29 | Systems and methods for augmented and virtual reality |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN104011788A true CN104011788A (en) | 2014-08-27 |
| CN104011788B CN104011788B (en) | 2016-11-16 |
Family ID: 48224484
Family Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201280064922.8A Active CN104011788B (en) | 2011-10-28 | 2012-10-29 | Systems and methods for augmented and virtual reality |
| CN201610908994.6A Active CN106484115B (en) | 2011-10-28 | 2012-10-29 | Systems and methods for augmented and virtual reality |
Family Applications After (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201610908994.6A Active CN106484115B (en) | 2011-10-28 | 2012-10-29 | Systems and methods for augmented and virtual reality |
Country Status (12)
| Country | Link |
|---|---|
| US (15) | US9215293B2 (en) |
| EP (6) | EP3258671B1 (en) |
| JP (8) | JP6110866B2 (en) |
| KR (4) | KR101944846B1 (en) |
| CN (2) | CN104011788B (en) |
| AU (6) | AU2012348348B2 (en) |
| BR (1) | BR112014010230A8 (en) |
| CA (4) | CA3164530C (en) |
| IL (5) | IL232281B (en) |
| IN (1) | IN2014CN03300A (en) |
| RU (2) | RU2017115669A (en) |
| WO (1) | WO2013085639A1 (en) |
Cited By (67)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104759015A (en) * | 2015-02-11 | 2015-07-08 | 北京市朝阳区新希望自闭症支援中心 | Computer control based vision training system |
| CN105117931A (en) * | 2015-07-30 | 2015-12-02 | 金华唯见科技有限公司 | Goal-driven virtual reality ecosystem |
| CN105188516A (en) * | 2013-03-11 | 2015-12-23 | 奇跃公司 | System and method for augmented and virtual reality |
| CN105183147A (en) * | 2015-08-03 | 2015-12-23 | 众景视界(北京)科技有限公司 | Head-mounted smart device and method thereof for modeling three-dimensional virtual limb |
| CN105739106A (en) * | 2015-06-12 | 2016-07-06 | 南京航空航天大学 | Somatosensory multi-view point large-size light field real three-dimensional display device and method |
| CN105759958A (en) * | 2016-01-27 | 2016-07-13 | 中国人民解放军信息工程大学 | Data interaction system and method |
| CN105892667A (en) * | 2016-03-31 | 2016-08-24 | 联想(北京)有限公司 | Information processing method in virtual reality scene and electronic equipment |
| CN106310660A (en) * | 2016-09-18 | 2017-01-11 | 三峡大学 | Mechanics-based visual virtual football control system |
| CN106325378A (en) * | 2015-07-01 | 2017-01-11 | 三星电子株式会社 | Method and apparatus for context based application grouping in virtual reality |
| TWI567670B (en) * | 2015-02-26 | 2017-01-21 | 宅妝股份有限公司 | Method and system for management of switching virtual-reality mode and augmented-reality mode |
| CN106484099A (en) * | 2016-08-30 | 2017-03-08 | 王杰 | Content reproduction apparatus, the processing system with the replay device and method |
| CN106919270A (en) * | 2015-12-28 | 2017-07-04 | 宏达国际电子股份有限公司 | Virtual reality device and virtual reality method |
| CN107038738A (en) * | 2015-12-15 | 2017-08-11 | 联想(新加坡)私人有限公司 | Object is shown using modified rendering parameter |
| CN107209950A (en) * | 2015-01-29 | 2017-09-26 | 微软技术许可有限责任公司 | Automatically generate virtual materials from real-world materials |
| CN107219916A (en) * | 2016-03-21 | 2017-09-29 | 埃森哲环球解决方案有限公司 | Generated based on multi-platform experience |
| CN107330855A (en) * | 2017-06-16 | 2017-11-07 | 福州瑞芯微电子股份有限公司 | The method and apparatus of VR interaction datas dimensional uniformity regulation |
| CN107533233A (en) * | 2015-03-05 | 2018-01-02 | 奇跃公司 | System and method for augmented reality |
| CN107548484A (en) * | 2015-04-29 | 2018-01-05 | 索尼移动通讯有限公司 | The orientation of article moved in space and the expression of movement are provided |
| CN107683497A (en) * | 2015-06-15 | 2018-02-09 | 索尼公司 | Message processing device, information processing method and program |
| CN107705349A (en) * | 2016-08-03 | 2018-02-16 | 维布络有限公司 | System and method for augmented reality perceived content |
| TWI621097B (en) * | 2014-11-20 | 2018-04-11 | 財團法人資訊工業策進會 | Mobile device, operating method, and non-transitory computer readable storage medium for storing operating method |
| CN108021229A (en) * | 2016-10-31 | 2018-05-11 | 迪斯尼企业公司 | High fidelity numeral immersion is recorded by computed offline to experience |
| CN108027984A (en) * | 2015-09-25 | 2018-05-11 | 奇跃公司 | Method and system for detecting and combining structural features in 3D reconstruction |
| CN108139801A (en) * | 2015-12-22 | 2018-06-08 | 谷歌有限责任公司 | For performing the system and method for electronical display stabilization via light field rendering is retained |
| CN108139805A (en) * | 2016-02-08 | 2018-06-08 | 谷歌有限责任公司 | For the control system of the navigation in reality environment |
| CN108334377A (en) * | 2017-01-20 | 2018-07-27 | 深圳纬目信息技术有限公司 | A kind of wear shows that the user of equipment uses progress monitoring method |
| CN108983974A (en) * | 2018-07-03 | 2018-12-11 | 百度在线网络技术(北京)有限公司 | AR scene process method, apparatus, equipment and computer readable storage medium |
| CN109475776A (en) * | 2016-06-08 | 2019-03-15 | 伙伴有限公司 | A system that provides a shared environment |
| CN109478097A (en) * | 2016-06-16 | 2019-03-15 | Smi创新传感技术有限公司 | Method and system, client device, server, and computer program product for providing eye-tracking-based information about user behavior |
| CN109478345A (en) * | 2016-07-13 | 2019-03-15 | 株式会社万代南梦宫娱乐 | Simulation system, processing method and information storage medium |
| CN109564504A (en) * | 2016-08-10 | 2019-04-02 | 高通股份有限公司 | For the multimedia device based on mobile processing space audio |
| CN110050295A (en) * | 2016-12-14 | 2019-07-23 | 微软技术许可有限责任公司 | It is drawn for enhancing with the subtracting property of virtual reality system |
| CN110084087A (en) * | 2016-10-26 | 2019-08-02 | 奥康科技有限公司 | For analyzing image and providing the wearable device and method of feedback |
| CN110140099A (en) * | 2017-01-27 | 2019-08-16 | 高通股份有限公司 | System and method for tracking control unit |
| TWI672168B (en) * | 2015-03-20 | 2019-09-21 | 新力電腦娛樂股份有限公司 | System for processing content for a head mounted display(hmd), peripheral device for use in interfacing with a virtual reality scene generated by a computer for presentation on the hmd, and method of simulating a feeling of contact with a virtual object |
| CN110413109A (en) * | 2019-06-28 | 2019-11-05 | 广东虚拟现实科技有限公司 | Method, device, system, electronic device and storage medium for generating virtual content |
| CN111107911A (en) * | 2017-07-07 | 2020-05-05 | 布克斯顿全球企业股份有限公司 | Competition simulation |
| US10649211B2 (en) | 2016-08-02 | 2020-05-12 | Magic Leap, Inc. | Fixed-distance virtual and augmented reality systems and methods |
| US10678324B2 (en) | 2015-03-05 | 2020-06-09 | Magic Leap, Inc. | Systems and methods for augmented reality |
| CN111462335A (en) * | 2020-03-18 | 2020-07-28 | Oppo广东移动通信有限公司 | Equipment control method and device based on virtual object interaction, medium and equipment |
| CN111459267A (en) * | 2020-03-02 | 2020-07-28 | 杭州嘉澜创新科技有限公司 | Data processing method, first server, second server and storage medium |
| CN111450521A (en) * | 2015-07-28 | 2020-07-28 | 弗丘伊克斯控股公司 | System and method for soft decoupling of inputs |
| US10762598B2 (en) | 2017-03-17 | 2020-09-01 | Magic Leap, Inc. | Mixed reality system with color virtual content warping and method of generating virtual content using same |
| US10769752B2 (en) | 2017-03-17 | 2020-09-08 | Magic Leap, Inc. | Mixed reality system with virtual content warping and method of generating virtual content using same |
| US10812936B2 (en) | 2017-01-23 | 2020-10-20 | Magic Leap, Inc. | Localization determination for mixed reality systems |
| US10838207B2 (en) | 2015-03-05 | 2020-11-17 | Magic Leap, Inc. | Systems and methods for augmented reality |
| CN111939561A (en) * | 2020-08-31 | 2020-11-17 | 聚好看科技股份有限公司 | Display device and interaction method |
| CN111973979A (en) * | 2019-05-23 | 2020-11-24 | 明日基金知识产权控股有限公司 | Live management of the real world via a persistent virtual world system |
| CN112020836A (en) * | 2018-01-19 | 2020-12-01 | Esb实验室股份有限公司 | Virtual interactive audience interface |
| CN112055033A (en) * | 2019-06-05 | 2020-12-08 | 北京外号信息技术有限公司 | Interaction method and system based on optical communication device |
| US10861237B2 (en) | 2017-03-17 | 2020-12-08 | Magic Leap, Inc. | Mixed reality system with multi-source virtual content compositing and method of generating virtual content using same |
| CN112055034A (en) * | 2019-06-05 | 2020-12-08 | 北京外号信息技术有限公司 | Interaction method and system based on optical communication device |
| CN112100284A (en) * | 2019-06-18 | 2020-12-18 | 明日基金知识产权控股有限公司 | Interacting with real world objects and corresponding databases through virtual twin reality |
| US10909711B2 (en) | 2015-12-04 | 2021-02-02 | Magic Leap, Inc. | Relocalization systems and methods |
| CN112363615A (en) * | 2020-10-27 | 2021-02-12 | 上海影创信息科技有限公司 | Multi-user VR/AR interaction system, method and computer readable storage medium |
| US10943521B2 (en) | 2018-07-23 | 2021-03-09 | Magic Leap, Inc. | Intra-field sub code timing in field sequential displays |
| CN113614710A (en) * | 2019-03-20 | 2021-11-05 | 诺基亚技术有限公司 | Device for presenting a presentation of data and associated method |
| CN113946701A (en) * | 2021-09-14 | 2022-01-18 | 广州市城市规划设计有限公司 | Method and device for dynamically updating urban and rural planning data based on image processing |
| CN114008582A (en) * | 2019-06-28 | 2022-02-01 | 索尼集团公司 | Information processing apparatus, information processing method, and program |
| CN114125523A (en) * | 2020-08-28 | 2022-03-01 | 明日基金知识产权有限公司 | Data processing system and method |
| CN114651435A (en) * | 2019-11-20 | 2022-06-21 | 脸谱科技有限责任公司 | Artificial reality system with virtual wireless channel |
| US11379948B2 (en) | 2018-07-23 | 2022-07-05 | Magic Leap, Inc. | Mixed reality system with virtual content warping and method of generating virtual content using same |
| CN114995647A (en) * | 2016-02-05 | 2022-09-02 | 奇跃公司 | System and method for augmented reality |
| CN115047979A (en) * | 2022-08-15 | 2022-09-13 | 歌尔股份有限公司 | Head-mounted display equipment control system and interaction method |
| US11854150B2 (en) | 2013-03-15 | 2023-12-26 | Magic Leap, Inc. | Frame-by-frame rendering for augmented or virtual reality systems |
| CN119367749A (en) * | 2018-12-20 | 2025-01-28 | 索尼互动娱乐有限责任公司 | Method for allocating resources for online games |
| WO2026001272A1 (en) * | 2024-06-28 | 2026-01-02 | 中国电信股份有限公司技术创新中心 | Augmented reality processing method and apparatus, and communication device |
Families Citing this family (782)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9658473B2 (en) | 2005-10-07 | 2017-05-23 | Percept Technologies Inc | Enhanced optical and perceptual digital eyewear |
| US11428937B2 (en) | 2005-10-07 | 2022-08-30 | Percept Technologies | Enhanced optical and perceptual digital eyewear |
| US20070081123A1 (en) | 2005-10-07 | 2007-04-12 | Lewis Scott W | Digital eyewear |
| GB0522968D0 (en) | 2005-11-11 | 2005-12-21 | Popovich Milan M | Holographic illumination device |
| GB0718706D0 (en) | 2007-09-25 | 2007-11-07 | Creative Physics Ltd | Method and apparatus for reducing laser speckle |
| US9370704B2 (en) * | 2006-08-21 | 2016-06-21 | Pillar Vision, Inc. | Trajectory detection and feedback system for tennis |
| US9812096B2 (en) | 2008-01-23 | 2017-11-07 | Spy Eye, Llc | Eye mounted displays and systems using eye mounted displays |
| US9335604B2 (en) | 2013-12-11 | 2016-05-10 | Milan Momcilo Popovich | Holographic waveguide display |
| US11726332B2 (en) | 2009-04-27 | 2023-08-15 | Digilens Inc. | Diffractive projection apparatus |
| KR20100138700A (en) * | 2009-06-25 | 2010-12-31 | 삼성전자주식회사 | Virtual World Processing Unit and Methods |
| US8694553B2 (en) * | 2010-06-07 | 2014-04-08 | Gary Stephen Shuster | Creation and use of virtual places |
| US9274349B2 (en) | 2011-04-07 | 2016-03-01 | Digilens Inc. | Laser despeckler based on angular diversity |
| EP2705435B8 (en) | 2011-05-06 | 2017-08-23 | Magic Leap, Inc. | Massive simultaneous remote digital presence world |
| WO2016020630A2 (en) | 2014-08-08 | 2016-02-11 | Milan Momcilo Popovich | Waveguide laser illuminator incorporating a despeckler |
| US10670876B2 (en) | 2011-08-24 | 2020-06-02 | Digilens Inc. | Waveguide laser illuminator incorporating a despeckler |
| WO2013027004A1 (en) | 2011-08-24 | 2013-02-28 | Milan Momcilo Popovich | Wearable data display |
| CN104011788B (en) | 2011-10-28 | 2016-11-16 | 奇跃公司 | Systems and methods for augmented and virtual reality |
| US9462210B2 (en) | 2011-11-04 | 2016-10-04 | Remote TelePointer, LLC | Method and system for user interface for interactive devices using a mobile device |
| WO2013102759A2 (en) | 2012-01-06 | 2013-07-11 | Milan Momcilo Popovich | Contact image sensor using switchable bragg gratings |
| US8692840B2 (en) * | 2012-02-05 | 2014-04-08 | Mitsubishi Electric Research Laboratories, Inc. | Method for modeling and estimating rendering errors in virtual images |
| EP2817920A4 (en) | 2012-02-23 | 2015-09-02 | Tyco Electronics Ltd Uk | SYSTEM FOR LOCALIZATION AND IDENTIFICATION OF COMPUTER RESOURCES BASED ON LEVELS |
| US9170648B2 (en) * | 2012-04-03 | 2015-10-27 | The Boeing Company | System and method for virtual engineering |
| JP6238965B2 (en) | 2012-04-25 | 2017-11-29 | ロックウェル・コリンズ・インコーポレーテッド | Holographic wide-angle display |
| WO2013167864A1 (en) | 2012-05-11 | 2013-11-14 | Milan Momcilo Popovich | Apparatus for eye tracking |
| US9671566B2 (en) | 2012-06-11 | 2017-06-06 | Magic Leap, Inc. | Planar waveguide apparatus with diffraction element(s) and system employing same |
| US9129404B1 (en) * | 2012-09-13 | 2015-09-08 | Amazon Technologies, Inc. | Measuring physical objects and presenting virtual articles |
| US20140128161A1 (en) * | 2012-11-06 | 2014-05-08 | Stephen Latta | Cross-platform augmented reality experience |
| JP5787099B2 (en) * | 2012-11-06 | 2015-09-30 | コニカミノルタ株式会社 | Guidance information display device |
| US8764532B1 (en) | 2012-11-07 | 2014-07-01 | Bertec Corporation | System and method for fall and/or concussion prediction |
| US11270498B2 (en) * | 2012-11-12 | 2022-03-08 | Sony Interactive Entertainment Inc. | Real world acoustic and lighting modeling for improved immersion in virtual reality and augmented reality environments |
| US9933684B2 (en) | 2012-11-16 | 2018-04-03 | Rockwell Collins, Inc. | Transparent waveguide display providing upper and lower fields of view having a specific light output aperture configuration |
| US9996150B2 (en) * | 2012-12-19 | 2018-06-12 | Qualcomm Incorporated | Enabling augmented reality using eye gaze tracking |
| US20140198130A1 (en) * | 2013-01-15 | 2014-07-17 | Immersion Corporation | Augmented reality user interface with haptic feedback |
| US10231662B1 (en) | 2013-01-19 | 2019-03-19 | Bertec Corporation | Force measurement system |
| US9770203B1 (en) | 2013-01-19 | 2017-09-26 | Bertec Corporation | Force measurement system and a method of testing a subject |
| US11540744B1 (en) | 2013-01-19 | 2023-01-03 | Bertec Corporation | Force measurement system |
| US10010286B1 (en) | 2013-01-19 | 2018-07-03 | Bertec Corporation | Force measurement system |
| US8704855B1 (en) * | 2013-01-19 | 2014-04-22 | Bertec Corporation | Force measurement system having a displaceable force measurement assembly |
| US10856796B1 (en) | 2013-01-19 | 2020-12-08 | Bertec Corporation | Force measurement system |
| US11052288B1 (en) | 2013-01-19 | 2021-07-06 | Bertec Corporation | Force measurement system |
| US12161477B1 (en) | 2013-01-19 | 2024-12-10 | Bertec Corporation | Force measurement system |
| US11311209B1 (en) | 2013-01-19 | 2022-04-26 | Bertec Corporation | Force measurement system and a motion base used therein |
| US9526443B1 (en) * | 2013-01-19 | 2016-12-27 | Bertec Corporation | Force and/or motion measurement system and a method of testing a subject |
| US11857331B1 (en) | 2013-01-19 | 2024-01-02 | Bertec Corporation | Force measurement system |
| US10413230B1 (en) | 2013-01-19 | 2019-09-17 | Bertec Corporation | Force measurement system |
| US9081436B1 (en) | 2013-01-19 | 2015-07-14 | Bertec Corporation | Force and/or motion measurement system and a method of testing a subject using the same |
| US10646153B1 (en) | 2013-01-19 | 2020-05-12 | Bertec Corporation | Force measurement system |
| US8847989B1 (en) * | 2013-01-19 | 2014-09-30 | Bertec Corporation | Force and/or motion measurement system and a method for training a subject using the same |
| US9449340B2 (en) * | 2013-01-30 | 2016-09-20 | Wal-Mart Stores, Inc. | Method and system for managing an electronic shopping list with gestures |
| JP6281495B2 (en) * | 2013-02-01 | 2018-02-21 | ソニー株式会社 | Information processing apparatus, terminal apparatus, information processing method, and program |
| EP3900641A1 (en) | 2013-03-14 | 2021-10-27 | SRI International Inc. | Wrist and grasper system for a robotic tool |
| JP6396987B2 (en) | 2013-03-15 | 2018-09-26 | エスアールアイ インターナショナルSRI International | Super elaborate surgical system |
| US11181740B1 (en) | 2013-03-15 | 2021-11-23 | Percept Technologies Inc | Digital eyewear procedures related to dry eyes |
| US9092954B2 (en) * | 2013-03-15 | 2015-07-28 | Immersion Corporation | Wearable haptic device |
| WO2014188149A1 (en) | 2013-05-20 | 2014-11-27 | Milan Momcilo Popovich | Holographic waveguide eye tracker |
| US9764229B2 (en) | 2013-05-23 | 2017-09-19 | Disney Enterprises, Inc. | Unlocking of digital content based on geo-location of objects |
| US9977256B2 (en) * | 2013-05-30 | 2018-05-22 | Johnson & Johnson Vision Care, Inc. | Methods for manufacturing and programming an energizable ophthalmic lens with a programmable media insert |
| US10650264B2 (en) * | 2013-05-31 | 2020-05-12 | Nec Corporation | Image recognition apparatus, processing method thereof, and program |
| US9685003B2 (en) * | 2013-06-03 | 2017-06-20 | Microsoft Technology Licensing, Llc | Mixed reality data collaboration |
| US10905943B2 (en) * | 2013-06-07 | 2021-02-02 | Sony Interactive Entertainment LLC | Systems and methods for reducing hops associated with a head mounted system |
| US9874749B2 (en) | 2013-11-27 | 2018-01-23 | Magic Leap, Inc. | Virtual and augmented reality systems and methods |
| US10262462B2 (en) | 2014-04-18 | 2019-04-16 | Magic Leap, Inc. | Systems and methods for augmented and virtual reality |
| KR102098277B1 (en) | 2013-06-11 | 2020-04-07 | 삼성전자주식회사 | Visibility improvement method based on eye tracking, machine-readable storage medium and electronic device |
| US10175483B2 (en) * | 2013-06-18 | 2019-01-08 | Microsoft Technology Licensing, Llc | Hybrid world/body locked HUD on an HMD |
| US9440143B2 (en) | 2013-07-02 | 2016-09-13 | Kabam, Inc. | System and method for determining in-game capabilities based on device information |
| US9014639B2 (en) * | 2013-07-11 | 2015-04-21 | Johnson & Johnson Vision Care, Inc. | Methods of using and smartphone event notification utilizing an energizable ophthalmic lens with a smartphone event indicator mechanism |
| US10295338B2 (en) | 2013-07-12 | 2019-05-21 | Magic Leap, Inc. | Method and system for generating map data from an image |
| US9727772B2 (en) | 2013-07-31 | 2017-08-08 | Digilens, Inc. | Method and apparatus for contact image sensing |
| US9415306B1 (en) | 2013-08-12 | 2016-08-16 | Kabam, Inc. | Clients communicate input technique to server |
| US9656172B2 (en) * | 2013-08-16 | 2017-05-23 | Disney Enterprises, Inc. | Unlocking of virtual content through geo-location |
| US9466266B2 (en) | 2013-08-28 | 2016-10-11 | Qualcomm Incorporated | Dynamic display markers |
| KR101873127B1 (en) * | 2013-09-30 | 2018-06-29 | 피씨엠에스 홀딩스, 인크. | Methods, apparatus, systems, devices, and computer program products for providing an augmented reality display and/or user interface |
| WO2015061799A1 (en) | 2013-10-25 | 2015-04-30 | Gurtowski Louis | Selective capture with rapid sharing of user or mixed reality actions and states using interactive virtual streaming |
| US11165842B2 (en) | 2013-10-25 | 2021-11-02 | Louis Gurtowski | Selective capture with rapid sharing of user or mixed reality actions and states using interactive virtual streaming |
| US9704295B2 (en) * | 2013-11-05 | 2017-07-11 | Microsoft Technology Licensing, Llc | Construction of synthetic augmented reality environment |
| US9623322B1 (en) | 2013-11-19 | 2017-04-18 | Kabam, Inc. | System and method of displaying device information for party formation |
| US9996973B2 (en) | 2013-11-30 | 2018-06-12 | Empire Technology Development Llc | Augmented reality objects based on biometric feedback |
| US9295916B1 (en) | 2013-12-16 | 2016-03-29 | Kabam, Inc. | System and method for providing recommendations for in-game events |
| EP2886171A1 (en) * | 2013-12-18 | 2015-06-24 | Microsoft Technology Licensing, LLC | Cross-platform augmented reality experience |
| WO2015095507A1 (en) * | 2013-12-18 | 2015-06-25 | Joseph Schuman | Location-based system for sharing augmented reality content |
| KR102355118B1 (en) | 2014-01-06 | 2022-01-26 | 삼성전자주식회사 | Electronic device, and method for displaying an event on a virtual reality mode |
| WO2020220724A1 (en) | 2019-04-30 | 2020-11-05 | 深圳市韶音科技有限公司 | Acoustic output apparatus |
| WO2015102464A1 (en) * | 2014-01-06 | 2015-07-09 | 삼성전자 주식회사 | Electronic device and method for displaying event in virtual reality mode |
| US9993335B2 (en) | 2014-01-08 | 2018-06-12 | Spy Eye, Llc | Variable resolution eye mounted displays |
| JP6299234B2 (en) * | 2014-01-23 | 2018-03-28 | 富士通株式会社 | Display control method, information processing apparatus, and display control program |
| US10983805B2 (en) * | 2014-02-21 | 2021-04-20 | Nod, Inc. | Contextual keyboard located on a remote server for implementation on any content delivery and interaction application |
| US11138793B2 (en) | 2014-03-14 | 2021-10-05 | Magic Leap, Inc. | Multi-depth plane display system with reduced switching between depth planes |
| US9677840B2 (en) * | 2014-03-14 | 2017-06-13 | Lineweight Llc | Augmented reality simulator |
| US10430985B2 (en) | 2014-03-14 | 2019-10-01 | Magic Leap, Inc. | Augmented reality systems and methods utilizing reflections |
| US9536352B2 (en) * | 2014-03-27 | 2017-01-03 | Intel Corporation | Imitating physical subjects in photos and videos with augmented reality virtual objects |
| WO2015161307A1 (en) * | 2014-04-18 | 2015-10-22 | Magic Leap, Inc. | Systems and methods for augmented and virtual reality |
| US9767615B2 (en) * | 2014-04-23 | 2017-09-19 | Raytheon Company | Systems and methods for context based information delivery using augmented reality |
| US9672416B2 (en) * | 2014-04-29 | 2017-06-06 | Microsoft Technology Licensing, Llc | Facial expression tracking |
| US9430038B2 (en) * | 2014-05-01 | 2016-08-30 | Microsoft Technology Licensing, Llc | World-locked display quality feedback |
| US9690370B2 (en) * | 2014-05-05 | 2017-06-27 | Immersion Corporation | Systems and methods for viewport-based augmented reality haptic effects |
| US10332314B2 (en) | 2014-05-07 | 2019-06-25 | Commscope Technologies Llc | Hands-free asset identification, location and management system |
| CA2891742C (en) * | 2014-05-15 | 2023-11-28 | Tyco Safety Products Canada Ltd. | System and method for processing control commands in a voice interactive system |
| US10600245B1 (en) | 2014-05-28 | 2020-03-24 | Lucasfilm Entertainment Company Ltd. | Navigating a virtual environment of a media content item |
| KR101888566B1 (en) * | 2014-06-03 | 2018-08-16 | 애플 인크. | Method and system for presenting a digital information related to a real object |
| US9575560B2 (en) | 2014-06-03 | 2017-02-21 | Google Inc. | Radar-based gesture-recognition through a wearable device |
| US10055876B2 (en) * | 2014-06-06 | 2018-08-21 | Matterport, Inc. | Optimal texture memory allocation |
| EP3155560B1 (en) * | 2014-06-14 | 2020-05-20 | Magic Leap, Inc. | Methods and systems for creating virtual and augmented reality |
| US10852838B2 (en) | 2014-06-14 | 2020-12-01 | Magic Leap, Inc. | Methods and systems for creating virtual and augmented reality |
| US10248992B2 (en) | 2014-07-26 | 2019-04-02 | Audi Ag | Presentation device for carrying out a product presentation |
| US9811164B2 (en) | 2014-08-07 | 2017-11-07 | Google Inc. | Radar-based gesture sensing and data transmission |
| WO2016020632A1 (en) | 2014-08-08 | 2016-02-11 | Milan Momcilo Popovich | Method for holographic mastering and replication |
| US10268321B2 (en) | 2014-08-15 | 2019-04-23 | Google Llc | Interactive textiles within hard objects |
| US9778749B2 (en) | 2014-08-22 | 2017-10-03 | Google Inc. | Occluded gesture recognition |
| US11169988B2 (en) | 2014-08-22 | 2021-11-09 | Google Llc | Radar recognition-aided search |
| WO2016032807A1 (en) | 2014-08-25 | 2016-03-03 | Google Inc. | Methods and systems for augmented reality to display virtual representations of robotic device actions |
| JP6467698B2 (en) * | 2014-08-28 | 2019-02-13 | 学校法人立命館 | Baseball batting practice support system |
| WO2016038615A1 (en) * | 2014-09-11 | 2016-03-17 | Selaflex Ltd | An apparatus and method for displaying an output from a display |
| US10241330B2 (en) | 2014-09-19 | 2019-03-26 | Digilens, Inc. | Method and apparatus for generating input images for holographic waveguide displays |
| WO2016046514A1 (en) | 2014-09-26 | 2016-03-31 | LOKOVIC, Kimberly, Sun | Holographic waveguide opticaltracker |
| AU2015323940B2 (en) | 2014-09-29 | 2021-05-20 | Magic Leap, Inc. | Architectures and methods for outputting different wavelength light out of waveguides |
| US9459201B2 (en) | 2014-09-29 | 2016-10-04 | Zyomed Corp. | Systems and methods for noninvasive blood glucose and other analyte detection and measurement using collision computing |
| US9600080B2 (en) | 2014-10-02 | 2017-03-21 | Google Inc. | Non-line-of-sight radar-based gesture recognition |
| CN105892620A (en) * | 2014-10-28 | 2016-08-24 | 贵州师范学院 | Kinect based method for remotely controlling intelligent car by motion sensing |
| JP6551417B2 (en) * | 2014-11-12 | 2019-07-31 | 富士通株式会社 | Wearable device, display control method, and display control program |
| CN104391574A (en) * | 2014-11-14 | 2015-03-04 | 京东方科技集团股份有限公司 | Sight processing method, sight processing system, terminal equipment and wearable equipment |
| US10055892B2 (en) | 2014-11-16 | 2018-08-21 | Eonite Perception Inc. | Active region determination for head mounted displays |
| CN106999768A (en) * | 2014-11-16 | 2017-08-01 | 盖·芬夫特 | Systems and methods for providing an alternate reality ride experience |
| US11250630B2 (en) | 2014-11-18 | 2022-02-15 | Hallmark Cards, Incorporated | Immersive story creation |
| US10235714B2 (en) * | 2014-12-01 | 2019-03-19 | Verizon Patent And Licensing Inc. | Customized virtual reality user environment control |
| US20160195923A1 (en) * | 2014-12-26 | 2016-07-07 | Krush Technologies, Llc | Gyroscopic chair for virtual reality simulation |
| US10154239B2 (en) | 2014-12-30 | 2018-12-11 | Onpoint Medical, Inc. | Image-guided surgery with surface reconstruction and augmented reality visualization |
| CN111323867A (en) | 2015-01-12 | 2020-06-23 | 迪吉伦斯公司 | Environmentally isolated waveguide display |
| EP3245551B1 (en) | 2015-01-12 | 2019-09-18 | DigiLens Inc. | Waveguide light field displays |
| JP6867947B2 (en) | 2015-01-20 | 2021-05-12 | ディジレンズ インコーポレイテッド | Holographic waveguide rider |
| US9852546B2 (en) | 2015-01-28 | 2017-12-26 | CCP hf. | Method and system for receiving gesture input via virtual control objects |
| US10725297B2 (en) | 2015-01-28 | 2020-07-28 | CCP hf. | Method and system for implementing a virtual representation of a physical environment using a virtual reality environment |
| US10726625B2 (en) | 2015-01-28 | 2020-07-28 | CCP hf. | Method and system for improving the transmission and processing of data regarding a multi-user virtual environment |
| US9632226B2 (en) | 2015-02-12 | 2017-04-25 | Digilens Inc. | Waveguide grating device |
| US11468639B2 (en) * | 2015-02-20 | 2022-10-11 | Microsoft Technology Licensing, Llc | Selective occlusion system for augmented reality devices |
| US10684485B2 (en) | 2015-03-06 | 2020-06-16 | Sony Interactive Entertainment Inc. | Tracking system for head mounted display |
| US10088895B2 (en) | 2015-03-08 | 2018-10-02 | Bent Reality Labs, LLC | Systems and processes for providing virtual sexual experiences |
| KR102630754B1 (en) | 2015-03-16 | 2024-01-26 | 매직 립, 인코포레이티드 | Augmented Reality Pulse Oximetry |
| US10459145B2 (en) | 2015-03-16 | 2019-10-29 | Digilens Inc. | Waveguide device incorporating a light pipe |
| WO2016156776A1 (en) | 2015-03-31 | 2016-10-06 | Milan Momcilo Popovich | Method and apparatus for contact image sensing |
| US11501244B1 (en) | 2015-04-06 | 2022-11-15 | Position Imaging, Inc. | Package tracking systems and methods |
| US10853757B1 (en) | 2015-04-06 | 2020-12-01 | Position Imaging, Inc. | Video for real-time confirmation in package tracking systems |
| US11416805B1 (en) * | 2015-04-06 | 2022-08-16 | Position Imaging, Inc. | Light-based guidance for package tracking systems |
| JP6433844B2 (en) * | 2015-04-09 | 2018-12-05 | 株式会社ソニー・インタラクティブエンタテインメント | Information processing apparatus, relay apparatus, information processing system, and software update method |
| KR102063895B1 (en) | 2015-04-20 | 2020-01-08 | 삼성전자주식회사 | Master device, slave device and control method thereof |
| KR102313485B1 (en) * | 2015-04-22 | 2021-10-15 | 삼성전자주식회사 | Method and apparatus for transmitting and receiving image data for virtual reality streaming service |
| KR102002112B1 (en) | 2015-04-30 | 2019-07-19 | 구글 엘엘씨 | RF-based micro-motion tracking for gesture tracking and recognition |
| US10139916B2 (en) | 2015-04-30 | 2018-11-27 | Google Llc | Wide-field radar-based gesture recognition |
| KR102229658B1 (en) | 2015-04-30 | 2021-03-17 | 구글 엘엘씨 | Type-agnostic rf signal representations |
| US10088908B1 (en) | 2015-05-27 | 2018-10-02 | Google Llc | Gesture detection and interactions |
| US9693592B2 (en) | 2015-05-27 | 2017-07-04 | Google Inc. | Attaching electronic components to interactive textiles |
| KR102449800B1 (en) | 2015-06-15 | 2022-09-29 | 매직 립, 인코포레이티드 | Virtual and augmented reality systems and methods |
| US9977493B2 (en) * | 2015-06-17 | 2018-05-22 | Microsoft Technology Licensing, Llc | Hybrid display system |
| JP2017010387A (en) * | 2015-06-24 | 2017-01-12 | キヤノン株式会社 | System, mixed reality display device, information processing method, and program |
| EP3325233A1 (en) | 2015-07-23 | 2018-05-30 | SRI International Inc. | Robotic arm and robotic surgical system |
| US10799792B2 (en) * | 2015-07-23 | 2020-10-13 | At&T Intellectual Property I, L.P. | Coordinating multiple virtual environments |
| US9916506B1 (en) | 2015-07-25 | 2018-03-13 | X Development Llc | Invisible fiducial markers on a robot to visualize the robot in augmented reality |
| US9919427B1 (en) | 2015-07-25 | 2018-03-20 | X Development Llc | Visualizing robot trajectory points in augmented reality |
| US10609438B2 (en) | 2015-08-13 | 2020-03-31 | International Business Machines Corporation | Immersive cognitive reality system with real time surrounding media |
| US10701318B2 (en) | 2015-08-14 | 2020-06-30 | Pcms Holdings, Inc. | System and method for augmented reality multi-view telepresence |
| CN109409251B (en) | 2015-08-18 | 2023-05-16 | 奇跃公司 | Virtual and augmented reality systems and methods |
| US10169917B2 (en) * | 2015-08-20 | 2019-01-01 | Microsoft Technology Licensing, Llc | Augmented reality |
| CN112836664A (en) | 2015-08-21 | 2021-05-25 | 奇跃公司 | Eyelid shape estimation using eye pose measurements |
| IL283014B (en) | 2015-08-21 | 2022-07-01 | Magic Leap Inc | Assessment of eyelid shape |
| US10373392B2 (en) | 2015-08-26 | 2019-08-06 | Microsoft Technology Licensing, Llc | Transitioning views of a virtual model |
| US10491711B2 (en) | 2015-09-10 | 2019-11-26 | EEVO, Inc. | Adaptive streaming of virtual reality data |
| JP2017059891A (en) * | 2015-09-14 | 2017-03-23 | ミライアプリ株式会社 | State determination program, state determination device, wearable device, and management device |
| KR102351060B1 (en) * | 2015-09-16 | 2022-01-12 | 매직 립, 인코포레이티드 | Head pose mixing of audio files |
| JP6144738B2 (en) * | 2015-09-18 | 2017-06-07 | 株式会社スクウェア・エニックス | Video game processing program, video game processing system, and video game processing method |
| CN118584664A (en) | 2015-09-23 | 2024-09-03 | 奇跃公司 | Eye imaging using off-axis imagers |
| EP3335070B1 (en) * | 2015-09-30 | 2020-11-25 | Sony Interactive Entertainment Inc. | Methods for optimizing positioning of content on a screen of a head mounted display |
| JP6598269B2 (en) | 2015-10-05 | 2019-10-30 | ディジレンズ インコーポレイテッド | Waveguide display |
| US10817065B1 (en) | 2015-10-06 | 2020-10-27 | Google Llc | Gesture recognition using multiple antenna |
| CN105214309B (en) * | 2015-10-10 | 2017-07-11 | 腾讯科技(深圳)有限公司 | Information processing method, terminal and computer-readable storage medium |
| CN114140867A (en) | 2015-10-16 | 2022-03-04 | 奇跃公司 | Eye pose recognition using eye features |
| AU2016341196B2 (en) * | 2015-10-20 | 2021-09-16 | Magic Leap, Inc. | Selecting virtual objects in a three-dimensional space |
| AU2016349891B9 (en) | 2015-11-04 | 2021-05-06 | Magic Leap, Inc. | Dynamic display calibration based on eye-tracking |
| US11231544B2 (en) | 2015-11-06 | 2022-01-25 | Magic Leap, Inc. | Metasurfaces for redirecting light and methods for fabricating |
| USD785656S1 (en) * | 2015-11-24 | 2017-05-02 | Meditech International Inc. | Display screen or portion thereof with graphical user interface |
| CN106802712A (en) * | 2015-11-26 | 2017-06-06 | 英业达科技有限公司 | Interactive augmented reality system |
| GB2545275A (en) * | 2015-12-11 | 2017-06-14 | Nokia Technologies Oy | Causing provision of virtual reality content |
| US10037085B2 (en) * | 2015-12-21 | 2018-07-31 | Intel Corporation | Techniques for real object and hand representation in virtual reality content |
| KR102439768B1 (en) | 2016-01-07 | 2022-09-01 | 매직 립, 인코포레이티드 | Virtual and augmented reality systems and methods with an unequal number of component color images distributed across depth planes |
| JP6952713B2 (en) | 2016-01-19 | 2021-10-20 | マジック リープ, インコーポレイテッド | Augmented reality systems and methods that utilize reflection |
| EP3405829A4 (en) | 2016-01-19 | 2019-09-18 | Magic Leap, Inc. | Collection, selection and combination of eye images |
| EP3971874A1 (en) | 2016-01-29 | 2022-03-23 | Magic Leap, Inc. | Display for three-dimensional image |
| US10120437B2 (en) * | 2016-01-29 | 2018-11-06 | Rovi Guides, Inc. | Methods and systems for associating input schemes with physical world objects |
| WO2017134412A1 (en) | 2016-02-04 | 2017-08-10 | Milan Momcilo Popovich | Holographic waveguide optical tracker |
| KR102503155B1 (en) | 2016-02-11 | 2023-02-22 | 매직 립, 인코포레이티드 | Multi-depth flat display system with reduced switching between depth planes |
| CN114137729A (en) | 2016-02-24 | 2022-03-04 | 奇跃公司 | Polarizing beam splitter with low light leakage |
| JP6991981B2 (en) | 2016-02-24 | 2022-01-13 | マジック リープ, インコーポレイテッド | Thin interconnect for light emitters |
| KR102799675B1 (en) | 2016-02-26 | 2025-04-22 | 매직 립, 인코포레이티드 | Display system having multiple light pipes for multiple emitters |
| EP4246039A3 (en) | 2016-02-26 | 2023-11-15 | Magic Leap, Inc. | Optical system |
| JP6978423B2 (en) | 2016-03-01 | 2021-12-08 | マジック リープ, インコーポレイテッドMagic Leap, Inc. | Reflection switching device for inputting different wavelengths of light into a waveguide |
| NZ745738A (en) | 2016-03-04 | 2020-01-31 | Magic Leap Inc | Current drain reduction in ar/vr display systems |
| JP6920329B2 (en) | 2016-03-07 | 2021-08-18 | マジック リープ, インコーポレイテッド | Blue light regulation for biometric security |
| CN109310476B (en) | 2016-03-12 | 2020-04-03 | P·K·朗 | Devices and methods for surgery |
| EP3779740B1 (en) | 2016-03-22 | 2021-12-08 | Magic Leap, Inc. | Head mounted display system configured to exchange biometric information |
| JP6895451B2 (en) | 2016-03-24 | 2021-06-30 | ディジレンズ インコーポレイテッド | Methods and Devices for Providing Polarized Selective Holography Waveguide Devices |
| KR20180122726A (en) | 2016-03-25 | 2018-11-13 | 매직 립, 인코포레이티드 | Virtual and augmented reality systems and methods |
| US9554738B1 (en) | 2016-03-30 | 2017-01-31 | Zyomed Corp. | Spectroscopic tomography systems and methods for noninvasive detection and measurement of analytes using collision computing |
| KR102551198B1 (en) * | 2016-03-31 | 2023-07-03 | 매직 립, 인코포레이티드 | Interactions with 3d virtual objects using poses and multiple-dof controllers |
| US10762712B2 (en) | 2016-04-01 | 2020-09-01 | Pcms Holdings, Inc. | Apparatus and method for supporting interactive augmented reality functionalities |
| WO2017177019A1 (en) * | 2016-04-08 | 2017-10-12 | Pcms Holdings, Inc. | System and method for supporting synchronous and asynchronous augmented reality functionalities |
| WO2017176898A1 (en) | 2016-04-08 | 2017-10-12 | Magic Leap, Inc. | Augmented reality systems and methods with variable focus lens elements |
| US10890707B2 (en) | 2016-04-11 | 2021-01-12 | Digilens Inc. | Holographic waveguide apparatus for structured light projection |
| CN105903166B (en) * | 2016-04-18 | 2019-05-24 | 北京小鸟看看科技有限公司 | A 3D online sports competition method and system |
| EP3236336B1 (en) * | 2016-04-21 | 2019-03-27 | Nokia Technologies Oy | Virtual reality causal summary content |
| CN114296552A (en) * | 2016-04-21 | 2022-04-08 | 奇跃公司 | Visual halo around the field of vision |
| JP6594254B2 (en) * | 2016-04-26 | 2019-10-23 | 日本電信電話株式会社 | Virtual environment construction device, virtual environment construction method, program |
| KR102384882B1 (en) | 2016-04-26 | 2022-04-07 | 매직 립, 인코포레이티드 | Electromagnetic Tracking Using Augmented Reality Systems |
| US11727645B2 (en) * | 2016-04-27 | 2023-08-15 | Immersion | Device and method for sharing an immersion in a virtual environment |
| US10204444B2 (en) * | 2016-04-28 | 2019-02-12 | Verizon Patent And Licensing Inc. | Methods and systems for creating and manipulating an individually-manipulable volumetric model of an object |
| US10257490B2 (en) * | 2016-04-28 | 2019-04-09 | Verizon Patent And Licensing Inc. | Methods and systems for creating and providing a real-time volumetric representation of a real-world event |
| WO2017192167A1 (en) | 2016-05-03 | 2017-11-09 | Google Llc | Connecting an electronic component to an interactive textile |
| US10250720B2 (en) | 2016-05-05 | 2019-04-02 | Google Llc | Sharing in an augmented and/or virtual reality environment |
| CN113484944A (en) | 2016-05-06 | 2021-10-08 | 奇跃公司 | Supersurface with asymmetric grating for redirecting light and method of making same |
| IL289757B2 (en) | 2016-05-09 | 2024-12-01 | Magic Leap Inc | Augmented reality systems and methods for analyzing user health |
| WO2017196999A1 (en) | 2016-05-12 | 2017-11-16 | Magic Leap, Inc. | Wavelength multiplexing in waveguides |
| WO2017200949A1 (en) | 2016-05-16 | 2017-11-23 | Google Llc | Interactive fabric |
| US10175781B2 (en) | 2016-05-16 | 2019-01-08 | Google Llc | Interactive object with multiple electronics modules |
| US9983697B1 (en) | 2016-05-18 | 2018-05-29 | Meta Company | System and method for facilitating virtual interactions with a three-dimensional virtual environment in response to sensor input into a control device having sensors |
| US10303323B2 (en) * | 2016-05-18 | 2019-05-28 | Meta Company | System and method for facilitating user interaction with a three-dimensional virtual environment in response to user input into a control device having a graphical interface |
| WO2017200760A1 (en) * | 2016-05-19 | 2017-11-23 | Yudofsky Stuart | Methods and devices for behavior modification |
| CN115185366A (en) | 2016-05-20 | 2022-10-14 | 奇跃公司 | Context awareness for user interface menus |
| AU2017273737B2 (en) | 2016-06-03 | 2022-05-05 | Magic Leap, Inc. | Augmented reality identity verification |
| US10748339B2 (en) | 2016-06-03 | 2020-08-18 | A Big Chunk Of Mud Llc | System and method for implementing computer-simulated reality interactions between users and publications |
| CN106096540B (en) * | 2016-06-08 | 2020-07-24 | 联想(北京)有限公司 | Information processing method and electronic equipment |
| KR102448938B1 (en) | 2016-06-10 | 2022-09-28 | 매직 립, 인코포레이티드 | Integral point light source for texture projection bulbs |
| EP3472828B1 (en) | 2016-06-20 | 2022-08-10 | Magic Leap, Inc. | Augmented reality display system for evaluation and modification of neurological conditions, including visual processing and perception conditions |
| US10037077B2 (en) | 2016-06-21 | 2018-07-31 | Disney Enterprises, Inc. | Systems and methods of generating augmented reality experiences |
| CN116777994A (en) | 2016-06-30 | 2023-09-19 | 奇跃公司 | Estimating pose in 3D space |
| US10042604B2 (en) | 2016-07-01 | 2018-08-07 | Metrik LLC | Multi-dimensional reference element for mixed reality environments |
| US10296792B2 (en) | 2016-07-14 | 2019-05-21 | Magic Leap, Inc. | Iris boundary estimation using cornea curvature |
| US10922393B2 (en) | 2016-07-14 | 2021-02-16 | Magic Leap, Inc. | Deep neural network for iris identification |
| CN119644594A (en) | 2016-07-25 | 2025-03-18 | 奇跃公司 | Image modification, display and visualization using augmented and virtual reality glasses |
| CN109788901B (en) | 2016-07-25 | 2024-01-02 | 奇跃公司 | Light field processor system |
| EP3491480B1 (en) | 2016-07-29 | 2022-10-19 | Magic Leap, Inc. | Secure exchange of cryptographically signed records |
| US10249089B2 (en) | 2016-08-01 | 2019-04-02 | Dell Products, Lp | System and method for representing remote participants to a meeting |
| TWI608383B (en) * | 2016-08-11 | 2017-12-11 | 拓景科技股份有限公司 | Methods and systems for generating guidance in a virtual reality environment, and related computer program products |
| US10627625B2 (en) | 2016-08-11 | 2020-04-21 | Magic Leap, Inc. | Automatic placement of a virtual object in a three-dimensional space |
| KR102892155B1 (en) | 2016-08-12 | 2025-12-01 | 매직 립, 인코포레이티드 | Word flow annotation |
| US11017712B2 (en) | 2016-08-12 | 2021-05-25 | Intel Corporation | Optimized display image rendering |
| ES2992065T3 (en) | 2016-08-16 | 2024-12-09 | Insight Medical Systems Inc | Sensory augmentation systems in medical procedures |
| US10290151B2 (en) * | 2016-08-17 | 2019-05-14 | Blackberry Limited | AR/VR device virtualisation |
| KR102450386B1 (en) | 2016-08-22 | 2022-09-30 | 매직 립, 인코포레이티드 | Dithering methods and apparatus for wearable display device |
| WO2018039076A1 (en) * | 2016-08-22 | 2018-03-01 | Vantedge Group, Llc | Immersive and merged reality experience / environment and data capture via virtual, augmented, and mixed reality device |
| US10402649B2 (en) | 2016-08-22 | 2019-09-03 | Magic Leap, Inc. | Augmented reality display device with deep learning sensors |
| US11269480B2 (en) * | 2016-08-23 | 2022-03-08 | Reavire, Inc. | Controlling objects using virtual rays |
| US20180063205A1 (en) * | 2016-08-30 | 2018-03-01 | Augre Mixed Reality Technologies, Llc | Mixed reality collaboration |
| US11436553B2 (en) | 2016-09-08 | 2022-09-06 | Position Imaging, Inc. | System and method of object tracking using weight confirmation |
| US10773179B2 (en) | 2016-09-08 | 2020-09-15 | Blocks Rock Llc | Method of and system for facilitating structured block play |
| CN113467089A (en) | 2016-09-13 | 2021-10-01 | 奇跃公司 | Sensing glasses |
| CN106249611A (en) * | 2016-09-14 | 2016-12-21 | 深圳众乐智府科技有限公司 | Virtual reality-based smart home localization method, device, and system |
| JP6978493B2 (en) | 2016-09-21 | 2021-12-08 | マジック リープ, インコーポレイテッド | Systems and methods for optical systems with exit pupil expanders |
| IL307292A (en) | 2016-09-22 | 2023-11-01 | Magic Leap Inc | Augmented reality spectroscopy |
| AU2017330454B2 (en) | 2016-09-26 | 2022-08-18 | Magic Leap, Inc. | Calibration of magnetic and optical sensors in a virtual reality or augmented reality display system |
| IL265520B2 (en) | 2016-09-28 | 2023-09-01 | Magic Leap Inc | Face model capture by a wearable device |
| RU2016138608A (en) | 2016-09-29 | 2018-03-30 | Мэджик Лип, Инк. | Neural network for eye image segmentation and image quality assessment |
| DE102016118647B4 (en) | 2016-09-30 | 2018-12-06 | Deutsche Telekom Ag | Augmented reality communication system and augmented reality interaction device |
| KR102216019B1 (en) | 2016-10-04 | 2021-02-15 | 매직 립, 인코포레이티드 | Efficient data layouts for convolutional neural networks |
| KR102657100B1 (en) | 2016-10-05 | 2024-04-12 | 매직 립, 인코포레이티드 | Periocular test for mixed reality calibration |
| US10192339B2 (en) | 2016-10-14 | 2019-01-29 | Unchartedvr Inc. | Method for grid-based virtual reality attraction |
| US10105619B2 (en) | 2016-10-14 | 2018-10-23 | Unchartedvr Inc. | Modular solution for delivering a virtual reality attraction |
| US10514769B2 (en) * | 2016-10-16 | 2019-12-24 | Dell Products, L.P. | Volumetric tracking for orthogonal displays in an electronic collaboration setting |
| GB2555378B (en) * | 2016-10-18 | 2021-06-09 | Virtually Live Switzerland Gmbh | HMD delivery system and method |
| WO2018075968A1 (en) | 2016-10-21 | 2018-04-26 | Magic Leap, Inc. | System and method for presenting image content on multiple depth planes by providing multiple intra-pupil parallax views |
| KR102277438B1 (en) * | 2016-10-21 | 2021-07-14 | 삼성전자주식회사 | Method for transmitting and outputting an audio signal in multimedia communication between terminal devices, and terminal device performing the same |
| US9983665B2 (en) | 2016-10-25 | 2018-05-29 | Oculus Vr, Llc | Position tracking system that exploits arbitrary configurations to determine loop closure |
| US10565790B2 (en) | 2016-11-11 | 2020-02-18 | Magic Leap, Inc. | Periocular and audio synthesis of a full face image |
| CN115097937B (en) | 2016-11-15 | 2025-04-29 | 奇跃公司 | Deep learning system for cuboid detection |
| IL294413B2 (en) | 2016-11-16 | 2024-07-01 | Magic Leap Inc | Thermal management systems for wearable components |
| EP3542215B1 (en) | 2016-11-18 | 2022-11-02 | Magic Leap, Inc. | Spatially variable liquid crystal diffraction gratings |
| WO2018094093A1 (en) | 2016-11-18 | 2018-05-24 | Magic Leap, Inc. | Waveguide light multiplexer using crossed gratings |
| US11067860B2 (en) | 2016-11-18 | 2021-07-20 | Magic Leap, Inc. | Liquid crystal diffractive devices with nano-scale pattern and methods of manufacturing the same |
| US10908423B2 (en) | 2016-11-18 | 2021-02-02 | Magic Leap, Inc. | Multilayer liquid crystal diffractive gratings for redirecting light of wide incident angle ranges |
| CN110114046B (en) | 2016-11-28 | 2021-07-13 | 威博外科公司 | Robotic surgical system that reduces undesired vibrations |
| US10649233B2 (en) | 2016-11-28 | 2020-05-12 | Tectus Corporation | Unobtrusive eye mounted display |
| EP3548939A4 (en) | 2016-12-02 | 2020-11-25 | DigiLens Inc. | Waveguide device with uniform output illumination |
| US10531220B2 (en) | 2016-12-05 | 2020-01-07 | Magic Leap, Inc. | Distributed audio capturing techniques for virtual reality (VR), augmented reality (AR), and mixed reality (MR) systems |
| US10579150B2 (en) | 2016-12-05 | 2020-03-03 | Google Llc | Concurrent detection of absolute distance and relative movement for sensing action gestures |
| CN116883628A (en) | 2016-12-05 | 2023-10-13 | 奇跃公司 | Wearable system and method for providing virtual remote control in mixed reality environment |
| EP4002000B1 (en) | 2016-12-08 | 2025-02-19 | Magic Leap, Inc. | Diffractive devices based on cholesteric liquid crystal |
| WO2018111449A2 (en) | 2016-12-13 | 2018-06-21 | Magic Leap, Inc. | Augmented and virtual reality eyewear, systems, and methods for delivering polarized light and determining glucose levels |
| CA3046399A1 (en) | 2016-12-13 | 2018-06-21 | Magic Leap, Inc. | 3d object rendering using detected features |
| KR102550742B1 (en) | 2016-12-14 | 2023-06-30 | 매직 립, 인코포레이티드 | Patterning of liquid crystals using soft-imprint replication of surface alignment patterns |
| US10371896B2 (en) * | 2016-12-22 | 2019-08-06 | Magic Leap, Inc. | Color separation in planar waveguides using dichroic filters |
| CN114675420A (en) | 2016-12-22 | 2022-06-28 | 奇跃公司 | System and method for manipulating light from an ambient light source |
| US10746999B2 (en) | 2016-12-28 | 2020-08-18 | Magic Leap, Inc. | Dual depth exit pupil expander |
| WO2018125428A1 (en) | 2016-12-29 | 2018-07-05 | Magic Leap, Inc. | Automatic control of wearable display device based on external conditions |
| CN106792133A (en) * | 2016-12-30 | 2017-05-31 | 北京华为数字技术有限公司 | Virtual reality server, method of transmitting video data and system |
| US10442727B2 (en) | 2017-01-05 | 2019-10-15 | Magic Leap, Inc. | Patterning of high refractive index glasses by plasma etching |
| US10545346B2 (en) | 2017-01-05 | 2020-01-28 | Digilens Inc. | Wearable heads up displays |
| US12190542B2 (en) | 2017-01-06 | 2025-01-07 | Position Imaging, Inc. | System and method of calibrating a directional light source relative to a camera's field of view |
| CN116230153A (en) | 2017-01-11 | 2023-06-06 | 奇跃公司 | Medical assistant |
| US11751944B2 (en) | 2017-01-16 | 2023-09-12 | Philipp K. Lang | Optical guidance for surgical, medical, and dental procedures |
| CN106774942A (en) * | 2017-01-18 | 2017-05-31 | 华南理工大学 | Real-time 3D remote human-machine interaction system |
| CN110462460B (en) | 2017-01-23 | 2022-10-14 | 奇跃公司 | Eyepiece for virtual, augmented or mixed reality systems |
| KR20240066186A (en) | 2017-01-27 | 2024-05-14 | 매직 립, 인코포레이티드 | Antireflection coatings for metasurfaces |
| IL295913B2 (en) | 2017-01-27 | 2024-03-01 | Magic Leap Inc | Diffraction gratings produced using a surface cell with differently oriented nano-rays |
| US11010601B2 (en) | 2017-02-14 | 2021-05-18 | Microsoft Technology Licensing, Llc | Intelligent assistant device communicating non-verbal cues |
| US10467510B2 (en) | 2017-02-14 | 2019-11-05 | Microsoft Technology Licensing, Llc | Intelligent assistant |
| US11100384B2 (en) | 2017-02-14 | 2021-08-24 | Microsoft Technology Licensing, Llc | Intelligent device user interactions |
| US11347054B2 (en) | 2017-02-16 | 2022-05-31 | Magic Leap, Inc. | Systems and methods for augmented reality |
| KR102574219B1 (en) | 2017-02-23 | 2023-09-01 | 매직 립, 인코포레이티드 | Variable-focus virtual image devices based on polarization conversion |
| IL311069A (en) * | 2017-02-28 | 2024-04-01 | Magic Leap Inc | Virtual and real object registration in a mixed reality device |
| KR102635437B1 (en) | 2017-02-28 | 2024-02-13 | 삼성전자주식회사 | Method for contents sharing and electronic device supporting the same |
| US10525324B2 (en) | 2017-03-07 | 2020-01-07 | vSports, LLC | Mixed-reality kick tracking and simulation |
| US10409363B1 (en) | 2017-03-07 | 2019-09-10 | vGolf, LLC | Mixed-reality golf tracking and simulation |
| WO2018165284A1 (en) | 2017-03-07 | 2018-09-13 | vSports, LLC | Mixed reality sport simulation and training system |
| US10204456B2 (en) | 2017-03-07 | 2019-02-12 | vGolf, LLC | Mixed reality golf simulation and training system |
| EP3596387B1 (en) | 2017-03-14 | 2023-05-10 | Magic Leap, Inc. | Waveguides with light absorbing films and processes for forming the same |
| CN110419049B (en) | 2017-03-17 | 2024-01-23 | 奇跃公司 | Room layout estimation methods and techniques |
| EP4489393A3 (en) | 2017-03-21 | 2025-03-26 | Magic Leap, Inc. | Depth sensing techniques for virtual, augmented, and mixed reality systems |
| AU2018240181A1 (en) | 2017-03-21 | 2019-09-26 | Magic Leap, Inc. | Stacked waveguides having different diffraction gratings for combined field of view |
| KR20240069826A (en) | 2017-03-21 | 2024-05-20 | 매직 립, 인코포레이티드 | Low-profile beam splitter |
| AU2018239264B2 (en) | 2017-03-21 | 2023-05-18 | Magic Leap, Inc. | Eye-imaging apparatus using diffractive optical elements |
| KR102579249B1 (en) | 2017-03-21 | 2023-09-15 | 매직 립, 인코포레이티드 | Methods, devices, and systems for illuminating spatial light modulators |
| EP3602156B1 (en) | 2017-03-21 | 2025-04-23 | Magic Leap, Inc. | Display system with spatial light modulator illumination for divided pupils |
| KR102653388B1 (en) | 2017-03-22 | 2024-04-01 | 매직 립, 인코포레이티드 | Depth based foveated rendering for display systems |
| AU2018239457B2 (en) | 2017-03-22 | 2023-04-20 | A Big Chunk Of Mud Llc | Convertible satchel with integrated head-mounted display |
| US10282909B2 (en) * | 2017-03-23 | 2019-05-07 | Htc Corporation | Virtual reality system, operating method for mobile device, and non-transitory computer readable storage medium |
| US10410349B2 (en) * | 2017-03-27 | 2019-09-10 | Microsoft Technology Licensing, Llc | Selective application of reprojection processing on layer sub-regions for optimizing late stage reprojection power |
| CN107172047A (en) * | 2017-04-05 | 2017-09-15 | 杭州优海信息系统有限公司 | Factory virtual augmented reality 3D interactive system and implementation method thereof |
| US10242486B2 (en) * | 2017-04-17 | 2019-03-26 | Intel Corporation | Augmented reality and virtual reality feedback enhancement system, apparatus and method |
| IL269908B (en) | 2017-04-18 | 2022-07-01 | Magic Leap Inc | Waveguides having reflective layers formed by reflective flowable materials |
| IL270002B2 (en) | 2017-04-19 | 2023-11-01 | Magic Leap Inc | Multimodal task execution and text editing for a wearable system |
| TWI610565B (en) * | 2017-04-21 | 2018-01-01 | 國立臺北科技大學 | Stereoscopic image object interaction method, system and stereo interactive film post-production method |
| AU2018258679B2 (en) | 2017-04-27 | 2022-03-31 | Magic Leap, Inc. | Light-emitting user input device |
| JP6343779B1 (en) * | 2017-04-28 | 2018-06-20 | 株式会社コナミデジタルエンタテインメント | Server apparatus and computer program used therefor |
| US11782669B2 (en) | 2017-04-28 | 2023-10-10 | Microsoft Technology Licensing, Llc | Intuitive augmented reality collaboration on visual data |
| US20180349837A1 (en) * | 2017-05-19 | 2018-12-06 | Hcl Technologies Limited | System and method for inventory management within a warehouse |
| EP3625658B1 (en) | 2017-05-19 | 2024-10-09 | Magic Leap, Inc. | Keyboards for virtual, augmented, and mixed reality display systems |
| US10792119B2 (en) | 2017-05-22 | 2020-10-06 | Ethicon Llc | Robotic arm cart and uses therefor |
| CA3059789A1 (en) | 2017-05-22 | 2018-11-29 | Magic Leap, Inc. | Pairing with companion device |
| CA3059984C (en) | 2017-05-30 | 2024-01-30 | Magic Leap, Inc. | Power supply assembly with fan assembly for electronic device |
| KR102799682B1 (en) | 2017-05-31 | 2025-04-23 | 매직 립, 인코포레이티드 | Eye tracking calibration techniques |
| US10856948B2 (en) | 2017-05-31 | 2020-12-08 | Verb Surgical Inc. | Cart for robotic arms and method and apparatus for registering cart to surgical table |
| US10168789B1 (en) * | 2017-05-31 | 2019-01-01 | Meta Company | Systems and methods to facilitate user interactions with virtual content having two-dimensional representations and/or three-dimensional representations |
| US10485623B2 (en) | 2017-06-01 | 2019-11-26 | Verb Surgical Inc. | Robotic arm cart with fine position adjustment features and uses therefor |
| WO2018227011A1 (en) * | 2017-06-07 | 2018-12-13 | vSports, LLC | Mixed-reality sports tracking and simulation |
| AU2018284089B2 (en) | 2017-06-12 | 2022-12-15 | Magic Leap, Inc. | Augmented reality display having multi-element adaptive lens for changing depth planes |
| US12498895B2 (en) | 2017-06-16 | 2025-12-16 | Apple Inc. | Head-mounted device with publicly viewable display |
| US11861255B1 (en) | 2017-06-16 | 2024-01-02 | Apple Inc. | Wearable device for facilitating enhanced interaction |
| CN107229337B (en) * | 2017-06-16 | 2020-08-18 | 上海闻泰电子科技有限公司 | VR display system and method |
| US10913145B2 (en) | 2017-06-20 | 2021-02-09 | Verb Surgical Inc. | Cart for robotic arms and method and apparatus for cartridge or magazine loading of arms |
| RU2682014C1 (en) * | 2017-06-29 | 2019-03-14 | Дмитрий Сергеевич Шаньгин | Virtual reality system |
| US10449440B2 (en) * | 2017-06-30 | 2019-10-22 | Electronic Arts Inc. | Interactive voice-controlled companion application for a video game |
| US10908680B1 (en) | 2017-07-12 | 2021-02-02 | Magic Leap, Inc. | Pose estimation using electromagnetic tracking |
| CN107275904B (en) * | 2017-07-19 | 2024-05-24 | 北京小米移动软件有限公司 | Data cable for virtual reality glasses |
| WO2019022849A1 (en) | 2017-07-26 | 2019-01-31 | Magic Leap, Inc. | Training a neural network with representations of user interface devices |
| US20190107935A1 (en) | 2017-07-28 | 2019-04-11 | Magical Technologies, Llc | Systems, Methods and Apparatuses to Facilitate Physical and Non-Physical Interaction/Action/Reactions Between Alternate Realities |
| KR102595846B1 (en) | 2017-07-28 | 2023-10-30 | 매직 립, 인코포레이티드 | Fan assembly for displaying images |
| US9992449B1 (en) * | 2017-08-10 | 2018-06-05 | Everysight Ltd. | System and method for sharing sensed data between remote users |
| EP3665704A1 (en) * | 2017-08-11 | 2020-06-17 | 3D4Medical Limited | Interactive medical data querying |
| KR102360412B1 (en) | 2017-08-25 | 2022-02-09 | 엘지디스플레이 주식회사 | Image generation method and display device using the same |
| US10521661B2 (en) | 2017-09-01 | 2019-12-31 | Magic Leap, Inc. | Detailed eye shape model for robust biometric applications |
| US11801114B2 (en) | 2017-09-11 | 2023-10-31 | Philipp K. Lang | Augmented reality display for vascular and other interventions, compensation for cardiac and respiratory motion |
| US20190197312A1 (en) | 2017-09-13 | 2019-06-27 | Edward Rashid Lahood | Method, apparatus and computer-readable media for displaying augmented reality information |
| US20190108578A1 (en) * | 2017-09-13 | 2019-04-11 | Magical Technologies, Llc | Systems and methods of rewards object spawning and augmented reality commerce platform supporting multiple seller entities |
| AU2018337653A1 (en) | 2017-09-20 | 2020-01-16 | Magic Leap, Inc. | Personalized neural network for eye tracking |
| IL273397B2 (en) | 2017-09-21 | 2024-09-01 | Magic Leap Inc | Augmented reality display with waveguide configured to capture images of eye and/or environment |
| CN111133368B (en) | 2017-09-27 | 2024-12-13 | 奇跃公司 | Near-eye 3D display with separated phase and amplitude modulators |
| CN111052045B (en) | 2017-09-29 | 2022-07-15 | 苹果公司 | computer-generated reality platform |
| US10922878B2 (en) * | 2017-10-04 | 2021-02-16 | Google Llc | Lighting for inserted content |
| CA3077455A1 (en) | 2017-10-11 | 2019-04-18 | Magic Leap, Inc. | Augmented reality display comprising eyepiece having a transparent emissive display |
| KR20190041385A (en) | 2017-10-12 | 2019-04-22 | 언차티드브이알 인코퍼레이티드 | Smart props for grid-based virtual reality attraction |
| MY205925A (en) * | 2017-10-12 | 2024-11-20 | Fraunhofer Ges Zur Förderung Der Angewandten Forschung E V | Optimizing audio delivery for virtual reality applications |
| EP3698214A4 (en) | 2017-10-16 | 2021-10-27 | Digilens Inc. | Systems and methods for multiplying the image resolution of a pixelated display |
| US11966793B1 (en) * | 2017-10-18 | 2024-04-23 | Campfire 3D, Inc. | Systems and methods to extend an interactive space across multiple platforms |
| WO2019079826A1 (en) | 2017-10-22 | 2019-04-25 | Magical Technologies, Llc | Systems, methods and apparatuses of digital assistants in an augmented reality environment and local determination of virtual object placement and apparatuses of single or multi-directional lens as portals between a physical world and a digital world component of the augmented reality environment |
| JP7181928B2 (en) | 2017-10-26 | 2022-12-01 | マジック リープ, インコーポレイテッド | A Gradient Normalization System and Method for Adaptive Loss Balancing in Deep Multitasking Networks |
| EP4246213A3 (en) | 2017-10-26 | 2023-12-20 | Magic Leap, Inc. | Augmented reality display having liquid crystal variable focus element and roll-to-roll method and apparatus for forming the same |
| KR102698364B1 (en) | 2017-10-26 | 2024-08-23 | 매직 립, 인코포레이티드 | Wideband adaptive lens assembly for augmented reality displays |
| KR102668725B1 (en) | 2017-10-27 | 2024-05-29 | 매직 립, 인코포레이티드 | Virtual reticle for augmented reality systems |
| CN111328400B (en) | 2017-11-14 | 2025-08-19 | 奇跃公司 | Meta learning for multitasking learning of neural networks |
| US11720222B2 (en) | 2017-11-17 | 2023-08-08 | International Business Machines Corporation | 3D interaction input for text in augmented reality |
| US12372793B2 (en) | 2017-12-11 | 2025-07-29 | Magic Leap, Inc. | Illumination layout for compact projection system |
| AU2018383595B2 (en) | 2017-12-11 | 2024-06-13 | Magic Leap, Inc. | Waveguide illuminator |
| EP3724855B1 (en) | 2017-12-14 | 2025-09-24 | Magic Leap, Inc. | Contextual-based rendering of virtual avatars |
| US10943120B2 (en) | 2017-12-15 | 2021-03-09 | Magic Leap, Inc. | Enhanced pose determination for display device |
| WO2019116244A1 (en) * | 2017-12-15 | 2019-06-20 | ГИОРГАДЗЕ, Анико Тенгизовна | Interaction of users in a communication system using augmented reality effects |
| AU2018386296B2 (en) | 2017-12-15 | 2023-11-23 | Magic Leap, Inc. | Eyepieces for augmented reality display system |
| US10713840B2 (en) * | 2017-12-22 | 2020-07-14 | Sony Interactive Entertainment Inc. | Space capture, modeling, and texture reconstruction through dynamic camera positioning and lighting using a mobile robot |
| WO2019130213A1 (en) * | 2017-12-29 | 2019-07-04 | ГИОРГАДЗЕ, Анико Тенгизовна | User interaction in a communication system using augmented reality objects |
| US10916060B2 (en) | 2018-01-04 | 2021-02-09 | Magic Leap, Inc. | Optical elements based on polymeric structures incorporating inorganic materials |
| US20190212699A1 (en) | 2018-01-08 | 2019-07-11 | Digilens, Inc. | Methods for Fabricating Optical Waveguides |
| KR20250027583A (en) | 2018-01-08 | 2025-02-26 | 디지렌즈 인코포레이티드. | Systems and methods for high-throughput recording of holographic gratings in waveguide cells |
| KR20250089565A (en) | 2018-01-08 | 2025-06-18 | 디지렌즈 인코포레이티드. | Systems and methods for manufacturing waveguide cells |
| WO2019136476A1 (en) | 2018-01-08 | 2019-07-11 | Digilens, Inc. | Waveguide architectures and related methods of manufacturing |
| US10678335B2 (en) | 2018-01-08 | 2020-06-09 | Facebook Technologies, Llc | Methods, devices, and systems for creating haptic stimulations and tracking motion of a user |
| AU2019209930B2 (en) | 2018-01-17 | 2023-08-03 | Magic Leap, Inc. | Eye center of rotation determination, depth plane selection, and render camera positioning in display systems |
| EP3740847B1 (en) | 2018-01-17 | 2024-03-13 | Magic Leap, Inc. | Display systems and methods for determining registration between a display and a user's eyes |
| US10679412B2 (en) | 2018-01-17 | 2020-06-09 | Unchartedvr Inc. | Virtual experience monitoring mechanism |
| US10558878B2 (en) | 2018-01-19 | 2020-02-11 | Timothy R. Fitzpatrick | System and method for organizing edible or drinkable materials |
| US10904374B2 (en) | 2018-01-24 | 2021-01-26 | Magical Technologies, Llc | Systems, methods and apparatuses to facilitate gradual or instantaneous adjustment in levels of perceptibility of virtual objects or reality object in a digital scene |
| US11348257B2 (en) | 2018-01-29 | 2022-05-31 | Philipp K. Lang | Augmented reality guidance for orthopedic and other surgical procedures |
| US10540941B2 (en) | 2018-01-30 | 2020-01-21 | Magic Leap, Inc. | Eclipse cursor for mixed reality displays |
| US11567627B2 (en) | 2018-01-30 | 2023-01-31 | Magic Leap, Inc. | Eclipse cursor for virtual content in mixed reality displays |
| US11398088B2 (en) | 2018-01-30 | 2022-07-26 | Magical Technologies, Llc | Systems, methods and apparatuses to generate a fingerprint of a physical location for placement of virtual objects |
| JP6453501B1 (en) * | 2018-02-01 | 2019-01-16 | 株式会社Cygames | Mixed reality system, program, method, and portable terminal device |
| US11210826B2 (en) | 2018-02-02 | 2021-12-28 | Disney Enterprises, Inc. | Systems and methods to provide artificial intelligence experiences |
| US10673414B2 (en) | 2018-02-05 | 2020-06-02 | Tectus Corporation | Adaptive tuning of a contact lens |
| AU2019217515A1 (en) | 2018-02-06 | 2020-06-25 | Magic Leap, Inc. | Systems and methods for augmented reality |
| WO2019155311A1 (en) * | 2018-02-09 | 2019-08-15 | ГИОРГАДЗЕ, Анико Тенгизовна | Communication system with automatically appearing augmented reality effects |
| WO2019155368A1 (en) * | 2018-02-09 | 2019-08-15 | ГИОРГАДЗЕ, Анико Тенгизовна | User interaction in a communication system using augmented reality effects appearing in response to user-selected actions to be performed by augmented reality objects |
| CN108427499A (en) * | 2018-02-13 | 视辰信息科技(上海)有限公司 | An AR system and AR device |
| US10735649B2 (en) | 2018-02-22 | 2020-08-04 | Magic Leap, Inc. | Virtual and augmented reality systems and methods using display system control information embedded in image data |
| AU2019227506A1 (en) | 2018-02-27 | 2020-08-06 | Magic Leap, Inc. | Matching meshes for virtual avatars |
| CA3089645A1 (en) | 2018-02-28 | 2019-09-06 | Magic Leap, Inc. | Head scan alignment using ocular registration |
| EP3762765B1 (en) | 2018-03-05 | 2025-12-17 | Magic Leap, Inc. | Display system with low-latency pupil tracker |
| US11014001B2 (en) * | 2018-03-05 | 2021-05-25 | Sony Interactive Entertainment LLC | Building virtual reality (VR) gaming environments using real-world virtual reality maps |
| WO2019173079A1 (en) | 2018-03-06 | 2019-09-12 | Texas State University | Augmented reality/virtual reality platform for a network analyzer |
| JP1618837S (en) * | 2018-03-06 | 2018-11-26 | ||
| AU2019232746A1 (en) | 2018-03-07 | 2020-08-20 | Magic Leap, Inc. | Adaptive lens assemblies including polarization-selective lens stacks for augmented reality display |
| WO2019173524A1 (en) * | 2018-03-07 | 2019-09-12 | Magic Leap, Inc. | Visual tracking of peripheral devices |
| AU2019236460B2 (en) | 2018-03-12 | 2024-10-03 | Magic Leap, Inc. | Tilting array based display |
| US10878620B2 (en) | 2018-03-14 | 2020-12-29 | Magic Leap, Inc. | Display systems and methods for clipping content to increase viewing comfort |
| US11430169B2 (en) | 2018-03-15 | 2022-08-30 | Magic Leap, Inc. | Animating virtual avatar facial movements |
| CN119471906A (en) | 2018-03-16 | 2025-02-18 | 迪吉伦斯公司 | Holographic waveguides incorporating birefringence control and methods for their fabrication |
| EP3766004A4 (en) | 2018-03-16 | 2021-12-15 | Magic Leap, Inc. | Facial expressions from eye tracking cameras |
| US11238836B2 (en) | 2018-03-16 | 2022-02-01 | Magic Leap, Inc. | Depth based foveated rendering for display systems |
| WO2019183399A1 (en) | 2018-03-21 | 2019-09-26 | Magic Leap, Inc. | Augmented reality system and method for spectroscopic analysis |
| CN111868666A (en) | 2018-03-23 | 2020-10-30 | 脸谱科技有限责任公司 | Methods, devices and systems for determining contacts of users of virtual reality and/or augmented reality devices |
| CN112041716A (en) | 2018-04-02 | 2020-12-04 | 奇跃公司 | Hybrid polymer waveguides and methods for making hybrid polymer waveguides |
| US11886000B2 (en) | 2018-04-02 | 2024-01-30 | Magic Leap, Inc. | Waveguides having integrated spacers, waveguides having edge absorbers, and methods for making the same |
| EP4550031A3 (en) | 2018-04-02 | 2025-07-16 | Magic Leap, Inc. | Waveguides with integrated optical elements |
| US11276219B2 (en) | 2018-04-16 | 2022-03-15 | Magic Leap, Inc. | Systems and methods for cross-application authoring, transfer, and evaluation of rigging control systems for virtual characters |
| WO2019204765A1 (en) | 2018-04-19 | 2019-10-24 | Magic Leap, Inc. | Systems and methods for operating a display system based on user perceptibility |
| US10505394B2 (en) | 2018-04-21 | 2019-12-10 | Tectus Corporation | Power generation necklaces that mitigate energy absorption in the human body |
| WO2019209431A1 (en) | 2018-04-23 | 2019-10-31 | Magic Leap, Inc. | Avatar facial expression representation in multidimensional space |
| US10838239B2 (en) | 2018-04-30 | 2020-11-17 | Tectus Corporation | Multi-coil field generation in an electronic contact lens system |
| US10895762B2 (en) | 2018-04-30 | 2021-01-19 | Tectus Corporation | Multi-coil field generation in an electronic contact lens system |
| WO2019212698A1 (en) | 2018-05-01 | 2019-11-07 | Magic Leap, Inc. | Avatar animation using markov decision process policies |
| CN112384880A (en) * | 2018-05-03 | 2021-02-19 | PCMS控股公司 | System and method for physical proximity and/or gesture-based linking for VR experiences |
| WO2019213220A1 (en) | 2018-05-03 | 2019-11-07 | Magic Leap, Inc. | Using 3d scans of a physical subject to determine positions and orientations of joints for a virtual character |
| US10803671B2 (en) | 2018-05-04 | 2020-10-13 | Microsoft Technology Licensing, Llc | Authoring content in three-dimensional environment |
| WO2019215527A1 (en) * | 2018-05-10 | 2019-11-14 | ГИОРГАДЗЕ, Анико Тенгизовна | Method for speeding up the decision-making process regarding viewing a user's activity on a social network or in a communication application, the activity being relevant to an augmented reality object |
| JP7328993B2 (en) | 2018-05-17 | 2023-08-17 | マジック リープ, インコーポレイテッド | Gradient Adversarial Training of Neural Networks |
| US10790700B2 (en) | 2018-05-18 | 2020-09-29 | Tectus Corporation | Power generation necklaces with field shaping systems |
| WO2019226494A1 (en) | 2018-05-21 | 2019-11-28 | Magic Leap, Inc. | Generating textured polygon strip hair from strand-based hair for a virtual character |
| WO2019226549A1 (en) | 2018-05-22 | 2019-11-28 | Magic Leap, Inc. | Computer generated hair groom transfer tool |
| WO2019226554A1 (en) | 2018-05-22 | 2019-11-28 | Magic Leap, Inc. | Skeletal systems for animating virtual avatars |
| EP3797345A4 (en) | 2018-05-22 | 2022-03-09 | Magic Leap, Inc. | Transmodal input fusion for portable system |
| US11205508B2 (en) | 2018-05-23 | 2021-12-21 | Verb Surgical Inc. | Machine-learning-oriented surgical video analysis system |
| US11307968B2 (en) | 2018-05-24 | 2022-04-19 | The Calany Holding S. À R.L. | System and method for developing, testing and deploying digital reality applications into the real world via a virtual world |
| KR102275520B1 (en) | 2018-05-24 | 2021-07-12 | 티엠알더블유 파운데이션 아이피 앤드 홀딩 에스에이알엘 | Two-way real-time 3D interactive operations of real-time 3D virtual objects within a real-time 3D virtual world representing the real world |
| WO2019226865A1 (en) | 2018-05-25 | 2019-11-28 | Magic Leap, Inc. | Compression of dynamic unstructured point clouds |
| US10700780B2 (en) * | 2018-05-30 | 2020-06-30 | Apple Inc. | Systems and methods for adjusting movable lenses in directional free-space optical communication systems for portable electronic devices |
| US11303355B2 (en) | 2018-05-30 | 2022-04-12 | Apple Inc. | Optical structures in directional free-space optical communication systems for portable electronic devices |
| US11651555B2 (en) * | 2018-05-31 | 2023-05-16 | Microsoft Technology Licensing, Llc | Re-creation of virtual environment through a video call |
| US12087022B2 (en) | 2018-06-01 | 2024-09-10 | Magic Leap, Inc. | Compression of dynamic unstructured point clouds |
| US11157159B2 (en) | 2018-06-07 | 2021-10-26 | Magic Leap, Inc. | Augmented reality scrollbar |
| US10843077B2 (en) * | 2018-06-08 | 2020-11-24 | Brian Deller | System and method for creation, presentation and interaction within multiple reality and virtual reality environments |
| EP3807711A4 (en) | 2018-06-15 | 2022-02-23 | Magic Leap, Inc. | Wide field of view polarization switches and method of manufacturing liquid crystal optical elements with forward tilt |
| US11531244B2 (en) | 2018-06-15 | 2022-12-20 | Magic Leap, Inc. | Wide field-of-view polarization switches with liquid crystal optical elements with pretilt |
| EP3807710B1 (en) * | 2018-06-18 | 2024-01-17 | Magic Leap, Inc. | Augmented reality display with frame modulation functionality |
| US11694435B2 (en) | 2018-06-18 | 2023-07-04 | Magic Leap, Inc. | Systems and methods for temporarily disabling user control interfaces during attachment of an electronic device |
| WO2019246044A1 (en) | 2018-06-18 | 2019-12-26 | Magic Leap, Inc. | Head-mounted display systems with power saving functionality |
| US20190391647A1 (en) * | 2018-06-25 | 2019-12-26 | Immersion Corporation | Real-world haptic interactions for a virtual reality user |
| WO2020003014A1 (en) * | 2018-06-26 | 2020-01-02 | ГИОРГАДЗЕ, Анико Тенгизовна | Eliminating gaps in information comprehension arising during user interaction in communications systems using augmented reality objects |
| US11151793B2 (en) | 2018-06-26 | 2021-10-19 | Magic Leap, Inc. | Waypoint creation in map detection |
| US20190392641A1 (en) * | 2018-06-26 | 2019-12-26 | Sony Interactive Entertainment Inc. | Material base rendering |
| USD875729S1 (en) | 2018-06-27 | 2020-02-18 | Magic Leap, Inc. | Portion of an augmented reality headset |
| US11077365B2 (en) | 2018-06-27 | 2021-08-03 | Niantic, Inc. | Low latency datagram-responsive computer network protocol |
| US12154295B2 (en) | 2018-07-02 | 2024-11-26 | Magic Leap, Inc. | Methods and systems for interpolation of disparate inputs |
| EP3818530A4 (en) | 2018-07-02 | 2022-03-30 | Magic Leap, Inc. | Disparate input interpolation methods and systems |
| WO2020010271A1 (en) | 2018-07-05 | 2020-01-09 | Magic Leap, Inc. | Waveguide-based illumination for head mounted display system |
| CN109308737A (en) * | 2018-07-11 | 2019-02-05 | 重庆邮电大学 | A three-stage point cloud registration method for mobile robot V-SLAM |
| US11137622B2 (en) | 2018-07-15 | 2021-10-05 | Tectus Corporation | Eye-mounted displays including embedded conductive coils |
| US10792034B2 (en) | 2018-07-16 | 2020-10-06 | Ethicon Llc | Visualization of surgical devices |
| WO2020018938A1 (en) | 2018-07-19 | 2020-01-23 | Magic Leap, Inc. | Content interaction driven by eye metrics |
| JP7519742B2 (en) | 2018-07-23 | 2024-07-22 | マジック リープ, インコーポレイテッド | Method and system for resolving hemispheric ambiguity using position vectors |
| WO2020023399A1 (en) | 2018-07-23 | 2020-01-30 | Magic Leap, Inc. | Deep predictor recurrent neural network for head pose prediction |
| US11627587B2 (en) | 2018-07-23 | 2023-04-11 | Magic Leap, Inc. | Coexistence interference avoidance between two different radios operating in the same band |
| US11422620B2 (en) | 2018-07-24 | 2022-08-23 | Magic Leap, Inc. | Display systems and methods for determining vertical alignment between left and right displays and a user's eyes |
| WO2020023404A1 (en) | 2018-07-24 | 2020-01-30 | Magic Leap, Inc. | Flicker mitigation when toggling eyepiece display illumination in augmented reality systems |
| JP7418400B2 (en) | 2018-07-24 | 2024-01-19 | マジック リープ, インコーポレイテッド | Diffractive optical elements and related systems and methods with mitigation of rebounce-induced light loss |
| WO2020023491A1 (en) | 2018-07-24 | 2020-01-30 | Magic Leap, Inc. | Thermal management system for electronic device |
| USD930614S1 (en) | 2018-07-24 | 2021-09-14 | Magic Leap, Inc. | Totem controller having an illumination region |
| USD918176S1 (en) | 2018-07-24 | 2021-05-04 | Magic Leap, Inc. | Totem controller having an illumination region |
| WO2020023542A1 (en) | 2018-07-24 | 2020-01-30 | Magic Leap, Inc. | Display systems and methods for determining registration between a display and eyes of a user |
| USD924204S1 (en) | 2018-07-24 | 2021-07-06 | Magic Leap, Inc. | Totem controller having an illumination region |
| JP7210180B2 (en) * | 2018-07-25 | 2023-01-23 | キヤノン株式会社 | Image processing device, image display device, image processing method, and program |
| US11402801B2 (en) | 2018-07-25 | 2022-08-02 | Digilens Inc. | Systems and methods for fabricating a multilayer optical structure |
| US10950024B2 (en) | 2018-07-27 | 2021-03-16 | Magic Leap, Inc. | Pose space dimensionality reduction for pose space deformation of a virtual character |
| EP3831053B1 (en) * | 2018-08-03 | 2024-05-08 | Magic Leap, Inc. | Method and system for subgrid calibration of a display device |
| EP3830674A4 (en) | 2018-08-03 | 2022-04-20 | Magic Leap, Inc. | Selection of depth levels for multi-level screen display systems by a user classification |
| CN109255819B (en) * | 2018-08-14 | 2020-10-13 | 清华大学 | Kinect calibration method and device based on plane mirror |
| US10581940B1 (en) * | 2018-08-20 | 2020-03-03 | Dell Products, L.P. | Head-mounted devices (HMDs) discovery in co-located virtual, augmented, and mixed reality (xR) applications |
| KR102508286B1 (en) * | 2018-08-27 | 2023-03-09 | 삼성전자 주식회사 | Electronic device and method for providing information in virtual reality |
| CN112639579B (en) | 2018-08-31 | 2023-09-15 | 奇跃公司 | Spatially resolved dynamic dimming for augmented reality devices |
| JP6527627B1 (en) * | 2018-08-31 | 2019-06-05 | 株式会社バーチャルキャスト | Content distribution server, content distribution system, content distribution method and program |
| US11157739B1 (en) | 2018-08-31 | 2021-10-26 | Apple Inc. | Multi-user computer generated reality platform |
| US11103763B2 (en) | 2018-09-11 | 2021-08-31 | Real Shot Inc. | Basketball shooting game using smart glasses |
| US11141645B2 (en) | 2018-09-11 | 2021-10-12 | Real Shot Inc. | Athletic ball game using smart glasses |
| US10529107B1 (en) | 2018-09-11 | 2020-01-07 | Tectus Corporation | Projector alignment in a contact lens |
| US10679743B2 (en) | 2018-09-12 | 2020-06-09 | Verb Surgical Inc. | Method and system for automatically tracking and managing inventory of surgical tools in operating rooms |
| US11982809B2 (en) | 2018-09-17 | 2024-05-14 | Apple Inc. | Electronic device with inner display and externally accessible input-output device |
| USD950567S1 (en) | 2018-09-18 | 2022-05-03 | Magic Leap, Inc. | Mobile computing support system having an illumination region |
| USD955396S1 (en) | 2018-09-18 | 2022-06-21 | Magic Leap, Inc. | Mobile computing support system having an illumination region |
| USD934873S1 (en) | 2018-09-18 | 2021-11-02 | Magic Leap, Inc. | Mobile computing support system having an illumination region |
| USD934872S1 (en) | 2018-09-18 | 2021-11-02 | Magic Leap, Inc. | Mobile computing support system having an illumination region |
| US10569164B1 (en) * | 2018-09-26 | 2020-02-25 | Valve Corporation | Augmented reality (AR) system for providing AR in video games |
| EP3857289B1 (en) | 2018-09-26 | 2025-04-23 | Magic Leap, Inc. | Eyewear with pinhole and slit cameras |
| US11733523B2 (en) | 2018-09-26 | 2023-08-22 | Magic Leap, Inc. | Diffractive optical elements with optical power |
| CN113196209A (en) | 2018-10-05 | 2021-07-30 | 奇跃公司 | Rendering location-specific virtual content at any location |
| WO2020078354A1 (en) * | 2018-10-16 | 2020-04-23 | 北京凌宇智控科技有限公司 | Video streaming system, video streaming method and apparatus |
| EP3871034B1 (en) | 2018-10-26 | 2025-08-27 | Magic Leap, Inc. | Ambient electromagnetic distortion correction for electromagnetic tracking |
| US11051206B2 (en) | 2018-10-31 | 2021-06-29 | Hewlett Packard Enterprise Development Lp | Wi-fi optimization for untethered multi-user virtual reality |
| US10846042B2 (en) | 2018-10-31 | 2020-11-24 | Hewlett Packard Enterprise Development Lp | Adaptive rendering for untethered multi-user virtual reality |
| US11047691B2 (en) * | 2018-10-31 | 2021-06-29 | Dell Products, L.P. | Simultaneous localization and mapping (SLAM) compensation for gesture recognition in virtual, augmented, and mixed reality (xR) applications |
| US11592896B2 (en) * | 2018-11-07 | 2023-02-28 | Wild Technology, Inc. | Technological infrastructure for enabling multi-user collaboration in a virtual reality environment |
| TWI722332B (en) * | 2018-11-13 | 2021-03-21 | 晶睿通訊股份有限公司 | Pedestrian detection method and related monitoring camera |
| JP7389116B2 (en) | 2018-11-15 | 2023-11-29 | マジック リープ, インコーポレイテッド | Deep neural network pose estimation system |
| WO2020106824A1 (en) | 2018-11-20 | 2020-05-28 | Magic Leap, Inc. | Eyepieces for augmented reality display system |
| US10838232B2 (en) | 2018-11-26 | 2020-11-17 | Tectus Corporation | Eye-mounted displays including embedded solenoids |
| WO2020112561A1 (en) | 2018-11-30 | 2020-06-04 | Magic Leap, Inc. | Multi-modal hand location and orientation for avatar movement |
| US10776933B2 (en) | 2018-12-06 | 2020-09-15 | Microsoft Technology Licensing, Llc | Enhanced techniques for tracking the movement of real-world objects for improved positioning of virtual objects |
| US11282282B2 (en) * | 2018-12-14 | 2022-03-22 | Vulcan Inc. | Virtual and physical reality integration |
| US10644543B1 (en) | 2018-12-20 | 2020-05-05 | Tectus Corporation | Eye-mounted display system including a head wearable object |
| JP7457714B2 (en) | 2018-12-28 | 2024-03-28 | マジック リープ, インコーポレイテッド | Variable pixel density display system with mechanically actuated image projector |
| CN113508580A (en) | 2018-12-28 | 2021-10-15 | 奇跃公司 | Augmented and virtual reality display system with left and right eye shared display |
| CN113453774A (en) * | 2018-12-28 | 2021-09-28 | 奇跃公司 | Low motion to photon delay architecture for augmented and virtual reality display systems |
| JP7585206B2 (en) | 2019-01-11 | 2024-11-18 | マジック リープ, インコーポレイテッド | Time-multiplexed display of virtual content at different depths |
| WO2020149956A1 (en) | 2019-01-14 | 2020-07-23 | Digilens Inc. | Holographic waveguide display with light control layer |
| CN113614783A (en) | 2019-01-25 | 2021-11-05 | 奇跃公司 | Eye tracking using images with different exposure times |
| US12105289B2 (en) | 2019-02-01 | 2024-10-01 | Magic Leap, Inc. | Inline in-coupling optical elements |
| EP4439152A3 (en) | 2019-02-01 | 2024-11-27 | Magic Leap, Inc. | Display system having 1-dimensional pixel array with scanning mirror |
| US20200247017A1 (en) | 2019-02-05 | 2020-08-06 | Digilens Inc. | Methods for Compensating for Optical Surface Nonuniformity |
| US11403823B2 (en) | 2019-02-14 | 2022-08-02 | Lego A/S | Toy system for asymmetric multiplayer game play |
| US11553969B1 (en) | 2019-02-14 | 2023-01-17 | Onpoint Medical, Inc. | System for computation of object coordinates accounting for movement of a surgical site for spinal and other procedures |
| US11857378B1 (en) | 2019-02-14 | 2024-01-02 | Onpoint Medical, Inc. | Systems for adjusting and tracking head mounted displays during surgery including with surgical helmets |
| US20220283377A1 (en) | 2019-02-15 | 2022-09-08 | Digilens Inc. | Wide Angle Waveguide Display |
| CN113692544B (en) | 2019-02-15 | 2025-04-22 | 迪吉伦斯公司 | Method and apparatus for providing holographic waveguide displays using integrated gratings |
| JP7349793B2 (en) * | 2019-02-15 | 2023-09-25 | キヤノン株式会社 | Image processing device, image processing method, and program |
| JP7185068B2 (en) | 2019-02-25 | 2022-12-06 | ナイアンティック, インコーポレイテッド | Augmented reality mobile edge computer |
| EP3931625B1 (en) | 2019-02-28 | 2024-09-18 | Magic Leap, Inc. | Display system and method for providing variable accommodation cues using multiple intra-pupil parallax views formed by light emitter arrays |
| RU2712417C1 (en) * | 2019-02-28 | 2020-01-28 | Public Joint-Stock Company "Sberbank of Russia" (PJSC Sberbank) | Method and system for recognizing faces and constructing a route using an augmented reality tool |
| US11467656B2 (en) | 2019-03-04 | 2022-10-11 | Magical Technologies, Llc | Virtual object control of a physical device and/or physical device control of a virtual object |
| JP7498191B2 (en) | 2019-03-12 | 2024-06-11 | マジック リープ, インコーポレイテッド | Waveguide with high refractive index material and method for fabricating same |
| EP3938818B1 (en) | 2019-03-12 | 2024-10-09 | Magic Leap, Inc. | Method of fabricating display device having patterned lithium-based transition metal oxide |
| JP2022525165A (en) | 2019-03-12 | 2022-05-11 | ディジレンズ インコーポレイテッド | Holographic Waveguide Backlights and Related Manufacturing Methods |
| RU2702495C1 (en) * | 2019-03-13 | 2019-10-08 | Limited Liability Company "TransInzhKom" | Method and system for collecting information for a combined reality device in real time |
| US12257013B2 (en) | 2019-03-15 | 2025-03-25 | Cilag Gmbh International | Robotic surgical systems with mechanisms for scaling camera magnification according to proximity of surgical tool to tissue |
| CN113841005A (en) | 2019-03-20 | 2021-12-24 | 奇跃公司 | System for collecting light |
| WO2020189903A1 (en) * | 2019-03-20 | 2020-09-24 | 엘지전자 주식회사 | Point cloud data transmitting device, point cloud data transmitting method, point cloud data receiving device, and point cloud data receiving method |
| WO2020191170A1 (en) | 2019-03-20 | 2020-09-24 | Magic Leap, Inc. | System for providing illumination of the eye |
| JP7536781B2 (en) * | 2019-03-25 | 2024-08-20 | マジック リープ, インコーポレイテッド | Systems and methods for virtual and augmented reality |
| WO2020214272A1 (en) | 2019-04-15 | 2020-10-22 | Magic Leap, Inc. | Sensor fusion for electromagnetic tracking |
| US11107293B2 (en) * | 2019-04-23 | 2021-08-31 | XRSpace CO., LTD. | Head mounted display system capable of assigning at least one predetermined interactive characteristic to a virtual object in a virtual environment created according to a real object in a real environment, a related method and a related non-transitory computer readable storage medium |
| US11380011B2 (en) * | 2019-04-23 | 2022-07-05 | Kreatar, Llc | Marker-based positioning of simulated reality |
| US11294453B2 (en) * | 2019-04-23 | 2022-04-05 | Foretell Studios, LLC | Simulated reality cross platform system |
| GB201905847D0 (en) | 2019-04-26 | 2019-06-12 | King S College London | MRI scanner-compatible virtual reality system |
| CN115755397A (en) | 2019-04-26 | 2023-03-07 | 苹果公司 | Head mounted display with low light operation |
| EP3734419A1 (en) * | 2019-05-03 | 2020-11-04 | XRSpace CO., LTD. | Head mounted display system capable of assigning at least one predetermined interactive characteristic to a virtual object in a virtual environment created according to a real object in a real environment, a related method and a related non-transitory computer readable storage medium |
| TWI715026B (en) * | 2019-05-06 | 2021-01-01 | 宏碁股份有限公司 | Network virtual reality control method and host using the same |
| US11783554B2 (en) | 2019-05-10 | 2023-10-10 | BadVR, Inc. | Systems and methods for collecting, locating and visualizing sensor signals in extended reality |
| WO2020236827A1 (en) | 2019-05-20 | 2020-11-26 | Magic Leap, Inc. | Systems and techniques for estimating eye pose |
| CN116249918A (en) | 2019-05-24 | 2023-06-09 | 奇跃公司 | Variable focus component |
| JP7357081B2 (en) | 2019-05-28 | 2023-10-05 | マジック リープ, インコーポレイテッド | Thermal management system for portable electronic devices |
| USD962981S1 (en) | 2019-05-29 | 2022-09-06 | Magic Leap, Inc. | Display screen or portion thereof with animated scrollbar graphical user interface |
| JP7765292B2 (en) | 2019-06-07 | 2025-11-06 | ディジレンズ インコーポレイテッド | Waveguides incorporating transmission and reflection gratings and related methods of manufacture |
| WO2020251540A1 (en) * | 2019-06-10 | 2020-12-17 | Brian Deller | System and method for creation, presentation and interaction within multiple reality and virtual reality environments |
| US10897564B1 (en) | 2019-06-17 | 2021-01-19 | Snap Inc. | Shared control of camera device by multiple devices |
| US12039354B2 (en) * | 2019-06-18 | 2024-07-16 | The Calany Holding S. À R.L. | System and method to operate 3D applications through positional virtualization technology |
| CN112100798A (en) | 2019-06-18 | 2020-12-18 | 明日基金知识产权控股有限公司 | System and method for deploying virtual copies of real world elements into persistent virtual world systems |
| US12040993B2 (en) | 2019-06-18 | 2024-07-16 | The Calany Holding S. À R.L. | Software engine virtualization and dynamic resource and task distribution across edge and cloud |
| US12033271B2 (en) | 2019-06-18 | 2024-07-09 | The Calany Holding S. À R.L. | 3D structure engine-based computation platform |
| CN114286962A (en) | 2019-06-20 | 2022-04-05 | 奇跃公司 | Eyepiece for augmented reality display system |
| WO2020256973A1 (en) | 2019-06-21 | 2020-12-24 | Magic Leap, Inc. | Secure authorization via modal window |
| US12535685B2 (en) | 2019-06-24 | 2026-01-27 | Magic Leap, Inc. | Waveguides having integral spacers and related systems and methods |
| WO2020263866A1 (en) | 2019-06-24 | 2020-12-30 | Magic Leap, Inc. | Waveguides having integral spacers and related systems and methods |
| US11055920B1 (en) * | 2019-06-27 | 2021-07-06 | Facebook Technologies, Llc | Performing operations using a mirror in an artificial reality environment |
| US11036987B1 (en) | 2019-06-27 | 2021-06-15 | Facebook Technologies, Llc | Presenting artificial reality content using a mirror |
| US11145126B1 (en) | 2019-06-27 | 2021-10-12 | Facebook Technologies, Llc | Movement instruction using a mirror in an artificial reality environment |
| US11549799B2 (en) | 2019-07-01 | 2023-01-10 | Apple Inc. | Self-mixing interference device for sensing applications |
| US11372474B2 (en) * | 2019-07-03 | 2022-06-28 | Saec/Kinetic Vision, Inc. | Systems and methods for virtual artificial intelligence development and testing |
| JP7542563B2 (en) | 2019-07-05 | 2024-08-30 | マジック リープ, インコーポレイテッド | Eye tracking latency improvement |
| US11283982B2 (en) | 2019-07-07 | 2022-03-22 | Selfie Snapper, Inc. | Selfie camera |
| AU2020309528B2 (en) | 2019-07-07 | 2026-01-29 | Selfie Snapper, Inc. | Electroadhesion device holder |
| US11029805B2 (en) | 2019-07-10 | 2021-06-08 | Magic Leap, Inc. | Real-time preview of connectable objects in a physically-modeled virtual space |
| EP3999940A4 (en) | 2019-07-16 | 2023-07-26 | Magic Leap, Inc. | Eye center of rotation determination with one or more eye tracking cameras |
| CN114502991A (en) | 2019-07-19 | 2022-05-13 | 奇跃公司 | Display device with diffraction grating having reduced polarization sensitivity |
| CN114514443A (en) | 2019-07-19 | 2022-05-17 | 奇跃公司 | Method for manufacturing diffraction grating |
| US11340857B1 (en) | 2019-07-19 | 2022-05-24 | Snap Inc. | Shared control of a virtual object by multiple devices |
| US11068284B2 (en) * | 2019-07-25 | 2021-07-20 | Huuuge Global Ltd. | System for managing user experience and method therefor |
| US12118897B2 (en) * | 2019-07-25 | 2024-10-15 | International Business Machines Corporation | Augmented reality tutorial generation |
| US11907417B2 (en) | 2019-07-25 | 2024-02-20 | Tectus Corporation | Glance and reveal within a virtual environment |
| US10773157B1 (en) | 2019-07-26 | 2020-09-15 | Arkade, Inc. | Interactive computing devices and accessories |
| US10893127B1 (en) | 2019-07-26 | 2021-01-12 | Arkade, Inc. | System and method for communicating interactive data between heterogeneous devices |
| US10946272B2 (en) | 2019-07-26 | 2021-03-16 | Arkade, Inc. | PC blaster game console |
| JP2022543571A (en) | 2019-07-29 | 2022-10-13 | ディジレンズ インコーポレイテッド | Method and Apparatus for Multiplying Image Resolution and Field of View for Pixelated Displays |
| WO2021021957A1 (en) | 2019-07-30 | 2021-02-04 | Magic Leap, Inc. | Angularly segmented hot mirror for eye tracking |
| US12211151B1 (en) | 2019-07-30 | 2025-01-28 | Onpoint Medical, Inc. | Systems for optimizing augmented reality displays for surgical procedures |
| JP7599475B2 (en) | 2019-07-31 | 2024-12-13 | マジック リープ, インコーポレイテッド | User Data Management for Augmented Reality Using Distributed Ledgers |
| US10944290B2 (en) | 2019-08-02 | 2021-03-09 | Tectus Corporation | Headgear providing inductive coupling to a contact lens |
| JP7359941B2 (en) | 2019-08-12 | 2023-10-11 | マジック リープ, インコーポレイテッド | Systems and methods for virtual reality and augmented reality |
| US20210056391A1 (en) * | 2019-08-20 | 2021-02-25 | Mind Machine Learning, Inc. | Systems and Methods for Simulating Sense Data and Creating Perceptions |
| US11210856B2 (en) * | 2019-08-20 | 2021-12-28 | The Calany Holding S. À R.L. | System and method for interaction-level based telemetry and tracking within digital realities |
| CN114450608A (en) | 2019-08-29 | 2022-05-06 | 迪吉伦斯公司 | Vacuum Bragg grating and method of manufacture |
| US11030815B2 (en) | 2019-09-05 | 2021-06-08 | Wipro Limited | Method and system for rendering virtual reality content |
| WO2021050924A1 (en) | 2019-09-11 | 2021-03-18 | Magic Leap, Inc. | Display device with diffraction grating having reduced polarization sensitivity |
| WO2021061821A1 (en) | 2019-09-27 | 2021-04-01 | Magic Leap, Inc. | Individual viewing in a shared space |
| US11276246B2 (en) | 2019-10-02 | 2022-03-15 | Magic Leap, Inc. | Color space mapping for intuitive surface normal visualization |
| US11176757B2 (en) | 2019-10-02 | 2021-11-16 | Magic Leap, Inc. | Mission driven virtual character for user interaction |
| US11521356B2 (en) * | 2019-10-10 | 2022-12-06 | Meta Platforms Technologies, Llc | Systems and methods for a shared interactive environment |
| CN114586071A (en) * | 2019-10-15 | 2022-06-03 | 奇跃公司 | Cross-reality system supporting multiple device types |
| US11475637B2 (en) * | 2019-10-21 | 2022-10-18 | Wormhole Labs, Inc. | Multi-instance multi-user augmented reality environment |
| US11662807B2 (en) | 2020-01-06 | 2023-05-30 | Tectus Corporation | Eye-tracking user interface for virtual tool control |
| US10901505B1 (en) | 2019-10-24 | 2021-01-26 | Tectus Corporation | Eye-based activation and tool selection systems and methods |
| CN114641713A (en) | 2019-11-08 | 2022-06-17 | 奇跃公司 | Metasurfaces with light redirecting structures comprising multiple materials and methods of making |
| US11493989B2 (en) | 2019-11-08 | 2022-11-08 | Magic Leap, Inc. | Modes of user interaction |
| USD982593S1 (en) | 2019-11-08 | 2023-04-04 | Magic Leap, Inc. | Portion of a display screen with animated ray |
| EP4062380A4 (en) | 2019-11-18 | 2023-11-29 | Magic Leap, Inc. | Mapping and localization of a passable world |
| CN114730111A (en) | 2019-11-22 | 2022-07-08 | 奇跃公司 | Method and system for patterning liquid crystal layer |
| US12094139B2 (en) | 2019-11-22 | 2024-09-17 | Magic Leap, Inc. | Systems and methods for enhanced depth determination using projection spots |
| CN114761859A (en) | 2019-11-26 | 2022-07-15 | 奇跃公司 | Augmented eye tracking for augmented or virtual reality display systems |
| DE102019132173A1 (en) * | 2019-11-27 | 2021-05-27 | Endress+Hauser Conducta Gmbh+Co. Kg | Method for the configuration for the transmission of data from a field device |
| CN114788251A (en) | 2019-12-06 | 2022-07-22 | 奇跃公司 | Encoding stereoscopic splash screens in still images |
| WO2021113322A1 (en) | 2019-12-06 | 2021-06-10 | Magic Leap, Inc. | Dynamic browser stage |
| USD952673S1 (en) | 2019-12-09 | 2022-05-24 | Magic Leap, Inc. | Portion of a display screen with transitional graphical user interface for guiding graphics |
| USD940189S1 (en) | 2019-12-09 | 2022-01-04 | Magic Leap, Inc. | Portion of a display screen with transitional graphical user interface for guiding graphics |
| USD940748S1 (en) | 2019-12-09 | 2022-01-11 | Magic Leap, Inc. | Portion of a display screen with transitional graphical user interface for guiding graphics |
| CN114762008A (en) | 2019-12-09 | 2022-07-15 | 奇跃公司 | Simplified virtual content programmed cross reality system |
| USD941353S1 (en) | 2019-12-09 | 2022-01-18 | Magic Leap, Inc. | Portion of a display screen with transitional graphical user interface for guiding graphics |
| USD940749S1 (en) | 2019-12-09 | 2022-01-11 | Magic Leap, Inc. | Portion of a display screen with transitional graphical user interface for guiding graphics |
| USD941307S1 (en) | 2019-12-09 | 2022-01-18 | Magic Leap, Inc. | Portion of a display screen with graphical user interface for guiding graphics |
| US11288876B2 (en) | 2019-12-13 | 2022-03-29 | Magic Leap, Inc. | Enhanced techniques for volumetric stage mapping based on calibration object |
| US11789527B1 (en) * | 2019-12-17 | 2023-10-17 | Snap Inc. | Eyewear device external face tracking overlay generation |
| CA3165313A1 (en) | 2019-12-20 | 2021-06-24 | Niantic, Inc. | Data hierarchy protocol for data transmission pathway selection |
| CN115175749B (en) * | 2019-12-20 | 2025-10-28 | 奈安蒂克公司 | Method and computer-readable storage medium for positioning a camera |
| US12207881B2 (en) | 2019-12-30 | 2025-01-28 | Cilag Gmbh International | Surgical systems correlating visualization data and powered surgical instrument data |
| US11759283B2 (en) | 2019-12-30 | 2023-09-19 | Cilag Gmbh International | Surgical systems for generating three dimensional constructs of anatomical organs and coupling identified anatomical structures thereto |
| US11896442B2 (en) | 2019-12-30 | 2024-02-13 | Cilag Gmbh International | Surgical systems for proposing and corroborating organ portion removals |
| EP3846008B1 (en) | 2019-12-30 | 2025-11-05 | TMRW Foundation IP SARL | Method and system for enabling enhanced user-to-user communication in digital realities |
| US12453592B2 (en) | 2019-12-30 | 2025-10-28 | Cilag Gmbh International | Adaptive surgical system control according to surgical smoke cloud characteristics |
| US11284963B2 (en) | 2019-12-30 | 2022-03-29 | Cilag Gmbh International | Method of using imaging devices in surgery |
| US11832996B2 (en) | 2019-12-30 | 2023-12-05 | Cilag Gmbh International | Analyzing surgical trends by a surgical system |
| US12053223B2 (en) | 2019-12-30 | 2024-08-06 | Cilag Gmbh International | Adaptive surgical system control according to surgical smoke particulate characteristics |
| US11744667B2 (en) | 2019-12-30 | 2023-09-05 | Cilag Gmbh International | Adaptive visualization by a surgical system |
| US12002571B2 (en) | 2019-12-30 | 2024-06-04 | Cilag Gmbh International | Dynamic surgical visualization systems |
| US11776144B2 (en) | 2019-12-30 | 2023-10-03 | Cilag Gmbh International | System and method for determining, adjusting, and managing resection margin about a subject tissue |
| AU2020419320A1 (en) | 2019-12-31 | 2022-08-18 | Selfie Snapper, Inc. | Electroadhesion device with voltage control module |
| US11095855B2 (en) | 2020-01-16 | 2021-08-17 | Microsoft Technology Licensing, Llc | Remote collaborations with volumetric space indications |
| US11340695B2 (en) | 2020-01-24 | 2022-05-24 | Magic Leap, Inc. | Converting a 2D positional input into a 3D point in space |
| CN115380236B (en) | 2020-01-24 | 2025-05-06 | 奇跃公司 | Content movement and interaction using a single controller |
| US11574424B2 (en) | 2020-01-27 | 2023-02-07 | Magic Leap, Inc. | Augmented reality map curation |
| EP4097684B1 (en) | 2020-01-27 | 2025-12-31 | Magic Leap, Inc. | IMPROVED CONDITION CONTROL FOR ANCHOR-BASED CROSS-REALITY APPLICATIONS |
| EP4097685A4 (en) | 2020-01-27 | 2024-02-21 | Magic Leap, Inc. | Neutral avatars |
| USD949200S1 (en) | 2020-01-27 | 2022-04-19 | Magic Leap, Inc. | Portion of a display screen with a set of avatars |
| USD936704S1 (en) | 2020-01-27 | 2021-11-23 | Magic Leap, Inc. | Portion of a display screen with avatar |
| USD948574S1 (en) | 2020-01-27 | 2022-04-12 | Magic Leap, Inc. | Portion of a display screen with a set of avatars |
| USD948562S1 (en) | 2020-01-27 | 2022-04-12 | Magic Leap, Inc. | Portion of a display screen with avatar |
| WO2021154437A1 (en) | 2020-01-27 | 2021-08-05 | Magic Leap, Inc. | Gaze timer based augmentation of functionality of a user input device |
| CN115039012A (en) | 2020-01-31 | 2022-09-09 | 奇跃公司 | Augmented and virtual reality display system for eye assessment |
| EP4010753A1 (en) * | 2020-02-04 | 2022-06-15 | Google LLC | Systems, devices, and methods for directing and managing image data from a camera in wearable devices |
| US11276248B2 (en) | 2020-02-10 | 2022-03-15 | Magic Leap, Inc. | Body-centric content positioning relative to three-dimensional container in a mixed reality environment |
| US11382699B2 (en) * | 2020-02-10 | 2022-07-12 | Globus Medical Inc. | Extended reality visualization of optical tool tracking volume for computer assisted navigation in surgery |
| US11709363B1 (en) | 2020-02-10 | 2023-07-25 | Avegant Corp. | Waveguide illumination of a spatial light modulator |
| JP7768888B2 (en) | 2020-02-13 | 2025-11-12 | マジック リープ, インコーポレイテッド | Cross-reality system with map processing using multi-resolution frame descriptors |
| CN115398894A (en) | 2020-02-14 | 2022-11-25 | 奇跃公司 | Virtual object motion velocity profiles for virtual and augmented reality display systems |
| EP4111425A4 (en) | 2020-02-26 | 2024-03-13 | Magic Leap, Inc. | Cross reality system with fast localization |
| CN115151784A (en) | 2020-02-26 | 2022-10-04 | 奇跃公司 | Procedural electron beam lithography |
| CN115190837A (en) | 2020-02-28 | 2022-10-14 | 奇跃公司 | Method of manufacturing a mold for forming an eyepiece with an integral spacer |
| US11262588B2 (en) | 2020-03-10 | 2022-03-01 | Magic Leap, Inc. | Spectator view of virtual and physical objects |
| WO2021188926A1 (en) | 2020-03-20 | 2021-09-23 | Magic Leap, Inc. | Systems and methods for retinal imaging and tracking |
| US12101360B2 (en) | 2020-03-25 | 2024-09-24 | Snap Inc. | Virtual interaction session to facilitate augmented reality based communication between multiple users |
| US12182903B2 (en) | 2020-03-25 | 2024-12-31 | Snap Inc. | Augmented reality based communication between multiple users |
| US11985175B2 (en) | 2020-03-25 | 2024-05-14 | Snap Inc. | Virtual interaction session to facilitate time limited augmented reality based communication between multiple users |
| CN115698782A (en) | 2020-03-25 | 2023-02-03 | 奇跃公司 | Optical device with a one-way mirror |
| IT202000006472A1 (en) * | 2020-03-27 | 2021-09-27 | Invisible Cities S R L | System configured to associate a virtual scenario with a real scenario during the movement of a vehicle within a geographical area of interest. |
| US11593997B2 (en) | 2020-03-31 | 2023-02-28 | Snap Inc. | Context based augmented reality communication |
| JP7775214B2 (en) | 2020-04-03 | 2025-11-25 | マジック リープ, インコーポレイテッド | Wearable display system with nanowire LED microdisplay |
| JP7578711B2 (en) | 2020-04-03 | 2024-11-06 | マジック リープ, インコーポレイテッド | Avatar customization for optimal gaze discrimination |
| US11577161B2 (en) * | 2020-04-20 | 2023-02-14 | Dell Products L.P. | Range of motion control in XR applications on information handling systems |
| US11436579B2 (en) | 2020-05-04 | 2022-09-06 | Bank Of America Corporation | Performing enhanced deposit item processing using cognitive automation tools |
| CN111589107B (en) * | 2020-05-14 | 2023-04-28 | 北京代码乾坤科技有限公司 | Behavior prediction method and device of virtual model |
| WO2021237115A1 (en) | 2020-05-22 | 2021-11-25 | Magic Leap, Inc. | Augmented and virtual reality display systems with correlated in-coupling and out-coupling optical regions |
| JP7635264B2 (en) | 2020-06-05 | 2025-02-25 | マジック リープ, インコーポレイテッド | Improved eye tracking techniques based on neural network analysis of images. |
| EP4165647B1 (en) | 2020-06-11 | 2025-12-17 | CareFusion 303, Inc. | Hands-free medication tracking |
| US20210386219A1 (en) * | 2020-06-12 | 2021-12-16 | Selfie Snapper, Inc. | Digital mirror |
| US11665284B2 (en) * | 2020-06-20 | 2023-05-30 | Science House LLC | Systems, methods, and apparatus for virtual meetings |
| USD939607S1 (en) | 2020-07-10 | 2021-12-28 | Selfie Snapper, Inc. | Selfie camera |
| US11275942B2 (en) * | 2020-07-14 | 2022-03-15 | Vicarious Fpc, Inc. | Method and system for generating training data |
| CN116033864A (en) | 2020-07-15 | 2023-04-28 | 奇跃公司 | Eye tracking using a non-spherical cornea model |
| RU201742U1 (en) * | 2020-07-22 | 2020-12-30 | Общество с ограниченной ответственностью "МИКСАР ДЕВЕЛОПМЕНТ" | Augmented reality glasses for use in hazardous production environments |
| JP2023537486A (en) | 2020-08-07 | 2023-09-01 | マジック リープ, インコーポレイテッド | Adjustable cylindrical lens and head mounted display containing same |
| WO2022037758A1 (en) * | 2020-08-18 | 2022-02-24 | Siemens Aktiengesellschaft | Remote collaboration using augmented and virtual reality |
| EP4200847A4 (en) | 2020-08-24 | 2024-10-02 | FD IP & Licensing LLC | PREVIEW DEVICES AND SYSTEMS FOR THE FILM INDUSTRY |
| US11360733B2 (en) | 2020-09-10 | 2022-06-14 | Snap Inc. | Colocated shared augmented reality without shared backend |
| EP3968143B1 (en) * | 2020-09-15 | 2025-02-19 | Nokia Technologies Oy | Audio processing |
| US12529830B2 (en) | 2020-09-16 | 2026-01-20 | Magic Leap, Inc. | Eyepieces for augmented reality display system |
| US11176756B1 (en) * | 2020-09-16 | 2021-11-16 | Meta View, Inc. | Augmented reality collaboration system |
| US11554320B2 (en) | 2020-09-17 | 2023-01-17 | Bogie Inc. | System and method for an interactive controller |
| JP2023545653A (en) | 2020-09-29 | 2023-10-31 | エイヴギャント コーポレイション | Architecture for illuminating display panels |
| US20220101002A1 (en) * | 2020-09-30 | 2022-03-31 | Kyndryl, Inc. | Real-world object inclusion in a virtual reality experience |
| US11736545B2 (en) * | 2020-10-16 | 2023-08-22 | Famous Group Technologies Inc. | Client user interface for virtual fan experience |
| US20220130058A1 (en) * | 2020-10-27 | 2022-04-28 | Raytheon Company | Mission early launch tracker |
| US11893698B2 (en) * | 2020-11-04 | 2024-02-06 | Samsung Electronics Co., Ltd. | Electronic device, AR device and method for controlling data transfer interval thereof |
| US12053247B1 (en) | 2020-12-04 | 2024-08-06 | Onpoint Medical, Inc. | System for multi-directional tracking of head mounted displays for real-time augmented reality guidance of surgical procedures |
| WO2022130414A1 (en) * | 2020-12-17 | 2022-06-23 | Patel Lokesh | Virtual presence device which uses trained humans to represent their hosts using man machine interface |
| IT202000030866A1 (en) * | 2020-12-18 | 2022-06-18 | Caldarola S R L | DEVICE AND RELATED METHOD FOR REMOTE SUPPORT IN AUGMENTED REALITY MODE FOR SIGHT AND HEARING FOR INDOOR ENVIRONMENTS |
| EP4252048A4 (en) | 2020-12-21 | 2024-10-16 | Digilens Inc. | EYEGLOW SUPPRESSION IN WAVEGUIDE-BASED DISPLAYS |
| EP4268054A4 (en) | 2020-12-22 | 2024-01-24 | Telefonaktiebolaget LM Ericsson (publ) | Methods and devices related to extended reality |
| US12026800B1 (en) | 2020-12-31 | 2024-07-02 | Apple Inc. | Blitting a display-locked object |
| US11831665B2 (en) * | 2021-01-04 | 2023-11-28 | Bank Of America Corporation | Device for monitoring a simulated environment |
| WO2022150841A1 (en) | 2021-01-07 | 2022-07-14 | Digilens Inc. | Grating structures for color waveguides |
| US11042028B1 (en) * | 2021-01-12 | 2021-06-22 | University Of Central Florida Research Foundation, Inc. | Relative pose data augmentation of tracked devices in virtual environments |
| RU2760179C1 (en) * | 2021-01-20 | 2021-11-22 | Виктор Александрович Епифанов | Augmented reality system |
| JP7663915B2 (en) | 2021-02-08 | 2025-04-17 | サイトフル コンピューターズ リミテッド | Extended Reality for Productivity |
| EP4295314A4 (en) | 2021-02-08 | 2025-04-16 | Sightful Computers Ltd | AUGMENTED REALITY CONTENT SHARING |
| EP4288950A4 (en) | 2021-02-08 | 2024-12-25 | Sightful Computers Ltd | User interactions in extended reality |
| JP7570939B2 (en) * | 2021-02-10 | 2024-10-22 | 株式会社コロプラ | PROGRAM, INFORMATION PROCESSING METHOD, INFORMATION PROCESSING APPARATUS, AND SYSTEM |
| EP4291973A4 (en) * | 2021-02-12 | 2024-08-14 | Magic Leap, Inc. | Lidar simultaneous localization and mapping |
| CN112843680A (en) * | 2021-03-04 | 2021-05-28 | 腾讯科技(深圳)有限公司 | Picture display method and device, terminal equipment and storage medium |
| US12158612B2 (en) | 2021-03-05 | 2024-12-03 | Digilens Inc. | Evacuated periodic structures and methods of manufacturing |
| EP4304490A4 (en) | 2021-03-10 | 2025-04-09 | Onpoint Medical, Inc. | Augmented reality guidance for imaging systems and robotic surgery |
| WO2022197603A1 (en) | 2021-03-15 | 2022-09-22 | Magic Leap, Inc. | Optical devices and head-mounted displays employing tunable cylindrical lenses |
| WO2022195322A1 (en) * | 2021-03-15 | 2022-09-22 | One Game Studio S.A.P.I De C.V. | System and method for the interaction of at least two users in an augmented reality environment for a videogame |
| US20220308659A1 (en) * | 2021-03-23 | 2022-09-29 | Htc Corporation | Method for interacting with virtual environment, electronic device, and computer readable storage medium |
| CN113761281B (en) * | 2021-04-26 | 2024-05-14 | 腾讯科技(深圳)有限公司 | Virtual resource processing method, device, medium and electronic equipment |
| CN115243095B (en) * | 2021-04-30 | 2024-07-19 | 百度在线网络技术(北京)有限公司 | Method and device for pushing data to be broadcasted and broadcasting data |
| US11938402B2 (en) * | 2021-05-03 | 2024-03-26 | Rakuten Mobile, Inc. | Multi-player interactive system and method of using |
| US20240244392A1 (en) | 2021-05-05 | 2024-07-18 | Telefonaktiebolaget Lm Ericsson (Publ) | Methods and devices related to extended reality |
| US11664923B2 (en) * | 2021-05-12 | 2023-05-30 | T-Mobile Usa, Inc. | Optimizing use of existing telecommunication infrastructure for wireless connectivity |
| US12387435B2 (en) | 2021-05-14 | 2025-08-12 | Gridraster, Inc. | Digital twin sub-millimeter alignment using multimodal 3D deep learning fusion system and method |
| US11250637B1 (en) | 2021-05-14 | 2022-02-15 | Gridraster, Inc. | Multimodal 3D deep learning fusion system and method for reducing the need of 3D training dataset of 3D object tracking for enterprise digital twin mixed reality |
| US12182956B2 (en) | 2021-07-01 | 2024-12-31 | Microport Orthopedics Holdings Inc. | Systems and methods of using three-dimensional image reconstruction to aid in assessing bone or soft tissue aberrations for orthopedic surgery |
| WO2023276141A1 (en) * | 2021-07-02 | 2023-01-05 | Mitsubishi Electric Corporation | Providing system and providing method |
| JP2024526601A (en) | 2021-07-20 | 2024-07-19 | マイクロポート オーソペディックス ホールディングス インク | Systems and methods for creating patient-specific guides for orthopaedic surgery using photogrammetry |
| WO2023009580A2 (en) | 2021-07-28 | 2023-02-02 | Multinarity Ltd | Using an extended reality appliance for productivity |
| US12014030B2 (en) | 2021-08-18 | 2024-06-18 | Bank Of America Corporation | System for predictive virtual scenario presentation |
| EP4387517A4 (en) * | 2021-08-18 | 2025-06-18 | Advanced Neuromodulation Systems, Inc. | DIGITAL HEALTH SERVICE DELIVERY SYSTEMS AND METHODS |
| US11908083B2 (en) | 2021-08-31 | 2024-02-20 | Snap Inc. | Deforming custom mesh based on body mesh |
| WO2023034010A1 (en) * | 2021-09-01 | 2023-03-09 | Snap Inc. | Physical action-based augmented reality communication exchanges |
| US12299825B2 (en) | 2021-09-01 | 2025-05-13 | Snap Inc. | Handcrafted augmented reality effort evidence |
| KR20240056558A (en) | 2021-09-01 | 2024-04-30 | 스냅 인코포레이티드 | Handcrafted Augmented Reality Experiences |
| WO2023034017A1 (en) | 2021-09-02 | 2023-03-09 | Snap Inc. | Augmented reality prop interactions |
| US11663792B2 (en) | 2021-09-08 | 2023-05-30 | Snap Inc. | Body fitted accessory with physics simulation |
| US12135471B2 (en) | 2021-09-10 | 2024-11-05 | Tectus Corporation | Control of an electronic contact lens using eye gestures |
| US12179106B2 (en) | 2021-09-12 | 2024-12-31 | Sony Interactive Entertainment Inc. | Local environment scanning to characterize physical environment for use in VR/AR |
| US11798238B2 (en) | 2021-09-14 | 2023-10-24 | Snap Inc. | Blending body mesh into external mesh |
| US11836866B2 (en) | 2021-09-20 | 2023-12-05 | Snap Inc. | Deforming real-world object using an external mesh |
| US12413043B2 (en) | 2021-09-21 | 2025-09-09 | Apple Inc. | Self-mixing interference device with tunable microelectromechanical system |
| US12169902B2 (en) | 2021-09-21 | 2024-12-17 | Apple Inc. | Methods and systems for composing and executing a scene |
| CN117858682A (en) | 2021-09-30 | 2024-04-09 | 微创骨科学控股股份有限公司 | Systems and methods for intraoperative alignment of surgical elements using photogrammetry |
| US11790614B2 (en) | 2021-10-11 | 2023-10-17 | Snap Inc. | Inferring intent from pose and speech input |
| US11836862B2 (en) | 2021-10-11 | 2023-12-05 | Snap Inc. | External mesh with vertex attributes |
| US11763481B2 (en) | 2021-10-20 | 2023-09-19 | Snap Inc. | Mirror-based augmented reality experience |
| US11592899B1 (en) | 2021-10-28 | 2023-02-28 | Tectus Corporation | Button activation within an eye-controlled user interface |
| US12045940B2 (en) * | 2021-11-03 | 2024-07-23 | Tencent America LLC | Method for streaming dynamic 5G AR/MR experience to 5G devices with updatable scenes |
| EP4441587A1 (en) * | 2021-12-03 | 2024-10-09 | InterDigital CE Patent Holdings, SAS | Adaptation of a haptic signal to device capabilities |
| US20230182031A1 (en) * | 2021-12-09 | 2023-06-15 | Universal City Studios Llc | Amusement content processing systems and methods |
| KR20230103379A (en) | 2021-12-31 | 2023-07-07 | 삼성전자주식회사 | Method and apparatus for processing augmented reality |
| US11619994B1 (en) | 2022-01-14 | 2023-04-04 | Tectus Corporation | Control of an electronic contact lens using pitch-based eye gestures |
| US12433761B1 (en) | 2022-01-20 | 2025-10-07 | Onpoint Medical, Inc. | Systems and methods for determining the shape of spinal rods and spinal interbody devices for use with augmented reality displays, navigation systems and robots in minimally invasive spine procedures |
| US12380238B2 (en) | 2022-01-25 | 2025-08-05 | Sightful Computers Ltd | Dual mode presentation of user interface elements |
| US11948263B1 (en) | 2023-03-14 | 2024-04-02 | Sightful Computers Ltd | Recording the complete physical and extended reality environments of a user |
| US12175614B2 (en) | 2022-01-25 | 2024-12-24 | Sightful Computers Ltd | Recording the complete physical and extended reality environments of a user |
| US12293433B2 (en) | 2022-04-25 | 2025-05-06 | Snap Inc. | Real-time modifications in augmented reality experiences |
| US12244605B2 (en) | 2022-04-29 | 2025-03-04 | Bank Of America Corporation | System and method for geotagging users for authentication |
| US11874961B2 (en) | 2022-05-09 | 2024-01-16 | Tectus Corporation | Managing display of an icon in an eye tracking augmented reality device |
| US12254551B2 (en) | 2022-07-13 | 2025-03-18 | Fd Ip & Licensing Llc | Method and application for animating computer generated images |
| US12222512B2 (en) | 2022-08-18 | 2025-02-11 | Apple Inc. | Displaying content based on state information |
| US11948259B2 (en) | 2022-08-22 | 2024-04-02 | Bank Of America Corporation | System and method for processing and integrating real-time environment instances into virtual reality live streams |
| US12032737B2 (en) * | 2022-08-22 | 2024-07-09 | Meta Platforms Technologies, Llc | Gaze adjusted avatars for immersive reality applications |
| US12518417B2 (en) * | 2022-09-23 | 2026-01-06 | Apple Inc. | Method and device for generating metadata estimations based on metadata subdivisions |
| EP4595015A1 (en) | 2022-09-30 | 2025-08-06 | Sightful Computers Ltd | Adaptive extended reality content presentation in multiple physical environments |
| US12420189B2 (en) | 2022-10-05 | 2025-09-23 | Sony Interactive Entertainment Inc. | Systems and methods for integrating real-world content in a game |
| WO2024085353A1 (en) | 2022-10-20 | 2024-04-25 | 삼성전자주식회사 | Electronic device and method for controlling camera on basis of location to obtain media corresponding to location, and method therefor |
| US12101303B2 (en) * | 2022-11-02 | 2024-09-24 | Truist Bank | Secure packet record in multi-source VR environment |
| EP4639489A1 (en) * | 2022-12-22 | 2025-10-29 | Inter Ikea Systems B.V. | Creating a mixed reality meeting room |
| CN118678990A (en) * | 2023-01-17 | 2024-09-20 | 谷歌有限责任公司 | Ultra wideband radar apparatus for cloud-based game control |
| US12353674B2 (en) * | 2023-01-24 | 2025-07-08 | Ancestry.Com Operations Inc. | Artificial reality family history experience |
| US20240268936A1 (en) * | 2023-02-15 | 2024-08-15 | Align Technology, Inc. | Intraoral 3d scanner with inaccurate focus lens |
| US12309163B2 (en) | 2023-04-25 | 2025-05-20 | Bank Of America Corporation | System and method for managing metaverse instances |
| US11875492B1 (en) | 2023-05-01 | 2024-01-16 | Fd Ip & Licensing Llc | Systems and methods for digital compositing |
| US12482131B2 (en) | 2023-07-10 | 2025-11-25 | Snap Inc. | Extended reality tracking using shared pose data |
| US20250061678A1 (en) * | 2023-08-15 | 2025-02-20 | Brent Hubbard Burgess | System and method for enhanced physical interaction with virtual environments through pattern recognition and feedback mechanisms |
| WO2025078394A1 (en) * | 2023-10-10 | 2025-04-17 | Interdigital Ce Patent Holdings, Sas | Poses of trackable objects |
| WO2025116544A1 (en) * | 2023-11-28 | 2025-06-05 | 삼성전자주식회사 | Electronic device, method, and computer-readable medium for rendering images |
| WO2025250998A1 (en) * | 2024-05-30 | 2025-12-04 | The Johns Hopkins University | Scalable sensor arrays through row column compressive sensing |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101005514A (en) * | 2006-12-27 | 2007-07-25 | 北京航空航天大学 | Multiple server organizing method for network game |
| US20080024594A1 (en) * | 2004-05-19 | 2008-01-31 | Ritchey Kurtis J | Panoramic image-based virtual reality/telepresence audio-visual system and method |
| CN101359432A (en) * | 2008-09-02 | 2009-02-04 | 浙江理工大学 | Interactive 3D virtual logistics simulation integration method and system |
| CN101539804A (en) * | 2009-03-11 | 2009-09-23 | 上海大学 | Real time human-machine interaction method and system based on augmented virtual reality and anomalous screen |
| CN101923462A (en) * | 2009-06-10 | 2010-12-22 | 成都如临其境创意科技有限公司 | FlashVR-based three-dimensional mini-scene network publishing engine |
Family Cites Families (121)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US4852988A (en) | 1988-09-12 | 1989-08-01 | Applied Science Laboratories | Visor and camera providing a parallax-free field-of-view image for a head-mounted eye movement measurement system |
| JPH05328260A (en) * | 1992-05-26 | 1993-12-10 | Olympus Optical Co Ltd | Head mount type display device |
| JP3336687B2 (en) * | 1993-07-21 | 2002-10-21 | セイコーエプソン株式会社 | Glasses-type display device |
| US5491743A (en) * | 1994-05-24 | 1996-02-13 | International Business Machines Corporation | Virtual conference system and terminal apparatus therefor |
| US6847336B1 (en) | 1996-10-02 | 2005-01-25 | Jerome H. Lemelson | Selectively controllable heads-up display system |
| US6329986B1 (en) | 1998-02-21 | 2001-12-11 | U.S. Philips Corporation | Priority-based virtual environment |
| US6118456A (en) | 1998-04-02 | 2000-09-12 | Adaptive Media Technologies | Method and apparatus capable of prioritizing and streaming objects within a 3-D virtual environment |
| JP3456419B2 (en) * | 1998-07-23 | 2003-10-14 | 日本電信電話株式会社 | Method and system for displaying shared space in virtual space and storage medium storing program for displaying shared space in virtual space |
| US6215498B1 (en) * | 1998-09-10 | 2001-04-10 | Lionhearth Technologies, Inc. | Virtual command post |
| US6433760B1 (en) | 1999-01-14 | 2002-08-13 | University Of Central Florida | Head mounted display with eyetracking capability |
| US6289299B1 (en) * | 1999-02-17 | 2001-09-11 | Westinghouse Savannah River Company | Systems and methods for interactive virtual reality process control and simulation |
| US6753857B1 (en) * | 1999-04-16 | 2004-06-22 | Nippon Telegraph And Telephone Corporation | Method and system for 3-D shared virtual environment display communication virtual conference and programs therefor |
| US6491391B1 (en) | 1999-07-02 | 2002-12-10 | E-Vision Llc | System, apparatus, and method for reducing birefringence |
| EP1060772B1 (en) | 1999-06-11 | 2012-02-01 | Canon Kabushiki Kaisha | Apparatus and method to represent mixed reality space shared by plural operators, game apparatus using mixed reality apparatus and interface method thereof |
| CA2316473A1 (en) | 1999-07-28 | 2001-01-28 | Steve Mann | Covert headworn information display or data display or viewfinder |
| US6331909B1 (en) * | 1999-08-05 | 2001-12-18 | Microvision, Inc. | Frequency tunable resonant scanner |
| JP2001197400A (en) * | 2000-01-12 | 2001-07-19 | Mixed Reality Systems Laboratory Inc | Display device, head installation-type display device, control method of head installation-type display device, picture generation method for the same, computer and program storage medium |
| US20020010734A1 (en) * | 2000-02-03 | 2002-01-24 | Ebersole John Franklin | Internetworked augmented reality system and method |
| US6525732B1 (en) * | 2000-02-17 | 2003-02-25 | Wisconsin Alumni Research Foundation | Network-based viewing of images of three-dimensional objects |
| JP2001312356A (en) | 2000-04-28 | 2001-11-09 | Tomohiro Kuroda | Integrated wearable computer provided with image input interface to utilize body |
| US6891518B2 (en) | 2000-10-05 | 2005-05-10 | Siemens Corporate Research, Inc. | Augmented reality visualization device |
| US8316450B2 (en) * | 2000-10-10 | 2012-11-20 | Addn Click, Inc. | System for inserting/overlaying markers, data packets and objects relative to viewable content and enabling live social networking, N-dimensional virtual environments and/or other value derivable from the content |
| JP2002157607A (en) * | 2000-11-17 | 2002-05-31 | Canon Inc | Image generation system, image generation method, and storage medium |
| CA2362895A1 (en) | 2001-06-26 | 2002-12-26 | Steve Mann | Smart sunglasses or computer information display built into eyewear having ordinary appearance, possibly with sight license |
| DE10132872B4 (en) | 2001-07-06 | 2018-10-11 | Volkswagen Ag | Head mounted optical inspection system |
| US7071897B2 (en) | 2001-07-18 | 2006-07-04 | Hewlett-Packard Development Company, L.P. | Immersive augmentation for display systems |
| US7452279B2 (en) | 2001-08-09 | 2008-11-18 | Kabushiki Kaisha Sega | Recording medium of game program and game device using card |
| US20030030597A1 (en) | 2001-08-13 | 2003-02-13 | Geist Richard Edwin | Virtual display apparatus for mobile activities |
| US20030179249A1 (en) | 2002-02-12 | 2003-09-25 | Frank Sauer | User interface for three-dimensional data sets |
| US6917370B2 (en) | 2002-05-13 | 2005-07-12 | Charles Benton | Interacting augmented reality and virtual reality |
| CA2388766A1 (en) | 2002-06-17 | 2003-12-17 | Steve Mann | Eyeglass frames based computer display or eyeglasses with operationally, actually, or computationally, transparent frames |
| US6943754B2 (en) | 2002-09-27 | 2005-09-13 | The Boeing Company | Gaze tracking system, eye-tracking assembly and an associated method of calibration |
| JP4185052B2 (en) | 2002-10-15 | 2008-11-19 | ユニバーシティ オブ サザン カリフォルニア | Enhanced virtual environment |
| US7793801B2 (en) | 2002-11-18 | 2010-09-14 | David Carl Drummond | Positive pressure liquid transfer and removal system configured for operation by a hand and by a foot |
| US7347551B2 (en) | 2003-02-13 | 2008-03-25 | Fergason Patent Properties, Llc | Optical system for monitoring eye movement |
| US7500747B2 (en) | 2003-10-09 | 2009-03-10 | Ipventure, Inc. | Eyeglasses with electrical components |
| JP2005061931A (en) * | 2003-08-11 | 2005-03-10 | Nippon Telegr & Teleph Corp <Ntt> | Three-dimensional shape recovery method, its apparatus, and three-dimensional shape recovery program |
| US7546343B2 (en) * | 2003-09-23 | 2009-06-09 | Alcatel-Lucent Usa Inc. | System and method for supporting virtual conferences |
| JP2005165778A (en) * | 2003-12-03 | 2005-06-23 | Canon Inc | Head-mounted display device and control method thereof |
| JP2005182331A (en) | 2003-12-18 | 2005-07-07 | Sony Corp | Information processing system, service providing device and method, information processor and information processing method, program and recording medium |
| RU2339083C2 (en) * | 2003-12-19 | 2008-11-20 | ТиДиВижн Корпорейшн Эс.Эй.ДЕ Си.Ви | System of three dimensional video games |
| CN100456328C (en) | 2003-12-19 | 2009-01-28 | Td视觉有限公司 | 3D video game system |
| CA2561287C (en) | 2004-04-01 | 2017-07-11 | William C. Torch | Biosensors, communicators, and controllers monitoring eye movement and methods for using them |
| US20050289590A1 (en) | 2004-05-28 | 2005-12-29 | Cheok Adrian D | Marketing platform |
| JP4677269B2 (en) * | 2005-04-08 | 2011-04-27 | キヤノン株式会社 | Information processing method and system |
| US7403268B2 (en) | 2005-02-11 | 2008-07-22 | Deltasphere, Inc. | Method and apparatus for determining the geometric correspondence between multiple 3D rangefinder data sets |
| WO2006099596A1 (en) * | 2005-03-17 | 2006-09-21 | Massachusetts Institute Of Technology | System for and method of motion and force synchronization with time delay reduction in multi-user shared virtual environments |
| JP4738870B2 (en) * | 2005-04-08 | 2011-08-03 | キヤノン株式会社 | Information processing method, information processing apparatus, and remote mixed reality sharing apparatus |
| US20070081123A1 (en) | 2005-10-07 | 2007-04-12 | Lewis Scott W | Digital eyewear |
| US9658473B2 (en) | 2005-10-07 | 2017-05-23 | Percept Technologies Inc | Enhanced optical and perceptual digital eyewear |
| US8696113B2 (en) | 2005-10-07 | 2014-04-15 | Percept Technologies Inc. | Enhanced optical and perceptual digital eyewear |
| US11428937B2 (en) | 2005-10-07 | 2022-08-30 | Percept Technologies | Enhanced optical and perceptual digital eyewear |
| US8730156B2 (en) * | 2010-03-05 | 2014-05-20 | Sony Computer Entertainment America Llc | Maintaining multiple views on a shared stable virtual space |
| US7739339B2 (en) * | 2006-06-28 | 2010-06-15 | The Boeing Company | System and method of communications within a virtual environment |
| US9205329B2 (en) * | 2006-07-25 | 2015-12-08 | Mga Entertainment, Inc. | Virtual world electronic game |
| US8010474B1 (en) * | 2006-09-05 | 2011-08-30 | Aol Inc. | Translating paralinguistic indicators |
| US8726195B2 (en) * | 2006-09-05 | 2014-05-13 | Aol Inc. | Enabling an IM user to navigate a virtual world |
| US8012023B2 (en) | 2006-09-28 | 2011-09-06 | Microsoft Corporation | Virtual entertainment |
| JP2008108246A (en) * | 2006-10-23 | 2008-05-08 | Internatl Business Mach Corp <Ibm> | Method, system and computer program for generating virtual image according to position of browsing person |
| JP5309448B2 (en) * | 2007-01-26 | 2013-10-09 | ソニー株式会社 | Display device and display method |
| US8135018B1 (en) * | 2007-03-29 | 2012-03-13 | Qurio Holdings, Inc. | Message propagation in a distributed virtual world |
| US8601386B2 (en) * | 2007-04-20 | 2013-12-03 | Ingenio Llc | Methods and systems to facilitate real time communications in virtual reality |
| US8433656B1 (en) * | 2007-06-13 | 2013-04-30 | Qurio Holdings, Inc. | Group licenses for virtual objects in a distributed virtual world |
| US20090054084A1 (en) | 2007-08-24 | 2009-02-26 | Motorola, Inc. | Mobile virtual and augmented reality system |
| US9111285B2 (en) * | 2007-08-27 | 2015-08-18 | Qurio Holdings, Inc. | System and method for representing content, user presence and interaction within virtual world advertising environments |
| US20090083051A1 (en) * | 2007-09-26 | 2009-03-26 | Bokor Brian R | Interactive Self-Contained Business Transaction Virtual Object Generation |
| US7792801B2 (en) | 2007-10-12 | 2010-09-07 | International Business Machines Corporation | Controlling and using virtual universe wish lists |
| JP4950834B2 (en) | 2007-10-19 | 2012-06-13 | キヤノン株式会社 | Image processing apparatus and image processing method |
| US20090109240A1 (en) * | 2007-10-24 | 2009-04-30 | Roman Englert | Method and System for Providing and Reconstructing a Photorealistic Three-Dimensional Environment |
| US8786675B2 (en) | 2008-01-23 | 2014-07-22 | Michael F. Deering | Systems using eye mounted displays |
| US8231465B2 (en) * | 2008-02-21 | 2012-07-31 | Palo Alto Research Center Incorporated | Location-aware mixed-reality gaming platform |
| US20090232355A1 (en) | 2008-03-12 | 2009-09-17 | Harris Corporation | Registration of 3d point cloud data using eigenanalysis |
| US8756530B2 (en) | 2008-05-27 | 2014-06-17 | International Business Machines Corporation | Generation and synchronization of offline 3D virtual world content |
| GB2461294B (en) | 2008-06-26 | 2011-04-06 | Light Blue Optics Ltd | Holographic image display systems |
| EP2163284A1 (en) * | 2008-09-02 | 2010-03-17 | Zero Point Holding A/S | Integration of audio input to a software application |
| JP5094663B2 (en) | 2008-09-24 | 2012-12-12 | キヤノン株式会社 | Position / orientation estimation model generation apparatus, position / orientation calculation apparatus, image processing apparatus, and methods thereof |
| US9480919B2 (en) | 2008-10-24 | 2016-11-01 | Excalibur Ip, Llc | Reconfiguring reality using a reality overlay device |
| US20100162121A1 (en) * | 2008-12-22 | 2010-06-24 | Nortel Networks Limited | Dynamic customization of a virtual world |
| JP2010169976A (en) * | 2009-01-23 | 2010-08-05 | Sony Corp | Spatial image display |
| JP2010217719A (en) * | 2009-03-18 | 2010-09-30 | Ricoh Co Ltd | Wearable display device, and control method and program therefor |
| JP5178607B2 (en) | 2009-03-31 | 2013-04-10 | 株式会社バンダイナムコゲームス | Program, information storage medium, mouth shape control method, and mouth shape control device |
| US20100325154A1 (en) * | 2009-06-22 | 2010-12-23 | Nokia Corporation | Method and apparatus for a virtual image world |
| US8966380B2 (en) * | 2009-07-21 | 2015-02-24 | UnisFair, Ltd. | Apparatus and method for a virtual environment center and venues thereof |
| US8939840B2 (en) * | 2009-07-29 | 2015-01-27 | Disney Enterprises, Inc. | System and method for playsets using tracked objects and corresponding virtual worlds |
| US20110084983A1 (en) | 2009-09-29 | 2011-04-14 | Wavelength & Resonance LLC | Systems and Methods for Interaction With a Virtual Environment |
| US20110165939A1 (en) * | 2010-01-05 | 2011-07-07 | Ganz | Method and system for providing a 3d activity in a virtual presentation |
| WO2011106797A1 (en) | 2010-02-28 | 2011-09-01 | Osterhout Group, Inc. | Projection triggering through an external marker in an augmented reality eyepiece |
| US20110213664A1 (en) | 2010-02-28 | 2011-09-01 | Osterhout Group, Inc. | Local advertising content on an interactive head-mounted eyepiece |
| US8890946B2 (en) | 2010-03-01 | 2014-11-18 | Eyefluence, Inc. | Systems and methods for spatially controlled scene illumination |
| KR101208911B1 (en) * | 2010-06-24 | 2012-12-06 | 전자부품연구원 | Operation System and Method For Virtual World |
| US8531355B2 (en) | 2010-07-23 | 2013-09-10 | Gregory A. Maltz | Unitized, vision-controlled, wireless eyeglass transceiver |
| US8941559B2 (en) * | 2010-09-21 | 2015-01-27 | Microsoft Corporation | Opacity filter for display device |
| US8764571B2 (en) | 2010-09-24 | 2014-07-01 | Nokia Corporation | Methods, apparatuses and computer program products for using near field communication to implement games and applications on devices |
| US8661354B2 (en) | 2010-09-24 | 2014-02-25 | Nokia Corporation | Methods, apparatuses and computer program products for using near field communication to implement games and applications on devices |
| US9122053B2 (en) * | 2010-10-15 | 2015-09-01 | Microsoft Technology Licensing, Llc | Realistic occlusion for a head mounted augmented reality display |
| US9348141B2 (en) | 2010-10-27 | 2016-05-24 | Microsoft Technology Licensing, Llc | Low-latency fusing of virtual and real content |
| US9292973B2 (en) | 2010-11-08 | 2016-03-22 | Microsoft Technology Licensing, Llc | Automatic variable virtual focus for augmented reality displays |
| US8620730B2 (en) * | 2010-12-15 | 2013-12-31 | International Business Machines Corporation | Promoting products in a virtual world |
| WO2012103649A1 (en) * | 2011-01-31 | 2012-08-09 | Cast Group Of Companies Inc. | System and method for providing 3d sound |
| WO2012135546A1 (en) * | 2011-03-29 | 2012-10-04 | Qualcomm Incorporated | Anchoring virtual images to real world surfaces in augmented reality systems |
| EP2705435B8 (en) | 2011-05-06 | 2017-08-23 | Magic Leap, Inc. | Massive simultaneous remote digital presence world |
| US8692738B2 (en) * | 2011-06-10 | 2014-04-08 | Disney Enterprises, Inc. | Advanced Pepper's ghost projection system with a multiview and multiplanar display |
| US9323325B2 (en) | 2011-08-30 | 2016-04-26 | Microsoft Technology Licensing, Llc | Enhancing an object of interest in a see-through, mixed reality display device |
| US20130050499A1 (en) * | 2011-08-30 | 2013-02-28 | Qualcomm Incorporated | Indirect tracking |
| US20130063429A1 (en) | 2011-09-08 | 2013-03-14 | Parham Sina | System and method for distributing three-dimensional virtual world data |
| US20130077147A1 (en) | 2011-09-22 | 2013-03-28 | Los Alamos National Security, Llc | Method for producing a partially coherent beam with fast pattern update rates |
| US9286711B2 (en) | 2011-09-30 | 2016-03-15 | Microsoft Technology Licensing, Llc | Representing a location at a previous time period using an augmented reality display |
| CN104011788B (en) | 2011-10-28 | 2016-11-16 | 奇跃公司 | Systems and methods for augmented and virtual reality |
| US8929589B2 (en) | 2011-11-07 | 2015-01-06 | Eyefluence, Inc. | Systems and methods for high-resolution gaze tracking |
| US8611015B2 (en) | 2011-11-22 | 2013-12-17 | Google Inc. | User interface |
| US8235529B1 (en) | 2011-11-30 | 2012-08-07 | Google Inc. | Unlocking a screen using eye tracking information |
| US9182815B2 (en) * | 2011-12-07 | 2015-11-10 | Microsoft Technology Licensing, Llc | Making static printed content dynamic with virtual data |
| US10013053B2 (en) | 2012-01-04 | 2018-07-03 | Tobii Ab | System for gaze interaction |
| US8638498B2 (en) | 2012-01-04 | 2014-01-28 | David D. Bohn | Eyebox adjustment for interpupillary distance |
| US9274338B2 (en) | 2012-03-21 | 2016-03-01 | Microsoft Technology Licensing, Llc | Increasing field of view of reflective waveguide |
| US8989535B2 (en) | 2012-06-04 | 2015-03-24 | Microsoft Technology Licensing, Llc | Multiple waveguide imaging structure |
| WO2014089542A1 (en) | 2012-12-06 | 2014-06-12 | Eyefluence, Inc. | Eye tracking wearable devices and methods for use |
| KR20150103723A (en) | 2013-01-03 | 2015-09-11 | 메타 컴퍼니 | Extramissive spatial imaging digital eye glass for virtual or augmediated vision |
| US20140195918A1 (en) | 2013-01-07 | 2014-07-10 | Steven Friedlander | Eye tracking user interface |
| IL240822B (en) | 2014-09-23 | 2020-03-31 | Heidelberger Druckmasch Ag | Device for feeding sheets |
| GB2570298A (en) * | 2018-01-17 | 2019-07-24 | Nokia Technologies Oy | Providing virtual content based on user context |
-
2012
- 2012-10-29 CN CN201280064922.8A patent/CN104011788B/en active Active
- 2012-10-29 RU RU2017115669A patent/RU2017115669A/en not_active Application Discontinuation
- 2012-10-29 JP JP2014539132A patent/JP6110866B2/en active Active
- 2012-10-29 KR KR1020147014448A patent/KR101944846B1/en active Active
- 2012-10-29 AU AU2012348348A patent/AU2012348348B2/en active Active
- 2012-10-29 EP EP17184948.2A patent/EP3258671B1/en active Active
- 2012-10-29 CA CA3164530A patent/CA3164530C/en active Active
- 2012-10-29 BR BR112014010230A patent/BR112014010230A8/en not_active Application Discontinuation
- 2012-10-29 EP EP24180270.1A patent/EP4404537A3/en active Pending
- 2012-10-29 KR KR1020177030368A patent/KR101964223B1/en active Active
- 2012-10-29 EP EP19219608.7A patent/EP3666352B1/en active Active
- 2012-10-29 KR KR1020197002451A patent/KR102005106B1/en active Active
- 2012-10-29 WO PCT/US2012/062500 patent/WO2013085639A1/en not_active Ceased
- 2012-10-29 CA CA3207408A patent/CA3207408A1/en active Pending
- 2012-10-29 CN CN201610908994.6A patent/CN106484115B/en active Active
- 2012-10-29 EP EP21206447.1A patent/EP3974041B1/en active Active
- 2012-10-29 EP EP18182118.2A patent/EP3404894B1/en active Active
- 2012-10-29 RU RU2014121402A patent/RU2621633C2/en not_active IP Right Cessation
- 2012-10-29 CA CA2853787A patent/CA2853787C/en active Active
- 2012-10-29 US US13/663,466 patent/US9215293B2/en active Active
- 2012-10-29 EP EP12855344.3A patent/EP2771877B1/en active Active
- 2012-10-29 KR KR1020187022042A patent/KR101917630B1/en active Active
- 2012-10-29 CA CA3048647A patent/CA3048647C/en active Active
-
2014
- 2014-04-28 IL IL232281A patent/IL232281B/en active IP Right Grant
- 2014-05-01 IN IN3300CHN2014 patent/IN2014CN03300A/en unknown
- 2014-10-14 US US14/514,115 patent/US20150032823A1/en not_active Abandoned
-
2015
- 2015-12-10 US US14/965,169 patent/US20160100034A1/en not_active Abandoned
-
2016
- 2016-08-16 US US15/238,657 patent/US10021149B2/en active Active
-
2017
- 2017-01-18 JP JP2017006418A patent/JP6345282B2/en active Active
- 2017-02-16 AU AU2017201063A patent/AU2017201063B2/en active Active
-
2018
- 2018-02-01 IL IL257309A patent/IL257309A/en active IP Right Grant
- 2018-03-13 US US15/920,201 patent/US20180205773A1/en not_active Abandoned
- 2018-04-16 JP JP2018078515A patent/JP6657289B2/en active Active
- 2018-07-30 IL IL260879A patent/IL260879B/en active IP Right Grant
-
2019
- 2019-01-29 US US16/261,352 patent/US10469546B2/en active Active
- 2019-02-11 IL IL264777A patent/IL264777B/en active IP Right Grant
- 2019-02-28 AU AU2019201411A patent/AU2019201411A1/en not_active Abandoned
- 2019-09-19 JP JP2019170240A patent/JP6792039B2/en active Active
- 2019-10-07 US US16/594,655 patent/US10637897B2/en active Active
- 2019-10-21 US US16/659,415 patent/US10587659B2/en active Active
- 2019-11-04 US US16/673,880 patent/US10594747B1/en active Active
- 2019-12-18 US US16/719,823 patent/US10841347B2/en active Active
-
2020
- 2020-01-24 US US16/752,577 patent/US10862930B2/en active Active
- 2020-02-04 IL IL272459A patent/IL272459B/en active IP Right Grant
- 2020-10-28 US US17/083,255 patent/US11082462B2/en active Active
- 2020-11-05 JP JP2020185126A patent/JP7246352B2/en active Active
-
2021
- 2021-01-13 AU AU2021200177A patent/AU2021200177A1/en not_active Abandoned
- 2021-06-29 US US17/362,841 patent/US11601484B2/en active Active
- 2021-12-29 JP JP2021215381A patent/JP7348261B2/en active Active
-
2023
- 2023-01-23 AU AU2023200357A patent/AU2023200357B2/en active Active
- 2023-02-01 US US18/163,195 patent/US12095833B2/en active Active
- 2023-09-07 JP JP2023145375A patent/JP7682963B2/en active Active
-
2024
- 2024-05-24 AU AU2024203489A patent/AU2024203489A1/en not_active Abandoned
- 2024-08-15 US US18/805,776 patent/US20240406232A1/en active Pending
-
2025
- 2025-05-14 JP JP2025081379A patent/JP2025122040A/en active Pending
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20080024594A1 (en) * | 2004-05-19 | 2008-01-31 | Ritchey Kurtis J | Panoramic image-based virtual reality/telepresence audio-visual system and method |
| CN101005514A (en) * | 2006-12-27 | 2007-07-25 | 北京航空航天大学 | Multi-server organization method for online games |
| CN101359432A (en) * | 2008-09-02 | 2009-02-04 | 浙江理工大学 | Interactive 3D virtual logistics simulation integration method and system |
| CN101539804A (en) * | 2009-03-11 | 2009-09-23 | 上海大学 | Real-time human-machine interaction method and system based on augmented virtual reality and an irregularly shaped screen |
| CN101923462A (en) * | 2009-06-10 | 2010-12-22 | 成都如临其境创意科技有限公司 | FlashVR-based three-dimensional mini-scene network publishing engine |
Cited By (120)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11663789B2 (en) | 2013-03-11 | 2023-05-30 | Magic Leap, Inc. | Recognizing objects in a passable world model in augmented or virtual reality systems |
| CN105188516A (en) * | 2013-03-11 | 2015-12-23 | 奇跃公司 | System and method for augmented and virtual reality |
| CN105188516B (en) * | 2013-03-11 | 2017-12-22 | 奇跃公司 | System and method for augmented and virtual reality |
| US12039680B2 (en) | 2013-03-11 | 2024-07-16 | Magic Leap, Inc. | Method of rendering using a display device |
| US12380662B2 (en) | 2013-03-15 | 2025-08-05 | Magic Leap, Inc. | Frame-by-frame rendering for augmented or virtual reality systems |
| US11854150B2 (en) | 2013-03-15 | 2023-12-26 | Magic Leap, Inc. | Frame-by-frame rendering for augmented or virtual reality systems |
| TWI621097B (en) * | 2014-11-20 | 2018-04-11 | 財團法人資訊工業策進會 | Mobile device, operating method, and non-transitory computer readable storage medium for storing operating method |
| CN107209950B (en) * | 2015-01-29 | 2021-04-30 | 微软技术许可有限责任公司 | Automatic generation of virtual material from real world material |
| CN107209950A (en) * | 2015-01-29 | 2017-09-26 | 微软技术许可有限责任公司 | Automatic generation of virtual materials from real-world materials |
| CN104759015A (en) * | 2015-02-11 | 2015-07-08 | 北京市朝阳区新希望自闭症支援中心 | Computer control based vision training system |
| TWI567670B (en) * | 2015-02-26 | 2017-01-21 | 宅妝股份有限公司 | Method and system for management of switching virtual-reality mode and augmented-reality mode |
| US10678324B2 (en) | 2015-03-05 | 2020-06-09 | Magic Leap, Inc. | Systems and methods for augmented reality |
| US11429183B2 (en) | 2015-03-05 | 2022-08-30 | Magic Leap, Inc. | Systems and methods for augmented reality |
| CN112764536A (en) * | 2015-03-05 | 2021-05-07 | 奇跃公司 | System and method for augmented reality |
| US10838207B2 (en) | 2015-03-05 | 2020-11-17 | Magic Leap, Inc. | Systems and methods for augmented reality |
| US11256090B2 (en) | 2015-03-05 | 2022-02-22 | Magic Leap, Inc. | Systems and methods for augmented reality |
| CN107533233A (en) * | 2015-03-05 | 2018-01-02 | 奇跃公司 | System and method for augmented reality |
| US12386417B2 (en) | 2015-03-05 | 2025-08-12 | Magic Leap, Inc. | Systems and methods for augmented reality |
| US11619988B2 (en) | 2015-03-05 | 2023-04-04 | Magic Leap, Inc. | Systems and methods for augmented reality |
| TWI672168B (en) * | 2015-03-20 | 2019-09-21 | 新力電腦娛樂股份有限公司 | System for processing content for a head-mounted display (HMD), peripheral device for use in interfacing with a virtual reality scene generated by a computer for presentation on the HMD, and method of simulating a feeling of contact with a virtual object |
| CN107548484B (en) * | 2015-04-29 | 2020-10-27 | 索尼移动通讯有限公司 | Method, computer medium, and system for providing a representation of the orientation and movement of an item |
| CN107548484A (en) * | 2015-04-29 | 2018-01-05 | 索尼移动通讯有限公司 | Providing a representation of the orientation and movement of an item moving in space |
| CN105739106A (en) * | 2015-06-12 | 2016-07-06 | 南京航空航天大学 | Somatosensory multi-view point large-size light field real three-dimensional display device and method |
| CN105739106B (en) * | 2015-06-12 | 2019-07-09 | 南京航空航天大学 | Somatosensory multi-view large-size light field true three-dimensional display device and method |
| CN107683497B (en) * | 2015-06-15 | 2022-04-08 | 索尼公司 | Information processing apparatus, information processing method, and program |
| CN107683497A (en) * | 2015-06-15 | 2018-02-09 | 索尼公司 | Information processing apparatus, information processing method, and program |
| CN106325378A (en) * | 2015-07-01 | 2017-01-11 | 三星电子株式会社 | Method and apparatus for context based application grouping in virtual reality |
| CN111450521B (en) * | 2015-07-28 | 2023-11-24 | 弗丘伊克斯控股公司 | System and method for soft decoupling of inputs |
| CN111450521A (en) * | 2015-07-28 | 2020-07-28 | 弗丘伊克斯控股公司 | System and method for soft decoupling of inputs |
| CN105117931A (en) * | 2015-07-30 | 2015-12-02 | 金华唯见科技有限公司 | Goal-driven virtual reality ecosystem |
| CN105183147A (en) * | 2015-08-03 | 2015-12-23 | 众景视界(北京)科技有限公司 | Head-mounted smart device and method thereof for modeling three-dimensional virtual limb |
| CN108027984A (en) * | 2015-09-25 | 2018-05-11 | 奇跃公司 | Method and system for detecting and combining structural features in 3D reconstruction |
| US11288832B2 (en) | 2015-12-04 | 2022-03-29 | Magic Leap, Inc. | Relocalization systems and methods |
| US10909711B2 (en) | 2015-12-04 | 2021-02-02 | Magic Leap, Inc. | Relocalization systems and methods |
| CN107038738A (en) * | 2015-12-15 | 2017-08-11 | 联想(新加坡)私人有限公司 | Displaying objects using modified rendering parameters |
| CN108139801B (en) * | 2015-12-22 | 2021-03-16 | 谷歌有限责任公司 | System and method for performing electronic display stabilization via light field preserving rendering |
| CN108139801A (en) * | 2015-12-22 | 2018-06-08 | 谷歌有限责任公司 | System and method for performing electronic display stabilization via light-field-preserving rendering |
| CN106919270B (en) * | 2015-12-28 | 2020-04-21 | 宏达国际电子股份有限公司 | Virtual reality device and virtual reality method |
| CN106919270A (en) * | 2015-12-28 | 2017-07-04 | 宏达国际电子股份有限公司 | Virtual reality device and virtual reality method |
| CN105759958A (en) * | 2016-01-27 | 2016-07-13 | 中国人民解放军信息工程大学 | Data interaction system and method |
| CN114995647A (en) * | 2016-02-05 | 2022-09-02 | 奇跃公司 | System and method for augmented reality |
| CN108139805B (en) * | 2016-02-08 | 2021-05-25 | 谷歌有限责任公司 | Control system for navigation in a virtual reality environment |
| CN108139805A (en) * | 2016-02-08 | 2018-06-08 | 谷歌有限责任公司 | Control system for navigation in a virtual reality environment |
| CN107219916B (en) * | 2016-03-21 | 2020-05-12 | 埃森哲环球解决方案有限公司 | Multi-platform based experience generation |
| CN107219916A (en) * | 2016-03-21 | 2017-09-29 | 埃森哲环球解决方案有限公司 | Multi-platform based experience generation |
| CN105892667B (en) * | 2016-03-31 | 2019-03-29 | 联想(北京)有限公司 | Information processing method and electronic device in a virtual reality scenario |
| CN105892667A (en) * | 2016-03-31 | 2016-08-24 | 联想(北京)有限公司 | Information processing method in virtual reality scene and electronic equipment |
| CN109475776A (en) * | 2016-06-08 | 2019-03-15 | 伙伴有限公司 | A system that provides a shared environment |
| CN109478097A (en) * | 2016-06-16 | 2019-03-15 | Smi创新传感技术有限公司 | Method and system, client device, server, and computer program product for providing eye-tracking-based information about user behavior |
| CN109478097B (en) * | 2016-06-16 | 2022-02-22 | 苹果公司 | Method and system for providing information and computer program product |
| CN109478345A (en) * | 2016-07-13 | 2019-03-15 | 株式会社万代南梦宫娱乐 | Simulation system, processing method and information storage medium |
| CN109478345B (en) * | 2016-07-13 | 2023-07-28 | 株式会社万代南梦宫娱乐 | Simulation system, processing method, and information storage medium |
| US10649211B2 (en) | 2016-08-02 | 2020-05-12 | Magic Leap, Inc. | Fixed-distance virtual and augmented reality systems and methods |
| US11536973B2 (en) | 2016-08-02 | 2022-12-27 | Magic Leap, Inc. | Fixed-distance virtual and augmented reality systems and methods |
| US11073699B2 (en) | 2016-08-02 | 2021-07-27 | Magic Leap, Inc. | Fixed-distance virtual and augmented reality systems and methods |
| CN107705349A (en) * | 2016-08-03 | 2018-02-16 | 维布络有限公司 | System and method for augmented reality aware content |
| CN107705349B (en) * | 2016-08-03 | 2021-06-11 | 维布络有限公司 | System and method for augmented reality aware content |
| CN109564504B (en) * | 2016-08-10 | 2022-09-20 | 高通股份有限公司 | Multimedia device for processing spatialized audio based on movement |
| CN109564504A (en) * | 2016-08-10 | 2019-04-02 | 高通股份有限公司 | Multimedia device for processing spatialized audio based on movement |
| CN106484099A (en) * | 2016-08-30 | 2017-03-08 | 王杰 | Content playback apparatus, processing system having the same, and method thereof |
| CN106484099B (en) * | 2016-08-30 | 2022-03-08 | 广州大学 | Content playback apparatus, processing system having the same, and method thereof |
| CN106310660A (en) * | 2016-09-18 | 2017-01-11 | 三峡大学 | Mechanics-based visual virtual football control system |
| CN110084087A (en) * | 2016-10-26 | 2019-08-02 | 奥康科技有限公司 | Wearable apparatus and method for analyzing images and providing feedback |
| CN108021229A (en) * | 2016-10-31 | 2018-05-11 | 迪斯尼企业公司 | Recording a high-fidelity digital immersive experience through offline computing |
| CN108021229B (en) * | 2016-10-31 | 2021-09-14 | 迪斯尼企业公司 | Recording high fidelity digital immersive experience through offline computing |
| CN110050295B (en) * | 2016-12-14 | 2023-06-30 | 微软技术许可有限责任公司 | Subtractive rendering for augmented and virtual reality systems |
| CN110050295A (en) * | 2016-12-14 | 2019-07-23 | 微软技术许可有限责任公司 | Subtractive rendering for augmented and virtual reality systems |
| CN108334377A (en) * | 2017-01-20 | 2018-07-27 | 深圳纬目信息技术有限公司 | Method for monitoring the usage progress of a user of a head-mounted display device |
| US11711668B2 (en) | 2017-01-23 | 2023-07-25 | Magic Leap, Inc. | Localization determination for mixed reality systems |
| US10812936B2 (en) | 2017-01-23 | 2020-10-20 | Magic Leap, Inc. | Localization determination for mixed reality systems |
| US11206507B2 (en) | 2017-01-23 | 2021-12-21 | Magic Leap, Inc. | Localization determination for mixed reality systems |
| US12141341B2 (en) | 2017-01-27 | 2024-11-12 | Qualcomm Incorporated | Systems and methods for tracking a controller |
| US11740690B2 (en) | 2017-01-27 | 2023-08-29 | Qualcomm Incorporated | Systems and methods for tracking a controller |
| CN110140099B (en) * | 2017-01-27 | 2022-03-11 | 高通股份有限公司 | System and method for tracking controller |
| CN110140099A (en) * | 2017-01-27 | 2019-08-16 | 高通股份有限公司 | Systems and methods for tracking a controller |
| US10861237B2 (en) | 2017-03-17 | 2020-12-08 | Magic Leap, Inc. | Mixed reality system with multi-source virtual content compositing and method of generating virtual content using same |
| US10861130B2 (en) | 2017-03-17 | 2020-12-08 | Magic Leap, Inc. | Mixed reality system with virtual content warping and method of generating virtual content using same |
| US10964119B2 (en) | 2017-03-17 | 2021-03-30 | Magic Leap, Inc. | Mixed reality system with multi-source virtual content compositing and method of generating virtual content using same |
| US10769752B2 (en) | 2017-03-17 | 2020-09-08 | Magic Leap, Inc. | Mixed reality system with virtual content warping and method of generating virtual content using same |
| US11423626B2 (en) | 2017-03-17 | 2022-08-23 | Magic Leap, Inc. | Mixed reality system with multi-source virtual content compositing and method of generating virtual content using same |
| US11978175B2 (en) | 2017-03-17 | 2024-05-07 | Magic Leap, Inc. | Mixed reality system with color virtual content warping and method of generating virtual content using same |
| US10762598B2 (en) | 2017-03-17 | 2020-09-01 | Magic Leap, Inc. | Mixed reality system with color virtual content warping and method of generating virtual content using same |
| US11410269B2 (en) | 2017-03-17 | 2022-08-09 | Magic Leap, Inc. | Mixed reality system with virtual content warping and method of generating virtual content using same |
| US11315214B2 (en) | 2017-03-17 | 2022-04-26 | Magic Leap, Inc. | Mixed reality system with color virtual content warping and method of generating virtual content using same |
| CN107330855A (en) * | 2017-06-16 | 2017-11-07 | 福州瑞芯微电子股份有限公司 | Method and apparatus for adjusting dimensional consistency of VR interaction data |
| CN107330855B (en) * | 2017-06-16 | 2021-03-02 | 瑞芯微电子股份有限公司 | Method and device for adjusting size consistency of VR (virtual reality) interactive data |
| CN111107911B (en) * | 2017-07-07 | 2023-10-27 | 布克斯顿全球企业股份有限公司 | Competition simulation |
| CN111107911A (en) * | 2017-07-07 | 2020-05-05 | 布克斯顿全球企业股份有限公司 | Competition simulation |
| CN112020836A (en) * | 2018-01-19 | 2020-12-01 | Esb实验室股份有限公司 | Virtual interactive audience interface |
| CN108983974B (en) * | 2018-07-03 | 2020-06-30 | 百度在线网络技术(北京)有限公司 | AR scene processing method, device, equipment and computer-readable storage medium |
| CN108983974A (en) * | 2018-07-03 | 2018-12-11 | 百度在线网络技术(北京)有限公司 | AR scene processing method, apparatus, device, and computer-readable storage medium |
| US11790482B2 (en) | 2018-07-23 | 2023-10-17 | Magic Leap, Inc. | Mixed reality system with virtual content warping and method of generating virtual content using same |
| US10943521B2 (en) | 2018-07-23 | 2021-03-09 | Magic Leap, Inc. | Intra-field sub code timing in field sequential displays |
| US11379948B2 (en) | 2018-07-23 | 2022-07-05 | Magic Leap, Inc. | Mixed reality system with virtual content warping and method of generating virtual content using same |
| US12190468B2 (en) | 2018-07-23 | 2025-01-07 | Magic Leap, Inc. | Mixed reality system with virtual content warping and method of generating virtual content using same |
| US11501680B2 (en) | 2018-07-23 | 2022-11-15 | Magic Leap, Inc. | Intra-field sub code timing in field sequential displays |
| CN119367749A (en) * | 2018-12-20 | 2025-01-28 | 索尼互动娱乐有限责任公司 | Method for allocating resources for online games |
| CN113614710A (en) * | 2019-03-20 | 2021-11-05 | 诺基亚技术有限公司 | Device for presenting a presentation of data and associated method |
| CN111973979A (en) * | 2019-05-23 | 2020-11-24 | 明日基金知识产权控股有限公司 | Live management of the real world via a persistent virtual world system |
| CN112055033A (en) * | 2019-06-05 | 2020-12-08 | 北京外号信息技术有限公司 | Interaction method and system based on optical communication device |
| CN112055034B (en) * | 2019-06-05 | 2022-03-29 | 北京外号信息技术有限公司 | Interaction method and system based on optical communication device |
| CN112055034A (en) * | 2019-06-05 | 2020-12-08 | 北京外号信息技术有限公司 | Interaction method and system based on optical communication device |
| CN112100284A (en) * | 2019-06-18 | 2020-12-18 | 明日基金知识产权控股有限公司 | Interacting with real world objects and corresponding databases through virtual twin reality |
| CN110413109A (en) * | 2019-06-28 | 2019-11-05 | 广东虚拟现实科技有限公司 | Method, device, system, electronic device and storage medium for generating virtual content |
| CN114008582A (en) * | 2019-06-28 | 2022-02-01 | 索尼集团公司 | Information processing apparatus, information processing method, and program |
| CN114008582B (en) * | 2019-06-28 | 2024-11-26 | 索尼集团公司 | Information processing device, information processing method, and program |
| CN114651435A (en) * | 2019-11-20 | 2022-06-21 | 脸谱科技有限责任公司 | Artificial reality system with virtual wireless channel |
| CN111459267A (en) * | 2020-03-02 | 2020-07-28 | 杭州嘉澜创新科技有限公司 | Data processing method, first server, second server and storage medium |
| CN111462335B (en) * | 2020-03-18 | 2023-12-05 | Oppo广东移动通信有限公司 | Equipment control method and device, medium and equipment based on virtual object interaction |
| CN111462335A (en) * | 2020-03-18 | 2020-07-28 | Oppo广东移动通信有限公司 | Equipment control method and device based on virtual object interaction, medium and equipment |
| CN114125523B (en) * | 2020-08-28 | 2024-06-07 | 明日基金知识产权有限公司 | Data processing system and method |
| CN114125523A (en) * | 2020-08-28 | 2022-03-01 | 明日基金知识产权有限公司 | Data processing system and method |
| CN111939561B (en) * | 2020-08-31 | 2023-09-19 | 聚好看科技股份有限公司 | Display devices and interaction methods |
| CN111939561A (en) * | 2020-08-31 | 2020-11-17 | 聚好看科技股份有限公司 | Display device and interaction method |
| CN112363615A (en) * | 2020-10-27 | 2021-02-12 | 上海影创信息科技有限公司 | Multi-user VR/AR interaction system, method and computer readable storage medium |
| CN113946701B (en) * | 2021-09-14 | 2024-03-19 | 广州市城市规划设计有限公司 | Dynamic updating method and device for urban and rural planning data based on image processing |
| CN113946701A (en) * | 2021-09-14 | 2022-01-18 | 广州市城市规划设计有限公司 | Method and device for dynamically updating urban and rural planning data based on image processing |
| CN115047979B (en) * | 2022-08-15 | 2022-11-01 | 歌尔股份有限公司 | Head-mounted display equipment control system and interaction method |
| CN115047979A (en) * | 2022-08-15 | 2022-09-13 | 歌尔股份有限公司 | Head-mounted display equipment control system and interaction method |
| WO2026001272A1 (en) * | 2024-06-28 | 2026-01-02 | 中国电信股份有限公司技术创新中心 | Augmented reality processing method and apparatus, and communication device |
Also Published As
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP7657338B2 (en) | Systems and methods for augmented and virtual reality | |
| JP7682963B2 (en) | Systems and methods for augmented and virtual reality |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| C06 | Publication | ||
| PB01 | Publication | ||
| C10 | Entry into substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| C14 | Grant of patent or utility model | ||
| GR01 | Patent grant |