US20090240359A1 - Realistic Audio Communication in a Three Dimensional Computer-Generated Virtual Environment - Google Patents
- Publication number
- US20090240359A1 (U.S. application Ser. No. 12/344,542)
- Authority
- US
- United States
- Prior art keywords
- avatar
- audio
- virtual environment
- user
- avatars
- Prior art date: 2008-03-18
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/40—Support for services or applications
- H04L65/401—Support for services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time sensitive sessions, e.g. white board sharing or spawning of a subconference
- H04L65/4015—Support for services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time sensitive sessions, e.g. white board sharing or spawning of a subconference where at least one of the additional parallel sessions is real time or time sensitive, e.g. white board sharing, collaboration or spawning of a subconference
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/131—Protocols for games, networked simulations or virtual reality
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M3/00—Automatic or semi-automatic exchanges
- H04M3/42—Systems providing special services or facilities to subscribers
- H04M3/42221—Conversation recording systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M2201/00—Electronic components, circuits, software, systems or apparatus used in telephone systems
- H04M2201/22—Synchronisation circuits
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M2203/00—Aspects of automatic or semi-automatic exchanges
- H04M2203/30—Aspects of automatic or semi-automatic exchanges related to audio recordings in general
- H04M2203/301—Management of recordings
Description
- This application claims priority to U.S. Provisional Patent Application No. 61/037,447, filed Mar. 18, 2008, entitled “Method and Apparatus For Providing 3 Dimensional Audio on a Conference Bridge”, the content of which is hereby incorporated herein by reference.
- 1. Field of the Invention
- The present invention relates to virtual environments and, more particularly, to a method and apparatus for implementing realistic audio communications in a three dimensional computer-generated virtual environment.
- 2. Description of the Related Art
- Virtual environments simulate actual or fantasy 3-D environments and allow for many participants to interact with each other and with constructs in the environment via remotely-located clients.
- One context in which a virtual environment may be used is in connection with gaming, although other uses for virtual environments are also being developed.
- In a virtual environment, an actual or fantasy universe is simulated within a computer processor/memory.
- Multiple people may participate in the virtual environment through a computer network, such as a local area network or a wide area network such as the Internet.
- Each player selects an “Avatar” which is often a three-dimensional representation of a person or other object to represent them in the virtual environment.
- Participants send commands to a virtual environment server that controls the virtual environment to cause their Avatars to move within the virtual environment. In this way, the participants are able to cause their Avatars to interact with other Avatars and other objects in the virtual environment.
- a virtual environment often takes the form of a virtual-reality three dimensional map, and may include rooms, outdoor areas, and other representations of environments commonly experienced in the physical world.
- the virtual environment may also include multiple objects, people, animals, robots, Avatars, robot Avatars, spatial elements, and objects/environments that allow Avatars to participate in activities. Participants establish a presence in the virtual environment via a virtual environment client on their computer, through which they can create an Avatar and then cause the Avatar to “live” within the virtual environment.
- the view experienced by the Avatar changes according to where the Avatar is located within the virtual environment.
- the views may be displayed to the participant so that the participant controlling the Avatar may see what the Avatar is seeing.
- many virtual environments enable the participant to toggle to a different point of view, such as from a vantage point outside of the Avatar, to see where the Avatar is in the virtual environment.
- the participant may control the Avatar using conventional input devices, such as a computer mouse and keyboard.
- the inputs are sent to the virtual environment client, which forwards the commands to one or more virtual environment servers that are controlling the virtual environment and providing a representation of the virtual environment to the participant via a display associated with the participant's computer.
- an Avatar may be able to observe the environment and optionally also interact with other Avatars, modeled objects within the virtual environment, robotic objects within the virtual environment, or the environment itself (i.e. an Avatar may be allowed to go for a swim in a lake or river in the virtual environment).
- client control input may be permitted to cause changes in the modeled objects, such as moving other objects, opening doors, and so forth, which optionally may then be experienced by other Avatars within the virtual environment.
- Interaction by an Avatar with another modeled object in a virtual environment means that the virtual environment server simulates an interaction in the modeled environment, in response to receiving client control input for the Avatar. Interactions by one Avatar with any other Avatar, object, the environment or automated or robotic Avatars may, in some cases, result in outcomes that may affect or otherwise be observed or experienced by other Avatars, objects, the environment, and automated or robotic Avatars within the virtual environment.
- A virtual environment may be created for the user but, more commonly, the virtual environment is persistent, in which case it continues to exist and be supported by the virtual environment server even when the user is not interacting with it.
- Thus, where there is more than one user of a virtual environment, the environment may continue to evolve when a user is not logged in, such that the next time the user enters the virtual environment it may have changed from what it looked like the previous time.
- Virtual environments are commonly used in on-line gaming, such as for example in online role playing games where users assume the role of a character and take control over most of that character's actions.
- virtual environments are also being used to simulate real life environments to provide an interface for users that will enable on-line education, training, shopping, and other types of interactions between groups of users and between businesses and users.
- the participants represented by the Avatars may elect to communicate with each other.
- the participants may communicate with each other by typing messages to each other or audio may be transmitted between the users to enable the participants to talk with each other.
- Although great advances have happened in connection with visual rendering of Avatars and animation, the audio implementation has lagged and often the audio characteristics of a virtual environment are not very realistic. Accordingly, it would be advantageous to be able to provide a method and apparatus for implementing more realistic audio communications in a three dimensional computer-generated virtual environment.
- A method and apparatus for implementing realistic audio communications in a three dimensional computer-generated virtual environment is provided.
- In one embodiment, a participant in a three dimensional computer-generated virtual environment is able to control the dispersion pattern of his Avatar's voice such that the Avatar's voice may be directionally enhanced using simple controls.
- In one embodiment, an audio dispersion envelope is designed to extend a greater distance in front of the Avatar and a smaller distance to the sides and rear of the Avatar.
- the shape of the audio dispersion envelope may be affected by other aspects of the virtual environment such as ceilings, floors, walls and other logical barriers.
- The audio dispersion envelope may be static or optionally controllable by the participant to enable the Avatar's voice to be extended outward in front of the Avatar. This enables the Avatar to “shout” in the virtual environment such that other Avatars normally outside of the Avatar's hearing range can still hear the user. Similarly, the volume level of the audio may be reduced to allow the Avatars to whisper, or adjusted based on the relative positions of the Avatars and the directions in which the Avatars are facing.
- Individual audio streams may be mixed for each user of the virtual environment depending on the position and orientation of the user's Avatar in the virtual environment, the shape of the user's dispersion envelope, and which other Avatars are located within the user's dispersion envelope.
- Aspects of the present invention are pointed out with particularity in the appended claims. The present invention is illustrated by way of example in the following drawings in which like references indicate similar elements. The following drawings disclose various embodiments of the present invention for purposes of illustration only and are not intended to limit the scope of the invention. For purposes of clarity, not every component may be labeled in every figure. In the figures:
- FIG. 1 is a functional block diagram of a portion of an example system enabling users to have access to a three dimensional computer-generated virtual environment;
- FIG. 2 is a two dimensional view of users in an example three dimensional computer-generated virtual environment, showing a normal audio dispersion envelope of the users in the three dimensional computer-generated virtual environment;
- FIG. 3 is a two dimensional view of users in an example three dimensional computer-generated virtual environment, showing a directional audio dispersion envelope of the users in the three dimensional computer-generated virtual environment;
- FIGS. 4-5 show two examples of user controllable audio dispersion envelopes that enable the user to project his voice in the three dimensional computer-generated virtual environment according to an embodiment of the invention;
- FIGS. 6 and 7 show interaction of the audio dispersion envelope with obstacles in an example three dimensional computer-generated virtual environment according to an embodiment of the invention;
- FIG. 8 is a flow chart showing a process of implementing realistic audio communications in a three dimensional computer-generated virtual environment;
- FIG. 9 is a functional block diagram showing components of the system of FIG. 1 interacting to enable audio to be transmitted between users of the three dimensional computer-generated virtual environment according to an embodiment of the invention;
- FIG. 10 is a diagram of three dimensional coordinate space showing dispersion envelopes in three dimensional space; and
- FIG. 11 is a three dimensional view of a virtual environment.
- The following detailed description sets forth numerous specific details to provide a thorough understanding of the invention. However, those skilled in the art will appreciate that the invention may be practiced without these specific details. In other instances, well-known methods, procedures, components, protocols, algorithms, and circuits have not been described in detail so as not to obscure the invention.
- FIG. 1 shows a portion of an example system 10 showing the interaction between a plurality of users 12 and one or more virtual environments 14 .
- a user may access the virtual environment 14 from their computer 22 over a packet network 16 or other common communication infrastructure.
- the virtual environment 14 is implemented by one or more virtual environment servers 18 .
- Audio may be transmitted between the users 12 by one or more communication servers 20 .
- The virtual environment may be implemented using one or more instances, each of which may be hosted by one or more virtual environment servers. Where there are multiple instances, the Avatars in one instance are generally unaware of Avatars in the other instances; however, a user may have a presence in multiple worlds simultaneously through several virtual environment clients. Conventionally, each instance of the virtual environment may be referred to as a separate World. In the following description, it will be assumed that the Avatars are instantiated in the same world and hence can see and communicate with each other.
- a world may be implemented by one virtual environment server 18 , or may be implemented by multiple virtual environment servers.
- the virtual environment is designed as a visual representation of a real-world environment that enables humans to interact with each other and communicate with each other in near-real time.
- a virtual environment will have its own distinct three dimensional coordinate space.
- Avatars representing users may move within the three dimensional coordinate space and interact with objects and other Avatars within the three dimensional coordinate space.
- the virtual environment servers maintain the virtual environment and generate a visual presentation for each user based on the location of the user's Avatar within the virtual environment.
- the view may also depend on the direction in which the Avatar is facing and the selected viewing option, such as whether the user has opted to have the view appear as if the user was looking through the eyes of the Avatar, or whether the user has opted to pan back from the Avatar to see a three dimensional view of where the Avatar is located and what the Avatar is doing in the three dimensional computer-generated virtual environment.
- Each user 12 has a computer 22 that may be used to access the three-dimensional computer-generated virtual environment.
- the computer 22 will run a virtual environment client 24 and a user interface 26 to the virtual environment.
- the user interface 26 may be part of the virtual environment client 24 or implemented as a separate process.
- a separate virtual environment client may be required for each virtual environment that the user would like to access, although a particular virtual environment client may be designed to interface with multiple virtual environment servers.
- a communication client 28 is provided to enable the user to communicate with other users who are also participating in the three dimensional computer-generated virtual environment.
- the communication client may be part of the virtual environment client 24 , the user interface 26 , or may be a separate process running on the computer 22 .
- the user may see a representation of a portion of the three dimensional computer-generated virtual environment on a display/audio 30 and input commands via a user input device 32 such as a mouse, touch pad, or keyboard.
- the display/audio 30 may be used by the user to transmit/receive audio information while engaged in the virtual environment.
- the display/audio 30 may be a display screen having a speaker and a microphone.
- the user interface generates the output shown on the display under the control of the virtual environment client, and receives the input from the user and passes the user input to the virtual environment client.
- the virtual environment client enables the user's Avatar 34 or other object under the control of the user to execute the desired action in the virtual environment. In this way the user may control a portion of the virtual environment, such as the person's Avatar or other objects in contact with the Avatar, to change the virtual environment for the other users of the virtual environment.
- an Avatar is a three dimensional rendering of a person or other creature that represents the user in the virtual environment.
- the user selects the way that their Avatar looks when creating a profile for the virtual environment and then can control the movement of the Avatar in the virtual environment such as by causing the Avatar to walk, run, wave, talk, or make other similar movements.
- the block 34 representing the Avatar in the virtual environment 14 is not intended to show how an Avatar would be expected to appear in a virtual environment. Rather, the actual appearance of the Avatar is immaterial since the actual appearance of each user's Avatar may be expected to be somewhat different and customized according to the preferences of that user.
- Since the actual appearance of the Avatars in the three dimensional computer-generated virtual environment is not important to the concepts discussed herein, Avatars have generally been represented herein using simple geometric shapes such as cubes and diamonds, rather than complex three dimensional shapes such as people and animals.
- FIG. 2 shows a portion of an example three dimensional computer-generated virtual environment and showing normal audio dispersion envelopes associated with Avatars in the three dimensional computer generated virtual environment.
- FIG. 2 has been shown in two dimensions for ease of illustration and to more clearly show how audio dispersion occurs; audio dispersion occurs in the vertical direction in the same manner. To simplify the explanation, most of the description uses two-dimensional figures to explain how audio dispersion may be affected in the virtual environment. It should be remembered that many virtual environments are three dimensional and, hence, the audio dispersion envelopes will extend in all three dimensions. Extension of the two dimensional audio dispersion envelopes to three dimensions is straightforward, and several examples of this are provided in connection with FIGS. 10 and 11 . Thus, although the description may focus on two dimensions, the invention is not limited in this manner, as the same principles may be used to control the vertical (Z coordinate) dispersion as are used to control the X and Y coordinate dispersion properties.
- FIG. 10 shows another example in which a first Avatar is located at XYZ coordinates 2, 2, 25, and a second Avatar is located at XYZ coordinates 25, 25, 4.
- Viewing only the X and Y coordinates, the two Avatars are approximately 18.34 units apart. If the audio dispersion envelope is 20 units, then looking only at the X and Y coordinates the two Avatars should be able to talk with each other. However, when the Z coordinate is factored in, the separation between the two Avatars in three dimensional space is 28.46 units, which is well beyond the 20 unit reach of the dispersion envelope. Accordingly, in a three dimensional virtual environment it may often be necessary to consider the Z coordinate separation of the two Avatars as well as the X and Y coordinate separation when determining whether the two Avatars can talk with each other.
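- The audibility test described here is a straightforward Euclidean distance check. The following sketch illustrates the point (the positions and the 20-unit envelope radius are chosen for illustration and are not the coordinates of FIG. 10): two Avatars can appear to be within range when only X and Y are compared, yet be out of range once the Z separation is included.

```python
import math

def separation(a, b, include_z=True):
    """Euclidean distance between two (x, y, z) positions, optionally ignoring Z."""
    dims = 3 if include_z else 2
    return math.sqrt(sum((a[i] - b[i]) ** 2 for i in range(dims)))

ENVELOPE_RADIUS = 20.0  # hypothetical dispersion envelope reach, in world units

avatar_a = (0.0, 0.0, 0.0)
avatar_b = (12.0, 12.0, 20.0)  # close in the XY plane, far apart vertically

print(separation(avatar_a, avatar_b, include_z=False))  # ~16.97 -> inside the envelope
print(separation(avatar_a, avatar_b))                   # ~26.23 -> outside the envelope
```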
- a mixed audio stream will be created for each user based on the position of the user's Avatar and the dispersion envelope for the Avatar, so that audio from each of the users represented by an Avatar within the user's dispersion envelope can be mixed and provided to the user.
- individual audio streams may be created for each user so that the user can hear audio from other users proximate their Avatar in the virtual environment.
- the volume of a particular user will be adjusted during the mixing process so that audio from close Avatars is louder than audio from Avatars that are farther away.
- As shown in FIG. 2 , in this example it has been assumed that five Avatars 34 are present in the viewable area of virtual environment 14 .
- the Avatars have been labeled A through E for purposes of discussion.
- each Avatar would be controlled by a separate user, although there may be instances where a user could control more than one Avatar.
- the arrow shows the direction that the avatar is facing in the virtual environment.
- In the example shown in FIG. 2 , users associated with Avatars are generally allowed to talk with each other as long as they are within a particular distance of each other.
- Avatar A is not within range of any other Avatar and, accordingly, the user associated with Avatar A is not able to talk to the users associated with any of the other Avatars.
- Avatars D and E are not sufficiently close to talk with each other, even though they are looking at each other.
- Avatars B and C can hear each other, however, since they are sufficiently close to each other.
- the audio dispersion envelopes for each of the Avatars are spherical (circular in two dimensional space) such that an Avatar can communicate with any Avatar that is within a particular radial distance.
- When the user associated with the Avatar speaks, the other Avatars within the radial distance will hear the user. If the Avatars are too far apart they are not able to “talk” within the virtual environment and, hence, audio packets/data will not be transmitted between the users.
- The volume of a user's audio contribution may be determined on a linear basis.
- For example, assume user A is located at the center of his dispersion envelope. The volume close to user A is highest; as the distance from user A increases within the dispersion envelope, the volume of the contribution tapers off on a linear basis until the edge of the dispersion envelope is reached.
- Audio is mixed individually for each user of the virtual environment.
- the particular mix of audio will depend on which other users are within the dispersion envelope for the user, and their location within the dispersion envelope.
- the location within the dispersion envelope affects the volume with which that user's audio will be presented to the user associated with the dispersion envelope. Since the audio is mixed individually for each user, an audio bridge is not required per user, but rather an audio stream may be created individually for each user based on which other users are proximate that user in the virtual environment.
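- As a rough sketch of this per-user mixing (a minimal illustration; the class, helper names, and linear falloff below are assumptions rather than the patent's prescribed implementation), each listener's stream is assembled by gating every other user on the listener's envelope and weighting by distance:

```python
import math
from dataclasses import dataclass

@dataclass
class Avatar:
    name: str
    position: tuple  # (x, y, z) world coordinates

def linear_gain(dist: float, radius: float) -> float:
    """Volume is 1.0 beside the speaker and tapers linearly to 0.0 at the envelope edge."""
    return max(0.0, 1.0 - dist / radius)

def mix_for(listener: Avatar, others: list, radius: float = 20.0) -> list:
    """Per-listener mix: one (speaker, gain) pair for each audible speaker."""
    mix = []
    for speaker in others:
        dist = math.dist(listener.position, speaker.position)
        gain = linear_gain(dist, radius)
        if gain > 0.0:
            mix.append((speaker.name, round(gain, 2)))  # nearer speakers are louder
    return mix

# Example: B is audible and louder than C; D is outside the envelope entirely.
a = Avatar("A", (0, 0, 0))
print(mix_for(a, [Avatar("B", (5, 0, 0)), Avatar("C", (15, 0, 0)), Avatar("D", (40, 0, 0))]))
```

- Because the mix is computed per listener, each user receives a stream built from their own vantage point rather than from a shared bridge.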
- FIG. 3 shows an embodiment of the invention in which the audio dispersion envelope may be shaped to extend further in front of the Avatar and to a lesser distance behind and to the left and right sides of the Avatar.
- This enables the Avatar to talk in a particular direction within the virtual environment while minimizing the amount of noise the Avatar generates for other users of the virtual environment.
- the ability of a user to talk with other users is dependent, in part, on the orientation of the user's Avatar within the virtual environment. This mirrors real life, where it is easier to hear someone who is facing you than to hear the same person talking at the same volume, but facing a different direction.
- the audio dispersion envelope may be controlled by the user using simple controls, such as the controls used to control the direction in which the Avatar is facing.
- other controls may also be used to control the volume or distance of the audio dispersion envelope to further enhance the user's audio control in the virtual environment.
- the Avatars are in the same position as they are in the example shown in FIG. 2 .
- the audio dispersion envelopes have been adjusted to extend further in the direction that the Avatar is facing and to extend to a lesser extent to the sides of the Avatar and to the rear of the Avatar. This affects which Avatars are able to communicate with each other.
- Avatars B and C are not able to talk with each other, even though they are standing relatively close to each other, since they are not facing each other.
- Avatars D and E are able to talk with each other since they are facing each other, even though they are somewhat farther apart than Avatars B and C.
- Avatar A is still alone and, hence, cannot talk to any other Avatars.
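- One plausible way to realize such a directional envelope (the lobe formula below is an assumption for illustration; the patent does not prescribe a particular shape) is to make the envelope's reach a function of the angle between the direction the speaker is facing and the direction to the listener:

```python
import math

def directional_reach(base, facing, to_listener, front=1.5, rear=0.5):
    """Envelope reach by direction: longest straight ahead, shortest behind.

    facing and to_listener are 2D unit vectors; their dot product (the cosine
    of the angle between them) blends between the front and rear reach.
    """
    cos_angle = facing[0] * to_listener[0] + facing[1] * to_listener[1]
    blend = (cos_angle + 1.0) / 2.0  # 1.0 directly ahead, 0.0 directly behind
    return base * (rear + (front - rear) * blend)

def within_voice_range(speaker_pos, speaker_facing, listener_pos, base=20.0):
    """True if the listener falls inside the speaker's directional envelope."""
    dx = listener_pos[0] - speaker_pos[0]
    dy = listener_pos[1] - speaker_pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return True
    return dist <= directional_reach(base, speaker_facing, (dx / dist, dy / dist))
```

- Under this model, requiring within_voice_range() to hold in both directions yields the mutual-envelope behavior described below, while requiring it in only one direction yields the one-way variant.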
- In one embodiment, the Avatars must be within each other's audio dispersion envelopes for audio to be transmitted between them.
- An alternative may be to enable audio to be transmitted where at least one of the Avatars is within the dispersion envelope of the other Avatar. For example, if Avatar B is within the dispersion envelope of Avatar C, but Avatar C is not in the dispersion envelope of Avatar B, audio may be transmitted between the users associated with Avatars C and B.
- audio may be transmitted in one direction such that the user associated with Avatar C can hear the user associated with Avatar B, but the user associated with Avatar B cannot hear the user associated with Avatar C.
- the shape of the audio dispersion envelope may depend on the preferences of the user as well as the preferences of the virtual environment provider.
- the virtual environment provider may provide the user with an option to select an audio dispersion shape to be associated with their Avatar when they enter the virtual environment.
- the audio dispersion shape may be persistent until adjusted by the user. For example, the user may select a voice for their Avatar such that some users will have robust loud voices while others will have more meek and quiet voices.
- the virtual environment provider may provide particular audio dispersion profiles for different types of Avatars, for example police Avatars may be able to be heard at a greater distance than other types of Avatars.
- the shape of the audio dispersion envelope may depend on other environmental factors, such as the location of the Avatar within the virtual environment and the presence or absence of ambient noise in the virtual environment.
- FIG. 4 shows an embodiment in which the Avatar is provided with an option to increase the volume of their voice so that their voice may reach farther within the virtual environment. This allows the Avatar to “shout” within the virtual environment.
- For example, assume that the user associated with Avatar A would like to talk to the user associated with Avatar B.
- User A may cause his Avatar A to face Avatar B in the virtual environment.
- Since the audio dispersion profile is designed to extend further in front of the Avatar, this action will cause the main audio dispersion profile lobe to extend toward Avatar B.
- However, the main audio dispersion lobe of the audio dispersion profile may be insufficiently large to encompass Avatar B if Avatars A and B are too far apart in the virtual environment.
- In that case, user A may “shout” toward Avatar B to cause the audio dispersion profile to extend further in the direction of Avatar B.
- the user's intention to shout at B may be indicated by the user through the manipulation of simple controls.
- the mouse wheel may be a shout control that the user uses to extend the audio dispersion profile of their Avatar.
- For example, to shout, the user may simply scroll the mouse wheel forward by contacting the mouse wheel and pushing forward with their finger, the motion commonly used to scroll up in most common computer user interfaces.
- To return the envelope to normal, the user may pull back on the top of the mouse wheel in a motion similar to scrolling down with the mouse wheel on most common user interfaces.
- A user may use explicit controls to invoke OmniVoice or, preferably, OmniVoice may be invoked intrinsically based on the location of the Avatar within the virtual environment. For example, the user may walk up to a podium on a stage, and the user's presence on the stage may cause audio provided by the user to be included in the mixed audio stream of every other Avatar within a particular volume of the virtual environment.
- the mouse wheel may have multiple uses in the virtual environment, depending on the particular virtual environment and the other activities available to the Avatar. If this is the case, then a combination of inputs may be used to control the audio dispersion profile. For example, left clicking on the mouse combined with scrolling of the mouse wheel, or depressing a key on the keyboard along with scrolling of the mouse wheel may be used to signal that the mouse scrolling action is associated with the Avatar's voice rather than another action. Alternatively, the mouse wheel may not be used and a key stroke or combination of keystrokes on the keyboard may be used to extend the audio dispersion envelope and cause the audio dispersion envelope to return to normal.
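- An input-handling sketch of such a control scheme follows (the handler names, the modifier key, and the step size are assumptions for illustration; actual virtual environment clients each expose their own input APIs):

```python
SHOUT_STEP = 0.25                  # each wheel click scales the envelope by 25%
MIN_SCALE, MAX_SCALE = 0.25, 3.0   # clamp between a whisper and a full shout

class VoiceControl:
    """Tracks the scaling applied to the Avatar's audio dispersion envelope."""

    def __init__(self):
        self.scale = 1.0  # 1.0 = normal speaking envelope

    def on_mouse_wheel(self, clicks: int, ctrl_held: bool) -> None:
        """Scroll forward (positive clicks) to shout, backward to whisper.

        Requiring a held modifier key disambiguates this gesture from the
        mouse wheel's other uses in the virtual environment.
        """
        if not ctrl_held:
            return  # the wheel is being used for some other action
        self.scale = min(MAX_SCALE, max(MIN_SCALE, self.scale + SHOUT_STEP * clicks))

    def effective_reach(self, base_radius: float) -> float:
        return base_radius * self.scale
```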
- the user may also warp the audio dispersion envelope in real time to control the Avatar's audio dispersion envelope in the virtual environment.
- the facial expression of the Avatar may change or other visual indication may be provided as to who is shouting. This enables the user to know that they are shouting as well as to enable other users of the virtual environment to understand why the physics have changed.
- the Avatar may cup their hands around their mouth to provide a visual clue that they are shouting in a particular direction.
- A larger extension of the Avatar's voice may also be indicated visually, for example by displaying a ghost of the Avatar that moves closer to the new center of the voice range so that other users can determine who is yelling.
- Other visual indications may be provided as well.
- the user controlled audio dispersion envelope warping may toggle, as shown in FIG. 4 , such that the user is either talking normally or shouting depending on the state of the toggle control.
- the user controlled audio dispersion profile warping may be more gradual and have multiple discrete levels as shown in FIG. 5 .
- the user may increase the directionality and range of the projection of their voice in the three-dimensional computer generated virtual environment so that the reach of their voice may extend different distances depending on the extent to which the user would like to shout in the virtual environment.
- the user has been provided with four discrete levels, and then a fifth level for use in connection with OmniVoice. Other numbers of levels may be used as well.
- the discretization may be blurred such that the control is closer to continuous depending on the implementation.
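- Continuing the control sketch above, the multi-level projection of FIG. 5 might be modeled as a small lookup table (the multipliers and the OmniVoice sentinel are invented for illustration):

```python
# Reach multiplier per projection level; None marks OmniVoice, where the
# speaker is heard by every Avatar within the surrounding volume.
PROJECTION_LEVELS = {1: 1.0, 2: 1.5, 3: 2.0, 4: 2.5, 5: None}

def reach_for_level(level: int, base_radius: float) -> float:
    multiplier = PROJECTION_LEVELS[level]
    return float("inf") if multiplier is None else base_radius * multiplier
```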
- The selection of a person to shout to may be implemented when the user mouses over another Avatar and depresses a button, such as left clicking or right clicking on the other Avatar. If the other Avatar is within normal talking distance, audio from that Avatar will be mixed into the audio stream presented to the user. Audio from other Avatars within listening distance will similarly be included in the mixed audio stream presented to the user. If the other Avatar is not within listening distance, i.e. is not within the normal audio dispersion envelope, the user may be provided with an option to shout to the other Avatar. In this embodiment the user would be provided with an instruction to double click on the other Avatar to shout to them. Many different ways of implementing the ability to shout may be possible depending on the preferences of the user interface designer.
- The proximity of the Avatars may be used to adjust the volume of their audio when it is mixed into the audio stream for the user, so that the user is presented with an audio stream that more closely resembles normal realistic audio.
- Other environmental factors may similarly affect the communication between Avatars in the virtual environment.
- the same controls may optionally also be used to reduce the size of the dispersion envelope so that the user can whisper in the virtual environment.
- the user may control their voice in the opposite direction to reduce the size of the dispersion envelope so that users must be closer to the user's Avatar to communicate with the user.
- the mouse controls or other controls may be used in this manner to reduce the size of the audio dispersion envelope as desired.
- FIGS. 6 and 7 show an embodiment in which the profile of the audio dispersion envelope is affected by obstacles in the virtual environment.
- Conventionally, two Avatars in a virtual environment will be able to hear each other if they are within talking distance of each other, regardless of the other objects in the virtual environment. For example, assume that Avatar A is in one room and that Avatar B is in another room as shown in FIG. 7 .
- the virtual environment server would enable the Avatars to communicate with each other (as shown by the dashed line representing the dispersion envelope) since they are proximate each other in the virtual environment.
- FIG. 11 shows a similar situation where two Avatars are close to each other, but are on different floors of the virtual environment.
- the two Avatars are unlikely to be able to see through the wall or floor/ceiling and, hence, enabling the Avatars to hear each other through an obstacle of this nature is unnatural.
- Enabling audio to pass through obstacles in this manner also enables Avatars to listen in on conversations that are occurring within the virtual environment between other Avatars, without being seen by the other Avatars.
- the audio dispersion profile may be adjusted to account for obstacles in the virtual environment. Obstacles may be thought of as creating shadows on the profile to reduce the distance of the profile in a particular direction. This prevents communication between Avatars where they would otherwise be able to communicate if not for the imposition of the obstacle.
- Avatar A is in a room and talking through a doorway. The jambs and walls on either side of the door serve as obstacles that partially obstruct the audio dispersion profile of the user. If the door is closed, this too may affect the dispersion envelope.
- Avatar B who is standing next to the wall will be unable to hear or talk to Avatar A since Avatar B is in the shadow of the wall and outside of Avatar A's audio dispersion profile.
- Avatar C is in direct line of sight of Avatar A through the door and, accordingly, may communicate with Avatar A.
- Avatar B may communicate with Avatar C and, hence, may still hear Avatar C's side of the conversation between Avatars A and C.
- a particular Avatar may be only partially included in communications between other parties.
- a discrete mixer is implemented per client, where essentially all these described attributes and properties are calculated on a per client basis.
- the mixer determines which audio from adjacent Avatars is able to be heard by the particular user and mixes the available audio streams accordingly. Implementing this on a per-client basis is advantageous because no two users are ever in exactly the same position at a particular point in time, and hence the final output of each of the clients will necessarily be different.
- FIG. 11 shows two Avatars that are separated vertically as they are on separate floors.
- The floor may function in the same manner as the walls did in FIGS. 6 and 7 . Specifically, if the rooms on the separate floors are defined as separate volumes, sound from one floor will not enter the other floor so that, even though the two Avatars are very close to each other in both the X and Y coordinate space, they are separated vertically such that the two Avatars cannot talk with each other.
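- A minimal way to capture this separate-volume behavior (the volume identifiers and helper below are hypothetical) is to require that two Avatars share an acoustic volume before any distance test is applied:

```python
def same_acoustic_volume(avatar_a: str, avatar_b: str, volume_of: dict) -> bool:
    """Audio propagates only between Avatars inside the same defined volume."""
    return volume_of[avatar_a] == volume_of[avatar_b]

# Two Avatars nearly coincident in X and Y, but on different floors:
volume_of = {"A": "floor_1_room_3", "B": "floor_2_room_3"}
print(same_acoustic_volume("A", "B", volume_of))  # False -> no audio is exchanged
```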
- the shadow objects may be wholly opaque to transmission of sound or may simply be attenuating objects.
- For example, a concrete wall may attenuate sound 100%, a normal wall may attenuate 90% of the sound while allowing some sound to pass through, and a curtain may attenuate sound only modestly, such as 10%.
- the level of attenuation may be specified when the object is placed in the virtual environment.
- Audio may be implemented using a communication server that is configured to mix audio individually for each user of the virtual environment.
- the communication server will receive audio from all the users of the virtual environment and create an audio stream for a particular user by determining which of the other users have an Avatar within the user's dispersion envelope.
- a notion of directionality needs to be included in the selection process such that the selection process does not simply look at the relative distance of the participants, but also looks to see what direction the participants are facing within the virtual environment. This may be done by associating a vector with each participant and determining whether the vector extends sufficiently close to the other Avatar to warrant inclusion of the audio from that user in the audio stream.
- Additionally, the process may look to determine whether the vector traverses any shadow objects in the virtual environment. If so, the extent of the shadow may be calculated to determine whether the audio should be included.
- Other ways of implementing the audio connection determination process may be used as well and the invention is not limited to this particular example implementation.
- Alternatively, whether audio is to be transmitted between two Avatars may be determined by integrating attenuation along a vector between the Avatars.
- the normal “air” or empty space in the virtual environment may be provided with an attenuation factor such as 5% per unit distance.
- Other objects within the environment may be provided with other attenuation factors depending on their intended material.
- Transmission of audio between Avatars may depend on the distance between the Avatars, and hence the amount of air the sound must pass through, and the objects the vector passes through.
- The strength of the vector, and hence the attenuation able to be accommodated while still enabling communication, may depend on the direction the Avatar is facing. Additionally, the user may temporarily increase the strength of the vector by causing the Avatar to shout. A sketch of this attenuation model appears below.
- That privilege may be reserved for Avatars possessing particular items within the virtual environment.
- the Avatar may need to find or purchase a particular item such as a virtual bull horn to enable the Avatar to shout.
- Other embodiments are possible as well.
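- Pulling the pieces of this attenuation model together, the following sketch performs the integration test (the material factors and per-unit air loss echo the examples above, while the audibility threshold and the shout multiplier are assumptions):

```python
AIR_LOSS_PER_UNIT = 0.05  # 5% of the remaining signal is lost per unit of open space

# Fraction of sound removed by each obstacle type, per the examples above.
MATERIAL_ATTENUATION = {"concrete_wall": 1.00, "normal_wall": 0.90, "curtain": 0.10}

def surviving_signal(distance: float, obstacles: list) -> float:
    """Integrate attenuation along the straight line between two Avatars.

    Air loss compounds per unit of distance; each obstacle the line crosses
    then removes a fixed fraction of whatever signal remains.
    """
    signal = (1.0 - AIR_LOSS_PER_UNIT) ** distance
    for material in obstacles:
        signal *= 1.0 - MATERIAL_ATTENUATION[material]
    return signal

def audible(distance: float, obstacles: list, strength: float = 1.0,
            threshold: float = 0.25) -> bool:
    """strength > 1.0 models facing the listener or shouting."""
    return surviving_signal(distance, obstacles) * strength >= threshold

print(audible(10.0, ["curtain"]))           # True: ~0.54 of the signal survives
print(audible(10.0, ["normal_wall"]))       # False: only ~0.06 survives
print(audible(10.0, ["normal_wall"], 5.0))  # True: shouting overcomes the wall
```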
- FIG. 8 shows a flow chart of a process, portions of which may be used by one or more entities, to determine whether audio should be transmitted between particular participants in a three dimensional computer-generated virtual environment.
- the virtual environment server(s) have rendered the virtual environment ( 100 ) so that it is available for use by participants.
- the virtual environment servers will enable user A to enter the virtual environment and will render Avatar A for user A within the virtual environment ( 102 ).
- the virtual environment servers will enable user B to enter the virtual environment and will render Avatar B for user B within the virtual environment ( 102 ′).
- Other users may have Avatars within the virtual environment as well.
- the virtual environment servers will also define an audio dispersion envelope for Avatar A which specifies how the Avatar will be able to communicate within the virtual environment ( 104 ).
- Each Avatar may have a pre-defined audio dispersion envelope that is a characteristic of all Avatars within the virtual environment, or the virtual environment servers may define custom audio dispersion envelopes for each user.
- the step of defining audio dispersion envelopes may be satisfied by specifying that the Avatar is able to communicate with other Avatars that are located a greater distance in front of the Avatar than other Avatars located in other directions relative to the Avatar.
- The virtual environment server will determine whether Avatar B is within the audio dispersion envelope for Avatar A ( 106 ). This may be implemented, for example, by looking to see whether Avatar A is facing Avatar B, and then determining how far Avatar B is from Avatar A in the virtual environment. If Avatar B is within the audio dispersion envelope of Avatar A, the virtual environment server will enable audio from Avatar B to be included in the audio stream transmitted to the user associated with Avatar A.
- If Avatar B is not within the audio dispersion envelope of Avatar A, the user may be provided with an opportunity to control the shape of the audio dispersion envelope, such as by enabling the user to shout toward Avatar B.
- user A may manipulate their user interface to cause Avatar A to shout toward Avatar B ( 110 ). If user A properly signals via their user interface that he would like to shout toward Avatar B, the virtual environment server will enlarge the audio dispersion envelope for the Avatar in the virtual environment in the direction of the shout ( 112 ).
- the virtual environment server will then similarly determine whether Avatar B is within the enlarged audio dispersion envelope for Avatar A ( 114 ). If so, the virtual environment server will enable audio to be transmitted between the users associated with the Avatars ( 116 ). If not, the two Avatars will need to move closer toward each other in the virtual environment to enable audio to be transmitted between the users associated with Avatars A and B ( 118 ).
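- The decision flow of FIG. 8 condenses to a short routine. The sketch below maps each branch to its reference numeral; the server methods are hypothetical names standing in for the behavior described above:

```python
def connect_audio(server, user_a, avatar_a, avatar_b) -> bool:
    """Decide whether audio flows from Avatar B to user A, per FIG. 8."""
    envelope = server.dispersion_envelope(avatar_a)                    # (104)
    if envelope.contains(avatar_b.position):                           # (106)
        return True                                                    # include B's audio
    if server.shout_requested(user_a, toward=avatar_b):                # (110)
        enlarged = server.enlarge_envelope(avatar_a, toward=avatar_b)  # (112)
        if enlarged.contains(avatar_b.position):                       # (114)
            return True                                                # (116)
    return False  # (118): the Avatars must move closer together
```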
- FIG. 9 shows a system that may be used to implement realistic audio within a virtual environment according to an embodiment of the invention.
- users 12 are provided with access to a virtual environment 14 that is implemented using one or more virtual environment servers 18 .
- Users 12 A, 12 B are represented by avatars 34 A, 34 B within the virtual environment 14 .
- An audio position and direction detection subsystem 64 will determine when audio should be transmitted between the users associated with the Avatars. Audio will be mixed by mixing function 78 to provide individually determined audio streams to each of the Avatars.
- the mixing function is implemented at the server.
- the invention is not limited in this manner as the mixing function may instead be implemented at the virtual environment client.
- audio from multiple users would be transmitted to the user's virtual environment client and the virtual environment client would select particular portions of the audio to be presented to the user.
- Implementing the mixing function at the server reduces the amount of audio that must be transmitted to each user, but increases the load on the server.
- Implementing the mixing function at the client distributes the load for creating individual mixed audio streams and, hence, is easier on the server. However, it requires multiple audio streams to be transmitted to each of the clients.
- the particular solution may be selected based on available bandwidth and processing power.
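- The bandwidth side of that trade-off can be made concrete with a back-of-the-envelope count (a simplification that ignores compression and assumes a fixed number of audible neighbors per listener):

```python
def downstream_streams(num_users: int, audible_neighbors: int, mix_at_server: bool) -> int:
    """Total audio streams sent from the server toward clients."""
    if mix_at_server:
        return num_users                      # one pre-mixed stream per user
    return num_users * audible_neighbors      # each client mixes raw neighbor streams

print(downstream_streams(100, 5, mix_at_server=True))   # 100 streams
print(downstream_streams(100, 5, mix_at_server=False))  # 500 streams
```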
- The functions described herein may be implemented in hardware such as an ASIC (Application Specific Integrated Circuit) or an FPGA (Field Programmable Gate Array), or in programmable logic.
- Programmable logic can be fixed temporarily or permanently in a tangible medium such as a read-only memory chip, a computer memory, a disk, or other storage medium. All such embodiments are intended to fall within the scope of the present invention.
Landscapes
- Engineering & Computer Science (AREA)
- Signal Processing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Multimedia (AREA)
- Information Transfer Between Computers (AREA)
- Telephonic Communication Services (AREA)
- Processing Or Creating Images (AREA)
- Computer And Data Communications (AREA)
Abstract
Description
- This application claims priority to U.S. Provisional Patent Application No. 61/037,447, filed Mar. 18, 2008, entitled “Method and Apparatus For Providing 3 Dimensional Audio on a Conference Bridge”, the content of which is hereby incorporated herein by reference.
- 1. Field of the Invention
- The present invention relates to virtual environments and, more particularly, to a method and apparatus for implementing realistic audio communications in a three dimensional computer-generated virtual environment.
- 2. Description of the Related Art
- Virtual environments simulate actual or fantasy 3-D environments and allow for many participants to interact with each other and with constructs in the environment via remotely-located clients. One context in which a virtual environment may be used is in connection with gaming, although other uses for virtual environments are also being developed.
- In a virtual environment, an actual or fantasy universe is simulated within a computer processor/memory. Multiple people may participate in the virtual environment through a computer network, such as a local area network or a wide area network such as the Internet. Each player selects an “Avatar” which is often a three-dimensional representation of a person or other object to represent them in the virtual environment. Participants send commands to a virtual environment server that controls the virtual environment to cause their Avatars to move within the virtual environment. In this way, the participants are able to cause their Avatars to interact with other Avatars and other objects in the virtual environment.
- A virtual environment often takes the form of a virtual-reality three dimensional map, and may include rooms, outdoor areas, and other representations of environments commonly experienced in the physical world. The virtual environment may also include multiple objects, people, animals, robots, Avatars, robot Avatars, spatial elements, and objects/environments that allow Avatars to participate in activities. Participants establish a presence in the virtual environment via a virtual environment client on their computer, through which they can create an Avatar and then cause the Avatar to “live” within the virtual environment.
- As the Avatar moves within the virtual environment, the view experienced by the Avatar changes according to where the Avatar is located within the virtual environment. The views may be displayed to the participant so that the participant controlling the Avatar may see what the Avatar is seeing. Additionally, many virtual environments enable the participant to toggle to a different point of view, such as from a vantage point outside of the Avatar, to see where the Avatar is in the virtual environment.
- The participant may control the Avatar using conventional input devices, such as a computer mouse and keyboard. The inputs are sent to the virtual environment client, which forwards the commands to one or more virtual environment servers that are controlling the virtual environment and providing a representation of the virtual environment to the participant via a display associated with the participant's computer.
- Depending on how the virtual environment is set up, an Avatar may be able to observe the environment and optionally also interact with other Avatars, modeled objects within the virtual environment, robotic objects within the virtual environment, or the environment itself (i.e. an Avatar may be allowed to go for a swim in a lake or river in the virtual environment). In these cases, client control input may be permitted to cause changes in the modeled objects, such as moving other objects, opening doors, and so forth, which optionally may then be experienced by other Avatars within the virtual environment.
- “Interaction” by an Avatar with another modeled object in a virtual environment means that the virtual environment server simulates an interaction in the modeled environment, in response to receiving client control input for the Avatar. Interactions by one Avatar with any other Avatar, object, the environment or automated or robotic Avatars may, in some cases, result in outcomes that may affect or otherwise be observed or experienced by other Avatars, objects, the environment, and automated or robotic Avatars within the virtual environment.
- A virtual environment may be created for the user, but more commonly the virtual environment may be persistent, in which it continues to exist and be supported by the virtual environment server even when the user is not interacting with the virtual environment. Thus, where there is more than one user of a virtual environment, the environment may continue to evolve when a user is not logged in, such that the next time the user enters the virtual environment it may be changed from what it looked like the previous time.
- Virtual environments are commonly used in on-line gaming, such as for example in online role playing games where users assume the role of a character and take control over most of that character's actions. In addition to games, virtual environments are also being used to simulate real life environments to provide an interface for users that will enable on-line education, training, shopping, and other types of interactions between groups of users and between businesses and users.
- As Avatars encounter other Avatars within the virtual environment, the participants represented by the Avatars may elect to communicate with each other. For example, the participants may communicate with each other by typing messages to each other or audio may be transmitted between the users to enable the participants to talk with each other.
- Although great advances have happened in connection with visual rendering of Avatars and animation, the audio implementation has lagged and often the audio characteristics of a virtual environment are not very realistic. Accordingly, it would be advantageous to be able to provide a method and apparatus for implementing more realistic audio communications in a three dimensional computer-generated virtual environment.
- A method and apparatus for implementing realistic audio communications in a three dimensional computer-generated virtual environment is provided. In one embodiment, a participant in a three dimensional computer-generated virtual environment is able to control a dispersion pattern of his Avatar's voice such that the Avatar's voice may be directionally enhanced using simple controls. In one embodiment, an audio dispersion envelope is designed to extend further in a direction in front of the Avatar and in a smaller direction to the sides and rear of the Avatar. The shape of the audio dispersion envelope may be affected by other aspects of the virtual environment such as ceilings, floors, walls and other logical barriers. The audio dispersion envelope may be static or optionally controllable by the participant to enable the Avatar's voice to be extended outward in front of the Avatar. This enables the Avatar to “shout” in the virtual environment such that other Avatars normally outside of hearing range of the Avatar the User can still hear the user. Similarly, the volume level of the audio may be reduced to allow the Avatars to whisper or adjusted based on the relative position of the Avatars and directions in which the Avatars are facing. Individual audio streams may be mixed for each user of the virtual environment depending on the position and orientation of the user's Avatar in the virtual environment, the shape of the user's dispersion envelope, and which other Avatars are located within the user's user dispersion envelope.
- Aspects of the present invention are pointed out with particularity in the appended claims. The present invention is illustrated by way of example in the following drawings in which like references indicate similar elements. The following drawings disclose various embodiments of the present invention for purposes of illustration only and are not intended to limit the scope of the invention. For purposes of clarity, not every component may be labeled in every figure. In the figures:
-
FIG. 1 is a functional block diagram of a portion of an example system enabling users to have access to three dimensional computer-generated virtual environment; -
FIG. 2 is a two dimensional view of users in an example three dimensional computer-generated virtual environment and showing a normal audio dispersion envelope of the users in the three dimensional computer generated virtual environment; -
FIG. 3 is a two dimensional view of users in an example three dimensional computer-generated virtual environment and showing a directional audio dispersion envelope of the users in the three dimensional computer generated virtual environment; -
FIGS. 4-5 show two examples of user controllable audio dispersion envelopes to enable the user to project his voice in the three dimensional computer-generated virtual environment according to an embodiment of the invention; -
FIGS. 6 and 7 show interaction of the audio dispersion envelope with obstacles in an example three dimensional computer-generated virtual environment according to an embodiment of the invention; -
FIG. 8 is a flow chart showing a process of implementing realistic audio communications in a three dimensional computer-generated virtual environment; -
FIG. 9 is a functional block diagram showing components of the system ofFIG. 1 interacting to enable audio to be transmitted between users of the three dimensional computer-generated virtual environment according to an embodiment of the invention; -
FIG. 10 is a diagram of three dimensional coordinate space showing dispersion envelopes in three dimensional space; and -
FIG. 11 is a three dimensional view of a virtual environment. - The following detailed description sets forth numerous specific details to provide a thorough understanding of the invention. However, those skilled in the art will appreciate that the invention may be practiced without these specific details. In other instances, well-known methods, procedures, components, protocols, algorithms, and circuits have not been described in detail so as not to obscure the invention.
-
FIG. 1 shows a portion of anexample system 10 showing the interaction between a plurality ofusers 12 and one or morevirtual environments 14. A user may access thevirtual environment 14 from theircomputer 22 over apacket network 16 or other common communication infrastructure. Thevirtual environment 14 is implemented by one or morevirtual environment servers 18. Audio may be transmitted between theusers 12 by one ormore communication servers 20. - The virtual environment may be implemented as using one or more instances, each of which may be hosted by one or more virtual environment servers. Where there are multiple instances, the Avatars in one instance are generally unaware of Avatars in the other instance, however a user may have a presence in multiple worlds simultaneously through several virtual environment clients. Conventionally, each instance of the virtual environment may be referred to as a separate World. In the following description, it will be assumed that the Avatars are instantiated in the same world and hence can see and communicate with each other. A world may be implemented by one
virtual environment server 18, or may be implemented by multiple virtual environment servers. The virtual environment is designed as a visual representation of a real-world environment that enables humans to interact with each other and communicate with each other in near-real time. - Generally, a virtual environment will have its own distinct three dimensional coordinate space. Avatars representing users may move within the three dimensional coordinate space and interact with objects and other Avatars within the three dimensional coordinate space. The virtual environment servers maintain the virtual environment and generate a visual presentation for each user based on the location of the user's Avatar within the virtual environment. The view may also depend on the direction in which the Avatar is facing and the selected viewing option, such as whether the user has opted to have the view appear as if the user was looking through the eyes of the Avatar, or whether the user has opted to pan back from the Avatar to see a three dimensional view of where the Avatar is located and what the Avatar is doing in the three dimensional computer-generated virtual environment.
- Each
user 12 has acomputer 22 that may be used to access the three-dimensional computer-generated virtual environment. Thecomputer 22 will run avirtual environment client 24 and a user interface 26 to the virtual environment. The user interface 26 may be part of thevirtual environment client 24 or implemented as a separate process. A separate virtual environment client may be required for each virtual environment that the user would like to access, although a particular virtual environment client may be designed to interface with multiple virtual environment servers. Acommunication client 28 is provided to enable the user to communicate with other users who are also participating in the three dimensional computer-generated virtual environment. The communication client may be part of thevirtual environment client 24, the user interface 26, or may be a separate process running on thecomputer 22. - The user may see a representation of a portion of the three dimensional computer-generated virtual environment on a display/
audio 30 and input commands via auser input device 32 such as a mouse, touch pad, or keyboard. The display/audio 30 may be used by the user to transmit/receive audio information while engaged in the virtual environment. For example, the display/audio 30 may be a display screen having a speaker and a microphone. The user interface generates the output shown on the display under the control of the virtual environment client, and receives the input from the user and passes the user input to the virtual environment client. The virtual environment client enables the user'sAvatar 34 or other object under the control of the user to execute the desired action in the virtual environment. In this way the user may control a portion of the virtual environment, such as the person's Avatar or other objects in contact with the Avatar, to change the virtual environment for the other users of the virtual environment. - Typically, an Avatar is a three dimensional rendering of a person or other creature that represents the user in the virtual environment. The user selects the way that their Avatar looks when creating a profile for the virtual environment and then can control the movement of the Avatar in the virtual environment such as by causing the Avatar to walk, run, wave, talk, or make other similar movements. Thus, the
block 34 representing the Avatar in the virtual environment 14 is not intended to show how an Avatar would be expected to appear in a virtual environment. Rather, the actual appearance of the Avatar is immaterial, since each user's Avatar may be expected to look somewhat different, customized according to the preferences of that user. Since the appearance of the Avatars in the three dimensional computer-generated virtual environment is not important to the concepts discussed herein, Avatars have generally been represented herein using simple geometric shapes, such as cubes and diamonds, rather than complex three dimensional shapes such as people and animals. -
FIG. 2 shows a portion of an example three dimensional computer-generated virtual environment, showing normal audio dispersion envelopes associated with Avatars in the three dimensional computer-generated virtual environment. FIG. 2 has been shown in two dimensions for ease of illustration and to more clearly show how audio dispersion occurs; audio dispersion occurs in the same manner in the vertical direction. To simplify the explanation, most of the description uses two-dimensional figures to explain how audio dispersion may be affected in the virtual environment. It should be remembered that many virtual environments are three dimensional and, hence, the audio dispersion envelopes will extend in all three dimensions. Extension of the two dimensional audio dispersion envelopes to three dimensions is straightforward, and several examples of this are provided in connection with FIGS. 10 and 11. Thus, although the description may focus on two dimensions, the invention is not limited in this manner, as the same principles may be used to control dispersion in the vertical (Z coordinate) direction as in the X and Y coordinate directions. - For example,
FIG. 10 shows another example in which a first Avatar is located at XYZ coordinates 2, 2, 25, and a second Avatar is located at XYZ coordinates 15, 15, 4. Viewing only the X and Y coordinates, the two Avatars are approximately 18.4 units apart. If the audio dispersion envelope is 20 units, then looking only at the X and Y coordinates the two Avatars should be able to talk with each other. However, when the Z coordinate is factored in, the separation between the two Avatars in three dimensional space is approximately 27.9 units, well beyond the 20 unit reach of the dispersion envelope. Accordingly, in a three dimensional virtual environment it may often be necessary to consider the Z coordinate separation of the two Avatars as well as their X and Y coordinate separation when determining whether the two Avatars can talk with each other. A mixed audio stream will be created for each user based on the position of the user's Avatar and the dispersion envelope for the Avatar, so that audio from each of the users represented by an Avatar within the user's dispersion envelope can be mixed and provided to the user. In this way, individual audio streams may be created for each user so that the user can hear audio from other users proximate their Avatar in the virtual environment. The volume of a particular user will be adjusted during the mixing process so that audio from close Avatars is louder than audio from Avatars that are farther away.
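- The in-range test just described reduces to a simple distance comparison in two or three dimensions. The following Python sketch illustrates it using the coordinates of this example; the Avatar class, the function names, and the fixed 20 unit radius are illustrative assumptions rather than the patent's implementation:

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Avatar:
    x: float
    y: float
    z: float

def separation(a: Avatar, b: Avatar, use_z: bool = True) -> float:
    """Euclidean separation of two Avatars, optionally ignoring the Z axis."""
    dz = (a.z - b.z) if use_z else 0.0
    return math.sqrt((a.x - b.x) ** 2 + (a.y - b.y) ** 2 + dz ** 2)

def in_envelope(a: Avatar, b: Avatar, radius: float = 20.0) -> bool:
    """True if b falls inside a's spherical audio dispersion envelope."""
    return separation(a, b) <= radius

a = Avatar(2, 2, 25)
b = Avatar(15, 15, 4)
print(round(separation(a, b, use_z=False), 2))  # 18.38 -- within 20 units in X/Y
print(round(separation(a, b), 2))               # 27.91 -- out of range in 3D
print(in_envelope(a, b))                        # False once Z is considered
```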
- As shown in FIG. 2, in this example it has been assumed that five Avatars 34 are present in the viewable area of virtual environment 14. The Avatars have been labeled A through E for purposes of discussion. Typically, each Avatar would be controlled by a separate user, although there may be instances where a user could control more than one Avatar. In the figure, the arrow shows the direction that the Avatar is facing in the virtual environment. - In the example shown in
FIG. 2, users associated with Avatars are generally allowed to talk with each other as long as they are within a particular distance of each other. For example, Avatar A is not within range of any other Avatar and, accordingly, the user associated with Avatar A is not able to talk to the users associated with any of the other Avatars. Similarly, Avatars D and E are not sufficiently close to talk with each other, even though they are looking at each other. Avatars B and C, however, can hear each other, since they are sufficiently close together. In the example shown in FIG. 2, the audio dispersion envelopes for each of the Avatars are spherical (circular in two dimensional space) such that an Avatar can communicate with any Avatar that is within a particular radial distance. When the user associated with an Avatar speaks, the other Avatars within the radial distance will hear the user. If the Avatars are too far apart, they are not able to "talk" within the virtual environment and, hence, audio packets/data will not be transmitted between the users. - If users are closer together, they will be able to hear each other more clearly; as users get farther apart, the volume of the audio tapers off until, at the edge of the dispersion envelope, the contribution of a user's audio is reduced to zero. In one embodiment, the volume of a user's contribution is determined on a linear basis. Thus, looking at the example shown in
FIG. 3, user A is located within a dispersion envelope. The volume is highest close to the user; farther away from user A within the dispersion envelope, the volume of the contribution tapers off on a linear basis until the edge of the dispersion envelope is reached. - Audio is mixed individually for each user of the virtual environment. The particular mix of audio will depend on which other users are within the dispersion envelope for the user, and on their location within the dispersion envelope. The location within the dispersion envelope affects the volume at which that user's audio will be presented to the user associated with the dispersion envelope. Since the audio is mixed individually for each user, a shared audio bridge is not required; rather, an audio stream may be created individually for each user based on which other users are proximate that user in the virtual environment.
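- The linear taper and per-listener mixing described above can be sketched as follows, reusing the Avatar class and separation helper from the previous sketch (the gain formula and names are assumptions for illustration, not the patent's specification):

```python
def linear_gain(dist: float, radius: float = 20.0) -> float:
    """Gain 1.0 at the speaker, tapering linearly to 0.0 at the envelope edge."""
    return max(0.0, 1.0 - dist / radius)

def mix_for_listener(listener: Avatar, frames: dict) -> list:
    """Build one listener's individual audio stream.

    frames maps each speaking Avatar to its list of samples for the current
    frame; speakers outside the envelope contribute nothing to the mix.
    """
    n = max(len(samples) for samples in frames.values())
    mixed = [0.0] * n
    for speaker, samples in frames.items():
        g = linear_gain(separation(listener, speaker))
        if g > 0.0:
            for i, s in enumerate(samples):
                mixed[i] += g * s
    return mixed
```

Because the mix is computed per listener, two listeners standing in different places receive different streams from the same set of speakers, which is the effect the embodiment describes.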
-
FIG. 3 shows an embodiment of the invention in which the audio dispersion envelope may be shaped to extend further in front of the Avatar and a lesser distance behind and to the left and right sides of the Avatar. This enables the Avatar to talk in a particular direction within the virtual environment while minimizing the amount of noise the Avatar generates for other users. Thus, the ability of a user to talk with other users depends, in part, on the orientation of the user's Avatar within the virtual environment. This mirrors real life, where it is easier to hear someone who is facing you than to hear the same person talking at the same volume but facing a different direction. As a result, the audio dispersion envelope may be controlled by the user using simple controls, such as the controls used to control the direction in which the Avatar is facing. Optionally, as discussed below, other controls may also be used to control the volume or reach of the audio dispersion envelope to further enhance the user's audio control in the virtual environment.
- In the example shown in FIG. 3, the Avatars are in the same positions as in the example shown in FIG. 2. However, the audio dispersion envelopes have been adjusted to extend further in the direction that the Avatar is facing and to a lesser extent to the sides and rear of the Avatar. This affects which Avatars are able to communicate with each other. For example, as shown in FIG. 3, Avatars B and C are not able to talk with each other, even though they are standing relatively close to each other, since they are not facing each other. Similarly, Avatars D and E are able to talk with each other since they are facing each other, even though they are somewhat farther apart than Avatars B and C. Avatar A is still alone and, hence, cannot talk to any other Avatars.
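- One plausible way to realize such a forward-weighted envelope is to scale the audible radius by how well the speaker's facing direction aligns with the direction to the listener. This is only a sketch; the blend formula and the front and rear distances are assumptions, not the patent's specification:

```python
import math

def directional_radius(facing_rad: float, speaker: Avatar, listener: Avatar,
                       front: float = 20.0, rear: float = 8.0) -> float:
    """Audible radius toward the listener: `front` units straight ahead,
    blending down to `rear` units directly behind the speaker."""
    bearing = math.atan2(listener.y - speaker.y, listener.x - speaker.x)
    alignment = math.cos(bearing - facing_rad)  # 1.0 ahead, -1.0 behind
    return rear + (front - rear) * (alignment + 1.0) / 2.0

def hears_directionally(facing_rad: float, speaker: Avatar, listener: Avatar) -> bool:
    """True if the listener falls inside the speaker's forward-weighted lobe."""
    dist = math.hypot(listener.x - speaker.x, listener.y - speaker.y)
    return dist <= directional_radius(facing_rad, speaker, listener)
```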
- In the embodiment shown in FIG. 3, the Avatars must be within each other's audio dispersion envelopes for audio to be transmitted between them. An alternative is to enable audio to be transmitted where at least one of the Avatars is within the dispersion envelope of the other Avatar. For example, if Avatar B is within the dispersion envelope of Avatar C, but Avatar C is not within the dispersion envelope of Avatar B, audio may nonetheless be transmitted between the users associated with Avatars C and B. Optionally, audio may be transmitted in one direction only, such that the user associated with Avatar C can hear the user associated with Avatar B, but the user associated with Avatar B cannot hear the user associated with Avatar C. - The shape of the audio dispersion envelope may depend on the preferences of the user as well as the preferences of the virtual environment provider. For example, the virtual environment provider may provide the user with an option to select an audio dispersion shape to be associated with their Avatar when they enter the virtual environment. The audio dispersion shape may be persistent until adjusted by the user. For example, the user may select a voice for their Avatar such that some users will have robust, loud voices while others will have meeker, quieter voices. Alternatively, the virtual environment provider may provide particular audio dispersion profiles for different types of Avatars; for example, police Avatars may be able to be heard at a greater distance than other types of Avatars. Additionally, the shape of the audio dispersion envelope may depend on other environmental factors, such as the location of the Avatar within the virtual environment and the presence or absence of ambient noise in the virtual environment.
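- The strict and relaxed audibility rules described at the start of this passage differ only in how two envelope tests are combined, as the following sketch shows (building on the directional test above; fa and fb are the facing angles of the two Avatars, and all names are assumptions):

```python
def mutual_audio(fa: float, a: Avatar, fb: float, b: Avatar) -> bool:
    # Strict rule of FIG. 3: each Avatar must sit inside the other's envelope.
    return (hears_directionally(fa, a, b) and
            hears_directionally(fb, b, a))

def relaxed_audio(fa: float, a: Avatar, fb: float, b: Avatar) -> bool:
    # Alternative rule: audio flows if at least one envelope covers the other
    # Avatar; a further variant would make the link one-directional.
    return (hears_directionally(fa, a, b) or
            hears_directionally(fb, b, a))
```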
-
FIG. 4 shows an embodiment in which the Avatar is provided with an option to increase the volume of their voice so that their voice may reach farther within the virtual environment. This allows the Avatar to "shout" within the virtual environment. As shown in FIG. 4, assume that the user associated with Avatar A would like to talk to the user associated with Avatar B. User A may face Avatar B in the virtual environment by causing Avatar A to turn toward Avatar B. As discussed above in connection with FIG. 3, where the audio dispersion profile is designed to extend further in front of the Avatar, this action will cause the main lobe of the audio dispersion profile to extend toward Avatar B. However, as shown in FIG. 4, the main lobe of the audio dispersion profile may be insufficiently large to encompass Avatar B if Avatars A and B are too far apart in the virtual environment. - According to an embodiment of the invention, user A may "shout" toward Avatar B to cause the audio dispersion profile to extend further in the direction of B. The user's intention to shout at B may be indicated through the manipulation of simple controls. For example, on a wheeled mouse, the mouse wheel may serve as a shout control that the user uses to extend the audio dispersion profile of their Avatar. In this embodiment, the user may simply scroll the mouse wheel away by contacting the mouse wheel and pushing forward with their finger, the motion commonly used to scroll up in most computer user interfaces. Conversely, if the user no longer wants to shout, the user may pull back on the top of the mouse wheel in a motion similar to scrolling down.
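- A hedged sketch of how a wheel-driven shout control might scale the dispersion envelope follows; the event handling, step factor, and clamping bounds are invented for illustration, and a real client would hook its UI toolkit's wheel events:

```python
class ShoutControl:
    """Per-Avatar range multiplier driven by mouse wheel ticks."""

    def __init__(self, step: float = 1.25,
                 min_scale: float = 0.25, max_scale: float = 4.0):
        self.scale = 1.0            # 1.0 = normal speaking range
        self.step = step            # growth factor per wheel tick
        self.min_scale = min_scale  # values below 1.0 give the whisper behavior described later
        self.max_scale = max_scale  # hard cap on how far a shout can reach

    def on_wheel(self, ticks: int) -> float:
        """Scrolling up (positive ticks) extends the envelope; down shrinks it."""
        self.scale *= self.step ** ticks
        self.scale = min(max(self.scale, self.min_scale), self.max_scale)
        return self.scale

shout = ShoutControl()
shout.on_wheel(+2)  # push the wheel forward twice: envelope reaches ~1.56x farther
shout.on_wheel(-2)  # pull back: return to the normal speaking range
```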
- There are times when the user may want to communicate with every user within a given volume of the virtual environment. For example, a person may wish to make a presentation to a room full of Avatars. To do this, the user may cause their audio dispersion envelope to increase in all directions to fill the entire volume. This is shown in
FIGS. 4 and 5 as dashed line 5. The volume of the user's voice in this embodiment does not increase; rather, all users within the volume will be able to hear audio generated by the user. This feature will be referred to herein as "OmniVoice". When the user invokes OmniVoice, audio from the user is mixed into the audio stream presented to each of the other users within the volume so that those users can hear the user invoking OmniVoice. When the user invokes OmniVoice, the appearance of the user may be altered somewhat so that other people engaged in the virtual environment are aware of who has invoked the OmniVoice feature. - A user may use explicit controls to invoke OmniVoice or, preferably, OmniVoice may be invoked intrinsically based on the location of the Avatar within the virtual environment. For example, the user may walk up to a podium on a stage, and the user's presence on the stage may cause audio provided by the user to be included in the mixed audio stream of every other Avatar within a particular volume of the virtual environment.
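- The intrinsic trigger described above can be sketched as a region test: if a speaker stands inside a designated region (the stage), their audio is mixed for every listener in the room's volume regardless of distance. The region shapes, coordinates, and names here are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Box:
    """Axis-aligned region of the virtual environment."""
    x0: float
    y0: float
    z0: float
    x1: float
    y1: float
    z1: float

    def contains(self, a: Avatar) -> bool:
        return (self.x0 <= a.x <= self.x1 and
                self.y0 <= a.y <= self.y1 and
                self.z0 <= a.z <= self.z1)

STAGE = Box(0, 0, 0, 5, 5, 3)         # podium area that triggers OmniVoice
AUDITORIUM = Box(0, 0, 0, 50, 40, 3)  # volume whose occupants all hear the speaker

def audible(speaker: Avatar, listener: Avatar) -> bool:
    """OmniVoice bypasses the distance test; otherwise the normal envelope applies."""
    if STAGE.contains(speaker) and AUDITORIUM.contains(listener):
        return True  # heard at normal voice level throughout the volume
    return in_envelope(speaker, listener)
```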
- The mouse wheel may have multiple uses in the virtual environment, depending on the particular virtual environment and the other activities available to the Avatar. If this is the case, then a combination of inputs may be used to control the audio dispersion profile. For example, left clicking on the mouse combined with scrolling of the mouse wheel, or depressing a key on the keyboard along with scrolling of the mouse wheel, may be used to signal that the scrolling action is associated with the Avatar's voice rather than another action. Alternatively, the mouse wheel may not be used at all, and a keystroke or combination of keystrokes on the keyboard may be used to extend the audio dispersion envelope and to return it to normal. Thus, in addition to implementing directional audio dispersion envelopes based on the orientation of the Avatar within the three dimensional virtual environment, the user may also warp the Avatar's audio dispersion envelope in real time in the virtual environment.
- When the user elects to shout toward another user in the virtual environment, the facial expression of the Avatar may change, or another visual indication may be provided as to who is shouting. This enables the user to know that they are shouting and enables other users of the virtual environment to understand why the physics have changed. For example, the Avatar may cup their hands around their mouth to provide a visual clue that they are shouting in a particular direction. A larger extension of the Avatar's voice may also be indicated by displaying a ghost of the Avatar moved closer to the new center of the voice range, so that other users can determine who is yelling. Other visual indications may be provided as well.
- The user-controlled audio dispersion envelope warping may toggle, as shown in
FIG. 4, such that the user is either talking normally or shouting depending on the state of the toggle control. Alternatively, the user-controlled audio dispersion profile warping may be more gradual and have multiple discrete levels, as shown in FIG. 5. Specifically, as shown in FIG. 5, the user may increase the directionality and range of the projection of their voice in the three-dimensional computer-generated virtual environment so that the reach of their voice may extend different distances depending on the extent to which the user would like to shout. In FIG. 5, the user has been provided with four discrete levels, and then a fifth level for use in connection with OmniVoice. Other numbers of levels may be used as well. Additionally, the discretization may be blurred such that the control is closer to continuous, depending on the implementation. - In another embodiment, the selection of a person to shout to may be implemented when the user mouses over another Avatar and depresses a button, such as by left clicking or right clicking on the other Avatar. If the person is within normal talking distance of the other Avatar, audio from that other Avatar will be mixed into the audio stream presented to the user. Audio from other Avatars within listening distance will similarly be included in the mixed audio stream presented to the user. If the other Avatar is not within listening distance, i.e., is not within the normal audio dispersion envelope, the user may be provided with an option to shout to the other Avatar. In this embodiment the user would be provided with an instruction to double click on the other Avatar to shout to them. Many different ways of implementing the ability to shout are possible, depending on the preferences of the user interface designer.
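- The multi-level control of FIG. 5 can be sketched as a small lookup from level to range multiplier, with the top level standing in for OmniVoice (the multiplier values are invented for illustration):

```python
# Levels 1-4 scale the normal dispersion envelope; level 5 switches to OmniVoice.
SHOUT_LEVELS = {1: 1.0, 2: 1.5, 3: 2.25, 4: 3.5}

def envelope_scale(level: int):
    """Return a range multiplier for levels 1-4, or None to signal OmniVoice."""
    if level >= 5:
        return None  # caller falls back to the full-volume OmniVoice mixing
    return SHOUT_LEVELS.get(level, 1.0)
```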
- In one embodiment, the proximity of the Avatars may be used to adjust the volume of their audio when it is mixed into the audio stream for the user, so that the user is presented with an audio stream that more closely resembles normal, realistic audio. Other environmental factors may similarly affect the communication between Avatars in the virtual environment.
- Although the preceding description has focused on enabling the Avatar to increase the size of the dispersion envelope to shout in the virtual environment, the same controls may optionally be used to reduce the size of the dispersion envelope so that the user can whisper in the virtual environment. In this embodiment, the user may adjust their voice in the opposite direction to shrink the dispersion envelope, so that other users must be closer to the user's Avatar to communicate with the user. The mouse controls or other controls may be used in this manner to reduce the size of the audio dispersion envelope as desired.
-
FIGS. 6 and 7 show an embodiment in which the profile of the audio dispersion envelope is affected by obstacles in the virtual environment. Commonly, two Avatars in a virtual environment will be able to hear each other if they are within talking distance of each other, regardless of the other objects in the virtual environment. For example, assume that Avatar A is in one room and that Avatar B is in another room, as shown in FIG. 7. Normally, the virtual environment server would enable the Avatars to communicate with each other (as shown by the dashed line representing the dispersion envelope) since they are proximate each other in the virtual environment. FIG. 11 shows a similar situation where two Avatars are close to each other but are on different floors of the virtual environment. From the Avatars' perspective, they are unlikely to be able to see through the wall or floor/ceiling and, hence, enabling the Avatars to hear each other through an obstacle of this nature is unnatural. Allowing sound to pass through such obstacles also enables Avatars to listen in on conversations occurring between other Avatars within the virtual environment without being seen by those Avatars. - According to an embodiment of the invention, the audio dispersion profile may be adjusted to account for obstacles in the virtual environment. Obstacles may be thought of as casting shadows on the profile, reducing the reach of the profile in a particular direction. This prevents communication between Avatars where they would otherwise be able to communicate if not for the imposition of the obstacle. In
FIG. 6, for example, Avatar A is in a room and talking through a doorway. The jambs and walls on either side of the door serve as obstacles that partially obstruct the audio dispersion profile of the user. If the door is closed, this too may affect the dispersion envelope. Thus, Avatar B, who is standing next to the wall, will be unable to hear or talk to Avatar A, since Avatar B is in the shadow of the wall and outside of Avatar A's audio dispersion profile. Avatar C, by contrast, is in direct line of sight of Avatar A through the door and, accordingly, may communicate with Avatar A. Of course, Avatar B may communicate with Avatar C and, hence, may still hear Avatar C's side of the conversation between Avatars A and C. Thus, a particular Avatar may be only partially included in communications between other parties. To implement an embodiment of this nature, a discrete mixer is implemented per client, where essentially all of the described attributes and properties are calculated on a per-client basis. The mixer determines which audio from adjacent Avatars is able to be heard by the particular user and mixes the available audio streams accordingly. Implementing this on a per-client basis is advantageous because no two users are ever in exactly the same position at a particular point in time and, hence, the final output of each of the clients will necessarily be different.
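- The doorway example can be approximated per client with a simple line-of-sight test: if the straight path between two Avatars crosses an opaque wall segment, the speaker is excluded from that client's mix. A 2D sketch follows (the geometry helpers are assumptions; a production mixer would more likely use the engine's collision queries):

```python
def _ccw(ax, ay, bx, by, cx, cy):
    """True if points A, B, C make a counter-clockwise turn."""
    return (cy - ay) * (bx - ax) > (by - ay) * (cx - ax)

def segments_cross(p1, p2, p3, p4) -> bool:
    """True if 2D segment p1-p2 strictly crosses segment p3-p4."""
    return (_ccw(*p1, *p3, *p4) != _ccw(*p2, *p3, *p4) and
            _ccw(*p1, *p2, *p3) != _ccw(*p1, *p2, *p4))

def shadowed(speaker: Avatar, listener: Avatar, walls) -> bool:
    """A fully opaque wall between the two Avatars blocks their audio;
    walls is a list of ((x, y), (x, y)) endpoint pairs."""
    a = (speaker.x, speaker.y)
    b = (listener.x, listener.y)
    return any(segments_cross(a, b, w0, w1) for w0, w1 in walls)
```
-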
FIG. 11 shows two Avatars that are separated vertically because they are on separate floors. The floor may function in the same manner as the walls in FIGS. 6 and 7. Specifically, if the rooms on the separate floors are defined as separate volumes, sound from one floor will not enter the other floor, so that even though the two Avatars are very close to each other in the X and Y coordinate space, they are separated vertically such that the two Avatars cannot talk with each other. - The shadow objects may be wholly opaque to the transmission of sound or may simply be attenuating objects. For example, a concrete wall may attenuate sound 100%, a normal wall may attenuate 90% of sound while allowing some sound to pass through, and a curtain may attenuate sound only modestly, such as 10%. The level of attenuation may be specified when the object is placed in the virtual environment.
- Audio may be implemented using a communication server that is configured to mix audio individually for each user of the virtual environment. The communication server will receive audio from all of the users of the virtual environment and create an audio stream for a particular user by determining which of the other users have an Avatar within the user's dispersion envelope. To enable participants to be selected, a notion of directionality needs to be included in the selection process, such that the selection process does not simply look at the relative distance of the participants but also looks at the direction the participants are facing within the virtual environment. This may be done by associating a vector with each participant and determining whether the vector extends sufficiently close to the other Avatar to warrant inclusion of the audio from that user in the audio stream. Additionally, if shadows are to be included in the determination, the process may check whether the vector traverses any shadow objects in the virtual environment. If so, the extent of the shadow may be calculated to determine whether the audio should be included. Other ways of implementing the audio connection determination process may be used as well, and the invention is not limited to this particular example implementation.
- In another embodiment, whether audio is to be transmitted between two Avatars may be determined by integrating attenuation along a vector between the Avatars. In this embodiment, the normal "air" or empty space in the virtual environment may be assigned an attenuation factor, such as 5% per unit distance. Other objects within the environment may be assigned other attenuation factors depending on their intended material. Transmission of audio between Avatars, in this embodiment, may depend on the distance between the Avatars, and hence the amount of air the sound must pass through, and on the objects the vector passes through. The strength of the vector, and hence the amount of attenuation that can be accommodated while still enabling communication, may depend on the direction the Avatar is facing. Additionally, the user may temporarily increase the strength of the vector by causing the Avatar to shout.
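- A sketch of this attenuation-integration embodiment, combining the 5% per unit "air" loss mentioned above with per-object losses, follows; the multiplicative survival model, the threshold, and the shout boost are assumptions, and it reuses the separation and segments_cross helpers from the earlier sketches:

```python
def path_attenuation(speaker: Avatar, listener: Avatar, obstacles,
                     air_loss: float = 0.05) -> float:
    """Fraction of the signal surviving the trip: 'air' removes air_loss per
    unit of distance, and each obstacle crossed removes its own fraction.

    obstacles is a list of ((x, y), (x, y), loss) wall segments, e.g. loss
    0.9 for a normal wall or 0.1 for a curtain.
    """
    surviving = (1.0 - air_loss) ** separation(speaker, listener)
    a = (speaker.x, speaker.y)
    b = (listener.x, listener.y)
    for w0, w1, loss in obstacles:
        if segments_cross(a, b, w0, w1):
            surviving *= 1.0 - loss
    return surviving

def can_transmit(speaker: Avatar, listener: Avatar, obstacles,
                 threshold: float = 0.1, shout_boost: float = 1.0) -> bool:
    """Audio flows when enough signal survives; shouting raises the budget."""
    return path_attenuation(speaker, listener, obstacles) * shout_boost >= threshold
```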
- In the preceding description, it was assumed that any user could elect to shout within the virtual environment. Optionally, that privilege may be reserved for Avatars possessing particular items within the virtual environment. For example, the Avatar may need to find or purchase a particular item, such as a virtual bullhorn, to be able to shout. Other embodiments are possible as well.
-
FIG. 8 shows a flow chart of a process, portions of which may be used by one or more entities, to determine whether audio should be transmitted between particular participants in a three dimensional computer-generated virtual environment. In the process shown in FIG. 8, it will be assumed that the virtual environment server(s) have rendered the virtual environment (100) so that it is available for use by participants. Thus, the virtual environment servers will enable user A to enter the virtual environment and will render Avatar A for user A within the virtual environment (102). Similarly, the virtual environment servers will enable user B to enter the virtual environment and will render Avatar B for user B within the virtual environment (102′). Other users may have Avatars within the virtual environment as well. - The virtual environment servers will also define an audio dispersion envelope for Avatar A, which specifies how the Avatar will be able to communicate within the virtual environment (104). Each Avatar may have a pre-defined audio dispersion envelope that is a characteristic of all Avatars within the virtual environment, or the virtual environment servers may define custom audio dispersion envelopes for each user. Thus, the step of defining audio dispersion envelopes may be satisfied by specifying that the Avatar is able to communicate with other Avatars located a greater distance in front of the Avatar than with Avatars located in other directions relative to the Avatar.
- Automatically, or upon initiation by the user controlling Avatar A, the virtual environment server will determine whether Avatar B is within the audio dispersion envelope for Avatar A (106). This may be implemented, for example, by looking to see whether Avatar A is facing Avatar B, and then determining how far Avatar B is from Avatar A in the virtual environment. If Avatar B is within the audio dispersion envelope of Avatar A, the virtual environment server will enable audio from Avatar B to be included in the audio stream transmitted to the user associated with Avatar A.
- If Avatar B is not within the audio dispersion envelope of Avatar A, the user may be provided with an opportunity to control the shape of the audio dispersion envelope, such as by enabling the user to shout toward Avatar B. In particular, user A may manipulate their user interface to cause Avatar A to shout toward Avatar B (110). If user A properly signals via their user interface that they would like to shout toward Avatar B, the virtual environment server will enlarge the audio dispersion envelope for the Avatar in the virtual environment in the direction of the shout (112).
- The virtual environment server will then similarly determine whether Avatar B is within the enlarged audio dispersion envelope for Avatar A (114). If so, the virtual environment server will enable audio to be transmitted between the users associated with the Avatars (116). If not, the two Avatars will need to move closer together in the virtual environment to enable audio to be transmitted between the users associated with Avatars A and B (118).
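- The decision flow of FIG. 8 condenses into a few lines; in this sketch the radii and the shape of the shout enlargement are assumptions, with the figure's step numbers noted in comments:

```python
def connect_audio(a: Avatar, b: Avatar, user_requested_shout: bool,
                  normal_radius: float = 20.0, shout_radius: float = 40.0) -> bool:
    """Condensed decision flow of FIG. 8 for Avatars A and B."""
    # (106) Is Avatar B within Avatar A's normal audio dispersion envelope?
    if separation(a, b) <= normal_radius:
        return True  # audio from B is mixed into A's stream
    # (110) User A signals a shout toward B; (112) the envelope is enlarged.
    if user_requested_shout and separation(a, b) <= shout_radius:
        return True  # (114)/(116) B is inside the enlarged envelope
    return False     # (118) the Avatars must move closer together
```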
-
FIG. 9 shows a system that may be used to implement realistic audio within a virtual environment according to an embodiment of the invention. As shown in FIG. 9, users 12 are provided with access to a virtual environment 14 that is implemented using one or more virtual environment servers 18. - Users 12A, 12B are represented by avatars 34A, 34B within the
virtual environment 14. When the users are proximate each other and facing each other, an audio position and direction detection subsystem 64 will determine that audio should be transmitted between the users associated with the Avatars. Audio will be mixed by mixing function 78 to provide individually determined audio streams to each of the Avatars. - In the embodiment shown in
FIG. 9, the mixing function is implemented at the server. The invention is not limited in this manner, as the mixing function may instead be implemented at the virtual environment client. In this alternative embodiment, audio from multiple users would be transmitted to the user's virtual environment client, and the virtual environment client would select particular portions of the audio to be presented to the user. Implementing the mixing function at the server reduces the amount of audio that must be transmitted to each user but increases the load on the server. Implementing the mixing function at the client distributes the load for creating individual mixed audio streams and, hence, is easier on the server; however, it requires multiple audio streams to be transmitted to each of the clients. The particular solution may be selected based on available bandwidth and processing power. - The functions described above may be implemented as one or more sets of program instructions that are stored in a computer readable memory and executed on one or more processors within one or more computers. However, it will be apparent to a skilled artisan that all logic described herein can be embodied using discrete components, integrated circuitry such as an Application Specific Integrated Circuit (ASIC), programmable logic used in conjunction with a programmable logic device such as a Field Programmable Gate Array (FPGA) or microprocessor, a state machine, or any other device, including any combination thereof. Programmable logic can be fixed temporarily or permanently in a tangible medium such as a read-only memory chip, a computer memory, a disk, or other storage medium. All such embodiments are intended to fall within the scope of the present invention.
- It should be understood that various changes and modifications of the embodiments shown in the drawings and described in the specification may be made within the spirit and scope of the present invention. Accordingly, it is intended that all matter contained in the above description and shown in the accompanying drawings be interpreted in an illustrative and not in a limiting sense. The invention is limited only as defined in the following claims and the equivalents thereto.
Claims (20)
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/344,542 US20090240359A1 (en) | 2008-03-18 | 2008-12-28 | Realistic Audio Communication in a Three Dimensional Computer-Generated Virtual Environment |
JP2011500904A JP5405557B2 (en) | 2008-03-18 | 2009-03-17 | Incorporating web content into a computer generated 3D virtual environment |
PCT/US2009/037424 WO2009117434A1 (en) | 2008-03-18 | 2009-03-17 | Realistic audio communications in a three dimensional computer-generated virtual environment |
CN200980109791.9A CN101978707B (en) | 2008-03-18 | 2009-03-17 | Realistic audio communication in a three dimensional computer-generated virtual environment |
EP09723406.6A EP2269389A4 (en) | 2008-03-18 | 2009-03-17 | Realistic audio communications in a three dimensional computer-generated virtual environment |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US3744708P | 2008-03-18 | 2008-03-18 | |
US12/344,542 US20090240359A1 (en) | 2008-03-18 | 2008-12-28 | Realistic Audio Communication in a Three Dimensional Computer-Generated Virtual Environment |
Publications (1)
Publication Number | Publication Date |
---|---|
US20090240359A1 true US20090240359A1 (en) | 2009-09-24 |
Family
ID=41089695
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/344,465 Expired - Fee Related US8605863B1 (en) | 2008-03-18 | 2008-12-27 | Method and apparatus for providing state indication on a telephone call |
US12/344,473 Active 2030-12-25 US9258337B2 (en) | 2008-03-18 | 2008-12-27 | Inclusion of web content in a virtual environment |
US12/344,550 Active 2029-01-21 US9392037B2 (en) | 2008-03-18 | 2008-12-28 | Method and apparatus for reconstructing a communication session |
US12/344,542 Abandoned US20090240359A1 (en) | 2008-03-18 | 2008-12-28 | Realistic Audio Communication in a Three Dimensional Computer-Generated Virtual Environment |
Family Applications Before (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/344,465 Expired - Fee Related US8605863B1 (en) | 2008-03-18 | 2008-12-27 | Method and apparatus for providing state indication on a telephone call |
US12/344,473 Active 2030-12-25 US9258337B2 (en) | 2008-03-18 | 2008-12-27 | Inclusion of web content in a virtual environment |
US12/344,550 Active 2029-01-21 US9392037B2 (en) | 2008-03-18 | 2008-12-28 | Method and apparatus for reconstructing a communication session |
Country Status (5)
Country | Link |
---|---|
US (4) | US8605863B1 (en) |
EP (3) | EP2269389A4 (en) |
JP (2) | JP5405557B2 (en) |
CN (2) | CN101978707B (en) |
WO (3) | WO2009117434A1 (en) |
Cited By (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090199275A1 (en) * | 2008-02-06 | 2009-08-06 | David Brock | Web-browser based three-dimensional media aggregation social networking application |
US20100146408A1 (en) * | 2008-12-10 | 2010-06-10 | International Business Machines Corporation | System and method to modify audio components in an online environment |
CN101996077A (en) * | 2010-09-08 | 2011-03-30 | 海皮士(北京)网络技术有限公司 | Method and system for embedding browser in three-dimensional client end |
US20130050199A1 (en) * | 2011-08-29 | 2013-02-28 | Avaya Inc. | Input, display and monitoring of contact center operation in a virtual reality environment |
US20150156228A1 (en) * | 2013-11-18 | 2015-06-04 | Ronald Langston | Social networking interacting system |
US9093259B1 (en) * | 2011-11-16 | 2015-07-28 | Disney Enterprises, Inc. | Collaborative musical interaction among avatars |
CN105007297A (en) * | 2015-05-27 | 2015-10-28 | 国家计算机网络与信息安全管理中心 | Interaction method and apparatus of social network |
US20160292966A1 (en) * | 2015-03-31 | 2016-10-06 | Gary Denham | System and method of providing a virtual shopping experience |
US20170123752A1 (en) * | 2015-09-16 | 2017-05-04 | Hashplay Inc. | Method and system for voice chat in virtual environment |
US20170151501A1 (en) * | 2014-06-25 | 2017-06-01 | Capcom Co., Ltd. | Game device, method and non-transitory computer-readable storage medium |
WO2018167363A1 (en) | 2017-03-17 | 2018-09-20 | Nokia Technologies Oy | Preferential rendering of multi-user free-viewpoint audio for improved coverage of interest |
EP3389260A4 (en) * | 2015-12-11 | 2018-11-21 | Sony Corporation | Information processing device, information processing method, and program |
US10154360B2 (en) | 2017-05-08 | 2018-12-11 | Microsoft Technology Licensing, Llc | Method and system of improving detection of environmental sounds in an immersive environment |
US20200311995A1 (en) * | 2019-03-28 | 2020-10-01 | Nanning Fugui Precision Industrial Co., Ltd. | Method and device for setting a multi-user virtual reality chat environment |
US10924566B2 (en) * | 2018-05-18 | 2021-02-16 | High Fidelity, Inc. | Use of corroboration to generate reputation scores within virtual reality environments |
US11025865B1 (en) * | 2011-06-17 | 2021-06-01 | Hrl Laboratories, Llc | Contextual visual dataspaces |
US11044570B2 (en) | 2017-03-20 | 2021-06-22 | Nokia Technologies Oy | Overlapping audio-object interactions |
US11074036B2 (en) | 2017-05-05 | 2021-07-27 | Nokia Technologies Oy | Metadata-free audio-object interactions |
US11096004B2 (en) * | 2017-01-23 | 2021-08-17 | Nokia Technologies Oy | Spatial audio rendering point extension |
US20220070232A1 (en) * | 2020-08-27 | 2022-03-03 | Varty Inc. | Virtual events-based social network |
US11282278B1 (en) * | 2021-04-02 | 2022-03-22 | At&T Intellectual Property I, L.P. | Providing adaptive asynchronous interactions in extended reality environments |
US11311803B2 (en) * | 2017-01-18 | 2022-04-26 | Sony Corporation | Information processing device, information processing method, and program |
US11395087B2 (en) | 2017-09-29 | 2022-07-19 | Nokia Technologies Oy | Level-based audio-object interactions |
US20220321375A1 (en) * | 2021-03-31 | 2022-10-06 | Snap Inc. | Mixing participant audio from multiple rooms within a virtual conferencing system |
US20230086248A1 (en) * | 2021-09-21 | 2023-03-23 | Meta Platforms Technologies, Llc | Visual navigation elements for artificial reality environments |
US11743430B2 (en) * | 2021-05-06 | 2023-08-29 | Katmai Tech Inc. | Providing awareness of who can hear audio in a virtual conference, and applications thereof |
US20240031530A1 (en) * | 2022-07-20 | 2024-01-25 | Katmai Tech Inc. | Using zones in a three-dimensional virtual environment for limiting audio and video |
US11947871B1 (en) | 2023-04-13 | 2024-04-02 | International Business Machines Corporation | Spatially aware virtual meetings |
Families Citing this family (100)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9357025B2 (en) | 2007-10-24 | 2016-05-31 | Social Communications Company | Virtual area based telephony communications |
US9009603B2 (en) | 2007-10-24 | 2015-04-14 | Social Communications Company | Web browser interface for spatial communication environments |
US8407605B2 (en) | 2009-04-03 | 2013-03-26 | Social Communications Company | Application sharing |
US7769806B2 (en) * | 2007-10-24 | 2010-08-03 | Social Communications Company | Automated real-time data stream switching in a shared virtual area communication environment |
US20090288007A1 (en) * | 2008-04-05 | 2009-11-19 | Social Communications Company | Spatial interfaces for realtime networked communications |
US8397168B2 (en) | 2008-04-05 | 2013-03-12 | Social Communications Company | Interfacing with a spatial virtual communication environment |
CN102084354A (en) | 2008-04-05 | 2011-06-01 | 社会传播公司 | Shared virtual area communication environment based apparatus and methods |
US8028021B2 (en) * | 2008-04-23 | 2011-09-27 | International Business Machines Corporation | Techniques for providing presentation material in an on-going virtual meeting |
EP2377089A2 (en) * | 2008-12-05 | 2011-10-19 | Social Communications Company | Managing interactions in a network communications environment |
KR101595751B1 (en) * | 2008-12-19 | 2016-02-22 | 삼성전자주식회사 | Method for storing data of video telephony call in mobile terminal and system thereof |
US8762861B2 (en) * | 2008-12-28 | 2014-06-24 | Avaya, Inc. | Method and apparatus for interrelating virtual environment and web content |
US9853922B2 (en) | 2012-02-24 | 2017-12-26 | Sococo, Inc. | Virtual area communications |
US9124662B2 (en) | 2009-01-15 | 2015-09-01 | Social Communications Company | Persistent network resource and virtual area associations for realtime collaboration |
US9288242B2 (en) | 2009-01-15 | 2016-03-15 | Social Communications Company | Bridging physical and virtual spaces |
US8284157B2 (en) * | 2010-01-15 | 2012-10-09 | Microsoft Corporation | Directed performance in motion capture system |
US20110225516A1 (en) * | 2010-03-10 | 2011-09-15 | Oddmobb, Inc. | Instantiating browser media into a virtual social venue |
US20110225498A1 (en) * | 2010-03-10 | 2011-09-15 | Oddmobb, Inc. | Personalized avatars in a virtual social venue |
US8667402B2 (en) * | 2010-03-10 | 2014-03-04 | Onset Vi, L.P. | Visualizing communications within a social setting |
US20110225518A1 (en) * | 2010-03-10 | 2011-09-15 | Oddmobb, Inc. | Friends toolbar for a virtual social venue |
US20110225515A1 (en) * | 2010-03-10 | 2011-09-15 | Oddmobb, Inc. | Sharing emotional reactions to social media |
US20110225517A1 (en) * | 2010-03-10 | 2011-09-15 | Oddmobb, Inc | Pointer tools for a virtual social venue |
US20110225519A1 (en) * | 2010-03-10 | 2011-09-15 | Oddmobb, Inc. | Social media platform for simulating a live experience |
US8572177B2 (en) * | 2010-03-10 | 2013-10-29 | Xmobb, Inc. | 3D social platform for sharing videos and webpages |
WO2011112296A1 (en) * | 2010-03-10 | 2011-09-15 | Oddmobb, Inc. | Incorporating media content into a 3d platform |
US20110225039A1 (en) * | 2010-03-10 | 2011-09-15 | Oddmobb, Inc. | Virtual social venue feeding multiple video streams |
US8527591B2 (en) | 2010-05-20 | 2013-09-03 | Actual Works, Inc. | Method and apparatus for the implementation of a real-time, sharable browsing experience on a guest device |
US9171087B2 (en) * | 2010-05-20 | 2015-10-27 | Samesurf, Inc. | Method and apparatus for the implementation of a real-time, sharable browsing experience on a host device |
EP2606466A4 (en) | 2010-08-16 | 2014-03-05 | Social Communications Co | Promoting communicant interactions in a network communications environment |
EP2614482A4 (en) | 2010-09-11 | 2014-05-14 | Social Communications Co | Relationship based presence indicating in virtual area contexts |
US8562441B1 (en) * | 2011-05-03 | 2013-10-22 | Zynga Inc. | Secure, parallel, and independent script execution |
TWI533198B (en) | 2011-07-22 | 2016-05-11 | 社交通訊公司 | Technology for communication between virtual areas and physical spaces |
US9530156B2 (en) | 2011-09-29 | 2016-12-27 | Amazon Technologies, Inc. | Customizable uniform control user interface for hosted service images |
US10147123B2 (en) | 2011-09-29 | 2018-12-04 | Amazon Technologies, Inc. | Electronic marketplace for hosted service images |
US9626700B1 (en) | 2011-09-29 | 2017-04-18 | Amazon Technologies, Inc. | Aggregation of operational data for merchandizing of network accessible services |
US8776043B1 (en) | 2011-09-29 | 2014-07-08 | Amazon Technologies, Inc. | Service image notifications |
US9392233B2 (en) * | 2011-12-08 | 2016-07-12 | International Business Machines Corporation | System and method for combined meeting recording |
US9503349B2 (en) * | 2012-01-26 | 2016-11-22 | Zoom International S.R.O. | Complex interaction recording |
US9679279B1 (en) | 2012-02-27 | 2017-06-13 | Amazon Technologies Inc | Managing transfer of hosted service licenses |
US9397987B1 (en) | 2012-03-23 | 2016-07-19 | Amazon Technologies, Inc. | Managing interaction with hosted services |
US9258371B1 (en) | 2012-03-23 | 2016-02-09 | Amazon Technologies, Inc. | Managing interaction with hosted services |
US9374448B2 (en) * | 2012-05-27 | 2016-06-21 | Qualcomm Incorporated | Systems and methods for managing concurrent audio messages |
US9727534B1 (en) | 2012-06-18 | 2017-08-08 | Bromium, Inc. | Synchronizing cookie data using a virtualized browser |
US10095662B1 (en) | 2012-06-18 | 2018-10-09 | Bromium, Inc. | Synchronizing resources of a virtualized browser |
US9734131B1 (en) | 2012-06-18 | 2017-08-15 | Bromium, Inc. | Synchronizing history data across a virtualized web browser |
US9384026B1 (en) * | 2012-06-18 | 2016-07-05 | Bromium, Inc. | Sharing and injecting cookies into virtual machines for retrieving requested web pages |
GB201216667D0 (en) * | 2012-07-10 | 2012-10-31 | Paz Hadar | Inside - on-line virtual reality immersion and integration system |
US8954853B2 (en) * | 2012-09-06 | 2015-02-10 | Robotic Research, Llc | Method and system for visualization enhancement for situational awareness |
DE102012019527A1 (en) * | 2012-10-05 | 2014-04-10 | Deutsche Telekom Ag | Method and system for improving and extending the functionality of a video call |
WO2014071076A1 (en) * | 2012-11-01 | 2014-05-08 | Gutman Ronald David | Conferencing for participants at different locations |
US9886160B2 (en) * | 2013-03-15 | 2018-02-06 | Google Llc | Managing audio at the tab level for user notification and control |
CN104105056A (en) * | 2013-04-12 | 2014-10-15 | 上海果壳电子有限公司 | Conversation method, conversation system and hand-held device |
US9553787B1 (en) | 2013-04-29 | 2017-01-24 | Amazon Technologies, Inc. | Monitoring hosted service usage |
US10334101B2 (en) * | 2013-05-22 | 2019-06-25 | Nice Ltd. | System and method for generating metadata for a recorded session |
US20180225885A1 (en) * | 2013-10-01 | 2018-08-09 | Aaron Scott Dishno | Zone-based three-dimensional (3d) browsing |
MX358612B (en) * | 2013-10-01 | 2018-08-27 | Dishno Aaron | Three-dimensional (3d) browsing. |
US10388297B2 (en) * | 2014-09-10 | 2019-08-20 | Harman International Industries, Incorporated | Techniques for generating multiple listening environments via auditory devices |
US10230801B2 (en) * | 2015-04-14 | 2019-03-12 | Avaya Inc. | Session reconstruction using proactive redirect |
KR20170055867A (en) * | 2015-11-12 | 2017-05-22 | 엘지전자 주식회사 | Mobile terminal and method for controlling the same |
JP6039042B1 (en) * | 2015-11-20 | 2016-12-07 | 株式会社スクウェア・エニックス | Drawing processing program, drawing processing device, drawing processing method, pronunciation processing program, pronunciation processing device, and pronunciation processing method |
US10516703B2 (en) * | 2016-03-07 | 2019-12-24 | Precision Biometrics, Inc. | Monitoring and controlling the status of a communication session |
US10560521B1 (en) * | 2016-09-12 | 2020-02-11 | Verint Americas Inc. | System and method for parsing and archiving multimedia data |
CN106528038B (en) * | 2016-10-25 | 2019-09-06 | 三星电子(中国)研发中心 | The method, system and device of audio frequency effect are adjusted under a kind of virtual reality scenario |
CN106598245B (en) * | 2016-12-16 | 2020-06-12 | 阿里巴巴(中国)有限公司 | Multi-user interaction control method and device based on virtual reality |
CN106886404A (en) * | 2017-01-17 | 2017-06-23 | 武汉卓尔云市集团有限公司 | A kind of 3D rendering devices of android |
CN107050851B (en) * | 2017-03-27 | 2020-12-08 | 熊庆生 | Sound enhancement method and system for game content effect |
CN110020235B (en) * | 2017-08-23 | 2021-08-03 | 北京京东尚科信息技术有限公司 | Web browser three-dimensional model positioning method, device, medium and electronic equipment |
US10372298B2 (en) | 2017-09-29 | 2019-08-06 | Apple Inc. | User interface for multi-user communication session |
US10582063B2 (en) * | 2017-12-12 | 2020-03-03 | International Business Machines Corporation | Teleconference recording management system |
CN107995440B (en) * | 2017-12-13 | 2021-03-09 | 北京奇虎科技有限公司 | Video subtitle map generating method and device, computer readable storage medium and terminal equipment |
US10673913B2 (en) * | 2018-03-14 | 2020-06-02 | 8eo, Inc. | Content management across a multi-party conference system by parsing a first and second user engagement stream and transmitting the parsed first and second user engagement stream to a conference engine and a data engine from a first and second receiver |
AU2019266225B2 (en) * | 2018-05-07 | 2021-01-14 | Apple Inc. | Multi-participant live communication user interface |
DK201870364A1 (en) | 2018-05-07 | 2019-12-03 | Apple Inc. | Multi-participant live communication user interface |
WO2019217320A1 (en) * | 2018-05-08 | 2019-11-14 | Google Llc | Mixing audio based on a pose of a user |
US11232532B2 (en) * | 2018-05-30 | 2022-01-25 | Sony Interactive Entertainment LLC | Multi-server cloud virtual reality (VR) streaming |
EP3576413A1 (en) * | 2018-05-31 | 2019-12-04 | InterDigital CE Patent Holdings | Encoder and method for encoding a tile-based immersive video |
US11227435B2 (en) | 2018-08-13 | 2022-01-18 | Magic Leap, Inc. | Cross reality system |
CN109168068B (en) * | 2018-08-23 | 2020-06-23 | Oppo广东移动通信有限公司 | Video processing method and device, electronic equipment and computer readable medium |
US10909762B2 (en) | 2018-08-24 | 2021-02-02 | Microsoft Technology Licensing, Llc | Gestures for facilitating interaction with pages in a mixed reality environment |
US11128792B2 (en) | 2018-09-28 | 2021-09-21 | Apple Inc. | Capturing and displaying images with multiple focal planes |
EP3861387A4 (en) | 2018-10-05 | 2022-05-25 | Magic Leap, Inc. | DISPLAY OF A SITE-SPECIFIC VIRTUAL CONTENT AT ANY LOCATION |
CN110473561A (en) * | 2019-07-24 | 2019-11-19 | 天脉聚源(杭州)传媒科技有限公司 | A kind of audio-frequency processing method, system and the storage medium of virtual spectators |
US11568605B2 (en) | 2019-10-15 | 2023-01-31 | Magic Leap, Inc. | Cross reality system with localization service |
EP4046401A4 (en) | 2019-10-15 | 2023-11-01 | Magic Leap, Inc. | WIRELESS FINGERPRINT CROSS-REALITY SYSTEM |
JP7604478B2 (en) | 2019-10-31 | 2024-12-23 | マジック リープ, インコーポレイテッド | Cross reality systems with quality information about persistent coordinate frames. |
US11386627B2 (en) | 2019-11-12 | 2022-07-12 | Magic Leap, Inc. | Cross reality system with localization service and shared location-based content |
EP4073763A4 (en) * | 2019-12-09 | 2023-12-27 | Magic Leap, Inc. | Cross reality system with simplified programming of virtual content |
US11228800B2 (en) * | 2019-12-31 | 2022-01-18 | Sling TV L.L.C. | Video player synchronization for a streaming video system |
US11830149B2 (en) | 2020-02-13 | 2023-11-28 | Magic Leap, Inc. | Cross reality system with prioritization of geolocation information for localization |
EP4104001A4 (en) | 2020-02-13 | 2024-03-13 | Magic Leap, Inc. | Cross reality system with map processing using multi-resolution frame descriptors |
CN115427758B (en) | 2020-02-13 | 2025-01-17 | 奇跃公司 | Cross-reality system with precisely shared maps |
US11521643B2 (en) * | 2020-05-08 | 2022-12-06 | Bose Corporation | Wearable audio device with user own-voice recording |
US20220070240A1 (en) * | 2020-08-28 | 2022-03-03 | Tmrw Foundation Ip S. À R.L. | Ad hoc virtual communication between approaching user graphical representations |
US12170579B2 (en) | 2021-03-05 | 2024-12-17 | Apple Inc. | User interfaces for multi-participant live communication |
US11907605B2 (en) | 2021-05-15 | 2024-02-20 | Apple Inc. | Shared-content session user interfaces |
US11893214B2 (en) | 2021-05-15 | 2024-02-06 | Apple Inc. | Real-time communication user interface |
US11928303B2 (en) | 2021-05-15 | 2024-03-12 | Apple Inc. | Shared-content session user interfaces |
US20220368737A1 (en) * | 2021-05-17 | 2022-11-17 | Cloudengage, Inc. | Systems and methods for hosting a video communications portal on an internal domain |
US11812135B2 (en) | 2021-09-24 | 2023-11-07 | Apple Inc. | Wide angle video conference |
US11935195B1 (en) | 2022-12-13 | 2024-03-19 | Astrovirtual, Inc. | Web browser derived content including real-time visualizations in a three-dimensional gaming environment |
US20240320925A1 (en) * | 2023-03-21 | 2024-09-26 | International Business Machines Corporation | Adjusting audible area of avatar's voice |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6504926B1 (en) * | 1998-12-15 | 2003-01-07 | Mediaring.Com Ltd. | User control system for internet phone quality |
US20060025216A1 (en) * | 2004-07-29 | 2006-02-02 | Nintendo Of America Inc. | Video game voice chat with amplitude-based virtual ranging |
US20080252637A1 (en) * | 2007-04-14 | 2008-10-16 | Philipp Christian Berndt | Virtual reality-based teleconferencing |
US20120166969A1 (en) * | 2007-03-01 | 2012-06-28 | Sony Computer Entertainment Europe Limited | Apparatus and method of data transfer |
Family Cites Families (54)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2892901B2 (en) * | 1992-04-27 | 1999-05-17 | インターナショナル・ビジネス・マシーンズ・コーポレイション | Automation system and method for presentation acquisition, management and playback |
US5559875A (en) * | 1995-07-31 | 1996-09-24 | Latitude Communications | Method and apparatus for recording and retrieval of audio conferences |
JP3139615B2 (en) * | 1996-08-08 | 2001-03-05 | 日本電信電話株式会社 | Three-dimensional virtual space sound communication method and apparatus |
US6175842B1 (en) | 1997-07-03 | 2001-01-16 | At&T Corp. | System and method for providing dynamic three-dimensional multi-user virtual spaces in synchrony with hypertext browsing |
US6850609B1 (en) * | 1997-10-28 | 2005-02-01 | Verizon Services Corp. | Methods and apparatus for providing speech recording and speech transcription services |
JPH11219278A (en) * | 1998-02-02 | 1999-08-10 | Mitsubishi Electric Corp | Three-dimensional virtual world system |
US6298129B1 (en) | 1998-03-11 | 2001-10-02 | Mci Communications Corporation | Teleconference recording and playback system and associated method |
US6230171B1 (en) * | 1998-08-29 | 2001-05-08 | International Business Machines Corporation | Markup system for shared HTML documents |
JP2000113221A (en) * | 1998-09-30 | 2000-04-21 | Sony Corp | Information processor, information processing method and provision medium |
WO2005015880A1 (en) | 1998-12-29 | 2005-02-17 | Tpresence, Inc. | Computer network architecture for persistent, distributed virtual environments |
KR100349597B1 (en) * | 1999-01-12 | 2002-08-22 | 삼성전자 주식회사 | Optical wave-guide device and manufacturing method thereof |
US7349944B2 (en) * | 1999-11-18 | 2008-03-25 | Intercall, Inc. | System and method for record and playback of collaborative communications session |
KR100404285B1 (en) * | 2000-02-09 | 2003-11-03 | (주) 고미드 | 2d/3d wed browsing method and recording medium storing the method |
US20010019337A1 (en) * | 2000-03-03 | 2001-09-06 | Jong Min Kim | System for providing clients with a three dimensional virtual reality |
WO2001071472A2 (en) | 2000-03-20 | 2001-09-27 | British Telecommunications Public Limited Company | Data entry |
US20060036756A1 (en) | 2000-04-28 | 2006-02-16 | Thomas Driemeyer | Scalable, multi-user server and method for rendering images from interactively customizable scene information |
JP4479051B2 (en) * | 2000-04-28 | 2010-06-09 | ソニー株式会社 | Information processing apparatus and method, and recording medium |
US6954728B1 (en) | 2000-05-15 | 2005-10-11 | Avatizing, Llc | System and method for consumer-selected advertising and branding in interactive media |
US6501739B1 (en) * | 2000-05-25 | 2002-12-31 | Remoteability, Inc. | Participant-controlled conference calling system |
AU2001271864A1 (en) * | 2000-07-06 | 2002-01-21 | Raymond Melkomian | Virtual interactive global exchange |
US6792093B2 (en) * | 2000-12-05 | 2004-09-14 | Zvi Barak | System and method for telephone call recording and recorded call retrieval |
AU781291B2 (en) | 2000-09-19 | 2005-05-12 | Nice Systems Ltd. | Communication management system for computer network based telephones |
GB0029574D0 (en) * | 2000-12-02 | 2001-01-17 | Hewlett Packard Co | Recordal service for voice communications |
JP2002312295A (en) * | 2001-04-09 | 2002-10-25 | Nec Interchannel Ltd | Virtual three-dimensional space conversation system |
US6909429B2 (en) | 2001-05-18 | 2005-06-21 | A.G. Imaginations Ltd. | System and method for displaying content in a three-dimensional virtual environment |
WO2003058518A2 (en) * | 2002-01-07 | 2003-07-17 | Stephen James Crampton | Method and apparatus for an avatar user interface system |
AUPR989802A0 (en) * | 2002-01-09 | 2002-01-31 | Lake Technology Limited | Interactive spatialized audiovisual system |
JP3829722B2 (en) | 2002-01-23 | 2006-10-04 | ソニー株式会社 | Information processing apparatus and method, and program |
GB2389736B (en) * | 2002-06-13 | 2005-12-14 | Nice Systems Ltd | A method for forwarding and storing session packets according to preset and/or dynamic rules |
US6778639B2 (en) * | 2002-06-26 | 2004-08-17 | International Business Machines Corporation | Method, apparatus and computer program for authorizing recording of a message |
US7054420B2 (en) * | 2002-09-11 | 2006-05-30 | Telstrat International, Ltd. | Voice over IP telephone recording architecture |
US7466334B1 (en) | 2002-09-17 | 2008-12-16 | Commfore Corporation | Method and system for recording and indexing audio and video conference calls allowing topic-based notification and navigation of recordings |
US6993120B2 (en) * | 2002-10-23 | 2006-01-31 | International Business Machines Corporation | System and method for copying and transmitting telephony conversations |
US7590230B1 (en) * | 2003-05-22 | 2009-09-15 | Cisco Technology, Inc. | Automated conference recording for missing conference participants |
US7467356B2 (en) * | 2003-07-25 | 2008-12-16 | Three-B International Limited | Graphical user interface for 3d virtual display browser using virtual display windows |
US20050030309A1 (en) * | 2003-07-25 | 2005-02-10 | David Gettman | Information display |
US20050226395A1 (en) * | 2004-04-05 | 2005-10-13 | Benco David S | Network support for consensual mobile recording |
JP2005322125A (en) * | 2004-05-11 | 2005-11-17 | Sony Corp | Information processing system, information processing method, and program |
GB2414369B (en) * | 2004-05-21 | 2007-08-01 | Hewlett Packard Development Co | Processing audio data |
US7227930B1 (en) * | 2004-10-20 | 2007-06-05 | Core Mobility, Inc. | Systems and methods for criteria-based recording of voice data |
US8077832B2 (en) * | 2004-10-20 | 2011-12-13 | Speechink, Inc. | Systems and methods for consent-based recording of voice data |
US7707262B1 (en) * | 2004-12-28 | 2010-04-27 | Aol Llc | Negotiating content controls |
JP2005149529A (en) * | 2005-01-06 | 2005-06-09 | Fujitsu Ltd | Spoken dialogue system |
US7738638B1 (en) * | 2005-04-05 | 2010-06-15 | At&T Intellectual Property Ii, L.P. | Voice over internet protocol call recording |
US7570752B2 (en) * | 2005-09-23 | 2009-08-04 | Alcatel Lucent | Telephony/conference activity presence state |
JP2007156558A (en) | 2005-11-30 | 2007-06-21 | Toshiba Corp | Communication device and communication method |
US20070282695A1 (en) | 2006-05-26 | 2007-12-06 | Hagai Toper | Facilitating on-line commerce |
JP4281925B2 (en) * | 2006-06-19 | 2009-06-17 | Square Enix Co Ltd | Network system |
US20080005347A1 (en) * | 2006-06-29 | 2008-01-03 | Yahoo! Inc. | Messenger system for publishing podcasts |
US7548609B2 (en) * | 2006-09-07 | 2009-06-16 | Cti Group (Holding), Inc. | Process for scalable conversation recording |
US8503651B2 (en) * | 2006-12-27 | 2013-08-06 | Nokia Corporation | Teleconferencing configuration based on proximity information |
US20080204449A1 (en) | 2007-02-27 | 2008-08-28 | Dawson Christopher J | Enablement of virtual environment functions and features through advertisement exposure |
US8300773B2 (en) * | 2007-04-30 | 2012-10-30 | Hewlett-Packard Development Company, L.P. | Telephonic recording system and method |
WO2009000028A1 (en) | 2007-06-22 | 2008-12-31 | Global Coordinate Software Limited | Virtual 3d environments |
- 2008
- 2008-12-27 US US12/344,465 patent/US8605863B1/en not_active Expired - Fee Related
- 2008-12-27 US US12/344,473 patent/US9258337B2/en active Active
- 2008-12-28 US US12/344,550 patent/US9392037B2/en active Active
- 2008-12-28 US US12/344,542 patent/US20090240359A1/en not_active Abandoned
- 2009
- 2009-03-17 JP JP2011500904A patent/JP5405557B2/en not_active Expired - Fee Related
- 2009-03-17 WO PCT/US2009/037424 patent/WO2009117434A1/en active Application Filing
- 2009-03-17 EP EP09723406.6A patent/EP2269389A4/en not_active Withdrawn
- 2009-03-17 CN CN200980109791.9A patent/CN101978707B/en not_active Expired - Fee Related
- 2009-03-18 WO PCT/CA2009/000328 patent/WO2009114936A1/en active Application Filing
- 2009-03-18 JP JP2011500018A patent/JP5396461B2/en active Active
- 2009-03-18 EP EP09721733.5A patent/EP2258103B1/en active Active
- 2009-03-18 WO PCT/CA2009/000329 patent/WO2009114937A1/en active Application Filing
- 2009-03-18 EP EP09722039.6A patent/EP2255492B1/en active Active
- 2009-03-18 CN CN200980109807.6A patent/CN102204207B/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6504926B1 (en) * | 1998-12-15 | 2003-01-07 | Mediaring.Com Ltd. | User control system for internet phone quality |
US20060025216A1 (en) * | 2004-07-29 | 2006-02-02 | Nintendo Of America Inc. | Video game voice chat with amplitude-based virtual ranging |
US20120166969A1 (en) * | 2007-03-01 | 2012-06-28 | Sony Computer Entertainment Europe Limited | Apparatus and method of data transfer |
US20080252637A1 (en) * | 2007-04-14 | 2008-10-16 | Philipp Christian Berndt | Virtual reality-based teleconferencing |
Cited By (46)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090199275A1 (en) * | 2008-02-06 | 2009-08-06 | David Brock | Web-browser based three-dimensional media aggregation social networking application |
US9529423B2 (en) * | 2008-12-10 | 2016-12-27 | International Business Machines Corporation | System and method to modify audio components in an online environment |
US20100146408A1 (en) * | 2008-12-10 | 2010-06-10 | International Business Machines Corporation | System and method to modify audio components in an online environment |
CN101996077A (en) * | 2010-09-08 | 2011-03-30 | 海皮士(北京)网络技术有限公司 | Method and system for embedding browser in three-dimensional client end |
US11025865B1 (en) * | 2011-06-17 | 2021-06-01 | Hrl Laboratories, Llc | Contextual visual dataspaces |
US20130050199A1 (en) * | 2011-08-29 | 2013-02-28 | Avaya Inc. | Input, display and monitoring of contact center operation in a virtual reality environment |
US9105013B2 (en) | 2011-08-29 | 2015-08-11 | Avaya Inc. | Agent and customer avatar presentation in a contact center virtual reality environment |
US9251504B2 (en) | 2011-08-29 | 2016-02-02 | Avaya Inc. | Configuring a virtual reality environment in a contact center |
US9349118B2 (en) * | 2011-08-29 | 2016-05-24 | Avaya Inc. | Input, display and monitoring of contact center operation in a virtual reality environment |
US9093259B1 (en) * | 2011-11-16 | 2015-07-28 | Disney Enterprises, Inc. | Collaborative musical interaction among avatars |
US20150156228A1 (en) * | 2013-11-18 | 2015-06-04 | Ronald Langston | Social networking interacting system |
US20170151501A1 (en) * | 2014-06-25 | 2017-06-01 | Capcom Co., Ltd. | Game device, method and non-transitory computer-readable storage medium |
US11027200B2 (en) * | 2014-06-25 | 2021-06-08 | Capcom Co., Ltd. | Game device, method and non-transitory computer-readable storage medium |
US20160292966A1 (en) * | 2015-03-31 | 2016-10-06 | Gary Denham | System and method of providing a virtual shopping experience |
CN105007297A (en) * | 2015-05-27 | 2015-10-28 | 国家计算机网络与信息安全管理中心 | Interaction method and apparatus of social network |
US20170123752A1 (en) * | 2015-09-16 | 2017-05-04 | Hashplay Inc. | Method and system for voice chat in virtual environment |
US10007482B2 (en) * | 2015-09-16 | 2018-06-26 | Hashplay Inc. | Method and system for voice chat in virtual environment |
EP3389260A4 (en) * | 2015-12-11 | 2018-11-21 | Sony Corporation | Information processing device, information processing method, and program |
US11311803B2 (en) * | 2017-01-18 | 2022-04-26 | Sony Corporation | Information processing device, information processing method, and program |
US20210329400A1 (en) * | 2017-01-23 | 2021-10-21 | Nokia Technologies Oy | Spatial Audio Rendering Point Extension |
US11096004B2 (en) * | 2017-01-23 | 2021-08-17 | Nokia Technologies Oy | Spatial audio rendering point extension |
US10516961B2 (en) * | 2017-03-17 | 2019-12-24 | Nokia Technologies Oy | Preferential rendering of multi-user free-viewpoint audio for improved coverage of interest |
WO2018167363A1 (en) | 2017-03-17 | 2018-09-20 | Nokia Technologies Oy | Preferential rendering of multi-user free-viewpoint audio for improved coverage of interest |
EP3596941A4 (en) * | 2017-03-17 | 2021-03-03 | Nokia Technologies Oy | Preferential rendering of multi-user free-viewpoint audio for improved coverage of interest |
US20180270601A1 (en) * | 2017-03-17 | 2018-09-20 | Nokia Technologies Oy | Preferential Rendering of Multi-User Free-Viewpoint Audio for Improved Coverage of Interest |
US11044570B2 (en) | 2017-03-20 | 2021-06-22 | Nokia Technologies Oy | Overlapping audio-object interactions |
US11604624B2 (en) | 2017-05-05 | 2023-03-14 | Nokia Technologies Oy | Metadata-free audio-object interactions |
US11442693B2 (en) | 2017-05-05 | 2022-09-13 | Nokia Technologies Oy | Metadata-free audio-object interactions |
US11074036B2 (en) | 2017-05-05 | 2021-07-27 | Nokia Technologies Oy | Metadata-free audio-object interactions |
US10154360B2 (en) | 2017-05-08 | 2018-12-11 | Microsoft Technology Licensing, Llc | Method and system of improving detection of environmental sounds in an immersive environment |
US11395087B2 (en) | 2017-09-29 | 2022-07-19 | Nokia Technologies Oy | Level-based audio-object interactions |
US10924566B2 (en) * | 2018-05-18 | 2021-02-16 | High Fidelity, Inc. | Use of corroboration to generate reputation scores within virtual reality environments |
US11138780B2 (en) * | 2019-03-28 | 2021-10-05 | Nanning Fugui Precision Industrial Co., Ltd. | Method and device for setting a multi-user virtual reality chat environment |
US20200311995A1 (en) * | 2019-03-28 | 2020-10-01 | Nanning Fugui Precision Industrial Co., Ltd. | Method and device for setting a multi-user virtual reality chat environment |
US10846898B2 (en) * | 2019-03-28 | 2020-11-24 | Nanning Fugui Precision Industrial Co., Ltd. | Method and device for setting a multi-user virtual reality chat environment |
US20220070232A1 (en) * | 2020-08-27 | 2022-03-03 | Varty Inc. | Virtual events-based social network |
US11838336B2 (en) * | 2020-08-27 | 2023-12-05 | Varty Inc. | Virtual events-based social network |
US20220321375A1 (en) * | 2021-03-31 | 2022-10-06 | Snap Inc. | Mixing participant audio from multiple rooms within a virtual conferencing system |
US11792031B2 (en) * | 2021-03-31 | 2023-10-17 | Snap Inc. | Mixing participant audio from multiple rooms within a virtual conferencing system |
US11282278B1 (en) * | 2021-04-02 | 2022-03-22 | At&T Intellectual Property I, L.P. | Providing adaptive asynchronous interactions in extended reality environments |
US11743430B2 (en) * | 2021-05-06 | 2023-08-29 | Katmai Tech Inc. | Providing awareness of who can hear audio in a virtual conference, and applications thereof |
US20230353710A1 (en) * | 2021-05-06 | 2023-11-02 | Katmai Tech Inc. | Providing awareness of who can hear audio in a virtual conference, and applications thereof |
US20230086248A1 (en) * | 2021-09-21 | 2023-03-23 | Meta Platforms Technologies, Llc | Visual navigation elements for artificial reality environments |
US20240031530A1 (en) * | 2022-07-20 | 2024-01-25 | Katmai Tech Inc. | Using zones in a three-dimensional virtual environment for limiting audio and video |
US12022235B2 (en) * | 2022-07-20 | 2024-06-25 | Katmai Tech Inc. | Using zones in a three-dimensional virtual environment for limiting audio and video |
US11947871B1 (en) | 2023-04-13 | 2024-04-02 | International Business Machines Corporation | Spatially aware virtual meetings |
Also Published As
Publication number | Publication date |
---|---|
EP2255492A1 (en) | 2010-12-01 |
WO2009114937A1 (en) | 2009-09-24 |
US20090241037A1 (en) | 2009-09-24 |
EP2258103B1 (en) | 2018-05-02 |
JP5405557B2 (en) | 2014-02-05 |
EP2255492B1 (en) | 2016-06-15 |
WO2009114936A1 (en) | 2009-09-24 |
CN101978707B (en) | 2015-04-22 |
US9392037B2 (en) | 2016-07-12 |
EP2255492A4 (en) | 2013-04-10 |
CN102204207A (en) | 2011-09-28 |
WO2009117434A1 (en) | 2009-09-24 |
JP5396461B2 (en) | 2014-01-22 |
JP2011518366A (en) | 2011-06-23 |
EP2258103A4 (en) | 2013-07-24 |
JP2011517811A (en) | 2011-06-16 |
EP2269389A4 (en) | 2013-07-24 |
US8605863B1 (en) | 2013-12-10 |
CN101978707A (en) | 2011-02-16 |
US20090240818A1 (en) | 2009-09-24 |
EP2258103A1 (en) | 2010-12-08 |
EP2269389A1 (en) | 2011-01-05 |
US9258337B2 (en) | 2016-02-09 |
CN102204207B (en) | 2016-06-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20090240359A1 (en) | Realistic Audio Communication in a Three Dimensional Computer-Generated Virtual Environment | |
US20100169799A1 (en) | Method and Apparatus for Enabling Presentations to Large Numbers of Users in a Virtual Environment | |
US20100169796A1 (en) | Visual Indication of Audio Context in a Computer-Generated Virtual Environment | |
US7840668B1 (en) | Method and apparatus for managing communication between participants in a virtual environment | |
US8762861B2 (en) | Method and apparatus for interrelating virtual environment and web content | |
US10424101B2 (en) | System and method for enabling multiple-state avatars | |
US11609682B2 (en) | Methods and systems for providing a communication interface to operate in 2D and 3D modes | |
CN100442313C (en) | 3D virtual space simulator and its system | |
US20220197403A1 (en) | Artificial Reality Spatial Interactions | |
CN117597916A (en) | Protect private audio and applications in virtual meetings | |
US20230017111A1 (en) | Spatialized audio chat in a virtual metaverse | |
WO2020149893A1 (en) | Audio spatialization | |
WO2023034567A1 (en) | Parallel video call and artificial reality spaces | |
US20210322880A1 (en) | Audio spatialization | |
US20240211093A1 (en) | Artificial Reality Coworking Spaces for Two-Dimensional and Three-Dimensional Interfaces | |
US12141907B2 (en) | Virtual separate spaces for virtual reality experiences | |
GB2614567A (en) | 3D spatialisation of voice chat | |
US11533578B2 (en) | Virtual environment audio stream delivery | |
WO2024138035A1 (en) | Dynamic artificial reality coworking spaces | |
WO2024009653A1 (en) | Information processing device, information processing method, and information processing system | |
WO2024059606A1 (en) | Avatar background alteration | |
大西悠貴 (Yuki Onishi) | Actuated Walls as Media Connecting and Dividing Physical/Virtual Spaces | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: NORTEL NETWORKS LIMITED, CANADA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HYNDMAN, ARN;LIPPMAN, ANDREW;SAURIOL, NICHOLAS;REEL/FRAME:022596/0337;SIGNING DATES FROM 20081218 TO 20090309 |
AS | Assignment |
Owner name: CITIBANK, N.A., AS ADMINISTRATIVE AGENT, NEW YORK Free format text: SECURITY AGREEMENT;ASSIGNOR:AVAYA INC.;REEL/FRAME:023892/0500 Effective date: 20100129 |
AS | Assignment |
Owner name: CITICORP USA, INC., AS ADMINISTRATIVE AGENT, NEW YORK Free format text: SECURITY AGREEMENT;ASSIGNOR:AVAYA INC.;REEL/FRAME:023905/0001 Effective date: 20100129 |
AS | Assignment |
Owner name: AVAYA INC., NEW JERSEY Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NORTEL NETWORKS LIMITED;REEL/FRAME:023998/0878 Effective date: 20091218 |
AS | Assignment |
Owner name: BANK OF NEW YORK MELLON TRUST, NA, AS NOTES COLLATERAL AGENT, THE, PENNSYLVANIA Free format text: SECURITY AGREEMENT;ASSIGNOR:AVAYA INC., A DELAWARE CORPORATION;REEL/FRAME:025863/0535 Effective date: 20110211 |
AS | Assignment |
Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., PENNSYLVANIA Free format text: SECURITY AGREEMENT;ASSIGNOR:AVAYA, INC.;REEL/FRAME:029608/0256 Effective date: 20121221 |
AS | Assignment |
Owner name: BANK OF NEW YORK MELLON TRUST COMPANY, N.A., THE, PENNSYLVANIA Free format text: SECURITY AGREEMENT;ASSIGNOR:AVAYA, INC.;REEL/FRAME:030083/0639 Effective date: 20130307 |
AS | Assignment |
Owner name: CITIBANK, N.A., AS ADMINISTRATIVE AGENT, NEW YORK Free format text: SECURITY INTEREST;ASSIGNORS:AVAYA INC.;AVAYA INTEGRATED CABINET SOLUTIONS INC.;OCTEL COMMUNICATIONS CORPORATION;AND OTHERS;REEL/FRAME:041576/0001 Effective date: 20170124 |
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION |
AS | Assignment |
Owner name: OCTEL COMMUNICATIONS LLC (FORMERLY KNOWN AS OCTEL COMMUNICATIONS CORPORATION), CALIFORNIA Free format text: BANKRUPTCY COURT ORDER RELEASING ALL LIENS INCLUDING THE SECURITY INTEREST RECORDED AT REEL/FRAME 041576/0001;ASSIGNOR:CITIBANK, N.A.;REEL/FRAME:044893/0531 Effective date: 20171128
Owner name: AVAYA INTEGRATED CABINET SOLUTIONS INC., CALIFORNIA Free format text: BANKRUPTCY COURT ORDER RELEASING ALL LIENS INCLUDING THE SECURITY INTEREST RECORDED AT REEL/FRAME 041576/0001;ASSIGNOR:CITIBANK, N.A.;REEL/FRAME:044893/0531 Effective date: 20171128
Owner name: AVAYA INC., CALIFORNIA Free format text: BANKRUPTCY COURT ORDER RELEASING ALL LIENS INCLUDING THE SECURITY INTEREST RECORDED AT REEL/FRAME 025863/0535;ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST, NA;REEL/FRAME:044892/0001 Effective date: 20171128
Owner name: AVAYA INC., CALIFORNIA Free format text: BANKRUPTCY COURT ORDER RELEASING ALL LIENS INCLUDING THE SECURITY INTEREST RECORDED AT REEL/FRAME 023892/0500;ASSIGNOR:CITIBANK, N.A.;REEL/FRAME:044891/0564 Effective date: 20171128
Owner name: AVAYA INC., CALIFORNIA Free format text: BANKRUPTCY COURT ORDER RELEASING ALL LIENS INCLUDING THE SECURITY INTEREST RECORDED AT REEL/FRAME 029608/0256;ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A.;REEL/FRAME:044891/0801 Effective date: 20171128
Owner name: AVAYA INC., CALIFORNIA Free format text: BANKRUPTCY COURT ORDER RELEASING ALL LIENS INCLUDING THE SECURITY INTEREST RECORDED AT REEL/FRAME 041576/0001;ASSIGNOR:CITIBANK, N.A.;REEL/FRAME:044893/0531 Effective date: 20171128
Owner name: VPNET TECHNOLOGIES, INC., CALIFORNIA Free format text: BANKRUPTCY COURT ORDER RELEASING ALL LIENS INCLUDING THE SECURITY INTEREST RECORDED AT REEL/FRAME 041576/0001;ASSIGNOR:CITIBANK, N.A.;REEL/FRAME:044893/0531 Effective date: 20171128
Owner name: AVAYA INC., CALIFORNIA Free format text: BANKRUPTCY COURT ORDER RELEASING ALL LIENS INCLUDING THE SECURITY INTEREST RECORDED AT REEL/FRAME 030083/0639;ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A.;REEL/FRAME:045012/0666 Effective date: 20171128 |
AS | Assignment |
Owner name: SIERRA HOLDINGS CORP., NEW JERSEY Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CITICORP USA, INC.;REEL/FRAME:045045/0564 Effective date: 20171215
Owner name: AVAYA, INC., CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CITICORP USA, INC.;REEL/FRAME:045045/0564 Effective date: 20171215 |