
WO2023069016A1 - Method and system for managing virtual content - Google Patents


Info

Publication number
WO2023069016A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual content
data
content
spatial environment
virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/SG2022/050730
Other languages
French (fr)
Inventor
Wee Han Victor NEO
Han Chong LEE
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Revez Motion Pte Ltd
Original Assignee
Revez Motion Pte Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Revez Motion Pte Ltd filed Critical Revez Motion Pte Ltd
Publication of WO2023069016A1

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1454Digital output to display device ; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • G06F3/1462Digital output to display device ; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay with means for detecting differences between the image stored in the host and the images displayed on the remote displays
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/147Digital output to display device ; Cooperation and interconnection of the display device with other functional units using display panels
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/04Changes in size, position or resolution of an image
    • G09G2340/0442Handling or displaying different aspect ratios, or changing the aspect ratio
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/14Solving problems related to the presentation of information to be displayed

Definitions

  • This invention relates generally to a method and system for managing virtual content.
  • An AR system may allow the user to interact with the generated AR content such as by manipulating generated elements, showing or hiding individual generated elements, and the like.
  • An AR system also may allow the user to add generated elements such as drawings or text annotations to the AR environment.
  • AR has been applied to many fields of use including events, conventions, architecture, art, construction, education, medicine, entertainment, and tourism, etc.
  • However, previously known AR systems are limited in that they are directed to augmentation of an entire environment, thus requiring specification of extensive AR environments.
  • In addition, previously known AR systems are limited in their ability to allow content to be generated, edited and uploaded to an AR environment. Tracking of use of AR content, when in use, is also non-existent, which has reduced the pervasiveness of AR, or mixed reality technologies, on social media platforms. Therefore, there exists a need for a solution addressing the foregoing problems.
  • A virtual content management method comprising converting input data in a first format into content data in a second format by a data conversion module, the content data being descriptive of and for generating virtual content therewith, and generating an instance of the virtual content from the content data by a content engine in a spatial environment defined by a software application having app data associated therewith, the virtual content being located with a reference frame in the spatial environment, the spatial environment being at least one of a virtual space, a real-world space and a locator being associated therewith.
  • The method further comprises generating a view of the virtual content in the spatial environment based on a predefined viewport defined by the software application operating from a computing device. User interaction with at least one of the virtual content and the spatial environment is performed via at least one of interacting with a user interface (UI) of the computing device and spatially manipulating the computing device.
  • A virtual content management system comprising a data conversion module for converting input data in a first format into content data in a second format, the content data being descriptive of and for generating virtual content therewith, a content engine for generating an instance of the virtual content from the content data in a spatial environment defined by a software application having app data associated therewith, the virtual content being located with a reference frame in the spatial environment, the spatial environment being at least one of a virtual space, a real-world space and a locator being associated therewith, and a rendering module for generating a view of the virtual content in the spatial environment based on a predefined viewport defined by the software application operating from a computing device.
  • User interaction with at least one of the virtual content and the spatial environment is performed via at least one of interacting with a user interface (UI) of the computing device and spatially manipulating the computing device.
  • A machine comprising a machine-readable medium having stored therein a plurality of programming instructions which, when executed, cause the machine to convert input data in a first format into content data in a second format, the content data being descriptive of and for generating virtual content therewith, generate an instance of the virtual content from the content data in a spatial environment defined by a software application having app data associated therewith, the virtual content being located with a reference frame in the spatial environment, the spatial environment being at least one of a virtual space, a real-world space and a locator being associated therewith, and generate a view of the virtual content in the spatial environment based on a predefined viewport defined by the software application operating from a computing device.
  • User interaction with at least one of the virtual content and the spatial environment is performed via at least one of interacting with a user interface (UI) of the computing device and spatially manipulating the computing device.
  • FIG. 1 illustrates virtual content generated by a virtual content management system using a virtual content management method according to an aspect of the invention and presented on a computing device, where the exemplary virtual content is an avatar located relative to words on a name card functioning as markers;
  • FIG. 2 illustrates an exemplary configuration between the virtual content management system and the computing device of FIG. 1;
  • FIG. 3 shows a partial system diagram of the virtual content management system for generating the virtual content of FIG. 1 according to an aspect of the invention;
  • FIG. 4 shows a process flow diagram of the virtual content management method according to an aspect of the invention.
  • FIG. 5 shows a diagram of the data flow between the virtual content management system of FIG. 3 and the computing device of FIG. 1;
  • FIG. 6 illustrates a non-coding-based design interface of an interface engine of the virtual content management system of FIG. 2 for designing and creating interaction intelligence for associating with an exemplary virtual content.
  • A virtual content management method 100 utilising a virtual content management system 20 is described hereinafter with reference to FIG. 1 to FIG. 6.
  • the virtual content management system 20 can represent various forms of server systems 21 including, but not limited to a web server, an application server, a proxy server, a network server, or a server farm.
  • the virtual content management system 20 may communicate wirelessly with one or more of a computing device 22 through a communication interface (not shown), which may include digital signal processing circuitry where necessary.
  • the communication interface may provide for communications under various modes or protocols.
  • Short-range communication may occur, such as using a Bluetooth, WiFi, iBeacon or other such transceiver.
  • the virtual content management system 20 may perform wired or wireless communication with one or more of the computing device 22 via a network 24.
  • the network 24 can be a large computer network, such as a local area network (LAN), wide area network (WAN), the Internet, a cellular network, or a combination thereof connecting any number of mobile clients, fixed clients, and/or servers.
  • the network 24 may include a corporate network (e.g., an intranet) and one or more wireless access points.
  • The computing device 22 can represent various forms of processing devices including, but not limited to, a desktop computer, a laptop computer, a handheld computer, a personal digital assistant (PDA), a smartphone, a smart tablet, a cellular telephone, a network appliance, a camera, an enhanced general packet radio service (EGPRS) mobile phone, a media player, a navigation device, an email device, a game console, or a combination of any two or more of these or other data processing devices.
  • the computing device 22 may access application software on the virtual content management system 20.
  • the virtual content management system 20 further comprises a data conversion module 26 for converting input data 200 in a first format into content data 202 in a second format in a step 102.
  • the first format of the input data 200 can be, for example, one of GLB, GLTF, 3D OBJ or 3DS. Further, the input data 200 can be an animated GIF file or a video file, for example AVI or MP4, for conversion to a reference or proprietary format defined by the second format.
  • the data conversion module 26 not only converts the format of the input data 200, but is able to also, for example, extract 3D features and properties of an item or article in a video or movie file.
  • The step 102 of converting between the first format and the second format may be excluded in some implementations if the first format of the input data 200 is the same as the second format, in which case the input data 200 is taken as the content data 202.
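The conversion and pass-through behaviour described above can be sketched as follows. This is an illustrative sketch only: the patent does not disclose an implementation, and the format list, target-format name and function signature are assumptions.

```python
# Hypothetical sketch of the data conversion step (step 102). All names and
# formats here are illustrative assumptions, not taken from the patent.

SUPPORTED_INPUT_FORMATS = {"glb", "gltf", "obj", "3ds", "gif", "avi", "mp4"}
TARGET_FORMAT = "content-data-v1"  # stands in for the proprietary second format


def convert_to_content_data(input_bytes: bytes, input_format: str) -> tuple:
    """Convert input data in a first format into content data in the second format."""
    fmt = input_format.lower()
    if fmt == TARGET_FORMAT:
        # Formats already match: the input data is taken as the content data.
        return input_bytes, TARGET_FORMAT
    if fmt not in SUPPORTED_INPUT_FORMATS:
        raise ValueError(f"unsupported input format: {input_format}")
    # Placeholder for the real conversion (e.g. extracting 3D features and
    # properties of an article from a video file).
    converted = b"CDAT" + input_bytes
    return converted, TARGET_FORMAT
```

A caller would feed uploaded file bytes and the declared format in, receiving content data ready for the content engine.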
  • the content data 202 is descriptive of and for generating virtual content 204 therewith.
  • The virtual content management system 20 further comprises a content engine 28 for generating an instance of the virtual content 204 from the content data 202 in a spatial environment 206 defined by a software application 30 having app data 208 associated therewith in a step 104.
  • The virtual content 204 is located with a reference frame 210 in the spatial environment 206, the spatial environment 206 being at least one of a virtual space, a real-world space and a locator being associated therewith.
  • The virtual content management system 20 further comprises a rendering module 32 for generating a view of the virtual content 204 in the spatial environment 206 based on a predefined virtual viewport 212 defined by the software application 30 operating from the computing device 22 in a step 106. User interaction with at least one of the virtual content 204 and the spatial environment 206 is performed via at least one of interacting with a user interface 34 (UI 34) of the computing device 22 and spatially manipulating the computing device 22.
  • the virtual content management method 100 further comprises a step 110 of tracking and capturing usage data 214 comprising user interaction with the virtual content 204 and the spatial environment 206.
  • The usage data 214 comprises at least one of a user identifier associated with a user of the computing device 22, captured interactions between the user and the virtual content 204 and spatial environment 206, usage timings, the URI of the software application 30, and event data associated with the spatial environment 206, including comments and emotion indicators, for example emoticons and stickers, provided by other users of the spatial environment 206 or the platform, for example a social media platform, where the spatial environment 206 is presented or provided.
  • the usage data 214 can further comprise geo-location data of computing device 22 and geo-location data associated with the spatial environment 206.
  • the virtual content management method 100 further comprises a step 112 of performing data analysis on the usage data 214 to identify at least one of trends, correlations, data models, key influences and insights therefrom by an analytics module 38 forming part of the virtual content management system 20.
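A minimal sketch of what one tracked usage record and a toy trend analysis over such records could look like. The field names and the aggregation are illustrative assumptions; the patent does not specify a schema for the usage data 214 or the workings of the analytics module 38.

```python
from collections import Counter
from dataclasses import dataclass


@dataclass
class UsageEvent:
    # Illustrative fields mirroring the usage data 214 described above.
    user_id: str
    interaction: str        # e.g. "tap", "resize", "comment"
    timestamp: float        # seconds since epoch
    app_uri: str            # URI of the software application 30
    geo_location: tuple = None  # optional (lat, lon) of the computing device


def interaction_trends(events):
    """Toy analysis step 112: count interactions by type to surface trends."""
    return Counter(e.interaction for e in events)
```

A real analytics module would of course go beyond counting, e.g. correlating interactions with geo-location or timing.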
  • the software application 30 being one of a web browser, a front-end application, a middle-ware, a back-end application, web application, a native application, a hybrid application and a containerized application.
  • The at least one of a virtual space, a real-world space and a locator can be at least one of a substantially real-time image captured by the computing device 22, a substantially real-time image streamed to the computing device 22, a virtual spatial environment 206, and a mixed reality space generated from the substantially real-time image and the virtual spatial environment 206.
  • the step 104 of generating an instance of the virtual content 204 from the content data 202 by the content engine 28 in a spatial environment 206 comprises a step 120 of tracking position of a reference marker in the spatial environment 206, and a step 122 of locating the virtual content 204 to a reference marker defining the reference frame 210 in the spatial environment 206.
  • the reference marker can be a feature in the spatial environment 206, a QR code, a shape or word characters on a physical object such as a name card being captured in the real-world as the spatial environment 206, as shown in FIG. 1, or the like uniquely identifiable markers.
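The marker-tracking steps 120 and 122 can be pictured as placing the virtual content at a fixed offset from the tracked marker position, the marker thereby defining the reference frame. This is a translation-only sketch (rotation and scale omitted), and all names are illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass
class Vec3:
    """A 3D position in the spatial environment (illustrative)."""
    x: float
    y: float
    z: float


def locate_content(marker_position: Vec3, offset: Vec3) -> Vec3:
    # Step 120 supplies the tracked marker position; step 122 locates the
    # virtual content at a fixed offset from it in the marker's reference frame.
    return Vec3(marker_position.x + offset.x,
                marker_position.y + offset.y,
                marker_position.z + offset.z)
```

Re-running this each frame as the marker moves keeps the content anchored to it, e.g. an avatar hovering above the words on a name card.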
  • The step 106 of generating a view of the virtual content 204 in the spatial environment 206 based on a predefined virtual viewport 212 defined by the software application 30 comprises a step 124 of animating the virtual content 204 at least one of based on a pre-defined animation sequence and in response to user interaction therewith.
  • The step 104 of generating an instance of the virtual content 204 from the content data 202 by the content engine 28 in a spatial environment 206 comprises a step 126 of generating an instance of the virtual content 204 from the content data 202 by the content engine 28 in a spatial environment 206 defined by the software application 30 in response to a request being made by the software application 30.
  • The request comprises account authentication data providable from the computing device 22.
  • the virtual content management method 100 further comprises a step 130 of packaging the content data 202 with software libraries 214 for generating the virtual content 204 in the spatial environment 206 into a software container 216, and a step 132 of providing the software container 216 to the computing device 22 in response to a request therefor.
  • the request may be generated automatically via a partner platform or application or may be requested for by the user of the computing device 22 via a third party application or platform.
  • The software container 216 provides an environment with the necessary software components for executing code for generating the virtual content 204, including inherited properties and objects, from the content data 202.
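Steps 130 and 132 — packaging the content data with its supporting software libraries into a container served on request — could be sketched as a simple archive with a manifest. The archive layout and file names below are assumptions for illustration; the patent does not prescribe a container format.

```python
import io
import json
import zipfile


def package_container(content_data: bytes, libraries: dict) -> bytes:
    """Step 130 (sketch): bundle content data and libraries into one archive
    that can be provided to the computing device on request (step 132)."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as z:
        z.writestr("content.dat", content_data)
        for name, code in libraries.items():
            z.writestr(f"lib/{name}", code)
        # Manifest tells the receiving application what the container holds.
        z.writestr("manifest.json",
                   json.dumps({"content": "content.dat",
                               "libraries": sorted(libraries)}))
    return buf.getvalue()
```

The receiving software application would unpack the archive and execute the bundled libraries against the content data to instantiate the virtual content.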
  • The virtual content 204 forms one of a virtual reality environment and a mixed reality environment with the spatial environment 206, the virtual content 204 being one of 2D content, 3D content and a point cloud.
  • The input data 200 is providable by the software application 30 operating on the computing device 22.
  • The input data 200 can be scanned discretised data of an article, engineering drawings, sketches, coordinates, 2D models or 3D models.
  • the virtual content management system 20 may be implemented as a web-based service and platform.
  • a front-end application, for operating on the computing device 22, may be provided for accessing and interacting with the virtual content management system 20.
  • a user may pre-capture an image or a video of a physical object for generating the input data 200.
  • The input data may be generated using modelling software such as Maya or CAD software such as SolidWorks.
  • The input data 200 may also be generated using a point cloud or a LiDAR scanner.
  • the input data 200 is then uploaded onto the virtual content management system 20 for processing and conversion to the content data 202 by the data conversion module 26.
  • the user may access the virtual content management system 20 to preview, edit and further work on the virtual content 204.
  • Animation of the virtual content 204, for example an avatar virtual content with a rotating or spinning animation, may be introduced to the content data 202.
  • the input data 200 may be containerized as the software container 216 at this point depending on the target software application 30 that it is to be used or implemented with.
  • The software application 30 may be a hybrid app on a smartphone that uses, for example, Apple's ARKit to capture a real-world space, for example a bedroom, as the spatial environment 206 and to establish a reference frame 210 based on the feature mesh generated by the hybrid app from images captured by the camera module of the computing device 22, so that the generated virtual content 204, for example the mentioned avatar or a furniture article, may be located and manipulated as augmented reality content in the real-world spatial environment 206 image in a substantially true-to-scale manner.
  • the virtual viewport 212 may be manipulated through interacting with the smart phone or by spatially manipulating the smart phone to manipulate the virtual viewport.
  • the virtual content 204 may be resized or repositioned in the spatial environment 206 by manipulation of the UI 34, for example the touch screen of the smart phone.
  • Information and data of the software application 30 will be captured as the app data 208, which will be sent together with the usage data 214 back to the virtual content management system 20 for analysis by the analytics module 38.
  • the software application 30 may also be a web-based social media platform, which may also operate as a native or hybrid app, such as Facebook or Instagram which will be identified and captured as the app data 208 together with other information, for example the corresponding Facebook page address/locator, tags or hashtags, URIs and event names/handlers.
  • The virtual content 204 may be generated in a Facebook live session, for example via the mentioned containerization or with plugins/APIs enabled through the social media platform, where the geo-location of the captured or streamed live spatial environment 206 may be determined, captured and sent to the analytics module 38 for analysis thereby. Therefore, the implementation of the virtual content management method 100 and system 20 is wide-ranging and is ideal for MICE events, outdoor expos, online events, museum tours or the like.
  • The virtual content 204 may also be a hologram generated by projectors as the computing device 22, or virtual reality/augmented reality images or objects generated by an AR or VR headset, for example Google Glass or Oculus headsets.
  • The virtual content management system 20 further comprises an interface engine 50 for creating an interaction intelligence associated with the virtual content 204, the interaction intelligence capturing and defining non-coding-based user interface designs and flow intelligence for determining interaction flow between the user and the virtual content 204 by the software application 30.
  • A non-coding-based design interface 52 of the interface engine allows for, for example, drag-and-drop actions, drop-down list selections, wizards and similar aids for a user defining the interaction intelligence.
  • content, media and snippets of codes may be uploaded or integrated with the interaction intelligence as well.
  • The interface engine 50 will also further generate codes and resources from the created interaction intelligence and one of associate and package the codes and resources with the content data.
  • the generated code may further be packaged with the software container 216 based on user requirements.
  • The interaction intelligence's trigger may be pre-defined and may initiate the interaction intelligence in response to occurrence of at least one of a pre-defined event and a pre-defined interaction with at least one of the virtual content 204 and the spatial environment 206.
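The trigger mechanism just described can be pictured as a small event dispatcher: handlers registered against a pre-defined event or interaction name run when that event occurs. This is a sketch under assumed names; the patent does not prescribe an API for the interaction intelligence.

```python
class InteractionIntelligence:
    """Toy trigger registry for an interaction intelligence (illustrative)."""

    def __init__(self):
        self._handlers = {}

    def on(self, trigger_name, handler):
        # Pre-define a trigger by registering a handler for an event or
        # interaction name.
        self._handlers.setdefault(trigger_name, []).append(handler)

    def fire(self, trigger_name, payload=None):
        # Initiate the interaction intelligence when the event occurs;
        # unknown triggers simply do nothing.
        return [h(payload) for h in self._handlers.get(trigger_name, [])]
```

For the movie-billboard example below, a "tap" trigger on the billboard content might start the trailer and surface a ticket-purchase option.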
  • The spatial environment 206 may be a multiverse space with multiverse capabilities and features.
  • The interaction intelligence allows further features and services to be provided to the user as and when required and appropriate, depending on the intent of the content creator user or content owner user.
  • The virtual content 204 may be a virtual piece of furniture with a trigger point 54 pre-defined via the design interface 52 and presented on the software application 30.
  • textual and audible content may be presented to the user as part of the interaction intelligence. Options may also be presented to enable the user to find out more about the furniture, to trigger a conversation, chat or callback with a sales staff, or to initiate purchase payment and delivery arrangements with the business or individual retailing the furniture.
  • the user can be walking past a movie billboard in the spatial environment 206 as part of a multiverse.
  • a trailer of a movie may start to play as part of the interaction intelligence with the billboard being the virtual content 204.
  • The user may intend to immediately check availability of seats at his favourite cinema and initiate purchase and payment of tickets to the movie as part of the interaction intelligence, thereby improving user experience.
  • the virtual content management system 20 may further comprise a tokenization module 56 for tokenizing the virtual content 204, and/or its content data 202, into non-fungible tokens (NFTs).
  • The NFTs may be programmed to enable the creator of the virtual content 204 to automatically receive a royalty whenever transfer of ownership of the NFTs occurs, for example in the form of a percentage of the transaction price.
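The royalty behaviour described — the creator automatically receiving a percentage of the transaction price on each ownership transfer — reduces to a simple calculation. This is a sketch; the percentage and the two-decimal rounding policy are assumptions, and an actual NFT would encode this in its smart contract.

```python
def creator_royalty(sale_price: float, royalty_percent: float) -> float:
    """Royalty owed to the creator on one ownership transfer of the NFT,
    as a percentage of the transaction price (rounded to cents)."""
    if not 0 <= royalty_percent <= 100:
        raise ValueError("royalty_percent must be between 0 and 100")
    return round(sale_price * royalty_percent / 100, 2)
```

For example, a 5% royalty on a 1000.00 sale yields 50.00 to the creator.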
  • The NFTs enable ownership thereof and of the digital assets associated therewith, specifically the virtual content 204 and the content data 202 wherefrom the virtual content 204 is generatable and derived, and provide a way for the virtual content 204 to be bought, sold and transacted.
  • The virtual content 204 will therefore form part of a digital asset library 58 which may be enabled, for example through containerization, for use on the virtual content management system 20 and other third-party platforms to enable the creation of an eco-system for digital content creation and use.
  • The virtual content management system 20 can also be implemented as a machine to perform the virtual content management method 100, the machine comprising a machine-readable medium having stored therein a plurality of programming instructions which, when executed, cause the machine to convert the input data 200 in the first format into content data 202 in a second format, generate an instance of the virtual content 204 from the content data 202 in the spatial environment 206 defined by the software application 30 having app data 208 associated therewith, and generate a view of the virtual content 204 in the spatial environment 206 based on the predefined and spatially manipulatable virtual viewport 212 defined by the software application 30 operating from a computing device 22.
  • User interaction with at least one of the virtual content 204 and the spatial environment 206 is performed via at least one of interacting with a user interface (UI 34) of the computing device 22 and spatially manipulating the computing device 22.
  • Aspects of particular embodiments of the present disclosure address at least one aspect, problem, limitation, and/or disadvantage associated with existing virtual content management methods. While features, aspects, and/or advantages associated with certain embodiments have been described in the disclosure, other embodiments may also exhibit such features, aspects, and/or advantages, and not all embodiments need necessarily exhibit such features, aspects, and/or advantages to fall within the scope of the disclosure. It will be appreciated by a person of ordinary skill in the art that several of the above-disclosed structures, components, or alternatives thereof, can be desirably combined into alternative structures, components, and/or applications. In addition, various modifications, alterations, and/or improvements may be made to various embodiments that are disclosed by a person of ordinary skill in the art within the scope of the present disclosure, which is limited only by the following claims.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Business, Economics & Management (AREA)
  • Tourism & Hospitality (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

Described herein is a virtual content management method comprising converting input data in a first format into content data in a second format by a data conversion module, the content data being descriptive of and for generating virtual content therewith, and generating an instance of the virtual content from the content data by a content engine in a spatial environment defined by a software application having app data associated therewith, the virtual content being located with a reference frame in the spatial environment, the spatial environment being at least one of a virtual space, a real-world space and a locator being associated therewith. The method further comprises generating a view of the virtual content in the spatial environment based on a predefined viewport defined by the software application operating from a computing device.

Description

METHOD AND SYSTEM FOR MANAGING VIRTUAL CONTENT
TECHNICAL FIELD
This invention relates generally to a method and system for managing virtual content.
Background
An AR system may allow the user to interact with the generated AR content such as by manipulating generated elements, showing or hiding individual generated elements, and the like. An AR system also may allow the user to add generated elements such as drawings or text annotations to the AR environment. AR has been applied to many fields of use including events, conventions, architecture, art, construction, education, medicine, entertainment, tourism, and the like. However, previously known AR systems are limited in that they are directed to augmentation of an entire environment, thus requiring specification of extensive AR environments. In addition, previously known AR systems are limited in their ability to allow content to be generated, edited and uploaded to an AR environment. Tracking of the use of AR content is also non-existent, which has reduced the pervasiveness of AR, or mixed reality technologies, on social media platforms. Therefore, there exists a need for a solution for addressing the foregoing problems.
Summary
In accordance with a first aspect of the invention, there is disclosed a virtual content management method comprising converting input data in a first format into content data in a second format by a data conversion module, the content data being descriptive of and for generating virtual content therewith, and generating an instance of the virtual content from the content data by a content engine in a spatial environment defined by a software application having app data associated therewith, the virtual content being located with a reference frame in the spatial environment, the spatial environment being at least one of a virtual space, a real-world space and a locator being associated therewith. The method further comprises generating a view of the virtual content in the spatial environment based on a predefined viewport defined by the software application operating from a computing device. User interaction with at least one of the virtual content and the spatial environment is performed via at least one of interacting with a user interface (UI) of the computing device and spatially manipulating the computing device.
In accordance with a second aspect of the invention, there is disclosed a virtual content management system comprising a data conversion module for converting input data in a first format into content data in a second format, the content data being descriptive of and for generating virtual content therewith, a content engine for generating an instance of the virtual content from the content data in a spatial environment defined by a software application having app data associated therewith, the virtual content being located with a reference frame in the spatial environment, the spatial environment being at least one of a virtual space, a real-world space and a locator being associated therewith, and a rendering module for generating a view of the virtual content in the spatial environment based on a predefined viewport defined by the software application operating from a computing device. User interaction with at least one of the virtual content and the spatial environment is performed via at least one of interacting with a user interface (UI) of the computing device and spatially manipulating the computing device.
In accordance with a third aspect of the invention, there is disclosed a machine comprising a machine-readable medium having stored therein a plurality of programming instructions which, when executed, cause the machine to convert input data in a first format into content data in a second format, the content data being descriptive of and for generating virtual content therewith, generate an instance of the virtual content from the content data in a spatial environment defined by a software application having app data associated therewith, the virtual content being located with a reference frame in the spatial environment, the spatial environment being at least one of a virtual space, a real-world space and a locator being associated therewith, and generate a view of the virtual content in the spatial environment based on a predefined viewport defined by the software application operating from a computing device. User interaction with at least one of the virtual content and the spatial environment is performed via at least one of interacting with a user interface (UI) of the computing device and spatially manipulating the computing device.
Brief Description of the Drawings
FIG. 1 illustrates virtual content generated by a virtual content management system using a virtual content management method according to an aspect of the invention and presented on a computing device, where the exemplary virtual content is an avatar located to words on a name card functioning as markers;
FIG. 2 illustrates an exemplary configuration between the virtual content management system and the computing device of FIG. 1;
FIG. 3 shows a partial system diagram of the virtual content management system for generating the virtual content of FIG. 1 according to an aspect of the invention;
FIG. 4 shows a process flow diagram of the virtual content management method according to an aspect of the invention;
FIG. 5 shows a data flow diagram of data flow between the virtual content management system of FIG. 3 and the computing device of FIG. 1; and
FIG. 6 illustrates a non-coding-based design interface of an interface engine of the virtual content management system of FIG. 2 for designing and creating interaction intelligence for associating with an exemplary virtual content.

Detailed Description
An exemplary embodiment of the present invention, a virtual content management method 100 utilising a virtual content management system 20, is described hereinafter with reference to FIG. 1 to FIG. 6. The virtual content management system 20 can represent various forms of server systems 21 including, but not limited to, a web server, an application server, a proxy server, a network server, or a server farm. In some implementations, the virtual content management system 20 may communicate wirelessly with one or more of a computing device 22 through a communication interface (not shown), which may include digital signal processing circuitry where necessary. The communication interface may provide for communications under various modes or protocols. In addition, short-range communication may occur, such as using a Bluetooth, WiFi, iBeacons or other such transceiver.
In other implementations, the virtual content management system 20 may perform wired or wireless communication with one or more of the computing device 22 via a network 24. The network 24 can be a large computer network, such as a local area network (LAN), wide area network (WAN), the Internet, a cellular network, or a combination thereof connecting any number of mobile clients, fixed clients, and/or servers. In most implementations, the network 24 may include a corporate network (e.g., an intranet) and one or more wireless access points.
The computing device 22 can represent various forms of processing devices including, but not limited to, a desktop computer, a laptop computer, a handheld computer, a personal digital assistant (PDA), a smartphone, a smart tablet, a cellular telephone, a network appliance, a camera, an enhanced general packet radio service (EGPRS) mobile phone, a media player, a navigation device, an email device, a game console, or a combination of any two or more of these data processing devices or other data processing devices. The computing device 22 may access application software on the virtual content management system 20. Preferably, the virtual content management system 20 further comprises a data conversion module 26 for converting input data 200 in a first format into content data 202 in a second format in a step 102. The first format of the input data 200 can be, for example, one of GLB, GLTF, 3D OBJ or 3DS. Further, the input data 200 can be an animated GIF file or a video file, for example AVI or MP4, for conversion to a reference or proprietary format defined by the second format. The data conversion module 26 not only converts the format of the input data 200, but can also, for example, extract 3D features and properties of an item or article in a video or movie file. The step 102 of converting between the first format and the second format may be excluded in some implementations if the first format of the input data 200 is the same as the second format, in which case the input data 200 is taken as the content data 202. The content data 202 is descriptive of and for generating virtual content 204 therewith. The virtual content management system 20 further comprises a content engine 28 for generating an instance of the virtual content 204 from the content data 202 in a spatial environment 206 defined by a software application 30 having app data 208 associated therewith in a step 104.
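The conversion step described above can be sketched as a small dispatch over supported first formats. This is an illustrative assumption only: the disclosure does not specify the second (reference or proprietary) format, so the `ContentData` record, the `CONVERTERS` registry and all field names below are hypothetical stand-ins for the data conversion module 26.

```python
# Hedged sketch only: ContentData, CONVERTERS and every field below are
# hypothetical; the patent does not define its second format.
from dataclasses import dataclass, field

@dataclass
class ContentData:
    """Normalised second-format record descriptive of the virtual content."""
    source_format: str
    meshes: list = field(default_factory=list)
    animations: list = field(default_factory=list)

# One converter per supported first format (GLB, GLTF, OBJ, 3DS, GIF, MP4 ...).
CONVERTERS = {}

def converts(fmt):
    """Register a converter function for one first format."""
    def register(fn):
        CONVERTERS[fmt.lower()] = fn
        return fn
    return register

@converts("glb")
def convert_glb(raw_bytes):
    # A real converter would parse the GLB container and extract meshes,
    # materials and animations; here we only tag the source format.
    return ContentData(source_format="glb", meshes=["placeholder-mesh"])

def to_content_data(raw_bytes, fmt):
    """Step 102 sketch: convert first-format input data into content data."""
    if fmt.lower() not in CONVERTERS:
        raise ValueError(f"unsupported input format: {fmt}")
    return CONVERTERS[fmt.lower()](raw_bytes)
```

A registry keyed by format keeps adding new first formats (or skipping conversion when formats already match) localised to one table.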
The virtual content 204 is located with a reference frame 210 in the spatial environment 206, the spatial environment 206 being at least one of a virtual space, a real-world space and a locator being associated therewith. The virtual content management system 20 further comprises a rendering module 32 for generating a view of the virtual content 204 in the spatial environment 206 based on a predefined virtual viewport 212 defined by the software application 30 operating from the computing device 22 in a step 106. User interaction with at least one of the virtual content 204 and the spatial environment 206 is performed via at least one of interacting with a user interface 34 (UI 34) of the computing device 22 and spatially manipulating the computing device 22.
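As a rough illustration of the viewport idea, a minimal pinhole-style projection can map a point of the virtual content in the spatial environment to 2D view coordinates. The function name, the camera model and the `focal` parameter are all assumptions for this sketch, not details of the disclosed rendering module 32.

```python
# Minimal pinhole-style projection: camera at camera_pos looking down +z.
# This model is an illustration only, not the disclosed rendering module.
def project_to_viewport(point, camera_pos, focal=1.0):
    """Map a 3D content point in the spatial environment to 2D view coords."""
    x, y, z = (p - c for p, c in zip(point, camera_pos))
    if z <= 0:
        return None  # behind the camera: outside the viewport's view
    return (focal * x / z, focal * y / z)
```

Spatially manipulating the device would correspond to updating `camera_pos` (and, in a fuller model, the camera rotation) between frames.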
The virtual content management method 100 further comprises a step 110 of tracking and capturing usage data 214 comprising user interaction with the virtual content 204 and the spatial environment 206. The usage data 214 comprises at least one of a user identifier associated with a user of the computing device 22, captured interactions between the user and the virtual content 204 and spatial environment 206, usage timings, a URI of the software application 30 and event data associated with the spatial environment 206 including comments and emotion indicators, for example emoticons and stickers, provided by other users of the spatial environment 206 or the platform, for example a social media platform, where the spatial environment 206 is presented or provided. The usage data 214 can further comprise geo-location data of the computing device 22 and geo-location data associated with the spatial environment 206.
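The tracked usage data of step 110 might be captured as simple event records like the following sketch. The `UsageEvent` fields merely mirror the items listed above (user identifier, interaction, URI, geo-location, timing) and are an assumed shape, not a prescribed schema.

```python
# Assumed event-record shape for the usage data 214; field names mirror the
# items listed in the description and are not from the patent itself.
from dataclasses import dataclass, field
from typing import Optional, Tuple
import time

@dataclass
class UsageEvent:
    user_id: str
    interaction: str                            # e.g. "tap", "resize", "reposition"
    app_uri: str
    geo: Optional[Tuple[float, float]] = None   # device geo-location, if available
    timestamp: float = field(default_factory=time.time)

def capture_usage(events, user_id, interaction, app_uri, geo=None):
    """Step 110 sketch: append one tracked interaction to the usage log."""
    events.append(UsageEvent(user_id, interaction, app_uri, geo))
    return events
```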
The virtual content management method 100 further comprises a step 112 of performing data analysis on the usage data 214 to identify at least one of trends, correlations, data models, key influences and insights therefrom by an analytics module 38 forming part of the virtual content management system 20.
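A toy first pass at the step-112 analysis could rank interaction types by frequency as an initial trend signal; a real analytics module 38 would go far beyond this hypothetical helper to correlations, data models and key influences.

```python
# Hypothetical first-pass trend analysis over captured usage events.
from collections import Counter

def interaction_trends(usage_events):
    """Rank interaction types by frequency, most common first."""
    return Counter(e["interaction"] for e in usage_events).most_common()
```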
Preferably, the software application 30 is one of a web browser, a front-end application, a middleware, a back-end application, a web application, a native application, a hybrid application and a containerized application. Further, the at least one of a virtual space, a real-world space and a locator is at least one of a substantially real-time image captured by the computing device 22, a substantially real-time image streamed to the computing device 22, the virtual spatial environment 206, and a mixed reality space generated from the substantially real-time image and the virtual spatial environment 206.
Preferably, the step 104 of generating an instance of the virtual content 204 from the content data 202 by the content engine 28 in a spatial environment 206 comprises a step 120 of tracking the position of a reference marker in the spatial environment 206, and a step 122 of locating the virtual content 204 to a reference marker defining the reference frame 210 in the spatial environment 206. The reference marker can be a feature in the spatial environment 206, a QR code, a shape or word characters on a physical object, such as a name card captured in the real world as the spatial environment 206, as shown in FIG. 1, or similar uniquely identifiable markers. The step 106 of generating a view of the virtual content 204 in the spatial environment 206 based on a predefined virtual viewport 212 defined by the software application 30 comprises a step 124 of animating the virtual content 204 at least one of based on a pre-defined animation sequence and in response to user interaction therewith.
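Steps 120 to 124 can be sketched as anchoring the content pose to the tracked marker's reference frame and advancing a pre-defined animation. Poses are simplified to (x, y, z) translations only, and both function names are assumptions; a real tracker would also carry rotation.

```python
# Simplified anchoring and animation; names and the translation-only pose
# model are assumptions, not terms from the disclosure.
def anchor_to_marker(marker_pose, local_offset):
    """Step 122 sketch: place content at the marker's reference frame plus an offset."""
    return tuple(m + o for m, o in zip(marker_pose, local_offset))

def spin_animation(angle_deg, dt, speed_deg_s=90.0):
    """Step 124 sketch: advance a pre-defined spinning animation by dt seconds."""
    return (angle_deg + speed_deg_s * dt) % 360.0
```

Re-running `anchor_to_marker` every frame with the freshly tracked marker pose keeps the content locked to the marker as the device moves.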
Preferably, the step 104 of generating an instance of the virtual content 204 from the content data 202 by the content engine 28 in a spatial environment 206 comprises a step 126 of generating an instance of the virtual content 204 from the content data 202 by the content engine 28 in a spatial environment 206 defined by the software application 30 in response to a request being made by the software application 30, wherein the request comprises account authentication data providable from the computing device 22.
The virtual content management method 100 further comprises a step 130 of packaging the content data 202 with software libraries 214 for generating the virtual content 204 in the spatial environment 206 into a software container 216, and a step 132 of providing the software container 216 to the computing device 22 in response to a request therefor. The request may be generated automatically via a partner platform or application or may be requested by the user of the computing device 22 via a third-party application or platform. The software container 216 provides an environment with the necessary software components for executing code for, or generating, the virtual content 204, including inherited properties and objects, from the content data 202.
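One possible reading of steps 130 and 132 is bundling the content data and its supporting libraries into a single archive for delivery. Using a zip file with a `content.json` entry and a `lib/` folder is purely an assumption for illustration; the disclosure does not fix a container layout.

```python
# Assumed container layout ("content.json" plus "lib/...") for illustration.
import io
import json
import zipfile

def package_container(content_data: dict, libraries: dict) -> bytes:
    """Step 130 sketch: bundle content data and libraries into one container."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("content.json", json.dumps(content_data))
        for name, code in libraries.items():
            zf.writestr(f"lib/{name}", code)
    return buf.getvalue()

def unpack_content(blob: bytes) -> dict:
    """Read the content data back out of a provided container (step 132)."""
    with zipfile.ZipFile(io.BytesIO(blob)) as zf:
        return json.loads(zf.read("content.json"))
```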
Preferably, the virtual content 204 forms one of a virtual reality environment and a mixed reality environment with the spatial environment 206, and the virtual content 204 is a 2D content, a 3D content or a point cloud. Further, the input data 200 is providable by the software application 30 operating on the computing device 22. The input data 200 can be scanned discretised data of an article, engineering drawings, sketches, coordinates, 2D models or 3D models. Preferably, the virtual content management system 20 may be implemented as a web-based service and platform. A front-end application, for operating on the computing device 22, may be provided for accessing and interacting with the virtual content management system 20. When the virtual content management system 20 is in use or when the virtual content management method 100 is implemented, a user may pre-capture an image or a video of a physical object for generating the input data 200. Alternatively, the input data may be generated using modelling software such as Maya or CAD software such as SolidWorks. The input data 200 may also be generated using a point cloud or a LiDAR scanner. The input data 200 is then uploaded onto the virtual content management system 20 for processing and conversion to the content data 202 by the data conversion module 26.
Once the content data 202 is obtained, the user may access the virtual content management system 20 to preview, edit and further work on the virtual content 204. At this point, animation of the virtual content 204, for example an avatar virtual content with a rotating or spinning animation, may be introduced to the content data 202. The input data 200 may be containerized as the software container 216 at this point depending on the target software application 30 that it is to be used or implemented with.
For example, the software application 30 may be a hybrid app on a smart phone that uses, for example, Apple's ARKit to capture a real-world space, for example a bedroom, as the spatial environment 206 and to establish a reference frame 210 based on the feature mesh generated by the hybrid app from images captured by the camera module of the computing device 22, so that the generated virtual content 204, for example the mentioned avatar or a furniture article, may be located and manipulated as augmented reality content in the real-world spatial environment 206 image in a substantially true-to-scale manner.
Once the virtual content 204 has been generated by the rendering module 32, a portion of which may be packaged in the software container 216, if implemented, the virtual viewport 212 may be manipulated through interacting with the smart phone or by spatially manipulating the smart phone. The virtual content 204 may be resized or repositioned in the spatial environment 206 by manipulation of the UI 34, for example the touch screen of the smart phone. Information and data of the software application 30 will be captured as the app data 208 and sent together with the usage data 214 back to the virtual content management system 20 for analysis by the analytics module 38.
The software application 30 may also be a web-based social media platform, which may also operate as a native or hybrid app, such as Facebook or Instagram, which will be identified and captured as the app data 208 together with other information, for example the corresponding Facebook page address/locator, tags or hashtags, URIs and event names/handlers. For example, the virtual content 204 may be generated in a Facebook live session, for example via the mentioned containerization or with plugins/APIs enabled through the social media platform, where the geo-location of the captured or streamed live spatial environment 206 may be determined, captured and sent to the analytics module 38 for analysis thereby. Therefore, the implementation of the virtual content management method 100 and system 20 is wide-ranging and ideal for MICE events, outdoor expos, online events, museum tours and the like. Beyond being a 3D model, 2D model, point cloud or similar content, the virtual content 204 may also be a hologram generated by projectors as the computing device 22, or virtual reality/augmented reality images or objects generated by an AR or VR headset, for example Google Glass or Oculus headsets.
The virtual content management system 20 further comprises an interface engine 50 for creating an interaction intelligence associated with the virtual content 204, the interaction intelligence capturing and defining non-coding-based user interface designs and flow intelligence for determining the interaction flow between the user and the virtual content 204 by the software application 30. A non-coding-based design interface 52 of the interface engine 50 provides aids such as drag-and-drop actions, drop-down list selections and wizards for a user to define the interaction intelligence. Of course, content, media and snippets of code may be uploaded or integrated with the interaction intelligence as well. The interface engine 50 will also further generate code and resources from the created interaction intelligence and one of associate and package the code and resources with the content data 202. The generated code may further be packaged with the software container 216 based on user requirements.
The interaction intelligence's trigger may be pre-defined and may initiate the interaction intelligence in response to occurrence of a pre-defined at least one of an event and an interaction with at least one of the virtual content 204 and the spatial environment 206. As the spatial environment 206 may be a multiverse space with multiverse capabilities and features, the interaction intelligence allows a content creator user or content owner user to provide further features and services to the user as and when required and appropriate, depending on their intent. For example, when in use, the virtual content 204 may be a virtual piece of furniture with a trigger point 54 pre-defined via the design interface 52 and presented on the software application 30. When the user interacts via touching the trigger point 54, for example anywhere on the furniture, textual and audible content may be presented to the user as part of the interaction intelligence. Options may also be presented to enable the user to find out more about the furniture, to trigger a conversation, chat or callback with a sales staff, or to initiate purchase, payment and delivery arrangements with the business or individual retailing the furniture. In another use case, the user may be walking past a movie billboard in the spatial environment 206 as part of a multiverse. When the user touches the billboard, a trailer of a movie may start to play as part of the interaction intelligence, with the billboard being the virtual content 204. The user may intend to immediately check availability of seats at his favourite cinema and initiate purchase and payment of tickets to the movie as part of the interaction intelligence, thereby improving user experience.
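The trigger behaviour described above resembles a small event dispatcher. The `TriggerPoint` class and the billboard example below are hypothetical sketches of how a pre-defined trigger might invoke interaction-intelligence handlers; nothing here is drawn from the patent itself.

```python
# Hypothetical dispatcher for a pre-defined trigger point.
class TriggerPoint:
    """Fires registered interaction-intelligence handlers for an event."""
    def __init__(self):
        self._handlers = {}

    def on(self, event, handler):
        """Register a handler for a pre-defined event, e.g. 'touch'."""
        self._handlers.setdefault(event, []).append(handler)

    def fire(self, event, **ctx):
        """Invoke all handlers for the event; returns their results."""
        return [handler(**ctx) for handler in self._handlers.get(event, [])]

# Touching the billboard starts the trailer, as in the use case above.
billboard = TriggerPoint()
billboard.on("touch", lambda **ctx: f"play trailer for {ctx['movie']}")
```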
The virtual content management system 20 may further comprise a tokenization module 56 for tokenizing the virtual content 204, and/or its content data 202, into non-fungible tokens (NFTs). This will enable the virtual content management system 20 to establish a marketplace for the creation, trading, licensing and commissioning of edits of the virtual content 204 and its content data 202 by way of the NFTs. Further, the NFTs may be programmed to enable the creator of the virtual content 204 to automatically receive a royalty whenever a transfer of ownership of the NFTs occurs, for example in the form of a percentage of the transaction price.
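Under the percentage-of-price assumption stated above, the royalty computation could be as simple as the following hypothetical helper; in practice an NFT's royalty logic would be encoded in its smart contract rather than in application code, and the default rate here is invented for illustration.

```python
# Hypothetical royalty helper; the 10% default is an invented example value.
def creator_royalty(sale_price, royalty_pct=10.0):
    """Royalty owed to the creator on an ownership transfer of the NFT."""
    return round(sale_price * royalty_pct / 100.0, 2)
```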
Further, the NFTs enable ownership thereof and of the digital assets associated therewith, specifically the virtual content 204 and the content data 202 wherefrom the virtual content 204 is generatable and derived, and provide a way for the virtual content 204 to be bought, sold and transacted. Hence, the virtual content 204 will form part of a digital asset library 58 which may be enabled, for example through containerization, for use on the virtual content management system 20 and other third-party platforms to enable the creation of an eco-system for digital content creation and use.
The virtual content management system 20 can also be implemented as a machine to perform the virtual content management method 100, the machine comprising a machine-readable medium having stored therein a plurality of programming instructions which, when executed, cause the machine to convert the input data 200 in the first format into content data 202 in the second format, generate an instance of the virtual content 204 from the content data 202 in the spatial environment 206 defined by the software application 30 having app data 208 associated therewith, and generate a view of the virtual content 204 in the spatial environment 206 based on the predefined and spatially manipulatable virtual viewport 212 defined by the software application 30 operating from the computing device 22, wherein user interaction with at least one of the virtual content 204 and the spatial environment 206 is performed via at least one of interacting with a user interface (UI 34) of the computing device 22 and spatially manipulating the computing device 22.
Aspects of particular embodiments of the present disclosure address at least one aspect, problem, limitation, and/or disadvantage associated with existing virtual content management methods. While features, aspects, and/or advantages associated with certain embodiments have been described in the disclosure, other embodiments may also exhibit such features, aspects, and/or advantages, and not all embodiments need necessarily exhibit such features, aspects, and/or advantages to fall within the scope of the disclosure. It will be appreciated by a person of ordinary skill in the art that several of the above-disclosed structures, components, or alternatives thereof, can be desirably combined into alternative structures, components, and/or applications. In addition, various modifications, alterations, and/or improvements may be made by a person of ordinary skill in the art to the various embodiments that are disclosed, within the scope of the present disclosure, which is limited only by the following claims.

Claims

1. Virtual content management method comprising: converting input data in a first format into content data in a second format by a data conversion module, the content data being descriptive of and for generating virtual content therewith; generating an instance of the virtual content from the content data by a content engine in a spatial environment defined by a software application having app data associated therewith, the virtual content being located with a reference frame in the spatial environment, the spatial environment being at least one of a virtual space, a real-world space and a locator being associated therewith; and generating a view of the virtual content in the spatial environment based on a predefined viewport defined by the software application operating from a computing device, wherein user interaction with at least one of the virtual content and the spatial environment is performed via at least one of interacting with a user interface (UI) of the computing device and spatially manipulating the computing device.
2. The virtual content management method as in claim 1, further comprising: tracking and capturing usage data comprising user interaction with the virtual content and the spatial environment.
3. The virtual content management method as in claim 2, the usage data further comprising at least one of a user identifier associated with the user, user profile associated with the user, captured interactions between the user and the virtual content and spatial environment, usage timings, URI of the software application and event data associated with the spatial environment.
4. The virtual content management method as in claim 3, the usage data further comprising geo-location data of computing device and geo-location data associated with the spatial environment.
5. The virtual content management method as in claim 4, further comprising: performing data analysis on the usage data to identify at least one of trends, correlations, data models, key influences and insights therefrom.
6. The virtual content management method as in claim 1, the software application being one of a web application, a native application, a hybrid application and a containerized application.
7. The virtual content management method as in claim 6, the at least one of a virtual space, a real-world space and a locator being at least one of a substantially real-time image captured by the computing device, a substantially real-time image streamed to the computing device, virtual spatial environment, and a mixed reality space generated from the substantially real-time image and the virtual spatial environment.
8. The virtual content management method as in claim 1, generating an instance of the virtual content from the content data by a content engine in a spatial environment defined by a software application comprises: tracking position of a reference marker in the spatial environment; and locating the virtual content to a reference marker defining the reference frame in the spatial environment.
9. The virtual content management method as in claim 1, generating a view of the virtual content in the spatial environment based on a predefined viewport defined by the software application operating from a computing device comprising: animating the virtual content at least one of based on a pre-defined animation sequence and in response to user interaction therewith.
10. The virtual content management method as in claim 1, further comprising: packaging the content data with software libraries for generating the virtual content in the spatial environment into a software container; and providing the software container to the computing device in response to a request therefor.
11. The virtual content management method as in claim 1, generating an instance of the virtual content from the content data by the content engine in a spatial environment defined by a software application comprising: generating an instance of the virtual content from the content data by a content engine in a spatial environment defined by the software application in response to a request being made by the software application, wherein the request comprising account authentication data providable from the computing device.
12. The virtual content management method as in claim 1, the virtual content forming one of a virtual reality environment and a mixed reality environment with the spatial environment.
13. The virtual content management method as in claim 1, the virtual content being a 2D content, a 3D content and a point cloud.
14. The virtual content management method as in claim 1, the input data being providable by the software application operating on the computing device, wherein the input data is scanned discretised data of an article, engineering drawings, sketches, coordinates, 2D models and 3D models.
15. The virtual content management method as in claim 1, further comprising: creating an interaction intelligence associated with the virtual content by an interface engine, the interaction intelligence capturing and defining non-coding-based user interface designs and flow intelligence for determining interaction flow between the user and the virtual content by the software application; and generating codes and resources from the created interaction intelligence and one of associating and packaging the codes and resources with the content data.
16. The virtual content management method as in claim 1, further comprising: initiating the interaction intelligence in response to occurrence of a pre-defined at least one of an event and an interaction with at least one of the virtual content and the spatial environment.
17. The virtual content management method as in claim 1, further comprising: tokenizing at least one of the virtual content and its content data into non-fungible tokens (NFTs) by a tokenization module.
18. Virtual content management system comprising: a data conversion module for converting input data in a first format into content data in a second format, the content data being descriptive of and for generating virtual content therewith; a content engine for generating an instance of the virtual content from the content data in a spatial environment defined by a software application having app data associated therewith, the virtual content being located with a reference frame in the spatial environment, the spatial environment being at least one of a virtual space, a real-world space and a locator being associated therewith; and a rendering module for generating a view of the virtual content in the spatial environment based on a predefined viewport defined by the software application operating from a computing device, wherein user interaction with at least one of the virtual content and the spatial environment is performed via at least one of interacting with a user interface (UI) of the computing device and spatially manipulating the computing device.
19. The virtual content management system as in claim 18, further comprising: an analyser module for performing data analysis on usage data to identify at least one of trends, correlations, data models, key influences and insights therefrom, the usage data comprises tracked and captured user interaction with the virtual content and the spatial environment, the usage data further comprises at least one of a user identifier associated with the user, user profile associated with the user, captured interactions between the user and the virtual content and spatial environment, usage timings, URI of the software application and event data associated with the spatial environment, wherein the usage data further comprising geo-location data of the computing device and geo-location data associated with the spatial environment.
The virtual content management system as in claim 18, the software application being one of a web application, a native application, a hybrid application and a containerized application, the at least one of a virtual space, a real-world space and a locator being at least one of a substantially real-time image captured by the computing device, a substantially real-time image streamed to the computing device, a virtual spatial environment, and a mixed reality space generated from the substantially real-time image and the virtual spatial environment, and the virtual content being animated at least one of based on a pre-defined animation sequence and in response to user interaction therewith.

The virtual content management system as in claim 18, wherein an instance of the virtual content is generated from the content data by a content engine in a spatial environment defined by a software application having app data associated therewith, and wherein a position of a reference marker in the spatial environment is tracked for locating the virtual content, the reference marker defining the reference frame in the spatial environment.

The virtual content management system as in claim 18, generating an instance of the virtual content from the content data by a content engine in a spatial environment defined by a software application comprising:
the content engine being further for generating an instance of the virtual content from the content data in the spatial environment defined by the software application in response to a request being made by the software application, wherein the request comprises account authentication data providable from the computing device, the virtual content forming one of a virtual reality environment and a mixed reality environment with the spatial environment, the virtual content being at least one of 2D content, 3D content and a point cloud, and wherein the input data is scanned discretised data of an article, engineering drawings, sketches, coordinates, 2D models and 3D models.

The virtual content management system as in claim 18, further comprising: a tokenization module tokenizing at least one of the virtual content and its content data for generating non-fungible tokens (NFTs) therefrom.
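The tokenization module's role — minting a non-fungible token from the virtual content or its content data — might be sketched as deriving a deterministic token record from a content hash. Real NFT minting would involve a blockchain smart contract, which is out of scope here; every name and field below is an assumption for illustration.

```python
import hashlib
import json

def tokenize_content(content_data: bytes, owner: str) -> dict:
    # Derive a stable, unique token identifier from the content itself,
    # so identical content data always maps to the same token id.
    token_id = hashlib.sha256(content_data).hexdigest()
    return {
        "token_id": token_id,
        "owner": owner,
        "standard": "NFT-like (illustrative)",
    }

record = tokenize_content(b"<3D model bytes>", owner="u1")
print(json.dumps(record, indent=2))
```

Hashing the content data gives the non-fungibility property in miniature: distinct content yields distinct token identifiers, while re-tokenizing the same content reproduces the same identifier.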
A machine comprising a machine-readable medium having stored therein a plurality of programming instructions which, when executed, cause the machine to: convert input data in a first format into content data in a second format, the content data being descriptive of and for generating virtual content therewith; generate an instance of the virtual content from the content data in a spatial environment defined by a software application having app data associated therewith, the virtual content being located with a reference frame in the spatial environment, the spatial environment being at least one of a virtual space, a real-world space and a locator being associated therewith; and generate a view of the virtual content in the spatial environment based on a predefined viewport defined by the software application operating from a computing device, wherein user interaction with at least one of the virtual content and the spatial environment is performed via at least one of interacting with a user interface (UI) of the computing device and spatially manipulating the computing device.
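Read as software, the machine-readable-medium claim describes a three-stage pipeline: convert input data into content data, instantiate that content in a spatial environment, and render a viewport-constrained view. The sketch below strings these stages together; the stage names follow the claim, but all signatures, types, and values are assumptions.

```python
from dataclasses import dataclass

def convert(input_data: str) -> dict:
    # Data conversion: input data in a first format (e.g. a raw model
    # reference) into content data in a second, descriptive format.
    return {"kind": "3d_model", "source": input_data}

@dataclass
class Instance:
    content: dict
    position: tuple  # location of the reference frame in the spatial environment

def generate_instance(content_data: dict, position=(0.0, 0.0, 0.0)) -> Instance:
    # Content engine: place the virtual content within the spatial
    # environment defined by the software application.
    return Instance(content=content_data, position=position)

def render(instance: Instance, viewport=(1920, 1080)) -> str:
    # Rendering module: produce a view constrained by the predefined
    # viewport of the software application.
    w, h = viewport
    return f"{instance.content['kind']} at {instance.position} in {w}x{h} view"

view = render(generate_instance(convert("cube.obj"), position=(1.0, 0.0, 2.0)))
print(view)
```

Each function corresponds to one claimed instruction; in a real implementation the render stage would draw into a graphics context rather than return a string.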
PCT/SG2022/050730 2021-10-21 2022-10-12 Method and system for managing virtual content Ceased WO2023069016A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
SG10202111721R 2021-10-21 2021-10-21
SG10202200291Q 2022-02-07 2022-02-07

Publications (1)

Publication Number Publication Date
WO2023069016A1 true WO2023069016A1 (en) 2023-04-27

Family

ID=86059755

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SG2022/050730 Ceased WO2023069016A1 (en) 2021-10-21 2022-10-12 Method and system for managing virtual content

Country Status (1)

Country Link
WO (1) WO2023069016A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008026817A1 (en) * 2006-09-01 2008-03-06 Qtel Soft Co., Ltd. System and method for realizing virtual reality contents of 3-dimension using ubiquitous sensor network
US20190332400A1 (en) * 2018-04-30 2019-10-31 Hootsy, Inc. System and method for cross-platform sharing of virtual assistants
US20200043242A1 (en) * 2018-07-20 2020-02-06 Guangdong Virtual Reality Technology Co., Ltd. Interactive method for virtual content and terminal device
US20200312039A1 (en) * 2016-06-10 2020-10-01 Dirtt Environmental Solutions, Ltd Mixed-reality and cad architectural design environment
US20210026998A1 (en) * 2019-07-26 2021-01-28 Geopogo Rapid design and visualization of three-dimensional designs with multi-user input

Similar Documents

Publication Publication Date Title
KR102744683B1 (en) 3D avatar plugin for third-party games
KR102873670B1 (en) Create a gesture-based shared AR session
US11250887B2 (en) Routing messages by message parameter
KR102577630B1 (en) Display of augmented reality content in messaging applications
CN115769561B (en) Method and system for third party modification of camera user interface
CN115868149B (en) Camera user interface for generating content
CN114868101B (en) Marker-based shared augmented reality session creation
KR20220128664A (en) A selection of avatars to be included in the on-demand video
KR20220128665A (en) Video generation system for rendering frames on demand
KR20230004773A (en) Event Overlay Invitation Messaging System
CN115516406B (en) Depth estimation using biological data
CN115605897A (en) Augmented reality experience for physical products in messaging systems
US20210329310A1 (en) System and method for the efficient generation and exchange of descriptive information with media data
Rattanarungrot et al. The application of augmented reality for reanimating cultural heritage
WO2023069016A1 (en) Method and system for managing virtual content
KR20260005355A (en) Predicting conversion rates
KR20250030507A (en) A system for new platform recognition
Shin et al. Enriching natural monument with user-generated mobile augmented reality mashup
KR20250037513A (en) Incremental scanning for custom landmarks
CN119278428A (en) Context cards for media supplements
KR101678468B1 (en) Method and apparatus for sharing ordering information of online shopping
KR102476006B1 (en) Electronic commerce integrated meta-media generating method, distributing system and method for the electronic commerce integrated meta-media
KR20250059248A (en) How to provide experience analysis data for ar content provision services
CN121388007A (en) Product message display methods, apparatus, devices, and storage media
KR20230163073A (en) Method for manufacturing of augmented reality contents

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22884174

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 21.08.2024)

122 Ep: pct application non-entry in european phase

Ref document number: 22884174

Country of ref document: EP

Kind code of ref document: A1