WO2024236160A1 - A system for delivering contextual information - Google Patents
- Publication number
- WO2024236160A1 (PCT/EP2024/063634)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- server
- information
- reading device
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
- G06Q30/0641—Electronic shopping [e-shopping] utilising user interfaces specially adapted for shopping
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
Definitions
- the present invention relates to a system and a method for delivering contextual information and interactive experiences associated with an object.
- a system for delivering contextual information and interactive experiences in relation to an object comprising:
- a reading device configured to capture information from a unique identification element carried by an object, and to send the captured information to the server, the reading device being further configured to capture user inputs received from a user, and to send the user inputs to the server, and
- a communication network configured to facilitate communication between the reading device and the server,
- wherein the server, in particular using an identification module, is configured to identify the object based on the information captured from the unique identifier element, and to access supplementary information about the object that is stored in the server, e.g. in an object database, and
- wherein the server, in particular using an outputting module, is configured to deliver the supplementary information to the reading device, and
- wherein the system, in particular the server and/or the reading device, is configured to interpret the supplementary information and the user inputs, and to deliver contextual information and interactive experiences to the user, wherein preferably the contextual information is based on user preferences, user location, user past interactions and/or environment factors.
- the invention is based on the basic idea of integrating AI-powered language models and AR applications into one system, thereby offering personalized interactions, recommendations, and immersive experiences based on scanned unique identifiers linked to an object. This significantly enhances user engagement and satisfaction.
- the system comprises a server, a reading device and a communication network configured to facilitate communication between the reading device and the server.
- the reading device is configured to capture information from a unique identification element carried by an object and to send the captured information to the server. Further, the reading device is configured to capture user inputs received from a user and to send the user inputs to the server.
- the server is configured to identify the object based on the information captured from the unique identifier element, access supplementary information about the object that is stored in the server, and deliver the supplementary information to the reading device.
- the server and/or the reading device in particular comprise large language models for interpreting the supplementary information and the user inputs and delivering contextual information and interactive experiences to the user.
- the contextual information is based on user preferences, user location, user past interactions and/or environment factors.
- the object comprises a consumer product, an event poster or a visual display element.
- the server, in particular a processing module, and/or the reading device comprises AI-powered language models.
- the environment factors relate to the user's surroundings and/or the object's location, and comprise factors such as weather conditions, ambient noise levels, time, network connectivity, lighting conditions, or proximity to other objects or individuals.
- the reading device (or a user device) and the server are configured to exchange data and information over the communication network.
- the reading device is further configured to receive information and data, e.g. in the form of text, video and/or audio, from a user.
- the user device (or the reading device) is further configured to record a video and/or an audio.
- the unique identifier is incorporated in a QR code, NFC, RFID, UHF, AutoID, or Bluetooth tag that is attached to the object.
- the system initiates a backend lookup within the server and the object database included therein to retrieve detailed metadata associated with the product from the manufacturer.
- NFC enables close-range interactions for contactless transactions and data exchange.
- RFID, operating across different frequencies like UHF, empowers precise and real-time tracking of assets in logistics, retail, and supply chain management.
- AutoID, encompassing various identification methods, streamlines data capture and improves accuracy.
- Bluetooth tags leverage wireless communication, enabling versatile and efficient asset tracking, personal item location, and proximity-based interactions.
- integration of these technologies into the system contributes to a connected ecosystem, promoting automation, accuracy, and enhanced user convenience.
- the system further comprises a database (e.g. a contextual information database), wherein the database is in communication with the server and wherein the server, e.g. a processing module of the server, is configured to process data and information retrieved from the database, and/or to interpret natural language queries received via the reading device from a user, and/or to generate contextually relevant responses based on learned language patterns and contextual information stored in the contextual information database.
- a processing module includes a GPT-based Natural Language Processing (NLP) module that includes AI-powered language models.
- the language models are further configured to be incorporated in the reading device and/or the server.
- the contextual information database may be a module included in (integrated in) the server or a separate unit that is in communication with the server.
- the server, e.g. using a user profile management module, is configured to store user profiles, wherein the user profiles each include at least one of user personal data, user preferences, user inputs, and a historical record of user inputs.
- an outputting module is further configured to process the user profiles, and to deliver contextual information about the object, based on the processed supplementary information, the processed user inputs and the processed user profiles.
- the system, e.g. the server, further comprises a real-time interaction unit configured to process user interactions in real time and to integrate content generated by the processing module seamlessly into interactive experiences within an Augmented Reality (AR) environment.
- the real-time interaction unit may be a module included in (integrated in) the server or a separate unit that is in communication with the server.
- the real-time interaction unit comprises a rendering module (an AR Content Rendering module) configured to render the AR environment, thereby presenting information aligned with the user's context and preferences.
- the rendering module is configured to process and display augmented reality content within a given environment.
- the system, e.g. the real-time interaction unit, further comprises an AR device (augmented reality device) configured to communicate over the communication network with the server, in particular via a real-time interaction unit of the server.
- the AR devices include sensors, cameras, and displays to capture the real-world environment and then project or superimpose digital content onto the user's view in real-time.
- Examples of AR devices include AR glasses, smartphones, tablets, and specialized AR headsets.
- users can interact with and experience the augmented content seamlessly integrated into their surroundings. This, in turn, offers a variety of applications in healthcare, gaming around products, and navigation.
- the rendering module is configured to interpret the information generated by other parts of the system, such as the GPT-based NLP module (the processing module) and the real-time interaction module, and to translate it into visual elements overlaid on the real-world view seen through an AR device, like a smartphone or AR glasses.
- the rendering module for example comprises an AR display processor that is characterized by its seamless integration of virtual content into the real-world environment, enriching the user's experience within an augmented reality system.
- the real-time interaction unit ensures dynamic and responsive rendering of virtual objects. Due to the spatial alignment capability of the rendering module, it is possible to accurately position digital content within the user's physical surroundings, contributing to a convincing and immersive augmented reality encounter.
- the rendering module is configured to adapt the display of virtual content based on contextual information and user interactions. For example, it may adjust the size, position, or appearance of virtual objects in response to changes in the user's location or gaze, contributing to a more natural and user-friendly experience.
- the real-time interaction unit, in particular the rendering module, is configured to adjust the size, position, or appearance of virtual objects in response to changes in the user's location or gaze, contributing to a more natural and user-friendly experience.
- An additional advantage of the real-time interaction unit lies in its ability to enhance the overall quality, realism, and engagement of augmented reality interactions by effectively blending digital and physical elements.
- the system in particular, the real-time interaction unit, facilitates an enhanced user experience by seamlessly integrating digital information into the user's physical environment.
- This can include features like interactive 3D models, information overlays, or contextually relevant visual elements, contributing to a more engaging and immersive AR interaction.
- This adaptability enhances the system's ability to provide relevant and timely experiences, contributing to a more seamless and user-friendly interaction with objects.
- the server further comprises an interface unit configured to allow users and/or manufacturers to programmatically configure or customize software stored in the server.
- the system further comprises a security and privacy module configured to implement robust protocols to protect user authentication data and personalized information stored therein.
- the system further comprises a setting unit, wherein the setting unit comprises customization tools and setting tools for customizing and configuring parameters and settings of the system.
- the configuration unit enables administrators or manufacturers or users to customize system settings based on specific preferences or requirements.
- the configuration tools enable the users and/or manufacturers to tailor the system to meet their specific needs by adjusting configuration settings. Also changes in system requirements or preferences can be accommodated without major code alterations, reducing maintenance overhead.
- the personalization (customization) tools allow users to create an interface and functionality that aligns closely with their unique preferences and requirements.
- Customization often leads to higher user satisfaction and adoption rates as individuals can shape the system to match their workflow.
- configuration and customization tools can be implemented through a combination of software and hardware features.
- configuration and customization tools are, for example, provided via graphical user interfaces (GUIs) that allow users to interact with and modify system settings. The GUIs provide an intuitive way for users to adjust settings without requiring technical expertise.
- configuration and customization tools may be embodied as physical controls such as switches, buttons, or dials that can be used for configuration and/or customization. These controls advantageously allow users to set preferences or adjust parameters directly on the hardware.
- the system in particular the server, further comprises an interface unit configured to allow users and/or developers to programmatically configure or customize the software.
- the interface unit comprises a software development kit and/or Application Programming Interfaces (APIs).
- SDKs help to accelerate the development process by providing a standardized and efficient way for developers to interact with and leverage the capabilities of a particular software platform or hardware device.
- the system further comprises a user interaction module configured to allow a multi-user interaction for exchanging data and information among a plurality of users about the object and its use.
- the server further comprises a collaboration module, included in the multi-user interaction unit, that is configured to create a shared session among multiple reading devices when the information of the same unique identifier element is received by the system around the same time.
- the system allows synchronized participation in multi-user activities.
- the system further comprises a learn unit, including a feedback and learning mechanism, configured to collect user feedback and behavior data to continuously improve the personalized experiences, for example, delivered by the processing module and AR application.
- the learn unit, e.g. via the feedback and learning mechanism, is configured to collect user feedback, analyze user interactions, and employ machine learning algorithms to iteratively improve system performance.
- the learn unit, in particular via the feedback and learning mechanism, is configured to utilize user-provided feedback, behavioral data, and other relevant metrics to adapt its responses, enhance user experiences, and refine underlying algorithms.
- This continuous learning loop advantageously facilitates the system's ability to evolve, addressing user preferences, and ensuring optimal functionality over time.
- the implementation of a feedback and learning mechanism includes, for example, collecting user feedback and system data, analyzing patterns, and adjusting algorithms or configurations for continuous enhancement.
- the learn unit comprises smart devices where firmware updates can be informed by user interactions and performance data, and/or adaptive hardware components that are configured to learn from usage patterns to optimize functionality, and/or sensors that are configured to adjust parameters based on real-time feedback, collectively forming an iterative improvement loop.
- touchscreens or physical buttons on devices may be adjusted based on user feedback.
- the feedback and learning mechanism facilitates ongoing learning and adaptation to user or environmental factors, contributing to improved efficiency and user experience.
- the system further comprises a data insights unit (e.g. an analytics and reporting module) configured to track user engagement, AR usage patterns, and the effectiveness of the delivered contextual information for future optimizations.
- the data insights unit is configured to collect and process diverse data sources, ranging from user engagement metrics to system performance statistics.
- the data insights unit comprises algorithms for data analysis (i.e. software-driven algorithms and logic), utilizing methods such as machine learning and statistical modeling to derive actionable insights.
- the data insights unit is further configured to generate comprehensive reports, visualizations, or dashboards that convey valuable information to users or manufacturers.
- the data insights unit comprises accelerators or processors embedded therein, which are optimized for data analytics tasks to thereby enhance performance and efficiency.
- the data insights unit may further comprise data storage devices and memory modules, especially in scenarios where large datasets need to be efficiently managed.
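- As a non-limiting illustration of how such an analytics and reporting module might aggregate raw interaction events into engagement metrics, consider the following minimal Python sketch; the event schema and the function name engagement_report are assumptions made for illustration, not part of the disclosure.

from collections import Counter

def engagement_report(events: list[dict]) -> dict:
    """Aggregate raw interaction events into simple engagement metrics."""
    scans_per_object = Counter(e["uid"] for e in events)
    ar_sessions = sum(1 for e in events if e.get("kind") == "ar_session")
    return {
        "total_events": len(events),
        "scans_per_object": dict(scans_per_object),
        "ar_sessions": ar_sessions,
    }

events = [
    {"uid": "wine-42", "kind": "scan"},
    {"uid": "wine-42", "kind": "ar_session"},
    {"uid": "poster-9", "kind": "scan"},
]
print(engagement_report(events))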
- the present invention further provides a method for delivering contextual information and interactive experiences about an object, in particular using the system described above, the method comprising the steps of:
- identifying, using the server, in particular via an identification module, an object based on information of a unique identifier element captured by a reading device, and retrieving, using the server, supplementary information and data about the object, and
- processing, using the server, in particular using a processing module and/or an outputting module, the supplementary information and data, and delivering the same to the reading device.
- the method further comprises the step of rendering AR experiences, using a real-time interaction unit, based on user profiles and preferences stored in the database.
- the method further comprises the steps of:
- receiving information and/or queries from the user, e.g. using an input module, from a reading device and/or an AR device;
- processing, using the server, e.g. the processing module, the received information and/or queries, and generating contextually relevant responses based on learned language patterns and contextual information stored in a database; and
- delivering, using the server, e.g. the outputting module, personalized and/or contextual information to the user, based on the processed information and/or queries, via the reading device and/or the AR device.
- the method enhances the level of personalization experiences through the recipient's interaction with the scanned QR code.
- the method allows for interactions with the object to occur at different times.
- the disjoint time interaction allows for flexibility in how users engage with the object. Users can customize and personalize objects at their convenience before giving them as gifts.
- the method is configured to dynamically adjust its content delivery and/or presentation based on the primed information (received information).
- a computer-readable storage medium is provided, wherein the storage medium is configured to store instructions that, when executed by a processor, cause the system according to the present invention to perform the steps of the method as defined above.
- the present invention enables users to scan a QR code, NFC-tag, RFID-tag, or ultra-high frequency RFID-tag (UHF-tag) on an object, e.g. a consumer product or an event poster, and then interact with the system that provides contextual information about the product or event.
- the unique identifier carried by the QR code or codes allows for a lookup in the backend, which returns specific metadata attached to the product by the manufacturer (e.g., pertaining to its contents (ingredients, nutrition facts, etc.), supply chain details, branding guidelines, time- and location-sensitive information, etc.)
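- As a non-limiting illustration of this backend lookup, the following minimal Python sketch resolves a scanned unique identifier to product metadata held in an object database; the ObjectMetadata fields, the identifier format and the in-memory store are assumptions made for illustration only.

from dataclasses import dataclass, field

@dataclass
class ObjectMetadata:
    name: str
    ingredients: list = field(default_factory=list)
    nutrition_facts: dict = field(default_factory=dict)
    supply_chain: str = ""

# Hypothetical in-memory object database keyed by the unique identifier.
OBJECT_DB = {
    "uid-0001": ObjectMetadata(
        name="Sparkling Water 0.5l",
        ingredients=["water", "carbon dioxide"],
        nutrition_facts={"energy_kcal": 0},
        supply_chain="bottled in plant A, distributed via hub B",
    ),
}

def lookup_metadata(uid: str) -> ObjectMetadata | None:
    """Backend lookup: resolve a scanned unique identifier to its metadata."""
    return OBJECT_DB.get(uid)

print(lookup_metadata("uid-0001"))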
- the system's applications may use generative AI, such as GPT-type large language models (LLMs), and AR applications that are fed with data generated by the LLMs.
- Multi-user interactions are managed by the backend system, which receives scanned IDs and creates a shared application session among multiple devices when the same ID is scanned around the same time.
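- A minimal Python sketch of such session management is given below; the 60-second window is an assumption, since the disclosure only specifies that the same identifier is scanned "around the same time", and all names are illustrative.

import time

SESSION_WINDOW_S = 60  # assumed window; the disclosure says "around the same time"
_sessions: dict[str, dict] = {}  # unique identifier -> {"created": ts, "devices": set}

def register_scan(uid: str, device_id: str, now: float | None = None) -> dict:
    """Join the scanning device to a shared session if one is still open
    for this identifier, otherwise open a new session."""
    now = time.time() if now is None else now
    session = _sessions.get(uid)
    if session is None or now - session["created"] > SESSION_WINDOW_S:
        session = {"created": now, "devices": set()}
        _sessions[uid] = session
    session["devices"].add(device_id)
    return session

register_scan("uid-0001", "phone-A", now=0.0)
print(register_scan("uid-0001", "phone-B", now=10.0)["devices"])  # both devices share one session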
- the interaction can also be disjoint in time, with one user scanning the object first, entering some specific information priming the system and personalizing the object, and then giving it as a gift to another person.
- upon scanning the QR code, the recipient would get a personalized experience based on the input by the first user (e.g., personalizing gifts, such as a wine bottle).
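- The disjoint-in-time interaction could be sketched as follows; this hedged Python example stores a "priming" message keyed by the unique identifier so that a later scan by the recipient returns a personalized experience. All names and the string-based experience are illustrative assumptions.

# Hypothetical store of priming information left by the first user.
_primed: dict[str, str] = {}

def prime_object(uid: str, message: str) -> None:
    """First user personalizes the object before gifting it."""
    _primed[uid] = message

def scan_as_recipient(uid: str) -> str:
    """Recipient's scan returns an experience personalized by the primer."""
    note = _primed.get(uid)
    return f"Welcome! A message for you: {note}" if note else "Generic product page"

prime_object("wine-42", "Happy anniversary, Alex!")
print(scan_as_recipient("wine-42"))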
- the QR code serves as a contextual entry point for users, with the default interface being a mobile web app or an App Fragment (or its iOS equivalent) that allows loading native applications without prior installation from the App Store.
- SDKs are also offered to partner companies for embedding in their own apps (e.g., a Mattel app can be opened when the user scans a box with Barbie).
- the LLMs provide an extremely flexible way of interaction, allowing users to ask specific questions that would otherwise require more time to find out from static product documentation/description. They can also solve adjacent pain points for users (e.g., coming up with personalized toasts for a wine bottle).
- the backend system can track multiple activities of users over time (via the scanned QR codes) and include this information in the context of the LLMs to work with.
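- A minimal sketch of how tracked scan activity might be folded into the language-model context is shown below; the prompt layout and function names are assumptions, not a disclosed format.

from collections import defaultdict

_history: dict[str, list[str]] = defaultdict(list)

def record_scan(user_id: str, uid: str) -> None:
    _history[user_id].append(uid)

def build_llm_context(user_id: str, query: str) -> str:
    """Concatenate prior scans into the prompt so the language model
    can answer with awareness of the user's activity over time."""
    scans = ", ".join(_history[user_id]) or "none"
    return f"Previously scanned items: {scans}\nUser question: {query}"

record_scan("u1", "wine-42")
record_scan("u1", "cheese-7")
print(build_llm_context("u1", "What should I serve with these?"))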
- the language model may be hosted on a remote server (e.g. a backend server) or run on the mobile device itself, which would increase the privacy of the product and its availability in conditions where connectivity is insufficient.
- the system includes a backend server that hosts the AI-powered language models and manages the metadata associated with unique identifiers.
- the backend server receives scanned unique identifier data from user devices and returns relevant metadata and personalized interactions based on the user's context and preferences.
- a user interface can be a mobile web app, an App Clip or Instant App, or a native application installed on the user's device.
- the system also offers SDKs for partner companies to embed the functionality within their own applications, providing seamless integration and a consistent user experience across different devices and applications.
- the language models provide several benefits, including the ability to answer specific user questions that would otherwise require time-consuming manual searches, addressing adjacent pain points, and offering personalized experiences based on user input, such as toasts for a wine bottle.
- the system further supports AR applications that provide users with visualizations and real-time interactions based on the product or event information. For example, an AR application can visualize a proposed makeup plan on the user's face, allowing them to see how different cosmetics products would look together before making a purchase.
- the system enables multi-user interactions by creating a common application session among multiple devices that scan the same unique identifier around the same time. This allows users to engage in common activities or share personalized experiences based on the same unique identifier.
- the backend server manages and synchronizes data between multiple users, facilitating seamless multi-user experiences.
- the present invention provides a comprehensive system for delivering contextual information and interactive experiences to users based on scanned unique identifiers.
- the system offers a seamless, personalized, and engaging experience for users across various consumer products and events.
- This invention has the potential to revolutionize how users interact with products and events, providing context-aware and personalized information, entertainment, and assistance at their fingertips.
- the present invention comprises aspects that differentiate it from existing solutions in the market. These include:
- Context-aware interaction: By scanning unique identifiers such as QR codes, NFC/RFID/UHF, the system provides users with contextual information and personalized experiences based on the specific product or event. This context-aware interaction enables users to quickly access relevant information without manually searching for it, streamlining the user experience.
- AI-powered language models: The integration of advanced AI-powered language models, such as GPT-type large language models, allows for flexible and contextually relevant interactions with users. These models can understand user input and provide personalized responses, addressing specific questions and adjacent pain points that traditional static documentation may not cover. The option to host these models on a remote server or on the user's mobile device offers increased privacy and availability, making the system more versatile and adaptable to various situations.
- AR applications: The support for AR applications provides users with real-time visualizations and interactions based on the product or event information. This allows users to have a more immersive and engaging experience, such as visualizing makeup plans on their own face or seeing how furniture would look in their room before purchasing.
- Multi-user interaction: The system's ability to facilitate multi-user interactions through common application sessions and backend server synchronization enables users to engage in shared experiences and activities with others who have scanned the same unique identifier. This fosters a sense of community and enhances the overall user experience.
- the present invention offers a unique combination of context-aware interaction, AI-powered language models, AR applications, multi-user interaction capabilities, and seamless integration with third-party applications.
- These technical effects and novel aspects significantly enhance the user experience and provide a versatile, personalized, and engaging interaction with consumer products and events.
- Fig. 1 a schematic diagram of a system according to the present invention;
- Fig. 2 a schematic diagram of a system according to the present invention, indicating modules incorporated in a server;
- Fig. 3 a diagram of a method according to the present invention.
- Fig. 4 a diagram of the method according to the present invention, indicating further method steps.
- In Fig. 1, an example of a system 100 for delivering information, in particular contextual information and interactive experiences, about an object 102 or an item is illustrated.
- the system 100 comprises a reading device 106 (e.g. a user device) and a server 104.
- the server 104 and the reading device 106 are in data communication over a communication network 110.
- the reading device 106 is configured to read a unique identifier element 108 (e.g. a barcode, a QR code, an NFC tag, an RFID tag or an ultra-high frequency RFID tag) attached to or printed on an object.
- The object may comprise any consumable products (foods or beverages), posters, postcards, cosmetic products, medical products, electronic devices, or packaging of products.
- the QR code or codes contain a distinctive identifier that facilitates a backend lookup (e.g. the server-side), retrieving specific metadata associated with the product from the manufacturer.
- This metadata includes information about the product's contents (such as ingredients and nutrition facts), details about the supply chain, branding guidelines, and time- and location-specific data.
- the system 100 further comprises an augmented reality device 128 which is in communication with the reading device 106 and the server 104 over the communication network 110.
- the system 100 utilizes AI-powered language models, such as GPT-type large language models, to provide personalized and contextually relevant information and entertainment experiences for users. These models can be hosted on a remote server or run on the reading device, e.g. the user's mobile device, enhancing privacy and availability in varying connectivity conditions.
- the system's applications leverage generative AI, such as large language models (LLMs) like GPT, as well as augmented reality (AR) applications that utilize data generated by these LLMs.
- the system 100 may further comprise other reading devices 106 that are in communication with the server 104 over the communication network 110.
- the system 100 is configured to enable multi-user interactions. This enables users to communicate about the product and share their experiences.
- the system 100 is configured to receive the information of the unique identifier element (e.g. scanned IDs) and to create a shared application session among multiple reading devices when the same ID is scanned around the same time or at a different time (disjoint in time).
- the system 100 further comprises a database 122 (e.g. a contextual information database), a learn unit 136 and a data insights unit 138.
- the database 122, the learn unit 136 and the data insights unit 138 are in communication with the server 104.
- the database 122 is configured to store learned language patterns and contextual information.
- the learn unit 136 for example includes feedback and learning algorithms for collecting user feedback and behavior data to continuously improve the personalized experiences.
- the user experiences are, for example, collected from the server and AR device and stored in the server.
- the data insights unit 138 (e.g. analytics and reporting module) is configured to track user engagement, AR usage patterns, and the effectiveness of the delivered contextual information.
- Fig. 2 illustrates a schematic diagram of the system 100 indicating the components of the server 104.
- the server 104 comprises a processing module 120, which is in communication with a plurality of modules and has access to information and data received by these modules.
- the server 104 comprises an identification module 112 that is configured to receive and identify the unique identifier.
- the identification module 112 is further configured to access supplementary information about the object that is stored in an object database 111 in the server 104.
- An outputting module 114 is further included in the server 104 that is configured to process the supplementary information and deliver the supplementary information to the reading device 106. For example, when a user scans a QR code (or the NFC, RFID or Bluetooth tag) of the unique identifier element 108 via the reading device 106, the system 100 is configured to initiate a backend lookup within the server 104 and the object database 111 included therein to retrieve detailed metadata associated with the object 102, e.g. a product from the manufacturer.
- the object database, for example, includes specific information about the object linked with the unique identifier.
- the server further comprises a user profile management module 116 that is configured to store user profiles.
- the user profiles, for example, include at least one of user personal data, user preferences, user inputs, and a historical record of user inputs.
- the outputting module 114 is further configured to process the user profiles, and to deliver contextual information associated with the user and the object.
- the contextual information is based on the supplementary information, the user inputs and the user profiles that are processed in the server, e.g. by the outputting module and/or the processing module.
- the server 104 may further comprise an input module 118 configured to receive and/or analyze information and/or queries received from the user via the reading device 106.
- the system 100 further comprises an AR device 128 that is configured to communicate over the communication network with the server.
- the AR device is in data communication with a real-time interaction unit 124 of the server 104.
- the AR device is configured to integrate digital content with the physical environment around the user, enhancing their perception and interaction with reality.
- the AR device 128 is configured to be connected to the reading device 106 using wireless or wired connections, such as Bluetooth, Wi-Fi, apps, cables or adapters.
- the real-time interaction module 124 comprises a rendering module 126 that is configured to process user interactions received via the AR device 128 and integrate the contextual content that is generated by the server 104 (e.g. by a processing module and/or an outputting module) seamlessly into interactive experiences within an Augmented Reality environment.
- the real-time interaction module 124 is further configured to render the AR environment, effectively displaying (presenting) information that corresponds with the object 102, the user's context, and preferences.
- the server 104 may further comprise a context sensitivity unit 130 that is configured to monitor and interpret or analyze the user interactions (e.g. in real-time data) and adapt AR experiences based on real-time user environment factors, such as location, time, weather conditions, ambient noise level, proximity to other objects and/or network connectivity.
- the server further comprises an interface unit 132 that is configured to allow users and/or manufacturers to programmatically configure or customize software stored in the server.
- the interface unit 132 for example, comprises a software development kit (SDK) and Application Programming Interfaces (APIs). This unit facilitates seamless integration and customization of augmented reality (AR) experiences.
- developers can utilize the SDK to create AR applications tailored to specific platforms, leveraging APIs to access device features and functionalities.
- the SDK's support for cross-platform compatibility ensures broader accessibility for AR applications, enabling them to run smoothly across different devices and operating systems.
- a multi-user interaction unit 134 is also included in the server 104.
- the multi-user interaction unit 134 is configured to allow a multi-user interaction for exchanging data and information among a plurality of users about the object and/or its use.
- Fig. 3 illustrates a diagram of a method 300 for delivering contextual information and interactive experiences about an object. The method is, for example, performed using the system 100 as explained above.
- the method 300 comprises the steps of identifying 302, using a server 104, in particular via an identification module, an object 102 based on information of a unique identifier element 108 that is captured by a reading device 106, and retrieving 304, using the server 104, supplementary information and data about the object 102.
- the method 300 optionally comprises the steps of processing 306 the supplementary information and data, and delivering 308 the same to the reading device 106.
- the supplementary information is processed using a processing module 120 and/or an outputting module 114 included in the server 104.
- the method 300 further comprises the step of interpreting 310, using the server and/or the reading device, the supplementary information and inputs received from a user.
- the inputs from the user are, for example, received via a reading device 106 and/or an AR device 128.
- Fig. 4 illustrates a diagram of a continuation of the method 300 for delivering contextual information and interactive experiences about an object.
- the method 300 may further comprise the following method steps, e.g. after the method steps 312 or 322:
- a user scans a QR code on a beverage product and receives recipes for mixing the perfect drink.
- the conversation can continue, with the user indicating the number of people participating in the event, and the system providing a shopping list based on the number of participants.
- the user can also ask for drinking games or other entertainment ideas.
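- As an illustration of the shopping-list step of this example, the following Python sketch scales a per-serving recipe by the number of event participants; the recipe contents and quantities are invented for demonstration.

BASE_RECIPE = {"gin_cl": 5, "tonic_cl": 15, "lime_wedges": 1}  # per serving (assumed values)

def shopping_list(recipe: dict[str, float], people: int) -> dict[str, float]:
    """Scale a per-serving recipe to the number of event participants."""
    return {item: qty * people for item, qty in recipe.items()}

print(shopping_list(BASE_RECIPE, people=8))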
- a user scans a QR code on furniture or tech products, asks a question, and receives step-by-step assembly instructions. If multiple products are scanned, users can also ask questions about connecting or installing the items together, such as installing a lamp.
- the system is configured to offer a seamless and immersive experience for users, delivering personalized and contextually relevant information through the combined power of GPT-based language understanding and AR interactivity.
- the system in particular the data insights unit, is characterized by its adaptability to different data formats, scalability for handling large datasets, and user- friendly interfaces that facilitate the interpretation of complex analytics for informed decision-making.
Abstract
The present invention relates to a system (100) for delivering contextual information and interactive experiences in relation to an object (102), comprising: a server (104), a reading device (106) configured to capture information from a unique identifier element (108) carried by an object (102), and to send the captured information to the server (104), the reading device (106) being further configured to capture user inputs received from a user, and to send the user inputs to the server (104), and a communication network (110) configured to facilitate communication between the reading device (106) and the server (104), wherein the server (104), in particular using an identification module (112), is configured to identify the object based on the information captured from the unique identifier element (108), and to access supplementary information about the object that is stored in the server (104), wherein the server (104), in particular using an outputting module (114), is configured to deliver the supplementary information to the reading device (106), and wherein the system (100), in particular the server (104) and/or the reading device (106), is configured to interpret the supplementary information and the user inputs, and to deliver contextual information and interactive experiences to the user, wherein preferably the contextual information is based on user preferences, user location, user past interactions and/or environment factors. The present invention further relates to a method for delivering contextual information and interactive experiences in relation to an object (102).
Description
A system for delivering contextual information
The present invention relates to a system and a method for delivering contextual information and interactive experiences associated with an object.
In today's rapidly evolving technological landscape, user engagement has become increasingly crucial for the success of various applications and services. However, existing systems often fall short in delivering truly interactive and contextually relevant experiences to users. While these systems effectively convey basic information about products or events, they lack the ability to adapt to users' individual contexts and preferences, thereby limiting their engagement potential.
Specifically, the absence of advanced natural language processing (NLP) capabilities hinders the ability of existing systems to understand and respond to user queries effectively. Furthermore, although some systems incorporate augmented reality (AR) technology, their use remains largely focused on simplistic overlays or static content visualization.
There therefore exists a need for a more versatile and real-time approach to delivering reliable contextual information and interactive experiences to users.
This object is achieved in the present invention by a system for delivering contextual information and interactive experiences in relation to an object, comprising:
- a server,
- a reading device configured to capture information from a unique identification element carried by an object, and to send the captured information to the server, the reading device being further configured to capture user inputs received from a user, and to send the user inputs to the server, and
- a communication network configured to facilitate communication between the reading device and the server,
- wherein the server, in particular using an identification module, is configured to identify the object based on the information captured from the unique identifier element, and to access supplementary information about the object that is stored in the server, e.g. in an object database, and
- wherein the server, in particular using an outputting module, is configured to deliver the supplementary information to the reading device,
- wherein the system, in particular the server and/or the reading device, is configured to interpret the supplementary information and the user inputs, and to deliver contextual information and interactive experiences to the user, wherein preferably the contextual information is based on user preferences, user location, user past interactions and/or environment factors.
The invention is based on the basic idea of integrating AI-powered language models and AR applications into one system, thereby offering personalized interactions, recommendations, and immersive experiences based on scanned unique identifiers linked to an object. This significantly enhances user engagement and satisfaction. The system comprises a server, a reading device and a communication network configured to facilitate communication between the reading device and the server. The reading device is configured to capture information from a unique identification element carried by an object and to send the captured information to the server. Further, the reading device is configured to capture user inputs received from a user and to send the user inputs to the server. The server is configured to identify the object based on the information captured from the unique identifier element, access supplementary information about the object that is stored in the server, and deliver the supplementary information to the reading device. Advantageously, the server and/or the reading device in particular comprise large language models for interpreting the supplementary information and the user inputs and delivering contextual information and interactive experiences to the user. For example, the contextual information is based on user preferences, user location, user past interactions and/or environment factors.
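As a non-limiting illustration of this overall data flow (scan, identification, retrieval of supplementary information, interpretation), a condensed Python sketch is given below; the hard-coded object table and the placeholder interpretation step stand in for the object database and the language models, and all names are assumptions made for illustration.

def handle_scan(uid: str, user_input: str, user_prefs: dict) -> str:
    # Identification module: resolve the unique identifier to an object.
    obj = {"uid-1": "organic orange juice"}.get(uid, "unknown object")
    # Outputting module: fetch supplementary information stored server-side.
    supplementary = f"{obj}: ingredients, nutrition facts, supply-chain details"
    # Interpretation step (in a full system, a language model): combine supplementary
    # information, the user's input, and preferences into contextual output.
    return (f"[{supplementary}] In response to '{user_input}' "
            f"(preferences: {user_prefs.get('diet', 'none')}): ...")

print(handle_scan("uid-1", "Is this suitable for me?", {"diet": "vegan"}))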
In particular, the object comprises a consumer product, an event poster or a visual display element.
In particular, the server, in particular a processing module, and/or the reading device comprises AI-powered language models.
In particular, the environment factors relate to the user's surroundings and/or the object's location, and comprise factors such as weather conditions, ambient noise levels, time, network connectivity, lighting conditions, or proximity to other objects or individuals.
In particular, the reading device (or a user device) and the server are configured to exchange data and information over the communication network.
In particular, the reading device is further configured to receive information and data, e.g. in the form of text, video and/or audio, from a user.
In particular, the user device (or the reading device) is further configured to record a video and/or an audio.
In particular, the unique identifier is incorporated in a QR code, NFC, RFID, UHF, AutoID, or Bluetooth tag that is attached to the object.
In particular, when a user scans the QR code (or the NFC, or the RFID or the Bluetooth tag) via the user device, the system initiates a backend lookup within the server and the object database included therein to retrieve detailed metadata associated with the product from the manufacturer.
Advantageously, this, in turn, facilitates seamless identification, tracking, and communication, enhancing operational efficiency and user experiences.
For example, NFC enables close-range interactions for contactless transactions and data exchange. RFID, operating across different frequencies like UHF, empowers precise and real-time tracking of assets in logistics, retail, and supply chain management. AutoID, encompassing various identification methods, streamlines data capture, improving accuracy. Bluetooth tags leverage wireless communication, enabling versatile and efficient asset tracking, personal item location, and proximity-based interactions.
In particular, integration of these technologies into the system contributes to a connected ecosystem, promoting automation, accuracy, and enhanced user convenience.
In particular, the system further comprises a database (e.g. a contextual information database), wherein the database is in communication with the server and wherein the server, e.g. a processing module of the server, is configured to process data and information retrieved from the database, and/or to interpret natural language queries received via the reading device from a user, and/or to generate contextually relevant responses based on learned language patterns and contextual information stored in the contextual information database.
In particular, a processing module includes a GPT-based Natural Language Processing (NLP) module that includes AI-powered language models.
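The disclosure does not fix a concrete model API, so the following hedged Python sketch treats the GPT-type model as an opaque callable llm; the prompt layout and the stub model are assumptions made for illustration, not the disclosed interface.

def answer_query(llm, query: str, supplementary: str, context: str) -> str:
    """Processing-module sketch: interpret a natural-language query against
    object metadata and stored contextual information, then generate a reply."""
    prompt = (
        "You answer questions about a scanned product.\n"
        f"Product data: {supplementary}\n"
        f"Stored context: {context}\n"
        f"Question: {query}"
    )
    return llm(prompt)  # `llm` is any callable wrapping a GPT-type model

# Stub model for demonstration; a deployment would call a hosted or on-device LLM.
def echo_llm(prompt: str) -> str:
    return f"(model reply based on {len(prompt)} prompt chars)"

print(answer_query(echo_llm, "Is it gluten-free?", "crispbread: rye flour, salt", "user avoids gluten"))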
In particular, the language models are further configured to be incorporated in the reading device and/or the server.
In particular, the contextual information database may be a module included in (integrated in) the server or a separate unit that is in communication with the server.
In particular, the server, e.g. using a user profile management module, is configured to store user profiles, wherein the user profiles each include at least one of user personal data, user preferences, user inputs, and a historical record of user inputs.
In particular, the server, e.g. using an outputting module, is further configured to process the user profiles, and to deliver contextual information about the object, based on the processed supplementary information, the processed user inputs and the processed user profiles.
In particular, the system, e.g. the server, further comprises a real-time interaction unit configured to process user interactions in real time and to integrate content generated by the processing module seamlessly into interactive experiences within an Augmented Reality (AR) environment.
In particular, the real-time interaction unit may be a module included in (integrated in) the server or a separate unit that is in communication with the server.
In particular, the real-time interaction unit comprises a rendering module (an AR Content Rendering module) configured to render the AR environment, thereby presenting information aligned with the user's context and preferences.
In particular, the rendering module is configured to process and display augmented reality content within a given environment.
In particular, the system, e.g. the real-time interaction unit, further comprises an AR device (augmented reality device) configured to communicate over the communication network with the server, in particular via a real-time interaction unit of the server.
In particular, the AR device is configured to overlay digital content such as images, text, or 3D models (provided by the rendering module) onto the real-world environment.
The AR devices use AR technology to blend computer-generated information with the user's perception of the physical world, creating an augmented or enhanced reality.
In particular, the AR devices, for example, include sensors, cameras, and displays to capture the real-world environment and then project or superimpose digital content onto the user's view in real-time. Examples of AR devices include AR glasses, smartphones, tablets, and specialized AR headsets.
Advantageously, users can interact with and experience the augmented content seamlessly integrated into their surroundings. This, in turn, offers a variety of applications in healthcare, gaming around products, and navigation.
The rendering module is configured to interpret the information generated by other parts of the system, such as the GPT-based NLP module (the processing module) and the real-time interaction module, and to translate it into visual elements overlaid on the real-world view seen through an AR device, like a smartphone or AR glasses.
In particular, the rendering module is configured to manage the rendering of virtual objects, information overlays, or interactive elements in a way that aligns with the user's physical surroundings, creating a seamless integration of digital and real-world elements in the augmented reality experience.
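A minimal Python sketch of this translation from generated content into positioned overlay elements follows; the Overlay structure, normalized screen coordinates and distance-based scaling rule are illustrative assumptions rather than the disclosed rendering pipeline.

from dataclasses import dataclass

@dataclass
class Overlay:
    text: str
    x: float      # normalized screen position
    y: float
    scale: float  # adjusted with viewing distance

def render_overlays(content: list[str], anchor: tuple[float, float], distance_m: float) -> list[Overlay]:
    """Translate module output into positioned AR overlay elements;
    scale shrinks with distance to keep content spatially plausible."""
    scale = max(0.3, 1.0 / max(distance_m, 0.5))
    return [Overlay(text, anchor[0], anchor[1] + 0.05 * i, scale)
            for i, text in enumerate(content)]

print(render_overlays(["Vegan", "4.6 star rating"], anchor=(0.5, 0.4), distance_m=1.2))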
The rendering module, for example, comprises an AR display processor that is characterized by its seamless integration of virtual content into the real-world environment, enriching the user's experience within an augmented reality system.
Operating in real-time with minimal latency, the real-time interaction unit ensures dynamic and responsive rendering of virtual objects.
Due to the spatial alignment capability of the rendering module, it is possible to accurately position digital content within the user's physical surroundings, contributing to a convincing and immersive augmented reality encounter.
In particular, the rendering module is configured to adapt the display of virtual content based on contextual information and user interactions. For example, it may adjust the size, position, or appearance of virtual objects in response to changes in the user's location or gaze, contributing to a more natural and user-friendly experience.
For example, the real-time interaction unit, in particular the rendering module, is configured to adjust the size, position, or appearance of virtual objects in response to changes in the user's location or gaze, contributing to a more natural and user-friendly experience.
An additional advantage of the real-time interaction unit lies in its ability to enhance the overall quality, realism, and engagement of augmented reality interactions by effectively blending digital and physical elements.
For example, the system, in particular, the real-time interaction unit, facilitates an enhanced user experience by seamlessly integrating digital information into the user's physical environment. This can include features like interactive 3D models, information overlays, or contextually relevant visual elements, contributing to a more engaging and immersive AR interaction.
In particular, the system, e.g. the real-time user interaction unit or the server, further comprises a context sensitivity unit or a sensing unit configured to monitor and interpret or analyze real-time data, and to adapt AR experiences based on real-time user environment factors, including location, time, and other contextual parameters.
In particular, the context sensitivity unit may be integrated in the real-time interaction unit or be a separate module in the server that communicates with the real-time interaction unit.
In particular, the sensing unit includes a context awareness module that is configured to adjust the presentation of information, modify responses, or trigger specific actions based on the dynamically changing context.
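As a hedged illustration of such context-dependent adjustment, the following Python sketch derives presentation settings from simple environment readings; all thresholds and setting names are assumptions made for illustration.

def adapt_presentation(env: dict) -> dict:
    """Context-awareness sketch: adjust output modality and styling
    from simple environment readings (all thresholds assumed)."""
    settings = {"modality": "visual+audio", "brightness": 0.6, "text_size": 1.0}
    if env.get("ambient_noise_db", 0) > 70:
        settings["modality"] = "visual"          # too loud for audio output
    if env.get("lux", 500) < 50:
        settings["brightness"] = 0.9             # dark surroundings
    if env.get("walking", False):
        settings["text_size"] = 1.4              # larger text while moving
    return settings

print(adapt_presentation({"ambient_noise_db": 78, "lux": 30, "walking": True}))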
This adaptability enhances the system's ability to provide relevant and timely experiences, contributing to a more seamless and user-friendly interaction with objects.
In particular, the server further comprises an interface unit configured to allow users and/or manufacturers to programmatically configure or customize software stored in the server.
In particular, the system further comprises a security and privacy module configured to implement robust protocols to protect user authentication data and personalized information stored therein.
In particular, the system further comprises a setting unit, wherein the setting unit comprises customization tools and setting tools for customizing and configuring parameters and settings of the system.
In particular, the configuration unit enables administrators or manufacturers or users to customize system settings based on specific preferences or requirements.
Advantageously, the configuration tools enable the users and/or manufacturers to tailor the system to meet their specific needs by adjusting configuration settings. Also changes in system requirements or preferences can be accommodated without major code alterations, reducing maintenance overhead.
Advantageously, the personalization (customization) tools allow users to create an interface and functionality that aligns closely with their unique preferences and requirements.
Customization often leads to higher user satisfaction and adoption rates as individuals can shape the system to match their workflow.
In particular, the configuration and customization tools can be implemented through a combination of software and hardware features.
For example, configuration and customization tools are provided via graphical user interfaces (GUIs) that allow users to interact with and modify system settings. The GUIs provide an intuitive way for users to adjust settings without requiring technical expertise.
In particular, the configuration and customization tools may be embodied as physical controls such as switches, buttons, or dials that can be used for configuration and/or customization. These controls advantageously allow users to set preferences or adjust parameters directly on the hardware.
The system, in particular the server, further comprises an interface unit configured to allow users and/or developers to programmatically configure or customize the software.
In particular, the interface unit comprises a software development kit (SDK) and/or Application Programming Interfaces (APIs).
This is particularly advantageous for automation or integration with other software.
Advantageously, SDKs help to accelerate the development process by providing a standardized and efficient way for developers to interact with and leverage the capabilities of a particular software platform or hardware device.
In particular, the system further comprises a user interaction module configured to allow a multi-user interaction for exchanging data and information among a plurality of users about the object and its use.
In particular, the server further comprises a collaboration module (e.g. a shared application session module) included in the multi-user interaction unit, which is configured to create a shared session among multiple reading devices when the information of the same unique identifier element (ID) is received by the system around the same time. Advantageously, the system allows synchronized participation in multi-user activities.
In particular, the system further comprises a learn unit, including a feedback and learning mechanism, configured to collect user feedback and behavior data to continuously improve the personalized experiences, for example, delivered by the processing module and AR application.
In particular, the learn unit, e.g. via the feedback and learning mechanism, is configured to collect user feedback, to analyze user interactions, and to employ machine learning algorithms to iteratively improve system performance.
The learn unit, in particular via the feedback and learning mechanism, is configured to utilize user-provided feedback, behavioral data, and other relevant metrics to adapt its responses, enhance user experiences, and refine underlying algorithms.
This continuous learning loop advantageously facilitates the system's ability to evolve, addressing user preferences, and ensuring optimal functionality over time.
In particular, the implementation of a feedback and learning mechanism includes, for example, collecting user feedback and system data, analyzing patterns, and adjusting algorithms or configurations for continuous enhancement.
In particular, the learn unit comprises smart devices whose firmware updates can be informed by user interactions and performance data, and/or adaptive hardware components that are configured to learn from usage patterns to optimize functionality, and/or sensors that are configured to adjust parameters based on real-time feedback, collectively forming an iterative improvement loop.
For instance, touchscreens or physical buttons on devices (e.g. user device) may be adjusted based on user feedback.
Advantageously, by way of these hardware instances, the feedback and learning mechanism facilitates ongoing learning and adaptation to user or environmental factors, contributing to improved efficiency and user experience.
In particular, the system further comprises a data insights unit (e.g. analytics and reporting module) configured to track user engagement, AR usage patterns, and the effectiveness of the delivered contextual information for future optimizations.
Leveraging data analytics techniques, the data insights unit is configured to collect and process diverse data sources, ranging from user engagement metrics to system performance statistics.
In particular, the data insights unit comprises algorithms for data analysis (i.e. software-driven algorithms and logic), utilizing methods such as machine learning and statistical modeling to derive actionable insights. For example, once the data has been processed, the data insights unit is further configured to generate comprehensive reports, visualizations, or dashboards that convey valuable information to users or manufacturers.
In particular, the data insights unit comprises accelerators or processors embedded therein, which are optimized for data analytics tasks to thereby enhance performance and efficiency.
In particular, the data insights unit may further comprise data storage devices and memory modules, especially in scenarios where large datasets need to be efficiently managed.
In particular, the present invention provides a method for delivering contextual information and interactive experiences about an object, in particular using the system described above, the method comprising the following steps (a non-limiting sketch of the sequence is given after the list):
- identifying, using a server, in particular via an identification module, an object based on information of a unique identifier element captured by a reading device,
- retrieving, using the server, supplementary information and data about the object,
- optionally, processing, using the server, in particular using a processing module and/or an outputting module, the supplementary information and data and delivering the same to the reading device,
- interpreting, using the server and/or the reading device, the supplementary information and inputs received from a user (e.g. via a reading device and/or an AR device), and
- delivering, using the reading device, contextual information and interactive experiences to the user based on the interpretation of the supplementary information and the user inputs.
In particular, the method further comprises the step of rendering AR experiences, using a real-time interaction unit, based on user profiles and preferences stored in the database.
In particular, the method further comprises the following steps (see the illustrative sketch after this list):
- receiving information and/or queries from the user, e.g. using an input module, from a reading device and/or an AR device;
- processing (and/or interpreting) natural language information and/or queries received from the users, e.g. using the processing module;
- generating, using the server (e.g. the processing module), contextually relevant responses based on learned language patterns and contextual information stored in a database; and
- delivering, using the server (e.g. the outputting module), personalized and/or contextual information to the users based on the processed information and/or queries, using the reading device and/or the AR device.
In particular, the method further comprises the steps of:
- reading by a first user the unique identifier element of the object,
- utilizing the unique identifier element to personalize the object based on inputs from the first user and/or data stored in the database, and
- delivering to a second user, upon reading the same unique identifier element from another object, interactive, personalized experiences based on the inputs received from the first user, in particular to initiate a multi-user interaction about the object.
Advantageously, the method enhances the level of personalization experienced by the recipient through the recipient's interaction with the scanned QR code.
The method allows for interactions with the object to occur at different times. The disjoint time interaction allows for flexibility in how users engage with the object. Users can customize and personalize objects at their convenience before giving them as gifts.
In particular, the method (and the system) is configured to dynamically adjust its content delivery and/or presentation based on the primed information, i.e. the information received from the first user.
In particular, a computer-readable storage medium is provided, wherein the storage medium is configured to store instructions that, when executed by a processor, cause the system according to the present invention to perform the steps of the method as defined above.
In particular, the present invention enables users to scan a QR code, NFC tag, RFID tag, or ultra-high frequency RFID tag (UHF tag) on an object, e.g. a consumer product or an event poster, and then interact with the system that provides contextual information about the product or event.
The unique identifier carried by the QR code or codes allows for a lookup in the backend, which returns specific metadata attached to the product by the manufacturer (e.g., pertaining to its contents (ingredients, nutrition facts, etc.), supply chain details, branding guidelines, time- and location-sensitive information, etc.). The system's applications may use generative AI, such as GPT-type large language models (LLMs), and AR applications that are fed with data generated by the LLMs.
In particular, users can talk about the product and its uses (applications), be entertained with games around the products, and engage in multi-user activities mediated by the application (e.g., a game). Multi-user interactions are managed by the backend system, which receives scanned IDs and creates a shared application session among multiple devices when the same ID is scanned around the same time.
The interaction can also be disjoint in time, with one user scanning the object first, entering some specific information priming the system and personalizing the object, and then giving it as a gift to another person. Upon scanning the QR code, the recipient would get a personalized experience based on the input by the first user (e.g., personalizing gifts, such as a wine bottle).
The QR code serves as a contextual entry point for users, with the default interface being a mobile web app or an App Fragment (or its iOS equivalent) that allows loading native applications without prior installation from the App Store. SDKs are also offered to partner companies for embedding in their own apps (e.g., a Mattel app can be opened when the user scans a box with Barbie).
The LLMs provide an extremely flexible way of interaction, allowing users to ask specific questions that would otherwise require more time to find out from static product documentation/description. They can also solve adjacent pain points for users (e.g., coming up with personalized toasts for a wine bottle). The backend system can track multiple activities of users over time (via the scanned QR codes) and include this information in the context of the LLMs to work with.
The language model may be hosted on a remote server (e.g. a backend server) or run on the mobile device itself, which would increase the privacy of the product and its availability in conditions where connectivity is insufficient.
For example, the system includes a backend server that hosts the AI-powered language models and manages the metadata associated with unique identifiers. The backend server receives scanned unique identifier data from user devices and returns relevant metadata and personalized interactions based on the user's context and preferences.
In particular, a user interface can be a mobile web app, an App Clip or Instant App, or a native application installed on the user's device. The system also offers SDKs for partner companies to embed the functionality within their own applications, providing seamless integration and a consistent user experience across different devices and applications.
The language models provide several benefits, including the ability to answer specific user questions that would otherwise require time-consuming manual searches, addressing adjacent pain points, and offering personalized experiences based on user input, such as toasts for a wine bottle.
The system further supports AR applications that provide users with visualizations and real-time interactions based on the product or event information. For example, an AR application can visualize a proposed makeup plan on the user's face, allowing them to see how different cosmetics products would look together before making a purchase.
The system enables multi-user interactions by creating a common application session among multiple devices that scan the same unique identifier around the same time. This allows users to engage in common activities or share personalized experiences based on the same unique identifier. The backend server manages and synchronizes data between multiple users, facilitating seamless multi-user experiences.
The present invention provides a comprehensive system for delivering contextual information and interactive experiences to users based on scanned unique identifiers. By integrating AI-powered language models, AR applications, and multi-user interaction capabilities, the system offers a seamless, personalized, and engaging experience for users across various consumer products and events. This invention has the potential to revolutionize how users interact with products and events, providing context-aware and personalized information, entertainment, and assistance at their fingertips.
The present invention comprises aspects that differentiate it from existing solutions in the market. These include:
1. Context-aware interaction: By scanning unique identifiers such as QR codes, NFC/RFID/UHF, the system provides users with contextual information and personalized experiences based on the specific product or event. This context-aware interaction enables users to quickly access relevant information without manually searching for it, streamlining the user experience.
2. AI-powered language models: The integration of advanced AI-powered language models, such as GPT-type large language models, allows for flexible and contextually relevant interactions with users. These models can understand user input and provide personalized responses, addressing specific questions and adjacent pain points that traditional static documentation may not cover. The option to host these models on a remote server or on the user's mobile device offers increased privacy and availability, making the system more versatile and adaptable to various situations.
3. AR applications: The support for AR applications provides users with real-time visualizations and interactions based on the product or event information. This allows users to have a more immersive and engaging experience, such as visualizing makeup plans on their own face or seeing how furniture would look in their room before purchasing.
4. Multi-user interaction: The system's ability to facilitate multi-user interactions through common application sessions and backend server synchronization enables users to engage in shared experiences and activities with others who have scanned the same unique identifier. This fosters a sense of community and enhances the overall user experience.
5. Seamless integration: Offering SDKs for partner companies allows the system's functionality to be embedded within their own applications, ensuring a consistent and seamless user experience across different devices and applications. This integration enables users to access the system's features without having to install multiple separate applications.
Advantageously, the present invention offers a unique combination of context-aware interaction, AI-powered language models, AR applications, multi-user interaction capabilities, and seamless integration with third-party applications. These technical effects and novel aspects significantly enhance the user experience and provide a versatile, personalized, and engaging interaction with consumer products and events.
It is shown in:
Fig. 1 a schematic diagram of a system according to the present invention;
Fig. 2 a schematic diagram of a system according to the present invention, indicating modules incorporated in a server;
Fig. 3 a diagram of a method according to the present invention; and
Fig. 4 a diagram of the method according to the present invention, indicating further method steps.
In Fig. 1 an example of a system 100 for delivering information, in particular contextual information and interactive experiences, about an object 102 or an item is illustrated.
The system 100 comprises a reading device 106 (e.g. a user device) and a server 104.
The server 104 and the reading device 106 are in data communication over a communication network 110.
The reading device 106 is configured to read a unique identifier element 108 (e.g. a barcode, a QR code, an NFC tag, an RFID tag or an ultra-high frequency RFID tag) attached to or printed on an object.
The object may comprise any consumable products (foods or beverages), posters, postcards, cosmetic products, medical products, electronic devices, or product packaging.
For example, the QR code or codes contain a distinctive identifier that facilitates a backend lookup (e.g. the server-side), retrieving specific metadata associated with the product from the manufacturer. This metadata includes information about the product's contents (such as ingredients and nutrition facts), details about the supply chain, branding guidelines, and time- and location-specific data.
The system 100 further comprises an augmented reality device 128 which is in communication with the reading device 106 and the server 104 over the communication network 110.
In particular, the system 100 utilizes AI-powered language models, such as GPT-type large language models, to provide personalized and contextually relevant information and entertainment experiences for users. These models can be hosted on a remote server or run on the reading device, e.g. the user's mobile device, enhancing privacy and availability in varying connectivity conditions.
The system's applications leverage generative Al, such as large language models (LLMs) like GPT, as well as augmented reality (AR) applications that utilize data generated by these LLMs.
The system 100 may further comprise other reading devices 106 that are in communication with the server 104 over the communication network 110.
For example, the system 100 is configured to enable multi-user interactions. This enables users to communicate about the product and share their experiences.
In particular, the system 100 is configured to receive the information of the unique identifier element (e.g. scanned IDs) and to create a shared application session among multiple reading devices when the same ID is scanned around the same time or at a different time (disjoint in time).
Users have the ability to discuss the product and its applications, partake in entertainment through product-related games, and engage in multi-user activities facilitated by the application, such as a game.
The system 100 further comprises a database 122 (e.g. a contextual information database), a learn unit 136 and a data insights unit 138.
The database 122, the learn unit 136 and the data insights unit 138 are in communication with the server 104.
The database 122 is configured to store learned language patterns and contextual information.
The learn unit 136, for example, includes feedback and learning algorithms for collecting user feedback and behavior data to continuously improve the personalized experiences.
The user experiences are, for example, collected from the server and AR device and stored in the server.
The data insights unit 138 (e.g. analytics and reporting module) is configured to track user engagement, AR usage patterns, and the effectiveness of the delivered contextual information.
Fig. 2 illustrates a schematic diagram of the system 100 indicating the components of the server 104.
The server 104 comprises a processing module 120, which is in communication with a plurality of modules and has access to information and data received by these modules.
For example, the server 104 comprises an identification module 112 that is configured to receive and identify the unique identifier.
The identification module 112 is further configured to access supplementary information about the object that is stored in an object database 111 in the server 104.
The server 104 further includes an outputting module 114 that is configured to process the supplementary information and deliver the supplementary information to the reading device 106.
For example, when a user scans a QR code 108 (or the NFC, RFID or Bluetooth tag) via the reading device 106, the system 100 is configured to initiate a backend lookup within the server 104 and the object database 111 included therein to retrieve detailed metadata associated with the object 102, e.g. a product from the manufacturer.
The object database, for example, includes specific information about the object linked with the unique identifier.
The server further comprises a user profile management module 116 that is configured to store user profiles. The user profiles, for example, include at least one of user personal data, user preferences, user inputs and a historical record of user inputs.
The outputting module 114 is further configured to process the user profiles, and to deliver contextual information associated with the user and the object.
The contextual information is based on the supplementary information, the user inputs and the user profiles that are processed in the server, e.g. by the outputting module and/or the processing module.
The server 104 may further comprise an input module 118 configured to receive and/or analyze information and/or queries received from the user via the reading device 106. The system 100 further comprises an AR device 128 that is configured to communicate over the communication network with the server.
For example, the AR device is in data communication with a real-time interaction unit 124 of the server 104.
The AR device is configured to integrate digital content with the physical environment around the user, enhancing their perception and interaction with reality.
The AR device 128 is configured to be connected to the reading device 106 using wireless or wired connections, such as Bluetooth, Wi-Fi, apps, cables or adapters.
The real-time interaction module 124 comprises a rendering module 126 that is configured to process user interactions received via the AR device 128 and integrate the contextual content that is generated by the server 104 (e.g. by a processing module and/or an outputting module) seamlessly into interactive experiences within an Augmented Reality environment.
The real-time interaction module 124 is further configured to render the AR environment, effectively displaying (presenting) information that corresponds with the object 102, the user's context, and preferences.
The server 104 may further comprise a context sensitivity unit 130 that is configured to monitor and interpret or analyze the user interactions (e.g. in real-time data) and adapt AR experiences based on real-time user environment factors, such as location, time, weather conditions, ambient noise level, proximity to other objects and/or network connectivity.
The server further comprises an interface unit 132 that is configured to allow users and/or manufacturers to programmatically configure or customize software stored in the server.
The interface unit 132, for example, comprises a software development kit (SDK) and Application Programming Interfaces (APIs). This unit facilitates seamless integration and customization of augmented reality (AR) experiences.
For example, developers can utilize the SDK to create AR applications tailored to specific platforms, leveraging APIs to access device features and functionalities. Additionally, the SDK's support for cross-platform compatibility ensures broader accessibility for AR applications, enabling them to run smoothly across different devices and operating systems.
A multi-user interaction unit 134 is also included in the server 104. The multi-user interaction unit 134 is configured to allow a multi-user interaction for exchanging data and information among a plurality of users about the object and/or its use.
Fig. 3 illustrates a diagram of a method 300 for delivering contextual information and interactive experiences about an object. The method for example is performed using the system 100 as explained above.
The method 300 comprises the steps of identifying 302, using a server 104, in particular via an identification module, an object 102 based on information of a unique identifier element 108 that is captured by a reading device 106, and retrieving 304, using the server 104, supplementary information and data about the object 102.
The method 300 optionally comprises the steps of processing 306 the supplementary information and data, and delivering 308 the same to the reading device 106. For example, the supplementary information is processed using a processing module 120 and/or an outputting module 114 included in the server 104.
The method 300 further comprises the step of interpreting 310, using the server and/or the reading device, the supplementary information and inputs received from a user. For example, the user inputs are received via a reading device 106 and/or an AR device 128.
The contextual information and interactive experiences are then delivered 312 to the user based on the interpretation of the supplementary information and the user inputs.
Fig. 4 illustrates a diagram of a continuation of the method 300 for delivering contextual information and interactive experiences about an object.
As shown in Fig. 4, the method 300 may further comprise the following steps:
- rendering 314 AR experiences, using a real-time interaction unit of the server, based on user profiles and preferences stored in the server 104, in particular in a user profile management module,
- receiving 316, using the server, (in particular an input module 118), information and/or queries from the user via the reading device and/or an AR device;
- processing 318, using the server (a processing module 120), natural language information and/or queries received from the users,
- generating 320, using the server (a processing module 120), contextually relevant responses based on learned language patterns and contextual information stored in a database, and
- delivering 322, using the server (an outputting module 114), personalized and/or contextual information to the users based on the processed information and/or queries via the reading device 106 and/or the AR device 128.
The method 300 may further comprise the following method steps, e.g. after the method steps 312 or 322.
- scanning or reading 324 by a first user a unique identifier element 108 that is associated with the object,
- utilizing 326 the unique identifier element to personalize the object based on inputs from the first user and/or data stored in a database, and
- delivering 328 to a second user, upon reading the same (or an identical) unique identifier element of another object, interactive and personalized experiences based on the inputs provided by the first user, wherein the objects read by the first and second users are of similar or identical types with an identical identification number, to thereby initiate a multi-user interaction about the object.
The multi-user interaction can occur at separate or non-overlapping points in time (distinct occurrences that happen at different moments or intervals). Alternatively, the multi-user interaction is simultaneous or occurring concurrently.
Some example use cases of the system according to the present invention are described below.
1. Beverage Mixing Assistance: A user scans a QR code on a beverage product and receives recipes for mixing the perfect drink. The conversation can continue, with the user indicating the number of people participating in the event, and the system providing a shopping list based on the number of participants. The user can also ask for drinking games or other entertainment ideas.
2. Cosmetics Recommendations: A user scans all the cosmetics products in their room, and the system provides makeup suggestions and links to tutorials and celebrity/influencer inspirations based on the available products.
3. Pharmaceutical Guidance: A user scans a QR code on a medication package, asks a question, and receives usage tips according to the package insert.
4. Assembly Assistance: A user scans a QR code on furniture or tech products, asks a question, and receives step-by-step assembly instructions. If multiple products are scanned, users can also ask questions about connecting or installing the items together, such as installing a lamp.
Advantageously, the system is configured to offer a seamless and immersive experience for users, delivering personalized and contextually relevant information through the combined power of GPT-based language understanding and AR interactivity.
Advantageously, the system, in particular the data insights unit, is characterized by its adaptability to different data formats, scalability for handling large datasets, and user- friendly interfaces that facilitate the interpretation of complex analytics for informed decision-making.
The described system, in particular, incorporates unique and innovative features related to object identification, user authentication, personalized content generation, real-time interaction in AR, and continuous improvement mechanisms.
Reference Numerals

100 system
102 object
104 server
106 reading device
108 unique identifier element
110 communication network
111 object database
112 identification module
114 outputting module
116 user profile management module
118 input module
120 processing module
122 database
124 real-time interaction unit
126 rendering module
128 AR device
130 context sensitivity unit
132 interface unit
134 multi-user interaction unit
136 learn unit
138 data insights unit
300 method
302–328 method steps
Claims
1. A system (100) for delivering contextual information and interactive experiences in relation to an object (102), comprising:
- a server (104),
- a reading device (106) configured to capture information from a unique identifier element (108) carried by an object (102), and to send the captured information to the server (104), the reading device (106) being further configured to capture user inputs received from a user, and to send the user inputs to the server (104), and
- a communication network (110) configured to facilitate communication between the reading device (106) and the server (104),
- wherein the server (104), in particular using an identification module (112), is configured to identify the object based on the information captured from the unique identifier element (108), and to access supplementary information about the object that is stored in the server (104),
- wherein the server (104), in particular using an outputting module (114), is configured to deliver the supplementary information to the reading device (106), and
- wherein the system (100), in particular the server (104) and/or the reading device (106), is configured to interpret the supplementary information and the user inputs, and to deliver contextual information and interactive experiences to the user, preferably the contextual information is based on user preferences, user location, user past interactions and/or environment factors.
2. The system (100) according to claim 1, characterized in that the server (104), in particular a processing module, and/or the reading device (106) comprises AI-powered language models.
3. The system (100) according to claim 1 or 2, characterized in that
the system (100) further comprises a database (122), wherein the server (104), in particular using a processing module, is configured to interpret the user inputs, in particular including natural language queries received via the reading device (106), and/or to generate contextually relevant responses based on learned language patterns and contextual information stored in the database (122).
4. The system (100) according to any one of the preceding claims, characterized in that the server (104), in particular using a user profile management module (116), is configured to store user profiles, wherein the user profiles each include at least one of user personal data, user preferences, user inputs and a historical record of user inputs, and/or the server (104), in particular using an outputting module (114), is further configured to process the user profiles, and to deliver contextual information about the object and/or the user, based on the processed supplementary information, the processed user inputs and the processed user profiles.
5. The system (100) according to any one of the preceding claims, characterized in that the unique identifier element (108) comprises a QR code, NFC, RFID, UHF, AutoID, or Bluetooth tag, and/or the user inputs include audio, video and/or text.
6. The system (100) according to any one of the preceding claims, characterized in that the system (100) further comprises an AR device (128) configured to communicate over the communication network with the server (104), in particular with a real-time interaction unit of the server (104),
wherein the AR device (128) is configured to overlay digital content onto the real-world environment, preferably the AR device (128) comprises AR glasses, smartphones, tablets, and AR headsets.
7. The system (100) according to any one of the preceding claims, characterized in that the server (104), in particular a real-time interaction module (124) including a rendering module (126), is configured to
- process user interactions received using the AR device (128),
- to integrate the contextual content that is generated by the server (104), in particular a processing module and/or an outputting module (114), seamlessly into interactive experiences within an Augmented Reality environment, and
- to render the AR environment, thereby presenting information aligned with the object, the user's context and preferences.
8. The system (100) according to claim 7, characterized in that the server (104), in particular a context sensitivity unit (130), is configured to monitor and analyze the user interactions, and to adapt AR experiences based on real-time environment factors, preferably the environment factors comprise weather conditions, ambient noise levels, lighting conditions, proximity to other objects, location, time, and network connectivity.
9. The system (100) according to any of the preceding claims, characterized in that the server (104) further comprises an interface unit (132) configured to allow users and/or manufacturers to programmatically configure or customize software stored in the server (104).
10. The system (100) according to any of the preceding claims,
characterized in that the system (100), in particular the server (104), further comprises a multi-user interaction unit (134) configured to allow a multi-user interaction for exchanging data and information among a plurality of users.
11. The system (100) according to any one of the preceding claims, characterized in that the system (100) further comprises a learn unit (136), in particular including feedback and learning algorithms, configured to collect user feedback and behavior data to improve the user experiences, and/or the system (100) further comprises a data insights unit (138) configured to track user engagement, AR usage patterns, and/or the effectiveness of the delivered contextual information for future optimizations.
12. A method (300) for delivering contextual information and interactive experiences about an object (102), in particular using the system (100) according to any one of the preceding claims, the method (300) comprising the steps of:
- identifying (302), using a server (104), in particular via an identification module (112), an object (102) based on information of a unique identifier element (108) captured by a reading device (106),
- retrieving (304), using the server (104), supplementary information and data about the object,
- optionally, processing (306), using the server (104), in particular using a processing module and/or an outputting module (114), the supplementary information and data and delivering (308) the same to the reading device (106),
- interpreting (310), using the server (104) and/or the reading device (106), the supplementary information and inputs received from a user, in particular the inputs received via a reading device (106) and/or an AR device (128), and
- delivering (312), using the reading device (106) and/or an AR device (128), contextual information and interactive experiences to the user based on the interpretation of the supplementary information and the user inputs.
13. The method (300) according to claim 12, characterized in that the method (300) further comprises the steps of:
- rendering (314) AR experiences, using a real-time interaction unit of the server (104), based on user profiles and preferences stored in the server (104), in particular in a user profile management module (116),
- receiving (316), using the server (104), in particular an input module (118), information and/or queries from the user via the reading device (106) or the AR device (128);
- processing (318), using the server (104), natural language information and/or queries received from the users,
- generating (320), using the server (104), contextually relevant responses based on learned language patterns and contextual information stored in a database (122), and
- delivering (322), using the server (104), personalized and/or contextual information to the users based on the processed information and/or queries via the reading device (106) and/or the AR device (128).
14. The method (300) according to any of claims 12 or 13, characterized in that the method (300) further comprises the steps of:
- reading (324) by a first user the unique identifier element of the object,
- utilizing (326) the unique identifier element to personalize the object based on inputs from the first user and/or data stored in the database, and
- delivering (328) to a second user, upon reading the same unique identifier element from another object, interactive, personalized experiences based on the inputs received from the first user, in particular to initiate a multi-user interaction about the object.
15. A computer-readable storage medium configured to store instructions that, when executed, cause the system (100) according to any of claims 1 to 11 to perform the steps defined by the method (300) according to any one of claims 12 to 14.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP23174109 | 2023-05-17 | ||
| EP23174109.1 | 2023-05-17 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2024236160A1 (en) | 2024-11-21 |
Family
ID=86426015
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/EP2024/063634 (WO2024236160A1, pending) | A system for delivering contextual information | 2023-05-17 | 2024-05-16 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2024236160A1 (en) |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2019079826A1 (en) * | 2017-10-22 | 2019-04-25 | Magical Technologies, Llc | Systems, methods and apparatuses of digital assistants in an augmented reality environment and local determination of virtual object placement and apparatuses of single or multi-directional lens as portals between a physical world and a digital world component of the augmented reality environment |
| US20220358727A1 (en) * | 2021-05-07 | 2022-11-10 | Meta Platforms, Inc. | Systems and Methods for Providing User Experiences in AR/VR Environments by Assistant Systems |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24727342; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |