CN116018608A - Electronic commerce label in multimedia content - Google Patents
- Publication number
- CN116018608A
- Authority
- CN
- China
- Prior art keywords
- multimedia content
- item
- user
- data
- interactive element
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0241—Advertisements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
Landscapes
- Business, Economics & Management (AREA)
- Accounting & Taxation (AREA)
- Finance (AREA)
- Strategic Management (AREA)
- Engineering & Computer Science (AREA)
- Development Economics (AREA)
- General Physics & Mathematics (AREA)
- Economics (AREA)
- Marketing (AREA)
- Physics & Mathematics (AREA)
- General Business, Economics & Management (AREA)
- Theoretical Computer Science (AREA)
- Game Theory and Decision Science (AREA)
- Entrepreneurship & Innovation (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The technology described herein is directed to e-commerce tags in multimedia content. In one example, multimedia content is received and item identification techniques are performed to identify one or more items referenced in the content. An interactive element is generated that indicates information about a given referenced item and is displayed when the multimedia content is output. Selection of the interactive element may cause a purchase user interface to be displayed that is pre-populated with the item and/or payment information based at least in part on the identifying attributes of the item and/or user preferences determined from historical purchase data associated with the user.
Description
Cross Reference to Related Applications
The present application claims priority to U.S. Patent Application Serial No. 17/014,280, titled "E-commerce tags in multimedia content," filed September 8, 2020, and to the U.S. patent application titled "Custom e-commerce tags in real-time multimedia content," filed September 8, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
Multimedia content, such as video containing a series of images and associated audio, has become ubiquitous. Although such multimedia content may be pre-recorded, in some cases it may be provided to viewers in a live or near-live manner. Such multimedia content may be used for various purposes, such as informing consumers of items that are available for purchase and providing details regarding such items.
Drawings
The features of the present disclosure, its nature, and various advantages will be more apparent from the following detailed description considered in conjunction with the accompanying drawings. The following detailed description refers to the accompanying drawings. In the drawings, the leftmost digit(s) of a reference number identifies the drawing in which the reference number first appears. The use of the same reference symbols in different drawings indicates similar or identical items. The systems depicted in the drawings are not to scale and components in the drawings may be depicted as not being to scale with each other.
FIG. 1 illustrates an example environment for electronic commerce ("e-commerce") tags in multimedia content and custom e-commerce tags in real-time multimedia content.
FIG. 2 illustrates an example conceptual diagram of the output of multimedia content over time and changes in the interactive elements displayed while the multimedia content is rendered.
FIG. 3 illustrates an example conceptual diagram of item identification for items referenced in multimedia content.
FIG. 4A illustrates an example user device displaying a first example interactive element superimposed on multimedia content.
FIG. 4B illustrates an example user device displaying a second example interactive element superimposed on multimedia content.
FIG. 4C illustrates an example user device displaying a third example interactive element superimposed on multimedia content.
FIG. 4D illustrates an example user device displaying a fourth example interactive element superimposed on multimedia content.
FIG. 5 illustrates an example process for determining and utilizing inventory data associated with multimedia content representing an item in real time.
FIG. 6 illustrates an example process for determining whether to aggregate item selections during presentation of multimedia content.
FIG. 7 illustrates an example process for modifying an interactive element based at least in part on user preference data.
FIG. 8 illustrates an example process for utilizing user preferences to modify the display of item information and to pre-populate a purchase user interface.
FIG. 9 illustrates an example process for e-commerce tags in multimedia content.
FIG. 10 illustrates another example process for e-commerce tags in multimedia content.
FIG. 11 illustrates an example process for custom e-commerce tags in real-time multimedia content.
FIG. 12 illustrates a sequence diagram of a process for custom e-commerce tags in real-time multimedia content.
FIG. 13 illustrates an example merchant ecosystem for facilitating, among other things, the techniques described herein.
FIG. 14 illustrates additional details associated with the various components of the merchant ecosystem described above in FIG. 13.
Detailed Description
The technology described herein relates particularly to the generation, association, and/or use of e-commerce tags in multimedia content and, more generally, to improving the use of electronic devices with multimedia content. An e-commerce tag may be a selectable portion associated with the multimedia content that is tied to particular e-commerce actions, such as purchase actions, item-information actions, and/or other shopping-related actions. An interactive element may be any selectable element that can be presented to a user and configured to receive user input indicating a selection. The multimedia content may be any content including image data and/or audio data; for example, it may be a video having a series of images and, in some examples, audio accompanying those images. A user (e.g., a merchant) may post an image, video, or similar content (hereinafter "content") through a platform. Such content may depict items (e.g., goods and/or services). In some examples, the content may be associated with an intent to sell an item depicted in the content (e.g., text associated with an image indicates that the user is seeking to sell an item depicted in the image, speech associated with a video indicates that the user is seeking to sell an item depicted in the video, etc.). In other examples, the content may not be associated with a sales intent (e.g., there is no explicit or implicit indication that the user wishes to sell anything depicted in the content). In at least one example, the techniques described herein eliminate the need for users interested in selling through certain platforms to perform any actions beyond those they would normally perform before publishing content to such platforms.
That is, users interested in selling through these platforms may publish content to one or more platforms, and the techniques described herein aim to create sales opportunities and facilitate transactions based on such content. In one implementation, such platforms are "pure" multimedia content platforms, such as YouTube videos, YouTube Live, Instagram videos, Instagram Live feeds, and the like. In this case, the disclosed methods and systems may be communicatively coupled to a content platform to provide interactive elements and e-commerce tags. In another embodiment, an e-commerce platform may provide the functionality to publish content for sale and/or to process payments.
For example, a merchant may have created or otherwise be associated with multimedia content that references one or more items. The multimedia content may include a visual representation of an item, and/or speech associated with the multimedia content may audibly present a description of the item. In an example, such multimedia content may be created to expose potential customers to the items referenced in it, in the hope that those customers will purchase the items. Alternatively, as described herein, item identification techniques may be used to identify and locate items in the multimedia content. It should be appreciated that when "items" are discussed herein, the items may include one or more goods (e.g., physical products) and/or one or more services (e.g., offered in person or virtually) that may be offered by a merchant. It should also be appreciated that when an item is described as "referenced," the item may be visually displayed (e.g., by gesture) in the multimedia content and/or audibly discussed or otherwise referenced. While multimedia content presented to customers, for example through one or more online platforms, may interest some of those customers, a customer who wishes to purchase a referenced item may need to navigate to a website or other platform that allows the customer to manually add the referenced item to a "shopping cart" and then proceed through a typical checkout process to purchase the item. That is, customers often need to leave what they are currently viewing and access another online platform (e.g., a website, etc.) to view additional information, add items to a shopping cart, and/or otherwise participate in a payment process to purchase items. This friction during the purchase process reduces buyer engagement with the merchant and may cause the merchant to lose sales opportunities.
Further, while merchants may be adept at generating commercials and other forms of multimedia content, they may be less adept at generating user interfaces, displaying items that may be relevant to a particular consumer of the content, and formatting displayable interactive elements that facilitate the consumer's ability to purchase the referenced items.
In an example, the techniques described herein relate to generating interactive elements, such as e-commerce tags, on the fly and superimposing those interactive elements on the multimedia content when it is output to a customer. In one embodiment, the generation and intelligent positioning of such interactive elements requires no merchant intervention. Using the techniques described herein, a merchant may publish multimedia content to a given platform, and when the content is output to a customer, a payment processing service provider may perform operations that result in the generation of item- and/or user-centric interactive elements that may be superimposed on the multimedia content. For example, a payment processing service provider may receive an instance of multimedia content. The multimedia content may be received from a system associated with a given merchant and/or from one or more third-party systems, such as a system associated with a platform such as a social media platform. The payment processing service provider system may be used to receive the multimedia content and generate the interactive elements as described herein.
For example, the payment processing service provider system may be configured to receive multimedia content and/or retrieve multimedia content. For example, a merchant system or other system may push multimedia content to a payment processing service provider system without making specific requests for such content. In other examples, the content component may query one or more other systems for multimedia content. In still other examples, the payment processing service provider system may receive an indication that multimedia content associated with a given merchant has been requested to be output onto a user device associated with a customer. In these examples, the payment processing service provider system may query for instances of the multimedia content and perform techniques for generating the interactive element overlays, for example, before the multimedia content is output onto the user device. In this way, the techniques described herein may intelligently generate an interactive element overlay based at least in part on multimedia content, with no or minimal input from a merchant.
The payment processing service provider system may analyze the multimedia content and/or related data to identify one or more items referenced in the multimedia content. For example, the payment processing service provider system may utilize image data of the multimedia content to identify items depicted in the image data. Such analysis may include using computer vision techniques to identify the presence of an object in the given image data, and then identifying the object itself and/or the class of object to which the object belongs (e.g., shirt, pants, hat, watch, etc.). More detailed information about the use of computer vision techniques is provided below. Additionally or alternatively, when the multimedia content includes user speech, speech recognition and natural language understanding techniques may be used to recognize the speech, generate text data representing the speech, and determine the intent and/or subject of the speech. By doing so, the payment processing service provider system can identify the items referenced in the multimedia content as well as, for example, attributes of the items (e.g., color, size, brand, condition (e.g., new/consignment), payment fulfillment methods, etc.). Additionally or alternatively, the payment processing service provider system may utilize metadata associated with the multimedia content and/or the merchant providing the content to identify the item. For example, the merchant system may provide metadata indicating the items referenced in the multimedia content. In an example, one or more other users may have commented on or otherwise provided information related to the multimedia content. In these and other examples, some or all of this data may be used by the payment processing service provider system to identify items and/or attributes associated with the items referenced in the multimedia content.
That is, the techniques described herein may utilize a machine-trained model to intelligently identify items referenced in multimedia content. This may simplify the generation of interactive multimedia content, thereby providing accurate interactive elements for display and minimizing or eliminating merchant involvement.
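The multi-source item identification described above can be sketched in simplified form. The following is an illustrative sketch only; the names (`identify_items`, `ItemReference`) and the data shapes are assumptions for illustration and are not part of the disclosure. A real implementation would feed this merging step from actual computer vision, speech recognition, and metadata pipelines:

```python
from dataclasses import dataclass, field

@dataclass
class ItemReference:
    """An item detected in multimedia content, with the evidence found."""
    name: str
    attributes: dict = field(default_factory=dict)
    sources: list = field(default_factory=list)  # e.g. "vision", "speech", "metadata"

def identify_items(vision_labels, transcript_terms, metadata_items):
    """Merge item candidates from computer vision, speech transcripts,
    and merchant-supplied metadata into a single set of item references,
    accumulating attributes (color, size, brand, etc.) from each source."""
    merged = {}
    for source, candidates in (("vision", vision_labels),
                               ("speech", transcript_terms),
                               ("metadata", metadata_items)):
        for name, attrs in candidates:
            ref = merged.setdefault(name, ItemReference(name))
            ref.attributes.update(attrs)
            ref.sources.append(source)
    return merged
```

For instance, a vision pipeline might detect a shirt and its color, while the transcript contributes the brand; the merged reference then carries both attributes.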
The payment processing service provider system may receive and/or determine item information associated with the items referenced in the multimedia content. For example, the merchant system may provide data indicating information associated with a referenced item. The information may relate to attributes of the item (e.g., size, color, brand, item type, item options, etc.). Additionally or alternatively, the item information component can query one or more systems for item information. For example, the payment processing service provider system may query the merchant system for inventory data indicating the current inventory of the item, such as when the multimedia content is output. In an example, the merchant system may return inventory data, and the inventory data may be used to inform the customer of the current inventory of the item available from the merchant. In other examples, an indication of the current inventory of one or more other merchants may be retrieved and displayed on the user device, for example when the inventory data indicates that the item is out of stock and/or when user preferences indicate that the customer prefers a different merchant. In examples where an item is no longer available, the system may be configured to change the multimedia content to remove portions of the multimedia content that reference the item. In this way, the techniques described herein may identify item attributes specific to a given multimedia content without requiring manual input of those attributes, and in a manner that may be used to generate specific interactive elements. In fact, one challenge with this technique arises when items identified in the multimedia content are not available for purchase: associating and overlaying interactive elements may prove unnecessary if a user cannot complete a purchase upon activating such links.
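The inventory-driven decision described above (show the element, fall back to another merchant's stock, or omit the element entirely) can be sketched as follows. This is a hypothetical illustration; the function name and the dictionary-based inventory shape are assumptions, and a real system would query live merchant inventory services instead:

```python
def select_element_action(item_id, primary_stock, fallback_stock=None):
    """Decide whether to show a purchase element for an item based on the
    merchant's current inventory. If the primary merchant is out of stock,
    optionally fall back to another merchant's inventory; otherwise omit
    the element so the user is never shown a link they cannot act on."""
    if primary_stock.get(item_id, 0) > 0:
        return ("show", "primary")
    if fallback_stock and fallback_stock.get(item_id, 0) > 0:
        return ("show", "fallback")
    return ("omit", None)
```

Omitting the element when no merchant can fulfill the purchase is what avoids the dead-link problem noted above.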
In at least one example, the operations performed by the payment processing service provider system may utilize a multi-party merchant ecosystem. That is, in some examples, a payment processing service provider (e.g., a server associated therewith) may communicate with end users (e.g., customers and/or merchants) via respective user computing devices and through one or more networks. Such a remote, network-connected, multi-party merchant ecosystem may enable a payment processing service provider to access data associated with a plurality of different merchants and/or customers and use such data to intelligently generate interactive elements, in some examples in real time or near real time. Because the payment processing service provider may have access to a plurality of different merchants and a plurality of different platforms, merchant-related data and multimedia content can be uniquely combined to generate interactive elements for intelligent display with the multimedia content.
In an example, identifying information about an item may be used to construct a three-dimensional representation of the item that is displayed relative to the multimedia content. The three-dimensional representation of the item may also be displayed to the user, for example using interactive elements.
The payment processing service provider system may be configured to utilize the item identification data and/or the item information data to generate data representing the interactive element. The interactive elements may be configured such that when the multimedia content is output on the user device, the interactive elements are also presented, for example in the form of overlays. In an example, the interactive element may be specific to the multimedia content, the items referenced therein, the item attributes, and/or the user preferences. For example, using data received and/or determined as described herein, the payment processing service provider system may determine the type of interactive element to generate. The interactive element types may include, for example, selectable links, quick response codes ("QR codes") and/or other scannable codes, indicators that enable voice input to select interactive elements, indicators that enable gesture input to select interactive elements, and so forth. It should be understood that while several examples of element types have been provided herein, the present disclosure includes any element type that allows for receiving user input. The type of interactive element associated with the given multimedia content may be determined based at least in part on the device type of the user device. For example, if the device type indicates that the device includes a camera, gesture-based interactive elements may be used; or if the device type indicates that the device does not include a touch screen, the interactive element may be configured to receive user input other than touch screen input. Additionally or alternatively, purchase history associated with the user profile being used to view the multimedia content may be used to determine past user input types, and this information may be used to determine the type of interactive element to generate.
In this way, the techniques described herein may generate new data on the fly, which may be configured to cause a user device to change the content displayed in a time-sensitive manner.
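The capability-based selection of an element type described above can be sketched with a simple decision function. The function name, the capability flags, and the element-type labels are illustrative assumptions, not terms from the disclosure:

```python
def choose_element_type(device):
    """Pick an interactive element type matching the device's input
    capabilities: a camera enables gesture input, a touch screen enables
    a tap target, a microphone enables voice selection, and a scannable
    QR code serves as the fallback for devices lacking all of these."""
    if device.get("has_camera"):
        return "gesture"
    if device.get("has_touchscreen"):
        return "tap_link"
    if device.get("has_microphone"):
        return "voice"
    return "qr_code"  # scannable with a second device
```

In practice, past input types from the user's purchase history could reorder these preferences rather than applying a fixed priority.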
In addition to the type of interactive element, the payment processing service provider system may be configured to determine one or more other aspects associated with the interactive element, such as when the interactive element is displayed relative to the multimedia content, where the interactive element is displayed relative to the visual window of the user device, the quantity and/or type of item details to be displayed, and/or the function that occurs when the interactive element is selected. For example, the payment processing service provider system may determine when to display the interactive element based at least in part on data indicating when an item starts being referenced in the multimedia content and when the item stops being referenced. For example, a given piece of content may be two minutes in length, but the item may not begin to be referenced until the 30-second mark and may stop being referenced at the 1-minute mark. Using the item identification data described herein, the payment processing service provider system can generate interactive elements configured to be displayed only during the time frame in which the item is referenced. With respect to determining where to display the interactive element, the payment processing service provider system may utilize the item identification data to determine the relative position of the item depicted in the multimedia content with respect to the visual window of the user device. For example, the item identification data may indicate the location of an object identified in the image data, and the interactive element may be generated such that, when displayed, it is located near the object rather than, for example, on top of it. This enables the user to see the object and the interactive element simultaneously when the multimedia content is output, and to perceive that the object and the interactive element are associated with each other.
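The timing and placement logic described above can be sketched as a small layout computation. This is an illustrative sketch under stated assumptions: the element is assumed to be 120 pixels wide, bounding boxes are `(x, y, width, height)` tuples, and the function name is hypothetical:

```python
def element_layout(ref_start, ref_stop, bbox, frame_w, pad=8, elem_w=120):
    """Compute a display window and an on-screen position for an
    interactive element: it is shown only while the item is referenced
    (ref_start..ref_stop, in seconds), and placed beside the item's
    bounding box rather than on top of it. The element is placed to the
    right of the object, flipping to the left if it would run off-frame."""
    x, y, w, h = bbox
    if x + w + pad + elem_w <= frame_w:
        pos = (x + w + pad, y)           # fits to the right
    else:
        pos = (max(0, x - pad - elem_w), y)  # flip to the left
    return {"start": ref_start, "stop": ref_stop, "position": pos}
```

For the two-minute example above, `ref_start=30` and `ref_stop=60` would confine the overlay to the interval in which the item is referenced.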
With respect to determining the quantity and/or type of item details to display, the payment processing service provider system may utilize the item information to determine attributes associated with the referenced item. In some examples, all attributes may be included in the interactive element. However, in other examples, only a portion of the attributes may be included. For example, one or more user preferences may be received and/or determined using historical data associated with the user profile, and these user preferences may inform which item information is selected for inclusion in the interactive element. For example, the historical data may indicate that the user associated with the user profile in question purchased more items when a certain degree of item detail was displayed and/or when certain types of item detail were provided. As a further example, the historical data may include data associated with user profiles other than (or in addition to) the user profile in question, such as historical data associated with customers of the merchant, customers of different merchants, and/or customers generally. In other examples, the user preferences may be used to upsell or otherwise bundle items based at least in part on the user's previous bundled purchases.
With respect to determining the function that occurs upon selection of an interactive element, the payment processing service provider system may receive and/or determine data indicating user preferences for the selection function. These user preferences may indicate that the user wishes a purchase user interface to be displayed upon selection of an interactive element. In other examples, these user preferences may indicate that the user wishes the purchase user interface to be displayed only after the multimedia content has stopped, or otherwise at some time after the selection of a given interactive element. Such functionality may allow a user to select multiple interactive elements, each corresponding to a different item, before being presented with a purchase user interface. In these examples, the interactive elements may be configured to be selected, and the data indicating those selections may then be saved until the multimedia content ceases. Additionally, the user preference information described herein may be used to recommend additional multimedia content for display to the user and/or to influence how the user navigates between multimedia content.
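The deferred-selection behavior described above can be sketched as a small aggregator. The class name and return conventions are assumptions for illustration; the disclosure does not prescribe a particular data structure:

```python
class SelectionAggregator:
    """Collect interactive-element selections during playback and, per
    user preference, either surface the purchase UI immediately or hold
    all selections until the multimedia content stops."""

    def __init__(self, defer_until_end=True):
        self.defer = defer_until_end
        self.pending = []

    def select(self, item_id):
        if self.defer:
            self.pending.append(item_id)
            return None          # no purchase UI yet; keep watching
        return [item_id]         # show the purchase UI immediately

    def content_stopped(self):
        """Release all deferred selections as one batch for the purchase UI."""
        items, self.pending = self.pending, []
        return items
```

A user who prefers immediate checkout would be given an aggregator with `defer_until_end=False`, so each selection surfaces the purchase interface right away.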
The payment processing service provider system may be configured to generate commands that cause, among other things, a device, such as a user device, to perform an action. For example, the payment processing service provider system may generate a command to cause the interactive element to be presented with the multimedia content. The payment processing service provider system may also generate a command to cause the user device to display a purchase user interface in response to selection of the one or more interactive elements. The payment processing service provider system may also generate a command to cause the user device to display information in the user interface. For example, one or more user input fields of the purchase user interface may be pre-populated based at least in part on some or all of the data discussed herein. For example, the attributes and/or options associated with the selected item may be pre-populated with data from the user profile. Additionally or alternatively, item information determined from the multimedia content may be utilized to pre-populate the item attributes. Additionally, payment information from past transactions associated with the user profile may be used to pre-populate payment options, shipping addresses, etc. on the purchase user interface. In this way, upon selection of the interactive element, the purchase user interface may be automatically displayed and pre-populated with the item and payment instrument information so that the user may only need to confirm the purchase without providing any additional input to obtain the item. It should be appreciated that when the purchase user interface is displayed, the user interface may be associated with a merchant, such as a merchant website that allows the purchase, and/or the user interface may be associated with a payment processing service provider, such as a web-based and/or application-based user interface that allows the purchase. 
In this manner, the techniques described herein may cause applications of a user device to activate and cause display of user interfaces and pre-populate time-sensitive, secure data in these interfaces for use by a user.
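The pre-population described above can be sketched as merging identified item attributes with profile-derived preferences and payment details. The field names and the function name are hypothetical; a real purchase interface would draw from securely stored profile and transaction data:

```python
def prepopulate_purchase_form(item, user_profile):
    """Fill a purchase form from identified item attributes plus
    preferences and payment details drawn from the user's profile and
    purchase history, then flag any fields still requiring user input."""
    form = {
        "item_name": item.get("name"),
        "size": user_profile.get("preferred_size") or item.get("size"),
        "color": item.get("color"),
        "payment_method": user_profile.get("default_payment"),
        "shipping_address": user_profile.get("address"),
    }
    missing = [k for k, v in form.items() if v is None]
    form["needs_input"] = missing  # ideally empty: user just confirms
    return form
```

When every field is filled, the user need only confirm the purchase, which is the friction reduction the techniques described herein aim for.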
The payment processing service provider system may be configured to receive feedback data associated with the interactive element. The feedback data may indicate positive and/or negative comments regarding the interactive element, the display of the interactive element, and/or the function associated with the selection of the interactive element. The feedback data may be collected and utilized to improve the generation of interactive elements and/or related functions. For example, the feedback data may be formatted as input data (or training data) for one or more machine learning models. The input may be used to train a machine learning model that may be used by various components of the systems described herein to perform one or more operations described with respect to the systems. In this way, the techniques described herein may utilize data provided by a user or reviewer to generate new, more accurate interactive elements.
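Formatting feedback as training data, as described above, might look like the following sketch. The event shape, field names, and binary labeling scheme are assumptions for illustration only; the disclosure does not specify a training-data format:

```python
def feedback_to_training_rows(events):
    """Convert user feedback on interactive elements into labeled rows
    suitable for (re)training an element-generation model: positive
    outcomes (selection or purchase) become label 1, all others 0."""
    rows = []
    for e in events:
        rows.append({
            "element_type": e["element_type"],
            "placement": e["placement"],
            "label": 1 if e["outcome"] in ("selected", "purchased") else 0,
        })
    return rows
```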
It should be appreciated that the operations described herein may be performed on pre-recorded multimedia content and/or on real-time and/or near real-time streaming media content. When real-time and/or near real-time streaming media content is used, the content may be received at a payment processing service provider system and may be used for item identification and interactive element generation. This may be performed before the multimedia content is sent to the user device and/or while the user device is outputting the multimedia content. Regarding real-time and/or near real-time examples, the platform associated with the streaming media content may be any platform that allows customer interaction, such as a teleconferencing platform, a social media platform, and the like. In these examples, the functionality associated with the interactive element may be in the form of a downloadable widget or an application that may allow the interactive element to be displayed.
It should be noted that the data and/or information exchange described herein may be performed only if the user has agreed to exchange such information. For example, the user may be provided with an opportunity to opt-in and/or opt-out of data exchange between devices and/or perform the functions described herein when the device is set up and/or an application is launched. Additionally, when one of the devices is associated with a first user account and another of the devices is associated with a second user account, user consent may be obtained prior to performing some, any, or all of the operations and/or processes described herein. Additionally, operations performed by components of the systems described herein may be performed only if a user has agreed to perform the operations.
Embodiments described herein relate to the generation of new "embedded data," such as links that, in some embodiments, may be generated on the fly for displaying selectable links, quick response codes, tags, etc. when multimedia content is output onto a screen of a user device. Additionally, the dynamic display of purchase information (e.g., size, color, inventory, price, etc.) during video output and after a user clicks, scans, or otherwise selects a link is a computer-centric solution that utilizes different data sources to produce new and useful results specific to the computing environment. Furthermore, the techniques described herein include generating new data, such as the link data described herein, that is specific to a given multimedia content and that is configured to be displayed in different manners for different user devices, for example customized based on user profiles. By doing so, the online platform marketplace may be enhanced so that the information displayed to potential customers is more accurate, tailored to specific customers, and presented in a time-sensitive manner, providing functionality that can exist only in a computer-centric environment. The commands for displaying the selectable elements described herein may also be configured to cause an application on the user device to launch and/or activate and to cause the display of time-sensitive information, e.g., without user input. By doing so, a substantial change to the user device itself is achieved, such that the user device can perform functions that would not be possible without the computer-centric solution described herein.
Additionally, given the tremendous amount of information about all of the marketable items, even for a single merchant, together with the possible item options and payment options, the innovations described herein can be used to filter internet content so that only relevant selectable links are displayed (and, in a given example, only at specific screen locations and at specific times) and only relevant items and purchase information are provided when the selectable elements are displayed. Additionally, if the inventory data indicates that the item is not available from a single merchant (e.g., a preferred merchant) or from multiple merchants, the selectable link may be omitted, thereby reducing processing at the payment service provider system and presenting only selectable links that the user can actually use to complete a purchase. By doing so, the present methods and systems provide for filtering internet content in a new way that adds functionality to user devices, merchant devices, and the like.
Embodiments described herein use information about similarity in multimedia content, including but not limited to spatial and temporal similarity; in particular, subject-specific information about a speaker that appears in the multimedia (e.g., video) is collected and used. In one embodiment, the disclosed method and system take advantage of the fact that speech is typically made by one participant of a video conference at a time, and that the video displayed in the main window is typically focused on the speaker rather than on the other, non-speaking participants (whose video may also be displayed, but typically in a smaller sidebar window instead of the main window). Accordingly, in one embodiment, the video information is filtered in a participant-specific manner. Participant-specific video filtering is based at least on the manner in which a particular speaker speaks; for example, the speaker's facial expressions, head and eye movements, gestures, and the like typically indicate or otherwise point to item information. In various examples described below, such participant-specific knowledge is learned through machine learning and is used to minimize redundancy in addition to (or instead of) spatial and temporal redundancy in video transmissions. This allows focus to be placed on one speaker and/or related items rather than on other speakers and other items in the video. It also allows higher compression rates to be achieved than with standardized video filtering and recognition schemes. The learned participant-specific knowledge is also used to decode or reconstruct the encoded video, so that a high or desired perceived quality can be maintained despite the higher compression rate. Since the compression rate is typically higher relative to standardized coding, there is no need to reduce the frame rate of video transmission when the available bandwidth is low.
Thus, a high or desired perceived quality may be maintained in a video conference, where buffering to accommodate variable frame transmission rates is not available. More specifically, such techniques allow for more accurate tagging and identification of items relevant to commerce.
Examples of multimedia content include multimedia transmissions, which fall broadly into two categories: stored video transmissions and real-time streaming media. In stored video transmission, a video is produced once and then transmitted and viewed later. Examples of such videos include movies, previously recorded television programs, previously recorded sporting events, and the like. In real-time streaming media, on the other hand, video production, transmission, reception, and display occur in real time. Examples of real-time streaming media include video chat, video conferencing, news feeds, live shopping events, social media feeds, live infomercials, and the like. One major difference between stored video transmissions and real-time streaming media is that the former may use buffering while the latter lacks buffering. The embodiments described herein provide for the generation of e-commerce tags in any type of multimedia transmission. In the case of stored or previously buffered video, the disclosed methods and systems may analyze the content and then generate tags to associate with relevant portions of the media. In the case of live multimedia, these methods and systems can analyze content in real time or near real time and generate tags on-the-fly using up-to-date information about the item.
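As an illustrative sketch only (not the patented implementation), the tag-generation flow described above can be modeled as records that carry an item identifier and the time span during which the item is referenced; the names `EcommerceTag` and `build_tags` are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class EcommerceTag:
    item_id: str
    start_s: float  # when the item begins to be referenced
    end_s: float    # when the item stops being referenced
    label: str

def build_tags(item_references):
    """Turn detected item references into e-commerce tags.

    For stored video, the full reference list is available up front;
    for live media, the same function can be called incrementally on
    each new reference as it is detected, producing tags on-the-fly.
    """
    return [
        EcommerceTag(ref["item_id"], ref["start_s"], ref["end_s"], ref["label"])
        for ref in item_references
        if ref["end_s"] > ref["start_s"]  # drop zero-length spans
    ]

tags = build_tags([
    {"item_id": "sku-123", "start_s": 30.0, "end_s": 60.0, "label": "blue shirt"},
    {"item_id": "sku-456", "start_s": 90.0, "end_s": 90.0, "label": "hat"},  # dropped
])
```

The same record shape works for both categories of transmission; only the time at which `build_tags` is invoked differs.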
The present disclosure provides a thorough understanding of the principles of structure, function, manufacture, and use of the systems and methods disclosed herein. One or more examples of the present disclosure are illustrated in the accompanying drawings. Those of ordinary skill in the art will understand that the systems and methods specifically described herein and illustrated in the accompanying drawings are non-limiting embodiments. The features illustrated or described in connection with one embodiment may be combined with the features of other embodiments, including between systems and methods. Such modifications and variations are intended to be included within the scope of the appended claims.
Additional details are described below with reference to several example embodiments.
FIG. 1 illustrates an example environment of an electronic commerce ("e-commerce") tag in multimedia content and a custom electronic commerce tag in real-time multimedia content. In fig. 1, one or more servers 104 may be associated with a payment processing service provider, which may communicate with user computing devices, such as merchant device 106 (also described herein as a merchant device and/or merchant system) and buyer device 102 (also described herein as a user device), via one or more networks 108. That is, merchant device 106 and purchaser device 102 are network connection devices that enable end users (e.g., merchants and purchasers, respectively) to access services provided by a payment processing service provider (e.g., via one or more servers 104). Additional details associated with one or more servers 104, a plurality of user computing devices (e.g., 102, 106), and one or more networks 108 are described below with reference to fig. 13 and 14.
In at least one example, one or more servers 104 can include a payment processing component 152. The payment processing component 152 may, among other things, process transactions. That is, in at least one example, payment processing component 152 can access payment data associated with a user, send an authorization request for the payment data to a payment service provider, and process a transaction based on a response from the payment service provider. In other examples, payment processing component 152 may access an account maintained by a payment processing service provider and may process the transaction using funds associated with the account. Additional details associated with payment processing component 152 are described below.
In at least one example, the payment processing service provider may expose functions and/or services through one or more Application Program Interfaces (APIs) 148, thereby enabling the functions and/or services described herein to be integrated into the various functional components of environment 100. One or more APIs 148, which may be associated with one or more servers 104, may expose the functionality described herein to various functional components associated with environment 100 and/or utilize payment processing services. At least one of the APIs 148 may be a proprietary API to provide services and/or functionality to functional components (e.g., applications, etc.) developed internally (e.g., by a developer associated with a payment processing service provider). At least one of the one or more APIs 148 may be an open or public API that provides third party developers (e.g., social media service providers described herein) with programmatic access to proprietary software applications or web services of the payment processing service provider. That is, the open or public API may enable the integration of payment processing service provider functions and/or services into the multimedia content platform. One or more APIs 148 may include a set of requirements that govern how applications or other functional components interact with each other.
In some examples, the payment processing service provider may provide a software development kit ("SDK") to the third party entity, which may utilize the functionality disclosed by the one or more APIs 148. The SDK may include software development tools that allow third party developers (i.e., developers independent of the payment processing service provider) to include the functionality and/or utilize services described herein. The SDK and/or one or more APIs 148 may include one or more libraries, programming code, executable files, other utilities, and documents that allow developers to include the functionality described herein directly within an application and/or utilize services, such as third party applications that provide social networking services, as described herein.
In at least one example, one or more servers 104 may include or otherwise access one or more data stores 150. The one or more data stores 150 may store various types of data, including user profiles and inventory records. Additionally, one or more of the servers 104 may include a user registry 146, which may also include user profiles and/or associations between user profiles and merchant profiles. For example, the buyer's user profile may store payment data associated with one or more payment instruments of the buyer. In some examples, an account maintained by the payment processing service provider on behalf of the buyer may be mapped to, or otherwise associated with, the buyer's user profile. Such accounts may store funds received from peer-to-peer payment transactions, deposits from employers, transfers from other accounts of the buyer, and so forth. Additionally or alternatively, the merchant's user profile may be mapped to, or otherwise associated with, the merchant's account (which may be maintained by a payment processing service provider, bank, or other payment service institution). Additional information is provided below.
As shown in FIG. 1, the buyer device 102 is associated with one or more user interfaces 122 that enable the buyer to interact with the buyer device 102. One or more user interfaces 122 may be presented through a web browser, an application (e.g., a desktop or other dedicated application, whether provided by the payment processing service provider or by a third party), or the like, to enable a buyer to access the functions and/or services described herein. Similarly, the merchant device 106 may be associated with one or more user interfaces that can be presented through a web browser, an application (e.g., a desktop or other dedicated application, whether provided by the payment processing service provider or by a third party), or the like, to enable the merchant to interact with the merchant device 106 and access the functions and/or services described herein.
In at least one example, the user interface of the one or more user interfaces 122 can be presented via a multimedia platform (e.g., website, application, etc.) associated with the multimedia content provider. The payment processing service provider's functions and/or services may be integrated into the social media platform through one or more APIs 148 and/or SDKs. In at least one example, a merchant may publish content through the platform. In fig. 1, the content is multimedia content, but in additional or alternative examples, the content may be any other type of content. In at least one example, the buyer can access and/or consume the content through a user interface of one or more user interfaces 122 presented via the platform. That is, both the merchant and the buyer may access the platform through the user interfaces presented by the respective devices.
In at least one example, one or more users may respond to the content, e.g., through comments (which may include text, images, emoticons, etc.), interactions through buttons or other actuation mechanisms (e.g., like, dislike, fun, love, etc.), and so forth. Such responses may be issued in near real time. For example, one or more users may respond to multimedia content provided by a merchant.
As described above, environment 100 may include a buyer device 102, one or more servers 104, and/or merchant device 106. In addition to the components discussed above, the buyer device 102 may include one or more components, such as one or more processors 110, one or more network interfaces 112, memory 114, one or more microphones 116, one or more speakers 118, and/or one or more displays 120. Microphone 116 may be configured to receive audio from environment 100, and may generate corresponding audio data, which may be utilized as discussed herein. Speaker 118 may be configured to output audio, for example, audio corresponding to at least a portion of the multimedia content output by purchaser device 102. The display 120 may be configured to present an image (which may be described as video) corresponding to at least a portion of the multimedia content output by the buyer device 102. Memory 114 may include one or more components, such as one or more user interfaces 122 (discussed above), and one or more application programs 124. The application 124 may be associated with a content provider, a merchant, and/or a payment processing service provider. Merchant device 106 may include the same or similar components that may perform the same or similar functions. It should be noted that merchant device 106, as with other devices and systems described herein, may take one or more forms, such as a computing device, a notebook computer, a telephone, and/or components thereof.
The one or more servers 104 may include one or more components including, for example, one or more processors 126, one or more network interfaces 128, and/or memory 130. Memory 130 may include one or more components, such as a content component 132, an item identification component 134, an item information component 136, an interactive element generator 138, a command generator 140, a feedback component 142, one or more machine learning models 144, a user registry 146, one or more APIs 148, one or more data stores 150, and/or a payment processing component 152. User registry 146, API(s) 148, data store 150, and payment processing component 152 have been described above. Other components will be described below by way of example.
For example, the content component 132 may be configured to receive multimedia content and/or retrieve multimedia content. For example, merchant device 106 or other system may push multimedia content to a payment processing service provider without requiring a specific request for such content. In other examples, the content component 132 may query one or more other systems for multimedia content. In yet other examples, the content component 132 may receive an indication that multimedia content associated with a given merchant has been requested to be output onto the buyer device 102 associated with the customer. In these examples, the content component 132 may query instances of the multimedia content and perform techniques for generating the interactive element overlays, e.g., before the multimedia content is output onto the buyer device 102.
The item identification component 134 can analyze the multimedia content and/or related data to identify one or more items referenced in the multimedia content. For example, the item identification component 134 can utilize image data of the multimedia content to identify items depicted in the image data. Such analysis may include using computer vision techniques to identify the presence of an object in the given image data, and then identifying the object itself and/or the class of object to which the object belongs (e.g., shirt, pants, hat, watch, etc.). Additionally or alternatively, when the multimedia content includes user speech, speech may be recognized using speech recognition and natural language understanding techniques, text data representing the speech is generated, and then the intent and/or purpose of the speech is determined. By so doing, the item identification component 134 can identify the item referenced in the multimedia content as well as, for example, the attributes (e.g., color, size, brand, etc.) of the item. Additionally or alternatively, the item identification component 134 can utilize metadata associated with the multimedia content and/or the merchant providing the content to identify the item. For example, merchant device 106 may provide metadata indicating the items referenced in the multimedia content. In an example, one or more other users may have reviewed or otherwise provided information related to the multimedia content. In these and other examples, some or all of this data can be used by the item identification component 134 to identify items referenced in the multimedia content and/or attributes associated with the items. It should be appreciated that while speech recognition is described herein as being useful for recognizing items, in an example, speech recognition may be used for item recognition along with image recognition as described herein.
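The three evidence sources described above (computer-vision object labels, speech-derived intents, and merchant metadata) can be combined into a single item record. The following is a minimal sketch under assumed data shapes; `identify_items` and the dictionary formats are hypothetical, not the patent's actual interfaces:

```python
def identify_items(vision_labels, speech_intents, metadata_items):
    """Merge item evidence from the three sources described above:
    object labels from computer vision, intents from speech
    recognition/NLU, and merchant-supplied metadata."""
    items = {}
    for label, attrs in vision_labels:            # e.g. ("shirt", {"color": "blue"})
        items.setdefault(label, {}).update(attrs)
    for intent in speech_intents:                 # e.g. {"item": "shirt", "brand": "Acme"}
        name = intent.get("item")
        if name:
            attrs = {k: v for k, v in intent.items() if k != "item"}
            items.setdefault(name, {}).update(attrs)
    for name, attrs in metadata_items.items():    # merchant-provided metadata
        items.setdefault(name, {}).update(attrs)
    return items

merged = identify_items(
    vision_labels=[("shirt", {"color": "blue"})],
    speech_intents=[{"item": "shirt", "brand": "Acme"}],
    metadata_items={"shirt": {"size": "M"}},
)
```

Later sources fill in attributes the earlier ones missed, which mirrors the passage's point that speech recognition may be used for item recognition along with image recognition rather than instead of it.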
With respect to computer vision technology, computer vision includes methods for acquiring, processing, analyzing, and understanding images, as well as high-dimensional data, typically from the real world, in order to produce numerical or symbolic information, for example in the form of decisions. Computer vision attempts to replicate the abilities of human vision by electronically perceiving and understanding an image. In this context, understanding means converting visual images (the input to the retina) into descriptions of the world that can interface with other thought processes and elicit appropriate action. Such image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory. Computer vision has also been described as an enterprise that automates and integrates a wide range of processes and representations for visual perception. As a scientific discipline, computer vision is concerned with the theory behind artificial systems that extract information from images. The image data may take many forms, such as video sequences, views from multiple cameras, or multi-dimensional data from a scanner. As a technological discipline, computer vision seeks to apply its theories and models to the construction of computer vision systems.
One aspect of computer vision includes determining whether image data contains certain specific objects, features, or activities. Different kinds of computer vision recognition include: object recognition (also referred to as object classification), in which one or more pre-specified or learned objects or object categories are identified, typically along with their two-dimensional positions in an image or three-dimensional poses in a scene; identification, in which a single instance of an object is recognized (e.g., a particular face or fingerprint, a handwritten digit, or a particular vehicle); and detection, in which image data is scanned for a particular condition (e.g., abnormal cells or tissue that may be present in a medical image, or a vehicle in an automated road tolling system). Detection based on relatively simple and fast computations is sometimes used to find smaller regions of interest in the image data, which can then be further analyzed by more computationally demanding techniques to produce a correct interpretation.
There are some specialized tasks based on computer vision recognition, such as: Optical Character Recognition (OCR), which recognizes characters in images of printed or handwritten text, typically to encode the text into a format that is easier to edit or index (e.g., ASCII); two-dimensional code reading, which reads two-dimensional codes such as Data Matrix and QR codes; face recognition; and Shape Recognition Technology (SRT), which distinguishes humans (e.g., head-and-shoulder patterns) from objects.
Some of the functions and components (e.g., hardware) found in many computer vision systems are described herein. For example, the digital image is generated by one or more image sensors, possibly including distance sensors, tomographic devices, radar, ultrasound cameras, etc., in addition to various types of photosensitive cameras. The generated image data may be a two-dimensional image, a three-dimensional volume, or a sequence of images, depending on the type of sensor. The pixel values may correspond to light intensities in one or more spectral bands (gray-scale or color images), but may also be related to various physical measures, such as depth, absorption or reflection of sound waves or electromagnetic waves, or nuclear magnetic resonance. Before applying computer vision methods to image data to extract certain specific information, it is often beneficial to process the data to ensure that it meets certain assumptions underlying the method. Examples of preprocessing include, but are not limited to: resampling to ensure that the image coordinate system is correct; noise reduction to ensure that sensor noise does not introduce spurious information; contrast enhancement to ensure that relevant information can be detected; and a scale-space representation to enhance the image structure at a locally appropriate scale. Image features of various degrees of complexity are extracted from the image data. Typical examples of such features are: lines, edges and ridges; local points of interest, such as corners, spots, or points; more complex features may be related to texture, shape, or motion. At some point during the processing, it may be decided which image points or areas of the image are relevant for further processing. 
Examples are: selecting a set of specific points of interest; segmenting one or more image regions containing a particular object of interest; the image is segmented into a nested scene architecture (also referred to as a spatially-taxonomic scene hierarchy) that includes a foreground, a group of objects, a single object, or a salient object portion. In this regard, the input may be a small set of data, such as a set of points or an image region that is assumed to contain a particular object. The remaining processing may include, for example: verifying whether the data satisfies model-based and application-specific assumptions; estimating an application-specific parameter, such as an object pose or an object size; classifying the detected objects into different categories; and comparing and combining two different views of the same object. The final decision required by the application may then be performed, for example to identify matches/mismatches in the application.
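The pipeline stages described above (preprocessing such as contrast enhancement, followed by selecting image points or regions of interest) can be sketched in miniature on a grayscale image represented as nested lists. This is a toy illustration of the general technique, not the patent's system; `preprocess` and `segment` are hypothetical names:

```python
def preprocess(image):
    """Contrast-stretch a grayscale image (values 0-255) so that
    relevant structure is easier to detect, per the contrast
    enhancement preprocessing step described above."""
    lo = min(min(row) for row in image)
    hi = max(max(row) for row in image)
    span = (hi - lo) or 1  # avoid division by zero on flat images
    return [[(p - lo) * 255 // span for p in row] for row in image]

def segment(image, threshold=128):
    """Select image points of interest: return coordinates of pixels
    brighter than the threshold (a crude foreground segmentation)."""
    return [(r, c) for r, row in enumerate(image)
            for c, p in enumerate(row) if p > threshold]

img = [[10, 10, 10],
       [10, 200, 10],
       [10, 10, 10]]
foreground = segment(preprocess(img))  # the bright center pixel
```

Real systems would of course use learned detectors rather than a fixed threshold, but the shape of the flow (preprocess, then select regions for further analysis) is the same.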
The item information component 136 can receive and/or determine item information associated with items referenced in the multimedia content. For example, the merchant device 106 may provide data indicating information associated with the referenced item. The information may include information related to attributes of the item (e.g., size, color, brand, item type, item options, etc.). Additionally or alternatively, the item information component 136 can query one or more systems for item information. For example, the item information component 136 may query the merchant device 106 for inventory data indicating the current inventory of the item, such as at the time the multimedia content is output. Inventory data may indicate one or more attributes and/or conditions of an item. For example, the inventory data may include an inventory count or other indication of the number of items in question in the current inventory. The inventory data may also include information about the items currently in inventory, such as item color, item size, item type, the physical location and/or associated location of the item, and/or the availability of the item. In an example, the merchant device 106 may return inventory data, and the inventory data may be used to inform customers of the inventory of items currently available from the merchant. In other examples, an indication of the current inventory of one or more other merchants may be retrieved and displayed on the buyer device 102, such as when inventory data indicates that the item is out of stock and/or when user preferences indicate that the customer prefers a different merchant. By communicating inventory data, the user obtains reliable information as to whether he or she can activate the selectable elements to access item information and/or e-commerce-specific actions, as described below.
Alternatively, selectable links for out-of-stock items may not be displayed, to avoid processing those items in the present system and thereby reduce the resource burden on the payment service provider system. The item information component 136 can also receive item information from the item identification component 134, such as when the item identification component 134 determines one or more attributes of an item utilizing the techniques described herein. When determining the inventory of items, a machine learning model (e.g., those described herein) may be utilized to determine which items are likely or unlikely to sell, and when the inventory is displayed as described herein, e.g., as part of an interactive element, the inventory may be a predictive inventory based at least in part on the output of the machine learning model.
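The inventory-based filtering described above (omitting selectable links for out-of-stock items so they are never processed downstream) can be sketched as follows; the function name and record fields are illustrative assumptions:

```python
def selectable_links(items, inventory):
    """Return link data only for items whose inventory data shows
    stock on hand; out-of-stock items are omitted so no selectable
    link is generated for them, reducing downstream processing."""
    links = []
    for item in items:
        record = inventory.get(item["id"], {})
        if record.get("count", 0) > 0:
            links.append({
                "item_id": item["id"],
                "label": item["name"],
                "price": record.get("price"),
            })
    return links

links = selectable_links(
    items=[{"id": "sku-123", "name": "blue shirt"},
           {"id": "sku-456", "name": "hat"}],
    inventory={"sku-123": {"count": 4, "price": 2999},
               "sku-456": {"count": 0, "price": 1500}},  # out of stock: omitted
)
```

A predictive inventory from a machine learning model could be dropped into the same `count` field without changing the filtering logic.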
The interactive element generator 138 may be configured to generate data representing the interactive element using data received and/or determined by the item identification component 134 and/or the item information component 136. The interactive elements may include any selectable element that may be presented to a user and configured to receive user input indicative of a selection. The e-commerce tag may be a type of interactive element and may be a selectable portion associated with multimedia content that may include e-commerce-specific actions such as purchase actions, item information actions, and/or other shopping-related actions. For example, the multimedia content may be any content including image data and/or audio data. The interactive elements may be configured such that when the multimedia content is output on the buyer device 102, the interactive elements are also presented, for example in the form of overlays. In an example, the interactive element may be specific to the multimedia content, the items referenced therein, the item attributes, and/or user preferences. For example, using data received and/or determined as described herein, the interactive element generator 138 may determine the type of interactive element to generate. The interactive element types may include, for example, selectable links, quick response codes ("QR codes"), bar codes or other scannable elements, indicators that enable voice input to select interactive elements, indicators that enable gesture input to select interactive elements, and so forth. It should be understood that while several examples of element types have been provided herein, the present disclosure includes any element type that allows for receiving user input. Determining the type of interactive element associated with given multimedia content may be based at least in part on the device type of the buyer device 102.
For example, if the device type indicates that the device includes a camera, gesture-based interactive elements may be used; or if the device type indicates that the device does not include a touch screen, the interactive element may be configured to receive user input other than touch screen input. Additionally or alternatively, purchase history associated with the user profile being used to view the multimedia content may be used to determine past user input types, and this information may be used to determine the type of interactive element to generate.
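The device-capability and purchase-history reasoning above can be sketched as a simple selection function. The capability flags, element-type names, and preference ordering are assumptions for illustration only:

```python
def choose_element_type(device, past_input_types=()):
    """Pick an interactive element type from the device's reported
    capabilities, preferring a type the user has used before
    (per purchase history), with a scannable QR code as fallback."""
    supported = []
    if device.get("touch_screen"):
        supported.append("selectable_link")
    if device.get("camera"):
        supported.append("gesture")
    if device.get("microphone"):
        supported.append("voice")
    supported.append("qr_code")  # works on any display, e.g. a TV
    for preferred in past_input_types:  # most recent input types first
        if preferred in supported:
            return preferred
    return supported[0]

elem = choose_element_type({"camera": True})  # no touch screen
preferred = choose_element_type(
    {"touch_screen": True, "microphone": True},
    past_input_types=("voice",),
)
```

A device without a touch screen thus receives a non-touch element, and a user who has previously used voice input is offered voice selection again.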
In addition to the type of interactive element, the interactive element generator 138 may be configured to determine one or more other aspects associated with the interactive element, such as when the interactive element is displayed relative to the multimedia content, a location of the interactive element relative to a visual window, such as with respect to the buyer device 102, a quantity and/or type of item details to be displayed, and/or a function that occurs when the interactive element is selected. For example, the interactive element generator 138 may determine when to display the interactive element based at least in part on data indicating when an item starts to be referenced in the multimedia content and when the item stops being referenced. For example, a given content may be two minutes in length, but the item may not begin to be referenced until the 30 second mark and then stop being referenced at the 1 minute mark. Using the item identification data described herein, the interactive element generator 138 may generate interactive elements configured to be displayed only during the time frames in which the item is referenced. With respect to determining where to display the interactive elements, the interactive element generator 138 may utilize the item identification data to determine the relative position of the item described in the multimedia content with respect to the visual window of the buyer device 102. For example, the item identification data may indicate a location of an object identified in the image data, and the interactive element may be generated such that when displayed, the interactive element may be located near the object, rather than, for example, above the object. This will enable the user to see the object and the interactive element simultaneously and perceive that the object and the interactive element are associated with each other when the multimedia content is output.
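The timing and placement logic above (show the element only while the item is referenced, and position it near, not over, the detected object) can be sketched as follows; the coordinate convention, the 10-pixel gap, and the function name are hypothetical:

```python
def element_placement(item_span, object_box, frame_height):
    """Compute when and where to show an interactive element:
    visible only during the span in which the item is referenced,
    and positioned just below the detected object's bounding box
    (or above it when there is no room below), so the object and
    the element can be seen together without occlusion."""
    x, y, w, h = object_box       # top-left corner plus width/height
    gap = 10                      # assumed spacing in pixels
    if y + h + gap < frame_height:
        position = (x, y + h + gap)      # below the object
    else:
        position = (x, max(0, y - gap))  # above, if no room below
    start_s, end_s = item_span
    return {"start_s": start_s, "end_s": end_s, "position": position}

# Item referenced from the 30-second to the 1-minute mark of a 720-row frame.
placement = element_placement((30.0, 60.0), (100, 100, 50, 50), 720)
```

In the two-minute-content example from the text, the returned span would keep the element hidden before the 30-second mark and after the 1-minute mark.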
With respect to determining the number and/or type of item details to display, the interactive element generator 138 may utilize item information from the item information component 136 to determine attributes associated with the referenced item. In some examples, all attributes may be included in the interactive element. However, in other examples, only a portion of the attributes may be included. For example, one or more user preferences may be received and/or determined using historical data associated with the user profile, and these user preferences may inform which item information is selected for inclusion in the interactive element. For example, the historical data may indicate that the user associated with the user profile in question purchased more items when a certain degree of item detail and/or certain types of item detail were provided. As a further example, the historical data may be associated with user profiles in addition to (or other than) the user profile in question, such as historical data associated with customers of the merchant, customers of different merchants, and/or customers generally. Additionally, in examples where the device functionality allows for the display of augmented reality and/or virtual reality representations of multimedia data, the interactive elements may be configured to also be displayed in the virtual reality and/or augmented reality settings. This may allow for different orientations and/or views of the interactive elements, for example when the orientations and/or views of items in the multimedia content change.
With respect to determining a function that will appear when an interactive element is selected, the interactive element generator 138 may receive and/or determine data indicating a user's preference for selecting the function. These user preferences may indicate that the user desires to display the purchase user interface 122 upon selection of an interactive element. In other examples, these user preferences may indicate that the user wishes to display the purchase user interface 122 only after the multimedia content has stopped or otherwise at some time after a given interactive element has been selected. This may allow the user to select a plurality of interactive elements, each corresponding to a different item, before being presented with the purchase user interface 122. In these examples, the interactive elements may be configured to be selected and then the data indicating these selections may be saved until the multimedia content ceases.
In the example provided above, the interactive element generator 138 is described as generating specific interactive elements for specific items referenced in the multimedia content. In other examples, the interactive element generator 138 may generate default and/or generic interactive elements that indicate that an item is available but that specific details regarding the item are not provided. When the generic interactive element is selected, the payment processing service provider system may parse known information about the item and/or user profile associated with the user to pre-populate the purchase user interface with more specific information.
The command generator 140 may be configured to generate commands that, among other things, cause a device, such as a user device, to perform an action. For example, the command generator 140 may generate commands that cause interactive elements to be presented with multimedia content. The command generator 140 may also generate a command to cause the buyer device 102 to display the purchase user interface 122 in response to selection of one or more interactive elements. The command generator 140 may also generate commands that cause the buyer device 102 to display information in the purchase user interface 122. For example, one or more user input fields of the purchase user interface 122 may be pre-populated based at least in part on some or all of the data discussed herein. For example, the attributes and/or options associated with the selected item may be pre-populated with data from the user profile. Additionally or alternatively, item information determined from the multimedia content may be utilized to pre-populate the item attributes. Additionally, payment information from past transactions associated with the user profile may be used to pre-populate payment options, shipping addresses, etc. on the purchase user interface 122. In this manner, upon selection of the interactive element, the purchase user interface 122 may automatically display and pre-populate the item and payment instrument information so that the user may only need to confirm the purchase, without providing any additional input, to obtain the item. Additionally or alternatively, the user may select a "save for later" function that allows information associated with the purchase user interface to be saved so that the user may later use the information to purchase one or more items. In other examples, selecting an interactive element may, instead of displaying the purchase user interface, allow a user to enter an auction in which the item can be bid on.
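The pre-population step described above can be sketched by merging the available data sources in priority order. The field names and source dictionaries below are assumptions for illustration only.

```python
# Hypothetical sketch: pre-populating purchase UI fields from item
# identification data, the user profile, and the most recent transaction.
# Field and key names are illustrative assumptions.

def prepopulate_purchase_ui(item_info: dict, profile: dict, last_txn: dict) -> dict:
    """Merge sources: item data first, then profile preferences, then
    payment details reused from the most recent transaction."""
    form = {
        "item": item_info.get("type"),
        "brand": item_info.get("brand"),
        "size": profile.get("preferred_size"),
        "shipping_address": profile.get("address") or last_txn.get("shipping_address"),
        "payment_method": last_txn.get("payment_method"),
    }
    # Fields still empty are the only ones requiring user input to confirm.
    form["needs_input"] = [k for k, v in form.items() if v is None]
    return form

form = prepopulate_purchase_ui(
    {"type": "shirt", "brand": "A brand"},
    {"preferred_size": "M", "address": "123 Main St"},
    {"payment_method": "card-on-file"},
)
```

When every field resolves, `needs_input` is empty and the user only needs to confirm the purchase, matching the single-confirmation flow described above.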
This may allow multiple users to bid on the same item at the same time. As described above, selecting the interactive element may cause the purchase user interface 122 to be displayed. In an example, the purchase user interface 122 may be associated with a merchant. However, in other examples, at least a portion of the purchase functionality may be provided by the payment processing service provider system 104. In these examples, the payment processing service provider system 104 may be associated with an application that may be stored on and/or accessible from a customer's device to allow one or more items to be purchased using the application. In other examples, selection of the interactive element may allow the user to engage with items in the video, and when the interactive element is selected, a request may be sent to the particular merchant associated with those items, allowing the merchant to interact directly with the customer or allowing the customer to purchase the most relevant items directly. In these examples, the interactive elements may be associated with certain portions of the multimedia content (e.g., portions where information for items described in the multimedia content is available), while other portions are not interactive and therefore do not have any "purchasable" items associated therewith. Additionally, the user preference information described herein may be used to recommend additional multimedia content for display to a user and/or to influence how the user navigates between multimedia content.
In an example, the virtual shopping cart and/or shopping cart data structure may be generated based at least in part on selection of the interaction element and/or other interactions of the customer with the purchase user interface. In these examples, the virtual shopping cart may include details regarding the selected interactive element, one or more items associated with the interactive element, payment and/or fee information associated with the one or more items, and/or other data associated with the one or more items and/or payment transactions for the one or more items. The shopping cart data structure may be configured to generate and/or provide available payment links and/or to make subsequent interactions by the user easier and/or more efficient. For example, a shopping cart data structure may be saved or otherwise stored such that a user may later return to the virtual shopping cart to continue purchasing one or more items without having to select the items again and/or without having to enter information that has been entered and/or pre-filled. The message may also be sent to a device and/or user profile associated with the user using the shopping cart data structure. These messages may represent reminders for one or more items in the virtual shopping cart and/or requests to complete a purchase. In an example, the virtual shopping cart may include fully defined items or suggestions and/or categories of items that may be refined at a later time. Thus, the virtual shopping cart may act as a "bookmark" and/or may otherwise hold data associated with one or more items and/or data associated with a user's interaction with the items through multimedia content, interactive elements, applications, or other means.
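The persisted cart and payment-link behavior described above can be sketched as follows. The storage class, the serialization choice, and the link format are all illustrative assumptions, not the patent's implementation.

```python
# Hypothetical sketch: a shopping cart data structure that survives across
# sessions (a "bookmark") and can produce a reusable payment link.

import json

class CartStore:
    """In-memory stand-in for wherever carts would actually be persisted."""
    def __init__(self):
        self._saved = {}

    def save(self, user_id: str, cart: dict) -> None:
        self._saved[user_id] = json.dumps(cart)   # serialize for storage

    def restore(self, user_id: str) -> dict:
        return json.loads(self._saved[user_id])

def payment_link(cart: dict) -> str:
    """Derive a reusable payment link from the cart's identifier (assumed URL)."""
    return f"https://pay.example.com/cart/{cart['cart_id']}"

store = CartStore()
cart = {"cart_id": "c-42", "items": [{"id": "shirt-123", "qty": 1}]}
store.save("user-7", cart)            # user leaves mid-purchase...
restored = store.restore("user-7")    # ...and returns later without re-selecting
link = payment_link(restored)
```

The restored structure could also drive the reminder messages mentioned above, since it retains the item and user identifiers needed to address them.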
Additionally, model 144 may be trained based on transaction information and/or other information associated with payment transactions. For example, transaction information for transactions performed between multiple merchants and multiple buyers may be received at the payment processing service provider system 104. The information may be received from a merchant computing device associated with the merchant. In these examples, the merchant computing devices may have respective instances of merchant applications installed thereon for configuring the merchant computing devices as point-of-sale (POS) terminals, respectively. The corresponding instance of the merchant application may configure the POS terminal to communicate transaction information to the payment processing service provider system 104 over one or more networks. In some examples, the POS terminal may be an online terminal. With the transaction information, the payment processing service provider system 104 may generate the profile using a model trained with at least one of merchant information, buyer information, and/or transaction information.
In some embodiments, the methods and systems described herein may be integrated with voice services (e.g., those provided by Amazon, Apple, or Microsoft). The present methods and systems may be integrated with the "wake words" that invoke the respective voice services, e-commerce, and payment fulfillment channels. For example, speaker recognition techniques may be utilized to determine the user profile associated with a user who provides user utterances to a user device to perform one or more of the operations described herein. The determined user profile may be used to customize the interactive elements described herein, the functionality that occurs when the interactive elements are selected, and/or the information displayed in the purchase user interface. In an example, the geographic location of the viewing device may also be used to customize the availability of interactive elements, multimedia content, and/or items. The voice interface may also be used to provide user input as described herein. Further, the voice interface may be associated with an online platform, and the platform may be utilized to complete item purchases.
In an example, a merchant information display panel may be used to allow a merchant to view interactions with multimedia content and/or data associated with interactions. Merchants may be able to provide input for dynamic pricing of items based at least in part on interactions. Additional functions may include instant question answers, rewards terms, and other functions that allow a merchant to interact with a customer. Additionally, the merchant information display panel may allow the merchant to interact with the customer in real-time and process payments with different users.
The feedback component 142 may be configured to receive feedback data associated with the interactive element. The feedback data may indicate positive and/or negative comments regarding the interactive element, the display of the interactive element, and/or the function associated with the selection of the interactive element. The feedback data may be collected and utilized to improve the generation of interactive elements and/or related functions. For example, the feedback data may be formatted as input data (or training data) for one or more machine learning models 144. This input may be used to train a machine learning model 144, which may be used by various components of the systems described herein to perform one or more of the operations described with respect to these systems.
Embodiments described herein use information about similarity in multimedia content, including but not limited to spatial and temporal similarity; in particular, subject-specific information about a speaker appearing in the multimedia (e.g., video) is collected and used. In one embodiment, the disclosed method and system take advantage of the fact that speech is typically made by one participant of a video conference at a time, and that the video displayed in the main window is typically focused on the speaker rather than on the other, non-speaking participants (whose video may also be displayed, but typically in a smaller sidebar window rather than the main window). Filtering the video information according to one embodiment is therefore performed in a participant-specific manner. Participant-specific video filtering is based at least on the manner in which a particular speaker speaks; for example, the facial expressions of the speaker, head and eye movements, gestures, etc. typically indicate or otherwise point to projected information. In various examples described below, such participant-specific knowledge is learned through machine learning, and this knowledge is used to minimize redundancy in video transmission in addition to (or instead of) spatial and temporal redundancy. This allows focus to be placed on one speaker and/or related items rather than on other speakers and other items in the video. This also allows higher compression rates to be achieved than with standardized video filtering and recognition schemes. The learned participant-specific knowledge is also used to decode or reconstruct the encoded video, so that a high or desired perceived quality can be maintained despite the higher compression rate. Since the compression rate is typically higher relative to standardized coding, there is no need to reduce the frame rate of video transmission when the available bandwidth is low.
Thus, a high or desired perceived quality may be maintained in a video conference without buffering to accommodate variable frame transmission rates. More specifically, such techniques allow for more accurate tagging and identification of items relevant to commerce.
Fig. 2 shows an example conceptual diagram illustrating the output of multimedia content over time and the change in interactive elements displayed while rendering the multimedia content. The environment 200 of fig. 2 may include the buyer device 102 caused to display multimedia content and interactive elements displayed as overlays on the multimedia content.
With particular reference to fig. 2, the multimedia content may reference a first item 206, depicted herein as a shirt. The first item 206 may be identified using one or more of the techniques described elsewhere herein. The buyer device 102 can also display or otherwise be associated with a time bar 202 and a progress element 204. The time bar 202 may provide an indication of the total length of the multimedia content, for example, in minutes and seconds. The progress element 204 may provide an indication of the location of the currently displayed content relative to the total length of the multimedia content. As shown in step 1 of FIG. 2, the progress element 204 indicates that the first item 206 is displayed early in the multimedia content. Also as shown in step 1, the interactive element generator has generated a first interactive element 208 to be displayed in association with the first item 206. The first interactive element 208 may be generated and customized using techniques described elsewhere in this disclosure, for example with reference to fig. 1. As shown in step 1 of FIG. 2, the first interactive element 208 has been generated and caused to be displayed at a given location relative to the interactive window of the buyer device 102, and the first interactive element 208 includes certain item details. The interactive elements may include any selectable element that may be presented to a user and configured to receive user input indicative of a selection. The e-commerce tag may be a type of interactive element and may be an optional portion associated with multimedia content that may include e-commerce specific actions such as purchase actions, item information actions, and/or other shopping-related actions.
As shown in FIG. 2, the item details include the item type (e.g., "shirt"), the brand (e.g., "A brand"), and the inventory quantity of the item (e.g., "5"). It should be appreciated that any other item information may also or alternatively be displayed as part of the interactive element. Additionally or alternatively, the displayed information, such as time, location, viewer, inventory status, etc., may be customized based on real-time or near real-time contextual information. When content is described herein as real-time, the content may be considered real-time streaming content as described above. Near real-time content may represent content that is not necessarily live but is presented shortly after being recorded. The context information may be determined based at least in part on one or more signals indicative of the context of the multimedia content, the items presented, and/or the user device viewing the content. For example, location tracking information, internet protocol address information, historical customer interaction data, customer preferences, behavioral data, merchant preferences, etc. may be used to determine a context, and information associated with the context may be used as described herein. With respect to determining where to display the interactive element 208, the payment processing service provider system may utilize the item identification data to determine the relative position of the item described in the multimedia content with respect to the visual window of the user device. For example, the item identification data may indicate the location of an object identified in the image data, and the interactive element 208 may be generated such that, when displayed, the interactive element 208 is positioned near the object rather than, for example, overlapping the object.
This will cause the user to see the object and the interactive element 208 simultaneously when the multimedia content is output, and perceive that the object and the interactive element 208 are associated with each other. In an example, the first interactive element 208 may be placed as a spot advertisement within the same application in which the item is presented. In another example, information about the item may be sent as an electronic message, such as a text message or email, as a commercial break within another mobile and/or web application, as a pop-up notification, or on a display that may be found by way of a gesture (e.g., pinch gesture) or a particular keyboard, audio, visual, or tactile input.
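The placement behavior described above can be sketched as a small geometry routine: given the detected object's bounding box, position the element adjacent to the object without covering it. `place_element`, the fallback order, and the pixel coordinates are all assumptions for illustration.

```python
# Hypothetical placement sketch: put the interactive element next to the
# identified object's bounding box rather than over it, clamped to the
# visible window. Boxes are (x, y, w, h) in pixels; names are illustrative.

def place_element(obj_box, elem_w, elem_h, view_w, view_h, margin=8):
    """Prefer the space to the right of the object; fall back to the left,
    then clamp if overlap is unavoidable. Tops are aligned."""
    x, y, w, h = obj_box
    if x + w + margin + elem_w <= view_w:      # room to the right of the object
        ex = x + w + margin
    elif x - margin - elem_w >= 0:             # otherwise, room to the left
        ex = x - margin - elem_w
    else:                                      # overlap unavoidable; clamp
        ex = max(0, min(x, view_w - elem_w))
    ey = max(0, min(y, view_h - elem_h))       # align tops, stay on screen
    return (ex, ey)

# Shirt detected at (100, 200) sized 150x180 in a 1080x1920 portrait view:
pos = place_element((100, 200, 150, 180), elem_w=200, elem_h=60,
                    view_w=1080, view_h=1920)
```

Placing the element beside rather than over the object preserves the simultaneous-visibility effect described above.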
As shown in fig. 2, the multimedia content may continue to be output from step 1 or may otherwise proceed. In step 2, the multimedia content may have advanced to a point where it no longer references the first item 206 (not shown here), but instead references the second item 210. Using the item identification techniques described elsewhere herein, time values associated with when the first item 206 stopped being referenced and when the second item 210 started being referenced can be determined, and this information can be used to determine when to stop displaying the first interactive element 208 and when to start displaying the second interactive element 212 associated with the second item. It should also be appreciated that the position of the second item 210 relative to the interactive window has changed from the position of the first item 206. This information may be used to configure the second interactive element 212 to be displayed in a different location than that of the first interactive element 208. By doing so, multiple interactive elements may be generated for multiple items in the same multimedia content, and each interactive element may be specifically configured for the item associated with the interactive element.
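The timing behavior described above can be sketched as a schedule of reference intervals per item; the renderer then shows only the elements whose interval covers the current playback time. The schedule format is an assumption for illustration.

```python
# Hypothetical sketch: each identified item carries the interval during
# which it is referenced; only elements whose interval covers the current
# playback time are displayed. Names and times are illustrative.

def active_elements(schedule, t):
    """schedule: list of (item_id, start_s, end_s); returns ids visible at time t."""
    return [item_id for item_id, start, end in schedule if start <= t < end]

schedule = [
    ("shirt-206", 0.0, 42.0),    # first item, referenced early in the content
    ("watch-210", 42.0, 90.0),   # second item takes over when the first ends
]
```

Because intervals may also overlap, the same lookup naturally supports the simultaneous-items case discussed later with respect to fig. 2.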
As shown in FIG. 2, the user interface may also include a "shopping cart" icon 214. Icon 214 may be used to provide visual indications of the number of selected items and/or other details regarding items that have been selected as at least potential purchases. Icon 214 may be selected to display a purchase user interface. When displayed, item information for the selected item may be displayed, as described more fully herein. Additionally, in an example, purchase information associated with the user's user account may also be displayed. Additionally, the user interface may include a "chat" icon 216 that may be used to allow real-time chat with a merchant (e.g., a merchant associated with multimedia content). In these examples, a merchant may use a physical sensor (e.g., a bar code reader) to add items to, for example, a shopping cart. Additionally, the user interface may include a bar code icon 218 that may be used to allow a user to scan, for example, coupons or other information associated with the item in question using a bar code reader (e.g., a bar code reader associated with the user's phone).
It should be appreciated that while for the example shown in fig. 2, two items referenced in multimedia content are referenced at different times, the present disclosure includes the ability to identify two or more items that are referenced simultaneously and to generate and display multiple interactive elements simultaneously.
Fig. 2 also depicts functionality associated with playback of multimedia content. The functionality may include selectable elements to control how and when multimedia content is output by the user device. For example, the selectable elements may include a play element configured to cause the multimedia content to be output, a pause element configured to stop the multimedia content from being output, a volume element configured to control the volume of the audio output and/or mute the audio output, and/or one or more playback speed elements configured to increase or decrease the playback speed of the multimedia content and/or to output the multimedia content in reverse.
It should also be appreciated that while the example of fig. 2 shows two interactive elements displayed at different times, the present disclosure includes generating and/or displaying two interactive elements simultaneously, such as when multiple items are depicted and/or described simultaneously. In these examples, the item identification and positioning process described herein may be used to determine where multiple interactive elements will be located at the same time.
Fig. 3 shows an example conceptual diagram illustrating an example item identification of items referenced in multimedia content. Fig. 3 illustrates a buyer device 102 that may include the same or similar components and perform the same or similar functions as the buyer device 102 in fig. 1.
In step 1, the buyer device 102 may be presenting multimedia content. As shown in fig. 3, the multimedia content depicts the first item 206 and a person who is speaking. In this example, the utterance is "The A brand shirt is very cost effective!". The item identification component can be utilized to identify the first item 206. For example, the item identification component can analyze the multimedia content and/or related data to identify one or more items referenced in the multimedia content. The item identification component can utilize image data of the multimedia content to identify items depicted in the image data. Such analysis may include using computer vision techniques to identify the presence of an object in the given image data, and then identifying the object itself and/or the class of objects to which the object belongs (e.g., shirt, pants, hat, watch, etc.). Additionally or alternatively, when the multimedia content includes user speech, the speech may be recognized using speech recognition and natural language understanding techniques, text data representing the speech may be generated, and the intent and/or purpose of the speech may then be determined. By doing so, the item identification component 134 can identify the item referenced in the multimedia content, as well as, for example, the attributes (e.g., color, size, brand, etc.) of the item. Additionally or alternatively, the item identification component 134 can utilize metadata associated with the multimedia content and/or the merchant providing the content to identify the item. For example, merchant device 106 may provide metadata indicating the items referenced in the multimedia content. In an example, one or more other users may have reviewed or otherwise provided information related to the multimedia content.
In these and other examples, some or all of this data can be used by the item identification component 134 to identify items and/or attributes associated with items referenced in the multimedia content.
With respect to computer vision techniques, computer vision includes methods of acquiring, processing, analyzing, and understanding images, and often high-dimensional data from the real world, to produce digital or symbolic information, for example in the form of decisions. These techniques are described in more detail above with respect to fig. 1. With respect to speech recognition, audio data corresponding to a user's speech may be processed by an automatic speech recognition component that can compare attributes of the audio data to reference attributes to determine that a given sound corresponds to a given word. In this way, the automatic speech recognition component can generate text data representing the words of the user's speech. The natural language understanding component can utilize the text data to determine keywords corresponding to the purpose of the speech or to other elements in the text data. In the example of FIG. 3, this process may result in the system determining that the user said "The A brand shirt is very cost effective!", which means that there is an item being displayed and that the item is a shirt, wherein one of the shirt's attributes is "A brand". This information may be used to identify the item and/or attributes associated with the item. In addition to the image analysis described above, computer vision techniques may be used to identify gestures of a person depicted in the multimedia content. These gestures may indicate that the person is describing a given item and/or may be used to determine that the item is available for purchase. This information may be used to identify items and/or to determine which items are available for purchase.
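The transcript-to-attributes step described above can be sketched as follows. A real system would use trained speech recognition and natural language understanding models; this keyword match is only a stand-in to illustrate how a transcript yields an item type and attribute, and the vocabularies are assumptions.

```python
# Hypothetical sketch: extracting an item type and a brand attribute from
# transcribed speech. Keyword matching here stands in for the trained NLU
# models the system would actually use; vocabularies are illustrative.

KNOWN_TYPES = {"shirt", "pants", "hat", "watch"}
KNOWN_BRANDS = {"a brand", "b brand"}

def extract_item(transcript: str) -> dict:
    text = transcript.lower()
    item_type = next((t for t in sorted(KNOWN_TYPES) if t in text), None)
    brand = next((b for b in sorted(KNOWN_BRANDS) if b in text), None)
    return {"type": item_type, "brand": brand}

result = extract_item("The A brand shirt is very cost effective!")
```

From this single utterance the sketch recovers both the item type ("shirt") and the attribute ("A brand") used to build the interactive element in step 2.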
In step 2, the interactive element generator may use the item identification data to generate and place interactive elements 208 associated with the first item 206. The interactive element 208 may include or otherwise reference some or all of the item identification data, such as "brand a" attributes, item type "shirts", and other information, such as current inventory counts. The interactive elements may include any selectable element that may be presented to a user and configured to receive user input indicative of a selection. The e-commerce tag may be a type of interactive element and may be an optional portion associated with multimedia content that may include e-commerce specific actions such as purchase actions, item information actions, and/or other shopping-related actions.
Fig. 4A illustrates an example user device 102 that displays a first example interactive element that is superimposed on or otherwise embedded in multimedia content while a user is viewing the multimedia content. The interactive elements may include any selectable element that may be presented to a user and configured to receive user input indicative of a selection. The e-commerce tag may be a type of interactive element and may be an optional portion associated with multimedia content that may include e-commerce specific actions such as purchase actions, item information actions, and/or other shopping-related actions. In an example, the interactive element may be placed as a spot advertisement within the same application in which the item is presented. In another example, information about the item may be sent as an electronic message, such as a text message or email, as a commercial break within another mobile and/or web application, as a pop-up notification, or on a display that may be found by way of a gesture (e.g., pinch gesture) or a particular keyboard, audio, visual, or tactile input. The example provided in fig. 4A shows a first interactive element 402 superimposed on multimedia content referencing a shirt. The first interactive element 402 may be a selectable element, such as a link, that may, when selected, cause one or more actions to be performed. In some examples, the first interactive element 402 may be considered selected when a touch input is received at the user device 102 on a portion of the display of the user device 102 corresponding to a location where the first interactive element 402 is displayed. Other forms of user input may also be received, such as a mouse click on the first interactive element 402 or keyboard navigation.
In one embodiment, after performing the action (e.g., upon initiation of a sliding or dragging action), the multimedia content may be automatically and immediately replaced with a summary image, such as a shopping cart, displaying the corresponding summary information, and the user may be provided with a selectable link to return to the multimedia content after viewing the summary or otherwise performing other actions. In another embodiment, the shopping cart may be displayed in a picture-in-picture mode while the user browses multimedia content, or as a separate window alongside the multimedia content, wherein the multimedia content is adapted to fit alongside the separate window displaying the shopping cart information. In yet another embodiment, the multimedia content is not replaced by another window, and the summary information may be tracked as a background process, stored in a separate data structure with the user and merchant identities, and displayed, for example, after the multimedia content has ended. In one example, the shopping cart information may also be sent as an electronic message (e.g., email or text) or within a mobile application of the user or merchant. Merchants may be interested in viewing such information to provide incentives to customers and encourage users to complete purchases.
The first interactive element 402 having the input type depicted in fig. 4A may be selected and utilized to present the interactive element based at least in part on the device type of the user device 102, user preference data, and/or a request by a merchant and/or provider of multimedia content. For example, when the user device 102 has user input capabilities such as a touch screen, mouse, and/or keyboard, the first interactive element 402 may be selected for display on the user device 102. As an additional example, the user preference data may indicate that the user generally provides certain user input types, indicating that the first interactive element 402 would be more suitable than one or more other interactive element types.
Fig. 4B illustrates an example user device 102 with a second example interactive element superimposed on multimedia content. The example provided in fig. 4B shows a second interactive element 404 superimposed on multimedia content referencing a shirt. The second interactive element 404 may be a readable element, such as a quick response code or bar code, which may cause one or more actions to be performed when scanned and/or when another device indicates that it has scanned the quick response code. The interactive elements may include any selectable element that may be presented to a user and configured to receive user input indicative of a selection. The e-commerce tag may be a type of interactive element and may be an optional portion associated with multimedia content that may include e-commerce specific actions such as purchase actions, item information actions, and/or other shopping-related actions. When such interactive elements 404 are generated, actions may be defined and customized based on the identity of the device or user scanning the element. In some examples, the second interactive element 404 may be considered selected when the user device 102 and/or another device indicates that the quick response code has been scanned. For example, the quick response code may be displayed on the first device that also displays the multimedia content. In an example, the second device may scan the quick response code, and an indication may be received from the second device that the quick response code has been scanned. It should be appreciated that while a quick response code is used herein as an example, other types of scannable, readable, and/or identifiable elements may be used.
In one embodiment, after performing the action (e.g., upon initiation of a sliding or dragging action), the multimedia content may be automatically and immediately replaced with a summary image, such as a shopping cart, displaying the corresponding summary information, and the user may be provided with a selectable link to return to the multimedia content after viewing the summary or otherwise performing other actions. In another embodiment, the shopping cart may be displayed in a picture-in-picture mode while the user browses multimedia content, or as a separate window alongside the multimedia content, wherein the multimedia content is adapted to fit alongside the separate window displaying the shopping cart information. In yet another embodiment, the multimedia content is not replaced by another window, and the summary information may be tracked as a background process, stored in a separate data structure with the user and merchant identities, and displayed, for example, after the multimedia content has ended. In one example, the shopping cart information may also be sent as an electronic message (e.g., email or text) or within a mobile application of the user or merchant. Merchants may be interested in viewing such information to provide incentives to customers and encourage users to complete purchases.
The second interactive element 404 having the input type depicted in fig. 4B may be selected and utilized to present the interactive element based at least in part on the device type of the user device 102, user preference data, and/or a request by a merchant and/or provider of the multimedia content. For example, when the user device 102 lacks certain capabilities such as a touch screen, mouse, and/or keyboard, the second interactive element 404 may be selected for display on the user device 102. To this end, in one embodiment, the second interactive element 404 is generated based on device characteristics of the device displaying the content and/or of another device used to read the interactive element, taking into account factors such as form factor, operating system, reading capabilities, user preferences, etc., and configuring the second interactive element for the device accordingly. As an additional example, the user preference data may indicate that the user generally provides certain user input types, indicating that the second interactive element 404 would be more suitable than one or more other interactive element types.
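The capability-based selection described across figs. 4A and 4B can be sketched as a simple decision over device characteristics. The function, capability keys, and element-type labels are illustrative assumptions mapping to the three element types shown in the figures.

```python
# Hypothetical sketch: choosing an interactive element type from device
# characteristics. Touch/mouse/keyboard devices get a tappable overlay
# (fig. 4A); voice-capable, input-limited devices get a voice prompt
# (fig. 4C); display-only devices get a scannable code (fig. 4B).

def choose_element_type(device: dict) -> str:
    if device.get("touch") or device.get("mouse") or device.get("keyboard"):
        return "tappable-overlay"   # direct input available (fig. 4A)
    if device.get("microphone"):
        return "voice-prompt"       # speech input available (fig. 4C)
    return "qr-code"                # no local input; a second device scans (fig. 4B)

phone = {"touch": True, "microphone": True}   # prefers direct touch input
tv = {"microphone": False}                    # display-only device
```

User preference data could further override this default, e.g. forcing the scannable code even on touch devices when the user's history favors scanning.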
Fig. 4C illustrates an example user device 102 displaying a third example interactive element superimposed on multimedia content. The example provided in fig. 4C shows a third interactive element 406 superimposed on the multimedia content, the content referencing a shirt. The third interactive element 406 may be an indicator that speech input may be received to cause one or more actions to be performed. The interactive elements may include any selectable element that may be presented to a user and configured to receive user input indicative of a selection. The e-commerce tag may be a type of interactive element and may be an optional portion associated with multimedia content that may include e-commerce specific actions such as purchase actions, item information actions, and/or other shopping-related actions. In one embodiment, after performing the action, the multimedia content may be automatically and immediately replaced with a summary image, such as a shopping cart, displaying corresponding summary information about the initiation of the sliding or dragging action, and providing the user with a selectable link to return to the multimedia content after viewing the summary or otherwise performing other actions. In another embodiment, the shopping cart may be displayed in a picture-in-picture mode when the user browses multimedia content, or as a separate window alongside the multimedia content, wherein the multimedia content is adapted to the separate window displaying the shopping cart information. In yet another embodiment, the multimedia content is not replaced by another window, and the summary information may be tracked as a background process, stored in a separate data structure with the user and merchant identities, and displayed, for example, after the multimedia content has ended. In one example, the shopping cart information may also be sent as an electronic message (e.g., email or text) or within a mobile application of the user or merchant.
Merchants may be interested in viewing such information to provide incentives to customers and encourage users to complete purchases.
In some examples, the third interactive element 406 may be considered selected when the user device 102 and/or a remote system, such as a speech processing system, indicates that the speech input indicates an intent to select the third interactive element 406. The audio input may include specific trigger words that are specifically created to enable an e-commerce experience between the merchant and the customer. For example, audio input may be received only when the interactive element 406 is displayed on the device. The audio is then recorded from that timestamp to a future timestamp (e.g., the end of the video or the end of an indicator identifying the interactive element 406). Such a timestamp may define a corresponding time offset from the beginning of the audio recording or from another specific location in the audio recording. During the recording, it is determined whether the customer provided purchase intent, such as by using trigger words, such as "add this item to my shopping cart" or "purchase the small green one", mapping these trigger words to specific interactive elements, and accordingly adding the corresponding items to the online shopping cart. Trigger words may be predefined by the merchant or customer and stored in the respective configuration file, along with the algorithms used to analyze the audio recordings to populate the online shopping cart. In another example, trigger words may also be counted or further filtered to determine additional details, such as how many items the customer wants, what color, what size, and so forth.
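The trigger-word mapping described above could be sketched as follows, assuming an upstream speech-to-text step has already produced a transcript of the recorded window. The phrase list, item identifiers, and cart-action tuples are hypothetical stand-ins for whatever the merchant configures.

```python
def match_triggers(transcript, trigger_map):
    """Return cart actions whose trigger phrase appears in the transcript."""
    text = transcript.lower()
    return [("add_to_cart", item_id)
            for phrase, item_id in trigger_map.items()
            if phrase in text]

# Hypothetical merchant-defined trigger phrases mapped to item identifiers.
triggers = {
    "add this item to my shopping cart": "shirt-001",
    "purchase the small green one": "shirt-001-green-s",
}
```

A real implementation would match against time-aligned transcription output rather than a flat string, and could extend the mapping to capture quantity, color, and size as the paragraph above suggests.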
In some embodiments, the methods and systems described herein may be integrated with voice services (e.g., those offered by Amazon, Apple, or Microsoft). The present methods and systems may be integrated with "wake words" that invoke the respective voice services, e-commerce, and payment fulfillment channels. For example, speaker recognition techniques may be utilized to determine user profiles associated with users that provide user utterances to a user device to perform one or more operations described herein. The determined user profile may be used to customize the interactive elements described herein, the functionality that occurs when the interactive elements are selected, and/or the information displayed in the purchase user interface. The voice interface may also be used to provide user input as described herein. Further, the voice interface may be associated with an online platform, and the platform may be utilized to complete item purchases.
The third interactive element 406 having the input type depicted in fig. 4C may be selected and utilized to present the interactive element based at least in part on the device type of the user device 102, user preference data, and/or upon a request by a merchant and/or provider of the multimedia content. For example, when the user device 102 lacks certain capabilities such as a touch screen, mouse, and/or keyboard, the third interactive element 406 may be selected for display on the user device 102. As an additional example, the user preference data may indicate that the user generally provides certain user input types, indicating that the third interactive element 406 will be more suitable for use than one or more other interactive element types.
Fig. 4D illustrates an example user device 102 displaying a fourth example interactive element superimposed on multimedia content. The example provided in fig. 4D shows a fourth interactive element 408 superimposed on multimedia content, which references a shirt. The fourth interactive element 408 may be an indicator indicating that the gesture input, when received, may result in one or more embedded actions being performed. The interactive elements may include any selectable element that may be presented to a user and configured to receive user input indicative of a selection. The e-commerce tag may be a type of interactive element and may be an optional portion associated with multimedia content that may include e-commerce specific actions such as purchase actions, item information actions, and/or other shopping-related actions. The gesture may be a swipe or pinch gesture, wherein the interactive element is grabbed (e.g., by a finger pressing on the touch screen or a cursor clicking and holding by a mouse input) and dragged, swiped, clicked or tapped in a specific direction to allow for initialization of the embedded action (e.g., adding an item to a shopping cart). In one embodiment, after performing the action, the multimedia content may be automatically and immediately replaced with a summary image, such as a shopping cart, displaying corresponding summary information about the initiation of the sliding or dragging action, and providing the user with a selectable link to return to the multimedia content after viewing the summary or otherwise performing other actions. In another embodiment, the shopping cart may be displayed in a picture-in-picture mode when the user browses multimedia content, or as a separate window alongside the multimedia content, wherein the multimedia content is adapted to the separate window displaying the shopping cart information.
In yet another embodiment, the multimedia content is not replaced by another window, and the summary information may be tracked as a background process, stored in a separate data structure with the user and merchant identities, and displayed, for example, after the multimedia content has ended. In one example, the shopping cart information may also be sent as an electronic message (e.g., email or text) or within a mobile application of the user or merchant. Merchants may be interested in viewing such information to provide incentives to customers and encourage users to complete purchases.
In some examples, the fourth interactive element 408 may be considered selected when the user device 102 indicates that image data collected by, for example, a camera of the user device 102 depicts a movement of the user, and that the movement coincides with a reference pattern associated with the selection of the fourth interactive element 408.
The fourth interactive element 408 having the input type depicted in fig. 4D may be selected and utilized to present the interactive element based at least in part on the device type of the user device 102, user preference data, and/or upon a request by a merchant and/or provider of the multimedia content. For example, when the user device 102 lacks certain capabilities such as a touch screen, mouse, and/or keyboard, the fourth interactive element 408 may be selected for display on the user device 102. As an additional example, the user preference data may indicate that the user generally provides certain user input types, indicating that the fourth interactive element 408 will be more suitable for use than one or more other interactive element types.
As shown in fig. 4A-4D, a user may provide user input indicating selection of interactive elements 402-408. Regardless of the type of interactive element used, when the user input data indicates selection of such an interactive element, the process may include displaying a purchase user interface based at least in part on the received user input data indicating selection of the interactive element. Additionally, the interactive elements may be tailored for visibility, positioning, etc., and/or may be customized for content and interpreted when selected based on customer authentication. The modified view of the interactive elements may be based on knowledge of the customer, such as historical account data, cookie sessions from the customer, the devices they are using, and the like. In this way, the presentation of the interactive elements may be customized for the relevant user.
Fig. 5 to 12 illustrate processes of an e-commerce tag in multimedia content and a customized e-commerce tag in real-time multimedia content. The processes described herein are illustrated as a collection of blocks in a logic flow diagram, which represents a series of operations, some or all of which may be implemented in hardware, software, or a combination thereof. In the context of software, the blocks may represent computer-executable instructions stored on one or more computer-readable media that, when executed by one or more processors, program the processors to perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, etc. that perform particular functions or implement particular data types. The order in which the blocks are described should not be construed as a limitation unless specifically indicated. Any number of the described blocks may be combined in any order and/or in parallel to implement a process or an alternative process, and not all blocks need be performed. For purposes of discussion, reference is made in describing these processes to the environments, architectures, and systems described in the examples herein, e.g., those described with respect to fig. 1-4D, 13, and 14, although these processes may be implemented in a wide variety of other environments, architectures, and systems.
Fig. 5 illustrates an example process 500 for determining and utilizing inventory data associated with multimedia content representing an item in real-time. The order of the described operations or steps is not intended to be construed as a limitation, and any number of the described operations may be combined in any order and/or in parallel to implement process 500.
At block 502, the process 500 may include identifying an item associated with the multimedia content. For example, the item identification component can analyze the multimedia content and/or related data to identify one or more items referenced in the multimedia content. For example, the item identification component can utilize image data of the multimedia content to identify items depicted in the image data. Such analysis may include using computer vision techniques to identify the presence of an object in the given image data, and then identifying the object itself and/or the class of object to which the object belongs (e.g., shirt, pants, hat, watch, etc.). Additionally or alternatively, when the multimedia content includes user speech, speech may be recognized using speech recognition and natural language understanding techniques, text data representing the speech is generated, and then the intent and/or purpose of the speech is determined. By doing so, the item identification component can identify the item referenced in the multimedia content, as well as, for example, the attributes (e.g., color, size, brand, etc.) of the item. Additionally or alternatively, the item identification component can utilize metadata associated with the multimedia content and/or the merchant providing the content to identify the item. For example, the merchant device may provide metadata indicating the items referenced in the multimedia content. In an example, one or more other users may have reviewed or otherwise provided information related to the multimedia content. In these and other examples, some or all of this data may be used by the item identification component to identify items and/or attributes associated with items referenced in the multimedia content.
At block 504, process 500 may include identifying a merchant associated with the multimedia content. For example, the multimedia content may include metadata and/or otherwise indicate a source of the multimedia content, and the source may be associated with a merchant. In one embodiment, the service provider provides a platform for one or more of the following: processing payments for one or more merchants, including the merchant, maintaining inventory, providing payroll services, providing lending services, and the like.
At block 506, the process 500 may include receiving inventory data associated with a merchant. For example, the payment processing service provider may store inventory data associated with the merchant and/or a system associated with the merchant may provide inventory data to the payment processing service provider in real-time or near real-time. In another example, the payment processing service provider may access a third-party inventory database to obtain the latest inventory status through an API call or through web crawling.
At block 508, the process 500 may include determining whether inventory data indicates that the item is available from a merchant. For example, the item information component may query the merchant system for inventory data indicating the current inventory of the item (e.g., when outputting multimedia content). In an example, the merchant system may return inventory data and may utilize the inventory data to inform the customer of the current inventory of items available from the merchant. In other examples, an indication of the current inventory of one or more other merchants may be retrieved and displayed on the user device, for example, when inventory data indicates that the item is out of stock and/or when user preferences indicate that the customer prefers a different merchant. In some examples, availability of items similar to the referenced item at the merchant or similar merchants may also be searched for, and/or appropriate alternatives at the same merchant or different merchants may be recommended accordingly.
Where the inventory data indicates that the item is available from a merchant, the process 500 may include generating an interactive element at block 510 that includes an indicator of an inventory count for the item. For example, the interactive elements may be generated as described herein and may include information associated with the item, such as identifying information about the item. In addition, an indicator of the current inventory count may also be displayed. With this indicator of the current stock count, the user can obtain direct and reliable information about the availability of the items of interest without activating the selectable information and interrupting the presentation of the content items. In an example, the inventory count may be "real-time" or may change as customers purchase instances of the item from the merchant. In this way, the customer can perceive how quickly the item is selling. The generation and/or placement of interactive elements is described in more detail with reference to fig. 2 and 3 above. The interactive elements may include any selectable element that may be presented to a user and configured to receive user input indicative of a selection. The e-commerce tag may be a type of interactive element and may be an optional portion associated with multimedia content that may include e-commerce specific actions such as purchase actions, item information actions, and/or other shopping-related actions.
At block 512, the process 500 may include causing a user device on which the multimedia content is output to display the interactive element. The interactive elements may be displayed, for example, by way of overlays, and may be displayed while the item is being referenced by the multimedia content being output.
Returning to block 508, in the event that the inventory data indicates that the item is not available from a merchant, process 500 may include identifying one or more other merchants from which the item is currently available at block 514. For example, the payment processing service provider may access inventory data of one or more other merchants. The identifier of the item may be used to query inventory data from one or more other merchants, and the payment processing service provider may use this information to determine which other merchants currently have the item in stock. Alternatively, the system may not overlay any interactive element at all, to avoid presenting unhelpful information to users interested in purchasing the relevant items. Furthermore, by forgoing the addition of interactive elements when inventory data indicates that the item is not available, processing time may be saved at the payment service provider system, since interactive elements are inserted only when inventory data indicates that the relevant item is available.
In other examples, links associated with interactive elements may be corrupted or otherwise unavailable. In these examples, item information may be used to find alternative items and/or merchants, and links to purchasing user interfaces associated with these alternative merchants may be provided.
At block 516, process 500 may include generating an interaction element that includes indicators of one or more other merchants. For example, the interactive element may provide an indication that a merchant associated with the multimedia content is currently out of stock, and may instruct the customer to select the interactive element to view other merchants that do have the item in stock. In other examples, the interaction element may include an identifier of at least one other merchant having the inventory item.
At block 518, the process 500 may include causing a user device on which the multimedia content is output to display the interactive element. For example, the interactive elements may be displayed, such as by way of an overlay, and may be displayed when the item is being referenced by the multimedia content being output.
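The branch at blocks 508-518 might be sketched as below, under the assumption that live inventory is keyed by (merchant, item) pairs; all data shapes and field names are illustrative, not drawn from the disclosed system.

```python
def build_element(item_id, merchant_id, inventory, other_merchants):
    """Blocks 508-516: decide what interactive element (if any) to overlay."""
    count = inventory.get((merchant_id, item_id), 0)
    if count > 0:
        # Block 510: include a live inventory count in the element.
        return {"type": "purchase", "item": item_id,
                "merchant": merchant_id, "stock": count}
    # Blocks 514-516: find other merchants that currently stock the item.
    in_stock = [m for m in other_merchants
                if inventory.get((m, item_id), 0) > 0]
    if not in_stock:
        return None  # omit the overlay rather than show a dead link
    return {"type": "alternatives", "item": item_id, "merchants": in_stock}
```

Returning `None` corresponds to the alternative described at block 514 of adding no overlay at all when no merchant can fulfill the purchase.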
Fig. 6 illustrates an example process 600 for determining whether to aggregate item selections during presentation of multimedia content. The order of the described operations or steps is not intended to be construed as a limitation, and any number of the described operations may be combined in any order and/or in parallel to implement process 600.
At block 602, the process 600 may include causing output of multimedia content having an interactive element associated with an item such that the interactive element is superimposed on the multimedia content. For example, multimedia content may be output and may include images and/or audio referencing one or more items. The interactive elements may also be displayed while outputting the multimedia content, or during outputting at least a portion of the multimedia content, for example as an overlay of the content, such that the user may see the multimedia content and the interactive elements. The interactive elements may include any selectable element that may be presented to a user and configured to receive user input indicative of a selection. The e-commerce tag may be a type of interactive element and may be an optional portion associated with multimedia content that may include e-commerce specific actions such as purchase actions, item information actions, and/or other shopping-related actions.
At block 604, process 600 may include determining whether one or more user-configurable options indicate a preference for summarized item selection. For example, user-provided preferences may be utilized to determine whether the user wishes to wait until the multimedia content ceases before the selections are displayed in a purchase user interface.
In the case where the user configurable option does not indicate a preference for summary item selection, process 600 may include determining whether the user history indicates a preference for summary selection at block 606. For example, historical transaction data may be utilized to determine whether a user typically purchases more than one item referenced in the same multimedia content.
Where the user history indicates preferences for summarized selections and/or the user configurable options indicate preferences for summarized item selections, process 600 may include, at block 608, storing data representing interactions with interactive elements during output of the multimedia content. For example, the system may determine to avoid causing a purchase user interface to be displayed in response to selection of the interactive element. Instead, the system may store data representing the selection of one or more interactive elements when outputting the multimedia content. In other examples, the purchase interface may be displayed but may not interfere with the output of the multimedia content. In some examples, the plurality of multimedia content, such as the plurality of videos, may be displayed immediately prior to displaying the purchase user interface, even where the multimedia content is associated with a different merchant and/or content provider.
At block 610, the process 600 may include refraining from displaying the purchase user interface until an event occurs, such as the multimedia content stopping, a request being received, a spending limit being exceeded, a designated time being reached, and the like. For example, instead of stopping the output of the multimedia content and/or causing another window to open and obstruct the customer's ability to view the remainder of the multimedia content, the system may avoid doing so and may instead, for example, cause an indicator to be presented while the multimedia content is displayed that indicates that an item was selected and/or that a plurality of items have been selected.
At block 612, the process 600 may include displaying a purchase user interface with summarized item selections in response to the stopping of the multimedia content. In this example, the identification information associated with each selected item may be displayed or available for display so that a customer may purchase all of the selected items at the same time.
Returning to block 606, where the user history does not indicate a preference for the aggregated selection, process 600 may include, at block 614, displaying a purchase user interface with the selected item. In this example, upon selection of the interactive element, the purchase user interface may display identification information for the selected item. The multimedia content may be caused to cease outputting or, in an example, the focus on the user device may be on the purchasing user interface instead of the multimedia content. While the user interface is described as a purchase user interface having functionality that allows payment for selected items, it should be understood that the user interface may be configured to provide other views, such as summary information about the selected items, additional information links about the selected items, individual purchase links for each selected item, and so forth.
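A minimal sketch of the decision at blocks 604-614 follows, assuming the explicit option is a tri-state value (set true, set false, or unset) and that history is recorded as the number of items purchased per past piece of content; the averaging heuristic and its threshold are illustrative assumptions.

```python
def should_summarize(explicit_pref, purchase_history):
    """Blocks 604-606: decide whether to defer the purchase user interface.

    explicit_pref: True/False if the user set the option, None otherwise.
    purchase_history: list of item counts bought per past piece of content.
    """
    if explicit_pref is not None:
        return explicit_pref  # block 604: the configured option wins
    if not purchase_history:
        return False  # no history: show the purchase UI immediately
    # Block 606: summarize when the user typically buys more than one
    # item per piece of multimedia content.
    return sum(purchase_history) / len(purchase_history) > 1.0
```

When this returns true, selections would be stored as at block 608 and the purchase user interface deferred as at block 610; otherwise the interface is shown immediately as at block 614.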
FIG. 7 illustrates an example process 700 for modifying an interactive element based at least in part on user preference data. The order of the described operations or steps is not intended to be construed as a limitation, and any number of the described operations may be combined in any order and/or in parallel to implement process 700.
At block 702, process 700 may include receiving text data corresponding to one or more comments on the multimedia content. For example, the feedback component of the payment processing service provider system may be configured to receive feedback data associated with the interactive element. In at least one example, one or more users may respond to the content and provide feedback, e.g., through comments (which may include text, images, emoticons, etc.), interactions with buttons or other actuation mechanisms (e.g., like, dislike, fun, love, etc.), etc., such as on a media platform of the content provider (e.g., social networking platform, microblog, blog, video sharing platform, music sharing platform, etc.), which enables user interaction and participation through comments, posts, messages on an electronic bulletin board, messages on a social networking platform, and/or any other type of message. The content provider may enable users of the media platform to interact with each other (e.g., by creating messages, posting comments, etc.). In some embodiments, the content media platform may also refer to an application or web page of an e-commerce or retail organization that provides products and/or services. Such websites may provide online "forms" to complete before or after adding products or services to the virtual shopping cart. The online form may include one or more fields to receive user interactions and participation, such as questions about an order, feedback on previous orders. Such responses may be issued in near real time. The feedback data may indicate positive and/or negative comments regarding the interactive element, the display of the interactive element, and/or the function associated with the selection of the interactive element. The feedback data may include text data corresponding to such feedback.
At block 704, process 700 may include analyzing text data for keywords associated with the item. For example, natural language understanding techniques may be utilized to parse text data and identify words that may be important to the context of one or more comments and the multimedia content. For example, the comment "this shirt is also green" may be processed and annotated with labels to determine the semantic interpretation of the comment: the comment is about an attribute of the shirt, here color, and the specific attribute is "green". Additional identifiers may be used to determine the nature of the "shirt".
At block 706, process 700 may include identifying an item and/or an attribute associated with the item from the keyword. For example, keywords associated with reference attributes of an item may be identified and utilized to determine one or more attributes of the item. These attributes may include any physical details about the item and/or one or more details associated with the item.
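A toy sketch of blocks 704-706 is shown below. A production system would use the natural language understanding models described above; here, fixed keyword vocabularies stand in for them, and all names are hypothetical.

```python
# Hypothetical stand-in vocabularies; a real system would use NLU models.
ITEM_WORDS = {"shirt", "pants", "hat", "watch"}
COLOR_WORDS = {"green", "black", "red", "blue"}

def extract_item_attributes(comment):
    """Blocks 704-706: pull item and attribute keywords from a comment."""
    tokens = comment.lower().replace(",", " ").split()
    items = [t for t in tokens if t in ITEM_WORDS]
    colors = [t for t in tokens if t in COLOR_WORDS]
    return {"items": items, "attributes": {"color": colors}}
```

Applied to the example comment above, this yields the item "shirt" with the color attribute "green", which block 708 onward would use to generate or modify an interactive element.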
At block 708, process 700 may include determining whether an interactive element has been generated for one or more items represented in the multimedia content. For example, the multimedia content may have been analyzed by a payment processing service provider system, which may have generated interactive elements. In other examples, the multimedia content may not have been analyzed and/or the interactive element is not associated with the multimedia content.
Where no interactive elements have been generated, process 700 may include generating interactive elements indicating identified items and/or item attributes at block 710. For example, the interactive element generator of the payment processing service provider system may be configured to generate data representing the interactive element using data received and/or determined by the item identification component and/or the item information component. The interactive elements may be configured such that when the multimedia content is output on the user device, the interactive elements are also presented, for example in the form of overlays. In an example, the interaction element may be specific to the multimedia content, the items referenced therein, the item attributes, and/or the user preferences. For example, using data received and/or determined as described herein, the interactive element generator may determine the type of interactive element to generate. The interactive element types may include, for example, selectable links, quick response codes ("QR codes"), indicators that enable voice input to select interactive elements, indicators that enable gesture input to select interactive elements, and so forth. It should be appreciated that while several examples of element types have been provided herein, the present disclosure includes any element type that allows for receiving user input. The type of interaction element associated with the given multimedia content may be determined based at least in part on the device type of the user device. For example, if the device type indicates that the device includes a camera, gesture-based interactive elements may be used; or if the device type indicates that the device does not include a touch screen, the interactive element may be configured to receive user input other than touch screen input. 
Additionally or alternatively, purchase history associated with the user profile being used to view the multimedia content may be used to determine past user input types, and this information may be used to determine the type of interactive element to generate. The interactive elements may include any selectable element that may be presented to a user and configured to receive user input indicative of a selection. The e-commerce tag may be a type of interactive element and may be an optional portion associated with multimedia content that may include e-commerce specific actions such as purchase actions, item information actions, and/or other shopping-related actions.
In addition to the type of interactive element, the interactive element generator may be configured to determine one or more other aspects associated with the interactive element, such as when the interactive element is displayed relative to the multimedia content, the location at which the interactive element is displayed relative to a visual window of the user device, the quantity and/or type of item details to be displayed, and/or the function that occurs when the interactive element is selected. For example, the interactive element generator may determine when to display the interactive element based at least in part on data indicating when an item starts to be referenced in the multimedia content and when the item stops being referenced. For example, a given content may be two minutes in length, but the item may not begin to be referenced until the 30-second mark and then stop being referenced at the 1-minute mark. With the item identification data described herein, the interactive element generator may generate interactive elements configured to be displayed only during the time frame in which the item is referenced. With respect to determining where to display the interactive element, the interactive element generator may utilize the item identification data to determine a relative position of an item described in the multimedia content with respect to a visual window of the user device. For example, the item identification data may indicate a location of an object identified in the image data, and the interactive element may be generated such that when displayed, the interactive element is located near the object rather than, for example, on top of the object. This enables the user to see the object and the interactive element simultaneously and to perceive that the object and the interactive element are associated with each other when the multimedia content is output.
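The timing logic in the paragraph above (an item referenced from the 30-second mark to the 1-minute mark of a two-minute video) might be expressed as a simple visibility predicate evaluated against the playback position; the field names are assumptions.

```python
def element_visible(reference_start, reference_end, playback_position):
    """True when the overlay should be drawn at this playback position."""
    return reference_start <= playback_position < reference_end

# e.g., an item referenced from 30 s to 60 s of a two-minute video
schedule = {"start": 30.0, "end": 60.0}
```

A player would evaluate this predicate on each playback-position update, drawing or removing the overlay as the result changes.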
At block 712, process 700 may include causing display of an interactive element when outputting at least a portion of the multimedia content. For example, the interactive elements may be displayed, such as by way of an overlay, and may be displayed when the item is being referenced by the multimedia content being output.
Returning to block 708, where a new interactive element is not to be generated, process 700 may include modifying an existing interactive element to include the identified item and/or item attributes at block 714. For example, one or more portions of the interactive element may be dynamically changed and/or updated to include the identified item and/or item attributes. For example, the original interactive element may already indicate that the item is a "black shirt", but the interactive element may be changed to a "Brand A black T-shirt" using the information collected and/or determined as described above. It should be appreciated that when the present disclosure discusses interactive element changes and/or modifications, such disclosure includes generating data representing interactive elements with the changed information. In some implementations, the modification may be triggered by an inventory or availability status of the item, e.g., if no black T-shirt is in stock, the color is updated to an alternate color. In another embodiment, the modification may be triggered by a customer- or merchant-based rule (either explicitly specified by such parties or implicitly determined based on historical transactions) to modify the item description. For example, the purchase history of the customer may be used to dynamically modify the item description to the size that the customer (viewing the content) typically purchases. Alternatively, the merchant's preferences may be used to modify the item description to the color that the merchant wishes to sell first. In another embodiment, the modification may be initiated based on context rules, where the context is a factor such as the location of the merchant, the location of the customer, the time of day, what the customer is currently buying, what the merchant is currently selling, items with coupons, and the like.
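One way the modification triggers above could be sketched is shown below; the rule set (alternate color on zero stock, the customer's usual size) and the data shapes are illustrative examples only:

```python
def modify_item_description(base, inventory, customer_history=None):
    """Return an updated copy of an interactive element's item
    description. The two rules shown are illustrative instances of
    the inventory-based and history-based triggers described above."""
    item = dict(base)
    color = item.get("color")
    if color is not None and inventory.get(color, 0) == 0:
        # Availability-triggered rule: swap to any in-stock color.
        in_stock = [c for c, qty in inventory.items() if qty > 0]
        if in_stock:
            item["color"] = in_stock[0]
    if customer_history and "usual_size" in customer_history:
        # History-triggered rule: default to the size the viewing
        # customer typically purchases.
        item["size"] = customer_history["usual_size"]
    return item

updated = modify_item_description(
    {"name": "Brand A T-shirt", "color": "black"},
    inventory={"black": 0, "navy": 12},
    customer_history={"usual_size": "M"},
)
```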
As described herein, new interactive elements (which may be generated on the fly in some embodiments) are generated and/or modified to display selectable links, quick response codes, tags, etc. when multimedia content is output onto a screen of a user device, representing a computer-centric solution to a computer-centric problem faced by online platforms. Furthermore, the techniques described herein include generation of new data, such as the link data described herein, that is specific to a given multimedia content and configured to be displayed in different manners, such as for different user devices. By doing so, the online platform marketplace can be enhanced such that the information displayed to potential customers is more accurate, tailored to specific customers, presented in a time-sensitive manner, and provided with functionality that can only exist in a computer-centric environment. The commands for displaying the selectable elements as described herein may also be configured to cause an application on the user device to launch and/or activate and cause the display of time-sensitive information, e.g., without user input. By doing so, a substantial change to the user device itself is achieved, such that the user device can perform functions that would not be possible without the computer-centric solution described herein.
At block 716, process 700 may include causing display of the modified interactive element. For example, the modified interactive element may be displayed, such as by way of an overlay, and may be displayed while the item is being referenced by the multimedia content being output.
FIG. 8 illustrates an example process 800 for modifying the display of item information with user preferences and for pre-populating a purchase user interface. The order of the described operations or steps is not intended to be construed as a limitation, and any number of the described operations may be combined in any order and/or in parallel to implement process 800.
At block 802, the process 800 may include receiving multimedia content. For example, the content component of the payment processing service provider system may be configured to receive multimedia content and/or retrieve multimedia content. For example, a merchant system or other system may push multimedia content to a payment processing service provider system without requiring a specific request for such content. In other examples, the content component may query one or more other systems for multimedia content. In other examples, the content component may receive an indication that multimedia content associated with a given merchant has been requested to be output onto a user device associated with a customer. In these examples, the content component may query instances of the multimedia content and perform techniques for generating interactive element overlays, for example, before the multimedia content is output onto the user device.
At block 804, process 800 may include identifying one or more items represented in the multimedia content. For example, the item identification component of the payment processing service provider system can analyze the multimedia content and/or related data to identify one or more items referenced in the multimedia content. For example, the item identification component can utilize image data of the multimedia content to identify items depicted in the image data. Such analysis may include using computer vision techniques to identify the presence of an object in the given image data, and then identifying the object itself and/or the class of object to which the object belongs (e.g., shirt, pants, hat, watch, etc.). More detailed information about the use of computer vision techniques is provided below. Additionally or alternatively, when the multimedia content includes user speech, speech may be recognized using speech recognition and natural language understanding techniques, text data representing the speech is generated, and then the intent and/or purpose of the speech is determined. By doing so, the item identification component can identify the item referenced in the multimedia content as well as, for example, the attributes (e.g., color, size, brand, etc.) of the item. Additionally or alternatively, the item identification component can utilize metadata associated with the multimedia content and/or the merchant providing the content to identify the item. For example, the merchant system may provide metadata indicating the items referenced in the multimedia content. In an example, one or more other users may have reviewed or otherwise provided information related to the multimedia content. In these and other examples, some or all of this data may be used by the item identification component to identify items and/or attributes associated with items referenced in the multimedia content.
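The item identification component's fusion of the computer vision, speech, and metadata signals described above could be sketched as a simple attribute merge; the input shapes and the precedence rule (later sources fill in attributes earlier sources missed) are illustrative assumptions:

```python
def merge_item_signals(vision_items, speech_items, metadata_items):
    """Combine item identifications from computer vision, speech
    recognition, and merchant-provided metadata into a single map of
    item name -> attributes. Later sources fill in attributes that
    earlier sources could not determine (None values)."""
    merged = {}
    for source in (vision_items, speech_items, metadata_items):
        for name, attrs in source.items():
            known = {k: v for k, v in attrs.items() if v is not None}
            merged.setdefault(name, {}).update(known)
    return merged

items = merge_item_signals(
    vision_items={"shirt": {"color": "black", "brand": None}},
    speech_items={"shirt": {"brand": "Brand A"}},
    metadata_items={"shirt": {"size": "M"}},
)
```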
At block 806, process 800 may include determining whether user preferences have been determined for the user profile being used to view the multimedia content. For example, user input associated with the user profile and/or past transactions may indicate one or more user preferences.
Where the user preferences have been determined, process 800 may include determining item information to display using the user preferences at block 808. With respect to determining the amount and/or type of item details to display, the interactive element generator may utilize item information from the item information component to determine attributes associated with the referenced item. In some examples, all attributes may be included in the interactive element. However, in other examples, only a portion of the attributes may be included. For example, one or more user preferences may be received and/or determined using historical data associated with the user profile, and the user preferences may inform which item information to select for inclusion in the interactive element. For example, the historical data may indicate that the user associated with the user profile in question purchased more items when a certain degree of item detail and/or certain types of item detail were provided. As a further example, the historical data may be data associated with user profiles in addition to (or other than) the user profile in question, such as historical data associated with customers of the merchant, customers of different merchants, and/or customers generally.
At block 810, process 800 may include determining a type of interactive element to display using user preferences. For example, using data received and/or determined as described herein, the interactive element generator may determine the type of interactive element to generate. The interactive element types may include, for example, selectable links, QR codes, indicators that enable voice input to select interactive elements, indicators that enable gesture input to select interactive elements, and so forth. It should be understood that while several examples of element types have been provided herein, the present disclosure includes any element type that allows for receiving user input. The type of interaction element associated with the given multimedia content may be determined based at least in part on the device type of the user device. For example, if the device type indicates that the device includes a camera, gesture-based interactive elements may be used; or if the device type indicates that the device does not include a touch screen, the interactive element may be configured to receive user input other than touch screen input. Additionally or alternatively, purchase history associated with the user profile being used to view the multimedia content may be used to determine past user input types, preferred payment options (e.g., previously recorded payment cards), and this information may be used to determine the type of interactive element to be generated. The interactive elements may include any selectable element that may be presented to a user and configured to receive user input indicative of a selection. The e-commerce tag may be a type of interactive element and may be an optional portion associated with multimedia content that may include e-commerce specific actions such as purchase actions, item information actions, and/or other shopping-related actions.
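The device-capability-driven selection described above could be sketched as a small decision function; the capability flags and element-type names are hypothetical, and an implementation would read them from a device profile:

```python
def choose_element_type(device, past_input_types=()):
    """Pick an interactive-element type the user device can actually
    receive. Flags such as 'has_touch_screen' are illustrative."""
    if device.get("has_touch_screen"):
        return "selectable_link"
    if device.get("has_camera") and "gesture" in past_input_types:
        # Gesture input is only offered when the purchase history
        # shows the user has used it before.
        return "gesture_indicator"
    if device.get("has_microphone"):
        return "voice_indicator"
    # A QR code needs only a screen plus a second, scanning device.
    return "qr_code"

smart_tv = {"has_touch_screen": False, "has_camera": False,
            "has_microphone": False}
phone = {"has_touch_screen": True}
```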
Returning to block 806, where the user preferences have not been determined, process 800 may include determining item information to display using default preferences at block 812. For example, a set of default preferences may be utilized to provide a given amount and/or type of item details.
At block 814, process 800 may include determining a type of interactive element to display using the device capability data. This may be performed in the same or similar manner as the operations described at block 810.
At block 816, the process 800 may include causing the interactive element to be displayed with the multimedia content. For example, the interactive elements may be displayed, such as by way of an overlay, and may be displayed when the item is being referenced by the multimedia content being output.
At block 818, the process 800 may include receiving user input data indicating a selection of an interactive element. For example, a user may provide input to a user device, which may generate user input data indicating a selection of an interactive element.
At block 820, process 800 may include causing a purchase user interface to be displayed with pre-filled item information and/or purchase information. For example, one or more user input fields of the purchase user interface may be pre-populated based at least in part on some or all of the data discussed herein. For example, the attributes and/or options associated with the selected item may be pre-populated with data from the user profile. Additionally or alternatively, item information determined from the multimedia content may be utilized to pre-populate the item attributes. Additionally, payment information from past transactions associated with the user profile may be used to pre-populate payment options, shipping addresses, etc. on the purchase user interface. In this way, upon selection of the interactive element, the purchase user interface may be automatically displayed and pre-populated with the item and payment instrument information, such that the user may only need to confirm the purchase without providing any additional input to obtain the item. In some examples, the payment processing service provider system may be associated with a purchasing user interface, and in these examples, the system may allow the user to indicate that the item of interest is stored in association with one or more applications associated with the system. The application may allow for synchronization between multiple merchants and may allow purchases to be completed at a later time, even across multiple merchants. The application may also provide various payment fulfillment options that may be based at least in part on the purchased item, one or more related merchants, user preferences, and the like. In some examples, the payment fulfillment options may include a loan option, where the payment processing service provider system provides an option to provide a loan, which may allow the user to purchase items under the loan conditions. 
The determination of whether to provide the loan option may be based at least in part on historical data associated with the customer and/or merchant, and/or on coupons or other identifying information indicating that a loan may be provided for a given transaction.
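The pre-population of the purchase user interface described at block 820 could be sketched as below; the field names, and the rule preferring the customer's usual size over the size shown in the content, are illustrative assumptions:

```python
def prefill_purchase_form(item_info, user_profile):
    """Pre-populate purchase user-interface fields from item data
    determined from the multimedia content and from past transactions
    stored with the user profile."""
    return {
        "item": item_info.get("name"),
        "color": item_info.get("color"),
        # Prefer the size the customer usually buys over the size
        # depicted in the content.
        "size": user_profile.get("usual_size") or item_info.get("size"),
        "payment": user_profile.get("saved_payment"),
        "shipping_address": user_profile.get("shipping_address"),
    }

form = prefill_purchase_form(
    {"name": "Brand A T-shirt", "color": "black", "size": "L"},
    {"usual_size": "M", "saved_payment": "card on file",
     "shipping_address": "saved address"},
)
```

With every field pre-filled, the only remaining user input is the purchase confirmation itself.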
FIG. 9 illustrates an example process 900 for an electronic commerce ("e-commerce") tag in multimedia content. The order of the described operations or steps is not intended to be construed as a limitation, and any number of the described operations may be combined in any order and/or in parallel to implement process 900.
At block 902, process 900 may include receiving multimedia content including a representation of an item offered for sale by a merchant. For example, the content component of the payment processing service provider system may be configured to receive multimedia content and/or retrieve multimedia content. For example, a merchant system or other system may push multimedia content to a payment processing service provider system without requiring a specific request for such content. In other examples, the content component may query one or more other systems for multimedia content. In other examples, the content component may receive an indication that multimedia content associated with a given merchant has been requested to be output onto a user device associated with a customer. In these examples, the content component may query instances of the multimedia content and perform techniques for generating interactive element overlays, for example, before the multimedia content is output onto the user device.
At block 904, process 900 may include identifying an item in the multimedia content by one or more identification techniques. For example, the item identification component can utilize image data of the multimedia content to identify items described in the image data. Such analysis may include using computer vision techniques to identify the presence of an object in the given image data, and then identifying the object itself and/or the class of object to which the object belongs (e.g., shirt, pants, hat, watch, etc.). More details regarding the use of computer vision techniques are provided below. Additionally or alternatively, when the multimedia content includes user speech, speech may be recognized using speech recognition and natural language understanding techniques, text data representing the speech is generated, and then the intent and/or purpose of the speech is determined. By doing so, the item identification component can identify the item referenced in the multimedia content as well as, for example, the attributes (e.g., color, size, brand, etc.) of the item. Additionally or alternatively, the item identification component can utilize metadata associated with the multimedia content and/or the merchant providing the content to identify the item. For example, the merchant system may provide metadata indicating the items referenced in the multimedia content. In an example, one or more other users may have reviewed or otherwise provided information related to the multimedia content. In these and other examples, some or all of this data may be used by the item identification component to identify items and/or attributes associated with items referenced in the multimedia content.
At block 906, process 900 may include determining identification information associated with the item based at least in part on inventory data associated with the merchant. For example, the item information component of the payment processing service provider system can receive and/or determine item information associated with items referenced in the multimedia content. For example, the merchant system may provide data indicative of information associated with the referenced item. The information may include information related to attributes of the item (e.g., size, color, brand, item type, item options, etc.). Additionally or alternatively, the item information component can query one or more systems for item information. For example, the item information component may query the merchant system for inventory data indicating the current inventory of the item, such as when outputting multimedia content. In an example, the merchant system may return inventory data and may utilize the inventory data to inform the customer of the current inventory of items available from the merchant. In other examples, an indication of the current inventory of one or more other merchants may be retrieved and displayed on the user device, for example, when inventory data indicates that the item is out of stock and/or when user preferences indicate that the customer prefers a different merchant. The item information component can also receive item information from the item identification component, for example, when the item identification component determines one or more attributes of an item utilizing the techniques described herein. Some use cases may include modifications when a user purchases antiques, jewelry, or gardening items. For example, when antiques are purchased, the display of information may be modified based at least in part on items being identified as unique or rare. 
In a jewelry example, item information associated with quality control of the item and/or authentication of the item may be provided. In a gardening example, preferences based on item location (e.g., seasonal demand) may be utilized to determine what information is to be displayed. In addition to modifications to the item information display, modifications based on item use cases may include modifications to the displayed widgets, the available interaction types, and/or modifications during the ordering process.
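The inventory lookup with cross-merchant fallback described above could be sketched as follows; the data shapes and merchant identifiers are illustrative assumptions:

```python
def inventory_to_display(item_id, merchant_inventories, preferred):
    """Decide which merchant's stock to surface with the interactive
    element: the preferred merchant when the item is in stock there,
    otherwise another merchant that has the item available."""
    qty = merchant_inventories.get(preferred, {}).get(item_id, 0)
    if qty > 0:
        return preferred, qty
    for merchant, stock in merchant_inventories.items():
        if stock.get(item_id, 0) > 0:
            return merchant, stock[item_id]
    return None, 0  # out of stock everywhere

inventories = {"merchant_a": {"sku-1": 0}, "merchant_b": {"sku-1": 7}}
```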
At block 908, the process 900 may include associating the identifying information with an interactive element to be presented in association with the multimedia content, wherein the interactive element, when selected, causes the user device to display a graphical user interface configured to allow the customer to purchase the item. For example, the interactive element generator of the payment processing service provider system may be configured to generate data representing the interactive element using data received and/or determined by the item identification component and/or the item information component. The interactive elements may be configured such that when the multimedia content is output on the user device, the interactive elements are also presented, for example by way of an overlay or other overlay technique. In an example, the interactive element may be specific to the multimedia content, the items referenced therein, the item attributes, and/or the user preferences. For example, using data received and/or determined as described herein, the interactive element generator may determine the type of interactive element to generate. The interactive element types may include, for example, selectable links, QR codes, indicators that enable voice input to select interactive elements, indicators that enable gesture input to select interactive elements, and so forth. It should be understood that while several examples of element types have been provided herein, the present disclosure includes any element type that allows for receiving user input. The determination of the type of interactive element associated with the given multimedia content may be based at least in part on the device type of the user device.
For example, if the device type indicates that the device includes a camera, gesture-based interactive elements may be used; or if the device type indicates that the device does not include a touch screen, the interactive element may be configured to receive user input other than touch screen input. Additionally or alternatively, purchase history associated with the user profile being used to view the multimedia content may be used to determine past user input types, and this information may be used to determine the type of interactive element to generate. When using a QR code, the QR code may have embedded therein actions such as payment links, functions allowing items to be added to the virtual shopping cart, additional information about related items, etc. These actions may be embedded into the relevant interaction elements by the payment processing service provider system and/or the merchant. The interactive elements may include any selectable element that may be presented to a user and configured to receive user input indicative of a selection. The e-commerce tag may be a type of interactive element and may be an optional portion associated with multimedia content that may include e-commerce specific actions such as purchase actions, item information actions, and/or other shopping-related actions.
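The embedding of e-commerce actions into a QR code, as described above, could be sketched as building the URL the code would carry; the URL scheme and parameter names here are hypothetical examples, and any QR encoder could render the resulting string:

```python
from urllib.parse import urlencode, urlparse, parse_qs

def build_qr_payload(base_url, action, item_id, cart_id=None):
    """Build the URL a QR code would carry, embedding an e-commerce
    action (e.g., a payment link, an add-to-cart function, or an
    item-information request) as query parameters."""
    params = {"action": action, "item": item_id}
    if cart_id is not None:
        params["cart"] = cart_id
    return base_url + "?" + urlencode(params)

url = build_qr_payload("https://example.com/shop", "add_to_cart",
                       "sku-123", cart_id="c-9")
```

Scanning the rendered code would deliver the embedded action to the payment processing service provider system and/or the merchant.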
In addition to the type of interactive element, the interactive element generator may be configured to determine one or more other aspects associated with the interactive element, such as when the interactive element is displayed relative to the multimedia content, a location at which the interactive element is displayed relative to a visual window of the user device, a quantity and/or type of item details to be displayed, and/or a function that occurs when the interactive element is selected. For example, the interactive element generator may determine when to display the interactive element based at least in part on data indicating when an item starts to be referenced in the multimedia content and when the item stops being referenced. For example, a given content may be two minutes in length, but the item may not begin to be referenced until the 30 second mark and then stop being referenced at the 1 minute mark. With the item identification data described herein, the interactive element generator may generate interactive elements configured to be displayed only during the time frame in which the item is referenced. With respect to determining where to display the interactive element, the interactive element generator may utilize the item identification data to determine a relative position of an item depicted in the multimedia content with respect to a visual window of the user device. For example, the item identification data may indicate a location of an object identified in the image data, and the interactive element may be generated such that, when displayed, the interactive element is located near the object rather than, for example, on top of the object. This enables the user to see the object and the interactive element simultaneously when the multimedia content is output, and to perceive that the object and the interactive element are associated with each other.
At block 910, process 900 may include overlaying interactive elements on a portion of the multimedia content for customer interaction. For example, the interactive elements may be displayed, such as by way of an overlay, and may be displayed when the item is being referenced by the multimedia content being output.
At block 912, process 900 may include determining whether input data has been received indicating a customer interaction with the interactive element. For example, the input data may correspond to user input to a customer device indicating selection of the interactive element.
At block 914, process 900 may include, when the input data has been received, causing, based at least in part on the input data, a user device of the customer to display a graphical user interface configured to allow the customer to purchase the item. For example, the command generator of the payment processing service provider system may be configured to generate commands that, among other things, cause a device, such as a user device, to perform an action. For example, the command generator may also generate a command to cause the user device to display a purchase user interface in response to selection of one or more interactive elements. The command generator may also generate a command to cause the user device to display information in the user interface. For example, one or more user input fields of the purchase user interface may be pre-populated based at least in part on some or all of the data discussed herein. For example, the attributes and/or options associated with the selected item may be pre-populated with data from the user profile. Additionally or alternatively, item information determined from the multimedia content may be utilized to pre-populate the item attributes. Additionally, payment information from past transactions associated with the user profile may be used to pre-populate payment options, shipping addresses, etc. on the purchase user interface. In this way, upon selection of the interactive element, the purchase user interface may be automatically displayed and pre-populated with the item and payment instrument information, such that the user may only need to confirm the purchase without providing any additional input to obtain the item.
At block 916, if no input data is received, the process 900 may end.
Additionally or alternatively, process 900 may include receiving inventory data from a system associated with a merchant, the inventory data indicating a current inventory of items available for purchase from the merchant. The process 900 may also include associating the digital representation of the current inventory with the interactive element, wherein overlaying the interactive element on a portion of the multimedia content includes overlaying the digital representation of the current inventory on a portion of the multimedia content.
Additionally or alternatively, the process 900 may include determining not to cause display of the graphical user interface until the multimedia content is stopped. Process 900 may also include storing first data indicating that the interactive element is selected. The process 900 may also include receiving additional input data indicating customer interactions with additional interactive elements associated with additional items in the multimedia content, and storing second data indicating that the additional interactive elements are selected. The process 900 can also include causing the graphical user interface to display purchase information for the item and the additional item based at least in part on the multimedia content stopping and using the first data and the second data.
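The deferred-display variant above (store each selection, then surface one combined purchase interface when playback stops) could be sketched as follows; the class and method names are illustrative:

```python
class DeferredCheckout:
    """Store selections made while the content plays and surface one
    combined purchase interface only after playback stops."""
    def __init__(self):
        self._selected = []
        self.playing = True

    def select(self, item_id):
        # Store data indicating the interactive element was selected;
        # playback is not interrupted with a purchase interface yet.
        self._selected.append(item_id)

    def stop(self):
        # Playback stopped: return purchase information for the item
        # and any additional items, for a single graphical interface.
        self.playing = False
        return list(self._selected)

checkout = DeferredCheckout()
checkout.select("shirt-1")   # first data: interactive element selected
checkout.select("hat-2")     # second data: additional element selected
summary = checkout.stop()
```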
Additionally or alternatively, the process 900 can include determining attributes of items presented in the multimedia content, the attributes including selectable options associated with the items. Process 900 may also include receiving, from a system associated with a merchant, fee data indicating a fee for an item having the attribute. Process 900 may also include causing the graphical user interface to include the fee and causing the graphical user interface to include the attribute as pre-populated information in the user input field.
FIG. 10 illustrates another example process 1000 for an electronic commerce ("e-commerce") tag in multimedia content. The order of the described operations or steps is not intended to be construed as a limitation, and any number of the described operations may be combined in any order and/or in parallel to implement process 1000.
At block 1002, process 1000 may include receiving multimedia content including a representation of an item. For example, the content component of the payment processing service provider system may be configured to receive multimedia content and/or retrieve multimedia content. For example, a merchant system or other system may push multimedia content to a payment processing service provider system without requiring a specific request for such content. In other examples, the content component may query one or more other systems for multimedia content. In other examples, the content component may receive an indication that multimedia content associated with a given merchant has been requested to be output onto a user device associated with a customer. In these examples, the content component may query instances of the multimedia content and perform techniques for generating interactive element overlays, for example, before the multimedia content is output onto the user device.
At block 1004, process 1000 may include identifying an item in the multimedia content. For example, the item identification component can utilize image data of the multimedia content to identify items depicted in the image data. Such analysis may include using computer vision techniques to identify the presence of an object in the given image data, and then identifying the object itself and/or the class of object to which the object belongs (e.g., shirt, pants, hat, watch, etc.). More detailed information about the use of computer vision techniques is provided below. Additionally or alternatively, when the multimedia content includes user speech, speech may be recognized using speech recognition and natural language understanding techniques, text data representing the speech is generated, and then the intent and/or purpose of the speech is determined. By doing so, the item identification component can identify the item referenced in the multimedia content as well as, for example, the attributes (e.g., color, size, brand, etc.) of the item. Additionally or alternatively, the item identification component can utilize metadata associated with the multimedia content and/or the merchant providing the content to identify the item. For example, the merchant system may provide metadata indicating the items referenced in the multimedia content. In an example, one or more other users may have reviewed or otherwise provided information related to the multimedia content. In these and other examples, some or all of this data may be used by the item identification component to identify items and/or attributes associated with items referenced in the multimedia content.
At block 1006, process 1000 may include determining identification information associated with the item. For example, the item information component of the payment processing service provider system can receive and/or determine item information associated with items referenced in the multimedia content. For example, the merchant system may provide data indicative of information associated with the referenced item. The information may include information related to attributes of the item (e.g., size, color, brand, item type, item options, etc.). Additionally or alternatively, the item information component can query one or more systems for item information. For example, the item information component may query the merchant system for inventory data indicating the current inventory of the item, such as when outputting multimedia content. In an example, the merchant system may return inventory data and may utilize the inventory data to inform the customer of the current inventory of items available from the merchant. In other examples, an indication of the current inventory of one or more other merchants may be retrieved and displayed on the user device, for example, when inventory data indicates that the item is out of stock and/or when user preferences indicate that the customer prefers a different merchant. As described with respect to FIG. 5, when an item is out of stock, the interactive elements of the item may not be generated, to save resources at the payment service provider system. The item information component can also receive item information from the item identification component, such as when the item identification component determines one or more attributes of an item utilizing the techniques described herein.
At block 1008, process 1000 may include associating the identifying information with an interactive element to be presented in association with the multimedia content, wherein the interactive element is selectable by a user of the user device. For example, the interactive element generator of the payment processing service provider system may be configured to generate data representing the interactive element using data received and/or determined by the item identification component and/or the item information component. The interactive elements may be configured such that when the multimedia content is output on the user device, the interactive elements are also presented, for example in the form of overlays. In an example, the interactive element may be specific to the multimedia content, the items referenced therein, the item attributes, and/or the user preferences. For example, using data received and/or determined as described herein, the interactive element generator may determine the type of interactive element to generate. The interactive element types may include, for example, selectable links, quick response codes ("QR codes"), indicators that enable voice input to select interactive elements, indicators that enable gesture input to select interactive elements, and so forth. It should be understood that while several examples of element types have been provided herein, the present disclosure includes any element type that allows for receiving user input. The determination of the type of interactive element associated with given multimedia content may be based at least in part on the device type of the user device. For example, if the device type indicates that the device includes a camera, gesture-based interactive elements may be used; or if the device type indicates that the device does not include a touch screen, the interactive element may be configured to receive user input other than touch screen input.
Additionally or alternatively, purchase history associated with the user profile being used to view the multimedia content may be used to determine past user input types, and this information may be used to determine the type of interactive element to generate. The interactive elements may include any selectable element that may be presented to a user and configured to receive user input indicative of a selection. The e-commerce tag may be a type of interactive element and it may be an optional portion associated with multimedia content that may include e-commerce specific actions such as purchase actions, item information actions, and/or other shopping-related actions.
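The device-capability and purchase-history considerations above might be combined as in the following sketch. The input-type names and capability flags are hypothetical:

```python
def choose_element_type(device, past_input_types):
    """Pick an interactive-element type the device can actually receive,
    preferring whatever input type the user has used most in the past
    (illustrative sketch)."""
    supported = []
    if device.get("touchscreen"):
        supported.append("touch_link")
    if device.get("camera"):
        supported.append("gesture")
    if device.get("microphone"):
        supported.append("voice")
    supported.append("qr_code")  # a QR code works on any display
    # Prefer the most frequent historical input type, if supported.
    for input_type in sorted(set(past_input_types),
                             key=past_input_types.count, reverse=True):
        if input_type in supported:
            return input_type
    return supported[0]
```

For example, a camera-equipped television with a gesture-heavy history would be served a gesture element, while a device with no recorded history falls back to its first supported type.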
In addition to the type of interactive element, the interactive element generator may be configured to determine one or more other aspects associated with the interactive element, such as when the interactive element is displayed relative to the multimedia content, a location of the interactive element relative to a visual window of the user device, a quantity and/or type of item details to be displayed, and/or a function that occurs when the interactive element is selected. For example, the interactive element generator may determine when to display the interactive element based at least in part on data indicating when an item starts being referenced in the multimedia content and when the item stops being referenced. For example, a given piece of content may be two minutes in length, but the item may not begin to be referenced until the 30-second mark and may then stop being referenced at the 1-minute mark. With the item identification data described herein, the interactive element generator may generate interactive elements configured to be displayed only during the time frame in which the item is referenced. With respect to determining where to display the interactive element, the interactive element generator may utilize the item identification data to determine a relative position of an item described in the multimedia content with respect to a visual window of the user device. For example, the item identification data may indicate the location of the identified object in the image data, and the interactive element may be generated such that, when displayed, the interactive element is located near the object rather than, for example, on top of the object, which would enable the user to see the object and the interactive element simultaneously and perceive that the object and the interactive element are associated with each other when the multimedia content is output. It should be appreciated that the interactive elements generated and used herein may be e-commerce tags.
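The timing and placement logic described above might be sketched as follows; the bounding-box and window coordinates are illustrative assumptions:

```python
def overlay_spec(ref_start, ref_end, bbox, window_w, window_h):
    """Return when and where to show the overlay: only while the item is
    referenced, positioned beside the object's bounding box (not on top
    of it) so both remain visible (illustrative sketch)."""
    x, y, w, h = bbox
    # Place to the right of the object if it fits, otherwise to the left.
    overlay_x = x + w + 10 if x + w + 10 < window_w else max(0, x - 10)
    overlay_y = min(y, window_h - 1)
    return {"show_from": ref_start, "show_until": ref_end,
            "position": (overlay_x, overlay_y)}

# Item referenced from the 30-second mark to the 1-minute mark.
spec = overlay_spec(ref_start=30, ref_end=60, bbox=(40, 60, 120, 200),
                    window_w=1280, window_h=720)
```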
At block 1010, process 1000 may include associating an interactive element with a portion of the multimedia content. For example, with respect to the multimedia content, the item identification component and/or metadata can indicate when the item in question starts being referenced and when the item stops being referenced. An interactive element may be associated with that time frame.
At block 1012, process 1000 may include determining whether input data indicating a user interaction with an interactive element has been received from a user device. For example, the input data may correspond to user input to a customer device indicating selection of an interactive element.
At block 1014, process 1000 may include, when input data is received, causing a user device to display a graphical user interface configured to allow a user to interact with the item based at least in part on the input data. For example, the command generator of the payment processing service provider system may be configured to generate commands that, among other things, cause a device, such as a user device, to perform an action. For example, the command generator may also generate a command to cause the user device to display a purchase user interface in response to selection of one or more interactive elements. The command generator may also generate a command to cause the user device to display information in the user interface. For example, one or more user input fields of the purchase user interface may be pre-populated based at least in part on some or all of the data discussed herein. For example, the attributes and/or options associated with the selected item may be pre-populated with data from the user profile. Additionally or alternatively, item information determined from the multimedia content may be utilized to pre-populate the item attributes. Additionally, payment information from past transactions associated with the user profile may be used to pre-populate payment options, shipping addresses, etc. on the purchase user interface. In this way, upon selection of the interactive element, the purchase user interface may be automatically displayed and pre-populated with the item and payment instrument information, such that the user may only need to confirm the purchase without providing any additional input to obtain the item.
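The pre-population described above might be sketched as follows; the field names are hypothetical, not defined in this disclosure:

```python
def prefill_purchase_form(item_info, user_profile):
    """Build a purchase form pre-populated from item information and the
    user's stored payment/shipping details, so the user may only need to
    confirm the purchase (illustrative sketch)."""
    return {
        "item": item_info.get("name"),
        # Profile preferences win over attributes inferred from content.
        "size": user_profile.get("preferred_size") or item_info.get("size"),
        "color": item_info.get("color"),
        "payment": user_profile.get("payment_instrument"),
        "shipping_address": user_profile.get("shipping_address"),
        # Confirmation-only checkout is possible when payment and
        # shipping details are already on file.
        "needs_confirmation_only": all(
            user_profile.get(k)
            for k in ("payment_instrument", "shipping_address")),
    }

form = prefill_purchase_form(
    {"name": "shirt", "color": "red"},
    {"preferred_size": "M", "payment_instrument": "card-1234",
     "shipping_address": "123 Main St"},
)
```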
At block 1016, if no input data is received, process 1000 may end.
Additionally or alternatively, process 1000 can include determining a first time indicator indicating when the item starts being referenced in the multimedia content. Process 1000 may also include determining a second time indicator indicating when the representation of the item stops being referenced in the multimedia content. The process 1000 may also include causing the interactive element to be displayed in association with the multimedia content from the first time indicator to the second time indicator when the multimedia content is output via the user device.
Additionally or alternatively, process 1000 may include determining a location on a viewable window of the user device at which the item is presented during output of the multimedia content by the user device. Process 1000 may also include causing an interactive element to be presented in association with the location when the multimedia content is output via the user device.
Additionally or alternatively, process 1000 may include determining a portion of the multimedia content in which the representation of the item is displayed. Process 1000 may also include identifying attributes of the item using image data from the portion of the multimedia content. Process 1000 may also include incorporating the attributes as at least a portion of the identification information.
Additionally or alternatively, process 1000 may include receiving text data representing one or more comments associated with the multimedia content. Process 1000 may also include identifying one or more keywords associated with the item from the text data. Process 1000 may also include modifying the identification information based at least in part on the one or more keywords.
Additionally or alternatively, process 1000 may include determining that the item is not available from the first merchant when the user input is received. Process 1000 may also include identifying one or more second merchants from which the item may be obtained. Process 1000 may also include causing the graphical user interface to include identifiers of one or more second merchants.
Additionally or alternatively, process 1000 may include receiving inventory data from a system associated with a merchant selling the item, the inventory data indicating a current inventory of items available for purchase from the merchant. Process 1000 may also include determining that the current inventory is less than a threshold inventory value. Process 1000 may also include sending a recommendation for a replacement item that is currently in inventory based at least in part on the current inventory being less than the threshold inventory value. The replacement items may include variants of items depicted in the multimedia content. For example, one or more of the attributes may be different, such as color, size, style, etc. The data indicating the substitute may be saved in a catalog such that the same interactive element, when selected, points to the substitute instead of the original item. The catalog data may provide an indication of associations between items, interactive elements, and alternative items and/or an indication of associations with alternative merchants.
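The low-inventory replacement flow might be sketched as follows; the catalog shape is a hypothetical illustration:

```python
def recommend_replacement(item, stock, threshold, catalog):
    """If current stock falls below the threshold, pick a catalogued
    variant of the item (same item, different attribute such as color)
    that is in stock (illustrative sketch)."""
    if stock >= threshold:
        return None  # enough inventory; no replacement needed
    for variant in catalog.get(item["name"], []):
        if variant["stock"] > 0:
            return variant
    return None

catalog = {"shirt": [{"name": "shirt", "color": "blue", "stock": 12}]}
pick = recommend_replacement({"name": "shirt", "color": "red"}, 1, 5, catalog)
```

Persisting the variant association in the catalog, as the passage notes, lets the same interactive element simply resolve to the substitute at selection time.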
Additionally or alternatively, process 1000 may include generating first data representing the interactive element, the first data being separate from the multimedia content. Process 1000 may also include receiving an indication that the multimedia content has been requested for output to the user device. Process 1000 may also include causing, in response to the indication, first data to be superimposed on the multimedia content when the multimedia content is output onto the user device. In this way, item information and links associated with the interactive elements may be retained and utilized even when the multimedia content is changed and/or if the multimedia content becomes inoperable. In these examples, the multimedia content and the one or more interactive elements associated therewith may still be associated with updated inventory information even in the event that the item information changes, e.g., over time. This may reduce or eliminate the need to change the multimedia content itself. Additionally, metadata associated with the user's interaction with the interactive element and/or metadata associated with the item and/or the interactive element itself may be separate from purchase data associated with the purchase of the item.
Additionally or alternatively, process 1000 may include receiving item information about the item from a system associated with the merchant. Process 1000 may also include receiving payment information from the user device via the graphical user interface. Process 1000 may also include initiating a payment transaction for purchasing the item using the payment information and the item information.
Additionally or alternatively, process 1000 may include determining a geographic region associated with the user device upon receiving the user input data. Process 1000 may also include determining a current inventory of items in the geographic area, wherein causing the interactive element to be displayed includes causing an indication of the current inventory to be displayed.
Additionally or alternatively, process 1000 may include analyzing image data of the multimedia content to identify objects depicted in the multimedia content using one or more computer vision processes and prior to transmitting the instance of the multimedia content to the user device. Process 1000 may also include, prior to transmitting the instance of the multimedia content, generating first data comprising an interactive element, wherein the interactive element is based at least in part on the identified object, and transmitting the first data and the instance of the multimedia content to the user device.
Additionally or alternatively, process 1000 may include receiving, from a user device, image data depicting a gesture made by a user of the user device when outputting the multimedia content, the user input data comprising the image data. Process 1000 may also include determining a movement pattern of the gesture based at least in part on an analysis of the image data. Process 1000 may also include determining that the movement pattern corresponds to a reference movement pattern that indicates that the user has provided input to select an item to purchase. Process 1000 may also include causing an action to be performed based at least in part on the movement pattern corresponding to the reference movement pattern.
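One simple way to realize the movement-pattern comparison described above is an average point-to-point distance against a reference pattern. The tolerance value and (x, y) point format below are illustrative assumptions:

```python
def matches_reference(pattern, reference, tolerance=20.0):
    """Compare a tracked movement pattern (list of (x, y) points) to a
    reference pattern by average point-to-point Euclidean distance
    (illustrative sketch of the gesture-matching step)."""
    if len(pattern) != len(reference):
        return False
    total = sum(((px - rx) ** 2 + (py - ry) ** 2) ** 0.5
                for (px, py), (rx, ry) in zip(pattern, reference))
    return total / len(pattern) <= tolerance

# A horizontal swipe used as the reference "select to purchase" gesture.
reference_swipe = [(0, 0), (50, 0), (100, 0)]
```

A production system would more likely use a trained gesture classifier or dynamic time warping; the averaged-distance check is the minimal form of the idea.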
Additionally or alternatively, process 1000 may include utilizing a speech recognition process performed on audio data of the multimedia content to determine text data indicative of a speech portion of the multimedia content. Process 1000 may also include identifying one or more characteristics of the item based at least in part on the text data. Process 1000 may also include pre-populating at least one field of the graphical user interface with one or more characteristics.
FIG. 11 illustrates an example process 1100 for customizing an e-commerce tag in real-time multimedia content. The order of the described operations or steps is not intended to be construed as a limitation, and any number of the described operations may be combined in any order and/or in parallel to implement process 1100.
At block 1102, process 1100 may include receiving multimedia content, including a representation of an item offered for sale by a merchant. For example, the content component of the payment processing service provider system may be configured to receive multimedia content and/or retrieve multimedia content. For example, a merchant system or other system may push multimedia content to a payment processing service provider system without requiring a specific request for such content. In other examples, the content component may query one or more other systems for multimedia content. In other examples, the content component may receive an indication that multimedia content associated with a given merchant has been requested to be output onto a user device associated with a customer. In these examples, the content component may query instances of the multimedia content and perform techniques for generating interactive element overlays, for example, before the multimedia content is output onto the user device.
In the event that multimedia content has been received, process 1100 may include determining, at block 1104, a user profile associated with the particular user device that has requested the output of the multimedia content, the user profile including a purchase history of the user. For example, a user device from which a request to view multimedia content is received may be associated with a device identifier. The device identifier may be associated with a given user profile. In other examples, access information associated with the user profile may be used to log in or otherwise access a platform for viewing multimedia content, and the access information may be used to identify the user profile.
At block 1106, the process 1100 may include determining user preferences based at least in part on the purchase history. For example, the purchase history may indicate one or more aspects of the user profile and/or the manner in which the user associated with the user profile interacted with the device and/or platform. For example, the purchase history may indicate typical sizes ordered, number of items, item colors, brands, item options, payment information, user input, user feedback, user association with other accounts, discount information, loyalty membership, and the like. Some or all of this information may be used to determine one or more user preferences.
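A sketch of collapsing a purchase history into user preferences as described above; the attribute keys are hypothetical:

```python
def derive_preferences(purchase_history):
    """Collapse a purchase history into simple user preferences: the
    most frequent size, color, and brand across past orders
    (illustrative sketch)."""
    def most_common(key):
        values = [p[key] for p in purchase_history if key in p]
        return max(set(values), key=values.count) if values else None
    return {"size": most_common("size"),
            "color": most_common("color"),
            "brand": most_common("brand")}

history = [{"size": "M", "color": "red", "brand": "Acme"},
           {"size": "M", "color": "blue", "brand": "Acme"},
           {"size": "L", "color": "red"}]
prefs = derive_preferences(history)
```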
At block 1108, the process 1100 may include generating identification information associated with the item that emphasizes details about the item associated with the user preferences. For example, determining the type of interactive element associated with given multimedia content may be based at least in part on the user preferences. The purchase history associated with the user profile being used to view the multimedia content may be used to determine past user input types, and this information may be used to determine the type of interactive element to be generated. In addition to the type of interactive element, the interactive element generator may also utilize item information from the item information component to determine attributes associated with the referenced item. In some examples, all attributes may be included in the interactive element. However, in other examples, only a portion of the attributes may be included. For example, one or more user preferences may be received and/or determined using historical data associated with the user profile, and these user preferences may inform which item information to select for inclusion in the interactive element. For example, the historical data may indicate that the user associated with the user profile in question purchased more items when a given degree of item detail and/or a certain type of item detail was provided. As a further example, the historical data may be associated with user profiles in addition to (or other than) the user profile in question, such as historical data associated with customers of the merchant, customers of different merchants, and/or customers generally. The interactive elements may include any selectable element that may be presented to a user and configured to receive user input indicative of a selection.
The e-commerce tag may be a type of interactive element and may be an optional portion associated with multimedia content that may include e-commerce specific actions such as purchase actions, item information actions, and/or other shopping-related actions.
With respect to determining the function to be performed when an interactive element is selected, the interactive element generator may receive and/or determine data indicating user preferences for the selection function. These user preferences may indicate that the user desires a purchase user interface to be displayed upon selection of an interactive element. In other examples, these user preferences may indicate that the user wishes the purchase user interface to be displayed only after the multimedia content has stopped, or otherwise at some time after a given interactive element has been selected. This may allow the user to select a plurality of interactive elements, each corresponding to a different item, before being presented with the purchase user interface. In these examples, the interactive elements may be configured to be selected, and the data indicating these selections may be saved until the multimedia content ceases.
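The deferred-purchase behavior described above might be sketched as a small selection queue; the class and method names are hypothetical:

```python
class SelectionQueue:
    """Collect interactive-element selections during playback and only
    surface the purchase interface once the content stops, per the
    'buy several items at the end' preference (illustrative sketch)."""
    def __init__(self, defer_until_end):
        self.defer = defer_until_end
        self.selected = []

    def on_select(self, item):
        self.selected.append(item)
        # Show the purchase UI immediately unless the user prefers to defer.
        return None if self.defer else [item]

    def on_content_end(self):
        return self.selected if self.defer else []

q = SelectionQueue(defer_until_end=True)
q.on_select("shirt")
q.on_select("hat")

q2 = SelectionQueue(defer_until_end=False)
immediate = q2.on_select("shirt")
```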
At block 1110, the process 1100 may include integrating the identification information with the multimedia content such that the identification information is displayed when the multimedia content is displayed on a particular user device. For example, data representing the interactive elements may be generated and may be associated with the multimedia content such that when the multimedia content is displayed, the interactive elements are also displayed. The identification information may be part of the interactive element such that the identification information is displayed when the multimedia data is displayed. In other examples, the identification information may not be included in the interactive element but may be associated with the interactive element such that when the user interacts with the interactive element, the identification information is displayed to the user.
Additionally or alternatively, process 1100 may include receiving transaction data associated with past transactions of the user profile. Process 1100 may also include utilizing the transaction data to process the past transactions and determining the purchase history based at least in part on the transaction data. Process 1100 may also include determining one or more attributes associated with the past transactions that indicate a purchase trend of the user profile. The process 1100 may also include determining an attribute of the item corresponding to the one or more attributes, wherein generating identifying information that emphasizes details about the item includes emphasizing the attribute of the item corresponding to the one or more attributes.
Additionally or alternatively, the process 1100 may include determining that the historical user input to the particular user device is a particular input type selected from at least one of a touch screen input type, a click input type, a quick response code input type, a sound input type, or a gesture input type, wherein the user preference includes the particular input type. Process 1100 may also include causing the identifying information to be presented as an interactive element configured to receive a particular input type.
Additionally or alternatively, process 1100 can include determining a level of item detail associated with items purchased in past transactions associated with the user profile, wherein the user preference indicates the level of item detail. Process 1100 may also include causing identification information to be displayed on the particular user device, the identification information including the item level of detail.
Additionally or alternatively, the process 1100 can include determining from the purchase history one or more item categories of previously purchased items associated with the user profile, the user preferences indicating the one or more item categories. The process 1100 may also include determining that the item corresponds to at least one of the one or more item categories, wherein generating the identifying information is based at least in part on determining that the item corresponds to at least one of the one or more item categories.
Additionally or alternatively, process 1100 may include determining from a purchase history of the user profile an amount of time that the selectable element has been displayed prior to receiving user input indicating selection of the selectable element to purchase the item. Process 1100 may also include causing the interactive element to be displayed for at least a historical amount of time while outputting the multimedia content.
Additionally or alternatively, process 1100 may include determining payment instrument information from a purchase history of the user profile for use in past payment transactions. The process 1100 may also include receiving input data indicating a selection of an item to purchase, and causing, based at least in part on the input data, a user device to display a graphical user interface configured to allow a user to purchase the item, the graphical user interface including an input field pre-populated with payment instrument information.
Additionally or alternatively, process 1100 may include receiving first data from one or more merchants corresponding to a purchase history indicating items associated with past transactions of user profile and determining one or more attributes associated with the items. The process 1100 may also include receiving input data indicating a selection of an item to purchase, and based at least in part on the input data, causing a user device to display a graphical user interface configured to allow a user to purchase the item, the graphical user interface including an input field pre-populated with one or more item-related options based at least in part on one or more attributes.
Additionally or alternatively, process 1100 can include determining, from a purchase history of the user profile, merchants that have provided items to a user associated with the user profile. Process 1100 may also include determining that a first amount of transactions with a first one of the merchants exceeds a second amount of transactions with a second one of the merchants. Process 1100 may also include determining that the first merchant offers the item for sale and determining that the second merchant offers the item for sale, wherein the identification information includes an identifier of the first merchant but not the second merchant based at least in part on the first transaction amount exceeding the second transaction amount.
Additionally or alternatively, process 1100 can include receiving feedback data from a user device associated with the user profile indicating user interaction with the identification information. Process 1100 may also include modifying the user preferences based at least in part on the feedback data and causing the modified identification information to be displayed on the user device based on the modified user preferences.
Additionally or alternatively, process 1100 can include determining historical user input associated with the user profile that indicates that a plurality of items represented in other multimedia data were purchased together. The process 1100 may further include refraining from causing display of a graphical user interface configured to allow purchase of the item prior to the multimedia content ceasing to be output, based at least in part on determining that the plurality of items were purchased together. Process 1100 may also include causing display of the graphical user interface in response to the multimedia content ceasing to be output.
Additionally or alternatively, process 1100 can include determining a device type of a device used to output the multimedia content in association with the user profile. Process 1100 may also include determining, based at least in part on the device type, one or more user input types that the device is capable of receiving. The process 1100 may also include generating an interactive element including the identifying information, the interactive element configured to be displayed when the multimedia content is displayed, and the interactive element configured to receive one or more user input types.
Additionally or alternatively, process 1100 can include determining from the purchase history that the user profile has involved at least a threshold number of past purchases in which the item was purchased through interactions with other multimedia content. Process 1100 may also include determining a discount value associated with the user profile based at least in part on the user profile engaging in at least a threshold number of past purchases. The process 1100 may also include causing a graphical user interface to be displayed in response to input from a user selecting an item to purchase, the graphical user interface indicating that a discount value is to be applied to a payment transaction for the item.
Additionally or alternatively, process 1100 can include storing data indicative of historical interactions with other multimedia content to purchase items and determining one or more characteristics of the other multimedia content. Process 1100 may also include identifying content associated with the user profile that has not been viewed and that is associated with one or more characteristics of other multimedia content. Process 1100 may also include sending a recommendation to view the content to a user device associated with the user profile.
FIG. 12 illustrates another example sequence diagram showing a process 1200 for customizing an e-commerce tag in real-time multimedia content. The order of the described operations or steps is not intended to be construed as a limitation, and any number of the described operations may be combined in any order and/or in parallel to implement process 1200.
At block 1202, the process 1200 may include the user device 102 transmitting user input data indicating selection of an interactive element during display of the multimedia content. For example, the interactive elements may be displayed on the user device 102 when the user device 102 is outputting multimedia content including items associated with the interactive elements. The user of the user device 102 may select an interactive element and user input data indicating the selection may be sent to the payment processing service provider system 104.
At block 1204, process 1200 may include payment processing service provider system 104 sending a query to merchant device 106 for inventory data and/or item data. For example, the payment processing service provider system 104 may query the current inventory of items associated with the interactive elements and/or information associated with the items, such as currently available sizes, colors, types, options, inventory conditions, and the like.
At block 1206, process 1200 may include merchant device 106 transmitting the requested inventory data and/or item data to payment processing service provider system 104. Inventory data and/or item data may be used to determine whether an item and/or an item having particular attributes (e.g., color and size) is currently in inventory and available from the merchant in question. In the event that inventory data and/or item data indicates that items having desired item attributes are available, an indication of inventory count and/or some or all of the item information may be displayed on the user device 102. In the event that the inventory data and/or item data indicates that the item is not available from a merchant, additional operations may be performed to provide the user with other information about other items and/or items available from other merchants.
At block 1208, process 1200 may include payment processing service provider system 104 sending a query for inventory data and/or item data to one or more other merchant devices 1250. The query may be similar to the query described above with respect to block 1204.
At block 1210, process 1200 may include one or more other merchant devices transmitting the requested inventory data and/or item data to payment processing service provider system 104. Inventory data and/or item data may be transmitted in a similar manner as inventory data and/or item data is transmitted with respect to block 1206.
At block 1212, process 1200 may include payment processing service provider system 104 sending a command to user device 102 to display inventory information and/or replacement merchant information. For example, one or more indicators may be displayed. These indicators may identify other merchants from which the item may be obtained, inventory data from other merchants, item options available from other merchants, pricing information, and the like.
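The sequence of blocks 1204 through 1212 might be sketched end to end as follows, with the merchant systems modelled as plain dictionaries for illustration:

```python
def handle_selection(item_id, merchant, other_merchants):
    """Sketch of blocks 1204-1212: query the primary merchant, fall back
    to other merchants when out of stock, and return the display command
    to send to the user device (illustrative sketch)."""
    stock = merchant["inventory"].get(item_id, 0)
    if stock > 0:
        return {"command": "display_inventory",
                "merchant": merchant["name"], "stock": stock}
    # Item unavailable from the primary merchant: gather alternates.
    alternates = [{"merchant": m["name"],
                   "stock": m["inventory"].get(item_id, 0)}
                  for m in other_merchants
                  if m["inventory"].get(item_id, 0) > 0]
    return {"command": "display_alternate_merchants",
            "alternates": alternates}

cmd = handle_selection(
    "sku-1",
    {"name": "Merchant A", "inventory": {"sku-1": 0}},
    [{"name": "Merchant B", "inventory": {"sku-1": 3}}],
)
```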
Fig. 13 illustrates an example environment 1300. The environment 1300 includes one or more server computing devices 1302 that can communicate over one or more networks 1304 with a plurality of user devices 1306 (which, in some examples, can be merchant devices 1308(A)-1308(N), respectively) and/or one or more server computing devices 1310 associated with one or more third-party service providers. The one or more server computing devices 1302 can be associated with a service provider 1312, which can provide one or more services for the benefit of users 1314, as described below. Actions attributed to the service provider 1312 may be performed by the one or more server computing devices 1302.
In at least one example, service provider 1312 may correspond to the payment processing service provider described above. In at least one example, one or more server computing devices 1302 may correspond to one or more servers 102, and one or more networks 1304 may correspond to one or more networks 108 described above with reference to fig. 1. In at least one example, the multimedia content service provider described above with reference to fig. 1 can be associated with one or more server computing devices 1310 associated with one or more third party service providers.
The environment 1300 can facilitate generation and use of interactive elements associated with multimedia content. As described above, a content provider, such as a merchant, may publish or otherwise provide multimedia content. Such content may describe and/or discuss (e.g., reference) one or more items (e.g., goods and/or services). In some examples, the content may be associated with an intent to sell an item described in the content (e.g., text associated with the image indicates that the user is looking for an item described in the sales content). In other examples, the content may not be associated with a sales intent (e.g., no explicit or implicit indication indicates that the user wishes to sell anything described in the content). The service provider 1312 may identify the referenced items in the multimedia content and generate interactive elements to superimpose on the content at the time of output. The interactive element may be selectable and, when selected, may cause a purchase user interface to be presented.
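The step of identifying referenced items and generating selectable overlay elements can be sketched as below. The matching logic, function names, and element fields here are illustrative assumptions only; the patent does not specify an implementation.

```python
# Hypothetical sketch of generating interactive elements for items referenced
# in multimedia content. Matching by name substring is an illustrative
# simplification of the item-identification step.

def generate_interactive_elements(content_text, catalog):
    """Scan content for catalog item names and emit overlay elements that,
    when selected, would cause a purchase user interface to be presented."""
    elements = []
    lowered = content_text.lower()
    for item in catalog:
        if item["name"].lower() in lowered:
            elements.append({
                "item_id": item["id"],
                "label": item["name"],
                "action": "open_purchase_ui",  # selection presents the purchase UI
            })
    return elements
```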
In at least one example, the techniques performed by the environment 1300 can relieve users interested in selling through a platform that outputs multimedia content from performing any actions beyond those they would normally perform when publishing content to the platform. That is, users interested in selling through such platforms may simply publish content to one or more platforms, and the techniques described herein aim to create sales opportunities and facilitate transactions based on such content.
As described above, the components of environment 1300 may create sales opportunities even where no sales opportunity originally existed (e.g., when the content was published). That is, even if content published by a user does not include interactive elements that enable purchase, the service provider 1312 may create the interactive elements on the fly and cause them to be displayed with the multimedia content, allowing a customer to purchase the items referenced in the content.
As described above, users of platforms (e.g., websites, applications, and other network-based communication tools provided by service providers) utilize tools to conduct online commerce ("e-commerce"). However, as described above, current technology has limitations. In some examples, users interested in purchasing items published through such platforms need to follow up with merchants through another communication tool (e.g., email, text message, private message, etc.) to coordinate a purchase. Such systems introduce unnecessary latency due to the response times of the users involved. Furthermore, current infrastructure does not allow users to be filtered automatically, and the merchant must assume responsibility for deciding whether to initiate a conversation with interested users, conduct financial transactions with them, and so on. In other examples, the user is directed to a web page (typically different from the web page or platform on which the interaction originated), where the user then needs to add items to a virtual shopping cart and provide payment data to complete the online transaction. Thus, the platforms must establish communication interfaces between different platforms, such as between a content platform (allowing interaction between two users) and a payment platform (facilitating payment transactions). These communication interfaces must satisfy security protocols to allow secure communications, such as the exchange of financial data. The prior art also introduces friction when a user intends to purchase items through the content-providing platform. That is, the user, whether buyer or merchant, needs to perform multiple actions to facilitate the transaction, which may include multiple communication exchanges, multiple clicks across multiple web pages, interaction or registration with multiple platforms, and so forth. Thus, current technology is inefficient and user-unfriendly.
The environment 1300 described herein enables friction-free (or nearly friction-free) transactions through interactions with multimedia content. Accordingly, the technology described herein provides an improvement over the prior art.
As described above, the environment 1300 may include a plurality of user devices 1306. Each of the plurality of user devices 1306 may be any type of computing device, such as a tablet computing device, a smartphone or mobile communication device, a laptop, a netbook or other portable or semi-portable computer, a desktop computing device, a terminal computing device or other semi-stationary or stationary computing device, a dedicated device, a wearable computing device or other body-mounted computing device, an augmented reality device, a virtual reality device, an Internet of Things (IoT) device, and so forth. In some examples, individual ones of the user devices 1306 may be operated by the users 1314. The users 1314 may be buyers, customers, sellers, merchants, borrowers, employees, employers, payers, payees, couriers, or the like. The users 1314 may interact with the user devices 1306 via user interfaces presented by the user devices 1306. In at least one example, a user interface may be presented via a web browser or the like. In other examples, a user interface may be presented via an application, such as a mobile application or desktop application, which may be provided by the service provider 1312 or which may be a dedicated application. In some examples, individual user devices 1306 may have an instance or versioned instance of an application (e.g., downloaded from an application store) that can present one or more of the user interfaces described herein. In at least one example, the users 1314 may interact with a user interface via touch input, voice input, or any other type of input.
In at least one example, the merchant device 106 and the user device 102 described above with reference to fig. 1 can be examples of the user devices 1306 described herein. Similarly, the merchants and buyers described there may be among the users 1314 as used herein.
In at least one example, the users 1314 may include merchants 1316(A)-1316(N) (collectively, the merchants 1316). In an example, the merchants 1316 may operate respective merchant devices 1308, which may be user devices 1306 configured for use by the merchants 1316. For purposes of this discussion, a "merchant" may be any entity that offers items (e.g., goods or services) for purchase or other acquisition (e.g., rental, borrowing, bartering, etc.). The merchants 1316 may offer items for purchase or other acquisition via a physical store, a mobile store (e.g., a pop-up shop, a food truck, etc.), an online store, a combination of the foregoing, and so forth. In some examples, at least some of the merchants 1316 may be associated with the same entity but may have different merchant locations and/or may have franchise/franchisee relationships. In additional or alternative examples, the merchants 1316 may be different merchants. That is, in at least one example, merchant 1316(A) is a different merchant than merchant 1316(B) and/or merchant 1316(C).
For purposes of this discussion, "different merchants" may refer to two or more unrelated merchants. Thus, "different merchants" may refer to two or more merchants that belong to different legal entities (e.g., natural persons and/or legal persons) and that do not share accounting, employees, branding, and the like. As used herein, "different merchants" have different names, different employer identification numbers (EINs), different lines of business (in some examples), different inventories (or at least portions thereof), and/or the like. Thus, use of the term "different merchants" does not refer to merchants with different merchant locations or franchise/franchisee relationships. Such merchants, having different merchant locations or franchise/franchisee relationships, may be referred to as merchants having different merchant locations and/or different commerce channels.
Each merchant device 1308 may have an instance of a POS application 1318 stored thereon. The POS application 1318 may configure the merchant device 1308 as a POS terminal, which enables merchant 1316(A) to interact with one or more buyers 1320. As described above, the users 1314 may include buyers, such as the buyers 1320 shown interacting with merchant 1316(A). For purposes of this discussion, a "buyer" may be any entity that obtains items from a merchant. Although only two buyers 1320 are illustrated in fig. 13, any number of buyers 1320 may interact with the merchants 1316. Further, while fig. 13 illustrates the buyers 1320 interacting with merchant 1316(A), the buyers 1320 may interact with any of the merchants 1316.
In at least one example, interactions between the buyers 1320 and the merchants 1316 that involve an exchange of funds (from the buyers 1320) for items (from the merchants 1316) may be referred to as "POS transactions" and/or "transactions." In at least one example, the POS application 1318 can determine transaction data associated with a POS transaction. The transaction data may include payment information (which may be obtained from a reader device 1322 associated with merchant device 1308(A)), user authentication data, purchase amount information, point-of-purchase information (e.g., the item(s) purchased, the date of purchase, the time of purchase, etc.), and the like. The POS application 1318 may send the transaction data to the one or more server computing devices 1302. Further, the POS application 1318 may present a user interface (UI) to enable merchant 1316(A) to interact with the POS application 1318 and/or to interact with the service provider 1312 via the POS application 1318.
In at least one example, merchant device 1308(A) can be a special-purpose computing device configured as a POS terminal (through execution of the POS application 1318). In at least one example, the POS terminal may be connected to a reader device 1322 that is capable of accepting a variety of payment instruments, such as credit cards, debit cards, gift cards, short-range-communication-based payment instruments, and the like, as described below. In at least one example, the reader device 1322 may plug into a port of merchant device 1308(A), such as a microphone port, a headphone port, an audio jack, a data port, or another suitable port. In additional or alternative examples, the reader device 1322 may be coupled to merchant device 1308(A) via another wired or wireless connection, for example via Bluetooth Low Energy (BLE) or the like. Additional details are described below with reference to fig. 14. In some examples, the reader device 1322 may read information from alternative payment instruments including, but not limited to, wristbands and the like.
In some examples, the reader device 1322 may physically interact with payment instruments such as magnetic stripe payment cards, EMV payment cards, and/or short-range communication (e.g., near field communication (NFC), radio frequency identification (RFID), Bluetooth Low Energy (BLE), etc.) payment instruments (e.g., cards or devices configured for tap-to-pay). The POS terminal can provide a rich user interface, communicate with the reader device 1322, and communicate with the one or more server computing devices 1302, which can provide, among other services, a payment processing service. The one or more server computing devices 1302 associated with the service provider 1312 may communicate with the one or more server computing devices 1310, as described below. In this manner, the POS terminal and the reader device 1322 may collectively process one or more transactions between the merchants 1316 and the buyers 1320. In some examples, the POS terminal and the reader device may be configured as a one-to-one pairing. In other examples, the POS terminal and the reader device may be configured in a many-to-one pairing (e.g., one POS terminal coupled to multiple reader devices, or multiple POS terminals coupled to one reader device). In some examples, multiple POS terminals may be connected to some other device, such as an "auxiliary" terminal, e.g., a back-of-house system, printer, line-buster device, POS reader, etc., to allow information to be shared between one or more primary POS terminals and one or more auxiliary terminals, e.g., via short-range communication technology. This arrangement may also operate in an offline-online scenario, allowing one device (e.g., an auxiliary terminal) to continue receiving user input while offline and to synchronize data with another device (e.g., a primary terminal) when the primary or auxiliary terminal switches back to online mode.
In other examples, such data synchronization may occur periodically or at randomly selected time intervals.
Although the POS terminal and the reader device 1322 of the POS system 1324 are shown as separate devices, in additional or alternative examples the POS terminal and the reader device 1322 may be part of a single device. In some examples, the reader device 1322 may have a display integrated therein for presenting information to the buyers 1320. In additional or alternative examples, the POS terminal may have a display integrated therein for presenting information to the buyers 1320. POS systems, such as the POS system 1324, may be mobile, such that POS terminals and reader devices may process transactions at different locations around the world. POS systems may be used to process card-present transactions and card-not-present (CNP) transactions, as described below.
A card-present transaction is a transaction in which both the buyer 1320 and his or her payment instrument are physically present at the time of the transaction. Card-present transactions may be processed by swipes, dips, taps, or any other interaction between a physically present payment instrument (e.g., a card) and the reader device 1322 whereby the reader device 1322 can obtain payment data from the payment instrument. A swipe is a card-present transaction in which the buyer 1320 slides a card or other payment instrument having a magnetic stripe through the reader device 1322, which captures the payment data contained in the magnetic stripe. A dip is a card-present transaction in which the buyer 1320 inserts a payment instrument having an embedded microchip (i.e., a chip) into the reader device 1322. The inserted payment instrument remains in the reader until the reader device 1322 prompts the buyer 1320 to remove the card or other payment instrument. While the payment instrument is in the reader device 1322, the microchip can create a one-time code, which is transmitted from the POS system 1324 to the one or more server computing devices 1310 (which can be associated with third-party service providers that provide payment services, including but not limited to an acquiring bank, an issuing bank, and/or a card payment network) to be matched against an identical one-time code. A tap is a card-present transaction in which the buyer 1320 taps or hovers his or her payment instrument (e.g., a card, or an electronic device such as a smartphone running a payment application) over the reader device 1322 via short-range communication (e.g., NFC, RFID, BLE, etc.) to complete the transaction. The short-range communication payment instrument exchanges information with the reader device 1322. A tap may also be referred to as a contactless payment.
A CNP transaction refers to a transaction in which a card or other payment instrument is not physically present at the POS, and payment data therefore must be entered manually (e.g., by the merchant, the buyer, etc.) or retrieved from a card-on-file data store to complete the transaction.
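The card-present entry modes described above (swipe, insert/dip, tap) versus CNP entry can be sketched as a small classifier. The mode names and descriptions below are illustrative labels, not terminology defined by the patent.

```python
# Illustrative classifier for payment entry modes. The mode names and
# descriptions are hypothetical labels summarizing the text above.

def classify_entry(entry_mode):
    """Return whether an entry mode is card-present or card-not-present."""
    card_present = {
        "swipe": "magnetic stripe read by the reader device",
        "dip": "EMV chip inserted; microchip creates a one-time code",
        "tap": "short-range (NFC/RFID/BLE) contactless exchange",
    }
    if entry_mode in card_present:
        return ("card-present", card_present[entry_mode])
    # Manually keyed or card-on-file entries are card-not-present (CNP)
    return ("card-not-present",
            "payment data entered manually or retrieved from card on file")
```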
The POS system 1324, the one or more server computing devices 1302, and/or the one or more server computing devices 1310 may exchange payment information and transaction data to determine whether a transaction is authorized. For example, the POS system 1324 may provide encrypted payment data, user authentication data, purchase amount information, point-of-purchase information, and the like (collectively, transaction data) to the one or more server computing devices 1302 via the one or more networks 1304. The one or more server computing devices 1302 may send the transaction data to the one or more server computing devices 1310. As described above, in at least one example, the one or more server computing devices 1310 may be associated with third-party service providers that provide payment services, including but not limited to acquiring banks, issuing banks, and/or card payment networks.
For purposes of this discussion, a "payment service provider" may be an acquiring bank ("acquirer"), an issuing bank ("issuer"), a card payment network, or the like. The acquirer is a bank or financial institution that processes payments (e.g., credit or debit card payments) and may assume risk on behalf of one or more merchants. The acquirer may also be part of a card payment network (e.g., a card association). The acquirer (e.g., the one or more server computing devices 1310 associated therewith) may send a funds transfer request to a card payment network to determine whether the transaction is authorized or declined. In at least one example, the service provider 1312 may act as an acquirer and connect directly to the card payment network.
The card payment network (e.g., the one or more server computing devices 1310 associated therewith) may forward the funds transfer request to an issuing bank ("issuer"). The issuer is a bank or financial institution that offers financial accounts (e.g., credit or debit card accounts) to users. The issuer may issue payment cards to users and may pay acquirers for purchases made by cardholders to whom the issuer has issued a payment card. The issuer (e.g., the one or more server computing devices 1310 associated therewith) may determine whether the buyer has the capacity to cover the cost associated with the payment transaction. In at least one example, the service provider 1312 may act as an issuer and/or may partner with an issuer. The transaction is approved or declined by the issuer and/or the card payment network (e.g., the one or more server computing devices 1310 associated therewith), and a payment authorization message is communicated from the issuer to the POS device over the reverse of the path described above, or over an alternate path.
As described above, one or more server computing devices 1310, which may be associated with one or more payment service providers, may determine whether a transaction is authorized based on transaction data and information related to parties to the transaction (e.g., buyer 1320 and/or merchant 1316 (a)). The one or more server computing devices 1310 may send authorization notifications to the one or more server computing devices 1302 over the one or more networks 1304, and the one or more server computing devices 1302 may send authorization notifications to the POS system 1324 over the one or more networks 1304 to indicate whether the transaction is authorized. The one or more server computing devices 1302 send additional messages, such as transaction identifiers, to the POS system 1324. In one example, one or more server computing devices 1302 may include merchant applications and/or other functional components for communicating with POS system 1324 and/or one or more server computing devices 1310 to authorize or reject transactions.
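The issuer-side authorization decision described above can be sketched as below. This is a deliberately minimal model; the function name, account structure, and decline reasons are illustrative assumptions, and a real issuer decision involves many more signals.

```python
# Minimal sketch of the authorization decision at the issuer: approve when
# the buyer's account can cover the purchase amount. All names are hypothetical.

def authorize(transaction, issuer_accounts):
    """Return an authorization notification for the POS system based on
    whether the buyer's account can cover the purchase amount."""
    account = issuer_accounts.get(transaction["account_id"])
    if account is None:
        return {"approved": False, "reason": "unknown account"}
    if account["available"] >= transaction["amount"]:
        # Hold the funds and approve; the notification travels back to the POS
        account["available"] -= transaction["amount"]
        return {"approved": True, "transaction_id": transaction["id"]}
    return {"approved": False, "reason": "insufficient funds"}
```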
Based on the authorization notification that the POS system 1324 receives from the one or more server computing devices 1302, merchant 1316(A) may indicate to the buyer 1320 whether the transaction has been approved. In some examples, approval may be indicated at the POS system 1324, for example on a display of the POS system 1324. In other examples, such as when a smartphone or watch operates as the short-range communication payment instrument, information regarding the approved transaction may be provided to the payment instrument for presentation via a display of the smartphone or watch. In some examples, additional or alternative information may be presented with the approved transaction notification, including but not limited to receipts, special offers, coupons, or loyalty program information.
As described above, the service provider 1312 may provide, among other services, payment processing services, inventory management services, business banking services, financing services, lending services, subscription management services, web development services, payroll services, employee management services, reservation services, loyalty tracking services, restaurant management services, order management services, fulfillment services, peer-to-peer payment services, online services, identity verification (IDV) services, and the like. In some examples, the users 1314 may access all of the services of the service provider 1312. In other examples, the users 1314 may have tiered access to the services, which may be based on risk tolerance, IDV outputs, subscriptions, and the like. In at least one example, the merchants 1316 may access such services via the POS application 1318. In additional or alternative examples, each service may be associated with its own access point (e.g., application, web browser, etc.).
As described above, service provider 1312 may provide payment processing services on behalf of merchant 1316 for processing payments. For example, as described above, service provider 1312 may provide payment processing software, payment processing hardware, and/or payment processing services to merchant 1316 to enable merchant 1316 to receive payment from purchaser 1320 when conducting a POS transaction with purchaser 1320. For example, service provider 1312 may enable merchant 1316 to receive cash payments, payment card payments, and/or electronic payments from buyer 1320 to conduct POS transactions, and service provider 1312 may process the transactions on behalf of merchant 1316.
Because the service provider 1312 processes transactions on behalf of the merchants 1316, the service provider 1312 may maintain an account or balance for the merchants 1316 in one or more ledgers. For example, the service provider 1312 may analyze the transaction data received for a transaction to determine the amount of funds owed to merchant 1316(A) for the transaction. In at least one example, such an amount may be the total purchase price minus the fee charged by the service provider 1312 for providing the payment processing service. Based on determining the amount of funds owed to merchant 1316(A), the service provider 1312 may deposit the funds into an account of merchant 1316(A). The account may have a stored balance, which may be managed by the service provider 1312. The account may differ from a traditional bank account at least in that the stored balance is managed in a ledger of the service provider 1312 and the associated funds are accessible via various withdrawal channels including, but not limited to, scheduled deposits, same-day deposits, instant deposits, and linked payment instruments.
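The ledger arithmetic described above (amount owed equals the purchase total minus the processing fee) can be sketched as follows. The fee schedule used here (a percentage plus a fixed amount) is purely an illustrative assumption; the patent does not specify fee terms.

```python
# Sketch of the stored-balance ledger: the amount owed to the merchant is
# the purchase total minus the service provider's processing fee.
# The 2.6% + $0.10 fee is an assumed, illustrative figure.

def record_settlement(ledger, merchant_id, purchase_total,
                      fee_rate=0.026, fee_fixed=0.10):
    """Compute the amount owed for a transaction and credit the merchant's
    stored balance in the ledger."""
    fee = round(purchase_total * fee_rate + fee_fixed, 2)
    owed = round(purchase_total - fee, 2)
    ledger[merchant_id] = round(ledger.get(merchant_id, 0.0) + owed, 2)
    return owed
```

A production ledger would use exact decimal arithmetic rather than floats; floats are used here only to keep the sketch short.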
A deposit occurs when the service provider 1312 transfers funds associated with the stored balance of merchant 1316(A) to a bank account held by merchant 1316(A) at a bank or other financial institution (e.g., associated with the one or more server computing devices 1310). A scheduled deposit may occur at a pre-scheduled time after funds for the POS transaction are captured, which may be one business day after the POS transaction occurs, or earlier or later. In some examples, merchant 1316(A) may access the funds before the scheduled deposit. For example, merchant 1316(A) may receive a same-day deposit (e.g., where the service provider 1312 deposits funds from the stored balance into the merchant's linked bank account on the same day as the POS transaction, in some examples before the funds for the POS transaction are captured) or an instant deposit (e.g., where the service provider 1312 deposits funds from the stored balance into the merchant's linked bank account on demand, such as in response to a request). Further, in at least one example, merchant 1316(A) may have a payment instrument associated with the stored balance that enables the merchant to access the funds without first transferring them from the account managed by the service provider 1312 to the bank account of merchant 1316(A).
In at least one example, the service provider 1312 may provide inventory management services. That is, the service provider 1312 may provide inventory tracking and reporting. The inventory management service may enable merchant 1316(A) to access and manage a database that stores data associated with the quantity of each item (i.e., inventory) available to merchant 1316(A). Further, in at least one example, the service provider 1312 may provide a catalog management service to enable merchant 1316(A) to maintain a catalog, which may be a database that stores data associated with items available to merchant 1316(A). In at least one example, the catalog can include a plurality of data items, and an individual data item of the plurality of data items can represent an item available to merchant 1316(A). The service provider 1312 may provide recommendations regarding pricing of items, placement of items in inventory, and fulfillment of inventory across multiple parties.
In at least one example, the service provider 1312 may provide business banking services that allow merchant 1316(A) to track deposits into an account of merchant 1316(A) (from payment processing and/or other sources of funds), payroll payments from that account (e.g., to employees of merchant 1316(A)), payments to other merchants (e.g., business-to-business) directly from the account or from an associated debit card, withdrawals via scheduled and/or instant deposits, and the like. In addition, the business banking services may enable merchant 1316(A) to obtain customized payment instruments (e.g., credit cards), check how much money it is earning (e.g., via available earned balances), understand its cash flow (e.g., via deposit reports, which may include cost details, expense reports, etc.), access and use earned money (e.g., via scheduled deposits, instant deposits, associated payment instruments, etc.), feel in control of its money (e.g., by managing deposit schedules, deposit speeds, associated instruments, etc.), and so forth. Further, the business banking services may enable the merchants 1316 to visualize their cash flow to track their financial health, set aside funds (e.g., deposits) for upcoming obligations, organize funds around goals, and so forth.
In at least one example, the service provider 1312 may provide financing services and products, such as business loans, consumer loans, fixed-term loans, flexible-term loans, and the like. In at least one example, the service provider 1312 may use one or more risk signals to determine whether to extend a financing offer and/or the terms associated with such an offer.
In at least one example, the service provider 1312 may provide financing services for extending a loan to a borrower, which in some cases is used to fund the borrower's short-term operational needs (e.g., a capital loan). For example, a potential borrower that is a merchant may obtain a capital loan via a capital loan product in order to fund various operational costs (e.g., rent, payroll, inventory, etc.). In at least one example, the service provider 1312 may offer different types of capital loan products. For instance, in at least one example, the service provider 1312 may offer a daily-repayment loan product, wherein the capital loan is repaid daily, for example from a portion of the transactions processed by the payment processing service on behalf of the merchant. Additionally and/or alternatively, the service provider 1312 may offer a monthly-repayment loan product, wherein the capital loan is repaid monthly, for example via a debit from a bank account linked to the payment processing service. The merchant's credit risk may be assessed using a risk model that takes into account various factors, such as payment volume, credit risk of similarly situated merchants, past transaction history, seasonality, credit history, and so forth.
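The daily-repayment mechanism described above (a portion of each day's processed volume is applied to the loan balance) can be sketched as follows. The 10% holdback rate is an assumed, illustrative figure, not a term from the patent.

```python
# Sketch of a daily-repayment capital loan: each day a fixed portion of the
# card volume processed for the merchant is withheld and applied to the loan.
# The 10% holdback rate is an illustrative assumption.

def apply_daily_repayment(balance, daily_volume, holdback_rate=0.10):
    """Apply one day's holdback to the loan balance; never overpay.

    Returns (new_balance, payment_applied)."""
    payment = min(round(daily_volume * holdback_rate, 2), balance)
    return round(balance - payment, 2), payment
```

Note the `min(...)` guard: on the final day the holdback is capped at the remaining balance so the loan closes exactly at zero.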
Additionally or alternatively, the service provider 1312 may provide financing services for extending a loan to a borrower, which in some cases is used to fund the borrower's purchase as a consumer (e.g., a consumer loan). In at least one example, a borrower may submit a loan request to enable the borrower to purchase an item from a merchant, which may be one of the merchants 1316. The service provider 1312 may create the loan based at least in part on determining that the borrower purchases, or intends to purchase, the item from the merchant. The loan may carry a balance based on the actual purchase price of the item, and the borrower may repay the loan over a period of time. In some examples, the borrower may repay the loan in installments, which may be paid from funds managed and/or maintained by the service provider 1312 (e.g., funds owed to the merchant from payments processed on behalf of the merchant, funds transferred to the merchant, etc.). The service provider 1312 may offer specific financial products, such as payment instruments, that are specifically tied to the loan products. For example, in one implementation, the service provider 1312 associates funds with a debit card of the merchant or the buyer, where use of the debit card is governed by the terms of the loan. In some examples, the merchant may use the debit card only for particular purchases. In other examples, "installments" associated with the loan product are drawn directly via the payment instrument. Thus, the payment instrument is customized for the loan and/or the parties associated with the loan.
The service provider 1312 may provide web development services that enable users 1314 who are unfamiliar with HTML, XML, JavaScript, CSS, or other web design tools to create and maintain professional and aesthetically pleasing websites. Some of these web page editing applications allow a user to build a web page and/or modify a web page (e.g., alter, add, or delete content associated with the web page). In addition to websites, the web development services may create and maintain other online omnichannel presences, such as social media posts. In some examples, the generated web page(s) and/or other content items may be used to offer one or more items for sale via an online/e-commerce platform. That is, the generated web page(s) and/or other content items may be associated with the online store or offerings of one or more of the merchants 1316. In at least one example, the service provider 1312 may recommend and/or create content items to supplement the omnichannel presence of the merchants 1316. That is, if one of the merchants 1316 has a web page, the service provider 1312, via the web development or another service, may recommend and/or create additional content items for presentation via one or more other channels, such as social media, email, and the like.
In addition, service provider 1312 may provide payroll services to enable employers to pay employees for work performed on the employers' behalf. In at least one example, the service provider 1312 may receive data including employees' hours worked (e.g., via imported timecards and/or POS interactions), sales made by employees, tips received by employees, and the like. Based on such data, service provider 1312 may process payroll for one or more employees on behalf of the employer via the payroll services. For example, service provider 1312 may facilitate transferring the total amount of payroll to be paid to employees from the employer's bank to the bank of service provider 1312 for payment. In at least one example, once the bank of service provider 1312 has received the funds, service provider 1312 may pay the employees, such as by check or direct deposit, typically a day, a week, or more after the employees have actually completed the work. In additional or alternative examples, service provider 1312 may enable one or more employees to receive payment via same-day or instant deposit based at least in part on risk and/or reliability analyses performed by service provider 1312.
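As a rough illustration of the payroll computation described above, the following sketch aggregates gross pay per employee from imported timecard data. The field names, cent-based amounts, and data shapes are assumptions for illustration only, not part of this disclosure:

```python
from dataclasses import dataclass

@dataclass
class TimecardEntry:
    employee_id: str
    hours_worked: float
    hourly_rate_cents: int
    tips_cents: int = 0

def compute_payroll(entries):
    """Aggregate gross pay (in cents) per employee from timecard data."""
    totals = {}
    for e in entries:
        gross = round(e.hours_worked * e.hourly_rate_cents) + e.tips_cents
        totals[e.employee_id] = totals.get(e.employee_id, 0) + gross
    return totals

entries = [
    TimecardEntry("emp-1", 8.0, 1500, tips_cents=1200),  # 8 h at $15/h plus $12 in tips
    TimecardEntry("emp-1", 4.5, 1500),
    TimecardEntry("emp-2", 6.0, 2000),
]
payroll = compute_payroll(entries)  # {"emp-1": 19950, "emp-2": 12000}
```

In practice the service provider would also compute withholdings and transfer timing; the sketch covers only the gross aggregation step.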
Further, in at least one example, the service provider 1312 may provide an employee management service for managing employee schedules. Further, service provider 1312 may provide an appointment service to enable users 1314 to set schedules for accepting appointments and/or to enable users 1314 to book appointments.
In some examples, service provider 1312 may provide restaurant management services to enable users 1314 to make and/or manage reservations, monitor front-of-house and/or back-of-house operations, and the like. In such examples, one or more merchant devices 1308 and/or one or more server computing devices 1302 may be configured to communicate with one or more other computing devices, which may be located in the front of house (e.g., one or more POS devices) and/or in the back of house (e.g., one or more Kitchen Display Systems (KDS)). In at least one example, service provider 1312 may provide order management services and/or fulfillment services to enable restaurants to manage invoicing, ticketing, and the like, and/or to manage the fulfillment of orders.
In at least one example, the service provider 1312 may provide fulfillment services using couriers, where a courier may travel between locations to provide delivery services, photography services, and the like. A courier may be a user 1314 who travels between multiple locations to perform services (e.g., delivering items, capturing images, etc.) for a requesting user 1314. In some examples, the courier may receive compensation from the service provider 1312. The courier may use one or more vehicles, such as automobiles, bicycles, scooters, motorcycles, buses, airplanes, helicopters, boats, skateboards, etc., although in other cases the courier may walk or otherwise move about without a vehicle. Some examples discussed herein enable people to participate as couriers in a crowdsourced service economy. Here, essentially any person with a mobile device can immediately become a courier, or cease to be a courier, in a courier network that provides the services described herein. In at least one example, the courier may be an unmanned aerial vehicle (e.g., a drone), an autonomous vehicle, or any other type of vehicle capable of receiving instructions to travel between locations. In some examples, service provider 1312 may receive a request for courier services, automatically assign the request to an active courier, and communicate dispatch instructions to the courier through a user interface (e.g., an application, web browser, or other access point) presented via the respective device 1306.
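One simple formulation of automatically assigning a request to an active courier, as described above, is to select the nearest active courier by great-circle distance. The sketch below is illustrative only; the data shapes and the nearest-courier policy are assumptions rather than a description of the disclosed system:

```python
import math

def _distance_km(a, b):
    """Haversine great-circle distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def assign_courier(pickup, couriers):
    """Assign a request to the nearest active courier; None if no courier is active."""
    active = [c for c in couriers if c["active"]]
    if not active:
        return None
    return min(active, key=lambda c: _distance_km(pickup, c["location"]))

couriers = [
    {"id": "c1", "active": True, "location": (37.77, -122.42)},   # ~1.4 km away
    {"id": "c2", "active": True, "location": (37.80, -122.27)},   # ~12 km away
    {"id": "c3", "active": False, "location": (37.78, -122.41)},  # closest, but inactive
]
best = assign_courier((37.78, -122.41), couriers)  # selects "c1"
```

A production dispatcher would also weigh courier load, vehicle type, and estimated travel time; distance alone is the simplest plausible policy.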
In some examples, the service provider 1312 may provide omnichannel fulfillment services. For example, if a buyer places an order with a merchant and the merchant is unable to fulfill the order because one or more items are out of stock or otherwise unavailable, service provider 1312 may utilize other merchants and/or sales channels that are part of the platform of service provider 1312 to fulfill the buyer's order. That is, another merchant may provide one or more items to fulfill the buyer's order. Further, in some examples, another sales channel (e.g., online, brick-and-mortar, etc.) may be used to fulfill the buyer's order.
In some examples, service provider 1312 may implement conversational commerce through conversational commerce services that may use one or more machine learning mechanisms to analyze messages exchanged between two or more users 1314, voice inputs to virtual assistants, etc., to determine intent of one or more users 1314. In some examples, service provider 1312 may utilize the determined intent to automate buyer services, provide promotions, provide recommendations, or otherwise interact with the buyer in real-time. In at least one example, service provider 1312 may integrate products and services and payment mechanisms into a communication platform (e.g., messaging, etc.) to enable buyers to purchase or otherwise conduct transactions without having to call, send e-mail, or access a merchant's web page or other channel. That is, conversational commerce alleviates the need for buyers to switch back and forth between conversations and web pages to collect information and make purchases.
In at least one example, service provider 1312 may provide a point-to-point payment service that enables point-to-point payments between two or more users 1314. In at least one example, the service provider 1312 may communicate with an instance of a payment application (or other access point) installed on a device 1306 configured to be operated by a user 1314. In an example, an instance of the payment application executing on a first device operated by a payer may send a request to service provider 1312 to transfer an amount of funds (e.g., fiat or non-fiat currency, such as cryptocurrency, securities, and related assets) from an account of the payer to an account of a payee (e.g., a point-to-point payment). The service provider 1312 may facilitate the transfer and may send a notification to an instance of the payment application executing on a second device operated by the payee that the transfer is in progress (or has completed). In some examples, service provider 1312 may send additional or alternative information to the instances of the payment application (e.g., a low-balance warning to the payer, a current balance to the payer or payee, etc.). In some implementations, the payer and/or payee may be automatically identified, e.g., based on context, proximity, prior transaction history, etc. In other examples, the payee may send a request for funds to the payer prior to the payer's transfer of funds. The funds transferred may be associated with any digital currency type, including but not limited to cash, cryptocurrency, and the like. In some embodiments, service provider 1312 funds the request to the payee on behalf of the payer, to accelerate the transfer process and compensate for any lag that may be attributed to the payer's financial network.
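A minimal sketch of the point-to-point transfer flow described above, including the transfer and low-balance notifications, might look as follows. The account structure, the $5.00 threshold, and the event names are assumptions for illustration, not part of this disclosure:

```python
def point_to_point_transfer(accounts, payer, payee, amount_cents):
    """Move funds between two accounts and build notifications for both payment apps."""
    if accounts[payer] < amount_cents:
        raise ValueError("insufficient funds")
    accounts[payer] -= amount_cents
    accounts[payee] += amount_cents
    notices = [
        {"to": payer, "event": "transfer_sent", "balance": accounts[payer]},
        {"to": payee, "event": "transfer_received", "balance": accounts[payee]},
    ]
    if accounts[payer] < 500:  # hypothetical low-balance threshold ($5.00)
        notices.append({"to": payer, "event": "low_balance", "balance": accounts[payer]})
    return notices

accounts = {"payer-acct": 2000, "payee-acct": 100}
notices = point_to_point_transfer(accounts, "payer-acct", "payee-acct", 1800)
```

In the disclosed system the notifications would be pushed to payment application instances on the payer's and payee's devices; here they are simply returned for inspection.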
In some implementations, service provider 1312 may trigger a point-to-point payment process by identifying a "payment agent" that has a particular syntax. For example, the syntax includes a currency indicator prefixed to one or more alphanumeric characters (e.g., $flash). The currency indicator operates as a tagging mechanism that instructs the computer system to treat the input as a sender's request to transfer cash, wherein detection of the syntax (including the tag formed by the currency indicator) may trigger a cash transfer. The currency indicator may correspond to a variety of currencies including, but not limited to: dollars ($), euros (€), pounds (£), rupees (₹), yuan (¥), and the like. Although the dollar currency indicator ($) is used herein, it should be understood that any currency symbol is equally applicable. The point-to-point process may be initiated by a particular application executing on user device 1306.
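The payment agent syntax described above can be detected with a simple pattern match: a currency indicator immediately followed by alphanumeric characters. The sketch below is illustrative; the exact indicator set and handle rules are assumptions, and "$flash" is the example tag from the text:

```python
import re

# A currency indicator immediately followed by alphanumeric characters, e.g. "$flash".
# The indicator set and the handle rules here are assumptions for illustration.
PAYMENT_AGENT_RE = re.compile(r"[$€£₹¥]([A-Za-z][A-Za-z0-9]*)")

def detect_payment_agent(message):
    """Return the tagged handle if the message contains payment-agent syntax, else None."""
    match = PAYMENT_AGENT_RE.search(message)
    return match.group(1) if match else None

handle = detect_payment_agent("send $flash 10 for lunch")  # "flash"
miss = detect_payment_agent("no agent in this message")    # None
```

Detection of the handle would then trigger the cash-transfer flow; the pattern match itself is the only step sketched here.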
In some embodiments, the point-to-point process may be implemented in a forum context. The term "forum" as used herein refers to a media channel of a content provider (e.g., social networking platform, microblog, blog, video sharing platform, music sharing platform, etc.) that enables users to interact and participate through comments, posts, messages on electronic bulletin boards, messages on social networking platforms, and/or any other type of message. Content providers may use a forum to enable users of the forum to interact with each other (e.g., by creating messages, posting comments, etc.). In some embodiments, a "forum" may also refer to an application or web page of an e-commerce or retail organization that provides products and/or services. Such websites may provide online "forms" to complete before or after adding products or services to the virtual shopping cart. The online form may include one or more fields to receive user interactions and participation. Examples include the user's name and other identification, the user's shipping address, and the like. Some of the fields may be configured to receive payment information, such as a payment agent, in lieu of other types of payment mechanisms, such as credit cards, debit cards, prepaid cards, gift cards, virtual wallets, and the like.
In some embodiments, the point-to-point process may be implemented in a communication application context, such as a messaging application context. The term "messaging application" as used herein refers to any messaging application that enables users (e.g., senders and recipients of messages) to communicate via communication messages over a wired or wireless communication network. A messaging application may be provided by service provider 1312. For example, service provider 1312 may provide a messaging service that provides a communication service to users through a messaging application (e.g., chat or messaging capability). The messaging applications may include, for example, a text messaging application for communication between phones (e.g., conventional mobile phones or smartphones), or a cross-platform instant messaging application for smartphones and phones that communicate using the internet. The messaging application may execute on a user device 1306 (e.g., a mobile device or a conventional personal computer (PC)) based on instructions sent to and received from one or more server computing devices 1302, which in such examples may be referred to as a "messaging server." In some cases, the messaging application may include a payment application with messaging capability, such that users of the payment application are able to communicate with one another. In this case, the payment application may execute on the user device 1306 based on instructions sent to and received from one or more server computing devices 1302 (e.g., of the payment service discussed in this specification or another payment service supporting payment transactions).
In at least some embodiments, the point-to-point process may be implemented in a landing page context. The term "landing page" as used herein refers to a virtual location identified by a personalized location address that is dedicated to collecting funds on behalf of a payee associated with the personalized location address. The personalized location address that identifies the landing page may include the payment agent discussed above. Service provider 1312 may create a landing page to enable the recipient to conveniently receive one or more payments from one or more senders. In some embodiments, the personalized location address identifying the landing page is a Uniform Resource Locator (URL) that incorporates the payment agent. In such embodiments, the landing page is a web page, e.g., www.cash.me/$flash.
In at least one example, a user 1314 may be a new user of the service provider 1312, such that the user 1314 has not yet registered with the service provider 1312 (e.g., subscribed to receive access to one or more services provided by the service provider). The service provider 1312 may provide onboarding services for registering potential users 1314 with the service provider 1312. In some examples, onboarding may involve presenting various questions, prompts, and the like to a potential user 1314 to obtain information that may be used to create a profile for the potential user 1314. In at least one example, service provider 1312 may provide limited or short-term access to its services prior to, or during, onboarding (e.g., a user of the point-to-point payment service may transfer and/or receive funds prior to being fully onboarded, a merchant may process payments prior to being fully onboarded, etc.). In at least one example, in response to the potential user 1314 providing all necessary information, the potential user 1314 may be onboarded to the service provider 1312. In such examples, any limited or short-term access to services of service provider 1312 may transition into more relaxed (e.g., less restrictive) or longer-term access to such services.
The service provider 1312 may be associated with an identity verification (IDV) service that may be used by the service provider 1312 for compliance purposes and/or may be offered as a service to, for example, third-party service providers (e.g., associated with one or more server computing devices 1310). That is, service provider 1312 may provide IDV services to verify the identity of users 1314 seeking to use, or already using, its services. Identity verification requires a buyer (or potential buyer) to provide information that the compliance department uses to prove that the information is associated with the identity of an actual individual or entity. In at least one example, service provider 1312 may perform services for determining whether identification information provided by a user 1314 accurately identifies the buyer (or potential buyer) (i.e., whether the buyer is who they claim to be).
The service provider 1312 can provide additional or alternative services, and the services described above are provided as a sample of the services. In at least one example, the service provider 1312 may exchange data with one or more server computing devices 1310 associated with multiple third party service providers. Such a third party service provider may provide information that enables service provider 1312 to provide services such as those described above. In additional or alternative examples, such third party service providers may access the services of service provider 1312. That is, in some examples, the third party service provider may be a subscriber to the service of service provider 1312, or it may access the service of service provider 1312.
The techniques described herein may be configured to operate in real-time/online and offline modes. "Online" mode refers to a mode in which a device is capable of communicating with the service provider 1312 (e.g., one or more server computing devices 1302) and/or one or more server computing devices 1310 via one or more networks 1304. In some examples, one or more merchant devices 1308 cannot connect with service provider 1312 (e.g., one or more server computing devices 1302) and/or one or more server computing devices 1310, e.g., due to network connectivity issues. In additional or alternative examples, one or more server computing devices 1302 are unable to communicate with one or more server computing devices 1310, e.g., due to network connectivity issues. In such examples, a device may operate in an "offline" mode, wherein at least some payment data is stored (e.g., on one or more merchant devices 1308 and/or on one or more server computing devices 1302) until the connection is restored, at which point the payment data may be transmitted to one or more server computing devices 1302 and/or one or more server computing devices 1310 for processing.
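The offline-mode behavior described above amounts to buffering payment data until connectivity is restored. A minimal sketch follows; the queue-and-flush structure is an assumption for illustration, not a description of any particular device's implementation:

```python
import queue

class OfflinePaymentBuffer:
    """Store payment data while offline; transmit it once the connection is restored."""
    def __init__(self):
        self._pending = queue.Queue()

    def record(self, payment, online, send):
        if online:
            send(payment)               # online mode: transmit immediately
        else:
            self._pending.put(payment)  # offline mode: hold for later

    def flush(self, send):
        """Transmit all buffered payments; returns how many were sent."""
        sent = 0
        while not self._pending.empty():
            send(self._pending.get())
            sent += 1
        return sent

transmitted = []
buffer = OfflinePaymentBuffer()
buffer.record({"amount_cents": 500}, online=False, send=transmitted.append)
buffer.record({"amount_cents": 700}, online=False, send=transmitted.append)
flushed = buffer.flush(transmitted.append)  # connection restored
```

A real device would persist the buffer to durable storage so queued payments survive a restart; an in-memory queue is used here only to keep the sketch short.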
In at least one example, service provider 1312 may be associated with a center, such as an order center, inventory center, fulfillment center, etc., which may enable integration with one or more additional service providers (e.g., with one or more additional server computing devices 1310). In some examples, such additional service providers may provide additional or alternative services, and service provider 1312 may provide interfaces or other computer-readable instructions to integrate the functionality of service provider 1312 into the one or more additional service providers.
The technology described herein is directed to services provided via a distributed system of user devices 1306 in communication with one or more server computing devices 1302 of a service provider 1312. That is, the techniques described herein are directed to particular embodiments, or implementations, that utilize a distributed system of user devices 1306 in communication with one or more server computing devices 1302 of a service provider 1312 to perform various services, as described above. The unconventional configuration of the distributed system described herein enables one or more server computing devices 1302, remote from an end user (e.g., user 1314), to intelligently provide services based on aggregated data associated with the end user (e.g., data associated with a plurality of different merchants and/or a plurality of different buyers), in some examples in near real-time. Thus, the techniques described herein are directed to a particular arrangement of elements that provides a technical improvement over conventional techniques for performing payment processing services and the like. For small business owners in particular, the business environment is often fragmented and relies on unrelated tools and processes, making it difficult for the business owner to manually integrate and view such data. The techniques described herein continually or periodically monitor these dispersed and disparate merchant accounts, including accounts within the control of service provider 1312 as well as accounts outside of its control, to track the business state (accounts payable, accounts receivable, payroll, invoices, reservations, funds, etc.) of the merchant.
The technology herein provides an integrated view of merchant cash flow, predicts demand, preemptively provides recommendations or services (e.g., funds, coupons, etc.), and/or enables funds flow between different accounts (merchant, another merchant, even an account of a payment service) in a frictionless and transparent manner.
As described herein, artificial intelligence, machine learning, etc. may be used to dynamically make decisions, recommendations, etc. to add intelligence and contextual awareness to the generally applicable schemes for providing payment processing services and/or to the additional or alternative services described herein. In some embodiments, the distributed system is capable of applying intelligence derived from an existing user group to a new user, thereby making the new user's online experience more personalized and smooth than traditional online methods. Thus, the techniques described herein improve upon prior art processes.
As described above, various Graphical User Interfaces (GUIs) may be presented to facilitate the techniques described herein. Some of the techniques described herein relate to user interface features presented via a GUI to improve interactions between a user 1314 and a user device 1306. In addition, these features may be dynamically altered based on knowledge of the user interacting with the GUI. Accordingly, the technology described herein is directed to improvements in computing systems.
Fig. 14 depicts an illustrative block diagram illustrating a system 1400 for performing the techniques described herein. The system 1400 includes a user device 1402 that communicates with one or more server computing devices (e.g., one or more servers 1404) via one or more networks 1406 (e.g., the internet, one or more cellular networks, one or more cloud networks, one or more wireless networks (e.g., Wi-Fi), one or more wired networks, and close-range communications such as Bluetooth®, Bluetooth® Low Energy (BLE), etc.). Although a single user device 1402 is shown, in additional or alternative examples, the system 1400 may have multiple user devices, as described above with reference to fig. 13.
The environment 1400 can facilitate generation and use of interactive elements associated with multimedia content. As described above, a content provider, such as a merchant, may publish or otherwise provide multimedia content. Such content may describe and/or discuss (e.g., reference) one or more items (e.g., goods and/or services). In some examples, the content may be associated with an intent to sell an item described in the content (e.g., text associated with an image indicates that the user is looking to sell an item described in the content). In other examples, the content may not be associated with a sales intent (e.g., no explicit or implicit indication that the user wishes to sell anything described in the content). The service provider 1312 may identify the referenced items in the multimedia content and generate interactive elements to superimpose on the content at output time. An interactive element may be selectable and, when selected, may cause a purchase user interface to be presented.
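One way to picture the generation of interactive elements from recognized items is sketched below. The catalog lookup, normalized bounding-box overlay, and purchase action payload are illustrative assumptions, not the disclosed implementation:

```python
def build_interactive_elements(detected_items, catalog):
    """Create selectable overlay elements for items recognized in multimedia content."""
    elements = []
    for item in detected_items:
        listing = catalog.get(item["name"])
        if listing is None:
            continue  # no purchasable match for this detected item
        elements.append({
            "label": f"Buy {item['name']} - ${listing['price_cents'] / 100:.2f}",
            "bounding_box": item["bounding_box"],  # where to superimpose the tag
            "action": {"type": "open_purchase_ui", "item_id": listing["id"]},
        })
    return elements

catalog = {"sneakers": {"id": "sku-42", "price_cents": 7999}}
detected = [
    {"name": "sneakers", "bounding_box": (0.1, 0.2, 0.4, 0.6)},
    {"name": "lamp", "bounding_box": (0.5, 0.5, 0.7, 0.9)},  # no catalog match
]
elements = build_interactive_elements(detected, catalog)  # one element, for the sneakers
```

Selecting a rendered element would then open the purchase user interface identified by the action payload; item recognition itself (e.g., via a vision model) is outside the scope of this sketch.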
In at least one example, the techniques performed by environment 1400 may alleviate the need for users interested in selling through platforms that output multimedia content to perform any actions they would not normally perform prior to publishing the content to the platform. That is, users interested in selling through such platforms may simply publish content to one or more platforms, and the techniques described herein aim to create sales opportunities and facilitate transactions based on such content.
As described above, components of environment 1400 may create sales opportunities even where none originally existed (e.g., when the content was published). That is, if user-published content lacks interactive elements that enable purchases, the service provider 1312 may still create the interactive elements on the fly and cause the elements to be displayed with the multimedia content, allowing a customer to purchase the items referenced in the content.
As described above, users of platforms (e.g., websites, applications, and other network-based communication tools provided by service providers) utilize tools to conduct online commerce ("e-commerce"). However, as described above, current technology has limitations. In some examples, users interested in purchasing items published through such platforms need to follow up with the merchant through another communication tool (e.g., email, text message, private message, etc.) to coordinate the purchase. Such systems introduce unnecessary latency due to the response times associated with the users. Furthermore, current infrastructure does not allow users to be filtered automatically, leaving the merchant responsible for deciding whether to initiate a conversation with interested users, conduct financial transactions with them, and so on. In other examples, the user is directed to a web page (typically different from the web page or platform where the interaction originated), and the user then needs to add items to a virtual shopping cart and provide payment data to complete the online transaction. Thus, the platforms must establish communication interfaces between different platforms, such as between a content platform (allowing interaction between two users) and a payment platform (facilitating payment transactions). These communication interfaces must satisfy security protocols to allow secure communications, such as the exchange of financial data. The prior techniques also introduce friction when a user intends to purchase items through the content-providing platform. That is, the user (whether buyer or merchant) needs to perform multiple actions to facilitate the transaction, which may include multiple communication exchanges, multiple clicks across multiple web pages, interaction or registration with multiple platforms, and so forth. Thus, current technology is inefficient and user-unfriendly.
The environment 1400 described herein enables friction-free (or nearly friction-free) transactions through interactions with multimedia content. Accordingly, the technology described herein provides an improvement over the prior art.
In at least one example, the user device 1402 can be any suitable type of computing device, such as portable, semi-fixed, or fixed. Some examples of user devices 1402 may include, but are not limited to, tablet computing devices, smart phones or mobile communication devices, notebook computers, netbooks or other portable or semi-portable computers, desktop computing devices, terminal computing devices or other semi-fixed or fixed computing devices, dedicated devices, wearable computing devices or other body-worn computing devices, augmented reality devices, virtual reality devices, internet of things (IoT) devices, and the like. That is, user device 1402 can be any computing device capable of sending communications and performing functions in accordance with the techniques described herein. The user device 1402 may include a device, such as a payment card reader, or a component capable of receiving payment, as described below.
In the illustrated example, the user device 1402 includes one or more processors 1408, one or more computer-readable media 1410, one or more communication interfaces 1412, one or more input/output (I/O) devices 1414, a display 1416, and one or more sensors 1418.
In at least one example, each processor 1408 itself may include one or more processors or processing cores. For example, the one or more processors 1408 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. In some examples, the one or more processors 1408 may be one or more hardware processors and/or any suitable type of logic circuitry specifically programmed or configured to perform the algorithms and processes described herein. The one or more processors 1408 may be configured to obtain and execute computer-readable, processor-executable instructions stored in a computer-readable medium 1410.
Depending on the configuration of the user device 1402, the computer-readable medium 1410 may be an example of a tangible, non-transitory computer storage medium and may include volatile and non-volatile memory and/or removable and non-removable media implemented in any type of technology for storage of information such as computer-readable and processor-executable instructions, data structures, program modules, or other data. Computer-readable media 1410 may include, but is not limited to, RAM, ROM, EEPROM, flash memory, solid state storage, magnetic disk storage, optical storage, and/or other computer-readable media technologies. Further, in some examples, the user device 1402 may access external storage, such as a RAID storage system, a storage array, network attached storage, a storage area network, cloud storage, or other media that may be used to store information and may be accessed directly by the one or more processors 1408 or through another computing device or network. Thus, the computer-readable medium 1410 may be a computer storage medium capable of storing instructions, modules, or components that are executable by the one or more processors 1408. Further, when referred to, non-transitory computer-readable media excludes media such as energy, carrier wave signals, electromagnetic waves, and signals themselves.
The computer-readable medium 1410 may be used to store and maintain any number of functional components that may be executed by the one or more processors 1408. In some implementations, these functional components include instructions or programs executable by the one or more processors 1408, and when executed, implement the operational logic for performing the actions and services attributed above to the user device 1402. Functional components stored in computer-readable media 1410 can include a user interface 1420 to enable a user to interact with user device 1402 and, thus, with one or more servers 1404 and/or other networking devices. In at least one example, the user interface 1420 may be presented via a web browser or similar. In other examples, the user interface 1420 can be presented by an application, such as a mobile application or desktop application, which can be provided by a service provider 1312 associated with one or more servers 1404, or can be a dedicated application. In some examples, the user interface 1420 may be one of the one or more user interfaces 122 described above with reference to fig. 1. In at least one example, a user may interact with the user interface through touch input, voice input, gestures, or any other type of input. The term "input" is also used to describe "contextual" input that may not be provided directly by the user through the user interface 1420. For example, user interactions with the user interface 1420 are analyzed using, for example, natural language processing techniques to determine the user's context or intent, which may be handled in a manner similar to "direct" user input.
Depending on the type of user device 1402, the computer-readable medium 1410 may optionally also include other functional components and data, such as other modules and data 1422, which may include programs, drivers, etc., as well as data used or created by the functional components. In addition, the computer-readable medium 1410 may also store data, data structures, and the like used by the functional components. Moreover, user device 1402 can comprise many other logical, procedural, and physical components, of which those described are merely examples related to the discussion herein.
In at least one example, the computer-readable medium 1410 can include additional functional components, such as an operating system 1424 for controlling and managing the various functions of the user device 1402 and for enabling basic user interactions.
The one or more communication interfaces 1412 may include one or more interfaces and hardware components for enabling communication with various other devices, such as through one or more networks 1406 or directly. For example, the one or more communication interfaces 1412 may communicate over one or more networks 1406, which may include, but are not limited to, any type of network known in the art, such as a local area network or a wide area network (e.g., the internet), and may include a wireless network (e.g., a cellular network), a cloud network, a local wireless network (e.g., Wi-Fi and/or close-range wireless communication, such as Bluetooth®, BLE, NFC, RFID), a wired network, or any other such network, or any combination thereof. Thus, one or more of networks 1406 may include wired and/or wireless communication technologies, including Bluetooth®, BLE, Wi-Fi, and cellular communication technologies, as well as wired or fiber optic technologies. The components used for such communications depend, at least in part, on the type of network, the chosen environment, or both. Protocols for communicating over such networks are well known and will not be discussed in detail herein.
Embodiments of the present disclosure may be provided to users through a cloud computing infrastructure. Cloud computing refers to providing extensible computing resources as services over a network to enable convenient, on-demand network access to a shared pool of configurable computing resources that can be quickly configured and released with minimal management effort or interaction by service providers. Thus, cloud computing allows users to access virtual computing resources (e.g., storage, data, applications, even complete virtualized computing systems) in the "cloud" without regard to the underlying physical systems (or locations of these systems) used to provide the computing resources.
The user device 1402 can further include one or more input/output (I/O) devices 1414. The I/O devices 1414 may include speakers, microphones, cameras, various user controls (e.g., buttons, joysticks, keyboards, keypads, and the like), haptic output devices, and the like. The I/O devices 1414 may also include accessories that connect with the user device 1402 via connection ports (audio jacks, USB-C, Bluetooth, etc.).
In at least one example, the user device 1402 can include a display 1416. The display 1416 may employ any suitable display technology depending on the type of computing device or devices used as the user device 1402. For example, the display 1416 may be a liquid crystal display, a plasma display, a light emitting diode display, an OLED (organic light emitting diode) display, an electronic paper display, or any other suitable type of display capable of presenting digital content thereon. In at least one example, the display 1416 can be an augmented reality display, a virtual reality display, or any other display capable of presenting and/or projecting digital content. In some examples, the display 1416 may have a touch sensor associated with the display 1416 to provide a touch screen display configured to receive touch input to enable interaction with a graphical interface presented on the display 1416. Thus, embodiments herein are not limited to any particular display technology. Alternatively, in some examples, the user device 1402 may not include the display 1416 and the information may be presented in other manners, such as audibly, tactilely, and the like.
Further, the user device 1402 can include one or more sensors 1418. The one or more sensors 1418 may include a GPS device capable of indicating location information. Further, the one or more sensors 1418 may include, but are not limited to, an accelerometer, a gyroscope, a compass, a proximity sensor, a camera, a microphone, and/or a switch.
In some examples, a GPS device may be used to identify the location of the user. In at least one example, service provider 1312 may use the location of the user to provide one or more services, as described above. That is, in some examples, service provider 1312 may implement a geofence to provide a particular service to a user. For example, for a loan service, a location may be used to confirm whether the intended purpose of the loan corresponds to evidence of use (e.g., whether the user uses the loan consistently with its stated purpose). Further, in some examples, the location may be used for payroll purposes. For example, if a contractor completes a project, the contractor may provide a geotagged image (e.g., tagged based on location information available to the GPS device). In some examples, the location may be used to facilitate peer-to-peer payments between nearby users 1314 and/or to send notifications to the users 1314 regarding available reservations with one or more merchants located in the vicinity of the users 1314. In at least one example, the location can be used to collect funds from a nearby buyer when the nearby buyer leaves the geofence, or the location can be used to initiate an action in response to the user 1314 entering the merchant's brick-and-mortar store. The location may be used for additional or alternative purposes.
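The geofence checks above can be sketched as a simple point-in-radius test. This is an illustrative sketch only, not the disclosed implementation; the helper names and the circular-fence model are assumptions.

```python
import math

# Hypothetical helper: a geofence is modeled as a center point plus a
# radius in meters; the haversine formula gives the great-circle
# distance between two GPS fixes given as (lat, lon) in degrees.
EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Distance in meters between two (lat, lon) points in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def inside_geofence(user_fix, fence_center, fence_radius_m):
    """True if the user's GPS fix falls within the geofence."""
    return haversine_m(*user_fix, *fence_center) <= fence_radius_m
```

A service could evaluate `inside_geofence` on each location update and trigger an action (e.g., collecting funds) when the result transitions from true to false.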
In addition, the user device 1402 may include various other components not shown, examples of which include removable memory, a power source (e.g., battery and power control unit), a bar code scanner, a printer, a cash drawer, and the like.
Further, in some examples, the user device 1402 may include, be connectable to, or otherwise be coupled to a reader device 1426 for reading a payment instrument and/or an identifier associated with a payment object. In some examples, as described above, the reader device 1426 may be plugged into a port in the user device 1402, such as a microphone port, an earphone port, an audio jack, a data port, or other suitable port. In additional or alternative examples, the reader device 1426 may be coupled to the user device 1402 via another wired or wireless connection, e.g., via Bluetooth, BLE, etc. The reader device 1426 may include a read head for reading the magnetic stripe of a payment card, and may further include encryption technology for encrypting information read from the magnetic stripe. Additionally or alternatively, the reader device 1426 may be an EMV payment reader, which in some examples may be embedded in the user device 1402. Furthermore, many other types of readers may be used with the user device 1402 herein, depending on the type and configuration of the user device 1402.
Reader device 1426 may be a portable magnetic stripe card reader, an optical scanner, a smart card (card with embedded IC chip) reader (e.g., a card reader conforming to EMV standards or a reader enabled for short-range communication), an RFID reader, or a similar device configured to detect and acquire data from any payment instrument. Thus, reader device 1426 may include hardware implementations, such as slots, magnetic tracks, and rails with one or more sensors or electrical contacts to facilitate detection and receipt of payment instruments. That is, the reader device 1426 may include a hardware implementation to enable the reader device 1426 to interact with a payment instrument by swiping a card (i.e., a card-present transaction in which a purchaser slides a card with a magnetic stripe past a payment reader that captures the payment data contained in the magnetic stripe), inserting a card (i.e., a card-present transaction in which the purchaser first inserts a card with an embedded microchip (i.e., a chip) into the payment reader and leaves it there until the payment reader prompts the purchaser to remove the card), or tapping (i.e., a card-present transaction in which the purchaser may tap or hover his or her electronic device, such as a smartphone running a payment application, over the payment reader to complete the transaction via short-range communication) to obtain payment data associated with the purchaser. Additionally or alternatively, the reader device 1426 may also include a biometric sensor to receive and process biometric features and treat them as a payment instrument, provided such biometric features are registered with a payment processing service provider and connected to a financial account with a banking server.
The reader device 1426 can include one or more processing units, computer-readable media, reader chips, transaction chips, timers, clocks, network interfaces, power sources, etc. The one or more processing units of the reader device 1426 may execute one or more modules and/or processes to cause the reader device 1426 to perform various functions, as described above and explained in further detail in the disclosure below. In some examples, the one or more processing units may include a central processing unit (CPU), a graphics processing unit (GPU), both a CPU and a GPU, or other processing units or components known in the art. Additionally, each of the one or more processing units may have its own local memory, which may also store program modules, program data, and/or one or more operating systems. Depending on the particular configuration and type of reader device 1426, the computer-readable media may include volatile memory (e.g., RAM), non-volatile memory (e.g., ROM, flash memory, miniature hard disk, memory card, or the like), or some combination thereof. In at least one example, the computer-readable media of the reader device 1426 can include at least one module for performing the various functions described herein.
The reader chip may perform functions that control the operation and processing of the reader device 1426. That is, the reader chip may perform functions of controlling a payment interface (e.g., a contactless interface, a contact interface, etc.), a wireless communication interface, a wired interface, a user interface (e.g., a signal conditioning device (FPGA)), and the like. Additionally, the reader chip may perform the function of controlling a timer, which may provide a timer signal indicating the amount of time that has been delayed after a particular event (e.g., interaction, power down event, etc.). In addition, the reader chip may perform the function of controlling a clock, which may provide a clock signal indicative of time. In addition, the reader chip may perform the function of controlling a network interface that may be connected to one or more networks 1406, as described below.
Additionally, the reader chip may perform the function of controlling the power supply. The power source may include one or more power sources, such as a physical connection to an ac power source or a battery. The power supply may include power conversion circuitry for converting the alternating current and generating a plurality of direct current voltages for use by components of the reader device 1426. When the power source comprises a battery, the battery may be charged by a physical power connection, by inductive charging, or by any other suitable method.
The transaction chip may perform functions related to processing payment transactions, interfacing with payment instruments, encryption, and other payment-specific functions. That is, the transaction chip may access payment data associated with the payment instrument and may provide the payment data to the POS terminal, as described above. Payment data may include, but is not limited to: the name of the buyer, the address of the buyer, the type of payment instrument (e.g., credit card, debit card, etc.), the number associated with the payment instrument, a verification value associated with the payment instrument (e.g., PIN Verification Key Index (PVKI), PIN Verification Value (PVV), Card Verification Value (CVV), Card Verification Code (CVC), etc.), an expiration date associated with the payment instrument, a Primary Account Number (PAN) corresponding to the buyer (which may or may not match the number associated with the payment instrument), a limit on the type of fees/debts that may be incurred, etc. Additionally, the transaction chip may encrypt the payment data upon receipt of the payment data.
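The payment data fields listed above can be modeled as a simple record. The sketch below is illustrative only (the field and function names are assumptions, not from the disclosure); it shows the common practice of masking the PAN so that downstream components never see the full number.

```python
from dataclasses import dataclass

# Hypothetical model of the payment data fields listed above; the field
# names and the masking rule are illustrative assumptions.
@dataclass
class PaymentData:
    buyer_name: str
    instrument_type: str   # e.g., "credit", "debit"
    pan: str               # primary account number
    cvv: str
    expiration: str        # MM/YY

def masked_pan(data: PaymentData) -> str:
    """Return the PAN with all but the last four digits hidden."""
    return "*" * (len(data.pan) - 4) + data.pan[-4:]
```

In a real reader, the transaction chip would encrypt the full record before it leaves the secure element; masking is shown here only as a minimal stand-in for that handling.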
It should be appreciated that in some examples, the reader chip may have its own one or more processing units and computer-readable media, and/or the transaction chip may have its own one or more processing units and computer-readable media. In other examples, the functionality of the reader chip and the transaction chip may be embodied in a single chip or in multiple chips, each chip including any suitable combination of processing units and computer readable media to collectively perform the functionality of the reader chip and the transaction chip as described herein.
Although the user device 1402 (which may be a POS terminal) and the reader device 1426 are shown as separate devices, in additional or alternative examples, the user device 1402 and the reader device 1426 may be part of a single device, which may be battery powered devices. In such an example, the components of the user device 1402 and the reader device 1426 can be associated with a single device. In some examples, the reader device 1426 may have a display integrated therewith, which may be in addition to (or in lieu of) the display 1416 associated with the user device 1402.
The one or more servers 1404 may include one or more servers or other types of computing devices, which can be embodied in any number of ways. For example, in the case of a server, the modules, other functional components, and data may be implemented on a single server, a server cluster, a server farm or data center, a cloud-hosted computing service, a cloud-hosted storage service, etc., although other computer architectures may additionally or alternatively be used.
Further, while the figures illustrate components and data of one or more servers 1404 as residing in a single location, such components and data could alternatively be distributed in any manner across different computing devices and different locations. Thus, these functions may be implemented by one or more server computing devices, with the various functions described above being distributed across different computing devices in various ways. Multiple servers 1404 may be located together or separately and organized, for example, as virtual servers, server groups, and/or server farms. The described functionality may be provided by a server of a single merchant or enterprise, or may be provided by servers and/or services of a plurality of different buyers or enterprises.
In the illustrated example, the one or more servers 1404 can include one or more processors 1428, one or more computer-readable media 1430, one or more I/O devices 1432, and one or more communication interfaces 1434. Each processor 1428 may be a single processing unit or multiple processing units, and may include a single or multiple computing units or multiple processing cores. The one or more processors 1428 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. For example, the one or more processors 1428 may be one or more hardware processors and/or any suitable type of logic circuitry specifically programmed or configured to perform the algorithms and processes described herein. The one or more processors 1428 may be configured to obtain and execute computer-readable instructions stored in the computer-readable medium 1430, which may program the one or more processors 1428 to perform the functions described herein.
Computer-readable media 1430 may include volatile and nonvolatile memory and/or removable and non-removable media implemented in any type of technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Such computer-readable media 1430 may include, but are not limited to, RAM, ROM, EEPROM, flash memory or other storage technology, optical storage, solid state storage, magnetic tape, magnetic disk storage, RAID storage systems, storage arrays, network attached storage, storage area networks, cloud storage, or any other medium that can be used to store the desired information and that can be accessed by a computing device. Depending on the configuration of the one or more servers 1404, the computer-readable media 1430 may be a computer-readable storage medium and/or a tangible, non-transitory medium, to the extent that, when mentioned, non-transitory computer-readable media exclude media such as energy, carrier signals, electromagnetic waves, and signals per se.
Computer-readable media 1430 may be used to store any number of functional components that can be executed by one or more processors 1428. In many embodiments, these functional components include instructions or programs executable by the processor 1428, and when executed, specifically configure the one or more processors 1428 to perform the actions attributed above to the service provider 1312 and/or the payment processing service. The functional components stored in the computer-readable medium 1430 may optionally include a merchant module 1436, a training module 1438, and one or more other modules and data 1440.
The merchant module 1436 may be configured to receive transaction data from a POS system, such as the POS system 1324 described above with reference to fig. 13. The merchant module 1436 may send a request (e.g., authorization, acquisition, settlement, etc.) to one or more payment service server computing devices to facilitate POS transactions between the merchant and the buyer. The merchant module 1436 may communicate the success or failure of the POS transaction to the POS system. The payment processing module 116 described above with reference to fig. 1 and 2 may correspond to the merchant module 1436.
The training module 1438 may be configured to train models using machine learning mechanisms. For example, a machine learning mechanism may analyze training data to train a data model that creates an output, which may be a recommendation, a score, and/or another indication. Machine learning mechanisms may include, but are not limited to, supervised learning algorithms (e.g., artificial neural networks, Bayesian statistics, support vector machines, decision trees, classifiers, k-nearest neighbors, etc.), unsupervised learning algorithms (e.g., artificial neural networks, association rule learning, hierarchical clustering, cluster analysis, etc.), semi-supervised learning algorithms, deep learning algorithms, statistical models, and the like. In at least one example, the machine-trained data model can be stored in one or more data stores associated with the one or more user devices 1402 and/or the one or more servers 1404 for use at some time (e.g., at runtime) after the data model has been trained.
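As an illustrative sketch only (the disclosure names several algorithm families but specifies none), one of the listed supervised mechanisms, k-nearest neighbors, can be "trained" by retaining labeled examples and then queried at runtime. All names, features, and labels below are assumptions.

```python
from collections import Counter

# Minimal k-nearest-neighbors sketch. The feature vectors, labels, and
# choice of k are illustrative, not part of the disclosure.
def train(examples):
    """'Training' for k-NN simply retains the labeled examples."""
    return list(examples)

def predict(model, x, k=3):
    """Label x by majority vote among the k closest training points."""
    dist = lambda a: sum((ai - xi) ** 2 for ai, xi in zip(a, x))
    nearest = sorted(model, key=lambda ex: dist(ex[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Toy training set: 2-D feature vectors with two labels.
model = train([((0, 0), "low"), ((0, 1), "low"),
               ((5, 5), "high"), ((6, 5), "high")])
```

Consistent with the passage, the "trained" model (here, just the retained examples) could be persisted in a data store and loaded at runtime for prediction.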
One or more other modules and data 1440 may include interactive element generator 138 and/or command generator 140, the functions of which are described at least in part above. In addition, one or more other modules and data 1440 may include programs, drivers, etc., as well as data used or created by the functional components. Moreover, one or more of the servers 1404 can include many other logical, procedural, and physical components, of which the above descriptions are merely examples related to the discussion herein.
One or more "modules" and/or "components" referred to herein may be implemented as more modules or fewer modules, and the functions described for the modules may be redistributed according to implementation details. As used herein, the term "module" generally refers to software stored on a non-transitory storage medium (e.g., volatile or non-volatile memory for a computing device), hardware, or firmware (or any combination thereof) modules. Modules typically have functionality such that they can create useful data or other outputs using one or more specified inputs. The modules may or may not be self-contained. An application (also referred to as an "app") may include one or more modules, or a module may include one or more applications (e.g., executable code to cause a device to perform actions) that may be accessed over a network or downloaded as software to the device. In additional and/or alternative examples, one or more modules may be implemented as computer-readable instructions, various data structures, etc., via at least one processing unit, to configure one or more computing devices described herein to execute the instructions and perform the operations described herein.
In some examples, a module may include one or more Application Programming Interfaces (APIs) to perform some or all of its functions (e.g., operations). In at least one example, a Software Development Kit (SDK) may be provided by a service provider to allow third party developers to include service provider functionality and/or utilize service provider services associated with their own third party applications. Additionally or alternatively, in some examples, the service provider may utilize the SDK to integrate third party service provider functionality into its applications. That is, one or more APIs and/or one or more SDKs may enable third party developers to customize the manner in which their respective third party applications interact with the service provider, and vice versa. One or more of the APIs 148 described above may correspond to such situations.
Computer-readable media 1430 may additionally include an operating system 1442 for controlling and managing the various functions of the one or more servers 1404.
Communication interface 1434 may include one or more interfaces and hardware components to enable communication with various other devices, such as through one or more networks 1406 or direct communication. For example, the one or more communication interfaces 1434 may communicate over the one or more networks 1406, which may include, but are not limited to, any type of network known in the art, such as a local area network or a wide area network (e.g., the internet), and may include a wireless network (e.g., a cellular network), a local wireless network (e.g., Wi-Fi and/or close range wireless communication, such as, for example, BLE, NFC, RFID), a wired network, or any other such network, or any combination thereof. Thus, the one or more networks 1406 may include wired and/or wireless communication technologies, including Bluetooth, BLE, Wi-Fi, and cellular communication technologies, and wired or fiber optic technologies. The components used for such communications depend, at least in part, on the type of network, the chosen environment, or both. Protocols for communicating over such networks are known and will not be discussed in detail herein.
One or more of the servers 1404 can further be equipped with various I/O devices 1432. Such I/O devices 1432 may include a display, various user interface controls (e.g., buttons, joysticks, keyboards, mice, touch screens, biometric or sensory input devices, etc.), audio speakers, connection ports, and the like.
In at least one example, the system 1400 can include one or more data stores 1444 that can be configured to store accessible, manageable, and updateable data. In some examples, one or more data stores 1444 can be integrated with user device 1402 and/or one or more servers 1404. In other examples, as shown in fig. 14, one or more data stores 1444 may be located remotely from the one or more servers 1404 and accessible by the one or more servers 1404. The one or more data stores 1444 can include a plurality of databases and/or servers that are connected locally and/or remotely through one or more networks 1406. One or more of the data stores 150 described above with reference to fig. 1 may correspond to one or more data stores 1444.
In at least one example, the one or more data stores 1444 can store user profiles, which can include merchant profiles, buyer profiles, and the like.
The merchant data may store or otherwise correlate data related to the merchant. For example, the merchant data may store or otherwise associate merchant-related information (e.g., name of the merchant, geographic location of the merchant, business hours of the merchant, employee information, etc.), a Merchant Category Classification (MCC), one or more items sold by the merchant, hardware used by the merchant (e.g., device type), transaction data related to the merchant (e.g., transactions conducted by the merchant, payment data related to the transactions, items related to the transactions, descriptions of items related to the transactions, per-item and/or total payments per transaction, parties to the transactions, dates, times, and/or places related to the transactions, etc.), loan information related to the merchant (e.g., loans previously provided to the merchant, defaults on those loans, etc.), risk information related to the merchant (e.g., risk indications, instances of fraud, chargebacks, etc.), reservation information (e.g., previous reservations, upcoming (scheduled) reservations, reservation times, reservation durations, interactions associated with such reservations, etc.), payroll information (e.g., employees, payroll frequency, payroll amounts, etc.), inventory data, customer service data, etc. The merchant profile may securely store bank account information provided by the merchant. In addition, the merchant data may store payment information associated with a payment instrument linked to a stored balance of the merchant, such as a stored balance maintained in a ledger by service provider 1312.
The buyer profile may store buyer data including, but not limited to: buyer information (e.g., name, telephone number, address, banking information, etc.), buyer preferences (e.g., learned or buyer-specified), purchase history data (e.g., identifying one or more items purchased (and corresponding item information), payment instruments used to purchase the one or more items, returns associated with one or more orders, statuses of one or more orders (e.g., prepared, packaged, in transit, delivered, etc.)), reservation data (e.g., previous reservations, upcoming (scheduled) reservations, reservation times, reservation durations, interactions associated with such reservations, etc.), payroll data (e.g., employer, payroll frequency, payroll amount, etc.), inventory data, customer service data, etc.
In at least one example, one or more of the accounts described above with reference to fig. 1 may include or be associated with merchant data and/or buyer data described above.
Further, in at least one example, one or more data stores 1444 can store one or more inventory databases and/or one or more catalog databases. As described above, the inventory may store data associated with the number of each item available that the merchant owns. The above records may be stored in an inventory data store. In addition, the catalog may store data associated with items available to the merchant. The one or more data stores 1444 may store additional or alternative types of data as described herein.
Example clauses
1. A method implemented by at least one server computing device of a service provider, the method comprising: receiving multimedia content comprising a representation of an item to be sold provided by a merchant; identifying the item in the multimedia content by one or more identification techniques; determining identification information associated with the item based at least in part on inventory data associated with the merchant; associating the identification information with an interactive element to be presented in association with the multimedia content, wherein the interactive element, when selected, causes a user device to display a graphical user interface configured to allow a customer to purchase the item; overlaying, at least in part and for customer interaction, the interactive element onto a portion of the multimedia content; receiving, from a user device of a customer, input data indicating a customer interaction with the interactive element; and based at least in part on the input data, causing the user device of the customer to display the graphical user interface configured to allow the customer to purchase the item.
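The tagging flow of clause 1 can be sketched as data-plumbing: an identified item is enriched from merchant inventory and anchored, as an interactive element, to a region of the content. All names and data shapes below are hypothetical, not the claimed implementation.

```python
# Hypothetical sketch of clause 1's flow; dict-based structures stand in
# for whatever the server actually uses.
def build_interactive_element(item_id, inventory):
    """Attach identifying info from merchant inventory to a tappable element."""
    info = inventory[item_id]            # e.g., {"name": ..., "price": ...}
    return {"item_id": item_id, "label": info["name"],
            "price": info["price"], "action": "open_checkout"}

def overlay(content, element, region):
    """Associate the element with a portion (x, y, w, h) of the content."""
    content.setdefault("overlays", []).append(
        {"element": element, "region": region})
    return content

inventory = {"sku-1": {"name": "Mug", "price": 12.50}}
video = {"id": "vid-9", "overlays": []}
video = overlay(video, build_interactive_element("sku-1", inventory),
                (0.1, 0.2, 0.15, 0.1))
```

A client rendering `video` would draw each overlay's element at its region and, on selection, report input data back to the server to open the purchase interface.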
2. The method of clause 1, further comprising: receiving inventory data from a system associated with a merchant, the inventory data indicating a current inventory of items available for purchase from the merchant; associating the digital representation of the current inventory with the interactive element, wherein overlaying the interactive element on the portion of the multimedia content includes overlaying the digital representation of the current inventory on the portion of the multimedia content.
3. The method of clause 1 and/or clause 2, further comprising: determining to refrain from causing display of the graphical user interface until the multimedia content ceases; storing first data indicating that the interactive element was selected; receiving additional input data indicating a customer interaction with an additional interactive element associated with an additional item in the multimedia content; storing second data indicating that the additional interactive element was selected; and, based at least in part on the multimedia content ceasing and using the first data and the second data, causing the graphical user interface to display purchase information for the item and the additional item.
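Clause 3's deferred display amounts to queueing selections during playback and surfacing them together when the content stops. A minimal sketch, with all names hypothetical:

```python
# Hypothetical sketch of clause 3: selections (the "first data" and
# "second data") accumulate until the content ceases.
class DeferredCart:
    def __init__(self):
        self.selected = []
        self.playing = True

    def on_select(self, item_id):
        """Record a selection without interrupting playback."""
        self.selected.append(item_id)

    def on_content_stopped(self):
        """Playback ended: now the GUI shows everything selected."""
        self.playing = False
        return {"show_gui": True, "items": list(self.selected)}

cart = DeferredCart()
cart.on_select("sku-1")
cart.on_select("sku-2")
```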
4. The method of any one of clauses 1 to 3, further comprising: determining attributes of items presented in the multimedia content, the attributes including selectable options associated with the items; receiving, from a system associated with a merchant, fee data indicating a fee for an item having the attribute; causing the graphical user interface to include a fee; and causing the graphical user interface to include the attribute as pre-filled information in the user input field.
5. A system comprising: one or more processors; and a non-transitory computer-readable medium storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising: receiving multimedia content comprising a representation of an item; identifying the item in the multimedia content; determining identification information associated with the item; associating the identification information with an interactive element to be presented in association with the multimedia content, wherein the interactive element is selectable by a user of a user device; associating the interactive element with a portion of the multimedia content; receiving, from the user device, input data indicating a user interaction with the interactive element; and, based at least in part on the input data, causing the user device to display a graphical user interface configured to allow the user to obtain the item.
6. The system of clause 5, the operations further comprising: determining a first time indicator of when the representation of the item is referenced in the multimedia content; determining a second time indicator of when the representation of the item ceases to be referenced in the multimedia content; and causing the interactive element to be displayed in association with the multimedia content from the first time indicator to the second time indicator when the multimedia content is output by the user device.
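The time-window behavior of clause 6 reduces to checking whether the current playback time lies between an item's first and second time indicators. A sketch under assumed names and a half-open-interval convention:

```python
# Hypothetical sketch of clause 6: an element is visible only while
# playback time t (seconds) lies within the item's [start, end) window.
def visible(elements, t):
    """Return elements whose time window contains playback time t."""
    return [e for e in elements if e["start"] <= t < e["end"]]

tags = [{"item": "sku-1", "start": 4.0, "end": 12.0},
        {"item": "sku-2", "start": 10.0, "end": 18.0}]
```

Windows may overlap, as above, so several interactive elements can be on screen at once.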
7. The system of clauses 5 and/or 6, the operations further comprising: determining a location on a visual window of the user device at which the item is presented during output of the multimedia content by the user device; and causing the interactive element to be rendered in association with the location when the multimedia content is output by the user device.
8. The system of any one of clauses 5 to 7, the operations further comprising: determining a portion of the multimedia content in which to display the representation; identifying attributes of the item using image data from the portion of the multimedia content; and including the attribute as at least a portion of the identification information.
9. The system of any one of clauses 5 to 8, the operations further comprising: receiving text data representing one or more comments, the one or more comments being associated with the multimedia content; identifying one or more keywords associated with the item from the text data; and modifying the identification information based at least in part on the one or more keywords.
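Clause 9's comment mining can be sketched as matching comment words against a known attribute vocabulary and folding the hits into the item's identification information. The vocabulary and all names below are illustrative assumptions.

```python
# Hypothetical sketch of clause 9. A real system would likely use the
# trained models described earlier; simple keyword matching stands in here.
ATTRIBUTE_KEYWORDS = {"ceramic", "blue", "handmade", "large"}

def refine_identification(info, comments):
    """Merge keywords found in comments into the item's attributes."""
    found = {w.strip(".,!?").lower()
             for c in comments for w in c.split()} & ATTRIBUTE_KEYWORDS
    info = dict(info)  # leave the caller's record untouched
    info["attributes"] = sorted(set(info.get("attributes", [])) | found)
    return info

info = refine_identification({"name": "Mug"}, ["Love the blue ceramic mug!"])
```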
10. The system of any of clauses 5 to 9, wherein the multimedia content is associated with a first merchant, and the operations further comprise: determining, upon receiving the user input, that the item is not available from the first merchant; identifying one or more second merchants from which the item is available; and causing the graphical user interface to include identifiers of the one or more second merchants.
11. The system of any one of clauses 5 to 10, the operations further comprising: generating first data representing the interactive element, the first data being separate from the multimedia content; receiving an indication that the multimedia content has been requested for output on the user device; and, in response to the indication, causing the first data to be superimposed on the multimedia content as the multimedia content is output on the user device.
12. The system of any of clauses 5-11, wherein the graphical user interface is provided by a payment processing service provider, and the operations further comprise: receiving item information about an item from a system associated with a merchant; receiving payment information from a user device via a graphical user interface; and initiating a payment transaction for purchasing the item using the payment information and the item information.
13. A method implemented at least in part by one or more computers of a payment processing service provider, the method comprising: receiving multimedia content comprising a representation of an item; identifying the item in the multimedia content; determining identification information associated with the item; associating the identification information with an interactive element to be presented in association with the multimedia content; associating the interactive element with a portion of the multimedia content; causing the interactive element to be displayed while the multimedia content is output on a user device; receiving input data from the user device, while the interactive element is displayed, indicating a user interaction with the interactive element; and based at least in part on the input data, causing the user device to display a graphical user interface configured to allow a user to obtain the item.
14. The method of clause 13, further comprising: receiving inventory data from a system associated with a merchant selling the item, the inventory data indicating a current inventory of the item available for purchase from the merchant; determining that the current inventory is less than a threshold inventory value; and based at least in part on the current inventory being less than the threshold inventory value, sending a recommendation for a replacement item that is currently available.
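As a rough illustration only (not part of the patent disclosure), the low-inventory logic recited in the clause above could be sketched as follows; all function names, data shapes, and the threshold semantics are assumptions:

```python
# Hypothetical sketch: if the merchant's current inventory falls below a
# threshold, return a recommendation for an in-stock replacement item.
def recommend_if_low_stock(current_inventory, threshold, replacements):
    """Return a replacement recommendation when stock is below threshold."""
    if current_inventory >= threshold:
        return None  # sufficient stock; no recommendation needed
    # Pick the first replacement that is actually available.
    for item in replacements:
        if item["inventory"] > 0:
            return {"recommend": item["name"]}
    return None

print(recommend_if_low_stock(2, 5, [{"name": "blue-variant", "inventory": 7}]))
```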
15. The method of clauses 13 and/or 14, further comprising: storing first data indicating that the interactive element is selected; receiving additional input data indicating a customer interaction with an additional interactive element associated with an additional item in the multimedia content; storing second data indicating that the additional interactive element is selected; and causing the graphical user interface to display purchase information for the item and the additional item based at least in part on the first data and the second data.
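A minimal sketch of the batching described in the clause above — recording each tag selection (the "first data" and "second data") so a single purchase screen can later show every selected item. The class and method names are illustrative, not from the disclosure:

```python
# Hypothetical cart that accumulates interactive-element selections made
# during playback, for display on one combined purchase GUI afterward.
class TagSelectionCart:
    def __init__(self):
        self._selected = []

    def record_selection(self, item_id):
        # Store data indicating that an interactive element was selected.
        if item_id not in self._selected:
            self._selected.append(item_id)

    def purchase_info(self):
        # Combined purchase information for every selected item.
        return list(self._selected)

cart = TagSelectionCart()
cart.record_selection("shirt-123")
cart.record_selection("hat-456")
print(cart.purchase_info())
```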
16. The method of any one of clauses 13 to 15, further comprising: determining an attribute comprising at least one of a size, a color, or a quantity of the item presented in the multimedia content; and causing the graphical user interface to include the attribute as pre-filled information in a user input field.
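As a toy illustration of the pre-fill behavior above, detected attributes could be mapped onto the purchase form's input fields. The field names and defaults here are assumptions for the sketch:

```python
# Hypothetical sketch: attributes detected in the content (size, color,
# quantity) become pre-filled values in the purchase form.
def prefill_fields(detected_attributes):
    form = {"size": "", "color": "", "quantity": 1}
    for field, value in detected_attributes.items():
        if field in form:
            form[field] = value  # pre-populate only known fields
    return form

print(prefill_fields({"size": "M", "color": "red"}))
```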
17. The method of any one of clauses 13 to 16, further comprising: determining a geographic area associated with the user device when the user input data is received; and determining a current inventory of the item in the geographic area, wherein causing the interactive element to be displayed includes causing an indication of the current inventory to be displayed.
18. The method of any of clauses 13 to 17, wherein the multimedia content comprises real-time streaming media or near real-time streaming media content, receiving the multimedia content comprises receiving the multimedia content from a device associated with a merchant, and the method further comprises: analyzing image data of the multimedia content using one or more computer vision processes to identify objects depicted in the multimedia content prior to sending the instance of the multimedia content to the user device; generating first data comprising an interactive element prior to transmitting an instance of the multimedia content, wherein the interactive element is based at least in part on the identified object; and transmitting the first data and the instance of the multimedia content to the user device.
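A simplified sketch of the pre-transmission pipeline in the clause above: run object identification over frames of the (near) real-time stream, build interactive-element data from the detections, and send that "first data" alongside the stream instance. The detector here is a stand-in; an actual system would use a computer-vision model, and all names are assumptions:

```python
# Placeholder detector: a real implementation would run one or more
# computer vision processes on the frame's image data.
def detect_objects(frame):
    return frame.get("labels", [])

def build_overlay(frames):
    # Generate interactive-element data prior to transmitting the stream.
    elements = []
    for i, frame in enumerate(frames):
        for label in detect_objects(frame):
            elements.append({"frame": i, "item": label, "type": "buy-tag"})
    return elements  # sent to the user device with the content instance

stream = [{"labels": ["sneaker"]}, {"labels": []}, {"labels": ["sneaker", "cap"]}]
print(build_overlay(stream))
```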
19. The method of any one of clauses 13 to 18, further comprising: receiving, from the user device, image data depicting a gesture made by a user of the user device while the multimedia content is being output, the user input data comprising the image data; determining a movement pattern of the gesture based at least in part on an analysis of the image data; determining that the movement pattern corresponds to a reference movement pattern indicating that the user has provided input selecting the item to purchase; and causing an action to be performed based at least in part on the movement pattern corresponding to the reference movement pattern.
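Purely as an illustrative sketch of "movement pattern corresponds to a reference movement pattern": here a pattern is a sequence of (x, y) points and correspondence is a mean coordinate difference below a tolerance. A production gesture recognizer would be far more robust; everything below is an assumption:

```python
# Hypothetical matcher: does an observed gesture track the reference
# movement pattern closely enough to count as a purchase selection?
def patterns_match(observed, reference, tolerance=0.1):
    if len(observed) != len(reference):
        return False
    total = sum(abs(ox - rx) + abs(oy - ry)
                for (ox, oy), (rx, ry) in zip(observed, reference))
    return total / len(observed) <= tolerance

reference = [(0.0, 0.0), (0.0, 0.5), (0.0, 1.0)]    # upward "select" motion
observed = [(0.02, 0.0), (0.01, 0.52), (0.0, 0.98)]
print(patterns_match(observed, reference))
```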
20. The method of any one of clauses 13 to 19, further comprising: determining text data indicating a speech portion of the multimedia content using a speech recognition process performed on the audio data of the multimedia content; identifying one or more features of the item based at least in part on the text data; and pre-populating at least one field of the graphical user interface with the one or more features.
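One way to picture the second step above — scanning a speech transcript for known item features — is a simple vocabulary lookup. The vocabulary is a toy assumption (a real system might draw it from the merchant's catalog), and speech recognition itself is out of scope here:

```python
# Hypothetical feature vocabulary mapping transcript words to form fields.
FEATURE_VOCAB = {"red": "color", "blue": "color", "small": "size", "large": "size"}

def features_from_transcript(transcript):
    # Identify item features mentioned in the recognized speech text.
    found = {}
    for word in transcript.lower().split():
        word = word.strip(".,!?")
        if word in FEATURE_VOCAB:
            found[FEATURE_VOCAB[word]] = word
    return found  # used to pre-populate GUI fields

print(features_from_transcript("This jacket comes in red and runs large."))
```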
1. A method implemented by at least one server computing device of a service provider, the method comprising: receiving multimedia content comprising a representation of an item provided for sale by a merchant; determining a user profile associated with a particular user device that has requested to output the multimedia content, the user profile including a purchase history of the user; determining a user preference based at least in part on the purchase history; generating identification information associated with the item, the identification information emphasizing details about the item associated with the user preference; and combining the identification information with the multimedia content such that the identification information is also displayed when the multimedia content is displayed on the particular user device.
2. The method of clause 1, further comprising: receiving transaction data associated with past transactions of the user profile, the transaction data having been utilized to process the past transactions; determining the purchase history based at least in part on the transaction data; determining one or more attributes associated with the past transactions, the one or more attributes indicating a purchasing trend of the user profile; and determining an attribute of the item corresponding to the one or more attributes, wherein generating the identification information that emphasizes details regarding the item includes emphasizing the attribute of the item corresponding to the one or more attributes.
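The trend-matching step above can be pictured as a set intersection: attributes recurring in the user's past transactions are the "trend," and an item attribute that overlaps the trend gets emphasized in the tag text. The attribute model below is a toy assumption, not the disclosed implementation:

```python
# Hypothetical sketch: which of the item's attributes match purchasing
# trends observed across the user's past transactions?
def emphasized_details(item_attributes, past_transaction_attributes):
    trends = set()
    for attrs in past_transaction_attributes:
        trends.update(attrs)  # attributes seen in any past transaction
    # Emphasize item attributes that align with those trends.
    return sorted(a for a in item_attributes if a in trends)

past = [{"organic", "blue"}, {"organic", "cotton"}]
print(emphasized_details({"organic", "cotton", "xl"}, past))
```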
3. The method of clause 1 and/or clause 2, further comprising: determining that the historical user input to the particular user device is a particular input type selected from at least one of a touch screen input type, a click input type, a quick response code input type, a sound input type, or a gesture input type, wherein the user preference includes the particular input type; and causing the identifying information to be presented as an interactive element configured to receive the particular input type.
4. The method of any one of clauses 1 to 3, further comprising: determining a level of item detail associated with items purchased in past transactions associated with the user profile, wherein the user preference indicates the level of item detail; and causing the identification information to be displayed on the particular user device, the identification information including the level of item detail.
5. A system, comprising: one or more processors; and a non-transitory computer-readable medium storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising: receiving multimedia content including a representation of an item provided for sale; determining a user profile associated with a request to output the multimedia content; determining a user preference based at least in part on a purchase history associated with the user profile; generating identification information associated with the item based at least in part on the user preference; and associating the identification information with the multimedia content such that the identification information is displayed when the multimedia content is displayed.
6. The system of clause 5, wherein the user preference comprises at least one of a color option associated with the item, a size option associated with the item, a type of item associated with the item, a brand option associated with the item, a payment fulfillment option associated with the item, or a quantity of the item.
7. The system of clauses 5 and/or 6, the operations further comprising: determining one or more item categories for previously purchased items associated with the user profile from the purchase history, the user preferences indicating the one or more item categories; determining that the item corresponds to at least one of the one or more item categories, wherein the identifying information is generated based at least in part on determining that the item corresponds to at least one of the one or more item categories.
8. The system of any one of clauses 5 to 7, the operations further comprising: determining, from a purchase history of the user profile, a historical amount of time that selectable elements have been displayed before receiving user input indicating selection of a selectable element to purchase an item; and causing the interactive element to be displayed for at least the historical amount of time while the multimedia content is output.
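The timing heuristic above could be sketched as: derive from past sessions how long a tag was typically on screen before the user selected it, and show new tags for at least that long. The average statistic, the floor, and the data shape are all assumptions for illustration:

```python
# Hypothetical sketch of the display-duration heuristic.
def display_duration(history_seconds, minimum=3.0):
    """history_seconds: how long past tags were shown before selection."""
    if not history_seconds:
        return minimum  # no history yet; fall back to a floor
    avg = sum(history_seconds) / len(history_seconds)
    return max(avg, minimum)

print(display_duration([4.0, 6.0, 8.0]))
```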
9. The system of any one of clauses 5 to 8, the operations further comprising: determining payment instrument information used in past payment transactions from a purchase history of the user profile; receiving input data indicating a selection of an item for purchase; and based at least in part on the input data, causing the user device to display a graphical user interface configured to allow the user to purchase the item, the graphical user interface including an input field pre-populated with payment instrument information.
10. The system of any one of clauses 5 to 9, the operations further comprising: receiving, from one or more merchants, first data corresponding to the purchase history, the first data indicating items associated with past transactions of the user profile; determining one or more attributes associated with the items; receiving input data indicating a selection of the item for purchase; and based at least in part on the input data, causing the user device to display a graphical user interface configured to allow the user to purchase the item, the graphical user interface including an input field having one or more options pre-populated for the item based at least in part on the one or more attributes.
11. The system of any one of clauses 5 to 10, the operations further comprising: determining from the purchase history of the user profile a plurality of merchants that have provided items to the user associated with the user profile; determining that a first amount of transactions with a first merchant of the plurality of merchants exceeds a second amount of transactions with a second merchant of the plurality of merchants; determining that the first merchant provides the item for sale; and determining that the second merchant provides the item for sale, and wherein the identification information includes an identifier of the first merchant and not an identifier of the second merchant based at least in part on the first transaction amount exceeding the second transaction amount.
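As a rough illustration of the merchant-selection rule in the clause above: when several merchants offer the item, the tag identifies the merchant with the greater transaction amount in the user's history. Data shapes and names are assumptions:

```python
# Hypothetical sketch: prefer the merchant the user has transacted
# with most among those offering the item for sale.
def preferred_merchant(transaction_counts, merchants_with_item):
    candidates = [m for m in merchants_with_item if m in transaction_counts]
    if not candidates:
        return None
    # The merchant whose transaction amount exceeds the others wins the tag.
    return max(candidates, key=lambda m: transaction_counts[m])

counts = {"merchant-a": 12, "merchant-b": 3}
print(preferred_merchant(counts, ["merchant-a", "merchant-b"]))
```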
12. The system of any one of clauses 5 to 11, the operations further comprising: receiving feedback data indicating user interaction with the identification information from a user device associated with the user profile; modifying the user preference based at least in part on the feedback data; and displaying the modified identification information on the user device according to the modified user preference.
13. A method implemented at least in part by one or more computers of a payment processing service provider, the method comprising: receiving multimedia content including a representation of an item provided for sale; determining a user profile associated with a request to output the multimedia content; determining a user preference based at least in part on a purchase history associated with the user profile; generating, based at least in part on the user preference, identification information associated with the item, the identification information indicating details about the item; generating an interactive element containing the identification information; and associating the interactive element with the multimedia content such that the interactive element is displayed when the multimedia content is displayed.
14. The method of clause 13, further comprising: receiving, from one or more merchants, first data corresponding to the purchase history, the first data indicating items associated with past transactions of the user profile; determining one or more purchasing trends associated with the past transactions; and determining a characteristic of the item corresponding to the one or more purchasing trends, wherein generating the identification information includes emphasizing the characteristic.
15. The method of clause 13 and/or clause 14, further comprising: determining that the historical user input associated with the user profile is a particular input type, wherein the user preference includes the particular input type; and causing the identifying information to be presented as an interactive element configured to receive the particular input type.
16. The method of any of clauses 13 to 15, wherein the data representing the purchase history is received from a payment processing service provider.
17. The method of any one of clauses 13 to 16, further comprising: determining that historical user input associated with the user profile indicates that a plurality of items represented in other multimedia data are co-purchased; based at least in part on determining that the plurality of items are co-purchased, refraining from causing display of a graphical user interface configured to allow purchase of the item until the multimedia content ceases to be output; and causing the graphical user interface to be displayed in response to the multimedia content ceasing to be output.
18. The method of any one of clauses 13 to 17, further comprising: determining a device type of the device that is utilized in association with the user profile to output the multimedia content; determining, based at least in part on the device type, one or more user input types that the device is capable of receiving; and generating an interactive element comprising the identifying information, the interactive element configured to be displayed when the multimedia content is displayed, and the interactive element configured to receive one or more user input types.
19. The method of any one of clauses 13 to 18, further comprising: determining, from the purchase history, that the user profile is associated with at least a threshold number of past purchases in which items were purchased through interactions with other multimedia content; determining a discount value associated with the user profile based at least in part on the user profile being associated with at least the threshold number of past purchases; and in response to user input selecting the item for purchase, causing display of a graphical user interface indicating a payment transaction to which the discount value is to be applied.
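A minimal sketch of the loyalty rule above: profiles with at least a threshold number of past tag-driven purchases get a discount applied to the new transaction. The threshold, rate, and rounding are illustrative assumptions, not values from the disclosure:

```python
# Hypothetical sketch of the threshold-based discount.
def discounted_total(total, past_tag_purchases, threshold=5, rate=0.10):
    if past_tag_purchases >= threshold:
        # Profile qualifies: apply the discount value to the transaction.
        return round(total * (1 - rate), 2)
    return total  # below the threshold; no discount applied

print(discounted_total(40.00, 7))
```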
20. The method of any one of clauses 13 to 19, further comprising: storing data indicating historical interactions with other multimedia content to purchase items; determining one or more features of the other multimedia content; identifying content that has not been viewed in association with the user profile and that is associated with the one or more features of the other multimedia content; and sending, to a user device associated with the user profile, a recommendation to view the content.
The phrases "in some examples," "according to various examples," "in the illustrated examples," "in one example," "in other examples," "various examples," "some examples," and the like generally indicate that a particular feature, structure, or characteristic following the phrase is included in at least one example of the invention, and may be included in more than one example of the invention. Moreover, such phrases are not necessarily referring to the same example or to different examples.
If the specification states that a component or feature "may," "can," "could," or "might" include or have a characteristic, that particular component or feature is not required to include or have that characteristic.
Furthermore, the above description is directed to devices and applications related to payment technology. However, it should be understood that the techniques may be extended to any device and application. Moreover, the techniques described herein may be configured to operate regardless of the type of payment object reader, POS terminal, web application, mobile application, POS topology, payment card, computer network, and environment.
The various figures included herein are flowcharts illustrating example methods relating to the techniques described herein. The illustrated methods are described with reference to FIGS. 5-12 for convenience and ease of understanding. However, the illustrated methods are not limited to being performed using the components described in FIGS. 1-4D, 13, and 14, and such components are not limited to performing the methods illustrated herein.
Furthermore, the above-described methods are illustrated as a collection of blocks in a logic flow diagram, which represent a sequence of operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the blocks represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by a processor, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, etc. that perform particular functions or implement particular abstract data types. The order of the described operations is not intended to be construed as a limitation, and any number of the described blocks can be combined in any order and/or in parallel to implement a process. In some embodiments, one or more blocks of the process may be omitted entirely. Furthermore, these methods may be combined with each other or with other methods, in whole or in part.
The foregoing is merely illustrative of the principles of this disclosure and various modifications can be made by those skilled in the art without departing from the scope of the invention. The above examples are presented for purposes of illustration and not limitation. The present disclosure may take many other forms in addition to those explicitly described herein. Therefore, it is emphasized that the present disclosure is not limited to the specifically disclosed methods, systems and apparatus, but is intended to include variations and modifications thereof that are within the spirit of the appended claims.
As a further example, changes may be made to the apparatus or process limitations (e.g., size, configuration, assembly, sequence of process steps, etc.) to further optimize the structures, apparatus, and methods provided, as shown and described herein. Regardless, the structures and devices and associated methods described herein have many applications. Accordingly, the disclosed subject matter should not be limited to any single example described herein, but rather should be construed in breadth and scope in accordance with the appended claims.
Claims (20)
1. A method implemented by at least one server computing device of a service provider, the method comprising:
receiving multimedia content, the multimedia content comprising a representation of an item provided for sale by a merchant;
identifying the item in the multimedia content by one or more identification techniques;
determining identification information associated with the item based at least in part on inventory data associated with the merchant;
associating the identifying information with an interactive element to be presented in association with the multimedia content, wherein the interactive element, when selected, causes a user device to display a graphical user interface configured to allow a customer to purchase the item;
superimposing, at least in part, the interactive element onto a portion of the multimedia content for customer interaction;
receiving input data from the user device of the customer indicating a customer interaction with the interaction element; and
based at least in part on the input data, causing the user device of the customer to display the graphical user interface configured to allow the customer to purchase the item.
2. The method of claim 1, further comprising:
receiving inventory data from a system associated with the merchant, the inventory data indicating a current inventory of the item available for purchase from the merchant; and
associating the digital representation of the current inventory with the interactive element, wherein overlaying the interactive element on a portion of the multimedia content includes overlaying the digital representation of the current inventory on a portion of the multimedia content.
3. The method of claim 1, further comprising:
determining to refrain from causing display of the graphical user interface until the multimedia content ceases to be output;
storing first data, the first data indicating that the interactive element is selected;
receiving additional input data indicating a customer interaction with an additional interactive element associated with an additional item in the multimedia content;
storing second data, the second data indicating that the additional interactive element is selected; and
causing, based at least in part on the multimedia content ceasing to be output and using the first data and the second data, the graphical user interface to display purchase information for the item and the additional item.
4. The method of claim 1, further comprising:
determining attributes of the items presented in the multimedia content, the attributes including selectable options associated with the items;
receiving cost data from a system associated with the merchant, the cost data indicating a cost of the item having the attribute;
causing the graphical user interface to include the cost; and
causing the graphical user interface to include the attribute as pre-filled information in a user input field.
5. A system, comprising:
one or more processors; and
a non-transitory computer-readable medium storing instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising:
receiving multimedia content comprising a representation of an item;
identifying the item in the multimedia content;
determining identification information associated with the item;
associating the identifying information with an interactive element to be presented in association with the multimedia content, wherein the interactive element is selectable by a user of a user device;
associating the interactive element with a portion of the multimedia content;
receiving input data from the user device, the input data indicating a user interaction with the interactive element; and
based at least in part on the input data, causing the user device to display a graphical user interface configured to allow the user to obtain the item.
6. The system of claim 5, the operations further comprising:
determining a first time indicator of when the representation of the item is referenced in the multimedia content;
determining a second time indicator of when the representation of the item ceases to be referenced in the multimedia content; and
causing the interactive element to be displayed in association with the multimedia content, from the first time indicator to the second time indicator, when the multimedia content is output by the user device.
7. The system of claim 5, the operations further comprising:
determining a location on a visual window of a user device, wherein the item is presented at the location during output of the multimedia content by the user device; and
causing the interactive element to be rendered in association with the location when the multimedia content is output via the user device.
8. The system of claim 5, the operations further comprising:
determining a portion of the multimedia content in which the representation is displayed;
identifying attributes of the item using image data from the portion of the multimedia content; and
including the attribute as at least a portion of the identification information.
9. The system of claim 5, the operations further comprising:
receiving text data representing one or more comments, the comments being associated with the multimedia content;
identifying one or more keywords associated with the item from the text data; and
modifying the identification information based at least in part on the one or more keywords.
10. The system of claim 5, wherein the multimedia content is associated with a first merchant, and the operations further comprise:
determining that the item is not available from the first merchant when the user input is received;
identifying one or more second merchants from which the item may be obtained; and
causing the graphical user interface to include identifiers of the one or more second merchants.
11. The system of claim 5, the operations further comprising:
generating first data representing the interactive element, the first data being separate from the multimedia content;
receiving an indication that the multimedia content has been requested for output on the user device; and
in response to the indication, causing the first data to be superimposed on the multimedia content when the multimedia content is output on the user device.
12. The system of claim 5, wherein the graphical user interface is provided by a payment processing service, and the operations further comprise:
receiving item information about the item from a system associated with a merchant;
receiving payment information from the user device via the graphical user interface; and
initiating a payment transaction for purchase of the item using the payment information and the item information.
13. A method implemented at least in part by one or more computers of a payment processing service provider, the method comprising:
receiving multimedia content comprising a representation of an item;
identifying the item in the multimedia content;
determining identification information associated with the item;
associating the identifying information with an interactive element to be presented in association with the multimedia content;
associating the interactive element with a portion of the multimedia content;
causing the interactive element to be displayed when the multimedia content is output on a user device;
receiving input data from the user device while the interactive element is displayed, the input data indicating a user interaction with the interactive element; and
based at least in part on the input data, causing the user device to display a graphical user interface configured to allow the user to obtain the item.
14. The method of claim 13, further comprising:
receiving inventory data from a system associated with a merchant selling the item, the inventory data indicating a current inventory of the item available for purchase from the merchant;
determining that the current inventory is less than a threshold inventory value; and
based at least in part on the current inventory being below the threshold inventory value, sending a recommendation for a replacement item that is currently in inventory.
15. The method of claim 13, further comprising:
storing first data, the first data indicating that the interactive element is selected;
receiving additional input data indicating a customer interaction with an additional interactive element associated with an additional item in the multimedia content;
storing second data, the second data indicating that the additional interactive element is selected; and
based at least in part on the first data and the second data, causing the graphical user interface to display purchase information for the item and the additional item.
16. The method of claim 13, further comprising:
determining an attribute comprising at least one of a size, a color, or a number of the items presented in the multimedia content; and
causing the graphical user interface to include the attribute as pre-populated information in a user input field.
17. The method of claim 13, further comprising:
determining a geographic area associated with the user device when the user input data is received; and
determining a current inventory of the item in the geographic area, wherein causing the interactive element to be displayed includes causing an indication of the current inventory to be displayed.
18. The method of claim 13, wherein the multimedia content comprises real-time streaming media or near real-time streaming media content, the receiving the multimedia content comprises receiving the multimedia content from a device associated with a merchant, and the method further comprises:
analyzing image data of the multimedia content using one or more computer vision processes to identify objects depicted in the multimedia content prior to sending an instance of the multimedia content to the user device;
generating first data comprising the interactive element prior to transmitting the instance of the multimedia content, wherein the interactive element is based at least in part on the identified object; and
sending the first data and the instance of the multimedia content to the user device.
19. The method of claim 13, further comprising:
receiving image data from the user device describing a gesture made by a user of the user device while the multimedia content is being output, the user input data comprising the image data;
determining a movement pattern of the gesture based at least in part on the analysis of the image data;
determining that the movement pattern corresponds to a reference movement pattern, the reference movement pattern indicating that the user has provided input selecting an item to purchase; and
causing an action to be performed based at least in part on the movement pattern corresponding to the reference movement pattern.
20. The method of claim 13, further comprising:
determining text data indicative of a speech portion of the multimedia content using speech recognition processing performed on audio data of the multimedia content;
identifying one or more features of the item based at least in part on the text data; and
pre-populating at least one field of the graphical user interface with the one or more features.
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/014,250 US11403692B2 (en) | 2020-09-08 | 2020-09-08 | Customized e-commerce tags in realtime multimedia content |
US17/014,250 | 2020-09-08 | ||
US17/014,280 US11893624B2 (en) | 2020-09-08 | 2020-09-08 | E-commerce tags in multimedia content |
US17/014,280 | 2020-09-08 | ||
PCT/US2021/047931 WO2022055723A1 (en) | 2020-09-08 | 2021-08-27 | E-commerce tags in multimedia content |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116018608A true CN116018608A (en) | 2023-04-25 |
Family
ID=77897755
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202180054395.1A Pending CN116018608A (en) | 2020-09-08 | 2021-08-27 | Electronic commerce label in multimedia content |
Country Status (6)
Country | Link |
---|---|
EP (1) | EP4205063A1 (en) |
JP (1) | JP2023540861A (en) |
CN (1) | CN116018608A (en) |
AU (1) | AU2021339550A1 (en) |
CA (1) | CA3186532A1 (en) |
WO (1) | WO2022055723A1 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11893624B2 (en) | 2020-09-08 | 2024-02-06 | Block, Inc. | E-commerce tags in multimedia content |
US11403692B2 (en) | 2020-09-08 | 2022-08-02 | Block, Inc. | Customized e-commerce tags in realtime multimedia content |
US11657401B2 (en) * | 2021-02-08 | 2023-05-23 | Capital One Services, Llc | Systems and methods for warranty coverage alerts based on acquisition data |
CN118365431B (en) * | 2024-06-19 | 2024-10-11 | 广州大事件网络科技有限公司 | Big data-based commodity recommendation method and system for electronic commerce platform |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190297390A1 (en) * | 2013-03-05 | 2019-09-26 | Brandon Grusd | Method and system for physically tagging objects prior to creation of an interactive video |
CA3028710A1 (en) * | 2016-06-23 | 2017-12-28 | Capital One Services, Llc | Systems and methods for automated object recognition |
WO2019171128A1 (en) * | 2018-03-06 | 2019-09-12 | Yogesh Chunilal Rathod | In-media and with controls advertisement, ephemeral, actionable and multi page photo filters on photo, automated integration of external contents, automated feed scrolling, template based advertisement post and actions and reaction controls on recognized objects in photo or video |
2021
- 2021-08-27 EP EP21773961.4A patent/EP4205063A1/en active Pending
- 2021-08-27 JP JP2023507531A patent/JP2023540861A/en active Pending
- 2021-08-27 CA CA3186532A patent/CA3186532A1/en active Pending
- 2021-08-27 CN CN202180054395.1A patent/CN116018608A/en active Pending
- 2021-08-27 AU AU2021339550A patent/AU2021339550A1/en not_active Abandoned
- 2021-08-27 WO PCT/US2021/047931 patent/WO2022055723A1/en unknown
Also Published As
Publication number | Publication date |
---|---|
EP4205063A1 (en) | 2023-07-05 |
WO2022055723A1 (en) | 2022-03-17 |
CA3186532A1 (en) | 2022-03-17 |
JP2023540861A (en) | 2023-09-27 |
AU2021339550A1 (en) | 2023-05-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11687911B2 (en) | Application integration for contactless payments | |
US11798062B2 (en) | Customized e-commerce tags in realtime multimedia content | |
US11544695B2 (en) | Transaction identification by comparison of merchant transaction data and context data | |
US12056775B2 (en) | Transacting via social media interactions | |
US11893624B2 (en) | E-commerce tags in multimedia content | |
JP7637780B2 (en) | Contextual communication routing method and system | |
CN116018608A (en) | Electronic commerce label in multimedia content | |
US11983763B1 (en) | Modeling to generate dynamic electronic representations | |
US11763360B1 (en) | Intelligently identifying items for resale | |
US20230209116A1 (en) | Integration of platforms for multi-platform content access | |
US20240193502A1 (en) | Intelligent virtualization of merchants | |
US20230336512A1 (en) | Contextual communication routing methods and systems | |
US20240177256A1 (en) | Adaptive Media Content Supervision Platform | |
US20250069055A1 (en) | Interactive interface to create shared purchase channel | |
JP2025032111A (en) | Embedded Card Reader Security | |
US20240330376A1 (en) | Event-Based Customized Recommendations in a Distributed Network | |
US20230306391A1 (en) | Processing payments using electronic messages | |
US20240396890A1 (en) | Decentralized Trust Establishment Using Sentiment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||