US20190121509A1 - Image Grouping with Audio Commentaries System and Method - Google Patents
- Publication number
- US20190121509A1 (application US16/222,302)
- Authority
- US
- United States
- Prior art keywords
- audio
- image
- commentaries
- image grouping
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/10—Office automation; Time management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
- G06Q30/0641—Shopping interfaces
- G06Q30/0643—Graphical representation of items or shoppers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/16—Real estate
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/06—Message adaptation to terminal or network requirements
- H04L51/066—Format adaptation, e.g. format conversion or compression
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/07—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail characterised by the inclusion of specific contents
- H04L51/10—Multimedia information
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/27—Server based end-user applications
- H04N21/274—Storing end-user multimedia data in response to end-user request, e.g. network recorder
- H04N21/2743—Video hosting of uploaded data from client
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/57—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for processing of video signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/04—Real-time or near real-time messaging, e.g. instant messaging [IM]
-
- H04L51/38—
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/58—Message adaptation for wireless communication
Definitions
- the audio images 520 - 550 are presented in a queue in reverse chronological order, with the most recently received audio image 520 being presented at the top.
- the audio images 520 - 550 are presented in a hierarchical in-box.
- At the top of the hierarchy are participants—the party or parties on the other side of a conversation with the user.
- the in-box presents audio images associated with that participant as the next level in the hierarchy. These audio images are preferably presented in reverse chronological order, but this could be altered to suit user preferences.
- the in-box may then present the separate commentaries made in that audio image as the lowest level of the hierarchy. A user would then directly select a particular audio commentary for viewing in the app. Alternatively, the app could present the latest audio commentary to the user after the user selected a particular audio image without presenting the separate commentaries for individual selection.
Abstract
A system and method are presented to allow audio communication between users concerning a group of images. A first mobile device has an app that selects a plurality of images and then records audio commentary for the plurality of images. The images and audio commentary are grouped into an image grouping, and the image grouping is transmitted to a second mobile device. Reply commentaries to the image grouping can also be associated with a particular image in the grouping. Image groupings can be presented by reviewing all commentaries on a single image first before moving to a second image, or by reviewing all commentaries made in a user-session before moving to a second user session.
Description
- This application is a continuation of U.S. patent application Ser. No. 15/181,529, filed on Jun. 14, 2016, which in turn is a continuation-in-part of U.S. patent application Ser. No. 14/043,385, filed on Oct. 1, 2013 (now U.S. Pat. No. 9,894,022). These priority applications are hereby incorporated by reference in their entireties. The present application is also related to the content found in U.S. patent application Ser. No. 14/542,599 (filed on Nov. 15, 2014); U.S. patent application Ser. No. 14/521,576 (filed on Oct. 23, 2014); U.S. patent application Ser. No. 14/227,032 (filed on Mar. 27, 2014, now U.S. Pat. No. 10,057,731); and U.S. patent application Ser. No. 14/179,602 (filed on Feb. 23, 2014, now U.S. Pat. No. 9,977,591); all of which are also hereby incorporated by reference in their entireties.
- The present application relates to the field of image-centered communication between users. More particularly, the described embodiments relate to a system and method for bi-directional communications centered on a plurality of still images with audio commentaries gathered into an image grouping.
- One embodiment of the present invention provides audio communication between users concerning an image. The originator of the communication uses an app operating on a mobile device to create or select a photograph or other image. The same app is then used to attach an audio commentary to the image. The app encodes the audio commentary and the image together into a video file that can be viewed by video players included with modern mobile devices. This video file is one example of an “audio image” file used by the present invention.
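The patent only requires that the image and commentary be combined into a standard, widely playable video file (H.264 is given as the iOS example). As a minimal sketch of what that step could look like outside the iOS SDK, the following builds an ffmpeg command line that loops a single still frame under an audio track; the file names, and the choice of ffmpeg itself, are illustrative assumptions, not part of the patent.

```python
# Build an ffmpeg command that muxes one still image with an audio
# commentary into an H.264 video -- one possible "audio image" file.
# File names and the use of ffmpeg are illustrative assumptions.

def build_mux_command(image_path, audio_path, out_path):
    return [
        "ffmpeg",
        "-loop", "1",        # repeat the single frame for the whole track
        "-i", image_path,    # video input: the selected still image
        "-i", audio_path,    # audio input: the recorded commentary
        "-c:v", "libx264",   # encode the static frame with H.264
        "-c:a", "aac",       # encode the commentary with AAC
        "-shortest",         # stop when the audio commentary ends
        out_path,
    ]

cmd = build_mux_command("photo.jpg", "commentary.m4a", "audio_image.mp4")
print(" ".join(cmd))
```

Any player that understands H.264/AAC can then present the result as an ordinary video, which is what lets non-app recipients view it.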
- The originator can then select one or more recipients to receive the video file. Recipients are identified by e-mail addresses, cell phone numbers, or user identifiers used by a proprietary communication system. The app analyzes each recipient address to determine the preferred mode of delivery for the video file. If the recipient also uses the app, the file is delivered through the proprietary communication system and received by the app on the recipient's mobile device. Otherwise, the file is delivered through MMS (if the recipient is identified by a telephone number) or through e-mail (if the recipient is identified by an e-mail address). Regardless of how the file is sent, a message containing the file and the particulars of the transmission is sent to the server managing the proprietary communication system.
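The routing decision above can be sketched as a small classifier. The phone-number pattern and the `registered_users` lookup (standing in for the server-side user check) are assumptions for illustration:

```python
import re

def delivery_mode(address, registered_users):
    """Pick a delivery channel for an audio image file, mirroring the
    routing described above. `registered_users` stands in for the
    server-side lookup of known app users."""
    if address in registered_users:
        return "app"                          # proprietary channel via the server
    if re.fullmatch(r"\+?[\d\-\s()]{7,}", address):
        return "mms"                          # phone number -> MMS message
    if "@" in address:
        return "email"                        # e-mail address -> attachment
    raise ValueError("unrecognized audio image address")

assert delivery_mode("carol@example.com", {"dave@example.com"}) == "email"
assert delivery_mode("+1 555 867 5309", set()) == "mms"
assert delivery_mode("dave@example.com", {"dave@example.com"}) == "app"
```

Note the order of the checks: a registered app user keeps the proprietary channel even though their address would also match the e-mail or phone patterns.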
- When the file is sent through MMS or e-mail, it is accompanied by a link that allows the recipient to download an app to their mobile device to continue the dialog with the originator. When the link is followed, the user can download the app. Part of the set-up process for the app requires that new users identify their e-mail address and cell phone number. This set-up information is communicated to the proprietary server, which can then identify audio image messages that were previously sent to the recipient through either e-mail or MMS. Those audio image messages are then presented through an in-box in the app, where they can be selected for downloading and presentation to the newly enrolled user.
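The server-side matching step described above amounts to collecting every earlier message whose recipient address is now claimed by the new user. A minimal sketch (the data shapes are assumptions; the patent only specifies the behavior):

```python
def pending_messages_for(new_user_addresses, message_log):
    """After a new user registers their e-mail address(es) and cell
    phone number(s), collect messages previously sent to any of those
    addresses so the app can list them in its in-box."""
    addresses = set(new_user_addresses)
    return [m for m in message_log if m["recipient"] in addresses]

log = [
    {"id": 1, "recipient": "erin@example.com"},
    {"id": 2, "recipient": "+1 555 0100"},
    {"id": 3, "recipient": "frank@example.com"},
]
mine = pending_messages_for(["erin@example.com", "+1 555 0100"], log)
assert [m["id"] for m in mine] == [1, 2]
```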
- All recipients of the audio image file can play the file in order to view the image and hear the originator's audio commentary. Recipients using the app on their mobile devices can record a reply audio commentary. This reply audio is then encoded by the app into a new video file, where the reply audio is added to the beginning of the previous audio track and the video track remains a static presentation of the originally selected image. This new video file can be returned to the originator, allowing the originator to create a new response to the reply audio.
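The reply behavior above — newest commentary at the head of the audio track, static image unchanged — can be modeled with a simple structure. A real implementation would re-encode a video file; this sketch (an assumption) only tracks the ordering:

```python
def add_reply(audio_image, reply_track):
    """Prepend the newest commentary to the audio track list while the
    image part of the audio image stays unchanged, as described above."""
    return {
        "image": audio_image["image"],                    # static frame is reused
        "tracks": [reply_track] + audio_image["tracks"],  # newest commentary first
    }

msg = {"image": "photo.jpg", "tracks": ["original commentary"]}
msg = add_reply(msg, "reply from recipient")
msg = add_reply(msg, "response from originator")
assert msg["tracks"][0] == "response from originator"
assert msg["tracks"][-1] == "original commentary"
```

Playing the first track therefore always gives the most recent turn in the conversation, which matches the app behavior of presenting the most recently added commentary first.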
- In addition, a group of images can be selected for inclusion in a single image grouping. The sender selects the images, and then indicates the order in which the images should be presented. The user starts to record the audio commentary while viewing the first image, and then provides input to the mobile device to indicate when to switch to the next image. The timed transitions between grouped images can be recorded into a video file by the sending device, or be recorded as metadata for translation by the app on the recipient's device. Alternatively, the user can record audio commentaries for each of the images separately. In this alternative embodiment, each image is associated with its own audio commentary and any reply commentaries made to the image. The images are grouped together in a single image grouping via metadata. When the grouping is displayed, users can select how they wish to listen to its commentaries. Users can choose to view all of the commentaries relating to a single image first before moving to the next image, or view all of the commentaries from a particular user session across multiple images before viewing the commentaries from a later user session.
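The two review orderings above fall out of sorting the same set of commentaries on different keys. The field names and sample data below are illustrative assumptions:

```python
# Each commentary is tagged with the image it describes and the user
# session in which it was recorded (field names are assumptions).
comments = [
    {"image": 1, "session": 1, "text": "kitchen, first pass"},
    {"image": 2, "session": 1, "text": "yard, first pass"},
    {"image": 1, "session": 2, "text": "kitchen, reply"},
    {"image": 2, "session": 2, "text": "yard, reply"},
]

# Ordering A: exhaust every commentary on one image, then move on.
by_image = sorted(comments, key=lambda c: (c["image"], c["session"]))

# Ordering B: play one whole user session across images, then the next.
by_session = sorted(comments, key=lambda c: (c["session"], c["image"]))

assert [c["text"] for c in by_image] == [
    "kitchen, first pass", "kitchen, reply",
    "yard, first pass", "yard, reply",
]
assert [c["text"] for c in by_session] == [
    "kitchen, first pass", "yard, first pass",
    "kitchen, reply", "yard, reply",
]
```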
-
FIG. 1 is a schematic view of a system utilizing the present invention. -
FIG. 2 is a schematic diagram showing a database accessed by a server used in the system of FIG. 1. -
FIG. 3 is a schematic diagram showing the components of an audio image file. -
FIG. 4 is a schematic diagram showing the components of a new audio image file after an audio comment is added to the audio image file of FIG. 3. -
FIG. 5 is a plan view of a mobile device displaying a user interface provided by an app. -
FIG. 6 is a plan view of the mobile device of FIG. 5 displaying a second user interface provided by the app. -
FIG. 7 is a flow chart showing a method of creating, transmitting, and responding to an audio image file. -
FIG. 8 is a flow chart showing the detailed steps of responding to an audio image file. -
FIG. 9 is a flow chart showing the method of receiving an audio image file without the initial use of an app. -
FIG. 10 is a flow chart showing a method of creating an audio-image file having multiple images. -
FIG. 11 is a portion of an alternative embodiment flowchart that replaces element 1002 in the flow chart of FIG. 10. -
FIG. 12 is a flow chart showing a method for displaying a grouping of multiple images. -
FIG. 13 is a schematic diagram showing two different orderings for the presentation of commentaries and images in a grouping of multiple images. -
FIG. 14 is a schematic diagram showing the parties that could make use of the grouping of images in FIG. 13 in the context of a real estate home search. -
FIG. 1 shows a system 100 in which a mobile device 110 can create and transmit audio image files to other users. Audio image files allow users to have a bi-directional, queued, audio communication about a particular visual image or presentation. The mobile device 110 can communicate over a wide area data network 150 with a plurality of computing devices. In FIG. 1, the mobile device 110 communicates over network 150 with an audio image server 160 to send an audio image to mobile device 168, and communicates over the same network 150 with an e-mail server 170 in order to send an e-mail containing an audio image to a second mobile device 174. In one embodiment, the wide area data network is the Internet. The mobile device 110 is also able to communicate with a multimedia messaging service center (“MMS center”) 180 over MMS network 152 in order to send an audio image within an MMS message to a third mobile device 184. - The
mobile device 110 can take the form of a smart phone or tablet computer. As such, the device 110 will include a microphone 112 and a camera 114 for receiving audio and visual inputs. The device 110 also includes a touch screen user interface 116. In the preferred embodiment, the touch screen 116 both presents visual information to the user over the display portion of the touch screen 116 and also receives touch input from the user. - The
mobile device 110 communicates over the data network 150 through a data network interface 118. In one embodiment, the data network interface 118 connects the device 110 to a local wireless network that provides connection to the wide area data network 150. The data network interface 118 preferably connects via one of the Institute of Electrical and Electronics Engineers' (IEEE) 802.11 standards. In one embodiment, the local network is based on TCP/IP, and the data network interface 118 utilizes a TCP/IP protocol stack. - Similarly, the
mobile device 110 communicates over the MMS network 152 via a cellular network interface 120. In the preferred embodiment, the mobile device 110 sends multi-media messaging service (“MMS”) messages via the standards provided by a cellular network 152, meaning that the MMS network 152 used for data messages is the same network 152 that is used by the mobile device 110 to make cellular voice calls. In some embodiments, the provider of the cellular data network also provides an interface to the wide area data network 150, meaning that the MMS or cellular network 152 could be utilized to send e-mail and proprietary messages as well as MMS messages. This means that the actual physical network interface used by the mobile device 110 is relatively unimportant. Consequently, the following description will focus on three types of messaging: e-mail, MMS, and proprietary messaging, without necessarily limiting these messages to a particular network or network interface. - The
mobile device 110 also includes a processor 122 and a memory 130. The processor 122 can be a general purpose CPU, such as those provided by Intel Corporation (Mountain View, Calif.) or Advanced Micro Devices, Inc. (Sunnyvale, Calif.), or a mobile specific processor, such as those designed by ARM Holdings (Cambridge, UK). Mobile devices such as device 110 generally use specific operating systems 140 designed for such devices, such as iOS from Apple Inc. (Cupertino, Calif.) or ANDROID OS from Google Inc. (Menlo Park, Calif.). The operating system 140 is stored on memory 130 and is used by the processor 122 to provide a user interface for the touch screen display 116, handle communications for the device 110, and to manage and provide services to applications (or apps) that are stored in the memory 130. In particular, the mobile device 110 is shown with an audio image app 132, an MMS app 142, and an e-mail app 144. The MMS app 142 is responsible for sending, receiving, and managing MMS messages over the MMS network 152. Incoming messages are received from the MMS center 180, which temporarily stores incoming messages until the mobile device 110 is able to receive them. Similarly, the e-mail app 144 sends, receives, and manages e-mail messages with the aid of one or more e-mail servers 170. - The
audio image app 132 is responsible for the creation of audio image files, the management of multiple audio image files, and the sending and receiving of audio image files. In one embodiment, the audio image app 132 contains programming instructions 134 for the processor 122 as well as audio image data 136. The image data 136 will include all of the undeleted audio image files that were created and received by the audio image app 132. In the preferred embodiment, the user is able to delete old audio image files that are no longer desired in order to save space in memory 130. - The
app programming 134 instructs the processor 122 how to create audio image files. The first step in so doing is either the creation of a new image file using camera 114, or the selection of an existing image file 146 accessible by the mobile device 110. The existing image file 146 may be retrieved from the memory 130 of the mobile device 110, or from a remote data storage service (not shown in FIG. 1) accessible over data network 150. The processor 122 then uses the display 116 to show the image to the user, and allows the user to input an audio commentary using the microphone 112. The app programming 134 instructs the processor 122 how to combine the recorded audio data with the image into an audio image file. In some embodiments, the audio image file will take the form of a standard video file. In the preferred embodiment, the app programming 134 takes advantage of the ability to link to existing routines in the operating system 140 in order to render this video file. In most cases, these tools take the form of a software development kit (or “SDK”) or access to an application programming interface (or “API”). For example, Apple's iOS gives third-party apps access to an SDK to render videos using the H.264 video codec. - After the
app programming 134 causes the processor 122 to create the video file (one type of an audio image file), the app programming 134 causes the processor 122 to present a user input screen on display 116 that allows the user to select a recipient of the audio image file. In one embodiment, the user is allowed to select recipients from existing contact records 148 that already exist on the mobile device 110. These same contact records may be used by the MMS app 142 to send MMS messages and the e-mail app 144 to send e-mail messages. In one embodiment, when the user selects a contact as a recipient, the app programming 134 identifies either an e-mail address or a cell phone number for the recipient. - Once the recipient is identified, the
app 132 determines whether the audio image file should be sent to the recipient using the audio image server 160 and its proprietary communications channel, or should be sent via e-mail or MMS message. This determination may be based on whether or not the recipient mobile device is utilizing the audio image app 132. A mobile device is considered to be using the audio image app 132 if the app 132 is installed on the device and the user has registered as a user of the app 132 with the audio image server 160. In FIG. 1, mobile device 168 is using the audio image app 132, while mobile devices 174 and 184 are not using the app 132. - To make this determination, the
app programming 134 instructs the processor 122 to send a user verification request containing a recipient identifier (such as the recipient's e-mail address or cell phone number, either of which could be considered the recipient's “audio image address”) to the audio image server 160. The server 160 is a programmed computing device operating a processor 161 under control of server programming 163 that is stored on the memory 162 of the audio image server 160. The processor 161 is preferably a general purpose CPU of the type provided by Intel Corporation or Advanced Micro Devices, Inc., operating under the control of a general purpose operating system such as Mac OS by Apple, Inc., Windows by Microsoft Corporation (Redmond, Wash.), or Linux (available from a variety of sources under open source licensing restrictions). The server 160 is in further communication with a database 164 that contains information on audio image users, the audio image addresses of the users, and audio image files. The server 160 responds to the user verification request by consulting the database 164 to determine whether each recipient's audio image address is associated in the database 164 with a known user of the app 132. The server 160 then informs the mobile device 110 of its findings. - Although the
server 160 is described above as a single computer with a single processor 161, it would be straightforward to implement server 160 as a plurality of separate physical computers operating under common or cooperative programming. Consequently, the terms server, server computer, or server computers should all be viewed as covering situations utilizing one or more than one physical computer. - If the
server 160 indicates that the recipient device 168 is associated with a known user of the app 132, then, in one embodiment, the audio image file 166 is transmitted to that mobile device 168 via the server 160. To do so, the mobile device 110 transmits to the server 160 the audio image video file along with metadata that identifies the sender and recipient of the file 166. The server 160 stores this information in database 164, and informs the recipient mobile device 168 that it has received an audio image file 166. If the device 168 is powered on and connected to the data network 150, the audio image file 166 can be immediately transmitted to the mobile device 168, where it is received and managed by the audio image app 132 on that device 168. The audio image app 132 would then inform its user that the audio image file is available for viewing. In the preferred embodiment, the app 132 would list all received audio image files in a queue for selection by the user. When one of the files is selected, the app 132 would present the image and play the most recently added audio commentary made about that image. The app 132 would also give the user of device 168 the ability to record a reply commentary to the image, and then send that reply back to mobile device 110 in the form of a new audio image file. The new audio image file containing the reply comment could also be forwarded to third parties. - If the
server 160 indicates that the recipient device 174 or 184 is not associated with a known user of the audio image app 132, the mobile device 110 will send the audio image file without using the proprietary communication system provided by the audio image server 160. If the audio image address is an e-mail address, the audio image app 132 on device 110 will create an e-mail message 172 to that address. This e-mail message 172 will contain the audio image file as an attachment, and will be sent to an e-mail server 170 that receives e-mail for the e-mail address used by device 174. This server 170 would then communicate to the device 174 that an e-mail has been received. If the device 174 is powered on and connected to the data network 150, an e-mail app 176 on the mobile device 174 will receive and handle the audio image file within the received e-mail message 172. - Similarly, if the audio image address is a cell phone number, the
audio image app 132 will create an MMS message 182 for transmission through the cellular network interface 120. This MMS message 182 will include the audio image file, and will be delivered to an MMS center 180 that receives MMS messages for mobile device 184. If the mobile device 184 is powered on and connected to the MMS network 152, an MMS app 186 on mobile device 184 will download and manage the MMS message 182 containing the audio image file. Because the audio image file in either the e-mail message 172 or the MMS message 182 is a standard video file, both mobile devices 174 and 184 will be able to play the file and view the image and commentary created on device 110 without requiring the presence of the audio image app 132. However, without the presence of the app 132, it would not be possible for either device 174 or 184 to create an audio reply and return it to device 110. - In the preferred embodiment, the
e-mail message 172 and the MMS message 182 both contain links to location 190 where the recipient mobile devices 174 and 184 can download the audio image app 132. The message will also communicate that downloading the app 132 at the link will allow the recipient to create and return an audio reply to this audio image file. The linked-to download location 190 may be an “app store”, such as Apple's App Store for iOS devices or Google's Play Store for Android devices. The user of either device 174 or 184 can download the audio image app 132 from the app store 190. When the downloaded app 132 is initially opened, the users are given the opportunity to register themselves by providing their name, e-mail address(es) and cell phone number(s) to the app 132. The app 132 then shares this information with the audio image server 160, which creates a new user record in database 164. The server 160 can then identify audio image messages that were previously sent to that user and forward those messages to the user. At this point, the user can review the audio image files using the app 132, and now has the ability to create and send a reply audio message as a new audio image file. - In some embodiments, the audio image file is delivered as a video file to e-mail recipients and MMS recipients, but is delivered as separate data elements to
mobile devices 168 that utilize the audio image app 132. In other words, a single video file is delivered via an e-mail or MMS attachment, while separate data elements are delivered to the mobile devices 168 that use the audio image app 132. In these cases, the “audio image file” delivered to the mobile device 168 would include an image file compressed using a still-image codec (such as JPG, PNG, or GIF), one or more audio files compressed using an audio codec (such as MP3 or AAC), and metadata identifying the creator, creation time, and duration of each of the audio files. The audio image app 132 would then be responsible for presenting these separate data elements as a unified whole. As explained below, the audio image file 166 may further include a plurality of still images, one or more video segments, metadata identifying the order and timing of presentations of the different visual elements, or metadata defining augmentations that may be made during the presentation of the audio image file. - In sending the
MMS message 182, the mobile device 110 may take advantage of the capabilities of the separate MMS app 142 residing on the mobile device 110. Such capabilities could be accessed through an API or SDK provided by the app 142, which is described in more detail below. Alternatively, the audio image app programming 134 could contain all of the programming necessary to send the MMS message 182 without requiring the presence of a dedicated MMS app 142. Similarly, the mobile device 110 could use the capabilities of a separate e-mail app 144 to handle the transmission of the e-mail message 172 to mobile device 174, or could incorporate the necessary SMTP programming into the programming 134 of the audio image app 132 itself. -
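For the separate-data-elements form of the audio image file described above (a still image, per-commentary audio clips, and metadata giving each clip's creator, creation time, and duration), one possible serialized shape is sketched below. The field names and JSON encoding are assumptions; the patent specifies only which facts the metadata must carry.

```python
import json

# One possible shape (an assumption) for the non-video form of an
# "audio image file": a still image, per-commentary audio clips, and
# metadata identifying each clip's creator, creation time, and duration.
bundle = {
    "image": {"file": "kitchen.jpg", "codec": "JPG"},
    "audio": [
        {"file": "c1.aac", "codec": "AAC", "creator": "sender@example.com",
         "created": "2016-06-14T10:00:00Z", "duration_s": 12.5},
        {"file": "c2.aac", "codec": "AAC", "creator": "+1 555 0100",
         "created": "2016-06-14T10:05:00Z", "duration_s": 8.0},
    ],
}

payload = json.dumps(bundle)      # what the server might store and relay
restored = json.loads(payload)
assert restored["audio"][0]["duration_s"] == 12.5
assert len(restored["audio"]) == 2
```

The receiving app would walk the `audio` list in order to present the separate elements as a unified whole.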
FIG. 2 shows one embodiment of database 164 that is used to track users and audio image messages. The database 164 may be stored in the memory 162 of the audio image server 160, or it may be stored in external memory accessible to the server 160 through a bus or network 165. The database 164 is preferably organized as structured data, such as separate tables in a relational database or as database objects in an object-oriented database environment. Database programming 163 stored on the memory 162 of the audio image server 160 directs the processor 161 to access, manipulate, update, and report on the data in the database 164. FIG. 2 shows the database 164 with tables or objects for audio image messages 200, audio image data or files 210, users 220, e-mail addresses 230, cell phone numbers 240, and audio image user IDs 250. Since e-mail addresses 230, cell phone numbers 240, and audio image user IDs 250 can all be used as a recipient or sender address for an audio image message 200, FIG. 2 shows a dotted box 260 around these database entities 230, 240, 250, which together can be considered an audio image address 260. These addresses 260 can all be considered electronic delivery addresses, as the addresses 260 each can be used to deliver an electronic communication to a destination. - Relationships between the database entities are represented in
FIG. 2 using crow's foot notation. For example, FIG. 2 shows that each user database entity 220 can be associated with a plurality of e-mail addresses 230 and cell phone numbers 240, but with only a single audio image user ID 250. Meanwhile, each e-mail address 230, cell phone number 240, and audio image user ID 250 (i.e., each audio image address 260) is associated with only a single user entity 220. Similarly, each audio image message 200 can be associated with a plurality of audio image addresses 260 (e-mail addresses 230, cell phone numbers 240, and audio image user IDs 250), which implies that a single message 200 can have multiple recipients. In the preferred embodiment, the audio image message 200 is also associated with a single audio image address 260 to indicate the sender of the audio image message 200. The fact that each audio image address 260 can be associated with multiple audio image messages 200 indicates that a single audio image address 260 can be the recipient or sender for multiple messages 200. FIG. 2 also shows that each audio image message database entity 200 is associated directly with an audio image file 210. This audio image file 210 can be a single video file created by the audio image app 132, or can be separate image and audio files along with metadata describing these files. The distinctions between these database entities 200-250 are exemplary and do not need to be maintained to implement the present invention. For example, it would be possible for the audio image message 200 to incorporate the audio image data or files 210 in a single database entity. Similarly, each of the audio image addresses 260 could be structured as part of the user database entity 220. The separate entities shown in FIG. 2 are presented to assist in understanding the data that is maintained in database 164 and the relationships between that data. - Associations or relationships between the database entities shown in
FIG. 2 can be implemented through a variety of known database techniques, such as through the use of foreign key fields and associative tables in a relational database model. In FIG. 2, associations are shown directly between two database entities, but entities can also be associated through a third database entity. For example, a user database entity 220 is directly associated with one or more audio image addresses 260, and through that relationship the user entity 220 is also associated with audio image messages 200. These relationships can also be used to indicate different roles. For instance, an audio image message 200 may be related to two different audio image user IDs 250, one in the role of recipient and one in the role of sender. - An example
audio image file 300 is shown in FIG. 3. In this example, the audio image file 300 is a video file containing a video track 310, an audio track 320, and metadata 330. The video track contains a single, unchanging still image 312 that is compressed using a known video codec. When the H.264 codec is used, for example, the applicable compression algorithms will ensure that the size of the video track 310 will not increase proportionally with the length of the audio track, as an unchanging video track is greatly compressed using this codec. While the H.264 codec does use keyframes that contain the complete video image, intermediate frames contain data only related to changes in the video signal. With an unchanging video feed, the intermediate frames do not need to reflect any changes. By increasing the time between keyframes, even greater compression of the video track 310 is possible. - In the
audio image file 300 shown in FIG. 3, the audio track contains two separate audio comments 322, 324. In the embodiment shown in FIG. 3, the first comment 322 to appear in the track 320 is actually the second to be recorded chronologically. This means that the audio track 320 of the audio image file 300 will start with the most recent comment 322. When a standard video player plays this audio image file 300, the most recently added comment will be played first. This could be advantageous if multiple comments 322, 324 have been added to an audio image file 300 and the recipient is only interested in hearing the most recently added comments. In other embodiments, the audio commentaries 322, 324 could be organized within the audio image file 300 in standard chronological order so that the first comment recorded 324 will start the audio track 320. This allows a user who views the audio image file 300 with a standard video player to hear all the comments 322, 324 in chronological order. - The
metadata 330 that is included in the video file 300 provides information about these two audio commentaries 322, 324. Metadata 332 contains information about the first comment 322, including the name of the user who recorded the comment (Katy Smith), the date and time at which Ms. Smith recorded this comment, and the time slice in the audio track 320 at which this comment 322 can be found. Similarly, metadata 334 provides the user name (Bob Smith), date and time of recording, and the time slice in the audio track 320 for the second user comment 324. The metadata 330 may also contain additional data about the audio image file 300, as the audio image file 300 is itself a video file and the video codec and the audio image app 132 that created this file 300 may have stored additional information about the file 300 in metadata 330. - In the preferred embodiment, the
different comments 322, 324 are recorded onto a single audio track 320 without chapter breaks. Chapter breaks are normally used to divide video files into logical breaks, like chapters in a book. The video playback facilities in some standard mobile device operating systems are not capable of displaying and managing chapter breaks, and similarly are not able to separately play different audio tracks in a video file. As a result, the audio image file 300 shown in FIG. 3 does not use separate chapters or separate audio tracks to differentiate between different user comments 322, 324. Instead, the metadata 330 is solely responsible for identifying the different comments 322, 324 within the single audio track 320 of the file 300. In FIG. 3, this is done through the "time slice" data, which indicates the start and stop time (or start time and duration) of each comment in the track 320. In other embodiments, true video file chapter breaks (or even multiple tracks) could be used to differentiate between different audio comments 322, 324. -
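The time-slice metadata just described might look like the following sketch. The dictionary layout and field names are assumptions for illustration; they mirror the information shown for metadata 332 and 334 in FIG. 3 (user name, recording time, and time slice within the single audio track).

```python
# Hypothetical metadata layout mirroring FIG. 3: each entry records who
# spoke, when the comment was recorded, and its time slice within the
# single audio track 320 (most recent comment first).
metadata = [
    {"user": "Katy Smith", "recorded": "2014-02-12T15:13:00",
     "slice": {"start": 0.0, "duration": 13.0}},
    {"user": "Bob Smith", "recorded": "2014-02-11T09:02:00",
     "slice": {"start": 13.0, "duration": 8.0}},
]

def slice_for(entry):
    """Return the (start, stop) seconds in the audio track for one comment."""
    s = entry["slice"]
    return s["start"], s["start"] + s["duration"]

# A player honoring the metadata seeks to a slice rather than playing the
# whole track; a standard video player would ignore this and play straight
# through.
start, stop = slice_for(metadata[0])
```

This is why the format survives standard players: the slices add information for the app without altering the track itself.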
FIG. 4 shows a new audio image file 400 that is created after a third comment 422 is added to the file 300 shown in FIG. 3. As was the case with file 300, this file 400 includes a video track 410, an audio track 420, and metadata 430. The audio track 420 includes a third comment 422 in addition to the two comments 322, 324 found in file 300. In FIG. 4, this new comment 422 appears at the beginning of the audio track 420, as this comment 422 is the most recent comment in this audio image file 400. Similarly, the metadata 430 includes metadata 432 concerning this new track 422, in addition to the metadata 332, 334 concerning tracks 322, 324. Note that the positions of tracks 322, 324 have shifted within the new audio track 420. While track 322 originally appeared at the beginning of track 320, it now appears in track 420 after the whole of track 422. Consequently, the new locations of audio comments 322, 324 must be updated in metadata 332, 334. In embodiments that organize the audio track 420 in chronological order, the new commentary 422 would appear after commentary 324 and commentary 322 in the audio track 420. Furthermore, in this embodiment it would not be necessary to modify metadata 332, 334, as the locations of the pre-existing commentaries 322, 324 within track 420 would not have changed with the addition of the new commentary 422. With both embodiments, the video track 410 will again include an unchanging still image 412, much like the video track 310 of file 300. The one difference is that this video track 410 must extend for the duration of all three comments 322, 324, 422 in the audio track 420. -
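The bookkeeping for the most-recent-first embodiment can be sketched as follows: prepending a new comment shifts every existing time slice by the new comment's duration, which is exactly why metadata 332 and 334 must be rewritten in file 400. Field names follow the hypothetical layout assumed earlier.

```python
def prepend_comment(metadata, user, recorded, duration):
    """Place a new comment at the head of the audio track and shift the
    time slices of every existing comment by the new comment's duration.

    Returns a new metadata list; the input list is left untouched."""
    shifted = [
        {**m, "slice": {"start": m["slice"]["start"] + duration,
                        "duration": m["slice"]["duration"]}}
        for m in metadata
    ]
    new_entry = {"user": user, "recorded": recorded,
                 "slice": {"start": 0.0, "duration": duration}}
    return [new_entry] + shifted
```

In the chronological-order embodiment, the same operation reduces to appending an entry whose start time is the current end of the track, with no shifting of existing slices.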
FIG. 5 shows a mobile device 500 that has a touch screen display 502 and a user input button 504 located below the display 502. In this Figure, the device 500 is presenting a user interface 510 created by the audio image app 132. This interface 510 shows a plurality of audio images 520-550 that have been received by the app 132 from the server 160. The audio images 520-550 are presented in a list form, with each item in the list showing a thumbnail graphic from the audio image and the name of an individual associated with the audio image 520-550. In some circumstances, the name listed in interface 510 is the name of the individual that last commented on the audio image 520-550. In other circumstances, the user who owns the mobile device 500 may have made the last comment. In these circumstances, the name listed may be the other party (or parties) who are participating in the audio commentary concerning the displayed image. The list in interface 510 also shows the date and time of the last comment added to each audio image. In FIG. 5, the first two audio images 520, 530 are the most recently received audio images. The interface 510 may also include an edit button 512 that allows the user to select audio images 520-550 for deletion. - In
FIG. 5, the audio images 520-550 are presented in a queue in reverse chronological order, with the most recently received audio image 520 being presented at the top. In other embodiments, the audio images 520-550 are presented in a hierarchical in-box. At the top of the hierarchy are participants: the party or parties on the other side of a conversation with the user. After selection of a participant, the in-box presents audio images associated with that participant as the next level in the hierarchy. These audio images are preferably presented in reverse chronological order, but this could be altered to suit user preferences. After selection of an individual audio image, the in-box may then present the separate commentaries made in that audio image as the lowest level of the hierarchy. A user would then directly select a particular audio commentary for viewing in the app. Alternatively, the app could present the latest audio commentary to the user after the user selected a particular audio image without presenting the separate commentaries for individual selection. - If a user selects the first
audio image 520 from interface 510, a new interface 610 is presented to the user, as shown in FIG. 6. This interface includes a larger version of the image 620 included in the audio image file. Superimposed on this image 620 is a play button 622, which, if pressed, will play the last audio commentary that has been added to this audio image. Below the image 620 is a list of the audio commentaries contained in this audio image file. In FIG. 6, the most recent audio commentary was created by Bob Smith on Feb. 12, 2014 at 3:13 PM, and has a duration of 0 minutes and 13 seconds. If the user selects the play button 622 (or anywhere else on the image 620), this audio commentary will be played. If the user wishes to select one of the earlier audio commentaries, they may press the smaller playback buttons associated with those commentaries in the list. If more audio commentaries exist for the image 620 than can be simultaneously displayed on interface 610, a scrollable list is presented to the user. - In the preferred embodiment, the
user interface 610 will remove the listing of audio commentaries from the display 502 when an audio commentary is being played. The image 620 will expand to cover the area of the display 502 that previously contained this list. This allows the user to focus only on the image 620 when hearing the selected audio commentary. When the user has finished listening to the audio commentary, they can press and hold the record button 660 on screen 502 to record their own response. In the preferred embodiment, the user holds the button 660 down throughout the entire audio recording process. When the button 660 is released, the audio recording is paused. The button 660 could be pressed and held again to continue recording the user's audio commentary. When the button 660 is released, the user is presented with the ability to listen to their recording, re-record their audio commentary, delete their audio commentary, or send a new audio image that includes the newly recorded audio commentary to the sender (in this case Bob Smith) or to a third party. By pressing the back button 670, the user will return to interface 510. By pressing the share button 680 without recording a new commentary, the mobile device 500 will allow a user to share the selected audio commentary 520 as it was received by the device 500. - The flowchart in
FIG. 7 shows a method 700 for creating, sending, and playing an audio image file. This method 700 will be described from the point of view of the system 100 shown in FIG. 1. The method begins at step 705, when the originator of an audio image either selects an image from the existing photos 146 already on their mobile device 110, or creates a new image using camera 114. At step 710, the app 132 shows the selected image to the user and allows the user to record an audio commentary, such as by holding down a record button (similar to button 660) presented on the touch screen 116 of the mobile device 110. The app 132 will then use a video codec, such as may be provided by the mobile device operating system 140, to encode both the image and the audio commentary into a video file (step 715). The app 132 will also add metadata 330 to the video file to create an audio image file 300 at step 720. The metadata 330 provides sufficient information about the audio track 320 of the audio image file 300 to allow another device operating the app 132 to correctly play the recorded audio commentary. - Once the
audio image file 300 is created, the app 132 will, at step 725, present a user interface to allow the originator to select a recipient (or multiple recipients) for this file 300. As explained above, the app 132 may present the user with their existing contact list 148 to make it easier to select a recipient. In some cases, a recipient may have multiple possible audio image addresses 260 at which they can receive the audio image file 300. For instance, a user may have two e-mail addresses 230 and two cellular telephone numbers 240. In these cases, the app 132 can either request that the originator select a single audio image address for the recipient, or the app can select a "best" address for that user. The best address can be based on a variety of criteria, including which address has previously been used to successfully send an audio image file to that recipient in the past. - Once the recipient is selected, the
app 132 will determine at step 730 whether or not the recipient is a user of the app 132. As explained above, this can be accomplished by the app 132 sending a query to the audio image server 160 requesting a determination as to whether the audio image address for that recipient is associated with a known user of the app 132. If the recipient has multiple possible audio image addresses, the query may send all of these addresses to the server 160 for evaluation. If the recipient is not a known user of the app 132, this will be determined at step 735. Step 740 will then determine whether the selected or best audio image address is an e-mail address or a cell phone number. If it is an e-mail address, step 745 will create and send an e-mail 172 to the recipient. This e-mail 172 will include the audio image file 300 as an attachment to the e-mail. In addition, the e-mail will include a link to the download location 190 for the app 132 along with a message indicating that the app 132 is needed to create and send a reply to the audio image. If step 740 determines that the audio image address 260 is a cell phone number, then step 750 will create and send an MMS message 182 to the recipient. As was true of the e-mail 172, the MMS message 182 will include the audio image file as an attachment, and will include a link to download location 190 along with a message stating that the app 132 is necessary to create a reply to the audio image. - After sending an e-mail at
step 745 or an MMS message at step 750, step 755 will also send the audio image file and relevant transmission information to the audio image server 160. This transmission information may include the time of the e-mail or MMS transmission, the time that the audio comment was generated, the name of the originator and the recipient, and the recipient's chosen audio image address. This information will then be stored in database 164 along with the audio image file itself (step 760). As shown in FIG. 7, these same steps 755, 760 occur if step 735 determined that the recipient was a user of the app 132, as the server 160 needs this information to complete the transmission to the recipient. In fact, since the server 160 always receives this information from the sending mobile device 110 regardless of the transmission type, it is possible to eliminate the separate query of step 730. In this alternative embodiment, the transmission of the information at step 755 would occur at step 730. The app 132 could then be informed if the recipient were not a user of the app 132, allowing steps 740-750 to proceed. If the app 132 on mobile device 110 instead received notification that the server 160 was able to transmit the information directly to the recipient, then no additional actions would be required on behalf of the sending mobile device 110. - Once the
server 160 has received the transmission information at step 755 and stored this information in database 164 at step 760, step 765 considers whether the recipient is a user of the app 132. If not, the server 160 need not take any further action, as the sending mobile device 110 is responsible for sending the audio image file to the recipient. In this case, the method 700 will then end at step 790 (method 900 shown in FIG. 9 describes the receipt of an audio image file by a mobile device that does not use the app). - Assuming that the recipient is using the
app 132, then the server 160 transmits the audio image file 300 to the recipient mobile device 168. The recipient device 168 receives the audio image file 300 at step 770, and then provides a notification to the user that the file 300 was received. The notification is preferably provided using the notification features built into the operating systems of most mobile devices 168. At step 775, the app 132 is launched and the user requests that the app 132 present the audio image file 300. At step 780, the image is then displayed on the screen and the audio commentary is played. At this time, the user may request to record a reply message. If step 785 determines that the user did not desire to record a reply, the method 700 ends at step 790. If a reply message is desired, then method 800 is performed. -
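The routing decision made in steps 730-750 above can be sketched as a simple dispatcher. The handler functions here are hypothetical stand-ins for the server push, e-mail, and MMS transmissions described in the disclosure, and the address-type test is a simplification.

```python
def route_audio_image(recipient_address, is_app_user, send_via_server,
                      send_email, send_mms):
    """Dispatch an audio image: app users receive it through the server
    (steps 765-770); all others receive it as an e-mail or MMS attachment
    carrying a download link (steps 740-750)."""
    if is_app_user:
        return send_via_server(recipient_address)
    if "@" in recipient_address:        # treat as an e-mail address
        return send_email(recipient_address)
    return send_mms(recipient_address)  # otherwise assume a cell number
```

Passing the transport functions as parameters keeps the sketch independent of any particular e-mail or MMS implementation, matching the disclosure's point that either a dedicated app or built-in programming could handle the actual transmission.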
Method 800 is presented in the flow chart found in FIG. 8. The method starts at step 805 with the user of mobile device 168 indicating that they wish to record a reply. In the embodiments described above, this is accomplished by holding down a record button 660 during or after viewing the audio image file 300. When the user lets go of the record button 660, the audio recording stops. At step 810, the audio recording is added to the beginning of the audio track 320 of the audio image file 300. With some audio codecs, the combining of two or more audio commentaries into a single audio track 320 can be accomplished by simply merging the two files without the need to re-compress the relevant audio. Other codecs may require other techniques, which are known to those of skill in the art. At step 815, the video track 310 is extended to cover the duration of all of the audio commentaries in the audio track 320. Finally, at step 820 metadata is added to the new audio image file. This metadata will name the reply commentator, and will include information about the time and duration of the new comment. This metadata must also reflect the new locations in the audio track for all pre-existing audio comments, as these comments might now appear later in the new audio image file. - At
step 825, mobile device 168 sends the new audio image file to the server 160 for transmission to the originating device 110. Note that the transmission of a reply to the originating device 110 may be assumed by the app 132, but in most cases this assumption can be overcome by user input. For instance, the recipient using mobile device 168 may wish to record a commentary and then send the new audio image file to a mutual friend, or to both the originator and the mutual friend. In this case, the workflow would transition to step 730 described above. For the purpose of describing method 800, it will be assumed that only a reply to the originating device 110 is desired. - The server will then store the new audio image file and the transmission information in its database 164 (step 830), and then transmit this new file to the originating mobile device 110 (step 835).
App 132 will then notify the user through the touch screen interface 116 that a new audio image has been received at step 840. When the app 132 is opened, the app 132 might present all of the user's audio image files in a list, such as that described in connection with FIG. 5 (step 845). If the user requests that the app 132 play the revised audio image file, the app 132 will display the original image and then play back the reply audio message at step 850. The metadata 330 in the file 300 will indicate when the reply message ends, allowing the app 132 to stop playback before that portion of the video file containing the original message is reached. As indicated at step 855, the app 132 can also present to the user a complete list of audio comments that are found in this audio image file 300, such as through interface 610 shown in FIG. 6. - In some cases, an audio image file may contain numerous comments. To assist with the management of comments, the
app 132 can be designed to allow a user to filter the audio comments so that not all comments are displayed and presented on interface 610. For instance, a user may wish to only know about comments made by friends that are found in their contact records 148 or are made by the individual who sent the message to the user. In this instance, interface 610 would display only the comments that the user desired. The interface 610 may also provide a technique for the user to reveal the hidden comments. The user is allowed to select any of the displayed comments in the list for playback. The app 132 would then use the metadata 330 associated with that comment to play back only the relevant portion of the audio track 320 (step 860). The originator would also have the ability to create their own reply message at step 865. If such a re-reply is desired, the method 800 would start again. If not, the method 800 ends at step 870. -
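The comment filter just described can be sketched as follows, assuming the hypothetical metadata layout used earlier (a list of comment entries keyed by user name); the function and parameter names are illustrative.

```python
def visible_comments(metadata, contacts, sender):
    """Keep only comments made by the message's sender or by users in the
    viewer's contact list; other comments remain in the file but are
    hidden from the list view until the user chooses to reveal them."""
    allowed = set(contacts) | {sender}
    return [entry for entry in metadata if entry["user"] in allowed]
```

Because filtering operates on the metadata alone, the audio track itself is untouched; hidden comments can be revealed later simply by relaxing the filter.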
FIG. 9 displays a flow chart describing the method 900 by which a non-user of the app 132 is able to download the app 132 and see previously transmitted messages. The method 900 begins at step 905 when the user receives an e-mail or an MMS message containing an audio image file 300. When the e-mail or MMS message is opened, it will display a message indicating that the app 132 is required to create a reply (step 910). The message will also include a link to the app 132 at an app store 190, making the download of the app 132 as simple as possible. - Since the
audio image file 300 that is sent in this context is a video file, the user can play the audio image file as a standard video file at step 915. This would allow the user to view the image and hear the audio commentaries made about the image. If more than one audio commentary were included in the audio image file 300, a standard video player would play through all of the commentaries without stopping. Whether the commentaries would play in chronological order or in reverse chronological order will depend completely on the order in which the commentaries were positioned in the audio track, as described above in connection with FIGS. 3 and 4. When a standard video player is used to play the audio image file 300, the user will not be able to add a new audio commentary to this file 300. - If the user wishes to create a new comment, they will select the provided link to
app store 190. This selection will trigger the downloading of the app 132 at step 920. When the user initiates the app 132 by selecting the app's icon in the app selection screen of the operating system at step 925, the app 132 will request that the user enter personal information into the app. In particular, the app 132 will request that the user provide their name, their e-mail address(es), and their cell phone number(s). This information is received by the app 132 at step 930, and then transmitted to the server 160. The server 160 will then create a new user record 220 in the database 164, give that record 220 a new User ID 250, and then associate that user record 220 with the user-provided e-mail addresses 230 and cell phone numbers 240 (step 935). - At
step 940, the server 160 will search the database for audio image messages 200 that have been previously sent to one of the e-mail addresses 230 or cell phone numbers 240 associated with the new user record 220. All messages 200 so identified will be downloaded, along with the actual audio image file or data 210, to the user's app 132 at step 945. The user can then view the downloaded audio image files (such as through user interface 510 of FIG. 5), select one of the audio image files (as shown in FIG. 6), and then view the audio image file 300 through the app 132 (step 950). Step 950 will also allow the user to create reply audio messages through method 800, and transmit the resulting new audio image files to other users. The process 900 then terminates at step 955. - As described above, the
database 164 is designed to receive a copy of all audio image data files 300 that are transmitted using system 100. In addition, app 132 may store a copy of all audio image data files 300 that are transmitted or received at a mobile device 110. In the preferred embodiment, the app 132 is able to selectively delete local copies of the audio image data files 300, such as by using edit button 512 described above. To the extent that the same data is stored as database entity 210 in the database 164 managed by server 160, it is possible to allow an app 132 to undelete an audio image file 300 by simply re-downloading the file from the server 160. If this were allowed, the server might require the user to re-authenticate themselves, such as by providing a password, before allowing a download of a previously deleted audio image file. - In some embodiments, the
server 160 will retain a copy of the audio image file 300 as data entity 210 only as long as necessary to ensure delivery of the audio image. If all recipients of an audio image file 300 were users of the app 132 and had successfully downloaded the audio image file 300, this embodiment would then delete the audio image data 210 from the database 164. Meta information about the audio image could still be maintained in database entity 200. This would allow the manager of server 160 to maintain information about all transmissions using system 100 while assuring users that the actual messages are deleted after the transmission is complete. If some or all of the recipients are not users of the app 132, the server 160 will keep the audio image data 210 to allow later downloads when the recipients do become users of the app 132. The storage of these audio image files in database 164 can be time limited. For example, one embodiment may require deletion of all audio image data 210 within three months after the original transmission of the audio image file even if the recipient has not become a user of the app 132. - In the above-described embodiments, audio-image files were created based around a single image.
FIG. 10 describes a process 1000 in which multiple images can be combined into a single audio-image file. The process starts at step 1005, where the creator selects a plurality of still images for inclusion as an image set. As shown in FIG. 10, this step 1005 also requests that the user sort the selected images in the image set before recording an audio commentary for the image set. This pre-sorting allows a user to easily flip between the ordered images in the image set when creating an audio commentary. This sorting can be skipped, but then it would be necessary for the user to manually select the next image to be displayed while recording the audio commentary. - After the images in the image set are selected and ordered in
step 1005, the app 132 will present the first image at step 1010. When the user is ready, the user will begin recording the audio commentary at step 1015, such as by pressing the record button 1040. In the preferred embodiment, no audio commentary in an audio-image file is allowed to exceed a preset time limit. This helps to control the size of the audio-image files, and encourages more, shorter-length interchanges between parties communicating via audio-image files. While such time limits could apply to all audio-image files, they are particularly useful when multiple images are selected in method 1000 because of a user's tendency to provide too much commentary for each image in the image set. As a result, method 1000 includes step 1020, in which a progress bar is constantly displayed during creation of the audio commentary, indicating to the user how much time is left before they reach the maximum time for their comments. - In addition to displaying the first image and the progress bar, the
app 132 will preferably present to the user a clear method for advancing to the next image in the image set. This may take the form of a simple arrow superimposed over the image. When the user taps the arrow, that interaction will be viewed as a user input to advance to the next image at step 1025. This user input could also take the form of a simple swipe gesture, which is commonly used in mobile devices to advance to a next image or page in a document. When this input is received at step 1025, the next image will be displayed at step 1030. In addition, the app 132 will record the time during the audio commentary at which the next image was displayed. The method returns to step 1015, which allows the user to continue to record their audio commentary, and step 1020, which continues to display the progress bar. If no input for the next image is received at step 1025, the method 1000 proceeds to step 1035 to determine whether audio recording should stop. An audio recording will stop if the user indicates that he or she is done recording the audio (such as by pressing record button 1040), or if the maximum time for the audio recording is reached. If step 1035 does not stop the recording, the method simply returns to step 1015 to allow for additional audio recording and advancement to additional images. - As explained above, time limits on a user's commentary can be helpful even when only a single image is being included in an audio-image file. As a result, the steps of including a progress bar at
step 1020 and a determination as to whether a maximum time is reached at step 1035 may be included in the other methods of creating an audio-image file described herein. - The steps described above for selecting multiple images and recording commentaries for those images have been grouped together as
element 1002 in flow chart 1000. FIG. 11 presents an alternative embodiment, where steps 1105-1115 replace steps 1005-1035 in FIG. 10. In this alternative embodiment, the user selects a single image for the grouping of images that they intend to create (step 1105). The user then records an audio commentary for that single image in step 1110. Step 1110 can be performed in the same manner as described for the recording of audio commentaries elsewhere in this disclosure. The main distinction between step 1110 and step 1015 is that the audio commentary in step 1110 relates only to a single image, while the single audio commentary recorded through the looping of steps 1015-1030 relates to all of the images of the grouping. After the audio commentary for the single image is recorded in step 1110, step 1115 determines whether or not the user wishes to add another image to the image grouping. If so, step 1105 will then select the next image for the grouping and step 1110 will record a commentary for that next image. This loop will continue until all images have been selected and commentaries have been recorded for those images. In some embodiments, it would be possible to skip step 1110 for some images, allowing some images to be included in the image grouping without any commentary. - After selecting the images and recording the commentaries for those images through one of the alternatives for sub-process 1002,
step 1040 determines whether a video track should be created that includes the transitions between the various images in the image set. As explained above, this type of video track is required if the recipient is not using the app 132, or if the app 132 is designed to display video tracks directly. This video track will time the transitions between the images to coincide with the audio commentary. This can be accomplished using the timings recorded at step 1030, or simply by determining the length of the commentary recorded for each image in step 1110. If some images were included without commentaries in step 1110, each such image can be included in the video for a set period of time (such as five seconds) without any audio commentary. Once the video track is created along with the audio track containing the audio commentary, step 1050 may store information about the individual images and the transitions between the images in the metadata, and the process 1000 will end at step 1055. Of course, since the transitions and images are all embedded in the generated movie, it is possible that step 1050 could be skipped after the creation of the movie in step 1045. - As explained elsewhere herein, the receiving
app 132 may use the included metadata to directly generate and display a received audio commentary rather than simply presenting a movie that was pre-generated by the sending device. If all of the recipients have access to such apps, step 1040 may elect to skip the movie generation step 1045. If so, step 1060 will create the audio-image file with still images for each of the images in the image set. In effect, the audio-image file constitutes a message or image grouping containing all of the individual images and their associated commentaries. If a single audio track was created for all of the images pursuant to steps 1005-1035, transition information based on the times recorded at step 1030 will be included in the metadata stored with the file in step 1050. If separate audio commentaries were created for each of the images pursuant to steps 1105-1115, the metadata would associate each commentary with an image, and would associate the images together into an image or message grouping. This metadata would also specify the order in which the images in the image grouping should be presented. The message grouping is then transmitted to the recipient app. In one embodiment, this transmission occurs directly, with the message grouping being sent directly to the recipient mobile device. In other embodiments, the message grouping (the images, commentary, and associated metadata) is transmitted to the server (such as server 160), where it is associated with a message identifier. The message identifier is then sent to the recipient mobile device. The recipient device can then use this message identifier to request the message grouping from the server. In a third embodiment, the message grouping and destination are transmitted to the server. The server is then responsible for delivering the message grouping to the recipient.
This can be accomplished by the server “pushing” the message to the recipient, or by waiting for the client app on the recipient device to request (or “pull”) the message. When the recipient app receives this image grouping, it will use the metadata to determine the order of presentation of the various images, and will synchronize those images with the audio commentary. If the message grouping does not contain ordering metadata, the recipient app can simply present the images in the order in which they are stored in the grouping file. - In alternative embodiments, the receiving app will give the receiving user some control over the playback of the audio-image file. For instance, the recipient of an audio-image file containing a plurality of images may be given the ability to swipe between the various images, allowing the user to move back-and-forth between the images as desired. The audio commentary associated with each image could still be presented for each image when the image is displayed. Obviously, if the sender used the plurality of images to tell a single story via their audio commentary, the ability to control transitions and move backwards through the presented images would disrupt the continuity of the story. In these circumstances, the sender may restrict the ability of the recipient to control transitions between images through the transmitted metadata. Alternatively, the recipient may be required to review the entire audio commentary before being able to control transitions between the images. In these embodiments, the recipient app will automatically (without requiring user input) present all of the images and associated commentary to the user in a single presentation.
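The message-identifier transmission and the metadata-driven synchronization described above can be sketched informally as follows. This is a minimal illustration, not the patent's actual implementation: the class name, field names, and the idea of expressing transitions as a list of second offsets are all assumptions.

```python
import bisect
import uuid

class GroupingServer:
    """Toy in-memory stand-in for a server such as server 160: the sender
    uploads a message grouping, the server returns a message identifier,
    and the recipient later redeems that identifier to pull the grouping."""

    def __init__(self):
        self._groupings = {}

    def store(self, grouping):
        message_id = uuid.uuid4().hex
        self._groupings[message_id] = grouping
        return message_id                      # forwarded to the recipient device

    def retrieve(self, message_id):
        return self._groupings[message_id]     # recipient "pull" of the grouping

def image_at(transitions, t):
    """Index of the image to display at playback time t (in seconds), given
    the metadata timestamps at which the sender advanced to each next image."""
    return bisect.bisect_right(transitions, t)

server = GroupingServer()
message_id = server.store({
    "images": ["kitchen.jpg", "porch.jpg", "yard.jpg"],   # hypothetical files
    "audio": "tour.m4a",
    "metadata": {"transitions": [4.0, 9.5]},   # sender advanced at 4.0s and 9.5s
})

grouping = server.retrieve(message_id)
stamps = grouping["metadata"]["transitions"]
assert image_at(stamps, 2.0) == 0    # first image during the first 4 seconds
assert image_at(stamps, 5.0) == 1    # second image after the first transition
assert image_at(stamps, 12.0) == 2   # third image after the second transition
```

A receiving app without ordering metadata would simply fall back to the storage order of the images, as the text notes.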
- One disadvantage of using the movie recording created in
step 1045 is that a reply commentary to the audio-image file will necessarily need either to reply to a single static image (such as the last image in the image set), or to reply to the entire image set using the transition timing of the original creator of the audio-image file. If the app presenting the audio-image file uses metadata rather than a video track to present the transitions between multiple images in the image set, the reply audio commentary can be created using a new set of transitions between the images under the control of the reply commentator. This new transition metadata can be added to the audio-file metadata and used by the app when presenting the reply audio commentary. Because this is a significant benefit, the preferred embodiment of method 1000 will save the separate images and the transition metadata in step 1050 even when a movie containing the images and transitions is made in step 1045. In this way, even a recipient without the app can first view the movie file created in step 1045, and then download the app, obtain a copy of the audio-image file with metadata from the server 160, and record a reply commentary with new transitions between the images. - Of course, if the audio tracks are separately recorded for each image using the process of
FIG. 11, this is not a problem. A recipient can simply record their commentaries for each image (or a selected subset of the images), and their reply commentaries will remain associated with that image during later playback. FIG. 12 shows one technique for presenting an image grouping that exemplifies this ability. FIG. 12 is best understood in the context of an example image grouping 1300 presented in FIG. 13. This image grouping 1300 contains three different images or videos, namely image 1 (1310), image 2 (1320), and video 1 (1330). According to FIG. 13, a first user commented on all three images/videos 1310-1330, and a second user then replied with commentaries on the first image 1310 and video 1330. After receiving this reply, the first user sent a second commentary back on the first image 1310. Note that commentaries on video clips can be obtained as described above and in the incorporated applications, and included in the audio-image files created through any of the processes described in FIGS. 10 and 11. - The
method 1200 for presenting the commentaries in this image grouping 1300 is shown in FIG. 12. The method begins by displaying the first image and letting the user select between viewing/presentation options (step 1210). In one embodiment, the user may select between viewing/hearing the commentaries by image or by user session. The selection by the user is received in step 1220. If the user selects to review the commentaries by image, the user will view (through step 1230) all audio commentaries for the first image 1310 in the image grouping 1300 before reviewing commentaries for the second image 1320. In FIG. 13, the order of presentation of the commentaries under this option is shown by dotted line 1340. The other option is for the user to review all of the audio for one user session across all images before reviewing the commentaries created in the next user session (step 1240). In FIG. 13, the ordering of presentation for this option is shown by the solid line 1350. With either option, the images associated with the commentaries will be displayed to the user while the audio commentaries are being played. Of course, other viewing options are possible, such as reviewing only commentaries from a particular user, or reviewing all commentaries made by a particular user before reviewing any commentaries made by another user. In addition, it is preferable to give users the option to manually pass through the images/video clips. - As explained above, images without audio commentaries can be displayed for a short period of time when no commentary is present. So if
user 1 had not made any commentary on image 2 during her first commentary session, image 2 could still be displayed during review of the image grouping. This is true regardless of which presentation option was selected. Once all of the images and video clips in the grouping 1300 have been displayed, it is possible to skip uncommented-upon images during a second pass through the images and video clips (for instance, when following path 1350 through the image grouping 1300 while playing user 2's commentaries). - Once all of the commentaries are presented through
steps 1230 or 1240, step 1250 allows a user to record reply messages for the individual images in the image grouping 1300. The user is also able to augment the images, as is further explained in the incorporated patent applications. The process 1200 then ends at step 1260. -
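The two review orderings of method 1200 (dotted line 1340 versus solid line 1350 in FIG. 13) can be sketched as two traversals over the same grouping. The data layout below is a hypothetical illustration, not the patent's file format; the sample mirrors the FIG. 13 scenario of three user sessions.

```python
# Hypothetical layout of an image grouping like grouping 1300: each item
# lists its (user-session, audio-clip) commentaries in recording order.
# Session 1: first user on all three items; session 2: second user's reply
# on image 1 and video 1; session 3: first user's second commentary.
grouping = {
    "items": [
        {"name": "image 1", "commentaries": [(1, "u1-a"), (2, "u2-a"), (3, "u1-b")]},
        {"name": "image 2", "commentaries": [(1, "u1-c")]},
        {"name": "video 1", "commentaries": [(1, "u1-d"), (2, "u2-b")]},
    ]
}

def by_image(grouping):
    """Path 1340: play every commentary on an item before moving on."""
    for item in grouping["items"]:
        for session, audio in item["commentaries"]:
            yield item["name"], session, audio

def by_session(grouping):
    """Path 1350: play one user session across all items, then the next."""
    sessions = sorted({s for item in grouping["items"]
                       for s, _ in item["commentaries"]})
    for s in sessions:
        for item in grouping["items"]:
            for session, audio in item["commentaries"]:
                if session == s:
                    yield item["name"], session, audio

assert [a for _, _, a in by_image(grouping)][:4] == ["u1-a", "u2-a", "u1-b", "u1-c"]
assert [a for _, _, a in by_session(grouping)][:4] == ["u1-a", "u1-c", "u1-d", "u2-a"]
```

Either traversal displays the associated image while its audio plays; items without a commentary in the current session can be shown briefly or skipped, as the text describes.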
FIG. 14 presents a sample use case for the image grouping 1300 and the methods of FIGS. 10-12. Here, a real estate agent 1400 is working with one or more clients 1410-1412. Client 1 (1410) is currently looking at three houses for purchase, namely house 1 (1420), house 2 (1422), and house 3 (1424). For each of the houses 1420-1424 being viewed by a client 1410-1412, the agent 1400 can generate and comment upon a set of photographs and videos taken at the house. In FIG. 14, the image set 1300 described above is represented as the image set created for house 1 (1420). In this way, the agent 1400 can "preview" a house for a client, take still images or videos of each of the rooms of the house, and include their own commentary on each image/video clip. The client can review the image grouping 1300 and hear the agent's comments, and then add their own commentary to some of the images/video clips. These reply comments can be sent back to the agent, and the agent can add their own responses to these comments. New comments can be directly accessed using the methods and interfaces described above (such as those described in connection with FIG. 6). - The many features and advantages of the invention are apparent from the above description. Numerous modifications and variations will readily occur to those skilled in the art. Since such modifications are possible, the invention is not to be limited to the exact construction and operation illustrated and described. Rather, the present invention should be limited only by the following claims.
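The reply mechanics described above admit two shapes: a reply tied to a single image (as in the FIG. 11 per-image groupings) or a reply spanning the whole set with the reply commentator's own transition timings. The sketch below is an illustrative assumption about data layout, not the patent's format; the field names and sample files are hypothetical.

```python
def reply_to_image(grouping, image_index, audio_clip, user):
    """Per-image reply: the reply stays associated with its image
    during later playback, as in the FIG. 11-style groupings."""
    grouping["items"][image_index].setdefault("replies", []).append(
        {"user": user, "audio": audio_clip})

def reply_to_set(grouping, audio_clip, transitions, user):
    """Whole-set reply: the reply commentator supplies new transition
    timings rather than reusing the original creator's pacing."""
    grouping.setdefault("set_replies", []).append(
        {"user": user, "audio": audio_clip, "transitions": list(transitions)})

g = {"items": [{"name": "kitchen.jpg"}, {"name": "porch.jpg"}]}
reply_to_image(g, 0, "client-reply.m4a", "client1")
reply_to_set(g, "agent-walkthrough.m4a", [3.0, 7.5], "agent")
assert g["items"][0]["replies"][0]["user"] == "client1"
assert g["set_replies"][0]["transitions"] == [3.0, 7.5]
```

In the real-estate scenario, this is how a client's comments on individual room photos and an agent's follow-up responses could accumulate on the same grouping.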
Claims (15)
1. A computerized method comprising:
a) at a server computer, receiving a first image grouping from a first computing device, the first image grouping having:
i) a plurality of still images, and
ii) first audio commentaries for each of the plurality of still images;
b) at the server computer, storing the first image grouping in computer-readable memory;
c) at the server computer, transmitting the first image grouping to a second computing device;
d) at the server computer, receiving from the second computing device second audio commentaries on at least a subset of the plurality of still images in the first image grouping; and
e) at the server computer, storing a second image grouping in the computer-readable memory, the second image grouping having:
i) the plurality of still images,
ii) the first audio commentaries, and
iii) the second audio commentaries.
2. The computerized method of claim 1, wherein the first image grouping is stored in association with a message identifier.
3. The computerized method of claim 2, wherein the server computer receives the message identifier from the second computing device and uses the message identifier to identify the first image grouping before transmitting the first image grouping to the second computing device.
4. The computerized method of claim 1, wherein after storing the second image grouping, the server computer receives an identifier from a third computing device and uses the identifier to identify the second image grouping, further comprising the server computer transmitting the second image grouping to the third computing device after identifying the second image grouping.
5. The computerized method of claim 4, further comprising:
f) at the third computing device, providing a user interface input allowing a user to choose between two presentation options, the presentation options comprising:
i) playing all audio commentaries associated with a single image while displaying that single image before presenting the next image and its associated audio commentaries, and
ii) first playing the first audio commentaries for all of the still images while displaying the still images associated with the first audio commentaries, and then playing the second audio commentaries for the still images while displaying the still images associated with the second audio commentaries;
g) at the third computing device, presenting the second image grouping according to the chosen presentation option.
6. The computerized method of claim 1, wherein each audio commentary is associated with a single one of the plurality of images.
7. The computerized method of claim 6, wherein the first audio commentaries are associated with the plurality of still images through metadata found in the first image grouping.
8. The computerized method of claim 1, wherein the first audio commentaries are associated with the plurality of still images through metadata found in the first image grouping.
9. The computerized method of claim 1, wherein the first image grouping is encoded into a video file.
10. The computerized method of claim 9, wherein the first image grouping is transmitted to the second computing device in the form of the video file.
11. The computerized method of claim 1, wherein the audio commentaries each comprise a separate audio file.
12. The computerized method of claim 1, wherein the first audio commentaries are stored in a first single track, wherein first metadata is stored with the first image grouping, and further wherein the first metadata defines the divisions in the single track between the first audio commentaries.
13. The computerized method of claim 12, wherein the first metadata further comprises transition instructions between the plurality of still images.
14. The computerized method of claim 12, wherein the second audio commentaries are also stored in the same audio track as the first audio commentaries within the second image grouping.
15. A computerized method comprising:
a) at a first computing device, receiving a selection of a plurality of still images;
b) at the first computing device, recording first audio commentaries for the plurality of still images through a microphone on the first computing device;
c) at the first computing device, creating a first image grouping that associates the plurality of still images with the first audio commentaries and that stores the first audio commentaries into a single audio track, wherein metadata defines the divisions in the single audio track between the first audio commentaries;
d) at the first computing device, transmitting the first image grouping to a server computer for storage at the server computer;
e) at a second computing device, receiving the first image grouping from the server computer;
f) at the second computing device, presenting, through a user interface on the second computing device, the plurality of still images and first audio commentaries that comprise the first image grouping using the metadata to determine the transitions between the plurality of still images;
g) at the second computing device, recording second audio commentaries on at least a subset of the still images in the first image grouping;
h) at the second computing device, creating a second image grouping combining the first image grouping with the second audio commentaries wherein the second audio commentaries are also stored on the single audio track; and
i) at the second computing device, transmitting the second image grouping to the server computer for storage at the server computer.
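Claims 12 and 15 recite a single audio track whose per-commentary divisions are defined by metadata. As an informal illustration only (the data shapes here are assumptions, not claim language), those division offsets might be resolved into per-image commentary segments like this:

```python
def commentary_segments(divisions, track_length, images):
    """Recover each image's commentary span from a single audio track,
    where `divisions` (taken from the grouping metadata) are the offsets
    in seconds separating consecutive commentaries on the track."""
    starts = [0.0] + list(divisions)
    ends = list(divisions) + [track_length]
    return {img: (start, end)
            for img, start, end in zip(images, starts, ends)}

# Two divisions split a 14-second track into three commentaries:
segments = commentary_segments([4.0, 9.5], 14.0, ["img1", "img2", "img3"])
assert segments == {"img1": (0.0, 4.0), "img2": (4.0, 9.5), "img3": (9.5, 14.0)}
```

Under this layout, appending reply commentaries to the same track (claim 14) would extend the division list rather than add new audio files.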
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/222,302 US20190121509A1 (en) | 2013-10-01 | 2018-12-17 | Image Grouping with Audio Commentaries System and Method |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/043,385 US9894022B2 (en) | 2013-07-19 | 2013-10-01 | Image with audio conversation system and method |
US15/181,529 US10180776B2 (en) | 2013-10-01 | 2016-06-14 | Image grouping with audio commentaries system and method |
US16/222,302 US20190121509A1 (en) | 2013-10-01 | 2018-12-17 | Image Grouping with Audio Commentaries System and Method |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/181,529 Continuation US10180776B2 (en) | 2013-10-01 | 2016-06-14 | Image grouping with audio commentaries system and method |
Publications (1)
Publication Number | Publication Date |
---|---|
US20190121509A1 true US20190121509A1 (en) | 2019-04-25 |
Family
ID=57017520
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/181,529 Expired - Fee Related US10180776B2 (en) | 2013-10-01 | 2016-06-14 | Image grouping with audio commentaries system and method |
US16/222,302 Abandoned US20190121509A1 (en) | 2013-10-01 | 2018-12-17 | Image Grouping with Audio Commentaries System and Method |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/181,529 Expired - Fee Related US10180776B2 (en) | 2013-10-01 | 2016-06-14 | Image grouping with audio commentaries system and method |
Country Status (1)
Country | Link |
---|---|
US (2) | US10180776B2 (en) |
Families Citing this family (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101419764B1 (en) * | 2013-06-07 | 2014-07-17 | 정영민 | Mobile terminal control method for voice emoticon |
WO2016145408A1 (en) * | 2015-03-12 | 2016-09-15 | everyStory, Inc. | Story capture system |
DK179494B1 (en) | 2016-06-12 | 2019-01-11 | Apple Inc. | User interface for managing controllable external devices |
US10928980B2 (en) | 2017-05-12 | 2021-02-23 | Apple Inc. | User interfaces for playing and managing audio items |
CN111343060B (en) | 2017-05-16 | 2022-02-11 | 苹果公司 | Method and interface for home media control |
US20220279063A1 (en) | 2017-05-16 | 2022-09-01 | Apple Inc. | Methods and interfaces for home media control |
US10694223B2 (en) | 2017-06-21 | 2020-06-23 | Google Llc | Dynamic custom interstitial transition videos for video streaming services |
US10372298B2 (en) | 2017-09-29 | 2019-08-06 | Apple Inc. | User interface for multi-user communication session |
CN114845122B (en) | 2018-05-07 | 2024-04-30 | 苹果公司 | User interface for viewing live video feeds and recording video |
US10936275B2 (en) * | 2018-12-27 | 2021-03-02 | Microsoft Technology Licensing, Llc | Asynchronous communications in mixed-reality |
US10904029B2 (en) | 2019-05-31 | 2021-01-26 | Apple Inc. | User interfaces for managing controllable external devices |
WO2020243691A1 (en) | 2019-05-31 | 2020-12-03 | Apple Inc. | User interfaces for audio media control |
AU2020239711C1 (en) * | 2020-05-11 | 2022-03-31 | Apple Inc. | User interface for audio message |
US11513667B2 (en) | 2020-05-11 | 2022-11-29 | Apple Inc. | User interface for audio message |
US11392291B2 (en) | 2020-09-25 | 2022-07-19 | Apple Inc. | Methods and interfaces for media control with dynamic feedback |
US12170579B2 (en) | 2021-03-05 | 2024-12-17 | Apple Inc. | User interfaces for multi-participant live communication |
US11360634B1 (en) | 2021-05-15 | 2022-06-14 | Apple Inc. | Shared-content session user interfaces |
CN113360117B (en) * | 2021-06-28 | 2024-06-25 | 北京字节跳动网络技术有限公司 | Control method, device, terminal and storage medium of electronic equipment |
US12267622B2 (en) | 2021-09-24 | 2025-04-01 | Apple Inc. | Wide angle video conference |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030048289A1 (en) * | 2001-09-06 | 2003-03-13 | Vronay David P. | Assembling verbal narration for digital display images |
US20120066594A1 (en) * | 2010-09-15 | 2012-03-15 | Verizon Patent And Licensing, Inc. | Secondary Audio Content by Users |
US20120254156A1 (en) * | 2004-11-10 | 2012-10-04 | Bindu Rama Rao | Mobile system for collecting and distributing real-estate evaluation reports |
Family Cites Families (50)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8973017B2 (en) | 1999-09-08 | 2015-03-03 | Kenneth F. Krutsch | Productivity application management |
AU2001245575A1 (en) | 2000-03-09 | 2001-09-17 | Videoshare, Inc. | Sharing a streaming video |
US8199188B2 (en) | 2001-11-09 | 2012-06-12 | Karl Storz Imaging, Inc. | Video imaging system with a camera control unit |
GB2424535A (en) | 2003-04-30 | 2006-09-27 | Hewlett Packard Co | Editing an image and associating sound with it |
US7848493B2 (en) | 2003-06-24 | 2010-12-07 | Hewlett-Packard Development Company, L.P. | System and method for capturing media |
US8010579B2 (en) | 2003-11-17 | 2011-08-30 | Nokia Corporation | Bookmarking and annotating in a media diary application |
JP4651623B2 (en) | 2003-12-01 | 2011-03-16 | リサーチ イン モーション リミテッド | Previewing new events on small screen devices |
EP1712070A2 (en) | 2003-12-02 | 2006-10-18 | Martin E. Groeger | Method for the generation sending and receiving of mms messages a computer programme and a computer-readable storage medium |
US7571213B2 (en) | 2004-03-26 | 2009-08-04 | Microsoft Corporation | Interactive electronic bubble messaging |
US20060041848A1 (en) | 2004-08-23 | 2006-02-23 | Luigi Lira | Overlaid display of messages in the user interface of instant messaging and other digital communication services |
US8225335B2 (en) | 2005-01-05 | 2012-07-17 | Microsoft Corporation | Processing files from a mobile device |
US20080028023A1 (en) | 2006-07-26 | 2008-01-31 | Voicetribe Llc. | Sharing commentaries synchronized with video content |
US7565332B2 (en) | 2006-10-23 | 2009-07-21 | Chipin Inc. | Method and system for providing a widget usable in affiliate marketing |
KR101484779B1 (en) | 2007-01-19 | 2015-01-22 | 삼성전자주식회사 | System and method for interactive video blogging |
US20090225788A1 (en) | 2008-03-07 | 2009-09-10 | Tandem Readers, Llc | Synchronization of media display with recording of audio over a telephone network |
US8793282B2 (en) | 2009-04-14 | 2014-07-29 | Disney Enterprises, Inc. | Real-time media presentation using metadata clips |
US9424444B2 (en) | 2009-10-14 | 2016-08-23 | At&T Mobility Ii Llc | Systems, apparatus, methods and computer-readable storage media for facilitating integrated messaging, contacts and social media for a selected entity |
KR20110060650A (en) | 2009-11-30 | 2011-06-08 | 엘지전자 주식회사 | How to change operation mode of TV that can be connected to network |
US8606297B1 (en) | 2010-03-24 | 2013-12-10 | Grindr LLC | Systems and methods for providing location-based cascading displays |
US20110258050A1 (en) | 2010-04-16 | 2011-10-20 | Bread Labs Inc. A Delaware Corporation | Social advertising platform |
US8566348B2 (en) | 2010-05-24 | 2013-10-22 | Intersect Ptp, Inc. | Systems and methods for collaborative storytelling in a virtual space |
US9646352B2 (en) | 2010-12-10 | 2017-05-09 | Quib, Inc. | Parallel echo version of media content for comment creation and delivery |
KR101718770B1 (en) | 2010-12-17 | 2017-03-22 | 삼성전자주식회사 | Method for displaying message in mobile communication terminal |
US20120192220A1 (en) | 2011-01-25 | 2012-07-26 | Youtoo Technologies, LLC | User-generated social television content |
US8701020B1 (en) | 2011-02-01 | 2014-04-15 | Google Inc. | Text chat overlay for video chat |
US20120317499A1 (en) | 2011-04-11 | 2012-12-13 | Shen Jin Wen | Instant messaging system that facilitates better knowledge and task management |
US8744237B2 (en) | 2011-06-20 | 2014-06-03 | Microsoft Corporation | Providing video presentation commentary |
US8788584B2 (en) | 2011-07-06 | 2014-07-22 | Yahoo! Inc. | Methods and systems for sharing photos in an online photosession |
US8380040B2 (en) | 2011-07-18 | 2013-02-19 | Fuji Xerox Co., Ltd. | Systems and methods of capturing and organizing annotated content on a mobile device |
WO2013025556A1 (en) | 2011-08-12 | 2013-02-21 | Splunk Inc. | Elastic scaling of data volume |
US20140348394A1 (en) | 2011-09-27 | 2014-11-27 | Picsured, Inc. | Photograph digitization through the use of video photography and computer vision technology |
US8682973B2 (en) | 2011-10-05 | 2014-03-25 | Microsoft Corporation | Multi-user and multi-device collaboration |
US20130178961A1 (en) | 2012-01-05 | 2013-07-11 | Microsoft Corporation | Facilitating personal audio productions |
US9042923B1 (en) | 2012-02-08 | 2015-05-26 | Fsp Llc | Text message definition and control of multimedia |
KR102042265B1 (en) | 2012-03-30 | 2019-11-08 | 엘지전자 주식회사 | Mobile terminal |
KR101873761B1 (en) | 2012-04-20 | 2018-07-03 | 엘지전자 주식회사 | Mobile terminal and method for controlling the same |
US20120284426A1 (en) | 2012-07-19 | 2012-11-08 | Jigsaw Informatics, Inc. | Method and system for playing a datapod that consists of synchronized, associated media and data |
US20120290907A1 (en) | 2012-07-19 | 2012-11-15 | Jigsaw Informatics, Inc. | Method and system for associating synchronized media by creating a datapod |
US9113033B2 (en) | 2012-08-28 | 2015-08-18 | Microsoft Technology Licensing, Llc | Mobile video conferencing with digital annotation |
US8798598B2 (en) | 2012-09-13 | 2014-08-05 | Alain Rossmann | Method and system for screencasting Smartphone video game software to online social networks |
US9703792B2 (en) | 2012-09-24 | 2017-07-11 | Moxtra, Inc. | Online binders |
US9160984B2 (en) | 2012-11-29 | 2015-10-13 | Fanvision Entertainment Llc | Mobile device with personalized content |
US9210477B2 (en) | 2012-11-29 | 2015-12-08 | Fanvision Entertainment Llc | Mobile device with location-based content |
US20140163980A1 (en) | 2012-12-10 | 2014-06-12 | Rawllin International Inc. | Multimedia message having portions of media content with audio overlay |
JP2014222439A (en) | 2013-05-14 | 2014-11-27 | ソニー株式会社 | Information processing apparatus, part generating and using method, and program |
US9894022B2 (en) | 2013-07-19 | 2018-02-13 | Ambient Consulting, LLC | Image with audio conversation system and method |
US9787631B2 (en) | 2013-07-30 | 2017-10-10 | Wire Swiss Gmbh | Unified and consistent multimodal communication framework |
US10237953B2 (en) | 2014-03-25 | 2019-03-19 | Osram Sylvania Inc. | Identifying and controlling light-based communication (LCom)-enabled luminaires |
US9344993B2 (en) | 2014-04-01 | 2016-05-17 | Telecommunication Systems, Inc. | Location verification |
US9648295B2 (en) | 2014-07-18 | 2017-05-09 | Pankaj Sharma | System and methods for simultaneously capturing audio and image data for digital playback |
- 2016-06-14: US 15/181,529 granted as US10180776B2 (status: Expired - Fee Related)
- 2018-12-17: US 16/222,302 published as US20190121509A1 (status: Abandoned)
Also Published As
Publication number | Publication date |
---|---|
US20160291824A1 (en) | 2016-10-06 |
US10180776B2 (en) | 2019-01-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10180776B2 (en) | Image grouping with audio commentaries system and method | |
US9977591B2 (en) | Image with audio conversation system and method | |
US9894022B2 (en) | Image with audio conversation system and method | |
US10325394B2 (en) | Mobile communication terminal and data input method | |
US11670271B2 (en) | System and method for providing a video with lyrics overlay for use in a social messaging environment | |
US8973072B2 (en) | System and method for programmatic link generation with media delivery | |
US7945622B1 (en) | User-aware collaboration playback and recording | |
US10057731B2 (en) | Image and message integration system and method | |
US9131059B2 (en) | Systems, methods, and computer programs for joining an online conference already in progress | |
US8788584B2 (en) | Methods and systems for sharing photos in an online photosession | |
US20150092006A1 (en) | Image with audio conversation system and method utilizing a wearable mobile device | |
US20140033073A1 (en) | Time-shifted collaboration playback | |
TWI711304B (en) | Video processing method, client and server | |
TWI522815B (en) | Direct sharing system of photo | |
US20120162350A1 (en) | Audiocons | |
US20130198288A1 (en) | Systems, Methods, and Computer Programs for Suspending and Resuming an Online Conference | |
TW201030616A (en) | Synchronizing presentation states between multiple applications | |
US20160142361A1 (en) | Image with audio conversation system and method utilizing social media communications | |
US20190199763A1 (en) | Systems and methods for previewing content | |
KR101123370B1 (en) | service method and apparatus for object-based contents for portable device | |
WO2016110180A1 (en) | Data transmission method, related apparatus and system | |
TW201600990A (en) | Message storage | |
WO2015050966A1 (en) | Image and message integration system and method | |
US20140362290A1 (en) | Facilitating generation and presentation of sound images | |
WO2014072739A1 (en) | Video distribution |
Legal Events
Date | Code | Title | Description
---|---|---|---
| STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION