US20050198006A1 - System and method for real-time media searching and alerting - Google Patents
- Publication number
- US20050198006A1 (U.S. application Ser. No. 11/063,559)
- Authority
- US
- United States
- Prior art keywords
- video
- text
- media
- monitoring system
- closed captioned
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/783—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/7844—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using original textual content or text extracted from visual content or transcript of audio data
Definitions
- the media management system 102 is responsible for receiving and archiving video and its corresponding audio.
- the video and audio data is continuously received and stored.
- Video streams are tuned using media sources 106, such as satellite receivers and other signal receiving devices including cable boxes, antennas, VCRs, and DVD players.
- media sources 106 can include non-video media sources, such as digital radio sources, for example.
- the media sources 106 receive digital signals.
- the video signal, corresponding audio, and any corresponding closed-captioned text are captured by video/audio capture hardware/software running on media servers 108 .
- the video/audio data can be stored in any sized segment, such as in one-hour segments, with software code to later extract any desired segment of any size by channel, start time, and end time.
- media management system 102 can include any number of media servers 108 , and each media server can be in communication with any number of media sources 106 . If storage space is limited, the media servers can compress the digital data from the media sources 106 into smaller file sizes. The data from media sources 106 are stored in consecutive segments onto a mass storage device using uniquely generated filenames that encode the channel and airdate of the video segment.
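As an illustration of this storage scheme, here is a minimal Python sketch of one possible filename convention. The patent does not specify the exact encoding, so the archive root, directory layout, date format, and one-hour segment length below are assumptions:

```python
from datetime import datetime
from pathlib import Path

ARCHIVE_ROOT = Path("/archive")   # hypothetical mount point for the mass storage device

def segment_filename(channel: str, start: datetime) -> Path:
    """Encode channel and airdate into a uniquely generated segment filename."""
    # e.g. /archive/CBC-Ottawa/20040224-1700.mpg for a one-hour block
    return ARCHIVE_ROOT / channel / f"{start:%Y%m%d-%H%M}.mpg"

def segment_for(channel: str, moment: datetime) -> Path:
    """Deduce which stored one-hour file contains a given instant."""
    return segment_filename(channel, moment.replace(minute=0, second=0, microsecond=0))

print(segment_for("CBC-Ottawa", datetime(2004, 2, 24, 17, 55)))
# /archive/CBC-Ottawa/20040224-1700.mpg
```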
- closed-captioned text is extracted from the video stream and stored in web servers 114 as searchable text, as will be discussed later.
- the extracted closed-captioned text is indexed to its corresponding video/audio clips stored in the media servers 108 .
- the media management system 102 stores all of the text associated with the video stream. In most cases, this text is obtained from the closed-captioning signal encoded into the video.
- the media management system 102 can further include a closed-captioned text detector for detecting the absence of closed-captioned text in the data stream, in order to alert the system administrator that closed-captioned text has not been detected for a predetermined amount of time. In a situation where closed-captioned text is undetected, the aforementioned alert can notify the system operator to take appropriate action in order to resolve the problem.
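A minimal sketch of such a text detector, assuming a periodic check on the media server; the `notify_operator` callback is a hypothetical hook, and the 15-minute timeout is an assumed value for the "predetermined amount of time":

```python
import time

CAPTION_TIMEOUT = 15 * 60  # seconds without captions before alerting (assumed value)

class CaptionWatchdog:
    """Alerts the operator when no closed-captioned text arrives for too long."""

    def __init__(self, notify_operator):
        self.notify = notify_operator      # e.g. an in-system alert or email hook
        self.last_seen = time.monotonic()
        self.alerted = False

    def on_caption_text(self, text: str) -> None:
        """Call whenever the decoder emits caption text."""
        if text.strip():
            self.last_seen = time.monotonic()
            self.alerted = False

    def check(self) -> None:
        """Call periodically (e.g. once a minute) from the media server."""
        if not self.alerted and time.monotonic() - self.last_seen > CAPTION_TIMEOUT:
            self.notify("No closed-captioned text detected for 15 minutes")
            self.alerted = True
```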
- the stream may not be a digital stream, and the system can include a speech-to-text system to convert the audio signals into text.
- these sub-systems can be executed within each media server 108 .
- the extracted text is broken into small sections, preferably into one-minute segments.
- Each clip is then stored in a database along with the program name, channel, and airdate of the clip.
- the text is also pushed into an indexing engine of index servers 110 , which allows it to be searched.
- the closed captioned text spanning a preset time received by index servers 110 is converted to XML format, bundled, and sent to web servers 114 for global storage via network 116 .
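The patent does not specify the XML schema used for these bundles; the element and attribute names in this sketch are assumptions for illustration:

```python
import xml.etree.ElementTree as ET
from datetime import datetime

def bundle_captions(channel: str, segments: list[tuple[datetime, str]]) -> bytes:
    """Bundle one-minute caption units into one XML document for the web servers."""
    bundle = ET.Element("captionBundle", channel=channel)
    for start, text in segments:
        # Each unit carries its start time so it can be indexed by channel and time.
        unit = ET.SubElement(bundle, "unit", start=start.isoformat())
        unit.text = text
    return ET.tostring(bundle, encoding="utf-8")

xml_bytes = bundle_captions("Channel Y", [
    (datetime(2004, 2, 24, 18, 0), "...surprise merger of Company A and Company B..."),
])
print(xml_bytes.decode())
```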
- Web servers 114 can execute the searches for matches between user specified terms and the stored closed captioned text, via a web-based search interface.
- the closed captioned text can be stored in index servers 110 .
- the channel and airdate fields of the text segment allow it to be matched to a video clip stored by the media management system 102 as needed. Further details of media management system 102 will be described later.
- the media management system 102 includes an alerting system. This system watches each closed captioned segment that is indexed and cross-references it against the stored list of user defined alerts. Any matches will trigger user alerts to notify the user that a match has occurred. Alerts can include in-system alerts, mobile device activation, pager activation, and automatic email generation, which can be generated from web servers 114 .
- the user access system 104 can include access devices such as a computer workstation 118 , or mobile computing devices such as a laptop computer 120 and a PDA 122 . Of course, other wireless devices such as mobile phones can also be used. These web enabled access devices can communicate with the web servers 114 via the Internet 126 , wirelessly through Bluetooth or WiFi network systems, or through traditional wired systems. Optionally, users can dial up directly to the network 116 with a non-web search interface enabled computer 124 . As will be shown later in FIG. 3 , the user access system 104 further includes an alternate data transfer path for transferring video data to the access devices to reduce congestion within media management system 102 . As previously discussed, each web server 114 can store identical copies of the closed captioned text bundle received from index servers 110 . This configuration facilitates searches conducted by users since the text data is quickly accessible, and search results consisting of closed captioned text can be quickly forwarded to the user's access device.
- the user can search for occurrences of keywords, retrieve video by date and time, store alert parameters, etc.
- the user interface software can take the form of a web interface, locally run software, mobile device interface, or any other interactive form.
- the back end portion of the web interface maintains a connection to the text database, as well as to the index of video streams.
- the user interface software can be used to stream video to the user, or alternatively, to direct the user to an alternate server where video will be presented.
- networks 112 and 116 can be implemented as a local area network (LAN), such as in an office building for example.
- Local area networks typically provide high bandwidth operation.
- media monitoring system 100 can be deployed across a wide network, meaning that the components of the system can be geographically dispersed, making networks 112 and 116 wide area networks (WAN).
- the bandwidth of a WAN is generally smaller than that of a LAN.
- those of skill in the art will understand that the presently described system can be implemented with a combination of WAN and LAN.
- media servers 108 and their corresponding media sources 106 can be geographically distributed to collect and store local video, which is then shared within the system.
- “pods” of media servers 108 and their corresponding media sources 106 can be located in different cities, and in different countries.
- the server the user is connected to may not physically be at the location where the video streams are being recorded.
- the distributed media server pods are considered remotely connected to index servers 110 , since they are connected via a WAN.
- an advantage of the present invention is that the monitoring and notification speed remains fast regardless of the network configuration of the media monitoring system 100 . This is due to the fact that the small sized closed captioned text can be rapidly transferred within the system, and more particularly, between the media servers 108 and the user access devices.
- the larger video data is accessed and sent to the user. Due to the size of the video, it is preferable to avoid congesting the networks 112 , 116 and 126 , which would limit performance for all users. However, video may be transferred to the user in an all-LAN environment with satisfactory speed.
- the user access device connects to index servers 110 , which function as the conductor of traffic between media servers 108 and the user access device. Therefore, according to another embodiment of the invention, requested video can be directly sent from the appropriate media server 108 to the video enabled user access device.
- FIG. 2 illustrates the configuration of the media monitoring system 100 when video data is to be transferred to a user access device in a geographically distributed system.
- one media server 108 and its corresponding media sources 106 represent a single video processing unit of a pod of video processing units 130 that may be deployed in a particular city and geographically distant from index servers 110 and network 112 .
- the pod 130 remains in communication with remote access devices 118 , 120 and 122 via LAN/WAN network 132 , which may be geographically distant from pod 130 .
- Media server 108 can include a parser for providing the requested video clip that corresponds with the time-indexed closed captioned text. Since the video clips are received through a path outside of the media management system 102 and user access system 104 , the potential for congestion of data traffic within the system is greatly reduced. At the same time, multiple users can receive their respective requested video clips rapidly.
- the index servers will search the archived closed captioned text, and notify the user if any matches have occurred. Matches are displayed with the relevant bibliographic information such as air date and channel. The user then has the option of viewing and hearing a time segment of the videos containing the matched terms, the time segment being selectable by the user.
- the search of key terms can extend to future broadcasts, such that the search is conducted dynamically in real-time. Thus, the user can be notified shortly after a search term has been matched in a current broadcast. Since the video broadcast is recorded, the user can selectively view the entire broadcast, or any portion thereof.
- FIG. 3 illustrates a block diagram of the general functional components of media monitoring system 100 shown in FIG. 1 .
- the media monitoring system 100 converts a video signal to an indexed series of digital files on a mass storage system, which can then be retrieved by specifying the desired channel, start time, and end time. This capability is then used to supply the actual video that matches the search result from the user interface component.
- Video is archived at a specified quality, depending on operator configuration. Higher quality settings allow for larger video frames, higher frame rates, and greater image detail, but with a penalty of greater file storage requirements. All parameters are configurable by the operator at the system level.
- the video/audio signal to be archived is made available from an external source. In practice, this usually consists of an antenna, or a satellite receiver or cable feed supplied by a signal provider.
- closed captioning data is carried in the Vertical Blanking Interval (VBI) of the video signal.
- the video/audio signal is applied to the input of a video capture device 200 , which, either through a hardware or a software compression system 202 , converts the video signal to a digital stream.
- video capture device 200 and software compression system 202 can be implemented in media servers 108 .
- the exact format of this stream can be specified by the operator, but is typically chosen to be a compressed stream in a standard format such as MPEG or AVI formatted video.
- the video capture process outputs a continuous stream of video, which is then divided into manageable files. According to an embodiment of the present invention, the files are preferably limited to one hour blocks of video.
- mass storage system 204 locally stores the video/audio data for its corresponding media sources 106 .
- Video clips can be retrieved from mass storage system 204 in response to retrieval requests from permitted machines. These requests would be generated from servers that are serving users who have requested a video clip. From the user's standpoint, this video clip is chosen by its content, but the system would know it as belonging to a specified channel for a given period of time. Most user clip requests are for small segments of video, an example being “CBC-Ottawa, 5:55 pm-5:58 pm”.
- the archive system, using the channel and the date required, first deduces which large file the video segment is located in. It then parses the video file to locate and extract the stream data representing the segment selected. The stream data is then re-encapsulated to convert it to a stand-alone video file, and the result is returned to the calling machine, ultimately to be delivered to the user.
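The parsing and re-encapsulation step is described without naming a tool; one way to realize it is with a stream copy, sketched here using the ffmpeg command line. The tool choice, the archive layout, and the assumption that a clip falls within a single one-hour segment are all illustrative:

```python
import subprocess
from datetime import datetime
from pathlib import Path

ARCHIVE_ROOT = Path("/archive")   # hypothetical mount point, as in the earlier sketch

def extract_clip(channel: str, start: datetime, end: datetime, out_path: str) -> str:
    """Deduce the hour-long archive file and re-encapsulate the requested span."""
    hour = start.replace(minute=0, second=0, microsecond=0)
    source = ARCHIVE_ROOT / channel / f"{hour:%Y%m%d-%H%M}.mpg"
    offset = (start - hour).total_seconds()    # seek position within the large file
    duration = (end - start).total_seconds()   # assumes the clip fits in one segment
    # Stream copy extracts and re-wraps the span as a stand-alone file
    # without re-encoding the video or audio.
    subprocess.run(
        ["ffmpeg", "-ss", str(offset), "-t", str(duration),
         "-i", str(source), "-c", "copy", out_path],
        check=True,
    )
    return out_path

# The example request from above, “CBC-Ottawa, 5:55 pm-5:58 pm”:
# extract_clip("CBC-Ottawa", datetime(2004, 2, 24, 17, 55),
#              datetime(2004, 2, 24, 17, 58), "clip.mpg")
```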
- the system can continuously replace the oldest video streams in its archive with the newest. This ensures that as much video is stored as possible. Additional storage can be added or removed as needed.
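A sketch of this rolling-archive behavior; the free-space threshold and the .mpg naming follow the assumed conventions from the earlier sketches:

```python
import shutil
from pathlib import Path

MIN_FREE_BYTES = 50 * 2**30   # keep 50 GB free (assumed threshold)

def prune_archive(root: Path) -> None:
    """Replace the oldest video segments with the newest as storage fills."""
    # Oldest segments first, by modification time.
    files = sorted(root.rglob("*.mpg"), key=lambda p: p.stat().st_mtime)
    while files and shutil.disk_usage(root).free < MIN_FREE_BYTES:
        files.pop(0).unlink()   # delete the oldest segment to make room
```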
- Media monitoring system 100 can include self monitoring functions to ensure robust operation, and to minimize potential errors.
- the video digitizing process has the ability to detect the lack of video present at its input. This condition will raise an operator alert to allow the operator to locate the cause of the outage. In the field, this can be attributed to cabling problems, weather phenomena, hardware failure, upstream problems, etc.
- the system can be configured to attempt an automatic repair, by restarting or re-initializing a process or external device.
- the closed captioned text associated with the video is preferably extracted from the closed captioning stream in the video signal, or generated by an associated speech-to-text device.
- when closed captioning data is available in the video signal, the signal is applied to a decoder 206 , typically located in each media server 108 , that can read the VBI stream.
- the decoder 206 extracts the closed captions that are encoded into the video signal. In practice, this can be the same device performing the video compression, and the extraction can be done in software.
- the audio stream is fed into a speech-to-text device instead of decoder 206 , and the resulting text is fed into the system. This option can be used if the content is not a video signal, such as a commercial radio stream or recorded speech.
- the decoder 206 includes a buffer, into which text accumulates at “human reading” speed. After a short increment of time, preferably one minute, the text buffer is stored into text database 208 along with the channel and time information associated with the clip. This database 208 then contains a complete record of all text that has flowed through the system, sorted by channel and airdate. As previously mentioned, database 208 can be located within either index servers 110 or web servers 114 . In either case, database 208 functions as global storage of the decoded closed captioned text.
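A sketch of the buffering behavior described above, using SQLite as a stand-in for text database 208 (the patent does not name a database product):

```python
import sqlite3
from datetime import datetime

db = sqlite3.connect("captions.db")   # stand-in for text database 208
db.execute("""CREATE TABLE IF NOT EXISTS captions
              (channel TEXT, start TEXT, text TEXT)""")

class CaptionBuffer:
    """Accumulates decoded caption text and flushes it once per minute."""

    def __init__(self, channel: str):
        self.channel = channel
        self.minute = None
        self.parts = []

    def feed(self, when: datetime, text: str) -> None:
        """Accept caption text as it arrives at human reading speed."""
        minute = when.replace(second=0, microsecond=0)
        if self.minute and minute != self.minute:
            self.flush()                 # a new minute has begun
        self.minute = minute
        self.parts.append(text)

    def flush(self) -> None:
        """Store the buffered minute along with its channel and time."""
        if self.parts:
            db.execute("INSERT INTO captions VALUES (?, ?, ?)",
                       (self.channel, self.minute.isoformat(), " ".join(self.parts)))
            db.commit()
            self.parts = []
```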
- indexing engine 210 implemented in index servers 110 receives a block of text, which in this case represents a small unit of video transcript (typically one minute), and stores it in a format that is optimized for full text searches. For practical implementation purposes, standard off-the-shelf products can be employed for the indexing function. According to the presently described embodiments, the video captions are indexed by channel and time for example.
- the formatted text is stored in index database 212 , which can be located in index servers 110 or web servers 114 . Database 212 can also function as global storage of all the formatted text.
- the user's search string is submitted to a full text search engine that searches database 212 .
- Any results returned from this engine also contain indexes to the corresponding channel and time of the airing.
- since the entire text is stored in database 208 , it can be retrieved using standard techniques to search on the channel and air time.
- database 212 is used for full text searching, while database 208 has been formatted such that the data is ordered by time and channel to facilitate look up by time and channel.
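To make the division of labor concrete, here is a sketch of the two access patterns using SQLite, with an FTS5 virtual table standing in for the off-the-shelf full-text indexing engine the patent mentions; the schema and the availability of FTS5 are assumptions:

```python
import sqlite3

db = sqlite3.connect(":memory:")
# Database 208: ordered by channel and time for direct lookup.
db.execute("CREATE TABLE captions (channel TEXT, start TEXT, text TEXT)")
# Database 212: a full-text index over the same one-minute units.
db.execute("CREATE VIRTUAL TABLE caption_index USING fts5(channel, start, text)")

row = ("Channel Y", "2004-02-24T18:00", "surprise merger of Company A and Company B")
db.execute("INSERT INTO captions VALUES (?, ?, ?)", row)
db.execute("INSERT INTO caption_index VALUES (?, ?, ?)", row)

# A full-text search returns the channel and time indexes of the airing...
hits = db.execute(
    "SELECT channel, start FROM caption_index WHERE caption_index MATCH ?",
    ("merger",)).fetchall()
# ...which then key a direct lookup of the stored text by channel and air time.
for channel, start in hits:
    print(db.execute("SELECT text FROM captions WHERE channel=? AND start=?",
                     (channel, start)).fetchone()[0])
```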
- user-defined searches can be executed through user access system 104 .
- Operating upon each access device is a user search interface that provides the functionality of the system. The interface is designed to allow users with minimal training to be able to perform text searches, examine the program text that matches, and selectively view or archive the video streams where the captioning appeared.
- while the reference application is a web-based system, the system can also be searched through other means, such as mobile WiFi devices, Bluetooth-enabled devices, and locally running software, for example.
- FIG. 4 shows a flow chart of the process executed by the media monitoring system 100
- FIGS. 5-8 are examples of user interface screens that prompt the user for information and display results to the user.
- the process begins at step 300 , where the user logs into the interface with the goal of researching a topic's appearance in the recent media.
- the user is presented with a screen that allows them to enter the search terms that would match their desired content.
- Common search parameters are provided, such as specifying phrases that must appear as typed, words that should appear within a certain distance of each other, boolean queries, etc.
- the query can be limited to only return results from specific broadcast channels.
- FIG. 5 is an example user interface for prompting the search parameters from the user.
- the search parameters provided by the user are first groomed at step 302 .
- Grooming is an optional step, which refers to optimization of the search parameters, especially if the user's search parameters are broken. For example, the user may enter “red blue” in the MUST CONTAIN THESE WORDS search field, and “GREEN” in the MAY CONTAIN search field.
- the grooming process then optimizes the search parameters to “GREEN RED AND BLUE”.
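The exact grooming rules are not spelled out in the patent; here is a minimal sketch that reproduces the example above, treating the MUST CONTAIN and MAY CONTAIN fields as assumed inputs:

```python
def groom(must_contain: str, may_contain: str) -> str:
    """Normalize raw search fields into a single boolean query string."""
    # Join required words with explicit AND operators.
    query = " AND ".join(must_contain.upper().split())
    # Prepend optional words, which broaden rather than restrict the match.
    if may_contain.strip():
        query = f"{may_contain.strip().upper()} {query}".strip()
    return query

print(groom("red blue", "green"))   # GREEN RED AND BLUE
```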
- the groomed search parameters are compared to database 208 that stores all the closed captioned text.
- the user is presented with a match results page at step 304 , itemizing the results obtained, the programs they appeared in, and a score that represents how strong the match was.
- the results can be sorted in numerous ways, such as by date, by program name, or by score.
- a compact example results page is shown in FIG. 6
- a more detailed version is shown in FIG. 7 .
- the user can select any row to view further details of that program segment.
- the results pages shown in FIGS. 6 and 7 may list concurrent segments belonging to the same broadcast, since the search term appears in each segment. For example, the results may return “Channel Y, 6:00 pm to 6:01 pm”, “Channel Y, 6:01 pm to 6:02 pm” and “Channel Y, 6:02 pm to 6:03 pm” as separate program segment items.
- the system can optimize the results by recognizing that the three segments are chronological segments of Channel Y, and collapse the results into a simplified description, such as “Channel Y, 6:00 pm to 6:03 pm”.
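A sketch of this collapsing step; the (channel, start, end) result tuples are an assumed representation of the matching segments:

```python
from datetime import datetime, timedelta

def collapse(results):
    """Merge chronologically adjacent matches on the same channel."""
    merged = []
    for channel, start, end in sorted(results):
        if merged and merged[-1][0] == channel and merged[-1][2] >= start:
            # This segment continues the previous one: extend the span.
            prev = merged[-1]
            merged[-1] = (channel, prev[1], max(prev[2], end))
        else:
            merged.append((channel, start, end))
    return merged

six = datetime(2004, 2, 24, 18, 0)
minute = timedelta(minutes=1)
print(collapse([("Channel Y", six, six + minute),
                ("Channel Y", six + minute, six + 2 * minute),
                ("Channel Y", six + 2 * minute, six + 3 * minute)]))
# collapses to a single ("Channel Y", 6:00 pm, 6:03 pm) span
```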
- Upon selecting a program segment at step 306 , the user is presented with a caption viewing screen showing the matching captioning and timing information, as shown in FIG. 8 .
- the present screen gives the user the option of viewing the clip associated with the shown extracted closed captioned text.
- the user is also presented with a navigation system that allows the user to move forward or backward in the video stream beyond the matched segment, to peruse the context that the clip was presented in.
- the caption viewing screen also features controls to compose a video clip that consists of several consecutive units of video. More specifically, the user has the ability to define the start and end points of a video clip, and then view or save that clip. This is suitable for preparing a salient clip that is to be saved for future reference.
- the process can return to step 300 to restart the search.
- the process can return to step 304 to permit the user to view the results page and select a different program segment.
- the system determines if the video clip is stored locally at step 308 . It is important to note that a locally stored video clip refers to one that is accessible via a high bandwidth network, which is typically available in a local area network, such as in an office environment. In contrast, remotely stored video clips are generally available only through a low bandwidth network, or one that is too low to have a copy of all video sent to it all the time. As previously discussed, the user can access the video remotely over a low bandwidth connection.
- the process provides a video access method optimized according to whether or not the user is accessing the system remotely. If the video clip is stored locally, i.e., on a high bandwidth connection suitable for streaming video, then the system proceeds to step 310 . At step 310 , the video clip is retrieved and assembled with the appropriate video segments, and then displayed for the user at step 312 . The video clip can be played with the user's preferred video playing software. Alternately, at step 308 , if the video clip is not stored locally, the system proceeds to step 314 , where a query is sent to the specific remote server that will return the video that the user is asking for. The video clip is retrieved from the remote system at step 316 , and finally displayed for the user at step 312 . Once the clip has ended, the user has the option of returning to step 304 to view another program segment. Alternately, the user may return to step 300 to initiate a new search.
- the video clip can be ordered through the user interface, where it will be delivered to the user via email, via a link to a web site, or on a physical medium such as a DVD, CD or video cassette, for example.
- This service is suitable for clients requiring a permanent copy of especially important video segments.
- the previously described manual interactive operation method of FIG. 4 is effective for searching and viewing archived video.
- the media monitoring system 100 can concurrently operate in an automatic scanning mode to match user defined terms with real time extracted closed captioned text.
- the user can selectively activate the alerting system to provide notification for specific terms.
- searches can be stored by users so that they are executed on all incoming text corresponding to real-time recorded video. Any matches will selectively generate an immediate alert, which can be communicated to the user by various means. Selective generation of an alert refers to the fact that the user can set specific search terms to trigger an alert when matched.
- the stored search terms are archived in a search term database, preferably located on web servers 114 , including parameters reflecting the desired level of alerting the user has requested. Examples of such alerting levels can include “Never alert me”, “alert me by putting a message in the product”, and “alert me urgently by sending me an email to my mobile device”.
- the automatic scanning mode method of operation of the media monitoring system 100 is described with reference to FIG. 9 . It is assumed that the following process operates upon each stored unit of program text after the text is stored and indexed. Then the index is searched again with the terms to detect if anything new appears. It is further assumed that the user has previously defined his/her search terms and stored them in a search term database 404 , which can be physically located on web server 114 .
- the process begins at step 400 , where the text from index database 212 for the unit is retrieved.
- a search term from the user's search term database 404 is retrieved at step 402 and compared to the stored unit of program text at step 406 .
- at step 408 , the system checks if there are any further search terms to check against the stored unit of program text. If there are no more search terms, the process ends at step 410 . Otherwise, the system loops back to step 402 to fetch the next search term.
- at step 412 , the system checks if the user has activated an alert for the present search term. If an alert has been activated for the present search term, the system generates a notification message for the user at step 418 , in accordance with their desired alert level. Depending on settings and system configuration, this alert/notification can be delivered using a number of methods, including but not limited to, alerts in the interface, via email, and via mobile and wireless devices.
- following the matched search result processing at step 418 , the system proceeds to step 408 to determine if there are any further search terms. This aforementioned process is executed for each unit of program text stored in the index.
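Putting steps 400 through 418 together, here is a sketch of the per-unit scanning loop; the term representation and the `store_match` and `send_alert` hooks are hypothetical stand-ins for the match database and notification subsystem:

```python
def scan_unit(unit_text: str, unit_info: dict, search_terms: list[dict],
              store_match, send_alert) -> None:
    """Compare one indexed unit of program text against every stored search term."""
    for term in search_terms:                     # steps 402/406: fetch and compare
        if term["text"].lower() in unit_text.lower():
            store_match(term, unit_info)          # record details of the match
            # Step 412: only terms with an alerting status trigger notification.
            if term.get("alert_level", "never") != "never":
                send_alert(term, unit_info)       # step 418: notify per alert level
    # Step 410: all stored search terms have been checked for this unit.
```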
- the media monitoring system of the present invention can immediately search the archives to identify any prior program segments that match the new search term, and monitor new program segments for occurrences of the new search term.
- the system described in this application stores all video from all channels, allowing searches to be refined or changed at will with instant results. As well, learnings from the results of one query can be incorporated into a new search with instant results.
- This invention improves the user experience by storing and indexing all recent video and captions. This allows not only unlimited queries with real-time results, but also new searches inspired by earlier results to be performed immediately.
- web server 114 can include a duplicate video clip detector to mark matching video clip results that are essentially the same. This function can be executed in web servers 114 as search results are returned to the user. For example, the text of the returned search results can be scanned such that the duplicates are marked as such. This feature allows the user to view one video clip and dismiss those marked as duplicates very quickly, without opening each one and viewing the clip.
- the duplicate video clip detector is preferably implemented on web server 114 , but can alternatively be executed in index servers 110 .
- a first matching result is added to the database and then fuzzy matching is executed to determine if further matches are essentially the same as the first stored instance. If so, then the duplicates are marked as such for the users' convenience.
- an essential match between two clips is one where a substantial percentage of the content is the same. Naturally, this percentage can be preset by the system administrator.
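A sketch of such fuzzy matching using a simple text-similarity ratio; the 0.85 threshold is an assumed administrator preset, and the result dictionaries are an assumed representation:

```python
from difflib import SequenceMatcher

DUPLICATE_THRESHOLD = 0.85   # the "substantial percentage", preset by the administrator

def mark_duplicates(results: list[dict]) -> None:
    """Mark results whose caption text essentially matches an earlier result."""
    kept = []
    for result in results:
        # Fuzzy-compare against each first-seen instance already in the database.
        result["duplicate"] = any(
            SequenceMatcher(None, result["text"], first["text"]).ratio()
            >= DUPLICATE_THRESHOLD
            for first in kept)
        if not result["duplicate"]:
            kept.append(result)   # a new first instance
```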
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Library & Information Science (AREA)
- Multimedia (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Television Signal Processing For Recording (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
A method and system for continually storing and cataloguing streams of broadcast content, allowing real-time searching and real-time results display of all catalogued video. A bank of video recording devices stores and indexes all video content on any number of broadcast sources. This video is stored along with the associated program information such as program name, description, airdate and channel. A parallel process obtains the text of the program, either from the closed captioning data stream, or by using a speech-to-text system. Once the text is decoded, stored, and indexed, users can then perform searches against the text, and view matching video immediately along with its associated text and broadcast information. Users can retrieve program information by other methods, such as by airdate, originating station, program name and program description. An alerting mechanism scans all content in real-time and can be configured to notify users by various means upon the occurrence of specified search criteria in the video stream. The system is preferably designed to be used on publicly available broadcast video content, but can also be used to catalog private video, such as conference speeches or audio-only content such as radio broadcasts.
Description
- This application claims the benefit of U.S. Provisional Application No. 60/546,954, filed Feb. 24, 2004, the entire contents of which are incorporated herein by reference.
- The present invention relates generally to media monitoring systems. More particularly, the present invention relates to video media searching and alerting systems.
- Many businesses and organizations have an interest in what is being broadcast, but the volume of information available makes it prohibitive to monitor completely.
- The overwhelming majority of broadcast sources include closed captions, which have been used successfully to identify the subject matter of a video stream. Systems have been developed to monitor and act upon the closed captioned text. For example, such systems trigger on the basis of keywords and selectively record video for later viewing. However, no refinement or cross-referencing could be performed on past video, and new searches would only be applied to subsequent video broadcasts.
- U.S. Pat. No. 5,481,296 is directed to a scanning method of monitoring video content using a predefined set of keywords. Based on a keyword, the system has the ability to monitor multiple streams and to retune reception devices in real-time to selectively capture the matching video. The described system also attempts to selectively save video that has matched while removing segments that have not matched. The goal is to selectively record only the video that is desired.
- U.S. Pat. No. 5,986,692 is directed to a system for generating a custom-tailored video stream. The system is designed to work unattended, watching video signals, extracting and collating those that are deemed to be of interest to a specific user. The system also defines filters that attempt to detect and discern specific components of a video signal that are unwanted. For example, opening credits are video components that are typically undesired.
- U.S. Pat. No. 6,061,056 is directed to a system that automatically monitors a video stream for desired content. Users enter their search parameters, and the system watches the applied video streams for matches. However, this system only records video when a match occurs. The user is then presented with a series of clips that were saved based on their matches. Any new searches or refinements to the query only take effect for future searches. As well, any desired content that was not caught by the programmed search is lost forever. As an example, a user search for “Company A” may produce a result announcing a surprise merger of “Company A” and “Company B”. With the system as described in U.S. Pat. No. 6,061,056, new searches for “Company B” will only take effect on video occurring after the user adds this search. Therefore, the system is incapable of searching for any records prior to the new search being executed, such as recent happenings leading up to the merger.
- U.S. Pat. No. 6,266,094 is directed to a system of aggregating and distributing closed caption text over a distributed system. The system focuses on extensive scrubbing and preparation of closed caption text to enhance usability. However, the described system has no facility for archiving the video associated with the clip, nor does it present the program text to the user.
- It is, therefore, desirable to provide a media monitoring system that can dynamically search archived media content and real-time media content with unlimited queries.
- It is an object of the present invention to obviate or mitigate at least one disadvantage of previous media monitoring systems. In particular, it is an object of the present invention to provide a system and method for conducting real-time searches of recorded video, by comparing extracted closed captioned text of the video to predefined search parameters. Selected video segments time indexed to closed captioned text segments can be selectively viewed. The system searches real-time video and archived video.
- In a first aspect, the present invention provides a media monitoring system for receiving at least one video channel having corresponding closed captioned text in real time. The media monitoring system includes a media management system and a user access system. The media management system continuously stores all the data of the at least one video channel locally and extracts the corresponding closed captioned text into decoded text. The decoded text is provided to a global storage database. The media management system further includes a search engine for comparing the decoded text against search terms to provide matching results, and an indexing engine for indexing units of the decoded text by time. The user access system receives and displays the matching results, and transmits a request for stored data corresponding to specific units of the decoded text from the media management system. The media management system then provides said stored data corresponding to specific units of the decoded text in response to the request.
- According to embodiments of the first aspect, the media management system can include a media server pod, an index server and a web server. The media server pod receives the at least one video channel and locally stores the data of the at least one video channel. The media server pod can include a closed caption decoder for extracting the corresponding closed captioned text into the decoded text. The index server receives the decoded text from the media server pod over a first network, and includes the indexing engine. The web server includes the global storage database for storing the decoded text received from the index servers over a second network. The web servers can include the search engine and a search term database for storing the search terms. The media server pod can include at least one media source for providing the at least one video channel, and a media server in direct communication with the at least one media source. The media server receives the at least one video channel, and has a decoder for extracting the corresponding closed captioned text into decoded text from a vertical blanking interval of the at least one video channel. The media server can further include mass storage media for storing the data of the at least one video channel.
- In aspects of the present embodiment, the media server can include a parser for generating the stored data corresponding to specific units of the decoded text, and the media server pod can include a plurality of media sources for providing a corresponding number of video channels. The media server can include a video/audio compression system for compressing the data of the at least one video channel prior to storage onto the mass storage media.
- According to further aspects of the present embodiment, the media server can include a speech-to-text system for converting audio signals corresponding to the at least one video channel into text, and a text detector for detecting an absence of the corresponding closed captioned text, such that the text detector generates an alert indicating the absence of the corresponding closed captioned text. In yet other aspects, the media source can include one of a satellite receiver, a cable box, an antenna, and a digital radio source. The index server can include the global storage database, or the web server can include the global storage database.
- According to yet another embodiment of the present aspect, the first network can include a wide area network. The media management system can further include a second media server pod for receiving data of a different video channel, where the second media server pod is in communication with the first network. The media server pod and the second media server pod can be geographically distant from each other.
- In further embodiments of the present aspect, the user access system can include a duplicate video clip detector for identifying the matching results that are duplicates of each other, and a user access device in communication with the web server over a third network for receiving and displaying the matching results. The user access device can provide the search terms to the media management system. In an aspect of the present embodiments, the user access system can include a fourth network in communication with the user access device and the media server pod, where the user access device receives said stored data corresponding to specific units of the decoded text in response to the request over the fourth network.
- In a second aspect, the present invention provides a method for searching video data corresponding to at least one video channel collected and stored in a media monitoring system. The method includes (a) providing search terms; (b) comparing the search terms to stored closed captioned text corresponding to the video data, the closed captioned text being indexed by channel and time; (c) displaying matching results from the step of comparing; (d) requesting selected video data corresponding to one of the matching results; and (e) providing the selected video data corresponding to one of the matching results.
- In an embodiment of the present aspect, the step of requesting includes selecting a time indexed segment of the closed captioned text of one of the matching results, the step of selecting includes setting a video start time and a video end time for the selected time indexed segment, and the step of providing the selected video data includes parsing the video data to correspond with the selected time indexed segment to provide the selected video data. The step of providing search terms can include storing the search terms. In yet another embodiment of the present aspect, the search terms are provided to a web server over a first network, where the web server executes the step of comparing and providing the matching results to a user access device for display over the first network. The step of providing the video data includes transferring the video data over a second network to the user access device, and the step of providing the video data can include parsing the video data to provide the portion of the video data.
- In a third aspect, the present invention provides a method for automatic identification of video clips matching stored search terms. The method includes (a) continuously receiving and locally storing video data corresponding to at least one video channel in real time; (b) extracting and globally storing the closed captioned text from the video data; (c) indexing the closed captioned text by channel and time; (d) comparing the stored closed captioned text to the stored search terms; and, (e) providing match results of the closed captioned text matching the search terms, each match result having an optionally viewable video clip.
- According to an embodiment of the present aspect, the method can further include the steps of displaying the match results on a user access device, requesting the video clip corresponding to a selected match result, and displaying the video clip on the user access device. The step of requesting includes viewing the closed captioned text corresponding to the selected match result with time indices, setting a video start time and a video end time, and, providing a request having the video start time and the video end time, and channel information corresponding to the selected match result. The step of displaying the video clip includes receiving the request, and parsing the video data to provide the video clip having the video start time and the video end time.
- According to another embodiment of the present aspect, the video data is compressed prior to being stored, the extracted closed captioned text is transmitted over a first network to an indexing server for indexing the closed captioned text by channel and time, the closed captioned text is transmitted over a second network for storage on a web server, and the step of comparing is executed on the web server. The match results can be transmitted over a third network to a user access device, and the step of displaying the video clip includes transmitting the video clip over a fourth network to the user access device.
- According to yet another embodiment of the present aspect, the step of comparing includes (i) providing a segment of the closed captioned text, (ii) iteratively obtaining search terms from the stored search terms for comparing to the segment of the closed captioned text until all of the stored search terms have been compared to the segment of the closed captioned text, and (iii) storing details of all matches to the stored search terms as the match results. The step of storing includes selectively generating an alert when the segment of the closed captioned text matches the stored search terms, the step of selectively generating includes generating the alert only when the matching search term has an associated alerting status, and the alert can include one of in-system alerts, mobile device activation, pager activation and automatic email generation.
- According to a further embodiment of the present aspect, the step of extracting includes detecting an absence of the closed captioned text from the video data, and generating an alert message when no closed captioned text is detected.
- Other aspects and features of the present invention will become apparent to those ordinarily skilled in the art upon review of the following description of specific embodiments of the invention in conjunction with the accompanying figures.
- Embodiments of the present invention will now be described, by way of example only, with reference to the attached Figures, wherein:
- FIG. 1 is a schematic of the media monitoring system according to an embodiment of the present invention;
- FIG. 2 is a schematic of the media monitoring system according to another embodiment of the present invention;
- FIG. 3 is a block diagram of the functional components of the media monitoring system shown in FIG. 1;
- FIG. 4 is a flow chart illustrating a manual operation mode of the media monitoring system of the present invention;
- FIG. 5 is a computer screen user interface for prompting search parameters from a user;
- FIG. 6 is a computer screen user interface showing compact example results from a search;
- FIG. 7 is a computer screen user interface showing detailed example results from a search;
- FIG. 8 is a computer screen user interface showing matching captioning and timing information; and,
- FIG. 9 is a flow chart illustrating an automatic scanning mode of the media monitoring system of the present invention.
- Generally, the present invention provides a method and system for continually storing and cataloguing streams of broadcast content, allowing real-time searching and real-time results display of all catalogued video. A bank of video recording devices stores and indexes all video content on any number of broadcast sources. This video is stored along with the associated program information, such as program name, description, airdate and channel. A parallel process obtains the text of the program, either from the closed captioning data stream or by using a speech-to-text system. Once the text is decoded, stored, and indexed, users can perform searches against the text and view matching video immediately, along with its associated text and broadcast information.
- Users can also retrieve program information by other methods, such as by airdate, originating station, program name and program description. Additionally, an alerting mechanism scans all content in real time, and can be configured to notify users by various means when specified search criteria occur in the video stream. The system is preferably designed to be used on publicly available broadcast video content, but can also be used to catalog private video, such as conference speeches, or audio-only content such as radio broadcasts.
- The system according to the embodiments of the present invention non-selectively records all video/audio applied to it, and allows user searches to review all video on the system. A presentation clip is prepared and retrieved only under user control. Furthermore, searches can be performed at any time to examine archived video, rather than searches being the basis on which video is saved. Only video that the user specifically requests is shown, and any editing is done under user control.
- A general block diagram of a media monitoring system according to an embodiment of the present invention is shown in FIG. 1. Media monitoring system 100 comprises two major component groups: the first is the media management system 102, and the second is a user access system 104.
- The media management system 102 is responsible for receiving and archiving video and its corresponding audio. Preferably, the video and audio data are continuously received and stored. Video streams are tuned using media sources 106, such as satellite receivers, or other signal receiving devices such as cable boxes, antennas, and VCRs or DVD players. Alternately, media sources 106 can include non-video media sources, such as digital radio sources, for example. Preferably, the media sources 106 receive digital signals. The video signal, its corresponding audio, and any corresponding closed-captioned text are captured by video/audio capture hardware and software on media servers 108. The video/audio data can be stored in segments of any size, such as one-hour segments, with software code to later extract any desired segment of any size by channel, start time, and end time. Those of skill in the art will understand that the video/audio data can be stored in any suitable format. As shown in FIG. 1, media management system 102 can include any number of media servers 108, and each media server can be in communication with any number of media sources 106. If storage space is limited, the media servers can compress the digital data from the media sources 106 into smaller files. The data from media sources 106 are stored in consecutive segments on a mass storage device, using uniquely generated filenames that encode the channel and airdate of the video segment. If the data is a video stream, closed-captioned text is extracted from the video stream and stored in web servers 114 as searchable text, as will be discussed later. The extracted closed-captioned text is indexed to its corresponding video/audio clips stored in the media servers 108.
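By way of illustration, the channel-and-airdate filename scheme and the later segment lookup might look like the following sketch. The directory layout, naming format, and helper names here are hypothetical, not taken from the patent:

```python
from datetime import datetime, timedelta
from pathlib import Path

ARCHIVE_ROOT = Path("/archive")  # hypothetical mount point of the mass storage device

def segment_filename(channel: str, block_start: datetime) -> Path:
    """Encode channel and airdate into a unique name for a one-hour block."""
    # e.g. /archive/CBC-Ottawa/20050224-17.mpg holds the 5 pm to 6 pm block
    return ARCHIVE_ROOT / channel / f"{block_start:%Y%m%d-%H}.mpg"

def blocks_for_request(channel: str, start: datetime, end: datetime) -> list:
    """List the stored one-hour files that overlap a requested clip."""
    block = start.replace(minute=0, second=0, microsecond=0)
    files = []
    while block < end:
        files.append(segment_filename(channel, block))
        block += timedelta(hours=1)
    return files

# A request such as "CBC-Ottawa, 5:55 pm to 5:58 pm" maps to one hour file:
print(blocks_for_request("CBC-Ottawa",
                         datetime(2005, 2, 24, 17, 55),
                         datetime(2005, 2, 24, 17, 58)))
```

Because the channel and the block start time are recoverable from the filename alone, any desired segment can be located without consulting a separate index.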
- The media management system 102 stores all of the text associated with the video stream. In most cases, this text is obtained from the closed-captioning signal encoded into the video. The media management system 102 can further include a closed-captioned text detector for detecting the absence of closed-captioned text in the data stream, in order to alert the system administrator that closed-captioned text has not been detected for a predetermined amount of time. In such a situation, the alert notifies the system operator to take appropriate action to resolve the problem. In some cases, the stream may not be a digital stream, and the system can include a speech-to-text system to convert the audio signals into text. Accordingly, these sub-systems can be executed within each media server 108. The extracted text is broken into small sections, preferably one-minute segments. Each segment is then stored in a database along with the program name, channel, and airdate of the clip. The text is also pushed into an indexing engine of index servers 110, which allows it to be searched. In a preferred embodiment, the closed captioned text spanning a preset time received by index servers 110 is converted to XML format, bundled, and sent to web servers 114 for global storage via network 116. Web servers 114 can execute the searches for matches between user specified terms and the stored closed captioned text, via a web-based search interface. Alternately, the closed captioned text can be stored in index servers 110. The channel and airdate fields of a text segment allow it to be matched to a video clip stored by the media management system 102 as needed. Further details of media management system 102 will be described later.
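The closed-captioned text detector described above can be sketched as a simple watchdog. The timeout value, class name, and alert callback below are illustrative assumptions only:

```python
import time

CAPTION_TIMEOUT_S = 15 * 60  # the "predetermined amount of time" (assumed value)

class CaptionWatchdog:
    """Alert the system administrator when no closed captioned text is seen."""

    def __init__(self, alert):
        self.alert = alert            # callable that notifies the operator
        self.last_seen = time.time()
        self.raised = False

    def on_caption(self, text: str) -> None:
        """Called whenever the decoder emits caption text."""
        self.last_seen = time.time()  # any decoded text resets the timer
        self.raised = False

    def check(self) -> None:
        """Called periodically from the media server's monitoring loop."""
        if not self.raised and time.time() - self.last_seen > CAPTION_TIMEOUT_S:
            self.alert("closed captioned text absent; check feed and decoder")
            self.raised = True  # avoid repeating the alert on every check
```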
- Although not shown in FIG. 1, the media management system 102 includes an alerting system. This system watches each closed captioned segment as it is indexed, and cross-references it against the stored list of user defined alerts. Any match triggers a user alert to notify the user that a match has occurred. Alerts can include in-system alerts, mobile device activation, pager activation, and automatic email generation, which can be generated from web servers 114.
- The user access system 104 can include access devices such as a computer workstation 118, or mobile computing devices such as a laptop computer 120 and a PDA 122. Of course, other wireless devices such as mobile phones can also be used. These web enabled access devices can communicate with the web servers 114 via the Internet 126, wirelessly through Bluetooth or WiFi network systems, or through traditional wired systems. Optionally, users can dial up directly to the network 116 with a computer 124 running a non-web search interface. As will be shown later in FIG. 3, the user access system 104 further includes an alternate data transfer path for transferring video data to the access devices, to reduce congestion within media management system 102. As previously discussed, each web server 114 can store identical copies of the closed captioned text bundle received from index servers 110. This configuration facilitates user searches, since the text data is quickly accessible and search results consisting of closed captioned text can be quickly forwarded to the user's access device.
- From user access system 104, the user can search for occurrences of keywords, retrieve video by date and time, store alert parameters, etc. The user interface software can take the form of a web interface, locally run software, a mobile device interface, or any other interactive form.
- The previously described embodiment of the invention can be deployed locally, at a single site for example, to monitor all the media channels of interest. Therefore,
networks media monitoring system 100 can be deployed across a wide network, meaning that the components of the system can be geographically dispersed, makingnetworks - In a wide deployment embodiment of the invention,
- In a wide deployment embodiment of the invention, media servers 108 and their corresponding media sources 106 can be geographically distributed to collect and store local video, which is then shared within the system. For example, "pods" of media servers 108 and their corresponding media sources 106 can be located in different cities, and in different countries. As such, it is advantageous to store the relatively large video/audio data locally within the respective media servers 108. In such an embodiment, the server the user is connected to may not be physically at the location where the video streams are being recorded. In the present context, the distributed media server pods are considered remotely connected to index servers 110, since they are connected via a WAN. However, an advantage of the present invention is that the monitoring and notification speed remains fast regardless of the network configuration of the media monitoring system 100. This is due to the fact that the small sized closed captioned text can be rapidly transferred within the system, and more particularly, between the media servers 108 and the user access devices. - Once the user desires to view the corresponding video, the larger video data is accessed and sent to the user. Due to the size of the video, it is preferable to avoid congesting the
networks 112 and 116 and the index servers 110, which function as the conductor of traffic between media servers 108 and the user access device. Therefore, according to another embodiment of the invention, requested video can be sent directly from the appropriate media server 108 to the video enabled user access device.
- FIG. 2 illustrates the configuration of the media monitoring system 100 when video data is to be transferred to a user access device in a geographically distributed system. In the present example, one media server 108 and its corresponding media sources 106 represent a single video processing unit of a pod of video processing units 130, which may be deployed in a particular city, geographically distant from index servers 110 and network 112. The pod 130 remains in communication with the remote access devices via WAN/LAN network 132, which may likewise be geographically distant from pod 130. Hence, once a user requests a particular video clip, the request is sent directly to the appropriate media server 108, which then transfers the requested video clip, parsed as requested by the user, to their access device via WAN/LAN network 132. Media server 108 can include a parser for providing the requested video clip that corresponds with the time-indexed closed captioned text. Since the video clips are received through a path outside of the media management system 102 and user access system 104, the potential for congestion of data traffic within the system is greatly reduced. At the same time, multiple users can receive their respective requested video clips rapidly. - In general operation, when a user specifies key search terms through their computer or wireless device, the index servers will search the archived closed captioned text and notify the user if any matches have occurred. Matches are displayed with the relevant bibliographic information, such as air date and channel. The user then has the option of viewing and hearing a time segment of the videos containing the matched terms, the time segment being selectable by the user. The search of key terms can extend to future broadcasts, such that the search is conducted dynamically in real time. Thus, the user can be notified shortly after a search term has been matched in a current broadcast. Since the video broadcast is recorded, the user can selectively view the entire broadcast, or any portion thereof.
- FIG. 3 illustrates a block diagram of the general functional components of media monitoring system 100 shown in FIG. 1.
- The media monitoring system 100 converts a video signal to an indexed series of digital files on a mass storage system, which can then be retrieved by specifying the desired channel, start time, and end time. This capability is then used to supply the actual video that matches a search result from the user interface component. Video is archived at a specified quality, depending on operator configuration. Higher quality settings allow for larger video frames, higher frame rates, and greater image detail, at the cost of greater file storage requirements. All parameters are configurable by the operator at the system level. As previously mentioned, the video/audio signal to be archived is made available from an external source. In practice, this usually consists of an antenna, or a satellite receiver or cable feed supplied by a signal provider. Any standard video signal may be used, although the originating device preferably supports encoding of closed captions in the Vertical Blanking Interval (VBI), the dead time during which the scanning gun of the monitor finishes at the bottom of the screen and moves back to the top. The system can also be configured to store audio-only content should the signal not have a video component.
- The video/audio signal is applied to the input of a video capture device 200, which, through either a hardware or a software compression system 202, converts the video signal to a digital stream. In FIG. 1, video capture device 200 and software compression system 202 can be implemented in media servers 108. The exact format of this stream can be specified by the operator, but is typically chosen to be a compressed stream in a standard format such as MPEG or AVI. The video capture process outputs a continuous stream of video, which is then divided into manageable files. According to an embodiment of the present invention, the files are preferably limited to one-hour blocks of video. These files are then stored on a mass storage system 204 within their respective media servers 108, indexed by the channel they represent and the block of time during which the recording was done. Accordingly, mass storage system 204 locally stores the video/audio data for its corresponding media sources 106.
- Video clips can be retrieved from mass storage system 204 in response to retrieval requests from permitted machines. These requests are generated by servers that are serving users who have requested a video clip. From the user's standpoint, a video clip is chosen by its content, but the system knows it as belonging to a specified channel for a given period of time. Most user clip requests are for small segments of video, an example being "CBC-Ottawa, 5:55 pm-5:58 pm". The archive system, using the channel and the date required, first deduces which large file contains the video segment. It then parses the video file to locate and extract the stream data representing the selected segment. The stream data is then re-encapsulated to convert it to a stand-alone video file, and the result is returned to the calling machine, ultimately to be delivered to the user.
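The locate, parse, and re-encapsulate sequence can be sketched as follows. The patent does not name a particular tool; this example uses the common ffmpeg command line as a stand-in for the parsing and re-encapsulation steps, and reuses the hypothetical segment_filename() helper from the earlier sketch:

```python
import subprocess
from datetime import datetime

def extract_clip(channel: str, start: datetime, end: datetime, out_file: str) -> None:
    """Cut a stand-alone clip out of the hour-long archive file containing it."""
    hour_start = start.replace(minute=0, second=0, microsecond=0)
    hour_file = segment_filename(channel, hour_start)   # deduce the large file
    offset = (start - hour_start).total_seconds()       # position within the file
    duration = (end - start).total_seconds()
    # -ss seeks to the clip start, -t limits the length, and "-c copy"
    # re-encapsulates the existing stream data without re-encoding it.
    subprocess.run(
        ["ffmpeg", "-ss", str(offset), "-i", str(hour_file),
         "-t", str(duration), "-c", "copy", out_file],
        check=True,
    )

extract_clip("CBC-Ottawa",
             datetime(2005, 2, 24, 17, 55),
             datetime(2005, 2, 24, 17, 58),
             "cbc-ottawa-1755-1758.mpg")
```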
-
- Media monitoring system 100 can include self monitoring functions to ensure robust operation and to minimize potential errors. For example, the video digitizing process has the ability to detect the lack of video present at its input. This condition raises an operator alert, allowing the operator to locate the cause of the outage. In the field, outages can be attributed to cabling problems, weather phenomena, hardware failure, upstream problems, etc. In certain cases the system can be configured to attempt an automatic repair by restarting or re-initializing a process or external device.
- The closed captioned text associated with the video is preferably extracted from the closed captioning stream in the video signal, or produced by an associated speech-to-text device. If closed captioning data is available in the video signal, the signal is applied to a decoder 206, typically located in each media server 108, that can read the VBI stream. The decoder 206 extracts the closed captions that are encoded into the video signal. In practice, this can be the same device performing the video compression, and the extraction can be done in software. If closed captioning data is not available, the audio stream is fed into a speech-to-text device instead of decoder 206, and the resulting text is fed into the system. This option can be used if the content is not a video signal, such as a commercial radio stream or recorded speech. The decoder 206 includes a buffer, into which text accumulates at "human reading" speed. After a short increment of time, preferably one minute, the text buffer is stored into text database 208 along with the channel and time information associated with the clip. This database 208 then contains a complete record of all text that has flowed through the system, sorted by channel and airdate. As previously mentioned, database 208 can be located within either index servers 110 or web servers 114. In either case, database 208 functions as global storage of the decoded closed captioned text.
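The one-minute buffering behaviour of decoder 206 might be implemented along the following lines; the class name, the flush interval, and the store() callback are assumptions for illustration:

```python
import time

class CaptionBuffer:
    """Accumulate decoded caption text and flush it once per minute."""

    def __init__(self, channel: str, store, interval_s: int = 60):
        self.channel = channel
        self.store = store              # callable writing to text database 208
        self.interval_s = interval_s
        self.pieces = []
        self.window_start = time.time()

    def feed(self, caption_text: str) -> None:
        """Called as caption text arrives at human reading speed."""
        self.pieces.append(caption_text)
        if time.time() - self.window_start >= self.interval_s:
            self.flush()

    def flush(self) -> None:
        if self.pieces:
            # store one minute of transcript keyed by channel and air time
            self.store(self.channel, self.window_start, " ".join(self.pieces))
        self.pieces = []
        self.window_start = time.time()
```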
- To facilitate and accelerate searching, the program text is provided to an indexing engine 210. Indexing engine 210, implemented in index servers 110, receives a block of text, which in this case represents a small unit of video transcript (typically one minute), and stores it in a format that is optimized for full text searches. For practical implementation purposes, standard off-the-shelf products can be employed for the indexing function. According to the presently described embodiments, the video captions are indexed by channel and time, for example. The formatted text is stored in index database 212, which can be located in index servers 110 or web servers 114. Database 212 can also function as global storage of all the formatted text.
- For searching the text database 208, the user's search string is submitted to a full text search engine that searches database 212. Any results returned from this engine also contain indexes to the corresponding channel and air time. Furthermore, since the entire text is stored in database 208, it can be retrieved using standard techniques to search on the channel and air time. It is noted that database 212 is used for full text searching, while database 208 is formatted such that the data is ordered by time and channel, to facilitate lookup by time and channel.
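Since the patent leaves the indexing engine to standard off-the-shelf products, SQLite's FTS5 module can serve as an illustrative stand-in for indexing engine 210 and index database 212. This is a sketch under that assumption, not the patent's implementation:

```python
import sqlite3

db = sqlite3.connect("captions.db")
# Full-text index playing the role of database 212; channel and air time
# ride along so that every hit can be traced back to its video segment.
db.execute("""CREATE VIRTUAL TABLE IF NOT EXISTS captions
              USING fts5(channel, airtime, transcript)""")

def index_unit(channel: str, airtime: str, transcript: str) -> None:
    """Add one small unit of video transcript (typically one minute)."""
    db.execute("INSERT INTO captions VALUES (?, ?, ?)",
               (channel, airtime, transcript))
    db.commit()

def search(query: str):
    """Return (channel, airtime) pairs whose transcript matches the query."""
    return db.execute(
        "SELECT channel, airtime FROM captions WHERE captions MATCH ? "
        "ORDER BY rank", (query,)).fetchall()

index_unit("Channel Y", "2005-02-24 18:00",
           "the mayor spoke about transit funding")
print(search("transit"))   # -> [('Channel Y', '2005-02-24 18:00')]
```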
- Once video data has been received, processed and archived in
- Once video data has been received, processed and archived in media management system 102 as previously described, user-defined searches can be executed through user access system 104. Operating upon each access device is a user search interface that provides the functionality of the system. The interface is designed to allow users with minimal training to perform text searches, examine the program text that matches, and selectively view or archive the video streams where the captioning appeared. While the reference application is a web-based system, the system can also be searched through other means, such as mobile WiFi devices, Bluetooth-enabled devices, and locally running software, for example.
- Following is an example of a common interactive mode of operation between a user and the media monitoring system 100 shown in FIG. 1. FIG. 4 shows a flow chart of the process executed by the media monitoring system 100, while FIGS. 5-8 are examples of user interface screens that prompt the user for information and display results to the user.
- The process begins at step 300, where the user logs into the interface with the goal of researching a topic's appearance in the recent media. The user is presented with a screen that allows them to enter the search terms that would match their desired content. Common search parameters are provided, such as specifying phrases that must appear as typed, words that should appear within a certain distance of each other, boolean queries, etc. As well, the query can be limited to only return results from specific broadcast channels. FIG. 5 is an example user interface for prompting the search parameters from the user.
- Upon submitting the form, the search parameters provided by the user are first groomed at step 302. Grooming is an optional step that refers to optimization of the search parameters, especially if the user's search parameters are malformed. For example, the user may enter "red blue" in the MUST CONTAIN THESE WORDS search field and "GREEN" in the MAY CONTAIN search field; the grooming process then optimizes the search parameters to "GREEN RED AND BLUE". The groomed search parameters are compared to database 208, which stores all the closed captioned text. The user is presented with a match results page at step 304, itemizing the results obtained, the programs they appeared in, and a score that represents how strong each match was. The results can be sorted in numerous ways, such as by date, by program name, or by score. A compact example results page is shown in FIG. 6, and a more detailed version is shown in FIG. 7. In both the compact and detailed results pages, the user can select any row to view further details of that program segment. The results pages shown in FIGS. 6 and 7 may list consecutive segments belonging to the same broadcast, since the search term appears in each segment. For example, the results may return "Channel Y, 6:00 pm to 6:01 pm", "Channel Y, 6:01 pm to 6:02 pm" and "Channel Y, 6:02 pm to 6:03 pm" as separate program segment items. The system can optimize the results by recognizing that the three segments are chronological segments of Channel Y, and collapsing them into a simplified description, such as "Channel Y, 6:00 pm to 6:03 pm".
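The collapsing of chronologically adjacent result segments can be sketched as a simple merge pass; the data shapes below are hypothetical:

```python
from datetime import datetime

def collapse_segments(segments):
    """Merge chronologically adjacent hits on the same channel.

    segments: (channel, start, end) tuples, sorted by channel and start time.
    """
    merged = []
    for channel, start, end in segments:
        if merged and merged[-1][0] == channel and merged[-1][2] == start:
            merged[-1] = (channel, merged[-1][1], end)  # extend the previous run
        else:
            merged.append((channel, start, end))
    return merged

hits = [("Channel Y", datetime(2005, 2, 24, 18, 0), datetime(2005, 2, 24, 18, 1)),
        ("Channel Y", datetime(2005, 2, 24, 18, 1), datetime(2005, 2, 24, 18, 2)),
        ("Channel Y", datetime(2005, 2, 24, 18, 2), datetime(2005, 2, 24, 18, 3))]
print(collapse_segments(hits))   # one "Channel Y, 6:00 pm to 6:03 pm" entry
```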
- Upon selecting a program segment at step 306, the user is presented with a caption viewing screen showing the matching captioning and timing information, as shown in FIG. 8. This screen gives the user the option of viewing the clip associated with the displayed extracted closed captioned text. From the caption viewing screen, the user is also presented with a navigation system that allows the user to move forward or backward in the video stream beyond the matched segment, to peruse the context in which the clip was presented. The caption viewing screen also features controls to compose a video clip consisting of several consecutive units of video. More specifically, the user has the ability to define the start and end points of a video clip, and then view or save that clip. This is suitable for preparing a salient clip that is to be saved for future reference.
- If the user chooses not to view the corresponding video clip at step 306, the process can return to step 300 to restart the search. Optionally, the process can return to step 304 to permit the user to view the results page and select a different program segment. If the user chooses to view the corresponding video clip, the system determines at step 308 whether the video clip is stored locally. It is important to note that a locally stored video clip refers to one that is accessible via a high bandwidth network, which is typically available in a local area network, such as in an office environment. In contrast, remotely stored video clips are generally available only through a low bandwidth network, or one whose bandwidth is too low to have a copy of all video sent to it all the time. As previously discussed, the user can access the video remotely over a low bandwidth connection. Therefore, the process provides a video access method optimized according to whether or not the user is accessing the system remotely. If the video clip is stored locally, i.e. on a high bandwidth connection suitable for streaming video, the system proceeds to step 310. At step 310, the video clip is retrieved and assembled from the appropriate video segments, and then displayed for the user at step 312. The video clip can be played with the user's preferred video playing software. Alternately, if at step 308 the video clip is not stored locally, the system proceeds to step 314, where a query is sent to the specific remote server that will return the video that the user is asking for. The video clip is retrieved from the remote system at step 316, and finally displayed for the user at step 312. Once the clip has ended, the user has the option of returning to step 304 to view another program segment. Alternately, the user may return to step 300 to initiate a new search.
- The previously described manual interactive operation method of
- The previously described manual interactive operation method of FIG. 4 is effective for searching and viewing archived video. According to an embodiment of the present invention, the media monitoring system 100 can concurrently operate in an automatic scanning mode to match user defined terms against closed captioned text extracted in real time. The user can selectively activate the alerting system to provide notification for specific terms.
- The automatic scanning mode method of operation of the
- The automatic scanning mode of operation of the media monitoring system 100 is described with reference to FIG. 9. It is assumed that the following process operates upon each stored unit of program text after the text is stored and indexed; the index is then searched again with the stored terms to detect whether anything new appears. It is further assumed that the user has previously defined his/her search terms and stored them in a search term database 404, which can be physically located on web server 114. The process begins at step 400, where the text for the unit is retrieved from index database 212. At step 402, a search term is retrieved from the user's search term database 404 and compared to the stored unit of program text at step 406. If there is no match, the system proceeds to step 408, where it checks whether there are any further search terms to compare against the stored unit of program text. If there are no more search terms, the process ends at step 410. Otherwise, the system loops back to step 402 to fetch the next search term.
- If a match was found at step 406, the system proceeds to step 412 to store the match information in a results database 414. This results database is preferably located in web server 114, and is local to the user's portal. The results summarize matches between the search terms and the video clips for the user when they log in to their portal. At step 416, the system checks whether the user has activated an alert for the present search term. If an alert has been activated for the present search term, the system generates a notification message for the user at step 418, in accordance with their desired alert level. Depending on settings and system configuration, this alert/notification can be delivered using a number of methods, including but not limited to alerts in the interface, email, and mobile and wireless devices, for example. Once the user has been alerted at step 418, or if no alert has been activated for the present search term at step 416, the system proceeds to step 408 to determine whether there are any further search terms. This process is executed for each unit of program text stored in the index.
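The scanning loop of FIG. 9 reduces to a few lines of code. The plain substring test below stands in for the full text comparison, and all parameter names are illustrative assumptions:

```python
def scan_unit(unit_text: str, channel: str, airtime: str,
              search_terms, store_match, notify) -> None:
    """Compare one stored unit of program text against every stored search term.

    search_terms: iterable of (term, alert_level) pairs from search term
                  database 404; store_match writes to results database 414;
                  notify delivers the alert (in-system, email, pager, ...).
    """
    for term, alert_level in search_terms:            # steps 402 and 408
        if term.lower() in unit_text.lower():         # step 406: compare
            store_match(term, channel, airtime)       # step 412: store result
            if alert_level != "never alert me":       # step 416: alert active?
                notify(term, channel, airtime, alert_level)   # step 418
```

Because the loop runs once per newly indexed unit, a match in a live broadcast produces a notification within roughly one buffering interval of its airing.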
- The system described in this application stores all video from all channels, allowing searches to be refined or changed at will with instant results. As well, learnings from the results of one query can be incorporated into a new search with instant results.
- This invention improves the user experience by storing and indexing all recent video and captions. This allows not only unlimited queries with real time results, but also allows new searches inspired by results to be performed immediately and with instant results.
- The aforementioned embodiments of the present invention records and stores video/audio clips that are broadcast across any number of channels. There are instances where the same video clips are broadcast by affiliated channels. An example includes all those channels affiliated with CTV. Hence, there is a great likelihood that a user's search parameters will return duplicate video clips. In an enhancement to the embodiments of the present invention,
In an enhancement to the embodiments of the present invention, web server 114 can include a duplicate video clip detector to mark matching video clip results that are essentially the same. This function can be executed in web servers 114 as search results are returned to the user. For example, the text of the returned search results can be scanned such that duplicates are marked as such. This feature allows the user to view one video clip and dismiss those marked as duplicates very quickly, without opening and viewing each one. Preferably, the duplicate video clip detector is implemented on web server 114, but it can also be executed in index servers 110. Generally, a first matching result is added to the database, and then fuzzy matching is executed to determine whether further matches are essentially the same as the first stored instance. If so, the duplicates are marked as such for the user's convenience. Those of skill in the art will understand that an essential match between two clips is one where a substantial percentage of the content is the same. Naturally, this percentage can be preset by the system administrator. - The above-described embodiments of the present invention are intended to be examples only. Alterations, modifications and variations may be effected to the particular embodiments by those of skill in the art without departing from the scope of the invention, which is defined solely by the claims appended hereto.
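A minimal sketch of the caption-based fuzzy duplicate marking, using Python's standard difflib as an assumed similarity measure and a hypothetical result shape:

```python
from difflib import SequenceMatcher

DUPLICATE_THRESHOLD = 0.90  # the "substantial percentage", operator-preset

def mark_duplicates(results):
    """Flag results whose caption text essentially matches an earlier hit.

    results: list of dicts with "id" and "text" keys; adds "duplicate_of".
    """
    originals = []
    for result in results:
        result["duplicate_of"] = None
        for original in originals:
            ratio = SequenceMatcher(None, result["text"],
                                    original["text"]).ratio()
            if ratio >= DUPLICATE_THRESHOLD:
                result["duplicate_of"] = original["id"]
                break
        if result["duplicate_of"] is None:
            originals.append(result)    # first instance becomes the reference
    return results
```

The user interface can then show the first instance normally and render anything carrying a duplicate_of marker as dismissible.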
Claims (41)
1. A media monitoring system for receiving at least one video channel having corresponding closed captioned text in real time, comprising:
a media management system for continuously storing all the data of the at least one video channel locally and for extracting the corresponding closed captioned text into decoded text, the decoded text being provided to a global storage database, the media management system having
a search engine for comparing the decoded text against search terms to provide matching results, and
an indexing engine for indexing units of the decoded text by time; and
a user access system for receiving and displaying the matching results, the user access system transmitting a request for stored data corresponding to specific units of the decoded text from the media management system, the media management system providing said stored data corresponding to specific units of the decoded text in response to the request.
2. The media monitoring system of claim 1 , wherein the media management system includes
a media server pod for receiving the at least one video channel and for locally storing the data of the at least one video channel, the media server pod including a closed caption decoder for extracting the corresponding closed captioned text into the decoded text,
an index server for receiving the decoded text from the media server pod over a first network, the index server having the indexing engine, and
a web server including the global storage database for storing the decoded text received from the index server over a second network, the web server having the search engine and a search term database for storing the search terms.
3. The media monitoring system of claim 2 , wherein the media server pod includes
at least one media source for providing the at least one video channel, and
a media server in direct communication with the at least one media source for receiving the at least one video channel, the media server having a decoder for extracting the corresponding closed captioned text into decoded text from a vertical blanking interval of the at least one video channel, the media server including mass storage media for storing the data of the at least one video channel.
4. The media monitoring system of claim 3 , wherein the media server includes a parser for generating the stored data corresponding to specific units of the decoded text.
5. The media monitoring system of claim 3 , wherein the media server pod includes a plurality of media sources for providing a corresponding number of video channels.
6. The media monitoring system of claim 3 , wherein the media server includes a video/audio compression system for compressing the data of the at least one video channel prior to storage onto the mass storage media.
7. The media monitoring system of claim 6 , wherein the media server includes a speech-to-text system for converting audio signals corresponding to the at least one video channel into text.
8. The media monitoring system of claim 6 , wherein the media server includes a text detector for detecting an absence of the corresponding closed captioned text, the text detector generating an alert indicating the absence of the corresponding closed captioned text.
9. The media monitoring system of claim 3 , wherein the media source includes one of a satellite receiver, a cable box, an antenna, and a digital radio source.
10. The media monitoring system of claim 2 , wherein the index server includes the global storage database.
11. The media monitoring system of claim 2 , wherein the web server includes the global storage database.
12. The media monitoring system of claim 3 , wherein the first network includes a wide area network.
13. The media monitoring system of claim 12 , wherein the media management system further includes a second media server pod for receiving data of a different video channel, the second media server pod being in communication with the first network.
14. The media monitoring system of claim 13 , wherein the media server pod and the second media server pod are geographically distant from each other.
15. The media monitoring system of claim 1 , wherein the user access system includes a duplicate video clip detector for identifying the matching results that are duplicates of each other.
16. The media monitoring system of claim 2 , wherein the user access system includes a user access device in communication with the web server over a third network, for receiving and displaying the matching results.
17. The media monitoring system of claim 16 , wherein the user access system includes a fourth network in communication with the user access device and the media server pod, the user access device receiving said stored data corresponding to specific units of the decoded text in response to the request over the fourth network.
18. The media monitoring system of claim 16 , wherein the user access device provides the search terms to the media management system.
19. A method for searching video data corresponding to at least one video channel collected and stored in a media monitoring system, comprising:
(a) providing search terms;
(b) comparing the search terms to stored closed captioned text corresponding to the video data, the closed captioned text being indexed by channel and time;
(c) displaying matching results from the step of comparing;
(d) requesting selected video data corresponding to one of the matching results; and
(e) providing the selected video data corresponding to one of the matching results.
20. The method of claim 19 , wherein the step of requesting includes selecting a time indexed segment of the closed captioned text of one of the matching results.
21. The method of claim 20 , wherein the step of selecting includes setting a video start time and a video end time for the selected time indexed segment.
22. The method of claim 21 , wherein the step of providing the selected video data includes parsing the video data to correspond with the selected time indexed segment to provide the selected video data.
23. The method of claim 19 , wherein the step of providing search terms includes storing the search terms.
24. The method of claim 19 , wherein the search terms are provided to a web server over a first network, the web server executing the step of comparing and providing the matching results to a user access device for display over the first network.
25. The method of claim 24 , wherein the step of providing the video data includes transferring the video data over a second network to the user access device.
26. The method of claim 24 , wherein the step of providing the video data includes parsing the video data to provide the portion of the video data.
27. A method for automatic identification of video clips matching stored search terms comprising:
(a) continuously receiving and locally storing video data corresponding to at least one video channel in real time;
(b) extracting and globally storing the closed captioned text from the video data;
(c) indexing the closed captioned text by channel and time;
(d) comparing the stored closed captioned text to the stored search terms; and,
(e) providing match results of the closed captioned text matching the search terms, each match result having an optionally viewable video clip.
28. The method of claim 27 , further including the steps of:
displaying the match results on a user access device, requesting the video clip corresponding to a selected match result, and displaying the video clip on the user access device.
29. The method of claim 28 , wherein the step of requesting includes viewing the closed captioned text corresponding to the selected match result with time indices,
setting a video start time and a video end time, and,
providing a request having the video start time and the video end time, and channel information corresponding to the selected match result.
30. The method of claim 29 , wherein the step of displaying the video clip includes
receiving the request, and,
parsing the video data to provide the video clip having the video start time and the video end time.
31. The method of claim 27 , wherein the video data is compressed prior to being stored.
32. The method of claim 30 , wherein the extracted closed captioned text is transmitted over a first network to an indexing server for indexing the closed captioned text by channel and time.
33. The method of claim 32 , wherein the closed captioned text is transmitted over a second network for storage on a web server.
34. The method of claim 33 , wherein the step of comparing is executed on the web server, and the match results are transmitted over a third network to a user access device.
35. The method of claim 34 , wherein the step of displaying the video clip includes transmitting the video clip over a fourth network to the user access device.
36. The method of claim 27 , wherein the step of comparing includes
(i) providing a segment of the closed captioned text,
(ii) iteratively obtaining search terms from the stored search terms for comparing to the segment of the closed captioned text until all of the stored search terms have been compared to the segment of the closed captioned text, and
(iii) storing details of all matches to the stored search terms as the match results.
37. The method of claim 36 , wherein the step of storing includes selectively generating an alert when the segment of the closed captioned text matches the stored search terms.
38. The method of claim 37 , wherein the step of selectively generating includes generating the alert only when the matching search term has an associated alerting status.
39. The method of claim 38 , wherein the alert can include one of in-system alerts, mobile device activation, pager activation and automatic email generation.
40. The method of claim 27 , wherein the step of extracting includes detecting an absence of the closed captioned text from the video data.
41. The method of claim 40 , wherein the step of detecting includes generating an alert message when no closed captioned text is detected.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/063,559 US20050198006A1 (en) | 2004-02-24 | 2005-02-24 | System and method for real-time media searching and alerting |
US11/947,460 US8015159B2 (en) | 2004-02-24 | 2007-11-29 | System and method for real-time media searching and alerting |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US54695404P | 2004-02-24 | 2004-02-24 | |
US11/063,559 US20050198006A1 (en) | 2004-02-24 | 2005-02-24 | System and method for real-time media searching and alerting |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/947,460 Continuation US8015159B2 (en) | 2004-02-24 | 2007-11-29 | System and method for real-time media searching and alerting |
Publications (1)
Publication Number | Publication Date |
---|---|
US20050198006A1 true US20050198006A1 (en) | 2005-09-08 |
Family
ID=34886282
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/063,559 Abandoned US20050198006A1 (en) | 2004-02-24 | 2005-02-24 | System and method for real-time media searching and alerting |
US11/947,460 Expired - Fee Related US8015159B2 (en) | 2004-02-24 | 2007-11-29 | System and method for real-time media searching and alerting |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/947,460 Expired - Fee Related US8015159B2 (en) | 2004-02-24 | 2007-11-29 | System and method for real-time media searching and alerting |
Country Status (2)
Country | Link |
---|---|
US (2) | US20050198006A1 (en) |
CA (1) | CA2498364C (en) |
Cited By (70)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050033760A1 (en) * | 1998-09-01 | 2005-02-10 | Charles Fuller | Embedded metadata engines in digital capture devices |
US20060122984A1 (en) * | 2004-12-02 | 2006-06-08 | At&T Corp. | System and method for searching text-based media content |
US20060153535A1 (en) * | 2005-01-07 | 2006-07-13 | Samsung Electronics Co., Ltd. | Apparatus and method for reproducing storage medium that stores metadata for providing enhanced search function |
US20060153542A1 (en) * | 2005-01-07 | 2006-07-13 | Samsung Electronics Co., Ltd. | Storage medium storing metadata for providing enhanced search function |
US20060212905A1 (en) * | 2005-03-17 | 2006-09-21 | Hitachi, Ltd. | Broadcast receiving terminal and information processing apparatus |
US20060294082A1 (en) * | 2005-06-28 | 2006-12-28 | Samsung Electronics Co., Ltd | Apparatus and method for playing content according to numeral key input |
US20070027844A1 (en) * | 2005-07-28 | 2007-02-01 | Microsoft Corporation | Navigating recorded multimedia content using keywords or phrases |
US20070050406A1 (en) * | 2005-08-26 | 2007-03-01 | At&T Corp. | System and method for searching and analyzing media content |
WO2007073347A1 (en) * | 2005-12-19 | 2007-06-28 | Agency For Science, Technology And Research | Annotation of video footage and personalised video generation |
US20070154176A1 (en) * | 2006-01-04 | 2007-07-05 | Elcock Albert F | Navigating recorded video using captioning, dialogue and sound effects |
US20070154171A1 (en) * | 2006-01-04 | 2007-07-05 | Elcock Albert F | Navigating recorded video using closed captioning |
US20070174326A1 (en) * | 2006-01-24 | 2007-07-26 | Microsoft Corporation | Application of metadata to digital media |
US7260564B1 (en) * | 2000-04-07 | 2007-08-21 | Virage, Inc. | Network video guide and spidering |
US20070204285A1 (en) * | 2006-02-28 | 2007-08-30 | Gert Hercules Louw | Method for integrated media monitoring, purchase, and display |
US20070203945A1 (en) * | 2006-02-28 | 2007-08-30 | Gert Hercules Louw | Method for integrated media preview, analysis, purchase, and display |
US7295752B1 (en) | 1997-08-14 | 2007-11-13 | Virage, Inc. | Video cataloger system with audio track extraction |
US20070276852A1 (en) * | 2006-05-25 | 2007-11-29 | Microsoft Corporation | Downloading portions of media files |
EP1873966A1 (en) * | 2006-03-08 | 2008-01-02 | Huawei Technologies Co., Ltd. | Playing method, system and device |
US20080046401A1 (en) * | 2006-08-21 | 2008-02-21 | Myung-Cheol Lee | System and method for processing continuous integrated queries on both data stream and stored data using user-defined share trigger |
US20080046929A1 (en) * | 2006-08-01 | 2008-02-21 | Microsoft Corporation | Media content catalog service |
US20080086456A1 (en) * | 2006-10-06 | 2008-04-10 | United Video Properties, Inc. | Systems and methods for acquiring, categorizing and delivering media in interactive media guidance applications |
US20080091513A1 (en) * | 2006-09-13 | 2008-04-17 | Video Monitoring Services Of America, L.P. | System and method for assessing marketing data |
US20080097970A1 (en) * | 2005-10-19 | 2008-04-24 | Fast Search And Transfer Asa | Intelligent Video Summaries in Information Access |
US20080212932A1 (en) * | 2006-07-19 | 2008-09-04 | Samsung Electronics Co., Ltd. | System for managing video based on topic and method using the same and method for searching video based on topic |
US20080320159A1 (en) * | 2007-06-25 | 2008-12-25 | University Of Southern California (For Inventor Michael Naimark) | Source-Based Alert When Streaming Media of Live Event on Computer Network is of Current Interest and Related Feedback |
US20090019009A1 (en) * | 2007-07-12 | 2009-01-15 | At&T Corp. | SYSTEMS, METHODS AND COMPUTER PROGRAM PRODUCTS FOR SEARCHING WITHIN MOVIES (SWiM) |
WO2009026564A1 (en) * | 2007-08-22 | 2009-02-26 | Google Inc. | Detection and classification of matches between time-based media |
US20090089379A1 (en) * | 2007-09-27 | 2009-04-02 | Adobe Systems Incorporated | Application and data agnostic collaboration services |
US20090281897A1 (en) * | 2008-05-07 | 2009-11-12 | Antos Jeffrey D | Capture and Storage of Broadcast Information for Enhanced Retrieval |
US7631015B2 (en) | 1997-03-14 | 2009-12-08 | Microsoft Corporation | Interactive playlist generation using annotations |
US20090319365A1 (en) * | 2006-09-13 | 2009-12-24 | James Hallowell Waggoner | System and method for assessing marketing data |
US20100082585A1 (en) * | 2008-09-23 | 2010-04-01 | Disney Enterprises, Inc. | System and method for visual search in a video media player |
CN101720028A (en) * | 2009-12-01 | 2010-06-02 | 北京中星微电子有限公司 | Method and system for realizing voice broadcast during video monitoring |
US7769827B2 (en) | 2000-04-07 | 2010-08-03 | Virage, Inc. | Interactive video application hosting |
US20110067077A1 (en) * | 2009-09-14 | 2011-03-17 | At&T Intellectual Property I, L.P. | System and Method of Analyzing Internet Protocol Television Content Credits Information |
US20110067078A1 (en) * | 2009-09-14 | 2011-03-17 | At&T Intellectual Property I, L.P. | System and Method of Proactively Recording to a Digital Video Recorder for Data Analysis |
US20110067079A1 (en) * | 2009-09-14 | 2011-03-17 | At&T Intellectual Property I, L.P. | System and Method of Analyzing Internet Protocol Television Content for Closed-Captioning Information |
US20110072456A1 (en) * | 2009-09-24 | 2011-03-24 | At&T Intellectual Property I, L.P. | System and Method for Substituting Broadband Delivered Advertisements for Expired Advertisements |
US7945622B1 (en) | 2008-10-01 | 2011-05-17 | Adobe Systems Incorporated | User-aware collaboration playback and recording |
US7954049B2 (en) | 2006-05-15 | 2011-05-31 | Microsoft Corporation | Annotating multimedia files along a timeline |
US7962948B1 (en) | 2000-04-07 | 2011-06-14 | Virage, Inc. | Video-enabled community building |
US20110239099A1 (en) * | 2010-03-23 | 2011-09-29 | Disney Enterprises, Inc. | System and method for video poetry using text based related media |
US8171509B1 (en) | 2000-04-07 | 2012-05-01 | Virage, Inc. | System and method for applying a database to video multimedia |
US8214374B1 (en) * | 2011-09-26 | 2012-07-03 | Limelight Networks, Inc. | Methods and systems for abridging video files |
US8381249B2 (en) | 2006-10-06 | 2013-02-19 | United Video Properties, Inc. | Systems and methods for acquiring, categorizing and delivering media in interactive media guidance applications |
US8396878B2 (en) | 2006-09-22 | 2013-03-12 | Limelight Networks, Inc. | Methods and systems for generating automated tags for video files |
US20130066633A1 (en) * | 2011-09-09 | 2013-03-14 | Verisign, Inc. | Providing Audio-Activated Resource Access for User Devices |
US8521719B1 (en) | 2012-10-10 | 2013-08-27 | Limelight Networks, Inc. | Searchable and size-constrained local log repositories for tracking visitors' access to web content |
US20130291019A1 (en) * | 2012-04-27 | 2013-10-31 | Mixaroo, Inc. | Self-learning methods, entity relations, remote control, and other features for real-time processing, storage, indexing, and delivery of segmented video |
US20140032561A1 (en) * | 2006-07-18 | 2014-01-30 | Aol Inc. | Searching for transient streaming multimedia resources |
US8688679B2 (en) | 2010-07-20 | 2014-04-01 | Smartek21, Llc | Computer-implemented system and method for providing searchable online media content |
US8688667B1 (en) * | 2011-02-08 | 2014-04-01 | Google Inc. | Providing intent sensitive search results |
US8966389B2 (en) | 2006-09-22 | 2015-02-24 | Limelight Networks, Inc. | Visual interface for identifying positions of interest within a sequentially ordered information encoding |
US9015172B2 (en) | 2006-09-22 | 2015-04-21 | Limelight Networks, Inc. | Method and subsystem for searching media content within a content-search service system |
US9021538B2 (en) | 1998-07-14 | 2015-04-28 | Rovi Guides, Inc. | Client-server based interactive guide with server recording |
US9125169B2 (en) | 2011-12-23 | 2015-09-01 | Rovi Guides, Inc. | Methods and systems for performing actions based on location-based rules |
US9294799B2 (en) | 2000-10-11 | 2016-03-22 | Rovi Guides, Inc. | Systems and methods for providing storage of data on servers in an on-demand media delivery system |
US9294291B2 (en) | 2008-11-12 | 2016-03-22 | Adobe Systems Incorporated | Adaptive connectivity in network-based collaboration |
US9420014B2 (en) | 2007-11-15 | 2016-08-16 | Adobe Systems Incorporated | Saving state of a collaborative session in an editable format |
AU2013201160B2 (en) * | 2006-10-06 | 2016-09-29 | Rovi Guides, Inc. | Systems and Methods for Acquiring, Categorizing and Delivering Media in Interactive Media Guidance Applications |
WO2017129979A1 (en) * | 2016-01-29 | 2017-08-03 | Waazon (Holdings) Limited | Automated search method, apparatus, and database |
US10063934B2 (en) | 2008-11-25 | 2018-08-28 | Rovi Technologies Corporation | Reducing unicast session duration with restart TV |
US20180348970A1 (en) * | 2017-05-31 | 2018-12-06 | Snap Inc. | Methods and systems for voice driven dynamic menus |
US10795699B1 (en) * | 2019-03-28 | 2020-10-06 | Cohesity, Inc. | Central storage management interface supporting native user interface versions |
AU2018241142B2 (en) * | 2006-10-06 | 2020-10-22 | Rovi Guides, Inc. | Systems and Methods for Acquiring, Categorizing and Delivering Media in Interactive Media Guidance Applications |
US20210026849A1 (en) * | 2015-04-28 | 2021-01-28 | Splunk Inc. | Executing alert actions based on search query results |
US11463507B1 (en) * | 2019-04-22 | 2022-10-04 | Audible, Inc. | Systems for generating captions for audio content |
US11531712B2 (en) | 2019-03-28 | 2022-12-20 | Cohesity, Inc. | Unified metadata search |
US11722507B1 (en) | 2015-04-28 | 2023-08-08 | Splunk Inc. | User configurable alert notifications applicable to search query results |
US11997340B2 (en) | 2012-04-27 | 2024-05-28 | Comcast Cable Communications, Llc | Topical content searching |
Families Citing this family (111)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9953032B2 (en) | 2005-10-26 | 2018-04-24 | Cortica, Ltd. | System and method for characterization of multimedia content signals using cores of a natural liquid architecture system |
US10380164B2 (en) | 2005-10-26 | 2019-08-13 | Cortica, Ltd. | System and method for using on-image gestures and multimedia content elements as search queries |
US10585934B2 (en) | 2005-10-26 | 2020-03-10 | Cortica Ltd. | Method and system for populating a concept database with respect to user identifiers |
US9031999B2 (en) | 2005-10-26 | 2015-05-12 | Cortica, Ltd. | System and methods for generation of a concept based database |
US9466068B2 (en) | 2005-10-26 | 2016-10-11 | Cortica, Ltd. | System and method for determining a pupillary response to a multimedia data element |
US10387914B2 (en) | 2005-10-26 | 2019-08-20 | Cortica, Ltd. | Method for identification of multimedia content elements and adding advertising content respective thereof |
US10848590B2 (en) | 2005-10-26 | 2020-11-24 | Cortica Ltd | System and method for determining a contextual insight and providing recommendations based thereon |
US11604847B2 (en) | 2005-10-26 | 2023-03-14 | Cortica Ltd. | System and method for overlaying content on a multimedia content element based on user interest |
US9529984B2 (en) | 2005-10-26 | 2016-12-27 | Cortica, Ltd. | System and method for verification of user identification based on multimedia content elements |
US10742340B2 (en) | 2005-10-26 | 2020-08-11 | Cortica Ltd. | System and method for identifying the context of multimedia content elements displayed in a web-page and providing contextual filters respective thereto |
US9235557B2 (en) | 2005-10-26 | 2016-01-12 | Cortica, Ltd. | System and method thereof for dynamically associating a link to an information resource with a multimedia content displayed in a web-page |
US10380623B2 (en) | 2005-10-26 | 2019-08-13 | Cortica, Ltd. | System and method for generating an advertisement effectiveness performance score |
US20160321253A1 (en) | 2005-10-26 | 2016-11-03 | Cortica, Ltd. | System and method for providing recommendations based on user profiles |
US11216498B2 (en) | 2005-10-26 | 2022-01-04 | Cortica, Ltd. | System and method for generating signatures to three-dimensional multimedia data elements |
US10614626B2 (en) | 2005-10-26 | 2020-04-07 | Cortica Ltd. | System and method for providing augmented reality challenges |
US10193990B2 (en) | 2005-10-26 | 2019-01-29 | Cortica Ltd. | System and method for creating user profiles based on multimedia content |
US9330189B2 (en) | 2005-10-26 | 2016-05-03 | Cortica, Ltd. | System and method for capturing a multimedia content item by a mobile device and matching sequentially relevant content to the multimedia content item |
US9396435B2 (en) | 2005-10-26 | 2016-07-19 | Cortica, Ltd. | System and method for identification of deviations from periodic behavior patterns in multimedia content |
US9087049B2 (en) | 2005-10-26 | 2015-07-21 | Cortica, Ltd. | System and method for context translation of natural language |
US9384196B2 (en) | 2005-10-26 | 2016-07-05 | Cortica, Ltd. | Signature generation for multimedia deep-content-classification by a large-scale matching system and method thereof |
US9767143B2 (en) | 2005-10-26 | 2017-09-19 | Cortica, Ltd. | System and method for caching of concept structures |
US9489431B2 (en) | 2005-10-26 | 2016-11-08 | Cortica, Ltd. | System and method for distributed search-by-content |
US10191976B2 (en) | 2005-10-26 | 2019-01-29 | Cortica, Ltd. | System and method of detecting common patterns within unstructured data elements retrieved from big data sources |
US8818916B2 (en) | 2005-10-26 | 2014-08-26 | Cortica, Ltd. | System and method for linking multimedia data elements to web pages |
US10535192B2 (en) | 2005-10-26 | 2020-01-14 | Cortica Ltd. | System and method for generating a customized augmented reality environment to a user |
US9256668B2 (en) | 2005-10-26 | 2016-02-09 | Cortica, Ltd. | System and method of detecting common patterns within unstructured data elements retrieved from big data sources |
US10698939B2 (en) | 2005-10-26 | 2020-06-30 | Cortica Ltd | System and method for customizing images |
US10180942B2 (en) | 2005-10-26 | 2019-01-15 | Cortica Ltd. | System and method for generation of concept structures based on sub-concepts |
US9372940B2 (en) | 2005-10-26 | 2016-06-21 | Cortica, Ltd. | Apparatus and method for determining user attention using a deep-content-classification (DCC) system |
US8312031B2 (en) * | 2005-10-26 | 2012-11-13 | Cortica Ltd. | System and method for generation of complex signatures for multimedia data content |
US10607355B2 (en) | 2005-10-26 | 2020-03-31 | Cortica, Ltd. | Method and system for determining the dimensions of an object shown in a multimedia content item |
US9747420B2 (en) | 2005-10-26 | 2017-08-29 | Cortica, Ltd. | System and method for diagnosing a patient based on an analysis of multimedia content |
US11620327B2 (en) | 2005-10-26 | 2023-04-04 | Cortica Ltd | System and method for determining a contextual insight and generating an interface with recommendations based thereon |
US9477658B2 (en) | 2005-10-26 | 2016-10-25 | Cortica, Ltd. | Systems and method for speech to speech translation using cores of a natural liquid architecture system |
US9646005B2 (en) | 2005-10-26 | 2017-05-09 | Cortica, Ltd. | System and method for creating a database of multimedia content elements assigned to users |
US9639532B2 (en) | 2005-10-26 | 2017-05-02 | Cortica, Ltd. | Context-based analysis of multimedia content items using signatures of multimedia elements and matching concepts |
US10372746B2 (en) | 2005-10-26 | 2019-08-06 | Cortica, Ltd. | System and method for searching applications using multimedia content elements |
US11032017B2 (en) | 2005-10-26 | 2021-06-08 | Cortica, Ltd. | System and method for identifying the context of multimedia content elements |
US9218606B2 (en) | 2005-10-26 | 2015-12-22 | Cortica, Ltd. | System and method for brand monitoring and trend analysis based on deep-content-classification |
US8326775B2 (en) * | 2005-10-26 | 2012-12-04 | Cortica Ltd. | Signature generation for multimedia deep-content-classification by a large-scale matching system and method thereof |
US9191626B2 (en) | 2005-10-26 | 2015-11-17 | Cortica, Ltd. | System and methods thereof for visual analysis of an image on a web-page and matching an advertisement thereto |
US10949773B2 (en) | 2005-10-26 | 2021-03-16 | Cortica, Ltd. | System and methods thereof for recommending tags for multimedia content elements based on context |
US11386139B2 (en) | 2005-10-26 | 2022-07-12 | Cortica Ltd. | System and method for generating analytics for entities depicted in multimedia content |
US9558449B2 (en) | 2005-10-26 | 2017-01-31 | Cortica, Ltd. | System and method for identifying a target area in a multimedia content element |
US8266185B2 (en) | 2005-10-26 | 2012-09-11 | Cortica Ltd. | System and methods thereof for generation of searchable structures respective of multimedia data content |
US11403336B2 (en) | 2005-10-26 | 2022-08-02 | Cortica Ltd. | System and method for removing contextually identical multimedia content elements |
US10776585B2 (en) | 2005-10-26 | 2020-09-15 | Cortica, Ltd. | System and method for recognizing characters in multimedia content |
US10621988B2 (en) | 2005-10-26 | 2020-04-14 | Cortica Ltd | System and method for speech to text translation using cores of a natural liquid architecture system |
US10635640B2 (en) | 2005-10-26 | 2020-04-28 | Cortica, Ltd. | System and method for enriching a concept database |
US11361014B2 (en) | 2005-10-26 | 2022-06-14 | Cortica Ltd. | System and method for completing a user profile |
US11003706B2 (en) * | 2005-10-26 | 2021-05-11 | Cortica Ltd | System and methods for determining access permissions on personalized clusters of multimedia content elements |
US9286623B2 (en) | 2005-10-26 | 2016-03-15 | Cortica, Ltd. | Method for determining an area within a multimedia content element over which an advertisement can be displayed |
US10691642B2 (en) | 2005-10-26 | 2020-06-23 | Cortica Ltd | System and method for enriching a concept database with homogenous concepts |
US11019161B2 (en) | 2005-10-26 | 2021-05-25 | Cortica, Ltd. | System and method for profiling users interest based on multimedia content analysis |
US10380267B2 (en) | 2005-10-26 | 2019-08-13 | Cortica, Ltd. | System and method for tagging multimedia content elements |
US10360253B2 (en) | 2005-10-26 | 2019-07-23 | Cortica, Ltd. | Systems and methods for generation of searchable structures respective of multimedia data content |
US7965923B2 (en) * | 2006-05-01 | 2011-06-21 | Yahoo! Inc. | Systems and methods for indexing and searching digital video content |
US7991271B2 (en) * | 2007-02-14 | 2011-08-02 | Sony Corporation | Transfer of metadata using video frames |
US10733326B2 (en) | 2006-10-26 | 2020-08-04 | Cortica Ltd. | System and method for identification of inappropriate multimedia content |
EP1976297A1 (en) * | 2007-03-29 | 2008-10-01 | Koninklijke KPN N.V. | Method and system for automatically selecting television channels |
US20080313146A1 (en) * | 2007-06-15 | 2008-12-18 | Microsoft Corporation | Content search service, finding content, and prefetching for thin client |
US8312022B2 (en) * | 2008-03-21 | 2012-11-13 | Ramp Holdings, Inc. | Search engine optimization |
US20110173270A1 (en) * | 2010-01-11 | 2011-07-14 | Ricoh Company, Ltd. | Conferencing Apparatus And Method |
US20120240177A1 (en) * | 2011-03-17 | 2012-09-20 | Anthony Rose | Content provision |
US9473614B2 (en) * | 2011-08-12 | 2016-10-18 | Htc Corporation | Systems and methods for incorporating a control connected media frame |
US8972262B1 (en) | 2012-01-18 | 2015-03-03 | Google Inc. | Indexing and search of content in recorded group communications |
US10304036B2 (en) | 2012-05-07 | 2019-05-28 | Nasdaq, Inc. | Social media profiling for one or more authors using one or more social media platforms |
US9418389B2 (en) | 2012-05-07 | 2016-08-16 | Nasdaq, Inc. | Social intelligence architecture using social media message queues |
US8935713B1 (en) * | 2012-12-17 | 2015-01-13 | Tubular Labs, Inc. | Determining audience members associated with a set of videos |
US8782722B1 (en) | 2013-04-05 | 2014-07-15 | Wowza Media Systems, LLC | Decoding of closed captions at a media server |
US8782721B1 (en) | 2013-04-05 | 2014-07-15 | Wowza Media Systems, LLC | Closed captions for live streams |
US20150310107A1 (en) * | 2014-04-24 | 2015-10-29 | Shadi A. Alhakimi | Video and audio content search engine |
KR102121534B1 (en) | 2015-03-10 | 2020-06-10 | Samsung Electronics Co., Ltd. | Method and device for determining similarity of sequences |
CN104866404B (en) * | 2015-05-19 | 2017-12-22 | Beijing Institute of Control Engineering | General data monitoring method |
US11037015B2 (en) | 2015-12-15 | 2021-06-15 | Cortica Ltd. | Identification of key points in multimedia data elements |
US11195043B2 (en) | 2015-12-15 | 2021-12-07 | Cortica, Ltd. | System and method for determining common patterns in multimedia content elements based on key points |
US10452714B2 (en) | 2016-06-24 | 2019-10-22 | Scripps Networks Interactive, Inc. | Central asset registry system and method |
US10372883B2 (en) | 2016-06-24 | 2019-08-06 | Scripps Networks Interactive, Inc. | Satellite and central asset registry systems and methods and rights management systems |
US11868445B2 (en) | 2016-06-24 | 2024-01-09 | Discovery Communications, Llc | Systems and methods for federated searches of assets in disparate dam repositories |
WO2019008581A1 (en) | 2017-07-05 | 2019-01-10 | Cortica Ltd. | Driving policies determination |
WO2019012527A1 (en) | 2017-07-09 | 2019-01-17 | Cortica Ltd. | Deep learning networks orchestration |
KR102281882B1 (en) * | 2018-03-23 | 2021-07-27 | NEDL.Com Inc. | Real-time audio stream retrieval and presentation system |
US10891100B2 (en) | 2018-04-11 | 2021-01-12 | Matthew Cohn | System and method for capturing and accessing real-time audio and associated metadata |
US10846544B2 (en) | 2018-07-16 | 2020-11-24 | Cartica Ai Ltd. | Transportation prediction system and method |
US10839694B2 (en) | 2018-10-18 | 2020-11-17 | Cartica Ai Ltd | Blind spot alert |
US11126870B2 (en) | 2018-10-18 | 2021-09-21 | Cartica Ai Ltd. | Method and system for obstacle detection |
US20200133308A1 (en) | 2018-10-18 | 2020-04-30 | Cartica Ai Ltd | Vehicle to vehicle (v2v) communication less truck platooning |
US11181911B2 (en) | 2018-10-18 | 2021-11-23 | Cartica Ai Ltd | Control transfer of a vehicle |
US11700356B2 (en) | 2018-10-26 | 2023-07-11 | AutoBrains Technologies Ltd. | Control transfer of a vehicle |
US10789535B2 (en) | 2018-11-26 | 2020-09-29 | Cartica Ai Ltd | Detection of road elements |
US11643005B2 (en) | 2019-02-27 | 2023-05-09 | Autobrains Technologies Ltd | Adjusting adjustable headlights of a vehicle |
US11285963B2 (en) | 2019-03-10 | 2022-03-29 | Cartica Ai Ltd. | Driver-based prediction of dangerous events |
US11694088B2 (en) | 2019-03-13 | 2023-07-04 | Cortica Ltd. | Method for object detection using knowledge distillation |
US11132548B2 (en) | 2019-03-20 | 2021-09-28 | Cortica Ltd. | Determining object information that does not explicitly appear in a media unit signature |
US11569921B2 (en) | 2019-03-22 | 2023-01-31 | Matthew Cohn | System and method for capturing and accessing real-time audio and associated metadata |
US12055408B2 (en) | 2019-03-28 | 2024-08-06 | Autobrains Technologies Ltd | Estimating a movement of a hybrid-behavior vehicle |
US10789527B1 (en) | 2019-03-31 | 2020-09-29 | Cortica Ltd. | Method for object detection using shallow neural networks |
US11222069B2 (en) | 2019-03-31 | 2022-01-11 | Cortica Ltd. | Low-power calculation of a signature of a media unit |
US10796444B1 (en) | 2019-03-31 | 2020-10-06 | Cortica Ltd | Configuring spanning elements of a signature generator |
US11488290B2 (en) | 2019-03-31 | 2022-11-01 | Cortica Ltd. | Hybrid representation of a media unit |
US10776669B1 (en) | 2019-03-31 | 2020-09-15 | Cortica Ltd. | Signature generation and object detection that refer to rare scenes |
US11593662B2 (en) | 2019-12-12 | 2023-02-28 | Autobrains Technologies Ltd | Unsupervised cluster generation |
US10748022B1 (en) | 2019-12-12 | 2020-08-18 | Cartica Ai Ltd | Crowd separation |
US11590988B2 (en) | 2020-03-19 | 2023-02-28 | Autobrains Technologies Ltd | Predictive turning assistant |
US11827215B2 (en) | 2020-03-31 | 2023-11-28 | AutoBrains Technologies Ltd. | Method for training a driving related object detector |
CN111898510B (en) * | 2020-07-23 | 2023-07-28 | Hefei University of Technology | Cross-modal pedestrian re-identification method based on progressive neural network |
US11756424B2 (en) | 2020-07-24 | 2023-09-12 | AutoBrains Technologies Ltd. | Parking assist |
US12049116B2 (en) | 2020-09-30 | 2024-07-30 | Autobrains Technologies Ltd | Configuring an active suspension |
US12142005B2 (en) | 2020-10-13 | 2024-11-12 | Autobrains Technologies Ltd | Camera based distance measurements |
US12139166B2 (en) | 2021-06-07 | 2024-11-12 | Autobrains Technologies Ltd | Cabin preferences setting that is based on identification of one or more persons in the cabin |
EP4194300A1 (en) | 2021-08-05 | 2023-06-14 | Autobrains Technologies LTD. | Providing a prediction of a radius of a motorcycle turn |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020133816A1 (en) * | 1994-06-21 | 2002-09-19 | Greene Steven Bradford | System for collecting data concerning received transmitted material |
GB9504376D0 (en) | 1995-03-04 | 1995-04-26 | Televitesse Systems Inc | Automatic broadcast monitoring system |
US20020120925A1 (en) * | 2000-03-28 | 2002-08-29 | Logan James D. | Audio and video program recording, editing and playback systems using metadata |
CA2399502A1 (en) * | 2000-02-02 | 2001-08-09 | Worldgate Service, Inc. | System and method for transmitting and displaying targeted information |
FR2806573B1 (en) * | 2000-03-15 | 2002-09-06 | Thomson Multimedia Sa | METHOD FOR VIEWING BROADCASTED AND RECORDED BROADCASTS HAVING A COMMON CHARACTERISTIC AND ASSOCIATED DEVICE |
EP1287679A4 (en) * | 2000-04-21 | 2009-06-17 | Goldpocket Interactive Inc | System and method for merging interactive television data with closed caption data |
US20020152463A1 (en) * | 2000-11-16 | 2002-10-17 | Dudkiewicz Gil Gavriel | System and method for personalized presentation of video programming events |
US20020157113A1 (en) * | 2001-04-20 | 2002-10-24 | Fred Allegrezza | System and method for retrieving and storing multimedia data |
JP4708607B2 (en) * | 2001-07-03 | 2011-06-22 | Canon Inc. | Broadcast receiving apparatus and control method thereof |
JP2003189206A (en) * | 2001-12-20 | 2003-07-04 | Pioneer Electronic Corp | Method and device for generating viewing schedule |
KR100875137B1 (en) * | 2002-02-23 | 2008-12-22 | LG EI Co., Ltd. | Automatic Cable TV Band Navigation Method |
- 2005
- 2005-02-24 US US11/063,559 patent/US20050198006A1/en not_active Abandoned
- 2005-02-24 CA CA2498364A patent/CA2498364C/en active Active
- 2007
- 2007-11-29 US US11/947,460 patent/US8015159B2/en not_active Expired - Fee Related
Patent Citations (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3760275A (en) * | 1970-10-24 | 1973-09-18 | T Ohsawa | Automatic telecasting or radio broadcasting monitoring system |
US4792864A (en) * | 1985-09-03 | 1988-12-20 | Video Research Limited | Apparatus for detecting recorded data in a video tape recorder for audience rating purposes |
US5157491A (en) * | 1988-10-17 | 1992-10-20 | Kassatly L Samuel A | Method and apparatus for video broadcasting and teleconferencing |
US4975770A (en) * | 1989-07-31 | 1990-12-04 | Troxell James D | Method for the enhancement of contours for video broadcasts |
US5313297A (en) * | 1991-09-19 | 1994-05-17 | Costem Inc. | System for providing pictures responding to users' remote control |
US5231494A (en) * | 1991-10-08 | 1993-07-27 | General Instrument Corporation | Selection of compressed television signals from single channel allocation based on viewer characteristics |
US5717878A (en) * | 1994-02-25 | 1998-02-10 | Sextant Avionique | Method and device for distributing multimedia data, providing both video broadcast and video distribution services |
US5636346A (en) * | 1994-05-09 | 1997-06-03 | The Electronic Address, Inc. | Method and system for selectively targeting advertisements and programming |
US6606128B2 (en) * | 1995-11-20 | 2003-08-12 | United Video Properties, Inc. | Interactive special events video signal navigation system |
US5892554A (en) * | 1995-11-28 | 1999-04-06 | Princeton Video Image, Inc. | System and method for inserting static and dynamic images into a live video broadcast |
US6061056A (en) * | 1996-03-04 | 2000-05-09 | Telexis Corporation | Television monitoring system with automatic selection of program material of interest and subsequent display under user control |
US5999970A (en) * | 1996-04-10 | 1999-12-07 | World Gate Communications, Llc | Access system and method for providing interactive access to an information source through a television distribution system |
US6160988A (en) * | 1996-05-30 | 2000-12-12 | Electronic Data Systems Corporation | System and method for managing hardware to control transmission and reception of video broadcasts |
US6157809A (en) * | 1996-08-07 | 2000-12-05 | Kabushiki Kaisha Toshiba | Broadcasting system, broadcast receiving unit, and recording medium used in the broadcasting system |
US5986692A (en) * | 1996-10-03 | 1999-11-16 | Logan; James D. | Systems and methods for computer enhanced broadcast monitoring |
US6188436B1 (en) * | 1997-01-31 | 2001-02-13 | Hughes Electronics Corporation | Video broadcast system with video data shifting |
US6226030B1 (en) * | 1997-03-28 | 2001-05-01 | International Business Machines Corporation | Automated and selective distribution of video broadcasts |
US6320917B1 (en) * | 1997-05-02 | 2001-11-20 | Lsi Logic Corporation | Demodulating digital video broadcast signals |
US5847760A (en) * | 1997-05-22 | 1998-12-08 | Optibase Ltd. | Method for managing video broadcast |
US6546556B1 (en) * | 1997-12-26 | 2003-04-08 | Matsushita Electric Industrial Co., Ltd. | Video clip identification system unusable for commercial cutting |
US6266094B1 (en) * | 1999-06-14 | 2001-07-24 | Medialink Worldwide Incorporated | Method and apparatus for the aggregation and selective retrieval of television closed caption word content originating from multiple geographic locations |
US20010049820A1 (en) * | 1999-12-21 | 2001-12-06 | Barton James M. | Method for enhancing digital video recorder television advertising viewership |
US20050273828A1 (en) * | 1999-12-21 | 2005-12-08 | Tivo Inc. | Method for enhancing digital video recorder television advertising viewership |
US6397041B1 (en) * | 1999-12-22 | 2002-05-28 | Radio Propagation Services, Inc. | Broadcast monitoring and control system |
US20030229900A1 (en) * | 2002-05-10 | 2003-12-11 | Richard Reisman | Method and apparatus for browsing using multiple coordinated device sets |
US20040031058A1 (en) * | 2002-05-10 | 2004-02-12 | Richard Reisman | Method and apparatus for browsing using alternative linkbases |
Cited By (124)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7631015B2 (en) | 1997-03-14 | 2009-12-08 | Microsoft Corporation | Interactive playlist generation using annotations |
US7295752B1 (en) | 1997-08-14 | 2007-11-13 | Virage, Inc. | Video cataloger system with audio track extraction |
US9055318B2 (en) | 1998-07-14 | 2015-06-09 | Rovi Guides, Inc. | Client-server based interactive guide with server storage |
US9021538B2 (en) | 1998-07-14 | 2015-04-28 | Rovi Guides, Inc. | Client-server based interactive guide with server recording |
US9118948B2 (en) | 1998-07-14 | 2015-08-25 | Rovi Guides, Inc. | Client-server based interactive guide with server recording |
US9154843B2 (en) | 1998-07-14 | 2015-10-06 | Rovi Guides, Inc. | Client-server based interactive guide with server recording |
US10075746B2 (en) | 1998-07-14 | 2018-09-11 | Rovi Guides, Inc. | Client-server based interactive television guide with server recording |
US9055319B2 (en) | 1998-07-14 | 2015-06-09 | Rovi Guides, Inc. | Interactive guide with recording |
US9232254B2 (en) | 1998-07-14 | 2016-01-05 | Rovi Guides, Inc. | Client-server based interactive television guide with server recording |
US9226006B2 (en) | 1998-07-14 | 2015-12-29 | Rovi Guides, Inc. | Client-server based interactive guide with server recording |
US20050033760A1 (en) * | 1998-09-01 | 2005-02-10 | Charles Fuller | Embedded metadata engines in digital capture devices |
US8495694B2 (en) | 2000-04-07 | 2013-07-23 | Virage, Inc. | Video-enabled community building |
US8171509B1 (en) | 2000-04-07 | 2012-05-01 | Virage, Inc. | System and method for applying a database to video multimedia |
US7260564B1 (en) * | 2000-04-07 | 2007-08-21 | Virage, Inc. | Network video guide and spidering |
US9338520B2 (en) | 2000-04-07 | 2016-05-10 | Hewlett Packard Enterprise Development Lp | System and method for applying a database to video multimedia |
US7962948B1 (en) | 2000-04-07 | 2011-06-14 | Virage, Inc. | Video-enabled community building |
US7769827B2 (en) | 2000-04-07 | 2010-08-03 | Virage, Inc. | Interactive video application hosting |
US9684728B2 (en) | 2000-04-07 | 2017-06-20 | Hewlett Packard Enterprise Development Lp | Sharing video |
US8387087B2 (en) | 2000-04-07 | 2013-02-26 | Virage, Inc. | System and method for applying a database to video multimedia |
US8548978B2 (en) | 2000-04-07 | 2013-10-01 | Virage, Inc. | Network video guide and spidering |
US9294799B2 (en) | 2000-10-11 | 2016-03-22 | Rovi Guides, Inc. | Systems and methods for providing storage of data on servers in an on-demand media delivery system |
US7912827B2 (en) | 2004-12-02 | 2011-03-22 | At&T Intellectual Property Ii, L.P. | System and method for searching text-based media content |
US20060122984A1 (en) * | 2004-12-02 | 2006-06-08 | At&T Corp. | System and method for searching text-based media content |
US8625960B2 (en) * | 2005-01-07 | 2014-01-07 | Samsung Electronics Co., Ltd. | Apparatus and method for reproducing storage medium that stores metadata for providing enhanced search function |
US20100202753A1 (en) * | 2005-01-07 | 2010-08-12 | Samsung Electronics Co., Ltd. | Apparatus and method for reproducing storage medium that stores metadata for providing enhanced search function |
US20100217775A1 (en) * | 2005-01-07 | 2010-08-26 | Samsung Electronics Co., Ltd. | Apparatus and method for reproducing storage medium that stores metadata for providing enhanced search function |
US20060153535A1 (en) * | 2005-01-07 | 2006-07-13 | Samsung Electronics Co., Ltd. | Apparatus and method for reproducing storage medium that stores metadata for providing enhanced search function |
US8842977B2 (en) * | 2005-01-07 | 2014-09-23 | Samsung Electronics Co., Ltd. | Storage medium storing metadata for providing enhanced search function |
US8630531B2 (en) | 2005-01-07 | 2014-01-14 | Samsung Electronics Co., Ltd. | Apparatus and method for reproducing storage medium that stores metadata for providing enhanced search function |
US20060153542A1 (en) * | 2005-01-07 | 2006-07-13 | Samsung Electronics Co., Ltd. | Storage medium storing metadata for providing enhanced search function |
US20060212905A1 (en) * | 2005-03-17 | 2006-09-21 | Hitachi, Ltd. | Broadcast receiving terminal and information processing apparatus |
US20060294082A1 (en) * | 2005-06-28 | 2006-12-28 | Samsung Electronics Co., Ltd | Apparatus and method for playing content according to numeral key input |
US20070027844A1 (en) * | 2005-07-28 | 2007-02-01 | Microsoft Corporation | Navigating recorded multimedia content using keywords or phrases |
US8156114B2 (en) * | 2005-08-26 | 2012-04-10 | At&T Intellectual Property Ii, L.P. | System and method for searching and analyzing media content |
US20070050406A1 (en) * | 2005-08-26 | 2007-03-01 | At&T Corp. | System and method for searching and analyzing media content |
US9372926B2 (en) | 2005-10-19 | 2016-06-21 | Microsoft International Holdings B.V. | Intelligent video summaries in information access |
US20080097970A1 (en) * | 2005-10-19 | 2008-04-24 | Fast Search And Transfer Asa | Intelligent Video Summaries in Information Access |
US8296797B2 (en) * | 2005-10-19 | 2012-10-23 | Microsoft International Holdings B.V. | Intelligent video summaries in information access |
US9122754B2 (en) | 2005-10-19 | 2015-09-01 | Microsoft International Holdings B.V. | Intelligent video summaries in information access |
WO2007073347A1 (en) * | 2005-12-19 | 2007-06-28 | Agency For Science, Technology And Research | Annotation of video footage and personalised video generation |
WO2007073349A1 (en) * | 2005-12-19 | 2007-06-28 | Agency For Science, Technology And Research | Method and system for event detection in a video stream |
US20100005485A1 (en) * | 2005-12-19 | 2010-01-07 | Agency For Science, Technology And Research | Annotation of video footage and personalised video generation |
US20070154176A1 (en) * | 2006-01-04 | 2007-07-05 | Elcock Albert F | Navigating recorded video using captioning, dialogue and sound effects |
US20070154171A1 (en) * | 2006-01-04 | 2007-07-05 | Elcock Albert F | Navigating recorded video using closed captioning |
US20070174326A1 (en) * | 2006-01-24 | 2007-07-26 | Microsoft Corporation | Application of metadata to digital media |
US20070203945A1 (en) * | 2006-02-28 | 2007-08-30 | Gert Hercules Louw | Method for integrated media preview, analysis, purchase, and display |
US20070204285A1 (en) * | 2006-02-28 | 2007-08-30 | Gert Hercules Louw | Method for integrated media monitoring, purchase, and display |
EP1873966A1 (en) * | 2006-03-08 | 2008-01-02 | Huawei Technologies Co., Ltd. | Playing method, system and device |
EP1873966A4 (en) * | 2006-03-08 | 2009-04-29 | Huawei Tech Co Ltd | Playing method, system and device |
US7954049B2 (en) | 2006-05-15 | 2011-05-31 | Microsoft Corporation | Annotating multimedia files along a timeline |
US20070276852A1 (en) * | 2006-05-25 | 2007-11-29 | Microsoft Corporation | Downloading portions of media files |
US20140032561A1 (en) * | 2006-07-18 | 2014-01-30 | Aol Inc. | Searching for transient streaming multimedia resources |
US20080212932A1 (en) * | 2006-07-19 | 2008-09-04 | Samsung Electronics Co., Ltd. | System for managing video based on topic and method using the same and method for searching video based on topic |
US20110209185A1 (en) * | 2006-08-01 | 2011-08-25 | Microsoft Corporation | Media content catalog service |
US20080046929A1 (en) * | 2006-08-01 | 2008-02-21 | Microsoft Corporation | Media content catalog service |
US9055317B2 (en) | 2006-08-01 | 2015-06-09 | Microsoft Technology Licensing, Llc | Media content catalog service |
US8555317B2 (en) | 2006-08-01 | 2013-10-08 | Microsoft Corporation | Media content catalog service |
US7962937B2 (en) | 2006-08-01 | 2011-06-14 | Microsoft Corporation | Media content catalog service |
US20080046401A1 (en) * | 2006-08-21 | 2008-02-21 | Myung-Cheol Lee | System and method for processing continuous integrated queries on both data stream and stored data using user-defined share trigger |
US7860884B2 (en) * | 2006-08-21 | 2010-12-28 | Electronics And Telecommunications Research Institute | System and method for processing continuous integrated queries on both data stream and stored data using user-defined shared trigger |
US20080091513A1 (en) * | 2006-09-13 | 2008-04-17 | Video Monitoring Services Of America, L.P. | System and method for assessing marketing data |
US20090319365A1 (en) * | 2006-09-13 | 2009-12-24 | James Hallowell Waggoner | System and method for assessing marketing data |
US8396878B2 (en) | 2006-09-22 | 2013-03-12 | Limelight Networks, Inc. | Methods and systems for generating automated tags for video files |
US9015172B2 (en) | 2006-09-22 | 2015-04-21 | Limelight Networks, Inc. | Method and subsystem for searching media content within a content-search service system |
US8966389B2 (en) | 2006-09-22 | 2015-02-24 | Limelight Networks, Inc. | Visual interface for identifying positions of interest within a sequentially ordered information encoding |
US8832742B2 (en) | 2006-10-06 | 2014-09-09 | United Video Properties, Inc. | Systems and methods for acquiring, categorizing and delivering media in interactive media guidance applications |
US8381249B2 (en) | 2006-10-06 | 2013-02-19 | United Video Properties, Inc. | Systems and methods for acquiring, categorizing and delivering media in interactive media guidance applications |
AU2018241142B2 (en) * | 2006-10-06 | 2020-10-22 | Rovi Guides, Inc. | Systems and Methods for Acquiring, Categorizing and Delivering Media in Interactive Media Guidance Applications |
US9215504B2 (en) | 2006-10-06 | 2015-12-15 | Rovi Guides, Inc. | Systems and methods for acquiring, categorizing and delivering media in interactive media guidance applications |
US20080086456A1 (en) * | 2006-10-06 | 2008-04-10 | United Video Properties, Inc. | Systems and methods for acquiring, categorizing and delivering media in interactive media guidance applications |
AU2013201160B2 (en) * | 2006-10-06 | 2016-09-29 | Rovi Guides, Inc. | Systems and Methods for Acquiring, Categorizing and Delivering Media in Interactive Media Guidance Applications |
US8301731B2 (en) | 2007-06-25 | 2012-10-30 | University Of Southern California | Source-based alert when streaming media of live event on computer network is of current interest and related feedback |
US7930420B2 (en) * | 2007-06-25 | 2011-04-19 | University Of Southern California | Source-based alert when streaming media of live event on computer network is of current interest and related feedback |
US20080320159A1 (en) * | 2007-06-25 | 2008-12-25 | University Of Southern California (For Inventor Michael Naimark) | Source-Based Alert When Streaming Media of Live Event on Computer Network is of Current Interest and Related Feedback |
US20110167136A1 (en) * | 2007-06-25 | 2011-07-07 | University Of Southern California | Source-Based Alert When Streaming Media of Live Event on Computer Network is of Current Interest and Related Feedback |
US20090019009A1 (en) * | 2007-07-12 | 2009-01-15 | At&T Corp. | SYSTEMS, METHODS AND COMPUTER PROGRAM PRODUCTS FOR SEARCHING WITHIN MOVIES (SWiM) |
US9218425B2 (en) | 2007-07-12 | 2015-12-22 | At&T Intellectual Property Ii, L.P. | Systems, methods and computer program products for searching within movies (SWiM) |
US9747370B2 (en) | 2007-07-12 | 2017-08-29 | At&T Intellectual Property Ii, L.P. | Systems, methods and computer program products for searching within movies (SWiM) |
US10606889B2 (en) | 2007-07-12 | 2020-03-31 | At&T Intellectual Property Ii, L.P. | Systems, methods and computer program products for searching within movies (SWiM) |
US8781996B2 (en) | 2007-07-12 | 2014-07-15 | At&T Intellectual Property Ii, L.P. | Systems, methods and computer program products for searching within movies (SWiM) |
WO2009026564A1 (en) * | 2007-08-22 | 2009-02-26 | Google Inc. | Detection and classification of matches between time-based media |
US9178957B2 (en) | 2007-09-27 | 2015-11-03 | Adobe Systems Incorporated | Application and data agnostic collaboration services |
US20090089379A1 (en) * | 2007-09-27 | 2009-04-02 | Adobe Systems Incorporated | Application and data agnostic collaboration services |
US9420014B2 (en) | 2007-11-15 | 2016-08-16 | Adobe Systems Incorporated | Saving state of a collaborative session in an editable format |
US20090281897A1 (en) * | 2008-05-07 | 2009-11-12 | Antos Jeffrey D | Capture and Storage of Broadcast Information for Enhanced Retrieval |
US20130007620A1 (en) * | 2008-09-23 | 2013-01-03 | Jonathan Barsook | System and Method for Visual Search in a Video Media Player |
US8239359B2 (en) * | 2008-09-23 | 2012-08-07 | Disney Enterprises, Inc. | System and method for visual search in a video media player |
US20100082585A1 (en) * | 2008-09-23 | 2010-04-01 | Disney Enterprises, Inc. | System and method for visual search in a video media player |
US9165070B2 (en) * | 2008-09-23 | 2015-10-20 | Disney Enterprises, Inc. | System and method for visual search in a video media player |
US7945622B1 (en) | 2008-10-01 | 2011-05-17 | Adobe Systems Incorporated | User-aware collaboration playback and recording |
US9294291B2 (en) | 2008-11-12 | 2016-03-22 | Adobe Systems Incorporated | Adaptive connectivity in network-based collaboration |
US9565249B2 (en) | 2008-11-12 | 2017-02-07 | Adobe Systems Incorporated | Adaptive connectivity in network-based collaboration background information |
US10063934B2 (en) | 2008-11-25 | 2018-08-28 | Rovi Technologies Corporation | Reducing unicast session duration with restart TV |
US8914829B2 (en) | 2009-09-14 | 2014-12-16 | At&T Intellectual Property I, Lp | System and method of proactively recording to a digital video recorder for data analysis |
US8910232B2 (en) | 2009-09-14 | 2014-12-09 | At&T Intellectual Property I, Lp | System and method of analyzing internet protocol television content for closed-captioning information |
US20110067077A1 (en) * | 2009-09-14 | 2011-03-17 | At&T Intellectual Property I, L.P. | System and Method of Analyzing Internet Protocol Television Content Credits Information |
US8938761B2 (en) | 2009-09-14 | 2015-01-20 | At&T Intellectual Property I, Lp | System and method of analyzing internet protocol television content credits information |
US20110067078A1 (en) * | 2009-09-14 | 2011-03-17 | At&T Intellectual Property I, L.P. | System and Method of Proactively Recording to a Digital Video Recorder for Data Analysis |
US20110067079A1 (en) * | 2009-09-14 | 2011-03-17 | At&T Intellectual Property I, L.P. | System and Method of Analyzing Internet Protocol Television Content for Closed-Captioning Information |
US20110072456A1 (en) * | 2009-09-24 | 2011-03-24 | At&T Intellectual Property I, L.P. | System and Method for Substituting Broadband Delivered Advertisements for Expired Advertisements |
CN101720028A (en) * | 2009-12-01 | 2010-06-02 | Beijing Vimicro Electronics Co., Ltd. | Method and system for voice broadcast during video monitoring |
US20110239099A1 (en) * | 2010-03-23 | 2011-09-29 | Disney Enterprises, Inc. | System and method for video poetry using text based related media |
US9190109B2 (en) * | 2010-03-23 | 2015-11-17 | Disney Enterprises, Inc. | System and method for video poetry using text based related media |
US8688679B2 (en) | 2010-07-20 | 2014-04-01 | Smartek21, Llc | Computer-implemented system and method for providing searchable online media content |
US8688667B1 (en) * | 2011-02-08 | 2014-04-01 | Google Inc. | Providing intent sensitive search results |
US9183277B1 (en) | 2011-02-08 | 2015-11-10 | Google Inc. | Providing intent sensitive search results |
US20130066633A1 (en) * | 2011-09-09 | 2013-03-14 | Verisign, Inc. | Providing Audio-Activated Resource Access for User Devices |
US8214374B1 (en) * | 2011-09-26 | 2012-07-03 | Limelight Networks, Inc. | Methods and systems for abridging video files |
US9125169B2 (en) | 2011-12-23 | 2015-09-01 | Rovi Guides, Inc. | Methods and systems for performing actions based on location-based rules |
US20130291019A1 (en) * | 2012-04-27 | 2013-10-31 | Mixaroo, Inc. | Self-learning methods, entity relations, remote control, and other features for real-time processing, storage, indexing, and delivery of segmented video |
US11997340B2 (en) | 2012-04-27 | 2024-05-28 | Comcast Cable Communications, Llc | Topical content searching |
US8521719B1 (en) | 2012-10-10 | 2013-08-27 | Limelight Networks, Inc. | Searchable and size-constrained local log repositories for tracking visitors' access to web content |
US11995094B2 (en) * | 2015-04-28 | 2024-05-28 | Splunk Inc. | Executing alert actions based on search query results |
US11722507B1 (en) | 2015-04-28 | 2023-08-08 | Splunk Inc. | User configurable alert notifications applicable to search query results |
US20210026849A1 (en) * | 2015-04-28 | 2021-01-28 | Splunk Inc. | Executing alert actions based on search query results |
WO2017129979A1 (en) * | 2016-01-29 | 2017-08-03 | Waazon (Holdings) Limited | Automated search method, apparatus, and database |
US20180348970A1 (en) * | 2017-05-31 | 2018-12-06 | Snap Inc. | Methods and systems for voice driven dynamic menus |
US10845956B2 (en) * | 2017-05-31 | 2020-11-24 | Snap Inc. | Methods and systems for voice driven dynamic menus |
US11934636B2 (en) | 2017-05-31 | 2024-03-19 | Snap Inc. | Voice driven dynamic menus |
US11640227B2 (en) | 2017-05-31 | 2023-05-02 | Snap Inc. | Voice driven dynamic menus |
US10795699B1 (en) * | 2019-03-28 | 2020-10-06 | Cohesity, Inc. | Central storage management interface supporting native user interface versions |
US11531712B2 (en) | 2019-03-28 | 2022-12-20 | Cohesity, Inc. | Unified metadata search |
US11442752B2 (en) * | 2019-03-28 | 2022-09-13 | Cohesity, Inc. | Central storage management interface supporting native user interface versions |
US11463507B1 (en) * | 2019-04-22 | 2022-10-04 | Audible, Inc. | Systems for generating captions for audio content |
Also Published As
Publication number | Publication date |
---|---|
US8015159B2 (en) | 2011-09-06 |
CA2498364A1 (en) | 2005-08-24 |
CA2498364C (en) | 2012-05-15 |
US20080072256A1 (en) | 2008-03-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8015159B2 (en) | System and method for real-time media searching and alerting | |
US6266094B1 (en) | Method and apparatus for the aggregation and selective retrieval of television closed caption word content originating from multiple geographic locations | |
US8589973B2 (en) | Peer to peer media distribution system and method | |
US8533210B2 (en) | Index of locally recorded content | |
US8285701B2 (en) | Video and digital multimedia aggregator remote content crawler | |
US9047375B2 (en) | Internet video content delivery to television users | |
KR100889986B1 (en) | System and Method for Providing Suggested Keywords for Interactive Broadcasting Terminal | |
WO1996027840A1 (en) | Automatic broadcast monitoring system | |
US20020170068A1 (en) | Virtual and condensed television programs | |
US20030074671A1 (en) | Method for information retrieval based on network | |
US20030120748A1 (en) | Alternate delivery mechanisms of customized video streaming content to devices not meant for receiving video | |
KR100807745B1 (en) | EPG information provision method and system | |
US20090022476A1 (en) | Broadcasting System and Program Contents Delivery System | |
WO2017189177A1 (en) | Multimedia content management system | |
EP2724525A1 (en) | Method and device for optimizing storage of recorded video programs | |
CN111656794A (en) | System and method for tag-based content aggregation of related media content | |
US7009657B2 (en) | Method and system for the automatic collection and conditioning of closed caption text originating from multiple geographic locations | |
JP4195555B2 (en) | Content management receiver | |
WO2004043029A2 (en) | Multimedia management | |
CN1976430B (en) | Method for previewing a mobile multimedia program in a terminal | |
US7268823B2 (en) | Method and system for the automatic collection and conditioning of closed caption text originating from multiple geographic locations, and resulting databases produced thereby | |
US7518657B2 (en) | Method and system for the automatic collection and transmission of closed caption text | |
US20090172733A1 (en) | Method and system for content recording and indexing | |
KR100878909B1 (en) | Interactive DMB broadcast system and its provision | |
JP5105109B2 (en) | Search device and search system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: DNA13 INC., CANADA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BOICEY, TREVOR NELSON;JOHNSON, CHRISTOPHER JAMES;REEL/FRAME:016331/0106 Effective date: 20050223 |
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |