HK1140573B - Research data gathering - Google Patents
Description
Technical Field
The present invention relates to data acquisition, and more particularly to environmental data acquisition.
Background
There is considerable interest in encoding audio and video signals for various applications. For example, to identify what an individual or listener is hearing at a particular time, the listener's environment may be monitored at fixed time intervals for audio signals. If an audio signal contains an identification code, the signal can be identified by reading that code.
Methods of encoding identification codes in conjunction with broadcast signals are known. For example, methods are known for encoding a payload signal and an ancillary signal into an audio signal, wherein the ancillary signal comprises an identification code. By detecting and decoding the ancillary codes and associating the detected codes with one or more individuals, it is possible to correlate media audience activity with the delivery of a particular payload signal.
Disclosure of Invention
Having examined and understood the scope of previously available devices, the inventors of the present invention have developed a new and important understanding of the problems associated with the prior art and, from that understanding, have developed new and useful solutions and improved devices, including solutions and devices that produce surprising and beneficial results and that have not previously been discovered or disclosed by researchers in the field.
The invention, including these new and useful solutions and improved devices, will be described below with reference to several exemplary embodiments, including preferred embodiments.
For various groups, it is useful, and often important, to identify the audio signals heard by a listener. One such group is copyright owners seeking to facilitate copyright enforcement and protection. Copyrighted works are encoded with watermarks or other types of identifying information to enable electronic devices to ascertain when the copyrighted works may be reproduced or copied, or alternatively to restrict such reproduction or copying.
Another group that may be of interest is audio listeners, many of whom seek additional information about the received audio, including information identifying the audio work, such as the name of the work, the performer, the identity of the broadcaster, and so forth.
Yet another group comprises market research companies and their customers, including advertisers, advertising agencies, and media distributors, who are interested in ascertaining whether listeners and viewers perceive and/or are exposed to content through audible and/or visual messages, program content, advertisements, and the like. Market research companies typically engage in audience measurement or perform other operations (e.g., customer loyalty programs, commercial verification, etc.) using a variety of techniques.
Yet another interesting group is those seeking additional bandwidth for communicating data for other purposes that may not be related or correlated to the audio and/or video signal (e.g., song, program). For example, a telecommunications company, news organization, or other entity may utilize additional bandwidth for communicating data for various reasons, such as the communication of news, financial information, and the like.
In view of the above, it would be highly desirable to be able to accurately detect identification codes encoded in audio and/or video signals. However, many factors can interfere with the detection process, particularly where the encoded audio is communicated via an acoustic channel. The acoustic characteristics of audio environments vary widely, and thus the accurate detection rate varies with the environment. For example, various types of environments are quite disadvantageous for easy and accurate detection of an identification code encoded in audio or video because of the presence of a large amount of noise or interference. In some instances and for various reasons, data encoded in audio and/or video signals is not properly transmitted by the electronic equipment transmitting the signals, and/or the electronic equipment receiving the signals does not properly receive the encoded data.
Therefore, there is a great need for a system/process that is able to ascertain with sufficient accuracy the ancillary codes encoded in audio and/or video signals under real-world adverse conditions.
These and other advantages and features of the invention will be more readily understood upon reading the following detailed description of the invention provided in conjunction with the accompanying drawings.
Drawings
FIG. 1 is a functional block diagram illustrating certain embodiments of a system for reading ancillary code encoded in audio media data;
FIG. 2 illustrates an ancillary code reading process in accordance with certain embodiments of the system shown in FIG. 1;
FIG. 2A illustrates an ancillary code reading process in accordance with certain other embodiments of the system shown in FIG. 1;
FIG. 3 illustrates an ancillary code reading process, in accordance with certain embodiments;
FIG. 4 schematically illustrates certain embodiments for reading ancillary codes from stored media data using different window sizes;
FIG. 5 further schematically illustrates read processes using different window sizes, in accordance with certain embodiments;
FIG. 6 schematically illustrates the use of multiple sub-passes to read ancillary codes from stored media data, in accordance with certain embodiments;
FIG. 7 illustrates various read processes using frequency offset in accordance with certain embodiments;
FIG. 8 shows a table identifying ten exemplary frequency bins and their corresponding frequency components in which code components are expected to be found in audio media data containing ancillary codes;
FIG. 9 shows a table identifying exemplary frequency bins and their corresponding frequency components, offset from those of FIG. 8, in which code components intended to be included in audio media data containing ancillary codes may be found;
FIG. 10 illustrates an exemplary pattern of symbols composing a message;
FIG. 11 is an exemplary pattern of symbols representing the same message "A" encoded in audio media data repeated three times;
FIG. 12 illustrates an exemplary pattern of decoded symbols containing incorrectly decoded symbols;
FIG. 13 is a functional block diagram illustrating a system operating in multiple power modes in accordance with certain embodiments; and
FIG. 14 is another functional block diagram illustrating a system operating in multiple modes according to some other embodiments.
Detailed Description
The following description is presented to enable any person skilled in the art to make and use the disclosed invention and sets forth the best modes presently contemplated by the inventors of carrying out the invention. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the present invention.
For the purposes of this application, the following terms and definitions will apply:
the term "data" as used herein refers to any indicia, signals, marks, symbols, domains, symbol sets, or any one or more other physical forms, whether permanent or temporary, whether visible, audible, acoustic, electric, magnetic, electromagnetic, or otherwise manifested, representing information. The term "data" as used to represent predetermined information in one physical form shall be construed as including any and all representations of corresponding information in one or more different physical forms.
The terms "media data" and "media" as used herein refer to data that is accessible in a variety of ways, whether by radio or cable, satellite, network, internetwork (including the internet), printing, displaying, distributed on storage media, or by any other means or technique that is humanly perceptible, regardless of the form or content of the data, and includes, but is not limited to, audio, video, audio/video, text, images, animations, databases, broadcasts, displays (including but not limited to video displays, posters, and billboards), signs, signals, web pages, print media, and streaming media data.
The term "research data" as used herein refers to data comprising: (1) data about media data usage (usage), (2) data about media data exposure (exposure), and/or (3) market research data.
The term "ancillary code" as used herein refers to data encoded in, added to, combined with, or embedded in media data for providing information identifying, describing, and/or characterizing the media data and/or other information that may be used as research data.
The term "read" as used herein refers to one or more processes for recovering research data that has been added to, encoded in, combined with, or embedded in media data.
The term "database" as used herein refers to an organized entity of related data, regardless of the manner in which the data or organized entities thereof are represented. For example, the organized entity of related data may take the form of one or more of a table, a map, a grid, a packet, a datagram, a frame, a file, an email, a message, a document, a list, or may take any other form.
The term "network" as used herein includes both networks of all kinds and interconnected networks (including the internet) and is not limited to any particular network or interconnected network.
The terms "first," "second," "primary," and "secondary" are used to distinguish one element, set, data, object, step, process, activity or thing from another, and are not used to specify a relative position or a temporal setting unless otherwise specifically stated.
The terms "coupled", "coupled to", and "coupled with" as used herein each mean a relationship between or among two or more devices, apparatus, files, circuits, elements, functions, operations, processes, programs, media, components, networks, systems, subsystems, and/or means, constituting any one or more of: (a) a connection, whether direct or through one or more other devices, apparatus, files, circuits, elements, functions, operations, processes, programs, media, components, networks, systems, subsystems, or means; (b) a communications relationship, whether direct or through one or more other devices, apparatus, files, circuits, elements, functions, operations, processes, programs, media, components, networks, systems, subsystems, or means; and/or (c) a functional relationship in which the operation of any one or more devices, apparatus, files, circuits, elements, functions, operations, processes, programs, media, components, networks, systems, subsystems, or means depends, in whole or in part, upon the operation of any one or more others thereof.
The terms "communicate", "communicating", and "communication" as used herein include both conveying data from a source to a destination and delivering data to a communications medium, system, channel, network, device, wire, cable, fiber, circuit, and/or link to be conveyed to a destination. The term "communications" as used herein includes one or more of a communications medium, system, channel, network, device, wire, cable, fiber, circuit, and link.
The term "processor" as used herein refers to processing devices, apparatus, programs, circuits, components, systems and subsystems, whether implemented in hardware, software, or both, and whether programmable or not. The term "processor" as used herein includes, but is not limited to, one or more computers, hardwired circuitry, signal modification devices and systems, devices and machines for controlling systems, central processing units, programmable devices and systems, field programmable gate arrays, application specific integrated circuits, systems on a chip, systems comprised of discrete elements and/or circuitry, state machines, virtual machines, data processors, processing facilities, and combinations of any of the above.
The terms "memory" and "data store" as used herein refer to one or more data storage devices, apparatus, programs, circuits, components, systems, subsystems, locations, and storage media used to retain data, whether temporary or permanent, and to provide such retained data.
The terms "panelist", "respondent", and "participant" are used interchangeably herein to refer to a person about whom information related to his or her activity is collected in a study, whether that person participates knowingly or unknowingly and whether the information is gathered by electronic, survey, or other means.
The term "research device" as used herein shall refer to (1) a portable user appliance (PUA) configured or otherwise capable of collecting, storing, and/or communicating research data, or of cooperating with other devices to collect, store, and/or communicate research data, and/or (2) a research data collection, storage, and/or communication device.
Fig. 1 is a functional block diagram illustrating an advantageous embodiment of a system 10 for reading an ancillary code encoded as a message in audio media data. In some of these embodiments, the encoded message comprises a continuous stream of messages including data that may be used for audience measurement, commercial verification, royalty calculations, and the like. Such data typically includes an identification of a program, good, file, song, network, station, or channel, or otherwise describes some aspect of the audio media data or other data related thereto that characterizes the audio media data. In some such embodiments, the continuous stream of encoded messages is composed of symbols arranged chronologically in the audio media data.
The system 10 includes an audio media data input 12 for receiving audio media data encoded with an ancillary code. In certain embodiments, audio media data input 12 comprises or is contained in a single device fixed at the source being monitored or multiple devices fixed at multiple sources being monitored. In some embodiments, audio media data input 12 comprises and/or is contained within a portable monitoring device that can be carried by an individual for monitoring any audio media data exposed to the individual. In certain embodiments, the PUA includes an audio media data input.
When the audio media data is acoustic data, the audio media data input 12 typically includes an acoustic transducer, such as a microphone, having an input that receives the audio media data in the form of acoustic energy and serves to transform the acoustic energy into electrical data. When monitoring audio media data in the form of light energy, the audio media data input 12 includes a light-sensitive device, such as a photodiode. In certain embodiments, the audio media data input 12 includes a magnetic pickup for sensing a magnetic field associated with a speaker, a capacitive pickup for sensing an electric field, or an antenna for receiving electromagnetic energy. In still other embodiments, the audio media data input 12 comprises an electrical connection to a monitored device, which may be a television, radio, cable converter, satellite television system, gaming system, VCR, DVD player, PUA, portable media player, Hi-Fi system, home theater system, audio reproduction system, video reproduction system, computer, web appliance, or the like. In still other embodiments, the audio media data input 12 is embodied as monitoring software running on a computer or other rendering or processing system for collecting media data.
The storage 14 stores the received audio media data for subsequent processing. The processor 16 processes the received data to read the ancillary codes encoded in the audio media data and stores the detected encoded messages in the storage 14. For example, it may be desirable to store data generated by the processor 16 for later use. Communications 20 coupled to the processor 16 serve to communicate data from the system 10 to, for example, another processor 22. In certain embodiments, the other processor 22 generates reports based on ancillary codes read from the audio media data and communicated from the system 10 by the processor 16. In certain embodiments, the processor 22 processes audio media data, in compressed or uncompressed form, communicated from the system 10 to read the ancillary codes therein. In certain embodiments, the processor 16 performs preliminary processing of the audio media data to reduce the processing requirements on the processor 22, and the processor 22 completes processing of the pre-processed data to read the ancillary codes therefrom. In certain embodiments, the processor 16 is configured to read the ancillary codes in the audio media data using a first process, and the processor 22 further processes the ancillary codes and/or the audio media data collected by the system 10 using a second process that is a modified version of the first process or a different process.
A method of collecting data relating to the use and/or exposure of media data comprises processing the media data using a parameter having a first value to produce first media use and/or exposure data, assigning a second value to the parameter, the second value being different from the first value, and processing the media data using the parameter having the second value to produce second media use and/or exposure data.
A system for collecting data relating to usage and/or exposure of media data includes a processor configured to process the media data using a parameter having a first value to produce first media usage and/or exposure data, assign a second value to the parameter, the second value being different from the first value, and process the media data using the parameter having the second value to produce second media usage and/or exposure data.
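The two-pass method and system described above can be outlined in code. The following Python fragment is an illustrative sketch only: `process`, `collect_usage_data`, and the stand-in decoding rule (a segment yields a symbol only when it is at least `window_size` elements long) are hypothetical names and logic, not the disclosed system.

```python
def process(media_data, window_size):
    """Hypothetical code-reading pass (stub logic).

    Stands in for the decoder described in the text; here a segment
    yields its symbol only when it is at least `window_size` long.
    """
    return [seg[0] for seg in media_data if len(seg) >= window_size]

def collect_usage_data(media_data, first_value, second_value):
    # First pass: process the media data with the parameter at its
    # first value to produce first media usage and/or exposure data.
    first_results = process(media_data, first_value)
    # Second pass: assign a different, second value to the same
    # parameter and process the media data again.
    second_results = process(media_data, second_value)
    return first_results, second_results
```

With a small window the second pass recovers codes the first pass missed, which is the point of re-processing with a different parameter value.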
Fig. 2 is a flow chart 100 provided to illustrate the decoding process performed by the processor 16 in certain embodiments. At the beginning, parameters for processing the received media data are set 110. Various parameters that may be set, as described further below, include window size and frequency scale. In particular, the type of one or more parameters that are set 110 depends on the type of processing performed by the processor 16 on the received media data at 120. In certain embodiments, the processor 16 performs symbol sequence evaluation of the audio media data to read the symbols of encoded messages contained in the audio media data as a continuous stream of encoded messages. Various code reading techniques suitable for the processing 120 are disclosed in the following patent documents: U.S. Pat. No. 5,764,763 to Jensen et al., U.S. Pat. No. 5,450,490 to Jensen et al., U.S. Pat. No. 5,579,124 to Aijala et al., U.S. Pat. No. 5,581,800 to Fardeau et al., U.S. Pat. No. 6,871,180 to Neuhauser et al., U.S. Pat. No. 6,845,360 to Jensen et al., U.S. Pat. No. 6,862,355 to Kolessar et al., U.S. Pat. No. 5,319,735 to Preuss et al., U.S. Pat. No. 5,687,191 to Lee et al., U.S. Pat. No. 6,175,627 to Petrovic et al., U.S. Pat. No. 5,828,325 to Wolosewicz et al., U.S. Pat. No. 6,154,484 to Lee et al., U.S. Pat. No. 5,945,932 to Smith et al., U.S. Published Patent Application No. 2001/0053190 to Srinivasan, and U.S. Published Patent Application No. 2003/0110485 to Lu et al., among others.
Examples of techniques for encoding ancillary codes in audio and for reading these codes are provided in Bender et al., "Techniques for Data Hiding," IBM Systems Journal, Vol. 35, Nos. 3 and 4, 1996, which is incorporated herein by reference in its entirety. Bender et al. disclose a technique for encoding audio, referred to as "phase encoding," in which segments of the audio are transformed into the frequency domain, such as by a discrete Fourier transform (DFT), to produce phase data for each segment. The phase data are then modified to encode a code symbol, such as one bit. Processing the phase-encoded audio to read the code is carried out by synchronizing with the data sequence and detecting the phase-encoded data using known values of the segment length, the DFT points, and the data interval.
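The phase-encoding idea attributed to Bender et al. can be illustrated with a toy model. The sketch below is an assumption-laden simplification: it applies a direct DFT to one short segment, forces the phase of a single hand-picked bin (`bin_k`) to ±π/2 to carry one bit, and ignores synchronization, robustness, and audibility; the function names are invented for illustration.

```python
import cmath
import math

def dft(x):
    """Direct (O(N^2)) discrete Fourier transform of a real segment."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT; returns the real part of each reconstructed sample."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def phase_encode(segment, bit, bin_k=3):
    """Encode one bit by forcing the phase of bin `bin_k` to +pi/2 or -pi/2."""
    X = dft(segment)
    target = math.pi / 2 if bit else -math.pi / 2
    mag = abs(X[bin_k])
    X[bin_k] = cmath.rect(mag, target)
    X[-bin_k] = cmath.rect(mag, -target)  # mirror bin keeps the signal real-valued
    return idft(X)

def phase_decode(segment, bin_k=3):
    """Read the bit back from the sign of the phase of bin `bin_k`."""
    X = dft(segment)
    return 1 if cmath.phase(X[bin_k]) > 0 else 0
```

A real detector would, as the text notes, rely on known segment length, DFT size, and data interval to stay synchronized across a sequence of segments.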
Bender et al. also describe spread spectrum encoding and decoding, various embodiments of which are disclosed in the above-referenced U.S. Pat. No. 5,579,124 to Aijala et al.
Yet another audio encoding and decoding technique described by Bender et al. is echo data hiding, in which data is embedded in a host audio signal by introducing echoes. The symbol state is represented by the value of the echo delay and is read by any suitable process for evaluating the length and/or presence of the echo delay.
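Echo data hiding can likewise be sketched: a bit is represented by which of two candidate delays the embedded echo uses, and the reader evaluates the presence of each delay, here via a plain autocorrelation rather than the cepstral analysis a practical detector would use. All names and parameter values below are illustrative assumptions.

```python
import random

def add_echo(signal, delay, alpha=0.5):
    """Embed data by adding a delayed, attenuated copy of the host signal."""
    out = list(signal)
    for n in range(delay, len(signal)):
        out[n] += alpha * signal[n - delay]
    return out

def echo_decode(signal, delay0, delay1):
    """Report the bit whose candidate delay shows the stronger autocorrelation."""
    def autocorr(lag):
        return sum(signal[n] * signal[n - lag] for n in range(lag, len(signal)))
    return 1 if autocorr(delay1) > autocorr(delay0) else 0
```

The echo's correlation peak at its own lag is what the evaluation of "length and/or presence of the echo delay" exploits.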
There is also a technique, or class of techniques, known as "amplitude modulation," described in R. Walker, "Audio Watermarking," BBC Research and Development, 2004. Such techniques involve modifying the envelope of the audio signal, for example by notching or otherwise modifying a transient portion of the signal, or by subjecting the envelope to longer-term modification. Processing the audio to read the code may be accomplished by detecting a change indicative of a notch or other modification, by accumulating or integrating over a period of time commensurate with the duration of an encoded symbol, or by other suitable techniques.
Another class of techniques described by Walker involves transforming audio from the time domain to some transform domain, such as the frequency domain, and then encoding by adding data to or otherwise modifying the transformed audio. The domain transform may be carried out by a Fourier, DCT, Hadamard, wavelet, or other transform, or by digital or analog filtering. Encoding may be achieved by adding a modulated carrier or other data (such as noise, noise-like data, or other symbols in the transform domain), by modifying the transformed audio (such as by notching or changing one or more frequency bands, bins, or combinations of bins), or by combining these methods. There are other related techniques for modifying the frequency distribution of audio data in the transform domain for encoding. Psychoacoustic masking may be used to render the codes inaudible or to reduce their prominence. Processing to read ancillary codes encoded in audio data by such techniques typically involves transforming the encoded audio into the transform domain and detecting the additions or other modifications that represent the code.
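A minimal sketch of transform-domain encoding follows, under the assumption that each symbol is assigned one frequency bin and is encoded by adding a low-level tone in that bin; psychoacoustic masking and realistic bin layouts are omitted, and the names `encode_symbol`, `decode_symbol`, and `symbol_bins` are hypothetical.

```python
import math

def bin_magnitude(x, k):
    """Magnitude of frequency bin k of x (single-bin DFT)."""
    N = len(x)
    re = sum(x[n] * math.cos(2 * math.pi * k * n / N) for n in range(N))
    im = sum(x[n] * math.sin(2 * math.pi * k * n / N) for n in range(N))
    return math.hypot(re, im)

def encode_symbol(host, symbol, symbol_bins, amplitude=0.3):
    """Add a low-level tone in the bin assigned to `symbol`."""
    N, k = len(host), symbol_bins[symbol]
    return [host[n] + amplitude * math.cos(2 * math.pi * k * n / N)
            for n in range(N)]

def decode_symbol(x, symbol_bins):
    """Transform the encoded audio and report the symbol whose bin holds the most energy."""
    return max(symbol_bins, key=lambda s: bin_magnitude(x, symbol_bins[s]))
```

Detection here is exactly the "detect additions in the transform domain" step: the decoder transforms the audio and compares the candidate bins.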
Another class of techniques described by Walker involves audio data that has been encoded for compression (whether lossy or lossless) or for other purposes, such as audio data encoded in MP3 or another MPEG audio format, AC-3, DTS, ATRAC, WMA, RealAudio, Ogg Vorbis, APT X100, FLAC, Shorten, Monkey's Audio, or another format. Encoding involves modifying the compressed audio data, such as modifying encoding coefficients and/or predefined decision thresholds. Processing the audio to read the code is achieved by detecting these modifications using knowledge of the predefined audio coding parameters.
Once the audio data has been processed 120, the data is stored 130 for later further processing, for communication from the system, and/or for preparation of reports.
A decision 140 is made whether further processing 120 is to be performed. If so, the processing parameters are again set 110 and further processing 120 is performed. If not, the data is not processed further. In some embodiments, the decision whether to perform further processing is made by incrementing or decrementing a counter and checking the counter value to determine whether it is equal to, greater than, or less than some predetermined value. This is useful when the number of passes is predetermined. In some embodiments, a flag or other indicator is set at 110 when the last parameter value is set, and the flag or indicator is tested at 140 to determine whether further processing is to be performed. This is useful when, for example, the number, type, or value of the parameters set at 110 can vary.
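The pass-control logic at 110/140 might be sketched as follows, using a counter over a predetermined schedule of parameter values in place of the counter or flag test described; the helper names are illustrative only.

```python
def run_passes(media_data, parameter_values, process):
    """Drive repeated processing passes over stored media data.

    `parameter_values` is the predetermined schedule of settings; the
    position in that schedule plays the role of the counter, and the
    last-value check plays the role of the flag tested at 140.
    """
    results = []
    for pass_number, value in enumerate(parameter_values):
        # Step 110: set the parameter; step 120: process with it.
        results.append(process(media_data, value))
        # Step 140: decide whether a further pass is warranted.
        last_pass = pass_number == len(parameter_values) - 1
        if last_pass:
            break  # no further processing
    return results
```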
In certain embodiments, the data generated at 120 is evaluated to determine whether further processing is to be performed. Fig. 2A is a flow chart illustrating these embodiments.
As in the embodiment of fig. 2, the processing parameters are set 150 and processing is performed 160 to read the ancillary code. Once the processor 16 has completed processing 160 the media data, the results of the processing are assessed 170. During the assessment 170, the results of the code reading process are evaluated to determine whether the quality or other characteristics of the data produced by the process 160 indicate that further processing using different or modified parameters should be performed. In some embodiments where the ancillary code to be read comprises one or more sequences of symbols representing an encoded message, such as an identification of a station, channel, network, or producer, or an identification of content, the assessment comprises determining whether all, some, or none of the expected symbols have been read and/or whether a quality measure representing the reliability of symbol detection indicates a sufficient probability of correct detection.
After assessing 170 the processing results, the processor 16 determines 180 whether the stored media data should be processed again. If so, one or more parameters are modified 150 and the processor 16 processes 160 the stored media data using the newly set one or more parameters. Thereafter, the results of the further processing are assessed 170 and it is again determined 180 whether the stored media data should be processed. On the other hand, if the assessment of the processing results indicates that the decoded signal has sufficient quality or that other assessed characteristics are sufficient, or if the assessment indicates that it is not worth processing the data again because there is an insufficient likelihood of an ancillary code being present in the data, the audio media data is not processed further. In some embodiments, if the media data is determined to contain no ancillary code, the media data is discarded or overwritten. In certain embodiments, the media data is processed differently to produce research data, such as by extracting signatures. In certain embodiments, the media data is stored and communicated to a different system for further processing.
In some embodiments, if the assessment 170 indicates that some, but not all, of the one or more ancillary codes have been read, further processing is performed. In some embodiments, if a predetermined number of processing cycles have been performed and/or a predetermined set of processing parameters has been used, and none of the one or more ancillary codes has been read, or the assessment 170 indicates that the most recent processing cycle did not achieve a better result than one or more previous processing cycles, then processing is not resumed. In some embodiments, if a predetermined number of cycles have been performed and/or a predetermined set of processing parameters has been used and no portion of the ancillary code has been read, processing is not resumed.
A method of collecting data relating to usage and/or exposure of media data, comprising processing the media data using a parameter having a first value to produce first media usage and/or exposure data, assessing the result of the first processing, assigning a second value to the parameter, the second value being different from the first value, and processing the media data using the parameter having the second value based on the result of the assessing to produce second media usage and/or exposure data.
A system for collecting data relating to usage and/or exposure of media data, comprising a processor configured to process the media data using a parameter having a first value to produce first media usage and/or exposure data, to assess the result of the first process, to assign a second value to the parameter, the second value being different from the first value, and to process the media data using the parameter having the second value to produce second media usage and/or exposure data based on the result of the assessment.
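The set/process/assess/decide loop of fig. 2A (steps 150-180) can be sketched abstractly; `process` and `assess` below are hypothetical callables standing in for the code-reading pass and the quality assessment.

```python
def read_with_assessment(media_data, parameter_values, process, assess):
    """Process with the first parameter value, then reprocess with
    further values only while the assessment (step 170) says the
    result is not yet sufficient (decision at step 180)."""
    result = None
    for value in parameter_values:       # step 150: set a parameter value
        result = process(media_data, value)   # step 160: read the code
        if assess(result):               # step 170: quality sufficient?
            return result, value         # step 180: stop, keep this read
    return result, None                  # no setting produced a sufficient read
```

Returning `None` for the parameter mirrors the case where every scheduled setting has been tried without an acceptable read.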
A method of collecting data relating to usage and/or exposure of media data, comprising applying a first window size to the media data to produce first processed data, processing the first processed data to produce first media usage and/or exposure data, applying a second window size to the media data to produce second processed data, the second window size being different from the first window size, and processing the second processed data to produce second media usage and/or exposure data.
A system for collecting data relating to usage and/or exposure of media data, comprising a processor configured to apply a first window size to the media data to produce first processed data, process the first processed data to produce first media usage and/or exposure data, apply a second window size to the media data to produce second processed data, the second window size being different from the first window size, and process the second processed data to produce second media usage and/or exposure data.
FIG. 3 is a flow diagram 200 illustrating a code reading routine of some embodiments in which a segment of time domain audio data is processed to read code therein (if present).
Under real-world conditions, ancillary codes contained in audio media data, for example, as a continuous stream of one or more encoded messages, may be difficult to detect in various environments. For example, when a relatively large segment of audio media containing such data is processed to read auxiliary codes of relatively short duration, such codes may be "missed" during decoding. This may occur if the auxiliary code forms a continuous stream of repeated messages each having the same message length, and the code is read by repeatedly accumulating the code components over the message length. The presence of relatively short coded segments may occur as a result of a consumer/user switching between different broadcast stations (e.g., television, radio) or other audio and/or video media devices, so that the audio media data comprising the coded message is received only for a relatively short duration (e.g., 5 seconds, 10 seconds, etc.). On the other hand, processing smaller segments of audio media data may result in failure to detect messages that are consistently encoded throughout a relatively large segment of audio media data, particularly if data information is lost or noise interferes with code reading. Certain embodiments of the flow chart 200 described herein, and particularly with reference to fig. 3, are for reading ancillary codes included within varying audio media data lengths or durations.
As shown in fig. 3, at the beginning, a segment size parameter (also referred to herein as a "window size") is set 210 to be relatively small, such as 10 seconds. The audio media data is subjected to one or more processes 220 to extract substantially single-frequency values of various message symbol components that may be present in the audio data. When receiving time-domain audio media data in analog form, these processes are advantageously performed by transforming the analog audio media data into digital audio media data and transforming the latter into frequency-domain data having sufficient resolution in the frequency domain to allow separation of the substantially single-frequency components of the message symbols that may be present. Some embodiments use a Fast Fourier Transform (FFT) to convert the data to the frequency domain and then produce a signal-to-noise ratio for substantially single-frequency symbol components that may be present. In some of these embodiments, the FFT is performed on a portion of the time domain audio data having a predetermined length or duration, such as a portion representing a fraction of a second (e.g., 0.1 second, 0.15 second, 0.25 second) of the audio data. Each successive FFT is performed on a different portion of the audio data that overlaps with the last processed portion, such as an 80%, 60% or 40% overlap. This implementation is disclosed in U.S. Pat. No.5,764,763 to Jensen et al, which is incorporated herein by reference in its entirety. Other suitable techniques for converting audio media data to the frequency domain may also be used, such as using a different transform or using analog or digital filtering.
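The overlapped portioning described above (e.g., fractions of a second of audio processed with 80%, 60%, or 40% overlap) reduces to a simple sliding-window generator; each yielded portion would then be transformed (e.g., by FFT). This is an illustrative sketch with invented names.

```python
def overlapping_portions(samples, portion_len, overlap):
    """Yield successive portions of the audio data, each overlapping the
    previously processed portion by the given fraction (e.g. 0.8 for 80%)."""
    step = max(1, int(portion_len * (1 - overlap)))
    for start in range(0, len(samples) - portion_len + 1, step):
        yield samples[start:start + portion_len]
```

For example, a 0.25-second portion at 80% overlap advances by 0.05 seconds per transform, so each sample contributes to several successive FFTs.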
The frequency components of interest (i.e., those frequency components or bins that are expected to contain code components) are accumulated 230 during the entire 10-second window. Techniques for accumulating code components to facilitate reading the code are disclosed in U.S. Patent No.6,871,180 to Neuhauser et al. and U.S. Patent No.6,845,360 to Jensen et al. The ancillary codes (if any) are then read 240 from the accumulated frequency components. Techniques for reading accumulated codes are described in U.S. Pat. No.6,871,180 to Neuhauser et al. and U.S. Pat. No.6,845,360 to Jensen et al., referenced above, and in U.S. Pat. No.6,862,355 to Kolessar et al.
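The accumulate-then-read step can be illustrated with a minimal sketch. The symbol-to-bin mapping and the spectra below are hypothetical placeholders, and the decision rule (pick the symbol whose expected bins accumulated the most energy) is a simplification of the referenced techniques, not the patented method itself.

```python
def accumulate_and_read(spectra, symbol_bins):
    """Accumulate candidate-bin magnitudes over all FFT portions in the
    window, then pick the symbol whose expected bins gathered the most
    energy. `symbol_bins` maps each candidate symbol to its bin indices."""
    totals = {sym: 0.0 for sym in symbol_bins}
    for spectrum in spectra:                    # one spectrum per portion
        for sym, bins in symbol_bins.items():
            totals[sym] += sum(spectrum[b] for b in bins)
    best = max(totals, key=totals.get)
    return best, totals
```

Accumulating over many portions lets weak but consistent code components rise above uncorrelated noise.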
The ancillary code or codes, if any, that have been read from the audio media data are stored and the accumulator is reset. In some embodiments, the next segment of the audio media data, i.e., the 10 second window, is processed in the same manner as described previously for the previous segment. In some embodiments, a branching condition 250 is applied to determine whether there are more segments of media data to process depending on whether one or more conditions are satisfied. In some such embodiments, the condition is whether a predetermined number of audio portions have been processed to read any code therein. In some such embodiments, the condition is whether the end of the window has been reached.
Once the above condition occurs, the processor uses the different parameter values to determine 260 whether to process the stored audio media data again. In some embodiments, if the code cannot be read using a 10 second window size, the data is reprocessed using a different window size (e.g., 20 seconds). Advantageously, codes that can be detected with a 20 second window size but cannot be detected (or are difficult to detect) with a 10 second window size are detected in this second pass. In a similar manner, if no code is detected after all of the stored media data has been processed at the 20 second window size, then in some embodiments the window size is set to a longer duration (e.g., 30 seconds) and the stored audio media data is processed as previously described but at an increased window size.
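The multi-pass re-processing can be outlined as below. The routine `try_read`, which attempts to read a code from one segment, is a hypothetical stand-in for the decoding process; the window sizes are the example values from the text.

```python
def multipass_read(total_sec, try_read, window_sizes=(10, 20, 30)):
    """Re-process the stored audio at increasing window sizes until a
    code is read. `try_read(start_sec, size_sec)` returns a code string
    or None for each non-overlapping segment."""
    for size in window_sizes:
        for start in range(0, total_sec - size + 1, size):
            code = try_read(start, size)
            if code is not None:
                return code, size              # code read at this window size
    return None, None                          # no code read in any pass
```

A code that needs more accumulation than a 10 second window provides would be missed on the first pass but picked up on a later, longer-window pass.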
In some embodiments, the decision 260 depends on the extent (if at all) to which the ancillary code was read using the current window size. For example, in some cases, due to noise or information loss, it is not possible to accumulate enough data to reliably discriminate the symbols of successive repeated messages, or one or more symbols of a message may be read apparently incorrectly. In these cases it is useful to accumulate the data over longer time intervals to better discern the symbols of messages that occur repeatedly in the audio. As another example, in some cases the only ancillary codes present in the audio data are messages of sufficiently short duration to be read effectively using a small window. In such cases, in some embodiments, it is decided 260 not to process the audio data with a larger window.
Fig. 4 schematically illustrates the above-described processing of stored audio media data in some embodiments, in which non-overlapping windows of audio data having the same window size are processed. The first 10 seconds of media data, identified for convenience as data (0, 10), are processed to read the ancillary code therein. The next subsequent 10 seconds of media data identified as data (10, 20) are then processed in the same manner to read any such codes. This process is repeated until all stored audio media data has been processed in this ten second window.
If one or more conditions for further processing are met at 250, the window size is increased to 20 seconds as previously described. The data (0, 20) shown in fig. 4 is then processed to read any ancillary codes. Thereafter, the data (20, 40) is processed, and so on. Fig. 4 also shows processing of the data at a set window size of 30 seconds. For convenience, processing the stored audio media data at a 10 second window size is referred to herein as "pass 1" or an initial pass, processing the stored audio media data at a 20 second window size is referred to herein as "pass 2" or a second pass, and so on. In some embodiments, the processing of the stored audio media data is limited to a preset maximum number of processing passes, such as 24 passes, in which case the window size during the last pass may be set to 240 seconds. Other maximum numbers of processing passes may be set, such as 2, 3, or 10.
In some embodiments, individual segments of stored audio media data are processed at a set window size regardless of whether a code is detected. Also, in some embodiments, the entire stored audio media data is processed using windows of various sizes as described above, regardless of whether ancillary codes have been detected within the audio media data.
Fig. 5 is a schematic diagram of multiple processing (i.e., multiple passes) of 140 seconds of stored audio media data. During the first pass (pass 1), each 10 second segment of the stored audio media data is processed, during the second pass (pass 2), each 20 second segment of the stored audio media data is processed, and so on. The multiple processing may be limited to, for example, three passes before analyzing the results of all processing to assess accurate detection of the code contained within the audio media data.
With further reference to fig. 5, if the codes are contained within a time period spanning 60 to 90 seconds (e.g., relative to the start of the stored audio media data) in the stored audio media data, then these codes will be detected with a high degree of certainty and accuracy during pass 3. Depending on the length of the code, the number of times the same code is repeated within the time frame, noise and other factors, the code may also be detected during pass 2, and even during pass 1.
A method of collecting data relating to the use and/or exposure of media data, comprising processing a first segment of the media data to produce first processed data, reading an ancillary code, if present, based on the first processed data, processing a second segment of the media data to produce second processed data, the second segment of the media data being different from the first segment and comprising at least a portion of the media data contained in the first segment, and reading the ancillary code, if present, based on the second processed data and without using the first processed data.
A system for collecting data relating to usage and/or exposure of media data, comprising a processor configured to process a first segment of media data to produce first processed data, read an ancillary code based on the first processed data if the ancillary code is present, process a second segment of the media data to produce second processed data, the second segment of the media data being different from the first segment and comprising at least a portion of the media data contained in the first segment, and read the ancillary code based on the second processed data and without using the first processed data if the ancillary code is present.
In some embodiments, the window size remains the same but the starting point of the processing of the audio media data is changed during subsequent processing of the audio media data. FIG. 6 is a schematic diagram showing passes that each have multiple "sub-passes". Note that the terms "pass" and "sub-pass" are used herein merely as a convenient means of distinguishing one process from another. As shown in fig. 6, the window size is set to 10 seconds for both pass 1A and pass 1B, but the start position in pass 1B is shifted or offset by 5 seconds relative to the start position in pass 1A of the stored audio media data. Passes 2A, 2B, 2C, and 2D use a 20 second window, with each pass having a start time that is offset by 5 seconds relative to the start time of the previous pass. The offset may be different than 5 seconds, and the number of sub-passes may be the same or different for each window size. In a simplified example, if one or more messages encoded in the audio media data are contained only within a period spanning 50 to 70 seconds of the stored audio media data, then these codes are detected with a relatively high degree of certainty during pass 2C shown in fig. 6, although they may also be read during other passes, but with a lower degree of certainty.
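The sub-pass schedule of fig. 6 can be sketched as follows. The helper name is invented for illustration, and the rule that the number of sub-passes equals the window size divided by the offset merely reproduces the figure's example (two sub-passes at 10 s, four at 20 s).

```python
def subpass_schedule(total_sec, window_sec, offset_sec=5):
    """Return the (start, end) segments for each sub-pass. Sub-pass k
    shifts every segment start by k * offset_sec, as in passes 2A-2D."""
    n_sub = max(1, window_sec // offset_sec)
    passes = []
    for k in range(n_sub):
        shift = k * offset_sec
        segments = [(s, s + window_sec)
                    for s in range(shift, total_sec - window_sec + 1, window_sec)]
        passes.append(segments)
    return passes
```

Shifting the start point aligns some segment with a short message wherever that message happens to fall in the stored data.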
In some embodiments, when media data is processed using a given window size, successive overlapping segments are processed sequentially. For example, if the window size is set to 10 seconds in such embodiments, the first segment is selected to be data from 0 seconds to 10 seconds, the next segment is selected to be data from (0+ x) seconds to (10+ x) seconds, the next segment is selected to be data from (0+2x) seconds to (10+2x) seconds, and so on, where 0 < x < 10 seconds.
In certain embodiments discussed herein, various window sizes are indicated, including 10 seconds, 20 seconds, and 30 seconds. In some embodiments, other window sizes are used, and these may be smaller or larger. Also, in some embodiments, the increment between window sizes used in subsequent passes (i.e., in re-processing the audio media data) may be constant or may vary.
In some embodiments, the start time offset for each segment to be processed may be less than or greater than the start time offset described above. If it is desired to detect the start or end position of the code within the audio media data to a relatively large extent or for other reasons, the start time offset may be relatively small in some embodiments, such as 1 or 2 seconds.
A method for collecting data relating to the use and/or exposure of media data, comprising processing the media data using a first frequency scale to produce first media use and/or exposure data, and processing the media data using a second frequency scale, the second frequency scale being different from the first frequency scale, to produce second media use and/or exposure data.
A system for collecting data relating to the usage and/or exposure of media data, comprising a processor configured to process the media data using a first frequency scale to produce first media usage and/or exposure data, and to process the media data using a second frequency scale, different from the first frequency scale, to produce second media usage and/or exposure data.
Fig. 7 is a functional flow diagram 400 used to describe various embodiments of detecting a frequency offset code included in audio media data. In certain embodiments, the process of FIG. 7 is used to read a continuous stream of encoded messages. As previously described, in some embodiments, frequency components or bins expected to contain code components are accumulated to obtain audio media data samples for processing.
Audio playback equipment typically has a sufficiently accurate clock so that there is negligible frequency offset between the recorded audio and the audio reproduced by the playback equipment. However, if the playback device has an inaccurate clock, a frequency offset may be produced. In that event, if only pre-specified frequencies or frequency bins (i.e., those expected to contain code components) are examined, the frequency components containing code components within the reproduced audio may be shifted enough to go undetected. The same problem occurs when a PUA with an inaccurate clock is used to monitor exposure to media data. Various embodiments therefore provide a process for detecting frequency-shifted code components.
During the initial pass of some embodiments, a default frequency scale (described further below) is used 410 that assumes that the rendering device or PUA (as the case may be) has an accurate clock. The portions of the samples of audio media data stored in the storage device 14 are then transformed 420 (e.g., using an FFT) to the frequency domain and the frequency domain data is processed in accordance with any suitable symbol sequence reading process, such as any of the processes mentioned herein or described in the references mentioned above. The frequency components or bins expected to contain code components are accumulated 430 to obtain samples (e.g., 10 second windows) of audio media data to be processed.
The accumulated frequency components are processed 440 to read one or more codes (if any) encoded within the processed audio media data samples. In some embodiments, if the code is read 440, it is assumed that there is no frequency offset or only a negligible frequency offset (as previously described), and the process ends 450. In some embodiments, even when the code is read, data is generated that indicates a measure of certainty that the code was read correctly. An example of a process for evaluating such a certainty measure is disclosed in the aforementioned U.S. Patent No.6,862,355 to Kolessar et al. This measure of certainty is used 450 to determine whether different frequency scales are to be used for processing the media data.
If the code is not detected, or the measure of certainty indicates that the code may have been read incorrectly or incompletely (e.g., if a sufficient number or percentage of symbols has not been read), the same sample of audio media data is processed again. In some embodiments, several passes, each using a different frequency scale, are performed before a determination is made whether to stop reading ancillary codes from the media data.
During a second pass, code components are extracted from the FFT results using a different frequency scale 420. For example, a frequency scale assuming a -0.1% frequency offset is selected 410, so that code components offset by -0.1% are accumulated at step 430. The accumulated frequency-shifted code components are read 440. If it is subsequently determined 450 to continue processing, the sample of audio media data is processed using another frequency scale. In the third pass, for example, a frequency scale assuming a frequency offset of +0.1% is selected. If it is still determined to continue processing, then a frequency scale assuming a somewhat larger frequency offset (e.g., -0.2%) is used in the fourth pass. Likewise, if more passes are to be performed, frequency scales assuming progressively larger frequency offsets (e.g., +0.2%, -0.3%, +0.3%, etc.) are used. In some embodiments, other frequency offsets are assumed.
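The alternating schedule of assumed offsets (0, then -0.1%, +0.1%, -0.2%, +0.2%, and so on) can be generated as in this sketch; the step and maximum values are the example percentages from the text.

```python
def offset_schedule(step=0.1, max_offset=0.3):
    """Frequency offsets (in percent) assumed on successive passes:
    zero first, then alternating negative/positive with growing magnitude."""
    offsets = [0.0]
    k = 1
    while round(k * step, 10) <= max_offset:   # round() guards float drift
        magnitude = round(k * step, 10)
        offsets += [-magnitude, +magnitude]
        k += 1
    return offsets
```

The same generator with a 0.05% step reproduces the schedule of Table 1 below.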
Fig. 8 shows a table identifying ten (10) exemplary frequency bands and their corresponding frequency components, where the frequency components are expected to be included in audio media data containing codes. If the stored audio media data was reproduced with a frequency offset of, for example, 0.2%, the frequency bands containing the code components and their corresponding frequency components are as shown in the table depicted in fig. 9. If each frequency bin corresponds to, for example, 4 Hz, an offset of 0.2% is sufficient to cause code components in the higher frequency bins to go undetected during the first few passes described in connection with the flow chart of FIG. 7, although they will be detected in one of the later passes described herein.
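The bin-shift arithmetic in this example can be checked with a small helper; the 4 Hz bin width and the component frequencies are the illustrative values from the text.

```python
def shifted_bin(freq_hz, offset_pct, bin_hz=4.0):
    """FFT bin index at which a code component lands after the audio is
    reproduced with the given percentage frequency offset."""
    return round(freq_hz * (1 + offset_pct / 100.0) / bin_hz)

# a 0.2% offset moves a 2000 Hz component by 4 Hz, i.e. one full bin,
# so a reader that examines only the nominal bin misses it
```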
In another embodiment, the selected frequency scale (410 of FIG. 7) is based on a percentage frequency offset that is less than the percentage frequency offset described above. Specifically, increments of 0.05% may be used. Thus, table 1 below identifies the frequency offset during each pass of the processing of the audio media data segment.
TABLE 1
| Pass | Frequency offset |
| 1 | 0.00 |
| 2 | -0.05% |
| 3 | +0.05% |
| 4 | -0.1% |
| 5 | +0.1% |
| 6 | -0.15% |
| 7 | +0.15% |
| 8 | -0.20% |
| 9 | +0.20% |
| 10 | -0.25% |
| ... | ... |
In another embodiment, the frequency offset is varied in larger percentage increments than those referred to above. For example, increments of 0.5%, 1.0%, or other larger increments may be used.
In yet another embodiment, the frequency offset is increased in the same direction (e.g., positive or negative) for each pass until a set maximum offset (e.g., 1.0%) is reached, at which point the frequency offset is varied in the opposite direction, as shown below in Table 2. In still another embodiment, different increments may be used.
TABLE 2
| Pass | Frequency offset |
| 1 | 0.00 |
| 2 | +0.05% |
| 3 | +0.10% |
| 4 | +0.15% |
| 5 | +0.20% |
| 6 | +0.25% |
| ... | ... |
| 21 | +1.00% |
| 22 | -0.05% |
| 23 | -0.10% |
| 24 | -0.15% |
| 25 | -0.20% |
| 26 | -0.25% |
| ... | ... |
| 41 | -1.00% |
In the various embodiments described herein, the codes encoded within audio media data and detected as described herein may also be referred to as symbols or portions of a code. In general, a message included within audio media data typically comprises a plurality of message symbols, and the audio media data may include a plurality of messages. A sequence of symbols is examined from a stream of messages to detect the presence of a message of a predetermined format. The symbol sequences may be selected for examination in any of a number of different ways, such as disclosed in U.S. Patent No.6,862,355 to Kolessar et al. and in U.S. Patent No.6,845,360 to Jensen et al. For example, a set of sequential symbols may be examined based on the length and duration of the data. As another example, a previously detected symbol sequence may be used to detect a subsequent sequence. As another example, synchronization symbols may be used.
Since the messages have a predetermined format, in some embodiments, upon detecting each message within the audio media data stored in memory 14, processor 16 relies on both the detection of certain symbols and the message format to determine whether a message has been detected. U.S. patent No.6,862,355 to Kolessar et al, mentioned above, sets forth various techniques for reconstructing a message when it can only be partially detected.
In certain embodiments, the audio media data is stored in memory 14 shown in FIG. 1 and processed to detect messages having a predetermined symbol format such as that shown in FIG. 10. In the exemplary format shown in fig. 10, the message is composed of 12 symbols, where the symbols M1 and M2 represent marker symbols, the symbols S1, S2, S3, S4, S5, and S6 represent respective code symbols, and the symbols T1, T2, T3, and T4 represent time symbols. If not all symbols of a message are detected during processing, then previously and/or subsequently detected messages are analyzed to identify the undetected symbol values (if possible); such undetected values are also referred to herein as "missing symbols" for convenience. In some embodiments, the accumulator is cleared or reset after a period of time during processing of the audio media data.
Fig. 11 is an exemplary pattern of symbols representing the same message "A" encoded within audio media data and repeated three times. The accumulator is cleared before each message is decoded, i.e., before each occurrence of message "A". For various reasons, including information loss and noise, not all symbols may be detected during the initial processing. FIG. 12 shows an exemplary pattern of decoded symbols, where a circled symbol is incorrectly decoded and thus represents a "missing symbol". According to some embodiments, since the messages are known to repeat in a known format, audio media data containing missing symbols is compared with previously and/or subsequently decoded messages. As a result of this comparison and processing, the circled symbol S8 is determined to be, in fact, the marker symbol "M1". Likewise, the circled symbol S5 is determined to be, in fact, the data symbol "S4".
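The comparison of repeated copies of the same message can be illustrated by a simple position-wise vote. This is a toy sketch, not the patented procedure: symbols are single characters, and '?' marks a missing symbol.

```python
from collections import Counter

def recover_missing(messages, missing="?"):
    """Fill unread symbols in repeated copies of one message by taking,
    at each position, the most common symbol read in the other copies."""
    out = []
    for position in range(len(messages[0])):
        votes = Counter(m[position] for m in messages if m[position] != missing)
        out.append(votes.most_common(1)[0][0] if votes else missing)
    return "".join(out)
```

Because the message repeats with the same format, a symbol missed in one copy is usually read in another, so the vote reconstructs the full message.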
In accordance with certain embodiments, messages identified as containing missing symbols are processed in any of the various ways described herein to decode the correct symbols (if possible). For example, stored audio media data processed to contain such missing symbols is reprocessed in accordance with one or more of the processes described herein with reference to fig. 5 and/or 6.
As previously mentioned, fig. 1 discloses a system 10 that includes at least a memory 14 and a processor 16. In certain embodiments, the system 10 includes a portable monitoring device that can be carried by a panelist for monitoring media from various sources as the panelist moves. In certain embodiments, processor 16 performs processing on audio media data stored in memory 14. Such processing includes the processing described in the embodiments described herein.
A method of collecting data relating to the use and/or exposure of media data using a portable monitor carried by a panelist's person includes storing audio media data in the portable monitor and disabling the portable monitor's ability to perform at least one process required to generate the use and/or exposure data from the audio media data while the portable monitor is powered by a power source on-board the portable monitor, and performing at least one process for generating the use and/or exposure data using the portable monitor while the portable monitor is powered by a power source external to the portable monitor.
A portable monitor for generating data relating to the use and/or exposure of a panelist to media data when the portable monitor is carried by the panelist, the portable monitor comprising an on-board power supply, a memory for storing audio media data when the portable monitor is powered by the on-board power supply, and a processor configured to perform at least one process required to generate the use and/or exposure data from the audio media data when the portable monitor is powered by an external power supply, and to refrain from performing the at least one process when the portable monitor is not receiving power from the external power supply.
FIG. 13 is a functional block diagram illustrating the system 30 in certain embodiments in which different types of processing are performed based on the type and/or source of power to power the components of the system 30. As shown, system 30 is similar to system 10 shown in FIG. 1 and includes an audio media data input 32, a storage device 34, a processor 36, and a data transfer device 40. The functions and variations of these devices within system 30 may be the same or similar to those of devices within system 10, and thus a description of these functions and variations is not repeated here.
The system 30 also includes an internal power source 42, typically in the form of a rechargeable battery or other on-board power source suitable for use within the portable device. Examples of other suitable on-board power sources include, but are not limited to, non-rechargeable batteries, capacitors, and on-board generators (e.g., solar photovoltaic panels, mechanical-to-electrical converters, etc.).
An on-board power supply 42 powers each device within the system 30. The system 30 also includes a device 44 (referred to in fig. 13 as an "external power port") for enabling each device within the system 30 to be powered by an external power source. In certain embodiments, device 44 and data transfer device 40 are used to obtain external power and transfer data, respectively, when system 30 is physically coupled to base station 50 or other suitable equipment.
According to some embodiments, the panelist carries with him a system 30 in the form of a portable monitoring device (also referred to herein as a "portable monitor 30"). When audio media data to which the panelist is exposed is received at the input 32 of the portable monitor 30, the portable monitor 30 records the audio media data in the memory 34. The processor 36 processes the audio media data received by the input 32 in a manner that requires relatively little of the power provided by the internal power supply 42 (sometimes referred to herein for convenience as operation in a "low power mode" or an "on-board power mode"). Such processing may include noise filtering, compression, and other known processes that collectively require substantially less power than processor 36 would need to process the audio media data stored in memory 34 to read ancillary codes therefrom, such as transforming the audio media data to the frequency domain. Thus, the data stored in memory 34 includes audio media data received by input 32 and/or partially processed audio media data.
According to yet another embodiment of the invention, data corresponding to the received signal is stored in a memory device. According to one embodiment of the invention, the received signal is stored in a raw data format. In another embodiment of the invention, the received data signal is stored in a processed data format, such as a compressed data format. In various embodiments of the invention, the stored data is then transferred to an external processing system to extract information such as ancillary code.
According to one embodiment of the invention, a time interval is allowed to elapse after the data is stored in the memory device, and the data is then transferred for processing. In yet another embodiment of the invention, the data is not transmitted to an external processing system but is instead processed locally when supplemental power is available, after such a time interval has elapsed. In one embodiment of the invention, the processing that occurs after the time interval has elapsed is relatively slow compared to real-time processing.
From time to time or periodically, the panelist couples the portable monitor 30 to the base station 50, and the base station 50 then serves as an external power source for the portable monitor 30. The base station may be, for example, of the type disclosed in U.S. patent No.5,483,276 to Brooks et al, which is incorporated herein by reference in its entirety. In certain embodiments, the panelist couples a suitable external power cable to the external power port 44 for providing external power to the portable monitor 30.
Upon application of an external power source to portable monitor 30, this condition may be detected by processor 36, which then or thereafter switches to a high power mode or an external power mode. In such a high power mode or external power mode, processor 36 performs processes other than those performed while operating in the low power mode or on-board power mode. In certain embodiments, these processes include the processes required to read ancillary codes from the stored media data, or to complete the processing of partially processed data in order to read such ancillary codes.
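The power-mode behavior can be sketched as a small state machine. All names here are invented for illustration, and `read_code` stands in for the full (power-hungry) decoding process.

```python
class PortableMonitorSketch:
    """Defers heavy code reading until external power is sensed."""

    def __init__(self):
        self.mode = "on_board"
        self.stored = []        # lightly processed audio segments
        self.codes = []         # ancillary codes read so far

    def on_power_change(self, external_power):
        self.mode = "external" if external_power else "on_board"

    def handle_audio(self, segment, read_code):
        if self.mode == "on_board":
            self.stored.append(segment)     # store only; defer decoding
        else:
            self.codes.append(read_code(segment))

    def drain(self, read_code):
        """Once on external power, decode everything stored earlier."""
        while self.stored:
            self.codes.append(read_code(self.stored.pop(0)))
```

While on battery, segments are merely stored; once docked, `drain` performs the deferred decoding and subsequent audio is decoded immediately.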
In certain embodiments, the processor 36, operating in the high power mode or the external power mode, processes the audio media data stored in the memory 34 and/or the partially processed data stored therein in a plurality of code reading processes, each process using one or more parameters different from those used in other ones of the plurality of code reading processes. Various embodiments of such code reading processes are disclosed above.
In certain embodiments, the processor 36 operating in the high power mode or the external power mode also processes the ancillary codes read while operating in the low power mode or the on-board power mode, to confirm that the previously read ancillary codes were read correctly or to apply processing to read or infer previously unread ancillary code portions. In some of these embodiments, where the processor 36 did not read, or did not correctly read, all of the symbols of the ancillary code in the low power mode or the on-board power mode, the processor 36 operating in the high power mode or the external power mode identifies the message symbols that were not read, or were not correctly read, based on the corresponding message symbols in preceding or subsequent messages read from the media data. Such processing in the high power mode or the external power mode is performed in some embodiments in the manner described above in connection with figs. 10, 11, and 12.
Fig. 14 is a functional block diagram illustrating a system 60 in which audio media data is stored in a first portable monitor carried by a panelist's person and the stored audio media data is processed by a second device in the panelist's home to detect a code contained in the audio media data, according to some embodiments. As shown in FIG. 14, system 60 includes a portable monitor 70 that includes an input 72, a storage 74, a processor 76, a data transfer device 78, and an internal power source 79. These components within portable monitor 70 each operate in a similar manner to those in portable monitor 30 described above. During operation, a panelist carries a portable monitor 70 with him, and the portable monitor 70 stores audio media data that has been exposed to the panelist in the memory 74. The processor 76 may perform minimal processing of the received audio media data, such as some, but not all, of the processing required to filter, compress, or read any ancillary code in such data.
From time to time or periodically, portable monitor 70 is coupled, wirelessly or via a wired connection, to system 80, which includes data transfer device 82, memory 84, and processor 86. In some embodiments, system 80 is a base station, hub, or other device located in a panelist's home.
The audio media data stored in memory 74 of portable monitor 70 is transferred to system 80 via their respective data transfer devices 78 and 82, and the transferred audio media data is stored in memory 84 for further processing by processor 86. Processor 86 then performs the various processes disclosed herein for detecting codes contained within the audio media data. In certain embodiments, processor 86 performs a single code reading process on the audio media data. In certain embodiments, processor 86 performs a code reading process multiple times, each time changing one or more parameters as described above.
In certain embodiments, the processor 86 also processes the auxiliary codes read by the processor 76 to confirm that the auxiliary codes were read correctly or to apply a process to read or infer portions of the auxiliary codes that were not read by the processor 76. In some of these embodiments, where the processor 76 does not read or correctly read all of the symbols of the auxiliary code, the processor 86 determines the message symbols that were not read or that were not correctly read based on the corresponding message symbols read from the previous or subsequent messages read from the media data. This processing by processor 86 is performed in some embodiments in the manner described above in connection with fig. 10, 11, and 12.
Certain embodiments described above relate to systems that collect audio media data in a portable monitor when the portable monitor is operating in a low power mode (i.e., when the power source is an on-board power source) and process one or another form of data collected in the portable monitor when the portable monitor is operating in a high power mode (i.e., when the power source is an externally-provided power source).
A method of operating a portable research data collection device, comprising sensing at a first time that power for operating the portable research data collection device is provided by a power source onboard the portable research data collection device, operating the portable research data collection device in a low power consumption mode after the first time, sensing at a second time different from the first time that power for operating the portable research data collection device is provided by an external power source, and operating the portable research data collection device in a high power consumption mode after the second time.
A portable research data collection apparatus comprising a detector adapted to sense at a first time that power for operating the portable research data collection apparatus is provided by a power source on-board the portable research data collection apparatus and adapted to sense at a second time different from the first time that power for operating the portable research data collection apparatus is provided by an external power source; and a processor adapted to operate in a low power consumption mode after the first time and adapted to operate in a high power consumption mode after the second time.
In certain embodiments, data is collected and stored in a low power mode and the stored data is processed in a high power mode. In some embodiments, the processing of the data entails reading a code within the stored data.
In various embodiments described herein, different processes are performed depending on the power source that is supplying power to process the stored audio media data. Certain embodiments thereby advantageously enable extensive media data processing that would otherwise be constrained by existing power limitations (e.g., the limitations of current portable power supplies), time limitations, or other factors.
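The power-source-dependent behavior summarized above can be sketched as a simple state machine: collect and store media data while on battery, then process the backlog (e.g., read codes) once external power is sensed. The class and method names below are illustrative assumptions; the specification does not prescribe any particular software structure.

```python
# Minimal sketch of the low-power (collect/store) vs. high-power
# (process stored data) operating modes described above.

EXTERNAL, ONBOARD = "external", "onboard"


class PortableMonitor:
    def __init__(self):
        self.mode = None
        self.stored_data = []
        self.decoded = []

    def on_power_source(self, source):
        # Detector senses which power source is supplying the device.
        self.mode = "high" if source == EXTERNAL else "low"

    def handle_audio(self, samples):
        # Media data is always collected and stored.
        self.stored_data.append(samples)
        if self.mode == "high":
            # On external power: process the stored backlog,
            # e.g. read the ancillary codes.
            while self.stored_data:
                self.decoded.append(self._read_code(self.stored_data.pop(0)))

    def _read_code(self, samples):
        # Placeholder for the ancillary-code reading step.
        return f"code({samples})"


monitor = PortableMonitor()
monitor.on_power_source(ONBOARD)
monitor.handle_audio("clip1")    # stored only (low power mode)
monitor.on_power_source(EXTERNAL)
monitor.handle_audio("clip2")    # backlog processed (high power mode)
print(monitor.decoded)           # ['code(clip1)', 'code(clip2)']
```

Deferring the decode step to externally powered operation is what lets the on-board battery be spent almost entirely on acquisition and storage.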
Although various embodiments of the present invention have been described with reference to a particular arrangement of parts, features and the like, these embodiments are not intended to exhaust all possible arrangements or features, and indeed many other embodiments, modifications and variations will be apparent to those of skill in the art.
Claims (9)
1. A method of extracting an ancillary code from a media signal, comprising:
monitoring the media signal for a first duration during a first time interval to detect a first signal portion;
evaluating whether an ancillary code can be recovered in the first signal portion;
monitoring the media signal for a second duration during a second time interval to detect a second signal portion if the ancillary code cannot be recovered in said first signal portion; and
evaluating whether the ancillary code can be recovered from said second signal portion.
2. A method of extracting an ancillary code from a media signal as defined in claim 1 further comprising:
monitoring the media signal during the second time interval depending on a result of the evaluating whether the ancillary code can be recovered in the first signal portion.
3. A method of extracting an ancillary code from a media signal as defined in claim 1 wherein evaluating whether the ancillary code can be recovered in the first signal portion is performed after monitoring the media signal for a second duration during the second time interval.
4. A method of extracting an ancillary code from a media signal as defined in claim 1 further comprising:
storing a record of the first signal portion;
storing a record of the second signal portion; and
subsequently retrieving the record of the first signal portion and the record of the second signal portion, wherein said evaluating whether the ancillary code can be recovered in the first signal portion is performed after said retrieving of the first and second records.
5. A method of extracting an ancillary code from a media signal as defined in claim 1 wherein said evaluating whether an ancillary code can be recovered in said first signal portion comprises applying a transform to data relating to said first signal portion, wherein said transform comprises one of a fast Fourier transform, a wavelet transform, analog filtering, and digital filtering.
6. A method of extracting an ancillary code from a media signal as defined in claim 1 wherein said monitoring the media signal during the first time interval comprises monitoring the media signal in one of (a) a noisy environment, and (b) an environment in which the signal is interrupted by a change in a media channel.
7. A method of extracting an ancillary code from a media signal as defined in claim 1 wherein said ancillary code comprises a code adapted to identify a source or payload component of said media signal.
8. A method of extracting an ancillary code from a media signal as defined in claim 1 wherein said first signal portion is received from a first signal source and said second signal portion is received from a second signal source.
9. An apparatus for extracting an ancillary code from a media signal, comprising:
means for monitoring the media signal for a first duration during a first time interval to detect a first signal portion;
means for evaluating whether an ancillary code can be recovered in the first signal portion;
means for monitoring the media signal for a second duration during a second time interval to detect a second signal portion if the ancillary code cannot be recovered in the first signal portion; and
means for evaluating whether the ancillary code can be recovered from the second signal portion.
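The two-interval scheme of claim 1 can be illustrated as a retry loop: monitor during a first interval, attempt code recovery, and monitor again during a second interval only if recovery failed. The helper functions below are hypothetical stand-ins for the signal-acquisition and decoding steps, not part of the claimed apparatus.

```python
# Illustrative sketch of the interval-based monitoring of claim 1.

def extract_ancillary_code(acquire, recover, intervals):
    """Try each monitoring interval in turn until a code is recovered.

    acquire(interval)  -> signal portion detected during that interval
    recover(portion)   -> decoded code, or None if none can be recovered
    """
    for interval in intervals:
        portion = acquire(interval)
        code = recover(portion)
        if code is not None:
            return code
    return None  # no code recovered in any monitored interval


# Example: the first portion is too noisy; the second yields a code.
portions = {1: "noise", 2: "signal-with-code"}
acquire = portions.get
recover = lambda p: "CODE-42" if "code" in p else None
print(extract_ancillary_code(acquire, recover, [1, 2]))  # CODE-42
```

Skipping the second monitoring interval whenever the first portion already yields a code (as in claim 2) saves both acquisition time and power.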
Applications Claiming Priority (5)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US88661507P | 2007-01-25 | 2007-01-25 | |
| US89734907P | 2007-01-25 | 2007-01-25 | |
| US60/886,615 | 2007-01-25 | ||
| US60/897,349 | 2007-01-25 | ||
| PCT/US2008/001017 WO2008091697A1 (en) | 2007-01-25 | 2008-01-25 | Research data gathering |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK1140573A1 (en) | 2010-10-15 |
| HK1140573B (en) | 2013-10-04 |