US9621949B2 - Method and apparatus for reducing latency in multi-media system - Google Patents
- Publication number
- US9621949B2 (application US14/677,960)
- Authority
- US
- United States
- Prior art keywords
- media data
- predetermined threshold
- processing pipeline
- threshold value
- buffered
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
- H—ELECTRICITY
  - H04—ELECTRIC COMMUNICATION TECHNIQUE
    - H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
      - H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
        - H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
          - H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
            - H04N19/132—Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
          - H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
            - H04N19/146—Data rate or code amount at the encoder output
              - H04N19/152—Data rate or code amount at the encoder output by measuring the fullness of the transmission buffer
          - H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
            - H04N19/17—the unit being an image region, e.g. an object
              - H04N19/172—the region being a picture, frame or field
        - H04N19/42—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
          - H04N19/436—using parallelised computational arrangements
      - H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
        - H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
          - H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
            - H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
              - H04N21/44004—involving video buffer management, e.g. video decoder buffer or video display buffer
            - H04N21/4302—Content synchronisation processes, e.g. decoder synchronisation
              - H04N21/4307—Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
                - H04N21/43072—of multiple content streams on the same device
            - H04N21/442—Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
              - H04N21/4424—Monitoring of the internal components or processes of the client device, e.g. CPU or memory load, processing speed, timer, counter or percentage of the hard disk space used
Definitions
- the present invention relates generally to processing of multimedia data (such as audio and/or video) and, more particularly, to reducing end to end latency of data transmitted from a source device to a sink device.
- a typical wireless display (WD) system includes a source device and one or more sink devices.
- the source device and each of the sink devices may comprise, for example, a mobile telephone, tablet computer, laptop computer, portable media player, or a so-called “smart” phone or tablet, each with the capability to replay audio data and/or display video information on a display to the user.
- the source device sends multimedia data to one or more sink devices participating in a particular communication session whereupon the media data is buffered at the sink end and then played back to a user.
- Latency, in general, is the period of delay between a signal entering and exiting a system.
- a high latency can detract from the user experience and a low latency is a particular requirement for voice and video over IP systems, video conferencing and wireless display systems.
- Sink-end buffering is a contributor to end-to-end latency.
- FIG. 1 is a schematic block diagram of a wireless display system comprising a sink apparatus in accordance with an embodiment of the present invention
- FIG. 2 is a simplified flow chart illustrating a method of operation of the sink apparatus of FIG. 1 ;
- FIG. 3 is a simplified flow chart illustrating a further method of operation of the sink apparatus of FIG. 1 .
- the present invention provides a method for adjusting latency in a sink apparatus, where the sink apparatus receives media data from a source apparatus for play back, and includes a processing pipeline.
- the method comprises the steps of: in the sink apparatus, monitoring an amount of media data buffered in at least a portion of the processing pipeline; comparing the monitored amount of data with a predetermined threshold value; and if the monitored amount of data exceeds the predetermined threshold value, discarding the media data from the processing pipeline until the monitored amount falls below the predetermined threshold value.
- the present invention provides a sink apparatus capable of receiving media data from a source apparatus for play back.
- the sink apparatus comprises a processing pipeline and a monitor.
- the monitor monitors an amount of media data buffered in at least a portion of the processing pipeline, compares the monitored amount of media data with a predetermined threshold value, and if the monitored amount of media data exceeds the predetermined threshold value, discards the media data from the processing pipeline until the monitored amount falls below the predetermined threshold value.
- an amount of buffered media data is determined by comparing timestamp values assigned to media data packets at different locations of the processing pipeline.
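The timestamp-based measurement above can be sketched as follows. This is a hypothetical illustration, not the patent's implementation: the function names, the millisecond units and the threshold value are assumptions.

```python
# Hypothetical sketch: the amount of media buffered between two points of
# the processing pipeline is estimated as the difference between the
# timestamps observed at those points. Millisecond units are an assumption.

def buffered_amount_ms(newest_entry_ts_ms, oldest_exit_ts_ms):
    """Timestamp difference = a playback time's worth of data held between
    the pipeline entry point and the play-out point."""
    return newest_entry_ts_ms - oldest_exit_ts_ms

def should_discard(buffered_ms, threshold_ms):
    """Media data is discarded while the buffered amount exceeds the
    predetermined threshold."""
    return buffered_ms > threshold_ms

# Example: a packet stamped 1200 ms is entering the pipeline while the
# packet being played out is stamped 950 ms, so 250 ms worth is buffered.
buffered = buffered_amount_ms(1200, 950)          # -> 250
drop = should_discard(buffered, threshold_ms=200)  # -> True
```

The point of comparing timestamps rather than counting bytes is that a timestamp difference measures buffered *playback time*, which maps directly onto the latency the user experiences.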
- a sink apparatus 100 that receives multimedia data over a wireless link 102 from a remote source 103 in accordance with an embodiment of the present invention is shown.
- the components of the sink apparatus 100 constitute a wireless display (WD) system.
- the sink apparatus 100 includes a user display, which comprises a speaker 104 and a visual display device 105 having a touch screen 106 .
- the speaker 104 reproduces audio data received by the sink apparatus 100 from the source 103 and the touch screen 106 displays video data received by the sink apparatus 100 from the source 103 .
- the sink apparatus 100 includes other components and it will be appreciated that the illustrated components of FIG. 1 constitute just one example configuration for a wireless display system. It will also be appreciated that in other embodiments, the speaker 104 can be replaced by any of a variety of audio output devices such as headphones or a single or multi-speaker system.
- the display device 105 can comprise one of a variety of video output devices such as a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, a light emitting diode (LED) display, an organic light emitting diode (OLED) display, or another type of display device including a user input means such as a push button or capacitive or resistive sensor in the case of a touch screen or any other type of input device, like a keyboard, either virtual or real, or a sensor driven input device.
- the sink apparatus 100 can be a single device, such as a tablet computer or smartphone.
- the sink apparatus 100 also includes a wireless modem 107 that receives an RF signal carrying (streamed) multimedia data packets (which may be arranged in data frames) from the source 103 and which is operably coupled to a socket 108 that in turn, is operably coupled to a packet receiver 109 .
- a first output of the packet receiver 109 is operably coupled to a monitor 110 .
- the monitor 110 also receives an input from the visual display device 105 on a line 111 .
- a second output of the packet receiver 109 is operably coupled to a stream buffer 112 .
- An output of the stream buffer 112 is operably coupled to an input of a de-multiplexer 113 .
- a first output of the de-multiplexer 113 is operably coupled to the monitor 110 .
- a second output of the de-multiplexer 113 is operably coupled to a first audio buffer 114 and a first video buffer 115 .
- An output of the first audio buffer 114 is operably coupled to an audio decoder 116 , and an output of the audio decoder 116 is operably coupled to an input of a second audio buffer 117 .
- An output of the second audio buffer 117 is operably coupled to an input of an audio renderer 118 .
- An output of the audio renderer 118 is operably coupled to an input of a third audio buffer 119 .
- An output of the third audio buffer 119 is operably coupled to the speaker 104 and to the monitor 110 .
- An output of the first video buffer 115 is operably coupled to a video decoder 120 .
- An output of the video decoder 120 is operably coupled to an input of a second video buffer 121 .
- An output of the second video buffer 121 is operably coupled to an input of a video renderer 122 .
- An output of the video renderer 122 is operably coupled to an input of a third video buffer 123 .
- An output of the third video buffer 123 is operably coupled to the visual display device 105 and the monitor 110 .
- the audio and video renderers 118 , 122 are operably coupled to the monitor 110 .
- the monitor 110 can also send instructions to the audio and video decoders 116 , 120 , the renderers 118 , 122 and first, second and third audio and video buffers 114 , 117 , 119 , and 115 , 121 , and 123 .
- the source 103 and the sink apparatus 100 establish a communication session according to any protocol (a description of the specific protocol is not necessary for a complete understanding of the invention) and then they communicate over the wireless link 102 using a conventional communications protocol.
- audio and video data that are transmitted from the source 103 to the sink apparatus 100 include multimedia content such as movies, television shows, or music and can also include real-time content generated by the source 103 .
- the audio and video data is encoded at the source 103 and typically transmitted in the form of data packets and video frames.
- the packet receiver 109 typically includes conventional mixers, filters, amplifiers and other components designed for signal demodulation.
- the wireless link 102 in one embodiment is a short-range communication channel, similar to Wi-Fi, Bluetooth®, or the like. In other examples, the wireless link 102 forms part of a packet-based network, such as a wired or wireless local area network, a wide-area network, or the Internet. Additionally, the wireless link 102 can be used by the source 103 and the sink apparatus 100 to create a peer-to-peer link.
- the de-multiplexer 113 conforms to an appropriate protocol and can be embodied in a microprocessor, digital signal processor (DSP) or application-specific integrated circuit (ASIC), for example.
- the audio and video decoders 116 , 120 are arranged to implement any of a number of audio and video decoding standards; the encoded streams may be carried in an MPEG transport stream (MPEG-TS), for example.
- the audio decoder 116 and video decoder 120 are integrated into one unit for decoding both audio and video data in a common data stream.
- FIG. 1 shows channels carrying audio data and video data separately, it is to be understood that in some instances video data and audio data can be part of a common data stream.
- a process for rendering the decoded data is carried out in a conventional manner by the audio renderer 118 and video renderer 122 .
- audio and video data are arranged in frames and the audio frames are synchronized with the video frames when rendered.
- the illustrated components of the sink apparatus 100 form a processing pipeline for receiving modulated and encoded streamed multimedia data packets, demodulating and decoding the received packets and rendering the data for display to a user on the visual display device 105 and speaker 104 .
- the processing pipeline also includes various buffers, as mentioned above.
- a data frame update can be queued or accumulated in any of the buffers 112 , 114 , 115 , 117 , 121 , 119 , 123 .
- a known direct memory access operation can be used to retrieve frame updates held in a buffer and move them along the processing pipeline.
- data packets or frames may be temporarily stored in a buffer until a processing operation is completed in a subsequent processing module or, in another example, until an expected incoming data packet has arrived.
- the third video buffer 123 for example, can be thought of as a ‘play out’ buffer that counters jitter in the displayed video images.
- a time stamp is applied (at the source end) to each audio and video packet or each video frame.
- the time stamp relates to a (source) ‘system time’ that can be used to synchronize video and audio data at the sink end and can typically comprise a sequential value.
- encoded audio and video data with time stamp information is packetized, multiplexed and transmitted as an MPEG-TS stream for reception by the sink apparatus 100 .
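The source-side timestamping described above can be sketched as follows. The sequential millisecond 'system time' stamps, the ~33 ms frame interval (about 30 fps) and the dict-based packet representation are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch of source-side timestamping. Each media payload is
# tagged with a sequential system-time stamp before packetization; the
# sink later compares these stamps at different pipeline locations.

def timestamp_packets(payloads, start_ms=0, step_ms=33):
    """Attach a sequential 'system time' stamp (ms) to each media payload.
    step_ms approximates one video frame every 33 ms (~30 fps)."""
    return [{"ts_ms": start_ms + i * step_ms, "payload": p}
            for i, p in enumerate(payloads)]

packets = timestamp_packets(["frame0", "frame1", "frame2"])
# packets[2]["ts_ms"] -> 66
```

Because the stamps increase monotonically with playback position, subtracting two of them anywhere in the sink's pipeline yields a buffered amount expressed in playback time.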
- the monitor 110 is arranged to read the time stamp value of each data packet received at the sink end at several locations in the processing pipeline.
- these locations are at the input and output of the packet receiver 109 , at the input and output of the de-multiplexer 113 , and at the output of the third audio and video buffers 119 , 123 respectively.
- the difference in time stamp values between a data packet appearing at the output of the third audio and video buffers 119 , 123 and a data packet arriving at the packet receiver 109 provides an indication of the amount of data that is buffered, at any given point in time, in the processing pipeline comprising the packet receiver 109 , de-multiplexer 113 , decoders 116 , 120 , renderers 118 , 122 and buffers 112 , 114 , 115 , 117 , 121 , 119 , 123 .
- the monitor 110 is arranged to monitor timestamp values at various locations in the processing pipeline and determine differences in the monitored time stamp values that are indicative of the amount of data buffered in the monitored portions of the processing pipeline.
- the monitor 110 is also arranged to compare a determined time stamp difference with one or more threshold values and instruct the first audio and video buffers 114 , 115 to drop multimedia data (by discarding one or more frames appearing at the output of the de-multiplexer, for example), depending on the comparison.
- a measured time stamp difference that is greater than a threshold means that more data has been buffered in the processing pipeline than is necessary for maintaining acceptable jitter-free playback, so frames can be discarded without seriously affecting the quality of the data played back to the user. Discarding frames also reduces latency, which can enhance the user experience.
- the monitor 110 monitors the status of the packet receiver 109 , de-multiplexer 113 and media player pipeline buffer 114 / 115 .
- the packet receiver buffer status is determined by checking the time stamps of the packets input to and output from the packet receiver 109 .
- the de-multiplexer buffer status is determined by checking the time stamp of the demuxed audio/video stream that is buffered in the de-multiplexer 113 .
- the media player pipeline buffer status is determined by checking the current media time (media player playing time) and the de-multiplexer output audio stream time stamp.
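The three status checks above can be sketched as follows: each monitored stage contributes a buffered amount equal to the timestamp difference between its input and its output, and the per-stage amounts are summed. The stage names, tuple representation and millisecond units are illustrative assumptions.

```python
# Hypothetical sketch of the per-stage status checks performed by the
# monitor: packet receiver, de-multiplexer and media player pipeline.

def stage_buffered_ms(input_ts_ms, output_ts_ms):
    """Buffered amount held inside one pipeline stage, as a timestamp
    difference between its input and its output."""
    return input_ts_ms - output_ts_ms

def total_buffered_ms(receiver, demux, player):
    """Each argument is an (input_timestamp, output_timestamp) pair for
    the packet receiver, the de-multiplexer and the media player
    pipeline respectively; the total is the sum of the stage amounts."""
    return sum(stage_buffered_ms(i, o) for i, o in (receiver, demux, player))

# Example: 40 ms held in the receiver, 60 ms in the de-multiplexer and
# 150 ms in the media player pipeline -> 250 ms buffered in total.
total = total_buffered_ms((1000, 960), (960, 900), (900, 750))  # -> 250
```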
- the monitor 110 is also arranged to receive an input on line 111 from the user, (by way of the touch screen 106 ) whereby the user can specify a drop frequency of multimedia data.
- the monitor 110 is arranged to increase or decrease a threshold value depending on the user's input in a manner to be described below with reference to FIG. 3 .
- the monitor 110 also, in certain circumstances, compares a time stamp difference against two threshold values and if the compared value falls between the two values, the monitor 110 instructs the appropriate components of the playback pipeline (comprising the decoders 116 , 120 , renderers 118 , 122 and first, second and third buffers 114 , 115 , 117 , 121 , 119 , 123 ) to increase the playback speed at the speaker 104 .
- the audio renderer 118 is instructed to play back audio data faster, thereby consuming audio data buffered in the processing pipeline faster.
- the sink apparatus 100 can balance latency and quality, both of which can be influenced by user requirements.
- a user can set a quality criterion (or multimedia drop frequency) such as discarding frames once every minute if a low latency is required or once every hour if a high quality is required.
- the threshold value for video data is set higher than that for audio data.
- a method for operating the sink apparatus 100 of FIG. 1 will now be described with reference to the simplified flowchart of FIG. 2 .
- multimedia data is received at the packet receiver 109 and progresses through the stream buffer 112 , the de-multiplexer 113 and the playback pipeline comprising the decoders 116 , 120 , renderers 118 , 122 and buffers 114 , 115 , 117 , 119 , 121 and 123 .
- the total amount of multimedia data (in terms of a time stamp difference) that is buffered in the sink apparatus 100 is calculated by the monitor 110 .
- over a playback period of some pre-chosen duration, say five seconds, the monitor 110 determines the smallest calculated value for the buffered multimedia data.
- the monitor 110 determines the amount of multimedia data (in terms of a time stamp value difference) which is buffered in the packet receiver 109 .
- the monitor 110 determines the amount of buffered multimedia data by obtaining the timestamps of data packets entering and leaving the packet receiver 109 and comparing the two values.
- the monitor 110 also determines the amount of multimedia data (in terms of a time stamp value difference) that is buffered in the de-multiplexer 113 .
- the monitor 110 determines the amount of buffered multimedia data by obtaining the timestamps of data packets entering and leaving the de-multiplexer 113 and comparing the two values.
- the monitor 110 also determines the amount of multimedia data (in terms of a time stamp value difference) that is buffered in the playback pipeline.
- the monitor 110 determines the amount of buffered multimedia data by obtaining the timestamps of data packets entering the first audio and video buffers 114 , 115 and of those arriving at the speaker 104 and visual display device 105 , and comparing the two values.
- the monitor determines the total amount of multimedia data buffered in the sink apparatus by summing the results of steps 202 , 203 and 204 .
- the monitor 110 determines a value for the smallest amount of buffered media data (that is, the smallest measured timestamp difference value) that it has monitored in the preceding steps over the playback period.
- the monitor 110 compares this smallest measured timestamp difference value with a preset first threshold value. If this smallest timestamp difference value is greater than the first threshold then media frames are dropped from the processing pipeline. Thus, latency will be reduced. For example, the first audio and video buffers will drop a buffered frame and send a subsequent buffered frame to their respective decoders 116 , 120 . The process reverts to step 206 where the value for the smallest amount of buffered multimedia data continues to be determined and compared with the first threshold. In one example, multimedia frames are discarded until the processing pipeline contains just sufficient media data to avoid underrun. This can typically comprise 50 ms worth of media data. Discarding frames in this way has the effect of reducing latency in the sink apparatus.
- if the smallest timestamp difference value does not exceed the first threshold, it is compared with a second, lower threshold. If it does not exceed the second threshold either, the method reverts to step 206 where the value for the smallest amount of buffered multimedia data continues to be determined and compared with the first threshold. If, on the other hand, the comparison reveals that the smallest timestamp difference value is greater than the second threshold, then at 209 , the audio playback speed is increased. Increasing the audio playback speed has the effect of reducing latency by consuming buffered audio data faster.
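The two-threshold decision described above can be sketched as follows, assuming the first threshold is the higher of the two (consistent with frames being dropped above it and playback being sped up between the two). The function names, millisecond units and window handling are illustrative assumptions.

```python
# Hypothetical sketch of the sink's latency-control decision. The monitor
# samples the buffered amount repeatedly over a playback window (say five
# seconds) and acts on the smallest value seen, so a transient dip in
# buffering does not trigger an unnecessary frame drop.

def smallest_buffered_ms(window_samples_ms):
    """Smallest buffered amount observed over the playback window."""
    return min(window_samples_ms)

def choose_action(smallest_ms, first_threshold_ms, second_threshold_ms):
    """first_threshold_ms is assumed to be the higher of the thresholds.

    - above the first threshold: drop buffered frames to cut latency;
    - between the two thresholds: speed up audio playback so buffered
      audio data is consumed faster;
    - at or below the second threshold: leave the pipeline alone.
    """
    if smallest_ms > first_threshold_ms:
        return "drop_frames"
    if smallest_ms > second_threshold_ms:
        return "increase_audio_speed"
    return "no_action"

# Example: the smallest amount buffered over the window is 260 ms.
action = choose_action(smallest_buffered_ms([320, 280, 260]), 250, 150)
# -> "drop_frames"
```

Acting on the window minimum rather than an instantaneous reading is what keeps the pipeline from being trimmed below the level needed to avoid underrun.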
- the monitor 110 (See FIG. 1 ) sets the first threshold at an initial value.
- a requested media drop frequency (say, one drop every five minutes) is received at the monitor 110 from the user.
- a request can be generated, for example, via the touchscreen 106 of the visual display device 105 .
- the monitor 110 monitors the frequency of media drops.
- if the monitored drop frequency is greater than that requested by the user, the monitor increases the first threshold.
- the value of the first threshold increases if the user's requirement has not been satisfied. In such a situation the user needs better quality and can tolerate a greater latency. Subsequently, the process reverts to 302 where the drop frequency continues to be monitored, taking into account any further requests from the user.
- the monitor 110 decreases the value of the first threshold. In such instances, the user's requirement has been satisfied and the first threshold can be decreased because the user can tolerate more frequent media drops. The process reverts to 302 where the drop frequency continues to be monitored, taking into account any further requests from the user.
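The threshold adaptation of FIG. 3 can be sketched as follows. The step size and the per-minute drop rates are illustrative assumptions; only the direction of adjustment follows the description: raise the threshold when frames are dropped more often than the user requested (favouring quality), lower it when they are dropped less often (the user can tolerate more drops, favouring latency).

```python
# Hypothetical sketch of the adaptive first threshold driven by the
# user's requested media-drop frequency.

def adjust_first_threshold(threshold_ms, observed_drops_per_min,
                           requested_drops_per_min, step_ms=10):
    """Nudge the frame-drop threshold toward the user's requested drop
    frequency. step_ms is an illustrative assumption."""
    if observed_drops_per_min > requested_drops_per_min:
        return threshold_ms + step_ms   # too many drops: favour quality
    if observed_drops_per_min < requested_drops_per_min:
        return threshold_ms - step_ms   # drops are rare: favour low latency
    return threshold_ms

adjust_first_threshold(200, observed_drops_per_min=2,
                       requested_drops_per_min=1)  # -> 210
```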
- the invention may also be implemented in a computer program for running on a computer system, at least including code portions for performing steps of a method according to the invention when run on a programmable apparatus, such as a computer system, or for enabling a programmable apparatus to perform functions of a device or system according to the invention.
- a non-transitory computer-readable medium may be provided having computer-readable instructions stored thereon for performing a method for adjusting latency in a sink apparatus as disclosed herein.
- the non-transitory computer-readable medium may comprise at least one from a group consisting of: a hard disk, a CD-ROM, an optical storage device, a magnetic storage device, a Read Only Memory (ROM), a Programmable Read Only Memory (PROM), an Erasable Programmable Read Only Memory (EPROM), an Electrically Erasable Programmable Read Only Memory (EEPROM) and a Flash memory.
- connections as discussed herein may be any type of connection suitable to transfer signals from or to the respective nodes, units or devices, for example via intermediate devices. Accordingly, unless implied or stated otherwise, the connections may for example be direct connections or indirect connections.
- the connections may be illustrated or described in reference to being a single connection, a plurality of connections, unidirectional connections, or bidirectional connections. However, different embodiments may vary the implementation of the connections. For example, separate unidirectional connections may be used rather than bidirectional connections and vice versa.
- a plurality of connections may be replaced with a single connection that transfers multiple signals serially or in a time multiplexed manner. Likewise, single connections carrying multiple signals may be separated out into various different connections carrying subsets of these signals. Therefore, many options exist for transferring signals.
- the boundaries between the functional blocks illustrated in FIG. 1 are merely illustrative; alternative embodiments may merge functional blocks or circuit elements or impose an alternate decomposition of functionality upon various functional blocks or circuit elements.
- the architectures depicted herein are merely exemplary; in fact, many other architectures can be implemented which achieve the same functionality. Any arrangement of components to achieve the same functionality is effectively “associated” such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or intermediate components. Likewise, any two components so associated can also be viewed as being “operably connected,” or “operably coupled,” to each other to achieve the desired functionality.
- the illustrated examples may be implemented as circuitry located on a single integrated circuit or within a same device.
- the entire functionality of the modules comprising the sink apparatus shown in FIG. 1 may be implemented in an integrated circuit.
- Such an integrated circuit may be a package containing one or more dies.
- the examples may be implemented as separate integrated circuits or separate devices interconnected with each other in a suitable manner.
- An integrated circuit device may comprise one or more dies in a single package with electronic components provided on the dies that form the modules and which are connectable to other components outside the package through suitable connections such as pins or leads of the package and bond wires between the pins and the dies.
- the examples, or portions thereof may be implemented as soft or code representations of physical circuitry or of logical representations convertible into physical circuitry, such as in a hardware description language of any appropriate type.
- the invention is not limited to physical devices or units implemented in non-programmable hardware but can also be applied in programmable devices or units able to perform the desired device functions by operating in accordance with suitable program code, such as mainframes, minicomputers, servers, workstations, personal computers, notepads, personal digital assistants, electronic games, automotive and other embedded systems, cell phones and various other wireless devices, commonly denoted in this application as ‘computer systems.’
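As one illustration of such program code, the scheme named in the claims (monitoring the amount of buffered media data against a predetermined threshold and adjusting the processing pipeline accordingly) might be sketched as follows. This is a minimal sketch only; the function name, parameters, and speed values are illustrative assumptions, not the patented implementation.

```python
# Illustrative sketch of threshold-based pipeline speed control.
# Names and values are assumptions for illustration only.

def adjust_pipeline_speed(buffered_ms: float,
                          threshold_ms: float,
                          normal_speed: float = 1.0,
                          fast_speed: float = 1.1) -> float:
    """Return a processing speed for the media pipeline.

    If more media data is buffered than the predetermined threshold,
    run the pipeline slightly faster to drain the backlog and reduce
    latency; otherwise run at normal speed.
    """
    if buffered_ms > threshold_ms:
        return fast_speed
    return normal_speed
```

A caller would invoke this periodically as buffer occupancy changes, so the pipeline speeds up only while the buffered amount exceeds the threshold and returns to normal speed once the backlog is drained.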
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Databases & Information Systems (AREA)
- Computing Systems (AREA)
- Theoretical Computer Science (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
Description
Claims (14)
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201410833214 | 2014-11-12 | |
CN201410833214.7 | 2014-11-12 | |
CN201410833214.7A (CN105898541B) | 2014-11-12 | 2014-11-12 | Method and apparatus for reducing latency in a multimedia system
Publications (2)
Publication Number | Publication Date
---|---
US20160134931A1 | 2016-05-12
US9621949B2 | 2017-04-11
Family
ID=55913270
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
US14/677,960 (US9621949B2, Expired - Fee Related) | Method and apparatus for reducing latency in multi-media system | 2014-11-12 | 2015-04-02
Country Status (2)
Country | Link
---|---
US | US9621949B2
CN | CN105898541B
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
US10225309B1 * | 2016-03-22 | 2019-03-05 | Amazon Technologies, Inc. | Monitoring of media pipeline health using tracing
CN105872728A * | 2016-05-28 | 2016-08-17 | 刘健文 | Screen-mirroring video processing method for multi-screen interaction
CN105847971A * | 2016-05-28 | 2016-08-10 | 刘健文 | Method for processing screen-mirroring video
CN105847946A * | 2016-05-28 | 2016-08-10 | 刘健文 | Screen-mirroring video processing method
CN107438197A * | 2016-05-29 | 2017-12-05 | 刘健文 | A display terminal
CN107438198A * | 2016-05-29 | 2017-12-05 | 刘健文 | A display terminal for screen-mirroring video
CN107438199A * | 2016-05-29 | 2017-12-05 | 刘健文 | A screen-mirroring video display terminal
CN109996094B * | 2017-12-29 | 2021-08-13 | 杭州海康威视系统技术有限公司 | Video playing method, device and system
CN108449617B | 2018-02-11 | 2020-04-03 | 浙江大华技术股份有限公司 | Method and device for controlling audio and video synchronization
CN114390335B * | 2020-10-22 | 2022-11-18 | 华为终端有限公司 | Method for playing audio and video online, electronic device and storage medium
CN113766146B * | 2021-09-07 | 2022-09-16 | 北京百度网讯科技有限公司 | Audio and video processing method and device, electronic device and storage medium
CN114205662B * | 2021-12-13 | 2024-02-20 | 北京蔚领时代科技有限公司 | Low-latency video rendering method and device for iOS
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6072809A (en) | 1997-08-14 | 2000-06-06 | Lucent Technologies, Inc. | Statistical method for dynamically controlling the playback delay of network multimedia streams |
US6115357A (en) | 1997-07-01 | 2000-09-05 | Packeteer, Inc. | Method for pacing data flow in a packet-based network |
US6665728B1 (en) | 1998-12-30 | 2003-12-16 | Intel Corporation | Establishing optimal latency in streaming data applications that use data packets |
US6999921B2 (en) | 2001-12-13 | 2006-02-14 | Motorola, Inc. | Audio overhang reduction by silent frame deletion in wireless calls |
US7035210B2 (en) | 2001-07-12 | 2006-04-25 | Telefonaktiebolaget Lm Ericsson (Publ) | Media stream delay monitoring for node |
US20060149850A1 (en) * | 2005-01-05 | 2006-07-06 | Control4 Corporation | Method and apparatus for synchronizing playback of streaming media in multiple output devices |
US20080172441A1 (en) * | 2007-01-12 | 2008-07-17 | Microsoft Corporation | Dynamic buffer settings for media playback |
US20090059962A1 (en) * | 2007-08-30 | 2009-03-05 | Schmidt Brian K | Synchronizing related data streams in interconnection networks |
US20100091769A1 (en) * | 2004-06-25 | 2010-04-15 | Numerex Corporation | Method And System For Improving Real-Time Data Communications |
US7924711B2 | 2004-10-20 | 2011-04-12 | Qualcomm Incorporated | Method and apparatus to adaptively manage end-to-end voice over internet protocol (VoIP) media latency
US20130148940A1 (en) * | 2011-12-09 | 2013-06-13 | Advanced Micro Devices, Inc. | Apparatus and methods for altering video playback speed |
US20130222699A1 (en) * | 2012-02-28 | 2013-08-29 | Qualcomm Incorporated | Customized buffering at sink device in wireless display system based on application awareness |
US20130262694A1 (en) * | 2012-03-30 | 2013-10-03 | Viswanathan Swaminathan | Buffering in HTTP Streaming Client |
US20140269942A1 (en) * | 2013-03-14 | 2014-09-18 | Jupiter Systems | Concurrent decompression of multiple video streams with constrained decompression resources |
- 2014
  - 2014-11-12: CN application CN201410833214.7A granted as CN105898541B; status: not active (Expired - Fee Related)
- 2015
  - 2015-04-02: US application US14/677,960 granted as US9621949B2; status: not active (Expired - Fee Related)
Also Published As
Publication number | Publication date
---|---
US20160134931A1 | 2016-05-12
CN105898541A | 2016-08-24
CN105898541B | 2019-11-26
Similar Documents
Publication | Publication Date | Title
---|---|---
US9621949B2 | 2017-04-11 | Method and apparatus for reducing latency in multi-media system
US9055502B2 | | Content-based handover method and system
US10834296B2 | | Dynamically adjusting video to improve synchronization with audio
US9928844B2 | | Method and system of audio quality and latency adjustment for audio processing by using audio feedback
US8631143B2 | | Apparatus and method for providing multimedia content
CN104125482B | | A streaming media playback method and device
CN101715046B | | Electronic apparatus and content reproduction method
US10805658B2 | | Adaptive switching in a whole home entertainment system
US10277653B2 | | Failure detection manager
US11582300B2 | | Streaming synchronized media content to separate devices
CN108810656B | | Real-time live TS stream de-jitter processing method and processing system
US20130166769A1 | | Receiving device, screen frame transmission system and method
US11943497B2 | | Network-based audio playback
US20120154678A1 | | Receiving device, screen frame transmission system and method
US9319737B2 | | Transport layer modification to enable transmission gaps
CN106063284A | | Method and device for playing multimedia content in communication system
WO2022142481A1 | | Audio/video data processing method, livestreaming apparatus, electronic device, and storage medium
KR100632509B1 | | Method for synchronizing audio and video in a video playback device
KR101328339B1 | | Apparatus and method for managing buffer for playing streaming video
WO2021002135A1 | | Data transmission device, data transmission system, and data transmission method
JP5591953B2 | | Method and apparatus for quieting a transmitter in a white space device
CN118828715A | | Method and device for playing audio, audio device, and chip
WO2013091010A1 | | Media output methods and devices
Legal Events
Code | Title | Description
---|---|---
AS | Assignment | Owner: FREESCALE SEMICONDUCTOR, INC., Texas. Assignment of assignors interest; assignors: SONG, BING; LIU, XIAOWEN; WANG, ZENING. Reel/frame: 035325/0783. Effective date: 2014-11-05
AS | Assignment | Owner: CITIBANK, N.A., as notes collateral agent, New York. Supplement to IP security agreement; assignor: FREESCALE SEMICONDUCTOR, INC. Reel/frames: 036284/0105, 036284/0339, 036284/0363. Effective date: 2015-07-24
AS | Assignment | Owner: FREESCALE SEMICONDUCTOR, INC., Texas. Patent release; assignor: CITIBANK, N.A., as collateral agent. Reel/frame: 037357/0859. Effective date: 2015-12-07
AS | Assignment | Owner: MORGAN STANLEY SENIOR FUNDING, INC., Maryland. Assignment and assumption of security interest in patents; assignor: CITIBANK, N.A. Reel/frames: 037565/0510, 037565/0527. Effective date: 2015-12-07
AS | Assignment | Owner: MORGAN STANLEY SENIOR FUNDING, INC., Maryland. Supplement to the security agreement; assignor: FREESCALE SEMICONDUCTOR, INC. Reel/frame: 039138/0001. Effective date: 2016-05-25
AS | Assignment | Owner: NXP, B.V., f/k/a FREESCALE SEMICONDUCTOR, INC., Netherlands. Release by secured party; assignor: MORGAN STANLEY SENIOR FUNDING, INC. Reel/frame: 040925/0001. Effective date: 2016-09-12
AS | Assignment | Owner: NXP B.V., Netherlands. Release by secured party; assignor: MORGAN STANLEY SENIOR FUNDING, INC. Reel/frame: 040928/0001. Effective date: 2016-06-22
AS | Assignment | Owner: NXP USA, INC., Texas. Change of name; assignor: FREESCALE SEMICONDUCTOR INC. Reel/frame: 040626/0683. Effective date: 2016-11-07
AS | Assignment | Owner: NXP USA, INC., Texas. Corrective assignment to correct the nature of conveyance previously recorded at reel 040626, frame 0683; confirms the merger and change of name effective November 7, 2016; assignors: NXP SEMICONDUCTORS USA, INC. (merged into); FREESCALE SEMICONDUCTOR, INC. Signing dates: 2016-11-04 to 2016-11-07. Reel/frame: 041414/0883
STCF | Information on status: patent grant | Patented case
AS | Assignment | Owner: NXP B.V., Netherlands. Release by secured party; assignor: MORGAN STANLEY SENIOR FUNDING, INC. Reel/frame: 050744/0097. Effective date: 2019-09-03
AS | Assignment | Owner: NXP B.V., Netherlands. Corrective assignment to remove application 11759915 and replace it with application 11759935, previously recorded on reel 040928, frame 0001; confirms the release of security interest; assignor: MORGAN STANLEY SENIOR FUNDING, INC. Reel/frame: 052915/0001. Effective date: 2016-06-22
AS | Assignment | Owner: NXP, B.V., f/k/a FREESCALE SEMICONDUCTOR, INC., Netherlands. Corrective assignment to remove application 11759915 and replace it with application 11759935, previously recorded on reel 040925, frame 0001; confirms the release of security interest; assignor: MORGAN STANLEY SENIOR FUNDING, INC. Reel/frame: 052917/0001. Effective date: 2016-09-12
FEPP | Fee payment procedure | Maintenance fee reminder mailed (original event code: REM.); entity status of patent owner: large entity
LAPS | Lapse for failure to pay maintenance fees | Patent expired for failure to pay maintenance fees (original event code: EXP.); entity status of patent owner: large entity
STCH | Information on status: patent discontinuation | Patent expired due to nonpayment of maintenance fees under 37 CFR 1.362
FP | Lapsed due to failure to pay maintenance fee | Effective date: 2021-04-11