
EP2055109A2 - Method and device for assembling forward error correction frames in multimedia streaming - Google Patents

Method and device for assembling forward error correction frames in multimedia streaming

Info

Publication number
EP2055109A2
Authority
EP
European Patent Office
Prior art keywords
media
fec
error correction
random access
forward error
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP07804782A
Other languages
German (de)
French (fr)
Other versions
EP2055109A4 (en)
Inventor
Ramakrishnan Vedantham
Vidya Setlur
Suresh Chitturi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nokia Oyj
Nokia Inc
Original Assignee
Nokia Oyj
Nokia Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nokia Oyj, Nokia Inc
Publication of EP2055109A2
Publication of EP2055109A4

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N19/89 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving methods or arrangements for detection of transmission errors at the decoder
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N21/63 Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/643 Communication protocols
    • H04N21/6437 Real-time Transport Protocol [RTP]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/235 Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/236 Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/238 Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
    • H04N21/2383 Channel coding or modulation of digital bit-stream, e.g. QPSK modulation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/438 Interfacing the downstream path of the transmission network originating from a server, e.g. retrieving encoded video stream packets from an IP network
    • H04N21/4382 Demodulation or channel decoding, e.g. QPSK demodulation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/438 Interfacing the downstream path of the transmission network originating from a server, e.g. retrieving encoded video stream packets from an IP network
    • H04N21/4383 Accessing a communication channel
    • H04N21/4384 Accessing a communication channel involving operations to reduce the access time, e.g. fast-tuning for reducing channel switching latency
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81 Monomedia components thereof
    • H04N21/8146 Monomedia components thereof involving graphical data, e.g. 3D object, 2D graphics
    • H04N21/8153 Monomedia components thereof involving graphical data, e.g. 3D object, 2D graphics comprising still images, e.g. texture, background image

Definitions

  • the present invention relates generally to the assembly of forward error correction frames for groups of coded media packets and, more particularly, to the forward error correction frames in multimedia streaming.
  • IP Internet Protocol
  • Annoying artifacts in a media presentation resulting from errors in a media transmission can further be avoided by many different means during the media coding process.
  • adding redundancy bits during a media coding process is not possible for pre-coded content, and is normally less efficient than optimal protection mechanisms in the channel coding using a forward error correction (FEC).
  • FEC forward error correction
  • a media GOP stream 300 comprises a media GOP 310 and a media GOP 320 separated by a boundary 315.
  • the FEC structure 500 comprises a FEC frame 510 and a FEC frame 520 separated by a boundary 515.
  • the FEC frame 510 also contains an FEC packet 512 and two padding packets 516.
  • the FEC frame 520 contains an FEC packet in addition to the media packets 524.
  • the FEC frames 510, 520 are generally longer than the media GOPs. As such, the FEC frames are not aligned with the media GOPs.
  • FEC schemes intended for error protection generally allow the number of to-be-protected media packets and the number of FEC packets to be chosen adaptively, to select the strength of the protection and to meet the delay constraints of the FEC subsystem.
  • Packet based FEC in the sense discussed above requires a synchronization of the receiver to the FEC frame structure, in order to take advantage of the FEC. That is, a receiver has to buffer all media and FEC packets of a FEC frame before error correction can commence.
  • Video coding schemes, and increasingly some audio coding schemes, for example, use so-called predictive coding techniques. Such techniques predict the content of a later video picture or audio frame from previous pictures or audio frames, respectively. In the following, video pictures and audio frames will both be referred to as "pictures", in order to distinguish them from FEC frames.
  • the compression scheme can be very efficient, but becomes also increasingly vulnerable to errors the longer the prediction chain becomes.
  • key pictures or the equivalent of non-predictively coded audio frames, both referred to as key pictures hereinafter, are inserted from time to time.
  • This technique re-establishes the integrity of the prediction chain by using only non-predictive coding techniques. It is not uncommon that a key picture is 5 to 20 times bigger than a predictively coded picture.
  • Each encoded picture may correspond, for example, to one to-be-protected media packet.
  • GOP Group of Pictures
  • FEC schemes can be designed to be more efficient when FEC frames are big in size, for example, when they comprise some hundred packets.
  • most media coding schemes gain efficiency when choosing larger GOP sizes, since a GOP contains only one single key picture which is, statistically, much larger than the other pictures of the GOP.
  • both large FEC frames and large GOP sizes require the receiver to synchronize to the respective structures. For FEC frames this implies buffering of the whole FEC frame as received, and correcting any correctable errors. For media GOPs this implies the parsing and discarding of those media packets that do not form the start of a GOP (the key frame).
  • the FEC frames should be aligned with the groups of media packets.
  • the encoder should be able to determine, for a group of coded media packets contained in an FEC frame, the number of next subsequent groups of coded media packets which fit completely into that FEC frame, and to select all coded media packets associated with the group or groups of coded media packets so determined for that FEC frame.
  • a media GOP stream 400 comprises a media GOP 410 and a media GOP 420 separated by a boundary 415.
  • the FEC structure 600 comprises a FEC frame 610 and a FEC frame 620 separated by a boundary 615.
  • the FEC frames 610 and 620 also contain FEC packets and the media packets, they can be made aligned with the GOPs.
  • Rich media content generally refers to content that is graphically rich and combines multiple media, including graphics, text, video and audio, preferably delivered through a single interface. Rich media dynamically changes over time and can respond to user interaction.
  • Streaming of rich media content is becoming more and more important for delivering visually rich content for real-time transport, especially within the Multimedia Broadcast/Multicast Services (MBMS) and Packet-switched Streaming Service (PSS) architectures in 3GPP.
  • PSS provides a framework for Internet Protocol (IP) based streaming applications in 3G networks, especially over point-to-point bearers.
  • IP Internet Protocol
  • MBMS streaming services facilitate resource efficient delivery of popular real-time content to multiple receivers in a 3G mobile environment.
  • PtP point-to-point
  • PtM point-to-multipoint
  • the streamed content may consist of video, audio, XML (extensible Markup Language) content such as Scalable Vector Graphics (SVG), timed-text and other supported media.
  • the content may be pre-recorded or generated from a live feed.
  • SVG allows for three types of graphic objects: vector graphic shapes, images and text. Graphic objects can be grouped, transformed and composed from previously rendered objects. SVG content can be arranged in groups such that each of them can be processed and displayed independently from groups that are delivered later in time. Groups are also referred to as scenes.
  • SVGT 1.2 is a language for describing two-dimensional graphics in XML.
  • SVG allows for three types of graphic objects: vector graphic shapes (e.g., paths consisting of straight lines and curves), multimedia (such as raster images, video and audio), and text.
  • SVG drawings can be interactive (using the DOM event model) and dynamic. Animations can be defined and triggered either declaratively (i.e., by embedding SVG animation elements in SVG content) or via scripting. Sophisticated applications of SVG are possible by use of a supplemental scripting language which accesses the SVG Micro Document Object Model (μDOM), which provides complete access to all elements, attributes and properties.
  • μDOM SVG Micro Document Object Model
  • a rich set of event handlers can be assigned to any SVG graphical object. Because of its compatibility and leveraging of other Web standards (such as CDF), features like scripting can be done on XHTML (Extensible HyperText Markup Language) and SVG elements simultaneously within the same Web page.
  • SMIL 2.0 - The Synchronized Multimedia Integration Language (SMIL) enables simple authoring of interactive audiovisual presentations. SMIL is typically used for "rich media"/multimedia presentations which integrate streaming audio and video with images, text or any other media type.
  • SMIL Synchronized Multimedia Integration Language
  • CDF The Compound Documents Format (CDF) working group is producing recommendations on combining separate component languages (e.g. XML-based languages, elements and attributes from separate vocabularies), like XHTML, SVG, MathML, and SMIL, with a focus on user interface markups.
  • component languages e.g. XML-based languages, elements and attributes from separate vocabularies
  • XHTML XHTML
  • SVG Scalable Vector Graphics
  • a DIMS content stream typically consists of a series of RTP (Real-time Transport Protocol) packets whose payload is SVG scene, SVG scene update(s), and coded video and audio packets.
  • RTP Real-time Transport Protocol
  • SVG scene SVG scene update(s)
  • coded video and audio packets coded video and audio packets.
  • UDP User Datagram Protocol
  • 3GPP SA4 defined some media independent packet loss recovery mechanisms at transport layer and above in the MBMS and PSS frameworks.
  • MBMS application layer FEC is used for packet loss recovery for both streaming and download services.
  • PSS RTP layer retransmissions are used for packet loss recovery.
  • TCP Transmission Control Protocol
  • AL-FEC application layer forward error correction
  • DIMS streaming rich media
  • AL-FEC application layer forward error correction
  • the FEC frame is transmitted over the lossy network.
  • a receiver would be able to recover any lost media RTP packets if it receives a sufficient total number of media and FEC RTP packets from that FEC frame.
  • the length of the above-mentioned source block is configurable. AL-FEC is more effective if large source blocks are used. On the other hand, the tune-in delay is directly proportional to the length of the source block.
  • source RTP packets of each medium are bundled together to form a source block for FEC protection.
  • One or more FEC RTP packets are generated from this source block using an FEC encoding algorithm.
  • the source RTP packets of different media along with the FEC RTP packets are transmitted as separate RTP streams, as shown in Figure 3.
  • the DIMS RTP stream contains a plurality of FEC frames 610₁, 610₂ and 610₃, for example. These FEC frames may contain the source blocks for different DIMS media or the same medium.
  • the FEC frame 610₁ comprises a source block 614₁ of source RTP packets and a FEC RTP packet 612₁.
  • the client buffers the received RTP packets (both source and FEC) for sufficient duration and tries to reconstruct the above mentioned source block. If any source RTP packets are missing, then it tries to recover them by applying the FEC decoding algorithm.
  • the length of the FEC source block is a critical factor in determining the tune-in delay.
  • the client has to buffer for the duration of an entire FEC source block. If a client starts receiving data in the middle of the current FEC source block, then it may have to discard the data from the current source block, and wait to receive the next source block from the beginning to the end. Hence, on average, it has to wait for 1.5 times the FEC source block duration.
  • the packets are sent to various media decoders at the receiver.
  • the media decoders may not be able to decode from arbitrary points in the compressed media bit stream. If the FEC frames and the media GOPs are not aligned, then on average the decoder may have to discard one half of the current media GOP data.
  • Tune-in delay = 1.5 * (FEC source block duration) + 0.5 * (media GOP duration) (1)
  • FEC source block duration is the buffering delay of the FEC frame (in isochronous networks this is proportional to the size of the FEC frame)
  • media GOP duration is the buffering delay of the media GOP.
  • the worst case buffer sizes have to be chosen such that a complete FEC frame and a complete GOP, respectively, fits into the buffer of an FEC decoder and the buffer of a media decoder, respectively.
  • the present invention provides a method and device wherein a random access point is inserted at the beginning of each forward error correction (FEC) source block for a multimedia broadcast/multicast-based streaming service content.
  • the media decoder can start decoding as soon as FEC decoding is finished and the second term in Equation 1 can be eliminated, thus reducing the tune-in delay.
  • the multimedia broadcast/multicast streaming service includes dynamic interactive multimedia scene content where the source RTP packets of different media along with the FEC RTP packets are transmitted as separate RTP streams.
  • the inclusion of the random access point facilitates immediate rendering of the dynamic interactive multimedia scene content after FEC decoding, thus reducing the tune-in latency.
  • the first aspect of the present invention is a method for use in multimedia streaming wherein a packet stream is provided to a multimedia client capable of decoding media packets of a plurality of media, and the encoded media packets of each medium are arranged in frames, each frame having at least a source block following at least one forward error correction packet.
  • the method comprises inserting a random access point in at least some of the frames such that the random access point is located between the source block and the forward error correction packet.
  • the second aspect of the present invention is a module for use in a server in multimedia streaming wherein a packet stream is provided from the server to a multimedia client capable of decoding media packets of a plurality of media, and the encoded media packets of each medium are arranged in frames, each frame having at least a source block following at least one forward error correction packet.
  • the module is adapted for inserting a random access point in at least some of the frames such that the random access point is located between the source block and the forward error correction packet.
  • the third aspect of the present invention is a server in a communication system, the communication system comprising one or more multimedia clients capable of decoding media packets of a plurality of media, and the encoded media packets of each medium are arranged in frames, each frame having at least a source block following at least one forward error correction packet.
  • the server comprises a generation module for inserting a random access point in at least some of the frames such that the random access point is located between the source block and the forward error correction packet.
  • the fourth aspect of the present invention is a multimedia client adapted for receiving a multimedia bitstream, the bitstream comprising a plurality of encoded media packets arranged in frames, each frame having at least a source block following at least one forward error correction packet and wherein at least one random access point is inserted between the source block and the forward error correction packet.
  • the client comprises a first decoder for forward error correction decoding and at least one media decoder for decoding the source block of encoded media packets after the forward error correction decoding based on the random access point.
  • the fifth aspect of the present invention is a software application product comprising a storage medium having a software application for use in multimedia streaming wherein a packet stream is provided to a multimedia client capable of decoding media packets of a plurality of media, and the encoded media packets of each medium are arranged in frames, each frame having at least a source block following at least one forward error correction packet.
  • the software application comprises programming code for inserting a random access point in at least some of the frames such that the random access point is located between the source block and the forward error correction packet.
  • the sixth aspect of the present invention is a software application product comprising a storage medium having a software application for use in a multimedia client, the client adapted for receiving a multimedia bitstream, the bitstream comprising a plurality of encoded media packets arranged in frames, each frame having at least a source block following at least one forward error correction packet and wherein at least one random access point is inserted between the source block and the forward error correction packet.
  • the software application comprises programming code for forward error correction decoding and programming code for decoding the source block of encoded media packets after the forward error correction decoding based on the random access point.
  • Figure 1 is a timing diagram showing a plurality of GOPs and the associated FEC frames which are not aligned with the GOPs.
  • Figure 2 is a timing diagram showing a plurality of GOPs and the associated FEC frames which are aligned with the GOPs.
  • FIG. 3 shows the FEC frames in multimedia streaming.
  • Figure 4 shows the insertion of a random access point at the beginning of each FEC source block for multimedia streaming, according to one embodiment of the present invention.
  • Figure 5a shows an FEC frame structure for DIMS, according to one embodiment of the present invention.
  • Figure 5b shows an FEC frame structure for DIMS, according to another embodiment of the present invention.
  • Figure 6 is a schematic representation of a communication system having a server and a client wherein random access points are inserted in FEC frames.
  • Figure 7 is a block diagram of an electronic device having at least one of the multimedia streaming encoder and the decoder, according to the present invention.
  • the streamed content may consist of video, audio, XML content such as SVG, timed-text and other supported media.
  • An SVG stream generally consists of a scene and a series of scene updates. It is possible to consider the SVG scene as a starting point for decoding in an SVG decoder at the receiver after FEC decoding.
  • the current MBMS FEC framework uses media bundling for FEC protection purposes, i.e., the same FEC frame contains all types of media RTP packets (e.g., SVG, audio, video). In such an arrangement, it is advantageous to have the random access points of the three media (in any order) at the beginning of the FEC frame.
  • FIG. 4 shows the insertion of a random access point at the beginning of each source block of an FEC frame.
  • a DIMS RTP stream comprises FEC frames 710₁, 710₂ and 710₃, for example. These FEC frames may contain source blocks for different DIMS media such as video, audio, and timed text, or for the same medium.
  • the FEC frame 710₁ comprises a source block 714₁ of source RTP packets, a random access point 718₁ and a FEC RTP packet 712₁.
  • the FEC frame 710₂ comprises a source block 714₂ of source RTP packets, a random access point 718₂ and a FEC RTP packet 712₂.
  • the FEC frame 710₃ comprises a source block 714₃ of source RTP packets, a random access point 718₃ and a FEC RTP packet 712₃.
  • an FEC frame can have more than one FEC packet so that the media bitstream is more robust against packet loss.
  • the FEC packets are normally at the end of the FEC frame, while the RAP packets are at the beginning of the FEC frame.
  • a random access point in the middle of an FEC frame is useful for quick tune-in. This is also useful in case of an FEC decoding failure. In such a case, the first random access point is missing but the subsequent random access points in the same FEC frame can be used for media decoding.
  • Interactive Mobile TV services - This service is understood as the ability to provide a deterministic rendering and behavior of Rich-media content including audio-video content, text, images, and XML based content such as SVG, along with TV and radio channels, altogether in the end-user interface.
  • the service provides convenient navigation through content in a single application or service and allows synchronized interaction, locally or remotely, such as voting and personalization (e.g. a related menu or sub-menu, advertising and content as a function of the end-user profile or service subscription).
  • Live enterprise data feed - This service includes stock tickers that provide streaming of real-time quotes, live intra-day charts with technical indicators, news monitoring, weather alerts, charts, business updates, etc.
  • Live Chat - The live chat service can be incorporated within a web cam or video channel, or a rich-media blog service. End-users can register, save their surname and exchange messages. Messages appear dynamically in the live chat service along with rich-media data provided by the end-user.
  • the chat service can be either private or public in one or more multiple channels at the same time. End-users are dynamically alerted of new messages from other users. Dynamic updates of messages within the service occur without reloading a complete page.
  • Karaoke - This service displays a music TV channel or video clip catalog along with the speech of a song with fluid-like animation on the text characters to be sung (e.g. smooth color transition of fonts, scrolling of text).
  • the end-user can download a song of his choice along with the complete animation by selecting an interactive button.
  • A schematic representation of a communication system having a server and a client, according to an embodiment of the present invention, is shown in Figure 6.
  • the communication system is capable of providing multimedia/ multicast services.
  • the communication system has at least one server and one client for multimedia streaming.
  • the server is adapted for providing Rich media (DIMS) content over broadcast/multicast channels of a wireless network, such as the 3G wireless networks.
  • the server is adapted for acquiring, receiving and/or storing DIMS content.
  • the DIMS content includes scenes and scene updates.
  • the DIMS content can be conveyed to an FEC frame generator which is adapted to insert random access points in FEC frames. More specifically, the random access points are inserted at the beginning of a source block for an MBMS-based streaming service for DIMS content.
  • the FEC generator is adapted to provide FEC frames aligned with the media DIMS packets with the random access points included.
  • the DIMS packets with aligned FEC frames are transmitted in a bitstream over broadcast/ multicast channels so as to allow one or more DIMS clients to receive and decode the bitstream.
  • the FEC generator can have a processing component running FEC encoding software having programming code for aligning the FEC frames as well as for random access point insertion.
  • each DIMS client has a FEC decoder for error correction purposes.
  • The FEC decoder can have a processing component running FEC decoding software. After FEC decoding, the DIMS contents are conveyed to a number of media decoders. The decoded content from each media decoder is provided to an output module. For example, if the media decoder is a video decoder, then the decoded content is provided to a screen for display. As shown in Figure 6, three different media decoders and three corresponding output modules are shown. One of the output modules can be a renderer adapted for SVG drawings, for example. SVG drawings can be interactive and dynamic and can be used in animation, for example.
  • Figure 7 shows an electronic device that incorporates at least one of the server module and the DIMS client module shown in Figure 6.
  • the electronic device is a mobile terminal.
  • the mobile device 10 shown in Figure 7 is capable of cellular data and voice communications. It should be noted that the present invention is not limited to this specific embodiment, which represents one of a multiplicity of different embodiments.
  • the mobile device 10 includes a (main) microprocessor or micro-controller 100 as well as components associated with the microprocessor controlling the operation of the mobile device.
  • These components include a display controller 130 connecting to a display module 135, a nonvolatile memory 140, a volatile memory 150 such as a random access memory (RAM), an audio input/output (I/O) interface 160 connecting to a microphone 161, a speaker 162 and/or a headset 163, a keypad controller 170 connected to a keypad 175 or keyboard, any auxiliary input/output (I/O) interface 200, and a short-range communications interface 180.
  • Such a device also typically includes other device subsystems shown generally at 190.
  • the mobile device 10 may communicate over a voice network and/or may likewise communicate over a data network, such as any public land mobile networks (PLMNs) in form of e.g. digital cellular networks, especially GSM (global system for mobile communication) or UMTS (universal mobile telecommunications system).
  • PLMNs public land mobile networks
  • GSM global system for mobile communication
  • UMTS universal mobile telecommunications system
  • the voice and/or data communication is operated via an air interface, i.e. a cellular communication interface subsystem in cooperation with further components (see above) to a base station (BS) or node B (not shown) being part of a radio access network (RAN) of the infrastructure of the cellular network.
  • BS base station
  • RAN radio access network
  • the cellular communication interface subsystem as depicted illustratively in Figure 7 comprises the cellular interface 110, a digital signal processor (DSP) 120, a receiver (RX) 121, a transmitter (TX) 122, and one or more local oscillators (LOs) 123 and enables the communication with one or more public land mobile networks (PLMNs).
  • the digital signal processor (DSP) 120 sends communication signals 124 to the transmitter (TX) 122 and receives communication signals 125 from the receiver (RX) 121.
  • the digital signal processor 120 also provides for the receiver control signals 126 and transmitter control signal 127.
  • the gain levels applied to communication signals in the receiver (RX) 121 and transmitter (TX) 122 may be adaptively controlled through automatic gain control algorithms implemented in the digital signal processor (DSP) 120.
  • DSP digital signal processor
  • Other transceiver control algorithms could also be implemented in the digital signal processor (DSP) 120 in order to provide more sophisticated control of the transceiver 121/122.
  • a single local oscillator (LO) 123 may be used in conjunction with the transmitter (TX) 122 and receiver (RX) 121.
  • LO local oscillator
  • TX transmitter
  • RX receiver
  • a plurality of local oscillators can be used to generate a plurality of corresponding frequencies.
  • although the mobile device 10 depicted in Figure 7 is shown with the antenna 129 as part of a diversity antenna system (not shown), the mobile device 10 could also be used with a single antenna structure for signal reception as well as transmission.
  • Information, which includes both voice and data information, is communicated to and from the cellular interface 110 via a data link to the digital signal processor (DSP) 120.
  • DSP digital signal processor
  • the detailed design of the cellular interface 110, such as frequency band, component selection, power level, etc., will be dependent upon the wireless network in which the mobile device 10 is intended to operate.
  • the mobile device 10 may then send and receive communication signals, including both voice and data signals, over the wireless network.
  • Signals received by the antenna 129 from the wireless network are routed to the receiver 121, which provides for such operations as signal amplification, frequency down conversion, filtering, channel selection, and analog to digital conversion. Analog to digital conversion of a received signal allows more complex communication functions, such as digital demodulation and decoding, to be performed using the digital signal processor (DSP) 120.
  • DSP digital signal processor
  • signals to be transmitted to the network are processed, including modulation and encoding, for example, by the digital signal processor (DSP) 120 and are then provided to the transmitter 122 for digital to analog conversion, frequency up conversion, filtering, amplification, and transmission to the wireless network via the antenna 129.
  • DSP digital signal processor
  • the microprocessor / micro-controller (μC) 110, which may also be designated as a device platform microprocessor, manages the functions of the mobile device 10.
  • Operating system software 149 used by the processor 110 is preferably stored in a persistent store such as the non-volatile memory 140, which may be implemented, for example, as a Flash memory, battery backed-up RAM, any other non-volatile storage technology, or any combination thereof.
  • the non-volatile memory 140 includes a plurality of high-level software application programs or modules, such as a voice communication software application 142, a data communication software application 141, an organizer module (not shown), or any other type of software module (not shown).
  • These modules are executed by the processor 100 and provide a high-level interface between a user of the mobile device 10 and the mobile device 10.
  • This interface typically includes a graphical component provided through the display 135 controlled by a display controller 130 and input/output components provided through a keypad 175 connected via a keypad controller 170 to the processor 100, an auxiliary input/output (I/O) interface 200, and/or a short-range (SR) communication interface 180.
  • I/O auxiliary input/output
  • SR short-range
  • the auxiliary I/O interface 200 comprises especially a USB (universal serial bus) interface, serial interface, MMC (multimedia card) interface and related interface technologies/standards, and any other standardized or proprietary data communication bus technology, whereas the short-range communication interface 180 is a radio frequency (RF) low-power interface that includes especially WLAN (wireless local area network) and Bluetooth communication technology, or an IRDA (infrared data access) interface.
  • RF low-power interface technology should especially be understood to include any IEEE 802.xx standard technology, the description of which is obtainable from the Institute of Electrical and Electronics Engineers.
  • the auxiliary I/O interface 200 as well as the short-range communication interface 180 may each represent one or more interfaces supporting one or more input/output interface technologies and communication interface technologies, respectively.
  • the operating system, specific device software applications or modules, or parts thereof, may be temporarily loaded into a volatile store 150 such as a random access memory (typically implemented on the basis of DRAM (dynamic random access memory) technology for faster operation).
  • received communication signals may also be temporarily stored to volatile memory 150, before permanently writing them to a file system located in the non-volatile memory 140 or any mass storage preferably detachably connected via the auxiliary I/O interface for storing data.
  • An exemplary software application module of the mobile device 10 is a personal information manager application providing PDA functionality including typically a contact manager, calendar, a task manager, and the like. Such a personal information manager is executed by the processor 100, may have access to the components of the mobile device 10, and may interact with other software application modules. For instance, interaction with the voice communication software application allows for managing phone calls, voice mails, etc., and interaction with the data communication software application enables managing SMS (short message service), MMS (multimedia messaging service), e-mail communications and other data transmissions.
  • the non-volatile memory 140 preferably provides a file system to facilitate permanent storage of data items on the device including particularly calendar entries, contacts etc. The ability for data communication with networks, e.g.
  • the application modules 141 to 149 represent device functions or software applications that are configured to be executed by the processor 100.
  • a single processor manages and controls the overall operation of the mobile device as well as all device functions and software applications. Such a concept is applicable for today's mobile devices.
  • the implementation of enhanced multimedia functionalities includes, for example, reproducing of video streaming applications, manipulating of digital images, and capturing of video sequences by integrated or detachably connected digital camera functionality.
  • the implementation may also include gaming applications with sophisticated graphics and the necessary computational power.
  • a universal processor is designed for carrying out a multiplicity of different tasks without specialization to a preselection of distinct tasks
  • a multi-processor arrangement may include one or more universal processors and one or more specialized processors adapted for processing a predefined set of tasks. Nevertheless, the implementation of several processors within one device, especially a mobile device such as mobile device 10, traditionally requires a complete and sophisticated re-design of the components.
  • SoC system-on-a-chip
  • SoC system-on-a-chip
  • a typical processing device comprises a number of integrated circuits that perform different tasks.
  • These integrated circuits may include especially microprocessor, memory, universal asynchronous receiver-transmitters (UARTs), serial/parallel ports, direct memory access (DMA) controllers, and the like.
  • UART universal asynchronous receiver- transmitter
  • DMA direct memory access
  • VLSI very-large-scale integration
  • the device 10 is equipped with a module for scalable encoding 105 and scalable decoding 106 of video data according to the inventive operation of the present invention.
  • said modules 105, 106 may individually be used.
  • the device 10 is adapted to perform video data encoding or decoding respectively.
  • Said video data may be received by means of the communication modules of the device or it also may be stored within any imaginable storage means within the device 10.
  • Video data can be conveyed in a bitstream between the device 10 and another electronic device in a communications network.
  • a mobile terminal may be equipped with an encoder in a server or decoder in a DIMS client as described above.
  • the mobile terminal may have both the encoder and the decoder.
  • This invention covers the inclusion of different variants of DIMS RAPs at the beginning of each FEC source block.
  • the variants include:

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Mobile Radio Communication Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Detection And Prevention Of Errors In Transmission (AREA)

Abstract

In an RTP stream having a plurality of FEC frames containing source blocks of media packets, random access points are inserted in front of the source blocks so as to allow a media decoder to decode the media packets as soon as FEC decoding is finished. In particular, the media packets contain forward error correction (FEC) source blocks for a multimedia broadcast/multicast-based streaming service content. As the multimedia broadcast/multicast streaming service includes dynamic interactive multimedia scene content where the source RTP packets of different media along with the FEC RTP packets are transmitted as separate RTP streams, the inclusion of the random access point facilitates immediate rendering of the dynamic interactive multimedia scene content after FEC decoding, thus reducing the tune-in latency.

Description

METHOD AND DEVICE FOR ASSEMBLING FORWARD ERROR CORRECTION FRAMES IN MULTIMEDIA STREAMING
Field of the Invention The present invention relates generally to the assembly of forward error correction frames for groups of coded media packets and, more particularly, to the forward error correction frames in multimedia streaming.
Background of the Invention Most packet based communication networks, especially Internet Protocol (IP) networks without guaranteed quality of service, suffer from a variable amount of packet losses or errors. Those losses can stem from many sources, for example router or transmission segment overload, or bit errors in packets that lead to their deletion. It should be understood that packet losses are a common operation point in most packet network architectures, and not a network failure. Media transmission, especially the transmission of compressed video, suffers greatly from packet losses.
Annoying artifacts in a media presentation resulting from errors in a media transmission can further be avoided by many different means during the media coding process. However, adding redundancy bits during a media coding process is not possible for pre-coded content, and is normally less efficient than optimal protection mechanisms in the channel coding using a forward error correction (FEC).
Forward Error Correction works by calculating a number of redundant bits over the to-be-protected bits in the various to-be-protected media packets, adding those bits to FEC packets, and transmitting both the media packets and the FEC packets. At the receiver, the FEC packets can be used to check the integrity of the media packets and to reconstruct media packets that may be missing. Henceforth, the media packets and the FEC packets which are protecting those media packets will be called a FEC frame. Examples of the FEC frame are shown in Figure 1. As shown in Figure 1, a media GOP stream 300 comprises a media GOP 310 and a media GOP 320 separated by a boundary 315. The FEC structure 500 comprises a FEC frame 510 and a FEC frame 520 separated by a boundary 515. In addition to the media packets 514, the FEC frame 510 also contains an FEC packet 512 and two padding packets 516. Likewise, the FEC frame 520 contains an FEC packet in addition to the media packets 524. The FEC frames 510, 520 are generally longer than the media GOPs. As such, the FEC frames are not aligned with the media GOPs.
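The patent does not tie an FEC frame to any particular code. As a minimal illustration only (the single-XOR-parity scheme and all names below are assumptions; deployed systems use stronger codes), the following sketch builds one parity packet over a block of media packets and recovers a single lost packet from it:

```python
# Minimal sketch of packet-level FEC: one XOR parity packet protects a block
# of media packets; the media packets plus the parity packet form an "FEC frame".

def xor_parity(packets):
    """Byte-wise XOR of all media packets, padded to the longest packet."""
    length = max(len(p) for p in packets)
    parity = bytearray(length)
    for p in packets:
        for i, b in enumerate(p):
            parity[i] ^= b
    return bytes(parity)

def recover_one_loss(received, parity, block_size):
    """Recover at most one missing media packet (received maps index -> packet).
    Real schemes also signal original packet lengths; padding is kept here."""
    missing = [i for i in range(block_size) if i not in received]
    if len(missing) == 1:
        buf = bytearray(parity)
        for p in received.values():
            for i, b in enumerate(p):
                buf[i] ^= b
        received[missing[0]] = bytes(buf)
    return received

media_packets = [b"pkt-0", b"pkt-1", b"pkt-2"]
fec_frame = media_packets + [xor_parity(media_packets)]   # one FEC frame
```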
Most FEC schemes intended for error protection allow the number of to-be-protected media packets and the number of FEC packets to be chosen adaptively, to select the strength of the protection and to meet the delay constraints of the FEC subsystem.
Packet based FEC in the sense discussed above requires a synchronization of the receiver to the FEC frame structure, in order to take advantage of the FEC. That is, a receiver has to buffer all media and FEC packets of a FEC frame before error correction can commence. Video coding schemes, and increasingly some audio coding schemes, for example, use so-called predictive coding techniques. Such techniques predict the content of a later video picture or audio frame from previous pictures or audio frames, respectively. In the following, video pictures and audio frames will both be referred to as "pictures", in order to distinguish them from FEC frames. By using predictive coding techniques, the compression scheme can be very efficient, but it also becomes increasingly vulnerable to errors the longer the prediction chain becomes. Hence, so-called key pictures, or the equivalent of non-predictively coded audio frames, both referred to as key pictures hereinafter, are inserted from time to time. This technique re-establishes the integrity of the prediction chain by using only non-predictive coding techniques. It is not uncommon that a key picture is 5 to 20 times bigger than a predictively coded picture. Each encoded picture may correspond, for example, to one to-be-protected media packet.
Following the conventions of MPEG-2 visual, the picture sequence starting with a key picture and followed by zero or more non-key pictures is henceforth called a Group of Pictures (GOP). In digital TV, a GOP consists normally of no more than six pictures. In streaming applications, however, GOP sizes are often chosen much bigger. Some GOPs can have hundreds of pictures in order to take advantage of the better coding efficiency of predictively coded pictures. For that reason, the "tune in" to such a sequence can take several seconds.
FEC schemes can be designed to be more efficient when FEC frames are big in size, for example, when they comprise some hundred packets. Similarly, most media coding schemes gain efficiency when choosing larger GOP sizes, since a GOP contains only one single key picture which is, statistically, much larger than the other pictures of the GOP. However, both large FEC frames and large GOP sizes require the receiver to synchronize to the respective structures. For FEC frames this implies buffering of the whole FEC frame as received, and correcting any correctable errors. For media GOPs this implies the parsing and discarding of those media packets that do not form the start of a GOP (the key frame).
In U.S. Patent Application Publication No. 2006/0107189 A1, it is stated that, in order to reduce a buffer delay at a decoding end, the FEC frames should be aligned with the groups of media packets. To that end, the encoder should be able to determine, for a group of coded media packets contained in an FEC frame, the number of next subsequent groups of coded media packets which fit completely into that FEC frame, and to select all coded media packets associated with the group or groups of coded media packets so determined for that FEC frame. For alignment purposes, it is possible to equalize the size of selected packets by adding predetermined data to some of them. Examples of aligned FEC frames and the groups of media packets are shown in Figure 2. As shown in Figure 2, a media GOP stream 400 comprises a media GOP 410 and a media GOP 420 separated by a boundary 415. The FEC structure 600 comprises a FEC frame 610 and a FEC frame 620 separated by a boundary 615. Although the FEC frames 610 and 620 also contain FEC packets and the media packets, they can be aligned with the GOPs.
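The alignment idea can be sketched as follows (illustrative names and a fixed per-frame packet capacity are assumptions, not the cited publication's exact procedure): whole GOPs are packed into each FEC frame and any remainder is filled with padding packets, so that FEC frame boundaries coincide with GOP boundaries.

```python
def build_aligned_fec_frames(gops, frame_capacity):
    """Pack whole GOPs (each a list of media packets) into FEC frames of at
    most frame_capacity media packets, padding each frame so that FEC frame
    boundaries coincide with GOP boundaries."""
    frames, current = [], []
    for gop in gops:
        if current and len(current) + len(gop) > frame_capacity:
            current += [b"PAD"] * (frame_capacity - len(current))  # padding packets
            frames.append(current)
            current = []
        # a GOP larger than the capacity simply gets its own oversized frame
        current.extend(gop)
    if current:
        current += [b"PAD"] * max(0, frame_capacity - len(current))
        frames.append(current)
    return frames
```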
FEC can be applied to rich media content. Rich media content generally refers to content that is graphically rich and combines multiple media, including graphics, text, video and audio, preferably delivered through a single interface. Rich media dynamically changes over time and could respond to user interaction.
Streaming of rich media content is becoming more and more important for delivering visually rich content for real-time transport, especially within the Multimedia Broadcast/Multicast Services (MBMS) and Packet-switched Streaming Service (PSS) architectures in 3GPP. PSS provides a framework for Internet Protocol (IP) based streaming applications in 3G networks, especially over point-to-point bearers. MBMS streaming services facilitate resource efficient delivery of popular real-time content to multiple receivers in a 3G mobile environment. Instead of using different point-to-point (PtP) bearers to deliver the same content to different mobiles, a single point-to-multipoint (PtM) bearer is used to deliver the same content to different mobiles in a given cell. The streamed content may consist of video, audio, XML (eXtensible Markup Language) content such as Scalable Vector Graphics (SVG), timed-text and other supported media. The content may be pre-recorded or generated from a live feed. SVG allows for three types of graphic objects: vector graphic shapes, images and text. Graphic objects can be grouped, transformed and composed from previously rendered objects. SVG content can be arranged in groups such that each of them can be processed and displayed independently from groups that are delivered later in time. Groups are also referred to as scenes.
Until recently, applications for mobile devices were text based with limited interactivity. However, as more wireless devices are coming equipped with color displays and more advanced graphics rendering libraries, consumers will demand a rich media experience from all their wireless applications. A real-time rich media content streaming service is imperative for mobile terminals, especially in the area of MBMS, PSS, and Multi-Media Streaming (MMS) services. Rich media applications particularly in the Web services domain include XML based content such as:
SVGT 1.2 - A language for describing two-dimensional graphics in XML. SVG allows for three types of graphic objects: vector graphic shapes (e.g., paths consisting of straight lines and curves), multimedia (such as raster images, video and audio), and text. SVG drawings can be interactive (using the DOM event model) and dynamic. Animations can be defined and triggered either declaratively (i.e., by embedding SVG animation elements in SVG content) or via scripting. Sophisticated applications of SVG are possible by use of a supplemental scripting language which accesses the SVG Micro Document Object Model (μDOM), which provides complete access to all elements, attributes and properties. A rich set of event handlers can be assigned to any SVG graphical object. Because of its compatibility and leveraging of other Web standards (such as CDF), features like scripting can be done on XHTML (Extensible HyperText Markup Language) and SVG elements simultaneously within the same Web page.
SMIL 2.0 - The Synchronized Multimedia Integration Language (SMIL) enables simple authoring of interactive audiovisual presentations. SMIL is typically used for "rich media"/multimedia presentations which integrate streaming audio and video with images, text or any other media type.
CDF - The Compound Documents Format (CDF) working group is producing recommendations on combining separate component languages (e.g. XML-based languages, elements and attributes from separate vocabularies), like XHTML, SVG, MathML, and SMIL, with a focus on user interface markups. When combining user interface markups, specific problems have to be resolved that are not addressed by the individual markup specifications, such as the propagation of events across markups, the combination of rendering, or the user interaction model with a combined document. The Compound Document Formats working group will address this type of problem. This work is divided into phases and two technical solutions: combining by reference and by inclusion.
In the current 3GPP DIMS (Dynamic Interactive Multimedia Scenes) activity, the streaming of DIMS content has been recognized as an important component of a dynamic rich media service for enabling real time, continuous realization of content at the client. A DIMS content stream typically consists of a series of RTP (Real-time Transport Protocol) packets whose payload is an SVG scene, SVG scene update(s), and coded video and audio packets. These RTP packets are encapsulated by UDP (User Datagram Protocol)/IP headers and transmitted over the 3G networks. The packets may be lost due to transmission errors over the wireless links or buffer overflows at the intermediate routers of the 3G networks.
3GPP SA4 defined some media independent packet loss recovery mechanisms at transport layer and above in the MBMS and PSS frameworks. In MBMS, application layer FEC is used for packet loss recovery for both streaming and download services. In PSS, RTP layer retransmissions are used for packet loss recovery. For unicast download delivery, TCP (Transmission Control Protocol) takes care of the reliable delivery of the content.
For rich media based MBMS streaming services, it is very likely that the users tune in to the service at arbitrary instants during the streaming session. The clients start receiving the packets as soon as they tune in to the service and may have to wait for a certain time period to start decoding/rendering of the received rich media content. This time period is also called the "tune-in delay". For good user experience, it is desirable that the clients start rendering the content as soon as possible from the time they receive the content. Thus one requirement of DIMS is to allow for efficient and quick tune-in of DIMS clients to the broadcast/multicast streaming service. Quick tune-in can be enabled by media level solutions, transport level solutions or a combination of the two.
When streaming rich media (DIMS) content over broadcast/multicast channels of the 3G wireless networks, it is essential to protect the content from packet losses by using an application layer forward error correction (AL-FEC) mechanism. An AL-FEC algorithm is typically applied over a source block of media RTP packets to generate redundant FEC RTP packets. As mentioned earlier and illustrated in Figures 1 and 2, the media and the associated FEC packets are collectively referred to as an "FEC frame". The FEC frame is transmitted over the lossy network. A receiver would be able to recover any lost media RTP packets if it receives a sufficient total number of media and FEC RTP packets from that FEC frame. Currently, the length of the above-mentioned source block is configurable. AL-FEC is more effective if large source blocks are used. On the other hand, the tune-in delay is directly proportional to the length of the source block.
In a typical rich media streaming session that involves SVG, audio and video media, at the sender side the source RTP packets of each medium are bundled together to form a source block for FEC protection. One or more FEC RTP packets are generated from this source block using an FEC encoding algorithm. The source RTP packets of the different media, along with the FEC RTP packets, are transmitted as separate RTP streams, as shown in Figure 3. As shown in Figure 3, the DIMS RTP stream contains a plurality of FEC frames 610₁, 610₂ and 610₃, for example. These FEC frames may contain the source blocks for different DIMS media or for the same medium. The FEC frame 610₁ comprises a source block 614₁ of source RTP packets and an FEC RTP packet 612₁. On the receiver side, the client buffers the received RTP packets (both source and FEC) for a sufficient duration and tries to reconstruct the above-mentioned source block. If any source RTP packets are missing, it tries to recover them by applying the FEC decoding algorithm. The length of the FEC source block is a critical factor in determining the tune-in delay. The client has to buffer for the duration of an entire FEC source block. If a client starts receiving data in the middle of the current FEC source block, it may have to discard the data from the current source block and wait to receive the next source block from beginning to end. Hence, on average, it has to wait for 1.5 times the FEC source block duration.
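The sender-side bundling described at the start of this paragraph can be sketched as follows. This is a minimal illustration only: the names (MediaPacket, FecFrame, assemble_fec_frame) and the trivial XOR parity used as a stand-in for the FEC code are assumptions made for this sketch and do not correspond to the actual MBMS Raptor FEC or RTP payload formats.

```python
# Illustrative sketch of bundling source RTP packets into a source block and
# appending repair packets; names and the XOR "code" are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class MediaPacket:
    medium: str          # e.g. "svg", "audio", "video"
    seq: int             # RTP sequence number
    payload: bytes

@dataclass
class FecFrame:
    source_block: List[MediaPacket]
    repair_packets: List[bytes] = field(default_factory=list)

def fec_encode(source_block: List[MediaPacket], n_repair: int) -> List[bytes]:
    """Placeholder for a systematic AL-FEC code; here a trivial XOR parity."""
    parity = bytearray(max(len(p.payload) for p in source_block))
    for p in source_block:
        for i, b in enumerate(p.payload):
            parity[i] ^= b
    return [bytes(parity)] * n_repair

def assemble_fec_frame(packets: List[MediaPacket], n_repair: int = 1) -> FecFrame:
    """Bundle the media RTP packets of one source block and append repair packets."""
    return FecFrame(source_block=packets, repair_packets=fec_encode(packets, n_repair))
```

A real implementation would generate repair symbols with a systematic code such as Raptor and carry them in FEC RTP packets; the sketch only shows that the repair packets are computed over, and appended to, one source block.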
After FEC decoding, the packets are sent to the various media decoders at the receiver. The media decoders may not be able to decode from arbitrary points in the compressed media bit stream. If the FEC frames and the media GOPs are not aligned, then on average the decoder may have to discard one half of the current media GOP data. The overall tune-in delay can thus be expressed as
Tune-in delay = 1.5 * (FEC source block duration) + 0.5 * (media GOP duration) (1),
where the FEC source block duration is the buffering delay of the FEC frame (in isochronous networks this is proportional to the size of the FEC frame), and the media GOP duration is the buffering delay of the media GOP. The worst-case buffer sizes have to be chosen such that a complete FEC frame fits into the buffer of the FEC decoder and a complete GOP fits into the buffer of the media decoder.
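As a purely illustrative numeric reading of Equation (1) — the 4-second source block and 2-second GOP below are assumed values, not durations mandated by MBMS or DIMS:

Tune-in delay = 1.5 * 4 s + 0.5 * 2 s = 7 s.

If the second term could be eliminated, the average tune-in delay in this example would drop from 7 s to 6 s; achieving exactly that is the aim of the mechanism summarized below.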
Summary of the Invention
The present invention provides a method and device wherein a random access point is inserted at the beginning of each forward error correction (FEC) source block for multimedia broadcast/multicast-based streaming service content. As such, the media decoder can start decoding as soon as FEC decoding is finished and the second term in Equation 1 can be eliminated, thus reducing the tune-in delay. The multimedia broadcast/multicast streaming service includes dynamic interactive multimedia scene content where the source RTP packets of different media along with the FEC RTP packets are transmitted as separate RTP streams. The inclusion of the random access point facilitates immediate rendering of the dynamic interactive multimedia scene content after FEC decoding, thus reducing the tune-in latency.
Thus, the first aspect of the present invention is a method for use in multimedia streaming wherein a packet stream is provided to a multimedia client capable of decoding media packets of a plurality of media, and the encoded media packets of each medium are arranged in frames, each frame having at least a source block following at least one forward error correction packet. The method comprises inserting a random access point in at least some of the frames such that the random access point is located between the source block and the forward error correction packet.
The second aspect of the present invention is a module for use in a server in multimedia streaming wherein a packet stream is provided from the server to a multimedia client capable of decoding media packets of a plurality of media, and the encoded media packets of each medium are arranged in frames, each frame having at least a source block following at least one forward error correction packet. The module is adapted for inserting a random access point in at least some of the frames such that the random access point is located between the source block and the forward error correction packet.

The third aspect of the present invention is a server in a communication system, the communication system comprising one or more multimedia clients capable of decoding media packets of a plurality of media, and the encoded media packets of each medium are arranged in frames, each frame having at least a source block following at least one forward error correction packet. The server comprises a generation module for inserting a random access point in at least some of the frames such that the random access point is located between the source block and the forward error correction packet.
The fourth aspect of the present invention is a multimedia client adapted for receiving a multimedia bitstream, the bitstream comprising a plurality of encoded media packets arranged in frames, each frame having at least a source block following at least one forward error correction packet and wherein at least one random access point is inserted between the source block and the forward error correction packet. The client comprises a first decoder for forward error correction decoding and at least one media decoder for decoding the source block of encoded media packets after the forward error correction decoding based on the random access point.
The fifth aspect of the present invention is a software application product comprising a storage medium having a software application for use in multimedia streaming wherein a packet stream is provided to a multimedia client capable of decoding media packets of a plurality of media, and the encoded media packets of each medium are arranged in frames, each frame having at least a source block following at least one forward error correction packet. The software application comprises programming code for inserting a random access point in at least some of the frames such that the random access point is located between the source block and the forward error correction packet.

The sixth aspect of the present invention is a software application product comprising a storage medium having a software application for use in a multimedia client, the client adapted for receiving a multimedia bitstream, the bitstream comprising a plurality of encoded media packets arranged in frames, each frame having at least a source block following at least one forward error correction packet and wherein at least one random access point is inserted between the source block and the forward error correction packet. The software application comprises programming code for forward error correction decoding and programming code for decoding the source block of encoded media packets after the forward error correction decoding based on the random access point.
The present invention will become apparent upon reading the description taken in conjunction with Figures 1 to 7.
Brief Description of the Drawings
Figure 1 is a timing diagram showing a plurality of GOPs and the associated FEC frames which are not aligned with the GOPs.
Figure 2 is a timing diagram showing a plurality of GOPs and the associated FEC frames which are aligned with the GOPs.
Figure 3 shows the FEC frames in multimedia streaming.
Figure 4 shows the insertion of a random access point at the beginning of each FEC source block for multimedia streaming, according to one embodiment of the present invention.
Figure 5a shows an FEC frame structure for DIMS, according to one embodiment of the present invention.
Figure 5b shows an FEC frame structure for DIMS, according to another embodiment of the present invention.
Figure 6 is a schematic representation of a communication system having a server and a client wherein random access points are inserted in FEC frames.
Figure 7 is a block diagram of an electric device having at least one of the multimedia streaming encoder and the decoder, according to the present invention.
Detailed Description of the Invention
In streaming of rich media content, the streamed content may consist of video, audio, XML content such as SVG, timed text and other supporting media. An SVG stream generally consists of a scene and a series of scene updates. It is possible to consider the SVG scene as a starting point for decoding in an SVG decoder at the receiver after FEC decoding.
According to the present invention, it is advantageous to insert a random access point wherever a starting point for decoding is possible at a media decoder at the receiver after FEC decoding. In addition to inserting a random access point at the beginning of each FEC source block for an XML stream such as SVG, it is advantageous to insert a random access point at the beginning of each FEC source block for the video stream and at the beginning of each FEC source block for the audio stream. The current MBMS FEC framework uses media bundling for FEC protection purposes, i.e., the same FEC frame contains all types of media RTP packets (e.g., SVG, audio, video). In such an arrangement, it is advantageous to have the random access points of the three media (in any order) at the beginning of the FEC frame. Such an inclusion of the random access point facilitates immediate rendering of the DIMS content after FEC decoding. Figure 4 shows the insertion of a random access point at the beginning of each source block of an FEC frame. As shown in Figure 4, a DIMS RTP stream comprises FEC frames 710₁, 710₂ and 710₃, for example. These FEC frames may contain source blocks for different DIMS media such as video, audio, and timed text, or for the same medium. The FEC frame 710₁ comprises a source block 714₁ of source RTP packets, a random access point 718₁ and an FEC RTP packet 712₁. The FEC frame 710₂ comprises a source block 714₂ of source RTP packets, a random access point 718₂ and an FEC RTP packet 712₂. The FEC frame 710₃ comprises a source block 714₃ of source RTP packets, a random access point 718₃ and an FEC RTP packet 712₃.
It should be noted that an FEC frame can have more than one FEC packet so that the media bitstream is more robust against packet loss. Furthermore, while it is natural to have one random access point after the FEC packet or packets, as shown in Figure 5a, it is also possible to have more than one random access point in one FEC frame, for example to signal a scene change, as shown in Figure 5b. The FEC packets are normally at the end of the FEC frame, while the RAP packets are at the beginning of the FEC frame. A random access point in the middle of an FEC frame is useful for quick tune-in. It is also useful in case of an FEC decoding failure: in such a case, the first random access point is missing, but the subsequent random access points in the same FEC frame can be used for media decoding.
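The frame layouts of Figures 5a and 5b can be sketched as follows, reusing the hypothetical MediaPacket, FecFrame and fec_encode helpers from the earlier sketch; build_frame_with_rap and RandomAccessPoint are likewise illustrative names, not part of any DIMS or MBMS specification.

```python
# Sketch of the frame layouts of Figures 5a and 5b; all names are illustrative.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class RandomAccessPoint:
    payload: bytes       # e.g. a complete SVG scene or an IDR video frame

def build_frame_with_rap(rap: RandomAccessPoint,
                         source_packets: List[MediaPacket],
                         mid_frame_rap: Optional[RandomAccessPoint] = None,
                         n_repair: int = 1) -> FecFrame:
    """Place a RAP at the head of the source block (Fig. 5a); optionally insert a
    second RAP inside the block, e.g. at a scene change (Fig. 5b)."""
    block: List[MediaPacket] = [MediaPacket("rap", 0, rap.payload)] + list(source_packets)
    if mid_frame_rap is not None:
        # The mid-point is an arbitrary illustrative position for the extra RAP.
        block.insert(len(block) // 2, MediaPacket("rap", 0, mid_frame_rap.payload))
    # FEC repair packets are generated over the whole block and appended at the end.
    return FecFrame(source_block=block, repair_packets=fec_encode(block, n_repair))
```

In practice the additional random access point would be placed at the scene change it signals rather than at the middle of the block; the sketch only shows that it sits inside the protected source block, before the FEC packets.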
There are several streaming-based use cases for assembling RAPs within FEC blocks for tune-in purposes, some of which are part of a genre of rich media services, including:
1) Interactive Mobile TV services - This service is understood as the ability to provide deterministic rendering and behavior of rich-media content, including audio-video content, text, images, and XML-based content such as SVG, along with TV and radio channels, altogether in the end-user interface. The service provides convenient navigation through content in a single application or service and allows synchronized interaction, locally or remotely, such as voting and personalization (e.g., related menus or sub-menus, advertising and content as a function of the end-user profile or service subscription).
This use case is described in 4 steps corresponding to 4 services and sub-services available in an iTV mobile service:
• Mosaic menu: TV Channel landscape.
• Electronic Program Guide and triggering of related iTV service.
• iTV service.
• Personalized menu "sport news."
2) Live enterprise data feed - This service includes stock tickers that provide streaming of real-time quotes, live intra-day charts with technical indicators, news monitoring, weather alerts, charts, business updates, etc.
3) Live Chat - The live chat service can be incorporated within a web cam or video channel, or a rich-media blog service. End-users can register, save their surname, and exchange messages. Messages appear dynamically in the live chat service along with rich-media data provided by the end-user. The chat service can be either private or public, in one or multiple channels at the same time. End-users are dynamically alerted to new messages from other users. Dynamic updates of messages within the service occur without reloading a complete page.
4) Karaoke - This service displays a music TV channel or video clip catalog along with the lyrics of a song, with fluid-like animation on the text characters to be sung (e.g., smooth color transitions of fonts, scrolling of text). The end-user can download a song of his choice along with the complete animation by selecting an interactive button.
A schematic representation of a communication system having a server and a client, according to an embodiment of the present invention, is shown in Figure 6. As shown in Figure 6, the communication system is capable of providing multimedia broadcast/multicast services. Thus, the communication system has at least one server and one client for multimedia streaming. In particular, the server is adapted for providing rich media (DIMS) content over the broadcast/multicast channels of a wireless network, such as the Internet. The server is adapted for acquiring, receiving and/or storing DIMS content. For example, the DIMS content includes scenes and scene updates. The DIMS content can be conveyed to an FEC frame generator which is adapted to insert random access points in FEC frames. More specifically, the random access points are inserted at the beginning of a source block for an MBMS-based streaming service carrying DIMS content. Advantageously, the FEC frame generator is adapted to provide FEC frames aligned with the DIMS media packets, with the random access points included. The DIMS packets with aligned FEC frames are transmitted in a bitstream over the broadcast/multicast channels so as to allow one or more DIMS clients to receive and decode the bitstream. The FEC frame generator can have a processing component running FEC encoding software with programming code for aligning the FEC frames as well as for inserting the random access points. In general, each DIMS client has an FEC decoder for error correction purposes.
The FEC decoder can have a processing component running FEC decoding software. After FEC decoding, the DIMS content is conveyed to a number of media decoders. The decoded content from each media decoder is provided to an output module. For example, if the media decoder is a video decoder, then the decoded content is provided to a screen for display. As shown in Figure 6, three different media decoders and three corresponding output modules are shown. One of the output modules can be a renderer adapted for SVG drawings, for example. SVG drawings can be interactive and dynamic and can be used in animation, for example.
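A hedged receiver-side sketch of the resulting tune-in behaviour is given below, again using only hypothetical names and objects (FecFrame objects, a fec_decode callable, per-medium decoder objects). It illustrates only the order of operations: buffer one complete FEC frame, FEC-decode it, then start every media decoder at the random access point that heads the recovered source block.

```python
# Illustrative receiver-side tune-in flow; all names are assumptions for this sketch.
def tune_in(frames, fec_decode, media_decoders):
    """frames: iterator over received FecFrame objects, starting mid-stream.
    fec_decode: callable that recovers the source block of one FEC frame.
    media_decoders: mapping from medium name to a decoder object with decode()."""
    frame = next(frames)          # buffer until one complete FEC frame has arrived
    packets = fec_decode(frame)   # FEC decoding recovers any lost source RTP packets
    # The first packet of the recovered source block is the random access point,
    # so each media decoder can start immediately; no extra wait for a GOP boundary.
    for pkt in packets:
        media_decoders[pkt.medium].decode(pkt)
```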
Referring now to Figure 7, Figure 7 shows an electronic device that is equipped with at least one of the server module and the DIMS client module shown in Figure 6. According to one embodiment of the present invention, the electronic device is a mobile terminal. The mobile device 10 shown in Figure 7 is capable of cellular data and voice communications. It should be noted that the present invention is not limited to this specific embodiment, which represents one of a multiplicity of different embodiments. The mobile device 10 includes a (main) microprocessor or micro-controller 100 as well as components associated with the microprocessor controlling the operation of the mobile device. These components include a display controller 130 connecting to a display module 135, a non-volatile memory 140, a volatile memory 150 such as a random access memory (RAM), an audio input/output (I/O) interface 160 connecting to a microphone 161, a speaker 162 and/or a headset 163, a keypad controller 170 connected to a keypad 175 or keyboard, an auxiliary input/output (I/O) interface 200, and a short-range communications interface 180. Such a device also typically includes other device subsystems shown generally at 190.
The mobile device 10 may communicate over a voice network and/or may likewise communicate over a data network, such as any public land mobile networks (PLMNs) in form of e.g. digital cellular networks, especially GSM (global system for mobile communication) or UMTS (universal mobile telecommunications system). Typically the voice and/or data communication is operated via an air interface, i.e. a cellular communication interface subsystem in cooperation with further components (see above) to a base station (BS) or node B (not shown) being part of a radio access network (RAN) of the infrastructure of the cellular network.
The cellular communication interface subsystem as depicted illustratively in Figure 7 comprises the cellular interface 110, a digital signal processor (DSP) 120, a receiver (RX) 121, a transmitter (TX) 122, and one or more local oscillators (LOs) 123 and enables the communication with one or more public land mobile networks (PLMNs). The digital signal processor (DSP) 120 sends communication signals 124 to the transmitter (TX) 122 and receives communication signals 125 from the receiver (RX) 121. In addition to processing communication signals, the digital signal processor 120 also provides for the receiver control signals 126 and transmitter control signal 127. For example, besides the modulation and demodulation of the signals to be transmitted and signals received, respectively, the gain levels applied to communication signals in the receiver (RX) 121 and transmitter (TX) 122 may be adaptively controlled through automatic gain control algorithms implemented in the digital signal processor (DSP) 120. Other transceiver control algorithms could also be implemented in the digital signal processor (DSP) 120 in order to provide more sophisticated control of the transceiver 121/122.
In case the communications of the mobile device 10 through the PLMN occur at a single frequency or a closely-spaced set of frequencies, a single local oscillator (LO) 123 may be used in conjunction with the transmitter (TX) 122 and receiver (RX) 121. Alternatively, if different frequencies are utilized for voice/data communications or for transmission versus reception, then a plurality of local oscillators can be used to generate a plurality of corresponding frequencies.
Although the mobile device 10 depicted in Figure 7 is shown with the antenna 129 or with a diversity antenna system (not shown), the mobile device 10 could also be used with a single antenna structure for signal reception as well as transmission. Information, which includes both voice and data information, is communicated to and from the cellular interface 110 via a data link to the digital signal processor (DSP) 120. The detailed design of the cellular interface 110, such as frequency band, component selection, power level, etc., will depend upon the wireless network in which the mobile device 10 is intended to operate.
After any required network registration or activation procedures, which may involve the subscriber identification module (SIM) 210 required for registration in cellular networks, have been completed, the mobile device 10 may then send and receive communication signals, including both voice and data signals, over the wireless network. Signals received by the antenna 129 from the wireless network are routed to the receiver 121, which provides for such operations as signal amplification, frequency down conversion, filtering, channel selection, and analog to digital conversion. Analog to digital conversion of a received signal allows more complex communication functions, such as digital demodulation and decoding, to be performed using the digital signal processor (DSP) 120. In a similar manner, signals to be transmitted to the network are processed, including modulation and encoding, for example, by the digital signal processor (DSP) 120 and are then provided to the transmitter 122 for digital to analog conversion, frequency up conversion, filtering, amplification, and transmission to the wireless network via the antenna 129.
The microprocessor/micro-controller (μC) 100, which may also be designated as a device platform microprocessor, manages the functions of the mobile device 10. Operating system software 149 used by the processor 100 is preferably stored in a persistent store such as the non-volatile memory 140, which may be implemented, for example, as a Flash memory, battery backed-up RAM, any other non-volatile storage technology, or any combination thereof. In addition to the operating system 149, which controls low-level functions as well as (graphical) basic user interface functions of the mobile device 10, the non-volatile memory 140 includes a plurality of high-level software application programs or modules, such as a voice communication software application 142, a data communication software application 141, an organizer module (not shown), or any other type of software module (not shown). These modules are executed by the processor 100 and provide a high-level interface between a user of the mobile device 10 and the mobile device 10. This interface typically includes a graphical component provided through the display 135 controlled by a display controller 130, and input/output components provided through a keypad 175 connected via a keypad controller 170 to the processor 100, an auxiliary input/output (I/O) interface 200, and/or a short-range (SR) communication interface 180. The auxiliary I/O interface 200 comprises especially a USB (universal serial bus) interface, a serial interface, an MMC (multimedia card) interface and related interface technologies/standards, and any other standardized or proprietary data communication bus technology, whereas the short-range communication interface 180 is a radio frequency (RF) low-power interface that includes especially WLAN (wireless local area network) and Bluetooth communication technology, or an IrDA (infrared data access) interface. The RF low-power interface technology referred to herein should especially be understood to include any IEEE 802.xx standard technology, the description of which is obtainable from the Institute of Electrical and Electronics Engineers. Moreover, the auxiliary I/O interface 200 as well as the short-range communication interface 180 may each represent one or more interfaces supporting one or more input/output interface technologies and communication interface technologies, respectively. The operating system, specific device software applications or modules, or parts thereof, may be temporarily loaded into a volatile store 150 such as a random access memory (typically implemented on the basis of DRAM (dynamic random access memory) technology for faster operation). Moreover, received communication signals may also be temporarily stored in the volatile memory 150 before permanently writing them to a file system located in the non-volatile memory 140 or in any mass storage, preferably detachably connected via the auxiliary I/O interface, for storing data. It should be understood that the components described above represent typical components of a traditional mobile device 10, embodied herein in the form of a cellular phone. The present invention is not limited to these specific components, whose implementation is depicted merely for illustration and for the sake of completeness.
An exemplary software application module of the mobile device 10 is a personal information manager application providing PDA functionality, typically including a contact manager, a calendar, a task manager, and the like. Such a personal information manager is executed by the processor 100, may have access to the components of the mobile device 10, and may interact with other software application modules. For instance, interaction with the voice communication software application allows for managing phone calls, voice mails, etc., and interaction with the data communication software application enables managing SMS (short message service), MMS (multimedia messaging service), e-mail communications and other data transmissions. The non-volatile memory 140 preferably provides a file system to facilitate permanent storage of data items on the device, including particularly calendar entries, contacts, etc. The ability for data communication with networks, e.g. via the cellular interface, the short-range communication interface, or the auxiliary I/O interface, enables upload, download, and synchronization via such networks. The application modules 141 to 149 represent device functions or software applications that are configured to be executed by the processor 100. In most known mobile devices, a single processor manages and controls the overall operation of the mobile device as well as all device functions and software applications. Such a concept is applicable to today's mobile devices. The implementation of enhanced multimedia functionalities includes, for example, reproducing video streaming applications, manipulating digital images, and capturing video sequences by integrated or detachably connected digital camera functionality. The implementation may also include gaming applications with sophisticated graphics and the necessary computational power. One way to deal with the requirement for computational power, which has been pursued in the past, is to implement powerful and universal processor cores. Another approach for providing computational power is to implement two or more independent processor cores, which is a well-known methodology in the art. The advantages of several independent processor cores can be immediately appreciated by those skilled in the art. Whereas a universal processor is designed for carrying out a multiplicity of different tasks without specialization to a preselection of distinct tasks, a multi-processor arrangement may include one or more universal processors and one or more specialized processors adapted for processing a predefined set of tasks. Nevertheless, the implementation of several processors within one device, especially a mobile device such as mobile device 10, traditionally requires a complete and sophisticated re-design of the components.
In the following, the present invention provides a concept which allows simple integration of additional processor cores into an existing processing device implementation, enabling the omission of an expensive, complete and sophisticated redesign. The inventive concept will be described with reference to system-on-a-chip (SoC) design. System-on-a-chip (SoC) is a concept of integrating at least numerous (or all) components of a processing device into a single highly-integrated chip. Such a system-on-a-chip can contain digital, analog, mixed-signal, and often radio-frequency functions, all on one chip. A typical processing device comprises a number of integrated circuits that perform different tasks. These integrated circuits may include especially a microprocessor, memory, universal asynchronous receiver-transmitters (UARTs), serial/parallel ports, direct memory access (DMA) controllers, and the like. A universal asynchronous receiver-transmitter (UART) translates between parallel bits of data and serial bits. Recent improvements in semiconductor technology have enabled very-large-scale integration (VLSI) integrated circuits to grow significantly in complexity, making it possible to integrate numerous components of a system in a single chip. With reference to Figure 7, one or more components thereof, e.g. the controllers 130 and 170, the memory components 150 and 140, and one or more of the interfaces 200, 180 and 110, can be integrated together with the processor 100 in a single chip which finally forms a system-on-a-chip (SoC). Additionally, the device 10 is equipped with a module for scalable encoding 105 and scalable decoding 106 of video data according to the inventive operation of the present invention. By means of the CPU 100, said modules 105 and 106 may be used individually. However, the device 10 is adapted to perform video data encoding or decoding, respectively. Said video data may be received by means of the communication modules of the device, or it may also be stored within any suitable storage means within the device 10. Video data can be conveyed in a bitstream between the device 10 and another electronic device in a communications network.
A mobile terminal, according to the present invention, may be equipped with an encoder such as that in the server described above, or with a decoder such as that in the DIMS client described above. The mobile terminal may have both the encoder and the decoder.
This invention covers the inclusion of different variants of DIMS random access points (RAPs) at the beginning of each FEC source block. The variants include the following (a brief illustrative sketch follows the list):
• An entire SVG scene.
• A DIMS scene update that can replace the entire DOM tree on the client.
• Redundant RAPs that comprise redundant SVG scenes with possible references to future scene updates. Such redundant RAPs may be ignored by clients not requiring resynchronization.
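A minimal sketch of how such RAP variants might be produced is shown below. The function name make_rap, the variant identifiers and the byte markers are all assumptions made for illustration; they are not DIMS or SVG syntax.

```python
# Illustrative construction of the three RAP variants listed above; payload
# markers are placeholders only, not actual DIMS/SVG syntax.
from typing import Iterable, Optional

def make_rap(kind: str, scene: bytes,
             pending_updates: Optional[Iterable[bytes]] = None) -> bytes:
    if kind == "full_scene":
        return scene                                  # an entire SVG scene
    if kind == "replace_update":
        return b"<replaceScene/>" + scene             # update replacing the whole DOM tree
    if kind == "redundant":
        # Redundant scene plus references to future updates; clients that are
        # already synchronized may simply ignore this RAP.
        return scene + b"".join(pending_updates or [])
    raise ValueError(f"unknown RAP kind: {kind}")
```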
Although the invention has been described with respect to one or more embodiments thereof, it will be understood by those skilled in the art that the foregoing and various other changes, omissions and deviations in the form and detail thereof may be made without departing from the scope of this invention.

Claims

What is claimed is:
1. A method for use in multimedia streaming wherein a packet stream is provided to a multimedia client capable of decoding media packets of a plurality of media, and the encoded media packets of each medium are arranged in frames, each frame having at least a source block of media packets following at least one forward error correction packet, said method characterized by inserting a random access point in at least some of the frames such that the random access point is located between the source block and the forward error correction packet.
2. A module for use in a server in multimedia streaming wherein a packet stream is provided from the server to a multimedia client capable of decoding media packets of a plurality of media, and the encoded media packets of each medium are arranged in frames, each frame having at least a source block of media packets following at least one forward error correction packet, said module characterized by a processor for inserting a random access point in at least some of the frames such that the random access point is located between the source block and the forward error correction packet.
3. A server in a communication system, the communication system comprising one or more multimedia clients capable of decoding media packets of a plurality of media, and the encoded media packets of each medium are arranged in frames, each frame having at least a source block of media packets following at least one forward error correction packet, said server characterized by: a generation module for inserting a random access point in at least some of the frames such that the random access point is located between the source block and the forward error correction packet.
4. A multimedia client adapted for receiving a multimedia bitstream, the bitstream comprising a plurality of encoded media packets arranged in frames, each frame having at least a source block of media packets following at least one forward error correction packet and wherein at least one random access point is inserted between the source block and the forward error correction packet, said client characterized by: a first decoder for forward error correction decoding; and at least one media decoder for decoding the source block of encoded media packets after the forward error correction decoding based on the random access point.
5. A software application product embodied in a computer readable storage medium having a software application for use in multimedia streaming wherein a packet stream is provided to a multimedia client capable of decoding media packets of a plurality of media, and the encoded media packets of each medium are arranged in frames, each frame having at least a source block of media packets following at least one forward error correction packet, said software application characterized by: programming code for inserting a random access point in at least some of the frames such that the random access point is located between the source block and the forward error correction packet.
6. A software application product embodied in a computer readable storage medium having a software application for use in a multimedia client, the client adapted for receiving a multimedia bitstream, the bitstream comprising a plurality of encoded media packets arranged in frames, each frame having at least a source block of media packets following at least one forward error correction packet and wherein at least one random access point is inserted between the source block and the forward error correction packet, said software application characterized by: programming code for forward error correction decoding and programming code for decoding the source block of encoded media packets after the forward error correction decoding based on the random access point.
7. A communication system comprising: a server as claimed in claim 3; and one or more multimedia clients, each client comprising: a first decoder for forward error correction decoding; and at least one media decoder for decoding the source block of encoded media packets after the forward error correction decoding based on the random access point.
EP07804782A 2006-08-22 2007-08-20 METHOD AND DEVICE FOR ASSEMBLING FRONTAL ERROR CORRECTION FRAMES IN A MULTIMEDIA CONTINUOUS FLOW Withdrawn EP2055109A4 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11/508,726 US7746882B2 (en) 2006-08-22 2006-08-22 Method and device for assembling forward error correction frames in multimedia streaming
PCT/IB2007/002385 WO2008023236A2 (en) 2006-08-22 2007-08-20 Method and device for assembling forward error correction frames in multimedia streaming

Publications (2)

Publication Number Publication Date
EP2055109A2 true EP2055109A2 (en) 2009-05-06
EP2055109A4 EP2055109A4 (en) 2010-07-07

Family

ID=39107165

Family Applications (1)

Application Number Title Priority Date Filing Date
EP07804782A Withdrawn EP2055109A4 (en) 2006-08-22 2007-08-20 METHOD AND DEVICE FOR ASSEMBLING FRONTAL ERROR CORRECTION FRAMES IN A MULTIMEDIA CONTINUOUS FLOW

Country Status (9)

Country Link
US (1) US7746882B2 (en)
EP (1) EP2055109A4 (en)
KR (1) KR100959293B1 (en)
CN (2) CN104010229A (en)
AU (1) AU2007287316A1 (en)
BR (1) BRPI0719899A2 (en)
CA (1) CA2661320A1 (en)
MX (1) MX2009001924A (en)
WO (1) WO2008023236A2 (en)


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ATE511721T1 (en) 2004-10-06 2011-06-15 Nokia Corp COMPILE FORWARD ERROR CORRECTION FRAMEWORK
US7447978B2 (en) * 2004-11-16 2008-11-04 Nokia Corporation Buffering packets of a media stream
US7751324B2 (en) * 2004-11-19 2010-07-06 Nokia Corporation Packet stream arrangement in multimedia transmission
US20080008190A1 (en) * 2006-07-07 2008-01-10 General Instrument Corporation Method and apparatus for distributing statistical multiplex signals to handheld devices

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
"MPEG-4 part 20 based specification for DIMS:Dynamic Interactive Multimedia Scene" 3GPP DRAFT; MPEG4PART20 BASED SPECIFICATION FOR DIMS SPECIFICATION, 3RD GENERATION PARTNERSHIP PROJECT (3GPP), MOBILE COMPETENCE CENTRE ; 650, ROUTE DES LUCIOLES ; F-06921 SOPHIA-ANTIPOLIS CEDEX ; FRANCE, vol. SA WG4, no. Dallas; 20060510, 10 May 2006 (2006-05-10), XP050288479 [retrieved on 2006-05-10] *
3GPP DRAFT; MORE, 3RD GENERATION PARTNERSHIP PROJECT (3GPP), MOBILE COMPETENCE CENTRE ; 650, ROUTE DES LUCIOLES ; F-06921 SOPHIA-ANTIPOLIS CEDEX ; FRANCE, vol. SA WG4, no. Sophia Antipolis, France; 20060406, 6 April 2006 (2006-04-06), XP050282416 [retrieved on 2006-04-06] *
NOKIA ET AL: "MORE: The Mobile Open Rich Media Environment - An Overview of Technology Candidate Proposal for Dynamic and Interactive Multimedia Scenes (DIMS)" 3GPP DRAFT; S4-050735, 3RD GENERATION PARTNERSHIP PROJECT (3GPP), MOBILE COMPETENCE CENTRE ; 650, ROUTE DES LUCIOLES ; F-06921 SOPHIA-ANTIPOLIS CEDEX ; FRANCE, vol. SA WG4, no. Bordeaux; 20051109, 9 November 2005 (2005-11-09), XP050288173 [retrieved on 2005-11-09] *
See also references of WO2008023236A2 *
TIAN V KUMAR MV TAMPERE INTERNATIONAL CTR FOR SIGNAL PROCESSING (FINLAND) D ET AL: "Improved H.264/AVC video broadcast/multicast" VISUAL COMMUNICATIONS AND IMAGE PROCESSING; 12-7-2005 - 15-7-2005; BEIJING,, 12 July 2005 (2005-07-12), XP030080844 *

Also Published As

Publication number Publication date
BRPI0719899A2 (en) 2015-09-08
WO2008023236A3 (en) 2008-06-05
WO2008023236A2 (en) 2008-02-28
CN101536532A (en) 2009-09-16
CA2661320A1 (en) 2008-02-28
CN104010229A (en) 2014-08-27
KR20090048636A (en) 2009-05-14
EP2055109A4 (en) 2010-07-07
AU2007287316A1 (en) 2008-02-28
US20080049789A1 (en) 2008-02-28
MX2009001924A (en) 2009-04-30
US7746882B2 (en) 2010-06-29
KR100959293B1 (en) 2010-05-26


Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20090220

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC MT NL PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA HR MK RS

RIN1 Information on inventor provided before grant (corrected)

Inventor name: CHITTURI, SURESH

Inventor name: SETLUR, VIDYA

Inventor name: VEDANTHAM, RAMAKRISHNAN

DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched

Effective date: 20100604

RIC1 Information provided on ipc code assigned before grant

Ipc: H04N 5/00 20060101ALI20100528BHEP

Ipc: H04N 7/66 20060101AFI20080402BHEP

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20101223