CN117939016A - Encoding method and device for audio and video files - Google Patents
- Publication number
- CN117939016A (application number CN202410195400.6A)
- Authority
- CN
- China
- Prior art keywords
- audio
- video
- frame data
- file
- coding
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M3/00—Automatic or semi-automatic exchanges
- H04M3/42—Systems providing special services or facilities to subscribers
- H04M3/42017—Customized ring-back tones
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/04—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
- G10L19/16—Vocoder architecture
- G10L19/173—Transcoding, i.e. converting between two coded representations avoiding cascaded coding-decoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/40—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
- H04N21/440218—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4
Abstract
The application discloses a method and a device for encoding audio and video files. The method comprises the following steps: acquiring an audio/video source-encoded file; for each item of first encoded frame data in the source-encoded file, decoding the first encoded frame data to obtain audio/video frame data, and encoding the audio/video frame data with a target encoder to obtain second encoded frame data, wherein the target encoder comprises at least an EVS audio encoder; combining the plurality of second encoded frame data in time order to obtain a universal audio/video encoded file; and, in response to a target terminal's request to acquire the audio/video source file, sending the universal audio/video encoded file to the target terminal. The application solves the technical problem that related audio/video encoding and decoding techniques must invoke codecs multiple times to transcode a predetermined encoded file stored locally in the system, which consumes considerable system resources.
Description
Technical Field
The application relates to the technical field of communication, in particular to an encoding method and device of an audio and video file.
Background
Currently, to support playback of color ring tones on terminals, a video color ring platform generally processes audio and video files into an RTP media stream according to the video color ring encoding/decoding flow shown in fig. 1, which comprises the following steps. First, a sound source file issued by the service side is acquired. Then, the sound source file is pre-transcoded to obtain a source-encoded file, and the source-encoded file is transcoded offline into a predetermined encoded file that is stored locally for the user, where the predetermined encoded file mainly uses the following formats: audio coding formats such as PCMA/PCMU/AMR/AMR-WB, and video coding formats such as H264/H265. Finally, when the user terminal initiates an audio/video color ring instruction, the predetermined encoded file can be packaged directly into a data stream and pushed to the terminal for presentation.
However, with the large-scale deployment of 5G (5th-Generation Mobile Communication Technology) networks and the continuous coverage of 5G NR (New Radio), terminals, the radio access network, the 5GC (5G Core network) and the IMS (IP Multimedia Subsystem) are gradually being equipped with E2E (End-to-End) VoNR (Voice over New Radio) capability. The video color ring platform is therefore required to support the new-generation high-definition voice coding technology EVS (Enhanced Voice Services) and to implement reading of 3gp files and interconversion between coding formats.
However, in the related art, whenever the video color ring service is triggered, the predetermined encoded file must be transcoded, which occupies considerable system resources.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
The embodiment of the application provides an audio and video file encoding method and device, which at least solve the technical problem that related audio/video encoding and decoding techniques must invoke codecs multiple times to transcode a predetermined encoded file stored locally in the system, thereby consuming considerable system resources.
According to one aspect of an embodiment of the present application, there is provided a method for encoding an audio/video file, comprising: acquiring an audio/video source-encoded file, wherein the source-encoded file comprises at least one item of first encoded frame data; for each item of first encoded frame data in the source-encoded file, performing a decoding operation on it to obtain corresponding audio/video frame data, and performing an encoding operation on the audio/video frame data with a target encoder to obtain corresponding second encoded frame data, wherein the target encoder comprises at least an EVS audio encoder for encoding audio data; combining the plurality of second encoded frame data in time order to obtain a universal audio/video encoded file; and, in response to a target terminal's request to acquire the audio/video source file, sending the universal audio/video encoded file to the target terminal, wherein the target terminal transcodes the universal audio/video encoded file according to a target coding format it supports, to obtain a target audio/video encoded file.
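The claimed flow — decode each first-encoded frame, re-encode it with the target encoder, then combine the results in time order — can be sketched as follows. This is an illustrative sketch only: the frame structure and the stub decode/encode functions are hypothetical stand-ins, not a real EVS or video codec implementation.

```python
from dataclasses import dataclass

@dataclass
class EncodedFrame:
    timestamp: int  # presentation time in milliseconds
    kind: str       # "audio" or "video"
    payload: bytes

def decode_frame(frame: EncodedFrame) -> EncodedFrame:
    # Stub decoder: a real system would invoke the PCMA/PCMU/AMR/AMR-WB
    # or H264/H265 decoder matching the source encoding format.
    return EncodedFrame(frame.timestamp, frame.kind, frame.payload)

def encode_frame(frame: EncodedFrame) -> EncodedFrame:
    # Stub encoder: audio frames go to the EVS encoder, video frames to a
    # video encoder; the codec tag prefix here is purely illustrative.
    codec = b"EVS" if frame.kind == "audio" else b"H264"
    return EncodedFrame(frame.timestamp, frame.kind, codec + b":" + frame.payload)

def build_universal_file(first_encoded_frames):
    # Decode each item of first encoded frame data, re-encode it, then
    # combine the second encoded frame data in time order.
    second = [encode_frame(decode_frame(f)) for f in first_encoded_frames]
    return sorted(second, key=lambda f: f.timestamp)
```

The time-ordered combination is what makes the output a playable "universal" file rather than an unordered set of frames.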
Optionally, obtaining the audio/video source-encoded file comprises: acquiring an audio/video source file from the service end; and encoding the audio/video source file with a predetermined encoder to obtain the audio/video source-encoded file, wherein the encoder comprises at least one of an audio encoder and a video encoder, the coding format of the audio encoder comprises at least one of PCMA, PCMU, AMR and AMR-WB, and the coding format of the video encoder comprises at least one of H264 and H265.
Optionally, decoding the first encoded frame data to obtain corresponding audio/video frame data and encoding the audio/video frame data with the target encoder to obtain corresponding second encoded frame data comprises: determining the data type of the first encoded frame data, wherein the data type comprises at least one of an audio type and a video type; and decoding the first encoded frame data with a decoder corresponding to the data type to obtain the corresponding audio/video frame data, and encoding the audio/video frame data with a target encoder corresponding to the data type to obtain the corresponding second encoded frame data.
Optionally, decoding the first encoded frame data with a decoder corresponding to the data type comprises: when the data type is the audio type, decoding the first encoded frame data with an audio decoder of the same format as the audio encoder to obtain corresponding audio frame data; and when the data type is the video type, decoding the first encoded frame data with a video decoder of the same format as the video encoder to obtain corresponding video frame data.
Optionally, encoding the audio/video frame data with the target encoder corresponding to the data type comprises: when the data type is the audio type, encoding the audio frame data with the EVS audio encoder to obtain corresponding audio encoded frame data; and when the data type is the video type, encoding the video frame data with the video encoder to obtain corresponding video encoded frame data.
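The per-type codec selection described above — decoder matched to the source encoder's format, re-encoder chosen by data type (EVS for audio, the video encoder for video) — can be sketched as a small dispatch function. All codec names here are illustrative strings; no real codec library is invoked.

```python
def select_codecs(data_type: str, source_audio_fmt: str, source_video_fmt: str):
    """Return (decoder_format, target_encoder_format) for one frame.

    Hypothetical sketch: the decoder must match the format the source
    encoder used; audio is always re-encoded with EVS, while video is
    re-encoded with the video encoder in its existing format.
    """
    if data_type == "audio":
        return source_audio_fmt, "EVS"
    if data_type == "video":
        return source_video_fmt, source_video_fmt
    raise ValueError(f"unknown data type: {data_type}")
```

For example, an AMR-WB-encoded audio frame would be decoded with an AMR-WB decoder and re-encoded with EVS, while an H264 video frame stays in H264.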
Optionally, sending the universal audio/video encoded file to the target terminal comprises: converting the universal audio/video encoded file into a corresponding data stream, and feeding the data stream back to the target terminal via the Real-time Transport Protocol (RTP).
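Streaming the encoded file over RTP means wrapping each encoded frame in an RTP packet. A minimal sketch of that packetization is shown below; the fixed 12-byte header layout follows RFC 3550, but the payload type value (127), clock values and one-frame-per-packet mapping are illustrative assumptions, not details from the patent.

```python
import struct

def rtp_packet(seq: int, timestamp: int, ssrc: int, payload: bytes) -> bytes:
    # Fixed 12-byte RTP header (RFC 3550): first byte 0x80 encodes
    # version 2 with no padding, no extension, zero CSRC count;
    # second byte is marker=0 with payload type 127 (dynamic, illustrative).
    header = struct.pack("!BBHII", 0x80, 127, seq & 0xFFFF,
                         timestamp & 0xFFFFFFFF, ssrc)
    return header + payload

def packetize(frames, ssrc=0x12345678):
    # One packet per (timestamp, payload) frame, consecutive sequence numbers.
    return [rtp_packet(i, ts, ssrc, data) for i, (ts, data) in enumerate(frames)]
```

A real deployment would pace these packets to the media clock and pair them with RTCP reports, as described in the terminology section below.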
Optionally, the file format of the general audio/video coding file is a 3GP format.
According to another aspect of the embodiment of the present application, there is also provided an encoding apparatus for an audio/video file, comprising: an acquisition module for acquiring an audio/video source-encoded file, wherein the source-encoded file comprises at least one item of first encoded frame data; a transcoding module for decoding each item of first encoded frame data in the source-encoded file to obtain corresponding audio/video frame data, and encoding the audio/video frame data with a target encoder to obtain corresponding second encoded frame data, wherein the target encoder comprises at least an EVS audio encoder for encoding audio data; a combination module for combining the plurality of second encoded frame data in time order to obtain a universal audio/video encoded file; and a feedback module for sending, in response to a target terminal's request to acquire the audio/video source file, the universal audio/video encoded file to the target terminal, wherein the target terminal transcodes the universal audio/video encoded file according to a target coding format it supports to obtain a target audio/video encoded file.
According to another aspect of the embodiments of the present application, there is also provided a computer program product comprising a stored computer program, wherein the computer program, when executed by a processor, implements the above-mentioned method for encoding an audio/video file.

According to another aspect of the embodiments of the present application, there is also provided an electronic device comprising a memory and a processor, wherein the memory stores a computer program and the processor is configured to execute the above-mentioned method for encoding an audio/video file through the computer program.
In the embodiment of the application, an audio/video source-encoded file is obtained, wherein the source-encoded file comprises at least one item of first encoded frame data; for each item of first encoded frame data in the source-encoded file, a decoding operation is performed on it to obtain corresponding audio/video frame data, and an encoding operation is performed on the audio/video frame data with a target encoder to obtain corresponding second encoded frame data, wherein the target encoder comprises at least an EVS audio encoder for encoding audio data; the plurality of second encoded frame data are combined in time order to obtain a universal audio/video encoded file; and, in response to a target terminal's request to acquire the audio/video source file, the universal audio/video encoded file is sent to the target terminal, which transcodes it according to a target coding format it supports to obtain a target audio/video encoded file.
In the above technical scheme, when transcoding the audio/video source-encoded file, the first encoded frame data in the file is first decoded, and the EVS audio encoder and a video encoder are then used to encode the resulting audio/video frame data so as to construct a universal audio/video encoded file. This universal file ensures that user terminals supporting different audio coding formats can invoke the EVS audio codec for real-time transcoding, so that the audio/video file can be played normally and with high quality. This solves the technical problem that related audio/video encoding and decoding techniques must invoke codecs multiple times to transcode a predetermined encoded file stored locally in the system, thereby consuming considerable system resources.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
fig. 1 is a flow chart of an audio and video file codec according to the related art;
FIG. 2 is a block diagram of the hardware architecture of an alternative computer terminal for implementing the encoding method of an audio video file according to an embodiment of the present application;
FIG. 3 is a flow chart of an alternative method for encoding an audio/video file according to an embodiment of the present application;
Fig. 4 is a flowchart of an alternative video color ring system performing offline transcoding on an audio/video source file to obtain a general audio/video encoded file according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an alternative G192 format according to an embodiment of the application;
FIG. 6 is a flow chart of an alternative target terminal processing a generic audio video encoded file according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an alternative audio/video file encoding apparatus according to an embodiment of the present application.
Detailed Description
In order that those skilled in the art will better understand the present application, the technical solution in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some embodiments of the present application, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without inventive effort shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In addition, the related information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for presentation, analyzed data, etc.) related to the present application are information and data authorized by the user or sufficiently authorized by each party. For example, an interface is provided between the system and the relevant user or institution, before acquiring the relevant information, the system needs to send an acquisition request to the user or institution through the interface, and acquire the relevant information after receiving the consent information fed back by the user or institution.
In order to better understand the embodiments of the present application, technical terms related to the embodiments of the present application are explained as follows:
VoLTE (Voice over LTE) is a high-speed wireless communication standard for mobile phones and data terminals. As the name suggests, it is the standard LTE voice solution defined by GSMA IR.92 that carries voice over the 4G data network known as LTE. Based on the 4G network, it realizes unified bearing of data, voice, video, SMS and MMS services through IP transmission technology.
VoNR (Voice over New Radio) is an IMS (IP Multimedia Subsystem) based voice call service that performs voice-over-IP processing over a 5G network; voice carried over a 5G NR network is generally referred to as VoNR. It should be noted that VoLTE and VoNR are different access modes for IMS-based voice/video communication services. VoNR's advantage over VoLTE is that it can mandate support for a new voice codec, EVS (also called ultra-high-definition voice), which can effectively raise voice call quality to HiFi level.
EVS (Enhanced Voice Services) is a codec designed for VoLTE that achieves full high-definition voice calls, enabling telephony voice fidelity to reach the same level as today's other digital media services. Integrating state-of-the-art speech and audio coding techniques, EVS removes the bandwidth limitations imposed by the speech-oriented codecs previously used in mobile communications. Its advantages include full high-definition voice quality, high efficiency and versatility, reliable service, and backward compatibility with existing VoLTE services.
RTP (Real-time Transport Protocol) was published as RFC 1889 in 1996 by the multimedia transport working group of the IETF (Internet Engineering Task Force). RTP provides end-to-end transport for multimedia data such as voice and images that must be transported in real time over IP (Internet Protocol), but it cannot itself guarantee Quality of Service (QoS); it therefore needs to be used together with the Real-time Transport Control Protocol (RTCP). RTCP monitors service quality and conveys information about session participants; a server can use RTCP packet information to adjust the transmission rate and payload type.
E2E (End to End): here the term refers to end-to-end communication protection, i.e., a data protection protocol/mechanism implemented between communication nodes for security-related data in order to guard against possible failures in the communication links. It is applicable to various network structures, such as CAN (Controller Area Network), FlexRay and Ethernet.
Example 1
According to the embodiment of the application, a method embodiment of the encoding method for audio and video files is provided. The method can be applied to a video color ring system and aims to construct a universal audio/video encoded file by transcoding the source-encoded file offline, wherein audio-type data are encoded by an EVS audio encoder, so that each subsequent user terminal can call the interface of its EVS audio decoder to transcode the audio file in real time without introducing additional audio codecs, thereby reducing system resource consumption.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order other than that illustrated herein.
The method embodiments provided by the embodiments of the present application may be performed in a mobile terminal, a computer terminal, or a similar computing device. Fig. 2 shows a block diagram of the hardware structure of a computer terminal for implementing the encoding method of an audio/video file. As shown in fig. 2, the computer terminal 20 may include one or more processors 202 (shown as 202a, 202b, …, 202n in the figure; the processor 202 may include, but is not limited to, a microprocessor such as an MCU or a programmable logic device such as an FPGA), a memory 204 for storing data, and a transmission device 206 for communication functions. It may further include: a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the ports of the I/O interface), a network interface, a power supply, and/or a camera. It will be appreciated by those of ordinary skill in the art that the configuration shown in fig. 2 is merely illustrative and does not limit the configuration of the electronic device described above. For example, the computer terminal 20 may also include more or fewer components than shown in fig. 2, or have a different configuration than shown in fig. 2.
It should be noted that the one or more processors 202 and/or other data processing circuits described above may be referred to herein generally as "data processing circuits". The data processing circuit may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Furthermore, the data processing circuitry may be a single stand-alone processing module, or incorporated, in whole or in part, into any of the other elements in the computer terminal 20. As referred to in embodiments of the application, the data processing circuit acts as a kind of processor control (e.g., selection of the path of the variable resistor termination connected to the interface).
The memory 204 may be used to store software programs and modules of application software, such as program instructions/data storage devices corresponding to the encoding method of an audio/video file in the embodiment of the present application, and the processor 202 executes the software programs and modules stored in the memory 204 to perform various functional applications and data processing, that is, implement the encoding method of an audio/video file of the application program. Memory 204 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 204 may further include memory located remotely from the processor 202, which may be connected to the computer terminal 20 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission means 206 is used for receiving or transmitting data via a network. The specific examples of the network described above may include a wireless network provided by a communication provider of the computer terminal 20. In one example, the transmission device 206 includes a network adapter (Network Interface Controller, NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 206 may be a Radio Frequency (RF) module for communicating with the internet wirelessly.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computer terminal 20.
In the above operating environment, fig. 3 is a flow chart of an alternative method for encoding an audio/video file according to an embodiment of the present application, as shown in fig. 3, the method at least includes steps S302-S308, where:
step S302, an audio and video source coding file is obtained.
In the technical solution provided in step S302, the video color ring system first directly obtains an audio/video source file (i.e., an audio/video color ring tone) from the service end, where the source file contains at least one item of audio/video frame data, which may be audio frame data and/or video frame data, so the source file may also be understood as a sequence of audio/video frames. Obtaining the source file directly from the service end avoids quality loss caused by prior encoding. Then, because different video color ring systems support different audio/video coding formats, the audio/video frame data in the source file are encoded according to the coding formats supported by the video color ring system to obtain the corresponding first encoded frame data, and the corresponding audio/video source-encoded file is obtained by combining the first encoded frame data in the time order of the audio/video frames. This guarantees the compatibility of the source-encoded file while reducing the storage space and bandwidth the audio/video file consumes in the video color ring system; in addition, the pre-encoding operation allows quality control of the audio/video file, including adjustment of parameters such as resolution, frame rate and bit rate, so as to provide a better viewing experience. The audio frame data in the source-encoded file can be produced by an audio encoder using a coding format such as PCMA, PCMU, AMR or AMR-WB, and the video frame data by a video encoder using a coding format such as H264/H265.
The above takes acquiring an audio/video source file from the service end as an example of how the source-encoded file is obtained. In general, the video color ring system acquires multiple audio/video source files from the service end in sequence, so it needs to perform local storage, data updating and file-integrity verification for each acquired source file, where integrity is verified against the file size of the audio/video source file.
Step S304: for each item of first encoded frame data in the audio/video source-encoded file, decode the first encoded frame data to obtain corresponding audio/video frame data, and encode the audio/video frame data with the target encoder to obtain corresponding second encoded frame data.
In the technical solution provided in step S304, considering that the audio and video coding formats supported by the user terminal device connected to each video color ring system are different, it is also necessary to perform offline transcoding again on the video color ring system side, that is, perform decoding operation on each first encoded frame data in the audio and video source encoded file to obtain corresponding audio and video frame data, and then perform encoding operation on the audio and video frame data by using the target encoder to obtain corresponding second encoded frame data.
The target encoder in the embodiment of the application at least comprises: an EVS audio encoder for performing an encoding operation on audio data. The reason is as follows: different user terminal devices support different audio coding formats. If the video color ring system transcodes the audio in the audio/video source encoded file into one fixed format, such as AMRWB, while a user terminal device supports only PCMU, then to play the file normally on that terminal the system must first decode the AMRWB stream, re-encode it with a PCMU-format encoder, and transmit it to the terminal as a code stream. That is, the video color ring system must encode and decode the audio/video source encoded file in real time n times (once per supported terminal format), which places a heavy resource load on the system side. The application instead transcodes the audio in the audio/video source encoded file directly with the EVS audio encoder. The EVS codec not only supports encoding and decoding of multiple audio formats and adapts to different requirements and application scenarios, but is also compatible with a variety of devices and platforms, so audio/video files can be played and decoded on different devices; the n real-time encode/decode passes on the video color ring system side are thereby reduced to one.
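A schematic comparison of the two strategies just described — per-terminal real-time transcoding versus a single offline EVS transcode — might look like this; the function name, strategy labels, and counting model are hypothetical simplifications.

```python
def system_side_transcodes(requests, strategy):
    """Count system-side transcode passes for a list of terminal
    codec requests such as ['PCMU', 'PCMA', 'AMRWB']."""
    if strategy == "per-terminal":
        # Legacy path: decode the stored format and re-encode in
        # real time for every distinct terminal codec.
        return len(set(requests))
    if strategy == "evs-universal":
        # Patent's path: one offline transcode into an EVS-based
        # universal file; terminals decode EVS locally.
        return 1
    raise ValueError(f"unknown strategy: {strategy}")
```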
Step S306, a plurality of second coding frame data are combined according to the time sequence, and the universal audio/video coding file is obtained.
In the technical scheme provided in step S306, because the video color ring system processes each first encoded frame data in the audio/video source encoded file according to the time sequence thereof, the video color ring system may combine the processed plurality of second encoded frame data according to the time sequence thereof, so as to obtain the general audio/video encoded file, where the time sequence is the same as the time sequence of each audio/video frame data in the audio/video file acquired from the service side.
Step S308, responding to the acquisition request of the target terminal for the audio and video source file, and sending the corresponding general audio and video coding file to the target terminal.
In the technical scheme provided in step S308, when the target terminal sends an acquisition request for an audio/video source file (i.e., an audio/video color ring) to the video color ring system, the video color ring system may send a general audio/video coding file corresponding to the audio/video source file to the target terminal. And the target terminal is used for transcoding the universal audio/video coding file according to a target coding format supported by the target terminal to obtain the target audio/video coding file.
Based on the scheme defined in steps S302 to S308, in this embodiment the video color ring system decodes each first encoded frame data of the acquired audio/video source encoded file to obtain the corresponding audio/video frame data. For the audio frame data, the embodiment of the application uses the Enhanced Voice Service (EVS) audio encoder to perform offline transcoding; for the video frame data, it continues to use a universal video encoder to perform offline transcoding, finally obtaining a universal audio/video encoded file containing both types of data. The universal audio/video encoded file supports encoding and decoding of multiple audio/video formats, so audio/video source files can be played and decoded on different devices. This solves the technical problem that related audio/video file codec technologies must call a codec multiple times to transcode the preset encoded file stored locally in the system, consuming considerable system resources.
The above-described method of this embodiment is further described below.
As an optional implementation manner, in the technical solution provided in step S302, the method may include:
Step S3021, acquiring an audio and video source file from a service end side;
Step S3022, encoding the audio and video source file with a preset encoder to obtain an audio and video source encoded file, where the encoder includes at least one of the following: an audio encoder, a video encoder, and an encoding format of the audio encoder includes at least one of: PCMA, PCMU, AMR, AMRWB, the coding format of the video encoder includes at least one of: h264, H265.
In this embodiment, the video color ring system first acquires the audio and video source file from the service end side, where the audio and video source file includes at least one item of audio/video frame data; then, for frame data of different data types, an encoder corresponding to the data type is used to encode each item of frame data to obtain the corresponding first encoded frame data. That is, for frame data of the audio type, the video color ring system may employ an audio encoder to encode the audio frame data; for frame data of the video type, the video color ring system may employ a video encoder to encode the video frame data. Finally, the encoded first encoded frame data is stored in the video color ring system in 3GP format. It should be noted that, because different video color ring systems support different coding formats, the coding formats adopted by the audio encoder and the video encoder can be set according to the actual application scenario.
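The type-driven encoder selection described above can be sketched as follows, assuming the format lists given in step S3022; `select_encoder` and its string return values are purely illustrative.

```python
AUDIO_FORMATS = {"PCMA", "PCMU", "AMR", "AMRWB"}
VIDEO_FORMATS = {"H264", "H265"}

def select_encoder(kind: str, fmt: str) -> str:
    # Pick an encoder matching the frame's data type; the concrete
    # format is a deployment choice of the color ring system.
    if kind == "audio" and fmt in AUDIO_FORMATS:
        return f"audio-encoder:{fmt}"
    if kind == "video" and fmt in VIDEO_FORMATS:
        return f"video-encoder:{fmt}"
    raise ValueError(f"unsupported {kind} format {fmt}")
```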
As an optional implementation manner, in the technical solution provided in step S304, the method may include:
Step S3041, determining a data type of the first encoded frame data, wherein the data type includes at least one of: audio type, video type;
step S3042, adopting a decoder corresponding to the data type to decode the first encoded frame data to obtain corresponding audio and video frame data, and adopting a target encoder corresponding to the data type to encode the audio and video frame data to obtain corresponding second encoded frame data.
Similarly, the decoding of the first encoded frame data mirrors the encoding of the audio/video source encoded file: the corresponding codec for the encode/decode operation is determined by the data type of the frame data.
Therefore, in the technical solution provided in step S3042, the method further includes: under the condition that the data type is the audio type, adopting an audio decoder with the same format as the audio encoder to decode the first encoded frame data to obtain corresponding audio frame data; and under the condition that the data type is the video type, adopting a video decoder with the same format as the video encoder to decode the first encoded frame data to obtain corresponding video frame data.
Specifically, the details of step S3042 described above are explained with reference to fig. 4:
First, if the audio/video source encoded file is a 3GP format file obtained by mixed encoding with an AMRWB-format audio encoder and an H264-format video encoder, then an AMRWB-format audio decoder and an H264-format video decoder must accordingly be used for decoding. Namely: if the first encoded frame data of the audio type was obtained by encoding the audio frame data with an AMRWB-format audio encoder, then when decoding that first encoded frame data, an AMRWB-format audio decoder must still be used to obtain the corresponding audio frame data. Likewise, if the first encoded frame data of the video type was obtained by encoding the video frame data with an H264-format video encoder, then an H264-format video decoder must still be used to decode it and obtain the corresponding video frame data.
After the audio/video frame data corresponding to the first encoded frame data is obtained, the audio/video frame data can be transcoded, and the specific implementation mode is as follows:
Under the condition that the data type is the audio type, an EVS audio encoder is adopted to encode the audio frame data, so as to obtain corresponding audio encoded frame data;
and under the condition that the data type is the video type, adopting a video encoder to encode the video frame data to obtain corresponding video encoded frame data.
That is, the EVS audio encoder may be used to encode the audio frame data to obtain corresponding audio encoded frame data; the video frame data may be encoded by a video encoder to obtain corresponding video encoded frame data. The coding format of the video coder can be specifically set according to actual conditions.
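The two-stage transcode of steps S3041–S3042 — decode with a same-format decoder, then re-encode audio with EVS and video with a video encoder — might be sketched as below; the identity `decode` stub and the choice of H264 as the video target format are assumptions for illustration only.

```python
def decode(fmt: str, data: bytes) -> bytes:
    # A same-format decoder recovers the raw frame produced by the
    # encoder of that format; identity stub for illustration.
    return data

def encode(fmt: str, raw: bytes):
    # Stand-in for a real encoder; returns (format, data).
    return (fmt, raw)

def transcode_first_frame(kind: str, src_fmt: str, data: bytes):
    raw = decode(src_fmt, data)     # decoder matches the source format
    # All audio is re-encoded with EVS; video keeps a video encoder
    # (H264 here is an assumed, configurable choice).
    target = "EVS" if kind == "audio" else "H264"
    return encode(target, raw)
```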
Further, the second encoded frame data transcoded in the step S304 are combined according to a time sequence to obtain a general audio/video encoded file, where the file format of the general audio/video encoded file may be a 3GP format. It should be noted that, in the embodiment of the present application, the format setting of the general audio/video coding file is only a preferred example, which is not particularly limited, and the format setting can be specifically set by combining with an actual application scenario.
As an optional implementation manner, in the technical solution provided in step S308, the method may include: and streaming the universal audio/video coding file to obtain a corresponding data stream, and feeding back the data stream to the target terminal through a real-time transmission protocol.
In this embodiment, to facilitate transmission, the video color ring system may perform streaming processing on the universal audio/video encoded file to obtain a corresponding data stream, packetize the data stream with the Real-time Transport Protocol (RTP) together with related control information (such as RTCP packets), and feed the packets back to the target terminal, so as to ensure the order and integrity of the frame data while transmitting real-time audio/video frame data.
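As a concrete illustration of the RTP packetization mentioned above, a minimal 12-byte RTP header per RFC 3550 can be built as follows; the dynamic payload type 96 and the field values in the example are illustrative choices, not mandated by the patent.

```python
import struct

def rtp_packet(payload: bytes, seq: int, ts: int, ssrc: int,
               pt: int = 96) -> bytes:
    # Minimal 12-byte RTP header (RFC 3550): V=2, no padding,
    # no extension, no CSRC list, marker bit clear.
    header = struct.pack("!BBHII",
                         0x80,              # version 2
                         pt & 0x7F,         # payload type, marker=0
                         seq & 0xFFFF,      # sequence number
                         ts & 0xFFFFFFFF,   # media timestamp
                         ssrc & 0xFFFFFFFF) # stream identifier
    return header + payload
```

The sequence number and timestamp let the receiver restore frame order and detect loss, which is how RTP provides the "order and integrity" guarantee described above.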
In addition, after receiving the universal audio/video encoded file, the target terminal may place the values of the universal audio/video encoded file in Indices, where Indices has two storage modes: G192 (ITU-T G.192) and MIME (Multipurpose Internet Mail Extensions).
Specifically, the storage format of G192 is shown in fig. 5: the first Word is a synchronization value, of which there are two kinds, a good frame (0x6B21) and a bad frame (0x6B20); the second Word is the length; each bit value follows, with a bit of 1 denoted by 0x0081 and a bit of 0 denoted by 0x007F. In the MIME format, the value of Indices is converted into a serial value by using a pack_bit() function; the first Word in the MIME format is a header (the lower 4 bits hold the code rate index; the fifth and sixth bits must be set to 1 in AMR-WB IO mode and need not be set in EVS mode), and the bit stream follows.
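The G.192 frame layout described above (sync word, length word, then one 16-bit word per payload bit) can be serialized as in the sketch below; the little-endian word order is an assumption, since G.192 tools commonly write native-endian 16-bit words.

```python
import struct

GOOD_FRAME, BAD_FRAME = 0x6B21, 0x6B20  # sync words per ITU-T G.192
ONE, ZERO = 0x0081, 0x007F              # soft-bit encodings of 1 and 0

def g192_frame(bits, good: bool = True) -> bytes:
    # One frame: sync word, bit-length word, one word per bit.
    words = [GOOD_FRAME if good else BAD_FRAME, len(bits)]
    words += [ONE if b else ZERO for b in bits]
    return struct.pack("<%dH" % len(words), *words)
```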
Further, fig. 6 is a flowchart of an alternative target terminal processing a general audio/video encoded file according to an embodiment of the present application, as shown in fig. 6. In order to show the audio/video source file, the target terminal needs to decode the obtained general audio/video encoded file. Since the general audio-video coding file includes the audio type general audio coding file and/or the video type general video coding file. Accordingly, the target terminal may decode the general video encoded file using a video decoder having the same format as the video encoder, and decode the general audio encoded file using an EVS audio decoder corresponding to the EVS audio encoder.
Generally, the EVS audio decoder has two decoding modes, fixed-point decoding and floating-point decoding, and the specific mode can be selected according to the actual application scenario. On this basis, the universal audio encoded file is decoded with preset decoding parameters such as the mode (Primary/AMR-WB IO mode), bit rate, sampling rate, and compensation mode, yielding a raw PCM (Pulse Code Modulation) stream without any compression; finally, the PCM stream is encoded according to an audio coding format supported by the target terminal, such as AMRWB or PCMA, to obtain the corresponding encoded data.
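The terminal-side adaptation just described — decode the universal EVS stream to raw PCM, then re-encode once in the terminal's own format — can be sketched as follows; the stubs stand in for real codecs and the tuple representation is purely illustrative.

```python
def evs_decode(frame: bytes) -> bytes:
    # Identity stub standing in for a real EVS decoder, which would
    # produce uncompressed PCM samples.
    return frame

def encode(fmt: str, pcm: bytes):
    # Stand-in for the terminal's own encoder (e.g. AMRWB, PCMA).
    return (fmt, pcm)

def adapt_for_terminal(evs_frames, terminal_fmt: str):
    # Decode the universal EVS stream to raw PCM, then re-encode in
    # the single format this terminal supports.
    pcm = [evs_decode(f) for f in evs_frames]
    return [encode(terminal_fmt, s) for s in pcm]
```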
In the process in which the video color ring system transcodes the audio/video source encoded file, the first encoded frame data in the file is decoded first, and then the EVS audio encoder and a video encoder are used to encode the audio/video frame data, so as to construct the universal audio/video encoded file. The universal audio/video encoded file ensures that user terminals supporting different audio coding formats can call the EVS codec to realize real-time transcoding: on the one hand, the audio/video file can be played normally on the target terminal at high quality; on the other hand, mixed coding of multiple audio coding formats is realized, so that the video color ring system avoids introducing audio encoders of other coding types and the resource load on the system side is reduced. In addition, the codec in the scheme of the application is stateless and supports a dynamic scale-out/scale-in mechanism, which creates conditions for subsequent access by new audio/video codecs, so the whole video color ring system has good capability extensibility.
It should be noted that the method not only can be applied to application scenes of video color ring back tone service, but also can be applied to various application scenes including voice service.
Example 2
Based on embodiment 1 of the present application, there is also provided an embodiment of an audio/video file encoding apparatus, where the apparatus executes the audio/video file encoding method of the above embodiment during operation. Fig. 7 is a schematic structural diagram of an alternative audio/video file encoding device according to an embodiment of the present application, and as shown in fig. 7, the audio/video file encoding device at least includes an obtaining module 71, a transcoding module 72, a combining module 73 and a feedback module 74, where:
The obtaining module 71 is configured to obtain an audio and video source coding file, where the audio and video source coding file includes at least one first coding frame data.
Alternatively, the above-mentioned obtaining module 71 may obtain the audio/video source code file according to the following method:
firstly, acquiring an audio and video source file from a service end side;
Then, encoding the audio and video source file by using a preset encoder to obtain an audio and video source encoded file, wherein the encoder comprises: an audio encoder, a video encoder, and an encoding format of the audio encoder includes at least one of: PCMA, PCMU, AMR, AMRWB; the encoding format of the video encoder includes at least one of: H264, H265.
The transcoding module 72 is configured to decode the first encoded frame data for each first encoded frame data in the audio and video source encoded file to obtain corresponding audio and video frame data, and encode the audio and video frame data with a target encoder to obtain corresponding second encoded frame data, where the target encoder at least includes: an EVS audio encoder for performing an encoding operation on audio data.
Specifically, the above-mentioned transcoding module 72 may transcode each first encoded frame data of the audio/video source encoded file to obtain corresponding second encoded frame data according to the following method:
the first step: determining a data type of the first encoded frame data, wherein the data type comprises at least one of: audio type, video type.
And a second step of: and decoding the first encoded frame data by adopting a decoder corresponding to the data type to obtain corresponding audio and video frame data, and encoding the audio and video frame data by adopting a target encoder corresponding to the data type to obtain corresponding second encoded frame data.
Specifically, under the condition that the data type is the audio type, adopting an audio decoder with the same format as the audio encoder to decode the first encoded frame data to obtain corresponding audio frame data; and under the condition that the data type is the video type, adopting a video decoder with the same format as the video encoder to decode the first encoded frame data to obtain corresponding video frame data.
Further, after obtaining the audio/video frame data corresponding to the first encoded frame data, the audio/video frame data may be encoded again according to the following rule, so as to obtain corresponding second encoded frame data: under the condition that the data type is the audio type, an EVS audio encoder is adopted to encode the audio frame data, so as to obtain corresponding audio encoded frame data; and under the condition that the data type is the video type, adopting a video encoder to encode the video frame data to obtain corresponding video encoded frame data.
The combination module 73 is configured to combine the plurality of second encoded frame data according to a time sequence to obtain a general audio/video encoded file.
The above-mentioned combination module 73 may further encapsulate the combined general audio/video encoded file into a file with a 3GP format.
And the feedback module 74 is configured to send a general audio/video coding file to the target terminal in response to an acquisition request of the target terminal for the audio/video source file, where the target terminal is configured to transcode the general audio/video coding file according to a target coding format supported by the target terminal, so as to obtain the target audio/video coding file.
Alternatively, the feedback module 74 may feed back the general audio/video encoded file to the target terminal as follows: and streaming the universal audio/video coding file to obtain a corresponding data stream, and feeding back the data stream to the target terminal through a real-time transmission protocol.
Note that each module in the above-described encoding apparatus for an audio/video file may be a program module (for example, a set of program instructions for implementing a specific function), or may be a hardware module, and for the latter, it may be represented by the following form, but is not limited thereto: the expression forms of the modules are all a processor, or the functions of the modules are realized by one processor.
Example 3
According to an embodiment of the present application, there is also provided a computer program product in which a computer program is stored, wherein the computer program, when executed by a processor, implements the method of encoding an audio-video file in embodiment 1. Optionally, the computer program execution implements the steps of:
step S302, an audio and video source coding file is obtained, wherein the audio and video source coding file comprises at least one first coding frame data;
step S304, for each first coding frame data in the audio and video source coding file, decoding the first coding frame data to obtain corresponding audio and video frame data, and using a target encoder to encode the audio and video frame data to obtain corresponding second coding frame data, wherein the target encoder at least comprises: an EVS audio encoder for performing an encoding operation on the audio data;
step S306, combining the plurality of second coding frame data according to the time sequence to obtain a universal audio/video coding file;
Step S308, a general audio/video coding file is sent to a target terminal in response to an acquisition request of the target terminal for the audio/video source file, wherein the target terminal is used for transcoding the general audio/video coding file according to a target coding format supported by the target terminal, so as to obtain the target audio/video coding file.
According to an embodiment of the present application, there is further provided a processor for running a program, wherein the program executes the encoding method of the audio/video file in embodiment 1.
Optionally, the program execution realizes the following steps:
step S302, an audio and video source coding file is obtained, wherein the audio and video source coding file comprises at least one first coding frame data;
step S304, for each first coding frame data in the audio and video source coding file, decoding the first coding frame data to obtain corresponding audio and video frame data, and using a target encoder to encode the audio and video frame data to obtain corresponding second coding frame data, wherein the target encoder at least comprises: an EVS audio encoder for performing an encoding operation on the audio data;
step S306, combining the plurality of second coding frame data according to the time sequence to obtain a universal audio/video coding file;
Step S308, a general audio/video coding file is sent to a target terminal in response to an acquisition request of the target terminal for the audio/video source file, wherein the target terminal is used for transcoding the general audio/video coding file according to a target coding format supported by the target terminal, so as to obtain the target audio/video coding file.
There is also provided, in accordance with an embodiment of the present application, an electronic device, wherein the electronic device includes one or more processors; and a memory for storing one or more programs, which when executed by the one or more processors, cause the one or more processors to implement a method for running the programs, wherein the programs are configured to perform the encoding method of the audio and video files in embodiment 1 described above when run.
Optionally, the processor is configured to implement the following steps by computer program execution:
step S302, an audio and video source coding file is obtained, wherein the audio and video source coding file comprises at least one first coding frame data;
step S304, for each first coding frame data in the audio and video source coding file, decoding the first coding frame data to obtain corresponding audio and video frame data, and using a target encoder to encode the audio and video frame data to obtain corresponding second coding frame data, wherein the target encoder at least comprises: an EVS audio encoder for performing an encoding operation on the audio data;
step S306, combining the plurality of second coding frame data according to the time sequence to obtain a universal audio/video coding file;
Step S308, a general audio/video coding file is sent to a target terminal in response to an acquisition request of the target terminal for the audio/video source file, wherein the target terminal is used for transcoding the general audio/video coding file according to a target coding format supported by the target terminal, so as to obtain the target audio/video coding file.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present application, the descriptions of the embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other manners. The above-described embodiments of the apparatus are merely exemplary, and the division of the units, for example, may be a logic function division, and may be implemented in another manner, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some interfaces, units or modules, or may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied in essence, or the part contributing to the related art, or all or part of the technical solution, in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server or a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other various media capable of storing program codes.
The foregoing is merely a preferred embodiment of the present application, and it should be noted that modifications and improvements may be made by those skilled in the art without departing from the principles of the present application; such modifications and improvements are intended to fall within the scope of the present application.
Claims (10)
1. A method for encoding an audio/video file, comprising:
Acquiring an audio and video source coding file, wherein the audio and video source coding file comprises at least one first coding frame data;
and decoding each first coding frame data in the audio and video source coding file to obtain corresponding audio and video frame data, and coding the audio and video frame data by using a target encoder to obtain corresponding second coding frame data, wherein the target encoder at least comprises: an Enhanced Voice Service (EVS) audio encoder for encoding audio data;
Combining the plurality of second coding frame data according to the time sequence to obtain a universal audio/video coding file;
And responding to the acquisition request of the target terminal for the audio and video source file, and sending the universal audio and video coding file to the target terminal, wherein the target terminal is used for transcoding the universal audio and video coding file according to a target coding format supported by the target terminal to obtain the target audio and video coding file.
2. The method of claim 1, wherein obtaining an audio video source encoded file comprises:
acquiring the audio and video source file from a service end side;
Encoding the audio and video source file by using a preset encoder to obtain the audio and video source encoded file, wherein the encoder comprises at least one of the following: an audio encoder, a video encoder, and an encoding format of the audio encoder includes at least one of: PCMA, PCMU, AMR, AMRWB, the coding format of the video encoder includes at least one of: h264, H265.
3. The method of claim 1, wherein decoding the first encoded frame data to obtain corresponding audio-video frame data, and encoding the audio-video frame data with a target encoder to obtain corresponding second encoded frame data, comprises:
Determining a data type of the first encoded frame data, wherein the data type comprises at least one of: audio type, video type;
And decoding the first encoded frame data by adopting a decoder corresponding to the data type to obtain corresponding audio and video frame data, and encoding the audio and video frame data by adopting a target encoder corresponding to the data type to obtain corresponding second encoded frame data.
4. A method according to claim 3, wherein decoding the first encoded frame data using a decoder corresponding to the data type to obtain the corresponding audio video frame data comprises:
Under the condition that the data type is the audio type, adopting an audio decoder with the same format as the audio encoder to decode the first encoded frame data to obtain corresponding audio frame data;
And under the condition that the data type is the video type, adopting a video decoder with the same format as the video encoder to decode the first encoded frame data to obtain corresponding video frame data.
5. A method according to claim 3, wherein encoding the audio-video frame data using a target encoder corresponding to the data type to obtain corresponding second encoded frame data comprises:
under the condition that the data type is the audio type, adopting the EVS audio encoder to encode audio frame data to obtain corresponding audio encoded frame data;
And under the condition that the data type is the video type, adopting the video encoder to encode video frame data to obtain corresponding video encoding frame data.
6. The method of claim 1, wherein transmitting the generic audiovisual encoded file to the target terminal comprises:
And streaming the universal audio/video coding file to obtain a corresponding data stream, and feeding back the data stream to the target terminal through a real-time transmission protocol.
7. The method of claim 1, wherein the file format of the generic audiovisual encoded file is a 3GP format.
8. An apparatus for encoding an audio/video file, comprising:
The acquisition module is used for acquiring an audio and video source coding file, wherein the audio and video source coding file comprises at least one first coding frame data;
The transcoding module is configured to decode each first encoded frame data in the audio and video source encoded file to obtain corresponding audio and video frame data, and encode the audio and video frame data by using a target encoder to obtain corresponding second encoded frame data, where the target encoder at least includes: an EVS audio encoder for performing an encoding operation on the audio data;
the combination module is used for combining the plurality of second coding frame data according to the time sequence to obtain a universal audio/video coding file;
And the feedback module is used for responding to the acquisition request of the target terminal for the audio and video source file, and sending the universal audio and video coding file to the target terminal, wherein the target terminal is used for transcoding the universal audio and video coding file according to a target coding format supported by the target terminal, so as to obtain the target audio and video coding file.
9. A computer program product, comprising: a computer program, wherein the computer program, when executed by a processor, implements the method of encoding an audio/video file according to any one of claims 1 to 7.
10. An electronic device, comprising: a memory and a processor, the processor being configured to execute a program stored in the memory, wherein the program, when executed, performs the method of encoding an audio/video file according to any one of claims 1 to 7.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410195400.6A CN117939016A (en) | 2024-02-21 | 2024-02-21 | Encoding method and device for audio and video files |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN117939016A true CN117939016A (en) | 2024-04-26 |
Family
ID=90753801
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202410195400.6A Pending CN117939016A (en) | 2024-02-21 | 2024-02-21 | Encoding method and device for audio and video files |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN117939016A (en) |
- 2024-02-21: CN application CN202410195400.6A filed; published as CN117939016A, status Pending
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN112752115B (en) | Live broadcast data transmission method, device, equipment and medium | |
| EP2271098A1 (en) | Server device, content distribution method, and program | |
| CN101924944B (en) | Selection method and information providing method and devices of scalable video coding operating point | |
| CN106921843B (en) | Data transmission method and device | |
| CN111327580A (en) | A message transmission method and device | |
| US20120017249A1 (en) | Delivery system, delivery method, conversion apparatus, and program | |
| KR20250170596A (en) | Signaling use of PDU sets and burst end markings for communicating WEBRTC media data. | |
| US20230362214A1 (en) | 5g support for webrtc | |
| KR20070095428A (en) | Signaling buffer parameters representing the receiver buffer architecture | |
| JP5257448B2 (en) | Server apparatus, communication method and program | |
| CN117939016A (en) | Encoding method and device for audio and video files | |
| US12395538B2 (en) | Signaling media timing information from a media application to a network element | |
| CN101273631B (en) | A multi-party video communication media flow control system and method | |
| US11855775B2 (en) | Transcoding method and apparatus, medium, and electronic device | |
| US20250150491A1 (en) | Tethered devices for webrtc in a cellular system | |
| CN102752586B (en) | The implementation method watched TV in terminal, Apparatus and system | |
| JPWO2009145294A1 (en) | Server apparatus, communication method and program | |
| WO2025101545A1 (en) | Tethered devices for webrtc in a cellular system | |
| CN117979093A (en) | Wireless ad hoc network video transmission system based on live555 and FFmpeg frames | |
| CN117596442A (en) | Integrated communication methods and platforms | |
| KR20250072957A (en) | Automatically generate video content in response to network outages | |
| Räsänen | Implementation of recording and playback in video call | |
| Sterca et al. | Evaluating Dynamic Client-Driven Adaptation Decision Support in Multimedia Proxy-Caches | |
| | IVVR Technology Introduction with Applications Using Dialogic® PowerMedia™ Host Media Processing Software | |
| KR20050045665A (en) | Dynamic switching apparatus and method for encoding rate |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |