JP3472974B2 - Acoustic signal encoding method and acoustic signal decoding method - Google Patents
- Publication number: JP3472974B2
- Authority: JP (Japan)
- Legal status: Expired - Lifetime
Description
[0001]
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to an encoding method that transforms an acoustic signal such as speech or music into the frequency domain and quantizes it efficiently, and to a corresponding decoding method.
[0002]
2. Description of the Related Art
Techniques that quantize an acoustic signal such as speech or music in the frequency domain when encoding it with a small number of bits are well known. The DFT (discrete Fourier transform), DCT (discrete cosine transform), MDCT (modified discrete cosine transform), and the like are used for the transform. It is also known that linear prediction analysis is effective for flattening the frequency domain coefficients before quantization. An example of a method that combines these techniques to realize high-quality coding for a wide range of acoustic signals is the acoustic signal transform encoding method and decoding method of Japanese Patent Application No. 7-52389. This process is shown in FIG. 3.
[0003] The digitized acoustic signal is processed by frame division process 11: for every N input samples (one frame), an input sequence of the past 2×N samples is extracted with an overlap of N samples, and in time windowing process 12 a window function (time window) of length 2N is applied to this 2×N-sample sequence. A Hanning window, for example, is used as the window function W(n).
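The framing and windowing just described can be sketched in a few lines. The following is an illustrative NumPy sketch, not part of the patent; the exact window shape beyond "Hanning" is an assumption here:

```python
import numpy as np

def frame_and_window(signal, N):
    """Frame division 11 and time windowing 12: extract 2N-sample frames
    advancing N samples at a time (each frame overlaps the previous one by
    N samples) and apply a 2N-point window. A Hanning-type window is assumed;
    the text names the Hanning window but does not fix its exact form."""
    w = np.hanning(2 * N)  # hypothetical choice of W(n)
    return [signal[s:s + 2 * N] * w
            for s in range(0, len(signal) - 2 * N + 1, N)]
```

Each successive frame shares its first N samples with the previous frame's last N samples before windowing.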
[0004] The windowed signal x(n) is transformed by, for example, an N-th order MDCT (Modified Discrete Cosine Transform) process 13 into frequency domain coefficients (sample values at the respective points on the frequency axis) y(k). The signal x(n) obtained in windowing process 12 is also analyzed in linear prediction analysis process 14 to obtain the P-th order prediction coefficients α_0, …, α_p. These prediction coefficients α_0, …, α_p are converted in quantization process 15 into, for example, LSP parameters and then quantized, yielding an index I_p that represents the spectral outline. In this example, spectral outline calculation process 16 computes, from the LSP parameters quantized in process 15, the square root of the power spectral envelope (power transfer function) of the linear prediction coefficients α_0, …, α_p, which serves as an approximation of the amplitude envelope of the MDCT coefficients. The MDCT coefficients are divided by this calculated spectral envelope in flattening process 17. From the spectral envelope computed in process 16, weight calculation process 18 derives weighting coefficients according to auditory characteristics; using these weighting coefficients, the MDCT coefficients v(k) flattened in process 17 are perceptually weighted and quantized in weighted quantization process 19, and the quantization index I_M is output.
[0005] In the decoding method, the indexes I_P and I_M are inverse-quantized in inverse quantization process 21 to obtain the LSP parameters and the flattened coefficients v^(k). From the LSP parameters, spectral outline calculation process 22 computes the square root of the spectral envelope characteristic, and the flattened coefficients v^(k) are divided by that result in inverse flattening process 23. The division result is transformed in inverse MDCT process 24 by the inverse modified discrete cosine transform into the time-domain signal x^(n). For every frame (N samples), 2N samples are taken out with an overlap of N samples and a window function is applied to them in windowing process 25. In overlap-add process 26, the first-half N samples of the current frame and the second-half N samples of the previous frame are added together, and those N samples form the reproduced acoustic signal of the current frame.
[0006] The MDCT has the advantage that no frame-boundary noise arises, but the transform and inverse-transform operations fold the time-domain signal, so the original time-domain waveform is not reproduced. For this reason the MDCT cannot be combined with a linear prediction filter or a synthesis filter. Moreover, to flatten the MDCT coefficients with the spectral envelope of linear prediction, the reciprocal of the spectral envelope characteristic had to be obtained: in the encoding process each transform-coefficient sample must be multiplied by the reciprocal of the square root of the spectral envelope characteristic, and in the decoding process each transform-coefficient sample must be divided by the reciprocal of the square root of the spectral envelope characteristic. In the decoding process in particular, the amount of computation for such divisions was a problem.
[0007] With the DCT (discrete cosine transform), the time-domain signal is not equivalently folded as it is in the MDCT, so the ordinary inverse-filter operation in the time domain is made invertible by the synthesis-filter operation. This process is shown in FIG. 4. In this case the input digital acoustic signal is divided into frames of N samples in frame division process 31 and passed through an inverse filter in inverse filtering process 32 to obtain the linear prediction residual waveform. The residual waveform is discrete-cosine-transformed in the DCT process into frequency domain coefficients, which are perceptually weighted and quantized in weighted quantization process 34, and the encoded code is output. In the decoding process, the input code is inverse-quantized in inverse quantization process 35 to reproduce the frequency domain coefficients, which are inverse-discrete-cosine-transformed in inverse DCT process 36 into the time-domain residual waveform; that residual waveform is passed through a linear prediction synthesis filter in synthesis filter process 37 to reproduce the acoustic signal.
[0008] As described above, by using the DCT, flattened coefficients can be obtained by transforming the signal passed through the inverse filter of the linear prediction analysis into the frequency domain, and the original spectral envelope characteristic can be reproduced by passing the inverse-transformed signal through the synthesis filter. With the DCT, however, discontinuity noise at the frame boundaries could be a problem.
[0009]
SUMMARY OF THE INVENTION
It is an object of the present invention to provide an acoustic signal encoding method and an acoustic signal decoding method that reduce the amount of processing and improve prediction efficiency while suppressing noise at frame boundaries.
[0010] According to the encoding method of the present invention, samples x(i), (i = 0, …, 2N−1) are created by applying, with overlap against the previous frame, a window function of twice the frame update length N. The N/2-length samples x(i), (i = 0, …, N/2−1) in x(i) are time-reversed and subtracted from the N/2-length samples x(i), (i = N/2, …, N−1), and the N/2-length samples x(i), (i = N+N/2, …, 2N−1) are time-reversed and added to the N/2-length samples x(i), (i = N, …, N+N/2−1), creating N points y(i). Linear prediction analysis (short-term prediction, for example partial autocorrelation prediction; long-term prediction, for example pitch prediction; or both) is performed on y(i), and y(i) is passed through an inverse filter whose coefficients are the resulting prediction coefficients to create the prediction residual signal z(i), (i = 0, …, N−1). This z(i) is cosine-transformed into the frequency domain coefficients v(i), (i = 0, …, N−1), and v(i) is quantized to obtain the encoded output.
[0011] In this case, it is possible to determine whether the input signal is speech having pitch periodicity or general music, and to make the overlap of the window function across frames small (including zero overlap) when speech is likely, and large when music is likely. According to the decoding method of the present invention, the frequency domain coefficients v^(i), (i = 0, …, N−1) created by inverse quantization are inverse-cosine-transformed to create the reproduced residual signal z^(i), (i = 0, …, N−1). Prediction coefficients are created either by performing linear prediction analysis of short-term prediction, long-term prediction, or both (backward prediction), or by decoding the input code (forward prediction), and z^(i) is passed through a linear prediction synthesis filter using those coefficients to create the signal y^(i), (i = 0, …, N−1). The first-half N/2-length samples y^(i), (i = 0, …, N/2−1) in y^(i) are multiplied by −1, time-reversed, and extended before the frame of y^(i), and the second-half N/2-length samples y^(i), (i = N/2, …, N−1) are time-reversed and extended after the frame of y^(i), giving the samples x^(i), (i = 0, …, 2N−1). A window function of twice the frame length N is applied to this x^(i), and the second-half and first-half waveforms of adjacent frames are overlap-added to obtain the reproduced acoustic signal.
[0012] As described above, conventional transform coding that also uses linear prediction analysis could not make the MDCT coexist with time-domain filter operations. In the present invention, the MDCT is divided into a pre-processing step and a DCT step, and likewise the inverse MDCT is divided into an inverse DCT step and a post-processing step, and inverse filtering and synthesis filtering are applied to the signals at the intermediate stages. A reduction of the amount of computation and a reduction of distortion by time-domain prediction can thus be realized while noise at the frame boundaries is kept suppressed.
[0013]
DESCRIPTION OF THE PREFERRED EMBODIMENTS
FIG. 1A shows an embodiment of the encoding method according to the present invention. In obtaining and quantizing N frequency-domain coefficients from a 2N-point input waveform that overlaps the previous frame by N points, it is the same as the conventional MDCT-based encoding method. That is, the input signal is divided in frame division process 11 into 2N-point waveforms overlapping by N points, and in windowing process 12 a window function w_in(n) of length 2N is applied to each 2N-point waveform to obtain the 2N-point samples x(i), (i = 0, …, 2N−1).
[0014] In the present invention, pre-processing 41 rearranges the samples. To make this easy to understand, consider the ordinary MDCT. Let x(n) be the windowed sequence; applying the MDCT defined below yields the N coefficients Y(k):

Y(k) = Σ_{i=0}^{2N-1} x(i) cos{π(2i+1+N)(2k+1)/(4N)}   …(1)

The MDCT transform function, that is, cos{π(2i+1+N)(2k+1)/(4N)}, is shown for N = 32 in FIGS. 5A, 5B, 5C and 5D for k = 0, k = 1, k = 2 and k = 31, respectively. In the first half of these curves (0 to N, i/N = 0 to 1) the function is odd-symmetric about N/2 (i/N = 0.5), and in the second half (N to 2N, i/N = 1 to 2) it is even-symmetric about 3N/2 (i/N = 1.5). The present invention uses this property to carry out the MDCT not directly but split into pre-processing and a cosine transform. In the MDCT, the time-domain samples at i/N = 0 to 0.5 and at i/N = 1.5 to 2 are the ones that are folded (multiplied by ±1 and combined) onto the remaining samples. Since the MDCT is, as shown in equation (1), a sum of products of x(i) with the functions of FIG. 5, pre-processing 41 time-reverses the N/2-length samples x(i), (i = 0, …, N/2−1: i/N = 0 to 0.5) at the beginning of each frame and subtracts them from the next N/2-length samples x(i), (i = N/2, …, N−1: i/N = 0.5 to 1), and time-reverses the last N/2-length samples x(i), (i = N+N/2, …, 2N−1: i/N = 1.5 to 2) of the frame and adds them to the immediately preceding N/2-length samples x(i), (i = N, …, N+N/2−1: i/N = 1 to 1.5).
[0015] That is, the folding of pre-processing 41 is expressed by:

y(i) = x(N/2+i) − x(N/2−1−i),  (i = 0, …, N/2−1)
y(i) = x(N/2+i) + x(5N/2−1−i),  (i = N/2, …, N−1)   …(2)

Applying the following N-point DCT (discrete cosine transform) to this folded result y(i) yields the same values as the ordinary MDCT coefficients v(k).
[0016]
v(k) = Σ_{i=0}^{N-1} y(i) cos{π(2i+1+2N)(2k+1)/(4N)}   …(3)
In the present invention, y(i) is passed in inverse filtering process 42 through a linear prediction inverse filter with α_j, (j = 1, …, p) as the p-th order prediction coefficients; that is, the following computation is performed to obtain the prediction residual signal z(i).
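The equivalence just stated, namely that the folding of equation (2) followed by the N-point DCT of equation (3) reproduces the direct MDCT of equation (1), can be checked numerically. The following is an illustrative sketch (NumPy assumed; the function names are introduced here, and the transforms are left unnormalized exactly as the patent writes them):

```python
import numpy as np

def mdct_direct(x, N):
    # Equation (1): Y(k) = sum_{i=0}^{2N-1} x(i) cos(pi(2i+1+N)(2k+1)/(4N))
    i = np.arange(2 * N)
    k = np.arange(N)
    basis = np.cos(np.pi * np.outer(2 * i + 1 + N, 2 * k + 1) / (4 * N))
    return x @ basis

def fold(x, N):
    # Pre-processing 41, equation (2): fold 2N windowed samples to N points.
    y = np.empty(N)
    for i in range(N // 2):
        y[i] = x[N // 2 + i] - x[N // 2 - 1 - i]
    for i in range(N // 2, N):
        y[i] = x[N // 2 + i] + x[5 * N // 2 - 1 - i]
    return y

def dct_n(y, N):
    # Equation (3): v(k) = sum_{i=0}^{N-1} y(i) cos(pi(2i+1+2N)(2k+1)/(4N))
    i = np.arange(N)
    k = np.arange(N)
    basis = np.cos(np.pi * np.outer(2 * i + 1 + 2 * N, 2 * k + 1) / (4 * N))
    return y @ basis
```

For arbitrary inputs, `dct_n(fold(x, N), N)` matches `mdct_direct(x, N)` to machine precision, which is exactly the symmetry property that pre-processing 41 relies on.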
[0017]
z(i) = y(i) + Σ_{j=1}^{p} α_j y(i−j),  (i = 0, …, N−1)   …(4)
The prediction coefficients α_j are obtained by linear prediction analysis of y(i). At the beginning of the frame, that is, for i < p, the last (p−i) points of the previous frame's y^(i) may be used in place of the current frame's y(i).
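The recursion of equation (4), including the use of the previous frame's samples for i < p, can be sketched as follows (NumPy assumed; `y_prev` is an illustrative name for the previous frame's y^, not a name from the patent):

```python
import numpy as np

def inverse_filter(y, alpha, y_prev):
    """Prediction inverse filter of equation (4):
    z(i) = y(i) + sum_{j=1}^{p} alpha_j * y(i-j).
    For i < p the last p samples of the previous frame (y_prev)
    supply the missing past values, as the text prescribes."""
    p, N = len(alpha), len(y)
    ext = np.concatenate([y_prev[-p:], y])  # prepend frame memory
    z = np.empty(N)
    for i in range(N):
        z[i] = ext[p + i] + sum(alpha[j - 1] * ext[p + i - j]
                                for j in range(1, p + 1))
    return z
```

With all coefficients zero the filter is the identity; with a single coefficient α_1 = 1 it outputs y(i) + y(i−1).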
[0018] Next, the same cosine transform as equation (3) is applied to z(i) in DCT process 33 to obtain the frequency domain coefficients v(i), and v(i) is perceptually weighted and quantized in weighted quantization process 19. That is, the MDCT is carried out as the plural processes of pre-processing 41 and DCT process 33, with inverse filtering 42 performed in between. The spectral envelope of the prediction residual signal is nearly flat, so applying the cosine transform (DCT) to z(i) yields coefficients v(i) that are nearly flat over the whole band. For this reason the usual MDCT-coefficient flattening process 17 (FIG. 3A) becomes unnecessary for quantization. In the above example the inverse filtering 42 is short-term prediction using the linear prediction coefficients α_i, but it may be long-term prediction such as pitch prediction, or both. The prediction coefficients α_i may be quantized and transmitted separately (forward prediction), or estimated from past synthesized waveforms (backward prediction). For quantization of the DCT coefficients, quantization with a distance measure weighted by the spectral envelope, or quantization with adaptive bit allocation according to the spectral envelope, is preferable.
[0019] An embodiment of the decoding method of the present invention will be described with reference to FIG. 1B. In creating a 2N-point waveform from N coefficients by the inverse transform and overlap-adding it with the preceding and following frames with an overlap of N points, it is the same as the conventional MDCT-based decoding method. The flattened DCT coefficients v^(k) are reproduced from the input code in inverse quantization process 21. In the present invention, inverse DCT process 36 applies to these coefficients v^(k) the inverse DCT computed by equation (5) to reproduce the residual signal z^(i).
[0020]
z^(i) = Σ_{k=0}^{N-1} v^(k) cos{π(2i+1+2N)(2k+1)/(4N)}   …(5)
Next, this reproduced residual signal z^(i) is passed through the linear prediction synthesis filter computed by equation (6) in synthesis filter process 44.
[0021]
y^(i) = z^(i) − Σ_{j=1}^{p} α_j y^(i−j),  (i = 0, …, N−1)   …(6)
At the beginning of the frame, that is, for i < p, the last (p−i) points of the previous frame's y^(i) may be used in place of the current frame's y^(i). From this y^(i), post-processing 45 reproduces the 2N-point signal x^(i).
[0022] In this post-processing 45, the first-half N/2-length samples y^(i), (i = 0, …, N/2−1) of each frame are multiplied by −1 and reversed in time order, extending them before the frame of y^(i), and the second-half N/2-length samples y^(i), (i = N/2, …, N−1) are reversed in time order, extending them after the frame of y^(i), to create x^(i). That is, x^(i) is as follows.
[0023]
x^(i) = −y^(N/2−1−i) for i = 0, …, N/2−1
x^(i) = y^(i−N/2) for i = N/2, …, 3N/2−1
x^(i) = y^(5N/2−1−i) for i = 3N/2, …, 2N−1   …(7)
Next, windowing process 25 applies a window function w_out(i) to x^(i), and then overlap-add process 26 superposes the first half of the current frame's x^(i) on the second half of the previous frame's x^(i) to obtain the output waveform, that is, the reproduced acoustic signal. The input window function w_in(i) and the output window function w_out(i) need only satisfy the following relation.
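The index mapping of equation (7) can be sketched directly (NumPy assumed; the function name is introduced here for illustration):

```python
import numpy as np

def unfold(yh):
    """Post-processing 45, equation (7): expand the N-point synthesis-filter
    output y^ into a 2N-point frame x^ (negated mirror in front,
    plain mirror behind)."""
    N = len(yh)
    xh = np.empty(2 * N)
    for i in range(N // 2):
        xh[i] = -yh[N // 2 - 1 - i]        # i = 0, ..., N/2-1
    for i in range(N // 2, 3 * N // 2):
        xh[i] = yh[i - N // 2]             # i = N/2, ..., 3N/2-1
    for i in range(3 * N // 2, 2 * N):
        xh[i] = yh[5 * N // 2 - 1 - i]     # i = 3N/2, ..., 2N-1
    return xh
```

The middle N samples of x^ are y^ itself; the N/2 samples on either side are the mirrored extensions that windowing process 25 and overlap-add process 26 then blend across frames.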
[0024]
w_out(i)·w_in(i) + w_out(2N−1−i)·w_in(2N−1−i) = 1,  (i = 1, …, 2N−1)   …(8)
Another embodiment of the encoding method of the present invention switches this window function according to the input signal. A window defined by
w_in(i) = sin(iπ/(2N)),  i = 0, …, 2N−1   …(9)
has a 50% overlap, so frame-boundary noise hardly occurs even for stationary music. When the signal changes relatively quickly, however, or when the pitch period is clear, reducing the overlap increases the effect of time-domain linear prediction and pitch prediction and makes the distortion smaller. With M denoting the length of the overlapping portion, the window becomes as follows.
[0025]
w_in(i) = 0 for 0 ≤ i < N/2 − M/2
w_in(i) = sin((i − N/2 + M/2)π/(2M)) for N/2 − M/2 ≤ i < N/2 + M/2
w_in(i) = 1 for N/2 + M/2 ≤ i < 3N/2 − M/2
w_in(i) = sin((i − 3N/2 + 3M/2)π/(2M)) for 3N/2 − M/2 ≤ i < 3N/2 + M/2
w_in(i) = 0 for 3N/2 + M/2 ≤ i < 2N   …(10)
The overlap for M = N is as shown in FIG. 2A (equivalent to the window of equation (9)), for M = N/2 as shown in FIG. 2B, and for M = 0 the overlap becomes zero as shown in FIG. 2C. When M is small, pitch prediction is possible and speech distortion can be kept small, but frame-boundary noise may appear for stationary sounds. Accordingly, the overlap M is adjusted adaptively to the input (for example, M = 0 for speech input and M = N for music input) and the most suitable overlap is selected, so that a wide variety of inputs can be handled. If a code designating the overlap is then transmitted as auxiliary information, the decoder can reproduce it; that is, in the decoding method the overlap of the window function across frames is changed according to the code in the input that designates the overlap.
[0026] Since the inverse filtering and quantization performed after windowing can be carried out efficiently in common regardless of the input, the configuration is more compact than holding and switching between plural encoding and decoding systems.
[0027]
EFFECTS OF THE INVENTION
According to the present invention, time-domain prediction and filtering become possible while retaining the MDCT's feature of maintaining continuity at the frame boundaries by overlap-add in the time domain, so quantization distortion can be made small. Furthermore, the spectral envelope computation and the per-MDCT-coefficient division in the decoder can be replaced by the synthesis-filter operation, so the amount of computation can also be reduced.
[FIG. 1] A is a diagram showing the encoding process of the present invention; B is a diagram showing the decoding process of the present invention.
[FIG. 2] Diagrams showing various examples of window-function overlap for explaining the third embodiment of the present invention.
[FIG. 3] A is a diagram showing the processing of the conventional MDCT-based encoding method; B is a diagram showing the processing of its decoding method.
[FIG. 4] A is a diagram showing the conventional encoding process based on the DCT and a synthesis filter; B is a diagram showing its decoding process.
[FIG. 5] A diagram showing examples of the MDCT transform functions.
Front Page Continuation
(72) Inventor: Kazunaga Ikeda, 3-19-2 Nishishinjuku, Shinjuku-ku, Tokyo, within Nippon Telegraph and Telephone Corporation
(72) Inventor: Satoshi Miki, 3-19-2 Nishishinjuku, Shinjuku-ku, Tokyo, within Nippon Telegraph and Telephone Corporation
(56) References cited: JP-A-4-44099 (JP, A); JP-A-6-232824 (JP, A)
(58) Fields searched (Int.Cl.7, DB name): G10L 19/08; G10L 19/04
Claims (4)

1. An acoustic signal encoding method for transforming an input signal in frame units of a plurality (N) of samples into the frequency domain and quantizing it, comprising:
a process of creating samples x(i), (i = 0, …, 2N−1) by applying, with overlap against the previous frame, a window function of twice the length N;
a process of obtaining N points y(i), (i = 0, …, N−1) by time-reversing the N/2-length samples x(i), (i = 0, …, N/2−1) of said samples x(i) and subtracting them from the N/2-length samples x(i), (i = N/2, …, N−1), and by time-reversing the N/2-length samples x(i), (i = N+N/2, …, 2N−1) and adding them to the N/2-length samples x(i), (i = N, …, N+N/2−1), that is:
y(i) = x(N/2+i) − x(N/2−1−i) for i = 0, …, N/2−1
y(i) = x(N/2+i) + x(5N/2−1−i) for i = N/2, …, N−1;
a process of performing linear prediction analysis of short-term prediction, long-term prediction, or both on y(i), and creating a prediction residual signal z(i), (i = 0, …, N−1) by passing y(i) through an inverse filter whose coefficients are the resulting prediction coefficients;
a process of creating frequency domain coefficients v(i), (i = 0, …, N−1) from said prediction residual signal z(i) by a cosine transform; and
a process of quantizing said frequency domain coefficients v(i) to obtain an encoded output.
2. The acoustic signal encoding method according to claim 1, further comprising a process of determining whether the input signal is speech having pitch periodicity or general music, wherein the overlap of the window function across frames is made small when the determination indicates that speech is likely, and made large when the determination indicates that music is likely.
3. An acoustic signal decoding method for reproducing a speech or audio signal in frame units from a code quantized after transformation into the frequency domain, the method comprising:
a process of applying an inverse cosine transform to frequency-domain coefficients v^(i), (i = 0, ..., N−1) created by inverse quantization, to create a reproduced residual signal z^(i), (i = 0, ..., N−1);
a process of creating prediction coefficients either by performing linear prediction analysis, using short-term (proximity) prediction or long-term prediction or both (backward prediction), or by decoding received codes (forward prediction), and passing the reproduced residual signal z^(i) through a linear prediction synthesis filter using those coefficients to create samples y^(i), (i = 0, ..., N−1);
a process of multiplying the first length-N/2 samples y^(i), (i = 0, ..., N/2−1) by −1, time-reversing them, and extending them before the frame of y^(i), and time-reversing the last length-N/2 samples y^(i), (i = N/2, ..., N−1) and extending them after the frame of y^(i), to obtain x^(i); that is,
x^(i) = −y^(N/2 − 1 − i) for i = 0, ..., N/2−1,
x^(i) = y^(i − N/2) for i = N/2, ..., 3N/2−1,
x^(i) = y^(5N/2 − 1 − i) for i = 3N/2, ..., 2N−1; and
a process of multiplying x^(i) by a window function of length 2N, twice the frame length, and overlap-adding the result with the waveforms of the preceding and following frames.
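The unfolding step of claim 3 is the mirror image of the encoder's folding, expanding N synthesized samples back to a 2N-sample segment. A minimal NumPy sketch of the three equations (the function name is illustrative):

```python
import numpy as np

def unfold(y):
    """Expand N synthesized samples y(0..N-1) into 2N samples x(0..2N-1):
    x(i) = -y(N/2-1-i)   for i = 0..N/2-1      (negated, time-reversed head)
    x(i) =  y(i-N/2)     for i = N/2..3N/2-1   (the frame itself)
    x(i) =  y(5N/2-1-i)  for i = 3N/2..2N-1    (time-reversed tail)
    """
    N = len(y)
    h = N // 2
    x = np.empty(2 * N)
    x[:h] = -y[h - 1::-1]
    x[h:h + N] = y
    x[h + N:] = y[N - 1:h - 1:-1]
    return x

y = np.arange(8, dtype=float)    # toy synthesized frame, N = 8
x = unfold(y)
# e.g. x[0] = -y[3], x[4] = y[0], x[15] = y[4]
```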
4. The acoustic signal decoding method according to claim 3, wherein the overlap of the window function across frames is changed in accordance with information in the input code that designates the overlap.
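Taken together, the folding of claim 1 and the unfolding plus windowed overlap-add of claim 3 form a time-domain aliasing cancellation pair: with a symmetric, power-complementary window of length 2N applied both before folding and after unfolding, the aliasing introduced by folding cancels exactly between adjacent frames. A self-contained NumPy sketch under those assumptions (the sine window is one valid choice, and the prediction, transform, and quantization stages are omitted as simplifications):

```python
import numpy as np

def fold(x):
    """Encoder folding of claim 1: 2N windowed samples -> N samples."""
    N = len(x) // 2; h = N // 2
    y = np.empty(N)
    y[:h] = x[h:N] - x[h - 1::-1]
    y[h:] = x[N:N + h] + x[2 * N - 1:N + h - 1:-1]
    return y

def unfold(y):
    """Decoder unfolding of claim 3: N samples -> 2N samples."""
    N = len(y); h = N // 2
    x = np.empty(2 * N)
    x[:h] = -y[h - 1::-1]
    x[h:h + N] = y
    x[h + N:] = y[N - 1:h - 1:-1]
    return x

N = 8
w = np.sin(np.pi * (np.arange(2 * N) + 0.5) / (2 * N))  # full-overlap sine window
rng = np.random.default_rng(0)
s = rng.standard_normal(6 * N)                          # toy input signal

out = np.zeros_like(s)
for t in range(0, len(s) - 2 * N + 1, N):               # frame advance N, window 2N
    y = fold(w * s[t:t + 2 * N])       # analysis windowing + folding
    out[t:t + 2 * N] += w * unfold(y)  # unfolding + synthesis windowing + overlap-add

# in the fully overlapped interior, reconstruction is exact (aliasing cancels)
assert np.allclose(out[N:-N], s[N:-N])
```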
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP28503196A JP3472974B2 (en) | 1996-10-28 | 1996-10-28 | Acoustic signal encoding method and acoustic signal decoding method |
Publications (2)
Publication Number | Publication Date |
---|---|
JPH10133695A JPH10133695A (en) | 1998-05-22 |
JP3472974B2 true JP3472974B2 (en) | 2003-12-02 |
Family
ID=17686268
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP28503196A Expired - Lifetime JP3472974B2 (en) | 1996-10-28 | 1996-10-28 | Acoustic signal encoding method and acoustic signal decoding method |
Country Status (1)
Country | Link |
---|---|
JP (1) | JP3472974B2 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1527441B1 (en) * | 2002-07-16 | 2017-09-06 | Koninklijke Philips N.V. | Audio coding |
US7707034B2 (en) | 2005-05-31 | 2010-04-27 | Microsoft Corporation | Audio codec post-filter |
US7831421B2 (en) | 2005-05-31 | 2010-11-09 | Microsoft Corporation | Robust decoder |
CN117476017A (en) * | 2022-07-27 | 2024-01-30 | 华为技术有限公司 | Audio coding and decoding methods, devices, storage media and computer program products |
Also Published As
Publication number | Publication date |
---|---|
JPH10133695A (en) | 1998-05-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP3881943B2 (en) | Acoustic encoding apparatus and acoustic encoding method | |
KR102063900B1 (en) | Frame error concealment method and apparatus, and audio decoding method and apparatus | |
US5684920A (en) | Acoustic signal transform coding method and decoding method having a high efficiency envelope flattening method therein | |
JP3483958B2 (en) | Broadband audio restoration apparatus, wideband audio restoration method, audio transmission system, and audio transmission method | |
JP3317470B2 (en) | Audio signal encoding method and audio signal decoding method | |
JP2009515212A (en) | Audio compression | |
KR20200004917A (en) | Method and apparatus for concealing frame error and method and apparatus for audio decoding | |
JP2010020346A (en) | Method for encoding speech signal and music signal | |
JP3814611B2 (en) | Method and apparatus for processing time discrete audio sample values | |
JPH0869299A (en) | Voice coding method, voice decoding method and voice coding/decoding method | |
WO2016016724A2 (en) | Method and apparatus for packet loss concealment, and decoding method and apparatus employing same | |
EP3664088B1 (en) | Audio coding mode determination | |
JP3765171B2 (en) | Speech encoding / decoding system | |
JP2002372996A (en) | Method and device for encoding acoustic signal, and method and device for decoding acoustic signal, and recording medium | |
JP3186007B2 (en) | Transform coding method, decoding method | |
JP3344944B2 (en) | Audio signal encoding device, audio signal decoding device, audio signal encoding method, and audio signal decoding method | |
JP3087814B2 (en) | Acoustic signal conversion encoding device and decoding device | |
JP3472974B2 (en) | Acoustic signal encoding method and acoustic signal decoding method | |
JP3297749B2 (en) | Encoding method | |
JP3237178B2 (en) | Encoding method and decoding method | |
JP3353267B2 (en) | Audio signal conversion encoding method and decoding method | |
JPH09127985A (en) | Signal coding method and device therefor | |
JP3218679B2 (en) | High efficiency coding method | |
JP3348759B2 (en) | Transform coding method and transform decoding method | |
JPH09127987A (en) | Signal coding method and device therefor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FPAY | Renewal fee payment (event date is renewal date of database) |
Free format text: PAYMENT UNTIL: 20080919 Year of fee payment: 5 |
|
FPAY | Renewal fee payment (event date is renewal date of database) |
Free format text: PAYMENT UNTIL: 20090919 Year of fee payment: 6 |
|
FPAY | Renewal fee payment (event date is renewal date of database) |
Free format text: PAYMENT UNTIL: 20100919 Year of fee payment: 7 |
|
FPAY | Renewal fee payment (event date is renewal date of database) |
Free format text: PAYMENT UNTIL: 20110919 Year of fee payment: 8 |
|
FPAY | Renewal fee payment (event date is renewal date of database) |
Free format text: PAYMENT UNTIL: 20120919 Year of fee payment: 9 |
|
FPAY | Renewal fee payment (event date is renewal date of database) |
Free format text: PAYMENT UNTIL: 20130919 Year of fee payment: 10 |
|
S531 | Written request for registration of change of domicile |
Free format text: JAPANESE INTERMEDIATE CODE: R313531 |
|
R350 | Written notification of registration of transfer |
Free format text: JAPANESE INTERMEDIATE CODE: R350 |
|
EXPY | Cancellation because of completion of term |