IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 40, NO. 5, MAY 1992
Noise Reduction in Recursive Digital Filters Using
High-Order Error Feedback
Timo I. Laakso, Student Member, IEEE, and Iiro O. Hartimo, Senior Member, IEEE
Abstract-The problem of solving for the optimal (minimum-noise) error feedback coefficients of recursive digital filters is addressed in the general high-order case. It is shown that when minimum noise variance at the filter output is required, the optimization problem leads to a set of familiar Wiener-Hopf or Yule-Walker equations, demonstrating that optimal error feedback can be interpreted as a special case of Wiener filtering. As an alternative to the optimal solution, formulas for suboptimal error feedback with symmetric or antisymmetric coefficients are derived. In addition, the design of error feedback using power-of-two coefficients is discussed. These schemes are often more suitable for practical implementation than the optimal error feedback, and in many cases almost as good roundoff noise performance is achieved.

The efficiency of high-order error feedback is examined by test implementations of a set of standard filters. It is concluded that error feedback is a very powerful and versatile method to cut down the quantization noise in any classical IIR filter implemented as a cascade of second-order direct form sections. Second-order error feedback is sufficient for most cascade implementations, whereas the new high-order schemes are attractive for use with high-order direct form sections.
I. INTRODUCTION

ERROR feedback (EF) (or error spectrum shaping, noise shaping, or residue feedback) is a general method that can be used to reduce errors inherent in any quantization operation. To our knowledge, error feedback techniques are widely used in applications like predictive speech coding [29], predictive image coding [21], and sigma-delta analog-to-digital conversion [22].

Error feedback can also be used to reduce quantization errors generated in finite-wordlength implementations of recursive digital filters. Especially with fixed-point implementations of narrow-band low-pass filters its effect is striking and usually superior to that of any other low-noise structure [63], [58], [24], [35].

The error feedback is implemented by extracting the quantization error after the product quantization and feeding the error signal back through a simple, usually FIR-type, filter (Fig. 1). As is well known, the level of the output quantization noise of a recursive filter tends to be high, especially when the poles are located close to the unit circle [27]. By choosing the EF parameters appropriately, zeros can be placed in the error spectrum to reduce the noise very efficiently [25].

Fig. 1. A quantizer with Nth-order error feedback.
It should be emphasized that the EF affects only the
transfer function of the quantization error signal, while
the transfer function of the filter itself remains unchanged.
Thus, the EF cannot have any effect on the coefficient sensitivity properties of the filter structure, nor can it enhance the overflow properties of the filter implementation.
Originally used in a PCM rounding quantizer [60], error feedback has since been applied to direct form second-order sections [62], [63], [7], [51], [24], to cascaded direct form structures [25], [35], to the high-order direct form (lump) structure [61], to a cascade of fourth-order direct-form sections [36], to normal and state-space structures [9], [4], [58], [59], [64], [67], [71], to orthogonal polynomial (Gray-Markel) structures [69], [70], and, quite recently, to a certain class of wave-digital-related VGIC structures [15]. Special low-sensitivity second-order structures that are amenable to the EF have been introduced [14].

The ability of the EF to reduce the amplitude of limit cycles or, in some cases, to eliminate them completely has been demonstrated [1]-[3], [7], [51], [58], [67], [41], [40]. Feeding back the error generated in the overflow situation can be used to reduce the amplitude of overflow transients [12] or to eliminate zero-input overflow oscillations completely [39].
Manuscript received December 14, 1988; revised February 11, 1991.
The authors are with the Laboratory of Signal Processing and Computer
Technology, Helsinki University of Technology, SF-02150 Espoo, Finland.
IEEE Log Number 9106580.
1053-587X/92$03.00 © 1992 IEEE
Efficient hardware implementations of the EF have been advanced [63], [8], [51], [59], [67], [68], [61], [65]. It has been shown that the EF can also be efficiently implemented with a signal processor [35], [36], [13]. With tailor-made hardware one can save several bits in the internal signal wordlength by investing in error feedback. With general-purpose hardware (signal processors, etc.) it is possible to increase the effective wordlength when limit cycles or quantization noise grow too high.
Some basically different ways to formulate and apply the EF can be distinguished in the existing literature. We divide the proposed EF schemes into three categories according to how the parameters of the EF quantizer (the structure, the coefficients, and the order) are determined:

1) Constant Error Feedback: These schemes allow placing one or two zeros in the error transfer function at the points z = 1 or z = -1. Hence, they are best suited for narrow-band low-pass and high-pass filters, respectively. The properties of these EF schemes and their hardware implementation were studied in the early works [62], [63], [2], [4], [51]. The analyzed filters were mostly simple second-order direct form sections.
2) Built-in Error Feedback: With this term we refer to those EF schemes where the filter structure determines the type of error feedback, that is, the order and the structure of the EF quantizer and the values of the EF coefficients. The structure of the EF quantizer is not necessarily an FIR-type filter, but a replica of the recursive structure of the filter (sometimes the nonrecursive parts are included as well, which is called error feedforward [24]). The coefficients are more or less directly obtained by simply quantizing the recursive filter coefficients into integer values [8], [58], [67], [68], [61], [70], [71].
These EF schemes can be viewed as approximations of
extended precision arithmetic [24], [50]. When applied to
the state-space structure, elegant formulation and analysis
of the noise properties of the filter can be obtained [9],
[58], [67], [71]. Some schemes also allow several equivalent implementation strategies, as pointed out in [lo].
However, due to their complexity, these structures are
often more interesting from the theoretical point of view
than real alternatives for practical implementation.
3) Optimal Error Feedback: In this case the EF quantizer is completely general and isolated from the filter
structure. Any quantization point in the filter structure
where the wordlength is restored after multiplications can
be provided with this kind of error feedback. We call this
modified quantizer an error feedback quantizer. From the
implementation point of view, the optimal EF is most attractive when there are only a few quantization points in the
structure.
The EF coefficients are usually optimized by minimizing the quantization noise power at the filter output, that is, by using a least-mean-square (LMS) type criterion [24], [25], [14], [15], [36]-[38].
To our knowledge, only first- and second-order optimal
EF have been addressed in the literature so far. However,
in high-order systems high-order EF would naturally offer
better noise reduction. Also with the direct form (lump)
structure, which is still the most practical structure for
certain adaptive and time-varying filtering and control applications, as pointed out in [61], the high-order EF can
be of much help in enhancing the otherwise rather poor
finite-wordlength properties of this structure.
Higgins and Munson have derived formulas to calculate the optimal EF coefficients of a second-order EF quantizer so as to minimize the power of the output quantization noise [25]. In Section II of this paper we expand on the work of Higgins and Munson and derive the general formulas for the optimal coefficients of an EF quantizer of arbitrary order. Additionally, instead of the numerical integration that Higgins and Munson utilized, an algorithm for calculating these coefficients directly, applying the total square integral in the z-domain [31], is derived.
From the implementation point of view, the Nth-order
optimal error feedback is often too costly due to the N
explicit multiplications required. The costs can be reduced, e.g., by constraining the EF polynomial to have
symmetric or antisymmetric coefficients, which cuts the
number of distinct coefficients in half. In Section III, formulas for suboptimal error feedback with symmetric or
antisymmetric coefficients are derived. In many cases this
approach results in minimal losses in noise reduction as
compared to the optimal solution. It is thus well suited for
implementations where the coefficient symmetry can be
utilized.
Another strategy to reduce implementation costs is to
use EF coefficients with power-of-two values. This allows simple implementation by using mere additions (or
subtractions) and shifting. In Section IV we present a simple discrete optimization algorithm to find near-optimal
power-of-two EF coefficients that minimize the output
noise in the set of available coefficient values.
In Section V, second-, third-, and fourth-order EF is
applied to direct form I cascade implementations of some standard test filters. Cascades of both second- and fourth-order DF I sections are studied. In Section VI, implementation issues are briefly discussed. Two important implementation strategies, signal processors and custom VLSI techniques, are considered from the standpoint of error feedback.
II. NTH-ORDER OPTIMAL ERROR FEEDBACK
The error feedback is implemented by modifying the
quantizer in the filter structure. In a fixed-point implementation, the quantization is usually performed by discarding the lower bits of the double-precision accumulator (two’s complement truncation), and thus the
quantization error equals this residue left in the lower part.
The error is fed back through a simple FIR filter, as shown
in Fig. 1. (Also a secondary error is introduced due to the
finite wordlength of the EF quantizer [24], but assuming
that the EF wordlength is sufficient, this error is seldom
of any importance and will be neglected in our discussion.)
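The quantizer loop of Fig. 1 can be sketched in code. The following is an illustrative model only: the function and buffer names are our own, and a fixed truncation step stands in for the two's complement hardware.

```python
import numpy as np

def ef_quantize(v, e_buf, beta, step=2.0**-15):
    """Model of the EF quantizer in Fig. 1 (our naming, for illustration).

    v      -- double-precision accumulator value to be quantized
    e_buf  -- delay line holding the N past quantization errors
    beta   -- EF coefficients beta_1..beta_N, so that the error leaves
              the quantizer shaped by B(z) = 1 + sum_k beta_k z^-k
    """
    u = v + np.dot(beta, e_buf)       # add back the filtered past errors
    y = np.floor(u / step) * step     # two's-complement truncation
    e = y - u                         # residue left in the lower bits
    e_buf[1:] = e_buf[:-1]            # shift the error delay line
    e_buf[0] = e
    return y
```

With beta = [-1.0], i.e., B(z) = 1 - z^-1, the successive errors telescope, so the long-run mean error at the quantizer output is far smaller than the -step/2 bias of plain truncation.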
As usual, each quantizer is modeled as an independent additive white noise source with variance $\sigma^2 = 2^{-2b}/12$, where $(1 + b)$ is the wordlength (1 b for sign) [27]. With
this assumption, the optimal coefficients for the EF parameters that minimize the power of the quantization noise
can be determined independently for each quantizer in the
filter structure [25].
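The additive-noise model is easy to check empirically. In this sketch (our code; the choice of b and of a uniform test signal is arbitrary), a signal is truncated with b = 10 fractional bits and the measured error variance is compared with $2^{-2b}/12$:

```python
import numpy as np

# Empirical check of the white-noise quantizer model: with b fractional
# bits the quantization step is 2**-b and the truncation error is
# uniform on (-step, 0], giving variance step**2 / 12.
b = 10
step = 2.0 ** -b
rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 200_000)
err = np.floor(x / step) * step - x    # truncation error of each sample
print(err.var(), step * step / 12.0)   # the two values nearly agree
```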
It can be shown that, with an Nth-order purely recursive direct form section, the optimal coefficients for
Nth-order EF are directly the recursive coefficients (for
the second-order case, see [24], [50]). Whenever the error
transfer function from the quantizer to the filter output is
not purely recursive (e.g., with direct form cascade structures) or when the order of the EF quantizer is lower than
that of the filter section, this is not the case.
Let us consider the Nth-order error feedback quantizer shown in Fig. 1. The EF network acts with the following z-domain transfer function:

$$B(z) = 1 + \beta_1 z^{-1} + \beta_2 z^{-2} + \cdots + \beta_N z^{-N}. \qquad (1)$$

Let the transfer function from the quantization point to the filter output be $G(z)$, excluding the effect of error feedback. In general, $G(z)$ is a recursive transfer function of a linear, time-invariant, causal, and stable system of order usually higher than $N$. Hence, $G(z)$ is of the form $N(z)/D(z)$, where $N(z)$ and $D(z)$ are polynomials of $z^{-1}$. Here it is assumed that both $G(z)$ and $B(z)$ have real coefficients. The overall error transfer function is then expressed as

$$E(z) = B(z)G(z). \qquad (2)$$

The normalized noise variance (noise gain) at the filter output is obtained with the following integral [31]:

$$I = \frac{1}{\pi} \int_0^{\pi} |E(e^{j\omega})|^2 \, d\omega \qquad (3)$$

or, equivalently [27],

$$I = \frac{1}{\pi} \int_0^{\pi} |B(e^{j\omega})|^2 \, |G(e^{j\omega})|^2 \, d\omega. \qquad (4)$$

Now it is desired to find the coefficients of $B(z)$ so that this integral is minimized. Clearly, this is an LMS-type minimization problem that can be approached analytically. For convenience, let us denote

$$Q(\omega) = |G(e^{j\omega})|^2. \qquad (5)$$

This quantity is the normalized power spectrum of the error at the filter output. Proceeding as in [25], the integral (4) can be elaborated into the following form:

$$I = \sum_{k=1}^{N} \sum_{l=1}^{N} \beta_k \beta_l \left[ \frac{1}{\pi} \int_0^{\pi} \cos{(k-l)\omega} \, Q(\omega) \, d\omega \right] + 2 \sum_{k=1}^{N} \beta_k \left[ \frac{1}{\pi} \int_0^{\pi} \cos{k\omega} \, Q(\omega) \, d\omega \right] + \frac{1}{\pi} \int_0^{\pi} Q(\omega) \, d\omega. \qquad (6)$$

The expressions in square brackets are identified as Fourier coefficients of the normalized power spectrum, that is, as autocorrelation coefficients of the output error signal. Denoting these as

$$q_k = \frac{1}{\pi} \int_0^{\pi} \cos{k\omega} \, Q(\omega) \, d\omega \qquad (7a)$$

and observing that the autocorrelation sequence is symmetric ($q_{-k} = q_k$), the integral can be expressed as

$$I = \sum_{k=1}^{N} \sum_{l=1}^{N} \beta_k \beta_l q_{|k-l|} + 2 \sum_{k=1}^{N} \beta_k q_k + q_0. \qquad (8)$$

This can be expressed as a quadratic form

$$I = \mathbf{w}^T \mathbf{R} \mathbf{w} + 2\mathbf{p}^T \mathbf{w} + q_0 \qquad (9)$$

where superscript $T$ denotes transposition and the matrices and vectors are

$$\mathbf{w} = (\beta_1 \; \beta_2 \; \cdots \; \beta_N)^T \qquad (10a)$$
$$\mathbf{R} = [q_{|k-l|}], \quad k, l = 1, \ldots, N \qquad (10b)$$
$$\mathbf{p} = (q_1 \; q_2 \; \cdots \; q_N)^T. \qquad (10c)$$

The matrix $\mathbf{R}$ is recognized as the $N \times N$ autocorrelation matrix of the output error, which is known to have a symmetric Toeplitz structure [23]. The vector $\mathbf{p}$ is the cross-correlation vector between the input and output error. The optimal solution is found by setting the derivatives with respect to the EF coefficients to zero, yielding the optimal coefficient vector $\mathbf{w}^*$ as

$$\mathbf{w}^* = -\mathbf{R}^{-1}\mathbf{p}. \qquad (11)$$
This normal equation can be interpreted as a Wiener-Hopf equation [66], [23], thus demonstrating that optimal error feedback can be interpreted as an application of Wiener filtering theory. More precisely, (11) is exactly of the form of the Yule-Walker equation (see, e.g., [28]), which means that B(z) can be viewed as an optimal inverse or whitening filter for the given G(z) in the LMS sense. In other words, the inverse of the optimal B(z) can be interpreted as an optimal AR model for the error power spectrum. The corresponding equations have also been derived in the context of inverse filtering of speech [44] and of noise shaping in predictive coding of speech [43].

Since the matrix R is guaranteed to be positive definite (see, e.g., [28], [23]), the optimal solution is unique and always exists. Once the autocorrelation coefficients are known, the linear system of equations (11) is most efficiently solved using the Levinson-Durbin recursion (see, e.g., [23]), which takes advantage of the symmetric Toeplitz structure of the autocorrelation matrix.
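A minimal sketch of this solution step (our implementation; the input vector q holds the autocorrelation coefficients q_0, ..., q_N):

```python
import numpy as np

def optimal_ef(q):
    """Solve the Yule-Walker system (11), w* = -R^{-1} p, with the
    Levinson-Durbin recursion, exploiting the symmetric Toeplitz
    structure of R.  Returns the optimal EF coefficients beta_1..beta_N."""
    N = len(q) - 1
    a = np.zeros(N + 1)
    a[0] = 1.0
    err = q[0]                                 # prediction error power
    for m in range(1, N + 1):
        k = -np.dot(a[:m], q[m:0:-1]) / err    # reflection coefficient
        a[1:m + 1] = a[1:m + 1] + k * a[m - 1::-1]
        err *= (1.0 - k * k)
    return a[1:]                               # w* = (beta_1, ..., beta_N)
```

As a sanity check, for a purely recursive G(z) = 1/D(z) of order N the order-N solution returns the coefficients of D(z) itself, in line with the observation made above for purely recursive direct form sections.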
In the present problem, the autocorrelation coefficients
$q_k$ depend only on the given rational transfer function $G(z)$ and can thus be determined exactly. However, the task is not trivial and causes the main computational burden in determining the optimal EF solution. Note that in order to solve for the optimal coefficients of an Nth-order EF quantizer, the calculation of $(N + 1)$ terms of the autocorrelation function is required.

In [25], the autocorrelation coefficients were determined by approximating (7a) by numerical integration, which is computationally rather intensive. Another approach is to use the z-domain version of (7a), which gives the autocorrelation coefficients via the inverse z transform as

$$q_k = \frac{1}{2\pi j} \oint G(z) \, G(z^{-1}) \, z^{k-1} \, dz. \qquad (7b)$$

Several exact algorithms for this integral that operate directly in the z-domain have been proposed in the literature [26], [19], [18], [33]. We have found yet another algorithm which is believed to be new. Due to the symmetry of the autocorrelation sequence ($q_{-k} = q_k$), the integral (7b) can as well be given in the form

$$q_k = \frac{1}{2\pi j} \oint G(z) \, G(z^{-1}) \, \frac{z^{k} + z^{-k}}{2} \, \frac{dz}{z}. \qquad (12)$$
By executing spectral factorization of $(z^k + z^{-k})/2$, i.e., expressing it as a product of a polynomial in $z^{-1}$ and the same polynomial in $z$, the integral can be expressed as a total square integral [31]. The details are presented in the Appendix. Many algorithms have been proposed for the total square integral in the literature [31], [72], [32], [46], [54]. According to our experiments [34], the algorithm due to Åström et al. [72] is the most efficient. The relationship of Åström's algorithm to the Levinson recursion has been discussed in [11].

It might also be interesting to optimize the EF coefficients with respect to other criteria, e.g., to make the output noise spectrum as flat as possible in the minimax sense. This is equal to determining $B(z)$ so that $|B(e^{j\omega}) G(e^{j\omega})|^2$ is minimized in the Chebyshev sense. The solution for $B(z)$ can be obtained by using the standard Parks-McClellan algorithm [53] with slight modifications, as pointed out by Diniz [16]. However, in this work we concentrate on the minimization in the LMS sense only.

III. ERROR FEEDBACK WITH SYMMETRIC OR ANTISYMMETRIC COEFFICIENTS

In practice, the hardware or software implementation of Nth-order optimal error feedback is often too costly due to the N explicit multiplications required. One way to reduce the number of multiplications is to constrain $B(z)$ to be symmetric or antisymmetric, which halves the number of required multiplications. The symmetry requirements constrain the zeros of the filter to lie (in most cases) exactly on the unit circle, as is well known from the exact linear-phase design of FIR filters [52].

The approach is well motivated, since in practice the error transfer function $G(z)$ often has poles not far from the unit circle, so that the optimal (unconstrained) inverse filter $B(z)$ has zeros very close to the unit circle.

In [24], the solution for second-order symmetric EF was derived. In [38], some low-order solutions were presented. Essentially the same problem was addressed in [20], where the ideal solution for an arbitrary odd-order (even-length!) linear-phase adaptive filter with symmetric coefficients was derived. Let us denote the odd filter order $N = 2M + 1$. In our notation, the optimal solution can be expressed as

$$\mathbf{w}^*_{so} = -[\mathbf{R}_o + \mathbf{R}_e]^{-1}[\mathbf{p}_o + \mathbf{p}_e] \qquad (13)$$

where the optimal coefficients of the symmetric odd-order solution are given by the M-length vector ($M = (N - 1)/2$)

$$\mathbf{w}_{so} = (\beta_1 \; \beta_2 \; \cdots \; \beta_M)^T \qquad (14a)$$

and the involved $M \times M$ matrices and M-length vectors are defined as

$$\mathbf{R}_o = [q_{|k-l|}], \quad \mathbf{R}_e = [q_{N-k-l}], \quad k, l = 1, \ldots, M \qquad (14b)$$
$$\mathbf{p}_o = (q_1 \; \cdots \; q_M)^T, \quad \mathbf{p}_e = (q_{N-1} \; \cdots \; q_{N-M})^T. \qquad (14c)$$

The matrix $\mathbf{R}_e$ has a Hankel structure, i.e., the elements on cross diagonals are equal [45]. Hence, the matrix $[\mathbf{R}_o + \mathbf{R}_e]$ to be inverted no longer has a Toeplitz structure. However, it is still guaranteed to be positive definite, since it can be interpreted as the autocorrelation matrix of a joint nonstationary process [20]. Efficient algorithms for the solution of systems of equations with this kind of Toeplitz-plus-Hankel structure have been proposed in [49].

It is easily found that, using the same notation, the corresponding antisymmetric odd-order solution is obtained as

$$\mathbf{w}^*_{ao} = -[\mathbf{R}_o - \mathbf{R}_e]^{-1}[\mathbf{p}_o - \mathbf{p}_e]. \qquad (15)$$
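Under our reading of (13)-(15), with R_o Toeplitz, R_e Hankel with entries q_{N-k-l}, p_o = (q_1, ..., q_M)^T and p_e = (q_{N-1}, ..., q_{N-M})^T, the constrained solution can be sketched as follows (the index conventions here are our reconstruction of the garbled original and should be treated as such):

```python
import numpy as np

def constrained_ef(q, N, anti=False):
    """Symmetric/antisymmetric EF of odd order N = 2M + 1 per our
    reading of (13)-(15): R_o is Toeplitz with entries q_|k-l|, R_e is
    Hankel with entries q_{N-k-l}, and the sign selects the symmetric
    (+) or antisymmetric (-) solution."""
    M = (N - 1) // 2
    k = np.arange(1, M + 1)
    Ro = q[np.abs(k[:, None] - k[None, :])]
    Re = q[N - k[:, None] - k[None, :]]
    po = q[k]
    pe = q[N - k]
    s = -1.0 if anti else 1.0
    return -np.linalg.solve(Ro + s * Re, po + s * pe)
```

For N = 3 this reduces to the closed forms listed in Table I: beta_1 = -(q_1 + q_2)/(q_0 + q_1) in the symmetric case and beta_1 = -(q_1 - q_2)/(q_0 - q_1) in the antisymmetric case.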
Note that in both cases (13) and (15) the coefficients occur pairwise with equal values. When the filter order is even, say $N = 2L$ (i.e., the filter length is odd), there
is an odd middle value. This slightly complicates the solution, which we have found to be

$$\mathbf{w}^*_{s\pm} = -[\mathbf{R}_o \pm \mathbf{R}_e]^{-1}[\mathbf{p}_o \pm \mathbf{p}_e] \qquad (16)$$

where the optimal coefficients of the symmetric or antisymmetric even-order solution are given by the L-length vector

$$\mathbf{w}_{s\pm} = (\beta_1 \; \beta_2 \; \cdots \; \beta_L)^T \qquad (17)$$

where $L = N/2$ and the matrices and vectors are defined as in the odd-order case with $k, l = 1, \ldots, L$, except that the row and column corresponding to the unpaired middle coefficient $\beta_L$ carry different weights. The positive sign gives the symmetric solution and the negative sign gives the antisymmetric solution. Note that with the antisymmetric even-order solution the middle coefficient always vanishes, i.e., $\beta_L = 0$, so that the $(L - 1)$-length solution vector is obtained directly from the reduced equation.

Symmetric and antisymmetric solutions of orders 1 to 4 are collected in Table I. It is observed that the first-order solutions have no free parameters; they set a fixed real zero at $z = \mp 1$, thus being suitable for narrow-band low-pass or high-pass filters when only moderate noise reduction is required. The second- through fourth-order solutions contain at most two free parameters which control the locations of the complex-conjugate zeros, thus offering more efficient noise reduction capabilities.

TABLE I
SUBOPTIMAL SYMMETRIC AND ANTISYMMETRIC EF COEFFICIENTS FOR FILTER ORDERS 1 TO 4

N = 1:  symmetric  $1 + z^{-1}$;
        antisymmetric  $1 - z^{-1}$
N = 2:  symmetric  $1 + \beta_1 z^{-1} + z^{-2}$  with  $\beta_1 = -2q_1/q_0$;
        antisymmetric  $1 - z^{-2}$
N = 3:  symmetric  $1 + \beta_1 z^{-1} + \beta_1 z^{-2} + z^{-3}$  with  $\beta_1 = -(q_1 + q_2)/(q_0 + q_1)$;
        antisymmetric  $1 + \beta_1 z^{-1} - \beta_1 z^{-2} - z^{-3}$  with  $\beta_1 = -(q_1 - q_2)/(q_0 - q_1)$
N = 4:  symmetric  $1 + \beta_1 z^{-1} + \beta_2 z^{-2} + \beta_1 z^{-3} + z^{-4}$  with
        $\beta_1 = [2q_1 q_2 - q_0(q_1 + q_3)] / [q_0(q_0 + q_2) - 2q_1^2]$,
        $\beta_2 = 2[q_1(q_1 + q_3) - q_2(q_0 + q_2)] / [q_0(q_0 + q_2) - 2q_1^2]$;
        antisymmetric  $1 + \beta_1 z^{-1} - \beta_1 z^{-3} - z^{-4}$  with  $\beta_1 = -(q_1 - q_3)/(q_0 - q_2)$

Let us illustrate the use of the proposed optimal and suboptimal symmetric/antisymmetric EF schemes. We choose a fourth-order purely recursive transfer function which contains the poles of a fourth-order elliptic low-pass filter. It is of the form $1/D(z)$, where

$$D(z) = (1 - 1.773152 z^{-1} + 0.801564 z^{-2})(1 - 1.833400 z^{-1} + 0.927062 z^{-2})$$
$$= 1 - 3.606552 z^{-1} + 4.979522 z^{-2} - 3.113409 z^{-3} + 0.743099 z^{-4}. \qquad (20)$$

Assuming signal quantization after the accumulation of products, the noise gain (3) of this filter in direct form implementation without error feedback is 43.51 dB. The noise gain figures when first- to fourth-order optimal, symmetric, and antisymmetric EF's are applied are collected in Table II. The corresponding EF coefficients are also given.

From the data of optimal error feedback, it is observed that increasing the order of the EF polynomial reduces the noise gain up to the order 4, when the solution $B(z) = D(z)$ is achieved, as expected from Section II. In this case, complete noise cancellation occurs (the noise gain equals unity, or 0 dB). Naturally, this solution is also obtained when trying to solve a higher order EF from (11). This solution can also be interpreted as a double-precision implementation of the filter [24], [50].

However, it is to be noted that when the noise transfer function $G(z)$ is not purely recursive (i.e., with cascaded direct form structures), complete cancellation is no longer possible, and the solution only asymptotically approaches the 0 dB level when the EF order is increased.

The corresponding error spectra are shown in Fig. 2, which clearly illustrates the zeros induced by the optimal error feedback.

Unlike the unconstrained solutions, the symmetric and antisymmetric EF polynomials do not offer monotonic
TABLE II
THE NOISE GAINS AND THE CORRESPONDING OPTIMAL AND SUBOPTIMAL SYMMETRIC AND ANTISYMMETRIC EF COEFFICIENTS APPLIED TO THE SAMPLE FILTER OF (20)

                       Optimal      Symmetric    Antisymmetric
1-EF  Noise (dB)       30.25        49.48        30.30
      beta_1           -0.9761      1            -1
2-EF  Noise (dB)       15.47        15.51        36.23
      beta_1           -1.9358      -1.9522      0
      beta_2           0.9832       1            -1
3-EF  Noise (dB)       3.49         21.42        3.56
      beta_1           -2.8874      -0.9526      -2.9190
      beta_2           2.8567       -0.9526      2.9190
      beta_3           -0.9678      1            -1
4-EF  Noise (dB)       0.00         0.60         8.91
      beta_1           -3.6066      -3.8552      -1.9196
      beta_2           4.9795       5.7134       0
      beta_3           -3.1134      -3.8552      1.9196
      beta_4           0.7431       1            -1

Fig. 2. Error power spectra of the direct form implementation of the sample filter of (20) with zero- to fourth-order optimal error feedback. Solid line: even-order EF. Dotted line: odd-order EF.

Fig. 3. Error power spectra of the direct form implementation of the sample filter of (20) with zero- to fourth-order suboptimal symmetric error feedback. Solid line: even-order EF. Dotted line: odd-order EF.

Fig. 4. Error power spectra of the direct form implementation of the sample filter of (20) with zero- to fourth-order suboptimal antisymmetric error feedback. Solid line: even-order EF. Dotted line: odd-order EF.
noise reduction when the order is increased. This is due to the fixed zeros inherent in the solutions: as is known from linear-phase FIR design, odd-order symmetric and antisymmetric polynomials always have a zero at $z = \pm 1$. Therefore a careful choice has to be made between the symmetric and antisymmetric solutions on one hand, and between odd and even order on the other. This is also illustrated in the corresponding spectra, which are shown in Figs. 3 and 4.

Otherwise the example shows that, when the better of the symmetric and antisymmetric solutions is chosen, the noise reduction is at most 0.60 dB worse than with the optimal EF of the same order. The corresponding coefficients are also very close to each other. The zeros of the optimal EF polynomial are seen to be very close to the unit circle even though the pole radii of the filter are not particularly critical (in this case $r_1 = 0.90$ and $r_2 = 0.96$).
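The optimal-EF figures above can be checked numerically. The following sketch (our code; a long truncated impulse response stands in for the exact z-domain integrals) computes the autocorrelation coefficients of G(z) = 1/D(z) for the D(z) of (20) and the resulting noise gains for EF orders 0 to 4:

```python
import numpy as np

d = np.array([1.0, -3.606552, 4.979522, -3.113409, 0.743099])  # D(z) of (20)

# impulse response of G(z) = 1/D(z) (truncated; poles well inside |z| = 1)
h = np.zeros(6000)
for n in range(len(h)):
    h[n] = (1.0 if n == 0 else 0.0) - sum(
        d[i] * h[n - i] for i in range(1, 5) if n >= i)

# autocorrelation coefficients q_0..q_5 of the error impulse response
q = np.array([h[:len(h) - k] @ h[k:] for k in range(6)])

def noise_gain_db(beta):
    """Noise gain (3): energy of the error impulse response of B(z)G(z)."""
    e = np.convolve(np.concatenate(([1.0], beta)), h)
    return 10.0 * np.log10(e @ e)

gains = []
beta = np.array([])
for N in range(5):
    if N > 0:                            # optimal order-N EF from (11)
        idx = np.arange(1, N + 1)
        R = q[np.abs(idx[:, None] - idx[None, :])]
        beta = -np.linalg.solve(R, q[1:N + 1])
    gains.append(noise_gain_db(beta))
print([round(g, 2) for g in gains])
```

The printed gains should reproduce the optimal column of Table II (43.51, 30.25, 15.47, 3.49, and 0.00 dB), with the order-4 coefficients equal to those of D(z), i.e., complete cancellation.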
IV. ERROR FEEDBACK WITH POWER-OF-TWO COEFFICIENTS
The implementation of error feedback is often most efficient if explicit multiplications are not needed at all.
For example, if the coefficients are quantized to powers
of 2 (or to a sum of powers of 2 with only few terms),
only additions or subtractions with shift are needed for
implementation. This is usually advantageous, e.g., in
signal processor applications [35]. In the following we
discuss some methods to find these coefficients so that the
output error power is minimized.
A. Direct Rounding of Optimal Coefficients
First- and second-order EF have been shown to retain most of their power when the coefficients are simply rounded off to the nearest powers of 2 [25]. However, with higher order EF this is not the case. Below it is shown that rounding the optimal coefficients of third- or fourth-order EF directly to the nearest powers of 2 can make it practically useless. Hence, either more bits should be allocated to the coefficients or, preferably, more sophisticated quantization schemes should be considered.
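Direct rounding can be sketched as follows (an illustrative helper of ours; "nearest" is taken on the linear scale):

```python
import numpy as np

def round_pow2(c):
    """Round c to the nearest signed power of two (0 stays 0).
    Illustrative helper for the direct-rounding scheme."""
    if c == 0.0:
        return 0.0
    lo = 2.0 ** np.floor(np.log2(abs(c)))      # power of 2 just below |c|
    hi = 2.0 * lo                              # power of 2 just above |c|
    mag = lo if abs(c) - lo <= hi - abs(c) else hi
    return float(np.copysign(mag, c))
```

Applied to the fourth-order optimal coefficients of the sample filter, (-3.6066, 4.9795, -3.1134, 0.7431), this gives (-4, 4, -4, 0.5), the directly rounded values listed in Table III.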
TABLE III
THE NOISE GAINS AND THE CORRESPONDING OPTIMAL AND POWER-OF-TWO EF COEFFICIENTS OBTAINED BY DIRECT ROUNDING OR BY THE SEARCH ALGORITHM, FOR THE SAMPLE FILTER OF (20)
B. Discrete Optimization

Obviously, by direct rounding of the optimal coefficients one cannot even guarantee a satisfactory solution, let alone the global minimum-noise solution in the desired power-of-two coefficient grid. A standard tool for this kind of discrete optimization problem is dynamic programming [6], which typically involves constructing a search algorithm where the parameter space is constrained to keep the number of combinations to be tested within reasonable limits. Often the global optimum cannot be guaranteed, but at least a suboptimal solution is obtained in finite time.

Design of FIR and IIR filters with finite-wordlength coefficients has been addressed extensively in recent years (see, e.g., [42], [5], [30]). Instead of constructing sophisticated and fast discrete optimization algorithms, we tried some simple schemes that gave satisfactory results within reasonable time. Rather than on how the discrete (sub)optimal solution is found, we want to focus on what kind of performance can be achieved with error feedback having power-of-two coefficients.
Let us denote by an N-b power-of-two coefficient the case where the coefficient is represented as a sum of N terms, each assuming a power-of-two value with sign. When this kind of coding uses the minimum number of terms for the representation of a conventional two's complement number, it is called canonic signed-digit (CSD) code. According to [57], a CSD code is guaranteed when there are no successive powers of 2 in the number representation.

Here we are interested in EF filters of orders 1 to 4. We used simple search algorithms that check some values close to the continuous optimum, one for 1-b quantization and the other for 2-b quantization. In the algorithm for 1-b quantization, the quantized coefficient was allowed to take the following values:

$$\beta_q = \lfloor \beta^* \rfloor \cdot A \cdot 2^{b_1}, \quad A = -1, 0, 1; \quad b_1 = -1, 0, 1, 2 \qquad (21)$$

where the floor operation denotes magnitude truncation down to the nearest power of 2. Thus, four values in the power-of-two grid around an optimal coefficient $\beta^*$ (obtained from (11)) are checked. Additionally, the integer factor $A$ is introduced to include the value 0 in the set and also the values with the opposite sign. The algorithm is used separately for each coefficient to be optimized and all the possible combinations are examined.

If it is desired to use two or more bits for all or some of the coefficients, one can apply a similar algorithm for the next bit. By always quantizing the residue, a canonic code is guaranteed. The algorithm for 2-b quantization is as follows:

$$\beta_q = \lfloor \beta^* \rfloor \cdot 2^{b_1} \cdot (1 + A \cdot 2^{b_2}), \quad b_1 = 0, 1; \quad b_2 = -2, \ldots, -4; \quad A = -1, 0, 1. \qquad (22)$$

Similar algorithms are easily constructed for quantization to 3 or more bits. With a large number of bits it may be sufficient to use a search routine only for some of the lower digits.

The algorithms were used to design power-of-two EF of orders 2 to 4 for the sample filter (20). The results are collected in Table III.

                 Optimal      Direct Rounding                 Search Algorithm
                              1 b      2 b       3 b          1 b      2 b
2-EF Noise (dB)  15.47        19.38    15.47     15.47        19.38    16.37
     beta_1      -1.9358      -2       -1.9375   -1.9375      -2       -1.875
     beta_2      0.9832       1        0.9844    0.9844       1        0.9375
3-EF Noise (dB)  3.49         29.58    15.92     13.40        19.38    9.68
     beta_1      -2.8874      -2       -3        -2.8750      -2       -3
     beta_2      2.8567       2        3         2.8750       1        3
     beta_3      -0.9678      -1       -0.9688   -0.9688      0        -1
4-EF Noise (dB)  0.00         51.32    31.17     9.48         15.44    3.48
     beta_1      -3.6066      -4       -3.5      -3.6250      -2       -4
     beta_2      4.9795       4        5         4.9844       0        6
     beta_3      -3.1134      -4       -3        -3.1250      2        -4
     beta_4      0.7431       0.5      0.75      0.7500       -1       1
                                                              (1*)     (2*)

It is observed that direct rounding works quite well with second-order EF, but no longer with third- or fourth-order EF. When the parameters of fourth-order EF are rounded off to the nearest powers of 2, the resulting noise gain is seen to be even higher than without any error feedback!

With second-order EF the results of the search algorithm are essentially the same as with direct rounding (the slightly worse performance is due to the finite range of powers of two used), whereas with third- and fourth-order EF the search algorithms gave dramatically improved results. Although the best third-order 1-b EF is actually of second order, the search algorithm was able to find it. Most impressive are the results with fourth-order EF, where the optima found with the 1- and 2-b search algorithms were 35 and 28 dB less noisy, respectively, than those obtained by direct rounding.

Interestingly, the polynomials (1*) and (2*) found by the search algorithms are $(1 + z^{-1})(1 - z^{-1})^3$ and $(1 - z^{-1})^4$, respectively. With 1-b rounding there seems to be quite a limited number of possible polynomials. This implies that instead of searching for a solution based on the continuous optimum (11), it may be profitable to use EF
TABLE IV
THE TEST SPECIFICATIONS
polynomials with coefficients constrained to integer values, like cyclotomic polynomials. This approach has been investigated in [17].
V. DESIGN EXAMPLES

For the sake of comparison, we consider the implementation of the following recursive digital filters:

LP10. A tenth-order narrow-band Chebyshev low-pass filter [27].
LP7. A seventh-order broad-band elliptic low-pass filter [36].
BP6. A sixth-order multiband bandpass filter [35], [36].
BS6. A sixth-order Butterworth bandstop filter [15].

The filters' specifications are given in Table IV.
The use of fourth-order direct form sections has been studied in [36], where it was found that second-order optimal EF is very efficient with this structure, too. Third- and fourth-order EF are naturally likely to be even better. The fourth-order sections were also found to be very efficient to implement with the current generation of signal processors.
Unfortunately, the direct form sections of orders higher than two have significantly worse overflow behavior than second-order sections. Mitra has shown that, even with saturation arithmetic, higher order direct form sections can sustain zero-input overflow oscillations [47]. However, it is possible to determine whether a given section implemented with saturation is free from overflow oscillations by using Mitra's criterion [48]. This criterion restricts the possible pole locations rather severely: only one of our test filters, the broad-band low-pass filter, passed Mitra's test.
The filters were implemented according to the following rules:
1) Direct form I (DF1) second- or fourth-order sections are used [52]. They are here referred to as 2-cascades and 4-cascades, respectively.
2) The L-norm scaling procedure is applied [27].
3) Signal quantization is performed by rounding after additions.
4) The quantization noise sources are assumed to be uncorrelated so that the standard noise model can be applied [27].
5) Second-order roundoff effects are ignored [25].
6) All possible section orderings and pole-zero pairings are tested and the ordering with the lowest noise gain (3) is chosen. However, the quantization of the EF coefficients is based on the minimum-noise ordering found with unquantized coefficients.
7) The implementations with no EF, with optimal EF (11), and with EF using 1-b and 2-b power-of-two coefficients were considered. The power-of-two coefficients were found using the described algorithms (21) and (22).
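Rule 6) amounts to an exhaustive search over section orderings. A minimal sketch of that search follows; the noise proxy in the usage example is a hypothetical stand-in for the noise gain (3), and a full search would enumerate pole-zero pairings as well.

```python
import itertools

def best_ordering(sections, noise_proxy):
    """Test all orderings of the cascade sections and return the one with
    the smallest value of the given noise measure."""
    return min(itertools.permutations(sections), key=noise_proxy)

# Toy proxy: a section with poles close to the unit circle (large radius)
# is assumed to contribute more noise the earlier it appears in the cascade.
radii = [0.98, 0.5, 0.9]
proxy = lambda perm: sum(r * (len(perm) - i) for i, r in enumerate(perm))
order = best_ordering(radii, proxy)
```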
[Table IV: passband specifications A_p (dB), ω_p (π rad) and stopband specifications A_s (dB), ω_s (π rad) for the test filters LP10, LP7, BP6, and BS6.]

We decided to examine only the optimal EF for reference and the power-of-two EF for implementation since, according to our experience, the latter is the most suitable way to implement the EF, e.g., with a signal processor [35]. In the current-generation signal processors the coefficient symmetry is difficult to utilize, and thus the proposed symmetric and antisymmetric solutions do not offer any real advantage over the optimal solution.
With the implementations of the low-pass filter (LP7)
Mitra's test [48] was used to form overflow-stable fourth-order sections. Those 4-cascade implementations are thus
guaranteed to be free from zero-input overflow oscillations when saturation arithmetic is used, whereas the other
4-cascade implementations are not.
The noise figures for DF1 2-cascade implementations
of the test filters are shown in Table V. It is observed that
with 2-cascades, increasing the EF order above 2 can no
longer reduce the noise much. This comes from the fact
that, with a good section-ordering and pole-zero pairing,
the noise transfer functions from each quantizer to the filter output are usually very close to second-order all-pole
transfer functions that can be compensated for with a second-order all-zero EF. Instead of increasing the EF order
it seems to be more rewarding to increase the wordlength
of the EF coefficients.
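The observation that a second-order all-zero EF can essentially cancel a second-order all-pole noise transfer function can be checked numerically. The sketch below approximates the noise gain by the energy of a truncated impulse response; the denominator coefficients are illustrative, not taken from the test filters.

```python
import numpy as np

def noise_gain_db(b, a, n=4000):
    """Noise gain of the noise transfer function B(z)/A(z) (a[0] = 1),
    approximated by the energy of a truncated impulse response."""
    h = np.zeros(n)
    x = np.zeros(n)
    x[0] = 1.0
    for k in range(n):
        acc = sum(b[j] * x[k - j] for j in range(len(b)) if k >= j)
        acc -= sum(a[j] * h[k - j] for j in range(1, len(a)) if k >= j)
        h[k] = acc
    return 10 * np.log10(np.sum(h ** 2))

# Illustrative second-order all-pole noise transfer function 1/A(z)
# with poles close to the unit circle:
a = [1.0, -1.8, 0.97]
g_no_ef = noise_gain_db([1.0], a)   # without EF: strongly amplified noise
g_ef = noise_gain_db(a, a)          # EF zeros placed on the poles: 0 dB
```

Choosing the EF polynomial equal to the denominator makes the noise transfer function A(z)/A(z) = 1, so the roundoff noise passes to the output unamplified.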
With the 4-cascades (Table VI) the high-order schemes
are more useful. When two bits are used for the EF coefficients, the 4-cascade implementations with 4-EF are all
less noisy than the 2-cascades with 2-EF. With 4-cascades there are only half as many quantizers as in the 2-cascades, and thus a more efficient implementation can be obtained (for the implementation with signal processors, see [36]).
However, since the 4-cascade implementations are not
guaranteed to be free from overflow oscillations (except
LP7), a more conservative scaling policy may have to be
applied which reduces the dynamic range of the filter.
It was noticed that very often the optimal third-order
EF with 1-b coefficients was actually of second order. Due to the symmetry of the bandstop filter (BS6), the odd-order optimal EF coefficients were zero and hence the second- and third-order EF coefficients were equal.
It was also observed that error feedback cannot compensate for bad section-ordering and pole-zero pairing.
The implementation with the fourth-order optimal EF and
nonoptimal ordering can well be much noisier than the
optimal ordering with no EF. Hence, it is strongly recommended that all the orderings be checked if possible.
TABLE V
THE NOISE GAINS FOR THE TEST FILTER IMPLEMENTATIONS WITH SECOND-, THIRD-, AND FOURTH-ORDER EF. CASCADE OF DIRECT FORM I SECOND-ORDER SECTIONS

TABLE VI
THE NOISE GAINS FOR THE TEST FILTER IMPLEMENTATIONS WITH SECOND-, THIRD-, AND FOURTH-ORDER EF. CASCADE OF DIRECT FORM I FOURTH-ORDER SECTIONS

[Each table lists, for the test filters LP10, LP7, BP6, and BS6, the noise gain (dB) with no EF and with 2-EF, 3-EF, and 4-EF, for optimal, 1-b power-of-two, and 2-b power-of-two EF coefficients.]
With high-order filters, heuristic search algorithms like the ones proposed in [55] or [25] can be used.
However, when a reasonable ordering and a sophisticated quantization scheme are utilized, even a simple second-order EF with 1-b coefficients typically results in a large reduction in roundoff noise. Our opinion is that this works with all basic filter types, even with bandstop filters, provided that they are noisy enough without the EF.
In [25] it was found that the EF was not able to reduce the noise of a bandstop filter. Our feeling is that this was partly because the authors did not check all the possible orderings and partly because they used direct form II sections, where the recursive part (the poles) of the section is before the nonrecursive part (the zeros). According to [27], the DF1 sections are better than DF2 sections, especially for the implementation of bandstop filters. This was also verified in [36].
In general, it can be stated that the noisier the original implementation, the more the use of the error feedback helps. With only moderately noisy filters, as with BS6 of our examples, the noise reduction obtained by error feedback may be too small to compensate for the implementation costs.
VI. IMPLEMENTATION ISSUES
The DSP application sets the constraints to be met as to the roundoff noise and limit cycle behavior. On the other hand, the application also dictates the implementation resources, so that there is usually a tradeoff between the finite-wordlength performance and the implementation costs.
When a signal processor is used for implementation, most parameters, like data and coefficient wordlengths, are fixed and must be taken as such. The most critical parameter is usually the maximum achievable sampling frequency, which depends on the length of the program code.
Due to its simplicity, the cascade of second-order direct form sections is very efficient to use in a signal processor in terms of the code length [35]. With narrow-band filters, whose poles are close to the unit circle, the structure tends to be rather noisy. A very cost-effective solution is to apply error feedback to only one or two of the noisiest sections, which may easily reduce the noise power by 20-30 dB at the cost of a few additional instructions in the filter code [35]. The amplitude of possible limit cycles typically goes down with the roundoff noise, too [7], [41].
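The effect of applying EF to a noisy section can be illustrated with a small simulation. This is a sketch under stated assumptions: only the all-pole part of a second-order section is modeled, the quantizer rounds to a fixed grid, and the EF coefficients are set equal to the denominator coefficients (the optimal choice for an all-pole noise transfer function); the pole positions and wordlength are illustrative.

```python
import numpy as np

def allpole_section(x, a1, a2, step, ef=False):
    """All-pole part of a second-order DF1 section with output rounding to
    a grid of size `step`. With ef=True, the quantization errors of the two
    previous samples are fed back weighted by the denominator coefficients,
    which makes the noise transfer function A(z)/A(z) = 1."""
    y = np.zeros(len(x))
    e1 = e2 = 0.0
    for n in range(len(x)):
        v = x[n]
        if n >= 1:
            v -= a1 * y[n - 1]
        if n >= 2:
            v -= a2 * y[n - 2]
        if ef:
            v += a1 * e1 + a2 * e2
        yq = step * round(v / step)          # roundoff quantizer
        e1, e2 = yq - v, e1                  # update the stored errors
        y[n] = yq
    return y

rng = np.random.default_rng(0)
x = 0.1 * rng.standard_normal(20000)
a1, a2 = -1.8, 0.97                          # poles close to the unit circle
q = 2.0 ** -10
ref = allpole_section(x, a1, a2, step=1e-12)  # near-ideal reference
noise_plain = np.var(allpole_section(x, a1, a2, q) - ref)
noise_ef = np.var(allpole_section(x, a1, a2, q, ef=True) - ref)
```

With these poles the roundoff noise without EF is amplified by roughly two orders of magnitude, while with EF it stays at the quantizer level, mirroring the 20-30 dB reductions reported above.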
In the current signal processors the EF is not supported but must be implemented with additional instructions. It would, though, be quite a simple enhancement of the ALU of a signal processor to provide the possibility of parallel computation of the EF, e.g., with 2-b power-of-two coefficients. In that case no time penalty would be caused by the use of EF.
In the case of VLSI implementations, there are far better possibilities to optimize the data wordlength of the different blocks. Then the crucial question is: which is more
cost effective, to increase the internal wordlength or to
use a shorter wordlength supported by error feedback?
Another free parameter is the wordlength of the error
feedback network.
In a recent work the VLSI implementation of recursive
filters with distributed arithmetic and error feedback using
the VHDL hardware description language was studied
[56]. The preliminary results indicate that the error feedback cannot save any silicon when the wordlengths are chosen such that equal roundoff noise performance is
achieved with both implementations. However, since the
error feedback introduces parallelism in the computations, it was found to result in a slightly shorter delay than
the pure distributed arithmetic implementation.
It is believed that this is typical of serial arithmetic implementations where the increase of internal wordlength
is cheap and simple. Similar results were also obtained in
an earlier implementation study using bit-serial LSI techniques [59].
VII. CONCLUSIONS
In this work, general formulas for optimal Nth-order
error feedback (error-spectrum shaping) were derived and
the relations to Wiener filter theory and AR modeling were
discussed. A fast and accurate algorithm to obtain the optimal coefficients using the total square integral formula
was presented. As alternatives for efficient implementation, suboptimal schemes with symmetric or antisymmetric coefficients were presented and the design of the error feedback quantizer with power-of-two coefficients was considered.
The efficiency of the EF schemes was examined by test
implementations of some standard filters. It was found that
the error feedback is a very powerful and versatile method
to cut down the quantization noise in any recursive filter
implemented as a cascade of second-order direct form I
sections. Second-order error feedback seems to be sufficient for standard cascade implementations, whereas the
new high-order schemes are attractive for use with high-order direct form sections.
APPENDIX
CALCULATION OF AUTOCORRELATION COEFFICIENTS USING THE TOTAL SQUARE INTEGRAL
The total square integral is defined as [31]

    I = (1/(2πj)) ∮ F(z) F(z^-1) z^-1 dz    (A1)
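By Parseval's relation, the total square integral (A1) equals the energy of the impulse response of F(z), which gives a quick numeric sanity check. The first-order example below, with an illustrative pole at alpha, has the closed form 1/(1 - alpha^2):

```python
# Truncated energy of the impulse response f[k] = alpha**k of
# F(z) = 1/(1 - alpha z^-1), compared with the closed-form integral.
alpha = 0.9
energy = sum((alpha ** k) ** 2 for k in range(2000))
closed_form = 1.0 / (1.0 - alpha ** 2)
```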
where the real-valued coefficients of F(z) are given as arguments, F(z) being the z transform of a stable linear shift-invariant system of the form

    F(z) = ( Σ_{i=0}^{N} b_i z^-i ) / ( Σ_{i=0}^{N} a_i z^-i )    (A2)
where some of the coefficients may take zero values, except a_0. Now it is desired to evaluate the integral (7b), i.e.,

    (1/(2πj)) ∮ G(z) G(z^-1) (z^k + z^-k)/2 z^-1 dz    (A3)

using some algorithm for the total square integral. In order to be able to do this, the spectral factorization of the term (z^k + z^-k)/2 is required, i.e., it must be expressed in the form C(z)C(z^-1) so that the argument for the algorithm is F(z) = C(z)G(z). With some algebra, it is easily derived that

    (z^k + z^-k)/2 = [(1/2)(1 + z^-k)][(1/2)(1 + z^k)] - [(1/2)(1 - z^-k)][(1/2)(1 - z^k)].    (A4)
Hence, the desired C(z) has complex coefficients and can be chosen as

    C(z) = C_re(z) + j C_im(z)    (A5)

where

    C_re(z) = (1/2)(1 + z^-k)    (A6a)
    C_im(z) = (1/2)(1 - z^-k).    (A6b)
Now the factor C(z)C(z^-1) can be expressed as

    C(z)C(z^-1) = C_re(z)C_re(z^-1) - C_im(z)C_im(z^-1)    (A7)

so that the integral (A3) can be split into a difference of two integrals as follows:

    I = I(F_re) - I(F_im)    (A8)

with real-coefficient argument functions

    F_re(z) = (1/2)(1 + z^-k) G(z)    (A9a)
    F_im(z) = (1/2)(1 - z^-k) G(z).    (A9b)
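The split (A8)-(A9) can be verified numerically by evaluating each total square integral as the energy of a truncated impulse response (Parseval again). The first-order G(z) below is illustrative; the difference I(F_re) - I(F_im) should equal the lag-k autocorrelation of the impulse response g of G(z).

```python
import numpy as np

# G(z) = 1/(1 - a z^-1) with an illustrative pole a and lag k.
a, k, n = 0.8, 3, 3000
g = a ** np.arange(n)                             # impulse response of G(z)
g_delayed = np.concatenate([np.zeros(k), g])[:n]  # impulse response of z^-k G(z)
f_re = 0.5 * (g + g_delayed)                      # F_re(z) = (1 + z^-k) G(z) / 2
f_im = 0.5 * (g - g_delayed)                      # F_im(z) = (1 - z^-k) G(z) / 2
split = np.sum(f_re ** 2) - np.sum(f_im ** 2)     # I(F_re) - I(F_im)
direct = np.sum(g[:n - k] * g[k:])                # autocorrelation at lag k
```

For this G(z) both quantities equal a^k / (1 - a^2) up to the (negligible) truncation error.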
When G(z) is given, the two integrals are readily calculated by using some of the well-known algorithms, e.g., those presented in [31] or [72]. In a recent comparison [34], the algorithm by Åström et al. [72] was found to be the fastest among several methods for the evaluation of the total square integral.
Several other algorithms for the direct evaluation of autocorrelation sequences in the z-domain have been proposed before, e.g., in [26], [19], [18], and [33]. In [33]
a method that is similar to Åström's algorithm was proposed. It is believed that our method compares favorably with the other ones, but a more detailed study would
be needed to verify that.
ACKNOWLEDGMENT
The authors are especially grateful to Prof. P. S. R. Diniz from the University of Rio de Janeiro for his detailed comments and suggestions on the manuscript. They also want to thank one of the reviewers for helping them to understand the relations between optimal error feedback and autoregressive modeling.

REFERENCES
[1] A. I. Abu-El-Haija and A. M. Peterson, "An approach to eliminate roundoff errors in digital filters," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Processing, Apr. 1978, pp. 75-78.
[2] A. I. Abu-El-Haija, "An approach to eliminate roundoff error in digital filters," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-27, pp. 195-198, Apr. 1979.
[3] A. I. Abu-El-Haija, "On limit cycle amplitudes in error feedback digital filters," in Proc. Int. Conf. Acoust., Speech, Signal Processing (ICASSP'81), Apr. 1981, pp. 1227-1230.
[4] C. W. Barnes, "Error feedback in normal realization of recursive digital filters," IEEE Trans. Circuits Syst., vol. CAS-28, pp. 72-75, Jan. 1981.
[5] N. Benvenuto, L. E. Franks, and F. S. Hill, Jr., "On the design of FIR filters with powers-of-two coefficients," IEEE Trans. Commun., vol. COM-32, pp. 1299-1307, Dec. 1984.
[6] D. M. Burley, Studies in Optimization. Norfolk, VA: Intertext, 1974.
[7] T. L. Chang, "Suppression of limit cycles in digital filters designed with one magnitude truncation quantizer," IEEE Trans. Circuits Syst., vol. CAS-28, pp. 107-111, Feb. 1981.
[8] T. L. Chang and C. A. White, "An error cancellation digital filter structure and its distributed arithmetic implementation," IEEE Trans. Circuits Syst., vol. CAS-28, pp. 339-342, Apr. 1981.
[9] T. L. Chang, "A unified analysis of roundoff noise reduction in digital filters," in Proc. Int. Conf. Acoust., Speech, Signal Processing (ICASSP'81), Apr. 1981, pp. 1209-1212.
[10] T. L. Chang, "On low-roundoff noise and low-sensitivity digital filter structures," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-29, pp. 1077-1080, Oct. 1981.
[11] J.-K. Chan, "Applications of Astrom-Gray-Markel recursions," Signal Processing, vol. 22, no. 2, pp. 179-185, Feb. 1991.
[12] T. A. C. M. Claasen and L. Kristiansson, "Improvement of overflow behaviour of 2nd-order digital filters by means of error feedback," Electron. Lett., vol. 10, pp. 240-241, June 13, 1974.
[13] J. Dattorro, "The implementation of recursive digital filters for high-fidelity audio," J. Audio Eng. Soc., vol. 36, no. 11, pp. 851-878, Nov. 1988.
[14] P. S. R. Diniz and A. Antoniou, "Low-sensitivity digital filter structures that are amenable to error-spectrum shaping," IEEE Trans. Circuits Syst., vol. CAS-32, pp. 1000-1007, Oct. 1985.
[15] P. S. R. Diniz and A. Antoniou, "Digital-filter structures based on the concept of the voltage-conversion generalized-immittance converter," Can. J. Elec. Comp. Eng., vol. 13, no. 3-4, pp. 90-98, July 1988.
[16] P. S. R. Diniz, private communication, July 1988.
[17] P. S. R. Diniz and T. I. Laakso, "Error feedback filters with zeroes on the unit circle employing simple FIR filters," to be published.
[18] J. P. Dugré and E. I. Jury, "A note on the evaluation of complex integrals using filtering interpretations," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-30, pp. 804-807, Oct. 1982.
[19] D. Dzung, "Generation of cross-covariance sequences," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-29, pp. 922-923, Aug. 1981.
[20] B. Friedlander, "Adaptive algorithms for FIR filters," in Adaptive Filters, C. F. N. Cowan and P. M. Grant, Eds. Englewood Cliffs, NJ: Prentice-Hall, 1985.
[21] B. Girod, H. Almer, L. Bengtsson, B. Christensson, and P. Weiss, "A subjective evaluation of noise-shaping quantization for adaptive intra-/interframe DPCM coding of color television signals," IEEE Trans. Commun., vol. 36, no. 3, pp. 332-346, Mar. 1988.
[22] R. M. Gray, "Oversampled sigma-delta modulation," IEEE Trans. Commun., vol. COM-35, pp. 481-489, May 1987.
[23] S. Haykin, Adaptive Filter Theory. Englewood Cliffs, NJ: Prentice-Hall, 1987.
[24] W. E. Higgins and D. C. Munson, "Noise reduction strategies for digital filters: Error spectrum shaping versus the optimal linear state-space formulation," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-30, pp. 963-973, Dec. 1982.
[25] W. E. Higgins and D. C. Munson, "Optimal and suboptimal error-spectrum shaping for cascade-form digital filters," IEEE Trans. Circuits Syst., vol. CAS-31, pp. 429-437, May 1984.
[26] S. Y. Hwang, "Solution of complex integrals using the Laurent expansion," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-26, pp. 263-265, June 1978.
[27] L. B. Jackson, "Roundoff analysis for fixed-point digital filters realized in cascade or parallel form," IEEE Trans. Audio Electroacoust., vol. AU-18, pp. 107-122, June 1970.
[28] L. B. Jackson, Digital Filters and Signal Processing. Massachusetts: Kluwer, 1986.
[29] N. S. Jayant and P. Noll, Digital Coding of Waveforms: Principles and Applications to Speech and Video. Englewood Cliffs, NJ: Prentice-Hall, 1984.
[30] Z. Jing and A. T. Fam, "New scheme for designing IIR filters with finite-wordlength coefficients," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-34, pp. 1335-1336, Oct. 1986.
[31] E. I. Jury, Theory and Application of the z-Transform Method. New York: Wiley, 1964.
[32] E. I. Jury and S. Gutman, "The inner formulation for the total square integral (SUM)," Proc. IEEE, vol. 61, pp. 395-397, Mar. 1973.
[33] S. Kay, "Generation of the autocorrelation sequence of an ARMA process," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-33, pp. 733-734, June 1985.
[34] J. Koski and T. Laakso, "Efficient numerical evaluation of the total square integral: A survey and a comparison," in Proc. IASTED Int. Conf. Signal Processing Digital Filtering (Lugano, Switzerland), June 18-21, 1990, pp. 260-262.
[35] T. Laakso, O. Hyvärinen, and M. Renfors, "Signal processor implementation of some recursive digital filter structures," in Proc. Int. Conf. Digital Signal Processing (Florence, Italy), Sept. 1987, pp. 220-224.
[36] T. Laakso and I. Hartimo, "Direct form revisited: Recursive filter implementation using higher-order direct form sections," in Proc. Int. Symp. Circuits Syst. (ISCAS'88) (Helsinki, Finland), June 7-9, 1988, pp. 791-795.
[37] T. Laakso and I. Hartimo, "Determining the optimal coefficients of high-order error feedback," in Proc. Int. Symp. Circuits Syst. (ISCAS'89) (Portland, OR), May 9-11, 1989, pp. 728-731.
[38] T. Laakso and I. Hartimo, "Efficient implementation of high-order error feedback," in Proc. Int. Conf. Circuits Syst. (ICCAS'89) (Nanjing, China), July 6-8, 1989, pp. 375-378.
[39] T. Laakso, "Suppression of overflow oscillations in recursive digital filters using error feedback," in Proc. IASTED Int. Conf. Signal Processing Digital Filtering (Lugano, Switzerland), June 18-21, 1990, pp. 76-80.
[40] T. Laakso, "Elimination of limit cycles in recursive digital filters using error feedback," in Proc. Bilkent Int. Conf. New Trends Commun. (Ankara, Turkey), July 1990, pp. 1073-1079.
[41] A. Langinmaa, "Limit cycles in digital filters," master's thesis, Faculty of Inform. Technol., Helsinki Univ. Technol., 1987.
[42] Y. C. Lim and S. R. Parker, "Discrete coefficient FIR digital filter design based upon an LMS criteria," IEEE Trans. Circuits Syst., vol. CAS-30, pp. 723-739, Oct. 1983.
[43] J. Makhoul and M. Berouti, "Adaptive noise spectral shaping and entropy coding in predictive coding of speech," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-27, pp. 63-73, Feb. 1979.
[44] J. D. Markel, "Digital inverse filtering - A new tool for formant trajectory estimation," IEEE Trans. Audio Electroacoust., vol. AU-20, no. 2, pp. 129-137, June 1972.
[45] S. L. Marple, Jr., Digital Spectral Analysis with Applications. Englewood Cliffs, NJ: Prentice-Hall, 1987.
[46] S. K. Mitra, K. Hirano, and H. Sakaguchi, "A simple method of computing the input quantization and multiplication roundoff error in a digital filter," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-22, pp. 326-329, Oct. 1974.
[47] D. Mitra, "Large amplitude self-sustained oscillations in difference equations which describe digital filter sections using saturation arithmetic," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-25, pp. 134-143, Apr. 1977.
[48] D. Mitra, "Criteria for determining if a high-order digital filter using saturation arithmetic is free of overflow oscillations," Bell Syst. Tech. J., vol. 56, pp. 1679-1699, Nov. 1977.
[49] G. A. Merchant and T. W. Parks, "Efficient solution of a Toeplitz-plus-Hankel coefficient matrix system of equations," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-30, pp. 40-44, Feb. 1982.
[50] C. T. Mullis and R. A. Roberts, "An interpretation of error spectrum shaping in digital filters," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-30, pp. 1013-1015, Dec. 1982.
[51] D. C. Munson and D. Liu, "Narrow-band recursive filters with error spectrum shaping," IEEE Trans. Circuits Syst., vol. CAS-28, pp. 160-163, Feb. 1981.
[52] A. V. Oppenheim and R. W. Schafer, Digital Signal Processing. Englewood Cliffs, NJ: Prentice-Hall, 1975.
[53] J. H. McClellan, T. W. Parks, and L. R. Rabiner, "A computer program for designing optimum FIR linear phase filters," IEEE Trans. Audio Electroacoust., vol. AU-21, pp. 506-526, Dec. 1973.
[54] R. K. Patney and S. C. Dutta Roy, "A different look at roundoff noise in digital filters," IEEE Trans. Circuits Syst., vol. CAS-27, pp. 59-62, Jan. 1980.
[55] A. Peled and B. Liu, Digital Signal Processing: Theory, Design and Implementation. New York: Wiley, 1976, pp. 261-279.
[56] P. Pitkänen, J. Skyttä, and T. Laakso, "Comparison of digital filter architectures using VHDL," in Proc. Second Eur. Conf. VHDL Methods (Euro-VHDL) (Stockholm, Sweden), Sept. 8-11, 1991, pp. 172-175.
[57] G. W. Reitwiesner, "Binary arithmetic," in Advances in Computers, vol. 1. New York: Academic, 1960, pp. 232-313.
[58] M. Renfors, "Roundoff noise in error-feedback state-space filters," in Proc. Int. Conf. Acoust., Speech, Signal Processing (ICASSP'83), Apr. 1983, pp. 619-622.
[59] M. Renfors, B. Sikström, and L. Wanhammar, "LSI implementation of limit-cycle-free digital filters using error feedback techniques," in Proc. Second Eur. Signal Processing Conf. (EUSIPCO'83), Sept. 1983.
[60] H. A. Spang, III, and P. M. Schultheiss, "Reduction of quantizing noise by use of feedback," IRE Trans. Commun. Syst., vol. CS-10, pp. 373-380, Dec. 1962.
[61] S. Sridharan and D. Williamson, "Implementation of high-order direct-form digital filter structures," IEEE Trans. Circuits Syst., vol. CAS-33, pp. 818-822, Aug. 1986.
[62] Tran-Thong and B. Liu, "A recursive digital filter using DPCM," IEEE Trans. Commun., vol. COM-24, pp. 2-11, Jan. 1976.
[63] Tran-Thong and B. Liu, "Error spectrum shaping in narrow-band recursive filters," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-25, pp. 200-203, Apr. 1977.
[64] P. P. Vaidyanathan, "On error-spectrum shaping in state-space digital filters," IEEE Trans. Circuits Syst., vol. CAS-32, no. 1, pp. 88-92, Jan. 1985 (correction, Apr. 1985, p. 413).
[65] S. L. White, "Applications of distributed arithmetic to digital signal processing: A tutorial review," IEEE ASSP Mag., vol. 6, no. 3, pp. 4-19, July 1989.
[66] B. Widrow and S. D. Stearns, Adaptive Signal Processing. Englewood Cliffs, NJ: Prentice-Hall, 1985.
[67] D. Williamson and S. Sridharan, "Residue feedback in digital filters using fractional feedback coefficients," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-33, pp. 477-483, Apr. 1985.
[68] D. Williamson, S. Sridharan, and P. G. McCrea, "A new approach for block floating-point arithmetic in recursive filters," IEEE Trans. Circuits Syst., vol. CAS-32, pp. 719-722, July 1985.
[69] D. Williamson and S. Sridharan, "Residue feedback in ladder and lattice filter structures," in Proc. Int. Conf. Acoust., Speech, Signal Processing (ICASSP'85), Apr. 1985, pp. 53-56.
[70] D. Williamson and S. Sridharan, "Error feedback in a class of orthogonal polynomial digital filter structures," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-34, pp. 1013-1016, Aug. 1986.
[71] D. Williamson, "Roundoff noise minimization and pole-zero sensitivity in fixed-point digital filters using residue feedback," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-34, pp. 1210-1220, Oct. 1986.
[72] K. J. Åström, E. I. Jury, and R. G. Agniel, "A numerical method for the evaluation of complex integrals," IEEE Trans. Automat. Contr., vol. AC-15, pp. 468-471, Aug. 1970.
Timo I. Laakso (S'87) was born in Vantaa, Finland, on February 1, 1961. He received the Diploma Engineer, Licentiate of Technology, and Doctor of Technology degrees from the Helsinki University of Technology in 1987, 1990, and 1991, respectively.
From 1987 to 1988 he was with the Laboratory of Computer and Information Sciences at the Helsinki University of Technology as a Research Assistant. During 1988-1989 he worked in the Laboratory of Communications at the University of Erlangen-Nuremberg, Erlangen, Germany, as a Visiting Research Scientist. Since 1989 he has been with the Laboratory of Signal Processing and Computer Technology at the Helsinki University of Technology, where he is currently working as a Research Scientist. His current research interests include design and implementation methods of DSP algorithms and modeling of musical instruments.
Iiro O. Hartimo (S'68-M'72-SM'83) was born in Helsinki, Finland, on October 22, 1943. He received the Diploma Engineer, Licentiate of Technology, and Doctor of Technology degrees from the Helsinki University of Technology, Helsinki, Finland, in 1969, 1975, and 1986, respectively.
In 1969 he joined the Helsinki University of Technology. From 1969 to 1978 he was with the Department of Electrical Engineering, from 1979 to 1988 he was an Associate Professor of Computer and Information Sciences in the Department of Technical Physics, and since 1988 he has been a Professor of Signal Processing and Computer Science leading the DSP research group. His research interests are in the field of implementation methods of digital signal processing.