
    Luca Gemignani

    In this paper we introduce a family of rational approximations of the inverse of a ϕ function involved in the explicit solutions of certain linear differential equations as well as in integration schemes evolving on manifolds. For symmetric banded matrices these novel approximations provide a computable reconstruction of the associated matrix function which exhibits decay properties comparable to the best existing theoretical bounds. Numerical examples show the benefits of the proposed rational approximations with respect to classical Taylor polynomials.
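    For background only (this is the standard family used in exponential integrators; the abstract does not specify which index is treated, so the following generic definition is not taken from the paper):
    \begin{eqnarray*}
    \varphi_0(z)=e^z,\qquad \varphi_{\ell+1}(z)=\frac{\varphi_\ell(z)-1/\ell!}{z},\qquad \mbox{e.g. } \varphi_1(z)=\frac{e^z-1}{z}.
    \end{eqnarray*}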
    In this paper we present a general strategy to deduce a family of interpolatory masks from a symmetric Hurwitz non-interpolatory one. This leads back to a polynomial equation involving the symbol of the non-interpolatory scheme we start with. The solution of the polynomial equation proposed here, tailored for symmetric Hurwitz subdivision symbols, leads to an efficient procedure for the computation of the coefficients of the corresponding family of interpolatory masks. Several examples of interpolatory masks associated with classical approximating masks are given. AMS classification: 65F05; 65D05
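    As context for the polynomial equation mentioned above, a standard reminder (background only, not taken from the paper): a univariate binary scheme with Laurent symbol $a(z)$ is interpolatory exactly when all even-indexed mask coefficients vanish except the central one, i.e.
    \begin{eqnarray*}
    a(z)+a(-z)=2 .
    \end{eqnarray*}
    The specific equation and its tailored solution for symmetric Hurwitz symbols are not reproduced here.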
    An algorithm based on the Ehrlich-Aberth iteration is presented for the computation of the zeros of $p(\lambda)=\det(T-\lambda I)$, where $T$ is an irreducible nonsymmetric tridiagonal matrix. The algorithm requires the evaluation of the Newton correction $p(\lambda)/p'(\lambda)=-1/\mathrm{trace}\big((T-\lambda I)^{-1}\big)$, which is done by exploiting the QR factorization of $T-\lambda I$ and the semiseparable structure of $(T-\lambda I)^{-1}$. The choice of initial approximations relies on a divide-and-conquer strategy, and some results motivating this strategy are given. Guaranteed a posteriori error bounds based on a running error analysis are proved. A Fortran 95 module implementing the algorithm is provided, and numerical experiments that confirm the effectiveness and the robustness of the approach are presented. In particular, comparisons with the LAPACK subroutine dhseqr show that our algorithm is faster for large dimensions.
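    A minimal dense sketch of the iteration just described, assuming numpy; the Newton correction is evaluated from the trace formula rather than the O(n) QR-based procedure of the paper, and the function name, stopping rule and test matrix are mine:

```python
import numpy as np

def ehrlich_aberth_tridiag(T, z0, iters=100, tol=1e-12):
    """Dense illustrative Ehrlich-Aberth sweep for the zeros of p(z) = det(T - z I).

    Newton's correction p(z)/p'(z) is computed as -1/trace((T - z I)^{-1}); the
    paper obtains it in O(n) flops via the QR factorization of T - z I and the
    semiseparable structure of its inverse, which is not reproduced here."""
    n = T.shape[0]
    z = np.array(z0, dtype=complex)
    I = np.eye(n)
    for _ in range(iters):
        converged = True
        for i in range(n):
            newton = -1.0 / np.trace(np.linalg.inv(T - z[i] * I))
            repel = sum(1.0 / (z[i] - z[j]) for j in range(n) if j != i)
            corr = newton / (1.0 - newton * repel)   # Ehrlich-Aberth update
            z[i] -= corr
            if abs(corr) > tol * max(1.0, abs(z[i])):
                converged = False
        if converged:
            break
    return z

# small sanity check against a dense eigensolver
T = np.diag([4.0, 2.0, -1.0]) + np.diag([1.0, 1.0], 1) + np.diag([0.5, 0.5], -1)
print(np.sort_complex(ehrlich_aberth_tridiag(T, [1j, 1 + 1j, 2 - 1j])))
print(np.sort_complex(np.linalg.eigvals(T) + 0j))
```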
    On the Euclidean scheme for polynomials having interlaced real zeros, by Dario Bini (Dipartimento di Matematica, Università di Pisa) and Luca Gemignani (Centro elaborazione dati, Ministero del Lavoro, sede regionale, Firenze).
    We approximate polynomial roots numerically as the eigenvalues of a unitary diagonal plus rank-one matrix. We rely on our earlier adaptation of the algorithm, which exploits the semiseparable matrix structure to approximate the eigenvalues in a fast and robust way, but we substantially improve the performance of the resulting algorithm at the initial stage, as confirmed by our numerical tests.
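    The following sketch shows one classical realization of such a unitary-diagonal-plus-rank-one linearization, a Lagrange-type construction at the roots of unity; it is an illustration under that assumption, not necessarily the exact variant used in the paper, and the eigenvalues are computed densely rather than with the fast structured method:

```python
import numpy as np

def dpr1_roots(p):
    """Roots of a monic polynomial p as eigenvalues of diag(z) - w * ones^T.

    With z_1,...,z_n the n-th roots of unity and w_i = p(z_i)/prod_{j!=i}(z_i - z_j),
    the matrix below is a unitary diagonal plus a rank-one term and has
    characteristic polynomial p (a standard Lagrange-basis argument)."""
    p = np.asarray(p, dtype=complex)            # coefficients, leading 1 first
    n = len(p) - 1
    z = np.exp(2j * np.pi * np.arange(n) / n)   # unitary diagonal part
    w = np.polyval(p, z) / (n * z ** (n - 1))   # (x^n - 1)'(z_i) = n z_i^{n-1}
    A = np.diag(z) - np.outer(w, np.ones(n))
    return np.linalg.eigvals(A)                 # dense fallback, not the fast method

coeffs = [1, -6, 11, -6]                        # (x - 1)(x - 2)(x - 3)
print(np.sort_complex(dpr1_roots(coeffs)))
print(np.sort_complex(np.roots(coeffs) + 0j))
```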
    The algebraic characterization of dual univariate interpolating subdivision schemes is investigated. Specifically, we provide a constructive approach for finding dual univariate interpolating subdivision schemes based on the solutions of certain associated polynomial equations. The proposed approach also makes it possible to identify conditions for the existence of the sought schemes.
    We present fast numerical methods for computing the Hessenberg reduction of a unitary plus low-rank matrix $A=G+U V^H$, where $G\in \mathbb C^{n\times n}$ is a unitary matrix represented in some compressed format using $O(nk)$ parameters and $U$ and $V$ are $n\times k$ matrices with $k< n$. At the core of these methods is a certain structured decomposition, referred to as an LFR decomposition, of $A$ as the product of three possibly perturbed unitary $k$-Hessenberg matrices of size $n$. It is shown that in most interesting cases an initial LFR decomposition of $A$ can be computed very cheaply. Then we prove structural properties of LFR decompositions by giving conditions under which the LFR decomposition of $A$ implies its Hessenberg shape. Finally, we describe a bulge chasing scheme for converting the initial LFR decomposition of $A$ into the LFR decomposition of a Hessenberg matrix by means of unitary transformations. The reduction can be performed at the overall computational cost ...
    We present new algorithms using structured matrix methods for manipulating polynomials expressed in generalized form, that is, relative to a polynomial basis $\{p_i(x)\}$ satisfying a three-term recurrence relation. This includes computing the Euclidean remainders as well as the greatest common divisor of two polynomials $u(x)$ and $v(x)$ of degrees less than $n$. The computational schemes involve solving linear systems with nested Hankel matrices represented in the given generalized basis and they reach the optimal sequential complexity of $O(n^2)$ arithmetical operations.
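    For orientation only, a textbook Euclidean scheme in the monomial basis, assuming numpy; the paper instead works in a general three-term-recurrence basis and replaces the division loop with structured Hankel-type linear systems, which is not reproduced here, and the function name and tolerance are mine:

```python
import numpy as np

def poly_gcd(u, v, tol=1e-10):
    """Monic gcd of two polynomials via repeated polynomial division.

    Purely illustrative and numerically naive; coefficients are given with the
    leading one first, as expected by numpy.polydiv."""
    u = np.trim_zeros(np.asarray(u, dtype=float), 'f')
    v = np.trim_zeros(np.asarray(v, dtype=float), 'f')
    while v.size and np.max(np.abs(v)) > tol:
        _, r = np.polydiv(u, v)                 # Euclidean remainder
        u, v = v, np.trim_zeros(r, 'f')
    return u / u[0]

u = np.polymul([1.0, -1.0], [1.0, -2.0])        # (x - 1)(x - 2)
v = np.polymul([1.0, -1.0], [1.0, -3.0])        # (x - 1)(x - 3)
print(poly_gcd(u, v))                           # approximately [1, -1], i.e. x - 1
```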
    Subdivision schemes are iterative methods for the design of smooth curves and surfaces. Any linear subdivision scheme can be identified by a sequence of Laurent polynomials, also called subdivision symbols, which describe the linear rules determining successive refinements of coarse initial meshes. One important property of subdivision schemes is their capability of exactly reproducing in the limit specific types of functions from which the data is sampled. Indeed, this property is linked to the approximation order of the scheme and to its regularity. When the capability of reproducing polynomials is required, it is possible to define a family of subdivision schemes that allows one to meet various demands for balancing approximation order, regularity and support size. The members of this family are known in the literature under the name of pseudo-splines. When reproduction of exponential polynomials rather than polynomials is requested, the resulting family turns out to be the non-stat...
    We show that the shifted QR iteration applied to a companion matrix $F$ maintains the weakly semiseparable structure of $F$. More precisely, if $A_i-\sigma_i I=Q_iR_i$, $A_{i+1}:=R_iQ_i+\sigma_i I$, $i=0,1,\dots$, where $A_0=F$, then we prove that $Q_i$, $R_i$ and $A_i$ are semiseparable matrices having semiseparability rank at most 1, 4 and 3, respectively. This structural property is used to design an algorithm for performing a single step of the QR iteration in just $O(n)$ flops. The robustness and reliability of this algorithm are discussed. Applications to approximating polynomial roots are shown.
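    For readers who want to see the recurrence in action, here is a plain dense shifted QR iteration with deflation applied to a small companion matrix, assuming numpy; it mirrors $A_i-\sigma_i I=Q_iR_i$, $A_{i+1}=R_iQ_i+\sigma_i I$ but does not implement the $O(n)$ semiseparable representation that is the point of the paper, and the shift strategy, helper names and example polynomial are mine:

```python
import numpy as np

def companion(p):
    """Companion matrix of a monic polynomial (coefficients, leading 1 first)."""
    p = np.asarray(p, dtype=float)
    n = len(p) - 1
    F = np.zeros((n, n))
    F[1:, :-1] = np.eye(n - 1)
    F[:, -1] = -p[:0:-1]                        # last column: -(a_0, ..., a_{n-1})
    return F

def shifted_qr_eigs(A, tol=1e-12, max_steps=500):
    """Textbook shifted QR with deflation (single trailing-entry shift).

    Dense and O(n^3) per step; real matrices with complex conjugate eigenvalue
    pairs would need a double-shift variant, which is not shown."""
    A = A.astype(complex).copy()
    m, eigs = A.shape[0], []
    for _ in range(max_steps):
        if m == 1:
            eigs.append(A[0, 0])
            break
        if abs(A[m - 1, m - 2]) < tol * (abs(A[m - 1, m - 1]) + abs(A[m - 2, m - 2])):
            eigs.append(A[m - 1, m - 1])        # deflate a converged eigenvalue
            m -= 1
            continue
        s = A[m - 1, m - 1]                     # simple trailing-entry shift
        Q, R = np.linalg.qr(A[:m, :m] - s * np.eye(m))
        A[:m, :m] = R @ Q + s * np.eye(m)       # A_{i+1} = R_i Q_i + s_i I
    return np.array(eigs)

F = companion([1.0, -6.0, 11.0, -6.0])          # roots 1, 2, 3
print(np.sort_complex(shifted_qr_eigs(F)))
```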
    In this paper two fast algorithms that use orthogonal similarity transformations to convert a symmetric rationally generated Toeplitz matrix to tridiagonal form are developed, as a means of finding the eigenvalues of the matrix efficiently. The reduction algorithms achieve cost efficiency by exploiting the rank structure of the input Toeplitz matrix. The proposed algorithms differ in the choice of the generator set for the rank structure of the input Toeplitz matrix.
    In this paper we consider fast numerical algorithms for solving certain modified matrix eigenvalue problems associated with algebraic equations. The matrices under consideration have the form $A=T+uv^T$, where $u,v\in\mathbb{R}^n$ and $T=(t_{i,j})\in\mathbb{R}^{n\times n}$ is a tridiagonal matrix such that $t_{j+1,j}=\pm t_{j,j+1}$, $1\le j\le n-1$. We show that the DQR approach proposed in [Uhlig F., Numer. Math. 76 (1997), no. 4, 515-553] can compute the eigenvalues of such matrices efficiently. Our algorithm employs a fast Hessenberg reduction scheme for tridiagonal plus rank-one matrices combined with an efficient eigenvalue method for the resulting Hessenberg matrices. Exploiting the rank structure of the input matrix enables both steps to be carried out using quadratic time and linear storage. Numerical implementation of these techniques is presented and discussed for a number of test problems. AMS classification: 65F15
    Some variants of the (block) Gauss–Seidel iteration for the solution of linear systems with M-matrices in (block) Hessenberg form are discussed. Comparison results for the asymptotic convergence rate of some regular splittings are derived: in particular, we prove that for a lower-Hessenberg M-matrix $\rho(P_{GS})\ge\rho(P_{S})\ge\rho(P_{AGS})$, where $P_{GS}$, $P_{S}$, $P_{AGS}$ are the iteration matrices of the Gauss–Seidel, staircase, and anti-Gauss–Seidel methods, respectively. This is a result that does not seem to follow from classical comparison results, as these splittings are not directly comparable. It is shown that the concept of stair partitioning provides a powerful tool for the design of new variants that are suited for parallel computation. AMS classification: 65F15
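    A quick numerical illustration of the comparison, assuming numpy and reading anti-Gauss–Seidel as the backward sweep (splitting on the upper triangular part); the 4x4 lower-Hessenberg M-matrix below is an arbitrary example of mine, and the staircase splitting $P_S$ is not reproduced:

```python
import numpy as np

def spectral_radius(P):
    return max(abs(np.linalg.eigvals(P)))

# a small lower-Hessenberg, strictly diagonally dominant M-matrix
A = np.array([[ 4.0, -1.0,  0.0,  0.0],
              [-1.0,  4.0, -1.0,  0.0],
              [-1.0, -1.0,  4.0, -1.0],
              [-1.0, -1.0, -1.0,  4.0]])

D = np.diag(np.diag(A))
L = -np.tril(A, -1)                      # A = D - L - U with L, U >= 0
U = -np.triu(A,  1)

P_GS  = np.linalg.inv(D - L) @ U         # forward Gauss-Seidel iteration matrix
P_AGS = np.linalg.inv(D - U) @ L         # backward (anti) Gauss-Seidel iteration matrix

# the paper proves rho(P_GS) >= rho(P_S) >= rho(P_AGS) in this setting
print(spectral_radius(P_GS), spectral_radius(P_AGS))
```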
    It is shown that the problem of balancing a nonnegative matrix by positive diagonal matrices can be recast as a constrained nonlinear multiparameter eigenvalue problem. Based on this equivalent formulation, some adaptations of the power method and the Arnoldi process are proposed for computing the dominant eigenvector which defines the structure of the diagonal transformations. Numerical results illustrate that our novel methods significantly accelerate the convergence of the customary Sinkhorn–Knopp iteration for matrix balancing in the case of clustered dominant eigenvalues.
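    For context, a minimal numpy sketch of the baseline Sinkhorn–Knopp iteration that the proposed methods accelerate; the power/Arnoldi variants themselves are not reproduced, and the function name, tolerances and test matrix are mine:

```python
import numpy as np

def sinkhorn_knopp(A, iters=1000, tol=1e-10):
    """Find positive diagonals D1, D2 so that D1 @ A @ D2 is (nearly) doubly
    stochastic, by alternately rescaling columns and rows."""
    A = np.asarray(A, dtype=float)
    r = np.ones(A.shape[0])
    c = np.ones(A.shape[1])
    for _ in range(iters):
        c = 1.0 / (A.T @ r)                  # make column sums equal to one
        r_new = 1.0 / (A @ c)                # make row sums equal to one
        if np.max(np.abs(r_new - r)) < tol:
            r = r_new
            break
        r = r_new
    return np.diag(r), np.diag(c)

A = np.array([[1.0, 2.0, 1.0],
              [2.0, 1.0, 3.0],
              [1.0, 1.0, 1.0]])
D1, D2 = sinkhorn_knopp(A)
B = D1 @ A @ D2
print(B.sum(axis=0), B.sum(axis=1))          # both close to (1, 1, 1)
```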
    We present a class of fast subspace tracking algorithms based on orthogonal iterations for structured matrices/pencils that can be represented as small-rank perturbations of unitary matrices. The algorithms rely upon an updated data-sparse factorization, named the LFR factorization, which uses orthogonal Hessenberg matrices. These new subspace trackers reach a complexity of only $O(nk^2)$ operations per time update, where $n$ and $k$ are the size of the matrix and of the small-rank perturbation, respectively.
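    A bare-bones dense version of the orthogonal (subspace) iteration underlying these trackers, assuming numpy; it costs O(n^2 k) per step rather than the O(nk^2) reached with the LFR factorization, and the function name and the random unitary-plus-rank-k test matrix are mine:

```python
import numpy as np

def orthogonal_iteration(A, k, steps=200, seed=0):
    """Track an orthonormal basis of the dominant k-dimensional invariant
    subspace of A by repeated multiplication and re-orthogonalization."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((A.shape[0], k)))
    for _ in range(steps):
        Q, _ = np.linalg.qr(A @ Q)           # Q spans A * range(previous Q)
    return Q

n, k = 50, 2
rng = np.random.default_rng(1)
G, _ = np.linalg.qr(rng.standard_normal((n, n)))                    # orthogonal part
A = G + rng.standard_normal((n, k)) @ rng.standard_normal((k, n))   # rank-k perturbation
Q = orthogonal_iteration(A, k)
# residual of near-invariance; small once a modulus gap separates the tracked subspace
print(np.linalg.norm(A @ Q - Q @ (Q.T @ A @ Q)))
```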
    The problem of solving large linear systems whose coefficient matrix is a sparse M-matrix in block Hessenberg form has recently received much attention, especially for applications in Markov chains and queueing theory. Stewart proposed a recursive algorithm which is shown to be backward stable. Although the theoretical derivation of such an algorithm is very simple, its efficient implementation is logically rather involved. An analysis of its computational cost in the case where the initial coefficient matrix satisfies quite general sparsity properties can be found in [P. Favati et al., Acta Tecnica Acad. Sci. Hungar., 108 (1997--1999), pp. 89--105]. In this paper we devise a different divide-and-conquer strategy for the solution of block Hessenberg linear systems. Our approach follows from a recursive application of the block Gaussian elimination algorithm. For dense matrices, the present method has a computational cost comparable to that of Stewart's algorithm; for sparse matrices it is more efficient and more robust.
    In this paper we propose a superfast implementation of Wilson's method for the spectral factorization of Laurent polynomials, based on a preconditioned conjugate gradient algorithm. The new computational scheme follows by exploiting several recently established connections between the considered factorization problem and the solution of certain discrete-time Lyapunov matrix equations whose coefficients are in controllable canonical form. The results of many numerical experiments, some involving polynomials of very high degree, are reported and discussed, showing that our preconditioning strategy is quite effective precisely when the iterative phase is started from a rough approximation of the sought factor. Thus, our approach provides an efficient refinement procedure which is particularly well suited to being combined with linearly convergent factorization algorithms when these suffer from very slow convergence due to the occurrence of roots close to the unit circle.
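    To fix what spectral factorization of Laurent polynomials refers to, here is the standard formulation for real-coefficient Laurent polynomials positive on the unit circle (background only; the discrete-time Lyapunov reformulation exploited in the paper is not reproduced): given
    \begin{eqnarray*}
    p(z)=\sum_{k=-m}^{m} p_k z^k,\qquad p_k=p_{-k},\qquad p(z)>0 \ \mbox{ for } |z|=1,
    \end{eqnarray*}
    find $a(z)=\sum_{k=0}^{m} a_k z^k$ with all zeros in the open unit disk such that $p(z)=a(z)\,a(z^{-1})$.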
    Frequently, in control system design, we are asked to locate the roots of a bivariate polynomial of the following form \begin{eqnarray*} H(s,k)=\sum_{i=0}^n Q_i(s) k^i\in {\bf Z}[k,s]={\bf Z}[k][s]={\bf D}[s], \end{eqnarray*} where $Q_i(s)\in {\bf Z}[s]$ for each $i$ and, moreover, $k$ is a free parameter ranging in some real interval. For a fixed value $\bar k$ of $k$, the zero distribution of the univariate polynomial $p(s)=H(s,\bar k)$ with respect to the imaginary axis can be found by determining the inertia of a Bezout matrix $B=B(\bar k)$ whose entries are expressed in terms of the coefficients of $p(s)$. This evaluation is usually accomplished by computing a block factorization of $B$, namely $U^TBU=D$, where $D$ is a block diagonal matrix with lower triangular blocks with respect to the antidiagonal. In this paper we propose an efficient hybrid approach for determining the zero distribution of $H(s,k)$ with respect to the imaginary axis for any value of $k$. We develop a fast fraction-free method for factoring the Bezout matrix $B(k)$ with entries over ${\bf D}$ determined by $H(s,k)\in {\bf D}[s]$. In this way, we easily compute the sequence $\{\phi_i(k)\}$ of the trailing principal minors of $B(k)$. For almost any value $\bar k$ of $k$ the associated sign sequence $\{\mathrm{sign}(\phi_i(\bar k))\}$ specifies the inertia of $B(\bar k)$ and, therefore, the zero distribution of $H(s,\bar k)$. The function $\mathrm{sign}(\phi_i(k))$ is finally obtained by numerically computing rational approximations of the real zeros of $\phi_i(k)\in {\bf Z}[k]$.
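    To fix notation (background only, not taken from the paper): for polynomials $p$, $q$ of degree at most $n$, the Bezout matrix $B(p,q)=(b_{ij})_{i,j=1}^{n}$ is the symmetric matrix defined by
    \begin{eqnarray*}
    \frac{p(x)q(y)-p(y)q(x)}{x-y}=\sum_{i,j=1}^{n} b_{ij}\, x^{i-1} y^{j-1},
    \end{eqnarray*}
    and the sign-sequence argument above is the classical minor-based inertia count: if the leading principal minors $\phi_1,\dots,\phi_n$ of a real symmetric matrix are all nonzero, the number of its negative eigenvalues equals the number of sign changes in the sequence $1,\phi_1,\dots,\phi_n$ (Jacobi's rule); the paper applies the analogous count to the trailing principal minors produced by its fraction-free block factorization.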
    This paper is concerned with the efficient solution of (block) Hessenberg linear systems whose coefficient matrix is a Toeplitz matrix in (block) Hessenberg form plus a band matrix. Such problems arise, for instance, when we apply a computational scheme based on the use of difference equations for the computation of many significant special functions and quantities occurring in engineering and physics. We present a divide-and-conquer algorithm that combines some recent techniques for the numerical treatment of structured Hessenberg linear systems. Our approach is computationally efficient and, moreover, in many practical cases it can be shown to be componentwise stable.

    And 107 more