ENH: add numerically robust polygcd function for numerical calculation of polynomial greatest common divisor #4829
Here's something that uses material from Jan Elias's thesis as a starting point. It's certainly not optimal, but it works for me and maybe some of you can improve it:

```python
import numpy as np
from scipy.linalg import toeplitz

def cauchy(p, k):
    # convolution (Cauchy) matrix: cauchy(p, k) @ q == np.convolve(p, q)
    # for any coefficient vector q of length k
    return toeplitz(np.hstack([p, np.zeros(k - 1)]), np.zeros(k))

def sylvester(pa, pb, k=1):
    # k-th Sylvester subresultant matrix of pa and pb
    na = len(pa) - k
    nb = len(pb) - k
    Sa = cauchy(pa, nb)
    Sb = cauchy(pb, na)
    return np.hstack([Sa, Sb])

def polygcd(pa, pb, thresh=1e-10):
    '''
    Return polynomials (g, u, v) such that
        g = approximate gcd(pa, pb)
        pa = g*u
        pb = g*v
    This uses a simplified (suboptimal) version of the method in
    Jan Elias's thesis:
    https://who.rocq.inria.fr/Jan.Elias/pdf/je_masterthesis.pdf
    '''
    # trivial case: at least one polynomial is degree 0
    if len(pa) == 1 or len(pb) == 1:
        return (np.array([1.0]), pa, pb)
    # the rank deficiency of S gives the gcd degree
    # (there are more efficient ways to compute this, per the thesis)
    S = sylvester(pa, pb)
    _, s, _ = np.linalg.svd(S)
    k = (s < thresh).sum()
    # another trivial case: pa and pb are relatively prime
    if k == 0:
        return (np.array([1.0]), pa, pb)
    # find the nullspace of Sk, partition it into [-v, u]
    # (there are probably more efficient ways to do this... not sure)
    Sk = sylvester(pa, pb, k)
    _, _, vt = np.linalg.svd(Sk)
    uv = vt[-1, :]
    v = -uv[:len(pb) - k]
    u = uv[-(len(pa) - k):]
    # solve [Ck(u); Ck(v)] x = [pa; pb] for x (the gcd, up to scale)
    Cuv = np.vstack([cauchy(u, k + 1), cauchy(v, k + 1)])
    pab = np.hstack([pa, pb])
    x, _, _, _ = np.linalg.lstsq(Cuv, pab, rcond=None)
    # normalize g to be monic
    return x / x[0], u * x[0], v * x[0]
```
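In case the `cauchy` helper is opaque: it is just the convolution matrix of `p`, so multiplying it by a length-`k` coefficient vector performs polynomial multiplication. A quick self-contained check:

```python
import numpy as np
from scipy.linalg import toeplitz

def cauchy(p, k):
    # convolution matrix: columns are shifted copies of p
    return toeplitz(np.hstack([p, np.zeros(k - 1)]), np.zeros(k))

p = np.array([1.0, 2.0, 3.0])
q = np.array([4.0, 5.0])
# matrix-vector product equals polynomial multiplication
print(np.allclose(cauchy(p, len(q)) @ q, np.convolve(p, q)))  # True
```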
This looks interesting but I need some time to digest it. What is the numerical difficulty, leading zeros in the division/remainder? If so it may be useful to experiment with Chebyshev polynomials, although in that case the location of arbitrary zeros in the complex plane may not be quite as accurate. Hmm... I probably need to read the thesis.
Thinking about it some more, it appears to me that when using floating point there needs to be a scale parameter of some sort that will be problem dependent. For example,
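One way to see the scale dependence (a toy sketch of my own, not taken from the thesis): the singular values of the Sylvester matrix scale linearly with the coefficient vectors, so a fixed absolute threshold like `thresh=1e-10` can classify the same pair of polynomials differently depending only on their scaling:

```python
import numpy as np
from scipy.linalg import toeplitz

def sylvester(pa, pb):
    # full Sylvester matrix of two polynomials (coefficients, highest degree first)
    na, nb = len(pa) - 1, len(pb) - 1
    Sa = toeplitz(np.hstack([pa, np.zeros(nb - 1)]), np.zeros(nb))
    Sb = toeplitz(np.hstack([pb, np.zeros(na - 1)]), np.zeros(na))
    return np.hstack([Sa, Sb])

pa = np.array([1.0, -3.0, 2.0])   # (x-1)(x-2)
pb = np.array([1.0, -4.0, 3.0])   # (x-1)(x-3): exactly one common root
s = np.linalg.svd(sylvester(pa, pb), compute_uv=False)
# one singular value is ~0 (the shared factor), the rest are O(1)
print((s < 1e-10).sum())          # 1

# the same polynomials, rescaled: every singular value shrinks by 1e-11,
# and the same fixed absolute threshold now flags all of them
s_scaled = np.linalg.svd(sylvester(1e-11 * pa, 1e-11 * pb), compute_uv=False)
print((s_scaled < 1e-10).sum())   # 4
```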
Great questions -- I'd like to respond, but I got diverted from this topic at work for a few days, so it will have to wait before I can give good background. The short answer is that I'm doing some rational transfer function manipulation, and was surprised to find that even for small-order polynomials (e.g. 2nd-order) I'm running into numerical issues.
I read the thesis and scaling is covered in the pre-processing step, but I think the general approach is not suitable for most practical problems. For one thing, the distance between polynomials is measured as the norm of the difference in coefficients, which assumes that the coefficients are exact modulo some small error. For polynomials derived from data, that is unlikely to be the case, and indeed the degree needed is uncertain. There are other problems along that line. So an important question to answer is how the polynomials arise in a particular problem. The scaling is also chosen to optimize the Sylvester matrix, and that may not be the proper criterion. That is to say, it is important to specify exactly what you are attempting to do, where the data comes from, and what use will be made of the result.
I have some working code for LCM and GCD of polynomials from some time ago, and it works sufficiently well for most tasks (excluding the ill-conditioned corner cases). The code was written a while back and I don't think it fits my toolbox's overall theme, so I was thinking about cleaning it up and putting it into NumPy. The LCM code in particular I have been using a lot for MIMO systems in transfer function representation to handle minimality. It seems to agree with other commercial software and articles in the literature (as far as I could test). Would it be a viable addition to NumPy after some cleanup? Or does it have to be a C/Fortran implementation?
I'd argue that this belongs in
GCD for polynomials is an important operation for computing sums of polynomial ratios A(x)/B(x) + C(x)/D(x). The naive approach to computing GCD(B(x), D(x)) is numerically ill-conditioned.
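To make the ill-conditioning concrete, here's a sketch (my own toy example, not from the literature) of the textbook Euclidean algorithm on floating-point coefficients. A perturbation of about 1e-10 in a single coefficient is enough to make the remainder sequence miss the common factor:

```python
import numpy as np

def naive_polygcd(b, d, tol=1e-12):
    # textbook Euclidean algorithm on coefficient arrays (highest degree first)
    while len(d) > 1 or abs(d[0]) > tol:
        _, r = np.polydiv(b, d)
        b, d = d, np.atleast_1d(r)
    return b / b[0]  # normalize to a monic gcd

g = np.array([1.0, -1.0])              # common factor (x - 1)
b = np.convolve(g, [1.0, -2.0])        # (x - 1)(x - 2)
d = np.convolve(g, [1.0, 3.0])         # (x - 1)(x + 3)
print(naive_polygcd(b, d))             # exact coefficients: recovers x - 1

# perturb one coefficient of d by ~1e-10: the remainder sequence no longer
# reaches (near-)zero cleanly and the computed gcd collapses to a constant
d_noisy = d + np.array([0.0, 1e-10, 0.0])
print(naive_polygcd(b, d_noisy))
```

The outcome also flips depending on `tol`, which is exactly the problem-dependent scale issue raised above.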
I figured there was an easy workaround that would be more robust, but apparently there isn't. There is some promising literature on the subject (see below), but I'm woefully underskilled for porting these algorithms to numpy, much less understanding how they work.