
Andy Novocin
Address: United States
Related Authors
Cristina Bertone
Università degli Studi di Torino
William Pulleyblank
US Military Academy, West Point
Ziyun Zhang
California Institute of Technology
Florian Wilsch
Institute of Science and Technology Austria
Claus Fieker
TU Kaiserslautern
Mohamed Atri
King Khalid University
Papers by Andy Novocin
Given a prime p and a matrix A ∈ Z^(n×n), write A as A = p(A quo p) + (A rem p), where the remainder and quotient operations are applied element-wise. Write the p-adic expansion of A as A = A[0] + pA[1] + p^2 A[2] + · · ·, where each A[i] ∈ Z^(n×n) has entries in [0, p − 1]. Upper bounds are proven for the Z-ranks of A rem p and A quo p. Upper bounds are also proven for the Z/pZ-rank of A[i] for all i ≥ 0 when p = 2, and a conjecture is presented for odd primes.
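As a rough illustration of this decomposition, here is a small Python sketch (my own toy example, not the paper's method; the matrix and prime below are made up) that splits a non-negative integer matrix into its p-adic digit matrices A[i] and computes their ranks over Z/pZ:

```python
# Toy sketch: p-adic digit matrices of an integer matrix and their ranks mod p.

def padic_digits(A, p):
    """Digit matrices A[0], A[1], ... with entries in [0, p-1], so that
    A = A[0] + p*A[1] + p^2*A[2] + ...  (assumes non-negative entries)."""
    digits = []
    M = [row[:] for row in A]
    while any(x != 0 for row in M for x in row):
        digits.append([[x % p for x in row] for row in M])
        M = [[x // p for x in row] for row in M]
    return digits

def rank_mod_p(A, p):
    """Rank of A over Z/pZ via Gaussian elimination (Python 3.8+, p prime)."""
    M = [[x % p for x in row] for row in A]
    rank, rows, cols = 0, len(M), len(M[0]) if M else 0
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r][col]), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        inv = pow(M[rank][col], -1, p)            # modular inverse of the pivot
        M[rank] = [(inv * x) % p for x in M[rank]]
        for r in range(rows):
            if r != rank and M[r][col]:
                f = M[r][col]
                M[r] = [(x - f * y) % p for x, y in zip(M[r], M[rank])]
        rank += 1
    return rank

A = [[6, 4], [10, 3]]                             # hypothetical example
p = 2
for i, D in enumerate(padic_digits(A, p)):
    print(f"rank of A[{i}] over Z/{p}Z:", rank_mod_p(D, p))
```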
We present algorithms for computing the invariant factors of matrices over two families of local rings. The algorithms use the black-box model, which is suitable for sparse and structured matrices. They depend on a number of tools, such as matrix rank computation over finite fields, for which the best-known time- and memory-efficient algorithms are probabilistic.

For an n × n matrix A over the ring F[z]/(f^e), where f^e is a power of an irreducible polynomial f ∈ F[z] of degree d, our algorithm requires O(ηde^2 n) operations in F, where the black-box is assumed to require O(η) operations in F to compute a matrix-vector product by a vector over F[z]/(f^e) (and η is assumed greater than nde). The algorithm requires additional storage for only O(nde) elements of F. In particular, if η = Õ(nde), then our algorithm requires only Õ(n^2 d^2 e^3) operations in F, which is an improvement on known dense methods for small d and e.

For the ring Z/p^e Z, where p is a prime, we give an algorithm which is time- and memory-efficient when the number of nontrivial invariant factors is small. We describe a method for dimension reduction while preserving the invariant factors. The time complexity is essentially linear in µ · n · r · e · log p, where µ is the number of operations in Z/pZ needed to evaluate the black-box (assumed greater than n) and r is the total number of non-zero invariant factors. To avoid the practical cost of conditioning, we give a Monte Carlo certificate which, at low cost, provides either a high probability of success or a proof of failure. The quest for a time- and memory-efficient solution without restrictions on the number of nontrivial invariant factors remains open. We offer a conjecture which may contribute toward that end.
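For context, the quantity being computed over Z/p^e Z can be illustrated with a dense baseline; this is emphatically not the paper's black-box algorithm, and the sketch assumes a recent SymPy. The invariant factors of A over Z/p^e Z can be read off from the Smith normal form of A over Z:

```python
# Dense baseline (not the paper's black-box algorithm): the invariant factors
# of A over Z/p^eZ are the Smith-form diagonal entries of A over Z, with each
# entry d replaced by p^min(v_p(d), e).  Requires a recent SymPy.
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

def invariant_factors_mod_pe(A, p, e):
    S = smith_normal_form(Matrix(A), domain=ZZ)
    factors = []
    for i in range(min(S.rows, S.cols)):
        d = abs(int(S[i, i]))
        if d == 0:
            factors.append(0)            # zero invariant factor (= p^e in Z/p^eZ)
            continue
        v = 0
        while d % p == 0:                # p-adic valuation v_p(d)
            d //= p
            v += 1
        factors.append(p ** min(v, e))
    return factors

# Hypothetical example: A = [[2, 4], [4, 4]] has Smith form diag(2, 4) over Z.
print(invariant_factors_mod_pe([[2, 4], [4, 4]], p=2, e=3))   # -> [2, 4]
```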
Given a field extension K/k of degree n, we are interested in finding the subfields of K containing k. There can be
more than polynomially many subfields. We introduce the
notion of generating subfields, a set of up to n subfields
whose intersections give the rest. We provide an efficient
algorithm which uses linear algebra in k or lattice reduction
along with factorization. Our implementation shows that
previously difficult cases can now be handled.
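The defining property of generating subfields — every subfield arises as an intersection of some of them — can be illustrated with a toy closure computation. The sketch below is purely combinatorial and uses hypothetical data (frozensets standing in for subfields); it is not the paper's algorithm, which relies on linear algebra over k or lattice reduction together with factorization.

```python
# Toy illustration of the closure step only (hypothetical data, not the
# paper's algorithm): the full subfield lattice is the intersection-closure
# of the generating subfields.  Each "subfield" is just a frozenset of labels,
# and field intersection is modeled by set intersection.
from itertools import combinations

def intersection_closure(generating):
    closure = set(generating)
    changed = True
    while changed:
        changed = False
        for a, b in combinations(list(closure), 2):
            c = a & b
            if c not in closure:
                closure.add(c)
                changed = True
    return closure

# Hypothetical generating "subfields".
gens = [frozenset({"a", "c"}), frozenset({"b", "c"}), frozenset({"a", "b", "c"})]
for S in sorted(intersection_closure(gens), key=len):
    print(sorted(S))
```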
We present an LLL-type algorithm, L1, which computes a basis reduced for a mild modification of the Lenstra-Lenstra-Lovász reduction. It terminates in time O(d^(5+ε)β + d^(ω+1+ε)β^(1+ε)), where d is the lattice dimension, β is the maximal bit-size of a basis vector, ε > 0 is arbitrary, and ω is a valid exponent for matrix multiplication. This is the first LLL-reducing algorithm with a time complexity that is quasi-linear in β and polynomial in d.

The backbone structure of L1 is able to mimic the Knuth-Schönhage fast gcd algorithm thanks to a combination of cutting-edge ingredients. First, the bit-size of our lattice bases can be decreased via truncations whose validity is backed by recent numerical stability results on the QR matrix factorization. Second, we establish a new framework for analyzing unimodular transformation matrices that reduce shifts of reduced bases; this includes bit-size control and new perturbation tools. We illustrate the power of this framework by generating a family of reduction algorithms.
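For readers who want a concrete reference point, here is a textbook LLL reduction in exact rational arithmetic. This is the classical algorithm, whose cost grows quadratically with the bit-size β, not the quasi-linear L1 variant described above; the example basis is made up.

```python
# Textbook LLL reduction (classical algorithm, NOT L1); exact rational
# arithmetic, assumes the input vectors are linearly independent.
from fractions import Fraction

def lll(basis, delta=Fraction(3, 4)):
    b = [[Fraction(x) for x in v] for v in basis]
    n = len(b)

    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def gram_schmidt():
        bstar = []
        mu = [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = b[i][:]
            for j in range(i):
                mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
                v = [x - mu[i][j] * y for x, y in zip(v, bstar[j])]
            bstar.append(v)
        return bstar, mu

    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):            # size-reduce b_k against b_j
            _, mu = gram_schmidt()
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
        bstar, mu = gram_schmidt()
        # Lovasz condition: ||b*_k||^2 >= (delta - mu_{k,k-1}^2) * ||b*_{k-1}||^2
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1
        else:
            b[k], b[k - 1] = b[k - 1], b[k]       # swap and step back
            k = max(k - 1, 1)
    return [[int(x) for x in v] for v in b]

print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))    # hypothetical example basis
```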
We present a lattice reduction algorithm designed for classical applications of lattice reduction. The applications are for lattice bases with a generalized knapsack-type structure, where the target vectors are boundably short. For such applications, the complexity of the algorithm improves on traditional lattice reduction by replacing some dependence on the bit-length of the input vectors with some dependence on the bound for the output vectors. If the bit-length of the target vectors is unrelated to the bit-length of the input, then our algorithm is only linear in the bit-length of the input entries, which is an improvement over the quadratic-complexity floating-point LLL algorithms. To illustrate
the usefulness of this algorithm we show that a direct application to factoring univariate polynomials
over the integers leads to the first complexity bound improvement since 1984. A second application is
algebraic number reconstruction, where a new complexity bound is obtained as well.
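To make the "knapsack-type structure" concrete, the following toy sketch (a hypothetical subset-sum instance, not the paper's algorithm) builds such a basis: the input entries are large because of the scaling factor, yet the target vector encoding a solution is tiny, which is exactly the gap such reduction algorithms exploit.

```python
# Toy sketch of a knapsack-type lattice basis (hypothetical instance, not the
# paper's algorithm).  The rows carry huge scaled entries, but the lattice
# vector encoding the subset-sum solution is very short.
weights = [366, 385, 392, 401, 422]           # hypothetical knapsack weights
target = 777                                  # 385 + 392 = 777

n = len(weights)
K = 10**6                                     # scaling factor on the last column
basis = [[1 if i == j else 0 for j in range(n)] + [K * weights[i]]
         for i in range(n)]
basis.append([0] * n + [K * target])

# The solution x = (0, 1, 1, 0, 0) gives the lattice vector
# sum_i x_i * basis[i] - basis[n] = (0, 1, 1, 0, 0, 0), far shorter than any
# input row.
x = [0, 1, 1, 0, 0]
combo = [sum(x[i] * basis[i][j] for i in range(n)) - basis[n][j]
         for j in range(n + 1)]
print(combo)                                  # -> [0, 1, 1, 0, 0, 0]
```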