
    P. Kornerup

    Semantics are given for the four elementary arithmetic operations and the square root, to characterize what we term exact floating point operations. The operands of the arithmetic operations and the argument of the square root are all floating point numbers in one format. In every case, ...
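    As an illustration of what an exact floating point operation can return, the classic two-sum transformation expresses a rounded sum together with its rounding error, both as floating point numbers. The Python sketch below is only a hedged example of this general idea, not the semantics defined in the paper.

        def two_sum(a: float, b: float) -> tuple[float, float]:
            """Error-free addition: s is the rounded sum, e the rounding error, a + b == s + e exactly."""
            s = a + b
            b_virtual = s - a                            # part of b that was absorbed into s
            e = (a - (s - b_virtual)) + (b - b_virtual)  # rounding error of the addition
            return s, e

        # 2**-60 is lost when rounded into 1.0, but the error term retains it exactly.
        s, e = two_sum(1.0, 2.0**-60)
        assert (s, e) == (1.0, 2.0**-60)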
    Closed approximate rational arithmetic systems are described and their number theoretic foundations are surveyed. The arithmetic is shown to implicitly contain an adaptive single-to-double precision natural rounding behavior that acts to recover true simple fractional results. The probability of such recovery is investigated and shown to be quite favorable.
    Fraction number systems characterized by fixed-slash and floating-slash formats are specified. The structure of arithmetic over such systems is prescribed by the rounding obtained from "best rational approximation." Multitiered precision hierarchies of both the fixed-slash and floating-slash type are described and analyzed with regard to their support of both exact rational and approximate real computation.
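    The rounding by best rational approximation can be illustrated with a small sketch. Assuming a fixed-slash format is characterized by a bound on the denominator, Python's fractions.Fraction.limit_denominator performs this style of best-rational rounding; the function name and bound below are hypothetical and serve only to illustrate the rounding step, not the paper's formats.

        from fractions import Fraction

        def round_fixed_slash(x: Fraction, max_den: int) -> Fraction:
            """Best rational approximation of x with denominator bounded by max_den (hypothetical format bound)."""
            return x.limit_denominator(max_den)

        # A slightly perturbed 1/3 is rounded back to the simple fraction 1/3.
        perturbed = Fraction(1, 3) + Fraction(1, 10**12)
        print(round_fixed_slash(perturbed, 1000))   # 1/3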
    Data flow computers promise efficient parallel computation limited in speed only by data dependencies in the calculation being performed. At the Massachusetts Institute of Technology Laboratory for Computer Science, the Computation Structures Group is ...
    A binary representation of the rationals derived from their continued fraction expansions is described and analysed. The concepts “adjacency”, “mediant” and “convergent” from the literature on Farey fractions and continued fractions are suitably extended to provide a foundation for this new binary representation system. Worst case representation-induced precision loss for any real number by a fixed length representable number of the system is shown to be at most 19% of bit word length, with no precision loss whatsoever induced in the representation of any reasonably sized rational number. The representation is supported by a computer arithmetic system implementing exact rational and approximate real computations in an on-line fashion.
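    The continued fraction expansion that such a representation starts from can be computed with the Euclidean algorithm. The sketch below shows only the partial quotients of a rational; it does not reproduce the paper's bit-level encoding or its adjacency and mediant machinery.

        def continued_fraction(p: int, q: int) -> list[int]:
            """Partial quotients of p/q (q > 0) via the Euclidean algorithm."""
            terms = []
            while q:
                a, r = divmod(p, q)
                terms.append(a)
                p, q = q, r
            return terms

        print(continued_fraction(355, 113))   # [3, 7, 16]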
    In a class of cryptosystems, fast computation of modular exponentials is essential. The authors present a parallel version of a well-known exponentiation algorithm that halves the worst-case computing time. It is described how a high-radix modulo multiplication can be implemented by interleaving a serial-parallel multiplication scheme with an SRT division scheme. The problems associated with high radices are efficiently solved by the use of a redundant representation of intermediate operands. It is shown how the algorithms can be realized as a highly regular VLSI circuit. Simulations indicate that a radix-32 implementation of the algorithms is capable of computing exponentials on 512-bit operands in 3.2 ms, more than five times faster than other known implementations.
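    The well-known algorithm referred to is presumably binary square-and-multiply; in its right-to-left form the squaring and the conditional multiplication of each iteration are independent and can be issued to two multipliers at once, which is the usual source of a halved worst-case time. The sketch below shows only the sequential textbook version, not the high-radix modular multiplication or the SRT-interleaved scheme of the paper.

        def mod_exp(base: int, exponent: int, modulus: int) -> int:
            """Right-to-left binary exponentiation modulo modulus."""
            result = 1
            square = base % modulus
            while exponent:
                if exponent & 1:
                    result = (result * square) % modulus   # multiply step
                square = (square * square) % modulus       # square step; independent of the multiply above
                exponent >>= 1
            return result

        print(mod_exp(7, 560, 561))   # 1, since 561 is a Carmichael number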
    An overview of the MATHILDA system, by Peter Kornerup, University of Aarhus, Aarhus, Denmark, and Bruce D. Shriver, University of Southwestern Louisiana, Lafayette, Louisiana. Abstract: A dynamically microprogrammable processor called MATHILDA is described. ...