Plenary talks

Talks by the invited speakers:

Jean-Claude Bajard (IMJ-PRG, Sorbonne Université, Paris) – Representation of Numbers and Cryptography

Cryptography requires a lot of modular arithmetic and exponentiation on large numbers. We will present three types of representations. The first, based on the Chinese remainder theorem, allows calculations to be distributed over small operators. The second category is directly adapted to modular arithmetic and is modulo-dependent. The last category is useful for exponentiation calculations.
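
To make the first kind of representation concrete, here is a minimal sketch (with toy moduli chosen purely for illustration) of a residue number system based on the Chinese remainder theorem: a large integer is stored as its residues modulo small pairwise-coprime moduli, additions and multiplications act independently on each small residue, and the result is recovered at the end by CRT reconstruction.

```python
from math import prod

# Illustrative small pairwise-coprime moduli; real RNS bases use
# machine-word-sized moduli chosen for cheap reduction.
MODULI = (13, 17, 19, 23)
M = prod(MODULI)          # dynamic range of the representation

def to_rns(x):
    """Represent x by its residues modulo each small modulus."""
    return tuple(x % m for m in MODULI)

def rns_add(a, b):
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, MODULI))

def rns_mul(a, b):
    """Multiply channel by channel: no carries cross between the small moduli."""
    return tuple((ai * bi) % m for ai, bi, m in zip(a, b, MODULI))

def from_rns(r):
    """Chinese remainder reconstruction of the integer in [0, M)."""
    x = 0
    for ri, m in zip(r, MODULI):
        Mi = M // m
        x += ri * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m) = inverse of Mi mod m
    return x % M

a, b = 1234, 5678
assert from_rns(rns_add(to_rns(a), to_rns(b))) == (a + b) % M
assert from_rns(rns_mul(to_rns(a), to_rns(b))) == (a * b) % M
```

Each channel fits in a small word, so the work can indeed be spread over small parallel operators, which is the point of the CRT-based representation mentioned above.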

Valérie Berthé (IRIF, CNRS, Paris) – An Arithmetical Viewpoint on Conversion Theorems

Lochs' theorem and its generalizations are conversion theorems that express the number of digits determined in one expansion of a real number as a function of the number of digits known in some other expansion. In its original version, Lochs' theorem related decimal expansions to continued fraction expansions. Such conversion results can also be stated for general numeration systems and sequences of interval partitions. We state Lochs-type theorems which work even in the case of zero entropy, in particular for several important sequences of partitions of a number-theoretic nature, such as the partition provided by the three-distance theorem related to the table maker's dilemma.

We also consider higher-dimensional partitions involving e.g. Ostrowski numeration.

This is a joint work with Eda Cesaratto, Pablo Rotondo and Martín D. Safe.
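
For reference, a hedged reminder of the classical statement behind these generalizations (the zero-entropy results of the talk go well beyond it): for Lebesgue-almost every real number x, if m(n) denotes the number of continued fraction partial quotients of x determined by its first n decimal digits, then

\[
  \lim_{n\to\infty} \frac{m(n)}{n} \;=\; \frac{6\,\ln 2\,\ln 10}{\pi^{2}} \;\approx\; 0.9703,
\]

so knowing 100 decimal digits determines roughly 97 partial quotients. The constant is the ratio of the entropy of the decimal shift, \(\ln 10\), to that of the Gauss map, \(\pi^{2}/(6\ln 2)\); in the zero-entropy settings considered in the talk this entropy-ratio formula degenerates, which is exactly what the Lochs-type theorems above address.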

Sylvie Boldo (Toccata, LMF, Inria Saclay) – Mechanical and User-Friendly Lemmas for Lagrange Finite Elements and Floating-Point Errors in Rocq

This talk is about the formal verification of numerical programs and algorithms. We will go from the motivations and usefulness of a higher guarantee, to the actual verification of C programs, through the formalization in Rocq of both computer arithmetic and applied mathematics. Applications will range from floating-point algorithms and programs from the literature to the work-in-progress formalization of the finite element method.

Javier Bruguera (Arm, Cambridge, UK) – Floating-Point in Transition: Bridging Scientific Computation and AI Acceleration

For decades, floating-point units (FPUs) have been the backbone of high-performance computing, carefully optimized for scientific workloads that demanded precision and standardization. But the rise of new applications — especially machine learning and AI — is reshaping these foundations. In this talk we'll examine today's disruptive shift toward low-bit, domain-specific formats and novel operations tailored for AI acceleration. Looking at the topic from both academic and industrial perspectives, we will highlight how research insights are shaping real-world architectures — and how industry's urgent needs are pushing new research directions. Finally, we'll look ahead at what these trends mean for the future of floating-point design: more flexibility, more heterogeneity.
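
As a concrete, hedged illustration of what such low-bit formats look like (the specific formats discussed in the talk may differ), here is a sketch of a decoder for a tiny 1-4-3 floating-point layout in the style of FP8 E4M3: 1 sign bit, 4 exponent bits, 3 mantissa bits, exponent bias 7.

```python
import math

def decode_minifloat(byte, exp_bits=4, man_bits=3, bias=7):
    """Decode one byte as a tiny IEEE-style float:
    1 sign bit, `exp_bits` exponent bits, `man_bits` mantissa bits."""
    sign = -1.0 if (byte >> (exp_bits + man_bits)) & 1 else 1.0
    exp  = (byte >> man_bits) & ((1 << exp_bits) - 1)
    man  = byte & ((1 << man_bits) - 1)
    if exp == 0:                              # zero and subnormals
        return sign * man * 2.0 ** (1 - bias - man_bits)
    if exp == (1 << exp_bits) - 1:            # IEEE-style Inf/NaN encodings
        return sign * math.inf if man == 0 else math.nan
    return sign * (1 + man / (1 << man_bits)) * 2.0 ** (exp - bias)

# Enumerate every encoding of the 1-4-3 format.
finite = sorted({v for b in range(256) if math.isfinite(v := decode_minifloat(b))})
print(len(finite), "distinct finite values; largest =", finite[-1])  # largest = 240.0
# AI-oriented E4M3 variants reclaim the all-ones exponent for finite values
# (no infinities, a single NaN pattern), stretching the range to 448.
```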

Florent de Dinechin (CITI, Insa, Lyon) – Elementary Functions (or not) ... and Implementation

In a futile, pretentious and ultimately pointless endeavour to find interesting topics about the implementation of elementary functions in the shadow of Jean-Michel Muller's luminary book "Elementary Functions: Algorithms and Implementation", this talk will survey some recent generic techniques targeting low-precision hardware.
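
By way of a hedged example of one generic technique of this kind (chosen by the editor, not necessarily one covered in the talk): table-plus-polynomial evaluation, where the high bits of the argument index a small table and the low bits feed a very low-degree polynomial, as is common in low-precision hardware operators.

```python
import math

T_BITS = 4                                                      # table index bits (illustrative)
TABLE  = [2.0 ** (k / 2 ** T_BITS) for k in range(2 ** T_BITS)] # 2**x_hi

def exp2_lowprec(x):
    """Approximate 2**x for x in [0, 1): a 16-entry table covers the high
    bits, a degree-2 polynomial handles the remaining low bits."""
    k = int(x * 2 ** T_BITS)            # high part -> table index
    x_lo = x - k / 2 ** T_BITS          # low part, 0 <= x_lo < 2**-T_BITS
    t = math.log(2.0) * x_lo
    return TABLE[k] * (1.0 + t + 0.5 * t * t)   # 2**x_lo = e**t ~ 1 + t + t^2/2

worst = max(abs(exp2_lowprec(i / 4096) - 2.0 ** (i / 4096)) / 2.0 ** (i / 4096)
            for i in range(4096))
print(f"worst relative error ~ 2^{math.log2(worst):.1f}")       # about 2^-16
```

In an actual operator one would use a minimax polynomial and carefully truncated table entries rather than a plain Taylor expansion, but the table/polynomial split is the same.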

Miloš Ercegovac (University of California, Los Angeles) – On Some Ideas Used in Arithmetic

We begin with a discussion of seminal ideas that were introduced in the early years of the field and influenced the development of computer arithmetic. Probably the most significant one was the concept of redundancy in the representation of numbers, which eliminated the use of carry chains and allowed the use of low-precision estimates in fast division and square root, among other things. Its early development took place at the University of Illinois in the 1950s and 1960s and continued into the 1980s. A system of residue representation originated in Prague in the 1950s and influenced algorithms in signal processing and cryptography. Another idea, conditionally computing sums, was successfully adapted from addition to division and square-root algorithms. There is also the idea of recoding to a higher radix combined with tree-structured arrays of adders, which forms the basis of fast multipliers.
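
As a hedged illustration of the redundancy idea (not necessarily the specific schemes discussed in the talk), here is a carry-save adder in miniature: a running total is kept as an unevaluated (sum, carry) pair, so adding a new operand costs only one level of full adders per bit, and no carry propagates until a single final conversion.

```python
def csa_add(s, c, x):
    """Add operand x into the redundant (sum, carry) pair.
    Bitwise full adders: one constant-depth step, no carry chain."""
    new_s = s ^ c ^ x                              # per-bit sum
    new_c = ((s & c) | (s & x) | (c & x)) << 1     # per-bit carry, shifted left
    return new_s, new_c

def csa_sum(operands):
    s, c = 0, 0
    for x in operands:
        s, c = csa_add(s, c, x)
    return s + c        # a single carry-propagating addition at the very end

operands = [0x1234, 0xBEEF, 0x0FF1CE, 42]
assert csa_sum(operands) == sum(operands)
```

This is why redundant representations remove the carry chain from the critical path; multi-operand adders and multiplier trees rely on exactly this trick.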

We will talk about the left-to-right method of computing, which allows overlap between the communication of operands/results and the computation itself. Its key property is that it can be gracefully terminated when sufficient precision is attained. This method is the basis of composite algorithms for computing complex functions, such as the rotation factors in matrix computations, and for solving linear systems. It is efficient for recursive algorithms. The left-to-right linear system solver computes polynomials, rational functions and general expressions in time directly proportional to the precision.
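
A hedged, minimal sketch of the left-to-right (most-significant-digit-first) style of computation: a radix-2 digit-recurrence division with the redundant digit set {-1, 0, 1}. Each iteration emits one more quotient digit from the most significant end, using only a low-precision inspection of the residual, so the process can be stopped as soon as the accumulated precision is sufficient.

```python
from fractions import Fraction as F

def msd_first_divide(x, d, digits):
    """Radix-2 digit-recurrence division with the redundant digit set {-1, 0, 1}.
    Assumes 1/2 <= x < 1 and 1/2 <= d < 1. Yields successively refined partial
    quotients of x/d, most significant digit first; stopping after j digits
    leaves an error of at most 2**(1-j)."""
    w = F(x) / 2                    # scaled residual, invariant |w| <= d
    q = F(0)                        # accumulates x/(2*d), digit by digit
    for j in range(1, digits + 1):
        two_w = 2 * w
        if two_w >= F(1, 2):        # a low-precision estimate of w suffices here
            qj = 1
        elif two_w <= F(-1, 2):
            qj = -1
        else:
            qj = 0
        w = two_w - qj * F(d)       # update the residual
        q += F(qj, 2 ** j)
        yield j, 2 * q              # partial value of x/d so far

x, d = F(5, 8), F(3, 4)             # exact quotient is 5/6
for j, approx in msd_first_divide(x, d, 24):
    if j in (4, 12, 24):
        print(j, "digits:", float(approx), " error:", float(abs(x / d - approx)))
```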

After making some comments on arithmetic in AI/machine learning, we will conclude by talking of some work done with Jean-Michel and his students.

Guillaume Hanrot (CryptoLab, Lyon) – The Table Maker's Dilemma: Old Stories and New Results

The Table Maker's Dilemma was introduced by William Kahan in the context of computing correctly rounded values of mathematical functions. Put simply, the problem asks: given a function f, how close can f(x), for floating-point arguments x, come to another floating-point number?

This question has always been especially dear to Jean-Michel Muller, who made several major contributions to the field and inspired many colleagues, throughout his career, to explore it further.

In this talk, we will survey the rich history of the Table Maker's Dilemma—from its early developments to more recent advances—and examine its impact on the implementation of correctly rounded mathematical functions. In particular, we will highlight recent joint results obtained with Nicolas Brisebarre.
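
To make the question concrete, here is a hedged toy version of the exhaustive search, with an artificially small precision so that it runs instantly (the techniques surveyed in the talk are far more sophisticated): for every floating-point x in one binade of a 12-bit format, measure how close exp(x) comes to a representable number of that format.

```python
from decimal import Decimal, getcontext
import math

getcontext().prec = 40        # high-precision reference values for the toy search
P = 12                        # toy significand precision, in bits

worst = None
for k in range(2 ** (P - 1)):                 # all toy floats x in [1, 2)
    x = 1 + Decimal(k) / 2 ** (P - 1)
    y = x.exp()                               # high-precision exp(x)
    _, e = math.frexp(float(y))               # binade of the result
    ulp = Decimal(2) ** (e - P)               # spacing of toy floats around y
    t = y / ulp
    dist = abs(t - t.to_integral_value())     # distance to the nearest toy float, in ulps
    if worst is None or dist < worst[0]:
        worst = (dist, float(x))

print(f"hardest argument x = {worst[1]}: exp(x) lands within "
      f"{float(worst[0]):.2e} ulp of a representable number")
```

For round-to-nearest one measures the distance to the midpoints between representable numbers instead; the real difficulty, addressed by the work surveyed in the talk, is doing this for full binary64 precision, where brute-force enumeration is far out of reach.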

John Harrison (Amazon Web Services, Portland) – Learning about Computer Arithmetic by Formally Verifying It

I have had the experience in my career of launching into formal verification work in two distinct fields of computer arithmetic despite initially having almost no knowledge of the underlying subject matter.

When I started floating-point verification at Intel in 1998, I knew almost nothing about floating-point numerics, but I tentatively started on verification work and gradually picked up or pieced together enough knowledge to start contributing to the design. The focus of trying to write formal proofs gave this apprenticeship a somewhat unusual flavour, always making me look for precise definitions and clear theorems. I feel that this helped me to piece together and clarify the scattered results in the literature, but how much easier it would have been if clear textbooks like Jean-Michel's "Elementary Functions" had been available at the time!

Starting in 2018 I have had a curiously similar experience at AWS, launching into formal verification of cryptographic arithmetic primitives despite essentially no knowledge of the field and its special requirements, like the constant-time programming restriction. Yet a few more years on, I've again picked up enough knowledge to help set up an open-source library of formally verified cryptographic software that is now being used millions of times a day. In this case I still don't know of any single textbook that would have helped navigate the special difficulties of getting into the field, which perhaps means that someone should write one.

Mioara Joldes (LAAS, CNRS, Toulouse) –

[Not available yet.]

Théo Mary (LIP6, CNRS, Paris) – Multiword and Multimodular Algorithms for Emulating High Accuracy with Low Precision

The growing availability of lower precision arithmetics has led to the development of mixed precision algorithms in numerical linear algebra. Most recently we have seen the emergence of AI-specialized hardware such as GPU accelerators implementing extremely fast matrix multiply–accumulate operations, but using extremely low precisions such as 16-, 8- or even 4-bit floating-point or integer formats. This motivates a new class of mixed precision algorithms aiming to emulate high accuracy with low precision computations. In this talk, we will review the dense and rapidly growing literature on precision emulation. We will notably compare two competing classes of methods respectively based on multiword and multimodular decompositions, and discuss in which situation each method is preferable.
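
As one hedged illustration of the multiword side (the algorithms and hardware targets compared in the talk are more general), here are the classical double-word building blocks: error-free transformations split each sum and product into an unevaluated pair of binary64 words, and chaining them gives a dot product that behaves as if computed in roughly twice the working precision.

```python
import random
from fractions import Fraction

def two_sum(a, b):
    """Knuth's error-free transformation: a + b = s + e exactly."""
    s = a + b
    bp = s - a
    e = (a - (s - bp)) + (b - bp)
    return s, e

def split(a):
    """Veltkamp splitting of a binary64 value into two 26-bit halves."""
    c = (2.0 ** 27 + 1.0) * a
    hi = c - (c - a)
    return hi, a - hi

def two_prod(a, b):
    """Dekker's error-free product: a * b = p + e exactly (barring overflow)."""
    p = a * b
    a_hi, a_lo = split(a)
    b_hi, b_lo = split(b)
    e = ((a_hi * b_hi - p) + a_hi * b_lo + a_lo * b_hi) + a_lo * b_lo
    return p, e

def dot2(xs, ys):
    """Compensated (double-word) dot product, Ogita-Rump-Oishi style."""
    p, s = two_prod(xs[0], ys[0])
    for x, y in zip(xs[1:], ys[1:]):
        h, r = two_prod(x, y)
        p, q = two_sum(p, h)
        s += q + r                 # the second word accumulates the error terms
    return p + s

random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(1000)]
ys = [random.uniform(-1, 1) for _ in range(1000)]
exact = sum(Fraction(x) * Fraction(y) for x, y in zip(xs, ys))
plain = sum(x * y for x, y in zip(xs, ys))
print("plain float64 error:", float(abs(Fraction(plain) - exact)))
print("double-word error:  ", float(abs(Fraction(dot2(xs, ys)) - exact)))
```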

Peter Tang (Rivos, New York) – Rounding Error Statistics as Numerics Signature

The current proliferation of application-specific computer processors brings us many customized computer arithmetic designs with various precisions and/or numerical properties. Increasingly, application developers find themselves carrying out numerical quality assurance, and the tools for this task are quite limited. A common method, which requires the computed result of a kernel operation on an example input to be accurate to within a certain threshold, is not robust: not only is a good threshold hard to establish, but the approach itself is fundamentally unreliable, since computation errors are so sensitive to minute computational differences that they can differ substantially even between numerics of similar quality. In this talk, we show that the statistics of computation errors, rather than a few isolated instances, are robust enough to be thought of as signatures of the underlying numerics. We can therefore compare these statistics against reference signatures for more robust quality assurance, use observed statistics as a diagnostic tool, or incorporate them into numerics and arithmetic design.
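
A hedged sketch of the idea (the kernels and statistics used in the talk are surely more elaborate): take a simple kernel, here a float32 dot product, measure its rounding error in ulps against a high-precision reference on many random inputs, and summarize the errors by their mean and standard deviation. Individual errors jump around from input to input, but the summary statistics are stable and characterize the numerics.

```python
import random, statistics
from fractions import Fraction
import numpy as np

def error_stats(n=200, length=256, seed=0):
    """Mean and standard deviation, in float32 ulps, of the rounding error
    of a float32 dot-product kernel over n random inputs."""
    rng = random.Random(seed)
    errs = []
    for _ in range(n):
        xs = np.float32([rng.uniform(-1, 1) for _ in range(length)])
        ys = np.float32([rng.uniform(-1, 1) for _ in range(length)])
        computed = float(np.dot(xs, ys))                       # kernel under test
        exact = sum(Fraction(float(x)) * Fraction(float(y))    # exact reference
                    for x, y in zip(xs, ys))
        ulp = abs(float(np.spacing(np.float32(float(exact)))))  # ulp near the result
        errs.append(float(Fraction(computed) - exact) / ulp)
    return statistics.mean(errs), statistics.stdev(errs)

# Individual errors change completely between input sets, but the summary
# statistics barely move: a "signature" of the underlying numerics.
print(error_stats(seed=0))
print(error_stats(seed=1))
```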

Arnaud Tisserand (Lab-STICC, CNRS, Brest) – Electrical Activity in Arithmetic Operations vs Security

Electrical activity in circuits, including processors running code, partly comes from the transitions of the signals used during arithmetic operations. Some side-channel attacks analyze this activity to deduce internal information. The talk will present some links between arithmetic algorithms, representations of numbers, circuit architecture and information leakage.
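
As a hedged toy model of this link (real attacks work on measured power or electromagnetic traces, not on software counters), the sketch below simulates the switching activity of a square-and-multiply modular exponentiation by logging the Hamming distance between successive values of the accumulator; even this crude "trace" exposes which secret exponent bits trigger an extra multiplication.

```python
def hamming_distance(a, b):
    return bin(a ^ b).count("1")

def square_and_multiply_trace(base, exponent, modulus):
    """Left-to-right square-and-multiply exponentiation; the 'trace' logs one
    (operation, register-update activity) sample per arithmetic operation."""
    acc, trace = 1, []
    for bit in bin(exponent)[2:]:
        nxt = (acc * acc) % modulus              # square: done for every bit
        trace.append(("S", hamming_distance(acc, nxt)))
        acc = nxt
        if bit == "1":                           # multiply: only when the bit is 1
            nxt = (acc * base) % modulus
            trace.append(("M", hamming_distance(acc, nxt)))
            acc = nxt
    return acc, trace

secret = 0b101101
result, trace = square_and_multiply_trace(0xC0FFEE, secret, (1 << 61) - 1)
assert result == pow(0xC0FFEE, secret, (1 << 61) - 1)
# The operation pattern alone already leaks the exponent: each "SM" is a 1 bit,
# each lone "S" is a 0 bit; the activity samples model data-dependent leakage.
print("".join(op for op, _ in trace), [activity for _, activity in trace])
```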
