"Set-Membership Computation" and "Abstract Interpretation" Day at CNAM

MEA working group (méthodes ensemblistes pour l'automatique, i.e. set-membership methods for control)



Organizers: Eric Goubault, Sylvie Putot, Alexandre Chapoutot, Nacim Ramdani and Luc Jaulin

 

This meeting of the GT MEA was held on Thursday, 19 March 2015 at the Conservatoire National des Arts et Métiers (CNAM), in the Salle des Conseils, 2 rue Conté, Access 37, 1st floor,

métro Arts et Métiers, lines 3 and 11.




Motivation. Many algorithms and concepts appear to be common to our two communities. The goal of this day is therefore to get to know each other better, to find new application domains for our methods, and to find new tools for solving the problems we are interested in.





Program



The day runs from 10:00 to 17:00. The program is given below. Each talk lasts about 30 minutes, with some time (15 minutes) for discussion.





10:00. Eric Goubault and Sylvie Putot. Slides.

Title: Static Analysis of Programs and Numerical Systems by Abstract Interpretation

Abstract: In this talk we will present the static analysis methods, directly inspired by a number of set-membership methods, that we have developed since 2001, first at CEA and then at Ecole Polytechnique. The static analysis we have developed aims at proving the correctness of programs (or systems), viewed as particular dynamical systems, most often in discrete time (or in discrete time coupled with continuous time, as for hybrid systems). These systems also manipulate numerical values in finite precision. Throughout the talk we will show how we have used, and adapted to our problems, techniques and ideas from affine arithmetic, ellipsoidal arithmetic and Taylor methods, for the over-approximation of the reachable sets of variable values in a numerical program; from Kaucher arithmetic and generalized intermediate value theorems (à la Goldsztejn) for under-approximation; from P-boxes and Dempster-Shafer structures for the analysis of systems with mixed probabilistic and non-deterministic uncertainties; from guaranteed integration methods for the analysis of hybrid systems; and from methods for reasoning about finite-precision arithmetic. Beyond the non-essential differences of vocabulary between static analysis and set-membership methods, static analysis tends to compute invariant sets over an infinite horizon (and thus not only approximations of reachable sets over a finite horizon) by a fixed-point computation; one must therefore develop efficient ways of computing good approximations of set-valued numerical schemes, and of unions and intersections within certain classes of sets (and thus not only approximations of the arithmetic), which will be made explicit in this talk.
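As a toy illustration of the fixed-point computation mentioned at the end of this abstract (not of the actual analyses developed at CEA and Ecole Polytechnique), the following Python sketch computes an interval invariant of a small integer loop by Kleene iteration with widening; the example loop, transfer functions and names are invented for the illustration.

# Minimal sketch: interval invariant of the integer loop
#   x = 0; while x < 100: x = x + 1
# computed by Kleene iteration with widening (illustration only).

INF = float("inf")

def join(a, b):
    """Interval union (convex hull)."""
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(a, b):
    """Standard interval widening: an unstable bound jumps to infinity."""
    return (a[0] if a[0] <= b[0] else -INF,
            a[1] if a[1] >= b[1] else INF)

def guard(a, c):
    """Intersection with the guard x < c over the integers (None if empty)."""
    lo, hi = a[0], min(a[1], c - 1)
    return (lo, hi) if lo <= hi else None

def body(a):
    """Abstract transfer function of the assignment x = x + 1."""
    return (a[0] + 1, a[1] + 1)

def loop_invariant():
    x = (0, 0)                           # abstract value at the loop head
    while True:
        inside = guard(x, 100)           # values that may enter the loop body
        new = x if inside is None else join(x, body(inside))
        if new == x:                     # post-fixpoint reached: x is invariant
            return x
        x = widen(x, new)                # widening forces convergence

print(loop_invariant())  # (0, inf): sound but coarse; a descending (narrowing)
                         # iteration would refine it towards (0, 100)

The widening here is deliberately crude; the point is only the shape of the computation (join along the loop, widening to force convergence, a post-fixpoint as invariant).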





10:45. Alexandre Chapoutot, ENSTA Paris. Slides.

Title: Interval analysis and abstract interpretation: a matter of vocabulary?

Abstract: We propose to relate the basic building blocks of interval analysis (e.g., inclusion functions) with those of abstract interpretation (e.g., abstractions). The goal is to highlight possible commonalities and/or differences between the two approaches.
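As a purely illustrative bridge between the two vocabularies (my own toy example, not taken from the talk), the sketch below defines the natural interval extension of f(x) = x * (1 - x): seen from interval analysis it is an inclusion function; seen from abstract interpretation it is a sound abstract transformer on the interval domain.

# Intervals are pairs (lo, hi), read as the set {x : lo <= x <= hi}.

def i_sub(a, b):
    return (a[0] - b[1], a[1] - b[0])

def i_mul(a, b):
    p = (a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1])
    return (min(p), max(p))

def f(x):
    """Concrete function f(x) = x * (1 - x)."""
    return x * (1 - x)

def F(X):
    """Natural interval extension of f: an inclusion function, or equivalently
    a sound abstraction of f on the interval domain."""
    return i_mul(X, i_sub((1.0, 1.0), X))

print(F((0.0, 1.0)))   # (0.0, 1.0): encloses the true range [0, 0.25]
print(f(0.5))          # 0.25, the true maximum, reached at x = 0.5

The over-approximation (the true range is only [0, 0.25]) comes from the dependency problem, the same loss of precision that both communities fight with tighter set representations.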







11:30. Luc Jaulin, ENSTA Bretagne. Slides.

Title: Contractors in lattices for solving set-valued constraint satisfaction problems. Video

Abstract: I will consider the resolution of constraint satisfaction problems in the case where the variables of the problem are subsets of Rⁿ. In order to use contractor techniques, we introduce set intervals, which are sets of subsets of Rⁿ with a lower bound and an upper bound with respect to inclusion. We then propose an arithmetic for them. This makes it possible to build projection operators, which are then used for constraint propagation. In order to illustrate the principle and the efficiency of the approach, a test case is provided.
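To give a flavour of these set intervals (a toy rendering over a finite universe of integers, not the actual construction over subsets of Rⁿ), the sketch below represents an unknown set by a pair of bounds ordered by inclusion and lifts union and intersection componentwise; all names are invented for the illustration.

# A "set interval" [lb, ub]: the unknown set X satisfies  lb ⊆ X ⊆ ub.

from dataclasses import dataclass

@dataclass(frozen=True)
class SetInterval:
    lb: frozenset   # elements certainly in X
    ub: frozenset   # elements possibly in X

def lift_intersection(A, B):
    """Set interval enclosing X ∩ Y when X ∈ A and Y ∈ B."""
    return SetInterval(A.lb & B.lb, A.ub & B.ub)

def lift_union(A, B):
    """Set interval enclosing X ∪ Y when X ∈ A and Y ∈ B."""
    return SetInterval(A.lb | B.lb, A.ub | B.ub)

# X is known to contain 1 and to be included in {1, 2, 3};
# Y is known to contain 2 and to be included in {2, 3, 4}.
X = SetInterval(frozenset({1}), frozenset({1, 2, 3}))
Y = SetInterval(frozenset({2}), frozenset({2, 3, 4}))

print(lift_intersection(X, Y))   # lb is empty,  ub = {2, 3}
print(lift_union(X, Y))          # lb = {1, 2},  ub = {1, 2, 3, 4}

A contractor would then shrink ub and grow lb using the constraints of the problem, much as interval contractors shrink the bounds of a box.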



12:15. Lunch.



13:45. Nacim Ramdani, Univ. Orléans, Bourges. Slides.

Title: On the verification of hybrid dynamical systems

Abstract: In this talk, I will give an overview of some recent results in the verification of hybrid discrete-continuous dynamical systems. The hybrid systems modelling paradigm is flexible and naturally covers a very large variety of actual systems, in particular safety-critical embedded or interconnected systems; verifying that such systems behave consistently, e.g. safely, for all their possible behaviours and physical-world interactions is therefore of paramount importance. I will describe recent efforts that extend satisfiability solvers to formulae containing ODEs, allowing a direct encoding and verification of hybrid systems. [A. Eggers, N. Ramdani et al., Improving the SAT Modulo ODE Approach to Hybrid Systems Analysis by Combining Different Enclosure Methods, Software & Systems Modeling 14(1), pp. 121-148, 2015]
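The talk itself concerns SAT modulo ODE; as background on the enclosure methods it combines, here is a toy sketch of the interval Picard operator, one classical way of obtaining a validated enclosure of an ODE flow over one time step. The ODE (x' = -x), the step size and the inflation constant are made up for the illustration.

# One guaranteed-integration step for x' = f(x) = -x (illustration only).

def i_f(a):
    """Interval extension of f(x) = -x."""
    return (-a[1], -a[0])

def picard_enclosure(x0, h, eps=0.05, max_iter=50):
    """Find B with  x0 + [0,h]*f(B) ⊆ B; any such B encloses the solution
    starting anywhere in x0 over the whole time step [0, h]."""
    B = x0
    for _ in range(max_iter):
        fB = i_f(B)
        P = (x0[0] + min(0.0, h * fB[0]),    # x0 + [0,h]*f(B), with h >= 0
             x0[1] + max(0.0, h * fB[1]))
        if B[0] <= P[0] and P[1] <= B[1]:
            return P                          # validated enclosure over [0, h]
        B = (min(B[0], P[0]) - eps,           # inflate and try again
             max(B[1], P[1]) + eps)
    raise RuntimeError("no enclosure found; reduce the step size h")

print(picard_enclosure((0.9, 1.1), h=0.1))    # encloses [0.9*exp(-0.1), 1.1]

Roughly speaking, such enclosures are what allows an SMT-with-ODEs solver to decide whether a constraint involving the continuous trajectory can be satisfied on a given box.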



14:30. Antoine Miné, CNRS & ENS, Paris. Slides.

Title: Numeric abstract domains in static analysis and constraint programming

Abstract: In this talk, we present some core concepts of abstract interpretation and its application to sound static program analysis, focusing in particular on inferring properties of numeric variables. We describe a few well-established numeric abstract domains, including the interval domain but also more complex relational domains such as polyhedra. We will also mention recent joint work with Marie Pelleau, Charlotte Truchet, and Frédéric Benhamou on applying numeric abstract domains to design novel constraint solving methods.
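As a hint of how a numeric abstract domain can also serve as the data structure of a constraint solver (my own toy example, not the method of the joint work mentioned above), the sketch below runs a branch-and-prune loop over boxes for the constraint x² + y² = 1: the interval domain is used both to discard infeasible boxes and to split the remaining ones.

# Toy branch-and-prune over boxes for the constraint x^2 + y^2 = 1
# on the initial box [-2, 2] x [-2, 2] (illustration only).

def sq(a):
    """Interval extension of the square function."""
    lo, hi = a
    if lo >= 0.0:
        return (lo*lo, hi*hi)
    if hi <= 0.0:
        return (hi*hi, lo*lo)
    return (0.0, max(lo*lo, hi*hi))

def width(a):
    return a[1] - a[0]

def split(a):
    m = 0.5 * (a[0] + a[1])
    return (a[0], m), (m, a[1])

def solve(box, tol=0.25):
    """Return small boxes that may contain solutions of x^2 + y^2 = 1."""
    x, y = box
    s_lo = sq(x)[0] + sq(y)[0]            # interval evaluation of x^2 + y^2
    s_hi = sq(x)[1] + sq(y)[1]
    if s_hi < 1.0 or s_lo > 1.0:
        return []                         # the box cannot contain a solution
    if max(width(x), width(y)) <= tol:
        return [box]                      # precise enough: report the box
    if width(x) >= width(y):              # otherwise split the widest variable
        a, b = split(x)
        return solve((a, y), tol) + solve((b, y), tol)
    a, b = split(y)
    return solve((x, a), tol) + solve((x, b), tol)

print(len(solve(((-2.0, 2.0), (-2.0, 2.0)))), "candidate boxes cover the circle")

Replacing the boxes by a relational domain changes the shape of the elements but not the prune-and-split skeleton.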





15:15. Sylvie Putot (speaker) and Eric Goubault. Slides.

Title: Robustness analysis of finite precision implementations.

Abstract: A desirable property of control systems is to be robust to their inputs, that is, small perturbations of the inputs of a system should cause only small perturbations of its outputs. But it is not clear whether this property is preserved at the implementation level, where two close inputs can lead to very different execution paths. The problem becomes particularly crucial when considering finite-precision implementations, where any elementary computation can be affected by a small error. In this context, almost every test is potentially unstable, that is, for a given input, the computed (finite-precision) path may differ from the ideal path (the same computation in real numbers). Still, state-of-the-art error analyses do not consider this possibility and rely on the stable-test hypothesis, i.e. that control flows are identical. If there is a discontinuity between the treatments in the two branches, that is, if the conditional block is not robust to uncertainties, the error bounds can be unsound. I will present an abstract-interpretation-based error analysis of finite-precision implementations which is sound in the presence of unstable tests. It automatically bounds the discontinuity error coming from the difference between the float and real values when the paths diverge, and introduces a new error term labeled by the test that introduced this potential discontinuity. This yields a tractable error analysis, implemented in the static analyzer FLUCTUAT.
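To make the notion of an unstable test concrete (a made-up two-branch program, not the analysis implemented in FLUCTUAT), the sketch below runs the same computation in binary64 floats and in exact rationals: rounding flips the branch taken, and the discontinuity between the two branches turns a tiny rounding error into a difference of about 10.

from fractions import Fraction as Q    # exact rationals stand in for the reals

def f_float(a, b):
    t = a + b                           # binary64 addition, rounded
    return t if t <= 0.3 else t + 10.0  # discontinuous branches

def f_real(a, b):
    t = a + b                           # exact addition
    return t if t <= Q(3, 10) else t + 10

print(f_float(0.1, 0.2))                  # about 10.3: rounding pushes t above
                                          # 0.3, so the "else" branch is taken
print(float(f_real(Q(1, 10), Q(2, 10))))  # 0.3: the exact path takes "then"

# The float/real difference (about 10) is far larger than the rounding error
# on t itself: this is the discontinuity error attached to the unstable test.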





16:00. Guillaume Melquiond. Slides.

Title: Formal verification of a floating-point elementary function

Abstract: Mathematical libraries (so-called libm) provide floating-point approximations of elementary functions such as exponential, trigonometric functions, and so on. Not only should such functions execute safely, they should also return an accurate result, that is, a result close to the ideal mathematical function. There are several causes of inaccuracy: only a finite number of operations can be evaluated (no evaluation of complete power series); these operations have limited precision (floating-point arithmetic); and accuracy is also lost during argument reduction and result reconstruction. Verifying an implementation thus means verifying that none of these sources of error can lead to an inaccurate result. Some automated tools can tackle parts of the verification, but human reasoning is still sorely needed. Indeed, the floating-point code is usually so contrived that reasoning on the code alone is not sufficient to prove its correctness; one also has to know the mathematical reasons that led to choosing this code in the first place. Such correctness proofs are so error-prone that relying on a pen-and-paper approach might not offer sufficient guarantees. This advocates for doing formal proofs with a proof assistant such as Coq. But, while this gives the highest level of confidence, it makes the verification process even longer and more tedious. This talk will show how interval arithmetic can be used to (partly) automate such Coq formal proofs, what the limitations of interval arithmetic are, and how they can be overcome. The talk will use the floating-point approximation of the exponential from Cody and Waite as a running example. While this implementation is no longer state of the art, it nonetheless exhibits some of the challenges one encounters when formally proving the correctness of a mathematical library.
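To show the kind of code under verification, here is a rough Python sketch of an exponential with Cody-and-Waite-style argument reduction and result reconstruction; the ln(2) split below is the classic one, but the polynomial is a plain Taylor polynomial with no formal error bound, so this is nowhere near an actual libm implementation.

import math

LN2_HI = 0.693359375                  # few bits set: k*LN2_HI exact for moderate k
LN2_LO = -2.121944400546905827679e-4  # ln(2) - LN2_HI
INV_LN2 = 1.4426950408889634          # 1 / ln(2)

def my_exp(x):
    k = round(x * INV_LN2)             # exp(x) = 2**k * exp(r)
    r = (x - k * LN2_HI) - k * LN2_LO  # reduced argument, |r| <= ~0.35
    # degree-6 Taylor polynomial of exp around 0 (illustration only)
    p = 1.0 + r*(1.0 + r*(0.5 + r*(1/6 + r*(1/24 + r*(1/120 + r/720)))))
    return math.ldexp(p, k)            # result reconstruction: p * 2**k

for x in (0.5, 1.0, 10.0, -3.7):
    print(x, my_exp(x), math.exp(x))

Proving the accuracy of even such a sketch requires bounding the error of the split reduction, of the polynomial, and of each rounded operation, which is where interval arithmetic inside Coq helps.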





16:45. Discussion and wrap-up

 

17:15. End of the day