
LABORATOIRE DE MATHEMATIQUES, PHYSIQUE ET SYSTEMES
2 Projects
Project (from 2021)
Partners: LIP6; LABORATOIRE DE MATHEMATIQUES, PHYSIQUE ET SYSTEMES; ANEO; Commissariat à l'énergie atomique et aux énergies alternatives (CEA) / Laboratoire d'Intégration des Systèmes et des Technologies; General Electric (France); TRISCALE INNOV; Laboratoire d'informatique Parallélisme, Réseaux et Algorithmique Distribuée; INTEL Corporation SAS / Data Center Group – Entreprise & Gouvernement
Funder: French National Research Agency (ANR)
Project Code: ANR-20-CE46-0009
Funder Contribution: 642,880 EUR

Intensive use of floating-point (FP) arithmetic in today's real software affects the numerical quality and reproducibility of programs. The problem of detecting, localizing, and correcting numerical bugs is entering a new era with the tremendous increase in computational horsepower needed to address new classes of problems, combined with the emergence of new multicore processors and new application-specific floating-point formats. The InterFLOP project aims to provide a modular and scalable platform to both analyze and control the costs of the FP behavior of today's real programs facing these new paradigms. The platform will offer interoperable tools, starting from existing ones developed by the partners and combined with new composite ones, to analyze and optimize FP computations.
By making these tools interoperable, it will be possible to exploit the properties and information specific to each of them in order to build new composite analyses that would otherwise be inaccessible.
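As a minimal illustration of why FP behavior can differ between otherwise equivalent programs (a sketch for context, not taken from the InterFLOP tools): IEEE 754 addition is not associative, so changing the summation order, as a parallel reduction might, can change the result.

```python
# Minimal illustration (not from the InterFLOP tools): IEEE 754 double
# addition is not associative, so summation order changes the result.
a, b, c = 1e16, -1e16, 1.0

left = (a + b) + c   # (0.0) + 1.0 == 1.0
right = a + (b + c)  # c is absorbed: -1e16 + 1.0 rounds back to -1e16

print(left, right)   # 1.0 0.0
```

The same effect appears whenever small terms are added to large accumulators in different orders across runs, which is one source of the non-reproducibility the project targets.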
Project (from 2021)
Partners: LIG; LABORATOIRE DE MATHEMATIQUES, PHYSIQUE ET SYSTEMES; CHROME: Détection, évaluation, gestion des risques chroniques et émergents; Institut de Recherche en Informatique de Toulouse
Funder: French National Research Agency (ANR)
Project Code: ANR-20-CE38-0013
Funder Contribution: 495,000 EUR

LAWBOT is, first, an applied research project in law on the use of automated natural language processing techniques. The LAWBOT project aims to create an artificial case-law intelligence capable of predicting the judicial outcome of a given case by imitating the decisions previously rendered by the courts in similar cases. LAWBOT is based on an artificial neural network for the deep learning of textual characteristics predictive of the judicial outcome. The project targets five results:
- First, the provision of 24,000 decisions annotated by lawyers across 120 classes of claim.
- Second, the creation of an automatic annotator that, from the annotated decisions, will automatically generate a large-scale classification of the legal decisions made public on a daily basis.
- Third, based on the large volume of classified decisions, predicting the class of claim, the outcome, and the sum awarded to the claimant using artificial intelligence (AI) models.
- Fourth, generating legal reasons from the AI models, in other words, summaries of court decisions highlighting the link between the facts that led to the dispute and its outcome.
- Fifth, measuring the ethical, psychological, and economic impacts of using a predictive-justice AI.
LAWBOT also aims to produce fundamental experimental knowledge on the very nature of law and its epistemology.
Indeed, the formulation of predictive models is possible if and only if two fundamental hypotheses about law and jurisprudence hold. First hypothesis: the polysemy of language does not constitute an insurmountable obstacle to modeling jurisdictional decisions in a quantifiable form of computable data. Second hypothesis: there are sufficient statistical correlations to predict the outcome of a decision from explicit factors present in the text, without needing to resort to hidden variables likely to induce legal uncertainty, such as non-quantifiable human factors (the personality of the parties, the performance of the lawyers, the prejudices of the judges). It is assumed that magistrates are rational and consistent, and tend to judge in the same way the cases they consider, from their point of view, to be similar. From these two hypotheses, the quantification of past case law should make it possible to predict the decision of a given judge in a given case, based on the existence of a correlation between a known dependent variable (the judicial outcome) and unknown independent variables (the explicit statement of the dispute, represented as a combination of quantifiable factors). This is a classic experimental approach: identifying a statistical correlation between a known result and controlled variable factors.
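The "imitate similar past cases" idea above can be sketched as a toy nearest-neighbour predictor. This is a deliberately simplified illustration under stated assumptions: the `bag`/`predict` helpers and the annotated cases are hypothetical, and the actual project relies on deep neural networks over lawyer-annotated decisions, not word overlap.

```python
# Toy sketch (hypothetical, NOT the LAWBOT system): predict an outcome by
# returning the outcome of the most textually similar past decision,
# measuring similarity as bag-of-words overlap.

def bag(text):
    """Lowercased set of words in a decision's factual statement."""
    return set(text.lower().split())

def predict(facts, past_cases):
    """Return the outcome of the past case whose facts overlap most with `facts`."""
    best = max(past_cases, key=lambda case: len(bag(facts) & bag(case["facts"])))
    return best["outcome"]

# Hypothetical annotated decisions (classes of claim reduced to outcomes).
past = [
    {"facts": "tenant withheld rent after landlord ignored repairs",
     "outcome": "claim granted"},
    {"facts": "employee dismissed without notice or severance pay",
     "outcome": "damages awarded"},
    {"facts": "buyer refused delivery of conforming goods",
     "outcome": "claim rejected"},
]

print(predict("landlord ignored repeated repair requests by tenant", past))
# → claim granted
```

The sketch makes the second hypothesis concrete: prediction works only insofar as explicit textual factors of the dispute correlate with the outcome of previously decided similar cases.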