Monday, September 9, 2019

UQSay #04

The fourth UQSay seminar, organized by L2S and MSSMAT, will take place on Thursday afternoon, October 3, 2019, at CentraleSupelec Paris-Saclay (Eiffel building, amphi V).

We will have two talks:

14h — Merlin Keller (EDF R&D / PRISME dept.) — [slides]

Bayesian calibration and validation of a numerical model: an overview

Computer experiments are widely used in industrial studies to complement or replace costly physical experiments, in many applications: design, reliability, risk assessment, etc. One main concern with such widespread use is the confidence one can have in the outcome of a numerical simulation that aims to mimic an actual physical phenomenon. Indeed, the result of a simulation is affected by different sources of uncertainty: numerical, parametric, due to modelling and/or extrapolation, to name only a few. Quantifying all sources of uncertainty, and their influence on the result of the study, is the primary goal of the verification, validation and uncertainty quantification (VVUQ) framework. An important step of VVUQ is calibration, wherein uncertain parameters within the computer model are tuned to reduce the gap between computations and available field measurements.
   Over the last few years, EDF R&D has devoted considerable effort to developing generic, mathematically well-grounded and computationally efficient calibration and validation methods suited to industrial applications. Two PhD projects and a post-doc have been devoted to this subject; their main outcomes are reviewed in this talk. We will present the main methods available today to quantify and reduce the uncertainty in the result of a numerical experiment, through calibration (from ordinary least squares (OLS) to sequential strategies adapted to costly black-box models) and through validation, seen as the task of detecting and accounting for a possible systematic model bias (or model discrepancy) term, based on Bayesian model averaging. All proposed methods are illustrated on several industrial case studies, and we discuss available implementations.
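
To make the calibration step concrete, here is a minimal, self-contained sketch (in Python, using NumPy and SciPy) of the two ends of the spectrum mentioned above: an ordinary least squares fit of a toy simulator to synthetic field data, followed by a basic random-walk Metropolis sampler for a Bayesian treatment with a Gaussian likelihood. The toy model, priors and tuning constants are illustrative assumptions only, not the methods or tools developed at EDF R&D.

```python
# Illustrative calibration sketch: OLS fit, then Bayesian calibration via
# random-walk Metropolis. Toy simulator and priors are assumptions.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

def simulator(theta, x):
    """Toy 'computer model': exponential decay with two uncertain parameters."""
    a, b = theta
    return a * np.exp(-b * x)

# Synthetic "field measurements" generated from a reference parameter value.
x_obs = np.linspace(0.0, 5.0, 20)
theta_true = np.array([2.0, 0.7])
sigma_obs = 0.05
y_obs = simulator(theta_true, x_obs) + sigma_obs * rng.standard_normal(x_obs.size)

# --- Step 1: OLS calibration (minimize the model-minus-data residuals) ---
ols = least_squares(lambda th: simulator(th, x_obs) - y_obs, x0=[1.0, 1.0])
print("OLS estimate:", ols.x)

# --- Step 2: Bayesian calibration via random-walk Metropolis ---
def log_post(theta):
    if np.any(theta <= 0.0):           # flat prior on (0, +inf) for both parameters
        return -np.inf
    resid = simulator(theta, x_obs) - y_obs
    return -0.5 * np.sum(resid**2) / sigma_obs**2   # Gaussian likelihood, known noise

n_iter, step = 20000, 0.05
chain = np.empty((n_iter, 2))
theta = ols.x.copy()                   # start the chain at the OLS estimate
lp = log_post(theta)
for i in range(n_iter):
    prop = theta + step * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis acceptance rule
        theta, lp = prop, lp_prop
    chain[i] = theta

burn = n_iter // 4                     # discard the first quarter as burn-in
print("Posterior mean:", chain[burn:].mean(axis=0))
print("Posterior std: ", chain[burn:].std(axis=0))
```

The posterior spread around the OLS point estimate is what a full VVUQ study would propagate to the quantity of interest; sequential strategies and model discrepancy terms, as discussed in the talk, extend this basic scheme to costly black-box models.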

Joint work with Pierre Barbillon, Mathieu Carmassi, Matthieu Chiodetti, Guillaume Damblin, Cédric Gœury, Kaniav Kamary, Éric Parent.

References: arXiv:1711.10016, arXiv:1903.03387, arXiv:1801.01810, arXiv:1808.01932.

15h — Didier Clouteau (MSSMAT) — [slides]

Blending Physics-Based numerical simulations and seismic databases using Generative Adversarial Network (GAN)

On the one hand, High Performance Computing (HPC) allows the numerical simulation of highly complicated physics-based scenarios, accounting, to a certain extent, for Uncertainty Quantification and Propagation (UQ). On the other hand, Machine Learning (ML) techniques and Artificial Neural Networks (ANN) have reached outstanding, though not yet fully understood, prediction capabilities for both supervised and unsupervised learning, at least in fields such as image or speech recognition. Yet ANNs are both prone to overfitting and highly sensitive to outliers, which calls their usefulness in risk assessment studies into question. However, the development of generative networks has made it possible to better constrain ANN responses and quantify the related uncertainty. Adversarial training techniques have also emerged as a generic and efficient way to train these generative networks on huge unlabelled datasets.
   In this talk, we will first show how Generative Adversarial Networks (GAN) can be cast and used in the framework of uncertainty quantification. We will then propose an adversarial Generative Auto-Encoder aiming to transform medium-resolution signals obtained by physics-based methods into broadband seismic signals similar to those recorded in seismic databases.
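
For readers unfamiliar with adversarial training, the sketch below (in Python, assuming PyTorch is available) shows the basic GAN loop on a one-dimensional toy distribution: a generator learns to reproduce the data distribution while a discriminator is trained to tell real from generated samples. The architectures and hyperparameters are illustrative assumptions and bear no relation to the generative auto-encoder presented in the talk.

```python
# Minimal GAN sketch: a generator mimics a 1-D toy distribution standing in
# for "recorded" data; the discriminator drives the adversarial training.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_samples(n):
    """Toy target distribution standing in for recorded data."""
    return torch.randn(n, 1) * 0.5 + 2.0

latent_dim = 8
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

batch = 128
for step in range(2000):
    # --- Discriminator update: separate real from generated samples ---
    x_real = real_samples(batch)
    z = torch.randn(batch, latent_dim)
    x_fake = G(z).detach()             # block gradients into G for this step
    loss_d = bce(D(x_real), torch.ones(batch, 1)) + \
             bce(D(x_fake), torch.zeros(batch, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # --- Generator update: try to fool the discriminator ---
    z = torch.randn(batch, latent_dim)
    loss_g = bce(D(G(z)), torch.ones(batch, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

with torch.no_grad():
    gen = G(torch.randn(5000, latent_dim))
# The generated mean/std should approach the target values (~2.0 and ~0.5).
print("generated mean/std:", gen.mean().item(), gen.std().item())
```

Sampling the trained generator many times is what makes such networks usable for uncertainty quantification: the spread of the generated outputs provides an empirical, data-driven distribution over the quantity of interest.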

Joint work with Filippo Gatti.

References: DOI:10.1785/0120170293 and hal-01860115.

Organizers: Julien Bect (L2S) and Fernando Lopez Caballero (MSSMAT).

No registration is needed, but an email would be appreciated if you intend to come.