Friday, October 18, 2019

UQSay #05, #06 and #07: save the dates!

Save the dates for the upcoming UQSay sessions!
  • October 31: UQSay #05
  • December 5: UQSay #06
  • January 16: UQSay #07
It's always on Thursday afternoons, easy to remember 😎

UQSay #05

The fifth UQSay seminar, organized by L2S and MSSMAT, will take place on Thursday afternoon, October 31, 2019, at CentraleSupelec Paris-Saclay (Eiffel building, amphi IV).

We will have two talks:

14h — Speaker to be announced

Title to be announced

15h — Emmanuel Gobet (CMAP)

Meta-model of a large credit risk portfolio in the Gaussian copula model

We design a meta-model for the loss distribution of a large credit portfolio in the Gaussian copula model. Using both the Wiener chaos expansion on the systemic economic factor and a Gaussian approximation on the associated truncated loss, we significantly reduce the computational time needed for sampling the loss and therefore estimating risk measures on the loss distribution. The accuracy of our method is confirmed by many numerical examples.
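
To give a concrete feel for the setting, here is a minimal Monte Carlo sketch of the loss of a homogeneous portfolio in a one-factor Gaussian copula model. All parameter values (number of obligors, default probability, factor loading) are illustrative and not taken from the talk; the talk's contribution is precisely a meta-model that avoids this kind of brute-force sampling.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)

# Hypothetical homogeneous portfolio (illustrative values):
n, p, rho = 500, 0.01, 0.3       # obligors, default probability, correlation
n_samples = 10_000
threshold = NormalDist().inv_cdf(p)   # default iff latent variable < Phi^{-1}(p)

# One-factor Gaussian copula: X_i = sqrt(rho)*Z + sqrt(1-rho)*eps_i
Z = rng.standard_normal(n_samples)                   # systemic factor
eps = rng.standard_normal((n_samples, n))            # idiosyncratic noise
X = np.sqrt(rho) * Z[:, None] + np.sqrt(1 - rho) * eps
L = (X < threshold).sum(axis=1)                      # portfolio loss (unit exposures)

# A risk measure estimated from the Monte Carlo sample, e.g. 99% Value-at-Risk
var_99 = np.quantile(L, 0.99)
```

Each loss sample requires n latent variables, which is what makes direct sampling expensive for large portfolios and motivates the chaos-expansion meta-model.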

Joint work with Florian Bourgey and Clément Rey.

Reference: hal-02291548v2.

Organizers: Julien Bect (L2S) and Fernando Lopez Caballero (MSSMAT).

No registration is needed, but an email would be appreciated if you intend to come.

Monday, September 9, 2019

UQSay #04

The fourth UQSay seminar, organized by L2S and MSSMAT, will take place on Thursday afternoon, October 3, 2019, at CentraleSupelec Paris-Saclay (Eiffel building, amphi V).

We will have two talks:

14h — Merlin Keller (EDF R&D / PRISME dept.) — [slides]

Bayesian calibration and validation of a numerical model: an overview

Computer experiments are widely used in industrial studies to complement or replace costly physical experiments, in many applications: design, reliability, risk assessment, etc. One main concern with such widespread use is the confidence one can have in the outcome of a numerical simulation that aims to mimic an actual physical phenomenon. Indeed, the result of a simulation is tainted by different sources of uncertainty: numerical, parametric, due to modeling and/or extrapolation, to name only a few. Quantifying all sources of uncertainty, and their influence on the result of the study, is the primary goal of the verification, validation and uncertainty quantification (VVUQ) framework. An important step of VVUQ is calibration, wherein uncertain parameters within the computer model are tuned to reduce the gap between computations and available field measures.
   EDF R&D has devoted considerable effort in the last few years to developing generic, mathematically well-grounded and computationally efficient calibration and validation methods adapted to industrial applications. Two PhD programs and a post-doc have been devoted to this subject, and their main outcomes are reviewed in this talk. We will present the main methods available today to quantify and reduce the uncertainty on the result of a numerical experiment through calibration (from ordinary least squares (OLS) to sequential strategies adapted to costly black-box models) and through validation, seen as the task of detecting and accounting for a possible systematic model bias (or model discrepancy) term, based on Bayesian model averaging. All proposed methods are illustrated on several industrial case studies, and we discuss available implementations.
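
As a tiny illustration of the starting point of that spectrum, here is what OLS calibration looks like on a toy simulator with a single uncertain parameter. The simulator, the "true" parameter value and the synthetic measurements are all hypothetical; industrial codes are of course not linear in their parameters, which is where the more advanced strategies come in.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "computer model" with one uncertain parameter theta (illustrative).
def simulator(x, theta):
    return theta * x**2

# Synthetic field measurements generated with a "true" theta = 2.5
x_obs = np.linspace(0.1, 1.0, 20)
y_obs = simulator(x_obs, 2.5) + 0.05 * rng.standard_normal(20)

# Ordinary least squares calibration: this model is linear in theta,
# so the OLS estimate has the closed form  theta_hat = <f, y> / <f, f>
f = x_obs**2
theta_hat = f @ y_obs / (f @ f)
```

For nonlinear or costly black-box simulators, the closed form disappears and one resorts to iterative or sequential (e.g. surrogate-based) calibration strategies, as discussed in the talk.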

Joint work with Pierre Barbillon, Mathieu Carmassi, Matthieu Chiodetti, Guillaume Damblin, Cédric Gœury, Kaniav Kamary, Éric Parent.

References: arXiv:1711.10016, arXiv:1903.03387, arXiv:1801.01810, arXiv:1808.01932.

15h — Didier Clouteau (MSSMAT) — [slides]

Blending Physics-Based numerical simulations and seismic databases using Generative Adversarial Network (GAN)

On the one hand, High Performance Computing (HPC) allows the numerical simulation of highly complicated physics-based scenarios, accounting, to a certain extent, for Uncertainty Quantification and Propagation (UQ). On the other hand, Machine Learning (ML) techniques and Artificial Neural Networks (ANN) have reached outstanding, but not yet fully understood, prediction capabilities for both supervised and unsupervised learning, at least in fields such as image or speech recognition. Yet ANNs are both prone to overfitting and highly sensitive to outliers, which calls their usefulness in risk assessment studies into question. However, the development of generative networks has made it possible to better constrain ANN responses and to quantify the related uncertainty. Adversarial training techniques have also emerged as a generic and efficient way to train these generative networks on huge unlabelled datasets.
   In this talk, we will first show how Generative Adversarial Networks (GAN) can be cast and used in the framework of uncertainty quantification. We will then propose an adversarial generative auto-encoder that aims to transform medium-resolution signals obtained by physics-based methods into broadband seismic signals similar to those recorded in seismic databases.
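
For readers unfamiliar with adversarial training, here is the mechanism boiled down to its absolute minimum: a one-parameter-family generator and a logistic discriminator play the GAN minimax game on 1D Gaussian data. Everything here (architecture, data, learning rates) is an illustrative toy, far from the deep networks of the talk; the point is only the alternating gradient updates.

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal 1D adversarial-training sketch (illustrative only): a linear
# generator G(z) = a*z + b tries to match data drawn from N(2, 0.5),
# while a logistic discriminator D(x) = sigmoid(w*x + c) tries to tell
# real samples from fake ones.
def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-np.clip(t, -60, 60)))

a, b = 1.0, 0.0          # generator parameters
w, c = 0.1, 0.0          # discriminator parameters
lr, batch = 0.02, 64

for _ in range(3000):
    x_real = 2.0 + 0.5 * rng.standard_normal(batch)
    z = rng.standard_normal(batch)
    x_fake = a * z + b

    # Discriminator step: gradient ascent on log D(real) + log(1 - D(fake))
    s_r = sigmoid(w * x_real + c)
    s_f = sigmoid(w * x_fake + c)
    w += lr * np.mean((1 - s_r) * x_real - s_f * x_fake)
    c += lr * np.mean((1 - s_r) - s_f)

    # Generator step: gradient ascent on the non-saturating objective log D(fake)
    s_f = sigmoid(w * x_fake + c)
    a += lr * np.mean((1 - s_f) * w * z)
    b += lr * np.mean((1 - s_f) * w)
```

After training, the generator's offset b should have drifted toward the data mean; the same alternating scheme, with deep networks in place of the linear maps, underlies the generative models discussed in the talk.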

Joint work with Filippo Gatti.

References: DOI:10.1785/0120170293 and hal-01860115.

Organizers: Julien Bect (L2S) and Fernando Lopez Caballero (MSSMAT).

No registration is needed, but an email would be appreciated if you intend to come.

Wednesday, May 8, 2019

UQSay #03

The third UQSay seminar, organized by L2S and EDF R&D, will take place on Thursday afternoon, June 13, 2019, at CentraleSupelec Paris-Saclay (Eiffel building, amphi V).

We will have two talks:

14h — Alexandre Janon (Laboratoire de Mathématique d'Orsay) — [slides]

Consistency of Sobol indices with respect to stochastic ordering of input parameters — Global optimization using Sobol indices

Over the past decade, Sobol's variance decomposition has been used as a tool, among others, in risk management. We show some links between global sensitivity analysis and stochastic ordering theories. This provides an argument in favor of using Sobol's indices in uncertainty quantification, as one indicator among others.

Reference: hal-01026373.

We propose and assess a new global (derivative-free) optimization algorithm, inspired by the LIPO algorithm, which uses variance-based sensitivity analysis (Sobol indices) to reduce the number of calls to the objective function. This method should be efficient for optimizing costly functions that satisfy the sparsity-of-effects principle.

Reference: hal-02154121.

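
As background for both talks, here is the standard pick-freeze Monte Carlo estimator of a first-order Sobol index, applied to the classical Ishigami test function (an illustrative choice, not the objective from either talk):

```python
import numpy as np

rng = np.random.default_rng(3)

# Ishigami test function (a=7, b=0.1), a classical benchmark for
# variance-based sensitivity analysis.
def f(x):
    return np.sin(x[:, 0]) + 7 * np.sin(x[:, 1])**2 \
        + 0.1 * x[:, 2]**4 * np.sin(x[:, 0])

n = 100_000
A = rng.uniform(-np.pi, np.pi, (n, 3))
B = rng.uniform(-np.pi, np.pi, (n, 3))
AB1 = B.copy()
AB1[:, 0] = A[:, 0]          # "freeze" x1 from A, resample the other inputs

yA, yB, yAB1 = f(A), f(B), f(AB1)
# First-order index of x1: covariance of runs sharing x1, over total variance
S1 = np.mean(yA * (yAB1 - yB)) / np.var(yA)
```

For the Ishigami function the exact first-order index of x1 is about 0.314, which the estimator recovers to within Monte Carlo error; each index costs one extra batch of model evaluations, which is precisely the budget the LIPO-inspired algorithm tries to exploit.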
15h — Pierre Barbillon (MIA Paris) — [slides]

Sensitivity analysis of spatio-temporal models describing nitrogen transfers, transformations and losses at the landscape scale

Modelling complex systems such as agroecosystems often requires the quantification of a large number of input factors. Sensitivity analyses are useful to determine the appropriate spatial and temporal resolution of models and to reduce the number of factors to be measured or estimated accurately. Comprehensive spatial and temporal sensitivity analyses were applied to the NitroScape model, a deterministic spatially distributed model describing nitrogen transfers and transformations in rural landscapes. Simulations were run on a theoretical landscape that represented five years of intensive farm management and covered an area of 3 km². Cluster analyses were applied to summarize the results of the sensitivity analysis on the ensemble of model outputs. The methodology we applied is useful to synthesize sensitivity analyses of models with multiple space-time input and output variables and could be ported to models other than NitroScape.

Reference: arXiv:1709.08608.

Organizers: Julien Bect (L2S) and Bertrand Iooss (EDF R&D).

No registration is needed, but an email would be appreciated if you intend to come.

Thursday, March 28, 2019

UQSay #02

The second UQSay seminar, organized by L2S and MSSMAT, will take place on Thursday afternoon, April 18, 2019, at CentraleSupelec Paris-Saclay (Eiffel building, amphi V, next to the one where we had UQSay #01).

We will have two talks, and hopefully some coffee in between:

14h — Chu Mai (EDF R&D / MMC dept) — [slides]

Prediction of crack propagation kinetics through multipoint stochastic simulations of microscopic fields

Prediction of crack propagation kinetics in the components of nuclear plant primary circuits undergoing Stress Corrosion Cracking (SCC) can be improved by a refinement of the SCC models. One of the steps in the estimation of the time to rupture is the crack propagation criterion. Current models make use of macroscopic measures (e.g. stress, strain, ...) obtained, for instance, using the Finite Element Method. To go down to the microscopic scale and use local measures, a two-step approach is proposed. First, synthetic microstructures representing the material under specific loadings are simulated, and their quality is validated using statistical measures. Second, the shortest path to rupture in terms of propagation time is computed, and the distribution of these synthetic times to rupture is compared with the time to rupture estimated only from macroscopic values. The first step is realized with Cross Correlation Simulation (CCSIM), a multipoint simulation algorithm that produces synthetic stochastic fields from a training field; the Earth Mover's Distance is used to assess the quality of the realizations. The computation of shortest paths is realized using Dijkstra's algorithm. This approach yields a refined prediction of crack propagation kinetics compared to the macroscopic approach. An influence of the loading conditions on the distribution of the computed synthetic times to rupture was observed, which could be reduced through a more robust use of the CCSIM.
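
The shortest-path step can be pictured with a small example: Dijkstra's algorithm on a grid of local "propagation times", looking for the cheapest crossing from one edge to the other. The grid values below are arbitrary toy numbers, standing in for a simulated microstructure field.

```python
import heapq

def shortest_time(grid):
    """Minimum cumulative cost from the left edge to the right edge of a
    grid, moving between 4-connected cells; entering a cell costs its value."""
    rows, cols = len(grid), len(grid[0])
    dist = [[float("inf")] * cols for _ in range(rows)]
    heap = []
    for r in range(rows):                      # the path may start anywhere on the left edge
        dist[r][0] = grid[r][0]
        heapq.heappush(heap, (grid[r][0], r, 0))
    while heap:
        d, r, c = heapq.heappop(heap)
        if c == cols - 1:
            return d                           # first right-edge cell popped is optimal
        if d > dist[r][c]:
            continue                           # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and d + grid[nr][nc] < dist[nr][nc]:
                dist[nr][nc] = d + grid[nr][nc]
                heapq.heappush(heap, (dist[nr][nc], nr, nc))

grid = [[1, 9, 1],
        [1, 1, 1],
        [9, 9, 1]]
t_min = shortest_time(grid)    # path threads through the low-cost cells
```

Repeating this over many synthetic microstructures yields the distribution of synthetic times to rupture discussed in the abstract.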

Reference: hal-02068315.

15h — Olivier Le Maître (LIMSI) — [slides]

Surrogate models and reduction methods for UQ and inference in large-scale models

Uncertainty Quantification (UQ) and Global Sensitivity Analysis (GSA) in numerical models often rely on sampling approaches (either random or deterministic) that call for many resolutions of the model. Even though these computations can usually be carried out in parallel, the application of UQ and GSA methods to large-scale simulations remains challenging from the computational, storage and memory points of view. Similarly, Bayesian inference and assimilation problems can be adversely affected by over-abundant observations, because of over-constrained update problems or numerical issues (overflows, complexity, ...), raising the question of observation reduction.
A solution to alleviate the computational burden is to use a surrogate model of the full large-scale model, which can be sampled extensively to estimate sensitivity coefficients and characterize the prediction uncertainty. However, building a surrogate for the whole large-scale model solution can be extremely demanding, and reduction strategies are needed. In this talk, I will introduce several techniques for the reduction of the model output and the construction of its surrogate. Some of these techniques will be illustrated on ocean circulation model simulations. For the reduction of observations, I will discuss and compare a few strategies based on information-theoretic considerations that have been recently proposed in the Bayesian framework.

References: [1], [2], [3], [4], [5], [6].


Organizers:
  • Julien Bect (L2S),
  • Fernando Lopez Caballero (MSSMAT),
  • and Didier Clouteau (MSSMAT).

No registration is needed, but an email would be appreciated if you intend to come.

Friday, March 1, 2019

UQSay scope

The scope of UQSay seminars will include
  • UQ methodology in a broad sense
    • propagation of uncertainty in numerical models,
    • sensitivity analysis,
    • design and analysis of computer experiments,
    • surrogate models, multi-fidelity,
    • reliability analysis, inversion, (Bayesian) optimization,
    • representations and elicitation of uncertainty,
    • verification and validation of numerical models,
    • data assimilation, calibration,
    • any topic in proba / stats / numerical analysis related to the above,
    • ...
  • and (of course) applications
    • engineering,
    • physical sciences,
    • biology,
    • environment,
    • ... 
(and by the way: UQ stands for "Uncertainty Quantification" 😉)

Tuesday, February 19, 2019

UQSay #01

The first UQSay seminar, organized by L2S, will take place on Thursday afternoon, March 21, 2019, at CentraleSupelec Paris-Saclay (Eiffel building, amphi IV).

We will have two talks:

14h — Mickaël Binois (INRIA Sophia-Antipolis) — [slides]

Heteroskedastic Gaussian processes for simulation experiments

An increasing number of time-consuming simulators exhibit a complex noise structure that depends on the inputs. To conduct studies with limited budgets of evaluations, new surrogate methods are required to model simultaneously the mean and variance fields. To this end, we present recent advances in Gaussian process modeling with input-dependent noise. First, we describe a simple, yet efficient, joint modeling framework that relies on replication for both speed and accuracy. Then we tackle the issue of leveraging replication and exploration in a sequential manner for various goals: obtaining a globally accurate model, optimization, contour finding, and active subspace estimation. We illustrate these methods on applications from epidemiology and inventory management.
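
The role of replication can be sketched very simply: repeated runs at each input give an empirical estimate of the local noise variance, which is then plugged into a standard GP predictor as an input-dependent nugget. This is a drastically simplified stand-in for the joint mean/variance models of the talk; kernel, length-scale and data below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Squared-exponential kernel (illustrative length-scale)
def kern(a, b, ls=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ls**2)

# Toy simulator: mean sin(2*pi*x), noise standard deviation growing with x
x_uniq = np.linspace(0, 1, 8)
reps = 20
noise_sd = 0.05 + 0.3 * x_uniq
y_reps = np.sin(2 * np.pi * x_uniq)[:, None] \
    + noise_sd[:, None] * rng.standard_normal((8, reps))

y_bar = y_reps.mean(axis=1)             # averaged replicates
s2 = y_reps.var(axis=1, ddof=1) / reps  # estimated variance of each average

# GP prediction with a heteroskedastic (input-dependent) nugget
K = kern(x_uniq, x_uniq) + np.diag(s2)
x_new = np.array([0.25])
mu = kern(x_new, x_uniq) @ np.linalg.solve(K, y_bar)
```

Working with the averaged replicates also shrinks the kernel matrix from (8 × reps)² to 8², which is the computational payoff of replication emphasized in the talk.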


15h — François Bachoc (IMT, Toulouse) — [slides]

Gaussian process regression model for distribution inputs

Monge-Kantorovich distances, otherwise known as Wasserstein distances, have received growing attention in statistics and machine learning as a powerful discrepancy measure between probability distributions. In this work, we focus on forecasting a Gaussian process indexed by probability distributions. To this end, we provide a family of positive definite kernels built using transportation-based distances. We provide asymptotic results for covariance function estimation and prediction. We also provide numerical comparisons with other forecast methods based on distribution inputs.
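
In one dimension such kernels are particularly easy to picture, because the 2-Wasserstein distance reduces to the L2 distance between quantile functions. The sketch below builds a Gaussian-type Gram matrix on toy empirical distributions; the kernel form and data are our illustrative choices, not the exact construction of the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

def w2_empirical(xs, ys):
    """W2 distance between two same-size 1D empirical distributions:
    in 1D, the optimal transport plan matches sorted samples."""
    return np.sqrt(np.mean((np.sort(xs) - np.sort(ys))**2))

# Toy "distribution inputs": empirical samples from N(m, 1), various means m
means = np.array([0.0, 1.0, 2.0, 3.0])
samples = [m + rng.standard_normal(500) for m in means]

# Gram matrix of the (illustrative) kernel k(P, Q) = exp(-W2(P, Q)^2 / (2 l^2))
l = 2.0
G = np.array([[np.exp(-w2_empirical(a, b)**2 / (2 * l**2))
               for b in samples] for a in samples])
```

For two unit-variance Gaussians the W2 distance is just the difference of means, so the Gram matrix decays smoothly as the input distributions move apart, exactly the behaviour a GP on distribution inputs needs.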


Organizers: Julien Bect (L2S) and Emmanuel Vazquez (L2S).

No registration is needed, but an email would be appreciated if you intend to come.