Keynotes


Thomas Augustin

Thomas Augustin is Professor at the Department of Statistics, Ludwig-Maximilians University (LMU) in Munich, and head of the group “Foundations of Statistics and their applications”. His research aims at a statistical methodology that takes data quality critically into account, resulting in generalized models for complex, non-idealized data in the social sciences and biometrics. He has mainly published on imprecise probabilities in statistical inference and decision making, and on measurement error modelling.

Talk Title: On Statistical Modelling with Imprecise Probabilities

Abstract: With their generalized understanding of uncertainty, imprecise probability methods promise to be powerful in statistical modelling as well. The talk discusses this claim in the context of some prototypic areas of application. We first recall how imprecise probabilistic models provide a natural superstructure upon robustness aspects in frequentist and Bayesian approaches. We then use imprecise priors to model prior-data conflict explicitly. In the second part of the talk we consider several issues in the statistical modelling of imprecise data.
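A minimal sketch of the prior-data conflict idea, assuming a toy Beta-Bernoulli setting in the spirit of generalized Bayesian (“iLUCK”-type) models rather than the talk’s own examples: replacing a single Beta prior by a set of Beta priors, indexed by an interval of prior means and an interval of prior strengths, and reporting the range of posterior means makes conflicting data visible as a widened interval.

```python
# Hedged toy example (not taken from the talk): a set of Beta priors for a
# Bernoulli parameter, parameterized by prior mean y0 and prior strength n0.
import numpy as np

def posterior_mean_range(successes, trials, y0_lo, y0_hi, n0_lo, n0_hi):
    """Lower and upper posterior mean over the whole prior set."""
    y0 = np.linspace(y0_lo, y0_hi, 201)            # prior means
    n0 = np.linspace(n0_lo, n0_hi, 201)[:, None]   # prior strengths
    post_mean = (n0 * y0 + successes) / (n0 + trials)
    return post_mean.min(), post_mean.max()

prior_set = dict(y0_lo=0.7, y0_hi=0.8, n0_lo=2, n0_hi=10)

# Data roughly in agreement with the prior set: a comparatively narrow interval.
print(posterior_mean_range(successes=15, trials=20, **prior_set))   # ~ (0.73, 0.77)

# Data in conflict with the prior set: a visibly wider interval, so the
# conflict is reported rather than averaged away.
print(posterior_mean_range(successes=4, trials=20, **prior_set))    # ~ (0.25, 0.40)
```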

In our talk we also distinguish, in an ideal-typical way, between two basic motives for applying imprecise models, which we call the ‘defensive view’ and the ‘offensive view’. Most applications so far have emphasized the defensive view, understanding imprecise models merely as a tool to avoid unjustified, overly precise modelling assumptions. We also advocate the offensive view by giving examples where weak information from the application area can be utilized powerfully in the estimation of imprecise models, whereas traditional statistical models would be forced to ignore such valuable information.


Scott Ferson

Scott Ferson is director of the Institute for Risk and Uncertainty at the University of Liverpool in the UK. For many years he was senior scientist at Applied Biomathematics in New York and taught risk analysis at Stony Brook University. Dr. Ferson has over a hundred publications, mostly in risk analysis and uncertainty propagation, and is a fellow of the Society for Risk Analysis. His recent research, funded mostly by NIH and NASA, focuses on reliable statistical tools when empirical information is very sparse, and distribution-free methods for risk analysis.

Talk Title: Non-Laplacian uncertainty: practical consequences of an ugly paradigm shift about how we handle not knowing

Abstract: The relaxation of the completeness axiom of subjective expected utility theory leads to a non-Laplacian kind of uncertainty commonly known as ignorance. Taking account of how this differs from the uncertainty of probability theory will have broad and important implications in engineering, statistics, and medicine. However, the change can be shallow in the sense that practices need not be radically transformed during the shift.
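A minimal illustration of the practical difference, using an example of my own rather than the speaker’s: when a failure probability is only known to lie in an interval, the Laplacian habit is to collapse that ignorance into a single precise number, whereas an interval (non-Laplacian) treatment carries the ignorance through the calculation.

```python
# Hedged toy example (not from the talk): two independent components in series,
# each with a failure probability known only up to an interval.

def series_failure(p1, p2):
    """System failure probability for two independent components in series."""
    return 1 - (1 - p1) * (1 - p2)

p1_lo, p1_hi = 0.01, 0.20
p2_lo, p2_hi = 0.05, 0.10

# Non-Laplacian treatment: propagate the bounds (the function is monotone in
# each argument, so the interval endpoints suffice here).
print("interval result:", (series_failure(p1_lo, p2_lo), series_failure(p1_hi, p2_hi)))

# Laplacian treatment: replace "we do not know" by a single representative value.
print("point result:  ", series_failure((p1_lo + p1_hi) / 2, (p2_lo + p2_hi) / 2))
```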


Ryan Martin

Ryan Martin is an Associate Professor in the Department of Statistics at North Carolina State University. He obtained his PhD in Statistics from Purdue University in 2009 and has since worked in a number of different research areas, including asymptotics, Bayes and empirical Bayes inference, especially for high-dimensional problems, and the foundations of statistics. In particular, he is co-author of the monograph Inferential Models that presents a general framework for valid statistical inference based on belief functions.

Talk Title: Belief functions and valid statistical inference

Abstract: Belief functions originated with Dempster’s work in statistics, but they are almost entirely absent from the statistics mainstream. In this talk, I will argue that the properties of non-additive belief and plausibility functions are very much in line with classical statistical reasoning (e.g., p-values and confidence), and, furthermore, that non-additivity, rather than the familiar additivity of probability measures, is necessary for valid statistical inference. Of course, not all belief functions would be satisfactory in a given problem, and I will present a general inferential model framework for the construction of valid belief functions for statistical inference.
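A minimal sketch of both points, using a toy normal-mean example in the spirit of the inferential model literature (my own illustration, not the talk’s material): for a single observation X ~ N(theta, 1), a standard IM construction gives the plausibility contour pl_x(theta) = 1 - |2*Phi(x - theta) - 1|, where Phi is the standard normal CDF. At the true theta this quantity is uniformly distributed, which yields the validity property P(pl_X(theta) <= alpha) <= alpha, and the induced belief/plausibility pair for a hypothesis is visibly non-additive.

```python
# Hedged toy example (assumed setup, not the speaker's slides): IM-style
# plausibility for a normal mean theta based on one observation x ~ N(theta, 1).
import numpy as np
from scipy.stats import norm

def plausibility(theta, x):
    """Plausibility contour pl_x(theta) = 1 - |2*Phi(x - theta) - 1|."""
    return 1.0 - np.abs(2.0 * norm.cdf(x - theta) - 1.0)

# Validity by simulation: at the true theta, P(pl <= alpha) is (at most) alpha,
# the frequentist-style guarantee referred to in the abstract.
rng = np.random.default_rng(0)
theta0, alpha = 1.5, 0.05
x_sim = rng.normal(theta0, 1.0, size=100_000)
print("P(pl <= alpha) ~", np.mean(plausibility(theta0, x_sim) <= alpha))  # ~ 0.05

# Non-additivity: for H = {theta <= c}, belief is 1 minus the plausibility of
# the complement; belief and plausibility bracket the evidence rather than
# summing to one.
x, c = 0.3, 0.0
grid = np.linspace(-10, 10, 20_001)
pl_H  = plausibility(grid[grid <= c], x).max()
pl_Hc = plausibility(grid[grid >  c], x).max()
bel_H, bel_Hc = 1 - pl_Hc, 1 - pl_H
print("bel(H) + bel(H^c) =", bel_H + bel_Hc, "<= 1 <=", pl_H + pl_Hc)
```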