Calibrating Bayesian generative machine learning for Bayesiamplification (Machine Learning: Science and Technology)
- Link:
-
- Author(s):
-
- Year of publication:
- 2024
- Media type:
- Text
- Keywords:
-
- Bayesian neural networks
- Data amplification
- Fast detector simulation
- Generative neural networks
- Adversarial machine learning
- Contrastive learning
- Deep learning
- Gaussian distribution
- Markov processes
- Neural networks
- Normal distribution
- Bayesian
- Detector simulations
- Machine learning models
- Machine learning
- Uncertainty
- Generative adversarial networks
- Description:
-
- Recently, combinations of generative and Bayesian deep learning have been introduced in particle physics for both fast detector simulation and inference tasks. These neural networks aim to quantify the uncertainty on the generated distribution originating from limited training statistics. The interpretation of a distribution-wide uncertainty, however, remains ill-defined. We show a clear scheme for quantifying the calibration of Bayesian generative machine learning models. For a Continuous Normalizing Flow applied to a low-dimensional toy example, we evaluate the calibration of Bayesian uncertainties from either a mean-field Gaussian weight posterior or Monte Carlo sampling of network weights, to gauge their behaviour on unsteady distribution edges. Well-calibrated uncertainties can then be used to roughly estimate the number of uncorrelated truth samples that are equivalent to the generated sample and clearly indicate data amplification for smooth features of the distribution. © 2024 The Author(s). Published by IOP Publishing Ltd.
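
As a reading aid for the abstract's closing claim, the sketch below shows one plausible way to turn calibrated per-bin Bayesian uncertainties into an equivalent number of uncorrelated truth samples. It assumes the equivalence is made by matching the calibrated uncertainty on a bin probability to the binomial sampling uncertainty of a truth sample of size N; the function name `equivalent_sample_size` and all numerical values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def equivalent_sample_size(p_hat, sigma_p):
    """Per-bin equivalent number of uncorrelated truth samples.

    Hypothetical helper: equates the calibrated Bayesian uncertainty
    sigma_p on a bin probability p_hat with the binomial sampling
    uncertainty sqrt(p * (1 - p) / N) of a truth sample of size N,
    then solves for N.
    """
    return p_hat * (1.0 - p_hat) / sigma_p**2

# Illustrative values: five histogram bins of a 1D toy distribution,
# with predicted bin probabilities and calibrated uncertainties.
p_hat = np.array([0.10, 0.25, 0.30, 0.25, 0.10])
sigma_p = np.array([0.004, 0.005, 0.005, 0.005, 0.004])

n_eq = equivalent_sample_size(p_hat, sigma_p)
print(n_eq.round())  # [5625. 7500. 8400. 7500. 5625.]
```

On this reading, data amplification occurs in bins where the resulting equivalent sample size exceeds the size of the training sample, which the abstract expects for smooth features of the distribution.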
- License:
-
- info:eu-repo/semantics/openAccess
- Source system:
- Research information system of Universität Hamburg (UHH)
Internal metadata
- Source record:
- oai:www.edit.fis.uni-hamburg.de:publications/af9d4a10-baa5-4d60-8826-31316a569af7