Uncertainty-aware learning and a cautionary tale on machine-learning theory uncertainties
Machine learning models trained on simulated data pick up subtle patterns in high-dimensional feature spaces, some of which are not well modelled by the simulators. These mismodelled patterns give rise to systematic uncertainties in physics measurements. A popular solution is to apply debiasing techniques that make the model invariant to the sources of uncertainty (nuisance parameters). We propose the opposite approach: train a model that is fully aware of the uncertainties and their corresponding nuisance parameters, allowing it to adapt to their correct values from data at inference time. We show that this strategy actually enhances the sensitivity of the final physics measurement. In a second study, we investigate the dangers of using ML to mitigate theory uncertainties. Theory uncertainties may arise from our inability to properly simulate certain physics processes (such as hadronization) or to compute higher-order quantum field theory terms. We show that in these cases, debiasing techniques only serve to hide the true bias and uncertainty from the physicist rather than actually reduce them.
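To illustrate the uncertainty-aware idea described above, the sketch below trains a classifier that receives the nuisance parameter as an extra input feature, so its decision boundary shifts with the nuisance value at inference time. This is a minimal toy, not the authors' implementation: the one-dimensional Gaussian toy data, the choice of a simple logistic regression, and all numerical values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n, z):
    """Toy data: signal (y=1) mean shifts with nuisance parameter z;
    background (y=0) is fixed. Purely illustrative distributions."""
    y = rng.integers(0, 2, n)
    x = np.where(y == 1,
                 rng.normal(1.0 + 0.5 * z, 1.0, n),
                 rng.normal(-1.0, 1.0, n))
    return x, y

# Training set spans several nuisance values so the model
# learns how the optimal decision depends on z.
xs, ys, zs = [], [], []
for z in (-1.0, 0.0, 1.0):
    x, y = sample(2000, z)
    xs.append(x)
    ys.append(y)
    zs.append(np.full_like(x, z))
X = np.column_stack([np.concatenate(xs), np.concatenate(zs)])
Y = np.concatenate(ys)

# Logistic regression on (x, z) trained by plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - Y
    w -= 0.1 * (X.T @ grad) / len(Y)
    b -= 0.1 * grad.mean()

# At inference, feed the (assumed known or profiled) true nuisance
# value z=1 as an input: the classifier adapts its boundary to it.
x_test, y_test = sample(2000, 1.0)
X_test = np.column_stack([x_test, np.full_like(x_test, 1.0)])
p_test = 1.0 / (1.0 + np.exp(-(X_test @ w + b)))
acc = ((p_test > 0.5) == y_test).mean()
print(f"accuracy at true z: {acc:.3f}")
```

Because z enters as a feature, the learned boundary in x is a linear function of z, mirroring how the signal distribution moves with the nuisance parameter; a debiased (z-invariant) model would be forced to use one fixed boundary for all z. In practice the nuisance value is not fixed by hand but profiled from data together with the parameters of interest.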