Abstract
We present an illustration of a method for ensuring reliable uncertainty propagation in supervised learning performed with Gaussian Processes (GPs) and Markov Chain Monte Carlo (MCMC) based inference. We show how different ways of propagating uncertainty affect predictions of the output variable at test inputs, once the functional relationship between input and output has been learnt, with this functional relation modelled as a realisation from a GP. The effectiveness of imposing physically motivated constraints on the output, via priors placed on the GP covariance kernel hyperparameters, is compromised under certain uncertainty-propagation strategies. Tools such as deep neural networks (DNNs), which are comparatively blind to the learning and propagation of uncertainty, are found to be variously inaccurate in their output predictions.
Supported by EPSRC DTP Studentship.
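To make the contrast drawn in the abstract concrete, the following is a minimal, hypothetical sketch (not the authors' code) of propagating posterior uncertainty in GP covariance kernel hyperparameters to predictions at test inputs, versus plugging in a single point estimate of those hyperparameters. It assumes a squared-exponential kernel, standard-normal priors on the log-hyperparameters, and a toy random-walk Metropolis sampler; all function names and data are illustrative.

```python
import numpy as np

def sq_exp_kernel(X1, X2, amplitude, length_scale):
    """Squared-exponential covariance k(x, x') = a^2 exp(-(x - x')^2 / (2 l^2))."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return amplitude**2 * np.exp(-0.5 * d2 / length_scale**2)

def log_posterior(theta, X, y, noise=0.1):
    """Log of (GP marginal likelihood x prior) for theta = log(amplitude, length_scale)."""
    a, l = np.exp(theta)
    K = sq_exp_kernel(X, X, a, l) + noise**2 * np.eye(len(X))
    _, logdet = np.linalg.slogdet(K)
    log_lik = -0.5 * (y @ np.linalg.solve(K, y) + logdet + len(X) * np.log(2 * np.pi))
    log_prior = -0.5 * np.sum(theta**2)  # standard-normal prior on log-hyperparameters
    return log_lik + log_prior

def metropolis(X, y, n_samples=2000, step=0.1, seed=0):
    """Random-walk Metropolis over the log-hyperparameters."""
    rng = np.random.default_rng(seed)
    theta, lp = np.zeros(2), log_posterior(np.zeros(2), X, y)
    samples = []
    for _ in range(n_samples):
        prop = theta + step * rng.standard_normal(2)
        lp_prop = log_posterior(prop, X, y)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta.copy())
    return np.exp(np.array(samples))  # back to (amplitude, length_scale)

def gp_predict(X, y, X_star, amplitude, length_scale, noise=0.1):
    """GP predictive mean and variance at test inputs X_star."""
    K = sq_exp_kernel(X, X, amplitude, length_scale) + noise**2 * np.eye(len(X))
    K_s = sq_exp_kernel(X_star, X, amplitude, length_scale)
    K_ss = sq_exp_kernel(X_star, X_star, amplitude, length_scale)
    mean = K_s @ np.linalg.solve(K, y)
    cov = K_ss - K_s @ np.linalg.solve(K, K_s.T)
    return mean, np.diag(cov)

# Toy training data and test inputs.
rng = np.random.default_rng(1)
X = np.linspace(0, 1, 20)
y = np.sin(2 * np.pi * X) + 0.1 * rng.standard_normal(20)
X_star = np.linspace(0, 1, 5)

# (a) Full propagation: average predictive moments over post-burn-in MCMC samples.
samples = metropolis(X, y)[1000:]
preds = [gp_predict(X, y, X_star, a, l) for a, l in samples]
mean_full = np.mean([m for m, _ in preds], axis=0)
var_full = np.mean([v + m**2 for m, v in preds], axis=0) - mean_full**2

# (b) Point estimate: plug in the posterior-mean hyperparameters only.
a_hat, l_hat = samples.mean(axis=0)
mean_point, var_point = gp_predict(X, y, X_star, a_hat, l_hat)

print("predictive sd, full propagation:", np.sqrt(var_full))
print("predictive sd, point estimate  :", np.sqrt(var_point))
```

The predictive standard deviations under (a) absorb the posterior spread of the hyperparameters (via the law of total variance over the MCMC samples), whereas (b) reports only the conditional GP variance at one hyperparameter value and therefore tends to understate uncertainty.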
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Roy, G., Chakrabarty, D. (2023). Efficient Uncertainty Quantification for Under-Constraint Prediction Following Learning Using MCMC. In: Tanveer, M., Agarwal, S., Ozawa, S., Ekbal, A., Jatowt, A. (eds) Neural Information Processing. ICONIP 2022. Communications in Computer and Information Science, vol 1791. Springer, Singapore. https://doi.org/10.1007/978-981-99-1639-9_23