Thomas Schön

Thomas Schön, Professor of Automatic Control at Uppsala University. Photo: Mikael Wallerstedt

Our aim is to automate the extraction of knowledge and understanding from data, allowing machines (and humans) to understand what is happening, to acquire new skills and to learn new things. We achieve this by developing new probabilistic models and deriving algorithms capable of learning these models from data. The systematic use of probability in representing and manipulating these models is key. It allows us to represent not only what we know, but to some extent also what we do not know. We take a particular interest in dynamical phenomena evolving over time.

Our research is multi-disciplinary and sits at the intersection of machine learning, statistics, signal processing, automatic control and computer vision. We pursue both basic and applied research, which explains our close collaboration with various companies. A slightly more detailed overview of our research is available here.

Recent research results/news

[Postdoc opening within new Sydney/Uppsala Machine Learning/Control project] Together with Ian Manchester at The University of Sydney we are looking for a postdoc for our new Machine Learning/Control project funded by the Australian Research Council. The position is based in Sydney, but time will also be spent with our team at Uppsala University. More information is available here.

December 27, 2018 [Two papers accepted for AISTATS!] We will present new results at the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS) in Naha (Japan) in April, 2019. The first paper presents a multiresolution Gaussian process (GP) model which assumes conditional independence among the GPs across resolutions. The model is built on the hierarchical application of predictive processes, using a particular representation of the GP via the Karhunen-Loève expansion with a Bingham prior model. In the second paper we study model calibration in classification. A probabilistic classifier is said to be calibrated if the probability distributions that it outputs are consistent with the empirical frequencies observed in the measured data. We develop a rather general theoretical framework for evaluating calibration in classification and illustrate its use on standard deep learning classifiers. A small sketch of a standard calibration measure is included after the references below.

Jalil Taghia and Thomas B. Schön. Conditionally independent multiresolution Gaussian processes. In Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS), Naha, Japan, April, 2019. (Oral presentation) [arXiv]

Juozas Vaicenavičius, David Widmann, Carl Andersson, Fredrik Lindsten, Jacob Roll and Thomas B. Schön. Evaluating model calibration in classification. In Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS), Naha, Japan, April, 2019.
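As a rough illustration of what calibration evaluation involves (this is the classical binned expected calibration error, not the more general framework developed in the paper; the function name, bin count and toy data are made up for the example), the following Python sketch compares binned predicted confidences with the corresponding empirical accuracies:

    import numpy as np

    def expected_calibration_error(probs, labels, n_bins=10):
        """Binned ECE: mass-weighted average |mean confidence - accuracy| over bins.
        probs  : (N, K) array of predicted class probabilities
        labels : (N,)   array of true class indices
        """
        confidences = probs.max(axis=1)      # predicted confidence per example
        predictions = probs.argmax(axis=1)   # predicted class per example
        accuracies = (predictions == labels).astype(float)

        bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
        ece = 0.0
        for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
            in_bin = (confidences > lo) & (confidences <= hi)
            if in_bin.any():
                gap = abs(confidences[in_bin].mean() - accuracies[in_bin].mean())
                ece += in_bin.mean() * gap   # weight the gap by the bin's share of the data
        return ece

    # toy usage: labels drawn from the predicted distributions are calibrated by construction
    rng = np.random.default_rng(0)
    probs = rng.dirichlet(np.ones(3), size=1000)
    labels = np.array([rng.choice(3, p=p) for p in probs])
    print(expected_calibration_error(probs, labels))

Since the toy labels are sampled from the classifier's own predictive distributions, the resulting ECE should be close to zero up to binning and sampling noise.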

December 5, 2018 [Paper accepted for Automatica] As a follow-up to our previous paper on some of the fundamental properties of Linear Quadratic Gaussian (LQG) control, we have now established additional results for the discounted-cost case. This case differs in that the cost function contains an exponential discount factor, also known as a prescribed degree of stability. For this case, the optimal control strategy has so far only been available when the state is fully known. Our new results extend this by deriving the optimal control strategy when only an estimated state is available. Expressions for the resulting optimal expected cost are also given. A toy sketch of the closely related full-state discounted LQR computation is included after the reference below.

Hildo Bijl and Thomas B. Schön. Optimal controller/observer gains of discounted-cost LQG systems. Automatica, 2019. [pdf] [arXiv]
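For intuition only (this is not the LQG result in the paper, which handles the estimated-state case, but the simpler full-state discounted LQR problem; the system matrices and discount factor are made-up toy values), the following Python sketch uses the standard identity that discounting the stage cost by alpha^t is equivalent to an undiscounted problem with the dynamics scaled by sqrt(alpha), and solves the corresponding Riccati equation with SciPy:

    import numpy as np
    from scipy.linalg import solve_discrete_are

    # toy discrete-time system and cost (made up for illustration)
    A = np.array([[1.0, 0.1],
                  [0.0, 1.0]])
    B = np.array([[0.0],
                  [0.1]])
    Q = np.eye(2)      # state cost weight
    R = np.eye(1)      # input cost weight
    alpha = 0.9        # exponential discount factor on the stage cost

    # Discounting the cost by alpha^t is equivalent to undiscounted LQR
    # with the dynamics scaled by sqrt(alpha).
    A_bar = np.sqrt(alpha) * A
    B_bar = np.sqrt(alpha) * B

    P = solve_discrete_are(A_bar, B_bar, Q, R)   # Riccati solution
    K = np.linalg.solve(R + B_bar.T @ P @ B_bar, B_bar.T @ P @ A_bar)  # feedback gain

    print("state-feedback gain K =", K)          # optimal u_t = -K x_t for the discounted cost

The paper's contribution concerns the corresponding output-feedback (LQG) setting, where the controller only has access to a state estimate; the sketch above only covers the full-state-information special case.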

November 26, 2018 [Two spotlight presentations at NeurIPS next week!] Our team will give two spotlight presentations at the Conference on Neural Information Processing Systems (NeurIPS) in Montréal (Canada) next week:

Jack Umenberger and Thomas B. Schön. Learning convex bounds for linear quadratic control policy synthesis. In Neural Information Processing Systems (NeurIPS), Montréal, Canada, December 2018. [NeurIPS] [arXiv] [poster] [video]

Fredrik Lindsten, Jouni Helske and Matti Vihola. Graphical model inference: Sequential Monte Carlo meets deterministic approximations. In Neural Information Processing Systems (NeurIPS), Montréal, Canada, December 2018. [NeurIPS]


Click here for older news.

 © Thomas Schön 2018