My research (CV; see my list of publications) studies different incarnations of Machine Learning (ML) and their various applications. My interests span the theoretical, algorithmic and application-oriented aspects of one question: 'What makes an ML algorithm potent for a given case?' It is fair to say that ML (or data-based techniques) evolves in parallel with the ever-increasing availability of computational power. But while the latter is mostly governed by the availability of fast hardware, the limits of the former are often dictated by the availability of proper algorithms. Properly designed algorithms are (1) theoretically sound, (2) in tune with their intended application, and (3) computationally attractive. These themes invariably underlie trending topics such as neural networks, kernel machines, compressed sensing, deep learning, business analytics and Big Data.

Theoretical Work

Modern, general principles explaining why learning schemes work have emerged from results in statistics, function approximation and information theory. My interests centre mostly on the following topics:
- Generalisation: prediction vs. recovery (engineering vs. scientism).
- Regularisation techniques (on the subjectivity of simplicity).
- Dynamical systems (the arrow of time, a.k.a. the control engineer in me).
- Online learning and regret (how to act optimally over time?).
- Learning in high dimensions (complexity in action: what is possible?).
- Inference (the question of 'significance').
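To make the regularisation point concrete, here is a minimal sketch (illustrative only, not taken from my own work) of one-dimensional ridge regression in pure Python. The penalty weight `lam` encodes a chosen, and in that sense subjective, notion of simplicity: a larger `lam` shrinks the estimated coefficient towards zero.

```python
# Closed-form ridge regression in one dimension:
#   w(lam) = sum(x*y) / (sum(x*x) + lam)
# minimises sum_i (y_i - w*x_i)^2 + lam*w^2 over w.

def ridge_1d(xs, ys, lam):
    """Ridge estimate for the model y ~ w*x with penalty lam*w^2."""
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]        # roughly y = 2x

w0 = ridge_1d(xs, ys, 0.0)       # ordinary least squares
w1 = ridge_1d(xs, ys, 30.0)      # heavier penalty shrinks the estimate

print(w0, w1)
```

Which penalty (and which weight) is 'right' is not dictated by the data alone; that is exactly the subjectivity of simplicity alluded to above.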

Application-Oriented Work

These generic principles need to be fine-tuned to the application at hand. I have done this interfacing work in a wide variety of application sciences. For full details on either, see my
publication list.

- System Identification.
- Statistical inference in neuroimaging.
- Social behavior in biology.
- Survival and failure-time analysis.
- Bio-informatics.
- Acoustic signal processing.
- Electron microscopy in physics.
- Cosmology.
- Wireless Sensor Networks.
- Room temperature control.
- Video compression.
- Design of recommender systems.

Computational Requirements

My general strategy here is to formulate ('interface') the devised learning algorithms as standard optimisation problems, solvable with standard tools. In particular, I have been advocating methods of convex optimisation for this aim. Research on convex optimisation has been prolific over the last 20 years or so, with many results paralleling algorithmic research in ML, and it has yielded highly tuned software toolboxes (see e.g.
cvx, cvxopt, yalmip). In doing so, I follow the adage that 'the presence of local minima is emblematic of an ill-posed problem formulation'. Convex optimisation provides a rigorous framework for a class of optimisation problems in which no such local minima are present. Since those algorithms come with rigorous guarantees, the formulated ML schemes can themselves be analysed rigorously. Other ways to study computational issues are found in theoretical computer science, numerical algebra and discrete optimisation techniques.
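As a toy illustration of the no-local-minima argument (a hand-rolled sketch in pure Python, not how one would work with the toolboxes named above), consider a projected-gradient iteration on a small convex problem. Because both the objective and the feasible set are convex, the iteration has a single fixed point, the global optimum, so the guarantee carries over to whatever learning scheme is formulated this way. The problem instance here is invented for illustration.

```python
# Projected gradient descent on a small convex problem:
#   minimise (x-1)^2 + (y-2)^2  subject to  x + y <= 2.
# Convexity rules out local minima: the iteration's only fixed
# point is the global optimum (0.5, 1.5).

def project(x, y):
    """Euclidean projection onto the half-space x + y <= 2."""
    s = x + y - 2.0
    if s > 0.0:                    # outside the half-space: move back
        x, y = x - s / 2.0, y - s / 2.0
    return x, y

def solve(steps=2000, lr=0.1):
    x, y = 0.0, 0.0                # arbitrary feasible start
    for _ in range(steps):
        gx, gy = 2.0 * (x - 1.0), 2.0 * (y - 2.0)   # objective gradient
        x, y = project(x - lr * gx, y - lr * gy)
    return x, y

x, y = solve()
print(x, y)                        # converges to (0.5, 1.5)
```

In practice one would hand such a problem to a dedicated solver via cvx, cvxopt or yalmip; the point of the sketch is only that convexity is what makes the rigorous convergence guarantee possible.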


Education

Through education, I have been working to pass these results on to new researchers. At the undergraduate level, I have been running the System Identification course at UU for six years. I have also run graduate courses on a variety of topics, with a focus on applicability and built around a selection of experts in the area. Topics included 'Nonlinear System Identification and its applications' (2011), 'Next Generation Bioinformatics Tools: From Data Generation to Data Analysis' (2012), 'Compressive Sensing and Structured Random Matrices' (2012), and 'Fundamentals of Machine Learning' (2015).