Course Summary: Bayesian Data Analysis

Jul 18, 2020

Previously, I gave an overview of studying machine learning and promised to share some details on some of the courses. I will start with Bayesian Data Analysis (BDA), which is probably the most general course in my schedule – all others are very AI-specific.

Contents and Learnings

As the name already implies, the topic of this course was Bayesian statistics, specifically, analysing data with it.

The main learnings I gained from the course came through the assignments. These were mostly practical: we solved different programming tasks in R and submitted them using rmarkdown. Except for some tasks that required doing maths by hand, this usually meant finding and understanding the relevant functions in R. For more advanced modelling we used Stan.

Apart from R, there were, of course, other learnings.

First of all, a prerequisite is to know the basics of probability theory, especially Bayes' theorem. With this in hand, we set out to learn more about Bayesian statistics, specifically Bayesian inference (where we use a prior and a likelihood to get a posterior distribution – I am not going to go into details here).
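For reference, this is Bayes' theorem applied to inference: the posterior is the prior times the likelihood, normalised by the marginal likelihood (and up to proportionality, the normalising constant can often be ignored):

$$
p(\theta \mid y) = \frac{p(y \mid \theta)\, p(\theta)}{p(y)} \propto p(y \mid \theta)\, p(\theta)
$$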

Bayesian inference is the foundation of many machine learning methods, e.g., variational autoencoders. It has also long been widely used in real-world problems, including medicine. In fact, machine learning is often just used as a fancy name for optimising statistical models, such as Bayesian models.

This first part included learning about conjugate priors and different distributions (Normal, Beta, …). Next, we set out to sample from our newly gained (posterior) distributions. For many well-explored distributions, like the Gaussian, it is pretty straightforward to sample. However, this is not always the case, and then computational approximations, such as importance sampling, the Metropolis algorithm and others, can help out.
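To make this concrete, here is a minimal R sketch (not from the course assignments; the prior and data values are made up for illustration). A Beta prior combined with binomial data gives a Beta posterior in closed form, so we can sample it directly – and a simple Metropolis random walk recovers the same posterior using only the unnormalised density, which is what makes it useful when no closed form exists:

```r
# Beta-Binomial conjugacy: a Beta(a, b) prior with a binomial likelihood
# yields a Beta(a + successes, b + failures) posterior in closed form.
a <- 2; b <- 2          # hypothetical prior parameters
y <- 7; n <- 10         # hypothetical data: 7 successes in 10 trials
direct <- rbeta(10000, a + y, b + n - y)   # exact posterior draws

# Metropolis random walk on the same posterior, using only the
# unnormalised log-density (log-prior + log-likelihood).
log_post <- function(theta) {
  if (theta <= 0 || theta >= 1) return(-Inf)
  dbeta(theta, a, b, log = TRUE) + dbinom(y, n, theta, log = TRUE)
}
theta <- 0.5
draws <- numeric(10000)
for (i in seq_along(draws)) {
  proposal <- theta + rnorm(1, sd = 0.1)
  # Accept with probability min(1, posterior ratio)
  if (log(runif(1)) < log_post(proposal) - log_post(theta)) theta <- proposal
  draws[i] <- theta
}

# Both approaches should agree up to Monte Carlo error:
c(mean(direct), mean(draws))
```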

Then we learned how to use Stan to fit the models, i.e., to find good (optimal is often not feasible) parameters using our data. Finally, we learned how to evaluate and compare our models, e.g., with LOO-CV (leave-one-out cross-validation).
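As a rough sketch of that workflow (not an actual course assignment; the model, priors, and data here are invented, and I assume the rstan and loo packages are installed), fitting a simple Stan model from R and computing LOO-CV might look like this:

```r
library(rstan)
library(loo)

# A minimal Stan model: normal likelihood with weakly informative priors.
# The generated quantities block stores the pointwise log-likelihood,
# which the loo package needs for LOO-CV.
model_code <- "
data {
  int<lower=0> N;
  vector[N] y;
}
parameters {
  real mu;
  real<lower=0> sigma;
}
model {
  mu ~ normal(0, 10);
  sigma ~ normal(0, 5);
  y ~ normal(mu, sigma);
}
generated quantities {
  vector[N] log_lik;
  for (n in 1:N)
    log_lik[n] = normal_lpdf(y[n] | mu, sigma);
}
"

# Simulated data, just for illustration
set.seed(1)
y <- rnorm(50, mean = 2, sd = 1.5)

fit <- stan(model_code = model_code, data = list(N = length(y), y = y))

# PSIS-LOO estimate of out-of-sample predictive performance;
# loo_compare() would rank several models fitted this way.
log_lik <- extract_log_lik(fit, merge_chains = FALSE)
loo_result <- loo(log_lik, r_eff = relative_eff(exp(log_lik)))
print(loo_result)
```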

According to the course description, the content was the following, which I can confirm:

  • Bayesian probability and inference
  • Bayesian models and learning their parameters
  • Computational methods for sampling from a distribution

Organisation

This course included a weekly lecture and weekly assignments. Interestingly, the assignments were not graded by the teaching assistants but rather peer-reviewed. This meant that, in addition to submitting their own assignments, every student had to grade three others'. Admittedly, the learnings from the grading itself were very limited.

As mentioned before, the homework assignments were very practical, sometimes including probability calculations by hand.

References/Material

The lecture was based on the book Bayesian Data Analysis (3rd edition).

The lecturer was Aki Vehtari, who is also one of the authors of the book. The Stan project also comes from his research group.

Apart from this, this YouTube course saved me an incredible amount of time in understanding some concepts: A Student’s Guide to Bayesian Statistics. It covers most of the BDA course and even more.

I hope this post helps some people find entry points into Bayesian statistics or machine learning and gives others an impression of my studies. This is my first post of this type!

If you enjoyed reading this post, I would love to get some feedback: write a comment, or enable analytics (you can click on "Cookie Policy" at the bottom left of the page). Thank you!