Syllabus

SSY316 - Advanced probabilistic machine learning
Avancerad probabilistisk maskininlärning

Syllabus adopted 2021-02-10 by Head of Programme (or corresponding)
Owner: MPCOM
7.5 Credits
Grading: TH - Pass with distinction (5), Pass with credit (4), Pass (3), Fail
Education cycle: Second-cycle
Major subject: Electrical Engineering
Department: 32 - ELECTRICAL ENGINEERING


Teaching language: English
Application code: 13116
Open for exchange students: Yes
Block schedule: B
Status, available places (updated regularly): Yes

Module                                   Credit distribution   Examination dates
0120 Project, 7.5 credits, Grading: TH   7.5 credits

In programmes

MPDSC DATA SCIENCE AND AI, MSC PROGR, Year 1 (compulsory elective)
MPCOM COMMUNICATION ENGINEERING, MSC PROGR, Year 2 (elective)
MPCOM COMMUNICATION ENGINEERING, MSC PROGR, Year 1 (compulsory elective)
MPSYS SYSTEMS, CONTROL AND MECHATRONICS, MSC PROGR, Year 2 (elective)
MPSYS SYSTEMS, CONTROL AND MECHATRONICS, MSC PROGR, Year 1 (elective)

Examiner:

Alexandre Graell i Amat


Eligibility

General entry requirements for Master's level (second cycle)
Applicants enrolled in a programme at Chalmers where the course is included in the study programme are exempted from fulfilling the requirements above.

Specific entry requirements

English 6 (or by other approved means with the equivalent proficiency level)
Applicants enrolled in a programme at Chalmers where the course is included in the study programme are exempted from fulfilling the requirements above.

Course specific prerequisites

Working knowledge of probability, statistics, and linear algebra.

Aim

This course explores the connections between machine learning and probability theory and statistics; in particular, it gives a probabilistic viewpoint on machine learning problems. Probability theory can be applied to any problem involving uncertainty. In machine learning, uncertainty arises in many forms: as noise in the collected data, as uncertainty about the best prediction given some past data, or as uncertainty about which model is best suited to explain the data. The key idea behind the probabilistic framework for machine learning is that learning can be thought of as inferring plausible (probabilistic) models to describe the data one could observe from a system. Probabilistic models can make predictions and statements about observable data, and they can also express the uncertainty of those predictions.

The course describes a broad variety of probabilistic models, suitable for many kinds of data and tasks, together with algorithms for inference and learning in such models. The goal is to present a unified view of machine learning through the lens of probabilistic modeling and inference.
The students will also learn universal models and methods that are useful in probabilistic machine learning as well as in other areas.
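
As a minimal illustration of this viewpoint, the sketch below (Python; the coin-flip model, prior parameters, and data are illustrative choices, not taken from the course material) infers a coin's unknown bias from Bernoulli observations with a conjugate Beta prior, and reads the uncertainty of the estimate directly off the posterior:

    # A minimal sketch of learning as Bayesian inference (illustrative example,
    # not from the course material): infer a coin's bias theta from flips using
    # the conjugate Beta-Bernoulli model, where a Beta(a, b) prior and h heads
    # in n flips give the posterior Beta(a + h, b + n - h).

    from scipy import stats

    a, b = 2.0, 2.0                    # Beta prior: mild belief the coin is fair
    flips = [1, 0, 1, 1, 0, 1, 1, 1]   # observed data (1 = heads)
    h, n = sum(flips), len(flips)

    posterior = stats.beta(a + h, b + n - h)   # closed-form conjugate update

    print("posterior mean of theta:", posterior.mean())
    print("95% credible interval:", posterior.interval(0.95))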

Learning outcomes (after completion of the course the student should be able to)

  • Explain the philosophy behind Bayesian inference
  • Develop an inference algorithm using the principles of Bayesian decision theory and a given cost function (see the sketch after this list)
  • Understand the connections between probability theory and machine learning
  • Explain similarities and differences between probabilistic and “classical” machine learning methods
  • Interpret and explain results from probabilistic machine learning
  • Derive, analyze, and implement the probabilistic methods introduced in the course
  • Understand how to apply several probabilistic models to data and determine the most suitable one for a given task
  • Discuss and determine whether an engineering-relevant problem can be formulated as a supervised or unsupervised machine learning problem
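
To make the decision-theory outcome above concrete, the sketch below (Python; the Gamma posterior is a hypothetical stand-in for whatever posterior an inference algorithm produced) evaluates the posterior expected cost on a grid of candidate estimates and confirms two standard facts: squared-error cost is minimized by the posterior mean, and absolute-error cost by the posterior median:

    # Illustrative sketch: Bayesian decision theory chooses the estimate that
    # minimizes the posterior expected cost. The Gamma "posterior" below is a
    # hypothetical stand-in; any set of posterior samples would do.

    import numpy as np

    rng = np.random.default_rng(0)
    post = rng.gamma(shape=3.0, scale=2.0, size=20_000)   # posterior samples

    grid = np.linspace(post.min(), post.max(), 401)       # candidate estimates
    sq_cost = [np.mean((g - post) ** 2) for g in grid]    # squared-error cost
    abs_cost = [np.mean(np.abs(g - post)) for g in grid]  # absolute-error cost

    print("argmin squared cost :", grid[np.argmin(sq_cost)],
          "vs posterior mean  :", post.mean())
    print("argmin absolute cost:", grid[np.argmin(abs_cost)],
          "vs posterior median:", np.median(post))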

Content

  • Bayesian inference, probabilistic modeling of data
  • Supervised learning: Bayesian linear regression (see the sketch after this list)
  • Bayesian graphical models
  • Monte Carlo techniques: importance sampling, Gibbs sampling, Markov chain Monte Carlo
  • Markov random fields, factor graphs 
  • Belief propagation, variable elimination 
  • Hidden Markov models
  • Expectation propagation and variational inference
  • Gaussian processes
  • Unsupervised learning
  • Generative adversarial networks and variational autoencoders: two methods for unsupervised learning
  • Probabilistic deep learning 
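
As a taste of the first supervised-learning topic in this list, the sketch below (Python; the synthetic data, feature map, and precision values alpha and beta are illustrative) computes the closed-form weight posterior and predictive distribution for Bayesian linear regression, following the treatment in the course textbook (Bishop, Pattern Recognition and Machine Learning, Ch. 3):

    # Illustrative sketch of Bayesian linear regression with a Gaussian prior
    # N(0, alpha^{-1} I) on the weights and Gaussian observation noise with
    # precision beta (cf. Bishop, PRML, Sec. 3.3). Data and hyperparameters
    # here are made up for the example.

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.uniform(-1, 1, size=30)
    t = 0.5 * x - 0.3 + rng.normal(scale=0.2, size=x.shape)  # synthetic targets

    Phi = np.column_stack([np.ones_like(x), x])  # design matrix: bias + linear term
    alpha, beta = 2.0, 25.0                      # prior and noise precisions

    # Posterior N(m_N, S_N): S_N^{-1} = alpha*I + beta*Phi^T Phi, m_N = beta*S_N*Phi^T t
    S_N = np.linalg.inv(alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi)
    m_N = beta * S_N @ Phi.T @ t

    # Predictive mean and variance at a new input
    phi_new = np.array([1.0, 0.5])               # features of the test point x = 0.5
    pred_mean = phi_new @ m_N
    pred_var = 1.0 / beta + phi_new @ S_N @ phi_new

    print("posterior mean weights:", m_N)
    print("prediction at x = 0.5 : mean %.3f, std %.3f" % (pred_mean, np.sqrt(pred_var)))

Because prior and likelihood are both Gaussian here, the posterior is exact; for most of the other models in the list no such conjugacy exists, which is where the Monte Carlo and variational techniques above come in.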

Organisation

The course comprises lectures, weekly home assignments and tutorial sessions related to the home assignments.

Literature

We will mainly use Christopher M. Bishop, "Pattern Recognition and Machine Learning", Springer, 2006.

Examination including compulsory elements

The final grade (TH) is based on scores from a project, quizzes, and a written exam. The project and the literature study are mandatory in the sense that they must be passed to pass the course.

The course examiner may assess individual students in other ways than what is stated above if there are special reasons for doing so, for example if a student has a decision from Chalmers on educational support due to disability.

