
FFR135 - Artificial neural networks

Owner: FCMAS
5,0 Credits (ECTS 7,5)
Grading: TH - Five, Four, Three, Not passed
Level: C
Department: 16 - PHYSICS


Teaching language: English

Course module: 0100 Examination, 5,0 c, Grading: TH
Credit distribution: 5,0 c
Examination dates: Contact examiner

In programs

TDATA COMPUTER SCIENCE AND ENGINEERING - Algorithms, Year 4 (elective)
TTFYA ENGINEERING PHYSICS, Year 4 (elective)
TELTA ELECTRICAL ENGINEERING, Year 4 (elective)
TITEA SOFTWARE ENGINEERING, Year 4 (elective)
TITEA SOFTWARE ENGINEERING, Year 3 (elective)
TAUTA AUTOMATION AND MECHATRONICS ENGINEERING, Year 4 (elective)
FCMAS MSc PROGRAMME IN COMPLEX ADAPTIVE SYSTEMS, Year 1 (compulsory)

Examiner:

Professor Bernhard Mehlig



Eligibility:

For single subject courses within Chalmers programmes, the same eligibility requirements apply as to the programme(s) that the course is part of.

Course specific prerequisites

Basic programming skills

Aim

Neural networks are distributed computational models inspired by the structure of the human brain, consisting of many simple processing elements connected in a network. Neural networks are increasingly used in many different fields of engineering for tasks such as pattern recognition, prediction and control. The theory of neural networks is a cross-disciplinary field that includes neurobiology, computer science and statistical physics.
The course gives an overview and a basic understanding of neural-network algorithms. Topics covered include associative memory models (Hopfield model), algorithms for learning from examples (perceptrons, back-propagation), and models for self-organisation (Hebbian learning). By learning methods from statistics and computer science, students develop an understanding of when neural networks are useful in application problems.
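
As an informal illustration of the associative-memory topic mentioned above (the sketch is not part of the syllabus; the patterns and parameters are invented for the example), a Hopfield network with the Hebbian outer-product storage rule can be written in a few lines of Python/NumPy:

    import numpy as np

    def hopfield_weights(patterns):
        # Hebbian (outer-product) storage rule for binary +/-1 patterns.
        n_patterns, n_units = patterns.shape
        W = patterns.T @ patterns / n_units
        np.fill_diagonal(W, 0.0)           # no self-connections
        return W

    def recall(W, state, n_steps=10):
        # Synchronous updates: iterate s <- sign(W s) until the state settles.
        for _ in range(n_steps):
            new_state = np.sign(W @ state)
            new_state[new_state == 0] = 1  # break ties towards +1
            if np.array_equal(new_state, state):
                break
            state = new_state
        return state

    # Store two toy patterns and recall the first one from a noisy probe.
    patterns = np.array([[1, -1, 1, -1, 1, -1],
                         [1, 1, -1, -1, 1, 1]])
    W = hopfield_weights(patterns)
    probe = np.array([1, -1, 1, -1, -1, -1])   # first pattern with one bit flipped
    print(recall(W, probe))                    # settles back onto the first pattern

Stored patterns act as attractors: a probe that differs from a stored pattern in a few bits is driven back to it by the sign updates.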

Content

Course home page

Introduction to neurobiology
Associative memory. The Hopfield model. Optimization problems.
The perceptron
Multilayer perceptrons. Back-propagation and other learning algorithms. Radial basis function networks.
Introduction to learning theory. Generalization.
Reinforcement learning - learning in situations with infrequent information from the environment.
Self-organization in neural networks. Hebbian learning (a short sketch follows this list). Self-organizing feature maps.
Neural network applications.
Classification and prediction using methods from statistics and machine learning.
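
As an informal sketch of the Hebbian-learning item above (the course content does not prescribe this particular variant; the data and parameters are invented for the example), Oja's rule adds a normalizing decay term to the plain Hebbian update and, for zero-mean data, turns a single linear unit into a principal-component extractor:

    import numpy as np

    def oja_learning(data, eta=0.01, n_epochs=50, seed=0):
        # Oja's rule: w <- w + eta * y * (x - y * w), a Hebbian update with decay.
        # For zero-mean data the weight vector approaches the leading
        # principal-component direction (up to sign).
        rng = np.random.default_rng(seed)
        w = rng.normal(size=data.shape[1])
        w /= np.linalg.norm(w)
        for _ in range(n_epochs):
            for x in data:
                y = w @ x                   # output of a single linear unit
                w += eta * y * (x - y * w)  # Hebbian growth, normalizing decay
        return w

    # Invented toy data: 2-D points with the strongest variance along (1, 1).
    rng = np.random.default_rng(1)
    data = rng.normal(size=(500, 2)) @ np.array([[2.0, 1.5], [1.5, 2.0]])
    data -= data.mean(axis=0)
    print(oja_learning(data))   # roughly +/- (1, 1) / sqrt(2)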

Organisation

Lectures

In order to develop a thorough understanding of the basic neural network algorithms, students are expected to develop their own implementations (in a language of their choice) and apply them to simple application problems. This is done through a sequence of small projects (see below).
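
As an informal example of the kind of small, self-written implementation the projects ask for (the syllabus prescribes neither a language nor this particular task; the data here is invented), a minimal perceptron with the classical error-correction rule might look as follows in Python/NumPy:

    import numpy as np

    def train_perceptron(X, targets, eta=0.1, n_epochs=100):
        # Error-correction rule: w <- w + eta * (t - y) * x for each example.
        X = np.hstack([X, np.ones((len(X), 1))])    # append a constant bias input
        w = np.zeros(X.shape[1])
        for _ in range(n_epochs):
            errors = 0
            for x, t in zip(X, targets):
                y = 1 if w @ x > 0 else 0           # threshold-unit output
                if y != t:
                    w += eta * (t - y) * x
                    errors += 1
            if errors == 0:                         # whole pass was correct: stop
                break
        return w

    # Invented linearly separable problem: label 1 when x0 + x1 > 1.
    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, size=(200, 2))
    targets = (X.sum(axis=1) > 1).astype(int)
    w = train_perceptron(X, targets)
    X_bias = np.hstack([X, np.ones((len(X), 1))])
    print("training accuracy:", np.mean(((X_bias @ w) > 0).astype(int) == targets))

On a linearly separable toy set like this one, the training loop stops once a full pass produces no misclassifications.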

Literature

J. Hertz, A. Krogh, and R. G. Palmer, Introduction to the Theory of Neural Computation (Addison-Wesley, 1991).
S. Haykin, Neural Networks: A Comprehensive Foundation (Macmillan/IEEE Press, 1994).

Examination

Through a sequence of small projects organised into five example sheets, in which students implement the basic models of the course and use them in application problems. Written presentation of the projects.

