CS B553: Algorithms for Optimization and Learning
Instructor: Kris Hauser
The goal of this course is to provide students with an understanding of foundational computational techniques for large-scale optimization and probabilistic inference, which find broad application in the advanced study of artificial intelligence, robotics, computer vision, computational biology, and applied sciences.
· Unconstrained optimization: gradient descent, Newton and quasi-Newton methods
· Constrained optimization: Lagrange duality, KKT conditions, convex optimization, nonlinear programming
· Stochastic optimization: simulated annealing, genetic algorithms, stochastic gradient descent
· Bayesian inference, Monte Carlo techniques
· Graphical models: Bayes nets, exact and approximate inference, parameter and structure learning
· Temporal sequence processing: Markov chains, hidden Markov models (HMMs), the Viterbi algorithm, particle filtering
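
As a small taste of the first topic, here is a minimal gradient-descent sketch. This is an illustration only, not course material: the objective f(x, y) = (x - 3)^2 + (y + 1)^2, the fixed step size, and the iteration count are all my own choices.

```python
# Illustrative example (not course-provided code): minimize
# f(x, y) = (x - 3)^2 + (y + 1)^2 by plain gradient descent.

def grad_f(x, y):
    """Gradient of f(x, y) = (x - 3)**2 + (y + 1)**2."""
    return 2 * (x - 3), 2 * (y + 1)

def gradient_descent(x, y, step=0.1, iters=200):
    """Take a fixed number of steps in the negative gradient direction."""
    for _ in range(iters):
        gx, gy = grad_f(x, y)
        x -= step * gx
        y -= step * gy
    return x, y

x_min, y_min = gradient_descent(0.0, 0.0)
print(x_min, y_min)  # converges toward the minimizer (3, -1)
```

The course covers when such a fixed step size suffices, as well as faster alternatives (Newton and quasi-Newton methods) listed above.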