The accompanying material for each lecture is posted here.
Origins of deep learning, course goals, overview of machine-learning paradigms, intro to computational acceleration.
Supervised learning problem statement, data sets, hypothesis classes, loss functions, basic examples of supervised machine learning models, adding non-linear...
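As a toy illustration of the problem statement above (data set, hypothesis class, loss), the empirical risk of a hypothesis can be sketched as follows; the data and the linear hypothesis are assumed for illustration, not taken from the lecture:

```python
# Minimal sketch (toy data, assumed): empirical risk of a hypothesis h on a
# labeled data set, under the squared loss l(y_hat, y) = (y_hat - y)^2.
dataset = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]  # pairs (x, y); here y = 2x + 1

def h(x):
    # one member of the linear hypothesis class h_{w,b}(x) = w*x + b
    return 2.0 * x + 1.0

# empirical risk = average loss over the data set
risk = sum((h(x) - y) ** 2 for x, y in dataset) / len(dataset)
```

Here the hypothesis matches the data exactly, so the empirical risk is zero; a mismatched hypothesis would yield a positive risk.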
Linear and multilayer perceptrons, loss functions, activation functions, pooling, weight sharing, convolutional layers, gradient descent, backpropagation.
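The gradient-descent step covered here can be sketched on a single linear neuron; the toy data, learning rate, and iteration count are assumptions for illustration:

```python
# Minimal sketch (toy setup, assumed): gradient descent on a single linear
# neuron y_hat = w * x with squared loss, fitting targets that follow y = 2x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = 0.0
lr = 0.05  # assumed learning rate
for _ in range(200):
    # gradient of the mean squared loss: (2/N) * sum((w*x - y) * x)
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # descend along the negative gradient
# w converges toward the true slope 2.0
```

Backpropagation generalizes exactly this gradient computation to multilayer networks via the chain rule.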
Approximation, estimation and optimization errors, regularization, loss surface curvature, descent-based optimization methods, second-order methods.
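One concrete form of the regularization discussed here is an L2 weight penalty added to the training loss; the numbers and the coefficient below are assumed for illustration:

```python
# Minimal sketch (assumed values): L2 regularization adds lam * ||w||^2 to the
# data loss, trading a small increase in approximation error for a reduction
# in estimation error.
def regularized_loss(data_loss, weights, lam=0.1):
    return data_loss + lam * sum(w * w for w in weights)

# data loss 0.5, weights [1, -2], ||w||^2 = 5, penalty = 0.1 * 5 = 0.5
total = regularized_loss(0.5, [1.0, -2.0], lam=0.1)
```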
RNN model, input-output sequence relationships, non-sequential input, layered RNNs, backpropagation through time, word embeddings, attention, transformers.
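The attention mechanism listed here can be sketched as scaled dot-product attention for a single query; the vectors are toy assumptions, and real transformer layers apply this over learned projections of whole sequences:

```python
import math

def attention(query, keys, values):
    # Scaled dot-product attention for one query (toy sketch, assumed setup):
    # weights = softmax(q . k_i / sqrt(d)), output = weighted sum of values.
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    m = max(scores)                              # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    # convex combination of the value vectors
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# the query matches the first key, so the output leans toward the first value
out = attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]],
                [[10.0, 0.0], [0.0, 10.0]])
```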
Subspace models, autoencoders, unsupervised loss, generative adversarial nets, domain adaptation.
Markov decision process, policies, rewards, value functions, the Bellman equation, Q-learning, policy learning, actor-critic learning, AutoML.
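The Bellman equation and tabular Q-learning listed here can be sketched on a tiny chain MDP; the environment, discount, learning rate, and pure-exploration policy are all assumptions chosen to keep the sketch short:

```python
import random

# Minimal sketch (toy MDP, assumed): a 1-D chain of states 0..3; action 0
# moves left (floored at 0), action 1 moves right; reaching state 3 yields
# reward 1 and ends the episode.
N, GAMMA, ALPHA = 4, 0.9, 0.5
Q = [[0.0, 0.0] for _ in range(N)]  # Q-table: Q[state][action]
random.seed(0)
for _ in range(500):
    s = 0
    while s != N - 1:
        a = random.randrange(2)  # fully random exploration, for brevity
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == N - 1 else 0.0
        # Bellman-backup update: Q(s,a) += alpha * (r + gamma*max_a' Q(s',a') - Q(s,a))
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

# the greedy policy recovered from Q should move right in every state
policy = [max(range(2), key=lambda a: Q[s][a]) for s in range(N - 1)]
```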
Toeplitz operators, graphs, fields, gradients, divergence, Laplace-Beltrami operator, non-Euclidean convolution, spectral and spatial CNNs for graphs.
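The combinatorial graph Laplacian, the discrete analogue of the Laplace-Beltrami operator that spectral graph CNNs build on, can be sketched on a small assumed graph:

```python
# Minimal sketch (toy graph, assumed): the combinatorial graph Laplacian
# L = D - A, where A is the adjacency matrix and D the diagonal degree matrix.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
n = 4
A = [[0] * n for _ in range(n)]
for i, j in edges:
    A[i][j] = A[j][i] = 1  # undirected graph

# L[i][j] = deg(i) on the diagonal, -1 where i and j are adjacent, else 0
L = [[(sum(A[i]) if i == j else 0) - A[i][j] for j in range(n)]
     for i in range(n)]
# every row sums to zero: L annihilates constant signals on the graph
```

Spectral graph CNNs define convolution through the eigendecomposition of this operator; spatial variants aggregate directly over the neighborhoods encoded in A.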
CV-based approaches, R-CNN, RPN, YOLO, SSD, losses, benchmarks and performance metrics.
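The performance metrics listed here are built on intersection-over-union (IoU), which decides whether a predicted box counts as a correct detection; the boxes below are assumed for illustration:

```python
# Minimal sketch (assumed boxes): intersection-over-union for axis-aligned
# boxes given as (x1, y1, x2, y2).
def iou(a, b):
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

# overlap area 1, union 4 + 4 - 1 = 7
score = iou((0, 0, 2, 2), (1, 1, 3, 3))
```

Detection benchmarks typically count a prediction as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5.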