- The Hamilton-Jacobi-Bellman Equation.
- Heuristic derivation of the HJB equation.
- Davis-Varaiya Martingale Principle for Optimality.

# Category: Control

## Continuous Time Dynamic Programs

- Continuous-time dynamic programs
- The HJB equation: a heuristic derivation and a proof of optimality.
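For reference, one common form of the HJB equation, for deterministic dynamics $\dot{x} = b(x,u)$, running cost $c$, and terminal cost $g$ (this notation is assumed here, not taken from the post itself):

```latex
0 = \min_{u}\Big\{ c(x,u) + \partial_t V(x,t) + b(x,u)\cdot \partial_x V(x,t) \Big\},
\qquad V(x,T) = g(x).
```

The heuristic derivation applies the dynamic programming principle over a small time step and takes a first-order Taylor expansion of $V$.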

## Algorithms for MDPs

- High level idea: Policy Improvement and Policy Evaluation.
- Value Iteration; Policy Iteration.
- Temporal Differences; Q-factors.
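The bullets above can be illustrated with a minimal value-iteration sketch. The MDP below (transition tensor `P`, reward matrix `R`, discount `gamma`) is an illustrative assumption, not an example from the notes; the update is the standard Bellman optimality backup.

```python
import numpy as np

# Illustrative random MDP: 3 states, 2 actions, discount factor 0.9.
n_states, n_actions, gamma = 3, 2, 0.9

rng = np.random.default_rng(0)
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)       # each P[a, s, :] is a probability distribution
R = rng.random((n_actions, n_states))   # reward for taking action a in state s

V = np.zeros(n_states)
for _ in range(500):
    # Bellman optimality update: V(s) <- max_a [ R(a,s) + gamma * sum_s' P(a,s,s') V(s') ]
    Q = R + gamma * (P @ V)             # Q-factors for each (action, state)
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

policy = Q.argmax(axis=0)               # greedy policy read off from the Q-factors
```

Policy iteration replaces the single backup with a full policy-evaluation step followed by greedy policy improvement; temporal-difference methods estimate the same Q-factors from sampled transitions instead of the known `P`.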

## Infinite Time Horizon, MDP

- Positive Programming, Negative Programming & Discounted Programming.
- Optimality Conditions.

## Markov Decision Processes

- Markov Decision Problems; Bellman's Equation; two examples.
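In the standard discounted setting, with reward $r$, discount factor $\beta$, and transition kernel $P$ (notation assumed here for concreteness), Bellman's equation reads:

```latex
V(x) = \max_{a}\Big\{ r(x,a) + \beta \sum_{y} P(y \mid x, a)\, V(y) \Big\}.
```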

## Dynamic Programming

## Blackwell Approachability

Sequentially, a player makes a decision and his adversary makes one in response. At each time, the pair of decisions results in a vector payoff. Given the average of the vector payoffs up to the current time, Blackwell's Approachability Theorem gives a necessary and sufficient condition under which, regardless of the adversary's decisions, the player can make the sequence of average payoffs approach a given convex set.
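A toy simulation of Blackwell's strategy can make this concrete. The game and target set below are illustrative assumptions, not a construction from the post: the vector payoff is the per-round regret vector of a matching-pennies game, and the player steers the average payoff toward the negative orthant $C = \{x : x \le 0\}$. In each round the player moves in the direction from the projection of the current average onto $C$ back to the average, playing a mixed action that is minimax for that scalarised game.

```python
import numpy as np

# Per-round vector payoff: the regret vector of matching pennies,
# r(a, b)[i] = u(i, b) - u(a, b) with u(a, b) = 1{a == b}.
def regret(a, b):
    u = lambda x, y: float(x == y)
    return np.array([u(0, b) - u(a, b), u(1, b) - u(a, b)])

rng = np.random.default_rng(1)
xbar = np.zeros(2)                          # running average vector payoff
for t in range(1, 5001):
    lam = np.maximum(xbar, 0.0)             # direction from proj_C(xbar) to xbar
    s = lam.sum()
    # Blackwell strategy: minimax mixed action in the scalar game lam . r(a, b);
    # for this payoff matrix it works out to p = lam[0] / (lam[0] + lam[1]).
    p = lam[0] / s if s > 0 else 0.5        # anything is fine when xbar is already in C
    a = rng.choice(2, p=[p, 1 - p])
    b = rng.choice(2)                       # this adversary just plays uniformly
    xbar += (regret(a, b) - xbar) / t       # update the running average

dist = np.linalg.norm(np.maximum(xbar, 0.0))   # distance from the average to C
```

Approachability of the negative orthant for regret vectors is exactly Hannan consistency, and the resulting strategy is regret matching; the distance `dist` shrinks at rate $O(1/\sqrt{t})$ whatever the adversary does.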