SARSA

Applies Temporal-Difference (TD) learning to the Q-function, with ε-greedy exploration for policy improvement. At each time step, update the Q-function by: \(Q(S_t, A_t) \leftarrow Q(S_t,A_t) + \alpha\,(R_t + \gamma Q(S_{t+1},A_{t+1}) - Q(S_t,A_t))\)
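
A minimal tabular sketch of this update loop, assuming a hypothetical Gym-style environment whose reset()/step() return (state, reward, done) with discrete states and actions; the function and parameter names are illustrative only.

#+begin_src python
import numpy as np

def epsilon_greedy(Q, state, epsilon, rng):
    """Random action with probability epsilon, otherwise greedy."""
    if rng.random() < epsilon:
        return int(rng.integers(Q.shape[1]))
    return int(np.argmax(Q[state]))

def sarsa(env, n_states, n_actions, episodes=500,
          alpha=0.1, gamma=0.99, epsilon=0.1, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        state = env.reset()
        action = epsilon_greedy(Q, state, epsilon, rng)
        done = False
        while not done:
            next_state, reward, done = env.step(action)
            # On-policy: the next action comes from the same
            # epsilon-greedy policy that is being improved.
            next_action = epsilon_greedy(Q, next_state, epsilon, rng)
            # TD update: Q(S,A) += alpha * (R + gamma*Q(S',A') - Q(S,A))
            target = reward + (0.0 if done else gamma * Q[next_state, next_action])
            Q[state, action] += alpha * (target - Q[state, action])
            state, action = next_state, next_action
    return Q
#+end_src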

SARSA is an on-policy method: the action \(A_{t+1}\) in the TD target is the one actually chosen by the current ε-greedy behaviour policy, so the same policy is both evaluated and improved.

SARSA converges to the optimal action-value function under Greedy in the Limit of Infinite Exploration (GLIE) together with the Robbins-Monro step-size conditions \(\sum_t \alpha_t = \infty\) and \(\sum_t \alpha^2_t < \infty\).
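
For example, the schedules below satisfy both conditions: \(\epsilon_k = 1/k\) decays to zero while still exploring every action infinitely often (GLIE), and \(\alpha_k = 1/k\) gives a divergent sum with a convergent sum of squares. The function names and the per-episode/per-visit indexing are assumptions for illustration.

#+begin_src python
def epsilon_schedule(episode):
    # GLIE: epsilon_k = 1/k -> 0, yet every action keeps being explored.
    return 1.0 / episode

def alpha_schedule(visit_count):
    # Robbins-Monro: sum of 1/k diverges, sum of 1/k^2 converges.
    return 1.0 / visit_count
#+end_src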

SARSA(\(\lambda\)) applies TD(\(\lambda\)) updates to the Q-function, using eligibility traces to assign credit to recently visited state-action pairs.
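
A sketch of one backward-view SARSA(\(\lambda\)) step with accumulating traces, assuming Q and E are (n_states, n_actions) arrays and E is reset to zero at the start of each episode; names and defaults are illustrative.

#+begin_src python
import numpy as np

def sarsa_lambda_step(Q, E, s, a, r, s_next, a_next, done,
                      alpha=0.1, gamma=0.99, lam=0.9):
    """One backward-view TD(lambda) update over all state-action pairs."""
    delta = r + (0.0 if done else gamma * Q[s_next, a_next]) - Q[s, a]
    E[s, a] += 1.0          # accumulating trace for the visited pair
    Q += alpha * delta * E  # every pair updated in proportion to its trace
    E *= gamma * lam        # traces decay at each step
    return Q, E
#+end_src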