Chapter complex-decisions-chapter defined a proper policy for an MDP as one that is guaranteed to reach a terminal state. Show that it is possible for a passive ADP agent to learn a transition model for which its policy $\pi$ is improper even if $\pi$ is proper for the true MDP; with such models, the POLICY-EVALUATION step may fail if $\gamma = 1$. Show that this problem cannot arise if POLICY-EVALUATION is applied to the learned model only at the end of a trial.
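
A minimal sketch of the failure mode may help when working the exercise. The two-state MDP, the reward value, and the helper `policy_evaluation` below are illustrative assumptions, not part of the exercise: with $\gamma = 1$, POLICY-EVALUATION amounts to solving the linear system $(I - \gamma T^\pi)U = R$ over the non-terminal states, and that system becomes singular when the learned $T^\pi$ gives the policy no path to a terminal state.

```python
import numpy as np

# Illustrative two-state MDP: one non-terminal state s0 and one terminal state.
# The fixed policy pi is proper under the true model (it eventually reaches the
# terminal state), but a passive ADP agent that has so far only observed
# s0 -> s0 transitions estimates a model under which pi is improper.

gamma = 1.0
R = np.array([-0.04])                 # reward of the non-terminal state s0


def policy_evaluation(T, R, gamma):
    """Solve U = R + gamma * T U over non-terminal states,
    i.e. (I - gamma * T) U = R, with terminal utilities fixed at 0."""
    A = np.eye(len(R)) - gamma * T
    return np.linalg.solve(A, R)      # raises LinAlgError if A is singular


# True model: from s0, stay with prob 0.9, reach the terminal state with prob 0.1.
T_true = np.array([[0.9]])
print(policy_evaluation(T_true, R, gamma))      # finite utility, about -0.4

# Learned model mid-trial: only s0 -> s0 has been observed, so the estimated
# P(s0 -> s0) = 1 and pi is improper under this model.
T_learned = np.array([[1.0]])
try:
    print(policy_evaluation(T_learned, R, gamma))
except np.linalg.LinAlgError as err:
    print("POLICY-EVALUATION fails with gamma = 1:", err)   # singular matrix
```

Since every trial ends at a terminal state, a model re-estimated only at the end of a trial records, for each state visited, at least one observed path leading to a terminal state, which is the key fact for the second part of the exercise.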