
SAC: Soft Actor Critic Implementation

This is a PyTorch implementation of the 'Soft Actor Critic' paper. Both a Jupyter notebook and a Python script are provided for the 'main' file, as well as for several of the other modules.

Paper: Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor
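
For reference, below is a minimal sketch of the core SAC update described in the paper: a squashed-Gaussian actor trained against the soft (entropy-regularized) Q-value, and twin critics regressed to a soft Bellman target. The class and function names here are illustrative only and are not taken from this repository's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Normal

class GaussianPolicy(nn.Module):
    """Squashed Gaussian policy: returns a tanh-bounded action and its log-prob."""
    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, act_dim)
        self.log_std = nn.Linear(hidden, act_dim)

    def forward(self, obs):
        h = self.net(obs)
        mu, log_std = self.mu(h), self.log_std(h).clamp(-20, 2)
        dist = Normal(mu, log_std.exp())
        u = dist.rsample()                       # reparameterized sample
        a = torch.tanh(u)                        # squash action into [-1, 1]
        # change-of-variables correction for the tanh squashing
        log_prob = dist.log_prob(u) - torch.log(1 - a.pow(2) + 1e-6)
        return a, log_prob.sum(-1, keepdim=True)

class QNetwork(nn.Module):
    """State-action value network Q(s, a)."""
    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

def sac_losses(policy, q1, q2, q1_targ, q2_targ, batch, alpha=0.2, gamma=0.99):
    """Soft Bellman backup for the critics and entropy-regularized actor loss.

    `rew` and `done` are expected to have shape (batch, 1).
    """
    obs, act, rew, next_obs, done = batch
    with torch.no_grad():
        next_act, next_logp = policy(next_obs)
        target_q = torch.min(q1_targ(next_obs, next_act), q2_targ(next_obs, next_act))
        # soft target: subtract alpha * log pi(a'|s') to keep the entropy bonus
        backup = rew + gamma * (1 - done) * (target_q - alpha * next_logp)
    critic_loss = F.mse_loss(q1(obs, act), backup) + F.mse_loss(q2(obs, act), backup)

    new_act, logp = policy(obs)
    q_new = torch.min(q1(obs, new_act), q2(obs, new_act))
    actor_loss = (alpha * logp - q_new).mean()   # maximize soft Q-value + entropy
    return critic_loss, actor_loss
```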

Below are the graphical results of this implementation on some of the MuJoCo and Box2D environments. More environments will be solved and added, and the script will be improved for better results, aiming to be on par with the state of the art (SOTA).

Env: InvertedPendulum-v1

Note: the plot title should read '250 scores' instead of '100 scores'.

Env: HalfCheetah-v1

Note: the plot title should read '1000 scores' instead of '100 scores'.

Env: LunarLanderContinuous-v2
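
As a rough sketch of how a policy interacts with one of the environments above, the loop below evaluates the GaussianPolicy class from the earlier sketch (with untrained weights) on LunarLanderContinuous-v2. The agent API and script layout in this repository may differ; this uses the older Gym step/reset interface matching the environment versions listed above.

```python
import gym
import torch

env = gym.make('LunarLanderContinuous-v2')
obs_dim = env.observation_space.shape[0]
act_dim = env.action_space.shape[0]
policy = GaussianPolicy(obs_dim, act_dim)   # from the sketch above (untrained here)

scores = []
for episode in range(10):
    obs, done, score = env.reset(), False, 0.0
    while not done:
        with torch.no_grad():
            action, _ = policy(torch.as_tensor(obs, dtype=torch.float32).unsqueeze(0))
        obs, reward, done, _ = env.step(action.squeeze(0).numpy())
        score += reward
    scores.append(score)

print(f'average score over {len(scores)} episodes: {sum(scores) / len(scores):.1f}')
```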
