This project implements a Reinforcement Learning (RL) based control system for a Continuous Stirred Tank Reactor (CSTR). The CSTR is a common type of chemical reactor used in industrial processes. The goal of this project is to design an RL agent that can efficiently control the CSTR to achieve desired performance metrics.
- Introduction
- Project Structure
- Installation
- Usage
- Reinforcement Learning Approach
- Environment
- Simulation
- Utilities
- Results
- Documentation
- Contributing
- License
- Acknowledgements
## Introduction

A Continuous Stirred Tank Reactor (CSTR) is widely used in chemical engineering for its simplicity and ease of operation. However, controlling a CSTR is challenging due to its nonlinear dynamics. This project explores the use of reinforcement learning to develop a control strategy for the CSTR, aiming to maintain the reactor at optimal operating conditions.
## Project Structure

The project is organized into the following directories:

- `Env/`: Contains the environment code for the CSTR.
  - `env.py`: Defines the environment for the CSTR where the RL agent interacts.
- `Simulation/`: Includes the simulator code for running experiments.
  - `simulator.py`: Contains the simulation logic for the CSTR, integrating the environment and the RL agent.
- `Utils/`: Utility scripts for metrics, plotting, and other helper functions.
  - `metrics.py`: Provides functions to calculate performance metrics.
  - `plotting.py`: Scripts for plotting results and visualizations.
  - `random_sa.py`: Random search algorithms for hyperparameter tuning.
  - `utils.py`: General utility functions used across the project.
- `docs/`: Documentation files for the project (Sphinx-generated).
- `.git/`: Git version control directory.
## Installation

To run this project, you need Python installed along with several dependencies. The recommended way to install the dependencies is inside a virtual environment.

- Clone the repository:

  ```bash
  git clone https://github.com/vikash9899/Contorl-CSTR-using-Reinforcement-learning.git
  ```

- Create a virtual environment:

  ```bash
  python3 -m venv venv
  source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
  ```

- Install the dependencies:

  ```bash
  pip install -r requirements.txt
  ```
## Usage

After installing the dependencies, you can run the simulations and train the RL agent.

- Navigate to the Simulation directory:

  ```bash
  cd Simulation
  ```

- Run the simulator:

  ```bash
  python simulator.py
  ```
## Reinforcement Learning Approach

The RL approach used in this project involves training an agent to learn the optimal policy for controlling the CSTR. The agent interacts with the environment, receives rewards based on its actions, and updates its policy accordingly. The key concepts are listed below, followed by a sketch of the interaction loop.
- State: Represents the current condition of the CSTR.
- Action: The control input provided by the RL agent.
- Reward: A scalar feedback signal used to guide the learning process.
- Policy: The strategy used by the agent to decide actions based on states.
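The sketch below illustrates this loop in a generic form, assuming a Gymnasium-style `reset`/`step` interface; the names `run_training_loop`, `agent.select_action`, and `agent.update` are placeholders rather than this repository's actual API.

```python
def run_training_loop(env, agent, n_steps=1000):
    """Generic agent-environment interaction loop (Gymnasium-style API assumed).

    `env` and `agent` are placeholders; the method names are illustrative and
    not necessarily those used in this repository.
    """
    state, _ = env.reset()                                # start the first episode
    for _ in range(n_steps):
        action = agent.select_action(state)               # policy chooses a control input
        next_state, reward, terminated, truncated, _ = env.step(action)
        agent.update(state, action, reward, next_state)   # learning update from the transition
        state = next_state
        if terminated or truncated:                       # start a new episode when this one ends
            state, _ = env.reset()
```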
## Environment

The environment (`env.py`) defines the interaction between the RL agent and the CSTR. It includes the state space, action space, reward function, and dynamics of the CSTR. Its main components are listed below, followed by an illustrative skeleton.
- State Space: Variables representing the current status of the reactor (e.g., concentration, temperature).
- Action Space: Possible control actions (e.g., adjusting flow rates, temperature settings).
- Reward Function: Designed to encourage desired behaviors such as stability, efficiency, and safety.
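As a rough illustration of how these pieces fit together, the skeleton below sketches a Gymnasium-style CSTR environment with a two-variable state (concentration and temperature), a coolant-temperature action, and a setpoint-tracking reward. The actual state variables, bounds, dynamics, and reward in `env.py` may differ; the class name `IllustrativeCSTREnv` and all numeric values are hypothetical.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class IllustrativeCSTREnv(gym.Env):
    """Sketch of a CSTR environment; the real `Env/env.py` may differ."""

    def __init__(self):
        # State: reactant concentration [mol/L] and reactor temperature [K] (illustrative bounds).
        self.observation_space = spaces.Box(
            low=np.array([0.0, 250.0]), high=np.array([10.0, 500.0]), dtype=np.float64)
        # Action: coolant temperature [K] (illustrative control input).
        self.action_space = spaces.Box(
            low=np.array([250.0]), high=np.array([400.0]), dtype=np.float64)
        self.setpoint = np.array([0.5, 350.0])  # hypothetical target operating point

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.state = np.array([1.0, 300.0])     # hypothetical initial condition
        return self.state.copy(), {}

    def step(self, action):
        # Placeholder dynamics: the real environment integrates the nonlinear CSTR ODEs.
        self.state = self.state + 0.01 * np.array([-self.state[0], action[0] - self.state[1]])
        # Reward: negative squared tracking error encourages staying near the setpoint.
        reward = -float(np.sum(((self.state - self.setpoint) / self.setpoint) ** 2))
        return self.state.copy(), reward, False, False, {}
```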
## Simulation

The simulation (`simulator.py`) integrates the environment and the RL agent, allowing for training and evaluation. It handles initialization, execution of episodes, and data collection for analysis; its main responsibilities are listed below, followed by a sketch of an episode loop.
- Episode Management: Running multiple episodes for training and testing.
- Data Logging: Collecting data on states, actions, rewards, and performance metrics.
- Visualization: Plotting the results for analysis and interpretation.
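The sketch below shows one way such an episode loop with data logging could be organized, assuming a Gymnasium-style environment; the function `run_episode` and its CSV logging format are illustrative, not the actual implementation in `simulator.py`.

```python
import csv
import numpy as np

def run_episode(env, policy, max_steps=200, log_path=None):
    """Run one episode and optionally log (state, action, reward) rows to a CSV file.

    Illustrative only: `env` is assumed to follow a Gymnasium-style API and
    `policy` is any callable mapping a state to an action.
    """
    state, _ = env.reset()
    history, total_reward = [], 0.0
    for step in range(max_steps):
        action = policy(state)                               # control input from the agent
        next_state, reward, terminated, truncated, _ = env.step(action)
        history.append([step,
                        np.asarray(state).ravel().tolist(),  # log arrays as plain lists
                        np.asarray(action).ravel().tolist(),
                        float(reward)])
        total_reward += reward
        state = next_state
        if terminated or truncated:                          # episode ended early
            break
    if log_path is not None:
        with open(log_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["step", "state", "action", "reward"])
            writer.writerows(history)
    return total_reward
```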
## Utilities

The `Utils/` directory contains helper functions and scripts that support the main codebase.
`metrics.py` provides functions to evaluate the performance of the RL agent, such as calculating cumulative rewards and stability measures.
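For example, a discounted cumulative reward and a simple settling-time style measure might look like the following sketch; the actual metric definitions in `metrics.py` may differ.

```python
import numpy as np

def discounted_return(rewards, gamma=0.99):
    """Cumulative discounted reward of one episode (illustrative)."""
    rewards = np.asarray(rewards, dtype=float)
    discounts = gamma ** np.arange(len(rewards))
    return float(np.sum(discounts * rewards))

def settling_steps(values, setpoint, tol=0.02):
    """First step after which the signal stays within +/- tol of the setpoint (illustrative)."""
    values = np.asarray(values, dtype=float)
    within = np.abs(values - setpoint) <= tol * abs(setpoint)
    for t in range(len(values)):
        if within[t:].all():
            return t
    return len(values)  # never settled within the horizon
```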
`plotting.py` includes scripts to visualize the results, such as state trajectories, reward curves, and action distributions.
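A minimal example of such a plot, here a per-episode reward curve drawn with matplotlib, might look like this sketch; the actual plotting utilities in `plotting.py` may differ.

```python
import matplotlib.pyplot as plt

def plot_reward_curve(episode_returns, out_path="reward_curve.png"):
    """Plot the per-episode return over training and save it to disk (illustrative)."""
    plt.figure()
    plt.plot(episode_returns)
    plt.xlabel("Episode")
    plt.ylabel("Return")
    plt.title("Training reward curve")
    plt.savefig(out_path)
    plt.close()
```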
`random_sa.py` implements random search algorithms for hyperparameter tuning, helping to find the best settings for the RL agent.
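A basic random search of this kind could look like the sketch below, where `train_and_evaluate` is a hypothetical callback that trains the agent with a given parameter set and returns a score to maximize; the actual search space and procedure in `random_sa.py` may differ.

```python
import random

def random_search(train_and_evaluate, n_trials=20, seed=0):
    """Random search over a small, illustrative hyperparameter space."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {
            "learning_rate": 10 ** rng.uniform(-4, -2),  # sampled on a log scale
            "discount": rng.uniform(0.9, 0.999),
            "hidden_units": rng.choice([32, 64, 128]),
        }
        score = train_and_evaluate(params)               # placeholder training callback
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```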
`utils.py` contains general-purpose functions used throughout the project, such as data normalization, logging, and configuration handling.
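As one example, a min-max normalization helper might look like the following sketch; the actual utilities in `utils.py` may differ.

```python
import numpy as np

def min_max_normalize(x, low, high):
    """Scale `x` from [low, high] into [0, 1], clipped to the bounds (illustrative)."""
    x = np.asarray(x, dtype=float)
    return np.clip((x - low) / (high - low), 0.0, 1.0)

def denormalize(x_norm, low, high):
    """Inverse of `min_max_normalize`."""
    return low + np.asarray(x_norm, dtype=float) * (high - low)
```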
## Results

The results of the experiments, including trained models and performance metrics, are stored in the `results/` directory. Key findings and visualizations are documented to provide insights into the effectiveness of the RL-based control strategy.
## Documentation

Comprehensive documentation is provided in the `docs/` directory, generated using Sphinx. It includes detailed descriptions of the project components, installation instructions, usage guides, and API references.

To build the documentation locally, navigate to the `docs/` directory and run:

```bash
make html
```

The generated HTML files will be available in `docs/_build/html/`.
## Contributing

Contributions to the project are welcome! If you have suggestions for improvements or new features, please create an issue or submit a pull request.

- Fork the repository.
- Create a new branch (`git checkout -b feature-branch`).
- Commit your changes (`git commit -am 'Add new feature'`).
- Push to the branch (`git push origin feature-branch`).
- Create a new pull request.
## License

This project is licensed under the MIT License. See the `LICENSE` file for more details.
## Acknowledgements

This project builds upon numerous open-source libraries and research contributions in the fields of reinforcement learning and chemical process control. We extend our gratitude to the contributors and maintainers of these projects.