Contributions of any kind are welcome! Check the contribution options below and select the one that fits you best.
We encourage giving a thumbs up 👍 to good issues and/or pull requests that are in high demand.
If you encounter any bugs, or come up with any feature/algorithm requests, check the issue tracker first. If you cannot find any existing issue, please feel free to post a new one.
Please do NOT post questions about the library, such as usage or installation questions, to the issue tracker. Use the nnabla user group for such questions.
If you find any typo, grammatical error, incorrect explanation, etc. in nnablaRL's documentation or READMEs, follow the procedure below and send a pull request!
- Search existing issues and/or pull requests in the nnablaRL GitHub repository.
- If none exists, post an issue with your improvement proposal.
- Fork the repository, and improve the document.
- (If you improve nnablaRL's documentation) Check that the documentation builds successfully and is displayed properly. (See: How to build the document section for building the documentation on your machine)
- Create a pull request from your development branch to nnablaRL's master branch. Our maintainers will then review your changes.
- Once your change is accepted, our maintainers will merge it.
To build the documentation, you will need Sphinx and some additional Python packages.
cd docs/
pip install -r requirements.txt
You can then build the documentation by running make <format> from the docs/ folder. Run make to get a list of all available output formats.
We recommend building the documentation as HTML.
cd docs/
make html
We appreciate community contributors who are willing to improve nnablaRL. We follow the development style used in nnabla, as described below.
- Search existing issues and/or pull requests in the nnablaRL GitHub repository.
- If none exists, post an issue with your feature proposal.
- Fork the repository, and develop your feature.
- Format your code according to nnablaRL's coding style. (See: Code format guidelines section below for details)
- Write unit test(s) and check that the linters do not raise any errors. If you implement a deep reinforcement learning algorithm, please also check that your implementation reproduces the results presented in the paper you refer to. (See: Testing guidelines section below for details)
- Create a pull request from your development branch to nnablaRL's master branch. Our maintainers will then review your changes.
- Once your change is accepted, our maintainers will merge it.
NOTE: Before starting to develop nnablaRL's code, install the extra Python packages used for code formatting and testing. You can install them as follows.
cd <nnabla-rl root directory>
pip install -r requirements.txt
We also recommend installing the nnablaRL package as follows so that code changes made during development are reflected automatically.
cd <nnabla-rl root directory>
pip install -e .
We use black and isort to keep the coding style consistent. After you finish developing your code, run black and isort to ensure that it is formatted correctly.
You can run black and isort as follows.
cd <nnabla-rl root directory>
black .
cd <nnabla-rl root directory>
isort .
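For reference, here is a rough sketch of the kind of change these tools make; the module and function names below are hypothetical, not part of nnablaRL. isort groups standard-library, third-party, and first-party imports and alphabetizes them, while black normalizes spacing and blank lines.

# Before formatting (hypothetical example)
import numpy as np
import os
from nnabla_rl.utils import your_new_file
def  compute( x ):
    return your_new_file.clip_value(x,0,1)

# After running isort and black
import os

import numpy as np

from nnabla_rl.utils import your_new_file


def compute(x):
    return your_new_file.clip_value(x, 0, 1)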
If there is no existing test that checks your changes, please write test(s) to check the validity of your code. Any pull request without unit tests will NOT be accepted.
When adding a new unit test file, place it under the tests/ directory with the name test_<the file name to test>.py. See the example below.
Example: When adding tests for your_new_file.py placed under nnabla_rl/utils.
.
├── ./nnabla_rl
│   └── ./nnabla_rl/utils
│       └── ./nnabla_rl/utils/your_new_file.py
└── ./tests
    └── ./tests/utils
        └── ./tests/utils/test_your_new_file.py
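As a rough sketch (not actual nnablaRL code), assuming the hypothetical your_new_file.py defines a function clip_value(x, low, high), the corresponding test file could look like this:

import pytest

from nnabla_rl.utils.your_new_file import clip_value


class TestYourNewFile:
    # Hypothetical tests for the hypothetical clip_value function above.
    def test_clip_value_within_range(self):
        assert clip_value(0.5, low=0.0, high=1.0) == 0.5

    def test_clip_value_above_range(self):
        assert clip_value(2.0, low=0.0, high=1.0) == 1.0

    def test_clip_value_invalid_range_raises(self):
        with pytest.raises(ValueError):
            clip_value(0.5, low=1.0, high=0.0)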
You can run tests with the following command.
cd <nnabla-rl root directory>
pytest
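You can also run only the tests in a specific file by passing its path to pytest. For example, for the hypothetical test file above:
pytest tests/utils/test_your_new_file.py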
In case your pull request contains a new implementation of a deep reinforcement learning algorithm, please check that your implementation reproduces the original paper's results and include the results you obtained in the pull request comment. Please also provide a Python script that reproduces the results and a README.md file that summarizes the evaluation results. Place the reproduction script and README.md under the reproductions/ directory as follows.
.
└── ./reproductions
    └── ./reproductions/<evaluated_env>
        └── ./reproductions/<evaluated_env>/<algorithm_name>
            ├── ./reproductions/<evaluated_env>/<algorithm_name>/<algorithm_name>_reproduction.py
            └── ./reproductions/<evaluated_env>/<algorithm_name>/README.md
If you cannot find an appropriate benchmark, scores, dataset, etc. for the implemented algorithm (for example, the dataset used in the paper is inaccessible), please consider evaluating the implementation in an alternative environment and providing the evaluation results obtained there.
We use flake8 and mypy to check code consistency and type annotations. Run flake8 and mypy to check that your implementation does not raise any errors.
cd <nnabla-rl root directory>
flake8
cd <nnabla-rl root directory>
mypy
Use docformatter to properly format the pydoc written in each Python file. Run docformatter as follows:
docformatter --exclude build --in-place --config pyproject.toml .
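For reference, a minimal sketch of the docstring layout that docformatter normalizes toward (PEP 257 style: one-line summary, blank line, then the description); the function and its contents are hypothetical.

def clip_value(x, low, high):
    """Clip a value into the closed interval [low, high].

    Raises ValueError if low is greater than high. This example is
    hypothetical and only illustrates the docstring layout.
    """
    if low > high:
        raise ValueError("low must not be greater than high")
    return max(low, min(x, high))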