A small tool for collecting a large number of responses from GPT-3.5 models.
ChatGPT rate limits the number of questions users may ask. The goal of this project is to let users simply leave their computers on for extended periods of time to collect large numbers of responses from ChatGPT. Contributions are welcome! 🤗
To install sleepyask, do one of the following:
> pip install sleepyask
> py -m pip install sleepyask
> python -m pip install sleepyask
This project also depends on the following packages:
> openai
You are required to provide an organization ID as well as an API key.
organization
- Your OpenAI organization ID. Get it here: https://platform.openai.com/account/org-settings
api_key
- Your OpenAI API key. To get it:
> Go to https://platform.openai.com/account/api-keys
> Log in (if required)
> Click on your profile picture in the top-right
> Select "View API keys"
> Click "Create new secret key"
count
- The number of workers to create for asking questions. Multiple workers can ask questions in parallel.
It is recommended that you do not store your user credentials directly in your code. Instead, use something like python-dotenv
to store your credentials in another file.
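For example, you might keep your key in a .env file at the project root. This is a minimal sketch: the variable name OPENAI_API_KEY matches the os.getenv call in the example below, and the key value is a placeholder.

# .env (keep this file out of version control)
OPENAI_API_KEY=your-api-key-here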
import os
from dotenv import load_dotenv
from sleepyask.chat import Sleepyask
load_dotenv() # take environment variables from .env.
TIMEOUT = 10000
RETRY_TIME = 5
RATE_LIMIT = 5
API_KEY = os.getenv('OPENAI_API_KEY')
# Each index should be unique, as it is used to avoid asking the same question twice
QUESTION_LIST = [
{'index': 1, 'text': 'What is 1 + 1?'},
{'index': 2, 'text': 'What is 1 + 2?'},
{'index': 3, 'text': 'What is 1 + 3?'}
]
OUT_PATH = 'output.jsonl'
# Chat completion parameters: model, number of responses per question, sampling temperature
CONFIGS = {"model": "gpt-3.5-turbo", "n": 10, "temperature": 0.7}
sleepyask = Sleepyask(configs=CONFIGS,
rate_limit=RATE_LIMIT,
api_key=API_KEY,
timeout=TIMEOUT,
verbose=True,
retry_time=RETRY_TIME)
sleepyask.start(question_list=QUESTION_LIST, out_path=OUT_PATH)
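Once the run finishes, you can read the collected responses back from the output file. A minimal sketch, assuming the output is standard JSON Lines (one JSON object per line); the exact fields written for each response depend on sleepyask's output format:

import json

# Read the collected responses back from the JSON Lines output file.
# Each non-empty line is assumed to be one JSON object; the exact keys
# depend on what sleepyask writes for each answered question.
with open('output.jsonl', 'r', encoding='utf-8') as file:
    responses = [json.loads(line) for line in file if line.strip()]

print(f'Collected {len(responses)} responses')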
- 🐛 Found a bug or interested in adding a feature? Create an issue!
- 🤗 Want to help? Create a pull request!