
Handling rate-limits #30

Open
fabiopires opened this issue Jan 12, 2025 · 0 comments

Hey there,

I'm running into rate-limit issues with Anthropic, and while I know I could potentially reach out to them and negotiate a custom deal, I was wondering whether there's a way to handle these differently (without raising an exception).

I'm thinking of either adding a delay when a rate limit is triggered, or even allowing the state of the analysis to be saved and resumed a few days later.

Thank you in advance.

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/vulnhuntr/LLMs.py", line 96, in send_message
    return self.client.messages.create(
  File "/usr/local/lib/python3.10/site-packages/anthropic/_utils/_utils.py", line 275, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/anthropic/resources/messages/messages.py", line 901, in create
    return self._post(
  File "/usr/local/lib/python3.10/site-packages/anthropic/_base_client.py", line 1279, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/usr/local/lib/python3.10/site-packages/anthropic/_base_client.py", line 956, in request
    return self._request(
  File "/usr/local/lib/python3.10/site-packages/anthropic/_base_client.py", line 1045, in _request
    return self._retry_request(
  File "/usr/local/lib/python3.10/site-packages/anthropic/_base_client.py", line 1094, in _retry_request
    return self._request(
  File "/usr/local/lib/python3.10/site-packages/anthropic/_base_client.py", line 1045, in _request
    return self._retry_request(
  File "/usr/local/lib/python3.10/site-packages/anthropic/_base_client.py", line 1094, in _retry_request
    return self._request(
  File "/usr/local/lib/python3.10/site-packages/anthropic/_base_client.py", line 1045, in _request
    return self._retry_request(
  File "/usr/local/lib/python3.10/site-packages/anthropic/_base_client.py", line 1094, in _retry_request
    return self._request(
  File "/usr/local/lib/python3.10/site-packages/anthropic/_base_client.py", line 1060, in _request
    raise self._make_status_error_from_response(err.response) from None
anthropic.RateLimitError: Error code: 429 - {'type': 'error', 'error': {'type': 'rate_limit_error', 'message': 'This request would exceed your organization’s rate limit of 40,000 input tokens per minute. For details, refer to: https://docs.anthropic.com/en/api/rate-limits; see the response headers for current usage. Please reduce the prompt length or the maximum tokens requested, or try again later. You may also contact sales at https://www.anthropic.com/contact-sales to discuss your options for a rate limit increase.'}}

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/bin/vulnhuntr", line 8, in <module>
    sys.exit(run())
  File "/usr/local/lib/python3.10/site-packages/vulnhuntr/__main__.py", line 464, in run
    secondary_analysis_report: Response = llm.chat(vuln_specific_user_prompt, response_model=Response)
  File "/usr/local/lib/python3.10/site-packages/vulnhuntr/LLMs.py", line 68, in chat
    response = self.send_message(messages, max_tokens, response_model)
  File "/usr/local/lib/python3.10/site-packages/vulnhuntr/LLMs.py", line 105, in send_message
    raise RateLimitError("Request was rate-limited") from e
vulnhuntr.LLMs.RateLimitError: Request was rate-limited
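For what it's worth, a minimal sketch of the delay idea: a generic exponential-backoff helper that retries a call when a rate-limit exception is raised. This is only an illustration — `FakeRateLimitError` and `flaky_request` below are stand-ins for `anthropic.RateLimitError` and the real `send_message` call, which would be passed in instead.

```python
import time

def with_backoff(fn, *, retries=5, base_delay=2.0, rate_limit_exc=Exception):
    """Call fn(), sleeping with exponential backoff when a rate-limit
    exception is raised; re-raise after the final attempt."""
    for attempt in range(retries):
        try:
            return fn()
        except rate_limit_exc:
            if attempt == retries - 1:
                raise
            # 2s, 4s, 8s, ... between attempts (scaled down for the demo)
            time.sleep(base_delay * (2 ** attempt))

# Stand-in for anthropic.RateLimitError, for illustration only.
class FakeRateLimitError(Exception):
    pass

calls = {"n": 0}

def flaky_request():
    # Simulates an API call that is rate-limited twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise FakeRateLimitError("429")
    return "ok"

result = with_backoff(flaky_request, base_delay=0.01,
                      rate_limit_exc=FakeRateLimitError)
```

In vulnhuntr this could wrap the `self.client.messages.create(...)` call in `LLMs.send_message`, catching `anthropic.RateLimitError` instead of raising immediately. The 429 message also points at the response headers for current usage, so a more careful version could read the reset time from them rather than guessing a delay.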