Description
The error indicates that the LLM provider has not been specified correctly: the system attempts to use the WatsonxLLM model but does not pass the corresponding provider information on to LiteLLM.
Error breakdown:
Error message: BadRequestError: LLM Provider NOT provided.
Details: The model WatsonxLLM was passed, but no provider was specified. LiteLLM expects an LLM provider to be given as well, such as 'Huggingface', 'Azure', or another supported provider.
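For reference, LiteLLM resolves the backend from a provider prefix on the model string rather than from the model object itself. A minimal sketch of a provider-prefixed call (the watsonx model id is taken from the error params; the exact routing behaviour is an assumption based on LiteLLM's provider docs):

```python
# Minimal sketch (assumption): LiteLLM picks the backend from the "provider/" prefix,
# e.g. "huggingface/starcoder" or "watsonx/ibm/granite-13b-instruct-v2".
# For watsonx, credentials are read from environment variables such as
# WATSONX_URL, WATSONX_APIKEY and WATSONX_PROJECT_ID (per the LiteLLM provider docs).
import litellm

response = litellm.completion(
    model="watsonx/ibm/granite-13b-instruct-v2",  # provider prefix + model id
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response["choices"][0]["message"]["content"])
```

A model reference with no provider prefix (here it arrives at LiteLLM as the string "WatsonxLLM") gives LiteLLM nothing to match against its provider list, which is what triggers the BadRequestError above.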
Steps to Reproduce
Official documentation: https://docs.crewai.com/concepts/llms#ibm-watsonx-ai

```python
# Imports assumed from context (not part of the original snippet); WatsonxLLM here
# is the LangChain wrapper for IBM watsonx.ai.
import os
from datetime import datetime

from langchain_ibm import WatsonxLLM

# Parameter sets for each agent type
historical_params = {"decoding_method": "sample", "max_new_tokens": 2000, "temperature": 0.1}  # Structured and focused on facts
current_params = {"decoding_method": "greedy", "max_new_tokens": 1500, "temperature": 0.2}  # Low variation, focused on precision
report_params = {"decoding_method": "sample", "max_new_tokens": 3500, "temperature": 0.7}  # More creative and expansive for summaries
yahoo_params = {"decoding_method": "sample", "max_new_tokens": 2000, "temperature": 0.0}  # No variation, focused on precision

# Dedicated models for each agent
historical_data_model = "meta-llama/llama-2-13b-chat"
current_data_model = "ibm/granite-13b-instruct-v2"
report_generation_model = "meta-llama/llama-2-13b-chat"

# Dedicated models for each agent (second variant)
historical_data_model = "meta-llama/llama-2-13b-chat"
current_data_model = "meta-llama/llama-2-13b-chat"
report_generation_model = "meta-llama/llama-2-13b-chat"
yahoo_finance_model = "meta-llama/llama-2-13b-chat"

# Date range (last two years)
end_date = datetime.now().date()
start_date = datetime(datetime.now().year - 2, 1, 1).date()

# Initialize Watsonx LLM instances
historical_data_llm = WatsonxLLM(
    model_id=historical_data_model,
    url="https://eu-de.ml.cloud.ibm.com",
    params=historical_params,
    project_id=os.getenv("WATSONX_PROJECT_ID"),
    apikey=os.getenv("WATSONX_APIKEY"),
)

current_data_llm = WatsonxLLM(
    model_id=current_data_model,
    url="https://eu-de.ml.cloud.ibm.com",
    params=current_params,
    project_id=os.getenv("WATSONX_PROJECT_ID"),
    apikey=os.getenv("WATSONX_APIKEY"),
)

report_writer_llm = WatsonxLLM(
    model_id=report_generation_model,
    url="https://eu-de.ml.cloud.ibm.com",
    params=report_params,
    project_id=os.getenv("WATSONX_PROJECT_ID"),
    apikey=os.getenv("WATSONX_APIKEY"),
)

yahoo_finance_llm = WatsonxLLM(
    model_id=yahoo_finance_model,
    url="https://eu-de.ml.cloud.ibm.com",
    params=yahoo_params,
    project_id=os.getenv("WATSONX_PROJECT_ID"),
    apikey=os.getenv("WATSONX_APIKEY"),
)
```
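By contrast, the watsonx example in the CrewAI documentation linked above goes through CrewAI's own LLM class with a `watsonx/`-prefixed model string, so that LiteLLM can resolve the provider. A minimal sketch of that style of configuration for one of the agents (keyword names and environment variables are assumptions based on the docs page and may differ between crewAI versions):

```python
# Sketch (assumption, based on https://docs.crewai.com/concepts/llms#ibm-watsonx-ai):
# CrewAI's LLM wrapper forwards the call to litellm.completion, so the model string
# carries the "watsonx/" provider prefix instead of a LangChain WatsonxLLM instance.
# WATSONX_URL, WATSONX_APIKEY and WATSONX_PROJECT_ID are assumed to be set in the
# environment (names per the LiteLLM watsonx provider docs).
import os
from crewai import LLM

yahoo_finance_llm = LLM(
    model="watsonx/ibm/granite-13b-instruct-v2",  # provider prefix + model id
    base_url=os.getenv("WATSONX_URL", "https://eu-de.ml.cloud.ibm.com"),
    api_key=os.getenv("WATSONX_APIKEY"),
    temperature=0.0,
    max_tokens=2000,
)
```

The agents would then receive LLM objects of this kind instead of the WatsonxLLM instances shown above.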
Expected behavior
The crew should be able to communicate with the watsonx LLM and execute its tasks.
Screenshots/Code snippets
2024-11-12 16:59:14,065 - 27388 - llm.py-llm:178 - ERROR: Failed to get supported params: argument of type 'NoneType' is not iterable
2024-11-12 16:59:14,075 - 27388 - llm.py-llm:178 - ERROR: Failed to get supported params: argument of type 'NoneType' is not iterable
2024-11-12 16:59:14,084 - 27388 - llm.py-llm:178 - ERROR: Failed to get supported params: argument of type 'NoneType' is not iterable
2024-11-12 16:59:14,094 - 27388 - llm.py-llm:178 - ERROR: Failed to get supported params: argument of type 'NoneType' is not iterable
2024-11-12 16:59:14,102 - 27388 - llm.py-llm:178 - ERROR: Failed to get supported params: argument of type 'NoneType' is not iterable
2024-11-12 16:59:14,111 - 27388 - llm.py-llm:178 - ERROR: Failed to get supported params: argument of type 'NoneType' is not iterable
2024-11-12 16:59:14,120 - 27388 - llm.py-llm:178 - ERROR: Failed to get supported params: argument of type 'NoneType' is not iterable
2024-11-12 16:59:14,131 - 27388 - llm.py-llm:178 - ERROR: Failed to get supported params: argument of type 'NoneType' is not iterable
2024-11-12 16:59:14,138 - 27388 - llm.py-llm:161 - ERROR: LiteLLM call failed: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=WatsonxLLM
Params: {'model_id': 'ibm/granite-13b-instruct-v2', 'deployment_id': None, 'params': {'decoding_method': 'sample', 'max_new_tokens': 2000, 'temperature': 0.0}, 'project_id': '4ea20d19-516f-4fdf-9af6-7bd946e6d9b5', 'space_id': None}
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers
2024-11-12 16:59:14,148 - 27388 - llm.py-llm:178 - ERROR: Failed to get supported params: argument of type 'NoneType' is not iterable
Provider List: https://docs.litellm.ai/docs/providers
Provider List: https://docs.litellm.ai/docs/providers
Provider List: https://docs.litellm.ai/docs/providers
Provider List: https://docs.litellm.ai/docs/providers
Provider List: https://docs.litellm.ai/docs/providers
Provider List: https://docs.litellm.ai/docs/providers
Provider List: https://docs.litellm.ai/docs/providers
Provider List: https://docs.litellm.ai/docs/providers
Agent: Yahoo Finance Data Specialist
Task: Fetch Yahoo Finance data for IBM, including stock price, P/E ratio, ROE, and recent news. for the timeframe from 2022-01-01 to 2024-11-12.
Provider List: https://docs.litellm.ai/docs/providers
Agent: Yahoo Finance Data Specialist
Task: Fetch Yahoo Finance data for IBM, including stock price, P/E ratio, ROE, and recent news. for the timeframe from 2022-01-01 to 2024-11-12.
2024-11-12 16:59:14,168 - 27388 - llm.py-llm:161 - ERROR: LiteLLM call failed: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=WatsonxLLM
Params: {'model_id': 'ibm/granite-13b-instruct-v2', 'deployment_id': None, 'params': {'decoding_method': 'sample', 'max_new_tokens': 2000, 'temperature': 0.0}, 'project_id': '4ea20d19-516f-4fdf-9af6-7bd946e6d9b5', 'space_id': None}
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers
2024-11-12 16:59:14,178 - 27388 - llm.py-llm:178 - ERROR: Failed to get supported params: argument of type 'NoneType' is not iterable
2024-11-12 16:59:14,206 - 27388 - llm.py-llm:161 - ERROR: LiteLLM call failed: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=WatsonxLLM
Params: {'model_id': 'ibm/granite-13b-instruct-v2', 'deployment_id': None, 'params': {'decoding_method': 'sample', 'max_new_tokens': 2000, 'temperature': 0.0}, 'project_id': '4ea20d19-516f-4fdf-9af6-7bd946e6d9b5', 'space_id': None}
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers
Provider List: https://docs.litellm.ai/docs/providers
Agent: Yahoo Finance Data Specialist
Task: Fetch Yahoo Finance data for IBM, including stock price, P/E ratio, ROE, and recent news. for the timeframe from 2022-01-01 to 2024-11-12.
BadRequestError Traceback (most recent call last)
File ~\anaconda3\Lib\site-packages\crewai\agent.py:236, in Agent.execute_task(self, task, context, tools)
235 try:
--> 236 result = self.agent_executor.invoke(
237 {
238 "input": task_prompt,
239 "tool_names": self.agent_executor.tools_names,
240 "tools": self.agent_executor.tools_description,
241 "ask_for_human_input": task.human_input,
242 }
243 )["output"]
244 except Exception as e:
File ~\anaconda3\Lib\site-packages\crewai\agents\crew_agent_executor.py:93, in CrewAgentExecutor.invoke(self, inputs)
92 self.ask_for_human_input = bool(inputs.get("ask_for_human_input", False))
---> 93 formatted_answer = self._invoke_loop()
95 if self.ask_for_human_input:
File ~\anaconda3\Lib\site-packages\crewai\agents\crew_agent_executor.py:175, in CrewAgentExecutor._invoke_loop(self, formatted_answer)
174 else:
--> 175 raise e
177 self._show_logs(formatted_answer)
File ~\anaconda3\Lib\site-packages\crewai\agents\crew_agent_executor.py:115, in CrewAgentExecutor._invoke_loop(self, formatted_answer)
114 if not self.request_within_rpm_limit or self.request_within_rpm_limit():
--> 115 answer = self.llm.call(
116 self.messages,
117 callbacks=self.callbacks,
118 )
120 if not self.use_stop_words:
File ~\anaconda3\Lib\site-packages\crewai\llm.py:155, in LLM.call(self, messages, callbacks)
153 params = {k: v for k, v in params.items() if v is not None}
--> 155 response = litellm.completion(**params)
156 return response["choices"][0]["message"]["content"]
File ~\anaconda3\Lib\site-packages\litellm\utils.py:1013, in client.<locals>.wrapper(*args, **kwargs)
1010 logging_obj.failure_handler(
1011 e, traceback_exception, start_time, end_time
1012 ) # DO NOT MAKE THREADED - router retry fallback relies on this!
-> 1013 raise e
File ~\anaconda3\Lib\site-packages\litellm\utils.py:903, in client.<locals>.wrapper(*args, **kwargs)
902 # MODEL CALL
--> 903 result = original_function(*args, **kwargs)
904 end_time = datetime.datetime.now()
File ~\anaconda3\Lib\site-packages\litellm\main.py:2999, in completion(model, messages, timeout, temperature, top_p, n, stream, stream_options, stop, max_completion_tokens, max_tokens, modalities, audio, presence_penalty, frequency_penalty, logit_bias, user, response_format, seed, tools, tool_choice, logprobs, top_logprobs, parallel_tool_calls, deployment_id, extra_headers, functions, function_call, base_url, api_version, api_key, model_list, **kwargs)
2997 except Exception as e:
2998 ## Map to OpenAI Exception
-> 2999 raise exception_type(
3000 model=model,
3001 custom_llm_provider=custom_llm_provider,
3002 original_exception=e,
3003 completion_kwargs=args,
3004 extra_kwargs=kwargs,
3005 )
File ~\anaconda3\Lib\site-packages\litellm\main.py:906, in completion(model, messages, timeout, temperature, top_p, n, stream, stream_options, stop, max_completion_tokens, max_tokens, modalities, audio, presence_penalty, frequency_penalty, logit_bias, user, response_format, seed, tools, tool_choice, logprobs, top_logprobs, parallel_tool_calls, deployment_id, extra_headers, functions, function_call, base_url, api_version, api_key, model_list, **kwargs)
905 custom_llm_provider = "azure"
--> 906 model, custom_llm_provider, dynamic_api_key, api_base = get_llm_provider(
907 model=model,
908 custom_llm_provider=custom_llm_provider,
909 api_base=api_base,
910 api_key=api_key,
911 )
912 if model_response is not None and hasattr(model_response, "_hidden_params"):
File ~\anaconda3\Lib\site-packages\litellm\litellm_core_utils\get_llm_provider_logic.py:313, in get_llm_provider(model, custom_llm_provider, api_base, api_key, litellm_params)
312 if isinstance(e, litellm.exceptions.BadRequestError):
--> 313 raise e
314 else:
File ~\anaconda3\Lib\site-packages\litellm\litellm_core_utils\get_llm_provider_logic.py:290, in get_llm_provider(model, custom_llm_provider, api_base, api_key, litellm_params)
289 # maps to openai.NotFoundError, this is raised when openai does not recognize the llm
--> 290 raise litellm.exceptions.BadRequestError( # type: ignore
291 message=error_str,
292 model=model,
293 response=httpx.Response(
294 status_code=400,
295 content=error_str,
296 request=httpx.Request(method="completion", url="https://github.com/BerriAI/litellm"), # type: ignore
297 ),
298 llm_provider="",
299 )
300 if api_base is not None and not isinstance(api_base, str):
BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=WatsonxLLM
Params: {'model_id': 'ibm/granite-13b-instruct-v2', 'deployment_id': None, 'params': {'decoding_method': 'sample', 'max_new_tokens': 2000, 'temperature': 0.0}, 'project_id': '4ea20d19-516f-4fdf-9af6-7bd946e6d9b5', 'space_id': None}
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers
During handling of the above exception, another exception occurred:
BadRequestError Traceback (most recent call last)
File ~\anaconda3\Lib\site-packages\crewai\agent.py:236, in Agent.execute_task(self, task, context, tools)
235 try:
--> 236 result = self.agent_executor.invoke(
237 {
238 "input": task_prompt,
239 "tool_names": self.agent_executor.tools_names,
240 "tools": self.agent_executor.tools_description,
241 "ask_for_human_input": task.human_input,
242 }
243 )["output"]
244 except Exception as e:
File ~\anaconda3\Lib\site-packages\crewai\agents\crew_agent_executor.py:93, in CrewAgentExecutor.invoke(self, inputs)
92 self.ask_for_human_input = bool(inputs.get("ask_for_human_input", False))
---> 93 formatted_answer = self._invoke_loop()
95 if self.ask_for_human_input:
File ~\anaconda3\Lib\site-packages\crewai\agents\crew_agent_executor.py:175, in CrewAgentExecutor._invoke_loop(self, formatted_answer)
174 else:
--> 175 raise e
177 self._show_logs(formatted_answer)
File ~\anaconda3\Lib\site-packages\crewai\agents\crew_agent_executor.py:115, in CrewAgentExecutor._invoke_loop(self, formatted_answer)
114 if not self.request_within_rpm_limit or self.request_within_rpm_limit():
--> 115 answer = self.llm.call(
116 self.messages,
117 callbacks=self.callbacks,
118 )
120 if not self.use_stop_words:
File ~\anaconda3\Lib\site-packages\crewai\llm.py:155, in LLM.call(self, messages, callbacks)
153 params = {k: v for k, v in params.items() if v is not None}
--> 155 response = litellm.completion(**params)
156 return response["choices"][0]["message"]["content"]
File ~\anaconda3\Lib\site-packages\litellm\utils.py:1013, in client.<locals>.wrapper(*args, **kwargs)
1010 logging_obj.failure_handler(
1011 e, traceback_exception, start_time, end_time
1012 ) # DO NOT MAKE THREADED - router retry fallback relies on this!
-> 1013 raise e
File ~\anaconda3\Lib\site-packages\litellm\utils.py:903, in client.<locals>.wrapper(*args, **kwargs)
902 # MODEL CALL
--> 903 result = original_function(*args, **kwargs)
904 end_time = datetime.datetime.now()
File ~\anaconda3\Lib\site-packages\litellm\main.py:2999, in completion(model, messages, timeout, temperature, top_p, n, stream, stream_options, stop, max_completion_tokens, max_tokens, modalities, audio, presence_penalty, frequency_penalty, logit_bias, user, response_format, seed, tools, tool_choice, logprobs, top_logprobs, parallel_tool_calls, deployment_id, extra_headers, functions, function_call, base_url, api_version, api_key, model_list, **kwargs)
2997 except Exception as e:
2998 ## Map to OpenAI Exception
-> 2999 raise exception_type(
3000 model=model,
3001 custom_llm_provider=custom_llm_provider,
3002 original_exception=e,
3003 completion_kwargs=args,
3004 extra_kwargs=kwargs,
3005 )
File ~\anaconda3\Lib\site-packages\litellm\main.py:906, in completion(model, messages, timeout, temperature, top_p, n, stream, stream_options, stop, max_completion_tokens, max_tokens, modalities, audio, presence_penalty, frequency_penalty, logit_bias, user, response_format, seed, tools, tool_choice, logprobs, top_logprobs, parallel_tool_calls, deployment_id, extra_headers, functions, function_call, base_url, api_version, api_key, model_list, **kwargs)
905 custom_llm_provider = "azure"
--> 906 model, custom_llm_provider, dynamic_api_key, api_base = get_llm_provider(
907 model=model,
908 custom_llm_provider=custom_llm_provider,
909 api_base=api_base,
910 api_key=api_key,
911 )
912 if model_response is not None and hasattr(model_response, "_hidden_params"):
File ~\anaconda3\Lib\site-packages\litellm\litellm_core_utils\get_llm_provider_logic.py:313, in get_llm_provider(model, custom_llm_provider, api_base, api_key, litellm_params)
312 if isinstance(e, litellm.exceptions.BadRequestError):
--> 313 raise e
314 else:
File ~\anaconda3\Lib\site-packages\litellm\litellm_core_utils\get_llm_provider_logic.py:290, in get_llm_provider(model, custom_llm_provider, api_base, api_key, litellm_params)
289 # maps to openai.NotFoundError, this is raised when openai does not recognize the llm
--> 290 raise litellm.exceptions.BadRequestError( # type: ignore
291 message=error_str,
292 model=model,
293 response=httpx.Response(
294 status_code=400,
295 content=error_str,
296 request=httpx.Request(method="completion", url="https://github.com/BerriAI/litellm"), # type: ignore
297 ),
298 llm_provider="",
299 )
300 if api_base is not None and not isinstance(api_base, str):
BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=WatsonxLLM
Params: {'model_id': 'ibm/granite-13b-instruct-v2', 'deployment_id': None, 'params': {'decoding_method': 'sample', 'max_new_tokens': 2000, 'temperature': 0.0}, 'project_id': '4ea20d19-516f-4fdf-9af6-7bd946e6d9b5', 'space_id': None}
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers
During handling of the above exception, another exception occurred:
BadRequestError Traceback (most recent call last)
Cell In[26], line 2
1 # Execute tasks
----> 2 print(crew.kickoff())
File ~\anaconda3\Lib\site-packages\crewai\crew.py:494, in Crew.kickoff(self, inputs)
491 metrics: List[UsageMetrics] = []
493 if self.process == Process.sequential:
--> 494 result = self._run_sequential_process()
495 elif self.process == Process.hierarchical:
496 result = self._run_hierarchical_process()
File ~\anaconda3\Lib\site-packages\crewai\crew.py:598, in Crew._run_sequential_process(self)
596 def _run_sequential_process(self) -> CrewOutput:
597 """Executes tasks sequentially and returns the final output."""
--> 598 return self._execute_tasks(self.tasks)
File ~\anaconda3\Lib\site-packages\crewai\crew.py:696, in Crew._execute_tasks(self, tasks, start_index, was_replayed)
693 futures.clear()
695 context = self._get_context(task, task_outputs)
--> 696 task_output = task.execute_sync(
697 agent=agent_to_use,
698 context=context,
699 tools=agent_to_use.tools,
700 )
701 task_outputs = [task_output]
702 self._process_task_result(task, task_output)
File ~\anaconda3\Lib\site-packages\crewai\task.py:191, in Task.execute_sync(self, agent, context, tools)
184 def execute_sync(
185 self,
186 agent: Optional[BaseAgent] = None,
187 context: Optional[str] = None,
188 tools: Optional[List[Any]] = None,
189 ) -> TaskOutput:
190 """Execute the task synchronously."""
--> 191 return self._execute_core(agent, context, tools)
File ~\anaconda3\Lib\site-packages\crewai\task.py:247, in Task._execute_core(self, agent, context, tools)
243 tools = tools or self.tools or []
245 self.processed_by_agents.add(agent.role)
--> 247 result = agent.execute_task(
248 task=self,
249 context=context,
250 tools=tools,
251 )
253 pydantic_output, json_output = self._export_output(result)
255 task_output = TaskOutput(
256 name=self.name,
257 description=self.description,
(...)
263 output_format=self._get_output_format(),
264 )
File ~\anaconda3\Lib\site-packages\crewai\agent.py:248, in Agent.execute_task(self, task, context, tools)
246 if self._times_executed > self.max_retry_limit:
247 raise e
--> 248 result = self.execute_task(task, context, tools)
250 if self.max_rpm and self._rpm_controller:
251 self._rpm_controller.stop_rpm_counter()
File ~\anaconda3\Lib\site-packages\crewai\agent.py:248, in Agent.execute_task(self, task, context, tools)
246 if self._times_executed > self.max_retry_limit:
247 raise e
--> 248 result = self.execute_task(task, context, tools)
250 if self.max_rpm and self._rpm_controller:
251 self._rpm_controller.stop_rpm_counter()
File ~\anaconda3\Lib\site-packages\crewai\agent.py:247, in Agent.execute_task(self, task, context, tools)
245 self._times_executed += 1
246 if self._times_executed > self.max_retry_limit:
--> 247 raise e
248 result = self.execute_task(task, context, tools)
250 if self.max_rpm and self._rpm_controller:
File ~\anaconda3\Lib\site-packages\crewai\agent.py:236, in Agent.execute_task(self, task, context, tools)
233 task_prompt = self._use_trained_data(task_prompt=task_prompt)
235 try:
--> 236 result = self.agent_executor.invoke(
237 {
238 "input": task_prompt,
239 "tool_names": self.agent_executor.tools_names,
240 "tools": self.agent_executor.tools_description,
241 "ask_for_human_input": task.human_input,
242 }
243 )["output"]
244 except Exception as e:
245 self._times_executed += 1
File ~\anaconda3\Lib\site-packages\crewai\agents\crew_agent_executor.py:93, in CrewAgentExecutor.invoke(self, inputs)
90 self._show_start_logs()
92 self.ask_for_human_input = bool(inputs.get("ask_for_human_input", False))
---> 93 formatted_answer = self._invoke_loop()
95 if self.ask_for_human_input:
96 human_feedback = self._ask_human_input(formatted_answer.output)
File ~\anaconda3\Lib\site-packages\crewai\agents\crew_agent_executor.py:175, in CrewAgentExecutor._invoke_loop(self, formatted_answer)
173 return self._invoke_loop(formatted_answer)
174 else:
--> 175 raise e
177 self._show_logs(formatted_answer)
178 return formatted_answer
File ~\anaconda3\Lib\site-packages\crewai\agents\crew_agent_executor.py:115, in CrewAgentExecutor._invoke_loop(self, formatted_answer)
113 while not isinstance(formatted_answer, AgentFinish):
114 if not self.request_within_rpm_limit or self.request_within_rpm_limit():
--> 115 answer = self.llm.call(
116 self.messages,
117 callbacks=self.callbacks,
118 )
120 if not self.use_stop_words:
121 try:
File ~\anaconda3\Lib\site-packages\crewai\llm.py:155, in LLM.call(self, messages, callbacks)
152 # Remove None values to avoid passing unnecessary parameters
153 params = {k: v for k, v in params.items() if v is not None}
--> 155 response = litellm.completion(**params)
156 return response["choices"][0]["message"]["content"]
157 except Exception as e:
File ~\anaconda3\Lib\site-packages\litellm\utils.py:1013, in client.<locals>.wrapper(*args, **kwargs)
1009 if logging_obj:
1010 logging_obj.failure_handler(
1011 e, traceback_exception, start_time, end_time
1012 ) # DO NOT MAKE THREADED - router retry fallback relies on this!
-> 1013 raise e
File ~\anaconda3\Lib\site-packages\litellm\utils.py:903, in client.<locals>.wrapper(*args, **kwargs)
901 print_verbose(f"Error while checking max token limit: {str(e)}")
902 # MODEL CALL
--> 903 result = original_function(*args, **kwargs)
904 end_time = datetime.datetime.now()
905 if "stream" in kwargs and kwargs["stream"] is True:
File ~\anaconda3\Lib\site-packages\litellm\main.py:2999, in completion(model, messages, timeout, temperature, top_p, n, stream, stream_options, stop, max_completion_tokens, max_tokens, modalities, audio, presence_penalty, frequency_penalty, logit_bias, user, response_format, seed, tools, tool_choice, logprobs, top_logprobs, parallel_tool_calls, deployment_id, extra_headers, functions, function_call, base_url, api_version, api_key, model_list, **kwargs)
2996 return response
2997 except Exception as e:
2998 ## Map to OpenAI Exception
-> 2999 raise exception_type(
3000 model=model,
3001 custom_llm_provider=custom_llm_provider,
3002 original_exception=e,
3003 completion_kwargs=args,
3004 extra_kwargs=kwargs,
3005 )
File ~\anaconda3\Lib\site-packages\litellm\main.py:906, in completion(model, messages, timeout, temperature, top_p, n, stream, stream_options, stop, max_completion_tokens, max_tokens, modalities, audio, presence_penalty, frequency_penalty, logit_bias, user, response_format, seed, tools, tool_choice, logprobs, top_logprobs, parallel_tool_calls, deployment_id, extra_headers, functions, function_call, base_url, api_version, api_key, model_list, **kwargs)
904 model = deployment_id
905 custom_llm_provider = "azure"
--> 906 model, custom_llm_provider, dynamic_api_key, api_base = get_llm_provider(
907 model=model,
908 custom_llm_provider=custom_llm_provider,
909 api_base=api_base,
910 api_key=api_key,
911 )
912 if model_response is not None and hasattr(model_response, "_hidden_params"):
913 model_response._hidden_params["custom_llm_provider"] = custom_llm_provider
File ~\anaconda3\Lib\site-packages\litellm\litellm_core_utils\get_llm_provider_logic.py:313, in get_llm_provider(model, custom_llm_provider, api_base, api_key, litellm_params)
311 except Exception as e:
312 if isinstance(e, litellm.exceptions.BadRequestError):
--> 313 raise e
314 else:
315 error_str = (
316 f"GetLLMProvider Exception - {str(e)}\n\noriginal model: {model}"
317 )
File ~\anaconda3\Lib\site-packages\litellm\litellm_core_utils\get_llm_provider_logic.py:290, in get_llm_provider(model, custom_llm_provider, api_base, api_key, litellm_params)
288 error_str = f"LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model={model}\n Pass model as E.g. For 'Huggingface' inference endpoints pass in
completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers"289 # maps to openai.NotFoundError, this is raised when openai does not recognize the llm
--> 290 raise litellm.exceptions.BadRequestError( # type: ignore
291 message=error_str,
292 model=model,
293 response=httpx.Response(
294 status_code=400,
295 content=error_str,
296 request=httpx.Request(method="completion", url="https://github.com/BerriAI/litellm"), # type: ignore
297 ),
298 llm_provider="",
299 )
300 if api_base is not None and not isinstance(api_base, str):
301 raise Exception(
302 "api base needs to be a string. api_base={}".format(api_base)
303 )
BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=WatsonxLLM
Params: {'model_id': 'ibm/granite-13b-instruct-v2', 'deployment_id': None, 'params': {'decoding_method': 'sample', 'max_new_tokens': 2000, 'temperature': 0.0}, 'project_id': '4ea20d19-516f-4fdf-9af6-7bd946e6d9b5', 'space_id': None}
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers
Operating System
Windows 11
Python Version
3.12
crewAI Version
Version: 0.76.9
crewAI Tools Version
Version: 0.13.4
Virtual Environment
Conda
Evidence
![Screenshot 2024-11-12 170759](https://github.com/user-attachments/assets/ec04fa99-fc0c-4ace-8dbd-af11b36c88f8) ![Screenshot 2024-11-12 170827](https://github.com/user-attachments/assets/09b23a56-c0ec-4a04-a591-1e93d2610cfd) ![Screenshot 2024-11-12 170903](https://github.com/user-attachments/assets/ecd4c91f-4fe8-4917-9033-9e8127dbb0d2) ![Screenshot 2024-11-12 170923](https://github.com/user-attachments/assets/1777370f-ec73-438b-88ef-ecccbe8ce121)
Possible Solution
None
Additional context
NA