As of 0.4.3, information specified in the configuration YAML is automatically available to the context object in Python processors.
This feature is extremely helpful.
However, it doesn't cover every case.
For example, switching between OpenAI and Azure OpenAI as the LLM in a LangChain implementation requires a different wrapper class (https://python.langchain.com/docs/integrations/llms/azure_openai#deployments).
We run into a similar issue with WatsonX: per https://ibm.github.io/watson-machine-learning-sdk/fm_extensions.html, a WatsonxLLM(model=model) wrapper must be declared.
In these cases, to configure the env variables, we still need to either:
- map them in the pipeline YAML file (a potential source of mapping errors), or
- use os.environ in Python (which is worse: it's subject to the same human error, but now adds more code to maintain and can't be validated as easily at build time).
Then, we also need to ensure that the correct LLM wrapper is instantiated in the Python code.
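Today that means hand-rolled code along these lines (a sketch only; AZURE_DEPLOYMENT_NAME is an invented variable name here, though OPENAI_API_TYPE is a standard OpenAI SDK setting):

```python
import os

from langchain.llms import AzureOpenAI, OpenAI

# Manual wrapper selection, driven by env vars we also had to wire up by hand.
if os.environ.get("OPENAI_API_TYPE") == "azure":
    llm = AzureOpenAI(
        deployment_name=os.environ["AZURE_DEPLOYMENT_NAME"],  # invented name
    )
else:
    llm = OpenAI()
```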
It would be quite useful if LangStream could simplify this by providing something like:

llm = context.buildLLM()
In LangChain, there is now a small tree of LLM classes, but most of them appear to implement either BaseLanguageModel or BaseLLM.
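As a rough sketch of the relevant slice of that hierarchy (LangChain 0.0.x era):

```python
#   BaseLanguageModel
#   ├── BaseLLM            (completion-style models)
#   │   ├── OpenAI
#   │   └── AzureOpenAI
#   └── BaseChatModel      (chat-style models)
#       ├── ChatOpenAI
#       └── AzureChatOpenAI
from langchain.chat_models import AzureChatOpenAI, ChatOpenAI
from langchain.llms import AzureOpenAI, OpenAI
```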
Different implementations require different env vars to be set, so it is quite annoying to have to keep track of them all manually, especially when switching between providers.
Perhaps we can leverage this to offer additional no-code configuration and add more value.
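Concretely, a context.buildLLM() helper might look something like the sketch below. The llm config section and its keys (provider, deployment, api-version) are invented for illustration, not an existing LangStream schema:

```python
from langchain.llms import AzureOpenAI, OpenAI

def build_llm(llm_config: dict):
    """Pick and configure the right LangChain wrapper from configuration."""
    provider = llm_config.get("provider", "openai")  # invented key
    if provider == "azure":
        # Azure needs a deployment name and API version on top of the key.
        return AzureOpenAI(
            deployment_name=llm_config["deployment"],  # invented key
            openai_api_version=llm_config.get("api-version", "2023-05-15"),
        )
    return OpenAI(model_name=llm_config.get("model", "text-davinci-003"))
```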
One idea is that we could provide a Mixin or Decorator that automatically wires up certain properties based on the configuration specified. That way, if someone wants a different behavior, there's a path to customize it without introducing breaking changes.
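For illustration, the decorator flavor of that idea might look like the sketch below; the with_llm name and the processor init signature are assumptions, not LangStream's actual API:

```python
def with_llm(processor_cls):
    """Class decorator: attach an LLM built from the agent configuration."""
    original_init = processor_cls.init

    def patched_init(self, config):
        original_init(self, config)
        # build_llm() as sketched above; reads the (invented) "llm" section.
        self.llm = build_llm(config.get("llm", {}))

    processor_cls.init = patched_init
    return processor_cls
```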
Conversational memory and other such features are extremely useful, and I'd expect the list of them to grow quickly. The base LLMs are more primitive models, so once someone gets beyond the initial case of API integration with a base model, they usually want things like conversational memory, and that's where the chat models are really useful.
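For example, in LangChain a chat model pairs with memory in just a couple of lines:

```python
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

# Each predict() call appends the exchange to the buffer, so the model
# sees prior turns on subsequent calls.
chain = ConversationChain(llm=ChatOpenAI(), memory=ConversationBufferMemory())
chain.predict(input="Hello!")
```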