In the current instrumentation approach it's really hard to get the actual values for:

- User prompt
- System prompt
- Output

Retrieving them requires system-level knowledge of where these values reside within the object. In the case of OpenAI they sit at fixed indices of the `messages` array, and this layout can also vary between providers. An eval template today has to reference the nested paths directly:
```
[BEGIN DATA]
************
[Input Question, System message and Context to AI Assistant]:
{attributes.llm.input_messages.0.message.content}
{attributes.llm.input_messages.1.message.content}
************
[AI Assistant Answer]:
{attributes.llm.output_messages.0.message.content}
************
[END DATA]
```
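To make the pain concrete, here is a minimal sketch of the extraction code a consumer has to write today. It assumes span attributes are exposed as a flat dict keyed by the dotted paths above; the `get_prompts` helper name is hypothetical, and the 0/1 index positions reflect OpenAI's message ordering rather than any cross-provider guarantee:

```python
# Hypothetical helper -- not part of any existing API.
def get_prompts(attributes: dict) -> dict:
    """Extract prompt/output text from flattened span attributes.

    Assumes OpenAI-style ordering: the system message is input index 0
    and the user message is input index 1. Other providers may order
    messages differently, which is exactly the problem -- the caller
    needs provider-specific knowledge to find these values.
    """
    return {
        "sys_prompt": attributes.get("llm.input_messages.0.message.content"),
        "user_prompt": attributes.get("llm.input_messages.1.message.content"),
        "output": attributes.get("llm.output_messages.0.message.content"),
    }
```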
I'd like to suggest we add an option to the instrumentation to copy these parameters to well-known fields. This makes downstream access much easier, at the cost of duplicating a small amount of data. For example:
- `attributes.llm.user_prompt`
- `attributes.llm.sys_prompt`
- `attributes.llm.output`
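A rough sketch of what that copy step could look like inside the instrumentation, assuming the same flat attribute dict as above. The key names mirror the proposal, but the function, the mapping table, and the 0/1 index assumption are all illustrative, not an existing convention:

```python
# Hypothetical post-processing step run once when a span is finalized.
# Maps proposed well-known keys to their provider-specific nested paths.
WELL_KNOWN_FIELDS = {
    "llm.sys_prompt": "llm.input_messages.0.message.content",
    "llm.user_prompt": "llm.input_messages.1.message.content",
    "llm.output": "llm.output_messages.0.message.content",
}

def copy_to_well_known_fields(attributes: dict) -> None:
    """Mirror nested message content onto flat, provider-agnostic keys.

    Downstream consumers can then read attributes["llm.user_prompt"]
    without knowing the provider's message layout.
    """
    for well_known, nested in WELL_KNOWN_FIELDS.items():
        if nested in attributes:
            attributes[well_known] = attributes[nested]
```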
The other option is that these nested paths become de facto shortcuts to the final data locations, and they will have to proliferate through our software everywhere.