When exporting the workflow using the Ensemble module, the NVTabular Triton config file creates two parameters for each ragged feature, `feature_name___offsets` and `feature_name___values`, for both the inputs and outputs.
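For context, the `___values`/`___offsets` pair is the usual way to flatten variable-length (ragged) sequences into dense tensors: one tensor holds all items concatenated, the other marks where each sequence starts and ends. A minimal pure-Python sketch of that encoding (the function names here are hypothetical, not part of NVTabular):

```python
def to_ragged(batch):
    """Flatten a batch of variable-length sequences into values + offsets."""
    values, offsets = [], [0]
    for seq in batch:
        values.extend(seq)
        offsets.append(offsets[-1] + len(seq))
    return values, offsets

def from_ragged(values, offsets):
    """Recover the original sequences from the values + offsets pair."""
    return [values[offsets[i]:offsets[i + 1]] for i in range(len(offsets) - 1)]

batch = [[1, 2, 3], [4, 5]]
values, offsets = to_ragged(batch)
# values == [1, 2, 3, 4, 5]; offsets == [0, 3, 5]
assert from_ragged(values, offsets) == batch
```

This is why each ragged input appears twice in the generated config: a single dense tensor cannot represent rows of different lengths without such an auxiliary offsets tensor.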
Is there a way to avoid creating these extra parameters and keep the inputs as-is? Any workaround is appreciated.
@Azilyss this is done so that we can train DL models with ragged inputs and then serve them on Triton accordingly. Does using pad=True not set is_ragged to False?
Apologies, the outputs are actually the correct ones.
However, because the inputs are expected to be ragged, the parameters `item_id-list_seq___offsets` and `item_id-list_seq___values` are created for the reasons you mentioned. In my current setup, I am running Triton inference one request at a time, not batched requests. So I was wondering whether it is possible to keep the input as-is, without having to pad the training dataset before fitting the workflow.
**Can I ignore the `is_ragged` property of the categorical features when exporting the Workflow?**
Setup:
- nvtabular version: 23.6.0
- merlin-systems version: 23.6.0
The NVTabular workflow is defined as follows:
The dataset typically has sequences of items of different lengths, and the workflow slices and pads them to the specified sequence_length.
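That slice-and-pad step can be sketched in plain Python. This is a simplified illustration only, assuming sequences are truncated to their most recent items and right-padded; `slice_and_pad` is a hypothetical helper, not the actual NVTabular op:

```python
def slice_and_pad(seq, sequence_length, pad_value=0):
    """Keep the last `sequence_length` items, then right-pad to a fixed length."""
    sliced = seq[-sequence_length:]
    return sliced + [pad_value] * (sequence_length - len(sliced))

# Every row comes out with the same length, so the feature is no longer ragged.
rows = [[1, 2, 3, 4, 5], [7, 8]]
fixed = [slice_and_pad(r, 3) for r in rows]
# fixed == [[3, 4, 5], [7, 8, 0]]
```

Once every sequence has the same fixed length, a single dense input tensor suffices, which is the situation the question above is trying to reach without padding the training data.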
The workflow is exported as follows:
Code to reproduce