[ADAP-852] [Feature] Data type conversions on schema change when unsupported by alter statement in Snowflake #755
Comments
Thanks for raising this @kbrock91 !

🧠 Crux

It seems like the crux of the issue is that some data types (plus their precision/scale/length) can't always be converted to others without some kind of loss. e.g. suppose we have a FLOAT column that needs to become NUMBER(38,0): any fractional values would be rounded during the conversion. So we'd need someone (or something) to make a decision of how to do those types of conversions.

Proposed ideas

You proposed a couple ideas of how to proceed:

1. A flag / config to fall back to a full refresh when an incompatible type change is detected.
2. Have dbt recreate the column with the new type, copy the values over, and delete the old column.

Idea 2

I don't think dbt could safely do idea 2 without possible data loss, due to the main crux of the issue listed above. An off-the-wall idea would be to utilize Snowflake's casting behavior to check whether any values would actually change before attempting the conversion (a sketch of one such check follows this comment).

Idea 1

It sounds like you are suggesting a new flag / config that opts in to a full refresh when the type change is unsupported. One way a user could do something similar today would be to re-run the failed model with the --full-refresh flag. Of course, always using --full-refresh isn't ideal, since it rebuilds the entire table.

Next steps

Overall, there is a solid reason that Snowflake doesn't allow alter statements in these types of scenarios. It seems pretty reasonable for dbt to be hands-off in cases like this, handing it over to a human to put their eyes on it. I'd also expect these cases to be relatively infrequent 🤞

Do you have any other ideas for how to empower dbt users when there is an incompatible change of data types? Maybe we could output an error message with more detail on potential steps to resolve the issue? Would adding contracts to upstream relations alleviate the risk of surprising changes in data types?

This seems like it is generalizable beyond just dbt-snowflake, so we might end up transferring this issue to dbt-core at some point. Either way, dbt-snowflake is behaving the way we expect currently, so I'm going to re-label this as a feature request.
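As a rough illustration of that off-the-wall idea, here is a minimal sketch of a pre-flight lossiness check for a FLOAT to NUMBER(38,0) conversion. The table and column names (my_table, my_col) are hypothetical, and values that overflow NUMBER(38,0) entirely would still raise an error rather than being counted:

```sql
-- Hypothetical pre-flight check: count rows whose FLOAT values would not
-- survive conversion to NUMBER(38,0) intact. The round-trip cast surfaces
-- any value that would be rounded; my_table / my_col are placeholder names.
select count(*) as lossy_rows
from my_table
where my_col is not null
  and my_col::number(38,0)::float <> my_col;
```

If lossy_rows comes back as zero, an automated conversion would be safe for the data as it exists today; otherwise dbt would still want a human to decide how the rounding should be handled.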
This issue has been marked as Stale because it has been open for 180 days with no activity. If you would like the issue to remain open, please comment on the issue, or else it will be closed in 7 days.
Although we are closing this issue as stale, it's not gone forever. Issues can be reopened if there is renewed community interest. Just add a comment to notify the maintainers.
Hi team, just re-opening this to ask a few questions on behalf of a user regarding this issue:
Is this a new bug in dbt-snowflake?
Current Behavior
In Snowflake, when a model is set to an incremental materialization with on_schema_change = 'sync_all_columns' and the detected change is a column data type change (e.g. FLOAT to NUMBER), dbt recognizes the schema change and executes an alter statement to change the data type. For Snowflake, dbt attempts to use alter table ... alter column ... set data type, which works for some type changes but not for others (Snowflake's documentation lists which conversions are supported and unsupported). For unsupported conversions such as FLOAT to NUMBER, Snowflake fails on the alter statement and the incremental model run is unsuccessful. Today, we have to either manually recreate the column with the right data type or run a full refresh on the incremental model to be able to load the data. A minimal sketch of this setup follows.
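For reference, a model configuration exhibiting this behavior might look like the following (model and column names here are hypothetical):

```sql
-- models/my_incremental_model.sql (hypothetical model name)
{{ config(
    materialized='incremental',
    on_schema_change='sync_all_columns'
) }}

select
    id,
    amount  -- if upstream changes this column from FLOAT to NUMBER, dbt emits
            -- "alter table ... alter column amount set data type number(...)",
            -- which Snowflake rejects for this particular conversion
from {{ ref('upstream_model') }}
```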
Expected Behavior
When dbt detects a schema change that requires an unsupported data type conversion, it should account for this limitation rather than failing outright. We could either have a flag to run a full refresh (not ideal) or, more creatively, dbt could execute a series of commands to fully recreate the column with the new type: add a replacement column, copy the values over, and delete the old column (see the sketch below).
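A minimal sketch of that series of commands in Snowflake SQL, assuming a hypothetical table analytics.my_table whose amount column needs to move from FLOAT to NUMBER(38,2):

```sql
-- Hypothetical column-recreation sequence; all names are placeholders.
alter table analytics.my_table add column amount_tmp number(38,2);
update analytics.my_table set amount_tmp = amount::number(38,2);  -- cast may round values
alter table analytics.my_table drop column amount;
alter table analytics.my_table rename column amount_tmp to amount;
```

One trade-off of this approach: the recreated column moves to the end of the column order, and the copy step can silently round values, which is presumably part of why Snowflake refuses to do the conversion with a plain alter statement.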
Steps To Reproduce
Relevant log output
No response
Environment
Additional Context
No response