[Internal] Update Jobs GetRun API to support paginated responses for jobs and ForEach tasks #819
Merged
Conversation
If integration tests don't run automatically, an authorized user can run them manually by following the instructions below. Checks will be approved automatically on success.
gkiko10 temporarily deployed to test-trigger-is with GitHub Actions on November 8, 2024 14:33 (Inactive)
gkiko10 changed the title from "get_run paginates tasks and iterations" to "[Internal] Update Jobs GetRun API to support paginated responses for jobs and ForEach tasks" on Nov 8, 2024
Test Details: go/deco-tests/11755465331
renaudhartert-db approved these changes on Nov 9, 2024
renaudhartert-db added a commit that referenced this pull request on Nov 18, 2024:
### New Features and Improvements

* Read streams by 1MB chunks by default ([#817](#817)).

### Bug Fixes

* Rewind seekable streams before retrying ([#821](#821)).

### Internal Changes

* Reformat SDK with YAPF 0.43 ([#822](#822)).
* Update Jobs GetRun API to support paginated responses for jobs and ForEach tasks ([#819](#819)).
* Update PR template ([#814](#814)).

### API Changes:

* Added `databricks.sdk.service.apps`, `databricks.sdk.service.billing`, `databricks.sdk.service.catalog`, `databricks.sdk.service.compute`, `databricks.sdk.service.dashboards`, `databricks.sdk.service.files`, `databricks.sdk.service.iam`, `databricks.sdk.service.jobs`, `databricks.sdk.service.marketplace`, `databricks.sdk.service.ml`, `databricks.sdk.service.oauth2`, `databricks.sdk.service.pipelines`, `databricks.sdk.service.provisioning`, `databricks.sdk.service.serving`, `databricks.sdk.service.settings`, `databricks.sdk.service.sharing`, `databricks.sdk.service.sql`, `databricks.sdk.service.vectorsearch` and `databricks.sdk.service.workspace` packages.

OpenAPI SHA: 2035bf5234753adfd080a79bff325dd4a5b90bc2, Date: 2024-11-15
This was referenced Nov 18, 2024
github-merge-queue bot pushed a commit that referenced this pull request on Nov 18, 2024:
### New Features and Improvements

* Read streams by 1MB chunks by default ([#817](#817)).

### Bug Fixes

* Rewind seekable streams before retrying ([#821](#821)).
* Properly serialize nested data classes.

### Internal Changes

* Reformat SDK with YAPF 0.43 ([#822](#822)).
* Update Jobs GetRun API to support paginated responses for jobs and ForEach tasks ([#819](#819)).

### API Changes:

* Added `service_principal_client_id` field for `databricks.sdk.service.apps.App`.
* Added `azure_service_principal`, `gcp_service_account_key` and `read_only` fields for `databricks.sdk.service.catalog.CreateCredentialRequest`.
* Added `azure_service_principal`, `read_only` and `used_for_managed_storage` fields for `databricks.sdk.service.catalog.CredentialInfo`.
* Added `omit_username` field for `databricks.sdk.service.catalog.ListTablesRequest`.
* Added `azure_service_principal` and `read_only` fields for `databricks.sdk.service.catalog.UpdateCredentialRequest`.
* Added `external_location_name`, `read_only` and `url` fields for `databricks.sdk.service.catalog.ValidateCredentialRequest`.
* Added `is_dir` field for `databricks.sdk.service.catalog.ValidateCredentialResponse`.
* Added `only` field for `databricks.sdk.service.jobs.RunNow`.
* Added `restart_window` field for `databricks.sdk.service.pipelines.CreatePipeline`.
* Added `restart_window` field for `databricks.sdk.service.pipelines.EditPipeline`.
* Added `restart_window` field for `databricks.sdk.service.pipelines.PipelineSpec`.
* Added `private_access_settings_id` field for `databricks.sdk.service.provisioning.UpdateWorkspaceRequest`.
* Changed `create_credential()` and `generate_temporary_service_credential()` methods for [w.credentials](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/credentials.html) workspace-level service with new required argument order.
* Changed `access_connector_id` field for `databricks.sdk.service.catalog.AzureManagedIdentity` to be required.
* Changed `name` field for `databricks.sdk.service.catalog.CreateCredentialRequest` to be required.
* Changed `credential_name` field for `databricks.sdk.service.catalog.GenerateTemporaryServiceCredentialRequest` to be required.

OpenAPI SHA: f2385add116e3716c8a90a0b68e204deb40f996c, Date: 2024-11-15
What changes are proposed in this pull request?
Introduces an extension to the Jobs `get_run` call that paginates the `tasks` and `iterations` arrays in the response and returns an aggregated response to the caller. This change prepares for the Jobs API 2.2 release, which serves paginated responses. Pagination ends once `next_page_token` is absent from the response. The pagination logic is not exposed to the customer. A minimal sketch of this aggregation pattern is shown below.
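For illustration only, here is a minimal sketch of the loop described above, assuming a generic `api_client.do(method, path, query=...)` helper and the `/api/2.2/jobs/runs/get` endpoint; the helper name, endpoint path, and call signature are assumptions for this example, not the SDK's actual internals, and the real SDK keeps this loop hidden from the caller.

```python
from typing import Any, Dict, Optional


def get_run_aggregated(api_client, run_id: int) -> Dict[str, Any]:
    """Sketch only: fetch every page of a run and merge `tasks`/`iterations`.

    `api_client.do(...)`, the endpoint path, and this helper's name are
    illustrative assumptions; the SDK does not expose this pagination logic.
    """
    run: Optional[Dict[str, Any]] = None
    page_token: Optional[str] = None
    while True:
        query: Dict[str, Any] = {"run_id": run_id}
        if page_token:
            query["page_token"] = page_token
        page = api_client.do("GET", "/api/2.2/jobs/runs/get", query=query)
        if run is None:
            # The first page carries the run metadata plus the first slice of the arrays.
            run = page
        else:
            # Subsequent pages only contribute additional tasks and iterations.
            run.setdefault("tasks", []).extend(page.get("tasks", []))
            run.setdefault("iterations", []).extend(page.get("iterations", []))
        page_token = page.get("next_page_token")
        if not page_token:
            # Pagination is over once next_page_token is absent from the response.
            break
    return run
```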
How is this tested?
Unit tests and manual testing.