datasets tests, GHA merging, minor fix-ups #680
Merged
Commits (6):
- 6ec0307: datasets tests, GHA merging, minor fix-ups
- a23c264: Merge branch 'main' into andrei/datasets-crud-tests
- 9ed6a05: descriptions (AndreiCautisanu)
- 98efe61: fix dataset creation when no datasets created
- 78a4457: traces locator fix
- 1772e7d: fixed sanity after UI changes too
tests_end_to_end/tests/Datasets/test_datasets_crud_operations.py (189 additions, 0 deletions)
import pytest
from playwright.sync_api import Page, expect
from page_objects.DatasetsPage import DatasetsPage
from page_objects.ProjectsPage import ProjectsPage
from page_objects.TracesPage import TracesPage
from sdk_helpers import delete_dataset_by_name_if_exists, update_dataset_name, get_dataset_by_name
import opik
import time


class TestDatasetsCrud:

    def test_create_dataset_ui_datasets_page(self, page: Page):
        """
        Basic test to check dataset creation via the UI. Uses the UI after creation to check that the dataset exists.
        1. Create a dataset via the UI from the datasets page
        2. Check that the dataset exists in the dataset table
        3. If no errors are raised, the test passes
        """
        datasets_page = DatasetsPage(page)
        datasets_page.go_to_page()
        dataset_name = 'automated_tests_dataset'
        try:
            datasets_page.create_dataset_by_name(dataset_name=dataset_name)
            datasets_page.check_dataset_exists_on_page_by_name(dataset_name=dataset_name)
        except Exception as e:
            print(f'error during dataset creation: {e}')
            raise
        finally:
            delete_dataset_by_name_if_exists(dataset_name=dataset_name)

    def test_create_dataset_ui_add_traces_to_new_dataset(self, page: Page, create_delete_project_sdk, create_10_test_traces):
        """
        Basic test to check dataset creation via the "add to new dataset" functionality on the traces page. Uses the UI after creation to check that the dataset exists.
        1. Create a project with some traces
        2. Via the UI, select the traces and add them to a new dataset
        3. Switch to the datasets page and check that the dataset exists in the dataset table
        4. If no errors are raised and the dataset exists, the test passes
        """
        dataset_name = 'automated_tests_dataset'
        proj_name = create_delete_project_sdk
        projects_page = ProjectsPage(page)
        projects_page.go_to_page()
        projects_page.click_project(project_name=proj_name)

        traces_page = TracesPage(page)
        traces_page.add_all_traces_to_new_dataset(dataset_name=dataset_name)

        try:
            datasets_page = DatasetsPage(page)
            datasets_page.go_to_page()
            datasets_page.check_dataset_exists_on_page_by_name(dataset_name=dataset_name)
        except Exception as e:
            print(f'error: dataset not created: {e}')
            raise
        finally:
            delete_dataset_by_name_if_exists(dataset_name=dataset_name)

    def test_create_dataset_sdk_client(self, client: opik.Opik):
        """
        Basic test to check dataset creation via the SDK. Uses the SDK to fetch the created dataset and check that it exists.
        1. Create a dataset via the SDK Opik client
        2. Fetch the dataset via the SDK Opik client
        3. If dataset creation failed, client.get_dataset raises an error and the test fails
        """
        dataset_name = 'automated_tests_dataset'
        try:
            client.create_dataset(name=dataset_name)
            time.sleep(0.2)
            assert client.get_dataset(name=dataset_name) is not None
        except Exception as e:
            print(f'error during dataset creation: {e}')
            raise
        finally:
            delete_dataset_by_name_if_exists(dataset_name=dataset_name)

    @pytest.mark.parametrize('dataset_fixture', ['create_delete_dataset_ui', 'create_delete_dataset_sdk'])
    def test_dataset_visibility(self, request, page: Page, client: opik.Opik, dataset_fixture):
        """
        Checks that a created dataset is visible via both the UI and the SDK. The test is parametrized to cover datasets created via both the UI and the SDK.
        1. Create a dataset via the UI or the SDK (one test instance for each)
        2. Fetch the dataset by name using the SDK Opik client and check that the dataset exists in the datasets table in the UI
        3. Check that the correct dataset is returned by the SDK and that the name is correct in the UI
        """
        dataset_name = request.getfixturevalue(dataset_fixture)
        time.sleep(0.5)

        datasets_page = DatasetsPage(page)
        datasets_page.go_to_page()
        datasets_page.check_dataset_exists_on_page_by_name(dataset_name)

        dataset_sdk = client.get_dataset(dataset_name)
        assert dataset_sdk.name == dataset_name

    @pytest.mark.parametrize('dataset_fixture', ['create_dataset_sdk_no_cleanup', 'create_dataset_ui_no_cleanup'])
    def test_dataset_name_update(self, request, page: Page, client: opik.Opik, dataset_fixture):
        """
        Checks updating a dataset's name via the SDK update method. The test is parametrized to cover datasets created via both the UI and the SDK.
        1. Create a dataset via the UI or the SDK (one test instance for each)
        2. Send a request via the SDK OpikApi client to update the dataset's name
        3. Check via both the SDK and the UI that the dataset has been renamed (SDK: the dataset ID matches when fetching by the new name; UI: a dataset
           with the new name appears and no dataset with the old name appears)
        """
        dataset_name = request.getfixturevalue(dataset_fixture)
        time.sleep(0.5)
        new_name = 'updated_test_dataset_name'

        name_updated = False
        try:
            dataset_id = update_dataset_name(name=dataset_name, new_name=new_name)
            name_updated = True

            dataset_new_name = get_dataset_by_name(dataset_name=new_name)

            dataset_id_updated_name = dataset_new_name['id']
            assert dataset_id_updated_name == dataset_id

            datasets_page = DatasetsPage(page)
            datasets_page.go_to_page()
            datasets_page.check_dataset_exists_on_page_by_name(dataset_name=new_name)
            datasets_page.check_dataset_not_exists_on_page_by_name(dataset_name=dataset_name)

        except Exception as e:
            print(f'Error occurred during update of dataset name: {e}')
            raise

        finally:
            if name_updated:
                delete_dataset_by_name_if_exists(new_name)
            else:
                delete_dataset_by_name_if_exists(dataset_name)

    @pytest.mark.parametrize('dataset_fixture', ['create_dataset_sdk_no_cleanup', 'create_dataset_ui_no_cleanup'])
    def test_dataset_deletion_in_sdk(self, request, page: Page, client: opik.Opik, dataset_fixture):
        """
        Checks proper deletion of a dataset via the SDK. The test is parametrized to cover datasets created via both the UI and the SDK.
        1. Create a dataset via the UI or the SDK (one test instance for each)
        2. Send a request via the SDK to delete the dataset
        3. Check via both the SDK and the UI that the dataset no longer exists (client.get_dataset should raise a 404 error, and the dataset should not appear in the datasets table in the UI)
        """
        dataset_name = request.getfixturevalue(dataset_fixture)
        time.sleep(0.5)
        client.delete_dataset(name=dataset_name)
        datasets_page = DatasetsPage(page)
        datasets_page.go_to_page()
        datasets_page.check_dataset_not_exists_on_page_by_name(dataset_name=dataset_name)
        try:
            _ = client.get_dataset(dataset_name)
            pytest.fail(f'dataset {dataset_name} still exists after deletion')
        except Exception as e:
            if '404' in str(e) or 'not found' in str(e).lower():
                pass
            else:
                raise

    @pytest.mark.parametrize('dataset_fixture', ['create_dataset_sdk_no_cleanup', 'create_dataset_ui_no_cleanup'])
    def test_dataset_deletion_in_ui(self, request, page: Page, client: opik.Opik, dataset_fixture):
        """
        Checks proper deletion of a dataset via the UI. The test is parametrized to cover datasets created via both the UI and the SDK.
        1. Create a dataset via the UI or the SDK (one test instance for each)
        2. Delete the dataset from the UI using the delete button on the datasets page
        3. Check via both the SDK and the UI that the dataset no longer exists (client.get_dataset should raise a 404 error, and the dataset should not appear in the datasets table in the UI)
        """
        dataset_name = request.getfixturevalue(dataset_fixture)
        time.sleep(0.5)
        datasets_page = DatasetsPage(page)
        datasets_page.go_to_page()
        datasets_page.delete_dataset_by_name(dataset_name=dataset_name)
        time.sleep(1)

        try:
            _ = client.get_dataset(dataset_name)
            pytest.fail(f'dataset {dataset_name} still exists after deletion')
        except Exception as e:
            if '404' in str(e) or 'not found' in str(e).lower():
                pass
            else:
                raise

        datasets_page.go_to_page()
        datasets_page.check_dataset_not_exists_on_page_by_name(dataset_name=dataset_name)
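The manual try/except around the 404 checks in the two deletion tests could also be expressed with `pytest.raises`. A minimal sketch of that alternative, using a hypothetical `StubClient` in place of the real `opik.Opik` client (the exact exception type the SDK raises is not pinned down here, so the sketch only assumes the error message contains "404" or "not found"):

```python
import pytest

class StubClient:
    """Hypothetical stand-in for opik.Opik after a dataset was deleted."""
    def get_dataset(self, name):
        # The real SDK raises an error for a missing dataset; we assume
        # its message mentions 404 / not found.
        raise RuntimeError(f"404: dataset {name} not found")

def check_dataset_gone(client, dataset_name):
    # Passes only if get_dataset raises AND the message matches the pattern;
    # any other exception (or no exception) fails the check.
    with pytest.raises(Exception, match=r"404|not found"):
        client.get_dataset(dataset_name)

check_dataset_gone(StubClient(), "automated_tests_dataset")
```

This keeps the "must fail with 404" intent in one line and avoids the pass/raise branching of the manual version.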
Is this print needed for something in particular? I mean, why not let the test fail and use some kind of "after the test run" hook to destroy the dataset?
Not sure I get the idea of wrapping every creation in try/except here. I believe there should be a better way to handle it, but for now we can keep it as is.
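The reviewer's "after the test run" hook maps onto pytest's yield-fixture teardown pattern: create the dataset in a fixture, yield its name to the test, and clean up after the yield, which runs even when the test body fails. A minimal sketch (the fixture name is hypothetical; the commented-out calls stand in for the `client.create_dataset` and `delete_dataset_by_name_if_exists` helpers used in this PR):

```python
import pytest

@pytest.fixture
def dataset_with_cleanup():
    dataset_name = 'automated_tests_dataset'
    # setup would go here, e.g.:
    # client.create_dataset(name=dataset_name)
    yield dataset_name
    # teardown: runs even if the test body raised, replacing try/finally
    # delete_dataset_by_name_if_exists(dataset_name=dataset_name)

def test_example(dataset_with_cleanup):
    # the test receives the dataset name and needs no try/finally of its own
    assert dataset_with_cleanup == 'automated_tests_dataset'
```

With this shape, each test body shrinks to its assertions, and cleanup lives in exactly one place per fixture.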