
Simplify testing dependent packages against napari #6336

Open · wants to merge 38 commits into main
Conversation

@Czaki (Collaborator) commented Oct 13, 2023

Description

This PR adds a special workflow that simplifies testing whether a change in a direct napari dependency will break the current main or the latest released napari version.

This requires that the package can be installed using pip install ./path_to_repository

To use this workflow, it is enough to put this job in the test workflow:

  test_against_napari:
    name: Test if napari works
    uses: napari/napari/.github/workflows/test_external_package_workflow.yml@main

By default this workflow runs the headless test set on ubuntu-latest with Python 3.10, but this can be adjusted:

  test_against_napari:
    name: tmp run test external package
    uses: napari/napari/.github/workflows/test_external_package_workflow.yml@main
    with:
        python_version: "3.10"
        platform: "ubuntu-latest"
        backend: "headless"

An example of usage is here: napari/npe2#322

For testing this PR, I have created the main_test and v0.4.19x_test branches in the napari repository; they will need to be deleted after the final decision.

@github-actions github-actions bot added the task Tasks for contributors and maintainers label Oct 13, 2023
Czaki added 3 commits that referenced this pull request Oct 25, 2023
@Czaki Czaki added this to the 0.5.0 milestone Oct 25, 2023
@Czaki Czaki added the feature New feature or request label Oct 25, 2023
@tlambert03 (Contributor)

pretty cool @Czaki! trying it in app-model: pyapp-kit/app-model#129

I do have a quick question... is it possible for the user to parametrize this? or would you have to do:

  test_napari_macos_pyqt5:
    runs-on: macos-latest
    uses: napari/napari/.github/workflows/test_external_package_workflow.yml@main
    with:
      backend: "pyqt5"

  test_napari_ubuntu_pyqt5:
    runs-on: ubuntu-latest
    uses: napari/napari/.github/workflows/test_external_package_workflow.yml@main
    with:
      backend: "pyqt5"

  test_napari_macos_pyside2:
    runs-on: macos-latest
    uses: napari/napari/.github/workflows/test_external_package_workflow.yml@main
    with:
      backend: "pyside2"

  test_napari_ubuntu_pyside2:
    runs-on: ubuntu-latest
    uses: napari/napari/.github/workflows/test_external_package_workflow.yml@main
    with:
      backend: "pyside2"

@tlambert03 (Contributor)

oh actually... looks like I can't actually specify runs-on when I use jobs.<id>.uses... So I guess that's a second question: can I specify a platform?
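
For what it's worth, GitHub Actions does allow a strategy matrix on a job that calls a reusable workflow, so the four jobs above could be collapsed into one. A sketch, assuming the platform and backend inputs shown in the review comment below:

  # sketch: one matrix job calling the reusable workflow,
  # passing matrix values through the workflow's inputs
  test_napari:
    strategy:
      matrix:
        platform: ["ubuntu-latest", "macos-latest"]
        backend: ["pyqt5", "pyside2"]
    uses: napari/napari/.github/workflows/test_external_package_workflow.yml@main
    with:
      platform: ${{ matrix.platform }}
      backend: ${{ matrix.backend }}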

Comment on lines +9 to +33
inputs:
  package_to_test:
    required: false
    type: string
    default: ""
  package_to_test_ref:
    required: false
    type: string
    default: ""
  python_version:
    required: false
    type: string
    default: "3.10"
  platform:
    required: false
    type: string
    default: "ubuntu-latest"
  backend:
    required: false
    type: string
    default: "headless"
  pytest_path:
    required: false
    type: string
    default: ""
@Czaki (Collaborator, Author)

@tlambert03 Here is the full list of inputs that can be specified.

It includes backend, platform, python_version, and pytest_path, where pytest_path is passed as an environment variable to tox and then placed in the pytest command.
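
Put together, a call that sets all of these inputs might look like the following sketch; the pytest_path value here is a hypothetical example:

  test_against_napari:
    name: Test if napari works
    uses: napari/napari/.github/workflows/test_external_package_workflow.yml@main
    with:
      python_version: "3.10"
      platform: "macos-latest"
      backend: "pyqt5"
      pytest_path: "napari/_qt"  # hypothetical example value, forwarded through tox to pytest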

@tlambert03 (Contributor) commented Oct 29, 2023

this is great @Czaki, I like it and would likely use it for pyapp-kit repos on which napari depends.
A couple of observations / nits:

  • it's a bit hard to tell which test is which:
    [screenshot: a list of similarly-named checks]
    though you can tell from clicking on them and looking at the longer title:
    napari (pyside2) / Initial test (3.10, ubuntu-latest, pyside2, main_test) / ubuntu-latest py 3.10 pyside2 no_cov
    napari (pyside2) / Initial test (3.10, ubuntu-latest, pyside2, v0.4.19x) / ubuntu-latest py 3.10 pyside2 no_cov
    ... but main_test and v0.4.19x are pretty tucked in there and hard to see
  • along those lines, it might be nice to fetch tags for the napari repo, so it's clear from the logs what got installed. Without a full fetch-depth, they get shown as napari-0.1.dev1+gd4dca88 and napari-0.1.dev1+g11298b0 ... making it a bit harder to know exactly what I'm testing against.
  • I would like to pass arguments to pytest (e.g. -W ignore), but that doesn't appear to work via pytest_path (they need to come after the -- in the pytest command used by tox), so I think a new argument would need to be added to the workflow (a hypothetical sketch follows at the end of this comment). That's important for me to be able to deprecate things with warnings without having my own tests fail (because of proper warnings now emitted in napari tests).
  • fyi: splitting backends by commas ended up being much too slow, so I'm back to using a matrix.
  • not yet sure why, but the 4.19.x tests are significantly slower (7 or 8 minutes for just the _qt and _app_model modules). All my other tests take less than a minute, so it represents a big wait for app-model. Any thoughts on why 4.19 would be so much slower than HEAD?
  • can you tell me what's going on at the end of the test? I don't see any failures, but it says failed... and then ignores it and marks it as a success. Makes me wonder what would happen if an actual failure happened?
  Results (158.67s):
         426 passed
           5 skipped
  py310-linux-pyside2: commands[1] /home/runner/work/app-model/app-model/napari_repo> -ignore napari/_qt --ignore napari/_tests --ignore tools --json-report --json-report-file=/home/runner/work/app-model/app-model/napari_repo/report-py310-linux-pyside2.json
  py310-linux-pyside2: exit 2 (0.00 seconds) /home/runner/work/app-model/app-model/napari_repo> -ignore napari/_qt --ignore napari/_tests --ignore tools --json-report --json-report-file=/home/runner/work/app-model/app-model/napari_repo/report-py310-linux-pyside2.json
  py310-linux-pyside2: command failed but is marked ignore outcome so handling it as success
py310-linux-pyside2: OK ✔ in 6 minutes 14.27 seconds
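
One hypothetical shape for the extra pytest-arguments input requested in the list above; the pytest_args name and the forwarding after tox's -- separator are assumptions, not part of this PR:

  inputs:
    pytest_args:        # hypothetical new input
      required: false
      type: string
      default: ""
  # the workflow would then forward it after tox's "--" separator,
  # e.g.: tox -e py310-linux-pyside2 -- -W ignore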

@Czaki (Collaborator, Author) commented Oct 29, 2023

Is such an update of the job names, as implemented here, OK?

[screenshot: the updated job names]
  • can you tell me what's going on at the end of the test? I don't see any failures, but it says failed... and then ignores it and marks it as a success. Makes me wonder what would happen if an actual failure happened?

This is caused by your implementation of the headless tests from #4245. tox allows a command line that starts with - to fail without crashing the whole test suite (the leading - marks the command's outcome as ignored), so only the first line of the headless test call is guarded.
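
For illustration, tox's ignore-outcome prefix behaves like this; a minimal tox.ini sketch, not the PR's actual configuration:

  # a minimal sketch, not the PR's actual tox.ini: the leading "-"
  # tells tox to ignore the command's exit code, which is what produces
  # the "command failed but is marked ignore outcome" message above
  [testenv]
  commands =
      - pytest napari --ignore napari/_qt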

I will investigate other points tomorrow.

@jni jni modified the milestones: 0.5.0, 0.5 Jul 8, 2024
@jni (Member) commented Jul 8, 2024

@Czaki sorry I missed this originally. I've moved it to the 0.5 milestone, but it needs an update.

@willingc willingc added the needs:review Needs initial review (contributors and core-devs) label Nov 11, 2024
Labels
feature (New feature or request) · needs:review (Needs initial review, contributors and core-devs) · task (Tasks for contributors and maintainers)
4 participants