GitHub Actions: "Re-run failed jobs" will run the entire test suite #531
Comments
I'm facing the same issue: when re-running failed jobs, Cypress runs all the tests again without the parallel setup that it used in the first run. |
I'm having the same issue as well -- has anyone found any workarounds for this? |
+1 👍 |
That's definitely something critical |
@BioCarmen @ninasarabia do you have any links to runs that you could share where this is happening? |
Is there any update on this? Getting charged for an entire test suite re-run when one test fails on one parallel job is really upsetting, given the size of our test suite. |
I'm experiencing this also. Any updates on a fix or workaround? |
We are seeing the same issue - here is our configuration: |
Reading up on https://docs.cypress.io/guides/guides/parallelization, I think this may be a side effect of how Cypress load-balances things. Tests aren't split up evenly in a deterministic way; Cypress looks for un-run tests and distributes them to whichever workers are free. When you re-run only the failures, you are essentially just reducing the total number of workers available to process all the jobs. |
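For readers skimming this thread, here is a minimal sketch of the kind of parallel setup being discussed, assuming cypress-io/github-action with recording to Cypress Cloud; the workflow name, action version, container count, and secret name are illustrative and not taken from any commenter's configuration:

```yaml
# Illustrative parallel Cypress run recorded to Cypress Cloud.
name: e2e
on: push
jobs:
  cypress:
    runs-on: ubuntu-22.04
    strategy:
      # Keep the other containers running even if one fails.
      fail-fast: false
      matrix:
        # Identical containers; Cypress Cloud load-balances specs across them.
        containers: [1, 2, 3, 4, 5]
    steps:
      - uses: actions/checkout@v4
      - uses: cypress-io/github-action@v6
        with:
          record: true
          parallel: true
        env:
          # Record key for the Cypress Cloud project (secret name is an assumption).
          CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```

With a setup like this, "Re-run failed jobs" only restarts the failed matrix entries, which is why, as described above, the surviving containers end up absorbing the whole spec list.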
So this means the Cypress code base might need more work to address that issue? We're still getting over-billed because of an internal implementation detail - is that what you're saying? |
I can't comment since I don't work for Cypress, but this part https://docs.cypress.io/guides/guides/parallelization#CI-parallelization-interactions makes me think that each individual container won't always run the same tests. The re-run is just interpreted as another parallel test run, but with fewer containers to run them. |
It's slightly worse than this if you use standard GitHub,
so you're down to re-running the matrix run (not the title of this issue) |
@tebeco is it that it doesn't parallelize, or that only one of the containers failed so all the tests get run in that one container? I'd have to try a few more times to know for certain, but I think if you have multiple containers fail it will parallelize across those containers - just with more tests per container, since the "passing" containers don't do anything. |
Both are bad. Think about the billing - and that is regardless of whether the parallel matrix is respected or not, since all test/run minutes are accounted for. I think not trimming in parallel would be less critical if the behavior were at least predictable;
for now it's unpredictable - a full rerun and full billing only. |
There were recently some changes in our services repo that may have taken care of this issue. Can someone retest with 10.8.0? |
@admah I just tested this after upgrading to 10.8.0 and still saw all of the tests run in a single job when one of the parallelized containers had a failed test. To give some more detail, the codebase I am working on uses the Cypress parallelization feature, attached to Cypress dashboard, to split our test suite into 5 different jobs. In this situation, one test failed in one of the parallelized jobs. To retry this test, I clicked the "re-run failed jobs" button in GitHub and that kicked off the Cypress tests again in the same job containing the failed test. But, instead of running the same set of tests, it re-ran all of the tests in the single job. I have included a screenshot that should hopefully illustrate this a little better. Thanks for looking into this, it would be a huge improvement to our CI pipeline if this issue was resolved! |
Agree. This fix is very much needed to optimise the CI run time and avoid unnecessarily triggering tests that have already passed in one attempt, thereby reducing the billing. |
Yes, I also tried to replicate this last night and saw this same behavior:
This by default will fail the job, because one single worker can't possibly run all of the tests before the job timeout kicks in (which is why it is parallelized in the first place). We are on 10.7.0. I believe what would need to happen is for Cypress to remember which tests get allocated to which workers so that if there is a failure on worker 3 of 5, and "re-run failed jobs" is selected on the GHA side, the same set of tests will get re-run on that worker. |
I was able to get some more clarity on this from our Cloud team. Issue #574 also has some additional context. Here is the current status:
I will be updating this issue as new information is available. |
For the issue I wrote that is linked above, it turns out that any re-run job skips every single test - regardless of whether it is a re-run from the start or of failed jobs only - when run against Cypress. |
@admah Any news? Thanks |
All those problems could be fixed if the dashboard could work this way for the same dashboard run KEY: 7 tests, 3 workers, first run
next runs with the same Cypress KEY
At least this would work fine with GitHub, IMHO. Not sure how hard it is to implement, but it is the dashboard's side that orchestrates and sends tests to workers, so my guess is that this should not be very hard. Unless somehow the test suites of already-finished runs cannot be updated... |
@admah Does it mean that if I re-run failed workers with the same Cypress run KEY (
@admah You mean a re-run with the same Cypress run KEY? If so, this seems to contradict the first point. |
@admah is there a planned release version for this yet? |
The ability to re-run failed tests is becoming more and more necessary as we scale; it's making us consider alternatives to Cypress Cloud |
@admah Any news on this? |
According to https://www.linkedin.com/in/amu/ Adam Murray (@admah) doesn't work for Cypress.io any more. |
Any update on this? Or does anyone have a workaround for getting a re-run to ONLY run the failed tests? |
Hi all, I’ll surface our latest thinking thus far about this: We’ve heard from some of you that you don’t want your team to use GHA’s Rerun at all, for any Cypress test run. It may not be desirable because the test scenario is not idempotent, because it creates performance issues, or because it simply hides potentially real issues. GHA unfortunately does not have a means to disable its “Rerun” functionality on specific jobs. I’ve posed a question in the GHA community about this. If you agree it would be better for your team to be able to disable GHA reruns for specific jobs, please upvote/comment on that question so it hopefully gets more attention. As for use-cases where the rerun is still a useful feature, the ideal solution has been a challenge to pinpoint due to differing use-cases. Keep in mind, the only valid situation for a rerun is when there are false failures (failed tests that you expect to pass on a 2nd run, without any code or environment changes). There are two features which already exist in Cypress for this: Spec Prioritization and Auto Cancellation.
If you have not yet tried any of the above workarounds, I highly recommend checking them out to see if they would work for you. If the above workarounds don’t work for you, then there’s likely a more elusive problem in your test environment. It’s probably either infrastructure-related or due to sporadic issues with the machines the tests ran on. We want to gather feedback in the future (you can also let us know now, but I don’t want to imply we’re solving for this right now) that indicates to us the most common root cause, so we can later solve for that first. If you give your feedback here in this GH issue to tell us whether your “false” failures are due to infrastructure or to specific machines experiencing issues on a given run, or something else, please also provide a little context about these issues, such as:
Answers to the above questions both give us contextual clarity and tell us whether your specific issue is correctly categorized (in the past we’ve seen confusion about which is which). |
It feels like Cypress Dashboard is able to store the GHA run (which changes by commit)- is it too far a leap to potentially store the failures for a given run ID and (if the number is greater than 0) simply use the array of failed specs when it's rerun? I doubt anyone's really re-running their tests for the fun of it. |
+1 to bump this ticket to top priority. What's the latest update about this issue? For anyone who experienced this issue; did you end up finding a solution that you could share here? |
@ryanpei You are correct. In my last project the failures were not application or test related but due to infrastructure problems. In those scenarios we really just want to re-run the failed tests in the existing workflow. Note that there are two observed behaviors of this issue:
In that scenario the only "solution" was to opt to |
Hello @jaffrepaul |
+1 to bump this ticket to top priority. |
+1 to bump this ticket to top priority. |
The GitHub Actions documentation for the runner option Re-run failed jobs says:
This means that if a CI run on GitHub Actions fails and then offers you the "Re-run failed jobs" option, this is not useful if your Application under Test (AUT) is also part of the same repository. If you now correct the error in your application which caused the Cypress test failure and then push the correction to GitHub, selecting "Re-run failed jobs" will not use the application correction, since it is tied to the previous state of the application (commit SHA) and it does not use the new commit with the correction. GitHub Actions "Re-run failed jobs" is useful to repeat workflow runs that have failed due to GitHub issues such as temporary network connectivity. It does not seem to be helpful to re-run Cypress tests in general, because they would need to run on a corrected version of the AUT.
Auto Cancellation is supported by the Cypress GitHub Action with the parameter auto-cancel-after-failures. So the recommendation would be to use Spec Prioritization and Auto Cancellation and avoid the use of GitHub Actions' "Re-run failed jobs". If there is any enhancement to |
Aside from GitHub issues, it's useful to be able to re-run due to issues with any other external dependency. |
The README section > Parallel has been revised to reflect how the action currently works together with Cypress Cloud: The Cypress GH Action does not spawn or create any additional containers - it only links the multiple containers spawned using the matrix strategy into a single logical Cypress Cloud run where it splits the specs amongst the machines. See the Cypress Cloud Smart Orchestration guide for a detailed explanation. If you use the GitHub Actions facility for Re-running workflows and jobs, note that Re-running failed jobs in a workflow is not suited for use with parallel recording into Cypress Cloud. Re-running failed jobs in this situation does not simply re-run failed Cypress tests. Instead it re-runs all Cypress tests, load-balanced over the containers with failed jobs. To optimize runs when there are failing tests present, refer to optional Cypress Cloud Smart Orchestration Premium features:
Spec Prioritization and Auto Cancellation. |
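As a hedged illustration of the Auto Cancellation workaround recommended above, the sketch below sets the action's auto-cancel-after-failures input; the action version and threshold are example values, and the feature depends on the Cypress Cloud plan:

```yaml
# Illustrative step: cancel the whole recorded run after the first failed test,
# saving the remaining machine minutes when a run contains failures.
- uses: cypress-io/github-action@v6
  with:
    record: true
    parallel: true
    # Example threshold; choose a value that suits the suite.
    auto-cancel-after-failures: 1
  env:
    CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
```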
+1 to bump this ticket to top priority. |
This issue is left open so that users can more easily find the discussion although the dependency is on Cypress Cloud and any improvement would need to be initiated in that software function, not in the Cypress GitHub Action. At this time there is nothing which can be done to improve the Cypress GitHub Action in this regard. You can find the recommendations about how to use the action optimally in the documentation (and in the previous posting #531 (comment)). See also the previous discussions in this thread. |
@MikeMcC399 is there another repository where we should raise this issue instead, so it can be fixed? If Cypress controls how tests are distributed with "Smart Orchestration", then hopefully they can distribute the same tests from the failing container back to the single container being re-run 🤞🏻 |
Cypress Cloud is not open source, and so there is no public repository containing the Cypress Cloud code or one which can accept external issues. The Cypress Help Center however describes the support available for Team, Business and Enterprise Cypress Cloud subscribers including the ability to submit tickets. The page also links to the Cypress technical user community on Discord where there is a dedicated Cypress Cloud channel. Discord is available to all users, including those using Cypress Cloud under the Free plan.
It seems that there is no simple solution for this requirement in a GitHub Actions environment, so the recommendation remains to take advantage of the facilities currently offered by "Smart Orchestration" as described in previous postings. |
@MikeMcC399, thank you for always taking the time and patience to answer questions and provide solutions when possible. I really appreciate you 🙏 |
Thank you for your kind comments! 😄 |
@MikeMcC399 I'm curious to know if there's a way to circumvent the record + parallel + GA limitation when using self-hosted runners. Or will it be the same problem? |
I have no idea why the problem would be any different on GitHub self-hosted runners. |
We have the e2e tests configured to run in parallel on the Cypress dashboard. I was following this thread to add the `custom-build-id` to the command to distinguish different runs based on different build IDs. Everything worked fine until GitHub Actions rolled out the ability to re-run failed jobs. If I just set the `custom-build-id` to `${{ github.run_id }}`, the second attempt always marks the tests as passing with 'Run Finished', but the tests are not triggered at all. So I set the `custom-build-id` to `${{ github.run_id }}-${{ github.run_attempt }}`, and now it runs the entire test suite instead of running the originally allocated subset of tests.
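To make the two configurations in this report concrete, here is a hedged sketch using the ci-build-id input of cypress-io/github-action (the report refers to it as custom-build-id; the input name, action version, and secret name here are assumptions about an equivalent setup):

```yaml
# Variant 1: the same build id for every attempt of a workflow run.
# As reported above, Cypress Cloud treats the re-run attempt as part of the
# already-completed run, so no tests are triggered on the second attempt.
- uses: cypress-io/github-action@v6
  with:
    record: true
    parallel: true
    ci-build-id: ${{ github.run_id }}
  env:
    CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}

# Variant 2: a fresh build id per attempt.
# As reported above, the re-run attempt becomes a brand-new Cloud run, so the
# entire suite is load-balanced over only the re-run containers.
- uses: cypress-io/github-action@v6
  with:
    record: true
    parallel: true
    ci-build-id: ${{ github.run_id }}-${{ github.run_attempt }}
  env:
    CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
```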