handling large vu shifts #3470
Conversation
Hey @VincentSchmid,
thanks for your contribution 🎉
The current approach works, but I would prefer a different one: a validating function instead of one that returns the sum. That way, we can return an error directly from the function, similar to the validateStages function that already exists.
Also note that the same logic has to be applied to the ramping arrival rate executor.
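To make the suggested shape concrete, here is a minimal, hedged sketch of what such a validating function could look like. The names `validateTargets`, `stage`, and `maxSupportedTarget` are assumptions for illustration, not k6's actual code:

```go
package main

import "fmt"

// maxSupportedTarget is a hypothetical limit, chosen only for illustration;
// the real constant would live in the k6 codebase.
const maxSupportedTarget = 100_000_000

// stage keeps only the field relevant to this check; k6's real Stage type
// also carries a duration and uses nullable integers.
type stage struct {
	target int64
}

// validateTargets returns errors directly instead of a sum the caller has
// to interpret, similar in spirit to the existing validateStages helper.
// The function and field names are assumptions, not k6's actual API.
func validateTargets(startVUs int64, stages []stage) []error {
	var errs []error
	if startVUs > maxSupportedTarget {
		errs = append(errs, fmt.Errorf("startVUs (%d) is larger than the maximum supported value (%d)",
			startVUs, maxSupportedTarget))
	}
	for i, s := range stages {
		if s.target > maxSupportedTarget {
			errs = append(errs, fmt.Errorf("stage %d has a target of %d, larger than the maximum supported value (%d)",
				i+1, s.target, maxSupportedTarget))
		}
	}
	return errs
}

func main() {
	errs := validateTargets(0, []stage{{target: 10}, {target: 100_000_100}})
	for _, err := range errs {
		fmt.Println(err)
	}
}
```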
Force-pushed from d9b0df4 to 74d797e.
Codecov Report

Additional details and impacted files:

```
@@            Coverage Diff             @@
##           master    #3470      +/-  ##
==========================================
+ Coverage   72.55%   73.48%    +0.93%
==========================================
  Files         276      275        -1
  Lines       20840    20206      -634
==========================================
- Hits        15120    14848      -272
+ Misses       4758     4405      -353
+ Partials      962      953        -9
```

Flags with carried forward coverage won't be shown. View full report in Codecov by Sentry.
Force-pushed from 74d797e to 7d8be76.
Checking this again, it doesn't seem like the validation logic based on the sum is required. I may be missing some obvious reason, but can you clarify why it isn't enough to just validate the maxTarget returned by this code?
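For comparison, a rough sketch of the simpler check being asked about: compute the largest configured target once and validate only that value. It reuses the hypothetical names from the sketch above and is not k6's actual code:

```go
package main

import "fmt"

// Hypothetical shapes, mirroring the earlier sketch; not k6's actual code.
type stage struct{ target int64 }

const maxSupportedTarget = 100_000_000 // illustrative limit only

// maxTarget returns the largest configured target, including the start value.
func maxTarget(startVUs int64, stages []stage) int64 {
	m := startVUs
	for _, s := range stages {
		if s.target > m {
			m = s.target
		}
	}
	return m
}

func main() {
	stages := []stage{{target: 10}, {target: 100_000_100}}
	if m := maxTarget(0, stages); m > maxSupportedTarget {
		fmt.Printf("maximum configured target %d exceeds the supported limit %d\n", m, maxSupportedTarget)
	}
}
```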
Great job @VincentSchmid 👏🏻 🙇🏻
The current implementation matches my initial assumptions and expectations. I've left a bunch of comments, mostly on the minor side, that I think should still be addressed before we merge this, but besides that, it's starting to look good 🎉
I couldn't help but notice that although we use the mechanism in the context of the ramping arrival rate executor, we haven't added tests for it (which might be how the naming issue slipped in?). I would prefer to see tests for that usage too 🙇🏻
As I'm at it, in case that proves useful to any of you folks, here's the script I used to try the feature locally:

```javascript
import http from 'k6/http';

export const options = {
  scenarios: {
    vus: {
      executor: 'ramping-vus',
      startVUs: 0,
      stages: [
        { duration: '20s', target: 10 },
        { duration: '10s', target: 100000100 },
      ],
    },
    rate: {
      executor: 'ramping-arrival-rate',
      startRate: 0,
      preAllocatedVUs: 50,
      stages: [
        { duration: '20s', target: 10 },
        { duration: '10s', target: 100000100 },
      ],
    },
  },
};

export default function () {
  http.get('http://test.k6.io');
}
```

Which made me think that it could also be worth adding end-to-end tests demonstrating the behavior?
Thank you guys for your patience, I had fun doing this with you and I learned some things!
Hey @VincentSchmid, thank you so much for your effort here. 🎉
During #3470, some subtests were added that access the same variable as the enclosing (non-sub) test. This is obviously a race condition that nobody noticed.
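For readers who haven't hit this pattern before, here is a generic illustration of the kind of race described above, not the actual k6 test code:

```go
package example

import "testing"

// TestSharedState is a generic illustration, not k6's test code: the
// subtests run in parallel and all write to a variable declared in the
// enclosing (non-sub) test, which `go test -race` reports as a data race.
func TestSharedState(t *testing.T) {
	result := 0 // shared by every subtest below

	for i := 0; i < 3; i++ {
		i := i // capture the loop variable per iteration
		t.Run("case", func(t *testing.T) {
			t.Parallel()
			result = i // concurrent writes to the outer variable: data race
		})
	}
	// The usual fix is to give each subtest its own copy of whatever it
	// mutates, e.g. by declaring the variable inside the closure.
	_ = result
}
```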
What?
It validates the VU target values. If a target is larger than the chosen constant, an error is appended. This includes startVUs.
Why?
This approach was agreed upon during the discussions in this pull request.
Checklist
- I have run the linter locally (make lint) and all checks pass.
- I have run tests locally (make tests) and all tests pass.

Related PR(s)/Issue(s)
Closes #1693