Finer control on the duration/significance of criterium runs #9
Comments
I think the suggestions by @davidsantiago would be helpful. @hugoduncan Are you suggesting that we read over those papers to get ideas on how to expose additional tuning parameters? Or are you saying something more? Perhaps it is not clear how to do so?
👍 to @davidsantiago's suggestions. I've been playing with perforate, and was wanting to start using it to get a rough idea of performance across multiple services. Whilst I appreciate criterium's efforts to support micro-benchmarking on the JVM, perforate still takes too long even with that. I appreciate that perhaps this is a different use case, but having a crude sledgehammer to hard-code the number of test runs would be really useful.
A way to bound how long the benchmark is allowed to run would be useful. Sometimes I start a benchmark on a function that takes 1 or 2 seconds to run, and criterium runs to a point where I don't even have the patience to wait for it; other times it finishes in only a few seconds. If there were a way to say "run for at most 10 seconds, and do your best to achieve reliable results within that time frame", it would be a great feature.
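The cap described above could be sketched as a sampling loop that honors both a sample-count limit and a wall-clock budget. This is a hypothetical harness to illustrate the idea, not criterium's actual API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch, not criterium's API: collect timing samples for a
// target, but stop as soon as a wall-clock budget is exhausted, regardless
// of how many samples have been taken so far.
public class BudgetedBench {
    static List<Long> sample(Runnable target, int maxSamples, long budgetNanos) {
        List<Long> samples = new ArrayList<>();
        long deadline = System.nanoTime() + budgetNanos;
        while (samples.size() < maxSamples && System.nanoTime() < deadline) {
            long t0 = System.nanoTime();
            target.run();
            samples.add(System.nanoTime() - t0);
        }
        return samples;
    }

    public static void main(String[] args) {
        // A ~100 ms function under a 500 ms budget yields a handful of
        // samples instead of running for minutes.
        List<Long> s = sample(() -> {
            try { Thread.sleep(100); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }, 1000, 500_000_000L);
        System.out.println("samples collected: " + s.size());
    }
}
```

Fewer samples widen the confidence intervals, but for a second-long function, a rough answer inside a known budget beats an open-ended wait.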
This would be helpful. I started using criterium for some long-running fns, but if the fns take long enough, the warmup period becomes unpredictable and can stretch out for a looooong time, because the JIT stabilization process isn't too smart. (Based on what Java exposes, it can't be.) Parameters to control this more finely (or at least a maximum warmup/stabilization time) would be helpful. As it is, I have to stop using criterium in this case.
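A maximum warmup time could work like the sketch below: keep running the target until the last few timings agree within a tolerance (a crude stand-in for "the JIT has settled"), but never past a hard cap. This is an illustrative loop under assumed semantics, not what criterium actually does:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical capped warmup, not criterium's implementation: stop early if
// the last five iteration times agree within `tolerance` (relative spread),
// but never run longer than maxNanos, so a slow fn can't stretch warmup out.
public class CappedWarmup {
    static long warmup(Runnable target, long maxNanos, double tolerance) {
        Deque<Long> recent = new ArrayDeque<>();
        long start = System.nanoTime();
        while (System.nanoTime() - start < maxNanos) {
            long t0 = System.nanoTime();
            target.run();
            recent.addLast(System.nanoTime() - t0);
            if (recent.size() > 5) recent.removeFirst();
            if (recent.size() == 5) {
                long min = recent.stream().mapToLong(Long::longValue).min().getAsLong();
                long max = recent.stream().mapToLong(Long::longValue).max().getAsLong();
                // "Settled": spread of recent timings is small relative to the minimum.
                if (min > 0 && (double) (max - min) / min < tolerance) break;
            }
        }
        return System.nanoTime() - start;  // warmup time actually spent
    }
}
```

The stability test here is deliberately naive; the point is only that the cap gives warmup a predictable upper bound either way.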
As discussed in issue #8, it would be helpful to be able to more finely control how long criterium runs benchmarks.
The gap between "quick" and "full" can be enormous. Having more ways to pin down criterium's statistical sampling parameters would really help make it a tool you can run to understand your code's performance changes without slowing you down too much. I have some benchmarks that take over a second to run a single iteration, while others run thousands and thousands of times per benchmark run due to the minimum-time requirements.

Being able to independently set "minimum runs", "maximum runs", "minimum time", and "maximum time" parameters would be great. Or at least having a few more levels between quick and full. Or, even better, break out the independent parameters and then give helpful names to a few stock configurations for convenience.
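The "independent knobs plus named presets" idea might look something like this sketch (all names and values are hypothetical, not criterium's real options):

```java
import java.util.Map;

// Hypothetical option set, not criterium's real configuration: four
// independent knobs, with named presets layered on top for convenience.
public class BenchParams {
    final int minRuns, maxRuns;
    final long minTimeNanos, maxTimeNanos;

    BenchParams(int minRuns, int maxRuns, long minTimeNanos, long maxTimeNanos) {
        this.minRuns = minRuns;
        this.maxRuns = maxRuns;
        this.minTimeNanos = minTimeNanos;
        this.maxTimeNanos = maxTimeNanos;
    }

    // Stop once either hard maximum is hit, or once both minimums are met.
    boolean shouldStop(int runs, long elapsedNanos) {
        if (runs >= maxRuns || elapsedNanos >= maxTimeNanos) return true;
        return runs >= minRuns && elapsedNanos >= minTimeNanos;
    }

    // Stock configurations with helpful names (illustrative values only).
    static final Map<String, BenchParams> PRESETS = Map.of(
        "quick", new BenchParams(5, 500, 1_000_000_000L, 10_000_000_000L),
        "full",  new BenchParams(30, 100_000, 10_000_000_000L, 120_000_000_000L));
}
```

With the maximums enforced unconditionally, a one-second-per-iteration benchmark stops at the time cap, while a microsecond-scale one still gathers enough runs to be statistically useful; presets then just become named rows in that table.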