
Benchmarks #7

Open · willtebbutt opened this issue Apr 26, 2020 · 1 comment
@willtebbutt (Member)

Benchmarking AD tools has come up a lot recently, and this seems like a good place to implement some benchmarks, in addition to "correctness" testing.

I was thinking that they should be micro-benchmarks, and that the benchmarks themselves shouldn't depend on any functionality outside of Base and the standard libraries, with the possible exception of things needed to test support for accelerators, e.g. CuArrays.jl. Alternatively, accelerators could be supported by typing things sufficiently abstractly 🤷
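
To illustrate the abstract-typing idea, here's a minimal sketch (the function name `mlp_layer` is illustrative, not part of any existing suite): by accepting `AbstractMatrix`/`AbstractVector`, the same benchmark target runs on plain `Array`s or on `CuArray`s without the benchmark itself depending on CuArrays.jl.

```julia
# An abstractly-typed benchmark target: the same definition works for
# Array, CuArray, etc., so the benchmark needs nothing beyond Base.
function mlp_layer(W::AbstractMatrix, b::AbstractVector, x::AbstractVector)
    return tanh.(W * x .+ b)
end

# CPU usage; swapping these inputs for CuArrays would exercise the GPU path.
# Timing could then be done with e.g. BenchmarkTools.jl's @btime.
W, b, x = randn(128, 128), randn(128), randn(128)
mlp_layer(W, b, x)
```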

The first thing to do is figure out what people actually care about the performance of. For example, I really care about broadcasting and operations involving linear algebra, but not so much about control flow, whereas I know the Turing team has a different set of priorities. So perhaps if everyone could say what sorts of things they're interested in benchmarking, we can start to think about how to chop up the tests. For example, from the perspective of reverse-mode AD there's a distinction between control flow that depends on values and control flow that doesn't, so we should probably be testing that kind of thing.
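
To make that distinction concrete, here's a hedged sketch (function names are illustrative): in `static_flow` the loop structure is fixed by the input's length, so a reverse-mode tape recorded for one input size is valid for all inputs of that size; in `dynamic_flow` the branch taken depends on the values themselves, so the AD tool has to handle (or re-trace) branching at runtime.

```julia
# Control flow that does NOT depend on the values being differentiated:
# the iteration count is determined by the input's size alone.
function static_flow(x::AbstractVector)
    s = zero(eltype(x))
    for i in eachindex(x)
        s += x[i]^2
    end
    return s
end

# Control flow that DOES depend on the values: which branch executes
# changes with the contents of x.
function dynamic_flow(x::AbstractVector)
    s = zero(eltype(x))
    for xi in x
        s += xi > 0 ? sin(xi) : xi^2
    end
    return s
end
```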

cc @vchuravy @yebai @oxinabox

@vchuravy