
Performance optimizations with scipy #20

Open · bradbeattie opened this issue Sep 20, 2014 · 0 comments

@bradbeattie (Owner) commented:
Just ran an experiment with 100 voters and about 2000 candidates, with fairly sparse ballots (usually between 10 and 30 candidates mentioned per ballot). This obviously has some applicability to users rating entities (IMDb, Yelp, etc.).

Unfortunately, the current dictionary-based implementation handles such large datasets poorly: the internal dictionary of pairwise entries ended up with around 2000^2 keys. Eesh. It's a decent implementation when we're talking about 20 candidates, but I think making use of scipy's sparse matrices (http://docs.scipy.org/doc/scipy/reference/sparse.html) would yield significant optimizations in both SchulzeMethod and its derivative classes. I've never used scipy before, so this'll be new.
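A rough sketch of the idea, purely for illustration (the `pairwise_counts` helper and the ballot format here are assumptions, not the library's current API): accumulate the pairwise preference counts in a scipy sparse matrix keyed by candidate index instead of a dict keyed by candidate pairs. Since each ballot only touches the handful of candidates it mentions, the matrix should stay sparse even with thousands of candidates.

```python
# Illustrative sketch only: accumulate pairwise preference counts in a
# scipy sparse matrix rather than a dict of (winner, loser) pairs.
from itertools import combinations

import numpy as np
from scipy.sparse import dok_matrix


def pairwise_counts(ballots, candidates):
    """ballots: iterable of dicts mapping candidate -> rank (lower is better).
    candidates: the full list of candidate identifiers."""
    index = {c: i for i, c in enumerate(candidates)}
    n = len(candidates)
    # DOK format allows cheap incremental updates; convert to CSR afterwards
    # for fast arithmetic (e.g. when computing the strongest-path matrix).
    counts = dok_matrix((n, n), dtype=np.int32)
    for ballot in ballots:
        # Only compare candidates actually mentioned on this ballot.
        for a, b in combinations(ballot, 2):
            if ballot[a] < ballot[b]:
                counts[index[a], index[b]] += 1
            elif ballot[b] < ballot[a]:
                counts[index[b], index[a]] += 1
    return counts.tocsr()


# Example: a few sparse ballots over a larger candidate pool.
ballots = [
    {"A": 1, "B": 2, "C": 3},
    {"B": 1, "C": 2},
    {"A": 1, "C": 2},
]
matrix = pairwise_counts(ballots, ["A", "B", "C", "D"])
print(matrix.toarray())
```

Storage then scales with the number of nonzero pairwise entries rather than with the square of the candidate count, which is the whole point for 2000-candidate elections with short ballots.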
