Just ran an experiment with 100 voters and about 2000 candidates, with fairly sparse ballots (usually between 10 and 30 candidates mentioned per ballot). Obviously, this has some applicability to users rating entities (imdb, yelp, etc).

Unfortunately, the current dictionary-based implementation is horrible for datasets this large. The internal dictionary ends up with an entry for every candidate pair, roughly 2000^2 = 4 million of them, even though most pairs never appear together on any ballot. Eesh. It's a decent implementation when we're talking about 20 candidates, but I think making use of http://docs.scipy.org/doc/scipy/reference/sparse.html would enable significant optimizations in both SchulzeMethod and its derivative classes. Never used scipy before, so this'll be new.
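To make the idea concrete, here's a minimal sketch of what a scipy.sparse-backed pairwise tally could look like. This isn't the library's actual API; it assumes each ballot is a plain ordered list of candidate names (no tied ranks) with an optional `count`, and the `pairwise_matrix` helper name is made up for illustration. The point is that a `dok_matrix` only stores the pairs that actually co-occur on some ballot, so memory scales with observed pairs rather than `len(candidates)**2`:

```python
# Sketch only, not the real SchulzeMethod internals. Assumes ballots like
# {"count": 3, "ballot": ["a", "c", "b"]} where the list is a strict ranking.
from itertools import combinations

from scipy.sparse import dok_matrix


def pairwise_matrix(ballots, candidates):
    """Build a sparse matrix where prefs[i, j] = voters preferring i over j."""
    index = {c: i for i, c in enumerate(candidates)}
    n = len(candidates)
    # dok_matrix allows cheap incremental updates; unmentioned pairs cost nothing.
    prefs = dok_matrix((n, n), dtype=int)
    for ballot in ballots:
        count = ballot.get("count", 1)
        ranking = ballot["ballot"]
        # Each earlier candidate beats each later one on this ballot.
        for winner, loser in combinations(ranking, 2):
            prefs[index[winner], index[loser]] += count
    return prefs, index
```

Once the tally is built, `prefs.tocsr()` gives a format better suited to bulk arithmetic. One caveat: the strongest-path stage of Schulze is still O(n^3) over candidates regardless of storage, so sparse matrices mainly fix the memory blowup and the tallying pass, not the path computation itself.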