This repository contains the data and source code for reproducing the results of the SIGIR 2020 paper Sampling Bias Due to Near-Duplicates in Learning to Rank.
The paper's abstract is reproduced at the end of this README.
Clone this repository:
git clone https://github.com/webis-de/sigir20-sampling-bias-due-to-near-duplicates-in-learning-to-rank.git
cd sigir20-sampling-bias-due-to-near-duplicates-in-learning-to-rank
git lfs pull
We recommend enabling Git LFS for best performance.
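If Git LFS is not yet set up on your machine, a minimal setup might look like the following sketch (the package name and install command are assumptions for Debian/Ubuntu; other systems differ):

sudo apt-get install git-lfs   # install the Git LFS client (Debian/Ubuntu package name assumed; adjust for your system)
git lfs install                # register the Git LFS hooks for your user, then re-run the git lfs pull step above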
Ensure that the project builds on your machine:
./gradlew build
Run the ranking experiments with Gradle (this requires access to a configured YARN cluster; see the sketch after the commands below):
ClueWeb 09
./gradlew runClueWeb09TrainingRerankingExperimentsSpark
GOV2 (requires LETOR 4.0)
./gradlew runGov2TrainingRerankingExperimentsSpark
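The Spark tasks expect the usual Hadoop/YARN client configuration to be reachable from your environment. A rough sketch, assuming a typical cluster setup (the configuration path is a placeholder, not part of this repository):

export HADOOP_CONF_DIR=/etc/hadoop/conf   # placeholder path; point this to your cluster's Hadoop client configuration
export YARN_CONF_DIR=/etc/hadoop/conf     # placeholder path; lets Spark locate the YARN ResourceManager
./gradlew runClueWeb09TrainingRerankingExperimentsSpark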
To test your configuration, you may run a small subset of the experiments locally:
ClueWeb 09
./gradlew runClueWeb09TrainingRerankingExperiments
GOV2 (requires LETOR 4.0)
./gradlew runGov2TrainingRerankingExperiments
Adjust the timestamp in source/evaluation/build.gradle.kts, then run the evaluations on Spark:
./gradlew runEvaluationSpark
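The exact name of the timestamp setting inside the build script is not spelled out here; one way to locate the value to adjust (a sketch, not a documented interface):

grep -n -i "timestamp" source/evaluation/build.gradle.kts   # find the timestamp value to change before running the evaluation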
Adjust the timestamp in source/evaluation/build.gradle.kts, then split the evaluation results:
./gradlew splitEvaluationResults
Start the Jupyter notebooks to explore the results:
./gradlew startNotebooks
Software requirements:
- JDK 14 (Kotlin is loaded by Gradle)
- Python 3.8 (via Pipenv)
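To verify that the required toolchain is available, you can check the installed versions (output formats vary between systems):

java -version        # should report a JDK 14 runtime
python3 --version    # should report Python 3.8.x
pipenv --version     # Pipenv is used to manage the Python environment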
Our ClueWeb 09 features dataset can be found here. If you use our data or code, please cite our paper:
@InProceedings{webis:2020d,
author = {Maik Fr{\"o}be and
Janek Bevendorff and
{Jan Heinrich} Reimer and
Martin Potthast and
Matthias Hagen},
booktitle = {43rd International ACM Conference on Research and Development
in Information Retrieval (SIGIR 2020)},
month = jul,
publisher = {ACM},
site = {Xi'an, China},
title = {{Sampling Bias Due to Near-Duplicates in Learning to Rank}},
year = 2020
}
Literature links: DOI, ACM Digital Library, Webis publications, DBLP
If you hit any problems reproducing our study, please mail us. We're happy to help!
The source code is released under the terms of the MIT License.
Our features dataset is licensed under the terms of the CC BY-SA 4.0 license.
Abstract: Learning to rank (LTR) is the de facto standard for web search, improving upon classical retrieval models by exploiting (in)direct relevance feedback from user judgments, interaction logs, etc. We investigate for the first time the effect of a sampling bias on LTR models due to the potential presence of near-duplicate web pages in the training data, and how (in)consistent relevance feedback of duplicates influences an LTR model's decisions. To examine this bias, we construct a series of specialized LTR datasets based on the ClueWeb09 corpus with varying amounts of near-duplicates. We devise worst-case and average-case train/test splits that are evaluated on popular pointwise, pairwise, and listwise LTR models. Our experiments demonstrate that duplication causes overfitting and thus less effective models, making a strong case for the benefits of systematic deduplication before training and model evaluation.