- [2019 ICCV] VideoBERT: A Joint Model for Video and Language Representation Learning, [paper], [bibtex].
- [2019 ICCV] HowTo100M: Learning a Text-Video Embedding by Watching Hundred Million Narrated Video Clips, [paper], [bibtex], [homepage], sources: [antoine77340/howto100m].
- [2019 ArXiv] Learning Video Representations Using Contrastive Bidirectional Transformer, [paper], [bibtex].
- [2020 CVPR] End-to-End Learning of Visual Representations from Uncurated Instructional Videos, [paper], [bibtex], [homepage], sources: [antoine77340/MIL-NCE_HowTo100M], [MIL-NCE TFHub] (a minimal sketch of its MIL-NCE objective appears after this list).
- [2020 ArXiv] UniVL: A Unified Video and Language Pre-Training Model for Multimodal Understanding and Generation, [paper], [bibtex].
- [2020 ArXiv] Token-level Contrast for Video and Language Alignment, [paper], [bibtex].
- [2020 CVPR] ActBERT: Learning Global-Local Video-Text Representations, [paper], [bibtex].
- [2020 EMNLP] HERO: Hierarchical Encoder for Video+Language Omni-representation Pre-training, [paper], [bibtex], sources: [linjieli222/HERO].
- [2021 CVPR] Less is More: ClipBERT for Video-and-Language Learning via Sparse Sampling, [paper], [bibtex], sources: [jayleicn/ClipBERT].
- [2021 ACL Findings] VLM: Task-agnostic Video-Language Model Pre-training for Video Understanding, [paper], [bibtex], sources: [pytorch/fairseq].
- [2021 ACMMM] CoCo-BERT: Improving Video-Language Pre-training with Contrastive Cross-modal Matching and Denoising, [paper], [bibtex].
- [2021 EMNLP] VideoCLIP: Contrastive Pre-training for Zero-shot Video-Text Understanding, [paper], [bibtex], sources: [pytorch/fairseq/MMPT].
- [2021 ArXiv] VIOLET: End-to-End Video-Language Transformers with Masked Visual-token Modeling, [paper], [bibtex], sources: [tsujuifu/pytorch_violet].
- [2021 NeurIPS] MERLOT: Multimodal Neural Script Knowledge Models, [paper], [bibtex], [supplementary], [homepage], sources: [rowanz/merlot].
- [2022 CVPR] MERLOT Reserve: Neural Script Knowledge through Vision and Language and Sound, [paper], [bibtex], [homepage], sources: [rowanz/merlot_reserve].
- [2021 ArXiv] Cross-Modal Attention Consistency for Video-Audio Unsupervised Learning, [paper], [bibtex].
- [2021 NeurIPS] Attention Bottlenecks for Multimodal Fusion, [paper], [bibtex], [homepage], sources: [google-research/mbt] (a minimal sketch of the bottleneck-fusion idea appears after this list).
- [2021 ArXiv] VALUE: A Multi-Task Benchmark for Video-and-Language Understanding Evaluation, [paper], [bibtex], [homepage], sources: [VALUE-Leaderboard/StarterCode].
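Several entries above (HowTo100M, the MIL-NCE paper, VideoCLIP) train with a video-text contrastive objective. Below is a minimal PyTorch sketch of the MIL-NCE loss from the CVPR 2020 entry, where each clip is matched against a bag of K nearby narrations rather than a single caption; the tensor shapes, the temperature, and the L2 normalisation are illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical, minimal MIL-NCE sketch: each clip i has a bag of K candidate
# positive narrations; all other narrations in the batch act as negatives.
import torch
import torch.nn.functional as F

def mil_nce_loss(video_emb, text_emb, temperature=0.07):
    """video_emb: (B, D) clip embeddings; text_emb: (B, K, D) narration bags."""
    B, K, D = text_emb.shape
    video_emb = F.normalize(video_emb, dim=-1)  # assumption: unit-norm embeddings
    text_emb = F.normalize(text_emb, dim=-1)
    # Similarity of every clip to every narration in the batch: (B, B, K).
    sim = (video_emb @ text_emb.reshape(B * K, D).t() / temperature).reshape(B, B, K)
    # Numerator: log-sum-exp over the K positives belonging to the matching clip.
    pos = torch.logsumexp(sim[torch.arange(B), torch.arange(B)], dim=-1)  # (B,)
    # Denominator: log-sum-exp over all B*K narrations in the batch.
    all_ = torch.logsumexp(sim.reshape(B, B * K), dim=-1)                 # (B,)
    return (all_ - pos).mean()

v = torch.randn(8, 512)      # 8 clips
t = torch.randn(8, 4, 512)   # 4 candidate narrations per clip
print(mil_nce_loss(v, t))    # scalar loss; falls as matching pairs align
```

Setting K=1 recovers the standard InfoNCE objective used by the single-caption contrastive entries above.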
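The MBT entry fuses audio and video differently: rather than full pairwise cross-attention, the two token streams exchange information only through a handful of shared "bottleneck" tokens. Below is a minimal sketch of one such fusion layer, assuming off-the-shelf transformer encoder layers and illustrative sizes; it is not the paper's exact implementation.

```python
# Hypothetical bottleneck-fusion layer: each modality self-attends over
# [own tokens ; bottleneck tokens]; the bottleneck updates are then averaged.
import torch
import torch.nn as nn

class BottleneckFusionLayer(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.video_layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.audio_layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)

    def forward(self, video_tokens, audio_tokens, bottleneck):
        nv, na = video_tokens.size(1), audio_tokens.size(1)
        v = self.video_layer(torch.cat([video_tokens, bottleneck], dim=1))
        a = self.audio_layer(torch.cat([audio_tokens, bottleneck], dim=1))
        # Cross-modal information flows only through the shared bottleneck,
        # taken here as the average of the two per-modality updates.
        return v[:, :nv], a[:, :na], (v[:, nv:] + a[:, na:]) / 2

layer = BottleneckFusionLayer()
v = torch.randn(2, 32, 256)   # video patch tokens
a = torch.randn(2, 16, 256)   # audio spectrogram tokens
b = torch.randn(2, 4, 256)    # 4 shared bottleneck tokens
v, a, b = layer(v, a, b)
```

Keeping the bottleneck small forces each modality to distil only what it needs to share, which is the paper's stated motivation for the design.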