- [2016 AAAI] Character-Aware Neural Language Models, [paper], [bibtex], sources: [carpedm20/lstm-char-cnn-tensorflow], [yoonkim/lstm-char-cnn].
- [2009 NIPS] HLBL: A Scalable Hierarchical Distributed Language Model, [paper], [bibtex], sources: [wenjieguan/Log-bilinear-language-models].
- [2010 INTERSPEECH] Recurrent neural network based language model, [paper], [bibtex], [Ph.D. Thesis], [slides], sources: [mspandit/rnnlm].
- [2013 NIPS] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality, [paper], [bibtex], [word2vec explained], [params explained], [blog], sources: [word2vec], [dav/word2vec], [yandex/faster-rnnlm], [tf-word2vec], [zake7749/word2vec-tutorial].
- [2013 CoNLL] Better Word Representations with Recursive Neural Networks for Morphology, [paper], [bibtex].
- [2014 ACL] Word2Vecf: Dependency-Based Word Embeddings, [paper], [bibtex], sources: [Yoav Goldberg/word2vecf], [IsaacChanghau/Word2VecfJava].
- [2014 EMNLP] GloVe: Global Vectors for Word Representation, [paper], [bibtex], [homepage], sources: [stanfordnlp/GloVe].
- [2014 ICML] Compositional Morphology for Word Representations and Language Modelling, [paper], [bibtex], sources: [thompsonb/comp-morph], [claravania/subword-lstm-lm].
- [2015 TACL] Hyperwords: Improving Distributional Similarity with Lessons Learned from Word Embeddings, [paper], [bibtex], sources: [Omer Levy/hyperwords].
- [2016 NAACL] Counter-fitting Word Vectors to Linguistic Constraints, [paper], [bibtex], sources: [nmrksic/counter-fitting].
- [2016 ICLR] Exploring the Limits of Language Modeling, [paper], [bibtex], [slides], sources: [tensorflow/models/lm_1b].
- [2016 CoNLL] Context2Vec: Learning Generic Context Embedding with Bidirectional LSTM, [paper], [bibtex], sources: [orenmel/context2vec].
- [2016 IEEE Intelligent Systems] How to Generate a Good Word Embedding?, [paper], [bibtex], [Ph.D. thesis (in Chinese): Research on Word and Document Semantic Vector Representation Methods Based on Neural Networks], [blog], sources: [licstar/compare].
- [2017 TACL] FastText: Enriching Word Vectors with Subword Information, [paper], [bibtex], sources: [facebookresearch/fastText], [salestock/fastText.py].
- [2017 ArXiv] Implicitly Incorporating Morphological Information into Word Embedding, [paper], [bibtex].
- [2017 AAAI] Improving Word Embeddings with Convolutional Feature Learning and Subword Information, [paper], [bibtex], sources: [ShelsonCao/IWE].
- [2018 TACL] Linear Algebraic Structure of Word Senses, with Applications to Polysemy, [paper], [bibtex], [slides], sources: [YingyuLiang/SemanticVector].
- [2018 ICML] Learning K-way D-dimensional Discrete Codes for Compact Embedding Representations, [paper], [bibtex], [supplementary], sources: [chentingpc/kdcode-lm].
- [2018 ICLR] Compressing Word Embeddings via Deep Compositional Code Learning, [paper], [bibtex], sources: [msobroza/compositional_code_learning].
- [2018 ACL] Baseline Needs More Love: On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms, [paper], [bibtex], sources: [dinghanshen/SWEM].
- [2019 ACL] Few-Shot Representation Learning for Out-Of-Vocabulary Words, [paper], [bibtex], sources: [acbull/HiCE].
- [2019 ACL] Towards Understanding Linear Word Analogies, [paper], [bibtex].
- [2015 NIPS] Skip-Thought Vectors, [paper], [bibtex], sources: [ryankiros/skip-thoughts].
- [2017 ICLR] A Simple But Tough-to-beat Baseline for Sentence Embeddings, [paper], [bibtex], sources: [PrincetonML/SIF].
- [2017 ICLR] A Structured Self-attentive Sentence Embedding, [paper], [bibtex], sources: [ExplorerFreda/Structured-Self-Attentive-Sentence-Embedding], [flrngel/Self-Attentive-tensorflow], [kaushalshetty/Structured-Self-Attention].
- [2017 EMNLP] Supervised Learning of Universal Sentence Representations from Natural Language Inference Data, [paper], [bibtex], sources: [facebookresearch/InferSent].
- [2018 ICLR] Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning, [paper], [bibtex], sources: [Maluuba/gensen].
- [2018 ArXiv] Universal Sentence Encoder, [paper], [bibtex], sources: [TensorFlow Hub/universal-sentence-encoder], [helloeve/universal-sentence-encoder-fine-tune].
- [2018 ArXiv] Evaluation of Sentence Embeddings in Downstream and Linguistic Probing Tasks, [paper], [bibtex].
- [2018 EMNLP] XNLI: Evaluating Cross-lingual Sentence Representations, [paper], [bibtex], sources: [facebookresearch/XNLI].
- [2018 EMNLP] Dynamic Meta-Embeddings for Improved Sentence Representations, [paper], [bibtex], sources: [facebookresearch/DME].
- [2018 ICLR] An Efficient Framework for Learning Sentence Representations, [paper], [bibtex], sources: [lajanugen/S2V], [mhiro2/quick-thoughts].
- [2018 ACL] Sentence-State LSTM for Text Representation, [paper], [bibtex], [poster], sources: [leuchine/S-LSTM], [bill-kalog/S-LSTM_pytorch].
- [2019 EMNLP] Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks, [paper], [bibtex], sources: [UKPLab/sentence-transformers].
- [2020 EMNLP] On the Sentence Embeddings from Pre-trained Language Models, [paper], [bibtex], sources: [bohanli/BERT-flow].
- [2021 ArXiv] SimCSE: Simple Contrastive Learning of Sentence Embeddings, [paper], [bibtex], sources: [princeton-nlp/SimCSE].
- [2016 NAACL] Learning Natural Language Inference with LSTM, [paper], [bibtex], sources: [shuohangwang/SeqMatchSeq].
- [2017 IJCAI] BiMPM: Bilateral Multi-Perspective Matching for Natural Language Sentences, [paper], [bibtex], sources: [zhiguowang/BiMPM].
- [2017 ArXiv] Distance-based Self-Attention Network for Natural Language Inference, [paper], [bibtex].
- [2017 ACL] Enhanced LSTM for Natural Language Inference, [paper], [bibtex], sources: [lukecq1231/nli], [coetaur0/ESIM], [HsiaoYetGun/ESIM], [sdnr1/EBIM-NLI], [JasonForJoy/ESIM-NLI].
- [2018 AAAI] DiSAN: Directional Self-Attention Network for RNN/CNN-free Language Understanding, [paper], [bibtex], sources: [taoshen58/DiSAN].
- [2018 IJCAI] Reinforced Self-Attention Network: a Hybrid of Hard and Soft Attention for Sequence Modeling, [paper], [bibtex].
- [2018 IJCAI] Hermitian Co-Attention Networks for Text Matching in Asymmetrical Domains, [paper], [bibtex].
- [2019 ACL] Multi-Task Deep Neural Networks for Natural Language Understanding, [paper], [bibtex], sources: [namisan/mt-dnn].
- [2019 AAAI] Gaussian Transformer: A Lightweight Approach for Natural Language Inference, [paper], [bibtex], sources: [lzy1732008/GaussionTransformer].
- [2020 ACL] Adversarial NLI: A New Benchmark for Natural Language Understanding, [paper], [bibtex], sources: [facebookresearch/anli].
- [2020 ACL] Mind the Trade-off: Debiasing NLU Models without Degrading the In-distribution Performance, [paper], [bibtex], sources: [UKPLab/acl2020-confidence-regularization].
- [2020 ArXiv] MultiMix: A Robust Data Augmentation Framework for Cross-Lingual NLP, [paper], [bibtex].
- [2020 ICML] XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating Cross-lingual Generalization, [paper], [bibtex], [supplementary], [homepage], sources: [google-research/xtreme].
- [2020 ICLR] Learning the Difference that Makes a Difference with Counterfactually-Augmented Data, [paper], [bibtex], sources: [dkaushik96/counterfactually-augmented-data].
- [2020 EMNLP] Zero-Shot Cross-Lingual Transfer with Meta Learning, [paper], [bibtex], sources: [copenlu/X-MAML].
- [2020 EMNLP] Towards Debiasing NLU Models from Unknown Biases, [paper], [bibtex], sources: [UKPLab/emnlp2020-debiasing-unknown].
- [2020 EMNLP] Translation Artifacts in Cross-lingual Transfer Learning, [paper], [bibtex], sources: [artetxem/esxnli].
- [2021 AAAI] FILTER: An Enhanced Fusion Method for Cross-lingual Language Understanding, [paper], [bibtex], [slides], sources: [yuwfan/FILTER].
- [2021 NAACL] Towards Interpreting and Mitigating Shortcut Learning Behavior of NLU models, [paper], [bibtex].
- [2021 ACL Findings] Empowering Language Understanding with Counterfactual Reasoning, [paper], [bibtex], sources: [fulifeng/Counterfactual_Reasoning_Model].
- [2021 ACL Findings] DocNLI: A Large-scale Dataset for Document-level Natural Language Inference, [paper], [bibtex], sources: [salesforce/DocNLI].
- [2021 ACL] Syntax-augmented Multilingual BERT for Cross-lingual Transfer, [paper], [bibtex], sources: [wasiahmad/Syntax-MBERT].
- [2021 ACL] Consistency Regularization for Cross-Lingual Fine-Tuning, [paper], [bibtex], sources: [bozheng-hit/xTune].
- [2019 ICLR] GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding, [paper], [bibtex], [homepage], sources: [nyu-mll/GLUE-baselines].
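
For readers who want to try the sentence-embedding models listed above (e.g. Sentence-BERT, distributed through UKPLab/sentence-transformers, or SimCSE checkpoints loadable with the same package), the snippet below is a minimal usage sketch, not a reference implementation. It assumes `sentence-transformers` and `numpy` are installed; the checkpoint name `all-MiniLM-L6-v2` is only an example and can be swapped for any compatible model.

```python
# Minimal sketch: encode sentences and compare them with cosine similarity.
# Assumes `pip install sentence-transformers numpy`; the checkpoint below is
# an example choice, not the only option.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # example pretrained checkpoint

sentences = [
    "A man is playing a guitar.",
    "Someone is playing an instrument.",
    "The weather is cold today.",
]
embeddings = model.encode(sentences)  # numpy array of shape (3, embedding_dim)

def cosine(a, b):
    """Plain cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings[0], embeddings[1]))  # semantically related pair -> higher score
print(cosine(embeddings[0], embeddings[2]))  # unrelated pair -> lower score
```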