Refer to this official repository for the latest news and for raising issues.

Focus on the Target’s Vocabulary: Masked Label Smoothing for Machine Translation

News 🚩

  • Release preprocessed data and model output. 2022.03.05
  • Code released at Github. 2022.03.04
  • Accepted by ACL 2022 Main Conference. 2022.02.24

Hi, this is the source code of our paper "Focus on the Target’s Vocabulary: Masked Label Smoothing for Machine Translation", accepted by ACL 2022. You can find the paper at https://arxiv.org/abs/2203.02889.

Introduction

Label smoothing and vocabulary sharing are two widely used techniques in neural machine translation models. However, we argue that simply applying both techniques can conflict and even lead to sub-optimal performance. When allocating smoothed probability, original label smoothing treats source-side words that would never appear in the target language the same as real target-side words, which can bias the translation model. To address this issue, we propose Masked Label Smoothing (MLS), a new mechanism that masks the soft label probability of source-side words to zero. Simple yet effective, MLS better integrates label smoothing with vocabulary sharing.


Venn diagram showing the token distribution between languages.

Illustration of Masked Label Smoothing (bottom right) and Weighted Label Smoothing (upper right & bottom left)
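
For intuition, here is a minimal PyTorch sketch of the masking idea. It is illustrative only, not the criterion implemented in this repository; the tensor names (lprobs, target, tgt_mask) and the exact normalization are assumptions.

```python
import torch

def masked_label_smoothed_nll_loss(lprobs, target, eps, tgt_mask):
    """Minimal sketch of Masked Label Smoothing (not the fairseq criterion).

    lprobs:   (num_tokens, vocab) log-probabilities from the model
    target:   (num_tokens,) gold target indices
    eps:      label smoothing factor, e.g. 0.1
    tgt_mask: (vocab,) bool tensor, True for tokens that can appear on the
              target side, False for source-only tokens
    """
    # Standard cross-entropy term on the gold label.
    nll_loss = -lprobs.gather(dim=-1, index=target.unsqueeze(-1)).squeeze(-1)

    # Original LS spreads eps over the whole joint vocabulary; MLS spreads it
    # over target-side tokens only, so source-only tokens get zero probability.
    smooth_loss = -(lprobs * tgt_mask.to(lprobs.dtype)).sum(dim=-1)
    eps_i = eps / tgt_mask.sum()

    loss = (1.0 - eps) * nll_loss + eps_i * smooth_loss
    return loss.sum()
```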

Preparations

git clone git@github.com:chenllliang/MLS.git
cd MLS

conda create -n MLS python=3.7
conda activate MLS

cd fairseq # We place the MLS criterions inside fairseq's criterions sub-folder; you can find them there.
pip install --editable ./
pip install sacremoses

# Make sure you have the right version of pytorch and CUDA, we use torch 1.10+cu113

We adopt mosesdecoder for tokenization, subword-nmt for BPE, and fairseq for the experiment pipelines. You need to clone the first two repos into ./Tools before the next step.

Preprocess

We provide pre-processed binary data for IWSLT14 DE-EN in the ./databin folder (unzip it and put the two unzipped folders under ./databin/; you can then skip to the next section).

If you plan to use your own dataset, you may refer to this script for preprocessing and parameter settings.

Before running the script, your raw translation data should be structured as below, with one sentence per line.

./data/dataset-src-tgt/
-- train.src
-- train.tgt
-- dev.src
-- dev.tgt
-- test.src
-- test.tgt

Then,

cd scripts
bash preprocess.sh ../data/dataset-src-tgt/ src tgt

If it runs successfully, two folders containing binary files will be saved in the databin folder.
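
If you want to check which entries of the joined vocabulary belong to the target side, the sketch below derives a boolean mask from fairseq dictionary files (plain text, one "token count" pair per line). The file paths and the existence of a separate target-only dictionary are assumptions for illustration; the repository's scripts may derive the mask differently.

```python
import torch

def build_target_mask(joined_dict_path, tgt_dict_path):
    """Sketch: mark which entries of the joined vocabulary also occur in a
    target-only vocabulary. Special symbols (<pad>, </s>, <unk>, ...) are
    handled by fairseq itself and ignored here for simplicity."""
    def read_tokens(path):
        with open(path, encoding="utf-8") as f:
            return [line.split()[0] for line in f if line.strip()]

    joined_tokens = read_tokens(joined_dict_path)
    target_tokens = set(read_tokens(tgt_dict_path))
    return torch.tensor([tok in target_tokens for tok in joined_tokens],
                        dtype=torch.bool)

# Hypothetical paths -- adjust to your databin layout:
# tgt_mask = build_target_mask("../databin/dataset-src-tgt/dict.joined.txt",
#                              "../databin/dataset-src-tgt/dict.tgt-only.txt")
```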

Train with MLS and original LS

cd scripts
bash train_LS.sh # ends after 20 epochs with valid_best_bleu = 36.91

bash train_MLS.sh # ends after 20 epochs with valid_best_bleu = 37.16

The best validation checkpoint will be saved in the checkpoints folder for testing.

Get result on Test Set

cd scripts

bash generate.sh ../databin/iwslt14-de-en-joined-new ../checkpoints/de-en-LS-0.1 ../Output/de-en-ls-0.1.out # get BLEU4 = 35.20


bash generate.sh ../databin/iwslt14-de-en-joined-new ../checkpoints/de-en-MLS-0.1 ../Output/de-en-mls-0.1.out # get BLEU4 = 35.76

We have also uploaded the generated translations to the Output folder for reference.

Some results on a single GPU

| BLEU | IWSLT14 DE-EN | WMT16 RO-EN |
| --- | --- | --- |
| LS | dev: 36.91, test: 35.20 | dev: 22.38, test: 22.54 |
| MLS (ours) | dev: 37.16, test: 35.76 | dev: 22.72, test: 22.89 |

Using Weighted Label Smoothing

You can change lp_beta, lp_gamma, and lp_eps in train_WLS.sh to control the weight distribution.

cd scripts

bash train_WLS.sh  # set the paths to the source, target, and joined vocabularies accordingly

The test procedure follows the previous section.
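
As a rough intuition for what the weighting does, the sketch below generalizes the hard 0/1 mask of MLS to per-token smoothing weights. How lp_beta, lp_gamma, and lp_eps map onto these weights is an assumption based on the description above, not the exact formulation used in train_WLS.sh.

```python
import torch

def weighted_label_smoothed_nll_loss(lprobs, target, eps, tgt_mask,
                                     beta=1.0, gamma=0.0):
    """Sketch of weighted label smoothing: target-side tokens get smoothing
    weight `beta`, source-only tokens get `gamma`. beta=1, gamma=0 recovers
    MLS; beta == gamma recovers ordinary label smoothing."""
    nll_loss = -lprobs.gather(dim=-1, index=target.unsqueeze(-1)).squeeze(-1)

    # Per-vocabulary-entry smoothing weights, normalized to sum to one.
    weights = tgt_mask.to(lprobs.dtype) * beta + (~tgt_mask).to(lprobs.dtype) * gamma
    weights = weights / weights.sum()

    smooth_loss = -(lprobs * weights).sum(dim=-1)
    loss = (1.0 - eps) * nll_loss + eps * smooth_loss
    return loss.sum()
```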

Citation

If you find our work helpful, please kindly cite:

@inproceedings{chen2022focus,
   title={Focus on the Target’s Vocabulary: Masked Label Smoothing for Machine Translation},
   author={Chen, Liang and Xu, Runxin and Chang, Baobao},
   booktitle={The 60th Annual Meeting of the Association for Computational Linguistics},
   year={2022}
}
