Learning ASR-Robust Contextualized Embeddings

Paper (Main page) | Paper (PDF) | Slides | Presentation

Implementation of our ICASSP 2020 paper Learning ASR-Robust Contextualized Embeddings for Spoken Language Understanding.

Requirements

  • Python >= 3.6
  • Install the required Python packages with pip3 install -r requirements.txt

How to run

For training and evaluation, we provide a transcribed and processed version of the SNIPS NLU benchmark, in which the audio files were generated with a TTS system.

The training configs are located in the models directory.

Steps

For training baseline models with or without ELMo embeddings:

# For static word embeddings
python3 main.py ../models/snips_tts/1

# For pre-trained ELMo embeddings
python3 main.py ../models/snips_tts/2
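
The two baseline configs differ in the word representations fed to the SLU classifier: static word embeddings versus pre-trained (frozen) ELMo. As a rough illustration of how pre-trained ELMo features can be plugged into an intent classifier via AllenNLP (not the repo's actual code; the file paths and the classifier head below are placeholders):

    import torch
    from allennlp.modules.elmo import Elmo, batch_to_ids

    # Placeholder paths: point these at the official pre-trained ELMo files.
    options_file = "elmo_options.json"   # hypothetical local path
    weight_file = "elmo_weights.hdf5"    # hypothetical local path

    # One output representation, no dropout; the weights stay frozen for the baseline.
    elmo = Elmo(options_file, weight_file, num_output_representations=1, dropout=0.0)

    sentences = [["play", "some", "jazz"], ["what", "is", "the", "weather"]]
    char_ids = batch_to_ids(sentences)                    # (batch, seq_len, 50) character ids
    elmo_out = elmo(char_ids)["elmo_representations"][0]  # (batch, seq_len, 1024) for original ELMo

    # Toy intent classifier head: mean-pool the contextual embeddings, then project.
    classifier = torch.nn.Linear(1024, 7)                 # SNIPS has 7 intents
    logits = classifier(elmo_out.mean(dim=1))             # (batch, num_intents)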

For fine-tuning ELMo with only the LM objective (ULMFiT-style) and using it to train the SLU classifier:

# Fine-tuning LM
python3 main_lm.py ../models/lm/snips_tts/1

# Training the SLU classifier with the fine-tuned LM; you may need to point the config to the desired checkpoint.
python3 main.py ../models/snips_tts/3
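
Here main_lm.py fine-tunes the language model on the ASR transcripts with the standard next-word prediction objective only, before the classifier is trained on top of it. The loop below is just a generic illustration of that objective on a stand-in LSTM LM (not the repo's model or training code; the vocabulary size and data are hypothetical):

    import torch
    import torch.nn as nn

    # Stand-in LSTM language model; the repo fine-tunes ELMo's biLM instead.
    class TinyLM(nn.Module):
        def __init__(self, vocab_size, dim=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, dim)
            self.lstm = nn.LSTM(dim, dim, batch_first=True)
            self.proj = nn.Linear(dim, vocab_size)

        def forward(self, tokens):
            hidden, _ = self.lstm(self.embed(tokens))
            return self.proj(hidden)                 # (batch, seq_len, vocab_size)

    vocab_size = 1000                                # hypothetical vocabulary size
    model = TinyLM(vocab_size)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    # Hypothetical batch of token ids from the ASR transcripts.
    batch = torch.randint(0, vocab_size, (8, 20))
    inputs, targets = batch[:, :-1], batch[:, 1:]    # predict the next token
    loss = criterion(model(inputs).reshape(-1, vocab_size), targets.reshape(-1))
    loss.backward()
    optimizer.step()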

For fine-tuning ELMo with our method and using it to train the SLU classifier:

# Fine-tuning the LM with confusions extracted without supervision
python3 main_lm.py ../models/lm/snips_tts/2

# Fine-tuning the LM with confusions extracted with supervision
python3 main_lm.py ../models/lm/snips_tts/3

# Training the SLU classifier with the fine-tuned LM; you may need to point the config to the desired checkpoint.
# With lm/snips_tts/2, which uses unsupervised extraction
python3 main.py ../models/snips_tts/4

# With lm/snips_tts/3, which uses supervised extraction
python3 main.py ../models/snips_tts/5
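
Roughly, the fine-tuning objective here augments the LM loss with a term that pulls the contextualized embeddings of a word and its extracted ASR confusions closer together. The snippet below sketches one way such a term could look; it illustrates the general idea only, not the exact loss from the paper, and all tensors and the confusion pair are hypothetical:

    import torch
    import torch.nn.functional as F

    def confusion_loss(hidden_ref, hidden_hyp, pairs):
        """Pull together contextual embeddings of confusable tokens.

        hidden_ref, hidden_hyp: (seq_len, dim) hidden states of the reference
        transcript and the ASR hypothesis from the same (stand-in) encoder.
        pairs: list of (ref_index, hyp_index) positions of confused words.
        """
        terms = []
        for i, j in pairs:
            cos = F.cosine_similarity(hidden_ref[i], hidden_hyp[j], dim=0)
            terms.append(1.0 - cos)                  # 0 when the embeddings align
        return torch.stack(terms).mean()

    # Hypothetical hidden states and one confusion pair (e.g. "cold" vs. "called").
    hidden_ref = torch.randn(10, 256, requires_grad=True)
    hidden_hyp = torch.randn(10, 256, requires_grad=True)
    aux = confusion_loss(hidden_ref, hidden_hyp, pairs=[(3, 3)])

    # During fine-tuning this term would be added to the LM objective, e.g.:
    # total_loss = lm_loss + confusion_weight * aux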

Reference

If you find our work useful, please cite the following paper:

    @inproceedings{9054689,
        author={C. {Huang} and Y. {Chen}},
        booktitle={ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
        title={Learning {ASR}-Robust Contextualized Embeddings for Spoken Language Understanding},
        year={2020},
        pages={8009--8013},
    }
