Feb 28, 2019 · When applied to ELMo, our method achieves a 4x speedup and eliminates 80% of trainable parameters while achieving competitive performance on downstream tasks.
This work redesigns the learning objective and proposes an efficient framework for training contextual representation models that bypasses the softmax layer.
Our framework reduces the time spent on the output layer to a negligible level and eliminates almost all the trainable parameters of the softmax layer.
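The snippets above describe replacing the softmax output layer with continuous outputs: rather than scoring every word in a V-word vocabulary, the model predicts the (frozen) pretrained embedding of the target word and minimizes a distance loss, so the output-layer cost no longer scales with V. The sketch below is a minimal illustration of that idea; the sizes, the cosine-distance loss, and the function names are my assumptions, not the paper's exact formulation.

```python
import numpy as np

# Illustrative sketch (not the paper's code): a continuous-output layer
# regresses the pretrained embedding of the target word instead of
# computing a softmax over the full vocabulary.

rng = np.random.default_rng(0)

V, d = 50_000, 128  # hypothetical vocabulary size and embedding dim
pretrained = rng.standard_normal((V, d))
pretrained /= np.linalg.norm(pretrained, axis=1, keepdims=True)  # unit rows

def softmax_output_params(V, d):
    # A conventional softmax output layer needs one weight vector per
    # vocabulary word: a V x d trainable matrix.
    return V * d

def continuous_output_loss(hidden, target_id):
    # Cosine distance between the model's hidden state and the frozen
    # pretrained embedding of the target word; no V x d trainable matrix,
    # and the cost per prediction is O(d), independent of V.
    h = hidden / np.linalg.norm(hidden)
    return 1.0 - float(h @ pretrained[target_id])

hidden = rng.standard_normal(d)          # stand-in for an LSTM hidden state
loss = continuous_output_loss(hidden, target_id=42)
print(softmax_output_params(V, d))       # 6400000 trainable params avoided
```

Because the pretrained embeddings are fixed, all of those softmax parameters disappear from the trainable set, which is the source of the parameter and time savings the snippets mention.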
Thank you for checking out our repository! This repo contains code for the paper Efficient Contextual Representation Learning With Continuous Outputs.
We introduced an efficient framework to learn contextual representations without the softmax layer. The experiments with ELMo showed that we significantly accelerate training while maintaining competitive downstream performance.
Contextual representation models have achieved great success in improving various downstream natural language processing tasks.
Hsieh, K. Keutzer, J. Demmel. To appear in ICDM 2019. Efficient Contextual Representation Learning Without Softmax Layer, Liunian Harold ...