AdapLeR: Speeding up Inference by Adaptive Length Reduction

Ali Modarressi, Hosein Mohebbi, Mohammad Taher Pilehvar

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

Abstract

Pre-trained language models have shown stellar performance in various downstream tasks. However, this usually comes at the cost of high latency and computation, hindering their usage in resource-limited settings. In this work, we propose a novel approach for reducing the computational cost of BERT with minimal loss in downstream performance. Our method dynamically eliminates less-contributing tokens through layers, resulting in shorter lengths and consequently lower computational cost. To determine the importance of each token representation, we train a Contribution Predictor for each layer using a gradient-based saliency method. Our experiments on several diverse classification tasks show speedups of up to 22x during inference time without much sacrifice in performance. We also validate the quality of the selected tokens in our method using human annotations in the ERASER benchmark. In comparison to other widely used strategies for selecting important tokens, such as saliency and attention, our proposed method has a significantly lower false positive rate in generating rationales.
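
The per-layer token-dropping idea summarized above can be illustrated with a short sketch. This is a minimal PyTorch-style illustration, not the authors' implementation: the predictor architecture, the fixed dropping threshold, and the names ContributionPredictor and drop_low_contribution_tokens are assumptions made purely for the example.

    # Hypothetical sketch of per-layer adaptive length reduction (not the authors' code).
    import torch
    import torch.nn as nn

    class ContributionPredictor(nn.Module):
        """Scores each token's contribution from its hidden state (illustrative architecture)."""
        def __init__(self, hidden_size: int):
            super().__init__()
            self.scorer = nn.Sequential(
                nn.Linear(hidden_size, hidden_size // 4),
                nn.GELU(),
                nn.Linear(hidden_size // 4, 1),
            )

        def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
            # hidden_states: (batch, seq_len, hidden) -> contribution scores (batch, seq_len)
            scores = self.scorer(hidden_states).squeeze(-1)
            return torch.softmax(scores, dim=-1)

    def drop_low_contribution_tokens(hidden_states, scores, threshold=0.01, cls_index=0):
        """Keep tokens whose predicted contribution exceeds an (assumed) threshold.

        The [CLS] token is always kept so the classification head retains its input.
        For simplicity this sketch assumes batch_size == 1 (each example may keep a
        different number of tokens).
        """
        keep = scores > threshold
        keep[:, cls_index] = True
        return hidden_states[0, keep[0], :].unsqueeze(0)

In such a setup, the predictor would be applied between consecutive encoder layers, so each layer processes only the tokens kept by the previous one; the progressively shorter sequences are what reduce the overall computational cost.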
Original language: English
Title of host publication: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics
Place of publication: Dublin
Pages: 1-15
Number of pages: 15
Volume: 1
Publication status: Published - May 2022

Keywords

  • Pre-trained Language Models
  • Computational Cost
  • Contribution Predictor
