The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
November 2021

Abstract
This work shows how prompt tuning quality improves with model scale, enabling parameter-efficient adaptation of large language models.
Publication
Proceedings of the Conference on Empirical Methods in Natural Language Processing
