The Power of Scale for Parameter-Efficient Prompt Tuning

Abstract

This work shows that the quality of prompt tuning improves with model scale, enabling parameter-efficient adaptation of large language models while keeping their weights frozen.
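To make the idea concrete, here is a minimal sketch of the prompt-tuning setup the abstract refers to: a small matrix of trainable "soft prompt" embeddings is prepended to the frozen token embeddings, so only `prompt_len * d_model` parameters are adapted per task. All names and sizes below are illustrative, not from the paper, and the frozen transformer itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen pieces of a toy language model: the vocabulary embedding table
# stays fixed during adaptation (illustrative sizes, not from the paper).
vocab_size, d_model = 100, 16
embedding = rng.normal(size=(vocab_size, d_model))  # frozen

# The only trainable parameters: `prompt_len` virtual-token embeddings.
prompt_len = 4
soft_prompt = rng.normal(size=(prompt_len, d_model)) * 0.01  # trainable

def build_inputs(token_ids):
    """Prepend the soft prompt to the frozen token embeddings."""
    token_embeds = embedding[token_ids]  # (seq_len, d_model), frozen lookup
    return np.concatenate([soft_prompt, token_embeds], axis=0)

# A 3-token input becomes a (prompt_len + 3, d_model) sequence for the
# frozen model; gradients would flow only into `soft_prompt`.
inputs = build_inputs(np.array([5, 17, 42]))
print(inputs.shape)
```

During training, the task loss is backpropagated through the frozen model into `soft_prompt` alone, which is what makes the method parameter-efficient.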

Publication
Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)
Rami Al-Rfou