Fine-tuning pre-trained models is common in NLP, but forking the model for each task can be a burden. Prompt tuning adds a small set of learnable vectors to the input and can match fine-tuning quality while sharing the same frozen model across all tasks. https://t.co/NKHhMzk056 https://t.co/sceEtJz1u2
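The idea is easy to sketch: the pre-trained backbone stays frozen, and the only per-task parameters are a short sequence of "soft prompt" vectors prepended to the input token embeddings before the model runs. Below is a minimal PyTorch sketch of that setup; the `SoftPromptWrapper` name, the `backbone` interface (any module that consumes embeddings), and the `prompt_len=20` default are illustrative assumptions, not the implementation from the paper linked above.

```python
import torch
import torch.nn as nn

class SoftPromptWrapper(nn.Module):
    """Prepend learnable prompt vectors to the token embeddings of a frozen model.

    `backbone` is assumed to be any module that maps (batch, seq, embed_dim)
    embeddings to an output; only the soft prompt receives gradients.
    """
    def __init__(self, backbone: nn.Module, embed_dim: int, prompt_len: int = 20):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False  # the shared model stays frozen
        # The entire per-task parameter budget: prompt_len x embed_dim floats.
        self.soft_prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        batch = token_embeds.shape[0]
        # Broadcast the same learned prompt across the batch, then prepend it.
        prompt = self.soft_prompt.unsqueeze(0).expand(batch, -1, -1)
        return self.backbone(torch.cat([prompt, token_embeds], dim=1))
```

Switching tasks then means swapping one small tensor: with `prompt_len=20` and a 768-dimensional embedding (BERT-base scale), that is roughly 15K trainable parameters per task instead of a full copy of the model.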

Favorite tweet:
https://twitter.com/GoogleAI/status/1491915977138720770
— Google AI (@GoogleAI) Feb 10, 2022