Stay on topic with Classifier-Free Guidance

In Stable Diffusion, CFG (Classifier-Free Guidance) is used to make the model follow a given prompt more closely. As the authors of the paper Stay on topic with Classifier-Free Guidance show, the same technique can be applied to autoregressive language modeling, where it increases adherence to the prompt and brings improvements roughly equivalent to doubling the parameter count. Used as a pure inference-time technique, CFG improves the performance of Pythia, GPT-2, and LLaMA-family models across an array of tasks: Q&A, reasoning, code generation, and machine translation, achieving SOTA on LAMBADA with LLaMA-7B, beating PaLM-540B. CFG can also increase the faithfulness and coherence of assistants on challenging form-driven and content-driven prompts.
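In practice this means two forward passes per decoding step: one on the full prompted context and one on an "unconditional" context, with the logits combined as uncond + gamma * (cond - uncond). Below is a minimal sketch of this idea, assuming a HuggingFace-style causal LM; the function name `cfg_generate` and its parameters are illustrative, not the paper's actual code:

```python
# A minimal sketch of classifier-free guidance for autoregressive decoding.
# Assumption: a HuggingFace-style causal LM; the paper's implementation may differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def cfg_generate(model, tokenizer, prompt, uncond_prompt="", gamma=1.5, max_new_tokens=50):
    device = next(model.parameters()).device
    cond_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
    # The unconditional context drops the prompt (here: just a BOS token).
    uncond_ids = tokenizer(uncond_prompt or tokenizer.bos_token,
                           return_tensors="pt").input_ids.to(device)

    for _ in range(max_new_tokens):
        with torch.no_grad():
            cond_logits = model(cond_ids).logits[:, -1, :]
            uncond_logits = model(uncond_ids).logits[:, -1, :]
        # CFG: push the conditional distribution away from the unconditional one.
        # gamma = 1.0 recovers ordinary decoding; gamma > 1 strengthens prompt adherence.
        guided = uncond_logits + gamma * (cond_logits - uncond_logits)
        next_token = torch.argmax(guided, dim=-1, keepdim=True)  # greedy for simplicity
        # Generated tokens are appended to both contexts.
        cond_ids = torch.cat([cond_ids, next_token], dim=-1)
        uncond_ids = torch.cat([uncond_ids, next_token], dim=-1)
        if next_token.item() == tokenizer.eos_token_id:
            break
    return tokenizer.decode(cond_ids[0], skip_special_tokens=True)
```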

Interestingly, the best CFG-scale values for LLMs are around 1.5-2.0, unlike diffusion models, where people often use 7.0-8.0 (or even more).
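For example, reusing the sketch above with a scale in that range (model choice and prompt here are illustrative):

```python
# Hypothetical usage of the cfg_generate sketch, with gamma in the paper's sweet spot.
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
print(cfg_generate(model, tokenizer,
                   "Translate to French: Hello, world!", gamma=1.5))
```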

This approach could also be useful for many other tasks, such as reducing hallucinations.
