GPT-4: LLaMA-Adapter is a lightweight adaptation method for fine-tuning instruction-following LLaMA models on Stanford Alpaca’s 52K instruction data. With only 1.2M learnable parameters, it can turn LLaMA into an instruction-following model within an hour. The method introduces a novel zero-init attention mechanism that stabilizes training at early stages and can be extended to multi-modal input instructions. LLaMA-Adapter generates high-quality instruction-following responses, comparable to the fully fine-tuned Stanford Alpaca and to Alpaca-LoRA.
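To make the core idea concrete, here is a minimal PyTorch sketch of zero-init gated attention as described above: a small set of learnable prompt tokens is attended to alongside the regular tokens inside a frozen attention layer, and their contribution is scaled by a gating parameter initialized to zero, so training starts from the unmodified pretrained behavior. This is an illustrative simplification, not the repository's actual API; names such as `ZeroInitAdapterAttention` and `prompt_len` are made up for the example.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class ZeroInitAdapterAttention(nn.Module):
    """Simplified attention layer with a learnable prompt and a zero-initialized gate."""

    def __init__(self, dim: int, n_heads: int, prompt_len: int = 10):
        super().__init__()
        self.n_heads = n_heads
        self.head_dim = dim // n_heads
        # Frozen projections, standing in for the pretrained LLaMA attention weights.
        self.wq = nn.Linear(dim, dim, bias=False)
        self.wk = nn.Linear(dim, dim, bias=False)
        self.wv = nn.Linear(dim, dim, bias=False)
        self.wo = nn.Linear(dim, dim, bias=False)
        for proj in (self.wq, self.wk, self.wv, self.wo):
            proj.weight.requires_grad = False
        # The only trained parts: the adaptation prompt and a per-head gate starting at zero,
        # so the adapter contributes nothing at step 0 and training is stable early on.
        self.prompt = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)
        self.gate = nn.Parameter(torch.zeros(n_heads))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        bsz, seqlen, dim = x.shape
        q = self.wq(x).view(bsz, seqlen, self.n_heads, self.head_dim).transpose(1, 2)
        k = self.wk(x).view(bsz, seqlen, self.n_heads, self.head_dim).transpose(1, 2)
        v = self.wv(x).view(bsz, seqlen, self.n_heads, self.head_dim).transpose(1, 2)

        # Standard (frozen) self-attention over the input tokens.
        scores = (q @ k.transpose(-2, -1)) / math.sqrt(self.head_dim)
        out = F.softmax(scores, dim=-1) @ v

        # Attention from the queries to the adaptation prompt, scaled by the gate.
        prompt = self.prompt.unsqueeze(0).expand(bsz, -1, -1)
        pk = self.wk(prompt).view(bsz, -1, self.n_heads, self.head_dim).transpose(1, 2)
        pv = self.wv(prompt).view(bsz, -1, self.n_heads, self.head_dim).transpose(1, 2)
        p_scores = (q @ pk.transpose(-2, -1)) / math.sqrt(self.head_dim)
        p_out = F.softmax(p_scores, dim=-1) @ pv
        out = out + self.gate.view(1, -1, 1, 1) * p_out

        out = out.transpose(1, 2).reshape(bsz, seqlen, dim)
        return self.wo(out)


if __name__ == "__main__":
    layer = ZeroInitAdapterAttention(dim=64, n_heads=4)
    y = layer(torch.randn(2, 16, 64))
    print(y.shape)  # torch.Size([2, 16, 64])
```

With the gate at zero, the layer's output is identical to the frozen pretrained attention; the gate then learns how much of the prompt's signal to inject, which is the intuition behind the stable early training the summary mentions.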
Read more at GitHub…