The OpenOrca team has fine-tuned Llama2-13B on its own dataset using OpenChat's MultiPack packing, surpassing the performance reported in Microsoft Research's Orca paper. The model achieved this with less than 1/10th of the compute and less than 20% of the dataset size. It is expected to top the HuggingFaceH4 Open LLM Leaderboard and the GPT4All Leaderboard for 13B models, with the MultiPack algorithm making the training significantly more efficient.
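The efficiency gain comes from packing: instead of padding each short sample to the full context length, multiple samples are grouped into one training sequence so fewer token slots are wasted. The sketch below is a minimal illustration of that general idea using a first-fit-decreasing heuristic; it is not OpenChat's actual MultiPack implementation, and the function names, sequence length, and sample lengths are hypothetical.

```python
# Illustrative sketch of sequence packing (not OpenChat's MultiPack code).
from typing import List


def pack_sequences(lengths: List[int], max_len: int = 4096) -> List[List[int]]:
    """Group sample indices into bins whose total token count fits max_len.

    Packing several short samples into one training sequence reduces the
    padding that would otherwise fill each sequence, which is where the
    compute savings come from.
    """
    # Sort sample indices by length, longest first (first-fit-decreasing).
    order = sorted(range(len(lengths)), key=lambda i: lengths[i], reverse=True)
    bins: List[List[int]] = []   # each bin holds sample indices
    space: List[int] = []        # remaining token capacity of each bin

    for idx in order:
        need = lengths[idx]
        for b, free in enumerate(space):
            if need <= free:
                bins[b].append(idx)
                space[b] -= need
                break
        else:
            # No existing bin fits this sample; open a new one.
            bins.append([idx])
            space.append(max_len - need)
    return bins


if __name__ == "__main__":
    sample_lengths = [512, 3800, 1200, 256, 2900, 640, 4000, 128]  # hypothetical
    packed = pack_sequences(sample_lengths, max_len=4096)
    used = sum(sample_lengths)
    capacity = len(packed) * 4096
    print(f"{len(sample_lengths)} samples packed into {len(packed)} sequences "
          f"({used / capacity:.0%} of token slots used).")
```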