Llama 3 is a family of foundation language models that support multilinguality, coding, reasoning, and tool use. The models come in several sizes, the largest with 405B parameters and a 128K-token context window. Development focused on three levers: curating high-quality data, scaling compute and parameters, and managing complexity, drawing on web text, code, and mathematical content with dedicated processing pipelines for each. The models underwent pre-training followed by supervised finetuning and direct preference optimization (DPO) to improve performance and safety. Llama 3 models perform strongly across a wide range of benchmarks and are designed to balance helpfulness with harmlessness.
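
The direct preference optimization step mentioned above trains the policy to prefer chosen over rejected responses without a separate reward model. A minimal sketch of the standard DPO loss for one preference pair is below; the function name, argument names, and `beta` value are illustrative assumptions, not the paper's actual training code:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Sketch of the DPO objective for a single preference pair.

    Inputs are summed log-probabilities of the chosen and rejected
    responses under the policy being trained and under a frozen
    reference model; beta limits drift from the reference.
    """
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    margin = beta * (chosen_ratio - rejected_ratio)
    # negative log-sigmoid of the margin: the loss shrinks as the
    # policy favors the chosen response more than the reference does
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

The loss is always positive and decreases as the policy's preference margin for the chosen response grows, which is what pushes the model toward human-preferred outputs during post-training.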