Nvidia improves Meta's Llama model with new training approach

November 18, 2024

Originally published on the-decoder.com, Oct 18, 2024.

Nvidia has introduced a new large language model that outperforms others in alignment benchmarks. The company achieved this through a special training procedure combining evaluation and preference models. The new model, called Llama-3.1-Nemotron-70B-Instruct, is based on Meta's open-source Llama 3.1 model. Nvidia optimized it to provide helpful […]
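The "evaluation and preference models" mentioned above plausibly refer to a reward model trained on both absolute quality scores and pairwise preference comparisons. The sketch below is an illustrative PyTorch toy, not Nvidia's actual training code: the names `CombinedRewardHead`, `combined_loss`, and the `alpha` weighting are assumptions introduced for illustration. It combines a regression loss on annotated helpfulness scores (the evaluation part) with a Bradley-Terry loss on chosen-vs-rejected response pairs (the preference part).

```python
# Illustrative sketch only: a reward head trained with a combined
# regression ("evaluation") and Bradley-Terry ("preference") objective.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CombinedRewardHead(nn.Module):
    """Maps a response embedding to a scalar reward (hypothetical helper)."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.score = nn.Linear(hidden_size, 1)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        return self.score(hidden).squeeze(-1)


def combined_loss(head, emb_chosen, emb_rejected,
                  score_chosen, score_rejected, alpha=0.5):
    """Weighted sum of a regression loss on absolute helpfulness scores
    and a Bradley-Terry loss on which of two responses is preferred."""
    r_chosen = head(emb_chosen)
    r_rejected = head(emb_rejected)

    # Regression term: match annotated quality scores directly.
    regression = (F.mse_loss(r_chosen, score_chosen)
                  + F.mse_loss(r_rejected, score_rejected))

    # Bradley-Terry term: the chosen response should score higher.
    preference = -F.logsigmoid(r_chosen - r_rejected).mean()

    return alpha * regression + (1 - alpha) * preference


if __name__ == "__main__":
    # Toy usage with random embeddings standing in for LLM hidden states.
    torch.manual_seed(0)
    head = CombinedRewardHead(hidden_size=16)
    emb_a, emb_b = torch.randn(4, 16), torch.randn(4, 16)
    scores_a = torch.tensor([4.0, 3.0, 5.0, 2.0])
    scores_b = torch.tensor([1.0, 2.0, 3.0, 0.0])
    loss = combined_loss(head, emb_a, emb_b, scores_a, scores_b)
    loss.backward()
    print(f"combined loss: {loss.item():.4f}")
```

In this kind of setup, the regression term anchors rewards to an absolute scale while the pairwise term sharpens the ranking between competing responses; how Nvidia actually weighted or structured the two objectives is not stated in this excerpt.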