Originally published on the-decoder.com, Oct 18, 2024.

Nvidia has introduced a new large language model that outperforms others on alignment benchmarks. The company achieved this through a special training procedure combining evaluation and preference models. The new model, called Llama-3.1-Nemotron-70B-Instruct, is based on Meta's open-source Llama 3.1 model. Nvidia optimized it to provide helpful […]

Source: https://www.predictiveanalyticsworld.com/machinelearningtimes/nvidia-improves-metas-llama-model-with-new-training-approach/13637/