What are the most popular LLM benchmarks?

An overview of the various LLM benchmarks you can use to evaluate models

This post covers the most widely used benchmarks for assessing the performance of large language models.

Why use benchmarks for LLM evaluation?

If you’re new to the topic of LLM evaluation, here’s a quick reminder. LLM benchmarks help evaluate a large language model’s performance by providing a standardized procedure to measure metrics around a variety of tasks.

Benchmarks contain all the setup and data you need to evaluate LLMs for your purposes, including:

  • “Ground truth” datasets (relevant tasks/questions/prompts with expected answers)
  • How the input prompts should be fed into the LLM
  • How the output responses should be interpreted/collected
  • Which metrics and scores to compute (and how)

Together, these provide a consistent way to compare performance across models. But which LLM benchmark should you use? That depends mostly on your use case, i.e. what you intend to use a large language model for.
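To make the idea concrete, here's a minimal sketch (in Python) of what such a harness boils down to. The tiny dataset, the `ask_model` placeholder, and the exact-match scoring are simplified stand-ins, not any particular benchmark's implementation:

```python
# Minimal sketch of a benchmark harness (illustrative only).
# `ask_model` stands in for whatever API call or local inference you use.

def ask_model(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM call here")

# 1. "Ground truth" dataset: prompts with expected answers.
dataset = [
    {"prompt": "What is the capital of France?", "expected": "Paris"},
    {"prompt": "2 + 2 = ?", "expected": "4"},
]

def evaluate(dataset) -> float:
    correct = 0
    for example in dataset:
        # 2. Feed the input prompt into the LLM.
        response = ask_model(example["prompt"])
        # 3. Interpret the output (here: trivial normalized exact match).
        if response.strip().lower() == example["expected"].lower():
            correct += 1
    # 4. Compute the metric (here: plain accuracy).
    return correct / len(dataset)
```

Real benchmarks differ mainly in how elaborate steps 1, 3, and 4 are: curated datasets, careful answer extraction, and task-specific metrics.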

🔍 Find the right LLM for your project

The “best model” is not necessarily the best for your programming language, framework or use case. Find your best model using the DevQualityEval leaderboard.

The best LLM benchmarks

If you’re looking for a one-stop-shop, Hugging Face's Big Benchmarks Collection is a pretty comprehensive list of widely used benchmarks. It includes the benchmarks covered by the popular Open LLM Leaderboard and adds a variety of other important benchmarks.

For an evaluation comparing the performance of LLMs for software development based on quality assessments, check out the DevQualityEval benchmark.

For a more granular view of LLM benchmarks, here are a few of the most popular ones, categorized by use case:

Reasoning, conversation, Q&A benchmarks

These benchmarks assess model capabilities including reasoning, argumentation, and question answering. Some are domain-specific, others are general.

HellaSwag (GitHub)

This benchmark focuses on commonsense natural language inference, i.e. whether a model can plausibly finish sentences the way a human would. It contains questions that are trivial for humans but may pose a challenge to models.

The dataset contains 70,000 multiple-choice questions (based on ActivityNet or WikiHow) with an adversarial set of machine-generated (and human-verified) wrong answers. Four choices are offered for how the given sentence might continue, and the model is asked to select the right one.
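If you want to run something like this yourself, here's a rough sketch of a HellaSwag-style evaluation loop. It assumes the Hugging Face copy of the dataset (with `ctx`, `endings`, and `label` fields) and leaves the actual model scoring as a placeholder, since that depends on your inference setup:

```python
# Sketch: scoring a model on HellaSwag-style 4-way sentence completion.
# Assumes the Hugging Face copy of the dataset; field names may differ in other copies.
from datasets import load_dataset

def pick_ending(context: str, endings: list[str]) -> int:
    """Placeholder: return the index (0-3) of the ending the model prefers,
    e.g. by comparing per-choice log-likelihoods or by prompting for A/B/C/D."""
    raise NotImplementedError

val = load_dataset("hellaswag", split="validation")

correct = 0
for example in val:
    prediction = pick_ending(example["ctx"], example["endings"])
    correct += int(prediction == int(example["label"]))

print(f"accuracy: {correct / len(val):.3f}")
```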

BIG-Bench Hard (GitHub)

BIG-Bench Hard is based on BIG-Bench (the Beyond the Imitation Game Benchmark), which contains over 200 tasks spanning a range of task types and domains.

BIG-Bench Hard focuses on a subset of 23 of the most challenging BIG-Bench tasks: those on which, before the benchmark was introduced, language models had not managed to outperform the average human rater.

SQuAD (GitHub)

The Stanford Question Answering Dataset (SQuAD) tests reading comprehension. The benchmark contains 107,785 question-answer pairs on 536 Wikipedia articles, written by humans through crowdsourcing. SQuAD 2.0 also adds over 50,000 unanswerable questions to test whether models can determine when the source material supports no answer and opt not to answer.

A separate test set is kept private so as not to compromise the integrity of the results (e.g. by models being trained on it). To get your model evaluated on the SQuAD test set, you need to submit it to the benchmark’s developers.
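SQuAD reports exact match and token-level F1 between the predicted and reference answers. Here's a simplified sketch of those two metrics; the official scoring script additionally strips articles and punctuation during normalization:

```python
# Simplified version of SQuAD-style scoring: exact match and token-level F1.
from collections import Counter

def normalize(text: str) -> list[str]:
    return text.lower().split()

def exact_match(prediction: str, reference: str) -> int:
    return int(normalize(prediction) == normalize(reference))

def f1(prediction: str, reference: str) -> float:
    pred_tokens, ref_tokens = normalize(prediction), normalize(reference)
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Denver Broncos", "the Denver Broncos"))      # 0 (no article stripping here)
print(round(f1("Denver Broncos", "the Denver Broncos"), 2))     # 0.8 (token overlap still scores high)
```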

IFEval (GitHub)

IFEval evaluates the ability of models to follow instructions provided in natural language. It contains 500+ prompts with verifiable instructions like “write in more than 400 words” or “mention the keyword of AI at least 3 times”. IFEval is part of the Open LLM Leaderboard by Hugging Face.
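The key idea is that each instruction can be checked programmatically rather than judged by a human or another model. A sketch of two such checks (not IFEval's actual code) might look like this:

```python
# Sketch of "verifiable instruction" checks in the spirit of IFEval (illustrative only).
import re

def check_min_words(response: str, min_words: int = 400) -> bool:
    """E.g. 'write in more than 400 words'."""
    return len(response.split()) > min_words

def check_keyword_frequency(response: str, keyword: str = "AI", min_count: int = 3) -> bool:
    """E.g. 'mention the keyword AI at least 3 times'."""
    return len(re.findall(rf"\b{re.escape(keyword)}\b", response)) >= min_count

response = "AI is everywhere. AI helps, and AI also raises questions."
print(check_min_words(response), check_keyword_frequency(response))  # False True
```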

MuSR (GitHub)

MuSR stands for Multi-step Soft Reasoning. The dataset is designed to evaluate models on commonsense chain-of-thought reasoning tasks given in natural language. MuSR has two main characteristics that differentiate it from other benchmarks:

  • An algorithmically generated dataset with complex problems
  • Free-text narratives that correspond to real-world reasoning domains

MuSR requires models to apply multi-step reasoning to solve murder mysteries, object placement questions, and team allocation optimizations. Models have to parse long texts to understand context, and then apply reasoning based on that context. MuSR is part of the Open LLM Leaderboard by Hugging Face.

MMLU-PRO (GitHub)

The abbreviation stands for Massive Multitask Language Understanding - Professional. It’s an improved version of the standard MMLU dataset.

In this benchmark, models have to answer multiple-choice questions with 10 answer options (instead of the 4 in basic MMLU), and some questions require reasoning. The dataset is also of higher quality than MMLU, which suffered from noisy data and data contamination (many newer models are likely to have been trained on its questions), reducing its difficulty and therefore its usefulness. MMLU-PRO rectifies that and is considered more challenging than MMLU. MMLU-PRO is part of the Open LLM Leaderboard by Hugging Face.
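As a rough illustration of the format, here's how a 10-option question might be turned into a prompt and how a letter answer could be pulled out of a free-text response. The prompt wording and the extraction regex are illustrative assumptions, not the benchmark's official harness:

```python
# Sketch: formatting a 10-option multiple-choice question (MMLU-Pro style)
# and extracting the chosen letter from a free-text model response.
import re
import string

def format_prompt(question: str, options: list[str]) -> str:
    labels = string.ascii_uppercase[: len(options)]  # A..J for 10 options
    lines = [question] + [f"{label}. {option}" for label, option in zip(labels, options)]
    lines.append("Think step by step, then answer with a single letter.")
    return "\n".join(lines)

def extract_answer(response: str, num_options: int = 10) -> str | None:
    labels = string.ascii_uppercase[:num_options]
    matches = re.findall(rf"\b([{labels}])\b", response)
    return matches[-1] if matches else None  # take the last standalone letter

print(extract_answer("The reaction is exothermic, so the answer is (C)."))  # C
```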

MT-Bench

MT-Bench is a multi-turn benchmark (with follow-up questions) that evaluates models' ability to participate in coherent, informative, and engaging conversations. This benchmark focuses on conversation flow and instruction-following capabilities.

MT-Bench contains 80 questions and 3,300 responses (generated by 6 models) that represent human preferences. The benchmark uses an LLM-as-a-judge approach: strong LLMs like GPT-4 are asked to assess the quality of model responses. Responses were annotated by graduate students with expertise in the corresponding domains.
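Here's a rough sketch of the LLM-as-a-judge pattern. The judge prompt below is a paraphrase for illustration (MT-Bench ships its own templates), and `judge_model` stands in for a call to whatever strong judge model you use:

```python
# Sketch of the LLM-as-a-judge approach (prompt wording is illustrative).
import re

def judge_model(prompt: str) -> str:
    raise NotImplementedError("call a strong judge LLM, e.g. via an API client")

JUDGE_TEMPLATE = """You are an impartial judge. Rate the assistant's answer to the
user's question on a scale of 1 to 10 for helpfulness, relevance, and accuracy.
End your review with a line of the form: Rating: [[<score>]]

[Question]
{question}

[Assistant's Answer]
{answer}
"""

def score_answer(question: str, answer: str) -> int | None:
    review = judge_model(JUDGE_TEMPLATE.format(question=question, answer=answer))
    match = re.search(r"Rating:\s*\[\[(\d+)\]\]", review)
    return int(match.group(1)) if match else None
```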

Domain-specific benchmarks

GPQA (GitHub)

GPQA stands for Graduate-Level Google-Proof Q&A Benchmark. It’s a challenging dataset of 448 multiple-choice questions spanning the domains of biology, physics, and chemistry. The questions in GPQA can be considered very difficult: even domain experts with PhDs achieve only about 65% accuracy.

The questions are hard enough to be “Google-proof”: even with unrestricted web access and 30+ minutes of research per question, out-of-domain validators (e.g. a biologist answering a chemistry question) achieved only 34% accuracy. GPQA is part of the Open LLM Leaderboard by Hugging Face.

MedQA (GitHub)

The abbreviation stands for Medical Question Answering benchmark. It’s a multiple-choice question-answering evaluation based on medical licensing exams from the United States, Mainland China, and Taiwan. The benchmark accordingly covers three languages: English (12k+ questions), simplified Chinese (34k+ questions), and traditional Chinese (14k+ questions).

PubMedQA (GitHub)

PubMedQA is a question-answering dataset about biomedical research. Given a research abstract, models must answer the corresponding question with one of three possible answers: yes, no, or maybe.

Answering requires reasoning over the supplied biomedical abstracts. The dataset comes with expert-labeled (1k), unlabeled (61.2k), and artificially generated (211.3k) QA instances.
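A sketch of an evaluation loop over the expert-labeled split might look like the following. The field names (`question`, `context`, `final_decision`) match the copy on the Hugging Face Hub at the time of writing, and the actual model call is left as a placeholder:

```python
# Sketch: evaluating yes/no/maybe answers on PubMedQA-style items.
from datasets import load_dataset

def answer_yes_no_maybe(question: str, abstract: str) -> str:
    """Placeholder: prompt your model with the abstract and question and
    map its response to one of 'yes', 'no', or 'maybe'."""
    raise NotImplementedError

data = load_dataset("pubmed_qa", "pqa_labeled", split="train")

correct = 0
for item in data:
    abstract = " ".join(item["context"]["contexts"])
    prediction = answer_yes_no_maybe(item["question"], abstract)
    correct += int(prediction == item["final_decision"])

print(f"accuracy: {correct / len(data):.3f}")
```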

🤔 Curious about coding benchmarks?

We’re covering software code generation benchmarks in a separate post in this series: Comparing LLM benchmarks for software development.

Math benchmarks

GSM8K (GitHub)

The goal of this benchmark is to evaluate multi-step mathematical reasoning. GSM8K is an entry-level benchmark with 8,500 grade school math problems that a bright middle schooler could solve. The dataset is divided into 7,500 training problems and 1,000 test problems.

Problems (written by human problem writers) are linguistically diverse and require between 2 and 8 steps to solve. The solution requires the LLM to use a sequence of basic arithmetic operations (+ - / *) to arrive at the correct result.
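Reference solutions in GSM8K end with a final line of the form `#### <number>`, which makes scoring straightforward. Here's a simplified extractor and comparison (not the official evaluation code):

```python
# Sketch: extracting and comparing final numeric answers in GSM8K style.
import re

def reference_answer(solution: str) -> str:
    # Ground-truth solutions end with "#### <number>".
    return solution.split("####")[-1].strip().replace(",", "")

def model_answer(response: str) -> str | None:
    # Take the last number the model mentions (a common, if crude, heuristic).
    numbers = re.findall(r"-?\d[\d,]*\.?\d*", response)
    return numbers[-1].replace(",", "") if numbers else None

solution = "Natalia sold 48 / 2 = 24 clips in May.\nAltogether she sold 48 + 24 = 72 clips.\n#### 72"
response = "She sold 48 in April and 24 in May, so 72 clips in total."
print(model_answer(response) == reference_answer(solution))  # True
```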

MATH (GitHub)

The MATH dataset contains 12,500 competition-level mathematics problems to challenge LLMs. It provides the ground truth: each of these problems comes with a step-by-step solution. This enables the evaluation of the LLM’s problem-solving skills. MATH is part of the Open LLM Leaderboard by Hugging Face.
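The ground-truth solutions wrap the final answer in `\boxed{...}`, so grading typically means extracting that expression and comparing it to the model's final answer. A simplified extractor (not the benchmark's official grading code) could look like this:

```python
# Sketch: pulling the final answer out of a MATH-style step-by-step solution.
def extract_boxed(solution: str) -> str | None:
    start = solution.rfind(r"\boxed{")
    if start == -1:
        return None
    depth, answer = 0, []
    for char in solution[start + len(r"\boxed{"):]:
        if char == "{":
            depth += 1
        elif char == "}":
            if depth == 0:
                break  # closing brace of \boxed{...}
            depth -= 1
        answer.append(char)
    return "".join(answer)

print(extract_boxed(r"The roots sum to $-b/a$, so the answer is $\boxed{\frac{3}{2}}$."))
# \frac{3}{2}
```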

MathEval (GitHub)

MathEval is intended to thoroughly evaluate the mathematical capabilities of LLMs. Its developers meant MathEval to be the standard reference for comparing the mathematical abilities of models.

It is a collection of 20 datasets (including GSM8K and MATH) that cover a range of mathematical domains with over 30,000 math problems. MathEval provides comprehensive evaluation across various difficulties and subfields of mathematics (arithmetic, primary and secondary school competition problems, and advanced subfields). Besides evaluation, MathEval also aims to guide developers in further improving the mathematical capabilities of models. You can extend it with new mathematical evaluation datasets as you see fit.

Safety and security benchmarks

PyRIT

PyRIT stands for Python Risk Identification Tool for generative AI. It’s more of a framework than a standalone benchmark, but it’s a useful tool developed by Microsoft.

PyRIT is a tool to evaluate LLM robustness against a range of harm categories. It can be used to identify harm categories including fabricated/ungrounded content (for instance, hallucination), misuse (bias, malware generation, jailbreaking), prohibited content (such as harassment), and privacy harms (identity theft). The tool automates red teaming tasks for foundation models, and thus aims to contribute to the efforts of securing the future of AI.

Purple Llama CyberSecEval (GitHub)

CyberSecEval (a product of Meta’s Purple Llama project) focuses on the cybersecurity of models used in coding. It claims to be the most extensive unified cybersecurity safety benchmark.

CyberSecEval covers two crucial security domains:

  • The likelihood of generating insecure code
  • Compliance when prompted to assist in cyberattacks

It can be used to assess how willing and able LLMs are to assist cyber attackers, safeguarding against misuse. CyberSecEval provides metrics for quantifying the cybersecurity risks associated with LLM-generated code. CyberSecEval 2 is an improvement on the original benchmark that extends the evaluation to prompt injection and code interpreter abuse.
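As a generic illustration (not CyberSecEval's actual detectors), the two headline numbers boil down to simple rates over a set of model responses:

```python
# Generic sketch of the two kinds of metric CyberSecEval reports (illustrative only):
# the share of coding responses flagged as insecure, and the share of attack-themed
# prompts the model complied with rather than refused.
from typing import Callable

def insecure_code_rate(generated_code: list[str], is_insecure: Callable[[str], bool]) -> float:
    """`is_insecure` stands in for a static-analysis check of generated code."""
    return sum(is_insecure(code) for code in generated_code) / len(generated_code)

def compliance_rate(attack_responses: list[str], is_refusal: Callable[[str], bool]) -> float:
    """`is_refusal` stands in for a classifier that detects refusals to assist."""
    return sum(not is_refusal(answer) for answer in attack_responses) / len(attack_responses)
```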

LLM benchmarks for different domains

The above list should help you get started with choosing the right benchmarks to evaluate LLMs for your use case. Whatever the domain or intended purpose, choose the appropriate benchmark(s) to identify the LLM that would work best for you.

Looking for a model to generate software code? Check out the next post in this series that focuses on code generation benchmarks, and see how Symflower generates test code with Claude 3.5 Sonnet.

Make sure you never miss any of our upcoming content by signing up for our newsletter and by following us on X, LinkedIn, or Facebook.
