# hf_benchmarks

**Repository Path**: mirrors_huggingface/hf_benchmarks

## Basic Information

- **Project Name**: hf_benchmarks
- **Description**: A starter kit for evaluating benchmarks on the 🤗 Hub
- **Primary Language**: Unknown
- **License**: Apache-2.0
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2022-03-24
- **Last Updated**: 2025-12-20

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# Hugging Face Benchmarks

> A toolkit for evaluating benchmarks on the [Hugging Face Hub](https://huggingface.co)

## Hosted benchmarks

The hosted benchmarks are listed in the table below:

| Benchmark | Description | Submission | Leaderboard |
| :---: | :---: | :---: | :---: |
| RAFT | A benchmark to test few-shot learning in NLP | [`ought/raft-submission`](https://huggingface.co/datasets/ought/raft-submission) | [`ought/raft-leaderboard`](https://huggingface.co/spaces/ought/raft-leaderboard) |
| GEM | A large-scale benchmark for natural language generation | [`GEM/submission-form`](https://huggingface.co/spaces/GEM/submission-form) | [`GEM/results`](https://huggingface.co/spaces/GEM/results) |

## Developer installation

Clone the repository and install the requirements:

```
git clone git@github.com:huggingface/hf_benchmarks.git
cd hf_benchmarks
pip install '.[dev]'
```
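To illustrate what "evaluating a benchmark" typically involves, here is a minimal Python sketch that scores a multi-task submission against reference labels and reports a per-task score plus a macro average. The function names and data layout are hypothetical assumptions for illustration; they are not part of the hf_benchmarks API.

```python
# Hypothetical sketch of benchmark scoring: compare a submission's
# predictions against reference labels, task by task.
# NOTE: evaluate_submission / evaluate_benchmark are illustrative names,
# not functions provided by hf_benchmarks.

def evaluate_submission(predictions, references):
    """Return accuracy for one task, given parallel label lists."""
    if len(predictions) != len(references):
        raise ValueError("predictions and references must have the same length")
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)

def evaluate_benchmark(submission, gold):
    """Score every task in a submission and append a macro-average.

    `submission` and `gold` map task names to label lists.
    """
    scores = {
        task: evaluate_submission(submission[task], gold[task])
        for task in gold
    }
    scores["macro_avg"] = sum(scores.values()) / len(scores)
    return scores
```

A leaderboard Space would then render the returned score dictionary; keeping the gold labels server-side is what makes hosted evaluation trustworthy.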