# ragas

Supercharge Your LLM Application Evaluations 🚀


Documentation | Quick start | Join Discord | Blog | Newsletter | Careers

Objective metrics, intelligent test generation, and data-driven insights for LLM apps

Ragas is your ultimate toolkit for evaluating and optimizing Large Language Model (LLM) applications. Say goodbye to time-consuming, subjective assessments and hello to data-driven, efficient evaluation workflows. Don't have a test dataset ready? We also do production-aligned test set generation.

> [!NOTE]
> Need help setting up Evals for your AI application? We'd love to help! We are conducting Office Hours every week. You can sign up [here](https://cal.com/team/vibrantlabs/office-hours).

## Key Features

- 🎯 Objective Metrics: Evaluate your LLM applications with precision using both LLM-based and traditional metrics.
- 🧪 Test Data Generation: Automatically create comprehensive test datasets covering a wide range of scenarios.
- 🔗 Seamless Integrations: Works flawlessly with popular LLM frameworks like LangChain and major observability tools.
- 📊 Build Feedback Loops: Leverage production data to continually improve your LLM applications.

## :shield: Installation

From PyPI:

```bash
pip install ragas
```

Alternatively, from source:

```bash
pip install git+https://github.com/explodinggradients/ragas
```

## :fire: Quickstart

### Clone a Complete Example Project

The fastest way to get started is to use the `ragas quickstart` command:

```bash
# List available templates
ragas quickstart

# Create a RAG evaluation project
ragas quickstart rag_eval

# Create an agent evaluation project
ragas quickstart agent_evals -o ./my-project
```

Available templates:

- `rag_eval` - Evaluate RAG systems
- `agent_evals` - Evaluate AI agents
- `benchmark_llm` - Benchmark and compare LLMs
- `prompt_evals` - Evaluate prompt variations
- `workflow_eval` - Evaluate complex workflows

### Evaluate your LLM App

This is a simple example evaluating a summary for accuracy:

```python
import asyncio

from ragas.llms import llm_factory
from ragas.metrics.collections import AspectCritic

# Set up your LLM
llm = llm_factory("gpt-4o")

# Create a metric
metric = AspectCritic(
    name="summary_accuracy",
    definition="Verify if the summary is accurate and captures key information.",
    llm=llm,
)

# Evaluate
test_data = {
    "user_input": "summarise given text\nThe company reported an 8% rise in Q3 2024, driven by strong performance in the Asian market. Sales in this region have significantly contributed to the overall growth. Analysts attribute this success to strategic marketing and product localization. The positive trend in the Asian market is expected to continue into the next quarter.",
    "response": "The company experienced an 8% increase in Q3 2024, largely due to effective marketing strategies and product adaptation, with expectations of continued growth in the coming quarter.",
}


async def main():
    score = await metric.ascore(
        user_input=test_data["user_input"],
        response=test_data["response"],
    )
    print(f"Score: {score.value}")
    print(f"Reason: {score.reason}")


asyncio.run(main())
```

> **Note**: Make sure your `OPENAI_API_KEY` environment variable is set.

Find the complete [Quickstart Guide](https://docs.ragas.io/en/latest/getstarted/evals).
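Since `ascore` is a coroutine, you can also score several samples concurrently with plain `asyncio`. Here is a minimal sketch, assuming the `metric` object from the example above has already been constructed; the test cases below are invented purely for illustration:

```python
import asyncio

# Hypothetical test cases, made up for illustration only.
test_cases = [
    {
        "user_input": "summarise given text\nRevenue grew 8% in Q3 2024.",
        "response": "Revenue rose 8% in Q3 2024.",
    },
    {
        "user_input": "summarise given text\nThe launch was delayed to May.",
        "response": "The product launch slipped to May.",
    },
]


async def score_all(cases):
    # One ascore() coroutine per sample, awaited concurrently.
    # `metric` is the AspectCritic instance defined in the snippet above.
    return await asyncio.gather(
        *(
            metric.ascore(user_input=c["user_input"], response=c["response"])
            for c in cases
        )
    )


scores = asyncio.run(score_all(test_cases))
for case, score in zip(test_cases, scores):
    print(f"{score.value}: {case['response']}")
```

Concurrency here is bounded only by your LLM provider's rate limits, so for larger datasets consider throttling with an `asyncio.Semaphore` or evaluating in smaller batches.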
## Want help improving your AI application using evals?

Over the past two years, we have seen and helped improve many AI applications using evals. If you want help improving and scaling up your AI application with evals, we'd love to hear from you.

🔗 Book a [slot](https://bit.ly/3EBYq4J) or drop us a line: [founders@explodinggradients.com](mailto:founders@explodinggradients.com).

## 🫂 Community

If you want to get more involved with Ragas, check out our [Discord server](https://discord.gg/5qGUJ6mh7C). It's a fun community where we geek out about LLMs, retrieval, production issues, and more.

## Contributors

```yml
+----------------------------------------------------------------------------+
|     +----------------------------------------------------------------+     |
|     | Developers: Those who built with `ragas`.                      |     |
|     | (You have `import ragas` somewhere in your project)            |     |
|     |     +----------------------------------------------------+     |     |
|     |     | Contributors: Those who make `ragas` better.       |     |     |
|     |     | (You make PRs to this repo)                        |     |     |
|     |     +----------------------------------------------------+     |     |
|     +----------------------------------------------------------------+     |
+----------------------------------------------------------------------------+
```

We welcome contributions from the community! Whether it's bug fixes, feature additions, or documentation improvements, your input is valuable.

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request

## 🔍 Open Analytics

At Ragas, we believe in transparency. We collect minimal, anonymized usage data to improve our product and guide our development efforts.

- ✅ No personal or company-identifying information
- ✅ Open-source data collection [code](./src/ragas/_analytics.py)
- ✅ Publicly available aggregated [data](https://github.com/explodinggradients/ragas/issues/49)

To opt out, set the `RAGAS_DO_NOT_TRACK` environment variable to `true` (for example, `export RAGAS_DO_NOT_TRACK=true` in your shell).

### Cite Us

```bibtex
@misc{ragas2024,
  author       = {ExplodingGradients},
  title        = {Ragas: Supercharge Your LLM Application Evaluations},
  year         = {2024},
  howpublished = {\url{https://github.com/explodinggradients/ragas}},
}
```