# ai-peer-review

**Repository Path**: LTCM/ai-peer-review

## Basic Information

- **Project Name**: ai-peer-review
- **Description**: No description available
- **Primary Language**: Python
- **License**: Not specified
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2025-05-22
- **Last Updated**: 2025-05-22

## Categories & Tags

**Categories**: Uncategorized
**Tags**: paper

## README

# AI Peer Review

This package facilitates AI-based peer review of academic papers, particularly in neuroscience. It uses multiple large language models (LLMs) to generate independent reviews of a paper and then creates a meta-review summarizing the key points.

NOTE: All code in this project was AI-generated using Claude Code.

## Features

- Submit papers for review by multiple LLMs
- Generate individual peer reviews from various models
- Create a meta-review analyzing common themes and unique insights
- Generate a concerns table identifying which model found each concern
- Store results in markdown, CSV, and JSON formats

## Supported Models

- GPT-4o (via OpenAI API)
- GPT-4o-mini (via OpenAI API)
- Claude 3.7 Sonnet (via Anthropic API)
- Google Gemini 2.5 Pro (via Google AI API)
- DeepSeek R1 (via Together API)
- Llama 4 Maverick (via Together API)

## Installation

```bash
# Create and activate a virtual environment (recommended)
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install in development mode
pip install -e .
```

## Usage

### API Keys

You can set API keys in two ways:

#### Using the CLI config command

```bash
# Set API keys (recommended)
ai-peer-review config openai "your-openai-key"
ai-peer-review config anthropic "your-anthropic-key"
ai-peer-review config google "your-google-key"
ai-peer-review config together "your-together-ai-key"  # Used for DeepSeek R1 and Llama 4 Maverick
```

Keys are stored in `~/.ai-peer-review/config.json`.
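If you want to verify what the command stored, the file can be inspected with a short Python snippet. The path comes from the README above and the `api_keys` section matches the example under "Customizing Prompts" below; the individual provider key names are an assumption based on the `config` subcommands, so treat this as a sketch rather than a documented schema:

```python
import json
from pathlib import Path

# Inspect which API keys are currently stored (path as documented above).
config_path = Path.home() / ".ai-peer-review" / "config.json"
config = json.loads(config_path.read_text())

# The "api_keys" section matches the config.json example shown later in this
# README; the provider names below are assumed to mirror the `config` subcommands.
for provider in ("openai", "anthropic", "google", "together"):
    key = config.get("api_keys", {}).get(provider)
    print(f"{provider}: {'set' if key else 'missing'}")
```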
#### Using environment variables

Alternatively, you can set environment variables, either by exporting them in your shell or by using a `.env` file.

**Option 1: Export variables in your shell:**

```bash
export OPENAI_API_KEY="your-openai-key"
export ANTHROPIC_API_KEY="your-anthropic-key"
export GOOGLE_API_KEY="your-google-key"
export TOGETHER_API_KEY="your-together-ai-key"  # Used for DeepSeek R1 and Llama 4 Maverick
```

**Option 2: Create a .env file:**

Copy the `.env.example` file to `.env` and fill in your API keys:

```bash
cp .env.example .env
# Edit the .env file with your API keys
```

You can place the `.env` file in:

- The current working directory
- Your home directory at `~/.ai-peer-review/.env`

### Command Line Interface

Review a paper with all available models:

```bash
ai-peer-review review path/to/paper.pdf
```

Specify which models to use:

```bash
ai-peer-review review path/to/paper.pdf --models "gpt4-o1,claude-3.7-sonnet"
```

Specify the output directory:

```bash
ai-peer-review review path/to/paper.pdf --output-dir ./my_reviews
```

Skip meta-review generation:

```bash
ai-peer-review review path/to/paper.pdf --no-meta-review
```

Use a custom configuration file:

```bash
ai-peer-review --config-file /path/to/custom/config.json review path/to/paper.pdf
```

## Prompts and Configuration

The tool uses specific prompts for generating peer reviews and meta-reviews.

### Review Prompt

By default, papers are submitted to the LLMs with the following prompt:

```
You are a neuroscientist and expert in brain imaging who has been asked to provide a peer review for a submitted research paper, which is attached here. Please provide a thorough and critical review of the paper. First provide a summary of the study and its results, and then provide a detailed point-by-point analysis of any flaws in the study.
```

### Meta-Review Prompt

The meta-review is generated using the following prompt:

```
The attached files contain peer reviews of a research article. Please summarize these into a meta-review, highlighting both the common points raised across reviewers as well as any specific concerns that were only raised by some reviewers. Then rank the reviews in terms of their usefulness and identification of critical issues.
```

When generating the meta-review, all model identifiers are removed from the individual reviews to prevent bias.

### Customizing Prompts

You can customize the prompts used by the tool by editing the configuration file:

1. Locate or create the configuration file at `~/.ai-peer-review/config.json`
2. Add or modify the `prompts` section:

```json
{
  "api_keys": { ... },
  "prompts": {
    "system": "Your custom system prompt",
    "review": "Your custom review prompt. Include the {paper_text} placeholder where the paper text should be inserted.",
    "metareview": "Your custom meta-review prompt. Include the {reviews_text} placeholder where the reviews should be inserted."
  }
}
```

The configuration file is created automatically with default prompts if it doesn't exist; you can modify it to suit your needs.

### Using a Custom Configuration File

You can specify a custom configuration file path with the `--config-file` option:

```bash
ai-peer-review --config-file /path/to/custom/config.json review path/to/paper.pdf
```

This lets you maintain multiple configuration files for different purposes or environments. The custom config file is used for all operations in that command session, including loading prompts and API keys.
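To see how the `{paper_text}` and `{reviews_text}` placeholders behave, it helps to think of the prompt strings as ordinary Python format templates. The sketch below is only an illustration of that substitution, not the package's actual internals:

```python
# Illustration of how a custom review prompt's placeholder is filled.
# This is plain Python string formatting, not code from the package itself.
review_template = (
    "You are reviewing a neuroscience paper. "
    "Provide a critical peer review of the following manuscript:\n\n{paper_text}"
)

paper_text = "Title: Example study\nAbstract: ..."  # extracted PDF text would go here
prompt = review_template.format(paper_text=paper_text)
print(prompt)
```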
## Outputs

The tool generates the following outputs in the specified directory (default: `./papers/[paper-name]`):

- `review_[model-name].md` - Individual reviews from each LLM
- `meta_review.md` - Summary and analysis of all reviews
- `concerns_table.csv` - CSV file with the concerns identified by each model
- `results.json` - Structured data containing all reviews, the meta-review, and the reviewer-to-model mapping
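For quick programmatic inspection of a run, `results.json` can be loaded with the standard library. The field names used below (`reviews`, `meta_review`, `reviewer_mapping`) are illustrative guesses rather than documented keys, so check the file produced for your own paper:

```python
import json
from pathlib import Path

# Load the structured results for one reviewed paper.
# NOTE: the key names "reviews", "meta_review", and "reviewer_mapping" are
# illustrative guesses, not documented fields; inspect your own results.json
# to confirm the actual structure.
results = json.loads(Path("papers/my-paper/results.json").read_text())

for reviewer, review in results.get("reviews", {}).items():
    print(reviewer, len(review), "characters")

print(results.get("meta_review", "")[:200])
print(results.get("reviewer_mapping", {}))
```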