# Anthropic API Proxy for Gemini & OpenAI Models 🔄

**Use Anthropic clients (like Claude Code) with Gemini or OpenAI backends.** 🤝

A proxy server that lets you use Anthropic clients with Gemini or OpenAI models via LiteLLM. 🌉

![Anthropic API Proxy](pic.png)

## Quick Start ⚡

### Prerequisites

- OpenAI API key 🔑
- Google AI Studio (Gemini) API key (if using the Google provider) 🔑
- [uv](https://github.com/astral-sh/uv) installed.

### Setup 🛠️

1. **Clone this repository**:
   ```bash
   git clone https://github.com/1rgs/claude-code-openai.git
   cd claude-code-openai
   ```

2. **Install uv** (if you haven't already):
   ```bash
   curl -LsSf https://astral.sh/uv/install.sh | sh
   ```
   *(`uv` will handle dependencies based on `pyproject.toml` when you run the server.)*

3. **Configure environment variables**:
   Copy the example environment file:
   ```bash
   cp .env.example .env
   ```
   Edit `.env` and fill in your API keys and model configuration:

   * `ANTHROPIC_API_KEY`: (Optional) Needed only if proxying *to* Anthropic models.
   * `OPENAI_API_KEY`: Your OpenAI API key (required with the default OpenAI preference, and as the fallback).
   * `GEMINI_API_KEY`: Your Google AI Studio (Gemini) API key (required if `PREFERRED_PROVIDER=google`).
   * `PREFERRED_PROVIDER` (Optional): Set to `openai` (default) or `google`. This determines the primary backend used when mapping `haiku`/`sonnet`.
   * `BIG_MODEL` (Optional): The model to map `sonnet` requests to. Defaults to `gpt-4.1` (if `PREFERRED_PROVIDER=openai`) or `gemini-2.5-pro-preview-03-25`.
   * `SMALL_MODEL` (Optional): The model to map `haiku` requests to. Defaults to `gpt-4.1-mini` (if `PREFERRED_PROVIDER=openai`) or `gemini-2.0-flash`.

   **Mapping logic** (sketched in code just before the Model Mapping section below):
   - If `PREFERRED_PROVIDER=openai` (default), `haiku`/`sonnet` map to `SMALL_MODEL`/`BIG_MODEL` prefixed with `openai/`.
   - If `PREFERRED_PROVIDER=google`, `haiku`/`sonnet` map to `SMALL_MODEL`/`BIG_MODEL` prefixed with `gemini/` *if* those models are in the server's known `GEMINI_MODELS` list (otherwise they fall back to the OpenAI mapping).

4. **Run the server**:
   ```bash
   uv run uvicorn server:app --host 0.0.0.0 --port 8082 --reload
   ```
   *(`--reload` is optional, for development; append `--log-level debug` for verbose logging.)*

### Using with Claude Code 🎮

1. **Install Claude Code** (if you haven't already):
   ```bash
   npm install -g @anthropic-ai/claude-code
   ```

2. **Connect to your proxy**:
   ```bash
   ANTHROPIC_BASE_URL=http://localhost:8082 claude
   ```

3. **That's it!** Your Claude Code client will now use the configured backend models (OpenAI by default) through the proxy. 🎯
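To verify the proxy independently of Claude Code, you can send it a raw Anthropic-style request. Here is a minimal check using only the Python standard library; it assumes the proxy serves Anthropic's standard `/v1/messages` route on port 8082 (the `x-api-key` value is a placeholder, since the real OpenAI/Gemini keys come from `.env`):

```python
import json
import urllib.request

# An Anthropic-format request; any model name containing "sonnet"
# should be remapped to BIG_MODEL by the proxy.
payload = {
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Hello!"}],
}

req = urllib.request.Request(
    "http://localhost:8082/v1/messages",
    data=json.dumps(payload).encode(),
    headers={
        "content-type": "application/json",
        "x-api-key": "dummy",  # placeholder; backend keys are read from .env
        "anthropic-version": "2023-06-01",
    },
)
print(urllib.request.urlopen(req).read().decode())
```

If everything is wired up, the response body comes back in Anthropic's message format even though an OpenAI or Gemini model produced it.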
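Before diving into the full mapping tables below, the routing described in the setup's mapping logic boils down to roughly the following sketch. This is illustrative only; `map_model` and `GEMINI_MODELS` are hypothetical names, not necessarily the proxy's actual internals:

```python
import os

# Known Gemini models (per the supported-models list below).
GEMINI_MODELS = {"gemini-2.5-pro-preview-03-25", "gemini-2.0-flash"}

def map_model(requested: str) -> str:
    provider = os.environ.get("PREFERRED_PROVIDER", "openai")
    big = os.environ.get(
        "BIG_MODEL",
        "gpt-4.1" if provider == "openai" else "gemini-2.5-pro-preview-03-25",
    )
    small = os.environ.get(
        "SMALL_MODEL",
        "gpt-4.1-mini" if provider == "openai" else "gemini-2.0-flash",
    )

    if "haiku" in requested:
        target = small
    elif "sonnet" in requested:
        target = big
    else:
        return requested  # non-aliased model names pass through unchanged

    # A target in the known Gemini list gets the gemini/ prefix;
    # anything else falls back to the openai/ prefix.
    return f"gemini/{target}" if target in GEMINI_MODELS else f"openai/{target}"
```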
## Model Mapping 🗺️

The proxy automatically maps Claude model aliases to OpenAI or Gemini models based on your configuration:

| Claude Model | Default Mapping | When `BIG_MODEL`/`SMALL_MODEL` is a Gemini model |
|--------------|-----------------------|-------------------------|
| haiku | `openai/gpt-4.1-mini` | `gemini/[model-name]` |
| sonnet | `openai/gpt-4.1` | `gemini/[model-name]` |

### Supported Models

#### OpenAI Models

The following OpenAI models are supported with automatic `openai/` prefix handling:

- o3-mini
- o1
- o1-mini
- o1-pro
- gpt-4.5-preview
- gpt-4o
- gpt-4o-audio-preview
- chatgpt-4o-latest
- gpt-4o-mini
- gpt-4o-mini-audio-preview
- gpt-4.1
- gpt-4.1-mini

#### Gemini Models

The following Gemini models are supported with automatic `gemini/` prefix handling:

- gemini-2.5-pro-preview-03-25
- gemini-2.0-flash

### Model Prefix Handling

The proxy automatically adds the appropriate prefix to model names:

- OpenAI models get the `openai/` prefix
- Gemini models get the `gemini/` prefix
- `BIG_MODEL` and `SMALL_MODEL` get the appropriate prefix based on whether they appear in the OpenAI or Gemini model lists

For example:

- `gpt-4o` becomes `openai/gpt-4o`
- `gemini-2.5-pro-preview-03-25` becomes `gemini/gemini-2.5-pro-preview-03-25`
- When `BIG_MODEL` is set to a Gemini model, Claude `sonnet` maps to `gemini/[model-name]`

### Customizing Model Mapping

Control the mapping using environment variables in your `.env` file or directly in the shell:

**Example 1: Default (use OpenAI)**

No changes needed in `.env` beyond the API keys, or ensure:
```dotenv
OPENAI_API_KEY="your-openai-key"
GEMINI_API_KEY="your-google-key" # Needed if PREFERRED_PROVIDER=google
# PREFERRED_PROVIDER="openai" # Optional, it's the default
# BIG_MODEL="gpt-4.1" # Optional, it's the default
# SMALL_MODEL="gpt-4.1-mini" # Optional, it's the default
```

**Example 2: Prefer Google**
```dotenv
GEMINI_API_KEY="your-google-key"
OPENAI_API_KEY="your-openai-key" # Needed for fallback
PREFERRED_PROVIDER="google"
# BIG_MODEL="gemini-2.5-pro-preview-03-25" # Optional, it's the default for Google pref
# SMALL_MODEL="gemini-2.0-flash" # Optional, it's the default for Google pref
```

**Example 3: Use specific OpenAI models**
```dotenv
OPENAI_API_KEY="your-openai-key"
GEMINI_API_KEY="your-google-key"
PREFERRED_PROVIDER="openai"
BIG_MODEL="gpt-4o" # Example specific model
SMALL_MODEL="gpt-4o-mini" # Example specific model
```

## How It Works 🧩

This proxy works by:

1. **Receiving requests** in Anthropic's API format 📥
2. **Translating** the requests to OpenAI format via LiteLLM 🔄
3. **Sending** the translated request to the configured backend (OpenAI or Gemini) 📤
4. **Converting** the response back to Anthropic format 🔄
5. **Returning** the formatted response to the client ✅

A simplified sketch of this flow appears at the end of this README. The proxy handles both streaming and non-streaming responses, maintaining compatibility with all Claude clients. 🌊

## Contributing 🤝

Contributions are welcome! Please feel free to submit a Pull Request. 🎁
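As referenced under How It Works, here is a heavily simplified sketch of the translation flow. It assumes the `litellm` package and reuses the hypothetical `map_model` helper sketched before the Model Mapping section; the real server additionally handles system prompts, tool use, streaming deltas, and error mapping:

```python
import litellm

def handle_anthropic_request(payload: dict) -> dict:
    # Steps 1-2: Anthropic-format request -> OpenAI-style chat messages.
    messages = [
        {"role": m["role"], "content": m["content"]} for m in payload["messages"]
    ]

    # Step 3: send to the mapped backend model through LiteLLM.
    response = litellm.completion(
        model=map_model(payload["model"]),  # hypothetical helper from above
        messages=messages,
        max_tokens=payload.get("max_tokens", 1024),
    )

    # Steps 4-5: convert the OpenAI-style response back to Anthropic's shape.
    return {
        "id": response.id,
        "type": "message",
        "role": "assistant",
        "model": payload["model"],
        "content": [
            {"type": "text", "text": response.choices[0].message.content}
        ],
        "stop_reason": "end_turn",
    }
```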