# πŸ“¦ Docker AI Models Repository

Official model cards and documentation for AI models available on [Docker Hub](https://hub.docker.com/u/ai).

---

## πŸš€ Models Overview

### DeepCoder Preview

![DeepCoder Logo](https://github.com/docker/model-cards/raw/refs/heads/main/logos/agentica-120x-hub@2x.png)

πŸ“Œ **Description:** DeepCoder-14B-Preview is a code-reasoning LLM fine-tuned to scale up to long context lengths.

πŸ“‚ **Model File:** [`ai/deepcoder-preview.md`](ai/deepcoder-preview.md)

🐳 **Docker Hub:** [`docker.io/ai/deepcoder-preview`](https://hub.docker.com/r/ai/deepcoder-preview)

**Source:**
- https://huggingface.co/agentica-org/DeepCoder-14B-Preview

---

### DeepSeek R1 Distill LLaMA

![DeepSeek Logo](https://github.com/docker/model-cards/raw/refs/heads/main/logos/deepseek-120x-hub@2x.svg)

πŸ“Œ **Description:** A distilled LLaMA by DeepSeek: fast and optimized for real-world tasks.

πŸ“‚ **Model File:** [`ai/deepseek-r1-distill-llama.md`](ai/deepseek-r1-distill-llama.md)

🐳 **Docker Hub:** [`docker.io/ai/deepseek-r1-distill-llama`](https://hub.docker.com/r/ai/deepseek-r1-distill-llama)

**Sources:**
- https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B
- https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B

---

### Devstral Small 1.1

![Mistral Logo](https://github.com/docker/model-cards/raw/refs/heads/main/logos/mistral-120x-hub@2x.svg)

πŸ“Œ **Description:** Devstral Small 1.1 is an agentic coding LLM (24B) fine-tuned from Mistral-Small-3.1 with a 128K context window.
πŸ“‚ **Model File:** [`ai/devstral-small.md`](ai/devstral-small.md)

🐳 **Docker Hub:** [`docker.io/ai/devstral-small`](https://hub.docker.com/r/ai/devstral-small)

---

### Gemma 3

![Gemma Logo](https://github.com/docker/model-cards/raw/refs/heads/main/logos/gemma-120x-hub@2x.svg)

πŸ“Œ **Description:** Google's latest Gemma: small yet strong for chat and generation.

πŸ“‚ **Model File:** [`ai/gemma3.md`](ai/gemma3.md)

🐳 **Docker Hub:** [`docker.io/ai/gemma3`](https://hub.docker.com/r/ai/gemma3)

**Source:**
- https://huggingface.co/google/gemma-3-4b-it

---

### Gemma 3 QAT

![Gemma Logo](https://github.com/docker/model-cards/raw/refs/heads/main/logos/gemma-120x-hub@2x.svg)

πŸ“Œ **Description:** Google's latest Gemma, in its QAT (quantization-aware trained) variant.

πŸ“‚ **Model File:** [`ai/gemma3-qat.md`](ai/gemma3-qat.md)

🐳 **Docker Hub:** [`docker.io/ai/gemma3-qat`](https://hub.docker.com/r/ai/gemma3-qat)

---

### Gemma 3N

![Gemma Logo](https://github.com/docker/model-cards/raw/refs/heads/main/logos/gemma-120x-hub@2x.svg)

πŸ“Œ **Description:** Efficient multimodal AI for text, image, audio, and video on low-resource devices.

πŸ“‚ **Model File:** [`ai/gemma3n.md`](ai/gemma3n.md)

🐳 **Docker Hub:** [`docker.io/ai/gemma3n`](https://hub.docker.com/r/ai/gemma3n)

---

### GPT-OSS

![OpenAI Logo](https://github.com/docker/model-cards/raw/refs/heads/main/logos/openai-120x-hub.svg)

πŸ“Œ **Description:** OpenAI's open-weight models, designed for powerful reasoning and agentic tasks.

πŸ“‚ **Model File:** [`ai/gpt-oss.md`](ai/gpt-oss.md)

🐳 **Docker Hub:** [`docker.io/ai/gpt-oss`](https://hub.docker.com/r/ai/gpt-oss)

---

### Granite Embedding Multilingual

![IBM Logo](https://github.com/docker/model-cards/raw/refs/heads/main/logos/ibm-120x-hub.svg)

πŸ“Œ **Description:** Granite Embedding Multilingual is a 278-million-parameter, encoder-only XLM-RoBERTa-style model.
πŸ“‚ **Model File:** [`ai/granite-embedding-multilingual.md`](ai/granite-embedding-multilingual.md)

🐳 **Docker Hub:** [`docker.io/ai/granite-embedding-multilingual`](https://hub.docker.com/r/ai/granite-embedding-multilingual)

---

### Granite 4.0 Micro

![IBM Logo](https://github.com/docker/model-cards/raw/refs/heads/main/logos/ibm-120x-hub.svg)

πŸ“Œ **Description:** A 3B long-context instruct model with RL alignment, instruction following, tool use, and enterprise optimization.

πŸ“‚ **Model File:** [`ai/granite-4.0-micro.md`](ai/granite-4.0-micro.md)

🐳 **Docker Hub:** [`docker.io/ai/granite-4.0-micro`](https://hub.docker.com/r/ai/granite-4.0-micro)

---

### Granite 4.0 H Micro

![IBM Logo](https://github.com/docker/model-cards/raw/refs/heads/main/logos/ibm-120x-hub.svg)

πŸ“Œ **Description:** A 3B long-context instruct model with RL alignment, instruction following, tool calling, and enterprise readiness.

πŸ“‚ **Model File:** [`ai/granite-4.0-h-micro.md`](ai/granite-4.0-h-micro.md)

🐳 **Docker Hub:** [`docker.io/ai/granite-4.0-h-micro`](https://hub.docker.com/r/ai/granite-4.0-h-micro)

---

### Granite 4.0 H Tiny

![IBM Logo](https://github.com/docker/model-cards/raw/refs/heads/main/logos/ibm-120x-hub.svg)

πŸ“Œ **Description:** A 7B long-context instruct model with RL alignment, instruction following, tool use, and enterprise optimization.

πŸ“‚ **Model File:** [`ai/granite-4.0-h-tiny.md`](ai/granite-4.0-h-tiny.md)

🐳 **Docker Hub:** [`docker.io/ai/granite-4.0-h-tiny`](https://hub.docker.com/r/ai/granite-4.0-h-tiny)

---

### Granite 4.0 H Small

![IBM Logo](https://github.com/docker/model-cards/raw/refs/heads/main/logos/ibm-120x-hub.svg)

πŸ“Œ **Description:** A 32B long-context instruct model with RL alignment, instruction following, tool use, and enterprise optimization.
πŸ“‚ **Model File:** [`ai/granite-4.0-h-small.md`](ai/granite-4.0-h-small.md)

🐳 **Docker Hub:** [`docker.io/ai/granite-4.0-h-small`](https://hub.docker.com/r/ai/granite-4.0-h-small)

---

### Llama 3.1

![Meta Logo](https://github.com/docker/model-cards/raw/refs/heads/main/logos/meta-120x-hub@2x.svg)

πŸ“Œ **Description:** Meta's LLaMA 3.1: chat-focused, benchmark-strong, and multilingual-ready.

πŸ“‚ **Model File:** [`ai/llama3.1.md`](ai/llama3.1.md)

🐳 **Docker Hub:** [`docker.io/ai/llama3.1`](https://hub.docker.com/r/ai/llama3.1)

**Source:**
- https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct

---

### Llama 3.2

![Meta Logo](https://github.com/docker/model-cards/raw/refs/heads/main/logos/meta-120x-hub@2x.svg)

πŸ“Œ **Description:** A solid LLaMA 3 update, reliable for coding, chat, and Q&A tasks.

πŸ“‚ **Model File:** [`ai/llama3.2.md`](ai/llama3.2.md)

🐳 **Docker Hub:** [`docker.io/ai/llama3.2`](https://hub.docker.com/r/ai/llama3.2)

**Sources:**
- https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct
- https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct

---

### Llama 3.3

![Meta Logo](https://github.com/docker/model-cards/raw/refs/heads/main/logos/meta-120x-hub@2x.svg)

πŸ“Œ **Description:** The newest LLaMA 3 release, with improved reasoning and generation quality.

πŸ“‚ **Model File:** [`ai/llama3.3.md`](ai/llama3.3.md)

🐳 **Docker Hub:** [`docker.io/ai/llama3.3`](https://hub.docker.com/r/ai/llama3.3)

---

### Magistral Small 3.2

![Mistral Logo](https://github.com/docker/model-cards/raw/refs/heads/main/logos/mistral-120x-hub@2x.svg)

πŸ“Œ **Description:** A 24B multimodal instruction model by Mistral AI, tuned for accuracy, tool use, and fewer repetitions.
πŸ“‚ **Model File:** [`ai/magistral-small-3.2.md`](ai/magistral-small-3.2.md)

🐳 **Docker Hub:** [`docker.io/ai/magistral-small-3.2`](https://hub.docker.com/r/ai/magistral-small-3.2)

---

### Mistral

![Mistral Logo](https://github.com/docker/model-cards/raw/refs/heads/main/logos/mistral-120x-hub@2x.svg)

πŸ“Œ **Description:** An efficient open model with top-tier performance and fast inference.

πŸ“‚ **Model File:** [`ai/mistral.md`](ai/mistral.md)

🐳 **Docker Hub:** [`docker.io/ai/mistral`](https://hub.docker.com/r/ai/mistral)

**Source:**
- https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3

---

### Mistral Nemo

![Mistral Logo](https://github.com/docker/model-cards/raw/refs/heads/main/logos/mistral-120x-hub@2x.svg)

πŸ“Œ **Description:** Mistral fine-tuned via NVIDIA NeMo for smoother enterprise use.

πŸ“‚ **Model File:** [`ai/mistral-nemo.md`](ai/mistral-nemo.md)

🐳 **Docker Hub:** [`docker.io/ai/mistral-nemo`](https://hub.docker.com/r/ai/mistral-nemo)

**Source:**
- https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407

---

### mxbai-embed-large

![Mixedbread Logo](https://github.com/docker/model-cards/raw/refs/heads/main/logos/mixedbread-120x-hub@2x.svg)

πŸ“Œ **Description:** mxbai-embed-large-v1 is a top English embedding model by Mixedbread AI, great for RAG and more.

πŸ“‚ **Model File:** [`ai/mxbai-embed-large.md`](ai/mxbai-embed-large.md)

🐳 **Docker Hub:** [`docker.io/ai/mxbai-embed-large`](https://hub.docker.com/r/ai/mxbai-embed-large)

**Source:**
- https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1

---

### Nomic Embed Text v1.5

![Nomic Logo](https://github.com/docker/model-cards/raw/refs/heads/main/logos/nomic-120x-hub.svg)

πŸ“Œ **Description:** Nomic Embed Text v1.5 is an open-source, fully auditable text embedding model.
πŸ“‚ **Model File:** [`ai/nomic-embed-text-v1.5.md`](ai/nomic-embed-text-v1.5.md)

🐳 **Docker Hub:** [`docker.io/ai/nomic-embed-text-v1.5`](https://hub.docker.com/r/ai/nomic-embed-text-v1.5)

---

### Phi-4

![Microsoft Logo](https://github.com/docker/model-cards/raw/refs/heads/main/logos/phi-120x-hub@2x.svg)

πŸ“Œ **Description:** Microsoft's compact model, surprisingly capable at reasoning and code.

πŸ“‚ **Model File:** [`ai/phi4.md`](ai/phi4.md)

🐳 **Docker Hub:** [`docker.io/ai/phi4`](https://hub.docker.com/r/ai/phi4)

**Source:**
- https://huggingface.co/microsoft/phi-4

---

### Qwen 2.5

![Qwen Logo](https://github.com/docker/model-cards/raw/refs/heads/main/logos/qwen-120x-hub@2x.svg)

πŸ“Œ **Description:** A versatile Qwen update with better language skills and wider support.

πŸ“‚ **Model File:** [`ai/qwen2.5.md`](ai/qwen2.5.md)

🐳 **Docker Hub:** [`docker.io/ai/qwen2.5`](https://hub.docker.com/r/ai/qwen2.5)

**Source:**
- https://huggingface.co/Qwen/Qwen2.5-7B-Instruct

---

### Qwen 3

![Qwen Logo](https://github.com/docker/model-cards/raw/refs/heads/main/logos/qwen-120x-hub@2x.svg)

πŸ“Œ **Description:** Qwen3 is the latest Qwen LLM, built for top-tier coding, math, reasoning, and language tasks.

πŸ“‚ **Model File:** [`ai/qwen3.md`](ai/qwen3.md)

🐳 **Docker Hub:** [`docker.io/ai/qwen3`](https://hub.docker.com/r/ai/qwen3)

---

### Qwen 3 Coder

![Qwen Logo](https://github.com/docker/model-cards/raw/refs/heads/main/logos/qwen-120x-hub@2x.svg)

πŸ“Œ **Description:** Qwen3-Coder is Qwen's new series of coding agent models.

πŸ“‚ **Model File:** [`ai/qwen3-coder.md`](ai/qwen3-coder.md)

🐳 **Docker Hub:** [`docker.io/ai/qwen3-coder`](https://hub.docker.com/r/ai/qwen3-coder)

---

### QwQ

![Qwen Logo](https://github.com/docker/model-cards/raw/refs/heads/main/logos/qwen-120x-hub@2x.svg)

πŸ“Œ **Description:** An experimental Qwen variant: lean, fast, and a bit mysterious.
πŸ“‚ **Model File:** [`ai/qwq.md`](ai/qwq.md)

🐳 **Docker Hub:** [`docker.io/ai/qwq`](https://hub.docker.com/r/ai/qwq)

**Source:**
- https://huggingface.co/Qwen/QwQ-32B

---

### Seed OSS

![ByteDance Logo](https://github.com/docker/model-cards/raw/refs/heads/main/logos/byte-seed-120x.svg)

πŸ“Œ **Description:** Designed for reasoning, agentic, and general capabilities, with versatile developer-friendly features.

πŸ“‚ **Model File:** [`ai/seed-oss.md`](ai/seed-oss.md)

🐳 **Docker Hub:** [`docker.io/ai/seed-oss`](https://hub.docker.com/r/ai/seed-oss)

---

### SmolLM 2

![Huggingface Logo](https://github.com/docker/model-cards/raw/refs/heads/main/logos/hugginfface-120x-hub@2x.svg)

πŸ“Œ **Description:** A tiny LLM built for speed, edge devices, and local development.

πŸ“‚ **Model File:** [`ai/smollm2.md`](ai/smollm2.md)

🐳 **Docker Hub:** [`docker.io/ai/smollm2`](https://hub.docker.com/r/ai/smollm2)

**Sources:**
- https://huggingface.co/HuggingFaceTB/SmolLM2-360M-Instruct
- https://huggingface.co/HuggingFaceTB/SmolLM2-135M-Instruct

---

### SmolLM 3

![Huggingface Logo](https://github.com/docker/model-cards/raw/refs/heads/main/logos/hugginfface-120x-hub@2x.svg)

πŸ“Œ **Description:** SmolLM3 is a 3.1B model for efficient on-device use, with strong performance in chat.

πŸ“‚ **Model File:** [`ai/smollm3.md`](ai/smollm3.md)

🐳 **Docker Hub:** [`docker.io/ai/smollm3`](https://hub.docker.com/r/ai/smollm3)

---

## πŸ› οΈ Tools

### Model Cards CLI

A command-line tool for working with model cards. See [tools/model-cards-cli/README.md](tools/model-cards-cli/README.md) for details.

Key features:

- Update "Available model variants" tables in model card markdown files
- Inspect model repositories to extract metadata
- Upload model overviews to Docker Hub
- Support for custom namespaces and private repositories

## πŸ“ Contributing

To add or update model cards:

1. Create/edit the markdown file in the `ai/` directory
2. Use the Model Cards CLI to update variant tables
3. Submit a pull request

## πŸ“„ License

See the [LICENSE](LICENSE) file for details.
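
## πŸ’‘ Running a Model

The model images above can be pulled and run locally with Docker Model Runner. A minimal sketch, assuming Docker Desktop 4.40+ (or a recent Docker Engine) with Model Runner enabled; the model name is just an example from the catalog above:

```shell
# Pull a small model from the ai/ namespace on Docker Hub
docker model pull ai/smollm2

# List the models available locally
docker model list

# Run a one-shot prompt against the model
docker model run ai/smollm2 "Summarize what a model card is in one sentence."
```

Any `docker.io/ai/...` image listed in this repository can be substituted for `ai/smollm2`.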