# workbench-example-phi3-finetune
**Repository Path**: mirrors_NVIDIA/workbench-example-phi3-finetune
## Basic Information
- **Project Name**: workbench-example-phi3-finetune
- **Description**: An NVIDIA AI Workbench example project for finetuning a Phi-3 Mini Model
- **Primary Language**: Unknown
- **License**: Apache-2.0
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2024-08-17
- **Last Updated**: 2026-03-29
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
# Table of Contents
* [Introduction](#nvidia-ai-workbench-introduction)
* [Project Description](#project-description)
* [Sizing Guide](#sizing-guide)
* [Quickstart](#quickstart)
* [Prerequisites](#prerequisites)
* [Tutorial (Desktop App)](#tutorial-desktop-app)
* [Tutorial (CLI-Only)](#tutorial-cli-only)
* [License](#license)
# NVIDIA AI Workbench: Introduction [Open in AI Workbench](https://ngc.nvidia.com/open-ai-workbench/aHR0cHM6Ly9naXRodWIuY29tL05WSURJQS93b3JrYmVuY2gtZXhhbXBsZS1waGkzLWZpbmV0dW5l)
:arrow_down: Download AI Workbench • :book: Read the Docs • :open_file_folder: Explore Example Projects • :rotating_light: Facing Issues? Let Us Know!
## Project Description

The Phi-3-Mini Instruct model is a cost-effective and efficient language model that delivers powerful AI capabilities without the extensive resource requirements of larger models. In this project, we focus on fine-tuning this base model on the GEM/viggo video game dataset, a domain-specific dataset for meaning representation. Meaning representation is the task of expressing natural language in a structured, logical form that machines can understand and manipulate. We show in this project that once fine-tuned to convergence, this model can generate text crucial for downstream tasks such as named entity recognition, semantic parsing, and relation extraction, outperforming its own out-of-the-box capabilities.

* ```phi3-finetune.ipynb```: This notebook provides a sample workflow for fine-tuning an 8-bit quantized Phi-3-Mini Instruct model for meaning representation on the GEM/viggo dataset using Low-Rank Adaptation (LoRA), a popular parameter-efficient fine-tuning method.

| :memo: Remember |
| :---------------------------|
| This project is meant as an example workflow and a starting point; you are free to swap out the dataset, choose a different task, and edit the training prompts as you see fit for your particular use case! |

## Sizing Guide

| GPU VRAM | Example Hardware | Compatible? |
| -------- | ---------------- | ----------- |
| <16 GB | RTX 3080, RTX 3500 Ada | Y |
| 16 GB | RTX 4080 16GB, RTX A4000 | Y |
| 24 GB | RTX 3090/4090, RTX A5000/5500, A10/30 | Y |
| 32 GB | RTX 5000 Ada | Y |
| 40 GB | A100-40GB | Y |
| 48 GB | RTX 6000 Ada, L40/L40S, A40 | Y |
| 80 GB | A100-80GB | Y |
| >80 GB | 8x A100-80GB | Y |

# Quickstart

## Prerequisites

AI Workbench will prompt you to provide a few pieces of information before running any apps in this project. Ensure you have this information ready:

* The location where you would like the Phi-3-Mini Instruct model to live on the underlying **host** system.
* Your Hugging Face API token.

## Tutorial (Desktop App)

If you do not have NVIDIA AI Workbench installed, first complete the installation for AI Workbench [here](https://www.nvidia.com/en-us/deep-learning-ai/solutions/data-science/workbench/). Then,

1. Fork this Project to your own GitHub namespace and copy the link

   ```
   https://github.com/[your_namespace]/
   ```
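To make the meaning-representation task described above concrete, the sketch below shows the kind of instruction prompt a GEM/viggo fine-tuning example might use: the model is asked to convert a natural-language sentence into a single function with attributes and values. This is a hypothetical illustration only; the exact template, wording, and field markers used in ```phi3-finetune.ipynb``` may differ.

```python
from typing import Optional


def build_prompt(target_sentence: str,
                 meaning_representation: Optional[str] = None) -> str:
    """Build an instruction-style prompt for meaning representation.

    When `meaning_representation` is given, the expected output is appended
    (a training example); otherwise the prompt ends at the generation cue
    (an inference example). Template wording here is an assumption, not the
    notebook's actual prompt.
    """
    prompt = (
        "Given a target sentence, construct the underlying meaning "
        "representation of the input sentence as a single function with "
        "attributes and attribute values.\n"
        f"### Target sentence:\n{target_sentence}\n"
        "### Meaning representation:\n"
    )
    if meaning_representation is not None:
        prompt += meaning_representation
    return prompt


# Example pair in the style of the GEM/viggo dataset (values illustrative).
example = build_prompt(
    "Dirt: Showdown is a sport racing game that was released in 2012.",
    "inform(name[Dirt: Showdown], release_year[2012], "
    "genres[driving/racing, sport])",
)
print(example)
```

Because the notebook is positioned as a starting point, swapping in your own dataset mostly amounts to changing this template and the instruction text to match your task.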