# gemm-fp8

**Repository Path**: underdogs/gemm-fp8

## Basic Information

- **Project Name**: gemm-fp8
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: hopper
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2025-03-31
- **Last Updated**: 2025-03-31

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# FP8 GEMM with PyTorch Interface

## Usage

Install the kernels using the following commands:

```bash
git clone https://github.com/IST-DASLab/gemm_fp8.git
cd gemm_fp8
pip install -e .  # or pip install .
```

The kernel can then be used as follows:

```python
import torch
import gemm_fp8

y = gemm_fp8.matmul(a, b, alpha=1.0)
```

where `a` and `b` are the input matrices (in `torch.float8_e4m3fn` format) and `alpha` is a `float` scaling factor.

## Benchmark

Run the following command to benchmark the kernel:

```bash
python benchmark.py
```
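As a minimal end-to-end sketch, the snippet below constructs FP8 inputs on the GPU, calls the kernel, and compares the result against a half-precision reference. The shapes, layouts, and output dtype here are assumptions (the kernel may require particular matrix layouts or dimension multiples); consult `benchmark.py` for the exact calling convention.

```python
import torch
import gemm_fp8

# Illustrative shapes; actual layout/alignment requirements may differ.
M, K, N = 512, 1024, 256

# Build FP8 inputs by down-casting half-precision tensors on the GPU.
a_hp = torch.randn(M, K, device="cuda", dtype=torch.float16)
b_hp = torch.randn(K, N, device="cuda", dtype=torch.float16)
a = a_hp.to(torch.float8_e4m3fn)
b = b_hp.to(torch.float8_e4m3fn)

# FP8 GEMM with a scalar scaling factor applied to the product.
y = gemm_fp8.matmul(a, b, alpha=1.0)

# Rough sanity check against a half-precision reference matmul.
y_ref = a_hp @ b_hp
print((y.float() - y_ref.float()).abs().max())
```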