# AI-Interview-Code
**Repository Path**: dragon515/AI-Interview-Code

- **License**: Apache-2.0
- **Default Branch**: master
- **Created**: 2025-08-02
- **Last Updated**: 2025-08-02

## README
# AI-Interview-Code
Hand-written coding interview questions for AI large-model (LLM) algorithm roles, as opposed to LeetCode problems. These generally probe a candidate's overall ability more than LeetCode does, and sit closer to real business scenarios and foundational knowledge.
- Feel free to follow my blog, [chaofa用代码打点酱油](https://bruceyuan.com/): content is usually published there first, with a better reading experience.
- You are also welcome to follow the WeChat official account of the same name, **[chaofa用代码打点酱油](https://mp.weixin.qq.com/s/WxLbKvW4_9g0ajQ0wGRruQ)**, to be notified of updates.
## Table of Contents
| Problem | Difficulty | Topic | Written Walkthrough | Video Walkthrough |
| ---- | ---- | ---- | ---- | ---- |
| Hand-write Self-Attention | ⭐⭐⭐ | Attention mechanism | [The Four Levels of Hand-writing Self-Attention](https://bruceyuan.com/hands-on-code/from-self-attention-to-multi-head-self-attention.html#%E9%9D%A2%E8%AF%95%E5%86%99%E6%B3%95-%E5%AE%8C%E6%95%B4%E7%89%88-%E6%B3%A8%E6%84%8F%E6%B3%A8%E9%87%8A) | [Bilibili](https://www.bilibili.com/video/BV19YbFeHETz/) / [YouTube](https://www.youtube.com/watch?v=d_jwwnYCzIg) |
| Hand-write Multi-Head Self-Attention | ⭐⭐⭐ | Attention mechanism | [Hand-writing Multi-Head Self-Attention](https://bruceyuan.com/hands-on-code/from-self-attention-to-multi-head-self-attention.html#%E9%9D%A2%E8%AF%95%E5%86%99%E6%B3%95-%E5%AE%8C%E6%95%B4%E7%89%88-%E6%B3%A8%E6%84%8F%E6%B3%A8%E9%87%8A) | [Bilibili](https://www.bilibili.com/video/BV19mxdeBEbu/) / [YouTube](https://www.youtube.com/watch?v=SsWxatYLB-s) |
| Hand-write Group-Query Attention | ⭐⭐⭐ | Attention mechanism | [Hand-writing the Group Query Attention Component: from MHA to MQA to GQA](https://bruceyuan.com/hands-on-code/hands-on-group-query-attention-and-multi-query-attention.html) | [Bilibili](https://www.bilibili.com/video/BV1ZmqpYfEGY/) / [YouTube](https://www.youtube.com/watch?v=1jBW7qcyd7A) |
| Hand-write a Transformer Decoder (Causal Language Model) | ⭐⭐⭐⭐ | Transformer architecture | [Hand-writing a Transformer Decoder](https://bruceyuan.com/hands-on-code/hands-on-causallm-decoder.html) | [Bilibili](https://www.bilibili.com/video/BV1Nh1QYCEsS/) / [YouTube](https://www.youtube.com/watch?v=yzEotGJaQ74) |
| Compute the parameter count of an LLM (Decoder) | ⭐⭐⭐ | Parameter counting | TODO | [Bilibili](https://www.bilibili.com/video/BV1Zw4ue2ELg/) / [YouTube](https://www.youtube.com/watch?v=q5quYPt2z5s) |
| Estimate GPU memory usage for LLM training and inference | ⭐⭐⭐⭐ | GPU memory usage | [Analyzing GPU Memory Usage in LLM Training and Inference](https://bruceyuan.com/post/llm-train-infer-memoery-usage-calculation.html) | |
| Hand-write AUC | ⭐⭐ | Model evaluation | TODO | |
| Hand-write KMeans | ⭐⭐⭐⭐⭐ | Clustering | TODO | |
| Hand-write Linear Regression | ⭐⭐⭐⭐⭐ | Linear regression | TODO | |
| Hand-write a BPE Tokenizer | ⭐⭐⭐⭐⭐ | Tokenization | TODO | |
| Hand-write LayerNorm | ⭐⭐ | Normalization | TODO | |
| Hand-write BatchNorm | ⭐⭐ | Normalization | TODO | |
| Hand-write Softmax | ⭐⭐ | Activation functions | TODO | |
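To give a flavor of the first entry in the table, here is a minimal sketch of single-head self-attention in NumPy. This is an illustrative example only, not the repository's reference solution (the dimensions and weight initialization are arbitrary assumptions); see the linked walkthrough for the full "four levels" treatment.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Single-head self-attention: softmax(Q K^T / sqrt(d)) V."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v             # project tokens to Q, K, V
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                   # scaled dot-product scores
    scores -= scores.max(axis=-1, keepdims=True)    # subtract max for stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the key axis
    return weights @ v                              # weighted sum of values

# Toy example: 4 tokens with hidden dimension 8 (arbitrary sizes).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w_q = rng.normal(size=(8, 8))
w_k = rng.normal(size=(8, 8))
w_v = rng.normal(size=(8, 8))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # (4, 8): one output vector per input token
```

The interview versions usually build on this shape-level skeleton by adding a causal mask, dropout, and batching.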
## Rarely Asked
| Problem | Difficulty | Topic | Written Walkthrough | Video Walkthrough |
| ---- | ---- | ---- | ---- | ---- |
| Hand-write LoRA | ⭐⭐⭐⭐⭐ | Aimed at understanding LoRA in depth | [LoRA Principles and PyTorch Implementation](https://bruceyuan.com/hands-on-code/hands-on-lora.html) / [OpenBayes](https://openbayes.com/console/bbruceyuan/containers/OPg9Oo99ET6) | [Bilibili](https://www.bilibili.com/video/BV1fHmkYyE2w/) |
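The core idea behind the LoRA entry can be sketched in a few lines of NumPy. This is an assumption-laden illustration, not the linked PyTorch implementation: it shows only the forward pass y = xW + (α/r)·xAB, where W is the frozen pretrained weight and A, B are the trainable low-rank factors (B is zero-initialized, so the adapter starts as a no-op).

```python
import numpy as np

def lora_forward(x, w, a, b, alpha=16):
    """LoRA forward pass: y = x W + (alpha / r) * x A B.

    w is the frozen pretrained weight; a (down-projection) and
    b (up-projection) are the trainable low-rank adapter factors.
    """
    r = a.shape[1]                        # LoRA rank
    return x @ w + (alpha / r) * (x @ a @ b)

# Toy example with arbitrary sizes: 4 inputs, 8 -> 8 layer, rank 2.
rng = np.random.default_rng(0)
d_in, d_out, r = 8, 8, 2
x = rng.normal(size=(4, d_in))
w = rng.normal(size=(d_in, d_out))        # frozen pretrained weight
a = rng.normal(size=(d_in, r)) * 0.01     # small random down-projection
b = np.zeros((r, d_out))                  # zero-init up-projection
y = lora_forward(x, w, a, b)
# Because b is all zeros, the adapter contributes nothing yet: y == x @ w.
```

Training then updates only `a` and `b` (2·d·r parameters per layer instead of d²), which is why LoRA fine-tuning is so memory-cheap.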