# AI-Practices
**Repository Path**: leeolevis/AI-Practices
## Basic Information
- **Project Name**: AI-Practices
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2025-12-21
- **Last Updated**: 2025-12-21
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
## 📋 Table of Contents
Quick Navigation
| Section | Description | Link |
|:-------:|:------------|:----:|
| 🎯 | **Overview** - Project overview & research background | [Jump](#-overview) |
| 🏗️ | **Architecture** - System architecture & module design | [Jump](#️-architecture) |
| 📚 | **Curriculum** - Nine core learning modules | [Jump](#-curriculum) |
| 🛠️ | **Tech Stack** - Technology stack & toolchain | [Jump](#️-tech-stack) |
| 🚀 | **Quick Start** - Environment setup & launch | [Jump](#-quick-start) |
| 📊 | **Results** - Experimental results & competition record | [Jump](#-results) |
| 📄 | **Citation** - How to cite this project | [Jump](#-citation) |
## 🎯 Overview

> Reproducible experiments in Jupyter notebooks · core learning modules · production Python code · competition medals
### Research Background
> **AI-Practices** is a systematic, engineering-oriented platform for learning and researching artificial intelligence. Built around a **Progressive Learning Framework**, it offers researchers, engineers, and learners a complete path through the AI technology stack.
### Methodology
This project follows a three-part design philosophy of **"theory-driven, practice-based, engineering-oriented"**, building a complete learning loop from theory to real-world practice:
| Phase | Principle | Method | Output | Goal |
|:-----:|:----------|:-------|:-------|:----:|
| **I** | **Theory First** | Math derivation + Algorithm analysis | Theory Notes | 🎯 |
| **II** | **From Scratch** | NumPy implementation from scratch | Core Code | 🔧 |
| **III** | **Framework** | PyTorch / TensorFlow engineering | Production Code | ⚡ |
| **IV** | **Practice** | Kaggle + Real-world Projects | Complete Solutions | 🏆 |
**Core Advantages:**
- 🔄 **Progressive Learning** — Each phase builds upon the previous, ensuring knowledge continuity
- 🧠 **Theory + Practice** — Not just "How", but "Why" — develop independent problem-solving skills
- 🏭 **Engineering Mindset** — From academic research to industrial deployment
## 🏗️ Architecture
### Module Dependencies
> Module dependencies follow the **progressive learning path**: each phase builds on the one before it.
| Phase | Module | Prerequisites | Core Skills Acquired |
|:-----:|:-------|:--------------|:--------------------|
| **Ⅰ** | 📚 **01-Foundations** | Python, NumPy | Classic ML algorithms, mathematical foundations, model evaluation |
| **Ⅱ** | 🧠 **02-Neural Networks** | 01-Foundations | Backpropagation, optimizers, regularization techniques |
| **Ⅱ** | 👁️ **03-Computer Vision** | 02-Neural Networks | CNN architectures, transfer learning, image processing |
| **Ⅱ** | 📝 **04-Sequence Models** | 02-Neural Networks | RNN/LSTM, attention, Transformer |
| **Ⅲ** | ⚡ **05-Advanced Topics** | 03-CV, 04-Seq | Distributed training, hyperparameter optimization, model deployment |
| **Ⅲ** | 🎨 **06-Generative Models** | 05-Advanced | VAE, GAN, diffusion models |
| **Ⅲ** | 🎮 **07-Reinforcement Learning** | 05-Advanced | Value functions, policy gradients, Actor-Critic |
| **Ⅳ** | 🏆 **09-Practical Projects** | 03-CV, 04-Seq, 06-Gen | End-to-end projects, Kaggle competition practice |
| **—** | 📖 **08-Theory Notes** | — | Mathematical derivation reference (consult anytime) |
### Directory Structure
```
AI-Practices/
├── 📚 01-foundations/             # ML fundamentals
├── 🧠 02-neural-networks/         # Deep learning core
├── 👁️ 03-computer-vision/         # CNN, ViT, detection
├── 📝 04-sequence-models/         # RNN, Transformer, LLM
├── ⚡ 05-advanced-topics/          # Optimization, deployment
├── 🎨 06-generative-models/       # VAE, GAN, Diffusion
├── 🎮 07-reinforcement-learning/  # DQN, PPO, SAC
├── 📖 08-theory-notes/            # Math & optimization notes
├── 🏆 09-practical-projects/      # Kaggle & industry projects
└── 🔧 utils/                      # Data, viz, metrics utilities
```
📂 Expand full directory structure
```
AI-Practices/
│
├── 📚 01-foundations/                      # ML fundamentals (75+ files)
│   ├── 01-training-models/                 # Training models: gradient descent, regularization, batch learning
│   ├── 02-classification/                  # Classification: Logistic Regression, SVM
│   ├── 03-support-vector-machines/         # SVMs: kernel trick, soft margin
│   ├── 04-decision-trees/                  # Decision trees: CART, pruning strategies
│   ├── 05-ensemble-learning/               # Ensemble learning: Bagging, Boosting, Stacking
│   ├── 06-dimensionality-reduction/        # Dimensionality reduction: PCA, t-SNE, UMAP
│   ├── 07-unsupervised-learning/           # Unsupervised learning: K-Means, DBSCAN, GMM
│   └── 08-end-to-end-project/              # End-to-end project: full ML pipeline
│
├── 🧠 02-neural-networks/                  # Neural networks & deep learning (42+ files)
│   ├── 01-keras-introduction/              # Keras intro: Sequential, Functional API
│   ├── 02-training-deep-networks/          # Training techniques: BatchNorm, Dropout, Residual
│   ├── 03-custom-models-training/          # Custom models: Layer, Loss, Training Loop
│   └── 04-data-loading-preprocessing/      # Data handling: TFRecord, Data Pipeline
│
├── 👁️ 03-computer-vision/                  # Computer vision (37k+ files)
│   ├── 01-cnn-basics/                      # CNN basics: convolution, pooling, activations
│   ├── 02-classic-architectures/           # Classic architectures: LeNet, AlexNet, VGG, ResNet
│   ├── 03-transfer-learning/               # Transfer learning: feature extraction, fine-tuning
│   └── 04-visualization/                   # Visualization: Grad-CAM, feature maps
│
├── 📝 04-sequence-models/                  # Sequence models & NLP
│   ├── 01-rnn-basics/                      # RNN basics: recurrent network fundamentals
│   ├── 02-lstm-gru/                        # LSTM/GRU: long short-term memory networks
│   ├── 03-text-processing/                 # Text processing: tokenization, embeddings
│   └── 04-cnn-for-sequences/               # CNNs for sequences: 1D convolutions
│
├── ⚡ 05-advanced-topics/                   # Advanced topics
│   ├── 01-functional-api/                  # Functional API: building complex models
│   ├── 02-callbacks-tensorboard/           # Callbacks & monitoring: TensorBoard, EarlyStopping
│   └── 03-model-optimization/              # Model optimization: quantization, pruning, distillation
│
├── 🎨 06-generative-models/                # Generative models
│   ├── 02-gans/                            # GANs: DCGAN, WGAN, StyleGAN
│   ├── 04-text-generation/                 # Text generation: RNN, Transformer
│   └── 05-deepdream/                       # DeepDream: network visualization
│
├── 🎮 07-reinforcement-learning/           # Reinforcement learning (542+ files)
│   ├── 01-mdp-basics/                      # MDP basics: Markov decision processes
│   ├── 02-q-learning/                      # Q-Learning: value iteration, policy iteration
│   ├── 03-deep-q-learning/                 # Deep Q-Learning: DQN, Double DQN
│   ├── 03-deep-rl/                         # Deep RL: algorithms & architectures
│   ├── 04-policy-gradient/                 # Policy gradients: REINFORCE, PPO, A3C
│   ├── 马尔科夫决策过程/                    # MDP theory in depth
│   ├── 时序差分学习/                        # TD learning: SARSA, Q-Learning
│   ├── 深度Q学习的变体/                     # DQN variants: Rainbow, Dueling DQN
│   ├── 策略梯度/                           # Policy gradients in depth
│   ├── 策略搜索/                           # Policy search methods
│   ├── 神经网络策略/                        # Neural network policies
│   ├── 评估动作-信用分配问题/               # Credit assignment problem
│   ├── 学习优化奖励/                        # Reward shaping
│   └── 流行强化学习算法概述/                # Popular RL algorithms overview
│
├── 📖 08-theory-notes/                     # Theory reference handbook (16+ files)
│   ├── QUICK-REFERENCE.md                  # ⭐ Activation & loss function cheat sheet
│   ├── ARCHITECTURE-HYPERPARAMETER-TUNING.md  # ⭐ Architecture selection & hyperparameter tuning
│   ├── MODEL-SELECTION-TROUBLESHOOTING.md  # ⭐ Model selection & troubleshooting
│   ├── activation-functions/               # Activation functions in depth: 30+ functions
│   ├── loss-functions/                     # Loss functions: classification/regression/ranking
│   └── architectures/                      # Architecture notes: CNN, RNN, Transformer
│
├── 🏆 09-practical-projects/               # Practical projects (566+ files)
│   ├── 01-ml-basics/                       # ML basics: Titanic, Otto
│   ├── 02-computer-vision/                 # CV projects: MNIST, image classification
│   ├── 03-nlp/                             # NLP projects: sentiment, NER, text classification
│   ├── 04-time-series/                     # Time series: temperature prediction
│   └── 05-kaggle-competitions/             # Kaggle: gold medal solutions
│
└── 🔧 utils/                               # Utilities
    ├── data/                               # Data processing tools
    ├── visualization/                      # Visualization tools
    └── metrics/                            # Evaluation metric tools
```
### 💻 Core Implementation
> The code below illustrates the project's engineering standards: **type safety**, **modular design**, and **production-grade practice**.
🔥 Multi-Head Self-Attention (PyTorch)
```python
"""
Multi-Head Self-Attention Implementation
Reference: Vaswani et al., "Attention Is All You Need" (NeurIPS 2017)
"""
from __future__ import annotations

import math
from typing import Optional, Tuple

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch import Tensor


class MultiHeadSelfAttention(nn.Module):
    """
    Scaled Dot-Product Multi-Head Self-Attention mechanism.

    Implements the attention formula:
        Attention(Q, K, V) = softmax(QK^T / √d_k) V

    Args:
        d_model: Model embedding dimension
        n_heads: Number of parallel attention heads
        dropout: Dropout probability for attention weights
        bias: Whether to include bias in linear projections
    """

    def __init__(
        self,
        d_model: int = 512,
        n_heads: int = 8,
        dropout: float = 0.1,
        bias: bool = True,
    ) -> None:
        super().__init__()
        assert d_model % n_heads == 0, \
            f"d_model ({d_model}) must be divisible by n_heads ({n_heads})"

        self.d_model = d_model
        self.n_heads = n_heads
        self.d_k = d_model // n_heads  # Dimension per head
        self.scale = math.sqrt(self.d_k)

        # Fused QKV projection for efficiency
        self.qkv_proj = nn.Linear(d_model, 3 * d_model, bias=bias)
        self.out_proj = nn.Linear(d_model, d_model, bias=bias)
        self.dropout = nn.Dropout(dropout)

        self._init_weights()

    def _init_weights(self) -> None:
        """Xavier uniform initialization for stable training."""
        nn.init.xavier_uniform_(self.qkv_proj.weight)
        nn.init.xavier_uniform_(self.out_proj.weight)
        if self.qkv_proj.bias is not None:
            nn.init.zeros_(self.qkv_proj.bias)
            nn.init.zeros_(self.out_proj.bias)

    def forward(
        self,
        x: Tensor,
        mask: Optional[Tensor] = None,
        return_attention: bool = False,
    ) -> Tuple[Tensor, Optional[Tensor]]:
        """
        Forward pass of multi-head self-attention.

        Args:
            x: Input tensor of shape (batch, seq_len, d_model)
            mask: Optional attention mask (batch, 1, 1, seq_len) or
                (batch, 1, seq_len, seq_len)
            return_attention: Whether to return attention weights

        Returns:
            output: Attended output of shape (batch, seq_len, d_model)
            attn_weights: Attention weights if return_attention=True, else None
        """
        B, L, _ = x.shape

        # Fused QKV projection: (B, L, 3*d_model) -> 3 x (B, n_heads, L, d_k)
        qkv = self.qkv_proj(x).reshape(B, L, 3, self.n_heads, self.d_k)
        qkv = qkv.permute(2, 0, 3, 1, 4)  # (3, B, n_heads, L, d_k)
        q, k, v = qkv.unbind(0)

        # Scaled dot-product attention: softmax(QK^T / √d_k) V
        attn_scores = torch.matmul(q, k.transpose(-2, -1)) / self.scale
        if mask is not None:
            attn_scores = attn_scores.masked_fill(mask == 0, float("-inf"))

        attn_weights = F.softmax(attn_scores, dim=-1)
        attn_weights = self.dropout(attn_weights)

        # Apply attention to values
        attn_output = torch.matmul(attn_weights, v)

        # Reshape and project: (B, n_heads, L, d_k) -> (B, L, d_model)
        attn_output = attn_output.transpose(1, 2).reshape(B, L, self.d_model)
        output = self.out_proj(attn_output)

        return output, (attn_weights if return_attention else None)
```
🎮 PPO Trainer (Reinforcement Learning)
```python
"""
Proximal Policy Optimization (PPO) Trainer
Reference: Schulman et al., "Proximal Policy Optimization Algorithms" (2017)
"""
from __future__ import annotations

from dataclasses import dataclass
from typing import Dict, Iterator, Tuple

import torch
import torch.nn as nn
from torch import Tensor
from torch.distributions import Categorical


@dataclass
class PPOConfig:
    """PPO hyperparameters with sensible defaults."""
    gamma: float = 0.99          # Discount factor
    gae_lambda: float = 0.95     # GAE parameter
    clip_epsilon: float = 0.2    # PPO clipping range
    entropy_coef: float = 0.01   # Entropy bonus coefficient
    value_coef: float = 0.5     # Value loss coefficient
    max_grad_norm: float = 0.5  # Gradient clipping threshold
    n_epochs: int = 4            # PPO update epochs
    batch_size: int = 64         # Mini-batch size


class PPOTrainer:
    """
    Production-ready PPO trainer with GAE and gradient clipping.

    Implements the clipped surrogate objective:
        L^CLIP(θ) = E[min(r_t(θ)A_t, clip(r_t(θ), 1-ε, 1+ε)A_t)]
    where r_t(θ) = π_θ(a_t|s_t) / π_θ_old(a_t|s_t)
    """

    def __init__(
        self,
        policy: nn.Module,
        optimizer: torch.optim.Optimizer,
        config: PPOConfig = PPOConfig(),
    ) -> None:
        self.policy = policy
        self.optimizer = optimizer
        self.config = config

    def compute_gae(
        self,
        rewards: Tensor,
        values: Tensor,
        dones: Tensor,
        next_value: Tensor,
    ) -> Tuple[Tensor, Tensor]:
        """
        Compute Generalized Advantage Estimation (GAE).

        GAE(γ,λ) = Σ_{l=0}^{∞} (γλ)^l δ_{t+l}
        where δ_t = r_t + γV(s_{t+1}) - V(s_t)
        """
        T = len(rewards)
        advantages = torch.zeros_like(rewards)
        last_gae = 0.0
        for t in reversed(range(T)):
            next_val = next_value if t == T - 1 else values[t + 1]
            delta = rewards[t] + self.config.gamma * next_val * (1 - dones[t]) - values[t]
            advantages[t] = last_gae = delta + \
                self.config.gamma * self.config.gae_lambda * (1 - dones[t]) * last_gae
        returns = advantages + values
        return advantages, returns

    def update(self, rollout_buffer: Dict[str, Tensor]) -> Dict[str, float]:
        """
        Perform PPO update with clipped objective.

        Returns:
            Dictionary of training metrics
        """
        states = rollout_buffer["states"]
        actions = rollout_buffer["actions"]
        old_log_probs = rollout_buffer["log_probs"]
        advantages = rollout_buffer["advantages"]
        returns = rollout_buffer["returns"]

        # Normalize advantages for stable training
        advantages = (advantages - advantages.mean()) / (advantages.std() + 1e-8)

        metrics = {"policy_loss": 0.0, "value_loss": 0.0, "entropy": 0.0}

        for _ in range(self.config.n_epochs):
            for batch in self._get_batches(len(states)):
                # Forward pass
                logits, values = self.policy(states[batch])
                dist = Categorical(logits=logits)
                new_log_probs = dist.log_prob(actions[batch])
                entropy = dist.entropy().mean()

                # PPO clipped objective
                ratio = torch.exp(new_log_probs - old_log_probs[batch])
                surr1 = ratio * advantages[batch]
                surr2 = torch.clamp(
                    ratio,
                    1 - self.config.clip_epsilon,
                    1 + self.config.clip_epsilon,
                ) * advantages[batch]
                policy_loss = -torch.min(surr1, surr2).mean()
                value_loss = nn.functional.mse_loss(values.squeeze(), returns[batch])

                # Combined loss with entropy bonus
                loss = (
                    policy_loss
                    + self.config.value_coef * value_loss
                    - self.config.entropy_coef * entropy
                )

                # Optimization step with gradient clipping
                self.optimizer.zero_grad()
                loss.backward()
                nn.utils.clip_grad_norm_(self.policy.parameters(), self.config.max_grad_norm)
                self.optimizer.step()

                # Accumulate metrics
                metrics["policy_loss"] += policy_loss.item()
                metrics["value_loss"] += value_loss.item()
                metrics["entropy"] += entropy.item()

        # Average metrics
        n_updates = self.config.n_epochs * (len(states) // self.config.batch_size)
        return {k: v / n_updates for k, v in metrics.items()}

    def _get_batches(self, dataset_size: int) -> Iterator[Tensor]:
        """Generate random mini-batch indices."""
        indices = torch.randperm(dataset_size)
        for start in range(0, dataset_size, self.config.batch_size):
            yield indices[start:start + self.config.batch_size]
```
## 📚 Curriculum
📘 01 - Foundations
> Build a solid theoretical foundation in machine learning and master the principles and implementation of classic algorithms
| Topic | Algorithm | Complexity | Key Concepts |
|:------|:----------|:-----------|:-------------|
| Linear Models | OLS, Ridge, Lasso | O(nd²) | Regularization, Bias-Variance |
| Classification | Logistic, SVM | O(n²) ~ O(n³) | Maximum Margin, Kernel Trick |
| Tree Methods | CART, RF, GBDT | O(n log n) | Information Gain, Ensemble |
| Dimensionality | PCA, t-SNE, UMAP | O(n²) ~ O(n³) | Manifold Learning |
🧠 02 - Neural Networks
> Master core deep learning techniques and training methods
| Topic | Techniques | Description |
|:------|:-----------|:------------|
| Initialization | Xavier, He, Orthogonal | Weight initialization strategies |
| Normalization | BatchNorm, LayerNorm, GroupNorm | Normalization techniques |
| Regularization | Dropout, DropConnect, Stochastic Depth | Regularization methods |
| Optimization | SGD+Momentum, Adam, AdamW, LAMB | Optimization algorithms |
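The initialization row is worth a numerical check: He (Kaiming) initialization draws weights with std $\sqrt{2/\text{fan\_in}}$ so that ReLU layers preserve the signal's scale. A small NumPy sketch (the sizes are arbitrary, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
fan_in = 512
x = rng.normal(size=(10_000, fan_in))  # unit-variance inputs

# He initialization: std = sqrt(2 / fan_in), designed for ReLU
W = rng.normal(scale=np.sqrt(2.0 / fan_in), size=(fan_in, fan_in))
h = np.maximum(x @ W, 0.0)  # ReLU activation

# Pre-activation variance is ~2; ReLU zeroes half the mass, so the
# post-activation second moment stays near 1.0 and signals neither
# explode nor vanish as depth grows
print(round(float(np.mean(h**2)), 2))
```

Repeating the same experiment with a naive `scale=1.0` would inflate the second moment by a factor of `fan_in`, which is why the initialization choice matters before any training happens.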
👁️ 03 - Computer Vision
> A systematic study of CNN architecture evolution and vision tasks
**Architecture evolution**:
```
LeNet (1998) → AlexNet (2012) → VGG (2014) → GoogLeNet (2014)
↓
ResNet (2015) → DenseNet (2016) → EfficientNet (2019) → ViT (2020)
```
📝 04 - Sequence Models
> Sequence modeling from RNNs to Transformers, with a close look at the mathematics of attention
**Scaled Dot-Product Attention** *(Vaswani et al., NeurIPS 2017)*:
$$\text{Attention}(Q, K, V) = \text{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V$$
where $Q \in \mathbb{R}^{n \times d_k}$, $K \in \mathbb{R}^{m \times d_k}$, $V \in \mathbb{R}^{m \times d_v}$. The scaling factor $\sqrt{d_k}$ keeps the dot products from growing so large that the softmax saturates and its gradients vanish.
**Multi-Head Attention** captures information from different subspaces by computing several attention heads in parallel:
$$\text{MultiHead}(Q, K, V) = \text{Concat}(\text{head}_1, ..., \text{head}_h)W^O$$
$$\text{where } \text{head}_i = \text{Attention}(QW_i^Q, KW_i^K, VW_i^V)$$
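The single-head formula above translates almost line for line into NumPy. A minimal sketch with illustrative shapes ($n=3$ queries, $m=5$ keys/values):

```python
import numpy as np

def softmax(z: np.ndarray, axis: int = -1) -> np.ndarray:
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k))  # (n, m), rows sum to 1
    return weights @ V                          # (n, d_v)

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 8))    # n = 3 queries, d_k = 8
K = rng.normal(size=(5, 8))    # m = 5 keys
V = rng.normal(size=(5, 16))   # m = 5 values, d_v = 16
out = attention(Q, K, V)
print(out.shape)  # (3, 16)
```

Each output row is a convex combination of the value rows; the `MultiHeadSelfAttention` module earlier in this README does the same computation batched over heads.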
⚡ 05 - Advanced Topics
> Deep learning engineering practice and model optimization techniques
| Topic | Techniques | Description |
|:------|:-----------|:------------|
| Functional API | Complex Models | Multi-input/multi-output, shared layers, residual connections |
| Callbacks | Training Control | EarlyStopping, ModelCheckpoint, ReduceLROnPlateau |
| TensorBoard | Visualization | Training monitoring, model graphs, embedding visualization |
| Optimization | Model Compression | Quantization, pruning, knowledge distillation |
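The EarlyStopping callback in the table is simple enough to sketch from scratch. This is a hypothetical minimal version of the idea (not the Keras API): stop once the monitored validation loss has failed to improve by `min_delta` for `patience` consecutive checks.

```python
class EarlyStopping:
    """Minimal early-stopping sketch: signal a stop when the monitored
    metric has not improved for `patience` consecutive checks."""

    def __init__(self, patience: int = 3, min_delta: float = 0.0) -> None:
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")
        self.wait = 0

    def step(self, val_loss: float) -> bool:
        """Record one validation loss; return True when training should stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss  # improvement: remember it, reset the counter
            self.wait = 0
        else:
            self.wait += 1        # no improvement this check
        return self.wait >= self.patience

stopper = EarlyStopping(patience=2)
losses = [1.0, 0.8, 0.79, 0.81, 0.82]  # validation loss plateaus after epoch 2
stops = [stopper.step(loss) for loss in losses]
print(stops)  # [False, False, False, False, True]
```

Real frameworks add restore-best-weights and metric-direction options on top, but the core state machine is just this counter.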
🎨 06 - Generative Models
> Exploring the frontier of generative AI
| Model Type | Algorithms | Applications |
|:-----------|:-----------|:-------------|
| GAN | DCGAN, WGAN, StyleGAN | Image generation, style transfer |
| Text Generation | RNN, LSTM, Transformer | Text generation, dialogue systems |
| Neural Art | DeepDream, Style Transfer | Art creation, image stylization |
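For the GAN row, the training signal is two small loss functions over the discriminator's logits. A NumPy sketch of the standard non-saturating formulation (function names are illustrative; real training would backpropagate through network parameters):

```python
import numpy as np

def sigmoid(z: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-z))

def d_loss(real_logits: np.ndarray, fake_logits: np.ndarray) -> float:
    """Discriminator objective: maximize log D(x) + log(1 - D(G(z)))."""
    return float(-np.mean(np.log(sigmoid(real_logits)))
                 - np.mean(np.log(1.0 - sigmoid(fake_logits))))

def g_loss(fake_logits: np.ndarray) -> float:
    """Non-saturating generator objective: maximize log D(G(z))."""
    return float(-np.mean(np.log(sigmoid(fake_logits))))

# At the classic equilibrium D outputs 0.5 everywhere (logits = 0):
zeros = np.zeros(8)
print(round(d_loss(zeros, zeros), 4))  # 2*ln(2) ≈ 1.3863
print(round(g_loss(zeros), 4))         # ln(2)  ≈ 0.6931
```

The non-saturating generator loss (rather than minimizing $\log(1 - D(G(z)))$) is the standard trick to keep generator gradients alive early in training, when the discriminator easily rejects fakes.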
🎮 07 - Reinforcement Learning
> A complete learning path from MDP fundamentals to deep reinforcement learning (542+ files)
**Core algorithm families**:
| Category | Algorithms | Key Features |
|:---------|:-----------|:-------------|
| **Value-Based** | Q-Learning, DQN, Double DQN, Dueling DQN | Value-function approximation, experience replay |
| **Policy-Based** | REINFORCE, PPO, A3C, TRPO | Policy gradients, advantage functions |
| **Actor-Critic** | A2C, A3C, SAC, TD3 | Combines value functions with a policy |
| **Model-Based** | Dyna-Q, World Models, MuZero | Environment modeling, planning |
**🎯 Core theory**:
**Bellman Optimality Equation**, the theoretical cornerstone of reinforcement learning:
$$Q^*(s, a) = \mathbb{E}\left[r + \gamma \max_{a'} Q^*(s', a') \mid s, a\right]$$
**PPO Clipped Objective** *(Schulman et al., 2017)*, a stable policy-gradient method:
$$L^{CLIP}(\theta) = \mathbb{E}_t\left[\min\left(r_t(\theta)\hat{A}_t, \text{clip}(r_t(\theta), 1-\epsilon, 1+\epsilon)\hat{A}_t\right)\right]$$
where $r_t(\theta) = \frac{\pi_\theta(a_t|s_t)}{\pi_{\theta_{old}}(a_t|s_t)}$ is the importance-sampling ratio and $\hat{A}_t$ is the advantage estimate.
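The Bellman optimality equation above becomes an algorithm the moment it is used as an update target: tabular Q-learning moves $Q(s,a)$ a step toward $r + \gamma \max_{a'} Q(s', a')$. A minimal sketch on a hypothetical two-state toy MDP (all names and numbers are illustrative):

```python
import numpy as np

def q_update(Q: np.ndarray, s: int, a: int, r: float, s_next: int,
             alpha: float = 0.5, gamma: float = 0.9) -> None:
    """One Q-learning step toward the Bellman optimality target:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])

# Toy MDP: taking action 0 in state 0 always yields reward 1 and stays in state 0,
# so the optimal value is the discounted sum 1 / (1 - gamma) = 10
Q = np.zeros((2, 2))
for _ in range(300):
    q_update(Q, s=0, a=0, r=1.0, s_next=0)
print(round(float(Q[0, 0]), 2))  # 10.0
```

The same backup, with the table replaced by a neural network and the target stabilized by a frozen copy, is DQN; the PPO trainer earlier in this README takes the policy-gradient route instead.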
**Learning resources**:
- 📚 Bilingual (Chinese & English) knowledge base
- 🔬 Classic algorithms implemented from scratch
- 🎯 Hands-on practice in Gymnasium environments
- 📊 Complete experiments and visualizations
📖 08 - Theory Notes
> A high-density cheat-sheet library for deep learning theory (4000+ lines of notes)
**Three quick-reference cards**:
| Document | Content | Usage Time |
|:---------|:--------|:-----------|
| **QUICK-REFERENCE.md** | Activation & loss function cheat sheet | 5 min |
| **ARCHITECTURE-HYPERPARAMETER-TUNING.md** | Architecture selection & hyperparameter tuning guide | 30 min |
| **MODEL-SELECTION-TROUBLESHOOTING.md** | Model selection & troubleshooting decision trees | As needed |
**Highlights**:
- ✅ **One-line selection guides** for quick decisions
- ✅ **Comparison matrices** for side-by-side evaluation
- ✅ **Decision trees** for systematic choices
- ✅ **Code examples** in PyTorch
**Coverage**:
- 30+ activation functions explained in depth
- Classification, regression, and ranking loss functions
- Decision trees for network architecture selection
- Hyperparameter tuning strategies
- Diagnosing and fixing common training problems
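A cheat sheet of activation functions reduces, in code, to a table of small formulas that can be spot-checked numerically. A hypothetical miniature version in NumPy (three entries standing in for the 30+ covered in the notes):

```python
import numpy as np

# A tiny illustrative "quick reference": name -> formula
ACTIVATIONS = {
    "relu":    lambda x: np.maximum(x, 0.0),
    "sigmoid": lambda x: 1.0 / (1.0 + np.exp(-x)),
    # GELU, tanh approximation (Hendrycks & Gimpel, 2016)
    "gelu":    lambda x: 0.5 * x * (1.0 + np.tanh(
                   np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3))),
}

x = np.array([-2.0, 0.0, 2.0])
for name, fn in ACTIVATIONS.items():
    print(name, np.round(fn(x), 3))
```

Evaluating each function at a few reference points (e.g. all three are 0 at the origin except sigmoid, which is 0.5) is a quick sanity check when transcribing formulas into a new framework.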
**Tech**: Pure theory + PyTorch code examples
🏆 09 - Practical Projects
```
09-practical-projects/
├── 📊 01-ml-basics/                        # ML basics
│   ├── titanic-survival-xgboost/           # Titanic survival prediction
│   └── otto-classification/                # Otto product multi-class classification
│
├── 👁️ 02-computer-vision/                  # CV projects
│   └── mnist-cnn/                          # MNIST handwritten digit recognition
│
├── 📝 03-nlp/                              # NLP projects
│   ├── sentiment-analysis-lstm/            # LSTM sentiment analysis
│   ├── transformer-text-classification/    # Transformer text classification
│   └── transformer-ner/                    # Named entity recognition
│
├── 📈 04-time-series/                      # Time-series projects
│   └── temperature-prediction-lstm/        # Temperature prediction
│
└── 🏆 05-kaggle-competitions/              # Kaggle competitions
    ├── Feedback-ELL-1st-Place/             # 🥇 Gold medal solution
    └── RSNA-2023-1st-Place/                # 🥇 Gold medal solution
```
## 🛠️ Tech Stack
🤖 Deep Learning · 📊 Data Science · 🔧 Development
## 🚀 Quick Start
### Installation
```bash
# Clone repository
git clone https://github.com/zimingttkx/AI-Practices.git
cd AI-Practices
# Create Conda environment
conda create -n ai-practices python=3.10 -y
conda activate ai-practices
# Install dependencies
pip install -r requirements.txt
# Verify installation
python -c "import tensorflow as tf; print(f'TensorFlow: {tf.__version__}')"
python -c "import torch; print(f'PyTorch: {torch.__version__}')"
# Launch Jupyter Lab
jupyter lab
```
### Hardware Requirements
| Component | Minimum | Recommended |
|:----------|:--------|:------------|
| **CPU** | 4 cores | 8+ cores |
| **RAM** | 8 GB | 32 GB |
| **GPU** | GTX 1060 | RTX 3080+ |
| **Storage** | 50 GB | 200 GB SSD |
## 📊 Results
### Kaggle Competitions
| Competition | Rank | Medal | Year |
|:------------|:----:|:-----:|:----:|
| Feedback Prize - ELL | **Top 1%** | 🥇 Gold | 2023 |
| RSNA Abdominal Trauma | **Top 1%** | 🥇 Gold | 2023 |
| American Express Default | Top 5% | 🥈 Silver | 2022 |
| RSNA Lumbar Spine | Top 10% | 🥉 Bronze | 2024 |
### Model Benchmarks
| Model | Dataset | Top-1 Acc | Params | FLOPs |
|:------|:--------|:---------:|:------:|:-----:|
| ResNet-50 | ImageNet | 76.1% | 25.6M | 4.1G |
| EfficientNet-B0 | ImageNet | 77.1% | 5.3M | 0.4G |
| ViT-B/16 | ImageNet | 77.9% | 86M | 17.6G |
| BERT-base | SST-2 | 93.2% | 110M | - |
## 📄 Citation
If this project helps your research, please cite:
```bibtex
@misc{ai-practices2024,
  author       = {zimingttkx},
  title        = {AI-Practices: A Systematic Approach to AI Research and Engineering},
  year         = {2024},
  publisher    = {GitHub},
  howpublished = {\url{https://github.com/zimingttkx/AI-Practices}}
}
```
## 📜 License
This project is licensed under the **MIT License** - see [LICENSE](LICENSE) for details.
## ☕ Sponsor
If this project helps you, consider buying the author a coffee ☕

WeChat Pay · Alipay

**Thanks to all sponsors for your support!** 🙏