# deepseek-local-host

**Repository Path**: FadingFool/deepseek-local-host

## Basic Information

- **Project Name**: deepseek-local-host
- **Description**: No description available
- **Primary Language**: Python
- **License**: Apache-2.0
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2025-02-25
- **Last Updated**: 2025-04-30

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

### Helper commands

Switch to a Hugging Face mirror:

```shell
export HF_ENDPOINT="https://hf-mirror.com"
```

Watch GPU usage:

```shell
watch -n 1 nvidia-smi
```

Change the Hugging Face download location:

```shell
vim ~/.bashrc        # append the following line at the end:
# export HF_HOME="/root/autodl-tmp/.cache"
source ~/.bashrc
env | grep HF_HOME   # verify that the change took effect
```

### ollama

Installation:

```shell
# enable academic acceleration (AutoDL proxy):
source /etc/network_turbo
# disable academic acceleration:
unset http_proxy && unset https_proxy

# download and install
curl -fsSL https://ollama.com/install.sh | sh
# alternatively, install outside the system disk:
curl -L https://ollama.com/download/ollama-linux-amd64.tgz -o ollama-linux-amd64.tgz
sudo tar -C ./ollama/ -xzf ollama-linux-amd64.tgz
```

Run and test:

```shell
ollama serve
ollama -v
```

Note: the `ollama serve` console must stay open, and you need to add your public key to your ollama.com account.

### Convert to GGUF format

Download the code and set up the environment:

```shell
git clone https://github.com/ggml-org/llama.cpp
conda create -n myenv python=3.10
conda init              # initialize conda, then restart the shell
conda info --envs       # list virtual environments
conda activate myenv    # (conda deactivate leaves the environment)
pip install -r requirements.txt
python convert_hf_to_gguf.py "../deepseek-local-host/outputs/" --outtype f16 --verbose --outfile "../deepseek-local-host/guff/"
```

### Upload to ollama

Create a `Modelfile`:

```
FROM Qwen2.5-7B-F16.gguf

PARAMETER num_ctx 1024
PARAMETER repeat_penalty 1.0
PARAMETER repeat_last_n 1024
PARAMETER temperature 0.7
PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|eot_id|>"
PARAMETER stop "<|reserved_special_token|>"

# system prompt (Chinese): "You are a professional Chinese cybersecurity
# engineer; use your expertise to answer questions."
SYSTEM """
你是一个专业的中国网络安全工程师,你需要利用你的专业知识对问题进行回答。
"""

TEMPLATE """
{{ if .System }}<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>

{{ .Response }}<|eot_id|>"""
```

Create the local model:

```shell
ollama create -f Modelfile qingmian/Qwen2.5-1.5B-CyberSecurity-Demo
```

Upload it:

```shell
ollama push qingmian/Qwen2.5-1.5B-CyberSecurity-Demo
```

### Data download

Using Cmder:

```shell
scp -rP 14495 root@connect.bjc1.seetacloud.com:~/autodl-tmp/deepseek-local-host/outputs D:\deepseek-local-host\
```

### Distributed training

Configuration (`accelerate` + DeepSpeed):

```yaml
compute_environment: LOCAL_MACHINE
debug: false
deepspeed_config:
  gradient_accumulation_steps: 2
  offload_optimizer_device: cpu
  offload_param_device: cpu
  zero3_init_flag: false
  zero_stage: 2
distributed_type: DEEPSPEED
downcast_bf16: 'no'
enable_cpu_affinity: false
machine_rank: 0
main_training_function: main
mixed_precision: fp16
num_machines: 1
num_processes: 3
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```

Run:

```shell
accelerate launch GPU_SFT.py
```
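Once the model has been created with `ollama create`, it can also be queried from Python (the repo's primary language) through ollama's local REST API. The sketch below is a minimal, hedged example: it assumes `ollama serve` is running on the default port 11434 and reuses the model name from the steps above; the helper names (`build_payload`, `generate`) are illustrative, not part of this repo.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"   # default local ollama endpoint
MODEL = "qingmian/Qwen2.5-1.5B-CyberSecurity-Demo"   # model name created above


def build_payload(prompt: str, model: str = MODEL) -> bytes:
    """Serialize a non-streaming /api/generate request body as JSON bytes."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode("utf-8")


def generate(prompt: str) -> str:
    """POST the prompt to the local ollama server and return the generated text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# usage (requires a running `ollama serve`):
# print(generate("什么是SQL注入?"))
```

With `"stream": False` the server returns a single JSON object whose `response` field holds the full completion; omitting it yields a stream of newline-delimited JSON chunks instead.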