#@title Clone Project
!git clone https://github.com/karpathy/llama2.c.git
%cd llama2.c
#@title Build
!make runfast
#@title Pick Your Model
#@markdown Choose model
model = "stories15M" #@param ["stories15M", "stories42M", "stories110M"]
# Map the model choice to its download URL on Hugging Face.
model_urls = {
    "stories15M": "https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.bin",
    "stories42M": "https://huggingface.co/karpathy/tinyllamas/resolve/main/stories42M.bin",
    "stories110M": "https://huggingface.co/karpathy/tinyllamas/resolve/main/stories110M.bin",
}
download_url = model_urls[model]
print(f"download_url: {download_url}")
!wget $download_url
model_file = model + ".bin"
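As a sanity check on the download, note that llama2.c's legacy `.bin` export begins with seven little-endian int32 config fields (in `run.c`'s `Config` order: dim, hidden_dim, n_layers, n_heads, n_kv_heads, vocab_size, seq_len) followed by float32 weights. A minimal sketch of parsing that header; the byte string below is fabricated from the published stories15M dimensions, not read from the real file:

```python
import struct

# Hypothetical 28-byte header standing in for the start of stories15M.bin;
# with the real file you would use open(model_file, "rb").read(28) instead.
fake_header = struct.pack("<7i", 288, 768, 6, 6, 6, 32000, 256)

fields = ("dim", "hidden_dim", "n_layers", "n_heads",
          "n_kv_heads", "vocab_size", "seq_len")
config = dict(zip(fields, struct.unpack("<7i", fake_header)))
print(config)
```

A mismatched `vocab_size` or nonsensical `dim` here is a quick sign of a truncated or wrong-format download.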
#@title Generate Stories
# Generate args
max_token = 256 #@param {type:"slider", min:32, max:1024, step:32}
temperature = 0.8 #@param {type:"slider", min:0.0, max:1, step:0.05}
top_p = 0.9 #@param {type:"slider", min:0.0, max:1.0, step:0.05}
prompt = "One day, Lily met a Shoggoth" #@param {type:"string"}
print(f"model: {model_file}, max_token: {max_token}, temperature: {temperature}, top_p: {top_p}, prompt: {prompt}")
print("----------------------------\n")
cmd = f'./run {model_file} -t {temperature} -p {top_p} -n {max_token} -i "{prompt}"'
!{cmd}
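For intuition about the `-t` and `-p` flags: temperature divides the logits before the softmax (higher means flatter, more random output), and top-p keeps only the smallest set of most-probable tokens whose cumulative mass reaches `top_p`, then samples from that set. A toy pure-Python sketch of this sampling rule (illustrative only, not llama2.c's C sampler):

```python
import math
import random

def sample_top_p(logits, temperature=0.8, top_p=0.9, rng=random.random):
    # Temperature scaling: higher temperature flattens the distribution.
    scaled = [l / temperature for l in logits]
    # Softmax, shifted by the max for numerical stability.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Nucleus (top-p): take tokens in descending probability until their
    # cumulative mass first reaches top_p.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    # Sample within the kept set, proportional to the (unnormalized) mass.
    r = rng() * sum(probs[i] for i in kept)
    for i in kept:
        r -= probs[i]
        if r <= 0:
            return i
    return kept[-1]

token_id = sample_top_p([2.0, 1.0, 0.2, -1.0], temperature=0.8, top_p=0.9)
print(token_id)
```

With a strongly dominant logit the nucleus collapses to a single token, which is why low temperature plus low top-p gives near-deterministic stories.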
#@title Run Meta's Llama 2 models
#@markdown Input your Hugging Face [access token](https://huggingface.co/settings/tokens) to download Meta's Llama 2 models.
from huggingface_hub import snapshot_download
token = "replace with your Hugging Face access token" #@param {type:"string"}
path = snapshot_download(repo_id="meta-llama/Llama-2-7b", cache_dir="Llama-2-7b", token=token)
!python export_meta_llama_bin.py $path llama2_7b.bin
print("./run llama2_7b.bin\n")
!./run llama2_7b.bin