Fine-tuning with LLaMA-Factory
To fine-tune with LLaMA-Factory, follow the steps below:
- Install the tooling
pip install -U huggingface_hub -i https://pypi.tuna.tsinghua.edu.cn/simple
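To confirm the tool landed on your PATH, you can optionally print the hub environment info (huggingface-cli env is a standard huggingface_hub subcommand):
huggingface-cli env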
- Copy LLaMA-Factory
cp -r LLaMA-Factory /root
- Download the model
export HF_ENDPOINT=http://hfmirror.mas.zetyun.cn:8082
huggingface-cli download --resume-download Qwen/Qwen2.5-VL-7B-Instruct --local-dir /root/model/Qwen2.5-VL-7B-Instruct --max-workers=4
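As an optional sanity check, verify the download completed, e.g. that config.json exists in the target directory from the command above:
ls -lh /root/model/Qwen2.5-VL-7B-Instruct
test -f /root/model/Qwen2.5-VL-7B-Instruct/config.json && echo "config.json present"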
- Download the dataset
export HF_ENDPOINT=http://hfmirror.mas.zetyun.cn:8082
huggingface-cli download AlayaNeW/QA_from_CoVLA_zh --repo-type dataset --local-dir /root/data/QA_from_CoVLA_zh
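Optionally list the downloaded dataset; the data/dataset_info.json and data/QA_from_CoVLA_zh.json files referenced in the "Copy the dataset configuration" step below should be present:
ls -R /root/data/QA_from_CoVLA_zh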
- Upload the startup scripts
chmod +x start.sh finish.sh
Create the start.sh and finish.sh script files yourself:
5.1 The start.sh script:
#!/bin/bash
# Manually add the conda env's bin directory to PATH (temporary; normally
# unnecessary once `conda activate` below succeeds)
# export PATH=/opt/conda/envs/lf/bin:$PATH
conda init
source ~/.bashrc
conda activate lf
# Copy the model into /dev/shm to speed up loading of model files during training.
# Command format: rclone copy <source-dir> <dest-dir>; replace the source
# directory with your own path.
rclone copy /workspace/model/Qwen2.5-VL-7B-Instruct /dev/shm/llamafactory/model/Qwen2.5-VL-7B-Instruct --transfers 8 -P
# Copy the dataset images into /dev/shm as well
rclone copy /workspace/LLaMA-Factory/data/images /dev/shm/llamafactory/dataset/qa_images/images --transfers 8 -P
# Enter the working directory
echo "Entering the working directory..."
if ! cd "/workspace/LLaMA-Factory"; then
    echo "Error: cannot enter directory '/workspace/LLaMA-Factory'"
    exit 1
fi
# Check that the CLI is available
if ! command -v llamafactory-cli &> /dev/null; then
    echo "Error: llamafactory-cli not found; please check that it is installed"
    exit 1
fi
# if ! llamafactory-cli version &> /dev/null; then
#     echo "Error: llamafactory-cli not found or wrong version; please check that it is installed"
#     exit 1
# fi
# Launch the WebUI (replace the model path with your actual path in the UI).
# Run it in the background so we can report startup status; `llamafactory-cli
# webui` blocks in the foreground, so the original success messages would only
# print after shutdown.
echo "Starting WebUI..."
llamafactory-cli webui &
WEBUI_PID=$!
# Give Gradio a few seconds to bind the port, then check the process is alive
sleep 10
if kill -0 "$WEBUI_PID" 2>/dev/null; then
    echo "✅ WebUI started successfully!"
    echo "URL: http://localhost:7860"
    # If Gradio sharing is enabled, the share URL appears in the WebUI log
    wait "$WEBUI_PID"
else
    echo "❌ Startup failed! Please check:"
    echo "  1. whether port 7860 is already in use"
    echo "  2. whether the model path is correct"
    echo "  3. whether you have sufficient permissions"
    exit 1
fi
5.2 The finish.sh script:
#!/bin/bash
LOGFILE="llamafactory-start_$(date +'%Y-%m-%d_%H-%M').log"
bash /workspace/LLaMA-Factory/start.sh > /workspace/LLaMA-Factory/"$LOGFILE" 2>&1
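In practice you would typically launch finish.sh detached so the WebUI keeps running after the shell closes; a minimal sketch (the nohup/tail usage here is an assumption, not part of the original scripts):
cd /workspace/LLaMA-Factory
nohup ./finish.sh &
tail -f llamafactory-start_*.log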
- Copy the dataset configuration
cp /root/data/QA_from_CoVLA_zh/data/dataset_info.json /root/LLaMA-Factory/data
cp /root/data/QA_from_CoVLA_zh/data/QA_from_CoVLA_zh.json /root/LLaMA-Factory/data
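To confirm the copied dataset registration parses, you can pretty-print it (python -m json.tool ships with the Python standard library):
python -m json.tool /root/LLaMA-Factory/data/dataset_info.json | head -n 20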
- Launch the run shell and select/configure the Python interpreter
- Use the Aladdin plugin to expose port 7860 (see the quick check below)
- Open the LLaMA-Factory web page
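Once the port is exposed, a quick reachability check (a sketch; from outside the container, replace localhost with the address Aladdin assigns):
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:7860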