In https://github.com/dvlab-research/MGM/blob/main/scripts/llama/train/stage_1_2_full_v7b_336_hr_768.sh, pretraining uses `--version plain` while finetuning uses `--version v1`. Won't this inconsistency between the two stages confuse the model?
Hi, for the LLaMA 7B and 13B models, we follow the instruction format in LLaVA. The pretraining stage mainly focuses on image captioning, so the `plain` style works well there; the instruction-style `v1` template is only needed for finetuning.
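To make the distinction concrete, here is a minimal sketch of how a `--version` flag can select between a bare captioning template and an instruction-style template, loosely in the spirit of LLaVA's `conv_templates` registry. The class and field names below are illustrative assumptions, not the actual MGM/LLaVA code:

```python
from dataclasses import dataclass, field

@dataclass
class Conversation:
    system: str
    roles: tuple
    sep: str
    messages: list = field(default_factory=list)

    def append(self, role, text):
        self.messages.append((role, text))

    def get_prompt(self):
        # "plain"-style: no system prompt, no role tags --
        # the prompt is just image tokens followed by the caption.
        if not self.roles[0]:
            return self.sep.join(text for _, text in self.messages)
        # "v1"-style: system prompt plus "ROLE: text" turns.
        parts = [self.system]
        for role, text in self.messages:
            parts.append(f"{role}: {text}")
        return self.sep.join(parts)

# Hypothetical registry keyed by the --version value.
conv_templates = {
    "plain": Conversation(system="", roles=("", ""), sep="\n"),
    "v1": Conversation(
        system="A chat between a curious user and an AI assistant.",
        roles=("USER", "ASSISTANT"),
        sep="\n",
    ),
}
```

Under this reading, the two stages are not inconsistent: `plain` only teaches the model to map image features to text (captioning), while `v1` layers the instruction-following format on top during finetuning.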