Update FAQ.md

This commit is contained in:
Joseph Spisak
2023-09-16 08:04:21 -07:00
committed by GitHub
parent 4869110d27
commit d58f9ae95c

FAQ.md

@@ -71,4 +71,5 @@ A:
You can adapt the finetuning script found [here](https://github.com/facebookresearch/llama-recipes/blob/main/llama_finetuning.py) for pretraining. You can also find the hyperparams used for pretraining in Section 2 of [the Llama 2 paper](https://arxiv.org/pdf/2307.09288.pdf).
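As a starting point, here is a minimal sketch of those Section 2 pretraining hyperparameters wired into a standard PyTorch optimizer. This is not the llama-recipes script itself; the `build_optimizer` helper and the use of plain `torch.optim.AdamW` are illustrative assumptions, while the numeric values come from the paper.

```python
import torch

# Pretraining hyperparameters reported in Section 2 of the Llama 2 paper.
pretrain_hparams = {
    "lr": 3e-4,            # peak LR for 7B/13B (1.5e-4 for 34B/70B)
    "betas": (0.9, 0.95),  # AdamW beta1/beta2
    "eps": 1e-5,
    "weight_decay": 0.1,
    "grad_clip": 1.0,      # apply via torch.nn.utils.clip_grad_norm_
    "warmup_steps": 2000,  # cosine schedule decays to 10% of peak LR
}

# Hypothetical helper (not part of llama-recipes): builds an AdamW
# optimizer with the paper's settings for a given model.
def build_optimizer(model: torch.nn.Module) -> torch.optim.AdamW:
    return torch.optim.AdamW(
        model.parameters(),
        lr=pretrain_hparams["lr"],
        betas=pretrain_hparams["betas"],
        eps=pretrain_hparams["eps"],
        weight_decay=pretrain_hparams["weight_decay"],
    )
```

You would still need to supply the cosine learning-rate schedule, gradient clipping, and your own data pipeline on top of this, as the finetuning script does.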
**Q: Am I allowed to develop derivative models through fine-tuning based on Llama 2 for languages other than English? Is this a violation of the acceptable use policy?**
A: No, it is NOT a violation of the acceptable use policy (AUP) to finetune on a non-English language and then use the model commercially, as long as you follow the AUP and the terms of the license. We included language in the responsible use guide around this because documentation and support don't yet exist for languages beyond English. Llama 2 itself is English-centric; you can read the paper for more details [here](https://arxiv.org/abs/2307.09288).