From d58f9ae95c299fe6388ee2da2c87fd90cd360d41 Mon Sep 17 00:00:00 2001
From: Joseph Spisak
Date: Sat, 16 Sep 2023 08:04:21 -0700
Subject: [PATCH] Update FAQ.md

---
 FAQ.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/FAQ.md b/FAQ.md
index 60c5b96..c936299 100644
--- a/FAQ.md
+++ b/FAQ.md
@@ -71,4 +71,5 @@
 A: You can adapt the finetuning script found [here](https://github.com/facebookresearch/llama-recipes/blob/main/llama_finetuning.py) for pretraining. You can also find the hyperparams used for pretraining in Section 2 of [the LLama 2 paper](https://arxiv.org/pdf/2307.09288.pdf).
 
 **Q: Am I allowed a develop derivative models through fine-tuning based on Llama 2 for languages other than english? Is this a violation of the acceptable use policy?**
+
 A: No, it is NOT a violation of the acceptable use policy (AUP) to finetune on a non-english language and then use commercially as long as you follow the AUP and the terms of the license. We did include language in the responsible use guide around this because documentation and support doesn't yet exist for languages beyond english. Llama 2 itself is english language centric and you can read the paper for more details [here](https://arxiv.org/abs/2307.09288).