# Summary
For Llama enablement, please see the [Llama README page](../llama/README.md) for complete details.

This page contains Llama2-specific instructions and information.


## Enablement

We have verified that Llama 2 7B runs efficiently in [mobile applications](#step-6-build-mobile-apps) on select devices, including the iPhone 15 Pro, iPhone 15 Pro Max, Samsung Galaxy S22 and S24, and OnePlus 12.

Since Llama 2 7B needs at least 4-bit quantization to fit even on some high-end phones, the results presented here correspond to a 4-bit groupwise post-training quantized model.

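The export flow later on this page uses the `8da4w` quantization mode, i.e. 8-bit dynamic activations with 4-bit groupwise weights. As a rough illustration of the weight side only (this is not the actual ExecuTorch implementation; the function and the symmetric scheme here are assumptions for exposition), groupwise 4-bit quantization amounts to sharing one scale per group of `group_size` weights:

```python
import torch

def fake_quantize_groupwise_4bit(w: torch.Tensor, group_size: int = 128):
    """Illustrative symmetric 4-bit groupwise weight quantization (not ExecuTorch's implementation)."""
    out_features, in_features = w.shape
    assert in_features % group_size == 0, "in_features must be divisible by group_size"
    groups = w.reshape(out_features, in_features // group_size, group_size)
    # One scale per group of `group_size` weights; a signed 4-bit value lies in [-8, 7].
    scales = groups.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / 7.0
    q = torch.clamp(torch.round(groups / scales), -8, 7)
    # Dequantize to inspect the error a given group size introduces.
    w_hat = (q * scales).reshape(out_features, in_features)
    return q.to(torch.int8), scales.squeeze(-1), w_hat

# Smaller groups (128) keep more scales and typically lose less accuracy than larger ones (256).
w = torch.randn(32, 512)
_, _, w_hat_128 = fake_quantize_groupwise_4bit(w, group_size=128)
_, _, w_hat_256 = fake_quantize_groupwise_4bit(w, group_size=256)
print((w - w_hat_128).abs().mean(), (w - w_hat_256).abs().mean())
```
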
## Results

### Llama2 7B
Llama 2 7B performance was measured on Samsung Galaxy S22, Galaxy S24, and OnePlus 12 devices. Performance is reported in tokens per second, measured using an [adb binary-based approach](#step-5-run-benchmark-on).

|Device | Groupwise 4-bit (group size 128) | Groupwise 4-bit (group size 256) |
|--------|----------------------------------|----------------------------------|
|Galaxy S22 | 8.15 tokens/second | 8.3 tokens/second |
|Galaxy S24 | 10.66 tokens/second | 11.26 tokens/second |
|OnePlus 12 | 11.55 tokens/second | 11.6 tokens/second |

Below are the WikiText perplexity results for the two group sizes, with max_seq_length 2048 and limit 1000, measured using [LM Eval](https://github.com/EleutherAI/lm-evaluation-harness).

|Model | Baseline (FP32) | Groupwise 4-bit (group size 128) | Groupwise 4-bit (group size 256) |
|--------|-----------------|----------------------------------|----------------------------------|
|Llama 2 7B | 9.2 | 10.2 | 10.7 |

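For reference, a number like the FP32 baseline above can be reproduced with lm-evaluation-harness directly. The sketch below is a hypothetical example, assuming a recent `lm_eval` release and the Hugging Face Llama 2 7B checkpoint; the model path and argument names are assumptions, not part of this repo's flow:

```python
# Hypothetical sketch: WikiText perplexity for the FP32 baseline via lm-evaluation-harness.
# Assumes `pip install lm_eval` (v0.4+) and access to the Hugging Face Llama 2 7B weights.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=meta-llama/Llama-2-7b-hf,dtype=float32,max_length=2048",
    tasks=["wikitext"],
    limit=1000,  # matches the `limit 1000` used for the numbers above
)
print(results["results"]["wikitext"])
```
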
## Prepare model

You can export and run the original Llama 2 7B model.

1. Llama 2 pretrained parameters can be downloaded from [Meta's official website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) or from [Hugging Face](https://huggingface.co/meta-llama/Llama-2-7b).

2. Edit the `params.json` file: replace `"vocab_size": -1` with `"vocab_size": 32000`. This is a short-term workaround (a scripted version of this edit is sketched after these steps).

3. Export the model and generate a `.pte` file:
    ```
    # 4-bit groupwise weight quantization (8-bit dynamic activations), XNNPACK backend, KV cache enabled
    python -m examples.models.llama.export_llama --checkpoint <checkpoint.pth> --params <params.json> -kv --use_sdpa_with_kv_cache -X -qmode 8da4w --group_size 128 -d fp32
    ```
4. Create `tokenizer.bin`:
    ```
    python -m extension.llm.tokenizer.tokenizer -t <tokenizer.model> -o tokenizer.bin
    ```

    Pass the converted `tokenizer.bin` file instead of `tokenizer.model` for subsequent steps.


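The `vocab_size` edit in step 2 can also be scripted. Below is a minimal sketch, assuming `params.json` sits in the current directory; the path and helper are illustrative only:

```python
# Illustrative helper for step 2: set vocab_size in params.json to 32000.
import json
from pathlib import Path

params_path = Path("params.json")  # adjust to wherever the downloaded params.json lives
params = json.loads(params_path.read_text())
if params.get("vocab_size", -1) == -1:
    params["vocab_size"] = 32000  # Llama 2 tokenizer vocabulary size
params_path.write_text(json.dumps(params, indent=2) + "\n")
```
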
# Run

Running the model is the same as described in the Llama README; see [this step](../llama/README.md#step-4-run-on-your-computer-to-validate).