schmidt-sebastian committed · Commit 6f5a812 · verified · Parent: 4faefdc

Add Model Card

Files changed (1): README.md (+55 -3)

---
license: mit
base_model:
- microsoft/Phi-4-mini-instruct
---

# litert-community/Phi-4-mini-instruct

This model provides a few variants of [microsoft/Phi-4-mini-instruct](https://huggingface.co/microsoft/Phi-4-mini-instruct) that are ready for deployment on Android using the [LiteRT stack](https://ai.google.dev/edge/litert) and the [MediaPipe LLM Inference API](https://ai.google.dev/edge/mediapipe/solutions/genai/llm_inference).

## Use the models

### Colab

*Disclaimer: The target deployment surfaces for the LiteRT models are Android, iOS, and Web, and the stack has been optimized for performance on these targets. Trying out the system in Colab is an easy way to familiarize yourself with the LiteRT stack, with the caveat that the performance (memory and latency) on Colab can be much worse than on a local device.*

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/#fileId=https://huggingface.co/litert-community/Phi-4-mini-instruct/blob/main/phi4_tflite.ipynb)

### Android

* Download and install [the apk](https://github.com/google-ai-edge/mediapipe-samples/releases/download/v0.2.0/llm_inference_v0.1.0-debug.apk).
* Follow the instructions in the app.

To build the demo app from source, please follow the [instructions](https://github.com/google-ai-edge/mediapipe-samples/blob/main/examples/llm_inference/android/README.md) from the GitHub repository.
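
To call the model from your own app instead of the sample, you can load the bundle with the MediaPipe LLM Inference API. The snippet below is a minimal sketch, not code from this repository: the model path, file name, and prompt are placeholders, and only documented `LlmInferenceOptions` settings are used.

```kotlin
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

// Requires the MediaPipe GenAI tasks dependency, e.g.
// implementation("com.google.mediapipe:tasks-genai:<latest-version>") in build.gradle.
fun generateWithPhi4(context: Context): String {
    val options = LlmInference.LlmInferenceOptions.builder()
        // Placeholder path: point this at wherever your app stores the downloaded model file.
        .setModelPath("/data/local/tmp/llm/phi4_mini_instruct.task")
        // 1280 matches the KV cache size used in the benchmarks below.
        .setMaxTokens(1280)
        .build()

    val llm = LlmInference.createFromOptions(context, options)
    val response = llm.generateResponse("Write a one-sentence summary of LiteRT.")
    llm.close()
    return response
}
```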

## Performance

### Android

Note that all benchmark stats are from a Samsung S24 Ultra with a KV cache size of 1280, 512 prefill tokens, and 128 decode tokens, running on CPU.

<table border="1">
  <tr>
    <th></th>
    <th>Prefill (tokens/sec)</th>
    <th>Decode (tokens/sec)</th>
    <th>Time-to-first-token (sec)</th>
    <th>Memory (RSS in MB)</th>
    <th>Model size (MB)</th>
  </tr>
  <tr>
    <td><p style="text-align: right">dynamic_int8</p></td>
    <td><p style="text-align: right">80</p></td>
    <td><p style="text-align: right">23</p></td>
    <td><p style="text-align: right">2</p></td>
    <td><p style="text-align: right">6,884</p></td>
    <td><p style="text-align: right">3,940</p></td>
  </tr>
</table>

* Model Size: measured by the size of the .tflite flatbuffer (the serialization format for LiteRT models)
* Memory: indicator of peak RAM usage
* Inference on CPU is accelerated via the LiteRT [XNNPACK](https://github.com/google/XNNPACK) delegate with 4 threads
* Benchmark is run with the XNNPACK cache enabled
* dynamic_int8: quantized model with int8 weights and float activations
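
The prefill, decode, and time-to-first-token numbers above are easiest to observe with streaming generation, where partial results arrive as they are decoded. The sketch below uses the async variant of the LLM Inference API; the model path and prompt are again placeholders, not values from this model card.

```kotlin
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

// Streams partial results, which makes the time-to-first-token and decode rate
// reported above directly observable in an app.
fun streamWithPhi4(context: Context, onPartial: (String, Boolean) -> Unit): LlmInference {
    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath("/data/local/tmp/llm/phi4_mini_instruct.task") // placeholder path
        .setMaxTokens(1280)
        // Invoked once per partial result; `done` is true when generation has finished.
        .setResultListener { partialResult, done -> onPartial(partialResult, done) }
        .build()

    val llm = LlmInference.createFromOptions(context, options)
    llm.generateResponseAsync("List three features of the LiteRT stack.")
    return llm // caller should close() this once generation is done
}
```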