For vLLM, simply start a server by executing the command below:

```bash
vllm serve tiiuae/Falcon-H1-1B-Instruct --tensor-parallel-size 2 --data-parallel-size 1
```
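Once the server is up, it can be queried through vLLM's OpenAI-compatible API. A minimal sketch, assuming the default host and port (`localhost:8000`) have not been changed:

```shell
# Query the running vLLM server via its OpenAI-compatible
# chat-completions endpoint (default port 8000).
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "tiiuae/Falcon-H1-1B-Instruct",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'
```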

### 🦙 llama.cpp

While we are working on integrating our architecture directly into the `llama.cpp` library, you can install our fork of the library and use it directly: https://github.com/tiiuae/llama.cpp-Falcon-H1

Use the same installation guidelines as for `llama.cpp`.
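As a sketch, the fork builds the same way as upstream `llama.cpp` (the CMake commands below follow the upstream build instructions and assume a standard toolchain is installed):

```shell
# Clone the Falcon-H1 fork and build it like upstream llama.cpp.
git clone https://github.com/tiiuae/llama.cpp-Falcon-H1
cd llama.cpp-Falcon-H1
cmake -B build
cmake --build build --config Release
```

After building, the usual `llama.cpp` binaries (e.g. `llama-cli` under `build/bin/`) should be available for running a GGUF model, assuming the fork keeps the upstream layout.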