Update README.md
    	
README.md CHANGED
@@ -80,8 +80,7 @@ vllm serve tiiuae/Falcon-H1-1B-Instruct --tensor-parallel-size 2 --data-parallel
 
 ### 🦙 llama.cpp
 
-
-Use the same installing guidelines as `llama.cpp`.
+Our architecture is integrated into the latest versions of `llama.cpp` (https://github.com/ggml-org/llama.cpp) - you can use our official GGUF files directly with llama.cpp.
 
 # Evaluation
 
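To make the updated instructions concrete, here is a minimal, hedged sketch of running a Falcon-H1 GGUF file with llama.cpp's standard CLI tools. The file name `Falcon-H1-1B-Instruct-Q4_K_M.gguf` is a hypothetical placeholder for whichever official GGUF file you download, not a confirmed artifact name.

```bash
# Sketch only: assumes a quantized Falcon-H1 GGUF has already been downloaded;
# the file name below is a placeholder, not an official artifact name.
llama-cli -m Falcon-H1-1B-Instruct-Q4_K_M.gguf \
  -p "Explain the Falcon-H1 hybrid architecture in two sentences." \
  -n 256

# The same file can also be served over HTTP with llama.cpp's built-in server.
llama-server -m Falcon-H1-1B-Instruct-Q4_K_M.gguf --port 8080
```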

