---
license: apache-2.0
---
This is a GGUF-formatted checkpoint of
[rnj-1-instruct](https://huggingface.co/EssentialAI/rnj-1-instruct), suitable
for use with llama.cpp, Ollama, and other GGUF-compatible runtimes. It has been
quantized with the `Q4_K_M` scheme, which brings the model weights to 4.8 GB.
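
If you would rather fetch the weights manually instead of letting the runtime pull them, the repository can also be downloaded with the Hugging Face CLI; a minimal sketch, where the local directory name is just an example:

```bash
# Download the GGUF repository to a local folder (directory name is illustrative)
huggingface-cli download EssentialAI/rnj-1-instruct-GGUF --local-dir ./rnj-1-instruct-GGUF
```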
For llama.cpp, install a build newer than version 7328 (e.g., on macOS: `brew install llama.cpp`) and run either of these commands:
```bash
llama-cli -hf EssentialAI/rnj-1-instruct-GGUF
llama-server -hf EssentialAI/rnj-1-instruct-GGUF -c 0  # then open a browser to http://localhost:8080
```
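
`llama-server` exposes an OpenAI-compatible HTTP API, so once it is running you can also query the model from the command line; a minimal sketch, assuming the default port 8080 and an example prompt:

```bash
# Send a chat request to the running llama-server instance
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Write a haiku about quantization."}]}'
```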
For Ollama, install a version newer than v0.13.3 (releases can be found [here](https://github.com/ollama/ollama/releases)) and run:
```bash
ollama run rnj-1
```
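
Ollama also serves a local HTTP API (on port 11434 by default), so the model can be called programmatically; a minimal sketch, with the model name taken from the `ollama run` command above and an example prompt:

```bash
# Request a single non-streaming completion from the local Ollama server
curl http://localhost:11434/api/generate \
  -d '{"model": "rnj-1", "prompt": "Why is the sky blue?", "stream": false}'
```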