Tags: Text Generation, GGUF, turkish, türkiye, english, ai, lamapi, gemma3, next, next-x1, efficient, open-source, 1b, huggingface, large-language-model, llm, causal, transformer, artificial-intelligence, machine-learning, ai-research, natural-language-processing, nlp, finetuned, lightweight, creative, summarization, question-answering, chat-model, generative-ai, optimized-model, unsloth, trl, sft, chemistry, biology, finance, legal, music, art, code, climate, medical, agent, text-generation-inference, llama-cpp, gguf-my-repo
mattritchey/next-1b-Q4_K_M-GGUF
This model was converted to GGUF format from Lamapi/next-1b using llama.cpp via ggml.ai's GGUF-my-repo space.
Refer to the original model card for more details on the model.
Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
brew install llama.cpp
Invoke the llama.cpp server or the CLI.
CLI:
llama-cli --hf-repo mattritchey/next-1b-Q4_K_M-GGUF --hf-file next-1b-q4_k_m.gguf -p "The meaning to life and the universe is"
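For longer or interactive sessions, llama-cli also accepts a token limit and a conversation mode; the flags below are standard llama.cpp CLI options, and the values shown here are only illustrative:

llama-cli --hf-repo mattritchey/next-1b-Q4_K_M-GGUF --hf-file next-1b-q4_k_m.gguf -n 256 -cnv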
Server:
llama-server --hf-repo mattritchey/next-1b-Q4_K_M-GGUF --hf-file next-1b-q4_k_m.gguf -c 2048
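Once the server is running (it listens on http://localhost:8080 by default), you can query its OpenAI-compatible chat endpoint. This request is a minimal sketch; the prompt content is arbitrary:

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Write a haiku about the sea."}]}'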
Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
git clone https://github.com/ggerganov/llama.cpp
Step 2: Move into the llama.cpp folder and build it with the LLAMA_CURL=1 flag, along with any hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
cd llama.cpp && LLAMA_CURL=1 make
Step 3: Run inference through the built binaries.
./llama-cli --hf-repo mattritchey/next-1b-Q4_K_M-GGUF --hf-file next-1b-q4_k_m.gguf -p "The meaning to life and the universe is"
or
./llama-server --hf-repo mattritchey/next-1b-Q4_K_M-GGUF --hf-file next-1b-q4_k_m.gguf -c 2048
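Alternatively, if you prefer to fetch the GGUF file once rather than letting llama.cpp resolve it on each run, you can download it with the Hugging Face CLI (part of the huggingface_hub package) and point the binary at the local file; the local path here is illustrative:

huggingface-cli download mattritchey/next-1b-Q4_K_M-GGUF next-1b-q4_k_m.gguf --local-dir .
./llama-cli -m next-1b-q4_k_m.gguf -p "The meaning to life and the universe is"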
Model tree for mattritchey/next-1b-Q4_K_M-GGUF
Base model: Lamapi/next-1b