
Can't use the model in OpenCode

#3
by HRKings - opened

The model works really well on its own, but every time I try to use it inside opencode, it seems it can't make any tool calls and just spits out messages like "let me read that file, cat filename" without actually doing anything. I'm using llama.cpp server at its current latest commit (34ce48d97), with the --jinja flag enabled and all layers offloaded to the GPU. It also repeats itself a lot, really a lot.
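
For reference, the server is started with a command along these lines (the model filename and port are placeholders, not my exact ones):

```bash
# --jinja applies the model's built-in chat template, which is needed for tool calls;
# -ngl 99 offloads all layers to the GPU; model path and port are placeholders.
./llama-server -m ./model.gguf --jinja -ngl 99 --port 8080
```

OpenCode then points at that OpenAI-compatible endpoint.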
