llama.cpp
Now if only someone would convert this to GGUF. It would be so awesome to run it with llama.cpp.
That's only once llama.cpp adds support for it. Judging by the size, it could fit entirely on a GPU: probably Q8 quality in 16 GB of VRAM, and nearly original BF16 on 24 GB VRAM cards.
It could be used as a web browser companion; as far as I know only Brave supports local models (served via Ollama, oobabooga, etc.). I don't see a way to get the same functionality with the original non-GGUF format.
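To illustrate what I mean, here is a minimal sketch, assuming a converted Q8_0 GGUF existed (it doesn't yet; the filename below is made up), of loading it locally with llama-cpp-python and offloading all layers to the GPU:

```python
# Minimal sketch: run a hypothetical Q8_0 GGUF of this model locally.
# "model-Q8_0.gguf" is a placeholder; no such conversion exists yet.
from llama_cpp import Llama

llm = Llama(
    model_path="model-Q8_0.gguf",  # hypothetical converted checkpoint
    n_gpu_layers=-1,               # offload everything; Q8_0 should fit in ~16 GB VRAM
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize this page for me."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

A local server speaking an OpenAI-compatible API (which is what Brave points at) could then be wrapped around the same GGUF, but again, that depends on llama.cpp supporting the model first.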
Hi, because our method is non-autoregressive, it doesn't work with llama.cpp out of the box and needs some extra dev work. The good news is that since we use pure causal attention, the inference logic is doable. It's on our roadmap, but please give us a little time.
However, our checkpoints do support AR generation. So right now, if you convert the model directly, it will only work in the standard AR mode.
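For anyone curious what that standard AR mode looks like today, here is a rough sketch using the original checkpoint with transformers; the model id is a placeholder and the exact loading arguments may differ for our repo:

```python
# Minimal sketch, assuming the released checkpoint exposes a standard
# causal-LM interface on Hugging Face (the model id below is a placeholder).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "org/model-name"  # placeholder: substitute the actual repo id
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Plain autoregressive (AR) decoding, i.e. the mode a direct GGUF
# conversion would fall back to until the non-AR path is implemented.
inputs = tok("Hello, world!", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```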
Amazing work on this model. I'll definitely be waiting for more development on this; out of the box it's already incredible. :)