Usage with llama.cpp

#5 by aimped-gh

Hello and thank you very much for open sourcing these small models!

I would like to use the GGUF files locally in my llama.cpp Docker container. llama.cpp supports multimodal models as per https://github.com/ggml-org/llama.cpp/blob/master/docs/multimodal.md.

To use a multimodal model in the Docker container, one would need to specify a multimodal projector using the --mmproj option when running the container. I am not sure where to get this file for the LiquidAI models.
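For reference, here is a minimal sketch of how the flag is typically passed to the official llama.cpp server image; the GGUF filenames are placeholders, not actual LiquidAI release artifacts:

```sh
# Minimal sketch: run the llama.cpp server image with a multimodal projector.
# Assumes the ghcr.io/ggml-org/llama.cpp server image; the two .gguf
# filenames below are placeholders, not actual LiquidAI file names.
docker run -p 8080:8080 -v /path/to/models:/models \
  ghcr.io/ggml-org/llama.cpp:server \
  -m /models/model.gguf \
  --mmproj /models/mmproj.gguf \
  --host 0.0.0.0 --port 8080
```

In my understanding, the projector is usually a separate mmproj-*.gguf file published alongside the model weights in the same GGUF repository, when the conversion to GGUF includes one.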

This may be more of a question for the llama.cpp developers, but perhaps someone here can share their thoughts on it.

Thanks in advance!
