experimental graigs
Collection
do not use these in public deployments, i am not responsible for anything these say
3 items · Updated
the latest state of the art model in the field of accuracy
other companies may be trying to reach artificial general intelligence, but we are trying to reach artificial grain intelligence. with the help of our team of the best grain farmers in the world, we are making huge strides in the field. fine-tuned fully locally on an RX 9070 XT using Unsloth.
ollama run hf.co/electron271/graig-code-turbo-fast-slow-4.5-mini:F16
temperature = 0.6
top_k = 20
min_p = 0.00 (llama.cpp's default is 0.1)
top_p = 0.95
presence_penalty = 0.0 to 2.0 (llama.cpp's default turns it off, but you can use it to reduce repetition; try 1.0 for example)
131,072 tokens of context natively, but you can set it to 32,768 tokens for less RAM use
you can also use /no_think for extra chaoticness
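if you don't want to retype the settings above every run, you can bake them into an Ollama Modelfile (a sketch; the parameter names follow Ollama's Modelfile format, and the model name "graig-tuned" is just an example — presence_penalty support may vary by Ollama version, so it's left out here):

```
FROM hf.co/electron271/graig-code-turbo-fast-slow-4.5-mini:F16

# recommended sampling settings from above
PARAMETER temperature 0.6
PARAMETER top_k 20
PARAMETER min_p 0.0
PARAMETER top_p 0.95

# reduced context window for less RAM use (native is 131072)
PARAMETER num_ctx 32768
```

then create and run the tuned variant with `ollama create graig-tuned -f Modelfile` followed by `ollama run graig-tuned`.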