Update README.md

README.md CHANGED
@@ -15,8 +15,12 @@ tags:
 - Forge: TBC
 - stable-diffusion.cpp: [llama.cpp Feature-matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
 
+# Bravo
+Combined imatrix multiple images 25 and 50 steps q8 euler
+
 # Alpha
-simple imatrix: 512x512 single image 8/20 steps q3_K_S euler data: `load_imatrix: loaded 314 importance matrix entries from imatrix.dat computed on 7 chunks`
+Simple imatrix: 512x512 single image 8/20 steps q3_K_S euler data: `load_imatrix: loaded 314 importance matrix entries from imatrix.dat computed on 7 chunks`
+Using [llama.cpp quantize cae9fb4](https://github.com/ggerganov/llama.cpp/commit/cae9fb4361138b937464524eed907328731b81f6).
 
 ## Experimental from q8
 
@@ -28,8 +32,8 @@ simple imatrix: 512x512 single image 8/20 steps q3_K_S euler data: `load_imatrix
 | [flux1-dev-TQ2_0.gguf](https://huggingface.co/Eviation/flux-imatrix/blob/main/experimental-from-q8/flux1-dev-TQ2_0.gguf) | TQ2_0 | 3.19GB | TBC |
 | [flux1-dev-IQ2_XXS.gguf](https://huggingface.co/Eviation/flux-imatrix/blob/main/experimental-from-q8/flux1-dev-IQ2_XXS.gguf) | IQ2_XXS | 3.19GB | TBC |
 | [flux1-dev-IQ2_XS.gguf](https://huggingface.co/Eviation/flux-imatrix/blob/main/experimental-from-q8/flux1-dev-IQ2_XS.gguf) | IQ2_XS | 3.56GB | TBC |
-| - | IQ2_S |
-| - | IQ2_M |
+| [flux1-dev-IQ2_S.gguf](https://huggingface.co/Eviation/flux-imatrix/blob/main/experimental-from-q8/flux1-dev-IQ2_S.gguf) | IQ2_S | 3.56GB | TBC |
+| [flux1-dev-IQ2_M.gguf](https://huggingface.co/Eviation/flux-imatrix/blob/main/experimental-from-q8/flux1-dev-IQ2_M.gguf) | IQ2_M | 3.93GB | TBC |
 | - | IQ3_XS | TBC | TBC |
 | - | IQ3_S | TBC | TBC |
 | - | IQ3_M | TBC | TBC |
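The file sizes in the rows above can be sanity-checked as effective bits per weight. A rough sketch, assuming flux1-dev has roughly 12 billion parameters and that the listed sizes are GiB (both are assumptions, not stated in this card):

```python
# Back-of-envelope bits-per-weight estimate from GGUF file size.
# Assumptions (not from the model card):
#   - flux1-dev has ~12e9 parameters
#   - the listed sizes are GiB
N_PARAMS = 12e9
GIB = 1024 ** 3


def bits_per_weight(size_gib: float, n_params: float = N_PARAMS) -> float:
    """Convert a file size in GiB to approximate bits per weight."""
    return size_gib * GIB * 8 / n_params


for name, size in [("TQ2_0", 3.19), ("IQ2_XS", 3.56), ("IQ2_M", 3.93)]:
    print(f"{name}: ~{bits_per_weight(size):.2f} bits/weight")
```

This is only a rough check: a GGUF file also stores metadata, and different tensors may use different quant types, so the true per-tensor bit width varies around this average.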