---
base_model:
- mikeyandfriends/PixelWave_FLUX.1-schnell_04
license: apache-2.0
tags:
- text-to-image
- SVDQuant
- FLUX.1-dev
- INT4
- FLUX.1
- Diffusion
- Quantization
language:
- en
base_model_relation: quantized
pipeline_tag: text-to-image
library_name: diffusers
---

# WIP - read P.S.

## Model Details

The `SVDQuant` quantized `int4` variant of the base model [mikeyandfriends/PixelWave_FLUX.1-schnell_04](https://hf.co/mikeyandfriends/PixelWave_FLUX.1-schnell_04). It was quantized with the official SVDQuant toolset using both the `fast` and `gptq` presets.

### P.S.

It yields worse-than-expected generation results, so it is **not recommended for now**. I will take another try at quantizing it using slow mode.

### P.P.S.

I ran the full quantization, but due to the way the toolset is implemented, a bug at the very end, in the eval part of the workflow, caused it to exit with an error without saving a single byte of the final quantization result; everything was lost. Since I paid for the compute myself, I consider this loss of ~$60 a valuable lesson, but I will not redo it, as I am short on free cash at the moment. Sorry.

For those willing to have it done: feel free to generate a redeemable credit code at RunPod and send it to me via Telegram: [t.me/WaveCut](https://t.me/WaveCut), and I'll be happy to take another shot at it.