---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- image-generation
- flux
- safetensors
widget:
- text: >-
    Convert to a flat cartoon style while keeping the subject unchanged
  output:
    url: images/example1.png
- text: >-
    Convert to a flat cartoon style while keeping the subject unchanged
  output:
    url: images/example2.png
- text: Convert to a flat cartoon style while keeping the subject unchanged
  output:
    url: images/example3.png
base_model: black-forest-labs/FLUX.1-Kontext-dev
instance_prompt: Convert to a flat cartoon style while keeping the subject unchanged
license: other
license_name: flux-1-dev-non-commercial-license
license_link: >-
  https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev/blob/main/LICENSE.md
---
# FLUX.1-Kontext-dev-LoRA-Flat-Cartoon-Style

This is a flat cartoon style LoRA for FLUX.1-Kontext-dev, trained by [Mango](https://www.liblib.art/userpage/baf2e419ce1cb06812314957efd2e067/publish).

## Showcases

<Gallery />

## Trigger words

You should use `Convert to a flat cartoon style while keeping the subject unchanged` as the prompt to trigger the image generation.
## Inference

```python
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image

# Load the FLUX.1-Kontext-dev base pipeline in bfloat16.
pipe = FluxKontextPipeline.from_pretrained("black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16)

# Load the flat cartoon style LoRA and fuse it into the base weights.
pipe.load_lora_weights("Shakker-Labs/FLUX.1-Kontext-dev-LoRA-Flat-Cartoon-Style", weight_name="FLUX-kontext-lora-flat-cartoon-style.safetensors")
pipe.fuse_lora(lora_scale=1.0)
pipe.to("cuda")

# The image to restyle and the trigger prompt.
input_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")
prompt = "Convert to a flat cartoon style while keeping the subject unchanged"

image = pipe(
    image=input_image,
    prompt=prompt,
    num_inference_steps=24,
    guidance_scale=2.5,
).images[0]
image.save("example.png")
```
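
If GPU memory is limited, one optional variant is to offload pipeline components to the CPU instead of moving the whole pipeline to the GPU. This is a minimal sketch using the generic `diffusers` offloading API; it is not part of this card's recommended settings and trades some inference speed for a lower peak VRAM footprint.

```python
# Optional: instead of pipe.to("cuda"), keep components on the CPU and move each
# one to the GPU only while it runs. Requires the `accelerate` package.
pipe.enable_model_cpu_offload()
```

Because `fuse_lora()` merges the LoRA into the base weights, the rest of the example above stays unchanged.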

## Acknowledgements

This model was trained by [Mango](https://www.liblib.art/userpage/baf2e419ce1cb06812314957efd2e067/publish), who retains the copyright, and is released here with the author's permission. The model follows the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).