Inference Providers documentation


WaveSpeed

All supported WaveSpeed models can be found here

WaveSpeedAI is a high-performance AI inference platform specializing in image and video generation. Built with cutting-edge infrastructure and optimization techniques, WaveSpeedAI provides fast, scalable, and cost-effective model serving for creative AI applications.

Supported tasks

Image To Image

Find out more about Image To Image here.

import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="wavespeed",
    api_key=os.environ["HF_TOKEN"],
)

with open("cat.png", "rb") as image_file:
    input_image = image_file.read()

# output is a PIL.Image object
image = client.image_to_image(
    input_image,
    prompt="Turn the cat into a tiger.",
    model="Qwen/Qwen-Image-Edit",
)
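The returned `PIL.Image` object can be saved to disk or re-encoded to raw bytes for further processing. A minimal sketch, using a locally created placeholder image in place of the client output so the snippet runs offline:

```python
from io import BytesIO

from PIL import Image

# Placeholder standing in for the PIL.Image returned by
# client.image_to_image above; lets this snippet run offline.
image = Image.new("RGB", (64, 64), color="gray")

# Save directly to disk
image.save("tiger.png")

# Or re-encode to PNG bytes (e.g. to send over the network)
buf = BytesIO()
image.save(buf, format="PNG")
png_bytes = buf.getvalue()
```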

Text To Image

Find out more about Text To Image here.

import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="wavespeed",
    api_key=os.environ["HF_TOKEN"],
)

# output is a PIL.Image object
image = client.text_to_image(
    "Astronaut riding a horse",
    model="black-forest-labs/FLUX.1-dev",
)
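Beyond the prompt and model, `InferenceClient.text_to_image` also accepts optional generation parameters such as `negative_prompt`, `width`, `height`, `num_inference_steps`, and `seed`; whether each is honored depends on the provider and model. A sketch of passing them as keyword arguments (the commented-out call mirrors the example above):

```python
# Optional generation parameters for text_to_image; support for each
# one varies by provider and model.
params = {
    "negative_prompt": "blurry, low quality",
    "width": 1024,
    "height": 768,
    "num_inference_steps": 28,
    "seed": 42,
}

# image = client.text_to_image(
#     "Astronaut riding a horse",
#     model="black-forest-labs/FLUX.1-dev",
#     **params,
# )
```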

Text To Video

Find out more about Text To Video here.

import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="wavespeed",
    api_key=os.environ["HF_TOKEN"],
)

# output is raw video bytes
video = client.text_to_video(
    "A young man walking on the street",
    model="Wan-AI/Wan2.2-TI2V-5B",
)
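`text_to_video` returns the generated video as raw bytes, which can be written straight to a file. A minimal sketch, using placeholder bytes in place of the client output so the snippet runs offline:

```python
# Placeholder standing in for the bytes returned by
# client.text_to_video above.
video = b"placeholder-video-bytes"

# Write the raw bytes to a video file
with open("video.mp4", "wb") as f:
    f.write(video)
```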