The model produces good results, but inference takes about 4–5 minutes per image on a 16 GB GPU. Please suggest ways to optimize or speed up the generation process.