diff --git a/README.md b/README.md
index 967c541607045d60f30ff504a99248e2c5068a06..1706558153baf5868b22afe5afe5c895df9082ea 100644
--- a/README.md
+++ b/README.md
@@ -18,7 +18,7 @@ The name "Forge" is inspired by "Minecraft Forge". This project aims to become t
-## Features [May. 28]
+## Features [Jul. 23]
> Most base features of the original [Automatic1111 Webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) should still function
#### New Features
@@ -48,6 +48,7 @@ The name "Forge" is inspired by "Minecraft Forge". This project aims to become t
- enable in **Settings/Optimizations**
- [X] Support fast `fp8` operation *(`torch._scaled_mm`)*
- requires RTX **40** +
+    - requires the **UNet Weights in fp8** option
- ~10% speed up; reduce quality
- enable in **Settings/Optimizations**
@@ -55,12 +56,14 @@ The name "Forge" is inspired by "Minecraft Forge". This project aims to become t
> - Both `fp16_accumulation` and `cublas_ops` achieve the same speed up; if you have already installed/updated to PyTorch **2.7.0**, you do not need to go for `cublas_ops`
> - The `fp16_accumulation` and `cublas_ops` require `fp16` precision, and are thus not compatible with the `fp8` operation
+
+
- [X] Persistent LoRA Patching
- speed up LoRA loading in subsequent generations
- see [Commandline](#by-classic)
- [X] Implement new Samplers
- *(ported from reForge Webui)*
-- [X] Implement Scheduler Dropdown
+- [X] Implement Scheduler dropdown
- *(backported from Automatic1111 Webui upstream)*
- enable in **Settings/UI Alternatives**
- [X] Add `CFG` slider to the `Hires. fix` section
@@ -72,18 +75,34 @@ The name "Forge" is inspired by "Minecraft Forge". This project aims to become t
- enable in **Settings/UI Alternatives**
- [X] Implement full precision calculation for `Mask blur` blending
- enable in **Settings/img2img**
+- [X] Support loading upscalers in `half` precision
+ - speed up; reduce quality
+ - enable in **Settings/Upscaling**
+- [X] Support running tile composition on GPU
+ - enable in **Settings/Upscaling**
+- [X] Allow `newline` in LoRA metadata
+ - *(backported from Automatic1111 Webui upstream)*
+- [X] Implement sending parameters from the generation result rather than from the UI
+    - **e.g.** send the prompts instead of the `Wildcard` syntax
+ - enable in **Settings/Infotext**
+- [X] Implement tiling optimization for VAE
+ - reduce memory usage; reduce speed
+ - enable in **Settings/VAE**
- [X] Implement `diskcache` for hashes
- *(backported from Automatic1111 Webui upstream)*
- [X] Implement `skip_early_cond`
- *(backported from Automatic1111 Webui upstream)*
- enable in **Settings/Optimizations**
-- [X] Support `v-pred` **SDXL** checkpoints *(**eg.** [NoobAI](https://civitai.com/models/833294?modelVersionId=1190596))*
+- [X] Allow inserting the upscaled image into the Gallery instead of overwriting the input image
+ - *(backported from upstream [PR](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16405))*
+- [X] Support `v-pred` **SDXL** checkpoints *(**e.g.** [NoobAI](https://civitai.com/models/833294?modelVersionId=1190596))*
- [X] Support new LoRA architectures
- [X] Update `spandrel`
- support new Upscaler architectures
- [X] Add `pillow-heif` package
- support `.avif` and `.heif` images
- [X] Automatically determine the optimal row count for `X/Y/Z Plot`
- [X] `DepthAnything v2` Preprocessor
- [X] Support [NoobAI Inpaint](https://civitai.com/models/1376234/noobai-inpainting-controlnet) ControlNet
- [X] Support [Union](https://huggingface.co/xinsir/controlnet-union-sdxl-1.0) / [ProMax](https://huggingface.co/brad-twinkl/controlnet-union-sdxl-1.0-promax) ControlNet
@@ -110,15 +129,17 @@ The name "Forge" is inspired by "Minecraft Forge". This project aims to become t
- [X] Some Preprocessors *(ControlNet)*
- [X] `Photopea` and `openpose_editor` *(ControlNet)*
- [X] Unix `.sh` launch scripts
- - You can still use this WebUI by copying a launch script from another working WebUI; I just don't want to maintain them...
+    - You can still use this WebUI by simply copying a launch script from another working WebUI
#### Optimizations
- [X] **[Freedom]** Natively integrate the `SD1` and `SDXL` logics
- no longer `git` `clone` any repository on fresh install
- no more random hacks and monkey patches
+- [X] Fix `canvas-zoom-and-pan` built-in extension
+ - no more infinite-resizing bug when using `Send to` buttons
- [X] Fix memory leak when switching checkpoints
-- [X] Clean up the `ldm_patched` *(**ie.** `comfy`)* folder
+- [X] Clean up the `ldm_patched` *(**i.e.** `comfy`)* folder
- [X] Remove unused `cmd_args`
- [X] Remove unused `args_parser`
- [X] Remove unused `shared_options`
@@ -127,6 +148,9 @@ The name "Forge" is inspired by "Minecraft Forge". This project aims to become t
- [X] Remove redundant upscaler codes
- put every upscaler inside the `ESRGAN` folder
- [X] Optimize upscaler logics
+- [X] Optimize certain operations in `Spandrel`
+- [X] Optimize the creation of Extra Networks pages
+ - *(backported from Automatic1111 Webui upstream)*
- [X] Improve color correction
- [X] Improve hash caching
- [X] Improve error logs
@@ -135,16 +159,21 @@ The name "Forge" is inspired by "Minecraft Forge". This project aims to become t
- improve formatting
- update descriptions
- [X] Check for Extension updates in parallel
-- [X] Moved `embeddings` folder into `models` folder
+- [X] Move `embeddings` folder into `models` folder
- [X] ControlNet Rewrite
- change Units to `gr.Tab`
- remove multi-inputs, as they are "[misleading](https://github.com/lllyasviel/stable-diffusion-webui-forge/discussions/932)"
- change `visible` toggle to `interactive` toggle; now the UI will no longer jump around
- - improved `Presets` application
+ - improve `Presets` application
+ - fix `Inpaint not masked` mode
- [X] Disable Refiner by default
- enable again in **Settings/UI Alternatives**
- [X] Disable Tree View by default
- enable again in **Settings/Extra Networks**
+- [X] Hide Sampler Parameters by default
+    - enable again by adding the **--adv-samplers** flag
+- [X] Hide some X/Y/Z Plot options by default
+    - enable again by adding the **--adv-xyz** flag
- [X] Run `text encoder` on CPU by default
- [X] Fix `pydantic` Errors
- [X] Fix `Soft Inpainting`
@@ -154,7 +183,7 @@ The name "Forge" is inspired by "Minecraft Forge". This project aims to become t
- [X] Update `protobuf`
- faster `insightface` loading
- [X] Update to latest PyTorch
- - `torch==2.7.0+cu128`
+ - `torch==2.7.1+cu128`
- `xformers==0.0.30`
> [!Note]
@@ -175,7 +204,6 @@ The name "Forge" is inspired by "Minecraft Forge". This project aims to become t
- `--no-download-sd-model`: Do not download a default checkpoint
- can be removed after you download some checkpoints of your choice
- `--xformers`: Install the `xformers` package to speed up generation
- - Currently, `torch==2.7.0` does **not** support `xformers` yet
- `--port`: Specify a server port to use
- defaults to `7860`
- `--api`: Enable [API](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/API) access
@@ -449,6 +477,9 @@ In my experience, the speed of each attention function for SDXL is ranked in the
> [!Note]
> `SageAttention` is based on quantization, so its quality might be slightly worse than others
+> [!Important]
+> When using `SageAttention 2`, both a positive and a negative prompt are required; omitting the negative prompt can cause `NaN` issues
+
## Issues & Requests
diff --git a/extensions-builtin/Lora/network.py b/extensions-builtin/Lora/network.py
index 5ae6420ce964665afec844af7f9468efd8c1ee2c..c94a1bbc5e92279f332200570e8db71ae1fbdf8d 100644
--- a/extensions-builtin/Lora/network.py
+++ b/extensions-builtin/Lora/network.py
@@ -3,8 +3,6 @@ from __future__ import annotations
import enum
from collections import namedtuple
-import torch.nn as nn
-import torch.nn.functional as F
from modules import cache, errors, hashes, sd_models, shared
NetworkWeights = namedtuple("NetworkWeights", ["network_key", "sd_key", "w", "sd_module"])
@@ -33,12 +31,11 @@ class NetworkOnDisk:
def read_metadata():
metadata = sd_models.read_metadata_from_safetensors(filename)
- metadata.pop("ssmd_cover_images", None) # cover images are too big to display in UI
return metadata
if self.is_safetensors:
try:
- self.metadata = cache.cached_data_for_file("safetensors-metadata", "/".join(["lora", self.name]), filename, read_metadata)
+ self.metadata = cache.cached_data_for_file("safetensors-metadata", f"lora/{self.name}", filename, read_metadata)
except Exception as e:
errors.display(e, f"reading lora {filename}")
@@ -53,7 +50,7 @@ class NetworkOnDisk:
self.hash: str = None
self.shorthash: str = None
- self.set_hash(self.metadata.get("sshs_model_hash") or hashes.sha256_from_cache(self.filename, "/".join(["lora", self.name]), use_addnet_hash=self.is_safetensors) or "")
+ self.set_hash(self.metadata.get("sshs_model_hash") or hashes.sha256_from_cache(self.filename, f"lora/{self.name}", use_addnet_hash=self.is_safetensors) or "")
self.sd_version: "SDVersion" = self.detect_version()
@@ -76,14 +73,7 @@ class NetworkOnDisk:
def read_hash(self):
if not self.hash:
- self.set_hash(
- hashes.sha256(
- self.filename,
- "/".join(["lora", self.name]),
- use_addnet_hash=self.is_safetensors,
- )
- or ""
- )
+ self.set_hash(hashes.sha256(self.filename, f"lora/{self.name}", use_addnet_hash=self.is_safetensors) or "")
def get_alias(self):
import networks
@@ -107,89 +97,3 @@ class Network: # LoraModule
self.mentioned_name = None
"""the text that was used to add the network to prompt - can be either name or an alias"""
-
-
-class ModuleType:
- def create_module(self, net: Network, weights: NetworkWeights) -> Network | None:
- return None
-
-
-class NetworkModule:
- def __init__(self, net: Network, weights: NetworkWeights):
- self.network = net
- self.network_key = weights.network_key
- self.sd_key = weights.sd_key
- self.sd_module = weights.sd_module
-
- if hasattr(self.sd_module, "weight"):
- self.shape = self.sd_module.weight.shape
-
- self.ops = None
- self.extra_kwargs = {}
- if isinstance(self.sd_module, nn.Conv2d):
- self.ops = F.conv2d
- self.extra_kwargs = {
- "stride": self.sd_module.stride,
- "padding": self.sd_module.padding,
- }
- elif isinstance(self.sd_module, nn.Linear):
- self.ops = F.linear
- elif isinstance(self.sd_module, nn.LayerNorm):
- self.ops = F.layer_norm
- self.extra_kwargs = {
- "normalized_shape": self.sd_module.normalized_shape,
- "eps": self.sd_module.eps,
- }
- elif isinstance(self.sd_module, nn.GroupNorm):
- self.ops = F.group_norm
- self.extra_kwargs = {
- "num_groups": self.sd_module.num_groups,
- "eps": self.sd_module.eps,
- }
-
- self.dim = None
- self.bias = weights.w.get("bias")
- self.alpha = weights.w["alpha"].item() if "alpha" in weights.w else None
- self.scale = weights.w["scale"].item() if "scale" in weights.w else None
-
- def multiplier(self):
- if "transformer" in self.sd_key[:20]:
- return self.network.te_multiplier
- else:
- return self.network.unet_multiplier
-
- def calc_scale(self):
- if self.scale is not None:
- return self.scale
- if self.dim is not None and self.alpha is not None:
- return self.alpha / self.dim
-
- return 1.0
-
- def finalize_updown(self, updown, orig_weight, output_shape, ex_bias=None):
- if self.bias is not None:
- updown = updown.reshape(self.bias.shape)
- updown += self.bias.to(orig_weight.device, dtype=updown.dtype)
- updown = updown.reshape(output_shape)
-
- if len(output_shape) == 4:
- updown = updown.reshape(output_shape)
-
- if orig_weight.size().numel() == updown.size().numel():
- updown = updown.reshape(orig_weight.shape)
-
- if ex_bias is not None:
- ex_bias = ex_bias * self.multiplier()
-
- return updown * self.calc_scale() * self.multiplier(), ex_bias
-
- def calc_updown(self, target):
- raise NotImplementedError
-
- def forward(self, x, y):
- """A general forward implementation for all modules"""
- if self.ops is None:
- raise NotImplementedError
-
- updown, ex_bias = self.calc_updown(self.sd_module.weight)
- return y + self.ops(x, weight=updown, bias=ex_bias, **self.extra_kwargs)
diff --git a/extensions-builtin/Lora/ui_edit_user_metadata.py b/extensions-builtin/Lora/ui_edit_user_metadata.py
index f4ab1fb8666538440c9184e9ea512e75ed4fa609..ba15303be1884d06fbac8622398080144d876ac9 100644
--- a/extensions-builtin/Lora/ui_edit_user_metadata.py
+++ b/extensions-builtin/Lora/ui_edit_user_metadata.py
@@ -51,13 +51,13 @@ class LoraUserMetadataEditor(UserMetadataEditor):
def save_lora_user_metadata(
self,
- name,
- desc,
- sd_version,
- activation_text,
- preferred_weight,
- negative_text,
- notes,
+ name: str,
+ desc: str,
+ sd_version: str,
+ activation_text: str,
+ preferred_weight: float,
+ negative_text: str,
+ notes: str,
):
user_metadata = self.get_user_metadata(name)
user_metadata["description"] = desc
@@ -68,7 +68,6 @@ class LoraUserMetadataEditor(UserMetadataEditor):
user_metadata["notes"] = notes
self.write_user_metadata(name, user_metadata)
- self.page.refresh()
def get_metadata_table(self, name):
table = super().get_metadata_table(name)
@@ -157,8 +156,8 @@ class LoraUserMetadataEditor(UserMetadataEditor):
self.create_default_editor_elems()
self.taginfo = gr.HighlightedText(label="Training dataset tags")
- self.edit_activation_text = gr.Text(label="Activation text", info="Will be added to prompt along with Lora")
- self.edit_negative_text = gr.Text(label="Negative prompt", info="Will be added to negative prompts")
+ self.edit_activation_text = gr.Textbox(label="Positive Prompt", info="Will be added to the prompt after the LoRA syntax", lines=2)
+ self.edit_negative_text = gr.Textbox(label="Negative Prompt", info="Will be added to the negative prompt", lines=2)
self.slider_preferred_weight = gr.Slider(
label="Preferred weight",
info="Set to 0 to use the default set in Settings",
diff --git a/extensions-builtin/canvas-zoom-and-pan/javascript/zoom.js b/extensions-builtin/canvas-zoom-and-pan/javascript/zoom.js
index c6146f38730370b5bc7f18a2efaba4ba03b8b54a..055c412a41f1160fc9c877ca3371f1189e8fa935 100644
--- a/extensions-builtin/canvas-zoom-and-pan/javascript/zoom.js
+++ b/extensions-builtin/canvas-zoom-and-pan/javascript/zoom.js
@@ -1,53 +1,53 @@
-(function () {
+const elementIDs = {
+ img2imgTabs: "#mode_img2img .tab-nav",
+ inpaint: "#img2maskimg",
+ inpaintSketch: "#inpaint_sketch",
+ rangeGroup: "#img2img_column_size",
+ sketch: "#img2img_sketch",
+};
+
+const tabNameToElementId = {
+ "Inpaint sketch": elementIDs.inpaintSketch,
+ "Inpaint": elementIDs.inpaint,
+ "Sketch": elementIDs.sketch,
+};
+(function () {
onUiLoaded(async () => {
- const elementIDs = {
- img2imgTabs: "#mode_img2img .tab-nav",
- inpaint: "#img2maskimg",
- inpaintSketch: "#inpaint_sketch",
- rangeGroup: "#img2img_column_size",
- sketch: "#img2img_sketch"
- };
-
- const tabNameToElementId = {
- "Inpaint sketch": elementIDs.inpaintSketch,
- "Inpaint": elementIDs.inpaint,
- "Sketch": elementIDs.sketch
- };
-
/** Waits for an element to be present in the DOM */
- const waitForElement = (id) => new Promise(resolve => {
- const checkForElement = () => {
- const element = document.querySelector(id);
- if (element) return resolve(element);
- setTimeout(checkForElement, 100);
- };
- checkForElement();
- });
+ const waitForElement = (id) =>
+ new Promise((resolve) => {
+ const checkForElement = () => {
+ const element = document.querySelector(id);
+ if (element) return resolve(element);
+ setTimeout(checkForElement, 100);
+ };
+ checkForElement();
+ });
function getActiveTab(elements, all = false) {
+ if (!elements.img2imgTabs) return null;
+
const tabs = elements.img2imgTabs.querySelectorAll("button");
if (all) return tabs;
for (let tab of tabs) {
- if (tab.classList.contains("selected"))
- return tab;
+ if (tab.classList.contains("selected")) return tab;
}
}
// Get tab ID
function getTabId(elements) {
const activeTab = getActiveTab(elements);
+ if (!activeTab) return null;
return tabNameToElementId[activeTab.innerText];
}
// Wait until opts loaded
async function waitForOpts() {
for (; ;) {
- if (window.opts && Object.keys(window.opts).length) {
- return window.opts;
- }
- await new Promise(resolve => setTimeout(resolve, 100));
+ if (window.opts && Object.keys(window.opts).length) return window.opts;
+ await new Promise((resolve) => setTimeout(resolve, 100));
}
}
@@ -108,8 +108,7 @@
typeof userValue === "object" ||
userValue === "disable"
) {
- result[key] =
- userValue === undefined ? defaultValue : userValue;
+ result[key] = userValue === undefined ? defaultValue : userValue;
} else if (isValidHotkey(userValue)) {
const normalizedUserValue = normalizeHotkey(userValue);
@@ -120,20 +119,20 @@
} else {
console.error(
`Hotkey: ${formatHotkeyForDisplay(
- userValue
+ userValue,
)} for ${key} is repeated and conflicts with another hotkey. The default hotkey is used: ${formatHotkeyForDisplay(
- defaultValue
- )}`
+ defaultValue,
+ )}`,
);
result[key] = defaultValue;
}
} else {
console.error(
`Hotkey: ${formatHotkeyForDisplay(
- userValue
+ userValue,
)} for ${key} is not valid. The default hotkey is used: ${formatHotkeyForDisplay(
- defaultValue
- )}`
+ defaultValue,
+ )}`,
);
result[key] = defaultValue;
}
@@ -145,11 +144,10 @@
// Disables functions in the config object based on the provided list of function names
function disableFunctions(config, disabledFunctions) {
// Bind the hasOwnProperty method to the functionMap object to avoid errors
- const hasOwnProperty =
- Object.prototype.hasOwnProperty.bind(functionMap);
+ const hasOwnProperty = Object.prototype.hasOwnProperty.bind(functionMap);
// Loop through the disabledFunctions array and disable the corresponding functions in the config object
- disabledFunctions.forEach(funcName => {
+ disabledFunctions.forEach((funcName) => {
if (hasOwnProperty(funcName)) {
const key = functionMap[funcName];
config[key] = "disable";
@@ -179,16 +177,14 @@
if (!img || !imageARPreview) return;
imageARPreview.style.transform = "";
- if (parseFloat(mainTab.style.width) > 865) {
+ if (parseFloat(mainTab.style.width) > 800) {
const transformString = mainTab.style.transform;
const scaleMatch = transformString.match(
- /scale\(([-+]?[0-9]*\.?[0-9]+)\)/
+ /scale\(([-+]?[0-9]*\.?[0-9]+)\)/,
);
let zoom = 1; // default zoom
- if (scaleMatch && scaleMatch[1]) {
- zoom = Number(scaleMatch[1]);
- }
+ if (scaleMatch && scaleMatch[1]) zoom = Number(scaleMatch[1]);
imageARPreview.style.transformOrigin = "0 0";
imageARPreview.style.transform = `scale(${zoom})`;
@@ -200,7 +196,7 @@
setTimeout(() => {
img.style.display = "none";
- }, 400);
+ }, 500);
}
const hotkeysConfigOpts = await waitForOpts();
@@ -229,39 +225,39 @@
"Moving canvas": "canvas_hotkey_move",
"Fullscreen": "canvas_hotkey_fullscreen",
"Reset Zoom": "canvas_hotkey_reset",
- "Overlap": "canvas_hotkey_overlap"
+ "Overlap": "canvas_hotkey_overlap",
};
// Loading the configuration from opts
const preHotkeysConfig = createHotkeyConfig(
defaultHotkeysConfig,
- hotkeysConfigOpts
+ hotkeysConfigOpts,
);
// Disable functions that are not needed by the user
const hotkeysConfig = disableFunctions(
preHotkeysConfig,
- preHotkeysConfig.canvas_disabled_functions
+ preHotkeysConfig.canvas_disabled_functions,
);
let isMoving = false;
- let mouseX, mouseY;
let activeElement;
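+        // Track whether Alt was used as a zoom/brush modifier so handleAltKeyUp can stop Firefox from opening its menu bar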
+ let interactedWithAltKey = false;
const elements = Object.fromEntries(
- Object.keys(elementIDs).map(id => [
+ Object.keys(elementIDs).map((id) => [
id,
- gradioApp().querySelector(elementIDs[id])
- ])
+ gradioApp().querySelector(elementIDs[id]),
+ ]),
);
const elemData = {};
// Apply functionality to the range inputs. Restore redmask and correct for long images.
- const rangeInputs = elements.rangeGroup ?
- Array.from(elements.rangeGroup.querySelectorAll("input")) :
- [
+ const rangeInputs = elements.rangeGroup
+ ? Array.from(elements.rangeGroup.querySelectorAll("input"))
+ : [
gradioApp().querySelector("#img2img_width input[type='range']"),
- gradioApp().querySelector("#img2img_height input[type='range']")
+ gradioApp().querySelector("#img2img_height input[type='range']"),
];
for (const input of rangeInputs) {
@@ -272,7 +268,7 @@
const targetElement = gradioApp().querySelector(elemId);
if (!targetElement) {
- console.log("Element not found");
+ console.log(`Element ${elemId} not found...`);
return;
}
@@ -281,14 +277,13 @@
elemData[elemId] = {
zoom: 1,
panX: 0,
- panY: 0
+ panY: 0,
};
let fullScreenMode = false;
// Create tooltip
function createTooltip() {
- const toolTipElement =
- targetElement.querySelector(".image-container");
+ const toolTipElement = targetElement.querySelector(".image-container");
const tooltip = document.createElement("div");
tooltip.className = "canvas-tooltip";
@@ -306,39 +301,37 @@
{
configKey: "canvas_hotkey_zoom",
action: "Zoom canvas",
- keySuffix: " + wheel"
+ keySuffix: " + wheel",
},
{
configKey: "canvas_hotkey_adjust",
action: "Adjust brush size",
- keySuffix: " + wheel"
+ keySuffix: " + wheel",
},
{ configKey: "canvas_hotkey_reset", action: "Reset zoom" },
{
configKey: "canvas_hotkey_fullscreen",
- action: "Fullscreen mode"
+ action: "Fullscreen mode",
},
{ configKey: "canvas_hotkey_move", action: "Move canvas" },
- { configKey: "canvas_hotkey_overlap", action: "Overlap" }
+ { configKey: "canvas_hotkey_overlap", action: "Overlap" },
];
// Create hotkeys array with disabled property based on the config values
- const hotkeys = hotkeysInfo.map(info => {
+ const hotkeys = hotkeysInfo.map((info) => {
const configValue = hotkeysConfig[info.configKey];
- const key = info.keySuffix ?
- `${configValue}${info.keySuffix}` :
- configValue.charAt(configValue.length - 1);
+ const key = info.keySuffix
+ ? `${configValue}${info.keySuffix}`
+ : configValue.charAt(configValue.length - 1);
return {
key,
action: info.action,
- disabled: configValue === "disable"
+ disabled: configValue === "disable",
};
});
for (const hotkey of hotkeys) {
- if (hotkey.disabled) {
- continue;
- }
+ if (hotkey.disabled) continue;
const p = document.createElement("p");
p.innerHTML = `${hotkey.key} - ${hotkey.action}`;
@@ -353,16 +346,14 @@
toolTipElement.appendChild(tooltip);
}
- //Show tool tip if setting enable
- if (hotkeysConfig.canvas_show_tooltip) {
- createTooltip();
- }
+ // Show tool tip if setting enable
+ if (hotkeysConfig.canvas_show_tooltip) createTooltip();
// In the course of research, it was found that the tag img is very harmful when zooming and creates white canvases. This hack allows you to almost never think about this problem, it has no effect on webui.
function fixCanvas() {
- const activeTab = getActiveTab(elements).textContent.trim();
+ const activeTab = getActiveTab(elements)?.textContent.trim();
- if (activeTab !== "img2img") {
+ if (activeTab && activeTab !== "img2img") {
const img = targetElement.querySelector(`${elemId} img`);
if (img && img.style.display !== "none") {
@@ -377,12 +368,10 @@
elemData[elemId] = {
zoomLevel: 1,
panX: 0,
- panY: 0
+ panY: 0,
};
- if (isExtension) {
- targetElement.style.overflow = "hidden";
- }
+ if (isExtension) targetElement.style.overflow = "hidden";
targetElement.isZoomed = false;
@@ -390,16 +379,16 @@
targetElement.style.transform = `scale(${elemData[elemId].zoomLevel}) translate(${elemData[elemId].panX}px, ${elemData[elemId].panY}px)`;
const canvas = gradioApp().querySelector(
- `${elemId} canvas[key="interface"]`
+ `${elemId} canvas[key="interface"]`,
);
toggleOverlap("off");
fullScreenMode = false;
- const closeBtn = targetElement.querySelector("button[aria-label='Remove Image']");
- if (closeBtn) {
- closeBtn.addEventListener("click", resetZoom);
- }
+ const closeBtn = targetElement.querySelector(
+ "button[aria-label='Remove Image']",
+ );
+ if (closeBtn) closeBtn.addEventListener("click", resetZoom);
if (canvas && isExtension) {
const parentElement = targetElement.closest('[id^="component-"]');
@@ -411,14 +400,13 @@
fitToElement();
return;
}
-
}
if (
canvas &&
!isExtension &&
- parseFloat(canvas.style.width) > 865 &&
- parseFloat(targetElement.style.width) > 865
+ parseFloat(canvas.style.width) > 800 &&
+ parseFloat(targetElement.style.width) > 800
) {
fitToElement();
return;
@@ -435,11 +423,8 @@
targetElement.style.zIndex =
targetElement.style.zIndex !== zIndex2 ? zIndex2 : zIndex1;
- if (forced === "off") {
- targetElement.style.zIndex = zIndex1;
- } else if (forced === "on") {
- targetElement.style.zIndex = zIndex2;
- }
+ if (forced === "off") targetElement.style.zIndex = zIndex1;
+ else if (forced === "on") targetElement.style.zIndex = zIndex2;
}
// Adjust the brush size based on the deltaY value from a mouse wheel event
@@ -447,21 +432,18 @@
elemId,
deltaY,
withoutValue = false,
- percentage = 5
+ percentage = 5,
) {
const input =
gradioApp().querySelector(
- `${elemId} input[aria-label='Brush radius']`
+ `${elemId} input[aria-label='Brush radius']`,
) ||
- gradioApp().querySelector(
- `${elemId} button[aria-label="Use brush"]`
- );
+ gradioApp().querySelector(`${elemId} button[aria-label="Use brush"]`);
if (input) {
input.click();
if (!withoutValue) {
- const maxValue =
- parseFloat(input.getAttribute("max")) || 100;
+ const maxValue = parseFloat(input.getAttribute("max")) || 100;
const changeAmount = maxValue * (percentage / 100);
const newValue =
parseFloat(input.value) +
@@ -474,7 +456,7 @@
// Reset zoom when uploading a new image
const fileInput = gradioApp().querySelector(
- `${elemId} input[type="file"][accept="image/*"].svelte-116rqfv`
+ `${elemId} input[type="file"][accept="image/*"].svelte-116rqfv`,
);
fileInput.addEventListener("click", resetZoom);
@@ -482,18 +464,23 @@
function updateZoom(newZoomLevel, mouseX, mouseY) {
newZoomLevel = Math.max(0.1, Math.min(newZoomLevel, 15));
- elemData[elemId].panX +=
- mouseX - (mouseX * newZoomLevel) / elemData[elemId].zoomLevel;
- elemData[elemId].panY +=
- mouseY - (mouseY * newZoomLevel) / elemData[elemId].zoomLevel;
+ // Check if we're close to the original zoom level (1.0)
+ if (Math.abs(newZoomLevel - 1.0) < 0.01) {
+ newZoomLevel = 1;
+ elemData[elemId].panX = 0;
+ elemData[elemId].panY = 0;
+ } else {
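+        // Adjust pan so zooming stays centered on the cursor position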
+ elemData[elemId].panX +=
+ mouseX - (mouseX * newZoomLevel) / elemData[elemId].zoomLevel;
+ elemData[elemId].panY +=
+ mouseY - (mouseY * newZoomLevel) / elemData[elemId].zoomLevel;
+ }
targetElement.style.transformOrigin = "0 0";
targetElement.style.transform = `translate(${elemData[elemId].panX}px, ${elemData[elemId].panY}px) scale(${newZoomLevel})`;
toggleOverlap("on");
- if (isExtension) {
- targetElement.style.overflow = "visible";
- }
+ if (isExtension) targetElement.style.overflow = "visible";
return newZoomLevel;
}
@@ -502,27 +489,26 @@
function changeZoomLevel(operation, e) {
if (isModifierKey(e, hotkeysConfig.canvas_hotkey_zoom)) {
e.preventDefault();
+ if (hotkeysConfig.canvas_hotkey_zoom === "Alt")
+ interactedWithAltKey = true;
let zoomPosX, zoomPosY;
let delta = 0.2;
- if (elemData[elemId].zoomLevel > 7) {
- delta = 0.9;
- } else if (elemData[elemId].zoomLevel > 2) {
- delta = 0.6;
- }
+ if (elemData[elemId].zoomLevel > 7) delta = 0.9;
+ else if (elemData[elemId].zoomLevel > 2) delta = 0.6;
zoomPosX = e.clientX;
zoomPosY = e.clientY;
fullScreenMode = false;
elemData[elemId].zoomLevel = updateZoom(
- elemData[elemId].zoomLevel +
- (operation === "+" ? delta : -delta),
+ elemData[elemId].zoomLevel + (operation === "+" ? delta : -delta),
zoomPosX - targetElement.getBoundingClientRect().left,
- zoomPosY - targetElement.getBoundingClientRect().top
+ zoomPosY - targetElement.getBoundingClientRect().top,
);
- targetElement.isZoomed = true;
+ targetElement.isZoomed =
+ Math.abs(elemData[elemId].zoomLevel - 1.0) > 0.01;
}
}
@@ -533,17 +519,14 @@
*/
function fitToElement() {
- //Reset Zoom
+ // Reset Zoom
targetElement.style.transform = `translate(${0}px, ${0}px) scale(${1})`;
let parentElement;
- if (isExtension) {
+ if (isExtension)
parentElement = targetElement.closest('[id^="component-"]');
- } else {
- parentElement = targetElement.parentElement;
- }
-
+ else parentElement = targetElement.parentElement;
// Get element and screen dimensions
const elementWidth = targetElement.offsetWidth;
@@ -569,8 +552,7 @@
const originYValue = parseFloat(originY);
const offsetX =
- (screenWidth - elementWidth * scale) / 2 -
- originXValue * (1 - scale);
+ (screenWidth - elementWidth * scale) / 2 - originXValue * (1 - scale);
const offsetY =
(screenHeight - elementHeight * scale) / 2.5 -
originYValue * (1 - scale);
@@ -596,18 +578,15 @@
// Fullscreen mode
function fitToScreen() {
const canvas = gradioApp().querySelector(
- `${elemId} canvas[key="interface"]`
+ `${elemId} canvas[key="interface"]`,
);
if (!canvas) return;
- if (canvas.offsetWidth > 862 || isExtension) {
- targetElement.style.width = (canvas.offsetWidth + 2) + "px";
- }
+ if (canvas.offsetWidth > 800 || isExtension)
+ targetElement.style.width = canvas.offsetWidth + 16 + "px";
- if (isExtension) {
- targetElement.style.overflow = "visible";
- }
+ if (isExtension) targetElement.style.overflow = "visible";
if (fullScreenMode) {
resetZoom();
@@ -615,8 +594,8 @@
return;
}
- //Reset Zoom
- targetElement.style.transform = `translate(${0}px, ${0}px) scale(${1})`;
+ // Reset Zoom
+ targetElement.style.transform = 'translate(0px, 0px) scale(1.0)';
// Get scrollbar width to right-align the image
const scrollbarWidth =
@@ -670,24 +649,31 @@
// Handle keydown events
function handleKeyDown(event) {
// Disable key locks to make pasting from the buffer work correctly
- if ((event.ctrlKey && event.code === 'KeyV') || (event.ctrlKey && event.code === 'KeyC') || event.code === "F5") {
+ if (
+ (event.ctrlKey && event.code === "KeyV") ||
+ (event.ctrlKey && event.code === "KeyC") ||
+ event.code === "F5"
+ ) {
return;
}
// before activating shortcut, ensure user is not actively typing in an input field
if (!hotkeysConfig.canvas_blur_prompt) {
- if (event.target.nodeName === 'TEXTAREA' || event.target.nodeName === 'INPUT') {
+ if (
+ event.target.nodeName === "TEXTAREA" ||
+ event.target.nodeName === "INPUT"
+ )
return;
- }
}
-
const hotkeyActions = {
[hotkeysConfig.canvas_hotkey_reset]: resetZoom,
[hotkeysConfig.canvas_hotkey_overlap]: toggleOverlap,
[hotkeysConfig.canvas_hotkey_fullscreen]: fitToScreen,
- [hotkeysConfig.canvas_hotkey_shrink_brush]: () => adjustBrushSize(elemId, 10),
- [hotkeysConfig.canvas_hotkey_grow_brush]: () => adjustBrushSize(elemId, -10)
+ [hotkeysConfig.canvas_hotkey_shrink_brush]: () =>
+ adjustBrushSize(elemId, 10),
+ [hotkeysConfig.canvas_hotkey_grow_brush]: () =>
+ adjustBrushSize(elemId, -10),
};
const action = hotkeyActions[event.code];
@@ -699,15 +685,8 @@
if (
isModifierKey(event, hotkeysConfig.canvas_hotkey_zoom) ||
isModifierKey(event, hotkeysConfig.canvas_hotkey_adjust)
- ) {
+ )
event.preventDefault();
- }
- }
-
- // Get Mouse position
- function getMousePosition(e) {
- mouseX = e.offsetX;
- mouseY = e.offsetY;
}
// Simulation of the function to put a long image into the screen.
@@ -716,31 +695,40 @@
targetElement.isExpanded = false;
function autoExpand() {
- const canvas = document.querySelector(`${elemId} canvas[key="interface"]`);
+ const canvas = document.querySelector(
+ `${elemId} canvas[key="interface"]`,
+ );
if (canvas) {
- if (hasHorizontalScrollbar(targetElement) && targetElement.isExpanded === false) {
- targetElement.style.visibility = "hidden";
+ if (
+ hasHorizontalScrollbar(targetElement) &&
+ targetElement.isExpanded === false
+ ) {
setTimeout(() => {
fitToScreen();
resetZoom();
- targetElement.style.visibility = "visible";
targetElement.isExpanded = true;
- }, 10);
+ }, 25);
}
}
}
- targetElement.addEventListener("mousemove", getMousePosition);
-
- //observers
+ // Observers
// Creating an observer with a callback function to handle DOM changes
- const observer = new MutationObserver((mutationsList, observer) => {
- for (let mutation of mutationsList) {
+ const observer = new MutationObserver((mutationsList) => {
+ for (const mutation of mutationsList) {
// If the style attribute of the canvas has changed, by observation it happens only when the picture changes
- if (mutation.type === 'attributes' && mutation.attributeName === 'style' &&
- mutation.target.tagName.toLowerCase() === 'canvas') {
+ if (
+ mutation.type === "attributes" &&
+ mutation.attributeName === "style" &&
+ mutation.target.tagName.toLowerCase() === "canvas"
+ ) {
targetElement.isExpanded = false;
- setTimeout(resetZoom, 10);
+ setTimeout(resetZoom, 25);
+ setTimeout(autoExpand, 25);
+ setTimeout(() => {
+ const btn = targetElement.querySelector("button[aria-label='Undo']");
+            if (btn) btn.click();
+ }, 25);
}
}
});
@@ -749,7 +737,11 @@
if (hotkeysConfig.canvas_auto_expand) {
targetElement.addEventListener("mousemove", autoExpand);
// Set up an observer to track attribute changes
- observer.observe(targetElement, { attributes: true, childList: true, subtree: true });
+ observer.observe(targetElement, {
+ attributes: true,
+ childList: true,
+ subtree: true,
+ });
}
// Handle events only inside the targetElement
@@ -778,44 +770,53 @@
targetElement.addEventListener("mouseleave", handleMouseLeave);
// Reset zoom when click on another tab
- elements.img2imgTabs.addEventListener("click", resetZoom);
- elements.img2imgTabs.addEventListener("click", () => {
- // targetElement.style.width = "";
- if (parseInt(targetElement.style.width) > 865) {
- setTimeout(fitToElement, 0);
- }
- });
+ if (elements.img2imgTabs) {
+ elements.img2imgTabs.addEventListener("click", resetZoom);
+ elements.img2imgTabs.addEventListener("click", () => {
+ // targetElement.style.width = "";
+ if (parseInt(targetElement.style.width) > 800)
+ setTimeout(fitToElement, 0);
+ });
+ }
- targetElement.addEventListener("wheel", e => {
- // change zoom level
- const operation = e.deltaY > 0 ? "-" : "+";
- changeZoomLevel(operation, e);
+ targetElement.addEventListener(
+ "wheel",
+ (e) => {
+ // change zoom level
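+          // e.deltaY is the standard wheel delta; legacy wheelDelta has the opposite sign, hence the negation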
+ const operation = (e.deltaY || -e.wheelDelta) > 0 ? "-" : "+";
+ changeZoomLevel(operation, e);
- // Handle brush size adjustment with ctrl key pressed
- if (isModifierKey(e, hotkeysConfig.canvas_hotkey_adjust)) {
- e.preventDefault();
+ // Handle brush size adjustment with ctrl key pressed
+ if (isModifierKey(e, hotkeysConfig.canvas_hotkey_adjust)) {
+ e.preventDefault();
- // Increase or decrease brush size based on scroll direction
- adjustBrushSize(elemId, e.deltaY);
- }
- });
+ if (hotkeysConfig.canvas_hotkey_adjust === "Alt")
+ interactedWithAltKey = true;
+
+ // Increase or decrease brush size based on scroll direction
+ adjustBrushSize(elemId, e.deltaY);
+ }
+ },
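+        // passive: false lets the handler call e.preventDefault() to block page scrolling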
+ { passive: false },
+ );
// Handle the move event for pan functionality. Updates the panX and panY variables and applies the new transform to the target element.
function handleMoveKeyDown(e) {
-
// Disable key locks to make pasting from the buffer work correctly
- if ((e.ctrlKey && e.code === 'KeyV') || (e.ctrlKey && event.code === 'KeyC') || e.code === "F5") {
+ if (
+ (e.ctrlKey && e.code === "KeyV") ||
+        (e.ctrlKey && e.code === "KeyC") ||
+ e.code === "F5"
+ ) {
return;
}
// before activating shortcut, ensure user is not actively typing in an input field
if (!hotkeysConfig.canvas_blur_prompt) {
- if (e.target.nodeName === 'TEXTAREA' || e.target.nodeName === 'INPUT') {
+ if (e.target.nodeName === "TEXTAREA" || e.target.nodeName === "INPUT")
return;
- }
}
-
if (e.code === hotkeysConfig.canvas_hotkey_move) {
if (!e.ctrlKey && !e.metaKey && isKeyDownHandlerAttached) {
e.preventDefault();
@@ -826,21 +827,26 @@
}
function handleMoveKeyUp(e) {
- if (e.code === hotkeysConfig.canvas_hotkey_move) {
- isMoving = false;
- }
+ if (e.code === hotkeysConfig.canvas_hotkey_move) isMoving = false;
}
document.addEventListener("keydown", handleMoveKeyDown);
document.addEventListener("keyup", handleMoveKeyUp);
+ /** Prevent firefox from opening main menu when alt is used as a hotkey for zoom or brush size */
+ function handleAltKeyUp(e) {
+ if (e.key !== "Alt" || !interactedWithAltKey) return;
+ e.preventDefault();
+ interactedWithAltKey = false;
+ }
+
+ document.addEventListener("keyup", handleAltKeyUp);
+
// Detect zoom level and update the pan speed.
function updatePanPosition(movementX, movementY) {
let panSpeed = 2;
- if (elemData[elemId].zoomLevel > 8) {
- panSpeed = 3.5;
- }
+ if (elemData[elemId].zoomLevel > 8) panSpeed = 3.5;
elemData[elemId].panX += movementX * panSpeed;
elemData[elemId].panY += movementY * panSpeed;
@@ -857,10 +863,7 @@
updatePanPosition(e.movementX, e.movementY);
targetElement.style.pointerEvents = "none";
- if (isExtension) {
- targetElement.style.overflow = "visible";
- }
-
+ if (isExtension) targetElement.style.overflow = "visible";
} else {
targetElement.style.pointerEvents = "auto";
}
@@ -874,26 +877,36 @@
// Checks for extension
function checkForOutBox() {
const parentElement = targetElement.closest('[id^="component-"]');
- if (parentElement.offsetWidth < targetElement.offsetWidth && !targetElement.isExpanded) {
+ if (
+ parentElement.offsetWidth < targetElement.offsetWidth &&
+ !targetElement.isExpanded
+ ) {
resetZoom();
targetElement.isExpanded = true;
}
- if (parentElement.offsetWidth < targetElement.offsetWidth && elemData[elemId].zoomLevel == 1) {
+ if (
+ parentElement.offsetWidth < targetElement.offsetWidth &&
+ elemData[elemId].zoomLevel == 1
+ ) {
resetZoom();
}
- if (parentElement.offsetWidth < targetElement.offsetWidth && targetElement.offsetWidth * elemData[elemId].zoomLevel > parentElement.offsetWidth && elemData[elemId].zoomLevel < 1 && !targetElement.isZoomed) {
+ if (
+ parentElement.offsetWidth < targetElement.offsetWidth &&
+ targetElement.offsetWidth * elemData[elemId].zoomLevel >
+ parentElement.offsetWidth &&
+ elemData[elemId].zoomLevel < 1 &&
+ !targetElement.isZoomed
+ ) {
resetZoom();
}
}
- if (isExtension) {
+ if (isExtension)
targetElement.addEventListener("mousemove", checkForOutBox);
- }
-
- window.addEventListener('resize', (e) => {
+ window.addEventListener("resize", (e) => {
resetZoom();
if (isExtension) {
@@ -903,8 +916,6 @@
});
gradioApp().addEventListener("mousemove", handleMoveByKey);
-
-
}
applyZoomAndPan(elementIDs.sketch, false);
@@ -924,17 +935,20 @@
}
if (!mainEl) return;
- mainEl.addEventListener("click", async () => {
- for (const elementID of elementIDs) {
- const el = await waitForElement(elementID);
- if (!el) break;
- applyZoomAndPan(elementID);
- }
- }, { once: true });
+ mainEl.addEventListener(
+ "click",
+ async () => {
+ for (const elementID of elementIDs) {
+ const el = await waitForElement(elementID);
+ if (!el) break;
+ applyZoomAndPan(elementID);
+ }
+ },
+ { once: true },
+ );
};
window.applyZoomAndPan = applyZoomAndPan; // Only 1 elements, argument elementID, for example applyZoomAndPan("#txt2img_controlnet_ControlNet_input_image")
window.applyZoomAndPanIntegration = applyZoomAndPanIntegration; // for any extension
});
-
})();
diff --git a/extensions-builtin/forge_legacy_preprocessors/annotator/densepose/densepose.py b/extensions-builtin/forge_legacy_preprocessors/annotator/densepose/densepose.py
index bde378de81436f0f33a8bcd94233aaf7f7409e02..5ded6967992886a0b9f5faaf5669d9c193b5b0bb 100644
--- a/extensions-builtin/forge_legacy_preprocessors/annotator/densepose/densepose.py
+++ b/extensions-builtin/forge_legacy_preprocessors/annotator/densepose/densepose.py
@@ -2,7 +2,7 @@ from typing import Tuple
import math
import numpy as np
from enum import IntEnum
-from typing import List, Tuple, Union
+from typing import List, Union
import torch
from torch.nn import functional as F
import logging
diff --git a/extensions-builtin/forge_legacy_preprocessors/annotator/hed/__init__.py b/extensions-builtin/forge_legacy_preprocessors/annotator/hed/__init__.py
index 3bb86953d08f58582334c21600a6e4b5fa7cd031..056059bb74fbaace6e79f55aba480239551f88bd 100644
--- a/extensions-builtin/forge_legacy_preprocessors/annotator/hed/__init__.py
+++ b/extensions-builtin/forge_legacy_preprocessors/annotator/hed/__init__.py
@@ -11,7 +11,6 @@ import torch
import numpy as np
from einops import rearrange
-import os
from modules import devices
from annotator.annotator_path import models_path
from annotator.util import safe_step, nms
diff --git a/extensions-builtin/forge_legacy_preprocessors/annotator/leres/leres/multi_depth_model_woauxi.py b/extensions-builtin/forge_legacy_preprocessors/annotator/leres/leres/multi_depth_model_woauxi.py
index c989a66829a65b9024c95c2f91af670986fc8675..b80ca086d2d498f42cb62b1aa534d5a806882926 100644
--- a/extensions-builtin/forge_legacy_preprocessors/annotator/leres/leres/multi_depth_model_woauxi.py
+++ b/extensions-builtin/forge_legacy_preprocessors/annotator/leres/leres/multi_depth_model_woauxi.py
@@ -2,7 +2,6 @@ from . import network_auxi as network
from .net_tools import get_func
import torch
import torch.nn as nn
-from modules import devices
class RelDepthModel(nn.Module):
diff --git a/extensions-builtin/forge_legacy_preprocessors/annotator/leres/pix2pix/util/visualizer.py b/extensions-builtin/forge_legacy_preprocessors/annotator/leres/pix2pix/util/visualizer.py
index 63c3243b26ec942687dd790ed3589f79f05de3c7..5341a75a484c57b5671052c564c5313c1b0f9e5b 100644
--- a/extensions-builtin/forge_legacy_preprocessors/annotator/leres/pix2pix/util/visualizer.py
+++ b/extensions-builtin/forge_legacy_preprocessors/annotator/leres/pix2pix/util/visualizer.py
@@ -1,11 +1,9 @@
-import numpy as np
import os
import sys
import ntpath
import time
from . import util, html
from subprocess import Popen, PIPE
-import torch
if sys.version_info[0] == 2:
diff --git a/extensions-builtin/forge_legacy_preprocessors/annotator/midas/midas/blocks.py b/extensions-builtin/forge_legacy_preprocessors/annotator/midas/midas/blocks.py
index 0daef3f3fe2fd4a00610d99f1c9023bfca180243..9415b9873bf75673c5db48e5a85219063dadc633 100644
--- a/extensions-builtin/forge_legacy_preprocessors/annotator/midas/midas/blocks.py
+++ b/extensions-builtin/forge_legacy_preprocessors/annotator/midas/midas/blocks.py
@@ -5,7 +5,6 @@ from .vit import (
_make_pretrained_vitb_rn50_384,
_make_pretrained_vitl16_384,
_make_pretrained_vitb16_384,
- forward_vit,
)
diff --git a/extensions-builtin/forge_legacy_preprocessors/annotator/midas/midas/dpt_depth.py b/extensions-builtin/forge_legacy_preprocessors/annotator/midas/midas/dpt_depth.py
index d877bfda47f9c07ef8d73b46b5f5cd86397933eb..30dc6cce102ac883356206f3dc8b001e62f2eaa9 100644
--- a/extensions-builtin/forge_legacy_preprocessors/annotator/midas/midas/dpt_depth.py
+++ b/extensions-builtin/forge_legacy_preprocessors/annotator/midas/midas/dpt_depth.py
@@ -1,10 +1,8 @@
import torch
import torch.nn as nn
-import torch.nn.functional as F
from .base_model import BaseModel
from .blocks import (
- FeatureFusionBlock,
FeatureFusionBlock_custom,
Interpolate,
_make_encoder,
diff --git a/extensions-builtin/forge_legacy_preprocessors/annotator/midas/midas/midas_net_custom.py b/extensions-builtin/forge_legacy_preprocessors/annotator/midas/midas/midas_net_custom.py
index b962f80ef884661de289d791a175f4bbfe2c5ac7..f59c3f0d7abaac7e1f5afb4b4673afe0e801406a 100644
--- a/extensions-builtin/forge_legacy_preprocessors/annotator/midas/midas/midas_net_custom.py
+++ b/extensions-builtin/forge_legacy_preprocessors/annotator/midas/midas/midas_net_custom.py
@@ -8,7 +8,6 @@ import torch.nn as nn
from .base_model import BaseModel
from .blocks import (
- FeatureFusionBlock,
FeatureFusionBlock_custom,
Interpolate,
_make_encoder,
diff --git a/extensions-builtin/forge_legacy_preprocessors/annotator/mlsd/__init__.py b/extensions-builtin/forge_legacy_preprocessors/annotator/mlsd/__init__.py
index f4a06a45a02beaf25421eafa04b49edc38179894..54e119cdb26500dc45e39d44a266075e2952e70b 100644
--- a/extensions-builtin/forge_legacy_preprocessors/annotator/mlsd/__init__.py
+++ b/extensions-builtin/forge_legacy_preprocessors/annotator/mlsd/__init__.py
@@ -49,6 +49,6 @@ def apply_mlsd(input_image, thr_v, thr_d):
cv2.line(
img_output, (x_start, y_start), (x_end, y_end), [255, 255, 255], 1
)
- except Exception as e:
+ except Exception:
pass
return img_output[:, :, 0]
diff --git a/extensions-builtin/forge_legacy_preprocessors/annotator/mlsd/models/mbv2_mlsd_large.py b/extensions-builtin/forge_legacy_preprocessors/annotator/mlsd/models/mbv2_mlsd_large.py
index 1751bb40fa04dbc17e1ede3269ee6acf171a91c4..e6bcc6be99d54cf15decacdf278834d9ad9893ad 100644
--- a/extensions-builtin/forge_legacy_preprocessors/annotator/mlsd/models/mbv2_mlsd_large.py
+++ b/extensions-builtin/forge_legacy_preprocessors/annotator/mlsd/models/mbv2_mlsd_large.py
@@ -1,5 +1,3 @@
-import os
-import sys
import torch
import torch.nn as nn
import torch.utils.model_zoo as model_zoo
diff --git a/extensions-builtin/forge_legacy_preprocessors/annotator/mlsd/models/mbv2_mlsd_tiny.py b/extensions-builtin/forge_legacy_preprocessors/annotator/mlsd/models/mbv2_mlsd_tiny.py
index 1d556776603897d279fcad05498016587f233411..af2f4ebd8df5bbded297a18bae6095e161eb4b8a 100644
--- a/extensions-builtin/forge_legacy_preprocessors/annotator/mlsd/models/mbv2_mlsd_tiny.py
+++ b/extensions-builtin/forge_legacy_preprocessors/annotator/mlsd/models/mbv2_mlsd_tiny.py
@@ -1,5 +1,3 @@
-import os
-import sys
import torch
import torch.nn as nn
import torch.utils.model_zoo as model_zoo
diff --git a/extensions-builtin/forge_legacy_preprocessors/annotator/mlsd/utils.py b/extensions-builtin/forge_legacy_preprocessors/annotator/mlsd/utils.py
index 758a4a91d41c6c4b13bbb8581d1e76009dcbfa80..1be5630e39b290499c22231b7305aee0bd678761 100644
--- a/extensions-builtin/forge_legacy_preprocessors/annotator/mlsd/utils.py
+++ b/extensions-builtin/forge_legacy_preprocessors/annotator/mlsd/utils.py
@@ -9,7 +9,6 @@ Copyright 2021-present NAVER Corp.
Apache License v2.0
"""
-import os
import numpy as np
import cv2
import torch
@@ -648,7 +647,7 @@ def pred_squares(
score_array = score_array[sorted_idx]
squares = squares[sorted_idx]
- except Exception as e:
+ except Exception:
pass
"""return list
diff --git a/extensions-builtin/forge_legacy_preprocessors/annotator/mmpkg/mmcv/ops/fused_bias_leakyrelu.py b/extensions-builtin/forge_legacy_preprocessors/annotator/mmpkg/mmcv/ops/fused_bias_leakyrelu.py
index 35c6112ed3c91fb7b0338973f20e37b1cff0d270..496b90d1a54b3d03a0ed34747815d0f5c9f5af15 100644
--- a/extensions-builtin/forge_legacy_preprocessors/annotator/mmpkg/mmcv/ops/fused_bias_leakyrelu.py
+++ b/extensions-builtin/forge_legacy_preprocessors/annotator/mmpkg/mmcv/ops/fused_bias_leakyrelu.py
@@ -179,7 +179,7 @@ class FusedBiasLeakyReLUFunction(Function):
class FusedBiasLeakyReLU(nn.Module):
- """Fused bias leaky ReLU.
+ r"""Fused bias leaky ReLU.
This function is introduced in the StyleGAN2:
http://arxiv.org/abs/1912.04958
@@ -213,7 +213,7 @@ class FusedBiasLeakyReLU(nn.Module):
def fused_bias_leakyrelu(input, bias, negative_slope=0.2, scale=2**0.5):
- """Fused bias leaky ReLU function.
+ r"""Fused bias leaky ReLU function.
This function is introduced in the StyleGAN2:
http://arxiv.org/abs/1912.04958
diff --git a/extensions-builtin/forge_legacy_preprocessors/annotator/mmpkg/mmseg/apis/inference.py b/extensions-builtin/forge_legacy_preprocessors/annotator/mmpkg/mmseg/apis/inference.py
index 611564a32a2051e1e94e80aab3787bbb58194278..b9a10125367cf3ec4384e981272f585ebce53cae 100644
--- a/extensions-builtin/forge_legacy_preprocessors/annotator/mmpkg/mmseg/apis/inference.py
+++ b/extensions-builtin/forge_legacy_preprocessors/annotator/mmpkg/mmseg/apis/inference.py
@@ -1,4 +1,3 @@
-import matplotlib.pyplot as plt
import annotator.mmpkg.mmcv as mmcv
import torch
from annotator.mmpkg.mmcv.parallel import collate, scatter
diff --git a/extensions-builtin/forge_legacy_preprocessors/annotator/openpose/body.py b/extensions-builtin/forge_legacy_preprocessors/annotator/openpose/body.py
index 6a7442c65203b09636679908de6a3994b0be53c0..4b6bce22ee5c0d879d421de27de126c0b82de6b1 100644
--- a/extensions-builtin/forge_legacy_preprocessors/annotator/openpose/body.py
+++ b/extensions-builtin/forge_legacy_preprocessors/annotator/openpose/body.py
@@ -1,13 +1,10 @@
import cv2
import numpy as np
import math
-import time
from scipy.ndimage import gaussian_filter
import matplotlib.pyplot as plt
-import matplotlib
import torch
-from torchvision import transforms
-from typing import NamedTuple, List, Union
+from typing import List
from . import util
from .model import bodypose_model
diff --git a/extensions-builtin/forge_legacy_preprocessors/annotator/openpose/face.py b/extensions-builtin/forge_legacy_preprocessors/annotator/openpose/face.py
index a61cdb2fe92b5d51c1e8aca6b1387c1378af3b4e..20d43d9cf61e8276bf0664bac1b08f619bfa8e8e 100644
--- a/extensions-builtin/forge_legacy_preprocessors/annotator/openpose/face.py
+++ b/extensions-builtin/forge_legacy_preprocessors/annotator/openpose/face.py
@@ -3,7 +3,6 @@ import numpy as np
from torchvision.transforms import ToTensor, ToPILImage
import torch
import torch.nn.functional as F
-import cv2
from . import util
from torch.nn import Conv2d, Module, ReLU, MaxPool2d, init
diff --git a/extensions-builtin/forge_legacy_preprocessors/annotator/openpose/hand.py b/extensions-builtin/forge_legacy_preprocessors/annotator/openpose/hand.py
index 77da5b127389dcea5af24d6cde955499041b4a02..a1cda2f6cd198ba4b360358da1c4ea7fac4fec65 100644
--- a/extensions-builtin/forge_legacy_preprocessors/annotator/openpose/hand.py
+++ b/extensions-builtin/forge_legacy_preprocessors/annotator/openpose/hand.py
@@ -1,11 +1,6 @@
import cv2
-import json
import numpy as np
-import math
-import time
from scipy.ndimage import gaussian_filter
-import matplotlib.pyplot as plt
-import matplotlib
import torch
from skimage.measure import label
diff --git a/extensions-builtin/forge_legacy_preprocessors/annotator/openpose/model.py b/extensions-builtin/forge_legacy_preprocessors/annotator/openpose/model.py
index b95479ef1f983774f10d05732f204e6a0f13af24..8f273d1aa71f1060a28aad4346701ec23f05b6f3 100644
--- a/extensions-builtin/forge_legacy_preprocessors/annotator/openpose/model.py
+++ b/extensions-builtin/forge_legacy_preprocessors/annotator/openpose/model.py
@@ -1,7 +1,6 @@
import torch
from collections import OrderedDict
-import torch
import torch.nn as nn
diff --git a/extensions-builtin/forge_legacy_preprocessors/annotator/openpose/types.py b/extensions-builtin/forge_legacy_preprocessors/annotator/openpose/types.py
index 3136612f8535517de5acf053d9f5851d29bbcdba..45a4f6e5594dac67a1c804a0e3e071941783b7f5 100644
--- a/extensions-builtin/forge_legacy_preprocessors/annotator/openpose/types.py
+++ b/extensions-builtin/forge_legacy_preprocessors/annotator/openpose/types.py
@@ -1,4 +1,4 @@
-from typing import NamedTuple, List, Optional, Union
+from typing import NamedTuple, List, Optional
class Keypoint(NamedTuple):
diff --git a/extensions-builtin/forge_legacy_preprocessors/annotator/pidinet/model.py b/extensions-builtin/forge_legacy_preprocessors/annotator/pidinet/model.py
index 74d7a6a7e9515b76ea40275fc84c97a3b275ee34..5ca2445b814afe2ba4da338ed3166c558daddfb5 100644
--- a/extensions-builtin/forge_legacy_preprocessors/annotator/pidinet/model.py
+++ b/extensions-builtin/forge_legacy_preprocessors/annotator/pidinet/model.py
@@ -5,8 +5,6 @@ Date: Feb 18, 2021
import math
-import cv2
-import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
diff --git a/extensions-builtin/forge_legacy_preprocessors/annotator/teed/Fsmish.py b/extensions-builtin/forge_legacy_preprocessors/annotator/teed/Fsmish.py
index 49e124068f6fd37ec64aaf20fd005f2cea725dcb..69691029aee958e66076a9990f258546ee3c7eaf 100644
--- a/extensions-builtin/forge_legacy_preprocessors/annotator/teed/Fsmish.py
+++ b/extensions-builtin/forge_legacy_preprocessors/annotator/teed/Fsmish.py
@@ -7,7 +7,6 @@ Wang, Xueliang, Honge Ren, and Achuan Wang.
# import pytorch
import torch
-import torch.nn.functional as F
@torch.jit.script
diff --git a/extensions-builtin/forge_legacy_preprocessors/annotator/teed/Xsmish.py b/extensions-builtin/forge_legacy_preprocessors/annotator/teed/Xsmish.py
index 44ef861168058fb34cff2a35fe9eb4612c9c4635..55eebdbc13cc419ceb3bf86c38450961369d3d14 100644
--- a/extensions-builtin/forge_legacy_preprocessors/annotator/teed/Xsmish.py
+++ b/extensions-builtin/forge_legacy_preprocessors/annotator/teed/Xsmish.py
@@ -7,8 +7,6 @@ smish(x) = x * tanh(softplus(x)) = x * tanh(ln(1 + sigmoid(x)))
"""
# import pytorch
-import torch
-import torch.nn.functional as F
from torch import nn
# import activation functions
diff --git a/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/backbones/next_vit.py b/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/backbones/next_vit.py
index 8caf8e0411e2b8e498c2d5e3ead825582b14b7e2..b4779153964e1502a4eda492c7a95f8e262ceded 100644
--- a/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/backbones/next_vit.py
+++ b/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/backbones/next_vit.py
@@ -2,7 +2,6 @@ import timm
import torch.nn as nn
-from pathlib import Path
from .utils import activations, forward_default, get_activation
from ..external.next_vit.classification.nextvit import *
diff --git a/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/blocks.py b/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/blocks.py
index 0d1ebb0af8c075a7fa5514ed095c8de43b4ac13a..4e502740bd42c8b159ca0c0bda0083bf79577a96 100644
--- a/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/blocks.py
+++ b/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/blocks.py
@@ -5,10 +5,6 @@ from .backbones.beit import (
_make_pretrained_beitl16_512,
_make_pretrained_beitl16_384,
_make_pretrained_beitb16_384,
- forward_beit,
-)
-from .backbones.swin_common import (
- forward_swin,
)
from .backbones.swin2 import (
_make_pretrained_swin2l24_384,
@@ -20,13 +16,11 @@ from .backbones.swin import (
)
from .backbones.levit import (
_make_pretrained_levit_384,
- forward_levit,
)
from .backbones.vit import (
_make_pretrained_vitb_rn50_384,
_make_pretrained_vitl16_384,
_make_pretrained_vitb16_384,
- forward_vit,
)
diff --git a/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/midas_net_custom.py b/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/midas_net_custom.py
index b962f80ef884661de289d791a175f4bbfe2c5ac7..f59c3f0d7abaac7e1f5afb4b4673afe0e801406a 100644
--- a/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/midas_net_custom.py
+++ b/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/midas_net_custom.py
@@ -8,7 +8,6 @@ import torch.nn as nn
from .base_model import BaseModel
from .blocks import (
- FeatureFusionBlock,
FeatureFusionBlock_custom,
Interpolate,
_make_encoder,
diff --git a/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/models/base_models/midas_repo/ros/midas_cpp/scripts/listener.py b/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/models/base_models/midas_repo/ros/midas_cpp/scripts/listener.py
index 95b426121657682fa3ddec1172f5efdf48ff00a9..8e305085355ee6a92036c769ce1c72e6242bf03a 100644
--- a/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/models/base_models/midas_repo/ros/midas_cpp/scripts/listener.py
+++ b/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/models/base_models/midas_repo/ros/midas_cpp/scripts/listener.py
@@ -1,14 +1,11 @@
#!/usr/bin/env python3
from __future__ import print_function
-import roslib
# roslib.load_manifest('my_package')
import sys
import rospy
import cv2
-import numpy as np
-from std_msgs.msg import String
from sensor_msgs.msg import Image
from cv_bridge import CvBridge, CvBridgeError
diff --git a/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/models/base_models/midas_repo/ros/midas_cpp/scripts/listener_original.py b/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/models/base_models/midas_repo/ros/midas_cpp/scripts/listener_original.py
index 172a22f55dcc48806a134f0f1b8ed09ae53b983b..dc2fa151dd82dc4d29a149dbcdce73b6fd535e80 100644
--- a/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/models/base_models/midas_repo/ros/midas_cpp/scripts/listener_original.py
+++ b/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/models/base_models/midas_repo/ros/midas_cpp/scripts/listener_original.py
@@ -1,14 +1,11 @@
#!/usr/bin/env python3
from __future__ import print_function
-import roslib
# roslib.load_manifest('my_package')
import sys
import rospy
import cv2
-import numpy as np
-from std_msgs.msg import String
from sensor_msgs.msg import Image
from cv_bridge import CvBridge, CvBridgeError
diff --git a/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/models/base_models/midas_repo/ros/midas_cpp/scripts/talker.py b/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/models/base_models/midas_repo/ros/midas_cpp/scripts/talker.py
index 036b38b8005904b0444b54746a1a918e71aa4939..556d34dc42c9a7cc270c9f1eb4c4c950f31df1fd 100644
--- a/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/models/base_models/midas_repo/ros/midas_cpp/scripts/talker.py
+++ b/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/models/base_models/midas_repo/ros/midas_cpp/scripts/talker.py
@@ -1,13 +1,10 @@
#!/usr/bin/env python3
-import roslib
# roslib.load_manifest('my_package')
-import sys
import rospy
import cv2
-from std_msgs.msg import String
from sensor_msgs.msg import Image
from cv_bridge import CvBridge, CvBridgeError
diff --git a/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/models/base_models/midas_repo/tf/make_onnx_model.py b/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/models/base_models/midas_repo/tf/make_onnx_model.py
index ec36c6bbe05f157259677c1c6a87adbe9a860431..68c9005ef8cf753969ad8a85deb5efc1ee05e107 100644
--- a/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/models/base_models/midas_repo/tf/make_onnx_model.py
+++ b/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/models/base_models/midas_repo/tf/make_onnx_model.py
@@ -2,16 +2,10 @@
import os
import ntpath
-import glob
import torch
-import utils
-import cv2
import numpy as np
-from torchvision.transforms import Compose, Normalize
-from torchvision import transforms
from shutil import copyfile
-import fileinput
import sys
sys.path.append(os.getcwd() + "/..")
@@ -46,7 +40,6 @@ def restore_file():
modify_file()
from midas.midas_net import MidasNet
-from midas.transforms import Resize, NormalizeImage, PrepareForNet
restore_file()
diff --git a/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/models/base_models/midas_repo/tf/run_onnx.py b/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/models/base_models/midas_repo/tf/run_onnx.py
index be9686af6f6df13eda9c15c9b30f6f132b6bf0c6..b6f879c76a48bd0d4f9a4c3f46fcbe8435b11c27 100644
--- a/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/models/base_models/midas_repo/tf/run_onnx.py
+++ b/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/models/base_models/midas_repo/tf/run_onnx.py
@@ -4,14 +4,12 @@ import os
import glob
import utils
import cv2
-import sys
import numpy as np
import argparse
-import onnx
import onnxruntime as rt
-from transforms import Resize, NormalizeImage, PrepareForNet
+from transforms import Resize, PrepareForNet
def run(input_path, output_path, model_path, model_type="large"):
diff --git a/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/models/base_models/midas_repo/tf/run_pb.py b/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/models/base_models/midas_repo/tf/run_pb.py
index de121302b6677dd81f923d8f91ea1ffcfe3cebbe..ea389c9dd274321be52297d6e382d8f2a9db27ba 100644
--- a/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/models/base_models/midas_repo/tf/run_pb.py
+++ b/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/models/base_models/midas_repo/tf/run_pb.py
@@ -8,7 +8,7 @@ import argparse
import tensorflow as tf
-from transforms import Resize, NormalizeImage, PrepareForNet
+from transforms import Resize, PrepareForNet
def run(input_path, output_path, model_path, model_type="large"):
diff --git a/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/utils/misc.py b/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/utils/misc.py
index 1539abd94a80e11826b4ba5151882e6ec65a9f76..d964915ff698f9e300abc648c998326eb9ba282b 100644
--- a/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/utils/misc.py
+++ b/extensions-builtin/forge_legacy_preprocessors/annotator/zoe/zoedepth/utils/misc.py
@@ -24,11 +24,7 @@
"""Miscellaneous utility functions."""
-from scipy import ndimage
-import base64
-import math
-import re
from io import BytesIO
import matplotlib
diff --git a/extensions-builtin/forge_legacy_preprocessors/install.py b/extensions-builtin/forge_legacy_preprocessors/install.py
index 1ee0ea1f32b45d02f5fa2b9350c2ffa6f358e9a8..01d9dbf672541fe893246345424122a5525df945 100644
--- a/extensions-builtin/forge_legacy_preprocessors/install.py
+++ b/extensions-builtin/forge_legacy_preprocessors/install.py
@@ -32,7 +32,7 @@ def try_install_from_wheel(pkg_name: str, wheel_url: str):
try:
launch.run_pip(
- f"install -U {wheel_url}",
+ f"install {wheel_url}",
f"Legacy Preprocessor Requirement: {pkg_name}",
)
except Exception as e:
diff --git a/extensions-builtin/forge_legacy_preprocessors/legacy_preprocessors/preprocessor.py b/extensions-builtin/forge_legacy_preprocessors/legacy_preprocessors/preprocessor.py
index aef174e7786641eb6ba69895b65cef9f28467d93..07459ef3ca2d149af1d7a79f7b3ae05bbf69cbb3 100644
--- a/extensions-builtin/forge_legacy_preprocessors/legacy_preprocessors/preprocessor.py
+++ b/extensions-builtin/forge_legacy_preprocessors/legacy_preprocessors/preprocessor.py
@@ -3,7 +3,6 @@ import cv2
import numpy as np
import torch
import math
-import functools
from dataclasses import dataclass
from transformers.models.clip.modeling_clip import CLIPVisionModelOutput
@@ -849,9 +848,9 @@ class InsightFaceModel:
img, remove_pad = resize_image_with_pad(img, res)
face_info = self.model.get(img)
if not face_info:
- raise Exception(f"Insightface: No face found in image.")
+ raise Exception("Insightface: No face found in image.")
if len(face_info) > 1:
- print("Insightface: More than one face is detected in the image. " f"Only the biggest one will be used.")
+ print("Insightface: More than one face is detected in the image. " "Only the biggest one will be used.")
# only use the maximum face
face_info = sorted(
face_info,
diff --git a/extensions-builtin/forge_legacy_preprocessors/legacy_preprocessors/preprocessor_compiled.py b/extensions-builtin/forge_legacy_preprocessors/legacy_preprocessors/preprocessor_compiled.py
index 64095afdba898d23fc838a1e4a3a3d0f0f5048a6..9e07abb1ece173f0a2456cb16eeeb4d065d1aaa9 100644
--- a/extensions-builtin/forge_legacy_preprocessors/legacy_preprocessors/preprocessor_compiled.py
+++ b/extensions-builtin/forge_legacy_preprocessors/legacy_preprocessors/preprocessor_compiled.py
@@ -1,3 +1,5 @@
+import functools
+
from legacy_preprocessors.preprocessor import *
diff --git a/extensions-builtin/sd_forge_controlnet/lib_controlnet/controlnet_ui/controlnet_ui_group.py b/extensions-builtin/sd_forge_controlnet/lib_controlnet/controlnet_ui/controlnet_ui_group.py
index 76d64e129dc7e1e3e1672aa2b55b12f8c3dd5e3a..d098ea6b61797a2fde87f8a738b801e83e6ef58a 100644
--- a/extensions-builtin/sd_forge_controlnet/lib_controlnet/controlnet_ui/controlnet_ui_group.py
+++ b/extensions-builtin/sd_forge_controlnet/lib_controlnet/controlnet_ui/controlnet_ui_group.py
@@ -384,7 +384,7 @@ class ControlNetUiGroup:
with gr.Row(elem_classes=["controlnet_control_type", "controlnet_row"]):
self.type_filter = gr.Radio(
global_state.get_all_preprocessor_tags(),
- label=f"Control Type",
+ label="Control Type",
value="All",
elem_id=f"{elem_id_tabname}_{tabname}_controlnet_type_filter_radio",
elem_classes="controlnet_control_type_filter_group",
@@ -420,7 +420,7 @@ class ControlNetUiGroup:
with gr.Row(elem_classes=["controlnet_weight_steps", "controlnet_row"]):
self.weight = gr.Slider(
- label=f"Control Weight",
+ label="Control Weight",
value=self.default_unit.weight,
minimum=0.0,
maximum=2.0,
@@ -960,6 +960,7 @@ class ControlNetUiGroup:
@staticmethod
def reset():
ControlNetUiGroup.a1111_context = A1111Context()
+ ControlNetUiGroup.all_callbacks_registered = False
ControlNetUiGroup.callbacks_registered = False
ControlNetUiGroup.all_ui_groups = []
diff --git a/extensions-builtin/sd_forge_controlnet/lib_controlnet/external_code.py b/extensions-builtin/sd_forge_controlnet/lib_controlnet/external_code.py
index 2208dd1ce8469a2f64bdf59b8e21b56b4663b821..d096daeaecb84cd65abe631c0322817ca41682a5 100644
--- a/extensions-builtin/sd_forge_controlnet/lib_controlnet/external_code.py
+++ b/extensions-builtin/sd_forge_controlnet/lib_controlnet/external_code.py
@@ -128,7 +128,7 @@ def pixel_perfect_resolution(
else:
estimation = max(k0, k1) * float(min(raw_H, raw_W))
- logger.debug(f"Pixel Perfect Computation:")
+ logger.debug("Pixel Perfect Computation:")
logger.debug(f"resize_mode = {resize_mode}")
logger.debug(f"raw_H = {raw_H}")
logger.debug(f"raw_W = {raw_W}")
diff --git a/extensions-builtin/sd_forge_controlnet/scripts/controlnet.py b/extensions-builtin/sd_forge_controlnet/scripts/controlnet.py
index d17c93556ad86d5cd08939e331f39f25a2b7f5f2..e32b2bcfb7c1634cd95c2a9bd3d866c548f4b0c3 100644
--- a/extensions-builtin/sd_forge_controlnet/scripts/controlnet.py
+++ b/extensions-builtin/sd_forge_controlnet/scripts/controlnet.py
@@ -1,37 +1,28 @@
-from modules import shared, scripts, script_callbacks, masking, images
-from modules_forge.supported_controlnet import ControlModelPatcher
-from modules_forge.shared import try_load_supported_control_model
-from modules_forge.forge_util import HWC3, numpy_to_pytorch
-from modules.processing import (
- StableDiffusionProcessingImg2Img,
- StableDiffusionProcessingTxt2Img,
- StableDiffusionProcessing,
-)
-
-from typing import Optional
-from PIL import Image
-import gradio as gr
-import numpy as np
import functools
-import torch
-import cv2
+from typing import Optional, TYPE_CHECKING
-from lib_controlnet import global_state, external_code
-from lib_controlnet.external_code import ControlNetUnit
-from lib_controlnet.utils import (
- align_dim_latent,
- crop_and_resize_image,
- judge_image_type,
- prepare_mask,
- set_numpy_seed,
-)
+if TYPE_CHECKING:
+ from modules_forge.supported_preprocessor import Preprocessor
+import cv2
+import gradio as gr
+import numpy as np
+import torch
+from lib_controlnet import external_code, global_state
+from lib_controlnet.api import controlnet_api
from lib_controlnet.controlnet_ui.controlnet_ui_group import ControlNetUiGroup
from lib_controlnet.enums import HiResFixOption
-from lib_controlnet.api import controlnet_api
+from lib_controlnet.external_code import ControlNetUnit
from lib_controlnet.infotext import Infotext
from lib_controlnet.logging import logger
+from lib_controlnet.utils import align_dim_latent, crop_and_resize_image, judge_image_type, prepare_mask, set_numpy_seed
+from PIL import Image, ImageOps
+from modules import images, masking, script_callbacks, scripts, shared
+from modules.processing import StableDiffusionProcessing, StableDiffusionProcessingImg2Img, StableDiffusionProcessingTxt2Img
+from modules_forge.forge_util import HWC3, numpy_to_pytorch
+from modules_forge.shared import try_load_supported_control_model
+from modules_forge.supported_controlnet import ControlModelPatcher
global_state.update_controlnet_filenames()
@@ -80,9 +71,7 @@ class ControlNetForForgeOfficial(scripts.Script):
with gr.Tab(label=f"ControlNet Unit {i + 1}", id=i):
group = ControlNetUiGroup(is_img2img, default_unit)
ui_groups.append(group)
- controls.append(
- group.render(f"ControlNet-{i}", elem_id_tabname)
- )
+ controls.append(group.render(f"ControlNet-{i}", elem_id_tabname))
for i, ui_group in enumerate(ui_groups):
infotext.register_unit(i, ui_group)
@@ -93,64 +82,36 @@ class ControlNetForForgeOfficial(scripts.Script):
return controls
- def get_enabled_units(self, units):
- # Parse dict from API calls
- units = [
- ControlNetUnit.from_dict(unit) if isinstance(unit, dict) else unit
- for unit in units
- ]
+ def get_enabled_units(self, units: list[ControlNetUnit]): # Parse dict from API calls
+ units = [ControlNetUnit.from_dict(unit) if isinstance(unit, dict) else unit for unit in units]
assert all(isinstance(unit, ControlNetUnit) for unit in units)
enabled_units = [x for x in units if x.enabled]
return enabled_units
@staticmethod
- def try_crop_image_with_a1111_mask(
- p: StableDiffusionProcessing,
- input_image: np.ndarray,
- resize_mode: external_code.ResizeMode,
- preprocessor,
- ) -> np.ndarray:
+ def try_crop_image_with_a1111_mask(p: StableDiffusionProcessing, input_image: np.ndarray, resize_mode: external_code.ResizeMode, preprocessor: "Preprocessor") -> np.ndarray:
a1111_mask_image: Optional[Image.Image] = getattr(p, "image_mask", None)
- is_only_masked_inpaint: bool = (
- issubclass(type(p), StableDiffusionProcessingImg2Img)
- and p.inpaint_full_res
- and a1111_mask_image is not None
- )
+ is_only_masked_inpaint: bool = issubclass(type(p), StableDiffusionProcessingImg2Img) and p.inpaint_full_res and a1111_mask_image is not None
- if (
- preprocessor.corp_image_with_a1111_mask_when_in_img2img_inpaint_tab
- and is_only_masked_inpaint
- ):
+ if preprocessor.corp_image_with_a1111_mask_when_in_img2img_inpaint_tab and is_only_masked_inpaint:
logger.info("Crop input image based on A1111 mask.")
input_image = [input_image[:, :, i] for i in range(input_image.shape[2])]
input_image = [Image.fromarray(x) for x in input_image]
mask = prepare_mask(a1111_mask_image, p)
- crop_region = masking.get_crop_region(
- np.array(mask), p.inpaint_full_res_padding
- )
- crop_region = masking.expand_crop_region(
- crop_region, p.width, p.height, mask.width, mask.height
- )
+ crop_region = masking.get_crop_region(np.array(mask), p.inpaint_full_res_padding)
+ crop_region = masking.expand_crop_region(crop_region, p.width, p.height, mask.width, mask.height)
- input_image = [
- images.resize_image(resize_mode.int_value(), i, mask.width, mask.height)
- for i in input_image
- ]
+ input_image = [images.resize_image(resize_mode.int_value(), i, mask.width, mask.height) for i in input_image]
input_image = [x.crop(crop_region) for x in input_image]
- input_image = [
- images.resize_image(
- external_code.ResizeMode.OUTER_FIT.int_value(), x, p.width, p.height
- )
- for x in input_image
- ]
+ input_image = [images.resize_image(external_code.ResizeMode.OUTER_FIT.int_value(), x, p.width, p.height) for x in input_image]
input_image = [np.asarray(x)[:, :, 0] for x in input_image]
input_image = np.stack(input_image, axis=2)
return input_image
- def get_input_data(self, p, unit, preprocessor, h, w):
+ def get_input_data(self, p: StableDiffusionProcessing, unit: ControlNetUnit, preprocessor: "Preprocessor", h: int, w: int):
resize_mode = external_code.resize_mode_from_value(unit.resize_mode)
image_list = []
@@ -159,6 +120,9 @@ class ControlNetForForgeOfficial(scripts.Script):
a1111_i2i_image = getattr(p, "init_images", [None])[0]
a1111_i2i_mask = getattr(p, "image_mask", None)
+ if a1111_i2i_mask is not None and getattr(p, "inpainting_mask_invert", False):
+ a1111_i2i_mask = ImageOps.invert(a1111_i2i_mask)
+
using_a1111_data = False
if unit.image is None:
@@ -198,16 +162,11 @@ class ControlNetForForgeOfficial(scripts.Script):
(image.shape[1], image.shape[0]),
interpolation=cv2.INTER_NEAREST,
)
- mask = self.try_crop_image_with_a1111_mask(
- p, mask, resize_mode, preprocessor
- )
+ mask = self.try_crop_image_with_a1111_mask(p, mask, resize_mode, preprocessor)
image_list = [[image, mask]]
- if (
- resize_mode == external_code.ResizeMode.OUTER_FIT
- and preprocessor.expand_mask_when_resize_and_fill
- ):
+ if resize_mode == external_code.ResizeMode.OUTER_FIT and preprocessor.expand_mask_when_resize_and_fill:
new_image_list = []
for input_image, input_mask in image_list:
if input_mask is None:
@@ -232,16 +191,12 @@ class ControlNetForForgeOfficial(scripts.Script):
return image_list, resize_mode
@staticmethod
- def get_target_dimensions(
- p: StableDiffusionProcessing,
- ) -> tuple[int, int, int, int]:
+ def get_target_dimensions(p: StableDiffusionProcessing) -> tuple[int, int, int, int]:
"""Returns (h, w, hr_h, hr_w)."""
h = align_dim_latent(p.height)
w = align_dim_latent(p.width)
- high_res_fix = getattr(p, "enable_hr", False) and isinstance(
- p, StableDiffusionProcessingTxt2Img
- )
+ high_res_fix = getattr(p, "enable_hr", False) and isinstance(p, StableDiffusionProcessingTxt2Img)
if high_res_fix:
if p.hr_resize_x == 0 and p.hr_resize_y == 0:
@@ -258,20 +213,11 @@ class ControlNetForForgeOfficial(scripts.Script):
return h, w, hr_y, hr_x
@torch.no_grad()
- def process_unit_after_click_generate(
- self,
- p: StableDiffusionProcessing,
- unit: ControlNetUnit,
- params: ControlNetCachedParameters,
- *args,
- **kwargs,
- ) -> bool:
+ def process_unit_after_click_generate(self, p: StableDiffusionProcessing, unit: ControlNetUnit, params: ControlNetCachedParameters, *args, **kwargs) -> bool:
h, w, hr_y, hr_x = self.get_target_dimensions(p)
- has_high_res_fix = isinstance(p, StableDiffusionProcessingTxt2Img) and getattr(
- p, "enable_hr", False
- )
+ has_high_res_fix = isinstance(p, StableDiffusionProcessingTxt2Img) and getattr(p, "enable_hr", False)
if unit.use_preview_as_input:
unit.module = "None"
@@ -322,9 +268,7 @@ class ControlNetForForgeOfficial(scripts.Script):
control_masks.append(input_mask)
if len(input_list) > 1 and not preprocessor_output_is_image:
- logger.info(
- "Batch wise input only support controlnet, control-lora, and t2i adapters!"
- )
+ logger.info("Batch wise input only support controlnet, control-lora, and t2i adapters!")
break
if has_high_res_fix:
@@ -335,14 +279,7 @@ class ControlNetForForgeOfficial(scripts.Script):
alignment_indices = [i % len(preprocessor_outputs) for i in range(p.batch_size)]
def attach_extra_result_image(img: np.ndarray, is_high_res: bool = False):
- if (
- not shared.opts.data.get("control_net_no_detectmap", False)
- and (
- (is_high_res and hr_option.high_res_enabled)
- or (not is_high_res and hr_option.low_res_enabled)
- )
- and unit.save_detected_map
- ):
+ if not shared.opts.data.get("control_net_no_detectmap", False) and ((is_high_res and hr_option.high_res_enabled) or (not is_high_res and hr_option.low_res_enabled)) and unit.save_detected_map:
p.extra_result_images.append(img)
if preprocessor_output_is_image:
@@ -350,35 +287,21 @@ class ControlNetForForgeOfficial(scripts.Script):
params.control_cond_for_hr_fix = []
for preprocessor_output in preprocessor_outputs:
- control_cond = crop_and_resize_image(
- preprocessor_output, resize_mode, h, w
- )
- attach_extra_result_image(
- external_code.visualize_inpaint_mask(control_cond)
- )
- params.control_cond.append(
- numpy_to_pytorch(control_cond).movedim(-1, 1)
- )
+ control_cond = crop_and_resize_image(preprocessor_output, resize_mode, h, w)
+ attach_extra_result_image(external_code.visualize_inpaint_mask(control_cond))
+ params.control_cond.append(numpy_to_pytorch(control_cond).movedim(-1, 1))
- params.control_cond = torch.cat(params.control_cond, dim=0)[
- alignment_indices
- ].contiguous()
+ params.control_cond = torch.cat(params.control_cond, dim=0)[alignment_indices].contiguous()
if has_high_res_fix:
for preprocessor_output in preprocessor_outputs:
- control_cond_for_hr_fix = crop_and_resize_image(
- preprocessor_output, resize_mode, hr_y, hr_x
- )
+ control_cond_for_hr_fix = crop_and_resize_image(preprocessor_output, resize_mode, hr_y, hr_x)
attach_extra_result_image(
external_code.visualize_inpaint_mask(control_cond_for_hr_fix),
is_high_res=True,
)
- params.control_cond_for_hr_fix.append(
- numpy_to_pytorch(control_cond_for_hr_fix).movedim(-1, 1)
- )
- params.control_cond_for_hr_fix = torch.cat(
- params.control_cond_for_hr_fix, dim=0
- )[alignment_indices].contiguous()
+ params.control_cond_for_hr_fix.append(numpy_to_pytorch(control_cond_for_hr_fix).movedim(-1, 1))
+ params.control_cond_for_hr_fix = torch.cat(params.control_cond_for_hr_fix, dim=0)[alignment_indices].contiguous()
else:
params.control_cond_for_hr_fix = params.control_cond
else:
@@ -392,30 +315,20 @@ class ControlNetForForgeOfficial(scripts.Script):
for input_mask in control_masks:
fill_border = preprocessor.fill_mask_with_one_when_resize_and_fill
- control_mask = crop_and_resize_image(
- input_mask, resize_mode, h, w, fill_border
- )
+ control_mask = crop_and_resize_image(input_mask, resize_mode, h, w, fill_border)
attach_extra_result_image(control_mask)
control_mask = numpy_to_pytorch(control_mask).movedim(-1, 1)[:, :1]
params.control_mask.append(control_mask)
if has_high_res_fix:
- control_mask_for_hr_fix = crop_and_resize_image(
- input_mask, resize_mode, hr_y, hr_x, fill_border
- )
+ control_mask_for_hr_fix = crop_and_resize_image(input_mask, resize_mode, hr_y, hr_x, fill_border)
attach_extra_result_image(control_mask_for_hr_fix, is_high_res=True)
- control_mask_for_hr_fix = numpy_to_pytorch(
- control_mask_for_hr_fix
- ).movedim(-1, 1)[:, :1]
+ control_mask_for_hr_fix = numpy_to_pytorch(control_mask_for_hr_fix).movedim(-1, 1)[:, :1]
params.control_mask_for_hr_fix.append(control_mask_for_hr_fix)
- params.control_mask = torch.cat(params.control_mask, dim=0)[
- alignment_indices
- ].contiguous()
+ params.control_mask = torch.cat(params.control_mask, dim=0)[alignment_indices].contiguous()
if has_high_res_fix:
- params.control_mask_for_hr_fix = torch.cat(
- params.control_mask_for_hr_fix, dim=0
- )[alignment_indices].contiguous()
+ params.control_mask_for_hr_fix = torch.cat(params.control_mask_for_hr_fix, dim=0)[alignment_indices].contiguous()
else:
params.control_mask_for_hr_fix = params.control_mask
@@ -434,31 +347,18 @@ class ControlNetForForgeOfficial(scripts.Script):
params.preprocessor = preprocessor
- params.preprocessor.process_after_running_preprocessors(
- process=p, params=params, **kwargs
- )
- params.model.process_after_running_preprocessors(
- process=p, params=params, **kwargs
- )
+ params.preprocessor.process_after_running_preprocessors(process=p, params=params, **kwargs)
+ params.model.process_after_running_preprocessors(process=p, params=params, **kwargs)
logger.info(f"{type(params.model).__name__}: {model_filename}")
return True
@torch.no_grad()
- def process_unit_before_every_sampling(
- self,
- p: StableDiffusionProcessing,
- unit: ControlNetUnit,
- params: ControlNetCachedParameters,
- *args,
- **kwargs,
- ):
+ def process_unit_before_every_sampling(self, p: StableDiffusionProcessing, unit: ControlNetUnit, params: ControlNetCachedParameters, *args, **kwargs):
is_hr_pass = getattr(p, "is_hr_pass", False)
- has_high_res_fix = isinstance(p, StableDiffusionProcessingTxt2Img) and getattr(
- p, "enable_hr", False
- )
+ has_high_res_fix = isinstance(p, StableDiffusionProcessingTxt2Img) and getattr(p, "enable_hr", False)
if has_high_res_fix:
hr_option = HiResFixOption.from_value(unit.hr_option)
@@ -466,11 +366,11 @@ class ControlNetForForgeOfficial(scripts.Script):
hr_option = HiResFixOption.BOTH
if has_high_res_fix and is_hr_pass and (not hr_option.high_res_enabled):
- logger.info(f"ControlNet Skipped High-res pass.")
+ logger.info("ControlNet Skipped High-res pass.")
return
if has_high_res_fix and (not is_hr_pass) and (not hr_option.low_res_enabled):
- logger.info(f"ControlNet Skipped Low-res pass.")
+ logger.info("ControlNet Skipped Low-res pass.")
return
if is_hr_pass:
@@ -543,16 +443,13 @@ class ControlNetForForgeOfficial(scripts.Script):
params.model.positive_advanced_weighting = soft_weighting.copy()
params.model.negative_advanced_weighting = soft_weighting.copy()
- cond, mask = params.preprocessor.process_before_every_sampling(
- p, cond, mask, *args, **kwargs
- )
+ cond, mask = params.preprocessor.process_before_every_sampling(p, cond, mask, *args, **kwargs)
params.model.advanced_mask_weighting = mask
params.model.process_before_every_sampling(p, cond, mask, *args, **kwargs)
logger.info(f"ControlNet Method {params.preprocessor.name} patched.")
- return
@staticmethod
def bound_check_params(unit: ControlNetUnit) -> None:
@@ -567,35 +464,16 @@ class ControlNetForForgeOfficial(scripts.Script):
preprocessor = global_state.get_preprocessor(unit.module)
if unit.processor_res < 0:
- unit.processor_res = int(
- preprocessor.slider_resolution.gradio_update_kwargs.get("value", 512)
- )
-
+ unit.processor_res = int(preprocessor.slider_resolution.gradio_update_kwargs.get("value", 512))
if unit.threshold_a < 0:
- unit.threshold_a = int(
- preprocessor.slider_1.gradio_update_kwargs.get("value", 1.0)
- )
-
+ unit.threshold_a = int(preprocessor.slider_1.gradio_update_kwargs.get("value", 1.0))
if unit.threshold_b < 0:
- unit.threshold_b = int(
- preprocessor.slider_2.gradio_update_kwargs.get("value", 1.0)
- )
-
- return
+ unit.threshold_b = int(preprocessor.slider_2.gradio_update_kwargs.get("value", 1.0))
@torch.no_grad()
- def process_unit_after_every_sampling(
- self,
- p: StableDiffusionProcessing,
- unit: ControlNetUnit,
- params: ControlNetCachedParameters,
- *args,
- **kwargs,
- ):
-
+ def process_unit_after_every_sampling(self, p: StableDiffusionProcessing, unit: ControlNetUnit, params: ControlNetCachedParameters, *args, **kwargs):
params.preprocessor.process_after_every_sampling(p, params, *args, **kwargs)
params.model.process_after_every_sampling(p, params, *args, **kwargs)
- return
@torch.no_grad()
def process(self, p, *args, **kwargs):
@@ -614,19 +492,15 @@ class ControlNetForForgeOfficial(scripts.Script):
if i not in self.current_params:
logger.warning(f"ControlNet Unit {i + 1} is skipped...")
continue
- self.process_unit_before_every_sampling(
- p, unit, self.current_params[i], *args, **kwargs
- )
+ self.process_unit_before_every_sampling(p, unit, self.current_params[i], *args, **kwargs)
@torch.no_grad()
def postprocess_batch_list(self, p, pp, *args, **kwargs):
for i, unit in enumerate(self.get_enabled_units(args)):
if i in self.current_params:
- self.process_unit_after_every_sampling(
- p, unit, self.current_params[i], pp, *args, **kwargs
- )
+ self.process_unit_after_every_sampling(p, unit, self.current_params[i], pp, *args, **kwargs)
- def postprocess(self, p, processed, *args):
+ def postprocess(self, *args):
self.current_params = {}
@@ -689,4 +563,6 @@ script_callbacks.on_ui_settings(on_ui_settings)
script_callbacks.on_infotext_pasted(Infotext.on_infotext_pasted)
script_callbacks.on_after_component(ControlNetUiGroup.on_after_component)
script_callbacks.on_before_reload(ControlNetUiGroup.reset)
-script_callbacks.on_app_started(controlnet_api)
+
+if shared.cmd_opts.api:
+ script_callbacks.on_app_started(controlnet_api)
diff --git a/extensions-builtin/sd_forge_multidiffusion/lib_multidiffusion/tiled_diffusion.py b/extensions-builtin/sd_forge_multidiffusion/lib_multidiffusion/tiled_diffusion.py
new file mode 100644
index 0000000000000000000000000000000000000000..b579afca0a872b5a0e07fdd33ac4516247ac3602
--- /dev/null
+++ b/extensions-builtin/sd_forge_multidiffusion/lib_multidiffusion/tiled_diffusion.py
@@ -0,0 +1,539 @@
+# 1st Edit by: https://github.com/shiimizu/ComfyUI-TiledDiffusion
+# 2nd Edit by: Forge Official
+# 3rd Edit by: Panchovix
+# 4th Edit by: Haoming02
+# - Based on: https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111
+
+from enum import Enum
+from typing import Callable, Final, Union
+
+import numpy as np
+import torch
+from numpy import exp, pi, sqrt
+from torch import Tensor
+
+from ldm_patched.modules.controlnet import ControlNet, T2IAdapter
+from ldm_patched.modules.model_base import BaseModel
+from ldm_patched.modules.model_management import current_loaded_models, get_torch_device, load_models_gpu
+from ldm_patched.modules.model_patcher import ModelPatcher
+from ldm_patched.modules.utils import common_upscale
+
+opt_C: Final[int] = 4
+opt_f: Final[int] = 8
+device: Final[torch.device] = get_torch_device()
+
+
+class BlendMode(Enum):
+ FOREGROUND = "Foreground"
+ BACKGROUND = "Background"
+
+
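+# A rectangular tile in latent space; `slicer` selects this tile's region from an NCHW tensor.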
+class BBox:
+ def __init__(self, x: int, y: int, w: int, h: int):
+ self.x = x
+ self.y = y
+ self.w = w
+ self.h = h
+ self.box = [x, y, x + w, y + h]
+ self.slicer = slice(None), slice(None), slice(y, y + h), slice(x, x + w)
+
+ def __getitem__(self, idx: int) -> int:
+ return self.box[idx]
+
+
+def processing_interrupted():
+ from modules import shared
+
+ return shared.state.interrupted or shared.state.skipped
+
+
+def ceildiv(big: int, small: int) -> int:
+ return -(big // -small)
+
+
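+# Expand, tile, or truncate `tensor` along `dim` so that dimension matches `batch_size`.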
+def repeat_to_batch_size(tensor: torch.Tensor, batch_size: int, dim: int = 0):
+ if dim == 0 and tensor.shape[dim] == 1:
+ return tensor.expand([batch_size] + [-1] * (len(tensor.shape) - 1))
+ if tensor.shape[dim] > batch_size:
+ return tensor.narrow(dim, 0, batch_size)
+ elif tensor.shape[dim] < batch_size:
+ return tensor.repeat(dim * [1] + [ceildiv(batch_size, tensor.shape[dim])] + [1] * (len(tensor.shape) - 1 - dim)).narrow(dim, 0, batch_size)
+ return tensor
+
+
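+# Split a (h, w) latent canvas into overlapping tiles; also accumulate each tile's weight
+# into a (1, 1, h, w) map that is later used to normalize the blended output.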
+def split_bboxes(w: int, h: int, tile_w: int, tile_h: int, overlap: int = 16, init_weight: Union[Tensor, float] = 1.0) -> tuple[list[BBox], Tensor]:
+ cols = ceildiv((w - overlap), (tile_w - overlap))
+ rows = ceildiv((h - overlap), (tile_h - overlap))
+ dx = (w - tile_w) / (cols - 1) if cols > 1 else 0
+ dy = (h - tile_h) / (rows - 1) if rows > 1 else 0
+
+ bbox_list: list[BBox] = []
+ weight = torch.zeros((1, 1, h, w), device=device, dtype=torch.float32)
+ for row in range(rows):
+ y = min(int(row * dy), h - tile_h)
+ for col in range(cols):
+ x = min(int(col * dx), w - tile_w)
+
+ bbox = BBox(x, y, tile_w, tile_h)
+ bbox_list.append(bbox)
+ weight[bbox.slicer] += init_weight
+
+ return bbox_list, weight
+
+
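+# Shared state and helpers for the tiling strategies below: grid/bbox construction,
+# per-tile ControlNet hint handling, and the buffer that per-tile outputs are blended into.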
+class AbstractDiffusion:
+ def __init__(self):
+ self.method = self.__class__.__name__
+
+ self.w: int = 0
+ self.h: int = 0
+ self.tile_width: int = None
+ self.tile_height: int = None
+ self.tile_overlap: int = None
+ self.tile_batch_size: int = None
+
+ self.x_buffer: Tensor = None
+ self._weights: Tensor = None
+ self._init_grid_bbox = None
+ self._init_done = None
+
+ self.step_count = 0
+ self.inner_loop_count = 0
+ self.kdiff_step = -1
+
+ self.enable_grid_bbox: bool = False
+ self.tile_w: int = None
+ self.tile_h: int = None
+ self.tile_bs: int = None
+ self.num_tiles: int = None
+ self.num_batches: int = None
+ self.batched_bboxes: list[list[BBox]] = []
+
+ self.enable_controlnet: bool = False
+ self.control_tensor_batch_dict = {}
+ self.control_tensor_batch: list[list[Tensor]] = [[]]
+ self.control_params: dict[tuple, list[list[Tensor]]] = {}
+ self.control_tensor_cpu: bool = False
+ self.control_tensor_custom: list[list[Tensor]] = []
+
+ self.refresh = False
+ self.weights = None
+
+ def reset(self):
+ tile_width = self.tile_width
+ tile_height = self.tile_height
+ tile_overlap = self.tile_overlap
+ tile_batch_size = self.tile_batch_size
+ compression = self.compression
+ width = self.width
+ height = self.height
+ overlap = self.overlap
+ self.__init__()
+ self.compression = compression
+ self.width = width
+ self.height = height
+ self.overlap = overlap
+ self.tile_width = tile_width
+ self.tile_height = tile_height
+ self.tile_overlap = tile_overlap
+ self.tile_batch_size = tile_batch_size
+
+ def repeat_tensor(self, x: Tensor, n: int, concat=False, concat_to=0) -> Tensor:
+ """repeat the tensor on its first dim"""
+ if n == 1:
+ return x
+ B = x.shape[0]
+ r_dims = len(x.shape) - 1
+ if B == 1:
+ shape = [n] + [-1] * r_dims
+ return x.expand(shape)
+ else:
+ if concat:
+ return torch.cat([x for _ in range(n)], dim=0)[:concat_to]
+ shape = [n] + [1] * r_dims
+ return x.repeat(shape)
+
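+ # (Re)allocate or zero the accumulation buffer that per-tile outputs are summed into.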
+ def reset_buffer(self, x_in: Tensor):
+ if self.x_buffer is None or self.x_buffer.shape != x_in.shape:
+ self.x_buffer = torch.zeros_like(x_in, device=x_in.device, dtype=x_in.dtype)
+ else:
+ self.x_buffer.zero_()
+
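+ # Build the tile grid for the current latent size and group the tiles into batches.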
+ def init_grid_bbox(self, tile_w: int, tile_h: int, overlap: int, tile_bs: int):
+ self.weights = torch.zeros((1, 1, self.h, self.w), device=device, dtype=torch.float32)
+ self.enable_grid_bbox = True
+
+ self.tile_w = min(tile_w, self.w)
+ self.tile_h = min(tile_h, self.h)
+ overlap = max(0, min(overlap, min(tile_w, tile_h) - 4))
+ bboxes, weights = split_bboxes(self.w, self.h, self.tile_w, self.tile_h, overlap, self.get_tile_weights())
+ self.weights += weights
+ self.num_tiles = len(bboxes)
+ self.num_batches = ceildiv(self.num_tiles, tile_bs)
+ self.tile_bs = ceildiv(len(bboxes), self.num_batches)
+ self.batched_bboxes = [bboxes[i * self.tile_bs : (i + 1) * self.tile_bs] for i in range(self.num_batches)]
+
+ def get_grid_bbox(self, tile_w: int, tile_h: int, overlap: int, tile_bs: int, w: int, h: int, device: torch.device, get_tile_weights: Callable = lambda: 1.0) -> list[list[BBox]]:
+ weights = torch.zeros((1, 1, h, w), device=device, dtype=torch.float32)
+
+ tile_w = min(tile_w, w)
+ tile_h = min(tile_h, h)
+ overlap = max(0, min(overlap, min(tile_w, tile_h) - 4))
+ bboxes, weights_ = split_bboxes(w, h, tile_w, tile_h, overlap, get_tile_weights())
+ weights += weights_
+ num_tiles = len(bboxes)
+ num_batches = ceildiv(num_tiles, tile_bs)
+ tile_bs = ceildiv(len(bboxes), num_batches)
+ batched_bboxes = [bboxes[i * tile_bs : (i + 1) * tile_bs] for i in range(num_batches)]
+ return batched_bboxes
+
+ def get_tile_weights(self) -> Union[Tensor, float]:
+ return 1.0
+
+ def init_noise_inverse(self, steps: int, retouch: float, get_cache_callback, set_cache_callback, renoise_strength: float, renoise_kernel: int):
+ self.noise_inverse_enabled = True
+ self.noise_inverse_steps = steps
+ self.noise_inverse_retouch = float(retouch)
+ self.noise_inverse_renoise_strength = float(renoise_strength)
+ self.noise_inverse_renoise_kernel = int(renoise_kernel)
+ self.noise_inverse_set_cache = set_cache_callback
+ self.noise_inverse_get_cache = get_cache_callback
+
+ def init_done(self):
+ """
+ Call this after all `init_*` settings are done; it performs:
+ - settings sanity check
+ - pre-computations, cache init
+ - anything else needed before denoising starts
+ """
+
+ self.total_bboxes = 0
+ if self.enable_grid_bbox:
+ self.total_bboxes += self.num_batches
+ assert self.total_bboxes > 0, "Nothing to paint! No background to draw and no custom bboxes were provided."
+
+ def prepare_controlnet_tensors(self, refresh: bool = False, tensor=None):
+ """Crop the control tensor into tiles and cache them"""
+ if not refresh:
+ if self.control_tensor_batch is not None or self.control_params is not None:
+ return
+ tensors = [tensor]
+ self.org_control_tensor_batch = tensors
+ self.control_tensor_batch = []
+ for i in range(len(tensors)):
+ control_tile_list = []
+ control_tensor = tensors[i]
+ for bboxes in self.batched_bboxes:
+ single_batch_tensors = []
+ for bbox in bboxes:
+ if len(control_tensor.shape) == 3:
+ control_tensor.unsqueeze_(0)
+ control_tile = control_tensor[:, :, bbox[1] * opt_f : bbox[3] * opt_f, bbox[0] * opt_f : bbox[2] * opt_f]
+ single_batch_tensors.append(control_tile)
+ control_tile = torch.cat(single_batch_tensors, dim=0)
+ if self.control_tensor_cpu:
+ control_tile = control_tile.cpu()
+ control_tile_list.append(control_tile)
+ self.control_tensor_batch.append(control_tile_list)
+
+ def switch_controlnet_tensors(self, batch_id: int, x_batch_size: int, tile_batch_size: int, is_denoise=False):
+ if self.control_tensor_batch is None:
+ return
+
+ for param_id in range(len(self.control_tensor_batch)):
+ control_tile = self.control_tensor_batch[param_id][batch_id]
+ if x_batch_size > 1:
+ all_control_tile = []
+ for i in range(tile_batch_size):
+ this_control_tile = [control_tile[i].unsqueeze(0)] * x_batch_size
+ all_control_tile.append(torch.cat(this_control_tile, dim=0))
+ control_tile = torch.cat(all_control_tile, dim=0)
+ self.control_tensor_batch[param_id][batch_id] = control_tile
+
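+ # Crop the ControlNet / T2I-Adapter hints to each tile (caching the result per batch),
+ # walking the whole chain via `previous_controlnet`.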
+ def process_controlnet(self, x_noisy, c_in: dict, cond_or_uncond: list, bboxes, batch_size: int, batch_id: int, shifts=None, shift_condition=None):
+ control: ControlNet = c_in["control"]
+ param_id = -1
+ tuple_key = tuple(cond_or_uncond) + tuple(x_noisy.shape)
+ while control is not None:
+ param_id += 1
+
+ if tuple_key not in self.control_params:
+ self.control_params[tuple_key] = [[None]]
+
+ while len(self.control_params[tuple_key]) <= param_id:
+ self.control_params[tuple_key].append([None])
+
+ while len(self.control_params[tuple_key][param_id]) <= batch_id:
+ self.control_params[tuple_key][param_id].append(None)
+
+ if self.refresh or control.cond_hint is None or not isinstance(self.control_params[tuple_key][param_id][batch_id], Tensor):
+ if control.cond_hint is not None:
+ del control.cond_hint
+ control.cond_hint = None
+ compression_ratio = control.compression_ratio
+ if control.vae is not None:
+ compression_ratio *= control.vae.downscale_ratio
+ else:
+ if control.latent_format is not None:
+ raise ValueError("This Controlnet needs a VAE but none was provided, please use a ControlNetApply node with a VAE input and connect it.")
+ PH, PW = self.h * compression_ratio, self.w * compression_ratio
+
+ device = getattr(control, "device", x_noisy.device)
+ dtype = getattr(control, "manual_cast_dtype", None)
+ if dtype is None:
+ dtype = getattr(getattr(control, "control_model", None), "dtype", None)
+ if dtype is None:
+ dtype = x_noisy.dtype
+
+ if isinstance(control, T2IAdapter):
+ width, height = control.scale_image_to(PW, PH)
+ cns = common_upscale(control.cond_hint_original, width, height, control.upscale_algorithm, "center").float().to(device=device)
+ if control.channels_in == 1 and cns.shape[1] > 1:
+ cns = torch.mean(cns, 1, keepdim=True)
+ elif control.__class__.__name__ == "ControlLLLiteAdvanced":
+ if getattr(control, "sub_idxs", None) is not None and control.cond_hint_original.shape[0] >= control.full_latent_length:
+ cns = common_upscale(control.cond_hint_original[control.sub_idxs], PW, PH, control.upscale_algorithm, "center").to(dtype=dtype, device=device)
+ else:
+ cns = common_upscale(control.cond_hint_original, PW, PH, control.upscale_algorithm, "center").to(dtype=dtype, device=device)
+ else:
+ cns = common_upscale(control.cond_hint_original, PW, PH, control.upscale_algorithm, "center").to(dtype=dtype, device=device)
+ if getattr(control, "vae", None) is not None:
+ loaded_models_ = current_loaded_models(only_currently_used=True)
+ cns = control.vae.encode(cns.movedim(1, -1))
+ load_models_gpu(loaded_models_)
+ if getattr(control, "latent_format", None) is not None:
+ cns = control.latent_format.process_in(cns)
+ if len(getattr(control, "extra_concat_orig", ())) > 0:
+ to_concat = []
+ for c in control.extra_concat_orig:
+ c = c.to(device=device)
+ c = common_upscale(c, cns.shape[3], cns.shape[2], control.upscale_algorithm, "center")
+ to_concat.append(repeat_to_batch_size(c, cns.shape[0]))
+ cns = torch.cat([cns] + to_concat, dim=1)
+
+ cns = cns.to(device=device, dtype=dtype)
+ cf = control.compression_ratio
+ if cns.shape[0] != batch_size:
+ cns = repeat_to_batch_size(cns, batch_size)
+ if shifts is not None:
+ control.cns = cns
+ sh_h, sh_w = shifts
+ sh_h *= cf
+ sh_w *= cf
+ if (sh_h, sh_w) != (0, 0):
+ if sh_h == 0 or sh_w == 0:
+ cns = control.cns.roll(shifts=(sh_h, sh_w), dims=(-2, -1))
+ else:
+ if shift_condition:
+ cns = control.cns.roll(shifts=sh_h, dims=-2)
+ else:
+ cns = control.cns.roll(shifts=sh_w, dims=-1)
+ cns_slices = [cns[:, :, bbox[1] * cf : bbox[3] * cf, bbox[0] * cf : bbox[2] * cf] for bbox in bboxes]
+ control.cond_hint = torch.cat(cns_slices, dim=0).to(device=cns.device)
+ del cns_slices
+ del cns
+ self.control_params[tuple_key][param_id][batch_id] = control.cond_hint
+ else:
+ if hasattr(control, "cns") and shifts is not None:
+ cf = control.compression_ratio
+ cns = control.cns
+ sh_h, sh_w = shifts
+ sh_h *= cf
+ sh_w *= cf
+ if (sh_h, sh_w) != (0, 0):
+ if sh_h == 0 or sh_w == 0:
+ cns = control.cns.roll(shifts=(sh_h, sh_w), dims=(-2, -1))
+ else:
+ if shift_condition:
+ cns = control.cns.roll(shifts=sh_h, dims=-2)
+ else:
+ cns = control.cns.roll(shifts=sh_w, dims=-1)
+ cns_slices = [cns[:, :, bbox[1] * cf : bbox[3] * cf, bbox[0] * cf : bbox[2] * cf] for bbox in bboxes]
+ control.cond_hint = torch.cat(cns_slices, dim=0).to(device=cns.device)
+ del cns_slices
+ del cns
+ else:
+ control.cond_hint = self.control_params[tuple_key][param_id][batch_id]
+ control = control.previous_controlnet
+
+
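+# MultiDiffusion: denoise each tile independently, sum the results into the buffer,
+# and divide by the per-pixel tile count where tiles overlap (uniform blending).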
+class MultiDiffusion(AbstractDiffusion):
+
+ @torch.inference_mode()
+ def __call__(self, model_function: BaseModel.apply_model, args: dict):
+ x_in: Tensor = args["input"]
+ t_in: Tensor = args["timestep"]
+ c_in: dict = args["c"]
+ cond_or_uncond: list = args["cond_or_uncond"]
+
+ N, C, H, W = x_in.shape
+
+ self.refresh = False
+ if self.weights is None or self.h != H or self.w != W:
+ self.h, self.w = H, W
+ self.refresh = True
+ self.init_grid_bbox(self.tile_width, self.tile_height, self.tile_overlap, self.tile_batch_size)
+ self.init_done()
+ self.h, self.w = H, W
+ self.reset_buffer(x_in)
+
+ for batch_id, bboxes in enumerate(self.batched_bboxes):
+ if processing_interrupted():
+ return x_in
+
+ x_tile = torch.cat([x_in[bbox.slicer] for bbox in bboxes], dim=0)
+ t_tile = repeat_to_batch_size(t_in, x_tile.shape[0])
+ c_tile = {}
+ for k, v in c_in.items():
+ if isinstance(v, torch.Tensor):
+ if len(v.shape) == len(x_tile.shape):
+ bboxes_ = bboxes
+ if v.shape[-2:] != x_in.shape[-2:]:
+ cf = x_in.shape[-1] * self.compression // v.shape[-1]
+ bboxes_ = self.get_grid_bbox(
+ self.width // cf,
+ self.height // cf,
+ self.overlap // cf,
+ self.tile_batch_size,
+ v.shape[-1],
+ v.shape[-2],
+ x_in.device,
+ self.get_tile_weights,
+ )
+ v = torch.cat([v[bbox_.slicer] for bbox_ in bboxes_[batch_id]])
+ if v.shape[0] != x_tile.shape[0]:
+ v = repeat_to_batch_size(v, x_tile.shape[0])
+ c_tile[k] = v
+
+ if "control" in c_in:
+ self.process_controlnet(x_tile, c_in, cond_or_uncond, bboxes, N, batch_id)
+ c_tile["control"] = c_in["control"].get_control_orig(x_tile, t_tile, c_tile, len(cond_or_uncond))
+
+ x_tile_out = model_function(x_tile, t_tile, **c_tile)
+
+ for i, bbox in enumerate(bboxes):
+ self.x_buffer[bbox.slicer] += x_tile_out[i * N : (i + 1) * N, :, :, :]
+ del x_tile_out, x_tile, t_tile, c_tile
+
+ return torch.where(self.weights > 1, self.x_buffer / self.weights, self.x_buffer)
+
+
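+# Same tiling loop as MultiDiffusion, but each tile is blended with a gaussian weight map
+# (see `get_weight`) so that seams between overlapping tiles are smoothed.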
+class MixtureOfDiffusers(AbstractDiffusion):
+ """
+ Mixture-of-Diffusers Implementation
+ https://github.com/albarji/mixture-of-diffusers
+ """
+
+ def init_done(self):
+ super().init_done()
+ self.rescale_factor = 1 / self.weights
+
+ @staticmethod
+ def get_weight(tile_w: int, tile_h: int) -> Tensor:
+ """
+ Copied from the original implementation of Mixture of Diffusers:
+ https://github.com/albarji/mixture-of-diffusers/blob/master/mixdiff/tiling.py
+ This generates gaussian weights to smooth the noise of each tile.
+ This is critical for this method to work.
+ """
+ f = lambda x, midpoint, var=0.01: exp(-(x - midpoint) * (x - midpoint) / (tile_w * tile_w) / (2 * var)) / sqrt(2 * pi * var)
+ x_probs = [f(x, (tile_w - 1) / 2) for x in range(tile_w)]
+ y_probs = [f(y, tile_h / 2) for y in range(tile_h)]
+
+ w = np.outer(y_probs, x_probs)
+ return torch.from_numpy(w).to(device, dtype=torch.float32)
+
+ def get_tile_weights(self) -> Tensor:
+ self.tile_weights = self.get_weight(self.tile_w, self.tile_h)
+ return self.tile_weights
+
+ @torch.inference_mode()
+ def __call__(self, model_function: BaseModel.apply_model, args: dict):
+ x_in: Tensor = args["input"]
+ t_in: Tensor = args["timestep"]
+ c_in: dict = args["c"]
+ cond_or_uncond: list = args["cond_or_uncond"]
+
+ N, C, H, W = x_in.shape
+
+ self.refresh = False
+ if self.weights is None or self.h != H or self.w != W:
+ self.h, self.w = H, W
+ self.refresh = True
+ self.init_grid_bbox(self.tile_width, self.tile_height, self.tile_overlap, self.tile_batch_size)
+ self.init_done()
+ self.h, self.w = H, W
+ self.reset_buffer(x_in)
+
+ for batch_id, bboxes in enumerate(self.batched_bboxes):
+ if processing_interrupted():
+ return x_in
+ x_tile_list = []
+ for bbox in bboxes:
+ x_tile_list.append(x_in[bbox.slicer])
+
+ x_tile = torch.cat(x_tile_list, dim=0)
+ t_tile = repeat_to_batch_size(t_in, x_tile.shape[0])
+ c_tile = {}
+ for k, v in c_in.items():
+ if isinstance(v, torch.Tensor):
+ if len(v.shape) == len(x_tile.shape):
+ bboxes_ = bboxes
+ if v.shape[-2:] != x_in.shape[-2:]:
+ cf = x_in.shape[-1] * self.compression // v.shape[-1]
+ bboxes_ = self.get_grid_bbox(
+ (tile_w := self.width // cf),
+ (tile_h := self.height // cf),
+ self.overlap // cf,
+ self.tile_batch_size,
+ v.shape[-1],
+ v.shape[-2],
+ x_in.device,
+ lambda: self.get_weight(tile_w, tile_h),
+ )
+ v = torch.cat([v[bbox_.slicer] for bbox_ in bboxes_[batch_id]])
+ if v.shape[0] != x_tile.shape[0]:
+ v = repeat_to_batch_size(v, x_tile.shape[0])
+ c_tile[k] = v
+
+ if "control" in c_in:
+ self.process_controlnet(x_tile, c_in, cond_or_uncond, bboxes, N, batch_id)
+ c_tile["control"] = c_in["control"].get_control_orig(x_tile, t_tile, c_tile, len(cond_or_uncond))
+
+ x_tile_out = model_function(x_tile, t_tile, **c_tile)
+
+ for i, bbox in enumerate(bboxes):
+ w = self.tile_weights * self.rescale_factor[bbox.slicer]
+ self.x_buffer[bbox.slicer] += x_tile_out[i * N : (i + 1) * N, :, :, :] * w
+ del x_tile_out, x_tile, t_tile, c_tile
+
+ return self.x_buffer
+
+
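+# Entry point: clones the ModelPatcher and installs the chosen strategy as the UNet
+# function wrapper, converting pixel-space tile settings to latent units (factor 8).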
+class TiledDiffusion:
+
+ @staticmethod
+ def apply(model: ModelPatcher, method: str, tile_width: int, tile_height: int, tile_overlap: int, tile_batch_size: int):
+ match method:
+ case "MultiDiffusion":
+ impl = MultiDiffusion()
+ case "Mixture of Diffusers":
+ impl = MixtureOfDiffusers()
+ case _:
+ raise SystemError
+
+ compression = 8
+ impl.tile_width = tile_width // compression
+ impl.tile_height = tile_height // compression
+ impl.tile_overlap = tile_overlap // compression
+ impl.tile_batch_size = tile_batch_size
+
+ impl.compression = compression
+ impl.width = tile_width
+ impl.height = tile_height
+ impl.overlap = tile_overlap
+
+ model = model.clone()
+ model.set_model_unet_function_wrapper(impl)
+
+ return model
diff --git a/extensions-builtin/sd_forge_multidiffusion/scripts/forge_multidiffusion.py b/extensions-builtin/sd_forge_multidiffusion/scripts/forge_multidiffusion.py
new file mode 100644
index 0000000000000000000000000000000000000000..442656c377511f0a8392c29674e66276155b824e
--- /dev/null
+++ b/extensions-builtin/sd_forge_multidiffusion/scripts/forge_multidiffusion.py
@@ -0,0 +1,46 @@
+import gradio as gr
+from lib_multidiffusion.tiled_diffusion import TiledDiffusion
+
+from modules import scripts
+from modules.ui_components import InputAccordion
+
+
+class MultiDiffusionForForge(scripts.Script):
+ sorting_priority = 16
+
+ def title(self):
+ return "MultiDiffusion Integrated"
+
+ def show(self, is_img2img):
+ return scripts.AlwaysVisible if is_img2img else None
+
+ def ui(self, *args, **kwargs):
+ with InputAccordion(False, label=self.title()) as enabled:
+ method = gr.Radio(label="Method", choices=("MultiDiffusion", "Mixture of Diffusers"), value="Mixture of Diffusers")
+ with gr.Row():
+ tile_width = gr.Slider(label="Tile Width", minimum=256, maximum=2048, step=64, value=768)
+ tile_height = gr.Slider(label="Tile Height", minimum=256, maximum=2048, step=64, value=768)
+ with gr.Row():
+ tile_overlap = gr.Slider(label="Tile Overlap", minimum=0, maximum=1024, step=16, value=64)
+ tile_batch_size = gr.Slider(label="Tile Batch Size", minimum=1, maximum=8, step=1, value=1)
+
+ return enabled, method, tile_width, tile_height, tile_overlap, tile_batch_size
+
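+ # Wrap the UNet with the selected tiling method and record the settings in the infotext.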
+ def process_before_every_sampling(self, p, enabled: bool, method: str, tile_width: int, tile_height: int, tile_overlap: int, tile_batch_size: int, **kwargs):
+ if not enabled:
+ return
+
+ unet = p.sd_model.forge_objects.unet
+ unet = TiledDiffusion.apply(unet, method, tile_width, tile_height, tile_overlap, tile_batch_size)
+ p.sd_model.forge_objects.unet = unet
+
+ p.extra_generation_params.update(
+ {
+ "multidiffusion_enabled": enabled,
+ "multidiffusion_method": method,
+ "multidiffusion_tile_width": tile_width,
+ "multidiffusion_tile_height": tile_height,
+ "multidiffusion_tile_overlap": tile_overlap,
+ "multidiffusion_tile_batch_size": tile_batch_size,
+ }
+ )
diff --git a/extensions-builtin/xyz/lib_xyz/builtins.py b/extensions-builtin/xyz/lib_xyz/builtins.py
index f1433eb6e5cca6e7d8e1da74f2a97c5b071a7fdf..08fa0719756c050831913a594cc0ac006be631db 100644
--- a/extensions-builtin/xyz/lib_xyz/builtins.py
+++ b/extensions-builtin/xyz/lib_xyz/builtins.py
@@ -31,44 +31,56 @@ from .utils import boolean_choice, str_permutations
builtin_options = [
AxisOption("Nothing", str, do_nothing, format_value=format_nothing),
AxisOption("Seed", int, apply_field("seed")),
- AxisOption("Var. seed", int, apply_field("subseed")),
- AxisOption("Var. strength", float, apply_field("subseed_strength")),
AxisOption("Steps", int, apply_field("steps")),
- AxisOptionTxt2Img("Hires steps", int, apply_field("hr_second_pass_steps")),
+ AxisOptionTxt2Img("Hires. steps", int, apply_field("hr_second_pass_steps")),
AxisOption("CFG Scale", float, apply_field("cfg_scale")),
- AxisOptionImg2Img("Image CFG Scale", float, apply_field("image_cfg_scale")),
AxisOption("Prompt S/R", str, apply_prompt, format_value=format_value),
AxisOption("Prompt order", str_permutations, apply_order, format_value=format_value_join_list),
AxisOptionTxt2Img("Sampler", str, apply_field("sampler_name"), format_value=format_value, confirm=confirm_samplers, choices=sd_samplers.visible_sampler_names),
- AxisOptionTxt2Img("Hires sampler", str, apply_field("hr_sampler_name"), confirm=confirm_samplers, choices=sd_samplers.visible_sampler_names),
+ AxisOptionTxt2Img("Hires. sampler", str, apply_field("hr_sampler_name"), confirm=confirm_samplers, choices=sd_samplers.visible_sampler_names),
AxisOptionImg2Img("Sampler", str, apply_field("sampler_name"), format_value=format_value, confirm=confirm_samplers, choices=sd_samplers.visible_sampler_names),
+ AxisOption("Schedule type", str, apply_field("scheduler"), choices=lambda: [x.label for x in sd_schedulers.schedulers]),
AxisOption("Checkpoint name", str, apply_checkpoint, format_value=format_remove_path, confirm=confirm_checkpoints, cost=1.0, choices=lambda: sorted(sd_models.checkpoints_list, key=str.casefold)),
- AxisOption("Negative Guidance minimum sigma", float, apply_field("s_min_uncond")),
AxisOption("Size", str, apply_size),
- AxisOption("Sigma Churn", float, apply_field("s_churn")),
- AxisOption("Sigma min", float, apply_field("s_tmin")),
- AxisOption("Sigma max", float, apply_field("s_tmax")),
- AxisOption("Sigma noise", float, apply_field("s_noise")),
- AxisOption("Schedule type", str, apply_field("scheduler"), choices=lambda: [x.label for x in sd_schedulers.schedulers]),
- AxisOption("Schedule min sigma", float, apply_override("sigma_min")),
- AxisOption("Schedule max sigma", float, apply_override("sigma_max")),
- AxisOption("Schedule rho", float, apply_override("rho")),
- AxisOption("Eta", float, apply_field("eta")),
- AxisOption("Clip skip", int, apply_clip_skip),
AxisOption("Denoising", float, apply_field("denoising_strength")),
- AxisOption("Initial noise multiplier", float, apply_field("initial_noise_multiplier")),
- AxisOption("Extra noise", float, apply_override("img2img_extra_noise")),
- AxisOptionTxt2Img("Hires upscaler", str, apply_field("hr_upscaler"), choices=lambda: [*shared.latent_upscale_modes, *[x.name for x in shared.sd_upscalers]]),
- AxisOptionImg2Img("Cond. Image Mask Weight", float, apply_field("inpainting_mask_weight")),
+ AxisOptionTxt2Img("Hires. upscaler", str, apply_field("hr_upscaler"), choices=lambda: [*shared.latent_upscale_modes, *[x.name for x in shared.sd_upscalers]]),
AxisOption("VAE", str, apply_vae, cost=0.7, choices=lambda: ["None"] + list(sd_vae.vae_dict)),
AxisOption("Styles", str, apply_styles, choices=lambda: list(shared.prompt_styles.styles)),
- AxisOption("UniPC Order", int, apply_uni_pc_order, cost=0.5),
- AxisOption("Face restore", str, apply_face_restore, format_value=format_value),
- AxisOption("Token merging ratio", float, apply_override("token_merging_ratio")),
- AxisOption("Token merging ratio high-res", float, apply_override("token_merging_ratio_hr")),
- AxisOption("Always discard next-to-last sigma", str, apply_override("always_discard_next_to_last_sigma", boolean=True), choices=boolean_choice(reverse=True)),
- AxisOption("SGM noise multiplier", str, apply_override("sgm_noise_multiplier", boolean=True), choices=boolean_choice(reverse=True)),
- AxisOption("Refiner checkpoint", str, apply_field("refiner_checkpoint"), format_value=format_remove_path, confirm=confirm_checkpoints_or_none, cost=1.0, choices=lambda: ["None"] + sorted(sd_models.checkpoints_list, key=str.casefold)),
- AxisOption("Refiner switch at", float, apply_field("refiner_switch_at")),
- AxisOption("RNG source", str, apply_override("randn_source"), choices=lambda: ["GPU", "CPU", "NV"]),
]
+
+if shared.cmd_opts.adv_xyz:
+ builtin_options.extend(
+ [
+ AxisOption("Var. seed", int, apply_field("subseed")),
+ AxisOption("Var. strength", float, apply_field("subseed_strength")),
+ AxisOption("Clip skip", int, apply_clip_skip),
+ AxisOption("Initial noise multiplier", float, apply_field("initial_noise_multiplier")),
+ AxisOption("Extra noise", float, apply_override("img2img_extra_noise")),
+ AxisOptionImg2Img("Image CFG Scale", float, apply_field("image_cfg_scale")),
+ AxisOptionImg2Img("Cond. Image Mask Weight", float, apply_field("inpainting_mask_weight")),
+ AxisOption("Face restore", str, apply_face_restore, format_value=format_value),
+ AxisOption("SkipEarly", float, apply_field("skip_early_cond")),
+ AxisOption("NGMS", float, apply_field("s_min_uncond")),
+ AxisOption("Token merging ratio", float, apply_override("token_merging_ratio")),
+ AxisOption("Always discard next-to-last sigma", str, apply_override("always_discard_next_to_last_sigma", boolean=True), choices=boolean_choice(reverse=True)),
+ AxisOption("SGM noise multiplier", str, apply_override("sgm_noise_multiplier", boolean=True), choices=boolean_choice(reverse=True)),
+ AxisOption("Refiner checkpoint", str, apply_field("refiner_checkpoint"), format_value=format_remove_path, confirm=confirm_checkpoints_or_none, cost=1.0, choices=lambda: ["None"] + sorted(sd_models.checkpoints_list, key=str.casefold)),
+ AxisOption("Refiner switch at", float, apply_field("refiner_switch_at")),
+ AxisOption("RNG source", str, apply_override("randn_source"), choices=lambda: ["GPU", "CPU", "NV"]),
+ ]
+ )
+
+if shared.cmd_opts.adv_samplers:
+ builtin_options.extend(
+ [
+ AxisOption("Sigma Churn", float, apply_field("s_churn")),
+ AxisOption("Sigma min", float, apply_field("s_tmin")),
+ AxisOption("Sigma max", float, apply_field("s_tmax")),
+ AxisOption("Sigma noise", float, apply_field("s_noise")),
+ AxisOption("Schedule min sigma", float, apply_override("sigma_min")),
+ AxisOption("Schedule max sigma", float, apply_override("sigma_max")),
+ AxisOption("Schedule rho", float, apply_override("rho")),
+ AxisOption("Eta", float, apply_field("eta")),
+ AxisOption("UniPC Order", int, apply_uni_pc_order, cost=0.5),
+ ]
+ )
diff --git a/html/extra-networks-no-cards.html b/html/extra-networks-no-cards.html
index 647808338679aea1c4d076d7b77d220d408db034..a463bf25250dde5da1187634dc2e3c05c8b123d7 100644
--- a/html/extra-networks-no-cards.html
+++ b/html/extra-networks-no-cards.html
@@ -1,6 +1,5 @@
Image contains no metadata...
Image contains no metadata...