Update README.md

README.md CHANGED (@@ -1,393 +1,7 @@)
## Features

* 3 interface modes: default (two columns), notebook, and chat
* Multiple model backends: [transformers](https://github.com/huggingface/transformers), [llama.cpp](https://github.com/ggerganov/llama.cpp), [ExLlama](https://github.com/turboderp/exllama), [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ), [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa), [ctransformers](https://github.com/marella/ctransformers)
* Dropdown menu for quickly switching between different models
* LoRA: load and unload LoRAs on the fly, train a new LoRA using QLoRA
* Precise instruction templates for chat mode, including Llama-2-chat, Alpaca, Vicuna, WizardLM, StableLM, and many others
* 4-bit, 8-bit, and CPU inference through the transformers library
* Use llama.cpp models with transformers samplers (`llamacpp_HF` loader)
* [Multimodal pipelines, including LLaVA and MiniGPT-4](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/multimodal)
* [Extensions framework](docs/Extensions.md)
* [Custom chat characters](docs/Chat-mode.md)
* Very efficient text streaming
* Markdown output with LaTeX rendering, to use for instance with [GALACTICA](https://github.com/paperswithcode/galai)
* API, including endpoints for websocket streaming ([see the examples](https://github.com/oobabooga/text-generation-webui/blob/main/api-examples))

To learn how to use the various features, check out the Documentation: https://github.com/oobabooga/text-generation-webui/tree/main/docs

## Installation

### One-click installers

| Windows | Linux | macOS | WSL |
|--------|--------|--------|--------|
| [oobabooga-windows.zip](https://github.com/oobabooga/text-generation-webui/releases/download/installers/oobabooga_windows.zip) | [oobabooga-linux.zip](https://github.com/oobabooga/text-generation-webui/releases/download/installers/oobabooga_linux.zip) | [oobabooga-macos.zip](https://github.com/oobabooga/text-generation-webui/releases/download/installers/oobabooga_macos.zip) | [oobabooga-wsl.zip](https://github.com/oobabooga/text-generation-webui/releases/download/installers/oobabooga_wsl.zip) |

Just download the zip above, extract it, and double-click on "start". The web UI and all its dependencies will be installed in the same folder.

* The source code and more information can be found here: https://github.com/oobabooga/one-click-installers
* There is no need to run the installers as admin.
* Huge thanks to [@jllllll](https://github.com/jllllll), [@ClayShoaf](https://github.com/ClayShoaf), and [@xNul](https://github.com/xNul) for their contributions to these installers.

### Manual installation using Conda

Recommended if you have some experience with the command line.

#### 0. Install Conda

https://docs.conda.io/en/latest/miniconda.html

On Linux or WSL, it can be automatically installed with these two commands ([source](https://educe-ubc.github.io/conda.html)):

```
curl -sL "https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh" > "Miniconda3.sh"
bash Miniconda3.sh
```

#### 1. Create a new conda environment

```
conda create -n textgen python=3.10.9
conda activate textgen
```

#### 2. Install PyTorch

| System | GPU | Command |
|--------|---------|---------|
| Linux/WSL | NVIDIA | `pip3 install torch torchvision torchaudio` |
| Linux/WSL | CPU only | `pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu` |
| Linux | AMD | `pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.4.2` |
| macOS + MPS | Any | `pip3 install torch torchvision torchaudio` |
| Windows | NVIDIA | `pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117` |
| Windows | CPU only | `pip3 install torch torchvision torchaudio` |

The up-to-date commands can be found here: https://pytorch.org/get-started/locally/.
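
After installing, you can optionally confirm that PyTorch sees your GPU. This quick check is not part of the official instructions, and the output depends on your hardware and on whether you installed a CUDA build:

```
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```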

#### 3. Install the web UI

```
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
pip install -r requirements.txt
```

#### AMD, Metal, Intel Arc, and CPUs without AVX2

1) Replace the last command above with

```
pip install -r requirements_nocuda.txt
```

2) Manually install llama-cpp-python using the appropriate command for your hardware: [Installation from PyPI](https://github.com/abetlen/llama-cpp-python#installation-from-pypi). See the sketch after this list.

3) AMD: Manually install AutoGPTQ: [Installation](https://github.com/PanQiWei/AutoGPTQ#installation).

4) AMD: Manually install [ExLlama](https://github.com/turboderp/exllama) by simply cloning it into the `repositories` folder (it will be automatically compiled at runtime after that):

```
cd text-generation-webui
mkdir repositories
cd repositories
git clone https://github.com/turboderp/exllama
```
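
As a sketch of what step 2 usually looks like, llama-cpp-python selects its backend through CMake flags passed via environment variables. The flag names below are taken from llama-cpp-python's own documentation and may change between releases, so verify them against the link in step 2 before running:

```
# NVIDIA (cuBLAS)
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir

# Apple Silicon (Metal)
CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir

# AMD (hipBLAS/ROCm)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir
```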

#### bitsandbytes on older NVIDIA GPUs

bitsandbytes >= 0.39 may not work. In that case, to use `--load-in-8bit`, you may have to downgrade like this:

* Linux: `pip install bitsandbytes==0.38.1`
* Windows: `pip install https://github.com/jllllll/bitsandbytes-windows-webui/raw/main/bitsandbytes-0.38.1-py3-none-any.whl`

### Alternative: Docker

```
ln -s docker/{Dockerfile,docker-compose.yml,.dockerignore} .
cp docker/.env.example .env
# Edit .env and set TORCH_CUDA_ARCH_LIST based on your GPU model
docker compose up --build
```

* You need to have docker compose v2.17 or higher installed. See [this guide](https://github.com/oobabooga/text-generation-webui/blob/main/docs/Docker.md) for instructions.
* For additional docker files, check out [this repository](https://github.com/Atinoda/text-generation-webui-docker).

### Updating the requirements

From time to time, the `requirements.txt` changes. To update, use these commands:

```
conda activate textgen
cd text-generation-webui
pip install -r requirements.txt --upgrade
```

## Downloading models

Models should be placed in the `text-generation-webui/models` folder. They are usually downloaded from [Hugging Face](https://huggingface.co/models?pipeline_tag=text-generation&sort=downloads).

* Transformers or GPTQ models are made of several files and must be placed in a subfolder. Example:

```
text-generation-webui
├── models
│   ├── lmsys_vicuna-33b-v1.3
│   │   ├── config.json
│   │   ├── generation_config.json
│   │   ├── pytorch_model-00001-of-00007.bin
│   │   ├── pytorch_model-00002-of-00007.bin
│   │   ├── pytorch_model-00003-of-00007.bin
│   │   ├── pytorch_model-00004-of-00007.bin
│   │   ├── pytorch_model-00005-of-00007.bin
│   │   ├── pytorch_model-00006-of-00007.bin
│   │   ├── pytorch_model-00007-of-00007.bin
│   │   ├── pytorch_model.bin.index.json
│   │   ├── special_tokens_map.json
│   │   ├── tokenizer_config.json
│   │   └── tokenizer.model
```

* GGML/GGUF models are a single file and should be placed directly into `models`. Example:

```
text-generation-webui
├── models
│   ├── llama-13b.ggmlv3.q4_K_M.bin
```

In both cases, you can use the "Model" tab of the UI to download the model from Hugging Face automatically. It is also possible to download via the command-line with `python download-model.py organization/model` (use `--help` to see all the options).
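
For example (the model name below is only an illustration; substitute any Hugging Face repository you actually want):

```
# list all available options
python download-model.py --help

# download a model into text-generation-webui/models
python download-model.py facebook/opt-1.3b
```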

#### GPT-4chan

<details>
<summary>
Instructions
</summary>

[GPT-4chan](https://huggingface.co/ykilcher/gpt-4chan) has been shut down from Hugging Face, so you need to download it elsewhere. You have two options:

* Torrent: [16-bit](https://archive.org/details/gpt4chan_model_float16) / [32-bit](https://archive.org/details/gpt4chan_model)
* Direct download: [16-bit](https://theswissbay.ch/pdf/_notpdf_/gpt4chan_model_float16/) / [32-bit](https://theswissbay.ch/pdf/_notpdf_/gpt4chan_model/)

The 32-bit version is only relevant if you intend to run the model in CPU mode. Otherwise, you should use the 16-bit version.

After downloading the model, follow these steps:

1. Place the files under `models/gpt4chan_model_float16` or `models/gpt4chan_model`.
2. Place GPT-J 6B's config.json file in that same folder: [config.json](https://huggingface.co/EleutherAI/gpt-j-6B/raw/main/config.json).
3. Download GPT-J 6B's tokenizer files (they will be automatically detected when you attempt to load GPT-4chan):

```
python download-model.py EleutherAI/gpt-j-6B --text-only
```

When you load this model in default or notebook modes, the "HTML" tab will show the generated text in 4chan format:



</details>

## Starting the web UI

```
conda activate textgen
cd text-generation-webui
python server.py
```

Then browse to

`http://localhost:7860/?__theme=dark`

Optionally, you can use the following command-line flags:

#### Basic settings

| Flag | Description |
|--------------------------------------------|-------------|
| `-h`, `--help` | Show this help message and exit. |
| `--multi-user` | Multi-user mode. Chat histories are not saved or automatically loaded. WARNING: this is highly experimental. |
| `--character CHARACTER` | The name of the character to load in chat mode by default. |
| `--model MODEL` | Name of the model to load by default. |
| `--lora LORA [LORA ...]` | The list of LoRAs to load. If you want to load more than one LoRA, write the names separated by spaces. |
| `--model-dir MODEL_DIR` | Path to directory with all the models. |
| `--lora-dir LORA_DIR` | Path to directory with all the loras. |
| `--model-menu` | Show a model menu in the terminal when the web UI is first launched. |
| `--settings SETTINGS_FILE` | Load the default interface settings from this yaml file. See `settings-template.yaml` for an example. If you create a file called `settings.yaml`, this file will be loaded by default without the need to use the `--settings` flag. |
| `--extensions EXTENSIONS [EXTENSIONS ...]` | The list of extensions to load. If you want to load more than one extension, write the names separated by spaces. |
| `--verbose` | Print the prompts to the terminal. |
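
As an illustration of how these flags combine (all flags shown here come from the tables in this README; there is nothing model-specific about the combination):

```
# pick a model interactively, listen on the local network, and print prompts
python server.py --model-menu --listen --verbose
```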

#### Model loader

| Flag | Description |
|--------------------------------------------|-------------|
| `--loader LOADER` | Choose the model loader manually; otherwise, it will be autodetected. Valid options: transformers, autogptq, gptq-for-llama, exllama, exllama_hf, llamacpp, rwkv, ctransformers |

#### Accelerate/transformers

| Flag | Description |
|---------------------------------------------|-------------|
| `--cpu` | Use the CPU to generate text. Warning: Training on CPU is extremely slow. |
| `--auto-devices` | Automatically split the model across the available GPU(s) and CPU. |
| `--gpu-memory GPU_MEMORY [GPU_MEMORY ...]` | Maximum GPU memory in GiB to be allocated per GPU. Example: `--gpu-memory 10` for a single GPU, `--gpu-memory 10 5` for two GPUs. You can also set values in MiB like `--gpu-memory 3500MiB`. |
| `--cpu-memory CPU_MEMORY` | Maximum CPU memory in GiB to allocate for offloaded weights. Same as above. |
| `--disk` | If the model is too large for your GPU(s) and CPU combined, send the remaining layers to the disk. |
| `--disk-cache-dir DISK_CACHE_DIR` | Directory to save the disk cache to. Defaults to `cache/`. |
| `--load-in-8bit` | Load the model with 8-bit precision (using bitsandbytes). |
| `--bf16` | Load the model with bfloat16 precision. Requires NVIDIA Ampere GPU. |
| `--no-cache` | Set `use_cache` to False while generating text. This reduces the VRAM usage a bit with a performance cost. |
| `--xformers` | Use xformer's memory efficient attention. This should increase your tokens/s. |
| `--sdp-attention` | Use torch 2.0's sdp attention. |
| `--trust-remote-code` | Set trust_remote_code=True while loading a model. Necessary for ChatGLM and Falcon. |

#### Accelerate 4-bit

⚠️ Requires a minimum CUDA compute capability of 7.0 on Windows at the moment.

| Flag | Description |
|---------------------------------------------|-------------|
| `--load-in-4bit` | Load the model with 4-bit precision (using bitsandbytes). |
| `--compute_dtype COMPUTE_DTYPE` | compute dtype for 4-bit. Valid options: bfloat16, float16, float32. |
| `--quant_type QUANT_TYPE` | quant_type for 4-bit. Valid options: nf4, fp4. |
| `--use_double_quant` | use_double_quant for 4-bit. |
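
For example, a 4-bit load using the flags above might look like this (the dtype and quant type shown are just one common combination, not a recommendation from the project):

```
python server.py --load-in-4bit --compute_dtype bfloat16 --quant_type nf4 --use_double_quant
```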

#### GGML/GGUF (for llama.cpp and ctransformers)

| Flag | Description |
|-------------|-------------|
| `--threads` | Number of threads to use. |
| `--n_batch` | Maximum number of prompt tokens to batch together when calling llama_eval. |
| `--n-gpu-layers N_GPU_LAYERS` | Number of layers to offload to the GPU. Only works if llama-cpp-python was compiled with BLAS. Set this to 1000000000 to offload all layers to the GPU. |
| `--n_ctx N_CTX` | Size of the prompt context. |

#### llama.cpp

| Flag | Description |
|---------------|---------------|
| `--no-mmap` | Prevent mmap from being used. |
| `--mlock` | Force the system to keep the model in RAM. |
| `--mul_mat_q` | Activate new mulmat kernels. |
| `--cache-capacity CACHE_CAPACITY` | Maximum cache capacity. Examples: 2000MiB, 2GiB. When provided without units, bytes will be assumed. |
| `--tensor_split TENSOR_SPLIT` | Split the model across multiple GPUs, comma-separated list of proportions, e.g. 18,17 |
| `--llama_cpp_seed SEED` | Seed for llama-cpp models. Default 0 (random). |
| `--n_gqa N_GQA` | GGML only (not used by GGUF): Grouped-Query Attention. Must be 8 for llama-2 70b. |
| `--rms_norm_eps RMS_NORM_EPS` | GGML only (not used by GGUF): 5e-6 is a good value for llama-2 models. |
| `--cpu` | Use the CPU version of llama-cpp-python instead of the GPU-accelerated version. |
| `--cfg-cache` | llamacpp_HF: Create an additional cache for CFG negative prompts. |
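
Putting the GGML/GGUF and llama.cpp flags together, a typical GPU-offloaded launch could look like the following. The file name matches the example in the Downloading models section; the layer count, context size, and thread count are placeholders you should adjust for your hardware:

```
python server.py --model llama-13b.ggmlv3.q4_K_M.bin --loader llamacpp --n-gpu-layers 35 --n_ctx 4096 --threads 8
```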

#### ctransformers

| Flag | Description |
|-------------|-------------|
| `--model_type MODEL_TYPE` | Model type of pre-quantized model. Currently gpt2, gptj, gptneox, falcon, llama, mpt, starcoder (gptbigcode), dollyv2, and replit are supported. |

#### AutoGPTQ

| Flag | Description |
|------------------|-------------|
| `--triton` | Use triton. |
| `--no_inject_fused_attention` | Disable the use of fused attention, which will use less VRAM at the cost of slower inference. |
| `--no_inject_fused_mlp` | Triton mode only: disable the use of fused MLP, which will use less VRAM at the cost of slower inference. |
| `--no_use_cuda_fp16` | This can make models faster on some systems. |
| `--desc_act` | For models that don't have a quantize_config.json, this parameter is used to define whether to set desc_act or not in BaseQuantizeConfig. |
| `--disable_exllama` | Disable ExLlama kernel, which can improve inference speed on some systems. |

#### ExLlama

| Flag | Description |
|------------------|-------------|
| `--gpu-split` | Comma-separated list of VRAM (in GB) to use per GPU device for model layers, e.g. `20,7,7` |
| `--max_seq_len MAX_SEQ_LEN` | Maximum sequence length. |
| `--cfg-cache` | ExLlama_HF: Create an additional cache for CFG negative prompts. Necessary to use CFG with that loader, but not necessary for CFG with base ExLlama. |

#### GPTQ-for-LLaMa

| Flag | Description |
|---------------------------|-------------|
| `--wbits WBITS` | Load a pre-quantized model with specified precision in bits. 2, 3, 4 and 8 are supported. |
| `--model_type MODEL_TYPE` | Model type of pre-quantized model. Currently LLaMA, OPT, and GPT-J are supported. |
| `--groupsize GROUPSIZE` | Group size. |
| `--pre_layer PRE_LAYER [PRE_LAYER ...]` | The number of layers to allocate to the GPU. Setting this parameter enables CPU offloading for 4-bit models. For multi-gpu, write the numbers separated by spaces, eg `--pre_layer 30 60`. |
| `--checkpoint CHECKPOINT` | The path to the quantized checkpoint file. If not specified, it will be automatically detected. |
| `--monkey-patch` | Apply the monkey patch for using LoRAs with quantized models. |

#### DeepSpeed

| Flag | Description |
|---------------------------------------|-------------|
| `--deepspeed` | Enable the use of DeepSpeed ZeRO-3 for inference via the Transformers integration. |
| `--nvme-offload-dir NVME_OFFLOAD_DIR` | DeepSpeed: Directory to use for ZeRO-3 NVME offloading. |
| `--local_rank LOCAL_RANK` | DeepSpeed: Optional argument for distributed setups. |

#### RWKV

| Flag | Description |
|---------------------------------|-------------|
| `--rwkv-strategy RWKV_STRATEGY` | RWKV: The strategy to use while loading the model. Examples: "cpu fp32", "cuda fp16", "cuda fp16i8". |
| `--rwkv-cuda-on` | RWKV: Compile the CUDA kernel for better performance. |

#### RoPE (for llama.cpp, ExLlama, and transformers)

| Flag | Description |
|------------------|-------------|
| `--alpha_value ALPHA_VALUE` | Positional embeddings alpha factor for NTK RoPE scaling. Use either this or compress_pos_emb, not both. |
| `--rope_freq_base ROPE_FREQ_BASE` | If greater than 0, will be used instead of alpha_value. Those two are related by rope_freq_base = 10000 * alpha_value ^ (64 / 63). |
| `--compress_pos_emb COMPRESS_POS_EMB` | Positional embeddings compression factor. Should be set to (context length) / (model's original context length). Equal to 1/rope_freq_scale. |
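
As a quick sanity check of the relation above: for `alpha_value = 2`, `rope_freq_base = 10000 * 2^(64/63) ≈ 20221`. You can reproduce the arithmetic with a one-liner (this is just a convenience check, not a project script):

```
python -c "alpha = 2; print(10000 * alpha ** (64 / 63))"
```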

#### Gradio

| Flag | Description |
|---------------------------------------|-------------|
| `--listen` | Make the web UI reachable from your local network. |
| `--listen-host LISTEN_HOST` | The hostname that the server will use. |
| `--listen-port LISTEN_PORT` | The listening port that the server will use. |
| `--share` | Create a public URL. This is useful for running the web UI on Google Colab or similar. |
| `--auto-launch` | Open the web UI in the default browser upon launch. |
| `--gradio-auth USER:PWD` | Set Gradio authentication like "username:password"; or comma-delimit multiple like "u1:p1,u2:p2,u3:p3". |
| `--gradio-auth-path GRADIO_AUTH_PATH` | Set the Gradio authentication file path. The file should contain one or more user:password pairs in this format: "u1:p1,u2:p2,u3:p3" |
| `--ssl-keyfile SSL_KEYFILE` | The path to the SSL certificate key file. |
| `--ssl-certfile SSL_CERTFILE` | The path to the SSL certificate cert file. |
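
For instance, to expose the UI on your local network behind a password (the port and credentials here are obviously placeholders):

```
python server.py --listen --listen-port 7861 --gradio-auth admin:change-me
```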

#### API

| Flag | Description |
|---------------------------------------|-------------|
| `--api` | Enable the API extension. |
| `--public-api` | Create a public URL for the API using Cloudflare. |
| `--public-api-id PUBLIC_API_ID` | Tunnel ID for named Cloudflare Tunnel. Use together with public-api option. |
| `--api-blocking-port BLOCKING_PORT` | The listening port for the blocking API. |
| `--api-streaming-port STREAMING_PORT` | The listening port for the streaming API. |
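
For example, to start the web UI with the API extension on custom ports (the port numbers are arbitrary choices):

```
python server.py --api --api-blocking-port 5000 --api-streaming-port 5005
```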

#### Multimodal

| Flag | Description |
|---------------------------------------|-------------|
| `--multimodal-pipeline PIPELINE` | The multimodal pipeline to use. Examples: `llava-7b`, `llava-13b`. |

## Presets

Inference settings presets can be created under `presets/` as yaml files. These files are detected automatically at startup.
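
A preset is just a yaml file whose keys mirror the generation parameters exposed in the UI. As a sketch only (the file name, parameter names, and values below are illustrative assumptions, not one of the bundled presets), you could create one like this:

```
cat > presets/My-Preset.yaml << 'EOF'
temperature: 0.7
top_p: 0.9
top_k: 40
repetition_penalty: 1.15
EOF
```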

The presets that are included by default are the result of a contest that received 7215 votes. More details can be found [here](https://github.com/oobabooga/oobabooga.github.io/blob/main/arena/results.md).

## Contributing

If you would like to contribute to the project, check out the [Contributing guidelines](https://github.com/oobabooga/text-generation-webui/wiki/Contributing-guidelines).

## Community

* Subreddit: https://www.reddit.com/r/oobabooga/
* Discord: https://discord.gg/jwZCF2dPQN

## Acknowledgment

In August 2023, [Andreessen Horowitz](https://a16z.com/) (a16z) provided a generous grant to encourage and support my independent work on this project. I am **extremely** grateful for their trust and recognition, which will allow me to dedicate more time towards realizing the full potential of text-generation-webui.

Per the hunk header above, the README content was replaced with the following seven-line stub (Hugging Face Spaces front-matter placeholders):

```
title: {{title}}
emoji: {{emoji}}
colorFrom: {{colorFrom}}
colorTo: {{colorTo}}
sdk: {{sdk}}
sdk_version: {{sdkVersion}}
app_file: app.py
```