54 changes: 29 additions & 25 deletions README.md
# TensorRT Extension for Stable Diffusion

This extension enables the best performance on NVIDIA RTX GPUs for Stable Diffusion with TensorRT.
You need to install the extension and generate optimized engines before use. Please follow the instructions below to set everything up.
Supports Stable Diffusion 1.5, 2.1, SDXL, SDXL Turbo, and LCM. For SDXL and SDXL Turbo, we recommend using a GPU with 12 GB or more of VRAM for best performance, due to their size and computational intensity.

## Installation

Example instructions for Automatic1111:
3. Copy the link to this repository and paste it into URL for extension's git repository
4. Click Install


## How to use

1. Click on the “Generate Default Engines” button. This step takes 2-10 minutes depending on your GPU. You can generate engines for other combinations.
2. Go to Settings → User Interface → Quick Settings List and add `sd_unet`. Apply these settings, then reload the UI.
3. Back in the main UI, select “Automatic” from the sd_unet dropdown menu at the top of the page if not already selected.
4. You can now start generating images accelerated by TRT. If you need to create more Engines, go to the TensorRT tab.

Happy prompting!

### LoRA

To use LoRA / LyCORIS checkpoints, they first need to be converted to TensorRT format. This can be done in the Export LoRA tab of the TensorRT extension.
1. Select a LoRA checkpoint from the dropdown.
2. Export. (This will not generate an engine, only convert the weights, which takes ~20 s.)
3. You can then use the exported LoRAs in your prompt as usual.


## More Information

TensorRT uses optimized engines for specific resolutions and batch sizes. You can generate as many optimized engines as desired. The available engine types are:

- The “Export Default Engines” selection adds support for resolutions between `512 x 512` and `768 x 768` for Stable Diffusion 1.5 and 2.1 with batch sizes 1 to 4. For SDXL, this selection generates an engine supporting a resolution of `1024 x 1024` with a batch size of `1`.
- Static engines support a single specific output resolution and batch size.
- Dynamic engines support a range of resolutions and batch sizes, at a small cost in performance. Wider ranges will use more VRAM.
- The first time generating an engine for a checkpoint will take longer. Additional engines generated for the same checkpoint will be much faster.

Each preset can be adjusted with the “Advanced Settings” option. More detailed instructions can be found [here](https://nvidia.custhelp.com/app/answers/detail/a_id/5487/~/tensorrt-extension-for-stable-diffusion-web-ui).
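Internally (see `datastructures.py` in this PR), each engine profile stores a `(min, opt, max)` shape triple per input tensor. The sketch below illustrates how a request is checked against a dynamic profile's bounds; the shapes and the `fits` helper are illustrative, not a real exported profile or part of the extension's API:

```python
# Illustrative dynamic-engine profile: (min, opt, max) shape triples.
# "sample" axes: (2 * batch, latent_channels, height / 8, width / 8).
profile = {
    "sample": [(2, 4, 64, 64), (2, 4, 64, 64), (8, 4, 96, 96)],  # 512x512 - 768x768, batch 1-4
    "encoder_hidden_states": [(2, 77, 768), (2, 77, 768), (8, 154, 768)],
}

def fits(profile, batch_size, height, width):
    """Return True if a request falls inside the profile's min/max bounds."""
    lo, _opt, hi = profile["sample"]
    # Batch is doubled for cond/uncond; spatial dims are in latent space (1/8).
    b, h, w = batch_size * 2, height // 8, width // 8
    return lo[0] <= b <= hi[0] and lo[2] <= h <= hi[2] and lo[3] <= w <= hi[3]
```

With this profile, a `640 x 640` request at batch size 2 fits, while `1024 x 1024` falls outside the range and would need a different engine.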

### Common Issues/Limitations

**HIRES FIX**: If using the hires.fix option in Automatic1111 you must build engines that match both the starting and ending resolutions. For instance, if the initial size is `512 x 512` and hires.fix upscales to `1024 x 1024`, you must generate a single dynamic engine that covers the whole range.
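The requirement can be stated as a simple range check; a minimal sketch (the function name is ours, not part of the extension):

```python
def covers_hires(res_min, res_max, base, upscaled):
    """True if one dynamic engine range spans both the base and upscaled resolutions."""
    return res_min <= base <= res_max and res_min <= upscaled <= res_max

covers_hires(512, 1024, 512, 1024)  # a 512-1024 dynamic engine covers both passes
```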

**Resolution:** When generating images the resolution needs to be a multiple of 64. This applies to hires.fix as well, requiring the low and high-res to be divisible by 64.
**Resolution**: When generating images, the resolution needs to be a multiple of 64. This applies to hires.fix as well, requiring the low and high-res to be divisible by 64.
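For example, a requested size can be snapped to a valid multiple of 64; this is a hypothetical helper (rounding down) rather than anything the extension exposes:

```python
def snap64(dim):
    """Round a requested dimension down to the nearest multiple of 64."""
    return (dim // 64) * 64

snap64(1000)  # 960 - 1000 is not divisible by 64
snap64(512)   # 512 - already valid
```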

**Failing CMD arguments**:

- `medvram` and `lowvram` have caused issues when compiling the engine.
- `api` has caused the `model.json` to not be updated, resulting in SD Unets not appearing after compilation.

**Failing installation or TensorRT tab not appearing in UI**: This is most likely due to a failed install. To resolve this manually, use this [guide](https://github.com/NVIDIA/Stable-Diffusion-WebUI-TensorRT/issues/27#issuecomment-1767570566).

## Requirements
**Driver**:

- Linux: >= 450.80.02
- Windows: >= 452.39

We always recommend keeping the driver up to date for system-wide performance improvements.
239 changes: 239 additions & 0 deletions datastructures.py
from dataclasses import dataclass
from enum import Enum
from json import JSONEncoder
import torch


class SDVersion(Enum):
    SD1 = 1
    SD2 = 2
    SDXL = 3
    Unknown = -1

    def __str__(self):
        return self.name

    @classmethod
    def from_str(cls, s):
        try:
            return cls[s]
        except KeyError:
            return cls.Unknown

    def match(self, sd_model):
        # Unknown acts as a wildcard and matches any loaded model.
        if sd_model.is_sd1 and self == SDVersion.SD1:
            return True
        elif sd_model.is_sd2 and self == SDVersion.SD2:
            return True
        elif sd_model.is_sdxl and self == SDVersion.SDXL:
            return True
        elif self == SDVersion.Unknown:
            return True
        else:
            return False


class ModelType(Enum):
    UNET = 0
    CONTROLNET = 1
    LORA = 2
    UNDEFINED = -1

    @classmethod
    def from_string(cls, s):
        return getattr(cls, s.upper(), None)

    def __str__(self):
        return self.name.lower()


@dataclass
class ModelConfig:
    profile: dict
    static_shapes: bool
    fp32: bool
    inpaint: bool
    refit: bool
    lora: bool
    vram: int
    unet_hidden_dim: int = 4

    def is_compatible_from_dict(self, feed_dict: dict):
        # Score how far the requested shapes are from the profile optimum;
        # reject if any dimension falls outside the [min, max] bounds.
        distance = 0
        for k, v in feed_dict.items():
            _min, _opt, _max = self.profile[k]
            v_tensor = torch.Tensor(list(v.shape))
            r_min = torch.Tensor(_max) - v_tensor
            r_opt = (torch.Tensor(_opt) - v_tensor).abs()
            r_max = v_tensor - torch.Tensor(_min)
            if torch.any(r_min < 0) or torch.any(r_max < 0):
                return (False, distance)
            distance += r_opt.sum() + 0.5 * (r_max.sum() + 0.5 * r_min.sum())
        return (True, distance)

    def is_compatible(
        self, width: int, height: int, batch_size: int, max_embedding: int
    ):
        distance = 0
        sample = self.profile["sample"]
        embedding = self.profile["encoder_hidden_states"]

        # Batch is doubled for cond/uncond; spatial dims are in latent space (1/8).
        batch_size *= 2
        width = width // 8
        height = height // 8

        _min, _opt, _max = sample
        if _min[0] > batch_size or _max[0] < batch_size:
            return (False, distance)
        if _min[2] > height or _max[2] < height:
            return (False, distance)
        if _min[3] > width or _max[3] < width:
            return (False, distance)

        _min_em, _opt_em, _max_em = embedding
        if _min_em[1] > max_embedding or _max_em[1] < max_embedding:
            return (False, distance)

        distance = (
            abs(_opt[0] - batch_size)
            + abs(_opt[2] - height)
            + abs(_opt[3] - width)
            + 0.5 * (abs(_max[2] - height) + abs(_max[3] - width))
        )

        return (True, distance)


class ModelConfigEncoder(JSONEncoder):
    def default(self, o: ModelConfig):
        return o.__dict__


@dataclass
class ProfileSettings:
    bs_min: int
    bs_opt: int
    bs_max: int
    h_min: int
    h_opt: int
    h_max: int
    w_min: int
    w_opt: int
    w_max: int
    t_min: int
    t_opt: int
    t_max: int
    static_shape: bool = False

    def __str__(self) -> str:
        return "Batch Size: {}-{}-{}\nHeight: {}-{}-{}\nWidth: {}-{}-{}\nToken Count: {}-{}-{}".format(
            self.bs_min,
            self.bs_opt,
            self.bs_max,
            self.h_min,
            self.h_opt,
            self.h_max,
            self.w_min,
            self.w_opt,
            self.w_max,
            self.t_min,
            self.t_opt,
            self.t_max,
        )

    def out(self):
        return (
            self.bs_min,
            self.bs_opt,
            self.bs_max,
            self.h_min,
            self.h_opt,
            self.h_max,
            self.w_min,
            self.w_opt,
            self.w_max,
            self.t_min,
            self.t_opt,
            self.t_max,
        )

    def token_to_dim(self, static_shapes: bool):
        # Each chunk of 75 prompt tokens is padded to 77 (BOS/EOS added).
        self.t_min = (self.t_min // 75) * 77
        self.t_opt = (self.t_opt // 75) * 77
        self.t_max = (self.t_max // 75) * 77

        if static_shapes:
            self.t_min = self.t_max = self.t_opt
            self.bs_min = self.bs_max = self.bs_opt
            self.h_min = self.h_max = self.h_opt
            self.w_min = self.w_max = self.w_opt
            self.static_shape = True

    def get_latent_dim(self):
        return (
            self.h_min // 8,
            self.h_opt // 8,
            self.h_max // 8,
            self.w_min // 8,
            self.w_opt // 8,
            self.w_max // 8,
        )

    def get_a1111_batch_dim(self):
        # A1111 runs cond and uncond together, so short prompts double the
        # batch axis; long prompts (> 77 tokens) change how it is consumed.
        static_batch = self.bs_min == self.bs_max == self.bs_opt
        if self.t_max <= 77:
            return (self.bs_min * 2, self.bs_opt * 2, self.bs_max * 2)
        elif self.t_max > 77 and static_batch:
            return (self.bs_opt, self.bs_opt, self.bs_opt)
        elif self.t_max > 77 and not static_batch:
            if self.t_opt > 77:
                return (self.bs_min, self.bs_opt, self.bs_max * 2)
            return (self.bs_min, self.bs_opt * 2, self.bs_max * 2)
        else:
            raise Exception("Uncovered case in get_batch_dim")


class ProfilePrests:
    def __init__(self):
        self.profile_presets = {
            "512x512 | Batch Size 1 (Static)": ProfileSettings(
                1, 1, 1, 512, 512, 512, 512, 512, 512, 75, 75, 75
            ),
            "768x768 | Batch Size 1 (Static)": ProfileSettings(
                1, 1, 1, 768, 768, 768, 768, 768, 768, 75, 75, 75
            ),
            "1024x1024 | Batch Size 1 (Static)": ProfileSettings(
                1, 1, 1, 1024, 1024, 1024, 1024, 1024, 1024, 75, 75, 75
            ),
            "256x256 - 512x512 | Batch Size 1-4": ProfileSettings(
                1, 1, 4, 256, 512, 512, 256, 512, 512, 75, 75, 150
            ),
            "512x512 - 768x768 | Batch Size 1-4": ProfileSettings(
                1, 1, 4, 512, 512, 768, 512, 512, 768, 75, 75, 150
            ),
            "768x768 - 1024x1024 | Batch Size 1-4": ProfileSettings(
                1, 1, 4, 768, 1024, 1024, 768, 1024, 1024, 75, 75, 150
            ),
        }
        self.default = ProfileSettings(
            1, 1, 4, 512, 512, 768, 512, 512, 768, 75, 75, 150
        )
        self.default_xl = ProfileSettings(
            1, 1, 1, 1024, 1024, 1024, 1024, 1024, 1024, 75, 75, 75
        )

    def get_settings_from_version(self, version: str):
        static = False
        if version == "Default":
            return *self.default.out(), static
        if "Static" in version:
            static = True
        return *self.profile_presets[version].out(), static

    def get_choices(self):
        return list(self.profile_presets.keys()) + ["Default"]

    def get_default(self, is_xl: bool):
        if is_xl:
            return self.default_xl
        return self.default