[Feature]: Regional Prompting script fails when Model CPU offload is enabled #3343
Labels: enhancement (New feature or request)
Issue Description
Regional Prompting appears to require the entire model to reside on a single device, which makes it unusable for users with limited VRAM. With Model CPU offload enabled, generation fails with:

```
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
```

Tested on nvidia, zluda, and directml. The script works normally when Model CPU offload is disabled.
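For context, this class of error usually happens when a script creates or keeps auxiliary tensors (e.g. regional masks or embeddings) on a hard-coded device while offloading moves model weights between CPU and GPU. A minimal sketch of the usual mitigation, assuming the tensors involved are plain PyTorch tensors (`to_common_device` is a hypothetical helper, not part of Diffusers):

```python
import torch

def to_common_device(*tensors):
    # Move every tensor to the device of the first one, mirroring what an
    # offload-aware script must do before combining them in one op.
    target = tensors[0].device
    return tuple(t.to(target) for t in tensors)

# Stand-ins: with CPU offload, one operand may be left on "cpu" while the
# other sits on the accelerator, triggering the RuntimeError above.
weight = torch.randn(2, 2)   # e.g. a layer weight currently on CPU
region = torch.randn(2, 2)   # e.g. a regional mask/embedding

weight, region = to_common_device(weight, region)
out = weight @ region        # same device, so no device-mismatch error
```

In a real fix the target would typically be the pipeline's execution device rather than whichever device the first tensor happens to be on.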
Version Platform Description
Relevant log output
Backend
Diffusers
UI
Standard
Branch
Master
Model
Stable Diffusion 1.5
Acknowledgements