
[Issue]: IPEX Cannot generate an image, only "txt2img starting" is shown #3355

Open
prakal opened this issue Aug 1, 2024 · 14 comments
Labels
platform Platform specific problem

Comments

prakal commented Aug 1, 2024

Issue Description

  1. Run SD.Next using ./webui.sh --use-ipex after a git pull.
  2. Try to generate an image with no extensions enabled. "txt2img starting" appears, but no new logs show up in the browser or the terminal.
  3. I also noticed that models fail to load, although I can verify they are in the correct folder.
  4. I waited for more than 5 minutes, but nothing was generated.
  5. Running in debug mode only provides one more log line: DEBUG sd launch Server: alive=True jobs=2 requests=102 uptime=53 memory=5.61/31.24 backend=Backend.ORIGINAL state=job='txt2img' 0/-1
  6. Running in safe mode gives the log line: WARNING Sampler: invalid

Version Platform Description

hash: a874b27
OS: Ubuntu 24.04 LTS
Browser: Firefox
GPU: Intel ARC A770 16 GB

Relevant log output

prakal@computadora:~/SD/automatic$ ./webui.sh --use-ipex
Activate python venv: /home/prakal/SD/automatic/venv
Launch: venv/bin/python3
13:14:22-993423 INFO     Starting SD.Next                                                                                                                                                                                                   
13:14:22-996000 INFO     Logger: file="/home/prakal/SD/automatic/sdnext.log" level=INFO size=3641981 mode=append                                                                                                                                
13:14:22-996844 INFO     Python version=3.11.5 platform=Linux bin="/home/prakal/SD/automatic/venv/bin/python3" venv="/home/prakal/SD/automatic/venv"                                                                                                
13:14:23-008703 INFO     Version: app=sd.next updated=2024-07-09 hash=99c9fd28 branch=HEAD url=https://github.com/vladmandic/automatic/tree/HEAD ui=main                                                                                    
13:14:23-382143 INFO     Latest published version: a874b27e50a343ac55148756a0a80eafb3a7f87f 2024-07-24T20:16:33Z                                                                                                                            
13:14:23-384215 INFO     Platform: arch=x86_64 cpu=x86_64 system=Linux release=6.8.0-39-generic python=3.11.5                                                                                                                               
13:14:23-385274 INFO     HF cache folder: /home/prakal/.cache/huggingface/hub                                                                                                                                                                   
13:14:23-385977 INFO     Python version=3.11.5 platform=Linux bin="/home/prakal/SD/automatic/venv/bin/python3" venv="/home/prakal/SD/automatic/venv"                                                                                                
13:14:23-386716 INFO     Intel OneAPI Toolkit detected                                                                                                                                                                                      
13:14:23-429541 INFO     Verifying requirements                                                                                                                                                                                             
13:14:23-432454 INFO     Verifying packages                                                                                                                                                                                                 
13:14:23-442299 INFO     Extensions: disabled=['sdnext-modernui', 'a1111-sd-webui-lycoris', 'sd-webui-animatediff', 'Adetailer', 'ReActor']                                                                                                 
13:14:23-443129 INFO     Extensions: enabled=['LDSR', 'sd-extension-system-info', 'multidiffusion-upscaler-for-automatic1111', 'stable-diffusion-webui-images-browser', 'sd-webui-agent-scheduler', 'clip-interrogator-ext', 'Lora',        
                         'sd-extension-chainner', 'stable-diffusion-webui-rembg', 'SwinIR', 'ScuNET', 'sd-webui-controlnet'] extensions-builtin                                                                                             
13:14:23-444608 INFO     Extensions: enabled=['adetailer', 'sd-webui-reactor'] extensions                                                                                                                                                   
13:14:23-445791 INFO     Startup: standard                                                                                                                                                                                                  
13:14:23-446278 INFO     Verifying submodules                                                                                                                                                                                               
13:14:28-417758 INFO     Extensions enabled: ['LDSR', 'sd-extension-system-info', 'multidiffusion-upscaler-for-automatic1111', 'stable-diffusion-webui-images-browser', 'sd-webui-agent-scheduler', 'clip-interrogator-ext', 'Lora',        
                         'sd-extension-chainner', 'stable-diffusion-webui-rembg', 'SwinIR', 'ScuNET', 'sd-webui-controlnet', 'adetailer', 'sd-webui-reactor']                                                                               
13:14:28-418805 INFO     Verifying requirements                                                                                                                                                                                             
13:14:28-421731 INFO     Command line args: ['--use-ipex'] use_ipex=True                                                                                                                                                                    
13:14:32-783649 INFO     Load packages: {'torch': '2.1.0.post0+cxx11.abi', 'diffusers': '0.29.1', 'gradio': '3.43.2'}                                                                                                                       
13:14:33-277250 INFO     VRAM: Detected=15.91 GB Optimization=none                                                                                                                                                                          
13:14:33-279441 INFO     Engine: backend=Backend.ORIGINAL compute=ipex device=xpu attention="Scaled-Dot-Product" mode=no_grad                                                                                                               
13:14:33-280833 INFO     Device: device=Intel(R) Arc(TM) A770 Graphics n=1 ipex=2.1.20+xpu                                                                                                                                                  
13:14:33-751451 INFO     Available VAEs: path="models/VAE" items=3                                                                                                                                                                          
13:14:33-752690 INFO     Disabled extensions: ['a1111-sd-webui-lycoris', 'sdnext-modernui', 'sd-webui-animatediff']                                                                                                                         
13:14:33-755387 INFO     Available models: path="models/Stable-diffusion" items=10 time=0.00                                                                                                                                                
13:14:33-807656 INFO     LoRA networks: available=4 folders=2                                                                                                                                                                               
13:14:34-049300 INFO     Extension: script='extensions-builtin/sd-webui-agent-scheduler/scripts/task_scheduler.py' Using sqlite file: extensions-builtin/sd-webui-agent-scheduler/task_scheduler.sqlite3                                    
13:14:34-324811 INFO     Extension: script='extensions-builtin/sd-webui-controlnet/scripts/controlnet.py' Warning: ControlNet failed to load SGM - will use LDM instead.                                                                    
13:14:34-325697 INFO     Extension: script='extensions-builtin/sd-webui-controlnet/scripts/controlnet.py' ControlNet preprocessor location: /home/prakal/SD/automatic/extensions-builtin/sd-webui-controlnet/annotator/downloads                
13:14:34-331285 INFO     Extension: script='extensions-builtin/sd-webui-controlnet/scripts/hook.py' Warning: ControlNet failed to load SGM - will use LDM instead.                                                                          
13:14:35-472458 INFO     Extension: script='extensions/adetailer/scripts/!adetailer.py' [-] ADetailer initialized. version: 24.5.1, num models: 10                                                                                          
13:14:35-571234 INFO     UI theme: type=Standard name="black-teal"                                                                                                                                                                          
13:14:36-820131 INFO     Local URL: http://127.0.0.1:7860/                                                                                                                                                                                  
13:14:37-234106 INFO     [AgentScheduler] Total pending tasks: 1                                                                                                                                                                            
13:14:37-236256 INFO     [AgentScheduler] Executing task task(aqt618he83u1azt)                                                                                                                                                              
13:14:37-237178 INFO     [AgentScheduler] Registering APIs                                                                                                                                                                                  
13:14:37-283453 INFO     Select: model="epicrealism_naturalSinRC1VAE [84d76a0328]"                                                                                                                                                          
Loading model: /home/prakal/SD/automatic/models/Stable-diffusion/epicrealism_naturalSinRC1VAE.safetensors ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0.0/2.1 GB -:--:--
13:14:37-379678 INFO     Torch override dtype: no-half set
13:14:37-382612 INFO     Torch override VAE dtype: no-half set                                                                                                                                                                              
13:14:37-388173 INFO     Setting Torch parameters: device=xpu dtype=torch.float32 vae=torch.float32 unet=torch.float32 context=no_grad fp16=None bf16=None optimization=Scaled-Dot-Product                                                  
Loading model: /home/prakal/SD/automatic/models/Stable-diffusion/epicrealism_naturalSinRC1VAE.safetensors ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0.0/2.1 GB -:--:--
13:14:37-398093 INFO     Startup time: 8.97 torch=3.65 gradio=0.54 diffusers=0.16 libraries=0.88 samplers=0.09 extensions=1.78 ui-en=0.06 ui-txt2img=0.10 ui-img2img=0.12 ui-settings=0.11 ui-extensions=0.60 ui-defaults=0.07 launch=0.11  
                         api=0.08 app-started=0.47                                                                                                                                                                                          
13:14:43-092443 INFO     LDM: LatentDiffusion: mode=eps                                                                                                                                                                                     
13:14:43-093336 INFO     LDM: DiffusionWrapper params=859.52M                                                                                                                                                                               
13:14:43-094041 INFO     Autodetect: model="Stable Diffusion" class=StableDiffusionPipeline file="/home/prakal/SD/automatic/models/Stable-diffusion/epicrealism_naturalSinRC1VAE.safetensors" size=2034MB                                       
13:14:44-403914 INFO     Applied IPEX Optimize.                                                                                                                                                                                             
13:14:44-404779 INFO     Cross-attention: optimization=Scaled-Dot-Product                                                                                                                                                                   
13:14:50-315676 INFO     MOTD: N/A                                                                                                                                                                                                          
13:15:09-514179 INFO     Browser session: user=None client=127.0.0.1 agent=Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:127.0) Gecko/20100101 Firefox/127.0

Backend

Original

UI

Standard

Branch

Master

Model

StableDiffusion 1.5

Acknowledgements

  • I have read the above and searched for existing issues
  • I confirm that this is classified correctly and it's not an extension issue
brknsoul (Contributor) commented Aug 2, 2024

First blush; use diffusers backend.
Second blush; ensure Sampler isn't set to Default. Use something like Euler a or DPM 2M.

Disty0 (Collaborator) commented Aug 2, 2024

> WARNING  Sampler: invalid

What sampler did you use? If you switched back to the Original backend for some reason, don't forget to clear the browser cache.
Also, don't expect any more support on the Original backend. Use the default Diffusers backend.

Also do you have any particular reason to use FP32? dtype=torch.float32 vae=torch.float32
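For context on the FP32 question: FP32 weights take roughly twice the memory of FP16. A back-of-envelope sketch (the helper name is mine, for illustration) using the DiffusionWrapper params=859.52M figure from the log above:

```python
def model_weight_gb(params_millions: float, bytes_per_param: int) -> float:
    """Approximate size of model weights in GiB (1 GiB = 1024**3 bytes)."""
    return params_millions * 1e6 * bytes_per_param / 1024**3

# FP32 uses 4 bytes per parameter, FP16 uses 2
fp32 = model_weight_gb(859.52, 4)
fp16 = model_weight_gb(859.52, 2)
print(f"fp32: {fp32:.2f} GiB, fp16: {fp16:.2f} GiB")  # fp32: 3.20 GiB, fp16: 1.60 GiB
```

This is only the UNet weights; activations and the VAE add more, but the 2x ratio between the dtypes holds.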

prakal (Author) commented Aug 3, 2024

Thanks for the help, all. I used the default sampler. I will clear the cache and retry; however, I also did a fresh install of Chrome and encountered the same error.

Got it, will switch to Diffusers backend. I recall having trouble getting ControlNet working on it, but can look into it again.

No particular reason for FP32.

brknsoul (Contributor) commented Aug 4, 2024

SD.Next has its own implementation of ControlNet in the Diffusers backend. It can be found on the Control tab. Classic ControlNet functions can be found by scrolling down and expanding the Control elements accordion:

[screenshot: chrome_AuE0J99ePg]

prakal (Author) commented Aug 5, 2024

> SD.Next has its own implementation of ControlNet while in the Diffusers backend. This can be found on the Control Tab. Classic ControlNet functions can be found by scrolling down and expanding the Control elements accordion;
>
> [screenshot: chrome_AuE0J99ePg]

Thanks for letting me know, once I get things working with Diffusers backend, I will give it a try.

prakal (Author) commented Aug 5, 2024

> WARNING  Sampler: invalid
>
> What sampler did you use? If you changed back to the Original backend for some reason, don't forget to clear the browser caches. Also don't expect any more support on the Original backend. Use the default Diffusers backend.
>
> Also do you have any particular reason to use FP32? dtype=torch.float32 vae=torch.float32

I used the Diffusers backend, cleared the cache, and switched to FP16, but I'm still encountering the same issues. One weird thing: when I click on System > Models and Networks, I see no models at all. I haven't modified the folder structure; they are under automatic/models/Stable-diffusion/.

Also, when I run it in safe mode and load a model, I see:

Loading pipeline components...   0%

and it stays that way for more than 3 minutes.

RichardAblitt commented
There's a bug with the latest kernel in Ubuntu 24.04 for Intel graphics; I think this is the same problem I've had. It worked for me when I went back to linux-image 6.8.0-36.

vladmandic (Owner) commented
What is the status of this issue?

vladmandic added the question (Further information is requested) label Aug 28, 2024
prakal (Author) commented Sep 6, 2024

Still no luck. I tried kernel 6.8.0-38 and see the error:

[W Device.cpp:127] Warning: XPU Device count is zero! (function initGlobalDevicePoolState)

I traced this error to #3201 and updated my intel-compute-runtime, but it still fails. I haven't gotten around to downgrading my kernel to 6.8.0-36 as per #3355 (comment) yet, but that's likely my next step.

Disty0 (Collaborator) commented Sep 6, 2024

Use Linux Kernel 6.10.
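The kernel requirement can be checked programmatically before launching. A minimal sketch (helper name is mine) that parses a `uname -r`-style release string and compares it against the suggested 6.10 minimum:

```python
import platform

def kernel_at_least(release: str, minimum: tuple = (6, 10)) -> bool:
    """Compare a kernel release string like '6.8.0-39-generic' against (major, minor)."""
    major, minor = release.split(".")[:2]
    return (int(major), int(minor.split("-")[0])) >= minimum

# On Linux, platform.release() returns the same string as `uname -r`
print(kernel_at_least(platform.release()))
print(kernel_at_least("6.8.0-39-generic"))        # False: the kernel from the original report
print(kernel_at_least("6.10.10-061010-generic"))  # True
```

Comparing as integer tuples avoids the classic string-comparison trap where "6.8" sorts after "6.10".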

prakal (Author) commented Oct 5, 2024

> Use Linux Kernel 6.10.

I'm using Linux kernel 6.10.10-061010-generic, but still no luck. intel-opencl-icd is at the latest version: 24.35.30872.22

I also tried various combinations of ipex and torch installations, but it errors with:

[W Device.cpp:127] Warning: XPU Device count is zero! (function initGlobalDevicePoolState)
Segmentation fault (core dumped)
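The "XPU Device count is zero" warning means torch cannot see the Arc GPU at all, independently of SD.Next. A minimal probe along these lines (function name is mine; it assumes only that torch and intel_extension_for_pytorch are installed in the venv) can help separate a driver/runtime problem from an SD.Next problem:

```python
def xpu_device_count() -> int:
    """Return the number of XPU devices torch can see, or -1 if the XPU stack is unusable."""
    try:
        import torch
        import intel_extension_for_pytorch  # noqa: F401 -- importing registers the torch.xpu backend
        return torch.xpu.device_count()
    except Exception:
        return -1

count = xpu_device_count()
if count <= 0:
    print(f"XPU not usable (count={count}): check kernel and intel-compute-runtime versions")
else:
    print(f"XPU devices visible: {count}")
```

If this prints a count of zero outside SD.Next, the problem is in the kernel/driver/IPEX stack, and no SD.Next setting will fix it.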

vladmandic changed the title from "[Issue]: Cannot generate an image, only "txt2img starting" is shown" to "[Issue]: IPEX Cannot generate an image, only "txt2img starting" is shown" Oct 30, 2024
vladmandic added the platform (Platform specific problem) label and removed the question label Oct 30, 2024
prakal (Author) commented Nov 13, 2024

I re-installed my Intel basekit and purged the venv, and the SD.Next UI is able to run again. However, when I attempt to generate an image, I get RuntimeError: could not create an engine. Logs with --debug:

 /home/pk/SD/automatic/modules/processing_diffusers.py:99 in process_base                                                                                                                           │
│                                                                                                                                                                                                    │
│    98 │   │   else:                                                                                                                                                                                │
│ ❱  99 │   │   │   output = shared.sd_model(**base_args)                                                                                                                                            │
│   100 │   │   if isinstance(output, dict):                                                                                                                                                         │
│                                                                                                                                                                                                    │
│ /home/pk/SD/automatic/venv/lib/python3.11/site-packages/torch/utils/_contextlib.py:115 in decorate_context                                                                                         │
│                                                                                                                                                                                                    │
│   114 │   │   with ctx_factory():                                                                                                                                                                  │
│ ❱ 115 │   │   │   return func(*args, **kwargs)                                                                                                                                                     │
│   116                                                                                                                                                                                              │
│                                                                                                                                                                                                    │
│ /home/pk/SD/automatic/venv/lib/python3.11/site-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py:944 in __call__                                                          │
│                                                                                                                                                                                                    │
│    943 │   │                                                                                                                                                                                       │
│ ❱  944 │   │   prompt_embeds, negative_prompt_embeds = self.encode_prompt(                                                                                                                         │
│    945 │   │   │   prompt,                                                                                                                                                                         │
│                                                                                                                                                                                                    │
│ /home/pk/SD/automatic/venv/lib/python3.11/site-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py:413 in encode_prompt                                                     │
│                                                                                                                                                                                                    │
│    412 │   │   │   if clip_skip is None:                                                                                                                                                           │
│ ❱  413 │   │   │   │   prompt_embeds = self.text_encoder(text_input_ids.to(device), attention_mask=attention_mask)                                                                                 │
│    414 │   │   │   │   prompt_embeds = prompt_embeds[0]                                                                                                                                            │
│                                                                                                                                                                                                    │
│ /home/pk/SD/automatic/venv/lib/python3.11/site-packages/torch/nn/modules/module.py:1532 in _wrapped_call_impl                                                                                      │
│                                                                                                                                                                                                    │
│   1531 │   │   else:                                                                                                                                                                               │
│ ❱ 1532 │   │   │   return self._call_impl(*args, **kwargs)                                                                                                                                         │
│   1533                                                                                                                                                                                             │
│                                                                                                                                                                                                    │
│ /home/pk/SD/automatic/venv/lib/python3.11/site-packages/torch/nn/modules/module.py:1541 in _call_impl                                                                                              │
│                                                                                                                                                                                                    │
│   1540 │   │   │   │   or _global_forward_hooks or _global_forward_pre_hooks):                                                                                                                     │
│ ❱ 1541 │   │   │   return forward_call(*args, **kwargs)                                                                                                                                            │
│   1542                                                                                                                                                                                             │
│                                                                                                                                                                                                    │
│ /home/pk/SD/automatic/venv/lib/python3.11/site-packages/transformers/models/clip/modeling_clip.py:1050 in forward                                                                                  │
│                                                                                                                                                                                                    │
│   1049 │   │                                                                                                                                                                                       │
│ ❱ 1050 │   │   return self.text_model(                                                                                                                                                             │
│   1051 │   │   │   input_ids=input_ids,                                                                                                                                                            │
│                                                                                                                                                                                                    │
│ /home/pk/SD/automatic/venv/lib/python3.11/site-packages/torch/nn/modules/module.py:1532 in _wrapped_call_impl                                                                                      │
│                                                                                                                                                                                                    │
│   1531 │   │   else:                                                                                                                                                                               │
│ ❱ 1532 │   │   │   return self._call_impl(*args, **kwargs)                                                                                                                                         │
│   1533                                                                                                                                                                                             │
│                                                                                                                                                                                                    │
│                                                                                      ... 7 frames hidden ...                                                                                       │
│                                                                                                                                                                                                    │
│ /home/pk/SD/automatic/venv/lib/python3.11/site-packages/transformers/models/clip/modeling_clip.py:608 in forward                                                                                   │
│                                                                                                                                                                                                    │
│    607 │   │   hidden_states = self.layer_norm1(hidden_states)                                                                                                                                     │
│ ❱  608 │   │   hidden_states, attn_weights = self.self_attn(                                                                                                                                       │
│    609 │   │   │   hidden_states=hidden_states,                                                                                                                                                    │
│                                                                                                                                                                                                    │
│ /home/pk/SD/automatic/venv/lib/python3.11/site-packages/torch/nn/modules/module.py:1532 in _wrapped_call_impl                                                                                      │
│                                                                                                                                                                                                    │
│   1531 │   │   else:                                                                                                                                                                               │
│ ❱ 1532 │   │   │   return self._call_impl(*args, **kwargs)                                                                                                                                         │
│   1533                                                                                                                                                                                             │
│                                                                                                                                                                                                    │
│ /home/pk/SD/automatic/venv/lib/python3.11/site-packages/torch/nn/modules/module.py:1541 in _call_impl                                                                                              │
│                                                                                                                                                                                                    │
│   1540 │   │   │   │   or _global_forward_hooks or _global_forward_pre_hooks):                                                                                                                     │
│ ❱ 1541 │   │   │   return forward_call(*args, **kwargs)                                                                                                                                            │
│   1542                                                                                                                                                                                             │
│                                                                                                                                                                                                    │
│ /home/pk/SD/automatic/venv/lib/python3.11/site-packages/transformers/models/clip/modeling_clip.py:524 in forward                                                                                   │
│                                                                                                                                                                                                    │
│    523 │   │                                                                                                                                                                                       │
│ ❱  524 │   │   query_states = self.q_proj(hidden_states)                                                                                                                                           │
│    525 │   │   key_states = self.k_proj(hidden_states)                                                                                                                                             │
│                                                                                                                                                                                                    │
│ /home/pk/SD/automatic/venv/lib/python3.11/site-packages/torch/nn/modules/module.py:1532 in _wrapped_call_impl                                                                                      │
│                                                                                                                                                                                                    │
│   1531 │   │   else:                                                                                                                                                                               │
│ ❱ 1532 │   │   │   return self._call_impl(*args, **kwargs)                                                                                                                                         │
│   1533                                                                                                                                                                                             │
│                                                                                                                                                                                                    │
│ /home/pk/SD/automatic/venv/lib/python3.11/site-packages/torch/nn/modules/module.py:1541 in _call_impl                                                                                              │
│                                                                                                                                                                                                    │
│   1540 │   │   │   │   or _global_forward_hooks or _global_forward_pre_hooks):                                                                                                                     │
│ ❱ 1541 │   │   │   return forward_call(*args, **kwargs)                                                                                                                                            │
│   1542                                                                                                                                                                                             │
│                                                                                                                                                                                                    │
│ /home/pk/SD/automatic/venv/lib/python3.11/site-packages/torch/nn/modules/linear.py:116 in forward                                                                                                  │
│                                                                                                                                                                                                    │
│   115 │   def forward(self, input: Tensor) -> Tensor:                                                                                                                                              │
│ ❱ 116 │   │   return F.linear(input, self.weight, self.bias)                                                                                                                                       │
│   117                                                                                                                                                                                              │
│                                                                                                                                                                                                    │
│ /home/pk/SD/automatic/modules/intel/ipex/hijacks.py:150 in functional_linear                                                                                                                       │
│                                                                                                                                                                                                    │
│   149 │   │   bias.data = bias.data.to(dtype=weight.data.dtype)                                                                                                                                    │
│ ❱ 150 │   return original_functional_linear(input, weight, bias=bias)                                                                                                                              │
│   151                                                                                                                                                                                              │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: could not create an engine
17:33:22-995695 DEBUG    Analyzed: model="AnythingV5Ink_ink" type=sd class=StableDiffusionPipeline size=2132626066 mtime="2023-09-30 21:25:18" modules=[name="vae" cls=AutoencoderKL config=True      
                         device=xpu:0 dtype=torch.float32 params=83653863 modules=220, name="text_encoder" cls=CLIPTextModel config=True device=xpu:0 dtype=torch.float32 params=123066624            
                         modules=152, name="tokenizer" cls=CLIPTokenizer config=False, name="unet" cls=UNet2DConditionModel config=True device=xpu:0 dtype=torch.float32 params=859520964 modules=686,
                         name="scheduler" cls=PNDMScheduler config=True, name="safety_checker" cls=NoneType config=False, name="feature_extractor" cls=CLIPImageProcessor config=False,               
                         name="image_encoder" cls=NoneType config=False, name="requires_safety_checker" cls=bool config=False]                                                                        
17:33:23-002815 INFO     Processed: images=0 its=0.00 time=0.89 timers={'encode': 0.17, 'args': 0.17, 'process': 0.71} memory={'ram': {'used': 3.72, 'total': 31.25}, 'gpu': {'used': 4.05, 'total':  
                         15.91}, 'retries': 0, 'oom': 0}                                                                                                                                              
17:34:00-296628 DEBUG    Server: alive=True jobs=1 requests=74 uptime=72 memory=3.72/31.25 backend=Backend.DIFFUSERS state=idle           

Any idea why this is happening? I tried a fresh install of SD.Next, but I still see the same issues.

Disty0 commented Nov 13, 2024

Your drivers (Level Zero and Intel Compute Runtime) are probably too old for IPEX. IPEX is unable to use the GPU.

prakal commented Nov 14, 2024

> Your drivers (Level Zero and Intel Compute Runtime) are probably too old for IPEX. IPEX is unable to use the GPU.

I just checked the versions, and they look to be the latest. apt-get update also shows no new updates. 😕

intel-level-zero-gpu is already the newest version (1.6.31294.12).
intel-opencl-icd is already the newest version (24.39.31294.20-1032~24.04).
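
A quick way to narrow this down is to check whether IPEX itself can see the XPU device, independent of SD.Next. The sketch below is hypothetical (the `check_xpu` helper is not part of SD.Next) and assumes it is run inside the SD.Next venv where `intel_extension_for_pytorch` is installed:

```python
def check_xpu() -> str:
    """Return a short status line describing whether PyTorch/IPEX can see an XPU device."""
    try:
        import torch
        import intel_extension_for_pytorch  # noqa: F401  # registers the xpu backend
        if torch.xpu.is_available():
            return f"xpu ok: {torch.xpu.device_count()} device(s) visible"
        # Packages import fine but the runtime cannot enumerate a GPU --
        # this matches the "could not create an engine" failure mode.
        return "ipex imported but no XPU device visible (check Level Zero / compute runtime)"
    except ImportError as exc:
        return f"ipex not importable: {exc}"

if __name__ == "__main__":
    print(check_xpu())
```

If this reports that no XPU device is visible even though the packages are current, the problem is likely a mismatch between the installed IPEX build and the system's Level Zero / compute runtime versions, rather than a missing package.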
