Your current environment
Commit: ad3011dcafa58ded4375d35ce380cbfcb5d030d7
The output of `python examples/offline_inference/text_to_image/text_to_image.py`:
[Stage-0] INFO 01-05 03:49:10 [gpu_worker.py:174] Worker 0 created result MessageQueue
[Stage-0] INFO 01-05 03:49:10 [scheduler.py:228] Chunked prefill is enabled with max_num_batch
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Stage-0] INFO 01-05 03:49:11 [gpu_worker.py:75] Worker 0: Initialized device and distributed
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████
Process DiffusionWorker-0:
Traceback (most recent call last):
File "/workspace/omni/vllm-omni/vllm_omni/diffusion/data.py", line 115, in __getattr__
return params[item]
~~~~~~^^^^^^
KeyError: 'dual_attention_layers'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/lib/python3.12/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/workspace/omni/vllm-omni/vllm_omni/diffusion/worker/gpu_worker.py", line 305, in work
worker_proc = WorkerProc(
^^^^^^^^^^^
File "/workspace/omni/vllm-omni/vllm_omni/diffusion/worker/gpu_worker.py", line 177, in __in
self.worker = self._create_worker(gpu_id, od_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/omni/vllm-omni/vllm_omni/diffusion/worker/gpu_worker.py", line 183, in _cre
return GPUWorker(
^^^^^^^^^^
File "/workspace/omni/vllm-omni/vllm_omni/diffusion/worker/gpu_worker.py", line 49, in _ini
self.init_device_and_model()
File "/workspace/omni/vllm-omni/vllm_omni/diffusion/worker/gpu_worker.py", line 91, in init
self.pipeline = model_loader.load_model(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/omni/vllm-omni/vllm_omni/diffusion/model_loader/diffusers_loader.py", line
model = initialize_model(od_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/omni/vllm-omni/vllm_omni/diffusion/registry.py", line 91, in initialize_mod
model = model_class(od_config=od_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/omni/vllm-omni/vllm_omni/diffusion/models/sd3/pipeline_sd3.py", line 176, i
self.transformer = SD3Transformer2DModel(od_config=od_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/omni/vllm-omni/vllm_omni/diffusion/models/sd3/sd3_transformer.py", line 342
self.dual_attention_layers = model_config.dual_attention_layers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/omni/vllm-omni/vllm_omni/diffusion/data.py", line 117, in getattr
raise AttributeError(item) from exc
AttributeError: dual_attention_layers
File "/workspace/omni/vllm-omni/vllm_omni/diffusion/data.py", line 117, in __getattr__
raise AttributeError(item) from exc
AttributeError: dual_attention_layers
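
For context, the chained error comes from the `__getattr__` fallback in `vllm_omni/diffusion/data.py` (lines 115 and 117 in the traceback), which forwards unknown attribute lookups to an underlying params dict and converts the resulting `KeyError` into an `AttributeError`. A minimal, self-contained sketch of that pattern (the class name here is hypothetical, for illustration only):

```python
class ParamsView:
    """Sketch of the __getattr__ fallback pattern seen in data.py:
    unknown attributes are looked up in a params dict, and a missing
    key is re-raised as AttributeError -- the exact KeyError ->
    AttributeError chain shown in the traceback above."""

    def __init__(self, params: dict):
        self._params = params

    def __getattr__(self, item):
        # Only invoked when normal attribute lookup fails.
        params = self.__dict__["_params"]
        try:
            return params[item]                  # cf. data.py line 115
        except KeyError as exc:
            raise AttributeError(item) from exc  # cf. data.py line 117


cfg = ParamsView({"num_layers": 24})
print(cfg.num_layers)  # 24
try:
    cfg.dual_attention_layers
except AttributeError as e:
    print(f"AttributeError: {e}")  # AttributeError: dual_attention_layers
```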
🐛 Describe the bug
An error occurs while running SD3 (stable-diffusion-3-medium-diffusers): `AttributeError: dual_attention_layers` is raised in `sd3_transformer.py` at `self.dual_attention_layers = model_config.dual_attention_layers`.
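
A plausible cause is that the stable-diffusion-3-medium-diffusers transformer config predates the SD3.5 dual-attention field, so its config contains no `dual_attention_layers` key, while `sd3_transformer.py` (line 342) reads it unconditionally. If that is the case, one possible fix is to fall back to an empty collection when the key is absent; the `()` default below is an assumption modeled on the default diffusers uses for `SD3Transformer2DModel`, not a confirmed fix:

```python
# Possible fix in vllm_omni/diffusion/models/sd3/sd3_transformer.py
# (line 342 in the traceback). getattr() with a default catches the
# AttributeError raised by the config's __getattr__ fallback; the
# empty-tuple default is an assumption (no dual-attention layers in
# SD3 medium checkpoints).
self.dual_attention_layers = getattr(
    model_config, "dual_attention_layers", ()
)
```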