Description
Hi @sayakpaul, I am working on batch inference with flux_controlnet_inpainting_pipeline, but I'm encountering the following error:
Traceback (most recent call last):
  File "/home/ubuntu/dev_anand/script/flux_testing.py", line 31, in <module>
    result = pipe(
             ^^^^^
  File "/home/ubuntu/anaconda3/envs/inference/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 120, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/anaconda3/envs/inference/lib/python3.12/site-packages/diffusers/pipelines/flux/pipeline_flux_controlnet_inpainting.py", line 900, in __call__
    prompt_embeds, pooled_prompt_embeds, text_ids = self.encode_prompt(
                                                     ^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/anaconda3/envs/inference/lib/python3.12/site-packages/diffusers/pipelines/flux/pipeline_flux_controlnet_inpainting.py", line 398, in encode_prompt
    pooled_prompt_embeds = self._get_clip_prompt_embeds(
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/anaconda3/envs/inference/lib/python3.12/site-packages/diffusers/pipelines/flux/pipeline_flux_controlnet_inpainting.py", line 315, in _get_clip_prompt_embeds
    text_inputs = self.tokenizer(
                  ^^^^^^^^^^^^^^^
  File "/home/ubuntu/anaconda3/envs/inference/lib/python3.12/site-packages/transformers/tokenization_utils_base.py", line 2855, in __call__
    encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/anaconda3/envs/inference/lib/python3.12/site-packages/transformers/tokenization_utils_base.py", line 2943, in _call_one
    return self.batch_encode_plus(
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/anaconda3/envs/inference/lib/python3.12/site-packages/transformers/tokenization_utils_base.py", line 3144, in batch_encode_plus
    return self._batch_encode_plus(
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/anaconda3/envs/inference/lib/python3.12/site-packages/transformers/tokenization_utils.py", line 885, in _batch_encode_plus
    ids, pair_ids = ids_or_pair_ids
    ^^^^^^^^^^^^^
ValueError: not enough values to unpack (expected 2, got 1)
The call that triggers it passes batched inputs (two prompts, two init images, two masks, two control images):

result = pipe(
    prompt=[prompt_txt, prompt_txt],
    image=[img1, img2],
    mask_image=[mask1, mask2],
    control_image=[control1, control2],
    control_guidance_start=0.2,
    control_guidance_end=0.8,
    controlnet_conditioning_scale=0.7,
    strength=0.7,
    num_inference_steps=28,
    guidance_scale=3.5,
)
Could you please confirm whether batch inference is supported in this pipeline? If not, any suggestions or pointers for modifying the pipeline to make it compatible with batched inputs would be really helpful.
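For now I'm falling back to running the samples one at a time, which avoids the batched tokenizer path entirely. A minimal sketch of that fallback (reusing pipe and the same input variables as in the snippet above; the loop structure is just my workaround, not an official API):

results = []
for img, mask, control in zip([img1, img2], [mask1, mask2], [control1, control2]):
    # Single-sample call with the same settings as the batched attempt above.
    out = pipe(
        prompt=prompt_txt,
        image=img,
        mask_image=mask,
        control_image=control,
        control_guidance_start=0.2,
        control_guidance_end=0.8,
        controlnet_conditioning_scale=0.7,
        strength=0.7,
        num_inference_steps=28,
        guidance_scale=3.5,
    ).images[0]
    results.append(out)

This obviously loses the throughput benefit of batching, which is why I'd like to get true batched inference working.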