When I call clap_model.get_text_embedding() with a list containing a single prompt, the call fails inside the RoBERTa text encoder (the traceback points at RobertaModel.forward in modeling_roberta.py, not the tokenizer itself). It appears the input loses its batch dimension unless the list contains more than one element, so the model receives a 1-D shape where it expects (batch_size, seq_length).
File ".../transformers/models/roberta/modeling_roberta.py", line 802, in forward
batch_size, seq_length = input_shape
ValueError: not enough values to unpack (expected 2, got 1)
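For reference, the failing line is plain tuple unpacking, so the error reproduces with no ML dependencies at all: a 2-D (batch_size, seq_length) shape unpacks fine, while a single prompt that has been squeezed down to a 1-D shape does not. The (2, 77) / (77,) shapes below are illustrative, not taken from the library:

```python
# RobertaModel.forward effectively does:
#     batch_size, seq_length = input_shape
# which requires input_ids to be 2-D: (batch_size, seq_length).

shape_batched = (2, 77)  # two prompts -> 2-D input, unpacks fine
shape_single = (77,)     # one prompt squeezed to 1-D -> unpack fails

batch_size, seq_length = shape_batched
print(batch_size, seq_length)  # 2 77

try:
    batch_size, seq_length = shape_single
except ValueError as err:
    print(err)  # not enough values to unpack (expected 2, got 1)
```

Assuming the batch dimension really is being squeezed away for single-element lists, a workaround some users report (unverified here) is to pass the prompt twice and keep only the first embedding, e.g. `clap_model.get_text_embedding([prompt, prompt])[0]`.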