This repository was archived by the owner on Feb 7, 2025. It is now read-only.

Conversation

@marksgraham (Collaborator)

Fixes #281

The only issue is that likelihood inference is slow when the sequence length exceeds the transformer's max sequence length, since the transformer doesn't cache any previous computations during the for loop.
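For context, a minimal sketch of why that loop is slow, assuming a standard autoregressive PyTorch transformer that maps token ids to next-token logits. The function and argument names here are hypothetical illustrations, not the inferer's actual API, and the real inferer may order or prefix the token sequence differently:

```python
import torch

@torch.no_grad()
def sliding_window_log_likelihood(transformer, tokens, max_seq_len):
    # Hypothetical sketch: `transformer` maps a (batch, seq) tensor of token
    # ids to (batch, seq, vocab) next-token logits.
    batch, seq_len = tokens.shape
    log_probs = torch.zeros(batch, seq_len, device=tokens.device)

    # Positions inside the first window are scored in one forward pass;
    # logits at position i predict token i + 1.
    logits = transformer(tokens[:, :max_seq_len])
    lp = torch.log_softmax(logits, dim=-1)
    log_probs[:, 1:max_seq_len] = lp[:, :-1].gather(
        -1, tokens[:, 1:max_seq_len].unsqueeze(-1)
    ).squeeze(-1)

    # Every position beyond the window needs its own full forward pass over
    # the preceding max_seq_len tokens; nothing computed in one iteration is
    # reused in the next, which is why this loop dominates the runtime.
    for i in range(max_seq_len, seq_len):
        window = tokens[:, i - max_seq_len : i]
        logits = transformer(window)                   # (batch, max_seq_len, vocab)
        lp = torch.log_softmax(logits[:, -1], dim=-1)  # distribution over token i
        log_probs[:, i] = lp.gather(-1, tokens[:, i : i + 1]).squeeze(-1)

    return log_probs
```

With key/value caching, each iteration could reuse the attention state of the shared prefix instead of recomputing it, but that is not done here, so the cost grows by one full forward pass per token past the window.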

@Warvito Warvito merged commit 589d79e into main Mar 6, 2023
@Warvito Warvito deleted the 281_max_seq_length branch March 18, 2023 19:45

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

VQVAETransformerInferer not compatible with a maximum sequence length

3 participants