Video Generation via Tokens #5
Status: Open
Labels: ML (Requires machine-learning knowledge; can be built up on the fly), downstream (Changes code wrapping the core model), research (Creative project that might fail but could give high returns)
If we tokenise the frames of a video with a VQGAN, we can autoregressively predict the next token using our current language model. More specifically, with our current 2-million-token context and state-of-the-art image quantisation models (roughly 1024 tokens per frame), we could fit 2048 frames, i.e. ~34 minutes of video at 1 FPS.
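The token budget above can be sanity-checked with a few lines of arithmetic. This is only a back-of-the-envelope sketch: it assumes the VQGAN quantises each frame to a 32×32 latent grid (1024 codes per frame) and reads "2 million tokens" as 2^21, since those values reproduce the 2048-frame figure; neither assumption is stated explicitly in the issue.

```python
# Back-of-the-envelope check of the context-budget claim.
# Assumptions (not confirmed by the issue): a 32x32 VQGAN latent grid
# per frame, and a context window of 2**21 (~2.1M) tokens.
TOKENS_PER_FRAME = 32 * 32   # 1024 codebook indices per frame
CONTEXT_TOKENS = 2 ** 21     # "2 million" token context
FPS = 1

frames = CONTEXT_TOKENS // TOKENS_PER_FRAME
minutes = frames / (FPS * 60)
print(frames, round(minutes, 1))  # 2048 frames, ~34.1 minutes
```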
This issue is about implementing such a model end-to-end and having a working demo.
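As a starting point, the pipeline could be sketched as: encode each frame to discrete codes, flatten the codes into one long sequence, extend the sequence token-by-token with the language model, and regroup the result into frame-sized chunks for the decoder. The sketch below uses trivial deterministic stand-ins for the VQGAN encoder and the language model (`encode_frame`, `next_token`, and the tiny vocab/grid sizes are illustrative assumptions, not real APIs), just to show the sequence layout and the generation loop.

```python
# Toy sketch of the video-as-tokens loop. The VQGAN and language model are
# replaced by deterministic stand-ins; only the data flow is meaningful.
TOKENS_PER_FRAME = 4   # assumption: real VQGANs emit ~256-1024 codes per frame
VOCAB_SIZE = 16        # assumption: real codebooks hold ~1024-16384 entries


def encode_frame(frame_id: int) -> list[int]:
    """Stand-in for a VQGAN encoder: frame -> codebook indices."""
    return [(frame_id + i) % VOCAB_SIZE for i in range(TOKENS_PER_FRAME)]


def next_token(context: list[int]) -> int:
    """Stand-in for the language model's next-token prediction."""
    return sum(context) % VOCAB_SIZE


def generate_frames(prompt_frames: list[int], n_new_frames: int) -> list[list[int]]:
    # 1. Tokenise the prompt frames and flatten into one sequence.
    tokens = [t for f in prompt_frames for t in encode_frame(f)]
    # 2. Autoregressively extend by TOKENS_PER_FRAME tokens per new frame.
    for _ in range(n_new_frames * TOKENS_PER_FRAME):
        tokens.append(next_token(tokens))
    # 3. Regroup into frame-sized chunks (a VQGAN decoder would map
    #    each chunk back to pixels).
    return [tokens[i:i + TOKENS_PER_FRAME]
            for i in range(0, len(tokens), TOKENS_PER_FRAME)]


frames = generate_frames(prompt_frames=[0, 1], n_new_frames=2)
print(len(frames))  # 4: two prompt frames plus two generated frames
```

In a real implementation the per-frame chunking in step 3 is what lets the decoder run on partial generations, so a demo can stream frames as they complete rather than waiting for the full sequence.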