Part 4/8:
- Multi-tasking: where applicable, a single unified model improves performance across tasks.
The VQ-VAE Tokenization Process
Central to the TOTEM methodology is VQ-VAE (vector quantized variational autoencoder) tokenization. The process first encodes the time series into a continuous representation and then discretizes that representation into tokens. It proceeds in three stages (sketched in code after the list):
- Input Time Series: the raw series is encoded in chunks according to a fixed compression factor.
- Learning Representations: the encoder produces a D-dimensional latent representation, which is tokenized against a discrete codebook by mapping each latent vector to its nearest codebook entry.
- Reconstruction: the discretized representations are passed through a decoder to regenerate the time series.
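To make these three stages concrete, here is a minimal PyTorch sketch of a VQ-VAE-style tokenizer. It is an illustration under assumptions, not TOTEM's actual implementation: the class and parameter names (`VQVAETokenizer`, `latent_dim`, `num_codes`) and the layer shapes are hypothetical, and the two stride-2 convolutions fix the compression factor at 4.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VQVAETokenizer(nn.Module):
    """Hypothetical VQ-VAE-style tokenizer for univariate time series."""

    def __init__(self, latent_dim=64, num_codes=256):
        super().__init__()
        # Encoder: two stride-2 convolutions give a compression factor of 4,
        # turning a length-T series into T/4 D-dimensional latent vectors.
        self.encoder = nn.Sequential(
            nn.Conv1d(1, latent_dim, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv1d(latent_dim, latent_dim, kernel_size=4, stride=2, padding=1),
        )
        # Discrete codebook: num_codes entries, each a D-dimensional vector.
        self.codebook = nn.Embedding(num_codes, latent_dim)
        # Decoder mirrors the encoder to regenerate the original series.
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(latent_dim, latent_dim, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(latent_dim, 1, kernel_size=4, stride=2, padding=1),
        )

    def quantize(self, z):
        # z: (batch, D, T/4). Snap each latent vector to its nearest code.
        b, d, t = z.shape
        flat = z.permute(0, 2, 1).reshape(-1, d)          # (batch * T/4, D)
        dists = torch.cdist(flat, self.codebook.weight)   # distance to every code
        tokens = dists.argmin(dim=1)                      # discrete token ids
        z_q = self.codebook(tokens).view(b, t, d).permute(0, 2, 1)
        # Straight-through estimator: gradients flow past the argmin to the encoder.
        z_q = z + (z_q - z).detach()
        return z_q, tokens.view(b, t)

    def forward(self, x):
        # x: (batch, 1, T) raw time series.
        z = self.encoder(x)             # continuous D-dimensional representation
        z_q, tokens = self.quantize(z)  # discretize via the codebook
        recon = self.decoder(z_q)       # reconstruct the time series
        return recon, tokens

model = VQVAETokenizer()
x = torch.randn(8, 1, 96)              # batch of 8 series, length 96
recon, tokens = model(x)               # tokens: (8, 24) discrete ids
loss = F.mse_loss(recon, x)            # reconstruction term only
```

A full VQ-VAE objective would add codebook and commitment terms to this reconstruction loss; they are omitted here to keep the three-stage pipeline readable.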