A decoder-only foundation model for time-series forecasting
Motivated by recent advances in large language models for Natural Language Processing (NLP), we design a time-series foundation model for forecasting whose out-of-the-box zero-shot performance on a variety of public datasets comes close to the accuracy of state-of-the-art supervised forecasting models for each individual dataset. Our model is based on pretraining a patched-decoder style attention model on a large time-series corpus, and can work well across different forecasting history lengths, prediction lengths and temporal granularities.
2023: Abhimanyu Das, Weihao Kong, Rajat Sen, Yichen Zhou
https://arxiv.org/pdf/2310.10688
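To make the "patched-decoder style attention model" concrete, here is a minimal PyTorch sketch of the general idea: chop the history into non-overlapping patches, embed each patch as a token, run a causally masked decoder-only stack over the tokens, and predict a longer output patch of future values. This is not the authors' implementation; the class name, patch lengths, and model dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn


class PatchedDecoderForecaster(nn.Module):
    """Toy patched-decoder forecaster (illustrative, not the paper's code)."""

    def __init__(self, input_patch_len=32, output_patch_len=128,
                 d_model=256, n_heads=4, n_layers=2, max_patches=512):
        super().__init__()
        self.input_patch_len = input_patch_len
        # Each non-overlapping patch of raw values becomes one token.
        self.input_proj = nn.Linear(input_patch_len, d_model)
        self.pos_emb = nn.Parameter(torch.zeros(1, max_patches, d_model))
        # Encoder layers plus a causal mask give a decoder-only
        # (no cross-attention) transformer stack.
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        # Each position predicts a patch of future values; making the
        # output patch longer than the input patch (an assumption here)
        # covers a long horizon in fewer autoregressive steps.
        self.output_proj = nn.Linear(d_model, output_patch_len)

    def forward(self, series):
        # series: (batch, context_len), context_len divisible by input_patch_len.
        b, t = series.shape
        patches = series.view(b, t // self.input_patch_len, self.input_patch_len)
        tokens = self.input_proj(patches) + self.pos_emb[:, : patches.size(1)]
        # Causal mask: each patch token attends only to earlier patches.
        n = tokens.size(1)
        causal = torch.triu(torch.ones(n, n, dtype=torch.bool), diagonal=1)
        hidden = self.backbone(tokens, mask=causal)
        # Forecast the next output patch from the last token. Varying
        # context_len at inference is what lets one model serve
        # different history lengths.
        return self.output_proj(hidden[:, -1])  # (batch, output_patch_len)


model = PatchedDecoderForecaster()
history = torch.randn(8, 256)   # 8 series, 256 past observations each
forecast = model(history)       # (8, 128) predicted future values
```

For horizons longer than one output patch, a sketch like this would be applied autoregressively, appending each predicted patch to the context before the next forward pass.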