Fine-tuning
Adapt TimeGPT to your specific datasets for more accurate forecasts
This tutorial covers how to fine-tune TimeGPT for more accurate and specialized forecasts.
Fine-tuning allows you to adapt the TimeGPT foundation model to your specific dataset, resulting in improved forecasting accuracy for your unique use case. By focusing training on specialized data, you can adjust model parameters to capture domain-specific patterns that might not be fully addressed by the broad training data used for the original model.
1. Import Packages and Initialize Client
Fine-tuning uses the same NixtlaClient as standard TimeGPT forecasting. Begin by importing the required packages:
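A minimal import cell, assuming pandas is used for data handling:

```python
import pandas as pd

from nixtla import NixtlaClient
```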
Next, initialize the NixtlaClient instance, providing your API key (or rely on environment variables):
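For example (the key below is a placeholder; if no key is passed, the client reads the NIXTLA_API_KEY environment variable):

```python
# Instantiate the client; api_key can be omitted if NIXTLA_API_KEY is set
nixtla_client = NixtlaClient(api_key='my_api_key_provided_by_nixtla')
```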
2. Load Data
Load the dataset from the provided CSV URL:
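A sketch of this step, assuming the AirPassengers CSV hosted in Nixtla's examples repository (the exact URL is an assumption here, chosen to match the preview below):

```python
# Read the monthly AirPassengers series; columns are 'timestamp' and 'value'
df = pd.read_csv(
    'https://raw.githubusercontent.com/Nixtla/transfer-learning-time-series/main/datasets/air_passengers.csv'
)
df.head()
```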
|   | timestamp  | value |
|---|------------|-------|
| 0 | 1949-01-01 | 112   |
| 1 | 1949-02-01 | 118   |
| 2 | 1949-03-01 | 132   |
| 3 | 1949-04-01 | 129   |
| 4 | 1949-05-01 | 121   |
3. Fine-tuning the Model
Set the number of fine-tuning iterations with the finetune_steps parameter. For example, setting finetune_steps=10 performs ten additional training iterations on your dataset:
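A sketch of that call; the 12-month horizon is illustrative:

```python
# Run 10 fine-tuning iterations on this series, then forecast 12 steps ahead
forecast_df = nixtla_client.forecast(
    df=df,
    h=12,
    finetune_steps=10,
    time_col='timestamp',
    target_col='value',
)
```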
During execution, the client prints informational log messages, for example about input validation and preprocessing.
The public TimeGPT API supports two models by default:

- timegpt-1 (default): ideal for most general-purpose forecasting needs.
- timegpt-1-long-horizon: recommended for longer forecast horizons, as explained in Long-Horizon Forecasting.
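Both models accept the same fine-tuning arguments; the model is chosen with the forecast method's model parameter. A sketch with an illustrative 36-month horizon:

```python
# Fine-tune the long-horizon variant instead of the default model
forecast_long_df = nixtla_client.forecast(
    df=df,
    h=36,
    finetune_steps=10,
    model='timegpt-1-long-horizon',
    time_col='timestamp',
    target_col='value',
)
```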
Visualize forecasts to confirm performance:
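A minimal sketch using the client's built-in plotting helper:

```python
# Overlay the fine-tuned forecast on the historical series
nixtla_client.plot(
    df,
    forecast_df,
    time_col='timestamp',
    target_col='value',
)
```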
Fine-tuning forecast visualization
Fine-tuning requires experimentation to find the optimal number of steps: more steps can improve accuracy, but they also risk overfitting and prolong training time. Start with a moderate value, monitor performance metrics on held-out data (as in the sketch below), and increase the step count only as needed.
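One way to monitor this is a small holdout comparison. The sketch below builds on the objects defined above, holds out the last 12 observations, and compares mean absolute error across a few illustrative step counts (finetune_steps=0 is the zero-shot baseline; the forecast column in the returned dataframe is named TimeGPT):

```python
import numpy as np

# Hold out the final 12 months for evaluation
train_df, test_df = df[:-12], df[-12:]

for steps in (0, 10, 50):
    fcst = nixtla_client.forecast(
        df=train_df,
        h=12,
        finetune_steps=steps,
        time_col='timestamp',
        target_col='value',
    )
    mae = np.mean(np.abs(test_df['value'].to_numpy() - fcst['TimeGPT'].to_numpy()))
    print(f'finetune_steps={steps}: MAE={mae:.2f}')
```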