Fine-tuning
How to fine-tune TimeGPT forecasts using the `finetune_steps` and `finetune_depth` parameters for improved accuracy.
Overview
You can fine-tune TimeGPT forecasts by specifying the `finetune_steps` parameter. Fine-tuning allows you to further adjust the model to the nuances of your specific time series data, potentially improving forecast accuracy.
Fine-tuning uses additional training iterations on your own dataset. While it can improve forecast performance, it also increases the compute time needed to generate predictions.
Key Fine-tuning Concepts
`finetune_steps`
The number of additional training steps to run. Increasing this value can improve accuracy, but also requires more computation.
`finetune_depth`
The intensity or depth of the fine-tuning. By default, `finetune_depth=1`. Increasing it can further refine forecasts but also makes training more resource-intensive.
If you set both `finetune_steps` and `finetune_depth` too high, the training process can become very time-consuming.
Setting Up Your Environment
Before creating forecasts, you need to install and initialize the Nixtla client with your API key.
Install and Import Libraries
Initialize the Nixtla Client
If you’re using an Azure AI endpoint, set the `base_url` parameter accordingly:
Forecasting with Fine-tuning
Follow these steps to fine-tune your TimeGPT forecasts.
Load Data
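As an illustration, the sketch below loads the air passengers dataset used in Nixtla's docs; any DataFrame with a timestamp column and a target column works:

```python
import pandas as pd

# Example dataset: monthly air passenger counts with
# "timestamp" and "value" columns.
df = pd.read_csv(
    "https://raw.githubusercontent.com/Nixtla/transfer-learning-time-series/main/datasets/air_passengers.csv"
)
print(df.head())
```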
Create a Forecast (Example: 5 Fine-tune Steps)
If you are using Azure AI, specify the model name explicitly:
In the public Nixtla API, you can choose from two models:

- `timegpt-1` (default)
- `timegpt-1-long-horizon`

See the long-horizon forecasting tutorial for information about when and how to use `timegpt-1-long-horizon`.
Advanced Fine-tuning
Increasing `finetune_depth` and `finetune_steps` increases computation time, so generating predictions will take longer.
Additional Resources
For more detailed information and advanced configurations, see the fine-tuning tutorial.