This tutorial covers how to fine-tune TimeGPT for more accurate and specialized forecasts.

Fine-tuning allows you to adapt the TimeGPT foundation model to your specific dataset, resulting in improved forecasting accuracy for your unique use case. By focusing training on specialized data, you can adjust model parameters to capture domain-specific patterns that might not be fully addressed by the broad training data used for the original model.

1. Import Packages and Initialize Client

Fine-tuning uses the same NixtlaClient as standard TimeGPT forecasting. Begin by importing the required packages:

import pandas as pd
from nixtla import NixtlaClient
from utilsforecast.losses import mae, mse
from utilsforecast.evaluation import evaluate

Next, initialize the NixtlaClient instance, providing your API key (or rely on environment variables):

nixtla_client = NixtlaClient(
    api_key='my_api_key_provided_by_nixtla'  # Defaults to os.environ.get("NIXTLA_API_KEY")
)
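If you prefer not to hard-code the key, you can rely on the NIXTLA_API_KEY environment variable instead; the client falls back to it when api_key is omitted. A minimal sketch of that pattern (the key value below is a placeholder, not a real key):

```python
import os

# Placeholder key; in practice, export NIXTLA_API_KEY in your shell instead
os.environ.setdefault("NIXTLA_API_KEY", "my_api_key_provided_by_nixtla")

# NixtlaClient() with no api_key argument reads this variable
print("NIXTLA_API_KEY" in os.environ)  # → True
```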
2. Load Data

Load the dataset from the provided CSV URL:

df = pd.read_csv(
    "https://raw.githubusercontent.com/Nixtla/transfer-learning-time-series/main/datasets/air_passengers.csv"
)
df.head()
   timestamp   value
0  1949-01-01    112
1  1949-02-01    118
2  1949-03-01    132
3  1949-04-01    129
4  1949-05-01    121
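TimeGPT expects a column of parseable datetimes and a numeric target (plus an optional unique_id column when forecasting multiple series). A quick sanity check of that format, using a hypothetical sample standing in for the first rows of the CSV:

```python
import pandas as pd

# Hypothetical sample mimicking the first rows of air_passengers.csv
df = pd.DataFrame({
    "timestamp": ["1949-01-01", "1949-02-01", "1949-03-01", "1949-04-01", "1949-05-01"],
    "value": [112, 118, 132, 129, 121],
})

# Parse the time column and confirm it is sorted, as forecast() expects
df["timestamp"] = pd.to_datetime(df["timestamp"])
assert df["timestamp"].is_monotonic_increasing
assert pd.api.types.is_numeric_dtype(df["value"])
print(len(df))  # → 5
```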
3. Fine-tuning the Model

Set the number of fine-tuning iterations with the finetune_steps parameter. For example, setting finetune_steps=10 performs ten additional training iterations on your dataset:

timegpt_fcst_finetune_df = nixtla_client.forecast(
    df=df,
    h=12,
    finetune_steps=10,
    time_col='timestamp',
    target_col='value',
)
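To gauge whether fine-tuning actually helps, hold out the last h observations and compare errors against them. A minimal sketch in plain pandas; the series and both forecast columns here are placeholders (in practice they would come from your data and from nixtla_client.forecast):

```python
import pandas as pd

h = 12
# Placeholder series standing in for the air passengers data
df = pd.DataFrame({
    "timestamp": pd.date_range("1949-01-01", periods=36, freq="MS"),
    "value": [float(v) for v in range(100, 136)],
})

# Hold out the last h observations; train would be passed to forecast()
train, test = df.iloc[:-h], df.iloc[-h:]

# Placeholder forecasts standing in for TimeGPT output
fcst = test[["timestamp"]].copy()
fcst["TimeGPT"] = test["value"].to_numpy() + 3.0            # pretend zero-shot error
fcst["TimeGPT_finetuned"] = test["value"].to_numpy() + 1.0  # pretend fine-tuned error

def mae(y, yhat):
    """Mean absolute error between actuals and forecasts."""
    return float((y - yhat).abs().mean())

for col in ["TimeGPT", "TimeGPT_finetuned"]:
    print(col, mae(test["value"], fcst[col]))  # lower is better
```

The mae and mse helpers imported from utilsforecast.losses, together with utilsforecast.evaluation.evaluate, compute the same kind of metrics over real forecast DataFrames.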

During execution, the client logs informational output, including which model endpoint is used.

The public TimeGPT API supports two models by default:

timegpt-1 (Default)

Ideal for most general-purpose forecasting needs.

timegpt-1-long-horizon

Recommended for longer forecast horizons as explained in Long-Horizon Forecasting.

Visualize forecasts to confirm performance:

nixtla_client.plot(
    df,
    timegpt_fcst_finetune_df,
    time_col='timestamp',
    target_col='value',
)

Fine-tuning forecast visualization

Fine-tuning requires experimentation to find the optimal number of steps: more steps can improve accuracy, but they also prolong training time and risk overfitting. Start with a moderate number of steps, monitor performance metrics on held-out data, and increase only as needed.
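The advice above can be sketched as a small sweep over candidate finetune_steps values, each scored on a hold-out window. A real sweep would call nixtla_client.forecast for every candidate (which requires an API key), so this sketch stubs the scoring with hypothetical MAE values to show only the selection logic:

```python
def holdout_mae(finetune_steps):
    # In practice: forecast the hold-out window with this many fine-tuning
    # steps and return the resulting MAE. Stubbed with hypothetical scores.
    scores = {0: 20.5, 10: 14.2, 30: 12.8, 80: 13.9}
    return scores[finetune_steps]

candidates = [0, 10, 30, 80]
best = min(candidates, key=holdout_mae)
print(best)  # the candidate with the lowest hold-out error
```

Note how the hypothetical error stops improving past a moderate step count; that plateau (or reversal) is the overfitting signal to watch for.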