Use the finetune_depth parameter to control how many of TimeGPT's parameters are adjusted during fine-tuning. Valid values are 1, 2, 3, 4, and 5. Higher values can yield better performance but also increase training time and the risk of overfitting.

Key Concept: finetune_depth

The finetune_depth parameter determines how extensively TimeGPT is adapted to your dataset. A depth of 1 tunes only a small subset of the model's parameters, while a depth of 5 fine-tunes the entire model.

Be cautious when choosing a large finetune_depth on small datasets, as it can lead to overfitting and hurt prediction accuracy.
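For illustration, finetune_depth is passed directly to the forecast call. The minimal sketch below assumes an initialized NixtlaClient and a training DataFrame with 'timestamp' and 'value' columns; the full runnable workflow appears in the steps that follow.

Previewing finetune_depth
preds_df = nixtla_client.forecast(
    df=train,
    h=24,
    finetune_steps=5,    # number of fine-tuning iterations
    finetune_depth=2,    # how much of the model to adapt (1 = least, 5 = entire model)
    time_col='timestamp',
    target_col='value'
)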


1. Import Packages

First, import the required packages and initialize the Nixtla client.

Importing Packages
import pandas as pd
from nixtla import NixtlaClient
from utilsforecast.losses import mae, mse
from utilsforecast.evaluation import evaluate
Initializing Client
nixtla_client = NixtlaClient(
    # defaults to os.environ.get("NIXTLA_API_KEY")
    api_key='my_api_key_provided_by_nixtla'
)
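Optionally, verify the key before making any requests; validate_api_key returns True when the key is accepted.

Validating API Key
# Optional sanity check: confirms the client can authenticate.
nixtla_client.validate_api_key()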

2. Load Data

Next, load the dataset and split it into training and testing sets to analyze the effect of different finetune_depth values.

Loading Dataset
df = pd.read_csv(
    'https://raw.githubusercontent.com/Nixtla/transfer-learning-time-series/main/datasets/air_passengers.csv'
)
df.head()
Splitting Data
train = df[:-24]  # all but the last 24 months
test = df[-24:]   # hold out the last 24 months for evaluation
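As a quick sanity check, confirm the split sizes; the air passengers series contains 144 monthly observations, so this split should yield 120 training rows and 24 test rows.

Checking the Split
# Expected: (120, 2) for train and (24, 2) for test on this dataset.
print(train.shape, test.shape)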

3. Fine-Tuning With finetune_depth

For Azure AI, specify model="azureai" in the forecast method. The public API supports timegpt-1 and timegpt-1-long-horizon.
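For example, the call differs only in the model argument. This sketch assumes a client initialized with your Azure AI endpoint's base_url and API key, and is not run in this tutorial.

Forecasting With Azure AI
# Sketch: the same forecast against an Azure AI deployment.
preds_df = nixtla_client.forecast(
    df=train,
    h=24,
    model="azureai",
    time_col='timestamp',
    target_col='value'
)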

Now, fine-tune TimeGPT with varying depths to compare performance. Note that larger depths can yield better results with sufficiently large datasets but may lead to overfitting on smaller ones.

Fine-Tuning Loop
depths = [1, 2, 3, 4, 5]

test = test.copy()  # work on a copy to avoid mutating the original slice

for depth in depths:
    # Fine-tune for 5 steps at the given depth, then forecast 24 steps ahead
    preds_df = nixtla_client.forecast(
        df=train,
        h=24,
        finetune_steps=5,
        finetune_depth=depth,
        time_col='timestamp',
        target_col='value'
    )

    # Store each depth's forecast as a separate column for side-by-side evaluation
    preds = preds_df['TimeGPT'].values
    test.loc[:, f'TimeGPT_depth{depth}'] = preds

Evaluate the forecasts using MAE and MSE metrics:

Evaluation Metrics
test['unique_id'] = 0  # evaluate() expects a series identifier column

evaluation = evaluate(
    test,
    metrics=[mae, mse],
    time_col="timestamp",
    target_col="value"
)
evaluation
| unique_id | metric | TimeGPT_depth1 | TimeGPT_depth2 | TimeGPT_depth3 | TimeGPT_depth4 | TimeGPT_depth5 |
| --- | --- | --- | --- | --- | --- | --- |
| 0 | mae | 22.675540 | 17.908963 | 21.318518 | 24.745096 | 28.734302 |
| 0 | mse | 677.254283 | 461.320852 | 676.202126 | 991.835359 | 1119.722602 |

The results show that finetune_depth=2 yields the lowest MAE and MSE, making it the best setting for this dataset. Depths of 4 and 5 produce noticeably higher errors, a sign of overfitting, demonstrating the importance of balanced fine-tuning.

Choosing the right fine-tuning parameters often involves experimentation. Monitor performance carefully and adjust finetune_steps and finetune_depth based on your dataset size and complexity.
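As a starting point, you can automate the comparison by picking the depth with the lowest average error rank straight from the evaluation DataFrame; this minimal sketch uses only the objects defined above.

Selecting the Best Depth
# Rank each depth within every metric row, then pick the lowest mean rank.
model_cols = [c for c in evaluation.columns if c.startswith('TimeGPT_depth')]
ranks = evaluation[model_cols].rank(axis=1)
best_model = ranks.mean().idxmin()
print(f'Best configuration: {best_model}')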