Fine-tuning with a Custom Loss Function

Fine-tuning lets you specify the loss function used to adapt the model to your data. This choice can significantly affect how the model learns from the data and, ultimately, its forecasting performance.

Overview

When fine-tuning, specify the loss function with the finetune_loss argument of the forecast method.

The currently supported values for finetune_loss are listed below; a short sketch comparing them follows the list:

mae (mean absolute error)

mse (mean squared error)

rmse (root mean squared error)

mape (mean absolute percentage error)

smape (symmetric mean absolute percentage error)
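
If you are not sure which loss suits your data best, you can run the same forecast once per value and compare the results. The following is a minimal sketch; it assumes a NixtlaClient instance (nixtla_client) and a DataFrame (df) prepared as in the steps below.

Compare Supported Loss Functions
# Sketch: produce one forecast per supported loss for comparison
# (assumes `nixtla_client` and `df` from the steps below)
forecasts = {}
for loss in ["mae", "mse", "rmse", "mape", "smape"]:
    forecasts[loss] = nixtla_client.forecast(
        df=df,
        h=12,
        finetune_steps=5,
        finetune_loss=loss,
        time_col='timestamp',
        target_col='value'
    )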

Steps to Fine-Tune Your Forecast Model

1. Import Libraries

Import Libraries
import pandas as pd
from nixtla import NixtlaClient

2. Initialize the Nixtla Client

By default, NixtlaClient looks for the API key in the NIXTLA_API_KEY environment variable.

Nixtla Public API Initialization
nixtla_client = NixtlaClient(
    # defaults to os.environ.get("NIXTLA_API_KEY")
    api_key='my_api_key_provided_by_nixtla'
)
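
Optionally, you can confirm that the key is accepted before continuing; a minimal check, assuming your client version exposes the validate_api_key helper:

Validate API Key (optional)
# Optional: returns True if the API key is accepted
nixtla_client.validate_api_key()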

3. Load Your Data

Load CSV Data
# Read data
df = pd.read_csv(
    "https://raw.githubusercontent.com/Nixtla/transfer-learning-time-series/main/datasets/air_passengers.csv"
)
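
Optionally, inspect the first rows to confirm the data loaded as expected; this dataset uses the timestamp and value columns referenced in the forecast call below.

Preview Data (optional)
# Optional: check the `timestamp` and `value` columns
print(df.head())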

4. Fine-Tune the Model and Forecast

Fine-Tune Model with Loss Function
# Fine-tune with a specified loss function and make predictions
forecast_df = nixtla_client.forecast(
    df=df,
    h=12,
    finetune_steps=5,
    finetune_loss="mae",
    time_col='timestamp',
    target_col="value"
)
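
To inspect the result visually, you can optionally plot the history together with the fine-tuned forecast. A minimal sketch, assuming your client version provides the plot helper:

Plot the Forecast (optional)
# Optional: plot historical values alongside the fine-tuned forecast
nixtla_client.plot(df, forecast_df, time_col='timestamp', target_col='value')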

Using Azure AI or Nixtla’s Public API

When using an Azure AI endpoint, set model="azureai" explicitly in the forecast method.

Forecast with Azure AI Model
nixtla_client.forecast(
    df=df,
    h=12,
    finetune_steps=5,
    finetune_loss="mae",
    time_col='timestamp',
    target_col="value",
    model="azureai"
)

When using Nixtla’s public API, two models are available:

timegpt-1

timegpt-1-long-horizon

For more details on choosing or using timegpt-1-long-horizon, see the tutorial:
Long-Horizon Forecasting
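
To use a specific public-API model, pass its name through the model argument, just as with Azure AI above. For example, a sketch of the same fine-tuned forecast with the long-horizon model:

Forecast with the Long-Horizon Model
# Same fine-tuning arguments, selecting the long-horizon model
nixtla_client.forecast(
    df=df,
    h=12,
    finetune_steps=5,
    finetune_loss="mae",
    time_col='timestamp',
    target_col='value',
    model="timegpt-1-long-horizon"
)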


Learn More

For a detailed explanation of how different loss functions influence model performance, read the tutorial:
Fine-tuning with a Specific Loss Function