Adjusting the Anomaly Detection Process
Learn how to refine TimeGPT’s anomaly detection process by tuning parameters for improved accuracy and alignment with specific use cases.
This notebook shows you how to refine TimeGPT’s anomaly detection process. By tuning parameters, you can align anomaly detection with specific use cases and improve accuracy.
1. Install and Import Dependencies
In your environment, install and import the necessary libraries:
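A minimal setup sketch, assuming the `nixtla` client plus `pandas` for data handling and `matplotlib` for plotting:

```python
# Install the required packages (run once in your environment):
#   pip install nixtla pandas matplotlib

import pandas as pd
from nixtla import NixtlaClient
```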
2. Define a Plotting Utility Function
Use this helper function to visualize detected anomalies:
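The sketch below defines a hypothetical `plot_anomalies` helper built on matplotlib. The column names (`ds`, `y`, and a boolean `anomaly` flag) follow Nixtla's usual output schema; adjust them if your results use different names.

```python
import matplotlib.pyplot as plt

def plot_anomalies(df, anomalies_df, time_col="ds", target_col="y"):
    """Plot a series and highlight the rows flagged as anomalies.

    Assumes `anomalies_df` shares the time/target columns of `df`
    and carries a boolean `anomaly` column (Nixtla's output schema).
    """
    fig, ax = plt.subplots(figsize=(12, 4))
    ax.plot(df[time_col], df[target_col], color="steelblue", label="series")

    # Overlay the observations flagged as anomalous.
    flagged = anomalies_df[anomalies_df["anomaly"].astype(bool)]
    ax.scatter(flagged[time_col], flagged[target_col],
               color="red", zorder=3, label="anomaly")

    ax.set_xlabel(time_col)
    ax.set_ylabel(target_col)
    ax.legend()
    fig.tight_layout()
    plt.show()
```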
3. Initialize the Nixtla Client
Create an instance of NixtlaClient with your API key:
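A minimal sketch, assuming your key is stored in the `NIXTLA_API_KEY` environment variable (the client can also pick this variable up automatically):

```python
import os

nixtla_client = NixtlaClient(
    api_key=os.environ.get("NIXTLA_API_KEY", "my_api_key_provided_by_nixtla")
)
```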
If you are using an Azure AI endpoint, set the base_url parameter:
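The endpoint URL and key below are placeholders; substitute the values from your Azure AI deployment:

```python
nixtla_client = NixtlaClient(
    base_url="https://your-azure-ai-endpoint",  # placeholder: your Azure AI endpoint URL
    api_key="your_azure_api_key",               # placeholder: the key for that endpoint
)
```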
Why Anomaly Detection?
TimeGPT leverages forecast errors to identify anomalies in your time-series data. By optimizing parameters, you can detect subtle deviations and customize results for specific use cases.
Key Parameters
• detection_size sets the size of the data window used to compute anomaly thresholds.
• level sets the confidence level for the prediction intervals that define the anomaly thresholds.
• freq aligns detection with your data’s frequency (e.g., “D” for daily), as in the sketch below.
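As a rough sketch of how these parameters fit together, the call below assumes the client’s `detect_anomalies_online` method, a dataframe `df` with `ds`/`y` columns, and illustrative values; check the exact signature in your installed version of the client.

```python
anomalies_df = nixtla_client.detect_anomalies_online(
    df,
    freq="D",            # data frequency: daily
    level=99,            # confidence level for the anomaly thresholds
    detection_size=150,  # number of trailing observations to scan for anomalies
    h=14,                # forecast horizon used internally (assumed value)
)
```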
Conduct a Baseline Anomaly Detection
Load a portion of the Peyton Manning dataset to illustrate the default anomaly detection process:
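A sketch of the baseline run. The dataset URL follows Nixtla’s public examples, and the parameter values are illustrative rather than prescribed; the file is expected to contain `ds` (date) and `y` (value) columns.

```python
# Load roughly the last year of the Peyton Manning Wikipedia page-views series.
df = pd.read_csv(
    "https://datasets-nixtla.s3.amazonaws.com/peyton-manning.csv",
    parse_dates=["ds"],
)
df = df.tail(365)

# Baseline detection with default-leaning settings.
anomalies_df = nixtla_client.detect_anomalies_online(
    df,
    freq="D",
    h=14,
    level=99,
    detection_size=150,
)
anomalies_df.head()
```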
Baseline Anomaly Detection Visualization
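With the baseline results in hand, overlay the flagged points on the series using the helper from step 2 (or the client’s built-in `plot` method):

```python
plot_anomalies(df, anomalies_df)

# Alternative: the client's built-in plotting utility.
# nixtla_client.plot(df, anomalies_df)
```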
You have now run a baseline anomaly detection with TimeGPT. Experiment with different detection sizes, confidence levels, horizons, step sizes, and fine-tuning strategies to tailor alerts to your unique data patterns.