The Alert Processing Pipeline lets you cleanse, transform, and filter alert data the moment it enters Flashduty On-call, at the integration layer. It acts like a data processing factory, ensuring that alerts flowing into channels are standardized, clear, and valuable.
How It Works
Pipeline sits between Alert Ingestion and Route Distribution. Its execution logic is as follows:
- Chain Processing: You can configure multiple processing rules that execute sequentially from top to bottom
- Input/Output: The result of a previous rule (e.g., modified title) can serve as input for the next rule
- Layer Positioning: Pipeline operates at the integration layer. This means once rules take effect, all alerts ingested through this integration will be affected, regardless of which channel they’re ultimately routed to
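The chain-processing model above can be sketched in code. This is a minimal illustration, not Flashduty's actual data model: the `Alert` struct, `Rule` type, and `RunPipeline` function are all assumed names for the sake of the example.

```go
package main

import "fmt"

// Alert is a simplified, hypothetical representation of an ingested alert.
type Alert struct {
	Title    string
	Severity string
	Labels   map[string]string
	Dropped  bool
}

// A Rule transforms an alert; the output of one rule is the input of the next.
type Rule func(Alert) Alert

// RunPipeline applies rules top to bottom, stopping early if a rule drops the alert.
func RunPipeline(a Alert, rules []Rule) Alert {
	for _, r := range rules {
		if a.Dropped {
			break
		}
		a = r(a)
	}
	return a
}

func main() {
	rules := []Rule{
		// Rule 1: rewrites the title; its result feeds the next rule.
		func(a Alert) Alert { a.Title = "[Prod] " + a.Title; return a },
		// Rule 2: sees the already-modified title from rule 1.
		func(a Alert) Alert { fmt.Println("rule 2 sees:", a.Title); return a },
	}
	out := RunPipeline(Alert{Title: "CPU Load High"}, rules)
	fmt.Println("final title:", out.Title)
}
```

The key point is the second bullet above: each rule receives the accumulated result of everything before it, so rule order matters.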
Configuration Entry
Go to Integration Center => select the integration you created => Alert Processing tab.
Core Features and Scenarios
Custom Severity
Flashduty On-call has built-in severity mapping for standard integrations (e.g., mapping Zabbix High to Critical). But if default rules don’t meet your needs, you can override them via Pipeline.
Scenario: Warning level alerts in the monitoring system are actually critical for core payment business and need to be upgraded to Critical to trigger phone notifications.
| Config Item | Value |
|---|---|
| Condition | labels.service equals payment AND severity equals Warning |
| Action | Update severity to Critical |
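The condition-and-action pair above behaves like the following sketch. The `Alert` struct and `OverrideSeverity` function are illustrative assumptions, not Flashduty internals:

```go
package main

import "fmt"

// Alert is a minimal assumed shape for an ingested alert.
type Alert struct {
	Severity string
	Labels   map[string]string
}

// OverrideSeverity mirrors the rule in the table above: payment-service
// Warning alerts are upgraded to Critical; everything else passes through.
func OverrideSeverity(a Alert) Alert {
	if a.Labels["service"] == "payment" && a.Severity == "Warning" {
		a.Severity = "Critical"
	}
	return a
}

func main() {
	a := Alert{Severity: "Warning", Labels: map[string]string{"service": "payment"}}
	fmt.Println(OverrideSeverity(a).Severity)
}
```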
Convert cryptic machine language to human-readable business language using template syntax.
Rewrite Title

Scenario: The original title contains many IDs with no business meaning.

| Item | Content |
|---|---|
| Original Title | [Problem] CPU Load High on i-12345678 |
| New Title Template | {{.Labels.env}} Environment - {{.Labels.service}} Service CPU Load Alert |
| Result | Production Environment - Order Service CPU Load Alert |

Rewrite Description

Scenario: Automatically append Runbook links or dashboard URLs to the description to speed up troubleshooting.

| Config Item | Value |
|---|---|
| Action | Update alert description |
| Append Content | Grafana Dashboard: https://grafana.corp.com/d/cpu?var-host={{.Labels.host}} |
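The appended content uses Go template syntax, so you can verify how it renders locally with the standard library. The `Alert` struct below is only an assumed shape for the template context:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Alert is an assumed shape for the variable context available to templates.
type Alert struct {
	Labels map[string]string
}

// RenderAppend renders the appended description line with text/template.
// Note that {{.Labels.host}} resolves to the "host" key of the Labels map.
func RenderAppend(a Alert) (string, error) {
	tpl, err := template.New("desc").Parse(
		"Grafana Dashboard: https://grafana.corp.com/d/cpu?var-host={{.Labels.host}}")
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tpl.Execute(&buf, a); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	line, _ := RenderAppend(Alert{Labels: map[string]string{"host": "web-01"}})
	fmt.Println(line)
}
```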
Alert Discard
Matching alerts are discarded before data storage, leaving no records. This is similar to "Exclusion Rules" in channels but takes effect earlier.
| Scenario | Advantage |
|---|---|
| Frequent restart alerts from dev environment | Cleanse at source |
| Known harmless errors (like “NTP offset”) | Reduce load on subsequent routing and storage resources |
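A discard rule is essentially a predicate evaluated before storage. In this sketch (the struct and label names are assumptions), dev-environment alerts are dropped at the source and never reach routing or storage:

```go
package main

import "fmt"

// Alert is a minimal assumed alert shape.
type Alert struct {
	Title  string
	Labels map[string]string
}

// ShouldDiscard returns true for alerts matching a discard rule,
// e.g. anything from the dev environment.
func ShouldDiscard(a Alert) bool {
	return a.Labels["env"] == "dev"
}

func main() {
	alerts := []Alert{
		{Title: "restart loop", Labels: map[string]string{"env": "dev"}},
		{Title: "CPU high", Labels: map[string]string{"env": "prod"}},
	}
	for _, a := range alerts {
		if ShouldDiscard(a) {
			continue // dropped before storage; no record is kept
		}
		fmt.Println("stored:", a.Title)
	}
}
```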
Alert Inhibition
The Pipeline's inhibition feature works the same way as channel inhibition rules: both support dependency-based inhibition defined by a source incident, a target incident, and correlation conditions.
| Comparison | Pipeline Inhibition | Channel Inhibition |
|---|---|---|
| Effective Layer | Integration layer | Channel layer |
| Use Case | Global inhibition logic, like inhibiting all alerts from a datacenter after network outage | Inhibition rules for specific channels |
When an entire datacenter loses network, all alerts from that datacenter (regardless of business line) should be inhibited. Configuring one rule at the integration layer is much more efficient than configuring separately in dozens of channels.
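The datacenter scenario reduces to a single integration-layer rule: while the source incident (the network outage) is active, suppress any alert whose correlation condition matches. This sketch uses assumed struct and label names purely for illustration:

```go
package main

import "fmt"

// Alert is a minimal assumed alert shape.
type Alert struct {
	Title  string
	Labels map[string]string
}

// InhibitionRule suppresses target alerts while a source incident is active
// and the correlation condition (same datacenter label) holds.
type InhibitionRule struct {
	SourceActive bool   // is the "datacenter network outage" incident open?
	Datacenter   string // correlation condition: matching "dc" label
}

// Inhibits reports whether the rule suppresses the given alert.
func (r InhibitionRule) Inhibits(a Alert) bool {
	return r.SourceActive && a.Labels["dc"] == r.Datacenter
}

func main() {
	rule := InhibitionRule{SourceActive: true, Datacenter: "dc-east"}
	for _, a := range []Alert{
		{Title: "payment latency", Labels: map[string]string{"dc": "dc-east"}},
		{Title: "disk full", Labels: map[string]string{"dc": "dc-west"}},
	} {
		fmt.Printf("%s inhibited=%v\n", a.Title, rule.Inhibits(a))
	}
}
```

Configured once at the integration layer, this single rule covers every business line in the datacenter, which is the efficiency gain described above.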
Reference Syntax
When rewriting titles or descriptions, you can use Go Template syntax to reference internal alert variables. Proper use of variable references makes your alert notifications more dynamic and informative.
| Variable | Description | Example |
|---|---|---|
| {{.Labels.xxx}} | Reference a specific label | {{.Labels.host}} |
| {{.Title}} | Reference the current title | [Forwarded] {{.Title}} |
| {{.Description}} | Reference the current description | Details: {{.Description}} |
| {{.Severity}} | Reference the current severity | Current Level: {{.Severity}} |
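Because these variables follow Go template semantics, you can dry-run a template locally with the standard library before saving it. The `Alert` struct here is only an assumed shape for the template context:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Alert models the template context implied by the table above (assumed shape).
type Alert struct {
	Title    string
	Severity string
	Labels   map[string]string
}

// Render executes a Go template string against an alert and returns the result.
func Render(tplText string, a Alert) (string, error) {
	tpl, err := template.New("t").Parse(tplText)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tpl.Execute(&buf, a); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	a := Alert{
		Title:    "CPU Load High",
		Severity: "Warning",
		Labels:   map[string]string{"host": "web-01"},
	}
	out, _ := Render("[Forwarded] {{.Title}}", a)
	fmt.Println(out)
	out, _ = Render("Current Level: {{.Severity}} on {{.Labels.host}}", a)
	fmt.Println(out)
}
```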
Before configuring the Pipeline, we recommend observing raw alert data for a period of time to identify which fields (Labels) are stable and which need cleansing.
Further Reading