Why Noise Reduction
| Scenario | Without Noise Reduction | With Noise Reduction |
|---|---|---|
| Server outage triggers 100 alerts | Receive 100 notifications, must handle each one | Receive 1 incident, handle uniformly |
| Network flapping causes repeated alert trigger/recovery | Notification bombardment, exhausting to respond | Marked as flapping, reduced interference |
| Batch alerts at midnight | Woken up multiple times by calls/SMS | Notified only once, sleep unaffected |
Core Concepts
Before understanding noise reduction, you need to understand the relationship between three core objects:

| Object | Definition | Source |
|---|---|---|
| Event | Raw notification from monitoring system, each trigger or recovery is an event | Zabbix, Prometheus, etc. |
| Alert | Automatically triggered by events; repeated trigger/recovery events for the same condition merge into one alert | Automatically created by Flashduty |
| Incident | Primary object processed by Flashduty, triggered by alerts or created manually | Auto-triggered or manually created |
Key understanding:
- One alert can contain multiple events (the same alert's triggers and recoveries)
- One incident can contain multiple alerts (similar alerts grouped together)
- Noise reduction occurs at the “Alert → Incident” stage
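The Event → Alert → Incident hierarchy can be sketched in a few lines. This is purely illustrative; the class and field names below are hypothetical, not Flashduty's actual data model.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the three-object hierarchy (names are hypothetical).
@dataclass
class Event:
    kind: str        # "trigger" or "recovery"
    check_item: str

@dataclass
class Alert:
    check_item: str
    events: list = field(default_factory=list)    # one alert, many events

@dataclass
class Incident:
    alerts: list = field(default_factory=list)    # one incident, many alerts

# Two events for the same check item merge into one alert...
alert = Alert("cpu.high", [Event("trigger", "cpu.high"), Event("recovery", "cpu.high")])
# ...and similar alerts group into one incident; noise reduction
# happens at this Alert -> Incident stage.
incident = Incident([alert, Alert("cpu.high", [Event("trigger", "cpu.high")])])

print(len(incident.alerts), len(incident.alerts[0].events))  # 2 2
```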
Noise Reduction Process
When a monitoring system pushes alerts to Flashduty On-call, the system automatically runs the following workflow:

Process New Alert
Determine whether the alert should merge into an existing incident; otherwise, create a new incident.

Alert Grouping
Go to Channel Details → Noise Reduction to configure. Alert grouping merges multiple similar alerts into a single incident for unified assignment and notification. When an alert storm hits, you only need to handle one incident instead of hundreds of repeated notifications.

New channels have alert grouping disabled by default. When disabled, each alert creates an independent incident.
Grouping Modes
Flashduty On-call provides two grouping modes:

| Mode | Use Case | Characteristics |
|---|---|---|
| Intelligent Grouping | Quick start, lower precision requirements | Based on machine learning semantic similarity analysis, no manual rule configuration needed |
| Rule-based Grouping | Need precise control over grouping logic | Exact matching by specified dimensions (attributes, labels) |
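Rule-based grouping can be pictured as bucketing alerts by the exact values of the chosen dimensions. The sketch below is an assumption of how such exact matching behaves, not Flashduty's implementation; the field names are made up.

```python
from collections import defaultdict

# Sketch of rule-based grouping: alerts whose chosen dimensions match
# exactly fall into the same incident. Field names are illustrative.
def group_by_dimensions(alerts, dimensions):
    incidents = defaultdict(list)
    for alert in alerts:
        key = tuple(alert.get(d) for d in dimensions)
        incidents[key].append(alert)
    return list(incidents.values())

alerts = [
    {"check": "disk.full", "host": "db-01", "severity": "Warning"},
    {"check": "disk.full", "host": "db-01", "severity": "Critical"},
    {"check": "disk.full", "host": "web-02", "severity": "Warning"},
]
# Grouping by check item + host merges the first two alerts into one incident.
groups = group_by_dimensions(alerts, ["check", "host"])
print(len(groups))  # 2
```

Two incidents instead of three independent ones: severity differs between the first two alerts, but it is not a grouping dimension, so they still merge.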
Common Configuration
| Configuration | Description |
|---|---|
| Grouping Window | Only group alerts within the time window, alerts outside the window trigger new incidents |
| Alert Storm Warning | Trigger warning notification when merged alert count reaches threshold, prompting urgent handling |
| Strict Grouping | When enabled, empty label values are treated as different; when disabled, empty values are treated as the same (not supported for intelligent grouping) |
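The grouping-window and strict-grouping options can be illustrated together. This is a hypothetical sketch of the semantics described in the table; thresholds, field names, and the exact matching logic are assumptions.

```python
# Sketch of two options from the table above (illustrative, not the
# real server-side implementation).
def same_group(a, b, dimensions, window_seconds, strict=False):
    # Grouping window: only alerts close enough in time may merge.
    if abs(a["ts"] - b["ts"]) > window_seconds:
        return False
    for d in dimensions:
        va, vb = a["labels"].get(d), b["labels"].get(d)
        if strict:
            # Strict: an empty/missing value never matches a present value.
            if va != vb:
                return False
        else:
            # Lenient: empty values are treated as matching anything.
            if va is not None and vb is not None and va != vb:
                return False
    return True

a = {"ts": 0, "labels": {"host": "db-01"}}
b = {"ts": 60, "labels": {}}  # no host label
print(same_group(a, b, ["host"], 300))               # True (lenient)
print(same_group(a, b, ["host"], 300, strict=True))  # False (strict)
```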
Grouping Effect
After setting grouping by Alert Check Item, 5 alert notifications are grouped into 1 incident:

- Click an alert title to view the alert timeline and associated events
- Click event point to view specific event content

Flapping Detection
When the same incident triggers and recovers frequently, the system marks it as “flapping” to avoid notification bombardment. Go to Channel Details → Noise Reduction → Flapping Detection:

| Option | Behavior |
|---|---|
| Off | Don’t detect flapping status (default) |
| Alert Only | Mark flapping status, continue notifications per policy |
| Alert Then Silence | Mark flapping status, no more notifications after first alert |
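One common way to detect flapping is to count trigger/recovery transitions within a sliding window. The sketch below shows that general idea; the window size and transition threshold are illustrative assumptions, not Flashduty's actual values.

```python
# Sketch of flapping detection: if an incident flips between trigger and
# recovery too many times inside a window, mark it as flapping.
# window_seconds and max_transitions are illustrative defaults.
def is_flapping(events, window_seconds=600, max_transitions=4):
    recent = [e for e in events if e["ts"] >= events[-1]["ts"] - window_seconds]
    transitions = sum(
        1 for prev, cur in zip(recent, recent[1:]) if prev["kind"] != cur["kind"]
    )
    return transitions >= max_transitions

# Six alternating events in quick succession: clearly flapping.
events = [
    {"ts": t, "kind": kind}
    for t, kind in enumerate(["trigger", "recovery"] * 3)
]
print(is_flapping(events))  # True
```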
Silence Rules
During maintenance windows or known-issue periods, silence rules can suppress alert notifications matching specific conditions. Go to Channel Details → Noise Reduction → Silence Rules.

Silence Time
| Type | Description |
|---|---|
| One-time Silence | Active during specified time period, rule retained but inactive after expiration |
| Recurring Silence - Weekly Mode | Active at fixed weekly time periods, e.g., every Saturday 00:00-06:00 |
| Recurring Silence - Calendar Mode | Active on workdays/rest days per Service Calendar |
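A weekly recurring window amounts to a simple weekday-and-hour check. The helper below is a minimal sketch of the "every Saturday 00:00-06:00" example from the table; the function and its parameters are hypothetical.

```python
from datetime import datetime

# Sketch of a weekly recurring silence window check, e.g. every
# Saturday 00:00-06:00 as in the example above. Purely illustrative.
def in_weekly_silence(now, weekday, start_hour, end_hour):
    # Python's weekday(): Monday is 0, so Saturday is 5.
    return now.weekday() == weekday and start_hour <= now.hour < end_hour

sat_3am = datetime(2024, 6, 1, 3, 0)   # 2024-06-01 is a Saturday
mon_3am = datetime(2024, 6, 3, 3, 0)   # a Monday
print(in_weekly_silence(sat_3am, 5, 0, 6))  # True: inside the window
print(in_weekly_silence(mon_3am, 5, 0, 6))  # False: wrong weekday
```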
Silence Conditions
Define which alerts should be silenced; multiple condition combinations are supported.

| Match Item | Description | Example |
|---|---|---|
| Severity | Match by alert level | Only silence Info level |
| Title | Match by alert title keywords | Title contains “Planned Maintenance” |
| Description | Match by alert description content | Description contains “restart” |
| Labels | Match by label key-value pairs | host=db-master-01 |
- AND: All conditions must be met to silence
- OR: Any condition met triggers silence
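The AND/OR combination of match items can be sketched as follows. The matcher shapes and field names are assumptions for illustration; only the severity/title/label match items come from the table above.

```python
# Sketch of silence-condition evaluation with AND/OR combination.
# Condition shapes are illustrative, not Flashduty's rule schema.
def matches(alert, cond):
    if "severity" in cond:
        return alert["severity"] == cond["severity"]
    if "title_contains" in cond:
        return cond["title_contains"] in alert["title"]
    if "label" in cond:
        key, value = cond["label"]
        return alert["labels"].get(key) == value
    return False

def should_silence(alert, conditions, mode="AND"):
    results = [matches(alert, c) for c in conditions]
    # AND: all conditions must match; OR: any single match is enough.
    return all(results) if mode == "AND" else any(results)

alert = {"severity": "Info", "title": "Planned Maintenance: db restart",
         "labels": {"host": "db-master-01"}}
conds = [{"severity": "Info"}, {"title_contains": "Planned Maintenance"}]
print(should_silence(alert, conds, "AND"))  # True: both conditions match
```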
Silence Behavior
| Behavior | Description |
|---|---|
| Drop Directly | Alert doesn’t appear in any list, no record |
| Retain and Mark | Alert appears in Raw Alerts List marked as “Silenced”, can be filtered and viewed |
Quick Silence
Quickly create a temporary silence rule based on an existing incident. Operation path: Incident Details → More Actions → Quick Silence.

- Rule name defaults to incident ID + title
- Effective scope is the incident’s channel (cannot be changed)
- Default effective for 24 hours, automatically deleted after expiration
- Conditions default to exact match of incident labels

When repeatedly using quick silence on the same incident, it edits the original rule rather than creating a new one.
Inhibit Rules
When a root-cause alert exists, automatically inhibit related secondary alerts. For example: when a Critical-level incident exists, inhibit Warning/Info-level incidents for the same check item.

Configuration Path
| Location | Path | Characteristics |
|---|---|---|
| Channel | Channel Details → Noise Reduction → Inhibit Rules | Only effective for alerts in current channel |
| Alert Integration | Alert Integration Details → Alert Processing → Alert Inhibition | Effective for alerts from this integration |
Inhibit Conditions
A new alert is inhibited when it meets the new-alert conditions, an active (not closed) incident meeting the active-alert conditions exists within the last 10 minutes, and the two share identical values for all equal items.

| Configuration | Description |
|---|---|
| New Alert Conditions | Conditions the inhibited alert must meet, e.g., severity is Warning/Info |
| Active Alert Conditions | Conditions the inhibiting source alert must meet, e.g., severity is Critical |
| Equal Items | Attributes or labels that must be identical between both, e.g., check item, hostname |
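The three parts of an inhibit rule fit together as in this sketch, which hardcodes the Critical-inhibits-Warning/Info example from above. The function, field names, and lookback constant are illustrative; only the 10-minute window and the rule semantics come from the text.

```python
# Sketch of the inhibit check described above: a new alert is suppressed
# when a matching active Critical incident exists within the window.
INHIBIT_WINDOW = 600  # seconds: the 10-minute window mentioned above

def is_inhibited(new_alert, active_incidents, equal_items, now):
    # New-alert condition: only low severities are candidates for inhibition.
    if new_alert["severity"] not in ("Warning", "Info"):
        return False
    for inc in active_incidents:
        # Active-alert condition: the inhibiting source must be Critical.
        if inc["severity"] != "Critical":
            continue
        if now - inc["ts"] > INHIBIT_WINDOW:
            continue
        # Equal items: listed attributes/labels must be identical on both.
        if all(new_alert["labels"].get(k) == inc["labels"].get(k) for k in equal_items):
            return True
    return False

incident = {"severity": "Critical", "ts": 100,
            "labels": {"check": "disk.full", "host": "db-01"}}
new = {"severity": "Warning", "labels": {"check": "disk.full", "host": "db-01"}}
print(is_inhibited(new, [incident], ["check", "host"], now=400))  # True
```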
Inhibit Behavior
| Behavior | Description |
|---|---|
| Drop Directly | Alert doesn’t appear in any list, no record |
| Retain and Mark | Alert appears in Alert List marked as “Inhibited”, can be filtered and viewed |
Configuration Example
Scenario: When Critical level alerts exist, inhibit Warning/Info level alerts for the same check item.
FAQ
Will the incident title change when new alerts merge in?
No. The incident title matches the first alert that triggered it and can be manually modified at any time; it won’t change with new alerts.
Will incident labels change when new alerts merge in?
- Manually created incidents: No, labels list always remains empty
- Auto-triggered incidents: Possibly, incident labels stay consistent with the first alert; if that alert’s labels change, incident labels update accordingly
Will alert labels change when new events merge in?
Yes. Alert labels always stay consistent with the latest merged event. However, if the new event is a recovery event, the alert keeps existing labels and only adds labels that didn’t exist before.
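The label-merge rule above can be sketched as follows: a trigger event replaces the alert's labels, while a recovery event only adds keys that were not already present. The function name and shapes are hypothetical.

```python
# Sketch of the label-merge rule described above (illustrative).
def merge_labels(alert_labels, event_labels, is_recovery):
    if not is_recovery:
        # Labels follow the latest trigger event wholesale.
        return dict(event_labels)
    merged = dict(alert_labels)
    for k, v in event_labels.items():
        # Recovery events only add previously absent keys.
        merged.setdefault(k, v)
    return merged

labels = {"host": "db-01", "disk": "/data"}
recovery = {"host": "db-01", "note": "auto-recovered"}
print(merge_labels(labels, recovery, is_recovery=True))
# {'host': 'db-01', 'disk': '/data', 'note': 'auto-recovered'}
```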
What's the maximum number of alerts a single incident can group?
Up to 5000, mainly to ensure console rendering performance. Due to backend concurrent processing, actual count may slightly exceed this limit.
What's the maximum number of events a single alert can be associated with?
- Rule-based Grouping: No limit, maximum grouping window is 24 hours. After 24 hours from alert trigger, new events create new incidents
- Intelligent Grouping: No limit, maximum grouping window is 30 days. After 30 days from alert trigger, new events create new incidents

