Noise reduction is one of the core capabilities of Flashduty On-call. When an alert storm hits, you might receive hundreds of similar notifications. The noise reduction feature groups these alerts into a single incident, so you only need to handle it once instead of being overwhelmed by repeated notifications.

Why Noise Reduction

| Scenario | Without Noise Reduction | With Noise Reduction |
|---|---|---|
| Server outage triggers 100 alerts | Receive 100 notifications, must handle each one | Receive 1 incident, handle it once |
| Network flapping causes repeated alert trigger/recovery | Notification bombardment, exhausting to respond | Marked as flapping, reduced interference |
| Batch alerts at midnight | Woken up multiple times by calls/SMS | Notified only once, sleep unaffected |
Core value of noise reduction:
  • Reduce notification frequency, avoid alert fatigue
  • Focus on issues that truly need attention
  • Improve incident response and handling efficiency

Core Concepts

Before understanding noise reduction, you need to understand the relationship between three core objects:
Monitoring System → Event → Alert → Incident
| Object | Definition | Source |
|---|---|---|
| Event | Raw notification from the monitoring system; each trigger or recovery is an event | Zabbix, Prometheus, etc. |
| Alert | Automatically triggered by events; multiple events for the same alert merge into one alert | Automatically created by Flashduty |
| Incident | Primary object processed by Flashduty; triggered by alerts or created manually | Auto-triggered or manually created |
Key understanding:
  • One alert can contain multiple events (same alert’s triggers, recoveries)
  • One incident can contain multiple alerts (similar alerts grouped together)
  • Noise reduction occurs at the “Alert → Incident” stage
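To make the containment relationship concrete, here is a minimal sketch in Python; the class and field names are illustrative only and do not reflect Flashduty's actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    """One raw notification from the monitoring system (a trigger or a recovery)."""
    status: str            # e.g. "Critical", "Warning", "Ok"
    labels: dict

@dataclass
class Alert:
    """All events produced by the same check on the same object."""
    alert_key: str         # unique identifier, e.g. the alert ID pushed by the upstream system
    events: list = field(default_factory=list)   # multiple events merge into one alert

@dataclass
class Incident:
    """The object responders actually handle; similar alerts are grouped into it."""
    title: str
    alerts: list = field(default_factory=list)   # noise reduction happens at this level
```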

Noise Reduction Process

When a monitoring system pushes alerts to Flashduty On-call, the system automatically executes the following workflow:
1. Receive Event: determine whether to merge the event into an existing alert; otherwise create a new alert.
2. Process New Alert: determine whether to merge the alert into an existing incident; otherwise create a new incident.
3. Trigger Notification: new incidents notify the relevant personnel according to escalation rules.
4. Merge Subsequent Alerts: subsequent alerts merge into the existing incident without triggering repeated notifications.
Alert Noise Reduction Flowchart
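The four steps can be read as two merge decisions followed by one notification decision. Below is a rough sketch of that control flow; the helper functions (find_matching_alert, create_alert, find_matching_incident, create_incident, notify) are hypothetical stand-ins for Flashduty's internal logic, not a real API:

```python
def handle_event(event):
    # Step 1: merge the event into an existing alert if one matches, otherwise create one
    alert = find_matching_alert(event)
    if alert is not None:
        alert.events.append(event)            # same alert, just record the trigger/recovery
        return

    alert = create_alert(event)
    # Step 2: merge the new alert into an existing incident if grouping rules match
    incident = find_matching_incident(alert)
    if incident is None:
        incident = create_incident(alert)
        notify(incident)                      # Step 3: escalate per the escalation rules
    else:
        incident.alerts.append(alert)         # Step 4: merged, no repeated notification
```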

Alert Grouping

Go to Channel Details → Noise Reduction to configure. Alert grouping merges multiple similar alerts into a single incident for unified assignment and notification. When an alert storm hits, you only need to handle one incident instead of hundreds of repeated notifications.
New channels have alert grouping disabled by default. When disabled, each alert creates an independent incident.

Grouping Modes

Flashduty On-call provides two grouping modes:
| Mode | Use Case | Characteristics |
|---|---|---|
| Intelligent Grouping | Quick start, lower precision requirements | Based on machine-learning semantic similarity analysis; no manual rule configuration needed |
| Rule-based Grouping | Need precise control over grouping logic | Exact matching by specified dimensions (attributes, labels) |
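As an illustration of rule-based grouping, alerts that share identical values on every configured dimension fall into the same incident. A simplified sketch of that matching idea (not Flashduty's actual implementation):

```python
def grouping_key(labels: dict, dimensions: list) -> tuple:
    # Alerts whose keys are equal on all configured dimensions merge into the same incident
    return tuple(labels.get(d, "") for d in dimensions)

a = {"check": "cpu.idle", "host": "es.nj.03"}
b = {"check": "cpu.idle", "host": "es.nj.01"}

print(grouping_key(a, ["check"]) == grouping_key(b, ["check"]))                  # True: grouped
print(grouping_key(a, ["check", "host"]) == grouping_key(b, ["check", "host"]))  # False: separate
```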

Common Configuration

| Configuration | Description |
|---|---|
| Grouping Window | Only alerts within the time window are grouped; alerts outside the window trigger new incidents |
| Alert Storm Warning | Trigger a warning notification when the merged alert count reaches a threshold, prompting urgent handling |
| Strict Grouping | When enabled, empty label values are treated as different; when disabled, empty values are treated as the same (not supported for Intelligent Grouping) |
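The grouping window and strict grouping interact with the matching logic roughly as follows; this is one plausible reading expressed as a sketch, and the one-hour default window is chosen arbitrarily for illustration:

```python
from datetime import datetime, timedelta

def within_window(incident_start: datetime, alert_time: datetime,
                  window: timedelta = timedelta(hours=1)) -> bool:
    # Alerts arriving after the grouping window start a new incident instead of merging
    return alert_time - incident_start <= window

def labels_match(a: dict, b: dict, dims: list, strict: bool) -> bool:
    for d in dims:
        va, vb = a.get(d, ""), b.get(d, "")
        if strict and (va == "" or vb == ""):
            return False          # strict grouping: an empty value never matches
        if va != vb:
            return False
    return True
```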
When new alerts are highly similar to active incidents, they are automatically merged into that incident. To configure Intelligent Grouping:
1. Select Grouping Mode: choose Intelligent Grouping.
2. Specify Calculation Fields: specify the fields that participate in the similarity calculation (up to 10).
Intelligent Grouping Configuration
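Intelligent grouping compares the selected fields by semantic similarity rather than exact equality. Purely as an intuition aid (Flashduty uses machine-learning models, not character matching), a similarity threshold over alert titles could be sketched as:

```python
from difflib import SequenceMatcher

def similar(title_a: str, title_b: str, threshold: float = 0.8) -> bool:
    # Crude stand-in for semantic similarity: share of matching characters
    return SequenceMatcher(None, title_a, title_b).ratio() >= threshold

print(similar("cpu idle < 20% / es.nj.03", "cpu idle < 20% / es.nj.01"))  # True  -> same incident
print(similar("cpu idle < 20% / es.nj.03", "disk usage > 90% / db-01"))   # False -> separate
```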

Grouping Effect

After setting grouping by Alert Check Item, five alert notifications (spread across three alerts) are grouped into one incident:
Incident: cpu idle < 20% / es.nj.03, Critical

  - Alert cpu idle < 20% / es.nj.03:
      - Event1: es.nj.03, cpu.idle = 10%, Critical
      - Event2: es.nj.03, cpu.idle = 18%, Warning
      - Event4: es.nj.03, cpu.idle = 10%, Ok

  - Alert cpu idle < 20% / es.nj.01:
      - Event3: es.nj.01, cpu.idle = 15%, Warning
  
  - Alert cpu idle < 20% / es.nj.02:
      - Event5: es.nj.02, cpu.idle = 19%, Warning
View grouping relationships on the incident details page:
  • Click alert title to view alert timeline and associated events
  • Click event point to view specific event content
Grouping Effect

Flapping Detection

When the same incident triggers and recovers frequently, the system marks it as “flapping” status to avoid notification bombardment. Go to Channel Details → Noise Reduction → Flapping Detection:
| Option | Behavior |
|---|---|
| Off | Don't detect flapping status (default) |
| Alert Only | Mark flapping status; continue notifications per policy |
| Alert Then Silence | Mark flapping status; no more notifications after the first alert |
“Same incident” refers to incidents with the same Alert Key, typically using the alert ID pushed from the upstream system as a unique identifier.
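Conceptually, flapping means the same Alert Key flips between triggered and recovered too many times within a short period. A minimal sketch of that check; the 30-minute window and the threshold of 6 transitions are made-up values for illustration, not Flashduty's actual parameters:

```python
from datetime import datetime, timedelta

def is_flapping(transitions: list,
                window: timedelta = timedelta(minutes=30),
                threshold: int = 6) -> bool:
    # transitions: timestamps at which the incident switched between trigger and recover
    if not transitions:
        return False
    latest = max(transitions)
    recent = [t for t in transitions if latest - t <= window]
    return len(recent) >= threshold
```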

Silence Rules

During maintenance windows or known issue periods, silence rules can suppress alert notifications for specific conditions. Go to Channel Details → Noise Reduction → Silence Rules.

Silence Time

| Type | Description |
|---|---|
| One-time Silence | Active during a specified time period; the rule is retained but inactive after expiration |
| Recurring Silence - Weekly Mode | Active at fixed weekly time periods, e.g., every Saturday 00:00-06:00 |
| Recurring Silence - Calendar Mode | Active on workdays/rest days per the Service Calendar |
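A weekly recurring window boils down to a weekday-and-hours check against the current time. A small sketch of the "every Saturday 00:00-06:00" example; the function name and parameters are illustrative, not part of any Flashduty API:

```python
from datetime import datetime

def in_weekly_silence(now: datetime, weekday: int, start_hour: int, end_hour: int) -> bool:
    # weekday follows datetime.weekday(): 0 = Monday ... 5 = Saturday, 6 = Sunday
    return now.weekday() == weekday and start_hour <= now.hour < end_hour

# Every Saturday 00:00-06:00 (2024-06-08 is a Saturday)
print(in_weekly_silence(datetime(2024, 6, 8, 3, 0), weekday=5, start_hour=0, end_hour=6))  # True
```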

Silence Conditions

Silence conditions define which alerts should be silenced; multiple conditions can be combined.
| Match Item | Description | Example |
|---|---|---|
| Severity | Match by alert severity | Only silence Info level |
| Title | Match by alert title keywords | Title contains “Planned Maintenance” |
| Description | Match by alert description content | Description contains “restart” |
| Labels | Match by label key-value pairs | host=db-master-01 |
Combination Logic:
  • AND: All conditions must be met to silence
  • OR: Any condition met triggers silence
See Configure Filter Conditions for details.
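Putting the match items and combination logic together, a silence decision can be sketched as evaluating each condition and then applying AND or OR; the condition structure below is invented for illustration and is not Flashduty's rule schema:

```python
def matches(alert: dict, cond: dict) -> bool:
    # One condition, e.g. {"field": "title", "contains": "Planned Maintenance"}
    return cond["contains"] in str(alert.get(cond["field"], ""))

def should_silence(alert: dict, conditions: list, logic: str = "AND") -> bool:
    results = [matches(alert, c) for c in conditions]
    return all(results) if logic == "AND" else any(results)

alert = {"severity": "Info", "title": "Planned Maintenance: restart db-master-01"}
conds = [{"field": "severity", "contains": "Info"},
         {"field": "title", "contains": "Planned Maintenance"}]
print(should_silence(alert, conds, "AND"))  # True -> silenced
```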

Silence Behavior

| Behavior | Description |
|---|---|
| Drop Directly | The alert doesn't appear in any list and leaves no record |
| Retain and Mark | The alert appears in the Raw Alerts List marked as “Silenced” and can be filtered and viewed |

Quick Silence

Quickly create temporary silence rules based on existing incidents. Operation Path: Incident Details → More Actions → Quick Silence
  • Rule name defaults to incident ID + title
  • Effective scope is the incident’s channel (cannot be changed)
  • Default effective for 24 hours, automatically deleted after expiration
  • Conditions default to exact match of incident labels
Quick Silence
When repeatedly using quick silence on the same incident, it edits the original rule rather than creating a new one.

Inhibit Rules

When a root cause alert exists, automatically inhibit related secondary alerts. For example: When a Critical level incident exists, inhibit Warning/Info level incidents for the same check item.

Configuration Path

| Location | Path | Characteristics |
|---|---|---|
| Channel | Channel Details → Noise Reduction → Inhibit Rules | Only effective for alerts in the current channel |
| Alert Integration | Alert Integration Details → Alert Processing → Alert Inhibition | Effective for alerts from this integration |

Inhibit Conditions

A new alert is inhibited when it meets the new-alert conditions, an active (not closed) incident that meets the active-alert conditions exists within the last 10 minutes, and both have identical values for the configured equal items.
| Configuration | Description |
|---|---|
| New Alert Conditions | Conditions the inhibited alert must meet, e.g., severity is Warning/Info |
| Active Alert Conditions | Conditions the inhibiting source alert must meet, e.g., severity is Critical |
| Equal Items | Attributes or labels that must be identical between the two, e.g., check item, hostname |
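Read together, the three settings amount to the following check, sketched here under the example from this section (Critical incidents inhibit Warning/Info alerts with the same check item and hostname); the dictionary fields are illustrative only:

```python
from datetime import datetime, timedelta

def should_inhibit(new_alert: dict, active_incidents: list,
                   equal_items: list, now: datetime) -> bool:
    # New alert conditions: only low-severity alerts can be inhibited
    if new_alert["severity"] not in ("Warning", "Info"):
        return False
    for inc in active_incidents:
        # Active alert conditions: a Critical, not-yet-closed incident seen within 10 minutes
        recent = now - inc["last_triggered"] <= timedelta(minutes=10)
        if inc["severity"] == "Critical" and not inc["closed"] and recent:
            # Equal items: e.g. check item and hostname must be identical on both sides
            if all(new_alert.get(k) == inc.get(k) for k in equal_items):
                return True
    return False
```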

Inhibit Behavior

| Behavior | Description |
|---|---|
| Drop Directly | The alert doesn't appear in any list and leaves no record |
| Retain and Mark | The alert appears in the Alert List marked as “Inhibited” and can be filtered and viewed |

Configuration Example

Scenario: When Critical level alerts exist, inhibit Warning/Info level alerts for the same check item.
Inhibit Rule Configuration

FAQ

Will the incident title change as new alerts merge in?
No. The incident title matches the first alert that triggered it and can be manually modified at any time; it won’t change with new alerts.

Will incident labels change?
  • Manually created incidents: No, the labels list always remains empty
  • Auto-triggered incidents: Possibly; incident labels stay consistent with the first alert, so if that alert’s labels change, the incident labels update accordingly

Will alert labels change as new events merge in?
Yes. Alert labels always stay consistent with the latest merged event. However, if the new event is a recovery event, the alert keeps its existing labels and only adds labels that didn’t exist before.

How many alerts can one incident contain?
Up to 5000, mainly to ensure console rendering performance. Due to backend concurrent processing, the actual count may slightly exceed this limit.

Is there a limit on how many events can merge into a single alert?
  • Rule-based Grouping: No limit, but the maximum grouping window is 24 hours; after 24 hours from the alert trigger, new events create new incidents
  • Intelligent Grouping: No limit, but the maximum grouping window is 30 days; after 30 days from the alert trigger, new events create new incidents