Learn how Flashduty reduces alert noise.
When Flashduty receives an alert event (such as a Zabbix notification), it automatically triggers an alert, which in turn triggers an incident. Multiple similar active alerts may be grouped into the same incident for unified assignment, notification, and handling, significantly reducing notification frequency and improving response efficiency.
Noise Reduction Model#
Basic Concepts#
Event: Originates from monitoring systems (like Zabbix), where each trigger and recovery notification is an event.
Alert: Automatically triggered by events. Events that the originating monitoring system reports at different times for the same alert are merged into a single alert in Flashduty.
Incident: The primary object handled by the Flashduty platform, typically triggered automatically by alerts but also creatable manually. An incident can be understood as a collection of similar alerts that Flashduty automatically groups together, based on user-configured grouping strategies, for assignment and handling.
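These three objects form a simple containment hierarchy. As a minimal sketch (the class and field names here are illustrative assumptions, not Flashduty's actual schema):

```python
from dataclasses import dataclass, field

# Illustrative containment hierarchy only; class and field names are
# assumptions for this sketch, not Flashduty's actual data schema.

@dataclass
class Event:
    """One trigger or recovery notification from a monitoring system."""
    check: str                      # e.g. "cpu idle < 20%"
    resource: str                   # e.g. "es.nj.03"
    severity: str                   # e.g. "Critical", "Warning", "Ok"
    labels: dict = field(default_factory=dict)

@dataclass
class Alert:
    """Events reported over time for the same source alert merge here."""
    title: str
    events: list[Event] = field(default_factory=list)

@dataclass
class Incident:
    """Similar active alerts group here for assignment and handling."""
    title: str                      # defaults to the first alert's title
    alerts: list[Alert] = field(default_factory=list)
```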
Noise Reduction Process#
Incidents can be triggered in two ways: manual creation and automatic triggering. Alert noise reduction only applies in the automatic triggering scenario, which typically follows these steps (a sketch of the pipeline follows the list):
1. The monitoring system generates an alert and pushes it to Flashduty
2. Flashduty determines whether the new event can be merged into an active alert; if not, it triggers a new alert
3. Flashduty determines whether the new alert can be merged into an active incident; if not, it triggers a new incident
4. Flashduty assigns personnel according to escalation rules and sends notifications
5. Assigned personnel receive notifications and begin handling the incident
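As a rough sketch of steps 2-3 under assumed matching rules (alert identity by check item and resource; incident grouping by check item, as in the example later on this page), not Flashduty's actual implementation:

```python
# Minimal sketch of the automatic-trigger pipeline. The matching logic
# (same_alert / similar) and in-memory stores are assumptions for
# illustration, not Flashduty's actual implementation.

active_alerts: list[dict] = []
active_incidents: list[dict] = []

def same_alert(alert: dict, event: dict) -> bool:
    # Assumed identity: same check item on the same resource.
    return alert["check"] == event["check"] and alert["resource"] == event["resource"]

def similar(incident: dict, alert: dict) -> bool:
    # Assumed grouping dimension: same check item.
    return incident["check"] == alert["check"]

def ingest(event: dict) -> None:
    # Step 2: merge the event into an active alert, or trigger a new one.
    for alert in active_alerts:
        if same_alert(alert, event):
            alert["events"].append(event)
            return                        # merged: no new notification
    alert = {**event, "events": [event]}
    active_alerts.append(alert)

    # Step 3: merge the alert into an active incident, or trigger a new one.
    for incident in active_incidents:
        if similar(incident, alert):
            incident["alerts"].append(alert)
            return                        # merged: no new notification
    incident = {"check": alert["check"], "alerts": [alert]}
    active_incidents.append(incident)

    # Steps 4-5: assign via escalation rules and notify (stubbed here).
    print(f"notify: new incident for {incident['check']}")
```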
Go to Channel Details - Noise reduction to set up alert grouping strategies. When creating a new channel, alert grouping is disabled by default; we recommend enabling it manually and configuring grouping strategies as needed. When alert grouping is disabled, each alert creates a separate incident carrying the same basic information.
General Configuration#
Grouping Window: You can choose to group only alerts that occur close together in time (stronger correlation). Alerts outside the time window trigger new incidents. Note that this is a sliding window that extends as new alerts are merged (see the sketch after this list).
Storm Warning: After an incident is triggered, the system assigns and notifies immediately (assuming you haven't set notification delays). Subsequently merged alerts don't trigger new notifications, which could prevent you from detecting alert storms. This threshold addresses that: when the number of merged alerts reaches it, the system triggers a storm warning to prompt urgent handling.
Strict Grouping: When enabled, a grouping condition whose value is empty on both the alert and the incident is treated as a mismatch; when disabled, two empty values are treated as equal (not supported in intelligent grouping).
Preview: You can use the preview function to fetch recent events and render noise reduction results in real-time to evaluate effectiveness.
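A minimal sketch of the sliding grouping window and storm-warning threshold; the window length, threshold value, and in-memory structure are assumptions for illustration, not Flashduty's actual implementation:

```python
import time

# Illustrative sliding-window grouping with a storm-warning threshold.
# WINDOW_SECONDS and STORM_THRESHOLD are assumed example values.

WINDOW_SECONDS = 15 * 60      # assumed grouping window
STORM_THRESHOLD = 50          # assumed storm-warning threshold

class IncidentGroup:
    def __init__(self, first_alert):
        self.alerts = [first_alert]
        self.window_end = time.time() + WINDOW_SECONDS

    def try_merge(self, alert, now=None) -> bool:
        now = now or time.time()
        if now > self.window_end:
            return False                  # outside window: trigger new incident
        self.alerts.append(alert)
        # Sliding window: each merged alert extends the window.
        self.window_end = now + WINDOW_SECONDS
        if len(self.alerts) == STORM_THRESHOLD:
            print("storm warning: merged alerts reached threshold")
        return True
```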
Alert Grouping Mode#
Flashduty provides two alert grouping modes: Intelligent Grouping and Rule-based Grouping. Intelligent Grouping uses machine learning algorithms to analyze the semantic content of incident information in depth, identifying and merging similar events. This approach eliminates the need to configure grouping rules manually, making it suitable for scenarios with low aggregation requirements. In contrast, Rule-based Grouping provides a more direct and customizable approach, letting users configure specific grouping rules based on their business logic and requirements. Users define which events should be grouped together, ensuring event management aligns with organizational requirements. This gives complete control over complex event streams, which can then be managed and prioritized according to predefined standards.
Intelligent Grouping#
Intelligent Grouping: When a newly triggered alert is highly similar to an active incident's content, the alert will be merged into the incident.
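The underlying model is internal to Flashduty; purely to illustrate the idea of similarity-based merging, here is a toy stand-in that scores token overlap (Jaccard) between titles. This is not the algorithm Flashduty uses:

```python
# Toy stand-in for similarity-based merging; Flashduty's actual
# machine-learning model is internal and works on richer content.

def jaccard(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def should_merge(alert_title: str, incident_title: str,
                 threshold: float = 0.7) -> bool:
    # Merge the new alert when its content is highly similar.
    return jaccard(alert_title, incident_title) >= threshold

# Same check on a different host scores ~0.71 with this toy metric:
print(should_merge("cpu idle < 20% / es.nj.03", "cpu idle < 20% / es.nj.01"))
```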

Rule-based Grouping#
Unified Control: All new alerts in the channel use the same dimensions for grouping. Supports selecting multiple attributes and labels, with conditions matching only when all attributes and labels are identical.

Fine-grained Control: New alerts are grouped by the dimensions of whichever condition they match. If no condition matches, the default dimensions are used; if no default dimensions are set, the alert is not grouped. A sketch of group-key derivation follows.
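A minimal sketch of deriving a grouping key from the configured dimensions, assuming alerts are plain dicts; the attribute names are examples, not a fixed schema:

```python
# Illustrative group-key derivation for rule-based grouping.
# Attribute names ("check", "resource") are examples only.

def group_key(alert: dict, dimensions: list[str], strict: bool = False):
    values = tuple(alert.get(d, "") for d in dimensions)
    if strict and any(v == "" for v in values):
        return None        # strict grouping: an empty value matches nothing
    return values

# Alerts merge into the same incident only when all dimensions match:
a1 = {"check": "cpu idle < 20%", "resource": "es.nj.03"}
a2 = {"check": "cpu idle < 20%", "resource": "es.nj.01"}
assert group_key(a1, ["check"]) == group_key(a2, ["check"])   # grouped together
assert group_key(a1, ["check", "resource"]) != group_key(a2, ["check", "resource"])
```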

Grouping Example#
With the channel set to group by alert check item, the system receives 5 event notifications that trigger alerts and incidents in sequence:
Incident: cpu idle < 20% / es.nj.03, Critical
- Alert cpu idle < 20% / es.nj.03:
  - Event 1: es.nj.03, cpu.idle = 10%, Critical
  - Event 2: es.nj.03, cpu.idle = 18%, Warning
  - Event 4: es.nj.03, cpu.idle = 10%, Ok
- Alert cpu idle < 20% / es.nj.01:
  - Event 3: es.nj.01, cpu.idle = 15%, Warning
- Alert cpu idle < 20% / es.nj.02:
  - Event 5: es.nj.02, cpu.idle = 19%, Warning
Through the console's incident details page, we can see the final [Incident-Alert-Event] relationships:
- Click the alert title to view associated alert details, including the alert timeline and related events
- Click an event point to view the specific event report content, including labels and descriptions

FAQ#
Does the incident title change when alerts are merged?
No. By default, the incident title remains identical to that of the first alert that triggered it. You can manually modify the incident title at any time, and it won't change as new alerts are merged.
Do incident labels change when alerts are merged?
- Manually created incidents: No, their label list always remains empty
- Automatically triggered incidents: Possibly. The incident's labels stay consistent with those of the first alert that triggered it; if that alert's labels change, the incident's labels sync accordingly.
Do alert labels change when events are merged?
Yes, alert labels always stay consistent with newly merged events. For example, if you receive a "CPU idle too low" alert at 10:00 with a trigger value of 10%, that trigger-value label may change dynamically as more events are merged. However, if a new recovery event is received, the alert keeps its existing labels and only adds labels that were not previously present. Our principle is to keep alert labels as close as possible to their trigger state.
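A minimal sketch of this label-merge rule, assuming labels are plain dicts and a boolean flag marks recovery events; an illustration of the stated behavior, not Flashduty's code:

```python
# Illustrative label merge when an event joins an alert.
# Recovery events only add previously missing labels; other events
# overwrite, keeping labels close to the alert's trigger state.

def merge_labels(alert_labels: dict, event_labels: dict,
                 is_recovery: bool) -> dict:
    if is_recovery:
        # Keep existing labels; add only keys the alert doesn't have yet.
        return {**event_labels, **alert_labels}
    # Normal events: event labels override existing ones.
    return {**alert_labels, **event_labels}

labels = {"cpu.idle": "10%", "host": "es.nj.03"}
labels = merge_labels(labels, {"cpu.idle": "18%"}, is_recovery=False)
assert labels["cpu.idle"] == "18%"               # overridden by new event
labels = merge_labels(labels, {"cpu.idle": "100%", "note": "ok"}, is_recovery=True)
assert labels["cpu.idle"] == "18%" and labels["note"] == "ok"  # trigger state kept
```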
Is there a limit to the number of alerts that can be merged into an incident?
Yes, we limit each incident to grouping a maximum of 1,000 alerts, primarily to reduce console page rendering time. However, because Flashduty is a high-performance event processing system with extensive concurrent logic, it's normal to occasionally see incidents with more than 1,000 grouped alerts.
Is there a limit to the number of events that can be merged into an alert?
No, but an alert's event-grouping window is at most 24 hours. If an alert hasn't recovered after 24 hours, no new events are merged into it; if Flashduty receives new events, it generates new alerts.
Why doesn't the number of events I pushed match the number of events associated with the alert?
Event-to-alert merging is also a noise-reduction process. If Flashduty determines that a newly reported event doesn't meaningfully change the alert (e.g., no change in status, severity, or description), it discards the new event and uses its labels to override the existing ones.
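A minimal sketch of this dedup decision, assuming events and alerts are dicts with status, severity, and description fields; an illustration of the stated behavior, not Flashduty's code:

```python
# Illustrative event dedup: drop events that don't change the alert,
# but still apply their labels. Field names are assumptions.

SIGNIFICANT = ("status", "severity", "description")

def absorb_event(alert: dict, event: dict) -> bool:
    """Return True if the event is kept, False if discarded as noise."""
    unchanged = all(alert.get(f) == event.get(f) for f in SIGNIFICANT)
    alert.setdefault("labels", {}).update(event.get("labels", {}))
    if unchanged:
        return False          # event discarded; only its labels applied
    alert.setdefault("events", []).append(event)
    for f in SIGNIFICANT:
        alert[f] = event.get(f)
    return True

alert = {"status": "firing", "severity": "Warning", "description": "cpu idle low"}
dup = {"status": "firing", "severity": "Warning", "description": "cpu idle low",
       "labels": {"cpu.idle": "12%"}}
assert absorb_event(alert, dup) is False         # discarded, labels applied
assert alert["labels"]["cpu.idle"] == "12%"
```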