Use Cases
Alert responders are maintained in the source monitoring system and frequently adjusted; you want to sync them to Flashduty On-call in real-time.
Scenario 1: Big Data Task System
Customer A has a self-developed big data task system where internal personnel can create various data batch processing tasks. Each task can specify a primary responder and secondary responder. When a batch processing task fails, the system will first notify the primary responder. If the alert is not resolved after 30 minutes, it escalates to the secondary responder.
Scenario 2: Zabbix Host Monitoring
Customer B uses Zabbix for host monitoring and has set a responsible person tag for each host. They want host alerts to notify the corresponding responder based on this tag.
Scenario 3: Self-developed Monitoring System
Customer C has a self-developed monitoring system with many alert policies. Each policy is configured to notify a specific WeCom group. The customer has decided to migrate incident response to Flashduty but wants to maintain the policy-to-WeCom-group relationships from the source monitoring system and dynamically route alerts to WeCom groups based on these relationships.
Implementation
Add specific labels or query parameters to override assignment targets in Flashduty On-call, enabling dynamic assignment:

- Replace Responders
- Replace Teams
- Replace WeCom Group Bot
- Replace Dingtalk Group Bot
- Replace Feishu/Lark Group Bot
| Configuration | Description |
|---|---|
| Parameter Name | Must match the regex `^layer_person_reset_(\d)_emails$`; level numbers start from 0. For example, `layer_person_reset_0_emails` replaces the responders in escalation rule level 1 |
| Parameter Value | Responder email addresses, separated by `,` when there are multiple. For example, `zhangsan@flashcat.cloud,lisi@flashcat.cloud` replaces the responders with Zhang San and Li Si |
| Parameter Location | Query parameter or label value. For example, set this label in Nightingale alerts, or auto-generate it through label enhancement |
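As a sketch of the query-parameter form, the override can be appended to the integration's push URL. The base URL and integration key below are placeholders, not a real endpoint; substitute the push URL from your own integration.

```python
from urllib.parse import urlencode

# Placeholder push URL for a Flashduty custom event integration;
# replace with the URL shown in your integration's settings.
base_url = "https://api.flashcat.cloud/event/push/alert/standard"

params = {
    "integration_key": "YOUR_INTEGRATION_KEY",  # placeholder
    # Override the level-1 (index 0) responders for this alert only.
    "layer_person_reset_0_emails": "zhangsan@flashcat.cloud,lisi@flashcat.cloud",
}

push_url = f"{base_url}?{urlencode(params)}"
print(push_url)
```

`urlencode` percent-encodes the `@` and `,` characters, so the comma-separated email list travels safely as a single query parameter.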
Push Example
Step 1: Set Up Template Escalation Rule
Configure an escalation rule for the channel. As shown below, this channel has only one assignment level, with the responder set to “Toutie Tech”, and also pushes to a WeCom group chat with a token ending in 5b96.
Step 2: Set Alert Labels
Using the custom alert event integration as an example, push a sample alert to the target channel:

- Set the `layer_person_reset_0_emails` label to replace the level 1 responders with guoyuhang and yushuangyu
- Set the `layer_webhook_reset_0_wecoms` label to replace the level 1 WeCom group chat token with a token ending in d9c0
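A hedged sketch of such an event body is below. The field names (`title`, `event_status`, `labels`) are assumptions modeled on a typical event schema, and the email addresses and token are made-up placeholders (only the names guoyuhang/yushuangyu and the `d9c0` suffix come from the example above); check your integration's payload reference before use.

```python
import json

# Illustrative alert payload for a custom event integration push.
# Field names and email domains are assumptions, not the documented schema.
event = {
    "title": "Batch task failed: daily_user_etl",
    "event_status": "Critical",
    "labels": {
        # Replace the level-1 responders for this incident (emails are placeholders).
        "layer_person_reset_0_emails": "guoyuhang@example.com,yushuangyu@example.com",
        # Replace the level-1 WeCom group bot token (truncated placeholder).
        "layer_webhook_reset_0_wecoms": "****d9c0",
    },
}

print(json.dumps(event, ensure_ascii=False, indent=2))
```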
Step 3: View Incident Assignment Timeline
As shown below, the target incident is triggered normally and assigned. The incident responders and target group chat have been replaced as expected.
FAQ
What if my monitoring system doesn't have these labels?
Option 1: Manually Add Labels
If your system supports manually adding labels, such as Prometheus or Nightingale, we recommend adding specific labels directly in the alert policy.
Option 2: Use Label Enhancement
Your system may already have related labels, but in a different format or naming convention. For example, your hosts may carry a team label, and you need to find the corresponding responder based on that team. In this case, use the label enhancement feature to generate responder-related labels from the team labels. For details, see Configure Label Enhancement.
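Conceptually, the team-to-responder lookup that label enhancement performs can be sketched as a simple mapping. The team names, emails, and function below are hypothetical illustrations; in Flashduty this lookup is configured in the label enhancement UI, not written as code.

```python
# Hypothetical team -> responder-email mapping for illustration only.
TEAM_OWNERS = {
    "data-platform": "zhangsan@flashcat.cloud",
    "infra": "lisi@flashcat.cloud",
}

def enrich_labels(labels: dict) -> dict:
    """Derive the responder-override label from an existing team label."""
    enriched = dict(labels)
    owner = TEAM_OWNERS.get(labels.get("team", ""))
    if owner:
        # Generated label overrides the level-1 responders downstream.
        enriched["layer_person_reset_0_emails"] = owner
    return enriched

print(enrich_labels({"team": "infra", "host": "db-01"}))
```

Alerts whose team label has no mapping pass through unchanged, so the channel's template escalation rule still applies to them.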