Alert Processing Pipeline
RUM alerts pass through four layers from Error generation to human notification:

| Layer | Configuration Location | Core Function |
|---|---|---|
| ① Data Filtering | RUM App → Alert Settings | Exclude unwanted Errors at the source, reducing unnecessary Issues |
| ② Alert Grading | RUM App → Alert Settings | Set Issue priority based on Error attributes |
| ③ Alert Processing | Flashduty Integration → Alert Pipeline | Adjust priority, drop/suppress based on Issue dimensions |
| ④ Alert Dispatch | Flashduty Channel | Route to teams, notify responders |
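The four layers can be pictured as a chain in which each stage either drops an error or passes it downstream. A minimal sketch, assuming a simple dict-based error payload; the function names are illustrative, not Flashduty APIs:

```python
# Illustrative sketch of the four-layer pipeline; all names are hypothetical.
def process_error(error, filters, grading_rules, pipeline_rules, channel):
    # ① Data Filtering: excluded Errors never become Issues
    if any(f(error) for f in filters):
        return None
    # ② Alert Grading: first matching rule sets the priority
    error["priority"] = next(
        (prio for match, prio in grading_rules if match(error)), "P2"
    )
    # ③ Alert Processing: adjust or suppress based on Issue dimensions
    if not pipeline_rules(error):
        return None
    # ④ Alert Dispatch: route to the responsible team
    return channel(error)
```

Note that a drop at layer ① is permanent (the Error never aggregates into an Issue), while a drop at layer ③ only silences the notification.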
Step 1: Filter Noise Data
Before configuring alert grading, start by cleaning up the data source. Common noise sources include:

Third-party script errors
Errors from browser extensions or third-party ad/analytics scripts are unrelated to your business and should be excluded:
- Error Stack contains chrome-extension://
- Error Stack contains moz-extension://
- Error Stack contains cdn.third-party.com
Known harmless errors
Some errors occur frequently but don’t affect user experience:
- Error Message contains ResizeObserver loop
- Error Message contains Script error
Non-production environment errors
If you only care about production alerts, filter out other environments:
- Environment not contains production
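Taken together, the three filter groups above amount to predicates over Error attributes. A hedged sketch of the equivalent logic; the field names `stack`, `message`, and `env` are assumptions about the Error payload, not a documented schema:

```python
# Hypothetical Error payload fields: stack, message, env.
NOISE_STACK_MARKERS = ("chrome-extension://", "moz-extension://", "cdn.third-party.com")
HARMLESS_MESSAGES = ("ResizeObserver loop", "Script error")

def is_noise(error: dict) -> bool:
    """Return True if the Error matches any Step 1 exclusion rule."""
    # Third-party script errors: extension or external-CDN frames in the stack
    if any(m in error.get("stack", "") for m in NOISE_STACK_MARKERS):
        return True
    # Known harmless errors by message
    if any(m in error.get("message", "") for m in HARMLESS_MESSAGES):
        return True
    # Environment "not contains production" keeps only production alerts
    if "production" not in error.get("env", ""):
        return True
    return False
```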
Step 2: Configure Alert Grading
After filtering noise, use alert grading rules to differentiate the importance of different errors.

Grading Strategy Recommendations
| Priority | Use Cases | Expected Response Time |
|---|---|---|
| P0 (Critical) | Core business disruption, VIP users affected, production crashes | Immediate response |
| P1 (Warning) | Important feature errors, critical page errors | Same-day resolution |
| P2 (Info) | Non-critical feature errors, low-impact issues | Scheduled resolution |
Recommended Rule Configuration
Here are recommended rules ranked by business priority from high to low:

Production crashes → P0
A crash means the application is completely unavailable, requiring the highest priority response.
- Condition: Environment contains production, AND Is Crash contains true
- Alert level: P0
VIP user errors → P0
VIP user experience is directly tied to business value.
- Condition: User ID contains vip (or match via custom field context.user.level contains vip)
- Alert level: P0
Critical page errors → P1
Errors on payment, login, and checkout pages need priority handling.
- Condition: Page URL contains /payment
- Alert level: P1
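Because the rules are ranked by business priority, evaluation should stop at the first match. A minimal ordered-rule sketch; field names such as `is_crash`, `user_id`, and `page_url` are assumptions for illustration:

```python
# Ordered grading rules, highest business priority first; first match wins.
GRADING_RULES = [
    (lambda e: "production" in e.get("env", "") and e.get("is_crash"), "P0"),
    (lambda e: "vip" in e.get("user_id", "")
               or "vip" in e.get("context", {}).get("user.level", ""), "P0"),
    (lambda e: "/payment" in e.get("page_url", ""), "P1"),
]

def grade(error: dict) -> str:
    for match, priority in GRADING_RULES:
        if match(error):
            return priority
    return "P2"  # default: informational
```

Keeping the rules in an ordered list mirrors the high-to-low ranking above and makes it easy to insert new rules at the right priority.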
Step 3: Fine-Tune in Flashduty
Alert grading on the RUM side is based on individual Error attributes. For further processing based on the overall impact of an Issue, configure it in the Flashduty Alert Pipeline.

| Processing Scenario | Configuration |
|---|---|
| Suppress repeated alerts | Same alert_key alerts only once within 1 hour |
| Custom alert title | Template example: [RUM] [{{Labels.env}}] {{Labels.error_type}} - {{Labels.view_url}} |
| Downgrade low-impact errors | When labels.affected_users < 5, update severity to Info |
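The suppression and downgrade rows above amount to simple stateful logic over incoming alerts. A sketch under stated assumptions: the 1-hour window, the `labels.affected_users` threshold, and the alert payload shape come from the table, but the helper functions are hypothetical, not Flashduty internals:

```python
import time

_last_sent: dict[str, float] = {}  # alert_key -> last notification timestamp

def should_notify(alert_key: str, window_s: float = 3600.0, now: float = None) -> bool:
    """Suppress repeats: the same alert_key notifies at most once per window."""
    now = time.time() if now is None else now
    last = _last_sent.get(alert_key)
    if last is not None and now - last < window_s:
        return False
    _last_sent[alert_key] = now
    return True

def adjust_severity(alert: dict) -> dict:
    """Downgrade low-impact errors: fewer than 5 affected users -> Info."""
    if alert.get("labels", {}).get("affected_users", 0) < 5:
        alert["severity"] = "Info"
    return alert
```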
Typical Scenario Configurations
The example below covers an e-commerce application; SaaS applications and content websites follow the same four-layer pattern with their own critical pages.

E-commerce apps focus on the transaction flow, so alert configuration should center on payment and ordering.
| Layer | Configuration |
|---|---|
| Data Filtering | Exclude: third-party ad script errors, ResizeObserver loop |
| Alert Grading | P0: payment page errors, crashes; P1: product detail/cart errors |
| Alert Processing | Suppression window: 30 min; title template includes page path |
| Alert Dispatch | P0 → SMS + phone call, P1 → IM notification |
FAQ
What's the difference between data filtering and Flashduty alert dropping?
| Comparison | RUM Data Filtering | Flashduty Alert Drop |
|---|---|---|
| Timing | Before Error aggregation into Issue | After Issue is delivered as alert |
| Data Retention | Error data retained, viewable in Explorer | Issue data retained |
| Impact Scope | Filtered Errors don’t participate in Issue aggregation or alerting | Issue exists, just no alert notification |
| Use Case | Long-term exclusion of noise data | Flexible alert control |
How do alert grading rules work with Flashduty Pipeline?
They complement each other, serving different dimensions:
- RUM Alert Grading: Based on individual Error attributes (user, page, environment, etc.), suitable for quick determination at the source
- Flashduty Pipeline: Based on overall Issue information (affected user count, error count, etc.), suitable for more comprehensive assessment
Will the default alert behavior change?
No. If you don’t configure any filter rules or alert grading, all Errors will still be aggregated into Issues and delivered to Flashduty with default severity. Existing behavior remains completely unchanged.
Further Reading
Issue Alerts
Complete configuration guide for alert triggers, custom grading, and data filtering
Alert Pipeline
Clean, transform, and filter alerts at the integration layer
Noise Reduction
Aggregate and suppress alerts at the channel level
Escalation Rules
Configure escalation rules to route alerts to the right responders