Flashduty RUM provides a complete pipeline from data filtering and alert grading to Flashduty alert processing. Properly configured, this pipeline effectively reduces alert noise and helps your team focus on what truly matters. This guide walks through the core principles and typical scenario configurations.

Alert Processing Pipeline

RUM alerts pass through four layers from Error generation to human notification:

| Layer | Configuration Location | Core Function |
|---|---|---|
| ① Data Filtering | RUM App → Alert Settings | Exclude unwanted Errors at the source, reducing unnecessary Issues |
| ② Alert Grading | RUM App → Alert Settings | Set Issue priority based on Error attributes |
| ③ Alert Processing | Flashduty Integration → Alert Pipeline | Adjust priority, drop/suppress based on Issue dimensions |
| ④ Alert Dispatch | Flashduty Channel | Route to teams, notify responders |

We recommend configuring from top to bottom: first filter noise, then grade alerts, and finally fine-tune on the Flashduty side.

Step 1: Filter Noise Data

Before configuring alert grading, start by cleaning up the data source. Common noise sources include:
Errors from browser extensions or third-party ad/analytics scripts are unrelated to your business and should be excluded:
  • Error Stack contains chrome-extension://
  • Error Stack contains moz-extension://
  • Error Stack contains cdn.third-party.com
Some errors occur frequently but don’t affect user experience:
  • Error Message contains ResizeObserver loop
  • Error Message contains Script error
If you only care about production alerts, filter out other environments:
  • Environment not contains production
Filtered Errors will not participate in Issue aggregation or alerting, but the data is still retained. You can view these filtered errors in the Explorer using filter conditions.
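The filter conditions above amount to substring checks on Error attributes. The following is an illustrative sketch only; the `RumError` shape and field names are assumptions for demonstration, not the actual Flashduty RUM schema:

```typescript
// Sketch: model the Step 1 filter rules as substring checks.
// Field names (stack, message, environment) are illustrative assumptions.
interface RumError {
  stack: string;       // Error Stack
  message: string;     // Error Message
  environment: string; // Environment
}

// A rule returning true means the Error is filtered out: still stored
// and viewable in the Explorer, but excluded from Issue aggregation
// and alerting.
const filterRules: Array<(e: RumError) => boolean> = [
  (e) => e.stack.includes("chrome-extension://"),
  (e) => e.stack.includes("moz-extension://"),
  (e) => e.stack.includes("cdn.third-party.com"),
  (e) => e.message.includes("ResizeObserver loop"),
  (e) => e.message.includes("Script error"),
  (e) => !e.environment.includes("production"), // "not contains production"
];

const isFiltered = (e: RumError): boolean =>
  filterRules.some((rule) => rule(e));
```

An Error is dropped as soon as any one rule matches, so ordering the cheapest or most frequent matches first keeps evaluation fast.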

Step 2: Configure Alert Grading

After filtering noise, use alert grading rules to differentiate the importance of different errors.

Grading Strategy Recommendations

| Priority | Use Cases | Expected Response Time |
|---|---|---|
| P0 (Critical) | Core business disruption, VIP users affected, production crashes | Immediate response |
| P1 (Warning) | Important feature errors, critical page errors | Same-day resolution |
| P2 (Info) | Non-critical feature errors, low-impact issues | Scheduled resolution |

Here are recommended rules ranked by business priority from high to low:
1. Production crashes → P0

A crash means the application is completely unavailable, requiring the highest priority response.
  • Condition: Environment contains production, AND Is Crash contains true
  • Alert level: P0
2. VIP user errors → P0

VIP user experience is directly tied to business value.
  • Condition: User ID contains vip (or match via custom field context.user.level contains vip)
  • Alert level: P0
3. Critical page errors → P1

Errors on payment, login, and checkout pages need priority handling.
  • Condition: Page URL contains /payment
  • Alert level: P1
You can create separate rules for each critical page, or use multiple match values in a single rule.
4. Other errors → P2 (default)

Errors not matching any rule are automatically classified as P2 and handled through standard processes. No additional configuration needed.
We recommend keeping the number of rules to 3-6, covering the most critical scenarios. Too many rules increase maintenance cost and can lead to priority confusion.
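Grading evaluates rules top to bottom and stops at the first match, with P2 as the fallback. The sketch below illustrates this first-match-wins behavior; the field names (`environment`, `isCrash`, `userId`, `pageUrl`) are illustrative assumptions, not the actual Flashduty RUM schema:

```typescript
// Sketch: first-match-wins alert grading mirroring the rules above.
// Field names are illustrative assumptions.
type Priority = "P0" | "P1" | "P2";

interface ErrorEvent {
  environment: string;
  isCrash: boolean;
  userId: string;
  pageUrl: string;
}

interface GradingRule {
  match: (e: ErrorEvent) => boolean;
  priority: Priority;
}

// Rules are ordered by business priority, high to low.
const gradingRules: GradingRule[] = [
  { match: (e) => e.environment.includes("production") && e.isCrash, priority: "P0" },
  { match: (e) => e.userId.includes("vip"), priority: "P0" },
  { match: (e) => e.pageUrl.includes("/payment"), priority: "P1" },
];

function grade(e: ErrorEvent): Priority {
  for (const rule of gradingRules) {
    if (rule.match(e)) return rule.priority;
  }
  return "P2"; // no rule matched: default priority
}
```

Because the first match decides the priority, place your most severe conditions at the top of the rule list.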

Step 3: Fine-Tune in Flashduty

Alert grading on the RUM side is based on individual Error attributes. For further processing based on the overall impact of an Issue, configure it in the Flashduty Alert Pipeline.

| Processing Scenario | Configuration |
|---|---|
| Suppress repeated alerts | Same alert_key alerts only once within 1 hour |
| Custom alert title | Template example: `[RUM] [{{Labels.env}}] {{Labels.error_type}} - {{Labels.view_url}}` |
| Downgrade low-impact errors | When `labels.affected_users < 5`, update severity to Info |
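The suppression behavior above, where the same alert_key only notifies once per window, can be sketched as follows. This is an illustrative model of the concept, not Flashduty's internal implementation:

```typescript
// Sketch: a per-key suppression window. An alert_key that already
// triggered a notification within the window is suppressed.
const WINDOW_MS = 60 * 60 * 1000; // 1-hour suppression window

const lastNotified = new Map<string, number>();

function shouldNotify(alertKey: string, now: number): boolean {
  const last = lastNotified.get(alertKey);
  if (last !== undefined && now - last < WINDOW_MS) {
    return false; // suppressed: already notified within the window
  }
  lastNotified.set(alertKey, now); // record this notification time
  return true;
}
```

A shorter window (e.g. the 30 minutes used in the e-commerce scenario below) trades more notifications for faster awareness of recurring problems.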

Typical Scenario Configurations

E-commerce apps focus on the transaction flow, so alert configuration should center on payment and ordering.

| Layer | Configuration |
|---|---|
| Data Filtering | Exclude: third-party ad script errors, ResizeObserver loop |
| Alert Grading | P0: payment page errors, crashes; P1: product detail/cart errors |
| Alert Processing | Suppression window: 30 min; title template includes page path |
| Alert Dispatch | P0 → SMS + phone call, P1 → IM notification |

FAQ

What is the difference between RUM Data Filtering and Flashduty Alert Drop?

| Comparison | RUM Data Filtering | Flashduty Alert Drop |
|---|---|---|
| Timing | Before Errors are aggregated into an Issue | After the Issue is delivered as an alert |
| Data Retention | Error data retained, viewable in Explorer | Issue data retained |
| Impact Scope | Filtered Errors don't participate in Issue aggregation or alerting | Issue exists, just no alert notification |
| Use Case | Long-term exclusion of noise data | Flexible alert control |
How do RUM Alert Grading and the Flashduty Pipeline relate?

They complement each other, operating on different dimensions:
  • RUM Alert Grading: based on individual Error attributes (user, page, environment, etc.), suited to quick determination at the source
  • Flashduty Pipeline: based on overall Issue information (affected user count, error count, etc.), suited to a more comprehensive assessment
We recommend setting the base priority on the RUM side and making supplementary adjustments on the Flashduty side.
Do I have to configure filter rules or alert grading?

No. If you don’t configure any filter rules or alert grading, all Errors are still aggregated into Issues and delivered to Flashduty with the default severity. Existing behavior remains completely unchanged.
