RUM automatically aggregates error events reported by the SDK into Issues, helping you prioritize the most impactful problems, reduce service downtime, and minimize user frustration. You can review Issues during routine checks in the console, or configure alert notifications so you are notified the moment a problem occurs. Flashduty RUM’s alerting capabilities include:
  • Alert Notifications: Deliver Issues as alert events to Flashduty channels, notifying responders through escalation rules
  • Alert Grading: Customize alert priority based on error attributes such as user, page, or environment
  • Data Filtering: Filter out noise data before Errors are aggregated into Issues, reducing unnecessary alerts

Enable Alerts

1. Enter Application Details: go to the “Application Details” > “Alert Settings” page.
2. Enable Alerts: turn on the alert switch and select one or more channels to deliver alerts to.
3. Configure Notification Rules: alert notifications follow the escalation rules of the selected channels. Set up responders so alerts are assigned to your team when they occur.
You must have the On-call service enabled to turn on Issue alerts. The On-call service is billed by active users, but members without a License can still receive alert notifications; even the free plan includes basic notification capabilities.

Alert Trigger Conditions

| Trigger Condition | Description |
| --- | --- |
| New Issue | An error event creates a new Issue, triggering an alert event |
| Issue Update | Error events continue to merge into an unclosed Issue (For Review, Reviewed); if more than 24 hours have passed since the last alert event, a new alert event is triggered |
| Issue Reopened | A new error merges into a closed Issue, reopening it (regression) |
| Issue Priority Upgrade | When a higher-priority error event enters a lower-priority Issue, the Issue priority is automatically upgraded and a new alert event is triggered. For example, a P2 Issue that receives an error matching a P0 rule is upgraded to P0 |
  • An Issue triggers an alert event, which is delivered to the channel
  • Whether an alert notification is triggered depends on your integration configuration, noise reduction configuration, and escalation rule configuration under the channel
  • When an Issue is closed, the system triggers a close-type alert event, and its associated incident may automatically recover
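The trigger conditions above can be sketched as a single decision function. This is an illustrative model only; the Issue fields (`status`, `priority`, `last_alert_at`) and trigger names are assumptions, not the actual Flashduty data model.

```python
from datetime import datetime, timedelta

# Minimum interval before an unclosed Issue re-alerts (per the table: 24 hours).
ALERT_INTERVAL = timedelta(hours=24)

def alert_event_for(issue, error_priority, now):
    """Return the trigger type for an incoming error, or None if no alert fires.

    Priorities are integers where a lower number is higher priority (P0=0, P2=2).
    `issue` is None when the error does not match any existing Issue.
    """
    if issue is None:
        return "new_issue"                          # New Issue
    if issue["status"] == "closed":
        return "issue_reopened"                     # Issue Reopened (regression)
    if error_priority < issue["priority"]:
        return "priority_upgrade"                   # Issue Priority Upgrade
    if now - issue["last_alert_at"] > ALERT_INTERVAL:
        return "issue_update"                       # Issue Update (>24h since last alert)
    return None                                     # error merges silently
```

For example, an error merging into an unclosed Issue whose last alert was 25 hours ago would yield `"issue_update"`, while the same error arriving one hour after the last alert yields `None`.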

Alert Severity

Default Grading Rules

If no custom alert grading rules are configured, the severity of alert events triggered by Issues is automatically determined by the system:
| Condition | Severity |
| --- | --- |
| Issue has existed for more than 7 days | Info |
| Crash issue | Critical |
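The default rules can be sketched as follows. Note that the table does not specify which row wins when both apply, nor the severity for Issues matching neither row; the precedence and the `None` fallback below are assumptions.

```python
from datetime import datetime, timedelta

def default_severity(issue_created_at, is_crash, now):
    """Sketch of the default severity table; precedence is an assumption."""
    if is_crash:                                        # crash checked first (assumed)
        return "Critical"
    if now - issue_created_at > timedelta(days=7):      # long-lived Issue
        return "Info"
    return None  # severity for other cases is not specified in the table
```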

Custom Alert Grading

You can configure custom alert grading rules in “Alert Settings” to set alert priority (P0 / P1 / P2) based on error attributes, enabling more granular alert control. Custom grading rules are evaluated when an Error is reported, producing a “preset priority”. When the Error is aggregated into an Issue:
  • New Issue: The Issue priority is determined by the preset priority of the first Error
  • Matching existing Issue: If the Error’s preset priority is higher, the Issue priority is automatically upgraded (never downgraded)
  • No rule matched: Default priority P2 is used
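The three merge cases above reduce to a small “upgrade only” function. The sketch below assumes priorities are the strings P0/P1/P2 and that `None` means “no rule matched”; it is illustrative, not the actual implementation.

```python
PRIORITIES = ["P0", "P1", "P2"]  # ordered highest to lowest
DEFAULT_PRIORITY = "P2"          # used when no grading rule matches

def merge_priority(issue_priority, preset_priority):
    """Return the Issue priority after an Error with preset_priority merges in.

    issue_priority is None for a new Issue; preset_priority is None when
    no grading rule matched the Error.
    """
    if issue_priority is None:                    # new Issue: first Error decides
        return preset_priority or DEFAULT_PRIORITY
    if preset_priority is None:                   # unmatched Error never downgrades
        return issue_priority
    # upgrade only: keep whichever ranks higher in PRIORITIES
    return min(issue_priority, preset_priority, key=PRIORITIES.index)
```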
Each grading rule consists of the following elements:
| Element | Description |
| --- | --- |
| Rule Name | A name for easy identification and management |
| Match Conditions | Filter conditions based on Error attributes; multiple conditions within a rule use AND logic |
| Alert Level | Priority assigned on match: P0 (Critical) / P1 (Warning) / P2 (Info) |
Rules are evaluated from top to bottom in priority order; the first matching rule takes effect immediately, and subsequent rules are not checked. You can drag and drop rules to adjust their priority order.
Each application supports a maximum of 6 alert grading rules. Each rule supports up to 2 OR condition groups, each group up to 3 AND conditions, and each condition up to 8 match values.
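The evaluation order described above (top to bottom, first match wins, OR between condition groups, AND within a group) can be sketched like this; the rule structure is hypothetical, not the actual configuration schema.

```python
def evaluate_rules(rules, error):
    """Return the alert level of the first matching rule, or the P2 default.

    Each rule has up to 2 OR groups; each group is a list of up to 3 AND
    conditions, modeled here as predicates taking the error attributes.
    """
    for rule in rules:                                  # rules are in priority order
        for group in rule["or_groups"]:                 # OR between groups
            if all(cond(error) for cond in group):      # AND within a group
                return rule["level"]                    # first match wins; stop here
    return "P2"                                         # default when nothing matches
```

For example, with a P0 rule for VIP users ahead of a P1 rule for the payment page, a VIP user’s payment error gets P0 because the first matching rule ends evaluation.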

Supported Match Fields

| Field | Description | Example |
| --- | --- | --- |
| User ID (error.usr_id) | Identifier of the user who reported the error | vip_001 |
| User Email (error.usr_email) | User email address | *@vip.com |
| Page URL (error.view_url) | Full page URL where the error occurred | Contains /payment |
| Error Type (error.error_type) | Error type classification | TypeError, SyntaxError |
| Error Message (error.error_message) | Error description text | Contains Cannot read property |
| Stack (error.error_stack) | Error stack trace | Contains at handleClick |
| Environment (error.env) | Environment where the error occurred | production, staging |
| Service (error.service) | Service the error belongs to | payment |
| Version (error.version) | Application version | 1.2.0 |
| Browser (error.browser_name) | Browser name | Chrome, Safari |
| Browser Version (error.browser_version) | Browser version number | 120.0 |
| Is Crash (error.is_crash) | Whether it’s a crash error | true |
The supported match operators are “contains” and “not contains”.
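The two operators behave as plain substring checks against a field value. In the sketch below, the handling of multiple match values ("contains" matches if any value is present, "not contains" only if none are) is an assumption about multi-value conditions, not documented behavior.

```python
def contains(field_value, values):
    """True if the field contains any of the configured match values."""
    return any(v in field_value for v in values)

def not_contains(field_value, values):
    """True only if the field contains none of the configured match values."""
    return all(v not in field_value for v in values)
```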

Configuration Examples

Set the highest priority for VIP user errors to ensure immediate response:
  • Match condition: User ID contains vip
  • Alert level: P0 (Critical)
The payment page is a critical business flow; related errors need priority handling:
  • Match condition: Page URL contains /payment
  • Alert level: P1 (Warning)
Production crashes require immediate response:
  • Match condition: Environment contains production, AND Is Crash contains true
  • Alert level: P0 (Critical)
  • Issue priority can only be upgraded, never downgraded, ensuring important problems are not deprioritized by subsequent lower-priority errors
  • To adjust priority based on Issue impact (e.g., affected user count or error count), configure it in the Flashduty Alert Pipeline
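The three example rules above could be written as a configuration structure like the following. The schema is hypothetical (field names mirror the match fields table, not an official format). Note that because rules are evaluated top to bottom with first match winning, you may want the production-crash rule ordered above the payment-page rule so a crash on the payment page still gets P0.

```python
# Hypothetical representation of the three example grading rules.
grading_rules = [
    {"name": "VIP user errors",
     "conditions": [("error.usr_id", "contains", ["vip"])],
     "level": "P0"},
    {"name": "Production crashes",   # AND: both conditions must match
     "conditions": [("error.env", "contains", ["production"]),
                    ("error.is_crash", "contains", ["true"])],
     "level": "P0"},
    {"name": "Payment page errors",
     "conditions": [("error.view_url", "contains", ["/payment"])],
     "level": "P1"},
]
```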

Data Filtering

Data filtering allows you to filter out unwanted noise data before Errors are aggregated into Issues. Filtered Errors do not participate in Issue aggregation and do not trigger alerts. You can add filter rules in “Alert Settings”; each rule can have multiple match conditions, combined with AND logic. The supported match fields are the same as for Custom Alert Grading.
| Scenario | Example Rule |
| --- | --- |
| Exclude third-party script errors | Error Stack contains cdn.third-party.com |
| Exclude known harmless errors | Error Message contains ResizeObserver loop |
| Exclude debug page errors | Page URL contains /debug |
  • Filtered Errors will not participate in Issue aggregation or alerting, but the data is still retained and can be viewed in the Explorer using filter conditions
  • If you only want to temporarily suppress certain alerts while retaining Issue data, use the “Alert Drop” feature in the Flashduty Alert Pipeline
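Filtering happens before aggregation: an Error is dropped when any rule matches, and a rule matches when all of its AND conditions do. The sketch below encodes the three example rules from the table; the rule format is illustrative, not the actual schema.

```python
# Each rule is a list of AND conditions: (field, substring the field must contain).
FILTER_RULES = [
    [("error.error_stack", "cdn.third-party.com")],   # third-party script errors
    [("error.error_message", "ResizeObserver loop")], # known harmless errors
    [("error.view_url", "/debug")],                   # debug page errors
]

def is_filtered(error):
    """True if the error matches any filter rule and should skip aggregation."""
    return any(
        all(needle in error.get(field, "") for field, needle in rule)
        for rule in FILTER_RULES
    )
```

An error from `/debug/panel` would be filtered out before Issue aggregation, while an ordinary `/home` error passes through.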

Integration with Flashduty

RUM alerts work in deep collaboration with Flashduty, forming a complete alert processing chain:
| Layer | Configuration Location | Core Capability | Use Cases |
| --- | --- | --- | --- |
| Data Filtering | RUM Alert Settings | Filter noise Errors | Permanently ignore third-party script errors, debug page errors, etc. |
| Alert Grading | RUM Alert Settings | Set priority based on Error attributes | VIP user alerts, critical page alerts, etc. |
| Alert Processing | Flashduty Integration Config | Title customization, priority adjustment, drop/suppression | Adjust level based on affected user count, suppress repeated alerts, etc. |
| Alert Dispatch | Flashduty Channel | Routing, on-call scheduling, notification channels | Dispatch to different teams, configure notification methods, etc. |
You can further process RUM alerts in the Flashduty Alert Pipeline, such as adjusting alert levels based on affected user count, suppressing repeated alerts by time window, or customizing alert title formats.

Further Reading

  • RUM Alert Noise Reduction: typical scenario configurations to quickly reduce unnecessary alert noise
  • Alert Pipeline: clean, transform, and filter alerts at the integration layer
  • Noise Reduction: aggregate and suppress alerts at the channel level
  • Escalation Rules: configure escalation rules to route alerts to the right responders