New Relic + Slack

Connect New Relic to Slack: Automated Real-Time Alerts & Incident Notifications

Bring your observability data directly into Slack so engineering teams can detect, triage, and resolve incidents faster.

Why integrate New Relic and Slack?

New Relic and Slack are two tools engineering teams can't live without — one gives you deep visibility into application performance, infrastructure, and errors, the other is where your team actually talks. When they run in isolation, critical alerts get buried in dashboards nobody is watching, and incident response pays the price. Integrating New Relic with Slack through tray.ai puts the right signals in front of the right people when it matters.

Automate & integrate New Relic & Slack

Use case

Real-Time Alert Routing to Slack Channels

When New Relic fires an alert policy — whether for error rate spikes, Apdex degradation, or infrastructure CPU thresholds — tray.ai automatically posts a formatted message to the appropriate Slack channel. Teams can configure routing rules so frontend alerts go to #frontend-ops, database alerts go to #db-team, and so on. No more one-size-fits-all noise flooding a single channel.
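
To make the routing concrete, here is a minimal sketch of the kind of routing table such a workflow encodes. The channel names and the "team" tag are illustrative, not a fixed tray.ai schema; real workflows key off whatever tags your alert policies carry.

```python
# Minimal routing-table sketch. Channel names and the "team" tag are
# illustrative placeholders, not a fixed tray.ai schema.
ROUTES = {
    "frontend": "#frontend-ops",
    "database": "#db-team",
}
DEFAULT_CHANNEL = "#eng-alerts"

def route_alert(alert: dict) -> str:
    """Pick a Slack channel from the alert's team tag, with a catch-all fallback."""
    team = alert.get("tags", {}).get("team", "")
    return ROUTES.get(team, DEFAULT_CHANNEL)

print(route_alert({"tags": {"team": "database"}}))  # -> #db-team
```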

Use case

Automated Incident War-Room Channel Creation

When a critical P1 or P2 incident fires in New Relic, tray.ai can automatically create a dedicated Slack channel, invite the relevant on-call engineers and stakeholders, and post the opening incident summary — affected services, error rates, and a direct link to the New Relic diagnostic view. No more manual scramble to set up incident bridges during a high-stress outage.
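
A minimal sketch of that war-room step using Slack's Web API via the slack_sdk Python library. It assumes a bot token with channel-management and chat scopes; the channel naming scheme is illustrative.

```python
import os
from slack_sdk import WebClient

# Sketch of the war-room flow. Assumes a bot token with channels:manage and
# chat:write scopes; the "inc-<id>-<service>" naming scheme is illustrative.
client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])

def open_war_room(incident_id: str, service: str, oncall_ids: list[str], summary: str) -> str:
    # Slack channel names must be lowercase, under 80 chars, with no spaces.
    name = f"inc-{incident_id}-{service}".lower().replace(" ", "-")[:80]
    channel_id = client.conversations_create(name=name)["channel"]["id"]
    client.conversations_invite(channel=channel_id, users=oncall_ids)
    client.chat_postMessage(channel=channel_id, text=summary)
    return channel_id
```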

Use case

Deployment Tracking and Change Intelligence

Every time a deployment marker is recorded in New Relic, tray.ai posts a deployment notification to Slack with details including the deploying team, version, and linked changelog. If New Relic detects a performance regression or error spike shortly after, a follow-up Slack message automatically connects the deploy to the degradation — so bad releases don't stay hidden.

Use case

SLA and Uptime Breach Notifications

When New Relic detects a service has breached an SLA threshold — availability dropping below 99.9% or response times exceeding agreed limits — tray.ai fires an immediate Slack alert to both the engineering team and relevant business stakeholders. The message includes the affected SLA, current performance metrics, and a link to the live New Relic dashboard.
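
One way to sketch that check is an NRQL query run through New Relic's NerdGraph API. The account ID, app name, and 99.9% threshold below are illustrative.

```python
import os
import requests

# Sketch: check availability against an SLA with a NerdGraph NRQL query.
# The account ID, app name, and threshold are illustrative.
NERDGRAPH = "https://api.newrelic.com/graphql"
GQL = """
query($accountId: Int!, $nrql: Nrql!) {
  actor { account(id: $accountId) { nrql(query: $nrql) { results } } }
}"""
NRQL = ("SELECT percentage(count(*), WHERE error IS false) AS availability "
        "FROM Transaction WHERE appName = 'checkout' SINCE 1 hour ago")

def sla_breached(account_id: int, threshold: float = 99.9) -> bool:
    resp = requests.post(
        NERDGRAPH,
        headers={"API-Key": os.environ["NEW_RELIC_API_KEY"]},
        json={"query": GQL, "variables": {"accountId": account_id, "nrql": NRQL}},
    )
    results = resp.json()["data"]["actor"]["account"]["nrql"]["results"]
    return results[0]["availability"] < threshold  # True means the SLA is breached
```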

Use case

Daily and Weekly Performance Digest to Slack

Rather than requiring engineers to log into New Relic every morning, tray.ai can schedule automated digests that pull Apdex scores, error rates, throughput, and infrastructure health, then post a clean summary to a designated Slack channel. Weekly rollups can go to leadership channels to keep broader teams up to speed on system health.

Use case

Anomaly Detection Alerts with Contextual Enrichment

When New Relic's applied intelligence spots an anomaly — unusual traffic patterns, memory leaks, unexpected throughput changes — tray.ai enriches the Slack alert with historical baselines, recent deployments, and related entity health. Engineers get not just the raw signal but the context they need to prioritize and act.

Use case

On-Call Acknowledgement and Escalation via Slack

When a New Relic alert fires, tray.ai sends a Slack message with interactive buttons letting the on-call engineer acknowledge, escalate, or snooze the alert without leaving Slack. If no acknowledgement comes within a defined window, tray.ai automatically escalates to the next responder in the rotation and updates the Slack thread.
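
The interactive message itself is a Slack Block Kit payload along these lines. The action_id values are illustrative; a separate interaction handler would key off them to trigger the acknowledge, escalate, or snooze logic.

```python
def alert_blocks(alert_name: str, incident_url: str) -> list[dict]:
    """Block Kit message with acknowledge/escalate/snooze buttons.
    The action_id values are illustrative placeholders."""
    return [
        {"type": "section",
         "text": {"type": "mrkdwn",
                  "text": f":rotating_light: *{alert_name}*\n<{incident_url}|View in New Relic>"}},
        {"type": "actions", "elements": [
            {"type": "button", "text": {"type": "plain_text", "text": "Acknowledge"},
             "style": "primary", "action_id": "ack_alert"},
            {"type": "button", "text": {"type": "plain_text", "text": "Escalate"},
             "style": "danger", "action_id": "escalate_alert"},
            {"type": "button", "text": {"type": "plain_text", "text": "Snooze 30m"},
             "action_id": "snooze_alert"},
        ]},
    ]
```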

Get started with New Relic & Slack integration today

New Relic & Slack Challenges

What challenges come up when working with New Relic & Slack, and how does Tray.ai help?

Challenge

Alert Noise and Channel Overload

New Relic can generate a high volume of alerts across dozens of policies and conditions. Dumping every notification into a single Slack channel creates noise that causes engineers to tune out entirely, which defeats the whole point.

How Tray.ai Can Help:

tray.ai provides conditional routing logic that filters and directs alerts based on severity, service tags, team ownership, and alert policy names. You can send critical alerts to specific team channels, suppress low-priority notifications during off-hours, and consolidate related alerts into a single Slack thread, so Slack stays useful instead of becoming a fire hose.
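
For example, deduplication can be as simple as threading: the first message for an incident opens a Slack thread, and related alerts land as replies instead of new messages. A minimal sketch, with an in-memory dict standing in for whatever state store your workflow uses:

```python
import os
from slack_sdk import WebClient

# Dedup-by-threading sketch. The dict stands in for a persistent state store.
client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])
threads: dict[str, str] = {}  # incident_id -> thread_ts of the first message

def post_deduplicated(incident_id: str, channel: str, text: str) -> None:
    thread_ts = threads.get(incident_id)  # None -> top-level message
    resp = client.chat_postMessage(channel=channel, text=text, thread_ts=thread_ts)
    threads.setdefault(incident_id, resp["ts"])  # remember the thread root
```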

Challenge

Webhook Payload Complexity and Custom Formatting

New Relic webhook payloads are raw JSON with technical field names that are hard to read in Slack. Without transformation, alert messages land as unformatted data dumps that slow down triage rather than speeding it up.

How Tray.ai Can Help:

tray.ai's data mapping and transformation engine lets teams reshape New Relic webhook payloads into polished Slack Block Kit messages with color-coded severity levels, clear field labels, clickable links, and structured layouts — no custom code required. Teams can update message templates visually and ship changes in minutes.
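
As a rough sketch of that transformation step, here is raw-payload-to-Block-Kit reshaping in Python. The New Relic field names vary by notification template, so treat them as illustrative; severity color-coding uses Slack's attachment color bar.

```python
SEVERITY_COLORS = {"CRITICAL": "#d00000", "WARNING": "#f2c744", "INFO": "#36a64f"}

def to_slack_message(payload: dict) -> dict:
    """Reshape a raw New Relic webhook payload into a readable Slack message.
    Keys like "priority" and "entity_name" are illustrative; real payloads
    depend on your New Relic notification template."""
    severity = payload.get("priority", "INFO").upper()
    return {
        "attachments": [{
            "color": SEVERITY_COLORS.get(severity, "#cccccc"),
            "blocks": [
                {"type": "header",
                 "text": {"type": "plain_text", "text": payload.get("title", "New Relic alert")}},
                {"type": "section", "fields": [
                    {"type": "mrkdwn", "text": f"*Severity:*\n{severity}"},
                    {"type": "mrkdwn", "text": f"*Entity:*\n{payload.get('entity_name', 'unknown')}"},
                ]},
                {"type": "section",
                 "text": {"type": "mrkdwn", "text": f"<{payload.get('issue_url', '#')}|Open in New Relic>"}},
            ],
        }]
    }
```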

Challenge

Dynamic On-Call Routing Without Manual Maintenance

Routing alerts to the right engineer means knowing who's currently on call, and that changes constantly. Hardcoding Slack user IDs or channel names goes stale fast and sends alerts to the wrong people.

How Tray.ai Can Help:

tray.ai integrates with on-call scheduling tools and can look up the current on-call responder at alert time, then route the Slack notification accordingly. Alert routing stays accurate as schedules rotate, without anyone touching the integration configuration.
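
A sketch of the alert-time lookup is below. The scheduling-tool endpoint and response shape are hypothetical placeholders; substitute your provider's real on-call API (for example, PagerDuty's oncalls endpoint).

```python
import os
import requests

# Alert-time on-call resolution sketch. The URL and response fields below are
# hypothetical placeholders for whatever on-call scheduling API you use.
def current_oncall_slack_id(schedule_id: str) -> str:
    resp = requests.get(
        f"https://oncall.example.com/api/schedules/{schedule_id}/current",  # placeholder URL
        headers={"Authorization": f"Bearer {os.environ['ONCALL_API_TOKEN']}"},
    )
    return resp.json()["responder"]["slack_user_id"]  # hypothetical field names
```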

Challenge

Bi-Directional Acknowledgement and Status Sync

When an engineer acknowledges or resolves an incident in Slack, that status change doesn't automatically reflect back in New Relic. You end up with a disconnect between the communication record in Slack and the official incident state in your monitoring platform.

How Tray.ai Can Help:

tray.ai supports bi-directional workflows where interactive Slack button actions — acknowledge, escalate, or resolve — trigger the corresponding API calls back to New Relic to update incident status. Both systems stay in sync, MTTR measurement in New Relic stays accurate, and nobody has to manually update status in two places.
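
A sketch of the write-back half, assuming a NerdGraph mutation for acknowledging an issue. The mutation name and arguments below are an assumption, so verify them against your NerdGraph schema before relying on this.

```python
import os
import requests

NERDGRAPH = "https://api.newrelic.com/graphql"

def ack_in_new_relic(account_id: int, issue_id: str) -> None:
    """Acknowledge a New Relic issue after a Slack 'Acknowledge' click.
    NOTE: the mutation name and selection set are an assumption sketched from
    NerdGraph's AI issues API; check your NerdGraph schema for the real shape."""
    mutation = """
    mutation($accountId: Int!, $issueId: ID!) {
      aiIssuesAckIssue(accountId: $accountId, issueId: $issueId) { result }
    }"""
    requests.post(
        NERDGRAPH,
        headers={"API-Key": os.environ["NEW_RELIC_API_KEY"]},
        json={"query": mutation, "variables": {"accountId": account_id, "issueId": issue_id}},
    )
```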

Challenge

Multi-Account and Multi-Environment Alert Consolidation

Large engineering organizations often run multiple New Relic accounts for different environments — production, staging, regional deployments. Managing alert routing across all of them into a coherent Slack strategy is genuinely complicated without a centralized integration layer.

How Tray.ai Can Help:

tray.ai acts as a central hub that can receive webhooks and API data from multiple New Relic accounts simultaneously, apply unified routing and enrichment logic, and deliver properly attributed Slack notifications that clearly show which environment and account the alert came from. Platform teams get one place to manage the entire alert-to-Slack pipeline across their whole New Relic estate.
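
Attribution itself can be as simple as a mapping from New Relic account ID to environment label and target channel. The mapping below is illustrative configuration, not a tray.ai schema.

```python
# Multi-account attribution sketch. Account IDs, labels, and channels are
# illustrative configuration values.
ACCOUNTS = {
    1234567: {"env": "production (us-east)", "channel": "#prod-alerts"},
    2345678: {"env": "staging", "channel": "#staging-alerts"},
}

def attribute(payload: dict) -> tuple[str, str]:
    """Return (channel, message text) with the source environment prefixed."""
    meta = ACCOUNTS.get(payload.get("account_id"),
                        {"env": "unknown account", "channel": "#eng-alerts"})
    text = f"[{meta['env']}] {payload.get('title', 'New Relic alert')}"
    return meta["channel"], text
```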

Start using our pre-built New Relic & Slack templates today

Start from scratch or use one of our pre-built New Relic & Slack templates to quickly solve your most common use cases.

New Relic & Slack Templates

Find pre-built New Relic & Slack solutions for common use cases

Browse all templates

Template

New Relic Alert Policy → Slack Channel Notification

Automatically posts a formatted Slack message to a designated channel whenever a New Relic alert policy fires, including the alert name, severity, affected entities, current metric values, and a direct link to the New Relic incident view.

Steps:

  • Trigger: A New Relic alert condition fires and opens an incident, which tray.ai picks up via webhook or polling
  • Transform: tray.ai formats the alert payload into a Slack Block Kit message with severity color-coding and relevant metric details
  • Action: Post the formatted message to the appropriate Slack channel based on alert tag or policy name routing rules

Connectors Used: New Relic, Slack

Template

P1 Incident → Auto-Create Slack War-Room Channel

When a critical P1 incident opens in New Relic, this template automatically creates a dedicated Slack channel, sets the topic with incident details, and invites the relevant on-call engineers and service owners using predefined escalation mappings.

Steps:

  • Trigger: New Relic incident is created with a priority level of P1 or Critical
  • Action: tray.ai creates a new Slack channel named after the incident ID and affected service
  • Action: Invite pre-configured on-call responders and post an opening incident summary with New Relic dashboard link

Connectors Used: New Relic, Slack

Template

New Relic Deployment Marker → Slack Deploy Announcement

Posts a Slack notification every time a deployment is recorded in New Relic, then monitors for post-deployment performance changes and sends a follow-up message if an anomaly is detected within a configurable window after the deploy.

Steps:

  • Trigger: New Relic deployment marker event is received
  • Action: Post a deploy announcement to the #deployments Slack channel with version, team, and timestamp details
  • Monitor: Check New Relic error rate and Apdex for the next 15 minutes; if thresholds are breached, post a correlated follow-up alert to Slack (the error-rate check is sketched below)

Connectors Used: New Relic, Slack
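
A minimal sketch of that Monitor step, querying the post-deploy error rate through NerdGraph NRQL. The app name, 15-minute window, and 2% threshold are illustrative.

```python
import os
import requests

NERDGRAPH = "https://api.newrelic.com/graphql"

def post_deploy_error_rate(account_id: int, app: str, minutes: int = 15) -> float:
    """Error rate over the window after a deploy. App name and window are illustrative."""
    nrql = ("SELECT percentage(count(*), WHERE error IS true) AS errorRate "
            f"FROM Transaction WHERE appName = '{app}' SINCE {minutes} minutes ago")
    gql = ('query($id: Int!, $q: Nrql!) '
           '{ actor { account(id: $id) { nrql(query: $q) { results } } } }')
    resp = requests.post(NERDGRAPH,
                         headers={"API-Key": os.environ["NEW_RELIC_API_KEY"]},
                         json={"query": gql, "variables": {"id": account_id, "q": nrql}})
    return resp.json()["data"]["actor"]["account"]["nrql"]["results"][0]["errorRate"]

# e.g. if post_deploy_error_rate(1234567, "checkout") > 2.0: post a follow-up alert
```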

Template

Scheduled New Relic Performance Digest → Slack

Runs on a daily or weekly schedule to pull Apdex scores, error rates, and infrastructure health from New Relic, then posts a formatted digest to a designated Slack channel for team-wide visibility.

Steps:

  • Trigger: tray.ai scheduler fires at a configured daily or weekly interval
  • Action: Query the New Relic NRQL API for Apdex, error rate, throughput, and infrastructure metrics across configured applications
  • Action: Format and post a digest summary card to the designated Slack channel with trend indicators and links to New Relic dashboards (formatting is sketched below)

Connectors Used: New Relic, Slack
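
A sketch of that final formatting step. The metric values would come from the NRQL queries in the previous step; the fields and emoji are illustrative.

```python
# Digest-formatting sketch. Field names and emoji are illustrative.
def trend(delta: float) -> str:
    return ":arrow_up:" if delta >= 0 else ":arrow_down:"

def digest_text(m: dict) -> str:
    return "\n".join([
        ":bar_chart: *Daily performance digest*",
        f"Apdex: {m['apdex']:.2f} {trend(m['apdex_delta'])}",
        f"Error rate: {m['error_rate']:.2f}% {trend(m['error_delta'])}",
        f"Throughput: {m['throughput']:,} rpm",
    ])

print(digest_text({"apdex": 0.94, "apdex_delta": 0.01,
                   "error_rate": 0.42, "error_delta": -0.05,
                   "throughput": 128000}))
```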

Template

New Relic Anomaly Detection → Enriched Slack Alert

When New Relic Applied Intelligence detects an anomaly, this template enriches the raw signal with recent deployment data and historical baselines before posting a contextual alert to Slack, so engineers can triage faster with less context-switching.

Steps:

  • Trigger: New Relic Applied Intelligence anomaly detection event fires via webhook
  • Enrich: tray.ai queries New Relic for historical baseline metrics and recent deployment markers related to the affected entity (the merge is sketched below)
  • Action: Post an enriched Slack alert combining the anomaly details, baseline comparison, and recent change events with drill-down links

Connectors Used: New Relic, Slack
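
A sketch of the Enrich step's merge logic. All field names are illustrative; real payload shapes depend on your notification template.

```python
# Enrichment sketch: merge the anomaly event with a baseline figure and recent
# deploy markers before posting. Field names are illustrative placeholders.
def enriched_alert(anomaly: dict, baseline_rpm: float, recent_deploys: list[str]) -> str:
    observed = anomaly["observed_rpm"]
    pct = (observed - baseline_rpm) / baseline_rpm * 100
    deploys = ", ".join(recent_deploys) if recent_deploys else "none in the last hour"
    return (f":warning: *{anomaly['title']}*\n"
            f"Observed {observed:,.0f} rpm vs. baseline {baseline_rpm:,.0f} rpm ({pct:+.1f}%)\n"
            f"Recent deploys: {deploys}")

print(enriched_alert({"title": "Throughput anomaly on checkout", "observed_rpm": 41000},
                     28000, ["checkout v2.14.0 at 09:42 UTC"]))
```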

Template

New Relic SLA Breach → Slack Stakeholder Alert

Monitors New Relic availability and response time metrics against defined SLA thresholds and automatically sends targeted Slack notifications to both engineering channels and executive stakeholder channels when a breach is detected.

Steps:

  • Trigger: tray.ai polls New Relic with an NRQL query at a defined interval and detects a metric value crossing an SLA threshold
  • Action: Post an urgent Slack alert to the #sre-oncall channel with full breach details and current performance metrics
  • Action: Send a separate, business-friendly summary to the #leadership or #customer-success Slack channel with impact context

Connectors Used: New Relic, Slack