Datadog Guardrails
Datadog-specific guardrails. Verifies monitors route to a pager via @handle syntax and that declared SLOs have a matching burn-rate alert monitor. Complements the tool-agnostic observability policy.
Add datadog to your lunar-config.yml:

policies:
  - uses: github://earthly/lunar-lib/policies/datadog@v1.0.5
Included Guardrails
This policy includes 2 guardrails that enforce operational-readiness standards.
monitor-has-pager-target
Verifies that every Datadog monitor for the service routes to at
least one pager target. Datadog monitors embed notification
handles directly in the monitor message body using @handle syntax
(e.g. @pagerduty-core, @opsgenie-platform, @slack-oncall).
The policy scans each monitor's message field and fails if any
monitor has no matching @<pager_prefix>-* handle. The prefix
list is configurable via the pager_handle_prefixes input so
teams can declare which notification routes count as "pager"
(defaults cover PagerDuty, Opsgenie, VictorOps). Skips cleanly
when the Datadog collector hasn't written native data.
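A minimal sketch of that matching logic, assuming a plain regex scan over the message body (the policy's actual implementation may differ):

import re

DEFAULT_PREFIXES = ("pagerduty", "opsgenie", "victorops")

def has_pager_target(message: str, prefixes=DEFAULT_PREFIXES) -> bool:
    # Match any @<prefix>-<suffix> handle, e.g. @pagerduty-payments.
    pattern = re.compile(r"@(?:" + "|".join(map(re.escape, prefixes)) + r")-[\w.-]+")
    return bool(pattern.search(message))

has_pager_target("Paging @pagerduty-payments, investigate latency spike.")  # True
has_pager_target("Latency spike, check the dashboard.")                     # False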
slo-burn-rate-alert
Verifies that every declared SLO has a matching burn-rate alert
monitor. Datadog offers a dedicated "SLO alert" monitor type that
references an SLO by ID and fires when the error-budget burn rate
exceeds a threshold, preventing fire-and-forget SLOs that silently
drift. Cross-references .observability.native.datadog.api.slos
against .observability.native.datadog.api.monitors and fails if
any SLO lacks a monitor of type slo alert bound to its ID.
Skips cleanly when no SLOs are defined or when the Datadog
collector hasn't written native data.
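A sketch of that cross-reference, assuming the burn-rate monitor embeds the SLO ID in its query string as in the example data below (the policy may match more strictly):

def slo_has_burn_rate_alert(slo: dict, monitors: list[dict]) -> bool:
    # An SLO passes if some monitor of type "slo alert" references its ID,
    # e.g. a query like 'burn_rate("abc-slo-id").over("7d") > 2'.
    return any(
        m.get("type") == "slo alert" and slo["id"] in m.get("query", "")
        for m in monitors
    )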
How Guardrails Fit into Lunar
Lunar guardrails define your engineering standards as code. They evaluate data collected by integrations and produce pass/fail checks with actionable feedback.
Policies support gradual enforcement—from silent scoring to blocking PRs or deployments—letting you roll out standards at your own pace without disrupting existing workflows.
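For instance, a rollout might start with PR reports and tighten later. This sketch reuses the keys from the installation example below; available enforcement levels are product-specific, so check the Lunar docs:

policies:
  - uses: github://earthly/lunar-lib/policies/datadog@v1.0.5
    on: ["domain:your-domain"]
    enforcement: report-pr  # non-blocking to start; move to a stricter level once teams have adjusted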
Learn How Lunar Works →
Example Evaluated Data
This policy evaluates structured metadata from the Component JSON. Here's an example of the data it checks:
{
"observability": {
"source": {
"tool": "datadog",
"integration": "api"
},
"native": {
"datadog": {
"api": {
"service_tag": "payment-api",
"monitors": [
{
"id": 12345,
"name": "High p99 latency",
"type": "metric alert",
"message": "Paging @pagerduty-payments — investigate latency spike.",
"query": "avg(last_5m):..."
},
{
"id": 67890,
"name": "Error budget burn (payment-api availability)",
"type": "slo alert",
"message": "Budget burn — @pagerduty-payments",
"query": "burn_rate(\"abc-slo-id\").over(\"7d\") > 2"
}
],
"slos": [
{
"id": "abc-slo-id",
"name": "payment-api availability",
"type": "metric",
"target": 99.9
}
]
}
}
}
}
}
Required Integrations
This policy evaluates data gathered by the datadog integration.
Make sure to enable it in your lunar-config.yml.
Configuration
Configure this policy in your lunar-config.yml.
Inputs
| Input | Required | Default | Description |
|---|---|---|---|
| `pager_handle_prefixes` | Optional | `pagerduty,opsgenie,victorops` | Comma-separated list of Datadog notification handle prefixes that count as a pager target. Any monitor message containing at least one `@<prefix>-*` handle from this list passes the `monitor-has-pager-target` check. Defaults cover PagerDuty, Opsgenie, and VictorOps. |
Documentation
View on GitHub

Datadog Guardrails
Datadog-specific monitor and SLO policies with no cross-tool equivalent.
Overview
This plugin enforces Datadog-shaped practices that don't generalize to other observability tools. monitor-has-pager-target checks that each monitor's message body routes to a pager via Datadog's @handle notification syntax. slo-burn-rate-alert checks that each declared SLO has a matching burn-rate alert monitor, preventing fire-and-forget SLOs. Pair this with the tool-agnostic observability policy — that one covers presence (dashboard/alerts/SLO exist), this one covers Datadog-native quality checks.
Policies
This plugin provides the following policies (use include to select a subset):
| Policy | Description |
|---|---|
| monitor-has-pager-target | Verifies every Datadog monitor routes to at least one pager handle |
| slo-burn-rate-alert | Verifies every declared SLO has a matching burn-rate alert monitor |
Required Data
This policy reads from the following Component JSON paths:
| Path | Type | Provided By |
|---|---|---|
| .observability.native.datadog.api.monitors | array | datadog collector (service sub-collector) |
| .observability.native.datadog.api.slos | array | datadog collector (service sub-collector) |
Note: The datadog collector's service sub-collector must be enabled to populate this data. Without it, both checks skip cleanly.
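A hypothetical sketch of what enabling it could look like; the collector's source path and options here are assumptions, so follow the datadog collector's own documentation:

collectors:
  - uses: github://earthly/lunar-lib/collectors/datadog  # hypothetical source path
    # ensure the service sub-collector is enabled so monitors and slos are populated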
Installation
Add to your lunar-config.yml:
policies:
- uses: github://earthly/lunar-lib/policies/datadog@v1.0.5
on: ["domain:your-domain"]
enforcement: report-pr
# include: [monitor-has-pager-target] # Run a subset
with:
pager_handle_prefixes: "pagerduty,opsgenie" # Override default list
Configuring pager prefixes
Datadog monitor messages use @<handle> syntax to route notifications. The pager_handle_prefixes input is a comma-separated list of prefixes that count as a pager — any monitor message containing at least one @<prefix>-* handle from this list passes the check. Defaults to pagerduty,opsgenie,victorops. Teams that page through other routes (e.g. a custom webhook) can extend the list. @slack-* and @email-* are intentionally excluded from the defaults since they are not paging channels.
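For example, to keep the defaults and also count a custom route (webhook-pager here is a hypothetical prefix that would match handles like @webhook-pager-core):

with:
  pager_handle_prefixes: "pagerduty,opsgenie,victorops,webhook-pager"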
Examples
Passing Example
Monitor references a pager handle, SLO has a matching burn-rate alert:
{
"observability": {
"native": {
"datadog": {
"api": {
"monitors": [
{
"id": 12345,
"name": "High p99 latency",
"type": "metric alert",
"message": "Paging @pagerduty-payments — investigate latency spike.",
"query": "avg(last_5m):..."
},
{
"id": 67890,
"name": "Error budget burn (payment-api availability)",
"type": "slo alert",
"message": "Budget burn — @pagerduty-payments",
"query": "burn_rate(\"abc-slo-id\").over(\"7d\") > 2"
}
],
"slos": [
{ "id": "abc-slo-id", "name": "payment-api availability", "type": "metric", "target": 99.9 }
]
}
}
}
}
}
Failing Example
Monitor has no pager handle, SLO has no burn-rate alert:
{
"observability": {
"native": {
"datadog": {
"api": {
"monitors": [
{
"id": 12345,
"name": "High p99 latency",
"type": "metric alert",
"message": "Latency spike — check the dashboard.",
"query": "avg(last_5m):..."
}
],
"slos": [
{ "id": "abc-slo-id", "name": "payment-api availability", "type": "metric", "target": 99.9 }
]
}
}
}
}
}
Failure messages:
- monitor-has-pager-target: Monitor 12345 ("High p99 latency") has no pager handle in its message (looked for @pagerduty-*, @opsgenie-*, @victorops-*)
- slo-burn-rate-alert: SLO abc-slo-id ("payment-api availability") has no matching burn-rate alert monitor
Remediation
When this policy fails, you can resolve it by:
- monitor-has-pager-target: Edit the Datadog monitor and include a pager handle in the notification message, e.g. @pagerduty-<team>. Configure the handle in Datadog under Integrations → PagerDuty first. If the monitor shouldn't page (e.g. an informational alert), remove it or reclassify it under a different notification policy excluded from this check.
- slo-burn-rate-alert: Create a new monitor in Datadog of type SLO alert referencing the SLO ID. Use a burn-rate condition (e.g. burn_rate("<slo-id>").over("7d") > 2) and route notifications to the service's pager handle. Datadog's SLO detail page has a "Create alert" shortcut that scaffolds this.
Open Source
This policy is open source and available on GitHub. Contribute improvements, report issues, or fork it for your own use.