
Datadog Collector

Collector · Experimental · Service Catalog

Queries the Datadog API for dashboards, monitors, and SLOs linked to each component. Data is normalized to .observability for tool-agnostic policies; raw responses live at .observability.native.datadog.

Add datadog to your lunar-config.yml:
uses: github://earthly/lunar-lib/collectors/datadog@v1.0.5

What This Integration Collects

This integration includes 2 collectors that gather metadata from your systems.

Collector (code hook)

service

Queries the Datadog REST API for the component's monitors, dashboard, and SLOs. Discovers the Datadog service tag from the component's datadog/service-name meta annotation (typically set by a company-specific cataloger via lunar catalog component --meta datadog/service-name <name>), or falls back to the explicit service_name input for static cases. Optionally resolves a dashboard UUID from the datadog/dashboard-id meta or the dashboard_id input — dashboards in Datadog are not universally tagged with service:, so they must be mapped explicitly. Monitors (via /api/v1/monitor) and SLOs (via /api/v1/slo) are filtered by the service:<name> tag; dashboard is fetched by UUID when set. Writes normalized data to .observability.dashboard, .observability.alerts (monitors), .observability.slo, and .observability.source, with raw API responses under .observability.native.datadog.api.
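The service-tag filtering can be pictured with a small client-side sketch. This is a hedged illustration, not the collector's actual code: whether filtering happens server-side via API query parameters or client-side is an implementation detail, and the resource shapes below are abbreviated.

```python
def filter_by_service(resources, service_name):
    """Keep only Datadog resources carrying the service:<name> tag."""
    tag = f"service:{service_name}"
    return [r for r in resources if tag in r.get("tags", [])]

# Abbreviated monitor objects; real Datadog API responses carry many more fields.
monitors = [
    {"name": "High p99 latency", "tags": ["service:payment-api", "team:payments"]},
    {"name": "Low disk space", "tags": ["service:billing"]},
]
matching = filter_by_service(monitors, "payment-api")
```

Only the first monitor matches `service:payment-api`; the same filter applies to SLOs.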

Collector (code hook)

repo-files

Walks the component repository looking for Datadog-as-code JSON files by content fingerprint. Dashboards are identified by a top-level widgets array plus a layout_type field (the shape produced by Datadog's dashboard JSON export and the datadog_dashboard_json Terraform resource). Monitors are identified by a top-level object with type (e.g. metric alert, query alert, service check), query, and name fields. Users can narrow the scan by setting the find_command input (e.g. to restrict to a datadog/ directory); by default the full repo is walked. Writes each matching file's raw JSON (plus its repo path) into .observability.native.datadog.repo_dashboards and .observability.native.datadog.repo_monitors for users to build custom policies against. This sub-collector does not write to normalized .observability.dashboard / .observability.alerts paths — the API sub-collector owns those; repo files only surface raw content.


How Collectors Fit into Lunar

Lunar watches your code and CI/CD systems to collect SDLC data from config files, test results, IaC, deployment configurations, security scans, and more.

Collectors are the automatic data-gathering layer. They extract structured metadata from your repositories and pipelines, feeding it into Lunar's centralized database where guardrails evaluate it to enforce your engineering standards.

  1. Collectors gather data: triggered by code changes or CI pipelines, collectors extract metadata from config files, tool outputs, test results, and scans.
  2. Centralized as JSON: all data is merged into each component's unified metadata document.
  3. Guardrails enforce standards: real-time feedback in PRs and AI workflows.

Example Collected Data

This collector writes structured metadata to the Component JSON. Here's an example of the data it produces:

component.json
{
  "observability": {
    "source": {
      "tool": "datadog",
      "integration": "api"
    },
    "dashboard": {
      "id": "abc-123-def",
      "exists": true,
      "url": "https://app.datadoghq.com/dashboard/abc-123-def"
    },
    "alerts": {
      "configured": true,
      "count": 7
    },
    "slo": {
      "defined": true,
      "count": 2,
      "has_error_budget": true
    },
    "native": {
      "datadog": {
        "api": {
          "service_tag": "payment-api",
          "monitors": [ "...list of Datadog monitor objects..." ],
          "dashboard": { "...full Datadog dashboard API response..." },
          "slos": [ "...list of SLO objects..." ]
        },
        "repo_dashboards": [
          {
            "path": "datadog/payment-api-overview.json",
            "dashboard": { "title": "Payment API", "layout_type": "ordered", "widgets": [] }
          }
        ],
        "repo_monitors": [
          {
            "path": "datadog/monitors/latency.json",
            "monitor": { "name": "High p99 latency", "type": "metric alert", "query": "avg(last_5m):..." }
          }
        ]
      }
    }
  }
}
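Guardrails evaluate this document downstream. Lunar's actual policy API is not shown on this page, so the following is only a plain-Python illustration (the `check_observability` helper is hypothetical) of the kind of assertion a guardrail might make against this shape:

```python
# Abbreviated component document, mirroring the example above.
component = {
    "observability": {
        "dashboard": {"id": "abc-123-def", "exists": True},
        "alerts": {"configured": True, "count": 7},
        "slo": {"defined": True, "count": 2, "has_error_budget": True},
    }
}

def check_observability(doc):
    """Return a list of problems; empty means the component passes."""
    obs = doc.get("observability", {})
    problems = []
    if not obs.get("dashboard", {}).get("exists"):
        problems.append("dashboard missing or stale")
    if not obs.get("alerts", {}).get("configured"):
        problems.append("no monitors configured")
    return problems

problems = check_observability(component)
```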

Configuration

Configure this collector in your lunar-config.yml.

Inputs

Input Required Default Description
datadog_site Optional datadoghq.com Datadog site (e.g. `datadoghq.com`, `datadoghq.eu`, `us3.datadoghq.com`). Used to build API URLs and dashboard links. Defaults to the US1 site.
service_name Optional — Datadog service tag value (e.g. `payment-api`). Needed only if the component lacks a `datadog/service-name` meta annotation set by a cataloger; if neither is present, the `service` sub-collector exits without writing data. Monitors and SLOs are filtered by `service:<name>`.
dashboard_id Optional — Datadog dashboard UUID (e.g. `abc-123-def`). Needed only if the component lacks a `datadog/dashboard-id` meta annotation. Dashboards in Datadog are not universally tagged with `service:`, so they must be mapped explicitly.
find_command Optional find . -type f -name '*.json' Command used by the `repo-files` sub-collector to enumerate candidate JSON files (must output one path per line). Narrow this to restrict the scan to a subdirectory, e.g. `find ./datadog -type f -name '*.json'`.

Secrets

This collector requires the following secrets to be configured in Lunar:

Secret Description
DATADOG_API_KEY Datadog API key (from Organization Settings → API Keys). Used with `DD-API-KEY` header on all requests.
DATADOG_APP_KEY Datadog application key (from Organization Settings → Application Keys). Required for monitor, dashboard, and SLO reads — these endpoints require both `DD-API-KEY` and `DD-APPLICATION-KEY` headers. If created with custom scopes, the key must include `monitors_read`, `dashboards_read`, and `slos_read`; otherwise the corresponding endpoint returns 403.

Documentation

View on GitHub

Datadog Collector

Collect dashboard, monitor, and SLO data from Datadog via the API, and discover Datadog-as-code JSON files committed in the component repository.

Overview

This plugin provides two sub-collectors. The service sub-collector queries the Datadog REST API for monitors, dashboard, and SLOs tagged with the component's service. The repo-files sub-collector walks the repo for Datadog-as-code JSON files (dashboards and monitor definitions) and captures their raw contents. All data lands under the tool-agnostic .observability category, so the shared observability policy works regardless of whether the data came from Datadog, Grafana, or another provider.

Collected Data

This collector writes to the following Component JSON paths:

Path Type Description
.observability.source object Tool and integration metadata
.observability.dashboard.id string Tool-agnostic dashboard identifier (for Datadog, the dashboard UUID; set even when the dashboard no longer exists)
.observability.dashboard.exists boolean Whether the linked Datadog dashboard exists
.observability.dashboard.url string Direct URL to the dashboard
.observability.alerts.configured boolean Whether any Datadog monitors are configured for the service tag
.observability.alerts.count number Number of Datadog monitors scoped to the service tag
.observability.slo.defined boolean Whether any SLOs are configured for the service tag
.observability.slo.count number Number of SLOs scoped to the service tag
.observability.slo.has_error_budget boolean Whether at least one SLO defines an error budget (target below 100% or explicit warning threshold)
.observability.native.datadog.api object Raw Datadog API responses (monitors, dashboard, slos) plus the resolved service tag
.observability.native.datadog.repo_dashboards array Raw JSON of each Datadog dashboard file discovered in the repo, with its path
.observability.native.datadog.repo_monitors array Raw JSON of each Datadog monitor file discovered in the repo, with its path

Collectors

This plugin provides the following sub-collectors:

Collector Description
service Queries Datadog API for monitors (by service tag), dashboard (by UUID), and SLOs (by service tag) (code hook)
repo-files Discovers Datadog dashboard and monitor JSON files in the repo by content fingerprint (code hook)

Installation

Add to your lunar-config.yml:

collectors:
  - uses: github://earthly/lunar-lib/collectors/datadog@v1.0.0
    on: ["domain:your-domain"]
    with:
      datadog_site: "datadoghq.com"
      # service_name: "payment-api"   # Optional fallback if catalog meta isn't set
      # dashboard_id: "abc-123-def"   # Optional dashboard UUID
      # find_command: "find ./datadog -type f -name '*.json'"  # Optional, narrows repo scan

Required secrets:

  • DATADOG_API_KEY — Datadog API key (Organization Settings → API Keys)
  • DATADOG_APP_KEY — Datadog application key (Organization Settings → Application Keys). Required for monitor, dashboard, and SLO reads — these endpoints require both the API key and the application key.

Application key scopes. Modern Datadog application keys are scoped. If you pick "Custom Scopes" at creation time, select at minimum the scopes listed below; otherwise the API returns 403 for the matching endpoints. If you pick "All Scopes", no further action is needed, but least-privilege is preferred:

Scope Used by Datadog endpoint
monitors_read service sub-collector GET /api/v1/monitor
dashboards_read service sub-collector GET /api/v1/dashboard/{id}
slos_read service sub-collector GET /api/v1/slo

The repo-files sub-collector does not call the Datadog API and is unaffected by application-key scoping.

Service discovery

The service sub-collector resolves the component's Datadog service tag in this order:

  1. Catalog meta annotation — reads datadog/service-name from the component's lunar catalog meta. Set via lunar catalog component --meta datadog/service-name <name>, typically by a company-specific cataloger that knows which components map to which Datadog services. This is the recommended approach.
  2. service_name input — explicit value passed via with: service_name: <name> in lunar-config.yml. Useful for static cases or for orgs that don't run a cataloger.
  3. If neither is set, the sub-collector exits cleanly with no data written.
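That resolution order can be sketched as a small helper, assuming the catalog meta is available as a plain dict (the function name is hypothetical):

```python
def resolve_service_name(meta, service_name_input=None):
    """Resolve the Datadog service tag: catalog meta, then input, then None."""
    # 1. Catalog meta annotation wins.
    name = meta.get("datadog/service-name")
    if name:
        return name
    # 2. Explicit service_name input is the fallback.
    if service_name_input:
        return service_name_input
    # 3. Neither is set: the sub-collector exits cleanly, writing nothing.
    return None
```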

Monitors and SLOs are listed via the Datadog API and filtered on the service:<name> tag. .observability.alerts.count and .observability.slo.count reflect the number of matching resources.
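For the SLO fields specifically, has_error_budget (defined in the Collected Data table above as a target below 100% or an explicit warning threshold) can be derived from Datadog's SLO threshold objects. A sketch, with thresholds abbreviated:

```python
def slo_summary(slos):
    """Build the normalized .observability.slo fragment from raw SLO objects."""
    def has_budget(slo):
        # An error budget exists if any threshold targets below 100%
        # or sets an explicit warning level.
        return any(
            t.get("target", 100) < 100 or t.get("warning") is not None
            for t in slo.get("thresholds", [])
        )
    return {
        "defined": bool(slos),
        "count": len(slos),
        "has_error_budget": any(has_budget(s) for s in slos),
    }

slos = [{"name": "Checkout availability",
         "thresholds": [{"timeframe": "30d", "target": 99.9, "warning": 99.95}]}]
```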

Dashboard discovery

Datadog dashboards are not universally tagged with service:, so the dashboard must be mapped explicitly:

  1. Catalog meta annotation — datadog/dashboard-id, set via lunar catalog component --meta datadog/dashboard-id <uuid>.
  2. dashboard_id input — explicit value passed via with: dashboard_id: <uuid>.
  3. If neither is set, dashboard data is not collected (monitors and SLOs still run).

When the UUID resolves but the dashboard does not exist in Datadog, .observability.dashboard.exists=false is written so policies can flag the stale link. The UUID is always written to .observability.dashboard.id, so the link is visible in the component JSON even when the dashboard is missing.
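A sketch of that write, assuming the API lookup yields None when the dashboard returns a 404 (illustrative function, not the collector's actual code):

```python
def dashboard_fragment(dashboard_id, api_response, site="datadoghq.com"):
    """Build the normalized .observability.dashboard fragment."""
    return {
        "id": dashboard_id,                  # always written, even when stale
        "exists": api_response is not None,  # False flags a stale link
        "url": f"https://app.{site}/dashboard/{dashboard_id}",
    }
```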

Datadog site support

The datadog_site input selects which Datadog region to call. Defaults to datadoghq.com (US1). Supported values include datadoghq.eu (EU1), us3.datadoghq.com (US3), us5.datadoghq.com (US5), and ap1.datadoghq.com (AP1). The collector builds API URLs as https://api.<site> and dashboard links as https://app.<site>/dashboard/<id>.

If you are running outside of lunar collect and want to override the site without rewriting lunar-config.yml, set the DATADOG_SITE environment variable. Resolution order is datadog_site input → DATADOG_SITE env var → datadoghq.com default. Wrong-region requests return 401 Unauthorized, which is the most common cause of 4xx errors when keys are otherwise valid.
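The resolution order and URL construction can be sketched as:

```python
import os

def resolve_site(datadog_site_input=None):
    """datadog_site input -> DATADOG_SITE env var -> US1 default."""
    return datadog_site_input or os.environ.get("DATADOG_SITE") or "datadoghq.com"

def api_base(site):
    return f"https://api.{site}"

def dashboard_url(site, dashboard_id):
    return f"https://app.{site}/dashboard/{dashboard_id}"
```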

Repo file discovery (the repo-files sub-collector)

The repo-files sub-collector walks the cloned component repo and identifies Datadog-as-code JSON files by content fingerprint:

  • Dashboards — any .json file whose top-level object contains both a widgets array and a layout_type field (string, typically ordered or free). This is the shape produced by Datadog's UI JSON export and by the datadog_dashboard_json Terraform resource.
  • Monitors — any .json file whose top-level object contains type (string, e.g. metric alert, query alert, service check, log alert), query (string), and name (string). This is the shape of Datadog's Monitor API payload and what the datadog-ci and Datadog Terraform provider produce.
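The two fingerprints above can be sketched as a small classifier (illustrative, not the collector's actual code):

```python
import json

def classify(raw):
    """Return 'dashboard', 'monitor', or None for a candidate JSON file's content."""
    try:
        obj = json.loads(raw)
    except ValueError:
        return None
    if not isinstance(obj, dict):
        return None
    # Dashboard fingerprint: top-level widgets array plus layout_type field.
    if isinstance(obj.get("widgets"), list) and "layout_type" in obj:
        return "dashboard"
    # Monitor fingerprint: top-level type, query, and name string fields.
    if all(isinstance(obj.get(k), str) for k in ("type", "query", "name")):
        return "monitor"
    return None
```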

By default the full repo is walked. Set the find_command input to narrow the search to a specific directory (for example find ./datadog -type f -name '*.json'). This mirrors the pattern used by the Grafana collector's repo-dashboards sub-collector. Files that do not match either fingerprint are silently skipped. If no dashboards or monitors are found, nothing is written.

This sub-collector does not write to normalized .observability.dashboard / .observability.alerts paths — the API sub-collector owns those. .observability.native.datadog.repo_* is intentionally raw — users write their own policies against the dashboard/monitor JSON if they care about widget types, query shapes, notification targets, etc.

Notes on behavior

  • Both sub-collectors run on the code hook, so they fire on each push rather than on a schedule. This matches the Grafana collector's pattern: the clone is cheap and the data stays fresh on every change. The service sub-collector does not actually read from the repo, but running it on the same hook keeps the model consistent across the plugin.
  • When Datadog API credentials are missing or the service name is not resolved, the service sub-collector exits 0 with a stderr message — no error, no partial data. The repo-files sub-collector works independently of API credentials.
  • Example Component JSON is defined in lunar-collector.yml under example_component_json.

Open Source

This collector is open source and available on GitHub. Contribute improvements, report issues, or fork it for your own use.

View Repository
