
AI CLI Safe Flags

Guardrail · AI Use Guardrails · Experimental · DevEx · Build and CI
ai-use.ai-cli-safe-flags

Ensures AI CLI tools running in CI do not use dangerous permission-bypassing flags. Flags like --dangerously-skip-permissions (Claude), --yolo (Codex/Gemini), and --sandbox danger-full-access (Codex) remove safety guardrails.

Tags: ci, safety, dangerous flags, permissions, sandbox
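As a hypothetical illustration of what gets flagged, consider a GitHub Actions job that invokes the Claude CLI. The workflow filename, job names, prompt, and the -p print flag are placeholders and assumptions; only the dangerous flag itself comes from the list above.

📄 .github/workflows/ai-review.yml (hypothetical)
on: pull_request
jobs:
  ai-review:
    runs-on: ubuntu-latest
    steps:
      # Would fail this check: --dangerously-skip-permissions bypasses Claude's permission prompts
      - name: AI review (unsafe)
        run: claude -p "Review this diff" --dangerously-skip-permissions
      # Would pass: a non-interactive run that keeps permission checks in place
      - name: AI review (safe)
        run: claude -p "Review this diff"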

Compatible Integrations

This guardrail works with multiple integrations; see each collector's documentation for how to use AI CLI Safe Flags with it.

Enable This Guardrail

Add the parent policy to your lunar-config.yml to enable this guardrail.

📄 lunar-config.yml
policies:
  - uses: github://earthly/lunar-lib/policies/ai-use@v1.0.0
    include: [ai-cli-safe-flags]
    # with: ...

How This Guardrail Works

This guardrail is part of the AI Use Guardrails policy. It evaluates data collected by integrations and produces a pass/fail check with actionable feedback.

When enabled, this check runs automatically on every PR and in AI coding workflows, providing real-time enforcement of your engineering standards.

Learn How Lunar Works
1. Integrations gather data: collectors extract metadata from code, CI pipelines, tool outputs, and scans.
2. Centralized as JSON: all collected data is merged into each component's unified metadata document (an illustrative sketch follows this list).
3. This guardrail checks: AI CLI Safe Flags runs against that document and provides pass/fail feedback.
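To make step 2 concrete, here is a purely hypothetical sketch of the slice of metadata this guardrail might scan, written as YAML so comments can mark the assumptions. The field names and command strings are illustrative and do not reflect Lunar's actual schema; the flags themselves come from the default lists in the table below.

ci:
  # Hypothetical field: AI CLI command lines observed in CI logs
  ai_cli_invocations:
    - tool: codex
      command: codex exec --full-auto "Fix the failing unit test"   # --full-auto is on the dangerous list, so the check fails
    - tool: gemini
      command: gemini -p "Summarize recent changes"                 # no dangerous flags, so this invocation passes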

Configuration Options

These inputs can be configured in your lunar-config.yml to customize how the parent policy (and this guardrail) behaves.

| Input | Required | Default | Description |
|---|---|---|---|
| canonical_filename | Optional | AGENTS.md | The canonical (vendor-neutral) instruction filename |
| required_symlinks | Optional | CLAUDE.md | Comma-separated list of symlinks required alongside the canonical file |
| min_lines | Optional | 10 | Minimum number of lines for the root instruction file (0 to disable) |
| max_lines | Optional | 300 | Maximum number of lines for the root instruction file (0 to disable) |
| max_total_bytes | Optional | 32768 | Maximum combined bytes across all instruction files (0 to disable) |
| required_sections | Optional | Project Overview,Build Commands | Comma-separated required section heading substrings (case-insensitive) |
| dangerous_flags_claude | Optional | --dangerously-skip-permissions,--allow-dangerously-skip-permissions | Comma-separated dangerous flags for the Claude CLI |
| dangerous_flags_codex | Optional | --dangerously-bypass-approvals-and-sandbox,--yolo,--full-auto | Comma-separated dangerous flags for the Codex CLI |
| dangerous_flags_gemini | Optional | --yolo,-y | Comma-separated dangerous flags for the Gemini CLI |
| min_annotation_percentage | Optional | 0 | Minimum percentage of commits that should have AI annotations (0 = awareness mode) |
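A minimal sketch of overriding these inputs, assuming they are passed as comma-separated strings under with: as the descriptions above suggest; the values below simply restate the documented defaults for the three flag lists.

📄 lunar-config.yml
policies:
  - uses: github://earthly/lunar-lib/policies/ai-use@v1.0.0
    include: [ai-cli-safe-flags]
    with:
      # These restate the documented defaults; append extra flags here to tighten the check
      dangerous_flags_claude: "--dangerously-skip-permissions,--allow-dangerously-skip-permissions"
      dangerous_flags_codex: "--dangerously-bypass-approvals-and-sandbox,--yolo,--full-auto"
      dangerous_flags_gemini: "--yolo,-y"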
AI Use Guardrails

This guardrail is part of the AI Use Guardrails policy, which includes 9 guardrails for DevEx, build, and CI.

View Policy
