query-metrics

Runs metrics queries against Axiom MetricsDB via scripts. Discovers available metrics, tags, and tag values. Use when asked to query metrics, explore metric datasets, check metric values, or investigate OTel metrics data.


Install skill "query-metrics" with this command: npx skills add axiomhq/skills/axiomhq-skills-query-metrics

CRITICAL: All script paths are relative to this skill's folder. Always invoke them with that relative path (e.g., scripts/metrics-query).

Querying Axiom Metrics

Query OpenTelemetry metrics stored in Axiom's MetricsDB.

Setup

Run scripts/setup to check requirements (curl, jq, ~/.axiom.toml).

Config in ~/.axiom.toml (shared with axiom-sre):

[deployments.prod]
url = "https://api.axiom.co"
token = "xaat-your-token"
org_id = "your-org-id"

The target dataset must be of kind otel:metrics:v1.
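The setup script validates all of this for you, but as a rough illustration of how a value can be read from this config with plain shell, here is a minimal sketch. It assumes the exact key = "value" layout shown above; deployment_value is a hypothetical helper for illustration, not part of the skill's scripts, and a real TOML parser is more robust:

```shell
# Hypothetical helper: read one key from a [deployments.<name>] section.
# Assumes the simple key = "value" layout shown above.
CONFIG="${AXIOM_CONFIG:-$HOME/.axiom.toml}"

deployment_value() {  # usage: deployment_value <deployment> <key>
  sed -n "/^\[deployments\.$1\]/,/^\[deployments\./p" "$CONFIG" \
    | sed -n "s/^$2 *= *\"\(.*\)\"/\1/p" \
    | head -n 1
}

# deployment_value prod url   -> https://api.axiom.co
```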


Discovering Datasets

List all datasets in a deployment:

scripts/datasets <deployment>

Filter to only metrics datasets:

scripts/datasets <deployment> --kind otel:metrics:v1

This returns each dataset's name, edgeDeployment, and kind. Use the dataset name in subsequent metrics-info and metrics-query calls.
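Assuming the script emits a JSON array of objects with the three fields named above (name, edgeDeployment, kind) — verify the real output shape before relying on it — a jq filter like this could pull out just the metrics dataset names:

```shell
# Sample listing mimicking the assumed output shape.
sample='[
  {"name":"my-dataset","edgeDeployment":"cloud.us-east-1.aws","kind":"otel:metrics:v1"},
  {"name":"app-logs","edgeDeployment":"cloud.us-east-1.aws","kind":"otel:logs:v1"}
]'

# Keep only metrics datasets; pipe real `scripts/datasets <deployment>`
# output through the same filter.
echo "$sample" | jq -r '.[] | select(.kind == "otel:metrics:v1") | .name'
```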


Edge Deployment Resolution

Datasets can live in different edge deployments (e.g., us-east-1 vs eu-central-1). The scripts automatically resolve the correct regional edge URL before querying. No manual configuration is needed — metrics-info and metrics-query detect the dataset's edge deployment and route requests to the right endpoint.

Edge Deployment           Edge Endpoint
cloud.us-east-1.aws       https://us-east-1.aws.edge.axiom.co
cloud.eu-central-1.aws    https://eu-central-1.aws.edge.axiom.co

If resolution fails or the edge deployment is unknown, requests fall back to the deployment URL in ~/.axiom.toml.
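As a rough sketch of that fallback behavior, with the two known mappings hard-coded — the real scripts resolve this dynamically, and scripts/resolve-url shows the actual endpoint chosen:

```shell
# Map a dataset's edge deployment to its regional endpoint, falling back to
# the deployment URL from ~/.axiom.toml when the edge deployment is unknown.
edge_url() {  # usage: edge_url <edgeDeployment> <fallbackUrl>
  case "$1" in
    cloud.us-east-1.aws)    echo "https://us-east-1.aws.edge.axiom.co" ;;
    cloud.eu-central-1.aws) echo "https://eu-central-1.aws.edge.axiom.co" ;;
    *)                      echo "$2" ;;
  esac
}

# edge_url cloud.us-east-1.aws https://api.axiom.co
#   -> https://us-east-1.aws.edge.axiom.co
```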


Learning the Metrics Query Syntax

CRITICAL: You MUST run metrics-spec before composing your first query in a session. NEVER guess MPL syntax.

The query endpoint is self-describing. Before writing any query, fetch the full specification:

scripts/metrics-spec <deployment> <dataset>

This returns the complete metrics query specification with syntax, operators, and examples. Read it to understand query structure before composing queries.


MPL Quick Reference

This is a minimal reference to avoid common mistakes. Always run metrics-spec for the full specification.

Identifiers

Identifiers that contain anything other than ASCII letters, digits, or _ must be backtick-escaped:

// CORRECT — dots require backticks
`host.name`
`service.name`
`k8s.namespace.name`

// WRONG — parser will fail on the dot
host.name

Filtering with where

Use | where to filter series by tag values. Chain multiple | where clauses (equivalent to and):

my-dataset:cpu_usage
| where `service.name` == "query"
| where namespace == "production"
| align to 5m using avg

You can also combine conditions in a single where using and, or, not, and parentheses:

my-dataset:http_requests
| where method == "GET" and status_code >= 400
| where `service.name` == "frontend" or `service.name` == "gateway"
| align to 5m using sum

Important:

  • Use where, not filter (filter is deprecated and produces a warning)
  • Boolean operators are and, or, not — do NOT use && or || (these are not valid MPL)
  • String values must be double-quoted: "value"
  • Regular expressions use slashes: /.*pattern.*/
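Putting these rules together, a query might look like the following. The dataset, metric, and tag names are illustrative, and only constructs shown above are used — always confirm against metrics-spec before composing your own:

```
my-dataset:http_requests
| where `service.name` == "frontend" and not (method == "GET")
| align to 5m using sum
| group by status_code using sum
```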

Workflow

  1. List datasets: Run scripts/datasets <deployment> to see available datasets and their edge deployments
  2. Learn the language: Run scripts/metrics-spec <deployment> <dataset> to read the metrics query spec — this step is mandatory
  3. Discover metrics: Use the find-metrics command if possible; otherwise, list available metrics via the info scripts
  4. Explore tags: List tags and tag values to understand filtering options
  5. Write and execute query: Compose a metrics query and run it via scripts/metrics-query
  6. Iterate: Refine filters, aggregations, and groupings based on results

If you are unsure what to query, start by searching for metrics that match a relevant tag value:

scripts/metrics-info <deployment> <dataset> find-metrics "frontend"

This finds metrics associated with a known value (e.g., a service name or host), giving you a starting point for building queries.


Query Metrics

Execute a metrics query against a dataset:

scripts/metrics-query <deployment> '<mpl>' '<startTime>' '<endTime>'

Examples:

# Simple query
scripts/metrics-query prod \
  '`my-dataset`:`http.server.duration` | align to 5m using avg' \
  '2025-06-01T00:00:00Z' \
  '2025-06-02T00:00:00Z'

# Query with filtering (note backticks on dotted tag names)
scripts/metrics-query prod \
  '`my-dataset`:`http.server.duration` | where `service.name` == "frontend" and method == "GET" | align to 5m using avg | group by status_code using sum' \
  'now-1d' \
  'now'

Parameter    Required   Description
deployment   Yes        Name from ~/.axiom.toml (e.g., prod)
mpl          Yes        Metrics query string. The dataset is extracted from the query itself.
startTime    Yes        RFC3339 (e.g., 2025-01-01T00:00:00Z) or relative expression (e.g., now-1h, now-1d)
endTime      Yes        RFC3339 (e.g., 2025-01-02T00:00:00Z) or relative expression (e.g., now)
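Relative expressions are usually simplest, but if explicit RFC3339 timestamps are needed, they can be generated with date. A sketch, assuming GNU date (the -d flag is GNU-specific; macOS/BSD date uses -v instead):

```shell
# Build RFC3339 start/end arguments for metrics-query (GNU date).
START="$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)"
END="$(date -u +%Y-%m-%dT%H:%M:%SZ)"

# e.g.:
# scripts/metrics-query prod '`my-dataset`:cpu_usage | align to 5m using avg' "$START" "$END"
```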

Discovery (Info Endpoints)

Use scripts/metrics-info to explore what metrics, tags, and values exist in a dataset before writing queries. Time range defaults to the last 24 hours; override with --start and --end.

List metrics in a dataset

scripts/metrics-info <deployment> <dataset> metrics

List tags in a dataset

scripts/metrics-info <deployment> <dataset> tags

List values for a specific tag

scripts/metrics-info <deployment> <dataset> tags <tag> values

List tags for a specific metric

scripts/metrics-info <deployment> <dataset> metrics <metric> tags

List tag values for a specific metric and tag

scripts/metrics-info <deployment> <dataset> metrics <metric> tags <tag> values

Find metrics matching a tag value

scripts/metrics-info <deployment> <dataset> find-metrics "<search-value>"

Custom time range

All info commands accept --start and --end for custom time ranges:

scripts/metrics-info prod my-dataset metrics \
  --start 2025-06-01T00:00:00Z \
  --end 2025-06-02T00:00:00Z

Error Handling

HTTP errors return JSON with message, code, and optional detail fields:

{"message": "description", "code": 400, "detail": {"errorType": 1, "message": "raw error"}}

Common status codes:

  • 400 — Invalid query syntax or bad dataset name
  • 401 — Missing or invalid authentication
  • 403 — No permission to query/ingest this dataset
  • 404 — Dataset not found
  • 429 — Rate limited
  • 500 — Internal server error

On a 500 error, re-run the failing request with verbose curl output (curl -v) to capture response headers, then report the traceparent or x-axiom-trace-id header value to the user. This trace ID is essential for debugging the failure with the backend team.
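A small sketch of pulling those fields out of an error body with jq — the sample below just mirrors the shape shown above:

```shell
# Sample error body matching the documented shape.
err='{"message": "invalid query", "code": 400, "detail": {"errorType": 1, "message": "raw error"}}'

# Extract the top-level code and message, plus the raw backend detail if present.
code="$(echo "$err" | jq -r '.code')"
msg="$(echo "$err" | jq -r '.message')"
raw="$(echo "$err" | jq -r '.detail.message // empty')"
echo "HTTP $code: $msg${raw:+ ($raw)}"
```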


Scripts

Script                                                Usage
scripts/setup                                         Check requirements and config
scripts/datasets <deploy> [--kind <kind>]             List datasets (with edge deployment info)
scripts/metrics-spec <deploy> <dataset>               Fetch metrics query specification
scripts/metrics-query <deploy> <mpl> <start> <end>    Execute a metrics query
scripts/metrics-info <deploy> <dataset> ...           Discover metrics, tags, and values
scripts/axiom-api <deploy> <method> <path> [body]     Low-level API calls
scripts/resolve-url <deploy> <dataset>                Resolve dataset to edge deployment URL

Run any script without arguments to see full usage.
