finops-cloud-optimizer

Analyze cloud spend across AWS/GCP/Azure — identify waste, recommend rightsizing, spot idle resources, and produce actionable savings plans with projected dollar impact.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Copy this and send it to your AI assistant to learn

Install skill "finops-cloud-optimizer" with this command: npx skills add charlie-morrison/finops-cloud-optimizer

FinOps Cloud Optimizer

Analyze cloud billing data, resource utilization, and infrastructure configurations to identify waste and produce a prioritized savings plan with projected dollar impact. Covers AWS, GCP, and Azure — handling reserved instances, spot/preemptible opportunities, idle resources, rightsizing, storage tiering, and architectural changes that reduce cost without reducing reliability.

Use when: "analyze our cloud spend", "reduce cloud costs", "find idle resources", "rightsizing recommendations", "cloud cost audit", "finops review", or when monthly cloud bills are growing faster than revenue.

Prerequisites

The agent needs access to billing data and resource metrics. At least one of:

# AWS
# (Cost Explorer End dates are exclusive, so End=2026-05-01 covers through April)
aws ce get-cost-and-usage --time-period Start=2026-04-01,End=2026-05-01 \
  --granularity MONTHLY --metrics BlendedCost --output json 2>/dev/null && echo "AWS CE: OK"
aws cloudwatch get-metric-statistics --help >/dev/null 2>&1 && echo "CloudWatch: OK"

# GCP
gcloud billing accounts list 2>/dev/null && echo "GCP Billing: OK"
bq query --nouse_legacy_sql 'SELECT 1 FROM `project.dataset.gcp_billing_export_v1_*` LIMIT 1' 2>/dev/null

# Azure
az consumption usage list --top 1 2>/dev/null && echo "Azure Consumption: OK"

# Or: CSV/JSON billing export file
ls *billing*.csv *cost*.csv *CUR*.csv.gz 2>/dev/null

Usage

Provide:

  • Cloud provider(s) — AWS, GCP, Azure, or multi-cloud
  • Billing data source — API access, CSV export, or BigQuery/Athena table
  • Time range — analysis period (default: last 3 months)
  • Scope — specific account/project/subscription, or all
  • Constraints — things that cannot change (e.g., "must stay in us-east-1", "cannot use spot for production databases")

Example invocations:

Analyze our AWS bill for the last 3 months. We're spending $47K/month and the CFO wants 25% reduction.

We have a GCP billing export in BigQuery. Find all idle resources and rightsizing opportunities.

Review our Azure subscription and tell us if we should switch any VMs to reserved instances.

How It Works

Step 1: Billing Data Ingestion

Pull billing data and break it down by service, account, and tag:

AWS Cost Explorer:

# Monthly cost by service
# (Cost Explorer End dates are exclusive, so End=2026-05-01 covers through April)
aws ce get-cost-and-usage \
  --time-period Start=2026-02-01,End=2026-05-01 \
  --granularity MONTHLY \
  --metrics BlendedCost UnblendedCost UsageQuantity \
  --group-by Type=DIMENSION,Key=SERVICE \
  --output json > /tmp/aws-costs-by-service.json

# Cost by linked account
aws ce get-cost-and-usage \
  --time-period Start=2026-02-01,End=2026-05-01 \
  --granularity MONTHLY \
  --metrics BlendedCost \
  --group-by Type=DIMENSION,Key=LINKED_ACCOUNT \
  --output json > /tmp/aws-costs-by-account.json

# Cost by tag (for team/project attribution)
aws ce get-cost-and-usage \
  --time-period Start=2026-02-01,End=2026-05-01 \
  --granularity MONTHLY \
  --metrics BlendedCost \
  --group-by Type=TAG,Key=Environment Type=TAG,Key=Team \
  --output json > /tmp/aws-costs-by-tag.json

# Daily cost trend (detect anomalies)
aws ce get-cost-and-usage \
  --time-period Start=2026-04-01,End=2026-05-01 \
  --granularity DAILY \
  --metrics BlendedCost \
  --output json > /tmp/aws-daily-costs.json

GCP BigQuery billing export:

SELECT
  invoice.month AS month,
  service.description AS service,
  SUM(cost) + SUM(IFNULL((SELECT SUM(c.amount) FROM UNNEST(credits) c), 0)) AS net_cost,  -- credit amounts are negative
  SUM(cost) AS gross_cost,
  SUM(IFNULL((SELECT SUM(c.amount) FROM UNNEST(credits) c WHERE c.type = 'SUSTAINED_USAGE_DISCOUNT'), 0)) AS sud_credits,
  SUM(IFNULL((SELECT SUM(c.amount) FROM UNNEST(credits) c WHERE c.type = 'COMMITTED_USAGE_DISCOUNT'), 0)) AS cud_credits
FROM `project.dataset.gcp_billing_export_v1_AABBCC`
WHERE invoice.month >= '202602'
GROUP BY 1, 2
ORDER BY net_cost DESC

Azure Consumption API:

az consumption usage list \
  --start-date 2026-02-01 --end-date 2026-04-30 \
  --output json > /tmp/azure-usage.json

# Aggregate by meter category
cat /tmp/azure-usage.json | jq 'group_by(.meterCategory) |
  map({service: .[0].meterCategory, cost: map(.pretaxCost | tonumber) | add}) |
  sort_by(-.cost)'

Step 2: Top Spend Identification

Identify the top cost drivers — this is where savings hide:

# AWS: Top 10 services by spend
cat /tmp/aws-costs-by-service.json | jq '[.ResultsByTime[-1].Groups[] | {
  service: .Keys[0],
  cost: (.Metrics.BlendedCost.Amount | tonumber)
}] | sort_by(-.cost) | .[0:10]'

Typical cost distribution and savings potential:

| Service | % of Bill | Savings Potential | Approach |
|---|---|---|---|
| EC2/Compute Engine/VMs | 30-50% | 20-60% | Rightsizing, Reserved/Committed, Spot |
| RDS/Cloud SQL/DB | 15-25% | 15-40% | Reserved instances, Aurora Serverless |
| S3/GCS/Blob Storage | 5-15% | 20-50% | Lifecycle policies, storage classes |
| Data Transfer | 5-15% | 10-30% | VPC endpoints, CDN, compression |
| EKS/GKE/AKS | 5-10% | 20-40% | Node rightsizing, Karpenter, spot nodes |
| NAT Gateway | 2-8% | 30-70% | VPC endpoints, NAT instance |
| CloudWatch/Monitoring | 1-5% | 30-60% | Log retention, metric filters |
| EBS/Persistent Disks | 3-8% | 20-40% | gp3 migration, snapshot cleanup |
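As a sanity check, the table's ranges can be turned into a rough addressable-savings figure. A minimal sketch: the shares, savings rates, and the $47K bill below are illustrative midpoint assumptions, not measured values:

```python
# Rough addressable-savings estimate from the distribution table above.
# All figures are illustrative assumptions (midpoints of the table's ranges
# applied to a hypothetical $47,000/month bill).
monthly_bill = 47_000

# service category -> (share of bill, savings potential)
categories = {
    "Compute":        (0.40, 0.40),
    "Databases":      (0.20, 0.275),
    "Object storage": (0.10, 0.35),
    "Data transfer":  (0.10, 0.20),
}

addressable = sum(monthly_bill * share * save for share, save in categories.values())
print(f"Rough addressable savings: ${addressable:,.0f}/mo")
```

This is only a prior for scoping the audit; the per-resource analysis in the following steps replaces it with real numbers.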

Step 3: Idle Resource Detection

Find resources that exist but aren't being used:

AWS idle resource checks:

# Unattached EBS volumes
aws ec2 describe-volumes --filters Name=status,Values=available \
  --query 'Volumes[].{ID:VolumeId,Size:Size,Type:VolumeType,Created:CreateTime}' --output json

# Idle Elastic IPs (allocated but not associated)
aws ec2 describe-addresses --query 'Addresses[?AssociationId==null].{IP:PublicIp,AllocID:AllocationId}' --output json

# Stopped EC2 instances (still paying for EBS)
aws ec2 describe-instances --filters Name=instance-state-name,Values=stopped \
  --query 'Reservations[].Instances[].{ID:InstanceId,Type:InstanceType}' --output json

Additional checks performed: unused load balancers (zero healthy targets), old EBS snapshots (>90 days, not used by AMI), idle RDS instances (0 connections for 7 days via CloudWatch), unused NAT Gateways (low throughput), and stale ECR images (no pull in 90 days).
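Pricing the waste inventory is simple multiplication over the `describe-volumes` output. A sketch, assuming illustrative us-east-1 EBS list prices (the `GB_MONTH_PRICE` rates are assumptions; substitute your region's pricing):

```python
# Assumed $/GB-month rates (illustrative, not authoritative pricing)
GB_MONTH_PRICE = {"gp2": 0.10, "gp3": 0.08, "io1": 0.125}

def monthly_waste(volumes):
    """Sum the monthly carrying cost of unattached ('available') volumes."""
    return sum(v["Size"] * GB_MONTH_PRICE[v["Type"]] for v in volumes)

# Example shaped like the describe-volumes query output above
findings = [
    {"ID": "vol-0abc", "Size": 500, "Type": "gp2"},  # 500 GB * $0.10
    {"ID": "vol-0def", "Size": 100, "Type": "gp3"},  # 100 GB * $0.08
]
print(f"${monthly_waste(findings):.2f}/mo")
```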

Step 4: Rightsizing Analysis

Compare provisioned capacity to actual utilization:

Pull CPU utilization (average + maximum) from CloudWatch for the last 14 days for every instance. Apply the rightsizing decision matrix:

  • Avg < 5%, Max < 20% — downsize 2 tiers or terminate (high confidence)
  • Avg 5-20%, Max < 40% — downsize 1 tier (high confidence)
  • Avg 20-40%, Max < 70% — consider downsizing (medium confidence)
  • Avg 40-70%, Max < 90% — right-sized, no action
  • Avg > 70%, Max > 90% — consider upsizing

Also check EBS volume types: gp2 volumes should be migrated to gp3 (same performance, 20% cheaper, with 3000 IOPS and 125 MB/s included free).
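One possible encoding of the decision matrix as a function; the thresholds come straight from the list above, while the handling of boundary cases between rows is a judgment call:

```python
def rightsize_action(avg_cpu, max_cpu):
    """Map 14-day CPU stats (percent) to a rightsizing action per the matrix above."""
    if avg_cpu < 5 and max_cpu < 20:
        return "downsize 2 tiers or terminate"
    if avg_cpu < 20 and max_cpu < 40:
        return "downsize 1 tier"
    if avg_cpu < 40 and max_cpu < 70:
        return "consider downsizing"
    if avg_cpu <= 70 and max_cpu < 90:
        return "right-sized"
    return "consider upsizing"
```

Instances that fall between rows (e.g. low average but very spiky maximum) deserve a manual look rather than an automated action.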

Step 5: Reserved / Committed Use Analysis

Determine optimal reservation coverage:

# AWS: Current RI coverage
aws ce get-reservation-coverage \
  --time-period Start=2026-04-01,End=2026-05-01 \
  --granularity MONTHLY \
  --group-by Type=DIMENSION,Key=INSTANCE_TYPE \
  --output json

# AWS: RI purchase recommendations
aws ce get-reservation-purchase-recommendation \
  --service "Amazon Elastic Compute Cloud - Compute" \
  --term-in-years ONE_YEAR \
  --payment-option NO_UPFRONT \
  --lookback-period-in-days SIXTY_DAYS \
  --output json

# AWS: Savings Plans recommendations
aws ce get-savings-plans-purchase-recommendation \
  --savings-plans-type COMPUTE_SP \
  --term-in-years ONE_YEAR \
  --payment-option NO_UPFRONT \
  --lookback-period-in-days SIXTY_DAYS \
  --output json

Commitment strategy framework:

| Workload Type | Recommended Commitment | Term | Savings vs On-Demand |
|---|---|---|---|
| Steady-state production | Reserved Instance / CUD | 1yr no-upfront | 30-40% |
| Steady-state (high confidence) | Reserved Instance / CUD | 3yr partial-upfront | 50-60% |
| Variable but predictable | Compute Savings Plan | 1yr | 20-30% |
| Batch / fault-tolerant | Spot / Preemptible | None | 60-90% |
| Dev/test | Spot + scheduled shutdown | None | 70-85% |
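The breakeven math behind partial-upfront commitments is a one-liner: find the month where cumulative on-demand spend overtakes the upfront payment plus cumulative committed spend. A sketch with illustrative prices (the upfront and monthly figures below are assumptions, not quotes):

```python
def breakeven_months(upfront, committed_monthly, on_demand_monthly):
    """Months until upfront + m*committed_monthly < m*on_demand_monthly."""
    return upfront / (on_demand_monthly - committed_monthly)

# Hypothetical instance: $140/mo on-demand vs 3yr partial-upfront
# at $800 down + $45/mo (illustrative figures)
months = breakeven_months(800, 45, 140)
print(f"Breakeven after {months:.1f} months")
```

A 1yr no-upfront commitment has zero breakeven time, which is why it is the default recommendation for steady-state production.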

Step 6: Storage Optimization

Analyze storage costs across S3/GCS/Blob Storage. Enumerate bucket sizes and check for missing lifecycle policies. Generate recommended lifecycle rules: Standard -> Standard-IA (30 days) -> Glacier IR (90 days) -> Glacier (180 days) -> Deep Archive (365 days), with noncurrent version expiration at 90 days and abort incomplete multipart uploads after 7 days.
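One way to express that recommendation as an S3 lifecycle configuration payload; the day thresholds mirror the text, but tune them to actual object-access patterns before applying:

```python
# Recommended lifecycle rules from the text above, as a
# PutBucketLifecycleConfiguration-shaped payload (sketch; verify against
# your bucket's access patterns and retrieval-cost tradeoffs first).
lifecycle = {
    "Rules": [{
        "ID": "tiering-and-cleanup",
        "Status": "Enabled",
        "Filter": {},  # applies to the whole bucket
        "Transitions": [
            {"Days": 30,  "StorageClass": "STANDARD_IA"},
            {"Days": 90,  "StorageClass": "GLACIER_IR"},
            {"Days": 180, "StorageClass": "GLACIER"},
            {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
        ],
        "NoncurrentVersionExpiration": {"NoncurrentDays": 90},
        "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
    }]
}
```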

Step 7: Network Cost Optimization

Analyze data transfer charges — often the sneakiest cost driver. Break down transfer costs by type (inter-AZ, internet egress, NAT Gateway, VPC peering). Recommend VPC Endpoints for S3/DynamoDB (free, eliminates NAT charges), CloudFront for static content (cheaper than direct S3 egress), cross-AZ colocation for chatty services, and response compression.
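The NAT-versus-endpoint decision comes down to simple arithmetic. A sketch using illustrative us-east-1 list prices (the `0.045` rates are assumptions; gateway endpoints for S3/DynamoDB carry no hourly or per-GB charge):

```python
# Assumed NAT Gateway rates (illustrative): $0.045/hr + $0.045/GB processed
NAT_HOURLY, NAT_PER_GB = 0.045, 0.045
HOURS_PER_MONTH = 730

def nat_monthly_cost(gb_processed):
    """Monthly NAT Gateway cost for traffic that a free gateway endpoint could carry."""
    return NAT_HOURLY * HOURS_PER_MONTH + NAT_PER_GB * gb_processed

# Example: 5 TB/month of S3 traffic currently routed through NAT
print(f"${nat_monthly_cost(5000):.2f}/mo avoidable via a gateway endpoint")
```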

Step 8: Quick Wins vs Strategic Savings

Categorize findings into actionable tiers:

Tier 1: Quick wins (implement this week, zero risk)

  • Delete unattached EBS volumes
  • Release unused Elastic IPs
  • Remove old EBS snapshots
  • Migrate gp2 to gp3
  • Enable S3 lifecycle policies
  • Delete unused ECR images

Tier 2: Medium effort (implement this month, low risk)

  • Rightsize overprovisioned instances
  • Purchase Reserved Instances / Savings Plans
  • Set up scheduled scaling for dev/test
  • Add VPC endpoints for S3/DynamoDB
  • Clean up idle load balancers

Tier 3: Strategic changes (plan for next quarter)

  • Move batch workloads to Spot/Preemptible
  • Implement Karpenter for Kubernetes node management
  • Architect for multi-region cost optimization
  • Migrate to Graviton/ARM instances (20% cheaper)
  • Evaluate serverless alternatives for low-traffic services
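Within each tier, one workable execution order is savings per hour of effort. The findings below are illustrative placeholders:

```python
# Rank findings by monthly savings per hour of implementation effort
# (all figures are placeholder examples)
findings = [
    {"name": "delete unattached EBS volumes", "savings": 280,  "effort_hours": 0.5},
    {"name": "rightsize app tier",            "savings": 2400, "effort_hours": 16},
    {"name": "gp2 -> gp3 migration",          "savings": 450,  "effort_hours": 2},
]
ranked = sorted(findings, key=lambda f: f["savings"] / f["effort_hours"], reverse=True)
for f in ranked:
    print(f'{f["name"]}: ${f["savings"]}/mo')
```

Note this deliberately surfaces small, fast wins first; large absolute savings with high effort still belong on the roadmap, just not at the top of week one.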

Output

The agent produces:

  1. Cost dashboard — monthly trend, top 10 services, cost per environment/team
  2. Waste inventory — every idle resource with monthly cost and recommended action
  3. Rightsizing report — instance-by-instance recommendations with utilization data
  4. Savings plan — prioritized list of changes with projected monthly savings and implementation effort
  5. Commitment recommendations — RI/Savings Plan/CUD purchase plan with breakeven analysis
  6. Storage optimization — lifecycle policies, class transitions, snapshot cleanup
  7. Network cost analysis — data transfer breakdown with VPC endpoint and CDN recommendations
  8. Executive summary — total addressable savings as dollar amount and percentage, broken into quick wins vs strategic
  9. Tracking spreadsheet — CSV with each finding, owner, status, projected savings, actual savings after implementation

Savings Projection Format

Each finding includes:

Finding: 14 unattached EBS volumes (gp2, total 2.8 TB)
Monthly cost: $280
Action: Delete volumes (after verifying they hold no needed data)
Effort: 30 minutes
Risk: Low (volumes are "available" state = not attached)
Monthly savings: $280
Annual savings: $3,360

Total report footer:

============================================
SAVINGS SUMMARY
============================================
Quick wins (this week):      $1,420/mo  ($17,040/yr)
Medium effort (this month):  $8,300/mo  ($99,600/yr)
Strategic (next quarter):    $5,800/mo  ($69,600/yr)
--------------------------------------------
Total addressable savings:  $15,520/mo ($186,240/yr)
Current monthly spend:      $47,000/mo
Savings percentage:         33%
============================================
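The footer is plain addition; a sketch that derives it from the tier totals (figures copied from the example above):

```python
# Tier totals from the example summary above ($/month)
tiers = {"quick": 1_420, "medium": 8_300, "strategic": 5_800}
spend = 47_000

total = sum(tiers.values())
print(f"Total addressable savings: ${total:,}/mo (${total * 12:,}/yr)")
print(f"Savings percentage: {total / spend:.0%}")
```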

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.
