nasa-safe-code-rater

Score C/C++ code against NASA safe coding guidance using 10 normalized rules, produce a 0-100 safety score, and return an English report with findings, exclusions, and remediation. Use when the user invokes $nasa-safe-code-rater or asks to audit a repository, the current file, or a diff for NASA-style safe C coding compliance.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.


Install skill "nasa-safe-code-rater" with this command: npx skills add mslmyilmaz5/nasa-safe-code-rater/mslmyilmaz5-nasa-safe-code-rater-nasa-safe-code-rater

NASA Safe Code Rater

Use this skill to evaluate C/C++ code with a NASA-inspired 10-rule model and generate a scored report.

Inputs

Accept optional user arguments:

  • target=repo|file|diff (default repo)
  • path=<file-or-dir> (required for target=file)
  • include_cpp=true|false (default true)

If arguments are absent, run repository-wide analysis.
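The argument handling above can be sketched as a small parser. The `parse_skill_args` helper below is illustrative, not part of the skill's actual scripts; only the argument names, allowed values, and defaults come from the list above.

```python
def parse_skill_args(raw: str) -> dict:
    """Parse 'key=value' tokens such as 'target=file path=src/main.c'.

    Defaults follow the Inputs list: target=repo, include_cpp=true.
    """
    args = {"target": "repo", "include_cpp": True}  # documented defaults
    for token in raw.split():
        key, _, value = token.partition("=")
        if key == "target" and value in ("repo", "file", "diff"):
            args["target"] = value
        elif key == "path":
            args["path"] = value
        elif key == "include_cpp":
            args["include_cpp"] = value.lower() == "true"
    # path is documented as required only for target=file
    if args["target"] == "file" and "path" not in args:
        raise ValueError("path=<file-or-dir> is required for target=file")
    return args
```

With no arguments the parser falls through to the documented default, a repository-wide analysis.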

Required Workflow

  1. Resolve scope from user input.
  2. Analyze only C/C++ files by default (.c, .h, .cpp, .hpp).
  3. List non-C/C++ files as excluded items.
  4. Evaluate all 10 rules in references/nasa_rules.md.
  5. Produce an English markdown report containing:
     • Executive summary
     • Overall score and risk class (Good, Warning, Critical)
     • Rule-by-rule findings table
     • Top 5 remediation actions
     • Insufficient evidence section
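Steps 2 and 3 above can be sketched as a scope filter. The extensions are taken from the workflow; the `split_scope` helper itself is a hypothetical illustration of how analyzed and excluded files might be separated.

```python
from pathlib import Path

C_EXTS = {".c", ".h"}       # always analyzed
CPP_EXTS = {".cpp", ".hpp"}  # analyzed unless include_cpp=false

def split_scope(paths, include_cpp=True):
    """Split paths into (analyzed C/C++ files, excluded files)."""
    allowed = C_EXTS | (CPP_EXTS if include_cpp else set())
    analyzed, excluded = [], []
    for p in paths:
        (analyzed if Path(p).suffix in allowed else excluded).append(p)
    return analyzed, excluded
```

The excluded list feeds the report's exclusions section, so non-C/C++ files are surfaced rather than silently skipped.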

Execution Commands

Use these scripts from the skill directory:

python3 scripts/analyze_repo.py --root <repo-path> --mode repo --json-out /tmp/nasa_analysis.json
python3 scripts/score_report.py --analysis /tmp/nasa_analysis.json --format markdown

For single file mode:

python3 scripts/analyze_repo.py --root <repo-path> --mode file --path <file-path> --json-out /tmp/nasa_analysis.json

For diff mode:

python3 scripts/analyze_repo.py --root <repo-path> --mode diff --json-out /tmp/nasa_analysis.json

To refresh normalized rules from a source PDF:

python3 scripts/extract_nasa_rules.py --pdf <path-to-pdf> --out references/nasa_rules.md
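The repo, file, and diff invocations above share one shape: an analyze step writing JSON, then a scoring step reading it. A minimal sketch of building those command lines from Python, using only the flags shown in this section (the wrapper function is an assumption, not a shipped script):

```python
def build_audit_commands(repo_root, mode="repo", path=None,
                         json_out="/tmp/nasa_analysis.json"):
    """Return (analyze_cmd, score_cmd) argv lists for the two-step pipeline."""
    analyze = ["python3", "scripts/analyze_repo.py",
               "--root", repo_root, "--mode", mode,
               "--json-out", json_out]
    if mode == "file":
        # --path is only documented for single-file mode
        analyze += ["--path", path]
    score = ["python3", "scripts/score_report.py",
             "--analysis", json_out, "--format", "markdown"]
    return analyze, score
```

Run the analyze command first; the score command consumes its JSON output and emits the markdown report.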

Quality Gates

For critical findings, require:

  • At least one concrete evidence line per failed rule.
  • Rule IDs traceable to references/nasa_rules.md.

If no C/C++ files are in scope, return a "not applicable" result with a reason and a suggested next step.
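The 0-100 score maps to one of the three risk classes named earlier (Good, Warning, Critical). The exact cutoffs are not stated in this document; the thresholds below are placeholder assumptions meant only to show the shape of the mapping.

```python
def risk_class(score: int) -> str:
    """Map a 0-100 safety score to a risk class (thresholds are assumed)."""
    if not 0 <= score <= 100:
        raise ValueError("score must be in 0-100")
    if score >= 80:   # assumed cutoff for Good
        return "Good"
    if score >= 50:   # assumed cutoff for Warning
        return "Warning"
    return "Critical"
```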

