ai-security-guard

An AI security protection system integrating dangerous-command detection, multi-layer permission modes, hook-based safety checks, and sandbox isolation. Use when the user asks to run commands safely, detect dangerous operations, configure permission policies, audit AI behavior, or protect system security.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Install skill "ai-security-guard" with this command: npx skills add ai-security-guard

AI Security Guard Pro

AI security protection system - core permission-management techniques distilled from Claude Code

Core Capabilities

  1. Dangerous command detection - regex pattern matching, risk-level assessment
  2. Multi-layer permission modes - default/auto/bypass/readonly
  3. Hook safety mechanisms - pre/post execution checks, error handling
  4. Sandbox isolation - resource limits, network isolation

Permission Modes

  Mode      Description            Use case
  default   Ask every time         Sensitive operations, first use
  auto      Execute automatically  Trusted environments
  bypass    Full trust             Developer debugging
  readonly  Read-only              Review / analysis
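
The modes above can gate decisions before any pattern matching runs. A minimal sketch, assuming the mode semantics in the table; `applyModeGate`, `PermissionMode`, and the write-command list are illustrative names, not part of the skill's API:

```typescript
// Sketch: resolve a coarse decision from the permission mode alone.
// Pattern-based checks (below) would still run after this gate in
// 'auto' and 'default' modes.
type PermissionMode = 'default' | 'auto' | 'bypass' | 'readonly'

// Commands that modify state; 'readonly' mode blocks these outright.
// The list is illustrative, not exhaustive.
const WRITE_COMMANDS = /^\s*(rm|mv|cp|dd|mkfs|chmod|chown|tee)\b/

function applyModeGate(mode: PermissionMode, command: string): 'allow' | 'deny' | 'ask' {
  if (mode === 'bypass') return 'allow'   // full trust: skip all checks
  if (mode === 'readonly') return WRITE_COMMANDS.test(command) ? 'deny' : 'allow'
  if (mode === 'auto') return 'allow'     // trusted environment
  return 'ask'                            // 'default': ask on sensitive operations
}
```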

Dangerous Pattern Detection

Critical (deny outright)

  Pattern      Description         Example
  rm -rf       Recursive delete    rm -rf /
  > /dev/sdX   Disk write          echo 1 > /dev/sda
  dd if=       Raw disk operation  dd if=/dev/zero of=/dev/sda
  mkfs         Filesystem format   mkfs.ext4 /dev/sdb
  shutdown     System shutdown     shutdown -h now
  reboot       System reboot       reboot
  kill -9 1    Kill init process   kill -9 1

High (ask for confirmation)

  Pattern      Description              Example
  curl | sh    Remote script execution  curl url | sh
  chmod 777    Excessive permissions    chmod 777 /path
  sudo         Privilege escalation     sudo rm /var/log
  wget         Remote download          wget -O script.sh url
  pip install  Package installation     pip install unknown
  npm i -g     Global install           npm i -g package

Medium (warn before running)

  Pattern  Description      Example
  rm       Delete files     rm file.txt
  mv       Move / rename    mv old new
  kill     Kill process     kill -9 pid
  pkill    Kill by pattern  pkill node

Core Implementation

Classification Decision

type ClassificationResult = {
  decision: 'allow' | 'deny' | 'ask'
  risk: 'low' | 'medium' | 'high' | 'critical'
  reason: string
  patterns?: string[]
}

const DANGEROUS_PATTERNS = [
  { pattern: /rm\s+-rf/, risk: 'critical', reason: 'recursive delete' },
  { pattern: />\s*\/dev\/sd/, risk: 'critical', reason: 'disk write' },
  { pattern: /curl\s+.*\|\s*sh/, risk: 'high', reason: 'remote script execution' },
  { pattern: /chmod\s+777/, risk: 'high', reason: 'excessive permissions' },
  { pattern: /dd\s+if=.*of=\/dev/, risk: 'critical', reason: 'raw disk write' },
  { pattern: /mkfs/, risk: 'critical', reason: 'filesystem format' },
  { pattern: /shutdown|reboot/, risk: 'critical', reason: 'system control' },
  { pattern: /kill\s+-9\s+1/, risk: 'critical', reason: 'kill system process' },
]
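
`classifyBashCommand`, referenced throughout this document, is never defined; it can be sketched over a pattern table like the one above. The risk-to-decision mapping here (critical denies, anything else asks) is an assumption of the sketch, and a trimmed copy of the table keeps it self-contained:

```typescript
// Sketch of classifyBashCommand: find every dangerous pattern that
// matches, take the most severe hit, and map risk to a decision.
// The critical → deny / otherwise → ask mapping is assumed.
type Risk = 'low' | 'medium' | 'high' | 'critical'

type ClassificationResult = {
  decision: 'allow' | 'deny' | 'ask'
  risk: Risk
  reason: string
  patterns?: string[]
}

// Trimmed copy of the full pattern table, for self-containment.
const DANGEROUS_PATTERNS: { pattern: RegExp; risk: Risk; reason: string }[] = [
  { pattern: /rm\s+-rf/, risk: 'critical', reason: 'recursive delete' },
  { pattern: /curl\s+.*\|\s*sh/, risk: 'high', reason: 'remote script execution' },
  { pattern: /\bkill\b/, risk: 'medium', reason: 'kill process' },
]

const RISK_ORDER: Risk[] = ['low', 'medium', 'high', 'critical']

function classifyBashCommand(command: string): ClassificationResult {
  const hits = DANGEROUS_PATTERNS.filter(p => p.pattern.test(command))
  if (hits.length === 0) {
    return { decision: 'allow', risk: 'low', reason: 'no dangerous pattern matched' }
  }
  // Most severe hit wins.
  const worst = hits.reduce((a, b) =>
    RISK_ORDER.indexOf(a.risk) >= RISK_ORDER.indexOf(b.risk) ? a : b)
  return {
    decision: worst.risk === 'critical' ? 'deny' : 'ask',
    risk: worst.risk,
    reason: worst.reason,
    patterns: hits.map(h => String(h.pattern)),
  }
}
```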

Permission Rules

type PermissionRule = {
  source: 'cliArg' | 'command' | 'session' | 'project' | 'global'
  behavior: 'allow' | 'deny' | 'ask'
  pattern: string | RegExp
}

const RULE_SOURCES = [
  'cliArg',    // highest priority
  'command',   // specified per command
  'session',   // session scope
  'project',   // project config
  'global',    // global config
]
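
One way to resolve these sources is first-match-wins in priority order. A sketch, assuming string patterns match as command prefixes and that an unmatched command falls back to `ask` — both assumptions, not documented behavior:

```typescript
// Sketch of rule resolution: the first matching rule from the
// highest-priority source decides. Prefix matching for string
// patterns and the 'ask' fallback are assumptions of this sketch.
type Behavior = 'allow' | 'deny' | 'ask'
type Source = 'cliArg' | 'command' | 'session' | 'project' | 'global'

type PermissionRule = { source: Source; behavior: Behavior; pattern: string | RegExp }

const PRIORITY: Source[] = ['cliArg', 'command', 'session', 'project', 'global']

function matches(pattern: string | RegExp, command: string): boolean {
  if (pattern instanceof RegExp) return pattern.test(command)
  // String patterns match as prefixes, e.g. 'git commit' matches
  // 'git commit -m "fix"'.
  return command.startsWith(pattern)
}

function applyPermissionRules(command: string, rules: PermissionRule[]): Behavior {
  for (const source of PRIORITY) {
    const hit = rules.find(r => r.source === source && matches(r.pattern, command))
    if (hit) return hit.behavior
  }
  return 'ask' // no rule matched: ask the user
}
```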

Hook Safety Mechanisms

Safety Check Hooks

// Hook run before tool execution
registerHook('pre_tool', async (ctx) => {
  if (ctx.tool === 'Bash') {
    const { decision, risk } = classifyBashCommand(ctx.args.command)

    if (decision === 'deny') {
      throw new Error(`Command denied: ${risk} risk - ${ctx.args.command}`)
    }

    if (decision === 'ask') {
      await requestPermission(ctx.args.command, risk)
    }
  }
})

// Hook run before context compaction
registerHook('pre_compact', async (ctx) => {
  // Preserve any message that contains sensitive data
  for (const msg of ctx.messages) {
    if (containsSensitiveData(msg)) {
      ctx.preserveMessageIds.push(msg.id)
    }
  }
})

Error Handling Strategy

type ErrorStrategy = 'ignore' | 'log' | 'warn' | 'throw'

function createHookExecutor(strategy: ErrorStrategy = 'log') {
  return async (event: string, context: any) => {
    try {
      await executeHook(event, context)
    } catch (error) {
      switch (strategy) {
        case 'ignore': break
        case 'log': console.error(`Hook ${event} error:`, error); break
        case 'warn': console.warn(`Hook ${event} warning:`, error); break
        case 'throw': throw error
      }
    }
  }
}

Sandbox Isolation

Sandbox Configuration

type SandboxConfig = {
  timeout: number          // timeout (ms)
  memoryLimit: number      // memory limit (MB)
  allowedDirs: string[]    // directories the command may access
  blockedDirs: string[]    // directories that are off-limits
  networkAccess: boolean   // whether network access is allowed
  env: Record<string, string>
}

const DEFAULT_SANDBOX_CONFIG: SandboxConfig = {
  timeout: 30000,
  memoryLimit: 512,
  allowedDirs: [process.cwd()],
  blockedDirs: ['/etc', '/root', '/home/*/.ssh'],
  networkAccess: true,
  env: {}
}
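
Of these limits, only the timeout, environment, and working directory map directly onto Node's `child_process` API. A sketch under that assumption; `runSandboxed` is an illustrative name, and memory limits or network isolation would need OS-level mechanisms (cgroups, containers, seccomp) not shown here:

```typescript
// Sketch: enforce the timeout, cwd, and env parts of SandboxConfig
// with execFileSync. memoryLimit and networkAccess are NOT enforced
// here; they require OS-level sandboxing.
import { execFileSync } from 'node:child_process'

type MiniSandboxConfig = {
  timeout: number          // ms; execFileSync kills the child on expiry
  allowedDirs: string[]
  env: Record<string, string>
}

function runSandboxed(command: string, args: string[], cfg: MiniSandboxConfig): string {
  return execFileSync(command, args, {
    timeout: cfg.timeout,     // throws if the child outlives the limit
    cwd: cfg.allowedDirs[0],  // confine the working directory
    env: cfg.env,             // start from a controlled environment
    encoding: 'utf8',
  })
}
```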

Sandbox Decision

function shouldUseSandbox(command: string): boolean {
  const result = classifyBashCommand(command)

  if (result.risk === 'critical') return true
  if (result.risk === 'high') return true
  if (isInBlockedList(command)) return true

  return false
}

Permission Configuration

Project Level

{
  "permissions": {
    "session": {
      "allow": ["git *", "npm test", "ls *", "node *"],
      "deny": ["rm -rf", "curl | sh", "chmod 777"]
    }
  }
}

User Level

{
  "permissions": {
    "global": {
      "allow": ["echo", "pwd", "ls", "cat"],
      "deny": ["rm -rf /", "> /dev/sda", "dd if="]
    }
  }
}
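
These layered JSON configs could be flattened into the flat rule list that resolution works over. A sketch; the deny-before-allow ordering within a layer is an assumption, as is the `flattenRules` name:

```typescript
// Sketch: flatten layered { allow, deny } config into flat rules.
// Source names mirror the config layers ('session', 'global', ...).
type LayerBehavior = 'allow' | 'deny'

type LayeredConfig = {
  permissions: Record<string, { allow?: string[]; deny?: string[] }>
}

type FlatRule = { source: string; behavior: LayerBehavior; pattern: string }

function flattenRules(config: LayeredConfig): FlatRule[] {
  const rules: FlatRule[] = []
  for (const [source, layer] of Object.entries(config.permissions)) {
    // Deny rules first within each layer, so a deny beats an allow at
    // the same level under first-match resolution - an assumption.
    for (const pattern of layer.deny ?? []) rules.push({ source, behavior: 'deny', pattern })
    for (const pattern of layer.allow ?? []) rules.push({ source, behavior: 'allow', pattern })
  }
  return rules
}
```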

Audit Logging

function logPermissionDecision(
  command: string,
  result: ClassificationResult,
  context: PermissionContext
): void {
  logEvent('permission_decision', {
    command: sanitizeCommand(command),
    decision: result.decision,
    risk: result.risk,
    reason: result.reason,
    mode: context.mode,
    timestamp: Date.now(),
    userId: getCurrentUserId(),
    sessionId: getSessionId()
  })
}
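
`sanitizeCommand` is referenced above but not defined. A sketch that redacts likely secrets before the command reaches the log; the redaction patterns are illustrative assumptions, not an exhaustive secret detector:

```typescript
// Sketch of sanitizeCommand: strip likely secrets before logging.
// These patterns are illustrative only - real secret detection needs
// a much broader rule set.
const SECRET_PATTERNS: RegExp[] = [
  /(--?(?:password|token|key|secret)[= ])\S+/gi, // flag-style secrets
  /(Authorization:\s*Bearer\s+)\S+/gi,           // bearer tokens
  /([A-Za-z_]*(?:TOKEN|SECRET|KEY)=)\S+/g,       // env-var assignments
]

function sanitizeCommand(command: string): string {
  let out = command
  for (const p of SECRET_PATTERNS) {
    out = out.replace(p, '$1[REDACTED]') // keep the flag, drop the value
  }
  return out
}
```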

Usage Examples

Basic Usage

import { classifyBashCommand, applyPermissionRules } from './permissions.js'

// Direct classification
const result = classifyBashCommand('rm -rf /tmp/test')
// { decision: 'deny', risk: 'critical', reason: 'recursive delete' }

// Permission decision with context
const context = {
  mode: 'default',
  rules: {
    session: [{ behavior: 'allow', pattern: 'git commit' }],
    project: []
  }
}
const decision = applyPermissionRules('git commit -m "fix"', context)

Integrated Execution

async function executeBashWithPermission(
  command: string,
  context: PermissionContext
): Promise<ToolResult> {
  const classification = applyPermissionRules(command, context)

  switch (classification.decision) {
    case 'allow':
      return await executeCommand(command)

    case 'deny':
      return {
        ok: false,
        error: `Command denied: ${classification.reason}`
      }

    case 'ask':
      return await requestPermission(command, classification)
  }
}

Security Checklist

Pre-execution Checks

  • Does the command match a dangerous pattern?
  • Is sandbox isolation required?
  • Do the permission rules allow it?
  • Is user confirmation needed?

Post-execution Checks

  • Did the command execute successfully?
  • Does an audit log entry need to be recorded?
  • Do temporary files need cleanup?
  • Is resource usage within normal bounds?
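
The pre-execution checklist can be wired into a single pass that collects every failure rather than stopping at the first, so the audit log records the full picture. All names and check bodies here are illustrative stubs:

```typescript
// Sketch: run the pre-execution checklist as a pipeline of checks.
// Each check returns null on pass or a failure reason on fail.
type Check = (command: string) => string | null

const preChecks: Check[] = [
  cmd => /rm\s+-rf/.test(cmd) ? 'matches dangerous pattern' : null, // stub
  cmd => cmd.length > 1000 ? 'command suspiciously long' : null,    // stub
]

function runPreChecks(command: string): string[] {
  // Collect every failing check so the log shows all reasons at once.
  return preChecks
    .map(check => check(command))
    .filter((r): r is string => r !== null)
}
```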
