python-automation

Full-stack Python automation toolkit for file processing, data extraction, PDF manipulation, Excel/workbook automation, web scraping, and system tasks. Use when the user needs to: (1) Process/rename/organize files in bulk, (2) Extract data from PDFs, CSVs, or web pages, (3) Generate or modify Excel reports, (4) Automate repetitive system tasks (cron, file watching), (5) Build quick CLI tools for data processing.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Install skill "python-automation" with this command: npx skills add ericlooi504/python-automation

Python Automation

Core Libraries Quick Reference

| Task | Library | Installation |
|------|---------|--------------|
| File system | pathlib, shutil, os | stdlib |
| CSV | csv | stdlib |
| Excel | openpyxl | pip install openpyxl |
| Excel (old) | xlrd / xlwt | pip install xlrd xlwt |
| PDF text | PyMuPDF (fitz) | pip install PyMuPDF |
| PDF tables | camelot-py / tabula-py | pip install camelot-py |
| Web scraping | requests + BeautifulSoup4 | pip install requests beautifulsoup4 |
| Browser automation | playwright or selenium | pip install playwright |
| CLI | argparse (stdlib) or click | stdlib / pip install click |
| Rich terminal | rich | pip install rich |
| File watching | watchdog | pip install watchdog |
| Scheduling | schedule or cron | pip install schedule |
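
The table lists argparse for quick CLI tools but no pattern is shown below; here is a minimal sketch (the argument names and flag are illustrative, not from any of the bundled scripts):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical CLI: one or more paths plus an optional --dry-run flag.
    parser = argparse.ArgumentParser(description="Process files in bulk")
    parser.add_argument("paths", nargs="+", help="files to process")
    parser.add_argument("--dry-run", action="store_true",
                        help="print planned actions without executing them")
    return parser

# Passing an explicit list instead of reading sys.argv makes this testable.
args = build_parser().parse_args(["a.txt", "b.txt", "--dry-run"])
```

For anything beyond a handful of flags, click (also in the table) scales better with subcommands.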

Common Patterns

1. Batch File Processing

from pathlib import Path

for f in Path(".").rglob("*.txt"):
    content = f.read_text(encoding="utf-8")
    # transform content here; note this overwrites each file in place
    f.write_text(content, encoding="utf-8")
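
Overwriting files in place is risky if the transform has a bug. A safer sketch (directory names are illustrative; the demo uses a temp directory so it is self-contained) writes transformed copies to a separate output tree:

```python
from pathlib import Path
import tempfile

# Demo setup in a temp directory; in practice point src/out at real paths.
root = Path(tempfile.mkdtemp())
src = root / "in"
out = root / "out"
src.mkdir()
out.mkdir()
(src / "a.txt").write_text("hello\n", encoding="utf-8")

for f in src.rglob("*.txt"):
    content = f.read_text(encoding="utf-8").upper()  # example transform
    target = out / f.relative_to(src)                # mirror the tree layout
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(content, encoding="utf-8")
```

Once the output looks right, the originals can be replaced in a second pass.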

2. CSV Read/Write

import csv
with open("input.csv", newline="") as f:
    reader = csv.DictReader(f)
    for row in reader:
        print(row["column_name"])

with open("output.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["col1", "col2"])
    writer.writerow(["val1", "val2"])
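
When rows are already dicts (as DictReader produces), csv.DictWriter is the natural counterpart. A small sketch using an in-memory buffer in place of a file (column names are illustrative):

```python
import csv
import io

rows = [{"col1": "val1", "col2": "val2"}]

buf = io.StringIO()  # stands in for an open file handle
writer = csv.DictWriter(buf, fieldnames=["col1", "col2"])
writer.writeheader()
writer.writerows(rows)

output = buf.getvalue()  # header line plus one data line
```

DictWriter raises ValueError on unexpected keys unless extrasaction="ignore" is passed, which catches schema drift early.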

3. Excel Generation

from openpyxl import Workbook
wb = Workbook()
ws = wb.active
ws["A1"] = "Hello"
ws["B1"] = 42
wb.save("output.xlsx")
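
Reading a workbook back uses load_workbook; a round-trip sketch (file name is illustrative, and ws.append is an alternative to cell-by-cell assignment):

```python
from pathlib import Path
import tempfile
from openpyxl import Workbook, load_workbook

with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "report.xlsx"

    wb = Workbook()
    ws = wb.active
    ws.append(["name", "score"])  # header row
    ws.append(["alice", 90])
    wb.save(path)

    # Read the data back; values_only=True yields plain tuples, not Cell objects.
    wb2 = load_workbook(path)
    rows = [tuple(r) for r in wb2.active.iter_rows(values_only=True)]
```

For large read-only files, load_workbook(path, read_only=True) streams rows instead of loading the whole sheet.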

4. Web Scraping

import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com", timeout=10)
resp.raise_for_status()  # fail loudly on 4xx/5xx instead of parsing an error page
soup = BeautifulSoup(resp.text, "html.parser")
for link in soup.select("a[href]"):
    print(link["href"], link.get_text(strip=True))
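
The selector logic can be developed offline against a fixed HTML string, which keeps tests fast and network-free (the markup below is illustrative):

```python
from bs4 import BeautifulSoup

html = """<html><body>
<a href="/docs">Docs</a>
<a href="https://example.com">Example</a>
<a>no href, skipped by the selector</a>
</body></html>"""

soup = BeautifulSoup(html, "html.parser")
# a[href] matches only anchors that actually carry an href attribute.
links = [(a["href"], a.get_text(strip=True)) for a in soup.select("a[href]")]
```

Swapping "html.parser" for "lxml" (pip install lxml) speeds up parsing on large pages without changing this code.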

Scripts

See scripts/ for ready-to-use automation scripts:

  • rename_batch.py — Batch rename files with pattern matching
  • csv_to_excel.py — Convert CSV files to Excel workbooks
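
A hypothetical sketch of what the core of a batch-rename script such as rename_batch.py might look like; the function name, template syntax, and demo files are illustrative, and the bundled script may differ:

```python
from pathlib import Path
import tempfile

def batch_rename(folder: Path, pattern: str, template: str) -> list[Path]:
    """Rename files matching `pattern`, numbering them via `template`."""
    renamed = []
    # Sort for a deterministic numbering order.
    for i, f in enumerate(sorted(folder.glob(pattern)), start=1):
        target = f.with_name(template.format(i=i, stem=f.stem, suffix=f.suffix))
        f.rename(target)
        renamed.append(target)
    return renamed

# Demo in a temp directory
d = Path(tempfile.mkdtemp())
for name in ["b.txt", "a.txt"]:
    (d / name).write_text("x", encoding="utf-8")
result = batch_rename(d, "*.txt", "file_{i:03d}{suffix}")
```

A --dry-run mode that prints source/target pairs before renaming is a cheap safeguard worth adding to any such script.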
