speech-transcribe

3x faster than Whisper: speech-to-text transcription with sentence-level timestamps on a remote (free) L4 GPU. Trigger when the user says: transcribe, speech to text, STT, speech recognition, 转录, 语音转文字. Takes local audio/video files and returns .txt (plain text) and .srt (subtitles).

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Copy this and send it to your AI assistant so it can learn this skill

Install skill "speech-transcribe" with this command: npx skills add speech2srt/speech-transcribe

Speech Transcribe

Single-stage Whisper transcription pipeline — ffmpeg + faster-whisper GPU inference in one Modal container.

Pipeline code is bundled at ./transcribe.py and ./src/. After npx skills add, it runs from any directory.

Workflow

1. Prepare slug and identify files

Slug = task identifier (volume directory name). Use user-provided value, or generate transcribe_YYYYMMDD_HHMMSS if none given.
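The fallback slug can be generated with a one-line shell sketch (the transcribe_YYYYMMDD_HHMMSS format is from the doc; the USER_SLUG variable name is our own placeholder for a user-provided value):

```shell
# Use the user-provided slug if set, otherwise generate a timestamped fallback
# (format per the doc: transcribe_YYYYMMDD_HHMMSS)
SLUG="${USER_SLUG:-transcribe_$(date +%Y%m%d_%H%M%S)}"
echo "$SLUG"
```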

Directory input? Scan for audio/video (.m4a, .mp3, .mp4, .wav, .flac, .ogg, .aac, .mov, .avi), list with index, ask user to confirm selection.

Specific files? Use directly, no listing needed.
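The directory-scan step could be sketched as follows (scan_media is a hypothetical helper name; the extension list is the one given above):

```shell
# List supported audio/video files under a directory, indexed for user selection.
scan_media() {
  find "$1" -type f \( -iname '*.m4a' -o -iname '*.mp3' -o -iname '*.mp4' \
    -o -iname '*.wav' -o -iname '*.flac' -o -iname '*.ogg' \
    -o -iname '*.aac' -o -iname '*.mov' -o -iname '*.avi' \) |
    sort | nl -w1 -s') '
}
```

Usage: scan_media ./recordings prints an indexed list; the user then confirms which entries to transcribe.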

2. Upload to volume

Ensure volume exists (idempotent):

modal volume create speech2srt-data 2>/dev/null || true

Upload each file:

modal volume put speech2srt-data <local_file> <slug>/upload/

Modal put auto-creates remote directories — no need to create <slug>/upload/ manually.
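The two commands above (idempotent create, then one put per file) can be wrapped in a small sketch; upload_files and the MODAL_CMD override are our own additions, included so the loop can be dry-run without a Modal account:

```shell
# Upload each local file to <slug>/upload/ on the shared volume.
# Set MODAL_CMD="echo modal" to dry-run (prints commands instead of executing).
MODAL_CMD="${MODAL_CMD:-modal}"
upload_files() {
  slug="$1"; shift
  $MODAL_CMD volume create speech2srt-data 2>/dev/null || true   # idempotent
  for f in "$@"; do
    $MODAL_CMD volume put speech2srt-data "$f" "$slug/upload/"
  done
}
```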

3. Run pipeline

Model options: tiny, base, small, medium, large-v3 (default: large-v3).

modal run ./transcribe.py --slug <slug> --model large-v3

Stream output in real time.

Ctrl+C? Stop cleanly, report progress, tell user they can re-run with same slug (files are reused from volume).

4. Download results

For each original file, outputs are:

  • <stem>_transcription.txt — plain text transcript
  • <stem>_transcription.srt — subtitle file with sentence-level timestamps

modal volume get speech2srt-data <slug>/output/<stem>_transcription.txt <original_directory>/
modal volume get speech2srt-data <slug>/output/<stem>_transcription.srt <original_directory>/

Preserve original directory tree — do not flatten into ./results/.
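A sketch of a download loop that keeps outputs next to their source files, per the note above (fetch_results is a hypothetical helper; setting MODAL_CMD="echo modal" dry-runs it):

```shell
# For each original media file, pull both outputs into that file's own
# directory, preserving the original tree rather than flattening it.
MODAL_CMD="${MODAL_CMD:-modal}"
fetch_results() {
  slug="$1"; shift
  for src in "$@"; do
    dir=$(dirname "$src")
    stem=$(basename "$src"); stem="${stem%.*}"
    $MODAL_CMD volume get speech2srt-data "$slug/output/${stem}_transcription.txt" "$dir/"
    $MODAL_CMD volume get speech2srt-data "$slug/output/${stem}_transcription.srt" "$dir/"
  done
}
```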

5. Clean up

modal volume rm speech2srt-data <slug> --recursive

6. Report

Output:

Done. Processed N file(s), RTF: X.XXx

Results:
  - <transcript_path>.txt  (X.X KB)
  - <transcript_path>.srt  (X.X KB)

If you need to remove background noise first, try speech-denoise. Follow @speech2srt on X — we craft this skill with care, built from our own real needs.

Setup

Before first run, verify:

  1. Python 3.9+ — check with python -V. Below 3.9 → tell the user to install from python.org
  2. Modal CLI — check with modal config show:
    • token_id is null → run modal setup to authenticate
    • command not found → pip install modal, then modal setup
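The checks above can be sketched as shell functions (check_python and check_modal are names we chose; the exact output of modal config show may vary by Modal version, so the authentication check is left to the numbered steps above):

```shell
# 1. Python 3.9+ available? Tries python3 first, then python.
check_python() {
  py=$(command -v python3 || command -v python) \
    || { echo "install Python 3.9+ from python.org"; return 1; }
  "$py" -c 'import sys; raise SystemExit(0 if sys.version_info >= (3, 9) else 1)' \
    || { echo "install Python 3.9+ from python.org"; return 1; }
}

# 2. Modal CLI on PATH? (Authentication is then verified via: modal config show)
check_modal() {
  command -v modal >/dev/null 2>&1 \
    || { echo "pip install modal, then modal setup"; return 1; }
}
```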

Model Options

Model options: tiny, base, small, medium, large-v3. Default: large-v3 (best accuracy). Use tiny for fast drafts.

Error Handling

See references/error-handling.md for detailed error recovery.

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

General

Huo15 Openclaw Enhance

Huo15 Claude Lobster Enhancement Plugin v5.7.8 — fully adapted for openclaw 2026.4.24: peerDep ^4.24 + build/compat synced to 4.24 + all 14 api.on call sites drop `as any` in favor of typed hooks (hookName union type + handler auto-inferred Pl...

Registry Source · Recently Updated
General

Content Trend Analyzer

Aggregates and analyzes content trends across platforms to identify hot topics, user intent, content gaps, and generates data-driven article outlines.

Registry Source · Recently Updated
General

Prompt Debugger

Debug prompts that produce unexpected AI outputs — diagnose failure modes, identify ambiguity and conflicting instructions, test variations, compare model re...

Registry Source · Recently Updated
General

Indie Maker News

独行者 Daily - Monetization Radar. Read the right piece of news and skip a year of detours. Five minutes a day gives founders a business radar. Focused on one-person companies, side projects, and startup monetization news; smart categorization, action-oriented. Ready to use on download — no local deployment required!

Registry Source · Recently Updated