s3-bulk-upload

Upload many files to S3 with automatic organization by first-character prefixes.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Copy the command below and send it to your AI assistant to install this skill:

Install skill "s3-bulk-upload" with this command: npx skills add 6mile-puppet/s3-sort

S3 Bulk Upload

Upload files to S3 with automatic organization using first-character prefixes (e.g., a/apple.txt, b/banana.txt, 0-9/123.txt).

Quick Start

Use the included script for bulk uploads:

# Basic upload
./s3-bulk-upload.sh ./files my-bucket

# Dry run to preview
./s3-bulk-upload.sh ./files my-bucket --dry-run

# Use sync mode (faster for many files)
./s3-bulk-upload.sh ./files my-bucket --sync

# With storage class
./s3-bulk-upload.sh ./files my-bucket --storage-class STANDARD_IA

Prerequisites

Verify AWS credentials are configured:

aws sts get-caller-identity

If this fails, ensure AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are set, or configure via aws configure.

Organization Logic

Files are organized by the first character of their filename:

| First Character | Prefix |
|---|---|
| a-z | Lowercase letter (e.g., `a/`, `b/`) |
| A-Z | Lowercased letter (e.g., `A` maps to `a/`) |
| 0-9 | `0-9/` |
| Other | `_other/` |
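The mapping above can be sketched as a small reusable helper. The function name `prefix_for` is hypothetical (it is not part of the bundled script); this is a minimal POSIX-shell sketch of the same logic used in the snippets below:

```shell
# Hypothetical helper mirroring the mapping table above:
# lowercase letters map to themselves, digits to "0-9",
# anything else to "_other".
prefix_for() {
  first=$(printf '%s' "$1" | cut -c1 | tr '[:upper:]' '[:lower:]')
  case "$first" in
    [a-z]) printf '%s\n' "$first" ;;
    [0-9]) printf '0-9\n' ;;
    *)     printf '_other\n' ;;
  esac
}
```

Usage: `prefix_for "Apple.txt"` prints `a`, and `prefix_for "123.txt"` prints `0-9`.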

Single File Upload

Upload a single file with automatic prefix:

FILE="example.txt"
BUCKET="my-bucket"

# Compute prefix from first character
FIRST_CHAR=$(echo "${FILE}" | cut -c1 | tr '[:upper:]' '[:lower:]')
if [[ "$FIRST_CHAR" =~ [a-z] ]]; then
  PREFIX="$FIRST_CHAR"
elif [[ "$FIRST_CHAR" =~ [0-9] ]]; then
  PREFIX="0-9"
else
  PREFIX="_other"
fi

aws s3 cp "$FILE" "s3://${BUCKET}/${PREFIX}/${FILE}"

Bulk Upload

Upload all files from a directory:

SOURCE_DIR="./files"
BUCKET="my-bucket"

for FILE in "$SOURCE_DIR"/*; do
  [ -f "$FILE" ] || continue
  BASENAME=$(basename "$FILE")
  FIRST_CHAR=$(echo "$BASENAME" | cut -c1 | tr '[:upper:]' '[:lower:]')

  if [[ "$FIRST_CHAR" =~ [a-z] ]]; then
    PREFIX="$FIRST_CHAR"
  elif [[ "$FIRST_CHAR" =~ [0-9] ]]; then
    PREFIX="0-9"
  else
    PREFIX="_other"
  fi

  aws s3 cp "$FILE" "s3://${BUCKET}/${PREFIX}/${BASENAME}"
done

Efficient Bulk Sync

For large uploads, stage files with symlinks and then use aws s3 sync, which follows symlinks by default (pass --no-follow-symlinks to disable this):

SOURCE_DIR="./files"
STAGING_DIR="./staging"
BUCKET="my-bucket"

# Create staging directory with prefix structure
rm -rf "$STAGING_DIR"
mkdir -p "$STAGING_DIR"

for FILE in "$SOURCE_DIR"/*; do
  [ -f "$FILE" ] || continue
  BASENAME=$(basename "$FILE")
  FIRST_CHAR=$(echo "$BASENAME" | cut -c1 | tr '[:upper:]' '[:lower:]')

  if [[ "$FIRST_CHAR" =~ [a-z] ]]; then
    PREFIX="$FIRST_CHAR"
  elif [[ "$FIRST_CHAR" =~ [0-9] ]]; then
    PREFIX="0-9"
  else
    PREFIX="_other"
  fi

  mkdir -p "$STAGING_DIR/$PREFIX"
  ln -s "$(realpath "$FILE")" "$STAGING_DIR/$PREFIX/$BASENAME"
done

# Sync entire staging directory to S3
aws s3 sync "$STAGING_DIR" "s3://${BUCKET}/"

# Clean up
rm -rf "$STAGING_DIR"

Verification

List files by prefix:

BUCKET="my-bucket"
PREFIX="a"

aws s3 ls "s3://${BUCKET}/${PREFIX}/" --recursive

Generate a manifest of all uploaded files:

BUCKET="my-bucket"

aws s3 ls "s3://${BUCKET}/" --recursive | awk '{print $4}'

Count files per prefix:

BUCKET="my-bucket"

for PREFIX in {a..z} 0-9 _other; do
  COUNT=$(aws s3 ls "s3://${BUCKET}/${PREFIX}/" --recursive 2>/dev/null | wc -l | tr -d ' ')
  [ "$COUNT" -gt 0 ] && echo "$PREFIX: $COUNT files"
done
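A saved manifest can also be used to spot files that failed to upload. The helper name `missing_from_manifest` is hypothetical; this is a minimal sketch assuming a manifest produced by the `aws s3 ls ... | awk '{print $4}'` command above, with keys shaped like `a/apple.txt`:

```shell
# Hypothetical check: print local filenames that do not appear in the
# manifest. The leading prefix directory (e.g. "a/") is stripped from
# each manifest key before comparing against the local listing.
missing_from_manifest() {
  dir=$1; manifest=$2
  ls "$dir" | sort > /tmp/_local.$$
  sed 's|^[^/]*/||' "$manifest" | sort > /tmp/_remote.$$
  comm -23 /tmp/_local.$$ /tmp/_remote.$$   # lines only in the local listing
  rm -f /tmp/_local.$$ /tmp/_remote.$$
}
```

Usage: `missing_from_manifest ./files manifest.txt` prints one line per local file absent from the manifest; empty output means everything uploaded.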

Error Handling

Common issues and solutions:

| Error | Cause | Solution |
|---|---|---|
| AccessDenied | Insufficient permissions | Check that the IAM policy grants s3:PutObject on the bucket |
| NoSuchBucket | Bucket doesn't exist | Create the bucket or check the bucket name spelling |
| InvalidAccessKeyId | Bad credentials | Verify AWS_ACCESS_KEY_ID is correct |
| ExpiredToken | Session token expired | Refresh credentials or re-authenticate |

Test bucket access before bulk upload:

BUCKET="my-bucket"
echo "test" | aws s3 cp - "s3://${BUCKET}/_test_access.txt" && \
  aws s3 rm "s3://${BUCKET}/_test_access.txt" && \
  echo "Bucket access OK"

Storage Classes

Optimize costs with storage classes:

# Standard (default)
aws s3 cp file.txt s3://bucket/prefix/file.txt

# Infrequent Access (cheaper storage, retrieval fee)
aws s3 cp file.txt s3://bucket/prefix/file.txt --storage-class STANDARD_IA

# Glacier Instant Retrieval (archive with fast access)
aws s3 cp file.txt s3://bucket/prefix/file.txt --storage-class GLACIER_IR

# Intelligent Tiering (auto-optimize based on access patterns)
aws s3 cp file.txt s3://bucket/prefix/file.txt --storage-class INTELLIGENT_TIERING

Add --storage-class to bulk upload loops for cost optimization on infrequently accessed files.
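Combining the bulk loop with a storage class can be sketched as below. The `upload` wrapper and `DRY_RUN` flag are illustrative additions, not part of the bundled script; the sketch defaults to previewing commands so nothing is sent to S3 by accident:

```shell
# Sketch: bulk upload loop with a configurable storage class.
# DRY_RUN defaults to 1 (preview only); set DRY_RUN=0 to upload for real.
SOURCE_DIR="./files"
BUCKET="my-bucket"
STORAGE_CLASS="STANDARD_IA"
DRY_RUN="${DRY_RUN:-1}"

upload() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "aws s3 cp $1 $2 --storage-class $STORAGE_CLASS"
  else
    aws s3 cp "$1" "$2" --storage-class "$STORAGE_CLASS"
  fi
}

for FILE in "$SOURCE_DIR"/*; do
  [ -f "$FILE" ] || continue
  BASENAME=$(basename "$FILE")
  FIRST_CHAR=$(printf '%s' "$BASENAME" | cut -c1 | tr '[:upper:]' '[:lower:]')
  case "$FIRST_CHAR" in
    [a-z]) PREFIX="$FIRST_CHAR" ;;
    [0-9]) PREFIX="0-9" ;;
    *)     PREFIX="_other" ;;
  esac
  upload "$FILE" "s3://${BUCKET}/${PREFIX}/${BASENAME}"
done
```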
