parallel-requests

Detect sequential independent HTTP/API calls and refactor to parallel execution. Covers Promise.all (JS/TS), asyncio.gather (Python), and 7 more languages.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Install skill "parallel-requests" with this command: npx skills add zagi/parallel-requests/zagi-parallel-requests-parallel-requests

Parallel Requests Skill

Detect sequential independent HTTP/API calls and refactor them to parallel execution patterns.

When to Use

  • Code has 2+ sequential await fetch(), requests.get(), axios.get(), httpx.get(), or similar
  • Data aggregation from multiple independent APIs/endpoints
  • Pages loading data from multiple microservices
  • Batch operations that don't depend on each other

Detection Rules

Before refactoring, verify requests are truly independent:

| Signal | Independent (parallelize) | Dependent (keep sequential) |
| --- | --- | --- |
| URL construction | Static or uses only local vars | Uses response from previous call |
| Request body | No references to prior responses | Contains data from prior response |
| Headers | Static auth token | Token from prior auth call |
| Control flow | No conditional on prior result | if (responseA.ok) before B |
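The rules above can be illustrated with a short Python sketch; the fetch helpers here are hypothetical stand-ins for real HTTP calls:

```python
import asyncio

# Hypothetical stand-ins for real HTTP calls, used only to illustrate the table above.
async def fetch_user(user_id):
    await asyncio.sleep(0.01)  # simulated network latency
    return {"id": user_id, "team_id": 7}

async def fetch_team(team_id):
    await asyncio.sleep(0.01)
    return {"id": team_id, "name": "platform"}

async def main():
    # Dependent: the second call's argument comes from the first response,
    # so these two stay sequential.
    user = await fetch_user(1)
    team = await fetch_team(user["team_id"])

    # Independent: neither call reads the other's response, so they run together.
    user2, user3 = await asyncio.gather(fetch_user(2), fetch_user(3))
    return user, team, user2, user3

user, team, user2, user3 = asyncio.run(main())
```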

JS/TS Patterns

BEFORE (sequential — 3x latency)

const users = await fetch('/api/users').then(r => r.json())
const posts = await fetch('/api/posts').then(r => r.json())
const comments = await fetch('/api/comments').then(r => r.json())

AFTER (parallel — 1x latency)

const [users, posts, comments] = await Promise.all([
  fetch('/api/users').then(r => r.json()),
  fetch('/api/posts').then(r => r.json()),
  fetch('/api/comments').then(r => r.json()),
])

AFTER (with partial failure tolerance)

const results = await Promise.allSettled([
  fetch('/api/users').then(r => r.json()),
  fetch('/api/posts').then(r => r.json()),
  fetch('/api/comments').then(r => r.json()),
])
const [users, posts, comments] = results.map(r =>
  r.status === 'fulfilled' ? r.value : null
)

Python Patterns

BEFORE (synchronous — 3x latency)

users = httpx.get('/api/users').json()
posts = httpx.get('/api/posts').json()
comments = httpx.get('/api/comments').json()

AFTER (asyncio.gather)

async with httpx.AsyncClient() as client:
    responses = await asyncio.gather(
        client.get('/api/users'),
        client.get('/api/posts'),
        client.get('/api/comments'),
    )
    users, posts, comments = (r.json() for r in responses)

AFTER (Python 3.11+ TaskGroup)

async with httpx.AsyncClient() as client:
    async with asyncio.TaskGroup() as tg:
        users_task = tg.create_task(client.get('/api/users'))
        posts_task = tg.create_task(client.get('/api/posts'))
        comments_task = tg.create_task(client.get('/api/comments'))
users, posts, comments = (t.result().json() for t in (users_task, posts_task, comments_task))
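For synchronous codebases that can't move to asyncio, a thread pool gives the same latency win. A minimal sketch, with fetch as a stand-in for a blocking call such as requests.get(path).json():

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for a blocking HTTP call such as requests.get(path).json().
def fetch(path):
    return {"path": path}

paths = ['/api/users', '/api/posts', '/api/comments']

# executor.map runs the calls on worker threads and yields results in input order.
with ThreadPoolExecutor(max_workers=3) as pool:
    users, posts, comments = pool.map(fetch, paths)
```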

Other Languages — Quick Reference

| Language | Pattern | Import |
| --- | --- | --- |
| Go | goroutines + errgroup.Group | golang.org/x/sync/errgroup |
| Rust | tokio::join!() / futures::join_all() | tokio, futures |
| C# | Task.WhenAll(...) | System.Threading.Tasks |
| Java | CompletableFuture.allOf(...) | java.util.concurrent |
| PHP | Utils::all() (Guzzle Promises) | guzzlehttp/promises |
| Ruby | Async { ... } | async gem |
| Shell | cmd1 & cmd2 & wait / xargs -P N | built-in |

See skill/patterns.md for copy-pasteable code blocks in each language.

Dependency Detection & Restructuring

When requests have dependencies, don't blindly parallelize — restructure:

  1. Response of A used in B's URL/body — Keep sequential, but suggest: "Can these be a single batch API call?"
  2. Auth token from A needed for B, C, D — Get token first, then parallelize B+C+D:
    const token = await getAuthToken()
    const [users, posts, comments] = await Promise.all([
      fetch('/api/users', { headers: { Authorization: token } }).then(r => r.json()),
      fetch('/api/posts', { headers: { Authorization: token } }).then(r => r.json()),
      fetch('/api/comments', { headers: { Authorization: token } }).then(r => r.json()),
    ])
    
  3. Pagination — Sequential pages, but parallelize processing of each page's data
  4. Always ask: "Is there a batch/bulk endpoint? A GraphQL query that combines these?"
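The token-first pattern from item 2 looks like this in Python; get_auth_token and fetch are hypothetical stand-ins for real calls:

```python
import asyncio

# Hypothetical stand-ins for a real auth call and real authenticated requests.
async def get_auth_token():
    await asyncio.sleep(0.01)
    return "token-123"

async def fetch(path, token):
    await asyncio.sleep(0.01)
    return {"path": path, "auth": token}

async def main():
    token = await get_auth_token()   # step 1: the shared prerequisite stays sequential
    return await asyncio.gather(     # step 2: the token-dependent calls run together
        fetch('/api/users', token),
        fetch('/api/posts', token),
        fetch('/api/comments', token),
    )

users, posts, comments = asyncio.run(main())
```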

Error Handling

| Strategy | JS/TS | Python | When |
| --- | --- | --- | --- |
| Fail-fast | Promise.all | asyncio.gather() | All results required |
| Settle all | Promise.allSettled | gather(return_exceptions=True) | Partial results OK |
| Per-request retry | wrap each promise | wrap each coroutine | Flaky endpoints |
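A sketch of the "settle all" strategy in Python, using fake coroutines in place of real requests:

```python
import asyncio

# Fake coroutines standing in for real requests: one endpoint always fails.
async def ok(value):
    await asyncio.sleep(0.01)
    return value

async def boom():
    await asyncio.sleep(0.01)
    raise RuntimeError("endpoint down")

async def main():
    # return_exceptions=True makes failures come back as exception objects
    # instead of cancelling the whole gather (the "settle all" strategy).
    return await asyncio.gather(ok("users"), boom(), ok("comments"),
                                return_exceptions=True)

results = asyncio.run(main())
clean = [r if not isinstance(r, Exception) else None for r in results]
```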

JS/TS — per-request retry example

const withRetry = (fn, retries = 3) =>
  fn().catch(err => retries > 0 ? withRetry(fn, retries - 1) : Promise.reject(err))

const [users, posts] = await Promise.all([
  withRetry(() => fetch('/api/users').then(r => r.json())),
  withRetry(() => fetch('/api/posts').then(r => r.json())),
])

Python — per-request retry example

async def with_retry(coro_fn, retries=3):
    for attempt in range(retries):
        try:
            return await coro_fn()
        except Exception:
            if attempt == retries - 1:
                raise

users, posts = await asyncio.gather(
    with_retry(lambda: client.get('/api/users')),
    with_retry(lambda: client.get('/api/posts')),
)

Concurrency Control

For 10+ parallel requests, limit concurrency to avoid overwhelming the server:

| Language | Pattern |
| --- | --- |
| JS/TS | p-limit(5) or chunk array + sequential Promise.all per chunk |
| Python | asyncio.Semaphore(5) wrapping each coroutine |

JS/TS — p-limit

import pLimit from 'p-limit'

const limit = pLimit(5)
const results = await Promise.all(
  urls.map(url => limit(() => fetch(url).then(r => r.json())))
)

JS/TS — chunked processing

function chunk<T>(arr: T[], size: number): T[][] {
  return Array.from({ length: Math.ceil(arr.length / size) }, (_, i) =>
    arr.slice(i * size, i * size + size)
  )
}

const results = []
for (const batch of chunk(urls, 5)) {
  const batchResults = await Promise.all(
    batch.map(url => fetch(url).then(r => r.json()))
  )
  results.push(...batchResults)
}

Python — Semaphore

sem = asyncio.Semaphore(5)

async def limited_get(client, url):
    async with sem:
        return await client.get(url)

results = await asyncio.gather(
    *(limited_get(client, url) for url in urls)
)

Anti-patterns

  • Don't parallelize ordered side effects: POST /create then PUT /update must stay sequential
  • Don't Promise.all unbounded arrays without concurrency limit — you'll DDoS yourself
  • Don't ignore errors — use allSettled or try/catch per request
  • Don't parallelize inside loops where iteration N depends on N-1 — e.g., cursor-based pagination
  • Don't parallelize requests that share mutable state — race conditions are worse than slow code
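The cursor-based pagination case (pages stay sequential, each page's items are processed in parallel) can be sketched as follows; PAGES and both helpers are hypothetical stand-ins for a paginated API:

```python
import asyncio

# Hypothetical two-page cursor API: cursor -> (items, next_cursor).
PAGES = {None: (["a", "b"], 2), 2: (["c"], None)}

async def fetch_page(cursor):
    await asyncio.sleep(0.01)
    return PAGES[cursor]

async def process(item):
    await asyncio.sleep(0.01)
    return item.upper()

async def main():
    results, cursor = [], None
    while True:
        # Pages stay sequential: each fetch needs the previous page's cursor.
        items, next_cursor = await fetch_page(cursor)
        # Items within a page are independent, so they run in parallel.
        results += await asyncio.gather(*map(process, items))
        if next_cursor is None:
            return results
        cursor = next_cursor

results = asyncio.run(main())
```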

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

General

browser-curl

No summary provided by upstream source.

Repository Source · Needs Review
Coding

openclaw-version-monitor

Monitors OpenClaw GitHub releases, fetches the latest release notes, translates them into Chinese, and pushes them to Telegram and Feishu. Use for: (1) scheduled checks for new versions (2) pushing release-update notifications (3) generating Chinese-language release notes.

Archived Source · Recently Updated
Coding

ask-claude

Delegate a task to Claude Code CLI and immediately report the result back in chat. Supports persistent sessions with full context memory. Safe execution: no data exfiltration, no external calls, file operations confined to workspace. Use when the user asks to run Claude, delegate a coding task, continue a previous Claude session, or any task benefiting from Claude Code's tools (file editing, code analysis, bash, etc.).

Archived Source · Recently Updated
Coding

ai-dating

This skill enables dating and matchmaking workflows. Use it when a user asks to make friends, find a partner, run matchmaking, or provide dating preferences/profile updates. The skill should execute `dating-cli` commands to complete profile setup, task creation/update, match checking, contact reveal, and review.

Archived Source · Recently Updated