gemini-deep-research

Trigger Gemini Deep Research via browser and save results to Notion. Use when the user asks to "deep research" a topic, says "gemini deep research", or wants an in-depth research report. Execute ALL steps in the main session (browser tool requires main session access).

Install skill "gemini-deep-research" with this command: npx skills add palmpalm7/gemini-deep-research-notion

Gemini Deep Research → Notion

Execution Mode

Run ALL steps in the MAIN SESSION. Do NOT spawn a subagent.

The browser tool (OpenClaw managed profile) is only available in the main session. Subagents cannot access the browser, so all browser automation must happen here.

Reply first: "🔬 Deep Research starting for: [topic]. This takes ~25 min. I'll update you when done."

Then execute all phases below sequentially.


Instructions

Complete ALL steps below in the main session.

Phase 1: Trigger Deep Research

  1. browser action=open profile=openclaw targetUrl="https://gemini.google.com/app"
  2. Snapshot, find the text input, and type the research query. Always prepend "请用中文回答。" ("Please answer in Chinese.") to the query so the research output is in Chinese.
  3. Click "工具" (Tools) button (has page_info icon) → click "Deep Research" in the menu
  4. Click Send to submit the query
  5. Wait for research plan to appear (~10s), then click "Start research" / "开始研究" button
    • If snapshot-click doesn't work, use JS:
      (() => {
        const btn = Array.from(document.querySelectorAll('button'))
          .find(b => /Start research|开始研究/.test(b.textContent.trim()));
        if (btn) { btn.click(); return 'clicked'; }
        return 'not found';
      })()
  6. Verify research started: button should be disabled, status shows "Researching X websites..." or "正在研究..."
  7. Save the conversation URL from the browser

Phase 2: Wait for Completion

  1. Run exec("sleep 1200") (20 minutes) + process(poll, timeout=1200000)
  2. After waking, check status via JS: (() => { var el = document.querySelectorAll('message-content')[1]; return el ? el.innerText.substring(0, 200) : 'NOT_FOUND'; })()
  3. Look for completion signals: "I've completed your research" or "已完成" ("completed")
  4. If still running, sleep another 600s and check again (max 2 retries)
  5. If failed/stuck after retries, announce the failure and exit
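The status check in steps 2–3 can be expressed as a small predicate; the completion phrases come from the steps above, and the function name `isComplete` is a hypothetical helper, not part of any tool API:

```javascript
// Hypothetical helper: classify the status text returned by the
// in-page snippet in step 2. Returns true only when one of the
// completion signals from step 3 appears in the text.
function isComplete(statusText) {
  if (statusText === 'NOT_FOUND') return false;
  return /I've completed your research|已完成/.test(statusText);
}
```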

Phase 3: Extract Report

  1. Count message-content elements: document.querySelectorAll('message-content').length
  2. The research report is in the LAST message-content element (if the count is 3, that's index 2)
  3. Get total length: document.querySelectorAll('message-content')[N]?.innerText?.length, where N is that last index
  4. Extract in 8000-char chunks using substring: document.querySelectorAll('message-content')[N]?.innerText?.substring(START, END)
  5. Concatenate all chunks into the full report text
  6. Save to a temp file: write full report to /tmp/deep_research_<timestamp>.md
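Steps 4–5 amount to a sliding-window extraction. A minimal sketch of the chunking arithmetic, run outside the browser — `chunkRanges` is an illustrative name, and the [START, END) pairs it produces are exactly the arguments passed to substring in step 4:

```javascript
// Compute the [start, end) windows for extracting a report of
// totalLength characters in 8000-char substring() calls.
function chunkRanges(totalLength, chunkSize = 8000) {
  const ranges = [];
  for (let start = 0; start < totalLength; start += chunkSize) {
    ranges.push([start, Math.min(start + chunkSize, totalLength)]);
  }
  return ranges;
}
```

Concatenating the substrings for these ranges in order reproduces the full report with no gaps or overlaps.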

Phase 4: Export to Notion

Parent page ID: 31a4cfb5-c92b-809f-9d8a-dd451718a017 (Deep Research Database)

  1. Read the Notion API key: cat ~/.config/notion/api_key
  2. Parse the report into Notion blocks:
    • Lines starting with # → heading_2/heading_3 blocks
    • Bullet points → bulleted_list_item blocks
    • Regular text → paragraph blocks
    • Add a callout at top: "🔬 Generated by Gemini Deep Research on YYYY-MM-DD"
    • Split rich_text at 2000 chars
  3. Create the page via Notion API:
    curl -s -X POST "https://api.notion.com/v1/pages" \
      -H "Authorization: Bearer $NOTION_KEY" \
      -H "Notion-Version: 2025-09-03" \
      -H "Content-Type: application/json" \
      -d '{"parent":{"page_id":"31a4cfb5-c92b-809f-9d8a-dd451718a017"},"icon":{"type":"emoji","emoji":"🔬"},"properties":{"title":{"title":[{"text":{"content":"TOPIC"}}]}},"children":[BLOCKS]}'
    
  4. If >100 blocks, append remaining via PATCH to /v1/blocks/{page_id}/children
  5. Rate limit: wait 0.5s between batch requests
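The parsing rules in step 2 can be sketched as a line-by-line converter. `toBlocks` and `richText` are hypothetical names; the mapping of # to heading_2 and ## to heading_3, and the block shapes, are assumptions based on the Notion block API, not code shipped with this skill:

```javascript
// Notion caps a single rich_text content string at 2000 chars,
// so long lines are split into multiple text segments (step 2).
function richText(content) {
  const parts = [];
  for (let i = 0; i < content.length; i += 2000) {
    parts.push({ text: { content: content.slice(i, i + 2000) } });
  }
  return parts;
}

// Map report lines to Notion block objects per the rules in step 2:
// headings, bullets, and plain paragraphs. Blank lines are dropped.
function toBlocks(markdown) {
  return markdown.split('\n').filter(l => l.trim().length > 0).map(line => {
    if (line.startsWith('## '))
      return { object: 'block', type: 'heading_3',
               heading_3: { rich_text: richText(line.slice(3)) } };
    if (line.startsWith('# '))
      return { object: 'block', type: 'heading_2',
               heading_2: { rich_text: richText(line.slice(2)) } };
    if (line.startsWith('- ') || line.startsWith('* '))
      return { object: 'block', type: 'bulleted_list_item',
               bulleted_list_item: { rich_text: richText(line.slice(2)) } };
    return { object: 'block', type: 'paragraph',
             paragraph: { rich_text: richText(line) } };
  });
}
```

The resulting array is what goes into the curl payload's "children" field, sliced to 100 blocks per request as noted in step 4.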

Phase 5: Announce

Report back with:

  • Research topic
  • Brief summary (2-3 key findings)
  • Notion page URL: https://www.notion.so/<page_id_without_dashes>
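The URL in the last bullet is the page ID with its dashes stripped; as a one-line sketch (`pageUrl` is an illustrative name):

```javascript
// Build the announce URL from the dashed UUID the Notion API returns.
function pageUrl(pageId) {
  return 'https://www.notion.so/' + pageId.replace(/-/g, '');
}
```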

Notes

  • Always use profile="openclaw" for browser
  • Deep Research is under "工具" (Tools) menu, NOT the model selector
  • If Gemini needs login, announce failure — user must log in manually
  • The full pipeline should complete in ~25-30 min total

