lazada-shopping-assistant

Lazada cross-site shopping assistant that guides users from intent to purchase. When the user expresses a desire to buy something on Lazada (e.g., "I want to buy...", "help me shop for...", "looking for... on Lazada"), this skill orchestrates a structured workflow: detect the country/site, analyze intent and usage scenarios, generate targeted search queries, scrape the search results page for product data via browser automation, and present a ranked recommendation list with rationale and alternatives. Supports the Lazada Singapore, Malaysia, Indonesia, Thailand, Philippines, and Vietnam regional sites.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.


Install skill "lazada-shopping-assistant" with this command: npx skills add gezhi-io/lazada-shopping-assistant

Lazada Shopping Assistant

Supported Environments: This skill is designed to be cross-platform and works in any agent environment that supports browser automation, including QoderWork, Hermes, OpenClaw, Claude Code, and similar MCP-enabled or browser-integrated platforms.

Core Capability: When a user expresses buying intent on Lazada, this skill automatically opens a browser, navigates to Lazada search results, and scrapes live product data (titles, prices, sold counts, shipping origins, LazMall badges). It then analyzes and ranks the findings into a curated recommendation list with clickable product links, pricing, social-proof metrics, and tiered picks (Best Match / Solid Alternative / Budget Pick).

User Expectation: The user only needs to say what they want (e.g., "I want to buy hiking gear", "Help me find wireless earbuds"). The agent then handles the full workflow automatically: country/site detection → intent & scenario clarification → targeted search → data extraction → ranked recommendations. No manual browsing required.

Pre-flight: Browser Automation Check

This skill requires a browser automation tool to scrape Lazada search results. Before starting the workflow:

Check for Available Browser Tool

  1. First, check for built-in MCP browser tools (tool names like mcp__builtin_browser__*):

    • If available → use the MCP Browser Execution Protocol in the Appendix

  2. If no MCP browser is available, check for the agent-browser CLI:

    which agent-browser 2>/dev/null && agent-browser --version
    
    • If installed → use the agent-browser Execution Protocol in the Appendix
    • If NOT installed → instruct the user to install it:
      npm install -g agent-browser
      agent-browser install
      
    • For OpenClaw environments: also install the skill from ClawHub:
      openclaw skills install matrixy/agent-browser-clawdbot
      
      Or manually copy the skill to ~/.openclaw/workspace/skills/agent-browser/ and set disabled: false in SKILL.md, then openclaw gateway restart.
  3. Fallback: web_fetch tool (limited)

    • If neither browser tool is available, use web_fetch but warn the user:
      • "⚠️ No browser automation available. Using lightweight page fetch — dynamic content (prices, sold counts) may be missing for JS-rendered pages like Lazada."
      • Parse whatever text content is returned and note missing data.

Trigger

Apply this skill when the user says anything like:

  • "I want to buy ... on Lazada"
  • "Help me find ... on Lazada"
  • "Looking for ..."
  • "Recommend me a ... from Lazada"
  • Any buying intent combined with "Lazada"

Workflow

Follow these steps in order. Do NOT skip steps.

Step 1: Identify Lazada Site (Country)

Lazada operates multiple regional sites. Ask the user which country they are shopping from.

Use AskUserQuestion with:

  • Question (translate into the user's language): "Which Lazada site are you shopping on?"
  • Options (MUST present all 6; translate labels into the user's language while keeping the domain URLs exact):
    1. Philippines (lazada.com.ph)
    2. Singapore (lazada.sg)
    3. Malaysia (lazada.com.my)
    4. Thailand (lazada.co.th)
    5. Indonesia (lazada.co.id)
    6. Vietnam (lazada.vn)

Map the choice to domain:

| Country     | Domain        | Currency |
|-------------|---------------|----------|
| Singapore   | lazada.sg     | SGD      |
| Malaysia    | lazada.com.my | MYR      |
| Indonesia   | lazada.co.id  | IDR      |
| Thailand    | lazada.co.th  | THB      |
| Philippines | lazada.com.ph | PHP      |
| Vietnam     | lazada.vn     | VND      |
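The country → domain/currency mapping can be kept as a simple lookup. A minimal JavaScript sketch, using the values from the skill's own reference table (the `LAZADA_SITES` table and `resolveSite` helper are illustrative names, not part of the skill):

```javascript
// Country -> site mapping, taken from the skill's reference table.
const LAZADA_SITES = {
  singapore:   { domain: "lazada.sg",     currency: "SGD" },
  malaysia:    { domain: "lazada.com.my", currency: "MYR" },
  indonesia:   { domain: "lazada.co.id",  currency: "IDR" },
  thailand:    { domain: "lazada.co.th",  currency: "THB" },
  philippines: { domain: "lazada.com.ph", currency: "PHP" },
  vietnam:     { domain: "lazada.vn",     currency: "VND" },
};

// Resolve the user's Step 1 answer into a domain + currency pair.
function resolveSite(country) {
  const site = LAZADA_SITES[country.trim().toLowerCase()];
  if (!site) throw new Error(`Unsupported Lazada country: ${country}`);
  return site;
}
```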

Step 2: Clarify Intent & Usage Scenario

Before searching, analyze the user's stated need and propose likely usage scenarios.

Intent Analysis Framework:

  • Who is the end user? (self, gift, child, elder, pet)
  • Where will it be used? (home, office, outdoor, gym, travel, car)
  • What is the primary problem or goal? (replace broken item, upgrade, first purchase, specific feature need)
  • Budget sensitivity: explicit or inferred?

Present 2-4 likely scenario options via AskUserQuestion (multi-select allowed). Frame the question as "Which of these best describes your need?" — translated into the user's language. Each scenario option label and description must also be translated while preserving the intent.

Examples for "wireless earbuds":

  • Daily commute / public transport (needs ANC, long battery)
  • Sports / gym (needs sweat resistance, secure fit)
  • Work-from-home calls (needs clear mic, comfort)
  • Budget casual listening (needs low price, decent sound)
  • Gift for someone (needs brand recognition, good packaging)

Step 3: Generate Search Queries

Based on the selected scenarios, construct exactly 2 different search queries that combine product keywords + intent modifiers derived from the user's scenario choices.

Rules:

  • Each query should be 2-5 words
  • Include the core product name
  • Crucial: Add intent-specific terms directly reflecting the selected scenarios (e.g., if user selected "home daily use" → add "refill" or "large pack"; if "office / work" → add "sachet", "3in1", or "portable")
  • The 2 queries must target different angles so they surface distinct result sets (e.g., one for home/bulk, one for office/convenience)
  • URL-encode spaces as + when constructing URLs

Example for instant coffee + home consumption + office pick-me-up:

  1. instant+coffee+refill+pouch (home angle: bulk/refill pack)
  2. instant+coffee+sachet+3in1 (office angle: convenient single-serve)
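The encoding rule above (spaces become +, with the official-store filter from Step 4 appended) can be sketched as a tiny helper; `buildSearchUrl` is a hypothetical name, not part of the skill:

```javascript
// Build a Lazada catalog search URL from a 2-5 word query.
// Spaces are encoded as "+" per the skill's rules; &service=official is
// appended by default to surface LazMall / official store listings.
function buildSearchUrl(domain, query, official = true) {
  const encoded = query.trim().split(/\s+/).join("+");
  const base = `https://www.${domain}/catalog/?q=${encoded}`;
  return official ? `${base}&service=official` : base;
}
```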

Step 4: Execute Search & Extract Data

Execute searches using the 2 generated queries. To maximize result coverage while minimizing requests:

  1. Execute the first query as the primary search
  2. Execute the second query to capture a different product angle
  3. Merge and de-duplicate results from both searches before ranking
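The merge-and-de-duplicate substep can be sketched as follows; `mergeResults` is an illustrative helper that keys on the product href, the same field the extraction script returns:

```javascript
// Merge product lists from the two searches, keeping the first occurrence
// of each href so primary-query results win on duplicates.
function mergeResults(first, second) {
  const seen = new Set();
  return [...first, ...second].filter(item => {
    if (seen.has(item.href)) return false;
    seen.add(item.href);
    return true;
  });
}
```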

Search URL pattern (always append &service=official for LazMall / official store results):

https://www.{domain}/catalog/?q={encoded_query}&service=official

Note: &service=official filters for LazMall / official store products, improving trustworthiness and delivery reliability. Only omit if the filtered results are too sparse.

Browser Execution Protocol (MCP)

For each search query, follow the browser automation sequence defined in the MCP Browser Execution Protocol in the Appendix. The general flow is:

  1. Initialize tab context and create a new tab.
  2. Navigate to the search URL and wait for the page to load.
  3. Extract product data using the JavaScript DOM extraction script below.
  4. If DOM extraction fails or lacks data, fall back to page-text extraction.
  5. Do NOT click into individual product pages unless the user asks for detailed specs.

Robust Extraction Script:

// Top level is a single expression — no top-level return — as javascript_exec requires.
Array.from(document.querySelectorAll('a[href*="/products/"]')).map(a => {
  // The product title usually lives in the image alt text; fall back to the link title.
  const img = a.querySelector('img');
  const title = img ? img.alt : (a.getAttribute('title') || '');
  if (!title || title.length < 5) return null;

  // Walk up to the enclosing product card. The class-based selectors are fallbacks
  // for Lazada's obfuscated class names, which may change between deployments.
  const card = a.closest('[data-qa-locator="product-item"]')
    || a.closest('[class*="Bm3ON"]')
    || a.closest('[class*="item"]')
    || a.parentElement?.parentElement?.parentElement;

  let price = '';
  let sold = '';
  let lazmall = false;
  let shipping = '';

  if (card) {
    const priceEl = card.querySelector('span[class*="price"]')
      || card.querySelector('[class*="aBrP0"]')
      || card.querySelector('[class*="qzWxB"]')
      || card.querySelector('[class*="Price"]');
    price = priceEl ? priceEl.textContent.trim() : '';

    const soldEl = card.querySelector('span[class*="sold"]')
      || card.querySelector('[class*="rating"]')
      || card.querySelector('[class*="Sales"]');
    sold = soldEl ? soldEl.textContent.trim() : '';

    lazmall = !!card.querySelector('img[alt*="LazMall"], [class*="lazMall"], [class*="LazMall"]');

    const shipEl = card.querySelector('span[class*="location"]')
      || card.querySelector('span[class*="origin"]');
    shipping = shipEl ? shipEl.textContent.trim() : '';
  }

  return {
    title: title.trim().substring(0, 120),
    href: a.href,
    price: price,
    sold: sold,
    lazmall: lazmall,
    shipping: shipping
  };
}).filter(v => v && v.title)                                          // drop skipped entries
  .filter((v, i, arr) => arr.findIndex(t => t.href === v.href) === i) // de-dupe by href
  .slice(0, 15)                                                       // cap at 15 results

Fallback Extraction — Page Text

If the JavaScript extraction returns undefined, an empty array, or missing price/sold data for most items:

  • Immediately call the page-text extraction tool (e.g., get_page_text / take_snapshot) with the same tab/page.
  • Parse the returned text manually for:
    • Price: look for currency symbols (₱, S$, RM, etc.) followed by numbers near product titles
    • Sold count: look for patterns like 14.9K sold, 537 sold, (5427)
    • Shipping origin: look for location tags like Metro Manila, Bulacan, Hong Kong, Overseas
    • LazMall badge: scan for the word "LazMall" near product entries
  • Cross-reference the text-extracted data with the DOM-extracted links to build complete records.
  • Do NOT retry the same JavaScript extraction script if it returns empty results. Switch to the text fallback immediately.
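A minimal sketch of the text-fallback parsing, assuming the patterns listed above. The currency list (₱, S$, RM, Rp, ฿, ₫) and number formats are assumptions covering the six sites, so treat the regexes as starting points rather than a complete parser:

```javascript
// Pull price and sold-count candidates out of raw page text.
// Currency symbol set is an assumption for the six regional sites.
function parseFallbackText(text) {
  const prices = text.match(/(?:₱|S\$|RM|Rp|฿|₫)\s?[\d.,]+/g) || [];
  const sold = text.match(/[\d.,]+K?\s+sold/gi) || [];
  return { prices, sold };
}
```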

Step 5: Analyze & Rank Products

From the merged extraction data, categorize and rank products into tiers.

Critical: Every recommended item MUST include its direct Lazada product link (href extracted in Step 4). A recommendation without a clickable product link is incomplete.

Tier 1: Best Match (1-2 items)

  • Highest alignment with stated intent/scenario
  • Strong social proof (high sold count, good rating count)
  • Favorable price-to-feature ratio
  • Local warranty or LazMall preferred when available

Tier 2: Solid Alternative (1-2 items)

  • Slightly different feature mix but still relevant
  • May be from different price band
  • Good for comparison

Tier 3: Budget Pick (1 item, if applicable)

  • Lowest price among viable options
  • Acceptable reviews
  • Good for price-sensitive users
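To compare social proof across tiers, sold-count strings like 14.9K sold must first be normalized to numbers. A hedged sketch: `soldToNumber` is a hypothetical helper, and the K/M suffix handling is an assumption about Lazada's display format:

```javascript
// Convert "14.9K sold" / "537 sold" style strings into comparable integers.
// Returns 0 when no usable count is present.
function soldToNumber(sold) {
  const m = /([\d.,]+)\s*([kKmM])?/.exec(sold || "");
  if (!m) return 0;
  const base = parseFloat(m[1].replace(/,/g, ""));
  const mult = { K: 1e3, M: 1e6 }[(m[2] || "").toUpperCase()] || 1;
  return Math.round(base * mult);
}
```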

For each recommended item, include:

  • Product name (truncated if too long)
  • Direct Lazada product link (mandatory — present as a clickable markdown link)
  • Price and discount (if visible)
  • Sold count / social proof
  • Why it fits the user's intent (1 sentence)
  • Key caveat (if any, e.g., "ships from China, longer delivery")

Multi-Dimensional Comparison Table (mandatory): Before listing the tiered recommendations, present a markdown comparison table that includes ALL shortlisted candidates (up to 5 rows). This gives the user an at-a-glance view.

Base columns (always include): Product, Price, Link, Best For. Optional columns (only include if data was successfully extracted for most products): Sold, Shipping, LazMall.

  • If sold, shipping, or lazmall values are empty or missing for the majority of extracted items, omit those columns entirely rather than showing empty cells or placeholder symbols.
  • Use "✓" / "—" for LazMall only when the data is actually available for most items.
  • Keep Product names concise (≤ 40 chars). The table must be followed by the detailed tier breakdowns.

Step 6: Present & Follow Up

Format the response in the same language the user used (e.g., Thai → Thai, Chinese → Chinese, English → English). All template text — including the comparison table headers, tier labels, "How to choose" guidance, LazMall CTA, and the follow-up question — must be translated into that language. Use natural prose, minimal bullet formatting unless the user asks for lists.

Response structure:

  1. Brief summary of what was searched and why (mention the 2 query angles used)
  2. Multi-dimensional comparison table covering all shortlisted items (price, sold count, shipping origin, LazMall status, link, best-for scenario)
  3. Detailed tiered recommendations (Tier 1 Best Match → Tier 2 Solid Alternative → Tier 3 Budget Pick) with clickable product links, pricing, sold count, and per-item rationale
  4. A short "How to choose" guidance paragraph explaining which tier fits which user priority
  5. LazMall Advantage CTA (mandatory): After the recommendations, present a brief paragraph explaining why LazMall / Official Store products are preferable. Translate the copy below into the user's language while keeping the meaning intact:

Why shop LazMall? The products above are filtered from Official Store listings. Choosing LazMall means 100% authentic products shipped directly by brands or authorized distributors, 30-day hassle-free returns, faster local fulfillment, and exclusive vouchers. If you see a LazMall badge on any of these items, that's your safest bet for quality and after-sales support.

  6. Open-ended follow-up question (translate into the user's language): "Would you like me to check details on any of these, or search with different keywords?"

Anti-Patterns

  • Do NOT assume the user's country. Always ask Step 1.
  • Do NOT search blindly with only the raw product name. The search queries MUST reflect the scenarios the user selected in Step 2; otherwise the scenario questioning is meaningless.
  • Do NOT generate only 1 search query. Always produce 2 differentiated queries.
  • Do NOT omit &service=official by default. Use it to surface LazMall / official store products.
  • Do NOT present recommendations without direct Lazada product links. Every item must have a clickable link.
  • Do NOT skip the multi-dimensional comparison table. It is mandatory before the tiered breakdown.
  • Do NOT click through multiple product pages during initial recommendation. Stay on search results.
  • Do NOT present more than 5 products in the first pass. Curate tightly.
  • Do NOT ignore shipping origin / warranty info when available. These matter for Lazada shoppers.
  • Do NOT reuse existing tabs without first calling tabs_context_mcp. Always create a new tab for your search to avoid interfering with other agents.
  • Do NOT retry the same JavaScript extraction script if it returns undefined or empty results. Switch to get_page_text fallback immediately.

Domain & Currency Reference

| Region | Domain        | Example Search URL |
|--------|---------------|--------------------|
| SG     | lazada.sg     | https://www.lazada.sg/catalog/?q=wireless+earbuds&service=official |
| MY     | lazada.com.my | https://www.lazada.com.my/catalog/?q=wireless+earbuds&service=official |
| ID     | lazada.co.id  | https://www.lazada.co.id/catalog/?q=wireless+earbuds&service=official |
| TH     | lazada.co.th  | https://www.lazada.co.th/catalog/?q=wireless+earbuds&service=official |
| PH     | lazada.com.ph | https://www.lazada.com.ph/catalog/?q=wireless+earbuds&service=official |
| VN     | lazada.vn     | https://www.lazada.vn/catalog/?q=wireless+earbuds&service=official |

Note: The /catalog/?q= path redirects to the tag-based search page automatically. This is the stable entry point. Always append &service=official to surface LazMall / official store products.


Appendix: Browser Automation Reference by Platform

This section consolidates the detailed browser automation instructions for all supported platforms. Refer to the relevant subsection based on the tool detected during Pre-flight.

MCP Browser Execution Protocol

Use this when built-in MCP browser tools (e.g., mcp__builtin_browser__*) are available, such as in QoderWork and other MCP-native environments.

For each search query, follow this exact sequence. Do NOT skip substeps.

A. Initialize Tab Context

  • Call mcp__builtin_browser__tabs_context_mcp to get the current MCP Tab Group.

B. Create a New Tab

  • Call mcp__builtin_browser__tabs_create_mcp with {"url": "about:blank"} to create a fresh tab.
  • Critical: Do NOT reuse existing tabs from other agents. Always create your own tab.

C. Navigate to Search URL

  • Call mcp__builtin_browser__navigate with the search URL and the new tabId.
  • Wait for the page to fully load (title should reflect the search query).

D. Extract Product Data via JavaScript

  • Call mcp__builtin_browser__javascript_tool with:
    • action: "javascript_exec"
    • tabId: the tab created in step B
    • text: the extraction script from Step 4 (use parameter name text, NOT script)
    • Important: Do NOT use a top-level return statement in the script — write the final expression directly. (return inside callbacks such as .map() is fine.)

E. Fallback to Page Text

  • If JavaScript extraction returns undefined, an empty array, or missing data for most items, immediately call mcp__builtin_browser__get_page_text with the same tabId.
  • Parse manually for prices, sold counts, shipping origins, and LazMall badges.

F. Link Verification (if uncertain)

  • The extracted href values may appear as short links like https://www.lazada.com.ph/products/pdp-i{NUMBER}.html. These are valid Lazada product links that redirect to the full SEO URL. If a link looks suspicious, perform a quick navigate to verify the redirect resolves to a product page.

agent-browser Execution Protocol

Use this when the agent-browser CLI is available but MCP browser tools are not.

# Set session and open search URL
export AGENT_BROWSER_SESSION=lazada_search
agent-browser open "https://www.{domain}/catalog/?q={encoded_query}&service=official"

# Wait for page load
agent-browser wait --load networkidle

# Extract product data as text
agent-browser get text "body" --json

Parsing: The JSON output contains a data.text field with the page's visible text. Parse it for:

  • Product titles: Lines of text immediately preceding a price line
  • Prices: Lines matching ₱|S\$|RM|₫|฿ followed by numbers
  • Sold counts: Patterns like 14.9K sold, 537 sold
  • Ratings: Patterns like (5427)
  • Discounts: Lines containing Off, Voucher save
  • Shipping origin: Location patterns (Metro Manila, Bulacan, etc.)

Getting product links:

# Get snapshot with refs
agent-browser snapshot -i --json
# Parse the refs to find product links (role: "link", name contains product title)
# Get href for each product:
agent-browser get attr @eXXX "href" --json

Close browser when done:

agent-browser close

Claude Code Browser Execution Protocol

When running this skill in Claude Code, use the Chrome DevTools MCP tools for browser automation. The tool mapping is:

| Original Tool (builtin)                 | Claude Code Equivalent |
|-----------------------------------------|------------------------|
| mcp__builtin_browser__tabs_context_mcp  | mcp__plugin_chrome-devtools-mcp_chrome-devtools__list_pages |
| mcp__builtin_browser__tabs_create_mcp   | mcp__plugin_chrome-devtools-mcp_chrome-devtools__new_page |
| mcp__builtin_browser__navigate          | mcp__plugin_chrome-devtools-mcp_chrome-devtools__navigate_page |
| mcp__builtin_browser__javascript_tool   | mcp__plugin_chrome-devtools-mcp_chrome-devtools__evaluate_script |
| mcp__builtin_browser__get_page_text     | mcp__plugin_chrome-devtools-mcp_chrome-devtools__take_snapshot (a11y tree text) |

For each search query, follow this exact sequence:

A. Create a New Tab

  • Call mcp__plugin_chrome-devtools-mcp_chrome-devtools__new_page with the search URL and timeout: 30000.
  • The tool returns a page list — note the selected page ID.

B. Wait for Page Load

  • Call mcp__plugin_chrome-devtools-mcp_chrome-devtools__wait_for with text containing a keyword expected on the results page (e.g., ["coffee", "Coffee"]) and timeout: 15000.

C. Extract Product Data via JavaScript

  • Call mcp__plugin_chrome-devtools-mcp_chrome-devtools__evaluate_script with the extraction script from Step 4.
  • The script must be wrapped as an arrow function: () => { return /* script body */; }
  • If extraction returns empty/undefined, fall back to mcp__plugin_chrome-devtools-mcp_chrome-devtools__take_snapshot with verbose: true to get the page text.

D. Cleanup

  • After extracting data, close the tab with mcp__plugin_chrome-devtools-mcp_chrome-devtools__close_page using the page ID.

Important: Do NOT use tabId parameters with Chrome DevTools tools — page selection is handled by select_page or by working on the currently selected page.
