sf-ai-agentforce-observability

Agentforce session tracing extraction and analysis. TRIGGER when: user extracts STDM data from Data Cloud, analyzes agent session traces, debugs agent conversations via telemetry, or works with .parquet files from Agentforce. DO NOT TRIGGER when: testing agents (use sf-ai-agentforce-testing), Apex debug logs (use sf-debug), or building agents (use sf-ai-agentforce).

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Copy the command below and send it to your AI assistant to install this skill:

Install skill "sf-ai-agentforce-observability" with this command: npx skills add jaganpro/sf-skills/jaganpro-sf-skills-sf-ai-agentforce-observability

sf-ai-agentforce-observability: Agentforce Session Tracing Extraction & Analysis

Use this skill when the user needs trace-based observability, not just testing: extract Session Tracing Data Model (STDM) records, work with Parquet datasets, reconstruct session timelines, analyze topic/action latency, or debug agent behavior from Data 360 telemetry.

When This Skill Owns the Task

Use sf-ai-agentforce-observability when the work involves:

  • Data 360 / Session Tracing extraction
  • .parquet files from Agentforce telemetry
  • session timeline reconstruction
  • trace-driven debugging of topic routing, action failures, or latency
  • Polars / PyArrow-based analysis of large telemetry datasets

Delegate elsewhere when the user is:

  • testing agents (use sf-ai-agentforce-testing)
  • reading Apex debug logs (use sf-debug)
  • building or authoring agents (use sf-ai-agentforce)

Prerequisites That Must Exist

Before extraction, verify:

  • Data 360 is enabled
  • Session Tracing is enabled
  • the Session Tracing Data Model (STDM) version is sufficient
  • Einstein / Agentforce capabilities are enabled in the org
  • JWT / ECA auth for Data 360 access is configured

If auth is missing, hand off to sf-connected-apps.

Deep setup guide:


What This Skill Works With

Core storage / analysis model

  • extraction via Data 360 APIs
  • Parquet for storage efficiency
  • Polars for large-scale lazy analysis

Core STDM entities

At minimum, expect work around:

  • session
  • interaction / turn
  • interaction step
  • moment
  • message

GenAI Trust Layer / audit records may also be relevant for content-quality and generation debugging.

Full schema:


Required Context to Gather First

Ask for or infer:

  • target org alias
  • time window or date range
  • agent filter, if any
  • whether the goal is extraction, summary analysis, or single-session debugging
  • output location for extracted data
  • whether the user already has Parquet files on disk

Recommended Workflow

1. Verify setup and auth

Confirm Data 360 tracing exists and JWT/ECA auth is working.

2. Choose the extraction mode

Need → default approach:

  • recent telemetry snapshot → extract last N days
  • focused investigation → filtered extraction by date and agent
  • one broken conversation → extract or debug a single session tree
  • ongoing usage analytics → incremental extraction

3. Extract to Parquet

Use the provided scripts under scripts/ rather than reimplementing extraction logic.

4. Analyze with Polars

Common analysis goals:

  • session volume and duration
  • topic distribution
  • action step failures
  • latency hotspots
  • abandonment / escalation patterns
  • session-level timeline reconstruction

5. Convert findings into next actions

Typical outcomes:

  • topic mismatch → improve routing or descriptions
  • action failure → inspect Flow / Apex implementation
  • latency issue → optimize downstream action path
  • test gap → add targeted agent tests

High-Signal Operational Rules

  • treat STDM as read-only telemetry
  • expect ingestion lag; this is near-real-time telemetry, not live debugging
  • use date filters and focused extraction to avoid unnecessary volume / query cost
  • prefer Parquet over ad hoc JSON for durable analysis
  • use lazy Polars patterns for large datasets

Common pitfalls:

  • assuming missing data means no issue, when tracing may simply not be enabled
  • running huge broad queries without date or agent filters
  • trying to fix the agent inside this skill instead of handing off to authoring / testing skills

Output Format

When finishing, report in this order:

  1. What data was extracted or analyzed
  2. Scope (org, dates, agent filter, session IDs)
  3. Key findings
  4. Likely root causes
  5. Recommended next skill / next action

Suggested shape:

Observability task: <extract / analyze / debug-session>
Scope: <org, dates, agents, session ids>
Artifacts: <directories / parquet files>
Findings: <latency, routing, action, quality, abandonment patterns>
Root cause: <best current explanation>
Next step: <testing, agent fix, flow fix, apex fix>

Cross-Skill Integration

Need → delegate to (reason):

  • auth / JWT setup → sf-connected-apps (Data 360 access)
  • fix agent routing / behavior → sf-ai-agentscript (authoring corrections)
  • formal regression / coverage tests → sf-ai-agentforce-testing (reproducible test loops)
  • Flow-backed action debugging → sf-flow (declarative repair)
  • Apex-backed action debugging → sf-debug or sf-apex (code / log investigation)

Reference Map

Start here

Data model / querying

Analysis / debugging

Auth / troubleshooting


Score Guide

Score → meaning:

  • 90+ → strong telemetry-backed diagnosis
  • 75–89 → useful analysis with minor gaps
  • 60–74 → partial visibility only
  • < 60 → insufficient evidence; gather more telemetry

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

All four are repository-sourced listings flagged "Needs Review"; no summaries were provided by the upstream source.

  • sf-apex (General)
  • sf-lwc (General)
  • sf-metadata (General)
  • sf-flow (General)