sf-data

Salesforce data operations with 130-point scoring. TRIGGER when: user creates test data, performs bulk import/export, uses sf data CLI commands, or needs data factory patterns for Apex tests. DO NOT TRIGGER when: SOQL query writing only (use sf-soql), Apex test execution (use sf-testing), or metadata deployment (use sf-deploy).

Safety Notice

This listing is imported from skills.sh public index metadata. Review the upstream SKILL.md and repository scripts before running anything.

Install skill "sf-data" with this command: npx skills add jaganpro/sf-skills/jaganpro-sf-skills-sf-data

Salesforce Data Operations Expert (sf-data)

Use this skill when the user needs Salesforce data work: record CRUD, bulk import/export, test data generation, cleanup scripts, or data factory patterns for validating Apex, Flow, or integration behavior.

When This Skill Owns the Task

Use sf-data when the work involves:

  • sf data CLI commands
  • record creation, update, delete, upsert, export, or tree import/export
  • realistic test data generation
  • bulk data operations and cleanup
  • Apex anonymous scripts for data seeding / rollback

Delegate elsewhere when the user is:

  • writing SOQL queries only (use sf-soql)
  • executing Apex tests (use sf-testing)
  • deploying metadata (use sf-deploy)

Important Mode Decision

Confirm which mode the user wants:

| Mode | Use when |
| --- | --- |
| Script generation | they want reusable `.apex`, CSV, or JSON assets without touching an org yet |
| Remote execution | they want records created or changed in a real org now |

Do not assume remote execution if the user may only want scripts.


Required Context to Gather First

Ask for or infer:

  • target object(s)
  • org alias, if remote execution is required
  • operation type: query, create, update, delete, upsert, import, export, cleanup
  • expected volume
  • whether this is test data, migration data, or one-off troubleshooting data
  • any parent-child relationships that must exist first

Core Operating Rules

  • sf-data acts on remote org data unless the user explicitly wants local script generation.
  • Objects and fields must already exist before data creation.
  • For automation testing, prefer 251+ records when bulk behavior matters; Apex triggers process records in chunks of 200, so 251 records forces at least two trigger invocations.
  • Always think about cleanup before creating large or noisy datasets.
  • Never use real PII in generated test data.
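The bulk-volume and no-PII rules above can be sketched in one script. This is a minimal example that generates a 251-row Account CSV with clearly synthetic values; the file name, field choices, and the Industry value are illustrative assumptions, not conventions from this skill.

```shell
# Generate a 251-row Account CSV with synthetic, non-PII values.
# 251 records guarantees at least two 200-record trigger chunks on import.
printf 'Name,Industry\n' > accounts.csv
i=1
while [ "$i" -le 251 ]; do
  printf 'TestAccount-%03d,Technology\n' "$i" >> accounts.csv
  i=$((i + 1))
done
wc -l accounts.csv   # 252 lines: 1 header + 251 records
```

The predictable `TestAccount-` prefix also makes a later delete-by-pattern cleanup pass trivial.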

If metadata is missing, stop and hand off to:

  • sf-metadata (discover the existing object / field structure)
  • sf-deploy (deploy the missing schema first)

Recommended Workflow

1. Verify prerequisites

Confirm object / field availability, org auth, and required parent records.

2. Choose the smallest correct mechanism

| Need | Default approach |
| --- | --- |
| small one-off CRUD | `sf data` single-record commands |
| large import/export | Bulk API 2.0 via `sf data ... bulk` |
| parent-child seed set | tree import/export |
| reusable test dataset | factory / anonymous Apex script |
| reversible experiment | cleanup script or savepoint-based approach |
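As a sketch of how the mechanisms above map to commands: the org alias `dev`, `accounts.csv`, and `scripts/seed.apex` are assumed names, and the guard keeps the script inert when the sf CLI is not installed.

```shell
# Sketch: one representative command per mechanism.
# "dev", accounts.csv, and scripts/seed.apex are assumed names.
if command -v sf >/dev/null 2>&1; then
  SF_AVAILABLE=1
  # small one-off CRUD
  sf data create record --sobject Account --values "Name='Smoke Test'" --target-org dev
  # large import: Bulk API 2.0 upsert from a CSV
  sf data upsert bulk --sobject Account --file accounts.csv --external-id Id --wait 10 --target-org dev
  # parent-child seed set: export a related tree for later re-import
  sf data export tree --query "SELECT Id, Name, (SELECT LastName FROM Contacts) FROM Account" --output-dir data --target-org dev
  # reusable dataset via anonymous Apex
  sf apex run --file scripts/seed.apex --target-org dev
else
  SF_AVAILABLE=0
  echo "sf CLI not installed; commands shown for reference only"
fi
```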

3. Execute or generate assets

Use the built-in templates under assets/ when they fit:

  • assets/factories/
  • assets/bulk/
  • assets/cleanup/
  • assets/soql/
  • assets/csv/
  • assets/json/

4. Verify results

Check counts, relationships, and record IDs after creation or update.

5. Leave cleanup guidance

Provide exact cleanup commands or rollback assets whenever data was created.


High-Signal Rules

Bulk safety

  • use bulk operations for large volumes
  • test automation-sensitive behavior with 251+ records where appropriate
  • avoid one-record-at-a-time patterns for bulk scenarios

Data integrity

  • include required fields
  • verify parent IDs and relationship integrity
  • account for validation rules and duplicate constraints

Cleanup discipline

Prefer one of:

  • delete-by-ID
  • delete-by-pattern
  • delete-by-created-date window
  • rollback / savepoint patterns for script-based test runs
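A delete-by-pattern pass might look like the sketch below. The org alias `dev`, the object, and the `TestAccount-` name prefix are assumptions; the guard keeps the script inert without the sf CLI.

```shell
# Delete-by-pattern sketch: export matching IDs, then bulk-delete them.
# "dev" and the Name prefix are assumed names.
if command -v sf >/dev/null 2>&1; then
  CLEANUP_RAN=1
  sf data query \
    --query "SELECT Id FROM Account WHERE Name LIKE 'TestAccount-%'" \
    --result-format csv --target-org dev > ids.csv
  sf data delete bulk --sobject Account --file ids.csv --wait 10 --target-org dev
else
  CLEANUP_RAN=0
  echo "sf CLI not installed; cleanup shown for reference only"
fi
```

Querying the IDs into a file first also leaves an audit trail of exactly what was removed.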

Common Failure Patterns

| Error | Likely cause | Default fix direction |
| --- | --- | --- |
| INVALID_FIELD | wrong field API name or FLS issue | verify schema and access |
| REQUIRED_FIELD_MISSING | mandatory field omitted | include required values |
| INVALID_CROSS_REFERENCE_KEY | bad parent ID | create / verify parent first |
| FIELD_CUSTOM_VALIDATION_EXCEPTION | validation rule blocked the record | use valid test data or adjust setup |
| DUPLICATE_VALUE | unique-field conflict | query existing data first |
| bulk limits / timeouts | wrong tool for the volume | switch to bulk / staged import |
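The parent-first fix for INVALID_CROSS_REFERENCE_KEY can be sketched as: create the parent, capture its new ID from the CLI's `--json` envelope (which exposes it at `result.id`), then reference that ID from the child. The org alias `dev` and all field values are assumptions.

```shell
# Create the parent first, then the child that references it.
# "dev" and the field values are assumed; guarded when sf is absent.
if command -v sf >/dev/null 2>&1; then
  PARENT_ID=$(sf data create record --sobject Account \
      --values "Name='Parent Co'" --target-org dev --json \
    | python3 -c "import json,sys; print(json.load(sys.stdin)['result']['id'])")
  sf data create record --sobject Contact \
    --values "LastName='Child' AccountId='$PARENT_ID'" --target-org dev
else
  PARENT_ID=""
  echo "sf CLI not installed; shown for reference only"
fi
```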

Output Format

When finishing, report in this order:

  1. Operation performed
  2. Objects and counts
  3. Target org or local artifact path
  4. Record IDs / output files
  5. Verification result
  6. Cleanup instructions

Suggested shape:

Data operation: <create / update / delete / export / seed>
Objects: <object + counts>
Target: <org alias or local path>
Artifacts: <record ids / csv / apex / json files>
Verification: <passed / partial / failed>
Cleanup: <exact delete or rollback guidance>

Cross-Skill Integration

| Need | Delegate to | Reason |
| --- | --- | --- |
| discover object / field structure | sf-metadata | accurate schema grounding |
| run bulk-sensitive Apex validation | sf-testing | test execution and coverage |
| deploy missing schema first | sf-deploy | metadata readiness |
| implement production logic consuming the data | sf-apex or sf-flow | behavior implementation |

Reference Map

  • Start here
  • Query / bulk / cleanup
  • Examples / limits


Score Guide

| Score | Meaning |
| --- | --- |
| 117+ | strong production-safe data workflow |
| 104–116 | good operation with minor improvements possible |
| 91–103 | acceptable but review advised |
| 78–90 | partial / risky patterns present |
| < 78 | blocked until corrected |

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

  • sf-apex
  • sf-metadata
  • sf-flow

No summaries were provided by the upstream source.