gtm-engineer-onboarding-coach

Coach a newly-hired or recently-promoted GTM (Go-to-Market) Engineer, RevOps Engineer, Marketing Engineer, or Sales Engineer-of-systems through their first 90 days and ongoing role definition. GTM Engineering is a 2024-2026 role that didn't exist in most companies until recently — it sits at the intersection of RevOps, growth marketing, and software engineering, and the role definition is genuinely inconsistent across companies. Covers role disambiguation (GTM Engineer vs RevOps Engineer vs Marketing Engineer vs Marketing Ops vs Sales Engineer vs Solutions Engineer — these have overlapping skills but different reporting lines, KPIs, and success criteria), the principal stakeholder map (CMO vs CRO vs Head of RevOps vs Head of Growth — who you actually report to defines your scope), the first-30-days listening tour with sales/marketing/RevOps/data, the GTM-stack audit (CRM, MAP, sales engagement, prospecting, enrichment, identity, reverse-ETL, attribution, pipeline-data warehousing), the build-vs-buy decisions for the modern GTM stack (Apollo vs ZoomInfo vs Clay vs ZoomInfo Copilot; Outreach vs Salesloft vs Apollo cadences; HubSpot vs Salesforce vs Pipedrive; Marketo vs Pardot vs Customer.io), the AI-native GTM tooling explosion (Clay, Default, Common Room, RegieAI, Aomni, Persana — what's real and what's vapor), the prioritization framework (foundational data quality > attribution accuracy > pipeline velocity automations > AI-driven prospecting > experimental growth automations), the most common org failure modes (CRO uses you as a Salesforce admin, CMO uses you as a Marketo admin, no one's running the GTM strategy so you become a reactive ticket-runner), the GTM Engineer's specific failure modes (over-engineering attribution, ego in tooling, scope creep into data engineering, building automations no rep uses), the natural exit ramps (Head of RevOps, Director of GTM Engineering, GTM consultant, founder of a GTM tools company), and what realistic compensation looks like at each stage. Use when someone says "first 90 days as GTM Engineer", "what does a GTM Engineer actually do", "GTM Engineer vs RevOps Engineer", "Clay automations", "we need to build GTM stack", "Head of RevOps wants me to do everything". Triggers on phrases like "GTM Engineer", "Go-to-Market Engineer", "RevOps Engineer", "Marketing Engineer", "Marketing Ops Engineer", "GTM stack", "Clay GTM automation", "Apollo Outreach Salesloft", "GTM AI tooling", "RevOps tooling", "pipeline automation", "attribution modeling", "lead enrichment", "Common Room", "Default GTM", "Aomni", "Persana".

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Install skill "gtm-engineer-onboarding-coach" with this command: npx skills add charlie-morrison/gtm-engineer-onboarding-coach

gtm-engineer-onboarding-coach

Coach a newly-hired or recently-promoted GTM Engineer through onboarding and ongoing role definition. GTM Engineering is among the youngest functional roles in modern SaaS — most companies are still figuring out what it is, what it owns, and where it reports. Wrong choices in the first 90 days create either a ticket-runner role that frustrates the IC and wastes the role's leverage, or sprawling pet projects that don't move pipeline.

This skill parallels revops-leader-onboarding-coach and chief-of-staff-onboarding-coach in its role ambiguity, but with a stronger technical-IC-vs-leader axis: most GTM Engineers are senior ICs, not managers.

When to engage

Trigger when:

  • "I'm starting as GTM Engineer next month — what should my first 90 days look like?"
  • "Just promoted to first-ever GTM Engineer at our company — how do I scope this role?"
  • "Our GTM stack is Apollo + Outreach + HubSpot + Clay + Common Room + Marketo — what's broken?"
  • "Should we hire a GTM Engineer or a RevOps Engineer?"
  • "CRO wants me to fix the SDR pipeline; CMO wants me to fix attribution; head of RevOps wants me to fix routing"
  • "Do I report to RevOps or Marketing?"
  • "Building Clay automations all day — is this what GTM Engineering is?"

Don't engage when:

  • The user is a Marketing Operations Specialist with no engineering skills — different role; route to revops-leader or marketing-ops coaching
  • The user is a Sales Engineer (technical pre-sales) — totally different role, despite the "engineer" word
  • The user is a software engineer being asked to "do some GTM work" — this is a side-project conversation, not an onboarding one

Diagnostic intake (run first)

  1. What's your official job title? — GTM Engineer, RevOps Engineer, Marketing Engineer, GTM Ops, Marketing Ops Engineer, RevOps Architect — these mean different things at different companies.
  2. Who do you report to? — CRO, CMO, Head of RevOps, VP Growth, COO. This is the single most important variable. Reporting line determines KPIs, scope, and political reality.
  3. Were you hired into a new role or replacing someone? — First-ever-in-role is fundamentally different from succession.
  4. Stage of company? — Pre-PMF, post-PMF/Series A, Series B/C scaling, growth/PE. The GTM stack and priorities shift dramatically by stage.
  5. What's your background? — Coming from RevOps Manager? Marketing Ops Manager? Software Engineering? Sales? Data? Each has a different first-90-days approach because your gaps are different.
  6. Is there a team? — IC role, manager of 2-3, hands-on player-coach? Most GTM Engineers are senior ICs in 2024-2026.
  7. What's the stated #1 priority for first 90 days? — Pipeline acceleration? Attribution fix? Data quality? RevOps platform consolidation? AI prospecting build-out? Be skeptical of "all of the above".

The GTM Engineer role disambiguation

This is the hardest part of the role. The same title means wildly different things at different companies. Sort yourself into one of these archetypes — or know which one you're being pushed toward:

Archetype A: GTM Automation Engineer (most common)

Ownership: Building automations across the GTM stack. Lead routing logic, sales cadence triggers, enrichment workflows, lead scoring, lifecycle stage transitions. Tools: Clay, Default, Tray, Workato, Zapier, native Salesforce flows, native HubSpot workflows. Reports to: Head of RevOps or Head of Marketing Ops. Looks like: 60% Clay/Default automations, 20% CRM admin, 15% data work, 5% strategy. Compensation (2026): $140-220K base + 10-25% bonus + equity. Total $170-280K at Series B+.
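Lead-routing logic, a staple of Archetype A, is often just territory rules plus a round-robin fallback. A minimal sketch where all owner names, regions, and rules are hypothetical (real routing usually lives in Salesforce flows, HubSpot workflows, or a tool like Clay or Default):

```python
# Illustrative round-robin lead router with a territory override.
# Owner names and regions are invented for the example.
from itertools import cycle

TERRITORY_OWNERS = {"EMEA": "aisha", "APAC": "kenji"}
na_round_robin = cycle(["sam", "priya", "lee"])  # North America pool

def route(lead: dict) -> str:
    """Territory rule wins; otherwise round-robin the NA pool."""
    owner = TERRITORY_OWNERS.get(lead.get("region", ""))
    return owner or next(na_round_robin)

print([route({"region": r}) for r in ["EMEA", "NA", "NA", "APAC", "NA", "NA"]])
# → ['aisha', 'sam', 'priya', 'kenji', 'lee', 'sam']
```

The point is not the code; it's that rules this simple are frequently what a 24-hour routing latency is hiding behind.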

Archetype B: GTM Data Engineer

Ownership: Pipeline data warehouse + attribution + reporting infrastructure + reverse-ETL of insights to GTM tools. Tools: Snowflake/BigQuery, dbt, Census/Hightouch, attribution platforms (Bizible/Dreamdata/HockeyStack), Looker/Sigma/Hex. Reports to: Head of RevOps, sometimes Head of Data with dotted line to RevOps. Looks like: 40% data modeling, 30% attribution, 15% reverse-ETL, 15% reporting. Compensation: $160-240K base, total $200-320K. Often higher than Archetype A because data engineering scales.

Archetype C: AI-Native GTM Engineer (newest, hottest, most ambiguous)

Ownership: AI-driven prospecting (Clay enriched with LLM-driven research), AI-generated outbound at scale, AI SDR augmentation (Aomni, Persana, RegieAI, 11x), conversation intelligence + signal-driven plays. Tools: Clay (heavy), Aomni, Persana, RegieAI, 11x.ai, Common Room, Default, custom LLM prompts. Reports to: Head of Growth, sometimes CRO directly, sometimes Head of RevOps. Looks like: 50% AI-driven prospecting build-out, 25% experiment design, 15% data plumbing, 10% reporting on what's working. Compensation: $150-260K base, total $190-340K. Wide variance because role is new.

Archetype D: GTM Platform Engineer (rare, 1000+ employee companies)

Ownership: Salesforce architecture, full-stack GTM platform (CPQ, billing, partner portals, customer 360). Tools: Salesforce, CPQ, billing systems, integration platforms. Reports to: VP RevOps or VP Engineering. Looks like: 80% Salesforce/platform engineering, 20% process design. Compensation: $180-260K base.

Archetype E: Hybrid / Player-Coach (pre-PMF startups)

Ownership: Everything across A, B, C, D depending on the week. Tools: Whatever the stack happens to be. Reports to: Founder, CEO, CRO, or CMO. Looks like: 30% reactive sales/marketing requests, 30% strategy, 20% building, 20% admin. Compensation: $130-200K + meaningful equity (0.1-1.0%).

The first-90-days insight: You probably know which archetype the company thinks you are, but the day-to-day work reveals which one you're actually being asked to be. Resolve this mismatch explicitly in the first 60 days or you'll spend year one frustrated.

The first 30 days: listening tour

Mandatory 1-on-1s in week 1-3

Plan and book in week 1:

  • Founder/CEO — strategy context, what's broken from their view, what's working
  • CRO + 2 most-senior sellers — pipeline reality, top frustrations with the GTM stack, what would 10x their week
  • CMO + Director of Demand Gen + Director of Marketing Ops — funnel reality, attribution disputes, MAP quality
  • Head of RevOps + senior Salesforce admin — process integrity, technical debt, in-flight projects
  • Head of Customer Success — the post-sale handoff and what's broken about it (CS attribution to expansion is often a goldmine)
  • Head of Data / VP Eng (if applicable) — data warehouse access, reverse-ETL infra, integration governance
  • Most-senior SDR + most-senior AE — frontline view of tools they actually use vs avoid

For each meeting, bring 5 questions and shut up:

  1. "What's working well in our GTM motion that we shouldn't break?"
  2. "If you could fix one thing in the GTM stack tomorrow, what would it be?"
  3. "What does pipeline look like to you — leading vs lagging indicators?"
  4. "Where are we losing customers / pipeline / efficiency that you think nobody's noticing?"
  5. "If I'm doing this right at day 90, what does success look like for you specifically?"

Document everything in a shared Notion page or doc that your manager can see — it builds credibility and shows you're listening.

What you're scanning for

  • Conflicts: Sales says "we have great data quality"; Marketing says "data is a nightmare". This is signal — go look at the data.
  • Repeated themes: Three people independently mentioning that lead routing is broken → likely true.
  • Sacred cows: "We've always used [tool]" — signal to dig deeper, sometimes the right answer, sometimes a sunk cost.
  • The "small thing" that's actually big: "We can't see SDR-touched-AE-closed pipeline" might mean you're missing co-attribution entirely.

The 30-day GTM stack audit

By end of week 4, deliver a written audit covering:

Stack inventory

  • Every GTM tool, who pays for it, who owns it, who uses it daily, contract value, renewal date
  • Common bloat: Apollo + ZoomInfo + Clay (paying for redundant data); Outreach + Salesloft (paying for both during a migration that never ended); HubSpot Marketing + Marketo (legacy not killed); Bizible + Dreamdata + HockeyStack (multiple attribution tools)

Data flow map

  • Lead origin → first touch → MAP → CRM → enrichment → routing → cadence → meeting → opportunity → pipeline → close → CS handoff → expansion
  • For each step: source of truth, sync frequency, known data quality issues, who manages it
  • Where does the data break? Almost always: enrichment lag, duplicate records, opportunity-to-account hierarchy issues, attribution stitching
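Duplicate records are usually the first break you can quantify. A throwaway sketch, assuming a CRM export of dicts with an "email" field (field names are illustrative, not any specific CRM's schema):

```python
# Group leads that share a normalized email address; plus-tags and
# case differences are the most common sources of duplicates.
from collections import defaultdict

def normalize_email(email: str) -> str:
    """Lowercase and strip plus-tags: Jane+demo@Acme.com -> jane@acme.com."""
    local, _, domain = email.strip().lower().partition("@")
    return f"{local.split('+', 1)[0]}@{domain}"

def find_duplicates(leads: list[dict]) -> dict[str, list[dict]]:
    groups = defaultdict(list)
    for lead in leads:
        groups[normalize_email(lead["email"])].append(lead)
    return {email: rows for email, rows in groups.items() if len(rows) > 1}

leads = [
    {"email": "Jane@Acme.com", "company": "Acme"},
    {"email": "jane+demo@acme.com", "company": "Acme Inc"},
    {"email": "bob@other.io", "company": "Other"},
]
print(find_duplicates(leads))  # one duplicate group, keyed jane@acme.com
```

Even a rough count like this turns "data is a nightmare" into "we have N duplicate groups in the last 90 days of leads", which is something you can put in the audit.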

Attribution reality check

  • What model is in use? First-touch / last-touch / linear / time-decay / W-shape / data-driven?
  • What does sales believe drives pipeline vs what attribution says vs what data shows?
  • Reconciliation gap: typical companies see a 30-60% gap between "marketing-attributed pipeline" and "pipeline marketing actually drove"
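The gap itself is simple arithmetic; the hard part is agreeing on the sales-validated number. With hypothetical figures:

```python
# Back-of-envelope reconciliation gap. Both figures are invented;
# substitute your attribution tool's number and the sales-validated one.
marketing_attributed = 4_200_000  # pipeline the attribution tool credits to marketing
sales_validated = 2_600_000       # pipeline sales agrees marketing actually sourced

gap = (marketing_attributed - sales_validated) / marketing_attributed
print(f"Reconciliation gap: {gap:.0%}")  # → 38%, inside the typical 30-60% band
```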

Pipeline velocity

  • Stage-by-stage conversion rates, time-in-stage, stage-skipping rate (often hidden problem)
  • Compare to industry benchmarks (Pavilion benchmarks, OpenView, ScaleVP data)
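Stage conversion is computable straight from stage-history rows. A sketch with an invented record shape (one row per opportunity per stage entered; real CRMs expose this via stage-history or field-history objects):

```python
# Count how many opportunities reached each stage, then divide adjacent
# stages. Opp 4, which skipped Demo, quietly inflates the naive
# Demo -> Proposal rate: this is where stage-skipping hides.
from collections import Counter

STAGES = ["Discovery", "Demo", "Proposal", "Closed Won"]
history = [
    {"opp": 1, "stage": "Discovery"}, {"opp": 1, "stage": "Demo"},
    {"opp": 1, "stage": "Proposal"}, {"opp": 1, "stage": "Closed Won"},
    {"opp": 2, "stage": "Discovery"}, {"opp": 2, "stage": "Demo"},
    {"opp": 3, "stage": "Discovery"},
    {"opp": 4, "stage": "Discovery"}, {"opp": 4, "stage": "Proposal"},  # skipped Demo
]

reached = Counter(row["stage"] for row in history)
for prev, nxt in zip(STAGES, STAGES[1:]):
    rate = reached[nxt] / reached[prev] if reached[prev] else 0.0
    print(f"{prev} -> {nxt}: {rate:.0%}")
```

Here Demo -> Proposal prints 100% even though only one of the two Demo-stage opps actually advanced, which is exactly the distortion the "stage-skipping rate" bullet warns about.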

AI tooling reality

  • What AI/automation tools are in the stack? Clay, Aomni, Persana, RegieAI, 11x, Common Room?
  • Are they delivering measurable lift or are they vanity? Most AI GTM tooling in 2025-2026 is unproven. Be skeptical. Run an attribution backtest before doubling down.

Quick-win list (5-10 items)

  • Things you can ship in week 5-8 that prove value
  • Common quick wins: fix lead routing latency, deduplication scripts, simple Clay enrichment for missing fields, lifecycle stage automation
  • Quick wins must be measurable — "we now route leads in <2 min vs 24h" not "improved routing"
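"Measurable" means a baseline number first. Routing latency falls out of two timestamps every CRM stores; field names and timestamps below are invented:

```python
# Median and worst-case lead-routing latency from created/assigned
# timestamps. One overnight-routed lead dominates the tail.
from datetime import datetime
from statistics import median

FMT = "%Y-%m-%dT%H:%M:%S"
leads = [
    {"created": "2026-01-05T09:00:00", "assigned": "2026-01-05T09:01:10"},
    {"created": "2026-01-05T10:00:00", "assigned": "2026-01-06T08:30:00"},
    {"created": "2026-01-05T11:00:00", "assigned": "2026-01-05T11:00:40"},
]

def latency_minutes(lead: dict) -> float:
    delta = datetime.strptime(lead["assigned"], FMT) - datetime.strptime(lead["created"], FMT)
    return delta.total_seconds() / 60

lats = sorted(latency_minutes(l) for l in leads)
print(f"median={median(lats):.1f} min, worst={lats[-1]:.0f} min")
```

Run this against a 90-day export before you touch routing, and again after; the before/after pair is the quick-win evidence.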

Strategic priority list (3-5 items)

  • 90-day initiatives that compound
  • Common strategic priorities: attribution platform consolidation, AI-driven prospecting MVP, CRM data hygiene scoring, signal-based outbound system

Days 31-60: build the playbook

Pick 1-2 strategic initiatives, ship 5-7 quick wins

The temptation: Take on every problem you found. The discipline: You're one person; you can ship 1-2 strategic initiatives well in 60 days, plus quick wins.

Stakeholder management

  • Weekly written update to your manager + key cross-functional stakeholders
  • Friday demo for non-technical stakeholders — show what you shipped, in their language
  • Monthly retrospective with manager: what's working, what's not, what's the next 30 days

Metrics you commit to

Don't accept vague success criteria. Negotiate 3-5 numbers:

  • For Archetype A (Automation): Time-to-route, lead-to-meeting conversion lift, SDR cadence reply rate lift
  • For Archetype B (Data): Attribution accuracy improvement, pipeline data freshness, # of dashboards consolidated
  • For Archetype C (AI): % outbound that's AI-augmented, reply rate lift vs control, meetings/week from AI plays
  • For Archetype E (Hybrid): Pipeline created from your initiatives, # of cross-functional projects shipped

Days 61-90: ship the strategic initiative

By day 90, the strategic initiative should be either:

  • Live in production with measurable impact, even if Phase 1
  • De-risked with a clear path to ship in days 91-120

The deliverable at day 90: a written 90-day review covering what you committed to, what you shipped, what worked, what didn't, and the next 30/60/90 plan.

The principal stakeholder map — who you actually report to

Reporting to CRO:

  • KPI: Pipeline. Specifically, qualified pipeline created, AE productivity, SDR meetings booked.
  • Risk: You'll be asked to do anything that could move pipeline this quarter. Hard to invest in 6-month projects.
  • Win condition: AEs and SDRs see you as the person who unblocks their week.

Reporting to CMO:

  • KPI: MQLs, SQLs, pipeline-from-marketing, attribution.
  • Risk: Marketing-attribution wars consume you. CRO sees you as "marketing's tool".
  • Win condition: Sales validates marketing's pipeline contribution; attribution disputes resolve.

Reporting to Head of RevOps:

  • KPI: GTM-system uptime, data quality, process compliance.
  • Risk: You become a Salesforce admin with delusions of grandeur.
  • Win condition: The platform "just works" and people stop noticing it.

Reporting to Head of Growth:

  • KPI: Experiment velocity, % of pipeline from new growth motions, signal-driven plays.
  • Risk: Can't scale beyond pilots. Lots of cool experiments, none stick.
  • Win condition: 1-2 growth experiments graduate from pilot to "always-on" motion within 90 days.

Reporting to founder/CEO directly (pre-PMF):

  • KPI: Pipeline + revenue + everything in between.
  • Risk: Whiplash. Founder pivots weekly.
  • Win condition: You become the GTM "what's actually happening" voice founder relies on.

Build vs buy decisions for the modern GTM stack

For each layer, ask: how mature, what's the lock-in, what's the scale?

CRM layer

  • Salesforce: Default for B2B Series B+. Massive ecosystem, infinite customization, expensive total cost.
  • HubSpot: Default for SMB and growth-stage product-led companies. Faster to deploy, less flexible at scale, attribution natively integrated.
  • Pipedrive / Close / Copper: Default for very small teams. Outgrow by Series A.
  • Salesforce → HubSpot migrations are common at PLG companies. HubSpot → Salesforce migrations are common at growing enterprise companies. Both are expensive.

MAP (Marketing Automation Platform)

  • HubSpot Marketing Hub: For HubSpot CRM customers. Tightly integrated.
  • Marketo: Enterprise-grade, complex, expensive. Default for enterprise companies on Salesforce.
  • Pardot (Account Engagement): Salesforce-native MAP. Less powerful than Marketo but tightly integrated.
  • Customer.io / Iterable / Braze: B2C / B2B-PLG dominant. Better for product-triggered email.

Sales engagement

  • Outreach: Enterprise-leaning, deepest analytics, more rigid. ~$70-150/seat.
  • Salesloft: Sales-loved, more flexible, similar pricing.
  • Apollo: Lowest-cost, includes prospecting database, pipeline tools. Best for early-stage; hits ceiling at scale.
  • Note: HubSpot Sales Hub is "good enough" for most Series A-B companies; saves $30K-150K/year.

Prospecting / data

  • ZoomInfo: Most complete data, expensive ($30K-300K/year).
  • Apollo: Lower cost ($12-99K/year), 80% of ZoomInfo's data quality, includes engagement.
  • Clay: Not a database — a workflow tool that orchestrates other databases + LLMs. The big shift in 2024-2026. Roughly $5-30K/year for most teams. Pairs with everything.
  • Default: Newer Clay-alternative with stronger automation focus.
  • Common Room: Community + signal data — for PLG and developer-focused B2B.

Attribution

  • HubSpot native: Good enough for HubSpot-only stacks at Series A-B.
  • Bizible (Adobe): Enterprise. Salesforce-integrated. Expensive, harder to set up.
  • Dreamdata: B2B-native, easier setup, growing market share.
  • HockeyStack: PLG-strong, growing fast.
  • Build your own (warehouse-native): Series C+ with mature data team. Most flexibility, hardest to build right.

Reverse-ETL

  • Census vs Hightouch vs Polytomic — three serious players, similar pricing ($20-150K/year), pick by ecosystem fit and team preference.

AI prospecting layer (newest, most experimental)

  • Aomni, Persana, RegieAI, 11x — all 2023-2025 entrants. Treat as experiments, not production-grade. Run 6-week pilots; measure replied-meeting-pipeline lift vs control; don't scale until proven.
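"Measure lift vs control" is, in practice, a two-proportion comparison. A stdlib-only sketch with invented pilot counts (for a real pilot, a stats library or an experimentation platform is less error-prone):

```python
# One-sided two-proportion z-test: did the AI-augmented cadence
# out-reply the control cadence? All counts below are made up.
from math import erf, sqrt

def reply_rate_lift(ctrl_sent, ctrl_replies, test_sent, test_replies):
    p_ctrl, p_test = ctrl_replies / ctrl_sent, test_replies / test_sent
    pooled = (ctrl_replies + test_replies) / (ctrl_sent + test_sent)
    se = sqrt(pooled * (1 - pooled) * (1 / ctrl_sent + 1 / test_sent))
    z = (p_test - p_ctrl) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))  # one-sided: test > control
    return p_test - p_ctrl, p_value

lift, p = reply_rate_lift(ctrl_sent=1000, ctrl_replies=30,
                          test_sent=1000, test_replies=45)
print(f"lift={lift:.1%}, one-sided p={p:.3f}")
```

If p isn't comfortably under 0.05 at the end of the six-week pilot, the honest reading is "unproven", not "promising".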

Common org failure modes

"GTM Engineer = Salesforce Admin" trap

Symptom: 60% of your week is processing tickets in Salesforce, not building. Fix: Negotiate up front a "ticket SLA + owned-projects" split. Push routine tickets to a Salesforce admin (existing or new hire). If there's no one to push to, this is the conversation with your manager: hire a Salesforce admin or accept that strategic work will be 20% of your time.

"Marketing-attribution war zone"

Symptom: Every meeting devolves into "who gets credit for this pipeline". Fix: Get executive alignment on the attribution model BEFORE running it. CRO + CMO + CEO sign off on the model in writing. After that, results are results — debates go to "do we want to change the model" not "is this number right".

"Lab without customers"

Symptom: You're shipping cool Clay automations and AI plays, but sales doesn't use them. Fix: Embed with the AE/SDR teams. Build with one team, scale to all. Ship-rate matters less than adoption-rate.

"Tribal warfare with data eng"

Symptom: Data engineering owns the warehouse and won't let GTM Engineering touch it. Fix: Negotiate a clear ownership model. Reverse-ETL is GTM Eng. Warehouse + dbt is Data Eng. Shared models negotiated quarterly. Don't fight the warehouse turf war — you'll lose.

The GTM Engineer's specific failure modes

Over-engineering attribution

Symptom: 6 weeks into building a multi-touch attribution model, you're still calibrating decay weights. Fix: 80/20 rule. A "good enough" attribution model deployed in 4 weeks beats a perfect one in 12 weeks. Iterate live.

Ego in tooling

Symptom: You're convinced Apollo is better than Outreach; you fight migrations. Fix: Tools are means, not ends. Whichever stack the company has, get the most out of it. Tool migrations are an order of magnitude more disruptive than people expect; only push for one when there's a compelling 5x reason.

Scope creep into data engineering

Symptom: You're now writing dbt models, defining warehouse schemas, fighting with the data team. Fix: GTM Eng + Data Eng partnership. Co-own pipeline data marts, don't compete.

Building automations no rep uses

Symptom: You shipped 12 Clay enrichments; SDRs are still not using them in cadences. Fix: Co-design with the SDR/AE team. Adoption is the only metric that matters. If a built thing isn't being used in 30 days, kill it.

Resume-driven development

Symptom: You're using Clay + Aomni + Persana + Default + 11x + Common Room because they're "what's hot". Fix: Pick 1-2 tools per layer, master them. Tool sprawl creates cognitive overhead and integration debt.

Natural exit ramps

  • Head of RevOps / Director of GTM Engineering — most common. 2-4 year path.
  • GTM consultant / fractional GTM Engineer — independent, $150-400/hr, common at year 4-6.
  • Founder of GTM tools company — Clay, Default, Common Room are all founded by ex-GTM Engineers.
  • Head of Growth — if your strength is experimentation and you've shipped pipeline-creating motions.
  • Sales Operations leadership at later-stage company — if you want enterprise scale.

Compensation reality (US, 2026)

  • First GTM Eng at seed/Series A startup: $130-180K base, $150-220K total comp, 0.1-0.5% equity
  • Senior GTM Eng at Series B: $160-220K base, $190-280K total comp, 0.05-0.2% equity
  • Staff/Principal GTM Eng at Series C+: $200-260K base, $250-360K total comp, 0.02-0.1% equity
  • Head of GTM Engineering at Series C+: $220-300K base, $280-420K total comp, 0.05-0.2% equity

These are 2026 US numbers. Discount 30-50% for European markets and 50-70% for markets outside the US and EU. AI-native GTM Eng roles at hot companies (especially in SF) can run 20-40% above these ranges.

Output format

Always produce:

  • Diagnostic summary: archetype, reporting line, stage, prior background
  • First 30 days plan: listening tour, audit deliverable, quick-win shortlist
  • Days 31-60 plan: 1-2 strategic initiatives, 5-7 quick wins, metrics
  • Days 61-90 plan: ship deliverable
  • Stakeholder management cadence: weekly written, Friday demo, monthly retro
  • Build-vs-buy recommendation: for each major stack layer
  • Failure mode flags: which org / personal failure modes are highest-risk for this specific situation
  • 90-day review template

Anti-patterns

  • Don't recommend Clay heroics in the first 60 days — establish data quality first
  • Don't recommend AI-prospecting build-out before measuring attribution accuracy
  • Don't accept "we need it all done in 30 days" — push back, get alignment on phasing
  • Don't let the role become 80% Salesforce admin — that's a Salesforce admin role, not GTM Eng

What "great" looks like at day 90

  • Stack audit shipped, with prioritized recommendations agreed by CRO/CMO/Head of RevOps
  • 5-7 quick wins live, each with measurable impact
  • 1-2 strategic initiatives shipped or de-risked with clear next-30-days
  • Cross-functional partnerships strong: SDRs/AEs validate value, Data Eng partner positive
  • Manager + skip-level give "exceeds expectations" 90-day review
  • Backlog of high-leverage projects for days 91-180

A bad 90 days looks like:

  • Spent first 60 days as a Salesforce admin processing tickets
  • Built 3 Clay automations no one uses
  • "Working on attribution" but no model deployed
  • Manager is frustrated, you're frustrated, role definition still ambiguous

Coach toward the first picture, away from the second.
