Building Hyperstack Stacks
A stack watches Solana programs and maps on-chain state into structured, streamable entities. The workflow is: explore the IDL, understand what the user needs, write the Rust definition, build, deploy.
1. Prerequisites
Required: Rust toolchain, Hyperstack CLI (hs), an IDL JSON file. Run once:
OS="$(uname -s 2>/dev/null || echo Windows)"
if ! command -v cargo &>/dev/null; then
if [ "$OS" = "Darwin" ] || [ "$OS" = "Linux" ]; then
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y --no-modify-path
source "$HOME/.cargo/env"
else
curl -sSLo /tmp/rustup-init.exe https://win.rustup.rs/x86_64
/tmp/rustup-init.exe -y
export PATH="$USERPROFILE/.cargo/bin:$PATH"
fi
fi
if command -v hs &>/dev/null; then
HS_CLI="hs"
elif command -v hyperstack-cli &>/dev/null; then
HS_CLI="hyperstack-cli"
else
cargo install hyperstack-cli
HS_CLI="hs"
fi
All examples use hs. The CLI can also be installed via cargo (cargo install hyperstack-cli) or npm (npm install -g hyperstack-cli); if your install exposes the binary as hyperstack-cli, substitute it wherever hs appears.
2. Get the IDL
If the user already has the IDL, place it in idl/ and skip to step 3.
If not, try in order:
- Program GitHub repo — look for target/idl/*.json or idl/*.json
- Anchor CLI — anchor idl fetch <PROGRAM_ID> --provider.cluster mainnet -o idl/program.json
- Protocol SDK packages (NPM or crates.io often bundle the IDL)
- Block explorers (Solscan, Solana.fm — "IDL" tab on the program page)
- Source generators (Kinobi/Codama) as a last resort
3. Explore the IDL
Do this before writing any Rust. Always pass --json for machine-readable output.
Survey — get the full inventory:
hs idl summary idl/program.json
hs idl relations idl/program.json --json
hs idl types idl/program.json --json
hs idl events idl/program.json --json
relations is the most important output — it classifies accounts as Entity, Infrastructure, Role, or Other. Entity accounts are what you'll typically map to #[entity] structs.
Match user intent — adapt depth to how specific the user's request is:
- Clear data requirements — use hs idl search, hs idl type <name>, and hs idl instruction <name> to confirm each requested field maps to a concrete account field, instruction arg, or event field.
- App idea but unclear data model — use relations, type-graph, and pda-graph to identify entity candidates and relationships. Propose a data model, then proceed once confirmed.
- No indication — use relations, events, and search to surface what the program tracks. Present a short menu of what's possible and narrow scope before coding.
Close gaps — before writing code, verify every cross-account link. This is the most critical step. You must understand how every account and instruction connects to every other, and confirm that the accounts you reference in macros (especially lookup_by) actually exist on the instruction you're sourcing from:
hs idl account-usage idl/program.json <account> --json
hs idl links idl/program.json <account-a> <account-b> --json
hs idl connect idl/program.json <new-account> --existing <a,b> --suggest-hs --json
hs idl instruction idl/program.json <instruction-name> --json # verify which accounts exist on an instruction
connect --suggest-hs output maps directly to register_from and #[aggregate] decisions in the DSL.
See references/cli-reference.md for the full hs idl command set.
4. Project Setup
cargo new --lib my-stack && cd my-stack
mkdir -p idl
# copy IDL file(s) into idl/
Cargo.toml:
[dependencies]
hyperstack = "0.1"
[build-dependencies]
hyperstack-build = "0.1"
[package.metadata.hyperstack]
idls = ["idl/program.json"]
hyperstack.toml:
[project]
name = "my-stack"
5. Write the Stack Definition
A stack is a Rust module with #[hyperstack] containing one or more #[entity] structs. Each entity has a primary key and sections (nested structs deriving Stream). Fields use mapping macros to declare their data source.
use hyperstack::prelude::*;
#[hyperstack(idl = ["idl/program.json"])]
mod my_stack {
use hyperstack::macros::Stream;
use serde::{Deserialize, Serialize};
#[entity(name = "Token")]
#[view(name = "by_volume", sort_by = "metrics.total_volume", order = "desc")]
pub struct Token {
pub id: TokenId,
pub state: TokenState,
pub metrics: TokenMetrics,
}
#[derive(Debug, Clone, Serialize, Deserialize, Stream)]
pub struct TokenId {
#[map(program_sdk::accounts::Pool::mint, primary_key, strategy = SetOnce)]
pub mint: String,
}
#[derive(Debug, Clone, Serialize, Deserialize, Stream)]
pub struct TokenState {
#[map(program_sdk::accounts::Pool::reserves, strategy = LastWrite)]
pub reserves: Option<u64>,
#[snapshot(from = program_sdk::accounts::Pool, strategy = LastWrite)]
pub pool: Option<Pool>,
}
#[derive(Debug, Clone, Serialize, Deserialize, Stream)]
pub struct TokenMetrics {
#[aggregate(from = program_sdk::instructions::Swap, field = args::amount, strategy = Sum, lookup_by = accounts::pool)]
pub total_volume: Option<u64>,
#[aggregate(from = program_sdk::instructions::Swap, strategy = Count, lookup_by = accounts::pool)]
pub swap_count: Option<u64>,
#[derive_from(from = [program_sdk::instructions::Swap], field = __timestamp)]
pub last_trade_at: Option<i64>,
}
}
Key rules:
- The SDK module name is derived from the IDL's program name: program_name becomes program_name_sdk
- Account paths: program_sdk::accounts::AccountType::field_name
- Instruction paths: program_sdk::instructions::InstructionName
- Every entity needs exactly one primary_key field
- Section structs must derive Stream, Debug, Clone, Serialize, Deserialize
⚠️ CRITICAL: Account & Instruction Connection Planning
Before writing ANY macro, you MUST map out the full connection graph between accounts, instructions, and your entities. The macros are resolved at build time — if a connection doesn't exist, the build will fail, or the stack will silently produce wrong results.
The lookup_by rule: When you use lookup_by = accounts::some_account on an #[aggregate], #[event], #[snapshot], or #[derive_from] macro, the account you reference in lookup_by MUST be an account that exists on that specific instruction. This is how Hyperstack resolves "which entity does this instruction update belong to?" — it reads the account address from the instruction's account list and matches it to an entity's primary key or lookup index.
Example of the connection logic:
Entity: Token (primary_key = Pool::mint)
│
│ The entity is keyed by the `mint` field on Pool accounts.
│ So Hyperstack knows: Pool address → mint → Token entity.
│
Macro: #[aggregate(from = instructions::Swap, lookup_by = accounts::pool)]
│
│ When a Swap instruction fires, Hyperstack needs to know
│ WHICH Token entity to update. It does this by:
│ 1. Reading the `pool` account address from the Swap instruction
│ 2. Looking up what `mint` value that Pool account holds
│ 3. Routing the update to the Token entity with that mint
│
└─ This ONLY works if `pool` is an actual account on the Swap instruction.
Use `hs idl instruction idl/program.json Swap --json` to verify.
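The same resolution chain can be modeled in plain Rust. This is a toy in-memory sketch of the logic described above, not Hyperstack internals — the addresses, names, and the route_swap helper are all made up for illustration:

```rust
use std::collections::HashMap;

// Toy model of lookup_by resolution: instruction account -> primary key -> entity.
fn route_swap(
    pool_to_mint: &HashMap<String, String>, // built from Pool account updates
    tokens: &mut HashMap<String, u64>,      // entity store keyed by mint (primary key)
    swap_pool_account: &str,                // the `pool` account on the Swap instruction
    amount: u64,
) {
    // Steps 1-2: read the `pool` address from the instruction, resolve it to a mint.
    if let Some(mint) = pool_to_mint.get(swap_pool_account) {
        // Step 3: route the update to the Token entity with that mint (Sum strategy).
        *tokens.entry(mint.clone()).or_insert(0) += amount;
    }
    // If `pool` isn't on the instruction (or isn't indexed), the update is dropped.
}

fn main() {
    let mut pool_to_mint = HashMap::new();
    pool_to_mint.insert("PoolAddr111".to_string(), "MintAAA".to_string());
    let mut tokens = HashMap::new();

    route_swap(&pool_to_mint, &mut tokens, "PoolAddr111", 500); // resolves
    route_swap(&pool_to_mint, &mut tokens, "UnknownPool", 250); // silently dropped

    assert_eq!(tokens.get("MintAAA"), Some(&500));
}
```

Note how the second call drops data with no error — exactly the failure mode you hit when lookup_by names an account that isn't on the instruction.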
Pre-flight checklist (do this for EVERY macro that uses lookup_by or register_from):
- Identify the source instruction — what instruction does from = ... point to?
- List its accounts — run hs idl instruction idl/program.json <InstructionName> --json and confirm the account name you're using in lookup_by is present in the instruction's accounts list.
- Trace the resolution chain — how does Hyperstack get from that account address back to your entity's primary key? Either:
  - the account type is the same one that holds your primary_key field (direct resolution), or
  - a lookup_index with register_from has been set up to map this account to the primary key (PDA resolution).
- Verify with hs idl links — run hs idl links idl/program.json <AccountA> <AccountB> --json to confirm the connection path exists.
Common mistakes:
- Using lookup_by = accounts::pool on an instruction that doesn't have a pool account — build fails or silently drops data
- Forgetting to set up register_from when the lookup_by account is a PDA that doesn't directly contain the primary key
- Assuming an account name exists on all instructions — different instructions may name the same logical account differently (e.g., pool vs amm vs market)
- Not checking the IDL for the exact account names — always use hs idl instruction to get the canonical names
Enriching with off-chain data: If the user needs data that isn't on-chain (token metadata, images from metadata URIs, external API data), use #[resolve]. Two resolver types are available:
- Token metadata — #[resolve(address = "mint_addr")] or #[resolve(from = "id.mint")] on an Option<TokenMetadata> field. Fetches name, symbol, decimals, and logo from the DAS API. Also provides the ui_amount/raw_amount computed methods for human-readable token amounts.
- URL fetching — #[resolve(url = field.path, extract = "json.path")] on any field. Fetches JSON from a URL stored in another entity field and extracts a value by path. Use for NFT images, off-chain config, and API responses.
Token Decimal Handling with ui_amount
On-chain token amounts are raw integers — you must divide by 10^decimals to get a human-readable value. Hyperstack makes this seamless via ui_amount, which works directly with the TokenMetadata resolver.
The pattern: resolve token metadata to get decimals, then reference it in transform = ui_amount(...) on any #[map] field:
use hyperstack::resolvers::TokenMetadata;
// 1. Resolve token metadata — this fetches decimals (and name/symbol/logo) from DAS
#[resolve(from = "id.mint")]
pub token_metadata: Option<TokenMetadata>,
// 2. Map a raw on-chain amount and convert to UI amount in one step
#[map(program_sdk::accounts::Pool::reserves, strategy = LastWrite,
transform = ui_amount(token_metadata.decimals))]
pub reserves: Option<f64>, // stored and streamed as a human-readable float
Hyperstack handles the rest: the raw u64 is captured internally, divided by 10^decimals at evaluation time, and only the float is delivered to clients. If token_metadata hasn't resolved yet, reserves is null rather than a wrong value.
When decimals are known at build time (e.g., SOL = 9, USDC = 6), skip the resolver and pass the literal directly:
#[map(program_sdk::accounts::Pool::sol_amount, strategy = LastWrite,
transform = ui_amount(9))]
pub sol_amount: Option<f64>,
For computed fields or applying ui_amount to a list, use #[computed]:
// Inline on a computed field
#[computed(state.reserves_raw.ui_amount(token_metadata.decimals))]
pub reserves_ui: Option<f64>,
// Apply to every element of a Vec
#[computed(state.balances_raw.map(|x| x.ui_amount(token_metadata.decimals)))]
pub balances_ui: Option<Vec<f64>>,
The inverse raw_amount converts back from UI float to raw integer when needed (e.g., building instructions):
#[computed(state.deposit_ui.raw_amount(token_metadata.decimals))]
pub deposit_raw: Option<u64>,
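The arithmetic behind both conversions is just a division or multiplication by 10^decimals. A plain-Rust sketch of what ui_amount and raw_amount compute (illustration only — Hyperstack applies this internally at evaluation time):

```rust
// Convert a raw on-chain integer amount to a human-readable float.
fn ui_amount(raw: u64, decimals: u32) -> f64 {
    raw as f64 / 10f64.powi(decimals as i32)
}

// Inverse: convert a UI float back to the raw integer amount.
fn raw_amount(ui: f64, decimals: u32) -> u64 {
    (ui * 10f64.powi(decimals as i32)).round() as u64
}

fn main() {
    // USDC has 6 decimals: 1_500_000 raw units == 1.5 USDC.
    assert_eq!(ui_amount(1_500_000, 6), 1.5);
    assert_eq!(raw_amount(1.5, 6), 1_500_000);
    // SOL has 9 decimals: 2_000_000_000 lamports == 2.0 SOL.
    assert_eq!(ui_amount(2_000_000_000, 9), 2.0);
}
```

The .round() in raw_amount guards against float representation error when converting back; without it, truncation could drop a raw unit.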
See references/dsl-reference.md for every macro, strategy, transform, resolver, and cross-account resolution pattern.
6. Build & Deploy
cargo build # generates .hyperstack/*.stack.json
hs auth login # authenticate
hs up my-stack # push + build + deploy
hs sdk create typescript my-stack # generate SDK
hs status # verify
hs explore my-stack --json # inspect live schema
Branch deploys: hs up my-stack --branch staging / hs stack stop my-stack --branch staging.
See references/cli-reference.md for full CLI options.