golang-samber-hot

In-memory caching in Golang using samber/hot — eviction algorithms (LRU, LFU, TinyLFU, W-TinyLFU, S3FIFO, ARC, TwoQueue, SIEVE, FIFO), TTL, cache loaders, sharding, stale-while-revalidate, missing key caching, and Prometheus metrics. Apply when using or adopting samber/hot, when the codebase imports github.com/samber/hot, or when the project repeatedly loads the same medium-to-low cardinality resources at high frequency and needs to reduce latency or backend pressure.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Copy this and send it to your AI assistant to learn

Install skill "golang-samber-hot" with this command: npx skills add samber/cc-skills-golang/samber-cc-skills-golang-golang-samber-hot

Persona: You are a Go engineer who treats caching as a system design decision. You choose eviction algorithms based on measured access patterns, size caches from working-set data, and always plan for expiration, loader failures, and monitoring.

Using samber/hot for In-Memory Caching in Go

Generic, type-safe in-memory caching library for Go 1.22+ with 9 eviction algorithms, TTL, loader chains with singleflight deduplication, sharding, stale-while-revalidate, and Prometheus metrics.

Official Resources:

This skill is not exhaustive. Please refer to the library documentation and code examples for more information. Context7 can help as a discoverability platform.

go get -u github.com/samber/hot

Algorithm Selection

Pick based on your access pattern — the wrong algorithm wastes memory or tanks hit rate.

| Algorithm | Constant | Best for | Avoid when |
| --- | --- | --- | --- |
| W-TinyLFU | hot.WTinyLFU | General-purpose, mixed workloads (default) | You need simplicity for debugging |
| LRU | hot.LRU | Recency-dominated (sessions, recent queries) | Frequency matters (scan pollution evicts hot items) |
| LFU | hot.LFU | Frequency-dominated (popular products, DNS) | Access patterns shift (stale popular items never evict) |
| TinyLFU | hot.TinyLFU | Read-heavy with frequency bias | Write-heavy (admission filter overhead) |
| S3FIFO | hot.S3FIFO | High throughput, scan-resistant | Small caches (<1000 items) |
| ARC | hot.ARC | Self-tuning, unknown patterns | Memory-constrained (2x tracking overhead) |
| TwoQueue | hot.TwoQueue | Mixed with hot/cold split | Tuning complexity is unacceptable |
| SIEVE | hot.SIEVE | Simple scan-resistant LRU alternative | Highly skewed access patterns |
| FIFO | hot.FIFO | Simple, predictable eviction order | Hit rate matters (no frequency/recency awareness) |

Decision shortcut: Start with hot.WTinyLFU. Switch only when profiling shows the miss rate is too high for your SLO.

For detailed algorithm comparison, benchmarks, and a decision tree, see Algorithm Guide.

Core Usage

Basic Cache with TTL

import (
    "time"

    "github.com/samber/hot"
)

cache := hot.NewHotCache[string, *User](hot.WTinyLFU, 10_000).
    WithTTL(5 * time.Minute).
    WithJanitor().
    Build()
defer cache.StopJanitor()

cache.Set("user:123", user)
cache.SetWithTTL("session:abc", session, 30*time.Minute)

value, found, err := cache.Get("user:123")

Loader Pattern (Read-Through)

Loaders fetch missing keys automatically with singleflight deduplication — concurrent Get() calls for the same missing key share one loader invocation:

cache := hot.NewHotCache[int, *User](hot.WTinyLFU, 10_000).
    WithTTL(5 * time.Minute).
    WithLoaders(func(ids []int) (map[int]*User, error) {
        return db.GetUsersByIDs(ctx, ids) // batch query; db and ctx assumed in scope
    }).
    WithJanitor().
    Build()
defer cache.StopJanitor()

user, found, err := cache.Get(123) // triggers loader on miss

Capacity Sizing

Before setting the cache capacity, estimate how many items fit in the memory budget:

  1. Estimate single-item size — sum the struct size, the heap-allocated fields it references (slices, maps, strings), and the key size. Add a rough per-entry overhead of ~100 bytes for internal bookkeeping (pointers, expiry timestamps, algorithm metadata).
  2. Ask the developer how much memory is dedicated to this cache in production (e.g., 256 MB, 1 GB). This depends on the service's total memory and what else shares the process.
  3. Compute capacity — capacity = memoryBudget / estimatedItemSize. Round down to leave headroom.
Example: *User struct ~500 bytes + string key ~50 bytes + overhead ~100 bytes = ~650 bytes/entry
         256 MB budget → 256_000_000 / 650 ≈ 393,000 items

If the item size is unknown, ask the developer to measure it with a unit test that allocates N items and checks runtime.ReadMemStats. Guessing capacity without measuring leads to OOM or wasted memory.

Common Mistakes

  1. Forgetting WithJanitor() — without it, expired entries stay in memory until the algorithm evicts them. Always chain .WithJanitor() in the builder and defer cache.StopJanitor().
  2. Calling SetMissing() without missing cache config — panics at runtime. Enable WithMissingCache(algorithm, capacity) or WithMissingSharedCache() in the builder first.
  3. WithoutLocking() + WithJanitor() — mutually exclusive, panics. WithoutLocking() is only safe for single-goroutine access without background cleanup.
  4. Oversized cache — a cache holding everything is a map with overhead. Size to your working set (typically 10-20% of total data). Monitor hit rate to validate.
  5. Ignoring loader errors — Get() returns (zero, false, err) on loader failure. Always check err, not just found.
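Mistake 5 in practice: handle all three return values and treat a loader error differently from a plain miss. A plain-Go sketch of the pattern, where lookup is a hypothetical stand-in for cache.Get:

```go
package main

import (
	"errors"
	"fmt"
)

// lookup stands in for cache.Get: (value, found, error).
func lookup(key string) (string, bool, error) {
	switch key {
	case "user:123":
		return "Jane", true, nil
	case "user:boom":
		return "", false, errors.New("loader: db timeout")
	default:
		return "", false, nil // genuine miss, no error
	}
}

func handle(key string) string {
	v, found, err := lookup(key)
	switch {
	case err != nil:
		// Loader failed: retry or fall back; do NOT treat as a plain miss.
		return "error: " + err.Error()
	case !found:
		return "miss"
	default:
		return "hit: " + v
	}
}

func main() {
	fmt.Println(handle("user:123"))  // hit: Jane
	fmt.Println(handle("user:999"))  // miss
	fmt.Println(handle("user:boom")) // error: loader: db timeout
}
```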

Best Practices

  1. Always set TTL — unbounded caches serve stale data indefinitely because there is no signal to refresh
  2. Use WithJitter(lambda, upperBound) to spread expirations — without jitter, items created together expire together, causing thundering herd on the loader
  3. Monitor with WithPrometheusMetrics(cacheName) — hit rate below 80% usually means the cache is undersized or the algorithm is wrong for the workload
  4. Use WithCopyOnRead(fn) / WithCopyOnWrite(fn) for mutable values — without copies, callers mutate cached objects and corrupt shared state

For advanced patterns (revalidation, sharding, missing cache, monitoring setup), see Production Patterns.

For the complete API surface, see API Reference.

If you encounter a bug or unexpected behavior in samber/hot, open an issue at https://github.com/samber/hot/issues.

Cross-References

  • -> See samber/cc-skills-golang@golang-performance skill for general caching strategy and when to use in-memory cache vs Redis vs CDN
  • -> See samber/cc-skills-golang@golang-observability skill for Prometheus metrics integration and monitoring
  • -> See samber/cc-skills-golang@golang-database skill for database query patterns that pair with cache loaders
  • -> See samber/cc-skills@promql-cli skill for querying Prometheus cache metrics via CLI

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

  • golang-code-style (Coding) — repository source, needs review; no summary provided by upstream source
  • golang-cli (Coding) — repository source, needs review; no summary provided by upstream source
  • golang-testing (General) — repository source, needs review; no summary provided by upstream source
  • golang-naming (General) — repository source, needs review; no summary provided by upstream source