TensorFlow

Avoid common TensorFlow mistakes — tf.function retracing, GPU memory, data pipeline bottlenecks, and gradient traps.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Install the "TensorFlow" skill with this command: npx skills add ivangdavila/tensorflow

tf.function Retracing

  • New input shape/dtype causes retrace — expensive, prints warning
  • Use input_signature for fixed shapes — @tf.function(input_signature=[tf.TensorSpec(...)])
  • Python values retrace — pass as tensors, not Python ints/floats
  • Avoid Python side effects in tf.function — only runs once during tracing
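A minimal sketch of the two main fixes, assuming TF 2.x:

```python
import tensorflow as tf

# Pinning the signature lets new shapes reuse one trace instead of retracing.
@tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.float32)])
def scale(x):
    return x * 2.0

scale(tf.constant([1.0, 2.0]))       # traced once
scale(tf.constant([1.0, 2.0, 3.0]))  # same trace: shape still matches [None]

# Python scalars are baked into the graph, so each new value retraces;
# passing tensors avoids that.
@tf.function
def shift(x, step):
    return x + step

shift(tf.constant(1.0), tf.constant(1.0))
shift(tf.constant(1.0), tf.constant(2.0))  # no retrace: same tensor signature
```

The `print` side-effect trap follows from the same mechanism: Python code runs only while the trace is built, so anything outside TF ops silently stops executing on later calls.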

GPU Memory

  • TensorFlow grabs all GPU memory by default — set memory_growth=True before any ops
  • tf.config.experimental.set_memory_growth(gpu, True) — must be called before GPU init
  • OOM with large models — reduce batch size or use gradient checkpointing
  • CUDA_VISIBLE_DEVICES="" to force CPU — for testing without GPU
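The memory-growth setting has to run before any op initializes the GPU; a sketch of the standard pattern (a no-op on CPU-only machines, where the device list is empty):

```python
import tensorflow as tf

# Must run before the first op touches the GPU, otherwise it raises RuntimeError.
gpus = tf.config.list_physical_devices("GPU")
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
```

Put this at the very top of the entry-point script, before building any model or dataset.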

Data Pipeline

  • tf.data.Dataset without .prefetch() — CPU/GPU idle time between batches
  • .cache() after expensive ops — but before random augmentation
  • .batch() before .map() for vectorized ops — faster than per-element
  • num_parallel_calls=tf.data.AUTOTUNE — parallel preprocessing
  • Dataset iteration in eager mode is slow — use in tf.function or model.fit
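One way to order those stages, sketched on a toy dataset (the cast stands in for expensive per-element preprocessing):

```python
import tensorflow as tf

ds = (
    tf.data.Dataset.range(1000)
    .map(lambda x: tf.cast(x, tf.float32),
         num_parallel_calls=tf.data.AUTOTUNE)  # parallel preprocessing
    .cache()                                   # cache AFTER the expensive map
    .shuffle(1000)                             # random ops go after cache
    .batch(32)
    .map(lambda b: b * 2.0)                    # vectorized: runs once per batch
    .prefetch(tf.data.AUTOTUNE)                # overlap producer and consumer
)
```

Caching before `shuffle` means the expensive work is done once, while the randomness still changes every epoch.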

Shape Issues

  • First dimension is batch — None for variable batch size in Input layer
  • model.build(input_shape) if not using an Input layer — weights aren't created until the first call, and model.summary() errors before then
  • Reshape errors unclear — tf.debugging.assert_shapes() for debugging
  • Broadcasting silently succeeds — may hide shape bugs
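A small sketch combining the `Input` convention with an explicit shape assertion, which catches mismatches that broadcasting would otherwise swallow:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),   # batch dim is implicit: (None, 28, 28)
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10),
])

x = tf.random.normal([4, 28, 28])
# Fails loudly if x deviates from (batch, 28, 28) instead of broadcasting.
tf.debugging.assert_shapes([(x, ("B", 28, 28))])
y = model(x)
```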

Gradient Tape

  • Variables watched by default — tensors need tape.watch(tensor)
  • persistent=True for multiple gradients — otherwise tape consumed after first use
  • tape.gradient returns None if no path — check for disconnected graph
  • @tf.custom_gradient for custom backward — not all ops have gradients
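The watch and persistence rules in one sketch:

```python
import tensorflow as tf

x = tf.constant(3.0)     # plain tensor: NOT watched automatically
w = tf.Variable(2.0)     # variables are watched by default

with tf.GradientTape(persistent=True) as tape:
    tape.watch(x)        # without this, dx below would be None
    y = w * x

dw = tape.gradient(y, w)  # dy/dw = x = 3.0
dx = tape.gradient(y, x)  # dy/dx = w = 2.0; second call needs persistent=True
del tape                  # release resources held by a persistent tape
```

A `None` from `tape.gradient` usually means the target was never connected to the source inside the tape's scope, not a numeric zero.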

Training Gotchas

  • model.trainable = False after compile does nothing — set before compile
  • BatchNorm behaves differently in training vs inference — training=True/False matters
  • model.fit shuffles by default — shuffle=False for time series
  • validation_split takes from end — shuffle data first if order matters
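A sketch of the ordering that makes the `trainable` flag stick, with the time-series flags noted alongside:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.trainable = False          # must come BEFORE compile to take effect
model.compile(optimizer="adam", loss="mse")

# For time series, keep order in both places:
#   model.fit(x, y, shuffle=False, validation_split=0.2)
# validation_split always takes the LAST fraction of the data.
# For layers like BatchNorm, pass the mode explicitly in custom code:
#   model(x, training=False)
```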

Saving Models

  • model.save() saves everything — architecture, weights, optimizer state
  • model.save_weights() only weights — need model code to restore
  • SavedModel format for serving — tf.saved_model.save(model, path)
  • H5 format limited — doesn't save custom objects well, use SavedModel
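A sketch of the whole-model round trip, using the `.keras` bundle format (available in recent TF 2.x releases; it stores architecture, weights, and optimizer state together):

```python
import os
import tempfile
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])

# Whole model: architecture + weights + optimizer state in one file.
path = os.path.join(tempfile.mkdtemp(), "model.keras")
model.save(path)
restored = tf.keras.models.load_model(path)

# Weights only: the model must be rebuilt in code before loading these.
weights_path = os.path.join(tempfile.mkdtemp(), "ckpt.weights.h5")
model.save_weights(weights_path)
```

For TF Serving, export the SavedModel directory format instead via `tf.saved_model.save(model, export_dir)`.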

Common Mistakes

  • Mixing Keras and raw tf ops incorrectly — use layers.Lambda to wrap tf ops in Sequential
  • tf.print vs Python print — Python print only runs at trace time in tf.function
  • NumPy ops inside tf.function run only at trace time — use tf ops so the computation stays in the graph
  • Loss returns scalar per sample — Keras averages, custom loops may need tf.reduce_mean
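The print and reduction pitfalls in one sketch:

```python
import tensorflow as tf

@tf.function
def mse(x):
    print("tracing")       # Python print: runs once, at trace time only
    tf.print("executing")  # tf.print: runs on every call, inside the graph
    # In a custom training loop, reduce per-sample losses yourself:
    return tf.reduce_mean(tf.square(x))

mse(tf.constant([1.0, 2.0]))  # prints "tracing" and "executing"
mse(tf.constant([3.0, 4.0]))  # prints only "executing": no retrace
```

`model.fit` applies this mean for you; a hand-rolled loop that skips `tf.reduce_mean` will hand the optimizer a vector where it expects a scalar.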
