remotion-to-hyperframes

Translate a Remotion (React-based) video composition into a HyperFrames HTML composition. Use when (1) the user provides Remotion source (`.tsx` files using `useCurrentFrame`, `Sequence`, `AbsoluteFill`, `interpolate`, `spring`, `staticFile`, etc.) and asks to port, convert, or migrate it to HyperFrames; (2) the user pastes a Remotion entry point (`Root.tsx`, `Composition`) and wants HTML; (3) the user links a Remotion repo and asks for the HyperFrames equivalent; (4) the user says "port my Remotion project", "translate this Remotion code", "rewrite as HTML", or "I have a Remotion comp, make it HyperFrames". Skill detects unsupported patterns (useState, useEffect with side effects, async calculateMetadata, third-party React component libraries, `@remotion/lambda` features) and recommends the runtime interop escape hatch instead of attempting a lossy translation.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.


Install skill "remotion-to-hyperframes" with this command: npx skills add heygen-com/hyperframes/heygen-com-hyperframes-remotion-to-hyperframes

Remotion to HyperFrames

Overview

Translate Remotion (React-based) video compositions into HyperFrames (HTML + GSAP) compositions. Most Remotion idioms have direct HyperFrames equivalents — the translation is mechanical for ~80% of typical compositions. This skill encodes the mapping and guards against the lossy 20% by refusing to translate patterns that don't fit HF's seek-driven model and recommending the runtime interop pattern from PR #214 instead.

The skill ships with a tiered test corpus (T1–T4, 4 fixtures total) that grades translations against measured SSIM thresholds. Don't translate without running the eval — a translation that "looks right" but renders 0.05 SSIM lower than the validated baseline is silently wrong.

Workflow

Step 1: Lint the source

Run scripts/lint_source.py over the Remotion source directory. The lint detects patterns that can't translate cleanly:

  • Blockers (refuse + recommend interop): useState, useReducer, useEffect/useLayoutEffect with non-empty deps, async calculateMetadata, third-party React UI libraries (MUI, Chakra, Mantine, antd, shadcn, Radix, NextUI).
  • Warnings (translate after dropping the construct): @remotion/lambda config, delayRender, useCallback, useMemo, custom hooks.
  • Info (translate with note): staticFile, interpolateColors.

If any blocker fires, stop. Read references/escape-hatch.md and surface the recommendation message. Warnings don't stop translation — drop the offending construct in step 3 and note the gap in TRANSLATION_NOTES.md. @remotion/lambda config is the canonical warning case: the skill drops the import + renderMediaOnLambda(...) calls but translates the rest of the composition.
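The three severity classes reduce to a pattern scan over the source. A minimal sketch of that classification (illustrative only — the real scripts/lint_source.py works on parsed source, and these regexes are assumptions, not its actual rules):

```javascript
// Illustrative severity scan; the real lint (scripts/lint_source.py)
// inspects the parsed source rather than matching regexes.
const RULES = [
  { level: "blocker", re: /\buse(State|Reducer)\b/ },
  { level: "blocker", re: /calculateMetadata\s*=\s*async|async\s+function\s+calculateMetadata/ },
  { level: "warning", re: /@remotion\/lambda|\bdelayRender\b|\buse(Callback|Memo)\b/ },
  { level: "info", re: /\bstaticFile\b|\binterpolateColors\b/ },
];

function classify(src) {
  const hits = RULES.filter((r) => r.re.test(src)).map((r) => r.level);
  if (hits.includes("blocker")) return "blocker"; // stop, recommend interop
  if (hits.includes("warning")) return "warning"; // drop construct, note gap
  if (hits.includes("info")) return "info";       // translate with a note
  return "clean";
}

console.log(classify("const [x, setX] = useState(0);")); // "blocker"
console.log(classify('import {renderMediaOnLambda} from "@remotion/lambda";')); // "warning"
```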

Step 2: Plan the translation

Read references/api-map.md — the index of every Remotion API and its HF equivalent or per-topic reference. Identify which topic references you'll need based on what the source uses:

| Source contains | Load reference |
| --- | --- |
| `Composition`, `defaultProps`, `schema`, `calculateMetadata` | parameters.md |
| `Sequence`, `Series`, `Loop`, `AbsoluteFill`, `Freeze` | sequencing.md |
| `useCurrentFrame`, `interpolate`, `spring`, `Easing`, `interpolateColors` | timing.md |
| `Audio`, `Video`, `Img`, `IFrame`, `staticFile`, `delayRender` | media.md |
| `TransitionSeries`, `@remotion/transitions` | transitions.md |
| `@remotion/lottie` | lottie.md |
| `@remotion/google-fonts/<Family>`, `Font.loadFont`, `@font-face` | fonts.md |

Don't load all of them — load only what the specific source needs.

Step 3: Generate the HF composition

Emit index.html with:

  • Root <div id="stage"> carrying the composition's data-composition-id, data-start="0", data-duration (in seconds), data-fps, data-width, data-height, plus one data-* per scalar prop.
  • A flat list of scene divs with data-start / data-duration / data-track-index.
  • Inline <style> for layout; CSS sets the from state of every animated property.
  • A single <script> tag at the bottom containing one gsap.timeline({paused: true}). Every Remotion useCurrentFrame() derivation becomes a tween on this timeline at the right offset.
  • window.__timelines["<composition-id>"] = tl; registers the timeline with HF's runtime.

Custom React subcomponents are inlined as repeated HTML, using the prop interface as the template (see parameters.md for the per-instance data-* pattern).
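The heart of the tween emission is a frame-to-seconds conversion: every Remotion frame range becomes a GSAP position/duration pair on the paused timeline. A sketch of that mapping (the function name is illustrative, not part of the skill):

```javascript
// Convert a Remotion frame range into a GSAP timeline offset + duration.
// e.g. interpolate(frame, [15, 45], [0, 1]) at 30 fps becomes a tween
// placed at 0.5s lasting 1s: tl.to(el, {opacity: 1, duration: 1}, 0.5)
function frameRangeToTween(startFrame, endFrame, fps) {
  return {
    position: startFrame / fps,              // GSAP position parameter
    duration: (endFrame - startFrame) / fps, // tween length in seconds
  };
}

console.log(frameRangeToTween(15, 45, 30)); // { position: 0.5, duration: 1 }
```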

Step 4: Validate

Run the eval harness — see references/eval.md for the full guide. Quick path:

```sh
# Render Remotion baseline (after npm install in the fixture)
cd remotion-src && npx remotion render <CompositionId> out/baseline.mp4

# Render HF translation
cd ../hf-src && npx hyperframes render --output ../hf.mp4

# SSIM diff
../../scripts/render_diff.sh ./remotion-src/out/baseline.mp4 ./hf.mp4 ./diff
```

Threshold: ~0.02 below p05 of the source's complexity tier (see eval.md's validated thresholds table). If the diff fails, run scripts/frame_strip.sh to see which frames diverged, then re-read the relevant timing/sequencing/media reference.
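The gate — roughly 0.02 below the tier's p05 — reduces to a one-line comparison (function name and numbers are illustrative; take the real p05 values from eval.md's table):

```javascript
// Pass if the measured mean SSIM clears (tier p05 - margin).
// The 0.02 margin mirrors the rule above; p05 values come from eval.md.
function ssimPasses(meanSSIM, tierP05, margin = 0.02) {
  return meanSSIM >= tierP05 - margin;
}

console.log(ssimPasses(0.974, 0.97)); // true  — clears the ~0.95 gate
console.log(ssimPasses(0.90, 0.97));  // false — below the gate
```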

Critical: both renders must use a matching pixel format. Set `Config.setVideoImageFormat("png")` and `Config.setColorSpace("bt709")` in the Remotion source's `remotion.config.ts` — otherwise the diff measures encoder differences (~0.05 SSIM hit), not translation fidelity.
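Written out as a config fragment, the two calls named above look like this (import path assumes Remotion v4's `@remotion/cli/config`; check that your Remotion version supports `Config.setColorSpace`):

```typescript
// remotion.config.ts — force a lossless intermediate format and a fixed
// color space so the SSIM diff measures translation fidelity, not encoding.
import { Config } from "@remotion/cli/config";

Config.setVideoImageFormat("png");
Config.setColorSpace("bt709");
```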

Step 5: Document gaps

Anything that didn't translate cleanly (volume ramps dropped, custom presentations approximated, fonts substituted) gets a TRANSLATION_NOTES.md written next to the HF output. See references/limitations.md for the format.

What this skill explicitly does NOT do

  • Translate React state machines. Compositions that drive animation via useState + useEffect are not deterministic frame-capture targets in HyperFrames' seek-driven model. Recommend the runtime interop pattern.
  • Run Remotion's render pipeline alongside HyperFrames. That's the runtime interop pattern from PR #214 — a separate solution for compositions that fail this skill's lint.

(@remotion/lambda is not a blocker — Lambda config is deployment, not animation. The skill drops it as a warning and translates the rest. See references/escape-hatch.md.)

How to grade your own translation

Run the test corpus orchestrator:

```sh
./assets/test-corpus/run.sh
```

It runs T1, T2, T3 (render + diff) and T4 (lint validation), prints a per-tier pass/fail table, and emits an aggregate JSON report. Use this to verify the skill is working end-to-end on a clean checkout — and as a regression check after editing any reference.

Validated baseline (as of 2026-04-27):

| Tier | Composition shape | Mean SSIM | Threshold |
| --- | --- | --- | --- |
| T1 | single-element fade-in | 0.974 | 0.95 |
| T2 | multi-scene + spring + audio + image | 0.985 | 0.95 |
| T3 | data-driven, custom subcomponents, count-up | 0.953 | 0.90 |
| T4 | escape-hatch (8 lint cases) | 8/8 pass | n/a |


Related Skills

Related by shared tags or category signals.

  • hyperframes-cli (Coding)
  • hyperframes (General)
  • gsap (General)
  • website-to-hyperframes (General)