The Ember video pipeline turns a documentation page into a narrated screen-recording video. It chains four stages — script generation, HeyGen avatar rendering, Playwright screen recording, and FFmpeg compositing — and exposes them through two modules:
| Module | Role |
| --- | --- |
| compositor.ts | FFmpeg wrapper that overlays an avatar as picture-in-picture on a screen recording and concatenates intro/outro clips |
| orchestrator.ts | End-to-end stage coordinator with retry, resume, and progress callbacks |

Compositor

CompositeOptions

Passed to VideoCompositor.composite() to define a single compositing job.
| Field | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| avatarVideoPath | string | Yes | | Local file path or HTTP URL of the HeyGen-generated avatar MP4. FFmpeg handles both natively; no pre-download is needed. |
| screenRecordingPath | string | Yes | | Absolute path to the Playwright-generated screen recording (video-only MP4). |
| outputPath | string | No | Auto-generated in outputDir | Absolute path for the composited output file. A UUID-based name is used when omitted. |
| avatarPosition | 'bottom-right' \| 'bottom-left' \| 'top-right' \| 'top-left' | No | 'bottom-right' | Which corner of the frame the avatar picture-in-picture occupies. |
| avatarScale | number | No | 0.25 | Avatar overlay size as a fraction of the screen width (greater than 0, up to 1). For example, 0.25 makes the avatar 25% as wide as the screen. |
| introPath | string | No | | Path to an MP4 clip to prepend to the composited body. See Generating intro/outro clips. |
| outroPath | string | No | | Path to an MP4 clip to append to the composited body. See Generating intro/outro clips. |
Example:
```typescript
import { VideoCompositor } from './video-pipeline/compositor.js';

const compositor = new VideoCompositor({ outputDir: '/tmp/my-videos' });

const result = await compositor.composite({
  avatarVideoPath: 'https://cdn.heygen.com/videos/abc123.mp4',
  screenRecordingPath: '/tmp/recordings/demo.mp4',
  avatarPosition: 'bottom-right',
  avatarScale: 0.25,
  introPath: '/assets/video-templates/intro.mp4',
  outroPath: '/assets/video-templates/outro.mp4',
});

console.log(result.outputPath);       // absolute path to the composited MP4
console.log(result.durationSeconds);  // total runtime in seconds
```
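The avatarScale field maps linearly to overlay width, as described in the table above. A minimal sketch of that relationship (overlayWidth is a hypothetical helper for illustration, not part of the compositor API):

```typescript
// Hypothetical helper: avatar overlay width in pixels from avatarScale,
// per the table above (e.g. 0.25 → 25% of the screen width).
function overlayWidth(screenWidthPx: number, avatarScale = 0.25): number {
  if (avatarScale <= 0 || avatarScale > 1) {
    throw new RangeError('avatarScale must be greater than 0 and at most 1');
  }
  return Math.round(screenWidthPx * avatarScale);
}
```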

CompositorConfig

Passed to the VideoCompositor constructor to configure FFmpeg options and output location.
| Field | Type | Required | Default |
| --- | --- | --- | --- |
| outputDir | string | Yes | |
| ffmpegPath | string | No | 'ffmpeg' (must be on PATH) |
| ffprobePath | string | No | 'ffprobe' (must be on PATH) |
| videoCodec | string | No | 'libx264' |
| audioCodec | string | No | 'aac' |
| crf | number | No | 23 |
Use createVideoCompositor() as a convenience factory; it reads FFMPEG_PATH and FFPROBE_PATH from the environment and defaults outputDir to '/tmp/ezforge-compositor' when omitted.
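The defaulting behaviour the factory performs can be sketched as follows. This is a hypothetical re-implementation for illustration only; the real factory is exported from compositor.ts:

```typescript
// Sketch of the documented defaults of createVideoCompositor():
// FFMPEG_PATH / FFPROBE_PATH are read from the environment, and
// outputDir falls back to '/tmp/ezforge-compositor' when omitted.
interface CompositorDefaults {
  outputDir: string;
  ffmpegPath: string;
  ffprobePath: string;
}

function resolveCompositorDefaults(outputDir?: string): CompositorDefaults {
  return {
    outputDir: outputDir ?? '/tmp/ezforge-compositor',
    ffmpegPath: process.env.FFMPEG_PATH ?? 'ffmpeg',
    ffprobePath: process.env.FFPROBE_PATH ?? 'ffprobe',
  };
}
```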

Orchestrator

PipelineInput

Passed to PipelineOrchestrator.run() to kick off a full pipeline run.
| Field | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| title | string | Yes | | Documentation page title. Used as the video title and passed to the script generator. |
| content | string | Yes | | Documentation page body (Markdown or plain text). The script generator uses this to write the narration. |
| recordingSteps | RecordingStep[] | Yes | | Playwright recording steps for the screen recording stage. See the screen recorder docs for the step schema. |
| avatarId | string | No | OrchestratorConfig.defaultAvatarId | HeyGen avatar ID to use. Required if not set in OrchestratorConfig. |
| voiceId | string | No | Avatar default | HeyGen voice ID. Omit to use the avatar’s built-in voice. |
| targetDurationSeconds | number | No | 120 | Hint to the script generator for how long the narration should run. |
| audience | string | No | | Free-form audience description passed to the script generator (e.g., 'senior backend engineers'). |
| introPath | string | No | | Forwarded directly to CompositeOptions.introPath. |
| outroPath | string | No | | Forwarded directly to CompositeOptions.outroPath. |
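Putting the table together, a minimal input object might look like the following. The recordingSteps entry shown is a placeholder guess at the RecordingStep shape; consult the screen recorder docs for the real step schema:

```typescript
// Illustrative PipelineInput value. Field names come from the table above;
// the recordingSteps entry is an assumed shape, not the documented schema.
const input = {
  title: 'Getting started with webhooks',
  content: '# Webhooks\n\nWebhooks notify your server when an event fires ...',
  recordingSteps: [
    { action: 'goto', url: 'https://example.com/docs/webhooks' }, // assumed shape
  ],
  targetDurationSeconds: 90,
  audience: 'senior backend engineers',
  introPath: '/assets/video-templates/intro.mp4',
};
```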

PipelineResult

Returned by PipelineOrchestrator.run() on success.
| Field | Type | Description |
| --- | --- | --- |
| pipelineId | string | Unique ID for this pipeline run. |
| outputVideoPath | string | Absolute path to the final composited MP4. |
| durationSeconds | number | Total video runtime in seconds. |
| script | GeneratedScript | The script object produced by the script-generation stage. |
| screenRecordingPath | string | Absolute path to the raw screen recording. |
| avatarVideoUrl | string | HeyGen CDN URL for the avatar-only video. |

stateListener callback

The stateListener field in OrchestratorConfig is an optional callback that fires on every stage transition (pending → running → completed/failed). Use it to stream progress updates to a client, log pipeline state, or persist the PipelineState for crash-recovery.
```typescript
import * as fs from 'fs';
import {
  createPipelineOrchestrator,
  PipelineState,
} from './video-pipeline/orchestrator.js';

const orchestrator = await createPipelineOrchestrator({
  pipeline: { /* ... */ },
  stateListener: (state: PipelineState) => {
    // `state` is a deep clone — safe to mutate or serialise
    const { stages } = state;
    console.log(
      `[${state.id}] script=${stages.script.status}` +
      ` avatar=${stages.avatar.status}` +
      ` screen=${stages.screen.status}` +
      ` composite=${stages.composite.status}`,
    );

    // Persist for crash-recovery (PipelineState is plain JSON)
    fs.writeFileSync(`/tmp/pipeline-${state.id}.json`, JSON.stringify(state));
  },
});
```
Stage lifecycle:
```
pending → running → completed
                 └→ failed  (retried up to maxAttempts times)
```
Each StageState object carries:
| Field | Type | Description |
| --- | --- | --- |
| status | StageStatus | 'pending', 'running', 'completed', or 'failed' |
| result | T \| undefined | Stage output (populated on completed) |
| error | string \| undefined | Last error message (populated on failed) |
| attempts | number | Number of attempts made so far |
| startedAt | string \| undefined | ISO-8601 timestamp when the current attempt started |
| completedAt | string \| undefined | ISO-8601 timestamp when the stage completed successfully |
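Because every stage carries a status, a listener can reduce the four stages to a single progress figure for a coarse progress bar. A hypothetical helper, not part of the orchestrator API (stage names follow the listener example above):

```typescript
type StageStatus = 'pending' | 'running' | 'completed' | 'failed';

// Minimal stand-in for the StageState fields used here.
interface StageStateLike {
  status: StageStatus;
}

// Hypothetical helper: fraction of stages that have completed.
function completedFraction(stages: Record<string, StageStateLike>): number {
  const all = Object.values(stages);
  if (all.length === 0) return 0;
  return all.filter((s) => s.status === 'completed').length / all.length;
}
```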

Resuming a failed run

PipelineState is deliberately plain JSON — serialise it to disk or a database, then pass it back as the initial argument to skip already-completed stages:
```typescript
let savedState: PipelineState | undefined;

const orchestrator = await createPipelineOrchestrator({
  pipeline: { /* ... */ },
  stateListener: (state: PipelineState) => { savedState = state; },
});

// First run (may crash mid-pipeline)
await orchestrator.run(input).catch(() => {});

// Resume — completed stages are skipped automatically
const result = await orchestrator.run(input, savedState);
```
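An in-memory savedState is lost on a hard crash, so a more robust variant persists each snapshot to disk (as in the stateListener example earlier) and reloads it on startup. The load side can be sketched as plain file handling, independent of the orchestrator; loadSavedState is a hypothetical helper:

```typescript
import * as fs from 'fs';

// Hypothetical helper: load a previously persisted state snapshot, or
// return undefined on a fresh start. The parsed shape is whatever the
// stateListener wrote (PipelineState is plain JSON).
function loadSavedState<T>(path: string): T | undefined {
  if (!fs.existsSync(path)) return undefined;
  return JSON.parse(fs.readFileSync(path, 'utf8')) as T;
}
```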

Generating intro/outro clips

The assets/video-templates/ directory ships two FFmpeg filter templates for building standardised intro and outro clips from static image assets.
| File | Effect | Duration |
| --- | --- | --- |
| intro-filter.txt | Fade in from black (0.5 s), hold, fade to black (0.5 s) | 3 s |
| outro-filter.txt | Fade in from black (0.5 s), hold, fade to black (1 s) | 4 s |

Generating the intro clip

Requires a title_card.png at 1280×720 resolution.
```bash
ffmpeg -loop 1 -t 3 -i assets/video-templates/title_card.png \
       -f lavfi -t 3 -i "anullsrc=r=44100:cl=stereo" \
       -filter_complex "$(cat assets/video-templates/intro-filter.txt)" \
       -map "[v_intro]" -map "[a_intro]" \
       -c:v libx264 -c:a aac -pix_fmt yuv420p \
       assets/video-templates/intro.mp4
```

Generating the outro clip

Requires an end_card.png at 1280×720 resolution.
```bash
ffmpeg -loop 1 -t 4 -i assets/video-templates/end_card.png \
       -f lavfi -t 4 -i "anullsrc=r=44100:cl=stereo" \
       -filter_complex "$(cat assets/video-templates/outro-filter.txt)" \
       -map "[v_outro]" -map "[a_outro]" \
       -c:v libx264 -c:a aac -pix_fmt yuv420p \
       assets/video-templates/outro.mp4
```
Once generated, pass the output paths to PipelineInput.introPath / PipelineInput.outroPath (or directly to CompositeOptions.introPath / CompositeOptions.outroPath).
The filter template files are reference assets — they do not ship pre-built MP4 clips. You must run the FFmpeg commands above (substituting your own image assets) before the intro/outro feature can be used in a pipeline run.