
feat: Odin talking head for Icelandic onboarding#224

Open
Peleke wants to merge 1 commit into staging from feature/odin-talking-head-onboarding
Conversation


Peleke commented Jan 7, 2026

Summary

  • Adds pre-rendered talking head video integration featuring Odin as a guide for Icelandic language learners
  • Creates reusable TalkingHeadPlayer component with video controls, autoplay, and error handling
  • Integrates Odin videos into 3 onboarding steps: welcome, assessment chat, and congratulations
  • Includes documentation for generating videos with ComfyUI + Sonic

Changes

New Components

  • TalkingHeadPlayer.tsx - Reusable video player with progress bar, mute, replay controls
  • OdinTalkingHead - Pre-configured wrapper for Odin video clips
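As a sketch of how `OdinTalkingHead` might resolve its clips, assuming the clip names and path layout implied by the manifest and the deploy target in "Next Steps" (the helper name and clip list here are illustrative, not the actual component API):

```typescript
// Hypothetical clip catalog; the authoritative list lives in VIDEO_MANIFEST.md.
const ODIN_CLIPS = ["welcome", "assessment", "congratulations"] as const;
type OdinClip = (typeof ODIN_CLIPS)[number];

// Resolve a clip name to its public asset path under public/videos/odin/.
function odinVideoSrc(clip: OdinClip): string {
  return `/videos/odin/${clip}.mp4`;
}
```

Keeping the clip names in a `const` union like this lets TypeScript reject typos at compile time when a step requests a clip.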

Modified Files

  • OnboardingWelcome.tsx - New "odin-welcome" step with Norse-themed dark UI
  • AssessmentChat.tsx - Video panel showing Odin for each AI response (Icelandic only)
  • complete/page.tsx - Congratulations video with bilingual transcript

Documentation

  • VIDEO_MANIFEST.md - Required video files, scripts, and specifications
  • sonic-odin-workflow.md - Step-by-step guide for generating videos

User Flow (Icelandic)

Select Icelandic → Odin Welcome Video → Goals → Assessment Chat (with Odin) → Congratulations Video

Screenshots

N/A - Videos need to be generated using Sonic workflow

Test plan

  • Select Icelandic in onboarding - verify Odin welcome screen appears
  • Verify video player controls work (play/pause, mute, replay)
  • Complete assessment chat - verify Odin video panel appears
  • Complete onboarding - verify congratulations video shows
  • Test graceful degradation when videos are missing (retry button)

Next Steps

  1. Generate Odin portrait image (2.5D semi-realistic)
  2. Generate audio with ElevenLabs (Icelandic)
  3. Run Sonic workflow to create video clips
  4. Deploy videos to public/videos/odin/

🤖 Generated with Claude Code

Integrates pre-rendered talking head videos featuring Odin as a guide
for Icelandic language learners during onboarding.

Components:
- TalkingHeadPlayer: Reusable video player with controls, autoplay,
  progress bar, and error handling
- OdinTalkingHead: Pre-configured wrapper for Odin video clips

Integration points:
- OnboardingWelcome: New "odin-welcome" step with Norse-themed UI
  showing welcome video after Icelandic selection
- AssessmentChat: Video panel above chat showing Odin for each AI response
- CompletePage: Congratulations video with Icelandic transcript

Documentation:
- VIDEO_MANIFEST.md: Required clips, scripts, and specifications
- sonic-odin-workflow.md: Guide for generating videos with ComfyUI + Sonic

The video files are currently placeholders; actual clips are to be generated
using the Sonic workflow documented in docs/sonic-odin-workflow.md

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

Peleke commented Jan 14, 2026

🎭 ComfyUI MCP Serverless - Ready for Odin

The ComfyUI MCP serverless deployment is live! Here's what you need for the talking head integration:

Endpoint Info

| Item | Value |
| --- | --- |
| Endpoint ID | `ppvdlc4x2x6deq` |
| Base URL | `https://api.runpod.ai/v2/ppvdlc4x2x6deq` |
| Auth | Bearer token (`RUNPOD_API_KEY`) |
| PR | Peleke/comfyui-mcp#9 |

API Calls

1. Health Check:

```
POST /runsync
{"input": {"action": "health"}}
```

2. Portrait Generation:

```
POST /runsync
{"input": {
  "action": "portrait",
  "description": "Odin, Norse god, one eye, long gray beard, wise ancient appearance, 2.5D semi-realistic style",
  "width": 768, "height": 1024, "steps": 20
}}
```

3. TTS (Voice Cloning):

```
POST /runsync
{"input": {
  "action": "tts",
  "text": "Velkomin, traveler...",
  "voice_reference": "<path_or_url_to_voice_sample>"
}}
```

4. Lip-sync (async):

```
POST /run
{"input": {
  "action": "lipsync",
  "portrait_image": "<base64_or_url>",
  "audio": "<base64_or_url>"
}}
# Poll with GET /status/{job_id}
```
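Since lip-sync runs via the async `/run` endpoint, the caller has to poll `/status/{job_id}` until the job finishes. A minimal polling helper might look like the sketch below; `pollJob` and the exact status shape are assumptions (RunPod serverless jobs report states like `IN_QUEUE`, `IN_PROGRESS`, `COMPLETED`, `FAILED`), and the status fetcher is injected so the HTTP layer stays out of the loop:

```typescript
// Assumed status shape for a RunPod serverless job; the real response
// carries more fields, but only these matter for polling.
type JobStatus = {
  status: "IN_QUEUE" | "IN_PROGRESS" | "COMPLETED" | "FAILED";
  output?: unknown;
};

// Poll until the job completes, fails, or the attempt budget runs out.
// getStatus is injected (e.g. a fetch against GET /status/{job_id}).
async function pollJob(
  getStatus: (jobId: string) => Promise<JobStatus>,
  jobId: string,
  { intervalMs = 2000, maxAttempts = 150 }: { intervalMs?: number; maxAttempts?: number } = {}
): Promise<unknown> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const s = await getStatus(jobId);
    if (s.status === "COMPLETED") return s.output;
    if (s.status === "FAILED") throw new Error(`Job ${jobId} failed`);
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Job ${jobId} timed out after ${maxAttempts} polls`);
}
```

Injecting `getStatus` also makes the backoff loop trivially testable with a stub, without hitting the endpoint.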

Current State

  • ✅ Endpoint created and healthy
  • ⚠️ Workers scaled to 0 (no charges) - wake before use
  • ⚠️ Models not yet loaded (need to add SDXL/Flux, F5-TTS, SONIC)

To Generate Odin Videos

```bash
# 1. Wake workers
gh workflow run gpu-control.yml -f action=wake -R Peleke/comfyui-mcp

# 2. Generate portrait, TTS, lip-sync via API

# 3. Download videos to interlinear/public/videos/odin/

# 4. Sleep workers (save $$$)
gh workflow run gpu-control.yml -f action=sleep -R Peleke/comfyui-mcp
```

TypeScript Client

Available at comfyui-mcp/src/runpod-serverless-client.ts:

```typescript
const client = new RunPodServerlessClient({ endpointId, apiKey });
await client.portrait({ description: "Odin..." });
await client.tts({ text: "...", voice_reference: "..." });
await client.lipsync({ portrait_image: "...", audio: "..." });
```
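Chained together, those three calls produce one finished clip per script line. The sketch below shows the shape of that pipeline against an assumed client interface; the method names come from the snippet above, but the return shapes (`image`, `audio`, `video`) are guesses, not the actual API of `runpod-serverless-client.ts`:

```typescript
// Assumed return shapes; check the real client before relying on these.
interface OdinClient {
  portrait(args: { description: string }): Promise<{ image: string }>;
  tts(args: { text: string; voice_reference: string }): Promise<{ audio: string }>;
  lipsync(args: { portrait_image: string; audio: string }): Promise<{ video: string }>;
}

// One talking-head clip: portrait → TTS → lip-sync, in dependency order.
async function generateOdinClip(
  client: OdinClient,
  script: string,
  voiceRef: string
): Promise<string> {
  const { image } = await client.portrait({
    description:
      "Odin, Norse god, one eye, long gray beard, wise ancient appearance, 2.5D semi-realistic style",
  });
  const { audio } = await client.tts({ text: script, voice_reference: voiceRef });
  const { video } = await client.lipsync({ portrait_image: image, audio });
  return video;
}
```

The portrait only needs to be generated once in practice; it could be cached and reused across all clips to keep Odin's appearance consistent.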

From comfyui-mcp PR #9
