feat: Add organic variation to illustration batch generation #34

@madjin

Description
Problem

Daily batch illustration outputs look similar because style/character/prompt selection is deterministic. Running illustrate.py --batch on different days produces visually repetitive results despite different content.

Root Cause

  • Hardcoded style mapping: github_updates always → dataviz, discord_updates always → comic_panel
  • Fixed character selection: Same characters per category every day
  • Deterministic scene prompts: Same conceptual framing regardless of content

Proposed Solution: Daily Creative Brief

Inject a "creative brief" into scene generation that varies how to interpret content, not just what style to render it in.

Creative Brief Components

Interpretive Lenses (HOW to see the story):

  • Human emotion behind the news
  • Journey or transformation
  • Tension or conflict
  • Collaboration and teamwork
  • Breakthrough or discovery moment
  • Scale and magnitude
  • Individual impact

Compositional Approaches:

  • Bird's eye view
  • Intimate close-up
  • Wide establishing shot
  • Dynamic diagonal composition
  • Silhouette against backdrop
  • Split frame (cause/effect)

Seasonal Mood (derived from date, no API calls):

  • Winter: stillness, contemplative, cool tones
  • Spring: energy, renewal, fresh growth
  • Summer: vibrancy, peak activity, warm light
  • Autumn: transition, harvest, golden hues
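The seasonal mood lookup can stay purely local. A minimal sketch, assuming Northern Hemisphere meteorological seasons (the real `get_seasonal_mood` in illustrate.py may slice the calendar differently):

```python
from datetime import datetime

# Hypothetical mood strings matching the table above; names are illustrative.
SEASONAL_MOODS = {
    "winter": "stillness, contemplative, cool tones",
    "spring": "energy, renewal, fresh growth",
    "summer": "vibrancy, peak activity, warm light",
    "autumn": "transition, harvest, golden hues",
}

# Month -> season, assuming meteorological seasons (Dec-Feb = winter, etc.)
MONTH_TO_SEASON = {12: "winter", 1: "winter", 2: "winter",
                   3: "spring", 4: "spring", 5: "spring",
                   6: "summer", 7: "summer", 8: "summer",
                   9: "autumn", 10: "autumn", 11: "autumn"}

def get_seasonal_mood(date: datetime) -> str:
    """Map a date to a seasonal mood string -- no API calls, no network."""
    season = MONTH_TO_SEASON[date.month]
    return f"{season}: {SEASONAL_MOODS[season]}"
```

Because the mood is a pure function of the month, it composes cleanly with the date-seeded randomness below.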

Additional Changes

  1. Style rotation: Use existing suggested_styles from style-presets.json (currently only used in interactive mode)
  2. Character shuffle: Date-seeded randomization of character selection and count
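The character shuffle could look like the sketch below. The roster and count range are placeholders, not names from illustrate.py:

```python
import random
from datetime import datetime

# Hypothetical character roster; the real list lives in the pipeline config.
CHARACTERS = ["eliza", "spartan", "degen", "ruby", "jimmy"]

def pick_characters(date: datetime) -> list[str]:
    """Pick a varying subset of characters, deterministically seeded by date."""
    rng = random.Random(int(date.strftime("%Y%m%d")))  # same date -> same picks
    count = rng.randint(1, 3)  # character count also varies day to day
    return rng.sample(CHARACTERS, count)
```

Seeding a local `random.Random` instance (rather than the module-level RNG) keeps the shuffle reproducible even if other code consumes randomness.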

Combinatorial Variety

7 lenses × 6 compositions × 4 seasons = 168 unique creative briefs before any repetition

Reproducibility

All randomness is date-seeded:

  • Same date = same output (reproducible builds)
  • Different date = different interpretation (organic variety)

Implementation

Single file change: scripts/posters/illustrate.py

import random
from datetime import datetime

def generate_creative_brief(date: datetime) -> dict:
    """Generate today's unique creative direction."""
    seed = int(date.strftime("%Y%m%d"))  # e.g. 20240101 -- stable for the whole day
    rng = random.Random(seed)
    return {
        "lens": rng.choice(DAILY_LENSES),
        "composition": rng.choice(COMPOSITIONS),
        "mood": get_seasonal_mood(date),  # derived from the date, no API calls
    }

Then inject into generate_scene_from_content() prompt.
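The injection step could be as simple as appending the brief's fields to the scene prompt. A sketch, assuming a helper name and signature not taken from illustrate.py:

```python
# Hypothetical helper; generate_scene_from_content()'s real prompt assembly
# may interleave these fields differently.
def build_scene_prompt(content: str, brief: dict) -> str:
    """Append the daily creative brief to the base scene prompt."""
    return (
        f"Illustrate: {content}\n"
        f"Interpretive lens: {brief['lens']}\n"
        f"Composition: {brief['composition']}\n"
        f"Seasonal mood: {brief['mood']}"
    )
```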

What We're NOT Doing

  • ❌ External API calls for context (reality_context.py)
  • ❌ A/B generation with vision model selection
  • ❌ Historical deduplication tracking
  • ❌ Complex metaphor banks

Related: Poster pipeline improvements in feat/cdn-media-pipeline branch
