# Website Visual Scorer — Claude Skill

A structured 100-point visual audit system for B2B websites, built as a Claude Skill.

Turn subjective client feedback into objective scores. Evaluate any website across 5 conversion-focused dimensions and get actionable findings in minutes — not hours.


## The Problem This Solves

If you build websites for B2B clients — especially in manufacturing, export, or tech — you know this situation:

> "I don't like the colors."
> "Can we make it look more like our competitor?"
> "It just doesn't feel right."

These subjective requests lead to endless revision cycles. This skill gives you data to anchor the conversation: a 100-point score, a 20-item rubric, and industry benchmarks — so changes are evaluated on impact, not opinion.


## What It Does

Given any URL, the skill:

1. Fetches the page content via `web_fetch` (with a fallback to `web_search`)
2. Extracts 40+ structural, trust, content, and technical signals
3. Scores across 5 weighted dimensions (20 sub-items, each 1–10)
4. Classifies findings into Block / Improve / Keep tiers
5. Outputs a structured report with dimension bars, per-item reasoning, and a prioritized action list
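The five steps above can be sketched as plain Python. Every function name here is hypothetical, and the fetch/extract stages are stubbed, since the real skill runs inside Claude rather than as a standalone script:

```python
# Illustrative skeleton of the evaluation flow; all names are hypothetical.
def fetch_page(url: str) -> str:
    # Step 1: stand-in for web_fetch (web_search as fallback).
    return "<html>...</html>"

def extract_signals(html: str) -> dict:
    # Step 2: the real skill extracts 40+ signals; two shown here.
    return {"has_primary_cta": True, "hero_value_prop": True}

def score_rubric(signals: dict) -> list[dict]:
    # Step 3: the real rubric has 20 sub-items, each scored 1-10.
    return [
        {"item": "Primary CTA visibility",
         "score": 8 if signals["has_primary_cta"] else 2},
        {"item": "Value prop above the fold",
         "score": 9 if signals["hero_value_prop"] else 3},
    ]

def classify(finding: dict) -> str:
    # Step 4: low scores block, middling ones go on the improvement list.
    s = finding["score"]
    return "Block" if s <= 3 else "Improve" if s <= 6 else "Keep"

def evaluate(url: str) -> list[dict]:
    # Step 5: the tiered findings feed the structured report.
    findings = score_rubric(extract_signals(fetch_page(url)))
    for f in findings:
        f["tier"] = classify(f)
    return findings
```

The Block/Improve/Keep thresholds shown are guesses; the skill's actual tiering rules live in `references/scoring-rubric.md`.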

### Scoring Dimensions

| Dimension | Weight | What It Measures |
|---|---|---|
| Visual Hierarchy | 25% | Can visitors find the value prop in 3 seconds? |
| Brand Consistency | 20% | Is the visual identity coherent and industry-appropriate? |
| Conversion Guidance | 25% | Do CTAs, trust signals, and forms drive inquiry? |
| Readability | 20% | Is content scannable for a busy B2B buyer? |
| Performance Perception | 10% | Does the page feel fast and uncluttered? |
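The weights sum to 100%, so the overall score is a plain weighted average. A minimal sketch, with illustrative key names and made-up per-dimension scores:

```python
# Weights from the dimensions table above; the key names are illustrative.
WEIGHTS = {
    "visual_hierarchy": 0.25,
    "brand_consistency": 0.20,
    "conversion_guidance": 0.25,
    "readability": 0.20,
    "performance_perception": 0.10,
}

def weighted_total(dimension_scores: dict[str, float]) -> float:
    """Combine per-dimension scores (0-100) into the overall 100-point score."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must sum to 100%
    return sum(WEIGHTS[d] * dimension_scores[d] for d in WEIGHTS)
```

For example, a site scoring 80 / 70 / 75 / 85 / 60 across the five dimensions lands at 75.75 overall.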

### Score Interpretation

| Score | Verdict |
|---|---|
| 85–100 | ✅ Excellent — ready to ship |
| 70–84 | 🟡 Good — minor fixes, then ship |
| 55–69 | 🟠 Needs improvement — targeted fixes required |
| < 55 | 🔴 Rebuild-grade — significant rework required |
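The bands map to code straightforwardly; a sketch (verdict strings are English renderings of the table above):

```python
def verdict(score: float) -> str:
    # Bands from the score-interpretation table above.
    if score >= 85:
        return "Excellent"          # ready to ship
    if score >= 70:
        return "Good"               # minor fixes, then ship
    if score >= 55:
        return "Needs improvement"  # targeted fixes required
    return "Rebuild-grade"          # significant rework required
```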

## Quick Start

### Option 1 — Use in Claude.ai (Recommended)

1. Download `website-visual-scorer.skill` from this repo
2. In Claude.ai, go to **Settings → Skills → Install Skill**
3. Upload the `.skill` file
4. Start a new conversation and say: `Evaluate https://yoursite.com`

Claude will fetch the page, run the full 20-item audit, and return a structured report.

### Option 2 — Use the Raw Skill Files

If you're building on the Claude API or Claude Code, copy the contents of `website-visual-scorer/SKILL.md` into your system prompt or skill directory.
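A minimal sketch of the API route using the Anthropic Python SDK; the file path, model name, and prompt wiring are assumptions for illustration, not part of the skill:

```python
from pathlib import Path

def build_system_prompt(skill_dir: str = "website-visual-scorer") -> str:
    # The skill file becomes the system prompt verbatim.
    return Path(skill_dir, "SKILL.md").read_text(encoding="utf-8")

def evaluate_site(url: str) -> str:
    import anthropic  # pip install anthropic
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    response = client.messages.create(
        model="claude-sonnet-4-5",  # illustrative model name
        max_tokens=4096,
        system=build_system_prompt(),
        messages=[{"role": "user", "content": f"Evaluate {url}"}],
    )
    return response.content[0].text
```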


## Example Output

See `examples/` for real evaluation reports:

| Site | Industry | Score | Report |
|---|---|---|---|
| GNSource | Defense / Precision Tech | 88 | view |
| TTI Fiber | Fiber Optic Manufacturing | 79 | view |
| BisonConvey v2 | Industrial Conveyor Mfg | 76 | view |
| RapidDirect | CNC Manufacturing | 68 | view |
| BisonConvey v1 | Industrial Conveyor Mfg | 56 | view |

## Comparison Mode

Provide 2+ URLs to get a side-by-side leaderboard:

`Compare https://site-a.com and https://site-b.com`

Output includes dimension-by-dimension delta and cross-site best practices.
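The dimension-by-dimension delta is a simple subtraction; a sketch with made-up scores for two sites:

```python
def dimension_deltas(a: dict[str, float], b: dict[str, float]) -> dict[str, float]:
    """Positive values mean the first site leads on that dimension."""
    return {dim: a[dim] - b[dim] for dim in a}

# Hypothetical per-dimension scores for two sites:
site_a = {"visual_hierarchy": 8.0, "conversion_guidance": 6.0}
site_b = {"visual_hierarchy": 7.0, "conversion_guidance": 7.5}
# dimension_deltas(site_a, site_b)
#   -> {"visual_hierarchy": 1.0, "conversion_guidance": -1.5}
```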


## File Structure

website-visual-scorer/
├── SKILL.md                          # Main skill — execution protocol
└── references/
    ├── scoring-rubric.md             # 20-item scoring criteria with band descriptions
    └── industry-benchmarks.md        # Benchmark template (add your own data)

## What's Open Source

| File | Status | Notes |
|---|---|---|
| `SKILL.md` | ✅ Open | Full execution protocol |
| `references/scoring-rubric.md` | ✅ Open | Complete 20-item rubric |
| `references/industry-benchmarks.md` | ✅ Template | Empty template — populate with your own evaluations |

The scoring framework is fully open. The benchmark database (real scores from real sites) is not included — you build your own as you evaluate more sites. This is intentional: your benchmark data is your moat.


## Red Flags — Auto-Escalated to Block

These findings are automatically flagged as blocking issues regardless of overall score:

- Lorem ipsum placeholder text anywhere on the page
- AI-generated image filenames exposed (`Gemini_Generated_Image_*`, `DALL-E-*`, etc.)
- Certification image belonging to a different company than the site owner
- Spelling errors in navigation menu items
- Primary CTA button pointing to `#` (dead link)
- Non-target-language alt text revealing incomplete internationalization
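Several of these red flags are mechanically detectable. The regex patterns below are illustrative approximations, not the skill's actual rules:

```python
import re

# Illustrative detectors for three of the red flags above.
RED_FLAGS = {
    # Placeholder copy left in production.
    "lorem_ipsum": re.compile(r"\blorem\s+ipsum\b", re.IGNORECASE),
    # Default filenames from image generators left in src attributes.
    "ai_image_filename": re.compile(r"(Gemini_Generated_Image_|DALL-?E[-_])",
                                    re.IGNORECASE),
    # Primary CTA anchored to "#" — a dead link.
    "dead_cta": re.compile(r'<a[^>]+href="#"', re.IGNORECASE),
}

def find_red_flags(html: str) -> list[str]:
    """Return the names of all red-flag patterns found in the page source."""
    return [name for name, pat in RED_FLAGS.items() if pat.search(html)]
```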

## API Cost Reference

Using Claude Sonnet 4.6 (recommended model):

| Configuration | Cost per Evaluation |
|---|---|
| Standard | ~$0.07 |
| With Prompt Caching | ~$0.05 |
| Batch API | ~$0.03 |
| Haiku 4.5 (budget) | ~$0.02 |

At $0.05/eval with prompt caching: 1,000 evaluations/month costs ~$50.
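The projection is simple multiplication; a tiny helper, treating the per-evaluation costs above as approximate:

```python
def monthly_cost(cost_per_eval: float, evals_per_month: int) -> float:
    """Project monthly spend from an approximate per-evaluation cost."""
    return cost_per_eval * evals_per_month

# With prompt caching (~$0.05/eval), 1,000 evaluations/month is about $50.
```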


## Contributing

Contributions welcome. See `CONTRIBUTING.md` for guidelines.

High-value contributions:

  • New industry scoring adjustments (e-commerce, SaaS, healthcare, logistics)
  • Additional Red Flag detection patterns
  • Non-English language evaluation support (Japanese, Arabic, Spanish)
  • Evaluation examples from more industries

Do not submit:

  • Proprietary client data or real business benchmark numbers
  • Evaluations of sites without owner permission

## Who Built This

This skill was created by a team building AI-powered websites for B2B export companies. We use it internally to QA sites before delivery and to redirect subjective client revision requests toward data.

If you're doing similar work — building or auditing B2B websites — this framework was built for you.


## License

MIT License. Use freely, commercially or otherwise. Attribution appreciated but not required.

See `LICENSE` for details.
