# HiveCode v1.0
Free Offline Agents + Premium Cloud Models
by ERP team
100% Free | 100% Open | 100% Local
Quick Start • Documentation • Meet the Agents • Cost Comparison • Roadmap
MHG Code is a production-ready AI-powered development assistant built on top of Claude Code with integrated AWS Bedrock support, providing access to cutting-edge language models including OpenAI GPT-OSS 120B, Amazon Nova Pro, and more.
- ✅ AWS Bedrock Integration: Direct access to premium models via AWS
- ✅ OpenAI GPT-OSS 120B: Powerful open-source 120B-parameter model
- ✅ Full Arabic RTL Support: Native right-to-left text rendering
- ✅ Production-Ready: Fully tested and deployed
- ✅ Zero Configuration: Works out of the box with valid credentials
Build a production-ready agentic AI development system that:
- Costs $0/month forever (100% free operation with local models)
- Works completely offline (privacy-first, local Ollama primary)
- Provides multi-agent coordination (5 specialized agents working in parallel)
- Offers easy installation (one-command setup)
- Maintains full customizability (open source, fork-friendly)
HiveCode is built by forking Gemini CLI (Google's official CLI tool) and adding:
- Agent Orchestration: 5 specialized agents (orchestrator, frontend, backend, tester, refactor)
- SPARC Workflow: Multi-agent coordination methodology
- Local-First Routing: 80% Ollama (free) → 15% Gemini (free tier) → 5% Groq (optional)
- Hook System: Pre-tool, post-tool, notification hooks
- TTS Integration: Optional voice announcements
- Memory System: Cross-session context persistence
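The hook system can be sketched as an event registry that runs callbacks around each tool invocation. This is an illustrative TypeScript sketch only; `HookRegistry`, `register`, and `fire` are assumed names, not the actual HiveCode API.

```typescript
// Hypothetical hook registry: hooks subscribe to "pre-tool", "post-tool",
// or "notification" events and run in registration order.
type HookEvent = "pre-tool" | "post-tool" | "notification";
type Hook = (toolName: string, payload: unknown) => void;

class HookRegistry {
  private hooks: Map<HookEvent, Hook[]> = new Map();

  register(event: HookEvent, hook: Hook): void {
    const list = this.hooks.get(event) ?? [];
    list.push(hook);
    this.hooks.set(event, list);
  }

  fire(event: HookEvent, toolName: string, payload: unknown): void {
    for (const hook of this.hooks.get(event) ?? []) {
      hook(toolName, payload);
    }
  }
}

// Usage: log every tool call before and after it runs.
const registry = new HookRegistry();
const log: string[] = [];
registry.register("pre-tool", (tool) => log.push(`before:${tool}`));
registry.register("post-tool", (tool) => log.push(`after:${tool}`));

registry.fire("pre-tool", "write-file", {});
registry.fire("post-tool", "write-file", {});
// log is now ["before:write-file", "after:write-file"]
```

A TTS announcement or desktop notification would simply be another hook registered on the `notification` event.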
```
User Input
    ↓
hivecode [command] [args]
    ↓
HiveCode CLI (forked from Gemini CLI)
 ├─ Custom commands (prime | sparc | ask)
 ├─ Agent orchestration layer
 ├─ Hook system (pre-tool, post-tool)
 ├─ TTS announcements (optional)
 └─ Local-first routing strategy
    ↓
Model Routing
 ├─ 80% → Ollama (qwen2.5-coder, free, 3-5s)
 ├─ 15% → Gemini free tier (15 RPM, fast, free)
 └─ 5% → Groq, optional (complex tasks, free tier)
    ↓
Agent System (5 specialized agents)
 ├─ Orchestrator (coordination)
 ├─ Frontend (React/Vue/UI)
 ├─ Backend (APIs/databases)
 ├─ Tester (unit/integration tests)
 └─ Refactor (cleanup/optimization)
    ↓
Results synthesized
    ↓
Output to user
```
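The local-first routing above reduces to an ordered fallback: prefer Ollama, then the Gemini free tier, then Groq. A minimal TypeScript sketch, assuming a hypothetical `routeModel` helper (not the shipped router) and using simple availability checks in place of the 80/15/5 traffic weighting:

```typescript
// Each provider reports whether it is currently reachable.
interface Provider {
  name: "ollama" | "gemini" | "groq";
  available: boolean;
}

// Hypothetical router: return the first available provider in
// local-first priority order (Ollama → Gemini → Groq).
function routeModel(providers: Provider[]): string {
  const priority = ["ollama", "gemini", "groq"];
  for (const name of priority) {
    const p = providers.find((prov) => prov.name === name);
    if (p?.available) return p.name;
  }
  throw new Error("No model provider available");
}

// With Ollama offline, requests fall back to the Gemini free tier.
const choice = routeModel([
  { name: "ollama", available: false },
  { name: "gemini", available: true },
  { name: "groq", available: true },
]);
// choice === "gemini"
```

The design point is that the cloud tiers are fallbacks, not peers: as long as the local Ollama daemon answers, no request leaves the machine.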
| Feature | Claude Code | GitHub Copilot | HiveCode |
|---|---|---|---|
| Cost | $20-100/month | $10-20/month | ✅ $0/month |
| AI Quality | Best (Sonnet 4.5) | Good | ⚡ Good (qwen2.5-coder) |
| Agents | Yes (56) | No | ✅ Yes (5 core) |
| Privacy | Cloud | Cloud | ✅ 100% Local |
| Customizable | Limited | Limited | ✅ Fully Open |
| Offline | No | No | ✅ Yes |
- STATUS.md - Complete development journey and current status
- PRP.md - Project Requirements Package (vision, architecture, roadmap)
- ACCOMPLISHMENT.md - Python implementation achievements (archived)
- OPENCODE_FINDINGS.md - OpenCode research (why rejected)
Attempt 1: OpenCode Foundation ❌
- Selected for MCP native support and 29K stars
- Rejected: Not 100% free (AWS Bedrock costs $12-15/month)
- Lesson: "IM EXPECTING A CLI TO BE FULLY FREE!"
Attempt 2: Pure Python + Ollama
- Built from scratch: 1,425 lines, 5 agents, working CLI
- Tested successfully with Ollama qwen2.5-coder
- Rejected: "thats was bad steps! we need to fork somthing not start from scartch"
- Result: Archived to the `archive/python-implementation` branch for reference
Attempt 3: HiveCode Fork ✅ Current
- Fork production-ready CLI (Google-maintained)
- Customize with HiveCode features (agents, orchestration, hooks, TTS)
- Keep 100% free with local Ollama primary
- Benefit from existing architecture while maintaining full control
- ✅ 5 Specialized Agents: Orchestrator, Frontend, Backend, Tester, Refactor
- ✅ 3 Core Commands: `hivecode prime`, `hivecode ask`, `hivecode sparc`
- ✅ Local-First Routing: Ollama primary (80%), Gemini fallback (15%), optional Groq (5%)
- ✅ SPARC Workflow: Multi-agent coordination methodology
- ✅ Parallel Execution: 2-3 agents working simultaneously
- ✅ One-Command Install: Automatic Ollama setup + model download
- ⏳ MCP Integration: 6 servers (memory, shadcn-ui, playwright, n8n, blender, clickup)
- ⏳ Hook System: Pre-tool, post-tool, notification hooks
- ⏳ TTS Integration: Voice announcements (Kokoro TTS)
- ⏳ Memory System: 4-tier hierarchy (Global → Project → Session → Task)
- ⏳ Checkpoint/Rewind: Safe operation suggestions
- ⏳ Web UI: Optional browser interface
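The parallel execution feature (2-3 agents working simultaneously) can be sketched with `Promise.all`. The agent names follow the list above, but the `Agent` shape and `orchestrate` function are hypothetical, not the real API:

```typescript
// Hypothetical agent shape: a name plus an async task runner.
type Agent = { name: string; run: (task: string) => Promise<string> };

// Stub specialists; real agents would call the routed model instead.
const frontend: Agent = { name: "frontend", run: async (t) => `frontend:${t}` };
const backend: Agent = { name: "backend", run: async (t) => `backend:${t}` };
const tester: Agent = { name: "tester", run: async (t) => `tester:${t}` };

// Orchestrator: fan the task out to the selected agents concurrently,
// then hand the collected results back for synthesis.
async function orchestrate(task: string, agents: Agent[]): Promise<string[]> {
  return Promise.all(agents.map((a) => a.run(task)));
}

orchestrate("build login page", [frontend, backend, tester]).then((results) => {
  // Promise.all preserves input order:
  // ["frontend:build login page", "backend:build login page", "tester:build login page"]
  console.log(results.join("\n"));
});
```

Because each agent run is an independent promise, adding a fourth specialist is just another array entry; the orchestrator's synthesis step stays unchanged.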
- ✅ Archive Python implementation to the `archive/python-implementation` branch
- ✅ Clean master branch of Python code
- ✅ Update documentation for the HiveCode pivot
- ⏳ Fork the Gemini CLI repository to A1cy/HiveCode
- ⏳ Rename project (gemini → hivecode)
- ⏳ Add HiveCode configuration structure (`.mhgcode/config`)
- ⏳ Verify base functionality and build system
- ⏳ Integrate agent orchestration (5 agents)
- ⏳ Implement model routing (Ollama, Gemini, Groq)
- ⏳ Add custom commands (prime, ask, sparc)
- ⏳ Implement parallel execution
- ⏳ One-command installation script
- ⏳ Complete documentation rewrite
- ⏳ Test on Ubuntu/WSL/macOS
- ⏳ Release HiveCode v0.1.0
Target Release: Week 4
The Gemini CLI foundation was chosen for:
- ✅ 100% free with Ollama primary routing
- ✅ Open source (full TypeScript source code)
- ✅ Production-ready (Google-maintained)
- ✅ Easy to customize (fork and modify)
- ✅ Apache 2.0 license (permissive)
Alternative Rejected: OpenCode.ai
- ❌ Not 100% free (AWS Bedrock $12-15/month after free tier)
- ❌ External binary (difficult to modify)
Lessons Learned:
- Building CLI framework from scratch is time-intensive
- Production-ready foundation provides reliability and trust
- Fork strategy: faster to market + battle-tested architecture
- User feedback: "we need to fork somthing not start from scartch"
Result: Python implementation archived as reference, fork strategy adopted
Cost Priority:
- Ollama: $0/month, unlimited usage, 100% offline
- Gemini free tier: 15 RPM limit (fallback for complex tasks)
- Speed trade-off: 3-5s Ollama vs <1s cloud (acceptable for free operation)
HiveCode is currently in early development. Contributions will be welcome after the v0.1.0 release.
Current Status: Pre-fork preparation
Watch This Repo: Get notified when the Phase 2 fork begins
MIT License - See LICENSE for details.
- Foundation: Gemini CLI by Google
- Inspired by: A1xAI Framework and Claude Code
- Powered by: Ollama - Amazing local LLM engine
- Model: qwen2.5-coder - Specialized coding model
Current Phase: Phase 1 Complete → Phase 2 Fork Preparation
Last Updated: 2025-10-26
Next Milestone: HiveCode fork
For detailed development status, see STATUS.md
HiveCode - 100% Free | 100% Open | 100% Local
Phase 1 Complete | Phase 2 Next: Fork Gemini CLI
Development Status | Project Plan | Star on GitHub