Intelligent Health Checkup Recommendation Agent System

Production-oriented FastAPI backend implementing a modular multi-agent system for hospital health examination recommendation workflows.

Tech Stack

  • Python 3.10+
  • FastAPI
  • oracledb (python-oracledb)
  • Pydantic

Project Structure

project_root/
  api/
    server.py
  agents/
    data_loader_agent.py
    narrative_extraction_agent.py
    feature_agent.py
    risk_agent.py
    guideline_agent.py
    recommendation_agent.py
    verification_agent.py
    tru_agent.py
  database/
    oracle_connector.py
    queries.py
  engine/
    orchestrator.py
  knowledge/
    guideline_rules.py
  memory/
    memory_bank.py
  schemas/
    patient_schema.py
    narrative_schema.py
    recommendation_schema.py
    tru_schema.py
  utils/
    logger.py
    config.py
    llm_client.py

Pipeline

  1. load_patient_data()
  2. extract_narrative_insights() # LLM extraction from summary/advice after DB load
  3. extract_health_features()
  4. identify_health_risks()
  5. retrieve_guidelines()
  6. generate_recommendations()
  7. verify_evidence()
  8. generate_TRU_explanations()
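The eight stages above can be sketched as a sequential orchestrator that threads a shared context dict through each agent. This is an illustrative sketch with stubbed stages, not the real engine/orchestrator.py, whose interfaces may differ:

```python
# Stage stubs for illustration; each returns a dict merged into the context.
def load_patient_data(ctx):          return {"patient": {"id": ctx["exam_id"]}}
def extract_narrative_insights(ctx): return {"narrative": []}
def extract_health_features(ctx):    return {"features": []}
def identify_health_risks(ctx):      return {"risks": []}
def retrieve_guidelines(ctx):        return {"guidelines": []}
def generate_recommendations(ctx):   return {"recommendations": []}
def verify_evidence(ctx):            return {"verified": True}
def generate_TRU_explanations(ctx):  return {"tru": []}

def run_pipeline(exam_id: str) -> dict:
    """Run the eight pipeline stages in order, accumulating results in one context."""
    ctx = {"exam_id": exam_id}
    stages = [
        load_patient_data,
        extract_narrative_insights,
        extract_health_features,
        identify_health_risks,
        retrieve_guidelines,
        generate_recommendations,
        verify_evidence,
        generate_TRU_explanations,
    ]
    for stage in stages:
        ctx.update(stage(ctx))
    return ctx
```

Each stage sees everything produced before it, which keeps the agents decoupled: a stage only reads keys it needs from the context and contributes its own.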

LLM Narrative Extraction

  • Source: summary/advice narrative columns in HM_YW_TJZ000
  • Source fields follow the table structure: ZS0000 (综述, summary), ZS0001 (综述01, summary 01), JY0000 (建议, advice).
  • If LLM_ENABLED=true and API configuration is provided, extraction uses the LLM.
  • If the LLM is unavailable, extraction falls back to deterministic rule-based extraction.
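The LLM-with-fallback behavior described above can be sketched as follows. The function name mirrors the pipeline step; the keyword list and return shape are assumptions for illustration, not the real narrative_extraction_agent.py:

```python
import os

def _llm_extract(summary: str, advice: str) -> dict:
    # Placeholder for the real LLM call (hypothetical; see utils/llm_client.py).
    raise NotImplementedError

def extract_narrative_insights(summary: str, advice: str) -> dict:
    """Extract structured findings from ZS0000/JY0000 narrative text.

    The LLM path is taken only when LLM_ENABLED=true and LLM_API_KEY is set;
    otherwise a deterministic keyword-rule fallback runs.
    """
    if os.getenv("LLM_ENABLED", "").lower() == "true" and os.getenv("LLM_API_KEY"):
        return _llm_extract(summary, advice)
    # Deterministic fallback: simple keyword matching over the summary text.
    keywords = ("血压", "血糖", "脂肪肝", "hypertension", "diabetes")
    findings = [kw for kw in keywords if kw in summary]
    return {"source": "rules", "findings": findings, "advice": advice.strip()}
```

The fallback keeps the endpoint usable without any API key, at the cost of coarser extraction.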

Quick Start

  1. Install dependencies:
pip install -r requirements.txt
  2. Configure environment:
cp .env.example .env
# edit .env and fill LLM_API_KEY

Notes:

  • Backend auto-loads .env from project root.
  • LLM_API_KEY is intentionally left blank in the templates and must be filled in by the user.
  • First-visit intake uses LLM only when LLM_ENABLED=true and LLM_API_KEY is set.
  • Risk augmentation LLM in risk_agent is disabled by default; enable it explicitly with RISK_LLM_ENABLED=true.
  • TRU_LLM_MAX_ITEMS controls how many recommendations per request receive an LLM-personalized TRU explanation (default 3).
  • To give every TRU an LLM explanation, set TRU_LLM_MAX_ITEMS higher than the expected recommendation count (for example 50).
  • When calling the APIs from scripts, increase the client HTTP timeout accordingly (for example to 60-180 s), because full TRU LLM generation is slower.
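The TRU_LLM_MAX_ITEMS behavior described in the notes can be sketched as a simple split: the first N recommendations get LLM explanations, the rest fall back to templates. Function name and return shape are assumptions, not the real tru_agent.py:

```python
import os

def split_tru_items(recommendations, default_max=3):
    """Split recommendations into (LLM-explained, template-explained) groups.

    TRU_LLM_MAX_ITEMS caps LLM calls per request (default 3, matching the
    documented behavior). Illustrative sketch only.
    """
    max_items = int(os.getenv("TRU_LLM_MAX_ITEMS", default_max))
    return recommendations[:max_items], recommendations[max_items:]
```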
  3. Run API:
uvicorn api.server:app --host 0.0.0.0 --port 8000
  4. Call recommendation endpoint:
curl -X POST http://localhost:8000/api/v1/recommendations/TJ20260001
  5. Quick first-visit intake test (auto-stops when is_finished=true):
python test_intake_flow.py
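When scripting against the endpoint, the longer HTTP timeout mentioned in the notes matters. A minimal stdlib client sketch; the base URL and default timeout are assumptions for a local deployment:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # assumed local deployment

def recommendation_url(exam_id: str, base: str = BASE_URL) -> str:
    """Build the recommendation endpoint URL for a given exam ID."""
    return f"{base}/api/v1/recommendations/{exam_id}"

def post_recommendation(exam_id: str, timeout: float = 120.0) -> dict:
    """POST with a generous timeout: full TRU LLM generation can take 60-180 s."""
    req = urllib.request.Request(recommendation_url(exam_id), method="POST")
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.load(resp)
```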

About

This project was developed as my graduation design (capstone) project.
