Production-oriented FastAPI backend implementing a modular multi-agent system for hospital health examination recommendation workflows.
- Python 3.10+
- FastAPI
- oracledb (python-oracledb)
- Pydantic
Project layout:

```
project_root/
    api/
        server.py
    agents/
        data_loader_agent.py
        narrative_extraction_agent.py
        feature_agent.py
        risk_agent.py
        guideline_agent.py
        recommendation_agent.py
        verification_agent.py
        tru_agent.py
    database/
        oracle_connector.py
        queries.py
    engine/
        orchestrator.py
    knowledge/
        guideline_rules.py
    memory/
        memory_bank.py
    schemas/
        patient_schema.py
        narrative_schema.py
        recommendation_schema.py
        tru_schema.py
    utils/
        logger.py
        config.py
        llm_client.py
```
Agent pipeline (executed in order):

1. `load_patient_data()`
2. `extract_narrative_insights()` (LLM extraction from summary/advice after the DB load)
3. `extract_health_features()`
4. `identify_health_risks()`
5. `retrieve_guidelines()`
6. `generate_recommendations()`
7. `verify_evidence()`
8. `generate_TRU_explanations()`
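The pipeline above can be sketched as a minimal orchestrator that threads a shared context dict through each agent step. This is an illustrative sketch, not the project's actual `engine/orchestrator.py`; the agent bodies here are toy stand-ins.

```python
from typing import Any, Callable, Dict, List

Context = Dict[str, Any]
Step = Callable[[Context], Context]

def run_pipeline(steps: List[Step], ctx: Context) -> Context:
    """Run each agent step in order, threading a shared context dict."""
    for step in steps:
        ctx = step(ctx)
    return ctx

# Toy stand-ins for the real agents; only the names match the pipeline above.
def load_patient_data(ctx: Context) -> Context:
    ctx["patient"] = {"exam_id": ctx["exam_id"]}
    return ctx

def generate_recommendations(ctx: Context) -> Context:
    ctx["recommendations"] = ["annual physical"]
    return ctx

result = run_pipeline(
    [load_patient_data, generate_recommendations],
    {"exam_id": "TJ20260001"},
)
print(result["recommendations"])  # -> ['annual physical']
```

A plain list of `Context -> Context` callables keeps each agent independently testable and makes it easy to insert or disable steps.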
Narrative extraction:

- Source: the summary/advice narrative columns in `HM_YW_TJZ000`.
- Source fields follow the table structure: `ZS0000` (summary), `ZS0001` (summary 01), `JY0000` (advice).
- If `LLM_ENABLED=true` and API config is provided, extraction uses the LLM.
- If the LLM is unavailable, extraction falls back to deterministic rule-based extraction.
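The LLM toggle and the deterministic fallback might look roughly like the following sketch. The keyword list and function names are illustrative assumptions, not the project's real rule set in `narrative_extraction_agent.py`.

```python
import os
from typing import Dict, List

def llm_enabled() -> bool:
    """Mirror of the documented toggle: the flag and an API key are both required."""
    return (os.getenv("LLM_ENABLED", "").lower() == "true"
            and bool(os.getenv("LLM_API_KEY")))

def extract_insights_rule_based(fields: Dict[str, str]) -> List[str]:
    """Deterministic fallback: scan the narrative columns (ZS0000/ZS0001
    summary, JY0000 advice) for known findings. Keywords are illustrative."""
    keywords = ["血压", "血糖", "血脂"]  # blood pressure / glucose / lipids
    text = " ".join(fields.values())
    return [kw for kw in keywords if kw in text]

row = {"ZS0000": "血压偏高，建议复查", "JY0000": "低盐饮食"}
print(extract_insights_rule_based(row))  # -> ['血压']
```

Keeping the fallback pure and keyword-driven means the pipeline still produces deterministic output when the LLM is disabled or unreachable.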
- Install dependencies:

  ```shell
  pip install -r requirements.txt
  ```

- Configure environment:

  ```shell
  cp .env.example .env
  # edit .env and fill LLM_API_KEY
  ```

Notes:
- The backend auto-loads `.env` from the project root. `LLM_API_KEY` is intentionally left blank in templates and must be filled in by the user.
- First-visit intake uses the LLM only when `LLM_ENABLED=true` and `LLM_API_KEY` is set.
- The risk augmentation LLM in `risk_agent` is disabled by default; enable it explicitly with `RISK_LLM_ENABLED=true`.
- `TRU_LLM_MAX_ITEMS` controls how many recommendations per request use an LLM-personalized TRU explanation (default `3`).
- To let every TRU use an LLM explanation, set `TRU_LLM_MAX_ITEMS` greater than the expected recommendation count (for example `50`).
- If you call the APIs from scripts, increase the client HTTP timeout accordingly (for example 60-180 s), because full TRU LLM generation is slower.
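One plausible way the `TRU_LLM_MAX_ITEMS` cap could be applied is to send only the first N recommendations to the LLM and let the rest use template explanations. This is a sketch of the documented behavior, not the actual `tru_agent.py` logic.

```python
import os
from typing import List

def select_llm_tru_items(recommendations: List[str]) -> List[str]:
    """Return the recommendations that receive an LLM-personalized TRU
    explanation; the remainder would fall back to template text.
    The cap mirrors the documented TRU_LLM_MAX_ITEMS (default 3)."""
    cap = int(os.getenv("TRU_LLM_MAX_ITEMS", "3"))
    return recommendations[:cap]

recs = ["colonoscopy", "lipid panel", "fundus exam", "bone density scan"]
print(select_llm_tru_items(recs))  # default cap keeps the first 3
```

Setting `TRU_LLM_MAX_ITEMS=50` makes the slice cover any realistic recommendation list, which matches the "let every TRU use LLM" note above.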
- Run the API:

  ```shell
  uvicorn api.server:app --host 0.0.0.0 --port 8000
  ```

- Call the recommendation endpoint:

  ```shell
  curl -X POST http://localhost:8000/api/v1/recommendations/TJ20260001
  ```

- Quick first-visit intake test (auto-stops on `is_finished=true`):

  ```shell
  "/Users/yxz/Documents/New project/.venv/bin/python" test_intake_flow.py
  ```