I build interactive AI systems as experimental platforms to study how humans understand, trust, and adapt to AI under uncertainty.
My work lies at the intersection of:
- 🤖 Human-AI Interaction — Understanding how people reason about, trust, and adapt to AI systems.
- 🧠 Computational Cognitive Modeling — Formalizing psychological theory to model human reasoning under uncertainty.
- ⚙️ Adaptive Intelligent Systems — Engineering AI systems that respond to and align with human cognition.
- 🔍 Explainable AI — Informing AI design to improve transparency, interpretability, and user confidence.
I follow a cognition-driven, closed-loop research cycle to study intelligent systems and align them with human reasoning under uncertainty:
| 🧠 Question | 🏗️ Build | 🧪 Study | 🔢 Model | 🔄 Derive & Iterate |
|---|---|---|---|---|
| Formulating cognitive and interaction research questions about how humans interpret, trust, and adapt to AI behavior | Engineering interactive AI systems and explainability frameworks as experimental platforms to operationalize these questions | Designing behavioral experiments and user studies to measure trust formation, belief updating, and cognitive adaptation | Developing computational and probabilistic models to formalize human reasoning and social inference about AI | Translating empirical findings into design principles for uncertainty-aware, adaptive, and human-aligned AI systems |
This cycle lets me bridge AI engineering, computational cognitive science, and human-centered system design; a toy sketch of the Model step follows.
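As a toy illustration of the Model step above, the sketch below implements a minimal Beta-Bernoulli observer that updates a belief about an AI assistant's reliability after each correct or incorrect outcome, with the posterior mean serving as a simple trust readout. This is a hypothetical sketch under simplifying assumptions, not a model from my publications; the names `BetaTrustModel`, `update`, and `trust` are illustrative.

```python
# Hypothetical sketch: Bayesian belief updating about an AI's reliability.
from dataclasses import dataclass


@dataclass
class BetaTrustModel:
    """Beta(alpha, beta) posterior over P(AI output is correct)."""

    alpha: float = 1.0  # prior pseudo-count of correct outcomes
    beta: float = 1.0   # prior pseudo-count of incorrect outcomes

    def update(self, correct: bool) -> None:
        """Conjugate update after observing one AI outcome."""
        if correct:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def trust(self) -> float:
        """Posterior mean: the observer's current reliability estimate."""
        return self.alpha / (self.alpha + self.beta)


if __name__ == "__main__":
    model = BetaTrustModel()
    # A made-up interaction trace: the AI is right four times, wrong once.
    for outcome in [True, True, False, True, True]:
        model.update(outcome)
        print(f"correct={outcome!s:5}  trust={model.trust:.2f}")
```

Richer models would add forgetting, asymmetric weighting of errors, or social inference over the AI's intent, but this conjugate update is the core mechanic that the behavioral experiments aim to measure.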
| 📄 Publication | 📚 Venue | 🔗 Links |
|---|---|---|
| UNET-Based Segmentation for Diabetic Macular Edema Detection in OCT Images | ICCIS 2025, Springer LNNS | |