
Research Focus


I develop intelligent interactive AI systems as experimental platforms to study how humans understand, trust, and adapt to AI under uncertainty.

My work lies at the intersection of:

  • 🤖 Human-AI Interaction — Understanding how people reason about, trust, and adapt to AI systems.
  • 🧠 Computational Cognitive Modeling — Formalizing psychological theory to model human reasoning under uncertainty.
  • ⚙️ Adaptive Intelligent Systems — Engineering AI systems that respond to and align with human cognition.
  • 🔍 Explainable AI — Informing AI design to improve transparency, interpretability, and user confidence.


Research Methodology — Closed-Loop Human-Centered AI

I follow a cognition-driven, closed-loop research cycle to study and align intelligent systems with human reasoning under uncertainty:

  1. 🧠 Question — Formulating cognitive and interaction research questions about how humans interpret, trust, and adapt to AI behavior.
  2. 🏗️ Build — Engineering interactive AI systems and explainability frameworks as experimental platforms to operationalize these questions.
  3. 🧪 Study — Designing behavioral experiments and user studies to measure trust formation, belief updating, and cognitive adaptation.
  4. 🔢 Model — Developing computational and probabilistic models to formalize human reasoning and social inference about AI.
  5. 🔄 Derive & Iterate — Translating empirical findings into design principles for uncertainty-aware, adaptive, and human-aligned AI systems.

This cycle enables me to bridge AI engineering, computational cognitive science, and human-centered system design.


Featured Projects

🤖 Machine Theory of Mind

Belief-based computational modeling of human social reasoning in AI interaction.

Technical Contribution: Developed a probabilistic framework using Pyro to simulate how humans infer social intent from agent behavior. Modeled belief updating dynamics and trust calibration mechanisms.

Research Impact: Introduced the Social Intelligence Quotient (SIQ) to quantify human-agent alignment. Provides a foundation for socially adaptive AI systems.

Stack: Python · Pyro · Probabilistic Modeling
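The belief-updating dynamics at the core of this project can be illustrated with a minimal sketch. The project itself is built in Pyro; the dependency-free snippet below shows only the underlying Bayes-rule idea, and the intent labels and likelihood numbers are illustrative assumptions, not values from the repository.

```python
# Minimal sketch of Bayesian belief updating over an agent's intent.
# (The actual project uses Pyro; all names and numbers here are illustrative.)

def update_belief(prior, likelihoods, observation):
    """One Bayes-rule step: P(intent | obs) ∝ P(obs | intent) * P(intent)."""
    unnorm = {intent: prior[intent] * likelihoods[intent][observation]
              for intent in prior}
    z = sum(unnorm.values())
    return {intent: p / z for intent, p in unnorm.items()}

# Observer starts maximally uncertain about the agent's intent.
belief = {"cooperative": 0.5, "adversarial": 0.5}

# Hypothetical likelihood of a helpful action under each intent.
likelihoods = {
    "cooperative": {"helpful": 0.8, "unhelpful": 0.2},
    "adversarial": {"helpful": 0.3, "unhelpful": 0.7},
}

# Three helpful actions in a row shift belief strongly toward "cooperative".
for obs in ["helpful", "helpful", "helpful"]:
    belief = update_belief(belief, likelihoods, obs)

print(round(belief["cooperative"], 3))  # → 0.95
```

Trust calibration can then be read off the posterior: the more evidence accumulates, the sharper the observer's belief about the agent becomes.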

🛡️ BiasGuard Pro

Interactive human-in-the-loop auditing system for fairness in NLP decision systems.

Technical Contribution: Integrated DistilBERT with SHAP attribution and counterfactual explanation generation. Built an interactive Gradio interface for real-time bias exploration.

Research Impact: Enables users to understand, audit, and correct algorithmic bias. Demonstrates how explainability affects trust calibration.

Stack: DistilBERT · SHAP · Counterfactuals · Gradio
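The attribution idea behind the auditing loop can be sketched without the heavy dependencies. BiasGuard Pro uses SHAP over DistilBERT; the toy scorer and lexicon below are purely illustrative stand-ins showing the simpler occlusion-style "which token drives the score?" question that SHAP formalizes.

```python
# Dependency-free sketch of occlusion-based token attribution.
# (The real system uses SHAP over a DistilBERT classifier; this toy
# lexicon scorer is a hypothetical stand-in, not code from the repo.)

BIASED_TERMS = {"he": 0.4, "she": -0.4}  # illustrative weights only

def score(tokens):
    """Toy classifier: sums term weights; stands in for the model logit."""
    return sum(BIASED_TERMS.get(t, 0.0) for t in tokens)

def occlusion_attribution(tokens):
    """Attribution for each token = score drop when that token is removed."""
    base = score(tokens)
    return {
        t: base - score(tokens[:i] + tokens[i + 1:])
        for i, t in enumerate(tokens)
    }

tokens = "she is a great engineer".split()
attr = occlusion_attribution(tokens)
print(attr["she"])  # → -0.4: the gendered token alone moves the score
```

Surfacing per-token contributions like this is what lets a human auditor spot, question, and correct a biased decision in the interactive interface.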

🧩 CogniViz

Real-time estimation of user cognitive state from interaction traces.

Technical Contribution: Designed a pipeline that infers cognitive state from behavioral interaction signals, achieving F1 = 0.82 with scikit-learn classifiers.

Research Impact: Conducted a controlled user study (N=25) to evaluate how adaptive feedback influences performance and trust.

Stack: Scikit-learn · JavaScript · Behavioral Data Analysis
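The shape of such a pipeline can be sketched end to end on synthetic data. The feature names, their relationship to cognitive load, and the classifier choice below are assumptions for illustration, not details from the project.

```python
# Hypothetical sketch: classifying cognitive load from interaction features.
# Features, their load dependence, and the model are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic interaction traces: dwell time, error rate, pauses per task.
# Assume high-load users dwell longer, err more, and pause more often.
n = 400
load = rng.integers(0, 2, size=n)            # 0 = low load, 1 = high load
X = np.column_stack([
    rng.normal(2.0 + load, 0.5),             # mean dwell time (s)
    rng.normal(0.1 + 0.2 * load, 0.05),      # error rate
    rng.normal(1.0 + 2.0 * load, 0.8),       # pauses per task
])

X_tr, X_te, y_tr, y_te = train_test_split(X, load, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print(round(f1_score(y_te, clf.predict(X_te)), 2))
```

On real interaction logs the features are noisier and correlated, which is what makes the reported F1 = 0.82 a meaningful benchmark rather than a toy result.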

🧬 AMPlify-Enhanced

Transformer-based framework for antimicrobial peptide discovery.

Technical Contribution: Built deep learning pipelines for sequence modeling. Improved predictive accuracy beyond prior baseline methods.

Research Impact: Supports AI-driven drug discovery for WHO priority pathogens.

Stack: Deep Learning · Bioinformatics · Transformers
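The first step of any such sequence-modeling pipeline is encoding peptides for the model. The vocabulary ordering, padding scheme, and maximum length below are illustrative assumptions, not the project's actual preprocessing.

```python
# Illustrative sketch: encoding a peptide for a sequence model.
# Vocabulary order, padding id, and max length are assumptions.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
VOCAB = {aa: i + 1 for i, aa in enumerate(AMINO_ACIDS)}  # 0 reserved for pad

def encode_peptide(seq, max_len=32):
    """Map a peptide string to fixed-length integer ids, right-padded with 0."""
    ids = [VOCAB[aa] for aa in seq.upper()]
    return ids[:max_len] + [0] * max(0, max_len - len(ids))

ids = encode_peptide("GIGKFLHSAKKF")  # magainin-like fragment
print(len(ids), ids[0])  # → 32 6  (G is the 6th amino acid in this vocab)
```

Fixed-length integer encodings like this feed directly into an embedding layer, after which a transformer can score candidate peptides for antimicrobial activity.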


Publications

| 📄 Publication | 📚 Venue | 🔗 Links |
| --- | --- | --- |
| UNET-Based Segmentation for Diabetic Macular Edema Detection in OCT Images | ICCIS 2025, Springer LNNS | Paper · Code |

Tech Stack

Computational Modeling

Python · PyTorch · TensorFlow · Pyro · Scikit-learn · Hugging Face

Human-Centered Evaluation

Qualtrics · A/B Testing · Eye Tracking · User Testing · Figma

Explainable AI

SHAP · LIME

Systems & Interaction

JavaScript · FastAPI · Gradio · Streamlit

Tools & Deployment

Docker · Git · GitHub · LaTeX · VS Code · Colab


GitHub Analytics

GitHub Streak

Contribution Graph


💭 Research Philosophy

"Building systems that understand humans, so humans can better understand AI."
