Researcher in Multimodal Large Language Models and Video-Language Models.
Senior Applied Mathematician at Innopolis University.
AI Researcher at FusionBrain Lab, Artificial Intelligence Research Institute (AIRI).
- Multimodal LLMs
- Vision-Language Models (VLMs)
- Long-context & Long-video LLMs
- Psychological and behavioral evaluation of LLMs
- Efficient training and evaluation of foundation models
I work at the intersection of multimodal reasoning, long-context modeling, and evaluation methodology for large-scale AI systems.
- Senior Applied Mathematician — Innopolis University
- AI Researcher — AIRI, FusionBrain Lab
Currently preparing my PhD thesis.
- 5 papers published at A/A*-ranked conferences in NLP and AI
- Research centered on LLM evaluation, multimodality, and benchmarking
- NLP Data Scientist — SberDevices
- AI Research Engineer — MIPT
- Earlier experience in analytics and engineering
Open to research collaborations in:
- Multimodal LLMs
- Long-video reasoning
- Evaluation benchmarks
- Industrial applications of foundation models