Building high-performance AI systems · LLM optimization · Multimodal inference · Scalable ML infrastructure
Building scalable ML systems and training pipelines
Optimizing model serving and reducing latency
Deploying and orchestrating at scale
Deep Learning Frameworks
Inference & Optimization
Cloud Native & DevOps
Languages & Tools
When I'm not optimizing inference pipelines, you'll find me cycling, exploring photography, or traveling.
© 2025 Leo · Powered by passion for AI and open source
