I’m a Computer Scientist specialising in XR, computational colour, and human perception, with experience building and evaluating perception-based systems for AR/VR.
My work sits at the intersection of colour pipelines, psychophysics, machine learning, and XR, supported by hands-on development in Unity, Python, MATLAB, and Blender.
I also bring an industry foundation as a Data Analyst at Bain & Company, where I built reproducible analytical pipelines and worked in agile teams.
- Unity (C#), Blender, OpenXR workflows
- Rendering pipelines, foveation, colour calibration
- Worked on headsets such as the HoloLens 2 and the Varjo VR-3
- Python, PyTorch, OpenCV
- Image processing, feature extraction, deep learning
- Model interpretability
- Colour calibration & characterisation
- Psychophysical experiment design and implementation
- Translucency perception
- Pandas, SQL, Tableau, Alteryx
- MATLAB
Master's thesis project. I designed and conducted experiments to benchmark colour accuracy in optical see-through AR using Unity, on the Microsoft HoloLens 2.
I implemented characterisation pipelines in MATLAB with a spectroradiometer, performed perceptual analysis, and trained deep learning models in Python to improve robustness across lighting conditions, backgrounds, and observer differences.
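The core of a display characterisation pipeline like this can be sketched as a regression from linearised RGB drive values to measured XYZ. This is a minimal illustration, not the actual thesis pipeline: the measurements, the gamma value, and the function names are placeholders, and a real pipeline would use calibrated per-channel nonlinearities.

```python
import numpy as np

# Hypothetical measurements: display RGB inputs and spectroradiometer XYZ readings.
rgb = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                [1, 1, 1], [0.5, 0.5, 0.5]], dtype=float)
xyz = np.array([[41.2, 21.3, 1.9], [35.8, 71.5, 11.9], [18.0, 7.2, 95.0],
                [95.0, 100.0, 108.8], [18.0, 19.0, 20.6]], dtype=float)

gamma = 2.2                # assumed display nonlinearity (illustrative)
rgb_lin = rgb ** gamma     # linearise drive values before fitting

# Least-squares fit of a 3x3 matrix M such that rgb_lin @ M ~= xyz
M, *_ = np.linalg.lstsq(rgb_lin, xyz, rcond=None)

def characterise(rgb_in):
    """Predict XYZ for a display RGB triple with the fitted model."""
    return (np.asarray(rgb_in, dtype=float) ** gamma) @ M
```

In practice the fitted model is then validated against held-out patches, with residuals reported as colour differences rather than raw XYZ error.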
I developed Unity prototypes demonstrating distortion correction in XR systems (barrel/pincushion distortion and chromatic aberration), including writing custom shaders in Unity.
As a coursework project, I implemented multiple levels of foveation in Unity for the Varjo VR-3 headset and conducted a controlled psychophysical study to evaluate perceptual thresholds.
I analysed performance–quality tradeoffs for real-time XR rendering.
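A foveation scheme of this kind boils down to mapping gaze eccentricity to a rendering resolution scale. The thresholds below are placeholders, not the values measured in the study:

```python
import math

# Illustrative (eccentricity limit in degrees, resolution scale) pairs;
# the real thresholds would come from the psychophysical study.
LEVELS = [(5.0, 1.0), (15.0, 0.5), (30.0, 0.25)]

def foveation_scale(gaze, point, deg_per_unit=1.0):
    """Return a resolution scale for a screen point given the gaze position."""
    ecc = math.dist(gaze, point) * deg_per_unit  # angular eccentricity
    for max_ecc, scale in LEVELS:
        if ecc <= max_ecc:
            return scale
    return 0.125  # coarsest level in the far periphery
```

The performance-quality tradeoff then amounts to tuning those thresholds so that the coarser levels stay below the measured perceptual thresholds while maximising rendering savings.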
I built a curated dataset of AI and human-generated artworks.
I also implemented deep learning models in PyTorch using spatial and frequency-domain features, achieving ~98% accuracy with an emphasis on model explainability.
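One common way to obtain frequency-domain features for this kind of AI-vs-human image classification is a radially banded log power spectrum. This is a generic sketch of that technique, not the project's actual feature extractor:

```python
import numpy as np

def frequency_features(img, n_bands=4):
    """Mean log power spectrum in n_bands radial frequency bands.

    Spectral band statistics are a common frequency-domain cue for
    detecting generated imagery; band count here is illustrative.
    """
    f = np.fft.fftshift(np.fft.fft2(img))        # centred 2D spectrum
    power = np.log1p(np.abs(f) ** 2)             # log power, numerically safe
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)         # radius from DC component
    r_max = r.max()
    return np.array([
        power[(r >= i * r_max / n_bands) & (r < (i + 1) * r_max / n_bands)].mean()
        for i in range(n_bands)
    ])
```

Features like these can be concatenated with spatial features before the classifier head, which also keeps the frequency cues inspectable for explainability analyses.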
I investigated how observers perceive colour differences in translucent 3D objects using psychophysics and ML analysis, publishing two papers. Tools used included the Mitsuba renderer, Python, MATLAB, and QuickEval.
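Perceived colour differences in studies like this are commonly quantified against a colour-difference metric; the simplest is CIE76, the Euclidean distance in CIELAB. A minimal sketch (later formulas such as CIEDE2000 add perceptual weighting):

```python
import numpy as np

def delta_e_76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance between two CIELAB triples."""
    return float(np.linalg.norm(np.asarray(lab1, dtype=float)
                                - np.asarray(lab2, dtype=float)))
```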
When I’m not analysing colour perception or building XR experiments, I enjoy creating digital art, exploring new cities, and learning languages.