Research Interests
I’m broadly interested in building AI systems that see, reason, and act more like humans: models that are robust, fair, and aware of their own uncertainty. My focus lies at the intersection of computer vision, vision–language models (VLMs), and robotics.
My previous work explored conformal prediction, object detection, and pose estimation under the supervision of Dr. Mélanie Ducoffe. I also gained industry experience at PwC and Airbus AI Research, which deepened my interest in uncertainty quantification and trustworthy perception for real-world AI systems.
I’m particularly drawn to problems in vision–language reasoning, fairness and robustness in perception models, and active or causal learning for autonomous systems. I find inspiration in robotics and medical AI, where understanding context and ensuring reliability are crucial.
My goal is to pursue a PhD or research assistantship focused on developing multimodal perception systems that combine vision, language, and reasoning, making them not only accurate but also uncertainty-aware, interpretable, and dependable.