Sandro Pezzelle

Assistant Professor in Responsible AI at the ILLC, Faculty of Science, University of Amsterdam. Studying, teaching, and advancing language-mediated intelligence in humans and machines.
My research combines insights and methods from Natural Language Processing (NLP), Machine and Deep Learning, and Cognitive Science (see my current Research lines below). I have published in ACL, EACL, EMNLP, NAACL, CoLM, TACL, Cognition, and Cognitive Science.
Before that, I worked as a Postdoc within the DREAM (Distributed dynamic REpresentations for diAlogue Management) ERC project led by Raquel Fernández. Before that, I did my PhD at CIMeC, University of Trento, under the supervision of Raffaella Bernardi (check QUANTIT-CLIC for some of my PhD work). In 2018, I was a research intern at SAP AI Research.
I am a member of the Center for Explainable, Responsible, and Theory-Driven Artificial Intelligence (CERTAIN), a faculty member of the European Laboratory for Learning and Intelligent Systems (ELLIS), a board member of the ACL Special Interest Group in Computational Semantics (SigSem), and a Scientific Advisor at IVADO Labs.
Research lines
Some topics I am currently working on (contact me for thesis projects and collaborations!)
- Understanding and narration of visual events: How good are current VLMs at understanding and narrating visual events, and how can we evaluate these skills? Can we improve narration abilities by leveraging human behavioral and cognitive patterns?
- Ambiguous, underspecified, and implicit language: How do LLMs and VLMs deal with ambiguous (~multiple interpretations), underspecified (~missing information), and implicit (~implying or presupposing a message) language? Can we boost models’ semantic and pragmatic understanding using insights from linguistics?
- Human-inspired mechanistic interpretability: What are the computational LLM/VLM subgraphs (circuits) responsible for a certain specific behavior? Do they mirror the cognitive and neural mechanisms observed in humans?
- Benchmarking and evaluation: Can we use LLMs in real communicative and collaborative contexts, and how can we evaluate their performance in such settings?
News
- June 2025: I was one of the invited panelists at the event Metropolis (1927) Reimagined, organized by BètaBreak at the University of Amsterdam. Fun to share my perspective on AI, innovation, and society a hundred years after the film's release!
- May 2025: Proud to share that my collaborators and I have 3 papers accepted at ACL 2025 (2 main, 1 Findings)! Looking forward to presenting them in Vienna!
- March 2025: Excited to share that our preprint Are formal and functional linguistic mechanisms dissociated in language models? is available on arXiv! Check it out if you're curious whether the mechanistic circuits found in LLMs mirror the dissociations observed in human brains!
- February 2025: Happy to share that I am now a Scientific Advisor at IVADO Labs!
- September 2024: Happy to share that our work Not (yet) the whole story: Evaluating Visual Storytelling Requires More than Measuring Coherence, Grounding, and Repetition will appear in the Findings of EMNLP 2024! Congrats, Aditya, on your second paper at EMNLP!
- September 2024: I'm thrilled to be one of the keynote speakers of the MultiplEYE mid-term Conference (MuMiCo) in Tirana on September 13! Looking forward to attending the conference!
- September 2024: Happy to welcome Walter Paci, PhD candidate at the University of Florence, to our lab! For the coming four months, Walter will be working with me and the DMG on language implicitness and underspecification! Great to have you here!
- August 2024: A nice batch of invited/keynote talks this autumn! I will be a keynote speaker of MuMiCo 2024 in Tirana (September), a keynote speaker of the Perspectives in NLP x Social Sciences, Cognition, and Humanities workshop in Aarhus (October), and an invited speaker of the CLASP seminar in Gothenburg (December). Looking forward to them!
- July 2024: Honored to be one of the keynote speakers of the CMCL workshop at ACL 2024 in Bangkok! Looking forward to being there!
- July 2024: Our preprint LLMs instead of Human Judges? A Large Scale Empirical Study across 20 NLP Evaluation Tasks is now on arXiv. Check it out!
- May 2024: I am more than proud to share that 2 papers from our group have been accepted to appear in the Proceedings of ACL 2024! Congratulations Frank, Michael, Alberto, and Juell on your well-deserved achievement!
- March 2024: It was great to give a talk and connect with researchers at Amazon AI Barcelona! Thanks to Laura Aina, Ionut Sorodoc, and Diego Marcheggiani for inviting me!
Press
- Feb 2025. Our paper "From Tools to Teammates: Evaluating LLMs in Multi-Session Coding Interactions", in collaboration with Cohere and Cohere4AI, was featured on the Cohere Research blog!
- Feb 2025. I was interviewed for Folia, the magazine for, about, and by students, faculty, and staff of the University of Amsterdam! You can read the article "UvA scientists outsource expensive and boring tasks to AI – how responsible is that?" here!
- Feb 2025. I was interviewed for the Dutch magazine Kijk! You can find the article (in Dutch) "AI feeds on its own mistakes – and that causes problems" online and in the February issue of the magazine!
- July 2024. Our HUE project on XAI and model interpretability was featured on Innovation Origins! You can read the article "New research project aims to make AI explainable to humans" here!
- July 2024. Giovanni Cinà and I were interviewed by the University of Amsterdam about "Developing a method to make AI explainable to humans". You can read the article here!
- Feb 2021. Our project on using human eye-tracking data to inform automatic image captioning was featured on the ILLC blog! You can read the blogpost "Machines that gaze at landscapes" here!
- Oct 2018. Our project on image captioning for visually impaired users was featured on the SAP AI Research blog! You can read the blogpost "VizWiz: Computer Vision Researchers Join Forces for Social Good" here!