Pezzelle, S. (preprint). The LAMBADA Dataset. To appear in the International Encyclopedia of Language and Linguistics, 3rd Edition. [preprint]
Bavaresco, A., de Heer Kloots, M., Pezzelle, S., Fernández, R. (preprint). Modelling Multimodal Integration in Human Concept Processing with Vision-and-Language Models. [preprint]
Bavaresco, A., Bernardi, R., Bertolazzi, L., Elliott, D., Fernández, R., Gatt, A., Ghaleb, E., Giulianelli, M., Hanna, M., Koller, A., Martins, A., Mondorf, P., Neplenbroek, V., Pezzelle, S., Plank, B., Schlangen, D., Suglia, A., Surikuchi, A., Takmaz, E., Testoni, A. (preprint). LLMs instead of Human Judges? A Large Scale Empirical Study across 20 NLP Evaluation Tasks. [preprint]
Cinà, G., Fernandez-Llaneza, D., Deponte, L., Mishra, N., Röber, T. E., Pezzelle, S., Calixto, I., Goedhart, R. and Birbil, Ş. İ. (preprint). Fixing confirmation bias in feature attribution methods via semantic match. [preprint]
[43] Bai, Y. and Pezzelle, S. (2025). If I am smart, I will do the right thing: Combining Complementary Information with Generative Visual Language Models. To appear in EvalMG 2025 at COLING 2025. [preprint]
[42] Mehrparvar, B. and Pezzelle, S. (2024). Detecting and Translating Language Ambiguity with Multilingual LLMs. To appear in MRL 2024 at EMNLP 2024. [preprint]
[41] Surikuchi, A., Fernández, R., and Pezzelle, S. (2024). Not (yet) the whole story: Evaluating Visual Storytelling Requires More than Measuring Coherence, Grounding, and Repetition. To appear in Findings of EMNLP 2024. [preprint]
[40] Hanna, M., Pezzelle, S., Belinkov, Y. (2024). Have Faith in Faithfulness: Going Beyond Circuit Overlap When Finding Model Mechanisms. To appear in CoLM 2024. [preprint]
[39] Testoni, A., Sprott, J., Pezzelle, S. (2024). Naming, Describing, and Quantifying Visual Objects in Humans and LLMs. ACL 2024. [preprint]
[38] Wildenburg, F., Hanna, M., Pezzelle, S. (2024). Do Pre-Trained Language Models Detect and Understand Semantic Underspecification? Ask the DUST! Findings of ACL 2024. [preprint]
[37] Takmaz, E., Pezzelle, S., and Fernández, R. (2024). Describing Images Fast and Slow: Quantifying and Predicting the Variation in Human Signals during Visuo-Linguistic Processes. EACL 2024. [paper][bib][code]
[36] Hanna, M., Belinkov, Y., and Pezzelle, S. (2023). When Language Models Fall in Love: Animacy Processing in Transformer Language Models. EMNLP 2023. [paper][preprint][bib][code&data]
[35] Chen, X., Fernández, R., and Pezzelle, S. (2023). The BLA Benchmark: Investigating Basic Language Abilities of Pre-Trained Multimodal Models. EMNLP 2023. [paper][bib][preprint][code&data]
[34] Surikuchi, A., Pezzelle, S., and Fernández, R. (2023). GROOViST: A Metric for Grounding Objects in Visual Storytelling. EMNLP 2023. [paper][preprint][bib][code]
[33] Pezzelle, S. (2023). Dealing with Semantic Underspecification in Multimodal NLP. ACL 2023. [paper][preprint][bib][code]
[32] Takmaz, E., Brandizzi, N., Giulianelli, M., Pezzelle, S. and Fernández, R. (2023). Speaking the Language of Your Listener: Audience-Aware Adaptation via Plug-and-Play Theory of Mind. Findings of ACL 2023. [paper][preprint][bib][code]
[31] Pezzelle, S. and Fernández, R. (2023). Semantic adaptation to the interpretation of gradable adjectives via active linguistic interaction. Cognitive Science. [paper][bib][code]
[30] Buijtelaar, L. and Pezzelle, S. (2023). A Psycholinguistic Analysis of BERT's Representations of Compounds. EACL 2023. [preprint][paper][bib][code]
[29] Jansen, L., Laichter, S. L., Sinclair, A., van der Goot, M., Fernández, R., Pezzelle, S. (2022). Controllable Text Generation for All Ages: Evaluating a Plug-and-Play Approach to Age-Adapted Dialogue. GEM 2022 at EMNLP 2022. [website][paper][bib][code]
[28] Takmaz, E., Pezzelle, S., Fernández, R. (2022). Less Descriptive yet Discriminative: Quantifying the Properties of Multimodal Referring Utterances via CLIP. CMCL 2022 at ACL 2022. [paper][bib][code]
[27] Pezzelle, S., Takmaz, E., Fernández, R. (2021). Word Representation Learning in Multimodal Pre-Trained Transformers: An Intrinsic Evaluation. TACL. [paper][bib][github]
[26] Jansen, L., Sinclair, A., Van der Goot, M., Fernández, R., Pezzelle, S. (2021). Detecting Age-Related Linguistic Patterns in Dialogue: Toward Adaptive Conversational Systems. CLiC-it. [website][paper][bib][github]
[25] Van der Goot, M., Georgiou, M., Dolinšek, Š., Jansen, L., Sinclair, A., Fernández, R., Pezzelle, S. (2021). Exploring the potential of adapting conversational systems to different age groups: A pilot study. CONVERSATIONS. [paper] [bib]
[24] Parfenova, I., Elliott, D., Fernández, R., Pezzelle, S. (2021). Probing Cross-Modal Representations in Multi-Step Relational Reasoning. RepL4NLP 2021 at ACL 2021. [paper][bib][github]
[23] Bernardi, R., Pezzelle, S. (2021). Linguistic issues behind visual question answering. Language and Linguistics Compass. [paper][bib]
[22] Jolly, S., Pezzelle, S., Nabi, M. (2021). EaSe: A Diagnostic Tool for VQA based on Answer Diversity. NAACL-HLT 2021. [paper][bib][github]
[21] Gualdoni, E., Bernardi, R., Fernández, R., Pezzelle, S. (2020). Grounded and Ungrounded Referring Expressions in Human Dialogues: Language Mirrors Different Grounding Conditions. CLiC-it 2020. [paper] [bib]
[20] Pezzelle, S., Greco, C., Gandolfi, G., Gualdoni, E., Bernardi, R. (2020). Be Different to Be Better! A Benchmark to Leverage the Complementarity of Language and Vision. Findings of EMNLP 2020. [website][paper][bib][github][poster]
[19] Takmaz, E., Pezzelle, S., Beinborn, L., Fernández, R. (2020). Generating Image Descriptions via Sequential Cross-Modal Alignment Guided by Human Gaze. EMNLP 2020. [paper][bib][github]
[18] Takmaz, E., Giulianelli, M., Pezzelle, S., Sinclair, A., Fernández, R. (2020). Refer, Reuse, Reduce: Generating Subsequent References in Visual and Conversational Contexts. EMNLP 2020. [website][paper][bib][github]
[17] Pezzelle, S., Marelli, M. (2020). Do Semantic Features Capture a Syntactic Classification of Compounds? Insights from Compositional Distributional Semantics. PMWE. [paper][bib]
[16] Pezzelle, S., Fernández, R. (2019). Big Generalizations with Small Data: Exploring the Role of Training Samples in Learning Adjectives of Size. LANTERN 2019 at EMNLP-IJCNLP 2019. [paper][bib][github]
[15] Pezzelle, S., Fernández, R. (2019). Is the Red Square Big? MALeViC: Modeling Adjectives Leveraging Visual Contexts. EMNLP-IJCNLP 2019. [paper][bib][github]
[14] Testoni, A., Pezzelle, S., Bernardi, R. (2019). Quantifiers in a Multimodal World: Hallucinating Vision with Language and Sound. CMCL 2019 at NAACL-HLT 2019. [paper][bib]
[13] Pezzelle, S., Bernardi, R., Piazza, M. (2018). Probing the Mental Representation of Quantifiers. Cognition, 181, 117-126. [paper][preprint][bib]
[12] Pezzelle, S., Steinert-Threlkeld, S., Bernardi, R., Szymanik, J. (2018). Some of Them Can Be Guessed! Exploring the Effect of Linguistic Context in Predicting Quantifiers, ACL 2018. [paper][bib][arxiv][github][poster]
[11] Pezzelle, S., Sorodoc, I., Bernardi, R. (2018). Comparatives, Quantifiers, Proportions: A Multi-Task Model for the Learning of Quantities from Vision, NAACL-HLT 2018. [paper][bib][github][slides][poster][arxiv]
[10] Sorodoc, I., Pezzelle, S., Dimiccoli, M., Herbelot, A., Bernardi, R. (2018). Learning Quantification from Images: A Structured Neural Architecture. JNLE 2018. [paper][bib][github][arxiv]
[9] Smith, D. A., Pezzelle, S., Franzon, F., Zanini, C., Bernardi, R. (2017). Can you See the (Linguistic) Difference? Exploring the Mass/Count Distinction in Vision, IWCS 2017. [paper][poster]
[8] Shekhar, R., Pezzelle, S., Herbelot, A., Nabi, M., Sangineto, E., Bernardi, R. (2017). Vision and Language Integration: Moving beyond Objects, IWCS 2017. [webpage][paper]
[7] Shekhar, R., Pezzelle, S., Klimovich, Y., Herbelot, A., Nabi, M., Sangineto, E., Bernardi, R. (2017). FOIL it! Find One mismatch between Image and Language caption, ACL 2017. [webpage][paper]
[6] Pezzelle, S., Marelli, M., Bernardi, R. (2017). Be Precise or Fuzzy: Learning the Meaning of Cardinals and Quantifiers from Vision, EACL 2017. [paper][bib][poster][slides][arxiv]
[5] Pezzelle, S., Sorodoc, I., Herbelot, A., Bernardi, R. (2016). Imparare a quantificare guardando [Learning to quantify by looking], CLiC-it 2016. [paper][slides]
[4] Paperno, D., Kruszewski, G., Lazaridou, A., Pham, Q., Bernardi, R., Pezzelle, S., Baroni, M., Boleda, G., and Fernández, R. (2016). The LAMBADA dataset: Word prediction requiring a broad discourse context, ACL 2016. [paper][webpage]
[3] Pezzelle, S., Shekhar, R., Bernardi, R. (2016). Building a bagpipe with a bag and a pipe: Exploring conceptual combination in Vision, VL 2016 at ACL 2016. [paper][data][poster][slides]
[2] Sorodoc, I., Lazaridou, A., Boleda, G., Herbelot, A., Pezzelle, S., Bernardi, R. (2016). 'Look, some green circles!': Learning to quantify from images, VL 2016 at ACL 2016. [paper][poster][slides]
[1] Pezzelle, S. (2015). Lorenzo Da Ponte: Metro e Stile delle Poesie del Periodo Americano [Lorenzo Da Ponte: Metre and Style of the Poems of the American Period], Stilistica e Metrica Italiana XV, Edizioni del Galluzzo, pages 83-120, Florence, Italy. [webpage]
Abstracts
Takmaz, E., Pezzelle, S., Fernández, R. (2022). Time Alignment between Gaze and Speech in Image Descriptions: Exploring Theories of Linearization. CogSci 2022.
Pezzelle, S., Fernández, R. (2020). Asking questions with a big impact: Adapting to other interpretations of gradable adjectives. CogSci 2020.
Takmaz, E., Beinborn, L., Pezzelle, S., Fernández, R. (2019). Enhancing Neural Image Captioning with Eye-Tracking. EurNLP 2019.
Pezzelle, S., Greco, C., Herbelot, A., Klein, T., Nabi, M., Bernardi, R. (2018). Be Different to Be Better: Toward the Integration of Vision and Language. SiVL 2018 at ECCV 2018.
Jolly, S., Pezzelle, S., Klein, T., Dengel, A., Nabi, M. (2018). An Evaluative Look at the Evaluation of VQA. SiVL 2018 at ECCV 2018.
Pezzelle, S., Jezek, E., Micheli, M. S. (2017). The different meanings of 'a': Capturing qualia relations of Italian complex nominals with distributional semantics, Workshop on the Role of Constituents in Multi-Word Expressions at DGfS 2017. [abstract]