Analyzing and interpreting neural networks for NLP

Revealing the content of the neural black box: workshop on the analysis and interpretation of neural networks for Natural Language Processing.

BlackboxNLP 2019

We are happy to announce that BlackboxNLP 2019 will be co-located with ACL 2019 in Florence (August 1). We will update this website soon with more information!

Archived information about the 2018 edition:



Important dates


Neural networks have rapidly become a central component in NLP systems in the last few years. The improvements in accuracy and performance brought by the introduction of neural networks have typically come at the cost of our understanding of the system: what are the representations and computations that the network learns? The goal of this workshop is to bring together people who are attempting to peek inside the neural network black box, taking inspiration from machine learning, psychology, linguistics, and neuroscience. The topics of the workshop will include, but are not limited to:

BlackboxNLP 2019 is the second BlackboxNLP workshop. The programme and proceedings of the previous edition, which was held at EMNLP 2018, can be found here.


Tal Linzen

Tal Linzen is an Assistant Professor of Cognitive Science at Johns Hopkins University. He develops computational cognitive models of language. In addition to his work in psycholinguistics and cognitive neuroscience, he has studied the syntactic capabilities of contemporary artificial neural networks and the linguistic information encoded in word embeddings, in work that has appeared in TACL, EACL, and CoNLL. He co-organized the first edition of BlackboxNLP, as well as two editions of the workshop on Cognitive Modeling and Computational Linguistics, co-located with EACL 2017 and with SCiL 2018.

Grzegorz Chrupała

Grzegorz Chrupała is an Assistant Professor at the Department of Cognitive Science and Artificial Intelligence at Tilburg University. His recent research has focused on computational models of language learning from multimodal signals such as speech and vision, and on the analysis of linguistic representations emerging in multilayer recurrent neural networks. He regularly serves on the program committees of major NLP and AI conferences, workshops, and journals. He was an area chair at ACL 2017 and EMNLP 2018, and he co-organized the first edition of BlackboxNLP.

Yonatan Belinkov

Yonatan Belinkov is a Postdoctoral Fellow at the Harvard School of Engineering and Applied Sciences (SEAS) and the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). His recent research focuses on representations of language in neural network models, with applications in machine translation and speech recognition. His research has been published at ACL, EMNLP, TACL, ICLR, and NIPS. His PhD dissertation at MIT analyzed internal language representations in deep learning models.

Dieuwke Hupkes

Dieuwke Hupkes is a PhD student at the University of Amsterdam. The main focus of her research is understanding how recurrent neural networks learn and represent the structures that occur in natural language; developing methods to interpret and interact with neural networks has therefore been an important part of her work. She has authored 5 articles directly relevant to the workshop, one of them published in a top AI journal (Journal of Artificial Intelligence), and she is co-organizing a workshop on compositionality, neural networks, and the brain to be held at the Lorentz Center in the summer of 2019.

Submission types

We accept two types of papers:

Both papers and abstracts should follow the official ACL 2019 style guidelines and should be submitted via softconf:

Accepted submissions will be presented at the workshop: most as posters, some as oral presentations (determined by the program committee).

Program committee