Analyzing and interpreting neural networks for NLP

Revealing the content of the neural black box: workshop on the analysis and interpretation of neural networks for Natural Language Processing.


Venue

The workshop will be co-located with EMNLP 2018 in Brussels.

Important dates

Proceedings

The workshop proceedings are available via the ACL Anthology: proceedings

Workshop program

Time Program item
09:00-09:10 Opening remarks
09:10-10:00 Invited talk 1: Yoav Goldberg
10:00-11:00 Poster session 1 (tea break 10:30-11:00)
11:00-12:30 Oral presentation session 1 (6 x 15 minutes)
12:30-14:00 Lunch
14:00-14:50 Invited talk 2: Graham Neubig
14:50-16:00 Poster session 2 (tea break 15:30-16:00)
16:00-16:50 Invited talk 3: Leila Wehbe
16:50-17:20 Oral presentation session 2 (2 x 15 minutes)
17:20-17:30 Best paper announcement and closing remarks

Detailed program

Information for presenters

The maximum poster size is A0: 84 cm (width) by 118 cm (height), or about 33 by 46 inches. Posters should be in portrait orientation.

Contributed talks should be 12 minutes long, followed by 3 minutes for questions.

Invited Speakers

Leila Wehbe

Language representations in human brains and artificial neural networks

When studying language in the brain, it has become more common to image the brains of humans while they process naturalistic language stimuli consisting of rich, natural text. To analyze the brain representation of such complex stimuli, vector representations derived from various NLP methods are extremely useful as a model of the information being processed in the brain. The recent deep learning revolution has ignited a lot of interest in using artificial neural networks as a source of high-dimensional vector representations for modeling brain processes. However, these representations are hard to interpret, and the problem becomes increasingly difficult: how do we study complex brain activity – a black box we want to understand – using hard-to-interpret artificial neural network representations – another black box we want to understand? In this talk, I will summarize recent efforts in modeling the brain processing of language, the use of artificial neural networks in this process, and how inferences about brain processes and about artificial neural network representations can still be made under this setup.
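As a rough illustration of the kind of analysis the abstract describes (not the speaker's own code or data), the sketch below fits a simple encoding model: ridge regression maps NLP-derived stimulus features to recorded brain responses, and held-out prediction accuracy indicates how well a given representation models the measured activity. The feature matrix and "brain" responses here are random placeholders, and all dimensions and the regularization strength are illustrative assumptions.

```python
# Minimal encoding-model sketch with placeholder data (not real fMRI or real
# NLP features): predict per-voxel responses from stimulus embeddings and
# score the model by held-out prediction-vs-response correlation.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_timepoints, n_features, n_voxels = 500, 300, 1000

X = rng.standard_normal((n_timepoints, n_features))  # stand-in for text-derived features
Y = rng.standard_normal((n_timepoints, n_voxels))    # stand-in for brain responses

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

model = Ridge(alpha=10.0).fit(X_tr, Y_tr)            # one linear map per voxel
pred = model.predict(X_te)

# Per-voxel correlation between predicted and held-out responses
corrs = [np.corrcoef(pred[:, v], Y_te[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean held-out correlation: {np.mean(corrs):.3f}")  # near zero here, since the data are random
```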

Bio: Leila Wehbe is an assistant professor of Machine Learning at Carnegie Mellon University. Previously, she was a postdoctoral researcher at the Gallant Lab in the Helen Wills Neuroscience Institute at UC Berkeley. She obtained her PhD from the Machine Learning Department and the Center for the Neural Basis of Cognition at Carnegie Mellon University, where she worked with Tom Mitchell. She works on studying language representations in the brain when subjects engage in naturalistic language tasks. Specifically, she combines functional neuroimaging with natural language processing and machine learning tools to build spatiotemporal maps of the information represented in the brain during language processing.

Graham Neubig

Learning with Latent Linguistic Structure

Slides

Neural networks provide a powerful tool for modeling language, but they also depart from standard methods of linguistic representation, which usually consist of discrete tag, tree, or graph structures. These structures are useful for a number of reasons: they are more interpretable, and they can also benefit downstream tasks. In this talk, I will discuss models that explicitly incorporate these structures as latent variables, allowing for unsupervised or semi-supervised discovery of interpretable linguistic structure, with applications to part-of-speech and morphological tagging, as well as syntactic and semantic parsing.

Bio: Graham Neubig is an assistant professor at the Language Technologies Institute of Carnegie Mellon University. His work focuses on natural language processing, specifically multilingual models that work in many different languages, and natural language interfaces that allow humans to communicate with computers in their own language. Much of this work relies on machine learning to create these systems from data, and he is also active in developing methods and algorithms for machine learning over natural language data. He publishes regularly in the top venues in natural language processing, machine learning, and speech, and his work has won best paper awards at venues such as EMNLP, EACL, and WNMT. He is also active in developing open-source software, and is the main developer of the DyNet neural network toolkit.

Yoav Goldberg

Trying to Understand Recurrent Neural Networks for Language Processing

Slides

Recurrent neural networks (RNNs), and in particular LSTM networks, have emerged as very capable learners of sequential data. Thus, my group started using them everywhere, achieving strong results on many language understanding and modeling tasks. However, little is known about how RNNs represent sequences, what they actually encode, and what they are capable of representing. In this talk, I will describe some attempts to shed light on the inner workings of RNNs. In particular, I plan to describe at least two of the following: a method for comparing what is captured in vector representations of sentences based on different encoders (Adi et al., ICLR 2017, and more generally the notion of diagnostic classification), a framework for extracting finite-state automata from trained RNNs (Weiss et al., ICML 2018), and a formal difference between the representational capacity of different RNN variants (Weiss et al., ACL 2018).
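To make the idea of diagnostic classification concrete, here is a minimal probing sketch in the spirit of Adi et al. (ICLR 2017), not the authors' actual setup: a simple classifier is trained on frozen sentence encodings to predict a surface property (binned sentence length), and its held-out accuracy indicates whether the encoder captures that property. The encodings and lengths below are random placeholders standing in for real encoder outputs and real sentences.

```python
# Minimal diagnostic-classifier (probing) sketch with placeholder data:
# predict a surface property of each sentence from its fixed encoding.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_sentences, dim = 2000, 128

encodings = rng.standard_normal((n_sentences, dim))    # stand-in for frozen encoder outputs
lengths = rng.integers(5, 30, size=n_sentences)        # stand-in for true sentence lengths
labels = np.digitize(lengths, bins=[10, 15, 20, 25])   # bin lengths into a handful of classes

X_tr, X_te, y_tr, y_te = train_test_split(encodings, labels, test_size=0.2, random_state=0)

probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"probe accuracy: {probe.score(X_te, y_te):.3f}")  # near chance here, since encodings are random
```

With real encoder outputs, above-chance probe accuracy is taken as evidence that the property is linearly recoverable from the representation.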

Bio: Yoav Goldberg is a Senior Lecturer at Bar Ilan University’s Computer Science Department. Before that, he was a Research Scientist at Google Research New York. He works on problems related to Natural Language Processing and Machine Learning. In particular, he is interested in syntactic parsing, structured-prediction models, learning for greedy decoding algorithms, multilingual language understanding, and cross-domain learning. Lately, he has also been interested in neural-network-based methods for NLP. He recently published a book on the subject.

Description

Neural networks have rapidly become a central component in language and speech understanding systems in the last few years. The improvements in accuracy and performance brought by the introduction of neural networks have typically come at the cost of our understanding of the system: what are the representations and computations that the network learns? The goal of this workshop is to bring together people who are attempting to peek inside the neural network black box, taking inspiration from machine learning, psychology, linguistics, and neuroscience. The topics of the workshop will include, but are not limited to:

Submission types

Both categories of submissions should use EMNLP 2018 templates:

Both papers and abstracts should be submitted via softconf: https://www.softconf.com/emnlp2018/BlackboxNLP/

Accepted submissions will be presented at the workshop: most as posters, some as oral presentations (determined by the program committee).

The workshop will have a best paper award, sponsored by the Department of Cognitive Science at Johns Hopkins University.

Dual submissions

Dual submissions of archival papers with EMNLP (or another conference) are allowed. Please let us know as soon as possible if you decide to withdraw a paper accepted elsewhere. Please also consider that dual submissions increase the reviewing burden for the whole community.

Organizers

Tal Linzen is an Assistant Professor of Cognitive Science at Johns Hopkins University. He develops computational cognitive models of language. In addition to his work in psycholinguistics and cognitive neuroscience, he has studied the syntactic capabilities of contemporary artificial neural networks and the linguistic information encoded in word embeddings, in work that has appeared in TACL, EACL and CoNLL. He co-organized the workshop on Cognitive Modeling and Computational Linguistics, which was co-located with EACL 2017, and is co-organizing the next edition of the same workshop at the Society for Computation in Linguistics in January 2018.

Afra Alishahi is an Associate Professor of Cognitive Science and Artificial Intelligence at Tilburg University, the Netherlands. Her main research interest is developing computational models for studying the process of human language acquisition. Recently she has been studying the emergence of linguistic structure in grounded models of language learning. She has chaired CoNLL 2015, and organized the EACL Workshop on Cognitive Aspects of Computational Language Acquisition in 2009.

Grzegorz Chrupała is an Assistant Professor at the Department of Cognitive Science and Artificial Intelligence at Tilburg University. His recent research focus has been on computational models of language learning from multimodal signals such as speech and vision and on the analysis and interpretability of representations emerging in multilayer recurrent neural networks. He regularly serves on program committees of major NLP and AI conferences, workshops and journals. He was area co-chair for Machine Learning at ACL 2017, for Discourse and Dialogue, Summarization and Generation, and Multimodal NLP and Speech at EMNLP 2018, and general chair for Benelearn 2018.

Program committee

Sponsors

Department of Cognitive Science, Johns Hopkins University