BlackboxNLP 2020: Analyzing and interpreting neural networks for NLP – An EMNLP 2020 Workshop
BlackboxNLP 2020 is the third BlackboxNLP workshop. The programme and proceedings of the previous editions, which were held at EMNLP 2018 and ACL 2019, can be found here and here.
Neural networks have rapidly become a central component in NLP systems in recent years. The improvements in accuracy and performance brought by the introduction of neural networks have typically come at the cost of our understanding of the system: how can we assess which representations and computations the network learns? The goal of this workshop is to bring together people who are attempting to peek inside the neural network black box, taking inspiration from machine learning, psychology, linguistics, and neuroscience.
Topics of interest include, but are not limited to:
- Applying analysis techniques from neuroscience to analyze high-dimensional vector representations in artificial neural networks;
- Analyzing the network’s response to strategically chosen input in order to infer the linguistic generalizations that the network has acquired;
- Examining network performance on simplified or formal languages;
- Proposing modifications to neural architectures that increase their interpretability;
- Scaling up analysis techniques developed in the connectionist literature in the 1990s;
- Testing whether interpretable information can be decoded from intermediate representations;
- Translating neural network interpretation insights from computer vision to language;
- Explaining specific model predictions made by neural networks;
- Generating and evaluating the quality of adversarial examples in NLP;
- Developing open-source tools for analyzing neural networks in NLP;
- Evaluating the analysis results: how do we know that the analysis is valid?
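One of the topics above, decoding interpretable information from intermediate representations, is often approached with diagnostic probes: a simple classifier is trained to predict a linguistic property from a model's hidden states, and high decoding accuracy suggests the property is encoded in those states. The sketch below illustrates the idea on synthetic stand-in representations (the data, dimensions, and the linear least-squares probe are illustrative assumptions; in practice the vectors would come from a trained NLP model).

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 64

# Synthetic stand-ins for hidden states, with a binary linguistic
# property (e.g. singular vs. plural) linearly encoded in one dimension.
labels = rng.integers(0, 2, size=n)
states = rng.normal(size=(n, d))
states[:, 0] += 3.0 * labels

X_train, y_train = states[:400], labels[:400]
X_test, y_test = states[400:], labels[400:]

# Linear least-squares probe with a bias term.
A = np.hstack([X_train, np.ones((400, 1))])
w, *_ = np.linalg.lstsq(A, y_train, rcond=None)

preds = (np.hstack([X_test, np.ones((100, 1))]) @ w) > 0.5
accuracy = (preds == y_test).mean()
# High held-out accuracy indicates the property is linearly decodable.
```

A caveat discussed at the workshop series itself: a powerful enough probe can decode almost anything, so probe capacity and baselines on control tasks matter when interpreting such results.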
This year, we especially encourage submissions that compare multiple techniques to target a specific research question or that perform careful evaluation of interpretation methods.
We call for two types of papers:
- Archival papers. These papers report on completed, original, and unpublished research, with a maximum length of 8 pages + references; shorter papers are also welcome. They should report on obtained results rather than intended work. Accepted papers are expected to be presented at the workshop and will be published in the workshop proceedings. These papers will undergo double-blind peer review and should therefore be anonymized.
- Extended abstracts. These may report on work in progress, or may be cross-submissions that have already appeared in a non-NLP venue. Extended abstracts have a maximum length of 2 pages + references. These submissions are non-archival, so the work can also be submitted to another venue. Selection is not based on double-blind review, and submissions of this type therefore need not be anonymized.
Submissions should follow the official EMNLP 2020 style guidelines. We will soon make the submission link available through the workshop website.
Shared Interpretation Mission
BlackboxNLP 2020 will feature the first edition of the Shared Interpretation Mission. This event is modeled on shared tasks, but its goal is not to find the best-performing system for a well-defined problem; rather, it is to encourage the development of useful, creative analysis techniques that help us better understand existing models. More details and intermediate deadlines are available here.
Important dates
- July 15, 2020 – Workshop paper submission deadline
- August 4, 2020 – Shared Interpretation Mission report submission deadline
- August 17, 2020 – Notification of acceptance
- August 31, 2020 – Camera-ready papers due
- November 11 or 12, 2020 – Workshop
Note: All deadlines are 11:59PM UTC-12:00.